“Reliability”

How reliable are the data you use for decision-making?  FDA, in the opening section of its Data Integrity (DI) Q&A guidance, states that “FDA expects that all data be reliable and accurate”.  In this blog post, I would like to explore the concept of reliability in two different ways, both in the context of GXP decision-making: 

  1. Reliability in the sense that the data can be trusted/are dependable

  2. Reliability in the sense that the data are statistically significant/representative enough to be dependable

Case #1 generally applies to routine operations, for example where defined procedures and practices are regularly evaluated by regulators during inspection.  The focus here is on building a risk-based data governance strategy (considering criticality) that demonstrates trust in the data (as much as is needed for the “intended purpose”) by effectively managing the four sources of variability: hardware, software, personnel, and documentation.  Expectations here are outlined in the PIC/S guide for data management as well as ICH Q9, and center on the three pillars of governance: Design, Operation, and Monitoring. 

Case #2 generally applies to the non-routine use of data under the “investigations” umbrella (OOS/deviation/complaint/etc.) of CGXP.  How much data is enough to demonstrate reliability?  Demonstrating the reliability of data used in the investigation process (especially product impact evaluation) has been the Achilles heel for QA teams across the world for generations.  This is without even mentioning the fact that without a strong point #1 already established, point #2 was doomed from the start…  Let’s assume for this blog that point #1 is well understood and properly managed.
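To make the “how much data is enough” question concrete, here is a minimal sketch (my own illustration, not taken from any guidance) of the classic sample-size calculation: given an assumed measurement standard deviation and a desired margin of error for the mean, how many results would be needed?  The sigma and margin values below are hypothetical.

```python
# Sketch: minimum sample size for estimating a mean within a chosen
# margin of error, using the textbook formula n = (z * sigma / E)^2.
# The sigma and margin values are invented for illustration only.
from math import ceil
from statistics import NormalDist

def sample_size(sigma: float, margin: float, confidence: float = 0.95) -> int:
    """Smallest n giving a two-sided CI half-width <= margin (normal approx.)."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2)  # ~1.96 at 95%
    return ceil((z * sigma / margin) ** 2)

# Hypothetical: assay method stdev of 0.5% label claim; we want the
# batch mean known to within +/- 0.3% at 95% confidence.
n = sample_size(sigma=0.5, margin=0.3)
print(n)  # 11
```

The point of the sketch is that “enough data” is a calculable quantity once you state your assumed variability and the precision your conclusion actually requires, rather than a number settled by habit.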

Our best piece of guidance on point #2, in my opinion, can be found in section I.9 of ICH Q9, where the following statement is made:

“Statistical tools can support and facilitate quality risk management. They can enable effective data assessment, aid in determining the significance of the data set(s), and facilitate more reliable decision-making.”

When data are to be collated and used for non-routine decision-making, don’t forget that statistical tools can be a powerful way to demonstrate the reliability of your decision, considering that the primary role of the regulator is to challenge the scientific justification for your conclusions!  This is their obligation to the patient as a health authority…  It’s not personal, just business. 
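As one concrete illustration of the kind of statistical tool ICH Q9 alludes to, here is a minimal sketch (my own example; the assay values, spec limit, and helper function are all invented) of using a confidence interval to quantify how strongly a small retain-sample data set supports a “no product impact” conclusion.

```python
# Sketch: a 95% confidence interval for the mean of a small data set,
# using a normal approximation from the standard library. All numbers
# below are hypothetical; for very small n, a t-distribution interval
# (e.g. via scipy.stats) would be more appropriate.
from math import sqrt
from statistics import mean, stdev, NormalDist

def confidence_interval(data: list[float], confidence: float = 0.95):
    """Return (lower, upper) bounds on the mean (normal approximation)."""
    n = len(data)
    m = mean(data)
    z = NormalDist().inv_cdf(0.5 + confidence / 2)  # ~1.96 at 95%
    half_width = z * stdev(data) / sqrt(n)
    return (m - half_width, m + half_width)

# Hypothetical retain-sample assay results (% label claim)
assay = [99.1, 98.7, 99.4, 98.9, 99.2, 99.0]
lower, upper = confidence_interval(assay)
print(f"95% CI for mean assay: ({lower:.2f}, {upper:.2f})")

# If the entire interval sits above the (hypothetical) 95.0% lower
# spec limit, the data quantitatively support "no impact" -- which is
# a far stronger position in front of a regulator than a bare mean.
```

The interval, not the point estimate, is what carries the scientific justification: it states how much the conclusion could move if the investigation were repeated.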

With that said, focus on the business of demonstrating data reliability, both for routine and non-routine GXP operations – using statistical tools whenever feasible to justify why you do what you do.  The business case for data reliability has long been proven across the spectrum of manufacturing.  With data/metadata considered the “new currency”, don’t risk getting left behind (neither your organization nor your career)!
