Blog

April 26th, 2024

“It is what it is”

The most important data is unknown and unknowable. This observation, attributable to W. Edwards Deming, was made decades ago, yet it still rings true despite the many technological advances we have made as a civilization. Just ask any front-line employee tasked with managing day-to-day deviations! No AI/ML model is going to come to the rescue here (although they may certainly help). It's as universally fundamental as the first law of thermodynamics. "It is what it is" - forevermore, so to speak.

Essentially, the most important data would point us to a perfect process control strategy, which, as we know from years of best efforts, is impossible. It is sometimes management's supposition, however, that zero defects are indeed possible if employees would just stick to the procedures and training. Wrong. Front-line employees have little influence over Quality. Let that sink in for a moment…

Don't let your management fool you on this point, or you will find yourself working in a miserable environment, struggling to meet questionable KPIs and wondering why you are failing. Losing sleep over something over which you have little influence. If the process could speak, it might say "it's not you, it's me". As Deming once taught, employees have the right to be happy and find meaning in labor.

So what's the answer?  Ah!  The regulatory roadmap has already been published: the two enablers found in ICH Q10, or what Dr. Deming once termed a "system of profound knowledge". That's the best we can do!  If you find yourself in a company striving to build this system, you are in the right place. I bet you love your job!  That's a Quality Culture; enjoy it. Life is short.

Pete.

April 8th, 2024

“Level of Risk”

So… when is it expected that we identify the root cause related to an “unexplained discrepancy” (see 211.192)?  Or when can we close the investigation with potential contributing factors (without identifying the “root cause” with some level of certainty)?

This question has caused us GMPers headaches for decades, due to the significant resources needed to dig down to the root.   The Live Oak tree grows roots up to ~4 feet underground.  Sometimes, depending on the investigation, a root cause dig can feel nearly impossible, much like digging 4 feet into our rocky, clay-based Texas soil!  So, when is enough enough, considering our limited resources?

Let’s take a look at a recent case study (drawn from an FDA Form 483 (Observation 1) issued 02FEB2024) for some insight:

  • The firm finds “unknown extraneous peaks” in cleaning verification swab results for what appear to be product contact surfaces (oral solid dosage).   

  • Further analysis via LC-MS finds the identity of two peaks to be other drugs manufactured within the facility, with the remaining peaks being unidentified/unknown. 

  • Investigations are closed without identifying the “root cause”, based on impact assessments and CAPAs.

The investigator concluded that the cleaning program was "deficient and unreliable" based on the findings summarized above – namely, the failure to identify the "root cause".  Why were the impact assessments and CAPAs insufficient?  Let's look to ICH Q9(R1) for some guidance:

  • Firstly, the amount of effort we put into any investigation (211.192) depends on the “level of risk” (Q9).

    The problem here is that we as an industry do not empower our employees to comply with this expectation.  We often see "zero defects" or "quality culture" on massive company posters accompanied by pictures of our patients, which employees read as "we do not tolerate any risk to the patient".  Transparency with reality is not an option.  There is no usable SOP to even refer to!  As a result, we have to skip this step and hope that no regulator ever asks the question, with our fingers crossed that the level of effort we put into the investigation was OK.  The truth is that there is increased risk, otherwise we wouldn't have opened the investigation in the first place!

As an FYI here: risk is defined as the severity of the harm a hazard may cause, combined with the probability (or vulnerability) of that hazard occurring.  The tool used to determine this "level of risk" (Q9) varies (I recommend the qualitative nine-box methodology; see the sketch below!).

  • In this case study, because the investigation was initiated due to carry-over residue of APIs and unknowns, it appears that the investigator considered the “level of risk” (Q9) to be high (severity of hazard * vulnerability).  This is generally the case with cleaning (high * high) due to:

  1. Severity: The unknowns that come with consuming impurities (think nitrosamines…) = high

  2. Vulnerability: The highly manual and variable nature of cleaning (minus CIP) = high

  • The firm should have performed this calculation from the very beginning.  We might refer to this as the North Star – guiding the investigation team through the process regarding the level of “effort, formality and documentation” (Q9).  Management could have then dedicated resources to this particular issue, while diverting resources from other, lower risk investigations. 

Without this initial calculation, or the ability to perform this calculation due to inadequate investigation SOPs, the investigation team is lost at sea.  No North Star to guide them, and insufficient deckhands.  The team will not know how far to dig (is 2 feet enough?), and management will not know how to “[free] up resources for managing higher risk issues” (Q9). 
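
To make the nine-box concrete, here is a minimal sketch in Python. The category names and the low/medium/high mapping are my illustrative assumptions, not a prescribed standard; Q9(R1) leaves the choice of tool to the firm.

```python
# A minimal sketch of the qualitative nine-box methodology referenced above.
# Category names and the risk mapping are illustrative assumptions.

LEVELS = ("L", "M", "H")

# Nine-box: rows = severity, columns = probability/vulnerability.
NINE_BOX = {
    ("L", "L"): "low",    ("L", "M"): "low",     ("L", "H"): "medium",
    ("M", "L"): "low",    ("M", "M"): "medium",  ("M", "H"): "high",
    ("H", "L"): "medium", ("H", "M"): "high",    ("H", "H"): "high",
}

def level_of_risk(severity: str, vulnerability: str) -> str:
    """Combine the two qualitative ratings into a level of risk."""
    if severity not in LEVELS or vulnerability not in LEVELS:
        raise ValueError("ratings must be one of L/M/H")
    return NINE_BOX[(severity, vulnerability)]

# The cleaning case study: unknown impurities (high severity) in a highly
# manual process (high vulnerability) -> high risk -> full root cause dig.
print(level_of_risk("H", "H"))  # -> "high"
```

The point is not the code but the discipline: the rating happens first, and the depth of the dig (effort, formality, documentation) follows from it.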

Lessons learned: Empower your investigation teams to follow the principles of ICH Q9(R1) – starting with determining the "level of risk" of any given issue.  This small investment in employee development will reap dividends well into the future by addressing the reality of risk, patients, and limited resources.

February 28th, 2024

Supplement the Swag

Boston is one of my favorite places on the East Coast.  Something about the energy of the city inspires me.  Maybe it’s the students packed into the bars and cafés solving the world’s challenges over bubble tea?  Or maybe the Irish legacy of true hospitality through human connection?  Maybe it’s a confluence of root causes.  Either way, it’s cool, and a place to live in the moment. 

At the Bruins vs. Kraken game last night, I struck up a conversation with the folks sitting to my left, a father and son, who, once they learned I was visiting from Texas, couldn't help but make me feel right at home: explaining the history of the team and making sure my beverage holder was never empty.  This place is simply inspiring and stimulates creativity in any visitor.  As I boarded the plane back to Austin, I started searching the audiobook library for something to cap off my work week and came across David Brooks' latest work: "How to Know a Person: The Art of Seeing Others Deeply and Being Deeply Seen".  In the first chapter, David discusses his emotional awakening and references several studies and publications along the way.  One caught my attention, which I had briefly reviewed a year or two ago: the 2021 McKinsey study titled "'Great Attrition' or 'Great Attraction'? The choice is yours".  I couldn't help but make the connection to what the regulators are trying to inspire in both the Data Governance and Quality Management Maturity (QMM) initiatives.  As I fly somewhere over western Virginia, here are my thoughts:

According to the study, "forty percent of the employees in our survey said they are at least somewhat likely to quit in the next three to six months."  Many of these respondents said they would leave even without another job in hand…  This suggests a significant trend in the post-pandemic working environment.  Chatting with folks across the world in workshops and conferences (disclaimer: very limited sample size), I also notice a common thread that aligns with the McKinsey conclusions: a human search for purpose and meaning in labor.  I feel an energy in the air when a team develops and presents a strategy for managing a hybrid system using a risk-based governance approach.  Why?  Because we have ditched the concepts of "inspection readiness" and "compliance" and have instead unleashed critical thinking, focusing on why we do what we do; we facilitated ownership.

You can feel ownership within five minutes of a facility walkthrough: folks are excited to present their strategy for managing their workflows.  They are nowhere near perfect, but that was never an option anyway, and they understand that.  They present a workflow that contributes to improving a patient's quality of life, or perhaps even saving a life.  Compliance and inspection readiness (old-school buzzwords) create an environment of fear and uncertainty, because they command perfection.  But we and our systems are all flawed; it is one of the universal realities.  Remember the second law of thermodynamics from high school chemistry?  Disorder only increases (or at least stays the same) with time!

FDA has spent over a decade researching the concept of Quality and has had a few false starts (remember "Quality Metrics"?) along the way.  In the latest attempt, we see the introduction of the five elements of Maturity, one of which is titled "employee engagement and empowerment".  I think we are now close to getting it right.  Just read the mood in the room full of employees and you will understand why.  I am sure many industry executives find this [QMM] ridiculous, and have most certainly submitted comments against its implementation, as they cannot envision the connection between "empowerment" and "compliance".  This rejection [by management] aligns with the McKinsey study: managers surveyed responded that the causes for employee departure were primarily transactional (e.g., the search for a better salary).  Shocker: they were not.

FDA is first and foremost an agency acting to "protect and promote public health".  Drugs are manufactured and tested by employees, and our workflows are highly manual (and therefore variable).  It is these employees who directly impact patient safety (not the C-Suite).  Someone has to stand up for employees and (unfortunately) force management to take quality culture seriously, as it appears many sites remain siloed and disconnected.  The "Quality Culture" mugs are cool (don't get rid of them); just complement the swag with action. 

Considering the current challenges to public health (drug shortages, severe departures from regulatory expectations, and continued globalization and supply chain complexity), the agency basically has two options to address the common root cause of poor manufacturing and quality practices (see recent FDA Warning Letters and the 2019 Drug Shortage Whitepaper for any doubts).  These two options are:

·       Revise and enhance the GMP regulation (210/211)

·       Revise and enhance regulatory guidance (the “C” in CGMP)

The latter is the obvious first choice, as it is exponentially more agile and allows for flexibility, considering the diversity of drug and biologic products.  If this doesn't work, however, the GMP regulation must be next.  The ICH Q's, DI/DG Guidance, QMM, and the Drug Shortage Whitepaper highlight the key enablers to achieve "employee engagement and empowerment".  Here they are:

·       Data Governance: workflow (vs. system) validation to achieve a “right environment” and meaning in labor

·       Knowledge Management: building a library of knowledge to facilitate problem solving and stimulate innovation

·       Risk Management: focusing limited employee resources on areas of greatest risk and reducing wasted efforts

An investment in people via the three enablers, in my opinion, addresses the faults found in the McKinsey study.  Invest in compliance and you will see short-term benefits (e.g., GMP certification), but you will struggle to remain relevant in the future.  Invest in people and you will reap benefits for generations and leave a legacy of good in this world. 

February 8th, 2024

“One Wing and a Prayer”

plus: diamonds in the rough

Today I am having a little fun with the title of this blog entry…  We have seen multiple recent FDA Warning Letters for OTC or other non-application products as regulators around the world respond to catastrophic ongoing public health emergencies.  These Warning Letters cite serious and basic failures, such as the lack of incoming raw material evaluation and/or the lack of a process validation program (e.g., for products containing glycerin). 

Note: I did not use “lack of process validation”, as that would suggest that process validation is a one-time activity involving three batches and a report no one really ever uses. 

You might initially dismiss these Warning Letters as not being relevant learning opportunities for application-based GMP facilities, as I once did (to be completely honest).  However, there is a wealth of knowledge regarding Current GMP expectations (diamonds) hidden in the rough (egregious basic GMP deficiencies).  The most recent Warning Letter as of the writing of this entry is dated 31JAN2024 (WL# 320-24-18).  Check out this guidance, and let's break it down:

“You have not validated your manufacturing process used to produce homeopathic drug products, including those intended for infants and children. During the inspection, you indicated that you were not aware of the requirement.”

– Yikes!  No comment here.

And now the learning part:

“…you must assure that your production process is capable of assuring all of your drugs are of acceptable quality, identity, strength, and purity. During process validation, the process design should be evaluated to determine if it is capable of reproducible commercial manufacturing.”

– Here we see FDA point directly to Design as the key ingredient of Quality assurance.  The quality of commercial product is uncertain without a demonstration of good design principles.  Passing the PPQ protocol requirements does not itself demonstrate good design.  Rationale: PPQ batches are typically produced under heavy scrutiny, as they are required for entry into the market, and are not representative of commercial manufacturing (e.g., volume/resources/etc.).  A mature organization already knows this, and supplements these limitations with a plan for the future = "vigilant monitoring" (data & process mapping + qualitative risk assessment).

“A detailed summary of your validation program for ensuring a state of control throughout the product lifecycle, along with associated procedures. Describe your program for process performance qualification (PPQ), and ongoing monitoring of both intra-batch and inter-batch variation to ensure a continuing state of control.”

“This includes, but is not limited to, evaluating suitability of equipment for its intended use, ensuring quality of input materials, determining the capability and reliability of each manufacturing process step and its controls, and vigilant ongoing monitoring of process performance and product quality.”

– Here we see a focus on the “validation program” for PPQ batches and commercial manufacturing, as discussed earlier. 

Q: How can we demonstrate adequacy of the “validation program”? 

A: Via knowledge of the variability inherent to each process step (remember the DI guidance?  Variability arises from the four scoundrels: Hardware, Software, Personnel, Documentation). 

What you plan to vigilantly monitor depends on the outcome of your variability assessment (aka risk assessment)!  Again, this is necessary because of the non-representative nature of PPQ batches.

– Let’s be honest, there is no way to determine the capability and reliability of each manufacturing process step and its controls via a traditional PPQ protocol/report.  If we rely on this report alone, we might refer to that as a “wing and a prayer”. 
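
For illustration, here is a minimal sketch of what such a variability assessment might look like in Python. The process steps, ratings, and flagging rule are hypothetical; the four sources of variability come straight from the DI guidance.

```python
# A minimal sketch of a variability (risk) assessment over process steps.
# Steps and ratings are illustrative assumptions; the four sources of
# variability (hardware, software, personnel, documentation) are from
# FDA's Data Integrity Guidance.

SOURCES = ("hardware", "software", "personnel", "documentation")

# Qualitative variability ratings per step (hypothetical values).
assessment = {
    "blending":    {"hardware": "M", "software": "L", "personnel": "H", "documentation": "M"},
    "compression": {"hardware": "H", "software": "M", "personnel": "M", "documentation": "L"},
}

def monitoring_plan(assessment: dict) -> list[tuple[str, str]]:
    """Flag every (step, source) pair rated high for vigilant monitoring."""
    return [(step, src) for step, ratings in assessment.items()
            for src in SOURCES if ratings[src] == "H"]

print(monitoring_plan(assessment))
# -> [('blending', 'personnel'), ('compression', 'hardware')]
```

What lands on the vigilant monitoring plan falls out of the assessment, not out of the PPQ report.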

FYI (a bit of history here): “a wing and a prayer” probably originated from the film Flying Tigers (1942), where planes were barely able to return to base after flying over the “hump” or Himalaya range as they brought in supplies to China during WW2, hence: “coming in on one wing and a prayer”. We cannot enter commercial manufacturing on “one wing and a prayer” - we can do better than that!

Conclusion: Process Validation is how we demonstrate quality, despite all the unknowns that come with batch manufacturing (meaning we cannot test each individual unit).  It is a combination of development and design experimentation, risk management, traditional GMP controls, psychology, and culture.  What it is definitely not: a compliance report that simply consumes energy on some server in an attempt to be "inspection ready"…  Honestly, I think we need to start moving away from the term "inspection ready" – it causes unnecessary fear and panic among staff, which never adds value.  Just be confident in your processes and inspections will become an afterthought.

January 21st, 2024

Compliance?

As I was reading a recently published FDA Warning Letter dated 21NOV23 (for what must have been the tenth time – this one is full of insight…), specifically section 2B, outlining the firm's failure to investigate NVP excursions, I was immediately consumed with the concept of "compliance".  This particular Warning Letter offers extensive insight into the rationale behind the C in CGMP – what it means and where it is going, specifically on the topic of data governance and the expectations behind it. 

According to the letter, the firm had established a permitted NVP excursion buffer, meaning that an investigation would only be performed if the alarm persisted for some defined timeframe.  I am sure they had established procedures to ensure DI and ALCOA+ were evaluated, and had a signed and archived "DI risk assessment" somewhere in the quality system.  Most if not all sites already have this program and (unfortunate) checklist in place.  I imagine internal audits and inspection readiness checklists were completed with 5 stars, but no one bothered to evaluate governance of the process and ultimate decision-making (in this case, to release a DP).  An ALCOA+ evaluation will pass this scenario every time, but a governance approach would have failed it.  This gives the site a false sense of inspection readiness.  Governance is ALCOA+ plus an evaluation of good decision-making.  Regulators are now expecting both – which we could define as CGMP. 

Let me expand on this: 

Governance would have required applying the ICH Q9 vertical flowchart to each step/interface in the NVP process.  One of the hazards (HI) identified would have been potential product contamination in the case of an NVP alarm.  There is no way the firm could have actively accepted that risk, as the severity would be "H" and the probability would be unknown, defaulting to either "H" or "M" (depending on many factors).  Either way, using the "nine-box", we would have a high-risk hazard, triggering a procedure that requires an investigation (especially as we are in the final step of DP manufacturing…).  However, and unfortunately, it is unlikely the governance approach was applied, allowing the firm to settle on an ALCOA+ compliance checklist – with a 50% inspection score (fail).  This was passive acceptance of risk, which was deleted (rightfully so) from Q9 and is no longer an option in R1.  This is worrying: basic risk management (2005) and governance principles (2021) are still gathering dust on the internet, despite being around for almost (at least in the case of Q9) a generation! 
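
Here is a minimal sketch of that decision logic; the conservative default for an unknown probability and the trigger threshold are my assumptions for illustration, not a prescribed rule.

```python
# A minimal sketch of the governance logic described above: severity and
# probability map onto the nine-box, and an unknown probability is never
# treated as low. The default and the threshold are illustrative assumptions.

SCORE = {"L": 0, "M": 1, "H": 2}

def requires_investigation(severity: str, probability: str | None) -> bool:
    """High-risk hazards (the nine-box's H*H, H*M, M*H cells) trigger an investigation."""
    prob = probability if probability in SCORE else "H"  # unknown -> conservative default
    return SCORE[severity] + SCORE[prob] >= 3

# The NVP alarm hazard: severity "H", probability unknown -> must investigate.
print(requires_investigation("H", None))  # -> True
```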

The QRM, data integrity, and data governance guidance provided by the regulators over the past 19 years is not simply some random compliance expectation.  If a firm does not understand the rationale behind compliance, it is unlikely to survive much longer, especially as QMM comes into force.  Compliance is not the ultimate goal of CGMP (never has been and never will be); CGMP is a way of doing things, thought processes, critical thinking tools, and culture.  The ultimate goal is good scientific decision-making, and decisions are made by humans evaluating data via some process that has inherent risks…  Compliance is simply a prerequisite.  Compliance is (perhaps?) GMP, but compliance is not CGMP. 

Always remember – quality always ensures compliance, but the reverse is almost never the case. 

Let’s go!

January 8th, 2024

A Quality Framework:

AI/ML

This blog entry was written by me, along with Ulrich Köllish and Jennifer Roebber of GxP-CC GmbH, and originally published on bioprocessonline.com.

As the use of AI/ML in society has increased exponentially in the past few years, primarily because of increased availability and reduced implementation costs, it is natural to expect its eventual, and rather slow, introduction into the GXP world. GXP users are generally slower to introduce new and disruptive technologies because of the heavily regulated nature of our business processes and low tolerance for risk and uncertainty. AI/ML has been used in drug discovery for some years, and to some extent in the GLP/GCP environments; however, it has not yet been adopted by the GMP commercial community.

Recently, the FDA published a discussion paper on AI/ML in drug manufacturing and solicited feedback from industry. The industry's comments show that an important factor preventing AI/ML implementation lies in a lack of regulatory guidance for the unique aspects of AI/ML not covered by existing computer system validation, such as minimizing bias in models, data protection, and data quality for model training. At least some industry guidance has been proposed and outlined in ISPE GAMP 5, Second Edition, published in July 2022.

In 2023, FDA published two discussion papers in an attempt to gather industry's views on additional use cases, implementation blockers, necessary data management activities for implementation, and other comments that may be useful as the agency develops more concrete guidance. Also, in 2021, the FDA's medical device division (CDRH) published AI/ML guidance, which we believe is closely aligned with the existing pharmaceutical data integrity and risk management framework of subjectivity, formality, and decision-making. For example, the 10 guiding principles forming the Good Machine Learning Practice for Medical Device Development parallel the existing CDER guidance for three-stage process validation, published back in 2011. We expect the guidance for pharmaceutical products to follow a similar path, with future guidelines echoing risk management within the existing cGMP framework.

In this article we discuss guidance already available that provides an excellent foundation for the eventual GMP AI/ML revolution, with the goal of preparing industry for future implementation while minimizing growing pains. Waiting for the regulators to publish further AI/ML framework guidance before getting started would be a serious mistake, as the journey is a marathon rather than a sprint. As such, we propose taking action now via the following three steps:

Before we can begin to develop a pathway forward, we must remember the difference between qualification and validation. In the case of AI/ML, we would speak of qualification in the context of the model or platform itself, and validation as the use case in the context of the overall business process, including the process, the environment, and the operators. FDA, in its guide for data integrity, cites validation for intended use including “software, hardware, personnel and documentation.” Qualification provides the scientific evidence that the model functions appropriately within the Good Machine Learning Practice framework (yet to be defined by CDER, but likely following on the heels of the CDRH SaMD guidance), while validation demonstrates that risks arising from the use of the model in a GXP environment are controlled according to its “intended performance.” Demonstrating either of these requirements during regulatory inspection is tricky for AI/ML and likely must rely heavily on risk management. The burden will be on the company to demonstrate how the use of the model within the validated process does not add unnecessary risk to patient safety (meaning better than the status quo). Considering this burden, a black-box defense will likely not be tolerated, as it does not mitigate the potential for risks such as biases in model performance.

Step 1: QRM And Intended Use

A solid quality risk management (QRM) framework within your network is essential, as the intended performance requirements of AI/ML will vary greatly depending on patient proximity, from upstream process control (generally lower risk) to downstream QA decision-making (higher risk). Experience using new tools during routine legacy GMP operations, such as the informal/semi-formal and qualitative tools (e.g., data and process mapping) promoted in the new ICH Q9 revision, will pay dividends in the future, as it allows the SME to clearly explain the intended performance of the model during regulatory inspection, greatly reducing the potential for confusion and misunderstanding. In the same sense, quantitative and formal tools relying on multiplication of risk categorizations impede critical thinking and will likely fail to produce the rationale necessary to fulfill the burden of intended use.

Step 2: Data Governance And Acknowledging Bias

In any industry adopting AI/ML applications, a solid data governance program is a prerequisite. No amount of good machine learning practices can address unknown biases in the data sets to be used for qualification and any future decision-making. Luckily, the road map to success for governance has been clearly outlined in the PIC/s guide for data management and integrity (design, operation, monitoring). Good data governance does not mean that bias is eliminated but rather acknowledged, reduced to an acceptable level (if necessary), and regularly monitored.

According to the State of MLOps Industry Report 2023, “over a quarter (27%) of ML practitioners surveyed believe that bias will never truly be removed from AI-enabled products.” So, it appears the goal is not to demonstrate freedom from bias (perfection) but rather to identify sources of bias and acknowledge limitations in the model output accordingly, which happens to be the same goal described under the PIC/s data governance guidelines.
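
As a trivial illustration of one such control, here is a minimal sketch that quantifies and documents class imbalance in a training set rather than pretending it does not exist; the labels and warning threshold are my assumptions.

```python
# A minimal sketch of one data governance control: quantify and document
# class imbalance in a training set instead of claiming freedom from bias.
# Labels and the warning threshold are illustrative assumptions.

from collections import Counter

def imbalance_report(labels: list[str], warn_ratio: float = 3.0) -> dict:
    """Summarize label frequencies and flag any ratio above the threshold."""
    counts = Counter(labels)
    ratio = max(counts.values()) / min(counts.values())
    return {
        "counts": dict(counts),
        "max_to_min_ratio": round(ratio, 2),
        "acknowledged_bias": ratio > warn_ratio,  # documented and monitored, not hidden
    }

# E.g., records labeled pass/fail for a hypothetical defect-detection model:
print(imbalance_report(["pass"] * 940 + ["fail"] * 60))
# -> {'counts': {'pass': 940, 'fail': 60}, 'max_to_min_ratio': 15.67, 'acknowledged_bias': True}
```

The report does not fix the bias; it makes the limitation visible so it can be acknowledged in the model output and monitored over time, per the governance goal above.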

In anticipation of AI/ML in future applications, we also recommend dedicating significant resources now to strengthen governance programs that go above and beyond minimum GMP requirements. Creating new roles within your organization such as a chief data officer to propose strategies for use of your data, assigning ownership and responsibility to data sets, and management’s promotion of an overall data culture will provide the governance foundation necessary to benefit from any future AI/ML use cases.

Step 3: Development Of Internal AI/ML Standards And Quality Oversight

In our recent experience, we have found that many companies are already well into qualification activities for AI/ML models, which is in line with published use cases ranging from natural language processing to QC data trending to digital twins. However, our conversations demonstrated that most sites are currently just beginning the process of establishing AI/ML frameworks within their quality systems and are developing as they go. By not preemptively establishing standards, these companies are making the future defense of the model for its intended use difficult, as there is limited ability to recreate the qualification activities using scientific evidence (e.g., documentation). We foresee a difficult journey ahead if the quality unit is not involved from an early stage, implementing a standard set of expectations that must be followed.

We recommend implementing a good AI/ML standard prior to initiating development activities, so that any knowledge about the model gathered during early development can be referenced within the risk management program alongside some Quality oversight. As is well understood within the AI/ML community, one of the biggest risks is model bias, which comes in different forms and can arise at different points within model development. We anticipate that the inability to produce documentation regarding the prevention of bias during model development will be a significant source of regulatory concern. As AI/ML is designed to drive future decisions regarding events that are likely to occur based on what the model "learned" from its training data, we have to demonstrate that the way the training data was collected and selected was unbiased – meaning free from influence that would cause the model to no longer meet fundamental scientific standards. An analogy can be drawn to human behavior: it is well understood that employee training in any particular activity is heavily influenced by any bias the trainer exhibits during the training activity, such as a dislike for a specific action outlined in the SOP. Demonstrating that the model is free of bias can be tricky and very high-risk – as evidenced by the many established case studies in society where bias in AI/ML has caused serious harm. On the bright side, once we develop QRM maturity, implement good risk-based governance, and roll out a standard framework with Quality oversight, AI/ML becomes feasible and, in our opinion, will greatly enhance the quality of medicines via multiple means yet to be discovered. 

Conclusion

The potential applications of AI/ML in drug manufacturing are meaningful and will lead to reduced errors, recalls, and drug shortages. We see AI/ML as a needed tool to overcome current life-threatening situations and so much more than just a cool new technology. FDA's Drug Shortage report from 2022 describes a dire situation with regard to access to essential medicines, with shortages increasing 30% compared to 2021 and 62% of these shortages due to quality and manufacturing issues. Surely, new cGMP solutions are necessary and cannot wait – patients are waiting, literally. Our recent conversations with some in industry have been promising, and we are excited to see this transformation already underway. With a few minor additions to our existing quality systems referencing existing regulatory guidance, we can be sure the future is bright.

December 18th, 2023

Validation

From a CPV perspective

Regarding “Validation” in the context of GMP:

Let us start with the definition.  After querying Merriam-Webster, I was able to find two meaningful extracts relevant to this blog:

1.       To support or corroborate on a sound or authoritative basis, and

2.       To confirm the validity of.

If we look to the regulations (Parts 11/211), we find a total of eight instances where a variation of the word "validate" is cited.  For this blog, we will focus on the two instances specifically referring to what we have now come to term Computer System Validation ("CSV"): Part 11.10(a) and Part 211.68(b).  These regulations refer to expectations for commercial manufacturing, and we will use the concepts of FDA's Process Validation Guidance (specifically stage 3, "Continued Process Verification" (CPV)) for this discussion.  CSV, in actuality, is a term used to describe the individual activities of software/hardware "Qualification" (e.g., IQ/OQ/PQ), in an attempt to provide a systematic means for compliance with these regulations.  It is assumed that once these individual software qualification actions have been performed successfully according to a pre-defined protocol, the system is "validated" and can be used in commercial manufacturing.  Decades of past regulatory action and public health tragedies, including drug shortages, however, lead us to believe this is a false assumption.

Let's go back to basics for a moment and simplify what has become (in my opinion) a large contributor to the ~62% of drug shortages attributable to "manufacturing and quality issues" = failure to embrace the true principles of stage 3 process validation (CPV) and instead focus on individual compliance tasks within disparate department silos, all the while making unfair assumptions in high-risk manufacturing areas.  We have failed in our attempt "to support or corroborate on a sound or authoritative basis" that our products are safe and effective!  Why?  Because we have only completed 50% of the job.  During inspection, the regulator will be looking for two things in an evaluation of any given process:

·       System Qualification (Hardware and Software)

·       Workflow Validation (Hardware, Software, Personnel and Documentation)

Note that the Workflow Validation simply references the System Qualification when necessary, while addressing in detail the Personnel and Documentation components.  For a detailed explanation of Workflow Validation expectations, check out Q3 of FDA's Data Integrity Guidance.  Once we break it down into these two concepts and finally align with regulatory guidance (e.g., CPV), we can properly define the design, operation, and monitoring efforts needed within routine commercial manufacturing with regard to both bullet points.  Old-school CSV has a healthy serving of System Qualification, but zero Workflow Validation.  The inspection grade is 50% (an F). 
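
To make the split concrete, here is a minimal sketch of the two halves as data structures; the field names are my illustrative assumptions, while the four components come from the DI guidance.

```python
# A minimal sketch of the two-part evaluation described above. Field names are
# illustrative assumptions; the four workflow components (hardware, software,
# personnel, documentation) come from FDA's Data Integrity Guidance.

from dataclasses import dataclass, field

@dataclass
class SystemQualification:
    hardware_qualified: bool   # e.g., IQ/OQ evidence
    software_qualified: bool   # e.g., PQ / script execution evidence

@dataclass
class WorkflowValidation:
    system: SystemQualification                 # referenced when necessary
    personnel_controls: list[str] = field(default_factory=list)      # training, roles
    documentation_controls: list[str] = field(default_factory=list)  # SOPs, batch records

def inspection_grade(wf: WorkflowValidation) -> str:
    """Qualification alone is only half the job; both halves earn the pass."""
    qualified = wf.system.hardware_qualified and wf.system.software_qualified
    validated = bool(wf.personnel_controls) and bool(wf.documentation_controls)
    return "pass" if qualified and validated else ("50% (an F)" if qualified else "fail")

print(inspection_grade(WorkflowValidation(SystemQualification(True, True))))  # -> "50% (an F)"
```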

Folks in the CSA camp have tried to define the reduced efforts needed at the System Qualification stage via a risk-based approach, but I fear they are missing the scientific justification: a link to the Workflow Validation.  Without this link, the intended use is unknown, and therefore a risk-based approach cannot be justified.  Without the link, this will be seen as an opinion by the regulator, which may be the reason CDER did not sign off on the guidance: perhaps it tries to simplify and standardize a concept that requires critical thinking and a case-by-case evaluation…? 

The two concepts are attached at the hip and fully reliant on each other to provide your scientific rationale during inspection.  Don't be fooled here: CSA cannot be used as a means to standardize your qualification efforts via a decision tree!  As humans we LOVE checklists, flowcharts, and really anything that makes our lives easier (TNTC examples here).  I'm afraid, however, that there are no shortcuts available here: don't fall into the trap.  We will have to use critical thinking and proceed on a "case by case basis" – to steal the wording from Q9. 

I am feeling bold this evening as I fly somewhere over West Virginia: I would propose we scrap the concept of CSV altogether.  Leave it in the back storage room alongside the dot matrix printers we used to print and weigh chromatography peaks.  Looking forward to 2024, they should both exist in glass cases within your local technology museum.  I am not questioning the value of Qualification, but rather the assumptions that come with "CSV".  Regarding these assumptions, and to provide further evidence, I see a direct conflict with two existing FDA guidance documents and the regulation itself:

1) Process Validation (2011): "The goal of the third validation stage is continual assurance that the process remains in a state of control (the validated state) during commercial manufacture." CSV is generally a one-and-done activity.

2) Data Integrity (2018): "Controls that are appropriately designed to validate a system for its intended use address software, hardware, personnel, and documentation." CSV only addresses hardware/software.

3) 21 CFR Part 11.10(a): "Validation of systems to ensure accuracy, reliability, consistent intended performance, and the ability to discern invalid or altered records." CSV does not consider the intended performance (aka workflow).

I struggle to find any significant modern regulatory publication/guidance related to pharmaceutical manufacturing that aligns with CSV.  As an investigator, I was provided with countless examples of industry publications on the matter; however, as a regulator I was not interested in opinions of what CSV should or should not entail (with the exception of the lunch breaks); there is no time for such discussion during a time-sensitive inspection.  I simply used the guidance listed above to determine if the processes that were directly influencing patient safety were under a sufficient level of control according to the risk management principles outlined in ICH Q9.  This is demonstrated via an evaluation of the following components of any given process: hardware, software (Qualification), personnel and documentation (SOPs/training/batch records/forms/etc.).

I understand that this is a paradigm shift, and as humans we find even small shifts terrifying – but we must tackle this misunderstanding (with best efforts to be as non-invasive as possible) and re-calibrate the way we as an industry understand and implement "validation".  The concepts of true validation as outlined by the regulators are not hard to understand, but anytime change is necessary we must see value to create motivation.  Success is determined by a combination of resources and motivation.  Resources are provided by executive management, and motivation must be self-generated by staff tasked with executing the work.  In theory, motivation should be easy for us considering the life-saving products we manufacture, but we have allowed compliance to spoil it.  Some may place the blame on the regulators, but the concepts of risk management have been around for over a decade.  Every site I visit around the world is filled to the brim with potential and motivation.  Don't let an outdated interpretation of compliance get in the way of world-class manufacturing.

Never implement significant organizational change without careful planning, which almost always should include execution of a pilot study.  The pilot is performed not only to evaluate the immediate result of the change but, equally important, to evaluate its effect on employee motivation and reduce the anxieties that come with significant change.  It's a simple litmus test: did the change create an environment that will motivate staff to find purpose and meaning in labor?  If yes, let's go!  If not, it's back to the whiteboard.

Now back to validation…  When Workflow Validation and System Qualification come together to form our control strategy for CPV, we can achieve the following, which I have no doubt will improve motivation:

System Qualification: The least burdensome (aka risk-based) qualification approach is taken (including screen shots/etc.) based on those areas within the workflow that are of highest risk to our patients.  When a script is executed or a screen shot grabbed, it is because it matters, a confirmation is needed because our patients depend on a right first time strategy.  This lack of dilution brings purpose and meaning to the activity = motivation.

Workflow Validation: Procedures, logbooks, training, worksheets, batch records, and other control strategies are established using the least burdensome (aka risk-based) approach.  When an audit trail review is required, it is because it matters, a review is needed because our patients depend on a right first time strategy.  This lack of dilution brings purpose and meaning to the activity = motivation.

Further Reading: Paste into Google (or your preferred search engine): “FDA Warning Letter Inter and Intra Batch Variability”

Once critical thinking and motivation are allowed to flourish, and executive management is on board, we can finally talk "Quality".

-          Pete

November 15th, 2023

Knowledge Management

“it’s a no-brainer”

OK, so what exactly is knowledge management (KM)?  It seems like we should cover it here, since Q10 has become the hottest topic in our industry at the moment as FDA moves forward with the QMM protocol assessment in response to increasing drug shortages.  Q10 is finally getting the attention it deserves; what a pity that it took over a decade…

Just for background, KM is one of the two "enablers" within ICH Q10 that may permit a site to achieve an advanced (or mature) Quality System: a site that reliably produces high quality medicines "without extensive regulatory oversight".

The concept of KM is defined in ICH Q10 as follows:

Product and process knowledge should be managed from development through the commercial life of the product up to and including product discontinuation. For example, development activities using scientific approaches provide knowledge for product and process understanding. Knowledge management is a systematic approach to acquiring, analyzing, storing, and disseminating information related to products, manufacturing processes, and components. Sources of knowledge include, but are not limited to, prior knowledge (public domain or internally documented); pharmaceutical development studies; technology transfer activities; process validation studies over the product lifecycle; manufacturing experience; innovation; continual improvement; and change management activities.

Sounds great, but a definition without practical examples is difficult to envision, especially when operating within a highly regulated environment with very limited resources.  I have been focusing on this concept for the past few months, trying to develop a means of explaining KM that is "non-invasive", so as to not disrupt ongoing operations.  I have come to the conclusion that KM can be implemented in various ways; however, in my opinion, the most efficient and effective way is to capture knowledge (explicit and implicit) within our workflow validation packages (see Q2 of FDA's Data Integrity Guidance).  A workflow validation package includes consideration of hardware/software/personnel/documentation, and implements a risk-based strategy for ensuring the accuracy and completeness of data via the three pillars of governance: Design – Operation – Monitoring.

Example #1: An analyst knowing that an extra minute of sonication during sample preparation reduces variation in assay results is potential knowledge that can be integrated into the quality system to reduce OOSs and drug shortages.  However, this will remain "information" in the analyst's head until there is a management strategy in place to document this implicit information within the workflow validation risk assessment, at which point it becomes knowledge.  Information does not become knowledge until it is documented, and the most efficient capture of this information is within a workflow validation risk assessment!  This forms the library of knowledge that can be used throughout the organization when needed (in this case, during an OOS root cause evaluation).  The ability to assign a potential root cause by referencing the KM library will improve efficiency and demonstrate the "scientific rationale" necessary to perform re-testing/sampling. 

Example #2: It is explicit information that failure to rinse the probe prior to pH analysis will cause contamination of the sample, lowering the pH and possibly resulting in an OOS.  This is just science.  This will remain information, however, until there is a management strategy in place to document it in the workflow validation risk assessment, at which point it becomes knowledge.  This forms the library of knowledge that can be used throughout the organization when needed (in this case, during an OOS root cause evaluation).  The ability to assign a potential root cause by referencing the KM library will improve efficiency and demonstrate the "scientific rationale" necessary to perform re-testing/sampling. 
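
For illustration, here is a minimal sketch of what such a KM library entry might look like; the structure and field names are my assumptions, populated with the two examples above.

```python
# A minimal sketch of a knowledge management library: implicit or explicit
# information captured against a workflow step so it can be queried later
# (e.g., during an OOS root cause evaluation). Names are illustrative.

from dataclasses import dataclass

@dataclass
class KnowledgeEntry:
    workflow_step: str
    observation: str    # what is known
    source: str         # "implicit" (experience) or "explicit" (science)
    risk_relevance: str # how it affects variability / results

km_library = [
    KnowledgeEntry("sample prep", "extra minute of sonication reduces assay variation",
                   "implicit", "reduces OOS rate"),
    KnowledgeEntry("pH analysis", "unrinsed probe contaminates sample, lowering pH",
                   "explicit", "possible low-pH OOS"),
]

def potential_root_causes(step: str) -> list[str]:
    """Query the library during an OOS investigation."""
    return [e.observation for e in km_library if e.workflow_step == step]

print(potential_root_causes("pH analysis"))
# -> ['unrinsed probe contaminates sample, lowering pH']
```

Once the observation is written down in a queryable place, it has become knowledge the whole organization can reference.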

Without a KM library to reference, the regulators often conclude the firm is relying on opinion to justify root cause conclusions, product impact statements, rationale for re-testing, and other critical GMP functions.  This is the "root cause" of TNTC 483s! 

It is time to get on board with KM – it’s a “no-brainer”  🥸

October 15th, 2023

QMM Updates

Objectivity

FDA has recently published an update on their ongoing development of the Quality Management Maturity (QMM) assessment protocol, which has further refined the elements to be included in the evaluation of a site’s advanced pharmaceutical quality system (PQS).  The Agency has outlined five elements that are likely to be included in the eventual assessment protocol:

1.       Management Commitment to Quality

2.       Business Continuity

3.       Advanced Pharmaceutical Quality System (PQS)

4.       Technical Excellence

5.       Employee Engagement and Empowerment

These elements should come as no surprise, as they have been communicated in some form in previous QMM and/or drug shortage publications; and are fully aligned with the principles already described in ICH Q8, Q9 and Q10.  The question is not the value in investing in such elements, but rather how to perform an objective evaluation and ensure consistency, considering the diversity of manufacturing processes and varying levels of resources available to manufacture different product classes (e.g., generic vs. innovator). 

This will no doubt be the biggest challenge to implementation of the program, which will eventually have to settle on achievable goals for each element.  In my opinion, this will include an evaluation of the tools provided to site staff to achieve success in each area (it will not revolve around how well site management can speak to the elements, but rather around what you are actually doing about them).  One meaningful management metric for element #4 might be "number of human hazards existing within our CPP/CQA workflows".  Success in each area will depend on the effective use of the "enablers" outlined in Q10:

1.       Knowledge Management (effective utilization of unstructured data/information)

2.      Quality Risk Management (focus on design, operation, and monitoring of key workflows)

For those sites operating with limited resources, you might be wondering if you can compete within the eventual QMM program.  The answer is: absolutely.

How?  By embracing and investing in the principles of "Non-Invasive Data Governance", each of the five elements above can be addressed through efficient use of limited resources.  It involves two key principles:

1.       Integrating existing processes into the formal Quality System

2.       Re-directing existing resources to achieve process ownership (away from non-value-added activities and into Quality-focused-activities)

Disclaimer: As a training company focused on element #5, I obviously have a conflict of interest here. 

I’m excited to see how the QMM protocol evolves, especially as they try to tackle the problem of objectivity.  If they design the protocol to include a measurement of how the site actively works to reduce risk to patient safety (drug shortages + safety/efficacy) through the introduction of new QRM and KM tools, then I believe this years-long initiative will result in a significant return on investment.

September 30th, 2023

15 Years!

Dr. Califf Visits India

This past week FDA commissioner Dr. Califf was visiting India to celebrate 15 years since the establishment of the New Delhi office, a place that holds special meaning for me.  As a result, I was interested in the details of the visit, and one particular activity caught my attention: Dr. Califf's 25-minute sit-down interview with CNBC's "The Medicine Box" on September 29th.  I was impressed with the direct and meaningful nature of the questions.  It was evident that the journalist was well aware of the current compliance environment, and unafraid to ask tough questions!  The interview covered several hot topics:

  1. Unannounced Foreign Site Inspections: my take on the interview is that we can expect these to continue and most likely be expanded, especially because Dr. Califf actually states “I think a mixture of announced and unannounced is important, and you will see more unannounced inspections.”  Essentially, no firm should now be surprised if an FDA Investigator shows up in a foreign facility without notice, especially anywhere the FDA has a permanent presence (EU, South/Central America, Asia), which makes the logistics of such a visit easier to manage.

  2. Re-Shoring Essential Manufacturing Capabilities: As a science-based public health agency, FDA has little influence on the factors that could encourage re-shoring.  Economics has a large role to play here, among other things outside of the FDA's control.

  3. Emerging Concern with Clinical Data Integrity: Dr. Califf himself inserts the “including clinical” comment several times as a concern, including when responding to questions about post-COVID compliance. This is notable considering the recent press regarding the reliability (or not) of clinical trial data being used to approve drug applications.  This is a worldwide concern and potential nasty can of worms about to be opened.  Be prepared for a crackdown on enforcement of good clinical data governance.  See my earlier blog post for an evaluation of the recent European guidance for GCP data management, which provides excellent advice and is largely harmonized with previously published commercial DI guidance.

  4. Maturity: Dr. Califf emphasizes a focus on the transition to digital, which could aid the agency in determining when and where to perform inspections, as well as specific areas of interest in which to focus during those inspections.  We are likely some years away from such a scenario, but the comments fit in quite nicely with the recent FDA update earlier this month to the QMM journey and potential rewards for Quality investments, which they have been pushing hard the past few years.  In parallel to the discussion on digital transformation, there is an emphasis on employee training and empowerment, a necessary ingredient for true Maturity. Industry should be prepared for a change in tactics when it comes to the FDA evaluation of employee training programs, with an emphasis on the ability to demonstrate training effectiveness, rather than simply providing evidence that training was provided.

I could keep going here on the link between the interview and industry’s QMM journey – but will save that for the next Blog post, stay tuned! 

September 12th, 2023

Objective 4

Updates to FDA’s Pre-Approval Inspection Program

FDA's "Compliance Program Guidance Manual" (CPGM) 7346.832, which provides an in-depth guide for the investigator on factors to consider when performing a "Pre-Approval Inspection" (PAI), was revised in October 2022 with little fanfare.  This is surprising, considering the significance of this CPGM to the overall application approval process.  The previous guidance had been released in August 2019, and was effective for just over three years (a very short lifespan for regulatory guidance!).  Interestingly, the PAI program was an outcome of serious fraud and corruption uncovered in the mid-1980s, commonly referred to as the "generic drug scandal".  The program was designed to prevent fraudulent applications from gaining approval via a vigorous on-site inspection.  The outcome of the inspection performed according to the CPGM, combined with the expert review of the various application components, comes together near the end of the process to inform the overall approval decision.  The PAI process has long included the following three traditional objectives:

  • Objective 1 - Readiness for commercial manufacturing: is the firm capable of producing the drug at commercial scale, or will the site struggle to maintain a consistent supply, resulting in drug shortages?

  • Objective 2 - Conformance to Application: were the process, equipment, and other factors consistent with what is observed during the on-site inspection?

  • Objective 3 - Data Integrity: were the data submitted in the application accurate and complete?

Firms have traditionally prepared for evaluation of these three objectives the best they can, considering the investigator will likely take a risk-based approach to coverage of each objective. 

What stands out in this new revision is the addition of a new objective, titled “Objective 4 - Commitment to Quality in Pharmaceutical Development”.  This objective is a break from the traditional compliance approach to inspection, and is aligned with other recent FDA whitepapers, namely FDA’s vision for Quality Management Maturity (QMM).  This objective appears to be an attempt to gauge the level of maturity within the firm’s product development program, with two keywords standing out: knowledge management and quality risk management.  For the regulator (and patient), these ingredients are important, considering the approval process is largely based on trust: the expert review and inspection activities only evaluate the tip of the much larger product development iceberg.  Has the firm been transparent with all relevant development outcomes?  Again – the process is largely built on trust, which can be tricky considering human nature – namely the universal faults with regard to conflict of interest and cognitive dissonance!

How can we limit these universal human flaws?  The answer comes in the form of systematic tools for capturing knowledge (knowledge management) and evaluating that knowledge through the patient safety lens (quality risk management).  Without firmly established and mature tools, bias is allowed to creep in like the cockroaches of central Texas…  and the result is unpleasant for everyone involved.

Speaking of systematic tools, ICH Q8 "Quality by Design", which has been around since 2006, provides a harmonized guide for building quality into the development process, so in reality the addition of Objective 4 and the evaluation of a firm's maturity should come as no surprise…  Any grace period needed to adopt the concepts behind Q8 into a given development program has long since passed. 

A quick self-evaluation can be performed to gauge one’s inspection readiness for 2024:

  • Knowledge Management: Has management provided the solutions necessary to ensure information gathered during development can be transformed to knowledge via digitization?  To gauge one’s maturity, use the FAIR principles as the standard for success.

  • Quality Risk Management: Has management updated the overall QRM program in line with last year’s revisions to ICH Q9?  To gauge one’s maturity, use the spectrum of formality with regard to available QRM tools (from a risk memo to FMEA) as the standard for success.

This is already a long post for a blog, and my flight is landing soon, so I will wrap it up here.  In summary, management investment in the product development program should be sufficient to allow adaptation to updated regulatory guidance.  If data are still captured on paper (or paper on glass), your firm runs the risk that a poor maturity evaluation may lead to a loss in confidence regarding the accuracy and completeness of your application.  This will likely lead to delays in product approval until additional verification can be completed, either through written correspondence or a follow-up secondary on-site/remote inspection.  Consider the approval process from the regulatory/patient perspective and the iceberg analogy, and it really is a "no-brainer": a mature quality system within development (don't worry, this is different from commercial GMP) is no longer optional.  It's time to implement flexible and risk-based tools that will unleash critical thinking throughout your organization and bring high-quality medicines to patients without regulatory delays.

June 18th, 2023

Formality

As I was reading through a recently published FDA Form 483 issued on 12 Aug 2022, the following statement caught my attention: “a patient death occurred while using the…”  Reading through observation one, we find that the device had reached its “upper limit of notifications that can be transferred”, causing a delay of ~one week (transmissions were supposed to occur daily). 

I immediately began imagining what tools the firm used for Quality Risk Management.  Just like in the garage, the ability to fix anything well depends on the tools provided to the mechanic (it's generally not the mechanic, but the resources available within the garage that matter most)!  If the mechanic is only provided a multimeter to fix an electrical fault, then the outcome of the repair will be commensurate with this limitation.  The mechanic will struggle to remove the parts necessary to access the full electrical system and determine potential faults.  In my experience, sites that have world-class risk management programs are those that take advantage of the full spectrum of formality (see ICH Q9(R1)), just like a mechanic should have full access to a wide variety of tools, from pliers (less formal) to a multimeter (more formal).  

I wonder whether combining less formal tools (data & process mapping) at the hazard identification stage with more formal tools at the risk mitigation/acceptance stage could have led the site to identify, mitigate, and control this critical risk appropriately.  PIC/S provides an excellent roadmap within its guidance “ASSESSMENT OF QUALITY RISK MANAGEMENT IMPLEMENTATION” (a toy sketch of these steps follows the quoted list), which states:

“The QRM process normally consists of several steps including:

  • Process mapping identifying all inputs, outputs and existing control measures;

  • Risk Assessment (incl. risk-identification, risk-analysis and evaluation);

  • Risk Control (including risk reduction and risk Acceptance);

  • Communication (i.e. the residual risk should be communicated to the regulators and customers);

  • Regular Review of the risks”
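Purely as an illustration, these steps can be captured in a simple data model – every field name and example entry below is hypothetical, loosely inspired by the notification-transfer case above:

# Illustrative sketch of the PIC/S QRM steps as a simple data model.
# All field names and example entries are hypothetical.
from dataclasses import dataclass, field

@dataclass
class ProcessStep:
    name: str
    inputs: list[str]
    outputs: list[str]
    controls: list[str]                 # existing control measures
    hazards: list[str] = field(default_factory=list)

@dataclass
class Risk:
    hazard: str
    analysis: str                       # risk analysis / evaluation notes
    reduction: str                      # risk control: reduction measure
    accepted: bool                      # risk control: acceptance decision
    communicated_to: list[str]          # residual-risk communication
    next_review: str                    # regular review of the risk

# Process mapping: inputs, outputs, and existing control measures
step = ProcessStep(
    name="Daily notification transfer",
    inputs=["device event log"],
    outputs=["transmitted notifications"],
    controls=["daily transfer schedule"],
    hazards=["transfer buffer reaches its upper limit"],
)

# Assessment, control, communication, and review for one hazard
risk = Risk(
    hazard=step.hazards[0],
    analysis="Buffer overflow delays safety notifications by days",
    reduction="Alarm when buffer exceeds 80% of capacity",
    accepted=False,
    communicated_to=["customers", "regulators"],
    next_review="2023-12-01",
)
print(risk)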

When a site is provided only the most formal tools (multimeter), it is generally not able to trigger the critical thinking necessary for a successful QRM strategy.  Just like the mechanic will never be able to remove the machine parts without the pliers (left with only the ability to access the main battery)…  What’s in your site’s toolbox?  Do you have the full spectrum of tools necessary for success?  Don’t be afraid of less formal tools – take another look at ICH Q9(R1) section 5.1 to gain the confidence necessary to expand the tools in your garage – it will pay dividends well into the future.

Perfection was never an option - just do the best you can with the resources you have and you will suddenly find yourself operating a world-class manufacturing operation.

May 4th, 2023

Artificial Assistance

As I was addressing audience comments submitted following my recent webinar on Quality Intelligence, the question was posed: “can you comment on the importance of Artificial Intelligence and Data Analytics in drug manufacturing?”  During my response, I stated that “advanced data analytics will allow our industry to improve process control strategies in ways we cannot comprehend without artificial assistance”.  Artificial Assistance?  Surely someone has coined that phrase, so I Googled it, but alas – Google did not provide me with a definition, but rather directed me to links advertising various artificial assistants (totally different thing, but very cool!).  I was hoping to reference another trendy term in this new world where AI and the pharmaceutical industry converge, but no such luck. 

The reason I felt compelled to Google the phrase was that as the two words appeared on my screen, I immediately felt that it described the “intended use” (see 21 CFR Part 11.10(a)) of the various AI tools within our heavily regulated industry.  When we define the intended use, we can then proceed with a validation strategy, and we can feel confident moving forward alongside AI solutions.  One example that comes to mind: AI will enhance (or “assist”) the human (QA) to ensure that the final batch disposition decision is based on our best efforts at analyzing the data set for any indication of a quality defect.  In this context, considering the “intended use”, the validation burden becomes manageable using the existing qualitative risk-management tools outlined in ICH Q9.  Risk to patient safety introduced by these tools can be managed by demonstrating that the disposition decision is never based on the outcome of an AI algorithm alone, without human verification, but is rather an enhancement of the traditional human data review and approval process – an artificial assistant that improves disposition in a demonstrable manner.
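A minimal sketch of what this human-in-the-loop disposition might look like in code – all names and the toy screening/review logic below are my own assumptions, not a validated implementation:

# Human-in-the-loop disposition sketch (hypothetical names; the AI
# screen assists, but release always requires the human QA decision).
from typing import Callable

def disposition(batch_data: dict,
                ai_screen: Callable[[dict], list[str]],
                qa_review: Callable[[dict, list[str]], bool]) -> str:
    flags = ai_screen(batch_data)            # artificial assistance: flag anomalies
    approved = qa_review(batch_data, flags)  # human verification is mandatory
    return "RELEASE" if approved else "REJECT"

# Toy stand-ins for illustration only
ai_screen = lambda data: [k for k, v in data.items() if v == "atypical"]
qa_review = lambda data, flags: len(flags) == 0  # simplified QA logic

print(disposition({"assay": "typical", "impurity_profile": "atypical"},
                  ai_screen, qa_review))     # -> REJECT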

Two is always better than one!

It would be foolish to move ahead with the attitude that AI will not introduce risk to the example mentioned above, but why would we let risk prevent us from improving the process? No great innovations come about without confronting new challenges. By acknowledging and managing risk, humans have been improving processes since the beginning of time! All innovations present new and different hazards, but lucky for us a clear and concise roadmap for risk management has already been provided by the regulators!

We already have the tools - so let’s get started!

March 31st, 2023

Uncertainty & Assumptions

Failure to perform adequate investigations into “unexplained discrepancies” continues to be the #1 cited GMP regulatory observation, year after year, often leading to unfortunate regulatory action (e.g., Warning Letters & recalls).  The regulation requires discrepancies to be “thoroughly investigated”, but how thorough is thorough enough?  To answer this question, we must reflect through the lens of the modern CGMP, and to do this, we need a clean break with the past.  Past pitfalls that have led to regulatory problems can generally be attributed to the failure to address and rationalize the uncertainty and assumptions included in the investigation (scope, root cause, product impact), causing the end result (conclusion & CAPA) to fall short of regulatory and patient expectations. 

The modern CGMP Quality System, especially considering the tools outlined in ICH Q9 (R1), does not allow one to:

1.      Leave uncertainty unacknowledged, and

2.      Include assumptions without acknowledging them as such.

It is perfectly acceptable to be uncertain (in fact it is unavoidable in most investigations).  It is also perfectly acceptable to include assumptions within an investigation.  What is not acceptable, from a regulatory and patient safety standpoint, however, is to pretend that these inclusions are as good as true (or scientifically justified in the same category as fact)!  Nope.  These belong in separate categories, each carrying different weight when it comes time for the final decision to be made.  This is a critical fault that we see regularly cited in FDA Form 483s, with wording such as failure to provide “scientific rationale” for the ultimate conclusion & CAPA.  Modern QRM tools take all the subjectivity encountered during the course of the investigation and force us to acknowledge what we don’t know.  As a result, brilliantly, we can take the subjectivity out of the final conclusion & CAPA.  That is how we deal with uncertainty and assumptions while still ensuring patient safety.  Very cool.
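As a small illustration of keeping the categories separate – the example statements below are entirely hypothetical:

# Sketch of holding facts, assumptions, and uncertainties in separate
# categories so the conclusion can weigh them honestly (hypothetical
# example statements).
from enum import Enum

class Basis(Enum):
    FACT = "fact"                  # verified and documented
    ASSUMPTION = "assumption"      # stated, but not verified
    UNCERTAINTY = "uncertainty"    # acknowledged unknown

statements = [
    ("Swab result exceeded the limit on line 3", Basis.FACT),
    ("Operator followed the cleaning SOP as written", Basis.ASSUMPTION),
    ("Exact transfer route of the residue", Basis.UNCERTAINTY),
]

for text, basis in statements:
    print(f"[{basis.value.upper():11}] {text}")

unverified = [t for t, b in statements if b is not Basis.FACT]
print(f"\nConclusion must acknowledge {len(unverified)} unverified item(s).")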

ICH Q9 (R1) states that “While subjectivity cannot be completely eliminated from quality risk management activities, it may be controlled by addressing bias and assumptions, the proper use of quality risk management tools and maximizing the use of relevant data and sources of knowledge (see ICH Q10, Section 1.6.1).”

“All participants involved with quality risk management activities should acknowledge, anticipate, and address the potential for subjectivity.”

In conclusion, and as a friendly reminder, transparency is always rewarded in the end.  It takes practice and the right tools, but lucky for us, the roadmap for success has already been drawn through harmonized guidance.

Join us this May as we discuss, debate, and work through real-life case studies with the central theme of conducting thorough & scientific investigations that will pass any regulatory scrutiny, and ultimately ensure safety of our patients despite the inevitable: unexpected events. 

Let’s go!

Pete

March 21st, 2023

Sources of Variability

If someone this afternoon forced me to propose one concept/hot topic to focus on based on recent 483s (and warning letters), both in the clinical and commercial GXP space, I would have to settle on “understanding sources of variability” in your end-to-end process, whatever GXP process that may be.  Sources of variability can be defined as steps along a data collection -> review -> reporting process where hazards (see the revised ICH Q9 R1) exist that could affect the scientific justification for the ultimate decision-making step.  Generally, these fall into one of two categories (a toy data-flow sketch follows the list):

  • Technical Hazards (e.g., system-system interface)

  • Procedural Hazards (e.g., access to edit the contents of a report template)
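As a toy illustration – the steps and tags below are hypothetical – a data & process map that surfaces which steps upstream of the decision carry hazards might look like this:

# Illustrative data-flow map with hazard tags (all entries hypothetical).
from enum import Enum

class Hazard(Enum):
    TECHNICAL = "technical"     # e.g., system-system interface
    PROCEDURAL = "procedural"   # e.g., editable report template

flow = [
    ("instrument acquisition", []),
    ("LIMS interface transfer", [Hazard.TECHNICAL]),
    ("analyst review", [Hazard.PROCEDURAL]),
    ("report template population", [Hazard.TECHNICAL, Hazard.PROCEDURAL]),
    ("QA decision", []),
]

# Surface every step upstream of the decision that carries a hazard
for step, hazards in flow:
    if hazards:
        print(f"{step}: {[h.value for h in hazards]}")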

In the PIC/s Guide for Data Integrity, we find an entire section dedicated to “hybrid systems”: those processes that involve computerized systems that may not be fully compliant with regulatory expectations (e.g., simple laboratory instrumentation).  These are obvious sources of variability within our manufacturing/testing processes, due to reliance on procedural/human controls, that need to be well understood and monitored appropriately.  This is covered in the PIC/s Guide – and should be well understood in industry.

In reality, however, in 2023 and for the foreseeable future, nearly all of our processes are actually in some way “hybrid”, even if the computerized system is “compliant” with the requirements found in Part/Annex 11.  I can think of few GXP processes (collection -> review -> reporting) that are truly automated.  Most of the data & metadata that eventually make it to the decision-making step involve several hazards downstream from the “original” record.  Even those that are mostly automated involve at least some human intervention along the way, like the printing of a batch report following completion of an aseptic fill. 

In pharma, we find it hard to be transparent with all procedural hazards.  In my opinion, this is due to the serious nature of the trials we conduct and products we manufacture.  Our processes have direct health and safety implications for potentially millions of our fellow humans.  To admit that a hazard in our process is due to an employee failing to print and attach all filter integrity results (despite significant training and education) is difficult.  This is due to the universal human condition called “cognitive dissonance”: the discomfort that one experiences when dealing with a conflict between our beliefs and reality.  To avoid the discomfort (extreme in our case), sometimes we choose to ignore reality.  Let’s break it down:

  • Belief: Employees working in a GXP environment understand the serious nature of their roles and responsibilities, and will act with integrity while performing their day-to-day functions.

  • Reality: Employees do understand the serious nature of their roles and responsibilities; however, they can only work with the tools available to them within the GXP workplace.  Ultimately, the need for personal wellbeing (financial, mental, etc.) will overcome the requirements of the GXP workplace.

This second point is extremely uncomfortable – that’s the cognitive dissonance kicking in.

Hence – “understanding sources of variability”.  The preamble to 21 CFR Part 11, published on March 20, 1997 (nearly 26 years ago exactly!), states “the agency’s experience with various types of records and signature falsification demonstrates that some people do falsify information under certain circumstances”.  The Agency is not making a statement disparaging the GXP workforce.  On the contrary!  The Agency is pushing management to acknowledge Reality & the universal human condition.  It’s OK to have processes that are not perfect, and sometimes under pressure, humans act in ways that they would not otherwise consider.  If we are aiming for true process control, acknowledging the human condition in our data governance strategies gets us as close to perfection as possible. 

Let’s do it.

Pete

March 10th, 2023

GCP Data Integrity

(EMA 2023)

I just finished reading the newly published EMA guideline titled: “Guideline on computerised systems and electronic data in clinical trials”.  I think I have now transformed into a total data nerd, because I found myself completely engrossed in the text!  My first thought was one of relief: I found the guide to be 100% harmonized with the many existing GXP guides for data integrity.  We will not find in the document a prescribed checklist to follow when setting up or evaluating a trial management strategy.  This is by design.  Checklists lead to trouble!  We are the experts in our data processes, and only we have the ability to design a risk-based management strategy that meets our patients’ expectations! 

We do (at least) see an excellent high-level critical-thinking roadmap to success (Section 4) for inspection readiness (via patient protection); some thoughts on each section are outlined below:

1.       Data Integrity

  • The guideline uses the concept of data governance to ensure data integrity.  This is a necessary addition, as governance focuses industry on the process (a holistic approach), rather than just the people!

2.       Responsibilities

  • In this section we see an emphasis on data ownership (although that exact terminology is not used).  When ownership is assigned, required tasks get done!

3.       Context

  • Here we see the regulator explain that metadata provides the context necessary to generate information.  Data by itself (without the associated context) cannot be used for decision-making.  Only by including the associated metadata in the governance strategy can we generate information used for making decisions and conclusions (for example, by ensuring via Audit Trails that the data entry was made by the authorized employee, at the designated timepoint, etc…). 

4.       Source Data

  • This is an excellent section that outlines the ability to take a risk-based approach to ensuring accuracy of our data.  Here is where we see a break from downstream GMP requirements for “original records”; however, this will most likely be adapted within the GMP framework as we move into more advanced data analytics (e.g. machine learning).

5.       Criticality & Risks

  • To wrap up the emphasis on governance, the guide directs us over to the concepts outlined in ICH Q9 (R1) – with an almost cut/paste strategy!

6.       Data Capture

  • Due to the wide variety of data acquisition tools that may be involved in a study, the regulators state that a “detailed diagram” of the data flow should be available.  This is excellent guidance and is not optional (see the section on data integrity above), as the data & process map is the best tool to initiate process understanding, with the ultimate goal of true governance. 

7.       Electronic Signatures

  • This section is a reminder regarding the requirements for e-sigs, the reason being that without these safeguards in place, the regulator cannot consider the resulting decisions to be based on accurate and complete data sets.  Just a friendly reminder!

8.       Data Protection

  • Confidentiality of subject data cannot be compromised.  The regulator includes here a section on GDPR to ensure that the risk-based governance plan considers confidentiality and implements controls that are commensurate with this risk.  This is not optional!

9.       Validation

  • To understand the roadmap at this point, we are directed to Annex 2 of the document, which is an extensive guide to CSV.  We see alignment here with FDA’s 2018 DI guidance, with an emphasis on evaluating hardware, software, personnel and documentation as part of the validation process (see section A2.3).  We also see some limited aspects of an Agile approach to validation (see section A2.5), following a risk-based approach.  In short, this section outlines the expected modern approach to the GXP validation lifecycle.

10.   Direct access     

  • This section (in my opinion) may be a prelude to the expectation that inspectors be provided remote access to GXP data sets (read-only access), as the regulators attempt to reduce travel burden and improve efficiency through remote evaluations. 

Sections 5 and 6 get down into the weeds on Computerised Systems (5) and Electronic Data (6), largely harmonized with Part/Annex 11 expectations.  Pay special attention to section 6.2, as it outlines some very specific inspector expectations regarding Audit Trails, for example, the ability to export the “entire audit trail as a dynamic data file”…  This means the inspector expects to receive an exported data file with the ability to sort/filter/process the data during inspection (data analytics).  It appears that the days of providing exported PDFs are over, as a PDF is not a usable format and leaves the inspector no ability to perform their job of protecting and promoting public health. 
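To illustrate why the dynamic format matters – with hypothetical column names and rows – the kind of sorting and filtering an inspector might perform takes only a few lines:

# Sketch of sorting/filtering a "dynamic" audit-trail export
# (column names and rows are hypothetical).
import csv, io

export = """timestamp,user,action,record
2023-03-01T10:02,jdoe,modify,Subject-014 visit date
2023-03-01T10:05,asmith,create,Subject-015 enrolment
2023-03-02T16:44,jdoe,delete,Subject-014 adverse event
"""

rows = list(csv.DictReader(io.StringIO(export)))

# Filter: all deletions, newest first -- impossible with a static PDF
deletions = sorted((r for r in rows if r["action"] == "delete"),
                   key=lambda r: r["timestamp"], reverse=True)
for r in deletions:
    print(r["timestamp"], r["user"], r["record"])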

In summary, this is another excellent harmonized GXP guide for industry with some additional specific requirements for the BIMO world.  The takeaway message, as with all recent DI guides, is to:

1.       Activate critical thinking and be open to flexible data management strategies (perfection is not the goal)

2.       Evaluate each step and interface within a process for hazards/risks (think accuracy and completeness)

3.       Recognize that once the relevant aspects of the design/operation/monitoring of the process have been evaluated and the management strategy decided, we are operating within a validated state – and are inspection ready!

Let’s get to it!

Pete

February 23rd, 2023

Root Cause Analysis

The “5 M’s”, sometimes known as the “Fishbone (Ishikawa) Diagram”, is an excellent tool for performing a structured root cause analysis (RCA), and is found in many GXP investigations.  I have seen it used all over the world, with varying degrees of success when it comes to finding the root cause(s) in the context of an issue (say: a deviation) potentially affecting patient safety.  For the sake of this blog, let’s take a step back for a moment and gain some perspective…  When we gain perspective, we are then able to think critically and act decisively.  Without perspective, sometimes we get lost in the mundane world of “compliance” and act without clear direction. So:

Q: Why do we do GXP root cause analysis?

A: To identify the source(s) of the (deviation), to set us up for proposing a meaningful and effective CAPA.  In theory, this prevents recurrence of the event.

Great, I think we can all agree with the above thought process; it’s simple and universal.  So why is it, then, that despite the widespread use of this tool, we continue to see repeating issues within our organizations that often demonstrate an unacceptable trend during Health Authority inspection?  Maybe it is due to a lack of perspective leading to failure to use the tool to the best of its ability.  A “compliance mindset” puts pressure on personnel to find the root cause within a pre-defined timeframe (typically 30 days), losing the perspective that drives us to do the best we can with what we have.  We try to fit a square peg through a round hole.

The goal with any RCA tool used in a GXP setting is to trigger critical thinking by the user.  Facilitating a workshop session dedicated to the use of the tool and expanding the SOPs to include model examples are easy improvements toward this goal.  But, if we are being serious, this isn’t going to do much, otherwise this blog entry would not be necessary.  What we really need is a paradigm shift in the way RCA is done in our industry – which (in my opinion) starts by taking the advice of the FDA written in many recent Warning Letters (since ~2019): focus on the “management strategy”.  RCA outcomes are only as good as the tools available.  We can’t expect industry-leading investigations when our tools are not kept up to date with CGMP expectations!  There is a reason why the regulators are pointing us in this direction, as we see in ICH Q10:

“Senior management has the ultimate responsibility to ensure an effective pharmaceutical quality system is in place to achieve the quality objectives, and that roles, responsibilities, and authorities are defined, communicated, and implemented throughout the company.”

So if no one is evaluating the effectiveness of the abovementioned expectations, then maybe we have lost perspective?  We end up chasing our tails with layer upon layer of CAPAs, which generally increase the burden on front-line employees and degrade the “right environment”…

There are two ways I see this getting done (evaluating the management strategy using our RCA tools); a minimal sketch of the second option follows the list:

1.       Transform 5 M’s into 6 M’s, with the addition of “Management Strategy”

a.       Although this is listed as the first option, it may not be ideal in some cases.  For sites with a long history of operations, the addition of the extra branch might be necessary to ensure a change (old habits die hard), however, for new sites, management strategy can be integrated into the personnel branch right from the beginning.

2.       For major/critical investigations, apply a secondary RCA tool once the fishbone is completed – the “5-whys” – with a special emphasis on examining the current management strategy and “why” the event was allowed to occur in the first place. 

a.       For example:

           i.      Why has management failed to invest sufficient resources? 

           ii.      Why did our leading Quality Indicators fail to adjust the process before failure?

b.       For further guidance on this point, check out the “Right Environment” concept within MHRA’s Data Integrity guide. 
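A minimal sketch of that second option – the event and the chain of answers below are entirely hypothetical:

# Minimal 5-whys sketch ending at the management strategy
# (hypothetical event and answers).
event = "Filter integrity results missing from batch record"
whys = [
    "Why? Operator did not print and attach the results.",
    "Why? Printing requires a manual step at a shared terminal.",
    "Why? The batch record system is paper-on-glass, not integrated.",
    "Why? The capital request for integration was deprioritized.",
    "Why? Leading quality indicators never surfaced the risk to management.",
]

print(f"Event: {event}")
for depth, answer in enumerate(whys, start=1):
    print(f"{depth}. {answer}")
print("Root (management strategy): no leading indicators feed resource decisions")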

RCA is no walk in the park, and it takes a bold team of individuals within management to be open to honest evaluation (= vulnerability), but when we take a step back and gain perspective (why we do what we do), we see clearly that a true Quality Culture (and ultimately the patient) would expect nothing less.  Without this deeper evaluation, we might imagine a cook tasked with preparing a gourmet meal, but only provided canned ingredients – it is a guaranteed failure.  In essence, it is a guarantee that we will continue to see adverse trends if we do not adjust the tools (or ingredients!).

The culture that exists within an organization when management expresses vulnerability by reacting to what the data tells us (even if it means admitting our original efforts were wrong) is one that fosters growth and will emerge in the future as an industry leader. 

Oh, and BTW – it is also a workplace where employees thrive and find meaning in labor.  Very cool.

Let’s get to it.

Pete

February 10th, 2023

Quality Intelligence

It appears that lately, the idea of Quality Intelligence has been a topic of growing interest among industry and the regulators.  The concept should not be new to us, as it was originally introduced along with FDA’s Process Validation guidance more than 10 years ago as “Stage 3” continuous process verification (CPV).  We may now be seeing more regulatory enforcement in this area, as regulators work hard to push companies to prevent the quality and manufacturing issues behind 62% of drug shortages in the United States.  One recent, expertly written 483, for example, linked downstream recurring QC investigations with an upstream failure to evaluate “inter- and intra-batch variability” during process validation.  As far as I can tell, this is new 483 language, and exactly what the patient expects.

Prevention of problems can only be fully realized when a site takes the gigabytes (terabytes?) of data they are required to retain as information, and translates them into “knowledge”.  That sentence was easy for me to write… but it is much more difficult for a site to “do”.  As one of my heroes W. Edwards Deming would say, “by what means”?  This is where things get tricky, but also (at least for me) where the fun begins!  Does a modern “Stage 3” CPV simply consist of continuously updated control charts with our CPPs & CQAs (and perhaps some other application commitment data points)?  Nope.

What is knowledge?: GXP data/metadata/information becomes knowledge once put in the context of patient safety.  It really is as simple as that.

By first understanding our processes, specifically sources of variability arising from:

  • software & hardware,

  • facilities & equipment,

  • and personnel & documentation (see ICH Q9),

we can design a modern Quality Intelligence approach that takes available information behind sources of variability and works to reduce variation by acting on knowledge.  This allows us to step off the treadmill (we are tired anyhow), and break through the barriers that prevent us from achieving a world-class manufacturing & testing facility. 
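As a toy illustration of the difference between data and knowledge in a Stage 3 program – the values, limits, and source tag below are hypothetical – a control-chart signal only becomes actionable once tied to a mapped source of variability:

# Illustrative Stage-3 sketch: a control chart alone gives data; tying
# a signal to a known source of variability turns it into knowledge.
# Values, limits, and source tags are hypothetical.
from statistics import mean, stdev

baseline = [99.1, 99.3, 98.9, 99.2, 99.0]   # qualification batches, assay (%)
monitor = {5: 97.4, 6: 99.1}                # batch index -> assay (%)
sources = {5: "equipment: new transfer pump installed"}

center, sigma = mean(baseline), stdev(baseline)
lcl, ucl = center - 3 * sigma, center + 3 * sigma

for i, value in monitor.items():
    if not (lcl <= value <= ucl):
        context = sources.get(i, "no mapped source -- investigate")
        print(f"Batch {i}: {value}% outside [{lcl:.2f}, {ucl:.2f}] -> {context}")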

Let’s get to it! 

Pete

January 16th, 2023

The GEMBA

The GEMBA walk is an important ingredient of the pharmaceutical process control strategy and/or data governance strategy.  The GEMBA walk, as it was originally intended, exists to establish a direct line of communication/visibility between upper management and front-line employees.  How can upper management make critical steering decisions based on hearsay?  It is inefficient and risky!  This very cool concept was designed to identify sources of variability that lead to excessive waste, which causes increased costs, a reduction in quality, and worker frustration.  It is accomplished by listening, collaborating, and making corrections where improvements are needed.  Not inspecting, examining, or interrogating; this kind of behavior works contrary to the goals of the program, as it is management by fear. 

The GEMBA, if conducted in the right spirit, can be a form of spontaneous brainstorming that can add incredible process improvements.  It is simply human nature that sometimes those involved in the day-to-day activities (front-line) or managing the day-to-day activities (middle management) are too invested in the process themselves to notice these opportunities. 

I have observed some additional benefits of the GEMBA walk that I would like to share here:

1.       Upper management on the shop floor demonstrates that communication is essential, which motivates middle management to perform their essential role: create the “right environment” for employees (see MHRA’s DI guidance).  Some even call this a “speak up culture”.

2.       Upper management on the shop floor listening to employees with several layers of middle management between them demonstrates that everyone participates in quality, and old-school quality metrics goals (e.g. reduction of deviations through “re-training”) are now fossilized deep underground, never to be misused again through management by fear.  We now measure performance of the process, and value everyone’s efforts to reduce sources of variation. 

Quality culture gimmicks are fun (t-shirts, games, etc.), and are a welcome break from our routine activities.  But what do employees really need in order to thrive both personally and professionally?  Meaning in Labor.  Feeling Valued.  Helping Others. 

The products we produce make these goals easy to achieve.  Let’s get to it! 

Pete
