When to CAPA?
This age-old question has caused serious confusion and difficulty for generations of GMP professionals, and it is highlighted within this FDA Form 483 from early 2025. Let's discuss a new & modern strategy in this blog entry…
In a modern quality system, a CAPA should never be initiated without sound scientific rationale, which can be found in a documented risk assessment. This risk assessment is key. Process/workflow changes can be disruptive, require extensive re-training of staff, and introduce confusion when requirements are continuously altered within daily routine tasks. The result is an increase in mistakes; the opposite of the desired outcome.
So… when is a CAPA deemed necessary? This really depends on the maturity of the site's Quality System. For example, if detailed workflow-level risk assessments have been established that predict errors along with an assessment of risk (severity × vulnerability), CAPAs should be reserved only for systemic, repetitive, or high-severity errors. In all other cases (which make up the vast majority of our deviations), mistakes (deviations) may be considered common-cause: we knew that was going to happen based on vulnerabilities within the workflow. A CAPA may not be required when low-severity errors occur, unless a trend or widespread scope is identified.
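To make this concrete, here is a rough Python sketch of that decision logic. Everything in it is illustrative; the field names, severity labels, and flags are hypothetical placeholders, not terms from any regulation or an actual site procedure.

```python
# Illustrative sketch only -- field names, labels, and rules are hypothetical,
# not taken from any regulation or actual site procedure.
from dataclasses import dataclass

@dataclass
class Deviation:
    severity: str           # "low", "medium", or "high", per the proactive risk assessment
    is_systemic: bool       # the error affects multiple workflows/products
    is_repetitive: bool     # the error is part of an adverse trend for this hazard
    scope_widespread: bool  # widespread scope identified during the investigation

def capa_required(dev: Deviation) -> bool:
    """Reserve CAPAs for systemic, repetitive, or high-severity errors.

    Everything else is treated as common-cause: documented, trended,
    and justified as "no CAPA" via the proactive risk assessment.
    """
    return (
        dev.severity == "high"
        or dev.is_systemic
        or dev.is_repetitive
        or dev.scope_widespread
    )

# A low-severity, isolated human error: trend it, no CAPA needed.
print(capa_required(Deviation("low", False, False, False)))  # -> False
```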
Common-cause variation could be defined as "the inherent, random, and predictable fluctuation that exists in any process due to its design." I argue that this predictable fluctuation also applies to human errors, which are likely the largest category of deviations within your program. If we have identified and assessed potential human-factor hazards, proactively determined the impact of such errors (e.g., low/medium/high), and regularly trend such errors with granularity, then we have sound scientific rationale available during the FDA inspection to justify a no-CAPA decision.
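As one hypothetical example of trending with granularity, a simple c-chart (a standard SPC tool for counts of nonconformities) can flag when a human-error category stops behaving like common-cause variation. The monthly counts below are made up for demonstration only.

```python
# Illustrative trending sketch using a simple c-chart (a standard SPC tool
# for counts of nonconformities); the monthly counts are made up.
import math

# Monthly counts for one granular human-error category (hypothetical data)
monthly_counts = [4, 6, 5, 3, 7, 5, 4, 6]

c_bar = sum(monthly_counts) / len(monthly_counts)  # process average
ucl = c_bar + 3 * math.sqrt(c_bar)                 # c-chart upper control limit

latest = 12  # this month's count for the same category
if latest > ucl:
    print(f"Signal: {latest} exceeds UCL of {ucl:.1f}; "
          f"no longer common-cause, evaluate for a CAPA.")
else:
    print(f"{latest} is within control limits (UCL {ucl:.1f}); common-cause.")
```

A point above the upper control limit is a special-cause signal, which is exactly when the "repetitive" criterion above kicks in and a CAPA evaluation becomes warranted.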
In the FDA Form 483 example above, the site would have needed to provide the investigator with scientific rationale justifying no CAPA, with that rationale demonstrating that the error did not fall into any of the three categories mentioned above (systemic, repetitive, high-severity). Without it… no dice.

It is of critical importance that the site grasps the difference between a proactive and a reactive risk assessment. In a modern quality system, proactive risk assessments are available to demonstrate a state of control (validation), and are then utilized as the basis for the reactive risk assessments performed during the deviation exercise. Note that the severity dimension of risk is utilized as the primary factor when evaluating the need for a CAPA.

If a CAPA is then deemed necessary, the site has three primary options, arranged here in order of priority: 1) re-design the hazard out of the workflow, 2) reduce the vulnerability dimension via the addition of technical/engineering solutions, or 3) simplify and streamline. But… where does this leave the addition of procedural controls, you might ask? These are a last resort and should be avoided whenever possible, as they do not reduce vulnerability; a more likely means of reducing mistakes is simplification and streamlining, as outlined in option #3, even though the vulnerability of the hazard occurring remains the same.
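To illustrate that priority order (again, only a hedged sketch; the option labels and feasibility set are placeholders a site would define for itself), the hierarchy can be encoded so that the highest-priority feasible action always wins:

```python
# Hedged sketch of the priority order described above; the option labels
# and the feasibility set are placeholders a site would define for itself.
CAPA_OPTIONS = [
    "re-design the hazard out of the workflow",
    "reduce vulnerability via technical/engineering solutions",
    "simplify and streamline the workflow",
    "add procedural controls (last resort; vulnerability unchanged)",
]

def select_capa_action(feasible: set[str]) -> str:
    """Return the highest-priority option the site judges feasible."""
    for option in CAPA_OPTIONS:
        if option in feasible:
            return option
    raise ValueError("No feasible CAPA action identified")

# Example: re-design is not feasible here, so the engineering control wins.
print(select_capa_action({
    "reduce vulnerability via technical/engineering solutions",
    "simplify and streamline the workflow",
}))
```

Note that procedural controls sit at the bottom of the list by design, matching the last-resort stance above.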
Without a proactive quality system built on comprehensive workflow-level risk assessments, however, it may be impossible to demonstrate such rationale within each deviation record due to limited resources/time; hence a burdensome reactive quality system emerges where CAPAs are always required for any error, regardless of risk, scope, and/or frequency, because the site has no other option available. This is a sad situation, and it makes it nearly impossible for the site to achieve its aim, with heavy waste incurred as folks scramble with re-trainings, more errors, and employee disengagement.
Pete

