The Digital Twin
It was a pleasure attending this year’s PDA Week here in Denver, CO. It’s Tuesday, with one more day to go! The overarching theme of the conference is the future of Quality, and there have been lots of engaging talks on the topic of AI-GMP integration, among other things… I thought I would take a moment here by the coffee stall to write up some thoughts from the session on Digital Twins.

A digital twin uses real-time sensor data, fed through a credible model, to provide a simulated prediction of how a batch might perform in future steps [among other applications], so that a human in the loop/open system [at least for now] can act on the signal and reduce the chances of the product experiencing an OOS result. There is no doubt that this kind of AI-GMP integration will become standard industry practice sooner rather than later.
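To make the mechanism concrete, here is a minimal sketch of that signal loop: in-process sensor readings go through a model that predicts a final quality attribute, and an operator is flagged before an OOS can occur. Every name, threshold, and the “calibration” itself is hypothetical, invented purely for illustration; a real digital twin would use a validated process model, not a toy linear relation.

```python
# Toy digital-twin signal loop (illustrative only).
# All functions, names, and thresholds here are hypothetical.

def predict_final_assay(temps):
    """Stand-in 'credible model': predicts a final assay value from
    in-process temperature readings via a simple linear relation."""
    mean_temp = sum(temps) / len(temps)
    # hypothetical calibration: each degree above 37.0 costs 2.0 assay points
    return 100.0 - 2.0 * max(0.0, mean_temp - 37.0)

def signal_operator(temps, spec_limit=95.0):
    """Return True when the predicted result would fall below spec,
    so a human in the loop can intervene before an OOS occurs."""
    return predict_final_assay(temps) < spec_limit

# In-range readings raise no signal; drifting-high readings do.
print(signal_operator([37.0, 37.1, 36.9]))  # False
print(signal_operator([39.5, 40.0, 40.2]))  # True
```

The point of the sketch is the shape of the workflow, not the model: live data in, prediction out, human acts on the signal.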
I especially liked the talk on risk management of the AI-GMP workflow, presented by Jason Tugman. Jason spoke about the need to focus on data/model governance, otherwise the site may find itself in “risk debt” (an adaptation of the well-known “tech debt”), and “all debts must be paid”… eventually. I found this framing valuable and insightful. Jason went on to say that once the debt becomes too large, the organization becomes “incapable of change”, and the quality system will suffer or collapse under it. I immediately drew a comparison to the financial collapse of 2008 under the weight of subprime mortgage lending.
One use case of the digital twin presented was “predictive maintenance”, which may eventually replace the “preventative maintenance” currently performed on strict schedules. Maintenance may be considered medium criticality, and it is also a GMP predicate rule, hence the need to explain during a regulatory inspection:
How the model operates within the GXP workflow [explainability]
Credibility of the model, including that the output is “reproducible” - otherwise the decision on maintenance timelines will not be defensible.
This last point caught my attention - the need to demonstrate reproducibility. Reproducibility ensures that when the same input data and configuration are applied to the same digital twin model, the same results are obtained. Because maintenance is a predicate rule, there is little room for flexibility [uncertainty] here, in my opinion. We need high confidence in the reproducibility of the model, typically established through studies that evaluate the uncertainty in the model output.
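The definition above (same input data + same configuration → same result) can itself be expressed as a check. Below is a minimal sketch: `run_model` is a hypothetical stand-in for a stochastic digital-twin model, and the check simply runs it twice under identical conditions and asserts identical output. A real reproducibility study would be far more involved, but the core assertion is the same.

```python
# Sketch of a reproducibility check: run the same (hypothetical) model
# twice on the same inputs and configuration, and require identical output.
import random

def run_model(inputs, seed):
    """Stand-in for a stochastic model: seeding the RNG makes the run
    deterministic for a given (inputs, seed) pair."""
    rng = random.Random(seed)
    return [x + rng.gauss(0.0, 0.01) for x in inputs]

inputs = [1.0, 2.0, 3.0]
run_a = run_model(inputs, seed=42)
run_b = run_model(inputs, seed=42)
assert run_a == run_b  # same data + same configuration -> same result
```

Note the design choice the seed represents: any source of nondeterminism (random initialization, thread ordering, library versions) must be pinned down as part of the “configuration” before reproducibility can even be claimed.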
If we go too far into “risk debt” within our “context of use”, we may find it impossible to pull ourselves out of the pit and ultimately see our workflows collapse under the weight. It is critical to start forming your AI-GXP guardrails now. Establish clear governance strategies for your AI models, “trust” your data science folks but always “verify”, and study the principles of risk management so that you can stand with confidence when someone proposes a context of use that would plunge your organization into “risk debt”. Thanks, Jason, for an excellent presentation.
Pete

