2 minute read | January 13, 2025
The FDA has shared its first draft guidance on how sponsors should assess the credibility of artificial intelligence (AI) models to support FDA decisions regarding drug safety, effectiveness or quality.
The agency noted an “exponential increase” in the use of AI in drug development and regulatory submissions in the last several years.
Like the FDA’s draft guidance for AI-enabled medical devices, this guidance envisions a risk-based framework to evaluate a model’s credibility for a particular context of use.
In the context of AI models, the FDA uses "credibility" to mean trust established by collecting "credibility evidence," which supports a particular AI model output for a specific context of use.
The activities used to establish credibility should be commensurate with the model risk and tailored to a specific context of use, the agency said.
The FDA recommends sponsors follow these seven steps to assess an AI model's credibility:
The FDA encourages sponsors to engage early with the agency to:
The FDA is seeking public comments on the draft guidance until April 7.
Want to know more? Our team can help companies engage with the FDA on AI credibility issues and can also help prepare and submit comments. Contact one of the authors to learn more.