The EU AI Act: Key Takeaways for Life Sciences and Digital Health Companies

May 20, 2024

This update is part of our EU AI Act Essentials Series.

The EU Artificial Intelligence Act (AI Act) is a landmark piece of legislation establishing the first comprehensive framework for regulating the use of artificial intelligence, including machine learning. The AI Act aims to set clear requirements and obligations for developers and deployers of AI while accounting for the risks associated with particular industries and use cases.

The AI Act brings important considerations for life sciences and digital health companies leveraging or offering AI/ML tools, for example, to augment drug discovery, as part of companion diagnostics, to streamline patient identification for clinical trials, in connection with medical devices or in vitro diagnostics, or for processing data in support of submissions for regulatory approval.

This note provides an overview of the AI Act’s applicability to use cases in the life sciences and health tech spaces. It also addresses key considerations for companies deploying or leveraging AI/ML tools in these industries. See also our update on who is covered under the AI Act and our update on assessing your obligations and steps to take in the first six months.

Regulating Life Sciences and Health Tech Use Cases Under the AI Act

Under the AI Act’s risk-based approach, only certain use cases are classified as high risk and therefore subject to heightened regulations.

This includes the use of AI in medical devices subject to the Medical Devices Regulation (MDR) and in in vitro diagnostics subject to the In Vitro Diagnostic Medical Devices Regulation (IVDR).

As medical devices incorporating AI systems may implicate risks not addressed by traditional regulatory requirements, the AI Act aims to fill some of those gaps, particularly in high-risk scenarios.

Whether AI systems are classified as high risk largely depends on their intended use: for example, whether they are used for the clinical management of patients (such as diagnosing patients and informing therapeutic decisions) or in precision medicine.

These contexts typically fall under medical device regulation, are subject to third-party conformity assessment, and therefore give rise to the high-risk classification under the AI Act. Conformity assessment under the applicable device regulations may incorporate the requirements of the AI Act, but the AI Act will not itself impose additional requirements for these high-risk uses.

Lower Risk Classifications

Other AI/ML use cases in the life sciences and digital health spaces likely fall under non-medical device, lower risk classifications.

Examples include:

  • Drug discovery applications (e.g., identifying potential targets and therapeutic pathways).
  • Non-clinical research and development (e.g., using AI/ML modeling techniques to augment or replace animal studies).
  • Earlier-stage clinical trials (e.g., analyzing data and modeling future studies).

The European Medicines Agency (EMA) shares this view in its draft guidance on using AI/ML in drug development.

Many AI/ML use cases may not face the heightened scrutiny applied to medical devices and in vitro diagnostics considered “high risk” under the AI Act. Yet these use cases may still be affected by the AI Act’s rules governing general-purpose AI models and systems. From a developer’s perspective, these rules largely involve heightened transparency obligations and risk assessment and mitigation.

Governance Considerations

As AI/ML tools have increasingly made their way into the life sciences and digital health spaces, developers of these tools and the companies deploying them must keep in mind the importance of responsible AI/ML practices. AI/ML users should adopt internal governance systems to ensure they:

  • Obtain rights for data used to train models and adhere to any confidentiality obligations with respect to data sets used for training.
  • Rely on diverse and reliable data sets as bases for training models (particularly when there is higher potential for risk, such as in clinical trials compared to earlier stage drug discovery or non-clinical applications).
  • Use AI/ML tools to augment and automate processes without sidelining human oversight.

At a smaller scale, deployers of AI/ML tools frequently must contend with traditional pharmaceutical or biotechnology companies concerned that their data will be used to train models that competitors may then use.

How can AI/ML tool deployers continue providing services in the life sciences and digital health space despite such concerns? They may consider sandboxing and firewalling data sets for individual engagements, or running data through pre-trained models so that client-specific inputs are not used to train the underlying model. The AI Act does not necessarily speak to these commercial or competitive considerations, adding another element for AI/ML deployers to navigate amidst a burgeoning regulatory environment.

The AI Act does, however, provide for “regulatory sandboxes” where companies may test novel technologies under the supervision of a regulator for agreed-upon periods of time. The aim is to create controlled environments where companies can test and on-ramp technologies while regulators gain insight into how these technologies function prior to more widespread adoption by consumers. 

What’s Next?

While the AI Act and input from the EMA have helped clarify some of the high-risk, high-regulation scenarios and use cases involving AI/ML in the life sciences and digital health spaces, many open questions remain from legal, regulatory, and commercial perspectives.

Developers of AI/ML technologies should examine the extent to which their technologies fall under the AI Act. They may also consider using regulatory sandboxes to ensure their product and service deployment aligns with regulators’ evolving expectations.

Finally, given the increasing importance of AI, stakeholders should monitor legislative developments across jurisdictions as sector-specific laws begin to emerge.

If you have questions about the EU AI Act, reach out to the authors (Julia Apostle, Daniel Kadin, David Sharrow) or other members of Orrick’s AI team.