Transactions in the Age of Artificial Intelligence: 5 Potential Pitfalls to Consider

4 minute read | August.03.2023

Soaring interest and rapid growth in artificial intelligence (AI) have made it a major focus of technology transactions – but the standard acquisition agreement has not kept pace.

AI companies present unique risks to potential buyers that a standard transaction approach may not address, such as those posed by AI’s reliance on data and the dynamic nature of its insights. Buyers of AI companies should consider tailoring standard merger and acquisition agreements to address AI-specific attributes, issues and risks, both to minimize deal risk and to obtain an accurate picture of an AI system’s output and predictive capabilities.

Here are five potential risks of AI transactions that support a tailored approach:

  1. The target may have trained its AI using customer data, third-party licensed data or other data sets that may create material risks or impair the AI’s value.
    • Customer contracts may prohibit use of customer data as training data or restrict the use of AI trained on their data on a field or sector basis.
    • Third-party training data in-license agreements may contain use and field restrictions that may have already been breached, could be triggered by the proposed transaction, or could limit the acquiror’s use of such data and models built using such data.
    • Any training data use restrictions, whether contractual or regulatory, could impair the value of an AI company’s technology, limit the types of algorithms and technology developed from such training data, or limit how an AI company commercializes its technology.
    • AI-focused diligence of these training data flows, contracts and licenses can help identify risks early.
  2. Training data ownership may be uncertain, creating litigation risks.
    • Determining the ownership of training data is a complex and uncertain exercise. Training data used but not owned by an AI company may be subject to third-party ownership claims, infringement claims, privacy and regulatory issues, or additional liabilities.
    • Any of the above could jeopardize the target’s current operations, create litigation risks, and jeopardize ownership of any algorithms and models built using such training data.
  3. Standard representations on ownership of IP and IP improvements may be insufficient.
    • Output data generated by AI trained from third-party supplied training data may be vulnerable to ownership claims by data providers.
    • A third-party data provider may require that it own modifications, improvements or derivations of its data, resulting in competing claims to ownership of the target’s algorithms and to rights to use an AI model’s output data to further train and improve those algorithms.
    • Statutory protections for AI-generated intellectual property are uncertain. Buyers should determine to what extent material intellectual property was generated by AI and assess the risk of validity and enforceability challenges.
    • Agreements that define the rights applied to the unique parts of an AI system can help mitigate these issues. Where ambiguity exists, companies can segregate data sets and algorithms to minimize cross-contamination risks.
  4. Inadequate confidentiality or exclusivity provisions may expose an AI system’s training data inputs and material technologies to third-party copycats.
    • If training data and AI technologies are not exclusive, confidential or proprietary to an AI company, competitors may use the same data and technologies to build similar, competing or identical models. This is particularly the case with AI models developed using open-source or publicly available data sets and machine learning processes.
    • Companies should understand the sources of the data and algorithms used to build an AI system and ideally maintain barriers to entry around those sources.
  5. The value of dynamic AI models may atrophy without dynamic retraining and updated data feeds.
    • Buyers should consider whether a target company’s AI models are static or require dynamic retraining. Owning an AI algorithm may be of little value if post-termination data lifecycle issues are not addressed, especially when continued access to such data is required to maintain the algorithm’s value.
    • Buyers should focus their diligence and contractual protection efforts on ensuring continued availability of material data feeds post-acquisition.

In addition to the above, legislative protection in the AI space has yet to fully mature. Until it does, companies should protect their IP, data, algorithms and models by ensuring their transactions and agreements address the unique risks presented by the use and ownership of training data, AI-based technology and any output data generated by such technology.

Want to learn more? Contact one of the authors.