9-minute read | September 18, 2023
The surge in use and development of AI systems and products, particularly generative AI, has increased interest in investing in and acquiring companies that offer AI solutions or that have integrated AI into their operations.
The EU is close to finalizing a law to regulate AI, and many countries are considering their own legislation. The outcome of lawsuits against generative AI providers also may impact nascent business models and product development strategies.
Here are 10 steps to consider when investigating the acquisition of a company that uses or develops AI technologies. These issues span the due diligence stage through preparation of the sale and purchase agreement (SPA) and related documents.
Companies should also keep in mind several pitfalls unique to tech transactions involving AI, including five potential risks that argue for a tailored approach going beyond standard acquisition agreements.
Determine whether a company’s products actually constitute AI. Not all data processing or analytics solutions qualify as “artificial intelligence.” For instance, product recommender systems and chatbots can be developed without artificial intelligence. The target should be able to explain why a product qualifies as AI.
Given the rapid evolution of AI, companies also should evaluate the long-term viability of a target’s products and the product roadmap. Consider making this assessment from both a strategic and a legal perspective.
Assess the skills and retention levels of data scientists, engineers, researchers and others as well as the company’s ability to continue to develop the technology.
Retaining key employees is crucial for an acquired company’s success. Consider retention bonuses, equity and stock options, clear career paths or innovation and research opportunities. The acquirer should collaborate with the acquired team to realize a vision or accelerate product adoption and usage.
Review employment agreements with a focus on confidentiality and noncompete obligations. We frequently see employment agreements that impose insufficiently robust confidentiality obligations on key employees. It may be advisable to amend these agreements as a pre- or post-closing obligation, given that in many jurisdictions the algorithms that underpin AI systems are protected as trade secrets rather than through intellectual property rights.
Ask these questions to identify risks at each stage of the AI life cycle:
Some of these issues will be similar to those arising in any software acquisition or investment deal – in relation to open-source software, for instance. Yet protecting AI innovation requires a hybrid strategy involving copyright, patent, third-party licenses, trade secrets and database protections. We recommend asking whether the company uses AI tools to generate code and whether it has a process to identify machine-generated code.
The importance of data protection compliance in M&A transactions has increased markedly over the past few years. The surge of interest in AI-related deals will only intensify that focus.
If an AI product’s development and/or use involves personal data, it will be subject to data protection laws, possibly in multiple jurisdictions. Sanctions for processing illegally obtained personal data can include deletion of entire databases and even algorithms (the latter measure is known as ‘algorithmic disgorgement’ and has been imposed by the U.S. Federal Trade Commission (FTC) on more than one occasion).

Determine early on whether the target company’s AI systems – licensed in or developed in-house – process personal data. Some questions to be addressed include:
We expect increased data protection regulatory scrutiny and enforcement actions in connection with the use of AI systems in the short and long term. Any risks identified may impact valuation. Rectifying them (if possible) may constitute a pre-closing obligation.
Does the target have an AI risk management framework, such as the one proposed by the National Institute of Standards and Technology? Does it adhere to an ethical framework like the one developed by the EU High Level Expert Group on Trustworthy AI?
Although not yet a legal requirement (this will likely change with the adoption of Europe’s AI Act), following these and similar recommendations indicates that the target has taken steps to mitigate AI-specific risks such as security vulnerabilities, bias and discrimination, and that it has a compliance culture.
Consider these issues:
AI systems pose unique cybersecurity risks (e.g., in the form of software vulnerabilities or susceptibility to attacks). Request and review security audits and information on risk mitigation measures the target company has adopted.
Similarly, request reports of assessments in relation to AI tools used or developed by the target company tied to accuracy, reliability, robustness and bias testing. A technical specialist should review these reports.
Finally, consider whether any sector-specific laws apply to the use or development of AI tools by the target company (for example, rules now in force in New York City require employers and employment agencies to adopt measures related to Automated Employment Decision Tools).
When preparing transaction documents
In an increasing number of jurisdictions, transferring ownership of artificial intelligence technologies may trigger scrutiny. Companies should plan for this, given the potential that an acquirer and/or target may need prior authorization before implementing the deal.
Foreign Direct Investment (FDI) Review
Regulatory scrutiny of foreign investments has increased around the world in recent years. Authorities may see a company with AI capabilities as a business with national security implications, particularly if the technology can be used for defense purposes.
Analyze potential political and regulatory implications early in the process – the Committee on Foreign Investment in the United States (CFIUS) is reviewing a record number of transactions for national security risks.
Because control over the data required to train AI systems can become concentrated, a target’s use of data falls within the scope of antitrust analysis. Agencies may consider whether the data a target company collects could raise anti-competitive concerns once added to the acquirer’s data pool.
Furthermore, antitrust agencies in the United States and Europe are aggressively challenging so-called “killer” acquisitions of nascent competitors. Any move to acquire an AI company by one of its competitors may at the very least raise questions from regulators.
Transacting parties should have a clear understanding of what remedies they would be willing to offer, if any, should authorities challenge a transaction.
They also should anticipate that broader inquiries are likely to increase transaction costs and the time to closing.
There is some debate as to whether AI-specific representations and warranties are necessary – risks may be covered by more broadly applicable guarantees addressing intellectual property, IT, data protection, cybersecurity, material contracts and compliance.
In our view, for transactions where the AI is of strategic importance to the target company, AI-specific representations and warranties should be included given the specific nature of the risks created by AI technologies. That will help focus the target on responding to the buyer’s due diligence inquiries. It also can help surface latent risks.
Warranty and indemnity (W&I) insurance policies typically cover the most important representations and warranties, including those relating to intellectual property ownership and, subject to some scrutiny by underwriters, freedom to operate, data privacy and security, and compliance with employment laws. However, this is an evolving area: practices may shift, particularly in response to AI-specific risks covered by what remain non-standard representations and warranties.