Founder Series: Leveraging AI for Business Growth in the UK


9 minute read | September 28, 2023

Orrick's Founder Series offers monthly top tips for UK startups on key considerations at each stage of their lifecycle, from incorporating a company through to possible exit strategies. The Series is written by members of our market-leading London Technology Companies Group (TCG), with contributions from other practice members. Our Band 1 ranked London TCG team closed over 320 growth financings and tech M&A deals totalling US$9.76bn in 2022 and has dominated the European venture capital tech market for 30 consecutive quarters (PitchBook, Q2 2023). View previous series instalments here.

The growth in the development and use of artificial intelligence ("AI") has been rapid; according to the UK government, one in six companies now use AI to support their businesses, while the number of AI companies has increased by 668% since 2013. However, while AI provides a number of solutions, it can also introduce risks, including compliance risks, in a legal landscape that always lags behind technological innovation.

As legal and regulatory decision-makers catch up with the quickening pace of AI supply and demand, companies developing and using AI should consider not just the law as it stands today, but the law they will have to comply with in the future.

In the fifteenth instalment of Orrick's Founder Series, our Cyber, Privacy & Data Innovation and Technology Transactions teams offer key guidance on what UK founders should look out for when developing or using AI in the UK.

The Creation Stage

Acquiring data from third parties and using it to train models

Ensure that the commercial terms of any agreement you enter into will not restrict future growth.

Founders should consider whether a third party will require them to share insights derived from acquired data, and whether the company is limited to using the acquired data for a pre-defined purpose or has more flexibility.

When acquiring data from multiple third-party sources, make sure you track which data you have acquired from which third party. That will enable you to separate data sets as needed.
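
By way of illustration only, here is a minimal Python sketch of provenance tagging, assuming a simple in-memory pipeline (all names are hypothetical): each record carries its supplier and the terms it was acquired under, so one supplier's data can be isolated or removed later.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SourcedRecord:
    """A training record tagged with its provenance."""
    data: str            # the record content itself
    supplier: str        # which third party it came from
    licence: str         # the agreement it was acquired under
    permitted_use: str   # e.g. "model-training-only"

def only_from(dataset: list[SourcedRecord], supplier: str) -> list[SourcedRecord]:
    """Isolate one supplier's records, e.g. to audit them."""
    return [r for r in dataset if r.supplier == supplier]

def excluding(dataset: list[SourcedRecord], supplier: str) -> list[SourcedRecord]:
    """Build a clean training set with one supplier's data removed,
    e.g. if that agreement ends or its permitted use narrows."""
    return [r for r in dataset if r.supplier != supplier]
```

Tagging provenance at ingestion is far cheaper than trying to reconstruct it after data sets have been commingled.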

Additionally, as with any data set you acquire, ensure the data supplier can grant you the rights to use the data, and that your use will not breach anyone else’s rights, including intellectual property and privacy rights. In particular, ensure that your use of personal data in the training set meets your obligations under the agreement and applicable data protection laws.

Responsibility for lawful use of personal data will ultimately lie with you and your startup, so check where the data came from, how it was collected and what guarantees the provider can offer to help you mitigate risk.

Training on scraped data

The past year has seen an increase in cases brought against companies that trained AI tools using data scraped from the internet.

When scraping data from the internet, ensure that your use of that data does not infringe third-party rights. UK legislation includes certain exceptions which permit the use of copyright-protected works without the permission of the copyright holder, but these primarily relate to use for non-commercial purposes.

You should also consider the quality of the scraped data and whether it contains data that could lead to harmful content, bias or misinformation (see "AI Outputs" section below for additional tips).
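
For instance, a first-pass hygiene filter over scraped records might look like the following Python sketch (the denylist and flagged terms are placeholder assumptions; a real pipeline would use proper classifiers and rights checks):

```python
# Illustrative only: drop duplicates, denylisted sources and obviously
# sensitive material from scraped records before training.
DENYLISTED_DOMAINS = {"paywalled-news.example"}        # hypothetical
FLAGGED_TERMS = ("confidential", "do not distribute")  # hypothetical

def screen(records: list[dict]) -> list[dict]:
    seen, kept = set(), []
    for record in records:
        text = record.get("text", "").strip()
        if not text or text in seen:                   # empties and duplicates
            continue
        if record.get("source_domain") in DENYLISTED_DOMAINS:
            continue
        if any(term in text.lower() for term in FLAGGED_TERMS):
            continue                                   # or route to human review
        seen.add(text)
        kept.append(record)
    return kept
```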

Issues Arising From Third-Party AI Tools

The importance of licence terms

Licence terms for access to third-party AI tools vary widely. Companies should review terms before implementing AI tools. If your use of an AI tool and its outputs relates to core commercial operations, you may need to secure ownership of those outputs. Some licence terms only grant you a limited licence to use outputs, with ownership ultimately sitting with the AI tool provider.

Remain cautious when using tools provided on open-source licence terms, as they may restrict your ability to implement certain code in commercial products (see our fourth instalment, Protecting Your Ideas, for more tips on this). The licence terms usually set out your rights to use the tool and any associated restrictions. They may also include additional provider-friendly terms, such as a grant to the AI tool provider of rights over the information/data you submit.

As many AI tools are provided on an ‘as is’ basis, you may also want to include service availability and accuracy protections in your contract with the tool provider if the tool will be a critical part of your product or infrastructure.

Other key terms to consider include IP ownership, allocating risk in the event of a third-party infringement claim, confidentiality and use of your inputs as training data.

Preventing trade secret / information leaks

Many licence terms for AI tools allow the provider to use your inputs to train their models. Without a clear internal policy setting out which AI tools can be used for what purpose, and what information your staff can feed into third-party AI tools, you run the risk of staff inadvertently submitting your business’s, your clients’ or your suppliers’ trade secrets or confidential information to the provider for the benefit of the provider and its users. Staff may also submit other information that triggers data protection and IP concerns or breaches your confidentiality obligations to other parties (e.g. to your customers). Implementing generative AI policies and training to make your staff aware of these risks is therefore crucial.
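
As a purely illustrative backstop to such a policy, a business might gate outbound prompts on an approved-tool allowlist and a set of confidentiality markers. A minimal Python sketch, with hypothetical tool names and patterns:

```python
import re

APPROVED_TOOLS = {"approved-internal-assistant"}  # hypothetical allowlist

# Hypothetical markers of material that must not reach a third-party tool.
CONFIDENTIAL_PATTERNS = [
    re.compile(r"(?i)\bconfidential\b"),
    re.compile(r"(?i)\btrade secret\b"),
    re.compile(r"\bCLIENT-\d{6}\b"),  # e.g. an internal client-reference format
]

def may_submit(tool: str, prompt: str) -> bool:
    """Allow a prompt out only if the tool is approved and nothing is flagged."""
    if tool not in APPROVED_TOOLS:
        return False
    return not any(p.search(prompt) for p in CONFIDENTIAL_PATTERNS)
```

Pattern matching of this kind only catches the obvious cases; policy and staff training remain the primary control.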

For further information on what questions you should ask your generative AI tool provider in a US context, please see our article on 8 Intellectual Property and Commercial Questions to Ask Your Generative AI Tool Provider.

Understand what data the tool has been trained on

Understand what data third-party AI tools have been trained on and where the provider obtained that data. While some tools have been trained on data properly authorised for such training, others may have been trained on works that were unlawfully accessed (see "The Creation Stage" section above).

Your use of these AI tools, and any outputs you create using them, may risk attracting infringement claims from interested third parties and enforcement action from data protection regulators. These claims can be costly and risk reputational damage. It is therefore crucial to determine if the licence terms offer warranty protection regarding the source of training data and/or related indemnity coverage.

Cybersecurity

AI applications pose unique risks in addition to the security risks inherent in increasing the number of applications into which we place commercial and personal data for processing. Code generated by AI can contain vulnerabilities, owing to the technology’s limited ability to identify and correct its own flaws. The complexity of the algorithms within generative AI applications can also make it difficult for software engineers to identify and patch security flaws. While threat prevention and detection continue to play catch-up, companies should remain vigilant about how they use generative AI and what data they feed into AI applications, to reduce the risk of a security incident.
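
One concrete control, sketched below, is to gate AI-generated code behind a static security scanner before it is merged. This example wraps Bandit, an open-source security linter for Python; the directory path is illustrative, and any scanner appropriate to your stack could be substituted:

```python
import subprocess
import sys

def security_gate(path: str) -> None:
    """Fail the build if the scanner reports findings in AI-generated code."""
    # Bandit exits non-zero when it reports issues, so a CI job can block
    # the merge. Human code review should still follow a clean scan.
    result = subprocess.run(["bandit", "-r", path], capture_output=True, text=True)
    if result.returncode != 0:
        print(result.stdout)
        sys.exit("Security findings in AI-generated code; review before merging.")

security_gate("generated/")  # illustrative path for AI-authored modules
```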

AI Outputs

Is it risky to rely on AI?

Hallucinations (seemingly reliable AI tool responses that are not justified by the training data), misinformation and deepfakes are common risks when using or deploying AI tools. These risks grow as AI increasingly informs decision-making and companies sell outputs or otherwise put them in the public domain. Disseminating incorrect or misleading AI-generated content could damage your commercial relationships and attract negative media attention. You could also be responsible for any misinformation, injury or distress caused by an unreliable or incorrect AI output (see the "Where does liability lie" section below).

Further, questions around the reliability of AI have led to greater legislative and regulatory scrutiny. Those developing and deploying AI should consider their legal obligations and be aware that fines may be imposed for non-compliance. For example, the EU’s proposed AI Act would require AI systems that manipulate audio or video content to comply with transparency requirements, including labelling content that has been artificially generated or manipulated. Non-compliance may trigger GDPR-level fines.

Automated decision-making, bias and discrimination

Bias and discrimination have been observed in AI models used for automated decision-making. This can itself result in negative commercial outcomes for the business, but it can also give rise to legal liability.

Biased and discriminatory employment decisions based on AI can result in claims in the Employment Tribunal. Under the UK and EU GDPR, in many circumstances a person has the right not to be subject to a decision with legal or similarly significant effects that is based solely on automated processing. Human oversight of those decisions is necessary to comply with the law and should help reduce bias and discriminatory effects. The proposed EU AI Act would also mandate human oversight of AI systems where automated decisions would affect a person’s fundamental rights.
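
In engineering terms, that oversight requirement often reduces to a routing rule: decisions with legal or similarly significant effects go to a human reviewer rather than being auto-applied. A minimal Python sketch, with hypothetical names:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    subject_id: str
    outcome: str          # e.g. "reject-application"
    significant: bool     # legal or similarly significant effect?
    model_score: float

def apply_decision(decision: Decision) -> str:
    """Auto-apply only low-impact decisions; queue the rest for a human."""
    if decision.significant:
        return queue_for_human_review(decision)
    return execute(decision)

def queue_for_human_review(decision: Decision) -> str:
    # A real system would create a review task carrying the model's inputs
    # and score, so the reviewer can meaningfully assess and override it.
    return f"queued-for-review:{decision.subject_id}"

def execute(decision: Decision) -> str:
    return f"applied:{decision.outcome}"
```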

Who owns AI-generated outputs?

Ownership of AI-generated works remains a nascent and relatively untested area of IP law in the UK. UK copyright legislation provides for “computer-generated” works to be owned by the “person by whom the arrangements necessary for the creation of the work are undertaken.” Debate persists about whether that refers to the developer who created the AI tool or the user who directed the tool to generate the work. Parties should clarify ownership of outputs up-front; that is usually covered in the tool’s licence terms. Although customer ownership of outputs is becoming a popular market position, a licence to use the outputs may be sufficient for your specific use case.

Where does liability lie – with the developer or user?

In most cases, the question of liability will come back to the licence terms (see "The importance of licence terms" section above). Bear in mind that many AI tools’ standard terms will seek to exclude most liabilities and push risk onto users. There are also some liabilities you cannot contract out of (e.g. liability for death or personal injury, which can include distress, and your obligations under statutes such as the GDPR). Consider the risks of using AI at an early stage. Where standard terms and conditions cannot be negotiated, consider what your exposure may be for AI-generated output.

Our Cyber, Privacy & Data Innovation team can assist you at every stage of your cybersecurity process, from preparation through incident response to post-incident review. They kick off any advice with an introductory meeting to learn about your business and objectives, and can provide strategic advice on:

  • Tackling cybersecurity expectations and requirements from investors, customers, and regulators.
  • Key cybersecurity risk areas and controls.
  • Cybersecurity insurance.
  • Industry frameworks (e.g., NIST, ISO, CIS and SIG) and certifications (e.g., SOC 1, SOC 2 and HITRUST).

If you would like more details on any of the issues above, please contact Kelly Hagedorn.