Orrick Legal Ninja Snapshots
18 minute read | April.08.2026
As artificial intelligence reshapes industries, venture capital documentation has to evolve quickly to address new risks and opportunities. The latest iterations of both the NVCA Model Documents and BVCA standard forms introduced sophisticated AI-related representations and warranties that reflect investors' growing focus on AI governance, data usage and regulatory compliance.
For German founders raising capital from international investors – or simply preparing for future rounds and that desired big-ticket M&A exit – understanding these new expectations isn't optional.
Drawing on our experience advising on both sides of the Atlantic, including direct involvement in the BVCA working group that shaped the latest UK standards, this two-part mini-series provides German entrepreneurs with practical insights into navigating the new AI representation landscape. In the Anglo-Saxon legal world, people draw an astonishingly fine line between "representations" and "warranties" – a distinction that tends to cause puzzled looks in German contract negotiations and usually ends up tossed into the same synonym bucket during translation. Since we don’t intend to venture into the murky depths of Common Law here, and German practice typically lumps both terms together under "guarantees" anyway, we’ll simply refer to them as "warranties" for the sake of simplicity.
We'll decode what these clauses actually mean, why investors care and how to position your company for success in an increasingly AI-conscious funding environment.
This Legal Ninja Snapshot …
This two-part mini-series is structured as follows:
Part 1 presents the new AI warranties in the BVCA and NVCA model documentation and how they are redefining venture capital due diligence
Part 2 gives German founders guidance on how to future-proof their start-ups for US investors' and buyers' due diligence
…and Much More in OLNS#9 and OLNS#13
For comprehensive background information on raising capital from investors and M&A processes, we refer you to our OLNS#9 – Venture Capital Deals in Germany and our OLNS #13 – M&A in German Tech.
The artificial intelligence era isn't just changing how start-ups operate – it's rewriting the playbook for venture capital and M&A transactions. AI-native companies are reaching unicorn status at breakneck speed, building their entire business models around machine learning capabilities. While we're not (yet) in a world where a chatbot-assisted solo founder can conjure up a billion-dollar company overnight, the impact of AI on the start-up landscape is undeniable.
This transformation touches every corner of start-up life. Off-the-shelf AI tools let founders experiment and iterate faster than ever, but a real competitive edge seems to come from building proprietary AI systems using carefully curated datasets. With that, questions about data provenance, model training and algorithmic transparency move from the engineering floor to the boardroom (okay, that was a bit cheesy).
But it is true – here's where things get interesting for investors and buyers: the old diligence checklists no longer cut it. Now, investors and acquirers are digging deeper, scrutinizing how start-ups manage AI-related risks, from data access and model explainability to ethical use and regulatory compliance. Whether you're raising your next round or prepping for an exit, your AI strategy and governance might well be front and center in the evaluation process. We’ve seen that, as this new wave of AI deals began, investors and buyers often struggled to accurately price the market – leading in some instances to complex price adjustments, convoluted anti-dilution provisions and layered liquidation preferences. Today, with the bar for due diligence and warranty coverage set higher than before, founders need to watch out for new pitfalls that can arise from these more rigorous processes.
For founders trying to attract investors, this means you will have to understand the risks they are concerned about and address them early on in your growth trajectory.
We find it helpful to break down the legal and regulatory landscape into three interconnected clusters (or, if you prefer, buckets) that capture the essence of AI-related risk: training corpus integrity, system integrity and output integrity.
Of course, this cluster framework is a simplification – AI governance is a tangle of edge cases, overlapping jurisdictions and evolving standards that resist neat categorization. But thinking in terms of training corpus, system and output integrity gives you a practical roadmap for navigating the maze of legal requirements and risk management.
With this structure in mind, let's look at how recent AI developments have changed the way investors and buyers approach warranties in VC and M&A deals – and what that might mean for the future.
Venture capital investors – and even more so, buyers in M&A transactions – aren't just looking at your pitch deck and codebase. They want legal assurances that your business is what you say it is, especially when it comes to AI. Enter the world of warranties. Without getting lost in legal jargon, here's the gist: warranties are formal promises in your transaction documents that certain facts about your company are true, and if they turn out not to be, you're on the hook to fix it (or pay for it). They're essential in venture capital and M&A deals because they give investors and buyers confidence that you've done your homework and a legal remedy if you haven't. When they're well-crafted, warranties build trust, clarify obligations and set clear liability limits.
If you're drafting or reviewing warranties, it's smart to start with the model documents from the industry's heavyweights. In the US, the National Venture Capital Association (NVCA) has set the gold standard since 1973 with its widely used "NVCA Model Legal Documents." These are the go-to templates for American VC deals and often serve as a reference point for international investors. If you want to learn more about the NVCA model docs and how they compare to German market practices, our OLNS#11 – Bridging the Pond is a great starting point.
Across the Pond, the British Venture Capital Association (BVCA) plays a similar role in the UK. While BVCA templates aren't quite as universally adopted as their NVCA counterparts and remain subject to ongoing adjustment and negotiation, they've gained significant traction and are increasingly influential in cross-border deals.
Both the NVCA and BVCA have recently updated their model documents to address the growing importance of AI. The NVCA's Stock Purchase Agreement and the BVCA's Subscription Agreement now include comprehensive AI-specific warranties – essentially, checklists and toolkits for tackling AI risks in VC transactions. For M&A, AI-related representations have equally become market standard and are often more detailed and robust than the warranties in the NVCA and BVCA forms for earlier-stage companies. Buyers often differentiate between AI reps for target companies using AI (deployers) and targets offering AI products or services (developers). The representations roughly follow the clusters of AI risk described earlier, and you should expect scheduling obligations around training data sources, related contracts and both proprietary and third-party AI systems. To avoid a scheduling scramble in the middle of the transaction (that is, if you are the target), it is advisable to implement an internal tracking system from the get-go.
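What such an internal tracking system looks like is up to each company; as a purely illustrative sketch, even a lightweight register that records one row per AI asset can make disclosure schedules much faster to assemble. The field names below (data_sources, license_or_contract, etc.) are our own assumptions for illustration – they are not prescribed by the NVCA or BVCA forms and should be adapted to your actual diligence checklist.

```python
import csv
from dataclasses import dataclass, field, asdict

# Hypothetical schema for an internal AI-asset register. The fields mirror
# the typical scheduling topics (training data provenance, related contracts,
# proprietary vs. third-party systems); adjust them to your own needs.
@dataclass
class AIAsset:
    name: str
    kind: str                  # e.g. "proprietary_model", "third_party_tool", "dataset"
    data_sources: list = field(default_factory=list)  # training data provenance
    license_or_contract: str = ""                     # license terms / contract reference

class AIAssetRegister:
    """Keeps one row per AI asset so a disclosure schedule can be exported on demand."""

    def __init__(self) -> None:
        self._assets: list = []

    def add(self, asset: AIAsset) -> None:
        self._assets.append(asset)

    def to_rows(self) -> list:
        # Flatten each dataclass into a dict -- one schedule row per asset.
        return [asdict(a) for a in self._assets]

    def export_csv(self, path: str) -> None:
        rows = self.to_rows()
        with open(path, "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=rows[0].keys())
            writer.writeheader()
            writer.writerows(rows)

# Example entries (all names and references are made up):
register = AIAssetRegister()
register.add(AIAsset("churn-predictor-v2", "proprietary_model",
                     data_sources=["licensed CRM export", "public benchmark set"],
                     license_or_contract="DPA-2024-017"))
register.add(AIAsset("gpt-based-support-bot", "third_party_tool",
                     license_or_contract="vendor ToS, reviewed 2025-11"))
print(len(register.to_rows()))  # prints 2
```

The point is not the tooling (a spreadsheet works just as well) but the discipline: capturing provenance and contract references at the moment an AI asset is adopted, rather than reconstructing them under deal pressure.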
In Germany, the legal equivalent to the US Stock Purchase Agreement or the English Subscription Agreement is the Investment Agreement, which typically contains a broad set of warranties given by the company and (here German market standards still annoyingly differ from international standards) the founders. In order to understand how the German market approach to AI-related warranties will likely develop, it is helpful to understand the NVCA and BVCA approaches as they will be the reference point for Anglo-American investors and their legal counsels when pulling up German investment documents.
As AI becomes a core value driver (and risk factor) for start-ups, both the NVCA and BVCA have updated their model documents to help investors get comfortable with AI-related exposures. But how do their approaches compare, and which is more company- or investor-friendly? Let's break it down with our three-cluster framework: training corpus, system and output integrity.
The NVCA Stock Purchase Agreement takes a focused approach, zeroing in on the use of generative AI tools – think large language models and image generators. The reps are relatively general: they require unconditional guarantees that generative AI tools are used "in [material] compliance with the applicable license terms, consents, agreements and laws", with little room for knowledge qualifiers or carve-outs.
The BVCA, by contrast, casts a much wider net. Its warranties explicitly cover all AI and machine learning technologies, not just generative models. While the BVCA language is broader and there is more focus on processes (seems to be a European thing…), it is also in some instances more pragmatic: for example, it explicitly contemplates the option of warranting (material) compliance with AI legislation only "as far as the Company is aware.” In plain English, you're only on the hook for what you actually know (or should know), and you’re judged on the fundamental compliance areas that actually matter – a softer landing for founders. It is worth noting that the new-style BVCA contains a lot more square brackets than prior iterations – a positive in the sense that some of this softer, more founder-friendly language is accessible – but not a guarantee that it won’t be up for negotiation by investors.
NVCA warranties do not explicitly address training corpus integrity and issues around the legality of the underlying data collection. However, this should not be mistaken for a free pass because the associated risks are still covered through the NVCA’s more general warranties such as on compliance with laws and non-infringement of third-party IP rights.
The BVCA, meanwhile, goes into more detail. In particular, the BVCA warranties are explicit about the legality of data collection (including web scraping), demand robust processes for disclosure and audit regarding data provenance and require compliance with license terms for all third-party data. In short: the BVCA wants to know not just what your training corpus is made of, but exactly how you made it. This aligns with the BVCA's new approach to disclosure – they want to ensure information flow from company to investor (the other purpose of warranties) as opposed to solely having warranties that deal with risk allocation. Furthermore, the BVCA explicitly requires that AI systems do not produce inaccurate or biased results, which is tied to the quality of the training corpus.
On system integrity, the NVCA requires that generative AI tools are used in material compliance with all relevant license terms, consents, agreements and laws. This is a baseline check to ensure your AI systems aren't ticking legal time bombs.
The BVCA, however, layers on more detail. It explicitly references the EU AI Act and data protection rules and goes further by requiring that AI systems do not pose a risk or cause harm. The BVCA also demands that companies have documentation and governance policies in place to ensure ethical use, transparency, and readiness for regulatory scrutiny – including the ability to provide human-readable explanations for AI-driven decisions if a regulator comes knocking. This is a clear nod to the EU's evolving regulatory landscape and the growing expectation that companies can "show their work" when it comes to AI – though it is interesting to see that investors' expectations regarding "standard" policies have yet to converge.
When it comes to output integrity, the NVCA and BVCA diverge. NVCA reps require confirmation that generative AI tools haven't been used to develop any material IP in a way that could materially compromise your ownership or rights therein – an explicit nod to IP ownership risks. The BVCA doesn't address IP in its AI reps as directly, but it does require general compliance with applicable laws, which would include laws covering IP rights, and covers IP ownership in its more general IP reps. Notably, the BVCA puts more emphasis on the fairness, accuracy and non-harmful nature of AI outputs, reflecting a broader European focus on ethical and societal impacts.
In summary, the BVCA warranties are more specific than the NVCA's. The BVCA approach requires companies to be more proactive, with explicit documentation and governance obligations, and applies to all AI tools, not just generative models. The NVCA warranties, while stricter in their guarantees, are narrower in focus. For founders, the BVCA approach means more paperwork and processing, but also more flexibility (thanks to optional knowledge qualifiers). For investors, the NVCA warranties offer more certainty – at least for the specific risks they cover – while the BVCA warranties provide a more holistic but potentially softer risk allocation.
When looking at the warranties in both the NVCA and BVCA model form documents, one might wonder what's really new here. Don’t these warranties mainly cover "old" risks using new packaging with some AI terminology sprinkled on top?
To some extent, the answer is "yes" – and yet it's advisable to prepare for specific AI-related warranties. There's certainly overlap between the AI-specific warranties and the catalogues dealing with data privacy, cybersecurity and IP rights that have been standard for years.
But don't underestimate the signaling effect of providing potential investors or buyers with specific warranties covering AI. It signals awareness and understanding of AI-related risks. Holding yourself out as an AI startup (and which startup isn't an AI startup these days?) comes with the expectation that you can be more specific in your contractual promises around AI in an acquisition, especially now that the market-defining NVCA and BVCA templates provide specific AI-related warranties.
Besides this signal, there's also a solid practical reason for AI-specific warranties: AI actually enhances and transforms traditional privacy, IP and other risks in ways that merit specific attention and targeted disclosures.
AI systems are data-hungry by design, requiring massive datasets often obtained through web scraping, API calls or user-generated content aggregation. This creates risks for personal information that go beyond traditional data processing.
The BVCA reps explicitly address these challenges by requiring robust data collection processes and transparency about data provenance, reflecting the reality that AI models can inadvertently ingest personal information, copyright protected or otherwise proprietary information at scale. This isn’t just a theoretical risk: Both IP rights holders and regulators are increasingly scrutinizing how AI training data is collected, and a single misstep can lead to regulatory action, reputational damage, IP infringement claims or investor concern.
AI systems introduce new attack vectors and vulnerabilities that traditional cybersecurity measures weren't designed to handle.
The complexity of deep learning systems makes these vulnerabilities particularly difficult to anticipate and mitigate, which is why the BVCA's emphasis on documentation and governance policies isn't just bureaucratic box-ticking – it's essential for risk management.
This is where AI-specific reps become truly essential, as a range of potential risks needs to be addressed.
Here, the legal framework is rapidly evolving.
The NVCA's explicit focus on ensuring AI hasn't compromised ownership or rights reflects these real uncertainties. Unlike traditional IP warranties that assume human creation, AI-specific warranties must address the fundamental question of whether protectable rights exist at all.
Unlike human decision-making, AI operates through mathematical predictions based on patterns in historical data. This creates unique discrimination risks.
The algorithm doesn't "see" discrimination – it simply optimizes for patterns that may be inherently unfair. This is why specific provisions addressing algorithmic fairness aren't just about compliance; they're about protecting the company's reputation, market position and social license to operate.