5 minute read | January 25, 2024
Whether or not a company formally allows it, more employees are using generative AI (Gen AI) tools for everyday tasks at work.
Companies cannot simply rely on legacy internal policies and practices to address the risks Gen AI tools present. Instead, they should evaluate how they use and plan to use Gen AI and develop or update policies to address new and enhanced risks.
Given the variety of legal, technical and business issues involved, partnering with outside advisors to discuss and document strategic goals and internal requirements can help companies stay current on risks and learn how peers are approaching a Gen AI-powered future.
The new Gen AI Policy Builder makes it easy for a company to create a comprehensive internal policy to govern current and future use of Gen AI tools. By answering a few questions, a company can receive a human-authored draft policy designed to reduce risk at a time of rapid technological change. The Gen AI Policy Builder also helps companies adapt to new regulations and technical developments.
Building on Orrick’s extensive leadership in advising developers and users of artificial intelligence technologies, the firm’s Gen AI Policy Builder enables a company to engage with Gen AI legal issues, accelerate internal discussions and benefit from the collective, cross-practice experience of the Orrick team.
Gen AI holds transformative potential, but it also introduces risks. Here are just a few risks companies may face – along with a look at how the Gen AI Policy Builder can help reduce those risks:
Risk: Inaccurate or Unreliable Output

Details: Gen AI tools are susceptible to “hallucinations” and lack the practical human judgment needed to verify that their outputs are ready for company use and reliance. Without adequate guardrails, companies could find themselves unintentionally relying on unvetted information internally or incurring brand damage if they unwittingly distribute false information.
Mitigation Approach: The Gen AI Policy Builder helps companies formulate an initial set of acceptable and prohibited internal uses based on the company’s indicated use cases and risk tolerance.
Risk: Unsanctioned Employee Use

Details: Employees can access Gen AI tools online at little or no cost, often without company knowledge. If employees are using these tools for the everyday tasks of their roles (especially if they generate intellectual property or create public-facing content), they could undermine a company’s intellectual property strategy, cause breaches of confidential information or increase the likelihood of litigation, consumer complaints or regulatory inquiries.
Mitigation Approach: The Gen AI Policy Builder creates a customizable framework for internal Gen AI tool oversight. It also offers a standardized approval process, balancing internal visibility and cohesion with operational practicality.
Risk: Disclosure of Confidential Information or Personal Data

Details: The chat interface for many popular Gen AI tools makes it simple for someone to unwittingly disclose sensitive confidential information or trade secrets (such as proprietary source code) or personal data to third-party products and services. That could cause companies to lose legal protection for their proprietary information, lead to breaches of contractual obligations with third parties or result in violations of laws and regulations.
Mitigation Approach: The Gen AI Policy Builder suggests new and supplemental requirements to bolster company practices and policies to account for the ways confidential information, trade secrets and personal data may be used.
Risk: Treating All Use Cases the Same

Details: While some use cases for Gen AI tools create material new risks for a company (e.g., externally distributing unverified output), others may create opportunities for innovation without meaningfully changing the company’s risk profile (e.g., creating internal training documentation).
Mitigation Approach: The Gen AI Policy Builder suggests initial guidelines to differentiate among potential types of employee uses. The Policy Builder establishes commensurate internal checks that correspond to the likely nature and magnitude of potential risk (such as differentiating between internal-only use and public-facing content and implementations).
Risk: Evolving Regulatory Landscape

Details: Relatively few authorities worldwide have implemented rules or requirements around Gen AI, but many are reviewing AI tools more broadly, focusing on their capabilities and possible harms. Companies should consider how existing laws could extend to Gen AI tools and stay abreast of new regulatory developments.
Mitigation Approach: The Gen AI Policy Builder accounts for the latest regulatory guidance (even where not yet binding) to prepare companies for upcoming legal compliance obligations and best practices.
Access the Gen AI Policy Builder to begin. If you have questions about Gen AI or the Gen AI Policy Builder, reach out to the authors (Annette Hurst, Shannon Yavorsky and Daniel Healow) or other members of the Orrick team.