7 minute read | August 13, 2025
This update is part of our AI FAQ Series. Learn more at our AI Law Center.
The definition of “artificial intelligence” can vary, but it is generally understood to include engineered or machine-based systems designed to operate with varying levels of autonomy to generate outputs (such as content, recommendations or decisions) for a set of explicit or implicit objectives.
AI is often characterized as being either generative or predictive. Generative AI is designed to generate new content in the form of text (including code), audio, images or video based on instructions received through input prompts. In comparison, predictive AI is designed to make decisions or predictions about future events or behaviors, based on a set of input data and observations.
We strongly recommend developing an AI business strategy. Such a strategy provides a roadmap for integrating AI into your business processes, products and services, and ensures that investments in AI are aligned with your overall business objectives. You should regularly revisit your AI strategy, as business objectives, the technical capabilities of AI, legislation and market dynamics are all rapidly evolving.
Evaluate how AI can add value, improve customer experiences and optimize operations. It’s also critical to consider potential risks and ethical implications, and to ensure that AI aligns with your company’s strategic goals and compliance standards.
Implementing AI in your products requires a systematic approach to ensure the technology integrates seamlessly and provides the desired benefits.
Start by clearly identifying the business objectives and use cases for the technology. Next, assess the feasibility of the plan, such as considering whether you could obtain the data, technical resources and expertise necessary to advance your objectives.
If feasible, consider developing an implementation strategy that includes a roadmap with defined milestones and timelines, allocating budget for AI product development and securing stakeholder support. From there, you can begin preparing your data, assessing AI technologies and vendors, and integrating AI into your products.
The risks relating to AI vary depending on your role and specific use case.
The following is a non-exhaustive list of the most common risks presented by AI, but it is important to keep in mind that appropriate measures can mitigate these risks in whole or in part.
Privacy issues may arise throughout the AI lifecycle, and this is a key area of risk that should be managed, including considering the requirements of each jurisdiction in which an AI system will be deployed. Requirements may include ensuring that data used in AI systems is collected, processed and stored in compliance with applicable privacy laws and regulations. AI systems should be designed to uphold key privacy principles, such as data minimization, having a lawful purpose and legal basis for processing personal data, and the respect of individual rights in relation to their personal data. When using data to train models, it is important to ensure that any personal data has been lawfully obtained and may be processed for this purpose.
Security issues may arise throughout the AI lifecycle, and organizations should consider security a key area of risk that should be managed. In doing so, an organization should consider how it is using AI, the requirements of each jurisdiction in which an AI system will be deployed, and the applicable legal or contractual requirements.
Businesses should pay attention to what information is shared with AI providers through use of a particular tool and the contractual safeguards for that information. Security issues in deploying or using AI include protecting the data that AI systems process from unauthorized access, ensuring the integrity of AI systems against tampering, and safeguarding the AI infrastructure from cyberattacks.
Security risks also include what are known as poisoning attacks on data sources; for example, manipulating the data used by large language models, or other third-party data sources, in ways that introduce vulnerabilities. For this reason, it’s important to understand the supply chain for your AI systems, including your data sources.
Compliance teams should coordinate with IT/security teams to identify the range of potential threats and security issues your company may face, specific to the types of data processed, the AI tools you use, and your proposed use cases. This risk assessment should also consider your sector and jurisdiction, given that some countries have specific cybersecurity requirements for different types of systems that may be AI-based.
AI can impact people with disabilities positively or negatively. AI can enhance accessibility, such as through speech-to-text tools, AI-driven prosthetics and personalized learning. However, bias in AI algorithms could lead to discriminatory – and possibly illegal – outcomes. It’s important to ensure your AI products are designed inclusively, considering accessibility from the outset and testing extensively with diverse user groups. Some countries also have specific laws regarding the accessibility of services, which should be considered.
Avoiding AI entirely may not be feasible or beneficial, and companies shouldn’t assume that opting out is the right move by default.
AI offers numerous advantages, including efficiency, scalability and the ability to innovate. Even if your company isn’t using AI directly, it’s likely that at least some of your vendors, suppliers or contractors are leveraging AI in the products and services they provide to you, and your employees may be using AI without your permission. This makes it necessary to consider AI-related issues and develop an appropriate AI strategy, which may include expanding the scope of vendor and other partner onboarding procedures. Whatever strategy you choose, it’s essential to implement AI responsibly, with a focus on ethical considerations, transparency and compliance with legal standards.
Consider engaging counsel not only to ensure that AI use is compliant, but also to help understand the scope of AI use that is feasible for your business.