AI for Business Basics: What Is AI and How Should We Approach It?


7 minute read | August.13.2025

This update is part of our AI FAQ Series. Learn more at our AI Law Center.

1. What is artificial intelligence (AI)?

The definition of “artificial intelligence” can vary, but it is generally understood to include engineered or machine-based systems designed to operate with varying levels of autonomy to generate outputs (such as content, recommendations or decisions) for a set of explicit or implicit objectives.

AI is often characterized as being either generative or predictive. Generative AI is designed to generate new content in the form of text (including code), audio, images or video based on instructions received through input prompts. In comparison, predictive AI is designed to make decisions or predictions about future events or behaviors, based on a set of input data and observations.

2. What is the role of AI in business strategy? Do I need an AI-based business strategy?

We strongly recommend developing an AI business strategy. An AI-based business strategy provides a roadmap for integrating AI into your business processes, products and services and ensuring that investments in AI are aligned with your overall business objectives. You should regularly revisit your AI strategy over time, as business objectives, technical capabilities of AI, legislation and market dynamics are rapidly evolving.

3. How should we approach using AI in business?

Evaluate how AI can add value, improve customer experiences and optimize operations. It’s also critical to consider potential risks and ethical implications, and to ensure that AI aligns with your company’s strategic goals and compliance standards.

4. How can I implement AI in my products?

Implementing AI in your products requires a systematic approach to ensure the technology integrates seamlessly and provides the desired benefits

Start by clearly identifying the business objectives and use cases for the technology. Next, assess the feasibility of the plan, such as considering whether you could obtain the data, technical resources and expertise necessary to advance your objectives.

If feasible, consider developing an implementation strategy that includes a roadmap with defined milestones and timelines, allocating budget for AI product development and securing stakeholder support. From there, you can begin preparing your data, assessing AI technologies and vendors, and integrating AI into your products.

5. What are the risks associated with AI?

The risks relating to AI vary depending on your role and specific use case.

The following is a non-exhaustive list of the most common risks presented by AI, but it is important to keep in mind that appropriate measures can reduce or mitigate risks in whole or in part.

  • Inaccuracy / Misinformation: AI may produce outputs that are completely or partially incorrect, unrealistic, or inconsistent with the input or desired output.
  • Intellectual Property:
    • Third parties may raise intellectual property claims (including patent, trademark, trade secret and copyright claims) relating to:
      • The actual algorithms and code that run the AI model.
      • The data used to train or test the AI model.
      • The outputs produced by the AI model.
    • AI-generated material may not benefit from intellectual property protections to the same extent as human-generated material. Careful consideration to intellectual property ownership and protections should be given before deploying AI to develop products and services.
  • Confidential Information: AI trained using confidential information may produce outputs like the confidential information on which it was trained. Inputting confidential information into third-party AI tools may compromise the information’s confidentiality.
  • Security: AI presents opportunities for security vulnerabilities and threat actor attacks, including:
    • Prompt Attacks: Threat actor includes malicious instructions in AI model prompts designed to influence the behavior of the model to produce outputs not intended by the model’s design (e.g., instructions to ignore the model’s standard safety guidelines).
    • Model Backdoors: Threat actor leverages direct access to the back-end model to covertly change the behavior of the model to produce incorrect or malicious outputs.
    • Adversarial Examples: Threat actor leverages obfuscation strategies to embed hidden input characteristics behind seemingly appropriate prompts to produce a highly unexpected output from the model.
    • Data Poisoning: Threat actor obtains access to a model’s training data and manipulates the data to influence the model’s output according to the attacker’s preference.
    • Exfiltration: Threat actor uses otherwise legitimate query prompts in an attempt to exfiltrate protected data or content (e.g., training data or model IP).
    • Traditional Security Vulnerabilities and Attacks: Threat actors may leverage AI models to carry out traditional attacks by exploiting security vulnerabilities (e.g., leveraging a vulnerability in an AI system’s code to backdoor into the organization’s broader environment).
  • Privacy: Artificial intelligence trained using personal information may produce outputs like the personal information on which it was trained or use personal information in a way that is incompatible with the original purpose for collection or the reasonable expectations of the data subject. Inputting personal information into third-party AI tools may compromise the information’s confidentiality or otherwise be incompatible with the original purpose for collection or the reasonable expectations of the data subject.
  • Autonomy: Artificial intelligence presents risks to the ability for individuals to make informed choices for themselves, whether because of unintended consequences or intentional design practices developed with the goal of tricking or manipulating users into making choices they would not otherwise have made.
  • Bias, Discrimination and Fairness: Artificial intelligence can “learn” the inherent bias contained in training data or otherwise held by those developing the model, which in turn can result in biased, discriminatory, or unfair outputs or outcomes.

6. What are the privacy issues relating to AI?

Privacy issues may arise throughout the AI lifecycle, and this is a key area of risk that should be managed, including considering the requirements of each jurisdiction in which an AI system will be deployed. Requirements may include ensuring that data used in AI systems is collected, processed and stored in compliance with applicable privacy laws and regulations. AI systems should be designed to uphold key privacy principles, such as data minimization, having a lawful purpose and legal basis for processing personal data, and the respect of individual rights in relation to their personal data. When using data to train models, it is important to ensure that any personal data has been lawfully obtained and may be processed for this purpose.

7. What are the security issues relating to AI?

Security issues may arise throughout the AI lifecycle, and organizations should consider security a key area of risk that should be managed. In doing so, an organization should consider how it is using AI, the requirements of each jurisdiction in which an AI system will be deployed, and the applicable legal or contractual requirements.

Businesses should pay attention to what information is shared with AI providers through use of a particular tool and the contractual safeguards for that information. Security issues in deploying or using AI include protecting the data that AI systems process from unauthorized access, ensuring the integrity of AI systems against tampering, and safeguarding the AI infrastructure from cyberattacks.

Security risks also include what’s known as poisoning attacks to data sources; for example, manipulating large language models or other third-party data sources in ways that introduce vulnerabilities or by using other poisoning techniques. For this reason, it’s important to understand the supply chain for your AI systems, including your data sources.

Compliance teams should coordinate with their IT/security teams to identify the range of potential threats and security issues that your company may face specific to the types of data processed, the AI tools you use, and your proposed use cases. This risk assessment should also consider your sector and jurisdiction, given that some countries have specific cybersecurity requirements for different types of systems that may be AI-based.

8. What should I know about the impact of AI on people with disabilities?

AI can impact people with disabilities positively or negatively. AI can enhance accessibility, such as through speech-to-text tools, AI-driven prosthetics and personalized learning. However, bias in AI algorithms could lead to discriminatory – and possibly illegal – outcomes. It’s important to ensure your AI products are designed inclusively, considering accessibility from the outset and testing extensively with diverse user groups. Some countries also have specific laws regarding the accessibility of services, which should be considered.

9. Should we just avoid using AI?

Avoiding AI entirely may not be feasible – or beneficial, and companies shouldn’t assume that avoiding AI entirely is the right move by default.

AI offers numerous advantages, including efficiency, scalability and the ability to innovate. Even if your company isn’t using AI directly, it’s likely at least some of your vendors, suppliers or contractors are leveraging AI for the products and services they provide to you, and your employees may be using AI without your permission. This necessitates considering AI-related issues and developing an appropriate AI strategy, which may include expanding the scope of vendor and other partner onboarding procedures. Regardless of the strategy you choose to follow, it’s essential to implement AI responsibly, with a focus on ethical considerations, transparency and compliance with legal standards.

Consider engaging counsel not only to ensure that AI use is compliant, but also to help understand the scope of AI use that is feasible for your business.