The risks relating to AI vary depending on your role and specific use case. The following is a non-exhaustive list of the most common risks presented by AI, though it is important to keep in mind that appropriate measures can mitigate these risks in whole or in part.
Inaccuracy / Misinformation: Artificial intelligence may produce outputs that are completely or partially incorrect, unrealistic or inconsistent with the input or desired output.
Intellectual Property: Third parties may raise intellectual property claims (including patent, trademark, trade secret and copyright claims, as well as torts such as right of publicity and defamation) relating to:
Confidential Information: Artificial intelligence trained using confidential information may produce outputs similar to the confidential information on which it was trained. Inputting confidential information into third-party AI tools may compromise the information’s confidentiality.
Security: Artificial intelligence presents opportunities for security vulnerabilities and threat actor attacks, including:
Privacy: Artificial intelligence trained using personal information may produce outputs similar to the personal information on which it was trained or use personal information in a way that is incompatible with the original purpose for collection or the reasonable expectations of the data subject. Inputting personal information into third-party AI tools may compromise the information’s confidentiality or otherwise be incompatible with the original purpose for collection or the reasonable expectations of the data subject.
Autonomy: Artificial intelligence presents risks to individuals' ability to make informed choices for themselves, whether as a result of unintended consequences or of intentional design practices aimed at tricking or manipulating users into making choices they would not otherwise have made.
Bias, Discrimination and Fairness: Artificial intelligence can “learn” the inherent bias contained in training data or otherwise held by those developing the model, which in turn can result in biased, discriminatory or unfair outputs or outcomes.
Child Sexual Abuse Material (“CSAM”): Various state criminal laws heavily regulate artificially generated CSAM, such as material produced by an AI model.