AI Update: EU High-Level Expert Group Publishes Requirements for Trustworthy AI and European Commission Unveils Plans for AI Regulation


August 19, 2020

Assessment List for Trustworthy Artificial Intelligence

On July 17, 2020, the European High-Level Expert Group on Artificial Intelligence (“AI HLEG”) presented its final Assessment List for Trustworthy Artificial Intelligence (“ALTAI”), a self-evaluation tool designed to help companies identify AI-related risks, minimize them and determine what active measures to take.

Background

In December 2018, the European Commission (“EC”) announced in a communication its vision for artificial intelligence (“AI”), which supports “ethical, secure, and cutting-edge AI made in Europe”. To implement this vision, the EC created the AI HLEG, a group of 52 experts in the field of AI, tasked with drafting AI ethics guidelines as well as policy and investment recommendations.

On April 8, 2019, the AI HLEG published its Ethics Guidelines for Trustworthy AI (“Guidelines”), which incorporated more than 500 comments received from stakeholders through an open consultation procedure. While the Guidelines are not a legally binding document, they aim to establish a framework of guiding principles to assist developers and deployers in achieving “Trustworthy AI”, i.e., AI that is lawful, ethical and robust. In particular, the Guidelines identify four “Ethical Imperatives” derived from EU fundamental rights (including privacy considerations), which are crucial to ensuring that AI systems are developed, deployed and used in a trustworthy manner: respect for human autonomy, prevention of harm, fairness and explicability.

Importantly, these principles do not remain at an abstract level: the Guidelines include specific, practical questions to consider when building AI technology, for instance: does the AI system interact with decisions by human end-users, e.g., does it recommend actions or decisions or present options to the user? If the AI system supplements part of human work, have the task allocations and interactions between the AI system and humans been considered and evaluated to allow for appropriate human oversight and control?

Seven Requirements for Trustworthy AI

The concept of Trustworthy AI, as set out in the Guidelines, is premised on seven key requirements, which are intended to apply continuously throughout an AI system’s life cycle:

  1. Human Agency and Oversight. AI systems should enable humans to make their own informed decisions and foster fundamental rights, and should not decrease, limit or misguide human autonomy, for instance by concealing the AI origin of certain information or decisions. This requirement is mainly aimed at AI systems that guide, influence or support humans in decision-making processes, for example, algorithmic decision support systems or risk analysis/prediction systems. To achieve this goal, AI systems will require human oversight mechanisms (human-in-the-loop, human-on-the-loop and human-in-command) to decide when and how to use, or cease to use, the AI system in any particular situation.

  2. Technical Robustness and Safety. Trustworthy AI requires algorithms to be secure and sufficiently robust to deal with errors or inconsistencies during all phases of the AI system’s life cycle. This includes ensuring there is a fail-safe fallback plan to address AI system errors, as well as ensuring systems are accurate, reliable and reproducible.

  3. Privacy and Data Governance. Individuals should have full control over their own data. AI systems should incorporate adequate privacy and data protection safeguards, as well as ensure the quality and integrity of the data used.

  4. Transparency. The processes of AI development should be documented to allow AI systems’ outcomes to be traced. Companies should be able to explain the AI system’s technical processes and the reasoning behind the decisions or predictions that the AI system makes. Consumers need to be aware that they are interacting with an AI system and must be informed of the system’s capabilities and limitations.

  5. Diversity, Non-discrimination and Fairness. AI systems should be inclusive and accessible to all users, regardless of age, gender, abilities or other characteristics. Unfair bias should be avoided, as it could have multiple negative implications, including the marginalization of vulnerable groups.

  6. Societal and Environmental Well-being. AI systems should benefit all human beings and must be sustainable and environmentally friendly. The AI system’s impact on parts of the economy as well as the society at large should also be considered.

  7. Accountability. Mechanisms should be put in place to ensure responsibility and accountability for the development, deployment and use of AI systems, especially where they have a negative impact on consumers. AI systems should be open to evaluation by auditors, and adequate and accessible redress procedures should be available to users.

Trustworthy AI Assessment List

The Guidelines set out an Assessment List intended to operationalize the key requirements of Trustworthy AI. Following a pilot process in 2019, the final version of the Assessment List was published on July 17, 2020. The ALTAI helps companies identify the risks of their AI systems and implement appropriate mitigating measures based on the seven key requirements. While the ALTAI is voluntary, it is an important step on the path to formal regulation of AI, as it enables companies to signal compliance with it and thus foster consumer trust. The AI HLEG noted that the Assessment List should be used in a flexible manner, and companies may choose to focus on some elements more than others, depending on the particular industry or sector in which they operate.

Key Features

The AI HLEG recommends that organizations perform a fundamental rights impact assessment (“FRIA”) to determine whether their AI systems respect the EU Charter of Fundamental Rights and the European Convention on Human Rights. The FRIA should include questions such as:

  • Does the AI system potentially negatively discriminate against people on the basis of race, gender, age or any other characteristics?
  • Have adequate measures been put in place to ensure the protection of personal data with respect to the development, deployment and use phases of the AI system?
  • Have adequate processes been put in place to test and monitor for potential infringement on freedom of expression and information, and/or freedom of assembly and association, during the development, deployment and use phases of the AI system?

Once the FRIA has been performed, organizations can then carry out their self-assessment for Trustworthy AI. The assessment consists of a set of questions for each of the seven requirements for Trustworthy AI; a non-exhaustive list of key questions is set out in the ALTAI. Such questions include the following (an illustrative way of recording answers internally is sketched after the list):

  1. Human Agency and Oversight
    1. Is the AI system designed to interact with, guide or take decisions on behalf of human end-users in ways that affect humans or society?
    2. Could the AI system generate confusion for some or all end-users or subjects on whether they are interacting with a human or AI system?

  2. Technical Robustness and Safety
    1. Were adequate measures put in place to ensure the integrity, robustness and overall security of the AI system against potential attacks during its life cycle?
    2. Were the risks, risk metrics and risk levels of the AI system defined for each specific use case?

  3. Privacy and Data Governance
    1. Was the impact of the AI system considered as it relates to the right to privacy, the right to physical, mental and/or moral integrity and the right to data protection?
    2. Were adequate measures put in place to ensure compliance with the GDPR or a non-European equivalent (e.g., data protection impact assessment, appointment of a Data Protection Officer, data minimization, etc.)?

  4. Transparency
    1. Were adequate measures put in place to address the traceability of the AI system during its entire life cycle?
    2. Were the decision(s) of the AI system explained to users?

  5. Diversity, Non-discrimination and Fairness
    1. Was a strategy or a set of procedures established to avoid creating or reinforcing unfair bias in the AI system, regarding both the use of input data as well as the algorithm design?
    2. Was a mechanism put in place to allow users to flag issues related to bias, discrimination or poor performance of the AI system?

  6. Social and Environmental Well-being
    1. Where possible, were mechanisms established to evaluate the environmental impact of the AI system’s development, deployment and/or use (for example, the amount of energy used and carbon emissions)?
    2. Does the AI system impact human work and work arrangements?

  7. Accountability
    1. Did you establish mechanisms that facilitate the AI system’s auditability (e.g., traceability of the development process, the sourcing of training data and the logging of the AI system’s processes, outcomes, positive and negative impact)?
    2. Did you consider establishing an AI ethics review board or a similar mechanism to discuss the overall accountability and ethics practices, including potential unclear grey areas?
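
By way of illustration only, the sketch below shows one way an organization might record an ALTAI-style self-assessment internally. The requirement names mirror the seven requirements above; the answer scale, the data structures and the helper function are assumptions made for the purpose of this example and are not part of the official ALTAI.

    # Illustrative sketch only: a minimal internal record of an ALTAI-style
    # self-assessment. Requirement names mirror the ALTAI; the answer scale,
    # classes and helper function below are assumptions, not part of the tool.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Question:
        text: str
        answer: str = "no"   # one of "yes", "partially", "no", "not_applicable"
        notes: str = ""      # evidence collected or mitigation planned

    @dataclass
    class Requirement:
        name: str
        questions: List[Question] = field(default_factory=list)

        def open_items(self) -> List[Question]:
            # Questions still needing work, i.e., not answered "yes" or "not_applicable".
            return [q for q in self.questions
                    if q.answer not in ("yes", "not_applicable")]

    def build_checklist() -> List[Requirement]:
        return [
            Requirement("Human Agency and Oversight", [
                Question("Does the system guide, influence or support human decision-making?"),
                Question("Could end-users be unsure whether they are interacting with an AI system?"),
            ]),
            Requirement("Technical Robustness and Safety", [
                Question("Are security measures in place against attacks across the life cycle?"),
            ]),
            # The remaining requirements (Privacy and Data Governance, Transparency,
            # Diversity, Non-discrimination and Fairness, Societal and Environmental
            # Well-being, Accountability) would be populated in the same way.
        ]

    if __name__ == "__main__":
        checklist = build_checklist()
        checklist[0].questions[0].answer = "yes"
        for requirement in checklist:
            print(f"{requirement.name}: {len(requirement.open_items())} open item(s)")

Keeping answers and supporting evidence in a structured form of this kind can make it easier to revisit the assessment as the AI system evolves and to show how each of the seven requirements was considered.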

European Commission Unveils Plans for AI Regulation

Building upon all of the above-mentioned guidance, as well as the recent White Paper on AI, the European Commission finally unveiled its inception impact assessment for AI legislation on July 23, 2020. While the completed impact assessment is not expected until December 2020, this initial roadmap defines the scope and goals of the ongoing impact assessment study. The European Commission currently welcomes feedback on this roadmap to AI legislation through September 10, 2020.

The study, covering the EU-wide digital market, will examine different legislative options for AI regulation, ranging from no action or soft-law guidelines only, through voluntary industry-led compliance schemes and codes of conduct, all the way to full-scale AI regulation. Several factors will be considered as part of the study, including how different types of AI regulation would impact small- and medium-sized enterprises (“SMEs”) as compared to well-established large companies, the potential competitive advantages AI regulation may bring to the EU digital market by fostering consumer trust, and the legal fragmentation and uncertainty that could result if, in the absence of EU-wide AI regulation, EU Member States were left to regulate AI individually.

The goal of this impact assessment is to determine the best legislative path for implementing the EU’s approach to AI: fostering consumer trust in AI technologies through an appropriate legal and ethical framework, with the EU’s respect for fundamental rights at its core. Several key concerns about AI will be addressed:

  • The avoidance, or at least minimization, of consumer harm caused by AI: for instance, protecting consumers from accidents caused by autonomous vehicles or other AI-driven robotics, from the loss of privacy rights or limitations on freedom of expression that may result from pervasive surveillance, facial recognition and other monitoring systems, and from unlawful discrimination that may be caused by AI tools displaying bias against certain population groups.
  • The avoidance of such harm from the outset, where it results from design flaws in the AI system, the use of poor-quality or biased data, or the ability of AI systems to continue learning while in use.
  • The need for human oversight, data accuracy, and the transparency and accountability of AI systems, in particular by avoiding the “black-box effect” in the granular application of different outcomes to individuals.
  • The need to ensure that consumers have an easy and secure way to seek redress for harm caused by AI, to gather the necessary evidence documenting such harm, and to trace the damaging outcome back to a particular human action or omission.
  • The need to fill gaps in existing EU safety legislation and to fit future AI regulation seamlessly into the existing body of EU law: in particular, integration with existing legislation on the protection of personal data, non-discrimination, product safety and product liability, including the possibility of revising safety rules that apply only to risks present in “static” products which, unlike AI systems, cannot evolve once placed on the market.

Such concerns are likely to be the main focus of AI regulation, expected in 2021, following the much-anticipated findings of the impact assessment study.