New EU AI legislation

On the fourteenth of June, the European Parliament adopted its position on sweeping legislation to regulate artificial intelligence, backed by an overwhelming majority of 499 votes in favor and 28 against, with 93 abstentions.

The legislation, which still requires approval by the Council of the European Union, seeks to regulate a number of potentially harmful uses of AI, while also setting out norms on the use of training data. The version approved by the Parliament today included more than 700 amendments. The original Commission proposal was more than 100 pages long.

The law contains potentially strong provisions to prevent government overreach into public and private life, and could mitigate some of the larger risks associated with AI. At the same time, critics note that some provisions may restrain the development or use of AI in Europe.

One of the many areas of regulation concerns transparency requirements for the data used to train AI, as well as transparency relating to outputs, such as the labeling of deep fakes. Some representatives of the AI industry have called the requirements “technically infeasible.” More generally, OpenAI’s Sam Altman is quoted as saying the European Commission’s proposal “may be prohibitively difficult to comply with.”

The current version of the EU AI Act, which still has to be approved by the Council of the European Union, includes the following requirement on training data for AI systems such as OpenAI’s ChatGPT:

“without prejudice to Union or national or Union legislation on copyright, document and make publicly available a sufficiently detailed summary of the use of training data protected under copyright law.”

Transparency about the data used, including the use of copyrighted works, is potentially a good thing: it allows authors, artists, and copyright holders to know when their works are being used. Depending on the implementation, however, it can also pose a risk to the development and deployment of artificial intelligence. Because generative AI models are often trained by scraping vast amounts of data from the internet, it can be technically very difficult or impossible to clearly identify the works used, or how the data contributes to specific outputs. That said, the EU can explore ways to ensure that the summaries are both meaningful and technically and practically feasible, and consider ways for authors and artists to audit training databases.

According to the current text, violating some parts of the regulation could expose firms to fines of up to 40 million euros or 7% of their total worldwide annual turnover, whichever is greater.
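
To illustrate how that penalty ceiling would be computed, here is a minimal sketch in Python of the “whichever is higher” rule from Amendment 647; the function name and the example turnover figure are hypothetical illustrations, not part of the legal text.

    # Minimal sketch of the fine ceiling in Amendment 647 (Article 71(3)):
    # up to EUR 40 million or 7% of total worldwide annual turnover,
    # whichever is higher. Function name and example figures are illustrative.
    def fine_ceiling_eur(worldwide_annual_turnover_eur: float) -> float:
        return max(40_000_000.0, 0.07 * worldwide_annual_turnover_eur)

    # A hypothetical firm with EUR 10 billion in annual turnover: 7% of turnover
    # (EUR 700 million) exceeds the flat EUR 40 million, so the higher figure applies.
    print(fine_ceiling_eur(10_000_000_000))  # 700000000.0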


Original Commission Proposal

Proposal for a REGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL LAYING DOWN HARMONISED RULES ON ARTIFICIAL INTELLIGENCE (ARTIFICIAL INTELLIGENCE ACT) AND AMENDING CERTAIN UNION LEGISLATIVE ACTS, Brussels, 21.4.2021, COM(2021) 206 final,
2021/0106(COD).

    TRANSPARENCY OBLIGATIONS FOR CERTAIN AI SYSTEMS

    Article 52
    Transparency obligations for certain AI systems

    1. Providers shall ensure that AI systems intended to interact with natural persons are designed and developed in such a way that natural persons are informed that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use. This obligation shall not apply to AI systems authorised by law to detect, prevent, investigate and prosecute criminal offences, unless those systems are available for the public to report a criminal offence.

    2. Users of an emotion recognition system or a biometric categorisation system shall inform of the operation of the system the natural persons exposed thereto. This obligation shall not apply to AI systems used for biometric categorisation, which are permitted by law to detect, prevent and investigate criminal offences.

    3. Users of an AI system that generates or manipulates image, audio or video content that appreciably resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful (‘deep fake’), shall disclose that the content has been artificially generated or manipulated.

    However, the first subparagraph shall not apply where the use is authorised by law to detect, prevent, investigate and prosecute criminal offences or it is necessary for the exercise of the right to freedom of expression and the right to freedom of the arts and sciences guaranteed in the Charter of Fundamental Rights of the EU, and subject to appropriate safeguards for the rights and freedoms of third parties.

    4. Paragraphs 1, 2 and 3 shall not affect the requirements and obligations set out in Title III of this Regulation.

EU Parliament Amended Texts

See: REPORT on the proposal for a regulation of the European Parliament and of the Council on laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union Legislative Acts. 22.5.2023 – (COM(2021)0206 – C9‑0146/2021 – 2021/0106(COD)) – ***I
https://www.europarl.europa.eu/doceo/document/A-9-2023-0188_EN.html

Here are a few provisions:

Amendment 203

    Article 3 – paragraph 1 – point 44 d (new)
    Amendment
    (44 d) “deep fake” means manipulated or synthetic audio, image or video content that would falsely appear to be authentic or truthful, and which features depictions of persons appearing to say or do things they did not say or do, produced using AI techniques, including machine learning and deep learning;

Amendment 486

    Article 52 – paragraph 3 – subparagraph 1
    Amendment
    3. Users of an AI system that generates or manipulates text, audio or visual content that would falsely appear to be authentic or truthful and which features depictions of people appearing to say or do things they did not say or do, without their consent (‘deep fake’), shall disclose in an appropriate, timely, clear and visible manner that the content has been artificially generated or manipulated, as well as, whenever possible, the name of the natural or legal person that generated or manipulated it. Disclosure shall mean labelling the content in a way that informs that the content is inauthentic and that is clearly visible for the recipient of that content. To label the content, users shall take into account the generally acknowledged state of the art and relevant harmonised standards and specifications.

Amendment 764
Annex VII – point 4 – point 4.5

    4.5. Where necessary to assess the conformity of the high-risk AI system with the requirements set out in Title III, Chapter 2, after all other reasonable ways to verify conformity have been exhausted and have proven to be insufficient, and upon a reasoned request, the notified body shall also be granted access to the training and trained models of the AI system, including its relevant parameters. Such access shall be subject to existing Union law on the protection of intellectual property and trade secrets. They shall take technical and organisational measures to ensure the protection of intellectual property and trade secrets.

Amendment 399

    Article 28 b (new)
    Obligations of the provider of a foundation model
    1. A provider of a foundation model shall, prior to making it available on the market or putting it into service, ensure that it is compliant with the requirements set out in this Article, regardless of whether it is provided as a standalone model or embedded in an AI system or a product, or provided under free and open source licences, as a service, as well as other distribution channels.

    2. For the purpose of paragraph 1, the provider of a foundation model shall:

    (a) demonstrate through appropriate design, testing and analysis that the identification, the reduction and mitigation of reasonably foreseeable risks to health, safety, fundamental rights, the environment and democracy and the rule of law prior and throughout development with appropriate methods such as with the involvement of independent experts, as well as the documentation of remaining non-mitigable risks after development

    (b) process and incorporate only datasets that are subject to appropriate data governance measures for foundation models, in particular measures to examine the suitability of the data sources and possible biases and appropriate mitigation

    (c) design and develop the foundation model in order to achieve throughout its lifecycle appropriate levels of performance, predictability, interpretability, corrigibility, safety and cybersecurity assessed through appropriate methods such as model evaluation with the involvement of independent experts, documented analysis, and extensive testing during conceptualisation, design, and development;

    (d) design and develop the foundation model, making use of applicable standards to reduce energy use, resource use and waste, as well as to increase energy efficiency, and the overall efficiency of the system, without prejudice to relevant existing Union and national law. This obligation shall not apply before the standards referred to in Article 40 are published. Foundation models shall be designed with capabilities enabling the measurement and logging of the consumption of energy and resources, and, where technically feasible, other environmental impact the deployment and use of the systems may have over their entire lifecycle;

    (e) draw up extensive technical documentation and intelligible instructions for use, in order to enable the downstream providers to comply with their obligations pursuant to Articles 16 and 28(1);

    (f) establish a quality management system to ensure and document compliance with this Article, with the possibility to experiment in fulfilling this requirement,

    (g) register that foundation model in the EU database referred to in Article 60, in accordance with the instructions outlined in Annex VIII point C.

    When fulfilling those requirements, the generally acknowledged state of the art shall be taken into account, including as reflected in relevant harmonised standards or common specifications, as well as the latest assessment and measurement methods, reflected in particular in benchmarking guidance and capabilities referred to in Article 58a;

    3. Providers of foundation models shall, for a period ending 10 years after their foundation models have been placed on the market or put into service, keep the technical documentation referred to in paragraph 2(e) at the disposal of the national competent authorities

    4. Providers of foundation models used in AI systems specifically intended to generate, with varying levels of autonomy, content such as complex text, images, audio, or video (“generative AI”) and providers who specialise a foundation model into a generative AI system, shall in addition

    a) comply with the transparency obligations outlined in Article 52 (1),

    b) train, and where applicable, design and develop the foundation model in such a way as to ensure adequate safeguards against the generation of content in breach of Union law in line with the generally-acknowledged state of the art, and without prejudice to fundamental rights, including the freedom of expression,

    c) without prejudice to Union or national or Union legislation on copyright, document and make publicly available a sufficiently detailed summary of the use of training data protected under copyright law.

Amendment 647

    Article 71 – paragraph 3 – introductory part
    3. Non compliance with the prohibition of the artificial intelligence practices referred to in Article 5 shall be subject to administrative fines of up to 40 000 000 EUR or, if the offender is a company, up to 7 % of its total worldwide annual turnover for the preceding financial year, whichever is higher:

From the Original EC proposal:

    TITLE II
    PROHIBITED ARTIFICIAL INTELLIGENCE PRACTICES
    Article 5

    1. The following artificial intelligence practices shall be prohibited:

    (a) the placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm;

    (b) the placing on the market, putting into service or use of an AI system that exploits any of the vulnerabilities of a specific group of persons due to their age, physical or mental disability, in order to materially distort the behaviour of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm;

    (c) the placing on the market, putting into service or use of AI systems by public authorities or on their behalf for the evaluation or classification of the trustworthiness of natural persons over a certain period of time based on their social behaviour or known or predicted personal or personality characteristics, with the social score leading to either or both of the following:

      (i) detrimental or unfavourable treatment of certain natural persons or whole groups thereof in social contexts which are unrelated to the contexts in which the data was originally generated or collected;

      (ii) detrimental or unfavourable treatment of certain natural persons or whole groups thereof that is unjustified or disproportionate to their social behaviour or its gravity;

    (d) the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, unless and in as far as such use is strictly necessary for one of the following objectives:

      (i) the targeted search for specific potential victims of crime, including missing children;

      (ii) the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or of a terrorist attack;

      (iii) the detection, localisation, identification or prosecution of a perpetrator or suspect of a criminal offence referred to in Article 2(2) of Council Framework Decision 2002/584/JHA and punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least three years, as determined by the law of that Member State.

    2. The use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement for any of the objectives referred to in paragraph 1 point d) shall take into account the following elements:

    (a) the nature of the situation giving rise to the possible use, in particular the seriousness, probability and scale of the harm caused in the absence of the use of the system;

    (b) the consequences of the use of the system for the rights and freedoms of all persons concerned, in particular the seriousness, probability and scale of those consequences.

    In addition, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement for any of the objectives referred to in paragraph 1 point d) shall comply with necessary and proportionate safeguards and conditions in relation to the use, in particular as regards the temporal, geographic and personal limitations.

    3. As regards paragraphs 1, point (d) and 2, each individual use for the purpose of law enforcement of a ‘real-time’ remote biometric identification system in publicly accessible spaces shall be subject to a prior authorisation granted by a judicial authority or by an independent administrative authority of the Member State in which the use is to take place, issued upon a reasoned request and in accordance with the detailed rules of national law referred to in paragraph 4. However, in a duly justified situation of urgency, the use of the system may be commenced without an authorisation and the authorisation may be requested only during or after the use.

    The competent judicial or administrative authority shall only grant the authorisation where it is satisfied, based on objective evidence or clear indications presented to it, that the use of the ‘real-time’ remote biometric identification system at issue is necessary for and proportionate to achieving one of the objectives specified in paragraph 1, point (d), as identified in the request. In deciding on the request, the competent judicial or administrative authority shall take into account the elements referred to in paragraph 2.

    4. A Member State may decide to provide for the possibility to fully or partially authorise the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement within the limits and under the conditions listed in paragraphs 1, point (d), 2 and 3. That Member State shall lay down in its national law the necessary detailed rules for the request, issuance and exercise of, as well as supervision relating to, the authorisations referred to in paragraph 3. Those rules shall also specify in respect of which of the objectives listed in paragraph 1, point (d), including which of the criminal offences referred to in point (iii) thereof, the competent authorities may be authorised to use those systems for the purpose of law enforcement.