A ground-breaking agreement was reached in Brussels on 8 December. After 36 hours of marathon negotiations, the EU Parliament, Council and Commission announced the final trilogue result on the Regulation on artificial intelligence (AI).
The AI Act, as the Regulation is also known, is the world's first set of rules on artificial intelligence. The EU’s intention is to play a pioneering role in this sector. Artificial intelligence has been around for decades, but technological developments in recent years have made it increasingly important, as can be seen in applications such as ChatGPT.
The European Commission presented a Proposal for a Regulation in April 2021, the Council adopted its position in December 2022 and the European Parliament followed in June 2023. On 8 December 2023, Carme Artigas, Spanish State Secretary for Digitalisation and Artificial Intelligence, Internal Market Commissioner Thierry Breton and the rapporteurs of the European Parliament Brando Benifei (S&D) and Dragoş Tudorache (Renew) presented the political agreement.
Risk-based approach
The regulation differentiates between various risk categories when regulating artificial intelligence. The higher the risk, the stricter the regulation.
Certain AI applications will be banned from the EU internal market. According to the trilogue agreement, these include biometric categorisation systems that use sensitive characteristics (such as political and religious beliefs or sexual orientation), as well as the untargeted scraping of facial images from the internet or from video surveillance footage to create facial recognition databases. Emotion recognition in the workplace and in educational institutions, as well as social scoring based on social behaviour or personal characteristics, are also prohibited. The list of banned technologies further includes AI systems that manipulate people's behaviour in order to circumvent their free will and AI that is used to exploit people's vulnerabilities. There are to be narrow exceptions to the ban on biometric identification systems for law enforcement purposes.
Some AI systems are classified as high-risk systems (due to their significant potential harm to health, safety, fundamental rights, environment, democracy and the rule of law). These must fulfil certain requirements in order to be authorised on the EU internal market. During the negotiations, the EU Parliament successfully pushed through a mandatory fundamental rights impact assessment. AI systems that are used to influence election results and voter behaviour are considered high-risk. Citizens will have the right to launch complaints about AI systems and receive explanations on decisions that are based on high-risk systems and affect their rights.
In order to take account of the wide range of AI systems and the rapid increase in capabilities, it was agreed that general purpose AI systems (GPAI) and the models on which they are based must fulfil certain transparency requirements, such as documentation on technical and copyright aspects as well as information on the content used for training. Stricter requirements are to apply to GPAI models with a high systemic risk.
Office for Artificial Intelligence
An Artificial Intelligence Office ("AI Office") will be set up at EU level within the Commission. Its tasks will be to monitor the most advanced AI models, promote standards and testing procedures, and enforce the common rules in the Member States. A scientific committee will advise the AI Office on GPAI models.
AK Demands
In an open letter before the last trilogue negotiations, AK drew attention to some critical points. AK calls for the classification of high-risk systems to be based on a list of high-risk applications. AK rejects additional criteria that require interpretation, as these would create legal uncertainty for both consumers and AI users. Practice also shows that automated credit scoring, emotion recognition and the determination of individual consumers' insurance risk are always particularly risky. AK has also spoken out in favour of a fundamental rights review as part of the AI Regulation. Whether the fundamental rights impact assessment now adopted for high-risk systems fulfils the AK requirements has yet to be examined.
With regard to AI in the workplace, AK has spoken out in favour of a ban on emotion recognition and pointed out that AI applications in the workplace must not only meet technical requirements but also comply with labour law protection mechanisms and co-determination rights within the framework of (supra-)company representation of employees' interests.
What are the next steps?
A political agreement was reached on 8 December and many of the major issues were clarified. The text of the regulation still needs to be finalised in detail and then confirmed by the EU Parliament and Council. In the EU Parliament, the Committee on the Internal Market and Consumer Protection (IMCO) and the Committee on Civil Liberties, Justice and Home Affairs (LIBE) will vote during their next meetings, scheduled for the end of January. Important details still need to be clarified before then. The AI Regulation will apply two years after its entry into force, with the exception of some provisions that will apply earlier (including the ban on certain AI applications). In order to bridge the transition period, the Commission has announced that it will launch an "AI Pact" with voluntary commitments.
Further information:
EU Parliament: Artificial intelligence Act: deal on comprehensive rules for trustworthy AI
Council: Artificial intelligence Act: Council and Parliament strike a deal on the first rules for AI in the world
EU Commission: Statement by President von der Leyen on the political agreement on the EU AI Act
AK EUROPA: Open letter to EU-decisionmakers in the trilogue
AK EUROPA: EU Artificial intelligence Act: Is sufficient protection for workers and consumers ensured?