
Artificial Intelligence Act

The Artificial Intelligence Act (AI Act)[a] is a European Union regulation concerning artificial intelligence (AI).


It establishes a common regulatory and legal framework for AI within the European Union (EU).[1] Proposed by the European Commission on 21 April 2021,[2] it passed the European Parliament on 13 March 2024,[3] and was unanimously approved by the EU Council on 21 May 2024.[4] The Act also creates a European Artificial Intelligence Board to promote national cooperation and ensure compliance with the regulation.[5] Like the EU's General Data Protection Regulation, the Act can apply extraterritorially to providers from outside the EU if they have users within the EU.[6]


It covers all types of AI across a broad range of sectors, with exceptions for AI systems used solely for military, national security, research and non-professional purposes.[7] As a piece of product regulation, it does not confer rights on individuals, but regulates the providers of AI systems and entities using AI in a professional context.[6] The draft Act was revised to address the rise in popularity of generative artificial intelligence systems, such as ChatGPT, whose general-purpose capabilities did not fit the main framework.[8] More restrictive regulations are planned for powerful generative AI systems with systemic impact.[9]


The Act classifies non-exempted AI applications by their risk of causing harm. There are four levels—unacceptable, high, limited, minimal—plus an additional category for general-purpose AI. Applications with unacceptable risks are banned. High-risk applications must comply with security, transparency and quality obligations, and undergo conformity assessments. Limited-risk applications only have transparency obligations, while minimal-risk applications are not regulated. For general-purpose AI, transparency requirements are imposed, with additional evaluations for high-capability models.[9][10]

Provisions

Risk categories

The Act defines different risk categories depending on the type of application, with a specific category dedicated to general-purpose generative AI.

Reactions

Experts have argued that, though the law's jurisdiction is European, it could have far-ranging implications for international companies that plan to expand to Europe.[35] Anu Bradford at Columbia has argued that the law provides significant momentum to the worldwide movement to regulate AI technologies.[35]


Amnesty International criticized the AI Act for not completely banning real-time facial recognition, which it said could damage "human rights, civil space and rule of law" in the European Union. It also criticized the absence of a ban on exporting AI technologies that could harm human rights.[35]


Some tech watchdogs have argued that there were major loopholes in the law that would allow large tech monopolies to entrench their advantage in AI, or to lobby to weaken the rules.[36][37] Some startups welcomed the clarification the Act provides, while others argued that the additional regulation would make European startups uncompetitive compared to American and Chinese startups.[37] La Quadrature du Net (LQDN) described the AI Act as "tailor-made for the tech industry, European police forces as well as other large bureaucracies eager to automate social control", and argued that the Act's reliance on self-regulation and exemptions renders it "largely incapable of standing in the way of the social, political and environmental damage linked to the proliferation of AI".[11]

See also

Algorithmic bias

Ethics of artificial intelligence

Regulation of algorithms

Regulation of artificial intelligence in the European Union

Existential risk from artificial general intelligence

External links

Artificial Intelligence Act (19 April 2024 corrected version, Archived 21 May 2024 at the Wayback Machine)