Earlier this month, the European Union approved the world’s first comprehensive set of rules governing artificial intelligence. The EU AI Act (the “Act”) provides a framework for addressing the risks of AI by protecting fundamental rights and values and making tech more “human-centric”.
Our Data, Privacy and Cyber Group has set out detailed commentary on the Act here, but for a bitesize summary see below.
Falling within scope
The Act applies to “providers”, “deployers”, “importers” and “distributors” of AI systems marketed and used within the EU, regardless of whether the organisation is established within the EU. The main compliance burden under the Act falls on providers and deployers of high-risk AI systems, each of which carries a distinct set of obligations.
Classification of Risk
The Act takes a risk-based approach, regulating AI systems according to the level of risk they present. Systems are classified into four categories:
- Unacceptable risk systems – an outright ban applies to AI systems considered a threat to people, civil society or human rights (e.g. manipulative AI, systems that infer emotions in the workplace, social scoring systems).
- High risk systems – most of the Act focuses on high-risk systems, which must comply with strict requirements (e.g. non-banned biometrics, many employment-related systems, AI in critical infrastructure).
- Limited risk systems – subject to specific transparency obligations, so that users are made aware they are interacting with AI (e.g. chatbots and similar).
- Minimal risk systems – the Act permits free use of minimal-risk AI systems (this includes things like AI in video games).
The Act also imposes obligations on providers of General Purpose AI (“GPAI”) models to reduce the risks created by generative AI technologies, with more stringent requirements applying to GPAI models that pose systemic risk. In particular, all providers of GPAI models must:
- Meet certain transparency requirements, including maintaining technical documentation of the GPAI model.
- Put in place a policy to comply with EU copyright law.
- Publish detailed summaries of the content used to train the GPAI model.
Governance and Enforcement
Each Member State is responsible for overseeing adherence to the Act, with the exception of the rules on GPAI models, which will be enforced by the EU AI Office (“AI Office”).
The AI Office, established in January 2024 (notably before the Act was passed), will assist Member States with enforcement and support the Commission in its role as primary enforcer of the GPAI rules.
The Act also establishes the AI Board, comprising one representative per Member State. Its duties include issuing codes of practice, recommendations and opinions, and contributing to technical standards, with the aim of ensuring uniform application of the Act throughout the EU.
Non-compliance with the Act can result in penalties of up to €35 million or 7% of global annual turnover, whichever is higher, as well as reputational damage.
Next Steps
The Act is expected to be formally adopted in April 2024 and will become fully applicable 24 months after its entry into force. There will be a phased implementation period, broadly as follows:
- Prohibited practices will apply 6 months after entry into force;
- Codes of practice will apply 9 months after entry into force;
- GPAI rules will apply 12 months after entry into force;
- Obligations for many high-risk AI systems will apply 24 months after entry into force; and
- Obligations for other high-risk AI systems will apply 36 months after entry into force.
For more detailed information on the Act, please see our article here, and join us for our next In-House Data Club webinar on 30 April 2024 at 4.00pm.
The Internal Market Committee co-rapporteur Brando Benifei said: “We finally have the world’s first binding law on artificial intelligence, to reduce risks, create opportunities, combat discrimination, and bring transparency... We ensured that human beings and European values are at the very centre of AI’s development”.