However, with great power comes great responsibility.
The European Union (EU) is taking a bold step forward to ensure AI is used safely, ethically, and transparently.
The EU AI Act is the world’s first comprehensive AI law, establishing a clear set of rules to protect fundamental rights and public interests while promoting innovation.
It introduces a risk-based approach, applying stricter requirements to AI systems that pose greater risks.
If you are developing, using, or deploying AI in any capacity within the EU market, this regulation will likely affect you.
In this article, you'll find the answers to:
Before jumping into the details, it's important to highlight who the EU AI Act was created for.
And yes, it probably applies to you as well.
Short answer: almost everyone who uses AI in the EU market.
If you're touching AI in any way and operating within or interacting with the EU market, the EU AI Act likely applies to you.
It's not only relevant for developers and providers of AI systems; it also applies to users of AI, including businesses, organisations, and even individuals who deploy AI tools within the European Union.
To be honest, if you're reading this article, the EU AI Act probably applies to you.
So, let's take a closer look and answer some important questions.
The Artificial Intelligence Act of the European Union can be quite an overwhelming document.
To help you digest all the information, our Subject Matter Experts have gone through the act and summarised it for you.
The most important takeaways are explained in this article.
If you're curious to read the full document (which we advise you to do), check it out here.
For this article, we'll be referring to the EU AI Act (the Artificial Intelligence Act of the European Union).
The EU AI Act was adopted to govern the use of artificial intelligence in the European Union.
It is the world's first comprehensive legal framework for artificial intelligence.
It's designed to ensure AI is used safely, ethically, and transparently, while still allowing businesses to innovate and grow.
The act uses a risk-based approach to regulation, applying different rules to AI systems based on their risk level.
There are four risk levels in the EU AI Act:
Let's take a deep dive into each risk level.
This is the no-go zone.
Think of AI systems that manipulate human behaviour, exploit vulnerabilities, or facilitate surveillance on a mass scale.
Sounds like your product? Well, that's not great, because under the EU AI Act, these uses are flat-out banned.
Penalties range from €7.5 million or 1% of global annual turnover to €35 million or 7% of global annual turnover, whichever is higher.
👆 Depending on the nature of the non-compliance.
AI systems with unacceptable risks include:
* Taken directly from the EU AI Act
Here’s where it gets interesting.
Is your business in a sector like healthcare, finance, transportation, or hiring, and are you using AI tools that affect critical decisions?
Then you’re in the high-risk category.
But don’t panic—this isn’t a blockade.
It’s a chance to differentiate yourself by adhering to the highest standards of transparency, accountability, and safety.
First of all, here's an overview of high-risk AI systems:
👉 It's important to double-check the official text of the EU AI Act for a full overview of what systems fall within this category, as updates may have been published.
Providers of high-risk AI systems must:
Implement a comprehensive risk management system that spans the entire lifecycle of the high-risk AI system.
Ensure robust data governance by using training, validation, and testing datasets that are relevant, sufficiently representative, and, as far as possible, free from errors and complete in accordance with the system's intended purpose.
Develop detailed technical documentation that demonstrates compliance with the AI EU Act, providing authorities with the necessary information to assess adherence.
Design the high-risk AI system with automated record-keeping capabilities to log events that are critical for identifying national-level risks and tracking significant modifications throughout the system's lifecycle.
Provide clear instructions for use to downstream deployers, enabling them to comply with all relevant regulatory requirements.
Ensure the high-risk AI system is designed to facilitate human oversight, allowing deployers to intervene and manage the system when necessary.
Design the AI system to achieve optimal levels of accuracy, robustness, and cybersecurity, ensuring reliable and secure performance.
Establish a quality management system to maintain ongoing compliance with all regulatory standards.
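To make the record-keeping obligation above more concrete, here is a minimal sketch of automated event logging over a system's lifecycle. This is purely illustrative: the class, event names, and fields are our own invention, not terminology from the Act.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch of automated record-keeping for a high-risk AI system.
# Event types and fields are illustrative, not taken from the EU AI Act.
@dataclass
class AuditLog:
    events: list = field(default_factory=list)

    def record(self, event_type: str, detail: str) -> dict:
        """Append a timestamped event, e.g. an inference or a model update."""
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "event_type": event_type,
            "detail": detail,
        }
        self.events.append(entry)
        return entry

    def significant_modifications(self) -> list:
        """Filter events that track substantial changes to the system."""
        return [e for e in self.events if e["event_type"] == "modification"]

log = AuditLog()
log.record("inference", "credit-scoring decision for application #1042")
log.record("modification", "model retrained on 2024-Q4 data")
print(len(log.significant_modifications()))  # 1
```

In practice, such logs would feed the technical documentation and quality management system described above, so that authorities can trace significant modifications after the fact.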
AI systems with limited risk, such as chatbots or AI tools that provide information, are subject to transparency measures: users must be informed that they are interacting with an AI.
This obligation applies to both developers and deployers.
Limited-risk AI systems include:
As well as
A GPAI (general-purpose AI) model is an AI model, typically trained on extensive data using large-scale self-supervision, that displays significant generality and can be integrated into a variety of downstream systems or applications, regardless of how it is placed on the market.
Note: this definition excludes AI models used solely for research, development, and prototyping before they are commercially released.
A GPAI system is an AI system built on a general-purpose AI model.
It is designed to fulfil multiple functions, whether for direct application or for integration into other AI systems.
Here's an overview of GPAI models:
GPAI systems may be used as high-risk AI systems or integrated into them.
GPAI system providers should cooperate with such high-risk AI system providers to enable the latter’s compliance.
All providers of GPAI Models must:
Prepare Comprehensive Technical Documentation: Create detailed documentation that outlines the training and testing processes, along with evaluation results, for the GPAI model.
Provide Integration Documentation for Downstream Providers: Supply clear and thorough information to downstream providers who plan to incorporate the GPAI model into their own systems. This documentation should explain the model’s capabilities, limitations, and compliance requirements.
Implement a Copyright Compliance Policy: Develop and enforce a policy to ensure adherence to the Copyright Directive, protecting intellectual property rights in the use of training data.
Publish a Detailed Training Data Summary: Release a comprehensive summary of the content used to train the GPAI model, detailing the types and sources of data involved.
Systemic risk refers to the potential for high-impact GPAI models to significantly affect the EU market.
Or have negative consequences on:
Such risks are characterised by their ability to scale and propagate across the value chain.
One key indicator of systemic risk is the model’s training resources:
if it uses more than \(10^{25}\) floating point operations (FLOPs) during training, it is considered to have high-impact capabilities and is presumed to pose a systemic risk.
🚨NOTE: The EU Commission has the authority to classify a model as posing a systemic risk.
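The \(10^{25}\) FLOPs threshold can be checked with simple arithmetic. The sketch below uses the common "~6 × parameters × tokens" rule of thumb to estimate training compute; that approximation, and the example model sizes, are our assumptions, not part of the Act.

```python
# The Act's presumption threshold for systemic risk: 10^25 training FLOPs.
SYSTEMIC_RISK_FLOPS = 1e25

def estimate_training_flops(params: float, tokens: float) -> float:
    """Rough estimate via the common ~6 * params * tokens rule of thumb."""
    return 6 * params * tokens

def presumed_systemic_risk(training_flops: float) -> bool:
    """A model above the threshold is presumed to pose a systemic risk."""
    return training_flops > SYSTEMIC_RISK_FLOPS

# Illustrative (made-up) model: 70B parameters trained on 15T tokens.
flops = estimate_training_flops(70e9, 15e12)
print(f"{flops:.2e}")                 # 6.30e+24
print(presumed_systemic_risk(flops))  # False: below the 10^25 threshold
```

Note that the Commission can still designate a model as posing systemic risk even below this compute threshold, so the check above is a presumption, not a safe harbour.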
Additional obligations:
Just don't. Please just follow the regulations.
Organizations that do not comply with prohibited AI practices may face fines of up to €35 million or 7% of their worldwide annual turnover, whichever is higher.
For most other breaches, such as failing to meet the requirements for high-risk AI systems, fines can be as much as €15 million or 3% of worldwide annual turnover, whichever is higher.
Providing incorrect, incomplete, or misleading information to authorities can result in penalties of up to €7.5 million or 1% of worldwide annual turnover, whichever is higher.
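The "whichever is higher" rule means the applicable cap is the maximum of the fixed amount and the turnover percentage. A small sketch of the three tiers above (the tier labels are ours, not the Act's):

```python
# Maximum fine caps under the EU AI Act: (fixed EUR, percent of worldwide
# annual turnover). Tier names are illustrative labels, not legal terms.
TIERS = {
    "prohibited_practice": (35_000_000, 7),
    "other_breach": (15_000_000, 3),
    "incorrect_information": (7_500_000, 1),
}

def max_fine(tier: str, worldwide_turnover_eur: int) -> int:
    """The cap is whichever is higher: fixed amount or turnover percentage."""
    fixed, pct = TIERS[tier]
    return max(fixed, worldwide_turnover_eur * pct // 100)

# A company with €2 billion annual turnover:
print(max_fine("prohibited_practice", 2_000_000_000))    # 140000000 (7% > €35M)
print(max_fine("incorrect_information", 2_000_000_000))  # 20000000
```

For large companies the percentage dominates, which is why the turnover-based cap, rather than the fixed amount, is usually the relevant figure.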
Originally proposed by the European Commission in April 2021, the EU AI Act received approval from the European Parliament on 22 April 2024 and from the EU Member States on 21 May 2024.
The law will come into effect in phases, starting 20 days after its publication in the Official Journal of the EU.
Key dates to note include:
Fundamental rights refer to basic human rights, such as privacy, freedom of expression, and protection against discrimination. The AI EU Act is designed to protect these rights by regulating AI systems that could potentially violate them. For instance, AI systems that perform biometric identification in publicly accessible spaces for law enforcement purposes are strictly regulated to prevent misuse that could undermine individual privacy rights.
General-purpose AI models (GPAI) are AI models capable of performing a wide range of tasks and of being integrated into various applications. These models are subject to specific transparency obligations, requiring providers to disclose details about the data used for training and to document the model's capabilities and limitations. This ensures that downstream users understand the model's potential risks and compliance requirements.
Under the AI Act, providers of AI systems, especially those classified as high-risk, must comply with various transparency obligations. This includes providing clear documentation of the AI system’s functionality, limitations, and risks, as well as informing users when they are interacting with an AI system. This transparency helps public authorities and competent authorities assess compliance and potential impacts.
Competent authorities are national bodies designated to enforce the AI Act within their jurisdictions. They are responsible for ensuring compliance, monitoring the market, and taking corrective action when necessary. Competent authorities also work in collaboration with public authorities to oversee AI applications, especially those with potential risks to public health, safety, and fundamental rights.
The AI Act includes specific provisions for AI systems used for law enforcement purposes, such as remote biometric identification in public spaces. These applications are generally prohibited, except in exceptional cases like searching for missing persons or preventing serious crimes. Competent authorities and law enforcement agencies must ensure that such uses are transparent, justified, and proportionate to the potential risks involved.
Potential risks refer to the adverse impacts that AI systems might have on individuals or society, including threats to privacy, security, and fundamental rights. The AI Act categorises AI systems based on their risk levels—unacceptable, high, limited, or minimal—and imposes corresponding requirements to mitigate these risks.
The AI Act prohibits the use of AI systems for social scoring by public authorities, where individuals or groups are assessed based on social behaviours, personal characteristics, or predicted future behaviours. This measure prevents unfair discrimination and protects fundamental rights by ensuring that AI-driven decisions are fair, transparent, and based on objective criteria.
Service providers in the AI sector must comply with the new regulations set forth by the AI Act. This includes adhering to transparency obligations, implementing robust data governance, and maintaining appropriate cybersecurity measures. Providers must also ensure their systems are designed to facilitate human oversight and allow for intervention when necessary.
The Union has the authority to enforce the AI Act through fines and corrective measures. Non-compliant organisations may face substantial penalties, including fines of up to €35 million or 7% of global turnover for the most serious violations. The Union can also work with competent authorities across Member States to ensure consistent application of the Act.