Blog | Growth Tribe

EU AI Act: The Important Insights, Powerful Tips and Who Must Comply

Written by Marco Tortike | September 9, 2024

Artificial intelligence is transforming industries across the globe, revolutionising everything from healthcare and finance to marketing and logistics.

 

However, with great power comes great responsibility.

 

The European Union (EU) is taking a bold step forward to ensure AI is used safely, ethically, and transparently.

 

The EU AI Act is the world’s first comprehensive AI law, establishing a clear set of rules to protect fundamental rights and public interests while promoting innovation.

 

It introduces a risk-based approach, applying different requirements depending on the level of risk an AI system poses.

 

If you are developing, using, or deploying AI in any capacity within the EU market, this regulation will likely affect you.

 

In this article, you'll find the answers to:

  • Who is the EU AI Act relevant for?
  • What is the EU AI Act exactly?
  • What are the four risk levels, and what do they mean for you?
  • What happens if you don't follow the regulations?
  • When does the act take effect?

 

 

Who is the EU AI Act relevant for? 

Before jumping into the details, it's important to highlight who the EU AI Act was created for. 

 

And yes, it probably applies to you as well. 

 

Short answer: almost everyone who uses AI in the EU market. 

 

If you're touching AI in any way and operating within or interacting with the EU market, the EU AI Act likely applies to you.

 

It's not only relevant for developers and providers of AI systems; it also applies to users of AI, including businesses, organisations, and even individuals who deploy AI tools within the European Union.

 

To be honest, if you're reading this article, the EU AI Act probably applies to you. 

 

So, let's take a closer look and answer some important questions. 

 

What is the EU AI Act exactly? 

The Artificial Intelligence Act of the European Union can be quite an overwhelming document.

 

To help you digest all the information, our Subject Matter Experts have gone through the act and summarised it for you.

 

The most important takeaways are explained in this article.

 

If you're curious to read the full document (which we advise you to do), check it out here. 

 

For this article, we'll refer to it as the EU AI Act (the Artificial Intelligence Act of the European Union).

 

The EU AI Act was adopted to govern the use of artificial intelligence in the European Union.

 

It is the world's first comprehensive legal framework for artificial intelligence.

 

It's designed to ensure AI is used safely, ethically, and transparently, while still allowing businesses to innovate and grow. 

 

The act uses a risk-based approach to regulation, applying different rules to AI systems based on their risk level.

 

There are four risk levels included in the EU AI Act: unacceptable, high, limited, and minimal risk.

 

Let's take a deep dive into each risk level. 
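As a quick preview, the four tiers can be sketched as a simple enumeration. The example systems below are illustrative guesses only, not legal classifications; whether a real product falls into a tier depends on a concrete assessment against the Act itself.

```python
from enum import Enum

class RiskLevel(Enum):
    """The four risk tiers of the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # allowed, but with strict obligations
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # largely unregulated (GPAI rules aside)

# Illustrative examples per tier -- not legal classifications.
EXAMPLES = {
    RiskLevel.UNACCEPTABLE: "social scoring of citizens",
    RiskLevel.HIGH: "CV-screening tool used in hiring",
    RiskLevel.LIMITED: "customer-service chatbot",
    RiskLevel.MINIMAL: "spam filter",
}

for level in RiskLevel:
    print(f"{level.value:>12}: {EXAMPLES[level]}")
```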

 

1. Unacceptable risk  

 

This is the no-go zone.

 

Think of AI systems that manipulate human behaviour, exploit vulnerabilities, or facilitate surveillance on a mass scale.

 

Sounds like your product? Well, that's not great ... 

 

Because ...  

 

Under the EU AI Act, these uses are flat-out banned.

 

Penalties range from €7.5 million or 1% of global annual turnover, up to €35 million or 7% of global annual turnover.

 

👆 Depending on the nature of the non-compliance.

 

AI systems with unacceptable risks include:

  1. Using subliminal, manipulative, or deceptive methods that distort behaviour and undermine informed decision-making, resulting in significant harm.

  2. Exploiting vulnerabilities related to age, disability, or socio-economic conditions to manipulate behaviour, leading to significant harm.

  3. Employing biometric categorisation systems to infer sensitive attributes (such as race, political beliefs, trade union membership, religion or philosophical beliefs, sex life, or sexual orientation), except when used for labelling or filtering lawfully obtained biometric datasets or when law enforcement categorises biometric data.

  4. Evaluating or classifying individuals or groups based on social behaviour or personal characteristics (social scoring), leading to adverse or unfavourable treatment.

  5. Assessing an individual's risk of committing criminal offences solely through profiling or personality traits, except when used to support human assessments based on objective, verifiable facts directly related to criminal activity.

  6. Creating facial recognition databases by indiscriminately scraping facial images from the internet or CCTV footage.

  7. Inferring emotions in workplaces or educational settings, except for medical or safety purposes.

  8. Using ‘real-time’ remote biometric identification (RBI) in publicly accessible spaces for law enforcement, except in the following cases:
    • Searching for missing persons, abduction victims, or individuals who have been trafficked or sexually exploited;
    • Preventing an imminent and substantial threat to life or a foreseeable terrorist attack;
    • Identifying suspects involved in serious crimes, such as murder, rape, armed robbery, narcotics and illegal weapons trafficking, organised crime, and environmental crime. 

* Taken directly from the EU AI Act 


2. High risk 

Here’s where it gets interesting.

 

Is your business in a sector like healthcare, finance, transportation, or hiring, using AI tools that affect critical decisions?

 

Then you’re in the high-risk category.

 

But don’t panic—this isn’t a blockade.

 

It’s a chance to differentiate yourself by adhering to the highest standards of transparency, accountability, and safety.

 

First, it helps to know which systems count as high-risk. The Act's Annex III lists them, covering areas such as biometrics, critical infrastructure, education, employment, access to essential services, law enforcement, migration, and the administration of justice.

👉 It's important to double-check the official text of the EU AI Act for the full overview of systems in this category, as updates may have been published. 

 

Providers of high-risk AI systems must:

  • Implement a comprehensive risk management system that spans the entire lifecycle of the high-risk AI system.

    • Tip: Regularly Review and Update Risk Assessments. Establish a protocol for frequent risk assessments throughout the AI system’s lifecycle. Use risk management software to track and address new risks as they emerge.

  • Ensure robust data governance by using training, validation, and testing datasets that are relevant, sufficiently representative, and, as far as possible, free from errors and complete in accordance with the system's intended purpose.

    • Tip: Regularly audit your training, validation, and testing datasets to ensure they are relevant, representative, and free from errors. Implement automated tools for data validation to streamline this process.

  • Develop detailed technical documentation that demonstrates compliance with the EU AI Act, providing authorities with the necessary information to assess adherence.

    • Tip: Create a Compliance Checklist. Develop a checklist of documentation requirements based on the EU AI Act and ensure that all necessary documents are created and updated. Use templates to standardise this process and ensure consistency.

  • Design the high-risk AI system with automated record-keeping capabilities to log events that are critical for identifying national-level risks and tracking significant modifications throughout the system's lifecycle.

    • Tip: Regularly Review Logs. Schedule regular reviews of these logs to identify patterns or anomalies that could indicate emerging risks or compliance issues.

 

  • Provide clear instructions for use to downstream deployers, enabling them to comply with all relevant regulatory requirements.

    • Tip: Develop User Manuals and Training Materials. Create comprehensive user manuals and training materials for downstream deployers. Include practical examples and FAQs to address common issues.

 

  • Ensure the high-risk AI system is designed to facilitate human oversight, allowing deployers to intervene and manage the system when necessary.

    • Tip: Implement Monitoring Dashboards. Use dashboards that allow human operators to monitor AI system performance in real time and intervene when necessary.

 

  • Design the AI system to achieve optimal levels of accuracy, robustness, and cybersecurity, ensuring reliable and secure performance.

    • Tip: Strengthen Cybersecurity Measures. Implement strong cybersecurity protocols, such as encryption and access controls, to protect the AI system from threats. Regularly update your security measures in response to emerging threats.

 

  • Establish a quality management system to maintain ongoing compliance with all regulatory standards.

    • Tip: Develop Quality Assurance Protocols. Create and maintain quality assurance protocols that include regular audits, process reviews, and compliance checks. Ensure that these protocols are integrated into your overall quality management system.
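The eight provider obligations above lend themselves to being tracked as a simple checklist. Here's a minimal sketch; the class and field names are our own, not anything the Act prescribes:

```python
from dataclasses import dataclass, field

@dataclass
class Obligation:
    name: str
    done: bool = False

@dataclass
class HighRiskChecklist:
    """The eight provider obligations summarised above, as a trackable list."""
    items: list = field(default_factory=lambda: [
        Obligation("risk management system across the lifecycle"),
        Obligation("data governance for training, validation and testing sets"),
        Obligation("technical documentation demonstrating compliance"),
        Obligation("automated record-keeping and event logging"),
        Obligation("instructions for use for downstream deployers"),
        Obligation("human oversight by design"),
        Obligation("accuracy, robustness and cybersecurity"),
        Obligation("quality management system"),
    ])

    def outstanding(self) -> list:
        """Names of the obligations not yet marked as done."""
        return [o.name for o in self.items if not o.done]

checklist = HighRiskChecklist()
checklist.items[0].done = True  # e.g. risk management system is in place
print(f"{len(checklist.outstanding())} of {len(checklist.items)} obligations outstanding")
```

A real compliance program would of course attach evidence, owners, and review dates to each item; the point here is simply that every obligation should be tracked explicitly rather than assumed.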

 

3. Limited risk

AI systems with limited risk, such as chatbots or AI tools that provide information, require transparency measures.

 

It's about informing users they are interacting with an AI.

 

This applies to both developers and deployers. 
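In practice, the duty to inform users can be as simple as disclosing the AI at the start of a conversation. A minimal sketch, where `fake_model_reply` is a stand-in for a real model call and the disclosure wording is our own (the Act requires disclosure, not this exact text):

```python
AI_DISCLOSURE = "Please note: you are chatting with an AI assistant."

def fake_model_reply(message: str) -> str:
    # Stand-in for a real model call.
    return f"(model reply to: {message})"

def respond(message: str, history: list) -> str:
    """Answer a user message, prepending the AI disclosure on first contact."""
    answer = fake_model_reply(message)
    if not history:  # first turn of the conversation
        answer = f"{AI_DISCLOSURE}\n\n{answer}"
    history.append((message, answer))
    return answer

history = []
print(respond("What are your opening hours?", history))  # disclosure included
print(respond("And on Sundays?", history))               # no repeated disclosure
```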

 

Limited-risk AI systems include chatbots and other systems that interact directly with people, as well as AI systems that generate or manipulate content (such as deepfakes), which must be disclosed as artificially generated.

 

  

4. Minimal risk (General Purpose AI—GPAI)

A GPAI model is an AI model, typically trained on extensive data using large-scale self-supervision, that displays significant generality and is capable of performing a wide range of distinct tasks. 

 

This holds regardless of how the model is placed on the market, and covers its ability to be integrated into various downstream systems or applications.

 

Note: this definition excludes AI models used solely for research, development, and prototyping before they are commercially released.

 

A GPAI system is an AI system built on a general-purpose AI model.

 


It is designed to fulfil multiple functions, whether for direct application or for integration into other AI systems.

 

GPAI systems may be used as high-risk AI systems or integrated into them.

 

GPAI system providers should cooperate with such high-risk AI system providers to enable the latter’s compliance.

 

All providers of GPAI Models must: 

  • Prepare Comprehensive Technical Documentation: Create detailed documentation that outlines the training and testing processes, along with evaluation results, for the GPAI model.

  • Provide Integration Documentation for Downstream Providers: Supply clear and thorough information to downstream providers who plan to incorporate the GPAI model into their own systems. This documentation should explain the model’s capabilities, limitations, and compliance requirements.

  • Implement a Copyright Compliance Policy: Develop and enforce a policy to ensure adherence to the Copyright Directive, protecting intellectual property rights in the use of training data.

  • Publish a Detailed Training Data Summary: Release a comprehensive summary of the content used to train the GPAI model, detailing the types and sources of data involved.

 

Systemic risk refers to the potential for high-impact GPAI models to significantly affect the EU market, or to have negative consequences for:

  • Public health
  • Safety
  • Public security
  • Fundamental rights
  • Society at large

 

Such risks are characterised by their ability to scale and propagate across the value chain.

 

One key indicator of systemic risk is the model’s training resources:

if it uses more than 10^25 floating point operations (FLOPs) during training, it is considered to have high-impact capabilities and is presumed to pose a systemic risk.
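To get a feel for the 10^25 threshold, here's a back-of-the-envelope check. The figure of roughly 6 FLOPs per parameter per training token is a widely used rule of thumb for dense transformer training, not a method prescribed by the Act, and the parameter and token counts below are hypothetical:

```python
# Presumption of systemic risk above 1e25 training FLOPs (EU AI Act).
SYSTEMIC_RISK_FLOPS = 1e25

def estimate_training_flops(n_params: float, n_tokens: float) -> float:
    """Rule-of-thumb estimate for dense transformers:
    roughly 6 FLOPs per parameter per training token."""
    return 6.0 * n_params * n_tokens

# Hypothetical model: 1 trillion parameters, 15 trillion training tokens.
flops = estimate_training_flops(1e12, 15e12)
print(f"estimated training compute: {flops:.1e} FLOPs")
if flops > SYSTEMIC_RISK_FLOPS:
    print("presumed to pose systemic risk")
```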

 

🚨NOTE: The EU Commission has the authority to classify a model as posing a systemic risk.

 

Additional obligations:

  • Conduct and Document Model Evaluations: Carry out comprehensive evaluations of the model, including adversarial testing, to detect and address systemic risks. Thoroughly document these evaluations and the steps taken to mitigate identified risks.

  • Identify and Mitigate Systemic Risks: Evaluate potential systemic risks and their sources. Implement measures to address these risks effectively.

  • Report Serious Incidents Promptly: Monitor, document, and report any serious incidents and corrective actions to the AI Office and relevant national authorities without delay.

  • Implement Robust Cybersecurity Measures: Ensure the model is protected by strong cybersecurity protocols to safeguard against potential threats and breaches.

 

What if I don't follow the regulations in the EU AI Act?

 

Just don't. Please just follow the regulations. 

 

Organisations that engage in prohibited AI practices may face fines of up to €35 million or 7% of their worldwide annual turnover, whichever is higher.

 

For most other breaches, such as failing to meet the requirements for high-risk AI systems, fines can be as much as €15 million or 3% of worldwide annual turnover, whichever is higher.

 

Providing incorrect, incomplete, or misleading information to authorities can result in penalties of up to €7.5 million or 1% of worldwide annual turnover, whichever is higher.

  • For SMEs and startups, the fine is the lower of the two possible amounts specified above.
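All three penalty tiers follow the same "whichever is higher" pattern (and, for SMEs, "whichever is lower"), which makes them easy to sketch. The tier labels below are our own shorthand, not terms from the Act:

```python
# (fixed cap in euros, percentage of worldwide annual turnover)
TIERS = {
    "prohibited_practice": (35_000_000, 0.07),
    "other_breach":        (15_000_000, 0.03),
    "misleading_info":     (7_500_000,  0.01),
}

def max_fine(tier: str, worldwide_turnover: float, sme: bool = False) -> float:
    """Maximum fine for a tier: the higher of the two amounts,
    or the lower of the two for SMEs and startups."""
    fixed_cap, pct = TIERS[tier]
    turnover_cap = pct * worldwide_turnover
    return min(fixed_cap, turnover_cap) if sme else max(fixed_cap, turnover_cap)

# A company with €1 billion worldwide annual turnover:
print(max_fine("prohibited_practice", 1_000_000_000))
# 7% of €1bn = €70M, which exceeds the €35M fixed amount
print(max_fine("misleading_info", 1_000_000_000, sme=True))
# for an SME, capped at the lower amount: €7.5M
```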

 

When does it take effect?

Originally proposed by the European Commission in April 2021, the EU AI Act received approval from the European Parliament on 22 April 2024 and from the EU Member States on 21 May 2024.

 

The law entered into force on 1 August 2024, 20 days after its publication in the Official Journal of the EU, and its rules apply in phases.

 

Key dates to note include:

  • After six months: The prohibitions on certain AI practices will come into force.
  • After 12 months: The rules for general-purpose AI (GPAI) will apply to new GPAI models. Providers of existing GPAI models, which have been on the market for at least 12 months before the Act takes effect, will have 36 months from the date of enforcement to comply.
  • After 24 months: The regulations for high-risk AI systems will be enforced.
  • After 36 months: The rules for AI systems that are products or safety components regulated under specific EU laws will be implemented.
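Counting from the entry into force on 1 August 2024, the phases above work out to the application dates in the sketch below; always verify them against the Official Journal for your own use case:

```python
from datetime import date

# Phased application dates of the EU AI Act (entry into force: 1 August 2024).
MILESTONES = {
    date(2025, 2, 2): "prohibitions on unacceptable-risk AI practices",
    date(2025, 8, 2): "rules for new general-purpose AI (GPAI) models",
    date(2026, 8, 2): "obligations for high-risk AI systems",
    date(2027, 8, 2): "rules for AI in products regulated under specific EU laws",
}

def applicable_rules(on: date) -> list:
    """Return the phased rules that already apply on a given date."""
    return [desc for d, desc in sorted(MILESTONES.items()) if on >= d]

for rule in applicable_rules(date(2025, 9, 1)):
    print(f"- {rule}")
```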

 

FAQ 

 

What are fundamental rights, and how does the AI Act protect them?

Fundamental rights refer to basic human rights, such as privacy, freedom of expression, and protection against discrimination. The AI EU Act is designed to protect these rights by regulating AI systems that could potentially violate them. For instance, AI systems that perform biometric identification in publicly accessible spaces for law enforcement purposes are strictly regulated to prevent misuse that could undermine individual privacy rights.

 

What are general-purpose AI models?

General-purpose AI models (GPAI) are AI models capable of performing a wide range of tasks and of being integrated into various applications. These models are subject to specific transparency obligations, requiring providers to disclose details about the data used for training and to document the model's capabilities and limitations. This ensures that downstream users understand the model's potential risks and compliance requirements.

 

What transparency obligations are imposed on AI system providers?

Under the AI Act, providers of AI systems, especially those classified as high-risk, must comply with various transparency obligations. This includes providing clear documentation of the AI system’s functionality, limitations, and risks, as well as informing users when they are interacting with an AI system. This transparency helps public authorities and competent authorities assess compliance and potential impacts.

 

What role do competent authorities play in the enforcement of the AI Act?

Competent authorities are national bodies designated to enforce the AI Act within their jurisdictions. They are responsible for ensuring compliance, monitoring the market, and taking corrective action when necessary. Competent authorities also work in collaboration with public authorities to oversee AI applications, especially those with potential risks to public health, safety, and fundamental rights.

 

How does the AI Act address the use of AI systems for law enforcement purposes?

The AI Act includes specific provisions for AI systems used for law enforcement purposes, such as remote biometric identification in public spaces. These applications are generally prohibited, except in exceptional cases like searching for missing persons or preventing serious crimes. Competent authorities and law enforcement agencies must ensure that such uses are transparent, justified, and proportionate to the potential risks involved.

 

What are the potential risks associated with AI under the AI Act?

Potential risks refer to the adverse impacts that AI systems might have on individuals or society, including threats to privacy, security, and fundamental rights. The AI Act categorises AI systems based on their risk levels—unacceptable, high, limited, or minimal—and imposes corresponding requirements to mitigate these risks.

 

How does the AI Act ensure fair treatment in AI-driven scoring systems?


The AI Act prohibits the use of AI systems for social scoring by public authorities, where individuals or groups are assessed based on social behaviours, personal characteristics, or predicted future behaviours. This measure prevents unfair discrimination and protects fundamental rights by ensuring that AI-driven decisions are fair, transparent, and based on objective criteria.

 

How will the AI Act affect service providers in the AI sector?


Service providers in the AI sector must comply with the new regulations set forth by the AI Act. This includes adhering to transparency obligations, implementing robust data governance, and maintaining appropriate cybersecurity measures. Providers must also ensure their systems are designed to facilitate human oversight and allow for intervention when necessary.

 

What actions can the Union take if AI providers fail to comply with the Act?

The Union has the authority to enforce the AI Act through fines and corrective measures. Non-compliant organisations may face substantial penalties, including fines of up to €35 million or 7% of global turnover for the most serious violations. The Union can also work with competent authorities across Member States to ensure consistent application of the Act.