EU AI ACT /// COMPLIANCE /// ETHICS /// RISK LEVELS ///

AI Regulations:
EU AI Act

Navigate the world's first comprehensive legal framework for AI. Understand risk categories, transparency requirements, and how to stay compliant.


Lecturer: The EU AI Act is the world's first comprehensive legal framework for Artificial Intelligence. It operates on a 'Risk-Based Approach'.

Legal Framework

UNLOCK NODES BY MASTERING COMPLIANCE.

Risk Framework

Understanding the four-tier risk pyramid established by the EU AI Act.

Audit Check

Which tier includes systems that are completely prohibited in the EU?

Ethics & Safety Forum

Debate & Compliance Help


Struggling to classify your AI model? Join the community of ethical AI engineers and policy makers.

Navigating the EU AI Act

Author

Pascual Vila

AI Ethics & Policy Instructor

"The EU AI Act is a global milestone. It aims to foster trustworthy AI in Europe and beyond, establishing the first comprehensive legal framework based on the risks that AI can pose to safety and fundamental rights."

The Banned: Unacceptable Risk

Certain AI practices are completely prohibited because they pose a clear threat to people's safety, livelihoods, and rights. This includes cognitive behavioral manipulation (such as voice-activated toys that encourage dangerous behavior in minors) and social scoring by governments. Real-time remote biometric identification in publicly accessible spaces for law enforcement is also generally prohibited, with narrow, strictly regulated exceptions.

Strict Compliance: High Risk

AI systems categorized as 'High Risk' are allowed, but providers must adhere to strict rules before placing them on the market. These include systems used in critical infrastructure (e.g., transport), education and vocational training (e.g., exam scoring), employment (e.g., CV-sorting software), and essential private and public services (e.g., credit scoring).

Requirements include adequate risk assessment and mitigation, high-quality datasets, activity logging to ensure traceability, detailed documentation, clear information for users, and human oversight.

Transparency: Limited Risk

Limited risk covers AI systems such as chatbots and emotion recognition systems. The primary obligation here is transparency: users must be made aware that they are interacting with a machine so they can make an informed decision to continue or step back. This also applies to deepfakes, which must be clearly labeled as artificially generated. The fourth tier, Minimal Risk (e.g., spam filters or AI in video games), covers the vast majority of AI systems and carries no additional obligations.
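The four-tier pyramid described above can be sketched as a simple lookup. The tier names follow the Act, but the use-case-to-tier mapping below is a hypothetical illustration drawn from the examples in this lesson; real classification requires legal analysis of the Act's annexes, not a dictionary.

```python
# Sketch of the EU AI Act's four-tier risk pyramid as a lookup table.
# Use-case keys are illustrative examples from the sections above,
# NOT an authoritative classification.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "allowed, subject to strict conformity requirements"
    LIMITED = "allowed, subject to transparency obligations"
    MINIMAL = "no additional obligations"

USE_CASE_TIERS = {
    "social_scoring_by_government": RiskTier.UNACCEPTABLE,
    "cv_sorting_software": RiskTier.HIGH,
    "exam_scoring": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "deepfake_generator": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def obligations(use_case: str) -> str:
    """Return the tier and its headline obligation for a known use case;
    unknown use cases default to Minimal Risk in this toy model."""
    tier = USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)
    return f"{tier.name}: {tier.value}"

print(obligations("cv_sorting_software"))
# → HIGH: allowed, subject to strict conformity requirements
```

Note that the default-to-minimal fallback is purely a convenience for the sketch; under the Act, an unclassified system still needs assessment.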

⚖️ Frequently Asked Questions

Does the EU AI Act apply to companies outside of Europe?

Yes. The EU AI Act has extraterritorial reach (often referred to as the "Brussels Effect"). It applies to providers placing AI systems on the EU market, or putting them into service in the EU, regardless of whether those providers are established in the EU or in a third country.

What are the penalties for non-compliance?

Fines are substantial, scale with the severity of the infringement, and in each case the higher of the two amounts applies:

  • Up to €35 million or 7% of total worldwide annual turnover for banned practices.
  • Up to €15 million or 3% of turnover for violating high-risk AI obligations.
  • Up to €7.5 million or 1% of turnover for supplying incorrect information.
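Because each tier is capped at "whichever is higher" of a fixed amount and a share of worldwide turnover, the ceiling is a simple max(). The sketch below illustrates that arithmetic; the tier names and helper are hypothetical, and this is an illustration of the published caps, not legal advice.

```python
# Hypothetical sketch: the maximum possible fine under the EU AI Act's
# tiered penalty regime. Each tier caps fines at the HIGHER of a fixed
# euro amount and a percentage of total worldwide annual turnover.
PENALTY_TIERS = {
    "banned_practice": (35_000_000, 0.07),        # €35M or 7%
    "high_risk_obligation": (15_000_000, 0.03),   # €15M or 3%
    "incorrect_information": (7_500_000, 0.01),   # €7.5M or 1%
}

def max_fine(tier: str, annual_turnover_eur: float) -> float:
    """Return the fine ceiling: the higher of the fixed cap and the
    turnover-based cap for the given infringement tier."""
    fixed_cap, turnover_share = PENALTY_TIERS[tier]
    return max(fixed_cap, turnover_share * annual_turnover_eur)

# Example: a provider with €2 billion worldwide turnover
print(max_fine("banned_practice", 2_000_000_000))  # 7% of €2B → 140000000.0
```

For small providers the fixed cap dominates (1% of €100M turnover is €1M, so the €7.5M cap applies); for large ones the turnover share does.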

How does the Act treat Foundation Models / General Purpose AI (GPAI)?

Models like GPT-4 are classified as General Purpose AI. They face baseline transparency requirements (such as disclosing training data summaries and complying with EU copyright law). GPAI models that pose "systemic risks" (presumed when the cumulative compute used to train them exceeds 10^25 floating-point operations) face additional scrutiny, adversarial testing ("red-teaming"), and serious-incident reporting.
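The systemic-risk presumption reduces to a single threshold comparison. A minimal sketch, assuming only the 10^25 FLOP figure stated above; the function name is illustrative, and in practice the presumption is rebuttable and the Commission can also designate models on other grounds.

```python
# Hypothetical check of the GPAI systemic-risk presumption:
# the Act presumes systemic risk when cumulative training compute
# exceeds 10^25 floating-point operations.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def presumed_systemic_risk(training_flops: float) -> bool:
    """True if the model's training compute meets or exceeds the
    threshold that triggers the systemic-risk presumption."""
    return training_flops >= SYSTEMIC_RISK_FLOP_THRESHOLD

print(presumed_systemic_risk(2e25))  # True  — above the threshold
print(presumed_systemic_risk(5e23))  # False — well below it
```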

Legal Terminology

High-Risk AI
Systems that negatively affect safety or fundamental rights (e.g., biometric identification, education, employment, critical infrastructure).
GPAI
General-Purpose AI (e.g., LLMs). Must adhere to transparency requirements and respect copyright policies.
Deepfake
AI-generated image, audio, or video that resembles a real person or event. Must be clearly labeled as synthetic under the 'Limited Risk' transparency obligations.
CE Marking
A certification mark indicating conformity with health, safety, and environmental protection standards. High-Risk AI must bear this.
Regulatory Sandbox
A controlled environment facilitating the development, testing, and validation of innovative AI systems before market placement.
Systemic Risk
A risk specific to highly capable GPAI models that can cause significant negative impacts on public health, safety, or democratic processes.