Navigating the EU AI Act

Pascual Vila
AI Ethics & Policy Instructor
"The EU AI Act is a global milestone. It aims to foster trustworthy AI in Europe and beyond, establishing the first comprehensive legal framework based on the risks that AI can pose to safety and fundamental rights."
The Banned: Unacceptable Risk
Certain AI practices are completely prohibited because they are considered a clear threat to people's safety, livelihoods, and rights. This includes cognitive behavioral manipulation (such as voice-activated toys that encourage dangerous behavior in minors) and social scoring by governments. Real-time remote biometric identification in publicly accessible spaces for law enforcement is also generally prohibited, subject to narrow, strictly regulated exceptions.
Strict Compliance: High Risk
AI systems categorized as 'High Risk' are allowed, but providers must meet strict requirements before placing them on the market. This category includes systems used in critical infrastructure (e.g., transport), education and vocational training (e.g., exam scoring), employment (e.g., CV-sorting software), and essential private and public services (e.g., credit scoring).
Requirements include adequate risk assessment, high quality of datasets, logging of activity, detailed documentation, clear user information, and human oversight.
Transparency: Limited Risk
Limited risk covers AI systems such as chatbots and emotion recognition systems. The primary obligation here is transparency: users must be made aware that they are interacting with a machine so they can make an informed decision to continue or step back. The same logic applies to deepfakes, which must be clearly labeled as artificially generated.
⚖️ Frequently Asked Questions
Does the EU AI Act apply to companies outside of Europe?
Yes. The EU AI Act has extraterritorial reach (often referred to as the "Brussels Effect"). It applies to providers placing AI systems on the EU market, or putting them into service in the EU, regardless of whether those providers are established in the EU or in a third country.
What are the penalties for non-compliance?
Fines are substantial and depend on the severity of the infringement. For companies, each cap is the higher of a fixed amount or a percentage of total worldwide annual turnover:
- Up to €35 million or 7% of total worldwide annual turnover for banned practices.
- Up to €15 million or 3% of turnover for violating most other obligations, including those for high-risk AI.
- Up to €7.5 million or 1.5% of turnover for supplying incorrect information to authorities.
How does the Act treat Foundation Models / General Purpose AI (GPAI)?
Models like GPT-4 are classified as General Purpose AI. They face baseline transparency requirements, such as publishing a summary of the content used for training and complying with EU copyright law. GPAI models that pose "systemic risks" (presumed, in part, from the amount of compute used to train them) face additional scrutiny, adversarial testing (red-teaming) requirements, and serious-incident reporting.