AI Systems with ‘Unacceptable Risk’ Are Now BANNED in the EU
The EU Just Dropped the Hammer on Risky AI
Starting February 2, 2025, the European Union can officially ban AI systems deemed to pose an “unacceptable risk.” This marks the first compliance deadline under the EU’s groundbreaking AI Act, a sweeping regulatory framework that’s been years in the making. The stakes? Companies could face fines of up to €35 million (~$36 million) or 7% of their worldwide annual revenue—whichever hits harder.
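That “whichever hits harder” rule is simply the larger of two numbers: a flat €35 million cap, or 7% of worldwide annual revenue. A minimal sketch (the function name and figures are illustrative, based on the penalty ceiling described above):

```python
def max_penalty_eur(annual_revenue_eur: float) -> float:
    """Illustrative maximum fine for prohibited AI practices:
    EUR 35 million or 7% of worldwide annual revenue,
    whichever is higher."""
    FLAT_CAP = 35_000_000    # flat EUR 35 million ceiling
    REVENUE_SHARE = 0.07     # 7% of worldwide annual revenue
    return max(FLAT_CAP, REVENUE_SHARE * annual_revenue_eur)

# For a company with EUR 1 billion in revenue, 7% (EUR 70M)
# exceeds the flat cap, so the revenue-based figure applies:
print(max_penalty_eur(1_000_000_000))  # 70000000.0
```

In other words, the flat cap only binds for companies with less than €500 million in annual revenue; above that, the 7% figure dominates.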
What’s at Stake? The Four Risk Levels of AI
The EU’s AI Act categorizes AI systems into four risk levels, each with its own regulatory firepower:
- Minimal Risk: Think email spam filters. These get a free pass with no oversight.
- Limited Risk: Customer service chatbots fall here, facing light-touch regulation.
- High Risk: AI used in healthcare recommendations or critical infrastructure? Heavy oversight applies.
- Unacceptable Risk: The big no-no. These systems are completely banned.
“Organizations are expected to be fully compliant by February 2, but the next big deadline is in August. By then, fines and enforcement provisions will take effect.”
Rob Sumroy, Head of Technology at Slaughter and May
The AI Pact: Who’s In and Who’s Out?
Over 100 companies, including tech giants like Amazon, Google, and OpenAI, signed the EU AI Pact last September. This voluntary pledge committed them to applying the AI Act’s principles ahead of the official deadline. But not everyone joined the party. Meta, Apple, and French AI startup Mistral notably skipped the Pact. However, their absence doesn’t mean they’re off the hook—compliance is mandatory for all.
Exceptions to the Rule: When Risky AI Gets a Pass
Even the strictest rules have loopholes. The AI Act allows law enforcement to use biometric systems in public places for targeted searches—like finding an abduction victim or preventing an imminent threat. Similarly, emotion-detection AI gets a green light in workplaces and schools if there’s a medical or safety justification, such as therapeutic use.
The Road Ahead: Clarity and Challenges
While the February 2 deadline is largely a formality, real enforcement kicks in this August, when fines and enforcement provisions take effect. The European Commission plans to release additional guidelines in early 2025, but the clock is ticking. Companies must also navigate how the AI Act interacts with other laws like GDPR, NIS2, and DORA, creating potential overlaps and compliance headaches.
“AI regulation doesn’t exist in isolation. Understanding how these laws fit together is just as crucial as understanding the AI Act itself.”
Rob Sumroy
What This Means for You
If your business operates in the EU, now is the time to act. Audit your AI systems, identify high-risk applications, and ensure compliance before the enforcement hammer drops. The EU isn’t playing around—this is a wake-up call for every organization leveraging AI.