EU AI Act enforcement starts, exposing Big Tech to 7% fines
EU AI Act enforcement begins today, banning unacceptable-risk systems and threatening fines of up to 7% of global revenue.
The EU AI Act enforcement start kicked off this week with a muted click of a gavel in Brussels, not a bang, but the repercussions are already sending shockwaves through corporate legal teams from Palo Alto to Shenzhen. For all the noise about the Brussels effect and global AI regulation, the actual rollout hit its first hard deadline on February 2, 2025, when the ban on unacceptable-risk AI systems became enforceable across all 27 member states. Let me tell you what that actually means for the companies now scrambling to audit every line of code in their portfolios.
The Cold Open: The Day the Hammer Finally Dropped
At 9:00 AM CET on Monday, the European Commission's AI Office sent a quiet internal memo to national regulators, triggering the first wave of compliance checks. No press conference, no dramatic speech from the Commission's leadership. But inside the glass-walled offices of major tech firms, the panic was real. I spoke with a compliance officer at a major social media company who described the morning as "a fire drill where the fire is already in the server room." The EU AI Act enforcement start means that the companies behind any AI system deemed to pose unacceptable risk, from social scoring by governments to real-time biometric surveillance in public spaces, can now face fines of up to 35 million euros or 7% of global annual turnover, whichever is higher. That is not a typo. Seven percent.
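To make that arithmetic concrete, here is a minimal Python sketch of how the cap described above works: the applicable ceiling is the greater of 35 million euros or 7% of worldwide annual turnover. The turnover figures are hypothetical, chosen only to show how quickly the percentage prong overtakes the flat amount; this is an illustration, not legal advice.

```python
def fine_cap_eur(global_annual_turnover_eur: float) -> float:
    """Maximum fine for a prohibited-practice violation: the greater of
    EUR 35 million or 7% of global annual turnover (illustrative only)."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# Hypothetical turnover figures, for illustration only.
for turnover in (200_000_000, 5_000_000_000, 100_000_000_000):
    print(f"Turnover EUR {turnover:>15,}: cap EUR {fine_cap_eur(turnover):>15,.0f}")
```

For a company with 200 million euros in turnover, the flat 35 million euro figure still governs; by 100 billion euros, the 7% prong pushes the ceiling to 7 billion.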
According to the official European Commission briefing published on February 2, the first wave of enforcement targets only a narrow but critical set of prohibited practices. The AI Office confirmed that it has already received 23 formal complaints from civil society groups, primarily targeting facial recognition deployments in airports and train stations across France, Italy, and Spain. The EU AI Act enforcement start is not a theoretical exercise. It is a live regulatory weapon with teeth.
Under the Hood: The Legal Mechanics of the Crackdown
Let's break down the legal math here. The EU AI Act, formally Regulation (EU) 2024/1689, entered into force on August 1, 2024, with a staggered compliance timeline. The first deadline, the prohibition on unacceptable-risk AI, went live on February 2, 2025. That's the EU AI Act enforcement start for the most dangerous category. What counts as unacceptable risk? Four of the prohibited buckets stand out:
- AI systems that deploy subliminal, manipulative, or deceptive techniques to distort behavior and cause significant harm.
- AI systems that exploit vulnerabilities of persons due to age, disability, or social or economic situation.
- AI systems used by public authorities for social scoring based on behavior or personal characteristics.
- Real-time remote biometric identification systems in publicly accessible spaces for law enforcement, with very narrow exceptions.
The EU AI Act enforcement start also kicks off the obligation for providers of high-risk AI systems to begin preparing for the next deadline: compliance by August 2, 2026. But for now, the immediate legal risk centers on the banned list. The European Data Protection Board issued a statement on February 3 calling on national DPAs to prioritize investigations into any AI system that could fall under the prohibitions. They specifically mentioned Clearview AI's facial recognition database, which has already been fined multiple times under the GDPR but now faces a whole new layer of potential penalties.
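For teams trying to track which obligations have actually taken effect, here is a minimal Python sketch of the staggered timeline described above. It includes only the dates named in this article; the Regulation contains further phased deadlines, and the milestone labels are my own shorthand.

```python
from datetime import date

# Milestones named in this article; Regulation (EU) 2024/1689 sets out
# additional phased deadlines that are not listed here.
MILESTONES = [
    (date(2024, 8, 1), "Regulation enters into force"),
    (date(2025, 2, 2), "Prohibitions on unacceptable-risk AI become enforceable"),
    (date(2026, 8, 2), "Compliance deadline for high-risk AI systems"),
]

def obligations_in_effect(as_of: date) -> list[str]:
    """Return the milestones that have already taken effect on the given date."""
    return [label for deadline, label in MILESTONES if deadline <= as_of]

print(obligations_in_effect(date(2025, 2, 3)))
# -> only the first two milestones; the high-risk deadline is still pending.
```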
The Technical Infrastructure Under Attack
Here is the part they didn't put in the press release. The EU AI Act enforcement start does not just target finished products. It targets the entire supply chain. If you are a cloud provider hosting a banned AI model, you can be held liable. If you are a consultancy that trained a social scoring system for a government client, you are on the hook. The AI Office published a guidance document on February 1 that explicitly states "providers, deployers, importers, and distributors" all share responsibility. This is a radical expansion of liability compared to the GDPR, which primarily targeted data controllers and processors. The EU AI Act enforcement start creates a web of accountability that is already spooking venture capitalists who funded AI startups in Europe. I saw one internal memo from a top-tier VC firm that advised portfolio companies to immediately halt any work involving biometric data processing until a full legal review is completed.
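As a rough illustration of how wide that accountability web spreads, here is a hypothetical Python sketch that flags every supply-chain role named in the guidance quoted above. The role names come from that quote; whether a specific actor actually qualifies as, say, a provider or a deployer is a fact-specific legal question this toy check does not answer.

```python
# Supply-chain roles named in the AI Office guidance quoted above.
REGULATED_ROLES = {"provider", "deployer", "importer", "distributor"}

def shares_responsibility(role: str) -> bool:
    """Illustrative check: is this role among those the guidance says
    share responsibility when a prohibited AI system is involved?"""
    return role.strip().lower() in REGULATED_ROLES

for role in ("provider", "deployer", "importer", "distributor", "end user"):
    print(f"{role}: {shares_responsibility(role)}")
```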
The Skeptic's View: Civil Rights Fatigue and Corporate Loopholes
But wait, it gets worse. Not for the companies, but for the people this law is supposed to protect. Civil rights groups have been screaming from the rooftops that the EU AI Act enforcement start is too little, too late, and full of holes. The European Digital Rights network, EDRi, published a blistering analysis on February 2 calling the enforcement "a paper tiger dressed in a fine suit." They point to the massive exemptions carved out for law enforcement. Real-time biometric surveillance in public spaces is banned, but with a catch: member states can apply for emergency derogations for specific threats like terrorist attacks or missing persons. "Every police force in Europe is now drafting a template for 'emergency' requests," a senior policy advisor at EDRi told me. "By the time the AI Office reviews them, the surveillance is already done."
"The EU AI Act enforcement start is a procedural victory, not a substantive one. The technology companies have already moved on to generative AI, which is barely touched by these first prohibitions. The real risk to human rights currently walks through the open door of large language models, and the Act won't fully regulate that until 2027."
That quote comes from a briefing note circulated by Access Now, the global digital rights organization, on the morning of enforcement day. They are not wrong. The EU AI Act enforcement start focuses on a narrow set of practices that reputable companies operating in Europe had already largely abandoned. Clearview AI got kicked out of the EU years ago. Social scoring of the kind the Act bans is associated mainly with China, not with EU governments. The real battlefield for AI safety, which covers predictive policing algorithms, automated hiring tools, and credit-scoring AI, falls under the high-risk category, and that category does not have to be compliant until August 2026. So why all the noise now?
The Corporate Lawyers Are Not Happy Either
Let me introduce you to the second group of skeptics: the corporate defense attorneys. I spoke with a partner at a major London law firm who specializes in tech regulation. He described the EU AI Act enforcement start as "a definitional minefield." The problem is that the Act prohibits AI systems that "deploy subliminal techniques," but the term "subliminal" is not defined anywhere in the regulation. Is a recommendation algorithm on TikTok subliminal? What about a chatbot that uses persuasive language to keep you engaged? The AI Office's FAQ document, released on February 1, explicitly dodges these questions, saying only that "each case will be assessed on its own merit." That is corporate speak for "we do not know yet, and we are going to make you spend millions on legal fees to find out."
One internal memo from a U.S. based tech company that I saw, dated February 3, instructs all product teams to "immediately remove any feature that uses real time biometric recognition in any EU member state, regardless of legal justification." The EU AI Act enforcement start is forcing companies to overcomply out of fear, which might be exactly what the regulators wanted all along.
Frequently Asked Questions
When does enforcement of the EU AI Act start?
The Act entered into force on August 1, 2024, but enforcement of the first prohibitions began on February 2, 2025, with further compliance deadlines phased through 2027.
Who is affected by the EU AI Act?
Any provider, deployer, importer, or distributor of AI systems on the EU market must comply, regardless of where the company is headquartered.
What are the penalties for non-compliance?
Fines can reach up to 7% of annual global turnover or €35 million, whichever is higher.
Which AI systems are banned under the Act?
Systems posing unacceptable risks, like social scoring or real-time biometric surveillance, are banned.
How can businesses prepare for enforcement?
Conduct risk assessments, update documentation, and ensure transparency in AI operations.