The EU AI Act has been the biggest story in AI regulation for years. But now that it’s actually being enforced, the question has shifted from “what does it say?” to “what does it mean for my business today?”
Where We Stand Right Now
The EU AI Act is taking effect in phases, and as of early 2026, its most impactful provisions are live:
Prohibited practices are now banned outright. Social scoring systems, real-time remote biometric identification in public spaces (with narrow law-enforcement exceptions), and AI that manipulates people through subliminal techniques are all illegal in the EU.
High-risk AI systems must comply with strict requirements: risk assessments, documentation, human oversight mechanisms, data governance standards, and accuracy monitoring. This covers AI in healthcare, education, employment, law enforcement, and critical infrastructure.
General-purpose AI models (like GPT-4, Claude, Gemini) must meet transparency requirements. Providers must publish technical documentation, put a policy in place to comply with EU copyright law, and provide summaries of the content used for training.
Systemic risk models — the most powerful AI systems — face additional obligations: adversarial testing, incident reporting, cybersecurity measures, and energy consumption reporting.
Who’s Actually Affected
The short answer: almost every company using AI in the European market.
AI providers (companies that build and sell AI systems) bear the heaviest compliance burden. If you’re selling an AI hiring tool, a medical diagnosis system, or a credit scoring model in the EU, you need to comply with the high-risk requirements.
AI deployers (companies that use AI systems built by others) have lighter but still significant obligations. You need to ensure proper human oversight, monitor for problems, and maintain records.
Companies outside the EU are affected if they offer AI systems or services to EU customers. Sound familiar? It’s the same extraterritorial reach as GDPR.
The Compliance Reality
Here’s what companies are actually dealing with:
The documentation requirements are extensive. For high-risk AI systems, you need detailed technical documentation covering the system’s purpose, architecture, training data, testing procedures, accuracy metrics, and known limitations. Most companies don’t have this documentation and are scrambling to create it.
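The Act doesn't prescribe a file format for any of this, but one practical pattern is to track the material as a structured record. Here's a minimal sketch; the field names are hypothetical and simply mirror the list above, not the Act's actual annexes:

```python
from dataclasses import dataclass, field

# Hypothetical structure for inventorying high-risk system documentation.
# Fields mirror the categories named above, not the Act's official annexes.
@dataclass
class TechnicalDocumentation:
    system_purpose: str               # intended use and deployment context
    architecture: str                 # model type, components, dependencies
    training_data_summary: str        # sources, provenance, preprocessing
    testing_procedures: list[str] = field(default_factory=list)
    accuracy_metrics: dict[str, float] = field(default_factory=dict)
    known_limitations: list[str] = field(default_factory=list)

# A made-up example entry for an AI hiring tool:
doc = TechnicalDocumentation(
    system_purpose="Resume screening for engineering roles",
    architecture="Fine-tuned transformer classifier behind a REST API",
    training_data_summary="Anonymized historical applications, 2019-2024",
    testing_procedures=["holdout evaluation", "bias audit across groups"],
    accuracy_metrics={"precision": 0.91, "recall": 0.87},
    known_limitations=["not validated for non-English resumes"],
)
```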
Risk assessments are subjective. The Act requires “fundamental rights impact assessments” for certain high-risk AI deployments, but there’s limited guidance on what an adequate assessment looks like. Companies are making their best guesses and hoping regulators agree.
The penalties are serious. Up to 35 million euros or 7% of global annual turnover, whichever is higher, for the most severe violations. That’s enough to get the attention of even the largest tech companies.
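To make that cap concrete, here's a minimal sketch of the arithmetic; the turnover figure is a made-up example, not a real company's:

```python
# Illustrative only: the top fine tier is the HIGHER of EUR 35 million
# or 7% of worldwide annual turnover.
FLAT_CAP_EUR = 35_000_000
TURNOVER_RATE = 0.07

def max_fine(global_annual_turnover_eur: float) -> float:
    """Maximum possible fine for the most severe violations."""
    return max(FLAT_CAP_EUR, TURNOVER_RATE * global_annual_turnover_eur)

# Hypothetical company with EUR 2 billion in turnover:
# 7% of 2,000,000,000 = 140,000,000, which exceeds the 35M floor.
print(f"Max fine: EUR {max_fine(2_000_000_000):,.0f}")  # EUR 140,000,000
```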
Enforcement is uneven. Each EU member state designates its own enforcement authority, and some are better-resourced than others. This creates uncertainty about how consistently the rules will be applied.
What Companies Are Actually Doing
Based on what I’m seeing in the market:
Large tech companies (Google, Microsoft, Meta, OpenAI, Anthropic) have dedicated EU AI Act compliance teams and are investing heavily in documentation, testing, and governance processes. They’re treating this like GDPR 2.0 — expensive but manageable.
Mid-size AI companies are struggling more. They have the compliance obligations but not the resources of big tech. Many are hiring consultants, which is expensive. Some are considering whether the EU market is worth the compliance cost.
Startups are in the hardest position. Compliance costs that are manageable for Google are potentially company-killing for a 10-person startup. Some are choosing to launch in the US or Asia first and tackle EU compliance later.
Non-AI companies using AI tools are often unaware of their obligations. A company that uses an AI chatbot for customer service or an AI tool for hiring might not realize they have deployer obligations under the Act.
The Criticism
The EU AI Act has no shortage of critics:
“It’s too prescriptive.” The detailed requirements may become outdated quickly as AI technology evolves. Regulation designed for today’s AI might not make sense for next year’s.
“It stifles innovation.” European AI companies argue they’re being handicapped compared to US and Chinese competitors who face lighter regulation. Some AI talent and investment is flowing to jurisdictions with fewer restrictions.
“It’s too vague in places.” Despite being hundreds of pages long, the Act leaves many important questions to future guidance documents and standards bodies. Companies want clarity that doesn’t exist yet.
“It doesn’t go far enough.” Civil society organizations argue that the exceptions (like the law enforcement carve-outs for biometric surveillance) are too broad and that the Act should be more protective of individual rights.
What Happens Next
The EU AI Office is developing detailed guidelines, harmonized standards, and codes of practice that will fill in the gaps. The first enforcement actions will likely come in late 2026 or early 2027, and they’ll set important precedents.
Whether you love it or hate it, the EU AI Act is now the most thorough AI regulation in the world, and it’s shaping how AI is developed and deployed globally. Just like GDPR became the de facto global privacy standard, the AI Act is likely to influence AI regulation worldwide.
The companies that invest in compliance now will have a head start. The ones that ignore it will face a rude awakening when enforcement begins in earnest.
Originally published: March 12, 2026