The EU AI Act is already in force — this is not a future requirement

The EU AI Act entered into force on August 1, 2024. The common mistake is treating it as a future regulation still being drafted. It is not. Specific obligations are already active, others hit in August 2026, and enforcement powers are operational. Here is where things stand right now:

Note on recent developments: In November 2025, the EU proposed a Digital Omnibus package that could delay some high-risk system requirements by up to 16 months if harmonized technical standards are unavailable. However, backstop dates of December 2027 and August 2028 ensure enforcement happens regardless. More importantly, GPAI obligations and prohibited practice bans are already in effect and unaffected by any potential delay. Do not use this proposal as a reason to pause compliance planning.

Does the EU AI Act apply to your company?

This is the first question every non-EU company asks, and the answer surprises many. Your company's geographic location does not determine whether the EU AI Act applies to you. What matters is whether EU residents are affected by your AI.

The Act explicitly applies to:

  • Providers placing AI systems or general-purpose AI models on the EU market, regardless of where the provider is established
  • Deployers of AI systems that are located within the EU
  • Providers and deployers located outside the EU, where the output produced by the AI system is used in the EU

In practice, this means: if any EU resident uses your AI product, accesses your AI-powered service, or is affected by a decision made by your AI system — even if you have zero EU presence — you are likely in scope.

The GDPR parallel: This is the same extraterritorial approach as GDPR, which has already resulted in billions of euros in fines for US companies including Google, Meta, and Amazon. The EU AI Act follows the same "Brussels Effect": it regulates by market access, not corporate location. Any company that thought "GDPR doesn't apply to us because we're in the US" and later discovered otherwise should treat the EU AI Act with equal seriousness.

Use this quick scope checker to determine if you need to act:

🔍 Quick scope check — answer yes to any of these and you are likely in scope

1. Do any EU residents use your product? Even if you don't specifically target the EU, if EU users can access your service, you likely have EU market exposure.
2. Does your product include any AI features that affect users? Recommendations, personalization, automated decisions, chatbots, content moderation, pricing — all count.
3. Do you use AI tools that affect EU employees or customers? HR software with AI screening, CRM with AI scoring, customer service chatbots — deployer obligations apply.
4. Do you provide a foundation model or AI API that others use in the EU? GPAI obligations apply to model providers regardless of where downstream users are.
5. Is your AI used in hiring, credit, healthcare, education, or law enforcement? These are high-risk categories with the strictest obligations and closest regulatory scrutiny.
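
If your engineering team wants to run this checklist against an internal systems inventory, a minimal sketch in Python might look like the following. The field and function names are illustrative assumptions, not terminology from the Act:

```python
# Minimal sketch of the scope checklist above. Field names are
# illustrative assumptions, not terms from the EU AI Act.
from dataclasses import dataclass

@dataclass
class SystemProfile:
    eu_residents_use_product: bool          # Q1: any EU users can access it
    ai_features_affect_users: bool          # Q2: recommendations, chatbots, pricing...
    ai_affects_eu_staff_or_customers: bool  # Q3: HR screening, CRM scoring...
    provides_gpai_to_eu: bool               # Q4: foundation model or AI API
    high_risk_domain: bool                  # Q5: hiring, credit, health, education, policing

def likely_in_scope(p: SystemProfile) -> bool:
    """Answering yes to any checklist question suggests EU AI Act exposure."""
    return any([
        p.eu_residents_use_product,
        p.ai_features_affect_users,
        p.ai_affects_eu_staff_or_customers,
        p.provides_gpai_to_eu,
        p.high_risk_domain,
    ])
```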

The four risk tiers — and what each means for you

The EU AI Act classifies all AI into four risk tiers. Your tier determines your compliance obligations. Here is what each means in plain English:

🚫 Prohibited — Banned outright (in force since February 2025)

AI systems that pose unacceptable risks to fundamental rights are completely banned. No compliance path — you must not use or provide them.

  • Subliminal manipulation that causes harm without the user's awareness
  • Exploitation of vulnerabilities based on age, disability, or socioeconomic status
  • Social scoring by governments or private entities
  • Real-time remote biometric identification in public spaces by law enforcement (with very limited exceptions)
  • Untargeted facial recognition database scraping from the internet or CCTV
  • Emotion recognition in workplaces and educational institutions (except for narrow medical or safety reasons)
  • Predictive policing based solely on profiling

⚠️ High-Risk — Extensive compliance obligations (August 2026)

AI with significant potential to affect people's rights, health, safety, or livelihoods. Requires conformity assessment, technical documentation, human oversight, and EU database registration.

  • Biometric identification and categorization systems
  • Critical infrastructure management (energy, water, transport)
  • AI in hiring, performance evaluation, task allocation
  • Credit scoring, insurance risk assessment, benefits eligibility
  • Education — student assessment, admission decisions
  • Law enforcement and border control applications
  • AI as safety components in medical devices, vehicles, aviation

💬 Limited Risk — Transparency obligations only

AI that carries a risk of manipulation or deception. Users must be told when they are interacting with AI, and deepfakes must be labeled as such.

  • Customer-facing chatbots
  • AI-generated content (images, audio, video)
  • Emotion recognition systems (non-workplace, non-education contexts)

✅ Minimal Risk — No mandatory obligations

The vast majority of AI applications. No mandatory requirements, though voluntary codes of conduct are encouraged.

  • Spam filters, basic recommendation systems, AI-enabled video games
  • Most productivity AI tools not affecting consequential decisions

What actually counts as high-risk — the list that surprises most companies

The high-risk category is where most compliance work happens — and where most companies get surprised. Here are the real-world applications that trigger high-risk classification:

HR and Hiring AI: Any AI used for CV screening, candidate ranking, interview analysis, performance evaluation, promotion decisions, or task allocation involving EU employees or applicants is automatically high-risk. This catches a huge range of HR software that many companies don't think of as "AI."

Credit and Financial Services: AI used for credit scoring, loan decisions, insurance risk assessment, or benefits eligibility involving EU residents is high-risk. Fintech companies using AI for underwriting have significant compliance exposure.

Healthcare AI: AI tools used for clinical decision support, diagnostic assistance, or embedded in medical devices as safety components. US health tech companies selling into Europe should evaluate this carefully.

Education: AI used for student assessment, admission decisions, or evaluating academic performance. EdTech platforms with EU users need to review their AI features.

Biometrics: Any AI system that identifies, verifies, or categorizes individuals based on biometric data — beyond simple device unlock scenarios.

The hiring AI trap: This is the most common surprise. Companies that use an AI-powered ATS (Applicant Tracking System), a resume screening tool, or a video interview analysis product to evaluate EU-based job applicants are deploying high-risk AI — regardless of whether the AI component is built in-house or provided by a third party. If your HR software has any AI features that affect hiring decisions for EU candidates, it needs a fundamental rights impact assessment and human oversight protocols.

Key deadlines — the timeline you need to know

August 2024: EU AI Act enters into force (in effect)

The regulation is officially law. The compliance clock started.

February 2025: Prohibited AI practices banned (enforced now)

Social scoring, subliminal manipulation, untargeted facial recognition scraping, emotion recognition in workplaces and schools — all banned. Enforcement active. Investigations underway.

August 2025: GPAI model obligations apply (enforced now)

Providers of large language models and other general-purpose AI must maintain technical documentation, publish training data summaries, comply with EU copyright law, and implement safety policies. 26 major providers including Microsoft, Google, Amazon, OpenAI, and Anthropic have signed the GPAI Code of Practice.

August 2026: High-risk AI system obligations enforceable ← Critical deadline (5 months away)

The main compliance deadline. High-risk AI systems must have completed conformity assessments, built technical documentation, implemented risk management processes, registered in the EU AI database, and established human oversight mechanisms. This covers hiring AI, credit scoring, healthcare AI, education AI, and more.

August 2027: Extended deadline for legacy systems and safety-component AI

AI embedded in regulated products (medical devices, vehicles, aviation) and GPAI models placed on the market before August 2025 must be compliant.
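
If you want to track these milestones programmatically, a small sketch follows. The day-of-month values reflect the Act's phased application schedule as commonly cited; verify them against the Official Journal text before relying on them:

```python
# The milestones from the timeline above, as data.
from datetime import date

AI_ACT_MILESTONES = {
    date(2024, 8, 1): "Act enters into force",
    date(2025, 2, 2): "Prohibited practices banned",
    date(2025, 8, 2): "GPAI model obligations apply",
    date(2026, 8, 2): "High-risk system obligations enforceable",
    date(2027, 8, 2): "Legacy systems and safety-component AI deadline",
}

def upcoming(today: date) -> list[str]:
    """Milestones still ahead of the given date, with days remaining."""
    return [f"{label}: {(d - today).days} days away"
            for d, label in sorted(AI_ACT_MILESTONES.items())
            if d >= today]

print("\n".join(upcoming(date.today())))
```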

Understand your AI governance gaps before August 2026

Run a free gap assessment against ISO 42001 and NIST AI RMF to see where your AI governance program stands and what needs to be built.

Start Free →

Are you a provider, a deployer, or both?

Your compliance obligations under the EU AI Act depend heavily on your role. Many companies are surprised to discover they are both.

Providers develop, train, or place AI systems on the EU market. They carry the heaviest compliance burden. If you build an AI product — even using third-party models — and sell it to EU customers, you are a provider. Obligations include: conformity assessments for high-risk systems, technical documentation (Annex IV), registration in the EU AI database, CE marking (for certain systems), post-market monitoring, and incident reporting.

Deployers use AI systems in their business operations. If your company uses an AI-powered tool that affects EU employees or customers — an AI-powered HR platform, a CRM with AI scoring, a customer service chatbot — you are a deployer. Obligations include: fundamental rights impact assessments for high-risk systems, human oversight implementation, informing users when AI is influencing decisions, and maintaining usage logs for high-risk applications.

The integration trap: A deployer can become a provider. If you take a third-party AI model (like an OpenAI API) and integrate it into a product you then sell to EU customers, you have become a provider of an AI system — even though you didn't train the underlying model. The EU AI Act specifically addresses this: companies that substantially modify or integrate AI systems become providers with full provider obligations.
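
A hedged sketch of this role logic follows, with the caveat that "substantial modification" is a legal judgment call, not something you can compute. The parameter names are illustrative assumptions:

```python
# Hedged sketch of the provider/deployer logic described above.
# "Substantial modification" is a legal judgment; the boolean here
# stands in for counsel's assessment, not an automatable test.
def roles(builds_or_sells_ai: bool,
          uses_ai_internally: bool,
          substantially_modifies_third_party_ai: bool) -> set[str]:
    r = set()
    if builds_or_sells_ai or substantially_modifies_third_party_ai:
        r.add("provider")   # heaviest obligations: conformity, Annex IV docs, registration
    if uses_ai_internally:
        r.add("deployer")   # impact assessments, oversight, user notification, logs
    return r

# The integration trap: integrating a third-party model into a product
# you sell makes you both deployer and provider.
print(roles(builds_or_sells_ai=False, uses_ai_internally=True,
            substantially_modifies_third_party_ai=True))
```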

General-Purpose AI (GPAI): what it means and who it affects

General-purpose AI models — large language models, image generation models, multimodal AI, and other systems capable of performing a wide range of tasks — have their own compliance track under the EU AI Act. These obligations are already in effect as of August 2025.

If you provide a GPAI model (train and release an LLM, offer an AI API), you must:

  • Maintain technical documentation covering the model's training and testing process
  • Provide information and documentation to downstream providers building on your model
  • Publish a sufficiently detailed summary of the content used to train the model
  • Put in place a policy to comply with EU copyright law, including honoring text-and-data-mining opt-outs

GPAI models that cross a certain computational threshold (above 10^25 FLOPs) or have widespread deployment are classified as having systemic risk and face additional obligations including adversarial testing (red-teaming), incident reporting to the EU AI Office, cybersecurity measures, and energy efficiency reporting.
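
The compute threshold itself is simple arithmetic; a minimal sketch follows. The widespread-deployment route to systemic-risk designation is qualitative and is not modeled here:

```python
# Minimal sketch of the systemic-risk compute test mentioned above.
# 10**25 FLOPs is the Act's presumption threshold; designation based on
# widespread deployment is a separate, qualitative decision.
SYSTEMIC_RISK_FLOPS = 10**25

def presumed_systemic_risk(training_flops: float) -> bool:
    return training_flops > SYSTEMIC_RISK_FLOPS

# Example: a model trained with ~5e25 FLOPs is presumed systemic risk.
assert presumed_systemic_risk(5e25)
assert not presumed_systemic_risk(3e24)
```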

If you integrate a third-party GPAI model into your product, your compliance obligations depend on what you do with it. Simple, transparent integration with limited customization keeps you as a deployer. Significant fine-tuning, custom system prompting that changes behavior, or building a downstream product creates provider obligations.

Fines and penalties — why this cannot be ignored

The EU AI Act's penalty regime exceeds GDPR's and is among the strictest in the world:

  • Up to €35 million or 7% of global annual turnover (whichever is higher) for prohibited AI practices
  • Up to €15 million or 3% of global annual turnover for most other violations, including high-risk obligations
  • Up to €7.5 million or 1% of global annual turnover for supplying incorrect or misleading information to authorities

For SMEs, fines are calculated using whichever of the two options results in the lower amount — providing some relief relative to large enterprises. Beyond financial penalties, companies face: mandatory product recalls, being barred from the EU market, civil liability claims from affected individuals, and in some EU member states, potential criminal charges under national implementing legislation.
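
To make the exposure concrete, here is a sketch of the cap arithmetic described above. The percentage and fixed-amount pairs are the Act's stated maximums; actual fines are set case by case by regulators:

```python
# Sketch of the fine-cap arithmetic described above. These are the
# Act's maximums; regulators set actual penalties case by case.
def max_fine_eur(worldwide_turnover_eur: float, pct: float,
                 fixed_cap_eur: float, is_sme: bool) -> float:
    pct_amount = worldwide_turnover_eur * pct
    # Large enterprises: whichever of the two amounts is HIGHER.
    # SMEs: whichever is LOWER, per the relief described above.
    return min(pct_amount, fixed_cap_eur) if is_sme else max(pct_amount, fixed_cap_eur)

# Prohibited-practice tier for a EUR 2B enterprise: 7% of turnover or EUR 35M.
print(max_fine_eur(2e9, 0.07, 35e6, is_sme=False))  # 140,000,000.0
```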

The enforcement reality: As of early 2026, no major financial penalties have been publicly issued for AI Act violations — but investigations are underway for workplace emotion recognition and social scoring violations. The pattern from GDPR is instructive: enforcement started slowly, then escalated sharply. Companies that built compliance programs early were far better positioned when enforcement intensified. The time to build your program is now, not when the first major penalty is announced.

ISO 42001 and NIST AI RMF: your implementation frameworks

The EU AI Act tells you what you need to achieve — risk management, documentation, human oversight, transparency, continuous monitoring. ISO 42001 and NIST AI RMF tell you how to achieve it.

ISO 42001 is the world's first certifiable AI management system standard, published in December 2023. It provides a structured framework for governing AI that directly maps to EU AI Act requirements. Organizations certified under ISO 42001 have documented evidence of AI risk management, governance policies, human oversight mechanisms, and continuous improvement processes — exactly what EU regulators and enterprise customers want to see. For organizations already holding ISO 27001, ISO 42001 adds only 3–6 months of incremental work since the management system structure is shared.

NIST AI RMF (AI Risk Management Framework) is the US voluntary standard that provides practical, operational guidance for identifying and managing AI risks. It is organized around four functions — Govern, Map, Measure, Manage — and provides detailed playbooks for each. It does not result in a certification, but it is increasingly referenced by US federal agencies, enterprise procurement teams, and cyber insurance carriers as the baseline for "reasonable AI governance."

The practical approach most organizations take: use NIST AI RMF as the internal operational framework for day-to-day AI risk management, and pursue ISO 42001 certification as the external, auditable credential that demonstrates compliance to regulators and enterprise customers. The two frameworks share approximately 70% of their underlying principles and can be implemented together efficiently.

What to do right now — your action plan

1. Inventory all AI in your organization

Map every AI system your company develops, uses, or integrates — including third-party tools used by HR, sales, customer service, and operations. Many companies discover they have far more AI exposure than they realized once they audit systematically.
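
A lightweight inventory record is usually enough to start. The fields below are illustrative assumptions about what is useful to capture, not a mandated schema:

```python
# Illustrative starting point for an AI inventory record (step 1).
# Fields are assumptions about what is useful, not a required format.
from dataclasses import dataclass, field

@dataclass
class AIInventoryEntry:
    name: str                     # e.g. "Resume screening module"
    owner: str                    # accountable team or person
    vendor: str | None = None     # None if built in-house
    purpose: str = ""             # what decisions it influences
    eu_nexus: bool | None = None  # filled in during step 2
    risk_tier: str | None = None  # filled in during step 3
    third_party_models: list[str] = field(default_factory=list)
```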

2. Determine EU nexus for each system

For each AI system, assess whether EU residents are affected — as users, customers, employees, or people whose data is processed. Any EU nexus brings that system into scope.

3. Classify each system against the four risk tiers

Use the Annex III categories to determine risk level. If your AI is used in hiring, credit, education, healthcare, critical infrastructure, law enforcement, or biometrics — it is likely high-risk. When in doubt, treat it as high-risk until you can document why it qualifies for a lower tier.

4. Check for prohibited practices immediately

Review your AI inventory against the prohibited list. Any workplace emotion recognition, social scoring, or untargeted facial recognition must stop now — these are already banned and enforcement is active.

5. Build technical documentation for high-risk systems

For each high-risk AI system, begin building the Annex IV technical documentation package: system description, intended purpose, development methodology, training data, testing results, performance metrics, and risk controls. This takes months — start now.
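
One simple way to track progress is a checklist keyed to the elements above; a sketch follows. The element names paraphrase this article, not Annex IV's legal text:

```python
# Simple tracker for the Annex IV documentation elements listed above.
# Element names paraphrase this article, not the Annex's legal text.
ANNEX_IV_ELEMENTS = [
    "system description",
    "intended purpose",
    "development methodology",
    "training data",
    "testing results",
    "performance metrics",
    "risk controls",
]

def missing_documentation(completed: set[str]) -> list[str]:
    """Return the elements still to be written for one high-risk system."""
    return [e for e in ANNEX_IV_ELEMENTS if e not in completed]

print(missing_documentation({"system description", "intended purpose"}))
```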

6. Implement human oversight mechanisms

High-risk AI systems must have meaningful human oversight — not rubber-stamp review, but genuine ability for humans to understand, question, and override AI decisions. Document these mechanisms with evidence that they actually function.
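
As one way to generate that evidence, a sketch of an oversight event log follows; the schema is an illustrative assumption:

```python
# Sketch of one way to evidence human oversight (step 6): record every
# case where a reviewer confirmed, questioned, or overrode an AI decision.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class OversightEvent:
    system: str
    ai_decision: str
    reviewer: str
    action: str      # "confirmed" | "questioned" | "overridden"
    rationale: str
    timestamp: datetime

def record_override(system: str, ai_decision: str, reviewer: str,
                    rationale: str) -> OversightEvent:
    """Log a human override; persist these events as audit evidence."""
    return OversightEvent(system, ai_decision, reviewer,
                          "overridden", rationale,
                          datetime.now(timezone.utc))
```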

7. Implement ISO 42001 as your governance framework

ISO 42001 certification provides the auditable management system that regulators, enterprise customers, and cyber insurers increasingly expect to see. For organizations already holding ISO 27001, the incremental work is 3–6 months. For those starting fresh, 9–12 months is typical.

8. Appoint an EU authorized representative if needed

Non-EU providers of high-risk AI systems must appoint an EU-based authorized representative before placing systems on the EU market. This representative accepts legal obligations on your behalf. Engage a law firm or specialist compliance organization in the EU to fulfill this role.

Assess your AI governance readiness

Run a free gap assessment covering ISO 42001, NIST AI RMF, and EU AI Act readiness — get a detailed report on where your AI governance program stands and what to build first.

Start Free →