EU AI Act · 113 Days to Full Enforcement

EU AI Act Compliance Guide:
Requirements, Deadlines & What AI Companies Must Do Now

The EU AI Act is the world's first comprehensive AI law. Full enforcement begins August 2, 2026. Fines reach €35 million or 7% of global revenue. This guide tells you exactly what applies to your product.

April 10, 2026
12 min read
EU Artificial Intelligence Act
~113 Days to Enforcement
€35M Max Fine (Prohibited AI)
7% Revenue Penalty Rate
4 Risk Tiers
🇪🇺
This applies to you even if you're US-based. The EU AI Act covers any AI system placed on the EU market or used by EU users — regardless of where your company is headquartered. If you serve European customers, you are subject to these requirements.

The EU AI Act (Regulation 2024/1689) entered into force on August 1, 2024. It doesn't hit all at once — it has a tiered rollout where the most dangerous categories faced restrictions first. The big deadline for AI-generated content companies is August 2, 2026, when transparency obligations and high-risk AI system rules become fully enforceable.

That's 113 days from today. Most AI startups serving EU users have not started. The companies that get investigated first aren't necessarily the ones doing the most harm — they're the ones with the most visibility and the least documentation.

This guide covers what the law actually requires, what risk tier your product falls into, and the specific steps to get compliant before August 2.

01

What the EU AI Act Is and Who It Affects

The EU AI Act is a horizontal regulation — it doesn't target a specific industry or use case. It governs any AI system that is: (a) placed on the EU market, (b) put into service in the EU, or (c) operated from outside the EU but whose output is used in the EU. That means US companies with EU users are fully in scope.

Who counts as a "provider" under the Act? Any entity that develops an AI system and places it on the market or puts it into service under its own name or trademark. If you build an AI product and sell it to EU customers, you're a provider.

The rollout timeline is staggered by risk level:

  • August 1, 2024
    EU AI Act enters into force
    The regulation is officially law. 24-month implementation period begins for most provisions.
  • February 2, 2025
    Prohibited AI practices banned LIVE
    Social scoring systems, cognitive behavioral manipulation, real-time biometric surveillance in public spaces — already illegal in the EU.
  • August 2, 2025
    GPAI model obligations apply LIVE
    General-purpose AI models (GPT-4 class, Claude, Gemini) must comply with transparency obligations, including a copyright policy and a summary of training data. Systemic-risk models face additional obligations.
  • August 2, 2026 (113 days away)
    High-risk AI systems + transparency obligations
    Annex III high-risk categories, AI-generated content watermarking (Article 50), AI interaction disclosure, and deepfake labeling all become enforceable.
  • August 2, 2027
    Annex I product-embedded AI rules
    AI systems in regulated products (medical devices, machinery, aviation) face extended compliance window.

For most AI startups building content generation, chat, or automation tools, August 2, 2026 is the critical date. That's when Article 50 transparency requirements — including watermarking and AI disclosure — become enforceable against you.

ℹ️
National enforcement bodies: Each EU member state designates its own national competent authority to enforce the EU AI Act. The EU AI Office coordinates enforcement for GPAI models at the EU level. This means enforcement pressure can come from 27 different directions.
02

The Four Risk Tiers: Which One Are You?

The EU AI Act classifies AI systems into four risk tiers. Your tier determines your compliance obligations. Most AI startups land in "limited risk" — but that doesn't mean minimal work. It means specific transparency obligations that are just as legally binding.

🚫 Unacceptable Risk
Prohibited Outright
AI systems posing unacceptable risks to fundamental rights. Banned since February 2, 2025.
Social scoring by governments · Cognitive manipulation · Real-time public biometric surveillance · Exploiting children or vulnerable groups
⚠️ High Risk
Pre-deployment Requirements
Must complete risk assessment, register in EU database, human oversight, and extensive documentation before deployment.
CV screening · Credit scoring · Medical diagnostics · Critical infrastructure management · Biometric categorization · Educational assessments
📋 Limited Risk
Transparency Obligations
Must disclose AI nature to users. AI-generated content must be labeled and watermarked. The category most SaaS AI companies fall into.
Chatbots · AI image/video/audio generation · Deepfakes · Content recommendation · Customer service bots
✅ Minimal Risk
Voluntary Codes of Conduct
No mandatory obligations, but encouraged to adopt voluntary codes of conduct. Most AI-powered spam filters and game NPCs fall here.
AI-enabled video games · Spam filters · AI-powered inventory management · Search ranking

The High-Risk Annex III Categories You Need to Know

If your AI touches any of these eight use-case areas, you're classified as high-risk and face the most demanding compliance requirements — registration, conformity assessments, and human oversight obligations before August 2, 2026:

# EU AI Act Annex III — High-Risk Categories
1. Biometric identification and categorization of natural persons
2. Critical infrastructure (electricity, water, gas networks)
3. Education and vocational training (grading, admissions)
4. Employment and worker management (CV screening, monitoring)
5. Access to essential services (credit, insurance, benefits)
6. Law enforcement (risk assessment, crime prediction)
7. Migration and border control
8. Administration of justice and democratic processes

Most content-generation AI startups are "limited risk" — but don't let that term mislead you. Limited risk still means enforceable transparency obligations. A company generating AI images for marketing clients is limited risk, but still must watermark every output and disclose AI nature to end users.

⚠️
Dual-use systems can escalate: If your tool is marketed for general use but is reasonably foreseeable to be used in high-risk contexts (e.g., HR teams using your AI to screen candidates), regulators may treat it as high-risk. Document intended use cases explicitly and prohibit Annex III applications in your ToS.
03

Transparency Requirements for AI-Generated Content

Article 50 is the provision most AI content companies will spend the most engineering time on. It has four distinct requirements, each with different technical and UX implications.

1. AI Interaction Disclosure

Any AI system designed to interact with natural persons (chatbots, AI assistants, voice agents) must inform users they are interacting with an AI. This notification must happen before or at the start of the interaction. The only exception is when the AI nature is obvious from the context to a reasonably well-informed, observant user.

// Required: disclose AI nature at interaction start
// ❌ Wrong: generic chat UI with no disclosure
// ✅ Right: "You're chatting with an AI assistant"

// Must appear BEFORE user first sends a message
// Not buried in footer, not in ToS, not on hover
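
One way to make the disclosure requirement enforceable in code is to gate the chat session on it. The sketch below is illustrative only — `createChatSession`, `showDisclosure`, and `sendMessage` are hypothetical names, not part of any SDK:

```javascript
// Minimal sketch: the session refuses user messages until the AI
// disclosure has actually been shown, so the requirement can't be
// skipped by a UI refactor.
function createChatSession() {
  const session = { disclosed: false, messages: [] };

  // Must run before the user can send anything.
  session.showDisclosure = () => {
    session.disclosed = true;
    return "You're chatting with an AI assistant.";
  };

  session.sendMessage = (text) => {
    if (!session.disclosed) {
      throw new Error("Article 50: AI disclosure must precede interaction");
    }
    session.messages.push({ role: "user", text });
    return session.messages.length; // message count after send
  };

  return session;
}
```

Encoding the rule as a hard precondition (rather than a UI convention) means a missing disclosure fails loudly in development instead of silently shipping.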

2. Synthetic Content Marking (Machine-Readable Watermarking)

Providers of AI systems that generate synthetic images, audio, video, or text that could be mistaken for authentic content must mark outputs in a machine-readable format. This is the watermarking mandate.

The EU AI Act does not specify a single technical standard, but C2PA (Coalition for Content Provenance and Authenticity) is the emerging interoperable standard that regulators reference. The marker must:

// Article 50 watermarking requirements
Machine-readable (C2PA manifest or equivalent)
Identifies the content as AI-generated
Embedded at time of generation
Technically robust — must not be easily removed
Verifiable by third-party tools

// C2PA manifest example for AI-generated image
{
  "@context": "https://c2pa.org/assertions/v1",
  "claim_generator": "YourApp/1.0 c2pa-rs/0.28",
  "assertions": [{
    "label": "c2pa.ai.generativemodel",
    "data": {
      "provider": "YourCompany Inc.",
      "model_id": "image-gen-v2",
      "eu_ai_act_tier": "limited"
    }
  }]
}
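
In practice the manifest should be built at the same point the asset is generated, never bolted on later. A sketch of that wiring is below — `buildManifest`, `generateImage`, and the `c2pa.ai.generativemodel` label all mirror the illustrative example above and are assumptions, not spec-defined names; the actual C2PA embedding and signing step depends on your toolchain and is stubbed out:

```javascript
// Build the provenance manifest alongside every generated asset.
function buildManifest(modelId) {
  return {
    "@context": "https://c2pa.org/assertions/v1",
    claim_generator: "YourApp/1.0",
    assertions: [{
      label: "c2pa.ai.generativemodel", // illustrative label, as above
      data: {
        provider: "YourCompany Inc.",
        model_id: modelId,
        eu_ai_act_tier: "limited",
      },
    }],
  };
}

function generateImage(prompt, modelId) {
  const imageBytes = null; // call your image model here
  const manifest = buildManifest(modelId);
  // embedAndSign(imageBytes, manifest)  <- real C2PA embed/sign step goes here
  return { imageBytes, manifest };
}
```

The design point: because the manifest is created inside the generation call, there is no code path that produces an unmarked asset.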

3. Deepfake Labeling

Content that realistically depicts real persons, places, or events in ways that did not actually occur must carry a visible, prominent disclosure. Not a hover state. Not a metadata field only. A human-readable label at the point of display.

This applies whether the deepfake is a product feature or user-generated content distributed on your platform. If your platform distributes deepfakes created by users, you carry disclosure obligations too.
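
At the rendering layer, "visible and prominent" means the label lives in the displayed markup itself, not in metadata or a tooltip. A minimal sketch — the wrapper markup and class names are assumptions for your own UI:

```javascript
// Render media with a human-readable label in the visible DOM.
// The label is part of the output markup, not metadata or hover text.
function renderLabeledMedia(mediaUrl, isDeepfake) {
  const label = isDeepfake
    ? '<span class="ai-label" role="note">AI-generated content</span>'
    : "";
  return `<figure><video src="${mediaUrl}"></video>${label}</figure>`;
}
```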

4. AI-Generated Text Disclosure

AI-generated text that is published to inform the public on matters of public interest must be disclosed as artificially generated or manipulated. This obligation falls on deployers, and it's interpreted broadly — if your AI generates any text that could influence public discourse, err on the side of disclosure.

🔍
Scan your codebase now: AuditGen's scanner identifies every AI content generation endpoint that lacks Article 50-compliant watermarking. Get a detailed gap report in under 60 seconds. Run free scan →
04

How AuditGen Automates EU AI Act Compliance

Manual compliance audits take weeks. They require someone to read your entire codebase, map every AI generation call, cross-reference against Article 50 requirements, and write up findings. Then repeat every time you ship new code.

AuditGen does this automatically, in seconds, on every deploy.

Technical Disclosure Detection

AuditGen scans your GitHub repository for every AI generation integration — OpenAI, Anthropic, Stability AI, Runway, ElevenLabs, and 40+ other providers. It maps each call to the content type it produces and evaluates whether Article 50 watermarking and disclosure requirements are met at that generation point.

# AuditGen scan output example
src/routes/generate.js:47 — OpenAI image gen
  → C2PA watermark: present
  → User disclosure: present
  → Article 50 status: COMPLIANT

src/services/avatar.js:112 — fal.ai flux-pro
  → C2PA watermark: missing
  → User disclosure: missing
  → Article 50 status: VIOLATION

src/workers/video-gen.js:203 — Runway ML
  → C2PA watermark: missing
  → Deepfake label: not evaluated
  → Article 50 status: REQUIRES REVIEW

Automated Audit Trails

When EU regulators request evidence of compliance, they want documentation: when did you implement watermarking? What percentage of outputs are covered? What is your review cadence?

AuditGen maintains a continuous compliance audit trail — a time-stamped record of every scan, every finding, and every remediation. If an investigation happens, you don't scramble to reconstruct your compliance history. You export it.

The audit trail includes: scan timestamps, codebase snapshots at scan time, compliance score history, and a record of all detected changes to generation pipelines. This is the documentation stack the EU AI Act's record-keeping obligations require.

PR-Level Compliance Blocking

AuditGen integrates as a CI check. Any pull request that introduces new AI generation code without corresponding Article 50 compliance is flagged before merge. Your engineering team catches violations in code review — before they ever reach production and before they become €15 million problems.
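
The idea behind a PR-level check can be sketched in a few lines. The heuristics below are purely illustrative — AuditGen's real analysis maps actual SDK calls and data flow, not regex matches — but they show the shape of a pre-merge gate:

```javascript
// Toy sketch of a pre-merge compliance gate: flag any file that calls a
// generation API without a nearby watermarking/marking call.
// Patterns are illustrative assumptions, not AuditGen's actual ruleset.
const GEN_PATTERNS = [/openai\.images/, /fal\.ai/, /runway/i];
const MARK_PATTERNS = [/c2pa/i, /embedManifest/, /watermark/i];

function checkFile(path, source) {
  const generates = GEN_PATTERNS.some((p) => p.test(source));
  const marks = MARK_PATTERNS.some((p) => p.test(source));
  if (generates && !marks) {
    return { path, status: "VIOLATION" }; // block the merge
  }
  return { path, status: generates ? "COMPLIANT" : "NOT_APPLICABLE" };
}
```

A check like this runs against every changed file in a pull request; any `VIOLATION` fails the CI status and keeps the unmarked generation path out of production.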

Connect your repo: AuditGen integrates with GitHub in under 2 minutes. Get your EU AI Act compliance score and a detailed finding report before end of day. Start free scan →
05

SB 942 vs EU AI Act: What's Different, What Overlaps

If you're already working on California SB 942 compliance (enforcement: August 2, 2026), there's good news: the two regulations overlap significantly, and AuditGen covers both. Here's how they compare side by side.

| Requirement | SB 942 (California) | EU AI Act (Article 50) | AuditGen Covers |
| --- | --- | --- | --- |
| AI-generated image watermarking | Required — latent watermark | Required — machine-readable marker | ✓ Both |
| AI-generated audio watermarking | Required | Required | ✓ Both |
| AI-generated video watermarking | Required | Required | ✓ Both |
| Disclosure to end users | Visible label at point of delivery | Visible label at point of display | ✓ Both |
| Machine-readable manifest | Encouraged (/.well-known/) | Required (C2PA or equivalent) | ✓ Both |
| Deepfake labeling | Required for realistic synthetic media | Required — must be prominent | ✓ Both |
| Risk assessment / FRIA | Not required | Required for high-risk AI (conformity assessment; FRIA for certain deployers, Art. 27) | ✓ EU AI Act |
| Codebase audit documentation | Not explicitly required | Required — technical documentation (Art. 11) | ✓ EU AI Act |
| EU database registration | No equivalent | Required for high-risk AI (Art. 49) | ◐ Guidance provided |
| Human oversight mechanisms | Not required | Required for high-risk AI (Art. 14) | ◐ Assessment only |
| Enforcement body | California Attorney General | National competent authorities + EU AI Office | — |
| Max fine (content violations) | $5,000 per violation | €15M or 3% of revenue | — |

The key insight: the watermarking and disclosure work you do for SB 942 directly satisfies EU AI Act Article 50 requirements. The two regimes use different terminology but demand the same technical implementation.

Where they diverge: the EU AI Act has a heavier documentation burden. You'll need technical documentation files (Article 11), a conformity assessment if you're high-risk (Article 43), and EU database registration for high-risk systems (Article 49). None of that is required under SB 942.

If you're only compliant with SB 942, you're about 70% of the way to EU AI Act compliance for the transparency provisions. The remaining 30% is documentation and the high-risk assessment workflow.

💡
Efficiency opportunity: The compliance work overlaps so significantly that teams handling both SB 942 and EU AI Act simultaneously save 40–50% of engineering time compared to treating them as separate initiatives. AuditGen's scan covers requirements for both regulations in a single pass.
Free · No Account Required · 60 Seconds

Check Your EU AI Act Compliance Status Now

Scan your GitHub repo or website. Get a detailed report covering Article 50 watermarking gaps, AI disclosure requirements, and risk tier classification — with specific code locations and fix recommendations.

⚡ Run Free Compliance Scan

No credit card. No signup. Results in under a minute.