The EU AI Act (Regulation 2024/1689) entered into force on August 1, 2024. It doesn't hit all at once — it has a tiered rollout where the most dangerous categories faced restrictions first. The big deadline for AI-generated content companies is August 2, 2026, when transparency obligations and high-risk AI system rules become fully enforceable.
That's 113 days from today. Most AI startups serving EU users have not started. The companies that get investigated first aren't necessarily the ones doing the most harm — they're the ones with the most visibility and the least documentation.
This guide covers what the law actually requires, what risk tier your product falls into, and the specific steps to get compliant before August 2.
What the EU AI Act Is and Who It Affects
The EU AI Act is a horizontal regulation — it doesn't target a specific industry or use case. It governs any AI system that is: (a) placed on the EU market, (b) put into service in the EU, or (c) affects persons located in the EU. That means US companies with EU users are fully in scope.
Who counts as a "provider" under the Act? Any entity that develops an AI system (or has one developed) and places it on the market or puts it into service under its own name or trademark. If you build an AI product and sell it to EU customers, you're a provider.
The rollout timeline is staggered by risk level:

- August 1, 2024: EU AI Act enters into force. The regulation is officially law; the 24-month implementation period begins for most provisions.
- February 2, 2025: Prohibited AI practices banned (already in effect). Social scoring systems, cognitive behavioral manipulation, and real-time biometric surveillance in public spaces are now illegal in the EU.
- August 2, 2025: GPAI model obligations apply (already in effect). General-purpose AI models (GPT-4 class, Claude, Gemini) must comply with transparency and copyright-summary requirements; systemic-risk models face additional obligations.
- August 2, 2026: High-risk AI systems and transparency obligations. Annex III high-risk categories, AI-generated content watermarking (Article 50), AI interaction disclosure, and deepfake labeling all become enforceable.
- August 2, 2027: Annex I product-embedded AI rules. AI systems embedded in regulated products (medical devices, machinery, aviation) get an extended compliance window.
For most AI startups building content generation, chat, or automation tools, August 2, 2026 is the critical date. That's when Article 50 transparency requirements — including watermarking and AI disclosure — become enforceable against you.
The Four Risk Tiers: Which One Are You?
The EU AI Act classifies AI systems into four risk tiers. Your tier determines your compliance obligations. Most AI startups land in "limited risk" — but that doesn't mean minimal work. It means specific transparency obligations that are just as legally binding.
The High-Risk Annex III Categories You Need to Know
If your AI touches any of these eight use-case areas, you're classified as high-risk and face the most demanding compliance requirements — registration, conformity assessments, and human oversight obligations before August 2, 2026:
1. Biometric identification and categorization of natural persons
2. Critical infrastructure (electricity, water, gas networks)
3. Education and vocational training (grading, admissions)
4. Employment and worker management (CV screening, monitoring)
5. Access to essential services (credit, insurance, benefits)
6. Law enforcement (risk assessment, crime prediction)
7. Migration and border control
8. Administration of justice and democratic processes
Most content-generation AI startups are "limited risk" — but don't let that term mislead you. Limited risk still means enforceable transparency obligations. A company generating AI images for marketing clients is limited risk, but still must watermark every output and disclose AI nature to end users.
Transparency Requirements for AI-Generated Content
Article 50 is the provision that will consume the most engineering time at AI content companies. It has four distinct requirements, each with different technical and UX implications.
1. AI Interaction Disclosure
Any AI system designed to interact with natural persons (chatbots, AI assistants, voice agents) must inform users they are interacting with an AI. This notification must happen before or at the start of the interaction. The exception is when it's "obvious from the context."
// ❌ Wrong: generic chat UI with no disclosure
// ✅ Right: "You're chatting with an AI assistant"
// Must appear BEFORE user first sends a message
// Not buried in footer, not in ToS, not on hover
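One way to make the "before the first message" rule hard to violate is to bake the disclosure into session creation itself. A minimal sketch, assuming a server-rendered chat session (the function and field names here are illustrative, not from any specific framework):

```javascript
// Illustrative sketch: the disclosure is the first rendered message in every
// new chat session, so the user sees it before they can type anything.
const AI_DISCLOSURE = "You're chatting with an AI assistant.";

function openChatSession(sessionId) {
  return {
    id: sessionId,
    // Injected as the first visible message, not a footer or tooltip.
    messages: [{ role: "system-notice", text: AI_DISCLOSURE, visible: true }],
    disclosureShown: true,
  };
}

function canAcceptUserMessage(session) {
  // Refuse user input until the disclosure has been rendered.
  return session.disclosureShown === true;
}
```

Structuring it this way means there is no code path that produces a chat UI without the disclosure, which is easier to defend in an audit than a UI-layer banner someone can accidentally remove.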
2. Synthetic Content Marking (Machine-Readable Watermarking)
Providers of AI systems that generate synthetic images, audio, video, or text that could be mistaken for authentic content must mark outputs in a machine-readable format. This is the watermarking mandate.
The EU AI Act does not specify a single technical standard, but C2PA (Coalition for Content Provenance and Authenticity) is the emerging interoperable standard that regulators reference. The marker must:
✓ Be machine-readable (C2PA manifest or equivalent)
✓ Identify the content as AI-generated
✓ Be embedded at time of generation
✓ Be technically robust — not easily removable
✓ Be verifiable by third-party tools
// Illustrative C2PA-style manifest for an AI-generated image (simplified)
{
  "@context": "https://c2pa.org/assertions/v1",
  "claim_generator": "YourApp/1.0 c2pa-rs/0.28",
  "assertions": [{
    "label": "c2pa.ai.generativemodel",
    "data": {
      "provider": "YourCompany Inc.",
      "model_id": "image-gen-v2",
      "eu_ai_act_tier": "limited"
    }
  }]
}
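Before handing a manifest like the one above to a signing tool, it is worth pre-flighting it so incomplete manifests never ship. A minimal sketch, assuming the field layout shown above (the function name and the required-field list are illustrative assumptions):

```javascript
// Illustrative pre-flight check on a C2PA-style manifest object before it is
// embedded at generation time. Field names follow the example manifest above.
function validateAiManifest(manifest) {
  const errors = [];
  if (!manifest.claim_generator) errors.push("missing claim_generator");
  const ai = (manifest.assertions || []).find(
    (a) => a.label && a.label.startsWith("c2pa.ai")
  );
  if (!ai) {
    errors.push("no AI-generation assertion: content is not identified as AI-generated");
  } else {
    // These fields let third-party tools attribute the output to a provider.
    for (const field of ["provider", "model_id"]) {
      if (!ai.data || !ai.data[field]) errors.push(`assertion missing ${field}`);
    }
  }
  return { ok: errors.length === 0, errors };
}
```

Running this check in the generation pipeline (and logging the result) also feeds the record-keeping trail discussed later in this guide.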
3. Deepfake Labeling
Content that realistically depicts existing persons, places, or events that did not actually occur must carry a visible, prominent disclosure. Not a hover state. Not a metadata field only. A human-readable label at the point of display.
This applies whether the deepfake is a product feature or user-generated content distributed on your platform. If your platform distributes deepfakes created by users, you carry disclosure obligations too.
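In rendering code, "prominent at the point of display" can be enforced by never emitting the media markup without the label attached. A minimal sketch (the class name, attribute, and label wording are assumptions, not mandated by the Act):

```javascript
// Illustrative sketch: wrap synthetic media in markup that carries both a
// human-visible label and a machine-readable attribute.
function renderWithDeepfakeLabel(mediaHtml) {
  const label =
    '<div class="ai-content-label" role="note">AI-generated content</div>';
  // The label is emitted before the media so it renders at the point of
  // display, not in a tooltip or a metadata field.
  return `<figure data-ai-generated="true">${label}${mediaHtml}</figure>`;
}
```

For user-generated content, the same wrapper can be applied at distribution time, so platform-hosted deepfakes are labeled even when the uploader provided no disclosure.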
4. AI-Generated Text Disclosure
Providers of GPAI models must make machine-readable disclosures for AI-generated text. This is primarily aimed at news, legal, and public-interest content contexts — but it's interpreted broadly. If your AI generates any text that could influence public discourse, err on the side of disclosure.
How AuditGen Automates EU AI Act Compliance
Manual compliance audits take weeks. They require someone to read your entire codebase, map every AI generation call, cross-reference against Article 50 requirements, and write up findings. Then repeat every time you ship new code.
AuditGen does this automatically, in seconds, on every deploy.
Technical Disclosure Detection
AuditGen scans your GitHub repository for every AI generation integration — OpenAI, Anthropic, Stability AI, Runway, ElevenLabs, and 40+ other providers. It maps each call to the content type it produces and evaluates whether Article 50 watermarking and disclosure requirements are met at that generation point.
✓ src/routes/generate.js:47 — OpenAI image gen
→ C2PA watermark: present
→ User disclosure: present
→ Article 50 status: COMPLIANT
✗ src/services/avatar.js:112 — fal.ai flux-pro
→ C2PA watermark: missing
→ User disclosure: missing
→ Article 50 status: VIOLATION
✗ src/workers/video-gen.js:203 — Runway ML
→ C2PA watermark: missing
→ Deepfake label: not evaluated
→ Article 50 status: REQUIRES REVIEW
Automated Audit Trails
When EU regulators request evidence of compliance, they want documentation: when did you implement watermarking? What percentage of outputs are covered? What is your review cadence?
AuditGen maintains a continuous compliance audit trail — a time-stamped record of every scan, every finding, and every remediation. If an investigation happens, you don't scramble to reconstruct your compliance history. You export it.
The audit trail includes: scan timestamps, codebase snapshots at scan time, compliance score history, and a record of all detected changes to generation pipelines. This is the documentation stack the EU AI Act's record-keeping obligations require.
PR-Level Compliance Blocking
AuditGen integrates as a CI check. Any pull request that introduces new AI generation code without corresponding Article 50 compliance is flagged before merge. Your engineering team catches violations in code review — before they ever reach production and before they become €15 million problems.
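A homegrown version of this kind of gate can be sketched in a few lines: fail the check if a file gains an AI-generation call with no watermarking call alongside it. (The provider and watermark patterns below are simplified illustrations; AuditGen's actual detection is more involved.)

```javascript
// Illustrative CI gate: fail if a source file contains an AI-generation call
// but no watermarking call. Patterns are simplified example heuristics.
const GENERATION_PATTERNS = [/openai\./, /anthropic\./, /fal\.ai/, /runway/i];
const WATERMARK_PATTERNS = [/c2pa/i, /watermark/i];

function checkFileForArticle50(source) {
  const generates = GENERATION_PATTERNS.some((p) => p.test(source));
  const watermarks = WATERMARK_PATTERNS.some((p) => p.test(source));
  if (generates && !watermarks) {
    return { pass: false, reason: "AI generation call without watermarking" };
  }
  return { pass: true };
}
```

Wired into CI, a failing result blocks the merge, which is the cheap place to catch an Article 50 gap: in code review rather than in production.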
SB 942 vs EU AI Act: What's Different, What Overlaps
If you're already working on California SB 942 compliance (enforcement: August 2, 2026), here's the good news: the two regulations have significant overlap. AuditGen covers both. Here's how they compare side-by-side.
| Requirement | SB 942 (California) | EU AI Act (Article 50) | AuditGen Covers |
|---|---|---|---|
| AI-generated image watermarking | Required — latent watermark | Required — machine-readable marker | |
| AI-generated audio watermarking | Required | Required | |
| AI-generated video watermarking | Required | Required | |
| Disclosure to end users | Visible label at point of delivery | Visible label at point of display | |
| Machine-readable manifest | Encouraged (/.well-known/) | Required (C2PA or equivalent) | |
| Deepfake labeling | Required for realistic synthetic media | Required — must be prominent | |
| Risk / fundamental-rights assessment | Not required | Required for high-risk AI (FRIA, Art. 27; conformity assessment, Art. 43) | |
| Codebase audit documentation | Not explicitly required | Required — technical documentation (Art. 11) | |
| EU database registration | No equivalent | Required for high-risk AI (Art. 49) | |
| Human oversight mechanisms | Not required | Required for high-risk AI (Art. 14) | |
| Enforcement body | California Attorney General | National competent authorities + EU AI Office | — |
| Max fine (content violations) | $5,000 per violation | €15M or 3% of revenue | — |
The key insight: the watermarking and disclosure work you do for SB 942 directly satisfies EU AI Act Article 50 requirements. The two regimes use different terminology but demand the same technical implementation.
Where they diverge: the EU AI Act has a heavier documentation burden. You'll need technical documentation files (Article 11), a conformity assessment if you're high-risk (Article 43), and EU database registration for high-risk systems (Article 49). None of that is required under SB 942.
If you're only compliant with SB 942, you're about 70% of the way to EU AI Act compliance for the transparency provisions. The remaining 30% is documentation and the high-risk assessment workflow.
Check Your EU AI Act Compliance Status Now
Scan your GitHub repo or website. Get a detailed report covering Article 50 watermarking gaps, AI disclosure requirements, and risk tier classification — with specific code locations and fix recommendations.
⚡ Run Free Compliance Scan. No credit card. No signup. Results in under a minute.