SB 942 — California's AI Transparency Act — took effect January 1, 2026, but the California AG doesn't start enforcement until August 2, 2026. As of this writing, that leaves most AI startups roughly four months to get compliant.
The companies that will get hit first aren't the ones who tried and failed. They're the ones who never started — because they assumed someone else was handling it, or because they didn't realize how many places in their codebase generate AI content.
This checklist is written for engineers and technical founders. No legalese. Just the five concrete things you need to ship before August 2.
Audit Your Codebase for AI-Generated Content
You cannot watermark what you haven't found. Before you write a single line of compliance code, you need a complete map of every function, endpoint, and pipeline in your codebase that produces or serves AI-generated content.
This is harder than it sounds. Most teams have AI content scattered across multiple services: a generation API here, a background job there, a cached output from six months ago that still gets served from S3. SB 942 covers all of it.
What to scan for:
image_gen: openai.images.generate, dall-e, stable-diffusion, flux, fal.ai
video_gen: runway, pika, sora, kling, minimax
audio_gen: elevenlabs, suno, udio, bark, tortoise
multimodal: gemini, gpt-4o, claude — when returning media
pipelines: any celery/bull/sidekiq job that calls the above
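The scan above can be automated. Here's a minimal sketch of a repo scanner that greps source files for the generation signatures listed; the pattern list and file extensions are illustrative starting points, not an exhaustive set — extend both for your stack.

```python
"""Minimal repo scanner for AI-generation call sites (illustrative sketch)."""
import os
import re

# Signatures that suggest AI content generation, from the checklist above.
PATTERNS = re.compile(
    r"openai\.images\.generate|dall-e|stable-diffusion|flux|fal\.ai"
    r"|runway|pika|sora|kling|minimax"
    r"|elevenlabs|suno|udio|bark|tortoise",
    re.IGNORECASE,
)

SCAN_EXTENSIONS = {".py", ".js", ".ts", ".rb", ".go", ".java"}

def scan_repo(root: str) -> list[tuple[str, int, str]]:
    """Return (path, line_number, line) for every suspected generation call."""
    hits = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if os.path.splitext(name)[1] not in SCAN_EXTENSIONS:
                continue
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as f:
                    for lineno, line in enumerate(f, 1):
                        if PATTERNS.search(line):
                            hits.append((path, lineno, line.strip()))
            except OSError:
                continue  # unreadable file; skip it
    return hits
```

Run it against each service's repo and treat every hit as an entry in your content map — including background-job code, which is where most teams' blind spots live.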
Pay special attention to cached outputs. If your platform stores AI-generated content in S3, R2, or any CDN and re-serves it later, that re-serve still counts as distribution under SB 942. Watermarks need to be embedded at generation time, before the content ever hits storage.
Implement Latent Watermarking Before the Deadline
This is the heaviest lift on the list. SB 942 requires latent (invisible) watermarks embedded in every AI-generated image, video, and audio file. The watermark must:
— Survive basic editing (crop, resize, compression, color adjustment)
— Encode provider identity, model identifier, and generation timestamp
— Be verifiable by your public detection tool
— Be compatible with C2PA (Coalition for Content Provenance and Authenticity) spec
The C2PA compatibility requirement matters. It's not just about having *some* watermark — regulators want interoperable provenance metadata that external verification tools can read. A proprietary watermark scheme that only your own tool can detect is unlikely to satisfy the AG's interpretation.
{
  "@context": "https://c2pa.org/assertions/v1",
  "claim_generator": "YourApp/1.0 c2pa-rs/0.28",
  "title": "AI Generated Image",
  "assertions": [{
    "label": "c2pa.training-mining",
    "data": { "use": "notAllowed" }
  }]
}
For images, use c2pa-rs (Rust) or c2patool. For audio, look at AudioSeal (Meta's open-source watermarking library). Video is the hardest — you'll likely need per-frame watermarking via the C2PA video spec.
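One practical pattern is to build the manifest dict programmatically at generation time, so the model identifier and timestamp are never hand-edited, then hand it to your signing tool. This is a sketch only: the `com.yourapp.generation` assertion label is a hypothetical custom label, and the exact schema your c2pa-rs / c2patool integration expects may differ.

```python
"""Build a per-output C2PA-style manifest dict at generation time (sketch)."""
import json
from datetime import datetime, timezone

def build_manifest(app_version: str, model_id: str) -> dict:
    return {
        "@context": "https://c2pa.org/assertions/v1",
        "claim_generator": f"YourApp/{app_version} c2pa-rs/0.28",
        "title": "AI Generated Image",
        "assertions": [
            {
                # Opt the content out of downstream training/mining.
                "label": "c2pa.training-mining",
                "data": {"use": "notAllowed"},
            },
            {
                # Provider identity, model id, and generation timestamp —
                # the fields SB 942 requires the watermark to encode.
                "label": "com.yourapp.generation",  # hypothetical custom label
                "data": {
                    "model": model_id,
                    "generated_at": datetime.now(timezone.utc).isoformat(),
                },
            },
        ],
    }

manifest = build_manifest("1.0", "image-gen-v2")
print(json.dumps(manifest, indent=2))
```

The key design choice: the manifest is constructed inside the generation pipeline, before the output touches storage, so cached re-serves are already covered.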
Budget 3–6 engineer-weeks for this step if you haven't started. It's not conceptually complex, but integrating it cleanly into every generation pipeline without adding latency takes time.
Generate a FRIA (Fundamental Rights Impact Assessment)
A Fundamental Rights Impact Assessment is a formal document that evaluates how your AI system affects user rights — privacy, autonomy, freedom from discrimination, due process. It's required as a pre-deployment artifact for any AI system that falls under SB 942's covered provider definition.
The FRIA isn't just a checkbox. If the AG investigates your platform, the FRIA is one of the first documents they'll request. A missing or thin FRIA signals that you didn't take compliance seriously.
A complete FRIA covers:
✓ System description (what the AI generates, at what scale)
✓ Data inputs and sources (training data provenance)
✓ Potential for discriminatory outputs (bias analysis)
✓ Privacy risks (does the system memorize training data?)
✓ Safeguards implemented (filters, human review, kill switches)
✓ User recourse mechanisms (how to dispute AI outputs)
✓ Watermarking and disclosure methodology
✓ Review cadence (how often you re-evaluate)
Keep the FRIA versioned and tied to model releases. If you push a new model version that changes output characteristics, the FRIA needs to be updated before that version goes live.
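That "update before go-live" rule is enforceable in code. Here's a sketch of a release gate; the `fria.json` layout and field names are hypothetical — the rule is the point: a model version that isn't listed in the current FRIA doesn't ship.

```python
"""Release gate: block a model version the current FRIA doesn't cover (sketch)."""

def fria_covers_release(fria: dict, model_id: str, model_version: str) -> bool:
    """True only if the current FRIA explicitly lists this model version."""
    covered = fria.get("covered_models", {})
    return model_version in covered.get(model_id, [])

# Hypothetical FRIA record, loaded from your versioned fria.json.
fria = {
    "fria_version": "2026-03-15",
    "covered_models": {"image-gen-v2": ["2.0", "2.1"]},
}

assert fria_covers_release(fria, "image-gen-v2", "2.1")
assert not fria_covers_release(fria, "image-gen-v2", "2.2")  # FRIA update needed first
```

Wire this into your deploy pipeline so a release that outruns its FRIA fails loudly instead of silently.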
Create AI Disclosure Manifests for Public-Facing Models
Beyond watermarks embedded in the content itself, SB 942 requires manifest disclosures — visible, human-readable labels that accompany AI-generated content when displayed to end users.
But there's a second requirement most startups miss: machine-readable manifest files that describe your AI models publicly. Think of it like a robots.txt, but for AI provenance. The AG (and third-party compliance tools) should be able to query your domain and get structured information about your AI capabilities.
{
  "provider": "Your Company Inc.",
  "contact": "compliance@yourcompany.com",
  "sb942_compliant": true,
  "models": [{
    "id": "image-gen-v2",
    "type": "image",
    "watermarking": "c2pa-v1",
    "detection_endpoint": "https://yourapp.com/verify"
  }],
  "fria_url": "https://yourapp.com/legal/fria",
  "last_updated": "2026-03-29"
}
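A stale or incomplete manifest is worse than none, so validate it before publishing. This sketch checks the fields shown in the example above; the required-field lists are taken from that example, not from statutory text, so adjust them to whatever your counsel confirms.

```python
"""Validate the public AI disclosure manifest before publishing it (sketch)."""

# Field lists mirror the example manifest above; confirm against legal guidance.
REQUIRED_TOP_LEVEL = {"provider", "contact", "models", "fria_url", "last_updated"}
REQUIRED_PER_MODEL = {"id", "type", "watermarking", "detection_endpoint"}

def validate_manifest(manifest: dict) -> list[str]:
    """Return a list of human-readable problems; an empty list means valid."""
    problems = [f"missing field: {k}" for k in sorted(REQUIRED_TOP_LEVEL - manifest.keys())]
    for i, model in enumerate(manifest.get("models", [])):
        for k in sorted(REQUIRED_PER_MODEL - model.keys()):
            problems.append(f"models[{i}] missing field: {k}")
    return problems
```

Run it in CI whenever the manifest file changes, and fail the build on any non-empty result.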
On the user-facing side: disclosure labels must be prominent. Not a footnote. Not a hover tooltip. The label "Generated by AI" (or equivalent) needs to be visible at the point where users first encounter the content.
Check your content cards, feed renders, export flows, and email delivery paths. Each one is a potential violation if it strips the disclosure label.
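The cheapest insurance here is a regression test per render surface. In this sketch, `render_content_card` is a hypothetical stand-in for your real template or component; the idea is to assert the label survives every path that displays AI content.

```python
"""Render-path regression check for the disclosure label (sketch)."""

AI_LABEL = "Generated by AI"

def render_content_card(title: str, is_ai_generated: bool) -> str:
    # Stand-in renderer: your real one lives in your template layer.
    label = f'<span class="ai-label">{AI_LABEL}</span>' if is_ai_generated else ""
    return f"<article><h2>{title}</h2>{label}</article>"

def assert_label_present(html: str) -> None:
    assert AI_LABEL in html, "disclosure label was stripped from this surface"

assert_label_present(render_content_card("Sunset", is_ai_generated=True))
```

Write one such assertion for each surface — card, feed, export, email — so a refactor that strips the label fails tests instead of shipping a violation.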
Set Up Continuous Compliance Monitoring
The four steps above get you to compliance. This step keeps you there.
One-time audits expire the moment you ship new code. Every new AI feature, every model upgrade, every new generation pipeline is a potential compliance gap. The AG doesn't care that you were clean in April — they care about August 2 and every day after.
Continuous compliance monitoring means:
→ Re-scan repo on every pull request to main
→ Block deploys when new AI endpoints lack watermarking
→ Alert on new dependencies that suggest AI content generation
→ Track watermark coverage rate over time (target: 100%)
→ Quarterly FRIA review tied to model versioning
→ Audit log of all compliance changes (for AG requests)
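The deploy-blocking rule above can be sketched in a few lines: flag any changed file that adds a generation call without a watermarking call nearby. The patterns are illustrative, and `embed_watermark` is a hypothetical function name standing in for whatever your pipeline actually calls.

```python
"""CI gate sketch: block files that generate AI content but never watermark it."""
import re

GEN_PATTERN = re.compile(r"openai\.images\.generate|elevenlabs|runway", re.IGNORECASE)
WATERMARK_PATTERN = re.compile(r"embed_watermark|c2pa", re.IGNORECASE)

def check_changed_file(source: str) -> bool:
    """True = pass. A file that generates AI content must also watermark it."""
    if GEN_PATTERN.search(source) and not WATERMARK_PATTERN.search(source):
        return False
    return True

ok = check_changed_file("resp = openai.images.generate(model='x')\n")
assert not ok  # generation without watermarking: block the deploy
```

A file-level heuristic like this is coarse — watermarking often happens in a shared helper — so treat failures as "needs human review," not hard proof of a violation.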
The simplest implementation: add AuditGen as a CI check. Every PR that introduces new AI generation code gets flagged for compliance review before it ships. Your engineering team gets the fix in the same PR review loop they already use — no separate compliance process to remember.
The companies that get investigated are the ones who were clean once and then shipped fast and sloppy. Compliance is an ongoing engineering discipline, not a one-time audit.
See Your Risk Score Before the AG Does
Scan your GitHub repo or website for SB 942 compliance gaps. Get a detailed report with specific findings and fix recommendations — in under a minute.
⚡ Run Free Compliance Scan
No credit card. No signup. Instant results.