April 30, 2026

California AI Transparency Law: What Businesses Need to Disclose and Implement

A user receives a denial of their rental application. The denial was generated by an algorithmic tenant screening system that processed their credit file, eviction history, and income documentation. The user does not know AI made the decision. They do not know what factors the system weighted most heavily. They have no information about the logic, the data categories, or whether they can dispute the outcome with a human reviewer. Under California's evolving AI transparency framework, this scenario describes a compounding compliance failure — and 2026 is the year enforcement posture around it sharpens significantly.

California has enacted multiple AI transparency laws in 2024 and 2025, with operative dates running from January 2026 through 2028. Understanding which law applies to your organization, what it specifically requires, and how those requirements translate into product and system design is not optional. California enacted 18 AI-related bills in 2024 alone. The state's Attorney General has signaled active enforcement interest across all of them.

TL;DR

  • California's AI transparency obligations come from four separate enacted instruments, not a single law. Each has a different scope, operative date, and set of obligations.
  • The California AI Transparency Act (SB 942, as amended by AB 853) applies to large generative AI providers with over one million monthly users. It requires free AI detection tools, visible manifest disclosures, and embedded latent disclosures on AI-generated content. Operative date: August 2, 2026.
  • The Generative AI Training Data Transparency Act (AB 2013) requires developers of generative AI systems to publish high-level training data documentation. Effective: January 1, 2026.
  • The Transparency in Frontier AI Act (SB 53) requires frontier model developers to publish safety protocols, report critical safety incidents, and maintain whistleblower protections. Effective: January 1, 2026, with penalties up to $1 million per violation.
  • The CPPA's ADMT regulations require businesses using automated decision-making for significant decisions to provide pre-use notices explaining how the ADMT works, what data it uses, and what rights consumers have. Effective for new systems: January 1, 2026. For existing systems: January 1, 2027.

The Four Enacted Layers: What Is Actually Law

Layer 1: California AI Transparency Act (SB 942 as amended by AB 853)

SB 942 was signed by Governor Newsom on September 19, 2024. AB 853, signed October 13, 2025, amended it and pushed the operative date to August 2, 2026, deliberately aligning with the EU AI Act's enforcement date for high-risk systems.

The law applies to "covered providers" — entities that create, code, or otherwise produce a generative AI system that has more than one million monthly visitors or users and is publicly accessible within California. This is a developer-level obligation, not a deployer obligation: it targets the organizations building the GenAI systems, not the businesses integrating them into products (though licensees have downstream compliance responsibilities).

Three core requirements apply to covered providers. First, they must make available a free, publicly accessible AI detection tool that allows users to assess whether image, video, or audio content was created or altered by the provider's GenAI system. Second, they must offer users the option to include a manifest disclosure — a visible, conspicuous label — on AI-generated image, video, or audio content. Manifest disclosures must be clear, permanent, appropriate for the medium, and understandable to a reasonable person. Third, they must include a latent disclosure — embedded metadata conveying content provenance information — in all AI-generated image, video, or audio content, to the extent technically feasible and reasonable.

AB 853 expanded the original law's scope in two directions. It added "GenAI hosting platforms" — websites or applications that make available for download the source code or model weights of a GenAI system to California residents — with obligations effective January 1, 2027. It also added "capture device manufacturers" — entities producing devices with built-in cameras, microphones, or voice recorders for sale in California — with latent disclosure obligations for devices produced after January 1, 2028.

License compliance is a structural obligation under the amended law. If a covered provider knows that a third-party licensee has modified a licensed GenAI system such that it can no longer include required disclosures, the provider must revoke that license within 96 hours. Licensees must cease using the system after license revocation. Civil penalties reach $5,000 per violation, collectible by the Attorney General, city attorneys, or county counsel.

Layer 2: Generative AI Training Data Transparency Act (AB 2013)

AB 2013, signed September 28, 2024, effective January 1, 2026, applies to developers of generative AI systems or services intended for use by Californians — including both new systems and systems that were substantially modified on or after January 1, 2022. Developers must publish a high-level summary of their training datasets on a publicly accessible website.

The required disclosure covers: the sources of training data, the types of data included, the time periods covered, whether the data includes personal information as defined by the CCPA, whether it includes copyrighted or licensed material, and the intellectual property status of the data used. The law does not define "high-level," leaving discretion in how much detail must be provided. It provides no trade secret protection mechanism and includes no compliance certification process — developers must make their own judgment about what constitutes a sufficient high-level summary given the law's stated purpose of enabling users to understand the nature of the system's training foundation.
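As an illustration, the statutory topics can be organized into a simple checklist structure. The field names and example values below are hypothetical, since the law prescribes the topics a summary must cover but not a format:

```python
# Hypothetical template for an AB 2013 training data summary, expressed as a
# checklist dict. Field names and values are illustrative, not statutory text.

TRAINING_DATA_SUMMARY = {
    "sources": ["Licensed news archive (2015-2023)", "Public web crawl"],
    "data_types": ["text", "images"],
    "time_period": "2015-01-01 to 2023-12-31",
    "contains_personal_information": True,   # "personal information" as defined by the CCPA
    "contains_copyrighted_or_licensed_material": True,
    "ip_status": "Mix of licensed and publicly available material",
}

def missing_fields(summary: dict) -> list[str]:
    """Return statutory topics the draft summary has not yet addressed."""
    required = [
        "sources", "data_types", "time_period",
        "contains_personal_information",
        "contains_copyrighted_or_licensed_material", "ip_status",
    ]
    return [f for f in required if f not in summary or summary[f] in (None, "", [])]
```

A pre-publication check like `missing_fields` is one way to make the "high-level summary" judgment call auditable: the statute's topics are enumerated even if the depth of each is discretionary.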

AB 2013's disclosures do not sit in isolation. Training data transparency intersects with the CPPA's data minimization rules and ADMT regulations, and California's layered framework reaches both developers and deployers of AI systems. Any organization developing or deploying AI in California must now navigate these as an integrated set of obligations rather than as isolated requirements.

Layer 3: Transparency in Frontier AI Act (SB 53)

SB 53, signed September 29, 2025, effective January 1, 2026, represents California's first law specifically targeting frontier model developers — entities that develop AI systems trained using more than 10^26 floating-point operations. The TFAIA's scope is deliberately narrow compared to the vetoed SB 1047: it focuses on transparency and reporting rather than pre-deployment testing requirements or model-level "kill switches."

Frontier model developers subject to the TFAIA must publish detailed safety frameworks documenting how they identify, assess, and mitigate safety risks from their models. They must publish annual transparency reports describing significant safety incidents and their responses. Critical safety incidents — defined as incidents involving loss of human control, mass casualties, critical infrastructure disruption, or malicious use causing significant physical or property damage — must be reported to the California Governor's Office of Emergency Services. Robust whistleblower protections apply to employees who disclose AI-related safety concerns to authorities. A CalCompute public AI cloud consortium is established to advance responsible AI development and research. Non-compliance carries penalties up to $1 million per violation.

Layer 4: CPPA ADMT Regulations (Effective January 1, 2026)

The CPPA's automated decision-making technology regulations, finalized September 22, 2025 and effective January 1, 2026, create transparency obligations for any CCPA-covered business that uses ADMT for significant decisions affecting California consumers. Unlike the three provisions above, these regulations are deployer-focused: they apply to businesses that use automated systems in decisions affecting consumers, not only to the developers who build those systems.

Pre-use notices are required before collecting personal information for use in ADMT, or before applying ADMT to already-collected data, for significant decisions. The notice must describe how the ADMT works, what data influences its outputs, how outputs are used in the decision, and what alternative process is available to consumers who opt out. Consumers have the right to opt out of ADMT for significant decisions, with at least two accessible opt-out mechanisms required. Consumers have the right to access information about the logic of the ADMT and how its outputs were used in a specific decision affecting them. Significant decisions include those affecting financial services, housing, insurance, healthcare, education, employment, and essential services access — advertising was deliberately excluded from the final regulation text.

What These Laws Do Not Cover: The Proposed vs Enacted Distinction

Several proposals for broader algorithmic transparency in California failed to become law, and distinguishing them from enacted obligations is critical for accurate compliance planning.

AB 2930, the dedicated algorithmic accountability bill that would have imposed impact assessment and audit requirements across industries for "automated decision tools," was withdrawn before passing. It was the most detailed transparency and accountability bill California considered and is likely to influence future legislation, but it is not current law.

A chatbot disclosure bill, which would require notifying users when they are interacting with an AI rather than a human, remains in progress but was not enacted in its original form in the 2024 cycle; Utah's and Illinois's equivalent laws signal the direction California may take. Colorado's comprehensive AI Act, which includes consumer disclosure requirements for high-risk AI systems, is itself being revised, with its effective date pushed to January 1, 2027.

Implementing AI Transparency: What Specific System Changes Are Required

For organizations building generative AI products that qualify as covered providers under SB 942, implementation requires three parallel engineering workstreams.

The manifest disclosure system requires that any image, video, or audio content output by a qualifying GenAI system be accompanied by a visible, permanent label identifying it as AI-generated. The label must be appropriately integrated into the medium — a watermark on images, an overlay on video, an audio tag or accompanying text for audio. Designing this as a user-controlled option (users may choose to include it) while ensuring that the option is accessible and defaults to an enabled state for content distributed publicly reflects both the letter and the intent of the law.

The latent disclosure system requires embedded metadata in all AI-generated image, video, and audio content. The Coalition for Content Provenance and Authenticity (C2PA) technical standard is the emerging implementation framework for content credentials — a standardized format for embedding provenance metadata that records the origin, creation tool, and modification history of content files. Implementing C2PA-compatible content credentials in the generation pipeline satisfies the latent disclosure requirement while aligning with the technical standard most likely to become the reference for future regulation and platform compatibility.
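As a rough illustration, a C2PA-style manifest can be thought of as structured provenance metadata attached to the content. The sketch below builds such a manifest as a plain dict; field names mirror C2PA concepts (claim generator, assertions, the `c2pa.created` action, the IPTC `trainedAlgorithmicMedia` source type) but are simplified, and a production system would use a conformant C2PA SDK to sign and embed the manifest in the file itself:

```python
import hashlib
import json
from datetime import datetime, timezone

# Simplified, unsigned C2PA-style provenance manifest. Real content
# credentials are cryptographically signed and embedded in the media file;
# this sketch only shows the shape of the provenance record.

def build_provenance_manifest(content: bytes, tool_name: str, model_id: str) -> str:
    manifest = {
        "claim_generator": tool_name,
        "assertions": [
            {
                "label": "c2pa.actions",
                "data": {"actions": [{
                    "action": "c2pa.created",
                    "digitalSourceType": "trainedAlgorithmicMedia",
                }]},
            },
            # Hypothetical assertion naming the generating model.
            {"label": "ai.model", "data": {"model_id": model_id}},
        ],
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "created": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(manifest, indent=2)
```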

The AI detection tool requirement means building and publicly exposing a free service capable of analyzing submitted content and returning a determination of whether that content was created or altered by the provider's GenAI system. The tool must be publicly accessible — available without account creation or paid subscription. It must be accurate enough to serve the purpose of enabling users to identify AI-generated content in a meaningful way.
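The statute does not prescribe a detection mechanism. One simple, deliberately naive approach is for the provider to fingerprint everything its system generates and check submissions against that index; real tools layer watermark decoding and classifiers on top, since an exact-hash lookup fails on any re-encoding. The names below are illustrative:

```python
import hashlib

# Naive sketch of a provider-side detection index: register a fingerprint
# of every generated output, then check submitted content against the set.
# A production detection tool would combine this with latent-disclosure
# (watermark) decoding and statistical classifiers.

class DetectionIndex:
    def __init__(self) -> None:
        self._fingerprints: set[str] = set()

    def register(self, content: bytes) -> None:
        """Record a fingerprint at generation time."""
        self._fingerprints.add(hashlib.sha256(content).hexdigest())

    def check(self, content: bytes) -> bool:
        """Was this exact content produced by the provider's system?"""
        return hashlib.sha256(content).hexdigest() in self._fingerprints
```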

For organizations deploying ADMT for significant decisions, the transparency implementation is user-interface-level and documentation-level simultaneously. Pre-use notices must be contextually placed — before the data collection or automated processing occurs — rather than buried in privacy policy pages. An individual applying for a rental or insurance product must see a notice specifically about the ADMT being used before their data is processed through it. The notice cannot be generic; it must describe the specific system's logic, data inputs, and decision outputs in terms a reasonable person can understand. Building the technical documentation infrastructure that supports ADMT transparency — including model cards, decision logs, and version-controlled system descriptions — is the foundation for both user-facing notice requirements and regulatory audit readiness.
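One way to keep pre-use notices system-specific rather than generic is to treat the notice as a data model populated per ADMT. The structure and field names below are hypothetical; the CPPA regulations prescribe the content of the notice, not its format:

```python
from dataclasses import dataclass

# Hypothetical data model for a CPPA ADMT pre-use notice. Each deployed
# ADMT gets its own populated instance, rendered at the point of
# collection or processing rather than buried in a privacy policy.

@dataclass
class PreUseNotice:
    system_name: str
    purpose: str                 # the significant decision being made
    logic_summary: str           # how the ADMT works, in plain terms
    data_inputs: list[str]       # categories of personal information used
    output_use: str              # how outputs factor into the decision
    opt_out_alternative: str     # the human-review pathway

    def render(self) -> str:
        """Plain-text rendering suitable for contextual display."""
        return "\n".join([
            f"How we use automated decision-making for {self.purpose}",
            f"System: {self.system_name}",
            f"How it works: {self.logic_summary}",
            "Data it uses: " + ", ".join(self.data_inputs),
            f"How its output is used: {self.output_use}",
            f"Your alternative: {self.opt_out_alternative}",
        ])
```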

Access rights under the ADMT regulations require that when a consumer asks about the ADMT decision that affected them, the business can retrieve and explain the specific output generated for that individual. This requires that inference-time inputs and outputs be logged at the individual decision level, tied to a user identifier, and retrievable through a rights response workflow. A batch-level log that shows aggregate model behavior but cannot reconstruct a specific user's decision path does not satisfy the access right.
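A minimal sketch of decision-level logging, assuming an in-memory store for illustration (a real deployment would use a durable, access-controlled store with retention policies). The names are hypothetical:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Sketch of per-decision logging for ADMT access requests: every inference
# is recorded against a consumer identifier so that a specific individual's
# decision path can be reconstructed later, not just aggregate behavior.

@dataclass
class DecisionRecord:
    consumer_id: str
    model_version: str
    inputs: dict
    output: dict
    logged_at: str

class DecisionLog:
    def __init__(self) -> None:
        self._records: list[DecisionRecord] = []

    def log(self, consumer_id: str, model_version: str,
            inputs: dict, output: dict) -> None:
        self._records.append(DecisionRecord(
            consumer_id, model_version, inputs, output,
            datetime.now(timezone.utc).isoformat()))

    def for_consumer(self, consumer_id: str) -> list[DecisionRecord]:
        """Every logged decision for one consumer: the unit an
        access-rights response has to work with."""
        return [r for r in self._records if r.consumer_id == consumer_id]
```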

How California Compares to EU AI Act Transparency Requirements

The EU AI Act's transparency obligations and California's AI transparency framework overlap significantly in intent but differ in mechanism and scope. The EU AI Act imposes transparency requirements as technical documentation obligations on providers — Article 13 requires high-risk systems to be designed to enable deployers to understand how the system works, and Article 50 requires disclosure when end-users interact with AI systems, including chatbots and emotion recognition tools. The EU AI Act's transparency and documentation requirements for high-risk system providers and deployers create parallel obligations to California's ADMT pre-use notice requirements across the same decision domains — employment, credit, healthcare, insurance.

Where California focuses on consumer-facing transparency as a rights mechanism — giving consumers notice and choice — the EU AI Act focuses on technical transparency as a compliance mechanism, requiring that systems be designed to allow both deployers and regulators to understand and verify their behavior. The two approaches are complementary: an organization building AI governance infrastructure to satisfy EU AI Act Article 11 technical documentation requirements is simultaneously building the system-level transparency that California's disclosure requirements can draw from.

For organizations subject to both frameworks, integrated implementation is more efficient than parallel programs. The model documentation produced for EU AI Act conformity assessments — system architecture, training data characteristics, validation results, known limitations — provides the substantive foundation for California's ADMT pre-use notices and AB 2013 training data disclosures. The decision logging implemented for EU AI Act Article 12 compliance enables the individual-level access rights under CPPA's ADMT regulations. The human oversight mechanisms required under EU AI Act Article 14 satisfy the alternative-pathway obligation California's opt-out right triggers.

Common Mistakes

Treating SB 942 as the whole of California's AI transparency framework is the most widespread planning error. SB 942 applies only to large GenAI providers and covers content provenance — it does not govern automated decision-making, employment AI, or the transparency obligations that the CPPA's ADMT regulations impose on deployers. Organizations that focus only on SB 942 may be fully compliant with the content labeling requirements while missing the more operationally demanding ADMT notice and access obligations entirely.

Assuming that a general privacy policy disclosure of AI use satisfies pre-use notice requirements is a design-level error. The CPPA's ADMT regulations require contextual, system-specific notices at the point before the relevant processing occurs — not a generic paragraph in a privacy policy. The disclosure must describe the specific ADMT's logic, the specific data it processes, and the specific way its outputs are used. Generic AI disclosure language fails this standard.

Not maintaining version-controlled documentation of AI systems means that when a regulator or consumer asks about the ADMT or GenAI system that processed their data on a specific date, the organization cannot produce accurate information about the system's behavior at that time. System documentation must be versioned and retrievable by deployment date, not just maintained in a current state.

Licensing AI capabilities without contractual transparency compliance obligations passes legal exposure to the covered provider without eliminating it. AB 853's revocation requirement means that covered providers must flow down transparency obligations to licensees contractually and must monitor licensee compliance — otherwise they face both the risk of losing license revenue through mandatory revocation and the reputational exposure of their system being used in non-transparent ways.

FAQ

Does California require AI transparency?

Yes, through multiple enacted instruments. SB 942 requires content provenance disclosures from large GenAI providers, effective August 2, 2026. AB 2013 requires training data transparency from GenAI developers, effective January 1, 2026. SB 53 requires safety framework disclosures from frontier model developers, effective January 1, 2026. The CPPA's ADMT regulations require pre-use notices from businesses using automated decision-making for significant decisions, effective January 1, 2026 for new systems.

What is an automated decision system under California law?

Under the CPPA's ADMT regulations, any technology that processes personal information and uses computation to replace or substantially replace human decision-making. The definition is functional rather than technology-specific — it captures rule-based scoring systems, machine learning models, and any other computational decision system that meets the threshold.

Do companies have to disclose AI use?

Large GenAI providers must disclose AI-generated content through manifest and latent disclosures. Businesses using ADMT for significant decisions must provide pre-use notices. SB 53 frontier model developers must publish safety frameworks. Disclosure obligations depend on which category applies to the specific organization and system.

What is AI explainability?

Explainability refers to the capacity to describe, in terms a human can understand, why an AI system produced a specific output. Transparency is the disclosure obligation — informing users that AI is being used and what it does. Explainability is the technical capability that makes meaningful transparency possible. California's ADMT access rights require explainability at the individual decision level: consumers must be able to receive information about the logic and parameters that produced their specific outcome.

How do you implement AI transparency?

For SB 942 covered providers: build manifest and latent disclosure systems, deploy a free AI detection tool, and include contractual transparency obligations in licensee agreements. For CPPA ADMT deployers: design contextual pre-use notices at the point of processing, implement opt-out mechanisms with genuine alternative pathways, and build decision-level logging for access rights responses.

California's AI transparency framework is not a single coming regulation — it is an existing set of operative obligations with enforcement authority already in place. The Attorney General, the CPPA, and local city attorneys all have enforcement standing under different components of this framework. Organizations that build their transparency infrastructure proactively — covering content provenance, training data disclosure, safety reporting, and automated decision notice and access — are building governance infrastructure that serves both California compliance and the EU AI Act's parallel requirements simultaneously.

See how Secure Privacy's AI governance platform helps organizations build the system documentation, ADMT notice infrastructure, and audit-ready transparency records that California's layered AI transparency framework requires.

