March 3, 2026

Unpacking Disney’s $2.75M CCPA Fine: What Enterprises Can Learn About Consent Auditing

Ivan Tsarynny

The largest CCPA settlement in California history isn’t a story about a company that ignored privacy. It’s a story about what happens when doing the right things isn’t the same as making sure the right things have been done right.

When “best efforts” aren’t sufficient

In a sense, it’s easy to be sympathetic to Disney. There’s an interpretation of this story in which Disney looks like a company that did comply. It had a Consent Management Platform. Users had opt-out mechanisms. Consent banners appeared when users arrived at its streaming properties. By the surface-level indicators that most organizations rely on to measure consent readiness, Disney appeared to be doing what companies are supposed to do.

And yet earlier this month, the California Attorney General announced that Disney would pay $2.75 million to resolve allegations that it failed to honor California consumers’ privacy rights — the largest settlement of its kind since the California Consumer Privacy Act took effect. The investigation didn’t find that Disney had ignored privacy. It found something more instructive: that the company’s consent systems worked in some places, for some users, on some devices, and stopped working the moment any of those variables changed. That gap between a consent system that exists and one that functions is the central lesson of this case, and it could belong to any organization operating digital properties at scale.

What the investigation found

The California Department of Justice conducted an enforcement sweep of streaming services in January 2024, examining how consumer opt-out rights were being implemented in practice. What investigators found in Disney’s systems wasn’t the result of bad intent but rather a set of architectural gaps that were consequential because they were systemic.

Opt-out signals were scoped to individual devices and services rather than to consumer accounts, meaning a choice made on one platform didn’t carry through to others. Webform opt-outs stopped data sharing through Disney’s own advertising systems but didn’t reach certain third-party advertising technology vendors embedded in its apps. Global Privacy Control signals were treated as device-specific rather than account-wide. And some connected TV apps directed users to a webform rather than offering an in-app opt-out option. Taken together, these gaps meant that consumers who believed they had opted out had, in many cases, only partially done so. This wasn’t because the intent wasn’t there, but because the architecture wasn’t designed to follow the consumer’s choice all the way through.

The gap that gets everyone

Disney’s situation is unusual in scale and visibility, but the underlying problem is not unusual at all. Most organizations that have invested in consent management have done so by deploying a platform, configuring it, and moving on. The CMP is live. The legal team signed off. The banners are appearing. Somewhere in a project management tool, the compliance task is marked complete.

What rarely gets built into this picture is any ongoing mechanism for verifying that the consent system continues to do what it was configured to do, especially as the environment around it changes continuously. And digital environments change constantly. New vendor scripts get added to tag managers. Mobile apps get updated and SDKs get upgraded. A/B testing frameworks modify page behavior. Third-party advertising partners rotate in and out of campaigns. Geographic expansion brings new regulatory contexts. Every one of these changes is an opportunity for something in the consent chain to break quietly, without triggering any alert.

The core problem is what might be called the propagation gap: the distance between a consumer making a choice and that choice actually being honored by every system that handles their data. In a simple architecture with a handful of first-party tools, this gap is manageable. In the kind of architecture most large organizations actually operate — dozens of third-party vendors, multiple app platforms, server-side tagging layers, connected TV environments, authenticated and unauthenticated user states — the gap is wide, invisible from the surface, and almost impossible to close through configuration alone.

The other problem is time. Consent audits are typically conducted at launch and then periodically thereafter, if at all. But the compliance question isn’t whether consent was working in March when the audit was done. It’s whether consent is working today, and yesterday, and the day after a deployment goes out, and the week a new advertising partner gets onboarded. Point-in-time audits create a documented record of moments. They leave everything between those moments undocumented and unverified.

Regulators, when they investigate, don’t limit themselves to the moments you chose to examine.

What the regulatory environment expects

The Disney case is a CCPA matter, but the compliance expectation it reflects is not unique to California. Across every major privacy jurisdiction, regulators have moved — or are moving — toward a similar standard: it is not enough to implement consent mechanisms. Organizations must be able to demonstrate that those mechanisms work.

The California framework, as strengthened by the CPRA, treats opt-out rights as account-level and comprehensive. A consumer’s choice applies wherever that consumer’s data flows, not just to the specific device or service interface they happened to use when they made the choice. The Global Privacy Control is recognized as a valid opt-out signal, and that signal must be honored in full, not filtered through platform-by-platform logic.

Under the GDPR, the accountability principle places the burden squarely on organizations to prove that data processing is lawful and that consent, where it is the legal basis for processing, was validly obtained and is being honored. This is not a burden regulators carry on behalf of companies — it is a burden companies carry on behalf of themselves. When a data protection authority investigates, the question they ask is not, “Can you show us your CMP?” It is, “Can you show us evidence that consent is functioning as claimed?”

The broader global landscape — Brazil’s data protection framework, Canada’s evolving federal and provincial laws, India’s new data protection legislation, the growing number of U.S. state privacy laws — converges on the same expectation from different legal starting points. Consumer consent choices must be honored. Organizations must be able to prove they were. The technical complexity of doing so at scale is not a mitigating factor; it is precisely the problem organizations are expected to have solved.

The trajectory of enforcement in every major jurisdiction is the same: from checking for the existence of consent mechanisms toward scrutinizing the actual, runtime behavior of those mechanisms across real consumer experiences.

What a working consent program actually requires

If the Disney case reveals what breaks down when consent is treated as a configuration exercise, the logical question is what treating it as a continuous operational practice actually looks like. The answer involves four things that most organizations have in partial form but rarely connect into a coherent whole.

  1. Policy specific enough to be testable. Most consent programs have policy in the form of legal documentation and CMP configuration settings. What they often lack is policy articulated as testable expectations: given a user who has declined all cookies, what exactly should load, what exactly should not, on which properties, in which jurisdictions, under which regulatory framework? Without that level of specificity, there is no reliable basis for verifying whether behavior is correct. You can observe what happens; you can’t evaluate whether it’s right.
  2. Full-chain verification, not just banner-level checks. A consent choice travels from a user interface through a consent management system through various signal-passing mechanisms through first-party and third-party systems through advertising technology layers. At each handoff, there is an opportunity for the signal to be lost, distorted, or ignored. Verifying compliance means tracing what actually happens at every step — not what the configuration specifies should happen, but what runtime behavior reveals is actually happening.
  3. Coverage that matches the scale of your actual environment. Testing consent behavior on ten pages of a site that has ten thousand pages, or testing on a desktop browser while ignoring mobile apps and connected TV platforms, or testing the accept flow while not testing decline and no-action states — each of these limitations creates a blind spot. Regulatory investigators are not constrained by the same scope limitations as your internal audit team. Achieving meaningful coverage requires automation, because the matrix of variables involved is too large for manual testing to address comprehensively.
  4. Continuity built into how you operate, not bolted on after the fact. Consent compliance is not a state you achieve; it is a condition you maintain. The moment a code change goes out, a new vendor is added, or a regional configuration is updated, the state of compliance changes. Continuous testing — integrated into release processes rather than conducted as a separate periodic exercise — is the only way to detect when something breaks before a regulator or a consumer discovers it.
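To make the first two points concrete, here is a minimal sketch of what policy-as-testable-expectations can look like: a declared mapping from consent state to permitted third-party hosts, checked against what actually fired at runtime. All vendor domains and consent-state names below are hypothetical illustrations, not any real configuration.

```python
# Minimal sketch: consent policy expressed as testable expectations, then
# checked against observed runtime network requests. Hosts and state names
# are hypothetical examples.

# Policy: for each consent state, which third-party hosts may load.
POLICY = {
    "accept_all": {"analytics.example.com", "ads.example.net", "cdn.example.org"},
    "decline_all": {"cdn.example.org"},  # strictly necessary resources only
    "no_action": {"cdn.example.org"},    # treat no choice like a decline
}

def check_requests(consent_state, observed_hosts):
    """Return hosts that fired but are not allowed under the policy.

    An empty result means runtime behavior matched the testable expectation;
    anything else is a violation worth recording as evidence.
    """
    allowed = POLICY[consent_state]
    return set(observed_hosts) - allowed

# Example: a crawl under "decline_all" saw an ad vendor fire anyway.
violations = check_requests("decline_all", ["cdn.example.org", "ads.example.net"])
print(sorted(violations))  # ['ads.example.net']
```

The point of the sketch is the shape, not the specifics: once expectations are declared this explicitly, comparing them to observed behavior becomes a mechanical step that automation can run on every page, app surface, and consent state.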

The evidence problem

There is a practical dimension to all of this that organizations often underestimate until they’re in a regulatory investigation: the difference between believing your consent program works and being able to prove it.

“We believe our systems are configured correctly” might be a reasonable starting point for an internal discussion, but it is not an adequate response to a regulatory inquiry. What regulators require, and what the Disney case underscores, is documentation of actual behavior: records showing that consent signals were propagated correctly, that specific vendors were blocked when they should have been, and that opt-out requests were honored across the relevant scope of a consumer’s account.

This kind of documentation doesn’t produce itself. It requires that consent auditing activities generate structured, timestamped evidence as a byproduct — not just flag violations when they occur, but maintain a record of compliance when systems are functioning correctly. That record is what transforms a consent program from an operational practice into a defensible posture.
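As a rough illustration of what "structured, timestamped evidence as a byproduct" can mean in practice, here is a sketch of a single audit-trail entry emitted per test run. The field names are illustrative assumptions, not a standard schema.

```python
# Minimal sketch of a structured, timestamped evidence record produced as a
# byproduct of a consent test run. Field names are illustrative, not a
# standard schema.
import json
from datetime import datetime, timezone

def evidence_record(property_url, consent_state, blocked_ok, fired_hosts):
    """Build one audit-trail entry: what was tested, when, and what was observed."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "property": property_url,
        "consent_state": consent_state,
        "vendors_blocked_as_expected": blocked_ok,
        "hosts_observed": sorted(fired_hosts),
        "result": "pass" if blocked_ok else "fail",
    }

record = evidence_record(
    "https://www.example.com/home", "decline_all",
    blocked_ok=True, fired_hosts={"cdn.example.org"},
)
print(json.dumps(record, indent=2))
```

Note that the record is written on passes as well as failures: a regulator asking "was this working in April?" is answered by the unbroken run of passing entries, not by the absence of alerts.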

Organizations that can produce this kind of evidence are in a fundamentally different position when a regulator asks questions than organizations that can only describe what their systems are supposed to do. The gap between those two positions is, in practical terms, the gap between a settlement and a clean bill of health.

Five things organizations should do differently

The distance between where most organizations are and where the regulatory environment expects them to be is significant. But it is not insurmountable. The following represent the highest-leverage changes a privacy team can make to move toward a genuinely closed-loop consent program.

  1. Trace the propagation path of your opt-out signals, end to end. Don’t assume that because your CMP records an opt-out, every downstream system that handles consumer data receives and honors that signal. Map the path explicitly, and test it — including across devices and platforms for authenticated users, and including every third-party vendor whose scripts load in your digital properties.
  2. Test what actually fires under each consent state, not just what is configured to fire. There is a difference between a CMP configuration that says a vendor should be blocked on decline and actual runtime confirmation that the vendor’s script did not load. The Disney case turns specifically on this distinction. Runtime verification, not configuration review, is the relevant standard.
  3. Honor privacy signals at the account level, not just the session level. If a consumer communicates a preference — whether through a banner interaction, a webform, or a browser-level signal like the Global Privacy Control — that preference should attach to their identity and follow them across every surface where they interact with your brand. Device-level and service-level siloing of consent signals is precisely the failure the California AG’s office cited.
  4. Build consent testing into your change management process. Treat a deployment that touches your tag manager, your consent configuration, your advertising technology stack, or your app as an event that triggers consent verification — not an event that might trigger consent verification if someone thinks to ask. The most common source of consent drift is not negligence; it is the cumulative effect of changes made by teams who weren’t thinking about consent when they made them.
  5. Define what evidence looks like before you need it. Before a regulator asks, decide what documentation your consent program will generate, how it will be stored, and how it can be retrieved for a specific time period and a specific set of properties. The organizations best positioned in regulatory inquiries are the ones who designed their evidence trail deliberately, not the ones who tried to reconstruct it after the fact.
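Item 3 above — account-level rather than session-level honoring — can be sketched simply. The Global Privacy Control signal arrives as the `Sec-GPC: 1` request header; the sketch below records it against the account rather than the device, so every other surface sees the same choice. The in-memory store and function names are illustrative stand-ins; a real system would persist the preference and fan it out to downstream vendors.

```python
# Minimal sketch of honoring a Global Privacy Control signal at the account
# level rather than per device or session. The in-memory store is an
# illustrative stand-in for a real preference service.

# Account-wide opt-out store (stand-in for a persisted preference service).
ACCOUNT_OPT_OUTS = set()

def handle_request(account_id, headers):
    """If this authenticated request carries GPC, record an account-wide opt-out.

    The GPC proposal transmits the signal as the `Sec-GPC: 1` request header.
    """
    if headers.get("Sec-GPC") == "1":
        ACCOUNT_OPT_OUTS.add(account_id)

def is_opted_out(account_id):
    """Any surface (web, mobile, connected TV) checks the same account record."""
    return account_id in ACCOUNT_OPT_OUTS

# A GPC signal seen on one device...
handle_request("user-123", {"Sec-GPC": "1"})
# ...is honored on every other surface for the same account.
print(is_opted_out("user-123"))  # True
```

The design choice worth noticing is that the key is the account identifier, not a cookie or device ID — which is exactly the scoping the California AG’s office found missing.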

The real cost

Financial penalties are only one dimension of the exposure. A public enforcement action carries reputational weight that is hard to quantify, particularly for consumer brands. Remediation conducted under regulatory oversight is invariably more expensive and more disruptive than remediation done on your own terms. And the internal cost of an investigation — legal hours, executive attention, organizational distraction — compounds every other impact.

The more useful question isn’t “how much could this cost?” It’s whether, if a regulator examined your digital properties today, you’d have the evidence to show that your consent program does what it’s supposed to do. Most organizations, if they’re honest, haven’t fully tested that question. The Disney case is a good reason to do so now.

The standard has shifted

Compliance used to mean demonstrating effort: a CMP deployed, a privacy notice published, opt-out mechanisms in place. That was often enough.

It isn’t anymore. The Disney settlement reflects a regulatory posture that has moved from checking for the existence of consent controls to evaluating whether those controls actually work — for every consumer, across every platform, every day. Meeting that standard means treating consent as a continuous operational commitment, not a completed implementation. It means closing the loop between policy, system behavior, and documented evidence.

Say it. Do it. Prove it. Most organizations are doing the first reliably, the second partially, and the third barely at all. The gap in between is exactly where regulatory exposure lives.