January 13, 2026

Why Content Security Policy Fails PCI 6.4.3 (And What QSAs Accept Instead)

Ivan Tsarynny

Content Security Policy looks like it was designed for PCI Requirement 6.4.3. You define which domains can load scripts on your payment page, the browser enforces it, and unauthorized code gets blocked. For teams drowning in third-party JavaScript, CSP feels like the obvious answer.

Then you get to your audit, and the QSA starts asking questions CSP can’t answer.

“What scripts actually ran on your payment page on November 15th?”

“Who approved this analytics tag and when?”

“How do you know if an authorized vendor ships different code tomorrow?”

CSP tells the browser which domains are allowed. But 6.4.3 is asking a different question: do you know what’s actually running, who said it could be there, and how you’d notice if it changed?

Most organizations discover this gap mid-audit, which is an expensive time to learn that your CSP headers aren’t the same thing as a compliance program.

This article explains where CSP falls short in QSA evaluations, what controls actually pass audits, and how to build a defensible 6.4.3 program without ripping out your existing security.

What PCI 6.4.3 actually requires (from someone who’s sat through these audits)

Every 6.4.3 audit eventually lands on the same three questions. They’re phrased differently depending on your QSA, but the underlying concern is always the same.

Can you show me what’s actually running on your payment page?

Not what you think is running based on your deployment pipeline or CSP headers. What’s actually executing in the customer’s browser right now. That includes first-party JavaScript, third-party libraries, inline scripts, and anything injected dynamically after the page loads.

If you can’t produce a current inventory, everything else becomes theoretical.

How did each script get there?

For every script on the page, someone needs to have made a documented decision that this specific code belongs in a payment context. Not “we approved Google Analytics in 2019” – a recorded decision that says who approved this script, when, and what they understood it would do.

Generic approvals don’t hold up well once scripts start changing.

How do you know if something changed?

Controls that only work on deployment day aren’t really controls. The question is whether you’d notice if something new appeared on your payment page or if an existing script started behaving differently.

This is where most programs break. Prevention is easy to talk about. Continuous visibility over time is harder to prove.

Here’s the actual test: If your approved analytics vendor ships an update tonight that adds new data collection, would you know? Would it go through review? Or would it just start running because the domain was already whitelisted?

That’s what 6.4.3 is really testing.

What CSP actually does (and where it stops)

CSP is solid browser-side security. It defines which domains can load scripts, blocks inline code if you tell it to, and stops unauthorized JavaScript from executing. When it’s configured correctly, it meaningfully reduces your attack surface.

The problem isn’t that CSP doesn’t work. It’s that it solves a different problem than the one 6.4.3 is asking about.

CSP doesn’t inventory your scripts

A CSP policy might say “allow scripts from cdn.analytics-vendor.com,” but it can’t tell you:

  • How many scripts are loading from that domain (could be 1, could be 8)
  • What those scripts do
  • Whether new ones appeared since last month
  • What changed in the code

Multiple scripts can execute under a single script-src rule, and CSP treats them identically.
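To make this concrete, here is what a typical allowlisting policy looks like (the vendor domain is a hypothetical example):

```http
Content-Security-Policy: script-src 'self' https://cdn.analytics-vendor.com
```

That single line authorizes every script that domain serves, now and in the future. Nothing in it records how many scripts that is, what they do, or when any of them last changed.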

CSP doesn’t document authorization

A CSP header is a technical permission, not a business approval. It doesn’t capture:

  • Who decided this domain should be trusted
  • What review happened
  • What functionality was approved
  • When the decision was made

There’s no audit trail. Just a header that says “browser, allow this domain.”

CSP doesn’t track changes

Once you whitelist a domain, everything from that source is trusted forever. If your analytics vendor ships an update that adds new tracking, CSP allows it silently. Hash-based CSP is stricter, but it breaks every time code changes and still doesn’t tell you what changed or who approved it.

CSP doesn’t generate compliance evidence

Violation reports show what CSP blocked, which is useful for security monitoring. But QSAs aren’t asking “what did you block?” They’re asking “what ran, why was it approved, and how did you manage changes?”

CSP can’t answer those questions because it was never designed to.

What QSAs actually look for (that CSP can’t provide)

QSAs aren’t checking if your CSP header is syntactically correct. They’re trying to determine if you have ongoing control over what runs on your payment pages.

Here’s how those conversations typically go:

“Show me what’s running on your payment page right now”

They want a complete inventory of scripts executing in production, across all environments where payment data is processed. For each script, they need to see source, function, and what data it can access.

A CSP header that says script-src cdn.vendor.com doesn’t tell them any of this.

“Who approved each of these scripts?”

For every script on the list, they’re looking for documented approval: who authorized it, when, and why it’s necessary in a payment context. “We’ve always used Google Analytics” is not documentation. “Jane Smith approved GA4 tracking on 3/15/2024 for conversion attribution” is documentation.

CSP policies don’t capture approval history.

“How do you know if something changed?”

This is where things usually get uncomfortable. QSAs want evidence that script changes are detected continuously, not just when someone remembers to check. They want to see logs showing:

  • When new scripts appeared
  • When existing scripts changed
  • How those changes were reviewed
  • What action was taken

If monitoring only started two weeks before the audit, they’ll notice.
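The logs above imply a simple mechanism underneath: fingerprint each script's exact contents and diff successive scans. A minimal sketch of that idea, not any particular product's implementation (URLs and hashes here are illustrative; a real system would also record who reviewed each event):

```python
import hashlib


def fingerprint(script_bytes: bytes) -> str:
    """Stable identity for a script's exact contents."""
    return hashlib.sha256(script_bytes).hexdigest()


def diff_scans(baseline: dict, current: dict) -> list:
    """Compare a stored inventory {url: hash} against a fresh scan.

    Returns (event, url) pairs: new scripts need review before they
    are trusted, changed scripts trigger re-authorization, and
    removals are logged so the inventory stays accurate.
    """
    events = []
    for url, digest in current.items():
        if url not in baseline:
            events.append(("new_script", url))
        elif baseline[url] != digest:
            events.append(("script_changed", url))
    for url in baseline:
        if url not in current:
            events.append(("script_removed", url))
    return sorted(events)
```

Run continuously, the sorted event stream is exactly the kind of record a QSA asks for: when something appeared, when something changed, and when it was noticed.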

“Walk me through your last six months”

Point-in-time configuration isn’t enough. QSAs want to see that controls have been operating consistently over the entire audit period. How were scripts reviewed? When did reauthorization happen? How were changes handled?

CSP exists in the present tense. It can’t reconstruct what was running four months ago or whether anything changed between audits.

The gap becomes obvious when you try to answer these questions with CSP artifacts. You can show them a header that says which domains are allowed, but you can’t show them what actually loaded from those domains, whether it was the same code as last month, or who reviewed the changes.

That’s why CSP alone doesn’t pass 6.4.3 audits, even though it’s good security.

What actually passes a 6.4.3 audit

The good news is that QSAs are reasonably consistent about what they’ll accept. The requirement is prescriptive enough that most assessors end up looking for the same things.

Automated platforms get the smoothest rides

Client-side security platforms purpose-built for this requirement tend to pass audits without much drama. They continuously scan payment pages, maintain live inventories, track approvals through structured workflows, and produce audit-ready evidence.

When QSAs evaluate these setups, they typically get:

  • An inventory that matches the page’s current state
  • Approvals with clear business justification and named approvers
  • Change logs showing what updated and when
  • Continuous monitoring evidence spanning the audit period

The evidence is system-generated and continuous, which answers the “how do you know?” question cleanly.

Manual processes pass, but expect more questions

Spreadsheet-based tracking and manual review processes can pass, but they get heavier scrutiny. Not because QSAs doubt your diligence – because browsers change faster than spreadsheets do.

Third-party tags update weekly. New scripts appear. Configurations drift between quarterly reviews. A clean spreadsheet from last month might already be stale.

QSAs probe for timing gaps and coverage holes:

  • Were reviews frequent enough?
  • Did you catch every script?
  • Were approvals recorded before scripts went live?
  • What happened between review cycles?

Incomplete approval records, inconsistent timing, or missed scripts quickly weaken your story. The more manual the process, the more opportunities for gaps.

Hybrid approaches work when documentation stays tight

Some organizations combine automated discovery with manual authorization workflows, or integrate monitoring tools with existing change management systems. QSAs accept these when the components work together coherently.

The key is an unbroken evidence trail. If your automated scanner detects scripts, but approvals live in email threads and nobody tracks changes systematically, you’ve just created more work for yourself during the audit.

What it comes down to: Can you accurately discover what’s running, document who approved it, prove you’re monitoring continuously, and show how you handle changes? If yes, most QSAs will accept your approach. If not, no amount of CSP configuration will save you.

Where PaymentGuard AI actually helps

CSP handles execution control. It tells the browser which domains can load scripts. That’s valuable, and you should keep using it.

But when your QSA starts asking “show me what ran on your payment page last Tuesday,” CSP can’t answer. That’s not a failure of CSP – it’s just not what it was built to do.

PaymentGuard AI fills the gaps that 6.4.3 audits expose.

It sees what’s actually executing

PaymentGuard continuously observes every script running on your payment pages in real customer browsers. Not what your CSP policy allows in theory, but what’s actually loading and executing in practice.

This matters because:

  • Third-party vendors can have multiple scripts under one domain
  • Tag managers inject code dynamically
  • A/B testing platforms modify behavior at runtime
  • Scripts can change between your deployments

When a QSA asks “what’s running right now?”, you pull a current inventory instead of guessing based on your last code review.

It ties scripts to authorization workflows

For each script PaymentGuard discovers, it creates an authorization record: who approved it, what business purpose it serves, when approval happened, and what functionality was reviewed.

When scripts change – because vendors ship updates constantly – PaymentGuard detects the modification and routes it through the same authorization workflow. You don’t accidentally trust new behavior just because the domain was already whitelisted.

This gives you the audit trail QSAs are looking for: a documented decision per script, with approver identity and business justification attached.

It monitors behavior, not just presence

QSAs increasingly push on whether your controls detect runtime behavior changes. It’s not enough to authorize a script once. The question is whether you’d notice if it started doing something different.

PaymentGuard continuously watches authorized scripts for modifications, detects new scripts as they appear, and alerts when behavior changes. The monitoring logs prove that detection has been running throughout the audit period, not just turned on before the assessment.

It keeps the evidence chain intact

Most compliance programs break at evidence collection. The process exists, but when audit time comes, you’re frantically assembling screenshots, digging through Jira tickets, and hoping you can reconstruct what happened six months ago.

PaymentGuard maintains audit-ready documentation automatically: inventory reports, authorization history, change logs, monitoring records. When your QSA asks for evidence, you provide it directly from the system instead of rebuilding the story under deadline pressure.

The relationship with CSP: PaymentGuard doesn’t replace CSP. CSP stays in place doing enforcement. PaymentGuard adds the visibility, governance, and historical evidence layer that 6.4.3 requires. They solve different problems, and you need both.

How to actually pass a 6.4.3 audit

The requirement sounds simple: know what’s running on your payment pages, approve it properly, and track changes over time. Implementation is where things get messy.

Keep your inventory current with reality

Payment pages don’t behave the same across environments, user segments, consent states, or A/B test variants. Scripts get injected at runtime. Vendors update code between your deployments.

Your inventory needs to reflect actual page behavior, not what you think is deployed. That means:

  • Observing execution in real browsers, not inferring from build artifacts
  • Covering all environments where payment data flows
  • Capturing first-party, third-party, inline, and dynamically loaded scripts
  • Updating continuously, not quarterly

Treat inventory as a living document. If it doesn’t match what’s actually running today, everything downstream is guesswork.

Turn approvals into auditable records

Script approval can’t exist in someone’s memory or scattered across Slack threads. You need documentation that survives staff turnover and audit pressure.

For each script:

  • Who approved it (name, not “the team”)
  • When approval happened (date, not “sometime last year”)
  • Why it’s necessary (business justification, not “marketing wanted it”)
  • What functionality was reviewed (data access, execution context)

When scripts change, reauthorize them. An unchanged domain doesn’t mean unchanged risk. Vendors ship updates constantly.
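As a concrete shape for such a record, here is one way to structure it (a generic illustration, not any specific tool's schema; all field values are hypothetical):

```python
from dataclasses import dataclass
from datetime import date


@dataclass(frozen=True)
class ScriptApproval:
    """One auditable authorization decision for one script."""
    script_url: str
    approver: str       # a named person, not "the team"
    approved_on: date   # an actual date, not "sometime last year"
    justification: str  # why this script belongs on a payment page
    reviewed_scope: str # what data access and behavior were reviewed
    script_hash: str    # pins approval to exact code; a change forces re-review


approval = ScriptApproval(
    script_url="https://cdn.vendor.example/ga4.js",
    approver="Jane Smith",
    approved_on=date(2024, 3, 15),
    justification="Conversion attribution for paid campaigns",
    reviewed_scope="Reads page URL and click IDs; no access to card fields",
    script_hash="sha256:placeholder-digest",
)
```

Tying the approval to a hash of the reviewed code is what makes re-authorization enforceable: when the script changes, the record visibly no longer matches what is running.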

Monitor continuously, not sporadically

6.4.3 fails in the gaps between reviews. Quarterly reviews miss everything that happens between checkpoints.

You need monitoring that:

  • Runs by default, not manually triggered
  • Catches new scripts as they appear
  • Detects changes to existing scripts
  • Alerts on anomalies
  • Documents response actions

If monitoring only started before your audit, QSAs will ask what happened during the previous 11 months. That’s an uncomfortable question.

Generate evidence as you operate

If you’re assembling evidence during audit prep, you’ve already lost time and probably accuracy. Evidence should be a byproduct of your controls, not a separate scramble.

Maintain these continuously:

  • Script inventory history
  • Approval records with timestamps
  • Monitoring logs
  • Change documentation
  • Response actions

When your QSA asks for six months of evidence, you pull reports instead of reconstructing history from partial records.

What to do if you’re running CSP and failing audits

If you already have CSP deployed, you’re not starting from zero. You’ve established browser-side enforcement, and that’s real security. Keep it.

The issue isn’t that CSP doesn’t work. It’s that PCI 6.4.3 is asking questions CSP wasn’t designed to answer.

CSP says “browser, only load scripts from these domains.” That’s enforcement.

6.4.3 asks “what scripts actually ran, who approved them, and how would you know if they changed?” That’s governance and evidence.

The right move is straightforward: Let CSP handle enforcement. Add purpose-built tooling like PaymentGuard AI to handle discovery, authorization tracking, and continuous evidence generation.

Once those roles are separated, 6.4.3 stops being a fire drill. You walk into audits with complete documentation, QSAs get the evidence they need without extracting it from you over three weeks, and you’re not rebuilding your compliance story under pressure.

CSP keeps unauthorized code from executing. PaymentGuard keeps you from failing audits because you can’t explain what did execute.

Both matter. Neither replaces the other.