December 5, 2025

How to Prove PCI DSS 6.4.3 & 11.6.1 Compliance to Your QSA (Evidence, Alerts, Audit Trail)

Ivan Tsarynny

When organizations fail PCI audits, it is rarely because they lack documentation or controls. They fail because they cannot prove those controls operate reliably when a QSA evaluates them.

Requirements 6.4.3 and 11.6.1 expect evidence that reflects the page as the browser renders it: QSAs look for proof that the controls ran on the actual rendered page throughout the assessment period. This expectation is clear in the standard, and it is the point where many teams struggle.

Even one script missing from the organization’s documentation calls the entire control chain into question, because when evidence is incomplete or stale, the QSA must assume the control is ineffective.

When that happens, organizations typically spend upwards of fifty thousand dollars on remediation, re-assessment, and QSA fees. Meanwhile, certification delays interrupt partnerships, affect revenue, and slow the business.

This article explains how to avoid that outcome. 

What you’ll learn

  • How QSAs evaluate 6.4.3 and 11.6.1 
  • Why organizations fail 6.4.3 and 11.6.1 audits
  • How automation can help you pass your next attempt

Why organizations fail PCI DSS 6.4.3 and 11.6.1 audits

QSAs approach 6.4.3 and 11.6.1 through a predictable sequence. They load the payment page, capture what the browser receives, and then check whether each script and header is inventoried, authorized, tied to an integrity baseline, and covered by monitoring evidence for the assessment period.

If any part of that chain is missing or out of alignment, the control does not hold up and the audit fails. The question is why that alignment breaks.

Let’s explore how small gaps in script management and monitoring accumulate into findings. 

Why payment page script inventories fail PCI DSS 6.4.3 requirements

Teams maintain spreadsheets or CMDB entries that reflect what engineering believes is deployed. But additional scripts almost always surface when a QSA loads the payment page.

The root issue is that modern payment pages are not static. They assemble in real time, often pulling in scripts the organization does not own or control. Third-party tools load their own dependencies, and vendor-hosted scripts change without notice.

Manual reviews cannot keep up with a changing page. An inventory that is correct on the day it is created begins to drift immediately, and it soon falls out of sync with what the browser actually loads.

From the assessor’s point of view, when a script appears in the rendered page but not in the inventory, 6.4.3 is not met. 

Why script authorization evidence fails PCI DSS 6.4.3 audits

QSAs doubt authorization evidence when organizations label scripts as “authorized” but cannot show who approved them or when. That happens when approvals are vague, justifications are generic, or third-party scripts are trusted by default.

To put it simply, QSAs are looking for a clear decision trail. For each script, they want to see a business or technical justification, an approver, and a date. When the only proof is a spreadsheet column that says “approved,” the assessor concludes that authorization is a label, not a control.

This gap becomes more obvious when the QSA finds scripts on the payment page that have no matching approval record. 

Why integrity controls don’t always hold up 

The third pattern is claiming integrity controls without being able to prove they operate.

Teams point to SRI tags, CSP policies, or a monitoring tool and treat that as sufficient. But under 6.4.3, that is not enough. QSAs want to see that a baseline was established and that validations have been running against that baseline over time.

If a team cannot show the known-good version of a script or when it was last validated, the integrity element of 6.4.3 fails. Configuration is not enough. QSAs need evidence of execution with timestamps across the assessment period.
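That “evidence of execution” is the kind of record a simple hash-comparison job can produce. The sketch below is a hypothetical illustration, not any particular product’s implementation: the script name, the baseline store, and the record fields are all invented for the example. It computes a SHA-256 digest of fetched script bytes, compares it against the known-good digest recorded at authorization time, and emits a timestamped pass/fail record of the sort an assessor can review.

```python
import hashlib
from datetime import datetime, timezone

# Known-good digests recorded when each script was authorized.
# (Hypothetical content; a real baseline would store the digest
# captured at approval time.)
BASELINE = {
    "checkout.js": hashlib.sha256(b"console.log('checkout');").hexdigest(),
}

def validate(name: str, content: bytes) -> dict:
    """Compare fetched script bytes against the baseline and return a
    timestamped evidence record for the audit trail."""
    observed = hashlib.sha256(content).hexdigest()
    expected = BASELINE.get(name)
    return {
        "script": name,
        "checked_at": datetime.now(timezone.utc).isoformat(),
        "expected": expected,
        "observed": observed,
        "result": "pass" if observed == expected else "fail",
    }
```

Run on a schedule and retained, the pass records with timestamps become the execution evidence described above; a fail record is the trigger for the 11.6.1 alert flow.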

The same applies to 11.6.1. If teams claim to monitor for changes but cannot produce logs or alert history, the mechanism is treated as unproven.

Why monitoring fails frequency and coverage expectations

Monitoring often falls short because manual checks and periodic scans do not run consistently. Teams depend on people to remember tasks, tools that need upkeep, and schedules that slip during busy cycles. 

As a result, tools configured once are rarely revisited, and the logs show long gaps with no activity or alerts. From the QSA’s view, those gaps mean the environment may have been unmonitored, and 11.6.1 is not met.

Why evidence becomes stale

Evidence goes stale because the environment changes faster than the documentation that supports it. Inventories age as new scripts appear or old ones are removed. So gaps form across the assessment period. And different teams update their own records on their own timelines, which creates multiple versions of truth that drift apart.

When the QSA renders the page and sees scripts or headers that do not appear in the documentation, the script list from the live browser session becomes the basis for the finding. 

What QSAs actually need to see

For 6.4.3 and 11.6.1, QSAs anchor on the rendered page and work backward. They look at what the browser actually receives, then verify you have the inventory, approval trail, integrity baseline, and monitoring evidence to match it.

Here is how their questions typically map to the evidence they expect to see. 

QSA Question: What scripts executed when we loaded the page?
Evidence Required: A complete inventory of all first-, third-, and fourth-party scripts that execute in the browser, with URL, domain, owner, and documented purpose. It must match what the QSA observed in their rendered session.

QSA Question: How do you confirm this inventory is current?
Evidence Required: Timestamps and records from recent discovery runs showing validation against the rendered page, including dynamic and conditional scripts.

QSA Question: Who approved these scripts?
Evidence Required: Approval records for each script with approver name, role, date, and documented justification, demonstrating a clear workflow from discovery to review to authorization.

QSA Question: How do you verify script integrity?
Evidence Required: Documentation of the integrity method (hashing, SRI, behavioral baselines) along with logs showing validation runs and results across the assessment window.

QSA Question: How are new scripts captured and approved?
Evidence Required: Evidence that newly detected scripts are discovered through monitoring, added to the inventory, and routed for authorization before or immediately after appearing on the payment page.
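As a concrete starting point for the inventory question, the sketch below extracts script elements from page markup. It is deliberately minimal and hypothetical, and it has exactly the limitation this article describes: parsing static HTML only captures what is in the markup, while real payment pages assemble at runtime, so production discovery has to observe the page as the browser renders it.

```python
from html.parser import HTMLParser

class ScriptCollector(HTMLParser):
    """Collect external script URLs and count inline script blocks."""

    def __init__(self):
        super().__init__()
        self.sources = []   # external script src URLs
        self.inline = 0     # number of inline <script> blocks

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            src = dict(attrs).get("src")
            if src:
                self.sources.append(src)
            else:
                self.inline += 1

def inventory(html: str) -> dict:
    """Return a first-pass script inventory from static markup.
    Runtime-injected scripts (tag managers, vendor dependencies)
    will NOT appear here; browser-based discovery is required."""
    collector = ScriptCollector()
    collector.feed(html)
    return {"external": collector.sources, "inline": collector.inline}
```

Each external URL found this way would then need an owner, a purpose, and an approval record to satisfy the rows above.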

For Requirement 11.6.1 (Change and Tamper Detection)

For Requirement 11.6.1, QSAs look for signs that change detection is functioning as an operational control, not a configured one. They expect to see monitoring that evaluates the rendered page on a predictable cadence, generates alerts when something changes, and documents how those alerts are handled.

QSA Question: How do you detect unauthorized changes?
Evidence Required: Documentation and configuration of the monitoring mechanism that evaluates payment pages as received by the browser, covering scripts and HTTP headers.

QSA Question: Show me recent monitoring activity.
Evidence Required: Timestamped logs proving that monitoring ran continuously (or at the defined frequency) across the assessment period, with no unexplained gaps.

QSA Question: What happens when something changes?
Evidence Required: Alert records showing what was detected, when it was detected, who was notified, and how the alert was routed.

QSA Question: Show me an investigation.
Evidence Required: An end-to-end example that traces an alert through investigation and resolution, with the supporting tickets and remediation actions documented.

QSA Question: How often does monitoring run?
Evidence Required: Evidence of monitoring frequency. This might include configuration exports, scan schedules, and, if applicable, the targeted risk analysis supporting any interval other than weekly.
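The detection rows above boil down to a diff between the script set observed on the rendered page and the authorized inventory. The sketch below illustrates that diff; the URLs, field names, and alert format are hypothetical assumptions, not an implementation of any specific monitoring product.

```python
from datetime import datetime, timezone

# Hypothetical authorized inventory for a payment page.
AUTHORIZED = {
    "https://pay.example.com/checkout.js",
    "https://cdn.example.com/analytics.js",
}

def detect_changes(observed: set) -> list:
    """Emit one timestamped alert record per unauthorized or missing
    script, forming the alert history a QSA asks to see."""
    now = datetime.now(timezone.utc).isoformat()
    alerts = []
    for src in sorted(observed - AUTHORIZED):
        alerts.append({"at": now, "type": "unauthorized_script", "src": src})
    for src in sorted(AUTHORIZED - observed):
        alerts.append({"at": now, "type": "missing_script", "src": src})
    return alerts
```

A real mechanism must also cover HTTP headers and run on a documented cadence, but even this toy version shows why the authorized baseline has to exist first: without it there is nothing to diff against.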

In the end, all of it comes back to whether your evidence matches what the browser actually receives.

How PaymentGuard AI produces QSA-ready evidence

PaymentGuard AI is designed around the same sequence QSAs use during 6.4.3 and 11.6.1 assessments. It captures what runs in the browser, maintains the authorization chain, verifies integrity over time, and logs detection activity in a way that matches QSA expectations. 

Let’s take a closer look at how PaymentGuard AI delivers that:

Keeps the inventory aligned with QSA expectations

PaymentGuard AI discovers every script that executes in the browser, be it first-party, third-party, or fourth-party. It continuously updates inventory as the page changes, so the exported list matches what a QSA sees when they load the payment page. 

This way your teams stay focused on the work that strengthens resilience rather than chasing evidence gaps, all while inventory stays aligned with QSA expectations. 

Maintains a clean authorization trail

It flags new scripts the moment they appear. Then, it routes each one through an approval workflow with required justification fields, approver identity, and timestamps. This way, the record shows exactly how the script moved from discovery to authorization, allowing QSAs to follow the chain without ambiguity.

Tracks script behavior to preserve integrity over time

PaymentGuard AI builds a behavioral baseline for each authorized script and monitors for changes. When behavior shifts, due to changes like new DOM access, different outbound calls, or modified logic, the system records the change with timestamps and version history. 

QSAs see evidence that integrity checks operate continuously, not just when documentation was written.

Detects and evaluates the page that the customer receives

PaymentGuard AI monitors payment pages exactly as the consumer’s browser receives them, satisfying the core intent of 11.6.1. Unauthorized changes generate real-time alerts and supporting logs. Each alert is tied to a full investigation record, giving assessors a complete sequence from detection to resolution.

Deploys quickly to reduce compliance burden

PaymentGuard AI deploys in under 24 hours and requires no engineering lift. It integrates cleanly with SIEM workflows and produces evidence continuously in the background. The controls remain active even as the environment changes.

From failed audit to passing: A realistic timeline

Once the rendered page is monitored continuously, the rest falls into place. The inventory stabilizes, authorizations follow a clear workflow, and monitoring logs grow into a defensible audit trail. From there, moving from a failed audit to QSA-ready evidence typically takes eight to ten weeks. PaymentGuard AI supports that journey. 

Here is the typical sequence.

Week 1: PaymentGuard AI is deployed on payment pages.
Outcome: You gain full visibility within 24 hours because discovery runs directly in the browser, not through manual reviews.

Week 2: PaymentGuard AI identifies scripts and routes them into authorization.
Outcome: The inventory stabilizes quickly as new scripts flow into a structured approval process instead of ad-hoc updates.

Weeks 3–4: PaymentGuard AI establishes baselines for authorized scripts.
Outcome: Each script gets tied to a clear justification and baseline, closing one of the most common gaps QSAs flag.

Weeks 5–8: PaymentGuard AI runs continuous monitoring across the environment.
Outcome: Monitoring produces a consistent trail of logs and alerts because it evaluates the page automatically.

Week 9+: PaymentGuard AI generates a complete evidence set for review.
Outcome: The evidence set is complete and aligned because it reflects how the controls operated throughout the assessment window.

The bottom line

The gap in 6.4.3 and 11.6.1 is rarely about effort or intent. Most teams have policies, processes, and owners in place. The issue is that these controls must be proven through what the browser shows and what the logs can confirm, and manual reviews simply can’t keep up with how quickly payment pages change.

When that evidence isn’t there, or doesn’t align with the rendered page, the QSA has no basis to treat the control as operating, regardless of how well it was designed.

If you put in the work and still failed the last audit due to evidence, the problem is not effort. It is how the controls show up in front of a QSA. PaymentGuard AI can help you pass on the next attempt. 

Feroot deploys in 24 hours and generates the exact evidence QSAs need. Schedule a demo to see how it works.

FAQ

What do PCI DSS Requirements 6.4.3 and 11.6.1 actually require?

Requirement 6.4.3 mandates that organizations maintain an inventory of all scripts executing on payment pages, with documented authorization and business justification for each script, plus integrity verification to ensure scripts haven’t been tampered with. Requirement 11.6.1 requires a change and tamper detection mechanism that alerts on unauthorized modifications to payment pages, including scripts and HTTP headers. The critical detail both requirements share is that QSAs evaluate these controls based on what actually renders in the customer’s browser, not what you think you deployed. Your evidence must reflect the live payment page throughout the entire assessment period.

Why do organizations with documented controls still fail 6.4.3 and 11.6.1 audits?

Organizations fail not because they lack documentation, but because they cannot prove their controls operate reliably. The most common failure pattern: teams maintain script inventories in spreadsheets that reflect what engineering believes is deployed, but when QSAs load the payment page, additional scripts appear that aren’t documented. Third-party tools load their own dependencies, vendor-hosted scripts change without notice, and manual reviews can’t keep pace with pages that assemble dynamically in real time. Even one undocumented script on the rendered page causes QSAs to question the entire control chain. If you can’t prove the inventory is current, the authorization is missing, or the monitoring has gaps, the requirements aren’t met regardless of how well the controls were designed.

What evidence do QSAs actually need to see for Requirement 6.4.3?

QSAs need a complete inventory of every script that executes in the browser, including first, third, and fourth-party scripts, with URLs, domains, owners, and documented purposes. For each script, they expect approval records showing the approver’s name, role, date, and business justification. They need documentation of your integrity verification method, whether hashing, SRI, or behavioral baselines, along with logs proving validation ran continuously across the assessment window with timestamps. Finally, they want evidence showing how newly detected scripts are discovered, added to inventory, and routed for authorization. The key requirement: all of this evidence must match what QSAs observe when they load your payment page in a browser during the assessment.

What does “continuous monitoring” actually mean for Requirement 11.6.1?

Continuous monitoring for 11.6.1 means your detection mechanism evaluates payment pages as received by the browser on a predictable, documented cadence throughout the assessment period. QSAs expect timestamped logs proving monitoring ran consistently with no unexplained gaps, alert records showing what was detected and when, evidence of how alerts were routed and investigated, and end-to-end examples tracing alerts through investigation to resolution. Weekly monitoring is the baseline expectation unless you have a targeted risk analysis supporting a different interval. The critical element: QSAs need proof the monitoring was operational, not just configured. Configuration without execution logs means the control is unproven.

Why can’t manual script reviews satisfy these requirements?

Manual reviews create several insurmountable problems. First, they’re point-in-time snapshots that age immediately as payment pages change, meaning your inventory is already outdated by the time you finish documenting it. Second, manual reviews only capture what loads during that specific session and miss conditional scripts that fire based on geography, device type, user segment, or A/B test variants. Third, manual processes can’t prove continuous operation because they depend on people remembering tasks and tools that need maintenance, creating gaps in your monitoring logs. Fourth, when QSAs load your payment page and see scripts that aren’t in your manual inventory, they have no basis to trust that your controls operated throughout the assessment period. Manual evidence simply can’t demonstrate the continuous, comprehensive monitoring that 6.4.3 and 11.6.1 demand.

What’s the typical cost of failing a PCI DSS audit due to 6.4.3 or 11.6.1?

Organizations that fail typically spend $50,000 or more on remediation, re-assessment fees, and additional QSA time. Beyond direct costs, certification delays interrupt partnerships, block new customer contracts, affect revenue recognition, and slow business operations. The timeline impact extends the damage as teams scramble to implement controls, collect retrospective evidence, and schedule follow-up assessments. Many organizations discover they need to rebuild evidence for the entire assessment period, which can take months if they’re relying on manual processes. Organizations using PaymentGuard AI avoid these costs entirely by maintaining continuous, audit-ready evidence that satisfies QSA requirements from day one.

How long does it take to go from a failed audit to passing with proper evidence?

With automated monitoring in place, moving from a failed audit to QSA-ready evidence typically takes 8-10 weeks. Week 1: Deploy monitoring to gain full visibility within 24 hours. Week 2: Identify all scripts and route them through structured authorization workflow. Weeks 3-4: Establish behavioral baselines for authorized scripts with clear justifications. Weeks 5-8: Run continuous monitoring across the environment to build a defensible audit trail with consistent logs and alerts. Week 9+: Generate the complete evidence set that reflects how controls operated throughout the assessment window. PaymentGuard AI supports this exact timeline because it captures what runs in the browser, maintains authorization chains, verifies integrity over time, and logs detection activity in formats that match QSA expectations.

Can I use SRI tags or Content Security Policy to satisfy Requirement 6.4.3?

SRI tags and CSP policies are valuable security controls, but they alone don’t satisfy 6.4.3. QSAs want to see that baselines were established and that validations have been running against those baselines over time. Simply pointing to SRI configuration or CSP headers isn’t enough. You need to prove you established a known-good version for each script, that you’re validating against that baseline continuously, and that you have timestamped logs showing validation results across the assessment period. Configuration proves intent, but QSAs need evidence of execution. PaymentGuard AI addresses this by building behavioral baselines for each authorized script, monitoring for deviations, and maintaining timestamped records of integrity checks that run automatically.
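For reference, an SRI value is the base64-encoded SHA-384 (or SHA-256/512) digest of the exact script bytes, placed in the script tag’s integrity attribute. The helper below shows the computation; the script content is a made-up example. Generating the tag is the “configuration” half of the control, and the validation logs discussed above are the “execution” half.

```python
import base64
import hashlib

def sri_sha384(content: bytes) -> str:
    """Compute a Subresource Integrity token for the given file bytes,
    e.g. for use as <script src="..." integrity="sha384-...">."""
    digest = hashlib.sha384(content).digest()
    return "sha384-" + base64.b64encode(digest).decode("ascii")

# Example: token for a (hypothetical) script file's exact bytes.
token = sri_sha384(b"console.log('checkout');")
```

Note that the token matches one exact byte sequence, which is why silently updated vendor-hosted scripts break SRI and why the baseline must be re-established through an authorization step whenever a script legitimately changes.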

What happens if a script changes during the PCI assessment period?

This is exactly what 11.6.1 is designed to catch. When a script changes, whether from a vendor update, compromised third party, or internal modification, your detection mechanism must alert on that change with a timestamp, route the alert to the security team, and generate an investigation trail. QSAs will ask to see examples of how you detected changes, how alerts were handled, and how investigations were documented. If you can’t show this end-to-end flow with supporting evidence, 11.6.1 fails even if the change was legitimate. PaymentGuard AI detects script behavior changes in real time, alerts immediately when deviations occur, and preserves the complete investigation record from detection through resolution, giving QSAs exactly the evidence trail they expect.

Why does my inventory keep falling out of sync with what QSAs see?

Modern payment pages are dynamic systems that assemble in real time. Third-party tools load their own dependencies without your knowledge or control. Tag managers inject scripts based on campaigns, user segments, or A/B tests. Vendor-hosted scripts update silently. Conditional scripts fire only under specific circumstances like mobile browsers, certain geographies, or cart values above a threshold. Your manual inventory reflects what you documented during your last review, but the page has already changed by the time that documentation is complete. PaymentGuard AI solves this by discovering scripts continuously as they execute in actual customer sessions, capturing every variant regardless of when or how it loads, and maintaining an inventory that updates automatically to match what the browser receives.

How quickly can automated monitoring be deployed?

PaymentGuard AI deploys in under 24 hours with a simple JavaScript tag and requires no engineering lift or code changes to your payment flow. Once active, it immediately begins monitoring all payment pages, building the script inventory, and establishing baselines. Within 24 hours, you have complete visibility into everything executing on your payment pages. From there, the platform runs continuously in the background, maintaining your inventory, flagging new scripts for authorization, monitoring for behavior changes, and generating the evidence logs QSAs need. There’s no per-page configuration, no integration complexity, and no disruption to payment operations. You gain full 6.4.3 and 11.6.1 evidence coverage faster than it would take to manually audit a single payment page.

What if we’ve already failed a PCI audit? Can we still use automation for the re-assessment?

Yes, and this is one of the most common use cases for PaymentGuard AI. Organizations that failed their initial audit deploy the platform immediately to start building the continuous evidence trail QSAs require. The 8-10 week timeline from deployment to QSA-ready evidence means you can schedule your re-assessment with confidence that you’ll have complete, defensible documentation. The platform captures everything from the moment it’s deployed forward, so you’re building a clean evidence baseline rather than trying to reconstruct what happened in the past. Many organizations use the failed audit findings as a roadmap, deploying PaymentGuard AI to address the specific evidence gaps QSAs identified, then demonstrating during re-assessment that the controls now operate continuously with full visibility.