A fintech startup ran Semgrep, CodeQL, and SonarQube on every commit. Green across the board. Passed their SOC 2 audit. Three weeks later, an attacker drained $42,000 from customer accounts through a business logic flaw that none of those tools even looked at.
Not a misconfiguration. Not a missing rule. A fundamental limitation of how static analysis works.
SAST tools match patterns in source code. They're exceptional at finding known vulnerability signatures — SQL injection via string concatenation, hardcoded credentials, buffer overflows in C. But pattern matching has a ceiling, and most engineering teams don't understand where that ceiling is. They see green checkmarks and assume coverage. That assumption gets expensive.
Business Logic Flaws: The Biggest Blind Spot
Static analysis cannot reason about what your application is supposed to do.
Consider this checkout endpoint:
app.post('/checkout', async (req, res) => {
  const { cartId, promoCode } = req.body;
  const cart = await Cart.findById(cartId);
  if (promoCode) {
    const promo = await Promo.findByCode(promoCode);
    if (promo && !promo.expired) {
      cart.total = cart.total * (1 - promo.discount);
      // no check if promo was already applied
      // no check if promo belongs to this user
      // no limit on how many times you can POST this
    }
  }
  await cart.save();
  res.json({ total: cart.total });
});
Every SAST tool on the market will pass this. No injection. No XSS. No hardcoded secrets. Syntactically, it's clean.
But hit that endpoint repeatedly with the same promo code and the discount stacks multiplicatively. A 20% off code applied four times gives you 59% off; after ten applications you're paying roughly 11% of the original total.
This isn't hypothetical. Coupon stacking bugs show up constantly in bug bounty programs. Shopify, Uber, and dozens of smaller companies have paid out bounties for exactly this pattern. No static analysis tool catches it because there's no syntactic signature to match — you'd need to understand the business rule that a promo code should only apply once per cart.
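The fix is a business rule, not a pattern: make promo application idempotent. A minimal sketch — the cart shape (`baseTotal`, `appliedPromoCodes`) and the single-discount model are assumptions for illustration, not part of the snippet above:

```javascript
// Sketch of an idempotent promo application. Cart shape is illustrative.
function applyPromo(cart, promo) {
  // Reject replays: a code may only be applied once per cart.
  if (cart.appliedPromoCodes.includes(promo.code)) {
    return cart; // repeated POST is a no-op
  }
  cart.appliedPromoCodes.push(promo.code);
  // Recompute from the base price, never from the running total,
  // so repeated application cannot compound.
  cart.total = cart.baseTotal * (1 - promo.discount);
  return cart;
}
```

Recomputing from the base price means a replayed request changes nothing, instead of compounding the discount.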
Race Conditions and TOCTOU
Time-of-check-to-time-of-use. The bane of concurrent systems and almost entirely invisible to SAST.
async function withdrawFunds(userId, amount) {
  const account = await Account.findById(userId);
  // Check: does the user have enough?
  if (account.balance < amount) {
    throw new Error('Insufficient funds');
  }
  // Use: deduct the amount
  // But what if another request checked between these two lines?
  account.balance -= amount;
  await account.save();
}
Fire two requests simultaneously for $500 when the balance is $600. Both pass the check. Both deduct. Balance goes to -$400. Classic double-spend.
CodeQL has some concurrency analysis for Java. Semgrep doesn't even try. SonarQube flags some threading issues in specific patterns but misses the async/await variants entirely. And honestly? Even the tools that claim to detect race conditions catch maybe 10-15% of real-world cases. The problem is fundamentally about runtime behavior, and static tools analyze code at rest.
You need database-level locking, optimistic concurrency, or idempotency keys. No linter will tell you that.
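Optimistic concurrency, one of those options, can be sketched without a database: every row carries a version number, and a write commits only if the version is unchanged since the read. Here a plain in-memory `Map` stands in for the database, and all names are illustrative — in a real system the version check and the write happen in one atomic statement (`UPDATE ... SET balance = ?, version = version + 1 WHERE id = ? AND version = ?`):

```javascript
// In-memory stand-in for a database table with a version column.
const accounts = new Map([['alice', { balance: 600, version: 0 }]]);

function withdraw(userId, amount) {
  for (let attempt = 0; attempt < 3; attempt++) {
    const snapshot = { ...accounts.get(userId) }; // the read
    if (snapshot.balance < amount) throw new Error('Insufficient funds');
    const current = accounts.get(userId);
    // Commit only if nobody wrote between our read and this point.
    // (A real database makes this check-and-write atomic; single-threaded
    // JS can only illustrate the shape of the pattern.)
    if (current.version === snapshot.version) {
      accounts.set(userId, {
        balance: snapshot.balance - amount,
        version: snapshot.version + 1,
      });
      return;
    }
    // Version moved: another writer committed first. Retry from a fresh read.
  }
  throw new Error('Contention: giving up after 3 attempts');
}
```

If the version moved, the loser retries from a fresh read instead of clobbering the winner's write — which is exactly what the naive check-then-save above fails to do.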
Indirect Data Flow Through External Systems
SAST traces data flow through your codebase. That's the key word — your codebase.
Data goes into a message queue, gets processed by a different service, stored in Redis, picked up by a cron job, and rendered in a template. The taint tracking breaks the moment data leaves the process boundary. Every microservice architecture has these gaps. Polyglot systems make it worse — your Python service receives data that your Node service validated (or didn't).
// Service A: validates and publishes
const sanitized = sanitizeInput(userInput);
await messageQueue.publish('user-updates', { name: sanitized });

// Service B: six hops later, different repo, different team
consumer.on('render-notification', (msg) => {
  // Is msg.name still sanitized? Was it re-serialized?
  // Did any service in between modify it?
  template.render(`Hello ${msg.name}`); // potential XSS
});
No SAST tool on the market traces taint across service boundaries through message queues. Some commercial tools like Checkmarx and Veracode attempt cross-file analysis within a single project, but cross-service? Across different languages? Through a Kafka topic? Not happening.
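The practical defense is to stop trusting upstream entirely: escape at the output boundary regardless of what earlier services claim to have sanitized. A minimal consumer-side sketch — `escapeHtml` here is a hand-rolled stand-in, and a real service would use a vetted library:

```javascript
// Escape at the point of output, not at the point of input.
// Minimal HTML escaper for illustration only.
function escapeHtml(value) {
  return String(value)
    .replace(/&/g, '&amp;')  // must run first, or it double-escapes
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

// Consumer side: never interpolate msg.name raw into markup.
function renderNotification(msg) {
  return `Hello ${escapeHtml(msg.name)}`;
}
```

With this in place, it no longer matters whether the message was re-serialized or modified in transit: the render boundary is safe on its own.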
Authentication and Authorization Logic
Tools can find missing authentication middleware if you write a custom rule. But authorization — who can do what to which resource — that's entirely application-specific.
app.get('/api/documents/:id', authenticate, async (req, res) => {
  // authenticate checks: is this a valid user? Yes.
  // but does THIS user have access to THIS document?
  const doc = await Document.findById(req.params.id);
  res.json(doc); // IDOR - any authenticated user gets any document
});
Insecure Direct Object Reference. Still in the OWASP Top 10 after all these years. SAST tools see a database query with a parameter from the URL and... that's fine. That's how every API works. The tool cannot know that req.params.id should be scoped to req.user.organizationId or whatever your tenancy model requires.
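The fix is to make tenancy part of the lookup rather than an afterthought. A sketch with in-memory data standing in for the database — the `organizationId` fields are an assumption about your tenancy model:

```javascript
// Authentication says who you are; authorization says what you may touch.
// Object shapes are illustrative.
function canAccessDocument(user, doc) {
  return doc.organizationId === user.organizationId;
}

// Equivalent in a real route to scoping the query itself:
//   Document.findOne({ _id: id, organizationId: req.user.organizationId })
function fetchDocument(user, documents, id) {
  const doc = documents.find((d) => d.id === id);
  if (!doc || !canAccessDocument(user, doc)) {
    return null; // treat "not yours" the same as "not found"
  }
  return doc;
}
```

Returning the same result for "doesn't exist" and "exists but isn't yours" also avoids leaking which IDs are valid.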
Some teams write Semgrep rules like "flag any database query in a route handler that doesn't reference req.user." Clever. But brittle — refactor your auth into a middleware or a service layer and the rule breaks.
Configuration-Dependent Vulnerabilities
Your code is secure. Your deployment isn't.
CORS set to * in production. Debug mode left on. Session cookies without the Secure flag because the local dev proxy doesn't use HTTPS and someone forgot to add an environment check. SAST analyzes source code, not runtime configuration. Some tools scan config files, sure. But the interaction between code behavior and deployment config? That gap is real.
// Looks fine in code
app.use(cors(config.corsOptions));
// config.corsOptions in production:
// { origin: '*', credentials: true }
// loaded from an env var that nobody audited
SonarQube might flag origin: '*' if it's hardcoded. But when it comes from process.env.CORS_ORIGIN? Silent. The tool sees a variable reference and moves on.
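One cheap mitigation: assert on the loaded configuration at boot, so a bad env var kills the deploy instead of shipping. A sketch — the `corsOptions` shape mirrors the snippet above, but the check itself is illustrative:

```javascript
// Fail at startup, not at exploit time. Wildcard origin combined
// with credentials is never a valid production configuration.
function validateCorsConfig(corsOptions, env) {
  if (env === 'production' && corsOptions.origin === '*') {
    if (corsOptions.credentials) {
      throw new Error('CORS: origin "*" with credentials in production');
    }
    console.warn('CORS: wildcard origin in production');
  }
  return corsOptions;
}

// Usage sketch: app.use(cors(validateCorsConfig(config.corsOptions, process.env.NODE_ENV)));
```

It's a few lines, and it turns a silent misconfiguration into a crash that someone has to look at.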
Cryptographic Misuse Beyond the Obvious
SAST catches MD5 for password hashing. Good. Low-hanging fruit.
It won't catch:
- AES-CBC without HMAC (padding oracle attacks still work in 2026)
- RSA with PKCS#1 v1.5 padding instead of OAEP
- Comparing HMAC signatures with === instead of a constant-time comparison
- Generating IVs from Math.random() wrapped in a buffer conversion so the Math.random pattern match fails
- Using the same key for encryption and authentication
Cryptographic correctness requires understanding the protocol, not just the primitives. Most SAST rules are written at the primitive level: "don't use DES," "don't use ECB mode." The subtle stuff — the stuff that actually breaks real systems — needs a reviewer who understands applied cryptography. Or a specialized tool like CryptoGuard, which is narrow but deeper.
The Semantic Gap in Template Injection
Server-side template injection (SSTI) has become more common as frameworks proliferate. SAST tools have signatures for the obvious cases — Jinja2 with render_template_string(user_input). But the real-world variants are messier.
# Indirect SSTI - user controls a "theme" setting
# that maps to a template path
theme = db.get_user_preference(user_id, 'theme')
# theme value: "{{config.__class__.__init__.__globals__['os'].popen('id').read()}}"
# stored weeks ago via a settings page that allows "custom themes"
return render_template(f"themes/{theme}/base.html")
The injection point (settings page) and the execution point (template render) are separated by time and code path. Static analysis loses the trail. And the f"themes/{theme}/base.html" pattern doesn't match any standard SSTI signature because it looks like a path construction, not a template evaluation.
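The defense that survives this pattern is an allowlist at the point of use: never build a template path from a stored preference, only select among known values. Sketched in JavaScript to match the rest of the post — the theme names are placeholders:

```javascript
// Treat stored preferences as untrusted input. An allowlist turns
// "render whatever the user saved" into "render one of N known themes".
const ALLOWED_THEMES = new Set(['default', 'dark', 'high-contrast']);

function resolveThemePath(theme) {
  if (!ALLOWED_THEMES.has(theme)) {
    theme = 'default'; // unknown or malicious value: fall back
  }
  return `themes/${theme}/base.html`;
}
```

The payload stored weeks ago simply never reaches the template engine, no matter how it's spelled.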
What Actually Fills These Gaps
SAST is a layer. A good one. But treating it as your security program is like wearing a seatbelt and skipping the brakes.
DAST (Dynamic Application Security Testing) catches what SAST misses by actually running the application. Tools like Burp Suite, OWASP ZAP, and Nuclei send real requests and analyze real responses. They'll find the IDOR, the race condition (sometimes), the misconfigured CORS. But they're slower, harder to integrate into CI, and need a running environment.
Manual code review catches business logic flaws. Human reviewers understand intent, not just syntax. A reviewer who knows your domain will look at that checkout endpoint and immediately ask "what stops someone from applying this twice?" No tool asks that question.
Threat modeling before writing code. Identify where the trust boundaries are, what assumptions each component makes, where data crosses those boundaries. STRIDE, PASTA, attack trees — pick a framework, any framework. The exercise matters more than the methodology.
Runtime protection as the last line. WAFs, RASP, anomaly detection. They won't prevent the vulnerability from existing, but they can stop exploitation.
Building a Layered Approach
Run SAST. Absolutely. Configure it well, tune the rules, fix what it finds. But then ask: what can't this tool see?
For every feature that handles money, permissions, or sensitive data, add manual review on top of automated scanning. A human reviewer who understands your business logic catches the classes of bugs that pattern matching never will.
ScanMyCode.dev combines automated analysis with expert review — covering both the pattern-matching layer and the semantic understanding that tools alone can't provide. You get findings with exact file and line references, plus business-logic observations that no SAST scanner generates.
Stop Trusting the Green Checkmark
Green CI doesn't mean secure. It means your tools didn't find anything they were designed to find. That's a much weaker statement than most teams realize.
Map your actual risk surface. Figure out which parts of your codebase handle the most sensitive operations. Then ask honestly: is a regex-based pattern matcher sufficient to validate the security of a payment flow? Of an authorization system? Of a data pipeline that crosses four service boundaries?
Usually the answer is no. And that's fine — as long as you've got something else covering the gap.
Don't wait until an attacker finds what your scanner missed. Get a security audit that goes beyond pattern matching — results in 24 hours.