A team I worked with turned on SonarQube for the first time on a three-year-old codebase. 2,847 issues. Critical, major, minor, info — a wall of red and orange that made the dashboard look like it was on fire. The tech lead stared at it for about forty seconds, said "we'll get to that," and they never did.
That was eighteen months ago. The rule set is still active. Nobody looks at it.
The 72-Hour Window
When a team first enables static analysis, there's roughly a 72-hour window where people care. They click through findings, maybe fix a few obvious ones, feel productive. Then the backlog doesn't shrink because new code keeps adding issues, and the initial dump is too large to triage in any reasonable timeframe. By day four, the Slack channel for scanner alerts gets muted. By week two, the CI gate that was supposed to block merges on critical findings gets switched to "informational only" because it's blocking every single PR.
This isn't a tooling problem. It's a human problem that tooling makes worse.
Why Most Configurations Are Garbage
The default rule sets for most static analysis tools are built to be comprehensive, not practical. ESLint ships with hundreds of rules. SonarQube's default quality profile for Java has over 500. Semgrep's community rules number in the thousands. And teams just... turn them all on.
Think about that for a second. You wouldn't configure a firewall by enabling every possible rule and seeing what breaks. But that's exactly what happens with static analysis.
The fix is boring and nobody wants to do it: start with maybe 20-30 rules. The ones that catch actual bugs your team has shipped before. Not hypothetical issues. Not style nitpicks. Real defects that caused real incidents.
```javascript
// .eslintrc.js - start here, not with "eslint:recommended"
module.exports = {
  rules: {
    // These have caused production bugs for us specifically
    'no-await-in-loop': 'error', // killed our API response times in Q3
    'no-return-assign': 'error', // caused a billing bug that took 3 days to find
    'eqeqeq': 'error', // type coercion bit us with user IDs
    'no-throw-literal': 'error',
    'require-atomic-updates': 'error', // race condition in payment flow
    // Everything else? Off. Add rules when they earn their place.
  },
};
```
That's it. Five rules. Each one maps to an actual incident. When someone asks "why is this rule enabled?" you can point to a postmortem, not a best-practices blog post.
The Severity Lie
Static analysis tools label findings as "critical" based on the potential impact, not the actual risk in your specific context. A SQL injection finding in a function that only processes data from an internal admin tool behind VPN and SSO is not the same as one in a public-facing search endpoint. But your scanner gives them both a big red "CRITICAL" badge.
Teams that successfully use static analysis do something uncomfortable: they override severities. Aggressively. That "critical" finding in dead code that hasn't been called since 2019? Downgrade it to info. The "minor" style issue that consistently causes confusion during code review and leads to bugs? Bump it to error.
Your context matters more than the tool's defaults. Period.
Baseline and Gate: The Only Pattern That Works
Forget fixing the backlog. Seriously. Declare bankruptcy on it.
Every tool worth using has a baseline feature. SonarQube calls it "new code period." Semgrep has --baseline-commit. Even ESLint can be wired up with lint-staged to only check changed files.
```shell
# semgrep in CI - only flag new issues; fail the build on new findings only
semgrep ci \
  --baseline-commit="$(git merge-base HEAD origin/main)" \
  --config=p/security-audit \
  --error
```
The deal you make with your team is simple: old code is old code. We're not going to fix it all at once. But every new line goes through the scanner, and new findings block the merge. No exceptions, no "I'll fix it in the next PR," no override buttons that junior devs can click.
Over time, as files get touched for feature work, the old issues get cleaned up organically. It takes months. Sometimes a year. But it actually happens, unlike the "we'll schedule a sprint for tech debt" fantasy.
The Tools That Actually Matter in 2026
Quick rundown, because people always ask:
Semgrep — best balance of power and usability. Custom rules in YAML that don't require a PhD to write. The pro tier adds cross-file analysis that catches things like tainted data flowing through three function calls before hitting a sink. Worth paying for if you're doing security-focused scanning.
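For a sense of how approachable those YAML rules are, here's a minimal custom rule — the id, pattern, and message are invented for illustration:

```yaml
# A minimal custom Semgrep rule (illustrative, not from any published ruleset)
rules:
  - id: no-console-log-in-src
    pattern: console.log(...)
    message: Use the structured logger instead of console.log
    languages: [javascript, typescript]
    severity: WARNING
```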
CodeQL — GitHub's offering. Powerful query language, incredible for deep analysis, but the learning curve is steep and scans are slow. Use it for weekly deep scans, not on every PR. Teams that run CodeQL on every push end up with 45-minute CI pipelines and developers who go make coffee during builds.
ESLint + typescript-eslint — still the bread and butter for JavaScript/TypeScript. The type-aware rules catch stuff that pure pattern matching misses. Enable @typescript-eslint/no-floating-promises and @typescript-eslint/no-misused-promises at minimum. These two rules alone prevent an entire class of async bugs.
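Here's the shape of the bug `no-floating-promises` exists for, sketched in plain JavaScript (the function name is invented for the demo) — the missing `await` means the rejection never reaches the surrounding try/catch:

```javascript
// A hypothetical save operation that fails asynchronously
async function saveOrder() {
  throw new Error('db connection lost');
}

let caught = false;
try {
  // Missing `await`: no-floating-promises flags exactly this call.
  // The rejection surfaces after the try block has already exited.
  const pending = saveOrder();
  pending.catch(() => {}); // handled here only so the demo process exits cleanly
} catch (err) {
  caught = true; // never reached
}
console.log(caught); // false - the error silently escaped the try/catch
```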
Ruff — if you're on Python and not using Ruff yet, you're wasting time. It replaced Flake8, isort, and pycodestyle for most teams and runs in milliseconds. Not exaggerating — it's written in Rust and lints a million lines before the old tools finish starting up.
What Scanners Can't Do
Static analysis catches patterns. It doesn't understand intent. A scanner will flag eval() every time, but it can't tell you that your permission model has a logic flaw where users can escalate by creating a group, adding themselves as admin, then leaving. That requires understanding the business logic, and no amount of abstract syntax tree traversal gets you there.
Same goes for:
- Authorization bypass through business logic
- Race conditions that depend on deployment topology
- Data validation that's technically correct but semantically wrong
- Third-party API integrations where you're trusting input you shouldn't
This is where automated scanning and human review complement each other. The scanner handles the mechanical stuff — the patterns, the known-bad constructs, the dependency CVEs. Humans handle the "wait, should this endpoint even exist?" questions.
ScanMyCode.dev combines both. Automated scanning catches the low-hanging fruit and known vulnerability patterns, but the results get reviewed and contextualized so you're not just staring at a raw findings dump. Every issue comes with the file, the line number, and a concrete fix — not a generic "consider sanitizing user input" suggestion that tells you nothing.
Making Developers Not Hate It
The single biggest predictor of whether static analysis sticks at a company: do developers see it as helpful or as bureaucracy?
Things that make developers hate it:
- Blocking PRs for style issues (tabs vs spaces, really?)
- Findings with no actionable fix suggestion
- False positives that require a suppression comment with a justification paragraph
- Scans that take more than 3 minutes on a typical PR
Things that make developers actually appreciate it:
- Catching a null pointer bug before it hits staging
- Flagging a dependency with a known exploit before it gets deployed
- Inline PR comments that show the exact fix, not just the problem
- Running fast enough that results appear before the developer switches context
Speed matters more than comprehensiveness. A fast scanner that catches 70% of issues and runs in 30 seconds will get used. A thorough scanner that catches 95% but takes 20 minutes will get disabled. Every time.
The Suppression File Problem
At some point, every team creates a suppression file. .semgrepignore, sonar-project.properties exclusions, eslint-disable comments scattered through the codebase. This is normal and fine — until it isn't.
Set a rule: every suppression needs a comment explaining why. Not "false positive" — that tells you nothing six months later. Something like:
```javascript
// Safe: key is from enum, not user input. See PaymentType enum in types.ts
// eslint-disable-next-line security/detect-object-injection
const handler = handlers[paymentType];
```
Note the ordering: the justification sits above the directive, and the directive sits directly above the suppressed line — `eslint-disable-next-line` only applies to the very next line, so a comment wedged in between would silently break the suppression.
And review suppressions quarterly. Pull up a report of all disabled rules, all ignored files, all inline suppressions. You'll find that half of them were added during a crunch period and nobody remembers why. Some of the "false positives" from six months ago are real issues now because the code around them changed.
Stop Treating It Like a Checkbox
Compliance audits ask "do you use static analysis?" and teams answer yes because they installed SonarQube. Having a tool installed is not the same as using it effectively. A scanner that produces 3,000 unread alerts is theater.
The real question is: when was the last time a static analysis finding prevented a bug from reaching production? If nobody on the team can answer that, your setup needs work.
And if the answer is "we don't know because we don't track it" — that's the first thing to fix. Tag your bug tickets. When a scanner catches something, note it. After three months, you'll have actual data on ROI instead of a vague sense that "security tooling is important."
Don't wait for an audit to find out your scanning setup is decorative. Get a code review from ScanMyCode.dev — you'll get a prioritized report within 24 hours, covering real issues in your actual codebase, not generic recommendations from a default rule set.