
OAuth2 Flows That Look Right But Aren't

Common OAuth2 implementation mistakes that pass code review but leave your app wide open. Real misconfigurations from production systems.

By Security Team · March 30, 2026

A fintech startup was breached last year because its OAuth2 implementation passed every code review, every automated scan, and two external audits. The redirect URI validation used a startsWith check. An attacker registered https://legit-app.com.evil.io, and that was it. Full account takeover across 12,000 users.

OAuth2 is one of those protocols where "working" and "secure" are completely different things.

The Redirect URI Problem Is Worse Than You Think

Everyone knows you should validate redirect URIs. Most teams do. But the validation itself is where things fall apart.

// This passes code review all the time
function validateRedirectUri(uri, allowedUris) {
  return allowedUris.some(allowed => uri.startsWith(allowed));
}

// Registers: https://app.example.com
// Attacker uses: https://app.example.com.evil.io
// Validation: ✅ passes

Exact string matching. That's it. Not startsWith, not contains, not regex with anchors you forgot. Exact match.

// Do this instead
function validateRedirectUri(uri, allowedUris) {
  let parsed;
  try {
    parsed = new URL(uri); // rejects malformed URIs outright
  } catch {
    return false;
  }
  // Exact match on origin + path against the allow-list
  const normalized = parsed.origin + parsed.pathname;
  return allowedUris.includes(normalized);
}

Even that has edge cases with trailing slashes and path normalization. But it kills the subdomain trick dead.
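The trailing-slash edge case is fixable with one more normalization step. A sketch, assuming the allow-list is stored already normalized (no trailing slash, no default port):

```javascript
// Sketch: normalize before the exact match. The URL parser already
// canonicalizes default ports and percent-encoding for us.
function normalizeRedirectUri(uri) {
  const parsed = new URL(uri); // throws on malformed input
  return parsed.origin + parsed.pathname.replace(/\/+$/, '');
}
```

So https://app.example.com:443/cb/ and https://app.example.com/cb normalize to the same string, and the comparison stays an exact match.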

The really nasty variant? Open redirectors on your own domain. If https://app.example.com/goto?url=evil.com exists anywhere on the site, an attacker chains it: legitimate redirect URI → open redirector → attacker's server. The authorization code or token lands on evil.com. Your exact-match validation saw a perfectly valid URI.

State Parameter: The Thing Nobody Implements Right

The state parameter prevents CSRF in OAuth flows. RFC 6749 says it's "RECOMMENDED." Translation from standards-speak: it's required, they just didn't want to break existing implementations.

Here's what goes wrong in practice:

// Pattern seen in ~40% of OAuth implementations
app.get('/auth/callback', (req, res) => {
  const { code, state } = req.query;
  
  // "Validate" state
  if (!state) {
    return res.status(400).send('Missing state');
  }
  
  // Exchange code for token
  // ... rest of flow
});

Checking that state exists isn't validation. Checking that state matches a cryptographically random value tied to the user's session is validation. Big difference.

// Actual state validation
app.get('/auth/callback', (req, res) => {
  const { code, state } = req.query;
  const expectedState = req.session.oauthState;

  if (!state || !expectedState) {
    return res.status(403).send('Invalid state');
  }

  // timingSafeEqual throws on length mismatch, so compare lengths first
  const received = Buffer.from(String(state));
  const expected = Buffer.from(expectedState);
  if (received.length !== expected.length ||
      !crypto.timingSafeEqual(received, expected)) {
    return res.status(403).send('Invalid state');
  }

  delete req.session.oauthState; // single use
  // Now exchange code
});

Timing-safe comparison. Single use (delete after check). Tied to the session. Three things, and most implementations get zero of them right.

Authorization Code vs. Implicit Flow: A Dead Horse That Won't Stay Down

The implicit flow returns tokens directly in the URL fragment. It was designed for SPAs back when CORS was painful and browser capabilities were limited. That was 2012.

It's 2026. CORS works. PKCE exists. The implicit flow should be dead.

But it's not. A security audit last quarter found 3 out of 7 apps still using implicit flow because "the tutorial said so." The tutorial was from 2017. Tokens in URL fragments get logged by proxies, browser extensions, analytics scripts. Every piece of JavaScript on the page can read window.location.hash.

Authorization Code + PKCE. For everything. SPAs, mobile apps, server-side apps. No exceptions worth making in 2026.

// PKCE flow - the right way for SPAs
const codeVerifier = generateRandomString(128);
const codeChallenge = base64url(sha256(codeVerifier));

// Step 1: Authorization request
const authUrl = new URL('https://auth.example.com/authorize');
authUrl.searchParams.set('response_type', 'code');
authUrl.searchParams.set('client_id', CLIENT_ID);
authUrl.searchParams.set('redirect_uri', REDIRECT_URI);
authUrl.searchParams.set('code_challenge', codeChallenge);
authUrl.searchParams.set('code_challenge_method', 'S256');
authUrl.searchParams.set('state', generateState());

// Step 2: Token exchange includes verifier
const tokenResponse = await fetch('https://auth.example.com/token', {
  method: 'POST',
  body: new URLSearchParams({
    grant_type: 'authorization_code',
    code: authorizationCode,
    redirect_uri: REDIRECT_URI,
    client_id: CLIENT_ID,
    code_verifier: codeVerifier  // proves you're the same client
  })
});

Token Storage: Where Frontend Teams Get Creative

localStorage. Teams keep putting tokens in localStorage. Every XSS vulnerability on your domain — yours, a third-party script, an analytics tag — can read localStorage. All of it. No restrictions.

The argument is always "but we don't have XSS vulnerabilities." Right. And the 200 npm packages you're loading don't either. And nobody on the team will ever accidentally introduce one. And no third-party script you embed will ever get compromised.

HttpOnly cookies for refresh tokens. In-memory for access tokens (short-lived). That's the answer, and it's been the answer for years.

// Server sets this on token exchange
Set-Cookie: refresh_token=abc123; 
  HttpOnly; 
  Secure; 
  SameSite=Strict; 
  Path=/api/auth/refresh;
  Max-Age=604800

// Client stores access token in memory only
let accessToken = null; // gone on page refresh, that's fine

async function getAccessToken() {
  if (accessToken && !isExpired(accessToken)) {
    return accessToken;
  }
  // Silent refresh via HttpOnly cookie
  const res = await fetch('/api/auth/refresh', { 
    credentials: 'include' 
  });
  const data = await res.json();
  accessToken = data.access_token;
  return accessToken;
}

"But the user has to re-authenticate on refresh!" No, that's what the silent refresh is for. The HttpOnly cookie handles it. The access token lives in a JavaScript variable — gone when the tab closes or refreshes, recreated silently from the refresh token cookie. XSS can't touch HttpOnly cookies.
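The isExpired check above is left undefined. Assuming the access token is a JWT, one possible sketch (decode only; signature verification stays on the server):

```javascript
// Decode the JWT payload and compare its exp claim to the clock,
// with a small skew so we refresh slightly before actual expiry.
function isExpired(token, skewSeconds = 30) {
  const payload = JSON.parse(
    Buffer.from(token.split('.')[1], 'base64url').toString()
  );
  // Treat tokens without an exp claim as expired, to be safe
  if (!payload.exp) return true;
  return payload.exp * 1000 <= Date.now() + skewSeconds * 1000;
}
```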

Scope Creep: When Permissions Grow But Never Shrink

OAuth scopes are supposed to follow least privilege. In practice, scopes accumulate.

A SaaS product needed read access to GitHub repos for a code review feature. Launch day: scope=repo:read. Six months later someone added a "fix it for me" button: scope=repo:read,repo:write. Then came the CI integration: scope=repo:read,repo:write,workflow. Then webhooks: scope=repo:read,repo:write,workflow,admin:repo_hook.

Nobody ever went back to check if the original read scope was still needed separately, or if the broader permissions covered it. Nobody checked if the "fix it" feature was even still in the product. It wasn't — got removed in a redesign. The scope stayed.

Audit your requested scopes quarterly. Map each scope to a feature that's actually live. Remove what's dead. Your users don't read the permission screen carefully, but security researchers do.
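One lightweight way to make that quarterly audit mechanical is a scope-to-feature manifest checked into the repo, so a dead feature leaves a visible null behind. A sketch (the feature names are hypothetical):

```javascript
// Each requested scope must name the live feature that needs it.
// A null entry means the feature is gone - drop the scope.
const scopeManifest = {
  'repo:read':  'code-review',
  'repo:write': null,             // "fix it" button removed in redesign
  'workflow':   'ci-integration',
};

const requestedScopes = Object.entries(scopeManifest)
  .filter(([, feature]) => feature !== null)
  .map(([scope]) => scope);
```

Build the actual authorization request from requestedScopes, and the audit becomes a code review of the manifest instead of archaeology.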

The Token Lifetime Trap

Access tokens should be short-lived. Refresh tokens should be long-lived but rotatable. Sounds simple. Gets mangled constantly.

Common pattern that seems fine:

// "Standard" token config
{
  access_token_lifetime: 3600,   // 1 hour
  refresh_token_lifetime: 2592000 // 30 days
}

One hour access tokens are actually too long for most apps. If a token leaks, an attacker has a full hour of access. For a banking app? Fifteen minutes, max. For a read-only dashboard? An hour is probably fine. Context matters more than any "standard" recommendation.

The worse problem: refresh token reuse detection. If a refresh token gets stolen and the attacker uses it, does your system notice? Refresh token rotation means every time a refresh token is used, you issue a new one and invalidate the old one. If the old one shows up again, someone's replaying it — kill the entire token family.

// Refresh token rotation with replay detection
async function handleRefresh(refreshToken) {
  const tokenRecord = await db.refreshTokens.findOne({ 
    token: refreshToken 
  });
  
  if (!tokenRecord) {
    // Unknown token - possibly a replay of something already purged.
    // There's no record, so there's no family to revoke; just reject.
    throw new Error('Invalid refresh token');
  }
  
  if (tokenRecord.used) {
    // REPLAY DETECTED - this token was already rotated
    await db.refreshTokens.deleteMany({ 
      family: tokenRecord.family 
    });
    // Maybe alert security team
    await alertSecurityTeam(tokenRecord.userId);
    throw new Error('Token reuse detected');
  }
  
  // Mark current token as used. In production, do this atomically
  // (e.g. a conditional update on used: false) so two concurrent
  // refresh requests can't both pass the check above.
  await db.refreshTokens.updateOne(
    { token: refreshToken }, 
    { used: true }
  );
  
  // Issue new tokens in same family
  const newRefreshToken = generateToken();
  await db.refreshTokens.create({
    token: newRefreshToken,
    family: tokenRecord.family,
    userId: tokenRecord.userId,
    used: false
  });
  
  return { 
    accessToken: signAccessToken(tokenRecord.userId),
    refreshToken: newRefreshToken 
  };
}

OIDC ID Tokens: Not Your Authorization Mechanism

OpenID Connect adds ID tokens on top of OAuth2. ID tokens tell you who someone is. Access tokens tell you what they can do. Mixing them up is surprisingly common.

Seen this in production more than once: API endpoints accepting ID tokens for authorization. The ID token proves identity — great. But it wasn't designed to be sent to resource servers. It doesn't have scope claims the way access tokens do. It might not have an audience claim that matches your API. And most critically, there's no standard mechanism for the resource server to validate that the ID token was intended for it.

ID tokens are for the client. Access tokens are for the API. Keep them in their lanes.
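As a sketch, these are the claim checks a resource server should run on a JWT-style access token after verifying the signature (assuming a single string aud claim and space-delimited scope, per RFC 8693-style conventions). An ID token fails the first and third checks, which is exactly the point:

```javascript
// Claim checks for an access token at the resource server.
// An ID token has aud = client_id (not the API) and no scope claim.
function checkAccessTokenClaims(claims, apiAudience, requiredScope) {
  if (claims.aud !== apiAudience) return false;       // meant for this API?
  if (!claims.exp || claims.exp * 1000 <= Date.now()) // still valid?
    return false;
  const scopes = (claims.scope || '').split(' ');
  return scopes.includes(requiredScope);              // allowed to do this?
}
```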

Automated Scanning Catches What Reviews Miss

OAuth2 misconfigurations are subtle. The code works. The flow completes. Users log in successfully. Everything looks correct until someone with a proxy tool spends 30 minutes poking at your callback endpoint. Code reviewers focus on logic and readability — they're not going to manually trace every possible redirect URI manipulation or check if state validation uses timing-safe comparison.

ScanMyCode.dev flags these patterns automatically. Weak redirect validation, missing state checks, tokens in localStorage, implicit flow usage, overly broad scopes. The security audit covers the full OWASP authentication checklist and returns specific line numbers with fix suggestions.

What Actually Matters

Fix these five things and you're ahead of 90% of OAuth implementations:

  1. Exact-match redirect URI validation. Kill open redirectors on your domain while you're at it.
  2. Cryptographic state parameter, session-bound, single-use, timing-safe comparison.
  3. Authorization Code + PKCE everywhere. Implicit flow is a liability.
  4. Tokens out of localStorage. HttpOnly cookies for refresh, memory for access.
  5. Refresh token rotation with replay detection. Kill the family on reuse.

None of this is exotic. None of it requires custom crypto. It's just doing the boring parts right instead of copying the first Stack Overflow answer that made the login screen work.

Not sure if your OAuth implementation is solid? Run a security audit — get a full report with exact issues and fixes within 24 hours.

oauth2 · authentication · authorization · oidc · security-audit · access-control
