Security · 11 min read

SSRF Is Eating Your Cloud Infrastructure From the Inside

Server-Side Request Forgery went from obscure bug class to the attack behind Capital One's breach. Your webhook handler is probably vulnerable right now.

By Security Team · March 16, 2026

Capital One lost 100 million customer records in 2019. Not because of some exotic zero-day. Not because their firewall had a gap. A single SSRF vulnerability in a web application firewall let an attacker query the AWS metadata service, grab IAM credentials, and walk straight into S3 buckets full of credit applications. Total cost: $80 million in fines, plus whatever trust is worth.

Server-Side Request Forgery used to be a footnote in security training. Background noise. Then cloud happened, and suddenly every application sits next to an internal metadata endpoint that hands out credentials to anyone who asks nicely from localhost.

What Actually Happens During an SSRF Attack

The setup is almost embarrassingly simple. Your application takes a URL as input — maybe for a webhook callback, a profile image fetch, an RSS feed import, a PDF generation service, whatever. Instead of pointing at a legitimate external resource, the attacker points it at something internal.

// Webhook verification endpoint. Looks innocent enough.
app.post('/api/webhooks/verify', async (req, res) => {
  const { callbackUrl } = req.body;

  // "Just check if the URL responds"
  const response = await fetch(callbackUrl);
  const status = response.status;

  res.json({ reachable: status === 200 });
});

// Attacker sends:
// { "callbackUrl": "http://169.254.169.254/latest/meta-data/iam/security-credentials/" }
// Your server happily fetches it. From inside the VPC. Game over.

That 169.254.169.254 address is the AWS instance metadata service. Every EC2 instance can reach it. It returns temporary credentials, instance identity documents, user data scripts — basically the keys to whatever that instance is allowed to touch. GCP uses the same pattern at metadata.google.internal. Azure at 169.254.169.254 too, just different paths.

And the thing is, from the server's perspective, this is a completely normal outbound HTTP request. No alarms. No blocked ports. The application is doing exactly what it was told to do.

Why Your URL Validation Is Probably Broken

Everyone's first instinct: blocklist internal IPs. Check if the URL resolves to 127.0.0.1 or 10.x.x.x or 169.254.x.x and reject it.

Bypass techniques that work in production right now:

// All of these resolve to 127.0.0.1
http://0x7f000001/
http://017700000001/
http://2130706433/
http://127.1/
http://0/
http://[::1]/
http://localtest.me/           // public DNS record that resolves to 127.0.0.1
http://spoofed.burpcollaborator.net/  // same trick via Burp's infrastructure

// AWS metadata via IPv6
http://[fd00:ec2::254]/latest/meta-data/

// Double encoding
http://169.254.169.254%252f..%252f

// Redirect bypass - your server follows redirects, right?
http://attacker.com/redirect?url=http://169.254.169.254/

That last one is devastating. You validate the initial URL, confirm it points to a public IP, then your HTTP client follows a 302 redirect straight to the metadata service. Most HTTP libraries follow redirects by default. fetch, axios, requests, HttpClient — all of them.

DNS rebinding is even nastier. The attacker controls a domain that alternates between resolving to a public IP (passes your check) and an internal IP (hits the target when the actual request fires). The TTL is set to zero, so your DNS cache can't save you. Tools like rbndr automate this completely.
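A robust check canonicalizes the host before comparing it against anything. The following sketch (the function name is mine, not a library API) normalizes the hex, octal, decimal, and shorthand IPv4 literals from the list above using inet_aton-style parsing rules:

```typescript
// Sketch: canonicalize exotic IPv4 literals (hex, octal, decimal,
// shorthand) into dotted-quad form so range checks see the real address.
// Follows inet_aton semantics: the final component fills the remaining bytes.
function normalizeIPv4(host: string): string | null {
  const parts = host.split('.');
  if (parts.length < 1 || parts.length > 4 || parts.some(p => p === '')) {
    return null;
  }
  const nums: number[] = [];
  for (const p of parts) {
    if (/^0x[0-9a-fA-F]+$/.test(p)) nums.push(parseInt(p, 16)); // hex
    else if (/^0[0-7]*$/.test(p)) nums.push(parseInt(p, 8));    // octal (and "0")
    else if (/^[1-9]\d*$/.test(p)) nums.push(parseInt(p, 10));  // decimal
    else return null;                                           // not an IP literal
  }
  const tail = nums.pop()!;
  const tailBytes = 4 - nums.length;
  if (nums.some(n => n > 255) || tail >= 2 ** (8 * tailBytes)) return null;
  const bytes = [...nums];
  for (let i = tailBytes - 1; i >= 0; i--) {
    bytes.push((tail >>> (8 * i)) & 0xff);
  }
  return bytes.join('.');
}
```

Anything that normalizes to a dotted quad gets the same internal-range check you would apply to a resolved DNS answer; anything that doesn't normalize is a hostname and goes through DNS resolution instead.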

The Metadata Service Problem Nobody Talks About

AWS introduced IMDSv2 in 2019 specifically because of SSRF. It requires a PUT request to get a session token before you can query metadata. Since most SSRF vulnerabilities only let you make GET requests, this blocks the basic attack.

Sounds great. Except.

IMDSv2 is opt-in. Your existing instances are still running v1 unless someone explicitly migrated them. And even in organizations that mandate v2 for new instances, there's usually a Terraform module somewhere that was written in 2018 and nobody's touched since. A quick aws ec2 describe-instances filtered on MetadataOptions.HttpTokens will tell you exactly how exposed you are. Most teams that run this check don't love what they find.

# Find instances still running IMDSv1
aws ec2 describe-instances \
  --query 'Reservations[].Instances[?MetadataOptions.HttpTokens==`optional`].[InstanceId,Tags[?Key==`Name`].Value|[0]]' \
  --output table

# Enforce IMDSv2 on a specific instance
aws ec2 modify-instance-metadata-options \
  --instance-id i-0abc123def456 \
  --http-tokens required \
  --http-endpoint enabled

GCP's equivalent is the Metadata-Flavor: Google header requirement. Better than nothing, but if the SSRF lets you control headers (which many do through URL parameters in poorly written proxy endpoints), you're back to square one.

Real Defenses That Actually Work

Forget blocklists. They're a game of whack-a-mole against an attacker who has infinite encoding tricks. Build an allowlist instead.

import { URL } from 'url';
import dns from 'dns/promises';

const ALLOWED_SCHEMES = new Set(['https:']);
const BLOCKED_RANGES = [
  /^127\./,
  /^10\./,
  /^172\.(1[6-9]|2\d|3[01])\./,
  /^192\.168\./,
  /^169\.254\./,
  /^0\./,
  /^::1$/,
  /^fd/i,
  /^fe80/i,
];

async function validateOutboundUrl(input: string): Promise<string> {
  const parsed = new URL(input);

  // Scheme allowlist. HTTP is almost never needed.
  if (!ALLOWED_SCHEMES.has(parsed.protocol)) {
    throw new Error('Only HTTPS URLs allowed');
  }

  // Resolve DNS BEFORE making the request -- both address families
  const v4 = await dns.resolve4(parsed.hostname).catch(() => [] as string[]);
  const v6 = await dns.resolve6(parsed.hostname).catch(() => [] as string[]);
  const addresses = [...v4, ...v6];
  if (addresses.length === 0) {
    throw new Error('Hostname does not resolve');
  }

  for (const addr of addresses) {
    if (BLOCKED_RANGES.some(r => r.test(addr))) {
      throw new Error('URL resolves to internal address');
    }
  }

  // CAVEAT: to fully defeat DNS rebinding, connect to the resolved IP
  // (pinning it) and send the original hostname only as the Host/SNI value.
  // Plain fetch can't express that; it requires a custom HTTP agent.
  return input;
}

// Then: disable redirects entirely
const response = await fetch(validatedUrl, {
  redirect: 'error',  // Do NOT follow redirects
  signal: AbortSignal.timeout(5000),
});

Key points in that code: resolve DNS yourself, check the IP, then disable redirect following. If you absolutely need redirects, re-validate after each hop. And set a timeout — SSRF to internal services that hang (like a port scan) shouldn't tie up your request workers for 30 seconds.
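If redirects really are required, the per-hop re-validation can be sketched like this. Everything here is illustrative: `safeFetchFollow` is a hypothetical helper, `validate` stands in for a check like `validateOutboundUrl`, and `fetchImpl` is injectable so the loop works without a live network:

```typescript
// Sketch: follow redirects manually, re-validating the target of EVERY hop.
// validate() should throw if its argument points somewhere internal.
async function safeFetchFollow(
  url: string,
  validate: (u: string) => Promise<void>,
  fetchImpl: typeof fetch = fetch,
  maxHops = 3, // at most maxHops redirects are followed
): Promise<Response> {
  let current = url;
  for (let hop = 0; hop <= maxHops; hop++) {
    await validate(current); // check THIS hop, not just the first URL
    const res = await fetchImpl(current, { redirect: 'manual' });
    if (res.status < 300 || res.status >= 400) return res;
    const next = res.headers.get('location');
    if (!next) return res;
    // Location may be relative; resolve it against the current URL
    current = new URL(next, current).toString();
  }
  throw new Error('Too many redirects');
}
```

The important property is that a 302 from a "safe" public host to 169.254.169.254 now fails the validation on the second hop instead of sailing through.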

Network-level segmentation helps too. Your webhook processing service doesn't need to reach the metadata endpoint. Period. A VPC endpoint policy or a local iptables rule blocking 169.254.169.254 from application containers is defense in depth that costs nothing.

# Block metadata access from application containers
iptables -A OUTPUT -m owner --uid-owner appuser -d 169.254.169.254 -j DROP

# Or in a Kubernetes NetworkPolicy
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: block-metadata
spec:
  podSelector:
    matchLabels:
      app: webhook-service
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - 169.254.169.254/32

The Blind SSRF Problem

Not every SSRF shows you the response. Sometimes the application fetches a URL but only tells you "success" or "failure." Attackers still extract data through timing (internal port responds in 2ms, closed port times out after 10 seconds) and out-of-band channels (DNS lookups to attacker-controlled nameservers that log the subdomain, which contains exfiltrated data).

Blind SSRF is harder to exploit but also harder to detect. Your WAF won't catch it because the response never leaves your network in the normal sense. Application-level logging of every outbound request — including resolved IP, response time, and response size — is the only way to spot these after the fact.

// Log every outbound request for SSRF detection.
// Assumes the dns import and BLOCKED_RANGES from earlier, plus
// application-level logger and alerting objects.
async function monitoredFetch(url: string, context: string) {
  const start = Date.now();
  const parsed = new URL(url);
  const resolved = await dns.resolve4(parsed.hostname);

  const logEntry = {
    timestamp: new Date().toISOString(),
    url: url,
    resolvedIps: resolved,
    context: context,
    duration: 0,
    status: 0,
  };

  try {
    const res = await fetch(url, { redirect: 'error', signal: AbortSignal.timeout(5000) });
    logEntry.duration = Date.now() - start;
    logEntry.status = res.status;
    return res;
  } catch (err) {
    logEntry.duration = Date.now() - start;
    logEntry.status = -1;
    throw err;
  } finally {
    logger.info('outbound_request', logEntry);
    // Alert if resolved IP is internal
    if (resolved.some(ip => BLOCKED_RANGES.some(r => r.test(ip)))) {
      alerting.critical('possible_ssrf_attempt', logEntry);
    }
  }
}

Where SSRF Hides in Modern Applications

Webhook handlers are the obvious one. But SSRF surfaces in places teams never think to check:

PDF generation. Services like Puppeteer, wkhtmltopdf, or any headless browser that renders HTML to PDF. Pass it <img src="http://169.254.169.254/..."> and the rendered PDF contains the metadata response. WeasyPrint, Prince — same story. If it resolves URLs during rendering, it's an SSRF vector.

Image/file processing. ImageMagick has had SSRF vulnerabilities through SVG files for years. Upload an SVG with an xlink:href pointing to an internal service and ImageMagick fetches it during processing. The policy.xml file should disable URL fetching, but the default config on most distros doesn't.
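The fix for the ImageMagick case is a policy.xml that revokes the URL-fetching coders. A minimal fragment along these lines (the file path varies by distro, e.g. /etc/ImageMagick-6/policy.xml; check your installed version's location):

```xml
<policymap>
  <!-- Disable coders that fetch remote resources during processing -->
  <policy domain="coder" rights="none" pattern="URL" />
  <policy domain="coder" rights="none" pattern="HTTPS" />
  <policy domain="coder" rights="none" pattern="HTTP" />
  <policy domain="coder" rights="none" pattern="FTP" />
  <!-- MVG and MSL can embed URL reads too -->
  <policy domain="coder" rights="none" pattern="MVG" />
  <policy domain="coder" rights="none" pattern="MSL" />
</policymap>
```

After changing the policy, verify it took effect with `identify -list policy`.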

OAuth callbacks. The redirect URI in OAuth flows. If your application fetches the authorization server's metadata endpoint from a user-supplied issuer URL, that's SSRF. OpenID Connect discovery (/.well-known/openid-configuration) is particularly juicy because the application actively expects to fetch URLs from it.
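The defensive pattern for the OIDC case is the same as everywhere else: pin the issuer before fetching anything derived from it. A sketch, where the helper name and the trusted host list are illustrative assumptions rather than any library's API:

```typescript
// Sketch: only fetch OIDC discovery documents from issuers you trust.
// The host list here is an illustrative assumption, not a real config.
const TRUSTED_ISSUER_HOSTS = new Set([
  'accounts.google.com',
  'login.example-idp.com',
]);

function discoveryUrlFor(issuer: string): string {
  const u = new URL(issuer);
  if (u.protocol !== 'https:' || !TRUSTED_ISSUER_HOSTS.has(u.hostname)) {
    throw new Error(`Untrusted issuer: ${u.hostname}`);
  }
  // OIDC Discovery places provider metadata at a fixed well-known path.
  return new URL('/.well-known/openid-configuration', u).toString();
}
```

An attacker-supplied issuer of `http://169.254.169.254/` now fails on both the scheme and the host check before any request leaves the server.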

GraphQL. Introspection queries aren't the issue — it's custom scalar types that accept URLs and resolve them server-side. A URL scalar that validates format but not destination is an SSRF waiting to happen.

Automated Security Scanning

Manual code review catches maybe half of SSRF vectors. The obvious ones — direct fetch(userInput) — sure. But SSRF through three layers of abstraction, where a user-controlled value eventually reaches an HTTP client after being stored in a database and retrieved by a background job? Nobody catches that in a code review. ScanMyCode.dev traces data flow across your entire codebase, flagging every path where user input reaches an outbound HTTP call, including indirect ones through queues and databases. The report shows exact file and line numbers with fix suggestions specific to your framework.

Stop Treating SSRF as a Low-Severity Bug

Bug bounty programs used to pay pocket change for SSRF. That changed fast after researchers started chaining SSRF with cloud metadata to dump entire customer databases. GitLab paid $33,000 for an SSRF in their import feature. Shopify paid $25,000 for one in their app proxy. These weren't critical zero-days — they were fetch calls without proper validation.

Your cloud provider assumes your application won't make requests to internal services on behalf of attackers. Your application assumes the network will block bad requests. Neither is checking. That gap is where SSRF lives.

Run an audit. Check every endpoint that accepts URLs. Verify IMDSv2 is enforced across your fleet. Add network policies blocking metadata access from application pods. And if you want someone else to find what you missed — submit your codebase for a security audit. Results in 24 hours, every SSRF vector mapped out with remediation steps.

owasp · ssrf · cloud-security · aws · vulnerability · server-side-request-forgery
