Security · 9 min read

Your Secrets Survive Deployment and Then Leak at Runtime

Secrets that pass every pre-commit check can still leak through logs, error traces, debug endpoints, and cloud metadata. Where runtime secret exposure actually happens.

By Security Team · March 29, 2026

A fintech startup ran truffleHog on every commit, rotated keys quarterly, used a vault for storage. Textbook secret hygiene. Then someone grepped their Datadog logs and found 43 API keys sitting in plain text. Not from git history. From console.log statements their developers left in production code.

Everyone obsesses over secrets in source control. And yeah, that matters. But the weirder, harder problem is secrets that leak after deployment. Through logs. Through error stack traces. Through /proc on a shared host. Through that debug endpoint someone forgot to disable.

Environment Variables Are Not Secret Storage

Twelve-factor app methodology told everyone to use environment variables for config. Fine advice for non-sensitive config. Terrible advice that got cargo-culted into "put your database password in an env var."

On Linux, environment variables for any process are readable by the same user:

# anyone running as the same user can do this
cat /proc/<pid>/environ | tr '\0' '\n' | grep SECRET
# DB_SECRET=prod_s3cr3t_k3y_2026

In a containerized setup with shared namespaces, or on a host where multiple services run under the same user (which is still shockingly common), that means one compromised service leaks every secret to every other service. Kubernetes pods sharing a service account? Same deal.

And then there are the frameworks that helpfully dump environment variables into error pages. Rails did this for years in development mode. Next.js exposes anything prefixed with NEXT_PUBLIC_ to the browser bundle. Django's debug page literally has a section called "Environment" that shows everything.

# next.config.js - spot the mistake
// NEXT_PUBLIC_ prefix = shipped to every browser
// someone named their Stripe key this way. in production.
module.exports = {
  env: {
    NEXT_PUBLIC_STRIPE_KEY: process.env.STRIPE_SECRET_KEY,
  },
};

That actually happened. A SaaS company exposed their Stripe secret key to every visitor for 11 days before someone noticed.
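The fix is boring: only values Stripe designed to be public get a NEXT_PUBLIC_ name. A corrected sketch (variable names here are illustrative, not from the incident):

```javascript
// next.config.js — corrected sketch. The publishable key is safe to ship
// to browsers by design; the secret key never gets a NEXT_PUBLIC_ name.
module.exports = {
  env: {
    NEXT_PUBLIC_STRIPE_PUBLISHABLE_KEY: process.env.STRIPE_PUBLISHABLE_KEY,
  },
  // STRIPE_SECRET_KEY is read only in API routes / server code,
  // where process.env is never bundled for the browser.
};
```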

Logging Is the Biggest Secret Leak Vector Nobody Talks About

Structured logging is great. Structured logging that serializes entire request objects is a nightmare. One Express middleware:

app.use((req, res, next) => {
  // "let's log all requests for debugging"
  // congratulations, you're now logging Authorization headers
  logger.info('Incoming request', {
    method: req.method,
    url: req.url,
    headers: req.headers,  // bearer tokens, API keys, session cookies
    body: req.body          // passwords, credit card numbers, whatever
  });
  next();
});

Every single request's Authorization header, now sitting in CloudWatch or Elasticsearch or wherever the logs go. And logs get retained for months. Sometimes years. Often replicated across regions. Good luck rotating those tokens when they've been fanned out to 6 different log aggregation services.
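The minimum viable fix is a sanitizer that runs before anything reaches the logger. A sketch, assuming a simple denylist of header names (the list and helper name are illustrative):

```javascript
// Hypothetical header sanitizer: denylist of names that commonly carry
// credentials. Anything on the list is replaced before logging.
const SENSITIVE_HEADERS = new Set([
  'authorization',
  'proxy-authorization',
  'cookie',
  'x-api-key',
]);

function sanitizeHeaders(headers) {
  const clean = {};
  for (const [name, value] of Object.entries(headers)) {
    clean[name] = SENSITIVE_HEADERS.has(name.toLowerCase())
      ? '[REDACTED]'
      : value;
  }
  return clean;
}

// In the middleware above, log the sanitized copy instead of req.headers:
// logger.info('Incoming request', {
//   method: req.method,
//   url: req.url,
//   headers: sanitizeHeaders(req.headers),
//   // and don't log req.body at all unless a field allowlist says so
// });
```

A denylist like this catches the obvious offenders; an allowlist of known-safe fields is stricter and ages better as new headers appear.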

The scarier variant is exception logging. An unhandled error in a payment flow might dump the entire context object:

try {
  await chargeCustomer(order);
} catch (err) {
  // err.config.headers contains the API key
  // err.response.data might contain customer PII
  // Sentry captures all of this by default
  Sentry.captureException(err);
}

Axios errors include the full request config. That means headers. That means your Authorization: Bearer sk_live_... is now in Sentry, viewable by every developer with dashboard access. Sentry added data scrubbing for this exact reason, but the default deny-list only catches a short list of obviously named fields like password and api_key. Name your header something creative and it sails right through.
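One defense is to scrub the error object before it ever reaches the tracker. A sketch of a hypothetical helper (the field names follow Axios's documented error shape; the helper itself is not an Axios or Sentry API):

```javascript
// Hypothetical scrubber for Axios-style errors, run before capture.
// err.config.headers carries Authorization; err.response.data may carry PII.
function scrubAxiosError(err) {
  if (err.config && err.config.headers) {
    err.config.headers = '[REDACTED]';
  }
  if (err.response && 'data' in err.response) {
    err.response.data = '[REDACTED]';
  }
  return err;
}

// Usage sketch:
// Sentry.captureException(scrubAxiosError(err));
```

Sentry's beforeSend hook is another natural place to run scrubbing like this, since it sees every event before it leaves the process.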

Cloud Metadata: The 169.254.169.254 Problem

Every major cloud provider runs a metadata service at 169.254.169.254. On AWS, it hands out temporary IAM credentials to any process that asks. SSRF vulnerability + metadata endpoint = full cloud account compromise. Capital One's 2019 breach? Exactly this pattern. An SSRF in a WAF config pulled IAM credentials from the metadata service, then used them to dump 100 million customer records from S3.

AWS introduced IMDSv2 to require a session token, making SSRF exploitation much harder. Newer defaults enforce IMDSv2 on new launches, but IMDSv1 is still enabled on a huge number of existing EC2 instances. You have to actively turn it off.

# check if your instance still allows IMDSv1
# if this returns credentials, you're vulnerable to SSRF-based theft
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/

# enforce IMDSv2 - do this on every instance
aws ec2 modify-instance-metadata-options \
  --instance-id i-1234567890abcdef0 \
  --http-tokens required \
  --http-endpoint enabled

GCP and Azure have similar endpoints. GCP's is slightly better because it requires a Metadata-Flavor: Google header, but SSRF through a server-side proxy often includes arbitrary headers.
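If your service fetches user-supplied URLs, a guard in front of the fetch is cheap insurance. A minimal sketch (hostnames and helper name are illustrative; hostname checks alone don't stop DNS rebinding, so real deployments should also resolve and re-check the IP):

```javascript
// Minimal SSRF guard for a server-side proxy: reject metadata addresses
// before fetching a user-supplied URL.
const BLOCKED_HOSTS = new Set([
  '169.254.169.254',          // AWS / Azure / GCP metadata IP
  'metadata.google.internal', // GCP metadata hostname
]);

function isBlockedUrl(rawUrl) {
  try {
    const { hostname } = new URL(rawUrl);
    return BLOCKED_HOSTS.has(hostname) || hostname.startsWith('169.254.');
  } catch {
    return true; // unparseable input gets rejected outright
  }
}

// Usage sketch:
// if (isBlockedUrl(userUrl)) return res.status(400).send('blocked');
```

Belt-and-suspenders: pair this with a network policy that blocks 169.254.169.254 from application containers entirely, so a missed code path still can't reach it.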

Debug Endpoints and Health Checks That Say Too Much

Spring Boot Actuator. The number of production services running with /actuator/env exposed is genuinely alarming. A 2024 scan by GreyNoise found over 12,000 publicly accessible Actuator endpoints on the internet. That endpoint shows every environment variable, including database URLs with embedded credentials.

Node.js has process.env which some health check endpoints include:

// health.js - "simple health check"
app.get('/health', (req, res) => {
  res.json({
    status: 'ok',
    uptime: process.uptime(),
    env: process.env,  // WHY
  });
});

Nobody writes this intentionally. But someone adds it during debugging, the PR gets approved because it's "just a health check," and six months later it's sitting behind a load balancer with no auth.
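The safe version returns only what a load balancer or orchestrator actually needs. A sketch:

```javascript
// A health payload that deliberately contains nothing from process.env.
// Status and uptime are enough for an orchestrator; the raw environment never is.
function healthPayload() {
  return {
    status: 'ok',
    uptime: process.uptime(),
  };
}

// Usage sketch:
// app.get('/health', (req, res) => res.json(healthPayload()));
```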

PHP's phpinfo() is the classic version. But modern equivalents exist in every ecosystem. Ruby's rails/info/routes in development mode. Python's Django debug toolbar. Go services that expose /debug/pprof without authentication. They all leak something.

Build Args, Docker Inspect, and Layer Caching

Quick one because people keep getting this wrong. Docker build args are not secrets.

# people think ARG is safe because it's "build time only"
docker build --build-arg DB_PASSWORD=hunter2 .

# nope, it's stored in the image metadata forever
docker inspect myimage | grep DB_PASSWORD
# "DB_PASSWORD=hunter2"

# and in the layer history
docker history myimage
# ARG DB_PASSWORD=hunter2

BuildKit has --secret for this, which mounts secrets as files during build without baking them into layers. But the ARG pattern is still in most Docker tutorials. Including the official docs for some images.
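For reference, the BuildKit pattern looks like this (file paths and the secret id are illustrative). The secret is mounted as a file for the duration of one RUN step and never written to a layer or to image metadata:

```shell
# Dockerfile (BuildKit syntax) — the secret exists only while this RUN executes:
#   # syntax=docker/dockerfile:1
#   RUN --mount=type=secret,id=db_password \
#       DB_PASSWORD="$(cat /run/secrets/db_password)" ./setup-db.sh

# Build command: source the secret from a local file instead of --build-arg
docker build --secret id=db_password,src=./db_password.txt .
```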

Client-Side Bundles and Source Maps

Webpack, Vite, esbuild. They inline environment variables at build time. If your .env file has a server-side secret and your bundler config isn't carefully scoped, that secret ends up in a JavaScript file served to every browser.

// vite.config.ts
export default defineConfig({
  define: {
    // this replaces every instance of process.env in client code
    // including process.env.DATABASE_URL if it exists
    'process.env': process.env  // don't. just don't.
  }
});

Vite and Next.js mitigate this with prefixes (VITE_ and NEXT_PUBLIC_). But Webpack with DefinePlugin using process.env will happily inline everything. And source maps make it even worse because they give you the original variable names in a readable format. Most teams deploy source maps to production for error tracking. Check if yours are publicly accessible right now:

curl -s https://yoursite.com/assets/main.js.map | head -c 500

If that returns JSON, your source maps are public.
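The safer bundler pattern is an explicit, value-by-value allowlist. A hypothetical Vite config sketch (the variable name is illustrative; hidden source maps are a real Vite option that emits maps without referencing them from the served bundle):

```javascript
// vite.config.js sketch — inline variables one at a time, never wholesale.
import { defineConfig } from 'vite';

export default defineConfig({
  define: {
    // JSON.stringify so the value lands in the bundle as a string literal
    'process.env.APP_VERSION': JSON.stringify(process.env.APP_VERSION ?? 'dev'),
  },
  build: {
    // emit source maps for your error tracker without serving them publicly
    sourcemap: 'hidden',
  },
});
```

Upload the hidden maps to your error tracker during CI instead of deploying them alongside the bundle.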

What Actually Works for Runtime Secret Protection

Vault-based injection at startup, not environment variables. HashiCorp Vault, AWS Secrets Manager, GCP Secret Manager, Azure Key Vault. The secret gets fetched at application boot and held in memory. Never written to disk, never in environment variables, never in build args.
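The shape of that pattern, independent of which vault you use, is a holder that fetches once at boot and keeps the value in memory only. A sketch with an injected fetcher (the class and method names are illustrative; the fetcher would wrap your Vault or Secrets Manager client):

```javascript
// Hypothetical in-memory secret holder: load once at startup via an injected
// async fetcher. The value never touches process.env, disk, or build args.
class SecretHolder {
  #value = null;

  async load(fetcher) {
    this.#value = await fetcher();
    return this;
  }

  get() {
    if (this.#value === null) throw new Error('secret not loaded');
    return this.#value;
  }
}

// Usage sketch, with the fetcher wrapping a real secrets-manager call:
// const dbPassword = await new SecretHolder().load(() => vaultClient.read('db/creds'));
```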

For logging: redaction middleware. Not optional, not "we'll add it later." Every logging call goes through a sanitizer that strips known secret patterns.

const REDACT_PATTERNS = [
  /Bearer\s+[A-Za-z0-9\-._~+\/]+=*/g,
  /sk_(live|test)_[A-Za-z0-9]{24,}/g,
  /ghp_[A-Za-z0-9]{36}/g,
  /-----BEGIN (RSA |EC )?PRIVATE KEY-----/g,
  /eyJ[A-Za-z0-9_-]+\.eyJ[A-Za-z0-9_-]+/g, // JWTs: base64url header.payload
];

function redact(obj) {
  const str = JSON.stringify(obj);
  return JSON.parse(
    REDACT_PATTERNS.reduce(
      (s, pattern) => s.replace(pattern, '[REDACTED]'),
      str
    )
  );
}

For cloud metadata: IMDSv2, network policies that block 169.254.169.254 from application containers, and least-privilege IAM roles that limit what stolen credentials can do.

For client bundles: explicit allowlists of which variables get exposed. Never pass process.env wholesale. Audit your built bundles with grep for patterns like sk_ or -----BEGIN as part of CI.

The Audit Gap

Most security tools focus on static analysis. Scan the code, find the hardcoded secret, flag it. But runtime leaks happen in configuration, in logging behavior, in deployment topology. A static scan won't tell you that your Kubernetes pod mounts a secret as an environment variable instead of a file, or that your log aggregator retains unredacted data for 90 days.

ScanMyCode.dev runs security audits that look beyond just source code patterns. The security audit checks for logging hygiene, exposed debug endpoints, insecure secret injection patterns, and configuration issues that lead to runtime exposure. Results in 24 hours, with exact locations and fix instructions.

Stop Treating Deployment as the Finish Line

Secrets don't stop being a problem once they leave your repository. They travel through build systems, sit in environment variables, get serialized into logs, cached in Docker layers, embedded in JavaScript bundles, and served by metadata endpoints. The attack surface for a secret at runtime is orders of magnitude larger than in source control.

Run grep -r "process.env" src/ and count how many of those references end up in client code or log statements. Probably more than you think.

Pre-commit hooks and git scanning are table stakes. The real work starts after deployment. Get a security audit that covers the full lifecycle, not just the repository.

secrets-management · runtime-security · logging · cloud-security · environment-variables
