Security · 9 min read

Kubernetes Secrets Are Just Base64. That's Not Encryption.

K8s Secrets give teams a false sense of security. Base64 decodes in milliseconds. Here's what actually happens in production and how to fix it.

By Security Team · March 24, 2026

A team I worked with ran 47 microservices on EKS. Solid CI/CD pipeline, proper namespacing, network policies — the works. Then during a routine pentest, someone ran kubectl get secret db-credentials -o jsonpath='{.data.password}' | base64 -d and had the production database password in plaintext. Took about three seconds.

Nobody panicked because nobody understood what just happened. "But we stored it as a Secret," the lead said. Yeah. About that.

Base64 Is an Encoding, Not a Cipher

This should be obvious but apparently it's not, given how many production clusters treat Kubernetes Secrets like a vault. The entire "security" model of a K8s Secret is this:

echo "my-database-password" | base64
# bXktZGF0YWJhc2UtcGFzc3dvcmQ=

echo "bXktZGF0YWJhc2UtcGFzc3dvcmQ=" | base64 -d
# my-database-password

That's it. That's the whole protection layer. Base64 exists so you can stuff binary data into YAML without breaking parsers. It was never designed to hide anything. Every developer with kubectl access and the right RBAC permissions can decode every secret in their namespace instantly.

And yet teams commit Secret manifests to Git repos thinking they're safe because the values "look encrypted." They're not. You're storing passwords in a format that any online converter can reverse.

Where Your Secrets Actually Live

Understanding the threat model means knowing where secrets physically exist at each stage. It's worse than most people think.

etcd — the big one

Every Secret object gets persisted to etcd, Kubernetes' backing datastore. By default — and this is the critical part — etcd stores secrets in plaintext. Not base64. Actual plaintext. The base64 encoding is a YAML serialization detail; etcd doesn't care about it.

So if someone gets read access to etcd (misconfigured firewall, stolen node credentials, backup file left on an S3 bucket — pick your favorite incident), they get every secret in the entire cluster. Every namespace. Every service account token. Every TLS certificate private key.

Real breach postmortems have cited etcd exposure as a contributing factor more than once. It's not theoretical.

Node filesystem

When a pod mounts a secret, kubelet writes it to a tmpfs volume on the node. Better than disk, sure. But anyone with SSH access to the node can read /proc/<pid>/root/ and find mounted secrets. Container escapes through CVE-2024-21626 (the runc bug) gave exactly this access.

Environment variables

Worse option. Secrets injected as env vars show up in /proc/<pid>/environ, in crash dumps, in logging frameworks that dump environment on errors. Spring Boot Actuator's /env endpoint has leaked production secrets more times than anyone wants to count.

# This is how most tutorials teach it. It's also the worst option.
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: app
    env:
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: password
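You can see the leak for yourself on any Linux box, no cluster required. This sketch reads the current shell's own environment the same way an attacker with node access would read a target container's; the variable name and value are illustrative:

```shell
# /proc/<pid>/environ holds a process's environment as NUL-separated pairs.
# Here we read our own shell's environ ($$); on a compromised node you'd
# point this at the target container's PID instead.
DB_PASSWORD="p4ssw0rd!" sh -c 'tr "\0" "\n" < /proc/$$/environ | grep "^DB_PASSWORD="'
# DB_PASSWORD=p4ssw0rd!
```

Anything that can read the proc filesystem for that PID gets the credential, which is exactly why crash handlers and debug tooling keep leaking it.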

Volume mounts are marginally better. External secret injection at runtime is the actual answer.
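For comparison, here is the same credential as a volume mount, a minimal sketch reusing the db-credentials Secret from the example above; the mount path is an assumption:

```yaml
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: app
    volumeMounts:
    - name: db-creds
      mountPath: /var/run/secrets/db   # app reads /var/run/secrets/db/password
      readOnly: true
  volumes:
  - name: db-creds
    secret:
      secretName: db-credentials
```

The value lands on tmpfs instead of in the process environment, so it stays out of /proc/&lt;pid&gt;/environ, crash dumps, and environment-dumping log frameworks.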

Encryption at Rest: The Minimum Nobody Configures

Kubernetes has supported encryption at rest for etcd since version 1.13. That was 2018. Most clusters in 2026 still don't have it enabled because it requires manual configuration of the API server, and managed providers have varying defaults.

EKS enables envelope encryption if you configure a KMS key. GKE does it by default with Google-managed keys. AKS... check your cluster config, because the defaults have changed three times.

The configuration looks like this:

# /etc/kubernetes/encryption-config.yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded-32-byte-key>
      - identity: {}

Notice the identity provider at the bottom — identity means "no encryption," and it's there so the API server can still read Secrets written before encryption was enabled. Two gotchas. First, the provider listed first is the one used for writes, so if identity ever ends up at the top of the list (a common migration leftover), everything is stored in plaintext. Silently. No alerts. No warnings in default monitoring. Second, enabling a real provider only protects Secrets written afterward; existing objects stay plaintext in etcd until you rewrite them, e.g. with kubectl get secrets --all-namespaces -o json | kubectl replace -f -.

Fun, right?

What Teams Actually Get Wrong

Beyond the base64 misconception, there's a pattern. Almost every cluster audit reveals the same issues:

Secrets in Git. Even with tools like SealedSecrets or SOPS, teams mess this up. Someone copies the unsealed version into a commit "temporarily" and forgets. Git never forgets. Even after you delete the file, git log --all --full-history -- path/to/secret.yaml pulls it right back.
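A thirty-second demo of why "we deleted the file" doesn't help. This sketch builds a throwaway repo (names and the password are illustrative), removes the secret, then recovers it from history:

```shell
# Throwaway repo in a temp dir.
cd "$(mktemp -d)"
git init -q demo && cd demo
git config user.email demo@example.com && git config user.name demo

# Commit a secret "temporarily", then delete it.
echo "password: hunter2" > secret.yaml
git add secret.yaml && git commit -qm "temp: add db config"
git rm -q secret.yaml && git commit -qm "remove secret"

# Gone from the working tree — but the oldest commit touching the path
# still has the full contents:
git show "$(git rev-list --all -- secret.yaml | tail -1):secret.yaml"
# password: hunter2
```

Actually scrubbing it means rewriting history (git filter-repo or similar), force-pushing, and rotating the credential anyway, because every clone already has it.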

Overly broad RBAC on secrets. Default ClusterRoleBindings in many Helm charts grant get and list on all secrets in a namespace. That developer who only needs to deploy their frontend app? They can now read the payment service's Stripe API keys.

# This is too common. Don't do this.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "list", "watch"]
  # No resourceNames restriction = access to ALL secrets in namespace
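The fix is to scope the Role to the specific Secrets the workload actually needs. A sketch with illustrative names; note that resourceNames can't meaningfully restrict list or watch, so those verbs should go too:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: frontend-secrets-reader   # illustrative
  namespace: frontend
rules:
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["frontend-tls", "frontend-session-key"]  # only these
  verbs: ["get"]   # no list/watch — resourceNames doesn't constrain them
```

Now the frontend's service account can fetch its own two Secrets by name and nothing else in the namespace.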

No rotation. Secrets get created during initial setup and never change. Database passwords that are three years old. API keys from employees who left two jobs ago. The median secret age in clusters I've seen audited is 14 months. Some had secrets older than the team maintaining them.

Logging and tracing. Secrets get printed to stdout during debug sessions and ingested by the logging stack. Now your Datadog account has your production credentials indexed and searchable. Retention policy? 30 days? 90? Indefinite?

External Secret Managers: The Actual Solution

The Kubernetes community figured this out years ago. Don't store secrets in Kubernetes. Store them in something built for the job.

External Secrets Operator

ESO syncs secrets from AWS Secrets Manager, HashiCorp Vault, GCP Secret Manager, Azure Key Vault, and about a dozen other backends into Kubernetes Secret objects. The Secret still exists in the cluster, but the source of truth is external, encrypted, audited, and rotation-capable.

apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-credentials
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: aws-secrets-manager
    kind: ClusterSecretStore
  target:
    name: db-credentials
  data:
    - secretKey: password
      remoteRef:
        key: production/database
        property: password

That refreshInterval: 1h means rotation propagates automatically. Change the secret in AWS and, within an hour, ESO rewrites the Kubernetes Secret; pods that mount it as a volume pick up the new value without a redeploy (env-var consumers still need a restart). No kubectl apply.
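The secretStoreRef above points at a store object that has to exist too. A sketch of that ClusterSecretStore, assuming IRSA-style JWT auth via a service account named external-secrets; region and auth details are assumptions to adjust:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
  name: aws-secrets-manager
spec:
  provider:
    aws:
      service: SecretsManager
      region: us-east-1              # assumption: your region here
      auth:
        jwt:
          serviceAccountRef:
            name: external-secrets   # assumption: SA with IAM role attached
            namespace: external-secrets
```

The IAM policy attached to that role is where you scope which paths in Secrets Manager the cluster can read at all.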

CSI Secret Store Driver

Mounts secrets directly from an external provider as a volume, bypassing Kubernetes Secrets entirely. The secret material never touches etcd. It goes from your vault to the pod's tmpfs mount. For compliance-heavy environments (PCI DSS, SOC2), this is often the only acceptable pattern.
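The wiring looks roughly like this (AWS provider assumed; object names are illustrative). The pod references a SecretProviderClass through a CSI volume, and etcd never sees the value:

```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: db-credentials
spec:
  provider: aws
  parameters:
    objects: |
      - objectName: "production/database"   # illustrative Secrets Manager path
        objectType: "secretsmanager"
---
# The pod mounts the provider class as a read-only tmpfs volume:
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: app
    volumeMounts:
    - name: db-creds
      mountPath: /mnt/secrets
      readOnly: true
  volumes:
  - name: db-creds
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        secretProviderClass: db-credentials
```

The trade-off: the secret is only fetched at pod start (plus optional rotation reconciliation), and there's no Kubernetes Secret object for other tooling to reference.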

Vault Agent Sidecar

HashiCorp's approach. A sidecar container authenticates to Vault using the pod's service account, retrieves secrets, writes them to a shared volume, and handles lease renewal. More moving parts, but gives you dynamic secrets — database credentials that are generated on demand and automatically expire.

Dynamic secrets are wildly underrated. Instead of one static password that forty pods share, each pod gets a unique short-lived credential. When the pod dies, the credential dies with it. Blast radius goes from "everything" to "one pod for 24 hours."
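In practice the sidecar is usually requested through the Vault Agent Injector's pod annotations rather than wired by hand. A sketch, assuming a Vault role named app and a database secrets engine mounted at database/; both are assumptions about your Vault setup:

```yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    vault.hashicorp.com/agent-inject: "true"
    vault.hashicorp.com/role: "app"                                         # assumed Vault role
    vault.hashicorp.com/agent-inject-secret-db-creds: "database/creds/app"  # dynamic DB credential
spec:
  containers:
  - name: app
    image: my-app:latest   # illustrative
```

The injected agent authenticates with the pod's service account token, writes the rendered credential to a shared in-memory volume (by default under /vault/secrets/), and renews or re-fetches it as leases expire.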

A Practical Hardening Checklist

Not everything needs Vault. Sometimes you just need to make what you have less terrible. In order of effort versus impact:

1. Enable etcd encryption at rest. Twenty minutes of work. Eliminates the "someone grabs an etcd backup" attack entirely. Verify it worked: ETCDCTL_API=3 etcdctl get /registry/secrets/default/my-secret --print-value-only should return gibberish, not readable YAML.

2. Restrict RBAC on secrets. Use resourceNames in Roles to limit which specific secrets a service account can access. Not just "all secrets in namespace."

3. Audit secret access. Enable Kubernetes audit logging for secret read operations. If you're on a managed provider, this is usually a checkbox. You want to know who accessed what and when, because you won't know you've been breached without this data.

4. Never use env vars for secrets. Volume mounts or external injection only. Env vars leak in too many places.

5. Scan for secrets in manifests and Dockerfiles. Tools like TruffleHog and GitLeaks catch committed secrets. Run them in CI, not as an afterthought.
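In CI this can be as small as one job. A hedged GitHub Actions sketch using the gitleaks action; the full-depth checkout matters, since the whole point is scanning history, not just the tip commit:

```yaml
# .github/workflows/secret-scan.yml (illustrative)
name: secret-scan
on: [push, pull_request]
jobs:
  gitleaks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0   # fetch full history so old commits are scanned too
      - uses: gitleaks/gitleaks-action@v2
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```

Fail the pipeline on findings; a warning nobody reads is the same as no scan.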

6. Rotate. Set calendar reminders if you have to. 90 days maximum for production credentials. 30 for anything touching payment data.

Scanning Your K8s Configs Automatically

Manual reviews catch maybe 60% of these issues on a good day. The rest hide in Helm chart defaults, copy-pasted Stack Overflow YAML, and "temporary" configurations that became permanent two years ago. ScanMyCode.dev runs automated audits on your Kubernetes manifests, Dockerfiles, and application code — checking for exposed secrets, misconfigured RBAC, missing encryption settings, and the dozens of other patterns that lead to breaches. You get a report with exact file locations and fix instructions within 24 hours.

Stop Treating Base64 Like a Security Feature

Kubernetes Secrets have a naming problem. The word "secret" implies confidentiality that the primitive doesn't provide. It's a configuration object with a convenient API. Treat it that way.

If your secrets strategy starts and ends with kubectl create secret, you have a gap. Maybe not one that gets exploited today. But attack surface is cumulative, and secret sprawl only grows.

Run a Docker & Kubernetes security audit on your cluster configs. Takes five minutes to submit, and you get actionable results in 24 hours. Better to find the problems yourself than to have a pentester — or worse — find them for you.

Tags: kubernetes, secrets-management, encryption, etcd, cloud-security, devops
