Security · 11 min read

Your Kubernetes RBAC Is Probably Wide Open (And You Don't Know It)

Most Kubernetes clusters run with overly permissive RBAC policies. A look at the misconfigurations that give attackers cluster-admin within minutes.

By Security Team · March 9, 2026

A fintech startup lost their entire production cluster last year because a junior dev had cluster-admin bound to the default service account. Not a sophisticated attack. Not a zero-day. Someone ran kubectl get secrets --all-namespaces from a compromised pod and pulled every TLS cert, every database password, every API key in the cluster. Took about 40 seconds.

RBAC in Kubernetes is one of those things that looks straightforward in the docs and turns into a nightmare in production. The API is clean. The YAML is readable. And somehow, 60% of clusters audited by Aqua Security in 2024 had at least one ClusterRoleBinding that granted excessive privileges.

The Default Service Account Problem

Every namespace gets a default service account. Every pod that doesn't specify one uses it. And by default, that service account can talk to the Kubernetes API.

Most teams never touch it.

apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: app
      image: my-app:latest
  # no serviceAccountName specified
  # congrats, you're running as "default"
  # which probably has more access than you think

The fix is dead simple. Set automountServiceAccountToken: false on pods that don't need API access. Create dedicated service accounts for pods that do. But "dead simple" and "actually done in production" are two very different things. Teams provision clusters with Helm charts that don't set service accounts, copy-paste deployment manifests from Stack Overflow, and suddenly you've got 200 pods all running as default with API access they never needed.
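Here's a sketch of what that fix looks like in practice (the service account and pod names are illustrative):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: web-frontend             # dedicated SA, illustrative name
  namespace: production
automountServiceAccountToken: false  # no token unless a pod opts in
---
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  namespace: production
spec:
  serviceAccountName: web-frontend
  automountServiceAccountToken: false  # this pod never talks to the API
  containers:
    - name: app
      image: my-app:latest
```

Setting the flag on both the ServiceAccount and the pod means a token only gets mounted when someone explicitly asks for one.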

Why cluster-admin Gets Handed Out Like Candy

You know how it goes. Someone can't deploy. They get a permission error. The on-call engineer is tired, it's Friday afternoon, and they bind cluster-admin to that user's service account. "We'll fix it Monday." Monday never comes.

Datadog's 2024 Container Security report found that 38% of organizations had at least one human user with cluster-admin privileges who didn't need it. Not service accounts. Actual humans.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: give-bob-everything
  # created 14 months ago
  # bob left the company 8 months ago
  # nobody revoked this
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: User
    name: bob@company.com
    apiGroup: rbac.authorization.k8s.io

Bob's gone. His binding isn't. And if Bob's credentials got phished, reused, or leaked? That binding is a direct path to owning the cluster.

Wildcard Permissions: The Silent Killer

Wildcards in RBAC rules are the chmod 777 of Kubernetes. They solve every permission problem instantly and create a dozen security problems silently.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: super-helpful-role
rules:
  - apiGroups: ["*"]
    resources: ["*"]
    verbs: ["*"]
  # this is literally cluster-admin with extra steps
  # saw this in a Fortune 500 company's staging cluster
  # "staging" that shared a network with production

Staging clusters that share networks with production. Development namespaces on the same cluster as customer data. Test service accounts with production-level access. These aren't edge cases. Walk into any mid-size company running Kubernetes and you'll find at least one of these.

Partial wildcards are just as bad

Even limiting to specific API groups while using verb wildcards creates problems. A role with verbs: ["*"] on secrets lets the holder read, create, delete, and patch secrets. Including the ones mounted by other pods. Including the ones containing your cloud provider credentials.
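For contrast, a least-privilege version of a secrets role might look like this (the role name and secret name are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role                       # namespaced, not a ClusterRole
metadata:
  name: read-app-tls-secret
  namespace: production
rules:
  - apiGroups: [""]              # core API group, where secrets live
    resources: ["secrets"]
    resourceNames: ["app-tls"]   # one named secret, not all of them
    verbs: ["get"]               # read-only: no list, no patch, no delete
```

One named secret, one verb, one namespace. If that role leaks, the blast radius is a single TLS cert instead of the whole cluster.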

Pod Security and RBAC Work Together (Or Don't)

RBAC controls who can do what with the Kubernetes API. Pod security standards control what pods can actually do on the node. Most teams configure one and forget the other.

A pod with privileged: true can escape to the host. If the service account for that pod also has permissions to create new pods, an attacker can spawn a privileged pod in any namespace, break out to the node, and pivot to other nodes through the kubelet API. Red teams pull this off in under 5 minutes on poorly configured clusters.

apiVersion: v1
kind: Pod
metadata:
  name: totally-legit-debug-pod
  namespace: kube-system
spec:
  hostPID: true
  hostNetwork: true
  containers:
    - name: pwn
      image: alpine
      command: ["nsenter", "--target", "1", "--mount", "--uts", "--ipc", "--net", "--pid", "--", "bash"]
      securityContext:
        privileged: true
  # if your RBAC lets someone create pods in kube-system
  # this is game over
  # nsenter into PID 1 = you ARE the node now

Auditing RBAC Without Losing Your Mind

Running kubectl auth can-i --list for every service account in every namespace is technically possible. Nobody does it. It takes forever and the output is unreadable.

Tools like kubectl-who-can and Fairwinds' RBAC Manager help. KubeAudit catches common misconfigurations. But these are point-in-time checks. Someone adds a ClusterRoleBinding on Tuesday, your audit ran on Monday, and you don't catch it until the next scheduled scan. If there is a next scheduled scan.

What actually works:

  • Admission controllers (OPA Gatekeeper, Kyverno) that reject overly permissive role bindings before they land
  • Continuous monitoring that alerts on new ClusterRoleBindings, especially to cluster-admin
  • Regular rotation of service account tokens (Kubernetes 1.24+ finally made bound tokens the default, but plenty of clusters still run long-lived tokens)
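As a sketch of the first bullet, a Kyverno ClusterPolicy along these lines can reject new cluster-admin bindings at admission time (the policy name and message are illustrative; check the Kyverno docs for your version's exact schema):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: block-cluster-admin-bindings
spec:
  validationFailureAction: Enforce    # reject, don't just warn
  rules:
    - name: deny-cluster-admin-roleref
      match:
        any:
          - resources:
              kinds:
                - ClusterRoleBinding
      validate:
        message: "Binding cluster-admin requires a security review."
        deny:
          conditions:
            any:
              - key: "{{ request.object.roleRef.name }}"
                operator: Equals
                value: cluster-admin
```

With a policy like this in place, Friday-afternoon cluster-admin bindings fail at admission instead of lingering for 14 months.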

Namespace Isolation Is Not Network Isolation

Teams treat namespaces like security boundaries. They aren't. A namespace is an organizational boundary. Without NetworkPolicies, any pod in any namespace can talk to any other pod in the cluster.

Combine that with weak RBAC and you get lateral movement paradise. Compromise one pod, use its service account to list pods in other namespaces, find one without network restrictions, pivot there, escalate. Rinse and repeat until you hit something with cloud IAM credentials or a database connection string.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  # start with deny-all, then whitelist
  # yes it breaks things initially
  # better than finding out your monitoring pod
  # can reach your payment service

Start with deny-all ingress per namespace. Whitelist explicitly. It'll break things on the first deploy. Fix those things. You'll end up with a cluster where lateral movement actually requires effort instead of a single curl command between pods.
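An allow rule for the whitelist step could look something like this (the labels and port are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api                 # traffic *to* these pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend    # only the frontend may connect
      ports:
        - protocol: TCP
          port: 8080
```

NetworkPolicies are additive, so this coexists with the deny-all policy: the api pods now accept traffic from frontend pods on port 8080 and nothing else.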

Real Checklist, No Fluff

Set automountServiceAccountToken: false on every pod that doesn't need API access. That's most of them.

Delete every ClusterRoleBinding you can't explain in one sentence. If nobody remembers why it exists, it shouldn't exist.

Search for wildcard verbs: kubectl get clusterroles -o json | jq -r '.items[] | select(any(.rules[]?; .verbs[]? == "*")) | .metadata.name'

Run kubectl-who-can create pods -n kube-system. The list should be very short. If your CI/CD pipeline service account is on it, rethink your deployment strategy.

Enable audit logging. You can't detect privilege escalation if you're not recording API calls.
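A minimal audit policy focused on RBAC changes might look like this (a sketch; tune the levels for your log volume and pass the file to the API server via --audit-policy-file):

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Record full request/response bodies for every RBAC change
  - level: RequestResponse
    verbs: ["create", "update", "patch", "delete"]
    resources:
      - group: rbac.authorization.k8s.io
        resources: ["roles", "rolebindings", "clusterroles", "clusterrolebindings"]
  # Metadata only for everything else keeps log volume sane
  - level: Metadata
```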

Automated Kubernetes Auditing

Manually reviewing RBAC across a cluster with 50 namespaces and 300 service accounts is a multi-day effort. And it's outdated by the time you finish. ScanMyCode.dev runs automated audits on your Kubernetes configurations and Dockerfiles, flagging overly permissive roles, missing security contexts, and exposed secrets with exact file locations and remediation steps.

Stop Treating RBAC as a One-Time Setup

RBAC configs drift. People leave. Service accounts accumulate permissions. New Helm charts bring their own roles that nobody reviews. The cluster you secured six months ago isn't the cluster running today.

Don't wait until a penetration test (or worse, an incident) reveals that your monitoring stack has cluster-admin. Run a Docker & Kubernetes audit and get a clear picture of your cluster's security posture within 24 hours.

Tags: kubernetes, rbac, security, cluster-security, access-control
