A fintech startup ran a pentest last year. REST API: clean. Auth flows: solid. Then the tester found the GraphQL endpoint. Within forty minutes they had the full schema, user emails for every account, and a query that pegged the database at 100% CPU for eleven seconds straight. The fix took the team three weeks because nobody had thought about GraphQL as an attack surface at all.
That's not unusual. GraphQL is a phenomenal tool for frontend teams. But it ships with features that are, from a security perspective, basically attack primitives baked right into the spec.
Introspection Is Recon Handed to You on a Plate
REST APIs force attackers to guess endpoints. Fuzz /api/users, /api/admin, hope something responds. GraphQL? Send this:
{
  __schema {
    types {
      name
      fields {
        name
        type { name }
      }
    }
  }
}
And the server hands back the complete schema. Every type, every field, every relationship. Mutations included. It's like asking a bank for a floor plan and they email you the blueprints with the vault combination circled in red.
Production APIs should have introspection disabled. Period. Apollo Server, Yoga, Mercurius — they all support it. But the default is on. And the number of production GraphQL APIs running with introspection wide open is staggering. Escape Technologies reported that roughly 18% of publicly accessible GraphQL endpoints still expose full introspection as of late 2025.
// Apollo Server - disable introspection in production
const server = new ApolloServer({
  typeDefs,
  resolvers,
  introspection: process.env.NODE_ENV !== 'production',
  // yeah, this should be the default. it's not.
});
Some teams argue they need introspection for tooling. Fine. Restrict it to internal networks or authenticated admin roles. Exposing it to the public internet is handing attackers the map before they even start.
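One way to allow that without exposing the schema publicly: gate introspection per request. A minimal sketch, assuming you already have your own internal-network or admin-role check (the `isInternal` flag is your code's responsibility, not an Apollo Server feature):

```javascript
// Hedged sketch: block introspection queries unless the request has
// already been verified as internal. __schema and __type are the two
// introspection entry points in the GraphQL spec.
function blocksIntrospection(queryText, isInternal) {
  const usesIntrospection = /\b__schema\b|\b__type\b/.test(queryText);
  return usesIntrospection && !isInternal;
}
```

A string match like this is crude; a production check should operate on the parsed query instead (graphql-js ships a NoSchemaIntrospectionCustomRule validation rule for exactly this).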
Deeply Nested Queries Will Kill Your Database
This is the one that catches people off guard.
Say you have users who have posts, posts have comments, comments have authors (users again). Circular reference. Standard data model, nothing weird. But GraphQL lets a client send this:
query DeepNest {
  users {
    posts {
      comments {
        author {
          posts {
            comments {
              author {
                posts {
                  comments {
                    author { email }
                  }
                }
              }
            }
          }
        }
      }
    }
  }
}
That query generates an exponential number of database calls. Eight levels deep on a moderately sized dataset and your Postgres instance is done. No special tools needed. No SQL injection. Just a valid GraphQL query that the schema permits by design.
The fix isn't complicated but it requires deliberate effort. Query depth limiting, query cost analysis, or both:
// graphql-depth-limit package
import depthLimit from 'graphql-depth-limit';

const server = new ApolloServer({
  typeDefs,
  resolvers,
  validationRules: [depthLimit(5)],
  // 5 levels handles 99% of legitimate frontend queries
  // anything deeper is almost certainly adversarial
});
Cost analysis is better but harder. Assign a weight to each field, sum them up, reject queries over a threshold. Libraries like graphql-query-complexity handle this. The tricky part is tuning the weights — too aggressive and legitimate queries break, too lenient and you're still vulnerable.
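The core idea can be sketched without any library: give every field a weight, multiply the cost of a list field's children by an expected page size, and sum. The field names, weights, and page size below are all illustrative assumptions, not values from any real schema:

```javascript
// Hedged sketch of query cost analysis. Weights and page size are
// illustrative policy choices; a real setup would derive them from
// the schema (e.g. via graphql-query-complexity estimators).
const WEIGHTS = { users: 10, posts: 5, comments: 5, author: 1, email: 1 };
const PAGE_SIZE = 20; // assumed default list size

function cost(field) {
  const own = WEIGHTS[field.name] ?? 1;
  const children = (field.selections ?? []).reduce((sum, f) => sum + cost(f), 0);
  // list fields multiply their children's cost -- this multiplication
  // is exactly why nested queries explode
  const multiplier = field.isList ? PAGE_SIZE : 1;
  return own + multiplier * children;
}

// { users { posts { comments { author { email } } } } }
const query = {
  name: 'users', isList: true,
  selections: [{ name: 'posts', isList: true,
    selections: [{ name: 'comments', isList: true,
      selections: [{ name: 'author',
        selections: [{ name: 'email' }] }] }] }],
};
```

Four levels of nesting already costs 18,110 under these weights; each extra list level multiplies by another page size, which is the exponential blowup in miniature.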
Batching: One Request, Fifty Mutations
Most GraphQL servers accept query batching by default. Send an array of operations in a single HTTP request. Useful for performance. Also useful for brute-forcing login endpoints at scale while bypassing per-request rate limiters.
// Single HTTP POST, 200 login attempts
[
  { "query": "mutation { login(email: \"admin@corp.com\", pass: \"password1\") { token } }" },
  { "query": "mutation { login(email: \"admin@corp.com\", pass: \"password2\") { token } }" },
  // ... 198 more
]
Your rate limiter sees one request from one IP. The server processes 200 mutations. Rate limiting on GraphQL needs to account for the number of operations inside a request, not just the request itself. Most off-the-shelf API gateways don't do this unless you configure them specifically for GraphQL.
Disable batching if you don't need it. If you do need it, cap the array size at something sane — 5, maybe 10 — and count each operation individually for rate limiting purposes.
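That cap can be a simple pre-processing check before the GraphQL server ever sees the body. A sketch (MAX_BATCH is a policy choice, not a server default):

```javascript
// Hedged sketch: cap batch size and report the operation count so the
// rate limiter can charge per operation, not per HTTP request.
const MAX_BATCH = 5; // assumed policy value

function checkBatch(body) {
  // batched requests arrive as an array; single operations as an object
  const operations = Array.isArray(body) ? body : [body];
  return {
    allowed: operations.length <= MAX_BATCH,
    operationCount: operations.length,
  };
}
```

Wired into middleware, `operationCount` is what you feed the rate limiter, so 200 batched login attempts cost 200 tokens instead of one.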
Authorization at the Resolver Level (Not the Schema Level)
REST APIs naturally gate access at the route level. GET /admin/users → check admin role → done. GraphQL breaks this model because everything flows through a single endpoint.
Teams frequently make the mistake of checking permissions at the query level when they should be checking at the resolver level. A user might not be allowed to run query { admin { users } }, but what about query { post(id: 1) { author { role } } }, which resolves through the User type and reaches the role field via a relationship?
Every resolver that touches sensitive data needs its own authorization check. Not just the top-level queries and mutations. Every. Single. Resolver.
// Bad: only checking at the query level
const resolvers = {
  Query: {
    adminUsers: (_, __, context) => {
      if (!context.user.isAdmin) throw new ForbiddenError();
      return db.users.findAll();
    },
  },
  // User type resolver has no auth checks
  // anyone who can reach a User object sees everything
  User: {
    email: (user) => user.email,
    role: (user) => user.role, // oops
    ssn: (user) => user.ssn,   // double oops
  },
};
// Better: field-level authorization
User: {
  email: (user, _, context) => {
    if (context.user.id !== user.id && !context.user.isAdmin) return null;
    return user.email;
  },
  ssn: (user, _, context) => {
    if (!context.user.isAdmin) return null;
    return user.ssn;
  },
},
Tedious? Absolutely. But the alternative is data leaks through graph traversal. And those are hard to catch in testing because the obvious paths are secured — it's the indirect ones that get you.
Alias-Based Attacks Nobody Talks About
GraphQL aliases let you rename fields in the response. Handy for the client. Also lets you call the same resolver multiple times in a single query:
query AliasAttack {
  a1: user(id: "1") { email }
  a2: user(id: "2") { email }
  a3: user(id: "3") { email }
  # ... enumerate every user
  a5000: user(id: "5000") { email }
}
One query. Five thousand resolver executions. Not batching (that's multiple operations) — this is a single operation with aliases. Depth limiters won't catch it because it's flat. Cost analysis will, if you configured it. Most teams haven't.
This is how data scraping happens on GraphQL APIs. An attacker doesn't need to paginate through a list endpoint because they can enumerate individual records with aliases in parallel.
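A cheap first line of defense is to count aliases before executing. This string-level scan is deliberately crude (a production check should walk the parsed AST, e.g. with graphql-js's visit function); it counts colons outside argument parentheses and string literals, which is where aliases live:

```javascript
// Hedged sketch: approximate the alias count in a query string.
// Crude by design -- real enforcement belongs in an AST validation rule.
function countAliases(query) {
  let brace = 0, paren = 0, inString = false, count = 0;
  for (const ch of query) {
    if (ch === '"') { inString = !inString; continue; }
    if (inString) continue;          // ignore argument string values
    if (ch === '{') brace += 1;
    else if (ch === '}') brace -= 1;
    else if (ch === '(') paren += 1;
    else if (ch === ')') paren -= 1;
    // a colon inside a selection set but outside parens is an alias
    else if (ch === ':' && brace >= 1 && paren === 0) count += 1;
  }
  return count;
}
```

Reject anything over a few dozen and the 5,000-alias enumeration query dies at the door.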
Persisted Queries Change the Game
The nuclear option for GraphQL security: don't accept arbitrary queries at all.
With persisted queries (sometimes called "trusted documents"), the server only executes pre-registered query hashes. The frontend sends a hash, the server looks it up, runs the corresponding query. Anything not in the allowlist gets rejected.
// Client sends:
{
  "extensions": {
    "persistedQuery": {
      "version": 1,
      "sha256Hash": "abc123..."
    }
  },
  "variables": { "userId": "42" }
}
// Server only executes if hash matches a known query
// arbitrary queries → 400 Bad Request
This eliminates introspection abuse, nested query attacks, alias enumeration, and batching exploits in one move. The tradeoff is development workflow — every query change requires updating the persisted query store. For security-critical APIs (banking, healthcare, government) it's worth it. For a content site? Probably overkill.
Error Messages Leak Schema Information
Quick one. GraphQL error messages in development mode are incredibly detailed. Field suggestions ("Did you mean 'secretAdminToken'?"), type information, resolver paths. In production, strip all of that. Return generic errors to the client and log details server-side.
// Apollo Server
formatError: (err) => {
  // log full error internally
  logger.error(err);
  // return sanitized version to client
  if (process.env.NODE_ENV === 'production') {
    return { message: 'An error occurred' };
  }
  return err;
},
The "Did you mean..." feature is literally an oracle attack. Attackers send typos on purpose to discover field names without introspection. Cute feature in development. Vulnerability in production.
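If you want a surgical fix rather than fully generic production errors, you can scrub just the suggestion clause inside your error formatting hook. A sketch, assuming the current graphql-js message wording (which may change between versions):

```javascript
// Hedged sketch: strip the graphql-js "Did you mean ...?" suggestion
// from an error message before it reaches the client.
function stripSuggestions(message) {
  return message.replace(/ Did you mean .*\?$/s, '');
}
```

The rest of the message ("Cannot query field ... on type ...") stays useful for legitimate developers while the oracle disappears.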
Automated Scanning Catches What Code Review Misses
Code review helps, but GraphQL security issues are structural. Authorization gaps in resolver chains, missing depth limits, exposed introspection — these live in the interaction between resolvers, schema design, and server configuration. Hard to spot by reading individual files.
ScanMyCode.dev runs automated security audits that catch these patterns across your entire codebase. Resolver auth gaps, missing rate limiting, exposed introspection endpoints, unbounded query complexity — the full picture, with exact locations and remediation steps.
What Actually Matters
If you're running GraphQL in production, here's the minimum:
- Introspection off in production. Non-negotiable.
- Query depth limit set to 5-7. Adjust based on your schema.
- Query cost analysis with field-level weights. Reject expensive queries before they execute.
- Resolver-level authorization. Every field that touches sensitive data.
- Batching disabled or capped, with per-operation rate limiting.
- Error messages sanitized. No field suggestions in production.
- Persisted queries if the threat model justifies it.
Most of these take a day to implement. The depth limit is literally one line of code. But teams skip them because the API "works" and nobody's attacking it yet. Yet.
Don't wait until someone scrapes your user table through a nested alias query. Get a security audit — 24-hour turnaround, and you'll know exactly where the gaps are.