Performance · 11 min read

Your ORM Is Firing 200 Queries Per Page Load and You Don't Know It

N+1 queries silently destroy application performance. How to detect them, why ORMs make them worse, and what actually fixes the problem.

By Performance Team · March 10, 2026

A team shipped a dashboard page. Looked fine in dev — 50ms response times, snappy UI, everyone happy. Two months into production with real data, the page took 14 seconds to load. Their APM tool showed 847 SQL queries per single page render.

Nobody changed the code. The data grew.

This is the N+1 query problem. And it's everywhere.

What Actually Happens Under the Hood

The name says it: you fire 1 query to get a list of N items, then N additional queries to get related data for each item. Load 50 orders, each order triggers a query for the customer, another for the line items, maybe one more for shipping status. That's 50 × 3 + 1 = 151 queries. For one page.

Each individual query might take 2ms. Fast, right? But 151 × 2ms = 302ms of pure database time, not counting connection overhead, network round-trips, serialization. Add connection pool contention under load and you're looking at seconds.

The dangerous part: it works perfectly with 5 records in your dev database.
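The arithmetic generalizes to a one-line model. A toy sketch, not ORM code:

```javascript
// Toy model of N+1 scaling: 1 list query, plus `relations`
// follow-up queries for every record in the list.
function n1QueryCount(records, relations) {
  return 1 + records * relations;
}

// 3 related lookups per row, as in the order example above:
console.log(n1QueryCount(5, 3));      // 16  — dev database, feels instant
console.log(n1QueryCount(50, 3));     // 151 — production, per page load
console.log(n1QueryCount(50, 3) * 2); // 302 — ms of pure query time at 2ms each
```

The function is linear in record count, which is exactly why the problem stays invisible until the data grows.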

ORMs Make This Stupidly Easy to Create

Most modern ORMs lazy-load related data by default, or at least make the per-record fetch the path of least resistance: Eloquent, ActiveRecord, Entity Framework with proxies enabled, Sequelize through its per-instance getters. The reasoning sounds solid: only load data when you access it, avoid fetching stuff you don't need. In practice, this means your template iterating over a list silently fires queries on every loop iteration.

// Looks innocent. It's not.
const orders = await Order.findAll();

for (const order of orders) {
  // Each of these is a separate SQL query
  const customer = await order.getCustomer();
  const items = await order.getLineItems();
  console.log(`${customer.name}: ${items.length} items`);
}
// With 100 orders: 1 + 100 + 100 = 201 queries

The fix looks almost identical:

// Eager loading — a constant number of queries regardless of record count
// (Sequelize folds these includes into one JOINed query by default)
const orders = await Order.findAll({
  include: [
    { model: Customer },
    { model: LineItem }
  ]
});

for (const order of orders) {
  // No additional queries — data already loaded
  console.log(`${order.Customer.name}: ${order.LineItems.length} items`);
}

A handful of queries versus 201. Same result. The ORM abstraction hides the cost so well that developers don't even realize they're making network calls inside a loop.

Django Gets This Particularly Wrong

Django's ORM is lazy by design. QuerySets don't execute until you iterate. This leads to a pattern that trips up even experienced Python developers:

# views.py — the silent killer
def order_list(request):
    orders = Order.objects.all()  # No query yet
    return render(request, 'orders.html', {'orders': orders})

# orders.html
# {% for order in orders %}          ← query 1: SELECT * FROM orders
#   {{ order.customer.name }}        ← query 2, 3, 4... SELECT * FROM customers WHERE id=?
#   {{ order.items.count }}          ← query N+2... SELECT COUNT(*) FROM items WHERE order_id=?
# {% endfor %}

The fix:

# select_related for ForeignKey (JOIN)
# prefetch_related for ManyToMany/reverse FK (separate query, Python-side join)
orders = Order.objects.select_related('customer').prefetch_related('items').all()

Two lines. Massive difference. But nothing in Django warns you when you forget them. The page renders fine, the tests pass, and the problem only surfaces under real load with real data volumes.

GraphQL Made Everything Worse

GraphQL resolvers are basically N+1 machines. Each field resolver runs independently, which means a query asking for users and their posts fires one resolver for the user list, then N resolvers for posts — each hitting the database.

// This resolver creates N+1 by design
const resolvers = {
  User: {
    posts: async (user) => {
      // Called once PER USER in the result set
      return await Post.findAll({ where: { userId: user.id } });
    }
  }
};

Facebook built DataLoader specifically to solve this. It batches individual loads within a single tick of the event loop:

const DataLoader = require('dataloader');

const postLoader = new DataLoader(async (userIds) => {
  // Single query: SELECT * FROM posts WHERE user_id IN (?, ?, ?)
  const posts = await Post.findAll({
    where: { userId: userIds }
  });

  // Map results back to input order
  const postsByUser = {};
  posts.forEach(p => {
    (postsByUser[p.userId] ||= []).push(p);
  });
  return userIds.map(id => postsByUser[id] || []);
});

Without DataLoader (or a similar batching solution), every GraphQL API with nested relationships is an N+1 factory. Full stop.
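What DataLoader does under the hood can be sketched in a few lines. This is an illustration only; the real `dataloader` package adds caching, error handling, and per-request scoping:

```javascript
// Stripped-down sketch of DataLoader-style batching: loads requested
// in the same tick are queued and resolved with a single batch call.
class TinyLoader {
  constructor(batchFn) {
    this.batchFn = batchFn; // (keys) => Promise of values in key order
    this.queue = [];        // pending { key, resolve } pairs
  }

  load(key) {
    return new Promise((resolve) => {
      this.queue.push({ key, resolve });
      // First load this tick: schedule one flush for the whole batch
      if (this.queue.length === 1) queueMicrotask(() => this.flush());
    });
  }

  async flush() {
    const pending = this.queue;
    this.queue = [];
    const values = await this.batchFn(pending.map((p) => p.key));
    pending.forEach((p, i) => p.resolve(values[i]));
  }
}

// Three loads, one batch:
let batchCalls = 0;
const postLoader = new TinyLoader(async (userIds) => {
  batchCalls += 1; // stands in for SELECT ... WHERE user_id IN (...)
  return userIds.map((id) => `posts for user ${id}`);
});

Promise.all([postLoader.load(1), postLoader.load(2), postLoader.load(3)])
  .then((results) => console.log(batchCalls, results.length)); // 1 3
```

The key move is deferring the flush to a microtask, so every `load()` issued by resolvers during the current tick lands in the same batch.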

How to Actually Detect These

You can't fix what you can't see. A few approaches that work in production:

Query logging with thresholds. Most ORMs let you hook into the query lifecycle. Set an alert when a single request fires more than a sane threshold (the example below uses 20). In Rails:

# config/initializers/query_counter.rb
ActiveSupport::Notifications.subscribe('sql.active_record') do |_name, _start, _finish, _id, payload|
  # Skip cached queries and schema introspection noise
  next if payload[:cached] || payload[:name] == 'SCHEMA'

  Thread.current[:query_count] = Thread.current[:query_count].to_i + 1

  if Thread.current[:query_count] > 20
    Rails.logger.warn "N+1 ALERT: #{Thread.current[:query_count]} queries in single request"
  end
end

# Reset the counter at the start of each request, e.g. in ApplicationController:
#   before_action { Thread.current[:query_count] = 0 }

The Bullet gem (Ruby) and django-debug-toolbar (Python) catch N+1 patterns in development. They're not perfect — Bullet has false positives and misses some patterns — but they catch the obvious ones before they ship.

APM tools are your best bet for production. New Relic, Datadog, Scout — they all visualize query counts per request. Sort by total query count descending and you'll find your worst offenders in minutes. One team found a single API endpoint firing 3,400 queries per call. The endpoint had been in production for eight months.

For Node.js, Knex's query event and Prisma's query logging give you visibility without third-party tools:

// Prisma query logging
const prisma = new PrismaClient({
  log: [
    { level: 'query', emit: 'event' }
  ]
});

let queryCount = 0;
prisma.$on('query', () => { queryCount++; });

// After request handler
if (queryCount > 15) {
  console.warn(`Potential N+1: ${queryCount} queries in single request`);
}
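One way to turn that global counter into a per-request check is a middleware that snapshots the count when the request starts. This is a sketch with hypothetical Express-style wiring, not a Prisma feature:

```javascript
let queryCount = 0; // incremented by the ORM's query event, as above

// Express-style middleware: compare the counter before and after
// each request, and warn when one request blows its budget.
function queryBudget(threshold = 15) {
  return (req, res, next) => {
    const startCount = queryCount;
    res.on('finish', () => {
      const used = queryCount - startCount;
      if (used > threshold) {
        console.warn(`Potential N+1 on ${req.method} ${req.url}: ${used} queries`);
      }
    });
    next();
  };
}
```

Under concurrent requests a shared counter over-attributes queries; Node's `AsyncLocalStorage` gives a true per-request count, but the snapshot version is enough to surface the worst offenders.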

Beyond Eager Loading: When JOINs Aren't the Answer

Eager loading solves most N+1 cases. But sometimes you're loading deeply nested data and JOINs create massive Cartesian products. Loading orders → items → products → categories with JOINs means every row gets duplicated for each combination. 50 orders × 10 items × 3 categories = 1,500 rows when you only need 50 + 500 + ~100 unique records.

Strategies for complex cases:

Batch loading with IN clauses. Instead of N individual SELECTs, collect all IDs and fire one SELECT with WHERE id IN (...). This is what prefetch_related does in Django and what DataLoader does in GraphQL. Two queries instead of N+1, and no Cartesian explosion.
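The same two-query pattern is easy to do by hand when the ORM doesn't offer it. A runnable sketch with in-memory stand-ins for the tables (hypothetical data, not a specific ORM's API):

```javascript
// Batch-load pattern by hand: collect keys, fetch once with IN,
// join in memory. Arrays stand in for tables so this runs as-is.
const orders = [
  { id: 1, customerId: 10 },
  { id: 2, customerId: 11 },
  { id: 3, customerId: 10 },
];
const customersTable = [
  { id: 10, name: 'Ada' },
  { id: 11, name: 'Grace' },
];

// 1. Collect distinct foreign keys from the first result set
const ids = [...new Set(orders.map((o) => o.customerId))];

// 2. One query instead of N: SELECT * FROM customers WHERE id IN (10, 11)
const customers = customersTable.filter((c) => ids.includes(c.id));

// 3. Application-side join: index by id, then attach
const byId = new Map(customers.map((c) => [c.id, c]));
const hydrated = orders.map((o) => ({ ...o, customer: byId.get(o.customerId) }));

console.log(hydrated.map((o) => o.customer.name)); // [ 'Ada', 'Grace', 'Ada' ]
```

Two round-trips no matter how many orders there are, and each row comes back exactly once.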

Database views or materialized views for dashboards and reports. Pre-join the data at the database level. The view materializes once (or on schedule), and your app reads from a flat, already-joined table. Works exceptionally well for read-heavy pages that don't need real-time data.

Caching the computed result. If you're loading the same nested data repeatedly, cache the fully assembled object. Redis with a 60-second TTL eliminates 99% of the database hits for frequently accessed pages. Just make sure your cache invalidation actually works — stale dashboards cause their own category of incidents.
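A minimal sketch of the TTL idea, using an in-process map for illustration (a production setup would use Redis, as noted above):

```javascript
// Minimal TTL cache for assembled view objects. Entries expire
// `ttlMs` after being set; `now` is injectable for testability.
class TtlCache {
  constructor(ttlMs) {
    this.ttlMs = ttlMs;
    this.store = new Map(); // key -> { value, expiresAt }
  }
  get(key, now = Date.now()) {
    const entry = this.store.get(key);
    if (!entry || entry.expiresAt <= now) return undefined;
    return entry.value;
  }
  set(key, value, now = Date.now()) {
    this.store.set(key, { value, expiresAt: now + this.ttlMs });
  }
}

const cache = new TtlCache(60_000); // 60-second TTL, as in the text
cache.set('dashboard:42', { orders: 50 });
console.log(cache.get('dashboard:42')); // { orders: 50 }
```

On a cache hit the fully assembled object comes back with zero queries; on a miss you pay the eager-loaded cost once per TTL window instead of once per request.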

The Numbers That Should Scare You

Some rough benchmarks from real applications:

A basic e-commerce order list page: N+1 version fires ~500 queries in 1.2 seconds. Optimized version: 4 queries in 45ms. A 26x improvement from adding two include statements.

A social feed with posts, authors, likes, and comments: N+1 generated 2,100+ queries for 50 posts. Optimized with DataLoader: 8 queries, 80ms. The database server CPU dropped from 78% to 12% after the fix deployed.

Connection pooling amplifies the problem. Most pools default to 10-20 connections. When every request hogs a connection for 200+ queries, you hit pool exhaustion fast. Under load, requests queue up waiting for connections, and response times spike sharply. What was 300ms for one user becomes 8 seconds for 50 concurrent users.

Catching Performance Killers Before Production

N+1 queries are one of those problems that code review consistently misses. Reviewers look at logic, not the implicit database calls hiding behind ORM method chains. A line that says order.customer.name looks like a property access, not a network call. Static analysis tools can flag some patterns, but the ORM abstraction makes it genuinely hard to catch programmatically.

ScanMyCode.dev performance audits trace query patterns across your codebase, flagging endpoints with excessive database round-trips and showing exactly which model relationships need eager loading. You get a report with file paths, line numbers, and the estimated query reduction for each fix.

Stop Guessing, Start Measuring

Add query counting to your test suite. Seriously. Assert that your critical endpoints don't exceed a query budget. When someone accidentally removes an include or adds a new relationship without eager loading, the test fails before it ever reaches production.

// Jest example with Prisma (client created with query event
// logging enabled, as shown earlier)
test('order list stays within query budget', async () => {
  let queries = 0;
  // Register once per test file: repeated $on calls stack listeners
  prisma.$on('query', () => queries++);

  await request(app).get('/api/orders');

  // Budget: 5 queries max for this endpoint
  expect(queries).toBeLessThanOrEqual(5);
});

Every application has N+1 queries hiding somewhere. Most teams just don't know about them yet because the dataset is still small. Production data grows. Traffic grows. The queries that took 50ms today take 5 seconds next quarter.

Find them before your users do. Run a performance audit — you'll get a full report of query hotspots, with fixes, in 24 hours.

Tags: performance, database, n-plus-one, orm, sql, optimization

Ready to improve your code?

Get an AI-powered code audit with actionable recommendations. Results in 24 hours.

Start Your Audit