Performance · 10 min read

Performance Optimization: Techniques That Actually Work

Practical strategies to identify and fix performance bottlenecks in your applications.

By Performance Team · February 12, 2026

Performance matters. A 100ms delay can reduce conversions by 7%. Users expect applications to load in under 3 seconds. Slow code costs money.

This guide covers proven techniques to identify and fix performance bottlenecks across frontend, backend, and database layers.

Measure First, Optimize Later

Never optimize without data. Use profiling tools to find the actual bottlenecks.

Frontend Profiling

// Chrome DevTools Performance tab
// 1. Record → Interact → Stop
// 2. Analyze flame chart for long tasks
// 3. Check layout/paint/composite time

// Measure specific operations
console.time('expensive-operation');
expensiveOperation();
console.timeEnd('expensive-operation');

Backend Profiling (Node.js)

# CPU profiling
node --prof app.js
node --prof-process isolate-*.log > profile.txt

# Memory profiling
node --inspect app.js
# Then open Chrome DevTools → Memory tab

Database Profiling

-- PostgreSQL
EXPLAIN ANALYZE SELECT * FROM users WHERE email = 'test@example.com';

-- MySQL
EXPLAIN SELECT * FROM users WHERE email = 'test@example.com';

-- MongoDB
db.users.find({email: 'test@example.com'}).explain('executionStats');

Database Optimization

1. Fix N+1 Queries

The most common performance killer in web applications.

// ❌ Bad: N+1 queries
const users = await User.findAll();
for (const user of users) {
  user.posts = await Post.findAll({ where: { userId: user.id } });
}

// ✅ Good: Single query with join
const users = await User.findAll({
  include: [{ model: Post }]
});

2. Add Indexes

Unindexed queries can scan millions of rows. Add indexes for frequently queried columns.

-- Before: Full table scan (millions of rows)
SELECT * FROM orders WHERE user_id = 123;  -- 2500ms

-- Add index
CREATE INDEX idx_orders_user_id ON orders(user_id);

-- After: Index scan (milliseconds)
SELECT * FROM orders WHERE user_id = 123;  -- 3ms

Index Guidelines:

  • Index foreign keys
  • Index columns in WHERE clauses
  • Index columns used for sorting (ORDER BY)
  • Composite indexes for multi-column queries
  • Don't over-index (slows down writes)

3. Use SELECT Wisely

-- ❌ Bad: Fetches all columns (including large TEXT fields)
SELECT * FROM articles;

-- ✅ Good: Fetch only needed columns
SELECT id, title, summary FROM articles;

4. Pagination

// ❌ Bad: Loads all records into memory
const allUsers = await User.findAll();

// ✅ Good: Paginate results
const users = await User.findAll({
  limit: 20,
  offset: (page - 1) * 20
});

// ✅ Better: Cursor-based pagination for large datasets (Op is Sequelize's operator object)
const users = await User.findAll({
  where: { id: { [Op.gt]: lastSeenId } },
  limit: 20,
  order: [['id', 'ASC']]
});

5. Connection Pooling

// ❌ Bad: New connection per query
for (let i = 0; i < 100; i++) {
  const client = await db.connect();
  await client.query('SELECT ...');
  await client.end();
}

// ✅ Good: Reuse connections from pool
const pool = new Pool({ max: 20 });
for (let i = 0; i < 100; i++) {
  await pool.query('SELECT ...');
}

Backend Optimization

1. Caching

Don't recompute what hasn't changed.

// In-memory cache
const cache = new Map();

async function getUser(id) {
  if (cache.has(id)) {
    return cache.get(id);  // Instant
  }
  const user = await db.query('SELECT * FROM users WHERE id = ?', [id]);
  cache.set(id, user);
  return user;
}

// Redis cache for distributed systems
const redis = new Redis();

async function getUser(id) {
  const cached = await redis.get(`user:${id}`);
  if (cached) return JSON.parse(cached);
  
  const user = await db.query('SELECT * FROM users WHERE id = ?', [id]);
  await redis.setex(`user:${id}`, 3600, JSON.stringify(user));  // 1h TTL
  return user;
}
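One caveat with the plain Map cache above: it never evicts, so it can grow without bound and serve stale data forever. The Redis version gets expiry for free via setex; for in-process caching you need a TTL yourself. A minimal sketch (illustrative, not production-ready — the TTLCache name and the injectable clock are our own additions, the clock purely to make expiry testable):

```javascript
// Minimal TTL cache: entries expire ttlMs milliseconds after they are set.
// `now` is injectable so expiry can be tested without real waiting.
class TTLCache {
  constructor(ttlMs, now = Date.now) {
    this.ttlMs = ttlMs;
    this.now = now;
    this.store = new Map();
  }

  get(key) {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (this.now() - entry.at > this.ttlMs) {
      this.store.delete(key); // expired: evict and report a miss
      return undefined;
    }
    return entry.value;
  }

  set(key, value) {
    this.store.set(key, { value, at: this.now() });
  }
}
```

For anything serious, a maintained library (e.g. an LRU cache) also bounds the number of entries, not just their age.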

2. Async Operations

// ❌ Bad: Sequential (600ms total)
const user = await getUser(id);        // 200ms
const posts = await getPosts(userId);  // 200ms
const stats = await getStats(userId);  // 200ms

// ✅ Good: Parallel (200ms total)
const [user, posts, stats] = await Promise.all([
  getUser(id),
  getPosts(userId),
  getStats(userId)
]);
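Promise.all rejects as soon as any one call fails. When partial results are acceptable (say, stats are nice-to-have but the page can render without them), Promise.allSettled lets you keep whatever succeeded. A sketch — the loader functions are passed in here only to keep the example self-contained:

```javascript
// Fetch in parallel but tolerate individual failures.
// getUser/getPosts/getStats stand in for real data loaders.
async function loadDashboard(getUser, getPosts, getStats) {
  const [user, posts, stats] = await Promise.allSettled([
    getUser(),
    getPosts(),
    getStats(),
  ]);
  return {
    user: user.status === 'fulfilled' ? user.value : null,
    posts: posts.status === 'fulfilled' ? posts.value : [],
    stats: stats.status === 'fulfilled' ? stats.value : null, // optional data
  };
}
```

Still 200ms total in the happy path, but a flaky stats service no longer takes the whole page down with it.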

3. Lazy Loading

// ❌ Bad: Loads everything upfront
class User {
  constructor(data) {
    this.posts = loadAllPosts(data.id);  // Expensive!
  }
}

// ✅ Good: Load only when needed
class User {
  async getPosts() {
    if (!this._posts) {
      this._posts = await loadPosts(this.id);
    }
    return this._posts;
  }
}

4. Batch Operations

// ❌ Bad: 1000 separate queries
for (const user of users) {
  await db.query('INSERT INTO logs VALUES (?)', [user.id]);
}

// ✅ Good: Single batch insert
const values = users.map(u => [u.id]);
await db.query('INSERT INTO logs VALUES ?', [values]);
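In practice one mega-statement can blow past the database's parameter or packet limits, so batches are usually capped at a few hundred rows. A sketch of a chunking helper (the 500-row default and the db.query signature — mysql2-style nested-array placeholders, as above — are assumptions):

```javascript
// Split an array into fixed-size chunks.
function chunk(items, size) {
  const out = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

// Insert in batches of `size` rows instead of one giant statement.
async function batchInsert(db, users, size = 500) {
  for (const batch of chunk(users, size)) {
    const values = batch.map(u => [u.id]);
    await db.query('INSERT INTO logs VALUES ?', [values]);
  }
}
```

1,000 users now cost 2 round trips instead of 1,000, without risking a "packet too large" error.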

5. Stream Large Data

// ❌ Bad: Loads entire file into memory
app.get('/download', async (req, res) => {
  const data = await fs.readFile('large-file.zip');  // OOM risk
  res.send(data);
});

// ✅ Good: Stream directly
app.get('/download', (req, res) => {
  const stream = fs.createReadStream('large-file.zip');
  stream.pipe(res);
});

Frontend Optimization

1. Code Splitting

// ❌ Bad: Bundle includes everything (2MB)
import AdminPanel from './AdminPanel';
import UserDashboard from './UserDashboard';

// ✅ Good: Lazy load route components
const AdminPanel = lazy(() => import('./AdminPanel'));
const UserDashboard = lazy(() => import('./UserDashboard'));

2. Virtual Scrolling

For large lists, only render visible items.

// ❌ Bad: Renders 10,000 rows (slow!)
{items.map(item => <Row key={item.id} data={item} />)}

// ✅ Good: Virtual scrolling (renders ~20 visible rows)
import { FixedSizeList } from 'react-window';

<FixedSizeList
  height={600}
  itemCount={items.length}
  itemSize={50}
>
  {({ index, style }) => <Row style={style} data={items[index]} />}
</FixedSizeList>

3. Debounce/Throttle

// ❌ Bad: API call on every keystroke
<input onChange={(e) => searchAPI(e.target.value)} />

// ✅ Good: Debounced API call
import { debounce } from 'lodash';

const debouncedSearch = debounce(searchAPI, 300);
<input onChange={(e) => debouncedSearch(e.target.value)} />
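If pulling in lodash just for this feels heavy, debounce is small enough to write by hand. A minimal sketch (no leading-edge or cancel options, which the lodash version does offer):

```javascript
// Delay calls to fn until `wait` ms have passed with no new call;
// only the last call in a burst actually fires.
function debounce(fn, wait) {
  let timer = null;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), wait);
  };
}
```

Throttle is the complementary tool: it guarantees at most one call per interval, which fits scroll and resize handlers better than debounce does.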

4. Memoization

// ❌ Bad: Recomputes on every render
function Component({ data }) {
  const processed = expensiveProcessing(data);
  return <div>{processed}</div>;
}

// ✅ Good: Memoized computation
function Component({ data }) {
  const processed = useMemo(() => expensiveProcessing(data), [data]);
  return <div>{processed}</div>;
}

5. Image Optimization

<!-- ❌ Bad: Loads 5MB image for 300px display -->
<img src="photo.jpg" width="300" />

<!-- ✅ Good: Responsive images with modern formats -->
<picture>
  <source srcset="photo-300.webp" media="(max-width: 600px)" type="image/webp" />
  <source srcset="photo-800.webp" type="image/webp" />
  <img src="photo-800.jpg" alt="Photo" loading="lazy" />
</picture>

Algorithm Complexity

Choosing the right data structure matters.

// ❌ Bad: O(n) lookup
const users = [/* 10,000 users */];
const user = users.find(u => u.id === targetId);  // Scans entire array

// ✅ Good: O(1) lookup
const usersMap = new Map(users.map(u => [u.id, u]));
const user = usersMap.get(targetId);  // Instant

// ❌ Bad: O(n²) nested loops
for (let i = 0; i < users.length; i++) {
  for (let j = 0; j < posts.length; j++) {
    if (posts[j].userId === users[i].id) {
      // ...
    }
  }
}

// ✅ Good: O(n) with hash map
const postsByUser = posts.reduce((acc, post) => {
  acc[post.userId] = acc[post.userId] || [];
  acc[post.userId].push(post);
  return acc;
}, {});
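Putting the grouped map to use, the quadratic join above collapses into one pass over each array. A small self-contained sketch (attachPosts is our own name for the combined operation):

```javascript
// Group posts by author once (O(n)), then attach them to users (O(m)),
// instead of scanning all posts for every user (O(n·m)).
function attachPosts(users, posts) {
  const postsByUser = posts.reduce((acc, post) => {
    (acc[post.userId] = acc[post.userId] || []).push(post);
    return acc;
  }, {});
  return users.map(user => ({
    ...user,
    posts: postsByUser[user.id] || [],
  }));
}
```

The same shape shows up constantly: any time you look up one array inside a loop over another, build a Map (or keyed object) first.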

Monitoring & Alerts

Set up monitoring to catch performance regressions:

  • Lighthouse CI: Automated performance audits on every deploy
  • New Relic / Datadog: Real-time performance monitoring
  • Sentry: Performance transaction tracking
  • Google Analytics: Page load time trends

// Set performance budgets in CI
// lighthouse.config.js
module.exports = {
  ci: {
    assert: {
      assertions: {
        'first-contentful-paint': ['error', { maxNumericValue: 2000 }],
        'speed-index': ['error', { maxNumericValue: 3000 }],
        'interactive': ['error', { maxNumericValue: 4000 }],
      },
    },
  },
};

Performance Audit Checklist

  • ✅ Database queries have appropriate indexes
  • ✅ No N+1 query problems
  • ✅ Large datasets are paginated
  • ✅ Caching is implemented for static/slow data
  • ✅ Images are optimized and lazy-loaded
  • ✅ Bundle size is under 200KB (gzipped)
  • ✅ Code splitting for large routes
  • ✅ No blocking synchronous operations
  • ✅ Virtual scrolling for large lists
  • ✅ Debounced search/input handlers

Conclusion

Performance optimization is about finding and fixing the biggest bottlenecks first. Measure, identify, optimize, repeat.

Want a comprehensive performance analysis of your codebase? Get a performance audit and receive specific optimization recommendations within 24 hours.

