Connection Pooling Done Right
PgBouncer, Prisma pool, Supabase pooler, and pool sizing formula
What You’ll Build
This guide walks through setting up connection pooling correctly. Database connection exhaustion is one of the most common causes of production outages in serverless and edge environments: each PostgreSQL connection consumes roughly 10MB of server RAM, so unbounded connection counts quickly exhaust memory. Connection pooling reuses a small number of connections across many requests. This guide covers when application-level pooling is enough and when you need an external pooler such as PgBouncer or the Supabase pooler.
Use Cases & Problems Solved
- Keep database connection counts bounded under high request concurrency
- Reuse connections instead of paying connect/TLS/auth overhead on every request
- Run serverless and edge functions against Postgres without hitting max_connections
Prerequisites
- PostgreSQL database
- Understanding of your app’s concurrency model
Step-by-Step Implementation
Node.js application-level pool (pg)
The following snippet configures an application-level pool with the pg library in Node.js. Copy it into your project and adjust the values for your environment.
const { Pool } = require('pg');

const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  max: 20,                       // max connections in the pool
  idleTimeoutMillis: 30000,      // close idle connections after 30s
  connectionTimeoutMillis: 5000, // fail if no connection is available within 5s
});

// Use pool.query for simple queries (acquires and releases automatically)
const { rows } = await pool.query('SELECT * FROM users WHERE id = $1', [userId]);

// Acquire and release manually for transactions
const client = await pool.connect();
try {
  await client.query('BEGIN');
  await client.query('INSERT INTO orders ...');
  await client.query('UPDATE inventory ...');
  await client.query('COMMIT');
} catch (e) {
  await client.query('ROLLBACK');
  throw e;
} finally {
  client.release(); // ALWAYS release back to the pool
}
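The BEGIN/COMMIT/ROLLBACK/release dance above is easy to get wrong when repeated across a codebase. A sketch of a reusable helper (hypothetical name `withTransaction`, not part of pg) that centralizes it:

```javascript
// Sketch: run fn inside a transaction, always releasing the client.
// Works with any pg-compatible pool exposing connect() and client.query()/release().
async function withTransaction(pool, fn) {
  const client = await pool.connect();
  try {
    await client.query('BEGIN');
    const result = await fn(client);
    await client.query('COMMIT');
    return result;
  } catch (err) {
    await client.query('ROLLBACK');
    throw err;
  } finally {
    client.release(); // released on success and on failure
  }
}
```

Usage: `const order = await withTransaction(pool, (c) => c.query('INSERT INTO orders ... RETURNING *'));` — the caller can no longer forget the release.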
Pool sizing formula
The optimal pool size is smaller than most people expect. A widely cited rule of thumb (from the PostgreSQL wiki and the HikariCP pool-sizing guide): connections = (cpu_cores * 2) + effective_spindle_count, where cpu_cores is the core count of the database server. For SSD-backed cloud databases, effective_spindle_count is typically taken as 1, which works out to roughly 5-10 connections per application server instance.
# For a 4-core server with SSD:
# connections = (4 * 2) + 1 = 9
# For serverless (many instances): use an external pooler
# PgBouncer or Supabase Pooler with transaction mode
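The arithmetic above can be written as a tiny helper (hypothetical name `recommendedPoolSize`); this is a sketch of the rule of thumb, not a guarantee for your workload — measure under real load:

```javascript
// Rule-of-thumb pool size: (cpu cores * 2) + effective spindle count.
// For SSD-backed cloud databases, effectiveSpindleCount is usually taken as 1.
function recommendedPoolSize(cpuCores, effectiveSpindleCount = 1) {
  return cpuCores * 2 + effectiveSpindleCount;
}

console.log(recommendedPoolSize(4)); // 4-core server with SSD → 9
```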
⚠️ Don’t Do This
❌ Creating a new connection per request in serverless
// Each Lambda/Edge function invocation creates a new connection
// 1000 concurrent requests = 1000 connections = OOM!
import { Client } from 'pg';

export default async function handler(req) {
  const client = new Client(process.env.DATABASE_URL);
  await client.connect(); // new connection every time!
  const result = await client.query('SELECT ...');
  await client.end();
  return result.rows;
}
✅ Use a connection pooler for serverless environments
// Use Supabase Pooler or PgBouncer in transaction mode
// .env: DATABASE_URL=postgres://...supabase.com:6543/postgres?pgbouncer=true
// Only ~10 actual DB connections for thousands of requests
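Since the subtitle mentions Prisma: Prisma can point at the pooled connection string for queries while keeping a direct URL for migrations via its `directUrl` field. A sketch — the env var names and ports are placeholders for your provider's values:

```prisma
// prisma/schema.prisma — sketch; env var names and ports are placeholders
datasource db {
  provider  = "postgresql"
  url       = env("DATABASE_URL") // pooled URL, e.g. port 6543 with ?pgbouncer=true
  directUrl = env("DIRECT_URL")   // direct URL (e.g. port 5432) for migrations
}
```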
Testing
Verify your implementation with these tests:
// __tests__/connection-pooling-done-right.test.ts
// Assumes DATABASE_URL points at a disposable test database.
import { describe, it, expect } from 'vitest';
import { Pool } from 'pg';

describe('Connection Pooling Done Right', () => {
  const pool = new Pool({ connectionString: process.env.DATABASE_URL, max: 5 });

  it('answers a simple query', async () => {
    const { rows } = await pool.query('SELECT 1 AS one');
    expect(rows[0].one).toBe(1);
  });

  it('stays within max under concurrent load', async () => {
    await Promise.all(
      Array.from({ length: 50 }, () => pool.query('SELECT pg_sleep(0.01)'))
    );
    expect(pool.totalCount).toBeLessThanOrEqual(5);
  });

  it('releases clients after a failed transaction', async () => {
    const client = await pool.connect();
    try {
      await client.query('BEGIN');
      await expect(client.query('SELECT not_a_column')).rejects.toThrow();
      await client.query('ROLLBACK');
    } finally {
      client.release();
    }
    expect(pool.idleCount).toBeGreaterThan(0);
  });
});
Verification
# Check current connections:
SELECT count(*) FROM pg_stat_activity;
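For a finer-grained check, the standard pg_stat_activity columns show which applications hold connections and whether they are idle (run against your live database):

```sql
-- Break connections down by application and state
SELECT application_name, state, count(*)
FROM pg_stat_activity
GROUP BY application_name, state
ORDER BY count(*) DESC;
```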
# Load test to verify pool works:
npx autocannon -c 100 -d 10 http://localhost:3000/api/users
# Connections should stay constant (pool size), not grow with requests
Related Specs
Safe Database Migrations
Zero-downtime migration patterns, rollback strategies, and data backfills
Full-Text Search in PostgreSQL
tsvector, tsquery, ranking, trigram similarity, and search indexing