PostgreSQL ships with configuration so conservative it runs on machines with 256MB of RAM. In production, you have 16GB or 64GB of RAM and your database is still using 128MB of shared buffers. These configuration changes typically deliver 2-5x performance improvements on existing hardware with no schema or query changes.
⚡ TL;DR: Set shared_buffers to 25% of RAM. Set effective_cache_size to 75% of RAM. Set work_mem based on max_connections and available RAM. Use PgBouncer for connection pooling. Enable pg_stat_statements. Read EXPLAIN (ANALYZE, BUFFERS) before any other optimization.
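Before changing any setting, look at what a slow query is actually doing. Here is a minimal sketch of the EXPLAIN (ANALYZE, BUFFERS) step the TL;DR refers to, using a hypothetical orders table:
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM orders WHERE customer_id = 42 ORDER BY created_at DESC LIMIT 20;
-- Look for: Seq Scan on large tables, estimated vs. actual row counts far apart,
-- and "Buffers: shared read=..." (disk) dwarfing "shared hit=..." (cache).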
Essential configuration changes
# postgresql.conf — critical settings
# Memory
shared_buffers = 4GB # 25% of RAM (default: 128MB — far too low)
effective_cache_size = 12GB # 75% of RAM — tells planner how much OS cache available
work_mem = 64MB # Per sort/hash operation (careful: max_connections * work_mem)
# 100 connections * 64MB = 6.4GB potential!
maintenance_work_mem = 1GB # For VACUUM, CREATE INDEX, ALTER TABLE
# Checkpoints (write throughput)
checkpoint_completion_target = 0.9 # Spread checkpoint writes over 90% of interval
wal_buffers = 64MB # WAL buffer (default 16MB, often too small)
# Planner
default_statistics_target = 100 # Default is 100; raise (up to 10000) for complex queries on skewed data
# Parallelism
max_parallel_workers_per_gather = 4 # Allow 4 workers per query
max_parallel_workers = 8 # Total parallel workers
# Logging (for finding slow queries)
log_min_duration_statement = 1000 # Log queries > 1 second
log_checkpoints = on
log_connections = off # Very noisy with connection pooler
# Use PGTune (pgtune.leopard.in.ua) to generate config for your hardware
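These values can be edited in postgresql.conf directly, or applied from psql with ALTER SYSTEM (they are then written to postgresql.auto.conf). A minimal sketch, assuming the same 16GB server as above; shared_buffers only takes effect after a restart, the others after a reload:
ALTER SYSTEM SET shared_buffers = '4GB';        -- needs a full restart
ALTER SYSTEM SET effective_cache_size = '12GB'; -- reload is enough
ALTER SYSTEM SET work_mem = '64MB';
ALTER SYSTEM SET maintenance_work_mem = '1GB';
SELECT pg_reload_conf();                        -- applies reload-able settings
SHOW shared_buffers;                            -- verify the value currently in effect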
Connection pooling with PgBouncer
# Each PostgreSQL connection uses ~5-10MB RAM
# With 1000 app servers * 10 connections = 10,000 connections = 100GB RAM!
# PgBouncer: pool connections at the proxy layer
# pgbouncer.ini
[databases]
myapp = host=localhost dbname=myapp
[pgbouncer]
pool_mode = transaction # Connection released after each transaction
# Most efficient: one server conn serves many clients
max_client_conn = 10000 # Max app connections to PgBouncer
default_pool_size = 25 # Actual PostgreSQL connections per database
reserve_pool_size = 5 # Emergency extra connections
# Result:
# App thinks it has 10,000 connections
# PostgreSQL actually has 25 connections
# Memory: 25 * ~10MB ≈ 250MB (vs 10,000 * ~10MB ≈ 100GB)
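To confirm the pool is doing its job, connect to PgBouncer's admin console (the virtual pgbouncer database, open to users listed in admin_users or stats_users) and inspect pool state. A sketch assuming a default install listening on port 6432:
# psql -h 127.0.0.1 -p 6432 -U pgbouncer pgbouncer
SHOW POOLS;   -- client side (cl_active, cl_waiting) vs server side (sv_active, sv_idle)
SHOW STATS;   -- per-database query/transaction counts and traffic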
pg_stat_statements — find slow queries
-- 1. Add to postgresql.conf and restart PostgreSQL:
--    shared_preload_libraries = 'pg_stat_statements'
-- 2. Then enable the extension in your database:
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
-- Top 10 queries by total execution time
-- (column names are for PostgreSQL 13+; use total_time on 12 and earlier):
SELECT
  round(total_exec_time::numeric / 1000, 2) AS total_seconds,
  calls,
  round((total_exec_time / calls)::numeric, 2) AS avg_ms,
  round(rows::numeric / calls, 1) AS avg_rows,
  left(query, 100) AS query_snippet
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;
-- Reset stats:
SELECT pg_stat_statements_reset();
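Total execution time is not the only useful ordering. A follow-up sketch, sorting by blocks read from disk, highlights the statements that miss the cache most and so benefit first from more shared_buffers or better indexes (shared_blks_hit and shared_blks_read are standard pg_stat_statements columns):
-- Top 10 queries by blocks read from disk (cache misses)
SELECT
  calls,
  shared_blks_read,
  round(100.0 * shared_blks_hit / nullif(shared_blks_hit + shared_blks_read, 0), 1) AS cache_hit_pct,
  left(query, 100) AS query_snippet
FROM pg_stat_statements
ORDER BY shared_blks_read DESC
LIMIT 10;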
- ✅ shared_buffers = 25% of RAM — most impactful single change
- ✅ PgBouncer in transaction mode — shrinks thousands of client connections to a few dozen server connections
- ✅ pg_stat_statements — find which queries consume the most time
- ✅ log_min_duration_statement = 1000 — log slow queries automatically
- ✅ VACUUM and ANALYZE regularly — dead tuples bloat tables (see the check after this list)
- ❌ Never set work_mem too high — max_connections * work_mem can exhaust RAM
- ❌ Never ignore pg_stat_statements — it reveals the real bottlenecks
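For the VACUUM/ANALYZE item, a quick health check is to look at dead tuple counts in the standard pg_stat_user_tables view; the table name in the manual VACUUM below is a placeholder:
-- Tables with the most dead tuples, plus when autovacuum last touched them
SELECT relname, n_live_tup, n_dead_tup, last_autovacuum, last_autoanalyze
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;
-- Manual cleanup for a badly bloated table (placeholder name):
VACUUM ANALYZE my_bloated_table;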
PostgreSQL tuning pairs with the SQL query optimization guide — config sets the ceiling, query design determines what you hit. External reference: PGTune — PostgreSQL config generator.