
PostgreSQL Performance Tuning Guide

Master the art of PostgreSQL optimization with this comprehensive guide covering configuration tuning, query optimization, indexing strategies, and monitoring best practices.

Why PostgreSQL Performance Matters

PostgreSQL is one of the most advanced open-source relational databases, powering mission-critical applications worldwide. However, out-of-the-box configurations are designed to run on minimal hardware. To unlock PostgreSQL's full potential, you need to tune it for your specific workload.

Memory Configuration

Memory settings have the most significant impact on PostgreSQL performance. Let's explore the key parameters:

shared_buffers

This is PostgreSQL's dedicated memory for caching data. The recommended setting is 25% of total system memory, but not more than 8GB on most systems.

# For a server with 32GB RAM
shared_buffers = 8GB

# Check current setting
SHOW shared_buffers;

effective_cache_size

This tells the query planner how much memory is available for disk caching. Set it to 50-75% of total system memory.

# For a server with 32GB RAM
effective_cache_size = 24GB

work_mem

Memory for internal sort operations and hash tables. Be careful—this is per-operation, not per-connection!

# Conservative starting point
work_mem = 64MB

# Calculate: (Total RAM - shared_buffers) / (max_connections * 3)
# Adjust based on query complexity

⚠️ work_mem Warning

Setting work_mem too high can cause out-of-memory conditions. A complex query might use work_mem multiple times. Monitor your system carefully when adjusting this parameter.
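To see why this warning matters, here is a rough worst-case estimate (the operations-per-query count is an assumption for illustration, not a PostgreSQL limit):

```python
# Worst-case work_mem usage if every connection runs a complex query at once.
work_mem_mb = 64              # the conservative setting above
max_connections = 200
sort_hash_ops_per_query = 3   # assumption: a moderately complex plan with
                              # several sorts/hashes, each using work_mem

worst_case_gb = work_mem_mb * max_connections * sort_hash_ops_per_query / 1024
print(f"{worst_case_gb:.1f} GB")  # 37.5 GB of potential work_mem usage
```

Even a "conservative" 64MB can theoretically demand tens of gigabytes under full load, which is why the per-operation semantics deserve respect.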

maintenance_work_mem

Memory for maintenance operations such as VACUUM and CREATE INDEX. Because only one such operation runs per session, it can safely be set much higher than work_mem.

# For most servers
maintenance_work_mem = 2GB
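The rules of thumb in this section can be rolled into a quick sizing sketch. This is illustrative only: these are the heuristics described above, not official formulas, and real values should be validated against your workload.

```python
def pg_memory_settings(total_ram_gb, max_connections=200):
    """Starting-point values based on the rules of thumb in this section."""
    # 25% of RAM, capped at 8GB on most systems
    shared_buffers = min(total_ram_gb * 0.25, 8)
    # 50-75% of RAM; we use the upper bound here
    effective_cache_size = total_ram_gb * 0.75
    # (Total RAM - shared_buffers) / (max_connections * 3)
    work_mem_mb = (total_ram_gb - shared_buffers) * 1024 / (max_connections * 3)
    return {
        "shared_buffers": f"{shared_buffers:.0f}GB",
        "effective_cache_size": f"{effective_cache_size:.0f}GB",
        "work_mem": f"{work_mem_mb:.0f}MB",
    }

print(pg_memory_settings(32))
```

For the 32GB server used in the examples above, this reproduces shared_buffers = 8GB and effective_cache_size = 24GB, with work_mem landing around 41MB before any workload-specific adjustment.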

Connection Tuning

max_connections

Each connection consumes memory and resources. Use connection pooling instead of increasing this blindly.

# Reasonable default
max_connections = 200

# With PgBouncer, you can keep this lower
max_connections = 100

Connection Pooling with PgBouncer

PgBouncer is essential for high-traffic applications:

# pgbouncer.ini
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = *
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 25

💡 Pool Mode Selection

  • Session mode: Connection held for the entire client session (legacy apps)
  • Transaction mode: Connection returned after each transaction (recommended for most apps)
  • Statement mode: Connection returned after each statement (limited use cases)
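The point of transaction pooling is that many client connections share a small server-side pool. A quick back-of-the-envelope check of the configuration above:

```python
# Multiplexing ratio implied by the pgbouncer.ini example above.
max_client_conn = 1000    # clients PgBouncer will accept
default_pool_size = 25    # server connections per database/user pair

clients_per_server_conn = max_client_conn / default_pool_size
print(clients_per_server_conn)  # 40.0 clients multiplexed per server connection
```

A 40:1 ratio is only viable in transaction mode, where a server connection is freed between transactions; in session mode, each client would hold a server connection for its whole session.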

Write-Ahead Log (WAL) Configuration

wal_level

Determines how much information is written to WAL:

# For replication and point-in-time recovery
wal_level = replica

# For logical replication
wal_level = logical

checkpoint_timeout and max_wal_size

Control checkpoint frequency and WAL storage:

# Increase for write-heavy workloads
checkpoint_timeout = 15min
max_wal_size = 4GB
min_wal_size = 1GB

Query Optimization

EXPLAIN ANALYZE

Always analyze slow queries:

EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT)
SELECT u.name, COUNT(o.id) as order_count
FROM users u
LEFT JOIN orders o ON u.id = o.user_id
WHERE u.created_at > '2024-01-01'
GROUP BY u.id, u.name
HAVING COUNT(o.id) > 5;

Key things to look for in EXPLAIN output:

  • Seq Scan: Full table scan—may need an index
  • Nested Loop: Acceptable for small result sets
  • Hash Join: Efficient for larger joins
  • Merge Join: Good for sorted data
  • Buffers: High "read" values indicate disk I/O

Query Rewriting Tips

-- AVOID: Subquery in WHERE
SELECT * FROM orders 
WHERE user_id IN (SELECT id FROM users WHERE status = 'active');

-- PREFER: JOIN
SELECT o.* FROM orders o
INNER JOIN users u ON o.user_id = u.id
WHERE u.status = 'active';

-- AVOID: Functions on indexed columns
SELECT * FROM users WHERE LOWER(email) = 'test@example.com';

-- PREFER: Functional index or store lowercase
CREATE INDEX idx_users_email_lower ON users (LOWER(email));

-- AVOID: SELECT *
SELECT * FROM users;

-- PREFER: Select only needed columns
SELECT id, name, email FROM users;

Indexing Strategies

B-tree Indexes (Default)

Best for equality and range queries:

-- Single column index
CREATE INDEX idx_users_email ON users(email);

-- Composite index (order matters!)
CREATE INDEX idx_orders_user_date ON orders(user_id, created_at);

-- Partial index (only index relevant rows)
CREATE INDEX idx_orders_active ON orders(status) 
WHERE status = 'pending';

GIN Indexes

For full-text search and array/JSONB columns:

-- Full-text search
CREATE INDEX idx_articles_search ON articles 
USING GIN(to_tsvector('english', title || ' ' || content));

-- JSONB containment
CREATE INDEX idx_data_gin ON events USING GIN(metadata);

BRIN Indexes

For large tables with naturally ordered data:

-- Perfect for time-series data
CREATE INDEX idx_logs_created ON logs USING BRIN(created_at);

-- Much smaller than B-tree, good for append-only tables

Index Maintenance

-- Check index usage
SELECT schemaname, tablename, indexname, idx_scan, idx_tup_read
FROM pg_stat_user_indexes
ORDER BY idx_scan ASC;

-- Find unused indexes
SELECT indexrelid::regclass as index,
       relid::regclass as table,
       pg_size_pretty(pg_relation_size(indexrelid)) as size
FROM pg_stat_user_indexes
WHERE idx_scan = 0 AND indexrelid::regclass::text NOT LIKE '%pkey';

VACUUM and Maintenance

Autovacuum Tuning

Autovacuum is critical for PostgreSQL health:

# More aggressive autovacuum for busy tables
autovacuum_vacuum_scale_factor = 0.1
autovacuum_analyze_scale_factor = 0.05
autovacuum_vacuum_cost_delay = 2ms
autovacuum_vacuum_cost_limit = 1000

# Per-table settings for very large tables
ALTER TABLE huge_table SET (
    autovacuum_vacuum_scale_factor = 0.01,
    autovacuum_analyze_scale_factor = 0.005
);

Bloat Detection

-- Check table bloat
SELECT schemaname, tablename,
       pg_size_pretty(pg_total_relation_size(schemaname || '.' || tablename)) as total_size,
       pg_size_pretty(pg_relation_size(schemaname || '.' || tablename)) as table_size,
       pg_size_pretty(pg_total_relation_size(schemaname || '.' || tablename) -
                      pg_relation_size(schemaname || '.' || tablename)) as index_and_toast_size,
       n_dead_tup,
       n_live_tup,
       round(100.0 * n_dead_tup / NULLIF(n_live_tup + n_dead_tup, 0), 2) as dead_pct
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC;
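The dead_pct column in the query above is simply the dead-tuple share of all tuples. Expressed in Python for clarity:

```python
def dead_pct(n_dead_tup, n_live_tup):
    """Percentage of dead tuples, mirroring the SQL expression above."""
    total = n_dead_tup + n_live_tup
    if total == 0:              # equivalent to SQL's NULLIF guard
        return None
    return round(100 * n_dead_tup / total, 2)

print(dead_pct(2500, 7500))  # 25.0
```

Tables that persistently sit well above autovacuum_vacuum_scale_factor worth of dead tuples are candidates for the more aggressive per-table settings shown earlier.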

Monitoring and Observability

Essential Queries

-- Active queries
SELECT pid, now() - pg_stat_activity.query_start AS duration, query, state
FROM pg_stat_activity
WHERE (now() - pg_stat_activity.query_start) > interval '5 seconds'
AND state != 'idle';

-- Table sizes
SELECT relname, pg_size_pretty(pg_total_relation_size(relid))
FROM pg_catalog.pg_statio_user_tables
ORDER BY pg_total_relation_size(relid) DESC;

-- Cache hit ratio (should be > 99%)
SELECT sum(heap_blks_hit) / (sum(heap_blks_hit) + sum(heap_blks_read)) as ratio
FROM pg_statio_user_tables;
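The cache hit ratio is just the share of block reads served from shared_buffers rather than disk. In Python terms:

```python
def cache_hit_ratio(heap_blks_hit, heap_blks_read):
    """Share of reads served from the buffer cache, mirroring the SQL above."""
    total = heap_blks_hit + heap_blks_read
    return heap_blks_hit / total if total else None

ratio = cache_hit_ratio(995_000, 5_000)
print(f"{ratio:.2%}")  # 99.50%
```

A ratio persistently below 99% on an OLTP workload usually means the working set does not fit in memory: revisit shared_buffers, or look for queries doing large sequential scans.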

pg_stat_statements

Essential extension for query performance analysis. It must be added to shared_preload_libraries (which requires a server restart) before the extension can be created:

-- Enable the extension
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- Top queries by total time
SELECT query, calls, total_exec_time, mean_exec_time, rows
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 20;

📊 Recommended Monitoring Tools

  • pgMonitor: Prometheus/Grafana stack for PostgreSQL
  • pg_stat_monitor: Enhanced pg_stat_statements
  • pgBadger: Log analyzer for detailed reports
  • PgHero: Performance dashboard with query analysis

Production Checklist

  • ✅ Memory settings tuned for your workload
  • ✅ Connection pooling implemented (PgBouncer)
  • ✅ Autovacuum properly configured
  • ✅ pg_stat_statements enabled
  • ✅ Appropriate indexes created
  • ✅ Unused indexes removed
  • ✅ Regular ANALYZE on frequently changed tables
  • ✅ Monitoring and alerting in place
  • ✅ Backup and recovery tested
  • ✅ Replication configured for HA

Conclusion

PostgreSQL performance tuning is an iterative process. Start with memory configuration, implement connection pooling, ensure proper indexing, and continuously monitor your database. Remember that every workload is different—always test changes in a staging environment before applying to production.

At VESTLABZ, our database engineers help organizations optimize their PostgreSQL deployments for maximum performance and reliability. Whether you're dealing with slow queries, scaling challenges, or migration projects, we're here to help.


David Lee

Senior Database Engineer at VESTLABZ
