Last updated: December 04, 2025
Author: Paul Namalomba
- SESKA Computational Engineer
- Software Developer
- PhD Candidate (Civil Engineering Spec. Computational and Applied Mechanics)
Contact: kabwenzenamalomba@gmail.com
Website: paulnamalomba.github.io
PostgreSQL 16 is an advanced open-source relational database with support for ACID transactions, JSON, full-text search, and extensibility. This guide covers installation, configuration, performance tuning, and administration using PowerShell on Windows. Power users need to understand connection pooling, query optimization, backup strategies, and replication for production deployments.
Quick start:
- Install via Chocolatey: choco install postgresql16
- Add C:\Program Files\PostgreSQL\16\bin to the system PATH
- Initialize a data directory: initdb -D "C:\PostgreSQL\data" -U postgres -W -E UTF8
- Start the service: Start-Service postgresql-x64-16
- Connect: psql -U postgres -d postgres
- Create a database: CREATE DATABASE mydb; (objects are created in the public schema by default)
Key settings in postgresql.conf (located in the data directory):
# Connection settings
max_connections = 100
shared_buffers = 4GB # 25% of system RAM
effective_cache_size = 12GB # 75% of system RAM
# Write-ahead log
wal_level = replica
max_wal_size = 2GB
min_wal_size = 1GB
checkpoint_completion_target = 0.9
# Query planner
random_page_cost = 1.1 # For SSD storage
effective_io_concurrency = 200 # For SSD storage
# Logging
log_destination = 'stderr'
logging_collector = on
log_directory = 'log'
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'
log_statement = 'all' # Verbose; 'ddl' or 'mod' is common in production
log_min_duration_statement = 1000 # Log queries > 1 second
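These can also be applied without hand-editing the file: ALTER SYSTEM writes overrides to postgresql.auto.conf, and pg_reload_conf() picks up reloadable settings (shared_buffers still requires a restart). A sketch using the values above:
# Apply reloadable settings from psql
psql -U postgres -c "ALTER SYSTEM SET random_page_cost = 1.1;"
psql -U postgres -c "ALTER SYSTEM SET checkpoint_completion_target = 0.9;"
psql -U postgres -c "SELECT pg_reload_conf();"
# shared_buffers changes need a full restart
psql -U postgres -c "ALTER SYSTEM SET shared_buffers = '4GB';"
Restart-Service postgresql-x64-16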
pg_hba.conf (host-based authentication):
# TYPE DATABASE USER ADDRESS METHOD
local all all scram-sha-256
host all all 127.0.0.1/32 scram-sha-256
host all all ::1/128 scram-sha-256
host all all 0.0.0.0/0 scram-sha-256 # Remote access; restrict to trusted subnets in production
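pg_hba.conf changes take effect only on reload, and the active rules can be inspected from SQL afterwards. A minimal sketch, assuming the data directory above:
# Back up pg_hba.conf before editing, then reload without a restart
Copy-Item "C:\PostgreSQL\data\pg_hba.conf" "C:\PostgreSQL\data\pg_hba.conf.bak"
psql -U postgres -c "SELECT pg_reload_conf();"
# Confirm which rules are active
psql -U postgres -c "SELECT line_number, type, database, user_name, address, auth_method FROM pg_hba_file_rules;"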
Best Practices:
- Use connection pooling (PgBouncer) for applications with >100 connections
- Enable query logging for slow queries (>1000ms)
- Implement regular VACUUM ANALYZE for table statistics
- Use prepared statements to prevent SQL injection
- Create indexes on foreign keys and frequently queried columns
- Set appropriate work_mem per connection (start with 4MB; see the sketch below)
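On the work_mem point, it can be set per role or per session rather than globally, which keeps one reporting query from inflating every connection's budget. A sketch (role and table names follow the examples in this guide):
# Per-role setting, applied at each new connection (appuser is created below)
psql -U postgres -c "ALTER ROLE appuser SET work_mem = '4MB';"
# Per-session override for a single expensive sort
psql -U postgres -d production_db -c "SET work_mem = '64MB'; SELECT * FROM orders ORDER BY created_at DESC LIMIT 100;"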
Security hardening checklist:
- Use scram-sha-256 password encryption; avoid trust and md5 methods
- Enable TLS (ssl = on)
- Do not use the postgres superuser for applications
- Restrict listen_addresses (e.g., listen_addresses = 'localhost') or bind to specific IPs; use firewall rules
- Use the pgaudit extension for compliance requirements and to track DDL/DML operations
- Use pgcrypto for column-level encryption; for data-at-rest, transparent data encryption (TDE) is available in some PostgreSQL distributions
Create Secure Application User:
CREATE ROLE appuser WITH LOGIN PASSWORD 'SecurePassword123!';
CREATE DATABASE appdb OWNER appuser;
GRANT CONNECT ON DATABASE appdb TO appuser;
\c appdb
GRANT USAGE ON SCHEMA public TO appuser;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO appuser;
REVOKE CREATE ON SCHEMA public FROM PUBLIC;
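A quick check that the grants landed: has_schema_privilege reports effective rights for a role. Note that appuser owns appdb, so on PostgreSQL 15+ it can retain CREATE on public as the implicit schema owner despite the REVOKE:
# Verify effective schema privileges as the application user
$env:PGPASSWORD = 'SecurePassword123!'
psql -U appuser -d appdb -c "SELECT has_schema_privilege('appuser', 'public', 'USAGE') AS has_usage, has_schema_privilege('appuser', 'public', 'CREATE') AS has_create;"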
Download, install, and perform initial configuration of PostgreSQL 16 on Windows with service setup and data directory initialization.
# Download PostgreSQL 16 installer
$installerUrl = "https://get.enterprisedb.com/postgresql/postgresql-16.1-1-windows-x64.exe"
$installerPath = "$env:TEMP\postgresql-16-installer.exe"
Invoke-WebRequest -Uri $installerUrl -OutFile $installerPath
# Install silently (requires admin)
Start-Process -FilePath $installerPath -ArgumentList @(
'--mode', 'unattended',
'--superpassword', 'StrongPassword123!',
'--serverport', '5432',
'--datadir', 'C:\PostgreSQL\data',
'--servicename', 'postgresql-x64-16'
) -Wait
# Add to PATH
$pgBinPath = "C:\Program Files\PostgreSQL\16\bin"
$machinePath = [Environment]::GetEnvironmentVariable("PATH", "Machine")
[Environment]::SetEnvironmentVariable("PATH", "$machinePath;$pgBinPath", "Machine")
$env:PATH += ";$pgBinPath"
# Verify installation
psql --version
# Start service
Start-Service postgresql-x64-16
# Check service status
Get-Service postgresql-x64-16 | Select-Object Name, Status, StartType
Create databases, users, and schemas with proper permission grants and role management.
# Set password environment variable for non-interactive authentication
$env:PGPASSWORD = "StrongPassword123!"
# Connect and create database
psql -U postgres -c "CREATE DATABASE production_db WITH ENCODING 'UTF8' LC_COLLATE='en_US.UTF-8' LC_CTYPE='en_US.UTF-8';"
# Create roles
psql -U postgres -c "CREATE ROLE admin_role;"
psql -U postgres -c "CREATE ROLE readonly_role;"
psql -U postgres -c "CREATE ROLE readwrite_role;"
# Create users
psql -U postgres -c "CREATE USER admin_user WITH PASSWORD 'AdminPass123!' IN ROLE admin_role;"
psql -U postgres -c "CREATE USER app_user WITH PASSWORD 'AppPass123!' IN ROLE readwrite_role;"
psql -U postgres -c "CREATE USER report_user WITH PASSWORD 'ReportPass123!' IN ROLE readonly_role;"
# Grant database access
psql -U postgres -c "GRANT ALL PRIVILEGES ON DATABASE production_db TO admin_role;"
psql -U postgres -c "GRANT CONNECT ON DATABASE production_db TO readwrite_role;"
psql -U postgres -c "GRANT CONNECT ON DATABASE production_db TO readonly_role;"
# Grant schema permissions
psql -U postgres -d production_db -c @"
GRANT USAGE ON SCHEMA public TO readwrite_role;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO readwrite_role;
GRANT USAGE, SELECT ON ALL SEQUENCES IN SCHEMA public TO readwrite_role;
GRANT USAGE ON SCHEMA public TO readonly_role;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO readonly_role;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO readwrite_role;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO readonly_role;
"@
# List users and roles
psql -U postgres -c "\du"
Perform full database backups, compressed backups, and point-in-time recovery setup with automated scheduling.
# Full database dump (plain SQL)
pg_dump -U postgres -d production_db -F p -f "C:\Backups\production_db_$(Get-Date -Format 'yyyyMMdd_HHmmss').sql"
# Compressed custom format dump (recommended for large databases)
pg_dump -U postgres -d production_db -F c -f "C:\Backups\production_db_$(Get-Date -Format 'yyyyMMdd_HHmmss').dump"
# Dump specific schema
pg_dump -U postgres -d production_db -n public -F c -f "C:\Backups\public_schema_$(Get-Date -Format 'yyyyMMdd_HHmmss').dump"
# Dump specific tables
pg_dump -U postgres -d production_db -t users -t orders -F c -f "C:\Backups\specific_tables_$(Get-Date -Format 'yyyyMMdd_HHmmss').dump"
# Dump all databases
pg_dumpall -U postgres -f "C:\Backups\all_databases_$(Get-Date -Format 'yyyyMMdd_HHmmss').sql"
# Restore from plain SQL dump
psql -U postgres -d production_db -f "C:\Backups\production_db_20241204_120000.sql"
# Restore from custom format dump
pg_restore -U postgres -d production_db -F c -c --if-exists "C:\Backups\production_db_20241204_120000.dump"
# Parallel restore (faster for large databases)
pg_restore -U postgres -d production_db -F c -j 4 "C:\Backups\production_db_20241204_120000.dump"
# Create scheduled backup task (runs daily at 2 AM)
$action = New-ScheduledTaskAction -Execute "PowerShell.exe" -Argument @"
-Command "pg_dump -U postgres -d production_db -F c -f C:\Backups\production_db_`$(Get-Date -Format 'yyyyMMdd').dump"
"@
$trigger = New-ScheduledTaskTrigger -Daily -At 2am
$principal = New-ScheduledTaskPrincipal -UserID "NT AUTHORITY\SYSTEM" -LogonType ServiceAccount -RunLevel Highest
Register-ScheduledTask -Action $action -Trigger $trigger -Principal $principal -TaskName "PostgreSQL Daily Backup" -Description "Daily PostgreSQL backup"
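The task runs as SYSTEM, so pg_dump needs non-interactive credentials for that account (a pgpass.conf under the account's %APPDATA%\postgresql, or PGPASSWORD set in the task). Dumps also accumulate quickly; a minimal retention sketch (the path and 14-day window are assumptions, adjust to your policy):
# Prune old dumps; run after the daily backup or as its own scheduled task
$retentionDays = 14
Get-ChildItem "C:\Backups" -Filter "production_db_*.dump" |
    Where-Object { $_.LastWriteTime -lt (Get-Date).AddDays(-$retentionDays) } |
    Remove-Item -Force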
Execute queries, analyze query plans, and monitor query performance with explain plans and timing.
# Execute simple query
psql -U postgres -d production_db -c "SELECT * FROM users LIMIT 10;"
# Execute query from file
psql -U postgres -d production_db -f "C:\Scripts\queries.sql"
# Get query execution time
psql -U postgres -d production_db -c "\timing" -c "SELECT COUNT(*) FROM large_table;"
# Analyze query plan
psql -U postgres -d production_db -c "EXPLAIN ANALYZE SELECT * FROM users WHERE email = 'user@example.com';"
# Get table sizes
psql -U postgres -d production_db -c @"
SELECT
schemaname,
tablename,
pg_size_pretty(pg_total_relation_size(schemaname||'.'||tablename)) AS total_size,
pg_size_pretty(pg_relation_size(schemaname||'.'||tablename)) AS table_size,
pg_size_pretty(pg_indexes_size(schemaname||'.'||tablename)) AS indexes_size
FROM pg_tables
WHERE schemaname = 'public'
ORDER BY pg_total_relation_size(schemaname||'.'||tablename) DESC;
"@
# Get index usage statistics
psql -U postgres -d production_db -c @"
SELECT
schemaname,
tablename,
indexname,
idx_scan,
idx_tup_read,
idx_tup_fetch,
pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
FROM pg_stat_user_indexes
ORDER BY idx_scan DESC;
"@
# Find slow queries (requires pg_stat_statements extension)
psql -U postgres -d production_db -c @"
SELECT
query,
calls,
total_exec_time,
mean_exec_time,
max_exec_time
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;
"@
Create, analyze, and maintain indexes for optimal query performance with different index types.
# Create B-tree index (default)
psql -U postgres -d production_db -c "CREATE INDEX idx_users_email ON users(email);"
# Create unique index
psql -U postgres -d production_db -c "CREATE UNIQUE INDEX idx_users_username ON users(username);"
# Create composite index
psql -U postgres -d production_db -c "CREATE INDEX idx_orders_user_date ON orders(user_id, created_at);"
# Create partial index (conditional)
psql -U postgres -d production_db -c "CREATE INDEX idx_orders_active ON orders(user_id) WHERE status = 'active';"
# Create GIN index for JSON columns
psql -U postgres -d production_db -c "CREATE INDEX idx_products_attributes ON products USING GIN(attributes);"
# Create GiST index for full-text search
psql -U postgres -d production_db -c "CREATE INDEX idx_articles_search ON articles USING GiST(to_tsvector('english', title || ' ' || content));"
# Rebuild index (remove bloat)
psql -U postgres -d production_db -c "REINDEX INDEX CONCURRENTLY idx_users_email;"
# Rebuild all indexes on table
psql -U postgres -d production_db -c "REINDEX TABLE CONCURRENTLY users;"
# Get unused indexes
psql -U postgres -d production_db -c @"
SELECT
schemaname,
tablename,
indexname,
pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
FROM pg_stat_user_indexes
WHERE idx_scan = 0
ORDER BY pg_relation_size(indexrelid) DESC;
"@
# Drop unused indexes
psql -U postgres -d production_db -c "DROP INDEX CONCURRENTLY idx_old_unused_index;"
Perform routine maintenance tasks including VACUUM, ANALYZE, and statistics updates for optimal database health.
# Manual vacuum (non-blocking)
psql -U postgres -d production_db -c "VACUUM;"
# Vacuum specific table
psql -U postgres -d production_db -c "VACUUM VERBOSE ANALYZE users;"
# Full vacuum (locks table, reclaims more space)
psql -U postgres -d production_db -c "VACUUM FULL users;"
# Analyze statistics for query planner
psql -U postgres -d production_db -c "ANALYZE;"
# Reindex database
psql -U postgres -d production_db -c "REINDEX DATABASE production_db;"
# Check for bloat
psql -U postgres -d production_db -c @"
SELECT
schemaname,
tablename,
pg_size_pretty(pg_total_relation_size(schemaname||'.'||tablename)) AS size,
n_dead_tup,
n_live_tup,
ROUND(100.0 * n_dead_tup / NULLIF(n_live_tup + n_dead_tup, 0), 2) AS dead_tuple_percent
FROM pg_stat_user_tables
WHERE n_dead_tup > 0
ORDER BY n_dead_tup DESC;
"@
# Enable autovacuum (should be on by default)
psql -U postgres -c "ALTER SYSTEM SET autovacuum = on;"
psql -U postgres -c "SELECT pg_reload_conf();"
# Configure autovacuum thresholds for specific table
psql -U postgres -d production_db -c @"
ALTER TABLE users SET (
autovacuum_vacuum_scale_factor = 0.1,
autovacuum_analyze_scale_factor = 0.05
);
"@
Connection Refused:
- Error: "could not connect to server: Connection refused"
- Check service: Get-Service postgresql-x64-16
- Start service: Start-Service postgresql-x64-16
- Check port: netstat -an | Select-String "5432"
- Check logs: Get-Content "C:\PostgreSQL\data\log\postgresql-*.log" -Tail 50
Authentication Failed:
- Error: "FATAL: password authentication failed"
- Check pg_hba.conf: Ensure correct authentication method
- Reset password: psql -U postgres -c "ALTER USER postgres PASSWORD 'NewPassword123!';"
- Reload config: psql -U postgres -c "SELECT pg_reload_conf();"
Out of Memory:
- Error: "out of memory" or "could not resize shared memory segment"
- Check shared_buffers: Reduce if >25% of RAM
- Check work_mem: Reduce per-connection memory
- Increase system RAM or reduce max_connections
Lock Timeouts:
- Check blocking queries:
psql -U postgres -d production_db -c @"
SELECT
pid,
usename,
pg_blocking_pids(pid) AS blocked_by,
query
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) > 0;
"@
psql -U postgres -c "SELECT pg_terminate_backend(12345);"Common Logs:
- PostgreSQL logs: C:\PostgreSQL\data\log\postgresql-*.log
- Windows Event Viewer: Application logs for PostgreSQL service
- Connection logs: Enable log_connections = on in postgresql.conf
Memory Configuration:
# For 16GB RAM system
shared_buffers = 4GB # 25% of RAM
effective_cache_size = 12GB # 75% of RAM
maintenance_work_mem = 1GB # For VACUUM, CREATE INDEX
work_mem = 16MB # Per sort/hash operation
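work_mem is per sort/hash operation, not per connection, so worst-case usage scales with connections and plan nodes: with max_connections = 100 and work_mem = 16MB, even one sort per query can consume ~1.6GB on top of shared_buffers. To review the effective values on a running instance:
# Show the effective memory settings on the running server
psql -U postgres -c "SELECT name, setting, unit FROM pg_settings WHERE name IN ('shared_buffers', 'work_mem', 'maintenance_work_mem', 'effective_cache_size', 'max_connections');"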
Connection Pooling: Install PgBouncer for connection pooling:
# Install PgBouncer
choco install pgbouncer
# Configure pgbouncer.ini
@"
[databases]
production_db = host=localhost port=5432 dbname=production_db
[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = scram-sha-256
auth_file = C:\PostgreSQL\pgbouncer\userlist.txt
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 25
"@ | Set-Content "C:\PostgreSQL\pgbouncer\pgbouncer.ini"
# Start PgBouncer service
net start pgbouncer
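PgBouncer also needs the auth_file populated; with auth_type = scram-sha-256, the stored verifier can be copied from pg_shadow (a sketch, assuming the app_user created earlier). Applications then connect to port 6432 instead of 5432:
# Copy the SCRAM verifier for app_user into userlist.txt (run as superuser)
$entry = psql -U postgres -Atc 'SELECT format(''"%s" "%s"'', usename, passwd) FROM pg_shadow WHERE usename = ''app_user'';'
$entry | Set-Content "C:\PostgreSQL\pgbouncer\userlist.txt"
# Connect through the pooler
$env:PGPASSWORD = 'AppPass123!'
psql -h 127.0.0.1 -p 6432 -U app_user -d production_db -c "SELECT 1;"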
Query Optimization:
# Enable pg_stat_statements extension
psql -U postgres -d production_db -c "CREATE EXTENSION IF NOT EXISTS pg_stat_statements;"
# Add to postgresql.conf
@"
shared_preload_libraries = 'pg_stat_statements'
pg_stat_statements.max = 10000
pg_stat_statements.track = all
"@ | Add-Content "C:\PostgreSQL\data\postgresql.conf"
# Restart service
Restart-Service postgresql-x64-16
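After the restart, confirm the library actually loaded before relying on the view:
# Verify pg_stat_statements is active
psql -U postgres -c "SHOW shared_preload_libraries;"
psql -U postgres -d production_db -c "SELECT count(*) FROM pg_stat_statements;"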
Monitoring Commands:
# Active connections
psql -U postgres -c "SELECT count(*) FROM pg_stat_activity;"
# Database size
psql -U postgres -c "SELECT pg_database.datname, pg_size_pretty(pg_database_size(pg_database.datname)) AS size FROM pg_database;"
# Cache hit ratio (should be >99%)
psql -U postgres -d production_db -c @"
SELECT
sum(heap_blks_read) as heap_read,
sum(heap_blks_hit) as heap_hit,
ROUND(sum(heap_blks_hit)::numeric / NULLIF(sum(heap_blks_hit) + sum(heap_blks_read), 0), 4) AS cache_hit_ratio
FROM pg_statio_user_tables;
"@
# Transaction rate
psql -U postgres -d production_db -c "SELECT xact_commit + xact_rollback AS total_transactions FROM pg_stat_database WHERE datname = 'production_db';"