
Best Practices for Running PostgreSQL in Docker (With Examples)

Running PostgreSQL inside Docker containers can simplify development, deployment, and management of database applications. Containers offer consistency and portability, but a handful of practices make the difference between a fragile setup and one that is efficient, secure, and easy to maintain.
1. Use Specific PostgreSQL Version Tags
Always pin your PostgreSQL Docker image to a specific version tag. Instead of choosing a broad "latest" image, specifying an exact version helps keep your environment consistent. Using the same PostgreSQL version across all deployment stages can prevent compatibility issues.
Example:
docker run -d --name my-postgres -e POSTGRES_PASSWORD=mysecret postgres:16.2
This ensures you're running exactly PostgreSQL 16.2, avoiding unexpected differences between environments. (Note: the official image refuses to start without POSTGRES_PASSWORD or an equivalent option, so the examples throughout include one; replace mysecret with a real password.)
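The same pinning applies in Docker Compose; here is a minimal sketch (the password value is a placeholder):
services:
  db:
    image: postgres:16.2   # pinned; upgrade deliberately, never via "latest"
    environment:
      POSTGRES_PASSWORD: mysecret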
2. Optimize Container Resource Allocation
PostgreSQL coexists better with other workloads when the container has explicit memory and CPU limits. Reasonable limits keep a busy database from starving the host and improve overall stability.
Example:
docker run -d --name optimized-postgres -e POSTGRES_PASSWORD=mysecret --memory="2g" --cpus="1.5" postgres:16.2
Here, the container is limited to using 2GB of RAM and around 1.5 CPU cores, balancing performance and resource consumption.
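You can confirm the limits are enforced with docker stats, and it is worth aligning PostgreSQL's own settings with the container ceiling; the shared_buffers value here is only an illustrative starting point, not a recommendation:
docker stats optimized-postgres --no-stream
docker run -d --name tuned-postgres --memory="2g" --cpus="1.5" \
  -e POSTGRES_PASSWORD=mysecret \
  postgres:16.2 -c shared_buffers=512MB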
3. Automate Database Setup with Init Scripts
The official Postgres Docker image runs any SQL scripts placed in the /docker-entrypoint-initdb.d/ directory, but only when the container initializes an empty data directory for the first time. This makes it easy to set up schemas or seed data for new deployments.
Create a local file (e.g., init-db.sql):
CREATE DATABASE mydb;
CREATE USER myuser WITH ENCRYPTED PASSWORD 'mypassword';
GRANT ALL PRIVILEGES ON DATABASE mydb TO myuser;
-- On PostgreSQL 15+, CREATE on the public schema is no longer granted to
-- everyone; making the user the database owner is the simplest fix:
ALTER DATABASE mydb OWNER TO myuser;
Launch the container and mount the script:
docker run -d --name pg-init \
  -e POSTGRES_PASSWORD=mysecret \
  -v "$(pwd)/init-db.sql":/docker-entrypoint-initdb.d/init-db.sql \
  postgres:16.2
On first startup, PostgreSQL executes your SQL commands automatically; later starts with an existing data directory skip the scripts.
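To confirm the script ran, check the startup logs and connect as the new role (pg-init is the container name from the run command above):
docker logs pg-init 2>&1 | grep init-db.sql
docker exec -it pg-init psql -U myuser -d mydb -c '\conninfo'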
4. Enable WAL (Write-Ahead Logging) Archiving
Write-Ahead Logging (WAL) records every change before it is applied to the data files. Archiving those WAL segments enables point-in-time recovery: you can restore the database to a specific moment, not just to the last base backup.
Create a custom configuration file (e.g., my-postgresql.conf) with WAL settings:
listen_addresses = '*'   # the stock image sets this; a minimal custom file must restore it
wal_level = replica
archive_mode = on
# the target directory must exist and be writable by the postgres user
archive_command = 'cp %p /var/lib/postgresql/data/wal_archive/%f'
Mount the file outside the data directory and point PostgreSQL at it (a file bind-mounted into /var/lib/postgresql/data would make that directory non-empty and stop initdb from running):
docker run -d --name pg-wal \
  -e POSTGRES_PASSWORD=mysecret \
  -v "$(pwd)/my-postgresql.conf":/etc/postgresql/postgresql.conf \
  postgres:16.2 -c 'config_file=/etc/postgresql/postgresql.conf'
Archived WAL segments are the foundation of point-in-time recovery and replication setups.
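To verify archiving end to end, create the archive directory, force a WAL segment switch, and list the archive (pg-wal is the container name from above; pg_switch_wal() closes the current segment so the archiver copies it):
docker exec pg-wal install -d -o postgres -g postgres /var/lib/postgresql/data/wal_archive
docker exec pg-wal psql -U postgres -c "SELECT pg_switch_wal();"
docker exec pg-wal ls /var/lib/postgresql/data/wal_archive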
5. Secure Connections Using SSL/TLS
Encrypting PostgreSQL connections with SSL/TLS certificates protects data in transit from eavesdropping and tampering.
Place your server.key and server.crt certificate files in a local certs/ directory and update postgresql.conf to include:
ssl = on
ssl_cert_file = '/var/lib/postgresql/certs/server.crt'
ssl_key_file = '/var/lib/postgresql/certs/server.key'
Launch the container with the certificates mounted (a certs/ subdirectory is used so the mount doesn't shadow the rest of /var/lib/postgresql):
docker run -d --name pg-ssl \
  -e POSTGRES_PASSWORD=mysecret \
  -v "$(pwd)/certs/":/var/lib/postgresql/certs/ \
  -v "$(pwd)/postgresql.conf":/etc/postgresql/postgresql.conf \
  postgres:16.2 -c 'config_file=/etc/postgresql/postgresql.conf'
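PostgreSQL refuses to start with a world-readable private key, so tighten permissions before launching (the postgres user is UID 999 in the official Debian-based image). Afterwards you can verify encryption by forcing a TCP connection with sslmode=require:
chmod 600 certs/server.key
sudo chown 999:999 certs/server.key certs/server.crt
docker exec -e PGPASSWORD=mysecret pg-ssl \
  psql "host=127.0.0.1 user=postgres sslmode=require" \
  -c "SELECT ssl FROM pg_stat_ssl WHERE pid = pg_backend_pid();"
A returned value of t confirms the session is encrypted.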
6. Use Alpine Postgres Images for Smaller Containers
Alpine images offer a streamlined PostgreSQL variant. They produce significantly smaller images, which saves disk space, speeds up pulls and deployments, and shrinks the attack surface.
Example using Alpine variant:
docker run -d -e POSTGRES_PASSWORD=mysecret postgres:16.2-alpine
Although smaller, Alpine-based images provide full PostgreSQL functionality. Note that they use musl libc rather than glibc, so locale and collation behavior can differ from the Debian-based images; they are best suited to lightweight deployments.
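You can compare the two variants' sizes yourself:
docker pull postgres:16.2
docker pull postgres:16.2-alpine
docker image ls postgres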
7. Add Container Health Checks
A health check lets Docker probe whether Postgres is actually accepting connections. Docker itself only records and reports the unhealthy status, but orchestrators such as Swarm can restart or replace containers based on it, and Compose can gate dependent services on it.
Include a health check using the pg_isready command, either in a Dockerfile (shown here) or at run time (shown just below):
HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
CMD pg_isready -U postgres || exit 1
Running this check every 30 seconds allows Docker to confirm your database remains operational and responsive.
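The same check can be attached at run time, without a custom Dockerfile, and inspected afterwards:
docker run -d --name pg-health \
  -e POSTGRES_PASSWORD=mysecret \
  --health-cmd="pg_isready -U postgres || exit 1" \
  --health-interval=30s --health-timeout=5s --health-retries=3 \
  postgres:16.2
docker inspect --format '{{.State.Health.Status}}' pg-health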
8. Network Isolation to Improve Security
Restricting container network access prevents unauthorized services from connecting to PostgreSQL. This protects your database from unnecessary exposure and helps control incoming traffic.
Configure PostgreSQL to listen only on a specific interface in postgresql.conf:
listen_addresses = '172.18.0.2'
Run the PostgreSQL container on a dedicated Docker network with a fixed subnet, mounting your postgresql.conf as in the earlier examples:
docker network create --subnet 172.18.0.0/16 my_pg_net
docker run -d --name pg-isolated --network my_pg_net --ip 172.18.0.2 \
  -e POSTGRES_PASSWORD=mysecret \
  -v "$(pwd)/postgresql.conf":/etc/postgresql/postgresql.conf \
  postgres:16.2 -c 'config_file=/etc/postgresql/postgresql.conf'
This ensures only approved containers on a specific network can access the database.
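To see the isolation in action, connect from a throwaway container attached to the same network; containers outside my_pg_net have no route to the database:
docker run --rm -it --network my_pg_net postgres:16.2 \
  psql -h 172.18.0.2 -U postgres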
9. Maximize Observability with Extensions
Using PostgreSQL extensions enhances database visibility and performance monitoring. Common extensions such as pg_stat_statements track query patterns and usage.
pg_stat_statements must be preloaded via shared_preload_libraries before it can collect anything. Put the CREATE EXTENSION statement in an init script (e.g., enable-stats.sql):
CREATE EXTENSION pg_stat_statements;
Mount this script into the Docker init directory and pass the preload setting at launch, as shown below, so query statistics are always available.
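Putting it together, assuming the statement above is saved as enable-stats.sql, launch with the preload setting and then query for the most expensive statements:
docker run -d --name pg-stats \
  -e POSTGRES_PASSWORD=mysecret \
  -v "$(pwd)/enable-stats.sql":/docker-entrypoint-initdb.d/enable-stats.sql \
  postgres:16.2 -c 'shared_preload_libraries=pg_stat_statements'
docker exec pg-stats psql -U postgres -c \
  "SELECT query, calls, total_exec_time FROM pg_stat_statements ORDER BY total_exec_time DESC LIMIT 5;"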
10. Docker Secrets to Protect Sensitive Data
Do not rely solely on environment variables for passwords and other sensitive values; they are easy to expose via docker inspect or process listings. Docker secrets instead mount each value as a file under /run/secrets inside the container. Note that docker secret create requires Swarm mode (docker swarm init).
First, create the secret:
echo "mypassword" | docker secret create pg_passwd -
Reference the secret from your deployment (e.g., a Compose file used with docker stack deploy):
services:
  db:
    image: postgres:16.2
    environment:
      POSTGRES_PASSWORD_FILE: /run/secrets/pg_passwd
    secrets:
      - pg_passwd

secrets:
  pg_passwd:
    external: true
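If you are not running Swarm, plain Docker Compose supports file-based secrets that look identical from inside the container (./pg_passwd.txt is a local file containing only the password):
services:
  db:
    image: postgres:16.2
    environment:
      POSTGRES_PASSWORD_FILE: /run/secrets/pg_passwd
    secrets:
      - pg_passwd

secrets:
  pg_passwd:
    file: ./pg_passwd.txt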
By following these best practices, you can ensure your PostgreSQL containers are stable, secure, and easy to manage. If you're looking for a hassle-free way to deploy and manage Dockerized applications, check out Sliplane.io. Our platform simplifies container hosting, so you can focus on building great applications without worrying about infrastructure.