Caution
Ready for public testing — NOT yet recommended for production or critical data
This project is available for public testing and evaluation, but it remains immature and is not yet recommended for production use. The implementation has been designed to meet production standards; however, it needs additional testing, stabilization, and a small set of features before we can recommend it for critical data.
Key points:
- The primary limitation is overall maturity — more real-world testing and validation are required.
- Does not yet support multiple DHIS2 environments on the same server out of the box.
- Direct database access and advanced operations require technical knowledge.
- Tuning and optimisation (PostgreSQL, DHIS2, and container resource allocation) are needed per deployment.
We explicitly do NOT recommend using this for production at this time because of the project's current maturity level. With continued development, testing (with feedback from the community), and configuration, the project is intended to meet production requirements.
This repository provides a Docker-based deployment for the DHIS2 application, designed for both local development/testing and secure production implementations. It leverages Docker Compose to orchestrate DHIS2, PostgreSQL, Traefik (as a reverse proxy), and an optional monitoring stack. Facilities are also provided for backup and restore of the database and file storage.
- Quick Start
- Deployment For Production
- Advanced Usage
- Contributing to this project
- Further Documentation
This section is for users who want to quickly set up and test the DHIS2 application on their local machine.
```sh
git clone https://github.com/dhis2/docker-deployment.git && \
cd docker-deployment && \
export GEN_APP_HOSTNAME=dhis2-127-0-0-1.nip.io && \
export GEN_LETSENCRYPT_ACME_EMAIL=whatever@dhis2.org && \
./scripts/generate-env.sh && \
make launch
```

Open http://dhis2-127-0-0-1.nip.io in your favorite browser.
Note
Your browser will warn you that the certificate is not trusted. This is expected, as it is a self-signed certificate.
Note
The default admin credentials are available in the .env file.
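If you want to pull them out quickly from the command line, something like the following should work (the exact variable names in the generated file are an assumption):

```sh
# Print any admin-related variables from the generated .env (variable names are an assumption)
grep -i 'admin' .env
```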
This section is for users planning to deploy DHIS2 in a production environment.
Before deploying to production, ensure you have:
- A dedicated host or virtual machine with Docker and Docker Compose installed.
- A fully qualified domain name (FQDN) for your DHIS2 instance.
- A valid email address for Let's Encrypt certificate management.
- Appropriate firewall rules configured for ports 80 and 443.
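As an illustration of the last point, on a host that uses `ufw` the required ports could be opened like this (adjust to whatever firewall tooling your environment actually uses):

```sh
# Illustrative only: allow inbound HTTP and HTTPS with ufw
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw enable
```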
The following environment variables are required to configure the application.
```sh
# Provide the FQDN for your DHIS2 instance.
export GEN_APP_HOSTNAME=<your-domain.com>

# A valid email address is required for Let's Encrypt certificate management.
export GEN_LETSENCRYPT_ACME_EMAIL=your@email.com
```

Generate a new `.env` file by executing the following command:

```sh
./scripts/generate-env.sh
```

For production, carefully review and configure all environment variables in your `.env` file. Refer to the comprehensive environment variables documentation for details on each variable. It is recommended not to change the generated values of the password variables unless you need to do so to align with your organization's security policies or existing components.
Once the environment is configured, launch the application using Docker Compose:
```sh
docker compose up
```

Open https://<your-domain.com> in your favorite browser.
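For a long-running production deployment you will usually want to run in the background; the standard Docker Compose flags apply:

```sh
# Start in detached mode and follow the logs
docker compose up -d
docker compose logs -f
```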
Note
The first time you launch the application, it will initialise with a blank database. The default admin credentials are available in the .env file. If you have an existing database, you can restore it following the Backup and Restore section, under Advanced Usage, below.
For production environments, careful configuration of PostgreSQL is critical for performance and stability.
Custom configuration for PostgreSQL should be done by adding `.conf` files to the `./config/postgresql/conf.d/` directory. Create new files for specific settings rather than modifying existing ones or `config/postgresql/postgresql.conf`.
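As a minimal sketch, a custom settings file might look like the following; the file name and values are purely illustrative and should be sized for your host:

```sh
# Illustrative only: drop a tuning file into conf.d (file name and values are assumptions)
cat > ./config/postgresql/conf.d/tuning.conf <<'EOF'
shared_buffers = 2GB
effective_cache_size = 6GB
work_mem = 32MB
EOF
```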
Changes to these files normally take effect after a restart of the PostgreSQL container. Alternatively, to apply changes without restarting the container, you can execute (inside the PostgreSQL container):

```sql
SELECT pg_reload_conf();
```

Deployments can benefit from additional services provided by compose overlays.
To enable the Traefik dashboard for local monitoring of your reverse proxy, launch the application with the following command:
```sh
docker compose -f docker-compose.yml -f overlays/traefik-dashboard/docker-compose.yml up
```

Glowroot is an APM (Application Performance Monitoring) tool that can be enabled to monitor the DHIS2 application's performance in production.
```sh
docker compose -f docker-compose.yml -f overlays/glowroot/docker-compose.yml up
```

Robust backup and restore procedures are essential for production. Backups are stored in the `./backups` directory. We support backup and restore of both the database and the file storage.
A complete backup of both the database and file storage can be created by executing:
```sh
make backup
```

This command will create two files in the `./backups` directory: one for the database and one for the file storage.
- Backup Database: The database can be backed up in `custom` (default) or `plain` format, controlled by the `POSTGRES_BACKUP_FORMAT` environment variable.

  ```sh
  make backup-database
  ```

  This creates a file in `./backups` named `$TIMESTAMP.pgc` (custom) or `$TIMESTAMP.sql.gz` (plain). Consult the PostgreSQL documentation for more details.

- Backup File Storage:

  ```sh
  make backup-file-storage
  ```
By default, backups are automatically named with a timestamp in the format `YYYY-MM-DD_HH-MM-SS_UTC`. You can override this by setting the `BACKUP_TIMESTAMP` environment variable when running backup commands:
```sh
BACKUP_TIMESTAMP=<custom-backup-timestamp> make backup
```

The restore process relies on the `DB_RESTORE_FILE` and `FILE_STORAGE_RESTORE_SOURCE_DIR` environment variables, which must be set to the path of the backup file/directory to restore (without the `./backups` prefix).
A complete restore of both database and file storage can be done by executing:
```sh
make restore
```

- Restore Database: Set the `DB_RESTORE_FILE` environment variable to the backup file name.

  ```sh
  make restore-database
  ```

- Restore File Storage: Set the `FILE_STORAGE_RESTORE_SOURCE_DIR` environment variable to the backup directory name.

  ```sh
  make restore-file-storage
  ```
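For example, restoring a specific backup in one go might look like this (the timestamp, file name, and directory name are illustrative and should match what is actually present under `./backups`):

```sh
# Illustrative only: point the restore variables at an existing backup and run a full restore
DB_RESTORE_FILE=2024-01-01_00-00-00_UTC.pgc \
FILE_STORAGE_RESTORE_SOURCE_DIR=2024-01-01_00-00-00_UTC \
make restore
```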
The monitoring stack is crucial for understanding the health and performance of your production DHIS2 deployment. It includes Grafana, Loki, and Prometheus for logs and metrics collection.
The Docker Loki Driver plugin is required to forward container logs to Loki. Install it using:
```sh
./scripts/install-loki-driver.sh
```

Deploy the monitoring stack using:
```sh
docker compose -f docker-compose.yml -f overlays/monitoring/docker-compose.yml up
```

This deploys:
- Grafana: A web-based monitoring and visualization platform with preloaded dashboards for Traefik, PostgreSQL, and server/host data.
- Prometheus: Collects metrics from the DHIS2 application (`/api/metrics`), Postgres Exporter, Traefik, Node Exporter, cAdvisor, and Prometheus itself. Data is stored locally for 15 days (default).
- Loki: Aggregates all container logs (DHIS2, PostgreSQL, Traefik) via the Docker Loki Driver plugin. Logs are indexed by labels for efficiency.
DHIS2's built-in monitoring API is enabled, exposing health and performance metrics to Prometheus.
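If you want to sanity-check the metrics endpoint yourself, a quick request like the one below should return Prometheus-formatted metrics (the authentication details are an assumption; use credentials valid for your instance):

```sh
# Illustrative only: fetch DHIS2 metrics as the admin user
curl -u admin:<your-admin-password> https://<your-domain.com>/api/metrics
```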
To access Grafana:

- Start services with the monitoring overlay (as shown above).
- Open `https://grafana.{APP_HOSTNAME}` in your browser (where `{APP_HOSTNAME}` is from your `.env` file).
- Log in with:
  - Username: `admin`
  - Password: Check your `.env` file for `GRAFANA_ADMIN_PASSWORD`.
Monitoring settings can be configured via environment variables in your .env file:
- `GRAFANA_ADMIN_PASSWORD`: Grafana admin password (auto-generated).
- `PROMETHEUS_RETENTION_TIME`: Prometheus data retention (default: `15d`).
- `LOKI_RETENTION_PERIOD`: Loki log retention (default: `744h` = 31 days).
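For example, to keep metrics for a month and logs for roughly two, the relevant lines in `.env` might look like this (the values are illustrative):

```sh
# Illustrative retention overrides in .env
PROMETHEUS_RETENTION_TIME=30d
LOKI_RETENTION_PERIOD=1488h
```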
This section is for developers who want to contribute to this project.
- Python 3.11+
- Pip
- Make
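You can confirm the tooling is available with, for example:

```sh
# Check the development prerequisites
python3 --version   # expect 3.11 or newer
pip --version
make --version
```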
To initialize the development environment:
```sh
make init
```

To start all services for development:
```sh
make launch
```

To stop and remove all services and their associated data:
```sh
make clean
```

To run the tests:

```sh
make test
```

Note that the environment needs to be "fresh" for the end-to-end tests' expectations to succeed, so it's advised to clean the environment beforehand.
```sh
make clean && make test
```

For more in-depth information, please refer to the following: