Developer quickstart with Docker Compose
Get started with Docker Compose to run Camunda 8 Self-Managed locally. The default lightweight configuration includes the Orchestration Cluster (Zeebe, Operate, and Tasklist consolidated), Orchestration Cluster Admin (formerly Orchestration Cluster Identity), Connectors, and Elasticsearch. The full configuration additionally includes Optimize, Console, Management Identity, Web Modeler, Keycloak, and PostgreSQL. For local development and evaluation, you can also switch the Orchestration Cluster secondary storage to a supported relational database or OpenSearch. The Docker Compose setup also supports storing and managing documents with document handling.
The Docker images are supported for production usage; however, the Docker Compose files are intended for developers to run an environment locally and are not designed for production. For production deployments, use Kubernetes with Helm.
Prerequisites
The following prerequisites are required to run Camunda Self-Managed via Docker Compose:
| Prerequisite | Description |
|---|---|
| Docker Compose | Version 1.27.0 or later (supports the latest Compose specification). |
| Docker | Version 20.10.16 or later. |
If Docker Compose reports errors such as "unsupported attribute" when loading the Camunda Compose files:
- Confirm you are using the Docker Compose v2 plugin: `docker compose version`
- Run the commands in this guide with `docker compose` (plugin syntax), not `docker-compose` (legacy standalone binary).
- Upgrade Docker Desktop or Docker Engine/Compose plugin to a recent supported version, then retry.
Run Camunda 8 with Docker Compose
To start a complete Camunda 8 Self-Managed environment locally:
1. Download the artifact for Camunda 8 Docker Compose, then extract it.
2. In the extracted directory, run:

   ```shell
   docker compose up -d
   ```

3. Wait for the environment to initialize (this can take several minutes). Monitor the logs (especially the Keycloak container log) to ensure all components start.
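Instead of watching the logs, you can poll the Orchestration Cluster REST API until the gateway responds. This is a sketch assuming the default `localhost:8080` port mapping from this guide and a lightweight (unauthenticated) setup:

```shell
# Poll the Orchestration Cluster REST API topology endpoint until it answers.
# Assumes the default port mapping from this guide; adjust if you changed it.
until curl -sf http://localhost:8080/v2/topology > /dev/null; do
  echo "Waiting for Camunda to start..."
  sleep 5
done
echo "Camunda is ready."
```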
Docker Compose configurations
Camunda provides three Docker Compose configurations in the Camunda Distributions repository:
| Configuration File | Description |
|---|---|
| docker-compose.yaml | Default lightweight configuration - Includes the core Orchestration Cluster (Zeebe, Operate, Tasklist, and Orchestration Cluster Admin), Connectors, and Elasticsearch. Ideal for most developers who want to model, deploy, and test processes. |
| docker-compose-full.yaml | Full-stack configuration - Includes all Camunda 8 components including the Orchestration Cluster, Connectors, Optimize, Console, Management Identity, Keycloak, PostgreSQL, and Web Modeler. Use this when you need management components, process optimization, or modeling. |
| docker-compose-web-modeler.yaml | Standalone Web Modeler - Runs only Web Modeler and its dependencies (Identity, Keycloak, PostgreSQL). See Deploy with Web Modeler. |
In these Docker Compose quickstart configurations, the Orchestration Cluster uses Elasticsearch as secondary storage.
The PostgreSQL container(s) in these quickstart files are used by management components (for example, Management Identity and Web Modeler), not as Orchestration Cluster secondary storage.
If you want to run the Orchestration Cluster with RDBMS secondary storage, see the dedicated RDBMS guides and the examples later on this page.
Access components
Once the containers are running, you can access the components in your browser.
You can log in to the component web interfaces with the default credentials:
- Username: `demo`
- Password: `demo`
Orchestration Cluster (lightweight and full configurations)
The Orchestration Cluster is the core of Camunda 8, providing process automation capabilities.
| Component | URL | Description |
|---|---|---|
| Operate | http://localhost:8080/operate | Monitor and troubleshoot process instances. See Introduction to Operate and Process instance creation. |
| Tasklist | http://localhost:8080/tasklist | Complete user tasks in running process instances. See User tasks. |
| Orchestration Cluster Admin | http://localhost:8080/admin | Manage users and permissions for Orchestration Cluster (lightweight). |
| Orchestration Cluster REST API | http://localhost:8080/v2 | REST API for process automation. |
| Orchestration Cluster gRPC API | localhost:26500 | gRPC API for high-performance process automation. |
By default, the Orchestration Cluster uses Basic authentication. The full configuration uses Keycloak (via Management Identity) for OIDC authentication.
Management and modeling components (full configuration only)
| Component | URL | Description |
|---|---|---|
| Console | http://localhost:8087 | Manage clusters and component configurations |
| Optimize | http://localhost:8083 | Analyze and improve process performance |
| Management Identity | http://localhost:8084 | Manage users for Console, Optimize, and Web Modeler |
| Web Modeler | http://localhost:8070 | Model BPMN processes, DMN decisions, and forms |
External dependencies
| Component | Configuration | URL | Description |
|---|---|---|---|
| Elasticsearch | Lightweight and full | http://localhost:9200 | Used by the Orchestration Cluster as secondary storage (and Optimize in the full configuration). |
| Keycloak | Full | http://localhost:18080/auth/ | OIDC provider for Management Identity. The lightweight configuration uses the embedded Orchestration Cluster Admin instead. Access with admin / admin. |
| PostgreSQL (management components only) | Full | localhost:5432 | Database for Management Identity and Web Modeler. In these quickstart configurations, the Orchestration Cluster continues to use Elasticsearch as secondary storage. |
In Docker Compose quickstarts, PostgreSQL is used for management-component persistence (Management Identity and Web Modeler flows). The Orchestration Cluster secondary storage in these examples remains Elasticsearch.
Configuration files and options
To start specific configurations:
- Lightweight (default):

  ```shell
  docker compose up -d
  ```

- Full configuration:

  ```shell
  docker compose -f docker-compose-full.yaml up -d
  ```

- Standalone Web Modeler:

  ```shell
  docker compose -f docker-compose-web-modeler.yaml up -d
  ```
Configure secondary storage for the Orchestration Cluster
The lightweight `docker-compose.yaml` starts the Orchestration Cluster with Elasticsearch as secondary storage. To test another backend, add a `docker-compose.override.yaml` file next to the extracted Compose files and override the `camunda` service there.
The full-stack `docker-compose-full.yaml` already includes PostgreSQL for Management Identity and Web Modeler. That database is separate from the Orchestration Cluster secondary storage. If you want the Orchestration Cluster itself to use RDBMS, configure the `camunda` service as shown in the examples below.
Use this workflow for each example:
- Create `docker-compose.override.yaml` in the extracted distribution directory.
- Copy the backend-specific example into that file.
- If the backend requires an external JDBC driver, place the driver JAR directly in `./driver-lib` and keep the `./driver-lib:/driver-lib` volume mount from the example.
- Start the updated stack with the command shown below the example.
Camunda configures the built-in exporter automatically from `camunda.data.secondary-storage.*`. You do not need to add a separate exporter class for the standard Docker Compose quickstart.
Some existing pages still use the legacy environment variable prefix `CAMUNDA_DATA_SECONDARYSTORAGE_*`. The examples on this page use `CAMUNDA_DATA_SECONDARY_STORAGE_*` consistently.
Use RDBMS secondary storage
These examples switch the Orchestration Cluster from Elasticsearch to RDBMS. They are suitable for local development and evaluation. PostgreSQL and H2 are the simplest starting points. MariaDB and SQL Server are also bundled in the image. MySQL and Oracle require you to provide the JDBC driver.
The Orchestration Cluster supports RDBMS as secondary storage. Operate support on RDBMS is still limited in 8.9-alpha3. Before using these examples beyond local development, review the RDBMS support policy.
- PostgreSQL
- MariaDB
- MySQL
- Oracle
- Microsoft SQL Server
- H2
```yaml
services:
  camunda:
    environment:
      CAMUNDA_DATA_SECONDARY_STORAGE_TYPE: rdbms
      CAMUNDA_DATA_SECONDARY_STORAGE_RDBMS_DATABASEVENDORID: postgresql
      CAMUNDA_DATA_SECONDARY_STORAGE_RDBMS_URL: jdbc:postgresql://postgres:5432/camunda_secondary
      CAMUNDA_DATA_SECONDARY_STORAGE_RDBMS_USERNAME: camunda
      CAMUNDA_DATA_SECONDARY_STORAGE_RDBMS_PASSWORD: camunda
    depends_on:
      - postgres
  postgres:
    image: postgres:16
    environment:
      POSTGRES_DB: camunda_secondary
      POSTGRES_USER: camunda
      POSTGRES_PASSWORD: camunda
    ports:
      - "5432:5432"
    volumes:
      - postgres-secondary-data:/var/lib/postgresql/data

volumes:
  postgres-secondary-data:
```

```shell
docker compose up -d camunda postgres
```
```yaml
services:
  camunda:
    environment:
      CAMUNDA_DATA_SECONDARY_STORAGE_TYPE: rdbms
      CAMUNDA_DATA_SECONDARY_STORAGE_RDBMS_DATABASEVENDORID: mariadb
      CAMUNDA_DATA_SECONDARY_STORAGE_RDBMS_URL: jdbc:mariadb://mariadb:3306/camunda_secondary?serverTimezone=UTC
      CAMUNDA_DATA_SECONDARY_STORAGE_RDBMS_USERNAME: camunda
      CAMUNDA_DATA_SECONDARY_STORAGE_RDBMS_PASSWORD: camunda
    depends_on:
      - mariadb
  mariadb:
    image: mariadb:11.4
    environment:
      MARIADB_DATABASE: camunda_secondary
      MARIADB_USER: camunda
      MARIADB_PASSWORD: camunda
      MARIADB_ROOT_PASSWORD: rootcamunda
    ports:
      - "3306:3306"
    volumes:
      - mariadb-secondary-data:/var/lib/mysql

volumes:
  mariadb-secondary-data:
```

```shell
docker compose up -d camunda mariadb
```
```yaml
services:
  camunda:
    environment:
      CAMUNDA_DATA_SECONDARY_STORAGE_TYPE: rdbms
      CAMUNDA_DATA_SECONDARY_STORAGE_RDBMS_DATABASEVENDORID: mysql
      CAMUNDA_DATA_SECONDARY_STORAGE_RDBMS_URL: jdbc:mysql://mysql:3306/camunda_secondary?serverTimezone=UTC
      CAMUNDA_DATA_SECONDARY_STORAGE_RDBMS_USERNAME: camunda
      CAMUNDA_DATA_SECONDARY_STORAGE_RDBMS_PASSWORD: camunda
    depends_on:
      - mysql
    volumes:
      - ./driver-lib:/driver-lib
  mysql:
    image: mysql:8.4
    environment:
      MYSQL_DATABASE: camunda_secondary
      MYSQL_USER: camunda
      MYSQL_PASSWORD: camunda
      MYSQL_ROOT_PASSWORD: rootcamunda
    ports:
      - "3306:3306"
    volumes:
      - mysql-secondary-data:/var/lib/mysql

volumes:
  mysql-secondary-data:
```

```shell
docker compose up -d camunda mysql
```
Place the MySQL Connector/J JAR directly in `./driver-lib` before you start the stack.
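One way to fetch the driver is from Maven Central. This is a sketch assuming Connector/J 8.4.0; use the version that matches your MySQL server:

```shell
# Download MySQL Connector/J into ./driver-lib (example version; adjust as needed).
mkdir -p driver-lib
curl -fL -o driver-lib/mysql-connector-j-8.4.0.jar \
  https://repo1.maven.org/maven2/com/mysql/mysql-connector-j/8.4.0/mysql-connector-j-8.4.0.jar
```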
```yaml
services:
  camunda:
    environment:
      CAMUNDA_DATA_SECONDARY_STORAGE_TYPE: rdbms
      CAMUNDA_DATA_SECONDARY_STORAGE_RDBMS_DATABASEVENDORID: oracle
      CAMUNDA_DATA_SECONDARY_STORAGE_RDBMS_URL: jdbc:oracle:thin:@oracle:1521/FREEPDB1
      CAMUNDA_DATA_SECONDARY_STORAGE_RDBMS_USERNAME: camunda
      CAMUNDA_DATA_SECONDARY_STORAGE_RDBMS_PASSWORD: camunda
    depends_on:
      - oracle
    volumes:
      - ./driver-lib:/driver-lib
  oracle:
    image: gvenzl/oracle-free:23-slim
    environment:
      ORACLE_PASSWORD: oracle
      APP_USER: camunda
      APP_USER_PASSWORD: camunda
    ports:
      - "1521:1521"
    volumes:
      - oracle-secondary-data:/opt/oracle/oradata

volumes:
  oracle-secondary-data:
```

```shell
docker compose up -d camunda oracle
```
Place the Oracle JDBC driver JAR directly in `./driver-lib` before you start the stack.
```yaml
services:
  camunda:
    environment:
      CAMUNDA_DATA_SECONDARY_STORAGE_TYPE: rdbms
      CAMUNDA_DATA_SECONDARY_STORAGE_RDBMS_DATABASEVENDORID: mssql
      CAMUNDA_DATA_SECONDARY_STORAGE_RDBMS_URL: jdbc:sqlserver://mssql:1433;databaseName=camunda_secondary;encrypt=false
      CAMUNDA_DATA_SECONDARY_STORAGE_RDBMS_USERNAME: sa
      CAMUNDA_DATA_SECONDARY_STORAGE_RDBMS_PASSWORD: Camunda123!
    depends_on:
      - mssql
  mssql:
    image: mcr.microsoft.com/mssql/server:2022-latest
    environment:
      ACCEPT_EULA: "Y"
      MSSQL_SA_PASSWORD: Camunda123!
      MSSQL_PID: Developer
    ports:
      - "1433:1433"
    volumes:
      - mssql-secondary-data:/var/opt/mssql

volumes:
  mssql-secondary-data:
```

```shell
docker compose up -d mssql
docker compose exec mssql /opt/mssql-tools18/bin/sqlcmd -C -S localhost -U sa -P 'Camunda123!' \
  -Q "IF DB_ID('camunda_secondary') IS NULL CREATE DATABASE camunda_secondary"
docker compose up -d camunda
```
```yaml
services:
  camunda:
    environment:
      CAMUNDA_DATA_SECONDARY_STORAGE_TYPE: rdbms
      CAMUNDA_DATA_SECONDARY_STORAGE_RDBMS_DATABASEVENDORID: h2
      CAMUNDA_DATA_SECONDARY_STORAGE_RDBMS_URL: jdbc:h2:file:./camunda-data/h2db
      CAMUNDA_DATA_SECONDARY_STORAGE_RDBMS_USERNAME: sa
      CAMUNDA_DATA_SECONDARY_STORAGE_RDBMS_PASSWORD: ""
    volumes:
      - h2-secondary-data:/usr/local/camunda/camunda-data

volumes:
  h2-secondary-data:
```

```shell
docker compose up -d camunda
```
Use H2 only for development, testing, and evaluation. It is not a production backend.
Switch between RDBMS, Elasticsearch, and OpenSearch
To switch back from RDBMS to a document-store backend, change the `CAMUNDA_DATA_SECONDARY_STORAGE_TYPE` value and keep only the backend-specific connection settings you need.
- Elasticsearch
- OpenSearch
```yaml
services:
  camunda:
    environment:
      CAMUNDA_DATA_SECONDARY_STORAGE_TYPE: elasticsearch
      CAMUNDA_DATA_SECONDARY_STORAGE_ELASTICSEARCH_URL: http://elasticsearch:9200
      CAMUNDA_DATA_SECONDARY_STORAGE_ELASTICSEARCH_USERNAME: ""
      CAMUNDA_DATA_SECONDARY_STORAGE_ELASTICSEARCH_PASSWORD: ""
```

```shell
docker compose up -d camunda elasticsearch
```
This matches the default lightweight quickstart backend.
```yaml
services:
  camunda:
    environment:
      CAMUNDA_DATA_SECONDARY_STORAGE_TYPE: opensearch
      CAMUNDA_DATA_SECONDARY_STORAGE_OPENSEARCH_URL: http://opensearch:9200
    depends_on:
      - opensearch
  opensearch:
    image: opensearchproject/opensearch:2.19.3
    environment:
      discovery.type: single-node
      OPENSEARCH_JAVA_OPTS: -Xms512m -Xmx512m
      DISABLE_SECURITY_PLUGIN: "true"
    ports:
      - "9200:9200"
      - "9600:9600"
    volumes:
      - opensearch-secondary-data:/usr/share/opensearch/data

volumes:
  opensearch-secondary-data:
```

```shell
docker compose up -d camunda opensearch
```
Secondary storage environment variables
Use these variables when you adapt the examples to your own local setup:
| Variable | Use |
|---|---|
| `CAMUNDA_DATA_SECONDARY_STORAGE_TYPE` | Selects the backend family: `rdbms`, `elasticsearch`, or `opensearch`. |
| `CAMUNDA_DATA_SECONDARY_STORAGE_RDBMS_URL` | JDBC connection string for the relational database used as secondary storage. |
| `CAMUNDA_DATA_SECONDARY_STORAGE_RDBMS_USERNAME` | Database username for RDBMS secondary storage. |
| `CAMUNDA_DATA_SECONDARY_STORAGE_RDBMS_PASSWORD` | Database password for RDBMS secondary storage. |
| `CAMUNDA_DATA_SECONDARY_STORAGE_RDBMS_DATABASEVENDORID` | Optional vendor override. Use `postgresql`, `mariadb`, `mysql`, `oracle`, `mssql`, or `h2` when you want to make the backend explicit. |
| `CAMUNDA_DATA_SECONDARY_STORAGE_RDBMS_AUTO_DDL` | Controls whether Camunda creates and updates the schema automatically. The default is `true`. |
| `CAMUNDA_DATA_SECONDARY_STORAGE_ELASTICSEARCH_URL` | Endpoint for Elasticsearch when `type=elasticsearch`. |
| `CAMUNDA_DATA_SECONDARY_STORAGE_OPENSEARCH_URL` | Endpoint for OpenSearch when `type=opensearch`. |
For additional secondary storage settings, see Configure secondary storage and Configure RDBMS for manual installations.
Authentication
Lightweight configuration (default)
- Web UI: Log in to Operate and Tasklist with `demo`/`demo`.
- APIs: REST and gRPC APIs are publicly accessible (no authentication required).
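Since no authentication is required, you can exercise the APIs directly. A sketch assuming the ports above and, for the gRPC check, a locally installed `zbctl`:

```shell
# Query cluster topology over REST (no credentials needed in the lightweight setup).
curl -s http://localhost:8080/v2/topology

# The same check over gRPC with zbctl, if you have it installed.
zbctl status --address localhost:26500 --insecure
```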
Full configuration
- Web UI: Log in to all components (Operate, Tasklist, Console, Optimize, Web Modeler) with `demo`/`demo`.
- APIs: REST and gRPC APIs require OAuth authentication with the following settings:
  - Client ID: `orchestration` (from `ORCHESTRATION_CLIENT_ID` in the `.env` file)
  - Client Secret: `secret` (from `ORCHESTRATION_CLIENT_SECRET` in the `.env` file)
  - OAuth URL: `http://localhost:18080/auth/realms/camunda-platform/protocol/openid-connect/token`
  - Audience: `orchestration-api`

For details, see the REST API authentication guide.
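With these settings, you can request a token from Keycloak using the client credentials grant and pass it as a bearer token. A sketch using the quickstart defaults; `jq` is assumed for extracting the token:

```shell
# Fetch an access token from Keycloak (full-configuration defaults).
TOKEN=$(curl -s -X POST \
  'http://localhost:18080/auth/realms/camunda-platform/protocol/openid-connect/token' \
  -d 'grant_type=client_credentials' \
  -d 'client_id=orchestration' \
  -d 'client_secret=secret' \
  -d 'audience=orchestration-api' | jq -r .access_token)

# Call the Orchestration Cluster REST API with the token.
curl -s -H "Authorization: Bearer $TOKEN" http://localhost:8080/v2/topology
```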
Stop Camunda 8
To stop all containers and remove associated data:
```shell
docker compose down -v
# or for the full configuration:
docker compose -f docker-compose-full.yaml down -v
```

The `-v` flag deletes all volumes, removing all data (process instances, users, etc.). Omit `-v` to keep your data.
Connectors
Both the lightweight and full Docker Compose configurations include built-in connectors for integrating with external systems. The connector runtime executes both outbound connectors (called from BPMN processes) and inbound connectors (triggering process instances from external events).
For details on available connectors and how to use them, see the connectors documentation.
Connector secrets
When running Camunda locally with Docker Compose, some connectors require authentication credentials or API keys to connect with external services (for example, Slack, SendGrid, or AWS). These values should be stored securely as secrets instead of being hardcoded in your process models.
You can add secrets to the connector runtime using the included connector-secrets.txt file:
- Open `connector-secrets.txt` in the extracted directory.
- Add secrets in the format `NAME=VALUE`, one per line:

  ```
  SLACK_TOKEN=xoxb-your-token-here
  SENDGRID_API_KEY=SG.your-api-key
  ```

- Save the file. The secrets become available in connector configurations using the syntax `{{secrets.NAME}}`. For example, `{{secrets.SLACK_TOKEN}}`.
Do not commit `connector-secrets.txt` to version control with real credentials. Use placeholder values in the repository and configure actual secrets in each environment.
For more details, see the connector secrets documentation.
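The secrets file is plain `NAME=VALUE` lines. As an illustration of the format (this is a hypothetical parser, not the connector runtime's actual loader):

```python
def parse_secrets(text: str) -> dict[str, str]:
    """Parse NAME=VALUE lines, skipping blanks and '#' comments.

    Illustrative sketch only -- not the connector runtime's actual parser.
    """
    secrets = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, _, value = line.partition("=")
        secrets[name.strip()] = value.strip()
    return secrets

content = "SLACK_TOKEN=xoxb-your-token-here\nSENDGRID_API_KEY=SG.your-api-key\n"
print(parse_secrets(content)["SLACK_TOKEN"])  # prints xoxb-your-token-here
```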
Custom connectors
In addition to the built-in connectors, you can add your own custom connectors.
To include custom connectors:
- Option 1: Create a new Docker image that bundles your connectors, as described in the Connectors repository.
- Option 2: Mount the connector JARs as volumes into the `/opt/app` directory in the Docker Compose file.
Each connector JAR must include all required dependencies inside the JAR to run correctly.
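For option 2, the mount is a standard Compose volume entry. A hypothetical fragment (the service name and JAR file name depend on your setup):

```yaml
services:
  connectors:
    volumes:
      # Hypothetical example: mount a fat JAR (with all dependencies bundled)
      # into the /opt/app directory of the connector runtime container.
      - ./my-connector-with-dependencies.jar:/opt/app/my-connector.jar
```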
Modeling and process execution
You can deploy and execute processes using either Desktop Modeler or Web Modeler.
Deploy with Desktop Modeler
Desktop Modeler is a free, open-source desktop application for modeling BPMN, DMN, and Camunda Forms.
Lightweight configuration
To deploy from Desktop Modeler to the lightweight configuration:
- Open Desktop Modeler and click the deployment icon (rocket symbol).
- Select Camunda 8 Self-Managed.
- Configure the connection:
  - Cluster endpoint: `http://localhost:8088/v2`
  - Authentication: Select None (no authentication required by default)
- Click Deploy.
For more details, see the Desktop Modeler deployment guide.
Full configuration
To deploy from Desktop Modeler to the full configuration:
- Open Desktop Modeler and click the deployment icon.
- Select Camunda 8 Self-Managed.
- Configure the connection:
  - Cluster endpoint: `http://localhost:8088/v2`
  - Authentication: Select OAuth
  - OAuth URL: `http://localhost:18080/auth/realms/camunda-platform/protocol/openid-connect/token`
  - Client ID: `orchestration` (from `.env` file: `ORCHESTRATION_CLIENT_ID`)
  - Client Secret: `secret` (from `.env` file: `ORCHESTRATION_CLIENT_SECRET`)
  - Audience: `orchestration-api`
- Click Deploy.
The full configuration uses Keycloak for OIDC authentication. The client credentials (`orchestration` / `secret`) are pre-configured in the `.env` file and Admin configuration.
Deploy with Web Modeler
Non-production installations of Web Modeler are limited to five collaborators per project. See licensing.
Web Modeler provides a browser-based interface for creating and deploying BPMN, DMN, and form diagrams.
It is included in the full configuration by default but can also run as a standalone setup.
Standalone setup
To start Web Modeler and its dependencies independently, run:

```shell
docker compose -f docker-compose-web-modeler.yaml up -d
```

To stop and remove all data and volumes, run:

```shell
docker compose -f docker-compose-web-modeler.yaml down -v
```
Deploy or execute a process
When using the full configuration, Web Modeler connects automatically to the local Orchestration Cluster started by `docker-compose-full.yaml`. You can deploy and run processes directly from the Web Modeler interface.
- Log in to Web Modeler at http://localhost:8070 with `demo`/`demo`.
- Create a new project or open an existing BPMN diagram.
- Use the visual modeler to design your BPMN process.
- Click Deploy to deploy the diagram to the pre-configured Orchestration Cluster.
- After deployment, you can create process instances and monitor them in Operate.
Web Modeler uses the `BEARER_TOKEN` authentication method to communicate with the Orchestration Cluster. The user's authentication token from Management Identity is automatically used for deployment.
Web Modeler is not included in the lightweight configuration. To use Web Modeler with the lightweight configuration:
- Run Web Modeler separately using `docker-compose-web-modeler.yaml`.
- Manually configure the cluster connection in Web Modeler's configuration.
- Use `NONE` or `BASIC` authentication for the lightweight Orchestration Cluster.
See the Web Modeler cluster configuration guide for details.
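As an illustration only (check the cluster configuration guide for the authoritative variable names for your Web Modeler version), the connection is typically configured through environment variables on the Web Modeler REST API service along these lines:

```yaml
services:
  modeler-restapi:
    environment:
      # Illustrative values; the exact variable names and the reachable host
      # name for the Orchestration Cluster depend on your setup and version.
      CAMUNDA_MODELER_CLUSTERS_0_ID: local
      CAMUNDA_MODELER_CLUSTERS_0_NAME: Local lightweight cluster
      CAMUNDA_MODELER_CLUSTERS_0_AUTHENTICATION: NONE
      CAMUNDA_MODELER_CLUSTERS_0_URL_ZEEBE_GRPC: grpc://camunda:26500
      CAMUNDA_MODELER_CLUSTERS_0_URL_ZEEBE_REST: http://camunda:8080
      CAMUNDA_MODELER_CLUSTERS_0_URL_OPERATE: http://camunda:8080/operate
      CAMUNDA_MODELER_CLUSTERS_0_URL_TASKLIST: http://camunda:8080/tasklist
```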
Emails
The Docker Compose setup includes Mailpit as a test SMTP server. Mailpit captures all emails sent by Web Modeler but does not forward them to the actual recipients.
You can access emails in Mailpit's web UI at http://localhost:8075.
Next steps
Now that you have Camunda 8 running locally, explore these resources:
- Getting started: Follow the getting started guide to create a Java project and connect to your local cluster.
- BPMN modeling: Learn BPMN fundamentals and best practices.
- User tasks: Implement user tasks and forms for human workflows.
- Connectors: Explore out-of-the-box connectors for common integrations.
- APIs: Use the Orchestration Cluster REST API or client libraries to interact programmatically.
- Production deployment: When ready, deploy to production with Kubernetes and Helm.