Version: 8.8 (unreleased)

Camunda manual installation

This page guides you through the manual installation of Camunda 8 on a local machine, bare metal server, or virtual machine.

Prerequisites

  • Bare metal or virtual machine
    • Operating system:
      • Linux
      • Windows, macOS, and other operating systems are supported for development only and not for production.
    • Java Virtual Machine. See supported environments for version details.
    • Configure the web applications to use an available port. By default, the Orchestration Cluster listens on port 8080.
  • Secondary storage

For suggested minimum hardware requirements and networking, see the manual reference architecture requirements.

Performance on musl-based distributions

There are known performance limitations on systems that use musl instead of glibc, because Java relies on glibc for running native libraries. For example, Alpine Linux, which uses musl, has shown performance degradation compared to Debian or Ubuntu in benchmark tests.

Unsupported components

The following components are not supported for manual installation:

  • Management Identity
  • Optimize
  • Web Modeler

To install these components, use one of the other supported installation methods, such as Docker or Kubernetes (Helm).

Download artifacts

Download the required Camunda 8 artifacts from the following sources. Make sure that all artifacts use the same minor version to ensure compatibility.

Orchestration Cluster:

  • File names follow the pattern camunda-zeebe-x.y.z.(zip|tar.gz).
  • Maven Central - Select a version, then click Browse to view downloadable files such as .zip or .tar.gz.
  • Artifactory - Select a version, then browse the files to download.
  • GitHub - Select a release to download the files.

Connectors:

  • Bundle (includes pre-bundled connectors from Camunda)

    • File names follow the pattern connector-runtime-bundle-x.y.z-with-dependencies.jar.
    • Maven Central - Select a version, then click Browse to view the .jar.
    • Artifactory - Select a version, then browse the files to download.
  • Runtime-only

    • File names follow the pattern connector-runtime-application-x.y.z.jar.
    • Maven Central - Select a version, then click Browse to view the .jar.
    • Artifactory - Select a version, then browse the files to download.
note

Some out-of-the-box connectors are licensed under the Camunda Self-Managed Free Edition license. See Camunda Connectors Bundle project for an overview.

Reference architecture

Review the following reference architectures for deployment guidance:

  • Manual reference architecture - Provides an overview of the environment and requirements.
  • Amazon EC2 - A reference architecture built on Amazon Web Services (AWS) using Elastic Compute Cloud (EC2) with Ubuntu, and Amazon OpenSearch as the secondary storage.

Orchestration Cluster

For background, see the Orchestration Cluster glossary entry.
For architecture details, review the architecture.
For configuration details, see the Orchestration Cluster components.

Configure the Orchestration Cluster

By default, the configuration uses a single-node orchestration cluster with a local Elasticsearch instance as the secondary storage. If this setup matches your environment, no additional configuration is required.

If you plan to do any of the following, you need to make targeted configuration changes:

  • Add more nodes to the cluster
  • Use a different external secondary storage
  • Enable Connectors
  • Apply a license key

The following sections outline the minimum required adjustments for each use case. Combine these changes into a single application.yaml under the appropriate configuration keys, or export them as environment variables.

For detailed configuration options and advanced setup guidance, refer to each component’s documentation under the Orchestration cluster section.

note

Configuration is being unified across components. Some changes will only take effect in future versions, so you may see a mix of old and new configuration options.

Configure the secondary storage

Set the secondary storage type value to elasticsearch or opensearch. Remove fields that do not apply to your selection.

If your security settings require authentication for the secondary storage, configure both username and password. Omit these fields if authentication is not required.

The following configuration defines how the Orchestration Cluster connects to secondary storage (Elasticsearch or OpenSearch). This applies to the included Operate, Tasklist, Identity, and Camunda Exporter.

For detailed configuration options, see the Orchestration Cluster configuration.

CAMUNDA_DATA_SECONDARYSTORAGE_TYPE=elasticsearch|opensearch # defaults to elasticsearch

# Elasticsearch
CAMUNDA_DATA_SECONDARYSTORAGE_ELASTICSEARCH_URL=http://localhost:9200
CAMUNDA_DATA_SECONDARYSTORAGE_ELASTICSEARCH_USERNAME=
CAMUNDA_DATA_SECONDARYSTORAGE_ELASTICSEARCH_PASSWORD=

# OpenSearch
CAMUNDA_DATA_SECONDARYSTORAGE_OPENSEARCH_URL=http://localhost:9200
CAMUNDA_DATA_SECONDARYSTORAGE_OPENSEARCH_USERNAME=
CAMUNDA_DATA_SECONDARYSTORAGE_OPENSEARCH_PASSWORD=
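
For example, a minimal sketch of connecting to an external OpenSearch cluster with basic authentication might look like the following (the host and credentials are placeholders, not values from this guide):

CAMUNDA_DATA_SECONDARYSTORAGE_TYPE=opensearch
CAMUNDA_DATA_SECONDARYSTORAGE_OPENSEARCH_URL=http://opensearch.example.com:9200 # placeholder host
CAMUNDA_DATA_SECONDARYSTORAGE_OPENSEARCH_USERNAME=camunda   # placeholder
CAMUNDA_DATA_SECONDARYSTORAGE_OPENSEARCH_PASSWORD=changeme  # placeholder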

Configure a multi-broker cluster

This example shows a 3-broker cluster.

  • Set size to 3.
  • Assign a unique node-id to each broker, starting at 0 and incrementing up to, but not including, the total number of brokers (0, 1, 2 for a three-broker cluster).
  • Use the same initial-contact-points on all brokers.

For more details, see the Zeebe broker cluster configuration.

CAMUNDA_CLUSTER_SIZE=3
CAMUNDA_CLUSTER_NODEID=0 # unique ID of this broker node; must be between 0 (inclusive) and the cluster size (exclusive)
ZEEBE_BROKER_CLUSTER_INITIALCONTACTPOINTS=HOST_0:26502,HOST_1:26502,HOST_2:26502
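
As a sketch, the corresponding settings on the second broker differ only in the node ID (HOST_0 to HOST_2 are placeholders for your broker hostnames):

CAMUNDA_CLUSTER_SIZE=3
CAMUNDA_CLUSTER_NODEID=1 # second broker in the same 3-broker cluster
ZEEBE_BROKER_CLUSTER_INITIALCONTACTPOINTS=HOST_0:26502,HOST_1:26502,HOST_2:26502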

Configure Connectors authentication

Connectors require authentication to use their full capabilities. By default, the Orchestration Cluster uses basic authentication. You can configure the cluster to automatically create a user with the necessary permissions at startup.

If you don’t configure a user at startup, create one manually in the Identity UI after deployment.

For more details, see Identity configuration overview.

CAMUNDA_SECURITY_INITIALIZATION_USERS_1_USERNAME=connectors
CAMUNDA_SECURITY_INITIALIZATION_USERS_1_PASSWORD=connectors
CAMUNDA_SECURITY_INITIALIZATION_USERS_1_NAME="Connectors User"
CAMUNDA_SECURITY_INITIALIZATION_USERS_1_EMAIL=connectors@company.com
CAMUNDA_SECURITY_INITIALIZATION_DEFAULTROLES_CONNECTORS_USERS_0=connectors
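
The username and password configured here must match the client credentials used by the Connectors runtime, as shown later in the Configure Connectors section:

CAMUNDA_CLIENT_AUTH_METHOD=basic
CAMUNDA_CLIENT_AUTH_USERNAME=connectors
CAMUNDA_CLIENT_AUTH_PASSWORD=connectors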

Configure the license key

If your Camunda 8 Self-Managed installation requires a license, provide the license key, for example as an environment variable:

CAMUNDA_LICENSE_KEY=""

Run the Orchestration Cluster

Once you've downloaded the Orchestration Cluster distribution, extract it and run it as follows.

  1. Extract the files using your GUI or CLI. For the zip distribution:

    mkdir -p camunda && unzip camunda-zeebe-x.y.z.zip -d camunda

     For the tar.gz distribution:

    mkdir -p camunda && tar -xzf camunda-zeebe-x.y.z.tar.gz -C camunda
  2. Open the extracted folder.

  3. Update the configuration in config/application.yaml, or export the environment variables.

  4. Navigate to the bin folder.

  5. Run camunda.sh (Linux/macOS) or camunda.bat (Windows).

  6. Open http://localhost:8080. On first access, you’ll be asked to create an admin user unless Identity is configured with OIDC or a similar option.
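
Taken together, a minimal run on Linux might look like the following sketch, assuming the tar.gz distribution with the default single-node configuration and that the archive unpacks into a camunda-zeebe-x.y.z directory:

mkdir -p camunda && tar -xzf camunda-zeebe-x.y.z.tar.gz -C camunda
cd camunda/camunda-zeebe-x.y.z
./bin/camunda.sh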

note

Camunda 8 components without a valid license may display Non-Production License in the navigation bar and issue warnings in the logs. These warnings don’t affect startup or functionality, except that Web Modeler is limited to five users. To obtain a license, visit the Camunda Enterprise page.

Run the Orchestration Cluster as a service

This example shows how to run the Orchestration Cluster as a systemd service on Ubuntu. Adjust the paths, user, and group as needed for your environment. The example uses a file with environment variables, but you can adapt it to use an application.yaml instead.

  1. Create a systemd service file at /etc/systemd/system/camunda.service and adjust it to fit your own paths, user, and group (the reference architecture provides an example at generic/compute/debian/configs/camunda.service; a minimal sketch also follows this procedure).
  2. Change the permissions on /etc/systemd/system/camunda.service to 644:

    sudo chmod 644 /etc/systemd/system/camunda.service
  3. Reload systemd and start the new service:

    sudo systemctl daemon-reload
    sudo systemctl start camunda.service
  4. Verify that the service is running:

    systemctl status camunda.service

View logs with:

journalctl -e -u camunda
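
As referenced in step 1, the unit file might look like the following minimal sketch. The installation path /opt/camunda, the camunda user and group, and the environment file /etc/camunda/camunda.env are assumptions; replace them with your own values:

# /etc/systemd/system/camunda.service (sketch; adjust paths, user, and group)
[Unit]
Description=Camunda 8 Orchestration Cluster
After=network.target

[Service]
User=camunda
Group=camunda
# file containing the environment variables from the configuration sections above
EnvironmentFile=/etc/camunda/camunda.env
ExecStart=/opt/camunda/bin/camunda.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target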

Verify the Orchestration Cluster

Check the logs for a successful startup message, such as:

[2025-08-05 13:34:51.964] [main] INFO org.springframework.boot.web.embedded.tomcat.TomcatWebServer - Tomcat started on port 8080 (http) with context path '/'
...
[2025-08-05 13:34:52.006] [main] INFO org.springframework.boot.web.embedded.tomcat.TomcatWebServer - Tomcat initialized with port 9600 (http)
[2025-08-05 13:34:52.048] [main] INFO org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext - Root WebApplicationContext: initialization completed in 79 ms
[2025-08-05 13:34:52.054] [main] INFO org.springframework.boot.actuate.endpoint.web.EndpointLinksResolver - Exposing 17 endpoints beneath base path '/actuator'
[2025-08-05 13:34:52.078] [main] INFO org.springframework.boot.web.embedded.tomcat.TomcatWebServer - Tomcat started on port 9600 (http) with context path '/'
[2025-08-05 13:34:52.088] [main] INFO io.camunda.application.StandaloneCamunda - Started StandaloneCamunda in 9.376 seconds (process running for 9.817)

Check the cluster topology with the Orchestration Cluster REST API:

# replace username and password with the details of the admin user you created on first startup
curl -u username:password -L 'http://localhost:8080/v2/topology' \
-H 'Accept: application/json'
Example output
// the number of brokers, cluster size, partitions, etc. depends on your configuration
// Example: 1 broker, 3 partitions
{
  "brokers": [
    {
      "nodeId": 0,
      "host": "HOST_0",
      "port": 26501,
      "partitions": [
        {
          "partitionId": 1,
          "role": "leader",
          "health": "healthy"
        },
        {
          "partitionId": 2,
          "role": "leader",
          "health": "healthy"
        },
        {
          "partitionId": 3,
          "role": "leader",
          "health": "healthy"
        }
      ],
      "version": "8.8.0"
    }
  ],
  "clusterSize": 1,
  "partitionsCount": 3,
  "replicationFactor": 1,
  "gatewayVersion": "8.8.0",
  "lastCompletedChangeId": "-1"
}

Check the health status of the Orchestration Cluster with the actuator endpoint:

curl localhost:9600/actuator/health
Example output
{
  "status": "UP",
  "groups": ["liveness", "readiness", "startup", "status"],
  "components": {
    "brokerReady": {
      "status": "UP"
    },
    "brokerStartup": {
      "status": "UP"
    },
    "brokerStatus": {
      "status": "UP"
    },
    "indicesCheck": {
      "status": "UP"
    },
    "livenessState": {
      "status": "UP"
    },
    "readinessState": {
      "status": "UP"
    },
    "searchEngineCheck": {
      "status": "UP"
    }
  }
}

Connectors

For background, see the Connectors glossary entry.
For architecture details, review the architecture.
For configuration options, see the Connectors components documentation.

Configure Connectors

If you run Connectors on the same machine as the Orchestration Cluster, change the default port (8080) to avoid conflicts.

Connectors require authentication to communicate with the Orchestration Cluster REST API and Zeebe.

By default, Connectors connect to:

  • localhost:8080 (Orchestration Cluster REST API)
  • localhost:26500 (Zeebe)

SERVER_PORT=9090

CAMUNDA_CLIENT_RESTADDRESS=http://localhost:8080
CAMUNDA_CLIENT_GRPCADDRESS=http://localhost:26500
CAMUNDA_CLIENT_MODE=selfManaged
CAMUNDA_CLIENT_AUTH_METHOD=basic
CAMUNDA_CLIENT_AUTH_USERNAME=connectors
CAMUNDA_CLIENT_AUTH_PASSWORD=connectors

For more information about the configuration of Connectors, see Connectors configuration.

Run Connectors

Both the pre-bundled and runtime-only versions of the Connectors behave the same at runtime. They automatically detect and register all connectors available on the classpath during execution. Each connector uses its default configuration as defined by the @OutboundConnector or @InboundConnector annotations.

Consider the following file structure:

/home/user/connectors $
├── connector-runtime-(application|bundle)-x.y.z(-with-dependencies).jar
└── my-custom-connector-0.1.0-SNAPSHOT-with-dependencies.jar

To start the Connectors bundle together with your custom connectors locally, run:

java -cp "/home/user/connectors/*" "io.camunda.connector.runtime.app.ConnectorRuntimeApplication"

This starts a Zeebe client, registering the detected connectors as job workers. By default, it connects to a local Zeebe instance on port 26500.
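
For example, a minimal sketch that applies the settings from the Configure Connectors section above and then starts the runtime:

# values taken from the Configure Connectors section above
export SERVER_PORT=9090
export CAMUNDA_CLIENT_RESTADDRESS=http://localhost:8080
export CAMUNDA_CLIENT_GRPCADDRESS=http://localhost:26500
export CAMUNDA_CLIENT_MODE=selfManaged
export CAMUNDA_CLIENT_AUTH_METHOD=basic
export CAMUNDA_CLIENT_AUTH_USERNAME=connectors
export CAMUNDA_CLIENT_AUTH_PASSWORD=connectors
java -cp "/home/user/connectors/*" io.camunda.connector.runtime.app.ConnectorRuntimeApplication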

Run Connectors as a service

This example shows how to run the Connectors as a systemd service on Ubuntu. Adjust the paths, user, and group as needed for your environment.

The example uses a file with environment variables, but you can adapt it to use an application.yaml instead.

  1. Create a systemd service file at /etc/systemd/system/camunda-connectors.service and adjust it to fit your own paths, user, and group (the reference architecture provides an example at generic/compute/debian/configs/camunda-connectors.service; a minimal sketch also follows this procedure).
  2. Change the permissions on /etc/systemd/system/camunda-connectors.service to 644:

    sudo chmod 644 /etc/systemd/system/camunda-connectors.service
  3. Reload systemd and start the service:

    sudo systemctl daemon-reload
    sudo systemctl start camunda-connectors.service
  4. Verify that the service is running:

    systemctl status camunda-connectors.service

View logs with:

journalctl -e -u camunda-connectors
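
As referenced in step 1, a minimal sketch of the camunda-connectors.service unit could look like the following. The Java path, the connectors directory, the camunda user and group, and the environment file path are assumptions; replace them with your own values:

# /etc/systemd/system/camunda-connectors.service (sketch; adjust paths, user, and group)
[Unit]
Description=Camunda 8 Connectors runtime
After=network.target

[Service]
User=camunda
Group=camunda
# file containing the environment variables from the Configure Connectors section
EnvironmentFile=/etc/camunda/connectors.env
ExecStart=/usr/bin/java -cp "/home/user/connectors/*" io.camunda.connector.runtime.app.ConnectorRuntimeApplication
Restart=on-failure

[Install]
WantedBy=multi-user.target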

Verify Connectors

Check the logs for a successful startup message, such as:

2025-08-05T14:49:58.641+02:00  INFO 99856 --- [           main] o.s.b.a.e.web.EndpointLinksResolver      : Exposing 3 endpoints beneath base path '/actuator'
2025-08-05T14:49:58.666+02:00 INFO 99856 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port 9090 (http) with context path '/'
2025-08-05T14:49:58.702+02:00 INFO 99856 --- [ main] i.c.c.r.app.ConnectorRuntimeApplication : Started ConnectorRuntimeApplication in 1.286 seconds (process running for 1.386)

Check the health status of Connectors with the actuator endpoint:

curl localhost:9090/actuator/health
Example output
{
  "status": "UP",
  "groups": ["readiness"],
  "components": {
    "camundaClient": {
      "status": "UP"
    },
    "diskSpace": {
      "status": "UP",
      "details": {
        "total": -1,
        "free": -1,
        "threshold": -1,
        "path": "/home/user/connectors/.",
        "exists": true
      }
    },
    "ping": {
      "status": "UP"
    },
    "processDefinitionImport": {
      "status": "UP",
      "details": {
        "operateEnabled": true
      }
    },
    "ssl": {
      "status": "UP",
      "details": {
        "validChains": [],
        "invalidChains": []
      }
    },
    "zeebeClient": {
      "status": "UP",
      "details": {
        "numBrokers": 1,
        "anyPartitionHealthy": true
      }
    }
  }
}

Next steps

After setting up your cluster, many users typically do the following: