Camunda backup and restore
Use the backup feature to back up and restore your Camunda 8 Self-Managed components and cluster.
About this guide
This guide covers how to back up and restore your Camunda 8 Self-Managed components and cluster. Automate backup and restore procedures with tools that meet your organization's requirements.
With Camunda 8.8, the architecture was updated. For clarity, the Orchestration Cluster now consists of:
- Zeebe
- Web Applications (Operate and Tasklist)
- Identity
Depending on context, we may refer to a specific subcomponent of the Orchestration Cluster where appropriate.
This guide covers two backup paths depending on your secondary storage. Choose the path that matches your deployment:
| | Elasticsearch / OpenSearch | Relational databases (RDBMS) |
|---|---|---|
| Components covered | Zeebe, Operate, Tasklist, Optimize | Zeebe, Operate, Tasklist only |
| Optimize backup | Included | Not supported — use the ES/OpenSearch path |
| Backup coordination | All components must be backed up together using the same backup ID | Decoupled — Zeebe and RDBMS are backed up independently; Camunda aligns them automatically during restore |
| Continuous snapshots | Not supported | Zeebe takes regular snapshots; restore to any snapshot timestamp |
| Backup ID | User-supplied integer | Auto-generated by cluster |
The RDBMS backup path does not support Optimize. If you use Optimize, you must back up and restore Camunda components using the Elasticsearch / OpenSearch path.
Elasticsearch / OpenSearch
Covers all Orchestration Cluster components and Optimize. Back up and restore with no downtime using coordinated Elasticsearch or OpenSearch snapshots. All components must use the same backup ID to ensure consistency.
Prerequisites
Set up a snapshot repository in Elasticsearch or OpenSearch and configure component backup storage. Covers all components including Optimize.
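As a minimal sketch of this prerequisite, the following registers a snapshot repository using the Elasticsearch snapshot API. The repository name, Elasticsearch URL, repository type, and S3 bucket are placeholders; adjust the `type` and `settings` to your storage backend (for example `s3`, `gcs`, `azure`, or `fs`), and make sure the corresponding repository plugin or module is available in your cluster.

```shell
# Register a snapshot repository in Elasticsearch (names and bucket are placeholders).
ELASTIC_URL="http://localhost:9200"   # e.g. reachable via port-forwarding
REPO_NAME="camunda_backup"

curl -sS -X PUT "$ELASTIC_URL/_snapshot/$REPO_NAME" \
  -H 'Content-Type: application/json' \
  -d '{"type": "s3", "settings": {"bucket": "camunda-backup-bucket"}}'

echo "requested snapshot repository: $REPO_NAME"
```

Each component must then be configured to use this repository name as its backup storage so that coordinated snapshots land in the same place.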
Create a backup
Create a backup of all Orchestration Cluster components and Optimize using Elasticsearch or OpenSearch as secondary storage.
Restore a backup
Restore all Orchestration Cluster components and Optimize using Elasticsearch or OpenSearch as secondary storage.
Relational databases (RDBMS)
This is the first phase of new backup capabilities enabled by using an RDBMS as secondary storage. It covers Orchestration Cluster components only (Zeebe, Operate, and Tasklist). Optimize is not included.
Using an RDBMS as secondary storage unlocks three new capabilities not available in the Elasticsearch / OpenSearch path:
- Decoupled backups: Zeebe (primary storage) and the RDBMS (secondary storage) can be backed up independently, on their own schedules. During restore, Camunda automatically aligns the two backups; there is no need to coordinate a shared backup ID or take snapshots at the same time.
- Scheduled backups: Because backups are decoupled, Zeebe can take backups automatically on a fixed schedule, with no need to call the backup API externally.
- Point-in-time restore: Zeebe continuously takes snapshots of its log stream. This creates a range of available restore points that you can restore to by timestamp, rather than being limited to a specific backup ID.
Prerequisites
Configure Zeebe continuous backup storage and your RDBMS backup tooling. Zeebe and the RDBMS can be backed up independently. Covers Zeebe, Operate, and Tasklist only.
Create a backup
Enable decoupled continuous backups for Zeebe, Operate, and Tasklist. Zeebe automatically takes regular snapshots; the RDBMS is backed up separately using native database tools. Optimize is not supported.
Restore a backup
Restore Zeebe, Operate, and Tasklist to any available snapshot timestamp. Camunda automatically aligns Zeebe and RDBMS state during restore. Optimize is not supported.
Considerations
Backup IDs (Elasticsearch / OpenSearch path only)
When using Elasticsearch or OpenSearch as secondary storage, each component backup is identified by a user-supplied integer backup ID. The backup ID must be greater than the ID of any previous backup.
We recommend using the Unix timestamp as the backup ID.
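As a sketch of how a timestamp-based backup ID can be used, the snippet below derives the ID from the current Unix time and triggers a backup via the management API. The endpoint paths (`/actuator/backups` for the Orchestration Cluster, `/actuator/backup` for Optimize) and the localhost URLs are assumptions based on the management API described below; verify them against your Camunda version and deployment.

```shell
# Use the current Unix timestamp as a monotonically increasing backup ID.
BACKUP_ID=$(date +%s)

# Management API endpoints, e.g. reachable via port-forwarding (see below).
ORCHESTRATION_MGMT="http://localhost:9600"
OPTIMIZE_MGMT="http://localhost:8092"

# Trigger a backup of the Orchestration Cluster components.
curl -sS -X POST "$ORCHESTRATION_MGMT/actuator/backups" \
  -H 'Content-Type: application/json' \
  -d "{\"backupId\": $BACKUP_ID}"

# Optimize exposes its own backup endpoint on its management port.
curl -sS -X POST "$OPTIMIZE_MGMT/actuator/backup" \
  -H 'Content-Type: application/json' \
  -d "{\"backupId\": $BACKUP_ID}"

echo "requested backup with ID $BACKUP_ID"
```

Because the ID is a Unix timestamp, every subsequent run automatically produces a larger backup ID, satisfying the requirement that each ID exceed all previous ones.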
When using the RDBMS path, backup IDs are auto-generated by the cluster and do not need to be managed manually.
The steps outlined on this page are generally applicable for any kind of deployment but might differ slightly depending on your setup.
Management API
The management API is an extension of the Spring Boot Actuator, typically used for monitoring and other operational purposes. It is not a public API and is not exposed externally. You need direct access to your Camunda cluster to interact with these management APIs, which is why you will often see references to localhost.
How you gain direct access depends on your deployment environment. For example, in Kubernetes you can use port-forwarding, or exec to run commands directly on pods. In a manual deployment, you need to be able to reach the machines that host Camunda. The management port is typically 9600, but it can differ depending on your setup and the component. You can find the default for each component on its configuration page.
| Component | Port |
|---|---|
| Optimize | 8092 |
| Orchestration Cluster | 9600 |
Examples for Kubernetes approaches
- Port Forwarding
- Exec
- Cronjob
Port-forwarding allows you to temporarily bind a remote Kubernetes cluster port of a service or pod directly to your local machine, allowing you to interact with it via localhost:PORT.
Since the services are bound to your local machine, you cannot reuse the same port for all port-forwards unless you start and stop each one based on usage. To avoid this limitation, the examples use different local ports for each service, allowing them to run simultaneously without conflict.
```shell
export CAMUNDA_RELEASE_NAME="camunda"

# kubectl port-forward services/$SERVICE_NAME $LOCAL_PORT:$REMOTE_PORT
kubectl port-forward services/$CAMUNDA_RELEASE_NAME-zeebe-gateway 9600:9600 & \
kubectl port-forward services/$CAMUNDA_RELEASE_NAME-optimize 8092:8092 & \
kubectl port-forward services/$CAMUNDA_RELEASE_NAME-elasticsearch 9200:9200 &
```
The & at the end of each line runs the command in the background, allowing all port-forwards to share a single terminal.
An alternative to port-forwarding is to run commands directly on Kubernetes pods. In this example, we spawn a temporary pod to execute a curl request. Alternatively, you can use existing pods within the namespace; Camunda's pods are built on different base images, each with a different feature set.
```shell
export CAMUNDA_NAMESPACE="camunda"
export CAMUNDA_RELEASE_NAME="camunda"

# Temporarily overwrite the normal curl with a pod-based version.
# Remove the alias again with `unalias curl`.
alias curl="kubectl run curl --rm -i -n $CAMUNDA_NAMESPACE --restart=Never --image=alpine/curl -- -sS"

curl $CAMUNDA_RELEASE_NAME-zeebe-gateway:9600/actuator/health
curl $CAMUNDA_RELEASE_NAME-optimize:8092/actuator/health
curl $CAMUNDA_RELEASE_NAME-elasticsearch:9200/_cluster/health
```
This allows you to directly execute commands within the namespace and communicate with available services.
The examples in this guide showcase the backup process manually to help you fully understand it. To automate backups on a regular schedule, you can use Kubernetes CronJobs tailored to your environment.
A Kubernetes CronJob spawns a Job on a regular basis. The Job runs a defined image within a given namespace, allowing you to run commands and interact with the environment.
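As an illustration, the manifest below sketches a CronJob that triggers a backup once a day. The schedule, namespace, service name (`camunda-zeebe-gateway`), and the `/actuator/backups` endpoint are assumptions you will need to adapt to your own release name and Camunda version.

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: camunda-backup
  namespace: camunda        # placeholder namespace
spec:
  schedule: "0 2 * * *"     # daily at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: trigger-backup
              image: alpine/curl
              command:
                - sh
                - -c
                - |
                  # Use the Unix timestamp as the backup ID (see Considerations).
                  BACKUP_ID=$(date +%s)
                  curl -sS -X POST "http://camunda-zeebe-gateway:9600/actuator/backups" \
                    -H 'Content-Type: application/json' \
                    -d "{\"backupId\": $BACKUP_ID}"
```

Because the Job runs inside the cluster, it can reach the management API via the internal service name without port-forwarding.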
You can see further examples from Camunda consultants in the Backup and Restore Workshop. You can use these examples to achieve similar automation.
ContextPath
If you define the contextPath in the Camunda Helm chart, or management.server.servlet.context-path in a standalone setup, your API requests must be prefixed with the contextPath value of the individual component. If management.server.port is defined, this also applies to management.endpoints.web.base-path. You can learn more about this behavior in the Spring Boot documentation.
Setting the contextPath in the Helm chart for Optimize does not overwrite the contextPath of the management API; it remains /.
Example
If you are defining the contextPath for the Orchestration Cluster in the Camunda Helm chart:
```yaml
orchestration:
  contextPath: /example
```
A call to the management API of the Orchestration Cluster would look like the following example:
```shell
ORCHESTRATION_CLUSTER_MANAGEMENT_API=http://localhost:9600

curl $ORCHESTRATION_CLUSTER_MANAGEMENT_API/example/actuator/health
```
Without the contextPath it would just be:
```shell
ORCHESTRATION_CLUSTER_MANAGEMENT_API=http://localhost:9600

curl $ORCHESTRATION_CLUSTER_MANAGEMENT_API/actuator/health
```