Version: 8.7

Restore a backup

Restore a previous backup of your Camunda 8 Self-Managed components and cluster.

About restoring a backup

To restore a backup you must complete the following main steps:

  1. Restore Elasticsearch/OpenSearch
  2. Restore Zeebe Cluster
  3. Start all Camunda 8 components
note

When restoring Camunda 8 from a backup, all components must be restored from backups that correspond to the same backup ID.

Prerequisites

The following general prerequisites are required before you can restore a backup:

Component clean state
The restore process assumes a clean state for all components, including Elasticsearch/OpenSearch. No prior persistent volumes or component state should exist; all data is restored from scratch.

Camunda version
Backups must be restored using the exact Camunda version they were created with. As noted during the backup process, the version is embedded in the backup name.

This is essential because starting a component with a mismatched version may result in startup failures due to schema incompatibilities with Elasticsearch/OpenSearch and the component itself. Although schema changes are generally avoided in patch releases, they can still occur.

When using the Camunda Helm chart, this means identifying the corresponding chart version. The Camunda Helm chart version matrix can help: open the major.minor release and search for the backed-up patch release of your component. The other components typically fit the same chart version.

Example: Work out your correct Camunda version

Our backups look as follows:

camunda_optimize_1748937221_8.7.1_part_1_of_2
camunda_optimize_1748937221_8.7.1_part_2_of_2
camunda_operate_1748937221_8.7.2_part_1_of_6
camunda_operate_1748937221_8.7.2_part_2_of_6
camunda_operate_1748937221_8.7.2_part_3_of_6
camunda_operate_1748937221_8.7.2_part_4_of_6
camunda_operate_1748937221_8.7.2_part_5_of_6
camunda_operate_1748937221_8.7.2_part_6_of_6
camunda_tasklist_1748937221_8.7.2_part_1_of_6
camunda_tasklist_1748937221_8.7.2_part_2_of_6
camunda_tasklist_1748937221_8.7.2_part_3_of_6
camunda_tasklist_1748937221_8.7.2_part_4_of_6
camunda_tasklist_1748937221_8.7.2_part_5_of_6
camunda_tasklist_1748937221_8.7.2_part_6_of_6
camunda_zeebe_records_backup_1748937221

From this, we know:

  • Optimize: 8.7.1
  • Operate / Tasklist: 8.7.2

Based on this, we can look in the version matrix for 8.7 and see that the corresponding Camunda Helm chart version is 12.0.2.
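
Because the snapshot names follow the scheme camunda_<component>_<backupId>_<version>_part_<n>_of_<m>, a small shell sketch like the following can extract the component versions (the names are taken from the example above):

printf '%s\n' \
  camunda_optimize_1748937221_8.7.1_part_1_of_2 \
  camunda_operate_1748937221_8.7.2_part_1_of_6 \
  camunda_tasklist_1748937221_8.7.2_part_1_of_6 \
  | awk -F_ '{print $2, $4}' \
  | sort -u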

Step 1: Restore Elasticsearch/OpenSearch

Prerequisites

The following specific prerequisites are required when restoring Elasticsearch/OpenSearch:

Clean state/data
Elasticsearch/OpenSearch is set up and running with a clean state and no data on it.

Snapshot repository
Elasticsearch/OpenSearch is configured with the same snapshot repository as used for the backup, using the documentation linked in the prerequisites.
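
For reference, registering an Elasticsearch snapshot repository could look like the following sketch. This only illustrates an S3-backed repository; the bucket and base_path are placeholders, and the repository name must match the one the components were configured with for backups.

# Minimal sketch, assuming an S3-backed repository; bucket and base_path are placeholders
curl -X PUT "$ELASTIC_ENDPOINT/_snapshot/camunda_backup" \
  -H 'Content-Type: application/json' \
  -d '{
    "type": "s3",
    "settings": {
      "bucket": "my-camunda-es-backups",
      "base_path": "backups"
    }
  }'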

1. Restore Templates

This step restores the index and component templates that are crucial for Camunda 8 to function properly during continued use.

These templates are automatically applied to newly created indices. They are only created on the initial start of the components and the first seeding of the secondary datastore, which is why you have to temporarily start the components to recreate them before you can restore all Elasticsearch/OpenSearch snapshots.

Start Camunda 8 configured with your secondary datastore endpoint

  • For example, deploy the Camunda Helm chart.
  • For a manual setup, start the Camunda 8 components manually.
  • Depending on your setup this can mean Operate, Optimize, Tasklist, Zeebe, and the required secondary datastore.

The templates are created by Operate, Optimize, and Tasklist on startup during the first seeding of the datastore. Zeebe creates its templates whenever required, not only on the initial start. We recommend starting your full required Camunda 8 stack and waiting for the applications to report as healthy.

You can confirm the successful creation of the index templates by using the Elasticsearch/OpenSearch API. The index templates rely on the component templates, so it also confirms these were successfully recreated.

The following uses the Elasticsearch index template API to list all index templates.

curl -s "$ELASTIC_ENDPOINT/_index_template" \
| jq -r '.index_templates[].name' \
| grep -E 'operate|tasklist|optimize|zeebe' \
| sort
Example Output
operate-batch-operation-1.0.0_template
operate-decision-instance-8.3.0_template
operate-event-8.3.0_template
operate-flownode-instance-8.3.1_template
operate-incident-8.3.1_template
operate-job-8.6.0_template
operate-list-view-8.3.0_template
operate-message-8.5.0_template
operate-operation-8.4.1_template
operate-post-importer-queue-8.3.0_template
operate-sequence-flow-8.3.0_template
operate-user-task-8.5.0_template
operate-variable-8.3.0_template
tasklist-draft-task-variable-8.3.0_template
tasklist-task-8.5.0_template
tasklist-task-variable-8.3.0_template
...
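
Optionally, you can also list the component templates directly. The following sketch uses the Elasticsearch component template API:

curl -s "$ELASTIC_ENDPOINT/_component_template" \
  | jq -r '.component_templates[].name' \
  | grep -E 'operate|tasklist|optimize|zeebe' \
  | sort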

2. Find available backup IDs

With the environment that you started to restore the datastore templates still active, you can quickly find available backups by using the backup APIs of each component to list them.

note

You will need the output for your chosen backup ID in the following steps to restore the datastore snapshots, as it contains the snapshot names.

Operate Example

Using the Operate management API to list backups.

curl $OPERATE_MANAGEMENT_API/actuator/backups
[
  {
    "backupId": 1748937221,
    "state": "COMPLETED",
    "details": [
      {
        "snapshotName": "camunda_operate_1748937221_8.7.2_part_1_of_6",
        "state": "SUCCESS",
        "startTime": "2025-06-03T07:55:15.685+0000",
        "failures": []
      },
      {
        "snapshotName": "camunda_operate_1748937221_8.7.2_part_2_of_6",
        "state": "SUCCESS",
        "startTime": "2025-06-03T07:55:16.288+0000",
        "failures": []
      },
      {
        "snapshotName": "camunda_operate_1748937221_8.7.2_part_3_of_6",
        "state": "SUCCESS",
        "startTime": "2025-06-03T07:55:17.092+0000",
        "failures": []
      },
      {
        "snapshotName": "camunda_operate_1748937221_8.7.2_part_4_of_6",
        "state": "SUCCESS",
        "startTime": "2025-06-03T07:55:17.293+0000",
        "failures": []
      },
      {
        "snapshotName": "camunda_operate_1748937221_8.7.2_part_5_of_6",
        "state": "SUCCESS",
        "startTime": "2025-06-03T07:55:18.298+0000",
        "failures": []
      },
      {
        "snapshotName": "camunda_operate_1748937221_8.7.2_part_6_of_6",
        "state": "SUCCESS",
        "startTime": "2025-06-03T07:55:18.499+0000",
        "failures": []
      }
    ]
  }
]
Optimize Example

Using the Optimize management API to list backups.

curl $OPTIMIZE_MANAGEMENT_API/actuator/backups
[
  {
    "backupId": 1748937221,
    "state": "COMPLETED",
    "details": [
      {
        "snapshotName": "camunda_optimize_1748937221_8.7.1_part_1_of_2",
        "state": "SUCCESS",
        "startTime": "2025-06-03T07:53:54.389+0000",
        "failures": []
      },
      {
        "snapshotName": "camunda_optimize_1748937221_8.7.1_part_2_of_2",
        "state": "SUCCESS",
        "startTime": "2025-06-03T07:53:54.389+0000",
        "failures": []
      }
    ]
  }
]
Tasklist Example

Using the Tasklist management API to list backups.

curl $TASKLIST_MANAGEMENT_API/actuator/backups
[
  {
    "backupId": 1748937221,
    "state": "COMPLETED",
    "failureReason": null,
    "details": [
      {
        "snapshotName": "camunda_tasklist_1748937221_8.7.2_part_6_of_6",
        "state": "SUCCESS",
        "startTime": "2025-06-03T07:56:56.519+0000",
        "failures": []
      },
      {
        "snapshotName": "camunda_tasklist_1748937221_8.7.2_part_5_of_6",
        "state": "SUCCESS",
        "startTime": "2025-06-03T07:56:56.519+0000",
        "failures": []
      },
      {
        "snapshotName": "camunda_tasklist_1748937221_8.7.2_part_4_of_6",
        "state": "SUCCESS",
        "startTime": "2025-06-03T07:56:56.519+0000",
        "failures": []
      },
      {
        "snapshotName": "camunda_tasklist_1748937221_8.7.2_part_3_of_6",
        "state": "SUCCESS",
        "startTime": "2025-06-03T07:56:56.519+0000",
        "failures": []
      },
      {
        "snapshotName": "camunda_tasklist_1748937221_8.7.2_part_2_of_6",
        "state": "SUCCESS",
        "startTime": "2025-06-03T07:56:56.519+0000",
        "failures": []
      },
      {
        "snapshotName": "camunda_tasklist_1748937221_8.7.2_part_1_of_6",
        "state": "SUCCESS",
        "startTime": "2025-06-03T07:56:56.519+0000",
        "failures": []
      }
    ]
  }
]
Zeebe Example

Using the Zeebe management API to list backups.

curl $GATEWAY_MANAGEMENT_API/actuator/backups
[
  {
    "backupId": 1748937221,
    "state": "COMPLETED",
    "details": [
      {
        "partitionId": 1,
        "state": "COMPLETED",
        "createdAt": "2025-06-03T08:06:10.408893628Z",
        "brokerVersion": "8.7.1"
      },
      {
        "partitionId": 2,
        "state": "COMPLETED",
        "createdAt": "2025-06-03T08:06:10.408893628Z",
        "brokerVersion": "8.7.1"
      },
      {
        "partitionId": 3,
        "state": "COMPLETED",
        "createdAt": "2025-06-03T08:06:10.408893628Z",
        "brokerVersion": "8.7.1"
      }
    ]
  }
]
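
Once you have chosen a backup ID, you can also query that single backup directly. The following sketch uses the Operate management API with the example backup ID from above; the same path should work for the other components' management APIs as well.

curl $OPERATE_MANAGEMENT_API/actuator/backups/1748937221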

As there may be cases where using the management APIs is not possible, an alternative approach is covered in the following examples.

Available Backups on Elasticsearch/OpenSearch

In this scenario, follow the steps above, but once your Elasticsearch/OpenSearch is available, use the snapshot API to list available snapshots and correlate them with the snapshots in your backup bucket (AWS S3, Azure Blob Storage, Google GCS). It is important to use the same ID for all backups.

The following uses the Elasticsearch snapshot API to list all registered snapshots in a repository.

ELASTIC_ENDPOINT=http://localhost:9200       # Your Elasticsearch endpoint
ELASTIC_SNAPSHOT_REPOSITORY=camunda_backup # Your defined snapshot repository on Elasticsearch for Camunda backups

# Get a list of all available snapshots
curl $ELASTIC_ENDPOINT/_snapshot/$ELASTIC_SNAPSHOT_REPOSITORY/_all

# Get a list of all available snapshots and use jq to parse just the names for easier readability
curl $ELASTIC_ENDPOINT/_snapshot/$ELASTIC_SNAPSHOT_REPOSITORY/_all | jq -r '.snapshots[].snapshot'

Ensure that all backups and parts exist for each component for your chosen backup ID.

Example output
camunda_optimize_1748937221_8.7.1_part_1_of_2
camunda_optimize_1748937221_8.7.1_part_2_of_2
camunda_operate_1748937221_8.7.2_part_1_of_6
camunda_operate_1748937221_8.7.2_part_2_of_6
camunda_operate_1748937221_8.7.2_part_3_of_6
camunda_operate_1748937221_8.7.2_part_4_of_6
camunda_operate_1748937221_8.7.2_part_5_of_6
camunda_operate_1748937221_8.7.2_part_6_of_6
camunda_tasklist_1748937221_8.7.2_part_1_of_6
camunda_tasklist_1748937221_8.7.2_part_2_of_6
camunda_tasklist_1748937221_8.7.2_part_3_of_6
camunda_tasklist_1748937221_8.7.2_part_4_of_6
camunda_tasklist_1748937221_8.7.2_part_5_of_6
camunda_tasklist_1748937221_8.7.2_part_6_of_6
camunda_zeebe_records_backup_1748937221
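
To check that your chosen backup ID is complete, you can filter the snapshot list down to a single backup ID, for example with a small sketch like the following (using the example ID from above):

BACKUP_ID=1748937221
curl -s "$ELASTIC_ENDPOINT/_snapshot/$ELASTIC_SNAPSHOT_REPOSITORY/_all" \
  | jq -r '.snapshots[].snapshot' \
  | grep "$BACKUP_ID" \
  | sort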

Available Backups of Zeebe Partitions

For the Zeebe partitions backup, you will need to check your configured backup store for available backup IDs, and correlate those to the available backups on Elasticsearch/OpenSearch.

Zeebe creates a folder for each partition ID, with a subfolder for each backup ID.

warning

Using the Zeebe Management Backup API is the recommended method for listing available backups, as it ensures the backups are complete and valid. Manually identifying backup IDs can result in restoring an incomplete backup, which will fail the restore process. If this occurs, you will need to choose a different backup ID and repeat the restore process for all components with the new backup ID, including the datastore, to avoid mismatched backup windows and potential data loss.

Example output

Example in the case of 3 partitions with two available backups:

# PartitionID folder
#   BackupID folder
1/
├── 1748937221
└── 1749130104
2/
├── 1748937221
└── 1749130104
3/
├── 1748937221
└── 1749130104
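
As a sketch, with an S3 backup store you could list the partition and backup ID prefixes using the AWS CLI. The bucket name is a placeholder and must match your configured Zeebe backup bucket; adjust the path components if you configured a base path.

# Minimal sketch, assuming an S3 backup store; the bucket name is a placeholder
aws s3 ls "s3://my-zeebe-backup-bucket/" --recursive \
  | awk '{print $4}' \
  | cut -d/ -f1-2 \
  | sort -u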

3. Stop all components apart from Elasticsearch/OpenSearch

If you are using an external Elasticsearch/OpenSearch and Kubernetes, you could temporarily uninstall the Camunda Helm chart or scale all components to 0, so that nothing is running and potentially interacting with the datastore.

In a manual setup, you can simply stop all components.

If you are using the Camunda Helm chart with an embedded Elasticsearch, you can achieve this by (for example) disabling all other components in the values.yml.

elasticsearch:
  enabled: true

connectors:
  enabled: false
identity:
  enabled: false
optimize:
  enabled: false
operate:
  enabled: false
tasklist:
  enabled: false
zeebe:
  enabled: false
zeebe-gateway:
  enabled: false
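
Applied with the Camunda Helm chart, this could look like the following sketch. The release name, namespace, and chart version are placeholders and should match your installation (see the version matrix example above).

helm upgrade camunda camunda/camunda-platform \
  --version 12.0.2 \
  --namespace camunda \
  -f values.yml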

4. Delete all indices

Now that you have successfully restored the templates and stopped the components (so no new indices are created), you must delete the existing indices to be able to restore the snapshots successfully (otherwise, the existing indices will block the restore).

The following uses the Elasticsearch CAT API to list all indices. It also uses the Elasticsearch Index API to delete an index.

for index in $(curl -s "$ELASTIC_ENDPOINT/_cat/indices?h=index" \
  | grep -E 'operate|tasklist|optimize|zeebe'); do
  echo "Deleting index: $index"
  curl -X DELETE "$ELASTIC_ENDPOINT/$index"
done
Example Output
Deleting index: operate-import-position-8.3.0_
{"acknowledged":true}Deleting index: operate-migration-steps-repository-1.1.0_
{"acknowledged":true}Deleting index: operate-flownode-instance-8.3.1_
{"acknowledged":true}Deleting index: operate-event-8.3.0_
{"acknowledged":true}Deleting index: operate-incident-8.3.1_
{"acknowledged":true}Deleting index: tasklist-web-session-1.1.0_
{"acknowledged":true}Deleting index: tasklist-variable-8.3.0_
{"acknowledged":true}Deleting index: operate-user-task-8.5.0_
{"acknowledged":true}Deleting index: tasklist-import-position-8.2.0_
{"acknowledged":true}Deleting index: tasklist-task-variable-8.3.0_
{"acknowledged":true}Deleting index: tasklist-flownode-instance-8.3.0_
{"acknowledged":true}Deleting index: operate-process-8.3.0_
{"acknowledged":true}Deleting index: tasklist-process-instance-8.3.0_
{"acknowledged":true}Deleting index: operate-operation-8.4.1_
{"acknowledged":true}Deleting index: operate-job-8.6.0_
{"acknowledged":true}Deleting index: operate-metric-8.3.0_
{"acknowledged":true}Deleting index: tasklist-migration-steps-repository-1.1.0_
{"acknowledged":true}Deleting index: operate-decision-8.3.0_
{"acknowledged":true}Deleting index: tasklist-process-8.4.0_
{"acknowledged":true}Deleting index: operate-variable-8.3.0_
{"acknowledged":true}Deleting index: operate-message-8.5.0_
{"acknowledged":true}Deleting index: operate-decision-requirements-8.3.0_
{"acknowledged":true}Deleting index: operate-batch-operation-1.0.0_
{"acknowledged":true}Deleting index: operate-web-session-1.1.0_
{"acknowledged":true}Deleting index: tasklist-user-1.4.0_
{"acknowledged":true}Deleting index: operate-list-view-8.3.0_
{"acknowledged":true}Deleting index: tasklist-metric-8.3.0_
{"acknowledged":true}Deleting index: operate-post-importer-queue-8.3.0_
{"acknowledged":true}Deleting index: tasklist-task-8.5.0_
{"acknowledged":true}Deleting index: tasklist-form-8.4.0_
{"acknowledged":true}Deleting index: operate-user-1.2.0_
{"acknowledged":true}Deleting index: tasklist-draft-task-variable-8.3.0_
{"acknowledged":true}Deleting index: operate-decision-instance-8.3.0_
{"acknowledged":true}Deleting index: operate-sequence-flow-8.3.0_
{"acknowledged":true}

5. Restore Elasticsearch/OpenSearch snapshots

Although the order was important during backup creation to ensure consistency, you can restore the backed-up indices in any order.

As the components do not have an endpoint to restore the backup in Elasticsearch, you will need to restore it yourself directly in your selected datastore.

Based on your chosen backup ID in find available backup IDs, you can now restore the snapshots in Elasticsearch/OpenSearch for each available backup under the same backup ID.

The following uses the Elasticsearch snapshot API to restore a snapshot.

curl -XPOST "$ELASTIC_ENDPOINT/_snapshot/$ELASTIC_SNAPSHOT_REPOSITORY/$SNAPSHOT_NAME/_restore?wait_for_completion=true"

Where $SNAPSHOT_NAME is any of the following snapshot names, based on our example in find available backup IDs.

Ensure that all your backups correspond to the same backup ID and that each one is restored one-by-one.

camunda_optimize_1748937221_8.7.1_part_1_of_2
camunda_optimize_1748937221_8.7.1_part_2_of_2
camunda_operate_1748937221_8.7.2_part_1_of_6
camunda_operate_1748937221_8.7.2_part_2_of_6
camunda_operate_1748937221_8.7.2_part_3_of_6
camunda_operate_1748937221_8.7.2_part_4_of_6
camunda_operate_1748937221_8.7.2_part_5_of_6
camunda_operate_1748937221_8.7.2_part_6_of_6
camunda_tasklist_1748937221_8.7.2_part_1_of_6
camunda_tasklist_1748937221_8.7.2_part_2_of_6
camunda_tasklist_1748937221_8.7.2_part_3_of_6
camunda_tasklist_1748937221_8.7.2_part_4_of_6
camunda_tasklist_1748937221_8.7.2_part_5_of_6
camunda_tasklist_1748937221_8.7.2_part_6_of_6
camunda_zeebe_records_backup_1748937221
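
Instead of restoring each snapshot by hand, a sketch like the following loops over all snapshots of the chosen backup ID and restores them one by one (sequentially, using wait_for_completion):

BACKUP_ID=1748937221 # Change to your chosen backup ID
for SNAPSHOT_NAME in $(curl -s "$ELASTIC_ENDPOINT/_snapshot/$ELASTIC_SNAPSHOT_REPOSITORY/_all" \
  | jq -r '.snapshots[].snapshot' \
  | grep "$BACKUP_ID"); do
  echo "Restoring snapshot: $SNAPSHOT_NAME"
  curl -XPOST "$ELASTIC_ENDPOINT/_snapshot/$ELASTIC_SNAPSHOT_REPOSITORY/$SNAPSHOT_NAME/_restore?wait_for_completion=true"
done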

Step 2: Restore Zeebe Cluster

Prerequisites

The following specific prerequisites are required when restoring the Zeebe Cluster:

Pre-existing data
Persistent volumes or disks must not contain any pre-existing data.

Backup storage
Zeebe is configured with the same backup storage as outlined in the prerequisites.

Components stopped
It’s critical that no Camunda components are running during a Zeebe restore. Restored components may propagate an incorrect cluster configuration, potentially disrupting cluster communication.

Restore Zeebe Cluster

note

During the restoration of the Elasticsearch/OpenSearch state, we had to temporarily deploy Zeebe. This will have resulted in persistent volumes on Kubernetes, or in a filled data directory on each Zeebe broker in the case of a manual deployment.

In the case of Kubernetes, remove all related persistent volumes:

# Delete all Zeebe-related PVCs in the current namespace
kubectl get pvc --no-headers \
  | grep zeebe \
  | while read -r pvc rest; do
      kubectl delete pvc "$pvc"
    done

New persistent volumes will be created the next time the Camunda Helm chart is installed or upgraded.

In the case of a manual deployment, remove the data directory of each Zeebe broker.

Camunda provides a standalone restore app, which must be run on each node where a Zeebe broker will be running. It is a Spring Boot application, similar to the broker, and can be run using the binary provided as part of the distribution. The app is configured the same way a broker is configured: via environment variables or the configuration file located in config/application.yaml.
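
For a manual deployment, running the restore app could look like the following sketch. The backup store variables only illustrate an S3 store and are placeholders; use the same backup store configuration as your brokers, and the broker configuration of the node you are restoring.

# Minimal sketch, assuming an S3 backup store; all values are placeholders
export ZEEBE_BROKER_DATA_BACKUP_STORE=S3
export ZEEBE_BROKER_DATA_BACKUP_S3_BUCKETNAME=my-zeebe-backup-bucket
export ZEEBE_BROKER_DATA_BACKUP_S3_REGION=eu-central-1
# Same broker configuration (node id, cluster size, etc.) as the broker that will run on this node
export ZEEBE_BROKER_CLUSTER_NODEID=0

# Path as provided in the Zeebe distribution; 1748937221 is the example backup ID from above
/usr/local/zeebe/bin/restore --backupId=1748937221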

warning

When restoring, provide the same configuration (node id, data directory, cluster size, and replication count) as the broker that will be running on this node. The partition count must be the same as in the backup.

The number of backed-up partitions is also visible in the Zeebe backup store; see how to find available backups. If brokers were dynamically scaled between backup and restore, this is not an issue, as long as the partition count remains unchanged.

Assuming you're using the official Camunda Helm chart, you have to temporarily adjust your Helm values.yml to supply the following.

It overwrites the start command of the resulting Zeebe pods so that they execute a restore script instead of starting the broker. It's important that the backup store is configured so Zeebe can restore from the backup.

The following example is possible starting from Camunda Helm chart version 12.1.0. See the note below the example for how to achieve this with an older Camunda Helm chart version.

zeebe:
  enabled: true
  env:
    # Environment variables to overwrite the Zeebe startup behavior
    - name: ZEEBE_RESTORE
      value: "true"
    - name: ZEEBE_RESTORE_FROM_BACKUP_ID
      value: "$BACKUP_ID" # Change the $BACKUP_ID to your actual value
    # all the envs related to the backup store as outlined in the prerequisites
    - name: ZEEBE_BROKER_DATA_BACKUP_STORE
      value: "S3" # just as an example
    ...

# assuming you're using the inbuilt Elasticsearch, otherwise this should be set to false
elasticsearch:
  enabled: true

connectors:
  enabled: false
identity:
  enabled: false
optimize:
  enabled: false
operate:
  enabled: false
tasklist:
  enabled: false
zeebe-gateway:
  enabled: false
Older Camunda Helm charts

For older Camunda Helm chart versions, you can overwrite the startup behavior of the Zeebe brokers by setting the command:

zeebe:
  enabled: true
  command: ["/usr/local/zeebe/bin/restore", "--backupId=$BACKUP_ID"] # Change the $BACKUP_ID to your actual value
  env:
    # all the envs related to the backup store as outlined in the prerequisites
    ...

If you're not using the Camunda Helm chart, you can use a similar approach natively with Kubernetes to overwrite the command.

The application will exit when it is done, and the pod will restart. This is expected behavior. The restore application will not try to restore the state again, since the partitions were already restored to the persistent disk.

tip

In Kubernetes, Zeebe runs as a StatefulSet, which is meant for long-running, persistent applications. Its pods always restart (the restartPolicy is Always), so the Zeebe pods will restart after the restore application exits. This means you have to observe the Zeebe brokers during the restore, and may have to look at the logs with --previous if a pod has already restarted.

The restore application will not try to restore or overwrite the data again, but note that you may miss the successful first run if you're not actively observing it.
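
For example, checking the restore result on a pod that has already restarted could look like this (the pod name is a placeholder):

kubectl logs camunda-zeebe-0 --previous | grep -i "restored"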

Restore success or failure

If the restore was successful, the app exits with the log message Successfully restored broker from backup.

However, the restore will fail if:

  • There is no valid backup with the given backupId.
  • The backup store is not configured correctly.
  • The configured data directory is not empty.
  • Any other unexpected error occurs.

If the restore fails, you can re-run the application after fixing the root cause.

Step 3: Start all Camunda 8 components

Now that you have restored Elasticsearch/OpenSearch and the Zeebe cluster partitions, you can start all components again and use Camunda 8 as normal.

For example:

  • For Kubernetes, enable all components again in the Helm chart and remove the environment variables that overwrite the Zeebe startup behavior.

  • For a manual setup, execute the broker and all other components in their normal way.

(Optional) Restore a Web Modeler data backup

If you have previously backed up your Web Modeler data, you can restore this backup.

Backups can only be restored with downtime. To restore the database dump, first ensure that Web Modeler is stopped. Then restore the database using the following command:

psql -U <DATABASE_USER> -h <DATABASE_HOST> -p <DATABASE_PORT> -f dump.psql <DATABASE_NAME>

After the database has been restored, you can start Web Modeler again.

danger

When restoring Web Modeler data from a backup, ensure that the IDs of the users stored in your OIDC provider (e.g. Keycloak) do not change between the backup and restore. Otherwise, users may not be able to access their projects after the restore (see Web Modeler's troubleshooting guide).

tip

Some vendors provide tools that help with database backups and restores, such as AWS Backup or Cloud SQL backups.