Version: 8.7

Create a backup

Back up your Camunda 8 Self-Managed components and cluster.

About the backup process

To create a backup, you must complete the following main steps:

  1. Back up WebApps
  2. Back up Zeebe Cluster

You can also optionally back up your Web Modeler data.

Before you begin
  • To create a consistent backup, you must finish backing up the WebApps before backing up the Zeebe Cluster.
  • You must complete the prerequisites before creating a backup.

Step 1: Back up WebApps

Start the backup process by first backing up the WebApps.

note

When backing up the WebApps, the order in which you execute the following sub-steps is not important (for example, you can back up Operate, Optimize, and Tasklist in any order).

Example API endpoint definition

note

This depends heavily on your setup. The following examples are based on the examples given in the Management API section for Kubernetes, using either active port-forwarding or an override of the local curl command.

As noted in the Management API section, this API is typically not publicly exposed. Therefore, you will need to access it directly using any means available within your environment.
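For example, in a Kubernetes-based installation you might use kubectl port-forward to reach the management APIs locally. The following is only a minimal sketch; the service names, namespace, and target ports are assumptions based on a typical Helm release named camunda and must be adjusted to your deployment.

# Hypothetical port-forwards for a Helm release named "camunda" in namespace "camunda";
# adjust service names, namespace, and target ports to match your installation.
kubectl -n camunda port-forward svc/camunda-elasticsearch 9200:9200 &
kubectl -n camunda port-forward svc/camunda-operate 9600:9600 &
kubectl -n camunda port-forward svc/camunda-optimize 9620:8092 &
kubectl -n camunda port-forward svc/camunda-tasklist 9640:9600 &
kubectl -n camunda port-forward svc/camunda-zeebe-gateway 9660:9600 &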

# only export the BACKUP_ID once as it has to stay consistent throughout the backup procedure
export BACKUP_ID=$(date +%s) # Unix timestamp used as a unique, always-increasing ID

export ELASTIC_SNAPSHOT_REPOSITORY="camunda" # the name of your snapshot repository
export ELASTIC_ENDPOINT="http://localhost:9200/"

export OPERATE_MANAGEMENT_API="http://localhost:9600/"
export OPTIMIZE_MANAGEMENT_API="http://localhost:9620/"
export TASKLIST_MANAGEMENT_API="http://localhost:9640/"
export GATEWAY_MANAGEMENT_API="http://localhost:9660/"
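
Before starting, you can check that the snapshot repository configured in the prerequisites is registered. This is a simple sketch using the standard Elasticsearch snapshot repository API; it should return the repository definition rather than an error.

curl -s "$ELASTIC_ENDPOINT/_snapshot/$ELASTIC_SNAPSHOT_REPOSITORY"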

1. Start a backup of Optimize

This step uses the Optimize management backup API.

curl -XPOST "$OPTIMIZE_MANAGEMENT_API/actuator/backups" \
-H "Content-Type: application/json" \
-d "{\"backupId\": $BACKUP_ID}"
Example output
{
"message":"Backup creation for ID 1748937221 has been scheduled. Use the GET API to monitor completion of backup process"
}

2. Start a backup of Operate

This step uses the Operate management backup API.

curl -XPOST "$OPERATE_MANAGEMENT_API/actuator/backups" \
-H "Content-Type: application/json" \
-d "{\"backupId\": $BACKUP_ID}"
Example output
{
"scheduledSnapshots":[
"camunda_operate_1748937221_8.7.2_part_1_of_6",
"camunda_operate_1748937221_8.7.2_part_2_of_6",
"camunda_operate_1748937221_8.7.2_part_3_of_6",
"camunda_operate_1748937221_8.7.2_part_4_of_6",
"camunda_operate_1748937221_8.7.2_part_5_of_6",
"camunda_operate_1748937221_8.7.2_part_6_of_6"
]
}

3. Start a backup of Tasklist

This step uses the Tasklist management backup API.

curl -XPOST "$TASKLIST_MANAGEMENT_API/actuator/backups" \
-H "Content-Type: application/json" \
-d "{\"backupId\": $BACKUP_ID}"
Example output
{
"scheduledSnapshots":[
"camunda_tasklist_1748937221_8.7.2_part_1_of_6",
"camunda_tasklist_1748937221_8.7.2_part_2_of_6",
"camunda_tasklist_1748937221_8.7.2_part_3_of_6",
"camunda_tasklist_1748937221_8.7.2_part_4_of_6",
"camunda_tasklist_1748937221_8.7.2_part_5_of_6",
"camunda_tasklist_1748937221_8.7.2_part_6_of_6"
]
}

4. Wait for the backup of Optimize to complete

This step uses the Optimize management backup API.

curl -s "$OPTIMIZE_MANAGEMENT_API/actuator/backups/$BACKUP_ID"
Example output
{
"backupId":1748937221,
"failureReason":null,
"state":"COMPLETED",
"details":[
{
"snapshotName":"camunda_optimize_1748937221_8.7.1_part_1_of_2",
"state":"SUCCESS",
"startTime":"2025-06-03T07:53:54.389+0000",
"failures":[

]
},
{
"snapshotName":"camunda_optimize_1748937221_8.7.1_part_2_of_2",
"state":"SUCCESS",
"startTime":"2025-06-03T07:53:54.389+0000",
"failures":[

]
}
]
}

Alternatively, use a one-liner that waits until the state is COMPLETED, using a while loop and jq to parse the response JSON:

while [[ "$(curl -s "$OPTIMIZE_MANAGEMENT_API/actuator/backups/$BACKUP_ID" | jq -r .state)" != "COMPLETED" ]]; do echo "Waiting..."; sleep 5; done; echo "Finished backup with ID $BACKUP_ID"

5. Wait for the backup of Operate to complete

This step uses the Operate management backup API.

curl -s "$OPERATE_MANAGEMENT_API/actuator/backups/$BACKUP_ID"
Example output
{
"backupId":1748937221,
"state":"COMPLETED",
"failureReason":null,
"details":[
{
"snapshotName":"camunda_operate_1748937221_8.7.2_part_1_of_6",
"state":"SUCCESS",
"startTime":"2025-06-03T07:55:15.685+0000",
"failures":[

]
},
{
"snapshotName":"camunda_operate_1748937221_8.7.2_part_2_of_6",
"state":"SUCCESS",
"startTime":"2025-06-03T07:55:16.288+0000",
"failures":[

]
},
{
"snapshotName":"camunda_operate_1748937221_8.7.2_part_3_of_6",
"state":"SUCCESS",
"startTime":"2025-06-03T07:55:17.092+0000",
"failures":[

]
},
{
"snapshotName":"camunda_operate_1748937221_8.7.2_part_4_of_6",
"state":"SUCCESS",
"startTime":"2025-06-03T07:55:17.293+0000",
"failures":[

]
},
{
"snapshotName":"camunda_operate_1748937221_8.7.2_part_5_of_6",
"state":"SUCCESS",
"startTime":"2025-06-03T07:55:18.298+0000",
"failures":[

]
},
{
"snapshotName":"camunda_operate_1748937221_8.7.2_part_6_of_6",
"state":"SUCCESS",
"startTime":"2025-06-03T07:55:18.499+0000",
"failures":[

]
}
]
}

Alternatively, use a one-liner that waits until the state is COMPLETED, using a while loop and jq to parse the response JSON:

while [[ "$(curl -s "$OPERATE_MANAGEMENT_API/actuator/backups/$BACKUP_ID" | jq -r .state)" != "COMPLETED" ]]; do echo "Waiting..."; sleep 5; done; echo "Finished backup with ID $BACKUP_ID"

6. Wait for the backup of Tasklist to complete

This step uses the Tasklist management backup API.

curl "$TASKLIST_MANAGEMENT_API/actuator/backups/$BACKUP_ID"
Example output
{
"backupId":1748937221,
"state":"COMPLETED",
"failureReason":null,
"details":[
{
"snapshotName":"camunda_tasklist_1748937221_8.7.2_part_1_of_6",
"state":"SUCCESS",
"startTime":"2025-06-03T07:56:56.519+0000",
"failures":[

]
},
{
"snapshotName":"camunda_tasklist_1748937221_8.7.2_part_2_of_6",
"state":"SUCCESS",
"startTime":"2025-06-03T07:56:57.324+0000",
"failures":[

]
},
{
"snapshotName":"camunda_tasklist_1748937221_8.7.2_part_3_of_6",
"state":"SUCCESS",
"startTime":"2025-06-03T07:56:57.927+0000",
"failures":[

]
},
{
"snapshotName":"camunda_tasklist_1748937221_8.7.2_part_4_of_6",
"state":"SUCCESS",
"startTime":"2025-06-03T07:56:58.329+0000",
"failures":[

]
},
{
"snapshotName":"camunda_tasklist_1748937221_8.7.2_part_5_of_6",
"state":"SUCCESS",
"startTime":"2025-06-03T07:56:58.933+0000",
"failures":[

]
},
{
"snapshotName":"camunda_tasklist_1748937221_8.7.2_part_6_of_6",
"state":"SUCCESS",
"startTime":"2025-06-03T07:56:59.535+0000",
"failures":[

]
}
]
}

Alternatively, use a one-liner that waits until the state is COMPLETED, using a while loop and jq to parse the response JSON:

while [[ "$(curl -s "$TASKLIST_MANAGEMENT_API/actuator/backups/$BACKUP_ID" | jq -r .state)" != "COMPLETED" ]]; do echo "Waiting..."; sleep 5; done; echo "Finished backup with ID $BACKUP_ID"

Step 2: Back up Zeebe Cluster

Once you have completed backing up all the WebApps, you can back up the Zeebe Cluster.

caution

When backing up the Zeebe Cluster, you must execute the following sub-steps in sequential order.

1. Soft pause exporting in Zeebe

This step uses the management API.

Zeebe continues exporting records, but does not delete exported records from the log (no log compaction). This makes the backup a hot backup, as covered in the backup considerations.

curl -XPOST "$GATEWAY_MANAGEMENT_API/actuator/exporting/pause?soft=true"
Example output
note

Yes, 204 is the expected result and indicates a successful soft pause.

{
"body":null,
"status":204,
"contentType":null
}

2. Create a backup of the exported Zeebe indices in Elasticsearch/OpenSearch

You can create this backup using the respective Snapshots API.

By default, the indices are prefixed with zeebe-record. If you configured a different prefix for the Elasticsearch/OpenSearch exporter in Zeebe, use that prefix instead.

The following uses the Elasticsearch snapshot API to create a snapshot.

curl -XPUT "$ELASTIC_ENDPOINT/_snapshot/$ELASTIC_SNAPSHOT_REPOSITORY/camunda_zeebe_records_backup_$BACKUP_ID?wait_for_completion=true" \
-H 'Content-Type: application/json' \
-d '{
"indices": "zeebe-record*",
"feature_states": ["none"]
}'
Example output
{
"snapshot":{
"snapshot":"camunda_zeebe_records_backup_1748937221",
"uuid":"1p_HdzKeTZ-zY-SN1LJ9VQ",
"repository":"camunda",
"version_id":8521000,
"version":"8.17.0-8.17.4",
"indices":[
"zeebe-record_process_8.7.2_2025-06-03",
"zeebe-record_job_8.7.2_2025-06-03",
"zeebe-record_process-instance-creation_8.7.2_2025-06-03",
"zeebe-record_process-instance_8.7.2_2025-06-03",
"zeebe-record_deployment_8.7.2_2025-06-03"
],
"data_streams":[

],
"include_global_state":true,
"state":"SUCCESS",
"start_time":"2025-06-03T08:05:10.633Z",
"start_time_in_millis":1748937910633,
"end_time":"2025-06-03T08:05:11.336Z",
"end_time_in_millis":1748937911336,
"duration_in_millis":603,
"failures":[

],
"shards":{
"total":9,
"failed":0,
"successful":9
},
"feature_states":[

]
}
}

3. Wait for the backup of the exported Zeebe indices to complete before proceeding

The following uses the Elasticsearch snapshot API to get the snapshot status.

curl "$ELASTIC_ENDPOINT/_snapshot/$ELASTIC_SNAPSHOT_REPOSITORY/camunda_zeebe_records_backup_$BACKUP_ID/_status"
Example output
{
"snapshots":[
{
"snapshot":"camunda_zeebe_records_backup_1748937221",
"repository":"camunda",
"uuid":"1p_HdzKeTZ-zY-SN1LJ9VQ",
"state":"SUCCESS",
"include_global_state":true,
"shards_stats":{
"initializing":0,
"started":0,
"finalizing":0,
"done":9,
"failed":0,
"total":9
},
"stats":{
"incremental":{
"file_count":0,
"size_in_bytes":0
},
"total":{
"file_count":9,
"size_in_bytes":0
},
"start_time_in_millis":1748937910633,
"time_in_millis":0
},
"indices":{
"zeebe-record_process_8.7.2_2025-06-03",
"zeebe-record_job_8.7.2_2025-06-03",
"zeebe-record_process-instance-creation_8.7.2_2025-06-03",
"zeebe-record_process-instance_8.7.2_2025-06-03",
"zeebe-record_deployment_8.7.2_2025-06-03"
}
}
]
}
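
If you did not use wait_for_completion=true when creating the snapshot, you can wait for it in a loop analogous to the previous steps. This sketch assumes the _status API reports the state as SUCCESS once the snapshot is finished.

while [[ "$(curl -s "$ELASTIC_ENDPOINT/_snapshot/$ELASTIC_SNAPSHOT_REPOSITORY/camunda_zeebe_records_backup_$BACKUP_ID/_status" | jq -r '.snapshots[0].state')" != "SUCCESS" ]]; do echo "Waiting..."; sleep 5; done; echo "Finished snapshot of the Zeebe indices for backup ID $BACKUP_ID"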

4. Create a backup of the Zeebe broker partitions

This step uses the Zeebe management backup API.

curl -XPOST "$GATEWAY_MANAGEMENT_API/actuator/backups" \
-H "Content-Type: application/json" \
-d "{\"backupId\": $BACKUP_ID}"
Example output
{
"message":"A backup with id 1748937221 has been scheduled. Use GET actuator/backups/1748937221 to monitor the status."
}

5. Wait for the backup of Zeebe to complete before proceeding

This step uses the Zeebe management backup API.

curl "$GATEWAY_MANAGEMENT_API/actuator/backups/$BACKUP_ID"
Example output
{
"backupId":1748937221,
"state":"COMPLETED",
"details":[
{
"partitionId":1,
"state":"COMPLETED",
"createdAt":"2025-06-03T08:06:06.246997293Z",
"lastUpdatedAt":"2025-06-03T08:06:10.408893628Z",
"checkpointPosition":1,
"brokerVersion":"8.7.2"
}
]
}

Alternatively, use a one-liner that waits until the state is COMPLETED, using a while loop and jq to parse the response JSON:

while [[ "$(curl -s "$GATEWAY_MANAGEMENT_API/actuator/backups/$BACKUP_ID" | jq -r .state)" != "COMPLETED" ]]; do echo "Waiting..."; sleep 5; done; echo "Finished backup with ID $BACKUP_ID"

6. Resume exporting in Zeebe using the management API

curl -XPOST "$GATEWAY_MANAGEMENT_API/actuator/exporting/resume"
Example output
note

Yes, 204 is the expected result and indicates a successful resume.

{
"body":null,
"status":204,
"contentType":null
}
warning

If any of the steps above fail, you might have to restart with a new backup ID. Ensure Zeebe exporting is resumed if the backup process is force-quit partway through.
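
If you wrap these steps in a shell script, one way to reduce the risk of leaving exporting paused is an exit trap that always calls the resume endpoint, regardless of where the script stops. This is a minimal sketch, assuming the environment variables defined above.

# Minimal sketch: always resume exporting when the script exits, even after a failure.
trap 'curl -XPOST "$GATEWAY_MANAGEMENT_API/actuator/exporting/resume"' EXIT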

(Optional) Back up Web Modeler data

To create a Web Modeler data backup, refer to the official PostgreSQL documentation to back up the database that Web Modeler uses.

For example, to create a backup of the database using pg_dumpall, use the following command:

pg_dumpall -U <DATABASE_USER> -h <DATABASE_HOST> -p <DATABASE_PORT> -f dump.psql --quote-all-identifiers
Password: <DATABASE_PASSWORD>

pg_dumpall might ask multiple times for the same password. The database will be dumped into dump.psql.
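
To avoid the repeated prompts, you can alternatively supply the password through the standard PGPASSWORD environment variable (or a ~/.pgpass file), for example:

# Supplying the password via libpq's PGPASSWORD variable avoids interactive prompts.
PGPASSWORD=<DATABASE_PASSWORD> pg_dumpall -U <DATABASE_USER> -h <DATABASE_HOST> -p <DATABASE_PORT> -f dump.psql --quote-all-identifiers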

note

Database dumps created with pg_dumpall/pg_dump can only be restored into a database with the same or later version of PostgreSQL. See the PostgreSQL documentation for details.

Cleaning up backups

Depending on your company’s backup policies (for example, retention periods and the number of backups to keep), you should regularly clean up old backups to reduce storage costs and manage resources efficiently.

You can use the delete backup APIs for each component to remove the associated resources from the configured backup storage. Provide the same backup ID in every call to remove the backup from all backup stores.

For Zeebe, you must also remove the separately backed-up zeebe-record index snapshot using the Elasticsearch/OpenSearch API directly.
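
As a sketch, assuming the environment variables defined above and the delete backup endpoint of each component's management API, a cleanup of a single backup ID could look like this:

# Delete the backup from each component's backup store (use the same BACKUP_ID everywhere).
curl -XDELETE "$OPTIMIZE_MANAGEMENT_API/actuator/backups/$BACKUP_ID"
curl -XDELETE "$OPERATE_MANAGEMENT_API/actuator/backups/$BACKUP_ID"
curl -XDELETE "$TASKLIST_MANAGEMENT_API/actuator/backups/$BACKUP_ID"
curl -XDELETE "$GATEWAY_MANAGEMENT_API/actuator/backups/$BACKUP_ID"

# Delete the separately created snapshot of the zeebe-record indices.
curl -XDELETE "$ELASTIC_ENDPOINT/_snapshot/$ELASTIC_SNAPSHOT_REPOSITORY/camunda_zeebe_records_backup_$BACKUP_ID"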