Version: 8.9 (unreleased)

RDBMS configuration overview

Camunda can use a relational database (RDBMS) as the secondary storage backend for Operate, Tasklist, Identity, and the Camunda REST API.

This page explains how RDBMS configuration works at the application level. If you are deploying with Helm, also refer to the Helm chart database configuration guide referenced in the Database driver section below.

For supported database vendors and versions, see the
RDBMS support policy.

Enable RDBMS as secondary storage

To activate an RDBMS backend, complete two configuration steps:

  1. Enable the RDBMS exporter in Zeebe, which streams workflow data to the database.
  2. Configure the application layer (Operate, Tasklist, Identity, REST API) to use RDBMS for secondary storage.

Example configuration:

# Enable the RDBMS exporter in Zeebe
zeebe:
  broker:
    exporters:
      rdbms:
        className: camunda.data.exporters.rdbms.className

# Configure secondary storage for Camunda applications
camunda:
  data:
    secondary-storage:
      type: rdbms
      rdbms:
        url: jdbc:postgresql://localhost:5432/camunda
        username: camunda
        password: camunda

The RDBMS exporter can be used alongside other exporters, but enabling multiple exporters may affect performance.
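
For illustration, a minimal sketch of how a second exporter would sit next to the RDBMS exporter under zeebe.broker.exporters. The other-exporter entry and its className are hypothetical placeholders, not real components:

zeebe:
  broker:
    exporters:
      rdbms:
        className: camunda.data.exporters.rdbms.className   # RDBMS exporter, as in the example above
      other-exporter:                                        # hypothetical second exporter, placeholder name
        className: org.example.OtherExporter                 # hypothetical class, for illustration only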

Schema management

Camunda uses Liquibase to automatically create and update the database schema on startup.

Liquibase creates two internal management tables:

  • DATABASECHANGELOG
  • DATABASECHANGELOGLOCK

These tables must not be modified or deleted.

For Helm deployments requiring manual schema control or access to vendor-specific SQL, see:
Access SQL and Liquibase scripts.

Configure table prefix

To add a prefix to all Camunda-managed database tables:

camunda.data.secondary-storage.rdbms.prefix: c8_

Disable automatic schema creation

If your organization manages the database schema manually:

camunda.data.secondary-storage.rdbms.auto-ddl: false

SQL scripts for manual schema creation are documented in the Liquibase/SQL access guide linked above.
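
Both settings can also be expressed in application YAML form. A minimal sketch combining the two properties shown above (values are examples):

camunda:
  data:
    secondary-storage:
      rdbms:
        prefix: c8_        # optional prefix applied to all Camunda-managed tables
        auto-ddl: false    # disable automatic schema creation via Liquibase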

Database privileges

The configured database user must have the following privileges on all Camunda tables:

  • SELECT
  • INSERT
  • UPDATE
  • DELETE

Additional privileges for automatic schema management

If Liquibase schema management is enabled, the following privileges must be granted before the first startup:

  • CREATE
  • ALTER
  • DROP

Additional privilege for purge operations

If using the RDBMS purge feature, the following privilege is required:

  • TRUNCATE


Database driver

Camunda images include JDBC drivers for all supported databases except Oracle and MySQL.

If you use one of these databases, you must provide the driver yourself.

Docker Compose

When running Camunda with Docker Compose, mount the driver into /driver-lib:

services:
  camunda:
    image: camunda/camunda:<tag>
    volumes:
      - <local-path>/driver-lib:/driver-lib

Place the driver JAR directly inside the mounted directory (not in subfolders).

Helm

If you are using the Helm charts, refer to the database configuration guide for the supported driver configuration options:

Database configuration

RDBMS configuration properties are defined under:

camunda.data.secondary-storage.rdbms.*
  • url: JDBC connection URL. Default: empty
  • username: Username for the connection. Default: empty
  • password: Password for the connection. Default: empty
  • auto-ddl: Enables Liquibase schema management. Default: true
  • prefix: Optional table name prefix. Default: ""
  • database-vendor-id: Manually override vendor detection (postgres, mariadb, etc.). Default: empty

Connection pool configuration

Camunda uses HikariCP for JDBC connection pooling. The following properties can be adjusted:

  • camunda.data.secondary-storage.rdbms.connection-pool.maximum-pool-size: Maximum number of simultaneous connections. Default: 10
  • camunda.data.secondary-storage.rdbms.connection-pool.minimum-idle: Minimum number of idle connections. Default: 10
  • camunda.data.secondary-storage.rdbms.connection-pool.idle-timeout: Timeout (ms) before closing an idle connection. Default: 600000
  • camunda.data.secondary-storage.rdbms.connection-pool.max-lifetime: Maximum lifetime (ms) of each connection before it is closed and replaced. Default: 1800000
  • camunda.data.secondary-storage.rdbms.connection-pool.connection-timeout: Maximum time (ms) the application waits for a connection from the pool. Default: 30000
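
A sketch of the same pool settings in application YAML form (the values below are illustrative, not tuning recommendations):

camunda:
  data:
    secondary-storage:
      rdbms:
        connection-pool:
          maximum-pool-size: 20      # allow up to 20 concurrent connections
          minimum-idle: 5            # keep at least 5 idle connections open
          idle-timeout: 600000       # close idle connections after 10 minutes (ms)
          max-lifetime: 1800000      # recycle each connection after 30 minutes (ms)
          connection-timeout: 30000  # wait at most 30 seconds for a free connection (ms)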

Exporter configuration

The RDBMS exporter is automatically enabled when:

camunda.data.secondary-storage.type: rdbms

The following additional configuration options are available under camunda.data.secondary-storage.rdbms:

Exporter performance settings

  • flush-interval: Maximum time a record waits in the flush queue before being flushed and committed to the database. Default: PT0.5S
  • max-queue-size: Maximum number of records allowed in the flush queue before a forced flush. Default: 1000
  • queue-memory-limit: Maximum memory usage (MB) allowed for queued records before a forced flush. Default: 20
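
For reference, a sketch of these performance settings in application YAML form (values are illustrative):

camunda:
  data:
    secondary-storage:
      rdbms:
        flush-interval: PT0.5S    # flush queued records at least every 500 ms
        max-queue-size: 1000      # force a flush once 1000 records are queued
        queue-memory-limit: 20    # force a flush once queued records exceed 20 MB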

History cleanup

The RDBMS exporter provides automatic history cleanup, which works in two stages:

  1. TTL marking
    When a process instance finishes, its data is marked for deletion once its time-to-live expires.

  2. Periodic cleanup job
    A scheduled cleanup job deletes marked records in batches and adjusts future intervals dynamically:

  • If no records are deleted → interval doubles (up to maxHistoryCleanupInterval)
  • If the batch size is fully used → interval halves (down to minHistoryCleanupInterval)
  • Otherwise → interval remains unchanged

History cleanup configuration

  • history.default-history-ttl: TTL for finished process instances and related data (ISO-8601 duration). Default: P30D
  • history.default-batch-operation-ttl: TTL for batch operation history. Default: P5D
  • history.batch-operation-cancel-process-instance-ttl: TTL for cancel-process-instance batch operations. Default: P5D
  • history.batch-operation-migrate-process-instance-ttl: TTL for migrate-process-instance batch operations. Default: P5D
  • history.batch-operation-modify-process-instance-ttl: TTL for modify-process-instance batch operations. Default: P5D
  • history.batch-operation-resolve-incident-ttl: TTL for resolve-incident batch operations. Default: P5D
  • history.historyCleanupBatchSize: Maximum number of entries deleted per cleanup run. Default: 1000
  • history.minHistoryCleanupInterval: Minimum duration between cleanup runs (ISO-8601 duration). Default: PT1M
  • history.maxHistoryCleanupInterval: Maximum duration between cleanup runs (ISO-8601 duration). Default: PT60M
  • history.usage-metrics-ttl: TTL for usage metrics. Default: P730D
  • history.usage-metrics-cleanup: Interval between usage metrics cleanup runs (ISO-8601 duration). Default: PT24H
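
A sketch of a history cleanup configuration in application YAML form, using the property names from the list above (values are examples, not recommendations):

camunda:
  data:
    secondary-storage:
      rdbms:
        history:
          default-history-ttl: P14D         # keep finished process instances for 14 days
          default-batch-operation-ttl: P5D  # keep batch operation history for 5 days
          historyCleanupBatchSize: 1000     # delete at most 1000 entries per cleanup run
          minHistoryCleanupInterval: PT1M   # never run cleanup more often than once per minute
          maxHistoryCleanupInterval: PT60M  # back off to at most one run per hour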

Exporter cache configuration

  • process-cache.max-size: Maximum number of process definitions held in the exporter cache. Default: 1000
  • batch-operation-cache.max-size: Maximum number of cached batch operations. Default: 1000
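
And a sketch of the cache settings in application YAML form (values are illustrative):

camunda:
  data:
    secondary-storage:
      rdbms:
        process-cache:
          max-size: 2000           # cache up to 2000 process definitions
        batch-operation-cache:
          max-size: 1000           # cache up to 1000 batch operations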