Camunda Exporter
The Camunda Exporter exports Zeebe records directly to Elasticsearch or OpenSearch. Unlike the Elasticsearch and OpenSearch exporters, it exports records in the format required by Operate and Tasklist, so you don’t need to configure additional importers or data transformations.
Using the Camunda Exporter can increase process instance throughput and reduce the latency of changes appearing in Operate and Tasklist.
When exporting, indexes are created as required and not recreated if they already exist. However, disabling the exporter does not delete indexes. Administrators must handle deletions. You can configure a retention policy to automatically delete data after a set number of days.
Configuration
The Camunda Exporter is enabled by default if secondary storage is configured to use Elasticsearch or OpenSearch. See the properties prefixed with CAMUNDA_DATA_SECONDARYSTORAGE in the secondary-storage configuration properties.
You can also configure the following properties using exporter args:
```yaml
zeebe:
  broker:
    exporters:
      # Camunda Exporter ----------
      # An example configuration for the camunda exporter:
      #
      # These settings can also be overridden using the environment variables "ZEEBE_BROKER_EXPORTERS_CAMUNDAEXPORTER_..."
      # To convert a YAML formatted variable to an environment variable, start with the top-level property and separate every nested property with an underscore (_).
      # For example, the property "zeebe.broker.exporters.camundaexporter.args.index.numberOfShards" would be converted to "ZEEBE_BROKER_EXPORTERS_CAMUNDAEXPORTER_ARGS_INDEX_NUMBEROFSHARDS".
      #
      camundaexporter:
        args:
```
Option | Description | Default |
---|---|---|
`connect` | Connection configuration options. See Connect. | |
`index` | Index configuration options. See Index. | |
`bulk` | Bulk configuration options. See Bulk. | |
`history` | History configuration options. See History. | |
`createSchema` | If `true`, checks schema readiness before exporting. | true |
Options
- Connect
- Index
- Bulk
- Retention
- History
- Other
Connect
Refer to supported environments to find out which versions of Elasticsearch and OpenSearch are supported in a Camunda 8 Self-Managed setup.
Option | Description | Default |
---|---|---|
`dateFormat` | Defines a custom date format used for fetching date data from the engine (should be the same as in the engine). | yyyy-MM-dd'T'HH:mm:ss.SSSZZ |
`socketTimeout` | Defines the socket timeout in milliseconds, which is the timeout for waiting for data. | |
`connectTimeout` | Determines the timeout in milliseconds until a connection is established. | |
If you are using `opensearch` on AWS, the AWS SDK's DefaultCredentialsProvider is used for authentication. For more details on configuring credentials, refer to the AWS SDK documentation.
Index
Option | Description | Default |
---|---|---|
`numberOfShards` | The number of shards used for each created index. | 1 |
`numberOfReplicas` | The number of shard replicas used for each created index. | 0 |
`variableSizeThreshold` | Defines a threshold for variable size. Variables exceeding this threshold are split into two properties: FULL_VALUE (full content, not indexed) and VALUE (truncated content, indexed). | 8191 |
`shardsByIndexName` | A map where the key is the index name and the value is the number of shards, allowing you to override the default `numberOfShards` setting for specific indices. | |
`replicasByIndexName` | A map where the key is the index name and the value is the number of replicas, allowing you to override the default `numberOfReplicas` setting for specific indices. | |
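For instance, the per-index overrides can be sketched as follows. The index name used here is a placeholder, not an actual Camunda index name; substitute the names of the indices in your cluster:

```yaml
# Sketch: override shard/replica counts for specific indices.
# "my-heavy-index" is a placeholder, not a real Camunda index name.
index:
  numberOfShards: 1        # default applied to all indices
  numberOfReplicas: 0
  shardsByIndexName:
    my-heavy-index: 3      # this index gets 3 shards instead of 1
  replicasByIndexName:
    my-heavy-index: 1      # and 1 replica instead of 0
```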
Bulk
To avoid too many expensive requests to the Elasticsearch/OpenSearch cluster, the exporter performs batch updates by default. The size of the batch, along with how often it should be flushed (regardless of size), can be controlled by configuration.
Option | Description | Default |
---|---|---|
`delay` | Delay, in seconds, before a force flush of the current batch. This ensures that even with low record traffic, data is still exported periodically. | 5 |
`size` | The number of records a batch must contain before it is flushed. | 1000 |
With the default configuration, the exporter aggregates records and flushes them to Elasticsearch/OpenSearch:
- When it has aggregated 1000 records.
- When five seconds have elapsed since the last flush (regardless of how many records were aggregated).
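A latency-oriented tuning of these two settings might look like the following sketch; the values are illustrative, not recommendations:

```yaml
# Sketch: flush smaller batches more often to reduce end-to-end latency,
# at the cost of more requests to Elasticsearch/OpenSearch.
bulk:
  delay: 1    # force a flush at least every second
  size: 250   # or as soon as 250 records have been aggregated
```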
Retention
A retention policy can be set up to delete old data. When enabled, this creates an Index Lifecycle Management (ILM) policy that deletes the data after the specified minimumAge. All index templates created by this exporter apply the created ILM policy.
Option | Description | Default |
---|---|---|
`enabled` | If `true`, the ILM policy is created and applied to the index templates. | false |
`minimumAge` | Specifies, as a duration, how old the data must be before it is deleted. | 30d |
`policyName` | The name of the created and applied ILM policy. | camunda-retention-policy |
`usageMetricsMinimumAge` | Specifies, as a duration, how old the usage metrics data must be before it is deleted. Applies to the camunda-usage-metric and camunda-usage-metric-tu indices. | 730d |
`usageMetricsPolicyName` | The name of the created and applied usage metrics ILM policy. | camunda-usage-metrics-retention-policy |
The duration can be specified in days (`d`), hours (`h`), minutes (`m`), seconds (`s`), milliseconds (`ms`), and/or nanoseconds (`nanos`).
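As an illustration, a retention configuration using one of these duration units might look like this; the values are examples, not recommendations:

```yaml
# Sketch: enable retention and delete data older than 14 days.
history:
  retention:
    enabled: true
    minimumAge: 14d   # could equivalently be expressed as 336h
    policyName: camunda-retention-policy
```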
History
To keep the main runtime index performant, documents are periodically moved into historical indices. The history can be configured as follows:
Option | Description | Default |
---|---|---|
elsRolloverDateFormat | Defines how date values are formatted for historical indices using Java DateTimeFormatter syntax. If no format is specified, the first date format defined in the field mapping is used. | date |
`rolloverInterval` | The rollover period before an active index is rolled over. This means that `rolloverInterval` is the time gap between updates of historical indices; therefore, for an index `index-abc` and a `rolloverInterval` of 7 days (`7d`), there will be the historical indices `index-abc-2025-01-01`, `index-abc-2025-01-08`, and so on. The `elsRolloverDateFormat` must have sufficient resolution to compute the `rolloverInterval`. For example, if the `rolloverInterval` is `1h`, then the `elsRolloverDateFormat` should be `yyyy-MM-dd-HH`. Additionally, `rolloverInterval` cannot use seconds (`s`) or minutes (`m`) as units. | 1d |
rolloverBatchSize | The maximum number of instances per batch to be archived. | 100 |
waitPeriodBeforeArchiving | Grace period during which completed process instances are excluded from archiving. For example, with a value of 1h , any process instances completed within the last hour will not be archived. | 1h |
delayBetweenRuns | Time in milliseconds between archiving runs for completed process instances. | 2000 |
maxDelayBetweenRuns | The maximum delay between archive runs when using an exponential backoff strategy in case of unsuccessful archiving attempts. | 60000 |
`retention` | Refer to Retention for retention configuration options. | |
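Following the resolution rule for `rolloverInterval` and `elsRolloverDateFormat`, an hourly rollover could be sketched like this (illustrative values):

```yaml
# Sketch: roll historical indices over every hour.
# elsRolloverDateFormat must resolve to the hour for a 1h interval.
history:
  elsRolloverDateFormat: "yyyy-MM-dd-HH"
  rolloverInterval: "1h"
```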
Other
Miscellaneous properties:
Option | Description | Default |
---|---|---|
`batchOperation.exportItemsOnCreation` | Defines whether the pending items of a started batch operation should be exported from the beginning. For very large batch operations involving more than 100,000 process instances, this can cause temporary performance issues due to the high volume of document insertions. If set to `false`, the "has pending batch operations" spinner in the Operate UI will not function properly. | true |
Example
Here is an example configuration of the exporter:
```yaml
---
exporters:
  # Camunda Exporter ----------
  # An example configuration for the camunda exporter:
  #
  # These settings can also be overridden using the environment variables "ZEEBE_BROKER_EXPORTERS_CAMUNDAEXPORTER_..."
  # To convert a YAML formatted variable to an environment variable, start with the top-level property and separate every nested property with an underscore (_).
  # For example, the property "zeebe.broker.exporters.camundaexporter.args.index.numberOfShards" would be converted to "ZEEBE_BROKER_EXPORTERS_CAMUNDAEXPORTER_ARGS_INDEX_NUMBEROFSHARDS".
  #
  camundaexporter:
    args:
      connect:
        dateFormat: yyyy-MM-dd'T'HH:mm:ss.SSSZZ
        socketTimeout: 1000
        connectTimeout: 1000
      bulk:
        delay: 5
        size: 1000
      index:
        numberOfShards: 3
        numberOfReplicas: 0
      history:
        elsRolloverDateFormat: "date"
        rolloverInterval: "1d"
        rolloverBatchSize: 100
        waitPeriodBeforeArchiving: "1h"
        delayBetweenRuns: 2000
        maxDelayBetweenRuns: 60000
        retention:
          enabled: false
          minimumAge: 30d
          policyName: camunda-retention-policy
          usageMetricsMinimumAge: 730d
          usageMetricsPolicyName: camunda-usage-metrics-retention-policy
      batchOperation:
        exportItemsOnCreation: true
      createSchema: true
```