Elasticsearch exporter
For supported Elasticsearch versions in Camunda 8 Self-Managed, see supported environments.
Starting with Camunda 8.8, Camunda uses the Camunda Exporter to consume new records. Records from 8.7 and earlier are consumed only during migration.
The Elasticsearch and OpenSearch exporters remain fully usable after migration (for example, for existing setups, Optimize, or other custom use cases). Their functionality is not limited to the migration period.
From 8.9 onward, the Elasticsearch exporter also supports Optimize-focused export filters (for example, variable-name filters, variable-type filters, BPMN process include/exclude, and an Optimize mode flag).
For Optimize-specific guidance and recommended settings, see Camunda 8 system configuration.
The Zeebe Elasticsearch exporter acts as a bridge between Zeebe and Elasticsearch by exporting records written to Zeebe streams as documents into several indices.
Concept
The exporter operates on the idea that it should perform as little as possible on the Zeebe side of things. In other words, you can think of the indexes into which the records are exported as a staging data warehouse. Any enrichment or transformation on the exported data should be performed by your own ETL jobs.
When configured to do so, the exporter will automatically create an index per record value type (see the value type in the Zeebe protocol). Each of these indexes has a corresponding pre-defined mapping to facilitate data ingestion for your own ETL jobs. You can find those as templates in the resources folder of the exporter's source code.
The indexes are created as required and are not created twice if they already exist. However, once disabled, they are not deleted (that is up to the administrator). Similarly, data is never deleted by the exporter and must be deleted by the administrator when it is safe to do so. A retention policy can be configured to automatically delete data after a certain number of days.
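For example, an ETL job can read the staged records straight from the record indexes. The query below is only a sketch: it assumes the default zeebe-record index prefix and a locally reachable cluster, and uses a wildcard so the exact index naming does not matter:

curl -s "http://localhost:9200/zeebe-record_process-instance*/_search?size=5"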
Configuration
As the exporter is packaged with Zeebe, it is not necessary to specify a jarPath.
The exporter can be enabled by configuring its class name in the broker settings.
For a Spring Boot application or Camunda 8 with unified configuration:
Application config (YAML):
camunda:
  data:
    exporters:
      elasticsearch:
        class-name: io.camunda.zeebe.exporter.ElasticsearchExporter
        args:
          # Refer to the table below for the available args options
Environment variables:
Set environment variables in the format CAMUNDA_DATA_EXPORTERS_ELASTICSEARCH_... (e.g., CAMUNDA_DATA_EXPORTERS_ELASTICSEARCH_URL).
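For example, the documented variable from above could be set on a plain shell before starting the broker; other options under this exporter follow the same CAMUNDA_DATA_EXPORTERS_ELASTICSEARCH_ prefix:

export CAMUNDA_DATA_EXPORTERS_ELASTICSEARCH_URL=http://elasticsearch:9200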
Helm:
Add the same configuration under orchestration.configuration in your values.yaml file.
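A minimal values.yaml sketch of that approach; it assumes the exporter block is accepted verbatim under orchestration.configuration, and the exact chart structure may differ between chart versions:

orchestration:
  configuration: |
    camunda:
      data:
        exporters:
          elasticsearch:
            class-name: io.camunda.zeebe.exporter.ElasticsearchExporter
            args:
              url: http://elasticsearch:9200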
Do not configure both legacy (zeebe.broker.exporters.*) and unified (camunda.data.exporters.*) exporter properties at the same time. Exporter properties are a breaking-change mapping in unified configuration, and the application fails to start until legacy properties are removed.
The exporter can be configured by providing args. The table below describes all available options and their default values:
| Option | Description | Default |
|---|---|---|
| url | Valid URLs as comma-separated string. | http://localhost:9200 |
| request-timeout-ms | Request timeout (in ms) for the Elasticsearch client. | 30000 |
| index | Refer to the index options below. | |
| bulk | Refer to the bulk options below. | |
| retention | Refer to the retention options below. | |
| authentication | Refer to the authentication options below. | |
| include-enabled-records | If true, all enabled record types are exported. | false |
Index
In most cases, you will not be interested in exporting every single record produced by a Zeebe cluster, but rather only a subset of them. The index options below let you limit the kinds of records being exported (for example, only events, no commands) and the value types of these records (for example, only job and process values); a minimal example follows the table.
| Option | Description | Default |
|---|---|---|
| prefix | This prefix is prepended to every index created by the exporter; it must not contain _ (underscore). | zeebe-record |
| create-template | If true, missing indexes are created automatically. | true |
| index-suffix-date-pattern | This suffix is appended to every index created by the exporter. The pattern is based on the Java DateTimeFormatter and supports the same syntax. This is useful when indexes should be created at a different interval, such as hourly instead of daily. | "yyyy-MM-dd" |
| number-of-shards | The number of shards used for each new record index created. | 3 |
| number-of-replicas | The number of shard replicas used for each new record index created. | 0 |
| command | If true, command records are exported. | false |
| event | If true, event records are exported. | true |
| rejection | If true, rejection records are exported. | false |
| checkpoint | If true, records related to checkpoints are exported. | false |
| command-distribution | If true, records related to command distributions are exported. | true |
| decision | If true, records related to decisions are exported. | true |
| decision-evaluation | If true, records related to decision evaluations are exported. | true |
| decision-requirements | If true, records related to decision requirements are exported. | true |
| deployment | If true, records related to deployments are exported. | true |
| deployment-distribution | If true, records related to deployment distributions are exported. | true |
| error | If true, records related to errors are exported. | true |
| escalation | If true, records related to escalations are exported. | true |
| form | If true, records related to forms are exported. | true |
| incident | If true, records related to incidents are exported. | true |
| job | If true, records related to jobs are exported. | true |
| job-batch | If true, records related to job batches are exported. | false |
| message | If true, records related to messages are exported. | true |
| message-batch | If true, records related to message batches are exported. | false |
| message-subscription | If true, records related to message subscriptions are exported. | true |
| message-start-event-subscription | If true, records related to message start event subscriptions are exported. | true |
| process | If true, records related to processes are exported. | true |
| process-event | If true, records related to process events are exported. | false |
| process-instance | If true, records related to process instances are exported. | true |
| process-instance-batch | If true, records related to process instance batches are exported. | false |
| process-instance-creation | If true, records related to process instance creations are exported. | true |
| process-instance-migration | If true, records related to process instance migrations are exported. | true |
| process-instance-modification | If true, records related to process instance modifications are exported. | true |
| process-message-subscription | If true, records related to process message subscriptions are exported. | true |
| resource-deletion | If true, records related to resource deletions are exported. | true |
| signal | If true, records related to signals are exported. | true |
| signal-subscription | If true, records related to signal subscriptions are exported. | true |
| timer | If true, records related to timers are exported. | true |
| user-task | If true, records related to user tasks are exported. | true |
| variable | If true, records related to variables are exported. | true |
| variable-document | If true, records related to variable documents are exported. | true |
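For example, a minimal sketch that keeps events for the core process data while dropping commands, rejections, and some high-volume value types (combine this with the other args shown in the full example further below):

camunda:
  data:
    exporters:
      elasticsearch:
        class-name: io.camunda.zeebe.exporter.ElasticsearchExporter
        args:
          index:
            command: false
            rejection: false
            event: true
            job-batch: false
            message-batch: false
            process-instance-batch: false
            variable: true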
Variable-name filters
Starting with Camunda 8.9, you can filter exported variable records by variable name.
Configuration:
camunda:
  data:
    exporters:
      elasticsearch:
        args:
          index:
            variable-name-inclusion-start-with:
              - business_
            variable-name-exclusion-start-with:
              - business_debug
The exporter first matches variable names against the inclusion rules (if present), then against the exclusion rules. If a variable matches both, the exclusion wins. With the configuration above, for example, a variable named business_total is exported, while business_debug_mode is excluded.
For details on how this interacts with Optimize, see Camunda 8 system configuration.
Variable-type filters
Variable-type filters let you restrict exported variables by their inferred JSON type,
such as String, Number, Boolean, Object or Null.
The exporter first matches variable types against inclusion rules (if present), then against exclusion rules. If a variable type matches both, the exclusion wins.
Configuration:
camunda:
  data:
    exporters:
      elasticsearch:
        args:
          index:
            variable-value-type-inclusion:
              - Object
              - String
            variable-value-type-exclusion:
              - Object
Use this filter to drop large object or array payloads at export time. Type inference is similar to what Optimize uses. For details on which types to include or exclude for reporting, see Camunda 8 system configuration.
BPMN process filters
BPMN process filters control which processes (by bpmnProcessId) are exported. All records that carry the given bpmnProcessId follow the same decision.
Configuration:
camunda:
  data:
    exporters:
      elasticsearch:
        args:
          index:
            bpmn-process-id-inclusion:
              - orderProcess
            bpmn-process-id-exclusion:
              - debugProcess
Processes listed under inclusion are candidates; exclusion then removes matching candidates from that set.
Some value types that never expose bpmnProcessId (for example, DEPLOYMENT, DECISION) are not affected and remain controlled only via the index.* flags.
Optimize mode
With Optimize mode, you can restrict exported records to those used by Optimize, reducing index size.
Configuration:
camunda:
  data:
    exporters:
      elasticsearch:
        args:
          index:
            optimize-mode-enabled: true
When enabled, the exporter emits only the value types and intents that Optimize imports. Other value types are dropped unless you explicitly opt in to the legacy behavior (for example, via include-enabled-records).
Use this flag only if the exporter indices are dedicated to Optimize. For SaaS and Self-Managed recommendations, see Camunda 8 system configuration.
Bulk
To avoid too many expensive requests to the Elasticsearch cluster, the exporter performs batch updates by default. The size of the batch, along with how often it should be flushed (regardless of size), can be controlled by configuration.
| Option | Description | Default |
|---|---|---|
| delay | Delay, in seconds, before the current batch is force-flushed. This ensures records are exported periodically, even under low traffic. | 5 |
| size | The number of records a batch should contain before it is flushed. | 1000 |
| memory-limit | The size of the batch, in bytes, before it is flushed. | 10485760 (10 MB) |
With the default configuration, the exporter will aggregate records and flush them to Elasticsearch:
- When it has aggregated 1000 records.
- When the batch memory size exceeds 10 MB.
- When five seconds have elapsed since the last flush (regardless of how many records were aggregated).
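If the defaults do not match your load profile, the same options can be tuned; for instance, this sketch flushes smaller batches more frequently:

camunda:
  data:
    exporters:
      elasticsearch:
        class-name: io.camunda.zeebe.exporter.ElasticsearchExporter
        args:
          bulk:
            delay: 1
            size: 250
            memory-limit: 2097152  # 2 MB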
Retention
A retention policy can be set up to delete old data.
When enabled, the exporter creates an Index Lifecycle Management (ILM) policy that deletes the data after the specified minimum-age.
All index templates created by this exporter apply the created ILM policy.
| Option | Description | Default |
|---|---|---|
| enabled | If true, the ILM policy is created and applied to the index templates. | false |
| minimum-age | Specifies how old the data must be before it is deleted, expressed as a duration. | 30d |
| policy-name | The name of the created and applied ILM policy | zeebe-record-retention-policy |
The duration can be specified in days (d), hours (h), minutes (m), seconds (s), milliseconds (ms), and/or nanoseconds (nanos).
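Once retention is enabled, you can check the policy that was created directly against Elasticsearch; the request below assumes the default policy-name from the table above:

curl -s "http://localhost:9200/_ilm/policy/zeebe-record-retention-policy"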
Authentication
Providing these authentication options enables Basic Authentication on the exporter.
| Option | Description | Default |
|---|---|---|
| username | Username used to authenticate | N/A |
| password | Password used to authenticate | N/A |
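Because these values otherwise sit in plain text in the application config, you may prefer to inject at least the password via an environment variable. The variable name below is an assumption derived from the CAMUNDA_DATA_EXPORTERS_ELASTICSEARCH_... pattern mentioned earlier, so verify it for your version:

export CAMUNDA_DATA_EXPORTERS_ELASTICSEARCH_ARGS_AUTHENTICATION_PASSWORD=changeme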
Example
Here is an example configuration of the exporter:
---
camunda:
  data:
    exporters:
      elasticsearch:
        # Elasticsearch Exporter ----------
        # An example configuration for the Elasticsearch exporter:
        #
        # These settings can also be overridden using environment variables "CAMUNDA_DATA_EXPORTERS_ELASTICSEARCH_..."
        #
        class-name: io.camunda.zeebe.exporter.ElasticsearchExporter
        args:
          # A comma-separated list of URLs pointing to the Elasticsearch instances you wish to export to.
          # For example, if you want to connect to multiple nodes for redundancy:
          # url: http://localhost:9200,http://localhost:9201
          url: http://localhost:9200
          bulk:
            delay: 5
            size: 1000
            memory-limit: 10485760
          retention:
            enabled: true
            minimum-age: 30d
            policy-name: zeebe-records-retention-policy
          authentication:
            username: elastic
            password: changeme
          index:
            prefix: zeebe-record
            create-template: true
            index-suffix-date-pattern: "yyyy-MM-dd"
            command: false
            event: true
            rejection: false
            command-distribution: true
            decision-requirements: true
            decision: true
            decision-evaluation: true
            deployment: true
            deployment-distribution: true
            error: true
            escalation: true
            form: true
            incident: true
            job: true
            job-batch: false
            message: true
            message-start-event-subscription: true
            message-subscription: true
            process: true
            process-event: false
            process-instance: true
            process-instance-creation: true
            process-instance-migration: true
            process-instance-modification: true
            process-message-subscription: true
            resource-deletion: true
            signal: true
            signal-subscription: true
            timer: true
            user-task: true
            variable: true
            variable-document: true
Self-signed certificates
The Zeebe Elasticsearch exporter does not currently support connecting to Elasticsearch using self-signed certificates. If you must use self-signed certificates, it is possible to build your own trust store and have the application use it.
In this case, it is recommended to create a new custom trust store based on the default one. This way, it will also be able to verify certificates signed using trusted root certificate authorities.
1. Create a new custom trust store which contains the same data as the default one, using PKCS12 format. To do so, find the location of the default cacerts trust store:
   - On Linux systems, find it at $JAVA_HOME/lib/security/cacerts.
   - On macOS, find it under $(/usr/libexec/java_home)/jre/lib/security/cacerts.

   Once you have the right location, e.g. $JAVA_HOME/lib/security/cacerts, run the following to create a new trust store:

   keytool -importkeystore -srckeystore $JAVA_HOME/lib/security/cacerts -destkeystore zeebeTrustStore.jks -srcstoretype PKCS12 -deststoretype JKS

   Set any password, as long as it is at least 6 characters.

2. Add your custom certificate to the new trust store. For example, if your custom certificate is located at /tmp/myCustomCertificate.pem:

   keytool -import -alias MyCustomCertificate -keystore zeebeTrustStore.jks -file /tmp/myCustomCertificate.pem

   Replace the -file parameter with the actual path to your certificate, and replace the -alias parameter with something descriptive, like WebServerCertificate. When prompted to trust the certificate, answer yes.

3. Update the application to use this trust store. First, make sure the file is readable by the application. For example, on Unix systems, run:

   chmod a+r zeebeTrustStore.jks

   Then, specify the following properties when running the application:
   - javax.net.ssl.trustStore: the path of your custom trust store.
   - javax.net.ssl.trustStorePassword: your trust store password.

   The following example uses a trust store located at /tmp/zeebeTrustStore.jks and a password of changeme. When using the official distribution (whether the Docker image or the bundled shell scripts), these properties can be provided using the following environment variable:

   JAVA_OPTS="-Djavax.net.ssl.trustStore=/tmp/zeebeTrustStore.jks -Djavax.net.ssl.trustStorePassword=changeme ${JAVA_OPTS}"

If you're using containers, you will need to mount the trust store into the container so it can be found by the Java process. How to do this depends on your deployment method (e.g., Helm chart, Docker Compose). The simplest way is to build a custom image that already contains your trust store and sets the environment variable.
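A minimal Dockerfile sketch of that approach, assuming the official camunda/zeebe image and the trust store created in the steps above (adjust the image tag, target path, and password to your setup):

FROM camunda/zeebe:latest
# Copy the custom trust store built with keytool into the image.
COPY zeebeTrustStore.jks /usr/local/zeebe/zeebeTrustStore.jks
# Point the JVM at the trust store; the path must match the COPY destination above.
ENV JAVA_OPTS="-Djavax.net.ssl.trustStore=/usr/local/zeebe/zeebeTrustStore.jks -Djavax.net.ssl.trustStorePassword=changeme"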
Legacy Zeebe records and Optimize filters
With the introduction of the Camunda Exporter, the Elasticsearch and OpenSearch exporters no longer export all record types by default. Instead, they will emit only the record value types and intents required by Optimize.
To export additional record types, enable the include-enabled-records configuration property.
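For example:

camunda:
  data:
    exporters:
      elasticsearch:
        class-name: io.camunda.zeebe.exporter.ElasticsearchExporter
        args:
          include-enabled-records: true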
When you enable exporter-side filters (optimize-mode-enabled, variable-name,
variable-type, or bpmn-process-id), filtering applies only to newly produced records. Existing documents in Elasticsearch or OpenSearch are not rewritten.
When upgrading from 8.8 to 8.9, exporter filtering behavior may affect data completeness. See the Camunda 8 system configuration for guidance.