Version: 8.9 (unreleased)

Elasticsearch exporter

note

For supported Elasticsearch versions in Camunda 8 Self-Managed, see supported environments.

Starting with Camunda 8.8, Camunda uses the Camunda Exporter to consume new records. Records from 8.7 and earlier are consumed only during migration.

The Elasticsearch and OpenSearch exporters remain fully usable after migration (for example, for existing setups, Optimize, or other custom use cases). Their functionality is not limited to the migration period.

From 8.9 onward, the Elasticsearch exporter also supports Optimize-focused export filters (for example, variable-name filters, variable-type filters, BPMN process include/exclude, and an Optimize mode flag).

For Optimize-specific guidance and recommended settings, see Camunda 8 system configuration.

The Zeebe Elasticsearch exporter acts as a bridge between Zeebe and Elasticsearch by exporting records written to Zeebe streams as documents into several indices.

Concept

The exporter is designed to perform as little work as possible on the Zeebe side. In other words, you can think of the indexes into which the records are exported as a staging data warehouse: any enrichment or transformation of the exported data should be performed by your own ETL jobs.

When configured to do so, the exporter will automatically create an index per record value type (see the value type in the Zeebe protocol). Each of these indexes has a corresponding pre-defined mapping to facilitate data ingestion for your own ETL jobs. You can find those as templates in the resources folder of the exporter's source code.

note

The indexes are created as required, and will not be created twice if they already exist. However, once the exporter is disabled, they are not deleted (that is up to the administrator). Similarly, data is never deleted by the exporter and must be deleted by the administrator when it is safe to do so. A retention policy can be configured to automatically delete data after a certain number of days.
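A retention policy is configured under the exporter's args. The sketch below uses the retention options shown in the full example later on this page; the values are illustrative, not recommendations:

```yaml
camunda:
  data:
    exporters:
      elasticsearch:
        args:
          retention:
            enabled: true                # manage record indices with a deletion policy
            minimum-age: 30d             # delete record indices older than 30 days
            policy-name: zeebe-records-retention-policy
```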

Configuration

note

As the exporter is packaged with Zeebe, it is not necessary to specify a jarPath.

The exporter can be enabled by configuring its class name in the broker settings.

For a Spring Boot application or Camunda 8 with unified configuration:

Application config (YAML):

camunda:
  data:
    exporters:
      elasticsearch:
        class-name: io.camunda.zeebe.exporter.ElasticsearchExporter
        args:
          # Refer to the table below for the available args options

Environment variables:

Set environment variables in the format CAMUNDA_DATA_EXPORTERS_ELASTICSEARCH_... (e.g., CAMUNDA_DATA_EXPORTERS_ELASTICSEARCH_URL).

Helm:

Add the same configuration under orchestration.configuration in your values.yaml file.

warning

Do not configure both legacy (zeebe.broker.exporters.*) and unified (camunda.data.exporters.*) exporter properties at the same time. Exporter properties are a breaking-change mapping in unified configuration, and the application fails to start until legacy properties are removed.

The exporter can be configured by providing args. The table below lists the available options and their default values:

| Option | Description | Default |
| --- | --- | --- |
| url | Valid URLs as a comma-separated string. | http://localhost:9200 |
| request-timeout-ms | Request timeout (in ms) for the Elasticsearch client. | 30000 |
| index | Refer to index for the index configuration options. | |
| bulk | Refer to bulk for the bulk configuration options. | |
| retention | Refer to retention for the retention configuration options. | |
| authentication | Refer to authentication for the authentication configuration options. | |
| include-enabled-records | If true, all enabled record types will be exported. | false |

In most cases, you will not be interested in exporting every single record produced by a Zeebe cluster, but rather only a subset of them. The index configuration limits which kinds of records are exported (e.g., only events, no commands) and which value types are exported (e.g., only job and process values).
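For example, to keep only events and only a few value types, you might configure the index options as follows. This is a sketch, not an exhaustive list: any value type not listed keeps its default from the table below.

```yaml
camunda:
  data:
    exporters:
      elasticsearch:
        args:
          index:
            # record kinds: export events only
            command: false
            event: true
            rejection: false
            # value types: keep jobs and process instances, drop some others
            job: true
            process-instance: true
            message: false
            variable: false
            timer: false
```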

| Option | Description | Default |
| --- | --- | --- |
| prefix | This prefix will be appended to every index created by the exporter; must not contain _ (underscore). | zeebe-record |
| create-template | If true, missing indexes will be created automatically. | true |
| index-suffix-date-pattern | This suffix will be appended to every index created by the exporter. The pattern is based on the Java DateTimeFormatter and supports the same syntax. This is useful when indexes should be created at a different interval, like hourly instead of daily. | yyyy-MM-dd |
| number-of-shards | The number of shards used for each new record index created. | 3 |
| number-of-replicas | The number of shard replicas used for each new record index created. | 0 |
| command | If true, command records will be exported. | false |
| event | If true, event records will be exported. | true |
| rejection | If true, rejection records will be exported. | false |
| checkpoint | If true, records related to checkpoints will be exported. | false |
| command-distribution | If true, records related to command distributions will be exported. | true |
| decision | If true, records related to decisions will be exported. | true |
| decision-evaluation | If true, records related to decision evaluations will be exported. | true |
| decision-requirements | If true, records related to decision requirements will be exported. | true |
| deployment | If true, records related to deployments will be exported. | true |
| deployment-distribution | If true, records related to deployment distributions will be exported. | true |
| error | If true, records related to errors will be exported. | true |
| escalation | If true, records related to escalations will be exported. | true |
| form | If true, records related to forms will be exported. | true |
| incident | If true, records related to incidents will be exported. | true |
| job | If true, records related to jobs will be exported. | true |
| job-batch | If true, records related to job batches will be exported. | false |
| message | If true, records related to messages will be exported. | true |
| message-batch | If true, records related to message batches will be exported. | false |
| message-subscription | If true, records related to message subscriptions will be exported. | true |
| message-start-event-subscription | If true, records related to message start event subscriptions will be exported. | true |
| process | If true, records related to processes will be exported. | true |
| process-event | If true, records related to process events will be exported. | false |
| process-instance | If true, records related to process instances will be exported. | true |
| process-instance-batch | If true, records related to process instance batches will be exported. | false |
| process-instance-creation | If true, records related to process instance creations will be exported. | true |
| process-instance-migration | If true, records related to process instance migrations will be exported. | true |
| process-instance-modification | If true, records related to process instance modifications will be exported. | true |
| process-message-subscription | If true, records related to process message subscriptions will be exported. | true |
| resource-deletion | If true, records related to resource deletions will be exported. | true |
| signal | If true, records related to signals will be exported. | true |
| signal-subscription | If true, records related to signal subscriptions will be exported. | true |
| timer | If true, records related to timers will be exported. | true |
| user-task | If true, records related to user tasks will be exported. | true |
| variable | If true, records related to variables will be exported. | true |
| variable-document | If true, records related to variable documents will be exported. | true |

Variable-name filters

Starting with Camunda 8.9, you can filter exported variable records by variable name.

Configuration:

camunda:
  data:
    exporters:
      elasticsearch:
        args:
          index:
            variable-name-inclusion-start-with:
              - business_
            variable-name-exclusion-start-with:
              - business_debug

The exporter first matches variable names against inclusion rules (if present), then against exclusion rules. If a variable matches both, the exclusion wins. For example, with the configuration above, a variable named business_total is exported, while business_debug_flag matches both rules and is dropped.

For details on how this interacts with Optimize, see Camunda 8 system configuration.

Variable-type filters

Variable-type filters let you restrict exported variables by their inferred JSON type, such as String, Number, Boolean, Object, or Null. The exporter first matches variable types against inclusion rules (if present), then against exclusion rules. If a variable type matches both, the exclusion wins.

Configuration:

camunda:
  data:
    exporters:
      elasticsearch:
        args:
          index:
            variable-value-type-inclusion:
              - Object
              - String
            variable-value-type-exclusion:
              - Object

Use this filter to drop large object or array payloads at export time. Type inference is similar to what Optimize uses. For details on which types to include or exclude for reporting, see Camunda 8 system configuration.

BPMN process filters

BPMN process filters control which processes (by bpmnProcessId) are exported. All records that carry the given bpmnProcessId follow the same decision.

Configuration:

camunda:
  data:
    exporters:
      elasticsearch:
        args:
          index:
            bpmn-process-id-inclusion:
              - orderProcess
            bpmn-process-id-exclusion:
              - debugProcess

Processes listed under inclusion are candidates; exclusion removes any of those candidates again.

Some value types that never expose bpmnProcessId (for example, DEPLOYMENT, DECISION) are not affected and remain controlled only via the index.* flags.

Optimize mode

With Optimize mode, you can restrict exported records to those used by Optimize, reducing index size.

Configuration:

camunda:
  data:
    exporters:
      elasticsearch:
        args:
          index:
            optimize-mode-enabled: true

When enabled, the exporter emits only the value types and intents that Optimize imports. Other value types are dropped unless you explicitly opt in to the legacy behavior (for example, via include-enabled-records).

Use this flag only if the exporter indices are dedicated to Optimize. For SaaS and Self-Managed recommendations, see Camunda 8 system configuration.

Example

Here is an example configuration of the exporter:

---
camunda:
  data:
    exporters:
      elasticsearch:
        # Elasticsearch Exporter ----------
        # An example configuration for the Elasticsearch exporter.
        #
        # These settings can also be overridden using environment variables "CAMUNDA_DATA_EXPORTERS_ELASTICSEARCH_..."
        #
        class-name: io.camunda.zeebe.exporter.ElasticsearchExporter
        args:
          # A comma-separated list of URLs pointing to the Elasticsearch instances you wish to export to.
          # For example, to connect to multiple nodes for redundancy:
          # url: http://localhost:9200,http://localhost:9201
          url: http://localhost:9200

          bulk:
            delay: 5
            size: 1000
            memory-limit: 10485760

          retention:
            enabled: true
            minimum-age: 30d
            policy-name: zeebe-records-retention-policy

          authentication:
            username: elastic
            password: changeme

          index:
            prefix: zeebe-record
            create-template: true

            index-suffix-date-pattern: "yyyy-MM-dd"

            command: false
            event: true
            rejection: false

            command-distribution: true
            decision-requirements: true
            decision: true
            decision-evaluation: true
            deployment: true
            deployment-distribution: true
            error: true
            escalation: true
            form: true
            incident: true
            job: true
            job-batch: false
            message: true
            message-start-subscription: true
            message-subscription: true
            process: true
            process-event: false
            process-instance: true
            process-instance-creation: true
            process-instance-migration: true
            process-instance-modification: true
            process-message-subscription: true
            resource-deletion: true
            signal: true
            signal-subscription: true
            timer: true
            user-task: true
            variable: true
            variable-document: true

Self-signed certificates

The Zeebe Elasticsearch exporter does not currently support connecting to Elasticsearch using self-signed certificates. If you must use self-signed certificates, it is possible to build your own trust store and have the application use it.

In this case, it is recommended to create a new custom trust store based on the default one. This way, it will also be able to verify certificates signed using trusted root certificate authorities.

  1. First, create a new custom trust store which contains the same data as the default one. To do so, find the location of the default cacerts trust store:

    • On Linux systems, find it at $JAVA_HOME/lib/security/cacerts.
    • For macOS, find it under $(/usr/libexec/java_home)/jre/lib/security/cacerts.

    Once you have the right location, e.g. $JAVA_HOME/lib/security/cacerts, run the following to create a new trust store:

    keytool -importkeystore -srckeystore $JAVA_HOME/lib/security/cacerts -destkeystore zeebeTrustStore.jks -srcstoretype PKCS12 -deststoretype JKS

    Set any password, so long as it's at least 6 characters.

  2. Add your custom certificate to the new trust store. For example, if your custom certificate is located at /tmp/myCustomCertificate.pem:

    keytool -import -alias MyCustomCertificate -keystore zeebeTrustStore.jks -file /tmp/myCustomCertificate.pem
    note

    Replace the -file parameter with the actual path to your certificate, and make sure to replace the -alias parameter with something descriptive, like WebServerCertificate.

    When prompted to trust the certificate, make sure to answer yes.

  3. Update the application to use this trust store. First, make sure the file is readable by the application. For example, on Unix systems, run:

    chmod a+r zeebeTrustStore.jks

    Then, specify the following properties when running the application:

    • javax.net.ssl.trustStore: must be set to the path of your custom trust store.
    • javax.net.ssl.trustStorePassword: set to your trust store password.

    The following example uses a trust store location of /tmp/zeebeTrustStore.jks, and a password of changeme. When using the official distribution (whether Docker image or the bundled shell scripts), these properties can be provided using the following environment variable:

    JAVA_OPTS="-Djavax.net.ssl.trustStore=/tmp/zeebeTrustStore.jks -Djavax.net.ssl.trustStorePassword=changeme ${JAVA_OPTS}"
warning

If you're using containers, you will need to mount the trust store into the container so it can be found by the Java process. How to do this depends on your deployment method (e.g., Helm chart, Docker Compose). The simplest way is to build a custom image which already contains your trust store and specifies the environment variable.
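As one possible sketch with Docker Compose, the trust store can be mounted read-only and the JVM pointed at it via JAVA_OPTS. The service name, image tag, and container path below are placeholders for your own setup:

```yaml
services:
  zeebe:
    image: camunda/zeebe:latest        # pin a specific version in practice
    volumes:
      # mount the custom trust store read-only into the container
      - ./zeebeTrustStore.jks:/usr/local/zeebe/zeebeTrustStore.jks:ro
    environment:
      JAVA_OPTS: >-
        -Djavax.net.ssl.trustStore=/usr/local/zeebe/zeebeTrustStore.jks
        -Djavax.net.ssl.trustStorePassword=changeme
```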

Legacy Zeebe records and Optimize filters

With the introduction of the Camunda Exporter, the Elasticsearch and OpenSearch exporters no longer export all record types by default. Instead, they emit only the record value types and intents required by Optimize.

To export additional record types, enable the include-enabled-records configuration property.
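A sketch of opting back into the full set of enabled record types via the documented include-enabled-records arg:

```yaml
camunda:
  data:
    exporters:
      elasticsearch:
        args:
          # export every record type enabled under index.*,
          # not just the Optimize-focused subset
          include-enabled-records: true
```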

When you enable exporter-side filters (optimize-mode-enabled, variable-name, variable-type, or bpmn-process-id), filtering applies only to newly produced records. Existing documents in Elasticsearch or OpenSearch are not rewritten.


Upgrade note (8.8 to 8.9)

When upgrading from 8.8 to 8.9, exporter filtering behavior may affect data completeness. See the Camunda 8 system configuration for guidance.