Version: 8.8 (unreleased)

Docker Compose

note

None of the storage options below with Docker Compose are suitable for production.

If no storage configuration is provided, documents are stored in-memory by default, meaning they are lost when the application is stopped.

To use a different storage method, set the environment variables described in the section below for every component that uses document storage (Zeebe and Tasklist). In-memory storage requires no additional configuration.

To select the storage method, set `DOCUMENT_DEFAULT_STORE_ID` to one of the accepted values: `aws`, `inmemory`, `gcp` (for Google Cloud Platform), or `local` (for local storage).

When using Docker Compose, Tasklist and Zeebe run in separate containers and do not share memory or volumes, which introduces certain limitations. Document handling still works, but `DOCUMENT_DEFAULT_STORE_ID` must be set for all components that use it (Zeebe and Tasklist). In this topology, in-memory or local storage means the components cannot access the same data, so documents uploaded through Zeebe may not be visible to Tasklist. This limitation does not apply to cloud storage options such as AWS or GCP, where documents are always stored in a shared, centralized location.

By using external cloud bucket storage with AWS S3, documents can be stored in a secure and scalable way. Buckets are integrated per cluster to ensure proper isolation and environment-specific management.

| Credentials variable | Required | Description |
| --- | --- | --- |
| `AWS_ACCESS_KEY_ID` | Yes | Access key ID used to interact with AWS S3 buckets. |
| `AWS_SECRET_ACCESS_KEY` | Yes | The AWS secret access key associated with `AWS_ACCESS_KEY_ID`, used to authenticate. |
| `AWS_REGION` | Yes | Region where the bucket is located. |
| Store variable | Required | Description |
| --- | --- | --- |
| `DOCUMENT_STORE_AWS_BUCKET` | Yes | Name of the AWS S3 bucket where documents are stored. |
| `DOCUMENT_STORE_AWS_CLASS` | Yes | Must be set to `io.camunda.document.store.aws.AwsDocumentStoreProvider`. |
| `DOCUMENT_STORE_AWS_BUCKET_PATH` | No | Folder-like path within the S3 bucket where documents are stored, which helps organize files within the bucket. For example, `documents/invoices`. Defaults to `""` if not provided. |
| `DOCUMENT_STORE_AWS_BUCKET_TTL` | No | Time-to-live (TTL) for documents stored in the S3 bucket. This can be used to set an expiration policy, meaning documents are deleted automatically after the specified duration. Ignored if not provided. |

Example:

```
AWS_ACCESS_KEY_ID=AWSACCESSKEYID
AWS_REGION=eu-north-1
AWS_SECRET_ACCESS_KEY=AWSSECRETACCESSKEYGOESHERE
DOCUMENT_STORE_AWS_BUCKET=test-bucket
DOCUMENT_STORE_AWS_BUCKET_PATH=test/path
DOCUMENT_STORE_AWS_BUCKET_TTL=5
DOCUMENT_STORE_AWS_CLASS=io.camunda.document.store.aws.AwsDocumentStoreProvider
DOCUMENT_DEFAULT_STORE_ID=aws
```
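In a Docker Compose file, these variables must be repeated on every service that uses document handling. A minimal sketch, assuming services named `zeebe` and `tasklist` (the service names, image tags, and the `.env`-style variable substitution are illustrative, not part of the official configuration):

```yaml
services:
  zeebe:
    image: camunda/zeebe:latest # illustrative tag
    environment:
      # Same document-storage configuration on every component.
      - DOCUMENT_DEFAULT_STORE_ID=aws
      - DOCUMENT_STORE_AWS_CLASS=io.camunda.document.store.aws.AwsDocumentStoreProvider
      - DOCUMENT_STORE_AWS_BUCKET=test-bucket
      - AWS_REGION=eu-north-1
      # Credentials substituted from the shell or a .env file.
      - AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
      - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
  tasklist:
    image: camunda/tasklist:latest # illustrative tag
    environment:
      # Identical settings so Tasklist sees the documents Zeebe stores.
      - DOCUMENT_DEFAULT_STORE_ID=aws
      - DOCUMENT_STORE_AWS_CLASS=io.camunda.document.store.aws.AwsDocumentStoreProvider
      - DOCUMENT_STORE_AWS_BUCKET=test-bucket
      - AWS_REGION=eu-north-1
      - AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
      - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
```

Because the bucket is shared, both containers read and write the same documents, avoiding the visibility problem described above for in-memory and local storage.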

AWS API client permission requirements

The API client used for document handling must be configured with the appropriate permissions. The following AWS Identity and Access Management (IAM) permissions are required:

| Permission | Description |
| --- | --- |
| `s3:DeleteObject` | Authorizes the API client to remove objects from the specified S3 bucket. |
| `s3:GetObject` | Required to retrieve the contents and metadata of objects from Amazon S3. The API client uses this permission to download or access the documents uploaded to the bucket. |
| `s3:ListBucket` | Allows the application to verify it has access to the specified S3 bucket. Without this permission the application still starts, but it logs a warning on start-up. |
| `s3:PutObject` | Required to upload documents to the Amazon S3 bucket. |
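The permissions above could be granted with an IAM policy along these lines; a minimal sketch, assuming the bucket name `test-bucket` from the earlier example (adjust the resource ARNs for your bucket):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DocumentHandlingObjectAccess",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::test-bucket/*"
    },
    {
      "Sid": "DocumentHandlingBucketCheck",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::test-bucket"
    }
  ]
}
```

Note that the object-level actions apply to `test-bucket/*` (the objects), while `s3:ListBucket` applies to the bucket ARN itself.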