
Camunda 8.2 to 8.3 Helm chart upgrade

note

When upgrading to a new version of the Camunda 8 Helm charts, we recommend updating to the latest patch release of the next major version of the chart.

For example, if the current Helm chart version is 10.x.x, and the latest next major version is 11.0.1, the recommended upgrade is to 11.0.1 (not 11.0.0).

Upgrading between minor versions of the Camunda Helm chart may require configuration changes. To upgrade between patch versions or when no configuration changes are required, see the helm upgrade instructions.
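
For example, assuming a release named camunda installed from the camunda/camunda-platform chart, an upgrade pinned to a specific chart version could look like the following sketch (adjust the version to the latest patch of your target major):

helm repo update
helm upgrade camunda camunda/camunda-platform --version 11.0.1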

Upgrade requirements

For a smooth upgrade experience, we recommend determining both your Helm chart and Helm CLI versions prior to starting your upgrade.

Helm chart version

As of the Camunda 8.4 release, the Camunda 8 Helm chart version is independent from the application version (for example, the Camunda 8.4 release uses the Helm chart version 9.0.0). The Helm chart is updated with each application release.

Review the Camunda 8 Helm chart version matrix to determine the application and Helm chart versions of your installation.

You can also view all chart versions and application versions via the Helm CLI:

helm repo update
helm search repo camunda/camunda-platform --versions

Helm CLI version

Use the recommended Helm CLI version for your Helm chart when upgrading. The Helm CLI version for each chart can be found on the chart version matrix.
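
To check which Helm CLI version is currently installed, run:

helm version --short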

Update your configuration

Configuration adjustments may be required when upgrading to a new version of the Helm chart. Before beginning your upgrade, ensure you have implemented any changes required by your new version.

Helm chart 8.3.1

caution

The following steps are applied when upgrading from any previous version, including 8.3.0.

To fix a critical issue, the labels of the following components were changed: Operate, Optimize, Tasklist, Zeebe, and Zeebe Gateway.

Because the label selectors of these resources cannot be updated in place, delete the affected Deployments and the StatefulSet before upgrading from any previous version. There will be downtime between the resource deletion and the completion of the upgrade.

kubectl -n camunda delete deployment camunda-operate
kubectl -n camunda delete deployment camunda-tasklist
kubectl -n camunda delete deployment camunda-optimize
kubectl -n camunda delete deployment camunda-zeebe-gateway
kubectl -n camunda delete statefulset camunda-zeebe

Then, follow the upgrade process as usual.

Zeebe Gateway

This change has no effect on the usual upgrade using the Helm CLI. However, it could be relevant if you use Helm post-rendering via tools such as Kustomize.

The following resources have been renamed:

  • ConfigMap: From camunda-zeebe-gateway-gateway to camunda-zeebe-gateway.
  • ServiceAccount: From camunda-zeebe-gateway-gateway to camunda-zeebe-gateway.
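
If you reference these resources in post-rendering patches (for example, with Kustomize), you can confirm the new names after the upgrade. The commands below assume the camunda namespace and release name used elsewhere in this guide:

kubectl -n camunda get configmap camunda-zeebe-gateway
kubectl -n camunda get serviceaccount camunda-zeebe-gateway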

Helm chart 8.3.0 (minor)

caution

Updating Operate, Tasklist, and Optimize from 8.2.x to 8.3.0 can take longer than expected, depending on the amount of data to be migrated. Additionally, we identified some bugs that could prevent the migration from succeeding. These are being addressed, and the fixes will be available in an upcoming 8.3.1 patch. We suggest not updating until the patch is released.

For the full changelog, view the Camunda Helm chart v8.3.0 release notes.

Breaking changes
  • Elasticsearch upgraded from v7.x to v8.x.
  • Keycloak upgraded from v19.x to v22.x.
  • Zeebe runs as a non-root user by default.

Elasticsearch

Elasticsearch upgraded from v7.x to v8.x. Follow the Elasticsearch official upgrade guide to ensure you are not using any deprecated values when upgrading.

Elasticsearch - values file

The syntax of the chart values file has changed due to the upgrade. There are two cases, depending on whether you use the default values or custom values.

Case One: Default values.yaml

If you are using our default values.yaml, no change is required. Follow the upgrade steps as usual with the updated default values.yaml.

Case Two: Custom values.yaml

If you have a custom values.yaml, change the image repository and tag:

image:
  repository: bitnami/elasticsearch
  tag: 8.8.2

Setting the persistent volume size of the master nodes can't be done using the volumeClaimTemplate anymore. It must be done using the master values:

master:
  masterOnly: false
  heapSize: 1024m
  persistence:
    size: 64Gi

A retentionPolicy can no longer be set in the Elasticsearch values. Instead, configure retention in the respective components. For example, here is an Elasticsearch retention configuration for the Tasklist component:

retention:
  enabled: false
  minimumAge: 30d

In the global section, the Elasticsearch host should also be changed to reference the release name:

host: "{{ .Release.Name }}-elasticsearch"
Elasticsearch - Data retention

The Elasticsearch 8 chart uses different PVC names. Therefore, the old PVCs must be migrated to the new names. This can be done in two ways: automatically (requires a certain Kubernetes version and a CSI driver) or manually (works with any Kubernetes setup).

caution

In all cases, the following steps must be executed before the upgrade.

Option One: CSI volume cloning

This method will take advantage of the CSI volume cloning functionality from the CSI driver.

Prerequisites:

  1. The Kubernetes cluster should be at least v1.20.
  2. The CSI driver must be present on your cluster.
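
One way to verify both prerequisites from the CLI is shown below (the exact output depends on your cluster and installed drivers):

kubectl version
kubectl get csidrivers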

Clones are provisioned like any other PVC with a reference to an existing PVC in the same namespace.

Before applying the clone manifest, ensure the Elasticsearch replicas are scaled to 0, and ensure the dataSource.name matches the PVC you would like to clone.

First, stop the Elasticsearch pods:

kubectl scale statefulset elasticsearch-master --replicas=0

Then, clone the PVC using a manifest like the following example (this example covers one PVC; you usually have two):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app.kubernetes.io/component: master
    app.kubernetes.io/instance: integration
    app.kubernetes.io/name: elasticsearch
  name: data-integration-elasticsearch-master-0
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 64Gi
  dataSource:
    name: elasticsearch-master-elasticsearch-master-0
    kind: PersistentVolumeClaim
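
Assuming the manifest above is saved to a file, for example pvc-clone.yaml (a hypothetical name), apply it in the same namespace as the existing PVC:

kubectl apply -f pvc-clone.yaml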

For reference, visit Kubernetes - CSI Volume Cloning.

Option Two: Update PV manually

This approach works with any Kubernetes cluster.

  1. Get the PV names for both Elasticsearch master PVCs.
  2. Change the reclaim policy of those PVs to Retain.

First, get the PV from PVC:

ES_PV_NAME0="$(kubectl get pvc elasticsearch-master-elasticsearch-master-0 -o jsonpath='{.spec.volumeName}')"

Then, change the Reclaim Policy:

kubectl patch pv "${ES_PV_NAME0}" -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'

Finally, verify the Reclaim Policy has been changed:

kubectl get pv "${ES_PV_NAME0}" | grep Retain || echo '[ERROR] Reclaim Policy is not Retain!'
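
With the default two master replicas, the same steps apply to the second PV (index 1); a sketch:

ES_PV_NAME1="$(kubectl get pvc elasticsearch-master-elasticsearch-master-1 -o jsonpath='{.spec.volumeName}')"
kubectl patch pv "${ES_PV_NAME1}" -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
kubectl get pv "${ES_PV_NAME1}" | grep Retain || echo '[ERROR] Reclaim Policy is not Retain!'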

Within both Elasticsearch master PVs, edit the claimRef to include the name of the new PVCs that will appear after the upgrade. For example:

claimRef:
  apiVersion: v1
  kind: PersistentVolumeClaim
  name: data-camunda-elasticsearch-master-0
  namespace: <NAMESPACE>
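
One way to make this change is with kubectl edit (repeat for the second PV):

kubectl edit pv "${ES_PV_NAME0}"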

Then, proceed with the upgrade. After a successful upgrade, you can delete the old PVCs, which will be in a Lost state.

Keycloak

Keycloak was upgraded from v19.x to v22.x, the latest version at the time of writing. Although no breaking changes were found, handle the upgrade carefully because it involves a major Keycloak version jump. Ensure you back up the Keycloak database before the upgrade.
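
As a minimal backup sketch, assuming the bundled PostgreSQL StatefulSet is named camunda-postgresql (verify the pod and secret names in your cluster before running this):

export POSTGRES_PASSWORD=$(kubectl -n camunda get secret camunda-postgresql -o jsonpath="{.data.postgres-password}" | base64 --decode)
kubectl -n camunda exec camunda-postgresql-0 -- env PGPASSWORD="$POSTGRES_PASSWORD" pg_dumpall -U postgres > keycloak-backup.sql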

note

The Keycloak PostgreSQL chart shows some warnings that are safe to ignore. This false positive has been reported and should be fixed in upcoming releases of the upstream PostgreSQL Helm chart.

coalesce.go:289: warning: destination for keycloak.postgresql.networkPolicy.egressRules.customRules is a table. Ignoring non-table value ([])
coalesce.go:289: warning: destination for keycloak.postgresql.networkPolicy.ingressRules.readReplicasAccessOnlyFrom.customRules is a table. Ignoring non-table value ([])
coalesce.go:289: warning: destination for keycloak.postgresql.networkPolicy.ingressRules.primaryAccessOnlyFrom.customRules is a table. Ignoring non-table value ([])

Zeebe

Running as a non-root user by default is a security improvement introduced in this version. However, because Zeebe uses persistent storage, installations upgraded from earlier versions may run into problems when existing file permissions do not match the permissions of the new running user. There are two ways to fix this:

Option One: Use Zeebe user ID (recommended)

Change podSecurityContext.fsGroup to point to the UID of the running user. The default user in Zeebe has the UID 1000. That will modify the group permissions of all persistent volumes attached to that Pod.

zeebe:
  podSecurityContext:
    fsGroup: 1000

If you have already modified the running user, set fsGroup to match that UID:

zeebe:
  containerSecurityContext:
    runAsUser: 1008
  podSecurityContext:
    fsGroup: 1008

Some storage classes may not support the fsGroup option. In this case, one possibility is to run a debug Pod that chowns the mounted volumes.
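
A minimal sketch of such a debug Pod, assuming the Zeebe data PVC is named data-camunda-zeebe-0 and repeating it per broker replica (adjust the claim name, UID, and namespace to your installation):

apiVersion: v1
kind: Pod
metadata:
  name: zeebe-data-chown
spec:
  restartPolicy: Never
  containers:
    - name: chown
      image: busybox
      # Change ownership to the Zeebe default UID/GID 1000 (assumption; match your runAsUser).
      command: ["sh", "-c", "chown -R 1000:1000 /data"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-camunda-zeebe-0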

Option Two: Use root user ID

If the recommended solution does not help, you may change the running user back to root.

zeebe:
  containerSecurityContext:
    runAsUser: 0

Web Modeler

The external database configuration format in Web Modeler has changed from separate host, port, and database values to a JDBC URL.

The old format:

webModeler:
  restapi:
    externalDatabase:
      host: web-modeler-postgres-ext
      port: 5432
      database: rest-api-db

The new format:

webModeler:
  restapi:
    externalDatabase:
      url: "jdbc:postgresql://web-modeler-postgres-ext:5432/rest-api-db"

Upgrade Camunda

If you have installed the Camunda 8 Helm charts before with default values, Identity and the related authentication mechanism are enabled. If you have disabled Identity, see how to upgrade Camunda with Identity disabled.

Identity enabled

Extract secrets

If not specified on installation, the Helm chart generates random secrets for all components (including Keycloak). To upgrade successfully, these secrets must be extracted and provided during your upgrade.

To extract the secrets, use the following code snippet, replacing camunda with your actual Helm release name:

# Uncomment if Console is enabled.
# export CONSOLE_SECRET=$(kubectl get secret "camunda-console-identity-secret" -o jsonpath="{.data.console-secret}" | base64 --decode)
export TASKLIST_SECRET=$(kubectl get secret "camunda-tasklist-identity-secret" -o jsonpath="{.data.tasklist-secret}" | base64 --decode)
export OPTIMIZE_SECRET=$(kubectl get secret "camunda-optimize-identity-secret" -o jsonpath="{.data.optimize-secret}" | base64 --decode)
export OPERATE_SECRET=$(kubectl get secret "camunda-operate-identity-secret" -o jsonpath="{.data.operate-secret}" | base64 --decode)
export CONNECTORS_SECRET=$(kubectl get secret "camunda-connectors-identity-secret" -o jsonpath="{.data.connectors-secret}" | base64 --decode)
export ZEEBE_SECRET=$(kubectl get secret "camunda-zeebe-identity-secret" -o jsonpath="{.data.zeebe-secret}" | base64 --decode)
export KEYCLOAK_ADMIN_SECRET=$(kubectl get secret "camunda-keycloak" -o jsonpath="{.data.admin-password}" | base64 --decode)
export KEYCLOAK_MANAGEMENT_SECRET=$(kubectl get secret "camunda-keycloak" -o jsonpath="{.data.management-password}" | base64 --decode)
export POSTGRESQL_SECRET=$(kubectl get secret "camunda-postgresql" -o jsonpath="{.data.postgres-password}" | base64 --decode)
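
Optionally, verify that each exported variable is non-empty before upgrading; a quick sanity check (bash):

for v in TASKLIST_SECRET OPTIMIZE_SECRET OPERATE_SECRET CONNECTORS_SECRET ZEEBE_SECRET KEYCLOAK_ADMIN_SECRET KEYCLOAK_MANAGEMENT_SECRET POSTGRESQL_SECRET; do
  [ -n "${!v}" ] || echo "[WARN] $v is empty"
done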

Run helm upgrade

After exporting all secrets into environment variables, run the following upgrade command. If Console is enabled, also pass --set global.identity.auth.console.existingSecret=$CONSOLE_SECRET:

helm repo update
helm upgrade camunda camunda/camunda-platform \
  --set global.identity.auth.tasklist.existingSecret=$TASKLIST_SECRET \
  --set global.identity.auth.optimize.existingSecret=$OPTIMIZE_SECRET \
  --set global.identity.auth.operate.existingSecret=$OPERATE_SECRET \
  --set global.identity.auth.connectors.existingSecret=$CONNECTORS_SECRET \
  --set global.identity.auth.zeebe.existingSecret=$ZEEBE_SECRET \
  --set identityKeycloak.auth.adminPassword=$KEYCLOAK_ADMIN_SECRET \
  --set identityKeycloak.auth.managementPassword=$KEYCLOAK_MANAGEMENT_SECRET \
  --set identityKeycloak.postgresql.auth.password=$POSTGRESQL_SECRET

If you set secret values on installation, you must specify them again on the upgrade, either via --set as demonstrated above, or by passing in a values file using the -f flag.
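
As a sketch of the values-file alternative, mirroring the --set flags above (treat the exact key layout as an assumption and verify it against your chart version):

global:
  identity:
    auth:
      tasklist:
        existingSecret: "<TASKLIST_SECRET>"
      optimize:
        existingSecret: "<OPTIMIZE_SECRET>"
      operate:
        existingSecret: "<OPERATE_SECRET>"
      connectors:
        existingSecret: "<CONNECTORS_SECRET>"
      zeebe:
        existingSecret: "<ZEEBE_SECRET>"
identityKeycloak:
  auth:
    adminPassword: "<KEYCLOAK_ADMIN_SECRET>"
    managementPassword: "<KEYCLOAK_MANAGEMENT_SECRET>"
  postgresql:
    auth:
      password: "<POSTGRESQL_SECRET>"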

note

For more details on the Keycloak upgrade path, see the Keycloak upgrade guide.

Identity disabled

If you have disabled Camunda Identity and the related authentication mechanism, Camunda can be upgraded with the following command:

helm repo update
helm upgrade camunda camunda/camunda-platform

Upgrade failed

The following upgrade error is related to secrets extraction:

Error: UPGRADE FAILED: execution error at (camunda-platform/charts/identity/templates/tasklist-secret.yaml:10:22):
PASSWORDS ERROR: You must provide your current passwords when upgrading the release.
Note that even after reinstallation, old credentials may be needed as they may be kept in persistent volume claims.
Further information can be obtained at https://docs.bitnami.com/general/how-to/troubleshoot-helm-chart-issues/#credential-errors-while-upgrading-chart-releases

As the error indicates, global.identity.auth.tasklist.existingSecret must not be empty. Add --set global.identity.auth.tasklist.existingSecret=$TASKLIST_SECRET to the upgrade command. To get the current value:

export TASKLIST_SECRET=$(kubectl get secret --namespace "camunda" "camunda-platform-test-tasklist-identity-secret" -o jsonpath="{.data.tasklist-secret}" | base64 --decode)

When the Helm chart is removed or upgraded, persistent volume claims (PVCs) are neither removed nor recreated, so data written with the previously generated credentials remains in place. To prevent Helm from recreating secrets that have already been generated, the Bitnami library chart used by Camunda blocks the upgrade path, resulting in the error above.

To complete your upgrade, extract your secrets, then provide them as environment variables during the upgrade process.