Red Hat OpenShift
Red Hat OpenShift, a Kubernetes distribution maintained by Red Hat, provides options for both managed and on-premises hosting.
Deploying Camunda 8 on Red Hat OpenShift is supported using Helm, given the appropriate configurations.
However, it's important to note that the Security Context Constraints (SCCs) and Routes configurations might require slight deviations from the guidelines provided in the general Helm deployment guide.
Additional information and a high-level overview, based on Kubernetes as the upstream project, are available in our Kubernetes deployment reference.
Requirements
- Helm
- kubectl to interact with the cluster.
- jq to interact with some variables.
- GNU envsubst to generate manifests.
- oc (version supported by your OpenShift) to interact with OpenShift.
- AWS Quotas
- Ensure at least 3 Elastic IPs (one per availability zone).
- Verify quotas for VPCs, EC2 instances, and storage.
- Request increases if needed via the AWS console (guide); costs apply only to the resources you actually use.
- A namespace to host the Camunda Platform.
For the tool versions used, check the .tool-versions file in the repository. It contains an up-to-date list of versions that we also use for testing.
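To quickly confirm the required tools are available on your PATH, you can run the following (versions shown will vary):

helm version --short
kubectl version --client
oc version --client
jq --version
envsubst --version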
Architecture
This section installs Camunda 8 following the architecture described in the reference architecture. The architecture includes the following core components:
- Orchestration Cluster: Core process execution engine (Zeebe, Operate, Tasklist, and Identity)
- Web Modeler and Console: Management and design tools (Web Modeler, Console, and Management Identity)
- Keycloak as OIDC provider: Example OIDC provider (can be replaced with any compatible IdP)
For OpenShift deployments, the following OpenShift-specific configurations are also included:
- OpenShift Routes: Native OpenShift way to expose services externally (alternative to standard Kubernetes Ingress)
- Security Context Constraints (SCCs): Security framework for controlling pod and container permissions
This guide uses a single Kubernetes namespace for simplicity, since the deployment is done with a single Helm chart. This differs from the reference architecture, which recommends separating Orchestration Cluster and Web Modeler or Console into different namespaces in production to improve isolation and enable independent scaling.
Deploy Camunda 8 via Helm charts
Configure your deployment
Start by creating a values.yml file to store the configuration for your environment.
This file will contain key-value pairs that will be substituted using envsubst.
Throughout this guide, you will add and merge values into this file to configure the deployment to fit your needs.
You can find a reference example of this file here:
loading...
This guide references multiple configuration files that need to be merged into a single YAML file. Be cautious to avoid duplicate keys when merging the files. Additionally, pay close attention when copying and pasting YAML content. Ensure that the separator notation --- does not inadvertently split the configuration into multiple documents.
We strongly recommend double-checking your YAML file before applying it. You can use tools like yamllint.com or the YAML Lint CLI if you prefer not to share your information online.
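For example, assuming the yamllint CLI is installed locally, you can validate the merged file before applying it:

yamllint values.yml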
Configuring the Ingress
Before exposing services outside the cluster, we need an Ingress component. Here's how you can configure it:
- Using OpenShift Routes
- Using Kubernetes Ingress
- No Ingress
Routes expose services externally by linking a URL to a service within the cluster. OpenShift supports both the standard Kubernetes Ingress and routes, giving cluster users the flexibility to choose.
Routes exist in OpenShift because their specification predates Ingress. Their functionality also differs from Ingress; for example, unlike Ingress, routes don't allow multiple services to be linked to a single route or the use of paths.
To use these routes for the Zeebe Gateway, configure them through the Ingress settings as well.
Setting Up the application domain for Camunda 8
The route created by OpenShift will use a domain to provide access to the platform. By default, you can use the OpenShift applications domain, but any other domain supported by the router can also be used.
To retrieve the OpenShift applications domain (used as an example here), run the following command and define the route domain that will be used for the Camunda 8 deployment:
loading...
If you choose to use a custom domain instead, ensure it is supported by your router configuration and replace the example domain with your desired domain. For more details on configuring custom domains in OpenShift, refer to the official custom domain OpenShift documentation.
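For illustration, retrieving the applications domain and deriving the route domain typically looks like the following sketch. The DOMAIN_NAME variable matches the convention used later in this guide; the camunda prefix is an assumption:

# Read the cluster's default applications domain from the Ingress config
export OPENSHIFT_APPS_DOMAIN="$(oc get ingresses.config/cluster -o jsonpath='{.spec.domain}')"
# Derive the domain the Camunda 8 routes will use (prefix is illustrative)
export DOMAIN_NAME="camunda.$OPENSHIFT_APPS_DOMAIN"
echo "Camunda 8 will be reachable under $DOMAIN_NAME"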
Checking if HTTP/2 is enabled
As the Zeebe Gateway also uses gRPC (which relies on HTTP/2), HTTP/2 Ingress Connectivity must be enabled.
To check if HTTP/2 is already enabled on your OpenShift cluster, run the following command:
oc get ingresses.config/cluster -o json | jq '.metadata.annotations."ingress.operator.openshift.io/default-enable-http2"'
Alternatively, if you use a dedicated IngressController for the deployment:
loading...
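For illustration, that per-controller check typically looks like this, assuming an IngressController named default:

oc -n openshift-ingress-operator get ingresscontrollers/default -o json | jq '.metadata.annotations."ingress.operator.openshift.io/default-enable-http2"'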
- If the output is "true", HTTP/2 is enabled.
- If the output is null or empty, HTTP/2 is not enabled.
Enable HTTP/2
If HTTP/2 is not enabled, you can enable it by running the following command:
IngressController configuration:
loading...
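For illustration, that per-controller command typically looks like this, assuming an IngressController named default:

oc -n openshift-ingress-operator annotate ingresscontrollers/default ingress.operator.openshift.io/default-enable-http2=true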
Global cluster configuration:
oc annotate ingresses.config/cluster ingress.operator.openshift.io/default-enable-http2=true
This adds the annotation required to enable HTTP/2 for Ingress globally across the cluster.
Configure Route TLS
Additionally, the Zeebe Gateway should be configured to use an encrypted connection with TLS. In OpenShift, the connection from HAProxy to the Zeebe Gateway service can use HTTP/2 only for re-encryption or pass-through routes, and not for edge-terminated or insecure routes.
- Zeebe cluster: two TLS secrets for the Zeebe Gateway are required, one for the service and the other for the route:
  - The first TLS secret is issued to the Zeebe Gateway service name and is referenced as camunda-platform-internal-service-certificate. It must use PKCS #8 or PKCS #1 syntax, as Zeebe only supports these. This certificate is also used by other components such as Operate and Tasklist. In the example below, a TLS certificate is generated for the Zeebe Gateway service with an annotation (see the sketch after this list); the generated certificate takes the form of a secret.
Another option is Cert Manager. For more details, review the OpenShift documentation.
PKCS #8, PKCS #1 syntax
PKCS #1 private key encoding: produces a PEM block that contains the private key algorithm in the header and the private key in the body. A key using this encoding can be recognized by its BEGIN RSA PRIVATE KEY or BEGIN EC PRIVATE KEY header. Note: this encoding is not supported for Ed25519 keys; attempting to use it with an Ed25519 key is ignored and defaults to PKCS #8.
PKCS #8 private key encoding: produces a PEM block with a static header and both the private key algorithm and the private key in the body. A key using this encoding can be recognized by its BEGIN PRIVATE KEY header.
  - The second TLS secret is used on the exposed route and is referenced as camunda-platform-external-certificate. For example, this would be the same TLS secret used for Ingress. We also configure the Zeebe Gateway Ingress to create a re-encrypt route.
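For illustration only: conceptually, OpenShift's service CA operator generates the service certificate secret once the service carries the serving-cert annotation. In this guide the annotation is applied through the Helm values rather than manually, and the service name below is an assumption:

# The service CA operator issues a certificate into the named secret
oc annotate service "$CAMUNDA_RELEASE_NAME-zeebe-gateway" \
  service.beta.openshift.io/serving-cert-secret-name=camunda-platform-internal-service-certificate

# Once generated, verify the key encoding by its PEM header
# (a PKCS #8 key starts with "BEGIN PRIVATE KEY"):
oc get secret camunda-platform-internal-service-certificate \
  -o jsonpath='{.data.tls\.key}' | base64 -d | head -1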
To configure a Zeebe cluster securely, it's essential to set up a secure communication configuration between pods:
- We enable gRPC Ingress for the Core pod, which sets up a secure proxy that we'll use to communicate with the Zeebe cluster. To avoid conflicts with other services, we use a specific domain (zeebe-$DOMAIN_NAME) for the gRPC proxy, different from the one used by the other services ($DOMAIN_NAME). Note that the port used for gRPC is 443.
- We mount the service certificate secret (camunda-platform-internal-service-certificate) on the Core pod and configure a secure TLS connection.

Update your values.yml file with the following. The actual configuration properties can be reviewed in the Zeebe Gateway configuration documentation.
- Connectors: update your values.yml file with the following:
loading...
The actual configuration properties can be reviewed in the connectors configuration documentation.
- Configure all other applications running inside the cluster that connect to the Zeebe Gateway to also use TLS.
- Set up the global configuration to enable the single Ingress definition with the host. Update your configuration file as shown below:
loading...
Routes serve as OpenShift's default Ingress implementation.
If you find that its features aren't suitable for your needs, or if you prefer to use a Kubernetes-native Ingress controller, such as the ingress-nginx controller, you have that option.
For guidance on installing an Ingress controller, you can refer to the Ingress Setup documentation.
Do not confuse the ingress-nginx controller with the NGINX Ingress Controller that is endorsed by Red Hat for usage with OpenShift. Despite very similar names, they are two different products.
If you decide to use the Red Hat-endorsed NGINX Ingress Controller, additional adjustments to the Camunda 8 Ingress objects and to the NGINX Ingress Controller itself are required to make gRPC and HTTP/2 connections work. In that case, refer to the example and the prerequisites.
If you do not have a domain name or do not intend to use one for your Camunda 8 deployment, external access to Camunda 8 web endpoints from outside the OpenShift cluster will not be possible.
However, you can use kubectl port-forward to access the Camunda platform without a domain name or Ingress configuration. For more information, refer to the kubectl port-forward documentation.
To make this work, you will need to configure the deployment to reference localhost with the forwarded port. Update your values.yml file with the following:
loading...
When running without a domain, Console validates the JWT issuer claim against the configured Keycloak base URL. To keep token issuance consistent and avoid mismatches, the chart configuration sets Keycloak's hostname to its Kubernetes Service name when operating locally. This means that during port-forwarding you may need to map the service hostname to 127.0.0.1 so that browser redirects and token issuer values align.
Add (or update) the following entry in your /etc/hosts file while developing locally:
127.0.0.1 $CAMUNDA_RELEASE_NAME-keycloak
After adding this entry, you can reach Keycloak at:
http://$CAMUNDA_RELEASE_NAME-keycloak:18080/auth
Why port 18080?
We forward container port 8080 (originally 80) to a non‑privileged local port (18080) to avoid requiring elevated privileges and to reduce conflicts with other processes using 8080.
This constraint does not apply when a proper domain and Ingress are configured (the public FQDN is then used as the issuer and no hosts file changes are needed).
Configuring the Security Context Constraints
Depending on your OpenShift cluster's Security Context Constraints (SCCs) configuration, the deployment process may vary.
By default, OpenShift employs more restrictive SCCs, so the Helm chart must assign null to the user running all components and dependencies, allowing OpenShift to assign arbitrary user IDs instead.
- Restrictive SCCs
- Permissive SCCs
The global.compatibility.openshift.adaptSecurityContext variable in your values.yml can be set to one of the following values:
- force: the runAsUser and fsGroup values will be null in all components.
- disabled: the runAsUser and fsGroup values will not be modified (default).
loading...
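Given the variable described above, the referenced snippet presumably amounts to the following values entry:

global:
  compatibility:
    openshift:
      adaptSecurityContext: force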
To use permissive SCCs, simply install the charts as they are. Follow the general Helm deployment guide.
loading...
Enable Enterprise components
Some components are not enabled by default in this deployment. For more information on how to configure and enable these components, refer to configuring Enterprise components and connectors.
Fill your deployment with actual values
Once you've prepared the values.yml file, run the following envsubst command to substitute the environment variables with their actual values:
loading...
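For reference, the substitution step typically takes this shape, using the file names from this guide:

envsubst < values.yml > generated-values.yml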
Next, store various passwords in a Kubernetes secret, which will be used by the Helm chart. Below is an example of how to set up the required secret. You can use openssl to generate random secrets and store them in environment variables:
loading...
Use these environment variables in the kubectl command to create the secret.
- The smtp-password value should be replaced with the appropriate external value (see how it's used by Web Modeler).
loading...
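A minimal sketch of that secret creation, assuming the key names referenced in this guide (the full set of keys is in the snippet above; the FIRST_USER_PASSWORD variable name is illustrative):

# Generate a random password for the initial Identity user
export FIRST_USER_PASSWORD="$(openssl rand -hex 16)"

# Create the secret consumed by the Helm chart; add further keys as required
kubectl create secret generic identity-secret-for-components \
  --namespace "$CAMUNDA_NAMESPACE" \
  --from-literal=identity-first-user-password="$FIRST_USER_PASSWORD" \
  --from-literal=smtp-password="$SMTP_PASSWORD"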
Install Camunda 8 using Helm
Now that the generated-values.yml is ready, you can install Camunda 8 using Helm.
The following are the required environment variables with some example values:
loading...
- CAMUNDA_NAMESPACE is the Kubernetes namespace where Camunda will be installed.
- CAMUNDA_RELEASE_NAME is the name of the Helm release associated with this Camunda installation.
Then run the following command:
loading...
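For reference, the command is typically of this shape; the camunda/camunda-platform chart name is the usual one, but confirm it against the snippet above:

helm repo add camunda https://helm.camunda.io
helm repo update
helm upgrade --install "$CAMUNDA_RELEASE_NAME" camunda/camunda-platform \
  --version "$CAMUNDA_HELM_CHART_VERSION" \
  --namespace "$CAMUNDA_NAMESPACE" \
  -f generated-values.yml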
This command:
- Installs (or upgrades) the Camunda platform using the Helm chart.
- Substitutes the appropriate version using the $CAMUNDA_HELM_CHART_VERSION environment variable.
- Applies the configuration from generated-values.yml.
This guide uses helm upgrade --install because it performs an install on the first deployment and an upgrade on subsequent runs, which simplifies future Camunda 8 Helm chart upgrades and other component upgrades.
You can track the progress of the installation using the following command:
loading...
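That progress check is typically a watch over the pods in the namespace, for example:

kubectl get pods --namespace "$CAMUNDA_NAMESPACE" --watch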
Verify connectivity to Camunda 8
First, we need an OAuth client to be able to connect to the Camunda 8 cluster.
Generate an M2M token using Identity
Generate an M2M token by following the steps outlined in the Identity getting started guide, along with the incorporating applications documentation.
Below is a summary of the necessary instructions:
- With domain
- Without domain
- Open Identity in your browser at https://${DOMAIN_NAME}/managementidentity. You will be redirected to Keycloak and prompted to log in with a username and password.
- Log in with the initial user admin (defined in identity.firstUser of the values file). Retrieve the generated password (created earlier when running the secret creation script) from the Kubernetes secret and use it to authenticate:
kubectl get secret identity-secret-for-components \
--namespace "$CAMUNDA_NAMESPACE" \
-o jsonpath='{.data.identity-first-user-password}' | base64 -d; echo
- Select Add application and select M2M as the type. Assign a name like "test."
- Select the newly created application. Then, select Access to APIs > Assign permissions, and select the Orchestration API with "read" and "write" permission.
- Retrieve the client-id and client-secret values from the application details:
export ZEEBE_CLIENT_ID='client-id' # retrieve the value from the identity page of your created m2m application
export ZEEBE_CLIENT_SECRET='client-secret' # retrieve the value from the identity page of your created m2m application
Identity and Keycloak must be port-forwarded to be able to connect to the cluster.
kubectl port-forward "services/$CAMUNDA_RELEASE_NAME-identity" 8085:80 --namespace "$CAMUNDA_NAMESPACE"
kubectl port-forward "services/$CAMUNDA_RELEASE_NAME-keycloak" 18080:8080 --namespace "$CAMUNDA_NAMESPACE"
For a richer localhost experience (and to avoid managing many individual port-forward commands), you can use kubefwd to forward all Services in the target namespace and make them resolvable by their in-cluster DNS names on your workstation.
Example (requires sudo to bind privileged ports and modify /etc/hosts):
sudo kubefwd services -n "$CAMUNDA_NAMESPACE"
After this runs, you can reach services directly, for example:
- Identity: http://$CAMUNDA_RELEASE_NAME-identity/managementidentity
- Keycloak: http://$CAMUNDA_RELEASE_NAME-keycloak
- Zeebe Gateway gRPC: $CAMUNDA_RELEASE_NAME-zeebe-gateway:26500
You can still use localhost ports if you prefer traditional port-forwarding. Stop kubefwd with Ctrl+C when finished. Be aware kubefwd modifies your /etc/hosts temporarily; it restores the file when it exits.
- Open Identity in your browser at http://localhost:8085/managementidentity. You will be redirected to Keycloak and prompted to log in with a username and password.
- Log in with the initial user admin (defined in identity.firstUser of the values file). Retrieve the generated password (created earlier when running the secret creation script) from the Kubernetes secret and use it to authenticate:
kubectl get secret identity-secret-for-components \
--namespace "$CAMUNDA_NAMESPACE" \
-o jsonpath='{.data.identity-first-user-password}' | base64 -d; echo
- Select Add application and select M2M as the type. Assign a name like "test."
- Select the newly created application. Then, select Access to APIs > Assign permissions, and select the Orchestration API with "read" and "write" permission.
- Retrieve the client-id and client-secret values from the application details:
export ZEEBE_CLIENT_ID='client-id' # retrieve the value from the identity page of your created m2m application
export ZEEBE_CLIENT_SECRET='client-secret' # retrieve the value from the identity page of your created m2m application
To access the other services and their UIs, port-forward those Components as well:
Orchestration:
> kubectl port-forward "svc/$CAMUNDA_RELEASE_NAME-zeebe-gateway" 8080:8080 --namespace "$CAMUNDA_NAMESPACE"
Optimize:
> kubectl port-forward "svc/$CAMUNDA_RELEASE_NAME-optimize" 8083:80 --namespace "$CAMUNDA_NAMESPACE"
Connectors:
> kubectl port-forward "svc/$CAMUNDA_RELEASE_NAME-connectors" 8086:8080 --namespace "$CAMUNDA_NAMESPACE"
WebModeler:
> kubectl port-forward "svc/$CAMUNDA_RELEASE_NAME-web-modeler-webapp" 8070:80 --namespace "$CAMUNDA_NAMESPACE"
Console:
> kubectl port-forward "svc/$CAMUNDA_RELEASE_NAME-console" 8087:80 --namespace "$CAMUNDA_NAMESPACE"
Use the token
- REST API
- Desktop Modeler
For a detailed guide on generating and using a token, consult the relevant documentation on authenticating with the Orchestration Cluster REST API.
- With domain
- Without domain
Export the following environment variables:
loading...
This requires port-forwarding the Zeebe Gateway to be able to connect to the cluster:
kubectl port-forward "services/$CAMUNDA_RELEASE_NAME-zeebe-gateway" 8080:8080 --namespace "$CAMUNDA_NAMESPACE"
Export the following environment variables:
loading...
Generate a temporary token to access the Orchestration Cluster REST API, then capture the value of the access_token property and store it as your token. Use the stored token (referred to as TOKEN in this case) to interact with the Orchestration Cluster REST API and display the cluster topology:
loading...
This results in the following example output:
loading...
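As an illustration of the whole flow under the without-domain setup above (the /v2/topology endpoint path is an assumption and may vary by version):

# Request a client-credentials token from Keycloak (port-forwarded on 18080)
TOKEN="$(curl -s -X POST \
  "http://localhost:18080/auth/realms/camunda-platform/protocol/openid-connect/token" \
  -d "grant_type=client_credentials" \
  -d "client_id=${ZEEBE_CLIENT_ID}" \
  -d "client_secret=${ZEEBE_CLIENT_SECRET}" | jq -r '.access_token')"

# Query the cluster topology through the port-forwarded gateway (8080)
curl -s "http://localhost:8080/v2/topology" -H "Authorization: Bearer ${TOKEN}"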
Follow our existing Modeler guide on deploying a diagram. Below are the helper values you need to fill in Modeler:
- With domain
- Without domain
The following values are required for the OAuth authentication:
- Cluster endpoint: https://zeebe-$DOMAIN_NAME, replacing $DOMAIN_NAME with your domain
- Client ID: retrieve the client ID value from the Identity page of your created M2M application
- Client Secret: retrieve the client secret value from the Identity page of your created M2M application
- OAuth Token URL: https://$DOMAIN_NAME/auth/realms/camunda-platform/protocol/openid-connect/token, replacing $DOMAIN_NAME with your domain
- Audience: zeebe-api, the default for Camunda 8 Self-Managed
This requires port-forwarding the Zeebe Gateway to be able to connect to the cluster:
kubectl port-forward "services/$CAMUNDA_RELEASE_NAME-zeebe-gateway" 26500:26500 --namespace "$CAMUNDA_NAMESPACE"
The following values are required for OAuth authentication:
- Cluster endpoint: http://localhost:26500
- Client ID: retrieve the client ID value from the Identity page of your created M2M application
- Client Secret: retrieve the client secret value from the Identity page of your created M2M application
- OAuth Token URL: http://localhost:18080/auth/realms/camunda-platform/protocol/openid-connect/token
- Audience: zeebe-api, the default for Camunda 8 Self-Managed
Pitfalls to avoid
For general deployment pitfalls, visit the deployment troubleshooting guide.
Security Context Constraints (SCCs)
Security Context Constraints (SCCs) are a set of conditions that a pod must adhere to in order to be accepted into the system. They define the security conditions under which a pod operates.
Similar to how roles control user permissions, SCCs regulate the permissions of deployed applications, both at the pod and container level. It's generally recommended to deploy applications with the most restrictive SCCs possible. If you're unfamiliar with security context constraints, you can refer to the OpenShift documentation.
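For example, to see which SCC was applied to a running pod, inspect its openshift.io/scc annotation (the pod name is a placeholder):

oc get pod <pod-name> --namespace "$CAMUNDA_NAMESPACE" \
  -o jsonpath="{.metadata.annotations['openshift\.io/scc']}"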
- Restrictive SCCs (default)
- Non-root SCCs
- Permissive SCCs
Restrictive SCCs
The following represents the most restrictive SCCs that can be used to deploy Camunda 8. Note that in OpenShift 4.10, these are equivalent to the built-in restricted SCCs (which are the default SCCs).
Allow Privileged: false
Default Add Capabilities: <none>
Required Drop Capabilities: KILL, MKNOD, SYS_CHROOT, SETUID, SETGID
Allowed Capabilities: <none>
Allowed Seccomp Profiles: <none>
Allowed Volume Types: configMap, downwardAPI, emptyDir, persistentVolumeClaim, projected, secret
Allow Host Network: false
Allow Host Ports: false
Allow Host PID: false
Allow Host IPC: false
Read Only Root Filesystem: false
Run As User Strategy: MustRunAsRange
SELinux Context Strategy: MustRunAs
FSGroup Strategy: MustRunAs
Supplemental Groups Strategy: RunAsAny
When using these SCCs, be sure not to specify any runAsUser or fsGroup values in either the pod or container security context. Instead, allow OpenShift to assign arbitrary IDs.
If you are providing the ID ranges yourself, you can also configure the runAsUser and fsGroup values accordingly.
The Camunda Helm chart can be deployed to OpenShift with a few modifications, primarily revolving around your desired security context constraints.
Non-root SCCs
If you intend to deploy Camunda 8 while restricting applications from running as root (e.g., using the nonroot built-in SCCs), you'll need to configure each pod and container to run as a non-root user. For example, when deploying Zeebe using a stateful set, you would include the following YAML, replacing 1000 with the desired user ID:
spec:
  template:
    spec:
      securityContext:
        runAsUser: 1000
      containers:
        - securityContext:
            runAsUser: 1000
As the container user in OpenShift is always part of the root group, defining a fsGroup for any Camunda 8 application pod security context is unnecessary.
This configuration is necessary for all Camunda 8 applications, as well as related ones (e.g., Keycloak, PostgreSQL, etc.). It's particularly crucial for stateful applications that will write to persistent volumes, but it's also generally a good security practice.
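To confirm a container actually runs with the configured non-root user, you can check its effective UID; the pod name is a placeholder, and the output should resemble uid=1000 with gid=0(root), matching the root-group note above:

oc exec <zeebe-pod-name> --namespace "$CAMUNDA_NAMESPACE" -- id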
Permissive SCCs
If you deploy Camunda 8 (and related infrastructure) with permissive SCCs out of the box, there's nothing specific for you to configure. Here, permissive SCCs refer to those where the strategy for RunAsUser is defined as RunAsAny (including root).
Writing pod permissions for logs
OpenShift security policies often restrict writing to files within containers. This can cause Camunda pods to fail when they attempt to write to the filesystem, which is typically required for writing logs to files.
Instead, we configure the environment to output logs to stdout and stderr only, which are supported by OpenShift logging infrastructure.
For Camunda components (except Identity), this can be done by setting the environment variable in the chart values:
zeebe/tasklist/operate/etc:
  env:
    - name: CAMUNDA_LOG_FILE_APPENDER_ENABLED
      value: "false"
This will disable the file appender and ensure logs are visible via the container's log output.
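You can then confirm that logs reach stdout, for example (the deployment name is illustrative and depends on your release and chart version):

kubectl logs "deploy/$CAMUNDA_RELEASE_NAME-operate" --namespace "$CAMUNDA_NAMESPACE" --tail=20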