Deploy an EKS cluster with eksctl
This guide explores the streamlined process of deploying Camunda 8 Self-Managed on Amazon Elastic Kubernetes Service (EKS) using the eksctl command-line tool.
Eksctl is a common CLI tool for quickly creating and managing your Amazon EKS clusters and is officially endorsed by Amazon.
This guide provides a user-friendly approach for setting up and managing Amazon EKS clusters. It covers everything from the prerequisites, such as the AWS IAM role configuration, to creating a fully functional Amazon EKS cluster and a managed Aurora PostgreSQL instance. It is ideal for anyone seeking a practical and efficient way to deploy Camunda 8 on AWS, with detailed instructions for setting up the necessary environment and AWS IAM configuration.
Prerequisites
- An AWS account is required to create resources within AWS.
- AWS CLI (2.17+), a CLI tool for creating AWS resources.
- eksctl (0.191+), a CLI tool for creating and managing Amazon EKS clusters.
- kubectl (1.30+), a CLI tool to interact with the cluster.
Considerations
This is a basic setup to get started with Camunda 8, but it does not reflect a high-performance setup. For a better starting point towards production, we recommend utilizing Infrastructure as Code tooling and following our Terraform guide.
To try out Camunda 8 or develop against it, consider signing up for our SaaS offering, or if you already have an Amazon EKS cluster, consider skipping to the Helm guide.
While the guide is primarily tailored for UNIX systems, it can also be run under Windows by utilizing the Windows Subsystem for Linux.
Following this guide will incur costs on your Cloud provider account, namely for the managed Kubernetes service, the Kubernetes nodes running in EC2, Elastic Block Store (EBS) volumes, and Route53. More information can be found on AWS and their pricing calculator, as the total cost varies per region.
Outcome
Following this guide results in:
- An Amazon EKS 1.30 Kubernetes cluster with four nodes.
- Installed and configured EBS CSI driver, which is used by the Camunda 8 Helm chart to create persistent volumes.
- A managed Aurora PostgreSQL 15.8 instance that will be used by the Camunda 8 components.
- IAM Roles for Service Accounts (IRSA) configured.
- This simplifies the setup by not relying on explicit credentials, but instead allows creating a mapping between IAM roles and Kubernetes service accounts based on a trust relationship. A blog post by AWS visualizes this on a technical level.
- This allows a Kubernetes service account to temporarily impersonate an AWS IAM role to interact with AWS services like S3, RDS, or Route53 without supplying explicit credentials.
This basic cluster setup is required to continue with the Helm set up as described in our AWS Helm guide.
Deploying an Amazon EKS cluster with eksctl
The eksctl tool allows the creation of clusters via a single command, but this doesn't support all configuration options. Therefore, we're supplying a YAML file that can be used with the CLI to create the cluster preconfigured with various settings.
eksctl prerequisites
To configure access, set up authentication to allow interaction with AWS via the AWS CLI.
A user creating AWS resources will be the owner and will always be linked to them. This means that this user will always have admin access on Kubernetes unless the cluster is deleted.
Therefore, it is good practice to create a separate IAM user that is solely used for the eksctl command. Create access keys for the new IAM user via the console and export them as the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables to use with the AWS CLI and eksctl.
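For example, the exported credentials and a quick sanity check could look like the following sketch. The key values shown are the placeholder examples from the AWS documentation; replace them with the access keys generated for your dedicated IAM user.
# Placeholder values, replace with the access keys of the dedicated IAM user
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
# Confirm that the AWS CLI now authenticates as the intended user
aws sts get-caller-identity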
Environment prerequisites
We recommend exporting multiple environment variables to streamline the execution of the subsequent commands.
The following are the required environment variables with some example values. Make sure to define a secure password for the PostgreSQL database.
# The name used for the Kubernetes cluster
export CLUSTER_NAME=camunda-cluster
# Your standard region that you host AWS resources in
export REGION=eu-central-1
# The availability zones within the region
export ZONES="eu-central-1a eu-central-1b eu-central-1c"
# The AWS Account ID
export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
# CIDR range used for the VPC subnets
export CIDR=10.192.0.0/16
# Name for the Postgres DB cluster and instance
export RDS_NAME=camunda-postgres
# Postgres DB admin username
export PG_USERNAME=camunda
# Postgres DB password of the admin user
export PG_PASSWORD=camundarocks123
# The default database name created within Postgres. Can directly be consumed by the Helm chart
export DEFAULT_DB_NAME=camunda
# The PostgreSQL version
export POSTGRESQL_VERSION=15.8
# Optional
# Default node type for the Kubernetes cluster
export NODE_TYPE=m6i.xlarge
# Initial node count to create the cluster with
export NODE_COUNT=4
Kubernetes secret encryption
The following enables envelope encryption to add another layer of protection to your Kubernetes secrets.
We recommend enabling KMS encryption as a first step in creating the cluster. Enabling this configuration afterward can take up to 45 minutes. The KMS key is required in the eksctl cluster YAML.
Create an AWS KMS key via the AWS CLI. For additional settings, visit the documentation.
export KMS_ARN=$(aws kms create-key \
--description "Kubernetes Encryption Key" \
--query "KeyMetadata.Arn" \
--output text)
The variable KMS_ARN contains the required output. It should look something like this: arn:aws:kms:eu-central-1:1234567890:key/aaaaaaa-bbbb-cccc-dddd-eeeeeeee.
For more information concerning the KMS encryption, refer to the eksctl documentation.
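If you want to double-check the result before continuing, a quick verification could look like this; it only prints the stored ARN and the key metadata and does not change anything:
# Print the ARN stored in the variable
echo $KMS_ARN
# Optionally inspect the key metadata
aws kms describe-key --key-id $KMS_ARN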
eksctl cluster YAML
Execute the following script, which creates a file called cluster.yaml with the following contents:
cat <<EOF >./cluster.yaml
---
apiVersion: eksctl.io/v1alpha5
metadata:
  name: ${CLUSTER_NAME:-camunda-cluster} # e.g. camunda-cluster
  region: ${REGION:-eu-central-1} # e.g. eu-central-1
  version: "1.30"
availabilityZones:
  - ${REGION:-eu-central-1}c # e.g. eu-central-1c, a minimum of two distinct Availability Zones (AZs) within the region is required
  - ${REGION:-eu-central-1}b
  - ${REGION:-eu-central-1}a
cloudWatch:
  clusterLogging: {}
iam:
  vpcResourceControllerPolicy: true
  withOIDC: true # enables and configures OIDC for IAM Roles for Service Accounts (IRSA)
addons:
  - name: vpc-cni
    resolveConflicts: overwrite
    version: latest
  - name: kube-proxy
    resolveConflicts: overwrite
    version: latest
  - name: aws-ebs-csi-driver # automatically configures IRSA
    resolveConflicts: overwrite
    version: latest
  - name: coredns
    resolveConflicts: overwrite
    version: latest
kind: ClusterConfig
kubernetesNetworkConfig:
  ipFamily: IPv4
managedNodeGroups:
  - amiFamily: AmazonLinux2
    desiredCapacity: ${NODE_COUNT:-4} # number of default nodes spawned if no cluster autoscaler is used
    disableIMDSv1: true
    disablePodIMDS: true
    instanceSelector: {}
    instanceTypes:
      - ${NODE_TYPE:-m6i.xlarge} # node type that is selected as default
    labels:
      alpha.eksctl.io/cluster-name: ${CLUSTER_NAME:-camunda-cluster} # e.g. camunda-cluster
      alpha.eksctl.io/nodegroup-name: services
    maxSize: 10 # maximum node pool size for cluster autoscaler
    minSize: 1 # minimum node pool size for cluster autoscaler
    name: services
    privateNetworking: true
    releaseVersion: ""
    securityGroups:
      withLocal: null
      withShared: null
    ssh:
      allow: false
      publicKeyPath: ""
    tags:
      alpha.eksctl.io/nodegroup-name: services
      alpha.eksctl.io/nodegroup-type: managed
    volumeIOPS: 3000
    volumeSize: 80
    volumeThroughput: 125
    volumeType: gp3
privateCluster:
  enabled: false
  skipEndpointCreation: false
vpc:
  autoAllocateIPv6: false
  cidr: ${CIDR:-10.192.0.0/16}
  clusterEndpoints:
    privateAccess: false
    publicAccess: true
  manageSharedNodeSecurityGroupRules: true
  nat:
    gateway: HighlyAvailable
secretsEncryption:
  keyARN: ${KMS_ARN}
EOF
With eksctl you can create the cluster from the previously created file as follows. This takes 25-30 minutes to complete.
eksctl create cluster --config-file cluster.yaml
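Once cluster creation has finished, a quick check that the cluster is reachable and all nodes are ready could look like this. eksctl normally updates your local kubeconfig automatically; the update-kubeconfig call is only needed if that did not happen:
# Refresh the local kubeconfig in case eksctl did not do it already
aws eks update-kubeconfig --region $REGION --name $CLUSTER_NAME
# All four nodes should report the status Ready
kubectl get nodes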
(Optional) IAM access management
Access concerning Kubernetes is split into two layers. The first layer is the IAM permissions that allow general Amazon EKS usage, such as accessing the Amazon EKS UI or generating Amazon EKS access via the AWS CLI. The second layer is the cluster access itself, which determines what access a user has within the Kubernetes cluster.
Therefore, we first have to supply the user with sufficient IAM permissions and afterward assign the user a role within the Kubernetes cluster.
IAM Permissions
A minimum set of permissions is required to gain access to an Amazon EKS cluster. These two permissions allow a user to execute aws eks update-kubeconfig to update the local kubeconfig with cluster access to the Amazon EKS cluster.
The policy should look as follows and can be restricted further to specific Amazon EKS clusters if required:
cat <<EOF >./policy-eks.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "eks:DescribeCluster",
        "eks:ListClusters"
      ],
      "Resource": "*"
    }
  ]
}
EOF
Via the AWS CLI, you can run the following to create the policy above in IAM.
aws iam create-policy --policy-name "BasicEKSPermissions" --policy-document file://policy-eks.json
The created policy BasicEKSPermissions has to be assigned to a group, a role, or a user to work. Consult the AWS documentation to find the correct approach for you.
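For example, attaching the policy directly to a single IAM user could look like the following sketch; the user name ops-admin is only an illustration and must be replaced with your own user:
# Attach the policy to an example IAM user; replace ops-admin with your user
aws iam attach-user-policy \
--user-name ops-admin \
--policy-arn "arn:aws:iam::${AWS_ACCOUNT_ID}:policy/BasicEKSPermissions"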
Cluster Access
By default, the user creating the Amazon EKS cluster has admin access. To allow other users to access it, we have to adjust the aws-auth configmap. This can either be done manually via kubectl or via eksctl. In the following sections, we explain how to do this.
eksctl
With eksctl, you can create an AWS IAM user to Kubernetes role mapping with the following command:
eksctl create iamidentitymapping \
--cluster=$CLUSTER_NAME \
--region=$REGION \
--arn <arn> \
--group <group> \
--username <username>
- arn is the identifier of your user.
- group is the Kubernetes role; as an example, system:masters is a Kubernetes group for the admin role.
- username is either the username itself or the role name. It can also be any arbitrary value, as it is used in the audit logs to identify the operation owner.
Example:
eksctl create iamidentitymapping \
--cluster=$CLUSTER_NAME \
--region=eu-central-1 \
--arn arn:aws:iam::0123456789:user/ops-admin \
--group system:masters \
--username admin
More information about usage and other configuration options can be found in the eksctl documentation.
kubectl
The same can also be achieved by using kubectl and manually adding the mapping as part of the mapRoles or mapUsers section.
kubectl edit configmap aws-auth -n kube-system
For detailed examples, review the documentation provided by AWS.
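As an illustration only (not a command to run), a mapUsers entry equivalent to the eksctl example in the previous section would look roughly like this inside the aws-auth configmap; adjust the ARN to your own user:
mapUsers: |
  - userarn: arn:aws:iam::0123456789:user/ops-admin
    username: admin
    groups:
      - system:masters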
PostgreSQL database
Creating a PostgreSQL database can be done in various ways, for example via the UI or the AWS CLI. In this guide, we use the CLI to provide a reproducible setup. For creating PostgreSQL with the UI, refer to the AWS documentation.
The resulting PostgreSQL instance and the default database camunda are intended to be used with Keycloak. You may manually add extra databases after creation for Identity with multi-tenancy.
This is not covered in this guide, as multi-tenancy is disabled by default in Identity.
- Identify the VPC associated with the Amazon EKS cluster:
export VPC_ID=$(aws ec2 describe-vpcs \
--query "Vpcs[?Tags[?Key=='alpha.eksctl.io/cluster-name']|[?Value=='$CLUSTER_NAME']].VpcId" \
--output text)
- The variable VPC_ID contains the output value required for the next step (the value should look like this: vpc-1234567890).
- Create a security group within the VPC to allow connection to the Aurora PostgreSQL instance:
export GROUP_ID=$(aws ec2 create-security-group \
--group-name aurora-postgres-sg \
--description "Security Group to allow the Amazon EKS cluster to connect to Aurora PostgreSQL" \
--vpc-id $VPC_ID \
--output text)
- The variable GROUP_ID contains the output (the value should look like this: sg-1234567890).
- Create a security ingress rule to allow access to PostgreSQL:
aws ec2 authorize-security-group-ingress \
--group-id $GROUP_ID \
--protocol tcp \
--port 5432 \
--cidr $CIDR
# the CIDR range should be exactly the same value as in the `cluster.yaml`
- Retrieve subnets of the VPC to create a database subnet group:
export SUBNET_IDS=$(aws ec2 describe-subnets \
--filter Name=vpc-id,Values=$VPC_ID \
--query "Subnets[?Tags[?Key=='aws:cloudformation:logical-id']|[?contains(Value, 'Private')]].SubnetId" \
--output text | expand -t 1)
- The variable SUBNET_IDS contains the output values of the private subnets (the value should look like this: subnet-0123456789 subnet-1234567890 subnet-9876543210).
- Create a database subnet group to associate PostgreSQL within the existing VPC:
aws rds create-db-subnet-group \
--db-subnet-group-name camunda-postgres \
--db-subnet-group-description "Subnet for Camunda PostgreSQL" \
--subnet-ids $(echo $SUBNET_IDS)
- Create a PostgreSQL cluster within a private subnet of the VPC.
For the latest Camunda-supported PostgreSQL engine version, check our documentation.
aws rds create-db-cluster \
--db-cluster-identifier $RDS_NAME \
--engine aurora-postgresql \
--engine-version $POSTGRESQL_VERSION \
--master-username $PG_USERNAME \
--master-user-password $PG_PASSWORD \
--vpc-security-group-ids $GROUP_ID \
--availability-zones $(echo $ZONES) \
--database-name $DEFAULT_DB_NAME \
--db-subnet-group-name camunda-postgres
More configuration options can be found in the AWS documentation.
- Wait for the PostgreSQL cluster to be ready:
aws rds wait db-cluster-available \
--db-cluster-identifier $RDS_NAME
- Create a database instance within the DB cluster.
The engine-version must match the version of the previously created PostgreSQL cluster.
aws rds create-db-instance \
--db-instance-identifier $RDS_NAME \
--db-cluster-identifier $RDS_NAME \
--engine aurora-postgresql \
--engine-version $POSTGRESQL_VERSION \
--no-publicly-accessible \
--db-instance-class db.t3.medium
More configuration options can be found in the AWS documentation.
- Wait for changes to be applied:
aws rds wait db-instance-available \
--db-instance-identifier $RDS_NAME
Verifying connectivity between the Amazon EKS cluster and the PostgreSQL database
- Retrieve the writer endpoint of the DB cluster.
export DB_HOST=$(aws rds describe-db-cluster-endpoints \
--db-cluster-identifier $RDS_NAME \
--query "DBClusterEndpoints[?EndpointType=='WRITER'].Endpoint" \
--output text)
- Start an Ubuntu container in interactive mode within the Amazon EKS cluster:
kubectl run ubuntu --rm -i --tty --image ubuntu --env="DB_HOST=$DB_HOST" --env="PG_USERNAME=$PG_USERNAME" -- bash
- Install required dependencies:
apt update && apt install -y postgresql-client
- Connect to PostgreSQL database:
psql \
--host=$DB_HOST \
--username=$PG_USERNAME \
--port=5432 \
--dbname=postgres
Verify that the connection is successful.
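For example, still inside the Ubuntu container, a non-interactive check could look like this; it prompts for the password (the value of PG_PASSWORD), prints the server version, and exits:
psql \
--host=$DB_HOST \
--username=$PG_USERNAME \
--port=5432 \
--dbname=postgres \
--command='SELECT version();'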
Prerequisites for Camunda 8 installation
Policy for external-dns
The following instructions are based on the external-dns guide concerning the AWS setup and only cover the required IAM setup. The Helm chart will be installed in the follow-up guide.
The following relies on the previously mentioned feature around IAM Roles for Service Accounts (IRSA) to simplify the external-dns setup.
The IAM policy document below allows external-dns to update Route53 resource record sets and hosted zones. You need to create this policy in AWS IAM first. In our example, we will call the policy AllowExternalDNSUpdates.
You may fine-tune the policy to permit updates only to explicit Hosted Zone IDs.
cat <<EOF >./policy-dns.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["route53:ChangeResourceRecordSets"],
      "Resource": ["arn:aws:route53:::hostedzone/*"]
    },
    {
      "Effect": "Allow",
      "Action": [
        "route53:ListHostedZones",
        "route53:ListResourceRecordSets",
        "route53:ListTagsForResource"
      ],
      "Resource": ["*"]
    }
  ]
}
EOF
Create AWS IAM policy with the AWS CLI:
aws iam create-policy --policy-name "AllowExternalDNSUpdates" --policy-document file://policy-dns.json
# example: arn:aws:iam::XXXXXXXXXXXX:policy/AllowExternalDNSUpdates
export EXTERNAL_DNS_POLICY_ARN=$(aws iam list-policies \
--query 'Policies[?PolicyName==`AllowExternalDNSUpdates`].Arn' \
--output text)
The EXTERNAL_DNS_POLICY_ARN will be used in the next step to create a role mapping between the Kubernetes service account and the AWS IAM role.
Using eksctl allows us to create the required role mapping for external-dns.
eksctl create iamserviceaccount \
--cluster $CLUSTER_NAME \
--name "external-dns" \
--namespace "external-dns" \
--attach-policy-arn $EXTERNAL_DNS_POLICY_ARN \
--role-name="external-dns-irsa" \
--role-only \
--approve
export EXTERNAL_DNS_IRSA_ARN=$(aws iam list-roles \
--query "Roles[?RoleName=='external-dns-irsa'].Arn" \
--output text)
The variable EXTERNAL_DNS_IRSA_ARN contains the arn (it should look like this: arn:aws:iam::XXXXXXXXXXXX:role/external-dns-irsa).
Alternatively, you can deploy the Helm chart first and then use eksctl with the option --override-existing-serviceaccounts instead of --role-only to reconfigure the created service account.
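For reference, IRSA ultimately works by annotating the Kubernetes service account with the IAM role ARN. A minimal sketch of the service account that ends up being used by external-dns could look like the following; the exact values depend on the chart configuration covered in the follow-up guide:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: external-dns
  namespace: external-dns
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::XXXXXXXXXXXX:role/external-dns-irsa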
Policy for cert-manager
The following instructions are taken from the cert-manager guide concerning the AWS setup and only cover the required IAM setup. The Helm chart will be installed in the follow-up guide.
The following relies on the previously mentioned feature around IAM Roles for Service Accounts (IRSA) to simplify the cert-manager setup.
The IAM policy document below allows cert-manager to update Route53 resource record sets and hosted zones. You need to create this policy in AWS IAM first. In our example, we call the policy AllowCertManagerUpdates.
If you prefer, you may fine-tune the policy to permit updates only to explicit Hosted Zone IDs.
cat <<EOF >./policy-cert.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "route53:GetChange",
      "Resource": "arn:aws:route53:::change/*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "route53:ChangeResourceRecordSets",
        "route53:ListResourceRecordSets"
      ],
      "Resource": "arn:aws:route53:::hostedzone/*"
    },
    {
      "Effect": "Allow",
      "Action": "route53:ListHostedZonesByName",
      "Resource": "*"
    }
  ]
}
EOF
Create AWS IAM policy with the AWS CLI:
aws iam create-policy --policy-name "AllowCertManagerUpdates" --policy-document file://policy-cert.json
# example: arn:aws:iam::XXXXXXXXXXXX:policy/AllowCertManagerUpdates
export CERT_MANAGER_POLICY_ARN=$(aws iam list-policies \
--query 'Policies[?PolicyName==`AllowCertManagerUpdates`].Arn' \
--output text)
The CERT_MANAGER_POLICY_ARN is used in the next step to create a role mapping between the Kubernetes service account and the AWS IAM role.
Using eksctl allows us to create the required role mapping for cert-manager.
eksctl create iamserviceaccount \
--cluster=$CLUSTER_NAME \
--name="cert-manager" \
--namespace="cert-manager" \
--attach-policy-arn=$CERT_MANAGER_POLICY_ARN \
--role-name="cert-manager-irsa" \
--role-only \
--approve
export CERT_MANAGER_IRSA_ARN=$(aws iam list-roles \
--query "Roles[?RoleName=='cert-manager-irsa'].Arn" \
--output text)
The variable CERT_MANAGER_IRSA_ARN will contain the arn (it should look like this: arn:aws:iam::XXXXXXXXXXXX:role/cert-manager-irsa).
Alternatively, you can deploy the Helm chart first and then use eksctl with the option --override-existing-serviceaccounts instead of --role-only to reconfigure the created service account.
StorageClass
We recommend using gp3 volumes with Camunda 8 (see volume performance). It is necessary to create the StorageClass, as the default configuration only includes gp2. For detailed information, refer to the AWS documentation.
The following steps create the gp3 StorageClass:
- Create the gp3 StorageClass:
cat << EOF | kubectl apply -f -
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
EOF
- Modify the gp2 storage class to mark it as a non-default storage class:
kubectl patch storageclass gp2 -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
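You can verify the result as follows; ebs-sc should now be marked as the default storage class and gp2 should no longer be:
# Lists all storage classes; ebs-sc should show "(default)"
kubectl get storageclass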