
Deploy an EKS cluster with Terraform

This guide offers a detailed tutorial for deploying an Amazon Web Services (AWS) Elastic Kubernetes Service (EKS) cluster tailored explicitly for Camunda 8, using Terraform, a popular Infrastructure as Code (IaC) tool.

This guide is designed to help you leverage the power of IaC to streamline and reproduce a cloud infrastructure setup. By walking through the essentials of setting up an Amazon EKS cluster, configuring AWS IAM permissions, and integrating a PostgreSQL database, it explains the process of using Terraform with AWS, making it accessible even to those new to Terraform or IaC concepts.

tip

If you are completely new to Terraform and the idea of IaC, read through the Terraform IaC documentation and give their interactive quick start a try for a basic understanding.

Prerequisites

  • An AWS account to create any resources within AWS.
  • Terraform (1.6.x)
  • Kubectl (1.28.x) to interact with the cluster.
  • IAM Roles for Service Accounts (IRSA) configured.
    • This simplifies the setup by not relying on explicit credentials and instead creating a mapping between IAM roles and Kubernetes service accounts based on a trust relationship. A blog post by AWS visualizes this on a technical level.
    • This allows a Kubernetes service account to temporarily impersonate an AWS IAM role to interact with AWS services like S3, RDS, or Route53 without having to supply explicit credentials, as sketched below.
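
For illustration, the trust-based mapping described above ultimately surfaces as an annotation on the Kubernetes service account. This is only a sketch; the namespace, service account, account ID, and role names are placeholders:

# IRSA links a Kubernetes service account to an AWS IAM role via this annotation
kubectl annotate serviceaccount <service-account> -n <namespace> \
  eks.amazonaws.com/role-arn=arn:aws:iam::<account-id>:role/<role-name>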

Considerations

This setup provides an essential foundation for beginning with Camunda 8, though it's not tailored for optimal performance. It's a good initial step for preparing a production environment by incorporating IaC tooling.

Terraform can be opaque at first. If you only want to understand what is happening, you may try the eksctl guide instead to see which resources are created and how they interact with each other.

To try out Camunda 8 or develop against it, consider signing up for our SaaS offering. If you already have an Amazon EKS cluster, consider skipping to the Helm guide.

To keep this guide simple, certain best practices are provided only as links to additional documents, enabling you to explore each topic in more detail.

danger

Following this guide will incur costs on your Cloud provider account, namely for the managed Kubernetes service, running Kubernetes nodes in EC2, Elastic Block Storage (EBS), and Route53. More information can be found on AWS and their pricing calculator as the total cost varies per region.

Outcome

Following this tutorial and its steps will result in:

  • An Amazon EKS Kubernetes cluster running the latest Kubernetes version with four nodes ready for Camunda 8 installation.
  • The EBS CSI driver installed and configured, which is used by the Camunda 8 Helm chart to create persistent volumes.
  • A managed Aurora PostgreSQL 15.4 instance to be used by the Camunda 8 components.

Installing Amazon EKS cluster with Terraform

Terraform prerequisites

  1. Create an empty folder to place your Terraform files in.
  2. Create a config.tf with the following setup:
terraform {
  backend "local" {
    path = "terraform.tfstate"
  }

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.22.0"
    }
  }
}

provider "aws" {
  region = "eu-central-1"
}
  3. Set up the authentication for the AWS provider.
note

It's recommended to use a backend other than local. More information can be found in the Terraform documentation.
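
For example, a remote S3 backend could replace the local backend shown above. This is only a sketch; the bucket name and key are placeholders, and the bucket must already exist before running terraform init:

terraform {
  backend "s3" {
    bucket = "<your-terraform-state-bucket>" # placeholder, must exist beforehand
    key    = "camunda/terraform.tfstate"
    region = "eu-central-1"
  }
}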

note

The AWS Terraform provider is required to create resources in AWS. You must configure the provider with the proper credentials before using it. You can further change the region and other preferences and explore different authentication methods.

There are several ways to authenticate the AWS provider, as shown in the example after this list.

  • (Recommended) Use the AWS CLI to configure access. Terraform will automatically default to AWS CLI configuration when present.
  • Set environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, which can be retrieved from the AWS Console.
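
For example, either approach can be set up as follows (the credential values are placeholders):

# Option 1 (recommended): configure the AWS CLI; Terraform picks this configuration up automatically
aws configure

# Option 2: export credentials as environment variables for the current shell session
export AWS_ACCESS_KEY_ID="<your-access-key-id>"
export AWS_SECRET_ACCESS_KEY="<your-secret-access-key>"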
danger

Do not store sensitive information (credentials) in your Terraform files.

danger

The user who creates resources in AWS owns these resources. In this particular case, that user will always have admin access to the Kubernetes cluster until the cluster is deleted.

Therefore, it can make sense to create a dedicated AWS IAM user whose credentials are used solely for Terraform purposes.

Cluster module

This module creates the basic layout that configures AWS access and Terraform.

The following will use Terraform modules, which allow abstracting resources into reusable components.

The Camunda-provided module is publicly available. It's advisable to review this module before use.

  1. In the folder where your config.tf resides, create an additional cluster.tf.
  2. Paste the following content into the newly created cluster.tf file to make use of the provided module:
module "eks_cluster" {
source = "github.com/camunda/camunda-tf-eks-module/modules/eks-cluster"

region = "eu-central-1" # change to your AWS region
name = "cluster-name" # change to name of your choosing

# Set CIDR ranges or use the defaults
cluster_service_ipv4_cidr = "10.190.0.0/16"
cluster_node_ipv4_cidr = "10.192.0.0/16"
}

There are various other input options to customize the cluster setup further; see the module documentation.

PostgreSQL module

We separated the cluster and PostgreSQL modules from each other to give the user more customization options.

  1. In the folder where your config.tf resides, create an additional db.tf file.
  2. Paste the following contents into db.tf to make use of the provided module:
module "postgresql" {
source = "github.com/camunda/camunda-tf-eks-module/modules/aurora"
engine_version = "15.4"
auto_minor_version_upgrade = false
cluster_name = "cluster-name-postgresql" # change "cluster-name" to your name

# Please supply your own secret values
username = "secret_user"
password = "secretvalue%23"
vpc_id = module.eks_cluster.vpc_id
subnet_ids = module.eks_cluster.private_subnet_ids
cidr_blocks = concat(module.eks_cluster.private_vpc_cidr_blocks, module.eks_cluster.public_vpc_cidr_blocks)
instance_class = "db.t3.medium"
iam_auth_enabled = true

depends_on = [module.eks_cluster]
}

To manage secrets in Terraform, we recommend injecting those via Vault.
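
If you are not using Vault, a minimal alternative sketch is to avoid hard-coding the password by declaring a sensitive Terraform variable (the variable name db_password is only an example) and supplying its value at runtime:

variable "db_password" {
  type      = string
  sensitive = true
}

# Reference it in db.tf as `password = var.db_password` and supply the value via an
# environment variable before running Terraform, for example:
#   export TF_VAR_db_password='secretvalue%23'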

Execution

  1. Open a terminal in the created Terraform folder where config.tf and cluster.tf are.
  2. Initialize the working directory:
terraform init
  3. Apply the configuration files:
terraform apply
  4. After reviewing the plan, you can type yes to confirm and apply the changes.

At this point, Terraform will create the Amazon EKS cluster with all the necessary configurations. This process may take approximately 20-30 minutes to complete.
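
Once the apply has finished, you can optionally confirm that the cluster is up. This assumes the region and cluster name you chose in cluster.tf:

# Prints "ACTIVE" once the EKS cluster is ready
aws eks describe-cluster --region eu-central-1 --name cluster-name --query "cluster.status" --output text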

(Optional) AWS IAM access management

Kubernetes access is divided into two distinct layers. The first involves AWS IAM permissions, which enable basic Amazon EKS functionalities such as using the Amazon EKS UI and generating Amazon EKS access through the AWS CLI. The second layer provides access within the cluster itself, determining the user's permissions within the Kubernetes cluster.

As a result, we must initially grant the user adequate AWS IAM permissions and subsequently assign them a specific role within the Kubernetes cluster for proper access management.

AWS IAM permissions

A minimum set of permissions is required to access an Amazon EKS cluster. These permissions allow a user to execute aws eks update-kubeconfig to update the local kubeconfig with access to the Amazon EKS cluster.

The policy should look as follows and can be restricted to specific Amazon EKS clusters if required:

cat <<EOF >./policy-eks.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "eks:DescribeCluster",
        "eks:ListClusters"
      ],
      "Resource": "*"
    }
  ]
}
EOF

Via the AWS CLI, you can run the following to create the above policy in AWS IAM.

aws iam create-policy --policy-name "BasicEKSPermissions" --policy-document file://policy-eks.json

The created policy BasicEKSPermissions has to be assigned to a group, a role, or a user to work. Consult the AWS documentation to find the correct approach for you.
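
For example, attaching the policy to an AWS IAM user can be done via the AWS CLI (the user name and account ID are placeholders):

aws iam attach-user-policy \
  --user-name "<iam-user-name>" \
  --policy-arn "arn:aws:iam::<account-id>:policy/BasicEKSPermissions"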

Users can generate access to the Amazon EKS cluster via the AWS CLI.

aws eks --region <region> update-kubeconfig --name <clusterName>
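
Afterwards, you can verify that the kubeconfig works and the cluster is reachable:

# Should list the worker nodes of the newly created cluster
kubectl get nodes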

Terraform AWS IAM permissions

The user creating the Amazon EKS cluster has admin access. To allow other users to access this cluster as well, adjust the aws-auth configmap.

With Terraform, you can map AWS IAM users or roles to Kubernetes roles via the following variables:

# AWS IAM roles mapping
aws_auth_roles = [{
  rolearn  = "<arn>"
  username = "<username>"
  groups   = ["system:masters"]
}]

# AWS IAM users mapping
aws_auth_users = [{
  userarn  = "<arn>"
  username = "<username>"
  groups   = ["system:masters"]
}]

Here, arn is the ARN of your user or role. The groups entry lists the Kubernetes groups the user is mapped to, where system:masters is equivalent to an admin role. Lastly, username is either the user name itself or the role name, which is used for logs.

Outputs

Terraform can define outputs to make the retrieval of values generated as part of the execution easier; for example, DB endpoints or values required for the Helm setup.

  1. In the folder where your config.tf resides, create an additional output.tf.
  2. Paste the following content to expose those variables:
output "cert_manager_arn" {
value = module.eks_cluster.cert_manager_arn
description = "The Amazon Resource Name (ARN) of the AWS IAM Roles for Service Account mapping for the cert-manager"
}

output "external_dns_arn" {
value = module.eks_cluster.external_dns_arn
description = "The Amazon Resource Name (ARN) of the AWS IAM Roles for Service Account mapping for the external-dns"
}

output "postgres_endpoint" {
value = module.postgresql.aurora_endpoint
description = "The Postgres endpoint URL"
}
  3. Run terraform apply again to print the outputs in the Terraform state.

We can now export those values to environment variables to be used by Helm charts:

export CERT_MANAGER_IRSA_ARN=$(terraform output -raw cert_manager_arn)

export EXTERNAL_DNS_IRSA_ARN=$(terraform output -raw external_dns_arn)

export DB_HOST=$(terraform output -raw postgres_endpoint)
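
You can verify the exported values before continuing, for example:

echo "$CERT_MANAGER_IRSA_ARN" "$EXTERNAL_DNS_IRSA_ARN" "$DB_HOST"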

Next steps

Install Camunda 8 using Helm charts by following our installation guide Camunda 8 on Kubernetes.