Deploy an EKS cluster with Terraform
This guide offers a detailed tutorial for deploying an Amazon Web Services (AWS) Elastic Kubernetes Service (EKS) cluster tailored for running Camunda 8, using Terraform, a popular Infrastructure as Code (IaC) tool.
It is designed to help you leverage the power of IaC to streamline and reproduce your cloud infrastructure setup. By walking through the essentials of setting up an Amazon EKS cluster, configuring AWS IAM permissions, and integrating a PostgreSQL database, this guide explains how to use Terraform with AWS, making the process accessible even to those new to Terraform or IaC concepts.
If you are completely new to Terraform and the idea of IaC, read through the Terraform IaC documentation and give their interactive quick start a try for a basic understanding.
Prerequisites
- An AWS account to create any resources within AWS.
- Terraform (1.9+)
- kubectl (1.30+) to interact with the cluster. A quick way to verify both tool versions locally is shown after this list.
- IAM Roles for Service Accounts (IRSA) configured.
- This simplifies the setup by not relying on explicit credentials and instead creating a mapping between IAM roles and Kubernetes service accounts based on a trust relationship. A blog post by AWS visualizes this on a technical level.
- This allows a Kubernetes service account to temporarily impersonate an AWS IAM role to interact with AWS services like S3, RDS, or Route53 without having to supply explicit credentials.
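To confirm the tool versions above, you can run a quick local check; this assumes Terraform and kubectl are already installed and on your PATH:
terraform version        # should report 1.9 or newer
kubectl version --client # should report a client version of 1.30 or newer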
Considerations
This setup provides an essential foundation for beginning with Camunda 8, though it's not tailored for optimal performance. It's a good initial step for preparing a production environment by incorporating IaC tooling.
Terraform can be opaque at first. If you simply want to understand what is happening, you can try the eksctl guide to see which resources are created and how they interact with each other.
To try out Camunda 8 or develop against it, consider signing up for our SaaS offering. If you already have an Amazon EKS cluster, consider skipping to the Helm guide.
To keep this guide simple, certain best practices are provided via links to additional documents, enabling you to explore each topic in more detail.
Following this guide will incur costs on your Cloud provider account, namely for the managed Kubernetes service, running Kubernetes nodes in EC2, Elastic Block Storage (EBS), and Route53. More information can be found on AWS and their pricing calculator as the total cost varies per region.
Outcome
Following this tutorial and its steps will result in:
- An Amazon EKS Kubernetes cluster running the latest Kubernetes version with four nodes ready for Camunda 8 installation.
- An installed and configured EBS CSI driver, which is used by the Camunda 8 Helm chart to create persistent volumes.
- A managed Aurora PostgreSQL 15.8 instance to be used by the Camunda 8 components.
Installing an Amazon EKS cluster with Terraform
Terraform prerequisites
- Create an empty folder to place your Terraform files in.
- Create a config.tf file with the following setup:
terraform {
  backend "local" {
    path = "terraform.tfstate"
  }

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.69"
    }
  }
}

provider "aws" {
  region = "eu-central-1"
}
- Set up the authentication for the AWS provider.
It's recommended to use a different backend than local. More information can be found in the Terraform documentation.
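As a minimal sketch of such a remote backend, assuming you already own an S3 bucket for Terraform state (the bucket and key below are placeholders, not values provided by this guide), the backend block in config.tf could look like:
terraform {
  backend "s3" {
    bucket = "my-terraform-state-bucket" # placeholder: an existing S3 bucket you control
    key    = "camunda/terraform.tfstate" # placeholder: path of the state file inside the bucket
    region = "eu-central-1"
  }
}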
The AWS Terraform provider is required to create resources in AWS. You must configure the provider with the proper credentials before using it. You can further change the region and other preferences and explore different authentication methods.
There are several ways to authenticate the AWS provider.
- (Recommended) Use the AWS CLI to configure access. Terraform will automatically default to AWS CLI configuration when present.
- Set the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, which can be retrieved from the AWS Console.
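For example, the environment variable approach could look like the following; the values are placeholders for credentials retrieved from the AWS Console:
export AWS_ACCESS_KEY_ID="<your-access-key-id>"         # placeholder
export AWS_SECRET_ACCESS_KEY="<your-secret-access-key>" # placeholder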
Do not store sensitive information (credentials) in your Terraform files.
The user who creates resources in AWS owns those resources. In this particular case, that user will always have admin access to the Kubernetes cluster until the cluster is deleted.
It can therefore make sense to create a dedicated AWS IAM user whose credentials are used only for Terraform purposes.
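As a hedged illustration of that approach, such a user could be created via the AWS CLI; the user name terraform below is only an example, and the generated access key pair is what you would then use to authenticate the provider:
# Example only: create a dedicated IAM user for Terraform and generate an access key for it
aws iam create-user --user-name terraform
aws iam create-access-key --user-name terraform
Keep in mind that this user still needs sufficient IAM permissions attached to create the resources described in this guide.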
Cluster module
This module creates the basic layout that configures AWS access and Terraform.
The following will use Terraform modules, which allow abstracting resources into reusable components.
The Camunda-provided module is publicly available. It's advisable to review this module before using it.
- In the folder where your config.tf resides, create an additional cluster.tf file.
- Paste the following content into the newly created cluster.tf file to make use of the provided module:
module "eks_cluster" {
source = "git::https://github.com/camunda/camunda-tf-eks-module//modules/eks-cluster?ref=2.5.0"
region = "eu-central-1" # change to your AWS region
name = "cluster-name" # change to name of your choosing
# Set CIDR ranges or use the defaults
cluster_service_ipv4_cidr = "10.190.0.0/16"
cluster_node_ipv4_cidr = "10.192.0.0/16"
}
There are various other input options to customize the cluster setup further; see the module documentation.
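As an illustrative sketch only, extending the module block in cluster.tf with further inputs might look like the following; the input names kubernetes_version and np_instance_types are assumptions based on the module documentation and should be verified against the module version you pin:
module "eks_cluster" {
  source = "git::https://github.com/camunda/camunda-tf-eks-module//modules/eks-cluster?ref=2.5.0"

  region = "eu-central-1"
  name   = "cluster-name"

  # Assumed input names; confirm them in the module documentation before use
  kubernetes_version = "1.30"
  np_instance_types  = ["m6i.xlarge"]
}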
PostgreSQL module
The resulting PostgreSQL instance and its default database camunda are intended to be used with Keycloak. You may manually add extra databases after creation for Identity with multi-tenancy.
This is not covered in this guide, as multi-tenancy is disabled by default in Identity.
We separated the cluster and PostgreSQL modules from each other to give you more customization options.
- In the folder where your config.tf resides, create an additional db.tf file.
- Paste the following contents into db.tf to make use of the provided module:
module "postgresql" {
source = "git::https://github.com/camunda/camunda-tf-eks-module//modules/aurora?ref=2.5.0"
engine_version = "15.8"
auto_minor_version_upgrade = false
cluster_name = "cluster-name-postgresql" # change "cluster-name" to your name
default_database_name = "camunda"
# Please supply your own secret values
username = "secret_user"
password = "secretvalue%23"
vpc_id = module.eks_cluster.vpc_id
subnet_ids = module.eks_cluster.private_subnet_ids
cidr_blocks = concat(module.eks_cluster.private_vpc_cidr_blocks, module.eks_cluster.public_vpc_cidr_blocks)
instance_class = "db.t3.medium"
iam_auth_enabled = true
depends_on = [module.eks_cluster]
}
To manage secrets in Terraform, we recommend injecting them via Vault.
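If you do not use Vault, a minimal alternative to hardcoding the values above is to declare sensitive input variables; the variable names below are only examples:
variable "db_username" {
  type      = string
  sensitive = true
}

variable "db_password" {
  type      = string
  sensitive = true
}
You would then reference them in db.tf as username = var.db_username and password = var.db_password, and supply the values at apply time via the TF_VAR_db_username and TF_VAR_db_password environment variables.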
Execution
- Open a terminal in the created Terraform folder where config.tf and cluster.tf are located.
- Initialize the working directory:
terraform init
- Apply the configuration files:
terraform apply
- After reviewing the plan, you can type yes to confirm and apply the changes.
At this point, Terraform will create the Amazon EKS cluster with all the necessary configurations. This process may take approximately 20-30 minutes to complete.
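As an optional sanity check once the apply has finished, you can point your local kubeconfig at the new cluster and list its nodes; replace the region and cluster name with the values used in cluster.tf (the AWS IAM permissions required for this are covered in the next section):
aws eks --region eu-central-1 update-kubeconfig --name cluster-name
kubectl get nodes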
(Optional) AWS IAM access management
Kubernetes access is divided into two distinct layers. The first involves AWS IAM permissions, which enable basic Amazon EKS functionalities such as using the Amazon EKS UI and generating Amazon EKS access through the AWS CLI. The second layer provides access within the cluster itself, determining the user's permissions within the Kubernetes cluster.
As a result, we must initially grant the user adequate AWS IAM permissions and subsequently assign them a specific role within the Kubernetes cluster for proper access management.
AWS IAM permissions
A minimum set of AWS IAM permissions is required to access an Amazon EKS cluster: it allows a user to execute aws eks update-kubeconfig, which updates the local kubeconfig with access to the cluster.
The policy should look as follows and can be restricted to specific Amazon EKS clusters if required:
cat <<EOF >./policy-eks.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "eks:DescribeCluster",
        "eks:ListClusters"
      ],
      "Resource": "*"
    }
  ]
}
EOF
Via the AWS CLI, you can run the following to create the above policy in AWS IAM.
aws iam create-policy --policy-name "BasicEKSPermissions" --policy-document file://policy-eks.json
The created policy BasicEKSPermissions has to be assigned to a group, a role, or a user to take effect. Consult the AWS documentation to find the correct approach for you.
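For example, attaching the policy to a single IAM user could look like the following; the user name and AWS account ID are placeholders:
aws iam attach-user-policy --user-name <username> --policy-arn "arn:aws:iam::<accountId>:policy/BasicEKSPermissions"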
Users can generate access to the Amazon EKS cluster via the AWS CLI.
aws eks --region <region> update-kubeconfig --name <clusterName>
Terraform AWS IAM permissions
The user creating the Amazon EKS cluster has admin access by default.
To manage user access, use the access_entries configuration introduced in module version 2.0.0:
access_entries = {
  example = {
    kubernetes_groups = []
    principal_arn     = "<arn>"

    policy_associations = {
      example = {
        policy_arn = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy"

        access_scope = {
          namespaces = ["default"]
          type       = "namespace"
        }
      }
    }
  }
}
In this updated configuration:
- principal_arn should be replaced with the ARN of the IAM user or role.
- policy_associations allows you to associate policies for fine-grained access control.
For a list of policies, please visit the AWS EKS Access Policies documentation.
Please note that version 2.x.x of this module no longer supports direct mappings via aws_auth_roles and aws_auth_users. If you are upgrading from version 1.x.x, fork the module repository and follow the official AWS instructions for managing the aws-auth ConfigMap.
For more details, refer to the official upgrade guide.
Outputs
Terraform can define outputs to simplify retrieving values generated during execution, such as database endpoints or values required for the Helm setup.
- In the folder where your config.tf resides, create an additional output.tf file.
- Paste the following content to expose those variables:
output "cert_manager_arn" {
value = module.eks_cluster.cert_manager_arn
description = "The Amazon Resource Name (ARN) of the AWS IAM Roles for Service Account mapping for the cert-manager"
}
output "external_dns_arn" {
value = module.eks_cluster.external_dns_arn
description = "The Amazon Resource Name (ARN) of the AWS IAM Roles for Service Account mapping for the external-dns"
}
output "postgres_endpoint" {
value = module.postgresql.aurora_endpoint
description = "The Postgres endpoint URL"
}
- Run terraform apply again to print the outputs from the Terraform state.
We can now export those values to environment variables to be used by Helm charts:
export CERT_MANAGER_IRSA_ARN=$(terraform output -raw cert_manager_arn)
export EXTERNAL_DNS_IRSA_ARN=$(terraform output -raw external_dns_arn)
export DB_HOST=$(terraform output -raw postgres_endpoint)
- Export required values for the Camunda 8 on Kubernetes guide. The values will likely differ based on your definitions in the PostgreSQL setup, so ensure you use the values passed to the Terraform module.
# Example guide values, ensure you use the values you pass to the Terraform module
export PG_USERNAME="secret_user"
export PG_PASSWORD="secretvalue%23"
export DEFAULT_DB_NAME="camunda"
Next steps
Install Camunda 8 using Helm charts by following our installation guide Camunda 8 on Kubernetes.