Overview
AWS recently introduced EKS Auto Mode, which looks like a game changer. It extends AWS's responsibility from managing the control plane components to managing the data plane as well: the nodes. With EKS Auto Mode, users can shift their focus towards application development rather than managing individual nodes.
Prerequisites
This guide is based on Terraform, so make sure you have the following tools installed (you can verify your setup with the commands after this list):
- Terraform (latest version)
- AWS CLI (configured with credentials for your AWS account)
- kubectl (for interacting with the EKS cluster)
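A quick way to verify that everything is in place:

terraform version
aws sts get-caller-identity   # confirms your AWS credentials are configured
kubectl version --client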
Creating a cluster with EKS Auto Mode
Using the EKS Terraform module to create a cluster with EKS Auto Mode is as easy as it sounds:
module "eks" {
source = "terraform-aws-modules/eks/aws"
cluster_name = "eks-auto-mode-cluster"
cluster_version = local.cluster_version
cluster_endpoint_public_access = true
enable_cluster_creator_admin_permissions = true
cluster_compute_config = {
enabled = true
node_pools = ["general-purpose"]
}
vpc_id = module.vpc.vpc_id
subnet_ids = module.vpc.private_subnets
tags = local.tags
}
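The module references a few locals that are not shown above. The values below are illustrative assumptions for this guide; adjust them to your environment:

# Example values only; adjust to your environment
data "aws_availability_zones" "available" {}

locals {
  name            = "eks-auto-mode-cluster"
  cluster_version = "1.31" # any Kubernetes version supported by EKS Auto Mode
  vpc_cidr        = "10.0.0.0/16"
  azs             = slice(data.aws_availability_zones.available.names, 0, 3)

  tags = {
    Example = "eks-auto-mode-demo"
  }
}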
An example VPC configuration for the nodes is the following:
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "~> 5.0"
name = local.name
cidr = local.vpc_cidr
azs = local.azs
private_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 4, k)]
public_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 8, k + 48)]
intra_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 8, k + 52)]
enable_nat_gateway = true
single_nat_gateway = true
public_subnet_tags = {
"kubernetes.io/role/elb" = 1
}
private_subnet_tags = {
"kubernetes.io/role/internal-elb" = 1
}
tags = local.tags
}
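With both modules in place, provision the cluster and point kubectl at it. The cluster name and region below are the example values assumed earlier:

terraform init
terraform apply

# Fetch credentials for kubectl; adjust --region to where you deployed
aws eks update-kubeconfig --name eks-auto-mode-cluster --region eu-west-1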
After completing this step, you already have an EKS cluster running in Auto Mode. You can execute the following commands to make sure that you don't have any running workloads in your cluster:
kubectl get pods -A
kubectl get nodes
You will see that you have neither pods nor nodes in your cluster. Let's deploy a sample workload to test how EKS Auto Mode works. Create a deployment similar to the one below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inflate
spec:
  replicas: 1
  selector:
    matchLabels:
      app: inflate
  template:
    metadata:
      labels:
        app: inflate
    spec:
      terminationGracePeriodSeconds: 0
      nodeSelector:
        eks.amazonaws.com/compute-type: auto # Makes sure that the pod is deployed on a node managed by Auto Mode
      containers:
        - name: inflate
          image: public.ecr.aws/eks-distro/kubernetes/pause:3.7
          securityContext:
            allowPrivilegeEscalation: false
Afterwards, apply the configuration:
kubectl apply -f deployment.yaml
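Since EKS Auto Mode has to provision capacity first, the pod will stay in Pending for a short while. You can watch the scheduling happen:

kubectl get pods -w
kubectl get nodes -w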
You will see a new node being created, picked up from the general-purpose node pool that EKS Auto Mode provisioned. You can create your own node pool as well. Here is an example NodePool configuration:
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: eks-auto-mode-demo-nodepool
spec:
  template:
    spec:
      nodeClassRef:
        group: eks.amazonaws.com
        kind: NodeClass
        name: default # the built-in NodeClass managed by EKS Auto Mode
      requirements:
        - key: "node.kubernetes.io/instance-type"
          operator: In
          values:
            - t3.medium
  # Caps the total capacity this node pool is allowed to provision
  limits:
    cpu: "1000"
    memory: 1000Gi
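Apply the manifest (assuming you saved it as nodepool.yaml) and confirm it is registered alongside the built-in general-purpose pool:

kubectl apply -f nodepool.yaml
kubectl get nodepools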
Deploying stateful workloads also becomes much simpler with EKS Auto Mode. First, there is no need to install the EBS CSI driver: Auto Mode ships with built-in block storage support (note the ebs.csi.eks.amazonaws.com provisioner below). You can provision storage for your stateful workloads by creating a StorageClass object:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: auto-sc
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: ebs.csi.eks.amazonaws.com
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: gp3
  encrypted: "true"
So now, you can create a stateful workload! The StatefulSet below references a headless Service through its serviceName field, so define one first.
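A minimal headless Service for this example looks like this:

apiVersion: v1
kind: Service
metadata:
  name: inflate-stateful
spec:
  clusterIP: None # headless Service, gives the StatefulSet pods stable network identities
  selector:
    app: inflate-stateful

And the StatefulSet itself: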
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: inflate-stateful
spec:
  serviceName: "inflate-stateful"
  replicas: 1
  # StatefulSet-level field: clean up the PVCs together with the workload
  persistentVolumeClaimRetentionPolicy:
    whenDeleted: Delete
    whenScaled: Delete
  selector:
    matchLabels:
      app: inflate-stateful
  template:
    metadata:
      labels:
        app: inflate-stateful
    spec:
      terminationGracePeriodSeconds: 0
      nodeSelector:
        eks.amazonaws.com/compute-type: "auto"
        node.kubernetes.io/instance-type: t3.medium
      containers:
        - name: bash
          image: public.ecr.aws/docker/library/bash:4.4
          command: ["/usr/local/bin/bash"]
          args: ["-c", "while true; do echo $(date -u) >> /data/out.txt; sleep 60; done"]
          volumeMounts:
            - name: inflate-stateful-data
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: inflate-stateful-data
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
        storageClassName: auto-sc
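Apply the manifest (here assumed to be saved as statefulset.yaml) and verify that a PVC was provisioned dynamically and that the pod is writing to its volume:

kubectl apply -f statefulset.yaml
kubectl get pvc
kubectl exec inflate-stateful-0 -- cat /data/out.txt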
It is as simple as it looks!