Amazon Elastic Kubernetes Service (EKS), formerly known as Amazon Elastic Container Service for Kubernetes, is the managed Kubernetes offering on AWS.
Prerequisites: Docker image #
The following Docker container returns the hostname of the container it runs on. This is very helpful to test the scaling of a Kubernetes pod, as it returns a different hostname depending on which pod answers when it’s curled.
Node.js #
- Create the following file:
app.js
// Minimal HTTP server that responds with the container hostname
const http = require('http');
const os = require('os');

console.log("Kubernetes testing");

// Log the client address and return this container's hostname
const handler = function(request, response) {
  console.log("Received request from " + request.socket.remoteAddress);
  response.writeHead(200);
  response.end("Container runs on: " + os.hostname() + "\n");
};

const www = http.createServer(handler);
www.listen(8080);
Dockerfile #
- Dockerfile
FROM node:lts-alpine3.19
COPY app.js /app.js
EXPOSE 8080
ENTRYPOINT ["node", "app.js"]
Build & Push the Docker Image #
# Build the image
docker build -t jueklu/container-2 .
# Push the image to DockerHub registry
docker push jueklu/container-2
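Note: If the image is built on an ARM host (e.g. Apple Silicon), the t3.medium worker nodes need an amd64 image. A hedged sketch using Docker Buildx:
# Build and push a linux/amd64 image in one step (assumes Buildx is set up)
docker buildx build --platform linux/amd64 -t jueklu/container-2 --push .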
Test the Image #
# Run the container
docker run -d --name container-2 -p 8080:8080 jueklu/container-2
# Use curl to check the container host
curl localhost:8080
# Optional: Open container terminal
docker exec -it container-2 /bin/ash
# Verify the container hostname
hostname
Prerequisites: Packages #
AWS CLIv2 #
Install AWS CLIv2 #
# Install AWS CLI version 2
sudo apt install curl zip -y &&
cd /tmp &&
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" &&
unzip awscliv2.zip &&
sudo ./aws/install
# Verify / check version
aws --version
IAM User & Policies #
Create an IAM user, create access keys for the user, and attach the AdministratorAccess managed policy. In a production environment, only the least necessary permissions should be granted!
Configure AWS CLIv2 #
# Add the IAM user access key, secret access key & define the default region
aws configure
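For scripted setups, the same values can be set non-interactively; the key values below are placeholders:
# Non-interactive alternative to "aws configure"
aws configure set aws_access_key_id AKIAEXAMPLEKEY
aws configure set aws_secret_access_key wJalrExampleSecretKey
aws configure set region eu-central-1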
Install Eksctl #
Eksctl is a command-line tool for creating and managing Kubernetes clusters on EKS.
curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp &&
sudo mv /tmp/eksctl /usr/local/bin
# Verify / check version
eksctl version
Install Kubectl #
# Install Kubectl
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl" &&
chmod +x kubectl &&
sudo mv kubectl /usr/local/bin/
# Verify installation
kubectl version --client
EKS Cluster #
Create Cluster #
# Create an EKS cluster: Define Kubernetes version & region
eksctl create cluster \
--name eks-example-cluster \
--version 1.30 \
--nodegroup-name prod-nodes \
--node-type t3.medium \
--nodes 3 \
--region eu-central-1 \
--managed
# Shell output:
...
2025-01-31 10:31:08 [✔] EKS cluster "eks-example-cluster" in "eu-central-1" region is ready
More options:
--node-volume-size 20
Defines the root volume size (in GiB) of the worker nodes.
Note: Creating the necessary CloudFormation resources can take 10 to 20 minutes.
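The same cluster can also be defined declaratively. A minimal sketch of an equivalent eksctl config file (the file name cluster.yml is an assumption):
# Write a declarative equivalent of the CLI flags above
cat > cluster.yml <<'EOF'
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: eks-example-cluster
  region: eu-central-1
  version: "1.30"
managedNodeGroups:
  - name: prod-nodes
    instanceType: t3.medium
    desiredCapacity: 3
EOF
# Create the cluster from the config file
eksctl create cluster -f cluster.yml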
List Resources in AWS Management Console #
# EKS Cluster
https://eu-central-1.console.aws.amazon.com/eks/
# CloudFormation stack
https://eu-central-1.console.aws.amazon.com/cloudformation
Verify the Cluster with EKSCTL #
# List all clusters: Define region
eksctl get cluster --region eu-central-1
# Shell output:
NAME REGION EKSCTL CREATED
eks-example-cluster eu-central-1 True
List Cluster Nodes #
# List nodes: More details
kubectl get nodes -o wide
# Shell output:
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
ip-192-168-5-22.eu-central-1.compute.internal Ready <none> 2m36s v1.30.8-eks-aeac579 192.168.5.22 52.58.138.160 Amazon Linux 2 5.10.230-223.885.amzn2.x86_64 containerd://1.7.23
ip-192-168-58-127.eu-central-1.compute.internal Ready <none> 2m35s v1.30.8-eks-aeac579 192.168.58.127 52.59.56.5 Amazon Linux 2 5.10.230-223.885.amzn2.x86_64 containerd://1.7.23
ip-192-168-77-3.eu-central-1.compute.internal Ready <none> 2m38s v1.30.8-eks-aeac579 192.168.77.3 3.65.228.220 Amazon Linux 2 5.10.230-223.885.amzn2.x86_64 containerd://1.7.23
Delete the Cluster #
# Delete Cluster: Define name & region
eksctl delete cluster --name eks-example-cluster --region eu-central-1
HTTP Example Deployment #
Deploy Pod & Load Balancer #
# Create a Deployment
kubectl create deployment my-container --image=jueklu/container-2
# Scale the Deployment
kubectl scale deployment my-container --replicas=3
# Create a load balancer service for the deployment
kubectl expose deployment my-container --type=LoadBalancer --port=80 --target-port=8080 --name my-container-lb
Verify the Deployment #
# List the deployed pods
kubectl get pods -o wide
# Shell output:
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
my-container-5dd7c5dc59-2t97n 1/1 Running 0 83s 192.168.3.57 ip-192-168-5-22.eu-central-1.compute.internal <none> <none>
my-container-5dd7c5dc59-6w8mp 1/1 Running 0 19s 192.168.62.139 ip-192-168-58-127.eu-central-1.compute.internal <none> <none>
my-container-5dd7c5dc59-992vb 1/1 Running 0 19s 192.168.73.174 ip-192-168-77-3.eu-central-1.compute.internal <none> <none>
List LoadBalancer External IP #
# List load balancer service details: List external IP
kubectl get svc my-container-lb
# Shell output:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-container-lb LoadBalancer 10.100.150.8 a6e682e1616634258bbb60aaf322de2d-1963541611.eu-central-1.elb.amazonaws.com 80:32438/TCP 51s
Test the Deployment #
The AWS Elastic Load Balancer (ELB) URL randomly hits different pods and should output different container hostnames when it’s curled:
# Test the deployment
curl a6e682e1616634258bbb60aaf322de2d-1963541611.eu-central-1.elb.amazonaws.com
# Shell output:
Container runs on: my-container-5dd7c5dc59-992vb
# Test the deployment
curl a6e682e1616634258bbb60aaf322de2d-1963541611.eu-central-1.elb.amazonaws.com
# Shell output:
Container runs on: my-container-5dd7c5dc59-6w8mp
# Test the deployment
curl a6e682e1616634258bbb60aaf322de2d-1963541611.eu-central-1.elb.amazonaws.com
# Shell output:
Container runs on: my-container-5dd7c5dc59-2t97n
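To hit the load balancer several times in one go, a simple shell loop works as well:
# Curl the ELB URL five times to see the round-robin across the pods
for i in $(seq 1 5); do
  curl a6e682e1616634258bbb60aaf322de2d-1963541611.eu-central-1.elb.amazonaws.com
done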
Delete the Deployment #
# Delete the load balancer service
kubectl delete svc my-container-lb
# Delete the Deployment
kubectl delete deployment my-container
HTTPS Example Deployment #
Note: For this deployment I use a wildcard certificate *.jklug.work from AWS Certificate Manager (ACM).
Deploy Pod: CLI #
# Create a Deployment
kubectl create deployment my-container --image=jueklu/container-2
# Scale the Deployment
kubectl scale deployment my-container --replicas=3
Note: Kubernetes automatically adds a label with the key “app” and the deployment name as its value.
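This can be verified by listing the pod labels:
# Show the automatically created "app" label
kubectl get pods --show-labels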
# List the deployed pods
kubectl get pods
# Shell output:
NAME READY STATUS RESTARTS AGE
my-container-7df9db746-fftnf 1/1 Running 0 9s
my-container-7df9db746-fj7ss 1/1 Running 0 9s
my-container-7df9db746-lhflx 1/1 Running 0 9s
Deploy Pod: YML #
# Create configuration for the deployment
vi my-container.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-container
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-container
  template:
    metadata:
      labels:
        app: my-container
    spec:
      containers:
        - name: container-2
          image: jueklu/container-2
          ports:
            - containerPort: 8080
# Apply the deployment
kubectl apply -f my-container.yml
LoadBalancer Service (NLB) #
# Create configuration for the LoadBalancer
vi loadbalancer.yml
apiVersion: v1
kind: Service
metadata:
  name: my-container-lb
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:eu-central-1:012345678912:certificate/9a4bbd3a-3518-4120-b368-2cd9c2bba184
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
spec:
  selector:
    app: my-container
  ports:
    - name: http
      port: 80
      targetPort: 8080
    - name: https
      port: 443
      targetPort: 8080
  type: LoadBalancer
service.beta.kubernetes.io/aws-load-balancer-ssl-cert:
Defines the ARN of the TLS certificate from AWS Certificate Manager (ACM).
Note: The managed certificate must be in the same AWS region as the K8s cluster.
# Apply the deployment
kubectl apply -f loadbalancer.yml
List LoadBalancer External DNS Name #
Note: The “EXTERNAL-IP” of a load balancer service is listed as a DNS name rather than an IP address.
# List load balancer service details: List DNS name
kubectl get svc my-container-lb
# Shell output:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-container-lb LoadBalancer 10.100.219.226 ae09f3d448abe4d2a8e70a3b2d25a6b4-496419107.eu-central-1.elb.amazonaws.com 80:31661/TCP,443:30390/TCP 32s
Create CNAME DNS Entry #
Manually #
Create a Route 53 DNS entry with the following values:
- Record name: For example eks.jklug.work
- Record type: CNAME
- Value: ae09f3d448abe4d2a8e70a3b2d25a6b4-496419107.eu-central-1.elb.amazonaws.com
- TTL: 300
AWS CLI #
# Identify the hosted zone ID
aws route53 list-hosted-zones
# Shell output:
{
    "HostedZones": [
        {
            "Id": "/hostedzone/Z05838622LmyzoneID",
            "Name": "jklug.work.",
            "CallerReference": "9e7f187b-3129-424b-9cf6-c50141743c23",
            "Config": {
                "Comment": "",
                "PrivateZone": false
            },
            "ResourceRecordSetCount": 19
        }
    ]
}
- Create a JSON file for the DNS record
vi cname-record.json
{
  "Comment": "Create CNAME record for custom domain",
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "eks.jklug.work",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [
          {
            "Value": "ae09f3d448abe4d2a8e70a3b2d25a6b4-496419107.eu-central-1.elb.amazonaws.com"
          }
        ]
      }
    }
  ]
}
Action: "UPSERT"
Creates the record if it doesn’t exist or updates it if it does.
# Create the record
aws route53 change-resource-record-sets --hosted-zone-id Z05838622LmyzoneID --change-batch file://cname-record.json
# Shell output:
{
    "ChangeInfo": {
        "Id": "/change/C059582511CQ25766UMK2",
        "Status": "PENDING",
        "SubmittedAt": "2025-01-31T11:14:29.463000+00:00",
        "Comment": "Create CNAME record for custom domain"
    }
}
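The change status can be polled until it switches from "PENDING" to "INSYNC", using the change ID from the output above:
# Check the propagation status of the DNS change
aws route53 get-change --id /change/C059582511CQ25766UMK2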
Test the Deployment #
The AWS Elastic Load Balancer (ELB) URL randomly hits different pods and should output different container hostnames when it’s curled:
# Test the deployment
curl eks.jklug.work
# Shell output:
Container runs on: my-container-7df9db746-fj7ss
# Test the deployment
curl eks.jklug.work
# Shell output:
Container runs on: my-container-7df9db746-fftnf
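Since the load balancer terminates TLS with the ACM certificate, the HTTPS listener can be tested explicitly:
# Test the HTTPS listener
curl https://eks.jklug.work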
Delete the Deployment #
# Delete the Deployment
kubectl delete deployment my-container
# Delete the load balancer service
kubectl delete svc my-container-lb
Troubleshooting #
# List cluster events sorted by creation timestamp
kubectl get events --sort-by=.metadata.creationTimestamp
# List load balancer details
aws elb describe-load-balancers --region us-east-1
# Retrieve headers / TLS details
curl -Iv https://abcfad6f62e1a42bba95c282eea0c998-672012549.us-east-1.elb.amazonaws.com
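If a load balancer service is stuck in a pending state, the service events usually show why:
# List service details and recent events
kubectl describe svc my-container-lb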
EBS Storage Setup #
Install EBS CSI Driver Addon #
# Install AWS EBS CSI Driver Addon
eksctl create addon \
--name aws-ebs-csi-driver \
--cluster eks-example-cluster \
--region eu-central-1 \
--service-account-role-arn arn:aws:iam::012345678912:role/AmazonEKS_EBS_CSI_DriverRole
# Shell output:
2025-01-31 11:47:59 [ℹ] creating addon
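The addon status can be verified with eksctl:
# Verify the addon status
eksctl get addon --cluster eks-example-cluster --region eu-central-1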
Verify EBS CSI Driver Pods #
# List pods
kubectl get pods -n kube-system | grep ebs
# Shell output:
ebs-csi-controller-56ddb47f56-dv295 6/6 Running 0 30s
ebs-csi-controller-56ddb47f56-gwwpv 6/6 Running 0 30s
ebs-csi-node-hpt26 3/3 Running 0 30s
ebs-csi-node-pfbn7 3/3 Running 0 30s
ebs-csi-node-vb9fj 3/3 Running 0 30s
Verify the Managed EBS Policy Exists #
# Ensure IAM Role for EBS CSI Driver Exists
aws iam list-policies --query "Policies[?PolicyName=='AmazonEBSCSIDriverPolicy']"
# Shell output:
[
    {
        "PolicyName": "AmazonEBSCSIDriverPolicy",
        "PolicyId": "ANPAZKAPJZG4IV6FHD2UE",
        "Arn": "arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy",
        "Path": "/service-role/",
        "DefaultVersionId": "v3",
        "AttachmentCount": 1,
        "PermissionsBoundaryUsageCount": 0,
        "IsAttachable": true,
        "CreateDate": "2022-04-04T17:24:29+00:00",
        "UpdateDate": "2025-01-13T17:07:06+00:00"
    }
]
List Worker Nodes IAM Role #
# List cluster IAM roles
aws iam list-roles --query "Roles[?contains(RoleName, 'eks-example-cluster')].RoleName"
# Shell output:
[
"eks-example-cluster-20250115164427491600000001",
"eksctl-eks-example-cluster-cluster-ServiceRole-6yZDAum3Na9M",
"eksctl-eks-example-cluster-nodegro-NodeInstanceRole-DH5n36P3EH4v"
]
Attach IAM Policy to the Worker Node Role #
Attach the managed “AmazonEBSCSIDriverPolicy” policy to the worker node role:
# Attach the IAM policy to the worker node role
aws iam attach-role-policy \
--policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \
--role-name eksctl-eks-example-cluster-nodegro-NodeInstanceRole-DH5n36P3EH4v
Verify IAM Policy is Attached #
List the IAM policies that are attached to the Worker Node Role:
# List attached policies
aws iam list-attached-role-policies --role-name eksctl-eks-example-cluster-nodegro-NodeInstanceRole-DH5n36P3EH4v
# Shell output:
{
    "AttachedPolicies": [
        {
            "PolicyName": "AmazonSSMManagedInstanceCore",
            "PolicyArn": "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
        },
        {
            "PolicyName": "AmazonEKS_CNI_Policy",
            "PolicyArn": "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
        },
        {
            "PolicyName": "AmazonEC2ContainerRegistryReadOnly",
            "PolicyArn": "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
        },
        {
            "PolicyName": "AmazonEKSWorkerNodePolicy",
            "PolicyArn": "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
        },
        {
            "PolicyName": "AmazonEBSCSIDriverPolicy",
            "PolicyArn": "arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy"
        }
    ]
}
Restart the EBS CSI Pods #
Delete the EBS CSI pods to restart them:
# Delete the old EBS pods
kubectl delete pod -n kube-system -l app.kubernetes.io/name=aws-ebs-csi-driver
Verify EBS CSI Driver Pods #
# List pods
kubectl get pods -n kube-system | grep ebs
# Shell output:
ebs-csi-controller-56ddb47f56-5hf5m 6/6 Running 0 21s
ebs-csi-controller-56ddb47f56-lq678 6/6 Running 0 21s
ebs-csi-node-k6s8s 3/3 Running 0 20s
ebs-csi-node-q2c6c 3/3 Running 0 21s
ebs-csi-node-r2d9c 3/3 Running 0 20s
Example Persistent Volume Claim #
List Storage Class #
# List StorageClasses
kubectl get storageclasses
# Shell output:
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
gp2 kubernetes.io/aws-ebs Delete WaitForFirstConsumer false 73m
# List StorageClass details
kubectl describe storageclasses gp2
# Shell output:
Name: gp2
IsDefaultClass: No
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"gp2"},"parameters":{"fsType":"ext4","type":"gp2"},"provisioner":"kubernetes.io/aws-ebs","volumeBindingMode":"WaitForFirstConsumer"}
Provisioner: kubernetes.io/aws-ebs
Parameters: fsType=ext4,type=gp2
AllowVolumeExpansion: <unset>
MountOptions: <none>
ReclaimPolicy: Delete
VolumeBindingMode: WaitForFirstConsumer
Events: <none>
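The default gp2 class still uses the in-tree provisioner. With the EBS CSI driver installed, a gp3 class backed by the CSI provisioner can be added. A minimal sketch (the class name gp3 is an assumption):
# Create a gp3 StorageClass backed by the EBS CSI driver
kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3   # hypothetical class name
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  fsType: ext4
volumeBindingMode: WaitForFirstConsumer
EOF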
Create PVC and Pod #
# Create a manifest for the PVC and pod
vi example-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: gp2
---
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - mountPath: "/data"
          name: example-volume
  volumes:
    - name: example-volume
      persistentVolumeClaim:
        claimName: example-pvc
# Apply the PVC and pod
kubectl apply -f example-pvc.yaml
Verify PVC, PV & Pod #
# List PVCs
kubectl get pvc
# Shell output:
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
example-pvc Bound pvc-6d073c50-e917-41c5-832f-83487b4f9243 10Gi RWO gp2 <unset> 5s
# List PVs
kubectl get pv
# Shell output:
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS VOLUMEATTRIBUTESCLASS REASON AGE
pvc-6d073c50-e917-41c5-832f-83487b4f9243 10Gi RWO Delete Bound default/example-pvc gp2 <unset> 61s
# List pods
kubectl get pod
# Shell output:
NAME READY STATUS RESTARTS AGE
example-pod 1/1 Running 0 94s
Verify EBS Volume via Management Console #
![](img/elastic-block-storage.jpg)
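Alternatively, a hedged CLI check: the EBS CSI driver tags dynamically provisioned volumes with the PVC name, which can be used as a filter (the tag key is an assumption):
# List the EBS volume created for the PVC
aws ec2 describe-volumes \
  --region eu-central-1 \
  --filters "Name=tag:kubernetes.io/created-for/pvc/name,Values=example-pvc"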
Cleanup #
# Delete PVC & Pod
kubectl delete -f example-pvc.yaml
It may take a minute until the PV and the EBS volume are deleted:
# Verify the PVC & PV are deleted
kubectl get pvc,pv
# Shell output:
No resources found
Links #
# Install AWS CLIv2
https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html
# Install AWS Eksctl
https://docs.aws.amazon.com/emr/latest/EMR-on-EKS-DevelopmentGuide/setting-up-eksctl.html
# Create Cluster
https://eksctl.io/usage/creating-and-managing-clusters/
# Create Cluster
https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html