
Managed Kubernetes Services - AWS Elastic Kubernetes Service (EKS): Ingress and AWS Load Balancer Controller Setup, Example Ingress Resource with HTTP & HTTPS


Permissions Overview
#

IAM Role of the EC2 worker nodes:

  • --alb-ingress-access: Attaches the IAM permissions for ALB ingress to the EKS worker node role.

IAM Role mapped to a Kubernetes service account:

  • IAM Roles for Service Accounts (IRSA): Grants the AWS Load Balancer Controller permissions to create, update, and delete ALBs, Target Groups, and Listeners.

  • A Kubernetes Service Account (SA) is used by pods to interact with the Kubernetes API.



EKS Cluster
#

Create Cluster
#

# Create an EKS cluster: Define Kubernetes version & region
eksctl create cluster \
  --name eks-alb-example \
  --version 1.32 \
  --region eu-central-1 \
  --zones=eu-central-1a,eu-central-1b \
  --nodegroup-name managed-nodes \
  --node-type t3.medium \
  --nodes 2 \
  --managed \
  --alb-ingress-access
  • --alb-ingress-access Attaches the required IAM permissions to the EKS worker nodes so they can interact with the AWS ALB Ingress Controller.
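
eksctl normally writes the access credentials for the new cluster into ~/.kube/config. If kubectl cannot reach the cluster, the kubeconfig can be refreshed manually; a quick sketch using the cluster name and region from above:

# Update the local kubeconfig (usually done automatically by eksctl)
aws eks update-kubeconfig \
  --name eks-alb-example \
  --region eu-central-1

# Check that the worker nodes are ready
kubectl get nodes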

Verify Cluster
#

# Verify EKS cluster
eksctl get cluster --region eu-central-1

# Shell output:
NAME            REGION          EKSCTL CREATED
eks-alb-example eu-central-1    True
# Verify EKS Node Groups
eksctl get nodegroup \
  --cluster=eks-alb-example \
  --region eu-central-1

# Shell output:
CLUSTER         NODEGROUP       STATUS  CREATED                 MIN SIZE        MAX SIZE        DESIRED CAPACITY        INSTANCE TYPE   IMAGE ID        ASG NAME                                             TYPE
eks-alb-example managed-nodes   ACTIVE  2025-02-09T17:39:36Z    2               2               2                       t3.medium       AL2_x86_64      eks-managed-nodes-b2ca75e5-c71d-8f91-d908-b6b6b9a79e9a  managed
# Verify if any IAM Service Accounts present in EKS Cluster
eksctl get iamserviceaccount \
  --cluster=eks-alb-example \
  --region eu-central-1

# Shell output:
No iamserviceaccounts found



IAM Setup
#

Enable OIDC for IAM Authentication
#

  • OIDC (OpenID Connect) acts as a bridge between Kubernetes Service Accounts and AWS IAM Roles and lets Kubernetes authenticate with IAM securely

Allow the cluster to use AWS Identity and Access Management (IAM) for service accounts:

# Enable OIDC for the EKS cluster
eksctl utils associate-iam-oidc-provider \
  --region eu-central-1 \
  --cluster eks-alb-example \
  --approve

# Shell output:
2025-02-09 17:43:07 []  will create IAM Open ID Connect provider for cluster "eks-alb-example" in "eu-central-1"
2025-02-09 17:43:08 []  created IAM Open ID Connect provider for cluster "eks-alb-example" in "eu-central-1"
# List EKS cluster OIDC provider URL
aws eks describe-cluster \
  --name eks-alb-example \
  --query "cluster.identity.oidc.issuer" \
  --output text \
  --region eu-central-1

# Shell output:
https://oidc.eks.eu-central-1.amazonaws.com/id/5DD6C0748D47C3E21F60CDB34D0887DD
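
To confirm that the provider was actually registered in IAM, the existing OIDC providers can be listed; the issuer ID from the URL above should appear in one of the provider ARNs. A quick sketch:

# List the IAM OIDC providers of the AWS account
aws iam list-open-id-connect-providers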

Create IAM Policy for ALB Controller
#

Note: Use the IAM policy that matches your AWS region:
https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.6/deploy/installation

# Download the policy: (eu-central-1)
curl -o iam-policy_latest.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/install/iam_policy.json
# Create IAM policy: Use the downloaded json policy
aws iam create-policy \
    --policy-name AWSLoadBalancerControllerIAMPolicy \
    --policy-document file://iam-policy_latest.json

# Shell output:
{
    "Policy": {
        "PolicyName": "AWSLoadBalancerControllerIAMPolicy",
        "PolicyId": "ANPARCHUALINROQTGIXM3",
        "Arn": "arn:aws:iam::012345678912:policy/AWSLoadBalancerControllerIAMPolicy",
        "Path": "/",
        "DefaultVersionId": "v1",
        "AttachmentCount": 0,
        "PermissionsBoundaryUsageCount": 0,
        "IsAttachable": true,
        "CreateDate": "2025-02-09T17:49:03+00:00",
        "UpdateDate": "2025-02-09T17:49:03+00:00"
    }
}
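
Optionally, the new policy can be verified with the ARN from the output above; a quick sketch:

# Verify the IAM policy
aws iam get-policy \
  --policy-arn arn:aws:iam::012345678912:policy/AWSLoadBalancerControllerIAMPolicy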

Create IAM Role and Kubernetes ServiceAccount
#

The AWS Load Balancer Controller (which manages ALBs for Kubernetes Ingress) needs IAM permissions to create, update, and delete AWS ALB resources (LoadBalancers, Target Groups, Listeners). For security reasons, instead of assigning these permissions to EC2 worker nodes, they are attached to a Kubernetes service account using IAM Roles for Service Accounts (IRSA).

# Check for any existing service accounts
kubectl get sa aws-load-balancer-controller -n kube-system

# Shell output:
Error from server (NotFound): serviceaccounts "aws-load-balancer-controller" not found

Create a service account named “aws-load-balancer-controller” in the “kube-system” namespace for the AWS Load Balancer Controller:

  • Use the ARN of the previously created IAM policy
# Create an IAM Role and attach the policy
eksctl create iamserviceaccount \
--cluster=eks-alb-example \
--namespace=kube-system \
--name=aws-load-balancer-controller \
--attach-policy-arn=arn:aws:iam::012345678912:policy/AWSLoadBalancerControllerIAMPolicy \
--override-existing-serviceaccounts \
--region eu-central-1 \
--approve

# Shell output:
2025-02-09 17:51:22 []  1 iamserviceaccount (kube-system/aws-load-balancer-controller) was included (based on the include/exclude rules)
2025-02-09 17:51:22 [!]  metadata of serviceaccounts that exist in Kubernetes will be updated, as --override-existing-serviceaccounts was set
2025-02-09 17:51:22 []  1 task: {
    2 sequential sub-tasks: {
        create IAM role for serviceaccount "kube-system/aws-load-balancer-controller",
        create serviceaccount "kube-system/aws-load-balancer-controller",
    } }
2025-02-09 17:51:22 []  building iamserviceaccount stack "eksctl-eks-alb-example-addon-iamserviceaccount-kube-system-aws-load-balancer-controller"
2025-02-09 17:51:22 []  deploying stack "eksctl-eks-alb-example-addon-iamserviceaccount-kube-system-aws-load-balancer-controller"
2025-02-09 17:51:22 []  waiting for CloudFormation stack "eksctl-eks-alb-example-addon-iamserviceaccount-kube-system-aws-load-balancer-controller"
2025-02-09 17:51:52 []  waiting for CloudFormation stack "eksctl-eks-alb-example-addon-iamserviceaccount-kube-system-aws-load-balancer-controller"
2025-02-09 17:51:53 []  created serviceaccount "kube-system/aws-load-balancer-controller"

Verify Service Account
#

# List IAM service accounts
eksctl get iamserviceaccount \
  --cluster eks-alb-example \
  --region eu-central-1

# Shell output:
NAMESPACE       NAME                            ROLE ARN
kube-system     aws-load-balancer-controller    arn:aws:iam::012345678912:role/eksctl-eks-alb-example-addon-iamserviceaccoun-Role1-k9TC0xPyPpqy
# Verify the service account
kubectl get sa aws-load-balancer-controller -n kube-system

# Shell output:
NAME                           SECRETS   AGE
aws-load-balancer-controller   0         3m36s
# Verify the service account
kubectl get sa aws-load-balancer-controller -n kube-system -o yaml

# Shell output:
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::012345678912:role/eksctl-eks-alb-example-addon-iamserviceaccoun-Role1-k9TC0xPyPpqy
  creationTimestamp: "2025-02-09T17:51:53Z"
  labels:
    app.kubernetes.io/managed-by: eksctl
  name: aws-load-balancer-controller
  namespace: kube-system
  resourceVersion: "3576"
  uid: 801957aa-b93e-4451-b1cc-e3254075c6d8

The previously created Role ARN is added as an annotation, which confirms that the AWS IAM role is bound to the Kubernetes service account:

  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::012345678912:role/eksctl-eks-alb-example-addon-iamserviceaccoun-Role1-k9TC0xPyPpqy
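
To see how the IAM role trusts the cluster's OIDC provider, the trust policy of the role can be inspected (the role name is the last part of the ARN above). It should contain a sts:AssumeRoleWithWebIdentity statement scoped to the kube-system/aws-load-balancer-controller service account. A sketch:

# Show the trust policy of the IRSA role
aws iam get-role \
  --role-name eksctl-eks-alb-example-addon-iamserviceaccoun-Role1-k9TC0xPyPpqy \
  --query "Role.AssumeRolePolicyDocument"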



Install AWS Load Balancer Controller
#

Install Helm
#

Helm should already be installed; if not, it can be installed with the following script:

# Install Helm with script
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 &&
chmod +x get_helm.sh &&
./get_helm.sh
# Verify the installation / check version
helm version

Add Helm Repository
#

# Add the AWS EKS Helm repository
helm repo add eks https://aws.github.io/eks-charts &&
helm repo update

Install LoadBalancer Controller
#

Find the correct ECR image repository for eu-central-1:
https://docs.aws.amazon.com/eks/latest/userguide/add-ons-images.html

It should look like this: image.repository=602401143452.dkr.ecr.eu-central-1.amazonaws.com

# Install the LoadBalancer Controller: IRSA / IAM Roles for Service Accounts using OIDC (OpenID Connect)
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  --namespace kube-system \
  --set clusterName=eks-alb-example \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller \
  --set image.repository=602401143452.dkr.ecr.eu-central-1.amazonaws.com/amazon/aws-load-balancer-controller

# Shell output:
NAME: aws-load-balancer-controller
LAST DEPLOYED: Sun Feb  9 18:11:35 2025
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
AWS Load Balancer controller installed!
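
Optionally, the Helm release status can be checked as well; a quick sketch:

# List the Helm releases in the "kube-system" namespace
helm list -n kube-system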



Verify LoadBalancer Resources
#

# List deployments in "kube-system" namespace
kubectl -n kube-system get deployment aws-load-balancer-controller

# Shell output:
NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
aws-load-balancer-controller   2/2     2            2           55s
coredns                        2/2     2            2           35m
metrics-server                 2/2     2            2           35m
# List LoadBalancer pods
kubectl get pods -n kube-system | grep aws-load-balancer

# Shell output:
aws-load-balancer-controller-7c4dd5457c-6cmvc   1/1     Running   0          17s
aws-load-balancer-controller-7c4dd5457c-bg6jr   1/1     Running   0          17s
# Verify the "aws-load-balancer-webhook-service" service
kubectl -n kube-system get svc aws-load-balancer-webhook-service

# Shell output:
NAME                                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
aws-load-balancer-webhook-service   ClusterIP   10.100.143.249   <none>        443/TCP   3m15s
# List LoadBalancer logs
kubectl logs -n kube-system deployment/aws-load-balancer-controller

# List "aws-load-balancer-controller" deployment details
kubectl -n kube-system describe deployment aws-load-balancer-controller

# List "aws-load-balancer-webhook-service" details
kubectl -n kube-system describe svc aws-load-balancer-webhook-service



Install Ingress Class
#

Apply Ingress Class
#

Create a new default IngressClass. Any ingress created without explicitly specifying an ingress class will use this one:

  • ingressclass-resource.yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: my-aws-ingress-class
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"
spec:
  controller: ingress.k8s.aws/alb
# Apply the Ingress Class
kubectl apply -f ingressclass-resource.yaml

# Shell output:
ingressclass.networking.k8s.io/my-aws-ingress-class created

Verify Ingress Class
#

# Verify IngressClass
kubectl get ingressclass

# Shell output:
NAME                   CONTROLLER            PARAMETERS   AGE
alb                    ingress.k8s.aws/alb   <none>       11m
my-aws-ingress-class   ingress.k8s.aws/alb   <none>       119s
# List IngressClass details
kubectl describe ingressclass my-aws-ingress-class

# Shell output:
Name:         my-aws-ingress-class
Labels:       <none>
Annotations:  ingressclass.kubernetes.io/is-default-class: true
Controller:   ingress.k8s.aws/alb
Events:       <none>
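
The default-class annotation can also be read directly with a jsonpath query; a quick sketch (should return "true"):

# Read the "is-default-class" annotation
kubectl get ingressclass my-aws-ingress-class \
  -o jsonpath='{.metadata.annotations.ingressclass\.kubernetes\.io/is-default-class}'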



Example Ingress Resource: HTTP Version
#

Deployment, NodePort Service & Ingress
#

The following configuration creates a Deployment with a NodePort Service and an ALB Ingress:

  • example-app.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deployment
  labels:
    app: example-container
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-container
  template:
    metadata:
      labels:
        app: example-container
    spec:
      containers:
      - name: nginx
        image: jueklu/container-2
        ports:
        - containerPort: 8080
---

apiVersion: v1
kind: Service
metadata:
  name: example-container-nodeport-service
  labels:
    app: example-container
spec:
  type: NodePort
  selector:
    app: example-container
  ports:
    - port: 80
      targetPort: 8080
---

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  labels:
    app: example-container
  annotations:
    alb.ingress.kubernetes.io/load-balancer-name: example-alb # ALB name
    alb.ingress.kubernetes.io/scheme: internet-facing
    # Health Check Settings
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP 
    alb.ingress.kubernetes.io/healthcheck-port: traffic-port
    alb.ingress.kubernetes.io/healthcheck-path: /  # root endpoint
    alb.ingress.kubernetes.io/healthcheck-interval-seconds: '15'
    alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '5'
    alb.ingress.kubernetes.io/success-codes: '200'
    alb.ingress.kubernetes.io/healthy-threshold-count: '2'
    alb.ingress.kubernetes.io/unhealthy-threshold-count: '2'
spec:
  ingressClassName: my-aws-ingress-class # Ingress Class
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-container-nodeport-service
            port:
              number: 80  # Service port
# Apply resources
kubectl apply -f example-app.yaml

Verify the Ingress Resource
#

# List the deployment
kubectl get deployment

# Shell output:
NAME                 READY   UP-TO-DATE   AVAILABLE   AGE
example-deployment   3/3     3            3           9s
# List Ingress
kubectl get ingress example-ingress

# Shell output:
NAME              CLASS                  HOSTS   ADDRESS                                                 PORTS   AGE
example-ingress   my-aws-ingress-class   *       example-alb-1153623869.eu-central-1.elb.amazonaws.com   80      30s

Test the Ingress
#

Note: It can take a minute or two until the ALB is ready!
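
The provisioning state of the ALB can also be checked from the AWS side; the name comes from the "alb.ingress.kubernetes.io/load-balancer-name" annotation. A quick sketch (the state should change from "provisioning" to "active"):

# Check the ALB provisioning state
aws elbv2 describe-load-balancers \
  --names example-alb \
  --region eu-central-1 \
  --query "LoadBalancers[0].State"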

# Curl the Ingress resource
curl example-alb-1153623869.eu-central-1.elb.amazonaws.com

# Shell output: (Wait a bit)
curl: (6) Could not resolve host: example-alb-1153623869.eu-central-1.elb.amazonaws.com

# Shell output:
Container runs on: example-deployment-84fd4f4979-ncxxb

# Shell output:
Container runs on: example-deployment-84fd4f4979-9rdxk

# Shell output:
Container runs on: example-deployment-84fd4f4979-vm6wq

Delete Resources
#

# Delete resources
kubectl delete -f example-app.yaml



Example Ingress Resource: HTTPS Version
#

Managed TLS Certificate
#

Create Certificate
#

# Create a New Wildcard ACM Certificate
aws acm request-certificate \
  --domain-name "alb-ingress-test.jklug.work" \
  --validation-method DNS \
  --region eu-central-1

# Shell output:
{
    "CertificateArn": "arn:aws:acm:eu-central-1:012345678912:certificate/53274868-3286-4c7a-be40-7efec9547ae8"
}

List Certificate Details
#

# List certificate details: Define certificate ARN
aws acm describe-certificate \
  --certificate-arn arn:aws:acm:eu-central-1:012345678912:certificate/53274868-3286-4c7a-be40-7efec9547ae8 \
  --region eu-central-1

# Shell output:
{
    "Certificate": {
        "CertificateArn": "arn:aws:acm:eu-central-1:012345678912:certificate/53274868-3286-4c7a-be40-7efec9547ae8",
        "DomainName": "alb-ingress-test.jklug.work",
        "SubjectAlternativeNames": [
            "alb-ingress-test.jklug.work"
        ],
        "DomainValidationOptions": [
            {
                "DomainName": "alb-ingress-test.jklug.work",
                "ValidationDomain": "alb-ingress-test.jklug.work",
                "ValidationStatus": "PENDING_VALIDATION",
                "ResourceRecord": {
                    "Name": "_138e3c25598d5ea548e59a0cec234b93.alb-ingress-test.jklug.work.",
                    "Type": "CNAME",
                    "Value": "_4c8505a542fd7a515423e89b6855d61b.zfyfvmchrl.acm-validations.aws."
                },
                "ValidationMethod": "DNS"
            }
        ],
        "Subject": "CN=alb-ingress-test.jklug.work",
        "Issuer": "Amazon",
        "CreatedAt": "2025-02-09T18:59:44.653000+00:00",
        "Status": "PENDING_VALIDATION",
        "KeyAlgorithm": "RSA-2048",
        "SignatureAlgorithm": "SHA256WITHRSA",
        "InUseBy": [],
        "Type": "AMAZON_ISSUED",
        "KeyUsages": [],
        "ExtendedKeyUsages": [],
        "RenewalEligibility": "INELIGIBLE",
        "Options": {
            "CertificateTransparencyLoggingPreference": "ENABLED"
        }
    }
}

Take the values from the following section and create a CNAME DNS record in Route 53:

        "DomainValidationOptions": [
            {
                "DomainName": "alb-ingress-test.jklug.work",
                "ValidationDomain": "alb-ingress-test.jklug.work",
                "ValidationStatus": "PENDING_VALIDATION",
                "ResourceRecord": {
                    "Name": "_138e3c25598d5ea548e59a0cec234b93.alb-ingress-test.jklug.work.",
                    "Type": "CNAME",
                    "Value": "_4c8505a542fd7a515423e89b6855d61b.zfyfvmchrl.acm-validations.aws."
                },
                "ValidationMethod": "DNS"
            }
        ],

List Hosted Zone ID
#

# Identify the hosted zone ID
aws route53 list-hosted-zones

# Shell output:
{
    "HostedZones": [
        {
            "Id": "/hostedzone/Z05838622L1FJFSmyzone",
            "Name": "jklug.work.",
            "CallerReference": "9e7f187b-3129-424b-9cf6-c50141743c23",
            "Config": {
                "Comment": "",
                "PrivateZone": false
            },
            "ResourceRecordSetCount": 20
        }

Create DNS Entry / CNAME Record
#

# Create a CNAME DNS record in Route 53: Define the hosted zone ID and the validation record name & value
aws route53 change-resource-record-sets --hosted-zone-id Z05838622L1FJFSmyzone --change-batch '{
    "Changes": [
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "_138e3c25598d5ea548e59a0cec234b93.alb-ingress-test.jklug.work.",
                "Type": "CNAME",
                "TTL": 300,
                "ResourceRecords": [
                    {
                        "Value": "_4c8505a542fd7a515423e89b6855d61b.zfyfvmchrl.acm-validations.aws."
                    }
                ]
            }
        }
    ]
}'

# Shell output:
{
    "ChangeInfo": {
        "Id": "/change/C05722161TO4R9FRGV1TT",
        "Status": "PENDING",
        "SubmittedAt": "2025-02-09T19:02:04.901000+00:00"
    }
}

Verify Certificate Status
#

Wait until the “ValidationStatus” changes to “SUCCESS”:
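
Alternatively, the AWS CLI provides a waiter that blocks until the certificate has been issued; a quick sketch using the certificate ARN from above:

# Wait until the certificate is validated / issued
aws acm wait certificate-validated \
  --certificate-arn arn:aws:acm:eu-central-1:012345678912:certificate/53274868-3286-4c7a-be40-7efec9547ae8 \
  --region eu-central-1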

# List certificate details: Define certificate ARN
aws acm describe-certificate \
  --certificate-arn arn:aws:acm:eu-central-1:012345678912:certificate/53274868-3286-4c7a-be40-7efec9547ae8 \
  --region eu-central-1

# Shell output:
{
    "Certificate": {
        "CertificateArn": "arn:aws:acm:eu-central-1:012345678912:certificate/53274868-3286-4c7a-be40-7efec9547ae8",
        "DomainName": "alb-ingress-test.jklug.work",
        "SubjectAlternativeNames": [
            "alb-ingress-test.jklug.work"
        ],
        "DomainValidationOptions": [
            {
                "DomainName": "alb-ingress-test.jklug.work",
                "ValidationDomain": "alb-ingress-test.jklug.work",
                "ValidationStatus": "SUCCESS",
                "ResourceRecord": {
                    "Name": "_138e3c25598d5ea548e59a0cec234b93.alb-ingress-test.jklug.work.",
                    "Type": "CNAME",
                    "Value": "_4c8505a542fd7a515423e89b6855d61b.zfyfvmchrl.acm-validations.aws."
                },
                "ValidationMethod": "DNS"
            }
        ],
        "Subject": "CN=alb-ingress-test.jklug.work",
        "Issuer": "Amazon",
        "CreatedAt": "2025-02-09T18:59:44.653000+00:00",
        "Status": "PENDING_VALIDATION",
        "KeyAlgorithm": "RSA-2048",
        "SignatureAlgorithm": "SHA256WITHRSA",
        "InUseBy": [],
        "Type": "AMAZON_ISSUED",
        "KeyUsages": [],
        "ExtendedKeyUsages": [],
        "RenewalEligibility": "INELIGIBLE",
        "Options": {
            "CertificateTransparencyLoggingPreference": "ENABLED"
        }
    }
}



Deployment, NodePort Service & Ingress
#

The following configuration creates a Deployment with a NodePort Service and an ALB Ingress:

  • example-app-tls.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deployment
  labels:
    app: example-container
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-container
  template:
    metadata:
      labels:
        app: example-container
    spec:
      containers:
      - name: nginx
        image: jueklu/container-2
        ports:
        - containerPort: 8080
---

apiVersion: v1
kind: Service
metadata:
  name: example-container-nodeport-service
  labels:
    app: example-container
spec:
  type: NodePort
  selector:
    app: example-container
  ports:
    - port: 80
      targetPort: 8080
---

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  labels:
    app: example-container
  annotations:
    alb.ingress.kubernetes.io/load-balancer-name: example-alb  # ALB name
    alb.ingress.kubernetes.io/scheme: internet-facing
    # Redirect HTTP to HTTPS
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    alb.ingress.kubernetes.io/ssl-redirect: '443'
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:eu-central-1:012345678912:certificate/53274868-3286-4c7a-be40-7efec9547ae8
    # Health Check Settings
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP 
    alb.ingress.kubernetes.io/healthcheck-port: traffic-port
    alb.ingress.kubernetes.io/healthcheck-path: /
    alb.ingress.kubernetes.io/healthcheck-interval-seconds: '15'
    alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '5'
    alb.ingress.kubernetes.io/success-codes: '200'
    alb.ingress.kubernetes.io/healthy-threshold-count: '2'
    alb.ingress.kubernetes.io/unhealthy-threshold-count: '2'
spec:
  ingressClassName: my-aws-ingress-class  # Ingress Class
  rules:
  - host: alb-ingress-test.jklug.work  # domain name
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-container-nodeport-service
            port:
              number: 80  # Service port
# Apply resources
kubectl apply -f example-app-tls.yaml


Verify the Ingress Resource
#

# List the deployment
kubectl get deployment

# Shell output:
NAME                 READY   UP-TO-DATE   AVAILABLE   AGE
example-deployment   3/3     3            3           35s
# List Ingress
kubectl get ingress example-ingress

# Shell output:
NAME              CLASS                  HOSTS                         ADDRESS                                                 PORTS   AGE
example-ingress   my-aws-ingress-class   alb-ingress-test.jklug.work   example-alb-1470213930.eu-central-1.elb.amazonaws.com   80      44s

Create Route53 DNS Entry
#

# Create a CNAME DNS record in Route 53: Define the hosted zone ID, the domain name, and the ALB DNS name as value
aws route53 change-resource-record-sets --hosted-zone-id Z05838622L1FJFSmyzone --change-batch '{
    "Changes": [
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "alb-ingress-test.jklug.work.",
                "Type": "CNAME",
                "TTL": 300,
                "ResourceRecords": [
                    {
                        "Value": "example-alb-1470213930.eu-central-1.elb.amazonaws.com"
                    }
                ]
            }
        }
    ]
}'

# Shell output:
{
    "ChangeInfo": {
        "Id": "/change/C08267623NPFUFP6MD0UP",
        "Status": "PENDING",
        "SubmittedAt": "2025-02-09T19:11:58.094000+00:00"
    }
}

Test the Ingress
#

Note: It can take a minute or two until the ALB is ready!

# Curl the Ingress resource
curl https://alb-ingress-test.jklug.work

# Shell output:
Container runs on: example-deployment-84fd4f4979-j5qzs

# Shell output:
Container runs on: example-deployment-84fd4f4979-bg8tv

# Shell output:
Container runs on: example-deployment-84fd4f4979-dvfn4
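
To inspect the ACM certificate that the ALB actually serves, something like the following can be used (assuming openssl is installed locally); a quick sketch:

# Show subject, issuer and validity of the served certificate
openssl s_client -connect alb-ingress-test.jklug.work:443 \
  -servername alb-ingress-test.jklug.work </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer -dates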

Delete Resources
#

# Delete resources
kubectl delete -f example-app-tls.yaml



Cleanup
#

Delete IAM Policy
#

# Get Policy ARN
aws iam list-policies --query "Policies[?PolicyName=='AWSLoadBalancerControllerIAMPolicy'].Arn" --output text

# Shell output:
arn:aws:iam::012345678912:policy/AWSLoadBalancerControllerIAMPolicy
# Delete the policy
aws iam delete-policy --policy-arn arn:aws:iam::012345678912:policy/AWSLoadBalancerControllerIAMPolicy
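
An IAM policy can only be deleted once it is detached from all roles. If the command above fails, the attached entities can be listed first; a quick sketch:

# List roles the policy is still attached to
aws iam list-entities-for-policy \
  --policy-arn arn:aws:iam::012345678912:policy/AWSLoadBalancerControllerIAMPolicy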

Delete Route 53 DNS Records
#

aws route53 change-resource-record-sets --hosted-zone-id Z05838622L1FJFSmyzone --change-batch '{
    "Comment": "Deleting CNAME record for ALB",
    "Changes": [
        {
            "Action": "DELETE",
            "ResourceRecordSet": {
                "Name": "alb-ingress-test.jklug.work.",
                "Type": "CNAME",
                "TTL": 300,
                "ResourceRecords": [
                    {
                        "Value": "example-alb-1470213930.eu-central-1.elb.amazonaws.com"
                    }
                ]
            }
        }
    ]
}'
aws route53 change-resource-record-sets --hosted-zone-id Z05838622L1FJFSmyzone --change-batch '{
    "Comment": "Deleting CNAME record for ACM validation",
    "Changes": [
        {
            "Action": "DELETE",
            "ResourceRecordSet": {
                "Name": "_138e3c25598d5ea548e59a0cec234b93.alb-ingress-test.jklug.work.",
                "Type": "CNAME",
                "TTL": 300,
                "ResourceRecords": [
                    {
                        "Value": "_4c8505a542fd7a515423e89b6855d61b.zfyfvmchrl.acm-validations.aws."
                    }
                ]
            }
        }
    ]
}'

Delete Managed Certificate
#

# Delete the certificate
aws acm delete-certificate \
  --certificate-arn arn:aws:acm:eu-central-1:012345678912:certificate/53274868-3286-4c7a-be40-7efec9547ae8 \
  --region eu-central-1
# Verify the certificate was deleted
aws acm list-certificates --region eu-central-1

# Shell output:
{
    "CertificateSummaryList": []
}

Delete EKS Cluster
#

# Delete EKS cluster
eksctl delete cluster \
  --name eks-alb-example \
  --region eu-central-1
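
Once the deletion has finished, the cluster list should no longer contain the cluster; a quick check:

# Verify the cluster was deleted
eksctl get cluster --region eu-central-1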



Links
#

# AWS LoadBalancer Controller installation
https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.6/deploy/installation/

# Ingress Class
https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/ingress/ingress_class/