
K8s Kubeadm - Basic Kubernetes Cluster Deployment with one Controller and two Worker Nodes, Containerd and Kubeadm Cgroup Driver Configuration, Cilium Network Add-On, MetalLB & Nginx Ingress Controller, Test-Deployment with TLS Encryption

This article is Part 10 of the Kubernetes-Cluster series.

Overview
#

In this tutorial I’m setting up a basic Kubernetes cluster with one Controller and two Worker nodes, based on Ubuntu 24.04 virtual machines.

192.168.30.10 # Controller Node
192.168.30.11 # Worker Node 1
192.168.30.12 # Worker Node 2

Prerequisites
#

Install Dependencies
#

# Install dependencies
sudo apt update && sudo apt upgrade -y &&
sudo apt install apt-transport-https ca-certificates curl -y

Enable IPv4 Packet Forwarding
#

Note: This example uses a separate configuration file instead of the default “/etc/sysctl.conf” file. The “sudo sysctl --system” command reloads the system-wide sysctl settings from all standard locations.

# Enable IPv4 forwarding between network interfaces
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
# Apply settings
sudo sysctl --system
# Verify IPv4 forwarding
sysctl net.ipv4.ip_forward

# Shell output:
net.ipv4.ip_forward = 1

Disable Swap
#

# Uncomment "swap" in /etc/fstab & disable swap
sudo sed -i '/[ \t]swap[ \t]/ s/^\(.*\)$/#\1/g' /etc/fstab && 
sudo swapoff -a
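
A quick check confirms that swap is now off (both commands are standard util-linux / procps tools and should report no active swap):

# Verify swap is disabled (no output / zero swap size expected)
swapon --show
free -h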

Kernel Modules
#

Overview
#

  • Overlay Module: Enables the overlay filesystem, which allows one filesystem to be overlaid on another so that modifications can be made without altering the original underlying filesystem. The overlay filesystem is crucial for how container images are built and layered.

  • br_netfilter Module: Allows bridged IP/ARP/NDP traffic to be filtered through the Linux networking stack’s netfilter framework. Kubernetes uses this module to enforce network policies that control traffic flow at the IP address or port level between pods and network endpoints.

Enable Kernel Modules
#

# Automatically load the kernel modules at boot
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
# Immediately load the kernel modules
sudo modprobe overlay && sudo modprobe br_netfilter
# Verify the modules are loaded
lsmod | grep overlay
lsmod | grep br_netfilter
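
With br_netfilter loaded, the bridge-related sysctl keys from the previous step become available and can be checked directly:

# Optional: confirm the bridge netfilter settings are active
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables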

Container Runtime
#

Overview
#

  • containerd: Package available in Ubuntu’s default repositories

  • containerd.io: Package provided by Docker; usually more up-to-date (a version comparison is shown below)
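
To compare what the two packages would install, the candidate versions can be listed once the Docker repository from the next step is in place:

# Optional: compare the candidate versions of both packages
apt-cache policy containerd containerd.io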

Install Containerd Runtime
#

Add the Docker Repository
#

# Download the Docker GPG Key / save to file
curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o docker.gpg

# Add the Key to the Trusted Keyring
sudo gpg --no-default-keyring --keyring /usr/share/keyrings/docker-archive-keyring.gpg --import docker.gpg &&
sudo gpg --no-default-keyring --keyring /usr/share/keyrings/docker-archive-keyring.gpg --export --output /etc/apt/trusted.gpg.d/docker-archive-keyring.gpg

# Set up the stable Docker repository
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/trusted.gpg.d/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

Install Containerd
#

# Install the Containerd package
sudo apt-get update && sudo apt-get install -y containerd.io

Configure Containerd Runtime
#

Copy Default Configuration
#

# Create configuration directory
sudo mkdir -p /etc/containerd

# Generate and save the default configuration
containerd config default | sudo tee /etc/containerd/config.toml

Adapt the Configuration
#

Adapt the containerd configuration and set “SystemdCgroup” to “true”. This sets the cgroup driver to “systemd”.

# Open the containerd configuration
sudo vi /etc/containerd/config.toml
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]

        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          base_runtime_spec = ""
          cni_conf_dir = ""
          cni_max_conf_num = 0
          container_annotations = []
          pod_annotations = []
          privileged_without_host_devices = false
          runtime_engine = ""
          runtime_path = ""
          runtime_root = ""
          runtime_type = "io.containerd.runc.v2"

          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            BinaryName = ""
            CriuImagePath = ""
            CriuPath = ""
            CriuWorkPath = ""
            IoGid = 0
            IoUid = 0
            NoNewKeyring = false
            NoPivotRoot = false
            Root = ""
            ShimCgroup = ""
            SystemdCgroup = true # Set to true

Or use the following shell command:

# Set "SystemdCgroup" to "true"
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
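
Either way, a quick grep should confirm the change (the expected line matches the configuration shown above):

# Verify the cgroup driver setting
grep SystemdCgroup /etc/containerd/config.toml

# Shell output:
            SystemdCgroup = true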

Apply the Changes
#

# Restart Containerd service
sudo systemctl restart containerd

# Enable containerd after boot (should be enabled by default)
sudo systemctl enable containerd

# Check the Containerd status
systemctl status containerd

Install Kubeadm, Kubelet, Kubectl
#

Add Kubernetes Repository
#

# Download the Kubernetes GPG Key / save to file
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.26/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg


# Add the stable Kubernetes v1.26 repository (version 1.26.15)
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.26/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

Install Kubernetes Components
#

# Install kubelet kubeadm kubectl
sudo apt update &&
sudo apt install -y kubelet kubeadm kubectl

# Stop automatic upgrades for the packages
sudo apt-mark hold kubelet kubeadm kubectl
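
apt-mark can also confirm the hold status:

# Verify the packages are on hold (assuming no other packages are held)
apt-mark showhold

# Shell output:
kubeadm
kubectl
kubelet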

Kubeadm Configuration File
#

Create a Kubeadm configuration file that defines the “systemd” cgroup driver and the subnet range for the pod network:

# Create a configuration file for the cluster initialization
vi kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: "stable"
networking:
  podSubnet: "10.0.0.0/16"
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd

Initialize the Kubernetes Cluster
#

Note: If “--apiserver-advertise-address” is not set, Kubeadm automatically selects the IP address of the default network interface.
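
To pin the advertise address while still using a configuration file, an InitConfiguration section can be prepended to “kubeadm-config.yaml”. This is a sketch using this tutorial’s Controller IP; flags like “--apiserver-advertise-address” generally cannot be mixed with “--config”:

# Optional: prepend to kubeadm-config.yaml to pin the advertise address
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: "192.168.30.10"
---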

# Pull images
sudo kubeadm config images pull

# Initialize cluster
sudo kubeadm init --config kubeadm-config.yaml
# Shell output:
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.30.10:6443 --token p411xg.3qzelxexr571wkr0 \
        --discovery-token-ca-cert-hash sha256:2c637a451bbbda1187495450fb06310e576cae1dc72ca6b2570f80e3edb3d01a

Kubectl Configuration
#

On the Controller node, prepare the kubeconfig file with the cluster details & credentials:

# Verify Kubectl is installed
kubectl version --client

Non-root user:

# Make kubeconfig available for current user
mkdir -p $HOME/.kube &&
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config &&
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Root user: Temporary

# Export the kubeconfig
export KUBECONFIG=/etc/kubernetes/admin.conf

Root user: Permanent

# Add kubeconfig path environment variable
echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> ~/.bashrc

# Apply changes
source ~/.bashrc

Install Pod Network Add-On
#

# Download the Cilium binaries
curl -LO https://github.com/cilium/cilium-cli/releases/latest/download/cilium-linux-amd64.tar.gz

# Extract the binary into the "/usr/local/bin" directory
sudo tar xzvfC cilium-linux-amd64.tar.gz /usr/local/bin

# Remove the archive
rm cilium-linux-amd64.tar.gz
# Install Cilium
cilium install

# Shell output:
ℹ️  Using Cilium version 1.15.5
🔮 Auto-detected cluster name: kubernetes
🔮 Auto-detected kube-proxy has been installed
# Verify status
cilium status

# Shell output:
    /¯¯\
 /¯¯\__/¯¯\    Cilium:             OK
 \__/¯¯\__/    Operator:           OK
 /¯¯\__/¯¯\    Envoy DaemonSet:    disabled (using embedded mode)
 \__/¯¯\__/    Hubble Relay:       disabled
    \__/       ClusterMesh:        disabled

DaemonSet              cilium             Desired: 1, Ready: 1/1, Available: 1/1
Deployment             cilium-operator    Desired: 1, Ready: 1/1, Available: 1/1
Containers:            cilium             Running: 1
                       cilium-operator    Running: 1
Cluster Pods:          2/2 managed by Cilium
Helm chart version:
Image versions         cilium             quay.io/cilium/cilium:v1.15.5@sha256:4ce1666a73815101ec9a4d360af6c5b7f1193ab00d89b7124f8505dee147ca40: 1
                       cilium-operator    quay.io/cilium/operator-generic:v1.15.5@sha256:f5d3d19754074ca052be6aac5d1ffb1de1eb5f2d947222b5f10f6d97ad4383e8: 1
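
Optionally, the Cilium CLI ships an end-to-end connectivity test that deploys temporary workloads into the cluster; it takes several minutes and is best run once the worker nodes have joined:

# Optional: run the Cilium connectivity test (deploys test pods)
cilium connectivity test

# Remove the test resources afterwards
kubectl delete namespace cilium-test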

Restart Kubelet
#

# Restart Kubelet
sudo systemctl restart kubelet

Verify Kubelet
#

# List the Kubelet pods
kubectl get pods -n kube-system

# Shell output:
NAME                              READY   STATUS    RESTARTS   AGE
cilium-kjpm8                      1/1     Running   0          49m
cilium-operator-fdf6bc9f4-xwf79   1/1     Running   0          49m
coredns-787d4945fb-44rvg          1/1     Running   0          62m
coredns-787d4945fb-pz7xc          1/1     Running   0          62m
etcd-ubuntu1                      1/1     Running   0          63m
kube-apiserver-ubuntu1            1/1     Running   0          63m
kube-controller-manager-ubuntu1   1/1     Running   0          63m
kube-proxy-jxjdz                  1/1     Running   0          62m
kube-scheduler-ubuntu1            1/1     Running   0          63m
# Check the Kubelet status
sudo systemctl status kubelet
# List Kubelet logs
sudo journalctl -u kubelet

Verify the Cluster
#

# List the nodes
kubectl get nodes -o wide

# Shell output:
NAME      STATUS   ROLES           AGE   VERSION    INTERNAL-IP     EXTERNAL-IP   OS-IMAGE           KERNEL-VERSION     CONTAINER-RUNTIME
ubuntu1   Ready    control-plane   66m   v1.26.15   192.168.30.10   <none>        Ubuntu 24.04 LTS   6.8.0-35-generic   containerd://1.6.33
# List cluster info
kubectl cluster-info

# Shell output:
Kubernetes control plane is running at https://192.168.30.10:6443
CoreDNS is running at https://192.168.30.10:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.



Add Worker Node
#

Node Prerequisites
#

Use the following script to prepare the new node. It sets up the prerequisites and installs the Containerd runtime, as well as Kubeadm and Kubelet.

### Prerequisites ###
# Install dependencies
sudo apt update && sudo apt upgrade -y
sudo apt install apt-transport-https ca-certificates curl -y

# Enable IPv4 forwarding between network interfaces
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

# Apply settings
sudo sysctl --system

# Disable Swap
sudo sed -i '/[ \t]swap[ \t]/ s/^\(.*\)$/#\1/g' /etc/fstab
sudo swapoff -a

# Load the kernel modules at boot
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

# Load the kernel modules
sudo modprobe overlay && sudo modprobe br_netfilter


### Containerd Runtime ###
# Download the Docker GPG Key / save to file
curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o docker.gpg

# Add the Key to the Trusted Keyring
sudo gpg --no-default-keyring --keyring /usr/share/keyrings/docker-archive-keyring.gpg --import docker.gpg
sudo gpg --no-default-keyring --keyring /usr/share/keyrings/docker-archive-keyring.gpg --export --output /etc/apt/trusted.gpg.d/docker-archive-keyring.gpg

# Set up the stable Docker repository
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/trusted.gpg.d/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

# Install the Containerd package
sudo apt-get update && sudo apt-get install -y containerd.io

# Create configuration directory
sudo mkdir -p /etc/containerd

# Generate and save the default configuration
containerd config default | sudo tee /etc/containerd/config.toml

# Set "SystemdCgroup" to "true"
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml

# Restart Containerd service
sudo systemctl restart containerd

# Enable containerd after boot (should be enabled by default)
sudo systemctl enable containerd


### Kubeadm & Kubelet ###
# Download the Kubernetes GPG Key / save to file
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.26/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

# Add the stable Kubernetes v1.26 repository (version 1.26.15)
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.26/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

# Install kubelet & kubeadm
sudo apt update &&
sudo apt install -y kubelet kubeadm

# Stop automatic upgrades for the packages
sudo apt-mark hold kubelet kubeadm

# Start & enable kubelet
sudo systemctl enable --now kubelet
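
After the script has finished, containerd should be active; kubelet will keep restarting until the node joins the cluster, which is expected at this point:

# Verify containerd is running on the new node
sudo systemctl is-active containerd

# Shell output:
active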

Create Join Token
#

On the initial / first Controller node, create a join token and a discovery hash:

# Generate token and discovery hash
kubeadm token create --print-join-command

# Shell output:
kubeadm join 192.168.30.10:6443 --token ky44vc.nv8ncgxnyw8z0zuk --discovery-token-ca-cert-hash sha256:2c637a451bbbda1187495450fb06310e576cae1dc72ca6b2570f80e3edb3d01a

Join the Worker Nodes
#

Run the following command on the worker nodes to join the cluster:

# Join the worker nodes
sudo kubeadm join 192.168.30.10:6443 --token ky44vc.nv8ncgxnyw8z0zuk \
  --discovery-token-ca-cert-hash sha256:2c637a451bbbda1187495450fb06310e576cae1dc72ca6b2570f80e3edb3d01a
# Shell output:
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Label Worker Nodes
#

# Label the worker nodes
kubectl label nodes ubuntu2 kubernetes.io/role=worker &&
kubectl label nodes ubuntu3 kubernetes.io/role=worker

Verify the Cluster
#

Verify the Kubernetes nodes from the Controller node:

# List Kubernetes nodes (it may take a while until the status changes to Ready)
kubectl get nodes -o wide

# Shell output:
NAME      STATUS   ROLES           AGE    VERSION    INTERNAL-IP     EXTERNAL-IP   OS-IMAGE           KERNEL-VERSION     CONTAINER-RUNTIME
ubuntu1   Ready    control-plane   13h    v1.26.15   192.168.30.10   <none>        Ubuntu 24.04 LTS   6.8.0-35-generic   containerd://1.6.33
ubuntu2   Ready    worker          15m    v1.26.15   192.168.30.11   <none>        Ubuntu 24.04 LTS   6.8.0-35-generic   containerd://1.6.33
ubuntu3   Ready    worker          2m8s   v1.26.15   192.168.30.12   <none>        Ubuntu 24.04 LTS   6.8.0-35-generic   containerd://1.6.33



Install Helm
#

# Install Helm with script
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 &&
chmod +x get_helm.sh &&
./get_helm.sh
# Verify the installation / check version
helm version

MetalLB
#

Add Helm Repository
#

# Add the MetalLB repository
helm repo add metallb https://metallb.github.io/metallb

# Update index
helm repo update
# Optional: Save & adopt the MetalLB chart values
helm show values metallb/metallb > metallb-values.yaml

Install MetalLB
#

# Install MetalLB
helm install --create-namespace --namespace metallb-system metallb metallb/metallb

# Shell output:
NAME: metallb
LAST DEPLOYED: Sat Jun 22 10:12:15 2024
NAMESPACE: metallb-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
MetalLB is now running in the cluster.

Now you can configure it via its CRs. Please refer to the metallb official docs
on how to use the CRs.
# Verify the resources: Wait till all pods are up and running
kubectl get pods -n metallb-system

# Shell output:
NAME                                 READY   STATUS    RESTARTS   AGE
metallb-controller-f7cff5b89-zrkjv   1/1     Running   0          3m49s
metallb-speaker-8dfkz                4/4     Running   0          3m49s
metallb-speaker-dk7bg                4/4     Running   0          3m49s
metallb-speaker-k26z6                4/4     Running   0          3m49s

MetalLB Configuration
#

# Create a configuration for MetalLB
vi metallb-configuration.yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: main-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.30.200-192.168.30.254

---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: main-advertisement
  namespace: metallb-system
spec:
  ipAddressPools:
    - main-pool
# Deploy the MetalLB configuration
kubectl apply -f metallb-configuration.yaml

Verify the Configuration
#

# Verify the MetalLB IP pools
kubectl get IPAddressPool -n metallb-system

# Shell output:
NAME        AUTO ASSIGN   AVOID BUGGY IPS   ADDRESSES
main-pool   true          false             ["192.168.30.200-192.168.30.254"]
# Verify the L2Advertisement
kubectl get L2Advertisement -n metallb-system

# Shell output:
NAME                 IPADDRESSPOOLS   IPADDRESSPOOL SELECTORS   INTERFACES
main-advertisement   ["main-pool"]

Test Deployment
#

Deploy a pod with a LoadBalancer service to verify that MetalLB assigns an external IP address.


Deploy Pod and LoadBalancer
#

# Run an example container
kubectl run my-container --image=jueklu/container-2 --port=8080 --restart=Never --labels app=testing

# Create a LoadBalancer service to expose the pod "my-container"
kubectl expose pod/my-container --port=8080 --target-port=8080 --type=LoadBalancer --name=my-container-service

Verify the Deployment
#

# List the pods
kubectl get pods

# Shell output
NAME           READY   STATUS    RESTARTS   AGE
my-container   1/1     Running   0          14s
# List LoadBalancer service details
kubectl get svc my-container-service

# Shell output
NAME                   TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)          AGE
my-container-service   LoadBalancer   10.107.195.9   192.168.30.200   8080:31492/TCP   23s

Access the Deployment
#

# Access the deployment from a browser
192.168.30.200:8080
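
Or test the LoadBalancer IP from the command line:

# Send a test request to the external IP
curl http://192.168.30.200:8080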

Delete the Deployment
#

# Delete the deployment
kubectl delete pod my-container

# Delete the LoadBalancer service
kubectl delete svc my-container-service

Nginx Ingress Controller
#

Add Helm Chart
#

# Add Helm chart
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx

# Update package index
helm repo update

Install Nginx Ingress
#

# Install the Nginx ingress controller 
helm install ingress-nginx ingress-nginx/ingress-nginx \
    --namespace ingress-nginx \
    --create-namespace
# Optional: Scale the Nginx Ingress deployment
kubectl scale deployment ingress-nginx-controller --replicas=3 -n ingress-nginx

Verify the Deployment
#

# List pods
kubectl get pods -n ingress-nginx

# Shell output:
NAME                                       READY   STATUS    RESTARTS   AGE
ingress-nginx-controller-f4d9f7b9d-jcx5x   1/1     Running   0          58s
ingress-nginx-controller-f4d9f7b9d-schg4   1/1     Running   0          58s
ingress-nginx-controller-f4d9f7b9d-v27j7   1/1     Running   0          66s

List the IngressClass:

# List IngressClass
kubectl get ingressclass

# Shell output:
NAME    CONTROLLER             PARAMETERS   AGE
nginx   k8s.io/ingress-nginx   <none>       54s

Test Deployment
#

Create an example deployment with an Ingress resource to verify that the Nginx Ingress Controller and TLS encryption are working.


Kubernetes TLS Certificate Secret
#

In this setup I’m using a Let’s Encrypt wildcard certificate.

# Create a Kubernetes secret for the TLS certificate
kubectl create secret tls k8s-kubeadm-test-tls --cert=./fullchain.pem --key=./privkey.pem
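
Verify that the secret was created and contains both the certificate and the key:

# Verify the TLS secret (the data section should list "tls.crt" and "tls.key")
kubectl describe secret k8s-kubeadm-test-tls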

Pod, ClusterIP Service, Ingress
#

# Create a manifest for the example deployment
vi test-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jueklu-container-2
spec:
  replicas: 3
  selector:
    matchLabels:
      app: jueklu-container-2
  template:
    metadata:
      labels:
        app: jueklu-container-2
    spec:
      containers:
      - name: jueklu-container-2
        image: jueklu/container-2
        ports:
        - containerPort: 8080

---
apiVersion: v1
kind: Service
metadata:
  name: jueklu-container-2
spec:
  type: ClusterIP
  ports:
    - port: 8080
      targetPort: 8080
  selector:
    app: jueklu-container-2

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jueklu-container-2-ingress
spec:
  ingressClassName: "nginx"
  tls:
  - hosts:
    - k8s-kubeadm-test.jklug.work
    secretName: k8s-kubeadm-test-tls
  rules:
  - host: k8s-kubeadm-test.jklug.work
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: jueklu-container-2
            port:
              number: 8080
# Deploy the manifest
kubectl apply -f test-deployment.yaml

Verify the Resources
#

Verify pods:

# List pods
kubectl get pods -o custom-columns=NAME:.metadata.name,STATUS:.status.phase,NAMESPACE:.metadata.namespace,NODE:.spec.nodeName

# Shell output:
NAME                                 STATUS    NAMESPACE   NODE
jueklu-container-2-7d9c7f5dc-ln2gm   Running   default     ubuntu3
jueklu-container-2-7d9c7f5dc-qwbt4   Running   default     ubuntu2
jueklu-container-2-7d9c7f5dc-tbcjz   Running   default     ubuntu3

Get the Ingress IP:

# List the ingress resources
kubectl get ingress

# Shell output: (It may take a few seconds until the Ingress gets an external IP)
NAME                         CLASS   HOSTS                         ADDRESS          PORTS     AGE
jueklu-container-2-ingress   nginx   k8s-kubeadm-test.jklug.work   192.168.30.200   80, 443   56s

List Ingress Details:

# List Ingress details:
kubectl get svc -n ingress-nginx

# Shell output:
NAME                                 TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.110.24.224   192.168.30.200   80:30079/TCP,443:31334/TCP   15m
ingress-nginx-controller-admission   ClusterIP      10.100.129.56   <none>           443/TCP                      15m

List Ingress Logs:

# List Ingress logs
kubectl logs -n ingress-nginx -l app.kubernetes.io/component=controller

Hosts Entry
#

# Add a hosts entry for the Ingress (e.g. in "/etc/hosts")
192.168.30.200 k8s-kubeadm-test.jklug.work

Access the Deployment
#

# Access the deployment with TLS encryption
https://k8s-kubeadm-test.jklug.work
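
Or verify the TLS handshake from the command line:

# Check the certificate served by the Ingress (requires the hosts entry from above)
curl -vI https://k8s-kubeadm-test.jklug.work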

Delete the Deployment
#

# Delete the deployment
kubectl delete -f test-deployment.yaml

# Delete the TLS secret
kubectl delete secret k8s-kubeadm-test-tls

Shutdown the Cluster
#

# Drain the worker nodes
kubectl drain ubuntu3 --ignore-daemonsets --delete-emptydir-data
kubectl drain ubuntu2 --ignore-daemonsets --delete-emptydir-data
kubectl drain ubuntu1 --ignore-daemonsets --delete-emptydir-data

# Shutdown the virtual machines
sudo shutdown
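
After powering the virtual machines back on, the drained nodes stay unschedulable until they are uncordoned:

# Make the nodes schedulable again after the restart
kubectl uncordon ubuntu1 ubuntu2 ubuntu3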

Links
#

# Configure cgroup driver
https://v1-26.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/