Overview #
The following deployment is based on Ubuntu 24.04 servers with the following specs:
- 2 cores, 4 GB RAM for the HA nodes
- 4 cores, 6 GB RAM for the Kubernetes nodes
192.168.30.10 ubuntu1 # HAproxy & Keepalived Node 1
192.168.30.11 ubuntu2 # HAproxy & Keepalived Node 2
192.168.30.12 ubuntu3 # Controller Node 1
192.168.30.13 ubuntu4 # Controller Node 2
192.168.30.14 ubuntu5 # Controller Node 3
192.168.30.15 ubuntu6 # Worker Node 1
192.168.30.16 ubuntu7 # Worker Node 2
192.168.30.9 # Floating IP for HA
In this tutorial I’m using a script to set up the Kubernetes nodes. For more details about the required setup, please refer to my previous post:
Load Balancer #
Overview #
Keepalived: Provides a virtual IP managed by a configurable health check.
HAproxy: Load balancer for the Controller nodes.
Install Packages #
# Install Keepalived, HAproxy & psmisc
sudo apt install keepalived haproxy psmisc -y
# Verify the installation / check version
haproxy -v
# Verify haproxy user and group exist
getent passwd haproxy
getent group haproxy
The psmisc package provides process management utilities such as killall, which is used by the Keepalived health check further below.
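The health check relies on killall -0, which only probes whether a process exists without sending it an actual signal. A quick manual test (shown here purely as an illustration) looks like this:
# Probe the HAproxy process without sending a real signal (exit code 0 = running)
/usr/bin/killall -0 haproxy
# Print the exit code of the last command
echo $?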
HAproxy #
Create Configuration #
Use the same configuration for both HAproxy & Keepalived nodes:
# Edit the configuration
sudo vi /etc/haproxy/haproxy.cfg
global
    log /dev/log local0 warning
    chroot /var/lib/haproxy
    pidfile /var/run/haproxy.pid
    maxconn 4000
    user haproxy
    group haproxy
    daemon
    stats socket /run/haproxy/admin.sock mode 660 level admin

defaults
    log global
    option httplog
    option dontlognull
    timeout connect 5000
    timeout client 50000
    timeout server 50000
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

frontend kube-apiserver
    bind *:6443
    mode tcp
    option tcplog
    default_backend kube-apiserver

backend kube-apiserver
    mode tcp
    option tcp-check
    balance roundrobin
    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
    server kube-apiserver-1 192.168.30.12:6443 check # Controller Node 1
    server kube-apiserver-2 192.168.30.13:6443 check # Controller Node 2
    server kube-apiserver-3 192.168.30.14:6443 check # Controller Node 3
Test the Configuration #
# Validate configuration
sudo haproxy -c -f /etc/haproxy/haproxy.cfg
Restart HAproxy #
# Restart HAproxy
sudo systemctl restart haproxy
# Enable HAproxy at boot (should already be enabled by default)
sudo systemctl enable haproxy
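Optionally, verify that HAproxy is listening on port 6443; note that the backend health checks will only succeed once the Controller nodes are running:
# Verify HAproxy is listening on port 6443
sudo ss -tlnp | grep ':6443'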
Keepalived #
Service Configuration #
# Edit the configuration
sudo vi /etc/keepalived/keepalived.conf
HAproxy & Keepalived Node 1:
global_defs {
  notification_email {
  }
  router_id LVS_DEVEL
  vrrp_skip_check_adv_addr
  vrrp_garp_interval 0
  vrrp_gna_interval 0
  script_security 1
  max_auto_priority 1000
}

vrrp_script chk_haproxy {
  script "/usr/bin/killall -0 haproxy" # Full path specified
  interval 2
  weight 2
}

vrrp_instance haproxy-vip {
  state BACKUP
  priority 100
  interface ens33 # Define the network interface
  virtual_router_id 60
  advert_int 1
  authentication {
    auth_type PASS
    auth_pass 1111
  }
  unicast_src_ip 192.168.30.10 # Current Keepalived Node
  unicast_peer {
    192.168.30.11 # Peer Keepalived Node
  }
  virtual_ipaddress {
    192.168.30.9/24 # Floating IP
  }
  track_script {
    chk_haproxy
  }
}
HAproxy & Keepalived Node 2:
global_defs {
  notification_email {
  }
  router_id LVS_DEVEL
  vrrp_skip_check_adv_addr
  vrrp_garp_interval 0
  vrrp_gna_interval 0
  script_security 1
  max_auto_priority 1000
}

vrrp_script chk_haproxy {
  script "/usr/bin/killall -0 haproxy" # Full path specified
  interval 2
  weight 2
}

vrrp_instance haproxy-vip {
  state BACKUP
  priority 100
  interface ens33 # Define the network interface
  virtual_router_id 60
  advert_int 1
  authentication {
    auth_type PASS
    auth_pass 1111
  }
  unicast_src_ip 192.168.30.11 # Current Keepalived Node
  unicast_peer {
    192.168.30.10 # Peer Keepalived Node
  }
  virtual_ipaddress {
    192.168.30.9/24 # Floating IP
  }
  track_script {
    chk_haproxy
  }
}
Restart Service #
# Restart Keepalived
sudo systemctl restart keepalived
# Enable Keepalived at boot (should already be enabled by default)
sudo systemctl enable keepalived
# Check the status
systemctl status keepalived
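To see which node currently holds the floating IP, and to test the failover, the following commands can be used (assuming the interface is ens33 as defined in the configuration):
# Check whether the floating IP is assigned to the interface (run on both HA nodes)
ip addr show ens33 | grep 192.168.30.9

# Optional failover test: stop HAproxy on the active node, the VIP should move to the peer node
sudo systemctl stop haproxy
# Start HAproxy again afterwards
sudo systemctl start haproxy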
Initialize the Cluster #
Prerequisites #
Use the following script to prepare all the Controller and Worker nodes. It sets up the prerequisites and installs the Containerd runtime as well as Kubeadm, Kubelet and Kubectl. Kubectl can be omitted when preparing the Worker nodes.
### Prerequisites ###
# Install dependencies
sudo apt update && sudo apt upgrade -y
sudo apt install apt-transport-https ca-certificates curl -y
# Enable IPv4 forwarding and let iptables see bridged traffic
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
# Apply settings
sudo sysctl --system
# Disable Swap
sudo sed -i '/[ \t]swap[ \t]/ s/^\(.*\)$/#\1/g' /etc/fstab
sudo swapoff -a
# Load the kernel modules at boot
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
# Load the kernel modules
sudo modprobe overlay && sudo modprobe br_netfilter
### Containerd Runtime ###
# Download the Docker GPG Key / save to file
curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o docker.gpg
# Add the Key to the Trusted Keyring
sudo gpg --no-default-keyring --keyring /usr/share/keyrings/docker-archive-keyring.gpg --import docker.gpg
sudo gpg --no-default-keyring --keyring /usr/share/keyrings/docker-archive-keyring.gpg --export --output /etc/apt/trusted.gpg.d/docker-archive-keyring.gpg
# Set up the stable Docker repository
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/trusted.gpg.d/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# Install the Containerd package
sudo apt-get update && sudo apt-get install -y containerd.io
# Create configuration directory
sudo mkdir -p /etc/containerd
# Generate and save the default configuration
containerd config default | sudo tee /etc/containerd/config.toml
# Set "SystemdCgroup" to "true"
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
# Restart Containerd service
sudo systemctl restart containerd
# Enable Containerd at boot (should already be enabled by default)
sudo systemctl enable containerd
### Kubeadm, Kubelet & Kubectl ###
# Download the Kubernetes GPG Key / save to file
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.26/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
# Add the Kubernetes v1.26 repository (latest patch version at the time of writing: 1.26.15)
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.26/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
# Install kubelet & kubeadm
sudo apt update &&
sudo apt install -y kubelet kubeadm kubectl
# Stop automatic upgrades for the packages
sudo apt-mark hold kubelet kubeadm kubectl
# Start & enable kubelet
sudo systemctl enable --now kubelet
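After the script has finished, a quick sanity check of the installed components might look like this:
# Verify the installed versions
kubeadm version -o short
kubelet --version
kubectl version --client

# Verify the Containerd service is running
systemctl status containerd --no-pager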
Create Kubeadm Configuration #
# Create a configuration file for the cluster initialization
vi kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: "stable"
controlPlaneEndpoint: "192.168.30.9:6443"
networking:
  podSubnet: "10.0.0.0/16"
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
Initialize First Controller Node #
# Pull the images
kubeadm config images pull
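# Optional: Preview the cluster initialization with a dry run (makes no changes to the node)
sudo kubeadm init --config kubeadm-config.yaml --dry-run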
# Initialize the cluster with the first Controller node
sudo kubeadm init --config kubeadm-config.yaml --upload-certs
# Shell output:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of the control-plane node running the following command on each as root:
kubeadm join 192.168.30.9:6443 --token ad7fwg.tvzm3p8y4iaxdag5 \
--discovery-token-ca-cert-hash sha256:dcf7293478e21246b4b0f6c7f51e6780badb0e41a98be4b9453403f4e2ef4a48 \
--control-plane --certificate-key 79a7db921a797bbe02bd9bbbc5c6df45763de69178b5dbc872c6e8e17757cdbc
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.30.9:6443 --token ad7fwg.tvzm3p8y4iaxdag5 \
--discovery-token-ca-cert-hash sha256:dcf7293478e21246b4b0f6c7f51e6780badb0e41a98be4b9453403f4e2ef4a48
Kubectl Configuration #
Root user: Permanent
# Add kubeconfig path environment variable
echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> ~/.bashrc
# Apply changes
source ~/.bashrc
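Optionally, enable kubectl shell completion for Bash; this is not required for the cluster setup, just a convenience:
# Optional: Enable kubectl Bash completion
sudo apt install bash-completion -y
echo 'source <(kubectl completion bash)' >> ~/.bashrc
source ~/.bashrc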
Install Pod Network Add-On #
Download the binaries:
# Download the Cilium binaries
curl -LO https://github.com/cilium/cilium-cli/releases/latest/download/cilium-linux-amd64.tar.gz
# Extract the binary into the "/usr/local/bin" directory
sudo tar xzvfC cilium-linux-amd64.tar.gz /usr/local/bin
# Remove the archive
rm cilium-linux-amd64.tar.gz
Install Cilium:
# Install Cilium
cilium install
# Shell output:
ℹ️ Using Cilium version 1.15.5
🔮 Auto-detected cluster name: kubernetes
🔮 Auto-detected kube-proxy has been installed
Check the Status:
# Verify status
cilium status
# Shell output:
/¯¯\
/¯¯\__/¯¯\ Cilium: OK
\__/¯¯\__/ Operator: OK
/¯¯\__/¯¯\ Envoy DaemonSet: disabled (using embedded mode)
\__/¯¯\__/ Hubble Relay: disabled
\__/ ClusterMesh: disabled
Deployment cilium-operator Desired: 1, Ready: 1/1, Available: 1/1
DaemonSet cilium Desired: 1, Ready: 1/1, Available: 1/1
Containers: cilium Running: 1
cilium-operator Running: 1
Cluster Pods: 0/2 managed by Cilium
Helm chart version:
Image versions cilium quay.io/cilium/cilium:v1.15.5@sha256:4ce1666a73815101ec9a4d360af6c5b7f1193ab00d89b7124f8505dee147ca40: 1
cilium-operator quay.io/cilium/operator-generic:v1.15.5@sha256:f5d3d19754074ca052be6aac5d1ffb1de1eb5f2d947222b5f10f6d97ad4383e8: 1
Verify the Kubelet #
Verify the pods are up and running:
# List the Kubelet pods
kubectl get pods -n kube-system
# Shell output:
NAME READY STATUS RESTARTS AGE
cilium-52vgf 1/1 Running 0 94s
cilium-operator-fdf6bc9f4-6jwc7 1/1 Running 0 94s
coredns-787d4945fb-9vjn4 1/1 Running 0 4m9s
coredns-787d4945fb-s4jd7 1/1 Running 0 4m9s
etcd-ubuntu3 1/1 Running 0 4m8s
kube-apiserver-ubuntu3 1/1 Running 0 4m8s
kube-controller-manager-ubuntu3 1/1 Running 0 4m8s
kube-proxy-2v4ss 1/1 Running 0 4m8s
kube-scheduler-ubuntu3 1/1 Running 0 4m8s
# Check the Kubelet status
sudo systemctl status kubelet
# List Kubelet logs
sudo journalctl -u kubelet
Verify the Cluster #
# List the nodes
kubectl get nodes -o wide
# Shell output:
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
ubuntu3 Ready control-plane 5m6s v1.26.15 192.168.30.12 <none> Ubuntu 24.04 LTS 6.8.0-35-generic containerd://1.6.33
# List cluster info
kubectl cluster-info
# Shell output:
Kubernetes control plane is running at https://192.168.30.9:6443
CoreDNS is running at https://192.168.30.9:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Add Controller Nodes #
Add the other Controller Nodes #
Kubeadm automatically sets up a stacked etcd cluster, where etcd runs as a set of pods, one on each Controller node.
# Add the other two Controller nodes
sudo kubeadm join 192.168.30.9:6443 --token ad7fwg.tvzm3p8y4iaxdag5 \
--discovery-token-ca-cert-hash sha256:dcf7293478e21246b4b0f6c7f51e6780badb0e41a98be4b9453403f4e2ef4a48 \
--control-plane --certificate-key 79a7db921a797bbe02bd9bbbc5c6df45763de69178b5dbc872c6e8e17757cdbc
# Shell output:
This node has joined the cluster and a new control plane instance was created:
* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.
To start administering your cluster from this node, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Run 'kubectl get nodes' to see this node join the cluster.
Verify the Cluster #
# List the nodes
kubectl get nodes -o wide
# Shell output:
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
ubuntu3 Ready control-plane 18m v1.26.15 192.168.30.12 <none> Ubuntu 24.04 LTS 6.8.0-35-generic containerd://1.6.33
ubuntu4 Ready control-plane 5m27s v1.26.15 192.168.30.13 <none> Ubuntu 24.04 LTS 6.8.0-35-generic containerd://1.6.33
ubuntu5 Ready control-plane 88s v1.26.15 192.168.30.14 <none> Ubuntu 24.04 LTS 6.8.0-35-generic containerd://1.6.33
Verify Etcd #
# List pods in "kube-system" namespace
kubectl get pods -n kube-system
# Shell output:
NAME READY STATUS RESTARTS AGE
cilium-52vgf 1/1 Running 0 16m
cilium-d5c66 1/1 Running 0 5m41s
cilium-m27s8 1/1 Running 0 102s
cilium-operator-fdf6bc9f4-6jwc7 1/1 Running 0 16m
coredns-787d4945fb-9vjn4 1/1 Running 0 18m
coredns-787d4945fb-s4jd7 1/1 Running 0 18m
etcd-ubuntu3 1/1 Running 0 18m # Etcd Node 1
etcd-ubuntu4 1/1 Running 0 5m30s # Etcd Node 2
etcd-ubuntu5 1/1 Running 0 92s # Etcd Node 3
kube-apiserver-ubuntu3 1/1 Running 0 18m
kube-apiserver-ubuntu4 1/1 Running 0 5m26s
kube-apiserver-ubuntu5 1/1 Running 0 91s
kube-controller-manager-ubuntu3 1/1 Running 0 18m
kube-controller-manager-ubuntu4 1/1 Running 0 4m25s
kube-controller-manager-ubuntu5 1/1 Running 0 95s
kube-proxy-2v4ss 1/1 Running 0 18m
kube-proxy-vjqzv 1/1 Running 0 102s
kube-proxy-xrv48 1/1 Running 0 5m41s
kube-scheduler-ubuntu3 1/1 Running 0 18m
kube-scheduler-ubuntu4 1/1 Running 0 5m31s
kube-scheduler-ubuntu5 1/1 Running 0 96s
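To verify the etcd cluster membership directly, etcdctl can be run inside one of the etcd pods; the certificate paths below are the kubeadm defaults:
# List the etcd cluster members from inside an etcd pod
kubectl -n kube-system exec etcd-ubuntu3 -- etcdctl \
  --endpoints https://127.0.0.1:2379 \
  --cacert /etc/kubernetes/pki/etcd/ca.crt \
  --cert /etc/kubernetes/pki/etcd/server.crt \
  --key /etc/kubernetes/pki/etcd/server.key \
  member list -w table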
Add Worker Nodes #
Join the Worker Nodes #
# Join the Worker nodes
sudo kubeadm join 192.168.30.9:6443 --token ad7fwg.tvzm3p8y4iaxdag5 \
--discovery-token-ca-cert-hash sha256:dcf7293478e21246b4b0f6c7f51e6780badb0e41a98be4b9453403f4e2ef4a48
# Shell output:
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Label Worker Nodes #
# Label the worker nodes
kubectl label nodes ubuntu6 kubernetes.io/role=worker &&
kubectl label nodes ubuntu7 kubernetes.io/role=worker
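The labels can be verified with the -L flag, which adds the label value as an extra column:
# Verify the node labels
kubectl get nodes -L kubernetes.io/role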
Verify the Cluster #
# List the nodes
kubectl get nodes -o wide
# Shell output:
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
ubuntu3 Ready control-plane 29m v1.26.15 192.168.30.12 <none> Ubuntu 24.04 LTS 6.8.0-35-generic containerd://1.6.33
ubuntu4 Ready control-plane 15m v1.26.15 192.168.30.13 <none> Ubuntu 24.04 LTS 6.8.0-35-generic containerd://1.6.33
ubuntu5 Ready control-plane 11m v1.26.15 192.168.30.14 <none> Ubuntu 24.04 LTS 6.8.0-35-generic containerd://1.6.33
ubuntu6 Ready worker 3m18s v1.26.15 192.168.30.15 <none> Ubuntu 24.04 LTS 6.8.0-35-generic containerd://1.6.33
ubuntu7 Ready worker 85s v1.26.15 192.168.30.16 <none> Ubuntu 24.04 LTS 6.8.0-35-generic containerd://1.6.33
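Optionally, now that the Worker nodes are available, the Cilium CLI can run a connectivity test that deploys temporary test workloads; this takes a few minutes and creates its own test namespace (cilium-test by default):
# Optional: Run the Cilium connectivity test
cilium connectivity test

# Remove the test namespace afterwards
kubectl delete namespace cilium-test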
Install Helm #
# Install Helm with script
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 &&
chmod +x get_helm.sh &&
./get_helm.sh
# Verify the installation / check version
helm version
MetalLB #
Add Helm Repository #
# Add the MetalLB repository
helm repo add metallb https://metallb.github.io/metallb
# Update index
helm repo update
# Optional: Save & adapt the MetalLB chart values
helm show values metallb/metallb > metallb-values.yaml
Install MetalLB #
# Install MetalLB
helm install --create-namespace --namespace metallb-system metallb metallb/metallb
# Shell output:
NAME: metallb
LAST DEPLOYED: Sat Jun 22 10:12:15 2024
NAMESPACE: metallb-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
MetalLB is now running in the cluster.
Now you can configure it via its CRs. Please refer to the metallb official docs
on how to use the CRs.
# Verify the resources: Wait until all pods are up and running
kubectl get pods -n metallb-system
# Shell output:
metallb-controller-f7cff5b89-vkwgm 1/1 Running 0 3m49s
metallb-speaker-689zj 4/4 Running 0 3m49s
metallb-speaker-hkqn8 4/4 Running 0 3m49s
metallb-speaker-kl75t 4/4 Running 0 3m49s
metallb-speaker-kqpj9 4/4 Running 0 3m49s
metallb-speaker-sfpt5 4/4 Running 0 3m49s
MetalLB Configuration #
# Create a configuration for MetalLB
vi metallb-configuration.yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: main-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.30.200-192.168.30.254
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: main-advertisement
  namespace: metallb-system
spec:
  ipAddressPools:
  - main-pool
# Deploy the MetalLB configuration
kubectl apply -f metallb-configuration.yaml
Verify the Configuration #
# Verify the MetalLB IP pools
kubectl get IPAddressPool -n metallb-system
# Shell output:
NAME AUTO ASSIGN AVOID BUGGY IPS ADDRESSES
main-pool true false ["192.168.30.200-192.168.30.254"]
# Verify the L2Advertisement
kubectl get L2Advertisement -n metallb-system
# Shell output:
NAME IPADDRESSPOOLS IPADDRESSPOOL SELECTORS INTERFACES
main-advertisement ["main-pool"]
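To make sure MetalLB actually hands out addresses from the pool, a throwaway LoadBalancer service can be created; the name nginx-test is just an example:
# Optional: Create a test deployment and expose it via a LoadBalancer service
kubectl create deployment nginx-test --image=nginx
kubectl expose deployment nginx-test --type=LoadBalancer --port=80

# The service should get an external IP from the MetalLB pool
kubectl get svc nginx-test

# Clean up the test resources
kubectl delete svc nginx-test
kubectl delete deployment nginx-test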
Nginx Ingress Controller #
Add Helm Chart #
# Add Helm chart
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
# Update package index
helm repo update
Install Nginx Ingress #
# Install the Nginx ingress controller
helm install ingress-nginx ingress-nginx/ingress-nginx \
--namespace ingress-nginx \
--create-namespace \
--set controller.replicaCount=3
# Optional: Scale the Nginx Ingress deployment
kubectl scale deployment ingress-nginx-controller --replicas=3 -n ingress-nginx
Verify the Deployment #
# List pods
kubectl get pods -n ingress-nginx
# Shell output:
NAME READY STATUS RESTARTS AGE
ingress-nginx-controller-f4d9f7b9d-c4nvz 1/1 Running 0 38s
ingress-nginx-controller-f4d9f7b9d-c9rx6 1/1 Running 0 38s
ingress-nginx-controller-f4d9f7b9d-l9thn 1/1 Running 0 38s
List the IngressClass:
# List IngressClass
kubectl get ingressclass
# Shell output:
NAME CONTROLLER PARAMETERS AGE
nginx k8s.io/ingress-nginx <none> 51s
Test Deployment #
Create an example deployment with an Ingress resource to verify that the Nginx Ingress Controller and TLS encryption are working.
Kubernetes TLS Certificate Secret #
In this setup I’m using a Let’s Encrypt wildcard certificate.
# Create a Kubernetes secret for the TLS certificate
kubectl create secret tls k8s-kubeadm-test-tls --cert=./fullchain.pem --key=./privkey.pem
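If no Let's Encrypt certificate is at hand, a self-signed certificate can be used for testing instead; the file names match the ones used in the command above, and the browser will show a warning for the untrusted certificate:
# Optional: Create a self-signed certificate for testing
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout privkey.pem -out fullchain.pem \
  -subj "/CN=k8s-kubeadm-test.jklug.work" \
  -addext "subjectAltName=DNS:k8s-kubeadm-test.jklug.work"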
Pod, ClusterIP Service, Ingress #
# Create a manifest for the example deployment
vi test-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jueklu-container-2
spec:
  replicas: 5
  selector:
    matchLabels:
      app: jueklu-container-2
  template:
    metadata:
      labels:
        app: jueklu-container-2
    spec:
      containers:
      - name: jueklu-container-2
        image: jueklu/container-2
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: jueklu-container-2
spec:
  type: ClusterIP
  ports:
  - port: 8080
    targetPort: 8080
  selector:
    app: jueklu-container-2
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jueklu-container-2-ingress
spec:
  ingressClassName: "nginx"
  tls:
  - hosts:
    - k8s-kubeadm-test.jklug.work
    secretName: k8s-kubeadm-test-tls
  rules:
  - host: k8s-kubeadm-test.jklug.work
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: jueklu-container-2
            port:
              number: 8080
# Deploy the manifest
kubectl apply -f test-deployment.yaml
Verify the Resources #
Verify pods:
# List pods
kubectl get pods -o custom-columns=NAME:.metadata.name,STATUS:.status.phase,NAMESPACE:.metadata.namespace,NODE:.spec.nodeName
# Shell output:
NAME STATUS NAMESPACE NODE
jueklu-container-2-7d9c7f5dc-2j5pz Running default ubuntu6
jueklu-container-2-7d9c7f5dc-glv7k Running default ubuntu6
jueklu-container-2-7d9c7f5dc-jdfp9 Running default ubuntu7
jueklu-container-2-7d9c7f5dc-krksn Running default ubuntu7
jueklu-container-2-7d9c7f5dc-pkwbp Running default ubuntu6
Get the Ingress IP:
# List the ingress resources
kubectl get ingress
# Shell output: (It may take a few seconds until the Ingress gets an external IP)
NAME CLASS HOSTS ADDRESS PORTS AGE
jueklu-container-2-ingress nginx k8s-kubeadm-test.jklug.work 192.168.30.200 80, 443 56s
List the Ingress Controller services:
# List the Ingress Controller services
kubectl get svc -n ingress-nginx
# Shell output:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.109.216.101 192.168.30.200 80:30359/TCP,443:31640/TCP 3m22s
ingress-nginx-controller-admission ClusterIP 10.108.166.235 <none> 443/TCP 3m22s
List Ingress Logs:
# List Ingress logs
kubectl logs -n ingress-nginx -l app.kubernetes.io/component=controller
Hosts Entry #
# Add a hosts entry for the Ingress on the client machine (/etc/hosts)
192.168.30.200 k8s-kubeadm-test.jklug.work
Access the Deployment #
# Access the deployment with TLS encryption
https://k8s-kubeadm-test.jklug.work
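Alternatively, the deployment can be tested from a terminal without a hosts entry by letting curl resolve the hostname explicitly:
# Test the deployment with curl (resolve the hostname to the Ingress IP)
curl https://k8s-kubeadm-test.jklug.work --resolve k8s-kubeadm-test.jklug.work:443:192.168.30.200

# Add -k when using a self-signed certificate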
Delete the Deployment #
# Delete the deployment
kubectl delete -f test-deployment.yaml
# Delete the TLS secret
kubectl delete secret k8s-kubeadm-test-tls
Shutdown the Cluster #
# Drain the worker nodes
kubectl drain ubuntu7 --ignore-daemonsets --delete-emptydir-data
kubectl drain ubuntu6 --ignore-daemonsets --delete-emptydir-data
kubectl drain ubuntu5 --ignore-daemonsets --delete-emptydir-data
kubectl drain ubuntu4 --ignore-daemonsets --delete-emptydir-data
kubectl drain ubuntu3 --ignore-daemonsets --delete-emptydir-data
# Shutdown the virtual machines
sudo shutdown
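The drained nodes stay cordoned across reboots; after the cluster is back up, make them schedulable again:
# Uncordon the nodes after the cluster is running again
kubectl uncordon ubuntu3 ubuntu4 ubuntu5 ubuntu6 ubuntu7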
Links #
# Load Balancing
https://github.com/kubernetes/kubeadm/blob/main/docs/ha-considerations.md#options-for-software-load-balancing
# Kubeadm High Availability
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/
# Network Add-ons
https://kubernetes.io/docs/concepts/cluster-administration/addons/
# Configure cgroup driver
https://v1-26.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/