K8s Kubeadm - High Availability Kubernetes Cluster Deployment with HAproxy and Keepalived, External etcd Cluster

Kubernetes-Cluster - This article is part of a series.
Part 12: This Article

Overview
#

I’m using the following VMs based on Ubuntu 24.04 servers:

192.168.30.10 ubuntu1 # HAproxy & Keepalived Node 1
192.168.30.11 ubuntu2 # HAproxy & Keepalived Node 2
192.168.30.12 ubuntu3 # Controller Node 1
192.168.30.13 ubuntu4 # Controller Node 2
192.168.30.14 ubuntu5 # Controller Node 3
192.168.30.15 ubuntu6 # Etcd Node 1
192.168.30.16 ubuntu7 # Etcd Node 2
192.168.30.17 ubuntu8 # Etcd Node 3
192.168.30.18 ubuntu9 # Worker Node 1
192.168.30.19 ubuntu10 # Worker Node 2

192.168.30.9 # Floating IP for HA

In this tutorial I’m using a script to set up the Kubernetes nodes. For more details about the required setup and the deployment of MetalLB and the Nginx Ingress controller, please refer to my previous post:

K8s Kubeadm - Basic Kubernetes Cluster Deployment with one Controller and two Worker Nodes, Containerd and Kubeadmin Cgroup Driver Configuration, Cilium Network Add-On, MetalLB & Nginx Ingress Controller, Test-Deployment with TLS Encryption



Load Balancer
#

Overview
#

Keepalived: Provides a floating virtual IP that fails over between the two load balancer nodes, managed by a configurable health check.

HAproxy: Load balances the Kubernetes API server traffic (port 6443) across the three Controller nodes.


Install Packages
#

# Install Keepalived, HAproxy & psmisc
sudo apt install keepalived haproxy psmisc -y
# Verify the installation / check version
haproxy -v

# Verify haproxy user and group exist
getent passwd haproxy
getent group haproxy

The psmisc package provides process management utilities such as killall, which the Keepalived health check below uses to verify that HAproxy is running.
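
The killall -0 command used in that health check does not send an actual signal; it only tests whether a haproxy process exists, and the resulting exit code is what Keepalived evaluates. A quick manual check, assuming HAproxy is already running:

# Exit code 0 means a haproxy process exists, a non-zero code means it does not
/usr/bin/killall -0 haproxy; echo $?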


HAproxy
#

Create Configuration
#

Use the same configuration for both HAproxy & Keepalived nodes:

# Edit the configuration
sudo vi /etc/haproxy/haproxy.cfg
global
    log /dev/log  local0 warning
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

    stats socket /run/haproxy/admin.sock mode 660 level admin

defaults
  log     global
  option  httplog
  option  dontlognull
  timeout connect 5000
  timeout client  50000
  timeout server  50000
  errorfile 400 /etc/haproxy/errors/400.http
  errorfile 403 /etc/haproxy/errors/403.http
  errorfile 408 /etc/haproxy/errors/408.http
  errorfile 500 /etc/haproxy/errors/500.http
  errorfile 502 /etc/haproxy/errors/502.http
  errorfile 503 /etc/haproxy/errors/503.http
  errorfile 504 /etc/haproxy/errors/504.http

frontend kube-apiserver
  bind *:6443
  mode tcp
  option tcplog
  default_backend kube-apiserver

backend kube-apiserver
    mode tcp
    option tcp-check
    balance roundrobin
    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
    server kube-apiserver-1 192.168.30.12:6443 check # Controller Node 1
    server kube-apiserver-2 192.168.30.13:6443 check # Controller Node 2
    server kube-apiserver-3 192.168.30.14:6443 check # Controller Node 3

Test the Configuration
#

# Validate configuration
sudo haproxy -c -f /etc/haproxy/haproxy.cfg

Restart HAproxy
#

# Restart HAproxy
sudo systemctl restart haproxy

# Enable HAproxy at boot (should be enabled by default)
sudo systemctl enable haproxy
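
Optionally, the admin socket defined in the global section can be queried to check the state of the kube-apiserver backend servers. This is just a quick sketch and assumes the socat package is installed (it is not part of the packages installed above); the Controller nodes will show as DOWN until the Kubernetes API server is deployed later in this tutorial:

# Install socat to talk to the HAproxy admin socket
sudo apt install socat -y

# List the frontend/backend names, server names and their status
echo "show stat" | sudo socat stdio /run/haproxy/admin.sock | cut -d ',' -f 1,2,18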

Keepalived
#

Service Configuration
#

# Edit the configuration
sudo vi /etc/keepalived/keepalived.conf

HAproxy & Keepalived Node 1:

global_defs {
  notification_email {
  }
  router_id LVS_DEVEL
  vrrp_skip_check_adv_addr
  vrrp_garp_interval 0
  vrrp_gna_interval 0
  script_security 1
  max_auto_priority 1000
}

vrrp_script chk_haproxy {
  script "/usr/bin/killall -0 haproxy" # Full path specified
  interval 2
  weight 2
}

vrrp_instance haproxy-vip {
  state BACKUP
  priority 100
  interface ens33 # Define the network interface
  virtual_router_id 60
  advert_int 1
  authentication {
    auth_type PASS
    auth_pass 1111
  }
  unicast_src_ip 192.168.30.10 # Current Keepalived Node
  unicast_peer {
    192.168.30.11 # Peer Keepalived Node
  }

  virtual_ipaddress {
    192.168.30.9/24 # Floating IP
  }

  track_script {
    chk_haproxy
  }
}

HAproxy & Keepalived Node 2:

global_defs {
  notification_email {
  }
  router_id LVS_DEVEL
  vrrp_skip_check_adv_addr
  vrrp_garp_interval 0
  vrrp_gna_interval 0
  script_security 1
  max_auto_priority 1000
}

vrrp_script chk_haproxy {
  script "/usr/bin/killall -0 haproxy" # Full path specified
  interval 2
  weight 2
}

vrrp_instance haproxy-vip {
  state BACKUP
  priority 100
  interface ens33 # Define the network interface
  virtual_router_id 60
  advert_int 1
  authentication {
    auth_type PASS
    auth_pass 1111
  }
  unicast_src_ip 192.168.30.11 # Current Keepalived Node
  unicast_peer {
    192.168.30.10 # Peer Keepalived Node
  }

  virtual_ipaddress {
    192.168.30.9/24 # Floating IP
  }

  track_script {
    chk_haproxy
  }
}

Restart Service
#

# Restart Keepalived
sudo systemctl restart keepalived

# Enable Keepalived at boot (should be enabled by default)
sudo systemctl enable keepalived

# Check the status
systemctl status keepalived
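
To confirm which of the two nodes currently holds the floating IP, check the configured interface (ens33 in this example). Stopping HAproxy on the active node should move the virtual IP to the peer within a few seconds:

# Check whether the floating IP is assigned to this node
ip addr show ens33 | grep 192.168.30.9

# Optional failover test: stop HAproxy on the active node, verify the VIP moved to the peer, then start HAproxy again
sudo systemctl stop haproxy
sudo systemctl start haproxy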



Kubernetes Nodes
#

Prerequisites
#

The following scripts set up the requirements for the Kubernetes nodes and install the Containerd runtime as well as Kubeadm, Kubelet and Kubectl. Kubectl is optional for the etcd nodes and not necessary for the Worker nodes, so the second script omits it.

Install Kubeadm, Kubelet & Kubectl
# Create a file for the script
vi kubernetes-setup.sh
### Prerequisites ###
# Install dependencies
sudo apt update && sudo apt upgrade -y
sudo apt install apt-transport-https ca-certificates curl -y

# Enable IPv4 forwarding between network interfaces
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

# Apply settings
sudo sysctl --system

# Disable Swap
sudo sed -i '/[ \t]swap[ \t]/ s/^\(.*\)$/#\1/g' /etc/fstab
sudo swapoff -a

# Load the kernel modules at boot
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

# Load the kernel modules
sudo modprobe overlay && sudo modprobe br_netfilter


### Containerd Runtime ###
# Download the Docker GPG Key / save to file
curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o docker.gpg

# Add the Key to the Trusted Keyring
sudo gpg --no-default-keyring --keyring /usr/share/keyrings/docker-archive-keyring.gpg --import docker.gpg
sudo gpg --no-default-keyring --keyring /usr/share/keyrings/docker-archive-keyring.gpg --export --output /etc/apt/trusted.gpg.d/docker-archive-keyring.gpg

# Set up the stable Docker repository
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/trusted.gpg.d/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

# Install the Containerd package
sudo apt-get update && sudo apt-get install -y containerd.io

# Create configuration directory
sudo mkdir -p /etc/containerd

# Generate and save the default configuration
containerd config default | sudo tee /etc/containerd/config.toml

# Set "SystemdCgroup" to "true"
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml

# Restart Containerd service
sudo systemctl restart containerd

# Enable containerd at boot (should be enabled by default)
sudo systemctl enable containerd


### Kubeadm, Kubelet & Kubectl ###
# Download the Kubernetes GPG Key / save to file
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.26/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

# Add the Kubernetes v1.26 package repository (latest patch release: 1.26.15)
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.26/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

# Install kubelet & kubeadm
sudo apt update &&
sudo apt install -y kubelet kubeadm kubectl

# Stop automatic upgrades for the packages
sudo apt-mark hold kubelet kubeadm kubectl

# Start & enable kubelet
sudo systemctl enable --now kubelet
# Run the script
chmod +x kubernetes-setup.sh &&
./kubernetes-setup.sh
Install Kubeadm & Kubelet
# Create a file for the script
vi kubernetes-setup.sh
### Prerequisites ###
# Install dependencies
sudo apt update && sudo apt upgrade -y
sudo apt install apt-transport-https ca-certificates curl -y

# Enable IPv4 forwarding between network interfaces
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

# Apply settings
sudo sysctl --system

# Disable Swap
sudo sed -i '/[ \t]swap[ \t]/ s/^\(.*\)$/#\1/g' /etc/fstab
sudo swapoff -a

# Load the kernel modules at boot
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

# Load the kernel modules
sudo modprobe overlay && sudo modprobe br_netfilter


### Containerd Runtime ###
# Download the Docker GPG Key / save to file
curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o docker.gpg

# Add the Key to the Trusted Keyring
sudo gpg --no-default-keyring --keyring /usr/share/keyrings/docker-archive-keyring.gpg --import docker.gpg
sudo gpg --no-default-keyring --keyring /usr/share/keyrings/docker-archive-keyring.gpg --export --output /etc/apt/trusted.gpg.d/docker-archive-keyring.gpg

# Set up the stable Docker repository
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/trusted.gpg.d/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

# Install the Containerd package
sudo apt-get update && sudo apt-get install -y containerd.io

# Create configuration directory
sudo mkdir -p /etc/containerd

# Generate and save the default configuration
containerd config default | sudo tee /etc/containerd/config.toml

# Set "SystemdCgroup" to "true"
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml

# Restart Containerd service
sudo systemctl restart containerd

# Enable containerd at boot (should be enabled by default)
sudo systemctl enable containerd


### Kubeadm & Kubelet ###
# Download the Kubernetes GPG Key / save to file
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.26/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

# Add the Kubernetes v1.26 package repository (latest patch release: 1.26.15)
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.26/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

# Install kubelet & kubeadm
sudo apt update &&
sudo apt install -y kubelet kubeadm

# Stop automatic upgrades for the packages
sudo apt-mark hold kubelet kubeadm

# Start & enable kubelet
sudo systemctl enable --now kubelet
# Run the script
chmod +x kubernetes-setup.sh &&
./kubernetes-setup.sh
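
After the script has finished on a node, the result can be verified, for example:

# Verify the installed versions
kubeadm version -o short
kubelet --version
containerd --version

# Verify that the required kernel modules are loaded
lsmod | grep -E 'overlay|br_netfilter'

# Verify that swap is disabled (the Swap line should show 0B)
free -h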



Etcd Nodes
#

Kubelet Configuration
#

Create Configuration
#

Note: The path in the official documentation /etc/systemd/system/kubelet.service.d/20-etcd-service-manager.conf is deprecated. This is the correct path: /usr/lib/systemd/system/kubelet.service.d/20-etcd-service-manager.conf


Create a new systemd service unit file that has higher precedence than the kubeadm-provided kubelet unit file:

cat << EOF | sudo tee /usr/lib/systemd/system/kubelet.service.d/20-etcd-service-manager.conf > /dev/null
[Service]
ExecStart=
ExecStart=/usr/bin/kubelet --address=127.0.0.1 --pod-manifest-path=/etc/kubernetes/manifests --cgroup-driver=systemd --runtime-request-timeout=15m --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock
Restart=always
EOF
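
To confirm that the new drop-in is picked up, systemctl cat lists the kubelet unit file together with all drop-ins applied to it:

# Show the kubelet unit file and its drop-ins
systemctl cat kubelet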

Restart Systemd Service Unit
#

# Reload service unit configurations & restart the Kubelet service
sudo systemctl daemon-reload &&
sudo systemctl restart kubelet

Check Kubelet Status & Logs
#

# Check the Kubelet status
sudo systemctl status kubelet

# Check the Kubelet logs
journalctl -u kubelet

Kubeadm Configuration Files
#

This script creates a kubeadm configuration file for every etcd node:

# Create a file for the Kubeadm configuration script
vi kubeadm-config.sh
# Update HOST0, HOST1 and HOST2 with the IPs of your hosts
export HOST0=192.168.30.15
export HOST1=192.168.30.16
export HOST2=192.168.30.17

# Update NAME0, NAME1 and NAME2 with the hostnames of your hosts
export NAME0="ubuntu6"
export NAME1="ubuntu7"
export NAME2="ubuntu8"

# Create temp directories to store files that will end up on other hosts
mkdir -p /tmp/${HOST0}/ /tmp/${HOST1}/ /tmp/${HOST2}/

HOSTS=(${HOST0} ${HOST1} ${HOST2})
NAMES=(${NAME0} ${NAME1} ${NAME2})

for i in "${!HOSTS[@]}"; do
HOST=${HOSTS[$i]}
NAME=${NAMES[$i]}
cat << EOF > /tmp/${HOST}/kubeadmcfg.yaml
---
apiVersion: "kubeadm.k8s.io/v1beta3"
kind: InitConfiguration
nodeRegistration:
    name: ${NAME}
localAPIEndpoint:
    advertiseAddress: ${HOST}
---
apiVersion: "kubeadm.k8s.io/v1beta3"
kind: ClusterConfiguration
etcd:
    local:
        serverCertSANs:
        - "${HOST}"
        peerCertSANs:
        - "${HOST}"
        extraArgs:
            initial-cluster: ${NAMES[0]}=https://${HOSTS[0]}:2380,${NAMES[1]}=https://${HOSTS[1]}:2380,${NAMES[2]}=https://${HOSTS[2]}:2380
            initial-cluster-state: new
            name: ${NAME}
            listen-peer-urls: https://${HOST}:2380
            listen-client-urls: https://${HOST}:2379
            advertise-client-urls: https://${HOST}:2379
            initial-advertise-peer-urls: https://${HOST}:2380
EOF
done
# Run the script
chmod +x kubeadm-config.sh &&
./kubeadm-config.sh
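
The generated configuration files can be reviewed before the certificates are created, for example:

# List the generated configuration files
ls /tmp/192.168.30.15/ /tmp/192.168.30.16/ /tmp/192.168.30.17/

# Review the configuration for the first etcd node
cat /tmp/192.168.30.15/kubeadmcfg.yaml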

Generate the Certificate Authority
#

Create CA Certs
#

# Create CA certs
sudo kubeadm init phase certs etcd-ca

# Shell output:
I0627 13:35:49.570866    9642 version.go:256] remote version is much newer: v1.30.2; falling back to: stable-1.26
[certs] Generating "etcd/ca" certificate and key

Create Certs for other Etcd Nodes
#

Create certificates for each etcd member:

# Create a file for the cert script
vi kubeadm-certs.sh
# Update HOST0, HOST1 and HOST2 with the IPs of your hosts
export HOST0=192.168.30.15
export HOST1=192.168.30.16
export HOST2=192.168.30.17

# Update NAME0, NAME1 and NAME2 with the hostnames of your hosts
export NAME0="ubuntu6"
export NAME1="ubuntu7"
export NAME2="ubuntu8"

kubeadm init phase certs etcd-server --config=/tmp/${HOST2}/kubeadmcfg.yaml
kubeadm init phase certs etcd-peer --config=/tmp/${HOST2}/kubeadmcfg.yaml
kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST2}/kubeadmcfg.yaml
kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST2}/kubeadmcfg.yaml
cp -R /etc/kubernetes/pki /tmp/${HOST2}/
# cleanup non-reusable certificates
find /etc/kubernetes/pki -not -name ca.crt -not -name ca.key -type f -delete

kubeadm init phase certs etcd-server --config=/tmp/${HOST1}/kubeadmcfg.yaml
kubeadm init phase certs etcd-peer --config=/tmp/${HOST1}/kubeadmcfg.yaml
kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST1}/kubeadmcfg.yaml
kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST1}/kubeadmcfg.yaml
cp -R /etc/kubernetes/pki /tmp/${HOST1}/
find /etc/kubernetes/pki -not -name ca.crt -not -name ca.key -type f -delete

kubeadm init phase certs etcd-server --config=/tmp/${HOST0}/kubeadmcfg.yaml
kubeadm init phase certs etcd-peer --config=/tmp/${HOST0}/kubeadmcfg.yaml
kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST0}/kubeadmcfg.yaml
kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST0}/kubeadmcfg.yaml
# No need to move the certs because they are for HOST0

# clean up certs that should not be copied off this host
find /tmp/${HOST2} -name ca.key -type f -delete
find /tmp/${HOST1} -name ca.key -type f -delete
# Run the script
chmod +x kubeadm-certs.sh &&
sudo ./kubeadm-certs.sh
# Shell output:
I0627 13:42:35.499902   12242 version.go:256] remote version is much newer: v1.30.2; falling back to: stable-1.26
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost ubuntu8] and IPs [192.168.30.17 127.0.0.1 ::1]
I0627 13:42:36.343628   12255 version.go:256] remote version is much newer: v1.30.2; falling back to: stable-1.26
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost ubuntu8] and IPs [192.168.30.17 127.0.0.1 ::1]
I0627 13:42:37.208700   12269 version.go:256] remote version is much newer: v1.30.2; falling back to: stable-1.26
[certs] Generating "etcd/healthcheck-client" certificate and key
I0627 13:42:38.159396   12282 version.go:256] remote version is much newer: v1.30.2; falling back to: stable-1.26
[certs] Generating "apiserver-etcd-client" certificate and key
I0627 13:42:39.009913   12297 version.go:256] remote version is much newer: v1.30.2; falling back to: stable-1.26
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost ubuntu7] and IPs [192.168.30.16 127.0.0.1 ::1]
I0627 13:42:39.868768   12310 version.go:256] remote version is much newer: v1.30.2; falling back to: stable-1.26
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost ubuntu7] and IPs [192.168.30.16 127.0.0.1 ::1]
I0627 13:42:40.917507   12324 version.go:256] remote version is much newer: v1.30.2; falling back to: stable-1.26
[certs] Generating "etcd/healthcheck-client" certificate and key
I0627 13:42:41.805712   12337 version.go:256] remote version is much newer: v1.30.2; falling back to: stable-1.26
[certs] Generating "apiserver-etcd-client" certificate and key
I0627 13:42:42.618189   12352 version.go:256] remote version is much newer: v1.30.2; falling back to: stable-1.26
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost ubuntu6] and IPs [192.168.30.15 127.0.0.1 ::1]
I0627 13:42:43.405003   12359 version.go:256] remote version is much newer: v1.30.2; falling back to: stable-1.26
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost ubuntu6] and IPs [192.168.30.15 127.0.0.1 ::1]
I0627 13:42:44.356683   12373 version.go:256] remote version is much newer: v1.30.2; falling back to: stable-1.26
[certs] Generating "etcd/healthcheck-client" certificate and key
I0627 13:42:45.160194   12386 version.go:256] remote version is much newer: v1.30.2; falling back to: stable-1.26
[certs] Generating "apiserver-etcd-client" certificate and key

Create SSH Key
#

Create an SSH key pair on the first etcd node and copy the public key to the other two etcd nodes and the first Controller node:

# Create an SSH key pair
ssh-keygen -t rsa -b 4096 -C "etcd-node-1"

# Copy the public key to the other two etcd nodes
ssh-copy-id ubuntu@192.168.30.16
ssh-copy-id ubuntu@192.168.30.17

# Copy the public key to the first controller node
ssh-copy-id ubuntu@192.168.30.12

Copy Certs and Kubeadm Configs
#

Copy the kubeadm configuration files and certificates that were created via the scripts on the first etcd node to the other two etcd nodes.

Etcd Node 1:

# Copy the files from the tmp directory into the home directory
sudo cp /tmp/192.168.30.15/* ~/

Etcd Node 2:

# Copy all files from /tmp/192.168.30.16/ to the remote host
sudo rsync -avz -e "ssh -i /home/ubuntu/.ssh/id_rsa" /tmp/192.168.30.16/* ubuntu@192.168.30.16:
# SSH into the host
ssh ubuntu@192.168.30.16

# Switch to sudo
sudo su

# Change ownership of the pki directory
chown -R root:root pki

# Move the pki directory to the proper location
mv pki /etc/kubernetes/  

Etcd Node 3:

# Copy all files from /tmp/192.168.30.17/ to the remote host
sudo rsync -avz -e "ssh -i /home/ubuntu/.ssh/id_rsa" /tmp/192.168.30.17/* ubuntu@192.168.30.17:
# SSH into the host
ssh ubuntu@192.168.30.17

# Switch to sudo
sudo su

# Change ownership of the pki directory
chown -R root:root pki

# Move the pki directory to the proper location
mv pki /etc/kubernetes/  

Verify the files and folders on all three etcd nodes:

# List file and folder structure
tree /etc/kubernetes/pki

# Shell output:
/etc/kubernetes/pki
├── apiserver-etcd-client.crt
├── apiserver-etcd-client.key
└── etcd
    ├── ca.crt
    ├── healthcheck-client.crt
    ├── healthcheck-client.key
    ├── peer.crt
    ├── peer.key
    ├── server.crt
    └── server.key

Create Static Pod Manifests
#

Run the following command on all three etcd nodes:

#  Generate a static pod manifest for etcd
sudo kubeadm init phase etcd local --config=/home/ubuntu/kubeadmcfg.yaml
# Shell output: Etcd node 1
I0627 14:06:32.898612   20735 version.go:256] remote version is much newer: v1.30.2; falling back to: stable-1.26
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"

# Shell output: Etcd node 2
I0627 14:06:47.806495   10250 version.go:256] remote version is much newer: v1.30.2; falling back to: stable-1.26
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"


# Shell output: Etcd node 3
I0627 14:06:55.302481   10549 version.go:256] remote version is much newer: v1.30.2; falling back to: stable-1.26
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
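
The command writes an etcd.yaml manifest that the Kubelet runs as a static pod. A quick check that the manifest exists and that the etcd container is up, assuming crictl is available (it is normally installed as a dependency of the kubeadm package):

# Verify the static pod manifest was created
ls /etc/kubernetes/manifests/

# Verify the etcd container is running
sudo crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps | grep etcd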

Copy Certs to first Controller Node
#

# Copy the certificates from the first etcd node to the first Controller node
sudo rsync -avz -e "ssh -i /home/ubuntu/.ssh/id_rsa" /etc/kubernetes/pki/etcd/ca.crt ubuntu@192.168.30.12: &&
sudo rsync -avz -e "ssh -i /home/ubuntu/.ssh/id_rsa" /etc/kubernetes/pki/apiserver-etcd-client.crt ubuntu@192.168.30.12: &&
sudo rsync -avz -e "ssh -i /home/ubuntu/.ssh/id_rsa" /etc/kubernetes/pki/apiserver-etcd-client.key ubuntu@192.168.30.12:

SSH into the Controller node and move the files into the right directory:

sudo mkdir -p /etc/kubernetes/pki/etcd/
sudo mv ~/ca.crt /etc/kubernetes/pki/etcd/
sudo mv ~/apiserver-etcd-client.crt /etc/kubernetes/pki/
sudo mv ~/apiserver-etcd-client.key /etc/kubernetes/pki/

# Set owner and permissions
sudo chown root:root /etc/kubernetes/pki/etcd/ca.crt /etc/kubernetes/pki/apiserver-etcd-client.crt /etc/kubernetes/pki/apiserver-etcd-client.key &&
sudo chmod 600 /etc/kubernetes/pki/{apiserver-etcd-client.crt,apiserver-etcd-client.key} &&
sudo chmod 644 /etc/kubernetes/pki/etcd/ca.crt
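
Verify that the files are in place on the Controller node:

# Verify the etcd CA and the apiserver-etcd-client certificates
ls -l /etc/kubernetes/pki/etcd/ca.crt /etc/kubernetes/pki/apiserver-etcd-client.crt /etc/kubernetes/pki/apiserver-etcd-client.key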

Initialize the Cluster
#

Create Kubeadm Configuration
#

Create a Kubeadm configuration on the first Controller node:

# Create a configuration file for the cluster initialization
vi kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: "stable"
controlPlaneEndpoint: "192.168.30.9:6443"
networking:
  podSubnet: "10.0.0.0/16"
etcd:
  external:
    endpoints:
      - https://192.168.30.15:2379 # Etcd Node 1
      - https://192.168.30.16:2379 # Etcd Node 2
      - https://192.168.30.17:2379 # Etcd Node 3
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
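
Optionally, the configuration can be validated with a dry run before the actual initialization. This runs the preflight checks and renders the manifests into a temporary directory without changing the node; it is only a sanity check, the real initialization follows below:

# Dry run of the cluster initialization
sudo kubeadm init --config kubeadm-config.yaml --dry-run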

Initialize First Controller Node
#

# Pull the images
sudo kubeadm config images pull

# Initialize the cluster with the first Controller node
sudo kubeadm init --config kubeadm-config.yaml --upload-certs
# Shell output:
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 192.168.30.9:6443 --token k7y6gf.q1v4twk9l5d31rxc \
        --discovery-token-ca-cert-hash sha256:ed7f440f5638284ec5f70063a93127f80e19d8d605fcc3d6028e036486d12c2a \
        --control-plane --certificate-key 91d02730de815d933ea08f1ce9aaa1653a6009d959e1dee3ae7f26fc9b72aa93

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.30.9:6443 --token k7y6gf.q1v4twk9l5d31rxc \
        --discovery-token-ca-cert-hash sha256:ed7f440f5638284ec5f70063a93127f80e19d8d605fcc3d6028e036486d12c2a

Kubectl Configuration
#

Root user: Permanent

# Add kubeconfig path environment variable
echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> ~/.bashrc

# Apply changes
source ~/.bashrc

Non root user: Permanent

# Create directory and copy kubeconfig file
mkdir -p $HOME/.kube &&
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config &&
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Add kubeconfig path environment variable & apply changes
echo 'export KUBECONFIG=$HOME/.kube/config' >> ~/.bashrc &&
source ~/.bashrc

Install Pod Network Add-On
#

# Switch to root user
sudo su

Download the binaries:

# Download the Cilium binaries
curl -LO https://github.com/cilium/cilium-cli/releases/latest/download/cilium-linux-amd64.tar.gz

# Extract the binary into the "/usr/local/bin" directory
sudo tar xzvfC cilium-linux-amd64.tar.gz /usr/local/bin

# Remove the archive
rm cilium-linux-amd64.tar.gz

Install Cilium:

# Install Cilium
cilium install

# Shell output:
ℹ️  Using Cilium version 1.15.5
🔮 Auto-detected cluster name: kubernetes
🔮 Auto-detected kube-proxy has been installed

Check the Status:

# Verify status
cilium status

# Shell output:
    /¯¯\
 /¯¯\__/¯¯\    Cilium:             OK
 \__/¯¯\__/    Operator:           OK
 /¯¯\__/¯¯\    Envoy DaemonSet:    disabled (using embedded mode)
 \__/¯¯\__/    Hubble Relay:       disabled
    \__/       ClusterMesh:        disabled

Deployment             cilium-operator    Desired: 1, Ready: 1/1, Available: 1/1
DaemonSet              cilium             Desired: 1, Ready: 1/1, Available: 1/1
Containers:            cilium             Running: 1
                       cilium-operator    Running: 1
Cluster Pods:          2/2 managed by Cilium
Helm chart version:
Image versions         cilium             quay.io/cilium/cilium:v1.15.5@sha256:4ce1666a73815101ec9a4d360af6c5b7f1193ab00d89b7124f8505dee147ca40: 1
                       cilium-operator    quay.io/cilium/operator-generic:v1.15.5@sha256:f5d3d19754074ca052be6aac5d1ffb1de1eb5f2d947222b5f10f6d97ad4383e8: 1
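
Optionally, the Cilium CLI can run a connectivity test. This deploys test workloads into a dedicated test namespace and can take several minutes; some tests are skipped as long as the cluster has only a single node:

# Run the Cilium connectivity test
cilium connectivity test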

Verify the Kubelet
#

Verify the pods are up and running:

# List the pods in the kube-system namespace
kubectl get pods -n kube-system

# Shell output:
NAME                              READY   STATUS    RESTARTS   AGE
cilium-445h5                      1/1     Running   0          75s
cilium-operator-fdf6bc9f4-959rv   1/1     Running   0          75s
coredns-787d4945fb-6mxwg          1/1     Running   0          3m31s
coredns-787d4945fb-kcwdz          1/1     Running   0          3m31s
kube-apiserver-ubuntu3            1/1     Running   0          3m36s
kube-controller-manager-ubuntu3   1/1     Running   0          3m36s
kube-proxy-5zlqx                  1/1     Running   0          3m31s
kube-scheduler-ubuntu3            1/1     Running   0          3m36s
# Check the Kubelet status
sudo systemctl status kubelet
# List Kubelet logs
sudo journalctl -u kubelet

Verify the Cluster
#

# List the nodes
kubectl get nodes -o wide

# Shell output:
NAME      STATUS   ROLES           AGE     VERSION    INTERNAL-IP     EXTERNAL-IP   OS-IMAGE           KERNEL-VERSION     CONTAINER-RUNTIME
ubuntu3   Ready    control-plane   3m54s   v1.26.15   192.168.30.12   <none>        Ubuntu 24.04 LTS   6.8.0-35-generic   containerd://1.7.18
# List cluster info
kubectl cluster-info

# Shell output:
Kubernetes control plane is running at https://192.168.30.9:6443
CoreDNS is running at https://192.168.30.9:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Verify Etcd Health
#

Verify the etcd health / status from one of the Controller nodes:

Environment Variables
#

echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> ~/.bashrc

cat <<EOF >> ~/.bashrc
export ETCDCTL_API=3
export ETCDCTL_CACERT="/etc/kubernetes/pki/etcd/ca.crt"
export ETCDCTL_CERT="/etc/kubernetes/pki/apiserver-etcd-client.crt"
export ETCDCTL_KEY="/etc/kubernetes/pki/apiserver-etcd-client.key"
export ETCDCTL_ENDPOINTS="https://192.168.30.15:2379,https://192.168.30.16:2379,https://192.168.30.17:2379"
EOF

# Apply
source ~/.bashrc

Install Etcd Client
#

# Install the etcd client
sudo apt update && sudo apt install etcd-client

Check the Cluster Health
#

# Check the Etcd cluster health
etcdctl endpoint health

# Shell output:
https://192.168.30.15:2379 is healthy: successfully committed proposal: took = 7.039241ms
https://192.168.30.16:2379 is healthy: successfully committed proposal: took = 6.95793ms
https://192.168.30.17:2379 is healthy: successfully committed proposal: took = 8.35439ms

# Check the Etcd cluster health: More details
etcdctl endpoint status --write-out=table

# Shell output:
+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|          ENDPOINT          |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://192.168.30.15:2379 | 1e18502efc8cdc6c |  3.5.10 |  5.4 MB |      true |      false |         2 |       5242 |               5242 |        |
| https://192.168.30.16:2379 | 83b8fa383a8312d2 |  3.5.10 |  5.4 MB |     false |      false |         2 |       5242 |               5242 |        |
| https://192.168.30.17:2379 | 340916df7770c511 |  3.5.10 |  5.3 MB |     false |      false |         2 |       5242 |               5242 |        |
+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
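
The cluster membership can also be listed; the output should show all three etcd nodes with their peer and client URLs:

# List the etcd cluster members
etcdctl member list --write-out=table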



Add Controller Nodes
#

Add the other Controller Nodes
#

Join the other two Controller nodes with the control-plane join command from the kubeadm init output. Since the cluster was initialized with an external etcd topology, no local etcd pods are created on the Controller nodes.

# Add the other two Controller nodes
sudo kubeadm join 192.168.30.9:6443 --token k7y6gf.q1v4twk9l5d31rxc \
        --discovery-token-ca-cert-hash sha256:ed7f440f5638284ec5f70063a93127f80e19d8d605fcc3d6028e036486d12c2a \
        --control-plane --certificate-key 91d02730de815d933ea08f1ce9aaa1653a6009d959e1dee3ae7f26fc9b72aa93

# Shell output:
This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.


To start administering your cluster from this node, you need to run the following as a regular user:

        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

Verify the Cluster
#

# List the nodes
kubectl get nodes -o wide

# Shell output:
NAME      STATUS   ROLES           AGE     VERSION    INTERNAL-IP     EXTERNAL-IP   OS-IMAGE           KERNEL-VERSION     CONTAINER-RUNTIME
ubuntu3   Ready    control-plane   53m     v1.26.15   192.168.30.12   <none>        Ubuntu 24.04 LTS   6.8.0-35-generic   containerd://1.7.18
ubuntu4   Ready    control-plane   3m15s   v1.26.15   192.168.30.13   <none>        Ubuntu 24.04 LTS   6.8.0-35-generic   containerd://1.7.18
ubuntu5   Ready    control-plane   44s     v1.26.15   192.168.30.14   <none>        Ubuntu 24.04 LTS   6.8.0-35-generic   containerd://1.7.18


Add Worker Nodes
#

Join the Worker Nodes
#

# Join the Worker nodes
sudo kubeadm join 192.168.30.9:6443 --token k7y6gf.q1v4twk9l5d31rxc \
        --discovery-token-ca-cert-hash sha256:ed7f440f5638284ec5f70063a93127f80e19d8d605fcc3d6028e036486d12c2a

Label Worker Nodes
#

# Label the worker nodes
kubectl label nodes ubuntu9 kubernetes.io/role=worker &&
kubectl label nodes ubuntu10 kubernetes.io/role=worker

Verify the Cluster
#

# List the nodes
kubectl get nodes -o wide

# Shell output:
NAME       STATUS   ROLES           AGE   VERSION    INTERNAL-IP     EXTERNAL-IP   OS-IMAGE           KERNEL-VERSION     CONTAINER-RUNTIME
ubuntu10   Ready    worker          73s   v1.26.15   192.168.30.19   <none>        Ubuntu 24.04 LTS   6.8.0-35-generic   containerd://1.7.18
ubuntu3    Ready    control-plane   63m   v1.26.15   192.168.30.12   <none>        Ubuntu 24.04 LTS   6.8.0-35-generic   containerd://1.7.18
ubuntu4    Ready    control-plane   12m   v1.26.15   192.168.30.13   <none>        Ubuntu 24.04 LTS   6.8.0-35-generic   containerd://1.7.18
ubuntu5    Ready    control-plane   10m   v1.26.15   192.168.30.14   <none>        Ubuntu 24.04 LTS   6.8.0-35-generic   containerd://1.7.18
ubuntu9    Ready    worker          91s   v1.26.15   192.168.30.18   <none>        Ubuntu 24.04 LTS   6.8.0-35-generic   containerd://1.7.18



Links
#

# Kubeadm for external etcd nodes
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/

# Cluster initialization with external etcd nodes
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/#manual-certs

# Load Balancing
https://github.com/kubernetes/kubeadm/blob/main/docs/ha-considerations.md#options-for-software-load-balancing

# Kubeadm High Availability
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/

# Network Add-ons
https://kubernetes.io/docs/concepts/cluster-administration/addons/

# Configure cgroup driver
https://v1-26.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/