
K8s Kubeadm - Upgrade Kubernetes Version with Kubeadm

Kubernetes-Cluster - This article is part of a series.
Part 13: This Article

Overview
#

In this tutorial I’m using the following Kubernetes cluster, deployed with Kubeadm:

NAME      STATUS   ROLES           AGE   VERSION    INTERNAL-IP     EXTERNAL-IP   OS-IMAGE           KERNEL-VERSION     CONTAINER-RUNTIME
ubuntu1   Ready    control-plane   69d   v1.28.11   192.168.30.10   <none>        Ubuntu 24.04 LTS   6.8.0-36-generic   containerd://1.7.18
ubuntu2   Ready    worker          69d   v1.28.11   192.168.30.11   <none>        Ubuntu 24.04 LTS   6.8.0-36-generic   containerd://1.7.18
ubuntu3   Ready    worker          69d   v1.28.11   192.168.30.12   <none>        Ubuntu 24.04 LTS   6.8.0-36-generic   containerd://1.7.18

I’ll upgrade the cluster from Kubernetes version v1.28.11 to v1.29.8.
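
Kubeadm only supports upgrading one minor version at a time, so a v1.28 cluster has to be upgraded to v1.29 before it can later move on to v1.30. Before starting, it can be worth double-checking the versions currently in use:

# Check the current Kubeadm version on the node
kubeadm version -o short

# Check the kubectl client and API server versions
kubectl version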


Controller Node
#

Node Prerequisites
#

Drain Controller Node
#

# Drain the node, which safely evicts all pods from the node in preparation for maintenance
kubectl drain ubuntu1 --ignore-daemonsets --delete-emptydir-data

# Shell output:
Warning: ignoring DaemonSet-managed Pods: kube-system/cilium-qvjpb, kube-system/kube-proxy-rkg4z, metallb-system/metallb-speaker-2czxq
evicting pod kube-system/coredns-5dd5756b68-jlmhf
evicting pod kube-system/coredns-5dd5756b68-8smln
evicting pod kube-system/cilium-operator-579c6c96c4-6gtvz
pod/cilium-operator-579c6c96c4-6gtvz evicted
pod/coredns-5dd5756b68-jlmhf evicted
pod/coredns-5dd5756b68-8smln evicted
node/ubuntu1 drained
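
Optionally, confirm that only DaemonSet-managed pods (CNI, kube-proxy, MetalLB speaker) are still running on the drained node:

# Optional: List the pods still running on the drained node
kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=ubuntu1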

Verify Scheduling is Disabled
#

Verify that scheduling is disabled on the controller node:

# List cluster nodes
kubectl get nodes

# Shell output:
NAME      STATUS                     ROLES           AGE   VERSION
ubuntu1   Ready,SchedulingDisabled   control-plane   69d   v1.28.11
ubuntu2   Ready                      worker          69d   v1.28.11
ubuntu3   Ready                      worker          69d   v1.28.11

Upgrade Packages
#

# Make sure that Kubeadm, Kubelet & Kubectl are on hold
sudo apt-mark hold kubeadm kubelet kubectl
# Upgrade the installed packages
sudo apt update && sudo apt upgrade -y

# If necessary, reboot the node
sudo reboot
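
Optionally, verify that the hold is actually in place, so the general package upgrade does not pull the Kubernetes packages along:

# Optional: List all apt packages currently on hold
apt-mark showhold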

Verify Current Kubeadm version
#

# List current Kubeadm version
kubeadm version -o json

# Shell output:
{
  "clientVersion": {
    "major": "1",
    "minor": "28",
    "gitVersion": "v1.28.11",
    "gitCommit": "f25b321b9ae42cb1bfaa00b3eec9a12566a15d91",
    "gitTreeState": "clean",
    "buildDate": "2024-06-11T20:18:34Z",
    "goVersion": "go1.21.11",
    "compiler": "gc",
    "platform": "linux/amd64"
  }
}

Add New Kubernetes Repository
#

Find the latest Kubernetes release: https://github.com/kubernetes/kubernetes/tags

# Add the latest stable Kubernetes repository (Version 1.29)
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

# Update repository index
sudo apt update 
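
The repository definition above reuses the keyring created during the initial cluster installation. If /etc/apt/keyrings/kubernetes-apt-keyring.gpg is missing on the node, it can be created first; a sketch based on the standard pkgs.k8s.io setup:

# Only needed if the keyring does not exist yet: download the signing key for the v1.29 repository
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg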

List available Kubeadm Versions
#

# List currently available Kubeadm versions
sudo apt-cache madison kubeadm

# Shell output:
   kubeadm | 1.29.9-1.1 | https://pkgs.k8s.io/core:/stable:/v1.29/deb  Packages
   kubeadm | 1.29.8-1.1 | https://pkgs.k8s.io/core:/stable:/v1.29/deb  Packages
   kubeadm | 1.29.7-1.1 | https://pkgs.k8s.io/core:/stable:/v1.29/deb  Packages
   kubeadm | 1.29.6-1.1 | https://pkgs.k8s.io/core:/stable:/v1.29/deb  Packages
   kubeadm | 1.29.5-1.1 | https://pkgs.k8s.io/core:/stable:/v1.29/deb  Packages
   kubeadm | 1.29.4-2.1 | https://pkgs.k8s.io/core:/stable:/v1.29/deb  Packages
   kubeadm | 1.29.3-1.1 | https://pkgs.k8s.io/core:/stable:/v1.29/deb  Packages
   kubeadm | 1.29.2-1.1 | https://pkgs.k8s.io/core:/stable:/v1.29/deb  Packages
   kubeadm | 1.29.1-1.1 | https://pkgs.k8s.io/core:/stable:/v1.29/deb  Packages
   kubeadm | 1.29.0-1.1 | https://pkgs.k8s.io/core:/stable:/v1.29/deb  Packages

Upgrade Controller Node
#

Verify the Upgrade Plan
#

# Plan Upgrade: Check the upgrade actions to be taken
sudo kubeadm upgrade plan 1.29.8

# Shell output:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.28.11
[upgrade/versions] kubeadm version: v1.28.11
[upgrade/versions] Target version: 1.29.8
[upgrade/versions] Latest version in the v1.28 series: 1.29.8

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT        TARGET
kubelet     3 x v1.28.11   1.29.8

Upgrade to the latest version in the v1.28 series:

COMPONENT                 CURRENT    TARGET
kube-apiserver            v1.28.11   1.29.8
kube-controller-manager   v1.28.11   1.29.8
kube-scheduler            v1.28.11   1.29.8
kube-proxy                v1.28.11   1.29.8
CoreDNS                   v1.10.1    v1.10.1
etcd                      3.5.12-0   3.5.12-0

You can now apply the upgrade by executing the following command:

        kubeadm upgrade apply 1.29.8

Note: Before you can perform this upgrade, you have to update kubeadm to 1.29.8.

_____________________________________________________________________


The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.

API GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io   v1alpha1          v1alpha1            no
kubelet.config.k8s.io     v1beta1           v1beta1             no
_____________________________________________________________________
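
The version argument is optional; without it, kubeadm looks up the latest releases it can find on its own and lists them as possible upgrade targets:

# Optional: Check the upgrade plan without pinning a target version
sudo kubeadm upgrade plan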

Unhold & Upgrade Kubeadm
#

Remove the hold on the Kubeadm package so that it can be upgraded:

# Unhold the Kubeadm apt package
sudo apt-mark unhold kubeadm

# Shell output:
Canceled hold on kubeadm.
# Optional: Verify the Kubeadm apt package is no longer on hold
apt-mark showhold | grep kubeadm

Upgrade the Kubeadm package:

# Upgrade Kubeadm to version "1.29.8"
sudo apt update &&
sudo apt install -y kubeadm=1.29.8-1.1
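
Optionally, confirm that the new Kubeadm package is in place before applying the upgrade:

# Optional: Show the installed and candidate Kubeadm package versions
apt-cache policy kubeadm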

Apply the upgrade:

# Apply the upgrade; ignore the unstable version message
sudo kubeadm upgrade apply 1.29.8

# Shell output:
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.29.8". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
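
Once the apply has finished, the static control plane pods (kube-apiserver, kube-controller-manager, kube-scheduler, etcd) are restarted with the new images; an optional check:

# Optional: Verify the control plane pods are up and running again
kubectl get pods -n kube-system -o wide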

Hold the Kubeadm apt package:

# Hold the Kubeadm package
sudo apt-mark hold kubeadm

Upgrade Kubelet & Kubectl
#

# Unhold the Kubelet / Kubectl apt packages
sudo apt-mark unhold kubelet kubectl

# Shell output:
Canceled hold on kubelet.
Canceled hold on kubectl.
# Upgrade Kubelet & Kubectl
sudo apt install kubelet=1.29.8-1.1 kubectl=1.29.8-1.1
# Hold the Kubelet / Kubectl apt packages
sudo apt-mark hold kubelet kubectl

Restart Services
#

# Restart Kubelet service
sudo systemctl daemon-reload
sudo systemctl restart kubelet

# Verify the Kubelet status
sudo systemctl status kubelet
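
If the Kubelet does not come back healthy after the restart, the recent service logs usually show why; a quick way to inspect them:

# Optional: Show the last 50 Kubelet log lines
sudo journalctl -u kubelet -n 50 --no-pager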

Uncordon the Node
#

# Uncordon the node
kubectl uncordon ubuntu1

# Shell output:
node/ubuntu1 uncordoned

Verify Cluster Status & Version
#

# List cluster nodes
kubectl get nodes -o wide

# Shell output:
NAME      STATUS   ROLES           AGE   VERSION    INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
ubuntu1   Ready    control-plane   69d   v1.29.8    192.168.30.10   <none>        Ubuntu 24.04.1 LTS   6.8.0-44-generic   containerd://1.7.22
ubuntu2   Ready    worker          69d   v1.28.11   192.168.30.11   <none>        Ubuntu 24.04 LTS     6.8.0-36-generic   containerd://1.7.18
ubuntu3   Ready    worker          69d   v1.28.11   192.168.30.12   <none>        Ubuntu 24.04 LTS     6.8.0-36-generic   containerd://1.7.18
# Verify Kubeadm version
kubeadm version

# Shell output:
kubeadm version: &version.Info{Major:"1", Minor:"29", GitVersion:"v1.29.8", GitCommit:"234bc63696ad15dcf62584b6ba48671bf0f25fb6", GitTreeState:"clean", BuildDate:"2024-08-14T19:48:05Z", GoVersion:"go1.22.5", Compiler:"gc", Platform:"linux/amd64"}
# Verify Kubelet version
kubelet --version

# Shell output:
Kubernetes v1.29.8



Worker Nodes
#

It’s often faster to simply drain a worker node, remove it from the cluster, and join a fresh node that already runs the new Kubernetes version; a sketch of that approach follows. This section instead walks through upgrading the existing worker nodes in place:
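
For reference, a minimal sketch of the replace-instead-of-upgrade approach (not used further in this tutorial), using ubuntu2 as an example:

# Drain and remove the old worker node from the cluster
kubectl drain ubuntu2 --ignore-daemonsets --delete-emptydir-data
kubectl delete node ubuntu2

# On the removed node: reset the Kubeadm-installed state before re-installing or re-joining it
sudo kubeadm reset

# On the controller node: print a join command for the replacement node
kubeadm token create --print-join-command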

Node Prerequisites
#

Drain Worker Node
#

# Drain the worker node
kubectl drain ubuntu2 --ignore-daemonsets --delete-emptydir-data

# Shell output:
Warning: ignoring DaemonSet-managed Pods: kube-system/cilium-dd9vk, kube-system/kube-proxy-45ft4, metallb-system/metallb-speaker-kfvnl
evicting pod kube-system/coredns-76f75df574-dwk8q
evicting pod ingress-nginx/ingress-nginx-controller-6dfcb8658d-94vg6
evicting pod ingress-nginx/ingress-nginx-controller-6dfcb8658d-zl5dd
pod/coredns-76f75df574-dwk8q evicted
pod/ingress-nginx-controller-6dfcb8658d-94vg6 evicted
pod/ingress-nginx-controller-6dfcb8658d-zl5dd evicted
node/ubuntu2 drained

Verify Scheduling is Disabled
#

Verify that scheduling is disabled on the worker node:

# List cluster nodes
kubectl get nodes

# Shell output:
NAME      STATUS                     ROLES           AGE   VERSION
ubuntu1   Ready                      control-plane   69d   v1.29.8
ubuntu2   Ready,SchedulingDisabled   worker          69d   v1.28.11
ubuntu3   Ready                      worker          69d   v1.28.11

Upgrade Packages
#

# Make sure that Kubeadm, Kubelet & Kubectl are on hold
sudo apt-mark hold kubeadm kubelet kubectl
# Upgrade the installed packages
sudo apt update && sudo apt upgrade -y

# If necessary, reboot the node (drain the node first)
sudo reboot

Add New Kubernetes Repository
#

# Add the latest stable Kubernetes repository (Version 1.29)
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

# Update repository index
sudo apt update 

List available Kubeadm Versions
#

# List currently available Kubeadm versions
sudo apt-cache madison kubeadm

# Shell output:
   kubeadm | 1.29.9-1.1 | https://pkgs.k8s.io/core:/stable:/v1.29/deb  Packages
   kubeadm | 1.29.8-1.1 | https://pkgs.k8s.io/core:/stable:/v1.29/deb  Packages
   kubeadm | 1.29.7-1.1 | https://pkgs.k8s.io/core:/stable:/v1.29/deb  Packages
   kubeadm | 1.29.6-1.1 | https://pkgs.k8s.io/core:/stable:/v1.29/deb  Packages
   kubeadm | 1.29.5-1.1 | https://pkgs.k8s.io/core:/stable:/v1.29/deb  Packages
   kubeadm | 1.29.4-2.1 | https://pkgs.k8s.io/core:/stable:/v1.29/deb  Packages
   kubeadm | 1.29.3-1.1 | https://pkgs.k8s.io/core:/stable:/v1.29/deb  Packages
   kubeadm | 1.29.2-1.1 | https://pkgs.k8s.io/core:/stable:/v1.29/deb  Packages
   kubeadm | 1.29.1-1.1 | https://pkgs.k8s.io/core:/stable:/v1.29/deb  Packages
   kubeadm | 1.29.0-1.1 | https://pkgs.k8s.io/core:/stable:/v1.29/deb  Packages



Upgrade Worker Node
#

Unhold & Upgrade Kubeadm
#

Remove the hold on the Kubeadm package so that it can be upgraded:

# Unhold the Kubeadm apt package
sudo apt-mark unhold kubeadm

# Shell output:
Canceled hold on kubeadm.
# Optional: Verify the Kubeadm apt package is no longer on hold
apt-mark showhold | grep kubeadm

Upgrade the Kubeadm package:

# Upgrade Kubeadm to version "1.29.8"
sudo apt update &&
sudo apt install -y kubeadm=1.29.8-1.1

Hold the Kubeadm apt package:

# Hold the Kubeadm package
sudo apt-mark hold kubeadm
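
The official Kubernetes upgrade documentation also runs "kubeadm upgrade node" on each worker node at this point; it fetches the updated kubelet configuration from the cluster and should run before the Kubelet package is upgraded:

# Update the local kubelet configuration on the worker node
sudo kubeadm upgrade node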

Upgrade Kubelet
#

# Unhold the Kubelet apt package
sudo apt-mark unhold kubelet

# Shell output:
Canceled hold on kubelet.
# Upgrade the Kubelet package
sudo apt install kubelet=1.29.8-1.1
# Hold the Kubelet apt package
sudo apt-mark hold kubelet

Restart Services
#

# Restart Kubelet service
sudo systemctl daemon-reload
sudo systemctl restart kubelet

# Verify the Kubelet status
sudo systemctl status kubelet

Uncordon the Node
#

# Uncordon the node
kubectl uncordon ubuntu2

# Shell output:
node/ubuntu2 uncordoned

Verify Cluster Status & Version
#

# List cluster nodes
kubectl get nodes -o wide

# Shell output:
NAME      STATUS   ROLES           AGE   VERSION    INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
ubuntu1   Ready    control-plane   69d   v1.29.8    192.168.30.10   <none>        Ubuntu 24.04.1 LTS   6.8.0-44-generic   containerd://1.7.22
ubuntu2   Ready    worker          69d   v1.29.8    192.168.30.11   <none>        Ubuntu 24.04.1 LTS   6.8.0-44-generic   containerd://1.7.22
ubuntu3   Ready    worker          69d   v1.28.11   192.168.30.12   <none>        Ubuntu 24.04 LTS     6.8.0-36-generic   containerd://1.7.18

Follow the same upgrade steps for the second worker node:

# List cluster nodes
kubectl get nodes -o wide

# Shell output:
NAME      STATUS   ROLES           AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
ubuntu1   Ready    control-plane   69d   v1.29.8   192.168.30.10   <none>        Ubuntu 24.04.1 LTS   6.8.0-44-generic   containerd://1.7.22
ubuntu2   Ready    worker          69d   v1.29.8   192.168.30.11   <none>        Ubuntu 24.04.1 LTS   6.8.0-44-generic   containerd://1.7.22
ubuntu3   Ready    worker          69d   v1.29.8   192.168.30.12   <none>        Ubuntu 24.04.1 LTS   6.8.0-44-generic   containerd://1.7.22
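
As a final sanity check, the container images used by the kube-system pods can be listed to confirm that all control plane components run the new version; a sketch:

# Optional: List pod names and their first container image in kube-system
kubectl get pods -n kube-system -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'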