
K0s: Deploy a K0s Cluster with K0sctl, Deploy and Configure MetalLB, Deploy Nginx Ingress Controller; Example Deployment with TLS Encryption

Kubernetes-Cluster - This article is part of a series.
Part 9: This Article

Overview
#

My setup is based on the following Ubuntu 24.04 servers:

192.168.30.10 ubuntu1 # K0sctl host
192.168.30.11 ubuntu2 # Controller
192.168.30.12 ubuntu3 # Controller
192.168.30.13 ubuntu4 # Controller
192.168.30.14 ubuntu5 # Worker
192.168.30.15 ubuntu6 # Worker

I’m using 4 cores and 8 GB RAM for the controller & worker nodes, and 4 cores and 4 GB RAM for the K0sctl host.


Prerequisites
#

SSH Key
#

Create an SSH key on the K0sctl host and copy it to the Kubernetes nodes:

# Create SSH key
ssh-keygen -t rsa -b 4096

# Copy the SSH key to the controller and worker nodes
ssh-copy-id ubuntu@192.168.30.11
ssh-copy-id ubuntu@192.168.30.12
ssh-copy-id ubuntu@192.168.30.13
ssh-copy-id ubuntu@192.168.30.14
ssh-copy-id ubuntu@192.168.30.15

Sudoers
#

Add the default user of the Kubernetes nodes to the sudoers file (run on each controller and worker node):

# Allow sudo without pw on all controller and worker nodes
echo "ubuntu ALL=(ALL) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ubuntu
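Before letting k0sctl rely on it, the SSH key and sudoers setup can be checked in one pass. The helper below is a sketch (not part of the original guide); the `ubuntu` user and node IPs are the values used throughout this article:

```shell
# Confirm passwordless SSH + passwordless sudo on every node
check_sudo() {
  local user=$1; shift
  local ip
  for ip in "$@"; do
    # BatchMode fails fast instead of prompting for a password
    if ssh -o BatchMode=yes "$user@$ip" 'sudo -n true' 2>/dev/null; then
      echo "$ip: passwordless sudo OK"
    else
      echo "$ip: sudo still asks for a password"
    fi
  done
}

# check_sudo ubuntu 192.168.30.11 192.168.30.12 192.168.30.13 192.168.30.14 192.168.30.15
```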

K0sctl Host
#

Install K0sctl
#

Find latest release:
https://github.com/k0sproject/k0sctl/tags

# Download the latest binary
curl -sSLf https://github.com/k0sproject/k0sctl/releases/download/v0.18.0/k0sctl-linux-x64 -o k0sctl

# Change permission to executable
chmod +x k0sctl

# Move the binary
sudo mv k0sctl /usr/local/bin/

# Verify the installation / check the version
k0sctl version

# Shell output:
version: v0.18.0
commit: 1afb01f

Create Configuration
#

# Create a configuration file
k0sctl init ubuntu@192.168.30.11 ubuntu@192.168.30.12 ubuntu@192.168.30.13 ubuntu@192.168.30.14 ubuntu@192.168.30.15 > k0sctl.yaml

Adapt the Configuration
#

# Open the K0sctl configuration
vi k0sctl.yaml

Original configuration:

apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: k0s-cluster
spec:
  hosts:
  - ssh:
      address: 192.168.30.11
      user: ubuntu
      port: 22
      keyPath: null
    role: controller
  - ssh:
      address: 192.168.30.12
      user: ubuntu
      port: 22
      keyPath: null
    role: worker
  - ssh:
      address: 192.168.30.13
      user: ubuntu
      port: 22
      keyPath: null
    role: worker
  - ssh:
      address: 192.168.30.14
      user: ubuntu
      port: 22
      keyPath: null
    role: worker
  - ssh:
      address: 192.168.30.15
      user: ubuntu
      port: 22
      keyPath: null
    role: worker

Adapt the configuration to use three controller nodes:

apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: k0s-cluster
spec:
  hosts:
  - ssh:
      address: 192.168.30.11
      user: ubuntu
      port: 22
      keyPath: null
    role: controller
  - ssh:
      address: 192.168.30.12
      user: ubuntu
      port: 22
      keyPath: null
    role: controller
  - ssh:
      address: 192.168.30.13
      user: ubuntu
      port: 22
      keyPath: null
    role: controller
  - ssh:
      address: 192.168.30.14
      user: ubuntu
      port: 22
      keyPath: null
    role: worker
  - ssh:
      address: 192.168.30.15
      user: ubuntu
      port: 22
      keyPath: null
    role: worker

Deploy the Cluster
#

# Deploy the cluster
k0sctl apply --config k0sctl.yaml

# Shell output:
INFO ==> Finished in 1m26s
INFO k0s cluster version v1.30.1+k0s.0 is now installed
INFO Tip: To access the cluster you can now fetch the admin kubeconfig using:
INFO      k0sctl kubeconfig

K0s Configuration
#

Kubeconfig
#

Create a kubeconfig file on the K0sctl host and copy it to the controller nodes:

# Generate a kubeconfig file
k0sctl kubeconfig > k0s-kubeconfig

# Copy the Kubeconfig file from the K0sctl node to the controller nodes
scp /home/ubuntu/k0s-kubeconfig ubuntu@192.168.30.11:/home/ubuntu/
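The scp above targets a single controller; since this setup has three, a small loop (a sketch, using this guide's user and controller IPs) distributes the file to all of them:

```shell
# Copy the kubeconfig to every controller node
copy_kubeconfig() {
  local file=$1; shift
  local ip
  for ip in "$@"; do
    scp "$file" "ubuntu@$ip:/home/ubuntu/" && echo "copied to $ip"
  done
}

# copy_kubeconfig /home/ubuntu/k0s-kubeconfig 192.168.30.11 192.168.30.12 192.168.30.13
```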

Install Kubectl
#

Install Kubectl on the controller nodes:

# Download Kubectl
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"

# Make binary executable
chmod +x ./kubectl

# Move the binary
sudo mv kubectl /usr/local/bin/kubectl

# Verify the installation
kubectl version --client

# Shell output:
Client Version: v1.30.2
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3

Add Kubeconfig Environment Variable
#

Temporary:

# Add kubeconfig path environment variable
export KUBECONFIG=/home/ubuntu/k0s-kubeconfig

Permanent (Current user):

# Add kubeconfig path environment variable
echo 'export KUBECONFIG=/home/ubuntu/k0s-kubeconfig' >> ~/.bashrc

# Apply changes
source ~/.bashrc
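As an alternative to the `KUBECONFIG` variable (an assumption on my part, not part of the original setup), the file can be installed as kubectl's default config:

```shell
# Install a kubeconfig file as the default ~/.kube/config
install_kubeconfig() {
  mkdir -p "$HOME/.kube"
  cp "$1" "$HOME/.kube/config"
  # kubectl warns about group/world-readable kubeconfig files
  chmod 600 "$HOME/.kube/config"
}

# install_kubeconfig /home/ubuntu/k0s-kubeconfig
```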

Verify the Cluster
#

Note: By default, controller nodes don’t run the kubelet and won’t accept any workloads, so they don’t show up in the kubectl node list:

# List Kubernetes nodes
kubectl get nodes -o wide

# Shell output:
NAME      STATUS   ROLES    AGE   VERSION       INTERNAL-IP     EXTERNAL-IP   OS-IMAGE           KERNEL-VERSION     CONTAINER-RUNTIME
ubuntu5   Ready    <none>   30m   v1.30.1+k0s   192.168.30.14   <none>        Ubuntu 24.04 LTS   6.8.0-35-generic   containerd://1.7.17
ubuntu6   Ready    <none>   30m   v1.30.1+k0s   192.168.30.15   <none>        Ubuntu 24.04 LTS   6.8.0-35-generic   containerd://1.7.17
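If the controllers should also appear as nodes and accept workloads, k0sctl supports a combined role. A sketch of the relevant k0sctl.yaml fragment (whether to run workloads on controllers is a design decision, not something this guide does):

```yaml
# k0sctl.yaml fragment: a controller that also runs a kubelet and accepts
# workloads, so it shows up in "kubectl get nodes"
  - ssh:
      address: 192.168.30.11
      user: ubuntu
      port: 22
      keyPath: null
    role: controller+worker
```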

# List pods
kubectl get pods --all-namespaces

# Shell output:
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
kube-system   konnectivity-agent-fm2jc          1/1     Running   0          46m
kube-system   konnectivity-agent-zrvdr          1/1     Running   0          46m
kube-system   kube-proxy-fwk8w                  1/1     Running   0          46m
kube-system   kube-proxy-l9tvn                  1/1     Running   0          46m
kube-system   kube-router-drknj                 1/1     Running   0          46m
kube-system   kube-router-jnp7q                 1/1     Running   0          46m
kube-system   metrics-server-5cd4986bbc-hrxbt   1/1     Running   0          46m

MetalLB
#

Install Helm
#

# Install Helm with script
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 &&
chmod +x get_helm.sh &&
./get_helm.sh

# Verify the installation / check version
helm version

Add Helm Repository
#

# Add the MetalLB repository
helm repo add metallb https://metallb.github.io/metallb

# Update index
helm repo update

# Optional: Save & adapt the MetalLB chart values
helm show values metallb/metallb > metallb-values.yaml

Install MetalLB
#

# Install MetalLB
helm install --create-namespace --namespace metallb-system metallb metallb/metallb

# Shell output:
NAME: metallb
LAST DEPLOYED: Thu Jun 20 14:14:44 2024
NAMESPACE: metallb-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
MetalLB is now running in the cluster.

Now you can configure it via its CRs. Please refer to the metallb official docs
on how to use the CRs.

# Verify the resources
kubectl get pods -n metallb-system

# Shell output:
NAME                                 READY   STATUS    RESTARTS   AGE
metallb-controller-66fddf5ff-dqqnc   1/1     Running   0          39s
metallb-speaker-7dc72                4/4     Running   0          39s
metallb-speaker-mdx6f                4/4     Running   0          39s

MetalLB Configuration
#

# Create a configuration for MetalLB
vi metallb-configuration.yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: main-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.30.200-192.168.30.254

---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: main-advertisement
  namespace: metallb-system
spec:
  ipAddressPools:
    - main-pool

# Deploy the MetalLB configuration
kubectl apply -f metallb-configuration.yaml

Verify the Configuration
#

# Verify the MetalLB IP pools
kubectl get IPAddressPool -n metallb-system

# Shell output:
NAME        AUTO ASSIGN   AVOID BUGGY IPS   ADDRESSES
main-pool   true          false             ["192.168.30.200-192.168.30.254"]

# Verify the L2Advertisement
kubectl get L2Advertisement -n metallb-system

# Shell output:
NAME                 IPADDRESSPOOLS   IPADDRESSPOOL SELECTORS   INTERFACES
main-advertisement   ["main-pool"]

Test Deployment
#

Deploy a pod with a LoadBalancer service to verify that MetalLB assigns an external IP address.


Deploy Pod and LoadBalancer
#

# Run container: Example
kubectl run my-container --image=jueklu/container-2 --port=8080 --restart=Never --labels app=testing

# Create a LoadBalancer service to expose the pod "my-container"
kubectl expose pod/my-container --port=8080 --target-port=8080 --type=LoadBalancer --name=my-container-service

Verify the Deployment
#

# List the pods
kubectl get pods

# Shell output
NAME           READY   STATUS    RESTARTS   AGE
my-container   1/1     Running   0          15s

# List LoadBalancer service details
kubectl get svc my-container-service

# Shell output
NAME                   TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)          AGE
my-container-service   LoadBalancer   10.108.181.148   192.168.30.200   8080:31987/TCP   18s

Access the Deployment
#

# Access the deployment from a browser
192.168.30.200:8080
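For a headless check instead of the browser, a small polling helper can wait until the MetalLB-assigned address answers. This is a sketch; the URL in the usage comment is this deployment's external IP and port:

```shell
# Poll an HTTP endpoint until it responds (default: 10 tries, 1 s apart)
wait_for_http() {
  local url=$1 tries=${2:-10}
  local i
  for i in $(seq "$tries"); do
    if curl -fsS --max-time 2 "$url" >/dev/null 2>&1; then
      echo "reachable: $url"
      return 0
    fi
    sleep 1
  done
  echo "not reachable after $tries tries: $url"
  return 1
}

# wait_for_http http://192.168.30.200:8080
```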

Delete the Deployment
#

# Delete the deployment
kubectl delete pod my-container

# Delete the LoadBalancer service
kubectl delete svc my-container-service

Nginx Ingress Controller
#

Add Helm Chart
#

# Add Helm chart
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx

# Update package index
helm repo update

Install Nginx Ingress
#

# Install the Nginx ingress controller 
helm install ingress-nginx ingress-nginx/ingress-nginx \
    --namespace ingress-nginx \
    --create-namespace

# Optional: Scale the Nginx Ingress deployment
kubectl scale deployment ingress-nginx-controller --replicas=3 -n ingress-nginx

Verify the Deployment
#

# List pods
kubectl get pods -n ingress-nginx

# Shell output:
NAME                                       READY   STATUS    RESTARTS   AGE
ingress-nginx-controller-cf668668c-4p2t6   1/1     Running   0          29s
ingress-nginx-controller-cf668668c-j9qnb   1/1     Running   0          29s
ingress-nginx-controller-cf668668c-xfhx4   1/1     Running   0          5m12s

List the IngressClass:

# List IngressClass
kubectl get ingressclass

# Shell output:
NAME    CONTROLLER             PARAMETERS   AGE
nginx   k8s.io/ingress-nginx   <none>       33m

Test Deployment
#

Create an example deployment with an Ingress resource to verify that the Nginx Ingress Controller and TLS encryption are working.


Kubernetes TLS Certificate Secret
#

In this setup I’m using a Let’s Encrypt wildcard certificate.

# Create a Kubernetes secret for the TLS certificate
kubectl create secret tls k0s-tls --cert=./fullchain.pem --key=./privkey.pem
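A broken cert/key pair only surfaces later as a TLS handshake error, so an optional sanity check before creating the secret can save debugging time. The sketch below compares the public key embedded in the certificate with the one derived from the private key; the file names in the usage comment are the ones from the command above:

```shell
# Return success only if certificate and private key belong together
cert_key_match() {
  local cert_digest key_digest
  cert_digest=$(openssl x509 -in "$1" -noout -pubkey | openssl sha256)
  key_digest=$(openssl pkey -in "$2" -pubout | openssl sha256)
  [ -n "$cert_digest" ] && [ "$cert_digest" = "$key_digest" ]
}

# cert_key_match ./fullchain.pem ./privkey.pem && echo "certificate and key match"
```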

Pod Deployment with ClusterIP Service
#

# Create a manifest for the example deployment
vi test-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jueklu-container-2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jueklu-container-2
  template:
    metadata:
      labels:
        app: jueklu-container-2
    spec:
      containers:
      - name: jueklu-container-2
        image: jueklu/container-2
        ports:
        - containerPort: 8080

---
apiVersion: v1
kind: Service
metadata:
  name: jueklu-container-2
spec:
  type: ClusterIP
  ports:
    - port: 8080
      targetPort: 8080
  selector:
    app: jueklu-container-2

# Deploy the manifest
kubectl apply -f test-deployment.yaml

Nginx Ingress for ClusterIP Service
#

# Create manifest
vi test-ingress.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jueklu-container-2-ingress
spec:
  ingressClassName: "nginx"
  tls:
  - hosts:
    - k0s.jklug.work
    secretName: k0s-tls
  rules:
  - host: k0s.jklug.work
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: jueklu-container-2
            port:
              number: 8080

# Deploy the Ingress manifest
kubectl apply -f test-ingress.yml

Verify the Resources
#

Get the Ingress IP:

# List the ingress resources
kubectl get ingress

# Shell output:
NAME                         CLASS   HOSTS            ADDRESS          PORTS     AGE
jueklu-container-2-ingress   nginx   k0s.jklug.work   192.168.30.201   80, 443   94s

List Ingress Details:

# List Ingress details:
kubectl get svc -n ingress-nginx

# Shell output:
NAME                                 TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.100.116.234   192.168.30.201   80:30685/TCP,443:31625/TCP   43m
ingress-nginx-controller-admission   ClusterIP      10.105.83.179    <none>           443/TCP                      43m

List Ingress Logs:

# List Ingress logs
kubectl logs -n ingress-nginx -l app.kubernetes.io/component=controller

Hosts Entry
#

# Create a hosts entry for the Ingress
192.168.30.201 k0s.jklug.work

Access the Deployment
#

# Access the deployment with TLS encryption
https://k0s.jklug.work
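The TLS-terminated Ingress can also be verified without touching /etc/hosts by pinning the hostname to the Ingress IP with curl's `--resolve` option. A sketch; hostname and IP in the usage comment are the values from this guide:

```shell
# Request the Ingress host via a pinned IP, honoring TLS verification
test_ingress_tls() {
  local host=$1 ip=$2
  if curl -fsS --resolve "$host:443:$ip" "https://$host" >/dev/null 2>&1; then
    echo "TLS ingress OK: $host via $ip"
  else
    echo "TLS ingress failed: $host via $ip"
    return 1
  fi
}

# test_ingress_tls k0s.jklug.work 192.168.30.201
```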

Delete the Deployment
#

# Delete the Ingress resource
kubectl delete -f test-ingress.yml

# Delete the TLS secret
kubectl delete secret k0s-tls

# Delete the example deployment resources
kubectl delete -f test-deployment.yaml

More
#

K0s Status
#

# Check the K0s status on one of the controller nodes
sudo k0s status

# Shell output:
Version: v1.30.1+k0s.0
Process ID: 2128
Role: controller
Workloads: false
SingleNode: false

# Check the K0s status on one of the worker nodes
sudo k0s status

# Shell output:
Version: v1.30.1+k0s.0
Process ID: 1932
Role: worker
Workloads: true
SingleNode: false
Kube-api probing successful: true
Kube-api probing last error:

Shut Down the Cluster
#

# Drain the worker nodes
kubectl drain ubuntu6 --ignore-daemonsets --delete-emptydir-data
kubectl drain ubuntu5 --ignore-daemonsets --delete-emptydir-data

# Stop the K0s services: Worker nodes
sudo systemctl stop k0sworker

# Stop the K0s services: Controller nodes
sudo systemctl stop k0scontroller

# Shut down the servers
sudo shutdown now
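To bring the cluster back up after a shutdown (a sketch; service and node names are the ones used above): start k0scontroller on the controllers first, then k0sworker on the workers, and finally uncordon the drained workers so they accept workloads again.

```shell
# Start the services on the respective nodes:
#   sudo systemctl start k0scontroller   # controller nodes
#   sudo systemctl start k0sworker       # worker nodes

# Then, from a host with kubectl access, make the drained workers schedulable:
uncordon_workers() {
  local node
  for node in "$@"; do
    kubectl uncordon "$node" && echo "uncordoned $node"
  done
}

# uncordon_workers ubuntu5 ubuntu6
```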

Links
#

# K0sctl Latest Release
https://github.com/k0sproject/k0sctl/tags

# K0s Official Documentation
https://docs.k0sproject.io/v1.27.2+k0s.0/k0sctl-install/

# Why doesn't kubectl list Controller nodes
https://docs.k0sproject.io/v1.28.4+k0s.0/FAQ/

# MetalLB Configuration
https://metallb.universe.tf/configuration/

# MetalLB Configuration Examples
https://github.com/metallb/metallb/tree/v0.14.5/configsamples

# Nginx Ingress Controller
https://kubernetes.github.io/ingress-nginx/
https://github.com/kubernetes/ingress-nginx