K3s Single Node Cluster #
In this tutorial I’m using a Debian 12 server with the IP “192.168.30.70”. The container I deploy runs a webserver listening on port “8080”.
Installation #
The following command installs K3s with Traefik as the default ingress controller (it handles incoming network traffic to the cluster).
# Install prerequisites
sudo apt update && sudo apt upgrade -y &&
sudo apt install -y curl iptables
# Install K3s: With Traefik Ingress
curl -sfL https://get.k3s.io | sh -s -
Enable the ability to run the kubectl command without sudo privileges:
# Set permissions
sudo chmod 644 /etc/rancher/k3s/k3s.yaml
Verify the Cluster #
Verify the installation:
# List cluster nodes
k3s kubectl get nodes
# shell output:
NAME     STATUS   ROLES                  AGE     VERSION
deb-01   Ready    control-plane,master   8m47s   v1.28.8+k3s1
Deploy Test Pod #
Deploy a Docker Container to test the cluster:
# Run container: Example
kubectl run my-container --image=jueklu/container-1 --port=8080 --restart=Never --labels app=testing
- --restart=Never: Define the restart policy
- --labels app=testing: Define a label
# Create a LoadBalancer service to expose the pod "my-container"
kubectl expose pod/my-container --port=8888 --target-port=8080 --type=LoadBalancer --name=my-container-service
- --port=8888: The port on which the service is exposed and can be accessed from outside
- --target-port=8080: The container port that the service forwards to
- --type=LoadBalancer: The LoadBalancer service type normally relies on a cloud provider’s load balancer. In a single-node K3s cluster, or in any environment without a cloud load balancer, K3s automatically provisions LoadBalancer services with Klipper LB, a lightweight load balancer built for K3s. This makes the service reachable from outside the cluster via the node’s IP address.
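To see what Klipper LB actually creates, the ServiceLB pods can be listed. A quick check, assuming the default K3s setup where the svclb pods run in the kube-system namespace:
# List the ServiceLB (Klipper) pods created for LoadBalancer services
kubectl get pods -n kube-system -o wide | grep svclb
# Check which external IP the service received (the node IP)
kubectl get svc my-container-service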
Test the Deployment #
# Open the URL in a browser
http://192.168.30.70:8888/
List Deployment Resources #
# List the deployed pods
kubectl get pods
# Shell output:
NAME           READY   STATUS    RESTARTS   AGE
my-container   1/1     Running   0          2m53s
# List services
kubectl get svc
# Shell output:
NAME                   TYPE           CLUSTER-IP    EXTERNAL-IP     PORT(S)          AGE
kubernetes             ClusterIP      10.43.0.1     <none>          443/TCP          24m
my-container-service   LoadBalancer   10.43.97.36   192.168.30.70   8888:30095/TCP   36s
Delete the Deployment #
# Delete the Pod
kubectl delete pod my-container
# Delete the Service
kubectl delete service my-container-service
K3s Multi Node Cluster #
Installation #
Traefik Ingress #
The following command installs K3s with Traefik as the default ingress controller (it handles incoming network traffic to the cluster).
# Install prerequisites
sudo apt update && sudo apt upgrade -y &&
sudo apt install -y curl iptables
# Install K3s: With Traefik Ingress
curl -sfL https://get.k3s.io | sh -
Enable the ability to run the kubectl command without sudo privileges:
# Set permissions
sudo chmod 644 /etc/rancher/k3s/k3s.yaml
Nginx Ingress #
The following command installs K3s without the default Traefik ingress controller, so that Nginx can be installed later to handle incoming network traffic to the cluster.
# Install prerequisites
sudo apt update && sudo apt upgrade -y &&
sudo apt install -y curl iptables
# Install K3s: Without Traefik Ingress
curl -sfL https://get.k3s.io | sh -s - --disable traefik
# Make sure: Kubeconfig file points to the K3s cluster
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
Enable the ability to run the kubectl command without sudo privileges:
# Set permissions
sudo chmod 644 /etc/rancher/k3s/k3s.yaml
Token #
Get the token from the master node; it is used to join the worker nodes to the cluster:
# Extract the token from the master node
sudo cat /var/lib/rancher/k3s/server/node-token
# Shell output:
K102fe07d56118731d4c8a8673b0ef02c6a6db49f2c86fc210a6b8bacd068267a16::server:4ccf4bf688a5083fe8d5ce68c934d08f
Join Worker Nodes #
On the worker nodes, use the token from the master node to join the cluster:
# Install curl
sudo apt install -y curl
# Add Worker Node
curl -sfL https://get.k3s.io | K3S_URL=https://192.168.30.70:6443 K3S_TOKEN=K102fe07d56118731d4c8a8673b0ef02c6a6db49f2c86fc210a6b8bacd068267a16::server:4ccf4bf688a5083fe8d5ce68c934d08f sh -
Check K3s Status #
# Check master node K3s service
systemctl status k3s
# Check worker node K3s-Agent service
systemctl status k3s-agent
Verify the Cluster #
Check the status of the nodes to ensure they are part of the cluster:
# List the nodes from the master node
k3s kubectl get nodes
# Shell output:
NAME     STATUS   ROLES                  AGE   VERSION
deb-01   Ready    control-plane,master   70m   v1.28.8+k3s1
deb-03   Ready    <none>                 21s   v1.28.8+k3s1
deb-02   Ready    <none>                 33s   v1.28.8+k3s1
Optional: Label the Worker Nodes #
Labeling worker nodes is not strictly necessary for the Kubernetes cluster to function; pods are scheduled on the nodes regardless of whether they are labeled as workers. The label can, however, be used for scheduling, as shown in the example below.
# Label the worker nodes
kubectl label nodes deb-02 kubernetes.io/role=worker &&
kubectl label nodes deb-03 kubernetes.io/role=worker
# Verify the labels
k3s kubectl get nodes
# Shell output:
NAME     STATUS   ROLES                  AGE   VERSION
deb-02   Ready    worker                 10m   v1.28.8+k3s1
deb-03   Ready    worker                 10m   v1.28.8+k3s1
deb-01   Ready    control-plane,master   80m   v1.28.8+k3s1
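Once the label is set, it can be used to steer scheduling. A minimal sketch of a pod manifest that only schedules onto nodes carrying the kubernetes.io/role=worker label set above; the pod name is a hypothetical example, the image is the test image used earlier:
apiVersion: v1
kind: Pod
metadata:
  name: worker-only-pod          # hypothetical example name
spec:
  nodeSelector:
    kubernetes.io/role: worker   # only schedule on nodes with this label
  containers:
  - name: app
    image: jueklu/container-1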
Optional: Taint the Master Node #
If the master node should be dedicated to managing the cluster and not running other workloads, configure it to be unschedulable for general pods:
# Taint the master node
kubectl taint nodes deb-01 key=value:NoSchedule
# Remove existing taints (if needed)
kubectl taint nodes deb-01 key=value:NoSchedule-
# Verify node taints
kubectl describe node deb-01 | grep Taints
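Pods that should still run on the tainted master need a matching toleration. A minimal sketch, assuming the example key=value:NoSchedule taint from above (the pod name is a hypothetical example):
apiVersion: v1
kind: Pod
metadata:
  name: master-tolerating-pod    # hypothetical example name
spec:
  tolerations:
  - key: "key"                   # must match the taint key
    operator: "Equal"
    value: "value"               # must match the taint value
    effect: "NoSchedule"
  containers:
  - name: app
    image: jueklu/container-1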
Helm #
Helm is a package manager for Kubernetes; its package definitions are called Helm Charts.
# Install Helm with script
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 &&
chmod +x get_helm.sh &&
./get_helm.sh
# Verify the installation / check version
helm version
# Add Helm repository
helm repo add bitnami https://charts.bitnami.com/bitnami
# Update package index
helm repo update
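As a quick sanity check that Helm works, a chart from the Bitnami repository added above can be searched, installed and removed again (the release and chart names are just examples; available charts may change):
# Search the repository for a chart
helm search repo bitnami/nginx
# Install the chart as a release named "my-nginx"
helm install my-nginx bitnami/nginx
# List the installed releases
helm list
# Remove the test release again
helm uninstall my-nginx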
Traefik Ingress Controller #
Test Deployment #
Create a deployment to test whether the Traefik ingress controller is working:
# Create deployment
kubectl create deployment my-container --image=jueklu/container-1
# Expose the deployment with a ClusterIP service so the container port can be reached inside the cluster
kubectl expose deployment my-container --port=8888 --target-port=8080 --type=ClusterIP --name=my-container-deployment
# Create a YML file for the Ingress
vi my-ingress.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: web
    traefik.ingress.kubernetes.io/rule.type: PathPrefixStrip
spec:
  ingressClassName: traefik
  rules:
  - http:
      paths:
      - path: /mypath
        pathType: Prefix
        backend:
          service:
            name: my-container-deployment
            port:
              number: 8888
# Create the Ingress
kubectl create -f my-ingress.yml
# Test the deployment
http://192.168.30.70/mypath
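Assuming Traefik’s web entrypoint listens on port 80 of the node IP (the K3s default), the same check also works from the shell:
# Verify the Ingress resource
kubectl get ingress my-ingress
# Or test from the shell
curl http://192.168.30.70/mypath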
Nginx Ingress Controller #
Setup #
Note: This is only necessary if you installed K3s without the default Traefik ingress controller.
# Add the Helm repository for the Nginx ingress controller
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
# Update package index
helm repo update
# Install the Nginx ingress controller
helm install my-nginx-ingress ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace
Test Deployment #
Create a deployment to test whether the Nginx ingress controller is working:
# Create deployment
kubectl create deployment my-container --image=jueklu/container-1
# Expose the deployment with a ClusterIP service so the container port can be reached inside the cluster
kubectl expose deployment my-container --port=8888 --target-port=8080 --type=ClusterIP --name=my-container-deployment
# Create a YML file for the Ingress
vi my-ingress.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /mypath
        pathType: Prefix
        backend:
          service:
            name: my-container-deployment
            port:
              number: 8888
# Create the Ingress
kubectl create -f my-ingress.yml
# Test the deployment
http://192.168.30.70/mypath
List Deployment Resources #
# List the deployment resources
kubectl get deployments
# Shell output:
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
my-container   1/1     1            1           14m
# List services
kubectl get svc
# Shell output:
NAME                      TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
kubernetes                ClusterIP   10.43.0.1      <none>        443/TCP    128m
my-container-deployment   ClusterIP   10.43.231.37   <none>        8888/TCP   14m
# List on which node the pod is running
kubectl get pods -o wide
# Shell output:
NAME                            READY   STATUS    RESTARTS   AGE   IP          NODE     NOMINATED NODE   READINESS GATES
my-container-69885c5b84-6k8r2   1/1     Running   0          17m   10.42.1.3   deb-02   <none>           <none>
Scale the Deployment #
# Increase the number of replicas for the deployment
kubectl scale deployment my-container --replicas=3
# Scale the deployment back to one pod
kubectl scale deployment my-container --replicas=1
# List on which node the pod is running
kubectl get pods -o wide
# Shell output:
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
my-container-69885c5b84-6k8r2 1/1 Running 0 34m 10.42.1.3 deb-02 <none> <none>
my-container-69885c5b84-wlt8v 1/1 Running 0 8s 10.42.2.5 deb-03 <none> <none>
my-container-69885c5b84-g7s4t 1/1 Running 0 8s 10.42.2.4 deb-03 <none> <none>
Restart Node #
Drain the worker nodes before restarting them:
# Safely evict all pods from the node and mark it as unschedulable
kubectl drain deb-02 --ignore-daemonsets --delete-emptydir-data
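With the node drained, it can be rebooted from its own shell (this assumes SSH access to the worker node):
# Reboot the drained worker node (run on the node itself)
sudo reboot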
# Make node schedulable again by uncordoning it (after the reboot)
kubectl uncordon deb-02
Delete Deployment #
# Delete the deployment
kubectl delete deployment my-container
# Delete the service to clean up the network access
kubectl delete service my-container-deployment
# Delete the Ingress resource
kubectl delete ingress my-ingress
HTTPS Deployment: Traefik Ingress #
For this tutorial I’m using a Debian 12 based cloud server with a public IP address.
DNS Entry #
# Create a Fully Qualified Domain Name DNS entry
37.27.23.185 kubernetes-pod-1.jklug.work
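Before requesting a certificate, it is worth verifying that the record actually resolves; a quick check, assuming dig is available on the workstation:
# Verify the DNS record
dig +short kubernetes-pod-1.jklug.work
# Shell output (should return the server IP):
37.27.23.185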
Deploy Pod #
# Create deployment
kubectl create deployment my-container --image=jueklu/container-1
# Expose the deployment with a ClusterIP service so the container port can be reached inside the cluster
kubectl expose deployment my-container --port=8888 --target-port=8080 --type=ClusterIP --name=my-container-deployment
Install Cert-Manager #
# Install cert-manager
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.7.1/cert-manager.yaml
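Give cert-manager a moment to start and verify its pods are running before creating the issuer (the manifest installs into the cert-manager namespace):
# Verify the cert-manager pods
kubectl get pods -n cert-manager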
Configure Let’s Encrypt Issuer #
# Create YML file
vi cert-manager.yml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-production
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: juergen@jklug.work
    privateKeySecretRef:
      name: letsencrypt-production-private-key
    solvers:
    - http01:
        ingress:
          class: traefik
# Apply configuration
kubectl apply -f cert-manager.yml
Create a Certificate #
# Create YML file
vi cert-manager-certificate.yml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: my-certificate
spec:
  secretName: my-certificate-secret
  dnsNames:
  - "kubernetes-pod-1.jklug.work"
  issuerRef:
    name: letsencrypt-production
    kind: ClusterIssuer
# Apply configuration
kubectl apply -f cert-manager-certificate.yml
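The certificate is only usable once the HTTP-01 challenge has completed; check that READY turns to True, and use describe for troubleshooting if it does not:
# Check the certificate status
kubectl get certificate my-certificate
# Show issuance details / troubleshoot
kubectl describe certificate my-certificate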
HTTPS Ingress #
# Create YML file
vi ingress.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
    cert-manager.io/cluster-issuer: letsencrypt-production
spec:
  ingressClassName: traefik
  tls:
  - hosts:
    - "kubernetes-pod-1.jklug.work"
    secretName: my-certificate-secret
  rules:
  - host: "kubernetes-pod-1.jklug.work"
    http:
      paths:
      - path: /mypath
        pathType: Prefix
        backend:
          service:
            name: my-container-deployment
            port:
              number: 8888
# Apply configuration
kubectl apply -f ingress.yml
Test the Deployment #
# Open the URL in a browser
https://kubernetes-pod-1.jklug.work/mypath
# Or test from the shell
curl https://kubernetes-pod-1.jklug.work/mypath
HTTPS Deployment: Nginx Ingress #
For this tutorial I’m using a Debian 12 based cloud server with a public IP address.
DNS Entry #
# Create a Fully Qualified Domain Name DNS entry
37.27.23.185 kubernetes-pod-1.jklug.work
Install Nginx Ingress Controller #
# Install Nginx Ingress Controller
helm install my-nginx-ingress ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace
Deploy Pod #
# Create deployment
kubectl create deployment my-container --image=jueklu/container-1
# Expose the deployment with a ClusterIP service so the container port can be reached inside the cluster
kubectl expose deployment my-container --port=8888 --target-port=8080 --type=ClusterIP --name=my-container-deployment
Install Cert-Manager #
# Install cert-manager
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.7.1/cert-manager.yaml
Configure Let’s Encrypt Issuer #
# Create YML file
vi cert-manager.yml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-production
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: juergen@jklug.work
    privateKeySecretRef:
      name: letsencrypt-production-private-key
    solvers:
    - http01:
        ingress:
          class: nginx
# Apply configuration
kubectl apply -f cert-manager.yml
Create a Certificate #
# Create YML file
vi cert-manager-certificate.yml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: my-certificate
spec:
  secretName: my-certificate-secret
  dnsNames:
  - "kubernetes-pod-1.jklug.work"
  issuerRef:
    name: letsencrypt-production
    kind: ClusterIssuer
# Apply configuration
kubectl apply -f cert-manager-certificate.yml
HTTPS Ingress #
# Create YML file
vi ingress.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    cert-manager.io/cluster-issuer: letsencrypt-production
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - "kubernetes-pod-1.jklug.work"
    secretName: my-certificate-secret
  rules:
  - host: "kubernetes-pod-1.jklug.work"
    http:
      paths:
      - path: /mypath
        pathType: Prefix
        backend:
          service:
            name: my-container-deployment
            port:
              number: 8888
# Apply configuration
kubectl apply -f ingress.yml
Test the Deployment #
# Verify the DNS settings are pointing to the IP address where the Nginx Ingress Controller is exposed
kubectl get svc -n ingress-nginx
# Shell output:
NAME                                                   TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)                      AGE
my-nginx-ingress-ingress-nginx-controller-admission   ClusterIP      10.43.20.204    <none>         443/TCP                      2m28s
my-nginx-ingress-ingress-nginx-controller             LoadBalancer   10.43.189.101   37.27.23.185   80:31615/TCP,443:30594/TCP   2m28s
# Open the URL in a browser
https://kubernetes-pod-1.jklug.work/mypath
# Or test from the shell
curl https://kubernetes-pod-1.jklug.work/mypath
Kubernetes Dashboard: Nginx Ingress #
This dashboard deployment requires the “Configure Let’s Encrypt Issuer” step from the HTTPS Nginx Ingress deployment above.
DNS Entry #
# Create a Fully Qualified Domain Name DNS entry
37.27.23.185 k3s-dashboard.jklug.work
Deploy Kubernetes Dashboard #
Check latest version:
https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/
# Create a new namespace
kubectl create namespace kubernetes-dashboard
# Deploy Kubernetes Dashboard
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
Create a Service Account #
# Create yml file
vi dashboard-adminuser.yml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
# Apply settings
kubectl apply -f dashboard-adminuser.yml
Obtain the Access Token #
# Create a short-lived token for the "admin-user" service account
kubectl -n kubernetes-dashboard create token admin-user
# Alternative (legacy): read the token from the service account secret, if such a secret exists
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
# Shell output:
eyJhbGciOiJSUzI1NiIsImtpZCI6IjdKUWlXVDVNNTAzdU9DQXlyMGFqUGJHZUluOXYwR3lQQTVmSGQ1NGZfWEEifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiLCJrM3MiXSwiZXhwIjoxNzEzMTEzNzUzLCJpYXQiOjE3MTMxMTAxNTMsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhZG1pbi11c2VyIiwidWlkIjoiYTkxODc1MDctNGUzNC00NTM3LTkxNDItM2M3YjY3NWY5OWFjIn19LCJuYmYiOjE3MTMxMTAxNTMsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDphZG1pbi11c2VyIn0.WSfliH0S37dZeql2c7LQkInNLGxqX-6xvVjrK5lBQcXd2Ih7J0DymRI9pjmieBKfSczuFNf_XEhBJJuGAuVbmrIc6fc4Gd8v_msmmKHM6zsqAnOV05lrT4dmsoqQqczwlFd7S8YU08twL5uWMwLAaqM_70s06yaTzqIXTpYA_43ywesOfo_akxlmaP9dK0gLakkMaamcdZtnuOvaRJf8b7h50FrM7d2IKHWZe3nVjLym6LhjhxN8qVKOO0oPNhz8V3sEeXrO-fLPjhRV_VK5cqLHMEkl6NUkBAyJClWST5y7FE4NuHxN1eWFHFQ3ZOFWHi2AQenRhsYrrLgPG0WvKA
Create a Certificate #
# Create YML file
vi dashboard-certificate.yml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: kubernetes-dashboard-cert
  namespace: kubernetes-dashboard
spec:
  secretName: kubernetes-dashboard-cert
  dnsNames:
  - "k3s-dashboard.jklug.work"
  issuerRef:
    name: letsencrypt-production
    kind: ClusterIssuer
# Apply configuration
kubectl apply -f dashboard-certificate.yml
Create Dashboard Ingress Resource #
# Create YML file
vi dashboard-ingress.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - host: "k3s-dashboard.jklug.work"
    http:
      paths:
      - pathType: ImplementationSpecific
        path: "/"
        backend:
          service:
            name: kubernetes-dashboard
            port:
              number: 443
  tls:
  - hosts:
    - "k3s-dashboard.jklug.work"
    secretName: kubernetes-dashboard-cert
# Apply configuration
kubectl apply -f dashboard-ingress.yml
Access the Dashboard #
# Open the URL in a browser
https://k3s-dashboard.jklug.work
Useful K3s & Kubectl Commands #
Uninstall K3s #
# Uninstall K3s
/usr/local/bin/k3s-uninstall.sh
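On worker nodes the installer creates a separate uninstall script for the agent (assuming the default install path):
# Uninstall the K3s agent on worker nodes
/usr/local/bin/k3s-agent-uninstall.sh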
Restart Traefik Ingress #
# Relaunch the traefik pod by scaling down/up:
kubectl -n kube-system scale deploy traefik --replicas 0
kubectl -n kube-system scale deploy traefik --replicas 1
Pod Logs #
# List pods
kubectl get pods
# Shell output:
NAME                            READY   STATUS    RESTARTS   AGE
my-container-69f894487d-pcqbj   1/1     Running   0          35m
# Show logs of pod
kubectl logs my-container-69f894487d-pcqbj
# Shell output:
Kubernetes testing
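To keep watching the output, the logs can also be streamed or read from a previous container instance:
# Follow the pod logs
kubectl logs -f my-container-69f894487d-pcqbj
# Show the logs of the previous container instance (after a restart)
kubectl logs --previous my-container-69f894487d-pcqbj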
Links #
# Official Documentation
https://docs.k3s.io/quick-start
# Helm Releases
https://github.com/helm/helm/releases