K8s Kubespray: Kubernetes Cluster Deployment, MetalLB Configuration, Add & Remove Nodes from the Cluster, Example Deployment with LoadBalancer and NodePort Services

Kubernetes-Cluster - This article is part of a series.
Part 7: This Article

Kubespray is an open-source tool that uses Ansible playbooks to deploy and manage Kubernetes clusters.

In this tutorial I use the following nodes, all based on Debian 12:

192.168.30.70 deb-01 # Kubespray / Ansible Node
192.168.30.71 deb-02 # Controller / Master Node
192.168.30.72 deb-03 # Controller / Master Node
192.168.30.73 deb-04 # Worker Node
192.168.30.74 deb-05 # Worker Node

Prerequisites Kubernetes Nodes
#

SSH Key
#

Create an SSH key on the Kubespray node and copy it to the Kubernetes nodes:

# Create SSH key
ssh-keygen -t rsa -b 4096

# Copy the SSH key to the controller and worker nodes
ssh-copy-id debian@192.168.30.71
ssh-copy-id debian@192.168.30.72
ssh-copy-id debian@192.168.30.73
ssh-copy-id debian@192.168.30.74

Sudoers
#

Add the default user of the Kubernetes nodes to the sudoers file:

# Allow passwordless sudo on all controller and worker nodes
echo "debian ALL=(ALL) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/debian

Prerequisites Kubespray / Ansible Node
#

Install Dependencies
#

# Update package index & install dependencies
sudo apt update && sudo apt install git -y

Python, Pip & Venv
#

# Install Python3, pip & venv
sudo apt install python3 python3-pip python3-venv -y

# Verify installation / check Version
python3 --version
pip3 --version

Install Kubespray Dependencies
#

# Clone the Kubespray repository
cd && git clone https://github.com/kubernetes-sigs/kubespray.git && cd ~/kubespray

# Create a virtual environment
python3 -m venv kubespray-venv

# Activate the virtual environment
source kubespray-venv/bin/activate

# Install requirements / dependencies
pip install -r requirements.txt
# Verify the Ansible installation / check version
ansible --version

Configure Kubespray Inventory
#

Create Inventory File
#

# Copy the sample inventory configuration into a new cluster inventory
sudo cp -rfp inventory/sample inventory/jkw-cluster

# Declare an array containing node IPs
declare -a IPS=(192.168.30.71 192.168.30.72 192.168.30.73 192.168.30.74)

# Populate the hosts.yaml with the node IPs
CONFIG_FILE=inventory/jkw-cluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}

# Shell output:
DEBUG: Adding group all
DEBUG: Adding group kube_control_plane
DEBUG: Adding group kube_node
DEBUG: Adding group etcd
DEBUG: Adding group k8s_cluster
DEBUG: Adding group calico_rr
DEBUG: adding host node1 to group all
DEBUG: adding host node2 to group all
DEBUG: adding host node3 to group all
DEBUG: adding host node4 to group all
DEBUG: adding host node1 to group etcd
DEBUG: adding host node2 to group etcd
DEBUG: adding host node3 to group etcd
DEBUG: adding host node1 to group kube_control_plane
DEBUG: adding host node2 to group kube_control_plane
DEBUG: adding host node1 to group kube_node
DEBUG: adding host node2 to group kube_node
DEBUG: adding host node3 to group kube_node
DEBUG: adding host node4 to group kube_node

Change Inventory
#

# Modify the inventory file
vi inventory/jkw-cluster/hosts.yaml

Default hosts.yaml file

all:
  hosts:
    node1:
      ansible_host: 192.168.30.71
      ip: 192.168.30.71
      access_ip: 192.168.30.71
    node2:
      ansible_host: 192.168.30.72
      ip: 192.168.30.72
      access_ip: 192.168.30.72
    node3:
      ansible_host: 192.168.30.73
      ip: 192.168.30.73
      access_ip: 192.168.30.73
    node4:
      ansible_host: 192.168.30.74
      ip: 192.168.30.74
      access_ip: 192.168.30.74
  children:
    kube_control_plane:
      hosts:
        node1:
        node2:
    kube_node:
      hosts:
        node1:
        node2:
        node3:
        node4:
    etcd:
      hosts:
        node1:
        node2:
        node3:
    k8s_cluster:
      children:
        kube_control_plane:
        kube_node:
    calico_rr:
      hosts: {}

Optional: Use dedicated control plane and worker nodes

all:
  hosts:
    node1:
      ansible_host: 192.168.30.71
      ip: 192.168.30.71
      access_ip: 192.168.30.71
    node2:
      ansible_host: 192.168.30.72
      ip: 192.168.30.72
      access_ip: 192.168.30.72
    node3:
      ansible_host: 192.168.30.73
      ip: 192.168.30.73
      access_ip: 192.168.30.73
    node4:
      ansible_host: 192.168.30.74
      ip: 192.168.30.74
      access_ip: 192.168.30.74
  children:
    kube_control_plane:
      hosts:
        node1:
        node2:
    kube_node:
      hosts:
        node3:
        node4:
    etcd:
      hosts:
        node1:
        node2:
        node3:
    k8s_cluster:
      children:
        kube_control_plane:
        kube_node:
    calico_rr:
      hosts: {}
  • Etcd is a highly available distributed key-value store used by Kubernetes to store all data used to manage the cluster. In production environments, etcd hosts should ideally be dedicated to only running etcd.

Note: The cluster deployment requires an odd number of etcd hosts; this also helps prevent split-brain scenarios.
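
Optionally, verify that Ansible can reach all nodes with the new inventory before changing any cluster variables. A quick connectivity check sketch (it assumes the local user matches the remote user, as in the other Ansible commands in this article; otherwise add -u debian):

# Verify Ansible connectivity to all nodes in the inventory
cd ~/kubespray && ansible all -i inventory/jkw-cluster/hosts.yaml -m ping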

Cluster Configuration: Without MetalLB
#

k8s-cluster.yml
#

# Define cluster-related variables
vi inventory/jkw-cluster/group_vars/k8s_cluster/k8s-cluster.yml

# Define the Kubernetes version
kube_version: v1.29.4

#  Kubernetes internal network for services, unused block of space.
kube_service_addresses: 10.233.0.0/18

# This network must be unused in your network infrastructure
kube_pods_subnet: 10.233.64.0/18

# Choose network plugin (cilium, calico, kube-ovn, weave or flannel. Use cni for generic cni plugin)
# Can also be set to 'cloud', which lets the cloud provider set up appropriate routing
kube_network_plugin: calico

# Kubernetes cluster name, also will be used as DNS domain
cluster_name: k8s.jkw.local

addons.yml
#

# Open the addons configuration file
vi inventory/jkw-cluster/group_vars/k8s_cluster/addons.yml

Enable Addons: Ingress

# Nginx ingress controller deployment
ingress_nginx_enabled: true
ingress_nginx_host_network: true

Cluster Configuration: With MetalLB
#

k8s-cluster.yml
#

# Define cluster-related variables
vi inventory/jkw-cluster/group_vars/k8s_cluster/k8s-cluster.yml

Default settings:

# Define the Kubernetes version
kube_version: v1.29.4

#  Kubernetes internal network for services, unused block of space.
kube_service_addresses: 10.233.0.0/18

# This network must be unused in your network infrastructure
kube_pods_subnet: 10.233.64.0/18

# Choose network plugin (cilium, calico, kube-ovn, weave or flannel. Use cni for generic cni plugin)
# Can also be set to 'cloud', which lets the cloud provider set up appropriate routing
kube_network_plugin: calico

# Kubernetes cluster name, also will be used as DNS domain
cluster_name: k8s.jkw.local

Strict ARP:

# configure arp_ignore and arp_announce to avoid answering ARP queries from kube-ipvs0 interface
# must be set to true for MetalLB, kube-vip(ARP enabled) to work
kube_proxy_strict_arp: true
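
Once the cluster is deployed, it is possible to confirm that the strict ARP setting made it into the kube-proxy configuration. A quick check sketch (the grep pattern assumes the key name used in the rendered kube-proxy config):

# Verify strict ARP in the kube-proxy ConfigMap (run after the cluster deployment)
kubectl get configmap kube-proxy -n kube-system -o yaml | grep strictARP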

addons.yml
#

# Open the addons configuration file
vi inventory/jkw-cluster/group_vars/k8s_cluster/addons.yml

Enable Addons: MetalLB

# Enable MetalLB
metallb_enabled: true
metallb_speaker_enabled: "{{ metallb_enabled }}"
metallb_namespace: "metallb-system"
metallb_config:
  address_pools:
    primary:
      ip_range:
        - 192.168.30.240-192.168.30.250

  layer2:
    - primary

Enable Addons: Ingress

# Nginx ingress controller deployment
ingress_nginx_enabled: true
ingress_nginx_host_network: false
ingress_nginx_service_type: LoadBalancer
ingress_publish_status_address: ""

Node Settings
#

# Enable IPv4 forwarding on all controller and worker nodes
cd ~/kubespray && ansible all -i inventory/jkw-cluster/hosts.yaml -m shell -a "echo 'net.ipv4.ip_forward=1' | sudo tee -a /etc/sysctl.conf"

# Disable Swap on all controller and worker nodes
cd ~/kubespray && ansible all -i inventory/jkw-cluster/hosts.yaml -m shell -a "sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab && sudo swapoff -a"

# Disable Firewalld on all controller and worker nodes: If Firewalld is installed
cd ~/kubespray && ansible all -i inventory/jkw-cluster/hosts.yaml -m shell -a "sudo systemctl stop firewalld && sudo systemctl disable firewalld"
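
To double-check that these settings were applied, the same Ansible ad-hoc pattern can be reused. A verification sketch (sysctl -p reloads /etc/sysctl.conf; swapon --show prints nothing when swap is disabled):

# Apply the sysctl change and verify the swap state on all nodes
cd ~/kubespray && ansible all -i inventory/jkw-cluster/hosts.yaml -m shell -a "sudo sysctl -p && swapon --show"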

Deploy Kubernetes Cluster
#

# Run the Ansible playbook to start the deployment:
cd ~/kubespray && ansible-playbook -i inventory/jkw-cluster/hosts.yaml --become --become-user=root cluster.yml

Note: It took about 11 minutes to deploy the cluster.

Reset Kubernetes Cluster
#

# Run the Ansible playbook to reset the cluster:
cd ~/kubespray && ansible-playbook -i inventory/jkw-cluster/hosts.yaml --become --become-user=root reset.yml

Verify the K8s Deployment
#

Verify the Cluster Nodes
#

Log in to one of the master nodes:

# Make the Kubernetes configuration file available for a non-root user: Run the following commands as the desired non-root user
mkdir -p ~/.kube/ &&
sudo cp /root/.kube/config ~/.kube/ &&
sudo chown $(whoami):$(whoami) ~/.kube/config
# List nodes
kubectl get nodes

# Shell output
NAME    STATUS   ROLES           AGE   VERSION
node1   Ready    control-plane   11m   v1.29.5
node2   Ready    control-plane   11m   v1.29.5
node3   Ready    <none>          10m   v1.29.5
node4   Ready    <none>          10m   v1.29.5

Optional: Label the Worker Nodes
#

Labeling worker nodes is not strictly necessary for the Kubernetes cluster to function; pods can be scheduled on a node regardless of whether it is labeled as a worker.

# Label the worker nodes
kubectl label nodes node3 kubernetes.io/role=worker &&
kubectl label nodes node4 kubernetes.io/role=worker
# Verify the labels
kubectl get nodes

# Shell output:
NAME    STATUS   ROLES           AGE   VERSION
node1   Ready    control-plane   14m   v1.29.5
node2   Ready    control-plane   13m   v1.29.5
node3   Ready    worker          13m   v1.29.5
node4   Ready    worker          13m   v1.29.5
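
If needed, a label can be removed again by appending a dash to the label key:

# Remove the worker label again (note the trailing dash)
kubectl label nodes node3 kubernetes.io/role-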

List the Kubernetes Pods
#

# List pods
kubectl get pods -A
# Shell output: Without MetalLB
NAMESPACE       NAME                                       READY   STATUS    RESTARTS   AGE
ingress-nginx   ingress-nginx-controller-b75qm             1/1     Running   0          10m
ingress-nginx   ingress-nginx-controller-dnr7x             1/1     Running   0          10m
kube-system     calico-kube-controllers-68485cbf9c-krqfx   1/1     Running   0          10m
kube-system     calico-node-2mkzg                          1/1     Running   0          10m
kube-system     calico-node-44mst                          1/1     Running   0          10m
kube-system     calico-node-gtfz4                          1/1     Running   0          10m
kube-system     calico-node-mcjd7                          1/1     Running   0          10m
kube-system     coredns-69db55dd76-kld22                   1/1     Running   0          10m
kube-system     coredns-69db55dd76-wvng4                   1/1     Running   0          10m
kube-system     dns-autoscaler-6f4b597d8c-hhzhw            1/1     Running   0          10m
kube-system     kube-apiserver-node1                       1/1     Running   1          11m
kube-system     kube-apiserver-node2                       1/1     Running   1          11m
kube-system     kube-controller-manager-node1              1/1     Running   2          11m
kube-system     kube-controller-manager-node2              1/1     Running   2          11m
kube-system     kube-proxy-md877                           1/1     Running   0          10m
kube-system     kube-proxy-tqgqm                           1/1     Running   0          10m
kube-system     kube-proxy-v88xj                           1/1     Running   0          10m
kube-system     kube-proxy-wqch5                           1/1     Running   0          10m
kube-system     kube-scheduler-node1                       1/1     Running   1          11m
kube-system     kube-scheduler-node2                       1/1     Running   1          11m
kube-system     nginx-proxy-node3                          1/1     Running   0          10m
kube-system     nginx-proxy-node4                          1/1     Running   0          10m
kube-system     nodelocaldns-2mc7j                         1/1     Running   0          10m
kube-system     nodelocaldns-hs68l                         1/1     Running   0          10m
kube-system     nodelocaldns-k7hr9                         1/1     Running   0          10m
kube-system     nodelocaldns-qxcb5                         1/1     Running   0          10m
# Shell output: With MetalLB
NAMESPACE        NAME                                       READY   STATUS    RESTARTS      AGE
ingress-nginx    ingress-nginx-controller-54m5n             1/1     Running   0             47m
ingress-nginx    ingress-nginx-controller-7cbss             1/1     Running   0             46m
kube-system      calico-kube-controllers-68485cbf9c-dh8wx   1/1     Running   1 (10h ago)   10h
kube-system      calico-node-8wtt2                          1/1     Running   1 (10h ago)   10h
kube-system      calico-node-pf9jz                          1/1     Running   1 (10h ago)   10h
kube-system      calico-node-px5pd                          1/1     Running   1 (10h ago)   10h
kube-system      calico-node-zqdcg                          1/1     Running   1 (10h ago)   10h
kube-system      coredns-69db55dd76-l4hgb                   1/1     Running   1 (10h ago)   10h
kube-system      coredns-69db55dd76-pprj5                   1/1     Running   1 (10h ago)   10h
kube-system      dns-autoscaler-6f4b597d8c-px5c4            1/1     Running   1 (10h ago)   10h
kube-system      kube-apiserver-node1                       1/1     Running   2 (66m ago)   10h
kube-system      kube-apiserver-node2                       1/1     Running   3 (66m ago)   10h
kube-system      kube-controller-manager-node1              1/1     Running   3 (10h ago)   10h
kube-system      kube-controller-manager-node2              1/1     Running   3 (10h ago)   10h
kube-system      kube-proxy-8qjkd                           1/1     Running   0             29m
kube-system      kube-proxy-bm4dm                           1/1     Running   0             29m
kube-system      kube-proxy-gbt2j                           1/1     Running   0             29m
kube-system      kube-proxy-jrk7r                           1/1     Running   0             29m
kube-system      kube-scheduler-node1                       1/1     Running   2 (10h ago)   10h
kube-system      kube-scheduler-node2                       1/1     Running   2 (10h ago)   10h
kube-system      nginx-proxy-node3                          1/1     Running   1 (10h ago)   10h
kube-system      nginx-proxy-node4                          1/1     Running   1 (10h ago)   10h
kube-system      nodelocaldns-226sc                         1/1     Running   1 (10h ago)   10h
kube-system      nodelocaldns-6bc8l                         1/1     Running   1 (10h ago)   10h
kube-system      nodelocaldns-hnkx2                         1/1     Running   1 (10h ago)   10h
kube-system      nodelocaldns-mcvxl                         1/1     Running   1 (10h ago)   10h
metallb-system   controller-666f99f6ff-6plgw                1/1     Running   0             47m
metallb-system   speaker-729l7                              1/1     Running   0             47m
metallb-system   speaker-b58wx                              1/1     Running   0             47m
metallb-system   speaker-jf4xb                              1/1     Running   0             47m
metallb-system   speaker-lx9wt                              1/1     Running   0             47m

Verify & Debug MetalLB
#

Verify the Installation
#

# List all pods in the "metallb-system" namespace
kubectl get pods -n metallb-system

# Shell output:
NAME                          READY   STATUS    RESTARTS   AGE
controller-666f99f6ff-6plgw   1/1     Running   0          33m
speaker-729l7                 1/1     Running   0          33m
speaker-b58wx                 1/1     Running   0          33m
speaker-jf4xb                 1/1     Running   0          33m
speaker-lx9wt                 1/1     Running   0          33m
# Verify the MetalLB daemonsets in the "metallb-system" namespace
kubectl get daemonset -n metallb-system

# Shell output:
NAME      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
speaker   4         4         4       4            4           kubernetes.io/os=linux   34m
# Verify the MetalLB deployments in the "metallb-system" namespace
kubectl get deployment -n metallb-system

# Shell output:
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
controller   1/1     1            1           34m
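
Kubespray renders the address pool and layer 2 settings from addons.yml into MetalLB custom resources. A quick check of those objects (assuming a CRD-based MetalLB deployment, as used by current Kubespray versions):

# List the MetalLB address pools and L2 advertisements
kubectl get ipaddresspools.metallb.io,l2advertisements.metallb.io -n metallb-system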

Logs
#

Find Labels
#

# List pod labels for the "metallb-system" namespace
kubectl get pods -n metallb-system --show-labels

# Shell output
NAME                          READY   STATUS    RESTARTS   AGE   LABELS
controller-666f99f6ff-6plgw   1/1     Running   0          34m   app=metallb,component=controller,pod-template-hash=666f99f6ff
speaker-729l7                 1/1     Running   0          34m   app=metallb,component=speaker,controller-revision-hash=774668f5f8,pod-template-generation=1
speaker-b58wx                 1/1     Running   0          34m   app=metallb,component=speaker,controller-revision-hash=774668f5f8,pod-template-generation=1
speaker-jf4xb                 1/1     Running   0          34m   app=metallb,component=speaker,controller-revision-hash=774668f5f8,pod-template-generation=1
speaker-lx9wt                 1/1     Running   0          34m   app=metallb,component=speaker,controller-revision-hash=774668f5f8,pod-template-generation=1

List Logs
#

Controller logs

# List controller logs
kubectl logs -n metallb-system -l app=metallb,component=controller

# Alternative: List controller logs with pod name
kubectl logs -n metallb-system controller-666f99f6ff-6plgw

Speaker logs

# List speaker logs
kubectl logs -n metallb-system -l app=metallb,component=speaker



Add New Node to the Cluster
#

Overview
#

I’m adding the following Debian 12 based VM as a worker node to the cluster:

192.168.30.75 deb-06 # New Worker Node

Prepare the New Node
#

Sudoers
#

# Allow passwordless sudo on the new node
echo "debian ALL=(ALL) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/debian

Kubespray Host
#

SSH Key
#

# Copy the SSH key to the new node
ssh-copy-id debian@192.168.30.75

Virtual Environment
#

# Activate the virtual environment
cd ~/kubespray && source kubespray-venv/bin/activate

Update the Inventory File
#

# Modify the inventory file
vi inventory/jkw-cluster/hosts.yaml

all:
  hosts:
    node1:
      ansible_host: 192.168.30.71
      ip: 192.168.30.71
      access_ip: 192.168.30.71
    node2:
      ansible_host: 192.168.30.72
      ip: 192.168.30.72
      access_ip: 192.168.30.72
    node3:
      ansible_host: 192.168.30.73
      ip: 192.168.30.73
      access_ip: 192.168.30.73
    node4:
      ansible_host: 192.168.30.74
      ip: 192.168.30.74
      access_ip: 192.168.30.74
    node5: # Add the new node
      ansible_host: 192.168.30.75 # Define the IP
      ip: 192.168.30.75 # Define the IP
      access_ip: 192.168.30.75 # Define the IP
  children:
    kube_control_plane:
      hosts:
        node1:
        node2:
    kube_node:
      hosts:
        node3:
        node4:
        node5: # Define the new node as worker node
    etcd:
      hosts:
        node1:
        node2:
        node3:
    k8s_cluster:
      children:
        kube_control_plane:
        kube_node:
    calico_rr:
      hosts: {}

Run Kubespray: Add the new Node
#

# Run the Ansible playbook to start the deployment:
cd ~/kubespray && ansible-playbook -i inventory/jkw-cluster/hosts.yaml --become --become-user=root --limit=node5 cluster.yml
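
Note: Kubespray also ships a dedicated scale.yml playbook for adding worker nodes; it skips some tasks that only matter for a full cluster run. A possible alternative to the command above:

# Alternative: Add the new worker node with the scale.yml playbook
cd ~/kubespray && ansible-playbook -i inventory/jkw-cluster/hosts.yaml --become --become-user=root --limit=node5 scale.yml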

Label the new Node
#

# Optional: Label the new node as a worker node
kubectl label nodes node5 kubernetes.io/role=worker

Verify the Cluster / Added Node
#

# List Kubernetes cluster nodes
kubectl get nodes

# Shell output:
NAME    STATUS   ROLES           AGE    VERSION
node1   Ready    control-plane   41h    v1.29.5
node2   Ready    control-plane   41h    v1.29.5
node3   Ready    worker          41h    v1.29.5
node4   Ready    worker          41h    v1.29.5
node5   Ready    worker          7m6s   v1.29.5
# List Kubernetes cluster nodes: More details
kubectl get nodes -o wide

# Shell output:
NAME    STATUS   ROLES           AGE     VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION   CONTAINER-RUNTIME
node1   Ready    control-plane   41h     v1.29.5   192.168.30.71   <none>        Debian GNU/Linux 12 (bookworm)   6.1.0-21-amd64   containerd://1.7.16
node2   Ready    control-plane   41h     v1.29.5   192.168.30.72   <none>        Debian GNU/Linux 12 (bookworm)   6.1.0-21-amd64   containerd://1.7.16
node3   Ready    worker          41h     v1.29.5   192.168.30.73   <none>        Debian GNU/Linux 12 (bookworm)   6.1.0-21-amd64   containerd://1.7.16
node4   Ready    worker          41h     v1.29.5   192.168.30.74   <none>        Debian GNU/Linux 12 (bookworm)   6.1.0-21-amd64   containerd://1.7.16
node5   Ready    worker          7m32s   v1.29.5   192.168.30.75   <none>        Debian GNU/Linux 12 (bookworm)   6.1.0-21-amd64   containerd://1.7.16

Remove a Node from the Cluster
#

Overview
#

I’m removing the previously added VM from the Kubernetes cluster:

192.168.30.75 deb-06 # Worker node to be removed

Drain the Node
#

# Drain the node: Evict all pods running on the node
kubectl drain node5 --ignore-daemonsets --delete-emptydir-data
  • --ignore-daemonsets Ignores DaemonSet-managed pods, which cannot be evicted and remain managed by their DaemonSets
  • --delete-emptydir-data Is required if any pods use emptyDir volumes, as the data in these volumes is deleted when the pods are evicted

Remove Node from Cluster
#

# Remove the node from the cluster
kubectl delete node node5
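
Optionally, Kubespray's remove-node.yml playbook can be used instead of the manual drain and delete; it also cleans up the Kubernetes components on the removed host. A sketch, assuming node5 is still reachable:

# Alternative: Remove the node with the Kubespray remove-node.yml playbook
cd ~/kubespray && ansible-playbook -i inventory/jkw-cluster/hosts.yaml --become --become-user=root -e node=node5 remove-node.yml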

Verify the Cluster
#

# List the Kubernetes cluster nodes
kubectl get nodes

# Shell output:
NAME    STATUS   ROLES           AGE   VERSION
node1   Ready    control-plane   41h   v1.29.5
node2   Ready    control-plane   41h   v1.29.5
node3   Ready    worker          41h   v1.29.5
node4   Ready    worker          41h   v1.29.5

Kubespray Host
#

Update the Inventory File
#

# Modify the inventory file
vi inventory/jkw-cluster/hosts.yaml

Remove the entries for node5:

all:
  hosts:
    node1:
      ansible_host: 192.168.30.71
      ip: 192.168.30.71
      access_ip: 192.168.30.71
    node2:
      ansible_host: 192.168.30.72
      ip: 192.168.30.72
      access_ip: 192.168.30.72
    node3:
      ansible_host: 192.168.30.73
      ip: 192.168.30.73
      access_ip: 192.168.30.73
    node4:
      ansible_host: 192.168.30.74
      ip: 192.168.30.74
      access_ip: 192.168.30.74
    node5: # Delete this node5 entry and its three lines below
      ansible_host: 192.168.30.75
      ip: 192.168.30.75
      access_ip: 192.168.30.75
  children:
    kube_control_plane:
      hosts:
        node1:
        node2:
    kube_node:
      hosts:
        node3:
        node4:
        node5: # Delete this entry
    etcd:
      hosts:
        node1:
        node2:
        node3:
    k8s_cluster:
      children:
        kube_control_plane:
        kube_node:
    calico_rr:
      hosts: {}



Test Deployment: Run Pod with LoadBalancer Service
#

Deploy Pod and LoadBalancer
#

# Run container: Example
kubectl run my-container --image=jueklu/container-2 --port=8080 --restart=Never --labels app=testing

# Create a LoadBalancer service to expose the pod "my-container"
kubectl expose pod/my-container --port=8080 --target-port=8080 --type=LoadBalancer --name=my-container-service
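
For reference, the same service can be created declaratively. A minimal manifest sketch equivalent to the expose command above:

# Declarative equivalent of the "kubectl expose" command above
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: my-container-service
spec:
  type: LoadBalancer
  selector:
    app: testing
  ports:
    - port: 8080
      targetPort: 8080
EOF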

Verify the Deployment
#

# List the pods
kubectl get pods

# Shell output:
NAME           READY   STATUS    RESTARTS   AGE
my-container   1/1     Running   0          14m
# List LoadBalancer service details
kubectl get svc my-container-service

# Shell output:
NAME                   TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)          AGE
my-container-service   LoadBalancer   10.233.63.200   192.168.30.241   8080:30359/TCP   15m
# List LoadBalancer service details: More
kubectl describe svc my-container-service

# Shell output:
Name:                     my-container-service
Namespace:                default
Labels:                   app=testing
Annotations:              metallb.universe.tf/ip-allocated-from-pool: primary
Selector:                 app=testing
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.233.63.200
IPs:                      10.233.63.200
LoadBalancer Ingress:     192.168.30.241
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30359/TCP
Endpoints:                10.233.74.68:8080
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type    Reason        Age   From                Message
  ----    ------        ----  ----                -------
  Normal  IPAllocated   16m   metallb-controller  Assigned IP ["192.168.30.241"]
  Normal  nodeAssigned  16m   metallb-speaker     announcing from node "node4" with protocol "layer2"

Test the Deployment
#

# Open the URL in a browser
192.168.30.241:8080
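
Alternatively, test the MetalLB-assigned IP from the command line:

# Test the LoadBalancer service with curl
curl http://192.168.30.241:8080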

Delete the Deployment
#

# Delete the deployment
kubectl delete pod my-container

# Delete the LoadBalancer service
kubectl delete svc my-container-service

Test Deployment with NodePort Service
#

Pod Deployment & NodePort Service
#

# Create a deployment
kubectl create deployment mycontainer --image=jueklu/container-2 --replicas=2

# Create a NodePort service for the deployment
kubectl expose deployment mycontainer --type NodePort --port=8080 --name mycontainer-nodeport
  • NodePort Exposes the deployment on a static port on each node in the Kubernetes cluster. Kubernetes automatically assigns a port from the default NodePort range (30000-32767); see the example below.
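
A sketch of reading the assigned port directly from the service object with a jsonpath query:

# Read the NodePort that Kubernetes assigned to the service
kubectl get svc mycontainer-nodeport -o jsonpath='{.spec.ports[0].nodePort}'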

Optional: Scale the Deployment
#

# Scale the Deployment
kubectl scale deployment mycontainer --replicas=4

Verify / List Resources
#

# List all deployments in the current Kubernetes namespace
kubectl get deployments.apps

# Shell output:
NAME          READY   UP-TO-DATE   AVAILABLE   AGE
mycontainer   4/4     4            4           2m27s
# List the pods
kubectl get pods

# Shell output:
NAME                           READY   STATUS    RESTARTS   AGE
mycontainer-5bc6b6b7f5-4xb5h   1/1     Running   0          30s
mycontainer-5bc6b6b7f5-9lqvd   1/1     Running   0          2m43s
mycontainer-5bc6b6b7f5-fgxx8   1/1     Running   0          30s
mycontainer-5bc6b6b7f5-v4h88   1/1     Running   0          2m43s

NodePort Service Details
#

Use the port of the NodePort service to access the deployment:

# List NodePort service details
kubectl get svc mycontainer-nodeport

# Shell output:
NAME                   TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
mycontainer-nodeport   NodePort   10.233.6.52   <none>        8080:31303/TCP   2m23s

Access the Deployment
#

Access the deployment from any of the master or worker nodes:

192.168.30.71:31303 # Controller Node
192.168.30.72:31303 # Controller Node
192.168.30.73:31303 # Worker Node
192.168.30.74:31303 # Worker Node

Use curl to verify the different pod hosts:

debian@deb-01:~$ curl 192.168.30.71:31303
Container runs on: mycontainer-5bc6b6b7f5-fgxx8
debian@deb-01:~$ curl 192.168.30.71:31303
Container runs on: mycontainer-5bc6b6b7f5-4xb5h
debian@deb-01:~$ curl 192.168.30.71:31303
Container runs on: mycontainer-5bc6b6b7f5-v4h88
debian@deb-01:~$ curl 192.168.30.71:31303
Container runs on: mycontainer-5bc6b6b7f5-9lqvd
debian@deb-01:~$ curl 192.168.30.71:31303
Container runs on: mycontainer-5bc6b6b7f5-fgxx8

Note: More details about this container deployment can be found in my blog post “Managed Kubernetes Services: AWS Elastic Kubernetes Service (EKS)”

Delete the Deployment
#

# Delete the deployment
kubectl delete deployment mycontainer

# Delete the NodePort service
kubectl delete service mycontainer-nodeport

Helm
#

Install Helm
#

# Install Helm with script
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 &&
chmod +x get_helm.sh &&
./get_helm.sh
# Verify the installation / check version
helm version
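
A quick way to confirm that Helm can talk to the cluster (it uses the same kubeconfig as kubectl):

# List Helm releases in all namespaces
helm list -A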

More
#

Shut down the Cluster
#

# Drain the worker nodes
kubectl drain node3 --ignore-daemonsets --delete-emptydir-data
kubectl drain node4 --ignore-daemonsets --delete-emptydir-data
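
After draining, the node VMs can be powered off. A sketch using the same Ansible ad-hoc pattern as above; this shuts down all nodes in the inventory with a one-minute delay:

# Shut down all controller and worker nodes
cd ~/kubespray && ansible all -i inventory/jkw-cluster/hosts.yaml -m shell -a "sudo shutdown -h +1"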

Deactivate the Venv
#

Deactivate the virtual environment on the Kubespray node:

# Deactivate the virtual environment
cd ~/kubespray && deactivate

Links
#

# GitHub
https://github.com/kubernetes-sigs/kubespray

# GitHub: Getting Started
https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting_started/getting-started.md

# GitHub: MetalLB
https://github.com/kubernetes-sigs/kubespray/blob/master/docs/ingress/metallb.md

# Official Documentation
https://kubespray.io/

# MetalLB Installation
https://metallb.universe.tf/installation/

# MetalLB GitHub
https://github.com/metallb/metallb