
Automated K8s Cluster - Cluster API vSphere (CAPV): Deploy a Kubernetes Cluster on vSphere with Cluster API from a KIND Management Cluster


Overview
#

In this tutorial I’m deploying a kubeadm-based Kubernetes cluster on vSphere with the Cluster API Provider vSphere (CAPV). Here are the steps:

  • Deploy a KIND cluster that serves as the management cluster

  • Install the Cluster API Provider for vSphere (CAPV)

  • Use Cluster API to deploy a Kubernetes cluster on vSphere

  • Deploy a CNI (Calico) to enable networking

This was my first attempt to deploy a Kubernetes cluster with Cluster API, so I just deployed a small cluster consisting of one controller and one worker node. The VM IPs are assigned via DHCP.


I’m using the following environment for the deployment:

192.168.70.9  vcsa1.vsphere.intern # VMware vCenter Server
192.168.70.5  # Ubuntu 24.04 / KIND VM
192.168.70.55 # Virtual IP for the controller node load balancer

  • KIND VM: Ubuntu 24.04, 4 CPU, 4 GB RAM, Docker installed
  • VMware vCenter version 8

Prerequisites
#

DNS Entry
#

Make sure the server where KIND will be deployed can resolve the vSphere domain name:

# Create the following DNS / hosts entry
192.168.70.9 vcsa1.vsphere.intern
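
On Ubuntu this can simply be a hosts-file entry; a minimal sketch to append it and verify the resolution:

# Append the entry to /etc/hosts and verify resolution
echo "192.168.70.9 vcsa1.vsphere.intern" | sudo tee -a /etc/hosts
getent hosts vcsa1.vsphere.intern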

Kind, Clusterctl & Kubectl
#

Install Kind, Clusterctl & Kubectl
#

Check for the latest versions:

# KIND
https://kind.sigs.k8s.io/docs/user/quick-start

# Cluster API
https://github.com/kubernetes-sigs/cluster-api/releases/latest

# Install KIND
[ $(uname -m) = x86_64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.24.0/kind-linux-amd64 &&
chmod +x kind &&
sudo mv kind /usr/local/bin/kind

# Install ClusterAPI CLI (clusterctl)
wget https://github.com/kubernetes-sigs/cluster-api/releases/download/v1.8.3/clusterctl-linux-amd64
chmod +x clusterctl-linux-amd64
sudo mv clusterctl-linux-amd64 /usr/local/bin/clusterctl

# Install Kubectl
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl" &&
chmod +x kubectl &&
sudo mv kubectl /usr/local/bin/

Verify Installation
#

Verify KIND installation:

# Verify installation / check version
kind version

# Shell output:
kind v0.24.0 go1.22.6 linux/amd64

Verify Clusterctl installation:

# Verify installation / check version
clusterctl version

# Shell output:
clusterctl version: &version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.3", GitCommit:"945c938ce3e093c71950e022de4253373f911ae8", GitTreeState:"clean", BuildDate:"2024-09-10T16:29:18Z", GoVersion:"go1.22.7", Compiler:"gc", Platform:"linux/amd64"}

Verify Kubectl installation:

# Verify installation / check version
kubectl version

# Shell output:
Client Version: v1.31.1
Kustomize Version: v5.4.2

Add vSphere Root CA Certificate
#

Download vSphere Certificate
#

# Download vSphere Root CA Certificate
wget https://vcsa1.vsphere.intern/certs/download.zip --no-check-certificate

# Install unzip
sudo apt install unzip

# Unzip the certificates
unzip download.zip

Install vSphere Certificate
#

# Create a folder for the certificate
sudo mkdir /usr/share/ca-certificates/vsphere

# Copy the certificate
sudo cp certs/lin/f093c9a0.0 /usr/share/ca-certificates/vsphere

# Rename the certificate with a ".crt" extension (the hash filename varies per environment)
sudo mv /usr/share/ca-certificates/vsphere/f093c9a0.0 /usr/share/ca-certificates/vsphere/94dfc8ac.crt

# Add the vSphere certificate to the trust store
sudo dpkg-reconfigure ca-certificates
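
If you prefer a non-interactive alternative to dpkg-reconfigure, the certificate can be registered in /etc/ca-certificates.conf directly; a sketch, assuming the filename created above:

# Register the certificate and rebuild the trust store
echo "vsphere/94dfc8ac.crt" | sudo tee -a /etc/ca-certificates.conf
sudo update-ca-certificates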

Test the TLS Encryption
#

# Test the TLS encryption with curl
curl https://vcsa1.vsphere.intern/

# Shell output:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" lang="en">
 <head>
...

vSphere VM Template
#

Download OVA File
#

Download a Cluster API vSphere OVA template for the Kubernetes cluster deployment:

# OVA Templates
https://github.com/kubernetes-sigs/cluster-api-provider-vsphere/blob/main/README.md#kubernetes-versions-with-published-ovas

# For example version 1.28
https://github.com/kubernetes-sigs/cluster-api-provider-vsphere/releases/tag/templates/v1.28.0

# Download the OVA template
https://github.com/kubernetes-sigs/cluster-api-provider-vsphere/releases/download/templates%2Fv1.28.0/ubuntu-2004-kube-v1.28.0.ova

Deploy VM from OVA Template
#

  • Create a folder for the template, for example “Template”

  • Right-click on the “Template” folder

  • Select “Deploy OVF Template”

  • Deploy the ubuntu-2004-kube-v1.28.0.ova template


Convert VM to Template
#

  • Select the newly created “ubuntu-2004-kube-v1.28.0” VM

  • Right click on the VM

  • Select “Template” > “Convert to Template”

  • Click “Yes”
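
Both the OVA import and the template conversion can also be scripted with VMware’s govc CLI instead of the vSphere UI; a sketch, assuming govc is installed and the GOVC_URL, GOVC_USERNAME and GOVC_PASSWORD environment variables point to the vCenter:

# Import the OVA into the "Template" folder
govc import.ova -folder Template ubuntu-2004-kube-v1.28.0.ova

# Convert the resulting VM to a template
govc vm.markastemplate ubuntu-2004-kube-v1.28.0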


Create vSphere VM Folder
#

Create a vSphere folder for the Kubernetes VMs; mine is called k8s (this can also be scripted with govc, as shown below):

  • Right-click on (Datacenter) > “New Folder”

  • Select “New VM and Template Folder…”
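
A govc sketch for the same step, assuming the datacenter is named "Datacenter":

# Create the VM folder
govc folder.create /Datacenter/vm/k8s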



KIND Management Cluster
#

Create KIND Management Cluster
#

# Create KIND cluster
kind create cluster --name management-cluster

# Shell output:
Creating cluster "management-cluster" ...
 ✓ Ensuring node image (kindest/node:v1.31.0) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-management-cluster"
You can now use your cluster with:

kubectl cluster-info --context kind-management-cluster

Have a nice day! 👋
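
KIND defaults to the node image that ships with the installed KIND release; to make the deployment reproducible, the image can be pinned explicitly. A sketch, the tag being an example:

# Optionally pin the node image version
kind create cluster --name management-cluster --image kindest/node:v1.31.0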

Verify the KIND Cluster
#

# List KIND clusters
kind get clusters

# Shell output:
management-cluster

# List KIND cluster nodes
kubectl get nodes

# Shell output:
NAME                               STATUS   ROLES           AGE   VERSION
management-cluster-control-plane   Ready    control-plane   39s   v1.31.0

Kubernetes Cluster
#

Clusterctl Config File
#

# Create the default folder for the Cluster API configuration
mkdir -p ~/.cluster-api

# Create the configuration file
vi ~/.cluster-api/clusterctl.yaml

# vSphere Credentials
VSPHERE_USERNAME: "Administrator@vsphere.intern"
VSPHERE_PASSWORD: "my-secure-pw"

# vSphere Resources
VSPHERE_SERVER: "vcsa1.vsphere.intern"
VSPHERE_DATACENTER: "Datacenter"
VSPHERE_DATASTORE: "datastore1"
VSPHERE_NETWORK: "VM Network"

VSPHERE_RESOURCE_POOL: "*/Resources"
VSPHERE_FOLDER: "k8s"

VSPHERE_STORAGE_POLICY: ""
VSPHERE_TLS_THUMBPRINT: "0D:C8:B6:CB:9C:76:10:61:3F:40:49:74:BA:79:AC:96:23:01:9A:B9"


# vSphere VM Template
VSPHERE_TEMPLATE: "ubuntu-2004-kube-v1.28.0"

# Public SSH key to access the VMs
VSPHERE_SSH_AUTHORIZED_KEY: "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDWRm3X/zXtzB8ScSY6vIjxdse2/x16/oHUewvfo11CCq6B5iABrUo+X7pYXYWm2JohtB3TRCA0OlT1E/1QgUMLz9MZViwvihzRIfzVnGDx3JqJefEtEYqBMAZlPXmfD+liPeiJcNedb4PCHk8BNTza72Bb5KPzbxyvt5Qz27sSOCiuvpLgf2PLDP2KPtKi8owVnJ1dCwGzZ2Y66GHxTmrrtdrIjAboOxRa4FclRshS0rkYICrIdVXpbC21BAUZdxeOYty+IgCTZyB2pAfk8i44QqczzVf2LNUwmUNhMuyeijBtF4JEV4ds2MmO/KX7VPFS9ENFdTFpav5Kijws6VNHedpnNzp3ECFHi5SoC/S8nkIESjMu8MmBkeHnA1oCONq08fK24138pAAPxrOaJF4A1mx6zLbDkLHTKDoym/qF3vRoSE5AUrC4JqAo291oTXUDsAGT5CC7czgyjwoHiXyUnxM6PTPAo3tNYVixEQR9yR3YOFhc6TmSaM4XRFPbVU2SUej4TH50Ipqycs4n8n2zX4SfQvbCAdUDyeb6FRtNTQhpWAY/XmtcUtbNW5Xvz/gfi1hzXUj3b0KVSVYX7ok83sOaHgXOr/4ZuexID51i3SAzXjJPe+Hnk6BNxWrTlZEN9laeOfAay7VeidsjT76zNHJFrwUpD1E8ADbVPguzcQ== ubuntu@ubuntu1"

# Kubernetes
KUBERNETES_VERSION: "v1.28.0"
CPI_IMAGE_K8S_VERSION: "v1.28.0"
VSPHERE_CONTROL_PLANE_ENDPOINT: "192.168.70.55"
CONTROL_PLANE_ENDPOINT_IP: "192.168.70.55"

  • VSPHERE_TLS_THUMBPRINT: Copy the vSphere SHA-1 fingerprint:

# List the vSphere fingerprint
openssl s_client -connect vcsa1.vsphere.intern:443 2>/dev/null | openssl x509 -noout -fingerprint -sha1

# Shell output:
sha1 Fingerprint=0D:C8:B6:CB:9C:76:10:61:3F:40:49:74:BA:79:AC:96:23:01:9A:B9

Initialize ClusterAPI with vSphere Provider
#

# Initialize ClusterAPI with vSphere Provider
clusterctl init --infrastructure vsphere

# Shell output:
Fetching providers
Installing cert-manager version="v1.15.3"
Waiting for cert-manager to be available...
Installing provider="cluster-api" version="v1.8.3" targetNamespace="capi-system"
Installing provider="bootstrap-kubeadm" version="v1.8.3" targetNamespace="capi-kubeadm-bootstrap-system"
Installing provider="control-plane-kubeadm" version="v1.8.3" targetNamespace="capi-kubeadm-control-plane-system"
Installing provider="infrastructure-vsphere" version="v1.11.1" targetNamespace="capv-system"

Your management cluster has been initialized successfully!

You can now create your first workload cluster by running the following:

  clusterctl generate cluster [name] --kubernetes-version [version] | kubectl apply -f -

Create Kubernetes Cluster
#

Create cluster manifest:

# Create a manifest for the new cluster
clusterctl generate cluster my-cluster \
  --infrastructure vsphere \
  --kubernetes-version v1.28.0 \
  --control-plane-machine-count 1 \
  --worker-machine-count 1 > k8s-cluster.yaml
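
If clusterctl complains about missing variables, you can list everything the vSphere template expects and compare it against clusterctl.yaml; a sketch:

# List the variables the cluster template expects
clusterctl generate cluster my-cluster --infrastructure vsphere --list-variables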

Adapt the cluster manifest:

# Reduce the VM memory and disk size
sed -i "s/memoryMiB: 8192/memoryMiB: 4096/g" k8s-cluster.yaml
sed -i "s/diskGiB: 25/diskGiB: 20/g" k8s-cluster.yaml

# Inspect and make any changes
vi k8s-cluster.yaml
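
A quick way to confirm the sed substitutions took effect before deploying; a sketch:

# Verify the adjusted sizing values
grep -nE "memoryMiB|diskGiB|numCPUs" k8s-cluster.yaml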

Deploy the cluster:

# Create the workload cluster in the current namespace on the management cluster
kubectl apply -f k8s-cluster.yaml

# Shell output:
cluster.cluster.x-k8s.io/my-cluster created
vspherecluster.infrastructure.cluster.x-k8s.io/my-cluster created
vspheremachinetemplate.infrastructure.cluster.x-k8s.io/my-cluster created
vspheremachinetemplate.infrastructure.cluster.x-k8s.io/my-cluster-worker created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/my-cluster created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/my-cluster-md-0 created
machinedeployment.cluster.x-k8s.io/my-cluster-md-0 created
clusterresourceset.addons.cluster.x-k8s.io/my-cluster-crs-0 created
secret/my-cluster created
secret/vsphere-config-secret created
configmap/csi-manifests created
secret/cloud-provider-vsphere-credentials created
configmap/cpi-manifests created

Verify Cluster Deployment
#

Verify CAPI pods
#

# List pods
kubectl get pods -A

# Shell output
NAMESPACE                           NAME                                                             READY   STATUS    RESTARTS   AGE
capi-kubeadm-bootstrap-system       capi-kubeadm-bootstrap-controller-manager-67dd7486c5-kqxw4       1/1     Running   0          10m
capi-kubeadm-control-plane-system   capi-kubeadm-control-plane-controller-manager-776695f658-smg2t   1/1     Running   0          10m
capi-system                         capi-controller-manager-667d8cf6bd-zpggt                         1/1     Running   0          10m
capv-system                         capv-controller-manager-56494f6d86-vbkjq                         1/1     Running   0          10m
cert-manager                        cert-manager-7fbbc65b49-n9ksx                                    1/1     Running   0          10m
cert-manager                        cert-manager-cainjector-6664fc84f6-9cwlw                         1/1     Running   0          10m
cert-manager                        cert-manager-webhook-59598898fd-zwr5b                            1/1     Running   0          10m
kube-system                         coredns-6f6b679f8f-mhs52                                         1/1     Running   0          94m
kube-system                         coredns-6f6b679f8f-zw4cf                                         1/1     Running   0          94m
kube-system                         etcd-management-cluster-control-plane                            1/1     Running   0          94m
kube-system                         kindnet-gkdk4                                                    1/1     Running   0          94m
kube-system                         kube-apiserver-management-cluster-control-plane                  1/1     Running   0          94m
kube-system                         kube-controller-manager-management-cluster-control-plane         1/1     Running   0          94m
kube-system                         kube-proxy-zpbhd                                                 1/1     Running   0          94m
kube-system                         kube-scheduler-management-cluster-control-plane                  1/1     Running   0          94m
local-path-storage                  local-path-provisioner-57c5987fd4-vfb5g

Check Cluster Status
#

# Check cluster status
kubectl get cluster my-cluster -o yaml

# Shell output: (Wait a while)
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
            {"apiVersion":"cluster.x-k8s.io/v1beta1","kind":"Cluster","metadata":{"annotations":{},"labels":{"cluster.x-k8s.io/cluster-name":"my-cluster"},"name":"my-cluster","namespace":"default"},"spec":{"clusterNetwork":{"pods":{"cidrBlocks":["192.168.0.0/16"]}},"controlPlaneRef":{"apiVersion":"controlplane.cluster.x-k8s.io/v1beta1","kind":"KubeadmControlPlane","name":"my-cluster"},"infrastructureRef":{"apiVersion":"infrastructure.cluster.x-k8s.io/v1beta1","kind":"VSphereCluster","name":"my-cluster"}}}
  creationTimestamp: "2024-09-23T18:25:40Z"
  finalizers:
  - cluster.cluster.x-k8s.io
  generation: 2
  labels:
    cluster.x-k8s.io/cluster-name: my-cluster
  name: my-cluster
  namespace: default
  resourceVersion: "13274"
  uid: 9457b96c-fe1a-496c-957f-4c2805a90f3e
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
      - 192.168.0.0/16
  controlPlaneEndpoint:
    host: 192.168.70.55
    port: 6443
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: my-cluster
    namespace: default
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: VSphereCluster
    name: my-cluster
    namespace: default
status:
  conditions:
  - lastTransitionTime: "2024-09-23T18:26:22Z"
    status: "True"
    type: Ready
  - lastTransitionTime: "2024-09-23T18:26:21Z"
    status: "True"
    type: ControlPlaneInitialized
  - lastTransitionTime: "2024-09-23T18:26:22Z"
    status: "True"
    type: ControlPlaneReady
  - lastTransitionTime: "2024-09-23T18:25:40Z"
    status: "True"
    type: InfrastructureReady
  infrastructureReady: true
  observedGeneration: 2
  phase: Provisioned

CAPV Controller Logs
#

# Check capv-controller-manager logs
kubectl logs -n capv-system $(kubectl get pods -n capv-system -l control-plane=controller-manager -o name)

List Virtual Machines
#

# List the Cluster API machines (the Kubernetes cluster's vSphere VMs)
kubectl get machines

# Shell output:
NAME                          CLUSTER      NODENAME                      PROVIDERID                                       PHASE     AGE   VERSION
my-cluster-md-0-zbqzb-j9jkq   my-cluster   my-cluster-md-0-zbqzb-j9jkq   vsphere://4219b762-72f4-61b4-4bbe-40931e7263c2   Running   5m    v1.28.0
my-cluster-rlpz2              my-cluster   my-cluster-rlpz2              vsphere://42194446-da10-00fe-8b51-36e711683a1a   Running   5m    v1.28.0
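
Provisioning takes a few minutes; to follow the machines as they move from "Provisioning" to "Running", a sketch:

# Watch the machine phases
kubectl get machines -w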

Verify the Cluster
#

# List clusters
kubectl get clusters

# Shell output:
NAME         CLUSTERCLASS   PHASE         AGE     VERSION
my-cluster                  Provisioned   7m33s

Retrieve Kubeconfig
#

# Save the Kubeconfig file for the new cluster
clusterctl get kubeconfig my-cluster > my-cluster.kubeconfig
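
Instead of passing --kubeconfig to every command, the file can be exported for the current shell session; a sketch:

# Point kubectl at the workload cluster for this shell
export KUBECONFIG=$PWD/my-cluster.kubeconfig
kubectl get nodes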

Deploy Calico CNI
#

# Apply Calico CNI
kubectl --kubeconfig=./my-cluster.kubeconfig \
  apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/calico.yaml

# Shell output:
poddisruptionbudget.policy/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
serviceaccount/calico-cni-plugin created
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrole.rbac.authorization.k8s.io/calico-cni-plugin created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-cni-plugin created
daemonset.apps/calico-node created
deployment.apps/calico-kube-controllers created
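
To confirm the CNI is up, check the Calico pods in the workload cluster; the manifest deploys them into kube-system. A sketch:

# Check the Calico pods in the workload cluster
kubectl --kubeconfig=./my-cluster.kubeconfig get pods -n kube-system -l k8s-app=calico-node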

Verify Controller Nodes
#

# List controller nodes
kubectl get kubeadmcontrolplane

# Shell output: (Wait until "UNAVAILABLE" changes from "1" to "0")
NAME         CLUSTER      INITIALIZED   API SERVER AVAILABLE   REPLICAS   READY   UPDATED   UNAVAILABLE   AGE   VERSION
my-cluster   my-cluster   true          true                   1          1       1         0             21m   v1.28.0

List Cluster Details
#

# Describe cluster
clusterctl describe cluster my-cluster

# Shell output: (wait until the worker node is ready)
NAME                                                 READY  SEVERITY  REASON  SINCE  MESSAGE
Cluster/my-cluster                                   True                     21m
├─ClusterInfrastructure - VSphereCluster/my-cluster  True                     22m
├─ControlPlane - KubeadmControlPlane/my-cluster      True                     21m
│ └─Machine/my-cluster-rlpz2                         True                     21m
└─Workers
  └─MachineDeployment/my-cluster-md-0                True                     2m35s
    └─Machine/my-cluster-md-0-zbqzb-j9jkq            True                     20m

Verify Cluster Nodes
#

# List cluster nodes
kubectl --kubeconfig=./my-cluster.kubeconfig get nodes

# Shell output:
NAME                          STATUS   ROLES           AGE   VERSION
my-cluster-md-0-zbqzb-j9jkq   Ready    <none>          41m   v1.28.0
my-cluster-rlpz2              Ready    control-plane   42m   v1.28.0



Access Cluster VMs
#

# Default user
capv

# SSH into the controller node (use the private key matching VSPHERE_SSH_AUTHORIZED_KEY)
ssh capv@192.168.70.55
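
If the matching private key is not your default SSH identity, pass it explicitly; a sketch, the key path being an example:

# SSH with an explicit private key
ssh -i ~/.ssh/id_rsa capv@192.168.70.55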

Delete Resources
#

Delete vSphere Kubernetes Cluster
#

# Delete the cluster
kubectl delete cluster my-cluster
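
Deletion is asynchronous; CAPV removes the vSphere VMs in the background. To confirm everything is gone, a sketch:

# Wait until the cluster and its machines disappear
kubectl get clusters
kubectl get machines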

Delete KIND Cluster
#

# Delete the KIND management cluster
kind delete cluster --name management-cluster



Links
#

# Cluster API Official Documentation: Quickstart
https://cluster-api.sigs.k8s.io/user/quick-start.html

# Cluster API Provider vSphere: Getting Started Guide
https://github.com/kubernetes-sigs/cluster-api-provider-vsphere/blob/main/docs/getting_started.md

# Cluster API vSphere OVA Templates
https://github.com/kubernetes-sigs/cluster-api-provider-vsphere/blob/main/README.md#kubernetes-versions-with-published-ovas

# Cluster API vSphere OVA Templates: Version 1.28
https://github.com/kubernetes-sigs/cluster-api-provider-vsphere/releases/tag/templates/v1.28.0