Overview #
This was my first attempt at deploying an OpenShift cluster on VMware vSphere.
I’m using the “Interactive Installer” and deploying the VM in the vSphere subnet 192.168.70.0/24 with DHCP network configuration.
Deploy OpenShift Cluster #
OpenShift Web Console #
Installation Pt. 1 #
- Open the OpenShift web console: https://console.redhat.com/
- Go to: “Services” > “Containers”
- Click (OpenShift) “Clusters”
- Click “Create cluster”
- Select “Datacenter” / “vSphere”
- Select “Interactive”
- Define a cluster name, like “openshift-sn”
- Define a base domain, like “jklug.local”
- Define the OpenShift version “OpenShift 4.16.3”
- Set CPU architecture to “x86_64”
- Select “Install single node OpenShift (SNO)”
- Click “Next”
- Click “Next”
- Click “Add host”
- Select “Provisioning type” > “Full image file - Download a self-contained ISO”
- Add a public SSH key
- Click “Generate Discovery ISO”
- Download the Discovery ISO
vSphere #
Upload ISO #
- Upload the discovery ISO file to a datastore
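As an alternative to uploading through the vSphere client, the Discovery ISO can also be pushed to a datastore with VMware’s “govc” CLI. The following is only a sketch: the vCenter URL, the credentials, the datastore name “datastore1”, the ISO file name and the target folder are assumptions and need to be adapted.
# Connection details for govc (adjust to your environment)
export GOVC_URL='https://vcenter.jklug.local'
export GOVC_USERNAME='administrator@vsphere.local'
export GOVC_PASSWORD='your-password'
export GOVC_INSECURE=true

# Create an "iso" folder on the datastore and upload the Discovery ISO
govc datastore.mkdir -ds datastore1 iso
govc datastore.upload -ds datastore1 ./discovery_image_openshift-sn.iso iso/discovery_image_openshift-sn.iso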
Create & Start VM #
- Define VM name
- Select compute resource (ESXi node)
- Select storage
- Select “Compatible with”: “ESXi 7.0 U2 and later”
- Select “Guest OS Family”: “Linux”
- Select “Guest OS Version”: “Red Hat Enterprise Linux 8 (64-bit)”
- Add the Discovery ISO
- Open the “VM Options” tab
- Scroll down to “Advanced” > (Configuration Parameters) “EDIT CONFIGURATION…”
- Click “ADD CONFIGURATION PARAMS”
- Add the following configuration parameter
# Name
disk.EnableUUID
# Value
TRUE
- Click “OK”
- Start the VM
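Alternatively, the “disk.EnableUUID” parameter can be set and the VM powered on via the govc CLI. This is only a sketch, assuming the VM was named “openshift-sn” and the govc connection variables from the upload step are still exported:
# Set the advanced configuration parameter on the VM
govc vm.change -vm openshift-sn -e disk.EnableUUID=TRUE

# Power on the VM
govc vm.power -on openshift-sn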
OpenShift Web Console #
Installation Pt. 2 #
After the OpenShift VM has been started in vSphere, it should be available as a host in the OpenShift console:
- Click on the OpenShift host name to rename it
- Click “Next”
- Click “Next”
- Click “Next”
- Click “Install cluster”
Post Installation #
Wait until the installation has finished:
- Click “Download kubeconfig”
- Click “Not able to access the Web Console?”
DNS Entry #
Option 1: Add the following records to your DNS server (recommended)
api.openshift-sn.jklug.local A 192.168.70.202
*.apps.openshift-sn.jklug.local A 192.168.70.202
Option 2: Update your local /etc/hosts or /etc/resolv.conf files
192.168.70.202 api.openshift-sn.jklug.local
192.168.70.202 oauth-openshift.apps.openshift-sn.jklug.local
192.168.70.202 console-openshift-console.apps.openshift-sn.jklug.local
192.168.70.202 grafana-openshift-monitoring.apps.openshift-sn.jklug.local
192.168.70.202 thanos-querier-openshift-monitoring.apps.openshift-sn.jklug.local
192.168.70.202 prometheus-k8s-openshift-monitoring.apps.openshift-sn.jklug.local
192.168.70.202 alertmanager-main-openshift-monitoring.apps.openshift-sn.jklug.local
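To verify the name resolution before opening the web console, the records can be checked from a client, using the example IP from above. Note that “dig” queries the DNS server directly (Option 1), while “getent” also honors /etc/hosts entries (Option 2):
# Check the API record via DNS
dig +short api.openshift-sn.jklug.local

# Check a route covered by the wildcard record, including /etc/hosts entries
getent hosts console-openshift-console.apps.openshift-sn.jklug.local

# Both should return the cluster IP:
192.168.70.202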
Cluster Web Console #
# Open the web console
https://console-openshift-console.apps.openshift-sn.jklug.local/
# Username
kubeadmin
# Password
WSnQT-viNHY-8Scec-NSSLo
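Once the OpenShift CLI is installed (next section), the same kubeadmin credentials can also be used to log in against the API endpoint; a sketch with the values from this setup (the self-signed certificate may have to be accepted):
# Log in with the kubeadmin user
oc login https://api.openshift-sn.jklug.local:6443 -u kubeadmin -p WSnQT-viNHY-8Scec-NSSLo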
Manage OpenShift Cluster #
Install OpenShift CLI #
- Open the download page: https://access.redhat.com/downloads/content/290
- Download the “OpenShift v4.16.2 Linux Client”: oc-4.16.2-linux.tar.gz
# Unpack the archive
tar xvf oc-4.16.2-linux.tar.gz
# List paths
echo $PATH
# Move binary
sudo mv oc /usr/local/bin/
# Verify the installation / check version
oc version
# Shell output:
Client Version: 4.16.2
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: 4.16.3
Kubernetes Version: v1.29.6+aba1e8d
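Optionally, bash completion for the oc CLI can be set up; a minimal sketch, assuming the bash-completion package is installed:
# Install the oc completion script system-wide
oc completion bash | sudo tee /etc/bash_completion.d/oc > /dev/null

# Load it in the current shell
source /etc/bash_completion.d/oc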
Set Kubeconfig Environment Variable #
Export the kubeconfig environment variable so that it points to the downloaded kubeconfig file:
# Export the kubeconfig environment variable
export KUBECONFIG=/home/ubuntu/.kubeadm/kubeconfig
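The export only applies to the current shell session; to make it persistent, it can be appended to the shell profile (sketch for bash):
# Persist the kubeconfig path for new sessions
echo 'export KUBECONFIG=/home/ubuntu/.kubeadm/kubeconfig' >> ~/.bashrc
source ~/.bashrc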
List Cluster Nodes #
# List nodes
oc get nodes
# Shell output:
NAME           STATUS   ROLES                         AGE   VERSION
openshift-sn   Ready    control-plane,master,worker   48m   v1.29.6+aba1e8d
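Besides the node status, the overall cluster health can be checked by listing the cluster operators; on a healthy single-node cluster all operators should report “Available: True” and “Degraded: False”:
# List the cluster operators
oc get clusteroperators

# List pods that are not in Running or Completed state
oc get pods --all-namespaces --field-selector=status.phase!=Running,status.phase!=Succeeded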
SSH into OpenShift VM #
# SSH into VM
ssh core@192.168.70.202
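On the RHCOS node itself there is no package manager and the configuration is managed by the cluster, but a few read-only checks are useful; for example:
# Check the kubelet service
sudo systemctl status kubelet

# List the running containers
sudo crictl ps

# Follow the kubelet logs
sudo journalctl -u kubelet -f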