Overview #
In this tutorial I’m deploying a multi-node OpenShift cluster on VMware vSphere, with the following nodes:
192.168.70.90 # master-01 (16 GB RAM, 8 CPU Cores, 120 GB storage)
192.168.70.91 # master-02 (16 GB RAM, 8 CPU Cores, 120 GB storage)
192.168.70.92 # master-03 (16 GB RAM, 8 CPU Cores, 120 GB storage)
192.168.70.93 # worker-01 (8 GB RAM, 4 CPU Cores, 130 GB storage)
192.168.70.94 # worker-02 (8 GB RAM, 4 CPU Cores, 130 GB storage)
192.168.70.150 # Kubernetes API
192.168.70.151 # Ingress
I’m using the “Interactive” installation flow and deploy the VMs into the vSphere subnet 192.168.70.0/24 with a static IPv4 network configuration.
Deploy OpenShift Cluster #
OpenShift Web Console #
Installation Pt. 1 #
- Open the OpenShift web console: https://console.redhat.com/
- Go to: “Services” > “Containers”
- Click (OpenShift) “Clusters”
- Click “Create cluster”
- Select “Datacenter” / “vSphere”
- Select “Interactive”
- Define a cluster name, like “openshift-mn”
- Define a base domain, like “jklug.local”
- Define the OpenShift version, like “OpenShift 4.16.3”
- Set the CPU architecture to “x86_64”
- Set “Host’s network configuration” to “Static IP, bridges, and bonds”
- Click “Next”
- Set “Networking stack type” to “IPv4”
- Define the DNS servers, for example “1.1.1.1,8.8.8.8”
- Define a subnet, for example “192.168.70.0/24”
- Define a default gateway, for example “192.168.70.1”
- Click “Next”
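Before typing the static addresses into the installer, it can help to sanity-check that every planned node IP and VIP actually falls inside the chosen subnet. This is a minimal POSIX shell sketch (the `ip_to_int` and `in_subnet` helpers are made up for illustration; the addresses are the ones from this tutorial):

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer
ip_to_int() {
    old_ifs=$IFS; IFS=.
    set -- $1
    IFS=$old_ifs
    echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}

# Succeed if the IP in $1 lies inside the CIDR subnet in $2
in_subnet() {
    ip=$(ip_to_int "$1")
    net=$(ip_to_int "${2%/*}")
    bits=${2#*/}
    mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
    [ $(( ip & mask )) -eq $(( net & mask )) ]
}

# Node IPs and VIPs from this tutorial
for ip in 192.168.70.90 192.168.70.91 192.168.70.92 \
          192.168.70.93 192.168.70.94 192.168.70.150 192.168.70.151; do
    if in_subnet "$ip" 192.168.70.0/24; then
        echo "$ip OK"
    else
        echo "$ip OUTSIDE 192.168.70.0/24"
    fi
done
```

All seven addresses print “OK” here; the same check catches typos such as a gateway or VIP from a different subnet.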
vSphere #
Create VMs #
- Define a VM name
- Select a compute resource (ESXi node)
- Select the storage
- Select “Compatible with”: “ESXi 7.0 U2 and later”
- Select “Guest OS Family”: “Linux”
- Select “Guest OS Version”: “Red Hat Enterprise Linux 8 (64-bit)”
- Allocate resources according to the node role (see the overview: masters need 8 CPU cores, 16 GB RAM and 120 GB disk; workers 4 CPU cores, 8 GB RAM and 130 GB disk)
- Open the “VM Options” tab
- Scroll down to “Advanced” > (Configuration Parameters) “EDIT CONFIGURATION…”
- Click “ADD CONFIGURATION PARAMS”
- Add the following parameter:
# Name
disk.EnableUUID
# Value
TRUE
- Click “OK”
Copy MAC Address (ESXi) #
Edit the VMs via ESXi and copy the MAC addresses:
Copy MAC Address (PowerCLI) #
# List network adapter details of "master-01" VM
Get-VM -Name master-01 | Get-NetworkAdapter
# Shell output:
Name              Type    NetworkName MacAddress        WakeOnLanEnabled
----              ----    ----------- ----------        ----------------
Network adapter 1 Vmxnet3 VM Network  00:50:56:85:af:45 True
OpenShift Web Console #
Installation Pt. 2 #
- Paste the MAC addresses of the VMs and define an IP for each VM
- Click “Next”
- Click “Next”
- Select “Provisioning type”: “Full image file - Download a self-contained ISO”
- Add a public SSH key:
# Create an SSH key pair
ssh-keygen -t rsa -b 4096
- Click “Generate Discovery ISO”
- Download the ISO
- Click “Close”
vSphere #
Upload ISO #
Upload the Discovery ISO to the necessary datastore or datastores and add the ISO to the VMs:
Start VMs #
Start the first VM in vSphere, wait until it appears in the OpenShift web console, and change its name. Do the same with all VMs.
OpenShift Web Console #
Installation Pt. 3 #
- After a VM was started in vSphere, it should appear in the OpenShift web console. Click its name to change it:
- Define the VM names according to your naming scheme
- After all VMs are added and their status is “Ready”, assign the desired roles
- Click “Next”
- Define an IP for the Kubernetes API, for example “192.168.70.150”
- Define an IP for the Ingress, for example “192.168.70.151”
- Click “Install cluster”
Post Installation #
Wait until the installation has finished:
- Click “Download kubeconfig”
- Click “Not able to access the Web Console?” and copy the DNS entries
DNS Entry #
Option 1: Add the following records to your DNS server (recommended)
api.openshift-mn.jklug.local A 192.168.70.150
*.apps.openshift-mn.jklug.local A 192.168.70.151
Option 2: Add the following entries to your local /etc/hosts file (hosts files do not support wildcard entries, so each route hostname needs its own line)
192.168.70.150 api.openshift-mn.jklug.local
192.168.70.151 oauth-openshift.apps.openshift-mn.jklug.local
192.168.70.151 console-openshift-console.apps.openshift-mn.jklug.local
192.168.70.151 grafana-openshift-monitoring.apps.openshift-mn.jklug.local
192.168.70.151 thanos-querier-openshift-monitoring.apps.openshift-mn.jklug.local
192.168.70.151 prometheus-k8s-openshift-monitoring.apps.openshift-mn.jklug.local
192.168.70.151 alertmanager-main-openshift-monitoring.apps.openshift-mn.jklug.local
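Since every record is derived from the cluster name, base domain, and the two VIPs, the entries can also be generated with a small shell snippet instead of being typed by hand (all values below are the ones used in this tutorial):

```shell
# Cluster parameters from this tutorial
CLUSTER_NAME="openshift-mn"
BASE_DOMAIN="jklug.local"
API_VIP="192.168.70.150"
INGRESS_VIP="192.168.70.151"

# Option 1: records for a DNS server (the wildcard covers all routes)
echo "api.${CLUSTER_NAME}.${BASE_DOMAIN} A ${API_VIP}"
echo "*.apps.${CLUSTER_NAME}.${BASE_DOMAIN} A ${INGRESS_VIP}"

# Option 2: /etc/hosts entries; hosts files have no wildcard support,
# so every route hostname needs its own line
echo "${API_VIP} api.${CLUSTER_NAME}.${BASE_DOMAIN}"
for route in oauth-openshift console-openshift-console \
             grafana-openshift-monitoring thanos-querier-openshift-monitoring \
             prometheus-k8s-openshift-monitoring alertmanager-main-openshift-monitoring; do
    echo "${INGRESS_VIP} ${route}.apps.${CLUSTER_NAME}.${BASE_DOMAIN}"
done
```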
Cluster Web Console #
# Open the web console
https://console-openshift-console.apps.openshift-mn.jklug.local/
# Username
kubeadmin
# Password
ZEaVq-5YzGd-LZuuI-3JHZw
Manage OpenShift Cluster #
Install OpenShift CLI #
- Open the download page: https://access.redhat.com/downloads/content/290
- Download the “OpenShift v4.16.2 Linux Client”: oc-4.16.2-linux.tar.gz
# Unpack the archive
tar xvf oc-4.16.2-linux.tar.gz
# Make sure /usr/local/bin is part of the PATH
echo $PATH
# Move binary
sudo mv oc /usr/local/bin/
# Verify the installation / check version
oc version
# Shell output:
Client Version: 4.16.2
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: 4.16.3
Kubernetes Version: v1.29.6+aba1e8d
Set Kubeconfig Environment Variable #
Export the kubeconfig environment variable so that it points to the downloaded kubeconfig file:
# Export the kubeconfig environment variable
export KUBECONFIG=/home/ubuntu/.kubeadm/kubeconfig
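The export only lasts for the current shell session. To make it permanent, append it to the shell profile (a sketch assuming bash as the login shell and the kubeconfig path used in this tutorial; adjust the path to wherever the file was saved):

```shell
# Export for the current session
export KUBECONFIG="$HOME/.kubeadm/kubeconfig"

# Persist for future sessions (assumes bash is the login shell)
echo 'export KUBECONFIG="$HOME/.kubeadm/kubeconfig"' >> "$HOME/.bashrc"

# Verify
echo "$KUBECONFIG"
```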
List Cluster Nodes #
# List nodes
oc get nodes
# Shell output:
NAME STATUS ROLES AGE VERSION
master-01 Ready control-plane,master 65m v1.29.6+aba1e8d
master-02 Ready control-plane,master 65m v1.29.6+aba1e8d
master-03 Ready control-plane,master 52m v1.29.6+aba1e8d
worker-01 Ready worker 51m v1.29.6+aba1e8d
worker-02 Ready worker 51m v1.29.6+aba1e8d
# List nodes: More details
oc get nodes -o wide
# Shell output:
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
master-01 Ready control-plane,master 67m v1.29.6+aba1e8d 192.168.70.90 <none> Red Hat Enterprise Linux CoreOS 416.94.202407081958-0 5.14.0-427.26.1.el9_4.x86_64 cri-o://1.29.6-3.rhaos4.16.gitfd433b7.el9
master-02 Ready control-plane,master 67m v1.29.6+aba1e8d 192.168.70.91 <none> Red Hat Enterprise Linux CoreOS 416.94.202407081958-0 5.14.0-427.26.1.el9_4.x86_64 cri-o://1.29.6-3.rhaos4.16.gitfd433b7.el9
master-03 Ready control-plane,master 54m v1.29.6+aba1e8d 192.168.70.92 <none> Red Hat Enterprise Linux CoreOS 416.94.202407081958-0 5.14.0-427.26.1.el9_4.x86_64 cri-o://1.29.6-3.rhaos4.16.gitfd433b7.el9
worker-01 Ready worker 53m v1.29.6+aba1e8d 192.168.70.93 <none> Red Hat Enterprise Linux CoreOS 416.94.202407081958-0 5.14.0-427.26.1.el9_4.x86_64 cri-o://1.29.6-3.rhaos4.16.gitfd433b7.el9
worker-02 Ready worker 53m v1.29.6+aba1e8d 192.168.70.94 <none> Red Hat Enterprise Linux CoreOS 416.94.202407081958-0 5.14.0-427.26.1.el9_4.x86_64 cri-o://1.29.6-3.rhaos4.16.gitfd433b7.el9
SSH into OpenShift VMs #
# SSH into VMs
ssh core@192.168.70.90
ssh core@192.168.70.91
ssh core@192.168.70.92
ssh core@192.168.70.93
ssh core@192.168.70.94
Example Deployment #
Deployment & Service Manifest #
# Create manifest
vi example-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: default
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: container
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  namespace: default
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
# Apply the deployment and service
oc apply -f example-deployment.yaml
Ingress Manifest #
# Create manifest
vi example-deployment-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  namespace: default
spec:
  rules:
  - host: nginx-example.apps.openshift-mn.jklug.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-service
            port:
              number: 80
# Apply the ingress resource
oc apply -f example-deployment-ingress.yaml
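OpenShift’s router serves Ingress resources by generating a Route object behind the scenes, so the same exposure can also be declared natively as a Route. A sketch, assuming the nginx-service from the manifest above exists (the name “nginx-route” is made up here):

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: nginx-route
  namespace: default
spec:
  host: nginx-example.apps.openshift-mn.jklug.local
  to:
    kind: Service
    name: nginx-service
  port:
    targetPort: 80
```

The native Route additionally exposes OpenShift-specific options, such as TLS edge termination, that the generic Ingress resource does not.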
Verify Resources #
List Deployments #
# List deployments
oc get deployments -n default
# Shell output:
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 3/3 3 3 65s
List Pods #
# List pods: More details
oc get pods -o wide -n default
# Shell output
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-6f7d5c8f7f-5w8vw 1/1 Running 0 2m2s 10.128.2.17 worker-01 <none> <none>
nginx-deployment-6f7d5c8f7f-966tt 1/1 Running 0 2m2s 10.131.0.23 worker-02 <none> <none>
nginx-deployment-6f7d5c8f7f-nfcnz 1/1 Running 0 2m2s 10.131.0.24 worker-02 <none> <none>
List Services #
# List services
oc get services -n default
# Shell output:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 172.30.0.1 <none> 443/TCP 99m
nginx-service ClusterIP 172.30.65.83 <none> 80/TCP 2m58s
openshift ExternalName <none> kubernetes.default.svc.cluster.local <none> 91m
List Ingress Resources #
# List ingress resources
oc get ingress -n default
# Shell output:
NAME CLASS HOSTS ADDRESS PORTS AGE
nginx-ingress <none> nginx-example.apps.openshift-mn.jklug.local router-default.apps.openshift-mn.jklug.local 80 54s
Access the Deployment / Ingress #
Create DNS Entry #
If a hosts file is used for the DNS resolution, it’s necessary to create a DNS entry for the ingress resource:
# Create hosts entry
192.168.70.151 nginx-example.apps.openshift-mn.jklug.local
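The entry can also be added idempotently, so running the snippet twice does not duplicate the line. A sketch that writes to a temporary file for safety; point HOSTS_FILE at /etc/hosts (as root) to apply it for real (the `add_hosts_entry` helper is made up for illustration):

```shell
# Target file: a temp file here; use /etc/hosts (as root) on a real system
HOSTS_FILE="$(mktemp)"

ENTRY="192.168.70.151 nginx-example.apps.openshift-mn.jklug.local"

# Append the line only if it is not present yet (idempotent)
add_hosts_entry() {
    grep -qxF "$1" "$HOSTS_FILE" || echo "$1" >> "$HOSTS_FILE"
}

add_hosts_entry "$ENTRY"
add_hosts_entry "$ENTRY"   # second call is a no-op

grep -c "nginx-example" "$HOSTS_FILE"   # → 1
```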
Access the Deployment #
# Access the deployment
http://nginx-example.apps.openshift-mn.jklug.local
Delete Resources #
# Delete the example deployment & service
oc delete -f example-deployment.yaml
# Delete the ingress resource
oc delete -f example-deployment-ingress.yaml