In this tutorial I’m using a Debian 12 desktop with the IP “192.168.30.64”. The container I deploy runs a webserver that listens on port “8080”.
Installation #
Install KIND #
Note: Check for latest version
https://kind.sigs.k8s.io/docs/user/quick-start
# For AMD64 / x86_64:
[ $(uname -m) = x86_64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.20.0/kind-linux-amd64 &&
chmod +x kind &&
sudo mv kind /usr/local/bin/kind
# Verify installation / check version
kind version
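The AMD64 one-liner above can be generalized to pick the right release binary by CPU architecture. A minimal sketch (v0.20.0 assumed, as above; check the quick-start page for the current version):

```shell
# Map the local architecture to the KIND release artifact name
ARCH=$(uname -m)
case "$ARCH" in
  x86_64)  KIND_ARCH=amd64 ;;
  aarch64) KIND_ARCH=arm64 ;;
  *) echo "unsupported architecture: $ARCH" >&2; exit 1 ;;
esac

KIND_URL="https://kind.sigs.k8s.io/dl/v0.20.0/kind-linux-${KIND_ARCH}"
echo "$KIND_URL"

# Then download and install as above:
# curl -Lo ./kind "$KIND_URL" && chmod +x kind && sudo mv kind /usr/local/bin/kind
```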
Install Kubectl #
# Install Kubectl
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl" &&
chmod +x kubectl &&
sudo mv kubectl /usr/local/bin/
Default KIND Cluster & Dashboard #
Create Cluster #
The following command deploys a single-node cluster for testing:
# Create cluster
kind create cluster --wait 1m
This will create a new cluster and add it to the kubectl config file.
Kubernetes Dashboard #
Create Proxy #
To access the Dashboard, it’s necessary to create a secure channel to the Kubernetes cluster:
# Create proxy for the Kubernetes Dashboard
kubectl proxy
Install Kubernetes Dashboard #
Check latest version:
https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/
# Install Kubernetes Dashboard
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
Create a Service Account #
# Create yml file
vi dashboard-adminuser.yml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
# Apply settings
kubectl apply -f dashboard-adminuser.yml
Obtain the Access Token #
# Create token for the "admin-user"
kubectl -n kubernetes-dashboard create token admin-user
# Shell output:
eyJhbGciOiJSUzI1NiIsImtpZCI6IkRKdXlqcExZX3Q2YkxlZ3N4TmdtZERmUFhXMjVEc2xPVFpzUnNXckFpRE0ifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzEyNjA5NTYyLCJpYXQiOjE3MTI2MDU5NjIsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhZG1pbi11c2VyIiwidWlkIjoiZDA5MzYxMDItODkyZS00NDU3LTgzYjUtODk2NWZkODljZWIyIn19LCJuYmYiOjE3MTI2MDU5NjIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDphZG1pbi11c2VyIn0.I6dTBbJVCxIOUWIZeclT2sO5amUX-uZeg1dVKIphotjoMC5dXHXPJSDUhO2kNGUNYDafaNUX3w5YI2b1yyH4KWLcXutz2ZKBYNI-25JeWvOHwtvwW8A_BsT1nV08aviqQOsE4WdaZEOJflWZDe-Ha4Tz3obYGdvci7FhF3Gil_PU71LJSWKEdSbr-3ePgifaA9nH9UqyFrlMPzk6MT9eKfLyin7Lui0asSJfaTaNYNlsvWDUYysC07UJE2kj0JaoWqtiKbZoAtUBySG3M7NDZMQJpM7hdpIqICQQFi2y8YdhU6sgSt3drYgBMqWaVxilPkZ9m0MqZVFeYA-6nQM0xQ
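The access token is a standard JWT, so its claims (audience, expiry, service account) can be inspected locally by base64-decoding the middle segment. A sketch, using the example token from the output above (in practice, paste the token you generated):

```shell
# Example token (the one shown above; long-expired, for illustration only)
TOKEN='eyJhbGciOiJSUzI1NiIsImtpZCI6IkRKdXlqcExZX3Q2YkxlZ3N4TmdtZERmUFhXMjVEc2xPVFpzUnNXckFpRE0ifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzEyNjA5NTYyLCJpYXQiOjE3MTI2MDU5NjIsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhZG1pbi11c2VyIiwidWlkIjoiZDA5MzYxMDItODkyZS00NDU3LTgzYjUtODk2NWZkODljZWIyIn19LCJuYmYiOjE3MTI2MDU5NjIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDphZG1pbi11c2VyIn0.I6dTBbJVCxIOUWIZeclT2sO5amUX-uZeg1dVKIphotjoMC5dXHXPJSDUhO2kNGUNYDafaNUX3w5YI2b1yyH4KWLcXutz2ZKBYNI-25JeWvOHwtvwW8A_BsT1nV08aviqQOsE4WdaZEOJflWZDe-Ha4Tz3obYGdvci7FhF3Gil_PU71LJSWKEdSbr-3ePgifaA9nH9UqyFrlMPzk6MT9eKfLyin7Lui0asSJfaTaNYNlsvWDUYysC07UJE2kj0JaoWqtiKbZoAtUBySG3M7NDZMQJpM7hdpIqICQQFi2y8YdhU6sgSt3drYgBMqWaVxilPkZ9m0MqZVFeYA-6nQM0xQ'

# Extract the payload (second dot-separated segment) and convert base64url to base64
PAYLOAD=$(echo "$TOKEN" | cut -d '.' -f 2 | tr '_-' '/+')

# Pad to a multiple of 4 so base64 -d accepts it
while [ $(( ${#PAYLOAD} % 4 )) -ne 0 ]; do PAYLOAD="${PAYLOAD}="; done

# Print the decoded claims as JSON
echo "$PAYLOAD" | base64 -d; echo
```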
Access the Dashboard #
# Open the dashboard
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
# Enter the admin-token
Note: By default the dashboard is only accessible from the local host. To make it available to other hosts, set up a reverse proxy.
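One way to do that is a small Nginx server block that forwards requests to the local proxy port 8001. A sketch, assuming Nginx runs on the same host as kubectl proxy; “dashboard.example.com” is a hypothetical hostname:

```nginx
server {
    listen 80;
    server_name dashboard.example.com;  # hypothetical hostname

    location / {
        # kubectl proxy must be running on this host
        proxy_pass http://127.0.0.1:8001;
    }
}
```

Keep in mind that this exposes an admin-level dashboard; restrict access (firewall, auth, TLS) accordingly.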
Cluster Command Overview #
Create Cluster #
# Create cluster: Default cluster name "kind"
kind create cluster --wait 1m
# Create cluster: Custom name
kind create cluster --name my-cluster --wait 1m
List Clusters #
# List currently running clusters
kind get clusters
# Shell output:
my-cluster
Cluster Details #
- Kubectl config file
# Output the kubectl config file contents for the default cluster
kind get kubeconfig
# Default path
cat ~/.kube/config
- Verify cluster
# List cluster nodes
kubectl get nodes
# Shell output:
NAME                 STATUS   ROLES           AGE   VERSION
kind-control-plane   Ready    control-plane   17m   v1.27.3
Delete Cluster #
# Delete the default KIND cluster
kind delete cluster
# Delete cluster with specific name
kind delete cluster --name my-cluster
This will delete the cluster, including the entries in the kubeconfig file.
Multi-Node Cluster #
Cluster Config #
# Create a cluster config file
vi cluster-01.yml
- Example for a multi-node cluster
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  apiServerAddress: "0.0.0.0"
  apiServerPort: 6443
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30007
    hostPort: 30007
    listenAddress: "0.0.0.0"
    protocol: TCP
- role: worker
- role: worker
Note: When multiple control plane nodes are deployed, KIND will create an additional HAProxy container as load balancer.
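For example, a config like the following (hypothetical file name “ha-cluster.yml”) defines three control plane nodes, so KIND would place a HAProxy container in front of them as the API server load balancer:

```yaml
# ha-cluster.yml (hypothetical example): HA control plane
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: control-plane
- role: control-plane
- role: worker
```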
# Create cluster
kind create cluster --name cluster-01 --config cluster-01.yml
# List cluster nodes
kubectl get nodes
# Shell output:
NAME                       STATUS   ROLES           AGE     VERSION
cluster-01-control-plane   Ready    control-plane   2m29s   v1.27.3
cluster-01-worker          Ready    <none>          2m6s    v1.27.3
cluster-01-worker2         Ready    <none>          2m6s    v1.27.3
Pod Deployment: Testing #
This deployment type is useful for testing and is only accessible from the local host.
# Run container: Example
kubectl run my-container --image=jueklu/container-1 --port=8080 --labels app=testing
# Port forwarding: Map the local port to the container
kubectl port-forward pod/my-container 8888:8080
Note: Port forwarding allows access from the local host.
# Test the deployment in the browser
http://localhost:8888
# Delete deployment
kubectl delete pod my-container
Pod Deployment: YML #
The following deployment is available from other hosts on the network.
Deployment YML file #
# Create YML file
vi my-container.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-container
  template:
    metadata:
      labels:
        app: my-container
    spec:
      containers:
      - name: my-container
        image: jueklu/container-1
        ports:
        - containerPort: 8080
# Apply the deployment
kubectl apply -f my-container.yml
NodePort Service #
Expose the deployment with a NodePort service:
- The NodePort service exposes a specific port on all nodes in the cluster to the outside world
# Create YML file
vi nodeport-service.yml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: my-container
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
    nodePort: 30007
port: 80
This is the port on which the Service itself listens within the cluster. It’s the port through which the Service is accessible to other components within the cluster. When another pod or service within the same Kubernetes cluster wants to communicate with the deployed service, it would use this cluster-internal port value.
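For comparison, all three port fields of the service, annotated (same values as in nodeport-service.yml):

```yaml
ports:
- protocol: TCP
  port: 80          # cluster-internal Service port: http://my-service:80
  targetPort: 8080  # container port the traffic is forwarded to
  nodePort: 30007   # port opened on every cluster node: http://<node-ip>:30007
```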
# Apply the service
kubectl apply -f nodeport-service.yml
Test Deployment #
# Test the deployment: Other hosts
http://192.168.30.64:30007
# Test the deployment: Local host
localhost:30007
List Resources #
# List the deployment resources
kubectl get deployments
# Shell output:
NAME            READY   UP-TO-DATE   AVAILABLE   AGE
my-deployment   2/2     2            2           5m14s
# List services
kubectl get svc
# Shell output:
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        7m11s
my-service   NodePort    10.96.207.129   <none>        80:30007/TCP   4m56s
# List service details
kubectl describe service my-service
# Shell output:
Name:                     my-service
Namespace:                default
Labels:                   <none>
Annotations:              <none>
Selector:                 app=my-container
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.96.207.129
IPs:                      10.96.207.129
Port:                     <unset>  80/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30007/TCP
Endpoints:                10.244.1.2:8080,10.244.2.2:8080
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
Deploy Kubernetes Dashboard #
# Ensure that kubectl context is set to the new cluster "cluster-01"
kubectl config current-context
# Shell output:
kind-cluster-01
# If the current kubectl context is not "cluster-01", switch to it using:
kubectl config use-context kind-cluster-01
Repeat the steps from the beginning of this tutorial to deploy the dashboard, and don’t forget to start the proxy.
Delete Deployment & Resources #
# Delete the deployment and its pods
kubectl delete deployment my-deployment
The best approach is to organize the resources with labels and use a label selector to delete everything at once.
# Delete the deployment and all related resources
kubectl delete all --selector app=my-container
# Delete NodePort service
kubectl delete -f nodeport-service.yml
# Delete the whole cluster with all its resources
kind delete cluster --name cluster-01
Links #
# KIND Documentation
https://kind.sigs.k8s.io/docs/user/quick-start/#creating-a-cluster
# Kubernetes Dashboard: Installation
https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/
# Kubernetes Dashboard: Create admin user
https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md