Overview #
In this tutorial I use the following nodes of a Kubernetes (K8s) cluster with MetalLB, deployed bare metal on Debian 12:
192.168.30.71 deb-02 # Controller / Master Node
192.168.30.72 deb-03 # Controller / Master Node
192.168.30.73 deb-04 # Worker Node
192.168.30.74 deb-05 # Worker Node
192.168.30.60 # External Resource Node
Nginx & Alpine Pod #
The following Pod definition deploys an Nginx and an Alpine container within the same pod. The Nginx container serves the default content on port “80” and the Alpine container is configured to curl the Nginx service at localhost port “80” every three seconds.
Containers within the same pod share the same network namespace and can communicate via localhost, which refers to the other containers in the same pod. All containers in the pod share the same IP address and must therefore use different port numbers if they serve network traffic.
Create Pod #
vi nginx-alpine-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-alpine-pod
spec:
  containers:
  - name: nginx-container
    image: nginx
    ports:
    - containerPort: 80
  - name: alpine-container
    image: alpine
    command: ["/bin/ash", "-c"]
    args:
    - apk add --no-cache curl &&
      while true; do
      curl http://localhost:80/;
      sleep 3;
      done
"/bin/ash", "-c"
Path to the ash shell; the “-c” option tells the shell to take the next string as a command to execute.
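Combined, the command and args from the manifest are roughly equivalent to running the following inside the Alpine container (shown here only for illustration):
# Equivalent shell invocation of the container's command and args
/bin/ash -c 'apk add --no-cache curl && while true; do curl http://localhost:80/; sleep 3; done'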
# Launch pod
kubectl create -f nginx-alpine-pod.yaml
Pod Status & Details #
# Check pod status
kubectl get pods
# Shell output: Wait till both containers are ready
NAME READY STATUS RESTARTS AGE
nginx-alpine-pod 2/2 Running 0 24s
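Since both containers share the pod's network namespace, the Nginx server can also be reached manually from the Alpine container via localhost, for example (a quick sketch):
# Curl the Nginx container from inside the Alpine container
kubectl exec nginx-alpine-pod -c alpine-container -- curl -s http://localhost:80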
# List pod details
kubectl describe pod nginx-alpine-pod
# Shell output:
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 23m default-scheduler Successfully assigned default/nginx-alpine-pod to node4
Normal Pulling 23m kubelet Pulling image "nginx"
Normal Pulled 22m kubelet Successfully pulled image "nginx" in 7.034s (7.034s including waiting)
Normal Created 22m kubelet Created container nginx-container
Normal Started 22m kubelet Started container nginx-container
Normal Pulling 22m kubelet Pulling image "alpine"
Normal Pulled 22m kubelet Successfully pulled image "alpine" in 2.92s (2.92s including waiting)
Normal Created 22m kubelet Created container alpine-container
Normal Started 22m kubelet Started container alpine-container
Alpine Container Logs #
In the logs of the Alpine container, the curled output of the Nginx service is visible:
# Check the logs of the "alpine-container" in the "nginx-alpine-pod"
kubectl logs nginx-alpine-pod -c alpine-container --tail=30
# Shell output:
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
100 615 100 615 0 0 150k 0 --:--:-- --:--:-- --:--:-- 200k
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
100 615 100 615 0 0 154k 0 --:--:-- --:--:-- --:--:-- 200k
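The access log of the Nginx container should in turn show the requests arriving from the Alpine container every three seconds (a sketch):
# Check the logs of the "nginx-container" in the same pod
kubectl logs nginx-alpine-pod -c nginx-container --tail=5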
Deployment Node #
Find Node #
# List on which node the pod runs
kubectl get pods -o wide
# Shell output:
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-alpine-pod 2/2 Running 0 6m21s 10.233.74.71 node4 <none> <none>
Find Container Runtime #
# Use the following command to find the container runtime
kubectl describe node node4
# Shell output:
...
Container Runtime Version: containerd://1.7.16
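Alternatively, the runtime version can be read directly with a JSONPath query (a sketch):
# Print only the container runtime version of the node
kubectl get node node4 -o jsonpath='{.status.nodeInfo.containerRuntimeVersion}'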
List Containers on Node #
Log in to the Kubernetes node where the pod runs, to check the container status:
# List all running containers managed by containerd within the Kubernetes namespace (k8s.io)
sudo ctr -n k8s.io containers list
# Shell output:
CONTAINER IMAGE RUNTIME
...
0a68e2f17ff0a92ed5d7ee14d4041362ab62851377a34a153c0a04c4e047a772 docker.io/library/alpine:latest io.containerd.runc.v2
1ea8b91d268c81819ccfd2a8d41936d53f545f2a8933a6de28e9c34a133c391d docker.io/library/nginx:1.25.2-alpine io.containerd.runc.v2
# List detailed information about a specific container
sudo ctr -n k8s.io containers info <container-id>
sudo ctr -n k8s.io containers info 0a68e2f17ff0a92ed5d7ee14d4041362ab62851377a34a153c0a04c4e047a772
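If crictl is installed on the node, the running containers can also be listed through the CRI (a sketch, assuming crictl is configured for the containerd socket):
# List running containers via the CRI
sudo crictl ps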
Delete the Pod #
# Delete the pod
kubectl delete pod nginx-alpine-pod
Replication Controller Example #
A replication controller ensures that a specified number of pods is running at all times. Replication controllers have mostly been replaced by Deployments, which offer more features and functionality.
Create Replication Controller #
vi replication-controller.yaml
Example: Single Container
The following replication controller deploys two replicas of a pod that consists of an Nginx container.
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-controller
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx-container
        image: nginx
        ports:
        - containerPort: 80
Example: Multi Container
The following replication controller deploys two replicas of a pod that consists of the two containers from the previous “Nginx & Alpine Pod” example.
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-alpine-controller
  labels:
    app: nginx-alpine
spec:
  replicas: 2
  selector:
    app: nginx-alpine
  template:
    metadata:
      labels:
        app: nginx-alpine
    spec:
      containers:
      - name: nginx-container
        image: nginx
        ports:
        - containerPort: 80
      - name: alpine-container
        image: alpine
        command: ["/bin/ash", "-c"]
        args:
        - apk add --no-cache curl &&
          while true; do
          curl http://localhost:80/;
          sleep 3;
          done
kubectl apply -f replication-controller.yaml
List Replication Controllers #
# List all replication controllers
kubectl get rc
# Shell output: Single container version
NAME DESIRED CURRENT READY AGE
nginx-controller 2 2 2 30s
# Shell output: Multi container version
NAME DESIRED CURRENT READY AGE
nginx-alpine-controller 2 2 1 5s
Replication Controller Details #
# List replication controller details
kubectl describe rc nginx-controller
kubectl describe rc nginx-alpine-controller
# Shell output: Single container version
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 23s replication-controller Created pod: nginx-controller-kgdgb
Normal SuccessfulCreate 23s replication-controller Created pod: nginx-controller-vm2rt
# Shell output: Multi container version
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 45s replication-controller Created pod: nginx-alpine-controller-2pr8v
Normal SuccessfulCreate 45s replication-controller Created pod: nginx-alpine-controller-t7ff7
Pod Status & Details #
# List pods: With label nginx
kubectl get pod -l app=nginx
kubectl get pod -l app=nginx-alpine
# Shell output: Single container version
NAME READY STATUS RESTARTS AGE
nginx-controller-kgdgb 1/1 Running 0 71s
nginx-controller-vm2rt 1/1 Running 0 71s
# Shell output: Multi container version
NAME READY STATUS RESTARTS AGE
nginx-alpine-controller-2pr8v 2/2 Running 0 2m4s
nginx-alpine-controller-t7ff7 2/2 Running 0 2m4s
Edit Replication Controller #
Use the kubectl edit rc command to edit the replication controller, for example to change the number of replicas to “3”:
# Edit replication controller "nginx-controller"
kubectl edit rc/nginx-controller
# Verify the number of pods
kubectl get pod -l app=nginx
# Shell output:
NAME READY STATUS RESTARTS AGE
nginx-controller-2rdkf 1/1 Running 0 5s
nginx-controller-kgdgb 1/1 Running 0 109s
nginx-controller-vm2rt 1/1 Running 0 109s
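As a non-interactive alternative to kubectl edit, the replica count can also be changed with kubectl scale (a sketch):
# Scale the replication controller to 3 replicas without opening an editor
kubectl scale rc nginx-controller --replicas=3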
Delete the Replication Controller #
# Delete the replication controller and its pods
kubectl delete rc nginx-controller
kubectl delete rc nginx-alpine-controller
# Verify
kubectl get rc
Kubernetes Services #
Services provide a stable network endpoint for a set of pods; depending on the service type, they make the pods reachable from within or from outside the Kubernetes cluster.
- Pods: Services select pods based on labels; a selector defined in the Service configuration matches the labels assigned to the pods.
- Deployments: Services can also route traffic to pods managed by a Deployment. Again, the traffic is routed based on the pod labels that match the Service selector.
- ReplicationControllers: As with Deployments, a Service uses label selectors to route traffic to pods managed by a Replication Controller.
Service Types #
There are three types of Kubernetes services: ClusterIP, NodePort and LoadBalancer. By default, every service is created as a ClusterIP type.
- ClusterIP: Exposes the service on a cluster-internal IP; this makes the service only reachable from within the cluster.
- NodePort: Exposes the service on each node's IP at a static port (the NodePort). The NodePort service is accessible from outside the cluster via NodeIP:NodePort. This is useful to expose services that need to be accessible from outside the cluster but do not require load balancing.
- LoadBalancer: This service type integrates with the cloud provider's load balancer, or for example with MetalLB on bare-metal clusters based on K8s and KlipperLB on bare-metal clusters based on K3s. The service is externally accessible through the load balancer IP. Underneath, it sets up a NodePort and the external load balancer routes traffic to the NodePorts, which distributes the load across the nodes.
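For comparison with the ClusterIP and LoadBalancer examples below, a minimal NodePort service manifest could look like this (a sketch; the name, label and nodePort value are only placeholders):
apiVersion: v1
kind: Service
metadata:
  name: example-nodeport            # Hypothetical service name
spec:
  type: NodePort
  selector:
    app: nginx-example              # Must match the labels of the target pods
  ports:
    - protocol: TCP
      port: 80                      # Service port inside the cluster
      targetPort: 80                # Container port the traffic is sent to
      nodePort: 30080               # Static port on every node (default range 30000-32767)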
Service for a Pod #
Run Pod #
# Run example pod
kubectl run nginx-pod --image=nginx --port=80 --restart="Never" --labels="app=nginx-example"
Create Service #
When no service type is defined, the default service type is ClusterIP.
# Create a service: Without external IP
kubectl expose pod nginx-pod --port=8000 --target-port=80 --name="service-nginx-pod"
# Create a service: With external IP
kubectl expose pod nginx-pod --port=8000 --target-port=80 --name="service-nginx-pod" --external-ip="192.168.30.100"
Make sure to choose an unused IP address in the subnet.
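For reference, the second command creates a service roughly equivalent to the following manifest (a sketch of what kubectl expose generates):
apiVersion: v1
kind: Service
metadata:
  name: service-nginx-pod
spec:
  selector:
    app: nginx-example              # Matches the label assigned to "nginx-pod"
  ports:
    - protocol: TCP
      port: 8000                    # Service port
      targetPort: 80                # Container port
  externalIPs:
    - 192.168.30.100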
Service Details #
# List service details
kubectl get svc service-nginx-pod
# Shell output: Without external IP
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service-nginx-pod ClusterIP 10.233.25.1 <none> 8000/TCP 5s
# Shell output: With external IP
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service-nginx-pod ClusterIP 10.233.40.6 192.168.30.100 8000/TCP 5s
Access the Service #
Curl the service; this returns the Nginx default welcome page:
# Access the service with the cluster IP from within the internal network
curl 10.233.25.1:8000
# Access the service with its external IP:
192.168.30.100:8000
Delete Resources #
# Delete the service
kubectl delete service service-nginx-pod
# Delete the pod
kubectl delete pod nginx-pod
Service for Replication Controller #
Create Replication Controller #
vi replication-controller.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-alpine-controller
  labels:
    app: nginx-alpine
spec:
  replicas: 2
  selector:
    app: nginx-alpine
  template:
    metadata:
      labels:
        app: nginx-alpine
    spec:
      containers:
      - name: nginx-container
        image: nginx
        ports:
        - containerPort: 80
      - name: alpine-container
        image: alpine
        command: ["/bin/ash", "-c"]
        args:
        - apk add --no-cache curl &&
          while true; do
          curl http://localhost:80/;
          sleep 3;
          done
kubectl apply -f replication-controller.yaml
List Replication Controllers #
# List all replication controllers
kubectl get rc
# Shell output:
NAME DESIRED CURRENT READY AGE
nginx-alpine-controller 2 2 1 5s
Create Service #
Make sure to choose an unused IP address in the subnet.
# Create a service for the Replication Controller
kubectl expose rc nginx-alpine-controller --name="service-rc" --external-ip="192.168.30.100"
# Create a service for the Replication Controller: Define service port
kubectl expose rc nginx-alpine-controller --port=8000 --target-port=80 --name="service-rc" --external-ip="192.168.30.100"
Service Details #
# List service details
kubectl get svc service-rc
# Shell output:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service-rc ClusterIP 10.233.14.11 192.168.30.100 80/TCP 7s
# Shell output: With defined service port
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service-rc ClusterIP 10.233.57.67 192.168.30.100 8000/TCP 10s
Note: When a specific service port is defined that differs from the container port, the target-port must be defined as well!
Access the Service #
# Access the service / the nginx pod
192.168.30.100:80
# Access the service / the nginx pod: With defined service port
192.168.30.100:8000
Service Details #
The Labels and Selector of the service are set to match the nginx-alpine label of the replication controller.
# List service details
kubectl describe svc service-rc
# Shell output:
Name: service-rc
Namespace: default
Labels: app=nginx-alpine
Annotations: <none>
Selector: app=nginx-alpine
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.233.36.27
IPs: 10.233.36.27
External IPs: 192.168.30.100
Port: <unset> 80/TCP
TargetPort: 80/TCP
Endpoints: 10.233.71.11:80,10.233.74.82:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Delete Resources #
# Delete the service
kubectl delete service service-rc
# Delete the Replication Controller and its pods
kubectl delete rc nginx-alpine-controller
Service for External Resource #
External Resource #
I’m deploying an Nginx container on a Debian 12 server with the IP address 192.168.30.60.
# Create an Nginx container on a resource outside the Kubernetes cluster
docker run -d -p 80:80 nginx
# Verify the Nginx container
192.168.30.60:80
Endpoint for External Resource #
Traffic can be routed to an external resource with an Endpoints resource that points to the external resource, and a Service without a selector that exposes it. This setup is useful for integrating external services into a Kubernetes environment.
# Create endpoint resource
vi ext-endpoint.yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: service-external-resource
subsets:
- addresses:
  - ip: "192.168.30.60"
  ports:
  - port: 80
# Create the endpoint resource
kubectl create -f ext-endpoint.yaml
List Endpoint Resources #
# List endpoint resources
kubectl get endpoints
# Shell output:
NAME ENDPOINTS AGE
service-external-resource 192.168.30.60:80 4s
Verify the Endpoint IP #
# Verify the external resource IP
kubectl get ep service-external-resource
# Shell output:
NAME ENDPOINTS AGE
service-external-resource 192.168.30.60:80 30s
Endpoint Service #
Since the endpoint points to a resource outside the Kubernetes cluster, no selector is defined in the service configuration. The service must have the same name as the Endpoints resource so that Kubernetes associates the two.
# Create a service for the endpoint
vi service-ext-endpoint.yaml
apiVersion: v1
kind: Service
metadata:
  name: service-external-resource
spec:
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  externalIPs:
    - 192.168.30.100
kubectl create -f service-ext-endpoint.yaml
Service Details #
# List service details
kubectl describe svc service-external-resource
# Shell output:
Name: service-external-resource
Namespace: default
Labels: <none>
Annotations: <none>
Selector: <none>
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.233.58.241
IPs: 10.233.58.241
External IPs: 192.168.30.100
Port: <unset> 80/TCP
TargetPort: 80/TCP
Endpoints: 192.168.30.60:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Access the Services #
# Access the service / external resource
192.168.30.100:80
Delete Resources #
Since both the Endpoints resource and the Service have the same name, the same name is used in both delete commands.
# Delete the endpoint resource
kubectl delete endpoints service-external-resource
# Delete the service for the endpoint
kubectl delete service service-external-resource
Service for Deployments #
Create Deployment #
vi nginx-alpine-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-alpine-deployment
  labels:
    app: nginx-alpine
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-alpine
  template:
    metadata:
      labels:
        app: nginx-alpine
    spec:
      containers:
      - name: nginx-container
        image: nginx
        ports:
        - containerPort: 80
      - name: alpine-container
        image: alpine
        command: ["/bin/ash", "-c"]
        args:
        - |
          apk add --no-cache curl &&
          while true; do
            curl http://localhost:80/;
            sleep 3;
          done
# Create the deployment
kubectl apply -f nginx-alpine-deployment.yaml
Check the Deployment Status #
# Check the deployment status
kubectl get deployment nginx-alpine-deployment
# Shell output:
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-alpine-deployment 3/3 3 3 33s
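The rollout can also be monitored until all replicas are available (a sketch):
# Wait for the deployment rollout to complete
kubectl rollout status deployment/nginx-alpine-deployment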
Verify the Deployment #
# List deployment details
kubectl describe deployment nginx-alpine-deployment
# List the pods of the deployment
kubectl get pods -l app=nginx-alpine
# Shell output:
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 48s deployment-controller Scaled up replica set nginx-alpine-deployment-5c46498874 to 3
Create a LoadBalancer Service #
vi nginx-alpine-loadbalancer.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-alpine-loadbalancer
spec:
  type: LoadBalancer
  selector:
    app: nginx-alpine # This must match the label selector of the deployment
  ports:
    - protocol: TCP
      port: 80 # The port the LoadBalancer service will be accessible on
      targetPort: 80 # The container port to direct traffic to
# Deploy the LoadBalancer service
kubectl apply -f nginx-alpine-loadbalancer.yaml
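Until MetalLB has assigned an address, the external IP may show as “pending”; the assignment can be watched like this (a sketch):
# Watch the service until the external IP is assigned
kubectl get svc nginx-alpine-loadbalancer -w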
LoadBalancer Service Details #
# List LoadBalancer service details
kubectl get svc nginx-alpine-loadbalancer
# Shell output:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-alpine-loadbalancer LoadBalancer 10.233.8.179 192.168.30.241 80:30806/TCP 62s
# List LoadBalancer service details: More
kubectl describe svc nginx-alpine-loadbalancer
# Shell output:
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal IPAllocated 2m13s metallb-controller Assigned IP ["192.168.30.241"]
Normal nodeAssigned 2m13s metallb-speaker announcing from node "node4" with protocol "layer2"
Access / Test the Deployment #
# Open the URL in a browser
192.168.30.241:80
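The deployment can also be tested from the command line (a sketch):
# Curl the LoadBalancer IP
curl http://192.168.30.241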
Delete Resources #
# Delete the deployment
kubectl delete deployment nginx-alpine-deployment
# Delete the loadbalancer service
kubectl delete service nginx-alpine-loadbalancer