
Kubernetes Container Storage Interface (CSI): Longhorn Distributed Block Storage System - Deploy Longhorn via Helm Chart, Define Custom Storage Mountpoints; StorageClass & PVC Example

Kubernetes-Components - This article is part of a series.
Part 17: This Article

Overview
#

Longhorn
#

Longhorn is an open-source, lightweight distributed block storage system for Kubernetes that provides persistent storage for applications running in a Kubernetes cluster.

Longhorn builds an abstraction layer on top of local host directories (HostPath-style storage) and treats individual directories as replicas of a PersistentVolume, spreading the replicas across nodes to keep the data available in case of node or network failures.
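
Once Longhorn is installed (see below), this replica placement can be inspected directly through the Longhorn CRDs; a small sketch, assuming the default "longhorn-system" namespace:

# List the Longhorn volume objects
kubectl get volumes.longhorn.io -n longhorn-system

# List the replica objects and the node each replica is scheduled on
kubectl get replicas.longhorn.io -n longhorn-system -o wide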


Kubernetes Cluster
#

In this tutorial I’m using the following Kubernetes cluster deployed with Kubeadm:

NAME      STATUS   ROLES           AGE   VERSION    INTERNAL-IP     EXTERNAL-IP   OS-IMAGE           KERNEL-VERSION     CONTAINER-RUNTIME
ubuntu1   Ready    control-plane   77d   v1.28.11   192.168.30.10   <none>        Ubuntu 24.04 LTS   6.8.0-36-generic   containerd://1.7.18
ubuntu2   Ready    worker          77d   v1.28.11   192.168.30.11   <none>        Ubuntu 24.04 LTS   6.8.0-36-generic   containerd://1.7.18
ubuntu3   Ready    worker          77d   v1.28.11   192.168.30.12   <none>        Ubuntu 24.04 LTS   6.8.0-36-generic   containerd://1.7.18

Storage Configuration
#

I’m using the following storage configuration on each Kubernetes worker node; the sdb disk will be used for Longhorn:

# List blockdevices
lsblk

# Shell output:
NAME                      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sda                         8:0    0   70G  0 disk
├─sda1                      8:1    0    1M  0 part
├─sda2                      8:2    0    2G  0 part /boot
└─sda3                      8:3    0   68G  0 part
  └─ubuntu--vg-ubuntu--lv 252:0    0   34G  0 lvm  /
sdb                         8:16   0   30G  0 disk



Prerequisites
#

Storage Setup
#

I’m using the sdb disk on each worker node for the Longhorn storage pool.

# Create ext4 filesystem
sudo mkfs.ext4 /dev/sdb
# Create mountpoint directory
sudo mkdir -p /var/lib/longhorn-sdb

# Create fstab entry
echo '/dev/sdb       /var/lib/longhorn-sdb       ext4    noatime 0       0' | sudo tee -a /etc/fstab

# Mount disk
sudo mount -a
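
Before continuing, it’s worth verifying on each worker node that the disk is actually mounted at the expected path, for example:

# Verify the filesystem and mountpoint
df -h /var/lib/longhorn-sdb

# Verify the mount resolved from the fstab entry
findmnt /var/lib/longhorn-sdb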

Install Prerequisites
#

# Install Open-ISCSI & NFS Client
sudo apt install open-iscsi nfs-common -y
# Verify Open-ISCSI is installed
iscsiadm --version

# Shell output:
iscsiadm version 2.1.9
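
Longhorn also expects the iscsid daemon to be running on every node; depending on the distribution it may not be enabled automatically after installing open-iscsi, so it doesn’t hurt to enable it explicitly:

# Enable and start the iSCSI daemon
sudo systemctl enable --now iscsid

# Verify the service is active
systemctl status iscsid --no-pager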



Longhorn Installation (Helm)
#

Add Longhorn Helm Repository
#

# Add the Longhorn repository & update the repository index
helm repo add longhorn https://charts.longhorn.io &&
helm repo update
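
Optionally check which chart versions are available before installing:

# List the available Longhorn chart versions
helm search repo longhorn/longhorn --versions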

Adapt Longhorn Helm Chart Values
#

Adapt the Longhorn Helm chart values to use the sdb disk:

# Save the Longhorn Helm values into a file
helm show values longhorn/longhorn > longhorn-values.yaml

# Adapt the values
vi longhorn-values.yaml

Adapt the following default value:

defaultSettings:
  defaultDataPath: ~

Define the sdb mountpoint:

defaultSettings:
  defaultDataPath: /var/lib/longhorn-sdb
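
As an alternative to editing the values file, the same setting could also be overridden directly on the command line with --set (in that case the -f flag in the install command below can be omitted); a minimal sketch:

# Override only the default data path via --set
helm install longhorn longhorn/longhorn \
  --namespace longhorn-system \
  --create-namespace \
  --set defaultSettings.defaultDataPath="/var/lib/longhorn-sdb"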

Install Longhorn
#

Install the Longhorn Helm chart:

# Install longhorn in "longhorn-system" namespace
helm install longhorn longhorn/longhorn \
  -f longhorn-values.yaml \
  --namespace longhorn-system \
  --create-namespace


# Shell output:
NAME: longhorn
LAST DEPLOYED: Sat Sep 21 12:38:07 2024
NAMESPACE: longhorn-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Longhorn is now installed on the cluster!

Please wait a few minutes for other Longhorn components such as CSI deployments, Engine Images, and Instance Managers to be initialized.

Visit our documentation at https://longhorn.io/docs/

Verify Longhorn Installation
#

# List resources in "longhorn-system" namespace
kubectl get all -n longhorn-system

# Shell output:
NAME                                                    READY   STATUS    RESTARTS      AGE
pod/csi-attacher-d49d56548-gst62                        1/1     Running   1 (39s ago)   2m8s
pod/csi-attacher-d49d56548-v4kdd                        1/1     Running   0             2m8s
pod/csi-attacher-d49d56548-z6fmm                        1/1     Running   1 (40s ago)   2m8s
pod/csi-provisioner-64fb94f78c-l2czh                    1/1     Running   0             2m8s
pod/csi-provisioner-64fb94f78c-lcl4s                    1/1     Running   0             2m8s
pod/csi-provisioner-64fb94f78c-t7pl5                    1/1     Running   0             2m8s
pod/csi-resizer-69c444fccd-lcgrl                        1/1     Running   0             2m8s
pod/csi-resizer-69c444fccd-rgt7l                        1/1     Running   0             2m8s
pod/csi-resizer-69c444fccd-wvls5                        1/1     Running   0             2m8s
pod/csi-snapshotter-6cdbd8f5b8-54h27                    1/1     Running   0             2m8s
pod/csi-snapshotter-6cdbd8f5b8-88blj                    1/1     Running   0             2m8s
pod/csi-snapshotter-6cdbd8f5b8-mgkt2                    1/1     Running   0             2m8s
pod/engine-image-ei-f4f7aa25-99dwd                      1/1     Running   0             2m48s
pod/engine-image-ei-f4f7aa25-mv9dv                      1/1     Running   0             2m48s
pod/instance-manager-8c7c1cfe91a05f52d4a83f5290bc29eb   1/1     Running   0             2m18s
pod/instance-manager-954186bcdf91811677a2ca137e7ea00a   1/1     Running   0             2m18s
pod/longhorn-csi-plugin-6q9jz                           3/3     Running   0             2m8s
pod/longhorn-csi-plugin-sxvb2                           3/3     Running   0             2m8s
pod/longhorn-driver-deployer-5f5fb7cfbb-dxjpq           1/1     Running   0             3m34s
pod/longhorn-manager-8vpfh                              2/2     Running   0             3m34s
pod/longhorn-manager-j8w87                              2/2     Running   0             3m34s
pod/longhorn-ui-8f9d758b8-tdfzd                         1/1     Running   0             3m34s
pod/longhorn-ui-8f9d758b8-zpj2b                         1/1     Running   0             3m34s

NAME                                  TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/longhorn-admission-webhook    ClusterIP   10.110.110.219   <none>        9502/TCP   3m34s
service/longhorn-backend              ClusterIP   10.110.3.199     <none>        9500/TCP   3m34s
service/longhorn-conversion-webhook   ClusterIP   10.106.22.250    <none>        9501/TCP   3m34s
service/longhorn-frontend             ClusterIP   10.101.4.18      <none>        80/TCP     3m34s
service/longhorn-recovery-backend     ClusterIP   10.98.8.65       <none>        9503/TCP   3m34s

NAME                                      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/engine-image-ei-f4f7aa25   2         2         2       2            2           <none>          2m49s
daemonset.apps/longhorn-csi-plugin        2         2         2       2            2           <none>          2m8s
daemonset.apps/longhorn-manager           2         2         2       2            2           <none>          3m34s

NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/csi-attacher               3/3     3            3           2m8s
deployment.apps/csi-provisioner            3/3     3            3           2m8s
deployment.apps/csi-resizer                3/3     3            3           2m8s
deployment.apps/csi-snapshotter            3/3     3            3           2m8s
deployment.apps/longhorn-driver-deployer   1/1     1            1           3m34s
deployment.apps/longhorn-ui                2/2     2            2           3m34s

NAME                                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/csi-attacher-d49d56548                3         3         3       2m8s
replicaset.apps/csi-provisioner-64fb94f78c            3         3         3       2m8s
replicaset.apps/csi-resizer-69c444fccd                3         3         3       2m8s
replicaset.apps/csi-snapshotter-6cdbd8f5b8            3         3         3       2m8s
replicaset.apps/longhorn-driver-deployer-5f5fb7cfbb   1         1         1       3m34s
replicaset.apps/longhorn-ui-8f9d758b8                 2         2         2       3m34s
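
The Longhorn CSI driver registration can be checked as well; the driver object is named driver.longhorn.io (the same name that appears as the provisioner of the Longhorn StorageClasses later on):

# Verify the Longhorn CSI driver object is registered
kubectl get csidriver driver.longhorn.io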

Deploy Ingress Resource
#

Create Kubernetes TLS Certificate Secret
#

  • Create a Kubernetes secret for the TLS certificate.

  • I’m using a Let’s Encrypt wildcard certificate in this tutorial.

# Create a Kubernetes secret for the TLS certificate
kubectl create secret tls longhorn-tls --cert=./fullchain.pem --key=./privkey.pem -n longhorn-system
# Verify the secret
kubectl get secret -n longhorn-system

# Shell output:
NAME                             TYPE                 DATA   AGE
longhorn-tls                     kubernetes.io/tls    2      5s
longhorn-webhook-ca              kubernetes.io/tls    2      4m10s
longhorn-webhook-tls             kubernetes.io/tls    2      4m10s
sh.helm.release.v1.longhorn.v1   helm.sh/release.v1   1      4m35s
# List secret details
kubectl describe secret longhorn-tls -n longhorn-system

# Shell output:
...
Data
====
tls.crt:  3578 bytes
tls.key:  1708 bytes

Create Nginx Ingress Resource
#

  • Create an Nginx Ingress resource for the Longhorn frontend ClusterIP service:
# Create a manifest for the Nginx Ingress
vi longhorn-ingress.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: longhorn-ingress
  namespace: longhorn-system
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  ingressClassName: nginx
  rules:
    - host: longhorn.jklug.work  # Define domain name
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: longhorn-frontend
                port:
                  number: 80
  tls:
    - hosts:
        - longhorn.jklug.work  # Define domain name
      secretName: longhorn-tls  # Define secret name
# Apply the Ingress resource
kubectl apply -f longhorn-ingress.yml
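
Note that the Longhorn UI itself does not provide authentication. When exposing it via ingress-nginx, basic auth can be added with the standard ingress-nginx annotations; a sketch, assuming a htpasswd file and a secret named longhorn-basic-auth (both names are examples):

# Create a htpasswd file and a basic auth secret (htpasswd is part of apache2-utils)
htpasswd -c auth admin
kubectl create secret generic longhorn-basic-auth --from-file=auth -n longhorn-system

# Then add the following annotations to the Ingress metadata:
#   nginx.ingress.kubernetes.io/auth-type: basic
#   nginx.ingress.kubernetes.io/auth-secret: longhorn-basic-auth
#   nginx.ingress.kubernetes.io/auth-realm: "Authentication Required"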

Verify Ingress Resource
#

# List Ingress resources in "longhorn-system" namespace
kubectl get ingress -n longhorn-system

# Shell output: (wait a minute until the Ingress gets an address)
NAME               CLASS   HOSTS                 ADDRESS          PORTS     AGE
longhorn-ingress   nginx   longhorn.jklug.work   192.168.30.200   80, 443   74s
# List Nginx Ingress details
kubectl describe ingress longhorn-ingress -n longhorn-system

Create DNS Entry
#

# Create a DNS entry (or local /etc/hosts entry) for the Longhorn Ingress
192.168.30.200 longhorn.jklug.work

Longhorn Dashboard
#

Access Dashboard
#

# Open the Longhorn Dashboard
https://longhorn.jklug.work

Verify Disks
#

  • Select the “Node” tab

Longhorn CLI Overview
#

List Nodes
#

# List nodes
kubectl get nodes.longhorn.io -n longhorn-system

# Shell output:
NAME      READY   ALLOWSCHEDULING   SCHEDULABLE   AGE
ubuntu2   True    true              True          39s
ubuntu3   True    true              True          39s
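
Longhorn exposes its settings as CRDs as well, so the values applied through the Helm chart can be read back from the cluster (setting names may differ slightly between Longhorn versions):

# List all Longhorn settings
kubectl get settings.longhorn.io -n longhorn-system

# Show the default data path setting
kubectl get settings.longhorn.io default-data-path -n longhorn-system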

Verify Disk Mount Path
#

# List node details / verify disk path
kubectl describe node.longhorn.io ubuntu2 -n longhorn-system

# Shell output:
...
  Disk Status:
    default-disk-1b061ad07ffc9771:
      Conditions:
        Last Probe Time:
        Last Transition Time:  2024-09-21T12:39:23Z
        Message:               Disk default-disk-1b061ad07ffc9771(/var/lib/longhorn-sdb) on node ubuntu2 is ready
        Reason:
        Status:                True
        Type:                  Ready
        Last Probe Time:
        Last Transition Time:  2024-09-21T12:39:23Z
        Message:               Disk default-disk-1b061ad07ffc9771(/var/lib/longhorn-sdb) on node ubuntu2 is schedulable
        Reason:
        Status:                True
        Type:                  Schedulable
      Disk Driver:
      Disk Name:               default-disk-1b061ad07ffc9771
      Disk Path:               /var/lib/longhorn-sdb # Verify the disk path
      Disk Type:               filesystem
      Disk UUID:               f0be37be-7d59-4b59-8df9-aee0c66b5395
      Filesystem Type:         ext2/ext3
      Instance Manager Name:
      Scheduled Replica:
      Storage Available:  31457280000
      Storage Maximum:    31526391808
      Storage Scheduled:  0
  Region:
  Snapshot Check Status:
  Zone:
Events:
  Type    Reason       Age   From                      Message
  ----    ------       ----  ----                      -------
  Normal  Ready        49s   longhorn-node-controller  Node ubuntu2 is ready
  Normal  Schedulable  49s   longhorn-node-controller
  Normal  Ready        49s   longhorn-node-controller  Node ubuntu2 is ready
  Normal  Schedulable  49s   longhorn-node-controller
  Normal  Ready        19s   longhorn-node-controller  Disk default-disk-1b061ad07ffc9771(/var/lib/longhorn-sdb) on node ubuntu2 is ready
  Normal  Schedulable  19s   longhorn-node-controller  Disk default-disk-1b061ad07ffc9771(/var/lib/longhorn-sdb) on node ubuntu2 is schedulable
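
The same information can be extracted in a scriptable way via jsonpath; a sketch assuming the disk configuration lives under .spec.disks of the Longhorn node object (field paths may vary between Longhorn versions):

# Print the configured disks of a node as JSON
kubectl get nodes.longhorn.io ubuntu2 -n longhorn-system -o jsonpath='{.spec.disks}'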



Longhorn StorageClasses
#

List StorageClasses
#

# List StorageClasses
kubectl get sc

# Shell output:
NAME                 PROVISIONER          RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
longhorn (default)   driver.longhorn.io   Delete          Immediate           true                   3m13s
longhorn-static      driver.longhorn.io   Delete          Immediate           true                   3m9s

StorageClass Details
#

# List StorageClass details
kubectl describe sc longhorn
kubectl describe sc longhorn-static

# Shell output:
Name:            longhorn
IsDefaultClass:  Yes
Annotations:     longhorn.io/last-applied-configmap=kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: longhorn
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: driver.longhorn.io
allowVolumeExpansion: true
reclaimPolicy: "Delete"
volumeBindingMode: Immediate
parameters:
  numberOfReplicas: "3"
  staleReplicaTimeout: "30"
  fromBackup: ""
  fsType: "ext4"
  dataLocality: "disabled"
  unmapMarkSnapChainRemoved: "ignored"
  disableRevisionCounter: "true"
,storageclass.kubernetes.io/is-default-class=true
Provisioner:           driver.longhorn.io
Parameters:            dataLocality=disabled,disableRevisionCounter=true,fromBackup=,fsType=ext4,numberOfReplicas=3,staleReplicaTimeout=30,unmapMarkSnapChainRemoved=ignored
AllowVolumeExpansion:  True
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     Immediate
Events:                <none>

Create RWO Retain StorageClass
#

  • Persistent Volume Claims with the ReadWriteOnce (RWO) access mode mount a volume with read-write privileges on a single node at a time.

  • RWO does not support sharing the volume across nodes; read-write access is restricted to pods running on the node where the volume is attached.

# Create a manifest for the StorageClass
vi longhorn-rwo-retain.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-rwo-retain
provisioner: driver.longhorn.io
allowVolumeExpansion: true
parameters:
  numberOfReplicas: "2"
  staleReplicaTimeout: "2880" # 48 hours in minutes
  fromBackup: ""
  fsType: "ext4"
volumeBindingMode: Immediate
reclaimPolicy: Retain
  • numberOfReplicas: "2": Longhorn will attempt to create 2 replicas of the volume, each stored on a different node, for high availability.
# Apply StorageClass
kubectl apply -f longhorn-rwo-retain.yaml

Verify the StorageClass:

# List StorageClasses
kubectl get sc

# Shell output:
NAME                  PROVISIONER          RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
longhorn (default)    driver.longhorn.io   Delete          Immediate           true                   47m
longhorn-rwo-retain   driver.longhorn.io   Retain          Immediate           true                   39s
longhorn-static       driver.longhorn.io   Delete          Immediate           true                   47m

Verify StorageClass
#

Create Persistent Volume Claim
#

# Create a manifest for the PVC
vi rwo-retain-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-rwo-retain
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn-rwo-retain
  resources:
    requests:
      storage: 2Gi
# Apply the PVC
kubectl apply -f rwo-retain-pvc.yaml

Verify the PVC & PV
#

# List PVCs
kubectl get pvc

# Shell output:
NAME                 STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
example-rwo-retain   Bound    pvc-18437730-1996-430f-bfb2-76d593784e2d   2Gi        RWO            longhorn-rwo-retain   12s
# List PVs
kubectl get pv

# Shell output:
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                        STORAGECLASS          REASON   AGE
pvc-18437730-1996-430f-bfb2-76d593784e2d   2Gi        RWO            Retain           Bound    default/example-rwo-retain   longhorn-rwo-retain            19s

PVC Volume Example Pod
#

# Create a manifest for the example pod
vi nginx-rwo-retain.yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
  - name: rwo-retain-container
    image: nginx:latest
    volumeMounts:
    - name: rwo-retain-volume
      mountPath: /data
  volumes:
  - name: rwo-retain-volume
    persistentVolumeClaim:
      claimName: example-rwo-retain
# Deploy the example pod
kubectl apply -f nginx-rwo-retain.yaml

Verify the pod:

# List pods in default namespace
kubectl get pod

# Shell output:
NAME          READY   STATUS    RESTARTS   AGE
example-pod   1/1     Running   0          31s
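
To confirm the Longhorn volume is writable from inside the pod, a quick test file can be written and read back:

# Write a test file into the mounted volume
kubectl exec example-pod -- sh -c 'echo "hello longhorn" > /data/test.txt'

# Read the test file back
kubectl exec example-pod -- cat /data/test.txt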

Verify the Volume in the Longhorn GUI
#

  • Select the “Volume” tab
  • Click on the PVC

Create a Snapshot of the Volume
#

  • The “Volume Head” refers to the active, live state of the volume, i.e. its most recent state.

  • Click “Take Snapshot”

  • Verify the snapshot (a declarative alternative via the CSI snapshot API is sketched below)
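
Snapshots can also be created declaratively through the Kubernetes CSI snapshot API instead of the GUI. A minimal sketch, assuming the external snapshot CRDs and the snapshot controller are already installed in the cluster (they are not part of the Longhorn chart) and that the "type: snap" parameter is supported by the installed Longhorn version:

# Create a manifest for the snapshot resources
vi volume-snapshot.yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: longhorn-snapshot-vsc
driver: driver.longhorn.io
deletionPolicy: Delete
parameters:
  type: snap  # In-cluster Longhorn snapshot, no backup target required
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: example-rwo-retain-snapshot
spec:
  volumeSnapshotClassName: longhorn-snapshot-vsc
  source:
    persistentVolumeClaimName: example-rwo-retain
# Apply the snapshot resources
kubectl apply -f volume-snapshot.yaml
# Verify the snapshot
kubectl get volumesnapshot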

Delete Resources
#

# Delete the example pod
kubectl delete -f nginx-rwo-retain.yaml

# Delete the PVC
kubectl delete pvc example-rwo-retain

# Delete the PV
kubectl delete pv pvc-18437730-1996-430f-bfb2-76d593784e2d
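
Note that with the "Retain" reclaim policy, deleting the PVC and PV does not remove the underlying Longhorn volume; it has to be cleaned up separately, either in the Longhorn GUI ("Volume" tab) or via the Longhorn CRD, for example:

# Delete the leftover Longhorn volume (same name as the PV)
kubectl delete volumes.longhorn.io pvc-18437730-1996-430f-bfb2-76d593784e2d -n longhorn-system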



Links
#

# Official Documentation
https://longhorn.io/docs/1.7.1/