
Nextcloud - Kubernetes Deployment


Overview
#

In this tutorial I’m using the following Kubernetes cluster, deployed with kubeadm, and an Ubuntu 24.04-based NFS server:

# Kubernetes cluster
NAME      STATUS   ROLES           AGE   VERSION    INTERNAL-IP     EXTERNAL-IP   OS-IMAGE           KERNEL-VERSION     CONTAINER-RUNTIME
ubuntu1   Ready    control-plane   90d   v1.28.11   192.168.30.10   <none>        Ubuntu 24.04 LTS   6.8.0-36-generic   containerd://1.7.18
ubuntu2   Ready    worker          90d   v1.28.11   192.168.30.11   <none>        Ubuntu 24.04 LTS   6.8.0-36-generic   containerd://1.7.18
ubuntu3   Ready    worker          90d   v1.28.11   192.168.30.12   <none>        Ubuntu 24.04 LTS   6.8.0-36-generic   containerd://1.7.18

192.168.30.13 # NFS server

This post is just a quick test of a Nextcloud deployment in a Kubernetes cluster.

For a more extensive configuration check out my older post “Nextcloud: Docker Compose Stack, HTTPS, S3 Storage, LDAPS Active Directory Authentication, Maintenance & other Settings”.


Prerequisites
#

NFS
#

NFS Server Setup
#

# Install NFS server
sudo apt install nfs-kernel-server -y
# Create directory for NFS share
sudo mkdir -p /srv/nfs/k8s_nfs-csi
# Open NFS configuration
sudo vi /etc/exports

# Define Kubernetes nodes
/srv/nfs/k8s_nfs-csi 192.168.30.10(rw,sync,no_root_squash)
/srv/nfs/k8s_nfs-csi 192.168.30.11(rw,sync,no_root_squash)
/srv/nfs/k8s_nfs-csi 192.168.30.12(rw,sync,no_root_squash)

# Restart NFS server
sudo systemctl restart nfs-server
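
Alternatively, the changed exports can be applied without a full service restart; a quick sketch using the standard exportfs flags:

```shell
# Re-export all directories from /etc/exports without restarting the service
sudo exportfs -ra
# List the active exports and their options to verify
sudo exportfs -v
```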

Install NFS Client on Kubernetes Nodes
#

Install the NFS Client on all the Kubernetes nodes:

# Install NFS utilities package and rpcbind package
sudo apt install nfs-common rpcbind -y

Note: The “rpcbind” package is necessary for NFSv3, which relies on remote procedure calls (RPCs) for various operations.


Verify the NFS connectivity from the Kubernetes nodes:

# Verify that the NFS server is correctly configured
/usr/sbin/showmount -e 192.168.30.13

# Shell output:
Export list for 192.168.30.13:
/srv/nfs/k8s_nfs-csi 192.168.30.12,192.168.30.11,192.168.30.10
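
To rule out driver problems later, the share can optionally be mounted manually from one of the nodes first; a quick sketch (the /mnt mount point is just an example):

```shell
# Temporarily mount the NFS share (NFSv3, matching the StorageClass mount options used later)
sudo mount -t nfs -o nfsvers=3 192.168.30.13:/srv/nfs/k8s_nfs-csi /mnt
# Write a test file, then clean up and unmount
sudo touch /mnt/mount-test && sudo rm /mnt/mount-test
sudo umount /mnt
```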

NFS CSI Driver
#

CSI Setup
#

Add the NFS CSI Helm repository:

# Add Helm repository & update repository index
helm repo add csi-driver-nfs https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/charts &&
helm repo update

Install CSI NFS Driver:

# Install the CSI NFS Driver
helm install csi-driver-nfs csi-driver-nfs/csi-driver-nfs --namespace kube-system

# Shell output:
NAME: csi-driver-nfs
LAST DEPLOYED: Fri Oct  4 14:23:38 2024
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The CSI NFS Driver is getting deployed to your cluster.

To check CSI NFS Driver pods status, please run:

  kubectl --namespace=kube-system get pods --selector="app.kubernetes.io/instance=csi-driver-nfs" --watch

Verify CSI NFS Driver:

# List pods
kubectl --namespace=kube-system get pods --selector="app.kubernetes.io/instance=csi-driver-nfs" --watch

# Shell output: (wait a while until the images are pulled)
NAME                                  READY   STATUS    RESTARTS        AGE
csi-nfs-controller-68466bd89b-9k2v5   4/4     Running   2 (42s ago)     3m34s
csi-nfs-node-c6lcr                    3/3     Running   1 (2m14s ago)   3m34s
csi-nfs-node-rkvj6                    3/3     Running   1 (96s ago)     3m34s
csi-nfs-node-sqbgd                    3/3     Running   1 (2m26s ago)   3m34s

Create Storage Class
#

# Create a manifest for the storage class
vi csi-nfs-storage-class.yml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi
provisioner: nfs.csi.k8s.io # NFS CSI Driver
parameters:
  server: 192.168.30.13
  share: /srv/nfs/k8s_nfs-csi
reclaimPolicy: Retain
volumeBindingMode: Immediate
mountOptions:
  - nfsvers=3

# Create the storage class
kubectl apply -f csi-nfs-storage-class.yml

Verify Storage Class
#

# List StorageClasses
kubectl get storageclasses

# Shell output:
NAME      PROVISIONER      RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-csi   nfs.csi.k8s.io   Retain          Immediate           false                  23s
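
Optionally, the new StorageClass can be marked as the cluster default, so that PVCs without an explicit storageClassName use it; a sketch using the standard annotation:

```shell
# Mark "nfs-csi" as the default StorageClass
kubectl patch storageclass nfs-csi \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
```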

Create Persistent Volume Claim
#

Note: The PVC references the “nextcloud” namespace, which must already exist; it is created in the “Create Namespace” step below.

# Create a manifest for the persistent volume claim
vi nextcloud-pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nextcloud
  namespace: nextcloud
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
  storageClassName: nfs-csi # Define the nfs-csi StorageClass

# Create the persistent volume claim
kubectl apply -f nextcloud-pvc.yaml

Verify Persistent Volume Claim
#

# List PVC in "nextcloud" namespace
kubectl get pvc -n nextcloud

# Shell output:
NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nextcloud   Bound    pvc-3b480d06-8f76-4343-86b8-3323cf33b6ef   8Gi        RWO            nfs-csi        38s
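
To confirm that the volume is actually writable, a throwaway pod can mount the PVC; a minimal sketch (the pod and file names are arbitrary):

```yaml
# pvc-test-pod.yml - writes a file into the "nextcloud" PVC and exits
apiVersion: v1
kind: Pod
metadata:
  name: pvc-test
  namespace: nextcloud
spec:
  restartPolicy: Never
  containers:
    - name: writer
      image: busybox
      command: ["sh", "-c", "echo ok > /data/pvc-test.txt"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: nextcloud
```

Apply it with kubectl apply -f pvc-test-pod.yml; the file should then show up inside the PV directory on the NFS server. Delete the pod afterwards with kubectl delete pod pvc-test -n nextcloud.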



Nextcloud Kubernetes Deployment
#

Create Namespace
#

Create a namespace for the Nextcloud deployment:

# Create namespace
kubectl create namespace nextcloud

TLS Certificate Secret
#

In this setup I’m using a Let’s Encrypt wildcard certificate.

# Create a Kubernetes secret for the TLS certificate
kubectl create secret tls nextcloud-tls --cert=./fullchain.pem --key=./privkey.pem -n nextcloud
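
Before creating the secret, it can be worth checking that the certificate and key actually belong together; a sketch using standard OpenSSL commands, assuming an RSA key (the file names match the command above):

```shell
# Both checksums must be identical if the key matches the certificate
openssl x509 -noout -modulus -in ./fullchain.pem | openssl md5
openssl rsa -noout -modulus -in ./privkey.pem | openssl md5
```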

Add Helm Repository
#

# Add Helm repository
helm repo add nextcloud https://nextcloud.github.io/helm/
helm repo update

Save & Adapt Helm Chart Values
#

# Save the Helm chart values
helm show values nextcloud/nextcloud > nextcloud-values.yaml

# Adapt the values
vi nextcloud-values.yaml

Nginx Ingress
#

Adapt the values for the Nginx Ingress:

ingress:
  enabled: true
  className: nginx
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: 4G
  tls:
    - secretName: nextcloud-tls # Define secret with TLS certificates
      hosts:
        - nextcloud.jklug.work # Define domainname
  labels: {}
  path: /
  pathType: Prefix

Domain & PW
#

Adapt the Nextcloud domain and define the default admin user & password:

nextcloud:
  host: nextcloud.jklug.work
  username: admin
  password: my-secure-pw

Service
#

Original service definition:

service:
  type: ClusterIP
  port: 8080
  loadBalancerIP: ""
  nodePort:
  annotations: {}

Adapt the service type to LoadBalancer:

service:
  type: LoadBalancer
  loadBalancerIP: ""
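
Note that a bare-metal kubeadm cluster has no built-in LoadBalancer implementation, so this setup assumes one is already present. MetalLB is a common choice; a sketch of a layer-2 configuration (the address range is an example matching this lab network, MetalLB itself must be installed separately):

```yaml
# MetalLB layer-2 address pool (assumes MetalLB is installed in the cluster)
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: lab-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.30.200-192.168.30.210
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: lab-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - lab-pool
```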

Persistent Storage
#

Original Configuration:

persistence:
  # Nextcloud Data (/var/www/html)
  enabled: false
  annotations: {}
  ## nextcloud data Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  ##
  # storageClass: "-"

  ## A manually managed Persistent Volume and Claim
  ## Requires persistence.enabled: true
  ## If defined, PVC must be created manually before volume will be bound
  # existingClaim:

  accessMode: ReadWriteOnce
  size: 8Gi

  ## Use an additional pvc for the data directory rather than a subpath of the default PVC
  ## Useful to store data on a different storageClass (e.g. on slower disks)
  nextcloudData:
    enabled: false
    subPath:
    annotations: {}
    # storageClass: "-"
    # existingClaim:
    accessMode: ReadWriteOnce
    size: 8Gi

Adapt the values to use the previously created PVC:

persistence:
  enabled: true
  existingClaim: nextcloud # Define the PVC
  accessMode: ReadWriteOnce
  size: 8Gi # Matches the PVC size; ignored when existingClaim is set
resources:
  {}

Liveness Probe
#

Adapt the probes and give them more time.

Original version:

livenessProbe:
  enabled: true
  initialDelaySeconds: 10
  periodSeconds: 10
  timeoutSeconds: 5
  failureThreshold: 3
  successThreshold: 1
readinessProbe:
  enabled: true
  initialDelaySeconds: 10
  periodSeconds: 10
  timeoutSeconds: 5
  failureThreshold: 3
  successThreshold: 1
startupProbe:
  enabled: false
  initialDelaySeconds: 30
  periodSeconds: 10
  timeoutSeconds: 5
  failureThreshold: 30
  successThreshold: 1

Adapted version:

livenessProbe:
  enabled: true
  initialDelaySeconds: 100
  periodSeconds: 100
  timeoutSeconds: 10
  failureThreshold: 5
  successThreshold: 1
readinessProbe:
  enabled: true
  initialDelaySeconds: 100
  periodSeconds: 100
  timeoutSeconds: 10
  failureThreshold: 5
  successThreshold: 1
startupProbe:
  enabled: true
  initialDelaySeconds: 600
  periodSeconds: 100
  timeoutSeconds: 10
  failureThreshold: 5
  successThreshold: 1

Deploy Nextcloud
#

# Install Nextcloud
helm install my-nextcloud nextcloud/nextcloud -n nextcloud -f nextcloud-values.yaml

# Shell output:
NAME: my-nextcloud
LAST DEPLOYED: Fri Oct  4 15:38:03 2024
NAMESPACE: nextcloud
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
#######################################################################################################
## WARNING: You did not provide an external database host in your 'helm install' call                ##
## Running Nextcloud with the integrated sqlite database is not recommended for production instances ##
#######################################################################################################

For better performance etc. you have to configure nextcloud with a resolvable database
host. To configure nextcloud to use and external database host:


1. Complete your nextcloud deployment by running:

  NOTE: It may take a few minutes for the LoadBalancer IP to be available.
        Watch the status with: 'kubectl get svc --namespace nextcloud -w my-nextcloud'

  export APP_HOST=$(kubectl get svc --namespace nextcloud my-nextcloud --template "{{ range (index .status.loadBalancer.ingress 0) }}{{ . }}{{ end }}")
  export APP_PASSWORD=$(kubectl get secret --namespace nextcloud my-nextcloud -o jsonpath="{.data.nextcloud-password}" | base64 --decode)

  ## PLEASE UPDATE THE EXTERNAL DATABASE CONNECTION PARAMETERS IN THE FOLLOWING COMMAND AS NEEDED ##

  helm upgrade my-nextcloud nextcloud/nextcloud \
    --set nextcloud.password=$APP_PASSWORD,nextcloud.host=$APP_HOST,service.type=LoadBalancer,mariadb.enabled=false,externalDatabase.user=nextcloud,externalDatabase.database=nextcloud,externalDatabase.host=YOUR_EXTERNAL_DATABASE_HOST
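
To get rid of the SQLite warning, the chart’s bundled MariaDB subchart can be enabled in nextcloud-values.yaml instead of the internal database; a sketch (verify the exact key names against the values file of the chart version you downloaded):

```yaml
internalDatabase:
  enabled: false # Disable the default SQLite database
mariadb:
  enabled: true # Deploy the bundled MariaDB subchart
  auth:
    database: nextcloud
    username: nextcloud
    password: my-secure-db-pw # Example value, change this
```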

Verify Nextcloud Deployment
#

Verify Resources
#

# List resources in the "nextcloud" namespace
kubectl get all -n nextcloud

# Shell output:
NAME                                READY   STATUS    RESTARTS   AGE
pod/my-nextcloud-77dddbdf76-x4js8   1/1     Running   0          10m

NAME                   TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)          AGE
service/my-nextcloud   LoadBalancer   10.110.190.68   192.168.30.201   8080:31805/TCP   10m

NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/my-nextcloud   1/1     1            1           10m

NAME                                      DESIRED   CURRENT   READY   AGE
replicaset.apps/my-nextcloud-77dddbdf76   1         1         1       10m

Verify Ingress
#

# List ingress resources in "nextcloud" namespace
kubectl get ingress -n nextcloud

# Shell output:
NAME           CLASS   HOSTS                  ADDRESS          PORTS     AGE
my-nextcloud   nginx   nextcloud.jklug.work   192.168.30.200   80, 443   11m

Verify Data Persistence
#

# List the data inside the PVC
ls -la /srv/nfs/k8s_nfs-csi/pvc-3b480d06-8f76-4343-86b8-3323cf33b6ef/

# Shell output:
total 36
drwxrwsr-x  9 root     www-data 4096 Oct  4 15:38 .
drwxr-xr-x  3 root     root     4096 Oct  4 15:36 ..
drwxrwsr-x  2 www-data www-data 4096 Oct  4 15:44 config
drwxrwsr-x  2 www-data www-data 4096 Oct  4 15:38 custom_apps
drwxrwx---  3 www-data www-data 4096 Oct  4 15:49 data
drwxrwsr-x 16 www-data www-data 4096 Oct  4 15:41 html
drwxrwsr-x  4 root     www-data 4096 Oct  4 15:38 root
drwxrwsr-x  3 www-data www-data 4096 Oct  4 15:41 themes
drwxrwsr-x  2 root     www-data 4096 Oct  4 15:38 tmp

DNS Entry
#

# Create a DNS entry (or local hosts file entry) for Nextcloud, pointing to the Ingress address
192.168.30.200 nextcloud.jklug.work

Access Nextcloud Web Interface
#

# Access the Nextcloud web interface
https://nextcloud.jklug.work/

Use the user and password defined in the nextcloud-values.yaml file. Optionally, use the following command to list the password:

# List password
echo $(kubectl get secret --namespace nextcloud my-nextcloud -o jsonpath="{.data.nextcloud-password}" | base64 --decode)

Delete Nextcloud
#

# Delete Nextcloud
helm delete my-nextcloud -n nextcloud
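
Because the StorageClass uses reclaimPolicy: Retain, deleting the Helm release removes neither the PVC, the PV, nor the data on the NFS server. A cleanup sketch (the PV name below is the one from the example output above):

```shell
# Delete the PVC and the namespace
kubectl delete pvc nextcloud -n nextcloud
kubectl delete namespace nextcloud
# The PV stays in "Released" state due to the Retain policy; delete it manually
kubectl delete pv pvc-3b480d06-8f76-4343-86b8-3323cf33b6ef
# The data itself remains in the export directory on the NFS server
# and has to be removed there if no longer needed
```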