Overview #
In this tutorial I use the following nodes of a Kubernetes (K8s) cluster with MetalLB, deployed bare metal on Debian 12:
192.168.30.71 deb-02 # Controller / Master Node
192.168.30.72 deb-03 # Controller / Master Node
192.168.30.73 deb-04 # Worker Node
192.168.30.74 deb-05 # Worker Node
192.168.30.60 # NFS server
Kubernetes Volume Types #
- nfs
  Multiple pods can mount and share the files in the same NFS volume. If a pod is terminated, the data is still accessible on the NFS share.
- emptyDir
  Creates a volume for containers in the same pod to share. The volume and its files are erased when the pod is removed.
Nginx with NFS Volume Example #
NFS Server Setup #
I’m using the following NFS server configuration:
# Install NFS package
sudo apt install nfs-kernel-server
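# Create the shared directory (the path must exist before it can be exported; assuming the /srv/nfs/k8s_share path used below)
sudo mkdir -p /srv/nfs/k8s_share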
# Open NFS configuration
sudo vi /etc/exports
# NFS configuration
/srv/nfs/k8s_share 192.168.30.71(rw,sync,no_root_squash)
/srv/nfs/k8s_share 192.168.30.72(rw,sync,no_root_squash)
/srv/nfs/k8s_share 192.168.30.73(rw,sync,no_root_squash)
/srv/nfs/k8s_share 192.168.30.74(rw,sync,no_root_squash)
# Restart NFS server
sudo systemctl restart nfs-server
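# Optional: list the active exports to confirm the configuration was applied
sudo exportfs -v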
Install NFS on Kubernetes Nodes #
Install the NFS utilities package on all Kubernetes nodes:
# Install NFS utilities package
sudo apt install nfs-common -y
Verify the NFS Server Configuration #
Verify the connection to the NFS server from all Kubernetes nodes:
# Find the showmount bin
find / -name showmount 2>/dev/null
# Verify that the NFS server is correctly configured
/usr/sbin/showmount -e 192.168.30.60
# Shell output:
Export list for 192.168.30.60:
/srv/nfs/k8s_share 192.168.30.74,192.168.30.73,192.168.30.72,192.168.30.71
Optional: Manually mount the NFS share on the Kubernetes nodes to verify that it works:
# Mount NFS share
sudo mount -t nfs 192.168.30.60:/srv/nfs/k8s_share /mnt
# Verify mount
ls /mnt
# Shell output:
index.html
# Unmount
sudo umount /mnt
Nginx Pod with NFS Mount #
Pod Configuration #
# Create configuration for the pod
vi nginx-nfs-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nfs-nginx
  labels:
    app: nfs-nginx
spec:
  containers:
    - name: nginx-container
      image: nginx
      volumeMounts:
        - name: nfs
          mountPath: "/usr/share/nginx/html" # Mount to the Nginx web root
      ports:
        - containerPort: 80
  volumes:
    - name: nfs
      nfs:
        server: "192.168.30.60"
        path: "/srv/nfs/k8s_share"
# Deploy the pod
kubectl create -f nginx-nfs-pod.yaml
Pod Details & Logs #
# List pod details: Check the NFS mounting status
kubectl describe pod nfs-nginx
# Shell output:
Name: nfs-nginx
Namespace: default
Priority: 0
Service Account: default
Node: node4/192.168.30.74
Start Time: Tue, 28 May 2024 19:37:02 +0200
Labels: app=nfs-nginx
Annotations: cni.projectcalico.org/containerID: 52ab9edd6534cc409bed8ecf4ad5210b46aa42e34a1dd48e98817a39c3b0a629
cni.projectcalico.org/podIP: 10.233.74.92/32
cni.projectcalico.org/podIPs: 10.233.74.92/32
Status: Running
IP: 10.233.74.92
IPs:
IP: 10.233.74.92
...
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 5s default-scheduler Successfully assigned default/nfs-nginx to node4
Normal Pulling 5s kubelet Pulling image "nginx"
Normal Pulled 4s kubelet Successfully pulled image "nginx" in 1.011s (1.011s including waiting)
Normal Created 4s kubelet Created container nginx-container
Normal Started 3s kubelet Started container nginx-container
# List the pod logs:
kubectl logs nfs-nginx
Verify the Nginx Website #
# Curl the Nginx website
curl 10.233.74.92:80
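The pod IP differs between clusters; it can be looked up with kubectl before running curl:
# List the pod IP
kubectl get pod nfs-nginx -o jsonpath='{.status.podIP}'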
Delete the Resources #
# Delete the pod
kubectl delete pod nfs-nginx
Nginx Deployment with NFS Mount #
Deployment Configuration #
# Create configuration for the deployment
vi nginx-nfs-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-nginx-deployment
  labels:
    app: nfs-nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nfs-nginx # Labels of the pods being selected
  template:
    metadata:
      labels:
        app: nfs-nginx # Labels applied to the pods created from this template
    spec:
      containers:
        - name: nginx-container
          image: nginx
          ports:
            - containerPort: 80
          volumeMounts:
            - name: nfs
              mountPath: "/usr/share/nginx/html" # Mount to the Nginx web root
      volumes:
        - name: nfs
          nfs:
            server: "192.168.30.60"
            path: "/srv/nfs/k8s_share"
# Deploy the deployment
kubectl create -f nginx-nfs-deployment.yaml
Verify the Deployment #
# Verify the pods are deployed
kubectl get deployments nfs-nginx-deployment
# Shell output:
NAME READY UP-TO-DATE AVAILABLE AGE
nfs-nginx-deployment 2/2 2 2 20s
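Because both replicas mount the same NFS export, a file written from one pod is immediately visible in the other. A quick sketch with placeholder pod names (use the names from your own output):
# List the pod names of the deployment
kubectl get pods -l app=nfs-nginx
# Write a test file via the first pod (placeholder name)
kubectl exec <first-pod-name> -- sh -c 'echo shared > /usr/share/nginx/html/test.html'
# Read the file back via the second pod (placeholder name)
kubectl exec <second-pod-name> -- cat /usr/share/nginx/html/test.html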
List Deployment Details #
# List the deployment details
kubectl describe deployment nfs-nginx-deployment
# Shell output:
Pod Template:
Labels: app=nfs-nginx
Containers:
nginx-container:
Image: nginx
Port: 80/TCP
Host Port: 0/TCP
Environment: <none>
Mounts:
/usr/share/nginx/html from nfs (rw)
Volumes:
nfs:
Type: NFS (an NFS mount that lasts the lifetime of a pod)
Server: 192.168.30.60
Path: /srv/nfs/k8s_share
ReadOnly: false
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: nfs-nginx-deployment-cf964486f (2/2 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 3m10s deployment-controller Scaled up replica set nfs-nginx-deployment-cf964486f to 2
Create a LoadBalancer Service #
# Create the LoadBalancer configuration
vi nfs-nginx-loadbalancer.yaml
apiVersion: v1
kind: Service
metadata:
  name: nfs-nginx-loadbalancer
spec:
  type: LoadBalancer
  selector:
    app: nfs-nginx # This must match the label selector of the deployment
  ports:
    - protocol: TCP
      port: 80 # The port the LoadBalancer service will be accessible on
      targetPort: 80 # The container port to direct traffic to
# Deploy the LoadBalancer service
kubectl apply -f nfs-nginx-loadbalancer.yaml
LoadBalancer Service Details #
# List LoadBalancer service details
kubectl get svc nfs-nginx-loadbalancer
# Shell output:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nfs-nginx-loadbalancer LoadBalancer 10.233.23.246 192.168.30.242 80:30389/TCP 32s
Access the LoadBalancer #
# Open the URL in a browser
192.168.30.242:80
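# Or use curl
curl 192.168.30.242:80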
Delete Resources #
# Delete the LoadBalancer service
kubectl delete svc nfs-nginx-loadbalancer
# Delete the deployment
kubectl delete deployment nfs-nginx-deployment
EmptyDir Volume Example #
This is a simple example of two containers in a pod sharing an emptyDir volume. The first Alpine container writes an HTML file into the volume, and the second Nginx container serves the file as a website.
Two Container Pod with EmptyDir Volume #
Pod Configuration #
# Create configuration for the pod
vi emptydir-shared-storage-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-shared-storage
  labels:
    app: nfs-nginx
spec:
  containers:
    - name: alpine-container
      image: alpine
      volumeMounts:
        - name: shared-storage
          mountPath: /usr/share/data
      command: ["/bin/sh"]
      args: ["-c", "echo 'EmptyDir Volume Example' > /usr/share/data/index.html; sleep infinity"]
    - name: nginx-container
      image: nginx
      volumeMounts:
        - name: shared-storage
          mountPath: "/usr/share/nginx/html"
      ports:
        - containerPort: 80
  volumes:
    - name: shared-storage
      emptyDir: {}
# Deploy the pod
kubectl create -f emptydir-shared-storage-pod.yaml
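To confirm that the file written by the Alpine container is visible to the Nginx container through the shared volume:
# Read the file from the Nginx container
kubectl exec emptydir-shared-storage -c nginx-container -- cat /usr/share/nginx/html/index.html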
Pod Details & Logs #
# List pods
kubectl get pods
# Shell output:
NAME READY STATUS RESTARTS AGE
emptydir-shared-storage 2/2 Running 0 7s
# List pod details:
kubectl describe pod emptydir-shared-storage
# Shell output:
Name: emptydir-shared-storage
Namespace: default
Priority: 0
Service Account: default
Node: node4/192.168.30.74
Start Time: Tue, 28 May 2024 22:55:16 +0200
Labels: app=nfs-nginx
Annotations: cni.projectcalico.org/containerID: 2cb0c5f2f828a260b8a85128e87b6831a1958bd7f06fa46e9f67643c8768e220
cni.projectcalico.org/podIP: 10.233.74.98/32
cni.projectcalico.org/podIPs: 10.233.74.98/32
Status: Running
IP: 10.233.74.98
IPs:
IP: 10.233.74.98
...
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 22s default-scheduler Successfully assigned default/emptydir-shared-storage to node4
Normal Pulling 21s kubelet Pulling image "alpine"
Normal Pulled 20s kubelet Successfully pulled image "alpine" in 1.008s (1.008s including waiting)
Normal Created 20s kubelet Created container alpine-container
Normal Started 20s kubelet Started container alpine-container
Normal Pulling 20s kubelet Pulling image "nginx"
Normal Pulled 19s kubelet Successfully pulled image "nginx" in 967ms (967ms including waiting)
Normal Created 19s kubelet Created container nginx-container
Normal Started 19s kubelet Started container nginx-container
# List the logs of a container in the pod (a container must be specified for multi-container pods):
kubectl logs emptydir-shared-storage -c nginx-container
Verify the Nginx Website #
# Curl the Nginx website
curl 10.233.74.98:80
# Shell output:
EmptyDir Volume Example
Delete the Resources #
# Delete the pod
kubectl delete pod emptydir-shared-storage
Two Container Deployment with EmptyDir Volume #
Deployment Configuration #
# Create configuration for the deployment
vi emptydir-shared-storage-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: emptydir-shared-storage-deployment
  labels:
    app: emptydir-nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: emptydir-nginx # Labels of the pods being selected
  template:
    metadata:
      labels:
        app: emptydir-nginx # Labels applied to the pods created from this template
    spec:
      containers:
        - name: alpine-container
          image: alpine
          volumeMounts:
            - name: shared-storage
              mountPath: /usr/share/data
          command: ["/bin/sh"]
          args: ["-c", "echo 'EmptyDir Volume Example' > /usr/share/data/index.html; sleep infinity"]
        - name: nginx-container
          image: nginx
          volumeMounts:
            - name: shared-storage
              mountPath: "/usr/share/nginx/html"
          ports:
            - containerPort: 80
      volumes:
        - name: shared-storage
          emptyDir: {}
Note: Unlike the NFS example, each pod created from this template gets its own emptyDir volume; the data is not shared between replicas.
# Deploy the deployment
kubectl create -f emptydir-shared-storage-deployment.yaml
Verify the Deployment #
# List the pods of the deployment
kubectl get pods -l app=emptydir-nginx
# Shell output:
NAME READY STATUS RESTARTS AGE
emptydir-shared-storage-deployment-76f857754f-5fjwk 2/2 Running 0 3m45s
emptydir-shared-storage-deployment-76f857754f-pghcv 2/2 Running 0 3m45s
# Verify the pods are deployed
kubectl get deployments emptydir-shared-storage-deployment
# Shell output:
NAME READY UP-TO-DATE AVAILABLE AGE
emptydir-shared-storage-deployment 2/2 2 2 4m52s
List Deployment Details #
# List the deployment details
kubectl describe deployment emptydir-shared-storage-deployment
# Shell output:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 37s deployment-controller Scaled up replica set emptydir-shared-storage-deployment-76f857754f to 2
Create a LoadBalancer Service #
vi emptydir-shared-storage-loadbalancer.yaml
apiVersion: v1
kind: Service
metadata:
  name: emptydir-shared-storage-loadbalancer
spec:
  type: LoadBalancer
  selector:
    app: emptydir-nginx # This must match the label selector of the deployment
  ports:
    - protocol: TCP
      port: 80 # The port the LoadBalancer service will be accessible on
      targetPort: 80 # The container port to direct traffic to
# Deploy the LoadBalancer service
kubectl apply -f emptydir-shared-storage-loadbalancer.yaml
LoadBalancer Service Details #
# List LoadBalancer service details
kubectl get svc emptydir-shared-storage-loadbalancer
# Shell output:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
emptydir-shared-storage-loadbalancer LoadBalancer 10.233.25.241 192.168.30.241 80:30931/TCP 6s
Access the LoadBalancer #
# Open the URL in a browser
192.168.30.241:80
# Or use curl
curl 192.168.30.241:80
# Shell output:
EmptyDir Volume Example
Delete Resources #
# Delete the LoadBalancer service
kubectl delete svc emptydir-shared-storage-loadbalancer
# Delete the deployment
kubectl delete deployment emptydir-shared-storage-deployment
PersistentVolume Example #
Overview #
- Provision a PersistentVolume (PV) with the storage specification
- Request storage with a PersistentVolumeClaim (PVC). PVCs can request a specific size and access modes (such as read/write or read-only).
- A pod mounts the volume by referencing the PersistentVolumeClaim (PVC)
PersistentVolume (PV) #
Host Path & HTML File #
Create the host path on a specific worker node, or on all worker nodes, depending on which PersistentVolume configuration is used:
# Create the host path
sudo mkdir -p /mnt/example-dir
# Create a index.html file for Nginx
echo "PersistentVolume Example" | sudo tee /mnt/example-dir/index.html
YAML Configuration #
# Create the PersistentVolume configuration
vi persistentvolume-example.yaml
PersistentVolume: General host path
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 3Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: manual
  hostPath:
    path: /mnt/example-dir
PersistentVolume: Specified node host path
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 3Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: manual
  hostPath:
    path: /mnt/example-dir
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node3
Note: This setup ties the scheduling of any pod that uses this PV to the Kubernetes node node3.
# Create the PersistentVolume
kubectl create -f persistentvolume-example.yaml
Description #
AccessModes: (based on the capabilities of the storage provider)
- ReadWriteOnce
  Allows the volume to be mounted as read-write by a single node. This means only one node can mount the volume and read from or write to it at any one time. (Commonly used for databases)
- ReadOnlyMany
  Allows the volume to be mounted as read-only by many nodes simultaneously
- ReadWriteMany
  Allows the volume to be mounted as read-write by many nodes; the storage provider must support multiple readers and writers simultaneously
persistentVolumeReclaimPolicy:
- Retain
  The volume and its data persist after the PVC is deleted
- Recycle
  Kubernetes scrubs the volume contents so it can be claimed again (deprecated in favor of dynamic provisioning)
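The reclaim policy of an existing PV can also be changed in place; a minimal sketch using the example-pv from this section:
# Change the reclaim policy of an existing PV
kubectl patch pv example-pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'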
List PersistentVolumes & Details #
# List PersistentVolumes
kubectl get pv
# Shell output:
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS VOLUMEATTRIBUTESCLASS REASON AGE
example-pv 3Gi RWO Retain Available manual <unset> 11s
# List PV details
kubectl describe pv example-pv
# Shell output:
Name: example-pv
Labels: <none>
Annotations: <none>
Finalizers: [kubernetes.io/pv-protection]
StorageClass: manual
Status: Available
Claim:
Reclaim Policy: Retain
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 3Gi
Node Affinity:
Required Terms:
Term 0: kubernetes.io/hostname in [node3]
Message:
Source:
Type: HostPath (bare host directory volume)
Path: /mnt/example-dir
HostPathType:
Events: <none>
PersistentVolumeClaim (PVC) #
# Create a PersistentVolumeClaim configuration
vi persistentvolume-claim-example.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: example-pvc
spec:
accessModes:
- ReadWriteOnce
volumeMode: Filesystem
resources:
requests:
storage: 1Gi
storageClassName: manual
# Create the PersistentVolumeClaim
kubectl create -f persistentvolume-claim-example.yaml
List PersistentVolumeClaims & Details #
# List PersistentVolumeClaims
kubectl get pvc
# Shell output:
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
example-pvc Bound example-pv 3Gi RWO manual <unset> 4s
STATUS:
- Bound
  The claim is bound successfully
- Pending
  No PersistentVolume matches the request
# List PVC details
kubectl describe pvc example-pvc
# Shell output:
Name: example-pvc
Namespace: default
StorageClass: manual
Status: Bound
Volume: example-pv
Labels: <none>
Annotations: pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 3Gi
Access Modes: RWO
VolumeMode: Filesystem
Used By: <none>
Events: <none>
Deployment with PVC Volume #
# Create deployment configuration
vi pvc-nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pvc-nginx-deployment
  labels:
    app: pvc-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pvc-nginx # Labels of the pods being selected
  template:
    metadata:
      labels:
        app: pvc-nginx # Labels applied to the pods created from this template
    spec:
      containers:
        - name: nginx-container
          image: nginx
          ports:
            - containerPort: 80
          volumeMounts:
            - name: example-pvc-volume
              mountPath: "/usr/share/nginx/html" # Mount to the Nginx web root
      volumes:
        - name: example-pvc-volume
          persistentVolumeClaim:
            claimName: example-pvc
# Deploy the deployment
kubectl create -f pvc-nginx-deployment.yaml
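If the node-affinity variant of the PV was used, the pod should be scheduled on node3; this can be verified with the wide output:
# Verify which node the pod was scheduled on
kubectl get pods -l app=pvc-nginx -o wide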
List Deployment Details #
# List deployment details
kubectl describe deployment pvc-nginx-deployment
# Shell output:
Name: pvc-nginx-deployment
Namespace: default
CreationTimestamp: Wed, 29 May 2024 23:32:24 +0200
Labels: app=pvc-nginx
Annotations: deployment.kubernetes.io/revision: 1
Selector: app=pvc-nginx
Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app=pvc-nginx
Containers:
nginx-container:
Image: nginx
Port: 80/TCP
Host Port: 0/TCP
Environment: <none>
Mounts:
/usr/share/nginx/html from example-pvc-volume (rw)
Volumes:
example-pvc-volume:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: example-pvc
ReadOnly: false
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: pvc-nginx-deployment-6f5d749f7f (1/1 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 2m36s deployment-controller Scaled up replica set pvc-nginx-deployment-6f5d749f7f to 1
Create a LoadBalancer Service #
# Create the LoadBalancer configuration
vi pvc-nginx-deployment-loadbalancer.yaml
apiVersion: v1
kind: Service
metadata:
  name: pvc-nginx-deployment-loadbalancer
spec:
  type: LoadBalancer
  selector:
    app: pvc-nginx # This must match the label selector of the deployment
  ports:
    - protocol: TCP
      port: 80 # The port the LoadBalancer service will be accessible on
      targetPort: 80 # The container port to direct traffic to
# Deploy the LoadBalancer service
kubectl apply -f pvc-nginx-deployment-loadbalancer.yaml
LoadBalancer Service Details #
# List LoadBalancer service details
kubectl get svc pvc-nginx-deployment-loadbalancer
# Shell output:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
pvc-nginx-deployment-loadbalancer LoadBalancer 10.233.56.172 192.168.30.241 80:31292/TCP 4s
Access the LoadBalancer #
# Open the URL in a browser
192.168.30.241:80
# Or use curl
curl 192.168.30.241:80
Delete Resources #
# Delete the LoadBalancer service
kubectl delete svc pvc-nginx-deployment-loadbalancer
# Delete the deployment
kubectl delete deployment pvc-nginx-deployment
# Delete the PVC
kubectl delete pvc example-pvc
# Delete the PV
kubectl delete pv example-pv
External Storage Provider #
Helm #
Helm Repository: NFS Subdir External Provisioner #
# Add Helm repository
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm repo update
Deploy the Provisioner #
Install the NFS provisioner in the “kube-system” namespace and configure it to use the NFS server:
# Install the NFS Subdir External Provisioner
helm install nfs-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
--set nfs.server=192.168.30.60 \
--set nfs.path=/srv/nfs/k8s_share \
--namespace kube-system
# Shell output:
NAME: nfs-provisioner
LAST DEPLOYED: Sat Jun 15 16:08:06 2024
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
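Verify that the provisioner pod is running; the label below is assumed from the chart defaults:
# List the provisioner pod
kubectl get pods -n kube-system -l app=nfs-subdir-external-provisioner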
Verify the StorageClass #
# List storage classes
kubectl get storageclass
# Shell output:
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
local-storage kubernetes.io/no-provisioner Delete Immediate false 44h
nfs-client cluster.local/nfs-provisioner-nfs-subdir-external-provisioner Delete Immediate true 24s
# List storage class details
kubectl describe sc nfs-client
# Shell output:
Name: nfs-client
IsDefaultClass: No
Annotations: meta.helm.sh/release-name=nfs-provisioner,meta.helm.sh/release-namespace=kube-system
Provisioner: cluster.local/nfs-provisioner-nfs-subdir-external-provisioner
Parameters: archiveOnDelete=true
AllowVolumeExpansion: True
MountOptions: <none>
ReclaimPolicy: Delete
VolumeBindingMode: Immediate
Events: <none>
Configure Default StorageClass #
# Set the "nfs-client" as default StorageClass
kubectl patch storageclass nfs-client -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
Verify the “IsDefaultClass” is set to “Yes”:
# Verify / list storage class details
kubectl describe storageclass nfs-client
# Shell output:
Name: nfs-client
IsDefaultClass: Yes
Annotations: meta.helm.sh/release-name=nfs-provisioner,meta.helm.sh/release-namespace=kube-system,storageclass.kubernetes.io/is-default-class=true
Provisioner: cluster.local/nfs-provisioner-nfs-subdir-external-provisioner
Parameters: archiveOnDelete=true
AllowVolumeExpansion: True
MountOptions: <none>
ReclaimPolicy: Delete
VolumeBindingMode: Immediate
Events: <none>
PersistentVolumeClaim Test #
Test if a PersistentVolumeClaim is able to bind:
PersistentVolumeClaim Manifest #
# Create a manifest for the test PVC
vi nfs-test-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-test-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-client
  resources:
    requests:
      storage: 1Gi
# Deploy the PVC
kubectl apply -f nfs-test-pvc.yaml
Verify the PVC Bind #
# Verify the binding of the PVC
kubectl get pvc nfs-test-pvc
# Shell output:
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
nfs-test-pvc Bound pvc-cd8b737a-155d-4ef0-aac7-c80f15334296 1Gi RWX nfs-client <unset> 4s
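On the NFS server, the provisioner creates a sub-directory for each claim; the name follows the provisioner's ${namespace}-${pvcName}-${pvName} convention:
# List the share content on the NFS server
ls /srv/nfs/k8s_share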
Delete the PVC Test #
# Delete the PVC
kubectl delete pvc nfs-test-pvc
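Since the StorageClass was deployed with archiveOnDelete=true (see the parameters above), deleting the PVC does not remove the data; the provisioner renames the sub-directory on the NFS server with an archived- prefix instead:
# Verify on the NFS server: the directory is now prefixed with "archived-"
ls /srv/nfs/k8s_share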