Overview #
In this tutorial I’m using an Ubuntu 24.04 based NFS server and a Kubernetes 1.28 cluster with Nginx Ingress, deployed with Kubeadm:
192.168.30.13 # NFS server
# Kubernetes cluster
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
ubuntu1 Ready control-plane 48d v1.28.11 192.168.30.10 <none> Ubuntu 24.04 LTS 6.8.0-36-generic containerd://1.7.18
ubuntu2 Ready worker 48d v1.28.11 192.168.30.11 <none> Ubuntu 24.04 LTS 6.8.0-36-generic containerd://1.7.18
ubuntu3 Ready worker 48d v1.28.11 192.168.30.12 <none> Ubuntu 24.04 LTS 6.8.0-36-generic containerd://1.7.18
Prerequisites #
NFS Server Setup #
# Create directory for NFS share
sudo mkdir -p /srv/nfs/k8s_share/postgres
# Change the ownership: the Bitnami PostgreSQL container runs as UID/GID 1001
sudo chown -R 1001:1001 /srv/nfs/k8s_share/postgres
# Install NFS package
sudo apt install nfs-kernel-server -y
# Open NFS configuration
sudo vi /etc/exports
# NFS configuration: Define the Kubernetes nodes
/srv/nfs/k8s_share 192.168.30.10(rw,sync,no_root_squash)
/srv/nfs/k8s_share 192.168.30.11(rw,sync,no_root_squash)
/srv/nfs/k8s_share 192.168.30.12(rw,sync,no_root_squash)
# Restart NFS server
sudo systemctl restart nfs-server
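Optional: instead of a full restart, the exports can also be re-applied and verified with the exportfs utility:
# Re-export all directories listed in /etc/exports
sudo exportfs -ra
# List the currently exported shares and their options
sudo exportfs -v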
Install NFS Client on Kubernetes Nodes #
Install the NFS utilities package on each of the Kubernetes nodes:
# Install NFS utilities package
sudo apt install nfs-common -y
Verify the NFS connectivity:
# Verify that the NFS server is correctly configured
/usr/sbin/showmount -e 192.168.30.13
# Shell output:
Export list for 192.168.30.13:
/srv/nfs/k8s_share 192.168.30.12,192.168.30.11,192.168.30.10
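Optional: mount the NFS share manually on a node to confirm read and write access before it is used in Kubernetes (this assumes /mnt is currently unused):
# Temporarily mount the NFS share
sudo mount -t nfs 192.168.30.13:/srv/nfs/k8s_share /mnt
# Confirm the postgres directory is visible and writable
sudo touch /mnt/postgres/testfile && ls -la /mnt/postgres
# Clean up and unmount the share again
sudo rm /mnt/postgres/testfile && sudo umount /mnt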
PostgreSQL Installation #
Create Namespace #
# Create a namespace for the PostgreSQL deployment
kubectl create namespace postgres
StorageClass, PV, PVC #
# Create a manifest for the StorageClass, PV & PVC
vi postgresql-pv.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: postgresql-pv
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: Immediate
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgresql-pv
spec:
  storageClassName: postgresql-pv
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /srv/nfs/k8s_share/postgres
    server: 192.168.30.13
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgresql-pv-claim
  namespace: postgres # Define the "postgres" namespace
spec:
  storageClassName: postgresql-pv
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
# Apply the StorageClass, PV & PVC
kubectl apply -f postgresql-pv.yaml
Since the kubernetes.io/no-provisioner StorageClass does not have a dynamic provisioner, Kubernetes will not automatically create new PersistentVolumes when a PersistentVolumeClaim is made. The PersistentVolumes must be created beforehand.
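If the PVC remains in “Pending” state, the static PV was most likely not matched, for example due to a storageClassName or capacity mismatch. The following commands help to narrow it down:
# Show why the PVC is not binding
kubectl describe pvc postgresql-pv-claim -n postgres
# Check the phase of the pre-created PV
kubectl get pv postgresql-pv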
Verify the PVC, PV & StorageClass #
Verify the status of the PVC is set to “Bound”:
# Verify the PVC is bound to the PV
kubectl get pvc -n postgres
# Shell output:
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
postgresql-pv-claim Bound postgresql-pv 10Gi RWO postgresql-pv 9s
# List PV
kubectl get pv
# Shell output:
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
postgresql-pv 10Gi RWO Retain Bound postgres/postgresql-pv-claim postgresql-pv 25s
# List StorageClass
kubectl get sc
# Shell output:
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
postgresql-pv kubernetes.io/no-provisioner Delete Immediate false 34s
Install Helm #
# Install Helm with script
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 &&
chmod +x get_helm.sh &&
./get_helm.sh
# Verify the Helm installation
helm version
Add Helm Repository #
# Add Bitnami Helm repository
helm repo add bitnami https://charts.bitnami.com/bitnami &&
helm repo update
Save & Adapt PostgreSQL Helm Values #
# Save the PostgreSQL Helm values into a file
helm show values bitnami/postgresql > postgresql-values.yaml
# Adapt the values
vi postgresql-values.yaml
persistence:
  enabled: true
  volumeName: "data"
  existingClaim: "postgresql-pv-claim" # Define the previously created PVC
volumePermissions:
  enabled: true # Set to true
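Optional: before installing, render the chart locally to confirm the adapted values reference the existing PVC; the grep pattern below is just an illustration:
# Render the chart with the custom values and check the volume configuration
helm template postgres bitnami/postgresql -f postgresql-values.yaml \
  --namespace postgres | grep -A 3 "persistentVolumeClaim"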
Deploy PostgreSQL #
# Install PostgreSQL
helm install postgres bitnami/postgresql -f postgresql-values.yaml \
--namespace postgres
# Optional: Delete the PostgreSQL deployment again
helm delete postgres --namespace postgres
# Shell output:
NAME: postgres
LAST DEPLOYED: Fri Aug 23 12:29:49 2024
NAMESPACE: postgres
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: postgresql
CHART VERSION: 15.5.24
APP VERSION: 16.4.0
** Please be patient while the chart is being deployed **
PostgreSQL can be accessed via port 5432 on the following DNS names from within your cluster:
postgres-postgresql.postgres.svc.cluster.local - Read/Write connection
To get the password for "postgres" run:
export POSTGRES_PASSWORD=$(kubectl get secret --namespace postgres postgres-postgresql -o jsonpath="{.data.postgres-password}" | base64 -d)
To connect to your database run the following command:
kubectl run postgres-postgresql-client --rm --tty -i --restart='Never' --namespace postgres --image docker.io/bitnami/postgresql:16.4.0-debian-12-r2 --env="PGPASSWORD=$POSTGRES_PASSWORD" \
--command -- psql --host postgres-postgresql -U postgres -d postgres -p 5432
> NOTE: If you access the container using bash, make sure that you execute "/opt/bitnami/scripts/postgresql/entrypoint.sh /bin/bash" in order to avoid the error "psql: local user with ID 1001 does not exist"
To connect to your database from outside the cluster execute the following commands:
kubectl port-forward --namespace postgres svc/postgres-postgresql 5432:5432 &
PGPASSWORD="$POSTGRES_PASSWORD" psql --host 127.0.0.1 -U postgres -d postgres -p 5432
WARNING: The configured password will be ignored on new installation in case when previous PostgreSQL release was deleted through the helm command. In that case, old PVC will have an old password, and setting it through helm won't take effect. Deleting persistent volumes (PVs) will solve the issue.
WARNING: There are "resources" sections in the chart not set. Using "resourcesPreset" is not recommended for production. For production installations, please set the following values according to your workload needs:
- primary.resources
- readReplicas.resources
- volumePermissions.resources
+info https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
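Optional: wait until the PostgreSQL StatefulSet has rolled out before continuing (a small sketch with a 5 minute timeout):
# Wait for the PostgreSQL StatefulSet to become ready
kubectl rollout status statefulset/postgres-postgresql -n postgres --timeout=300s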
Verify NFS Share #
Verify the PostgreSQL deployment is actually using the NFS share:
# List files & folders in the "data" directory on the NFS server
ls -la /srv/nfs/k8s_share/postgres/data/
# Shell output:
drwx------ 19 1001 1001 4096 Aug 23 12:30 .
drwxr-xr-x 3 1001 1001 4096 Aug 23 12:29 ..
drwx------ 5 1001 1001 4096 Aug 23 12:29 base
drwx------ 2 1001 1001 4096 Aug 23 12:30 global
drwx------ 2 1001 1001 4096 Aug 23 12:29 pg_commit_ts
drwx------ 2 1001 1001 4096 Aug 23 12:29 pg_dynshmem
-rw------- 1 1001 1001 2640 Aug 23 12:29 pg_ident.conf
drwx------ 4 1001 1001 4096 Aug 23 12:30 pg_logical
drwx------ 4 1001 1001 4096 Aug 23 12:29 pg_multixact
drwx------ 2 1001 1001 4096 Aug 23 12:29 pg_notify
drwx------ 2 1001 1001 4096 Aug 23 12:29 pg_replslot
drwx------ 2 1001 1001 4096 Aug 23 12:29 pg_serial
drwx------ 2 1001 1001 4096 Aug 23 12:29 pg_snapshots
drwx------ 2 1001 1001 4096 Aug 23 12:30 pg_stat
drwx------ 2 1001 1001 4096 Aug 23 12:29 pg_stat_tmp
drwx------ 2 1001 1001 4096 Aug 23 12:29 pg_subtrans
drwx------ 2 1001 1001 4096 Aug 23 12:29 pg_tblspc
drwx------ 2 1001 1001 4096 Aug 23 12:29 pg_twophase
-rw------- 1 1001 1001 3 Aug 23 12:29 PG_VERSION
drwx------ 3 1001 1001 4096 Aug 23 12:29 pg_wal
drwx------ 2 1001 1001 4096 Aug 23 12:29 pg_xact
-rw------- 1 1001 1001 88 Aug 23 12:29 postgresql.auto.conf
-rw------- 1 1001 1001 249 Aug 23 12:30 postmaster.opts
-rw------- 1 1001 1001 79 Aug 23 12:30 postmaster.pid
Verify PostgreSQL Deployment #
# List resources in the "postgres" namespace
kubectl get all -n postgres
# Shell output
NAME READY STATUS RESTARTS AGE
pod/postgres-postgresql-0 1/1 Running 0 72s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/postgres-postgresql ClusterIP 10.98.181.147 <none> 5432/TCP 72s
service/postgres-postgresql-hl ClusterIP None <none> 5432/TCP 72s
NAME READY AGE
statefulset.apps/postgres-postgresql 1/1 72s
Access PostgreSQL Server #
Export PostgreSQL Password #
# Export the PostgreSQL password as an environment variable
export POSTGRES_PASSWORD=$(kubectl get secret --namespace postgres postgres-postgresql -o jsonpath="{.data.postgres-password}" | base64 -d)
# Verify the env variable
echo $POSTGRES_PASSWORD
Connect to PostgreSQL Server #
# Connect to the PostgreSQL server: From within the Kubernetes cluster
kubectl run postgres-postgresql-client --rm --tty -i --restart='Never' --namespace postgres --image docker.io/bitnami/postgresql:16.4.0-debian-12-r1 --env="PGPASSWORD=$POSTGRES_PASSWORD" --command -- psql -h postgres-postgresql -U postgres
# Shell output:
If you don't see a command prompt, try pressing enter. # Press [enter]
Optional: Create a test database
# Create an example database
CREATE DATABASE exampledb;
# List databases
\l
# Shell output:
Name | Owner | Encoding | Locale Provider | Collate | Ctype | ICU Locale | ICU Rules | Access privileges
-----------+----------+----------+-----------------+-------------+-------------+------------+-----------+-----------------------
exampledb | postgres | UTF8 | libc | en_US.UTF-8 | en_US.UTF-8 | | |
postgres | postgres | UTF8 | libc | en_US.UTF-8 | en_US.UTF-8 | | |
template0 | postgres | UTF8 | libc | en_US.UTF-8 | en_US.UTF-8 | | | =c/postgres +
| | | | | | | | postgres=CTc/postgres
template1 | postgres | UTF8 | libc | en_US.UTF-8 | en_US.UTF-8 | | | =c/postgres +
| | | | | | | | postgres=CTc/postgres
# Exit the PostgreSQL shell
\q
Kube-Prometheus-Stack Installation #
Create Namespace #
# Create a namespace for the Prometheus stack
kubectl create ns monitoring
Helm Repository #
# Add Helm repository
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts &&
helm repo update
Install Kube Prometheus Stack #
# Install Kube Prometheus Stack
helm upgrade --install kube-prometheus-stack prometheus-community/kube-prometheus-stack \
-n monitoring
# Shell output:
NAME: kube-prometheus-stack
LAST DEPLOYED: Fri Aug 23 12:38:09 2024
NAMESPACE: monitoring
STATUS: deployed
REVISION: 1
NOTES:
kube-prometheus-stack has been installed. Check its status by running:
kubectl --namespace monitoring get pods -l "release=kube-prometheus-stack"
Visit https://github.com/prometheus-operator/kube-prometheus for instructions on how to create & configure Alertmanager and Prometheus instances using the Operator.
Verify Deployment Resources #
# List pods in the "monitoring" namespace
kubectl get pods -n monitoring
# Shell output:
NAME READY STATUS RESTARTS AGE
alertmanager-kube-prometheus-stack-alertmanager-0 2/2 Running 0 2m5s
kube-prometheus-stack-grafana-6d44b964f9-wp4fh 3/3 Running 0 2m19s
kube-prometheus-stack-kube-state-metrics-7db65fb76b-zsg2p 1/1 Running 0 2m19s
kube-prometheus-stack-operator-6b5b8b4dcd-r4bhk 1/1 Running 0 2m19s
kube-prometheus-stack-prometheus-node-exporter-8kts5 1/1 Running 0 2m19s
kube-prometheus-stack-prometheus-node-exporter-dhxgs 1/1 Running 0 2m19s
kube-prometheus-stack-prometheus-node-exporter-sq486 1/1 Running 0 2m19s
prometheus-kube-prometheus-stack-prometheus-0 2/2 Running 0 2m5s
# List services in the "monitoring" namespace
kubectl get svc -n monitoring
# Shell output:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
alertmanager-operated ClusterIP None <none> 9093/TCP,9094/TCP,9094/UDP 2m31s
kube-prometheus-stack-alertmanager ClusterIP 10.107.146.156 <none> 9093/TCP,8080/TCP 2m45s
kube-prometheus-stack-grafana ClusterIP 10.109.166.155 <none> 80/TCP 2m45s
kube-prometheus-stack-kube-state-metrics ClusterIP 10.109.94.161 <none> 8080/TCP 2m45s
kube-prometheus-stack-operator ClusterIP 10.99.186.8 <none> 443/TCP 2m45s
kube-prometheus-stack-prometheus ClusterIP 10.108.3.85 <none> 9090/TCP,8080/TCP 2m45s
kube-prometheus-stack-prometheus-node-exporter ClusterIP 10.102.43.220 <none> 9100/TCP 2m45s
prometheus-operated ClusterIP None <none> 9090/TCP 2m31s
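If you don't want to expose Prometheus via Ingress, the dashboard can also be reached temporarily with a port-forward (a quick sketch; stop it with Ctrl+C):
# Forward the Prometheus service to localhost
kubectl port-forward svc/kube-prometheus-stack-prometheus 9090:9090 -n monitoring
# Then open http://localhost:9090 in a browser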
Optional: Create Ingress Resource #
Kubernetes TLS Certificate Secret #
Create a Kubernetes secret for the TLS certificate. I’m using a Let’s Encrypt wildcard certificate in this tutorial.
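Optional: inspect the certificate files before creating the secret, to make sure the subject and validity period are as expected:
# Show the subject and validity dates of the certificate
openssl x509 -in ./fullchain.pem -noout -subject -dates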
# Create a Kubernetes secret for the TLS certificate
kubectl create secret tls prometheus-tls --cert=./fullchain.pem --key=./privkey.pem -n monitoring
# Verify the secret
kubectl get secret -n monitoring
# Shell output:
NAME TYPE DATA AGE
...
prometheus-tls kubernetes.io/tls 2 7s
# List secret details
kubectl describe secret prometheus-tls -n monitoring
# Shell output:
...
Data
====
tls.crt: 3574 bytes
tls.key: 1704 bytes
Create Nginx Ingress Resource #
# Create a manifest for the Nginx Ingress
vi prometheus-ingress.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: prometheus-ingress
  namespace: monitoring
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  ingressClassName: nginx
  rules:
    - host: prometheus.jklug.work # Define domain name
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: kube-prometheus-stack-prometheus
                port:
                  number: 9090
  tls:
    - hosts:
        - prometheus.jklug.work # Define domain name
      secretName: prometheus-tls # Define secret name
# Apply the Ingress resource
kubectl apply -f prometheus-ingress.yml
Verify Ingress Resource #
# List ingress resources in "monitoring" namespace
kubectl get ingress -n monitoring
# Shell output: (Wait till Ingress gets an address)
NAME CLASS HOSTS ADDRESS PORTS AGE
prometheus-ingress nginx prometheus.jklug.work 192.168.30.200 80, 443 2m21s
# List Nginx Ingress details
kubectl describe ingress prometheus-ingress -n monitoring
Nginx Ingress Troubleshooting #
# List Nginx Ingress Pods
kubectl get pods -n ingress-nginx
# Shell output:
NAME READY STATUS RESTARTS AGE
ingress-nginx-controller-6dfcb8658d-94vg6 1/1 Running 1 (80m ago) 48d
ingress-nginx-controller-6dfcb8658d-hdtkx 1/1 Running 1 (80m ago) 48d
ingress-nginx-controller-6dfcb8658d-zl5dd 1/1 Running 1 (80m ago) 48d
# Check the Nginx Ingress logs
kubectl logs ingress-nginx-controller-6dfcb8658d-94vg6 -n ingress-nginx
kubectl logs ingress-nginx-controller-6dfcb8658d-hdtkx -n ingress-nginx
kubectl logs ingress-nginx-controller-6dfcb8658d-zl5dd -n ingress-nginx
Create DNS Entry #
# Create a DNS entry for the Prometheus Ingress
192.168.30.200 prometheus.jklug.work
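If no DNS server is available, the entry can also be added to the local hosts file of the client (Linux example):
# Add a local hosts file entry for the Prometheus Ingress
echo "192.168.30.200 prometheus.jklug.work" | sudo tee -a /etc/hosts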
Access Prometheus Dashboard #
# Open the Prometheus Dashboard
https://prometheus.jklug.work/
PostgreSQL Monitoring #
Adapt Values #
# Adapt the PostgreSQL Helm values
vi postgresql-values.yaml
metrics:
  enabled: true # Set to "true"
  serviceMonitor:
    enabled: true # Set to "true"
    namespace: "monitoring" # Define Prometheus namespace
    labels: # Remove the {}
      release: kube-prometheus-stack # Add the following label
Upgrade PostgreSQL Installation #
# Upgrade the PostgreSQL installation
helm upgrade postgres bitnami/postgresql -f postgresql-values.yaml \
--namespace postgres
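After the upgrade, the PostgreSQL pod should contain an additional metrics sidecar container (shown as 2/2). A quick check, as a sketch:
# Verify the PostgreSQL pod now runs two containers
kubectl get pod postgres-postgresql-0 -n postgres
# List the container names in the pod
kubectl get pod postgres-postgresql-0 -n postgres -o jsonpath='{.spec.containers[*].name}'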
Verify PostgreSQL Exporter #
Verify the PostgreSQL metrics service:
# List services in "postgres" namespace
kubectl get svc -n postgres
# Shell output:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
postgres-postgresql ClusterIP 10.98.181.147 <none> 5432/TCP 54m
postgres-postgresql-hl ClusterIP None <none> 5432/TCP 54m
postgres-postgresql-metrics ClusterIP 10.96.118.62 <none> 9187/TCP 25s # Check
Verify that metrics are exposed:
# Port-forward the exporter service
kubectl port-forward svc/postgres-postgresql-metrics 9187:9187 -n postgres
# Verify that metrics are being exposed (open in new terminal)
curl http://localhost:9187/metrics
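The output should contain PostgreSQL specific metrics; for example, the pg_up metric of the exporter indicates whether it can reach the database:
# Filter for the exporter's availability metric
curl -s http://localhost:9187/metrics | grep "^pg_up"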
Verify Prometheus ServiceMonitor #
# Verify the ServiceMonitor
kubectl get servicemonitors -n monitoring
# Shell output:
NAME AGE
kube-prometheus-stack-alertmanager 47m
kube-prometheus-stack-apiserver 47m
kube-prometheus-stack-coredns 47m
kube-prometheus-stack-grafana 47m
kube-prometheus-stack-kube-controller-manager 47m
kube-prometheus-stack-kube-etcd 47m
kube-prometheus-stack-kube-proxy 47m
kube-prometheus-stack-kube-scheduler 47m
kube-prometheus-stack-kube-state-metrics 47m
kube-prometheus-stack-kubelet 47m
kube-prometheus-stack-operator 47m
kube-prometheus-stack-prometheus 47m
kube-prometheus-stack-prometheus-node-exporter 47m
postgres-postgresql 102s # Check
# Verify the ServiceMonitor configuration
kubectl describe servicemonitor postgres-postgresql -n monitoring
# Shell output:
Name:         postgres-postgresql
Namespace:    monitoring
Labels:       app.kubernetes.io/component=metrics
              app.kubernetes.io/instance=postgres
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=postgresql
              app.kubernetes.io/version=16.4.0
              helm.sh/chart=postgresql-15.5.24
              release=kube-prometheus-stack
Annotations:  meta.helm.sh/release-name: postgres
              meta.helm.sh/release-namespace: postgres
API Version:  monitoring.coreos.com/v1
Kind:         ServiceMonitor
Metadata:
  Creation Timestamp:  2024-08-23T13:23:43Z
  Generation:          1
  Resource Version:    15552
  UID:                 9cffce9f-f23e-44fe-b07c-b7b4e86c7510
Spec:
  Endpoints:
    Port:  http-metrics
  Namespace Selector:
    Match Names:
      postgres
  Selector:
    Match Labels:
      app.kubernetes.io/component:  metrics
      app.kubernetes.io/instance:   postgres
      app.kubernetes.io/name:       postgresql
Events:  <none>
Test Prometheus #
Access Prometheus Dashboard #
# Open the Prometheus Dashboard
https://prometheus.jklug.work/
Verify ServiceMonitor Target #
- Go to: “Status” > “Targets”
- Select serviceMonitor/monitoring/postgres-postgresql/0 from the dropdown menu
PostgreSQL Query Example #
- Go to: “Graph”
- Execute a query for the PostgreSQL database size:
pg_database_size_bytes
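Other metrics exposed by the PostgreSQL exporter can be queried the same way, for example (verify the exact metric names via the autocomplete in the “Graph” tab):
# Number of backends per database and state
pg_stat_activity_count
# Whether the exporter can reach the PostgreSQL server (1 = up)
pg_up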