PostgreSQL: PostgreSQL HA Cluster Deployment with Kubernetes Operator, Cluster Failover Test, External Access via Master LoadBalancer, Access Postgres Operator UI with TLS encryption


Overview

Kubernetes Operator

A Kubernetes Operator acts like a dedicated manager that runs and maintains a specific application in Kubernetes, automating tasks a human operator would otherwise perform.


Custom Resource Definition (CRD)

A Custom Resource Definition (CRD) in Kubernetes is a way to extend the built-in Kubernetes API by defining your own custom resources. These resources let you manage more than just the standard Kubernetes objects like Pods, Services, or Deployments.

  • A Custom Resource is like a new type of Kubernetes object. It can represent something specific like a database cluster.

  • The CRD is used to define the application-specific resources that the Operator will manage.
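
For example, the Zalando operator registers a "postgresql" CRD; a minimal custom resource of that kind (a shortened preview of the cluster manifest used later in this post) looks like this:

apiVersion: "acid.zalan.do/v1"
kind: postgresql
metadata:
  name: acid-minimal-cluster
spec:
  teamId: "acid"
  volume:
    size: 1Gi
  numberOfInstances: 2
  postgresql:
    version: "16"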


Postgres Operator

The Postgres Operator is developed and maintained by Zalando. It watches postgresql custom resources and creates and manages the StatefulSets, Services, and Secrets that make up a highly available, Patroni-based PostgreSQL cluster.



Prerequisites

Overview

I’m using the following K3s Kubernetes cluster and an Ubuntu 22.04 based NFS server:

# Kubernetes Cluster
NAME      STATUS   ROLES                  AGE     VERSION        INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
ubuntu1   Ready    control-plane,master   9m29s   v1.30.5+k3s1   192.168.30.10   <none>        Ubuntu 24.04.1 LTS   6.8.0-45-generic   containerd://1.7.21-k3s2
ubuntu2   Ready    worker                 8m32s   v1.30.5+k3s1   192.168.30.11   <none>        Ubuntu 24.04.1 LTS   6.8.0-45-generic   containerd://1.7.21-k3s2
ubuntu3   Ready    worker                 8m25s   v1.30.5+k3s1   192.168.30.12   <none>        Ubuntu 24.04.1 LTS   6.8.0-45-generic   containerd://1.7.21-k3s2
ubuntu4   Ready    worker                 3m42s   v1.30.5+k3s1   192.168.30.13   <none>        Ubuntu 24.04.1 LTS   6.8.0-45-generic   containerd://1.7.21-k3s2

192.168.30.90 # NFS server

NFS

NFS Server Setup

# Create directory for NFS share
sudo mkdir -p /srv/nfs/k8s_nfs-csi
# Open NFS configuration
sudo vi /etc/exports

# Define Kubernetes nodes
/srv/nfs/k8s_nfs-csi 192.168.30.10(rw,sync,no_root_squash)
/srv/nfs/k8s_nfs-csi 192.168.30.11(rw,sync,no_root_squash)
/srv/nfs/k8s_nfs-csi 192.168.30.12(rw,sync,no_root_squash)
/srv/nfs/k8s_nfs-csi 192.168.30.13(rw,sync,no_root_squash)

# Restart NFS server
sudo systemctl restart nfs-server
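
Note: If the NFS server package is missing, install it first; changed exports can also be applied without a full restart. A minimal sketch, assuming an Ubuntu/Debian based NFS server:

# Install the NFS server package
sudo apt install nfs-kernel-server -y

# Apply changed exports without restarting the server
sudo exportfs -ra

# List the active exports
sudo exportfs -v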

Install NFS Client on Kubernetes Nodes

Install the NFS Client on all the Kubernetes nodes:

# Install NFS utilities package and rpcbind package
sudo apt install nfs-common rpcbind -y

Note: The “rpcbind” package is necessary for NFSv3, which relies on remote procedure calls (RPCs) for various operations.

Verify the NFS connectivity from the Kubernetes nodes:

# Verify that the NFS server is correctly configured
/usr/sbin/showmount -e 192.168.30.90

# Shell output:
Export list for 192.168.30.90:
/srv/nfs/k8s_nfs-csi 192.168.30.13,192.168.30.12,192.168.30.11,192.168.30.10
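
Optionally, test a manual mount from one of the Kubernetes nodes; a hedged sketch using a temporary mount point:

# Mount the NFS share temporarily
sudo mount -t nfs 192.168.30.90:/srv/nfs/k8s_nfs-csi /mnt

# Verify the mount and unmount again
df -h /mnt
sudo umount /mnt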

NFS CSI Driver

Install Helm

# Install Helm with script
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 &&
chmod +x get_helm.sh &&
./get_helm.sh
# Verify the installation / check version
helm version

CSI Setup

Add the NFS CSI Helm repository:

# Add Helm repository & update repository index
helm repo add csi-driver-nfs https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/charts &&
helm repo update

Install CSI NFS Driver:

# Install the CSI NFS Driver
helm install csi-driver-nfs csi-driver-nfs/csi-driver-nfs --namespace kube-system

# Shell output:
NAME: csi-driver-nfs
LAST DEPLOYED: Tue Oct  1 13:18:27 2024
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The CSI NFS Driver is getting deployed to your cluster.

To check CSI NFS Driver pods status, please run:

  kubectl --namespace=kube-system get pods --selector="app.kubernetes.io/instance=csi-driver-nfs" --watch

Verify CSI NFS Driver:

# List pods
kubectl --namespace=kube-system get pods --selector="app.kubernetes.io/instance=csi-driver-nfs" --watch

# Shell output: (Wait a while)
NAME                                  READY   STATUS    RESTARTS      AGE
csi-nfs-controller-868b6df87f-pwxcf   4/4     Running   1 (38s ago)   2m15s
csi-nfs-node-6w8t7                    3/3     Running   1 (48s ago)   2m15s
csi-nfs-node-jvjrd                    3/3     Running   1 (46s ago)   2m15s
csi-nfs-node-kwqxb                    3/3     Running   1 (43s ago)   2m15s
csi-nfs-node-sfb8x                    3/3     Running   1 (47s ago)   2m15s

Create Storage Class

# Create a manifest for the storage class
vi csi-nfs-storage-class.yml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi
provisioner: nfs.csi.k8s.io # NFS CSI Driver
parameters:
  server: 192.168.30.90
  share: /srv/nfs/k8s_nfs-csi
reclaimPolicy: Retain
volumeBindingMode: Immediate
mountOptions:
  - nfsvers=3
# Create storage class
kubectl apply -f csi-nfs-storage-class.yml

Verify Storage Class

# List StorageClasses
kubectl get storageclasses

# Shell output:
NAME                   PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-path (default)   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  19h
nfs-csi                nfs.csi.k8s.io          Retain          Immediate              false                  6s
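
Optionally, verify that dynamic provisioning works with a throwaway PVC; the PVC name in this sketch is arbitrary:

# Create a manifest for a test PVC
vi test-pvc.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs-csi
# Create the PVC and verify it binds
kubectl apply -f test-pvc.yml
kubectl get pvc test-pvc
# Delete the test PVC (note: with reclaimPolicy "Retain", the released PV and its directory on the NFS share must be cleaned up manually)
kubectl delete -f test-pvc.yml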



Deploy Postgres Operator

Add Helm Repository

# Add postgres-operator repository
helm repo add postgres-operator-charts https://opensource.zalando.com/postgres-operator/charts/postgres-operator

# Add postgres-operator-ui repository
helm repo add postgres-operator-ui-charts https://opensource.zalando.com/postgres-operator/charts/postgres-operator-ui

# Update all Helm repositories
helm repo update

Install Postgres Operator

# Install the postgres-operator
helm install postgres-operator postgres-operator-charts/postgres-operator

# Shell output:
NAME: postgres-operator
LAST DEPLOYED: Tue Oct  1 13:22:09 2024
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
To verify that postgres-operator has started, run:

  kubectl --namespace=default get pods -l "app.kubernetes.io/name=postgres-operator"

Optional, install the Postgres Operator UI:

# Install the postgres-operator-ui
helm install postgres-operator-ui postgres-operator-ui-charts/postgres-operator-ui

# Shell output:
NAME: postgres-operator-ui
LAST DEPLOYED: Tue Oct  1 13:22:58 2024
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
To verify that postgres-operator has started, run:

  kubectl --namespace=default get pods -l "app.kubernetes.io/name=postgres-operator-ui"

Verify the Installation

# List pods
kubectl --namespace=default get pods -l "app.kubernetes.io/name=postgres-operator"

# Shell output:
NAME                                 READY   STATUS    RESTARTS   AGE
postgres-operator-75c8bb45cd-khcgj   1/1     Running   0          21s
# List pods
kubectl --namespace=default get pods -l "app.kubernetes.io/name=postgres-operator-ui"

# Shell output:
NAME                                    READY   STATUS    RESTARTS   AGE
postgres-operator-ui-659b45d69c-4z54b   0/1     Running   0          24s

Verify the CRDs:

# List CRDs
kubectl get crds | grep postgresqls.acid.zalan.do

# Shell output:
postgresqls.acid.zalan.do               2024-10-01T13:22:09Z

Verify the Services:

# List services in the default namespace
kubectl get svc

# Shell output:
NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes             ClusterIP   10.43.0.1       <none>        443/TCP    19h
postgres-operator      ClusterIP   10.43.120.44    <none>        8080/TCP   106s
postgres-operator-ui   ClusterIP   10.43.162.216   <none>        80/TCP     57s
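
If any of the pods do not become ready, the operator logs are a good first check:

# Check the Postgres Operator logs
kubectl logs -l app.kubernetes.io/name=postgres-operator --tail=50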



Access Postgres Operator UI

Port Forwarding

# Create a port forwarding for the Postgres Operator UI service
kubectl port-forward --address 0.0.0.0 svc/postgres-operator-ui 8081:80

# Access the Postgres Operator UI from an external host
http://192.168.30.10:8081

Traefik Ingress

TLS Certificate

In this setup I’m using a Let’s Encrypt wildcard certificate.

# Create a Kubernetes secret for the TLS certificate
kubectl create secret tls postgres-operator-tls --cert=./fullchain.pem --key=./privkey.pem

# Shell output:
secret/postgres-operator-tls created
# Verify the secret
kubectl get secrets

# Shell output:
NAME                                         TYPE                 DATA   AGE
postgres-operator-tls                        kubernetes.io/tls    2      41s
sh.helm.release.v1.postgres-operator-ui.v1   helm.sh/release.v1   1      14m
sh.helm.release.v1.postgres-operator.v1      helm.sh/release.v1   1      15m
# List secret details
kubectl describe secret postgres-operator-tls

# Shell output:
...
Data
====
tls.crt:  3578 bytes
tls.key:  1708 bytes

Deploy Ingress

# Create a manifest for the ingress
vi postgres-operator-ui-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: postgres-operator-ui
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
spec:
  ingressClassName: traefik
  tls:
  - hosts:
    - "postgress-operator.jklug.work"
    secretName: postgres-operator-tls
  rules:
  - host: "postgress-operator.jklug.work"
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: postgres-operator-ui
            port:
              number: 80
# Deploy the ingress resource
kubectl apply -f postgres-operator-ui-ingress.yaml
# Verify the ingress resource
kubectl get ingress

# Shell output:
NAME                   CLASS     HOSTS                           ADDRESS                                                   PORTS     AGE
postgres-operator-ui   traefik   postgress-operator.jklug.work   192.168.30.10,192.168.30.11,192.168.30.12,192.168.30.13   80, 443   7s

DNS Entry

# Create a DNS entry (for example in the client's hosts file) for the Postgres Operator UI
192.168.30.10 postgress-operator.jklug.work

Access the Postgres Operator UI

# Open the Postgres Operator UI
https://postgress-operator.jklug.work/
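
Optionally, verify the TLS endpoint from a shell; a quick check, assuming the DNS entry from above exists on the host:

# Check the HTTP response and TLS handshake
curl -I https://postgress-operator.jklug.work/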

Create Postgres Cluster

Clone Postgres Operator Repository

# Clone the Zalando postgres-operator repository
git clone https://github.com/zalando/postgres-operator.git &&
cd postgres-operator

Adapt the Postgres Cluster Manifest

# Open the manifest
vi manifests/minimal-postgres-manifest.yaml

Original version:

apiVersion: "acid.zalan.do/v1"
kind: postgresql
metadata:
  name: acid-minimal-cluster
spec:
  teamId: "acid"
  volume:
    size: 1Gi
  numberOfInstances: 2
  users:
    zalando:  # database owner
    - superuser
    - createdb
    foo_user: []  # role for application foo
  databases:
    foo: zalando  # dbname: owner
  preparedDatabases:
    bar: {}
  postgresql:
    version: "16"

Adapted version: define a cluster name and team ID, add the NFS CSI storage class, and increase the volume size and number of instances:

apiVersion: "acid.zalan.do/v1"
kind: postgresql
metadata:
  name: postgres-cluster-1 # Define a name for the cluster
spec:
  teamId: "project-1" # Define organizational name
  volume:
    size: 2Gi
    storageClass: nfs-csi # Add the storage class
  numberOfInstances: 3 # Define the number of cluster nodes
  users:
    zalando:  # database owner
    - superuser
    - createdb
    foo_user: []  # role for application foo
  databases:
    foo: zalando  # dbname: owner
  preparedDatabases:
    bar: {}
  postgresql:
    version: "16"

Create Postgres Cluster

# Create a Postgres cluster
kubectl create -f manifests/minimal-postgres-manifest.yaml

# Shell output:
postgresql.acid.zalan.do/postgres-cluster-1 created

Verify the Postgres Cluster

Verify the Postgres Cluster:

# Check the cluster deployment
kubectl get postgresql

# Shell output:
NAME                 TEAM        VERSION   PODS   VOLUME   CPU-REQUEST   MEMORY-REQUEST   AGE   STATUS
postgres-cluster-1   project-1   16        3      2Gi                                     17s   Creating

# Shell output: (Wait a few minutes)
NAME                 TEAM        VERSION   PODS   VOLUME   CPU-REQUEST   MEMORY-REQUEST   AGE     STATUS
postgres-cluster-1   project-1   16        3      2Gi                                     2m28s   Running

Verify the Postgres Node Pods:

# Verify the Postgres pods
kubectl get pods -l application=spilo -L spilo-role

# Shell output:
NAME                   READY   STATUS    RESTARTS   AGE     SPILO-ROLE
postgres-cluster-1-0   1/1     Running   0          2m43s   master
postgres-cluster-1-1   1/1     Running   0          2m12s   replica
postgres-cluster-1-2   1/1     Running   0          99s     replica
# List pods in "default" namespace
kubectl get pods

# Shell output:
NAME                                    READY   STATUS    RESTARTS   AGE
postgres-cluster-1-0                    1/1     Running   0          2m56s
postgres-cluster-1-1                    1/1     Running   0          2m25s
postgres-cluster-1-2                    1/1     Running   0          112s
postgres-operator-75c8bb45cd-khcgj      1/1     Running   0          25m
postgres-operator-ui-659b45d69c-4z54b   1/1     Running   0          24m

Verify the Services:

# Verify the services
kubectl get svc -l application=spilo -L spilo-role

# Shell output:
NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE     SPILO-ROLE
postgres-cluster-1          ClusterIP   10.43.190.173   <none>        5432/TCP   3m7s    master
postgres-cluster-1-config   ClusterIP   None            <none>        <none>     2m34s
postgres-cluster-1-repl     ClusterIP   10.43.45.69     <none>        5432/TCP   3m7s    replica
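
From inside the cluster, applications typically connect to the master service for read-write traffic and to the "-repl" service for read-only traffic. A hedged sketch using a throwaway client pod:

# Export the password from the Postgres cluster secret
export PGPASSWORD=$(kubectl get secret postgres.postgres-cluster-1.credentials.postgresql.acid.zalan.do -o 'jsonpath={.data.password}' | base64 -d)

# Run a temporary Postgres client pod against the master service
kubectl run -it --rm pg-client --image=postgres:16 --restart=Never \
  --env="PGPASSWORD=$PGPASSWORD" --env="PGSSLMODE=require" -- \
  psql -h postgres-cluster-1.default.svc.cluster.local -U postgres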

Verify the Persistent Volume Claim

# List Persistent Volume Claims
kubectl get pvc

# Shell output:
NAME                          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
pgdata-postgres-cluster-1-0   Bound    pvc-459c0e99-ee90-4005-88d8-c6dbea13abe8   2Gi        RWO            nfs-csi        <unset>                 3m21s
pgdata-postgres-cluster-1-1   Bound    pvc-ac08f468-25bd-4e11-98d3-267f59a2b234   2Gi        RWO            nfs-csi        <unset>                 2m50s
pgdata-postgres-cluster-1-2   Bound    pvc-a71f346f-4dd4-40f9-aa49-8be8b4034172   2Gi        RWO            nfs-csi        <unset>                 2m17s
# Verify the volumes on the NFS server
ls -la /srv/nfs/k8s_nfs-csi/

# Shell output:
drwxr-xr-x 3 root root 4096 Oct  1 13:44 pvc-459c0e99-ee90-4005-88d8-c6dbea13abe8
drwxr-xr-x 3 root root 4096 Oct  1 13:45 pvc-a71f346f-4dd4-40f9-aa49-8be8b4034172
drwxr-xr-x 3 root root 4096 Oct  1 13:45 pvc-ac08f468-25bd-4e11-98d3-267f59a2b234



Test the Cluster Failover

Connect to Master Pod / Node

Connect to the Postgres master pod and create an example database.

  • Create a port forwarding to the master pod
# Export the master pod name
export PGMASTER=$(kubectl get pods -o jsonpath={.items..metadata.name} -l application=spilo,cluster-name=postgres-cluster-1,spilo-role=master -n default)

# Create port forwarding
kubectl port-forward $PGMASTER 5432:5432 -n default
  • Open another shell:
# Install Postgres client
sudo apt install postgresql-client -y
# Export the password retrieved from the Postgres Kubernetes secret
export PGPASSWORD=$(kubectl get secret postgres.postgres-cluster-1.credentials.postgresql.acid.zalan.do -o 'jsonpath={.data.password}' | base64 -d)
export PGSSLMODE=require
  • export PGSSLMODE=require ensures that the PostgreSQL client requires an SSL-encrypted connection to the database
# Connect to Postgres
psql -U postgres -h localhost -p 5432
  • Create an example database and some tables
# Create example database "example_database"
create database example_database;

# Connect to "example_database" database
\c example_database

# Create some tables
create table some_table_1(a varchar(100), b timestamp);
create table some_table_2(a varchar(100), b timestamp);
create table some_table_3(a varchar(100), b timestamp);

# Insert some values
insert into some_table_1 values('some value', '2024-01-01 01:01:01');
insert into some_table_2 values('some value', '2024-01-01 01:01:01');
insert into some_table_3 values('some value', '2024-01-01 01:01:01');
  • List the tables in the example database
# List and describe the objects in the current database
\d

# Shell output:
                  List of relations
 Schema |          Name           | Type  |  Owner
--------+-------------------------+-------+----------
 public | pg_stat_kcache          | view  | postgres
 public | pg_stat_kcache_detail   | view  | postgres
 public | pg_stat_statements      | view  | postgres
 public | pg_stat_statements_info | view  | postgres
 public | some_table_1            | table | postgres
 public | some_table_2            | table | postgres
 public | some_table_3            | table | postgres
(7 rows)

# Exit
\q
  • Stop the port forwarding to the master pod

Delete Master Node

List the Postgres pods:

# Verify the Postgres Pods
kubectl get pods -l application=spilo -L spilo-role

# Shell output:
NAME                   READY   STATUS    RESTARTS   AGE     SPILO-ROLE
postgres-cluster-1-0   1/1     Running   0          7m10s   master
postgres-cluster-1-1   1/1     Running   0          6m39s   replica
postgres-cluster-1-2   1/1     Running   0          6m6s    replica

Delete the Master Pod:

# Delete the master pod
kubectl delete pod postgres-cluster-1-0

# Shell output:
pod "postgres-cluster-1-0" deleted

List the Postgres pods again and verify that a new master has been elected:

# Verify the Postgres Pods
kubectl get pods -l application=spilo -L spilo-role

# Shell output:
NAME                   READY   STATUS    RESTARTS   AGE     SPILO-ROLE
postgres-cluster-1-0   1/1     Running   0          15s     replica
postgres-cluster-1-1   1/1     Running   0          7m21s   replica
postgres-cluster-1-2   1/1     Running   0          6m48s   master

Check the Logs

Optional, check the logs:

# List the logs of the Postgres pods:
kubectl logs postgres-cluster-1-0
kubectl logs postgres-cluster-1-1
kubectl logs postgres-cluster-1-2

Verify Data Persistence

Verify the data persistence after the failover.

  • Create a port forwarding to the new master pod
# Export the new master pod name (the master role moved to another pod during the failover)
export PGMASTER=$(kubectl get pods -o jsonpath={.items..metadata.name} -l application=spilo,cluster-name=postgres-cluster-1,spilo-role=master -n default)

# Create port forwarding
kubectl port-forward $PGMASTER 5432:5432 -n default
  • Open another shell
# Export the password retrieved from the Postgres Kubernetes secret
export PGPASSWORD=$(kubectl get secret postgres.postgres-cluster-1.credentials.postgresql.acid.zalan.do -o 'jsonpath={.data.password}' | base64 -d)
export PGSSLMODE=require
# Connect to Postgres
psql -U postgres -h localhost -p 5432

# Connect to the "example_database" database
\c example_database
# List and describe the objects in the current database
\d

# Shell output:
                  List of relations
 Schema |          Name           | Type  |  Owner
--------+-------------------------+-------+----------
 public | pg_stat_kcache          | view  | postgres
 public | pg_stat_kcache_detail   | view  | postgres
 public | pg_stat_statements      | view  | postgres
 public | pg_stat_statements_info | view  | postgres
 public | some_table_1            | table | postgres
 public | some_table_2            | table | postgres
 public | some_table_3            | table | postgres
(7 rows)
# Verify the values from some_table_1
select * from some_table_1;

# Shell output:
     a      |          b
------------+---------------------
 some value | 2024-01-01 01:01:01
(1 row)

# Exit
\q

Verify the Cluster Status from within a Pod

Exec Pod Terminal

# Open an interactive shell in one of the Postgres pods
kubectl exec -it postgres-cluster-1-0 -- /bin/bash

List Postgres Nodes

# List Postgres nodes
patronictl list

# Shell output:
+ Cluster: postgres-cluster-1 (7420802808068677691) -----+----+-----------+
| Member               | Host      | Role    | State     | TL | Lag in MB |
+----------------------+-----------+---------+-----------+----+-----------+
| postgres-cluster-1-0 | 10.42.2.6 | Replica | streaming |  2 |         0 |
| postgres-cluster-1-1 | 10.42.3.5 | Replica | streaming |  2 |         0 |
| postgres-cluster-1-2 | 10.42.1.4 | Leader  | running   |  2 |           |
+----------------------+-----------+---------+-----------+----+-----------+
  • Patroni is a high-availability (HA) solution for PostgreSQL clusters

  • patronictl list - Prints a detailed overview of the current state of the PostgreSQL cluster managed by Patroni.
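
Patroni can also perform a controlled switchover instead of the hard pod deletion tested above; a hedged example, run from inside one of the cluster pods:

# Trigger a manual switchover (interactive prompts select the target member)
patronictl switchover postgres-cluster-1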


Connect to Database

# Switch to the postgres user 
su - postgres

# Connect to the default db
psql -d postgres
# List all databases
\l

# Shell output:
                                                           List of databases
       Name       |   Owner   | Encoding | Locale Provider |   Collate   |    Ctype    | ICU Locale | ICU Rules |   Access privileges
------------------+-----------+----------+-----------------+-------------+-------------+------------+-----------+-----------------------
 bar              | bar_owner | UTF8     | libc            | en_US.utf-8 | en_US.utf-8 |            |           |
 example_database | postgres  | UTF8     | libc            | en_US.utf-8 | en_US.utf-8 |            |           |
 foo              | zalando   | UTF8     | libc            | en_US.utf-8 | en_US.utf-8 |            |           |
 postgres         | postgres  | UTF8     | libc            | en_US.utf-8 | en_US.utf-8 |            |           |
 template0        | postgres  | UTF8     | libc            | en_US.utf-8 | en_US.utf-8 |            |           | =c/postgres          +
                  |           |          |                 |             |             |            |           | postgres=CTc/postgres
 template1        | postgres  | UTF8     | libc            | en_US.utf-8 | en_US.utf-8 |            |           | =c/postgres          +
                  |           |          |                 |             |             |            |           | postgres=CTc/postgres
(6 rows)

# Exit
\q

# Exit the terminal
exit



External Access via Master LoadBalancer

Adapt the Postgres Cluster Manifest

# Adapt the Postgres cluster manifest
vi manifests/minimal-postgres-manifest.yaml
apiVersion: "acid.zalan.do/v1"
kind: postgresql
metadata:
  name: postgres-cluster-1 
spec:
  teamId: "project-1"
  volume:
    size: 2Gi
    storageClass: nfs-csi
  enableMasterLoadBalancer: true # Enable LoadBalancer
  numberOfInstances: 3
  users:
    zalando:
    - superuser
    - createdb
    foo_user: []
  databases:
    foo: zalando
  preparedDatabases:
    bar: {}
  postgresql:
    version: "16"
# Update the Postgres cluster with the master LB
kubectl apply -f manifests/minimal-postgres-manifest.yaml

Verify the Master LoadBalancer

# List services
kubectl get svc

# Shell output: (previous state, before the change is applied)
NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes                  ClusterIP   10.43.0.1       <none>        443/TCP    20h
postgres-cluster-1          ClusterIP   10.43.190.173   <none>        5432/TCP   21m
postgres-cluster-1-config   ClusterIP   None            <none>        <none>     20m
postgres-cluster-1-repl     ClusterIP   10.43.45.69     <none>        5432/TCP   21m
postgres-operator           ClusterIP   10.43.120.44    <none>        8080/TCP   43m
postgres-operator-ui        ClusterIP   10.43.162.216   <none>        80/TCP     42m

# Shell output: (after the change, the master service is now of type LoadBalancer)
NAME                        TYPE           CLUSTER-IP      EXTERNAL-IP                                               PORT(S)          AGE
kubernetes                  ClusterIP      10.43.0.1       <none>                                                    443/TCP          20h
postgres-cluster-1          LoadBalancer   10.43.190.173   192.168.30.10,192.168.30.11,192.168.30.12,192.168.30.13   5432:31315/TCP   29m
postgres-cluster-1-config   ClusterIP      None            <none>                                                    <none>           29m
postgres-cluster-1-repl     ClusterIP      10.43.45.69     <none>                                                    5432/TCP         29m
postgres-operator           ClusterIP      10.43.120.44    <none>                                                    8080/TCP         52m
postgres-operator-ui        ClusterIP      10.43.162.216   <none>                                                    80/TCP           51m

Retrieve Postgres Password

# List the Postgres cluster secrets
kubectl get secret | grep postgres-cluster-1

# Shell output:
foo-user.postgres-cluster-1.credentials.postgresql.acid.zalan.do   Opaque               2      35m
postgres.postgres-cluster-1.credentials.postgresql.acid.zalan.do   Opaque               2      35m
standby.postgres-cluster-1.credentials.postgresql.acid.zalan.do    Opaque               2      35m
zalando.postgres-cluster-1.credentials.postgresql.acid.zalan.do    Opaque               2      35m
# Print Postgres PW to shell
export PGPASSWORD=$(kubectl get secret postgres.postgres-cluster-1.credentials.postgresql.acid.zalan.do -o 'jsonpath={.data.password}' | base64 -d)
echo "PGPASSWORD=$PGPASSWORD"

# Shell output:
PGPASSWORD=iCLGF9PQjbcPhi7XH92YJKEX1znUF4dVYcHumcaUQ4hUo31EhlnlfFJii7kyUiLl

Connect from External Host

Install Postgres Client

# Install Postgres client
sudo apt install postgresql-client -y

Export the Postgres PW

# Export the password on the external host
export PGPASSWORD=iCLGF9PQjbcPhi7XH92YJKEX1znUF4dVYcHumcaUQ4hUo31EhlnlfFJii7kyUiLl
export PGSSLMODE=require
  • export PGSSLMODE=require ensures that the PostgreSQL client requires an SSL-encrypted connection to the database

Connect to Postgres DB

# Connect to the database
psql -U postgres -h 192.168.30.10 -p 31315
# List all databases
\l

# Shell output:
                                      List of databases
       Name       |   Owner   | Encoding |   Collate   |    Ctype    |   Access privileges
------------------+-----------+----------+-------------+-------------+-----------------------
 bar              | bar_owner | UTF8     | en_US.utf-8 | en_US.utf-8 |
 example_database | postgres  | UTF8     | en_US.utf-8 | en_US.utf-8 |
 foo              | zalando   | UTF8     | en_US.utf-8 | en_US.utf-8 |
 postgres         | postgres  | UTF8     | en_US.utf-8 | en_US.utf-8 |
 template0        | postgres  | UTF8     | en_US.utf-8 | en_US.utf-8 | =c/postgres          +
                  |           |          |             |             | postgres=CTc/postgres
 template1        | postgres  | UTF8     | en_US.utf-8 | en_US.utf-8 | =c/postgres          +
                  |           |          |             |             | postgres=CTc/postgres
(6 rows)
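
Note: Since the master service is of type LoadBalancer, the database should also be reachable on the regular service port 5432 via the external IPs announced by the K3s ServiceLB, not only via the NodePort; a hedged example:

# Connect via the LoadBalancer service port
psql -U postgres -h 192.168.30.10 -p 5432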



Links

# Official Documentation
https://github.com/zalando/postgres-operator/blob/master/docs/quickstart.md