Overview #
In this tutorial I’m using a Kubernetes cluster with MetalLB, deployed with Kubespray on Debian 12 servers:
192.168.30.21 node1 # Controller / Master Node
192.168.30.22 node2 # Controller / Master Node
192.168.30.23 node3 # Worker Node
192.168.30.24 node4 # Worker Node
192.168.30.60 NFS Server
Artifactory Prerequisites #
Create a Free License #
Open the following URL to create a free license to evaluate Artifactory:
https://jfrog.com/start/
Minimum system requirements: 8 CPUs, 16 GB memory, 300 GB fast disk (3000+ IOPS). External ports: 8081 and 8082.
# Default User:
admin
# Default password:
password
# Artifactory license key:
...
# Xray license key:
...
NFS Prerequisites #
NFS Server Setup #
NFS Folder Structure #
Create the folder for the NFS export:
# Create folder structure
sudo mkdir -p /srv/nfs/k8s_share
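Optionally, adjust the permissions of the export so that the provisioner and the Artifactory pods can write to it; the ownership and mode below are just an example, adapt them to your own security requirements:
# Optional: Allow the Kubernetes workloads to write to the export
sudo chown nobody:nogroup /srv/nfs/k8s_share
sudo chmod 777 /srv/nfs/k8s_share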
NFS Exports #
I’m using the following NFS server configuration:
# Install NFS package
sudo apt install nfs-kernel-server
# Open NFS configuration
sudo vi /etc/exports
# NFS configuration: Define the kubernetes nodes
/srv/nfs/k8s_share 192.168.30.21(rw,sync,no_root_squash)
/srv/nfs/k8s_share 192.168.30.22(rw,sync,no_root_squash)
/srv/nfs/k8s_share 192.168.30.23(rw,sync,no_root_squash)
/srv/nfs/k8s_share 192.168.30.24(rw,sync,no_root_squash)
# Restart NFS server
sudo systemctl restart nfs-server
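Verify that the export is active:
# List the active NFS exports
sudo exportfs -v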
Install NFS on Kubernetes Nodes #
Install the NFS utilities package on the Kubernetes nodes:
# Install NFS utilities package
sudo apt install nfs-common -y
Verify the NFS connectivity:
# Verify that the NFS server is correctly configured
/usr/sbin/showmount -e 192.168.30.60
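Optionally, test-mount the export on one of the nodes; the mount point “/mnt” is just an example:
# Optional: Test-mount the NFS export
sudo mount -t nfs 192.168.30.60:/srv/nfs/k8s_share /mnt
# Unmount the NFS export
sudo umount /mnt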
Kubernetes Prerequisites #
Create Namespace #
# Create a namespace for the Artifactory deployment
kubectl create namespace artifactory
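Verify that the namespace was created:
# List the namespace
kubectl get namespace artifactory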
Kubernetes TLS Certificate Secret #
In this setup I’m using a Let’s Encrypt wildcard certificate.
# Create a Kubernetes secret for the TLS certificate
kubectl create secret tls artifactory-tls --cert=./fullchain.pem --key=./privkey.pem -n artifactory
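Verify the secret:
# List the TLS secret
kubectl get secret artifactory-tls -n artifactory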
External Storage Provider #
Helm NFS Subdir External Provisioner #
# Add Helm repository
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm repo update
Deploy the Provisioner #
Install the NFS provisioner in the “kube-system” namespace and configure it to use the NFS server:
# Install the NFS provisioner
helm install nfs-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
--set nfs.server=192.168.30.60 \
--set nfs.path=/srv/nfs/k8s_share \
--namespace kube-system
# Shell output:
NAME: nfs-provisioner
LAST DEPLOYED: Sat Jun 15 16:08:06 2024
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
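Verify that the provisioner pod is up and running:
# List the provisioner pod
kubectl get pods -n kube-system | grep nfs-provisioner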
Verify the StorageClass #
# List storage classes
kubectl get storageclass
# Shell output:
NAME            PROVISIONER                                                      RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
jenkins-pv      kubernetes.io/no-provisioner                                     Delete          Immediate           false                  25h
local-storage   kubernetes.io/no-provisioner                                     Delete          Immediate           false                  44h
nfs-client      cluster.local/nfs-provisioner-nfs-subdir-external-provisioner   Delete          Immediate           true                   24s
# List storage class details
kubectl describe sc nfs-client
# Shell output:
Name: nfs-client
IsDefaultClass: No
Annotations: meta.helm.sh/release-name=nfs-provisioner,meta.helm.sh/release-namespace=kube-system
Provisioner: cluster.local/nfs-provisioner-nfs-subdir-external-provisioner
Parameters: archiveOnDelete=true
AllowVolumeExpansion: True
MountOptions: <none>
ReclaimPolicy: Delete
VolumeBindingMode: Immediate
Events: <none>
Configure Default StorageClass #
# Set the "nfs-client" as default StorageClass
kubectl patch storageclass nfs-client -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
Verify that “IsDefaultClass” is set to “Yes”:
# Verify / list storage class details
kubectl describe storageclass nfs-client
# Shell output:
Name: nfs-client
IsDefaultClass: Yes
Annotations: meta.helm.sh/release-name=nfs-provisioner,meta.helm.sh/release-namespace=kube-system,storageclass.kubernetes.io/is-default-class=true
Provisioner: cluster.local/nfs-provisioner-nfs-subdir-external-provisioner
Parameters: archiveOnDelete=true
AllowVolumeExpansion: True
MountOptions: <none>
ReclaimPolicy: Delete
VolumeBindingMode: Immediate
Events: <none>
Artifactory Helm Deployment #
Add JFrog Repository #
# Add JFrog repository
helm repo add jfrog https://charts.jfrog.io
# Update package index
helm repo update
# Optional: Check out the values of the Helm chart
helm show values jfrog/artifactory > artifactory-values.yaml
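If you want to override any chart defaults, write them into a custom values file and pass it to the install command in the next section with the “-f” flag. The key used below (“artifactory.persistence.size”) is only an example; verify the exact key names against the exported “artifactory-values.yaml”, since they can change between chart versions:
# Example: Create a custom values file (verify the keys against "artifactory-values.yaml")
vi artifactory-custom-values.yaml
artifactory:
  persistence:
    size: 100Gi
The file can then be added to the Helm install command with “-f artifactory-custom-values.yaml”.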
Install Artifactory #
# Install Artifactory
helm upgrade --install artifactory \
--namespace artifactory \
jfrog/artifactory
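The initial startup can take a few minutes; watch the pods until all of them are up and running:
# Watch the pods (cancel with Ctrl+C)
kubectl get pods -n artifactory -w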
Verify the Deployment #
Verify the PVC #
# Verify the PVC
kubectl get pvc -n artifactory
# Shell output:
NAME                               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
artifactory-volume-artifactory-0   Bound    pvc-ad292737-e958-4a90-be02-0b1888fdfdd3   20Gi       RWO            nfs-client     <unset>                 21s
data-artifactory-postgresql-0      Bound    pvc-32afa3f1-2af4-4301-bb61-a5828314dd51   200Gi      RWO            nfs-client     <unset>                 21s
Verify the Pods #
# List pods
kubectl get pods -n artifactory
# Shell output:
NAME                                             READY   STATUS    RESTARTS   AGE
artifactory-0                                    7/7     Running   0          3m56s
artifactory-artifactory-nginx-5864f9f664-s5zn5   1/1     Running   0          3m56s
artifactory-postgresql-0                         1/1     Running   0          3m56s
Verify Services #
# List services
kubectl get svc -n artifactory
# Shell output:
NAME                              TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)                      AGE
artifactory                       ClusterIP      10.233.17.8     <none>           8082/TCP,8025/TCP,8081/TCP   6m25s
artifactory-artifactory-nginx     LoadBalancer   10.233.40.162   192.168.30.241   80:30881/TCP,443:31868/TCP   6m25s
artifactory-postgresql            ClusterIP      10.233.6.146    <none>           5432/TCP                     6m25s
artifactory-postgresql-headless   ClusterIP      None            <none>           5432/TCP                     6m25s
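The “artifactory-artifactory-nginx” LoadBalancer service was assigned the external IP “192.168.30.241” from the MetalLB pool. Optionally, test the connectivity to it; the “-k” flag skips the certificate validation, since the request goes to the IP and not the hostname:
# Optional: Test the LoadBalancer external IP
curl -k https://192.168.30.241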
Ingress #
Note: This setup is not optimal, but I have not found a way to define the domain name directly in the LoadBalancer service, so I use an Ingress to pass it to the LoadBalancer.
# Create ingress manifest
vi artifactory-ingress.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: artifactory-ingress
  namespace: artifactory
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  tls:
  - hosts:
    - "artifactory.jklug.work"
    secretName: artifactory-tls
  rules:
  - host: "artifactory.jklug.work"
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: artifactory-artifactory-nginx
            port:
              number: 443
# Deploy ingress resource
kubectl create -f artifactory-ingress.yml
# Delete ingress resource
kubectl delete -f artifactory-ingress.yml
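Verify the Ingress resource:
# List the Ingress resources
kubectl get ingress -n artifactory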
Access Artifactory #
DNS Entry #
Create a DNS entry for the Ingress that points to one of the worker nodes:
192.168.30.23 artifactory.jklug.work
# Or
192.168.30.24 artifactory.jklug.work
Web Interface #
# Open the Artifactory web interface
https://artifactory.jklug.work
Log in with the default credentials and enter the Artifactory license key:
# Default User:
admin
# Default password:
password