
Argo CD ApplicationSet with GitLab CI: Multi Branch Build and Deployment via ArgoCD ApplicationSet


GitLab Repositories Overview
#

  • Code Repository: k8s/example-project-1

The code repository emulates a microservice architecture consisting of a frontend and a backend, both of which use a Caddy container.

  • Helm Chart Repository: k8s/example-project-1-helm

The Helm repository contains a Helm chart that deploys the frontend and backend using a Deployment, a ClusterIP service, and an Ingress.



Code Repository Branches
#

Main Branch (Pipeline Branch)
#

The main branch of the code repository is only used for the GitLab CI pipeline.


File and Folder Structure
#

The file and folder structure looks like this:

k8s/example-project-1
├── .gitlab-ci.yml
└── README.md

Pipeline Manifest
#

Note: To simplify the setup, this pipeline triggers on commits rather than tags. The version value in the helm-chart/Chart.yaml file of the Helm chart repository is updated by appending the commit count to a static version prefix.
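For comparison, a tag-based trigger would replace the branch rule with a check on the predefined $CI_COMMIT_TAG variable; a hedged sketch, not used in this setup:

```yaml
# Sketch only (not part of this setup): run the build job for tag pushes
build_backend:
  rules:
    - if: '$CI_COMMIT_TAG'
```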

  • .gitlab-ci.yml
variables:
  # Define the image name for Backend, tagging it with the GitLab CI registry and the current commit SHA
  BACKEND_IMAGE_SHA: $CI_REGISTRY_IMAGE/backend:$CI_COMMIT_SHA
  FRONTEND_IMAGE_SHA: $CI_REGISTRY_IMAGE/frontend:$CI_COMMIT_SHA
  # Repository URL of the "Helm chart repository"
  GIT_REPO_URL: "git@gitlab.jklug.work:k8s/example-project-1-helm.git"
  DEV_APP_Version: "1.1."
  STAGING_APP_Version: "1.1."
  PROD_APP_Version: "1.1."

stages:
  - build
  - update_helm_chart


build_backend:
  image: docker:stable  # Official Docker CLI image (used to build, tag, and push images)
  stage: build
  rules:
    - if: '$CI_COMMIT_BRANCH == "dev" || $CI_COMMIT_BRANCH == "staging" || $CI_COMMIT_BRANCH == "prod"'
  variables:
    DOCKER_DRIVER: overlay2  # Storage driver
    DOCKER_HOST: tcp://docker:2375  # Docker CLI communicates with DinD
    DOCKER_TLS_CERTDIR: ""
  services:
    - name: docker:27.5.1-dind  # DinD version
      command: ["--tls=false"]
  before_script:
    # Log in to the GitLab Container registry using CI credentials
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  script:
    # Build the Backend Docker image / enable caching from the previously pulled image / tag the new image with the current commit SHA and as "latest"
    - docker build -f Dockerfiles/Dockerfile_backend --cache-from $CI_REGISTRY_IMAGE/backend:latest --tag $BACKEND_IMAGE_SHA --tag $CI_REGISTRY_IMAGE/backend:latest .
    # Push the newly built image to the GitLab Container registry
    - docker push $BACKEND_IMAGE_SHA
    # Push the image tagged as "latest" to the GitLab Container registry
    - docker push $CI_REGISTRY_IMAGE/backend:latest

build_frontend:
  image: docker:stable  # Official Docker CLI image (used to build, tag, and push images)
  stage: build
  rules:
    - if: '$CI_COMMIT_BRANCH == "dev" || $CI_COMMIT_BRANCH == "staging" || $CI_COMMIT_BRANCH == "prod"'
  variables:
    DOCKER_DRIVER: overlay2  # Storage driver
    DOCKER_HOST: tcp://docker:2375  # Docker CLI communicates with DinD
    DOCKER_TLS_CERTDIR: ""
  services:
    - name: docker:27.5.1-dind  # DinD version
      command: ["--tls=false"]
  before_script:
    # Log in to the GitLab Container registry using CI credentials
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  script:
    # Build the Frontend Docker image / enable caching from the previously pulled image / tag the new image with the current commit SHA and as "latest"
    - docker build -f Dockerfiles/Dockerfile_frontend --cache-from $CI_REGISTRY_IMAGE/frontend:latest --tag $FRONTEND_IMAGE_SHA --tag $CI_REGISTRY_IMAGE/frontend:latest .
    # Push the newly built image to the GitLab Container registry
    - docker push $FRONTEND_IMAGE_SHA
    # Push the image tagged as "latest" to the GitLab Container registry
    - docker push $CI_REGISTRY_IMAGE/frontend:latest


update_helm_chart:
  image: alpine:latest
  stage: update_helm_chart
  rules:
    - if: '$CI_COMMIT_BRANCH == "dev" || $CI_COMMIT_BRANCH == "staging" || $CI_COMMIT_BRANCH == "prod"'
  needs: # Run this job only if the build jobs succeed
  - job: build_backend
  - job: build_frontend
  variables:
    HELM_REPO_BRANCH: "$CI_COMMIT_REF_NAME"  # Dynamically set the branch
  script:
    # Install required tools
    - apk add --no-cache yq git openssh
    # Configure SSH for Git
    - mkdir -p ~/.ssh
    - echo "$example_project_pipeline_key" > ~/.ssh/id_rsa  # Private deploy key from the CI/CD variable
    - chmod 600 ~/.ssh/id_rsa
    - ssh-keyscan -H gitlab.jklug.work >> ~/.ssh/known_hosts
    # Clone the external repository
    - git clone --branch $HELM_REPO_BRANCH $GIT_REPO_URL helm-chart-repo
    - cd helm-chart-repo
    # Verify the Helm chart exists
    - ls -R
    - ls -R helm-chart
    # Update values.yaml with new container images
    - yq e ".image_backend.repository = \"$CI_REGISTRY_IMAGE/backend\" | .image_backend.tag = \"$CI_COMMIT_SHA\"" -i helm-chart/values.yaml
    - yq e ".image_frontend.repository = \"$CI_REGISTRY_IMAGE/frontend\" | .image_frontend.tag = \"$CI_COMMIT_SHA\"" -i helm-chart/values.yaml
    # Get the number of commits in the current branch
    - export COMMIT_COUNT=$(git rev-list --count HEAD)
    - if [[ "$CI_COMMIT_REF_NAME" == "dev" ]]; then export CHART_VERSION="${DEV_APP_Version}${COMMIT_COUNT}"; fi
    - if [[ "$CI_COMMIT_REF_NAME" == "staging" ]]; then export CHART_VERSION="${STAGING_APP_Version}${COMMIT_COUNT}"; fi
    - if [[ "$CI_COMMIT_REF_NAME" == "prod" ]]; then export CHART_VERSION="${PROD_APP_Version}${COMMIT_COUNT}"; fi
    # Update Chart.yaml with new version
    - yq e ".version = \"$CHART_VERSION\" | .appVersion = \"$CI_COMMIT_SHA\"" -i helm-chart/Chart.yaml
    # Update Commit Message
    - yq e ".message = \"$CI_COMMIT_MESSAGE\"" -i helm-chart/Chart.yaml
    # Add and commit the changes
    - git config --global user.email "ci@example.com"
    - git config --global user.name "GitLab CI"
    - git add helm-chart/values.yaml # Add values.yaml
    - git add helm-chart/Chart.yaml # Add Chart.yaml
    - git commit -m "Update Helm chart with image $CI_COMMIT_SHA"
    # Push the changes back to the external repository
    - git push origin $HELM_REPO_BRANCH
  • $CI_COMMIT_SHA Full SHA hash of the commit

  • $CI_COMMIT_REF_NAME Branch or tag name of the commit that triggered the pipeline

  • $CI_COMMIT_BRANCH Branch name that triggered the pipeline

  • $CI_REGISTRY GitLab container registry URL

  • $CI_REGISTRY_IMAGE Container image path in the GitLab container registry
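The chart version computed in the update_helm_chart job is simply the static prefix with the commit count appended; a minimal stand-alone illustration (the values below are stand-ins for the CI variables):

```shell
# Stand-in values for the CI variables used in the pipeline
DEV_APP_Version="1.1."                  # static version prefix
COMMIT_COUNT=2                          # would come from: git rev-list --count HEAD

# Same concatenation as in the update_helm_chart job
CHART_VERSION="${DEV_APP_Version}${COMMIT_COUNT}"
echo "$CHART_VERSION"                   # prints: 1.1.2
```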



Dev, Staging & Prod Branch
#

File and Folder Structure
#

The file and folder structure of the dev, staging & prod branches looks like this:

k8s/example-project-1
├── app
│   ├── backend
│   │   └── index.html # Backend HTML file
│   └── frontend
│       └── index.html # Frontend HTML file
├── Dockerfiles
│   ├── Dockerfile_backend # Backend Dockerfile
│   └── Dockerfile_frontend # Frontend Dockerfile
└── .gitlab-ci.yml  # Reference to CI pipeline in main branch

CI Pipeline Reference
#

The GitLab CI pipeline in the dev, staging and prod branches references the pipeline in the main branch.

.gitlab-ci.yml

include:
  - project: 'k8s/example-project-1'
    file: '/.gitlab-ci.yml'
    ref: main

Dockerfiles
#

  • Dockerfiles/Dockerfile_backend
# Use the official Caddy image as the base
FROM caddy:alpine

# Create a non-root user "caddy"
RUN addgroup -S caddy && adduser -S -G caddy caddy

# Adjust permissions
RUN mkdir -p /usr/share/caddy && \
    chown -R caddy:caddy /usr/share/caddy /config /data

# Copy website files into the container
COPY app/backend/ /usr/share/caddy

# Switch to the non-root user
USER caddy

# Expose the default Caddy port
EXPOSE 80
  • Dockerfiles/Dockerfile_frontend
# Use the official Caddy image as the base
FROM caddy:alpine

# Create a non-root user "caddy"
RUN addgroup -S caddy && adduser -S -G caddy caddy

# Adjust permissions
RUN mkdir -p /usr/share/caddy && \
    chown -R caddy:caddy /usr/share/caddy /config /data

# Copy website files into the container
COPY app/frontend/ /usr/share/caddy

# Switch to the non-root user
USER caddy

# Expose the default Caddy port
EXPOSE 80

HTML Files
#

  • app/backend/index.html
<!DOCTYPE html>
<html>

<head>
        <title>jklug.work</title>
</head>

<body>
        <h1>Backend Dev</h1>
        <p>Version 1</p>
</body>

</html>
  • app/frontend/index.html
<!DOCTYPE html>
<html>

<head>
        <title>jklug.work</title>
</head>

<body>
        <h1>Frontend Dev</h1>
        <p>Version 1</p>
</body>

</html>

Adapt the HTML files according to the branches, for example “Backend Staging”, “Backend Prod”.
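That adaptation can be scripted; a small sed sketch (the scratch file under /tmp is only an illustration, in practice the substitution would target app/backend/index.html on the new branch):

```shell
# Create a scratch copy of the dev heading
echo '<h1>Backend Dev</h1>' > /tmp/index.html

# Replace the branch label in place (GNU sed)
sed -i 's/Backend Dev/Backend Staging/' /tmp/index.html
cat /tmp/index.html                     # prints: <h1>Backend Staging</h1>
```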


Push to Remote Repository
#

# Add all files in dev branch
git add .

# Commit
git commit -m "Version 1"

# Push to the remote repository
git push

Create Staging & Prod Branches
#

The staging and prod branches are just copies of the dev branch, but with adapted HTML files.

# Create a new "staging" branch and switch to it
git switch --create staging

# Push local branch and establish tracking relationship between local branch and its remote counterpart
git push -u origin staging
# Create a new "prod" branch and switch to it
git switch --create prod

# Push local branch and establish tracking relationship between local branch and its remote counterpart
git push -u origin prod



Helm Chart Repository
#

Create Dev, Staging & Prod Branch
#

# Create a new "dev" branch and switch to it
git switch --create dev

# Push local branch and establish tracking relationship between local branch and its remote counterpart
git push -u origin dev

File and Folder Structure
#

The folder structure for the dev, staging and prod branches is the same:

k8s/example-project-1-helm
└── helm-chart
    ├── Chart.yaml
    ├── templates
    │   ├── deployment.yaml
    │   ├── ingress.yaml
    │   └── service.yaml
    └── values.yaml

Deployment
#

  • helm-chart/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app-backend
  labels:
    app: example-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-app-backend
  template:
    metadata:
      labels:
        app: example-app-backend
    spec:
      imagePullSecrets:
      - name: "{{ .Values.imagePullSecrets | first }}"  # Use the first secret name in the list
      containers:
      - name: example-app-backend
        image: "{{ .Values.image_backend.repository }}:{{ .Values.image_backend.tag }}"
        ports:
        - containerPort: 80
---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app-frontend
  labels:
    app: example-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-app-frontend
  template:
    metadata:
      labels:
        app: example-app-frontend
    spec:
      imagePullSecrets:
      - name: "{{ .Values.imagePullSecrets | first }}"  # Use the first secret name in the list
      containers:
      - name: example-app-frontend
        image: "{{ .Values.image_frontend.repository }}:{{ .Values.image_frontend.tag }}"
        ports:
        - containerPort: 80

ClusterIP Service
#

  • helm-chart/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: example-app-backend
spec:
  type: ClusterIP
  ports:
    - port: 8080
      targetPort: 80
  selector:
    app: example-app-backend
---

apiVersion: v1
kind: Service
metadata:
  name: example-app-frontend
spec:
  type: ClusterIP
  ports:
    - port: 8080
      targetPort: 80
  selector:
    app: example-app-frontend

Ingress
#

HTTP Version
#

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-project-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: "nginx"
  rules:
    - host: example-project-dev.jklug.work # Define domain name
      http:
        paths:
          - path: "/frontend"
            pathType: Prefix
            backend:
              service:
                name: example-app-frontend
                port:
                  number: 8080
          - path: "/backend"
            pathType: Prefix
            backend:
              service:
                name: example-app-backend
                port:
                  number: 8080

HTTPS Version
#

Note: Don’t forget to create a Kubernetes secret named “ingress-secret” with the TLS certificate in the corresponding namespace.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-project-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: "nginx"
  tls:
    - hosts:
        - example-project-dev.jklug.work
      secretName: ingress-secret
  rules:
    - host: example-project-dev.jklug.work
      http:
        paths:
          - path: "/frontend"
            pathType: Prefix
            backend:
              service:
                name: example-app-frontend
                port:
                  number: 8080
          - path: "/backend"
            pathType: Prefix
            backend:
              service:
                name: example-app-backend
                port:
                  number: 8080

Values
#

  • helm-chart/values.yaml
# Define the image details
image_backend:
  repository: "gitlab-registry.jklug.work/k8s/example-project/backend" # GitLab Registry; will be updated by the pipeline
  tag: "latest" # Tag placeholder; will be updated by the pipeline
image_frontend:
  repository: "gitlab-registry.jklug.work/k8s/example-project/frontend" # GitLab Registry; will be updated by the pipeline
  tag: "latest" # Tag placeholder; will be updated by the pipeline
imagePullSecrets:
  - gitlab-registry-secret # The Kubernetes secret name

Chart
#

  • helm-chart/Chart.yaml
apiVersion: v2
name: example-project
description: A Helm chart for example-project
type: application
version: 1.0.0
appVersion: latest
message:



GitLab Deploy Key: For CI Pipeline
#

Create SSH Key Pair
#

Create an SSH key pair that is used for authentication, so that the pipeline in the code repository can update the Helm chart in the chart repository:

# Create SSH RSA key pair: 4096 bit
ssh-keygen -t rsa -b 4096 -f example_project_pipeline_key

# Copy the public SSH key
cat example_project_pipeline_key.pub

Add Public Key to Helm Project
#

Add the public SSH key to the GitLab Helm chart project k8s/example-project-1-helm:

  • Go to: (Project) “Settings” > “Repository”

  • Expand the “Deploy keys” section

  • Click “Add new key”

  • Define the title “example_project_pipeline_key”

  • “Key”: Paste the value of the public SSH key example_project_pipeline_key.pub

  • Select “Grant write permissions to this key”

  • Click “Add key”

The deploy key should look like this:


Add Private Key to Pipeline Project
#

  • Add the private SSH key as a variable to the GitLab code and pipeline project k8s/example-project-1:

  • Go to: (Project) “Settings” > “CI/CD”

  • Expand the “Variables” section

  • Click “Add variable”

  • Select type: “Variable (default)”

  • Unflag “Protect variable”

  • Define a key name example_project_pipeline_key

  • Paste the value of the private key example_project_pipeline_key

  • Click “Add variable”


The CI variable section should look like this:



Verify Helm Chart Repo after Push
#

After the pipeline has run, the values in the Helm chart repository should change like this:

  • helm-chart/values.yaml
# Define the image details
image_backend:
  repository: "gitlab-registry.jklug.work/k8s/example-project-1/backend" # GitLab Registry; will be updated by the pipeline
  tag: "7fa89c488fc2a27c357e616bc4bd4f7e719c6101" # Tag placeholder; will be updated by the pipeline
image_frontend:
  repository: "gitlab-registry.jklug.work/k8s/example-project-1/frontend" # GitLab Registry; will be updated by the pipeline
  tag: "7fa89c488fc2a27c357e616bc4bd4f7e719c6101" # Tag placeholder; will be updated by the pipeline
imagePullSecrets:
  - gitlab-registry-secret # The Kubernetes secret name
  • helm-chart/Chart.yaml
apiVersion: v2
name: example-project
description: A Helm chart for example-project
type: application
version: 1.1.2
appVersion: 7fa89c488fc2a27c357e616bc4bd4f7e719c6101
message: |
    fixed typo



Argo CD Installation
#

Create Namespace
#

# Create a Kubernetes namespace named "argocd" for Argo CD installation
kubectl create namespace argocd

Install Argo CD
#

# Deploy Argo CD in the "argocd" namespace
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

Verify Installation / Resources
#

Wait until all resources are ready:

# List the Argo CD resources
kubectl get all -n argocd
# Shell output:
NAME                                                    READY   STATUS    RESTARTS   AGE
pod/argocd-application-controller-0                     1/1     Running   0          88s
pod/argocd-applicationset-controller-6dfb7b585b-rd672   1/1     Running   0          88s
pod/argocd-dex-server-7ff4b8d9df-lzt2n                  1/1     Running   0          88s
pod/argocd-notifications-controller-768c89485f-kftwc    1/1     Running   0          88s
pod/argocd-redis-db6c68bbb-lzk7m                        1/1     Running   0          88s
pod/argocd-repo-server-6f8774799c-s24fv                 1/1     Running   0          88s
pod/argocd-server-545bfdcc88-ztnqr                      0/1     Running   0          88s

NAME                                              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
service/argocd-applicationset-controller          ClusterIP   10.105.182.168   <none>        7000/TCP,8080/TCP            88s
service/argocd-dex-server                         ClusterIP   10.111.65.219    <none>        5556/TCP,5557/TCP,5558/TCP   88s
service/argocd-metrics                            ClusterIP   10.98.10.84      <none>        8082/TCP                     88s
service/argocd-notifications-controller-metrics   ClusterIP   10.105.169.191   <none>        9001/TCP                     88s
service/argocd-redis                              ClusterIP   10.108.25.47     <none>        6379/TCP                     88s
service/argocd-repo-server                        ClusterIP   10.102.148.3     <none>        8081/TCP,8084/TCP            88s
service/argocd-server                             ClusterIP   10.96.124.125    <none>        80/TCP,443/TCP               88s
service/argocd-server-metrics                     ClusterIP   10.110.61.76     <none>        8083/TCP                     88s

NAME                                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/argocd-applicationset-controller   1/1     1            1           88s
deployment.apps/argocd-dex-server                  1/1     1            1           88s
deployment.apps/argocd-notifications-controller    1/1     1            1           88s
deployment.apps/argocd-redis                       1/1     1            1           88s
deployment.apps/argocd-repo-server                 1/1     1            1           88s
deployment.apps/argocd-server                      0/1     1            0           88s

NAME                                                          DESIRED   CURRENT   READY   AGE
replicaset.apps/argocd-applicationset-controller-6dfb7b585b   1         1         1       88s
replicaset.apps/argocd-dex-server-7ff4b8d9df                  1         1         1       88s
replicaset.apps/argocd-notifications-controller-768c89485f    1         1         1       88s
replicaset.apps/argocd-redis-db6c68bbb                        1         1         1       88s
replicaset.apps/argocd-repo-server-6f8774799c                 1         1         1       88s
replicaset.apps/argocd-server-545bfdcc88                      1         1         0       88s

NAME                                             READY   AGE
statefulset.apps/argocd-application-controller   1/1     88s



Adapt Argo CD Service
#

Convert ClusterIP to LoadBalancer Service
#

# Change the Argo CD server service to type LoadBalancer
kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'

Custom TLS Certificate Secret
#

Update the default self-signed TLS certificate with a custom TLS certificate:

# Update the TLS Kubernetes secret
kubectl create -n argocd secret tls argocd-server-tls \
  --cert=fullchain.pem \
  --key=privkey.pem



Argo CD Webinterface
#

DNS Entry
#

To access the Argo CD webinterface from your host, create a DNS entry that points to the external IP provided by the LoadBalancer service:

# Create a DNS entry
192.168.30.201 argocd.jklug.work

Access Webinterface
#

# Access the webinterface: Via LoadBalancer service
https://argocd.jklug.work

Initial Admin Password
#

# Default user
admin

# Retrieve "admin" password
kubectl get secret argocd-initial-admin-secret -n argocd -o jsonpath="{.data.password}" | base64 --decode; echo

# Shell output:
9U87kFCBXOI9aaH8

Delete the Kubernetes secret after the password has been retrieved:

# Delete the "argocd-initial-admin-secret" secret
kubectl delete secret argocd-initial-admin-secret -n argocd



Argo CD CLI Installation
#

In this example, I’m installing the Argo CD CLI on my Kubernetes Controller node.


Install Argo CD CLI
#

Find the latest stable release:
https://github.com/argoproj/argo-cd/tags


Install Argo CD CLI:

# Download Argo CD binary (This takes a while)
curl -SL --progress-bar -o argocd https://github.com/argoproj/argo-cd/releases/download/v2.14.2/argocd-linux-amd64

# Change permissions
chmod +x argocd

# Move the binary
sudo mv argocd /usr/local/bin/
# Verify the installation / check version
argocd version

# Shell output:
argocd: v2.14.2+ad27246
  BuildDate: 2025-02-06T00:06:23Z
  GitCommit: ad2724661b66ede607db9b5bd4c3c26491f5be67
  GitTreeState: clean
  GoVersion: go1.23.3
  Compiler: gc
  Platform: linux/amd64
FATA[0000] Argo CD server address unspecified

Argo CD CLI Configuration
#

Argo CD DNS Entry
#

Make sure the host where the Argo CD CLI is deployed can resolve the DNS names of the Argo CD server and of GitLab:

# Add DNS entry to /etc/hosts
sudo tee -a /etc/hosts <<EOF
192.168.30.201 argocd.jklug.work
192.168.70.4 gitlab.jklug.work gitlab-registry.jklug.work
EOF

Login & Set Argo CD Server Address
#

# Set the Argo CD server address
argocd login argocd.jklug.work


# Shell output:
Username: admin
Password:
'admin:login' logged in successfully
Context 'argocd.jklug.work' updated



GitLab DNS Entry
#

Add DNS Entry
#

Add DNS entries for GitLab & the GitLab registry to the Kubernetes cluster nodes:

# Add DNS entry to /etc/hosts
sudo tee -a /etc/hosts <<EOF
192.168.70.4 gitlab.jklug.work
192.168.70.4 gitlab-registry.jklug.work
EOF

CoreDNS ConfigMap
#

Backup the ConfigMap
#

# Export the current ConfigMap
kubectl get cm coredns -n kube-system -o yaml > coredns-configmap-backup.yaml

Add GitLab DNS Entry
#

Create a DNS entry that points to GitLab:

# Add the following DNS section to the CoreDNS ConfigMap
hosts {
    192.168.70.4 gitlab.jklug.work
    192.168.70.4 gitlab-registry.jklug.work
    fallthrough
}
# Edit the CoreDNS ConfigMap
kubectl edit cm coredns -n kube-system
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health {
           lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        hosts {
            192.168.70.4 gitlab.jklug.work
            192.168.70.4 gitlab-registry.jklug.work
            fallthrough
        }
        prometheus :9153
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }    
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
            {"apiVersion":"v1","data":{"Corefile":".:53 {\n    errors\n    health {\n       lameduck 5s\n    }\n    ready\n    kubernetes cluster.local in-addr.arpa ip6.arpa {\n       pods insecure\n       fallthrough in-addr.arpa ip6.arpa\n       ttl 30\n    }\n    prometheus :9153\n    forward . /etc/resolv.conf {\n       max_concurrent 1000\n    }\n    cache 30\n    loop\n    reload\n    loadbalance\n}\n"},"kind":"ConfigMap","metadata":{"annotations":{},"creationTimestamp":"2024-11-23T11:51:59Z","name":"coredns","namespace":"kube-system","resourceVersion":"253","uid":"7780f5a2-09c9-418c-b139-43231097afe7"}}
  creationTimestamp: "2024-12-06T11:27:57Z"
  name: coredns
  namespace: kube-system
  resourceVersion: "12187"
  uid: 4c18c50e-fba5-4335-8431-2bf1cd31ef01
# Restart CoreDNS
kubectl rollout restart deployment coredns -n kube-system

Verify DNS Resolution
#

# Run a busybox pod
kubectl run busybox --image=busybox --restart=Never --stdin --tty
# Test the GitLab DNS resolution
nslookup gitlab.jklug.work

# Shell output:
Server:         10.96.0.10
Address:        10.96.0.10:53


Name:   gitlab.jklug.work
Address: 192.168.70.4


# Exit the container terminal
exit
# Delete the busybox pod
kubectl delete pod busybox



Argo CD Configuration
#

Scan GitLab Host Keys
#

From the node where the Argo CD CLI is installed (or any other host that can resolve the GitLab DNS name), scan the host keys of GitLab:

# List the GitLab host keys
ssh-keyscan gitlab.jklug.work

Shell output:

# gitlab.jklug.work:22 SSH-2.0-OpenSSH_8.9p1 Ubuntu-3ubuntu0.10
# gitlab.jklug.work:22 SSH-2.0-OpenSSH_8.9p1 Ubuntu-3ubuntu0.10
gitlab.jklug.work ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDHAQj15hmfv3OwTY3RAPwx1UlZ8p4qgtAHiZ9hngfJTSScO1kf40oL3Ek5NVKSGYZQ4w6ozBFHKO3l6tHn8nPeD7mUk/nW2U5w9yYpcRyFknn/u/Z0QreHAkI8fg6LI4n+2QYFF1rZbIemtCG33FozwrWKJ+/UsJYLnuQ2fenjcvkwPYx7NKV07RtQ3xYvkFVdWQGFJK8pLG9UcsanwZbH2nVPbv3i9KKI9xxWmJDh9JOoLhG6JipNN4Q4CoodfR9k9A2PY88dEykMInSGzFddOqbHLyISO8H1oJrofPzovPR07f+bDBK6iGIqRSW00k6mM0RFkPPo9tulLJ87DgB84jVrtYGp71wmV9PQ8jPB1uaDx5JtRNc0G+IWlIzTy8hFW9djELdTdQmfxeaCceyn1AmuXhpwZin64WTqztXj29s1olZ0+Uchh2FGpEjvlVqveeMmgAQkQezidqVHKKwinW1zdeSaBaZkS0JpLtxNpA86vBnhtYE8Z4CaQAvoQXU=
# gitlab.jklug.work:22 SSH-2.0-OpenSSH_8.9p1 Ubuntu-3ubuntu0.10
gitlab.jklug.work ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPUxIo1glUPlmYJDbAOvHlRd/qjxdIEJCtBlcFLCMXECbRdp9IN/qePZdFtOnMWWVNvi8qy+7V8XbIFbzHoYwcg=
# gitlab.jklug.work:22 SSH-2.0-OpenSSH_8.9p1 Ubuntu-3ubuntu0.10
# gitlab.jklug.work:22 SSH-2.0-OpenSSH_8.9p1 Ubuntu-3ubuntu0.10
gitlab.jklug.work ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIH6GrjO8VqbiFMwtOfaEuKd3bV2vb7jH4r5Xl9PW1TFY

Add GitLab Host Keys to Argo CD
#

Open the Argo CD webinterface and add the GitLab host keys:

  • Go to: “Settings” > “Repository certificates and known hosts”

  • Click “ADD SSH KNOWN HOSTS”

  • Paste the GitLab SSH keys into the text box, one entry per line. It should look like this:

gitlab.jklug.work ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDHAQj15hmfv3OwTY3RAPwx1UlZ8p4qgtAHiZ9hngfJTSScO1kf40oL3Ek5NVKSGYZQ4w6ozBFHKO3l6tHn8nPeD7mUk/nW2U5w9yYpcRyFknn/u/Z0QreHAkI8fg6LI4n+2QYFF1rZbIemtCG33FozwrWKJ+/UsJYLnuQ2fenjcvkwPYx7NKV07RtQ3xYvkFVdWQGFJK8pLG9UcsanwZbH2nVPbv3i9KKI9xxWmJDh9JOoLhG6JipNN4Q4CoodfR9k9A2PY88dEykMInSGzFddOqbHLyISO8H1oJrofPzovPR07f+bDBK6iGIqRSW00k6mM0RFkPPo9tulLJ87DgB84jVrtYGp71wmV9PQ8jPB1uaDx5JtRNc0G+IWlIzTy8hFW9djELdTdQmfxeaCceyn1AmuXhpwZin64WTqztXj29s1olZ0+Uchh2FGpEjvlVqveeMmgAQkQezidqVHKKwinW1zdeSaBaZkS0JpLtxNpA86vBnhtYE8Z4CaQAvoQXU=
gitlab.jklug.work ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPUxIo1glUPlmYJDbAOvHlRd/qjxdIEJCtBlcFLCMXECbRdp9IN/qePZdFtOnMWWVNvi8qy+7V8XbIFbzHoYwcg=
gitlab.jklug.work ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIH6GrjO8VqbiFMwtOfaEuKd3bV2vb7jH4r5Xl9PW1TFY
  • Click “CREATE” to save the entry



GitLab Deploy Key: For Argo CD
#

Create SSH Key Pair
#

Create an SSH key pair on the Argo CD CLI host and add the public key to the GitLab repository with the Helm chart. It is used by Argo CD to check out the Helm chart:

# Create SSH RSA key pair: 4096 bit
ssh-keygen -t rsa -b 4096 -f argocd_repo_key
# Copy the public SSH key
cat argocd_repo_key.pub

Add Public SSH Key to GitLab Repository
#

Add the public SSH key to the GitLab repository with the Helm chart:

  • Go to: (Project) “Settings” > “Repository”

  • Expand the “Deploy keys” section

  • Click “Add new key”

  • Paste the value of the argocd_repo_key.pub public SSH key

  • Define a title like “argocd_repo_key”

  • Click “Add key”


The deploy key should look like this:



Create Kubernetes Namespaces
#

Create the namespaces for the application deployment:

# Create namespaces
kubectl create namespace example-project-1-dev
kubectl create namespace example-project-1-staging
kubectl create namespace example-project-1-prod



GitLab Code Repository Access Token
#

Create Access Token
#

Create a “Project Access Token” in the example-project-1 repository that is used to access the GitLab registry of the code repository:

  • Go to: (Project) “Settings” > “Access Tokens”

  • Click “Add new token”

  • Define a token name like registry-token

  • Remove the expiration date

  • Select role “Developer”

  • Define the scope of the token: “read_registry” grants read-only access to container registry images on private projects; “api” grants API access.

  • Click “Create project access token”

  • Copy the project access token, it should look like this: glpat-iWnxaASmCMan4Q1957WG

Manually Test the Token
#

This is optional, but useful for troubleshooting. Note that it works only with the scope “api”.

# List registry repositories
curl -s --header "PRIVATE-TOKEN: glpat-iWnxaASmCMan4Q1957WG" \
     --header "Accept: application/json" \
     "https://gitlab.jklug.work/api/v4/projects/k8s%2Fexample-project-1/registry/repositories" | jq

# Shell output:
[
  {
    "id": 31,
    "name": "frontend",
    "path": "k8s/example-project-1/frontend",
    "project_id": 39,
    "location": "gitlab-registry.jklug.work/k8s/example-project-1/frontend",
    "created_at": "2025-02-16T17:57:31.332Z",
    "cleanup_policy_started_at": null,
    "status": null
  },
  {
    "id": 32,
    "name": "backend",
    "path": "k8s/example-project-1/backend",
    "project_id": 39,
    "location": "gitlab-registry.jklug.work/k8s/example-project-1/backend",
    "created_at": "2025-02-16T17:57:32.664Z",
    "cleanup_policy_started_at": null,
    "status": null
  }
]
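Note that the project path in the API URL must be URL-encoded (the “/” becomes “%2F”); it can be derived from the plain path:

```shell
# URL-encode the "/" in the project path for the GitLab API
PROJECT_PATH="k8s/example-project-1"
ENCODED_PATH=$(printf '%s' "$PROJECT_PATH" | sed 's|/|%2F|g')
echo "$ENCODED_PATH"                    # prints: k8s%2Fexample-project-1
```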

Copy Project Token User
#

For each project token, a bot user is created. Copy the name of the user:

  • Go to: (Project) “Manage” > “Members”

  • Copy the registry-token username from the members section; it should look like this:

# Copy project member "registry-token"
project_39_bot_8fd07cb3c6c1c38b7a1b298526783626

Create GitLab Registry Secret
#

Create the secret in the following namespaces:

  • example-project-1-dev
  • example-project-1-staging
  • example-project-1-prod
# Create a secret with the GitLab access token: "example-project-1-dev" namespace
kubectl create secret docker-registry gitlab-registry-secret \
  --docker-server=gitlab-registry.jklug.work \
  --docker-username=project_39_bot_8fd07cb3c6c1c38b7a1b298526783626 \
  --docker-password=glpat-iWnxaASmCMan4Q1957WG \
  --docker-email=juergen@jklug.work \
  --namespace=example-project-1-dev

# Create a secret with the GitLab access token: "example-project-1-staging" namespace
kubectl create secret docker-registry gitlab-registry-secret \
  --docker-server=gitlab-registry.jklug.work \
  --docker-username=project_39_bot_8fd07cb3c6c1c38b7a1b298526783626 \
  --docker-password=glpat-iWnxaASmCMan4Q1957WG \
  --docker-email=juergen@jklug.work \
  --namespace=example-project-1-staging

# Create a secret with the GitLab access token: "example-project-1-prod" namespace
kubectl create secret docker-registry gitlab-registry-secret \
  --docker-server=gitlab-registry.jklug.work \
  --docker-username=project_39_bot_8fd07cb3c6c1c38b7a1b298526783626 \
  --docker-password=glpat-iWnxaASmCMan4Q1957WG \
  --docker-email=juergen@jklug.work \
  --namespace=example-project-1-prod

  • --docker-password=glpat-iWnxaASmCMan4Q1957WG Access token

  • --docker-username=project_39_bot_8fd07cb3c6c1c38b7a1b298526783626 Access token bot user
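Since the three commands differ only in the target namespace, they can also be wrapped in a loop. A minimal sketch; the kubectl stub at the top is only there so the loop can be dry-run on a machine without cluster access:

```shell
# Dry-run stub: remove this line when running against a real cluster
if ! command -v kubectl >/dev/null 2>&1; then kubectl() { echo "kubectl $*"; }; fi

# Create the same registry secret in all three environment namespaces
for ns in example-project-1-dev example-project-1-staging example-project-1-prod; do
  kubectl create secret docker-registry gitlab-registry-secret \
    --docker-server=gitlab-registry.jklug.work \
    --docker-username=project_39_bot_8fd07cb3c6c1c38b7a1b298526783626 \
    --docker-password=glpat-iWnxaASmCMan4Q1957WG \
    --docker-email=juergen@jklug.work \
    --namespace="$ns"
done
```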


Verify the Secrets
#

# List secret details
kubectl get secret gitlab-registry-secret -o yaml -n example-project-1-dev
kubectl get secret gitlab-registry-secret -o yaml -n example-project-1-staging
kubectl get secret gitlab-registry-secret -o yaml -n example-project-1-prod

# Shell output:
apiVersion: v1
data:
  .dockerconfigjson: eyJhdXRocyI6eyJnaXRsYWItcmVnaXN0cnkuamtsdWcud29yayI6eyJ1c2VybmFtZSI6InByb2plY3RfMzlfYm90XzhmZDA3Y2IzYzZjMWMzOGI3YTFiMjk4NTI2NzgzNjI2IiwicGFzc3dvcmQiOiJnbHBhdC1pV254YUFTbUNNYW40UTE5NTdXRyIsImVtYWlsIjoianVlcmdlbkBqa2x1Zy53b3JrIiwiYXV0aCI6ImNISnZhbVZqZEY4ek9WOWliM1JmT0daa01EZGpZak5qTm1NeFl6TTRZamRoTVdJeU9UZzFNalkzT0RNMk1qWTZaMnh3WVhRdGFWZHVlR0ZCVTIxRFRXRnVORkV4T1RVM1YwYz0ifX19
kind: Secret
metadata:
  creationTimestamp: "2025-02-16T19:02:43Z"
  name: gitlab-registry-secret
  namespace: example-project-1-dev
  resourceVersion: "65613"
  uid: d48e0262-93c4-4fd5-a45e-b9b8a46c47e8
type: kubernetes.io/dockerconfigjson



TLS Secret for Ingress
#

Create a Kubernetes secret for the TLS certificate that is used by the Ingress (TLS version only):

# Create secret: Dev namespace
kubectl create secret tls ingress-secret \
  --cert=./fullchain.pem \
  --key=./privkey.pem \
  -n example-project-1-dev

# Create secret: Staging namespace
kubectl create secret tls ingress-secret \
  --cert=./fullchain.pem \
  --key=./privkey.pem \
  -n example-project-1-staging

# Create secret: Prod namespace
kubectl create secret tls ingress-secret \
  --cert=./fullchain.pem \
  --key=./privkey.pem \
  -n example-project-1-prod
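Before creating the secrets, it is worth checking that `fullchain.pem` and `privkey.pem` actually belong together. A quick sketch for RSA keys, demonstrated here on a throwaway self-signed pair (substitute the real files):

```shell
# Generate a throwaway self-signed pair as a stand-in for the real files
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=jklug.work" \
  -keyout privkey.pem -out fullchain.pem 2>/dev/null

# Certificate and key match when their modulus hashes are identical
cert_mod=$(openssl x509 -noout -modulus -in fullchain.pem | openssl md5)
key_mod=$(openssl rsa -noout -modulus -in privkey.pem | openssl md5)
[ "$cert_mod" = "$key_mod" ] && echo "certificate and key match"
```

If the hashes differ, `kubectl create secret tls` would pair the certificate with the wrong key and TLS handshakes through the Ingress would fail.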



Argo CD ApplicationSet Setup
#

Connect GitLab Repository
#

Add the GitLab repository to Argo CD. This allows Argo CD to access and manage the repository using the specified SSH private key:

# Connect a GitLab repository via SSH: Define path to private SSH key "argocd_repo_key"
argocd repo add git@gitlab.jklug.work:k8s/example-project-1-helm.git --ssh-private-key-path argocd_repo_key

# Shell output:
Repository 'git@gitlab.jklug.work:k8s/example-project-1-helm.git' added

The GitLab repository should now be available in the Argo CD web interface under: “Settings” > “Repositories”


Note: If the following error appears, it is necessary to log in with the Argo CD CLI first:

# Error
FATA[0000] rpc error: code = Unauthenticated desc = invalid session: Token is expired

# Login via Argo CD CLI
argocd login argocd.jklug.work

Verify GitLab Repository
#

# List connected GitLab repositories
argocd repo list

# Shell output:
TYPE  NAME  REPO                                                  INSECURE  OCI    LFS    CREDS  STATUS      MESSAGE  PROJECT
git         git@gitlab.jklug.work:k8s/example-project-1-helm.git  false     false  false  false  Successful
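As an alternative to `argocd repo add`, the repository credentials can also be managed declaratively: Argo CD picks up any Secret in the `argocd` namespace that carries the label `argocd.argoproj.io/secret-type: repository`. A sketch, with the private key elided:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: example-project-1-helm-repo
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: git@gitlab.jklug.work:k8s/example-project-1-helm.git
  sshPrivateKey: |
    -----BEGIN OPENSSH PRIVATE KEY-----
    ...
    -----END OPENSSH PRIVATE KEY-----
```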

Create an ApplicationSet for Multiple Branches
#

# Create a configuration for the ApplicationSet
vi example-project-1-applicationset.yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: example-project-1-helm
  namespace: argocd
spec:
  generators:
    - list:
        elements:
          - branch: dev
            namespace: example-project-1-dev
          - branch: staging
            namespace: example-project-1-staging
          - branch: prod
            namespace: example-project-1-prod
  template:
    metadata:
      name: example-project-1-helm-{{branch}}
    spec:
      project: default  # ArgoCD Project
      source:
        repoURL: git@gitlab.jklug.work:k8s/example-project-1-helm.git
        targetRevision: "{{branch}}"
        path: helm-chart
        helm:
          valueFiles:
            - values.yaml
      destination:
        server: https://kubernetes.default.svc
        namespace: "{{namespace}}"
      syncPolicy:
        automated:
          prune: true
          selfHeal: true

# Create the ApplicationSet
kubectl apply -f example-project-1-applicationset.yaml

# Shell output:
applicationset.argoproj.io/example-project-1-helm created



Verify the Deployment
#

Verify the Argo CD Applications
#

# Verify Argo CD applications
argocd app list

# Shell output:
NAME                                   CLUSTER                         NAMESPACE                  PROJECT  STATUS  HEALTH   SYNCPOLICY  CONDITIONS  REPO                                                  PATH        TARGET
argocd/example-project-1-helm-dev      https://kubernetes.default.svc  example-project-1-dev      default  Synced  Healthy  Auto-Prune  <none>      git@gitlab.jklug.work:k8s/example-project-1-helm.git  helm-chart  dev
argocd/example-project-1-helm-prod     https://kubernetes.default.svc  example-project-1-prod     default  Synced  Healthy  Auto-Prune  <none>      git@gitlab.jklug.work:k8s/example-project-1-helm.git  helm-chart  prod
argocd/example-project-1-helm-staging  https://kubernetes.default.svc  example-project-1-staging  default  Synced  Healthy  Auto-Prune  <none>      git@gitlab.jklug.work:k8s/example-project-1-helm.git  helm-chart  staging

# List dev branch details
argocd app get example-project-1-helm-dev

# Shell output:
Name:               argocd/example-project-1-helm-dev
Project:            default
Server:             https://kubernetes.default.svc
Namespace:          example-project-1-dev
URL:                https://argocd.jklug.work/applications/example-project-1-helm-dev
Source:
- Repo:             git@gitlab.jklug.work:k8s/example-project-1-helm.git
  Target:           dev
  Path:             helm-chart
  Helm Values:      values.yaml
SyncWindow:         Sync Allowed
Sync Policy:        Automated (Prune)
Sync Status:        Synced to dev (dbf94fa)
Health Status:      Healthy

GROUP              KIND        NAMESPACE              NAME                     STATUS  HEALTH   HOOK  MESSAGE
                   Service     example-project-1-dev  example-app-frontend     Synced  Healthy        service/example-app-frontend created
                   Service     example-project-1-dev  example-app-backend      Synced  Healthy        service/example-app-backend created
apps               Deployment  example-project-1-dev  example-app-frontend     Synced  Healthy        deployment.apps/example-app-frontend created
apps               Deployment  example-project-1-dev  example-app-backend      Synced  Healthy        deployment.apps/example-app-backend created
networking.k8s.io  Ingress     example-project-1-dev  example-project-ingress  Synced  Healthy        ingress.networking.k8s.io/example-project-ingress created

# List staging branch details
argocd app get example-project-1-helm-staging

# Shell output:
Name:               argocd/example-project-1-helm-staging
Project:            default
Server:             https://kubernetes.default.svc
Namespace:          example-project-1-staging
URL:                https://argocd.jklug.work/applications/example-project-1-helm-staging
Source:
- Repo:             git@gitlab.jklug.work:k8s/example-project-1-helm.git
  Target:           staging
  Path:             helm-chart
  Helm Values:      values.yaml
SyncWindow:         Sync Allowed
Sync Policy:        Automated (Prune)
Sync Status:        Synced to staging (41f91aa)
Health Status:      Healthy

GROUP              KIND        NAMESPACE                  NAME                     STATUS  HEALTH   HOOK  MESSAGE
                   Service     example-project-1-staging  example-app-backend      Synced  Healthy        service/example-app-backend created
                   Service     example-project-1-staging  example-app-frontend     Synced  Healthy        service/example-app-frontend created
apps               Deployment  example-project-1-staging  example-app-backend      Synced  Healthy        deployment.apps/example-app-backend created
apps               Deployment  example-project-1-staging  example-app-frontend     Synced  Healthy        deployment.apps/example-app-frontend created
networking.k8s.io  Ingress     example-project-1-staging  example-project-ingress  Synced  Healthy        ingress.networking.k8s.io/example-project-ingress created

# List prod branch details
argocd app get example-project-1-helm-prod

# Shell output:
Name:               argocd/example-project-1-helm-prod
Project:            default
Server:             https://kubernetes.default.svc
Namespace:          example-project-1-prod
URL:                https://argocd.jklug.work/applications/example-project-1-helm-prod
Source:
- Repo:             git@gitlab.jklug.work:k8s/example-project-1-helm.git
  Target:           prod
  Path:             helm-chart
  Helm Values:      values.yaml
SyncWindow:         Sync Allowed
Sync Policy:        Automated (Prune)
Sync Status:        Synced to prod (9b6737d)
Health Status:      Healthy

GROUP              KIND        NAMESPACE               NAME                     STATUS  HEALTH   HOOK  MESSAGE
                   Service     example-project-1-prod  example-app-frontend     Synced  Healthy        service/example-app-frontend created
                   Service     example-project-1-prod  example-app-backend      Synced  Healthy        service/example-app-backend created
apps               Deployment  example-project-1-prod  example-app-backend      Synced  Healthy        deployment.apps/example-app-backend created
apps               Deployment  example-project-1-prod  example-app-frontend     Synced  Healthy        deployment.apps/example-app-frontend created
networking.k8s.io  Ingress     example-project-1-prod  example-project-ingress  Synced  Healthy        ingress.networking.k8s.io/example-project-ingress created

Verify App Deployment: Dev
#

# List resources in the "example-project-1-dev" namespace
kubectl get all -n example-project-1-dev

# Shell output:
NAME                                        READY   STATUS    RESTARTS   AGE
pod/example-app-backend-77f47f45fd-fjrpg    1/1     Running   0          81s
pod/example-app-frontend-849d4d7c4b-q6p6h   1/1     Running   0          81s

NAME                           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/example-app-backend    ClusterIP   10.97.92.74     <none>        8080/TCP   81s
service/example-app-frontend   ClusterIP   10.106.100.93   <none>        8080/TCP   81s

NAME                                   READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/example-app-backend    1/1     1            1           81s
deployment.apps/example-app-frontend   1/1     1            1           81s

NAME                                              DESIRED   CURRENT   READY   AGE
replicaset.apps/example-app-backend-77f47f45fd    1         1         1       81s
replicaset.apps/example-app-frontend-849d4d7c4b   1         1         1       81s

# List Ingress resources (it can take a minute until the Ingress resource gets an IP)
kubectl get ingress -n example-project-1-dev

# Shell output:
NAME                      CLASS   HOSTS                            ADDRESS          PORTS     AGE
example-project-ingress   nginx   example-project-dev.jklug.work   192.168.30.200   80, 443   102s

Verify App Deployment: Staging
#

# List resources in the "example-project-1-staging" namespace
kubectl get all -n example-project-1-staging

# Shell output:
NAME                                        READY   STATUS    RESTARTS   AGE
pod/example-app-backend-7884f949b6-6ng2t    1/1     Running   0          2m8s
pod/example-app-frontend-798b7c88b6-vtkx9   1/1     Running   0          2m8s

NAME                           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/example-app-backend    ClusterIP   10.107.60.205   <none>        8080/TCP   2m8s
service/example-app-frontend   ClusterIP   10.96.130.249   <none>        8080/TCP   2m8s

NAME                                   READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/example-app-backend    1/1     1            1           2m8s
deployment.apps/example-app-frontend   1/1     1            1           2m8s

NAME                                              DESIRED   CURRENT   READY   AGE
replicaset.apps/example-app-backend-7884f949b6    1         1         1       2m8s
replicaset.apps/example-app-frontend-798b7c88b6   1         1         1       2m8s

# List Ingress resources (it can take a minute until the Ingress resource gets an IP)
kubectl get ingress -n example-project-1-staging

# Shell output:
NAME                      CLASS   HOSTS                                ADDRESS          PORTS     AGE
example-project-ingress   nginx   example-project-staging.jklug.work   192.168.30.200   80, 443   2m19s

Verify App Deployment: Prod
#

# List resources in the "example-project-1-prod" namespace
kubectl get all -n example-project-1-prod

# Shell output:
NAME                                        READY   STATUS    RESTARTS   AGE
pod/example-app-backend-79d97497b7-kjz7f    1/1     Running   0          2m46s
pod/example-app-frontend-66d7d78fc9-kxp2j   1/1     Running   0          2m46s

NAME                           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/example-app-backend    ClusterIP   10.109.192.176   <none>        8080/TCP   2m46s
service/example-app-frontend   ClusterIP   10.109.14.183    <none>        8080/TCP   2m46s

NAME                                   READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/example-app-backend    1/1     1            1           2m46s
deployment.apps/example-app-frontend   1/1     1            1           2m46s

NAME                                              DESIRED   CURRENT   READY   AGE
replicaset.apps/example-app-backend-79d97497b7    1         1         1       2m46s
replicaset.apps/example-app-frontend-66d7d78fc9   1         1         1       2m46s

# List Ingress resources (it can take a minute until the Ingress resource gets an IP)
kubectl get ingress -n example-project-1-prod

# Shell output:
NAME                      CLASS   HOSTS                             ADDRESS          PORTS     AGE
example-project-ingress   nginx   example-project-prod.jklug.work   192.168.30.200   80, 443   2m55s

Test the Application
#

Create DNS Entry for Ingress
#

Create DNS entries for the Ingress resources (or add them to the client's hosts file):

# Create the following DNS entries:
192.168.30.200 example-project-dev.jklug.work
192.168.30.200 example-project-staging.jklug.work
192.168.30.200 example-project-prod.jklug.work

Curl The Application
#

Test The Application: dev

# Test the Frontend and Backend Apps
curl example-project-dev.jklug.work/frontend
curl example-project-dev.jklug.work/backend

Test The Application: staging

# Test the Frontend and Backend Apps
curl example-project-staging.jklug.work/frontend
curl example-project-staging.jklug.work/backend

Test The Application: prod

# Test the Frontend and Backend Apps
curl example-project-prod.jklug.work/frontend
curl example-project-prod.jklug.work/backend
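If the client has no DNS entry for the hostnames yet, curl can pin the resolution itself via `--resolve` (a sketch, using the Ingress IP from the sections above):

```shell
# Map the hostname to the Ingress IP for this request only
curl --resolve example-project-dev.jklug.work:80:192.168.30.200 \
  http://example-project-dev.jklug.work/frontend
```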

Manually Sync
#

# Manually sync application: dev
argocd app sync example-project-1-helm-dev

# Manually sync application: staging
argocd app sync example-project-1-helm-staging

# Manually sync application: prod
argocd app sync example-project-1-helm-prod



App Versions / Revisions
#

Create Commit
#

Create another commit and push it into the code repository, for example into the dev branch:


List Revision History
#

It can take up to 3 minutes (Argo CD's default repository polling interval) until Argo CD picks up the newest revision.
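To avoid waiting for the polling interval, a refresh can be requested explicitly:

```shell
# Ask Argo CD to compare against the latest Git state immediately
argocd app get example-project-1-helm-dev --refresh
```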

# List the history of the deployed app
argocd app history example-project-1-helm-dev

# Shell output:
ID      DATE                           REVISION
0       2025-02-16 19:05:30 +0000 UTC  dev (dbf94fa)
1       2025-02-16 19:17:17 +0000 UTC  dev (ca3ebf9)

Rollback Revision
#

Adapt ApplicationSet Sync Policy
#

Adapt the sync policy of the ApplicationSet so that the revision of the dev branch can be changed manually:

# Open the configuration for the ApplicationSet
vi example-project-1-applicationset.yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: example-project-1-helm
  namespace: argocd
spec:
  generators:
    - list:
        elements:
          - branch: dev
            namespace: example-project-1-dev
          - branch: staging
            namespace: example-project-1-staging
          - branch: prod
            namespace: example-project-1-prod
  template:
    metadata:
      name: example-project-1-helm-{{branch}}
    spec:
      project: default  # ArgoCD Project
      source:
        repoURL: git@gitlab.jklug.work:k8s/example-project-1-helm.git
        targetRevision: "{{branch}}"
        path: helm-chart
        helm:
          valueFiles:
            - values.yaml
      destination:
        server: https://kubernetes.default.svc
        namespace: "{{namespace}}"
      syncPolicy: {}  # Manual sync
#      syncPolicy:
#        automated:
#          prune: true
#          selfHeal: true

# Adapt the sync policy of the ApplicationSet
kubectl apply -f example-project-1-applicationset.yaml

# Shell output:
applicationset.argoproj.io/example-project-1-helm configured

Verify Sync Policy
#

Verify the sync policy for each branch:

# Output the current policy: Dev (-A3 = +3 lines)
argocd app get example-project-1-helm-dev -o yaml | grep -A3 syncPolicy

# Output the current policy: Staging (-A3 = +3 lines)
argocd app get example-project-1-helm-staging -o yaml | grep -A3 syncPolicy

Rollback to First Revision
#

# Rollback to revision: Syntax
argocd app rollback example-project-1-helm-dev <REVISION-ID>

# Rollback to revision: Example first revision
argocd app rollback example-project-1-helm-dev 0

Alternatively, the following commands can be used:

# Set the application to the Git commit of the first revision
argocd app set example-project-1-helm-dev --revision dbf94fa

# Sync the application
argocd app sync example-project-1-helm-dev

Verify Current Revision
#

# List the history of the deployed app
argocd app history example-project-1-helm-dev

# Shell output:
ID      DATE                           REVISION
0       2025-02-16 19:05:30 +0000 UTC  dev (dbf94fa)
1       2025-02-16 19:17:17 +0000 UTC  dev (ca3ebf9)
2       2025-02-16 19:23:20 +0000 UTC  dev (dbf94fa)



Verify Application via Argo CD Web Interface
#

  • Select the “Applications” tab

  • Select an application

  • Verify the current revision:



Delete the ApplicationSet
#

# Delete the ApplicationSet
kubectl delete applicationset example-project-1-helm -n argocd
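The generated Applications carry an owner reference to the ApplicationSet, so by default they are removed along with it (cascading deletion). This can be confirmed afterwards:

```shell
# The three generated Applications should be gone as well
kubectl get applications -n argocd
```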