Overview #
This tutorial deploys an ELK stack (Elasticsearch, Kibana, Filebeat and Metricbeat; no Logstash) in a Kubernetes cluster, using NFS CSI-based Persistent Volume Claims (PVCs) for the Elasticsearch storage.
This is a very minimalistic setup, and so far I’m not particularly keen on ELK. I would recommend using a Grafana, Loki & Prometheus stack instead.
I’m using the following K3s Kubernetes cluster:
# K3s Cluster
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
ubuntu1 Ready control-plane,master 7d21h v1.30.5+k3s1 192.168.30.10 <none> Ubuntu 24.04.1 LTS 6.8.0-45-generic containerd://1.7.21-k3s2
ubuntu2 Ready worker 7d21h v1.30.5+k3s1 192.168.30.11 <none> Ubuntu 24.04.1 LTS 6.8.0-45-generic containerd://1.7.21-k3s2
ubuntu3 Ready worker 7d21h v1.30.5+k3s1 192.168.30.12 <none> Ubuntu 24.04.1 LTS 6.8.0-45-generic containerd://1.7.21-k3s2
ubuntu4 Ready worker 7d21h v1.30.5+k3s1 192.168.30.13 <none> Ubuntu 24.04.1 LTS 6.8.0-45-generic containerd://1.7.21-k3s2
I’m also using an NFS CSI StorageClass with the name nfs-csi.
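To confirm the StorageClass is available before deploying, list it with kubectl:
# Verify that the "nfs-csi" StorageClass exists
kubectl get storageclass nfs-csi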
Prerequisites #
Install Helm #
# Install Helm with script
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 &&
chmod +x get_helm.sh &&
./get_helm.sh
# Verify the installation / check version
helm version
Add Elastic Helm Repository #
# Add the Elastic Helm repository
helm repo add elastic https://helm.elastic.co &&
helm repo update
# List Helm repositories
helm search repo elastic
# Shell output:
NAME CHART VERSION APP VERSION DESCRIPTION
elastic/eck-elasticsearch 0.12.1 Elasticsearch managed by the ECK operator
elastic/elasticsearch 8.5.1 8.5.1 Official Elastic helm chart for Elasticsearch
elastic/apm-attacher 1.1.1 A Helm chart installing the Elastic APM Kuberne...
elastic/apm-server 8.5.1 8.5.1 Official Elastic helm chart for Elastic APM Server
elastic/eck-agent 0.12.1 Elastic Agent managed by the ECK operator
elastic/eck-apm-server 0.12.1 Elastic APM Server managed by the ECK operator
elastic/eck-beats 0.12.1 Elastic Beats managed by the ECK operator
elastic/eck-enterprise-search 0.12.1 Elastic Enterprise Search managed by the ECK op...
elastic/eck-fleet-server 0.12.1 Elastic Fleet Server as an Agent managed by the...
elastic/eck-kibana 0.12.1 Kibana managed by the ECK operator
elastic/eck-logstash 0.12.1 Logstash managed by the ECK operator
elastic/eck-operator 2.14.0 2.14.0 Elastic Cloud on Kubernetes (ECK) operator
elastic/eck-operator-crds 2.14.0 2.14.0 ECK operator Custom Resource Definitions
elastic/eck-stack 0.12.1 Elastic Stack managed by the ECK Operator
elastic/filebeat 8.5.1 8.5.1 Official Elastic helm chart for Filebeat
elastic/kibana 8.5.1 8.5.1 Official Elastic helm chart for Kibana
elastic/logstash 8.5.1 8.5.1 Official Elastic helm chart for Logstash
elastic/metricbeat 8.5.1 8.5.1 Official Elastic helm chart for Metricbeat
elastic/pf-host-agent 8.14.3 8.14.3 Hyperscaler software efficiency. For everybody.
elastic/profiling-agent 8.15.2 8.15.2 Hyperscaler software efficiency. For everybody.
elastic/profiling-collector 8.15.2 8.15.2 Universal Profiling. Hyperscaler software effic...
elastic/profiling-symbolizer 8.15.2 8.15.2 Universal Profiling. Hyperscaler software effic...
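To see all available versions of a chart (useful for pinning a specific release with helm install --version), use the --versions flag:
# Optional: list all available versions of a chart
helm search repo elastic/elasticsearch --versions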
Create Namespace #
# Create a "monitoring" namespace for the deployment
kubectl create ns monitoring
Elasticsearch #
Adapt Values #
# Create a manifest for the Elasticsearch values
vi elasticsearch-values.yml
# Optional: write the chart's default values into the file
helm show values elastic/elasticsearch > elasticsearch-values.yml
Adapt the configuration as follows:
# StorageClass settings
volumeClaimTemplate:
  accessModes: ["ReadWriteOnce"]
  storageClassName: "nfs-csi"
  resources:
    requests:
      storage: 5Gi

# AntiAffinity settings
antiAffinity: "soft"

# Node and replica settings
replicas: 1
minimumMasterNodes: 1
- storageClassName: "nfs-csi" defines the StorageClass used for the Elasticsearch PVC
- antiAffinity: "soft" allows all pods to be scheduled even if the Kubernetes cluster does not have enough worker nodes to place each pod on a distinct node
This deploys a basic Elasticsearch setup with only one node / pod.
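For comparison, a sketch of values for a highly available three-node cluster (assuming at least three schedulable worker nodes; the replica and master counts here are illustrative):
# Example: three-node Elasticsearch cluster (requires >= 3 schedulable nodes)
replicas: 3
minimumMasterNodes: 2

# "hard" enforces one Elasticsearch pod per node
antiAffinity: "hard"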
Deploy Elasticsearch #
# Deploy Elasticsearch
helm install elasticsearch elastic/elasticsearch -f elasticsearch-values.yml -n monitoring
# Shell output:
NAME: elasticsearch
LAST DEPLOYED: Thu Oct 10 10:30:25 2024
NAMESPACE: monitoring
STATUS: deployed
REVISION: 1
NOTES:
1. Watch all cluster members come up.
$ kubectl get pods --namespace=monitoring -l app=elasticsearch-master -w
2. Retrieve elastic user's password.
$ kubectl get secrets --namespace=monitoring elasticsearch-master-credentials -ojsonpath='{.data.password}' | base64 -d
3. Test cluster health using Helm test.
$ helm --namespace=monitoring test elasticsearch
Verify Deployment #
# List PVCs in "monitoring" namespace
kubectl get pvc -n monitoring
# Shell output:
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
elasticsearch-master-elasticsearch-master-0 Bound pvc-1961728e-cc3c-4040-9cfe-e2d3921453f4 5Gi RWO nfs-csi <unset> 31s
# List PVs
kubectl get pv
# Shell output:
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS VOLUMEATTRIBUTESCLASS REASON AGE
pvc-1961728e-cc3c-4040-9cfe-e2d3921453f4 5Gi RWO Retain Bound monitoring/elasticsearch-master-elasticsearch-master-0 nfs-csi <unset>
# List pods
kubectl get pods --namespace=monitoring -l app=elasticsearch-master -w
# Shell output: (Wait till the pods are ready)
NAME READY STATUS RESTARTS AGE
elasticsearch-master-0 1/1 Running 0 66s
# List services in "monitoring" namespace
kubectl get svc -n monitoring
# Shell output:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
elasticsearch-master ClusterIP 10.43.167.70 <none> 9200/TCP,9300/TCP 77s
elasticsearch-master-headless ClusterIP None <none> 9200/TCP,9300/TCP 77s
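Optionally, test the Elasticsearch API from inside the cluster. The following sketch reads the generated password into a shell variable and queries the service from a temporary curl pod (the curlimages/curl image is an assumption; any image with curl works):
# Read the elastic user's password into a variable
ELASTIC_PASSWORD=$(kubectl get secrets --namespace=monitoring elasticsearch-master-credentials -ojsonpath='{.data.password}' | base64 -d)

# Query the Elasticsearch API from a temporary pod
# "-k" skips TLS verification, since the chart generates self-signed certificates
kubectl run curl-test -n monitoring --rm -it --restart=Never --image=curlimages/curl -- \
  curl -sk -u "elastic:${ELASTIC_PASSWORD}" https://elasticsearch-master:9200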
Filebeat #
Adapt Values #
# Create a manifest for the Filebeat values
vi filebeat-values.yml
# Optional: write the chart's default values into the file
helm show values elastic/filebeat > filebeat-values.yml
Make sure Filebeat is configured to send its logs to Elasticsearch at elasticsearch-master:9200:
filebeatConfig:
  filebeat.yml: |
    filebeat.inputs:
    - type: container
      paths:
        - /var/log/containers/*.log
      processors:
      - add_kubernetes_metadata:
          host: ${NODE_NAME}
          matchers:
          - logs_path:
              logs_path: "/var/log/containers/"

    output.elasticsearch:
      host: '${NODE_NAME}'
      hosts: '["https://${ELASTICSEARCH_HOSTS:elasticsearch-master:9200}"]'
      username: '${ELASTICSEARCH_USERNAME}'
      password: '${ELASTICSEARCH_PASSWORD}'
      protocol: https
      ssl.certificate_authorities: ["/usr/share/filebeat/certs/ca.crt"]
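The ELASTICSEARCH_USERNAME and ELASTICSEARCH_PASSWORD variables are injected from the elasticsearch-master-credentials secret via the chart’s default extraEnvs, and ${ELASTICSEARCH_HOSTS:elasticsearch-master:9200} falls back to elasticsearch-master:9200 when the variable is unset. The relevant chart default looks roughly like this (a sketch; verify against your chart version with helm show values):
# Default extraEnvs in the Filebeat chart (sketch)
daemonset:
  extraEnvs:
    - name: "ELASTICSEARCH_USERNAME"
      valueFrom:
        secretKeyRef:
          name: elasticsearch-master-credentials
          key: username
    - name: "ELASTICSEARCH_PASSWORD"
      valueFrom:
        secretKeyRef:
          name: elasticsearch-master-credentials
          key: password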
Deploy Filebeat #
# Deploy Filebeat
helm install filebeat elastic/filebeat -f filebeat-values.yml -n monitoring
# Shell output:
NAME: filebeat
LAST DEPLOYED: Thu Oct 10 10:34:30 2024
NAMESPACE: monitoring
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Watch all containers come up.
$ kubectl get pods --namespace=monitoring -l app=filebeat-filebeat -w
Verify Deployment #
# List pods
kubectl get pods --namespace=monitoring -l app=filebeat-filebeat -w
# Shell output: (Wait till pods are ready)
NAME READY STATUS RESTARTS AGE
filebeat-filebeat-6kkz8 1/1 Running 0 74s
filebeat-filebeat-cr9bx 1/1 Running 0 74s
filebeat-filebeat-csr65 1/1 Running 0 74s
filebeat-filebeat-pgm6r 1/1 Running 0 74s
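To confirm that Filebeat is actually shipping logs, query the Elasticsearch _cat API for Filebeat indices (reusing the ELASTIC_PASSWORD variable and temporary curl pod from the Elasticsearch verification above):
# List Filebeat indices in Elasticsearch
kubectl run curl-test -n monitoring --rm -it --restart=Never --image=curlimages/curl -- \
  curl -sk -u "elastic:${ELASTIC_PASSWORD}" "https://elasticsearch-master:9200/_cat/indices/filebeat-*?v"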
Kibana #
Adapt Values #
Optionally, adapt the values:
# Create a manifest for the Kibana values
vi kibana-values.yml
# Optional: write the chart's default values into the file
helm show values elastic/kibana > kibana-values.yml
# Make sure Kibana points to the Elasticsearch service
elasticsearchHosts: "https://elasticsearch-master:9200"
Deploy Kibana #
# Deploy Kibana
helm install kibana elastic/kibana -f kibana-values.yml -n monitoring
# Shell output:
NAME: kibana
LAST DEPLOYED: Thu Oct 10 10:41:24 2024
NAMESPACE: monitoring
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Watch all containers come up.
$ kubectl get pods --namespace=monitoring -l release=kibana -w
2. Retrieve the elastic user's password.
$ kubectl get secrets --namespace=monitoring elasticsearch-master-credentials -ojsonpath='{.data.password}' | base64 -d
3. Retrieve the kibana service account token.
$ kubectl get secrets --namespace=monitoring kibana-kibana-es-token -ojsonpath='{.data.token}' | base64 -d
Verify Deployment #
# List Kibana pods
kubectl get pods --namespace=monitoring -l release=kibana -w
# Shell output:
NAME READY STATUS RESTARTS AGE
kibana-kibana-555ddb75f-5brnx 1/1 Running 0 30s
Verify the Deployment:
# List resources in "monitoring" namespace
kubectl get all -n monitoring
# Shell output
NAME READY STATUS RESTARTS AGE
pod/elasticsearch-master-0 1/1 Running 0 12m
pod/filebeat-filebeat-6kkz8 1/1 Running 0 8m35s
pod/filebeat-filebeat-cr9bx 1/1 Running 0 8m35s
pod/filebeat-filebeat-csr65 1/1 Running 0 8m35s
pod/filebeat-filebeat-pgm6r 1/1 Running 0 8m35s
pod/kibana-kibana-555ddb75f-5brnx 1/1 Running 0 75s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/elasticsearch-master ClusterIP 10.43.167.70 <none> 9200/TCP,9300/TCP 12m
service/elasticsearch-master-headless ClusterIP None <none> 9200/TCP,9300/TCP 12m
service/kibana-kibana ClusterIP 10.43.19.122 <none> 5601/TCP 76s
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/filebeat-filebeat 4 4 4 4 4 <none> 8m35s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/kibana-kibana 1/1 1 1 76s
NAME DESIRED CURRENT READY AGE
replicaset.apps/kibana-kibana-555ddb75f 1 1 1 75s
NAME READY AGE
statefulset.apps/elasticsearch-master 1/1 12m
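Before setting up an ingress, Kibana can be reached for a quick test via port forwarding (the chart serves Kibana over plain HTTP on port 5601 by default):
# Forward local port 5601 to the Kibana service
kubectl port-forward svc/kibana-kibana 5601:5601 -n monitoring

# Kibana is then reachable at:
http://localhost:5601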
Kibana Ingress #
TLS Secret #
In this setup I’m using a Let’s Encrypt wildcard certificate.
# Create a Kubernetes secret for the TLS certificate in the "monitoring" namespace
kubectl create secret tls kibana-tls --cert=./fullchain.pem --key=./privkey.pem -n monitoring
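If no wildcard certificate is at hand, a self-signed certificate is enough for testing (browsers will warn; the command assumes OpenSSL 1.1.1 or later for the -addext flag):
# Optional: create a self-signed certificate for testing
openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes \
  -keyout privkey.pem -out fullchain.pem \
  -subj "/CN=kibana.jklug.work" \
  -addext "subjectAltName=DNS:kibana.jklug.work"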
Deploy Ingress #
# Create a manifest for the ingress
vi kibana-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kibana-ingress
  namespace: monitoring # Define namespace
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
spec:
  ingressClassName: traefik
  tls:
    - hosts:
        - "kibana.jklug.work"
      secretName: kibana-tls
  rules:
    - host: "kibana.jklug.work"
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: kibana-kibana
                port:
                  number: 5601
# Deploy the ingress resource
kubectl apply -f kibana-ingress.yaml
Verify the Ingress:
# Verify the ingress resource
kubectl get ingress -n monitoring
# Shell output:
NAME CLASS HOSTS ADDRESS PORTS AGE
kibana-ingress traefik kibana.jklug.work 192.168.30.10,192.168.30.11,192.168.30.12,192.168.30.13 80, 443 8s
DNS Entry #
# Create a DNS entry for Kibana
192.168.30.10 kibana.jklug.work
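For a lab setup without a DNS server, a hosts file entry on the client works as well (Linux example; the path is an assumption for the client OS):
# Example: add a hosts file entry on a Linux client
echo "192.168.30.10 kibana.jklug.work" | sudo tee -a /etc/hosts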
Web Interface #
Open the Web Interface #
# Open the Kibana web interface
https://kibana.jklug.work
Kibana Credentials #
# Default user:
elastic
# List the default password
kubectl get secrets --namespace=monitoring elasticsearch-master-credentials -ojsonpath='{.data.password}' | base64 -d
# Shell output:
xDIAkZncOIFD7gqY
Verify Filebeat Logs #
- Go to: “Management” > “Stack Management”
- Select “Kibana” > “Data Views”
- Click “Create data view”
- Define a name like “Filebeat Example”
- Define an “Index pattern”, for example filebeat-*
- Click “Save data view to Kibana”
- Go to (Home) “Analytics” > “Discover”
- Verify the “Filebeat Example” logs
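Alternatively, the same data view can be created via the Kibana data views API (a sketch for Kibana 8.x, reusing the ELASTIC_PASSWORD variable from above; adjust the URL to your ingress host):
# Create the "Filebeat Example" data view via the Kibana API
curl -sk -u "elastic:${ELASTIC_PASSWORD}" \
  -X POST "https://kibana.jklug.work/api/data_views/data_view" \
  -H "kbn-xsrf: true" -H "Content-Type: application/json" \
  -d '{"data_view": {"title": "filebeat-*", "name": "Filebeat Example"}}'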
Metricbeat #
Adapt Values #
Optionally, adapt the Metricbeat values:
# Save the Metricbeat Helm chart values into a file
helm show values elastic/metricbeat > metricbeat-values.yml
Deploy Metricbeat #
# Deploy Metricbeat
helm install metricbeat elastic/metricbeat -f metricbeat-values.yml -n monitoring
# Shell output:
NAME: metricbeat
LAST DEPLOYED: Thu Oct 10 10:54:51 2024
NAMESPACE: monitoring
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Watch all containers come up.
$ kubectl get pods --namespace=monitoring -l app=metricbeat-metricbeat -w
Verify Deployment #
# List the Metricbeat pods
kubectl get pods --namespace=monitoring -l app=metricbeat-metricbeat -w
# Shell output: (Wait till the pods are ready)
NAME READY STATUS RESTARTS AGE
metricbeat-metricbeat-dm297 1/1 Running 0 11s
metricbeat-metricbeat-dc428 1/1 Running 0 11s
metricbeat-metricbeat-28rln 1/1 Running 0 11s
metricbeat-metricbeat-wsb9v 1/1 Running 0 10s
Install Kibana Metricbeat Dashboards #
# Open a shell in one of the Metricbeat pods
kubectl exec -it metricbeat-metricbeat-dm297 -n monitoring -- bash
# Install Kibana Dashboards
metricbeat setup --dashboards -E setup.kibana.host=http://kibana-kibana:5601
# Shell output:
Loading dashboards (Kibana must be running and reachable)
Loaded dashboards
Test Kibana Metricbeat Dashboard #
- Go to: (Analytics) > “Dashboard”
- Select a Metricbeat Dashboard
Delete the Stack #
# Delete Elasticsearch
helm delete elasticsearch -n monitoring
# Delete Filebeat
helm delete filebeat -n monitoring
# Delete Metricbeat
helm delete metricbeat -n monitoring
# Delete Kibana
helm delete kibana -n monitoring
# Delete the "monitoring" namespace
kubectl delete ns monitoring
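Note that the Elasticsearch PV was provisioned with reclaim policy “Retain” (see the kubectl get pv output above), so it survives both the Helm uninstall and the namespace deletion and has to be removed manually:
# List leftover PVs
kubectl get pv

# Delete the released Elasticsearch volume (name from the output above)
kubectl delete pv pvc-1961728e-cc3c-4040-9cfe-e2d3921453f4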