NeuVector Kubernetes Security: Install NeuVector via Helm, Enable Ingress with TLS, Scan Containers


Setup Overview
#

In this tutorial I’m using the following Kubernetes cluster, deployed with kubeadm, using the containerd runtime and the Nginx Ingress controller:

# List the cluster nodes
kubectl get nodes -o wide

# Shell output:
NAME      STATUS   ROLES           AGE    VERSION    INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
ubuntu1   Ready    control-plane   107d   v1.29.11   192.168.30.10   <none>        Ubuntu 24.04.2 LTS   6.8.0-53-generic   containerd://1.7.25
ubuntu2   Ready    worker          107d   v1.29.11   192.168.30.11   <none>        Ubuntu 24.04.2 LTS   6.8.0-53-generic   containerd://1.7.25
ubuntu3   Ready    worker          107d   v1.29.11   192.168.30.12   <none>        Ubuntu 24.04.2 LTS   6.8.0-53-generic   containerd://1.7.25
ubuntu4   Ready    worker          107d   v1.29.11   192.168.30.13   <none>        Ubuntu 24.04.2 LTS   6.8.0-53-generic   containerd://1.7.25



NeuVector Installation
#

Create & Label Namespace
#

# Create a new namespace for NeuVector
kubectl create namespace neuvector

Label the “neuvector” namespace with the privileged Pod Security Admission profile, so that NeuVector can run the privileged containers it needs for deep security monitoring:

# Label the NeuVector namespace with privileged profile for deploying on PSA enabled cluster
kubectl label namespace neuvector "pod-security.kubernetes.io/enforce=privileged"
# Verify label
kubectl get namespace neuvector --show-labels

# Shell output:
NAME        STATUS   AGE    LABELS
neuvector   Active   4m8s   kubernetes.io/metadata.name=neuvector,pod-security.kubernetes.io/enforce=privileged
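
To verify that the privileged profile is actually in effect, a server-side dry run of a privileged pod can be used. This is a minimal sketch; the pod name “psa-check” and the busybox image are just placeholders:

# Server-side dry run of a privileged pod (nothing is persisted)
kubectl run psa-check -n neuvector --image=busybox --dry-run=server \
  --overrides='{"apiVersion":"v1","spec":{"containers":[{"name":"psa-check","image":"busybox","securityContext":{"privileged":true}}]}}'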

Create TLS Kubernetes Secret
#

In this setup I’m using a Let’s Encrypt wildcard certificate:

# Create a Kubernetes secret for the TLS certificate
kubectl create secret tls neuvector-tls \
  --namespace neuvector \
  --cert=./fullchain.pem \
  --key=./privkey.pem
# Verify the secret
kubectl get secret -n neuvector

# Shell output:
NAME            TYPE                DATA   AGE
neuvector-tls   kubernetes.io/tls   2      5s
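
Optionally, inspect the certificate stored in the secret to confirm the expected subject and validity period; a quick check with openssl:

# Decode the certificate from the secret and print its subject and dates
kubectl get secret neuvector-tls -n neuvector -o jsonpath='{.data.tls\.crt}' \
  | base64 -d | openssl x509 -noout -subject -dates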

Add Helm Repository
#

# Add the NeuVector Helm repository
helm repo add neuvector https://neuvector.github.io/neuvector-helm/

# Update all Helm repositories
helm repo update
# List available charts
helm search repo neuvector/core -l

# Shell output:
NAME            CHART VERSION   APP VERSION     DESCRIPTION
neuvector/core  2.8.4           5.4.2           Helm chart for NeuVector's core services
neuvector/core  2.8.3           5.4.1           Helm chart for NeuVector's core services
neuvector/core  2.8.2           5.4.0           Helm chart for NeuVector's core services

Adapt Helm Chart Values
#

Save Values
#

# Save the Helm Chart values
helm show values neuvector/core > neuvector-values.yaml

# Adapt the values
vi neuvector-values.yaml
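
If the saved values should match a specific chart release rather than the latest one, the version from the search output above can be pinned:

# Save the values of a pinned chart version
helm show values neuvector/core --version 2.8.4 > neuvector-values.yaml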

The original configuration looks like this:

# Default values for neuvector.
# This is a YAML-formatted file.
# Declare variables to be passed into the templates.

openshift: false

registry: docker.io
tag: 5.4.2
oem:
imagePullSecrets:
psp: false
rbac: true # required for rancher authentication
serviceAccount: default
leastPrivilege: false
global: # required for rancher authentication (https://<Rancher_URL>/)
  cattle:
    url:
  azure:
    enabled: false
    identity:
      clientId: "DONOTMODIFY" # Azure populates this value at deployment time
    marketplace:
      planId: "DONOTMODIFY" # Azure populates this value at deployment time
    extension:
      resourceId: "DONOTMODIFY" # application's Azure Resource ID, Azure populates this value at deployment time
    serviceAccount: csp
    imagePullSecrets:
    images:
      neuvector_csp_pod:
        tag: latest
        image: neuvector-billing-azure-by-suse-llc
        registry: registry.suse.de/suse/sle-15-sp5/update/pubclouds/images
        imagePullPolicy: IfNotPresent
      controller:
        tag: 5.2.4
        image: controller
        registry: docker.io/neuvector
      manager:
        tag: 5.2.4
        image: manager
        registry: docker.io/neuvector
      enforcer:
        tag: 5.2.4
        image: enforcer
        registry: docker.io/neuvector

  aws:
    enabled: false
    accountNumber: ""
    roleName: ""
    serviceAccount: csp
    annotations: {}
    imagePullSecrets:
    image:
      digest: ""
      repository: neuvector/neuvector-csp-adapter
      tag: latest
      imagePullPolicy: IfNotPresent

# Set a bootstrap password. If left empty, the default admin password is used.
bootstrapPassword: ""

autoGenerateCert: true

defaultValidityPeriod: 365

internal: 
  certmanager: # enable when cert-manager is installed for the internal certificates
    enabled: false
    secretname: neuvector-internal
  autoGenerateCert: true
  autoRotateCert: true

controller:
  # If false, controller will not be installed
  enabled: true
  annotations: {}
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  image:
    repository: neuvector/controller
    hash:
  replicas: 3
  disruptionbudget: 0
  schedulerName:
  priorityClassName:
  podLabels: {}
  podAnnotations: {}
  searchRegistries:
  env: []
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchExpressions:
                - key: app
                  operator: In
                  values:
                    - neuvector-controller-pod
            topologyKey: "kubernetes.io/hostname"
  tolerations: []
  topologySpreadConstraints: []
  nodeSelector:
    {}
    # key1: value1
    # key2: value2
  apisvc:
    type:
    annotations: {}
    nodePort:  
    # OpenShift Route configuration
    # Controller supports HTTPS only, so edge termination not supported
    route:
      enabled: false
      termination: passthrough
      host:
      tls:
        #certificate: |
        #  -----BEGIN CERTIFICATE-----
        #  -----END CERTIFICATE-----
        #caCertificate: |
        #  -----BEGIN CERTIFICATE-----
        #  -----END CERTIFICATE-----
        #destinationCACertificate: |
        #  -----BEGIN CERTIFICATE-----
        #  -----END CERTIFICATE-----
        #key: |
        #  -----BEGIN PRIVATE KEY-----
        #  -----END PRIVATE KEY-----
  ranchersso: # required for rancher authentication
    enabled: false
  pvc:
    enabled: false
    existingClaim: false
    accessModes:
      - ReadWriteMany
    storageClass:
    capacity:
  azureFileShare:
    enabled: false
    secretName:
    shareName:
  certificate:
    secret: ""
    keyFile: tls.key
    pemFile: tls.pem
    #key: |
    #  -----BEGIN PRIVATE KEY-----
    #  -----END PRIVATE KEY-----
    #certificate: |
    #  -----BEGIN CERTIFICATE-----
    #  -----END CERTIFICATE-----
  internal: # this is used for internal communication. Please use the SAME CA for all the components (controller, scanner, adapter and enforcer)
    certificate:
      secret: ""
      keyFile: tls.key
      pemFile: tls.crt
      caFile: ca.crt # must be the same CA for all internal.
  federation:
    mastersvc:
      type:
      loadBalancerIP:
      clusterIP:
      nodePort: # Must be a valid NodePort: 30000-32767
      externalTrafficPolicy:
      internalTrafficPolicy:
      # Federation Master Ingress
      ingress:
        enabled: false
        host: # MUST be set, if ingress is enabled
        ingressClassName: ""
        path: "/" # or this could be "/api", but might need "rewrite-target" annotation
        annotations:
          nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
          # ingress.kubernetes.io/rewrite-target: /
        tls: false
        secretName:
      annotations: {}
      # OpenShift Route configuration
      # Controller supports HTTPS only, so edge termination not supported
      route:
        enabled: false
        termination: passthrough
        host:
        tls:
          #certificate: |
          #  -----BEGIN CERTIFICATE-----
          #  -----END CERTIFICATE-----
          #caCertificate: |
          #  -----BEGIN CERTIFICATE-----
          #  -----END CERTIFICATE-----
          #destinationCACertificate: |
          #  -----BEGIN CERTIFICATE-----
          #  -----END CERTIFICATE-----
          #key: |
          #  -----BEGIN PRIVATE KEY-----
          #  -----END PRIVATE KEY-----
    managedsvc:
      type:
      loadBalancerIP:
      clusterIP:
      nodePort: # Must be a valid NodePort: 30000-32767
      externalTrafficPolicy:
      internalTrafficPolicy:
      # Federation Managed Ingress
      ingress:
        enabled: false
        host: # MUST be set, if ingress is enabled
        ingressClassName: ""
        path: "/" # or this could be "/api", but might need "rewrite-target" annotation
        annotations:
          nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
          # ingress.kubernetes.io/rewrite-target: /
        tls: false
        secretName:
      annotations: {}
      # OpenShift Route configuration
      # Controller supports HTTPS only, so edge termination not supported
      route:
        enabled: false
        termination: passthrough
        host:
        tls:
          #certificate: |
          #  -----BEGIN CERTIFICATE-----
          #  -----END CERTIFICATE-----
          #caCertificate: |
          #  -----BEGIN CERTIFICATE-----
          #  -----END CERTIFICATE-----
          #destinationCACertificate: |
          #  -----BEGIN CERTIFICATE-----
          #  -----END CERTIFICATE-----
          #key: |
          #  -----BEGIN PRIVATE KEY-----
          #  -----END PRIVATE KEY-----
  ingress:
    enabled: false
    host: # MUST be set, if ingress is enabled
    ingressClassName: ""
    path: "/" # or this could be "/api", but might need "rewrite-target" annotation
    annotations:
      nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
      # ingress.kubernetes.io/rewrite-target: /
    tls: false
    secretName:
  resources:
    {}
    # limits:
    #   cpu: 400m
    #   memory: 2792Mi
    # requests:
    #   cpu: 100m
    #   memory: 2280Mi
  configmap:
    enabled: false
    data:
      # passwordprofileinitcfg.yaml: |
      #  ...
      # roleinitcfg.yaml: |
      #  ...
      # ldapinitcfg.yaml: |
      #  ...
      # oidcinitcfg.yaml: |
      # ...
      # samlinitcfg.yaml: |
      # ...
      # sysinitcfg.yaml: |
      # ...
      # userinitcfg.yaml: |
      # ...
      # fedinitcfg.yaml: |
      # ...
  secret:
    # NOTE: files defined here have preference over the ones defined in the configmap section
    enabled: false
    data:
      # passwordprofileinitcfg.yaml:
      #  ...
      # roleinitcfg.yaml:
      #  ...
      # ldapinitcfg.yaml:
      #   directory: OpenLDAP
      #   ...
      # oidcinitcfg.yaml:
      #   Issuer: https://...
      #   ...
      # samlinitcfg.yaml:
      #   ...
      # sysinitcfg.yaml:
      #   ...
      userinitcfg.yaml:
        users:
        - Fullname: admin
          Password:
          Role: admin
  certupgrader:
    env: []
    # The cronjob schedule that cert-upgrader will run to check and rotate internal certificate.
    # default: "" (off)
    schedule: ""
    imagePullPolicy: IfNotPresent
    timeout: 3600 
    priorityClassName:
    podLabels: {}
    podAnnotations: {}
    nodeSelector:
      {}
      # key1: value1
      # key2: value2
    runAsUser: # MUST be set for Rancher hardened cluster
  prime:
    enabled: false
    image:
      repository: neuvector/compliance-config
      tag: 1.0.2
      hash:
enforcer:
  # If false, enforcer will not be installed
  enabled: true
  image:
    repository: neuvector/enforcer
    hash:
  updateStrategy:
    type: RollingUpdate
  priorityClassName:
  podLabels: {}
  podAnnotations: {}
  env: []
  tolerations:
    - effect: NoSchedule
      key: node-role.kubernetes.io/master
    - effect: NoSchedule
      key: node-role.kubernetes.io/control-plane
  resources:
    {}
    # limits:
    #   cpu: 400m
    #   memory: 2792Mi
    # requests:
    #   cpu: 100m
    #   memory: 2280Mi
  internal: # this is used for internal communication. Please use the SAME CA for all the components (controller, scanner, adapter and enforcer)
    certificate:
      secret: "" 
      keyFile: tls.key
      pemFile: tls.crt
      caFile: ca.crt # must be the same CA for all internal.

manager:
  # If false, manager will not be installed
  enabled: true
  image:
    repository: neuvector/manager
    hash:
  priorityClassName:
  env:
    ssl: true
    envs: []
  #      - name: CUSTOM_PAGE_HEADER_COLOR
  #        value: "#FFFFFF"
  #      - name: CUSTOM_PAGE_FOOTER_COLOR
  #        value: "#FFFFFF"
  svc:
    type: ClusterIP
    nodePort:  
    loadBalancerIP:
    annotations:
      {}
      # azure
      # service.beta.kubernetes.io/azure-load-balancer-internal: "true"
      # service.beta.kubernetes.io/azure-load-balancer-internal-subnet: "apps-subnet"
  # OpenShift Route configuration
  # Make sure manager env ssl is false for edge termination
  route:
    enabled: true
    termination: passthrough
    host:
    tls:
      #certificate: |
      #  -----BEGIN CERTIFICATE-----
      #  -----END CERTIFICATE-----
      #caCertificate: |
      #  -----BEGIN CERTIFICATE-----
      #  -----END CERTIFICATE-----
      #destinationCACertificate: |
      #  -----BEGIN CERTIFICATE-----
      #  -----END CERTIFICATE-----
      #key: |
      #  -----BEGIN PRIVATE KEY-----
      #  -----END PRIVATE KEY-----
  certificate:
    secret: ""
    keyFile: tls.key
    pemFile: tls.pem
    #key: |
    #  -----BEGIN PRIVATE KEY-----
    #  -----END PRIVATE KEY-----
    #certificate: |
    #  -----BEGIN CERTIFICATE-----
    #  -----END CERTIFICATE-----
  ingress:
    enabled: false
    host: # MUST be set, if ingress is enabled
    ingressClassName: ""
    path: "/"
    annotations:
      nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
      # kubernetes.io/ingress.class: my-nginx
      # nginx.ingress.kubernetes.io/whitelist-source-range: "1.1.1.1"
      # nginx.ingress.kubernetes.io/rewrite-target: /
      # nginx.ingress.kubernetes.io/enable-rewrite-log: "true"
      # only for end-to-end tls conf - ingress-nginx accepts backend self-signed cert
    tls: false
    secretName: # my-tls-secret
  resources:
    {}
    # limits:
    #   cpu: 400m
    #   memory: 2792Mi
    # requests:
    #   cpu: 100m
    #   memory: 2280Mi
  topologySpreadConstraints: []
  affinity: {}
  podLabels: {}
  podAnnotations: {}
  tolerations: []
  nodeSelector:
    {}
    # key1: value1
    # key2: value2
  runAsUser: # MUST be set for Rancher hardened cluster
  probes:
    enabled: false
    timeout: 1
    periodSeconds: 10
    startupFailureThreshold: 30

cve:
  adapter:
    enabled: false
    image:
      repository: neuvector/registry-adapter
      tag: 0.1.5
      hash:
    priorityClassName:
    resources:
      {}
      # limits:
      #   cpu: 400m
      #   memory: 512Mi
      # requests:
      #   cpu: 100m
      #   memory: 1024Mi
    affinity: {}
    podLabels: {}
    podAnnotations: {}
    env: []
    tolerations: []
    nodeSelector:
      {}
      # key1: value1
      # key2: value2
    runAsUser: # MUST be set for Rancher hardened cluster
    ## TLS cert/key.  If absent, TLS cert/key automatically generated will be used.
    ##
    ## default: (none)
    certificate:
      secret: ""
      keyFile: tls.key
      pemFile: tls.crt
    #key: |
    #  -----BEGIN PRIVATE KEY-----
    #  -----END PRIVATE KEY-----
    #certificate: |
    #  -----BEGIN CERTIFICATE-----
    #  -----END CERTIFICATE-----
    harbor:
      protocol: https
      secretName:
    svc:
      type: ClusterIP
      loadBalancerIP:
      annotations:
        {}
        # azure
        # service.beta.kubernetes.io/azure-load-balancer-internal: "true"
        # service.beta.kubernetes.io/azure-load-balancer-internal-subnet: "apps-subnet"
    # OpenShift Route configuration
    route:
      enabled: true
      termination: passthrough
      host:
      tls:
        #certificate: |
        #  -----BEGIN CERTIFICATE-----
        #  -----END CERTIFICATE-----
        #caCertificate: |
        #  -----BEGIN CERTIFICATE-----
        #  -----END CERTIFICATE-----
        #destinationCACertificate: |
        #  -----BEGIN CERTIFICATE-----
        #  -----END CERTIFICATE-----
        #key: |
        #  -----BEGIN PRIVATE KEY-----
        #  -----END PRIVATE KEY-----
    ingress:
      enabled: false
      host: # MUST be set, if ingress is enabled
      ingressClassName: ""
      path: "/"
      annotations:
        nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
        # kubernetes.io/ingress.class: my-nginx
        # nginx.ingress.kubernetes.io/whitelist-source-range: "1.1.1.1"
        # nginx.ingress.kubernetes.io/rewrite-target: /
        # nginx.ingress.kubernetes.io/enable-rewrite-log: "true"
        # only for end-to-end tls conf - ingress-nginx accepts backend self-signed cert
      tls: false
      secretName: # my-tls-secret
    internal: # this is used for internal communication. Please use the SAME CA for all the components (controller, scanner, adapter and enforcer)
      certificate:
        secret: "" 
        keyFile: tls.key
        pemFile: tls.crt
        caFile: ca.crt # must be the same CA for all internal.
  updater:
    # If false, cve updater will not be installed
    enabled: true
    secure: false
    cacert: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    image:
      registry: ""
      repository: neuvector/updater
      tag: 0.0.1
      hash:
    schedule: "0 0 * * *"
    priorityClassName:
    resources:
      {}
      # limits:
      #   cpu: 100m
      #   memory: 256Mi
      # requests:
      #   cpu: 100m
      #   memory: 256Mi
    podLabels: {}
    podAnnotations: {}
    nodeSelector:
      {}
      # key1: value1
      # key2: value2
    runAsUser: # MUST be set for Rancher hardened cluster
  scanner:
    enabled: true
    replicas: 3
    dockerPath: ""
    strategy:
      type: RollingUpdate
      rollingUpdate:
        maxSurge: 1
        maxUnavailable: 0
    image:
      registry: ""
      repository: neuvector/scanner
      tag: "6"
      hash:
    priorityClassName:
    resources:
      {}
      # limits:
      #   cpu: 400m
      #   memory: 2792Mi
      # requests:
      #   cpu: 100m
      #   memory: 2280Mi
    topologySpreadConstraints: []
    affinity: {}
    podLabels: {}
    podAnnotations: {}
    env: []
    tolerations: []
    nodeSelector:
      {}
      # key1: value1
      # key2: value2
    runAsUser: # MUST be set for Rancher hardened cluster
    internal: # this is used for internal communication. Please use the SAME CA for all the components (controller, scanner, adapter and enforcer)
      certificate:
        secret: "" 
        keyFile: tls.key
        pemFile: tls.crt
        caFile: ca.crt # must be the same CA for all internal.

resources:
  {}
  # limits:
  #   cpu: 400m
  #   memory: 2792Mi
  # requests:
  #   cpu: 100m
  #   memory: 2280Mi

runtimePath:

# The following runtime type and socket location are deprecated after 5.3.0.
# If the socket path is not at the default location, use above 'runtimePath' to specify the location.
docker:
  path: /var/run/docker.sock

k3s:
  enabled: false
  runtimePath: /run/k3s/containerd/containerd.sock

bottlerocket:
  enabled: false
  runtimePath: /run/dockershim.sock

containerd:
  enabled: false
  path: /var/run/containerd/containerd.sock

crio:
  enabled: false
  path: /var/run/crio/crio.sock

admissionwebhook:
  type: ClusterIP

crdwebhooksvc:
  enabled: true

crdwebhook:
  enabled: true
  type: ClusterIP

lease:
  enabled: true

Adapt Runtime
#

Since I’m using the containerd runtime in this tutorial, I set the “containerd.enabled” value to “true”:

runtimePath:

# The following runtime type and socket location are deprecated after 5.3.0.
# If the socket path is not at the default location, use above 'runtimePath' to specify the location.
docker:
  path: /var/run/docker.sock

k3s:
  enabled: false
  runtimePath: /run/k3s/containerd/containerd.sock

bottlerocket:
  enabled: false
  runtimePath: /run/dockershim.sock

containerd:
  enabled: true # Set to true
  path: /var/run/containerd/containerd.sock

crio:
  enabled: false
  path: /var/run/crio/crio.sock

admissionwebhook:
  type: ClusterIP

crdwebhooksvc:
  enabled: true

crdwebhook:
  enabled: true
  type: ClusterIP

lease:
  enabled: true
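
Before enabling the flag, it is worth confirming that the containerd socket actually exists at the default path on the nodes; a quick check, run on each node:

# Verify the containerd socket path
ls -l /var/run/containerd/containerd.sock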

Adapt Ingress
#

Adapt the Manager configuration to enable the Ingress with TLS:

manager:
  enabled: true
  certificate:
    secret: "neuvector-tls"  # Define a Kubernetes secret
    keyFile: tls.key
    pemFile: tls.pem
  ingress:
    enabled: true # Enable Ingress
    host: neuvector.jklug.work # Define domain name
    ingressClassName: "nginx" # Define Ingress class
    path: "/"
    annotations:
      nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
      # kubernetes.io/ingress.class: my-nginx
      # nginx.ingress.kubernetes.io/whitelist-source-range: "1.1.1.1"
      # nginx.ingress.kubernetes.io/rewrite-target: /
      # nginx.ingress.kubernetes.io/enable-rewrite-log: "true"
      # only for end-to-end tls conf - ingress-nginx accepts backend self-signed cert
    tls: true
    secretName: neuvector-tls  # Reference the TLS Kubernetes secret created earlier
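
Before installing, the manifests that the chart renders from these values can be previewed offline, which makes it easy to confirm the Ingress host and TLS section; a sketch using helm template:

# Render the chart locally and show the generated Ingress resource
helm template neuvector neuvector/core \
  --namespace neuvector \
  --version 2.8.4 \
  -f neuvector-values.yaml | grep -A 20 "kind: Ingress"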

Install NeuVector
#

# Install NeuVector via Helm
helm install neuvector neuvector/core \
  --namespace neuvector \
  --version 2.8.4 \
  -f neuvector-values.yaml

# Shell output:
NAME: neuvector
LAST DEPLOYED: Tue Mar 11 10:44:00 2025
NAMESPACE: neuvector
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
From outside the cluster, the NeuVector URL is:
http://neuvector.jklug.work

# Delete NeuVector (optional cleanup)
helm delete neuvector -n neuvector
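
If the values file is changed later, the release can be updated in place instead of being reinstalled; assuming the same release name and namespace:

# Apply changed values to the existing release
helm upgrade neuvector neuvector/core \
  --namespace neuvector \
  --version 2.8.4 \
  -f neuvector-values.yaml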

Verify the Installation
#

It takes a few minutes until all pods are ready:

# List pods in "neuvector" namespace
kubectl get pods -n neuvector

# Shell output:
NAME                                        READY   STATUS      RESTARTS   AGE
neuvector-cert-upgrader-job-hcbsd           0/1     Completed   0          4m7s
neuvector-controller-pod-68d9f4c8df-2f6b9   1/1     Running     0          4m8s
neuvector-controller-pod-68d9f4c8df-8fcdn   1/1     Running     0          4m8s
neuvector-controller-pod-68d9f4c8df-cz62v   1/1     Running     0          4m8s
neuvector-enforcer-pod-47rv5                1/1     Running     0          4m8s
neuvector-enforcer-pod-5gjkr                1/1     Running     0          4m8s
neuvector-enforcer-pod-jwdz7                1/1     Running     0          4m8s
neuvector-enforcer-pod-pdmq8                1/1     Running     0          4m8s
neuvector-manager-pod-65599f5474-wpmp4      1/1     Running     0          4m8s
neuvector-scanner-pod-77bdc7cd5-fcmts       1/1     Running     0          4m8s
neuvector-scanner-pod-77bdc7cd5-lnggw       1/1     Running     0          4m8s
neuvector-scanner-pod-77bdc7cd5-xnwx4       1/1     Running     0          4m8s

# Verify the Ingress
kubectl get ingress -n neuvector

# Shell output:
NAME                      CLASS   HOSTS                  ADDRESS          PORTS     AGE
neuvector-webui-ingress   nginx   neuvector.jklug.work   192.168.30.200   80, 443   4m18s
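
The TLS endpoint can be tested before any DNS entry exists by pinning the hostname to the Ingress IP from the output above; a quick check with curl:

# Resolve the host to the Ingress IP locally and fetch the response headers
curl -I --resolve neuvector.jklug.work:443:192.168.30.200 https://neuvector.jklug.work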

DNS Entry
#

# Create a DNS entry for NeuVector that resolves to the Ingress controller IP
192.168.30.200 neuvector.jklug.work
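
For a quick test from a single client, the same mapping can go into the local hosts file instead of the DNS zone; a sketch for Linux or macOS:

# Add a local hosts-file entry
echo "192.168.30.200 neuvector.jklug.work" | sudo tee -a /etc/hosts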

NeuVector Web Interface
#

# Open the NeuVector web interface
https://neuvector.jklug.work

  • Default user: admin
  • Default password: admin
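
Instead of logging in with the default credentials and changing the password afterwards, the chart’s “bootstrapPassword” value (see the values file above) can set the initial admin password at install time; a variant of the install command, with a placeholder password:

# Set the initial admin password during installation
helm install neuvector neuvector/core \
  --namespace neuvector \
  --version 2.8.4 \
  -f neuvector-values.yaml \
  --set bootstrapPassword='MySecretPassword'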



Container Scanning
#

Example Container
#

# Create example namespace
kubectl create ns example-namespace

# example-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deployment
  namespace: example-namespace
  labels:
    app: example-deployment
spec:
  replicas: 5
  selector:
    matchLabels:
      app: example-container
  template:
    metadata:
      labels:
        app: example-container
    spec:
      containers:
      - name: my-container
        image: jueklu/container-2
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: my-container-service
  namespace: example-namespace
spec:
  type: NodePort
  selector:
    app: example-container
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
    nodePort: 30080

# Apply the configuration
kubectl apply -f example-deployment.yaml
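
To confirm the example workload is up before scanning it, check the pods and test the NodePort service; the node IP below is the ubuntu2 worker from the setup overview:

# Verify the example pods are running
kubectl get pods -n example-namespace

# Test the NodePort service (any node IP works)
curl http://192.168.30.11:30080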

Scan Container
#

  • Go to: “Assets” > “Containers”

  • Select a container and click “Scan”

  • Click “Vulnerabilities”



Links
#

# Official Documentation
https://open-docs.neuvector.com/

# Helm Chart
https://github.com/neuvector/neuvector-helm