The K3s cluster was set up in part 1. This post covers building a Helm chart for PiHole and installing it.

Setup NFS Share

In OMV, set up the NFS share and create the shared directory. In this case the entire WDBlue directory is being shared.

To avoid permissions issues on the share, add the following NFS options to the share's existing options.

sync,no_subtree_check,no_root_squash,no_all_squash,insecure,anonuid=1000,anongid=100
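
For reference, the final export entry OMV generates in /etc/exports should look roughly like the line below. The export path and client subnet here are placeholders for this example, not values from the actual setup.

# /etc/exports -- example only; use your own share path and network
/export/WDBlue 192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash,no_all_squash,insecure,anonuid=1000,anongid=100)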

Setup Helm Chart Templates

In the home-k3s repository, create a new Helm chart.

helm create pihole

cd pihole

In the pihole directory Helm will create some default template files. Delete them all except NOTES.txt and _helpers.tpl; all the template files below will be added to the templates/ directory.
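
As a rough sketch (the exact files helm create generates depend on the Helm version), clearing out the boilerplate looks something like this:

# keep NOTES.txt and _helpers.tpl, delete the rest of the generated templates
rm templates/deployment.yaml templates/service.yaml templates/serviceaccount.yaml
rm templates/ingress.yaml templates/hpa.yaml
rm -r templates/tests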

Setup Storage Class

Create an nfs-class StorageClass to use.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: nfs-class
provisioner: kubernetes.io/nfs
reclaimPolicy: Retain
allowVolumeExpansion: true

Setup Persistent Volume

Create a PersistentVolume that points at the NFS share.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: {{ template "pihole.name" . }}-pv
  labels:
    {{- include "pihole.labels" . | nindent 4 }}
spec:
  storageClassName: nfs-class
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: {{ .Values.capacity }}
  accessModes:
    {{- range .Values.accessModes }}
      - {{ . | quote }}
    {{- end }}
  nfs:
    server: {{ .Values.pv.server }}
    path: {{ .Values.pv.hostPath }}
  mountOptions:
    - nfsvers=4.2

Setup PersistentVolumeClaim

Create the PVC that will bind to the PV for the deployment to use.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ template "pihole.name" . }}-pvc
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "pihole.labels" . | nindent 4 }}
spec:
  storageClassName: nfs-class
  accessModes:
    {{- range .Values.accessModes }}
      - {{ . | quote }}
    {{- end }}
  resources:
    requests:
      storage: {{ .Values.capacity }}
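
Because the PV is created statically, the PVC binds to it by matching the storage class, access modes and requested capacity. If you want to pin the claim to that exact volume, Kubernetes also lets you name it explicitly; this is an optional addition to the PVC spec above, not part of the original chart.

  # optional: bind this claim explicitly to the PV created earlier
  volumeName: {{ template "pihole.name" . }}-pv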

Setup Admin Password Secret

Create another template called secret.yaml with the following content.

apiVersion: v1
kind: Secret
metadata:
  name: {{ template "pihole.name" . }}-admin-secret
data:
  password: {{ .Values.adminPassword | b64enc }}
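
Once the chart is eventually installed (release name and namespace both assumed to be pihole here), the rendered password can be checked by decoding the secret:

kubectl get secret pihole-admin-secret -n pihole -o jsonpath='{.data.password}' | base64 -d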

Setup the Deployment

The deployment will create the pod that will run the actual PiHole container.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ template "pihole.fullname" . }}
  labels:
    app: {{ template "pihole.name" . }}
    chart: {{ template "pihole.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  replicas: {{ .Values.replicaCount }}
  strategy:
    type: {{ .Values.strategyType }}
    {{- if eq .Values.strategyType "RollingUpdate" }}
    rollingUpdate:
      maxSurge: {{ .Values.maxSurge }}
      maxUnavailable: {{ .Values.maxUnavailable }}
    {{- end }}
  selector:
    matchLabels:
      app: {{ template "pihole.name" . }}
      release: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ template "pihole.name" . }}
        release: {{ .Release.Name }}
    spec:
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - {{ template "pihole.name" . }}
              topologyKey: "kubernetes.io/hostname"
      containers:
      - name: {{ .Chart.Name }}
        env:
        - name: TZ
          value: {{ .Values.TZ }}
        - name: WEBPASSWORD
          valueFrom:
            secretKeyRef:
              key: password
              name: {{ template "pihole.name" . }}-admin-secret
        - name: DNS1
          value: "1.1.1.1"
        - name: DNS2
          value: "1.0.0.1"
        - name: PUID
          value: {{ .Values.puid | quote }}
        - name: PGID
          value: {{ .Values.pgid | quote }}
        image: pihole/pihole:{{ .Values.image.tag | default .Chart.AppVersion }}
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        securityContext:
          {{- toYaml .Values.securityContext | nindent 10 }}
        ports:
        - containerPort: 80
          name: http
          protocol: TCP
        - containerPort: 53
          name: dns
          protocol: TCP
        - containerPort: 53
          name: dns-udp
          protocol: UDP
        - containerPort: 67
          name: client-udp
          protocol: UDP
        lifecycle:
          postStart:
            exec:
              command: ["/bin/sh", "-c", "sleep 30 && chown pihole:www-data /etc/pihole/gravity.db"]
        volumeMounts:
        - name: pihole-pv
          mountPath: /etc/pihole
        readinessProbe:
          httpGet:
            path: /admin/index.php
            port: http
          initialDelaySeconds: 60
          failureThreshold: 3
          timeoutSeconds: 5
      volumes:
      - name: pihole-pv
        persistentVolumeClaim:
          claimName: {{ template "pihole.name" . }}-pvc

A few things to note for the deployment. It mounts the PiHole file system at /etc/pihole; this lives on the NFS share, so it persists through container restarts.

The postStart lifecycle hook ensures that the gravity database ends up owned by the pihole user and the www-data group rather than root.

The pod anti-affinity settings prefer scheduling replicas on different nodes; however, the current values.yaml only calls for one pod.

The PiHole image is an S6-based image, so it accepts the PUID and PGID environment variables, which are used to keep the process from running as root.
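
To sanity-check the permissions once the pod is running (deployment and namespace names assumed to be pihole here), the database file can be inspected from inside the container:

# confirm the postStart chown worked and the NFS mount is writable
kubectl exec -n pihole deploy/pihole -- ls -l /etc/pihole/gravity.db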

Setup The Services

There are three services needed: one each for the web UI, DNS over TCP, and DNS over UDP.

apiVersion: v1
kind: Service
metadata:
  name: {{ template "pihole.fullname" . }}-dns-tcp
  labels:
    app: {{ template "pihole.name" . }}
    chart: {{ template "pihole.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
  annotations:
    metallb.universe.tf/address-pool: default
    metallb.universe.tf/allow-shared-ip: pihole-svc
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
    - port: 53
      targetPort: dns
      protocol: TCP
      name: dns
  selector:
    app: {{ template "pihole.name" . }}
    release: {{ .Release.Name }}

service-dns-tcp.yaml

apiVersion: v1
kind: Service
metadata:
  name: {{ template "pihole.fullname" . }}-dns-udp
  labels:
    app: {{ template "pihole.name" . }}
    chart: {{ template "pihole.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
  annotations:
    metallb.universe.tf/address-pool: default
    metallb.universe.tf/allow-shared-ip: pihole-svc
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
    - port: 53
      targetPort: dns-udp
      protocol: UDP
      name: dns-udp
  selector:
    app: {{ template "pihole.name" . }}
    release: {{ .Release.Name }}

service-dns-udp.yaml

apiVersion: v1
kind: Service
metadata:
  name: {{ template "pihole.fullname" . }}-web
  labels:
    app: {{ template "pihole.name" . }}
    chart: {{ template "pihole.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
  annotations:
    metallb.universe.tf/address-pool: default
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app: {{ template "pihole.name" . }}
    release: {{ .Release.Name }}

service-web.yaml

The UDP and TCP services are almost identical except for the protocol. They also carry the metallb.universe.tf/allow-shared-ip: pihole-svc annotation, which allows them to share the same IP address.

The web UI is exposed through the Nginx ingress controller, which routes traffic to the web service, so that service only needs to be a plain ClusterIP.

As this is a fully local setup, HTTPS is not required.
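
Once everything is deployed, both DNS services should report the same external IP from MetalLB, and that IP can be queried directly. A quick check might look like the following, with the namespace assumed to be pihole and the IP a placeholder:

kubectl get svc -n pihole          # both DNS services should show the same EXTERNAL-IP
dig @192.168.1.240 pi-hole.net     # replace with the external IP assigned by MetalLB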

Setup the Ingress

This will allow us to access the web UI from the local network without needing to enter the Kubernetes network.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ template "pihole.fullname" . }}
  labels:
    app: {{ template "pihole.name" . }}
    chart: {{ template "pihole.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: pi.hole
    http:
      paths:
      - path: /
        pathType: ImplementationSpecific
        backend:
          service:
            name: {{ template "pihole.fullname" . }}-web
            port: 
              name: http
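
Until local DNS actually resolves pi.hole, the ingress can be tested by pointing curl at the Nginx ingress controller's address and passing the Host header explicitly; the IP below is a placeholder:

curl -H "Host: pi.hole" http://192.168.1.241/admin/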

Set all the values

In the values.yaml file, add all the relevant values.

# values.yaml

TZ: Asia/Singapore

capacity: 1.5Gi
accessModes:
  - ReadWriteMany
pv:
  server: "<NFS Server IP>"
  hostPath: /path/to/pihole
adminPassword: admin123

strategyType: RollingUpdate
replicaCount: 1
maxSurge: 1
maxUnavailable: 0

image:
  pullPolicy: IfNotPresent

securityContext:
  capabilities:
    add:
      - NET_ADMIN
      - CHOWN

podSecurityContext:
  fsGroup: 1003
  supplementalGroups:
  - 999

puid: "1003"
pgid: "1003"

Update Helmfile

Add the new chart to helmfile.yaml.

- name: pihole
  namespace: pihole
  chart: ./pihole
  wait: true 
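
With the release entry in place, deploying and verifying looks roughly like this, run from the repository root:

helmfile apply                # or helmfile sync
kubectl get pods -n pihole    # the pihole pod should become Ready after the probe delay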

References