The K3s cluster was set up in Part 1 and the NFS share in Part 2. This post covers building a Helm chart for Jellyfin and installing it. The media, config, and cache directories will all live on that NFS share; refer to Part 2 for the OMV export configuration.
Set Up the Helm Chart Templates
Create a new chart called jellyfin:
helm create jellyfin
cd jellyfin
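`helm create` scaffolds a set of default templates (deployment, service, HPA, service account, test hook) that this chart replaces with hand-written manifests. A sketch of pruning the scaffold, assuming the file names generated by recent Helm versions:

```shell
# Remove scaffolded templates that the hand-written manifests replace.
# -f keeps this idempotent if a file is already gone.
rm -f templates/hpa.yaml \
      templates/serviceaccount.yaml \
      templates/tests/test-connection.yaml
# templates/_helpers.tpl is kept: it defines jellyfin.name,
# jellyfin.fullname, jellyfin.chart and jellyfin.labels, which the
# manifests below rely on.
```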
Create Persistent Volumes and Claims
All the persistent volumes can be combined into one file, and the same with the persistent volume claims.
# persistentvolumes.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: {{ template "jellyfin.name" . }}-config-pv
  labels:
    {{- include "jellyfin.labels" . | nindent 4 }}
spec:
  storageClassName: nfs-class
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: {{ .Values.configpv.capacity }}
  accessModes:
    {{- range .Values.configpv.accessModes }}
    - {{ . | quote }}
    {{- end }}
  nfs:
    server: {{ .Values.configpv.server }}
    path: {{ .Values.configpv.hostPath }}
  mountOptions:
    - nfsvers=4.2
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: {{ template "jellyfin.name" . }}-cache-pv
  labels:
    {{- include "jellyfin.labels" . | nindent 4 }}
spec:
  storageClassName: nfs-class
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: {{ .Values.cachepv.capacity }}
  accessModes:
    {{- range .Values.cachepv.accessModes }}
    - {{ . | quote }}
    {{- end }}
  nfs:
    server: {{ .Values.cachepv.server }}
    path: {{ .Values.cachepv.hostPath }}
  mountOptions:
    - nfsvers=4.2
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: {{ template "jellyfin.name" . }}-movies-pv
  labels:
    {{- include "jellyfin.labels" . | nindent 4 }}
spec:
  storageClassName: nfs-class
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: {{ .Values.movpv.capacity }}
  accessModes:
    {{- range .Values.movpv.accessModes }}
    - {{ . | quote }}
    {{- end }}
  nfs:
    server: {{ .Values.movpv.server }}
    path: {{ .Values.movpv.hostPath }}
  mountOptions:
    - nfsvers=4.2
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: {{ template "jellyfin.name" . }}-tv-pv
  labels:
    {{- include "jellyfin.labels" . | nindent 4 }}
spec:
  storageClassName: nfs-class
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: {{ .Values.tvpv.capacity }}
  accessModes:
    {{- range .Values.tvpv.accessModes }}
    - {{ . | quote }}
    {{- end }}
  nfs:
    server: {{ .Values.tvpv.server }}
    path: {{ .Values.tvpv.hostPath }}
  mountOptions:
    - nfsvers=4.2
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: {{ template "jellyfin.name" . }}-pic-pv
  labels:
    {{- include "jellyfin.labels" . | nindent 4 }}
spec:
  storageClassName: nfs-class
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: {{ .Values.picpv.capacity }}
  accessModes:
    {{- range .Values.picpv.accessModes }}
    - {{ . | quote }}
    {{- end }}
  nfs:
    server: {{ .Values.picpv.server }}
    path: {{ .Values.picpv.hostPath }}
  mountOptions:
    - nfsvers=4.2
# persistentvolumeclaims.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ template "jellyfin.name" . }}-config-pvc
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "jellyfin.labels" . | nindent 4 }}
spec:
  storageClassName: nfs-class
  accessModes:
    {{- range .Values.configpv.accessModes }}
    - {{ . | quote }}
    {{- end }}
  resources:
    requests:
      storage: {{ .Values.configpv.capacity }}
  volumeName: {{ template "jellyfin.name" . }}-config-pv
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ template "jellyfin.name" . }}-cache-pvc
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "jellyfin.labels" . | nindent 4 }}
spec:
  storageClassName: nfs-class
  accessModes:
    {{- range .Values.cachepv.accessModes }}
    - {{ . | quote }}
    {{- end }}
  resources:
    requests:
      storage: {{ .Values.cachepv.capacity }}
  volumeName: {{ template "jellyfin.name" . }}-cache-pv
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ template "jellyfin.name" . }}-movies-pvc
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "jellyfin.labels" . | nindent 4 }}
spec:
  storageClassName: nfs-class
  accessModes:
    {{- range .Values.movpv.accessModes }}
    - {{ . | quote }}
    {{- end }}
  resources:
    requests:
      storage: {{ .Values.movpv.capacity }}
  volumeName: {{ template "jellyfin.name" . }}-movies-pv
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ template "jellyfin.name" . }}-tv-pvc
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "jellyfin.labels" . | nindent 4 }}
spec:
  storageClassName: nfs-class
  accessModes:
    {{- range .Values.tvpv.accessModes }}
    - {{ . | quote }}
    {{- end }}
  resources:
    requests:
      storage: {{ .Values.tvpv.capacity }}
  volumeName: {{ template "jellyfin.name" . }}-tv-pv
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ template "jellyfin.name" . }}-pic-pvc
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "jellyfin.labels" . | nindent 4 }}
spec:
  storageClassName: nfs-class
  accessModes:
    {{- range .Values.picpv.accessModes }}
    - {{ . | quote }}
    {{- end }}
  resources:
    requests:
      storage: {{ .Values.picpv.capacity }}
  volumeName: {{ template "jellyfin.name" . }}-pic-pv
Another possible file structure is to pair each PV with its corresponding PVC in a single file. Note that every PVC pins its PV explicitly via volumeName, so the identically sized config and cache volumes cannot bind to each other's claims.
There are three media directories (Movies, TV Shows and Photos), plus a config directory and a cache directory. Jellyfin uses the config and cache directories to store transcoding cache data and metadata about users and media.
Create the Deployment
The linuxserver/jellyfin image is used because it allows setting the Linux user and group IDs (PUID/PGID) that the container runs as when accessing the NFS share.
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "jellyfin.fullname" . }}
  labels:
    app: {{ template "jellyfin.name" . }}
    chart: {{ template "jellyfin.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  strategy:
    type: Recreate
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ template "jellyfin.name" . }}
      release: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ template "jellyfin.name" . }}
        release: {{ .Release.Name }}
    spec:
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      containers:
        - name: {{ .Chart.Name }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: "lscr.io/linuxserver/jellyfin:{{ .Chart.AppVersion }}"
          ports:
            - name: http
              containerPort: 8096
              protocol: TCP
            - name: auto-disc-1
              containerPort: 1900
              protocol: UDP
            - name: auto-disc-2
              containerPort: 7359
              protocol: UDP
          readinessProbe:
            httpGet:
              path: /
              port: http
            initialDelaySeconds: 30
          env:
            - name: PUID
              value: {{ .Values.puid | quote }}
            - name: PGID
              value: {{ .Values.pgid | quote }}
            - name: TZ
              value: Asia/Singapore
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
          volumeMounts:
            - name: jellyfin-config
              mountPath: /config
            - name: jellyfin-cache
              mountPath: /cache
            - name: jellyfin-movies
              mountPath: /movies
            - name: jellyfin-tv
              mountPath: /tv
            - name: jellyfin-pic
              mountPath: /pictures
            - name: vcsm
              mountPath: /dev/vcsm
            - name: vchiq
              mountPath: /dev/vchiq
      volumes:
        - name: jellyfin-config
          persistentVolumeClaim:
            claimName: {{ template "jellyfin.name" . }}-config-pvc
        - name: jellyfin-cache
          persistentVolumeClaim:
            claimName: {{ template "jellyfin.name" . }}-cache-pvc
        - name: jellyfin-movies
          persistentVolumeClaim:
            claimName: {{ template "jellyfin.name" . }}-movies-pvc
        - name: jellyfin-tv
          persistentVolumeClaim:
            claimName: {{ template "jellyfin.name" . }}-tv-pvc
        - name: jellyfin-pic
          persistentVolumeClaim:
            claimName: {{ template "jellyfin.name" . }}-pic-pvc
        - name: vcsm
          hostPath:
            path: /dev/vcsm
        - name: vchiq
          hostPath:
            path: /dev/vchiq
The /dev/vcsm and /dev/vchiq devices are mounted to enable hardware-accelerated video transcoding. Jellyfin defaults to software transcoding, but that hurts streaming performance and demands a much more powerful machine; on a Raspberry Pi, enabling hardware acceleration is the better option. Details on preparing the Raspberry Pi for hardware acceleration can be found in the Jellyfin documentation.
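Before relying on hardware transcoding, it is worth confirming that the device nodes actually exist on the node that will run the pod, and noting which groups own them; those numeric group IDs are what go into supplementalGroups in values.yaml. A quick check (device paths and the video/vchiq group names are Raspberry Pi OS specifics):

```shell
# List the VideoCore device nodes; the group column (e.g. video, vchiq)
# shows which supplemental groups the pod needs to join.
ls -l /dev/vchiq /dev/vcsm 2>/dev/null \
  || echo "VideoCore devices not present on this machine"
# Resolve those group names to numeric IDs for values.yaml.
getent group video vchiq || true
```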
Set Up Services and Ingress
Minimally, the web service has to be exposed for the ingress to route traffic to. On the internal network, the auto-discovery ports can also be exposed as services.
The auto-discovery services should be of type LoadBalancer, while the web UI only needs a ClusterIP behind the ingress.
# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: {{ include "jellyfin.fullname" . }}-web
  labels:
    {{- include "jellyfin.labels" . | nindent 4 }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app: {{ template "jellyfin.name" . }}
    release: {{ .Release.Name }}
---
apiVersion: v1
kind: Service
metadata:
  name: {{ include "jellyfin.fullname" . }}-auto-disc-1
  labels:
    {{- include "jellyfin.labels" . | nindent 4 }}
  annotations:
    metallb.universe.tf/address-pool: default
    metallb.universe.tf/allow-shared-ip: jellyfin-svc
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
    - port: 1900
      targetPort: auto-disc-1
      protocol: UDP
      name: auto-disc-1
  selector:
    app: {{ template "jellyfin.name" . }}
    release: {{ .Release.Name }}
---
apiVersion: v1
kind: Service
metadata:
  name: {{ include "jellyfin.fullname" . }}-auto-disc-2
  labels:
    {{- include "jellyfin.labels" . | nindent 4 }}
  annotations:
    metallb.universe.tf/address-pool: default
    metallb.universe.tf/allow-shared-ip: jellyfin-svc
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
    - port: 7359
      targetPort: auto-disc-2
      protocol: UDP
      name: auto-disc-2
  selector:
    app: {{ template "jellyfin.name" . }}
    release: {{ .Release.Name }}
# ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ template "jellyfin.fullname" . }}
  labels:
    app: {{ template "jellyfin.name" . }}
    chart: {{ template "jellyfin.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/affinity: cookie
    nginx.ingress.kubernetes.io/affinity-mode: persistent
    nginx.ingress.kubernetes.io/session-cookie-secure: "true"
spec:
  rules:
    - host: {{ .Values.ingress.host }}
      http:
        paths:
          - path: /
            pathType: ImplementationSpecific
            backend:
              service:
                name: {{ template "jellyfin.fullname" . }}-web
                port:
                  name: http
Set All the Values
# values.yaml
replicaCount: 1

puid: "1001"
pgid: "1001"

resources:
  requests:
    cpu: 1000m
    memory: 500Mi
  limits:
    cpu: 1500m
    memory: 800Mi

podSecurityContext:
  fsGroup: 1001
  supplementalGroups:
    - 44
    - 109

securityContext: {}

service:
  type: ClusterIP
  port: 8096

ingress:
  host: jellyfin.home

configpv:
  accessModes:
    - ReadWriteOnce
  server: "<NFS Server IP>"
  hostPath: /path/to/jellyfin/config
  capacity: 1Gi

cachepv:
  accessModes:
    - ReadWriteOnce
  server: "<NFS Server IP>"
  hostPath: /path/to/jellyfin/cache
  capacity: 1Gi

movpv:
  accessModes:
    - ReadOnlyMany
  server: "<NFS Server IP>"
  hostPath: /path/to/Movies
  capacity: 1024Gi

tvpv:
  accessModes:
    - ReadOnlyMany
  server: "<NFS Server IP>"
  hostPath: /path/to/TV
  capacity: 500Gi

picpv:
  accessModes:
    - ReadOnlyMany
  server: "<NFS Server IP>"
  hostPath: /path/to/Pictures
  capacity: 500Gi
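With the values in place, the chart can be rendered locally to catch templating mistakes before anything touches the cluster:

```shell
# Check the chart for template and schema errors.
helm lint ./jellyfin
# Render the manifests locally and inspect the output;
# nothing is applied to the cluster at this stage.
helm template jellyfin ./jellyfin
```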
Update Helmfile
Add the new chart to helmfile.yaml:
- name: jellyfin
  namespace: jellyfin
  chart: ./jellyfin
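A sketch of deploying the release and checking that everything came up, assuming helmfile and kubectl are already configured against the cluster:

```shell
# Apply only the jellyfin release from the helmfile
# (-l selects releases by the implicit name label).
helmfile -l name=jellyfin apply
# Confirm the PVCs bound and the pod, services and ingress are up.
kubectl -n jellyfin get pvc,pods,svc,ingress
```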