How to Configure NFS as Kubernetes Persistent Volume Storage

In this guide, we will learn how to set up dynamic NFS provisioning in a Kubernetes (k8s) cluster.

Network File System (NFS) is a networking protocol for distributed file sharing that enables computers to access or share files over a network. It is an easy way to provide dedicated file storage from which multiple users and heterogeneous client devices can retrieve data held on centralized disks.

Dynamic NFS storage provisioning in Kubernetes allows you to automatically provision and manage NFS (Network File System) volumes for your Kubernetes applications on-demand. It enables the creation of persistent volumes (PVs) and persistent volume claims (PVCs) without requiring manual intervention or pre-provisioned storage.

The NFS provisioner is responsible for dynamically creating PVs and binding them to PVCs. It interacts with the NFS server to create directories or volumes for each PVC.
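
In practice, the provisioner is tied to a StorageClass: every PVC that references that class gets its own subdirectory on the NFS export. The Helm chart installed later in this guide creates such a StorageClass (named nfs-client) for you, so the sketch below is only to illustrate roughly what it looks like:

# Rough sketch of the StorageClass the provisioner works with.
# The Helm chart used later in this guide creates an equivalent one automatically,
# so you normally do not need to write this by hand.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client
provisioner: cluster.local/nfs-subdir-external-provisioner   # must match the deployed provisioner name
parameters:
  archiveOnDelete: "true"   # assumption: archive (rather than delete) data when the PVC is removed
reclaimPolicy: Delete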

Setting Up the NFS Server

I am going to set up an NFS server on an Ubuntu 22.04 server, but this guide should work on any Debian-based system.

First install the NFS server:

sudo apt update
sudo apt install nfs-kernel-server -y

Create the following directory and share it over NFS:

sudo mkdir /mnt/k8s-dynamic-store
sudo chown -R nobody:nogroup /mnt/k8s-dynamic-store
sudo chmod 2770 /mnt/k8s-dynamic-store

Add the following entry to the /etc/exports file:

$ sudo vim /etc/exports
/mnt/k8s-dynamic-store 10.20.1.0/24(rw,sync,no_subtree_check)

Save and close the file.

Note: Don't forget to change the network in the exports file to suit your deployment.
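
For example, if your cluster nodes sit on a different subnet, or you only want to export to specific hosts, the entry might look like this (illustrative addresses):

/mnt/k8s-dynamic-store 192.168.10.0/24(rw,sync,no_subtree_check)
/mnt/k8s-dynamic-store 10.20.1.5(rw,sync,no_subtree_check)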

To make the above changes take effect, run:

sudo exportfs -a
sudo systemctl restart nfs-kernel-server
sudo systemctl status nfs-kernel-server
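
You can optionally verify that the share is being exported with the expected options:

# List the currently exported directories and their export options
sudo exportfs -v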

On each of the worker nodes, install the nfs-common package using the following apt command.

sudo apt install nfs-common -y
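
Optionally, before moving on, confirm from a worker node that the export is reachable. A quick check, assuming the NFS server address 10.20.1.4 used later in this guide:

# Ask the NFS server for its export list (showmount ships with nfs-common)
showmount -e 10.20.1.4

# Optional manual test mount, then clean up
sudo mkdir -p /tmp/nfs-test
sudo mount -t nfs 10.20.1.4:/mnt/k8s-dynamic-store /tmp/nfs-test
sudo umount /tmp/nfs-test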

Set Up the Kubernetes NFS Subdir External Provisioner

NFS subdir external provisioner is an automatic provisioner that uses your existing, already configured NFS server to support dynamic provisioning of Kubernetes Persistent Volumes via Persistent Volume Claims. Persistent volumes are provisioned as ${namespace}-${pvcName}-${pvName}.

To install the NFS subdir external provisioner, first install Helm using the following commands:

curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
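
You can confirm Helm is installed correctly:

helm version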

  You must already have an NFS Server.  

Add the Helm repo by running the following command:

helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
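
Then refresh the local chart index so the latest chart version is picked up:

helm repo update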

Deploy the provisioner using the following helm command:

helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --create-namespace \
    -n nfs-provisioning \
    --set nfs.server=10.20.1.4 \
    --set nfs.path=/mnt/k8s-dynamic-store

The command above creates a namespace called nfs-provisioning and installs the NFS provisioner pod/deployment, a storage class named nfs-client, and the required RBAC.

You can confirm that it worked as expected:

kubectl get all -n nfs-provisioning
kubectl get sc -n nfs-provisioning

This is my output:

➜ kubectl get all -n nfs-provisioning

NAME                                                   READY   STATUS              RESTARTS   AGE
pod/nfs-subdir-external-provisioner-6b8fbdc787-bws5h   1/1     Running             0          102s

NAME                                              READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nfs-subdir-external-provisioner   1/1     1            1           44m

NAME                                                         DESIRED   CURRENT   READY   AGE
replicaset.apps/nfs-subdir-external-provisioner-6b8fbdc787   1         1         1       103s

And the storage classes:

➜ kubectl get sc -n nfs-provisioning
NAME                   PROVISIONER                                     RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-path (default)   rancher.io/local-path                           Delete          WaitForFirstConsumer   false                  4d3h
nfs-client             cluster.local/nfs-subdir-external-provisioner   Delete          Immediate              true                   45m
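
Note that local-path is still the default storage class in this cluster. If you want PVCs that don't specify a storageClassName to use NFS, you can optionally mark nfs-client as the default:

# Optional: make nfs-client the default StorageClass
kubectl patch storageclass nfs-client -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'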

Create Persistent Volume Claims (PVCs)

To test our setup, we will create a PVC to request storage for a pod. The PVC will request a specific amount of storage from the nfs-client StorageClass.

$ vim test-pvc.yml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  namespace: nfs-provisioning
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Mi

Save and close.

Run the following kubectl command to create the PVC from the YAML file above:

kubectl create -f test-pvc.yml

Verify whether the PVC and PV were created:

kubectl get pv,pvc -n nfs-provisioning

This is my output:

➜ kubectl get pv,pvc -n nfs-provisioning

NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                         STORAGECLASS   REASON   AGE
persistentvolume/pvc-d4e1d20d-ac8a-4f66-8fec-ca9537ac4ed3   5Mi        RWX            Delete           Bound    nfs-provisioning/test-claim   nfs-client              55s

NAME                               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/test-claim   Bound    pvc-d4e1d20d-ac8a-4f66-8fec-ca9537ac4ed3   5Mi        RWX            nfs-client     55s

The output above shows that the PV and PVC were created successfully.
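
Behind the scenes, the provisioner creates a matching subdirectory on the NFS export, following the ${namespace}-${pvcName}-${pvName} naming convention mentioned earlier. You can confirm this on the NFS server:

# On the NFS server: list the export directory
ls /mnt/k8s-dynamic-store
# Expected output: a directory similar to
# nfs-provisioning-test-claim-pvc-d4e1d20d-ac8a-4f66-8fec-ca9537ac4ed3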

Test and Verify Dynamic NFS Provisioning

To test and verify dynamic NFS provisioning, spin up a test pod using the following YAML file:

$ vim test-pod.yml

kind: Pod
apiVersion: v1
metadata:
  name: test-pod
  namespace: nfs-provisioning
spec:
  containers:
  - name: test-pod
    image: busybox:latest
    command:
      - "/bin/sh"
    args:
      - "-c"
      - "touch /mnt/SUCCESS && sleep 600"
    volumeMounts:
      - name: nfs-pvc
        mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim

Deploy the pod using the following kubectl command:

kubectl create -f test-pod.yml

Verify the status of test-pod:

➜ kubectl get pods -n nfs-provisioning

NAME                                               READY   STATUS    RESTARTS   AGE
test-pod                                           1/1     Running   0          113s

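If the pod stays stuck in ContainerCreating, the NFS mount is usually failing on the node; the pod events and the provisioner logs are the first places to look, for example:

# Inspect mount-related events for the pod
kubectl describe pod test-pod -n nfs-provisioning

# Check the provisioner's own logs
kubectl logs -n nfs-provisioning deploy/nfs-subdir-external-provisioner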

Log in to the pod and verify that the NFS volume is mounted:

➜ kubectl exec -it test-pod -n nfs-provisioning -- /bin/sh

/ # df -h
Filesystem                Size      Used Available Use% Mounted on
overlay                  29.3G      3.3G     24.7G  12% /
tmpfs                    64.0M         0     64.0M   0% /dev
10.20.1.4:/mnt/k8s-dynamic-store/nfs-provisioning-test-claim-pvc-d4e1d20d-ac8a-4f66-8fec-ca9537ac4ed3
                         19.5G      2.0G     16.6G  11% /mnt
/dev/sda1                29.3G      3.3G     24.7G  12% /etc/hosts
/dev/sda1                29.3G      3.3G     24.7G  12% /dev/termination-log
/dev/sda1                29.3G      3.3G     24.7G  12% /etc/hostname
/dev/sda1                29.3G      3.3G     24.7G  12% /etc/resolv.conf
shm                      64.0M         0     64.0M   0% /dev/shm
tmpfs                     3.8G     12.0K      3.8G   0% /var/run/secrets/kubernetes.io/serviceaccount
tmpfs                     1.9G         0      1.9G   0% /proc/acpi
tmpfs                    64.0M         0     64.0M   0% /proc/kcore
tmpfs                    64.0M         0     64.0M   0% /proc/keys
tmpfs                    64.0M         0     64.0M   0% /proc/timer_list
tmpfs                     1.9G         0      1.9G   0% /sys/firmware
/ # cd /mnt/ && ls
SUCCESS
/mnt #

Great, the output above from the pod confirms that the dynamically provisioned NFS volume is mounted and accessible.

Finally, let's delete the pod and PVC and check whether the PV is deleted automatically.

kubectl delete -f test-pod.yml
kubectl delete -f test-pvc.yml

Then confirm that they are gone:

➜ kubectl get pv,pvc -n  nfs-provisioning
No resources found
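
The PV is removed because the nfs-client storage class uses the Delete reclaim policy (visible in the kubectl get sc output earlier). What happens to the backing directory on the NFS server is controlled by the chart's archiveOnDelete setting; as a sketch (assuming a recent chart version that exposes storageClass.archiveOnDelete), you could set it explicitly:

# Sketch: control what happens to the data directory when a PVC is deleted.
# false = delete the directory, true = keep it renamed as archived-<dir>.
helm upgrade nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    -n nfs-provisioning \
    --set nfs.server=10.20.1.4 \
    --set nfs.path=/mnt/k8s-dynamic-store \
    --set storageClass.archiveOnDelete=false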

Installing Multiple Provisioners

It is possible to install multiple NFS provisioners in your cluster to get access to multiple NFS servers and/or multiple exports from a single NFS server. Each provisioner must have a different storageClass.provisionerName and a different storageClass.name. For example:

helm install -n nfs-provisioning second-nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --set nfs.server=x.x.x.x \
    --set nfs.path=/second/exported/path \
    --set storageClass.name=second-nfs-client \
    --set storageClass.provisionerName=k8s-sigs.io/second-nfs-subdir-external
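
A PVC that should use the second provisioner then simply references the new storage class name, for example (the claim name below is just for illustration):

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: second-claim          # hypothetical name for illustration
spec:
  storageClassName: second-nfs-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Mi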

Conclusion

In this guide, we set up an NFS server and deployed an automatic provisioner for it on Kubernetes.
