Kubernetes is an open-source container orchestration system for automating software deployment, scaling, and management. Google originally designed Kubernetes, but the Cloud Native Computing Foundation now maintains the project. It groups containers that make up an application into logical units for easy management and discovery.
K3s is a certified, lightweight Kubernetes distribution built for IoT and edge computing. It is highly available and designed for production workloads in unattended, resource-constrained, remote locations or inside IoT appliances. K3s is packaged as a single binary of less than 50MB, which reduces the dependencies and steps needed to install, run and auto-update a production Kubernetes cluster.
In this tutorial, we will set up a Kubernetes cluster with K3s on Rocky Linux 8.
Ensure that the server is up to date
Before proceeding, let us ensure that the server is updated with this command:
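On Rocky Linux 8, the packages are updated with dnf (a reboot is only needed if a new kernel was installed):

sudo dnf -y update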
Installing the cluster
On the main (master) node, install the k3s binary using this command:
curl -sfL https://get.k3s.io | sh -
To check if the service installed successfully, you can use:
$ sudo systemctl status k3s
● k3s.service - Lightweight Kubernetes
Loaded: loaded (/etc/systemd/system/k3s.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2022-02-02 08:20:25 UTC; 40min ago
Docs: https://k3s.io
Process: 67260 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
Process: 67253 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=0/SUCCESS)
Process: 67250 ExecStartPre=/bin/sh -xc ! /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service (code=exited, status=0/SUCCESS)
Main PID: 67265 (k3s-server)
Tasks: 90
Memory: 1.3G
CGroup: /system.slice/k3s.service
├─67265 /usr/local/bin/k3s server
├─67327 containerd
├─68051 /var/lib/rancher/k3s/data/8307e9b398a0ee686ec38e18339d1464f75158a8b948b059b564246f4af3a0a6/bin/containerd-shim-runc-v2 -namespace k8s.io -id 31a55568efdd72e0a783e2d32991386e1f476ad485614b675c>
├─68100 /var/lib/rancher/k3s/data/8307e9b398a0ee686ec38e18339d1464f75158a8b948b059b564246f4af3a0a6/bin/containerd-shim-runc-v2 -namespace k8s.io -id 6fe0538e3d19bc0dbe892156bb1e0f9c1541f1c93135c55ee6>
├─68120 /var/lib/rancher/k3s/data/8307e9b398a0ee686ec38e18339d1464f75158a8b948b059b564246f4af3a0a6/bin/containerd-shim-runc-v2 -namespace k8s.io -id d09e9e1cff2929ac67536e7e2ae62cb016b4fb1fcad8584d8a>
├─69219 /var/lib/rancher/k3s/data/8307e9b398a0ee686ec38e18339d1464f75158a8b948b059b564246f4af3a0a6/bin/containerd-shim-runc-v2 -namespace k8s.io -id 85044244f5e2eccf5056023e9618bc7529d6d76b7c2d66aa15>
└─69245 /var/lib/rancher/k3s/data/8307e9b398a0ee686ec38e18339d1464f75158a8b948b059b564246f4af3a0a6/bin/containerd-shim-runc-v2 -namespace k8s.io -id f61b5d46763bb98abf52509ed8180dcd734adcd5e9074c8f46>
Feb 02 08:20:56 cloudsrv.citizix.com k3s[67265]: I0202 08:20:56.651519 67265 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"content\" (UniqueName: \"kubernetes.io/configmap/1f0940c
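If the service is not in the active (running) state, the k3s logs can be followed with journalctl to troubleshoot (an optional step, not needed when the output looks like the above):

sudo journalctl -u k3s -f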
Once the installation is done, we can check whether the node is ready. Give it a few seconds, up to a minute:
$ k3s kubectl get node -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
cloudsrv.citizix.com Ready control-plane,master 40m v1.22.6+k3s1 10.2.40.214 <none> Rocky Linux 8.5 (Green Obsidian) 4.18.0-305.3.1.el8_4.x86_64 containerd://1.5.9-k3s1
Copy the kubeconfig so it can be used by your regular user:
mkdir ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
chmod 0644 ~/.kube/config
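With the kubeconfig in place, kubectl can be run as a regular user. A quick sanity check, explicitly pointing KUBECONFIG at the copied file so that the root-owned /etc/rancher/k3s/k3s.yaml is not used:

export KUBECONFIG=~/.kube/config
kubectl get nodes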
Set up K3s worker nodes
To get the token on the master, use this command:
$ sudo cat /var/lib/rancher/k3s/server/node-token
K1017181252d02d4a3aa4f5db3bf9b30ad83s548f95c533ac63efddd59b2d33325775::server:2508760cd6f4b6f09fd8c60fee329e94
To set up the K3s worker nodes, we need to run this command:
curl -sfL https://get.k3s.io | K3S_URL=https://<master_IP>:6443 K3S_TOKEN=<join_token> sh -s -
Substitute <master_IP> with the master node's IP address and <join_token> with the join token obtained from the master, e.g.:
curl -sfL https://get.k3s.io | K3S_URL=https://10.2.11.20:6443 K3S_TOKEN='K1017181252d02d4a3aa4f5db3bf9b30ad83s548f95c533ac63efddd59b2d33325775::server:2508760cd6f4b6f09fd8c60fee329e94' sh -s
You can also use environment variables:
export K3S_TOKEN="secret_edgecluster_token"
export K3S_URL=https://10.2.11.20:6443
The K3S_URL environment variable is a hint to the installer to configure the node as an agent connected to an existing server. Finally, run the same install script as in the previous step:
curl -sfL https://get.k3s.io | sh -
You can verify that the k3s-agent service is running on the worker nodes with:
sudo systemctl status k3s-agent
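If the agent does not register, make sure it can reach the K3s server on TCP port 6443. Assuming firewalld is enabled on the master (the default on Rocky Linux 8), the port can be opened with:

sudo firewall-cmd --permanent --add-port=6443/tcp
sudo firewall-cmd --reload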
Confirm that the new node has joined the cluster by running the following on the master node:
k3s kubectl get node -o wide
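Worker nodes joined this way show up with the ROLES column set to <none>. If you want the column to read something friendlier, you can optionally label them (the node name below is a placeholder):

kubectl label node <worker_node_name> node-role.kubernetes.io/worker=worker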
Deploy an application to the K3s cluster
We can now deploy a test application on the K3s cluster. Deploy Nginx using this command:
$ kubectl create deploy nginx --image nginx:latest
deployment.apps/nginx created
Check to confirm that the pod is running:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-55649fd747-j6582 1/1 Running 0 64s
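To confirm that Nginx actually serves traffic, you can optionally expose the deployment and curl it from a node; the node port shown by kubectl get svc will differ in your cluster:

kubectl expose deploy nginx --port 80 --type NodePort
kubectl get svc nginx
curl http://localhost:<node_port>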
Removing k3s
To remove k3s on the worker nodes, execute:
sudo /usr/local/bin/k3s-agent-uninstall.sh
sudo rm -rf /var/lib/rancher
To remove k3s on the master node, execute:
sudo /usr/local/bin/k3s-uninstall.sh
sudo rm -rf /var/lib/rancher
Conclusion
In this guide, we have learned how to install a K3s cluster on Rocky Linux 8 and add worker nodes to it.