
How to Set Up a Kubernetes Cluster with K3s on Rocky Linux 8


Kubernetes is an open-source container orchestration system for automating software deployment, scaling, and management. Google originally designed Kubernetes, but the Cloud Native Computing Foundation now maintains the project. It groups containers that make up an application into logical units for easy management and discovery. 

K3s is a certified, lightweight Kubernetes distribution built for IoT and edge computing. It is highly available and designed for production workloads in unattended, resource-constrained, remote locations or inside IoT appliances. K3s is packaged as a single binary of less than 50 MB, which reduces the dependencies and steps needed to install, run and auto-update a production Kubernetes cluster.

In this tutorial, we will set up a Kubernetes cluster with K3s on Rocky Linux 8.


1. Ensure that the server is up to date

Before proceeding, let us ensure that the server is updated with this command:

sudo dnf update -y
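Rocky Linux 8 ships with firewalld enabled by default. If you keep it running, the ports K3s relies on need to be reachable between the nodes; the ports below follow the K3s documentation (6443/tcp for the API server, 8472/udp for flannel VXLAN, 10250/tcp for the kubelet), but adjust them to your own setup:

sudo firewall-cmd --permanent --add-port=6443/tcp
sudo firewall-cmd --permanent --add-port=8472/udp
sudo firewall-cmd --permanent --add-port=10250/tcp
sudo firewall-cmd --reload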

2. Install K3s on the master node

On the master node, install the k3s binary using this command:

curl -sfL https://get.k3s.io | sh -
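The installer also accepts environment variables that change how K3s is set up. For example, INSTALL_K3S_EXEC passes extra flags to the k3s server; the optional example below makes the generated kubeconfig world-readable, which is convenient on a single-user lab machine but not recommended on shared hosts:

curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--write-kubeconfig-mode 644" sh -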

To check if the service installed successfully, you can use:

# systemctl status k3s
● k3s.service - Lightweight Kubernetes
   Loaded: loaded (/etc/systemd/system/k3s.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2022-02-02 08:20:25 UTC; 40min ago
     Docs: https://k3s.io
  Process: 67260 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
  Process: 67253 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=0/SUCCESS)
  Process: 67250 ExecStartPre=/bin/sh -xc ! /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service (code=exited, status=0/SUCCESS)
 Main PID: 67265 (k3s-server)
    Tasks: 90
   Memory: 1.3G
   CGroup: /system.slice/k3s.service
           ├─67265 /usr/local/bin/k3s server
           ├─67327 containerd
           ├─68051 /var/lib/rancher/k3s/data/8307e9b398a0ee686ec38e18339d1464f75158a8b948b059b564246f4af3a0a6/bin/containerd-shim-runc-v2 -namespace k8s.io -id 31a55568efdd72e0a783e2d32991386e1f476ad485614b675c>
           ├─68100 /var/lib/rancher/k3s/data/8307e9b398a0ee686ec38e18339d1464f75158a8b948b059b564246f4af3a0a6/bin/containerd-shim-runc-v2 -namespace k8s.io -id 6fe0538e3d19bc0dbe892156bb1e0f9c1541f1c93135c55ee6>
           ├─68120 /var/lib/rancher/k3s/data/8307e9b398a0ee686ec38e18339d1464f75158a8b948b059b564246f4af3a0a6/bin/containerd-shim-runc-v2 -namespace k8s.io -id d09e9e1cff2929ac67536e7e2ae62cb016b4fb1fcad8584d8a>
           ├─69219 /var/lib/rancher/k3s/data/8307e9b398a0ee686ec38e18339d1464f75158a8b948b059b564246f4af3a0a6/bin/containerd-shim-runc-v2 -namespace k8s.io -id 85044244f5e2eccf5056023e9618bc7529d6d76b7c2d66aa15>
           └─69245 /var/lib/rancher/k3s/data/8307e9b398a0ee686ec38e18339d1464f75158a8b948b059b564246f4af3a0a6/bin/containerd-shim-runc-v2 -namespace k8s.io -id f61b5d46763bb98abf52509ed8180dcd734adcd5e9074c8f46>

Feb 02 08:20:56 cloudsrv.citizix.com k3s[67265]: I0202 08:20:56.651519   67265 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"content\" (UniqueName: \"kubernetes.io/configmap/1f0940c

Once the installation is done, we can check whether the node is ready. Give it a few seconds, up to a minute:

# k3s kubectl get node -o wide
NAME                   STATUS   ROLES                  AGE   VERSION        INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                           KERNEL-VERSION                CONTAINER-RUNTIME
cloudsrv.citizix.com   Ready    control-plane,master   40m   v1.22.6+k3s1   10.2.40.214   <none>        Rocky Linux 8.5 (Green Obsidian)   4.18.0-305.3.1.el8_4.x86_64   containerd://1.5.9-k3s1
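Besides the node status, you can also confirm that the bundled system pods have started; by default K3s deploys coredns, local-path-provisioner, metrics-server and traefik in the kube-system namespace:

k3s kubectl get pods -n kube-system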

Copy the kubeconfig so that kubectl can be used as a regular user:

mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown $(id -u):$(id -g) ~/.kube/config
chmod 0600 ~/.kube/config
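With the kubeconfig copied, the bundled kubectl can be used without the k3s prefix. A quick check that the config is picked up (kubectl reads ~/.kube/config by default):

kubectl get nodes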

3. Set up K3S worker nodes

To get the token on the master, use this command:

# sudo cat /var/lib/rancher/k3s/server/node-token
K1017181252d02d4a3aa4f5db3bf9b30ad83s548f95c533ac63efddd59b2d33325775::server:2508760cd6f4b6f09fd8c60fee329e94

To set up K3S worker nodes, we need to run this command:

curl -sfL https://get.k3s.io | K3S_URL=https://<master_IP>:6443 K3S_TOKEN=<join_token> sh -

Substitute master_IP with the master node's IP address and join_token with the join token from the master, e.g.

curl -sfL https://get.k3s.io | K3S_URL=https://10.2.11.20:6443 K3S_TOKEN='K1017181252d02d4a3aa4f5db3bf9b30ad83s548f95c533ac63efddd59b2d33325775::server:2508760cd6f4b6f09fd8c60fee329e94' sh -

You can also use environment variables:

export K3S_TOKEN="secret_edgecluster_token"
export K3S_URL=https://10.2.11.20:6443

The K3S_URL environment variable tells the installer to configure the node as an agent that connects to an existing server.

Finally, run the same script as we did in the previous step.

curl -sfL https://get.k3s.io | sh -

You can verify that the k3s-agent service is running on the worker node with:

sudo systemctl status k3s-agent

Confirm that the new node has joined the cluster by running this command on the master:

k3s kubectl get node -o wide
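Newly joined agents show <none> under ROLES. If you would like them to display a worker role, you can label them from the master; the label value below is just a common convention, and <worker-node-name> is a placeholder for the agent's node name:

kubectl label node <worker-node-name> node-role.kubernetes.io/worker=worker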

4. Deploy an application to the K3S cluster

We can now deploy a test application on the K3s cluster. Deploy Nginx using this command:

# kubectl create deploy nginx --image nginx:latest
deployment.apps/nginx created

Check to confirm that the pod is running:

# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx-55649fd747-j6582   1/1     Running   0          64s
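To reach Nginx from outside the cluster, you can expose the deployment. A minimal sketch using a NodePort service (the service type is an assumption; K3s also bundles Traefik if you prefer an Ingress):

kubectl expose deploy nginx --port 80 --type NodePort
kubectl get svc nginx

The port listed under PORT(S) (in the 30000-32767 range) can then be opened against any node's IP.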

5. Bonus – Removing k3s

To remove k3s on the worker nodes, execute:

sudo /usr/local/bin/k3s-agent-uninstall.sh
sudo rm -rf /var/lib/rancher

To remove k3s on the master node, execute:

sudo /usr/local/bin/k3s-uninstall.sh
sudo rm -rf /var/lib/rancher

Conclusion

In this guide, we learned how to install a K3s cluster, add worker nodes, and deploy a test application.

