How to Set Up a Kubernetes Cluster on Rocky Linux/Alma Linux 9 using kubeadm

Kubeadm is a tool built to provide best-practice “fast paths” for creating Kubernetes clusters. It performs the actions necessary to get a minimum viable, secure cluster up and running in a user friendly way. Kubeadm’s scope is limited to the local node filesystem and the Kubernetes API, and it is intended to be a composable building block of higher level tools.

In this guide, we will learn how to set up a Kubernetes cluster on a Rocky Linux 9 server.

Common kubeadm commands

  1. kubeadm init to bootstrap the initial Kubernetes control-plane node.
  2. kubeadm join to bootstrap a Kubernetes worker node or an additional control plane node, and join it to the cluster.
  3. kubeadm upgrade to upgrade a Kubernetes cluster to a newer version.
  4. kubeadm reset to revert any changes made to this host by kubeadm init or kubeadm join.

Prerequisites

To follow along, ensure you have:

  • An updated Rocky Linux 9, Alma Linux 9 or other RHEL 9 based server
  • 2 GB or more of RAM per machine
  • 2 CPUs or more.
  • Full network connectivity between all machines in the cluster
  • Unique hostname, MAC address, and product_uuid for every node.
  • Swap disabled. You MUST disable swap in order for the kubelet to work properly.
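
You can verify that the MAC address and product_uuid are unique on every node with:

ip link
sudo cat /sys/class/dmi/id/product_uuid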

Step 1 – Ensuring that the server is up to date

Let us start by ensuring that the server packages are updated. Use this command to achieve this:

sudo dnf -y update

Set hostname

sudo hostnamectl set-hostname rockysrv.citizix.com
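
If your nodes cannot resolve each other over DNS, you may also want to map the hostname in /etc/hosts. Here 10.2.40.85 is the example control plane IP used later in this guide; substitute your own:

echo "10.2.40.85 rockysrv.citizix.com" | sudo tee -a /etc/hosts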

Install some common packages

sudo dnf install -y git curl vim iproute-tc

Step 2 – Disable SELinux

Let us disable SELinux

sudo sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

Apply changes so you don’t have to reboot:

sudo setenforce 0
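
Confirm the current mode; this should now print Permissive (it will report Disabled after the next reboot):

getenforce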

Step 3 – Disable swap

First remove any swap entries from /etc/fstab so swap stays disabled across reboots:

sudo sed -i '/swap/d' /etc/fstab

Apply changes

sudo swapoff -a
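
Verify that swap is off; the Swap line should show 0 across the board:

free -h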

Step 4 – Letting iptables see bridged traffic

Make sure that the br_netfilter module is loaded. This can be done by running lsmod | grep br_netfilter. To load it explicitly call sudo modprobe br_netfilter.

The overlay module is needed for overlayfs, and br_netfilter is needed for iptables to correctly see bridged traffic. Load them on boot by creating this file:

cat << EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

As a requirement for your Linux Node’s iptables to correctly see bridged traffic, you should ensure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl config, e.g.

cat << EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

Apply settings

sudo modprobe overlay
sudo modprobe br_netfilter
sudo sysctl --system
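
Verify that the settings were applied:

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward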

Step 5 – Install Containerd

Install dependencies and add repo

sudo dnf install dnf-utils -y
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

Next install containerd

sudo dnf install -y containerd.io

For containerd, the CRI socket is /run/containerd/containerd.sock by default.

Configure containerd

sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml

Configuring the systemd cgroup driver

To use the systemd cgroup driver in /etc/containerd/config.toml with runc, set

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  ...
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
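
If you generated the default config as above, you can make this edit with a one-liner (this assumes SystemdCgroup = false appears only in the runc options section, as it does in the stock config):

sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml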

If you apply this change, make sure to restart containerd:

sudo systemctl restart containerd

Step 6 – Start and enable containerd

Start

sudo systemctl start containerd

Confirm status

$ sudo systemctl status containerd
● containerd.service - containerd container runtime
     Loaded: loaded (/usr/lib/systemd/system/containerd.service; disabled; vendor preset: disabled)
     Active: active (running) since Sun 2022-08-14 12:34:03 UTC; 12s ago
       Docs: https://containerd.io
    Process: 387999 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
   Main PID: 388000 (containerd)
      Tasks: 9
     Memory: 21.2M
        CPU: 99ms
     CGroup: /system.slice/containerd.service
             └─388000 /usr/bin/containerd

Aug 14 12:34:03 rockysrv.citizix.com containerd[388000]: time="2022-08-14T12:34:03.858272978Z" level=info msg="Start subscribing containerd event"
Aug 14 12:34:03 rockysrv.citizix.com containerd[388000]: time="2022-08-14T12:34:03.863104707Z" level=info msg="Start recovering state"
Aug 14 12:34:03 rockysrv.citizix.com containerd[388000]: time="2022-08-14T12:34:03.863397889Z" level=info msg="Start event monitor"
Aug 14 12:34:03 rockysrv.citizix.com containerd[388000]: time="2022-08-14T12:34:03.863527132Z" level=info msg="Start snapshots syncer"
Aug 14 12:34:03 rockysrv.citizix.com containerd[388000]: time="2022-08-14T12:34:03.865515202Z" level=info msg="Start cni network conf syncer for default"
Aug 14 12:34:03 rockysrv.citizix.com containerd[388000]: time="2022-08-14T12:34:03.865617643Z" level=info msg="Start streaming server"
Aug 14 12:34:03 rockysrv.citizix.com containerd[388000]: time="2022-08-14T12:34:03.865472672Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Aug 14 12:34:03 rockysrv.citizix.com containerd[388000]: time="2022-08-14T12:34:03.866726790Z" level=info msg=serving... address=/run/containerd/containerd.sock
Aug 14 12:34:03 rockysrv.citizix.com containerd[388000]: time="2022-08-14T12:34:03.870641018Z" level=info msg="containerd successfully booted in 0.045478s"
Aug 14 12:34:03 rockysrv.citizix.com systemd[1]: Started containerd container runtime.

Enable on boot

sudo systemctl enable containerd
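
You can also confirm that the CRI socket mentioned earlier exists:

sudo ls -l /run/containerd/containerd.sock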

Step 7 – Install kubelet, kubeadm and kubectl

Add repo

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF

Next install kubeadm and the required packages:

sudo dnf install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

Lock the versions to avoid unwanted updates via yum or dnf update:

sudo dnf install -y 'dnf-command(versionlock)'
sudo dnf versionlock kubelet kubeadm kubectl
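
Confirm the locks are in place:

sudo dnf versionlock list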

Step 8 – Enable and start kubelet

Enable and start kubelet

sudo systemctl enable kubelet
sudo systemctl start kubelet

Check the status. Until kubeadm init runs, the kubelet will restart in a crash loop every few seconds while it waits for instructions; this is expected at this stage.

sudo systemctl status kubelet

Step 9 – Initialize the cluster (on the master node)

Create cluster configuration

sudo kubeadm config print init-defaults | tee ClusterConfiguration.yaml

Modify ClusterConfiguration.yaml with the commands below, replacing 10.2.40.85 with your control plane's IP address:

sudo sed -i '/name/d' ClusterConfiguration.yaml
sudo sed -i 's/ advertiseAddress: 1.2.3.4/ advertiseAddress: 10.2.40.85/' ClusterConfiguration.yaml
sudo sed -i 's/ criSocket: \/var\/run\/dockershim\.sock/ criSocket: \/run\/containerd\/containerd\.sock/' ClusterConfiguration.yaml
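
Confirm the changes took effect:

grep -E 'advertiseAddress|criSocket' ClusterConfiguration.yaml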

Step 10 – Configuring the kubelet cgroup driver

kubeadm allows you to pass a KubeletConfiguration structure during kubeadm init. This KubeletConfiguration can include the cgroupDriver field which controls the cgroup driver of the kubelet.

cat << EOF >> ClusterConfiguration.yaml
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF

Create the Kubernetes cluster:

sudo kubeadm init --config=ClusterConfiguration.yaml

Copy the kube configuration to your home directory so kubectl can use it:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
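
Alternatively, if you are the root user, you can point kubectl at the admin config directly:

export KUBECONFIG=/etc/kubernetes/admin.conf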

From the control plane node you can now check on your Kubernetes cluster. The node may report a NotReady status until we set up networking in the next step.

$ kubectl get nodes
NAME                   STATUS   ROLES                  AGE     VERSION
rockysrv.citizix.com   Ready    control-plane,master   2m57s   v1.23.6

Step 11 – Setup networking with Calico

Calico is a networking and network policy provider. Calico supports a flexible set of networking options so you can choose the most efficient option for your situation, including non-overlay and overlay networks, with or without BGP. Calico uses the same engine to enforce network policy for hosts, pods, and (if using Istio & Envoy) applications at the service mesh layer.

Install the Tigera Calico operator and custom resource definitions.

kubectl create -f https://projectcalico.docs.tigera.io/manifests/tigera-operator.yaml

Install Calico by creating the necessary custom resource:

kubectl create -f https://projectcalico.docs.tigera.io/manifests/custom-resources.yaml
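
With the operator-based install, the Calico pods run in the calico-system namespace. You can watch them come up with:

watch kubectl get pods -n calico-system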

Step 12 – Setting up worker nodes

The following steps have to be done on the worker nodes only.

At the end of the kubeadm init output you were given a join command; it should look like this:

kubeadm join 10.2.40.85:6443 --token abcdef.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:efe7c1c81575325543ccd5956e05e81ed366ebc320898f881c5ca76760d1a94e

If you missed it, you can still generate a new token and print the join command (on the control plane node) with:

kubeadm token create --print-join-command

We are ready. The setup can be validated with kubectl: all nodes should be in the Ready state and the kube-system pods should be running.

kubectl get nodes

Get all the pods

$ kubectl get pods -A
NAMESPACE     NAME                                           READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-7c845d499-9l4ld        1/1     Running   0          2m48s
kube-system   calico-node-q7j2x                              1/1     Running   0          2m48s
kube-system   coredns-64897985d-445lb                        1/1     Running   0          4m37s
kube-system   coredns-64897985d-g6298                        1/1     Running   0          4m37s
kube-system   etcd-rockysrv.citizix.com                      1/1     Running   0          4m41s
kube-system   kube-apiserver-rockysrv.citizix.com            1/1     Running   0          4m40s
kube-system   kube-controller-manager-rockysrv.citizix.com   1/1     Running   0          4m41s
kube-system   kube-proxy-f6s82                               1/1     Running   0          4m38s
kube-system   kube-scheduler-rockysrv.citizix.com            1/1     Running   0          4m41s

Check the version

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.6", GitCommit:"ad3338546da947756e8a88aa6822e9c11e7eac22", GitTreeState:"clean", BuildDate:"2022-04-14T08:49:13Z", GoVersion:"go1.17.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.0", GitCommit:"ab69524f795c42094a6630298ff53f3c3ebab7f4", GitTreeState:"clean", BuildDate:"2021-12-07T18:09:57Z", GoVersion:"go1.17.3", Compiler:"gc", Platform:"linux/amd64"}

Step 13 – Control plane node isolation

By default, your cluster will not schedule Pods on the control plane nodes for security reasons. If you want to be able to schedule Pods on the control plane nodes, for example for a single machine Kubernetes cluster, run:

kubectl taint nodes --all node-role.kubernetes.io/control-plane- node-role.kubernetes.io/master-

The output will look something like:

node "rockysrv.citizix.com" untainted
...

This will remove the node-role.kubernetes.io/control-plane and node-role.kubernetes.io/master taints from any nodes that have them, including the control plane nodes, meaning that the scheduler will then be able to schedule Pods everywhere.

Step 14 – Deploying an application

Deploy a sample Nginx application:

$ kubectl create deploy nginx --image nginx:latest
deployment.apps/nginx created

Check that the pod started:

$ kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx-7c658794b9-t2fkc   1/1     Running   0          4m39s
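
As an optional extra, you could expose the deployment with a NodePort service to reach Nginx from outside the cluster (the service name nginx below simply mirrors the deployment name):

kubectl expose deploy nginx --port 80 --type NodePort
kubectl get svc nginx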

Conclusion

In this guide, we managed to set up a Kubernetes cluster on a Rocky Linux 9 server. You can now deploy apps on it.
