Kubeadm is a tool built to provide best-practice “fast paths” for creating Kubernetes clusters. It performs the actions necessary to get a minimum viable, secure cluster up and running in a user friendly way. Kubeadm’s scope is limited to the local node filesystem and the Kubernetes API, and it is intended to be a composable building block of higher level tools.
In this guide we will learn how to set up Kubernetes on a Rocky Linux 8 server.
Also check out:
- How to set up Kubernetes Cluster on Ubuntu 20.04 with kubeadm and CRI-O
- How to set up Kubernetes Cluster on Debian 11 with kubeadm and CRI-O
- How to Setup a Kubernetes Cluster with K3S in Rocky Linux 8
- How to use Kustomize to manage kubernetes configurations
Common kubeadm commands
- kubeadm init to bootstrap the initial Kubernetes control-plane node.
- kubeadm join to bootstrap a Kubernetes worker node or an additional control plane node, and join it to the cluster.
- kubeadm upgrade to upgrade a Kubernetes cluster to a newer version.
- kubeadm reset to revert any changes made to this host by kubeadm init or kubeadm join.
Prerequisites
To follow along, ensure you have:
- An updated Rocky Linux 8 or other RHEL 8 based server
- 2 GB or more of RAM per machine
- 2 CPUs or more.
- Full network connectivity between all machines in the cluster
- Unique hostname, MAC address, and product_uuid for every node.
- Swap disabled. You MUST disable swap in order for the kubelet to work properly.
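The RAM, CPU, and swap requirements above can be checked with a small script. The `check_prereqs` helper below is hypothetical (not part of kubeadm) and is a sketch only; on a real node you would feed in live values from `free`, `nproc`, and `swapon`.

```shell
# Hypothetical helper: takes total RAM in MB, CPU count, and number of active
# swap devices, and reports whether the node meets the kubeadm minimums.
check_prereqs() {
  ram_mb="$1"; cpus="$2"; swap_devices="$3"
  [ "$ram_mb" -ge 2048 ]    || { echo "FAIL: need >= 2048 MB RAM, have ${ram_mb}"; return 1; }
  [ "$cpus" -ge 2 ]         || { echo "FAIL: need >= 2 CPUs, have ${cpus}"; return 1; }
  [ "$swap_devices" -eq 0 ] || { echo "FAIL: swap must be disabled"; return 1; }
  echo "OK"
}

# On a live server you would call something like:
#   check_prereqs "$(free -m | awk '/^Mem:/{print $2}')" "$(nproc)" "$(swapon --show --noheadings | wc -l)"
check_prereqs 4096 2 0
```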
Step 1 – Ensuring that the server is up to date
Let us start by ensuring that the server packages are updated. Use this command to achieve this:
sudo dnf -y update

Set hostname
sudo hostnamectl set-hostname rockysrv.citizix.com

Install some common packages
sudo dnf install -y git curl vim iproute-tc

Step 2 – Disable SELinux
Let us disable SELinux
sudo sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

Apply changes so you don’t have to reboot:
sudo setenforce 0

Step 3 – Disable swap
Use this command to turn off swap
sudo sed -i '/swap/d' /etc/fstab

Apply changes
sudo swapoff -a

Step 4 – Letting iptables see bridged traffic
Make sure that the br_netfilter module is loaded. This can be done by running lsmod | grep br_netfilter. To load it explicitly call sudo modprobe br_netfilter.
The overlay module is needed for OverlayFS, while br_netfilter lets iptables correctly see bridged traffic.
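As an illustrative check, the lsmod output can be scanned for both modules. The sample output below is hard-coded so the snippet is self-contained; on the server you would pipe in the real lsmod instead.

```shell
# Captured sample `lsmod` output; on the server use the real command:
#   lsmod | grep -E 'overlay|br_netfilter'
sample_lsmod='Module                  Size  Used by
br_netfilter           24576  0
overlay               139264  0'

for mod in overlay br_netfilter; do
  if echo "$sample_lsmod" | grep -q "^${mod} "; then
    echo "${mod}: loaded"
  else
    echo "${mod}: missing, load it with: sudo modprobe ${mod}"
  fi
done
```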
cat << EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

As a requirement for your Linux Node’s iptables to correctly see bridged traffic, you should ensure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl config, e.g.
cat << EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

Apply settings
sudo modprobe overlay
sudo modprobe br_netfilter
sudo sysctl --system

Step 5 – Install Containerd
Install dependencies and add repo
sudo dnf install dnf-utils -y
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

Next install containerd
sudo dnf install -y containerd.io

For containerd, the CRI socket is /run/containerd/containerd.sock by default.
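This socket path matters later when initializing the cluster: kubeadm's historical defaults point criSocket at the removed dockershim socket, which must be rewritten to the containerd one. The following is an illustrative dry run against a throwaway sample file, not the real configuration:

```shell
# Sample fragment mimicking kubeadm's default nodeRegistration section.
cat > /tmp/cri-socket.demo.yaml << 'EOF'
nodeRegistration:
  criSocket: /var/run/dockershim.sock
EOF

# Rewrite the socket path to containerd's default.
sed -i 's|/var/run/dockershim.sock|/run/containerd/containerd.sock|' /tmp/cri-socket.demo.yaml
cat /tmp/cri-socket.demo.yaml
```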
Configure containerd
sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml

Configuring the systemd cgroup driver
To use the systemd cgroup driver in /etc/containerd/config.toml with runc, set
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
...
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
SystemdCgroup = true

If you apply this change, make sure to restart containerd:
sudo systemctl restart containerd

Step 6 – Start and enable containerd
Start
sudo systemctl start containerd

Confirm status
$ sudo systemctl status containerd
● containerd.service - containerd container runtime
Loaded: loaded (/usr/lib/systemd/system/containerd.service; disabled; vendor preset: disabled)
Active: active (running) since Sun 2022-04-24 08:44:50 UTC; 3h 46min ago
Docs: https://containerd.io
Main PID: 135760 (containerd)
Tasks: 21
Memory: 65.3M
CGroup: /system.slice/containerd.service
├─135760 /usr/bin/containerd
└─205783 /usr/bin/containerd-shim-runc-v2 -namespace moby -id ab4d726ae74d86a9dfa86d6fab4d8fa3caa9a268b55de06fb8f59e7e3f6cc>
Apr 24 10:09:27 rockysrv.citizix.com containerd[135760]: time="2022-04-24T10:09:27.766990259Z" level=info msg="starting signal loop"

Enable on boot
sudo systemctl enable containerd

Step 7 – Install kubelet, kubeadm and kubectl
Add repo
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF

Next install
sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

Lock the versions in order to avoid unwanted updates via yum update
sudo dnf install yum-plugin-versionlock -y
sudo dnf versionlock kubelet kubeadm kubectl

Step 8 – Enable and start kubelet
Enable and start kubelet
sudo systemctl enable kubelet.service
sudo systemctl start kubelet.service

Check status
sudo systemctl status kubelet

Step 9 – Initialize the cluster (in the master node)
Create cluster configuration
sudo kubeadm config print init-defaults | tee ClusterConfiguration.yaml

Modify ClusterConfiguration.yaml as follows, replacing 10.2.40.85 with your Control Plane’s IP address
sudo sed -i '/name/d' ClusterConfiguration.yaml
sudo sed -i 's/ advertiseAddress: 1.2.3.4/ advertiseAddress: 10.2.40.85/' ClusterConfiguration.yaml
sudo sed -i 's/ criSocket: \/var\/run\/dockershim\.sock/ criSocket: \/run\/containerd\/containerd\.sock/' ClusterConfiguration.yaml

Step 10 – Configuring the kubelet cgroup driver
kubeadm allows you to pass a KubeletConfiguration structure during kubeadm init. This KubeletConfiguration can include the cgroupDriver field which controls the cgroup driver of the kubelet.
cat << EOF >> ClusterConfiguration.yaml
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF

Create the Kubernetes cluster
sudo kubeadm init --config=ClusterConfiguration.yaml

Move the kube configuration
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

From the control plane node you can now check your Kubernetes cluster. The node may report NotReady at first because we didn’t set up the networking yet.
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
rockysrv.citizix.com Ready control-plane,master 2m57s v1.23.6

Step 11 – Setup networking with Calico
Calico is a networking and network policy provider. Calico supports a flexible set of networking options so you can choose the most efficient option for your situation, including non-overlay and overlay networks, with or without BGP. Calico uses the same engine to enforce network policy for hosts, pods, and (if using Istio & Envoy) applications at the service mesh layer.
Install the Tigera Calico operator and custom resource definitions.
kubectl create -f https://projectcalico.docs.tigera.io/manifests/tigera-operator.yaml

Then install Calico itself by applying the manifest:

kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

Step 12 – Setting up worker nodes
The following steps have to be done on the worker nodes only: join each of the other nodes to the cluster.
At the end of the kubeadm init output you were given a join command; it should look like:
kubeadm join 10.2.40.85:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:efe7c1c81575325543ccd5956e05e81ed366ebc320898f881c5ca76760d1a94e

If you missed it, you can still generate a token and print the command with:
kubeadm token create --print-join-command

We are ready. The setup can be validated with kubectl: all nodes should be in Ready state and the kube-system pods running.
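That readiness check can be scripted. The all_nodes_ready helper below is hypothetical, and the sample output is hard-coded so the sketch runs standalone; on a live cluster you would pipe the real kubectl get nodes output into it.

```shell
# Hypothetical helper: succeeds only if every node in `kubectl get nodes`
# output (read from stdin) has STATUS "Ready". Skips the header row.
all_nodes_ready() {
  awk 'NR > 1 { if ($2 != "Ready") exit 1 }'
}

# Captured sample output; on a live cluster use:
#   kubectl get nodes | all_nodes_ready
sample='NAME                   STATUS   ROLES                  AGE   VERSION
rockysrv.citizix.com   Ready    control-plane,master   10m   v1.23.6
worker1.citizix.com    Ready    <none>                 2m    v1.23.6'

if echo "$sample" | all_nodes_ready; then
  echo "all nodes Ready"
else
  echo "some nodes not Ready"
fi
```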
kubectl get nodes

Get all the pods
$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-7c845d499-9l4ld 1/1 Running 0 2m48s
kube-system calico-node-q7j2x 1/1 Running 0 2m48s
kube-system coredns-64897985d-445lb 1/1 Running 0 4m37s
kube-system coredns-64897985d-g6298 1/1 Running 0 4m37s
kube-system etcd-rockysrv.citizix.com 1/1 Running 0 4m41s
kube-system kube-apiserver-rockysrv.citizix.com 1/1 Running 0 4m40s
kube-system kube-controller-manager-rockysrv.citizix.com 1/1 Running 0 4m41s
kube-system kube-proxy-f6s82 1/1 Running 0 4m38s
kube-system kube-scheduler-rockysrv.citizix.com 1/1 Running 0 4m41s

Check the version
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.6", GitCommit:"ad3338546da947756e8a88aa6822e9c11e7eac22", GitTreeState:"clean", BuildDate:"2022-04-14T08:49:13Z", GoVersion:"go1.17.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.0", GitCommit:"ab69524f795c42094a6630298ff53f3c3ebab7f4", GitTreeState:"clean", BuildDate:"2021-12-07T18:09:57Z", GoVersion:"go1.17.3", Compiler:"gc", Platform:"linux/amd64"}

Step 13 – Control plane node isolation
By default, your cluster will not schedule Pods on the control plane nodes for security reasons. If you want to be able to schedule Pods on the control plane nodes, for example for a single machine Kubernetes cluster, run:
kubectl taint nodes --all node-role.kubernetes.io/control-plane- node-role.kubernetes.io/master-
The output will look something like:
node "rockysrv.citizix.com" untainted
...
This will remove the node-role.kubernetes.io/control-plane and node-role.kubernetes.io/master taints from any nodes that have them, including the control plane nodes, meaning that the scheduler will then be able to schedule Pods everywhere.
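An illustrative way to confirm the taints are gone is to grep the node's taint list. The sample below is hard-coded from typical kubectl describe node output so the snippet runs standalone; on a live cluster you would use the real command shown in the comment.

```shell
# Captured sample; on a live cluster use:
#   kubectl describe node rockysrv.citizix.com | grep Taints
sample_taints='Taints:             <none>'

if echo "$sample_taints" | grep -qE 'node-role\.kubernetes\.io/(control-plane|master)'; then
  echo "control-plane taints still present"
else
  echo "no control-plane taints; pods can schedule here"
fi
```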
Step 14 – Deploying an application
Deploy sample Nginx application
$ kubectl create deploy nginx --image nginx:latest
deployment.apps/nginx created

Check to see if the pod started
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-7c658794b9-t2fkc 1/1 Running 0 4m39s

Conclusion
In this guide we managed to set up a Kubernetes cluster on a Rocky Linux 8 server. You can now deploy apps to it.