How to set up a Kubernetes Cluster on Ubuntu 22.04 with kubeadm and CRI-O

Kubernetes is an open-source container orchestration system for automating software deployment, scaling, and management. Google originally designed Kubernetes, but the Cloud Native Computing Foundation now maintains the project. It groups containers that make up an application into logical units for easy management and discovery. 

Kubeadm is a tool used to build Kubernetes (K8s) clusters. Kubeadm performs the actions necessary to get a minimum viable cluster up and running quickly.

In this guide, we will learn how to use kubeadm to set up a Kubernetes cluster on Ubuntu 22.04.

1. Ensure that the server is up to date

It is always good practice to ensure the system packages are up to date. Run these commands to update the Ubuntu system:

sudo apt update
sudo apt -y upgrade

Set up hostname

sudo hostnamectl set-hostname ubuntusrv.citizix.com
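Optionally, confirm the hostname and make it resolvable locally by adding it to /etc/hosts. The IP and name below are the ones used on this server; substitute your own:

hostnamectl status
echo "10.2.40.98 ubuntusrv.citizix.com" | sudo tee -a /etc/hosts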

2. Disable Swap and Enable Kernel modules

Use these commands to turn off swap. The first comments out the swap entry in /etc/fstab so swap stays disabled after a reboot; the second turns it off for the current session:

sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
sudo swapoff -a
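You can verify that swap is now disabled; swapon --show should print nothing and free -h should report 0B of swap:

swapon --show
free -h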

Next we need to enable kernel modules and configure sysctl.

To enable kernel modules

sudo modprobe overlay
sudo modprobe br_netfilter
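To make sure these modules are loaded again after a reboot, persist them in a modules-load.d file (a small sketch; the file name k8s.conf is arbitrary):

sudo tee /etc/modules-load.d/k8s.conf<<EOF
overlay
br_netfilter
EOF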

Add the required settings to sysctl

sudo tee /etc/sysctl.d/kubernetes.conf<<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

Reload sysctl

sudo sysctl --system
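You can confirm that the new values are in effect:

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward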

3. Install kubelet, kubeadm and kubectl

Once the server is updated, we can install the tools necessary for the Kubernetes installation: kubelet, kubeadm and kubectl. These are not found in the default Ubuntu repositories, so let us set up the Kubernetes repository. First, install the packages needed to add it:

sudo apt-get install -y apt-transport-https ca-certificates curl

Download the Google Cloud public signing key:

sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg

Add the Kubernetes apt repository:

echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

Update apt package index, install kubelet, kubeadm and kubectl, and pin their version:

sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

Confirm the installation by checking the versions of kubectl and kubeadm:

$ kubectl version --client && kubeadm version
Client Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.3", GitCommit:"aef86a93758dc3cb2c658dd9657ab4ad4afc21cb", GitTreeState:"clean", BuildDate:"2022-07-13T14:30:46Z", GoVersion:"go1.18.3", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.4
kubeadm version: &version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.3", GitCommit:"aef86a93758dc3cb2c658dd9657ab4ad4afc21cb", GitTreeState:"clean", BuildDate:"2022-07-13T14:29:09Z", GoVersion:"go1.18.3", Compiler:"gc", Platform:"linux/amd64"}

The kubelet is now restarting every few seconds, as it waits in a crashloop for kubeadm to tell it what to do.

4. Install Container runtime

A container runtime is the software that is responsible for running containers. When installing Kubernetes, you need to install a container runtime on each node in the cluster so that Pods can run there.

Supported container runtimes are:

  • Docker
  • CRI-O
  • Containerd

You only need one runtime per node. In this guide we will use CRI-O.

First, add the CRI-O repositories. We are going to run these commands as root. The xUbuntu_20.04 packages are used here since they also install cleanly on Ubuntu 22.04, as the apt-cache output further below shows, and the CRI-O version is pinned to 1.24 to match the Kubernetes release.

sudo -i
OS=xUbuntu_20.04
VERSION=1.24

echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /" > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list

echo "deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/ /" > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.list

curl -L https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/Release.key | apt-key add -

curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | apt-key add -

We can now install CRI-O after ensuring that our repositories are up to date.

sudo apt update
sudo apt install cri-o cri-o-runc

To confirm the installed version, use this command:

# apt-cache policy cri-o
cri-o:
  Installed: 1.24.2~0
  Candidate: 1.24.2~0
  Version table:
 *** 1.24.2~0 500
        500 http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/1.24/xUbuntu_20.04  Packages
        100 /var/lib/dpkg/status

To get a more detailed guide on CRI-O installation, check out How to Install CRI-O Container Runtime on Ubuntu 20.04.

5. Start and enable CRI-O

Let us reload systemd units and start the service while enabling it on boot.

sudo systemctl daemon-reload
sudo systemctl enable crio --now

Confirm the service status:

# sudo systemctl status crio
● crio.service - Container Runtime Interface for OCI (CRI-O)
     Loaded: loaded (/lib/systemd/system/crio.service; enabled; vendor preset: enabled)
     Active: active (running) since Mon 2022-08-15 06:48:02 UTC; 19s ago
       Docs: https://github.com/cri-o/cri-o
   Main PID: 21949 (crio)
      Tasks: 10
     Memory: 12.8M
        CPU: 138ms
     CGroup: /system.slice/crio.service
             └─21949 /usr/bin/crio

Aug 15 06:48:02 ubuntusrv.citizix.com crio[21949]: time="2022-08-15 06:48:02.636764239Z" level=info msg="Using seccomp default profile when unspecified: true"
Aug 15 06:48:02 ubuntusrv.citizix.com crio[21949]: time="2022-08-15 06:48:02.636913550Z" level=info msg="No seccomp profile specified, using the internal default"
Aug 15 06:48:02 ubuntusrv.citizix.com crio[21949]: time="2022-08-15 06:48:02.637012619Z" level=info msg="Installing default AppArmor profile: crio-default"
Aug 15 06:48:02 ubuntusrv.citizix.com crio[21949]: time="2022-08-15 06:48:02.693066498Z" level=info msg="No blockio config file specified, blockio not configured"
Aug 15 06:48:02 ubuntusrv.citizix.com crio[21949]: time="2022-08-15 06:48:02.693198663Z" level=info msg="RDT not available in the host system"
Aug 15 06:48:02 ubuntusrv.citizix.com crio[21949]: time="2022-08-15 06:48:02.695420306Z" level=info msg="Found CNI network crio (type=bridge) at /etc/cni/net.d/100-crio-bridge.conf"
Aug 15 06:48:02 ubuntusrv.citizix.com crio[21949]: time="2022-08-15 06:48:02.697132728Z" level=info msg="Found CNI network 200-loopback.conf (type=loopback) at /etc/cni/net.d/200-loopb>
Aug 15 06:48:02 ubuntusrv.citizix.com crio[21949]: time="2022-08-15 06:48:02.697158275Z" level=info msg="Updated default CNI network name to crio"
Aug 15 06:48:02 ubuntusrv.citizix.com crio[21949]: time="2022-08-15 06:48:02.723056181Z" level=warning msg="Error encountered when checking whether cri-o should wipe images: version fi>
Aug 15 06:48:02 ubuntusrv.citizix.com systemd[1]: Started Container Runtime Interface for OCI (CRI-O).

6. Initialize master node

Now that we have taken care of the prerequisites, let us initialize the master node.

Login to the server to be used as master and make sure that the br_netfilter module is loaded:

# lsmod | grep br_netfilter
br_netfilter           32768  0
bridge                307200  1 br_netfilter

Let us also enable the kubelet service to start on boot.

sudo systemctl enable kubelet

Pull the required images using this command:

# sudo kubeadm config images pull
[config/images] Pulled k8s.gcr.io/kube-apiserver:v1.24.3
[config/images] Pulled k8s.gcr.io/kube-controller-manager:v1.24.3
[config/images] Pulled k8s.gcr.io/kube-scheduler:v1.24.3
[config/images] Pulled k8s.gcr.io/kube-proxy:v1.24.3
[config/images] Pulled k8s.gcr.io/pause:3.7
[config/images] Pulled k8s.gcr.io/etcd:3.5.3-0
[config/images] Pulled k8s.gcr.io/coredns/coredns:v1.8.6

These are the kubeadm init options commonly used to bootstrap a cluster (an equivalent configuration-file form is sketched after this list):

  • --control-plane-endpoint : set the shared endpoint for all control-plane nodes. Can be a DNS name or an IP address
  • --pod-network-cidr : set the Pod network add-on CIDR
  • --cri-socket : set the runtime socket path if you have more than one container runtime installed
  • --apiserver-advertise-address : set the advertise address for this particular control-plane node's API server
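The same options can also be captured in a kubeadm configuration file instead of command-line flags. The snippet below is a minimal sketch assuming the v1beta3 kubeadm API that ships with Kubernetes 1.24 and the endpoint and Pod CIDR used later in this guide; adjust the values for your environment and pass the file with --config:

cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.24.3
controlPlaneEndpoint: "citizix.k8s.local:6443"
networking:
  podSubnet: "10.10.0.0/16"
EOF

sudo kubeadm init --config kubeadm-config.yaml --upload-certs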

If you don’t have a shared DNS endpoint, use this command:

sudo kubeadm init \
  --pod-network-cidr=10.10.0.0/16

To bootstrap with a shared DNS endpoint, use this command. Note that the DNS name must point to the control plane API endpoint. If you don't have a DNS record mapped to the API endpoint, you can get away with adding it to the /etc/hosts file.

$ sudo vim /etc/hosts
10.10.0.10 citizix.k8s.local

Now create the cluster

sudo kubeadm init \
  --pod-network-cidr=10.10.0.0/16 \
  --upload-certs \
  --control-plane-endpoint=citizix.k8s.local

This is the output from my server:

# sudo kubeadm init \
  --pod-network-cidr=10.10.0.0/16
[init] Using Kubernetes version: v1.24.3
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: missing optional cgroups: blkio
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local ubuntusrv.citizix.com] and IPs [10.96.0.1 10.2.40.98]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost ubuntusrv.citizix.com] and IPs [10.2.40.98 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost ubuntusrv.citizix.com] and IPs [10.2.40.98 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 9.002937 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node ubuntusrv.citizix.com as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node ubuntusrv.citizix.com as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: 9juknp.777t6ig18hc74k0a
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.2.40.98:6443 --token 9juknp.777t6ig18hc74k0a \
	--discovery-token-ca-cert-hash sha256:e50daa002ce8a45ac2ddafcd1246b15570452cf428d27e186aed7d7b2cdb6e76

Now we can copy the kubeconfig to the default directory:

mkdir -p $HOME/.kube
sudo cp -f /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check cluster status:

$ kubectl cluster-info
Kubernetes control plane is running at https://10.2.40.98:6443
CoreDNS is running at https://10.2.40.98:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
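At this point you can also list the control-plane pods. The CoreDNS pods may remain in a Pending state until a Pod network add-on is installed in the next steps:

kubectl get pods -n kube-system -o wide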

Additional control-plane nodes can be added using the control-plane join command printed in the installation output:

kubeadm join 10.2.40.98:6443 --token fegkje.9uu0g8ja0kqvhll1 \
    --discovery-token-ca-cert-hash sha256:9316503c53c0fd98daca54d314c2040a5a9690358055aeb2460872f1bd28ba7 \
    --control-plane
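If you initialized the cluster with --upload-certs, the control-plane join command printed by kubeadm also carries a --certificate-key flag. Should the token or certificate key expire, fresh ones can be generated on the existing control plane with standard kubeadm commands:

# print a new worker join command with a fresh token
sudo kubeadm token create --print-join-command

# re-upload the control-plane certificates and print a new certificate key
sudo kubeadm init phase upload-certs --upload-certs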

7. Scheduling pods on Master

By default, your Kubernetes Cluster will not schedule pods on the control-plane node for security reasons. It is recommended you keep it this way, but for test environments or if you are running a single node cluster, you may want to schedule Pods on control-plane node to maximize resource usage.

Remove the taint using this command:

kubectl taint nodes --all node-role.kubernetes.io/master-

You should see similar output to this:

ubuntu@ubuntusrv:~$ kubectl taint nodes --all node-role.kubernetes.io/master-
node/ubuntusrv.citizix.com untainted

This will remove the node-role.kubernetes.io/master taint from any nodes that have it, including the control-plane node, meaning that the scheduler will then be able to schedule pods everywhere.
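Note that the kubeadm init output above applied both the node-role.kubernetes.io/master and node-role.kubernetes.io/control-plane taints, so on Kubernetes 1.24 you may also need to remove the control-plane taint before pods are scheduled on this node:

kubectl taint nodes --all node-role.kubernetes.io/control-plane-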

8. Install network plugin on Master

Network plugins in Kubernetes come in a few flavors:

  • CNI plugins: adhere to the Container Network Interface (CNI) specification, designed for interoperability.
  • Kubenet plugin: implements basic cbr0 using the bridge and host-local CNI plugins

In this guide we will configure Calico.

Calico is a networking and network policy provider. Calico supports a flexible set of networking options so you can choose the most efficient option for your situation, including non-overlay and overlay networks, with or without BGP. Calico uses the same engine to enforce network policy for hosts, pods, and (if using Istio & Envoy) applications at the service mesh layer.

Install the Tigera Calico operator and custom resource definitions.

kubectl create -f https://projectcalico.docs.tigera.io/manifests/tigera-operator.yaml

Install Calico by creating the necessary custom resource. 

kubectl create -f https://projectcalico.docs.tigera.io/manifests/custom-resources.yaml
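Note that the default custom-resources.yaml typically defines a Calico IPPool with cidr 192.168.0.0/16. Since this cluster was initialized with --pod-network-cidr=10.10.0.0/16, you may prefer to download the manifest and adjust the pool CIDR before creating it instead of applying the URL directly (a sketch, assuming the default value is present in the manifest):

curl -fsSLO https://projectcalico.docs.tigera.io/manifests/custom-resources.yaml
sed -i 's#192.168.0.0/16#10.10.0.0/16#' custom-resources.yaml
kubectl create -f custom-resources.yaml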

Confirm that all of the Calico pods are running with the following command. The Calico components are created in the calico-system namespace, while the operator itself runs in tigera-operator.

watch kubectl get pods -n calico-system

Confirm master node is ready:

$ kubectl get nodes -o wide
NAME                    STATUS   ROLES           AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION    CONTAINER-RUNTIME
ubuntusrv.citizix.com   Ready    control-plane   3m17s   v1.24.3   10.2.40.98    <none>        Ubuntu 22.04.1 LTS   5.15.0-1017-aws   cri-o://1.24.2

9. Add worker nodes

With the control plane ready you can add worker nodes to the cluster for running scheduled workloads.

The join command printed in the kubeadm init output is used to add a worker node to the cluster. Run it as root on each worker node:

kubeadm join 10.2.40.98:6443 --token fegkje.9uu0g8ja0kqvhll1 \
	--discovery-token-ca-cert-hash sha256:9316503c53c0fd98daca54d314c2040a5a9690358055aeb2460872f1bd28ba78
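Back on the control plane, confirm that the worker has joined and eventually reports a Ready status:

kubectl get nodes -o wide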

10. Deploying an application

Deploy a sample Nginx application:

$ kubectl create deploy nginx --image nginx:latest
deployment.apps/nginx created

Check that the pod has started:

$ kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx-7c658794b9-t2fkc   1/1     Running   0          4m39s
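You can also expose the deployment with a NodePort service to confirm Nginx answers from outside the cluster (a quick sketch; note the assigned NodePort from the service output, then curl http://<node-ip>:<nodeport>):

kubectl expose deploy nginx --port=80 --type=NodePort
kubectl get svc nginx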

Conclusion

In this guide we set up a Kubernetes cluster on an Ubuntu 22.04 server using kubeadm and CRI-O. You can now deploy applications to it.
