How to Set Up a Kubernetes Cluster on Ubuntu 20.04 with kubeadm and CRI-O

Kubernetes is an open-source container orchestration system for automating software deployment, scaling, and management. Google originally designed Kubernetes, but the Cloud Native Computing Foundation now maintains the project. It groups containers that make up an application into logical units for easy management and discovery. 

Kubeadm is a tool used to build Kubernetes (K8s) clusters. Kubeadm performs the actions necessary to get a minimum viable cluster up and running quickly.

In this guide we will learn how to use kubeadm to set up a Kubernetes cluster on Ubuntu 20.04.

1. Ensure that the server is up to date

It is always good practice to keep system packages up to date. Use these commands to update the packages on the Ubuntu system:

sudo apt update
sudo apt -y upgrade
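
If the upgrade pulled in a new kernel, Ubuntu flags this with the /var/run/reboot-required file. A quick check (reboot only if the file exists):

[ -f /var/run/reboot-required ] && sudo reboot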

2. Install kubelet, kubeadm and kubectl

Once the servers are updated, we can install the tools necessary for the Kubernetes installation: kubelet, kubeadm and kubectl. These are not found in the default Ubuntu repositories, so let us set up the Kubernetes repository first:

sudo apt -y install curl apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

Then install the required packages:

sudo apt update
sudo apt -y install vim git curl wget kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
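
The apt-mark hold keeps the Kubernetes packages from being upgraded accidentally during routine apt upgrades, since cluster components should be upgraded deliberately and in step. You can list the held packages to confirm:

apt-mark showhold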

Confirm the installation by checking the versions of kubectl and kubeadm:

$ kubectl version --client && kubeadm version
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.3", GitCommit:"816c97ab8cff8a1c72eccca1026f7820e93e0d25", GitTreeState:"clean", BuildDate:"2022-01-25T21:25:17Z", GoVersion:"go1.17.6", Compiler:"gc", Platform:"linux/amd64"}
kubeadm version: &version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.3", GitCommit:"816c97ab8cff8a1c72eccca1026f7820e93e0d25", GitTreeState:"clean", BuildDate:"2022-01-25T21:24:08Z", GoVersion:"go1.17.6", Compiler:"gc", Platform:"linux/amd64"}

3. Disable Swap and Enable Kernel modules

Turn off swap and comment out any swap entries in /etc/fstab so it stays off after a reboot:

sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
sudo swapoff -a
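
To verify that swap is off, swapon should report no active swap devices and free should show 0B of swap:

swapon --show
free -h | grep -i swap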

Next, we need to enable the required kernel modules and configure sysctl.

Enable the kernel modules:

sudo modprobe overlay
sudo modprobe br_netfilter
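
modprobe only loads the modules for the current boot. To have them loaded automatically after a reboot as well, you can persist them in a modules-load.d file (the file name below is arbitrary):

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF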

Add the required settings to sysctl:

sudo tee /etc/sysctl.d/kubernetes.conf<<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

Reload sysctl to apply the new settings:

sudo sysctl --system
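
You can verify that the new values are active:

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward

All three should report a value of 1.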

4. Install Container runtime

A container runtime is the software responsible for running containers. When installing Kubernetes, you need to install a container runtime on each node in the cluster so that Pods can run there.

Supported container runtimes are:

  • Docker
  • CRI-O
  • Containerd

Choose only one runtime. In this guide we will use CRI-O.

First, add the CRI-O repositories. We are going to run these commands as root.

sudo -i
OS=xUbuntu_20.04
VERSION=1.22

echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /" > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
echo "deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/ /" > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.list

curl -L https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/Release.key | apt-key add -
curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | apt-key add -

We can now install CRI-O after ensuring that our repositories are up to date.

sudo apt update
sudo apt install cri-o cri-o-runc

To confirm the installed version, use this command:

$ apt-cache policy cri-o
cri-o:
  Installed: 1.22.1~1
  Candidate: 1.22.1~1
  Version table:
 *** 1.22.1~1 500
        500 http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/1.22/xUbuntu_20.04  Packages
        100 /var/lib/dpkg/status

To get a more detailed guide on the CRI-O installation, check out How to Install CRI-O Container Runtime on Ubuntu 20.04.

5. Start and enable CRI-O

Let us reload the systemd units, then start the service and enable it on boot.

sudo systemctl daemon-reload
sudo systemctl enable crio --now

Confirm the service status:

$ sudo systemctl status crio
● crio.service - Container Runtime Interface for OCI (CRI-O)
     Loaded: loaded (/lib/systemd/system/crio.service; enabled; vendor preset: enabled)
     Active: active (running) since Tue 2022-02-01 11:55:27 UTC; 45s ago
       Docs: https://github.com/cri-o/cri-o
   Main PID: 18644 (crio)
      Tasks: 11
     Memory: 9.8M
     CGroup: /system.slice/crio.service
             └─18644 /usr/bin/crio

Feb 01 11:55:27 ubuntusrv.citizix.com crio[18644]: time="2022-02-01 11:55:27.738388792Z" level=info msg="Installing default AppArmor profile: crio-default"
Feb 01 11:55:27 ubuntusrv.citizix.com crio[18644]: time="2022-02-01 11:55:27.767489286Z" level=info msg="No blockio config file specified, blockio not configured"
Feb 01 11:55:27 ubuntusrv.citizix.com crio[18644]: time="2022-02-01 11:55:27.767528847Z" level=info msg="RDT not available in the host system"
Feb 01 11:55:27 ubuntusrv.citizix.com crio[18644]: time="2022-02-01 11:55:27.881122431Z" level=warning msg="The binary conntrack is not installed, this can cause failures in network connect>
Feb 01 11:55:27 ubuntusrv.citizix.com crio[18644]: time="2022-02-01 11:55:27.881351017Z" level=warning msg="Error encountered when checking whether cri-o should wipe images: version file /v>
Feb 01 11:55:27 ubuntusrv.citizix.com systemd[1]: Started Container Runtime Interface for OCI (CRI-O).
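
You can also confirm that the CRI-O socket kubeadm and the kubelet will talk to exists at its default path:

ls -l /var/run/crio/crio.sock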

6. Initialize master node

Now that we have taken care of the prerequisites, let us initialize the master node.

Log in to the server that will be used as the master node and make sure that the br_netfilter module is loaded:

$ lsmod | grep br_netfilter
br_netfilter           28672  0
bridge                249856  1 br_netfilter

Let us also enable the kubelet service to start on boot.

sudo systemctl enable kubelet

Pull the required images using this command:

$ sudo kubeadm config images pull
[config/images] Pulled k8s.gcr.io/kube-apiserver:v1.23.3
[config/images] Pulled k8s.gcr.io/kube-controller-manager:v1.23.3
[config/images] Pulled k8s.gcr.io/kube-scheduler:v1.23.3
[config/images] Pulled k8s.gcr.io/kube-proxy:v1.23.3
[config/images] Pulled k8s.gcr.io/pause:3.6
[config/images] Pulled k8s.gcr.io/etcd:3.5.1-0
[config/images] Pulled k8s.gcr.io/coredns/coredns:v1.8.6

The following kubeadm init options are commonly used to bootstrap the cluster:

  • --control-plane-endpoint: sets the shared endpoint for all control-plane nodes; can be a DNS name or an IP address.
  • --pod-network-cidr: sets the Pod network add-on CIDR.
  • --cri-socket: sets the runtime socket path if more than one container runtime is installed.
  • --apiserver-advertise-address: sets the advertise address for this particular control-plane node's API server.
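
If you prefer, these settings can also be captured in a kubeadm configuration file and passed with --config. Below is a minimal sketch assuming the v1beta3 kubeadm API (used by v1.23) and the default CRI-O socket path; adjust the endpoint and CIDR to your environment:

cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  criSocket: "unix:///var/run/crio/crio.sock"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controlPlaneEndpoint: "citizix.k8s.local:6443"
networking:
  podSubnet: "10.10.0.0/16"
EOF

sudo kubeadm init --config kubeadm-config.yaml --upload-certs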

If you don’t have a shared DNS endpoint, use this command:

sudo kubeadm init \
  --pod-network-cidr=10.10.0.0/16

To bootstrap with a shared DNS endpoint, use the command below. Note that the DNS name should resolve to the control plane API endpoint. If you don't have a DNS record mapped to the API endpoint, you can get away with adding an entry to the /etc/hosts file:

$ sudo vim /etc/hosts
10.10.0.10 citizix.k8s.local

Now create the cluster:

sudo kubeadm init \
  --pod-network-cidr=10.10.0.0/16 \
  --upload-certs \
  --control-plane-endpoint=citizix.k8s.local

This is the output from my server:

$ sudo kubeadm init \
>   --pod-network-cidr=10.10.0.0/16
[init] Using Kubernetes version: v1.23.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local ubuntusrv.citizix.com] and IPs [10.96.0.1 10.2.40.239]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost ubuntusrv.citizix.com] and IPs [10.2.40.239 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost ubuntusrv.citizix.com] and IPs [10.2.40.239 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 13.002535 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node ubuntusrv.citizix.com as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node ubuntusrv.citizix.com as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: fegkje.9uu0g8ja0kqvhll1
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.2.40.239:6443 --token fegkje.9uu0g8ja0kqvhll1 \
	--discovery-token-ca-cert-hash sha256:9316503c53c0fd98daca54d314c2040a5a9690358055aeb2460872f1bd28ba78

Now we can copy the kubeconfig to the default directory:

mkdir -p $HOME/.kube
sudo cp -f /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check cluster status:

$ kubectl cluster-info
Kubernetes control plane is running at https://10.2.40.239:6443
CoreDNS is running at https://10.2.40.239:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Additional control-plane (master) nodes can be added using the join command from the installation output:

kubeadm join 10.2.40.239:6443 --token fegkje.9uu0g8ja0kqvhll1 \
    --discovery-token-ca-cert-hash sha256:9316503c53c0fd98daca54d314c2040a5a9690358055aeb2460872f1bd28ba78 \
    --control-plane
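
Note that joining an additional control-plane node also requires the --certificate-key printed when kubeadm init is run with --upload-certs. The uploaded certificates expire after two hours; if needed, they can be re-uploaded and a fresh key printed with:

sudo kubeadm init phase upload-certs --upload-certs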

7. Scheduling pods on Master

By default, your Kubernetes cluster will not schedule Pods on the control-plane node for security reasons. It is recommended you keep it this way, but for test environments, or if you are running a single-node cluster, you may want to schedule Pods on the control-plane node to maximize resource usage.

Remove the taint using this command:

kubectl taint nodes --all node-role.kubernetes.io/master-

You should see similar output to this:

$ kubectl taint nodes --all node-role.kubernetes.io/master-
node/ubuntusrv.citizix.com untainted

This removes the node-role.kubernetes.io/master taint from any nodes that have it, including the control-plane node, which means the scheduler will then be able to schedule Pods everywhere.
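
You can confirm that the taint is gone by checking the node description; it should now report Taints: <none>.

kubectl describe nodes | grep -i taints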

8. Install network plugin on Master

Network plugins in Kubernetes come in a few flavors:

  • CNI plugins: adhere to the Container Network Interface (CNI) specification, designed for interoperability.
  • Kubenet plugin: implements basic cbr0 using the bridge and host-local CNI plugins

In this guide we will configure Calico.

Calico is a networking and network policy provider. Calico supports a flexible set of networking options so you can choose the most efficient option for your situation, including non-overlay and overlay networks, with or without BGP. Calico uses the same engine to enforce network policy for hosts, pods, and (if using Istio & Envoy) applications at the service mesh layer.

Install the Tigera Calico operator and custom resource definitions.

kubectl create -f https://projectcalico.docs.tigera.io/manifests/tigera-operator.yaml

Install Calico by creating the necessary custom resource. 

kubectl create -f https://projectcalico.docs.tigera.io/manifests/custom-resources.yaml
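
Note that the default custom-resources.yaml assumes a pod CIDR of 192.168.0.0/16. Since this cluster was initialized with --pod-network-cidr=10.10.0.0/16, you may prefer to download the manifest and adjust the cidr field before applying it, roughly like this:

curl -LO https://projectcalico.docs.tigera.io/manifests/custom-resources.yaml
sed -i 's#192.168.0.0/16#10.10.0.0/16#' custom-resources.yaml
kubectl create -f custom-resources.yaml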

Confirm that all of the pods are running with the following command.

watch kubectl get pods -n calico-system

Confirm master node is ready:

$ kubectl get nodes -o wide
NAME                    STATUS   ROLES                  AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION    CONTAINER-RUNTIME
ubuntusrv.citizix.com   Ready    control-plane,master   23m   v1.23.3   10.2.40.239   <none>        Ubuntu 20.04.3 LTS   5.11.0-1019-aws   cri-o://1.22.1

9. Add worker nodes

With the control plane ready, you can add worker nodes to the cluster to run scheduled workloads.

Use the join command printed by kubeadm init to add a worker node to the cluster. Run it as root on each worker node:

kubeadm join 10.2.40.239:6443 --token fegkje.9uu0g8ja0kqvhll1 \
	--discovery-token-ca-cert-hash sha256:9316503c53c0fd98daca54d314c2040a5a9690358055aeb2460872f1bd28ba78
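
If the bootstrap token from the original output has expired (tokens are valid for 24 hours by default), you can print a fresh join command from the control-plane node:

sudo kubeadm token create --print-join-command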

10. Deploying an application

Deploy a sample Nginx application:

$ kubectl create deploy nginx --image nginx:latest
deployment.apps/nginx created

Check that the pod has started:

$ kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx-7c658794b9-t2fkc   1/1     Running   0          4m39s
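
To reach the deployment from outside the cluster, you can expose it with a NodePort Service and then curl any node on the assigned port (the port number is allocated from the NodePort range):

kubectl expose deploy nginx --port 80 --type NodePort
kubectl get svc nginx
# curl http://<node-ip>:<nodeport>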

Conclusion

In this guide we set up a Kubernetes cluster on an Ubuntu 20.04 server using kubeadm and CRI-O. You can now deploy applications to it.
