How to Deploy Kubernetes Cluster on Linux With k0s

K0s, pronounced kzeros, is a fully-fledged open-source Kubernetes distribution developed by Team Lens, the team behind the Lens Kubernetes IDE. K0s is highly configurable and flexible enough to cover a wide range of Kubernetes use cases, such as local and private data centers, IoT and public cloud clusters, and hybrid deployments. It is a simple, solid, and certified Kubernetes distribution that can be deployed on any infrastructure, whether a private or public cloud environment.

k0s is distributed as a single binary with zero host OS dependencies besides the host OS kernel. It works with any operating system without additional software packages or configuration. Any security vulnerabilities or performance issues can be fixed directly in the k0s distribution.

Features of k0s

K0s is a fully featured Kubernetes distribution and ships with all the features of upstream Kubernetes. Some of these features include:

  1. Supports the latest Kubernetes versions – later than v1.20.0
  2. Uses containerd as the default container runtime, but you can configure a custom runtime
  3. Supported Machine Architectures – x86-64, ARM64, ARMv7
  4. Supported Host OS – Linux (kernel v3.10 or newer), Windows Server 2019 (experimental)
  5. Control Plane Datastore – In-Cluster Elastic Etcd with TLS (default), In-Cluster SQLite (default for single node), External PostgreSQL, External MySQL
  6. Supported CNI Providers – Kube-Router (default), Calico, Custom (see the configuration sketch after this list)
  7. Supported Storage & CSI Providers – All Kubernetes storage solutions (with CSI)
  8. Supported Cloud Providers – All Cloud Providers (via extensions)
  9. Built-In Security Features – RBAC, Pod Security Policies, Network Policies, Control Plane Isolation, Support for Micro VMs, Support for OpenID Providers
  10. Built-In Cluster Features – DNS by CoreDNS, Cluster Metrics by Metrics Server, Horizontal Pod Autoscaling (HPA), GPU Support, Zero-Downtime Cluster Upgrade (via k0sctl), Cluster Backup & Restore
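
Most of these options are controlled through the k0s configuration file. Below is a minimal sketch of generating and customizing one; the k0s config create sub-command and the spec.network.provider field are assumptions to verify against your k0s version (older releases used k0s default-config instead):

# Generate the default k0s configuration and save it for editing
sudo k0s config create > k0s.yaml

# Example: switch the CNI provider from the default kube-router to calico
# by editing spec.network.provider in k0s.yaml:
#
#   spec:
#     network:
#       provider: calico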

Table of Contents

  1. Downloading k0s
  2. Installing k0s
  3. Starting k0s service
  4. Accessing the cluster using kubectl
  5. Getting cluster conf file
  6. Testing the cluster by running a simple nginx app
  7. Uninstalling k0s

Downloading k0s

Run the k0s download script to download the latest stable version of k0s and make it executable at /usr/local/bin/k0s.

curl -sSLf https://get.k0s.sh | sudo sh

You should see output similar to this:

$ curl -sSLf https://get.k0s.sh | sudo sh
Downloading k0s from URL: https://github.com/k0sproject/k0s/releases/download/v1.22.2+k0s.1/k0s-v1.22.2+k0s.1-amd64
k0s is now executable in /usr/local/bin

Installing k0s

The k0s install sub-command installs k0s as a system service on the local host that is running one of the supported init systems: Systemd or OpenRC. You can execute the install for workers, controllers or single node (controller+worker) instances.
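
This guide uses the single-node mode. For reference, a multi-node layout installs a plain controller on one host and joins workers with a token; a rough sketch (the token sub-commands below are best verified against your k0s version) looks like this:

# On the controller node
sudo k0s install controller
sudo k0s start
sudo k0s token create --role=worker > worker-token

# On each worker node, after copying the token file over
sudo k0s install worker --token-file /path/to/worker-token
sudo k0s start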

Run the following command to install a single node k0s that includes the controller and worker functions with the default configuration:

sudo k0s install controller --single

What this does: it installs a single-node instance that runs both the controller and worker functions on this VM.

Output:

$ sudo k0s install controller --single
INFO[2022-01-02 14:09:50] no config file given, using defaults
INFO[2022-01-02 14:09:50] creating user: etcd
INFO[2022-01-02 14:09:51] creating user: kube-apiserver
INFO[2022-01-02 14:09:51] creating user: konnectivity-server
INFO[2022-01-02 14:09:51] creating user: kube-scheduler
INFO[2022-01-02 14:09:51] Installing k0s service

k0s stores its data in /var/lib/k0s/. You can confirm this with ls:

ls /var/lib/k0s/

Check the service unit file that was added:

$ sudo systemctl list-unit-files | grep k0s
k0scontroller.service                      enabled

The k0s install controller sub-command accepts the same flags and parameters as the k0s controller command.
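
For example, if you generated a custom configuration file earlier, you can pass it through at install time; a sketch assuming a config file at /etc/k0s/k0s.yaml (verify the --config flag against your k0s version):

sudo k0s install controller --single --config /etc/k0s/k0s.yaml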

Starting k0s service

To start the k0s service, run:

sudo k0s start

You can also use systemd directly:

sudo systemctl start k0scontroller

The k0s service will also start automatically after a node restart.

It typically takes a minute or two before the node is ready to deploy applications.
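
While you wait, you can follow the service with the standard systemd tooling (assuming a systemd host, as in this guide):

sudo systemctl status k0scontroller
sudo journalctl -u k0scontroller -f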

To get general information about your k0s instance’s status, run:

sudo k0s status

Output:

$ sudo k0s status
Version: v1.21.3+k0s.0
Process ID: 51348
Parent Process ID: 1
Role: controller+worker
Init System: linux-systemd
Service file: /etc/systemd/system/k0scontroller.service

Accessing the cluster using kubectl

Note: k0s includes the Kubernetes command-line tool kubectl.

Use kubectl to deploy your application or to check your node status:

sudo k0s kubectl get nodes

Output

$ sudo k0s kubectl get nodes
NAME     STATUS   ROLES    AGE     VERSION
ub-k0s   Ready    <none>   2m51s   v1.21.3+k0s
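
If you prefer plain kubectl commands, you can define a shell alias for the current session (an optional convenience, not required by k0s):

alias kubectl='sudo k0s kubectl'
kubectl get nodes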

You can check cluster-info using this command:

$ sudo k0s kubectl cluster-info
Kubernetes control plane is running at https://localhost:6443
CoreDNS is running at https://localhost:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://localhost:6443/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Getting the cluster conf file

Using the existing admin.conf file

To access the cluster from your local machine, you need the admin.conf file. On the node, it is located at /var/lib/k0s/pki/admin.conf.

Use this command to copy it to your local machine:

scp root@192.168.20.7:/var/lib/k0s/pki/admin.conf .

For the file to work from another machine, you will need to open it with a text editor like vim and update the server address from:

clusters:
- cluster:
    server: https://localhost:6443

to this:

clusters:
- cluster:
    server: https://192.168.20.7:6443
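
If you prefer to make the change from the command line, a sed one-liner does the same thing (assuming the node IP 192.168.20.7 used in this guide):

sed -i 's/localhost/192.168.20.7/' admin.conf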

Then point kubectl at the file by exporting KUBECONFIG:

export KUBECONFIG=./admin.conf

Test the config by listing the pods:

➜ kubectl get pods -A
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
kube-system   kube-proxy-k7855                 1/1     Running   0          21m
kube-system   kube-router-5x74j                1/1     Running   0          21m
kube-system   coredns-5ccbdcc4c4-j5kk6         1/1     Running   0          22m
kube-system   metrics-server-59d8698d9-nbfnt   1/1     Running   0          22m

Check node resource usage using the metrics server:

➜ kubectl top nodes
NAME     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
ub-k0s   98m          4%     609Mi           69%

Check pod resource usage using the metrics server:

➜ kubectl top pods -A
NAMESPACE     NAME                             CPU(cores)   MEMORY(bytes)
kube-system   coredns-5ccbdcc4c4-j5kk6         3m           12Mi
kube-system   kube-proxy-k7855                 1m           17Mi
kube-system   kube-router-5x74j                1m           18Mi
kube-system   metrics-server-59d8698d9-nbfnt   1m           12Mi

Using k0s to generate the cluster config file

Use this command to print out the admin config file, or redirect it straight to a file:

sudo k0s kubeconfig admin
sudo k0s kubeconfig admin > admin.conf
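
You can also pull the kubeconfig from a remote node in one step over SSH; a sketch assuming root SSH access to the node at 192.168.20.7 (if the server address in the generated file still points at localhost, apply the same sed fix as above):

ssh root@192.168.20.7 'k0s kubeconfig admin' > admin.conf
export KUBECONFIG=./admin.conf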

Testing the cluster by running a simple nginx app

Create an nginx deployment

kubectl create deploy nginx --image nginx:latest

Check with:

kubectl get all

Output

❯ kubectl get all

NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-55649fd747-xnlvv   1/1     Running   0          4m28s

NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        32m
service/nginx        NodePort    10.106.202.223   <none>        80:30047/TCP   2m27s

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx   1/1     1            1           4m28s

NAME                               DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-55649fd747   1         1         1       4m28s
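
For reference, the same nginx deployment can also be created declaratively; a sketch of the equivalent manifest applied through a heredoc (this guide sticks with the imperative command above):

# For reference only; the nginx deployment above was already created imperatively
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
EOF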

Expose it as a NodePort service

kubectl expose deploy nginx --port 80 --type NodePort

Output

❯ kubectl expose deploy nginx --port 80 --type NodePort

service/nginx exposed

Get the service

kubectl get service

Output

➜ kubectl get service

NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        30m
nginx        NodePort    10.106.202.223   <none>        80:30047/TCP   17s
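
The NodePort (30047 here) is assigned from Kubernetes' default 30000-32767 range, so yours will likely differ. You can read it programmatically with kubectl's jsonpath output:

kubectl get service nginx -o jsonpath='{.spec.ports[0].nodePort}'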

Since the service is exposed on NodePort 30047, pick any node's IP in the cluster (we only have one node, so use its IP) and test with this command:

curl http://192.168.20.7:30047

You should see the Nginx welcome page.

Cleaning up the nginx app

Delete the service:

kubectl delete service nginx

Delete the deployment:

kubectl delete deploy nginx

Uninstalling k0s

The removal of k0s is a two-step process.

Stop the service

Use the following command to stop the service

sudo k0s stop

Execute the k0s reset command

The k0s reset command cleans up the installed system service, data directories, containers, mounts and network namespaces.

sudo k0s reset

Reboot the system.

A few small k0s fragments persist even after the reset (for example, iptables rules), so you should reboot the machine after running the k0s reset command.
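
After the reboot, you can optionally remove the k0s binary itself; the path below matches where the download script placed it earlier in this guide:

sudo rm /usr/local/bin/k0s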

Wrapping up

In this guide, we set up a single-node k0s cluster and deployed a simple nginx app to verify that it is working fine.
