k0s, as the website (https://k0sproject.io/) states, is "The Simple, Solid & Certified Kubernetes Distribution". It comes as a single binary with no dependencies. k0s is:
Zero Friction – it drastically reduces the complexity of installing and running a fully conformant Kubernetes distribution.
Zero Deps – k0s is distributed as a single binary with minimal host OS dependencies besides the host OS kernel. It works with any operating system without additional software packages or configuration.
Zero Cost – k0s is completely free for personal or commercial use, and it always will be.
In this guide we are going to use an Ubuntu 20.04 virtual machine. At the end of this guide, we will have a full Kubernetes cluster with a single node that includes both the controller and the worker.
System requirements
Minimum hardware requirements
Role Virtual CPU (vCPU) Memory (RAM)
Controller node 1 vCPU (2 recommended) 1 GB (2 recommended)
Worker node 1 vCPU (2 recommended) 0.5 GB (1 recommended)
Controller + worker 1 vCPU (2 recommended) 1 GB (2 recommended)
SSD is recommended for optimal storage performance – cluster latency and throughput are sensitive to storage speed.
The specific storage consumption for k0s is as follows:
Role Storage (k0s part)
Controller node ~0.5 GB
Worker node ~1.3 GB
Controller + worker ~1.7 GB
Install k0s
Download k0s
Run the k0s download script to download the latest stable version of k0s and install it as an executable at /usr/local/bin/k0s.
curl -sSLf https://get.k0s.sh | sudo sh
You should see output similar to this:
$ curl -sSLf https://get.k0s.sh | sudo sh
Downloading k0s from URL: https://github.com/k0sproject/k0s/releases/download/v1.22.2+k0s.1/k0s-v1.22.2+k0s.1-amd64
k0s is now executable in /usr/local/bin
Install k0s as a service
The k0s install sub-command installs k0s as a system service on a local host running one of the supported init systems: systemd or OpenRC. You can execute the install for worker, controller, or single-node (controller+worker) instances.
Run the following command to install a single node k0s that includes the controller and worker functions with the default configuration:
sudo k0s install controller --single
What this does: it includes both controller and worker functions in this VM, which acts as both a controller and a worker node.
Output:
$ sudo /usr/local/bin/k0s install controller --single
INFO[2022-01-02 14:09:50] no config file given, using defaults
INFO[2022-01-02 14:09:50] creating user: etcd
INFO[2022-01-02 14:09:51] creating user: kube-apiserver
INFO[2022-01-02 14:09:51] creating user: konnectivity-server
INFO[2022-01-02 14:09:51] creating user: kube-scheduler
INFO[2022-01-02 14:09:51] Installing k0s service
k0s will store its data in /var/lib/k0s/; check with ls:
ls /var/lib/k0s/
Check that the service unit file was added with:
$ sudo systemctl list-unit-files | grep k0s
k0scontroller.service enabled
The k0s install controller sub-command accepts the same flags and parameters as the k0s controller command.
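For example, instead of the defaults noted in the install log above ("no config file given, using defaults"), you can pass a configuration file. A minimal sketch of such a k0s.yaml (field names follow the k0s ClusterConfig schema; the exact flags may differ between k0s releases):

```yaml
# Minimal k0s configuration sketch -- pass it at install time with
# something like: sudo k0s install controller --single -c k0s.yaml
apiVersion: k0s.k0sproject.io/v1beta1
kind: ClusterConfig
metadata:
  name: k0s
spec:
  api:
    port: 6443   # API server port (the default)
```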
Start k0s as a service
To start the k0s service, run:
sudo k0s start
Alternatively, start it via systemd directly:
sudo systemctl start k0scontroller
The k0s service will also start automatically after a node restart.
It typically takes a minute or two before the node is ready to deploy applications.
Check service, logs and k0s status
To get general information about your k0s instance’s status, run:
sudo k0s status
Output:
$ sudo k0s status
Version: v1.21.3+k0s.0
Process ID: 51348
Parent Process ID: 1
Role: controller+worker
Init System: linux-systemd
Service file: /etc/systemd/system/k0scontroller.service
Access your cluster using kubectl
Note: k0s includes the Kubernetes command-line tool kubectl.
Use kubectl to deploy your application or to check your node status:
sudo k0s kubectl get nodes
Output
$ sudo k0s kubectl get nodes
NAME STATUS ROLES AGE VERSION
ub-k0s Ready <none> 2m51s v1.21.3+k0s
You can check cluster-info using this command:
$ sudo k0s kubectl cluster-info
Kubernetes control plane is running at https://localhost:6443
CoreDNS is running at https://localhost:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://localhost:6443/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Getting the cluster config file
Using the existing admin.conf file
To access the cluster from your host machine, you need the admin.conf file, located at /var/lib/k0s/pki/admin.conf on the node.
Use this command to copy it to your local machine (the node's IP here is 192.168.20.7):
scp root@192.168.20.7:/var/lib/k0s/pki/admin.conf .
For the file to work, you will need to open it with a text editor like vim and update the server address from:
clusters:
- cluster:
    server: https://localhost:6443
to this:
clusters:
- cluster:
    server: https://192.168.20.7:6443
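This edit can also be done non-interactively; a small sketch using sed, assuming admin.conf is in the current directory and 192.168.20.7 is your node's IP:

```shell
# Rewrite the API server address in the copied kubeconfig in place.
# 192.168.20.7 is this guide's example node IP -- substitute your own.
sed -i 's|server: https://localhost:6443|server: https://192.168.20.7:6443|' admin.conf
```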
Then export the config with this command:
export KUBECONFIG=./admin.conf
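Note that a relative KUBECONFIG path stops working once you change directories; expanding it to an absolute path avoids that:

```shell
# Use an absolute path so kubectl finds the file from any directory.
export KUBECONFIG="$PWD/admin.conf"
```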
Test the config by checking pods:
➜ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system kube-proxy-k7855 1/1 Running 0 21m
kube-system kube-router-5x74j 1/1 Running 0 21m
kube-system coredns-5ccbdcc4c4-j5kk6 1/1 Running 0 22m
kube-system metrics-server-59d8698d9-nbfnt 1/1 Running 0 22m
Check node resource usage using the metrics server:
➜ kubectl top nodes
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
ub-k0s 98m 4% 609Mi 69%
Pod usage using the metrics server:
➜ kubectl top pods -A
NAMESPACE NAME CPU(cores) MEMORY(bytes)
kube-system coredns-5ccbdcc4c4-j5kk6 3m 12Mi
kube-system kube-proxy-k7855 1m 17Mi
kube-system kube-router-5x74j 1m 18Mi
kube-system metrics-server-59d8698d9-nbfnt 1m 12Mi
Using k0s to generate the cluster config file
Use this command to print out the admin config file:
sudo k0s kubeconfig admin
Or save it directly to a file:
sudo k0s kubeconfig admin > admin.conf
Testing the cluster by running a simple nginx app
Create an nginx deployment
kubectl create deploy nginx --image nginx:latest
Check with:
kubectl get all
Output
❯ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/nginx-55649fd747-xnlvv 1/1 Running 0 4m28s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 32m
service/nginx NodePort 10.106.202.223 <none> 80:30047/TCP 2m27s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx 1/1 1 1 4m28s
NAME DESIRED CURRENT READY AGE
replicaset.apps/nginx-55649fd747 1 1 1 4m28s
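The imperative kubectl create deploy command above corresponds roughly to applying a manifest like the following (a sketch; the app: nginx label matches what kubectl generates by default):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx        # must match the pod template labels below
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
```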
Expose as NodePort
kubectl expose deploy nginx --port 80 --type NodePort
Output
❯ kubectl expose deploy nginx --port 80 --type NodePort
service/nginx exposed
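The expose command corresponds roughly to a Service manifest like this (a sketch; the nodePort itself, 30047 in this guide's output, is auto-assigned from the 30000–32767 range unless you pin it explicitly):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx      # matches the Deployment's pod label
  ports:
  - port: 80        # cluster-internal port
    targetPort: 80  # container port
```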
Check
kubectl get service
Output
➜ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 30m
nginx NodePort 10.106.202.223 <none> 80:30047/TCP 17s
Since the app is exposed on port 30047, pick any node's IP in the cluster (we only have one node, so use its IP) and test with this command:
curl http://192.168.20.7:30047
You should see the Nginx welcome page.
Cleaning up nginx app
Delete the service:
kubectl delete service nginx
Delete deployment
kubectl delete deploy nginx
Uninstall k0s
The removal of k0s is a two-step process.
Stop the service.
Use the following command to stop the service:
sudo k0s stop
Execute the k0s reset command.
The k0s reset command cleans up the installed system service, data directories, containers, mounts and network namespaces.
sudo k0s reset
Reboot the system.
A few small k0s fragments persist even after the reset (for example, iptables rules). As such, you should reboot after running the k0s reset command.