How to Use Ansible to Set Up and Upgrade K3s, the Lightweight Kubernetes

Kubernetes is an open-source container orchestration system for automating software deployment, scaling, and management. Google originally designed Kubernetes, but the Cloud Native Computing Foundation now maintains the project. It groups containers that make up an application into logical units for easy management and discovery.

K3s is a certified, lightweight Kubernetes distribution built for IoT and edge computing. It is highly available and designed for production workloads in unattended, resource-constrained, remote locations or inside IoT appliances. K3s is packaged as a single binary of less than 50 MB, which reduces the dependencies and steps needed to install, run and auto-update a production Kubernetes cluster.

In this guide, we will learn how to use Ansible to automate the setup and upgrade process of K3s.


Prerequisites

To follow along, you need the following:

  • At least one master server and one node server on which to set up K3s
  • Ansible set up locally
  • SSH access to all nodes

Set up inventory file

We will need an inventory file to define how we access our cluster. This inventory file defines a master and two K3s nodes.

I prefer my inventory as a YAML file because it is readable. Save the file under inventory/hosts.yml.

all:
  hosts:
    localhost:
      ansible_ssh_host: 127.0.0.1
      ansible_connection: local
    master:
      ansible_ssh_host: 10.2.11.10
      ansible_ssh_user: admin
      ansible_ssh_port: 22
      ansible_ssh_private_key_file: ~/.ssh/id_rsa
    k8s_node1:
      ansible_ssh_host: 10.2.11.11
      ansible_ssh_user: admin
      ansible_ssh_port: 5722
      ansible_ssh_private_key_file: ~/.ssh/id_rsa
    k8s_node2:
      ansible_ssh_host: 10.2.11.12
      ansible_ssh_user: admin
      ansible_ssh_port: 5722
      ansible_ssh_private_key_file: ~/.ssh/id_rsa
  children:
    k8s_cluster:
      hosts:
        master:
        k8s_node1:
        k8s_node2:
    k8s_masters:
      hosts:
        master:
    k8s_nodes:
      hosts:
        k8s_node1:
        k8s_node2:
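
Before touching K3s, it helps to confirm that Ansible can reach every host defined above. An ad-hoc ping against the k8s_cluster group is a quick check:

ansible -i inventory/hosts.yml k8s_cluster -m ping

Each host should answer with pong; if one does not, fix its SSH details in the inventory before continuing.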

Setting up Kubernetes

To set up our cluster, we need to do the following:

  • Set up the master
  • Copy the kubeconfig file to the local machine
  • For each node, copy the K3s node token from the master and use it to join the node to the cluster

To set up the master, we use the curl -sfL https://get.k3s.io | sh - command. Since we want this to be idempotent, we mark that the task creates the /etc/rancher/k3s/k3s.yaml file, so Ansible skips it if that file already exists.

- name: Install k3s master
  shell: |
    curl -sfL https://get.k3s.io | sh -
  args:
    creates: /etc/rancher/k3s/k3s.yaml
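
By default this installs the latest stable release. If you want a reproducible setup, the installer also accepts an INSTALL_K3S_VERSION environment variable, the same variable we will use later for upgrades. A variation of the task above, pinned to an example version, could look like this:

- name: Install k3s master (pinned version)
  shell: |
    curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION="v1.30.6+k3s1" sh -
  args:
    creates: /etc/rancher/k3s/k3s.yaml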

Next, we copy the kubeconfig file to the local machine if it doesn't already exist there:

- name: Check if kubeconfig exists on local machine
  stat:
    path: ~/.kube/citizix_config
  register: kubeconfig_status
  delegate_to: localhost
  run_once: true
  become: no

- name: Copy kubeconfig to local machine
  fetch:
    src: /etc/rancher/k3s/k3s.yaml
    dest: ~/.kube/citizix_config
    flat: yes
  when: not kubeconfig_status.stat.exists
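
Note that the kubeconfig K3s generates points at https://127.0.0.1:6443, which only works on the master itself. An optional follow-up task, sketched here against the inventory above, can rewrite the fetched copy to use the master's address:

- name: Point the fetched kubeconfig at the master
  ansible.builtin.replace:
    path: ~/.kube/citizix_config
    regexp: 'https://127\.0\.0\.1:6443'
    replace: "https://{{ hostvars['master'].ansible_ssh_host }}:6443"
  delegate_to: localhost
  run_once: true
  become: no
  when: not kubeconfig_status.stat.exists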

Finally, on the nodes, we fetch the node token from the master and use it to install the K3s agent:

- name: Get master node token
  shell: "cat /var/lib/rancher/k3s/server/node-token"
  register: master_token
  delegate_to: master

- name: Install k3s worker
  shell: |
    curl -sfL https://get.k3s.io | K3S_URL=https://{{ hostvars['master'].ansible_ssh_host }}:6443 K3S_TOKEN={{ master_token.stdout }} sh -
  args:
    # the agent install creates the k3s-agent unit, not /etc/rancher/k3s/k3s.yaml
    creates: /etc/systemd/system/k3s-agent.service
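
To confirm that the workers registered before moving on, you can ask the API server from the master, a quick sketch using the kubectl that ships inside the k3s binary:

- name: List cluster nodes from the master
  shell: k3s kubectl get nodes
  delegate_to: master
  run_once: true
  register: cluster_nodes

- name: Show registered nodes
  debug:
    var: cluster_nodes.stdout_lines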

Here is the full playbook. Save it as k3s-setup.yml

---
- hosts: k8s_masters
  become: yes
  tasks:
    - name: Install k3s master
      shell: |
        curl -sfL https://get.k3s.io | sh -
      args:
        creates: /etc/rancher/k3s/k3s.yaml

    - name: Check if kubeconfig exists on local machine
      stat:
        path: ~/.kube/citizix_config
      register: kubeconfig_status
      delegate_to: localhost
      run_once: true
      become: no

    - name: Copy kubeconfig to local machine
      fetch:
        src: /etc/rancher/k3s/k3s.yaml
        dest: ~/.kube/citizix_config
        flat: yes
      when: not kubeconfig_status.stat.exists

- hosts: k8s_nodes
  become: yes
  tasks:
    - name: Get master node token
      shell: "cat /var/lib/rancher/k3s/server/node-token"
      register: master_token
      delegate_to: master

    - name: Install k3s worker
      shell: |
        curl -sfL https://get.k3s.io | K3S_URL=https://{{ hostvars['master'].ansible_ssh_host }}:6443 K3S_TOKEN={{ master_token.stdout }} sh -
      args:
        # the agent install creates the k3s-agent unit, not /etc/rancher/k3s/k3s.yaml
        creates: /etc/systemd/system/k3s-agent.service

Use the ansible-playbook command to run the playbook.

ansible-playbook -i inventory/hosts.yml k3s-setup.yml -vv

This should create the cluster.

Confirm that everything is working:

$ export KUBECONFIG=~/.kube/citizix_config
$ kubectl get no

NAME                 STATUS   ROLES                  AGE   VERSION
k8s-master           Ready    control-plane,master   83d   v1.30.6+k3s1
node-1.citizix.com   Ready    <none>                 60d   v1.30.6+k3s1
node-2.citizix.com   Ready    <none>                 60d   v1.30.6+k3s1

Upgrading Kubernetes

If you have an existing cluster that you want to upgrade, you can automate the process with Ansible.

It is important to note that this involves some downtime, so schedule upgrades during maintenance windows. Also consider backing up important data in case things don't go right.
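
On a default single-server install, K3s keeps its state under /var/lib/rancher/k3s/server. A minimal backup sketch for the master, assuming that default SQLite datastore (clusters using embedded etcd or an external database need a different approach), simply archives that directory, ideally after the k3s service has been stopped:

- name: Back up the k3s server state on the master
  shell: tar czf /root/k3s-server-backup-$(date +%F).tar.gz /var/lib/rancher/k3s/server
  when: "'k8s_masters' in group_names"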

The upgrade process involves:

  • Stopping the k3s service on the master
  • Stopping the k3s-agent service on the worker nodes
  • Upgrading the master
  • Upgrading the worker nodes, which requires the master node token

First, stop the k3s service on the master and the k3s-agent service on the worker nodes, then run the upgrade command on the master:

- name: Stop k3s service on master
  ansible.builtin.systemd:
    name: k3s
    state: stopped
  when: "'k8s-masters' in group_names"

- name: Stop k3s-agent service on worker nodes
  ansible.builtin.systemd:
    name: k3s-agent
    state: stopped
  when: "'k8s-nodes' in group_names"

- name: Upgrade k3s on master
  shell: |
    curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION="{{ k3s_version }}" sh -
  when: "'k8s-masters' in group_names"

To upgrade the nodes, get the master node token, then run the upgrade command

- name: Get master node token
  shell: "cat /var/lib/rancher/k3s/server/node-token"
  register: master_token
  delegate_to: master
  run_once: true

- name: Upgrade k3s on worker nodes
  shell: |
    curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION="{{ k3s_version }}" K3S_URL=https://{{ hostvars['master']['ansible_ssh_host'] }}:6443 K3S_TOKEN={{ master_token.stdout }} sh -
  when: "'k8s-nodes' in group_names"

Finally, start the stopped services:

- name: Start k3s service on master
  ansible.builtin.systemd:
    name: k3s
    state: started
  when: "'k8s-masters' in group_names"

- name: Start k3s-agent service on worker nodes
  ansible.builtin.systemd:
    name: k3s-agent
    state: started
  when: "'k8s-nodes' in group_names"

Here is the full playbook. Save it as k3s-upgrade.yml:

---
- name: Upgrade k3s cluster to latest defined version
  hosts: k8s_cluster
  become: yes
  vars:
    k3s_version: "v1.32.0+k3s1"
  tasks:
    - name: Stop k3s service on master
      ansible.builtin.systemd:
        name: k3s
        state: stopped
      when: "'k8s-masters' in group_names"

    - name: Stop k3s-agent service on worker nodes
      ansible.builtin.systemd:
        name: k3s-agent
        state: stopped
      when: "'k8s-nodes' in group_names"

    - name: Upgrade k3s on master
      shell: |
        curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION="{{ k3s_version }}" sh -
      when: "'k8s-masters' in group_names"

    - name: Get master node token
      shell: "cat /var/lib/rancher/k3s/server/node-token"
      register: master_token
      delegate_to: master
      run_once: true

    - name: Upgrade k3s on worker nodes
      shell: |
        curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION="{{ k3s_version }}" K3S_URL=https://{{ hostvars['master']['ansible_ssh_host'] }}:6443 K3S_TOKEN={{ master_token.stdout }} sh -
      when: "'k8s-nodes' in group_names"

    - name: Start k3s service on master
      ansible.builtin.systemd:
        name: k3s
        state: started
      when: "'k8s-masters' in group_names"

    - name: Start k3s-agent service on worker nodes
      ansible.builtin.systemd:
        name: k3s-agent
        state: started
      when: "'k8s-nodes' in group_names"

Use the ansible-playbook command to run the playbook.

ansible-playbook -i inventory/hosts.yml k3s-upgrade.yml -vv

This should upgrade the cluster to the version defined in k3s_version.
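
Once the playbook finishes, you can confirm that every node reports the new version:

export KUBECONFIG=~/.kube/citizix_config
kubectl get nodes

The VERSION column should now show the release you set in k3s_version.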

Conclusion and Next Steps

In this guide, we managed to create and upgrade a K3s cluster. Once it is running, you can extend it with monitoring tools like Prometheus or logging solutions like Fluentd.
