
K3s on Odroid MC1s, a Guide.

DISCLAIMER

This is not really a guide, it’s essentially my notes from when I set up k3s on my Odroid MC1 cluster.

Setting up an Odroid MC1/N2 K3S cluster

I initially saw a video by NetworkChuck about setting up a Raspberry Pi k3s cluster; see his blog post here. I first tried to set up k3s on my Odroid cluster using his method; however, as noted at the bottom of this post, I had some issues with it. So, after some time spent trying to fix the issues that were preventing his method from working, I went looking for another option.

My personal notes

I have included these just in case they lead someone else in the right direction in the future.

It seems NetworkChuck's setup does not work for me on my Odroids; it gets installed but fails consistently for some reason. I will try with this/these soon: option 1, option 2

It seems that this is my issue: Kubernetes CGROUP PIDS
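
For what it's worth, a quick way to check whether your kernel has the same problem is to look for the pids controller in /proc/cgroups; it is missing on the stock MC1 kernel, which is what breaks the kubelet:

    # no "pids" line here means the kernel was built without CONFIG_CGROUP_PIDS
    grep pids /proc/cgroups

    # k3s's own logs should also mention the missing cgroup if this is the problem
    journalctl -u k3s --no-pager | grep -i cgroup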

The Docker Method

Initial Setup

Firstly, set up the master and worker hosts.

My setup is an Odroid N2 as the master at 192.168.0.180, with five Odroid MC1s as workers at 192.168.0.181 through .185.

Both the network config and the hostname can be set up by mounting the rootfs and manually editing/adding the required files.

Example netplan ‘10-config.yaml’:

    network:
        version: 2
        renderer: networkd
        ethernets:
            eth0:
                addresses: [192.168.0.XXX/16]
                gateway4: 192.168.0.1
                nameservers:
                    addresses: [192.168.0.1, 1.1.1.1]
                    search: [mydomain]
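
Nothing needs to be applied by hand here since the file is written to the mounted rootfs and picked up at boot, but if you end up tweaking it on a node that is already running, netplan can validate and apply it for you:

    # 'try' rolls the change back automatically if you lose connectivity
    sudo netplan try
    sudo netplan apply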

Also set the timezone if you want.

    sudo timedatectl set-timezone Australia/Adelaide

In my case the following was used (sketched as shell commands after the list):

  1. flash image to micro sdcard
  2. mount the micro sdcard rootfs partition: mount /dev/mmcblk... /mnt/tmp
  3. edit /etc/hostname and add the netplan config above to /etc/netplan
  4. unmount /mnt/tmp
  5. stick sdcard in odroid SBC and power on.
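
A minimal sketch of steps 2 to 4, assuming the rootfs is the second partition on the card and that /mnt/tmp already exists (adjust the device name and hostname to suit):

    # mount the rootfs partition of the freshly flashed card (device name will vary)
    sudo mount /dev/mmcblk0p2 /mnt/tmp

    # set the hostname and drop in the netplan config from above
    echo "odroid-mc1-1" | sudo tee /mnt/tmp/etc/hostname
    sudo cp 10-config.yaml /mnt/tmp/etc/netplan/10-config.yaml

    sudo umount /mnt/tmp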

Kernel Patch for MC1s

We must rebuild the kernel with updated options so that cgroup_pids is enabled. Hardkernel has a guide here for rebuilding; only two edits are required after the make odroidxu4_defconfig step, and they are covered here.

Note that the following tools are required for the build: bison, flex, libssl-dev, and bc.

    apt install bison flex libssl-dev bc -y
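
For the config edits themselves, the kernel's scripts/config helper can be used instead of editing .config by hand. This is just a rough sketch run from the kernel source tree after the make odroidxu4_defconfig step; the actual pair of edits is covered in the link above, cgroup_pids being the option that matters for k3s:

    # enable the pids cgroup controller in the freshly generated .config
    ./scripts/config --enable CONFIG_CGROUP_PIDS
    # let kconfig resolve anything the new option depends on
    make olddefconfig

Once the patched kernel is built and booted, grep pids /proc/cgroups should show the controller.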

The K3S install

Run the following on all nodes:

    iptables -F \
    && update-alternatives --set iptables /usr/sbin/iptables-legacy \
    && update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy \
    && reboot
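
After the reboot it's worth checking that the legacy alternatives stuck; the legacy binary announces itself in its version string:

    # should print something like "iptables v1.8.x (legacy)"
    iptables --version
    update-alternatives --display iptables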

    apt update; apt upgrade -y; apt autoremove -y; apt clean; apt install docker.io curl -y
    reboot

    systemctl start docker
    systemctl enable docker

    systemctl status docker

    # Be sure that the firewall is disabled for ease
    ufw disable
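
Before moving on to the k3s install, it's also worth confirming that Docker works and that the pids cgroup from the kernel patch is actually present, since that was the thing breaking my first attempt:

    # pulls and runs a throwaway test container
    docker run --rm hello-world

    # should list the pids controller if the patched kernel is in use
    grep pids /proc/cgroups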

Then run the following only on the master node:

    # for master
    curl -sfL https://get.k3s.io | K3S_KUBECONFIG_MODE="644" sh -s - --docker

    # check it's running
    systemctl status k3s
    kubectl get nodes

    # Get token from master, make sure to store it somewhere
    cat /var/lib/rancher/k3s/server/node-token
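
Since K3S_KUBECONFIG_MODE="644" leaves the kubeconfig readable, you can also drive the cluster from your workstation instead of SSHing into the master each time. A rough sketch, assuming SSH access to the master (the kubeconfig path is the k3s default):

    # on your workstation, not on a node
    scp <user>@<master_IP>:/etc/rancher/k3s/k3s.yaml ~/.kube/config

    # the file points at 127.0.0.1 by default, so swap in the master's IP
    sed -i 's/127.0.0.1/<master_IP>/' ~/.kube/config
    kubectl get nodes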

Then run the following on the worker nodes, updating the command for each:

    # for workers
    # Fill this out ...
    curl -sfL https://get.k3s.io | K3S_URL=https://<master_IP>:6443 K3S_TOKEN=<join_token> \
    K3S_NODE_NAME="odroid-mc1-X" sh -s - --docker

    systemctl status k3s-agent

And thus you should be done; check the master node to see:

    # Check node was added on master
    kubectl get nodes

And all should be up and running correctly; it was for me, at least.
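
If you want a quick smoke test beyond kubectl get nodes, scheduling something onto the workers is a reasonable check; for example:

    # throwaway nginx deployment, scaled out so pods land on the workers
    kubectl create deployment nginx-test --image=nginx
    kubectl scale deployment nginx-test --replicas=3
    kubectl get pods -o wide

    # clean up afterwards
    kubectl delete deployment nginx-test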

I have kept the following notes attached here for posterity. They actually came first in this effort, chronologically, but given that I stopped abruptly near the end, I felt it better to lead with the successful solution.

-ink


Networkchuck

This did not initially work for me and I gave up on it. I think the issue was actually the cgroup_pids thing covered above, but once I got my second attempt working I didn't want to come back to this.

Once these have been set up with IP addresses and hostnames (odroid-n2, odroid-mc1-1 to 5), you will want to set up SSH access to each machine; I have a couple of Ansible playbooks that I use for this.

Either use the following to set up users and access:

    - hosts: all
      become: yes
      tasks:
        - name: create the 'kuber' user
          user: name=kuber append=yes state=present createhome=yes shell=/bin/bash

        - name: allow 'kuber' to have passwordless sudo
          lineinfile:
              dest: /etc/sudoers
              line: "kuber ALL=(ALL) NOPASSWD: ALL"
              validate: "visudo -cf %s"

        - name: set up authorised keys for the 'kuber' user
          authorized_key: user=kuber key="{{item}}"
          with_file:
              - ~/.ssh/id_rsa.pub

Or if you already set up users:

    - hosts: all
      become: yes
      tasks:
          - name: set up authorised keys for the 'root' user
            authorized_key: user=root key="{{item}}"
            with_file:
                - ~/.ssh/id_rsa.pub

The above can be used with a hosts file such as the following:

    [masters]
    master ansible_host=192.168.0.XXX ansible_user=<user> ansible_ssh_pass=<password>

    [workers]
    worker1 ansible_host=192.168.0.XXX ansible_user=<user> ansible_ssh_pass=<password>
    worker2...
    ...

    [all:vars]
    ansible_python_interpreter=/usr/bin/python3
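
With the inventory and one of the playbooks saved (the file names below are just placeholders), running it is the usual ansible-playbook invocation; note that password-based SSH from Ansible needs sshpass on the control machine:

    # sshpass is needed because the inventory uses ansible_ssh_pass
    sudo apt install sshpass -y

    # 'hosts' is the inventory above, 'setup-users.yml' is one of the playbooks
    ansible-playbook -i hosts setup-users.yml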

Then the following commands:

    sudo iptables -F \
    && sudo update-alternatives --set iptables /usr/sbin/iptables-legacy \
    && sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy \
    && sudo reboot

This is a useful command, reformatted from step 2.2.1 of the reference material here.

Then the following on the master node:

    curl -sfL https://get.k3s.io | K3S_KUBECONFIG_MODE="644" sh -s -

Then, on the master node, grab its node token:

    sudo cat /var/lib/rancher/k3s/server/node-token

Then run the following on each of the workers:

(note in my case curl was not installed)

  • [your server] = master node ip

  • YOURTOKEN = token from above

  • servername = unique name for node (I use hostname)

    curl -sfL https://get.k3s.io | K3S_TOKEN="YOURTOKEN" \
    K3S_URL="https://[your server]:6443" K3S_NODE_NAME="servername" sh -

    # I used
    apt install curl -y && curl -sfL https://get.k3s.io | \
    K3S_TOKEN="YOURTOKEN" K3S_URL="https://[your server]:6443" K3S_NODE_NAME="servername" sh -

Sadly, this is where my notes ended: although the install worked, all of the system pods were failing, and thus I moved on to the method listed above.