If you ever wanted to try out the Kubernetes (K8s) container platform but were hesitant because it’s complicated to set up, fear no more – K3s comes to the rescue.
K3s is a Kubernetes distribution developed by Rancher. Compared to other Kubernetes distributions it is lightweight, really easy to install, and has only minimal operating system dependencies. One of the things that makes it easy to use is that many Kubernetes components are bundled in a single binary that bootstraps the Kubernetes installation:
Ingress controller
K3s ships with Traefik as the default Ingress controller. An Ingress controller acts as a reverse proxy and load balancer that forwards web requests to your containers. If you prefer another Ingress controller (Nginx, for example), you can tell the installer not to deploy Traefik and deploy your own instead.
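If you want to opt out of Traefik, the installer accepts extra server flags through the INSTALL_K3S_EXEC environment variable. A minimal sketch, assuming a current K3s release where the flag is called ‘--disable’ (older releases used ‘--no-deploy’ instead):

```shell
# Install K3s without the bundled Traefik Ingress controller
# (the flag name may differ on older K3s releases)
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable traefik" sh -
```

You can then deploy the Ingress controller of your choice into the cluster yourself.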
Storage drivers
Storage drivers in Kubernetes are responsible for managing persistent volumes, i.e. volumes that persist across container restarts. By default K3s deploys the ‘Local-path-provisioner’ storage driver within the cluster. It provides persistent volumes by simply creating folders that are bind-mounted into the container. It is one of the simplest storage drivers and has limited capabilities: it only makes sense if you don’t need to share persistent volumes across multiple nodes. However, for testing Kubernetes on a single node it is ideal and completely sufficient.
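To see the Local-path-provisioner in action, you can request a volume through the ‘local-path’ storage class that K3s installs by default. A minimal sketch; the claim name and size here are made up for illustration:

```yaml
# PersistentVolumeClaim served by the Local-path-provisioner
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim            # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce           # local-path volumes are tied to a single node
  storageClassName: local-path
  resources:
    requests:
      storage: 1Gi
```

Once a Pod consumes the claim, the provisioner creates a folder on the node (by default under /var/lib/rancher/k3s/storage) and bind mounts it into the container.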
Single node cluster
Kubernetes can form large clusters with many nodes. For your first steps with Kubernetes, however, I recommend keeping things simple and deploying a single-node cluster, which is really easy with K3s.
To install K3s as a single-node cluster, simply run the following command with root privileges:
curl -sfL https://get.k3s.io | sh -
[INFO]  Finding release for channel stable
[INFO]  Using v1.17.4+k3s1 as release
[INFO]  Downloading hash https://github.com/rancher/k3s/releases/download/v1.17.4+k3s1/sha256sum-amd64.txt
[INFO]  Downloading binary https://github.com/rancher/k3s/releases/download/v1.17.4+k3s1/k3s
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Creating /usr/local/bin/ctr symlink to k3s
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO]  systemd: Starting k3s
This installs the K3s binary to /usr/local/bin/k3s and creates multiple symlinks to that binary. The installer also creates a systemd service unit and starts the service on every boot:
systemctl status k3s.service
● k3s.service - Lightweight Kubernetes
     Loaded: loaded (/etc/systemd/system/k3s.service; enabled; vendor preset: enabled)
     Active: active (running) since Wed 2021-01-06 23:11:08 CET; 1 weeks 5 days ago
       Docs: https://k3s.io
    Process: 492 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=0/SUCCESS)
    Process: 502 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
   Main PID: 506 (k3s-server)
      Tasks: 328
     Memory: 1.1G
     CGroup: /system.slice/k3s.service
             ├─ 506 /usr/local/bin/k3s server
             ├─1465 containerd
             ├─2042 /var/lib/rancher/k3s/data/986d5e8cf570...
             ├─2062 /pause
             ├─2300 /var/lib/rancher/k3s/data/986d5e8cf570...
             ├─2320 /pause
             ├─2383 /traefik --configfile=/config/traefik.toml
             ...
Connecting to your Cluster
During the K3s installation a kubeconfig file is written to ‘/etc/rancher/k3s/k3s.yaml’. Using that file you can connect to the cluster. The kubeconfig file looks similar to this:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0...
    server: https://127.0.0.1:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    client-certificate-data: LS...
    client-key-data: LS...
To connect to the cluster using the kubectl command, you can specify the location of the kubeconfig file with the ‘--kubeconfig’ option as shown below:
# Get a list of cluster nodes
kubectl --kubeconfig=/etc/rancher/k3s/k3s.yaml get nodes
NAME                          STATUS   ROLES                  AGE    VERSION
vmxx28193.bydddddserver.net   Ready    control-plane,master   255d   v1.20.0+k3s2
# Get a list of deployed Pods
kubectl --kubeconfig=/etc/rancher/k3s/k3s.yaml get pods --all-namespaces
NAMESPACE     NAME                                      READY   STATUS      RESTARTS   AGE
kube-system   local-path-provisioner-58fb86bdfd-r5zcp   1/1     Running     0          2m10s
kube-system   metrics-server-6d684c7b5-2g7wq            1/1     Running     0          2m10s
kube-system   helm-install-traefik-zv6bp                0/1     Completed   0          2m9s
kube-system   svclb-traefik-98bgg                       2/2     Running     0          107s
kube-system   coredns-6c6bb68b64-hr2dv                  1/1     Running     0          2m10s
kube-system   traefik-7b8b884c8-qqhgf                   1/1     Running     0          107s
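If you don’t want to pass ‘--kubeconfig’ with every call, two common alternatives are setting the KUBECONFIG environment variable or copying the file to the per-user default location. A sketch, assuming a non-root user that should get its own copy of the file:

```shell
# Option 1: point kubectl at the file for the current shell session
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml

# Option 2: copy the file to the default location for your user
mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown "$(id -u):$(id -g)" ~/.kube/config
kubectl get nodes
```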
Read more about other methods to connect to your cluster in this article.
UFW Firewall (optional)
If your host is directly reachable from the internet, it is advisable to configure a firewall. And if you already have a firewall configured on your host, you might need to add a few rules so Kubernetes can function correctly. If your host is only reachable from your internal network, you can skip this part.
The default firewall on Ubuntu is UFW, so I will show the corresponding UFW commands.
Firewall rule for allowing intra-cluster traffic (credits go to https://jbhannah.net/articles/k3s-wireguard-kilo ):
ufw default allow routed
ufw allow in on cni0 from 10.42.0.0/16 comment "K3s rule: allow traffic from kube-system pods"
ufw allow in on kube-bridge from 10.42.0.0/16 comment "K3s"
Allow traffic to the Ingress controller:
ufw allow 80/tcp comment "K8s Ingress Traefik"
ufw allow 443/tcp comment "K8s Ingress Traefik"
Allow access to the Kubernetes API from your trusted networks (replace IP ranges accordingly):
ufw allow from 10.x.x.0/24 to any port 6443 comment "K8s: allow access to kube-api from internal network"
ufw allow from 2a02:abcd:abcd:abcd::/64 to any port 6443 comment "K8s: allow access to kube-api from home"
Don’t forget to add rules for SSH from your network (if you haven’t already).
ufw allow from x.x.x.0/24 to any port 22 comment "SSH: allow SSH access from home"
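Afterwards it is worth double-checking that UFW is active and that the rules look as intended. Make sure your SSH rule is in place before enabling UFW on a remote host, or you may lock yourself out:

```shell
# Enable UFW if it is not active yet, then review the rules
ufw enable
ufw status verbose
```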