ken

How to Set Up a K3s Kubernetes Cluster on Ubuntu: Step-by-Step

2026-04-29 · 9 min read

This guide walks through setting up a K3s Kubernetes cluster on Ubuntu with 3 nodes: one master and two workers. K3s is a lightweight Kubernetes distribution designed for resource-constrained environments — it packages the kubelet, containerd, and the control-plane components into a single binary, and by default replaces etcd with an embedded SQLite database.

Why K3s Instead of Full Kubernetes

I have set up full Kubernetes clusters with kubeadm. It takes hours, requires significant memory, and the complexity is overwhelming for a homelab or small team.

K3s strips away the complexity. The binary is under 100MB. It runs on a 1GB RAM VPS. By default it uses SQLite instead of etcd (etcd alone wants fast disks and around 2GB of RAM for itself). But it exposes the same Kubernetes API — your kubectl commands and YAML manifests work identically.

Prerequisites

```bash
# Three Ubuntu 22.04+ nodes with:
# - Static IP addresses or DNS hostnames
# - 2GB RAM minimum per node
# - 20GB disk per node
# - Port 6443 open between nodes (Kubernetes API)
```

Step 1: Install K3s on the Master Node

SSH into your master node and run:

```bash
curl -sfL https://get.k3s.io | sh -

# Check status
sudo systemctl status k3s

# Get the node token (needed for worker nodes)
sudo cat /var/lib/rancher/k3s/server/node-token
```

This single command installs everything: Kubernetes API server, scheduler, controller manager, kubelet, containerd, CoreDNS, and the Traefik ingress controller.
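If you want to customize the install — for example, to skip the bundled Traefik because you plan to run your own ingress controller — K3s also reads a config file at `/etc/rancher/k3s/config.yaml` on startup. A minimal sketch; the `disable` list, kubeconfig mode, and label shown here are illustrative choices, not defaults you must change:

```yaml
# /etc/rancher/k3s/config.yaml — create before running the install script
write-kubeconfig-mode: "0644"   # make k3s.yaml readable without sudo
disable:
  - traefik                     # skip the bundled ingress controller
node-label:
  - "role=master"
```

Every flag that `k3s server` accepts on the command line can be expressed here instead, which keeps the install reproducible.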

Step 2: Configure kubectl on the Master

```bash
# Copy the kubeconfig to your home directory
mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown $USER:$USER ~/.kube/config

# Verify
kubectl get nodes
# Should show your master node with STATUS "Ready"
```

Step 3: Join Worker Nodes

On each worker node, run:

```bash
curl -sfL https://get.k3s.io | K3S_URL=https://<MASTER_IP>:6443 K3S_TOKEN=<NODE_TOKEN> sh -
```

Replace <MASTER_IP> with your master's IP address and <NODE_TOKEN> with the token from Step 1.
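The same join can be done declaratively: worker nodes also read `/etc/rancher/k3s/config.yaml`, so instead of passing environment variables you can drop a config file on each worker before running the install script. A sketch with the same placeholders as above:

```yaml
# /etc/rancher/k3s/config.yaml on a worker node
server: https://<MASTER_IP>:6443
token: <NODE_TOKEN>
node-label:
  - "role=worker"
```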

Step 4: Verify the Cluster

```bash
# All nodes should show "Ready"
kubectl get nodes

# Check all pods are running
kubectl get pods -A

# View cluster info
kubectl cluster-info
```

Step 5: Deploy a Test Application

Save the following as nginx.yaml:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: ClusterIP
  ports:
  - port: 80
  selector:
    app: nginx
```

```bash
kubectl apply -f nginx.yaml
kubectl get pods -w  # Watch pods start up
```

Step 6: Enable Ingress

K3s comes with Traefik pre-installed. Create an ingress for your service:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80
```

Step 7: Install cert-manager for SSL

Automatic TLS certificates with Let's Encrypt:

```bash
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/latest/download/cert-manager.yaml

# Create a ClusterIssuer
kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: your@email.com
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - http01:
        ingress:
          class: traefik
EOF
```
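With the issuer in place, TLS is requested per-Ingress: the `cert-manager.io/cluster-issuer` annotation tells cert-manager to obtain a certificate and store it in the secret named under `tls`. A sketch extending the Step 6 ingress — the host and secret name are examples:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
  - hosts:
    - app.example.com
    secretName: nginx-tls        # cert-manager creates this secret
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80
```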

Maintenance Commands

```bash
# Snapshot the datastore (note: etcd-snapshot requires embedded etcd;
# with the default SQLite datastore, back up /var/lib/rancher/k3s/server/db/ instead)
sudo k3s etcd-snapshot save --name my-backup

# Restore from snapshot
sudo k3s server --cluster-reset --cluster-reset-restore-path=/var/lib/rancher/k3s/server/db/snapshots/my-backup

# Useful aliases I keep in my ~/.bashrc
alias k=kubectl
alias kg='kubectl get'
alias kgp='kubectl get pods'
alias kgn='kubectl get nodes'
alias kga='kubectl get all -A'
```

Persistent Storage with Longhorn

K3s ships with the local-path provisioner as its default StorageClass, but it stores each volume on a single node's disk. For stateful applications that need to survive node failure, you want distributed storage. Longhorn is the most popular option for K3s:

```bash
kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/longhorn.yaml
```

Longhorn provides distributed block storage that replicates data across your nodes. A pod on node1 can still access its data even if it gets rescheduled to node2.
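Once Longhorn is running, it registers a StorageClass named `longhorn`, so a stateful workload just requests a replicated volume through a PersistentVolumeClaim. A sketch — the name and size are examples:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 5Gi
```

Mount it in a pod spec via `volumes` / `persistentVolumeClaim` as usual; Longhorn handles the replication underneath.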

Monitoring with k9s and Metrics Server

k9s is a terminal UI for Kubernetes that I cannot live without:

```bash
# Install k9s
curl -sS https://webinstall.dev/k9s | bash

# Launch
k9s
```

It shows real-time pod status, lets you tail logs, exec into containers, and delete stuck pods — all from your terminal with Vim-like keybindings.

To see metrics (CPU/memory usage), install the metrics server:

```bash
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

# Now you can see resource usage
kubectl top pods
kubectl top nodes
```
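The metrics server is also what the HorizontalPodAutoscaler relies on. A sketch that scales the Step 5 nginx deployment on CPU — the replica bounds and the 50% target are arbitrary examples, and note that the target Deployment must set CPU `resources.requests` for utilization-based scaling to work:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  minReplicas: 2
  maxReplicas: 6
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
```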

Practical Tip: Scheduling Pods to Specific Nodes

In a mixed cluster where some nodes have GPUs or SSDs, you want to control where pods land:

```bash
# Label a node
kubectl label node worker2 disk=ssd
```

Then reference the label in your deployment YAML (note that a Deployment's `selector` and template labels are required fields):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      nodeSelector:
        disk: ssd
      containers:
      - name: postgres
        image: postgres:16-alpine
```
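nodeSelector attracts a pod to a node; the inverse — keeping general workloads off a special node — is done with taints. After `kubectl taint nodes worker2 dedicated=db:NoSchedule`, only pods carrying a matching toleration can schedule there. A pod-spec fragment; the key and value are illustrative:

```yaml
# Goes under spec.template.spec alongside containers
tolerations:
- key: dedicated
  operator: Equal
  value: db
  effect: NoSchedule
```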

Upgrading K3s

```bash
# Check current version
kubectl version

# Upgrade master
curl -sfL https://get.k3s.io | sh -

# Upgrade workers (one at a time)
kubectl drain worker1 --ignore-daemonsets
# On worker1:
curl -sfL https://get.k3s.io | K3S_URL=https://<MASTER_IP>:6443 K3S_TOKEN=<TOKEN> sh -
# Back on master:
kubectl uncordon worker1
```

K3s makes Kubernetes accessible. A basic cluster takes under 5 minutes to set up, and the resource requirements are low enough to run on cheap hardware. For production, move to a high-availability setup with embedded etcd, add persistent storage with Longhorn, and monitor with Prometheus + Grafana. But for getting started, the default K3s install is all you need.