Edge computing brings computation closer to data sources—IoT devices, retail locations, manufacturing floors. K3s, a lightweight Kubernetes distribution, makes it possible to run production workloads on resource-constrained edge devices.

Why K3s for Edge?

Standard Kubernetes requires significant resources. K3s addresses this:

Feature                   Standard K8s                K3s
Binary size               ~1GB                        ~60MB
Memory (control plane)    2GB+                        512MB
Dependencies              etcd, container runtime     SQLite, containerd built-in
ARM support               Limited                     First-class

Installing K3s

On a Raspberry Pi or edge device:

# Install K3s server (control plane + worker)
curl -sfL https://get.k3s.io | sh -

# Check installation
sudo k3s kubectl get nodes
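
K3s writes its kubeconfig to /etc/rancher/k3s/k3s.yaml. To use a plain kubectl (or Helm) on the same host without sudo, copy it out; a quick sketch using the default paths:

# Copy the generated kubeconfig so plain kubectl works without sudo
mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown $(id -u):$(id -g) ~/.kube/config
export KUBECONFIG=~/.kube/config
kubectl get nodes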

For a multi-node cluster:

# On the server node, get the token
sudo cat /var/lib/rancher/k3s/server/node-token

# On worker nodes
curl -sfL https://get.k3s.io | K3S_URL=https://server-ip:6443 \
  K3S_TOKEN=<token> sh -
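
Once agents have joined, it helps to label them so workloads can be pinned to specific hardware or sites; the node name and label keys below are illustrative:

# Run on the server node after the agents register
sudo k3s kubectl get nodes -o wide
sudo k3s kubectl label node edge-worker-1 hardware=rpi4 location=warehouse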

Edge Architecture Patterns

Pattern 1: Hub and Spoke

A central management cluster in the cloud pushes workloads down to the edge clusters, here using Rancher Fleet:

# GitRepo applied to the Fleet management cluster in the cloud
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: edge-apps
  namespace: fleet-default
spec:
  repo: https://github.com/myorg/edge-manifests
  branch: main
  paths:
  - production
  targets:
  - clusterSelector:
      matchLabels:
        location: warehouse
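
The clusterSelector above matches labels on Fleet's Cluster objects in the management cluster, so each registered edge cluster needs the corresponding label; a sketch with an illustrative cluster name:

# Label a registered edge cluster so the GitRepo above targets it
kubectl -n fleet-default label clusters.fleet.cattle.io warehouse-01 location=warehouse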

Pattern 2: Standalone Edge

A self-contained edge cluster keeps state on local disks. K3s bundles the Rancher local-path provisioner, whose StorageClass looks like this:

# Local path provisioner for persistent storage
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
provisioner: rancher.io/local-path
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer

Deploying to Edge Devices

Lightweight Container Images

Use minimal base images:

# Multi-stage build for small images
FROM golang:1.21-alpine AS builder
WORKDIR /app
COPY . .
RUN CGO_ENABLED=0 go build -ldflags="-s -w" -o sensor-agent

FROM scratch
COPY --from=builder /app/sensor-agent /sensor-agent
ENTRYPOINT ["/sensor-agent"]

Result: 5-10MB images instead of 100MB+.
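
One caveat with scratch: the image contains no CA certificates, so outbound HTTPS calls from the agent will fail TLS verification. A common fix, sketched here, is to copy the bundle in from the build stage:

# Variant of the build above that also ships CA certificates
FROM golang:1.21-alpine AS builder
RUN apk add --no-cache ca-certificates
WORKDIR /app
COPY . .
RUN CGO_ENABLED=0 go build -ldflags="-s -w" -o sensor-agent

FROM scratch
# The CA bundle lets the static binary verify HTTPS endpoints
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY --from=builder /app/sensor-agent /sensor-agent
ENTRYPOINT ["/sensor-agent"]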

ARM Compatibility

Build multi-architecture images:

docker buildx create --name multiarch --use
docker buildx build --platform linux/amd64,linux/arm64,linux/arm/v7 \
  -t myorg/sensor-agent:v1 --push .
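
To confirm all three platforms actually made it into the pushed manifest list:

# Inspect the multi-arch manifest for amd64, arm64 and arm/v7 entries
docker buildx imagetools inspect myorg/sensor-agent:v1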

Resource Constraints

Set explicit requests and limits so a single workload cannot starve the device:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sensor-agent
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sensor-agent
  template:
    metadata:
      labels:
        app: sensor-agent
    spec:
      containers:
      - name: agent
        image: myorg/sensor-agent:v1
        resources:
          requests:
            memory: "32Mi"
            cpu: "50m"
          limits:
            memory: "64Mi"
            cpu: "100m"

Offline Operation

Edge devices often have intermittent connectivity.

Pre-pulling Images

Warm the image cache ahead of a rollout so new versions can start even while the uplink is down:

apiVersion: batch/v1
kind: Job
metadata:
  name: prepull-images
spec:
  template:
    spec:
      initContainers:
      - name: prepull
        image: myorg/sensor-agent:v2
        command: ["echo", "Image pulled"]
      containers:
      - name: done
        image: busybox
        command: ["echo", "Prepull complete"]
      restartPolicy: Never
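
For fully air-gapped sites, images can also be loaded straight into K3s's embedded containerd from a tarball:

# On a connected machine, export the image
docker save myorg/sensor-agent:v2 -o sensor-agent-v2.tar

# On the edge device, import it into K3s's containerd
sudo k3s ctr images import sensor-agent-v2.tar

# Or drop tarballs here and K3s imports them at startup
sudo mkdir -p /var/lib/rancher/k3s/agent/images/
sudo cp sensor-agent-v2.tar /var/lib/rancher/k3s/agent/images/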

Local Registry Mirror

# /etc/rancher/k3s/registries.yaml
mirrors:
  docker.io:
    endpoint:
      - "http://local-registry:5000"
  gcr.io:
    endpoint:
      - "http://local-registry:5000"

Data Buffering

Buffer readings locally while the uplink is down and flush them once connectivity returns:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sensor-data-buffer
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: data-collector
spec:
  selector:
    matchLabels:
      app: data-collector
  template:
    metadata:
      labels:
        app: data-collector
    spec:
      containers:
      - name: collector
        image: myorg/data-collector:v1   # illustrative image name
        volumeMounts:
        - name: buffer
          mountPath: /data
        env:
        - name: BUFFER_PATH
          value: /data/buffer.db
        - name: CLOUD_ENDPOINT
          value: https://api.cloud.example.com
      volumes:
      - name: buffer
        persistentVolumeClaim:
          claimName: sensor-data-buffer
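
The collector itself is just store-and-forward logic. A minimal Go sketch (not the real agent; it assumes newline-delimited JSON readings and uses a flat file in place of the SQLite buffer implied by buffer.db):

// Store-and-forward sketch matching the BUFFER_PATH / CLOUD_ENDPOINT env vars above
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"os"
	"time"
)

func send(endpoint string, payload []byte) error {
	resp, err := http.Post(endpoint, "application/json", bytes.NewReader(payload))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 300 {
		return fmt.Errorf("upload rejected: %s", resp.Status)
	}
	return nil
}

func main() {
	bufferPath := os.Getenv("BUFFER_PATH")  // e.g. /data/buffer.db
	endpoint := os.Getenv("CLOUD_ENDPOINT") // e.g. https://api.cloud.example.com

	for range time.Tick(30 * time.Second) {
		reading := []byte(`{"sensor":"temp","value":21.5}` + "\n")

		// Offline: append the reading to the local buffer and retry later
		if err := send(endpoint, reading); err != nil {
			if f, err := os.OpenFile(bufferPath, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0o644); err == nil {
				f.Write(reading)
				f.Close()
			}
			continue
		}

		// Online again: drain anything buffered while the uplink was down
		if pending, err := os.ReadFile(bufferPath); err == nil && len(pending) > 0 {
			if send(endpoint, pending) == nil {
				os.Remove(bufferPath)
			}
		}
	}
}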

Monitoring Edge Clusters

Lightweight Metrics

Use VictoriaMetrics, a lighter-weight alternative to a full Prometheus server:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: victoria-metrics
spec:
  selector:
    matchLabels:
      app: victoria-metrics
  template:
    metadata:
      labels:
        app: victoria-metrics
    spec:
      containers:
      - name: victoria-metrics
        image: victoriametrics/victoria-metrics:v1.93.0
        args:
        - -retentionPeriod=7d
        - -memory.allowedPercent=50
        resources:
          limits:
            memory: 256Mi
            cpu: 200m
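
Without a volume, the metrics live in the pod filesystem and disappear on restart. Mounting a local-path PVC at the directory passed via -storageDataPath preserves the 7-day window; the claim might look like this (name and size are illustrative):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vm-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 5Gi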

Remote Write to Cloud

Forward metrics to a central endpoint whenever connectivity is available:

# Scrape and remote_write configuration (Prometheus format); vmagent loads the
# scrape section via -promscrape.config and takes its remote write target as
# the -remoteWrite.url flag
global:
  scrape_interval: 30s

scrape_configs:
  - job_name: 'kubernetes-pods'
    kubernetes_sd_configs:
      - role: pod

remote_write:
  - url: https://metrics.cloud.example.com/api/v1/write
    queue_config:
      max_samples_per_send: 1000
      batch_send_deadline: 5s
      capacity: 10000
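
On the cluster this usually runs as vmagent, which loads the scrape section above and spools samples to local disk while the remote endpoint is unreachable. A sketch (image tag, ConfigMap name, and paths are assumptions; RBAC for pod discovery omitted):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: vmagent
spec:
  selector:
    matchLabels:
      app: vmagent
  template:
    metadata:
      labels:
        app: vmagent
    spec:
      containers:
      - name: vmagent
        image: victoriametrics/vmagent:v1.93.0
        args:
        - -promscrape.config=/etc/vmagent/scrape.yml     # scrape_configs from above
        - -remoteWrite.url=https://metrics.cloud.example.com/api/v1/write
        - -remoteWrite.tmpDataPath=/vmagent-buffer       # spools here while offline
        volumeMounts:
        - name: config
          mountPath: /etc/vmagent
        - name: buffer
          mountPath: /vmagent-buffer
      volumes:
      - name: config
        configMap:
          name: vmagent-config
      - name: buffer
        emptyDir: {}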

Security at the Edge

Network Isolation

K3s ships an embedded network policy controller, so NetworkPolicies are enforced even with the default flannel backend. Lock pod traffic down to the local network and the approved cloud endpoints:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-external
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/8  # Internal only
  - to:  # Allow cloud API
    - ipBlock:
        cidr: 203.0.113.0/24
    ports:
    - protocol: TCP
      port: 443

Automatic Updates

Use Rancher's system-upgrade-controller to roll K3s upgrades out node by node:

apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: k3s-upgrade
  namespace: system-upgrade
spec:
  concurrency: 1
  version: v1.28.4+k3s1
  nodeSelector:
    matchExpressions:
    - key: node-role.kubernetes.io/master
      operator: Exists
  serviceAccountName: system-upgrade
  upgrade:
    image: rancher/k3s-upgrade
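
Agent nodes typically get a companion Plan that waits for the server plan via a prepare step and drains each node before upgrading; a sketch modeled on the upstream example:

apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: k3s-agent-upgrade
  namespace: system-upgrade
spec:
  concurrency: 1
  version: v1.28.4+k3s1
  nodeSelector:
    matchExpressions:
    - key: node-role.kubernetes.io/master
      operator: DoesNotExist
  serviceAccountName: system-upgrade
  prepare:
    image: rancher/k3s-upgrade
    args:
    - prepare
    - k3s-upgrade        # wait for the server plan above to finish
  drain:
    force: true
  upgrade:
    image: rancher/k3s-upgrade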

Real-World Example: Retail Store

A two-replica POS gateway spread across the store's nodes, able to keep serving terminals if one device fails or the WAN link drops:

# Store edge deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pos-gateway
  labels:
    app: pos-gateway
    store-id: "12345"
spec:
  replicas: 2
  selector:
    matchLabels:
      app: pos-gateway
  template:
    metadata:
      labels:
        app: pos-gateway
        store-id: "12345"   # copied onto pods so the downward API lookup below works
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: pos-gateway
            topologyKey: kubernetes.io/hostname
      containers:
      - name: gateway
        image: retail/pos-gateway:v2.1
        ports:
        - containerPort: 8443
        env:
        - name: STORE_ID
          valueFrom:
            fieldRef:
              fieldPath: metadata.labels['store-id']
        - name: OFFLINE_MODE_ENABLED
          value: "true"

Conclusion

K3s enables Kubernetes at the edge:

  • Lightweight footprint for constrained devices
  • ARM and x86 support
  • Offline-capable with local storage
  • Easy fleet management with Rancher

At Sajima Solutions, we design edge computing solutions for retail, manufacturing, and IoT across the Gulf region. Contact us to discuss your edge deployment strategy.