Week 12 - Kubernetes¶
Topic¶
Deploying containerized applications on a single-node K3s Kubernetes cluster: installing K3s, creating namespaces, writing Deployment, Service, and Ingress manifests, and migrating an existing Apache-hosted service to a Kubernetes-backed deployment.
Company Requests¶
Ticket #1201: Standardize Application Deployment
"We've had too many incidents caused by hand-configured services on bare VMs. The infrastructure team has decided to move workloads to Kubernetes for better lifecycle management and fault tolerance. Install K3s on your VM and get the cluster running."
Ticket #1202: Deploy Inventory API to Kubernetes
"The Inventory API needs high availability. Deploy it on Kubernetes with 2 replicas so it survives a single pod failure. It should be reachable at `inventory.<vm_name>.sysadm.ee` through the cluster ingress controller."
Ticket #1203: Migrate Company Website to Kubernetes
"Now that the cluster is running, move the company website there too. Deploy it with 10 replicas and update Apache to proxy requests to Kubernetes instead of serving the files directly. The site must remain reachable at `<vm_name>.sysadm.ee`."
Accessing services via Traefik
K3s ships with Traefik as its built-in ingress controller. In this lab, Traefik is configured to run on port 8080 (instead of the default 80) to avoid conflicting with Apache.
To test Ingress-based services from the command line, use --resolve to point the hostname at your VM's IP without relying on DNS:
```shell
curl --resolve inventory.<vm_name>.sysadm.ee:8080:<vm_ip> \
    http://inventory.<vm_name>.sysadm.ee:8080/api/v1/inventory
```
The scoring server uses the same technique — DNS is not required for any checks to pass.
Scoring Checks¶
- Check 12.1: K3s API server is reachable on port 6443.
    - Method: TCP connection to port 6443 from the scoring server.
    - Expected: Connection succeeds.
- Check 12.2: Namespace `lab12` exists in the cluster.
    - Method: The scoring server runs `kubectl get ns/lab12` on your VM via SSH.
    - Expected: Namespace found.
- Check 12.3: Deployment `inventory-api` has at least 2 ready replicas in namespace `lab12`.
    - Method: SSH to your VM, query `readyReplicas` from the deployment.
    - Expected: 2 or more replicas ready. WARNING if fewer than 2 are ready; CRITICAL if the deployment does not exist.
- Check 12.4: `inventory.<vm_name>.sysadm.ee` is served by a Kubernetes pod.
    - Method: The scoring server sends a request to `inventory.<vm_name>.sysadm.ee/api/v1/inventory` on port 80 with the bearer token and reads the `X-Served-By` response header, which the k8s image injects automatically.
    - Expected: Header present with a value matching `inventory-api-*` (the Kubernetes pod name). The old Docker-based container does not set this header.
- Check 12.5: `<vm_name>.sysadm.ee` is served via Kubernetes.
    - Method: HTTP GET to port 80; response body checked for the string `Kubernetes deployment on`.
    - Expected: String present, confirming Apache is proxying to the Kubernetes-hosted site rather than serving static files directly.
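You can reproduce Check 12.3 yourself before the scoring server runs. As a sketch (the scoring server's exact command may differ), a `jsonpath` query pulls the ready-replica count straight from the deployment status:

```shell
# Print how many replicas of inventory-api are ready; expect "2" once both pods are up
kubectl get deployment inventory-api -n lab12 \
    -o jsonpath='{.status.readyReplicas}'
```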
Tasks¶
Task 1: Install K3s¶
Two configuration files must exist before the installer runs; K3s reads them at startup and will not pick them up afterwards:

- `/var/lib/rancher/k3s/server/manifests/traefik-config.yaml` tells Traefik to listen on port 8080 instead of 80, so it does not conflict with Apache.
- `/etc/rancher/k3s/config.yaml` adds your VM's external IP as a TLS Subject Alternative Name so the API certificate is valid for remote connections, and sets the kubeconfig to be world-readable.
The file formats and settings are documented in the references.
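The SOP has the authoritative contents. As a rough sketch, the Traefik override is a `HelmChartConfig` manifest and the K3s file is plain config YAML; the specific values shown here (port `8080`, `<vm_ip>`, mode `0644`) are assumptions you should adapt:

```yaml
# /var/lib/rancher/k3s/server/manifests/traefik-config.yaml (sketch)
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  valuesContent: |-
    ports:
      web:
        exposedPort: 8080   # host port Traefik listens on instead of 80
```

```yaml
# /etc/rancher/k3s/config.yaml (sketch)
tls-san:
  - "<vm_ip>"                     # add the external IP to the API cert's SANs
write-kubeconfig-mode: "0644"     # make the kubeconfig world-readable
```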
Complete
- Create both pre-configuration files (contents in the SOP reference below), then install K3s:

    ```shell
    curl -sfL https://get.k3s.io | sh -
    ```

- Verify the `k3s` service is running and the node is ready:

    ```shell
    systemctl status k3s
    kubectl get nodes
    ```

- Open port `6443/tcp` in both `firewalld` and your cloud security group; this is the only new port that needs external access. All application traffic still flows through Apache on port 80.
- Add K3s's internal networks to the trusted firewall zone so Traefik can reach application pods:

    ```shell
    firewall-cmd --permanent --zone=trusted --add-source=10.42.0.0/16
    firewall-cmd --permanent --zone=trusted --add-source=10.43.0.0/16
    firewall-cmd --reload
    ```
Reference: SOP: Kubernetes Operations — Install K3s
Task 2: Deploy the Inventory API¶
The inventory API uses the NFS share from Lab 08 so both replicas share the same storage file. Bearer token authentication stays in Apache — the application pods themselves are unauthenticated.
Complete
- Create the storage directory on the NFS share:

    ```shell
    mkdir -p /data/nfs/inventory
    ```

- Save the following to `inventory-api.yaml` and fill in every `_____`:

    ```yaml
    ---
    apiVersion: v1
    kind: Namespace # (1)!
    metadata:
      name: _____
    ---
    apiVersion: v1
    kind: PersistentVolume # (2)!
    metadata:
      name: inventory-api-pv
    spec:
      capacity:
        storage: 100Mi
      accessModes:
        - ReadWriteMany # (3)!
      persistentVolumeReclaimPolicy: Retain
      storageClassName: "" # (4)!
      nfs:
        path: /data/nfs/inventory
        server: localhost # (5)!
        readOnly: false
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim # (6)!
    metadata:
      name: inventory-api-pvc
      namespace: _____
    spec:
      accessModes:
        - ReadWriteMany
      storageClassName: ""
      volumeName: inventory-api-pv
      resources:
        requests:
          storage: 100Mi
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: inventory-api
      namespace: _____
    spec:
      replicas: _____ # (7)!
      selector:
        matchLabels:
          app: inventory-api
      template:
        metadata:
          labels:
            app: inventory-api
        spec:
          containers:
            - name: inventory-api
              image: registry.hpc.ut.ee/public/sysadm-inventory-api:latest
              imagePullPolicy: Always
              command: ["python", "app.py", "--storage", "/data/inventory/storage.db"] # (8)!
              ports:
                - containerPort: 5000
              volumeMounts:
                - name: inventory-storage
                  mountPath: /data/inventory # (9)!
          volumes:
            - name: inventory-storage
              persistentVolumeClaim:
                claimName: inventory-api-pvc
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: inventory-api
      namespace: _____
    spec:
      selector:
        app: inventory-api # (10)!
      ports:
        - port: 5000
          targetPort: 5000
    ---
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: inventory-api
      namespace: _____
    spec:
      ingressClassName: traefik # (11)!
      rules:
        - host: "_____" # (12)!
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: inventory-api
                    port:
                      number: 5000
    ```

    1. Isolates all lab12 resources into their own virtual cluster. Resources in different namespaces cannot accidentally interfere with each other.
    2. Represents the actual storage: an NFS directory on this VM. PersistentVolumes are cluster-scoped (no namespace) and provisioned by an administrator.
    3. Both replicas need to mount this volume simultaneously. `ReadWriteMany` permits concurrent mounts from multiple pods; `ReadWriteOnce` would only allow one pod at a time.
    4. An empty string disables dynamic provisioning and tells Kubernetes to use the statically created PV above. Without this, Kubernetes would try to provision storage automatically.
    5. The NFS server is on this same VM. K3s uses the internal cluster network to reach it.
    6. A user's claim on storage. The PVC binds to the PV above and gives the Deployment a stable reference to the storage regardless of what's underneath.
    7. Fill in `2`. Kubernetes will always keep exactly this many pods running; if one crashes, a replacement starts automatically.
    8. Overrides the container image's default startup command to enable the persistent storage backend. Without this flag, inventory data would be in-memory and lost on restart.
    9. The path inside the container where the NFS volume appears. The application writes `storage.db` here.
    10. Kubernetes uses this label to find the pods this Service should route traffic to. It must match the `labels` in the Deployment template.
    11. Routes HTTP requests through K3s's built-in Traefik reverse proxy. The Traefik pod sees the request hostname and forwards it to this Service.
    12. Fill in `inventory.<vm_name>.sysadm.ee`. Traefik matches the incoming `Host` header against this value.
- Apply the manifest and wait for both pods to reach `Running`:

    ```shell
    kubectl apply -f inventory-api.yaml
    kubectl get pods -n lab12 -w
    ```

- Test at each level of the stack to confirm each layer works before the next:

    ```shell
    # Pod level: direct to a pod IP, no auth (bearer auth is Apache's job, not the pod's)
    kubectl get pods -n lab12 -l app=inventory-api -o wide
    curl http://<pod-ip>:5000/api/v1/inventory

    # Service level: via the ClusterIP
    kubectl get svc inventory-api -n lab12
    curl http://<cluster-ip>:5000/api/v1/inventory

    # Traefik Ingress level: via hostname routing on port 8080
    curl --resolve inventory.<vm_name>.sysadm.ee:8080:<vm_ip> \
        http://inventory.<vm_name>.sysadm.ee:8080/api/v1/inventory
    ```

- Update the Apache VirtualHost for `inventory.<vm_name>.sysadm.ee` to proxy to port 8080 instead of 5000. You have configured reverse proxies before; update the `ProxyPass` and `ProxyPassReverse` lines accordingly, then reload Apache.
- Verify the full chain. The response should include an `X-Served-By` header with the Kubernetes pod name:

    ```shell
    curl -sI \
        --resolve inventory.<vm_name>.sysadm.ee:80:<vm_ip> \
        -H "Authorization: Bearer 845e6732f32b81dd778972703474ccbb" \
        http://inventory.<vm_name>.sysadm.ee/api/v1/inventory \
        | grep -i x-served-by
    ```
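The VirtualHost change is small. As a sketch (the file name and surrounding directives depend on how you set up the vhost in earlier labs), the proxy stanza might look like this; note that depending on your setup you may also need `ProxyPreserveHost On` so Traefik sees the original `Host` header rather than `localhost:8080`:

```apache
<VirtualHost *:80>
    ServerName inventory.<vm_name>.sysadm.ee
    # ...existing auth and logging directives stay as they are...

    ProxyPreserveHost On              # keep the client's Host header so Traefik can route by it
    ProxyPass        / http://localhost:8080/
    ProxyPassReverse / http://localhost:8080/
</VirtualHost>
```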
Reference: SOP: Kubernetes Operations, Concepts: Container Orchestration
Task 3: Deploy the Company Website and Update Apache¶
The website image accepts a VM_NAME environment variable and injects it — along with the pod's own hostname — into every page response. Traefik will round-robin requests across all 10 replicas, which makes the pod name change on each request.
Unlike Task 2, there is no scaffolded manifest here. Use the same pattern you just applied: Deployment, Service, and Ingress in a single file.
Note
Make sure to think through which resources you need. For example, the site does not require persistent storage, so there's no need to include those resources.
To pass an environment variable into a container, add an `env` field under the container spec:

```yaml
containers:
  - name: website
    image: registry.hpc.ut.ee/public/sysadm-fizzops-site:latest
    env:
      - name: VM_NAME
        value: "your-short-vm-name"
    ports:
      - containerPort: 80
```
Complete
- Create `website.yaml` for the `lab12` namespace containing:
    - A Deployment named `website` with `replicas: 10`, the image and env var shown above.
    - A Service named `website` exposing container port 80.
    - An Ingress routing `<vm_name>.sysadm.ee` to the Service on port 80.
- Apply and verify all 10 pods reach `Running`:

    ```shell
    kubectl apply -f website.yaml
    kubectl get pods -n lab12 -w
    ```

- Test at each level before going through Apache:

    ```shell
    # Pod level
    kubectl get pods -n lab12 -l app=website -o wide
    curl http://<pod-ip>:80/ | grep -E "Kubernetes deployment|Served by pod"

    # Service level
    kubectl get svc website -n lab12
    curl http://<cluster-ip>:80/ | grep -E "Kubernetes deployment|Served by pod"

    # Traefik Ingress: run 10 times to see different pod names each time
    for i in $(seq 1 10); do
        curl -s --resolve <vm_name>.sysadm.ee:8080:<vm_ip> \
            http://<vm_name>.sysadm.ee:8080/ | grep "Served by pod"
    done
    ```

- Update the Apache VirtualHost for `<vm_name>.sysadm.ee` to proxy to Kubernetes instead of serving files directly. You have done this before; replace the `DocumentRoot` block with a reverse proxy to `localhost:8080`, then reload Apache.
- Confirm port 80 now serves the Kubernetes-backed site:

    ```shell
    curl --resolve <vm_name>.sysadm.ee:80:<vm_ip> \
        http://<vm_name>.sysadm.ee/ | grep "Kubernetes deployment on"
    ```
Reference: SOP: Kubernetes Operations, SOP: Web Server Management — Set Up a Reverse Proxy
Ansible tips¶
Ansible is not a good fit for managing Kubernetes resources. You can use it to prepare the host and install K3s itself, but leave managing Kubernetes manifests to `kubectl`.
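If you do automate the K3s installation, a minimal play might look like this sketch (the hosts group is an assumption; the `creates` guard keeps the shell task idempotent):

```yaml
# Playbook sketch: run the K3s convenience installer once per host
- name: Install K3s
  hosts: all
  become: true
  tasks:
    - name: Download and run the K3s installer
      ansible.builtin.shell: curl -sfL https://get.k3s.io | sh -
      args:
        creates: /usr/local/bin/k3s   # skip if K3s is already installed
```

You could extend this to template the two pre-configuration files before the installer task, since K3s only reads them at startup.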
Course feedback¶
Once you're done with the labs, the teaching staff would be glad if you gave your honest (!) feedback on the course, so we can improve it in the future.
Link is as follows: https://docs.google.com/forms/d/e/1FAIpQLScKUeJ6723jMJXEz9SYc0OPR6gvwqp3rqYJfvO-8s-nGGXpWg/viewform?usp=header
You can be absolutely honest here. If you're worried about it coming back to bite you, use a random matrix number.