With my Proxmox cluster running, I have decided to delve back into Kubernetes with a local cluster. My Proxmox hosts are not super powerful (2 cores each and a mixture of 4GB and 8GB of RAM), but that should be enough for a K3s cluster.
K3s
K3s is lightweight Kubernetes (K8s) built for unattended, remote locations or resource-constrained environments (like my Proxmox cluster).
My Setup
I am using 3 Virtual Machines (VMs) across my 2 Proxmox nodes:
- K3s Server
Acting as the server in my K3s environment. It has 4GB RAM and 2 processors.
- Node1
Acting as a node in my K3s environment. It has 1GB RAM and 1 processor.
- Node2
Acting as a node in my K3s environment. It has 1GB RAM and 1 processor.
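For completeness, this is roughly how a server and its nodes can be joined together using the official K3s install script. SERVERIP and mynodetoken are placeholders for my environment:

# On the server VM: install K3s in server mode
curl -sfL https://get.k3s.io | sh -

# Grab the join token from the server
sudo cat /var/lib/rancher/k3s/server/node-token

# On Node1 and Node2: install K3s in agent mode, pointing at the server
curl -sfL https://get.k3s.io | K3S_URL=https://SERVERIP:6443 K3S_TOKEN=mynodetoken sh -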
I’m also using an NFS share for when I want a shared volume, or a volume that exists outside of a container’s ephemeral storage.
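As a rough sketch, the export on the NFS server might look something like this in /etc/exports (the subnet is an assumption for my LAN):

# /etc/exports on the NFS server: share /var/nfs/nginx read-only to the LAN
/var/nfs/nginx 192.168.1.0/24(ro,sync,no_subtree_check)

Each K3s node also needs an NFS client installed (for example, the nfs-common package on Debian/Ubuntu) so that it can mount the share.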
Deploying An App
Note: I’m about to go a slightly long-winded way to deploy an app. In a production environment it may be easier to use Helm.
It has been some time since I last used Kubernetes, so I’m going to delve into launching a basic nginx example as my first application across my K3s cluster. I want multiple instances of nginx running, all serving the same static webpage from a shared source.
nginx Deployment
I have created a deployment file that I can apply with kubectl:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nginx
spec:
  replicas: 4
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
        name: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
              name: http-web-svc
          volumeMounts:
            - mountPath: /usr/share/nginx/html
              name: nginx-volume
      volumes:
        - name: nginx-volume
          nfs:
            server: SERVERIP
            path: /var/nfs/nginx
            readOnly: true
This deployment creates 4 instances of nginx containers, using the latest nginx container image. It also mounts a volume at /usr/share/nginx/html in each container. That volume is nginx-volume, the NFS share I am serving from my Raspberry Pi.
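Applying the file and checking that the pods come up can be done like this (nginx-deployment.yaml is just what I happened to call the file):

kubectl apply -f nginx-deployment.yaml
kubectl get pods -o wide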
nginx Service
Alongside the deployment, K3s will need a service to help traffic flow to the container ports (named http-web-svc in the deployment). I created a service file for that and again applied it with kubectl.
kind: Service
apiVersion: v1
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    name: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: http-web-svc
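Once applied, the NodePort that Kubernetes assigns (from the 30000-32767 range by default) can be seen with:

kubectl get service nginx-service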
nginx Ingress
With the deployment and service applied, I then created an ingress file so that K3s would be able to allow traffic into the cluster to view nginx.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-service
                port:
                  number: 80
The ingress references nginx-service, the service named and created in the service file.
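K3s ships with Traefik as its bundled ingress controller, so once the ingress is applied the page should be reachable on port 80 of any node. A quick check, with NODEIP as a placeholder for one of my node addresses:

kubectl get ingress
curl http://NODEIP/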
Why Not a Load Balancer Service?
Kubernetes offers a LoadBalancer service type that can obtain an external IP address and help with load balancing traffic to the services in the cluster. However, it seems to rely on Kubernetes running in a cloud, where the cloud provider can hand out external IP addresses. On my K3s cluster it just sits pending, waiting for an external IP that never arrives.
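For comparison, this is roughly what the same service would look like with the LoadBalancer type (nginx-loadbalancer is a hypothetical name). On a cloud provider the external IP would eventually be assigned; here it stays pending:

# Hypothetical LoadBalancer variant of nginx-service; stays pending without a cloud provider
kind: Service
apiVersion: v1
metadata:
  name: nginx-loadbalancer
spec:
  type: LoadBalancer
  selector:
    name: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: http-web-svc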
Kubectl Commands
There are a few kubectl commands that help with applying files and checking on statuses:
kubectl apply -f filename.yaml
kubectl get services
kubectl get pods
kubectl get nodes
kubectl get ingress
You can also append the -o wide option to the get commands to show more information.
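For example, this adds each node's internal IP, OS image, and container runtime to the output:

kubectl get nodes -o wide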