Github Code: https://github.com/JustinGuese/dataprofitexpert-tutorials
Welcome back to our tutorial series about K3s and self-hosted Kubernetes! If you’re someone who wants to start hosting applications without breaking the bank, this guide is perfect for you.
Many developers and startups begin their journey with platforms like Heroku or managed Kubernetes services such as AWS EKS. While these services are great, the costs can quickly become overwhelming—especially for small teams, startups, or individuals just experimenting with Kubernetes.
The good news? Self-hosting Kubernetes isn’t as difficult as it might seem. With K3s, a lightweight Kubernetes distribution, you can set up a small server cluster that’s both affordable and capable of hosting your applications.
Why Self-Host Kubernetes?
For large organizations, managed services like AWS EKS make sense. They eliminate the need for a dedicated team to maintain the Kubernetes cluster, offering convenience at a premium cost. However, for startups, individual developers, or anyone exploring Kubernetes, the price tag often outweighs the benefits.
Take, for instance, the AWS T4G Medium instance. With 2 vCPUs and 4 GB RAM, it might seem like an affordable choice for hosting your apps. But when you factor in the additional $73/month management fee for EKS, your total costs can exceed $100/month. Over a year, that’s more than $1,000!
K3s: The Cost-Effective Alternative
With K3s, you can reduce your hosting costs to as little as €3/month without compromising on quality. K3s is specifically designed for small-scale setups, making it an excellent choice for:
- Startups exploring Kubernetes.
- Developers transitioning from Heroku or managed platforms.
- Side projects or SaaS products running on minimal infrastructure.
In this tutorial, we’ll show you how to:
1. Set up a VPS with 2 vCPUs and 4 GB of RAM from an affordable provider.
2. Install K3s to create a single-node Kubernetes cluster.
3. Prepare your cluster for hosting multiple apps securely.
This guide focuses on getting you up and running in just 5–10 minutes with a single-node setup. While this configuration is ideal for testing and smaller projects, it’s easy to scale by adding more nodes, which we’ll cover in a future tutorial.
Ready to save money and take control of your hosting? Let’s dive in and build your own Kubernetes cluster with K3s!
What We’ll Cover in This Setup
First, we’ll install the K3s distribution of Kubernetes on your server. This lightweight version of Kubernetes is perfect for small setups. We’ll also install:
1. NGINX Ingress Controller – This acts like AWS’s load balancer, managing incoming traffic to your apps.
2. Cert-Manager – This tool will automatically provision free SSL certificates from Let’s Encrypt, ensuring secure connections between users and your applications.
Comparing Heroku to Self-Hosting
Heroku is a fantastic platform for getting started, but it has its limitations, especially in terms of cost and scalability. For example:
- Heroku charges $25/month for their smallest instance, which doesn’t include databases, SSL, or storage volumes.
- Adding a PostgreSQL database can easily increase costs by another $25–$50/month, even for basic plans.
In contrast, with our K3s setup, all your resources are hosted on a single server. You won’t need to spin up a separate instance for every app. Instead, everything—from apps to databases—runs efficiently on shared server resources.
Why This Setup is Better
1. Cost Efficiency: Instead of paying for multiple instances, all your containers share the resources of one server. For example, you can divide apps into namespaces like app-a and app-b (Kubernetes namespace names must be lowercase) to organize them while keeping costs low.
2. Secure by Default: With this setup, connections between your app and services like PostgreSQL remain internal. Unlike other hosted solutions, traffic doesn’t go over the internet, improving security.
3. Scalability: You can easily extend this setup. For instance, adding more nodes to your cluster is straightforward if your app needs more resources later.
4. Ease of Use: Tools like k9s provide a simple, user-friendly interface to manage your Kubernetes cluster. I’ll show you how to use it later.
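As a quick illustration of the namespace idea above (the names app-a and app-b are placeholders, not part of the setup):

```shell
# Create one namespace per app so workloads share the server but stay organized
kubectl create namespace app-a
kubectl create namespace app-b
# Target a specific app's namespace with -n
kubectl -n app-a get pods
```

Namespaces cost nothing extra; they are purely an organizational boundary on the same server.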
Hosting Options: AWS and Hetzner
While you can use AWS for this setup, the key difference is that we’ll eliminate the AWS EKS component. Instead, we’ll use K3s to manage the instance directly.
If you stick with AWS, consider cost-saving options like a three-year, all-upfront EC2 Instance Savings Plan, which can bring the cost down to around €9/month per instance.
However, I personally recommend Hetzner. It’s a reliable hosting provider offering:
- Affordable cloud hosting packages.
- Built-in firewalls, load balancers, and other tools.
- A robust and secure environment for your Kubernetes setup.
Hetzner referral code (20€ bonus for you, 10€ for me): https://hetzner.cloud/?ref=Oq4OYhgNBCWp
In the next part of this tutorial, we’ll dive into the specifics: setting up an Ubuntu server and installing K3s. Stay tuned—it’s simpler than you think!
Affordable Servers for Kubernetes Hosting
When it comes to hosting a self-managed Kubernetes cluster, the cost of servers is surprisingly low compared to managed services. For example, a server with 2 vCPUs and 4 GB RAM can cost as little as €3.29/month. If you’re looking to save even more, you can use platforms like Server Hunter to filter affordable servers worldwide. However, be cautious when selecting hosting locations; for instance, hosting in regions like Russia might not be ideal right now.
Why Hetzner is a Great Choice
Among various options, Hetzner stands out as a reliable and affordable provider:
- Servers are inexpensive and designed for cloud hosting.
- Free features like firewalls and low-cost volumes make it ideal for Kubernetes setups.
- Unlike AWS, you don’t need to pay for a managed load balancer (€5.39/month or more), as K3s ships with its own lightweight service load balancer.
Key Considerations for a Single-Node Setup
While this tutorial focuses on a single-node Kubernetes cluster, it’s important to note the limitations:
1. No Disaster Recovery: If the server goes down, the entire cluster becomes unavailable.
2. Local Storage Only: K3s uses the default Rancher Local Path storage driver, meaning volumes are stored locally on the node. Adding another node to the cluster won’t replicate these volumes. In a future tutorial, I’ll cover implementing a volume manager to share storage between nodes.
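For reference, this is roughly what a claim against the default local-path storage class looks like (the claim name and size are illustrative); keep in mind the resulting volume lives on a single node’s disk:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 1Gi
```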
Choosing Your VPS and Architecture
You can use any VPS provider, but I recommend Hetzner for simplicity and cost-effectiveness. Here’s a quick rundown of the setup:
- Select Ubuntu as the operating system.
- Compare ARM64 and x86 architectures. ARM64 can sometimes be cheaper, but not all software supports it.
- Include an IPv4 address (€0.50/month) for better networking compatibility, even though you could skip it if you’re extremely cost-conscious.
Setting Up SSH Keys for Secure Access
To securely access your server, you need an SSH key pair:
1. Run the following command to generate your SSH keys:

```shell
ssh-keygen -f ~/.ssh/df_tutorial
```

The private key remains on your machine; the public key (ending with .pub) is shared with the hosting provider.
2. Copy the public key. Display it with cat:

```shell
cat ~/.ssh/df_tutorial.pub
```

Copy it using pbcopy on a Mac (cat ~/.ssh/df_tutorial.pub | pbcopy) or from a text editor like VS Code.
3. Add the public key to your hosting provider (e.g., Hetzner).
Connecting to Your Server
Once your server is ready, connect via SSH:
1. Use the private key to log in:

```shell
ssh -i ~/.ssh/df_tutorial root@<server-ip>
```
2. Accept the host fingerprint by typing yes.
Optional: Server Hardening
If you plan to use the server long-term, consider hardening it for security:
- Enable an internal firewall.
- Set up automatic system updates for security patches.
- Disable unnecessary services to reduce the attack surface.
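A minimal hardening sketch for Ubuntu, assuming the ufw and unattended-upgrades packages (the ports are examples; open only what your setup needs):

```shell
# Allow SSH, HTTP/HTTPS for the ingress, and the Kubernetes API for remote kubectl
ufw allow 22/tcp
ufw allow 80/tcp
ufw allow 443/tcp
ufw allow 6443/tcp
ufw enable
# Install and enable automatic security updates
apt-get update && apt-get install -y unattended-upgrades
dpkg-reconfigure -plow unattended-upgrades
```

Make sure port 22 is allowed before enabling the firewall, or you can lock yourself out of the server.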
In the next steps, we’ll dive into installing K3s and setting up your lightweight Kubernetes cluster. Let’s get started!
Steps for Setting Up the K3s Cluster
1. Installing K3s
- Use the quick-start guide for K3s installation.
- Optionally pass a token (--token) so additional nodes can join the cluster later.
- Disable Traefik as the ingress, since NGINX ingress will be used instead.
Command:

```shell
curl -sfL https://get.k3s.io | sh -s - server --disable=traefik
```
2. Install kubectl
- Ensure kubectl is installed on your local machine to manage the cluster.
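To run kubectl from your laptop rather than on the server, copy the kubeconfig that K3s generates at /etc/rancher/k3s/k3s.yaml and point it at the server’s address (<server-ip> is a placeholder):

```shell
# Fetch the kubeconfig from the server
scp root@<server-ip>:/etc/rancher/k3s/k3s.yaml ~/.kube/config
# The file references 127.0.0.1; replace it with the server's public IP
sed -i 's/127.0.0.1/<server-ip>/' ~/.kube/config
# The node should report Ready after a minute or so
kubectl get nodes
```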
3. Install Helm
- Install Helm for managing Kubernetes packages.
- Add the repositories for NGINX ingress and cert-manager:

```shell
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo add jetstack https://charts.jetstack.io
helm repo update
```
4. Install the NGINX ingress controller and cert-manager using Helm:

```shell
helm install ingress-nginx ingress-nginx/ingress-nginx -n kube-system
helm install cert-manager jetstack/cert-manager --namespace kube-system --set crds.enabled=true
```
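Both charts land in kube-system with this setup; a quick way to confirm everything came up (the controller service name assumes the chart’s default naming):

```shell
# All ingress-nginx and cert-manager pods should reach Running
kubectl get pods -n kube-system
# The ingress controller's service exposes ports 80 and 443 on the node
kubectl get svc -n kube-system ingress-nginx-controller
```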
5. Configure Cert-Manager
- Create a ClusterIssuer for Let’s Encrypt:

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: your-email@example.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - http01:
          ingress:
            class: nginx
```

- Apply the configuration:

```shell
kubectl apply -f cluster-issuer.yaml
```
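You can check that the issuer registered successfully with Let’s Encrypt:

```shell
# READY should turn True once the ACME account is registered
kubectl get clusterissuer letsencrypt-prod
```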
6. Deploy and Test NGINX
- Create a Kubernetes deployment, service, and ingress for NGINX.
- Use a wildcard domain pointing to the server’s IP (e.g., *.k8s.yourdomain.com).
- Configure DNS records at your registrar to point to the server’s IPv4 and IPv6 addresses.
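Before deploying, you can verify that the DNS records have propagated (the domain is a placeholder):

```shell
dig +short test.k8s.yourdomain.com        # should print the server's IPv4
dig +short AAAA test.k8s.yourdomain.com   # and its IPv6, if you configured one
```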
Example Deployment YAML:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:alpine
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: ClusterIP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-app-ingress
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  ingressClassName: nginx
  rules:
    - host: test.k8s.doku-chat.de
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-service
                port:
                  number: 80
  tls:
    - hosts:
        - test.k8s.doku-chat.de
      secretName: nginx-tls
```
Apply the configuration:

```shell
kubectl apply -f nginx-deployment.yaml
```
7. Verify Setup
- Check the deployment using kubectl get pods, kubectl get svc, and kubectl get ingress.
- Use a browser to access the domain (e.g., https://test.k8s.yourdomain.com).
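A quick command-line check once DNS and the certificate are in place (the domain is a placeholder):

```shell
# -I fetches only the response headers; expect a 200 status over HTTPS
curl -I https://test.k8s.yourdomain.com
```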
Key Advantages
- Cost-efficient hosting compared to AWS/Heroku (~€4/month).
- Flexibility with wildcard domains for dynamic subdomain-based routing.
- Full control over cluster management.
- Scalable for multiple apps with minimal effort.