Containerization is the modern standard for deploying and managing applications, and Kubernetes (K8s) is the leading platform for automating the deployment, scaling, and management of containerized workloads. If you’re already using VPS infrastructure or planning to rent a server for your project, deploying a Kubernetes cluster on your own VPS is a flexible and efficient solution.
In this article, we’ll guide you through the process of deploying Kubernetes on VPS step by step: what resources are needed, how to prepare the servers, and how to properly set up the cluster.
What Is Kubernetes and Why Is It Useful?
Kubernetes is an open-source platform that allows you to orchestrate containers, automating tasks like resource allocation, application scaling, rolling updates, and self-healing.
Key Kubernetes features:
- Automatic scaling of applications.
- Zero-downtime deployments.
- CPU and RAM resource management per pod.
- Self-recovery in case of failures.
- Seamless rolling updates.
Kubernetes is ideal for SaaS platforms, startup infrastructure, microservices, CI/CD automation, and testing environments. And it doesn’t have to run in the cloud — you can launch your own Kubernetes cluster on VPS servers.
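To make these features concrete, here is a minimal sketch of a Deployment manifest you could apply once the cluster from this guide is running. The name, image, and resource figures are only examples: replicas and the rolling-update strategy cover scaling and zero-downtime updates, while requests and limits cover per-pod CPU and RAM management.
bash
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app            # example name
spec:
  replicas: 3               # Kubernetes keeps three pods running and replaces failed ones
  strategy:
    type: RollingUpdate     # pods are swapped out gradually, so updates cause no downtime
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: web
        image: nginx        # example image
        resources:
          requests:
            cpu: 100m       # guaranteed CPU share for the pod
            memory: 128Mi
          limits:
            cpu: 250m       # hard CPU/RAM ceiling for the pod
            memory: 256Mi
EOF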
What Resources Are Needed for Kubernetes on VPS?
Before installing, assess how many nodes you need and what roles they’ll play:
- 1 node (single VPS) — good for testing and development.
- 3+ VPS — minimum setup for production: 1 master + 2 worker nodes.
Minimum requirements per node:
- CPU: 2 cores
- RAM: at least 2 GB (4 GB recommended)
- Disk: SSD, 20 GB or more
- OS: Ubuntu 20.04 or higher (Debian, CentOS also supported)
We recommend using a VPS with flexible scaling options, so you can easily add more nodes later.
Preparing Your VPS for Kubernetes Installation
Each VPS node needs initial setup:
Update the system:
bash
sudo apt update && sudo apt upgrade -y
Set hostname and edit hosts file:
bash
sudo hostnamectl set-hostname master-node
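Give each node a unique hostname (for example worker-1 and worker-2 on the workers) and, if your nodes cannot resolve each other by DNS, map the names to IPs in /etc/hosts. The addresses below are placeholders; substitute the real IPs of your VPS instances:
bash
cat <<EOF | sudo tee -a /etc/hosts
192.168.1.10 master-node
192.168.1.11 worker-1
192.168.1.12 worker-2
EOF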
Install a container runtime:
Kubernetes uses a container engine to run pods. Since Kubernetes 1.24 removed dockershim, Docker Engine needs an extra shim (cri-dockerd) to work with kubeadm, so the simplest option today is containerd:
bash
sudo apt install -y containerd
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd
sudo systemctl enable containerd
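kubeadm’s preflight checks also expect IP forwarding to be enabled, and most CNI plugins (including Flannel, used later in this guide) rely on the br_netfilter module. A typical way to configure this persistently on each node:
bash
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sudo sysctl --system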
Install kubeadm, kubelet, and kubectl:
The old apt.kubernetes.io repository has been shut down; packages are now served from pkgs.k8s.io. The example below pins the v1.30 minor release; adjust the version in both URLs if you need a different one.
bash
sudo apt install -y apt-transport-https ca-certificates curl gpg
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt update
sudo apt install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
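You can confirm the tools are installed and share the same minor version:
bash
kubeadm version
kubectl version --client
kubelet --version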
Disable swap:
The kubelet requires swap to be off. The second command comments out any swap entries in /etc/fstab so the setting survives a reboot.
bash
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab
Initializing the Master Node
On the master node, initialize the cluster:
bash
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
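If your VPS has more than one network interface, you may also want to tell kubeadm which address the API server should advertise; the IP below is a placeholder for your node’s address:
bash
sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.1.10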
At the end, you’ll see a kubeadm join command that you’ll use to connect worker nodes:
bash
kubeadm join 192.168.1.10:6443 --token abcdef.1234567890abcdef --discovery-token-ca-cert-hash sha256:...
Save this command for later use.
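The token in that command expires after 24 hours by default. If you add a worker later, generate a fresh join command on the master:
bash
sudo kubeadm token create --print-join-command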
Configure kubectl access:
bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
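At this point kubectl should be able to talk to the cluster. The master will report NotReady until a pod network plugin is installed (covered below):
bash
kubectl get nodes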
Adding Worker Nodes
On each additional VPS:
- Install the container runtime, kubeadm, kubelet, and kubectl (same steps as on the master).
- Disable swap.
- Run the kubeadm join command obtained earlier (with sudo) to connect the node to the cluster; you can then verify it from the master as shown below.
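Back on the master, the new node should appear within a minute or so. Worker nodes carry no role label by default; assigning one is optional and purely cosmetic (the node name worker-1 here is just an example):
bash
kubectl get nodes
kubectl label node worker-1 node-role.kubernetes.io/worker=worker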
Installing Pod Network Between Nodes
Once the master node is initialized, install a CNI plugin to enable network communication between pods. For example, Flannel, whose default network matches the 10.244.0.0/16 pod CIDR passed to kubeadm init (the manifest now lives in the flannel-io repository):
bash
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
Without a network plugin, pods cannot communicate — Kubernetes requires this for proper operation.
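Within a minute or two the Flannel pods should be Running and the nodes should switch to Ready (the current manifest deploys them into the kube-flannel namespace):
bash
kubectl get pods -n kube-flannel
kubectl get nodes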
Checking Cluster Status
Use these commands to monitor your cluster.
View node list:
bash
kubectl get nodes
View pods:
bash
kubectl get pods --all-namespaces
Check the control plane components (kubectl get componentstatuses still works but has been deprecated since v1.19; inspecting the kube-system pods gives the same picture):
bash
kubectl get pods -n kube-system
kubectl cluster-info
Deploying a Test Service
Let’s deploy a simple nginx service to verify everything works:
bash
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get svc
Now nginx should be reachable on any node’s IP address at the NodePort shown in the service output.
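To see the orchestration features from the start of this article in action, you can scale the deployment and fetch the page through the NodePort. The IP 192.168.1.10 and port 30080 below are placeholders; use your node’s address and the port reported by kubectl get svc:
bash
kubectl scale deployment nginx --replicas=3
kubectl get pods -l app=nginx
curl http://192.168.1.10:30080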
Conclusion
Deploying Kubernetes on a VPS isn’t as complicated as it may seem. It gives you full control over your infrastructure, reduces dependency on third-party cloud providers, and allows you to optimize your operating costs.
With virtual servers from RX‑NAME, you can build your own container orchestration platform, manage apps at scale, and support full CI/CD automation on your own terms.
Start your own Kubernetes cluster today — and take your infrastructure to a new level of flexibility and performance.