How to Host a Kubernetes Cluster on Your Dedicated Server

Step 1: Prepare Your Dedicated Server

  • Update the system:
    • Before starting, ensure that your server is up to date. Run the following commands:
      sudo apt-get update && sudo apt-get upgrade -y
      
    • Reboot if necessary:
      sudo reboot
      
  • Install essential dependencies:
    • Ensure curl, apt-transport-https, and ca-certificates are installed:
      sudo apt-get install -y curl apt-transport-https ca-certificates
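
  • Configure kernel settings for Kubernetes networking:
    • kubeadm's preflight checks generally expect the br_netfilter module to be loaded and bridged traffic to be visible to iptables. A minimal sketch, following the settings from the official Kubernetes setup documentation:
      printf "overlay\nbr_netfilter\n" | sudo tee /etc/modules-load.d/k8s.conf
      sudo modprobe overlay
      sudo modprobe br_netfilter
      printf "net.bridge.bridge-nf-call-iptables = 1\nnet.bridge.bridge-nf-call-ip6tables = 1\nnet.ipv4.ip_forward = 1\n" | sudo tee /etc/sysctl.d/k8s.conf
      sudo sysctl --system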
      

Step 2: Install Docker

Kubernetes relies on a container runtime to manage containers; this guide uses Docker, so you'll need to install it on each node in your Kubernetes cluster.

  • Install Docker:
    sudo apt-get install -y docker.io
    
  • Enable and start Docker:
    sudo systemctl enable docker
    sudo systemctl start docker
    
  • Verify the Docker installation:
    docker --version
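
  • Match the cgroup driver (recommended): when Docker is the container runtime, the Kubernetes documentation has recommended pointing it at the systemd cgroup driver so that it matches the kubelet. A minimal sketch, assuming there is no existing /etc/docker/daemon.json (this command overwrites the file):
    echo '{ "exec-opts": ["native.cgroupdriver=systemd"] }' | sudo tee /etc/docker/daemon.json
    sudo systemctl restart docker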
    

Step 3: Install Kubernetes Components

You need to install kubeadm, kubelet, and kubectl, which are essential for setting up a Kubernetes cluster.

  • Add the Kubernetes APT repository. The legacy apt.kubernetes.io repository has been shut down, so use the community-owned pkgs.k8s.io repository (adjust v1.30 to the Kubernetes minor version you want to install):
    sudo mkdir -p /etc/apt/keyrings
    curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
    echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
    
    
  • Install Kubernetes components:
    sudo apt-get update
    sudo apt-get install -y kubelet kubeadm kubectl
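
  • Optionally, hold the packages so routine upgrades don't move the cluster to an unexpected version, and confirm what was installed:
    sudo apt-mark hold kubelet kubeadm kubectl
    kubeadm version
    kubectl version --client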
    

Step 4: Disable Swap (Required for Kubernetes)

Kubernetes requires that swap be disabled on all nodes.

  • Disable swap temporarily:
    sudo swapoff -a
    
  • To permanently disable swap, edit /etc/fstab and comment out any swap lines:
    sudo nano /etc/fstab
    
    • Comment out the line containing swap.
    • Save and exit the editor.
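  • Alternatively, a single sed command can comment out the swap entries; it is a plain text match, so review /etc/fstab afterwards, and then confirm that no swap is active:
    sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab
    swapon --show
    free -h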

Step 5: Initialize the Kubernetes Master Node

Now you'll initialize the master node for your Kubernetes cluster.

  • Initialize the Kubernetes master node:
    sudo kubeadm init --pod-network-cidr=10.244.0.0/16
    
    • The --pod-network-cidr flag specifies the IP range assigned to pods. Here we use 10.244.0.0/16, the default CIDR block expected by the Flannel network plugin installed in Step 6.
  • Set up kubeconfig for the current user:
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
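
  • Verify the control plane (optional): kubectl cluster-info should report a running control plane. If this dedicated server will be your only node, you can also remove the control-plane taint so that regular pods can be scheduled on it (on older Kubernetes releases the taint is named node-role.kubernetes.io/master- instead):
    kubectl cluster-info
    kubectl taint nodes --all node-role.kubernetes.io/control-plane-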
    

Step 6: Install a Network Plugin

A network plugin is necessary to enable communication between the Kubernetes pods. Flannel is a common choice.

  • Install Flannel (the project now lives in the flannel-io GitHub organization; the old coreos/flannel manifest URL is outdated):
    kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
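
  • Verify the network plugin: recent Flannel manifests create a kube-flannel namespace (older ones deploy into kube-system). Once the Flannel pods are Running, the nodes should move to Ready:
    kubectl get pods -n kube-flannel
    kubectl get nodes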
    

Step 7: Join Worker Nodes to the Cluster

  • After initializing the master node, you will see a command with a token in the output, something like:
    kubeadm join <MASTER-IP>:6443 --token <your-token> --discovery-token-ca-cert-hash sha256:<your-hash>
    
    • Run this command on each worker node to join them to the Kubernetes cluster.
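  • If you no longer have that output, you can regenerate the join command on the master node at any time:
    kubeadm token create --print-join-command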

Step 8: Verify Cluster Nodes

  • Once the worker nodes have joined, you can verify the status of the nodes in the cluster.
    kubectl get nodes
    
    • This will list all the nodes in your cluster, including the master and worker nodes.
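  • For more detail (internal IPs, OS image, container runtime), or to investigate a node that stays NotReady, you can also run:
    kubectl get nodes -o wide
    kubectl describe node <node-name>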

Step 9: Deploy Applications to Your Cluster

  • Create a deployment YAML file: create a file named nginx-deployment.yaml, for example, with the following content:
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-deployment
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx
            ports:
            - containerPort: 80
    
  • Deploy the application:
    kubectl apply -f nginx-deployment.yaml
    
  • Expose the deployment as a service. A dedicated server has no cloud load balancer, so a --type=LoadBalancer service would stay in a pending state unless you install a bare-metal implementation such as MetalLB; a NodePort service is the simplest option here:
    kubectl expose deployment nginx-deployment --type=NodePort --name=nginx-service
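
  • To confirm the rollout completed and both replicas are running:
    kubectl rollout status deployment/nginx-deployment
    kubectl get pods -l app=nginx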
    

Step 10: Access the Application

  • To check if your application is running:
    kubectl get services
    
  • With a NodePort service, the assigned port (in the 30000-32767 range) appears in the PORT(S) column. An EXTERNAL-IP is shown only for LoadBalancer services, which on a dedicated server requires a bare-metal load balancer such as MetalLB.
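  • To test the application, note the node port assigned to nginx-service and request it with curl (replace <node-port> with the port shown, and <server-ip> with your server's IP when testing remotely):
    kubectl get service nginx-service
    curl http://<server-ip>:<node-port>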

Step 11: Monitor and Manage Your Cluster

  • Check the status of your pods:
    kubectl get pods
    
  • Check logs for troubleshooting:
    kubectl logs <pod-name>
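
  • For deeper troubleshooting, describe a pod to see its events (image pull errors, scheduling problems), or list workloads across all namespaces:
    kubectl describe pod <pod-name>
    kubectl get pods --all-namespaces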
    

Step 12: Scale Your Application

  • If you need to scale your application up or down, use the following command:
    kubectl scale deployment nginx-deployment --replicas=3
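
  • Verify the new replica count, and note that if the metrics-server add-on is installed you can also let Kubernetes scale the deployment automatically based on CPU usage:
    kubectl get deployment nginx-deployment
    kubectl autoscale deployment nginx-deployment --min=2 --max=5 --cpu-percent=80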
    

Step 13: Clean Up Resources

  • Delete the deployment and service:
    kubectl delete deployment nginx-deployment
    kubectl delete service nginx-service
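
  • Since the deployment was created from a manifest, you can also remove it with the same file. To wipe Kubernetes from a node entirely (for example before rebuilding the cluster), kubeadm provides a reset command; be aware that it removes the cluster state from that node:
    kubectl delete -f nginx-deployment.yaml
    sudo kubeadm reset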
    