- July 30 2024
- Vishnu Dass
Private Kubernetes clusters are becoming more popular because they offer better security, control, and compliance compared to public cloud options.
This means companies can keep sensitive data within their own boundaries and retain more control over how the platform is configured and operated.
However, setting up and managing these private clusters can be challenging, especially in large, high-traffic environments.
In this blog, we will help you understand the best ways to set up your own private Kubernetes cluster.
Establish a Private Kubernetes Cluster for Enhanced Control
If you prefer full control over your Kubernetes environment, a DIY approach to deploying private Kubernetes clusters offers greater flexibility. The steps below cover infrastructure provisioning, component installation, networking configuration, and security considerations.
Infrastructure Provisioning Options
- Bare-Metal Servers:
– Set up physical servers with necessary hardware and network configurations.
– Install a base operating system (e.g., Ubuntu, CentOS).
– Ensure SSH access and network connectivity.
– Set up a firewall and configure iptables (a ufw example follows this list).
– Use tools like PXE for network booting to streamline OS installations.
- Virtual Machines:
– Use a hypervisor like VMware, VirtualBox, or cloud providers (AWS, GCP, Azure) to create virtual machines.
– Allocate CPU, memory, and storage resources to each VM.
– Install a base operating system and ensure SSH access.
– Consider using Infrastructure as Code (IaC) tools like Terraform to manage VM provisioning.
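As a concrete example of the firewall step in the bare-metal list above, here is a minimal sketch using ufw on Ubuntu. The port numbers follow the defaults documented for kubeadm clusters; adjust the rules for your own distribution, CNI plugin, and topology.
# Control-plane node ports
sudo ufw allow 6443/tcp        # Kubernetes API server
sudo ufw allow 2379:2380/tcp   # etcd server client API
sudo ufw allow 10250/tcp       # kubelet API
sudo ufw allow 10257/tcp       # kube-controller-manager
sudo ufw allow 10259/tcp       # kube-scheduler
# Worker node ports
sudo ufw allow 10250/tcp         # kubelet API
sudo ufw allow 30000:32767/tcp   # NodePort services
sudo ufw enable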
Kubernetes Component Installation:
- Install Dependencies:
– Disable swap on all nodes, as the kubelet requires swap to be turned off:
sudo swapoff -a
– Install Docker:
sudo apt-get update && sudo apt-get install -y docker.io
– Install kubeadm, kubelet, and kubectl:
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl
sudo curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
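Beyond the packages above, the standard kubeadm prerequisites also include keeping swap disabled across reboots and enabling the bridge-networking sysctls. A minimal sketch, assuming an Ubuntu/Debian host:
# Keep swap disabled after reboot
sudo sed -i '/ swap / s/^/#/' /etc/fstab
# Load the kernel modules kubeadm expects
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
# Let iptables see bridged traffic and enable IP forwarding
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system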
- Initialize the Control Plane:
– On the master node, initialize the Kubernetes control plane:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
– Configure kubectl for the root user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
– Save the `kubeadm join` command output as it is required to join worker nodes.
– Consider setting up a non-root user for Kubernetes administration as a better security practice.
- Join Worker Nodes:
– On each worker node, join the cluster using the command provided by `kubeadm init`:
sudo kubeadm join <master-node-ip>:<port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>
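Once the workers have joined, you can confirm that the control plane sees them:
kubectl get nodes -o wide
# Nodes will typically show "NotReady" until the CNI plugin from the next section is installed.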
Networking Configuration:
- Install a CNI Plugin:
– For Flannel:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
– For Calico:
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
– Verify the CNI plugin installation:
kubectl get pods --all-namespaces
- Configure Service Mesh (Optional):
– Install Istio or Linkerd for advanced traffic management and observability.
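As a rough sketch of what this optional step can look like with Istio (the demo profile shown here is intended for evaluation rather than production):
# Download istioctl and install Istio with the demo profile
curl -L https://istio.io/downloadIstio | sh -
cd istio-*/
export PATH=$PWD/bin:$PATH
istioctl install --set profile=demo -y
# Enable automatic sidecar injection in the default namespace
kubectl label namespace default istio-injection=enabled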
Security Considerations:
- Enable RBAC:
– Kubernetes RBAC is enabled by default in Kubernetes 1.6 and later.
– Create roles and role bindings to control access to resources:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
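A Role by itself grants nothing until it is bound to a user, group, or service account. A matching RoleBinding might look like the following; the subject name "jane" is just a placeholder:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: jane   # placeholder; replace with a real user, group, or service account
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io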
- Pod Security Policies:
– Define and enforce security policies for pod deployments:
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
spec:
  privileged: false
  seLinux:
    rule: RunAsAny
  runAsUser:
    rule: MustRunAsNonRoot
  fsGroup:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
  - 'configMap'
  - 'emptyDir'
  - 'persistentVolumeClaim'
  - 'projected'
  - 'secret'
  - 'downwardAPI'
  - 'gitRepo'
Note: Pod Security Policies (PSPs) were deprecated in Kubernetes 1.21 and removed in Kubernetes 1.25. Use Pod Security Admission or Open Policy Agent (OPA) Gatekeeper instead.
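For example, Pod Security Admission can enforce the built-in "restricted" profile with a namespace label; the commands below apply it to the default namespace, and you would label other namespaces as needed:
kubectl label namespace default \
  pod-security.kubernetes.io/enforce=restricted \
  pod-security.kubernetes.io/warn=restricted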
Private Clusters Through Managed Kubernetes Services on Public Cloud Platforms (MKS)
Managed Kubernetes Services (MKS) on public cloud platforms let you run your applications without operating the Kubernetes control plane yourself.
These services keep you in control of your workloads while the cloud provider handles cluster provisioning, upgrades, and control-plane availability.
In this section, we’ll talk about three popular cloud providers: Google Kubernetes Engine (GKE), Amazon Elastic Kubernetes Service (EKS), and Azure Kubernetes Service (AKS).
Google Kubernetes Engine (GKE)
Google Kubernetes Engine (GKE) is Google Cloud's managed Kubernetes service for deploying and running containerized applications.
Key Features:
- Auto-scaling: GKE can automatically add or remove nodes and pods as demand changes.
- Logging and Monitoring: GKE integrates with Cloud Logging and Cloud Monitoring so you can see how your workloads are behaving.
- Security: GKE supports private clusters, automatic node upgrades, and integration with Google Cloud IAM.
Example Deployment:
Here’s how you can set up a GKE cluster using the command line:
# Set variables
PROJECT_ID=my-gcp-project
CLUSTER_NAME=my-gke-cluster
ZONE=us-central1-a
# Authenticate gcloud
gcloud auth login
# Set project
gcloud config set project $PROJECT_ID
# Create GKE cluster
gcloud container clusters create $CLUSTER_NAME \
  --zone $ZONE \
  --num-nodes 3 \
  --enable-autoscaling --min-nodes=1 --max-nodes=5 \
  --enable-ip-alias \
  --enable-private-nodes --master-ipv4-cidr 172.16.0.0/28
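After the cluster is created, fetch credentials so kubectl can talk to it; with private nodes, run this from a host that can reach the control plane (for example, inside the same VPC or via authorized networks):
gcloud container clusters get-credentials $CLUSTER_NAME --zone $ZONE
kubectl get nodes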
Amazon Elastic Kubernetes Service (EKS)
Amazon Elastic Kubernetes Service (EKS) is AWS's managed Kubernetes service for running containerized applications in the AWS cloud.
Key Features:
- AWS Integration: EKS integrates with AWS services such as IAM, VPC, Elastic Load Balancing, and CloudWatch.
- Managed Control Plane: AWS provisions, scales, and patches the Kubernetes control plane for you.
- Fargate Support: Run pods on AWS Fargate without provisioning or managing EC2 worker nodes.
Example Deployment:
Here’s how you can set up an EKS cluster:
# Install AWS CLI and eksctl
pip install awscli --upgrade
curl --location "https://github.com/weaveworks/eksctl/releases/download/latest_release/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin
# Configure AWS CLI
aws configure
# Create EKS cluster
eksctl create cluster \
  --name my-eks-cluster \
  --region us-west-2 \
  --nodegroup-name standard-workers \
  --node-type t3.medium \
  --nodes 3 \
  --nodes-min 1 \
  --nodes-max 4 \
  --managed
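eksctl writes a kubeconfig entry for the new cluster by default; if you need to regenerate it or configure another machine, the AWS CLI can do so, after which you can verify the nodes:
aws eks update-kubeconfig --region us-west-2 --name my-eks-cluster
kubectl get nodes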
Azure Kubernetes Service (AKS)
Azure Kubernetes Service (AKS) is Microsoft Azure's managed Kubernetes service for running containerized applications in the cloud.
Key Features:
- Azure Active Directory Integration: AKS works with Azure AD to manage user access.
- Developer-Friendly: Works well with tools like Azure DevOps and GitHub.
- Security: AKS supports Azure RBAC, Kubernetes network policies, and private clusters.
Example Deployment:
Here’s how you can set up an AKS cluster:
# Install Azure CLI
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
# Login to Azure
az login
# Set variables
RESOURCE_GROUP=myResourceGroup
CLUSTER_NAME=myAKSCluster
# Create resource group
az group create --name $RESOURCE_GROUP --location eastus
# Create AKS cluster
az aks create \
  --resource-group $RESOURCE_GROUP \
  --name $CLUSTER_NAME \
  --node-count 3 \
  --enable-addons monitoring \
  --generate-ssh-keys \
  --enable-aad
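Then merge the cluster credentials into your kubeconfig and verify the nodes:
az aks get-credentials --resource-group $RESOURCE_GROUP --name $CLUSTER_NAME
kubectl get nodes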
Bare-Metal Deployment for Private Kubernetes Clusters
Infrastructure Considerations:
Hardware Selection:
- Pick servers with enough CPU, memory, and storage for your expected workloads.
- Make sure your hardware works with the operating system and Kubernetes.
Network Fabric Design:
- Design a strong network layout to keep latency low and throughput high.
- Set up redundant network paths for failover and high availability.
- Use network segmentation and VLANs for better security and traffic management.
Practical Steps:
Prepare Bare-Metal Servers:
- Set up your physical servers with the right hardware (CPU, memory, storage, network interfaces).
- Install an operating system (e.g., Ubuntu, CentOS).
- Ensure you can access your servers via SSH and they are network-connected.
- Set up IP addresses and make sure hostnames can be resolved.
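If you are not running internal DNS, one simple way to make hostnames resolvable is static /etc/hosts entries on every node. The addresses and names below are placeholders for illustration only:
cat <<EOF | sudo tee -a /etc/hosts
192.168.1.10 k8s-master    # placeholder control-plane node
192.168.1.11 k8s-worker-1  # placeholder worker node
192.168.1.12 k8s-worker-2  # placeholder worker node
EOF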
Install Required Software:
Disable Swap:
sudo swapoff -a
Install Docker:
sudo apt-get update && sudo apt-get install -y docker.io
Install Kubernetes Components:
Install kubeadm, kubelet, and kubectl:
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl
sudo curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
Set Up the Kubernetes Control Plane:
- On the main server (master node), initialize the control plane:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
Configure kubectl for the root user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Join Worker Nodes:
- On each worker node, join the cluster using the command provided by `kubeadm init`:
sudo kubeadm join <master-node-ip>:<port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>
Install a CNI Plugin:
- Flannel:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
- Calico:
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
Enable Role-Based Access Control (RBAC):
- Use RBAC to control who can do what in your Kubernetes cluster:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
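Apply the manifest (the filename below is just an example) and, as a quick sanity check, use kubectl's built-in authorization query to confirm what a given subject may do; the user name is a placeholder:
kubectl apply -f pod-reader-role.yaml
kubectl auth can-i list pods --namespace default --as jane
The check answers "no" until a RoleBinding actually grants the pod-reader role to that subject, as shown in the RBAC example earlier.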
Container Orchestration Platforms for Private Kubernetes Clusters
If you’re looking to deploy private Kubernetes clusters with additional functionalities, tools like OpenShift and Rancher can provide enhanced management features and capabilities. These platforms offer benefits such as integrated CI/CD pipelines, service catalogs, and multi-cluster management, making it easier to handle complex Kubernetes environments.
Recommended Tools:
OpenShift:
- Service Catalogs
- CI/CD Pipelines
- Developer Tools
- Enhanced Security
Rancher:
- Multi-Cluster Management
- Service Mesh Integration
- App Catalog
- Access Control
These tools can streamline your Kubernetes deployments and provide the necessary features to manage and scale your clusters effectively.