Creating Multi-Cloud Kubernetes Cluster on AWS, Azure, and GCP cloud

Jyoti Pawar
8 min read · Jun 13, 2022

Multi-cloud Kubernetes is the deployment of Kubernetes over multiple cloud services and providers. Kubernetes can also be a way for organizations to efficiently manage multi-cloud architecture.

Most enterprises are already using a multi-cloud strategy. By combining cloud services, organizations can choose the best services for them at the lowest cost. However, multi-cloud can get complex, and this is where Kubernetes comes in. By standardizing workloads on Kubernetes, and leveraging Kubernetes features like Federation, organizations can deploy large-scale workloads on multiple clouds with central control.

Prerequisites

  1. For building this cluster you need working accounts on the clouds. I will be using three clouds: AWS, GCP, and Azure.

Now, I am launching one instance in AWS, which works as the K8s master node, and the other two in Azure and GCP respectively, which act as slave nodes.

Step 1: Configure Kubernetes Master node on AWS cloud

I have launched one EC2 instance.

Now we have to configure it as the K8s master. For that, we have to log in to the master node. For login, we can use PuTTY or any other tool; here I have used PuTTY. After logging in, I switched from the ec2-user to the root user.

Let’s start configuring the master node

  1. I am going to use Docker as the container engine, so the first step is to install Docker. The Docker repo is pre-configured on the Amazon Linux 2 AMI, so Docker can be installed with the commands below:
# yum install docker -y
# docker version

2. Start and enable the Docker service

# systemctl enable docker --now
# systemctl status docker

3. Here I am going to set up the Kubernetes cluster using the kubeadm program.

The repo for kubeadm is not pre-configured on the Amazon Linux 2 AMI, so first we have to configure a yum repo for kubeadm.

For reference use this link https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/

Now we can install kubeadm using the command below:

# yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

Starting the Kubelet Service

# systemctl status kubelet
# systemctl enable kubelet --now

As of now, the status is 'activating', but it will start soon as we proceed with configuring the cluster.

4. Kubernetes runs multiple programs behind the scenes, and it launches all of them as Docker containers. To launch the containers for these programs, it needs the corresponding images, so now we have to pull the required images.

To pull the images, use the command below:

# kubeadm config images pull

5. Docker by default uses 'cgroupfs' as its cgroup driver, but Kubernetes does not have proper support for this driver. So the next step is to change the cgroup driver from 'cgroupfs' to 'systemd'. This can be done by creating a 'daemon.json' file in '/etc/docker/' with the following content:

{
  "exec-opts": ["native.cgroupdriver=systemd"]
}

Restart the Docker service using the commands below, and verify the cgroup driver:

# systemctl restart docker
# docker info | grep Cgroup
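The daemon.json step above can be sketched non-destructively. The snippet below writes the file to a temporary directory (on a real node the path would be /etc/docker/daemon.json) and then verifies that it parses as valid JSON and carries the systemd driver setting:

```shell
# Write the daemon.json that switches Docker's cgroup driver to systemd.
# DOCKER_ETC is a temp dir here so the sketch is safe to run anywhere;
# on a real node you would write directly to /etc/docker/.
DOCKER_ETC=$(mktemp -d)
cat > "$DOCKER_ETC/daemon.json" <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
# Confirm the file is valid JSON and read back the driver setting.
DRIVER=$(python3 -c "import json,sys; print(json.load(open(sys.argv[1]))['exec-opts'][0])" "$DOCKER_ETC/daemon.json")
echo "$DRIVER"
```

A malformed daemon.json silently prevents Docker from starting, so validating the JSON before restarting the service is a cheap safety check.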

6. The next step is to make changes to the iptables settings. Use the commands below:

# cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

And finally, use the command below to apply the changes:

# sysctl --system

7. We also need the 'iproute-tc' package for traffic control; use the command below to install it:

# yum install iproute-tc -y

8. Initialize the Kubernetes cluster using the kubeadm init command. Generally, we use the command below to initialize the cluster:

kubeadm init --pod-network-cidr=[Network CIDR]

Kubernetes requires a minimum of 2 CPUs and 2 GiB RAM to initialize the cluster. If you are using the t2.micro instance type (with 1 vCPU and 1 GiB RAM), it might throw an error. To get around this, you can use the commands below:

If we launch the master and slaves in different clouds:

# kubeadm init --control-plane-endpoint "public_ip_of_instance:PORT" --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=NumCPU --ignore-preflight-errors=Mem

If we launch the master and slaves in the same cloud, use the command below:

# kubeadm init --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=NumCPU --ignore-preflight-errors=Mem

Note:

* pod-network-cidr = the IP range for pods running on the slave nodes

* control-plane-endpoint = assigns the cluster a public IP and port (needed when slaves join from another cloud)

* ignore-preflight-errors = ignores the CPU and memory preflight errors
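To make each flag's role visible, the init command can be assembled from named parts. This is a sketch only: PUBLIC_IP is a hypothetical placeholder (on a real master it would be the instance's public IP), and 10.244.0.0/16 is flannel's default pod network:

```shell
# Compose the kubeadm init command from its parts.
PUBLIC_IP="203.0.113.10"   # hypothetical placeholder, not a real master IP
POD_CIDR="10.244.0.0/16"   # flannel's default pod network range
INIT_CMD="kubeadm init --control-plane-endpoint ${PUBLIC_IP}:6443 --pod-network-cidr=${POD_CIDR} --ignore-preflight-errors=NumCPU --ignore-preflight-errors=Mem"
# Print the command instead of running it, since this is only a sketch.
echo "$INIT_CMD"
```

Port 6443 is the Kubernetes API server's default; using the public IP as the control-plane endpoint is what lets nodes in other clouds reach the master.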

9. The next step is to set up the kubeconfig file that lets the kubectl client connect to the cluster. Use the commands below:

# mkdir -p $HOME/.kube
# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# sudo chown $(id -u):$(id -g) $HOME/.kube/config

We can also see that the kubelet service has now started:

# systemctl status kubelet

10. We can see that we have only one node, and it is not in the Ready state as of now.

# kubectl get nodes

In the final step, we have to deploy the kube-flannel network add-on. Use the command below to set up flannel for the cluster:

# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

11. The kubeadm init command prints a join command with a token. We have to use this to connect the slave nodes to the master node. You can also regenerate it on the master node using the command below:

# kubeadm token create --print-join-command

Now we can run this join command on the slave nodes to connect them to the master node.
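The join command printed by the master has a fixed general shape. The IP, token, and CA-cert hash below are hypothetical placeholders, not real values; the real ones come from the `kubeadm token create --print-join-command` output on the master:

```shell
# Shape of the kubeadm join command. All three values are placeholders.
MASTER_IP="203.0.113.10"                  # hypothetical master public IP
TOKEN="abcdef.0123456789abcdef"           # hypothetical bootstrap token
CA_HASH="sha256:0000000000000000000000000000000000000000000000000000000000000000"
JOIN_CMD="kubeadm join ${MASTER_IP}:6443 --token ${TOKEN} --discovery-token-ca-cert-hash ${CA_HASH}"
# Print rather than run, since the values above are not real.
echo "$JOIN_CMD"
```

The token authenticates the joining node to the master, while the CA-cert hash lets the node verify it is talking to the right control plane and not an impostor.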

We have successfully set up the K8s master node.

Step 2: Configure Kubernetes Slave node on Azure Cloud

Here I have launched one virtual machine running RHEL 8.2 in the Azure cloud.

Now we have to set it up as a K8s slave node.

  1. In my OS the Docker repo is not configured, so first I have to configure the repo.

# vim /etc/yum.repos.d/docker.repo

[docker_repo]
baseurl = https://download.docker.com/linux/centos/7/x86_64/stable/
gpgcheck = 0
name = Yum repo for docker

# yum install docker-ce --nobest
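Since vim is interactive, the same repo file can also be created non-interactively with a heredoc, which is handy in provisioning scripts. In this sketch REPO_DIR is a temp directory so it runs anywhere; on a real node the target directory is /etc/yum.repos.d:

```shell
# Write the Docker yum repo file with a heredoc instead of an editor.
# REPO_DIR is a temp dir stand-in for /etc/yum.repos.d.
REPO_DIR=$(mktemp -d)
cat > "$REPO_DIR/docker.repo" <<'EOF'
[docker_repo]
baseurl = https://download.docker.com/linux/centos/7/x86_64/stable/
gpgcheck = 0
name = Yum repo for docker
EOF
# Sanity check: the file should define exactly one baseurl.
grep -c '^baseurl' "$REPO_DIR/docker.repo"
```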

Change the cgroup driver for docker to systemd

# mkdir /etc/docker
# vim /etc/docker/daemon.json

Add this content to the /etc/docker/daemon.json file

{
  "exec-opts": ["native.cgroupdriver=systemd"]
}

Start and enable docker services

# systemctl start docker
# systemctl enable docker

2. Here I am going to set up this node for the Kubernetes cluster using the kubeadm program.

The repo for kubeadm is not pre-configured, so first we have to configure a yum repo for kubeadm.

For reference use this link https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/

# vim /etc/yum.repos.d/kubernetes.repo

[kubernetes]
baseurl = https://packages.cloud.google.com/yum/repos/kubernetes-el7-$basearch
gpgcheck = 0
name = Yum repo for Kubernetes

Now we can install kubeadm using the command below:

# yum install kubelet kubeadm kubectl -y

Start and enable kubelet

# systemctl status kubelet
# systemctl enable kubelet --now

3. The next step is to make changes to the iptables settings. Use the commands below:

# cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

And finally, use the command below to apply the changes:

# sysctl --system

4. We also need the 'iproute-tc' package for traffic control; use the command below to install it:

# yum install iproute-tc -y

5. Finally, run the kubeadm join command we got on the master node (via # kubeadm token create --print-join-command) to join this slave node to the master node.

6. Now, on the master node, we run a command to check whether the slave is connected:

# kubectl get nodes

The slave is successfully connected to the master.
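In scripts, the connectivity check above can go one step further and verify that every node's STATUS column reads Ready. The sketch below parses sample output; the node names are hypothetical stand-ins, and on the real master you would pipe the actual `kubectl get nodes` output instead:

```shell
# Count nodes whose STATUS column is not "Ready" in kubectl output.
# SAMPLE is hypothetical stand-in data for `kubectl get nodes`.
SAMPLE="NAME           STATUS   ROLES           AGE   VERSION
aws-master     Ready    control-plane   20m   v1.21.1
azure-slave    Ready    <none>          8m    v1.21.1
gcp-slave      Ready    <none>          5m    v1.21.1"
# Skip the header row (NR>1), keep rows where column 2 is not Ready.
NOT_READY=$(printf '%s\n' "$SAMPLE" | awk 'NR>1 && $2!="Ready"' | wc -l)
echo "nodes not ready: $NOT_READY"
```

A zero count means every node has joined and passed its readiness checks, which is the condition the later steps of this article rely on.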

Step 3: Configure Kubernetes Slave node on GCP Cloud

I have launched one instance on the GCP cloud.

Now we have to set it up as a K8s slave node.

Login to this instance by clicking the SSH button in the GCP console.

Now, follow these steps to configure the Kubernetes slave.

  1. Install Docker

Create a yum repository for Docker.

# vim /etc/yum.repos.d/docker.repo

[docker_repo]
baseurl = https://download.docker.com/linux/centos/7/x86_64/stable/
gpgcheck = 0
name = Yum repo for docker

# yum install docker-ce --nobest

Change the cgroup driver for docker to systemd

# mkdir /etc/docker
# vim /etc/docker/daemon.json

Add this content to the /etc/docker/daemon.json file

{
  "exec-opts": ["native.cgroupdriver=systemd"]
}

Start and enable docker services

# systemctl start docker
# systemctl enable docker

2. Here I am going to set up this node for the Kubernetes cluster using the kubeadm program.

The repo for kubeadm is not pre-configured, so first we have to configure a yum repo for kubeadm.

For reference use this link https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/

# vim /etc/yum.repos.d/kubernetes.repo

[kubernetes]
baseurl = https://packages.cloud.google.com/yum/repos/kubernetes-el7-$basearch
gpgcheck = 0
name = Yum repo for Kubernetes

Now we can install kubeadm using the command below:

# yum install kubelet kubeadm kubectl -y

Start and enable kubelet

# systemctl status kubelet
# systemctl enable kubelet --now

3. The next step is to make changes to the iptables settings. Use the commands below:

# cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

And finally, use the command below to apply the changes:

# sysctl --system

4. We also need the 'iproute-tc' package for traffic control; use the command below to install it:

# yum install iproute-tc -y

5. Finally, run the kubeadm join command we got on the master node (via # kubeadm token create --print-join-command) to join this slave node to the master node.

6. Now, on the master node, we run a command to check whether the slave is connected:

# kubectl get nodes

Finally, the Kubernetes slave nodes running on Azure and GCP are connected to the Kubernetes master node running on AWS, and both slave nodes are Ready. Now we can run Kubernetes resources on this cluster, and they will be scheduled on the Azure and GCP nodes.

In this way, we have built a true multi-cloud Kubernetes cluster to achieve high availability.

Thanks for reading!
