Kubernetes Cluster on AWS with Ansible

Jyoti Pawar
Jun 12, 2022

KUBERNETES :

Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.

ANSIBLE :

Ansible is a radically simple IT automation engine that automates cloud provisioning, configuration management, application deployment, intra-service orchestration, and many other IT needs.

Designed for multi-tier deployments since day one, Ansible models your IT infrastructure by describing how all of your systems inter-relate, rather than just managing one system at a time.

KUBERNETES CLUSTER :

A Kubernetes cluster is a set of node machines for running containerized applications. If you’re running Kubernetes, you’re running a cluster.

The cluster is the heart of Kubernetes’ key advantage: the ability to schedule and run containers across a group of machines, be they physical or virtual, on premises or in the cloud. Kubernetes containers aren’t tied to individual machines. Rather, they’re abstracted across the cluster.

🌀 Now let’s see how to configure a Kubernetes multi-node cluster using Ansible.

Goal : I am going to launch 3 instances on the AWS cloud. One of the 3 instances will be configured as the Kubernetes master node and the rest as Kubernetes worker nodes, and then the connection between master and workers will be set up.

Here I am using a RHEL-8 virtual machine (on Oracle VirtualBox) with Ansible configured on it as the controller node.

I’ve created a different Ansible role for each requirement, i.e.:

  • Launch EC2 instances over AWS
  • Configure the Kubernetes master
  • Configure the Kubernetes workers

To attain the set goal, follow the steps mentioned below:

Step 1: Create a dynamic inventory

To create a dynamic inventory for AWS instances, download the pre-created scripts. The scripts automatically pick up the instance IPs and build the inventory.

Create a directory, copy the scripts into it, and then point the inventory setting in the Ansible configuration file at that directory.
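For example, a minimal ansible.cfg for this setup might look like the following sketch; the paths are placeholders matching the /ws_task_19 workspace used later in this post, and ec2-user is an assumption for the Amazon Linux AMI:

[defaults]
inventory = /ws_task_19/inventory
host_key_checking = False
remote_user = ec2-user
private_key_file = /ws_task_19/cluster_key.pem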

Links to the scripts are mentioned below:

Link to ec2.py :
https://github.com/ansible/ansible/blob/stable-2.9/contrib/inventory/ec2.py
Link to ec2.ini :
https://github.com/ansible/ansible/blob/stable-2.9/contrib/inventory/ec2.ini
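For example, they can be fetched with wget (these raw URLs correspond to the GitHub links above):

wget https://raw.githubusercontent.com/ansible/ansible/stable-2.9/contrib/inventory/ec2.py
wget https://raw.githubusercontent.com/ansible/ansible/stable-2.9/contrib/inventory/ec2.ini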

After downloading the files into the directory, make the Python script executable.

The command to make any file executable is:

chmod +x <file_name>
e.g. : chmod +x ec2.py

Now set the environment variables, i.e. provide the credentials of our AWS account so that the scripts know which account to look in for the instances.

To do so we have to create an IAM user in AWS; after the user is created, the ACCESS_KEY and SECRET_ACCESS_KEY appear on the screen, and we need these two keys.

NOTE : The SECRET_ACCESS_KEY is visible only once, just after the user is created, so save this key some place safe.

We need these two keys for 2 purposes:

  1. Provide the keys to the inventory script
  2. Use these keys to launch instances in AWS

Now set the environment variables. The commands to set them are:

export AWS_ACCESS_KEY_ID="***************"

export AWS_SECRET_ACCESS_KEY="*********************************"

export AWS_REGION="ap-south-1" (can select any region)

Now the Dynamic Inventory is set.
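We can also query the script directly; --list is part of Ansible’s dynamic-inventory contract, so ec2.py supports it:

./ec2.py --list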

We can check the IPs of the instances (if any are currently in the running state in AWS) with the command

ansible all --list-hosts

Step 2: Create the Ansible roles

To create an Ansible role, use the command

ansible-galaxy role init <role_name>
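For this setup that means creating the three roles used below:

ansible-galaxy role init ec2_instance
ansible-galaxy role init config_master
ansible-galaxy role init config_worker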

I’ve created 3 roles, as follows:

  1. ec2_instance : launches 3 EC2 instances (the number can be changed), one tagged kube_master and the other two kube_worker. The tasks file:
- name: Launch kubernetes master nodes
  ec2:
    region: "{{ region }}"
    key_name: "{{ master_key_name }}"
    instance_type: "{{ instance_type }}"
    image: "{{ image }}"
    group: "{{ group_name }}"
    vpc_subnet_id: "{{ vpc_subnet_id }}"
    count: "{{ master_count }}"
    wait: yes
    instance_tags:
      Name: kube_master
    state: present
    assign_public_ip: yes
    aws_access_key: "{{ access_key }}"
    aws_secret_key: "{{ secret_key }}"

- name: Launch kubernetes worker nodes
  ec2:
    region: "{{ region }}"
    key_name: "{{ worker_key_name }}"
    instance_type: "{{ instance_type }}"
    image: "{{ image }}"
    group: "{{ group_name }}"
    vpc_subnet_id: "{{ vpc_subnet_id }}"
    count: "{{ worker_count }}"
    wait: yes
    instance_tags:
      Name: kube_worker
    state: present
    assign_public_ip: yes
    aws_access_key: "{{ access_key }}"
    aws_secret_key: "{{ secret_key }}"

vars/main.yml file :

region: ap-south-1
instance_type: t2.micro
image: ami-04db49c0fb2215364
vpc_subnet_id: subnet-0790eee827b286891
group_name: kubernetes_sg
master_key_name: cluster_key
master_count: 1

worker_key_name: cluster_key
worker_count: 2

2. config_master : using this role, the kube_master instance is configured as the Kubernetes master node.

---
# tasks file for config_master

- name: Configure yum repo for kubernetes
  yum_repository:
    name: Kubernetes
    description: "yum repository for kubernetes"
    baseurl: https://packages.cloud.google.com/yum/repos/kubernetes-el7-$basearch
    enabled: 1
    gpgcheck: 1
    repo_gpgcheck: 1
    gpgkey: https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg

- name: "Install Docker, kubeadm, kubectl, kubelet and iproute-tc"
  package:
    name: "{{ item }}"
    state: present
  loop: "{{ packages }}"

- name: "Enabling services"
  service:
    name: "{{ item }}"
    state: started
    enabled: yes
  loop: "{{ services }}"

- name: "Pull the kubeadm config images"
  shell: kubeadm config images pull
  register: kubeadm_images

- name: "Displaying kubeadm images"
  debug:
    var: kubeadm_images
  ignore_errors: yes

- name: "Copying docker driver file"
  copy:
    src: "{{ source_file }}"
    dest: "{{ destination_file }}"

- name: "Restart docker service"
  service:
    name: "docker"
    state: restarted

- name: "Setting bridge to 1"
  shell: echo "1" > /proc/sys/net/bridge/bridge-nf-call-iptables
  changed_when: false

- name: Initialize the kubernetes master
  shell: "kubeadm init --pod-network-cidr=10.240.0.0/16 --ignore-preflight-errors=NumCPU --ignore-preflight-errors=Mem"
  ignore_errors: yes
  args:
    warn: false

- name: "Configuration files setup"
  file:
    path: "$HOME/.kube"
    state: directory

- name: "Copying configuration file"
  copy:
    src: /etc/kubernetes/admin.conf
    dest: $HOME/.kube/config
    remote_src: yes

- name: Setup kubeconfig for home user
  shell: "chown $(id -u):$(id -g) $HOME/.kube/config"

- name: "Installing flannel"
  shell: "kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml"
  args:
    warn: false

- name: "THE JOIN TOKEN"
  command: "kubeadm token create --print-join-command"
  register: x
  ignore_errors: True

- name: "Storing Token"
  local_action: copy content={{ x.stdout }} dest=/tmp/token




vars/main.yml file :

---
# vars file for config_master

packages:
- docker
- kubeadm
- kubectl
- kubelet
- iproute-tc

services:
- docker
- kubelet

source_file: /ws_task_19/daemon.json
destination_file: /etc/docker/daemon.json

3. config_worker : similarly, using this role the kube_worker instances are configured as Kubernetes worker nodes.

---
# tasks file for config_worker

- name: Configure yum repo for kubernetes
  yum_repository:
    name: Kubernetes
    description: "yum repository for kubernetes"
    baseurl: https://packages.cloud.google.com/yum/repos/kubernetes-el7-$basearch
    enabled: 1
    gpgcheck: 1
    repo_gpgcheck: 1
    gpgkey: https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg

- name: "Install Docker, kubeadm, kubectl, kubelet and iproute-tc"
  package:
    name: "{{ item }}"
    state: present
  loop: "{{ packages }}"

- name: "Enabling services"
  service:
    name: "{{ item }}"
    state: started
    enabled: yes
  loop: "{{ services }}"

- name: "Copying docker driver file"
  copy:
    src: "{{ src_docker_file }}"
    dest: "{{ dest_docker_file }}"

- name: "Restart docker service"
  service:
    name: "docker"
    state: restarted

- name: "Config file"
  copy:
    src: "{{ bridge_src }}"
    dest: "{{ bridge_dest }}"
  register: result

- name: "Load settings from all system configuration files"
  shell: sysctl --system

- name: "Sending master token authentication to worker nodes"
  copy:
    src: /tmp/token
    dest: /tmp/token

- name: "Joining the worker nodes with the master"
  shell: "bash /tmp/token"
  ignore_errors: True

vars/main.yml file:

---
# vars file for config_worker

packages:
- docker
- iproute-tc
- kubectl
- kubeadm
- kubelet


services:
- docker
- kubelet


src_docker_file: /ws_task_19/daemon.json
dest_docker_file: /etc/docker/daemon.json

bridge_src: /ws_task_19/k8s.conf
bridge_dest: /etc/sysctl.d/k8s.conf

Alongside the 3 roles, the workspace also contains some other files:

cluster_key.pem : the key pair used in the creation of the instances.

daemon.json : this file is used to change the Docker cgroup driver to systemd.

{
  "exec-opts": ["native.cgroupdriver=systemd"]
}

k8s.conf : copied to each kube_worker to enable the kernel settings for bridged traffic.

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

key.yml : ansible vault contains access and secret keys.

access_key: "****************"
secret_key: "*****************************"

main.yml : the main (driver) Ansible playbook, which executes all the roles in a certain order, i.e. it decides when each role should run.

Here key.yml is a vault that contains the access and secret keys.

A vault is a password-protected, encrypted file; only a person with the password can open it, so a vault is the best choice for storing sensitive/confidential information.

To create a new vault: ansible-vault create <file_name>
To encrypt a pre-created file: ansible-vault encrypt <file_name>

NOTE: The content of the vault can’t be retrieved without the password, i.e. without the password the vault can’t be opened.

In main.yml we first execute the ec2_instance role, then execute the config_master role on the target instance tagged Name = kube_master,

then execute the config_worker role on the target instances tagged kube_worker.
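The playbook itself isn’t reproduced in this post, so here is a minimal sketch of what it can look like. The host-group names are an assumption based on how ec2.py groups instances by their Name tags (tag_Name_kube_master / tag_Name_kube_worker), and ec2-user is an assumption for the Amazon Linux AMI; the inventory refresh only helps if ec2.py’s cache age is short:

# main.yml -- a minimal sketch, not the exact playbook from this setup
- hosts: localhost
  vars_files:
    - key.yml               # vault holding access_key / secret_key
  roles:
    - ec2_instance
  post_tasks:
    # re-read the dynamic inventory so the new instances become visible
    - meta: refresh_inventory

- hosts: tag_Name_kube_master
  remote_user: ec2-user     # assumption: default user of the AMI
  become: yes
  roles:
    - config_master

- hosts: tag_Name_kube_worker
  remote_user: ec2-user
  become: yes
  roles:
    - config_worker

Since key.yml is encrypted, run the playbook with the vault password:

ansible-playbook main.yml --ask-vault-pass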

After the main.yml playbook executes successfully, the three instances are launched and configured over AWS.

Now let’s check whether the Kubernetes cluster was set up successfully.

First, check whether all the packages were installed successfully.
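For example, on the master node (the AMI used above must be RPM-based, since the roles rely on yum, so rpm -q works here):

rpm -q docker kubeadm kubectl kubelet iproute-tc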

Also check that all the nodes have joined the cluster.
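Run this on the master; the master and both workers should eventually report Ready once flannel is up:

kubectl get nodes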

Also check that all the system pods, like coredns and flannel, launched successfully.
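Listing across all namespaces avoids guessing which namespace the flannel manifest uses:

kubectl get pods --all-namespaces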

Now let’s create a deployment and check whether Kubernetes schedules it correctly.
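For example, a small httpd deployment (the name myweb and the image are just illustrative choices):

kubectl create deployment myweb --image=httpd
kubectl get pods -o wide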

Now expose the deployment.
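A NodePort service makes it reachable from outside; note that the security group kubernetes_sg must allow the node-port range for this to be testable from a browser:

kubectl expose deployment myweb --port=80 --type=NodePort
kubectl get svc myweb

Then browse to any node’s public IP on the reported NodePort.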

With this, the Kubernetes cluster is set up and running.

Thank You !!
