Kubernetes Cluster Setup with AWS Cloud Provider

Introduction

In my previous blog post, I showed you how to set up a Kubernetes cluster on AWS EC2 instances. I deployed a sample application with a public Docker image from DockerHub and it worked fine. But when I tried to deploy an application with an image from AWS ECR, we started getting permission errors. Even though I had given all the necessary permissions to the instance profile role, the pod couldn’t fetch the image from ECR and kept showing an ImagePullBackOff error.

Then we found the Kubernetes documentation that says “Verify kubelet is running with --cloud-provider=aws” to make it work.

AWS Cloud Provider:

Kubernetes cloud providers provide a method of provisioning cloud resources through Kubernetes via the --cloud-provider option. On AWS, this flag allows the provisioning of EBS volumes and cloud load balancers.

Configuring a cluster for AWS requires several specific configuration parameters in the infrastructure.

AWS IAM Permissions

For the aws-cloud-controller-manager to be able to communicate with the AWS APIs, you will need to attach a few IAM policies to your EC2 instances. You can find the IAM policy in detail in the following link: Click Here
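As an illustration, a minimal policy covering just the ECR image pulls that motivated this post might look like the following (a sketch only; the linked documentation has the full master and worker policies):

 {
   "Version": "2012-10-17",
   "Statement": [
     {
       "Effect": "Allow",
       "Action": [
         "ecr:GetAuthorizationToken",
         "ecr:BatchCheckLayerAvailability",
         "ecr:GetDownloadUrlForLayer",
         "ecr:BatchGetImage"
       ],
       "Resource": "*"
     }
   ]
 }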

Infrastructure Configuration

  • Apply the roles and policies to Kubernetes masters and workers
  • Set the hostname of the EC2 instances to the private DNS hostname of the instance. We can change the hostname to the private DNS name using the command “sudo hostname <Private DNS (from AWS Console)>”.
  • Tag the EC2 instances with the key “kubernetes.io/cluster/<cluster name>” and a value of <owned|shared>. For a cluster named kubernetes, add the tag with key “kubernetes.io/cluster/kubernetes” and value “owned”, as sketched below.
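For example, assuming a cluster named kubernetes, the hostname and tagging steps might look like this (the instance ID and region are placeholders):

 # Set the hostname to the instance's private DNS name via the EC2 metadata service
 sudo hostnamectl set-hostname $(curl -s http://169.254.169.254/latest/meta-data/local-hostname)

 # Tag the instance for the AWS cloud provider (placeholder instance ID and region)
 aws ec2 create-tags --region us-east-1 \
   --resources i-0123456789abcdef0 \
   --tags Key=kubernetes.io/cluster/kubernetes,Value=owned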

Cluster Configuration

On the Kubernetes side, we need to make sure that the --cloud-provider=aws command-line flag is passed to the API server, the controller manager, and every kubelet in the cluster.

Since we are using kubeadm to initialize the cluster and join the worker node, we have to create configuration files that add the --cloud-provider=aws option for the kubeadm commands.

Kubeadm configuration changes:

Kubeadm init

The configuration file for kubeadm init is as follows:
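(A minimal sketch, assuming the kubeadm v1beta1 configuration API; the node name placeholder is an assumption.)

 apiVersion: kubeadm.k8s.io/v1beta1
 kind: InitConfiguration
 nodeRegistration:
   name: <private DNS name of the master>
   kubeletExtraArgs:
     cloud-provider: aws
 ---
 apiVersion: kubeadm.k8s.io/v1beta1
 kind: ClusterConfiguration
 apiServer:
   extraArgs:
     cloud-provider: aws
 controllerManager:
   extraArgs:
     cloud-provider: aws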

The command to run kubeadm init with the configuration file is: “sudo kubeadm init --config kubeadm_config.yml”

Kubeadm join

The configuration file for kubeadm join is as follows:
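(A minimal sketch, again assuming the kubeadm v1beta1 API; the token, endpoint, and CA cert hash placeholders come from the kubeadm init output.)

 apiVersion: kubeadm.k8s.io/v1beta1
 kind: JoinConfiguration
 discovery:
   bootstrapToken:
     token: <token from kubeadm init output>
     apiServerEndpoint: <master private IP>:6443
     caCertHashes:
       - <discovery-token-ca-cert-hash>
 nodeRegistration:
   name: <private DNS name of the worker>
   kubeletExtraArgs:
     cloud-provider: aws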

The command to run kubeadm join with the configuration file is: “sudo kubeadm join --config kubeadm_join.yml”

Now we are all set to deploy an application with an image from ECR :)


Kubernetes cluster setup on AWS EC2

Intro

This is a blog post about the steps I followed to set up a Kubernetes cluster from scratch by launching two new Ubuntu 18.04 AWS EC2 instances, one for the master and another for the worker node. In this setup the Kubernetes cluster uses the Flannel CNI plugin to implement the network for pod communication.

Security Groups

The components of a Kubernetes cluster communicate over different ports and protocols; for example, the Kubernetes API server, etcd server, Kubelet API, Flannel CNI, etc. use different ports (6443, 10250, ...) and protocols (TCP, UDP). So we should allow these ports and protocols in the security group while launching the EC2 instances. The detailed lists of ports and protocols required for the master node and worker node are shown in the following pages, with a sketch of the corresponding security group rules after this list:

  1. Kubernetes required ports
  2. Flannel required ports
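As an illustration, the main rules could be opened with the AWS CLI as follows (the security group ID and VPC CIDR are placeholders):

 SG=sg-0123456789abcdef0   # placeholder security group ID
 # Control-plane ports: API server, kubelet, scheduler, controller manager
 for PORT in 6443 10250 10251 10252; do
   aws ec2 authorize-security-group-ingress --group-id $SG \
     --protocol tcp --port $PORT --cidr 10.0.0.0/16
 done
 # etcd server client API
 aws ec2 authorize-security-group-ingress --group-id $SG \
   --protocol tcp --port 2379-2380 --cidr 10.0.0.0/16
 # Flannel VXLAN overlay traffic
 aws ec2 authorize-security-group-ingress --group-id $SG \
   --protocol udp --port 8472 --cidr 10.0.0.0/16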
     

Launching Instances

Launch 2 Ubuntu 18.04 AWS EC2 instances with the proper security groups and IAM roles. These 2 instances are for the Kubernetes master node and worker node. Based on the requirements, we have to add the necessary permissions to the role assigned to the EC2 instances.

Installation on all nodes

Run the following commands in the same order on all the nodes to install Docker, kubelet, kubeadm, and kubectl.
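(A sketch for Ubuntu 18.04, assuming Docker from the Ubuntu repositories and the upstream Kubernetes apt repository of the time; the exact package sources may have changed since.)

 # Install Docker from the Ubuntu repositories
 sudo apt-get update
 sudo apt-get install -y docker.io
 sudo systemctl enable --now docker

 # Add the Kubernetes apt repository and install kubelet, kubeadm, and kubectl
 sudo apt-get install -y apt-transport-https curl
 curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
 echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | \
   sudo tee /etc/apt/sources.list.d/kubernetes.list
 sudo apt-get update
 sudo apt-get install -y kubelet kubeadm kubectl
 sudo apt-mark hold kubelet kubeadm kubectl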

Installation on master node

Run the following commands on the master node to create the Kubernetes cluster, configure kubectl, and install the Flannel CNI plugin.
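(A sketch assuming Flannel's default pod network CIDR and the kube-flannel manifest from the coreos/flannel repository; the manifest URL may have moved since.)

 # Initialize the control plane; Flannel expects this pod network CIDR
 sudo kubeadm init --pod-network-cidr=10.244.0.0/16

 # Configure kubectl for the current user
 mkdir -p $HOME/.kube
 sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
 sudo chown $(id -u):$(id -g) $HOME/.kube/config

 # Install the Flannel CNI plugin
 kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml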

Installation on worker node

The “kubeadm init …” command will print a “kubeadm join …” command at the end of the installation, which can be used to join the worker node to the Kubernetes cluster. The command will be something like the following:

 sudo kubeadm join $controller_private_ip:6443 --token $token --discovery-token-ca-cert-hash $hash   

 

Conclusion

Now the Kubernetes cluster setup is done; we have one master node and one worker node running on 2 Ubuntu 18.04 EC2 instances. To verify the cluster is working properly, we can run the following commands:

 kubectl get nodes
 # This will print the list of nodes associated with the cluster

 

 kubectl create -f https://k8s.io/examples/admin/dns/busybox.yaml
 kubectl exec -ti busybox -- nslookup kubernetes.default
 # The first command will create a pod running a busybox container.
 # The second command will verify that cluster DNS is working properly
 # by looking up the internal kubernetes.default service.

Once everything is working fine, that’s it, the setup is done. Now you can deploy your application into the Kubernetes cluster.


AWS CodeBuild Custom Notification Messages

Recently we got a chance to implement AWS CodePipeline for a project. It has four stages: CodeCommit, CodeBuild, CodeDeploy, and Test using Jenkins. There was a requirement to enable notifications for each state of the stages. So we created an SNS topic and added subscriptions, so that we could use this topic in the AWS services to send the notifications.

Enabling notifications in CodeCommit and CodeDeploy is pretty straightforward. In CodeCommit you can create a trigger with the event “Push to existing branch” and add our SNS topic as the target. The same goes for CodeDeploy: we can create a trigger from the Deployment Group with the events (DeploymentStart, DeploymentSuccess, DeploymentFailure, DeploymentStop, DeploymentReady, DeploymentRollback) and our SNS topic as the target.

But the problem with CodeBuild is that there is no built-in trigger for CodeBuild state changes. So we have to use CloudWatch Events to trigger the notifications, creating a rule with an event pattern and the SNS topic as the target. In the rule creation page we have to select “Event Pattern” and “Build event pattern to match events by service”, then select CodeBuild as the “Service Name” and “All Events” for the “Event Type”. This will generate an event pattern matching all CodeBuild statuses (IN_PROGRESS, SUCCEEDED, FAILED, STOPPED).
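The generated pattern looks something like the following:

 {
   "source": ["aws.codebuild"],
   "detail-type": ["CodeBuild Build State Change"]
 }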

But the CloudWatch event notifications send emails with static content as the subject (“AWS Notification Message”), which isn’t helpful for the user who subscribed to the notifications, as they have to open the email and read the body to understand the state (success or failure, which build, etc.) of the CodeBuild run. So we came up with a solution to customize the notification from CloudWatch.

In CloudWatch, while creating the rule, instead of setting the SNS topic as the target, we set an AWS Lambda function as the target. We get the entire event details as JSON in our Lambda function, which we can use to prepare the subject and body of the notification message. We have used the Python Botocore SDK to publish the message to the SNS topic. Please find below the AWS Lambda function:
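(A minimal sketch of such a handler, using boto3, which is built on botocore; the TOPIC_ARN environment variable name and the message format are assumptions.)

 import json
 import os

 import boto3

 sns = boto3.client("sns")

 def lambda_handler(event, context):
     # CloudWatch delivers the CodeBuild state change in the "detail" field
     detail = event.get("detail", {})
     project = detail.get("project-name", "unknown-project")
     status = detail.get("build-status", "UNKNOWN")

     # Build a human-readable subject so subscribers see the state at a glance
     subject = "CodeBuild {}: {}".format(project, status)

     sns.publish(
         TopicArn=os.environ["TOPIC_ARN"],  # assumed environment variable
         Subject=subject,
         Message=json.dumps(detail, indent=2),
     )
     return {"statusCode": 200}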
