Kubernetes Cluster Setup with AWS Cloud Provider

Introduction

In my previous blog post, I showed how to set up a Kubernetes cluster on AWS EC2 instances. Deploying a sample application with a public Docker image from Docker Hub worked fine, but when I deployed an application with an image from AWS ECR, I started getting permission errors. Even though the instance profile role had all the necessary permissions, the pod could not pull the image from ECR and kept showing an ImagePullBackOff error.

Then I found the Kubernetes documentation, which says to “Verify kubelet is running with --cloud-provider=aws” to make it work.

AWS Cloud Provider

Kubernetes cloud providers provide a method of provisioning cloud resources through Kubernetes via the --cloud-provider option. On AWS, this flag enables the provisioning of EBS volumes and cloud load balancers.
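
For example, once the cloud provider is active, a Service of type LoadBalancer is enough to get an AWS ELB provisioned automatically. A minimal sketch (the service name and selector here are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-svc          # hypothetical service name
spec:
  type: LoadBalancer      # the AWS cloud provider provisions an ELB for this
  selector:
    app: demo             # hypothetical app label
  ports:
    - port: 80            # port exposed on the load balancer
      targetPort: 8080    # container port traffic is forwarded to
```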

Configuring a cluster for AWS requires a few specific settings, both in the underlying infrastructure and in the cluster itself.

AWS IAM Permissions

For the aws-cloud-controller-manager to be able to communicate with the AWS APIs, you will need to create a few IAM policies for your EC2 instances. The detailed IAM policy is documented in the kubernetes/cloud-provider-aws repository.
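
For illustration, an abbreviated worker-node policy might look like the following sketch; the ECR read actions are what resolve the ImagePullBackOff from the introduction, and the full list (including the broader control-plane permissions) is in the repository linked above:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:Describe*",
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage"
      ],
      "Resource": "*"
    }
  ]
}
```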

Infrastructure Configuration

  • Apply the IAM roles and policies to the Kubernetes masters and workers.
  • Set the hostname of each EC2 instance to its private DNS hostname. You can change the hostname with the command “sudo hostname <Private DNS (from the AWS Console)>”.
  • Tag the EC2 instances with the key “kubernetes.io/cluster/<cluster-name>” and a value of either “owned” or “shared”. For a cluster named “kubernetes”, add the tag key “kubernetes.io/cluster/kubernetes” with the value “owned”, as shown in the example after this list.
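
For example, using the AWS CLI (the instance ID below is a placeholder):

```bash
# Tag an instance (placeholder ID) for a cluster named "kubernetes"
aws ec2 create-tags \
  --resources i-0123456789abcdef0 \
  --tags Key=kubernetes.io/cluster/kubernetes,Value=owned
```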

Cluster Configuration

On the Kubernetes side, we need to make sure that the --cloud-provider=aws command-line flag is present for the API server, the controller manager, and every kubelet in the cluster.

Since we are using kubeadm to initialize the cluster and join the nodes, we have to create configuration files that pass the --cloud-provider=aws option to the kubeadm commands.

Kubeadm configuration changes

Kubeadm init

The configuration file for kubeadm init is as follows:
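
A minimal sketch, assuming kubeadm’s v1beta2 configuration API (adjust the apiVersion for your kubeadm release; the node name is a placeholder and must be the master’s private DNS hostname):

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
nodeRegistration:
  name: ip-10-0-0-10.ec2.internal     # placeholder: the master's private DNS hostname
  kubeletExtraArgs:
    cloud-provider: aws               # kubelet runs with --cloud-provider=aws
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  extraArgs:
    cloud-provider: aws               # API server flag
controllerManager:
  extraArgs:
    cloud-provider: aws               # controller manager flag
```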

The command to run kubeadm init with the configuration file is: “sudo kubeadm init --config kubeadm_config.yml”

Kubeadm join

The configuration file for kubeadm join is as follows:
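
Again a minimal sketch under the same assumptions; the endpoint, token, and CA cert hash are placeholders taken from the kubeadm init output:

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: JoinConfiguration
discovery:
  bootstrapToken:
    apiServerEndpoint: 10.0.0.10:6443       # placeholder: control-plane endpoint
    token: abcdef.0123456789abcdef          # placeholder: token from kubeadm init
    caCertHashes:
      - "sha256:<hash-from-kubeadm-init-output>"
nodeRegistration:
  name: ip-10-0-0-11.ec2.internal           # placeholder: the worker's private DNS hostname
  kubeletExtraArgs:
    cloud-provider: aws                     # kubelet runs with --cloud-provider=aws
```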

The command to run kubeadm join with the configuration file is: “sudo kubeadm join --config kubeadm_join.yml”

Now we are all set to deploy an application with an image from ECR :)
