How to Create a Kubernetes Cluster in AWS Using Kops

In this guide, we are going to create a Kubernetes cluster using kops. Kops is a tool that allows you to manage Kubernetes clusters in the cloud (AWS, Google Cloud or Azure) and can also be used to manage clusters in OpenStack.

Learn more about kops in its GitHub repo here: https://github.com/kubernetes/kops.

Installation

kops is a single binary that can be installed on Linux, macOS or Windows. Grab the latest binary for your OS from the releases page here https://github.com/kubernetes/kops/releases and add it to a directory on your PATH.
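
On Linux, for example, the install might look like this (the version below is only an example; check the releases page for the latest):

# Download the kops binary (example version, adjust as needed)
curl -Lo kops https://github.com/kubernetes/kops/releases/download/v1.18.3/kops-linux-amd64
chmod +x kops
sudo mv kops /usr/local/bin/kops

Verify the installation using this command: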

kops version

Requirements

I am using version 1.18.3 (git-11ec695516) for this guide.

You will also need kubectl installed.
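
If kubectl is not installed yet, one way to get it (a sketch assuming a Linux amd64 machine; adjust for your OS) is:

# Download a kubectl release (example version, ideally matching your cluster version)
curl -LO https://dl.k8s.io/release/v1.19.7/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/kubectl
kubectl version --client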

AWS User

Create the kops user and group with the permissions kops needs:

# Create group called kops
aws iam create-group --group-name kops

# Attach the permissions to the group
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonRoute53FullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/IAMFullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonVPCFullAccess --group-name kops

# Create user
aws iam create-user --user-name kops

# add user to the group
aws iam add-user-to-group --user-name kops --group-name kops

# Generate access key for the user
aws iam create-access-key --user-name kops
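
kops and the AWS CLI read credentials from environment variables (or an AWS profile), so export the access key generated above before running the commands that follow. The values here are placeholders:

# Credentials of the kops user (placeholder values)
export AWS_ACCESS_KEY_ID=<access key id from the create-access-key output>
export AWS_SECRET_ACCESS_KEY=<secret access key from the create-access-key output>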

Creating cluster

Create a state bucket

# Create the state bucket
aws s3 mb s3://dev.citizix.com --region me-south-1

# Version the bucket
aws s3api put-bucket-versioning --bucket dev.citizix.com --versioning-configuration Status=Enabled

# Encrypt the bucket
aws s3api put-bucket-encryption --bucket dev.citizix.com --server-side-encryption-configuration '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'
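
Optionally, export the state store location so you do not have to pass --state to every kops command (kops reads the KOPS_STATE_STORE environment variable automatically). The commands in this guide still pass --state explicitly:

export KOPS_STATE_STORE=s3://dev.citizix.com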

Domain in AWS

kops uses DNS to discover the cluster, so the domain (or subdomain) you use for the cluster needs to be set up in AWS and resolving correctly.

  • Go to Route53 -> Hosted Zones and add a hosted zone (mine is dev.citizix.com)
  • Copy the generated NS records to your DNS/hosting provider so that the NS records for the domain you just added match the ones provided by AWS.
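
If you prefer the CLI, the hosted zone can also be created like this (the caller reference only needs to be a unique string):

aws route53 create-hosted-zone --name dev.citizix.com --caller-reference $(date +%s)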

Verify with this:

aws route53 list-hosted-zones --output=table
dig -t NS dev.citizix.com

dig +short dev.citizix.com soa
dig +short dev.citizix.com ns

Kops

To create the cluster, use the kops create cluster command.

For this guide we are using AWS, so make sure the kops user credentials created earlier are configured in your environment, then run:

kops create cluster \
  --master-count=1 \
  --node-count=3 \
  --cloud aws \
  --cloud-labels "Environment=\"dev\",Type=\"k8s\",Role=\"node\",Provisioner=\"kops\"" \
  --node-size t3.large \
  --master-size t3.medium \
  --state=s3://dev.citizix.com \
  --topology=private \
  --bastion=true \
  --kubernetes-version=v1.19.7 \
  --zones=me-south-1a,me-south-1b,me-south-1c \
  --dns-zone=dev.citizix.com \
  --networking=calico \
  --vpc=vpc-0b8284ea24e023548 \
  --ssh-public-key ~/.ssh/id_rsa.pub \
  dev.citizix.com
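
Note that kops create cluster only writes the cluster configuration to the state store; no AWS resources are provisioned until kops update cluster is run with --yes (covered below). Running update without --yes first gives a dry-run preview of what would be created:

# Dry run: shows the planned changes without applying them
kops update cluster --name=dev.citizix.com --state=s3://dev.citizix.com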

Edit cluster

I am going to edit the cluster to add a node policy that allows the nodes to assume any IAM role matching k8s-*, so services in the cluster can access AWS resources.

kops edit cluster --name=dev.citizix.com --state=s3://dev.citizix.com

Policy:

...
spec:
  additionalPolicies:
    node: |
      [
        {
          "Effect": "Allow",
          "Action": [
            "sts:AssumeRole"
          ],
          "Resource": [
            "arn:aws:iam::xxxxxxxxxx:role/k8s-*"
          ]
        }
      ]
  kubelet:
    anonymousAuth: false
    authenticationTokenWebhook: true
    authorizationMode: Webhook
...

Since I created a VPC with private and public subnets, I will point the cluster at those existing subnets:

...
subnets:
  - cidr: 10.0.1.0/24
    name: me-south-1a
    type: Private
    zone: me-south-1a
    id: subnet-xxx
    egress: nat-xxx
  ...
  - cidr: 10.0.4.0/24
    name: utility-me-south-1a
    type: Utility
    zone: me-south-1a
    id: subnet-xxx
  ...
...

Since the topology is private, each private subnet sets egress to the NAT gateway it should use. The id is the ID of the existing subnet and the cidr is the CIDR block of that subnet.

Updating cluster

To apply changes to the cluster, use the following command:

kops update cluster --name=dev.citizix.com --state=s3://dev.citizix.com --yes

Other Kops commands

Edit instance groups

kops edit ig nodes --name=dev.citizix.com --state=s3://dev.citizix.com

kops edit ig master-me-south-1a --name=dev.citizix.com --state=s3://dev.citizix.com

Validate cluster

kops validate cluster \
    --name=dev.citizix.com \
    --state=s3://dev.citizix.com

Get cluster

kops get cluster --name=dev.citizix.com --state=s3://dev.citizix.com
kops get cluster --name=dev.citizix.com --state=s3://dev.citizix.com -o yaml

kops get instancegroups --name=dev.citizix.com --state=s3://dev.citizix.com

Export kubeconfig

Export a kubeconfig file for the cluster from the state store. By default the configuration will be saved into the user's $HOME/.kube/config file.

kops export kubecfg --name=dev.citizix.com --state=s3://dev.citizix.com

# As Admin user
kops export kubecfg --name=dev.citizix.com --state=s3://dev.citizix.com --admin

# To a file
kops export kubecfg --name=dev.citizix.com --state=s3://dev.citizix.com --kubeconfig dev.citizix.com.yml
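
Once the kubeconfig is exported, a quick sanity check with kubectl confirms that you can reach the API server:

kubectl cluster-info
kubectl get nodes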

Apply a rolling update

# --yes to apply
kops rolling-update cluster --name=dev.citizix.com --state=s3://dev.citizix.com --yes
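
Like update, running rolling-update without --yes is a dry run that only shows which instance groups would be restarted:

kops rolling-update cluster --name=dev.citizix.com --state=s3://dev.citizix.com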

Delete cluster

kops delete cluster --name=dev.citizix.com --state=s3://dev.citizix.com --yes

Remove state bucket

# The bucket and region used in this guide
STATE_BUCKET=dev.citizix.com
REGION=me-south-1

aws s3 rm s3://${STATE_BUCKET} --recursive
aws s3 rb s3://${STATE_BUCKET} --region ${REGION} --force