In this guide, we are going to create a Kubernetes cluster using kops. Kops is a tool that lets you create and manage Kubernetes clusters in the cloud (AWS, Google Cloud or Azure) and can also be used to manage clusters in OpenStack.
Learn more about kops in its GitHub repo here: https://github.com/kubernetes/kops.
Installation
kops is a single binary that can be installed on Linux, macOS or Windows. Grab the latest binary for your OS from the releases page here https://github.com/kubernetes/kops/releases and add it to a directory on your executable path, then verify the installation with the kops version command.
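A minimal sketch for Linux amd64, assuming the kops-linux-amd64 asset naming used on the releases page (adjust the version and OS to match your setup):

    # Download the kops binary, make it executable and put it on the PATH
    curl -Lo kops https://github.com/kubernetes/kops/releases/download/v1.18.3/kops-linux-amd64
    chmod +x kops
    sudo mv kops /usr/local/bin/kops
    # Verify the installation
    kops version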
Requirements
I am using version 1.18.3 (git-11ec695516) for this guide.
You will also need kubectl installed.
AWS User
Create KOPS user
Create an AWS IAM group called kops:

    aws iam create-group --group-name kops
Attach the required permissions to the group:

    aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess --group-name kops
    aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonRoute53FullAccess --group-name kops
    aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess --group-name kops
    aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/IAMFullAccess --group-name kops
    aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonVPCFullAccess --group-name kops
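You can confirm the attachments with list-attached-group-policies:

    # List the managed policies now attached to the kops group
    aws iam list-attached-group-policies --group-name kops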
Create the user:

    aws iam create-user --user-name kops
Add the user to the group:

    aws iam add-user-to-group --user-name kops --group-name kops
Generate an access key for the user:

    aws iam create-access-key --user-name kops
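The command prints an AccessKeyId and SecretAccessKey; kops and the aws CLI pick these up from the environment. A quick way to use them (the angle-bracket values are placeholders for the output above):

    # Export the new credentials so kops and the aws CLI act as the kops user
    export AWS_ACCESS_KEY_ID=<AccessKeyId>
    export AWS_SECRET_ACCESS_KEY=<SecretAccessKey>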
Creating a Kubernetes cluster
Create a state bucket, where kops stores the cluster configuration:

    aws s3 mb s3://dev.citizix.com --region me-south-1
Version the bucket:

    aws s3api put-bucket-versioning --bucket dev.citizix.com --versioning-configuration Status=Enabled
Encrypt the bucket:

    aws s3api put-bucket-encryption --bucket dev.citizix.com --server-side-encryption-configuration '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'
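To double-check the bucket setup, both settings can be read back with standard s3api calls:

    # Confirm versioning and default encryption on the state bucket
    aws s3api get-bucket-versioning --bucket dev.citizix.com
    aws s3api get-bucket-encryption --bucket dev.citizix.com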
Domain Setup in AWS
kops uses DNS to discover the cluster, so you need the domain working in AWS first.
- Go to Route53 -> Hosted Zones and add a hosted zone (mine is dev.citizix.com).
- From the generated NS records, add them at your DNS hosting provider so that the NS records for the domain match the ones provided by AWS.
Verify with this:
    aws route53 list-hosted-zones --output=table
    dig -t NS dev.citizix.com
    dig +short dev.citizix.com soa
    dig +short dev.citizix.com ns
Create the cluster
To create the cluster, use the kops create cluster command. For this guide we are on AWS, using the kops IAM user and the state bucket created above:
    kops create cluster \
        --master-count=1 \
        --node-count=3 \
        --cloud aws \
        --cloud-labels "Environment=\"dev\",Type=\"k8s\",Role=\"node\",Provisioner=\"kops\"" \
        --node-size t3.large \
        --master-size t3.medium \
        --state=s3://dev.citizix.com \
        --topology=private \
        --bastion=true \
        --kubernetes-version=v1.19.7 \
        --zones=me-south-1a,me-south-1b,me-south-1c \
        --dns-zone=dev.citizix.com \
        --networking=calico \
        --vpc=vpc-0b8284ea24e023548 \
        --ssh-public-key ~/.ssh/id_rsa.pub \
        dev.citizix.com
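Note that kops create cluster only registers the cluster spec in the state store and prints a preview of the resources it would create; nothing is provisioned in AWS until kops update cluster is run with --yes (covered below). To avoid repeating --state on every command, you can also export it:

    # kops reads the state store location from this variable
    export KOPS_STATE_STORE=s3://dev.citizix.com

The rest of this guide passes --state explicitly so each command works standalone.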
Edit cluster
I am going to edit the cluster to add a node policy allowing nodes to assume any IAM role matching k8s-*, so services in the cluster can access AWS resources.

    kops edit cluster --name=dev.citizix.com --state=s3://dev.citizix.com
Policy:

    ...
    spec:
      additionalPolicies:
        node: |
          [
            {
              "Effect": "Allow",
              "Action": [
                "sts:AssumeRole"
              ],
              "Resource": [
                "arn:aws:iam::xxxxxxxxxx:role/k8s-*"
              ]
            }
          ]
      ...
      kubelet:
        anonymousAuth: false
        authenticationTokenWebhook: true
        authorizationMode: Webhook
    ...
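With that policy on the nodes, workloads can assume any role whose name matches k8s-*. A quick smoke test from a pod or node shell (k8s-example is a hypothetical role name, and the account ID placeholder is kept from above):

    # Attempt to assume a role matching the k8s-* pattern
    aws sts assume-role \
        --role-arn arn:aws:iam::xxxxxxxxxx:role/k8s-example \
        --role-session-name smoke-test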
Since I created a VPC with private and public subnets, I will place the cluster resources in those subnets:
    ...
    subnets:
    - cidr: 10.0.1.0/24
      name: me-south-1a
      type: Private
      zone: me-south-1a
      id: subnet-xxx
      egress: nat-xxx
    ...
    - cidr: 10.0.4.0/24
      name: utility-me-south-1a
      type: Utility
      zone: me-south-1a
      id: subnet-xxx
    ...
With the topology being private, each private subnet has the NAT gateway as its egress. The id is the subnet ID and the cidr is the CIDR block for that private subnet.
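To find the actual values to substitute for the subnet-xxx and nat-xxx placeholders, you can query the VPC passed to the create command:

    # List subnets and NAT gateways in the target VPC
    aws ec2 describe-subnets --filters "Name=vpc-id,Values=vpc-0b8284ea24e023548" \
        --query 'Subnets[].{id:SubnetId,cidr:CidrBlock,zone:AvailabilityZone}' --output table
    aws ec2 describe-nat-gateways --filter "Name=vpc-id,Values=vpc-0b8284ea24e023548" \
        --query 'NatGateways[].{id:NatGatewayId,subnet:SubnetId}' --output table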
Updating cluster
To apply changes to the cluster, use the following command:
    kops update cluster --name=dev.citizix.com --state=s3://dev.citizix.com --yes
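Without --yes the command is a dry run: it prints the changes it would make but applies nothing, which is a handy sanity check before committing:

    # Preview pending changes without applying them
    kops update cluster --name=dev.citizix.com --state=s3://dev.citizix.com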
Other Kops commands
Editing instance groups

    kops edit ig nodes --name=dev.citizix.com --state=s3://dev.citizix.com
    kops edit ig master-me-south-1a --name=dev.citizix.com --state=s3://dev.citizix.com
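For example, to resize the worker pool you can adjust the instance group spec that the edit command opens; a minimal excerpt, with example values:

    # InstanceGroup spec excerpt; sizes and machine type are example values
    spec:
      machineType: t3.large
      maxSize: 5
      minSize: 3
      role: Node

Changes to instance groups take effect after kops update cluster and, where instances must be replaced, a kops rolling-update cluster.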
Validate cluster
To validate the cluster, use this command:
    kops validate cluster \
        --name=dev.citizix.com \
        --state=s3://dev.citizix.com
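Once validation passes, the nodes should also show up as Ready in kubectl:

    # Cross-check node health from the Kubernetes side
    kubectl get nodes -o wide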
Get cluster
Get cluster details using these commands:

    kops get cluster --name=dev.citizix.com --state=s3://dev.citizix.com
    kops get cluster --name=dev.citizix.com --state=s3://dev.citizix.com -o yaml
    kops get instancegroups --name=dev.citizix.com --state=s3://dev.citizix.com
Export kubeconfig
Export a kubecfg file for the cluster from the state store. By default the configuration is saved to the user's $HOME/.kube/config file.

    kops export kubecfg --name=dev.citizix.com --state=s3://dev.citizix.com
To export as an admin user:

    kops export kubecfg --name=dev.citizix.com --state=s3://dev.citizix.com --admin
To export to a file:

    kops export kubecfg --name=dev.citizix.com --state=s3://dev.citizix.com --kubeconfig dev.citizix.com.yml
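You can then point kubectl at the exported file:

    # Use the exported kubeconfig for a one-off command
    KUBECONFIG=dev.citizix.com.yml kubectl get nodes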
Rolling update
Apply a rolling update. Use --yes to auto-apply:

    kops rolling-update cluster --name=dev.citizix.com --state=s3://dev.citizix.com --yes
Delete cluster
    kops delete cluster --name=dev.citizix.com --state=s3://dev.citizix.com --yes
Remove state bucket
    # This guide used dev.citizix.com in me-south-1 as the state bucket
    STATE_BUCKET=dev.citizix.com
    REGION=me-south-1
    aws s3 rm s3://${STATE_BUCKET} --recursive
    aws s3 rb s3://${STATE_BUCKET} --region ${REGION} --force