In this guide we are going to explore how to launch a Kubernetes cluster on DigitalOcean.
DigitalOcean provides a cost-effective, ready-to-use Kubernetes cluster in minutes so you can focus on building your application.
Prerequisites
To follow along with this guide, you need the following:
- A DigitalOcean account with credits. Use this link to create an account with trial credits if you don't have one.
- Basic knowledge of Kubernetes
- Basic knowledge of Terraform
- A DigitalOcean token with permissions to create a Kubernetes cluster
- Optional: the doctl command installed so we can query the DigitalOcean API (see the snippet just after this list). Check out https://github.com/digitalocean/doctl/releases for the latest version for your OS.
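If you do install doctl, the quickest way to wire it up is to run its interactive auth command with the token we generate in the next section, then confirm access. A minimal sketch; doctl will prompt you to paste the token:

doctl auth init
doctl account get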
Generating an API Access Token
We need an API access token to interact with DigitalOcean through its API. Once logged in to the DigitalOcean dashboard, head over to API -> Tokens and Keys -> Generate New Token. Give your token a name, ensure it has write permissions, then click Generate Token:
Copy the generated token and save it somewhere private.
Use this code to configure the Terraform provider:
terraform {
  required_providers {
    digitalocean = {
      source  = "digitalocean/digitalocean"
      version = "~> 2.0"
    }
  }
}

provider "digitalocean" {
  token = var.do_token
}
In the above code, we pull the DigitalOcean provider from digitalocean/digitalocean and constrain the version to ~> 2.0, meaning any 2.x release. The DigitalOcean (DO) provider is used to interact with the resources supported by DigitalOcean. The provider needs to be configured with the proper credentials before it can be used.
We need the following variables in our code:
- do_token: the token that we will use to create resources.
- name: the name of the cluster. We define this as a variable so we can reuse it.
variable "do_token" {}

variable "name" {
  default = "uat-k8s"
}
The do_token variable is defined as an empty variable so we can supply the value either as a command-line argument or through a .tfvars file.
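For example, when we later run plan or apply, the token could be passed on the command line instead of a file (the value here is just a placeholder):

terraform plan -var "do_token=xxxxxxxxxxxx"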
Creating the Kubernetes resource
Next let's define the Kubernetes resource. This resource will create a Kubernetes cluster with the supplied values.
In our example, we are defining a Kubernetes cluster in the Frankfurt fra1 region with the name supplied in the variables.
We specify the latest version of Kubernetes. To get the latest version slug, use the doctl command to query the DigitalOcean API:
doctl kubernetes options versions
At the time of writing this article, the latest version is 1.21.3-do.0.
Next we define the node pool. The node pool is a group of servers (workers) that will run the containers (pods) in our Kubernetes cluster. We can have as many worker nodes as we want. In our case we are defining 2 nodes in our node pool, each with 2 vCPUs and 4 GB of RAM. To get the size specifications, query the DigitalOcean API using this command:
doctl kubernetes options sizes
Here is the code for the Kubernetes cluster resource:
resource "digitalocean_kubernetes_cluster" "k8s" {
  name    = var.name
  region  = "fra1"
  version = "1.21.3-do.0"

  node_pool {
    name = "${var.name}-worker-pool"
    # doctl kubernetes options sizes
    size       = "s-2vcpu-4gb"
    node_count = 2
  }
}
Saving the Access Token to .tfvars
In the above code we expect a variable do_token to be supplied with the access token for DigitalOcean. Let us save it in the file terraform.tfvars, from which Terraform will read variable values. In the file, set do_token to the generated access token.
File terraform.tfvars:
do_token = "xxxxxxxxxxxx"
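If you prefer to keep the token out of files on disk, Terraform also reads variables from environment variables prefixed with TF_VAR_, so this is an equivalent alternative to the .tfvars entry above:

export TF_VAR_do_token="xxxxxxxxxxxx"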
Full code
Here is the full code. Save it to main.tf:
terraform {
  required_providers {
    digitalocean = {
      source  = "digitalocean/digitalocean"
      version = "~> 2.0"
    }
    kubernetes = {
      source = "hashicorp/kubernetes"
    }
  }
}

provider "digitalocean" {
  token = var.do_token
}

variable "do_token" {}

variable "name" {
  default = "uat-k8s"
}

# Configure the Kubernetes provider from the cluster created below, so that
# Kubernetes resources can also be managed with Terraform later on.
provider "kubernetes" {
  host  = digitalocean_kubernetes_cluster.k8s.endpoint
  token = digitalocean_kubernetes_cluster.k8s.kube_config[0].token
  cluster_ca_certificate = base64decode(
    digitalocean_kubernetes_cluster.k8s.kube_config[0].cluster_ca_certificate
  )
}

resource "digitalocean_kubernetes_cluster" "k8s" {
  name   = var.name
  region = "fra1"
  # Grab the latest version slug from `doctl kubernetes options versions`
  version = "1.21.3-do.0"

  node_pool {
    name = "${var.name}-worker-pool"
    # doctl kubernetes options sizes
    size       = "s-2vcpu-4gb"
    node_count = 2
  }
}
Let us initialize our code with terraform init. You should see output similar to this:
$ terraform init

Initializing the backend...

Initializing provider plugins...
- Reusing previous version of digitalocean/digitalocean from the dependency lock file
- Reusing previous version of hashicorp/kubernetes from the dependency lock file
- Using previously-installed digitalocean/digitalocean v2.11.1
- Using previously-installed hashicorp/kubernetes v2.5.0

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
Then, to check whether our code is OK, let's run terraform validate. If there is an error in the code, it will be caught at this stage.
$ terraform validate
Success! The configuration is valid.
If you get Success! like in the above, it means that everything is OK.
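As an optional extra step, terraform fmt normalizes the formatting of the .tf files in the current directory, which is handy before committing them:

terraform fmt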
Next let's get the plan. terraform plan will evaluate the changes to be applied against the resources that already exist in DigitalOcean. The plan will tell you if there is any resource to be created, modified, or deleted. Run terraform plan to get the plan; this is the output of mine:
$ terraform plan

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # digitalocean_kubernetes_cluster.k8s will be created
  + resource "digitalocean_kubernetes_cluster" "k8s" {
      + cluster_subnet = (known after apply)
      + created_at     = (known after apply)
      + endpoint       = (known after apply)
      + id             = (known after apply)
      + ipv4_address   = (known after apply)
      + kube_config    = (sensitive value)
      + name           = "uat-k8s"
      + region         = "fra1"
      + service_subnet = (known after apply)
      + status         = (known after apply)
      + surge_upgrade  = true
      + updated_at     = (known after apply)
      + urn            = (known after apply)
      + version        = "1.21.3-do.0"
      + vpc_uuid       = (known after apply)

      + maintenance_policy {
          + day        = (known after apply)
          + duration   = (known after apply)
          + start_time = (known after apply)
        }

      + node_pool {
          + actual_node_count = (known after apply)
          + auto_scale        = false
          + id                = (known after apply)
          + name              = "uat-k8s-worker-pool"
          + node_count        = 2
          + nodes             = (known after apply)
          + size              = "s-2vcpu-4gb"
        }
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if you run "terraform apply"
now.
From the above, I can see that the resources to be created are what I defined.
Next is applying the changes. Now that everything is as expected, let's apply them with terraform apply. The apply step will modify the resources as shown in the plan.
Apply will show you the plan and then pause, waiting for you to confirm that you really want to apply the changes. Type yes to proceed.
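If you are running this non-interactively, for example in a CI pipeline, you can skip the prompt; be aware that this applies the changes without asking for confirmation:

terraform apply -auto-approve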
You should see an output almost similar to this:
$ terraform apply

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:
...
Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

digitalocean_kubernetes_cluster.k8s: Creating...
digitalocean_kubernetes_cluster.k8s: Still creating... [10s elapsed]
...
digitalocean_kubernetes_cluster.k8s: Creation complete after 7m19s [id=d52770a7-ee69-430d-8069-a3efe54a114f]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
Congratulations! The cluster is now up and running.
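If you installed doctl, you can also confirm the new cluster from the command line:

doctl kubernetes cluster list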
Go to the Digital Ocean Kubernetes Dashboard and click on the cluster we just created.
Download the kubeconfig file by clicking the Download Config file
button.
Accessing the Kubernetes cluster
Once the kubeconfig file has been downloaded, export it in the environment using this command:
export KUBECONFIG=~/Downloads/uat-k8s-kubeconfig.yml
Substitute ~/Downloads/uat-k8s-kubeconfig.yml with the path to your downloaded kubeconfig file.
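Alternatively, if you have doctl configured, you can skip the manual download and merge the cluster credentials straight into your local kubeconfig. Here uat-k8s is the cluster name we set in the name variable:

doctl kubernetes cluster kubeconfig save uat-k8s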
Get nodes:
$ kubectl get nodes -o wide
NAME                        STATUS   ROLES    AGE     VERSION   INTERNAL-IP   EXTERNAL-IP     OS-IMAGE                       KERNEL-VERSION          CONTAINER-RUNTIME
uat-k8s-worker-pool-u87d3   Ready    <none>   10m     v1.21.3   10.114.0.4    46.101.237.95   Debian GNU/Linux 10 (buster)   4.19.0-17-cloud-amd64   containerd://1.4.9
uat-k8s-worker-pool-u87dn   Ready    <none>   9m48s   v1.21.3   10.114.0.5    138.197.184.5   Debian GNU/Linux 10 (buster)   4.19.0-17-cloud-amd64   containerd://1.4.9
Creating a simple nginx deployment:
$ kubectl create deploy nginx --image nginx:latest
deployment.apps/nginx created
Get the deployments:
$ kubectl get deployments
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   1/1     1            1           39s
Let us create an nginx service exposed as a NodePort:
$ kubectl expose deploy nginx --port 80 --type NodePort
service/nginx exposed
Check the service:
$ kubectl get service
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.245.0.1      <none>        443/TCP        15m
nginx        NodePort    10.245.96.235   <none>        80:30756/TCP   34s
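Because the service is a NodePort, it should also be reachable on any worker node's external IP at the assigned port (80:30756 above), provided the cluster's firewall allows inbound traffic on that port. As a rough check, using one of the node IPs from the earlier kubectl get nodes -o wide output (your port and IPs will differ):

curl http://46.101.237.95:30756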
To check whether our pod is working as expected, let us port-forward it to local port 8080. The pod name is generated by the Deployment, so look it up first.
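A quick way to list it is by the app=nginx label, which kubectl create deploy added for us (the generated suffix in your pod name will differ from the one below):

kubectl get pods -l app=nginx

Then port-forward it: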
$ kubectl port-forward pod/nginx-55649fd747-csvsl 8080:80
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
Handling connection for 8080
Now head to http://127.0.0.1:8080/ in your browser and you will see the nginx welcome page.
Awesome! Our pod is running as expected.
Clean up
Now that we have confirmed that everything is working as expected, let us clean up.
Deleting the service:
$ kubectl delete service nginx
service "nginx" deleted
Deleting deployment:
$ kubectl delete deployment nginx
deployment.apps "nginx" deleted
When you no longer need the cluster, you might want to delete it to avoid incurring costs.
Use terraform destroy to delete the resources. Output:
$ terraform destroy
digitalocean_kubernetes_cluster.k8s: Refreshing state... [id=d52770a7-ee69-430d-8069-a3efe54a114f]

Note: Objects have changed outside of Terraform

Terraform detected the following changes made outside of Terraform since the last "terraform apply":
...
Plan: 0 to add, 0 to change, 1 to destroy.

Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes

digitalocean_kubernetes_cluster.k8s: Destroying... [id=d52770a7-ee69-430d-8069-a3efe54a114f]
digitalocean_kubernetes_cluster.k8s: Destruction complete after 2s

Destroy complete! Resources: 1 destroyed.
Conclusion
Up to this point we have been able to launch a Kubernetes cluster with two nodes on DigitalOcean. We also tested it by creating an nginx deployment.