How to use Terraform to create a VPC network and a GKE cluster in GCP

In this guide, we will build a Google Kubernetes Engine (GKE) cluster in Google Cloud Platform (GCP) using Terraform. Terraform lets you manage cloud infrastructure as code, automating otherwise repetitive provisioning tasks.

Creating a GKE cluster in the console can be tiring, especially if you have to create multiple clusters with different parameters such as node types and node sizes. Terraform solves that problem: it lets you capture the instructions as code that can be used to plan, deploy, modify, and destroy clusters programmatically.

# Requirements

You need the following to proceed:

  • A Google Project – GCP organizes resources into projects. Create one now in the GCP console and make note of the project ID. 

  • Enable the Google Compute Engine API for your project in the GCP console. Make sure to select the project you are using to follow this tutorial, then click the “Enable” button.

  • A GCP service account key – Create a service account key to enable Terraform to access your GCP account. When creating the key, use the following settings:

  • Select the project you created in the previous step.

  • Click “Create Service Account”.

  • Give it any name you like and click “Create”.

  • For the Role, choose “Project -> Editor”, then click “Continue”.

  • Skip granting additional users access, and click “Done”.

After you create your service account, download your service account key.

  • Select your service account from the list.
  • Select the “Keys” tab.
  • In the drop down menu, select “Create new key”.
  • Leave the “Key Type” as JSON.
  • Click “Create” to create the key and save the key file to your system.
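The console steps above can also be scripted. Here is a rough sketch with the gcloud CLI, assuming the Google Cloud SDK is installed and you are authenticated; the service account name `terraform` and the project ID are placeholders, not values from this tutorial:

```shell
# Placeholders -- substitute your own project ID and preferred account name
PROJECT_ID="citizix-prj"
SA_NAME="terraform"

# Create the service account
gcloud iam service-accounts create "$SA_NAME" \
  --project "$PROJECT_ID" \
  --display-name "Terraform"

# Grant it the Editor role on the project
gcloud projects add-iam-policy-binding "$PROJECT_ID" \
  --member "serviceAccount:${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com" \
  --role "roles/editor"

# Download a JSON key for Terraform to use
gcloud iam service-accounts keys create gcp-credentials.json \
  --iam-account "${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com"
```

Either route ends with a JSON key file on disk, which the provider configuration below reads.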

# Step 1 – Downloading and installing Terraform

Terraform is available as a binary for most distributions. Get the latest binary and installation instructions from the Terraform downloads page.
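As a sketch, on a Linux amd64 machine the install can look like the following; the version number is only an example, so check the downloads page for the current release:

```shell
# Example only: pin a known Terraform version (check the downloads page for the latest)
TF_VERSION="1.5.7"

# Download and unpack the release binary from HashiCorp
curl -fsSLO "https://releases.hashicorp.com/terraform/${TF_VERSION}/terraform_${TF_VERSION}_linux_amd64.zip"
unzip "terraform_${TF_VERSION}_linux_amd64.zip"

# Put it on the PATH
sudo install terraform /usr/local/bin/terraform

# Verify the install
terraform -version
```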

# Step 2 – Adding the Project code

In this section we will create the files that contain the code for our resources. First, create a directory and switch to it. In your terminal, use these commands:

mkdir gcp-gke
cd gcp-gke

First we will have to specify the providers. Terraform relies on plugins called “providers” to interact with cloud providers, SaaS providers, and other APIs.

Terraform configurations must declare which providers they require so that Terraform can install and use them. Additionally, some providers require configuration (like endpoint URLs or cloud regions) before they can be used.

This is where we define the Google provider that we will use, pinning it to a specific version range. We also define some locals that we can reuse. Save the following as a file in the project directory (any name works; `provider.tf` is a common convention):

A local value assigns a name to an expression, so you can use the name multiple times within a module instead of repeating the expression. Local values are like a function’s temporary local variables.

locals {
  env              = "dev"
  project          = "citizix"
  credentials_path = "./gcp-credentials.json"
  region           = "europe-west1"
}

terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = ">=4.20.0, < 5.0.0"
    }
  }
}

provider "google" {
  credentials = file(local.credentials_path)

  project = "citizix-prj"
  region  = local.region
}
# Create a vpc

Next we need to create a VPC, because all the other resources depend on it. The following code specifies a Google compute network and two subnetworks – one private and one public. Save it as a new file in the project directory, for example `vpc.tf`:

locals {
  vpc_name = "${local.env}-${local.project}-vpc"
}

resource "google_compute_network" "vpc" {
  name                    = local.vpc_name
  auto_create_subnetworks = "false"
}

resource "google_compute_subnetwork" "public" {
  name          = "${local.vpc_name}-public-0"
  region        = local.region
  network       = google_compute_network.vpc.id
  ip_cidr_range = "10.0.0.0/20" # example range, adjust to your network plan
}

resource "google_compute_subnetwork" "private" {
  name                     = "${local.vpc_name}-private-0"
  region                   = local.region
  private_ip_google_access = true
  network                  = google_compute_network.vpc.id
  ip_cidr_range            = "10.0.16.0/20" # example range, adjust to your network plan
}

# Create the GKE cluster

Next we can create our cluster. We can’t create a cluster with no node pool defined, but we want to only use separately managed node pools. So we create the smallest possible default node pool and immediately delete it. The cluster will be created in the vpc defined above in the public subnet.

Next we create a node pool, where we define the node properties and the node count for our cluster.

Add this content to a new file in the project directory, for example `gke.tf`:

locals {
  cluster_name  = "${local.env}-${local.project}-gke"
  gke_num_nodes = 1
}

resource "google_container_cluster" "gke" {
  name     = local.cluster_name
  location = local.region

  remove_default_node_pool = true
  initial_node_count       = 1

  network    = google_compute_network.vpc.name
  subnetwork = google_compute_subnetwork.public.name
}

# Separately Managed Node Pool
resource "google_container_node_pool" "gke-nodes" {
  name       = "${local.cluster_name}-node-pool"
  location   = local.region
  cluster    = google_container_cluster.gke.name
  node_count = local.gke_num_nodes

  node_config {
    oauth_scopes = [
      "https://www.googleapis.com/auth/logging.write",
      "https://www.googleapis.com/auth/monitoring",
    ]

    labels = {
      env = local.env
    }

    # preemptible  = true
    machine_type = "n1-standard-1"
    tags         = ["gke-node", "${local.project}-gke"]
    metadata = {
      disable-legacy-endpoints = "true"
    }
  }
}

output "gke_endpoint" {
  value = google_container_cluster.gke.endpoint
}

output "gke_master_version" {
  value = google_container_cluster.gke.master_version
}

output "gke-node-urls" {
  value = google_container_node_pool.gke-nodes.instance_group_urls
}
# Step 3 – Planning and applying changes

To apply the changes, do the following

First initialize terraform to download required dependencies and plugins.

terraform init

Then validate to ensure that you have valid code without errors.

terraform validate

Then plan to confirm that the changes being introduced are what is expected.

terraform plan -out tf.plan

Finally, apply to create the resources in GCP:

terraform apply tf.plan

Note that applying a saved plan file does not prompt for confirmation. If you run apply without a plan file, you can skip the prompt with:

terraform apply -auto-approve
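Once the apply completes, the outputs defined in the cluster file can be read back from state at any time, for example:

```shell
# Read a single output value from the current state
terraform output gke_endpoint

# Or dump all outputs as JSON (handy for scripting)
terraform output -json
```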

If you no longer need the resources, you can destroy them with the following. Add -auto-approve if you do not want to be prompted.

terraform destroy

# Step 4 – Connecting to the cluster

Ensure that you are authenticated to GCP from the terminal with the gcloud CLI.

Ensure that you have kubectl installed; installation instructions are in the Kubernetes documentation.

Then use gcloud command to get cluster credentials:

➜ gcloud container clusters get-credentials dev-citizix-gke --region europe-west1 --project citizix-prj

Fetching cluster endpoint and auth data.
kubeconfig entry generated for dev-citizix-gke.

Then get cluster info

➜ kubectl cluster-info
➜ kubectl get nodes
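As an optional smoke test, you can deploy a throwaway workload to confirm the cluster schedules pods; the deployment name `hello` and the nginx image are arbitrary choices:

```shell
# Create a test deployment and wait for it to become ready
kubectl create deployment hello --image=nginx
kubectl rollout status deployment/hello

# Clean up when done
kubectl delete deployment hello
```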

# Conclusion

We were able to use Terraform to create a VPC and a GKE cluster in GCP. This lets us create and destroy resources easily, while gaining the benefits of infrastructure as code.
