How to use Terraform to create a VPC network and a Cloud SQL instance in GCP

In this guide, we will build a Cloud SQL instance on Google Cloud Platform using Terraform. Terraform lets you define cloud infrastructure as code, automating otherwise repetitive tasks.

Creating a Cloud SQL instance in the console can be tiring, especially if you have to create multiple instances with different parameters such as node types and sizes. Terraform was created to solve that problem. It lets you express the instructions as code that can be used to plan, deploy, modify, and destroy the instances programmatically.

Requirements

You need the following to proceed:

  • A Google Project – GCP organizes resources into projects. Create one now in the GCP console and make note of the project ID. 

  • Enable Google Compute Engine for your project in the GCP console. Make sure to select the project you are using to follow this tutorial and click the “Enable” button.

  • A GCP service account key: Create a service account key to enable Terraform to access your GCP account. When creating the key, use the following settings:

  • Select the project you created in the previous step.

  • Click “Create Service Account”.

  • Give it any name you like and click “Create”.

  • For the Role, choose “Project -> Editor”, then click “Continue”.

  • Skip granting additional users access, and click “Done”.

After you create your service account, download your service account key.

  • Select your service account from the list.
  • Select the “Keys” tab.
  • In the drop down menu, select “Create new key”.
  • Leave the “Key Type” as JSON.
  • Click “Create” to create the key and save the key file to your system.
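If you prefer the command line, the same service account and key can be created with gcloud. This is a sketch of the equivalent steps; the account name terraform-sa and the project ID my-project-id are placeholders you should replace with your own:

```shell
# Create a service account for Terraform (name is a placeholder)
gcloud iam service-accounts create terraform-sa \
    --display-name="Terraform" --project=my-project-id

# Grant it the Editor role on the project
gcloud projects add-iam-policy-binding my-project-id \
    --member="serviceAccount:terraform-sa@my-project-id.iam.gserviceaccount.com" \
    --role="roles/editor"

# Download a JSON key for it, matching the filename used later in main.tf
gcloud iam service-accounts keys create gcp-credentials.json \
    --iam-account=terraform-sa@my-project-id.iam.gserviceaccount.com
```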

You also need to enable the Cloud SQL Admin API. You can either use the console or run this gcloud command:

gcloud services enable sqladmin.googleapis.com
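The configuration later in this guide also creates a private services connection for the instance's private IP, which relies on the Service Networking API. If it is not already enabled on your project, you can enable it the same way:

```shell
gcloud services enable servicenetworking.googleapis.com
```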

Step 1 – Downloading and installing Terraform

Terraform is available as a single binary for most platforms. Get the latest binary and installation instructions from the Terraform downloads page.
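On Linux, for example, installation boils down to downloading the zip, extracting it, and putting the binary on your PATH. The version number below is only an example; check the downloads page for the current release:

```shell
# Download and unzip a Terraform release (1.2.5 is just an example version)
curl -LO https://releases.hashicorp.com/terraform/1.2.5/terraform_1.2.5_linux_amd64.zip
unzip terraform_1.2.5_linux_amd64.zip

# Move the binary somewhere on your PATH, then confirm it works
sudo mv terraform /usr/local/bin/
terraform version
```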

Step 2 – Adding the Project code

In this section we will create the files that will contain the code for our resources. First you need to create a directory and switch to it. In your terminal use these commands:

mkdir gcp-cloudsql
cd gcp-cloudsql

First we will have to specify the providers. Terraform relies on plugins called “providers” to interact with cloud providers, SaaS providers, and other APIs.

Terraform configurations must declare which providers they require so that Terraform can install and use them. Additionally, some providers require configuration (like endpoint URLs or cloud regions) before they can be used.

This is main.tf, where we define the Google provider that we will use, pinned to a version range. We also define some locals that we can reuse. Replace the project with your own project ID, and make sure the credentials path points at the service account key you downloaded earlier.

A local value assigns a name to an expression, so you can use the name multiple times within a module instead of repeating the expression. Local values are like a function’s temporary local variables.

locals {
  env              = "dev"
  project          = "citizix"
  credentials_path = "./gcp-credentials.json"
  region           = "europe-west1"
}

terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = ">=4.20.0, < 5.0.0"
    }
  }
}

provider "google" {
  credentials = file(local.credentials_path)

  project = "citizix-prj"
  region  = local.region
}

Create a VPC

Next we need to create a VPC, because all the other resources depend on it. The following code defines a Google compute network and two subnetworks – one private and one public. Save it as vpc.tf.

locals {
  vpc_name = "${local.env}-${local.project}-vpc"
}

resource "google_compute_network" "vpc" {
  name                    = local.vpc_name
  auto_create_subnetworks = "false"
}

resource "google_compute_subnetwork" "public" {
  name          = "${local.vpc_name}-public-0"
  region        = local.region
  network       = google_compute_network.vpc.name
  ip_cidr_range = "10.1.0.0/24"
}

resource "google_compute_subnetwork" "private" {
  name                     = "${local.vpc_name}-private-0"
  region                   = local.region
  private_ip_google_access = true
  network                  = google_compute_network.vpc.name
  ip_cidr_range            = "10.1.1.0/24"
}

Create a Cloud SQL instance

Next, we can create the Cloud SQL instance. We will use PostgreSQL in this case.

In my case, I am assigning both private and public IPs to the instance and allowing access to it from all addresses. For security, you should restrict the IPs or subnets that are allowed to access the instance. To use a different database version, change the database_version parameter; for example, MYSQL_8_0 gives you the latest major version of MySQL.

Save this content in the file postgres.tf.

locals {
  sql_instance_name = "${local.env}-${local.project}-postgres"

  authorized_networks = [
    {
      name  = "allow-all-inbound"
      value = "0.0.0.0/0"
    },
  ]
}

resource "google_compute_global_address" "private_ip_address" {
  name          = "${local.env}-${local.project}-private-ip-address"
  purpose       = "VPC_PEERING"
  address_type  = "INTERNAL"
  prefix_length = 16
  network       = google_compute_network.vpc.id
}

resource "google_service_networking_connection" "private_vpc_connection" {
  network                 = google_compute_network.vpc.id
  service                 = "servicenetworking.googleapis.com"
  reserved_peering_ranges = [google_compute_global_address.private_ip_address.name]
}

resource "google_sql_database_instance" "postgres" {
  name             = local.sql_instance_name
  database_version = "POSTGRES_14"
  region           = local.region

  depends_on = [google_service_networking_connection.private_vpc_connection]

  settings {
    tier = "db-f1-micro"

    ip_configuration {
      dynamic "authorized_networks" {
        for_each = local.authorized_networks
        content {
          name  = lookup(authorized_networks.value, "name", null)
          value = authorized_networks.value.value
        }
      }

      ipv4_enabled    = true
      private_network = google_compute_network.vpc.id
    }
  }
  deletion_protection = "false"
}

resource "google_sql_database" "db" {
  name      = "citizix"
  instance  = local.sql_instance_name
  charset   = "utf8"
  collation = "utf8_general_ci"
}

resource "google_sql_user" "user" {
  name     = "root"
  instance = local.sql_instance_name
  host     = "%"
  password = "grudh3VRdWcqY8"
}

output "postgres_instance_name" {
  description = "The name of the postgres database instance"
  value       = google_sql_database_instance.postgres.name
}

output "postgres_public_ip_address" {
  description = "The public IPv4 address of the postgres instance."
  value       = google_sql_database_instance.postgres.public_ip_address
}

output "postgres_private_ip_address" {
  description = "The public IPv4 address of the postgres instance."
  value       = google_sql_database_instance.postgres.private_ip_address
}
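The google_sql_user resource above hardcodes the password for brevity. A safer pattern is to pass it in as a sensitive variable instead. This is a sketch; the variable name db_password is my own choice:

```hcl
variable "db_password" {
  description = "Password for the root database user"
  type        = string
  sensitive   = true
}

resource "google_sql_user" "user" {
  name     = "root"
  instance = google_sql_database_instance.postgres.name
  password = var.db_password
}
```

You can then supply the value with terraform apply -var db_password=... or through the TF_VAR_db_password environment variable, keeping the secret out of version control.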

Step 3 – Planning and applying changes

To apply the changes, do the following.

First, initialize Terraform to download the required dependencies and plugins.

terraform init

Then validate to ensure that you have valid code without errors.

terraform validate

Then plan to confirm that the changes being introduced are what is expected.

terraform plan -out tf.plan

Finally, apply the saved plan to create the resources in GCP. Applying a saved plan does not prompt for approval.

terraform apply tf.plan

If you run apply without a plan file, you can skip the confirmation prompt with -auto-approve:

terraform apply -auto-approve

If you no longer need the resources, you can destroy them with the command below. Add -auto-approve if you do not want to be prompted.

terraform destroy

Step 4 – Connecting to the SQL instance

Ensure that you are authenticated with gcloud in your terminal. You can use either gcloud or a Postgres client such as psql to access the instance.

Then use the gcloud command to connect, passing the name of the instance we created:

gcloud sql connect dev-citizix-postgres --user=root
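Alternatively, since the instance has a public IP, you can connect directly with psql using the Terraform outputs. This is a sketch assuming psql is installed and you run it from the directory holding the Terraform state:

```shell
# Connect to the "citizix" database as root on the instance's public IP
psql "host=$(terraform output -raw postgres_public_ip_address) user=root dbname=citizix"
```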

Conclusion

We were able to use Terraform to create a VPC and a Cloud SQL instance in GCP. This lets us create and destroy resources easily while gaining the benefits of infrastructure as code.
