How to install Velero for backups using GCP provider

Velero is an open source tool to safely backup and restore, perform disaster recovery, and migrate Kubernetes cluster resources and persistent volumes.

Velero has two main components: a CLI, and a server-side Kubernetes deployment.

# Prerequisites

  • Access to a Kubernetes cluster, v1.16 or later, with DNS and container networking enabled.
  • kubectl installed locally

Velero uses object storage to store backups and associated artifacts. It also optionally integrates with supported block storage systems to snapshot your persistent volumes. Before beginning the installation process, you should identify the object storage provider and optional block storage provider(s) you’ll be using from the list of compatible providers.

Velero supports storage providers for both cloud-provider environments and on-premises environments.

# Install the CLI

# macOS – Homebrew

On macOS, you can use Homebrew to install the velero client:

brew install velero

# GitHub release

Download the latest release’s tarball for your client platform.

Extract the tarball:

tar -xvf <RELEASE-TARBALL-NAME>.tar.gz -C /dir/to/extract/to

Move the extracted velero binary to somewhere in your $PATH (/usr/local/bin for most users).

# Install and configure the server components

There are two supported methods for installing the Velero server components:

  • the velero install CLI command
  • the Helm chart

Velero uses storage provider plugins to integrate with a variety of storage systems to support backup and snapshot operations. The steps to install and configure the Velero server components along with the appropriate plugins are specific to your chosen storage provider.

# Cloud provider

The Velero client includes an install command to specify the settings for each supported cloud provider. You can install Velero for the included cloud providers using the following command:

velero install \
    --provider <YOUR_PROVIDER> \
    --bucket <YOUR_BUCKET> \
    --secret-file <PATH_TO_FILE>

# Run Velero on GCP

You can run Kubernetes on Google Cloud Platform in either:

  • Kubernetes on Google Compute Engine virtual machines
  • Google Kubernetes Engine

If you do not have the gcloud and gsutil CLIs locally installed, follow the user guide to set them up.

# Create GCS bucket

Velero requires an object storage bucket in which to store backups, preferably one unique to a single Kubernetes cluster (see the FAQ for more details). Set the $BUCKET variable to the name you want, replacing the <YOUR_BUCKET> placeholder, then create the GCS bucket:

BUCKET=<YOUR_BUCKET>
gsutil mb gs://$BUCKET/
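GCS bucket names must be globally unique, 3–63 characters, and limited to lowercase letters, digits, dashes, underscores, and dots. A quick local pre-flight check (a sketch; the BUCKET value here is a stand-in) can catch an invalid name before calling gsutil:

```shell
# Stand-in bucket name for the sketch; use your own value.
BUCKET="my-velero-backups"

# Rough approximation of the GCS naming rules: 3-63 chars, starts and
# ends with a lowercase letter or digit.
if echo "$BUCKET" | grep -Eq '^[a-z0-9][a-z0-9._-]{1,61}[a-z0-9]$'; then
  echo "bucket name looks valid: $BUCKET"
else
  echo "invalid bucket name: $BUCKET" >&2
fi
```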

# Create service account

To integrate Velero with GCP, create a Velero-specific Service Account:

View your current config settings:

gcloud config list

Store the project value from the results in the environment variable $PROJECT_ID.

PROJECT_ID=$(gcloud config get-value project)
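If no default project is configured, $PROJECT_ID ends up empty and later commands misbehave silently. A small sanity check (a sketch; the stand-in value below replaces the real gcloud output so the example is self-contained) fails fast instead:

```shell
# Stand-in for the output of: gcloud config get-value project
PROJECT_ID="my-gcp-project"

# Abort early if no project is set, rather than letting later
# gcloud commands fail with a confusing error.
if [ -z "$PROJECT_ID" ]; then
  echo "No default project set; run: gcloud config set project <PROJECT>" >&2
  exit 1
fi
echo "PROJECT_ID=$PROJECT_ID"
```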

Create a service account:

gcloud iam service-accounts create velero \
    --display-name "Velero service account"

If you’ll be using Velero to back up multiple clusters into multiple GCS buckets, it may be desirable to create a unique service account per cluster rather than using the default name velero.

Then list all accounts and find the velero account you just created:

gcloud iam service-accounts list

Set the $SERVICE_ACCOUNT_EMAIL variable to match its email value.

SERVICE_ACCOUNT_EMAIL=$(gcloud iam service-accounts list \
  --filter="displayName:Velero service account" \
  --format 'value(email)')
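Alternatively, since GCP service account emails follow the fixed pattern NAME@PROJECT_ID.iam.gserviceaccount.com, the value can be constructed directly without filtering the list output (a sketch; the project ID is a stand-in):

```shell
# Stand-in project ID; normally this comes from `gcloud config get-value project`.
PROJECT_ID="my-gcp-project"

# Service account emails always have the form NAME@PROJECT_ID.iam.gserviceaccount.com
SERVICE_ACCOUNT_EMAIL="velero@${PROJECT_ID}.iam.gserviceaccount.com"
echo "$SERVICE_ACCOUNT_EMAIL"
```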

Attach policies to give velero the necessary permissions to function. First define the compute permissions the Velero server requires (this list comes from the velero-plugin-for-gcp documentation; check it for the current set):

ROLE_PERMISSIONS=(
    compute.disks.get
    compute.disks.create
    compute.disks.createSnapshot
    compute.snapshots.get
    compute.snapshots.create
    compute.snapshots.useReadOnly
    compute.snapshots.delete
    compute.zones.get
)

Then create a custom role with those permissions and bind it to the service account:

gcloud iam roles create velero.server \
    --project $PROJECT_ID \
    --title "Velero Server" \
    --permissions "$(IFS=","; echo "${ROLE_PERMISSIONS[*]}")"

gcloud projects add-iam-policy-binding $PROJECT_ID \
    --member serviceAccount:$SERVICE_ACCOUNT_EMAIL \
    --role projects/$PROJECT_ID/roles/velero.server

gsutil iam ch serviceAccount:$SERVICE_ACCOUNT_EMAIL:objectAdmin gs://${BUCKET}

Create a service account key, specifying an output file (credentials-velero.json) in your local directory:

gcloud iam service-accounts keys create credentials-velero.json \
    --iam-account $SERVICE_ACCOUNT_EMAIL
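A downloaded key is a JSON document whose "type" field is "service_account"; a quick structural check can confirm the file was written correctly. This sketch fabricates a minimal stand-in file so the check is demonstrable without real credentials:

```shell
# Stand-in key file (a real one is produced by the gcloud command above
# and contains private key material - never commit it to version control).
cat > /tmp/sample-key.json <<'EOF'
{"type": "service_account", "project_id": "my-gcp-project"}
EOF

# A valid service account key always declares type "service_account".
grep -q '"type": "service_account"' /tmp/sample-key.json \
  && echo "key file looks like a service account key"
```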

# Credentials and configuration

If you run Google Kubernetes Engine (GKE), make sure that your current IAM user is a cluster-admin. This role is required to create RBAC objects. See the GKE documentation for more information.

# Install and start Velero

Install Velero, including all prerequisites, into the cluster and start the deployment. This will create a namespace called velero, and place a deployment named velero in it.

velero install \
    --provider gcp \
    --plugins velero/velero-plugin-for-gcp:v1.0.0 \
    --bucket $BUCKET \
    --secret-file ./credentials-velero.json

Additionally, you can specify --use-restic to enable restic support, and --wait to wait for the deployment to be ready.

(Optional) Specify --snapshot-location-config snapshotLocation=<YOUR_LOCATION> to keep snapshots in a specific availability zone.

For more complex installation needs, use either the Helm chart, or add the --dry-run -o yaml options to generate the YAML representation of the installation.

# Installing with the Helm chart

When installing with the Helm chart, the provider’s credential information needs to be included in your values.

The easiest way to do this is with the --set-file argument, available in Helm 2.10 and higher.

Add helm chart and update helm repos:

helm repo add vmware-tanzu https://vmware-tanzu.github.io/helm-charts
helm repo update

Note: You may add the flag --set cleanUpCRDs=true if you want to delete the Velero CRDs after deleting a release. Please note that cleaning up CRDs will also delete any CRD instance, such as BackupStorageLocation and VolumeSnapshotLocation, which would have to be reconfigured when reinstalling Velero. The backup data in object storage will not be deleted, even though the backup instances in the cluster will.

Specify the necessary values using the --set key=value[,key=value] and --set-file arguments to helm install. For example:

helm install velero vmware-tanzu/velero \
    --namespace <YOUR NAMESPACE> \
    --create-namespace \
    --set-file credentials.secretContents.cloud=<FULL PATH TO FILE> \
    --set configuration.backupStorageLocation[0].name=<BACKUP STORAGE LOCATION NAME> \
    --set configuration.backupStorageLocation[0].provider=<PROVIDER NAME> \
    --set configuration.backupStorageLocation[0].bucket=<BUCKET NAME> \
    --set configuration.backupStorageLocation[0].config.region=<REGION> \
    --set configuration.volumeSnapshotLocation[0].name=<VOLUME SNAPSHOT LOCATION NAME> \
    --set configuration.volumeSnapshotLocation[0].provider=<PROVIDER NAME> \
    --set configuration.volumeSnapshotLocation[0].config.region=<REGION> \
    --set initContainers[0].name=velero-plugin-for-<PROVIDER NAME> \
    --set initContainers[0].image=velero/velero-plugin-for-<PROVIDER NAME>:<PROVIDER PLUGIN TAG> \
    --set initContainers[0].volumeMounts[0].mountPath=/target \
    --set initContainers[0].volumeMounts[0].name=plugins
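The same settings can be kept in a values file instead of a long list of --set flags, which is easier to review and version-control. A hypothetical values.yaml mirroring the flags above for the GCP case (bucket name and plugin tag are stand-ins), passed with helm install -f values.yaml:

```shell
# Write a values.yaml equivalent of the --set flags above.
# Bucket name and plugin version are stand-in values.
cat > values.yaml <<'EOF'
configuration:
  backupStorageLocation:
    - name: default
      provider: gcp
      bucket: my-velero-backups
initContainers:
  - name: velero-plugin-for-gcp
    image: velero/velero-plugin-for-gcp:v1.0.0
    volumeMounts:
      - mountPath: /target
        name: plugins
EOF
echo "values.yaml written"
```

You would then install with: helm install velero vmware-tanzu/velero --namespace velero --create-namespace -f values.yaml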

# Removing Velero

If you would like to completely uninstall Velero from your cluster, the following commands will remove all resources created by velero install:

kubectl delete namespace/velero clusterrolebinding/velero
kubectl delete crds -l component=velero

For a Helm-installed Velero, run:

helm uninstall velero -n velero

kubectl delete namespace velero

kubectl delete crds -l component=velero

The last command is necessary because Velero’s CRDs are not uninstalled during helm uninstall.

# Conclusion

In this article, we learned how to install Velero with the GCP provider, both with the velero install command and with its Helm chart.
In future articles, we will walk through a simple backup and restore scenario.
