Velero is an open source tool to safely back up and restore, perform disaster recovery, and migrate Kubernetes cluster resources and persistent volumes.
Velero has two main components: a CLI, and a server-side Kubernetes deployment.
Prerequisites
- Access to a Kubernetes cluster, v1.16 or later, with DNS and container networking enabled.
- kubectl installed locally
Velero uses object storage to store backups and associated artifacts. It also optionally integrates with supported block storage systems to snapshot your persistent volumes. Before beginning the installation process, you should identify the object storage provider and optional block storage provider(s) you’ll be using from the list of compatible providers.
Velero supports storage providers for both cloud-provider environments and on-premises environments.
Install the CLI
macOS - Homebrew
On macOS, you can use Homebrew to install the velero client:
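A typical invocation, assuming Homebrew is already set up, is:

```shell
brew install velero
```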
GitHub release
Download the latest release’s tarball for your client platform.
Extract the tarball:
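For example, on an Intel Mac (substitute the version, OS, and architecture of the tarball you actually downloaded):

```shell
tar -xvf velero-<VERSION>-darwin-amd64.tar.gz
```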
Move the extracted velero binary to somewhere in your $PATH (/usr/local/bin for most users).
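For example, assuming /usr/local/bin is on your $PATH and the tarball was extracted into the current directory:

```shell
sudo mv velero /usr/local/bin/
```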
Install and configure the server components
There are two supported methods for installing the Velero server components:
- the velero install CLI command
- the Helm chart
Velero uses storage provider plugins to integrate with a variety of storage systems to support backup and snapshot operations. The steps to install and configure the Velero server components along with the appropriate plugins are specific to your chosen storage provider.
Cloud provider
The Velero client includes an install command to specify the settings for each supported cloud provider. You can install Velero for the included cloud providers using the following command:
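The general shape of the command is as follows; the plugin image and credentials file are provider-specific, so check your provider plugin's documentation for the exact values:

```shell
velero install \
    --provider <YOUR_PROVIDER> \
    --plugins <PROVIDER_PLUGIN_IMAGE> \
    --bucket <YOUR_BUCKET> \
    --secret-file <PATH_TO_CREDENTIALS_FILE>
```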
Run Velero on GCP
You can run Kubernetes on Google Cloud Platform in either:
- Kubernetes on Google Compute Engine virtual machines
- Google Kubernetes Engine
If you do not have the gcloud and gsutil CLIs locally installed, follow the user guide to set them up.
Create GCS bucket
Velero requires an object storage bucket in which to store backups, preferably unique to a single Kubernetes cluster (see the FAQ for more details). Create a GCS bucket, replacing the <YOUR_BUCKET> placeholder with the name of your bucket:
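For example:

```shell
BUCKET=<YOUR_BUCKET>
gsutil mb gs://$BUCKET/
```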
Create service account
To integrate Velero with GCP, create a Velero-specific Service Account:
View your current config settings:
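With the gcloud CLI, that is:

```shell
gcloud config list
```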
Store the project value from the results in the environment variable $PROJECT_ID.
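For example:

```shell
PROJECT_ID=$(gcloud config get-value project)
```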
Create a service account:
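For example, using velero as the account name:

```shell
gcloud iam service-accounts create velero \
    --display-name "Velero service account"
```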
If you’ll be using Velero to backup multiple clusters with multiple GCS buckets, it may be desirable to create a unique username per cluster rather than the default velero.
Then list all accounts and find the velero account you just created:
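For example:

```shell
gcloud iam service-accounts list
```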
Set the $SERVICE_ACCOUNT_EMAIL variable to match its email value.
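One way to do this, assuming you kept the display name "Velero service account" from the previous step, is:

```shell
SERVICE_ACCOUNT_EMAIL=$(gcloud iam service-accounts list \
    --filter="displayName:Velero service account" \
    --format 'value(email)')
```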
Attach policies to give velero the necessary permissions to function:
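A sketch of the required steps follows: create a custom role with the compute and storage permissions Velero needs, bind it to the service account, and grant the account access to the bucket. The permission list here follows the velero-plugin-for-gcp README; verify it against the plugin version you are installing.

```shell
# Permissions Velero needs for disk snapshots and object storage
ROLE_PERMISSIONS=(
    compute.disks.get
    compute.disks.create
    compute.disks.createSnapshot
    compute.snapshots.get
    compute.snapshots.create
    compute.snapshots.useReadOnly
    compute.snapshots.delete
    compute.zones.get
    storage.objects.create
    storage.objects.delete
    storage.objects.get
    storage.objects.list
)

# Create a custom role containing those permissions
gcloud iam roles create velero.server \
    --project $PROJECT_ID \
    --title "Velero Server" \
    --permissions "$(IFS=","; echo "${ROLE_PERMISSIONS[*]}")"

# Bind the role to the velero service account
gcloud projects add-iam-policy-binding $PROJECT_ID \
    --member serviceAccount:$SERVICE_ACCOUNT_EMAIL \
    --role projects/$PROJECT_ID/roles/velero.server

# Grant the service account access to the backup bucket
gsutil iam ch serviceAccount:$SERVICE_ACCOUNT_EMAIL:objectAdmin gs://${BUCKET}
```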
Create a service account key, specifying an output file (credentials-velero) in your local directory:
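For example:

```shell
gcloud iam service-accounts keys create credentials-velero \
    --iam-account $SERVICE_ACCOUNT_EMAIL
```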
Credentials and configuration
If you run Google Kubernetes Engine (GKE), make sure that your current IAM user is a cluster-admin. This role is required to create RBAC objects. See the GKE documentation for more information.
Install and start Velero
Install Velero, including all prerequisites, into the cluster and start the deployment. This will create a namespace called velero, and place a deployment named velero in it.
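For GCP, the install command looks like the following; the plugin image tag shown here is illustrative, so substitute a velero-plugin-for-gcp release compatible with your Velero version:

```shell
velero install \
    --provider gcp \
    --plugins velero/velero-plugin-for-gcp:v1.5.0 \
    --bucket $BUCKET \
    --secret-file ./credentials-velero
```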
Additionally, you can specify --use-restic to enable restic support, and --wait to wait for the deployment to be ready.
(Optional) Specify --snapshot-location-config snapshotLocation=<YOUR_LOCATION> to keep snapshots in a specific availability zone.
For more complex installation needs, use either the Helm chart, or add the --dry-run -o yaml options to generate a YAML representation of the installation.
Installing with the Helm chart
When installing with the Helm chart, the provider's credential information needs to be supplied through your values.
The easiest way to do this is with the --set-file argument, available in Helm 2.10 and later.
Add the Velero Helm chart repository and update your local Helm repos:
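The chart is published in the vmware-tanzu repository:

```shell
helm repo add vmware-tanzu https://vmware-tanzu.github.io/helm-charts
helm repo update
```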
Note: You may add the flag --set cleanUpCRDs=true if you want to delete the Velero CRDs after deleting a release. Please note that cleaning up CRDs will also delete any CRD instances, such as BackupStorageLocation and VolumeSnapshotLocation, which would have to be reconfigured when reinstalling Velero. The backup data in object storage will not be deleted, even though the backup instances in the cluster will be.
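A minimal install that only supplies the GCP credentials file via --set-file might look like this; the value key shown is taken from the vmware-tanzu chart, so verify it against the values.yaml of the chart version you install:

```shell
helm install velero vmware-tanzu/velero \
    --namespace velero \
    --create-namespace \
    --set-file credentials.secretContents.cloud=./credentials-velero
```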
Specify the necessary values using the --set key=value[,key=value] argument to helm install. For example:
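A fuller example for GCP, which also configures the provider, the bucket, and the plugin init container, is sketched below. The value paths vary between chart versions (newer charts express backupStorageLocation as a list), so treat these keys as illustrative and check the chart's values.yaml:

```shell
helm install velero vmware-tanzu/velero \
    --namespace velero \
    --create-namespace \
    --set-file credentials.secretContents.cloud=./credentials-velero \
    --set configuration.provider=gcp \
    --set configuration.backupStorageLocation.bucket=<YOUR_BUCKET> \
    --set "initContainers[0].name=velero-plugin-for-gcp" \
    --set "initContainers[0].image=velero/velero-plugin-for-gcp:v1.5.0" \
    --set "initContainers[0].volumeMounts[0].mountPath=/target" \
    --set "initContainers[0].volumeMounts[0].name=plugins"
```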
Removing Velero
If you would like to completely uninstall Velero from your cluster, the following commands will remove all resources created by velero install:
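Following the upstream Velero docs, this removes the namespace, cluster role binding, and CRDs created by velero install:

```shell
kubectl delete namespace/velero clusterrolebinding/velero
kubectl delete crds -l component=velero
```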
For a Helm-installed Velero:
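A sketch of the cleanup, assuming the release was named velero in the velero namespace; since the CRD labels differ between install methods, the CRDs are matched here by their velero.io API group rather than by label:

```shell
helm uninstall velero -n velero
kubectl delete namespace velero
# Velero's CRDs all live in the velero.io API group and survive helm uninstall
kubectl get crds -o name | grep 'velero.io' | xargs -r kubectl delete
```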
The last command is necessary because Velero’s CRDs are not uninstalled during helm uninstall.
Conclusion
In this article, we learned how to install Velero with the GCP provider, both with the velero install command and with its Helm chart.
In future articles, we will try our hands at a simple backup and restore scenario.