How to use Kustomize to manage Kubernetes configurations

Kustomize is used for Kubernetes native configuration management. It introduces a template-free way to customize application configuration that simplifies the use of off-the-shelf applications. Kustomize traverses a Kubernetes manifest to add, remove, or update configuration options without forking. It is available both as a standalone binary and as a native feature of kubectl via kubectl apply -k.

Kustomize simplifies deployments by allowing you to create an entire Kubernetes application out of individual pieces — without touching the YAML configuration files for the individual components.

Kustomize leverages layering to preserve the base settings of your applications and components by overlaying declarative YAML artifacts (called patches) that selectively override default settings without changing the original files.

Kustomize relies on the following system of configuration management layering to achieve reusability:

  • Base Layer – Specifies the most common resources
  • Patch Layers – Specifies use case specific resources

In this tutorial, we’ll set up kustomize and explore how it works with a sample application deployment. 

Features of Kustomize:
  • Purely declarative approach to configuration customization
  • Natively built into kubectl from version 1.14
  • Manage an arbitrary number of distinctly customized Kubernetes configurations
  • Available as a standalone binary for extension and integration into other services
  • Every artifact that kustomize uses is plain YAML and can be validated and processed as such
  • Kustomize encourages a fork/modify/rebase workflow

Installing Kustomize

Kustomize can be installed by downloading precompiled binaries.

Binaries for Linux, macOS, and Windows are published at various versions on the releases page.

The following script detects your OS and downloads the appropriate kustomize binary to your current working directory.

curl -s "https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/hack/install_kustomize.sh" | bash

Note that this script doesn’t work for the ARM architecture.
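
To confirm the download worked, you can check the binary’s version and, optionally, move it onto your PATH. This assumes the script dropped the kustomize binary into your current working directory:

./kustomize version
sudo mv kustomize /usr/local/bin/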

Kubernetes Example

Let’s step through how Kustomize works using a deployment scenario involving two different environments: uat and prod. In this example we’ll use service, deployment, and horizontal pod autoscaler resources. Each environment will use a different type of service:

  • uat – ClusterIP
  • prod – LoadBalancer

They will each also have different HPA settings. This is how the directory structure looks:

➜ tree
.
├── base
│   ├── deployment.yaml
│   ├── hpa.yaml
│   ├── kustomization.yaml
│   └── service.yaml
└── overlays
    ├── prod
    │   ├── hpa.yaml
    │   ├── kustomization.yaml
    │   ├── rollout-replicas.yaml
    │   └── service-loadbalancer.yaml
    └── uat
        ├── hpa.yaml
        └── kustomization.yaml

4 directories, 10 files

Base files

The base folder holds the common resources, such as the standard deployment.yaml, service.yaml, and hpa.yaml resource configuration files. We’ll explore each of their contents in the following sections.

base/deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: citiapp
spec:
  selector:
    matchLabels:
      app: citiapp
  template:
    metadata:
      labels:
        app: citiapp
    spec:
      containers:
      - name: app
        image: foo/citiapp:latest
        ports:
        - name: http
          containerPort: 8080
          protocol: TCP

base/service.yaml

apiVersion: v1
kind: Service
metadata:
  name: citiapp-service
spec:
  ports:
  - name: http
    port: 8080
  selector:
    app: citiapp

base/hpa.yaml

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: citiapp-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: citiapp
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50

base/kustomization.yaml

The kustomization.yaml file is the most important file in the base folder; it declares which resources the base is made of.

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - service.yaml
  - deployment.yaml
  - hpa.yaml
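
Before writing any overlays, you can render the base on its own to make sure it is valid. Running the following from the project root should print the service, deployment, and HPA resources separated by --- markers:

kustomize build base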

Uat Overlay Files

The overlays folder houses environment-specific overlays. It has 2 sub-folders (one for each environment).

uat/kustomization.yaml

This file defines which base configuration to reference and patch using patchesStrategicMerge, which allows partial YAML files to be defined and overlaid on top of the base.

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
  - ../../base
patchesStrategicMerge:
  - hpa.yaml

uat/hpa.yaml

This file uses the same resource name as the one in the base folder, which is how Kustomize matches it for patching. It also contains important values, such as min/max replicas, for the uat environment.

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: citiapp-hpa
spec:
  minReplicas: 1
  maxReplicas: 2
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 90

If you compare the previous hpa.yaml file with base/hpa.yaml, you’ll notice differences in minReplicas, maxReplicas, and averageUtilization values.
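
If you want to see those differences at a glance, a plain diff of the base and overlay files makes the patch contents obvious:

diff base/hpa.yaml overlays/uat/hpa.yaml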

Review Patches

To confirm that your patch config file changes are correct before applying them to the cluster, you can run kustomize build overlays/uat:

apiVersion: v1
kind: Service
metadata:
  name: citiapp-service
spec:
  ports:
  - name: http
    port: 8080
  selector:
    app: citiapp
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: citiapp
spec:
  selector:
    matchLabels:
      app: citiapp
  template:
    metadata:
      labels:
        app: citiapp
    spec:
      containers:
      - image: foo/citiapp:latest
        name: app
        ports:
        - containerPort: 8080
          name: http
          protocol: TCP
---
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: citiapp-hpa
spec:
  maxReplicas: 2
  metrics:
  - resource:
      name: cpu
      target:
        averageUtilization: 90
        type: Utilization
    type: Resource
  minReplicas: 1
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: citiapp

Apply Patches

Once you have confirmed that your overlays are correct, use the kubectl apply -k overlays/uat command to apply the settings to your cluster:

$ kubectl apply -k overlays/uat
service/citiapp-service created
deployment.apps/citiapp created
horizontalpodautoscaler.autoscaling/citiapp-hpa created
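
To verify that the objects were created with the uat values, you can query them by the names used in the manifests above:

kubectl get deployment citiapp
kubectl get service citiapp-service
kubectl get hpa citiapp-hpa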

After handling the uat environment, let’s also walk through the production environment.

Define Prod Overlay Files

prod/hpa.yaml

In our production hpa.yaml, let’s say we want to allow up to 10 replicas, with new replicas triggered by a resource utilization threshold of 70% avg CPU usage. This is how that would look:

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: citiapp-hpa
spec:
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70

prod/rollout-replicas.yaml

There’s also a rollout-replicas.yaml file in our production directory, which specifies the replica count and rolling update strategy:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: citiapp
spec:
  replicas: 10
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate

prod/service-loadbalancer.yaml

We use this file to change the service type to LoadBalancer.

apiVersion: v1
kind: Service
metadata:
  name: citiapp-service
spec:
  type: LoadBalancer

prod/kustomization.yaml

This file plays the same role in the production folder as it does in the uat folder: it defines which base to reference and which patches to apply for your production environment. In this case, it lists two additional patch files: rollout-replicas.yaml and service-loadbalancer.yaml.

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
  - ../../base
patchesStrategicMerge:
  - rollout-replicas.yaml
  - hpa.yaml
  - service-loadbalancer.yaml

Review Prod Patches

Let’s check whether the production values are being applied by running kustomize build overlays/prod.
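
The full output is similar to the uat build shown earlier, so it is not reproduced here in full, but the Service should now carry the LoadBalancer type from the patch. A rough sketch of the expected Service portion (field ordering may differ slightly):

apiVersion: v1
kind: Service
metadata:
  name: citiapp-service
spec:
  ports:
  - name: http
    port: 8080
  selector:
    app: citiapp
  type: LoadBalancer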

Once you have reviewed, apply your overlays to the cluster with:

$ kubectl apply -k overlays/prod
service/citiapp-service created
deployment.apps/citiapp created
horizontalpodautoscaler.autoscaling/citiapp-hpa created

Understanding Kustomize

Bases and overlays

Kustomize’s configuration transformation approach leverages the use of kustomization layers so that the same base configuration files can be reused across multiple kustomization configurations. It achieves this with the concepts of bases and overlays.

  • A base is a directory containing a file called kustomization.yaml, which enumerates a set of resources along with customizations to apply to them. A base is declared in the resources field of a kustomization file.
  • An overlay is a directory whose kustomization refers to another kustomization directory as one of its bases.

A base can be thought of as a preliminary step in a pipeline, having no knowledge of the overlays that it is referenced by. After a base is finished processing, it sends its resources as input to the overlay to transform according to the overlay’s specification.

The following is an example of a kustomization base:

# base/kustomization.yaml
resources:
- deployment.yaml
namePrefix: bar-

# base/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  template:
    metadata:
    spec:
      containers:
      - image: nginx
        name: nginx

This base could be reused by multiple kustomization overlays. The following is an example of an overlay that could refer to this base:

# overlay/kustomization.yaml
resources:
- ../base
- configmap.yaml
namePrefix: foo-

# overlay/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cm
data:
  red: blue

Running the command kustomize build overlay produces the following output:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: foo-bar-nginx
spec:
  template:
    metadata:
    spec:
      containers:
      - image: nginx
        name: nginx
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: foo-cm
data:
  red: blue

The Deployment received the name prefix bar- from the base kustomization, and then another name prefix foo- from the overlay kustomization. The ConfigMap only received the name prefix foo- because it was declared in the overlay, and thus was processed only by the overlay.

Generate Secrets and ConfigMaps

You can generate Secrets and ConfigMaps from a file by using the secretGenerator or configMapGenerator fields in your kustomization file. For example:

# kustomization.yaml
configMapGenerator:
- name: my-app
  files:
  - .properties

generates the following YAML output:

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-app-g82klmn92h
data:
  .properties: |-
      red=blue
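
secretGenerator works the same way. For example, a hypothetical credentials file could be turned into a Secret like this (the file name and Secret name here are just placeholders):

# kustomization.yaml
secretGenerator:
- name: my-app-secret
  files:
  - password.txt

As with the ConfigMap above, the generated Secret name gets a hash suffix derived from its contents, so resources in the same kustomization that reference it pick up a new name whenever the data changes.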

Edit the kustomization file

Kustomize provides several imperative commands that help you manage your kustomization file.

  • To add all the YAML files in your current directory to the kustomization’s resources field, run the following command: kustomize edit add resource *.yaml
  • To view the kustomize edit help page and see all the subcommands it offers, run the following command: kustomize edit -h
  • To get specific help for a subcommand, add the subcommand as an argument. For example: kustomize edit add -h
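
For example, a short imperative sequence that scaffolds a new overlay might look like this (the staging directory and image names are only illustrative):

mkdir -p overlays/staging && cd overlays/staging
kustomize create --resources ../../base
kustomize edit set namespace staging
kustomize edit set image foo/citiapp=foo/citiapp:1.2.3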