How to Deploy to Kubernetes with ArgoCD, GitHub Actions, and Helm Templating (Step-by-Step)

Build a GitOps CI/CD pipeline: GitHub Actions builds Docker images, helm template renders the Kubernetes manifests, and ArgoCD syncs them from Git. Step-by-step, with the full workflow YAML and ArgoCD Application.

ArgoCD is one of the cleanest ways to manage Kubernetes deployments because it enforces GitOps: your cluster state is driven from Git. In this guide we build a full CI/CD pipeline using GitHub Actions to build and push Docker images, Helm to generate Kubernetes manifests, and ArgoCD to sync and deploy those manifests automatically.

The idea: GitHub Actions builds the image, runs helm template to render manifests into raw YAML, commits them into a separate GitOps repo, and ArgoCD detects the change and deploys the new version. You get full Git history of deployments, reproducible manifests, and easy rollback by pointing ArgoCD back to an older path.

In this guide you’ll:

  • Understand the two-repo layout (application repo vs release/GitOps repo)
  • See why rendering Helm in CI (instead of using Helm inside ArgoCD) can be beneficial
  • Walk through the full deployment workflow step by step
  • Walk through a complete GitHub Actions workflow and understand what each step does
  • Configure an ArgoCD Application with automated sync, self-heal, and Slack notifications
  • See release folder strategy and security considerations

Prerequisites:

  • A Kubernetes cluster with ArgoCD installed
  • An application repo with source code, Dockerfile(s), and a Helm values file
  • A GitOps (release) repo that will store rendered manifests (ArgoCD watches this repo)
  • GitHub for code and Actions; a Helm chart repository (e.g. the GitLab Package Registry); Docker Hub (or another registry) for images
  • A Slack webhook and GitHub Secrets configured (DOCKERHUB_*, HELM_REGISTRY_*, SLACK_WEBHOOK, and a token with write access to the GitOps repo)

Architecture overview

We use two repositories.

1. Application repository

This holds application source and deployment config:

  • Application source code
  • Dockerfile (main app image)
  • Dockerfile.migrate (optional migration image)
  • Helm values file(s) per environment (e.g. deploy/helm/stage.yaml)
  • GitHub Actions workflow (e.g. .github/workflows/deploy.yml)

Example layout:

your-app/
  Dockerfile
  Dockerfile.migrate
  deploy/helm/stage.yaml
  .github/workflows/deploy.yml

2. Release repository (GitOps repo)

This holds only the rendered Kubernetes YAML that ArgoCD applies. No Helm charts here—only the output of helm template.

Example layout:

argocd-releases/
  myapp/
    stage/
      main-84e93ed/
        deployment.yaml
        service.yaml
  argo-apps/
    stage-myapp.yaml   # ArgoCD Application manifest

ArgoCD watches this repo and syncs whatever is in the path configured in the Application (e.g. myapp/stage/main-84e93ed).


Why render Helm templates in CI instead of using Helm in ArgoCD?

ArgoCD can deploy Helm charts directly (source type: Helm). Rendering with helm template in CI and committing plain YAML has trade-offs.

Advantages of rendering in CI:

  • Rendered manifests are visible and reviewable in the GitOps repo
  • ArgoCD applies plain YAML—simpler to debug and audit
  • Fits environments that restrict Helm inside production clusters
  • No need to manage Helm dependencies or repos inside ArgoCD

Trade-offs:

  • More commits in the release repo
  • Release repo grows (use a cleanup strategy or overwrite a single current/ folder)

For many teams, the visibility and control of rendered YAML are worth it.
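
As a concrete example of that reviewability, seeing exactly what changed between two releases is a plain diff in the GitOps repo (the folder names below are the example ones used later in this guide):

# Compare the rendered manifests of two releases
cd argocd-releases
diff -ru myapp/stage/main-84e93ed myapp/stage/main-a1b2c3d

# Or inspect the commit that CI pushed for the latest release
git log -1 -p -- myapp/stage/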


The deployment workflow (step by step)

The pipeline does the following:

  1. Generate a Docker tag from branch name and git short SHA (e.g. main-84e93ed).
  2. Build and push the application Docker image to the registry.
  3. Build and push the migration image (same tag).
  4. Fetch the Helm chart from your chart repo (e.g. GitLab Package Registry).
  5. Render manifests with helm template using the new image tag and environment values.
  6. Copy the rendered YAML into the GitOps repo under a new folder (e.g. apisim/stage/<tag>/).
  7. Update the ArgoCD Application manifest so spec.source.path points to the new folder.
  8. Commit and push to the GitOps repo.
  9. Notify Slack (success or failure).

Once the push happens, ArgoCD sees the change and syncs the cluster to the new manifests.
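
If you want to watch or force that sync from a terminal, the argocd CLI helps; a minimal sketch, assuming the CLI is installed and logged in, and that the Application is named stage-myapp as in the examples below:

# Show sync status, health, and the path ArgoCD is currently tracking
argocd app get stage-myapp

# Trigger an immediate sync instead of waiting for the polling interval
argocd app sync stage-myapp

# Block until the app reports Synced and Healthy (useful in scripts)
argocd app wait stage-myapp --sync --health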


GitHub Actions workflow

Below is a complete workflow that deploys a stage app (called myapp in these examples) using the pattern above.

Replace placeholders:

  • Helm chart repo: use your own chart URL and credentials (replace YOUR_PROJECT_ID with your GitLab project ID if you host the chart in the GitLab Package Registry).
  • Image names: your-registry/your-app and your-registry/your-app-migrate → your registry and image names.
  • GitOps repo: YOUR_ORG/argocd-releases → your release repo.

Workflow file (e.g. .github/workflows/deploy-stage.yml):

name: Deploy stage app using ArgoCD

on:
  workflow_dispatch:
  push:
    branches:
      - main

concurrency:
  group: "${{ github.workflow }}-${{ github.head_ref || github.ref }}"
  cancel-in-progress: true

jobs:
  deploy-stage:
    name: Build and deploy stage
    runs-on: ubuntu-latest

    steps:
      - name: Check out code
        uses: actions/checkout@v4

      - name: Set Docker tag
        id: vars
        shell: bash
        run: |
          set -euo pipefail
          GIT_HASH=$(git rev-parse --short "${GITHUB_SHA}")
          DOCKER_TAG="${GITHUB_REF##*/}-${GIT_HASH}"
          echo "docker_tag=$DOCKER_TAG" >> $GITHUB_OUTPUT

      - name: Set up QEMU
        uses: docker/setup-qemu-action@v3

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Login to Docker Hub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}

      - name: Build and push app image
        uses: docker/build-push-action@v5
        with:
          push: true
          build-args: |
            TAGS=citizix
            BUILD_ID=${{ github.run_id }}
            BUILD_TAG=${{ steps.vars.outputs.docker_tag }}
          tags: your-registry/your-app:${{ steps.vars.outputs.docker_tag }}

      - name: Build and push migration image
        uses: docker/build-push-action@v5
        with:
          push: true
          file: Dockerfile.migrate
          tags: your-registry/your-app-migrate:${{ steps.vars.outputs.docker_tag }}

      - name: Render Helm templates and update ArgoCD
        env:
          GITHUB_TOKEN: ${{ secrets.RELEASE_REPO_TOKEN }} # PAT with write access to the GitOps repo (example secret name); the built-in GITHUB_TOKEN cannot push to another repo
          HELM_USER: ${{ secrets.HELM_REGISTRY_USER }}
          HELM_PASSWORD: ${{ secrets.HELM_REGISTRY_PASSWORD }}
        shell: bash
        run: |
          set -euo pipefail

          helm repo add --username "$HELM_USER" --password "$HELM_PASSWORD" myrepo https://gitlab.com/api/v4/projects/YOUR_PROJECT_ID/packages/helm/stable
          helm fetch myrepo/app --version 0.1.3 --untar

          helm template myapp ./app \
            --set image.tag=${{ steps.vars.outputs.docker_tag }} \
            --set hook.image.tag=${{ steps.vars.outputs.docker_tag }} \
            --namespace=stage \
            -f deploy/helm/stage.yaml \
            --output-dir stage/${{ steps.vars.outputs.docker_tag }}

          cd stage/${{ steps.vars.outputs.docker_tag }}
          cp app/templates/*.yaml .
          rm -rf app
          cd ../..

          git config --global user.email "[email protected]"
          git config --global user.name "GitHub Actions"
          git clone "https://x-access-token:${GITHUB_TOKEN}@github.com/YOUR_ORG/argocd-releases.git" argocd-releases
          cd argocd-releases

          mkdir -p myapp/stage/${{ steps.vars.outputs.docker_tag }}
          cp ../stage/${{ steps.vars.outputs.docker_tag }}/*.yaml myapp/stage/${{ steps.vars.outputs.docker_tag }}

          git add myapp/stage/${{ steps.vars.outputs.docker_tag }}

          OLD_PATH=$(grep "path:" argo-apps/stage-myapp.yaml | awk '{print $2}')
          OLD_TAG="${OLD_PATH##*/}"
          sed -i "s/${OLD_TAG}/${{ steps.vars.outputs.docker_tag }}/" argo-apps/stage-myapp.yaml
          git add argo-apps/stage-myapp.yaml

          git commit -m "Deploy stage: ${{ steps.vars.outputs.docker_tag }}"
          git push origin main

      - name: Notify Slack
        uses: lazy-actions/slatify@master
        if: always()
        with:
          type: ${{ job.status }}
          job_name: "*ArgoCD deploy - ${{ steps.vars.outputs.docker_tag }}*"
          mention: "here"
          mention_if: "failure"
          channel: "#deploys"
          url: ${{ secrets.SLACK_WEBHOOK }}

How this workflow works:

  • Passing the Docker tag between steps: The “Set Docker tag” step writes the tag to $GITHUB_OUTPUT with echo "docker_tag=$DOCKER_TAG" >> $GITHUB_OUTPUT. Later steps read it via steps.vars.outputs.docker_tag. This is the standard way in GitHub Actions to pass data from one step to the next; a local dry-run of the tag logic is sketched after this list.
  • Bash safety: The inline scripts use set -euo pipefail: the job fails on the first command that fails (-e), on use of unset variables (-u), and on failure in any command in a pipeline (-o pipefail). That way a failed helm or git command fails the whole job instead of being ignored.
  • Step names: Names like “Build and push app image” and “Build and push migration image” show up in the Actions run UI so you can see which step succeeded or failed without opening the logs.
  • Cloning the GitOps repo: The workflow clones the release repo using the token from the job env and the x-access-token: URL form; the token is never echoed in the script. The built-in GITHUB_TOKEN is scoped to the repository the workflow runs in, so pushing to a separate release repo requires a fine-grained or classic personal access token with contents read/write on that repo, stored as a secret (RELEASE_REPO_TOKEN in the example above).
  • Helm credentials: Store your Helm registry username and password in GitHub Secrets (e.g. HELM_REGISTRY_USER, HELM_REGISTRY_PASSWORD) and pass them into the “Render Helm templates” step so helm repo add and helm fetch can authenticate.
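
For reference, the tag logic from the “Set Docker tag” step can be reproduced locally; a minimal sketch (run inside the application repo; your branch and SHA will differ from the examples):

# Same tag format the workflow produces: <branch>-<short sha>, e.g. main-84e93ed
BRANCH=$(git rev-parse --abbrev-ref HEAD)
GIT_HASH=$(git rev-parse --short HEAD)
echo "${BRANCH}-${GIT_HASH}"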

How Helm rendering works

The crucial command is:

helm template myapp ./app \
  --set image.tag=<docker_tag> \
  --set hook.image.tag=<docker_tag> \
  --namespace=stage \
  -f deploy/helm/stage.yaml \
  --output-dir stage/<docker_tag>

This produces rendered YAML under stage/<tag>/app/templates/*.yaml. The script then copies those files into the GitOps repo at e.g. myapp/stage/<tag>/, so each deployment has its own directory. ArgoCD’s Application is updated to point at the new path so it syncs the new release.
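
If you want a sanity check before the manifests are committed, they can be validated without touching the cluster; a rough sketch, assuming kubectl is available (add --validate=false for a purely offline check, since client-side dry-run still contacts the API server for schema validation by default):

# Client-side dry-run of everything rendered for this release
kubectl apply --dry-run=client -R -f stage/main-84e93ed/

# Or simply list what helm template produced
find stage/main-84e93ed -name '*.yaml' -print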


Updating the ArgoCD Application path

The workflow updates the Application’s source.path so ArgoCD deploys the new folder:

OLD_PATH=$(grep "path:" argo-apps/stage-myapp.yaml | awk '{print $2}')
OLD_TAG="${OLD_PATH##*/}"
sed -i "s/${OLD_TAG}/${DOCKER_TAG}/" argo-apps/stage-myapp.yaml

So path: myapp/stage/main-84e93ed becomes path: myapp/stage/main-a1b2c3d. After the commit is pushed, ArgoCD sees the change and syncs.
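
The sed substring replace works as long as the old tag string appears nowhere else in the file. If yq (v4, the Go implementation) happens to be available on the runner, a more targeted update of the same field could look like this (a sketch, not part of the workflow above):

# Set spec.source.path explicitly instead of substituting the old tag
DOCKER_TAG=main-a1b2c3d   # the tag computed earlier in the job
yq -i ".spec.source.path = \"myapp/stage/${DOCKER_TAG}\"" argo-apps/stage-myapp.yaml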


ArgoCD Application definition

This is the ArgoCD Application that deploys the stage app. It points at the GitOps repo and the path that the workflow updates.

This example enables selfHeal and syncOptions so the cluster stays in sync with Git and the target namespace is created if it doesn’t exist:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: stage-myapp
  namespace: argocd
  annotations:
    notifications.argoproj.io/subscribe.on-deployed.slack: argocd-deploys
    notifications.argoproj.io/subscribe.on-sync-succeeded.slack: argocd-deploys
    notifications.argoproj.io/subscribe.on-sync-failed.slack: argocd-deploys
  labels:
    team: backend
spec:
  destination:
    name: in-cluster
    namespace: stage
  project: default
  source:
    path: myapp/stage/main-84e93ed
    repoURL: https://github.com/YOUR_ORG/argocd-releases
    targetRevision: main
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
      - PruneLast=true

  • selfHeal: true — ArgoCD reverts manual changes in the cluster to match Git.
  • CreateNamespace=true — Creates the stage namespace if missing.
  • PruneLast=true — Deletes pruned resources after new ones are healthy.
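
ArgoCD only starts managing this Application once the manifest exists in the argocd namespace. A minimal way to bootstrap it the first time (assuming kubectl access to the cluster and the file path from the example layout):

# One-time bootstrap: register the Application with ArgoCD
kubectl apply -n argocd -f argo-apps/stage-myapp.yaml
# Later changes to the file can be applied the same way, or the argo-apps/
# folder can itself be tracked by an "app of apps" Application.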

For Slack, configure the ArgoCD Notifications controller and put the target Slack channel (argocd-deploys here) in the annotation values; see the ArgoCD Slack notifications guide for setup details.


Release folder strategy and pruning

Using a folder per release (e.g. myapp/stage/main-84e93ed) gives clear history and easy rollback: point the Application path back to an older folder and sync.

Options:

  • Keep last N releases: Add a cleanup job (e.g. a scheduled GitHub Action or cron) that deletes old directories in the GitOps repo, keeping only the last N (say 20); a sketch follows below.
  • Overwrite current: Use a single path like myapp/stage/current and overwrite it each run; smaller repo but no per-release history in the path.

The folder-per-release approach works well for auditability and rollback when combined with periodic pruning of old folders.
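
For the keep-last-N option, a cleanup sketch along these lines could run on a schedule against the GitOps repo (the paths, the app/env folder, and the number kept are all examples; make sure N is large enough that the folder currently referenced by the Application is never removed):

#!/usr/bin/env bash
set -euo pipefail

KEEP=20                      # how many release folders to retain
APP_ENV_DIR="myapp/stage"    # one subdirectory per release lives here

cd argocd-releases

# Rank release folders by the date of the last commit that touched them (newest first),
# then stage everything past the first $KEEP for deletion.
for dir in "$APP_ENV_DIR"/*/; do
  echo "$(git log -1 --format=%ct -- "$dir") $dir"
done | sort -rn | awk -v keep="$KEEP" 'NR > keep {print $2}' | while read -r old; do
  git rm -r --quiet "$old"
done

git commit -m "Prune old stage releases (keep last ${KEEP})" || echo "Nothing to prune"
git push origin main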


Security considerations

  • Avoid hardcoding tokens in clone URLs in the workflow script. Prefer passing GITHUB_TOKEN via env and using x-access-token in the URL, or use actions/checkout with a second repo and a token that has access to the release repo.
  • Store Helm registry credentials and Slack webhook in GitHub Secrets; never commit them.
  • Use a dedicated token for the GitOps repo with minimal scope (e.g. contents: read/write only); remember that the built-in GITHUB_TOKEN cannot push to a different repository.

Frequently Asked Questions (FAQ)

What is ArgoCD?

ArgoCD is a GitOps controller for Kubernetes. It watches a Git repository (and optionally Helm/Kustomize) and keeps the cluster state in sync with what’s defined in Git. Deployments are driven by Git commits, which gives auditability and easy rollback.

Why use Helm template in CI instead of Helm in ArgoCD?

Rendering with helm template in CI and committing plain YAML gives you visible, reviewable manifests in Git and lets ArgoCD apply raw YAML. Some teams also prefer not to run Helm inside the cluster. The trade-off is more commits and a growing release repo unless you prune or overwrite a single path.

What is the GitOps repo for?

The GitOps (release) repo holds only the rendered Kubernetes manifests (and ArgoCD Application definitions). CI writes new manifests and updates the Application path; ArgoCD reads from this repo and applies changes. Separation of application code (app repo) and deployment state (GitOps repo) is a core GitOps pattern.

How do I roll back a deployment with ArgoCD?

Point the ArgoCD Application’s source.path back to a previous release folder (e.g. from myapp/stage/main-a1b2c3d to myapp/stage/main-84e93ed), commit and push. ArgoCD will sync the cluster to that revision. With folder-per-release, rollback is a path change in Git.
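
A minimal rollback sketch using the example paths (the same sed approach from the workflow, done by hand):

cd argocd-releases
# Point the Application back at the previous release folder
sed -i 's#myapp/stage/main-a1b2c3d#myapp/stage/main-84e93ed#' argo-apps/stage-myapp.yaml
git commit -am "Rollback stage to main-84e93ed"
git push origin main
# ArgoCD picks the change up on its next poll, or sync immediately with:
#   argocd app sync stage-myapp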

How do I get Slack notifications for ArgoCD?

Use ArgoCD Notifications and annotate the Application with Slack triggers (e.g. on-sync-succeeded, on-sync-failed). You can also send notifications from GitHub Actions (e.g. Slatify) on workflow success/failure. See how to set up ArgoCD Slack notifications for more.
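
As a rough sketch of the cluster-side setup (assumes the ArgoCD Notifications controller is installed; the Slack token is a placeholder, and the linked guide covers the full configuration):

# Store the Slack bot token where the notifications controller expects it
kubectl -n argocd create secret generic argocd-notifications-secret \
  --from-literal=slack-token=xoxb-REPLACE_ME \
  --dry-run=client -o yaml | kubectl apply -f -

# Register the Slack service in the notifications ConfigMap
kubectl -n argocd patch configmap argocd-notifications-cm --type merge \
  -p '{"data":{"service.slack":"token: $slack-token"}}'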


Conclusion

You now have a GitOps-style pipeline where:

  • CI (GitHub Actions) builds Docker images and runs Helm template to generate manifests.
  • Rendered YAML is committed to a release repo and the ArgoCD Application path is updated.
  • ArgoCD syncs the cluster from Git, with optional selfHeal and Slack notifications.

This gives you auditable, reproducible deployments and straightforward rollbacks. For more on ArgoCD itself, see how to deploy and configure ArgoCD in Kubernetes. For Helm chart hosting, see how to host Helm charts with GitLab Package Registry or working with Helm in Kubernetes.

Ways to extend this pipeline:

  • Clean up old release folders in the GitOps repo (e.g. a scheduled job that keeps only the last N directories) so the repo doesn’t grow without bound.
  • Sign container images (e.g. with Cosign) and verify signatures in the cluster for supply-chain security.
  • Use ArgoCD sync waves so migration Jobs run and complete before the main app Deployment.
  • Add a manual approval (e.g. environment protection) for production deploys.
  • Promote the same image from stage to production (reuse the tag, don’t rebuild) for true promotion workflows.