For this example, we assume two cluster scenarios: staging and production. The ultimate goal is to use Flux and Kustomize to manage both clusters while minimizing duplicate declarations.
We will configure Flux to install, test, and upgrade the demo application using the HelmRepository and HelmRelease custom resources. Flux monitors the Helm repository and automatically upgrades the release to the latest chart version that matches the configured semver range.
Prerequisites
flux2-kustomize-helm-example
- github.com/fluxcd/flux…
You will need a Kubernetes cluster version 1.16 or later and kubectl version 1.18 or later. For quick local testing you can use Kubernetes Kind, but any other Kubernetes setup will work as well.
To follow this guide, you need a GitHub account and a personal access token that can create repositories (check all permissions under repo).
Install the Flux CLI on macOS and Linux using Homebrew:
brew install fluxcd/tap/flux
Or install the CLI by downloading precompiled binaries using the Bash script:
curl -s https://fluxcd.io/install.sh | sudo bash
The project structure
The Git repository contains the following top-level directories:
- The apps directory contains the Helm releases with a custom configuration per cluster
- The infrastructure directory contains common infrastructure tools, such as the NGINX ingress controller and the Helm repository definitions
- The clusters directory contains the Flux configuration for each cluster
├── apps
│   ├── base
│   ├── production
│   └── staging
├── infrastructure
│   ├── nginx
│   ├── redis
│   └── sources
└── clusters
    ├── production
    └── staging
The configuration structure of apps is:
- The apps/base/ directory contains the namespace and Helm release definitions
- The apps/production/ directory contains production-specific Helm release values
- The apps/staging/ directory contains staging-specific values
./apps/
├── base
│   └── podinfo
│       ├── kustomization.yaml
│       ├── namespace.yaml
│       └── release.yaml
├── production
│   ├── kustomization.yaml
│   └── podinfo-patch.yaml
└── staging
    ├── kustomization.yaml
    └── podinfo-patch.yaml
In the apps/base/podinfo/ directory, we have a HelmRelease with values shared by both clusters:
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: podinfo
  namespace: podinfo
spec:
  releaseName: podinfo
  chart:
    spec:
      chart: podinfo
      sourceRef:
        kind: HelmRepository
        name: podinfo
        namespace: flux-system
  interval: 5m
  values:
    cache: redis-master.redis:6379
    ingress:
      enabled: true
      annotations:
        kubernetes.io/ingress.class: nginx
      path: "/*"
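The release definition is accompanied by apps/base/podinfo/namespace.yaml, a plain Namespace object (a minimal sketch, matching the namespace used by the HelmRelease above):

```yaml
# apps/base/podinfo/namespace.yaml: namespace that the podinfo release is installed into
apiVersion: v1
kind: Namespace
metadata:
  name: podinfo
```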
In the apps/staging/ directory, we have a Kustomize patch with staging-specific values:
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: podinfo
spec:
  chart:
    spec:
      version: ">=1.0.0-alpha"
  test:
    enable: true
  values:
    ingress:
      hosts:
        - podinfo.staging
Note that with version: ">=1.0.0-alpha" we configure Flux to automatically upgrade the HelmRelease to the latest chart version, including alpha, beta, and other pre-releases.
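The patch is wired in by the overlay's kustomization.yaml, which layers it on top of the base (a sketch, assuming the patch file is named podinfo-patch.yaml):

```yaml
# apps/staging/kustomization.yaml: reuse the base release and apply the staging patch
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../base/podinfo
patchesStrategicMerge:
  - podinfo-patch.yaml
```

The production overlay follows the same pattern with its own patch file.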
In the apps/production/ directory, we have a Kustomize patch with production-specific values:
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: podinfo
  namespace: podinfo
spec:
  chart:
    spec:
      version: ">=1.0.0"
  values:
    ingress:
      hosts:
        - podinfo.production
Note that with version: ">=1.0.0" we configure Flux to automatically upgrade the HelmRelease to the latest stable chart version (alpha, beta, and pre-releases are ignored).
The infrastructure configuration structure is:
./infrastructure/
├── nginx
│   ├── kustomization.yaml
│   ├── namespace.yaml
│   └── release.yaml
├── redis
│   ├── kustomization.yaml
│   ├── namespace.yaml
│   └── release.yaml
└── sources
    ├── bitnami.yaml
    ├── kustomization.yaml
    └── podinfo.yaml
In the infrastructure/sources/ directory, we have the Helm repository definitions:
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: HelmRepository
metadata:
  name: podinfo
spec:
  interval: 5m
  url: https://stefanprodan.github.io/podinfo
---
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: HelmRepository
metadata:
  name: bitnami
spec:
  interval: 30m
  url: https://charts.bitnami.com/bitnami
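A kustomization.yaml in infrastructure/sources/ aggregates the repository definitions (a sketch, assuming the two definitions above are split into podinfo.yaml and bitnami.yaml):

```yaml
# infrastructure/sources/kustomization.yaml: collect the HelmRepository manifests
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - podinfo.yaml
  - bitnami.yaml
```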
Note that with interval: 5m we configure Flux to pull the Helm repository index every five minutes. If the index contains a new chart version that matches a HelmRelease semver range, Flux will upgrade the release to that version.
Bootstrap staging and production
The clusters directory contains the Flux configuration for each cluster:
./clusters/
├── production
│   ├── apps.yaml
│   └── infrastructure.yaml
└── staging
    ├── apps.yaml
    └── infrastructure.yaml
In the clusters/staging/ directory, we have the Kustomization definitions:
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  interval: 10m0s
  dependsOn:
    - name: infrastructure
  sourceRef:
    kind: GitRepository
    name: flux-system
  path: ./apps/staging
  prune: true
  validation: client
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: infrastructure
  namespace: flux-system
spec:
  interval: 10m0s
  sourceRef:
    kind: GitRepository
    name: flux-system
  path: ./infrastructure
Note that with path: ./apps/staging we configure Flux to sync the staging Kustomize overlay, and with dependsOn we tell Flux to create the infrastructure items before deploying the applications.
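The ordering can be made stricter with health checks, so the apps Kustomization only proceeds once key infrastructure workloads are actually ready. A sketch; the Deployment name and namespace are assumptions, so check what your NGINX release actually creates:

```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: infrastructure
  namespace: flux-system
spec:
  # content omitted for brevity
  healthChecks:
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx-ingress-controller  # assumed name of the ingress controller Deployment
      namespace: nginx
```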
Fork this repository on your personal GitHub account and export your GitHub Access Token, username, and repository name:
export GITHUB_TOKEN=<your-token>
export GITHUB_USER=<your-username>
export GITHUB_REPO=<repository-name>
Verify that your staging cluster meets the prerequisites:
flux check --pre
Set the kubectl context to your staging cluster and bootstrap Flux:
flux bootstrap github \
--context=staging \
--owner=${GITHUB_USER} \
--repository=${GITHUB_REPO} \
--branch=main \
--personal \
--path=clusters/staging
The bootstrap command commits the manifests for the Flux components in the clusters/staging/flux-system directory and creates a GitHub deploy key with read-only access so that it can pull changes inside the cluster.
Watch the Helm releases being installed on staging:
$ watch flux get helmreleases --all-namespaces
NAMESPACE  NAME     REVISION  SUSPENDED  READY  MESSAGE
nginx      nginx    5.6.14    False      True   Release reconciliation succeeded
podinfo    podinfo  5.0.3     False      True   Release reconciliation succeeded
redis      redis    11.3.4    False      True   Release reconciliation succeeded
Verify that the demo app can be accessed through the ingress:
$ kubectl -n nginx port-forward svc/nginx-ingress-controller 8080:80 &
$ curl -H "Host: podinfo.staging" http://localhost:8080
{"hostname": "podinfo-59489db7b5-lmwpn", "version": "5.0.3"}
Bootstrap Flux on production by setting the context and path to your production cluster:
flux bootstrap github \
--context=production \
--owner=${GITHUB_USER} \
--repository=${GITHUB_REPO} \
--branch=main \
--personal \
--path=clusters/production
Monitor production reconciliation:
$ watch flux get kustomizations
NAME REVISION READY
apps main/797cd90cc8e81feb30cfe471a5186b86daf2758d True
flux-system main/797cd90cc8e81feb30cfe471a5186b86daf2758d True
infrastructure main/797cd90cc8e81feb30cfe471a5186b86daf2758d True
Encrypt Kubernetes secrets
To store secrets securely in a Git repository, you can use Mozilla’s SOPS CLI to encrypt Kubernetes Secrets via OpenPGP or KMS.
Install GnuPG and SOPS:
brew install gnupg sops
Generate a GPG key for Flux without specifying a passphrase, and retrieve the GPG key ID:
$ gpg --full-generate-key
Email address: [email protected]
$ gpg --list-secret-keys [email protected]
sec rsa3072 2020-09-06 [SC]
1F3D1CED2F865F5E59CA564553241F147E7C5FA4
Create a Kubernetes secret on the cluster with the private key:
gpg --export-secret-keys \
--armor 1F3D1CED2F865F5E59CA564553241F147E7C5FA4 |
kubectl create secret generic sops-gpg \
--namespace=flux-system \
--from-file=sops.asc=/dev/stdin
Create a Kubernetes Secret manifest and encrypt its data fields with SOPS:
kubectl -n redis create secret generic redis-auth \
--from-literal=password=change-me \
--dry-run=client \
-o yaml > infrastructure/redis/redis-auth.yaml
sops --encrypt \
--pgp=1F3D1CED2F865F5E59CA564553241F147E7C5FA4 \
--encrypted-regex '^(data|stringData)$' \
--in-place infrastructure/redis/redis-auth.yaml
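Instead of passing the fingerprint and regex on every invocation, SOPS can read them from a .sops.yaml file at the repository root (a sketch, using the key ID generated above):

```yaml
# .sops.yaml: encryption rules picked up automatically by the sops CLI
creation_rules:
  - path_regex: .*.yaml
    encrypted_regex: ^(data|stringData)$
    pgp: 1F3D1CED2F865F5E59CA564553241F147E7C5FA4
```

With this in place, sops --encrypt --in-place infrastructure/redis/redis-auth.yaml selects the right key and fields on its own.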
Add the secret to infrastructure/redis/kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: redis
resources:
  - namespace.yaml
  - release.yaml
  - redis-auth.yaml
Enable decryption on the cluster by editing the infrastructure.yaml file:
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: infrastructure
  namespace: flux-system
spec:
  # content omitted for brevity
  decryption:
    provider: sops
    secretRef:
      name: sops-gpg
Export the public key so that anyone with access to the repository can encrypt secrets but not decrypt them:
gpg --export -a [email protected] > public.key
Push changes to the main branch:
git add -A && git commit -m "add encrypted secret" && git push
Verify that the secret has been created in the redis namespace of both clusters:
kubectl --context staging -n redis get secrets
kubectl --context production -n redis get secrets
You can use Kubernetes secrets to provide values to your Helm releases:
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: redis
spec:
  # content omitted for brevity
  values:
    usePassword: true
  valuesFrom:
    - kind: Secret
      name: redis-auth
      valuesKey: password
      targetPath: password
Learn more about Helm release values overrides in the docs.
Add a cluster
To add a cluster to your fleet, clone the repository locally:
git clone https://github.com/${GITHUB_USER}/${GITHUB_REPO}.git
cd ${GITHUB_REPO}
Create a directory in clusters using your cluster name:
mkdir -p clusters/dev
Copy the sync manifests from staging:
cp clusters/staging/infrastructure.yaml clusters/dev
cp clusters/staging/apps.yaml clusters/dev
You can create a dev overlay inside apps; make sure to change spec.path in clusters/dev/apps.yaml to ./apps/dev.
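A minimal apps/dev/kustomization.yaml could start out by just reusing the base release (a sketch; add a podinfo patch with dev-specific values as needed):

```yaml
# apps/dev/kustomization.yaml: start from the shared base, no patches yet
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../base/podinfo
```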
Push changes to the main branch:
git add -A && git commit -m "add dev cluster" && git push
Set the kubectl context and path to your dev cluster and bootstrap Flux:
flux bootstrap github \
--context=dev \
--owner=${GITHUB_USER} \
--repository=${GITHUB_REPO} \
--branch=main \
--personal \
--path=clusters/dev
Identical environments
If you want to spin up an identical environment, you can bootstrap a cluster, e.g. production-clone, and reuse the production definitions.
Bootstrap the production-clone cluster:
flux bootstrap github \
--context=production-clone \
--owner=${GITHUB_USER} \
--repository=${GITHUB_REPO} \
--branch=main \
--personal \
--path=clusters/production-clone
Pull changes locally:
git pull origin main
Create a kustomization.yaml in the clusters/production-clone directory:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - flux-system
  - ../production/infrastructure.yaml
  - ../production/apps.yaml
Note that besides the flux-system Kustomize overlay, we also include the infrastructure and apps manifests from the production directory.
Push changes to the main branch:
git add -A && git commit -m "add production clone" && git push
Tell Flux to deploy the production workloads on the production-clone cluster:
flux reconcile kustomization flux-system \
--context=production-clone \
--with-source