# Deploying HashiCorp Vault on Kubernetes with Helm and Flux

*Everything you need to deploy Vault on Kubernetes, from namespace setup to ingress configuration, with real-world examples.*
HashiCorp Vault is a powerful secrets management tool that provides secure storage and access to tokens, passwords, certificates, and other sensitive data. This guide walks through deploying Vault on Kubernetes using Helm charts managed by Flux, providing a GitOps approach to secrets management infrastructure.
## Architecture Overview
This deployment creates a standalone Vault instance with persistent storage, making it suitable for development and small production environments. The setup uses several Kubernetes resources orchestrated through Kustomize and managed by Flux.
## Required Manifests
The deployment consists of several key components that work together to create a complete Vault installation:
### Kustomization Configuration
```yaml
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - namespace.yaml
  - persistentVolume.yaml
  - persistentVolumeClaim.yaml
  - repository.yaml
  - release.yaml
configMapGenerator:
  - name: vault-values
    files:
      - values.yaml
generatorOptions:
  disableNameSuffixHash: true
```
The Kustomization manifest serves as the orchestration layer, defining which resources to include and how to generate configuration. The `configMapGenerator` creates a ConfigMap from the `values.yaml` file, which contains the Helm chart customizations. The `disableNameSuffixHash` option keeps the ConfigMap name consistent across deployments, so the HelmRelease can reference it by a stable name.
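With the suffix hash disabled, the ConfigMap Kustomize generates looks roughly like this (a sketch with the data abbreviated; the namespace depends on how the Kustomization is applied):

```yaml
# Generated by Kustomize (sketch; data abbreviated)
apiVersion: v1
kind: ConfigMap
metadata:
  name: vault-values   # no content-hash suffix, so references to it stay valid
data:
  values.yaml: |
    global:
      enabled: true
    # ...rest of values.yaml...
```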
### Namespace Isolation
```yaml
# namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: vault
```
The namespace provides logical isolation for all Vault-related resources, following Kubernetes best practices for resource organization and access control.
### Persistent Storage Setup
```yaml
# persistentVolume.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-vault
  labels:
    app: vault
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: ""
  local:
    path: /data/vault
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - homelab
```
The PersistentVolume creates a local storage volume on a specific node. (Note that PersistentVolumes are cluster-scoped, so they take no namespace.) The `nodeAffinity` configuration pins the volume to a particular node (here, `homelab`), which is required for `local` volumes. The `Retain` reclaim policy prevents data loss when the PersistentVolumeClaim is deleted.
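The `local` volume type does not create the backing directory for you, so the node needs a small one-time preparation. A sketch, assuming shell access to the `homelab` node; the uid/gid of the `vault` user in the official image is an assumption to verify:

```shell
# One-time setup on the "homelab" node
sudo mkdir -p /data/vault
# Make the directory writable by the container user; verify the actual
# uid/gid with: kubectl exec -n vault vault-0 -- id
sudo chown 100:1000 /data/vault
```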
```yaml
# persistentVolumeClaim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vault-data
  namespace: vault
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  volumeName: pv-vault
  storageClassName: ""
```
The PersistentVolumeClaim requests storage from the PersistentVolume. By specifying `volumeName`, it binds directly to the created PV, ensuring data persistence across pod restarts.
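Once applied, the claim should bind almost immediately; a quick check:

```shell
# Both should report STATUS "Bound" once the claim matches the PV
kubectl get pv pv-vault
kubectl get pvc -n vault vault-data
```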
### Helm Repository Configuration
```yaml
# repository.yaml
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
  name: hashicorp
  namespace: vault
spec:
  interval: 24h
  url: https://helm.releases.hashicorp.com
```
The HelmRepository resource tells Flux where to find the HashiCorp Helm charts. The `interval` setting determines how often Flux checks for chart updates; 24 hours gives a reasonable update frequency without excessive polling.
### Helm Release Deployment
```yaml
# release.yaml
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: vault
  namespace: vault
spec:
  chart:
    spec:
      chart: vault
      interval: 12h
      sourceRef:
        kind: HelmRepository
        name: hashicorp
        namespace: vault
      version: 0.30.0
  interval: 30m
  valuesFrom:
    - kind: ConfigMap
      name: vault-values
      valuesKey: values.yaml
```
The HelmRelease defines the actual Vault deployment. It references the Helm repository, specifies the chart version for reproducible deployments, and pulls configuration values from the generated ConfigMap. The `interval` controls how often Flux reconciles the release.
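After Flux picks up these manifests, the release status can be inspected with the Flux CLI:

```shell
# Show the reconciliation status of all HelmReleases in the vault namespace
flux get helmreleases -n vault
```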
## Helm Values Configuration
The `values.yaml` file contains the configuration that customizes the Vault deployment:
```yaml
# values.yaml
global:
  enabled: true
  tlsDisable: true

resources:
  requests:
    memory: 256Mi
    cpu: 250m
  limits:
    memory: 256Mi
    cpu: 250m

server:
  standalone:
    enabled: true
    config: |-
      ui = true

      listener "tcp" {
        tls_disable = 1
        address = "[::]:8200"
        cluster_address = "[::]:8201"
        # Enable unauthenticated metrics access (necessary for Prometheus Operator)
        telemetry {
          unauthenticated_metrics_access = "true"
        }
      }

      storage "file" {
        path = "/data/vault"
      }

  dataStorage:
    enabled: false

  volumes:
    - name: vault-data
      persistentVolumeClaim:
        claimName: vault-data

  volumeMounts:
    - name: vault-data
      mountPath: /data/vault

  ingress:
    enabled: true
    ingressClassName: "traefik"
    hosts:
      - host: vault.homelab.internal
    extraPaths:
      - path: /*
        pathType: Prefix
        backend:
          service:
            name: vault-ui
            port:
              number: 8200

# Vault UI
ui:
  enabled: true
  serviceType: "ClusterIP"
```
### Global Configuration

The `global` section sets deployment-wide parameters. TLS is disabled for simplicity in development environments; production deployments should enable it. Resource limits ensure predictable performance and prevent resource exhaustion.
### Server Configuration

The `server.standalone` configuration creates a single Vault instance suitable for development or small production workloads. The embedded HCL configuration enables the web UI, sets up TCP listeners on the standard ports (8200 for the API, 8201 for cluster traffic), and configures file-based storage. The telemetry settings allow Prometheus to scrape metrics without authentication.
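Note that a freshly deployed standalone Vault starts sealed. A sketch of the first-time initialization, assuming the chart's default StatefulSet pod name `vault-0`; a single key share/threshold is for convenience only and not recommended for production:

```shell
# Initialize Vault: prints the unseal key(s) and the initial root token
kubectl exec -n vault vault-0 -- vault operator init -key-shares=1 -key-threshold=1

# Unseal with the key printed above (repeat per key if the threshold is higher)
kubectl exec -n vault vault-0 -- vault operator unseal <unseal-key>
```

Store the unseal keys and root token securely; losing them makes the data unrecoverable.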
### Storage Integration

The `volumes` and `volumeMounts` sections connect the PersistentVolumeClaim to the Vault container, ensuring data persists across pod restarts. The `dataStorage.enabled: false` setting disables the chart's default storage provisioning, since we're using a custom persistent volume.
### Ingress Configuration

The server section includes ingress configuration for external access through Traefik:
```yaml
server:
  ingress:
    enabled: true
    ingressClassName: "traefik"
    hosts:
      - host: vault.homelab.internal
    extraPaths:
      - path: /*
        pathType: Prefix
        backend:
          service:
            name: vault-ui
            port:
              number: 8200
```
This configuration enables access to Vault through a domain name (`vault.homelab.internal`) rather than direct port access. The ingress uses Traefik as the ingress controller and routes all paths to the Vault UI service on port 8200.
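Once DNS (or a local hosts entry) resolves the domain to the ingress, Vault's unauthenticated health endpoint makes a convenient smoke test:

```shell
# Returns JSON with "initialized" and "sealed" fields; a sealed or
# uninitialized Vault answers with a non-200 status code by design
curl -s http://vault.homelab.internal/v1/sys/health
```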
### UI Access

The UI service uses `ClusterIP` rather than `NodePort`, working in conjunction with the ingress:
```yaml
ui:
  enabled: true
  serviceType: "ClusterIP"
```
This approach provides cleaner external access through the ingress controller rather than exposing ports directly on cluster nodes.
## Deployment Process

### Validation

Before applying the manifests, validate the configuration:
```shell
# kustomize takes the directory containing kustomization.yaml, not the file itself
kubectl kustomize path/to/vault/
```
This command processes the Kustomization and displays the resulting manifests, allowing you to verify the configuration before deployment.
### Application

Deploy the configuration:
```shell
kubectl apply -k path/to/vault/
```
This applies all resources defined in the Kustomization, creating the complete Vault deployment.
### Flux Reconciliation

If needed, force Flux to reconcile the HelmRelease:
```shell
flux reconcile helmrelease -n vault vault
```
This command triggers immediate reconciliation, useful for testing changes or troubleshooting deployment issues.
## Security Considerations

This configuration prioritizes simplicity and is suitable for development environments. For production deployments, consider:

- Enabling TLS encryption for all communications
- Implementing proper authentication and authorization
- Using cloud-based storage backends for better availability
- Configuring backup and disaster recovery procedures
- Implementing network policies for traffic restriction
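To illustrate the last point, a minimal NetworkPolicy sketch that restricts ingress to the Vault API port; the pod selector label follows the chart's usual convention and should be verified against the rendered manifests:

```yaml
# network-policy.yaml (sketch, not part of the deployment above)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: vault-restrict-ingress
  namespace: vault
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: vault   # assumed label; check kubectl get pods --show-labels
  policyTypes:
    - Ingress
  ingress:
    - ports:
        - protocol: TCP
          port: 8200
```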
## Monitoring and Maintenance
The deployment includes telemetry configuration for Prometheus integration, enabling monitoring of Vault's performance and health metrics. Regular maintenance should include monitoring storage usage, updating chart versions, and reviewing security configurations.
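With the unauthenticated telemetry listener configured above, a Prometheus Operator ServiceMonitor could scrape Vault's metrics endpoint. A sketch; the service label and port name are assumptions to adapt to the rendered chart:

```yaml
# servicemonitor.yaml (sketch; adjust the selector to the labels on the vault service)
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: vault
  namespace: vault
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: vault   # assumed label
  endpoints:
    - port: http                      # assumed port name on the vault service
      path: /v1/sys/metrics
      params:
        format: ["prometheus"]
```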
## Complete Implementation
This Vault deployment is part of a larger homelab infrastructure setup. You can find the complete configuration files, along with other Kubernetes deployments and GitOps configurations, in my homelab repository: https://github.com/HYP3R00T/homelab
The repository includes additional examples of Flux-managed deployments, monitoring setups, and infrastructure automation that complement this Vault installation.