Deploying Highly Available Vault on Kubernetes using Helm
This article is a simple tutorial on running a highly available Vault cluster inside Kubernetes, with secret injection enabled through annotations on workload definitions. Helm charts will be used to deploy Vault and etcd, which stores Vault's secrets and configuration.
Prerequisites
To use Helm charts you will need the helm command-line interface. To install it, follow the instructions in the Helm documentation: https://helm.sh/docs/intro/install.
Setup
Create a namespace for the vault workload:
$ kubectl create namespace vault
Set the namespace "vault" as the current namespace in the kubectl context. This way, it will not be necessary to specify the namespace on every command:
$ kubectl config set-context --current --namespace=vault
Add helm repositories:
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm repo add hashicorp https://helm.releases.hashicorp.com
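After adding the repositories, it can be worth refreshing the local chart index and checking that the charts are visible. A quick sanity check using standard Helm commands:
$ helm repo update
$ helm search repo bitnami/etcd
$ helm search repo hashicorp/vault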
Installing Etcd
Vault supports several storage backends besides the default local file storage. For this tutorial, etcd will be used.
First, create a file called helm-etcd-values.yaml that contains the custom values used to override the Helm chart defaults.
helm-etcd-values.yaml
replicaCount: 3
auth:
  rbac:
    enabled: true
To deploy the etcd cluster, run the helm install command, passing the configuration file:
$ helm install etcd bitnami/etcd --values helm-etcd-values.yaml
Wait until the pods are healthy before continuing. Use the command "watch --no-title kubectl get pods" to follow their status:
NAME     READY   STATUS    RESTARTS   AGE
etcd-0   1/1     Running   0          2m16s
etcd-1   1/1     Running   0          2m16s
etcd-2   1/1     Running   0          2m16s
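If you want to go one step further than pod status, you can query etcd's own health endpoint from inside a member pod. This sketch assumes the Bitnami image exposes the root password through the ETCD_ROOT_PASSWORD environment variable, which is the usual behaviour when RBAC auth is enabled; adjust it if your chart version differs:
$ kubectl exec etcd-0 -- sh -c 'etcdctl endpoint health --user root:$ETCD_ROOT_PASSWORD'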
Installing Vault
Now install the Vault chart using the helm-vault-values.yaml config:
helm-vault-values.yaml
server:
  # affinity: ""
  ha:
    enabled: true
    config: |
      disable_mlock = true
      ui = true

      listener "tcp" {
        tls_disable = 1
        address = "[::]:8200"
        cluster_address = "[::]:8201"
      }

      storage "etcd" {
        address = "http://etcd-0.etcd-headless.vault.svc.cluster.local:2379,http://etcd-1.etcd-headless.vault.svc.cluster.local:2379,http://etcd-2.etcd-headless.vault.svc.cluster.local:2379"
        etcd_api = "v3"
        ha_enabled = "true"
      }
By default, this chart tries to schedule the Vault pods on different nodes. If you are testing on a cluster with fewer than 3 worker nodes, uncomment the affinity option; otherwise the pods will stay in the Pending status.
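If you want to review all the options this chart supports before installing (including the default affinity rules mentioned above), you can dump its default values with a standard Helm command:
$ helm show values hashicorp/vault > vault-defaults.yaml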
$ helm install vault hashicorp/vault --values helm-vault-values.yaml
After the install, the pods will show a status of Running but not Ready. This happens because it is still necessary to initialize the Vault cluster and unseal Vault on every cluster member.
NAME      READY   STATUS    RESTARTS   AGE
...
vault-0   0/1     Running   0          1m5s
vault-1   0/1     Running   0          1m5s
vault-2   0/1     Running   0          1m5s
...
Initialize the cluster, saving the unseal key in a file for the next steps:
$ kubectl exec vault-0 -- vault operator init -key-shares=1 -key-threshold=1 -format=json > cluster-keys.json
Get the unseal key from the cluster-keys.json file:
$ VAULT_UNSEAL_KEY=$(cat cluster-keys.json | jq -r ".unseal_keys_b64[]")
Unseal Vault on every cluster member:
$ for i in 0 1 2
do
kubectl exec vault-$i -- vault operator unseal $VAULT_UNSEAL_KEY
done
The response will be similar to the following for every cluster member, and all pods will become Ready:
Key                     Value
---                     -----
Seal Type               shamir
Initialized             true
Sealed                  false
Total Shares            1
Threshold               1
Version                 1.8.3
Storage Type            etcd
Cluster Name            vault-cluster-107a3a9a
Cluster ID              65efeff3-e342-ffb6-6309-871bcab8ba96
HA Enabled              true
HA Cluster              n/a
HA Mode                 standby
Active Node Address     <none>
$ kubectl get pods -l app.kubernetes.io/name=vault
NAME      READY   STATUS    RESTARTS   AGE
vault-0   1/1     Running   0          15m
vault-1   1/1     Running   0          15m
vault-2   1/1     Running   0          15m
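To double-check the high availability setup, you can also ask an individual member for its status; standby members should report HA Mode as standby, while exactly one member reports active. For example:
$ kubectl exec vault-1 -- vault status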
Enabling secrets engine
Now log in to Vault using the root token stored in the cluster-keys.json file. Get the token, then execute the login:
$ VAULT_ROOT_TOKEN=$(cat cluster-keys.json | jq -r ".root_token")
$ kubectl exec vault-0 -- vault login $VAULT_ROOT_TOKEN
Enable the key/value secrets engine:
$ kubectl exec vault-0 -- vault secrets enable -path=internal kv-v2
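To confirm the engine was mounted, you can list the enabled secrets engines; the internal/ path should appear with type kv:
$ kubectl exec vault-0 -- vault secrets list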
Enabling Kubernetes Auth module
The Kubernetes authentication method isn't enabled by default, so enable it by running:
$ kubectl exec vault-0 -- vault auth enable kubernetes
Now configure the method by writing its configuration into Vault. First, open a shell in one of the Vault cluster pods:
$ kubectl exec -it vault-0 -- /bin/sh
Then run this command to write the auth configuration, where:
token_reviewer_jwt: a service account JWT used to access the TokenReview API to validate other JWTs during login.
kubernetes_host: URL to the base of the Kubernetes API server.
kubernetes_ca_cert: PEM-encoded CA certificate used by the TLS client to talk to the Kubernetes API.
$ vault write auth/kubernetes/config \
token_reviewer_jwt="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
kubernetes_host="https://$KUBERNETES_PORT_443_TCP_ADDR:443" \
kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
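Still inside the pod, you can read the configuration back to make sure it was stored as expected, for example:
$ vault read auth/kubernetes/config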
Create a policy. This policy grants read capability on the path internal/data/database/config:
$ vault policy write app - <<EOF
path "internal/data/database/config" {
capabilities = ["read"]
}
EOF
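You can check the stored policy with:
$ vault policy read app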
Create a role that binds the policy just created:
$ vault write auth/kubernetes/role/app \
bound_service_account_names=app \
bound_service_account_namespaces=default \
policies=app \
ttl=24h
This role authorizes the "app" service account in the default namespace and attaches the app policy to it.
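To verify the binding, the role can be read back, for example:
$ vault read auth/kubernetes/role/app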
Now exit from the pod:
$ exit
Deploying an application
Add a username/password secret to vault that will be used by the application:
$ kubectl exec vault-0 -- vault kv put internal/database/config username="db-readonly-username" password="db-secret-password"
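You can read the secret back to confirm it was written. Note that the kv put/get path omits the data/ segment, which only appears in the API path used by policies and annotations:
$ kubectl exec vault-0 -- vault kv get internal/database/config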
Create a service account named app:
$ kubectl create serviceaccount app
To enable secret injection inside the application container, some annotations are needed:
vault.hashicorp.com/agent-inject: "true": enables the agent injection.
vault.hashicorp.com/role: "app": the role to use when authenticating.
vault.hashicorp.com/agent-pre-populate-only: "true": uses an init container instead of a sidecar.
vault.hashicorp.com/agent-inject-secret-database-config.txt: "internal/data/database/config": the secret to be injected. The file containing the secret will be named database-config.txt and placed under the path /vault/secrets.
orgchart.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orgchart
  namespace: default
  labels:
    app: orgchart
spec:
  selector:
    matchLabels:
      app: orgchart
  replicas: 1
  template:
    metadata:
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/role: "app"
        vault.hashicorp.com/agent-pre-populate-only: "true"
        vault.hashicorp.com/agent-inject-secret-database-config.txt: "internal/data/database/config"
      labels:
        app: orgchart
    spec:
      serviceAccountName: app
      containers:
        - name: orgchart
          image: jweissig/app:0.0.1
Create the orgchart deployment:
$ kubectl create -f orgchart.yaml
deployment.apps/orgchart created
$ kubectl get pods -l app=orgchart
NAME                        READY   STATUS    RESTARTS   AGE
orgchart-6dbb599c46-rmn9t   1/1     Running   0          32s
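If the pod does not become Ready, the injector's init container logs are the first place to look. With the pre-populate annotation, the injected init container is typically named vault-agent-init (name assumed here):
$ kubectl logs orgchart-6dbb599c46-rmn9t -c vault-agent-init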
Check the data injected inside the container:
$ kubectl exec orgchart-6dbb599c46-rmn9t -c orgchart -- cat /vault/secrets/database-config.txt
data: map[password:db-secret-password username:db-readonly-username]
metadata: map[created_time:2021-10-07T19:59:55.801928567Z deletion_time: destroyed:false version:1]
The data isn't very legible and isn't useful in the format it is displayed in. This can be fixed using templating. The following patch adds a templating annotation that transforms the data into a more usable format:
orgchart-patch.yaml
spec:
  template:
    metadata:
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/role: "app"
        vault.hashicorp.com/agent-pre-populate-only: "true"
        vault.hashicorp.com/agent-inject-secret-database-config.txt: "internal/data/database/config"
        vault.hashicorp.com/agent-inject-status: "update"
        vault.hashicorp.com/agent-inject-template-database-config.txt: |
          {{- with secret "internal/data/database/config" -}}
          postgresql://{{ .Data.data.username }}:{{ .Data.data.password }}@postgres:5432/wizard
          {{- end -}}
After patching the deployment, the template will be applied and the rendered secret will be updated:
$ kubectl patch deploy orgchart --patch-file orgchart-patch.yaml
deployment.apps/orgchart patched
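Patching the pod template triggers a new rollout, so the pod name changes. One way to wait for the rollout and find the new pod name before reading the secret again:
$ kubectl rollout status deploy/orgchart
$ kubectl get pods -l app=orgchart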
$ kubectl exec orgchart-54d575974b-5ncfl -c orgchart -- cat /vault/secrets/database-config.txt
postgresql://db-readonly-username:db-secret-password@postgres:5432/wizard
This post can serve as an example or a reference for similar work. The Vault documentation linked below gives a more in-depth look at the available options and configuration modes:
https://www.vaultproject.io/docs
Have a Good Vaulting