Securing Secrets in Your Kubernetes Cluster with Vault

“Three may keep a secret, if two of them are dead.”
― Benjamin Franklin, Poor Richard's Almanack

Secret storage and delivery to applications can be solved in many ways, all lying somewhere on a spectrum of security, convenience, and sophistication. Until the advent of secrets management services like Vault, many of the most common methods followed no real standard approach or best practice.

Modern secrets management technologies leave us no excuse not to take a more mature stance on the storage and distribution of secrets. Finding the right method for your team is essential to encouraging best practices. For us, the opportunity to level up and adopt Vault came when we were migrating to Kubernetes.

Our goal was to set up a secrets storage infrastructure within Kubernetes that is both secure and easy to use.

We found that while there was plenty of documentation for each component, practical guides to a full setup within Kubernetes were lacking. Filling that gap is the aim of this blog post.

Prerequisites

These instructions assume that you have a Kubernetes cluster and a fundamental grasp of volumes, config maps, and secrets. You should also be able to bring up a Vault server on the cluster, edit its config, and persist it.

Architecture Overview

The goals of our architecture are as follows:

  1. Secrets should be served on our cluster internally.
  2. Secrets should be encrypted at rest and in transit.
  3. All applications deployed to the infrastructure should be granted access to the secrets they need automatically and securely.

Note that the first point implies we will not be exposing Vault to the outside world. This was the best solution for us because all of our applications that use secrets live within the same Kubernetes cluster, so there was no need to expose Vault and increase the attack surface of the architecture.

Securing Vault Communications with HTTPS

Bootstrapping Vault as a CA

Since we want internal communications with Vault to be secured, we must have the certificates signed by a Certificate Authority (CA). The hard part was getting set up without access to an external CA. Fortunately, we can use Vault's ability to act as a CA to sign its own certificates as part of a bootstrapping process. This was a largely undocumented process, so we leave the following instructions to lend later wanderers some clarity.

Expectations

At the end of this process you should have an instance of Vault running over HTTPS, with its certificate signed by its own CA.

Instructions

  • Start Vault with TLS disabled (this is safe initially,
    because we aren't yet using it to store secrets). Your config will look something like this:
listener "tcp" {
  address = "localhost:8200"
  tls_disable = 1
}

storage "file" {
  path = "/vault/data"
}
  • Mount its pki backend.
  • Create a root certificate.
  • Use that to sign an intermediate CA.
  • Store that chain as the CA certificate for later use by services.
  • Use that to sign a CSR for the Vault service itself.
  • Store the private key and newly signed certificate in the Vault config volume
    mount (this is the only place where we have a secret stored within our
    infrastructure that is not stored in Vault).

We automated these last steps with a script (adapted from here). Note that in the following, vault.default.svc.cluster.local is the DNS name given to the vault service in the default namespace. This hostname will always be of the form [your_service_name].[your_namespace].svc.cluster.local:

# We need to point the Vault client to the endpoint over plain HTTP for now
export VAULT_ADDR=http://127.0.0.1:8200

# Create a Root CA that expires in 10 years:
vault mount -path=root-ca -max-lease-ttl=87600h pki

# Generate the root certificate:
vault write root-ca/root/generate/internal common_name="Root CA" ttl=87600h exclude_cn_from_sans=true

# Set up the URLs:
vault write root-ca/config/urls issuing_certificates="https://vault.default.svc.cluster.local:8200/v1/root-ca/ca" \
    crl_distribution_points="https://vault.default.svc.cluster.local:8200/v1/root-ca/crl"

# Create the Intermediate CA that expires in 5 years:
vault mount -path=intermediate-ca -max-lease-ttl=43800h pki

# Generate a Certificate Signing Request:
vault write -format=json intermediate-ca/intermediate/generate/internal \
    common_name="Intermediate CA" ttl=43800h exclude_cn_from_sans=true \
    | jq -r .data.csr > intermediate.csr

# Ask the Root to sign it:
vault write -format=json root-ca/root/sign-intermediate \
    csr=@intermediate.csr use_csr_values=true exclude_cn_from_sans=true \
    | jq -r .data.certificate > signed.crt

# Send the stored certificate back to Vault:
vault write intermediate-ca/intermediate/set-signed certificate=@signed.crt

# Set up URLs:
vault write intermediate-ca/config/urls issuing_certificates="https://vault.default.svc.cluster.local:8200/v1/intermediate-ca/ca" \
    crl_distribution_points="https://vault.default.svc.cluster.local:8200/v1/intermediate-ca/crl"

# Create a role that permits the Vault Server to generate certificates
# signed by the intermediate CA
vault write intermediate-ca/roles/vault \
      allowed_domains="vault.default.svc.cluster.local" \
      allow_subdomains="true" allow_bare_domains="true" max_ttl="72h"

# Issue the newly signed cert for the vault server
vault write intermediate-ca/issue/vault \
      common_name=vault.default.svc.cluster.local

You'll need to combine the root and intermediate CA certs to create the overall CA cert chain which will be used to verify Vault's cert later. It will look something like this:

-----BEGIN CERTIFICATE-----
KIIMass... (root CA cert)
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
KIICa12... (intermediate CA cert)
-----END CERTIFICATE-----

Store this file as we'll need it later.

The private key from the final command should also be stored, along with the newly issued certificate with the CA chain appended after it (the issued certificate must come first so that Vault's TLS listener can match it to the key). It will look something like this:

-----BEGIN CERTIFICATE-----
KII12va... (TLS cert)
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
KIICa12... (intermediate CA cert)
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
KIIMass... (root CA cert)
-----END CERTIFICATE-----
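
Rather than copying the pieces out of the terminal by hand, you can capture them directly. The following is a rough sketch (the file names issued.json, ca-chain.pem, cert.pem, and cert.key are our own and should match whatever your Vault config expects); it re-runs the issue step with JSON output and assembles the files with jq:

# Re-issue the server certificate with JSON output so each piece can be captured
vault write -format=json intermediate-ca/issue/vault \
    common_name=vault.default.svc.cluster.local > issued.json

# The server's private key
jq -r .data.private_key issued.json > cert.key

# The CA chain clients will use to verify Vault (root followed by intermediate)
vault read -format=json root-ca/cert/ca | jq -r .data.certificate > ca-chain.pem
vault read -format=json intermediate-ca/cert/ca | jq -r .data.certificate >> ca-chain.pem

# The certificate Vault will serve: the newly issued certificate first,
# then the intermediate and root certificates
jq -r .data.certificate issued.json > cert.pem
vault read -format=json intermediate-ca/cert/ca | jq -r .data.certificate >> cert.pem
vault read -format=json root-ca/cert/ca | jq -r .data.certificate >> cert.pem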
  • Restart Vault with TLS enabled and the config pointing at the key and certificate. You'll need to update the config maps and secrets in your Kubernetes deployment accordingly. Note that the listener now binds to 0.0.0.0 so that other pods can reach Vault through its Kubernetes Service. Set the Vault config to look something like this:
listener "tcp" {
  address = "0.0.0.0:8200"
  tls_cert_file = "/vault/config/cert.pem"
  tls_key_file = "/vault/config/cert.key"
}

storage "file" {
  path = "/vault/data"
}

Now Vault is up and running over HTTPS!
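
As a quick sanity check (a sketch; the CA chain path below is illustrative), point the Vault client at the service over HTTPS from a container inside the cluster and ask for its status:

export VAULT_ADDR=https://vault.default.svc.cluster.local:8200
export VAULT_CACERT=/path/to/ca-chain.pem
vault status

If the listener and chain are set up correctly, vault status reports the seal status without any certificate errors.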

Service Communication with Vault

For apps to access the secrets they need within Vault, they must be able to authenticate with the Vault server. Managing authentication for each and every application we deploy threatens to be a difficult task. It needn't be: with kubernetes-vault it can be achieved very easily, while maintaining the expressiveness and rigor of Vault's policies.

kubernetes-vault works by supplying newly deployed applications with a token they can use to access the secrets they need. This is achieved through the use of Init Containers. kubernetes-vault is reasonably straightforward to set up, and its quick-start guide will walk you through setting it up with Vault running without HTTPS.

When we're done with this, we should have an infrastructure that looks like the following diagram (taken from the kubernetes-vault GitHub repo):

kubernetes-vault infrastructure

Starting kubernetes-vault

You'll first need to create a new role for kubernetes-vault and set its policy. Get a shell in your Vault container and execute the following to point the client at the new TLS-secured Vault instance:

export VAULT_ADDR=https://127.0.0.1:8200

Also create a file called signed.crt containing the CA certificate chain we created earlier, and let the Vault client know where it is:

export VAULT_CACERT=signed.crt

Now create a new role for kubernetes-vault:

vault policy-write kubernetes-vault policies/[your-kubernetes-vault-policy].hcl
vault write auth/token/roles/kubernetes-vault allowed_policies=kubernetes-vault period=6h
vault token-create -format=json -role=kubernetes-vault | jq -r .auth.client_token

Where [your-kubernetes-vault-policy].hcl is a path to a policy containing something like the following:

path "pki/issue/kubernetes-vault" {
  capabilities = ["update"]
}

path "auth/approle/role/*" {
  capabilities = ["update"]
}

path "auth/token/roles/kubernetes-vault" {
  capabilities = ["read"]
}

Note down the token output here; we'll need it shortly.

This allows kubernetes-vault to create new tokens for approles, which it passes to newly created Init Containers so that they can write the token to a volume. If you want the policy to be stricter, you can replace the wildcard in the second path with one path per app that you want kubernetes-vault to be able to issue tokens for. We knew that, for now, we would only be using approles for this specific purpose, so the wildcard suits us just fine.

We are now ready to prepare our deployment of kubernetes-vault, using the example deployment in the kubernetes-vault repo as a starting point (found here). We are not currently using Prometheus for monitoring, so we removed the prometheus entry from the kubernetes-vault.yml config, along with the private key and certificate Prometheus would use to serve over HTTPS. This leaves the ca.crt file, which should contain the contents of the signed.crt file we created earlier.

You will also need to change the vault token to the token that was output earlier.

Now kubernetes-vault will be configured and can be brought up with kubectl apply -f [path-to-kubernetes-vault-deployment].yaml.
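
As a quick check (pod names depend on the manifest you used), make sure the controller pod comes up and look through its logs for errors:

kubectl get pods
kubectl logs [kubernetes-vault-pod-name]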

Bringing up an Application

In order to have your application be supplied with a token by kubernetes-vault, you'll need to create a policy for it. It should look something like this:

path "secret/[app-role-name]/*" {
  capabilities = ["read"]
}

path "auth/token/lookup-self" {
  capabilities = ["read"]
}

This policy should be written as before, with:

vault policy-write [app-role-name] policies/[your-app-policy].hcl
vault write auth/token/roles/[app-role-name] allowed_policies=[app-role-name] period=6h
vault token-create -format=json -role=[app-role-name] | jq -r .auth.client_token

Copy the token output by the last command into the VAULT_ROLE_ID environment variable of the initContainer.

In the deployment YAML for your application, you'll need a volume and a kubernetes-vault Init Container to supply your application container with the token it needs to authenticate with Vault. You can see an example of this in the kubernetes-vault repo here.

Within your application, you should read the token in the volume mount and use a Vault client to perform token authentication with the Vault server. You'll again need to supply the CA cert to your app so that the client knows to trust the server cert.

You should store your secrets for that app under secret/[app-role-name]. The lookup-self policy is necessary for the application to use its token.
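
For example, with the generic secret backend used here, each secret lives at its own path under the app's prefix, with the data stored under a value key (the secret name and value below are placeholders):

# Store a secret the app is allowed to read, one key per path
vault write secret/[app-role-name]/SECRET_KEY value="some-secret-value"

# Read it back to check
vault read secret/[app-role-name]/SECRET_KEY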

Conclusion

Recap

Congratulations! You now have a secure, easy-to-use secrets infrastructure that your Kubernetes apps can use to access the secrets they need. To recap: we configured Vault to run on Kubernetes over HTTPS, used kubernetes-vault to supply each application with a token for accessing its own secrets, and created a policy for a new application.

It is now up to you to have your application actually read the secrets it needs from Vault, using whichever method best suits it. You will also need to renew the token relatively frequently, as we set its TTL (time to live) to six hours.
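
Renewal can be driven from the application itself through its Vault client, or manually with the CLI. As a rough sketch using the (pre-0.9) CLI syntax used throughout this post (newer CLIs use vault token renew instead; the token value is a placeholder):

# Renew the app's token before its six-hour TTL expires
vault token-renew [the-app-token]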

Since we are using Django for some of our apps, we use the hvac package. Authenticating with Vault is as simple as constructing an instance of hvac.Client using the token read from the volume mount supplied by kubernetes-vault, pointing it at the CA chain we saved earlier so it trusts Vault's certificate:

import json
import hvac

# verify= points at the CA chain saved earlier (the path here is illustrative)
with open('/var/run/secrets/boostport.com/vault-token') as json_token_file:
    json_token = json.load(json_token_file)
    secrets = hvac.Client('https://vault.default.svc.cluster.local:8200',
                          token=json_token['clientToken'], verify='/path/to/ca-chain.pem')

SECRET_KEY = secrets.read('secret/yn-api/SECRET_KEY')['data']['value']

Improvements

Usage of Vault and management of secrets can be made even easier by introducing a user interface to Vault. This allows users, through the use of the Authentication Backends of Vault, to log in and view secrets in a web UI. Their access can be limited through the use of policies. Two options for a Vault user interface are Goldfish and vault-ui. Both are fairly active, and we are using vault-ui for now. We might expand on this in a future part!

Another potential improvement is to implement a Vault client specific to Django settings, allowing settings to be imported and used throughout the application without the messy syntax in the code example above. One such work in progress can be found in our GitHub repo.