Using confd to Inject Secrets into Kubernetes Pods

Introduction

Whilst using Kubernetes over the past few months, one challenge I repeatedly faced was getting secrets - such as passwords, SSH keys or certificate keys - securely into applications running on Kubernetes.

Whilst this is quite easy if the container image is under your full control, achieving it with an ‘off the shelf’ image is a little trickier.

One tool I came across recently is confd, which has helped a lot with this challenge; below I will outline how.

confd Basics

confd is a tool for rendering configuration files from predefined templates using values (secrets) that are stored in a backend. A backend could be etcd, Amazon SSM Parameter Store, HashiCorp Vault or many others.

The examples below use the Amazon SSM Parameter Store backend. For Kubernetes clusters running on AWS this works really well, as Amazon IAM roles can be used, removing the need to store backend credentials anywhere.
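
For orientation, a one-off render on a machine or container with AWS credentials available looks roughly like this - a sketch, assuming confd's default configuration directory of /etc/confd:

$ export AWS_DEFAULT_REGION=eu-west-1
$ confd -onetime -backend ssm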

I won’t go into too much detail on the basics of confd here; the official confd documentation covers them well.

confd Image

First, a Docker image that has confd available is required. At the time of writing I could not find an official image, so I baked my own. This should be a simple and small image, based on something like Alpine Linux, with only confd installed and not much else.

Since creating this guide, better ways of handling secrets in Kubernetes have become available, such as CSI drivers and managing Kubernetes Secrets with Terraform. Because of this, and the fact that confd has not had a release since 2018, the pre-baked images are no longer available.

Example 1: Injecting Secrets into Environment

Many ‘off the shelf’ images allow for loading secrets from environment variables. One example of this is Grafana.

Starting Example

Let’s start with injecting secrets as simply as possible - plain text in the deployment spec:

- name: grafana
  imagePullPolicy: IfNotPresent
  image: "grafana/grafana:latest"
  env:
  - name: GF_SECURITY_ADMIN_USER
    value: admin
  - name: GF_SECURITY_ADMIN_PASSWORD
    value: supersecurepassword123

We want the Grafana container to pick up the secrets above by itself, without us having to maintain a custom Grafana image.

Add the Secrets to Amazon SSM Parameter Store

Add the two secrets to the Amazon SSM Parameter Store. This can be done through the AWS console, or via the CLI as shown below.

  • /grafana-username: the Grafana administrator's username
  • /grafana-password: the Grafana administrator's password
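
If you prefer the CLI over the console, adding them would look something like this (the values shown are the placeholders from the starting example; pass --key-id if the parameters should be encrypted with a specific KMS key):

$ aws ssm put-parameter --name "/grafana-username" --type SecureString --value "admin"
$ aws ssm put-parameter --name "/grafana-password" --type SecureString --value "supersecurepassword123"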

Create an Amazon IAM Role

To allow the containers to access the SSM Parameters, they need to be granted access by IAM.

In addition, access to decrypt using the Amazon KMS key used to encrypt the parameters will also need to be granted.

Example (do not copy and paste!):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ssm:GetParameter"
            ],
            "Resource": [
              "arn:aws:ssm:eu-west-1:123456123:parameter/grafana-username",
              "arn:aws:ssm:eu-west-1:123456123:parameter/grafana-password"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "kms:Decrypt"
            ],
            "Resource": "arn:aws:kms:eu-west-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"
        }
    ]
}

confd Configurations

confd uses TOML configuration files to define what you want it to process. The TOML below will process the template grafana.env.tmpl (defined later) and write the output to /shared-config/grafana.env with mode 0400. As Grafana runs as UID:GID 472:472 by default, we make sure the environment file is owned by the same user and group.

apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-confd-configs
  namespace: monitoring
data:
  grafana.env.toml: |
    [template]
    src    = "grafana.env.tmpl"
    dest   = "/shared-config/grafana.env"
    uid    = 472
    gid    = 472
    mode   = "0400"
    keys   = [
      "/grafana-username",
      "/grafana-password"
    ]    

confd Templates

The templates are the configuration files to render. As we want to set environment variables, the following template works well:

apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-confd-templates
  namespace: monitoring
data:
  grafana.env.tmpl: |
    export GF_SECURITY_ADMIN_USER="{{     getv "/grafana-username" }}"
    export GF_SECURITY_ADMIN_PASSWORD="{{ getv "/grafana-password" }}"    
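
With the placeholder values from the starting example, the rendered /shared-config/grafana.env would end up looking something like this:

export GF_SECURITY_ADMIN_USER="admin"
export GF_SECURITY_ADMIN_PASSWORD="supersecurepassword123"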

Override Launcher

A new launcher script should be created for the main container (Grafana in this example). It should load the rendered environment variables and then start the original entry point.

Please note the following:

  • Always use . instead of source. A lot of containers do not have a full bash shell.
  • Always exec to start the original entry point - so that it remains as PID 1.
  • /run.sh is the original entry point of the Grafana image.
  • Make sure "${@}" is passed to the original entry point, so arguments still work.

To find the original entry point of an image, download the image with docker pull and then use docker inspect to find the entry point.
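
For example, for the Grafana image used here (the inspect output should show the /run.sh entry point mentioned above):

$ docker pull grafana/grafana:latest
$ docker inspect --format '{{ .Config.Entrypoint }} {{ .Config.Cmd }}' grafana/grafana:latest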

apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-launcher
  namespace: monitoring
data:
  launcher.sh: |
    #!/bin/bash -e
    ###############################################################################

    echo ":: Loading extra environment variables..."
    . "/shared-config/grafana.env"

    ###############################################################################

    echo ":: Launching Grafana..."
    exec "/run.sh" "${@}"

    ###############################################################################    

Modifying the Deployment

The final step is to make the Grafana deployment run confd based on the supplied configuration before Grafana is started. To do that we use an initContainer.

Notice:

  • We set the region to eu-west-1, but you need to set this to the region in which your SSM parameters are stored.
  • We have three volume mounts:
    • grafana-shared-config is a shared emptyDir volume for the main Grafana container and the confd initContainer.
    • grafana-confd-configs will refer to the confd configurations configuration map defined above.
    • grafana-confd-templates will refer to the confd templates configuration map defined above.

initContainers:
  - name: grafana-confd
    image: "rlees85/secrets-loader:latest"
    command: [ 'confd', '-onetime', '-backend', 'ssm' ]
    env:
      - name: AWS_DEFAULT_REGION
        value: eu-west-1
    volumeMounts:
      - name: grafana-shared-config
        mountPath: /shared-config
      - name: grafana-confd-configs
        mountPath: /etc/confd/conf.d
      - name: grafana-confd-templates
        mountPath: /etc/confd/templates

The grafana-shared-config and grafana-launcher mounts should be added to the main Grafana container:

- name: grafana-shared-config
  mountPath: /shared-config
- name: grafana-launcher
  mountPath: /launcher

All volumes should be correctly defined in the deployment. Please note that in the particular deployment used in this example, grafana-config was already present.

volumes:
- name: grafana-config
  configMap:
    name: grafana
- name: grafana-shared-config
  emptyDir: {}
- name: grafana-confd-configs
  configMap:
    defaultMode: 0400
    name: grafana-confd-configs
- name: grafana-confd-templates
  configMap:
    defaultMode: 0400
    name: grafana-confd-templates
- name: grafana-launcher
  configMap:
    defaultMode: 0500
    name: grafana-launcher

The initContainer needs access to the Amazon SSM Parameter Store. Make sure kube2iam is configured on the Kubernetes cluster and add the appropriate annotation to the Grafana deployment.

annotations:
  iam.amazonaws.com/role: grafana

Finally, we can override the Grafana container's start-up command to use the new launcher script:

- name: grafana
  imagePullPolicy: IfNotPresent
  image: "grafana/grafana:latest"
  command: [ '/launcher/launcher.sh' ]

Conclusion

The confd initContainer now runs before Grafana starts and outputs the templated secrets to shared storage. The main Grafana container then sources these secrets from shared storage before running the original image entry point.

$ kubectl -n monitoring logs grafana-7646488856-4f4gx -c grafana-confd
2018-08-06T19:34:55Z grafana-7646488856-4f4gx confd[1]: INFO Backend set to ssm
2018-08-06T19:34:55Z grafana-7646488856-4f4gx confd[1]: INFO Starting confd
2018-08-06T19:34:55Z grafana-7646488856-4f4gx confd[1]: INFO Backend source(s) set to
2018-08-06T19:34:56Z grafana-7646488856-4f4gx confd[1]: INFO Target config /shared-config/grafana.env out of sync
2018-08-06T19:34:56Z grafana-7646488856-4f4gx confd[1]: INFO Target config /shared-config/grafana.env has been updated

$ kubectl -n monitoring logs grafana-7646488856-4f4gx
t=2018-08-06T19:35:07+0000 lvl=info msg="Starting Grafana" logger=server version=5.2.1 commit=2040f61 compiled=2018-06-29T09:17:46+0000

...

t=2018-08-06T19:35:07+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_SECURITY_ADMIN_USER=admin"
t=2018-08-06T19:35:07+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_SECURITY_ADMIN_PASSWORD=*********"

...

Example 2: Rendering Configuration Files and/or Keys

Please read through example 1 first. A lot of things will not be covered again and are assumed to be already set up (SSM parameters, KMS keys and IAM permissions).

In this example we have a much more complicated application that requires secrets to be loaded into its configuration files. Additionally, the application integrates with other services - and therefore needs an SSH private key to be injected at run-time.

Starting Example

In this example, the deployment spec contains no secrets; the secrets are baked directly into the image. This may be undesirable, for example if the image has to pass through a pipeline - developers perhaps should not have access to production secrets.

Let’s say the following file is baked directly into the image:

$ cat /etc/application.d/50-config.properties
mysql.db.username=application
mysql.db.password=application123
integration.ssh-key=/etc/application/ssh.pem

NOTE: When loading multi-line parameters (such as SSH keys) into the Amazon SSM Parameter Store, use the CLI and not the console! If the console is used, new lines are lost.
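
For example, loading the key used later in this example from a local file (the local file name is illustrative):

$ aws ssm put-parameter --name "/application-integration-key" \
    --type SecureString --value "file://integration-key.pem"

The file:// prefix makes the CLI read the parameter value from the file, preserving new lines.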

confd Configurations

In a similar fashion to the first example, we render a configuration file and an SSH key from templates to shared storage.

apiVersion: v1
kind: ConfigMap
metadata:
  name: application-confd-configs
data:
  99-secrets.properties.toml: |
    [template]
    src    = "99-secrets.properties.tmpl"
    dest   = "/shared-config/99-secrets.properties"
    mode   = "0400"
    keys   = [
      "/application-db-username",
      "/application-db-password"
    ]    
  integration-key.pem.toml: |
    [template]
    src    = "integration-key.pem.tmpl"
    dest   = "/shared-config/integration-key.pem"
    mode   = "0400"
    keys   = [
      "/application-integration-key"
    ]    

confd Templates

As before, the templates referred to by the TOML configurations are defined below:

apiVersion: v1
kind: ConfigMap
metadata:
  name: application-confd-templates
data:
  99-secrets.properties.tmpl: |
    mysql.db.username={{ getv "/application-db-username" }}
    mysql.db.password={{ getv "/application-db-password" }}
    integration.ssh-key=/shared-config/integration-key.pem    
  integration-key.pem.tmpl: |
    {{ getv "/application-integration-key" }}    

Override Launcher

As in the first example, a new launcher script should be created for the main container. The script should place the rendered configuration files in a folder where the application will pick them up.

The extra configuration file already points to the rendered SSH key so no further action is required for the key.

apiVersion: v1
kind: ConfigMap
metadata:
  name: application-launcher
data:
  launcher.sh: |
    #!/bin/bash -e
    ###############################################################################

    echo ":: Loading extra configuration files..."
    find "/shared-config" -maxdepth 1 -type f -name "*.properties" -exec cp -sfv {} "/etc/application.d/" \;

    ###############################################################################

    echo ":: Launching Application..."
    exec "/opt/startup/startup.sh" "${@}"

    ###############################################################################    

If the image you are working with does not have find, there are many other ways to achieve the same thing.
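
For example, a plain shell glob in the launcher does the same job - a sketch, assuming the same paths as above:

for file in /shared-config/*.properties; do
  if [ -e "${file}" ]; then
    cp -fv "${file}" "/etc/application.d/"
  fi
done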

Modifying the Deployment

The deployment needs to be modified the same way as in example 1 above.

Conclusion

This shows that even complicated configurations can be set up with confd whilst still using off-the-shelf images.
