
Business Information Technology Services

Consultancy and engineering expertise in cloud computing, infrastructure automation and software development

Scalable Ways to Manage Terraform Remote State

There are many guides on the Internet for getting started with Terraform, and even for setting up remote state. However, very few (if any) suggest ways of setting it up that scale. Normally, such guides declare the remote state statically in the Terraform code, which prevents the code from being easily re-used across different environments without duplicating all of it.

The purpose of this post is to put forward some ways that I have used myself or seen used over the past few years.

This post will focus on the AWS S3 remote backend, but the concepts will apply to others as well, such as the Azure storage account (azurerm) backend.
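As an aside, the equivalent backend block for Azure looks very similar; the resource group, storage account and container names below are hypothetical, shown purely for comparison:

```hcl
terraform {
  backend "azurerm" {
    resource_group_name  = "example-terraform-state"
    storage_account_name = "exampletfstate"
    container_name       = "tfstate"
    key                  = "object-store/terraform.tfstate"
  }
}
```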

Starting Point

  • Both Terraform and the AWS CLI tool are installed.

  • An S3 bucket exists that can be used for Terraform remote state.

  • The current AWS CLI tool profile has read and write access to the Terraform remote state bucket.

  • There is some Terraform code that needs to be deployed to multiple environments. In this example the following code is used as a starting point. Everything is in a single file to keep the example simple, and the remote backend is statically defined, as in most examples on the Internet:

s3.tf

###############################################################################

terraform {
  required_version = ">= 1.0.0"

  backend "s3" {
    key     = "object-store/terraform.state"
    bucket  = "example-terraform-state-bitservices"
    region  = "eu-west-1"
    encrypt = true
  }

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0" # the inline "acl" and encryption arguments below require the v3 provider
    }
  }
}

###############################################################################

provider "aws" {
  region = "eu-west-1"
}

###############################################################################

variable "account"     { default = "bitservices" }
variable "environment" { default = "default"     }

###############################################################################

variable "encryption_key"  { default = null     }
variable "encryption_type" { default = "AES256" }

###############################################################################

variable "acl"           { default = "private"      }
variable "service"       { default = "object-store" }
variable "force_destroy" { default = false          }

###############################################################################

locals {
  name           = format("%s-%s-%s", var.service, var.environment, var.account)
  encryption_key = var.encryption_type == "aws:kms" ? var.encryption_key : null
}

###############################################################################

resource "aws_s3_bucket" "scope" {
  acl           = var.acl
  bucket        = local.name
  force_destroy = var.force_destroy

  tags = {
    Name        = local.name
    Account     = var.account
    Service     = var.service
    Environment = var.environment
  }

  dynamic "server_side_encryption_configuration" {
    for_each = lower(var.encryption_type) == "none" ? [] : tolist([var.encryption_type])

    content {
      rule {
        apply_server_side_encryption_by_default {
          sse_algorithm     = server_side_encryption_configuration.value
          kms_master_key_id = local.encryption_key
        }
      }
    }
  }
}

###############################################################################

Please note: This example does not include state locking or the use of Terraform modules, to keep the post on-topic and as short as possible.
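For reference, state locking on the S3 backend is typically a one-line addition to the backend block, assuming a DynamoDB table with a LockID partition key already exists (the table name below is hypothetical):

```hcl
  backend "s3" {
    key            = "object-store/terraform.state"
    bucket         = "example-terraform-state-bitservices"
    region         = "eu-west-1"
    encrypt        = true
    dynamodb_table = "example-terraform-locks" # hypothetical lock table
  }
```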

Option 1: Workspaces

One of the simplest ways to make some Terraform code re-usable across different environments is to use Terraform Workspaces.

The easiest way to use workspaces like this is to ensure each resource identifier includes the workspace name. Settings, such as instance sizes or whether encryption is enabled, can be defined in maps with the workspace names as keys. This allows settings to be looked up based on the currently selected workspace.

The example Terraform code above modified to work with multiple workspaces could look something like this:

###############################################################################

terraform {
  required_version = ">= 1.0.0"

  backend "s3" {
    key                  = "object-store/terraform.state"
    bucket               = "example-terraform-state-bitservices"
    region               = "eu-west-1"
    encrypt              = true
    workspace_key_prefix = "object-store-env"
  }

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0" # the inline "acl" and encryption arguments below require the v3 provider
    }
  }
}

###############################################################################

provider "aws" {
  region = "eu-west-1"
}

###############################################################################

variable "account" { default = "bitservices" }

###############################################################################

variable "encryption_key"  { default = null }
variable "encryption_type" { default = {
  "default" = "none",
  "prod"    = "AES256"
}}

###############################################################################

variable "acl"           { default = "private"      }
variable "service"       { default = "object-store" }
variable "force_destroy" { default = false          }

###############################################################################

locals {
  name            = format("%s-%s-%s", var.service, local.environment, var.account)
  environment     = terraform.workspace
  encryption_key  = local.encryption_type == "aws:kms" ? var.encryption_key : null
  encryption_type = lookup(var.encryption_type, local.environment, "none")
}

###############################################################################

resource "aws_s3_bucket" "scope" {
  acl           = var.acl
  bucket        = local.name
  force_destroy = var.force_destroy

  tags = {
    Name        = local.name
    Account     = var.account
    Service     = var.service
    Environment = local.environment
  }

  dynamic "server_side_encryption_configuration" {
    for_each = lower(local.encryption_type) == "none" ? [] : tolist([local.encryption_type])

    content {
      rule {
        apply_server_side_encryption_by_default {
          sse_algorithm     = server_side_encryption_configuration.value
          kms_master_key_id = local.encryption_key
        }
      }
    }
  }
}

###############################################################################

Please note: normally it is best to enable encryption for all environments; it is disabled for the default workspace here purely for the sake of the example.

Make sure Terraform has been initialised:

$ terraform init

Initializing the backend...

Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.

Initializing provider plugins...
...

Then, to create a new workspace for production, run:

$ terraform workspace new prod
Created and switched to workspace "prod"!

You're now on a new, empty workspace. Workspaces isolate their state,
so if you run "terraform plan" Terraform will not see any existing state
for this configuration.

Now, when running a plan or apply in this workspace, a separate, uniquely named S3 bucket will be managed.

Using workspaces still relies on a statically defined remote backend key; however, the state for each non-default workspace is automatically stored under a prefix built from the workspace_key_prefix setting and the workspace name.
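The resulting S3 key layout can be sketched as follows (a pure-shell illustration of the naming scheme used by the backend configuration above; Terraform computes this internally):

```shell
#!/bin/bash
# Illustration of where the S3 backend stores state per workspace.
# The "default" workspace uses the configured key directly; all other
# workspaces are nested under workspace_key_prefix/<workspace name>/.
key="object-store/terraform.state"
workspace_key_prefix="object-store-env"

state_key() {
  if [ "${1}" = "default" ]; then
    echo "${key}"
  else
    echo "${workspace_key_prefix}/${1}/${key}"
  fi
}

state_key "default" # prints: object-store/terraform.state
state_key "prod"    # prints: object-store-env/prod/object-store/terraform.state
```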

Option 2: Simple Wrapper Script

Another way of splitting out remote state is to mirror the remote storage backend with the local file system. This is most useful when there are lots of different pieces of Terraform within one repository, but not multiple environments for each individual piece.

For this to work with our example we would have to do the following:

  • Turn our workspace into a Git repository, if it is not already one:
$ git init
  • Move our s3.tf file into its own folder. This is based on the original s3.tf file and NOT the one modified to work with workspaces:
$ mkdir s3
$ mv ./s3.tf ./s3/s3.tf
  • Remove the following line from the backend configuration in our s3.tf file, since we will be generating it with scripts:
    key     = "object-store/terraform.state"
  • Create a common.sh file with the content below. This file should NOT be executable as it will only ever be sourced:

common.sh

###############################################################################

if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
  echo "Please do not run this file directly!"
  exit 1
fi

###############################################################################

TF_BASE="$(git rev-parse --show-toplevel)"

###############################################################################

if [ -n "${TF_PREFIX}" ] && [ -d "${TF_PREFIX}" ]; then
  cd "${TF_PREFIX}"
  TF_PATH="$(pwd -P)"
else
  echo "Error: No Terraform folder specified or folder does not exist!"
  exit 1
fi

###############################################################################

if [[ "${TF_PATH}" != "${TF_BASE}"* ]]; then
  echo "Error: The folder given does not exist within the Git repository."
  exit 1
fi

###############################################################################

S3_STATE_FILENAME="terraform.tfstate"
S3_STATE_KEY="$(git rev-parse --show-prefix)${S3_STATE_FILENAME}"

###############################################################################

# TF_VAR_* variables must be exported to be visible to Terraform:
export TF_VAR_base="${TF_BASE}"
export TF_VAR_path="${TF_PATH}"

###############################################################################

terraform --version

###############################################################################

echo ""
echo ":: Base  : ${TF_BASE}"
echo ":: Path  : ${TF_PATH}"
echo ":: S3 Key: ${S3_STATE_KEY}"
echo ""

###############################################################################

terraform init --input=false --backend=true --backend-config="key=${S3_STATE_KEY}"

###############################################################################
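The key derivation above simply mirrors the folder layout inside the repository: git rev-parse --show-prefix returns the current folder's path relative to the repository root (with a trailing slash), and the state filename is appended. A minimal sketch with the git call stubbed out:

```shell
#!/bin/bash
# Sketch of the S3_STATE_KEY derivation from common.sh. In the real script
# the prefix comes from: git rev-parse --show-prefix
show_prefix="s3/" # stand-in for the git output when running against ./s3

S3_STATE_FILENAME="terraform.tfstate"
S3_STATE_KEY="${show_prefix}${S3_STATE_FILENAME}"

echo "${S3_STATE_KEY}" # prints: s3/terraform.tfstate
```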

  • Create a plan.sh file with the below content. This file SHOULD be executable as it will be directly called to do a Terraform plan:

plan.sh

#!/bin/bash -e
###############################################################################

set -o pipefail

###############################################################################

TF_PREFIX="${1}"

###############################################################################

source "./common.sh"

###############################################################################

terraform plan

###############################################################################

  • Based on plan.sh, create apply.sh, destroy.sh, etc.
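Since each sub-command script is identical apart from the final Terraform command, they could even be generated. A hypothetical sketch, run against a one-line stand-in for plan.sh in a temporary directory:

```shell
#!/bin/bash -e
# Generate apply.sh and destroy.sh from plan.sh by swapping the final
# Terraform command. Uses a stand-in file in a temporary directory.
cd "$(mktemp -d)"
printf 'terraform plan\n' > plan.sh # stand-in for the real plan.sh

for command in apply destroy; do
  sed "s/terraform plan/terraform ${command}/" plan.sh > "${command}.sh"
  chmod +x "${command}.sh"
done

cat apply.sh   # prints: terraform apply
cat destroy.sh # prints: terraform destroy
```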

From here, the shell scripts are used to call Terraform, with the remote state key set according to the local folder that Terraform is being run against.

For example:

$ ./apply.sh s3

This will create our S3 bucket and store the remote state under the S3 key: s3/terraform.tfstate.

If we created another folder called ec2 that had code to create an EC2 instance and called it with the same scripts:

$ ./apply.sh ec2

This will create the EC2 instance and store the remote state under the S3 key: ec2/terraform.tfstate.

Whilst fairly simple, this approach does have some drawbacks:

  • Management of a small shell script for each Terraform sub-command.

  • Difficult to manage environments for the same pieces of Terraform code without complex use of symlinks.

Option 3: Full Wrapper Library

Ultimately, this is where I have ended up, and where many organisations that use Terraform extensively are likely to end up too. A Terraform wrapper can be created that not only organises the remote state storage, but also manages Terraform binary versions, handles authentication with the cloud provider, performs a degree of configuration management, makes calling Terraform from CI identical to calling it locally, and covers any other organisation-specific needs.

A wrapper can be created in any language, though it can be nice to have it integrate with a build system like Make or Rake, so that calling Terraform and non-Terraform tasks feels the same.
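As a rough illustration only (this is a hypothetical sketch, not the gem mentioned below), a wrapper entry point might derive everything from a component name and an action; here the Terraform commands are echoed rather than executed:

```shell
#!/bin/bash -e
# Hypothetical wrapper entry point: derives the backend key from the target
# component folder and prints the Terraform commands it would run.
set -o pipefail

component="${1:-s3}"  # target folder; defaults are used for illustration
action="${2:-plan}"   # Terraform sub-command to run

state_key="${component}/terraform.tfstate"

# In a real wrapper these would be executed, not echoed:
echo "terraform -chdir=${component} init -input=false -backend-config=key=${state_key}"
echo "terraform -chdir=${component} ${action}"
```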

The Terraform wrapper I use and maintain is located at: https://rubygems.org/gems/terraform-wrapper. Sadly, it is not yet documented.

This wrapper integrates with the Rake build system and provides Terraform related tasks to multiple folders containing Terraform infrastructure.

$ rake -T
[I] [TerraformWrapper] Terraform Wrapper for Ruby - version: 1.2.0
[I] [TerraformWrapper] Building tasks for service: account, component: bootstrap...
[I] [TerraformWrapper] Building tasks for service: account, component: account...
rake account:apply[config,plan]           # Applies infrastructure with Ter...
rake account:binary                       # Downloads and extracts the expe...
rake account:clean                        # Cleans a Terraform infrastructu...
rake account:destroy[config]              # Destroys infrastructure with Te...
rake account:import[config,address,id]    # Import a piece of existing infr...
rake account:init[config]                 # Initialises the Terraform infra...
rake account:plan[config,out]             # Creates a Terraform plan for a ...
rake account:plan-destroy[config,out]     # Creates a Terraform destroy pla...
rake account:upgrade                      # Upgrades the Terraform infrastr...
rake account:validate                     # Validates the Terraform code fo...
rake bootstrap:apply[config,plan]         # Applies infrastructure with Ter...
rake bootstrap:binary                     # Downloads and extracts the expe...
rake bootstrap:clean                      # Cleans a Terraform infrastructu...
rake bootstrap:destroy[config]            # Destroys infrastructure with Te...
rake bootstrap:import[config,address,id]  # Import a piece of existing infr...
rake bootstrap:init[config]               # Initialises the Terraform infra...
rake bootstrap:plan[config,out]           # Creates a Terraform plan for a ...
rake bootstrap:plan-destroy[config,out]   # Creates a Terraform destroy pla...
rake bootstrap:upgrade                    # Upgrades the Terraform infrastr...
rake bootstrap:validate                   # Validates the Terraform code fo...

Another example of a Terraform wrapper that I have seen used, and that works well, is located at: https://rubygems.org/gems/rake_terraform.
