Run Terraform In An OSS Environment: No Terraform Enterprise

Momodu Afegbua
5 min read · Jul 14, 2021


Picture this: you work at a company that is pro-OSS (open-source software) tools; it will only accept a paid SaaS tool if it is the last option. The company is on a cloud migration journey (to AWS) and wants to adopt IaC from the onset. Terraform is the preferred tool, and you've been added to the cloud migration team. Your first Epic is to do a POC (Proof of Concept) on Terraform, without using HashiCorp's SaaS offering, Terraform Cloud/Enterprise. What will you do?

Well, the first thing you'll do is start gathering knowledge. Terraform, being quite an opinionated tool, has a lot of ways to achieve this, but you're unlikely to find a complete implementation of an end-to-end pipeline similar to Terraform Enterprise sitting on a GitHub account somewhere. So here's how I did the POC for Dev, Staging and Prod environments (three AWS environments). As I've stated, it is an opinionated tool; my approach is not a global standard.

Prerequisites:

  • Knowledge of Terraform
  • Experience storing secrets in AWS Secrets Manager

1. AWS Accounts: Besides the three environment accounts, which we'll call AWSDev, AWSStage and AWSProd, get two other separate AWS accounts: one dedicated to administration duties and one dedicated to testing modules (alias AWSTerraform and AWSModuleTests). The former houses the buckets that serve as destinations for remote state files; it also stores the secrets (AWS credentials) for the other accounts, using AWS Secrets Manager. I personally prefer Vault for secret storage, but I wasn't ready to manage Vault instances. Seeding a secret can itself be done with Terraform, as sketched below.
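A minimal sketch, assuming you apply it against AWSTerraform; the variable names are mine, purely illustrative:

#Stores AWSDev's credentials at medium/dev in AWSTerraform
resource "aws_secretsmanager_secret" "dev" {
  name = "medium/dev"
}

resource "aws_secretsmanager_secret_version" "dev" {
  secret_id = aws_secretsmanager_secret.dev.id
  secret_string = jsonencode({
    AWS_ACCESS_KEY_ID     = var.dev_access_key_id     # hypothetical inputs,
    AWS_SECRET_ACCESS_KEY = var.dev_secret_access_key # supplied at apply time
  })
}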

2. Decide whether you'll use public Terraform modules or write your own. It was the latter for me, so you'll write modules. These modules are tested in AWSModuleTests, which also serves as the environment for testing newly provisioned resources in the continuous integration pipeline.

This is a simple VPC module with private subnet(s).

main.tf

#Take note of the PROVIDER block added to each resource
resource "aws_vpc" "medium_vpc" {
  provider             = aws.current
  cidr_block           = var.vpc_cidr_block
  enable_dns_support   = true
  enable_dns_hostnames = true
  tags = {
    Name = "${var.vpc_tag_name}-vpc-${var.environment}"
  }
}

resource "aws_subnet" "private_subnet" {
  provider          = aws.current
  count             = var.number_of_private_subnets
  vpc_id            = aws_vpc.medium_vpc.id
  cidr_block        = element(var.private_subnet_cidr_blocks, count.index)
  availability_zone = element(var.availability_zones, count.index)
  tags = {
    Name = "${var.private_subnet_tag_name}-${var.environment}"
  }
}

#OUTPUTS: I don't want to add more code blocks
output "vpc_arn" {
  value = aws_vpc.medium_vpc.arn
}
output "vpc_id" {
  value = aws_vpc.medium_vpc.id
}
output "private_subnet_ids" {
  value = aws_subnet.private_subnet[*].id
}
#The full module also creates public subnets; uncomment once that resource exists
# output "public_subnet_ids" {
#   value = aws_subnet.public_subnet[*].id
# }

variable.tf

#Proxy provider block: take note of this as well; it lets the calling root module pass in an aliased provider
provider "aws" {
  alias = "current"
}

#Variables
variable "vpc_cidr_block" {
  type        = string
  description = "CIDR block for the VPC"
}
variable "vpc_tag_name" {
  type        = string
  description = "Name tag for the VPC"
}
variable "number_of_private_subnets" {
  type        = number
  default     = 1
  description = "The number of private subnets in a VPC."
}
variable "private_subnet_cidr_blocks" {
  type        = list(string)
  description = "CIDR blocks, one per private subnet"
}
variable "private_subnet_tag_name" {
  type        = string
  description = "Name tag for the private subnet"
}
variable "availability_zones" {
  type        = list(string)
  description = "AZs the subnets are spread across"
}
variable "environment" {
  type        = string
  description = "Environment name used in tags (dev/staging/prod)"
}
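Side note: that empty aliased provider block is the older "proxy" pattern. On Terraform 0.15 or newer, a module can instead declare the aliases it expects via configuration_aliases; a sketch, not what this POC used:

terraform {
  required_providers {
    aws = {
      source                = "hashicorp/aws"
      configuration_aliases = [aws.current]
    }
  }
}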

3. Use the modules: In a multi-tenant environment, Terraform OSS offers a workspaces feature that enables the management of different environments. I didn't use workspaces; instead I adopted the folder approach, where each environment has its own folder, arranged as shown below. A pretty messy approach, considering that it does not use a VCS multi-branch strategy, but scratch that: with the right script, it works.

terraform_code          #You know we're assuming this is a GitHub repo, right?
|- Dev
|  |- vpc
|  |  |- main.tf
|  |  |- terraform.tf
|  |  |- outputs.tf
|  |  |- backend.tf
|  |- iam
|- Stage
|  |- vpc
|  |- iam
|- Prod
|  |- vpc
|  |- iam

Let’s take a look at what’s inside the vpc folder in dev.

terraform.tf

#Import data from AWSTerraform: the AWS credentials stored there.
#Dev
data "aws_secretsmanager_secret_version" "dev_creds" {
  secret_id = "medium/dev"
}
#Staging
data "aws_secretsmanager_secret_version" "staging_creds" {
  secret_id = "medium/staging"
}
#Prod
data "aws_secretsmanager_secret_version" "prod_creds" {
  secret_id = "medium/prod"
}

#Regions: we are going to stick to two regions
locals {
  aws_regions = {
    "r0" = "us-east-1" # N. Virginia, USA
    "r1" = "eu-west-1" # Dublin, Ireland
  }
}

#Get credentials from Secrets Manager in JSON format and decode them (dev, staging and prod)
locals {
  dev_creds     = jsondecode(data.aws_secretsmanager_secret_version.dev_creds.secret_string)
  staging_creds = jsondecode(data.aws_secretsmanager_secret_version.staging_creds.secret_string)
  prod_creds    = jsondecode(data.aws_secretsmanager_secret_version.prod_creds.secret_string)
}

#Load creds into short keys: a = access key ID, s = secret key,
#t = token in case of an STS role. I'm not using an STS role for this, so no 't'.
locals {
  medium_dev = {
    a = local.dev_creds.AWS_ACCESS_KEY_ID
    s = local.dev_creds.AWS_SECRET_ACCESS_KEY
  }
  medium_staging = {
    a = local.staging_creds.AWS_ACCESS_KEY_ID
    s = local.staging_creds.AWS_SECRET_ACCESS_KEY
  }
  medium_prod = {
    a = local.prod_creds.AWS_ACCESS_KEY_ID
    s = local.prod_creds.AWS_SECRET_ACCESS_KEY
  }
}

#Declare the initial region before switching to specific environment regions.
#Mind you, AWSTerraform needs the default region.
provider "aws" {
  region = var.region
}

#Remember the PROVIDER in the previous step? This is where it comes into play.
# AWS Providers: Dev
provider "aws" {
  alias      = "dev-east-1"
  access_key = local.medium_dev["a"]
  secret_key = local.medium_dev["s"]
  region     = local.aws_regions["r0"]
}
provider "aws" {
  alias      = "dev-west-1"
  access_key = local.medium_dev["a"]
  secret_key = local.medium_dev["s"]
  region     = local.aws_regions["r1"]
}
################################################################################
# AWS Providers: Staging
provider "aws" {
  alias      = "staging-east-1"
  access_key = local.medium_staging["a"]
  secret_key = local.medium_staging["s"]
  region     = local.aws_regions["r0"]
}
provider "aws" {
  alias      = "staging-west-1"
  access_key = local.medium_staging["a"]
  secret_key = local.medium_staging["s"]
  region     = local.aws_regions["r1"]
}
################################################################################
# AWS Providers: Prod
provider "aws" {
  alias      = "prod-east-1"
  access_key = local.medium_prod["a"]
  secret_key = local.medium_prod["s"]
  region     = local.aws_regions["r0"]
}
provider "aws" {
  alias      = "prod-west-1"
  access_key = local.medium_prod["a"]
  secret_key = local.medium_prod["s"]
  region     = local.aws_regions["r1"]
}
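For completeness, the file above also references var.region, and the module call below uses var.environment and var.availability_zones. A minimal sketch of the companion variables file, with defaults I've made up for illustration:

#Hypothetical variables.tf for Dev/vpc; defaults are illustrative, not from the POC
variable "region" {
  type        = string
  default     = "eu-west-1"
  description = "Default region for the unaliased AWSTerraform provider"
}
variable "environment" {
  type    = string
  default = "dev"
}
variable "availability_zones" {
  type        = list(string)
  default     = ["us-east-1a", "us-east-1b"]
  description = "AZs handed down to the VPC module"
}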

main.tf

#Note the providers block: this is where most of the magic happens. Changing aws.dev-east-1 to aws.staging-east-1 will simply deploy this resource to the staging environment (AWSStage)
module "vpc_for_ecs_fargate" {
providers = {
aws.current = aws.dev-east-1
}
source = "momoduismaila/modules/vpc" # This is a fictional source though
vpc_tag_name = "medium-vpc"
number_of_private_subnets = 2
number_of_public_subnets = 2
private_subnet_tag_name = "medium-private-subnet"
public_subnet_tag_name = "medium-public-subnet"
environment = var.environment
security_group_lb_name = "medium-alb-sg"
security_group_ecs_tasks_name = "medium-ecs-tasks-sg"
app_port = 80
availability_zones = var.availability_zones
region = var.region
}
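For illustration, the equivalent block in Stage/vpc/main.tf would differ only in the provider mapping and any environment-specific inputs; a sketch, not a standalone file:

module "vpc_for_ecs_fargate" {
  providers = {
    aws.current = aws.staging-east-1 # the only real change: point at the staging provider
  }
  source      = "momoduismaila/modules/vpc" # still the fictional source
  environment = var.environment             # "staging" here
  # ...same inputs as the dev block above
}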

backend.tf

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "3.27.0"
    }
  }
  #Everything that has to do with the backend is stored in AWSTerraform, which normally should be more
  #secure than the production environment. No individual should have access to the account except the deployment tool.
  backend "s3" {
    bucket = "some.random.terraform.files"
    key    = "dev/terraform.tfstate"
    region = "eu-west-1"
    ## Use DynamoDB for state locking
    dynamodb_table = "terraform-medium-lock"
    encrypt        = true
  }
}
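The bucket and lock table have to exist before the first terraform init. A minimal one-time bootstrap, applied in AWSTerraform with local state (the resource names are mine, not from the POC):

resource "aws_s3_bucket" "tf_state" {
  bucket = "some.random.terraform.files"
}

resource "aws_dynamodb_table" "tf_lock" {
  name         = "terraform-medium-lock"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID" # the S3 backend expects exactly this attribute name

  attribute {
    name = "LockID"
    type = "S"
  }
}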

4. Write your deployment script. Depending on the CI/CD tools you use, you can add a test phase to your pipeline or use policy-agent tools. Chef InSpec is a good option, though the learning curve is not as smooth as it looks at first glance. The POC did not include a test phase, so I implemented the script as a Makefile.

SHELL = /bin/bash
TF = terraform
TF_PARALLEL = 128

.EXPORT_ALL_VARIABLES:
.PHONY: all init init-reconfigure import plan apply destroy refresh show validate upgrade clean reset timestamp login logout token backend backend_check confirm_import

all: apply

init: timestamp
	$(TF) init
init-reconfigure: timestamp
	$(TF) init -reconfigure
plan: init validate
	$(TF) plan -parallelism=$(TF_PARALLEL)
apply: init
	$(TF) apply -auto-approve -parallelism=$(TF_PARALLEL)
destroy: init
	$(TF) destroy -parallelism=$(TF_PARALLEL)
refresh: init
	$(TF) refresh -parallelism=$(TF_PARALLEL)
show: init
	$(TF) show
validate: init
	$(TF) validate
clean:
	rm -vf terraform.log terraform-debug.log
reset: clean
	rm -rvf .terraform
	rm -vf terraform.tfstate terraform.tfstate.* .terraform.lock.hcl
timestamp:
	date --rfc-3339=seconds | tee -a terraform.log
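Usage is then per environment folder: cd into, say, Dev/vpc and run make plan (which runs init and validate first), then make apply once the plan looks right. In CI, each environment folder can simply become its own pipeline stage invoking the same targets.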


Momodu Afegbua

Cloud Architect | DevOps Evangelist | CKA, CKAD | I mostly write things in here so I can read them again when I get lost — eventually.