
Terraform workspaces and locals for environment separation


Terraform is a powerful tool to provision and manage changes to your cloud infrastructure while following the good practice of keeping your infrastructure as code.
A common need in infrastructure management is building multiple environments, such as testing and production, that share mostly the same setup while keeping a few variables different, like networking and sizing.
The first tool that helps us with that is Terraform workspaces. Previously called environments, workspaces let you create different, independent states on the same configuration. And since they are compatible with remote backends, these workspaces are shared with your team.
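As an implementation detail worth knowing, with the default local backend each non-default workspace keeps its state in its own subdirectory on disk (the layout below is an illustrative sketch; remote backends organize this differently):

```
.
├── main.tf
├── terraform.tfstate          # state for the default workspace
└── terraform.tfstate.d/
    └── production/
        └── terraform.tfstate  # state for the production workspace
```

This is why applying in one workspace never touches the resources tracked by another: each has its own state file.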
As an example, let’s work with the following simple infrastructure:
===============
provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "my_service" {
  ami           = "ami-7b4d7900"
  instance_type = "t2.micro"
}
================
Now we have defined a single AWS EC2 instance, and terraform apply will bring your testing server up.
But that is only one environment. In this simple example one might think it would be fine to simply duplicate the resource and call one "testing_my_service" and the other "prod_my_service", but this approach will quickly lead to confusion as your setup grows in complexity and more resources are added.
What you can do instead is use workspaces to separate them.
terraform workspace new production
With this, you are now in the production workspace. It has the same configuration, since we are in the same Terraform folder and module, but nothing created yet. Thus, if you run terraform apply, it will create another server with the same configuration without changing the previous workspace.
To go back to testing, run terraform workspace select default, since we are using default as the testing environment to make sure we are not working on production by mistake.
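A typical session then looks like the sketch below (the asterisk marks the current workspace; output is illustrative):

```shell
$ terraform workspace list
  default
* production

$ terraform workspace select default
Switched to workspace "default".

$ terraform workspace show
default
```

The terraform workspace show command is a quick sanity check before any apply, precisely to avoid touching production by mistake.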
But, obviously, there are differences between testing and production, and a first approach would be to use variables and conditional switches inside the resources. A better approach is to use the recently introduced Terraform locals, which keep the resources lean and free of logic:
==============
provider "aws" {
  region = "us-east-1"
}

locals {
  env = "${terraform.workspace}"

  counts = {
    "default"    = 1
    "production" = 3
  }

  instances = {
    "default"    = "t2.micro"
    "production" = "t2.large"
  }

  instance_type = "${lookup(local.instances, local.env)}"
  count         = "${lookup(local.counts, local.env)}"
}

resource "aws_instance" "my_service" {
  ami           = "ami-7b4d7900"
  instance_type = "${local.instance_type}"
  count         = "${local.count}"
}
====================
The main difference from variables is that locals can contain logic, keeping it out of the resources, while variables hold only values and push the logic into the resources.
One thing to keep in mind is that Terraform is rapidly evolving, and it is worth keeping an eye on its changes to make sure you are making the most of it.
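For contrast, here is a minimal sketch of the variable-based alternative (the variable name is illustrative). Notice how the lookup logic leaks into the resource block itself, which is exactly what locals avoid:

```
variable "instance_types" {
  type = "map"
  default = {
    "default"    = "t2.micro"
    "production" = "t2.large"
  }
}

resource "aws_instance" "my_service" {
  ami = "ami-7b4d7900"

  # the environment-switching logic now lives inside the resource
  instance_type = "${lookup(var.instance_types, terraform.workspace)}"
}
```

With one resource this is tolerable, but repeated across many resources it scatters the same lookup logic everywhere, while a single locals block centralizes it.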
