Terraform is an Infrastructure as Code (IaC) tool by HashiCorp that lets you define and provision infrastructure using a declarative configuration language called HCL (HashiCorp Configuration Language). Rather than clicking through a cloud console or writing imperative scripts, you describe the end state you want and Terraform figures out how to get there.
On macOS with Homebrew:
brew tap hashicorp/tap
brew install hashicorp/tap/terraform
terraform version
Providers are plugins that let Terraform talk to external APIs. You declare which providers you need and Terraform downloads them during terraform init:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "eu-west-1"
}
Resources are the core building blocks — the actual infrastructure you want to create:
resource "aws_s3_bucket" "my_bucket" {
  bucket = "my-unique-bucket-name"
}

resource "aws_s3_bucket_versioning" "my_bucket_versioning" {
  bucket = aws_s3_bucket.my_bucket.id

  versioning_configuration {
    status = "Enabled"
  }
}
Notice the second resource references aws_s3_bucket.my_bucket.id — Terraform understands this dependency and will create the bucket before attempting to enable versioning.
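When no attribute reference exists to imply an ordering, the depends_on meta-argument makes a dependency explicit. A minimal sketch reusing the bucket above (the depends_on here is contrived, purely to illustrate the syntax):

```hcl
resource "aws_s3_bucket_public_access_block" "my_bucket" {
  bucket = aws_s3_bucket.my_bucket.id # implicit dependency via reference

  # Explicit dependency: forces an ordering Terraform can't infer
  # from references alone (shown only to illustrate the syntax)
  depends_on = [aws_s3_bucket_versioning.my_bucket_versioning]

  block_public_acls   = true
  block_public_policy = true
}
```

Prefer implicit dependencies via references where possible; reach for depends_on only when Terraform genuinely cannot see the relationship.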
Hardcoding values is fine for learning, but variables make configurations reusable:
variable "environment" {
  description = "Deployment environment"
  type        = string
  default     = "dev"
}

variable "bucket_name" {
  description = "Name of the S3 bucket"
  type        = string
}

resource "aws_s3_bucket" "my_bucket" {
  bucket = "${var.bucket_name}-${var.environment}"
}
Pass values at runtime:
terraform apply -var="bucket_name=my-app" -var="environment=prod"
Or via a terraform.tfvars file:
bucket_name = "my-app"
environment = "prod"
Outputs let you extract values from your infrastructure after it’s been created — useful for passing data between modules or just seeing what got created:
output "bucket_arn" {
  description = "ARN of the created S3 bucket"
  value       = aws_s3_bucket.my_bucket.arn
}
Data sources let you read information about existing infrastructure without managing it:
data "aws_vpc" "main" {
  filter {
    name   = "tag:Name"
    values = ["main-vpc"]
  }
}

resource "aws_subnet" "app" {
  vpc_id     = data.aws_vpc.main.id
  cidr_block = "10.0.1.0/24"
}
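Another common data source looks up an AMI so the ID never has to be hardcoded. A sketch, assuming Canonical's Ubuntu naming scheme and their well-known AWS account ID:

```hcl
data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"] # Canonical's AWS account ID

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"]
  }
}
```

Later examples reference data.aws_ami.ubuntu.id in exactly this way.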
HCL has a set of expressions that let you make configurations dynamic and avoid repetition.
The ternary operator lets you toggle values based on a condition:
variable "environment" {
  type    = string
  default = "dev"
}

resource "aws_instance" "app" {
  instance_type = var.environment == "prod" ? "t3.large" : "t3.micro"
}
Useful for enabling or disabling features per environment:
resource "aws_s3_bucket_versioning" "this" {
  bucket = aws_s3_bucket.my_bucket.id

  versioning_configuration {
    status = var.environment == "prod" ? "Enabled" : "Suspended"
  }
}
count creates multiple instances of a resource. Use it when you need N identical copies:
resource "aws_iam_user" "developers" {
  count = length(var.developer_names)
  name  = var.developer_names[count.index]
}
You can also use count as a boolean to conditionally create a resource:
resource "aws_cloudwatch_log_group" "app" {
  count = var.enable_logging ? 1 : 0
  name  = "/app/${var.environment}"
}
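One wrinkle worth knowing: a count-based resource is a list, so references to it need an index even when count is at most 1. The one() function (Terraform 0.15+) handles this cleanly, a sketch:

```hcl
# one() returns the single element of the splat, or null when count is 0,
# avoiding an index error on aws_cloudwatch_log_group.app[0] when logging is off
output "log_group_name" {
  value = one(aws_cloudwatch_log_group.app[*].name)
}
```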
for_each is more flexible than count — it creates one resource per item in a map or set, and each instance is keyed by the map key rather than an index. This matters when items are added or removed; Terraform won’t recreate unrelated resources:
variable "buckets" {
  type = map(string)
  default = {
    assets  = "eu-west-1"
    backups = "eu-west-2"
  }
}

resource "aws_s3_bucket" "this" {
  for_each = var.buckets
  bucket   = "${each.key}-${var.environment}"
  # each.key   = "assets" / "backups"
  # each.value = "eu-west-1" / "eu-west-2"
}
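Instances created with for_each are addressed by map key rather than numeric index, which is what makes them stable when the map changes. A sketch of referencing them:

```hcl
# Address a single instance by its key
output "assets_bucket_id" {
  value = aws_s3_bucket.this["assets"].id
}

# Or collect an attribute across all instances into a map
output "all_bucket_ids" {
  value = { for k, b in aws_s3_bucket.this : k => b.id }
}
```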
lookup retrieves a value from a map by key, with an optional default if the key doesn’t exist:
variable "instance_types" {
  type = map(string)
  default = {
    dev     = "t3.micro"
    staging = "t3.medium"
    prod    = "t3.large"
  }
}

resource "aws_instance" "app" {
  instance_type = lookup(var.instance_types, var.environment, "t3.micro")
}
The third argument is the fallback — if var.environment isn’t in the map, t3.micro is used.
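The same fallback can also be written with native indexing wrapped in try(), which returns the first argument that evaluates without an error:

```hcl
# try() catches the error raised when var.environment isn't a key in the map
# and falls back to "t3.micro"
resource "aws_instance" "app" {
  instance_type = try(var.instance_types[var.environment], "t3.micro")
}
```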
for expressions let you transform lists and maps inline:
# Uppercase a list of names
variable "names" {
  default = ["alice", "bob", "carol"]
}

locals {
  upper_names = [for name in var.names : upper(name)]
  # ["ALICE", "BOB", "CAROL"]
}
# Filter a list (here var.buckets is assumed to be a list of objects
# with an environment attribute, unlike the map(string) used earlier)
locals {
  prod_buckets = [for b in var.buckets : b if b.environment == "prod"]
}
# Transform a list into a map
locals {
  name_map = { for name in var.names : name => upper(name) }
  # { alice = "ALICE", bob = "BOB", carol = "CAROL" }
}
locals lets you compute intermediate values rather than repeating expressions:
locals {
  name_prefix = "${var.project}-${var.environment}"
  common_tags = {
    Project     = var.project
    Environment = var.environment
    ManagedBy   = "terraform"
  }
}

resource "aws_s3_bucket" "app" {
  bucket = "${local.name_prefix}-app-data"
  tags   = local.common_tags
}

resource "aws_instance" "app" {
  ami           = data.aws_ami.ubuntu.id # assumes an aws_ami data source defined elsewhere
  instance_type = "t3.micro"
  tags          = merge(local.common_tags, { Name = "${local.name_prefix}-app" })
}
merge combines two maps — handy for adding resource-specific tags on top of a shared base.
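On duplicate keys, merge keeps the value from the last map given, which is what lets resource-specific tags override the shared base:

```hcl
# Later arguments win on key collisions
locals {
  base     = { Environment = "dev", ManagedBy = "terraform" }
  override = { Environment = "prod" }
  tags     = merge(local.base, local.override)
  # tags = { Environment = "prod", ManagedBy = "terraform" }
}
```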
When a resource has a repeating nested block, dynamic lets you generate those blocks from a list rather than duplicating them:
variable "ingress_rules" {
  type = list(object({
    port        = number
    description = string
  }))
  default = [
    { port = 80, description = "HTTP" },
    { port = 443, description = "HTTPS" },
  ]
}
resource "aws_security_group" "app" {
  name = "app-sg"

  dynamic "ingress" {
    for_each = var.ingress_rules
    content {
      from_port   = ingress.value.port
      to_port     = ingress.value.port
      protocol    = "tcp"
      cidr_blocks = ["0.0.0.0/0"]
      description = ingress.value.description
    }
  }
}
terraform init # Download providers and modules
terraform fmt # Format your code consistently
terraform validate # Check for syntax errors
terraform plan # Preview changes
terraform apply # Apply changes (prompts for confirmation)
terraform destroy # Tear it all down
For CI/CD pipelines, use terraform apply -auto-approve to skip the confirmation prompt.
By default Terraform stores state locally in terraform.tfstate. This works fine solo, but breaks down in a team — two people running terraform apply at the same time against the same local state will cause problems.
The solution is remote state with locking. On AWS:
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"
    key            = "prod/terraform.tfstate"
    region         = "eu-west-1"
    dynamodb_table = "terraform-locks"
    encrypt        = true
  }
}
The DynamoDB table provides state locking — only one operation can run at a time, preventing state corruption.
Modules are reusable packages of Terraform configuration. Once your infrastructure grows beyond a single file, modules are how you keep things manageable:
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0"

  name = "my-vpc"
  cidr = "10.0.0.0/16"

  azs             = ["eu-west-1a", "eu-west-1b"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24"]
  public_subnets  = ["10.0.3.0/24", "10.0.4.0/24"]
}
The Terraform Registry has well-maintained community modules for most common patterns — there’s rarely a need to write a VPC module from scratch.
A sensible starting layout for a project with multiple environments:
├── modules/
│   ├── networking/
│   └── compute/
├── environments/
│   ├── dev/
│   │   ├── main.tf
│   │   ├── variables.tf
│   │   └── terraform.tfvars
│   └── prod/
│       ├── main.tf
│       ├── variables.tf
│       └── terraform.tfvars
Each environment has its own state file and can be applied independently. Shared modules live in modules/ and are called from each environment.
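A sketch of how the two environments might share one state bucket with different keys (the bucket and table names are placeholders):

```hcl
# environments/dev/main.tf
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"
    key            = "dev/terraform.tfstate" # prod uses "prod/terraform.tfstate"
    region         = "eu-west-1"
    dynamodb_table = "terraform-locks"
    encrypt        = true
  }
}
```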
- Run terraform plan before terraform apply, and read the output carefully, especially destroy operations.
- Use the terraform state subcommands if you need to inspect or manipulate state.
- Pin provider versions with constraints like ~> 5.0, which allows minor and patch updates within the 5.x series but prevents a surprise jump to a new major version.
- Run terraform fmt in CI to keep formatting consistent across the team without arguments.