Terraform modules#
Infrastructure-as-code for customer-VPC deployments. Stubbed today; full modules ship with v0.12.
Status (April 2026)
Terraform modules are on the v0.12 roadmap. Until then, use the reference architectures below as a starting point and hand-write the final manifests. When the modules ship, they will follow the same structure.
Why modules (vs Helm alone)#
Helm deploys the application. Terraform provisions the surrounding infra:
- Managed Postgres (RDS / Cloud SQL / Azure DB)
- Managed Kubernetes (EKS / GKE / AKS)
- Object storage buckets + lifecycle rules
- KMS keys for BYOK
- IAM roles + service accounts
- VPC peering, PrivateLink, firewall rules
- CloudFront / Cloud Armor / Front Door (CDN + WAF)
- Cert-manager + Route 53 / Cloud DNS for TLS
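The split can be sketched in HCL. This is illustrative only (resource and attribute names are placeholders, not part of the planned modules): Terraform provisions the managed database, and the `helm_release` resource from the official Helm provider bridges its outputs into chart values.

```hcl
# Terraform owns the infrastructure layer (placeholder resource)...
resource "aws_db_instance" "postgres" {
  engine         = "postgres"
  instance_class = "db.r6g.large"
  # ...
}

# ...while Helm owns the application layer. The Helm provider lets
# infrastructure attributes flow directly into chart values.
resource "helm_release" "swarm" {
  name      = "swarm"
  chart     = "swarm/swarm"
  namespace = "swarm"

  set {
    name  = "postgres.host"
    value = aws_db_instance.postgres.address
  }
}
```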
Module structure (planned)#
```text
deploy/terraform/
├── aws/
│   ├── main.tf        # VPC + subnets + security groups
│   ├── eks.tf         # EKS cluster + node groups
│   ├── rds.tf         # Postgres
│   ├── s3.tf          # Runs bucket + lifecycle
│   ├── kms.tf         # BYOK key + alias
│   ├── iam.tf         # service-account roles + pod identity
│   ├── route53.tf     # DNS
│   ├── acm.tf         # TLS certs
│   ├── variables.tf
│   ├── outputs.tf
│   └── versions.tf
├── gcp/
│   ├── main.tf
│   ├── gke.tf
│   ├── cloudsql.tf
│   ├── gcs.tf
│   ├── kms.tf
│   └── ...
└── azure/
    ├── main.tf
    ├── aks.tf
    ├── postgres.tf
    ├── blob.tf
    ├── keyvault.tf
    └── ...
```
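As a sketch of the planned contract, each provider directory's `outputs.tf` would collect infrastructure attributes into the `helm_values` map documented under "Outputs (all modules)". Illustrative only; the resource addresses and map keys are placeholders:

```hcl
# outputs.tf (illustrative sketch of the planned output contract)
output "helm_values" {
  description = "Drop-in values map for `helm install -f`"
  value = {
    postgres = { host = aws_db_instance.postgres.address }
    storage  = { bucket = aws_s3_bucket.runs.id }
    kms      = { keyArn = aws_kms_key.byok.arn }
  }
}

output "kubeconfig" {
  value     = local.kubeconfig
  sensitive = true # keep cluster credentials out of plain-text logs
}
```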
Example usage (AWS)#
```hcl
module "swarm" {
  source = "github.com/TheAiSingularity/swarm//deploy/terraform/aws?ref=v0.12.0"

  name               = "swarm-prod"
  region             = "ap-south-1"
  vpc_cidr           = "10.42.0.0/16"
  availability_zones = ["ap-south-1a", "ap-south-1b", "ap-south-1c"]

  kubernetes_version = "1.30"

  postgres_instance_class        = "db.r6g.large"
  postgres_storage_gb            = 200
  postgres_multi_az              = true
  postgres_backup_retention_days = 30

  enable_byok        = true
  kms_key_admin_arns = ["arn:aws:iam::...:role/KeyAdmin"]
  # Let swarm create its own IAM role with least-privilege access to bucket + KMS

  swarm_domain = "swarm.customer.com"
  dns_zone_id  = "Z0123456789ABCDEF"

  tags = {
    Environment = "production"
    Owner       = "ml-platform"
    CostCenter  = "CC-42"
  }
}

# Output the values needed by the Helm values.yaml
output "swarm_helm_values" {
  value = module.swarm.helm_values
}
```
Then feed the output into Helm:
```shell
terraform apply
terraform output -json swarm_helm_values > values-generated.yaml
helm install swarm swarm/swarm -n swarm --create-namespace \
  -f values-base.yaml \
  -f values-generated.yaml
```
Example usage (GCP)#
```hcl
module "swarm" {
  source = "github.com/TheAiSingularity/swarm//deploy/terraform/gcp?ref=v0.12.0"

  project_id = "swarm-customer"
  region     = "asia-south1"
  zones      = ["asia-south1-a", "asia-south1-b", "asia-south1-c"]

  gke_version           = "1.30"
  gke_node_machine_type = "n2-standard-4"
  gke_min_nodes         = 3
  gke_max_nodes         = 10

  cloudsql_tier = "db-custom-2-8192"
  cloudsql_ha   = true

  gcs_bucket_name   = "swarm-customer-runs"
  gcs_storage_class = "STANDARD"

  enable_cmek = true # Customer-Managed Encryption Keys
}
```
Example usage (Azure)#
```hcl
module "swarm" {
  source = "github.com/TheAiSingularity/swarm//deploy/terraform/azure?ref=v0.12.0"

  resource_group_name = "swarm-prod-rg"
  location            = "centralindia"

  aks_version    = "1.30"
  aks_node_count = 3
  aks_vm_size    = "Standard_D4s_v5"

  postgres_flexible_server_sku = "GP_Standard_D4s_v3"
  postgres_storage_mb          = 204800

  enable_cmk    = true
  keyvault_name = "swarm-prod-kv"
}
```
Outputs (all modules)#
Every module exports:
- `helm_values` — drop-in for `helm install -f`
- `cluster_endpoint` — Kubernetes API URL
- `cluster_ca_data` — base64-encoded CA cert
- `kubeconfig` — ready-to-use kubeconfig content
- `postgres_connection_string` — excludes the password (stored separately in Secrets Manager)
- `storage_bucket` — object storage bucket name
- `kms_key_arn` / `kms_key_id` — BYOK key reference
- `ingress_lb_hostname` / `ingress_lb_ip` — target for the DNS CNAME
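The `helm_values` output is read via `terraform output -json`, and JSON is a strict subset of YAML, so the generated file can be passed straight to `helm -f`. A rough illustration of its shape (keys and values below are placeholders, not a stable schema):

```yaml
postgres:
  host: swarm-prod.cluster-abc.ap-south-1.rds.amazonaws.com
  port: 5432
storage:
  bucket: swarm-prod-runs
kms:
  keyArn: arn:aws:kms:ap-south-1:111122223333:key/...
ingress:
  hostname: swarm-prod-ingress.elb.amazonaws.com
```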
State management#
Modules are agnostic — use whatever backend you prefer (S3, GCS, Terraform Cloud, Azure Blob).
```hcl
terraform {
  backend "s3" {
    bucket         = "acme-tfstate"
    key            = "swarm/prod/terraform.tfstate"
    region         = "ap-south-1"
    dynamodb_table = "acme-tfstate-lock"
  }
}
```
Version pinning#
Always pin the module version to a release tag.
Breaking changes between minor versions are possible pre-1.0. Read the release notes.
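Concretely, pin via the `?ref=` query string on the module source, as in the examples above:

```hcl
module "swarm" {
  # Pin to an exact release tag, never a branch
  source = "github.com/TheAiSingularity/swarm//deploy/terraform/aws?ref=v0.12.0"
  # ...
}
```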
CI/CD integration#
Typical flow:
```yaml
# .github/workflows/deploy-swarm.yml (excerpt)
jobs:
  terraform-apply:
    steps:
      - uses: hashicorp/setup-terraform@v3
      - run: terraform init
      - run: terraform plan -out=plan.bin
      - uses: actions/upload-artifact@v4
        with: { name: plan, path: plan.bin }
      # Manual approval gate here in production
      - run: terraform apply plan.bin
      - run: terraform output -json swarm_helm_values > values-generated.yaml
      - run: helm upgrade swarm ./chart -f values-base.yaml -f values-generated.yaml
```
Migration from existing infra#
If you already have EKS + RDS, import instead:
```shell
terraform import module.swarm.aws_eks_cluster.this acme-prod-cluster
terraform import module.swarm.aws_db_instance.postgres acme-prod-db
```
Then run `terraform plan`; the remaining diff shows what you need to reconcile.
Next#
- Kubernetes + Helm — deploying the application onto the infrastructure Terraform provisions
- Data residency — multi-region patterns