In this more advanced guide, you will learn how to use driftctl in a more realistic real-life environment, with multiple Terraform states and output filtering. We will demonstrate how manual changes can impact drift detection and how driftctl complements Terraform plan!
Whether it’s a script gone wild, a bad API call from a trusted Lambda, or just the daily SNAFU, you want to know about the situation.
Driftctl will do just that.
We recommend using an AWS account dedicated to testing.
Clone the example Terraform code and execute it with Terraform. This Terraform configuration simply creates a VPC and a locked-down security group.
Disclaimer: we kept the Terraform resources for the AWS provider deliberately simple; we did not try to create the most advanced, useful, or complete Terraform configuration.
$ git clone git@github.com:cloudskiff/driftctl-advanced-aws-tutorial.git
$ cd driftctl-advanced-aws-tutorial
Configure your AWS credentials (either an AWS_PROFILE, or an AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY pair):
$ export AWS_PROFILE="your-profile"
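If you prefer not to use a named profile, exporting the key pair directly also works (the values below are placeholders for your own credentials):
$ export AWS_ACCESS_KEY_ID="<your-access-key-id>"
$ export AWS_SECRET_ACCESS_KEY="<your-secret-access-key>"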
$ cd app_env_1
$ terraform init
[...]
Terraform has been successfully initialized!
Run Terraform:
$ terraform apply
[...]
Apply complete! Resources: 4 added, 0 changed, 0 destroyed.
$ cd ..
Do the same for the second environment:
$ cd app_env_2
$ terraform init
[...]
Terraform has been successfully initialized!
$ terraform apply
[...]
Apply complete! Resources: 4 added, 0 changed, 0 destroyed.
$ cd ..
Congrats, your AWS account now includes the resources from both environments, each managed by its own Terraform state.
As of this writing, driftctl can scan for AWS resources and complement Terraform in drift detection. More providers are on their way!
Using driftctl, you can:
- aggregate multiple Terraform states in a single scan (--from <statefiles>)
- ignore specific resources (.driftignore)
- filter the scan output (--filter <expression>)
- choose the output format (--output <format>)
The small lab we created above simulates 2 different “environments” (in real life it can be different applications, environments, or teams), with distinct Terraform states.
If we run driftctl with a single state, resources from the other state will be detected as drift, or more precisely as unmanaged resources, which is misleading and not what we want.
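For instance, scanning with only the first state would report everything created by app_env_2 as unmanaged:
$ driftctl scan --from tfstate://./app_env_1/terraform.tfstate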
Here’s how to use driftctl with multiple states:
$ driftctl scan --from tfstate://./app_env_1/terraform.tfstate --from tfstate://./app_env_2/terraform.tfstate
Scanning AWS on region: us-east-1
Found 3 resource(s)
- 100% coverage
Let’s manually add an ingress rule to the security group we created, opening it up to all inbound traffic, so we can detect it using driftctl. You can do this from the AWS console or with the AWS CLI:
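A minimal sketch using the AWS CLI (substitute the security group ID from your own Terraform output; this opens the group to all IPv4 and IPv6 traffic, exactly the kind of change we want driftctl to catch):
$ aws ec2 authorize-security-group-ingress \
    --group-id sg-0d04a4ce8ff6a74d3 \
    --ip-permissions 'IpProtocol=-1,IpRanges=[{CidrIp=0.0.0.0/0}],Ipv6Ranges=[{CidrIpv6=::/0}]'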
Run Terraform from the folder where the security group is managed and confirm it doesn’t catch the manual change:
$ cd app_env_2
$ terraform apply
aws_security_group.supersecure: Refreshing state... [id=sg-0d04a4ce8ff6a74d3]
aws_security_group_rule.supersecure_sg_rule_1: Refreshing state... [id=sgrule-1254751605]
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
Now run driftctl again:
$ cd ..
$ driftctl scan --from tfstate://./app_env_1/terraform.tfstate --from tfstate://./app_env_2/terraform.tfstate
Scanning AWS on region: us-east-1
Found unmanaged resources:
aws_security_group_rule:
- sgrule-619247160 (Type: ingress, SecurityGroup: sg-0d04a4ce8ff6a74d3, Protocol: All, Ports: All, Source: ::/0)
- sgrule-1309243877 (Type: ingress, SecurityGroup: sg-0d04a4ce8ff6a74d3, Protocol: All, Ports: All, Source: 0.0.0.0/0)
Found 5 resource(s)
- 60% coverage
- 3 covered by IaC
- 2 not covered by IaC
- 0 deleted on cloud provider
- 0/3 drifted from IaC
Holy cow, driftctl caught the drift!
driftctl supports a .driftignore file, which works much like a .gitignore: you simply add to this file all the resources you want to be ignored. To proceed with this step, manually create an S3 bucket from the AWS console (ours is named randomBucket514).
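If you prefer the CLI to the console, something along these lines creates a bucket (the name is just an example; S3 bucket names must be globally unique and lowercase, and regions other than us-east-1 also need a LocationConstraint):
$ aws s3api create-bucket --bucket my-drift-demo-bucket-514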
Obviously, since this is done completely outside of Terraform’s control, this bucket can’t be detected as drift by Terraform, so we’re in the dark, as expected.
Confirm driftctl detects the manually created bucket:
$ driftctl scan --from tfstate://./app_env_1/terraform.tfstate --from tfstate://./app_env_2/terraform.tfstate
Scanning AWS on region: us-east-1
Found unmanaged resources:
[...]
aws_s3_bucket:
- randomBucket514
[...]
Open the .driftignore file at the root of the repository, where we execute driftctl, and add a line for the resource you want to ignore, in the form aws_s3_bucket.<bucket-name> (in my case: aws_s3_bucket.randomBucket514).
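After the edit, the ignore file simply contains that one line:
$ cat .driftignore
aws_s3_bucket.randomBucket514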
Now run driftctl again:
$ driftctl scan --from tfstate://./app_env_1/terraform.tfstate --from tfstate://./app_env_2/terraform.tfstate
[...]
Now the manually created S3 bucket is ignored forever!
It’s often helpful to be able to filter the output dynamically, to see results only for one type of resource (like only IAM users) or a specific tag (like only a specific environment).
Filtering in driftctl is implemented as in the AWS CLI: you won’t be lost! (hint: it’s JMESPath).
Let’s filter only VPC resources:
$ driftctl scan --from tfstate://./app_env_1/terraform.tfstate --from tfstate://./app_env_2/terraform.tfstate --filter "Type=='aws_vpc'"
Scanning AWS on region: us-east-1
Found 1 resource(s)
- 100% coverage
Congrats! Your infrastructure is fully in sync.
Let’s now filter for anything whose “Environment” tag matches “app_env_1”:
$ driftctl scan --from tfstate://./app_env_1/terraform.tfstate --from tfstate://./app_env_2/terraform.tfstate --filter "Attr.Tags.Environment == 'app_env_1'"
Scanning AWS on region: us-east-1
Found 1 resource(s)
- 100% coverage
Congrats! Your infrastructure is fully in sync.
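Since the filter is a plain JMESPath expression, you should also be able to combine criteria; for example, this sketch (untested, but following the same pattern as above) restricts the results to VPCs tagged for the first environment:
$ driftctl scan --from tfstate://./app_env_1/terraform.tfstate --from tfstate://./app_env_2/terraform.tfstate --filter "Type=='aws_vpc' && Attr.Tags.Environment=='app_env_1'"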
You now can collect data only for the exact setup you want! Perfect for those reports 😉
We’ve covered driftctl human-readable output, but it’s also often useful to process it further. That’s why driftctl can output to JSON!
Here’s how to generate a JSON file directly:
$ driftctl scan --from tfstate://./app_env_1/terraform.tfstate --from tfstate://./app_env_2/terraform.tfstate --output json://driftctl.json
Take a look at the generated JSON file:
$ cat driftctl.json
{
"summary": {
"total_resources": 5,
"total_drifted": 0,
"total_unmanaged": 2,
"total_deleted": 0,
"total_managed": 3
},
[...]
$
Now you can process this file with a tool like jq, for example to extract only the coverage percentage and maybe send it to a database, from which a graph can be generated:
$ jq '.coverage' < driftctl.json
60
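The same approach works for any field in the report; for instance, pulling the number of unmanaged resources from the summary shown above:
$ jq '.summary.total_unmanaged' < driftctl.json
2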
The possibilities are now endless!
Don’t forget to destroy the resources we created for this lab:
$ cd app_env_2; terraform destroy
$ cd ../app_env_1; terraform destroy
Delete the S3 bucket you created manually, and your AWS account is as clean as before.
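If you created the extra bucket from the CLI, removing it is a one-liner (use your own bucket name; --force also deletes any objects it contains):
$ aws s3 rb s3://my-drift-demo-bucket-514 --force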
We covered a lot in this advanced tutorial: scanning multiple Terraform states at once, ignoring resources with .driftignore, filtering results with JMESPath expressions, and exporting the report as JSON for further processing.