In this advanced driftctl demo, you will learn how to use the tool in a more realistic environment, with multiple Terraform states and output filtering. We will demonstrate how manual changes can impact drift detection and how driftctl complements `terraform plan`!
Whether it’s a script gone wild, a bad API call from a trusted Lambda, or just the daily SNAFU, you want to know about the situation.
Driftctl will do just that.
We recommend using an AWS account dedicated to testing.
To start this driftctl advanced demo, clone the example Terraform code and execute it with Terraform. This Terraform configuration simply creates a VPC and a basic, locked-down security group.
Disclaimer: we used simple Terraform resources with the AWS provider; we did not try to create the most advanced, useful, or complete Terraform configuration.
Configure your credentials for this account (an `AWS_ACCESS_KEY_ID` / `AWS_SECRET_KEY` pair), then run Terraform:
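A minimal sketch of that setup; the credential values are placeholders and the directory name is illustrative, not necessarily what the demo repository uses:

```shell
# Credentials for the dedicated test account (values are placeholders);
# AWS_SECRET_ACCESS_KEY is the variable name the AWS provider reads
export AWS_ACCESS_KEY_ID="AKIAxxxxxxxxxxxxxxxx"
export AWS_SECRET_ACCESS_KEY="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"

# From the first environment's directory (name is illustrative)
cd path/to/env1
terraform init
terraform apply -auto-approve
```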
Do the same for the second environment:
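The same steps, sketched for the second environment (the directory name is again an assumption):

```shell
# From the second environment's directory (name is illustrative)
cd path/to/env2
terraform init
terraform apply -auto-approve
```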
Congrats, your AWS account now includes:
As of this writing, driftctl can scan for AWS resources and complement Terraform in drift detection. More providers are on their way!
Using driftctl, you can:
- Load multiple Terraform state files (`--from <statefiles>`)
- Ignore selected resources (`.driftignore`)
- Filter the output (`--filter <expression>`)
- Choose the output format (`--output <format>`)

The small lab we created above simulates two different “environments” (in real life these could be different applications, environments, or teams), with distinct Terraform states.
If we run driftctl with a single state, resources from the other state will be detected as drifts, or more precisely, unmanaged resources, which is not true and not what we want.
Here’s how to use driftctl with multiple states:
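The `--from` flag can be repeated, one `tfstate://` source per state file; the paths below are illustrative:

```shell
driftctl scan \
  --from tfstate://path/to/env1/terraform.tfstate \
  --from tfstate://path/to/env2/terraform.tfstate
```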
Alternatively, if you want to load a bunch of Terraform state files all at once, you can use a glob pattern (more examples in the docs).
In this exact scenario, with only two directories each containing one state, we can load them all at once using the following:
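A sketch of that single-glob invocation; the quotes keep the shell from expanding the pattern so driftctl can do its own globbing (paths are illustrative):

```shell
# Load every terraform.tfstate one directory level deep
driftctl scan --from "tfstate://path/to/*/terraform.tfstate"
```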
Let’s manually add a rule to the security group we created, so we can detect it using driftctl:
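One way to make that change outside of Terraform is the AWS CLI; the security group ID and CIDR below are placeholders, so grab the real ID from `terraform show` or the AWS console:

```shell
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 22 \
  --cidr 203.0.113.0/24
```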
Run Terraform from the folder where the security group is managed and confirm it doesn’t catch the manual change:
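A sketch, assuming the security group lives in the first environment (path is illustrative):

```shell
cd path/to/env1
terraform plan
# Terraform should not flag the extra rule,
# since nothing in its state maps to it
```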
Now run driftctl again:
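Run the same multi-state scan as before (paths illustrative); the manually added rule should now show up in the results:

```shell
driftctl scan \
  --from tfstate://path/to/env1/terraform.tfstate \
  --from tfstate://path/to/env2/terraform.tfstate
```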
Holy cow, driftctl caught the drift!
driftctl supports a `.driftignore` file that works just like a `.gitignore`: you simply add to this file all the resources you want to be ignored. To proceed with this step:
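For example, a throwaway bucket can be created out-of-band with the AWS CLI (the name below is illustrative; bucket names must be globally unique and lowercase):

```shell
aws s3 mb s3://randombucket514-driftctl-demo
```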
Obviously, as this bucket was created completely outside of Terraform's control, it can't be detected as a drift by Terraform, so we're in the dark, as expected.
Confirm driftctl detects the manually created bucket:
Open the `.driftignore` file at the root of the repository, where we execute driftctl, and add a line with the resource's type and ID (in my case: `aws_s3_bucket.randomBucket514`).
Now run driftctl again:
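Same scan as before (paths illustrative); with the `.driftignore` in place, the bucket should no longer appear in the results:

```shell
driftctl scan --from "tfstate://path/to/*/terraform.tfstate"
```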
Now the manually created S3 bucket is ignored forever!
It’s often helpful to be able to filter the output dynamically, to see results only for one type of resource (like only IAM users) or a specific tag (like only a specific environment).
Filtering in driftctl is implemented as in the AWS CLI: you won’t be lost! (hint: it’s JMESPath).
Let’s filter only VPC resources:
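With the JMESPath-style `--filter` flag, restricting the results to VPCs looks like this (state paths are illustrative):

```shell
driftctl scan \
  --from "tfstate://path/to/*/terraform.tfstate" \
  --filter "Type=='aws_vpc'"
```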
Let’s now filter only for anything matching the “app_env_1” “Environment” tag on EC2:
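A sketch of tag-based filtering; `Attr` exposes resource attributes in driftctl filter expressions, though the exact attribute path can vary by resource type, so check the docs for yours:

```shell
driftctl scan \
  --from "tfstate://path/to/*/terraform.tfstate" \
  --filter "Type=='aws_instance' && Attr.tags.Environment=='app_env_1'"
```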
You now can collect data only for the exact setup you want! Perfect for those reports 😉
We’ve covered driftctl human-readable output, but it’s also often useful to process it further. That’s why driftctl can output to JSON!
Here’s how to generate a JSON file directly:
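The `--output` flag takes a scheme-prefixed destination; here we write the report to a file named `result.json` (the filename is our choice, and the state paths are illustrative):

```shell
driftctl scan \
  --from "tfstate://path/to/*/terraform.tfstate" \
  --output json://result.json
```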
Take a look at the generated JSON file:
Now you can process this file using a processor like jq, for example to extract only the coverage percentage and perhaps send it to a database from which a graph can be generated:
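A minimal sketch with jq against a stand-in report, since your report's exact schema may differ (the top-level `coverage` key is an assumption; adjust the path to match your file):

```shell
# Stand-in report for illustration; a real one comes from the scan above
cat > result.json <<'EOF'
{"coverage": 75}
EOF

# Extract just the coverage percentage
jq '.coverage' result.json
```

From there, piping the value into a time-series database or a CSV is a one-liner.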
The possibilities are now endless!
Don’t forget to destroy the resources we created for this lab:
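A sketch of the cleanup, assuming the two illustrative environment directories used throughout:

```shell
# Destroy both environments (paths are illustrative)
(cd path/to/env1 && terraform destroy -auto-approve)
(cd path/to/env2 && terraform destroy -auto-approve)
```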
Delete the randomly named S3 bucket you created manually, and your AWS account is as clean as before.
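If you created the bucket with the AWS CLI, removing it is one command; `--force` empties the bucket first, and the name below is a placeholder for yours:

```shell
aws s3 rb s3://your-random-bucket-name --force
```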
We covered a lot in this driftctl advanced demo: