HashiCorp Terraform Associate Exam Dumps & Practice Test Questions

Question 1:

You have written a Terraform configuration that provisions infrastructure in AWS. After running terraform apply, you realize you need to add a new tag to all EC2 instances managed by your Terraform state. You update the configuration file to include the new tag, but when you run terraform apply again, Terraform does not detect any changes and does not update the existing EC2 instances with the new tag.

What is the most likely reason Terraform did not apply the tag update to the EC2 instances, and how should you fix this?

A. Terraform state is locked; you need to run terraform unlock before applying changes.
B. The EC2 instances were created outside of Terraform; import the instances into Terraform state using terraform import.
C. The resource in the configuration uses lifecycle { ignore_changes = ["tags"] }; remove or modify this block.
D. You need to run terraform refresh before terraform apply to update the state with current resource attributes.

Correct answer: C

Explanation:

This question tests your understanding of how Terraform manages resource attributes, state, and the lifecycle meta-argument, which are critical concepts for the HashiCorp Terraform Associate exam.

Option A:
Terraform uses a state lock to prevent concurrent modifications. A locked state would cause terraform apply to fail with an explicit locking error rather than silently skip updates. (There is also no terraform unlock command; the command for clearing a stuck lock is terraform force-unlock.) The problem here is that Terraform is not detecting the tag change at all, so locking is not the cause.

Option B:
If the EC2 instances had been created outside Terraform, they would not be in Terraform's state, and Terraform would plan to create new instances matching the configuration rather than update the existing ones. However, the question states the instances are managed by your Terraform state, so this does not apply here. Hence, this is incorrect.

Option C:
The resource in the configuration uses lifecycle { ignore_changes = ["tags"] }. This configuration tells Terraform to ignore any changes to the tags attribute on the resource during plans and applies. This means when you update the tags in the configuration, Terraform will not detect the difference and thus won't update the existing EC2 instances with the new tags. To fix this, you need to remove or adjust the ignore_changes setting to allow tag changes to be detected and applied.

Option D:
Running terraform refresh updates the state file with the current real-world resource attributes. While this can help detect drift, it does not force Terraform to apply configuration changes. Moreover, if ignore_changes is set on a resource, Terraform will still ignore changes to those attributes even after a refresh. Thus, this alone won't solve the issue.

The key concept is the Terraform lifecycle meta-argument ignore_changes, which lets you prevent Terraform from managing certain resource attributes after creation. This can be useful but can also cause confusion when updates do not propagate.

In this scenario, since tag updates are ignored, Terraform does not attempt to modify the EC2 instances with the new tags. Removing or modifying the ignore_changes block for tags will ensure Terraform detects the changes and applies the updates on subsequent terraform apply runs.
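As a sketch, a configuration exhibiting this behavior might look like the following (the resource name, AMI ID, and tag values are illustrative; note that current Terraform versions use attribute references like [tags] rather than quoted strings in ignore_changes):

```hcl
resource "aws_instance" "web" {
  ami           = "ami-0abcdef1234567890" # illustrative AMI ID
  instance_type = "t3.micro"

  tags = {
    Name        = "web-server"
    Environment = "production" # newly added tag -- silently ignored below
  }

  lifecycle {
    # Terraform will never plan an update for the tags attribute.
    # Remove this block (or narrow the list) so tag changes apply.
    ignore_changes = [tags]
  }
}
```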

Question 2:

You have a Terraform configuration that provisions an AWS VPC and related networking resources. After running terraform apply, you realize you want to use a different AWS region. You update the provider block with the new region and run terraform plan. However, Terraform shows no changes to be applied.

Why does Terraform not plan to change resources to the new region, and how can you ensure resources are provisioned in the new region?

A. Terraform only updates existing resources in the same region; you must destroy and re-create resources in the new region.
B. The provider block region cannot be changed once resources are created; update the Terraform backend configuration instead.
C. Terraform caches the region from the initial run; run terraform refresh to update the region.
D. The region attribute must be specified in each resource block for changes to take effect.

Correct answer: A

Explanation:

Terraform manages infrastructure per provider configuration, and the AWS region in the provider block dictates where resources are created. When you change the region in the provider block, Terraform does not automatically move existing resources to the new region. The existing resources continue to exist in the original region unless explicitly destroyed.

  • Option A is correct because Terraform does not migrate resources between regions. To use a new region, you must either:

    • Destroy existing resources in the old region and re-apply in the new region, or

    • Use separate Terraform states for different regions (e.g., via workspaces or separate configurations).
      Terraform’s state tracks resources by their region, so changing the region only affects new resources.

  • Option B is incorrect because while the backend manages where state is stored, it does not control the region where resources are deployed. The provider block’s region determines the actual AWS region for resources.

  • Option C is incorrect because terraform refresh updates the state with current resource attributes but does not change the region of resources or move resources.

  • Option D is incorrect because the region is specified in the provider block and inherited by all resources that use that provider configuration. Individual resource blocks do not take a region argument in the standard AWS provider; deploying to multiple regions requires additional provider configurations with aliases.
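One common multi-region pattern, sketched below, is to define an additional provider configuration with an alias and point specific resources at it (the alias, CIDR, and resource names are illustrative):

```hcl
provider "aws" {
  region = "us-east-1" # default region for most resources
}

provider "aws" {
  alias  = "west"
  region = "us-west-2"
}

# This VPC is created in us-west-2. Resources without a provider
# argument continue to use the default us-east-1 configuration.
resource "aws_vpc" "west_vpc" {
  provider   = aws.west
  cidr_block = "10.1.0.0/16"
}
```

Existing resources in the old region are still tracked by state and must be destroyed (or managed separately) if they are no longer wanted.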

Question 3:

You need to store sensitive values such as database passwords in Terraform. Which is the best practice for managing sensitive variables to avoid exposing them in logs and state files?

A. Use Terraform variable blocks with the sensitive = true attribute and store the values in plaintext files.
B. Hardcode sensitive values directly into Terraform configuration files but mark them as sensitive.
C. Use environment variables or secret management tools and mark variables as sensitive in Terraform.
D. Store sensitive values in remote backend state files with public read access for easy sharing.

Correct answer: C

Explanation:

Handling sensitive data securely in Terraform is critical for maintaining infrastructure security:

  • Option C is the best practice because:

    • You mark variables as sensitive in your Terraform configuration, which prevents Terraform from showing their values in CLI output or logs.

    • You provide actual secret values via environment variables or dedicated secret management tools (like Vault, AWS Secrets Manager, or SSM Parameter Store), so secrets are not stored in Terraform files or version control.

    • This reduces risk of accidental exposure.

  • Option A is only partly correct. While you can mark variables as sensitive = true, storing values in plaintext files is insecure and should be avoided.

  • Option B is a security risk because hardcoding secrets in configuration files can expose sensitive data in version control and logs despite the sensitive flag.

  • Option D is incorrect as making state files publicly readable exposes secrets and infrastructure metadata, causing serious security vulnerabilities.
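A minimal sketch of this pattern: declare the variable as sensitive and supply its value from outside the configuration, for example via Terraform's TF_VAR_ environment-variable convention (the variable and resource names are illustrative):

```hcl
variable "db_password" {
  description = "Master password for the database"
  type        = string
  sensitive   = true # redacts the value in plan/apply output
}

resource "aws_db_instance" "example" {
  identifier        = "app-db"
  engine            = "postgres"
  instance_class    = "db.t3.micro"
  allocated_storage = 20
  username          = "appadmin"
  password          = var.db_password # never hardcode this value
}
```

The value is then provided at runtime, e.g. `export TF_VAR_db_password=...`, or fetched from a secrets manager. Note that sensitive = true only redacts CLI output: the value is still written to the state file, so the state itself must be stored securely with restricted access.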

Question 4:

You want to share modules across your team with a version control system (VCS) integration to manage infrastructure code reuse. 

Which method should you use to manage and distribute Terraform modules efficiently?

A. Copy module code into each Terraform project manually.
B. Publish modules to the Terraform Registry or use a private module registry.
C. Store modules inside the root Terraform configuration directory.
D. Use remote-exec provisioners to download modules during terraform apply.

Correct answer: B

Explanation:

Terraform modules are reusable configurations, and managing them efficiently improves collaboration and maintainability:

  • Option B is correct because:

    • Publishing modules to the Terraform Registry (public or private) allows centralized version control, easy discovery, and consistent reuse.

    • Teams can use version constraints in their root modules to lock specific module versions.

    • Private registries provide security and control for proprietary modules.

  • Option A is poor practice because manually copying code leads to duplication, inconsistencies, and difficult maintenance.

  • Option C is possible but not scalable. Modules stored inside root configurations are local modules and do not facilitate easy sharing across projects.

  • Option D is incorrect because provisioners run during resource creation and are not designed for module distribution. Downloading modules at apply time can cause unpredictable and unreliable builds.
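For example, a root module consuming a registry module with a version constraint might look like this (terraform-aws-modules/vpc is a real public registry module; the version and input values shown are illustrative):

```hcl
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0" # pin to a major version for reproducible builds

  name = "team-vpc"
  cidr = "10.0.0.0/16"
}
```

A private registry works the same way, with the source pointing at the registry host, e.g. `app.terraform.io/<org>/<name>/<provider>`.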

Question 5:

Which of the following best describes the correct sequence of commands for deploying new infrastructure using Terraform?

A. Use terraform plan to import existing infrastructure into the state file, update the code, then run terraform apply to modify the infrastructure.
B. Write a Terraform configuration, execute terraform show to examine proposed changes, and then run terraform apply to create the infrastructure.
C. Use terraform import to bring existing infrastructure into the state file, adjust your configuration, and then run terraform apply to update the infrastructure.
D. Write your Terraform configuration, run terraform init to initialize the environment, use terraform plan to review the planned changes, and finally run terraform apply to create the infrastructure.

Correct Answer: D

Explanation:

Deploying new infrastructure with Terraform requires following a precise workflow to ensure the infrastructure is created and managed correctly. Let’s analyze why option D is the right approach and why the others fall short.

Option A is incorrect because terraform plan is not used to import existing infrastructure into the Terraform state. The terraform plan command generates an execution plan showing what actions Terraform will take based on the configuration and current state. It does not modify or import resources into the state file. Importing existing infrastructure requires the separate command terraform import. Also, the workflow in option A misses initializing Terraform and proper plan review.

Option B is wrong because terraform show displays the current state or a saved plan but does not preview what changes will occur if you apply the configuration. To see proposed infrastructure changes before applying, the correct command is terraform plan. Using terraform show before applying won’t inform you about what will change.

Option C correctly identifies terraform import as the method to add existing resources to the Terraform state. However, it’s incomplete as it doesn’t mention essential steps like running terraform init or generating a plan before applying changes. Also, terraform import is only necessary when managing pre-existing resources, not when deploying new infrastructure from scratch.

Option D outlines the full and proper workflow for deploying infrastructure:

  1. Write Terraform configuration files describing the desired infrastructure.

  2. Run terraform init to initialize the working directory and download providers.

  3. Run terraform plan to generate and review an execution plan that details what changes Terraform will make.

  4. Finally, execute terraform apply to provision the resources based on the plan.

Following this sequence ensures Terraform is properly set up, changes are reviewed before being made, and infrastructure is deployed in a controlled, predictable manner. Therefore, D is the correct answer.
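The four steps above correspond to the following command sequence (a sketch; the -out flag is optional but recommended so that apply executes exactly the plan that was reviewed):

```shell
terraform init              # initialize backend, download providers and modules
terraform plan -out=tfplan  # preview changes and save the execution plan
terraform apply tfplan      # apply exactly what was reviewed
```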

Question 6:

You have a Terraform null resource called null_resource.run_script that uses a local-exec provisioner to run a script. You want to rerun this script. Which command should you execute first to ensure the script runs again?

A. terraform taint null_resource.run_script
B. terraform apply -target=null_resource.run_script
C. terraform validate null_resource.run_script
D. terraform plan -target=null_resource.run_script

Correct Answer: A

Explanation:

In Terraform, when you use a local-exec provisioner inside a null_resource, the script or command runs only when the resource is created or recreated. If you want to rerun the script without changing the resource’s configuration, you need to instruct Terraform to treat the resource as needing replacement. This is achieved by “tainting” the resource.

The terraform taint command marks a resource as tainted, signaling Terraform that the resource is in a bad or outdated state and needs to be destroyed and recreated on the next apply operation. By running terraform taint null_resource.run_script, you mark the null resource so that when you next run terraform apply, Terraform will destroy and recreate this resource. Because the null resource is recreated, the local-exec provisioner triggers again, rerunning the script.

Looking at the other options:

  • B, terraform apply -target=null_resource.run_script, applies changes only to that specific resource. However, if the resource isn't marked as tainted, Terraform assumes no changes are necessary and won’t rerun the provisioner. This command alone won’t guarantee rerunning the script.

  • C, terraform validate null_resource.run_script, simply checks the configuration files for syntax errors. It does not affect resource state or trigger any provisioner executions.

  • D, terraform plan -target=null_resource.run_script, generates an execution plan for the targeted resource but doesn’t change state or cause the provisioner to run. It’s used for previewing changes but does not perform any actions.

In practice, tainting the resource is the standard and most direct method to force re-execution of provisioners attached to a null resource, especially local-exec. This approach ensures consistency with Terraform’s lifecycle management and avoids manual interventions outside Terraform’s control. Note that since Terraform 0.15.2, terraform taint is deprecated in favor of terraform apply -replace=null_resource.run_script, which plans and forces the replacement in a single step; for this question, terraform taint remains the expected answer.
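A sketch of such a resource (the script path is illustrative); the optional triggers map shown here is another common way to force a rerun, since changing any trigger value replaces the resource:

```hcl
resource "null_resource" "run_script" {
  # Changing this value on a later apply also forces recreation,
  # rerunning the provisioner without needing taint or -replace.
  triggers = {
    run_id = "1"
  }

  provisioner "local-exec" {
    command = "./scripts/bootstrap.sh" # illustrative script path
  }
}
```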

Question 7:

Which Terraform provisioner runs commands directly on the resource that Terraform has just created?

A. remote-exec
B. null-exec
C. local-exec
D. file

Correct Answer: A

Explanation:

Terraform provisioners enable you to execute scripts or commands during resource creation or modification, often to perform configuration tasks that cannot be managed solely by the infrastructure-as-code resources. However, not all provisioners operate in the same context or location.

The remote-exec provisioner is designed explicitly to run commands on the newly created resource itself, typically a remote server or virtual machine. After Terraform creates a resource (e.g., an EC2 instance), remote-exec connects to it—using SSH for Linux or WinRM for Windows—and runs the specified commands directly on that machine. This allows you to automate initial configuration, install software, or perform setup steps that require direct access to the target resource.

Why the other options are incorrect:

  • null-exec is not a standard Terraform provisioner. Sometimes people confuse it with the null resource pattern or custom scripts, but it doesn’t exist as a native provisioner and doesn’t execute on the resource.

  • local-exec runs commands locally on the machine where Terraform is executed, not on the created resource. It is useful for actions like sending notifications, updating local files, or running scripts that interact with Terraform outputs but not for configuring the target resource.

  • file provisioner uploads files from the local machine to the resource but does not run any commands or processes on the resource itself. It handles file transfer only.

Therefore, the remote-exec provisioner is uniquely suited for triggering processes and running commands directly on resources created by Terraform, making it the correct answer.
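A sketch of remote-exec attached to an EC2 instance (the AMI ID, key name, and commands are illustrative); the connection block tells Terraform how to reach the new machine:

```hcl
resource "aws_instance" "app" {
  ami           = "ami-0abcdef1234567890" # illustrative AMI ID
  instance_type = "t3.micro"
  key_name      = "deploy-key"

  # remote-exec connects to the new instance over SSH and runs
  # these commands on the instance itself.
  provisioner "remote-exec" {
    inline = [
      "sudo apt-get update",
      "sudo apt-get install -y nginx",
    ]
  }

  connection {
    type        = "ssh"
    user        = "ubuntu"
    private_key = file("~/.ssh/deploy-key.pem")
    host        = self.public_ip
  }
}
```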

Question 8:

Which statement about Terraform providers is incorrect?

A. Individuals can develop Terraform providers.
B. Terraform providers can be supported and updated by community members.
C. HashiCorp is responsible for maintaining some Terraform providers.
D. Both major cloud vendors and non-cloud companies can create, maintain, or contribute to Terraform providers.
E. None of the above statements are false.

Correct Answer: E

Explanation:

Terraform providers are integral plugins that enable Terraform to communicate with various APIs and services, managing infrastructure ranging from cloud platforms to third-party tools. Evaluating each statement will clarify why option E — none of the above — is correct.

Statement A is accurate because Terraform’s open framework allows individuals to write their own providers. This capability empowers users to extend Terraform’s reach to services not natively supported. These individual authors can even publish their providers on the Terraform Registry, making them publicly available.

Statement B is also true. Many Terraform providers are open-source projects maintained by communities. These collaborative efforts ensure providers stay current and functional, with contributors from across the world improving code quality and fixing issues.

Statement C is correct as well. HashiCorp, the company behind Terraform, maintains several official providers, especially for leading cloud providers like AWS, Azure, and Google Cloud. This maintenance ensures reliability and alignment with service updates.

Statement D reflects reality too. Major cloud vendors and non-cloud companies such as Datadog or VMware often write or contribute to Terraform providers. This collaboration ensures seamless integration with their services, helping users manage diverse infrastructure from a single tool.

Given all the above statements are true, option E is the right answer. The flexibility and community-driven nature of Terraform providers contribute greatly to Terraform’s widespread adoption. Providers can originate from individuals, community groups, HashiCorp, or vendors, creating a rich ecosystem supporting a wide range of infrastructure needs.

Question 9:

What is the initial Terraform command that must be executed when running Terraform in a new configuration directory for the first time?

A. terraform import
B. terraform init
C. terraform plan
D. terraform workspace

Correct Answer: B

Explanation:

When you begin working with Terraform in a new directory containing configuration files, the very first command you need to execute is terraform init. This command is fundamental because it prepares your working environment for all subsequent Terraform operations.

One of the primary roles of terraform init is to initialize the backend. The backend is where Terraform stores its state file — the snapshot of your infrastructure’s current condition. This backend can be local or remote, such as Amazon S3 or Terraform Cloud. Initializing it ensures Terraform knows where and how to manage this critical data.

Another vital function of terraform init is to download provider plugins. Terraform relies on providers—like AWS, Azure, or Google Cloud—to communicate with their respective APIs. Without these provider plugins installed, Terraform cannot create or modify resources. The terraform init command automatically fetches and installs the necessary providers specified in your configuration.

Additionally, if your Terraform code uses modules—reusable chunks of configuration—terraform init downloads and sets up these modules, whether they are locally stored or retrieved from remote repositories like the Terraform Registry or GitHub.

Only after completing these initializations is your working directory ready for other commands such as terraform plan (to preview changes) or terraform apply (to deploy infrastructure).

Other options do not serve this purpose: terraform import brings existing resources under Terraform management and is not the first step; terraform plan requires prior initialization; and terraform workspace manages multiple environments but is not needed at the start.

In summary, terraform init is the essential first command that configures your Terraform environment by initializing the backend, installing providers, and setting up modules, making it indispensable for a successful Terraform workflow.
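For reference, terraform init reads settings like these from the configuration to decide which backend to initialize and which provider plugins to install (the bucket name, key, and version constraint are illustrative):

```hcl
terraform {
  # Backend: where the state file lives. init configures this.
  backend "s3" {
    bucket = "my-terraform-state" # illustrative bucket name
    key    = "network/terraform.tfstate"
    region = "us-east-1"
  }

  # Providers: init downloads plugins matching these constraints.
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}
```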

Question 10:

You have deployed infrastructure using Terraform but forgot to define any output values in your configuration files. Now you need to quickly find the public DNS name of an AWS EC2 instance managed by Terraform. 

What is the best way to obtain this information without modifying your existing code or redeploying resources?

A. Use the command terraform output public_dns to retrieve the DNS name.
B. Create a new Terraform configuration that uses the terraform_remote_state data source to access the existing state file, then define outputs for the public DNS name.
C. Run terraform state list to identify the EC2 instance resource name, then use terraform state show <resource_name> to view detailed attributes, including the public DNS name.
D. Destroy the current infrastructure using terraform destroy, then run terraform apply to recreate it, and check the public DNS name in the output logs.

Correct Answer: C

Explanation:

When you deploy infrastructure using Terraform, the state file holds detailed information about every resource Terraform manages. This includes computed attributes such as public IP addresses, DNS names, and other metadata generated by the cloud provider. If you forget to declare outputs in your Terraform configuration, you can still retrieve these values directly from the state file.

Option C is the most effective and safe method. The terraform state list command shows all resources currently tracked in your state file. You can use this to identify the exact resource name for your EC2 instance, for example aws_instance.my_web_server. After identifying the resource, the terraform state show <resource_name> command displays all the resource’s attributes as stored in the state file. Among these attributes, the public DNS name or IP address is typically included, allowing you to retrieve the information immediately without altering your code or redeploying infrastructure.

Option A will not work if you have not defined an output for the public DNS name in your Terraform configuration. Terraform outputs are user-defined and only available if explicitly declared; otherwise, this command returns an error or no data.

Option B involves creating a new Terraform project that references your existing state through the terraform_remote_state data source. While this can work, it is unnecessarily complex for quickly retrieving information from your existing deployment. It requires setting up a new configuration and managing outputs, which is more effort than needed for a quick lookup.

Option D is highly discouraged because destroying and recreating resources just to obtain their attributes is disruptive, risks data loss, and causes downtime. It’s inefficient and violates best practices for infrastructure management.

In conclusion, querying the Terraform state directly via commands in Option C is the fastest, safest, and most straightforward way to find deployment-specific information when outputs are not defined.
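The lookup in Option C sketched as a command sequence (the resource address and output are illustrative):

```shell
terraform state list
# example output (illustrative):
#   aws_instance.my_web_server

terraform state show aws_instance.my_web_server
# prints all attributes from state, including (illustrative):
#   public_dns = "ec2-203-0-113-10.compute-1.amazonaws.com"
```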

