Amazon AWS Certified DevOps Engineer - Professional DOP-C02 Exam Dumps & Practice Test Questions

Question 1:

A company’s mobile app sends HTTP API requests through an Application Load Balancer (ALB), which forwards these requests to an AWS Lambda function. Multiple versions of the app are simultaneously in use, including some under testing by select users. The app version is specified in the user-agent header included in all API calls. Recently, the API experienced issues after updates. The company wants to track metrics for each API operation by response code and app version. The Lambda function has been updated to extract the API operation name, version from the user-agent header, and response code. 

What further steps should the DevOps engineer take to collect the required metrics?

A. Modify Lambda to log the operation name, response code, and version in CloudWatch Logs. Use a CloudWatch Logs metric filter to create metrics by operation name, with response code and version as dimensions.
B. Modify Lambda to log the data in CloudWatch Logs. Use CloudWatch Logs Insights to query logs and populate metrics by response code and version.
C. Enable ALB access logs in CloudWatch Logs. Modify Lambda to send operation name, response code, and version in response metadata to ALB. Create a metric filter on CloudWatch Logs using ALB logs.
D. Enable AWS X-Ray on Lambda. Modify Lambda to create X-Ray subsegments with operation name, response code, and version. Use X-Ray insights to generate aggregated metrics and publish to CloudWatch with dimensions.

Correct answer: A

Explanation:

The goal is to collect detailed metrics on API operations, categorized by response code and app version. The Lambda function already extracts the necessary information (operation name, response code, version) during execution. The key is to efficiently capture these details as metrics in real-time for monitoring and troubleshooting.

Option A is the best approach because it leverages CloudWatch Logs and metric filters, which are designed for extracting structured information from log data and converting it into custom metrics. The Lambda function logs each request’s details, and metric filters on those logs extract counts per API operation, response code, and version. This method provides an automated, real-time metric creation process that integrates well with CloudWatch monitoring dashboards and alarms.
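
As a rough sketch (the log group, operation name, and JSON field names are hypothetical), the Lambda handler can emit one structured log line per request, and one metric filter per API operation can turn those lines into a CloudWatch metric with response code and version as dimensions:

    import json
    import boto3

    def handler(event, context):
        # ... existing request handling ...
        operation = "GetOrders"                                        # parsed from the request (hypothetical)
        user_agent = event["headers"].get("user-agent", "unknown")
        version = user_agent.split("/")[-1]                            # assumption: version is the last user-agent token
        status_code = 200
        # One structured log line per request; the metric filter below matches these fields.
        print(json.dumps({"operation": operation, "responseCode": status_code, "version": version}))
        return {"statusCode": status_code, "body": "..."}

    def create_metric_filter():
        """One-time setup, run from a deployment script rather than inside the Lambda."""
        logs = boto3.client("logs")
        logs.put_metric_filter(
            logGroupName="/aws/lambda/mobile-api",
            filterName="GetOrdersMetrics",
            filterPattern='{ $.operation = "GetOrders" }',
            metricTransformations=[{
                "metricName": "GetOrders",
                "metricNamespace": "MobileApp/API",
                "metricValue": "1",                                    # count one per matching request
                "dimensions": {                                        # dimension values read from the log fields
                    "ResponseCode": "$.responseCode",
                    "Version": "$.version",
                },
            }],
        )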

Option B involves using CloudWatch Logs Insights, which is a powerful querying tool for log analysis but not ideal for automated metric generation. Logs Insights requires manual or scheduled queries to derive metrics, making it less practical for continuous monitoring compared to metric filters.

Option C suggests using ALB access logs combined with Lambda response metadata. While ALB logs provide request-level data, they don’t inherently include application-level details such as API operation names or version numbers unless explicitly included, which adds complexity. Also, capturing metrics directly from Lambda logs is more direct and less error-prone.

Option D recommends AWS X-Ray tracing for metric generation. X-Ray excels at tracing and performance analysis but is not specifically built for generating granular, real-time custom metrics by multiple dimensions such as API version and response code. Using X-Ray for this purpose adds unnecessary overhead.

In summary, logging structured information in CloudWatch Logs from Lambda and using metric filters (Option A) is a straightforward, scalable, and effective way to generate detailed application metrics needed for monitoring multiple app versions and API operations.

Question 2:

A company offers an application accessed through an Amazon API Gateway REST API, which invokes an AWS Lambda function. When initialized, the Lambda function loads a large dataset from a DynamoDB table. This causes cold starts lasting 8–10 seconds. The DynamoDB table uses DynamoDB Accelerator (DAX) for caching. The application receives thousands of requests daily, with a peak tenfold increase midday and a significant drop near day's end. Customers complain about intermittent high latency. 

How should a DevOps engineer reduce Lambda function latency throughout the day?

A. Set provisioned concurrency to 1 for the Lambda function and delete the DAX cluster.
B. Set reserved concurrency to 0 for the Lambda function.
C. Enable provisioned concurrency with AWS Application Auto Scaling on the Lambda function, setting min to 1 and max to 100.
D. Configure reserved concurrency for Lambda and enable Application Auto Scaling on API Gateway with a max reserved concurrency of 100.

Correct answer: C

Explanation:

The core issue is the Lambda function’s cold-start delay, worsened by a heavy initialization workload (loading large data from DynamoDB). The function must handle highly variable traffic patterns: steady low traffic with a midday spike, then tapering off. The goal is to maintain low latency regardless of traffic fluctuations.

Option C is the ideal solution because provisioned concurrency keeps a specified number of Lambda instances pre-initialized and ready to serve requests, eliminating cold starts and thus minimizing latency. By combining provisioned concurrency with AWS Application Auto Scaling, the concurrency level can dynamically adjust between a minimum (e.g., 1) and maximum (e.g., 100), matching demand fluctuations. This ensures cost efficiency while maintaining performance during peak and off-peak periods.
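
A minimal sketch of that configuration with boto3, assuming the function is published behind an alias named live and the function name checkout-api is hypothetical:

    import boto3

    aas = boto3.client("application-autoscaling")

    # Provisioned concurrency scales on an alias or version, never on $LATEST.
    aas.register_scalable_target(
        ServiceNamespace="lambda",
        ResourceId="function:checkout-api:live",
        ScalableDimension="lambda:function:ProvisionedConcurrency",
        MinCapacity=1,
        MaxCapacity=100,
    )

    # Target tracking keeps provisioned-concurrency utilization near 70%, adding
    # pre-warmed instances ahead of the midday peak and releasing them afterwards.
    aas.put_scaling_policy(
        PolicyName="pc-utilization",
        ServiceNamespace="lambda",
        ResourceId="function:checkout-api:live",
        ScalableDimension="lambda:function:ProvisionedConcurrency",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "TargetValue": 0.7,
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "LambdaProvisionedConcurrencyUtilization"
            },
        },
    )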

Option A proposes provisioned concurrency but fixes it at 1 instance, which is insufficient for handling large midday traffic spikes. Additionally, deleting the DAX cluster would remove a key caching layer that speeds up DynamoDB reads, likely increasing latency rather than reducing it.

Option B suggests setting reserved concurrency to 0, which disables the Lambda function entirely, preventing it from serving any requests—clearly an invalid solution.

Option D recommends reserved concurrency (which only caps concurrency) combined with scaling on API Gateway, which does not directly influence Lambda cold starts. Reserved concurrency manages the maximum simultaneous executions but does not eliminate cold start latency. Scaling API Gateway does not solve the Lambda initialization delay either.

Thus, the best practice is to enable provisioned concurrency on Lambda and let Application Auto Scaling adjust it according to real-time traffic patterns. This approach guarantees pre-warmed Lambda instances during demand peaks and scales down to reduce costs during low usage, effectively addressing both latency and scalability.

Question 3:

A company is implementing AWS CodeDeploy to automate deployments of a Java-Apache Tomcat application running alongside an Apache Webserver. They began with a proof of concept, setting up a deployment group for their developer environment and successfully running functional tests. The plan is to later add deployment groups for staging and production. Currently, the Apache server’s log level is configured statically, but the team wants to dynamically adjust the log level depending on which deployment group the code is deployed to—without creating separate application revisions or managing multiple script versions. 

How can this be achieved with minimal operational overhead?

A. Tag EC2 instances by deployment group, use a script within the app revision that queries instance metadata and EC2 API to identify the group, then configure log levels. Run this script in the AfterInstall lifecycle hook.
B. Write a script that uses the CodeDeploy environment variable DEPLOYMENT_GROUP_NAME to detect the deployment group, configure log levels accordingly, and run this script during the BeforeInstall lifecycle hook.
C. Create custom CodeDeploy environment variables per environment, write a script that reads these variables to set log levels, and invoke it during the ValidateService lifecycle hook.
D. Use the environment variable DEPLOYMENT_GROUP_ID in a script to determine the deployment group and configure logs, running it in the Install lifecycle hook.

Answer: B

Explanation:

The core challenge is to dynamically adjust Apache’s log level based on the deployment group without multiplying application revisions or scripts, thus minimizing management complexity.

Option A suggests tagging EC2 instances and using instance metadata and EC2 API calls to determine the deployment group. While possible, this adds unnecessary complexity: every instance needs IAM permissions to describe its own tags, and the script must make external API calls that can fail, all to discover a value that CodeDeploy already exposes to its lifecycle hooks.

Option B leverages a built-in CodeDeploy environment variable called DEPLOYMENT_GROUP_NAME, which automatically identifies the current deployment group. Writing a script that reads this variable to adjust log levels is both clean and efficient. Using the BeforeInstall lifecycle hook ensures configuration is set before the application is deployed, aligning perfectly with requirements. This method avoids multiple scripts or revisions, maintaining simplicity and low overhead.
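
A minimal BeforeInstall hook along these lines, shown in Python; the deployment group names, log-level mapping, and Apache configuration path are assumptions to adapt to the actual environments:

    #!/usr/bin/env python3
    """BeforeInstall hook: set the Apache LogLevel from the CodeDeploy deployment group."""
    import os
    import re

    # CodeDeploy exports DEPLOYMENT_GROUP_NAME to lifecycle hook scripts.
    group = os.environ.get("DEPLOYMENT_GROUP_NAME", "")

    # Hypothetical mapping and config path; extend as staging/production groups are added.
    log_levels = {"developer-group": "debug", "staging-group": "info", "production-group": "warn"}
    level = log_levels.get(group, "warn")

    conf_path = "/etc/httpd/conf/httpd.conf"
    with open(conf_path) as f:
        conf = f.read()
    with open(conf_path, "w") as f:
        f.write(re.sub(r"^LogLevel\s+\S+", f"LogLevel {level}", conf, flags=re.MULTILINE))

The same script, referenced once from the appspec.yml hooks section, travels with every revision, so no per-environment scripts or revisions are needed.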

Option C involves creating custom environment variables per environment, which increases manual setup and management effort. Also, the ValidateService hook is meant for validating deployments, not configuration changes, making it a less suitable lifecycle stage.

Option D references DEPLOYMENT_GROUP_ID, which is available to hook scripts but is an opaque identifier rather than a readable name, so mapping it to an environment requires extra lookups or hard-coded IDs. More importantly, the Install lifecycle event is reserved for the CodeDeploy agent to copy the revision files onto the instance; custom scripts cannot be attached to it.

Therefore, Option B offers the best balance of simplicity, efficiency, and alignment with CodeDeploy best practices, ensuring dynamic log level configuration with minimal management effort.

Question 4:

A company mandates that all Amazon EBS volumes in its account must be tagged with a Backup_Frequency tag, specifying values like none, daily, or weekly to indicate backup frequency preferences. Sometimes developers forget to apply these tags. The company wants to ensure that every EBS volume has this tag, defaulting to weekly if none is specified, to guarantee at least weekly backups. 

What is the best solution to enforce and automatically remediate missing tags?

A. Use AWS Config with a custom rule checking for the Backup_Frequency tag on all EC2 resources, plus a remediation runbook that tags volumes with weekly.
B. Use AWS Config with a managed rule targeting EC2::Volume resources missing the Backup_Frequency tag and a remediation runbook that applies the tag with value weekly.
C. Enable CloudTrail and create an EventBridge rule for EBS CreateVolume events that triggers a runbook to tag volumes with weekly.
D. Enable CloudTrail and create an EventBridge rule for both EBS CreateVolume and ModifyVolume events that triggers a runbook to tag volumes with weekly.

Answer: B

Explanation:

The requirement is to ensure that all EBS volumes have a Backup_Frequency tag, defaulting to weekly if absent. This must be enforced automatically, minimizing manual intervention and ensuring compliance.

Option A proposes using AWS Config with a custom rule monitoring all EC2 resources, applying remediation if the tag is missing. While it can work, custom rules require more development effort and maintenance. Monitoring all EC2 resources instead of specifically EBS volumes may lead to unnecessary checks and overhead.

Option B is the optimal choice. It utilizes an AWS Config managed rule designed to check for tags specifically on EBS volumes (EC2::Volume). Managed rules require less maintenance and are optimized for this use case. When the rule detects a missing Backup_Frequency tag, a remediation action is triggered via an AWS Systems Manager Automation runbook that applies the weekly tag automatically. This ensures continuous compliance and removes reliance on developers to manually tag volumes.
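
A sketch of that setup with boto3; the rule name is hypothetical, and the remediation parameters are omitted because they must match the input schema of the AWS-SetRequiredTags Automation runbook:

    import json
    import boto3

    config = boto3.client("config")

    # Managed rule REQUIRED_TAGS, scoped to EBS volumes only.
    config.put_config_rule(
        ConfigRule={
            "ConfigRuleName": "ebs-backup-frequency-tag",
            "Source": {"Owner": "AWS", "SourceIdentifier": "REQUIRED_TAGS"},
            "Scope": {"ComplianceResourceTypes": ["AWS::EC2::Volume"]},
            "InputParameters": json.dumps({"tag1Key": "Backup_Frequency"}),
        }
    )

    # Automatic remediation: tag non-compliant volumes via the AWS-owned runbook.
    config.put_remediation_configurations(
        RemediationConfigurations=[{
            "ConfigRuleName": "ebs-backup-frequency-tag",
            "TargetType": "SSM_DOCUMENT",
            "TargetId": "AWS-SetRequiredTags",
            "Automatic": True,
            "MaximumAutomaticAttempts": 3,
            "RetryAttemptSeconds": 60,
            # "Parameters": map the non-compliant volume (RESOURCE_ID), the Backup_Frequency=weekly
            # tag, and an AutomationAssumeRole here, following the runbook's parameter schema.
        }]
    )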

Option C uses CloudTrail to capture volume creation events and an EventBridge rule to trigger tagging. However, this only addresses volumes at creation, ignoring volumes modified later or existing untagged volumes, leading to incomplete enforcement.

Option D improves on C by including modification events, but even then, it depends on reactive event processing and may miss volumes that were created before CloudTrail was enabled or volumes that remain untagged due to other reasons. It is less robust than AWS Config’s continuous compliance monitoring.

Overall, Option B is the most reliable, scalable, and maintainable approach to ensure all EBS volumes are properly tagged with Backup_Frequency, facilitating proper backup scheduling without manual overhead.

Question 5:

You are a DevOps engineer tasked with ensuring that an Amazon Aurora database cluster maintains high availability and minimal downtime during an upcoming maintenance window. The application must handle both read and write operations efficiently during this period. 

What is the best approach to achieve this?

A. Add a reader instance to the Aurora cluster. Configure the application to use the cluster endpoint for writes and the reader endpoint for reads.
B. Add a reader instance to the Aurora cluster. Create a custom ANY endpoint for the cluster and configure the application to use it for both reads and writes.
C. Enable Multi-AZ on the Aurora cluster. Configure the application to use the cluster endpoint for writes and the reader endpoint for reads.
D. Enable Multi-AZ on the Aurora cluster. Create a custom ANY endpoint and configure the application to use it for both reads and writes.

Answer: A

Explanation:

The key objective here is to maximize availability and reduce downtime during maintenance, while supporting both read and write operations efficiently.

Option A is the most appropriate because adding a reader instance to the Aurora cluster enables read scalability and fault tolerance. By configuring the application to write to the primary cluster endpoint and read from the reader endpoint, you effectively separate write and read workloads. This separation ensures that even if the primary instance undergoes maintenance or fails over, the reader instance can continue serving read requests without interruption. Aurora automatically handles failover between primary and reader instances, maintaining database availability.
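
As an illustration only (the endpoints, credentials, table, and the pymysql driver are all assumptions), the application can hold both endpoints and route by intent:

    import pymysql  # assumes an Aurora MySQL-compatible cluster

    # Aurora creates both endpoints automatically; these names are placeholders.
    WRITER_ENDPOINT = "myapp.cluster-abc123.us-east-1.rds.amazonaws.com"     # cluster (writer) endpoint
    READER_ENDPOINT = "myapp.cluster-ro-abc123.us-east-1.rds.amazonaws.com"  # reader endpoint

    def get_connection(read_only: bool):
        # Reads go to the replica via the reader endpoint, so they keep flowing
        # while the writer is in maintenance or failing over.
        host = READER_ENDPOINT if read_only else WRITER_ENDPOINT
        return pymysql.connect(host=host, user="app", password="***", database="orders")

    with get_connection(read_only=False) as conn, conn.cursor() as cur:
        cur.execute("INSERT INTO orders (sku, qty) VALUES (%s, %s)", ("ABC-1", 2))
        conn.commit()

    with get_connection(read_only=True) as conn, conn.cursor() as cur:
        cur.execute("SELECT COUNT(*) FROM orders")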

Option B is less suitable because the custom ANY endpoint directs both reads and writes to any available instance. This means write operations could be routed to a read-only replica, causing failures or data inconsistencies. It does not guarantee separation of read and write traffic, which is critical for availability during maintenance.

Option C involves enabling Multi-AZ. For Aurora, this does not create an RDS-style synchronous standby instance; high availability comes from placing Aurora Replicas in additional Availability Zones, which is what option A does explicitly. Relying on a Multi-AZ setting alone does not give the application a reader endpoint to offload reads, so it does not address read scalability or continuous read availability during maintenance, and failover can still cause temporary interruptions for the application.

Option D combines Multi-AZ with a custom ANY endpoint. Like option B, this endpoint mixes read/write traffic, which can lead to operational issues. Also, Multi-AZ doesn’t help with read scalability.

In summary, Option A offers the best balance of performance, fault tolerance, and ease of administration during maintenance by using built-in Aurora features to direct writes to the primary instance and reads to a separate replica.

Question 6:

Your company requires that all Amazon Machine Images (AMIs) shared across AWS accounts be encrypted. You have an unencrypted custom AMI in a source account and must share it with a target account, where an EC2 Auto Scaling group will launch instances using this AMI. The source account has a customer-managed AWS KMS key. 

Which three steps should you take to meet these encryption and sharing requirements?

A. Copy the unencrypted AMI in the source account to an encrypted AMI, specifying the custom KMS key during the copy.
B. Copy the unencrypted AMI in the source account to an encrypted AMI, using the default EBS encryption key.
C. Create a KMS grant in the source account that allows the target account’s Auto Scaling service-linked role to use the KMS key.
D. Modify the key policy in the source account to allow the target account to create a grant, then have the target account create the KMS grant.
E. Share the original unencrypted AMI with the target account.
F. Share the encrypted AMI with the target account.

Answer: A, C, F

Explanation:

To comply with the company’s encryption policy while sharing AMIs across accounts, the process must both encrypt the AMI with the custom KMS key and allow the target account to use that encrypted AMI.

Step A is essential because the original AMI is unencrypted. Copying the AMI and specifying the custom KMS key encrypts the AMI at rest, meeting the encryption requirement. The copy action creates a new AMI that uses the customer-managed KMS key for its EBS volumes.

Step C is required to enable the Auto Scaling group in the target account to launch instances using the encrypted AMI. Creating a KMS grant in the source account delegates permission to the service-linked role of Auto Scaling in the target account to use the KMS key for decrypting the AMI when launching instances. Without this grant, the target account won’t be able to access the encrypted data.

Step F ensures the encrypted AMI is shared with the target account. The sharing action is necessary so the target account can see and use the AMI in their Auto Scaling configuration.
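
A boto3 sketch of steps A, C, and F run from the source account; the account IDs, AMI ID, and key ARN are placeholders:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    kms = boto3.client("kms", region_name="us-east-1")

    TARGET_ACCOUNT = "222233334444"
    KMS_KEY_ARN = "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"

    # Step A: copy the unencrypted AMI to an encrypted AMI backed by the customer managed key.
    copy = ec2.copy_image(
        Name="app-ami-encrypted",
        SourceImageId="ami-0123456789abcdef0",
        SourceRegion="us-east-1",
        Encrypted=True,
        KmsKeyId=KMS_KEY_ARN,
    )
    encrypted_ami = copy["ImageId"]
    ec2.get_waiter("image_available").wait(ImageIds=[encrypted_ami])  # the copy is asynchronous

    # Step C: let the target account's Auto Scaling service-linked role use the key.
    kms.create_grant(
        KeyId=KMS_KEY_ARN,
        GranteePrincipal=(
            f"arn:aws:iam::{TARGET_ACCOUNT}:role/aws-service-role/"
            "autoscaling.amazonaws.com/AWSServiceRoleForAutoScaling"
        ),
        Operations=[
            "Decrypt", "DescribeKey", "CreateGrant",
            "GenerateDataKeyWithoutPlaintext", "ReEncryptFrom", "ReEncryptTo",
        ],
    )

    # Step F: share the encrypted AMI with the target account.
    ec2.modify_image_attribute(
        ImageId=encrypted_ami,
        LaunchPermission={"Add": [{"UserId": TARGET_ACCOUNT}]},
    )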

The other options have drawbacks:

  • B uses the default EBS key, which doesn’t meet the requirement for using the company’s custom KMS key.

  • D unnecessarily complicates permissions by modifying the key policy to delegate grant creation to the target account. The best practice is to create the grant directly in the source account (step C).

  • E violates the encryption mandate by sharing an unencrypted AMI.

In conclusion, the combination of copying the AMI with encryption (A), delegating KMS permissions via a grant (C), and sharing the encrypted AMI (F) is the correct and secure approach to fulfill company policy while enabling cross-account AMI usage.

Question 7:

A company uses AWS CodePipeline to automate its application releases, with stages for build, test, and deployment. Previously, each stage used separate AWS CodeBuild projects. Now, the company wants to use AWS CodeDeploy specifically for the deployment stage. The application is packaged as an RPM and must be deployed to Amazon EC2 instances within an Auto Scaling group, all launched from a shared AMI.

Which two steps should a DevOps engineer take to fulfill these deployment requirements? (Choose two.)

A. Create a new AMI version including the CodeDeploy agent and update the EC2 instance IAM role for CodeDeploy access.
B. Create a new AMI version with the CodeDeploy agent installed and create an AppSpec file that defines deployment scripts and permissions for CodeDeploy.
C. Create a CodeDeploy application with an in-place deployment targeting the Auto Scaling group, add a pipeline step using EC2 Image Builder to create a new AMI, and configure CodeDeploy to deploy the new AMI.
D. Create a CodeDeploy application with an in-place deployment targeting the Auto Scaling group and modify CodePipeline to use the CodeDeploy action for deployment.
E. Create a CodeDeploy application with an in-place deployment targeting the individual EC2 instances launched from the AMI and update CodePipeline to use the CodeDeploy action.

Answer: B, D

Explanation:

The goal is to integrate AWS CodeDeploy into the existing pipeline’s deployment stage, targeting EC2 instances in an Auto Scaling group. The application is packaged as an RPM, and instances are launched from a common AMI, which must be prepared for CodeDeploy.

Option A addresses creating an AMI with the CodeDeploy agent and updating IAM roles, which are necessary but incomplete steps. Without an AppSpec file defining deployment scripts and permissions, CodeDeploy cannot orchestrate the deployment properly. Thus, A alone is insufficient.

Option B correctly includes creating the new AMI with the CodeDeploy agent and producing the AppSpec file, which tells CodeDeploy how to install the RPM package and run lifecycle scripts. This step is essential for a smooth deployment.

Option C complicates the process by involving EC2 Image Builder to create a new AMI for every deployment, which isn’t required here. The deployment should focus on installing the application package, not rebuilding AMIs continually. This introduces unnecessary complexity.

Option D completes the setup by creating a CodeDeploy application configured for in-place deployment (updating existing instances) targeting the Auto Scaling group, and updating CodePipeline to use CodeDeploy for deployment. This ensures the pipeline integrates with CodeDeploy properly.
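
A minimal boto3 sketch of the CodeDeploy side of option D; the application, deployment group, service role, and Auto Scaling group names are hypothetical:

    import boto3

    cd = boto3.client("codedeploy")

    cd.create_application(applicationName="java-tomcat-app", computePlatform="Server")

    cd.create_deployment_group(
        applicationName="java-tomcat-app",
        deploymentGroupName="production",
        serviceRoleArn="arn:aws:iam::111122223333:role/CodeDeployServiceRole",
        autoScalingGroups=["java-tomcat-asg"],   # target the Auto Scaling group, not individual instances
        deploymentStyle={
            "deploymentType": "IN_PLACE",
            "deploymentOption": "WITHOUT_TRAFFIC_CONTROL",
        },
    )

The pipeline's deploy stage then invokes this application and deployment group through the CodeDeploy action, while the AppSpec file from option B (bundled with the RPM) tells CodeDeploy how to install and start the package on each instance.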

Option E is less ideal because targeting individual EC2 instances instead of the Auto Scaling group can cause management and scaling issues. Auto Scaling groups should be the deployment target for seamless scaling and deployment consistency.

Together, B and D prepare the instances and pipeline for effective CodeDeploy usage with RPM packages on Auto Scaling EC2 instances.

Question 8:

A company’s security team requires that all externally facing Application Load Balancers (ALBs) and Amazon API Gateway APIs must be associated with AWS WAF web ACLs. The company has hundreds of AWS accounts consolidated under a single AWS Organization and has configured AWS Config organization-wide. During an audit, it was discovered that some external ALBs lack attached AWS WAF web ACLs.

Which two actions should a DevOps engineer take to prevent such non-compliance in the future? (Choose two.)

A. Delegate AWS Firewall Manager to a designated security account.
B. Delegate Amazon GuardDuty to a designated security account.
C. Create an AWS Firewall Manager policy that automatically attaches AWS WAF web ACLs to any newly created ALBs and API Gateway APIs.
D. Create an Amazon GuardDuty policy to attach AWS WAF web ACLs to new ALBs and API Gateway APIs.
E. Configure an AWS Config managed rule to attach AWS WAF web ACLs to new ALBs and API Gateway APIs.

Answer: A, C

Explanation:

The company must enforce the association of AWS WAF web ACLs with all external ALBs and API Gateway APIs across many AWS accounts managed within a single organization. The challenge is to prevent future violations and ensure continuous compliance across this multi-account environment.

Option A is crucial because delegating AWS Firewall Manager to a security account enables centralized security policy management across the entire AWS Organization. Firewall Manager allows the security team to enforce web ACL attachments automatically and consistently.

Option B involves Amazon GuardDuty, which is a threat detection service rather than a configuration enforcement tool. GuardDuty helps identify suspicious activity but cannot enforce resource configurations like attaching WAF web ACLs. Therefore, it does not address the compliance requirement.

Option C complements option A by creating an AWS Firewall Manager policy to automatically attach AWS WAF web ACLs to any new ALBs or API Gateway APIs. This proactive policy prevents violations by enforcing security configurations during resource creation.
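
A sketch of both actions with boto3; the security account ID is a placeholder, and the ManagedServiceData body is abbreviated because its full JSON schema is defined by Firewall Manager for WAFV2 policies:

    import json
    import boto3

    fms = boto3.client("fms")

    # Run from the organization's management account: delegate Firewall Manager
    # administration to the security account (option A).
    fms.associate_admin_account(AdminAccount="999988887777")

    # Then, from the delegated security account: an organization-wide policy that
    # attaches a WAF web ACL to ALBs and API Gateway stages (option C).
    fms.put_policy(
        Policy={
            "PolicyName": "attach-waf-to-external-albs-and-apis",
            "SecurityServicePolicyData": {
                "Type": "WAFV2",
                "ManagedServiceData": json.dumps({
                    "type": "WAFV2",
                    "defaultAction": {"type": "ALLOW"},
                    # rule groups and web ACL settings go here per the WAFV2 policy schema
                }),
            },
            "ResourceType": "ResourceTypeList",
            "ResourceTypeList": [
                "AWS::ElasticLoadBalancingV2::LoadBalancer",
                "AWS::ApiGateway::Stage",
            ],
            "ExcludeResourceTags": False,
            "RemediationEnabled": True,   # auto-associate the web ACL with new and existing resources
        }
    )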

Option D again misuses GuardDuty, as it has no capability to create or enforce policies related to WAF attachments. Thus, it is irrelevant to the problem.

Option E suggests using an AWS Config managed rule, but Config rules evaluate compliance rather than attach resources: there is no managed rule that associates WAF web ACLs with ALBs or API Gateway APIs. Even with a remediation action, Config reacts per account after a violation already exists, whereas a Firewall Manager policy enforces the association organization-wide as resources are created. Config can therefore help with auditing, but it is not the enforcement mechanism this requirement calls for.

In conclusion, delegating AWS Firewall Manager to a security account and creating a Firewall Manager policy to automatically attach WAF web ACLs ensures proactive, automated enforcement of security compliance across multiple accounts. AWS Config can assist in monitoring but is not the primary enforcement mechanism.

Question 9:

A company utilizes AWS Key Management Service (AWS KMS) with manual key rotation to comply with regulatory requirements. The security team wants to receive alerts if any keys have not been rotated within 90 days. 

Which approach will best fulfill this requirement?

A. Set up AWS KMS to send notifications to an Amazon SNS topic when keys exceed 90 days without rotation.
B. Use Amazon EventBridge to trigger an AWS Lambda function that calls AWS Trusted Advisor and sends alerts to an Amazon SNS topic.
C. Create a custom AWS Config rule that monitors AWS KMS key rotation and publishes alerts to an Amazon SNS topic if keys are over 90 days old.
D. Configure AWS Security Hub to publish notifications to an Amazon SNS topic when key rotation exceeds 90 days.

Answer: C

Explanation:

The key goal here is to monitor AWS KMS keys to ensure they are rotated regularly—specifically, within a 90-day window—and notify the security team if this does not happen. Among AWS services, AWS Config is designed to continuously assess resource configurations and compliance status, making it the most suitable service for this task.

Option C involves developing a custom AWS Config rule tailored to check the rotation age of KMS keys. AWS Config can track the configuration of AWS resources over time and evaluate compliance with custom rules. By setting this rule to trigger whenever a key surpasses 90 days without rotation, it can automatically send notifications through Amazon SNS to alert the security team. This method is both scalable and efficient, providing automated monitoring aligned directly with compliance needs.
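
A minimal sketch of such a custom rule's Lambda evaluator; it assumes the team records each manual rotation date in a LastRotated key tag (a convention invented here for illustration), and pagination and error handling are omitted:

    import datetime
    import boto3

    config = boto3.client("config")
    kms = boto3.client("kms")

    MAX_AGE_DAYS = 90

    def handler(event, context):
        now = datetime.datetime.now(datetime.timezone.utc)
        evaluations = []

        for key in kms.list_keys()["Keys"]:                      # pagination omitted for brevity
            key_id = key["KeyId"]
            tags = {t["TagKey"]: t["TagValue"]
                    for t in kms.list_resource_tags(KeyId=key_id).get("Tags", [])}
            last = tags.get("LastRotated")                       # assumed YYYY-MM-DD convention
            age = None
            if last:
                rotated = datetime.datetime.fromisoformat(last).replace(tzinfo=datetime.timezone.utc)
                age = (now - rotated).days
            evaluations.append({
                "ComplianceResourceType": "AWS::KMS::Key",
                "ComplianceResourceId": key_id,
                "ComplianceType": "COMPLIANT" if age is not None and age <= MAX_AGE_DAYS else "NON_COMPLIANT",
                "OrderingTimestamp": now,
            })

        # Report back to AWS Config; an SNS notification or EventBridge rule on
        # compliance changes then alerts the security team.
        config.put_evaluations(Evaluations=evaluations, ResultToken=event["resultToken"])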

Option A is incorrect because AWS KMS does not natively support sending notifications based on key age or rotation status. There is no direct event or trigger in KMS for this scenario, so this option won’t fulfill the requirement.

Option B suggests using EventBridge with Lambda and Trusted Advisor. While EventBridge and Lambda are flexible for automation, Trusted Advisor does not provide detailed KMS key rotation data to trigger alerts accurately. This adds unnecessary complexity without meeting the key requirement.

Option D mentions AWS Security Hub, which aggregates security findings but does not provide specific checks on KMS key rotation schedules. It’s better suited for overall security posture, not for this precise monitoring.

In conclusion, AWS Config’s custom rule with SNS notifications is the most appropriate and straightforward solution. It ensures automated, ongoing compliance monitoring with alerts, helping maintain regulatory adherence effectively.

Question 10:

Your AWS CodeBuild project downloads a database population script stored in an S3 bucket. Currently, the S3 bucket allows unauthenticated access, but you want to improve security by removing public access. 

How should you securely fix this issue so that CodeBuild can still access the script?

A. Add the bucket name to CodeBuild’s Allowed Buckets in its settings and update the build spec to download the script using the AWS CLI.
B. Enable HTTPS basic authentication on the S3 bucket and use a URL with a token in the build spec to download the script.
C. Remove public access by applying a bucket policy denying unauthenticated requests, update the CodeBuild service role to have S3 permissions, and use the AWS CLI to download the script.
D. Remove public access from the bucket and download the script using the AWS CLI with hardcoded IAM access and secret keys.

Answer: C

Explanation:

The problem is that the S3 bucket hosting the database script currently allows unauthenticated (public) access, which is a significant security risk. The solution should ensure that only authenticated and authorized access is permitted, specifically for the CodeBuild project, while maintaining secure and automated access.

Option C offers the best approach. It involves removing unauthenticated access by applying a restrictive bucket policy that denies public or anonymous requests, thereby securing the bucket contents. Next, the CodeBuild project’s service role must be granted specific S3 permissions, such as s3:GetObject, allowing it to authenticate and access the bucket securely. Since CodeBuild uses an IAM role, the AWS CLI commands in the build spec will automatically inherit these credentials, providing seamless and secure access without embedding sensitive information.
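
A boto3 sketch of the bucket lockdown and role permission; the bucket, role, and object key names are hypothetical:

    import json
    import boto3

    BUCKET = "build-scripts-bucket"
    CODEBUILD_ROLE = "codebuild-app-service-role"

    # 1. Turn off all public/unauthenticated access to the bucket.
    boto3.client("s3").put_public_access_block(
        Bucket=BUCKET,
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )

    # 2. Let the CodeBuild service role read the script object.
    boto3.client("iam").put_role_policy(
        RoleName=CODEBUILD_ROLE,
        PolicyName="read-db-population-script",
        PolicyDocument=json.dumps({
            "Version": "2012-10-17",
            "Statement": [{
                "Effect": "Allow",
                "Action": "s3:GetObject",
                "Resource": f"arn:aws:s3:::{BUCKET}/scripts/populate_db.sql",
            }],
        }),
    )

    # 3. The buildspec then downloads the script with the role's credentials:
    #        aws s3 cp s3://build-scripts-bucket/scripts/populate_db.sql .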

Option A is insufficient because adding the bucket name to Allowed Buckets in CodeBuild only restricts which buckets the build can access but does not prevent unauthenticated access. It doesn’t enforce secure access controls on the bucket itself.

Option B is inappropriate because S3 does not support HTTPS basic authentication. Downloading the script through a tokenized URL is non-standard for this use case, introduces complexity, and does not align with AWS best practices for secure service-to-service access.

Option D is insecure since it suggests embedding IAM access keys and secret keys directly in the build spec or environment, which risks key exposure and is against AWS security best practices. Managing credentials via IAM roles is far safer and more scalable.

Therefore, the secure and recommended method is Option C: secure the bucket with a policy denying public access, assign correct IAM permissions to CodeBuild’s service role, and use AWS CLI within CodeBuild to access the script securely and automatically.
