Amazon AWS Certified Developer - Associate DVA-C02 Exam Dumps & Practice Test Questions
Question 1:
A company runs an application on Amazon EC2 instances that processes transactions. When a transaction is invalid, the app must send a chat message via a chat API requiring an access token. This token must remain encrypted both at rest and during transit, and it must be accessible by multiple AWS accounts.
Which solution provides this functionality while minimizing management effort?
A. Store the token as an AWS Systems Manager Parameter Store SecureString using an AWS-managed KMS key, allow cross-account access with a resource-based policy, and grant EC2 instances permissions to retrieve and decrypt the token.
B. Encrypt the token with a customer-managed AWS KMS key, store it in DynamoDB, grant permissions to EC2 instances for DynamoDB and KMS access, retrieve and decrypt the token on EC2, then send the message.
C. Use AWS Secrets Manager with a customer-managed KMS key to store the token, configure a resource-based policy for cross-account access, grant EC2 instances permissions to Secrets Manager, and retrieve the decrypted token to send the message.
D. Encrypt the token using an AWS-managed KMS key, store it in an S3 bucket with cross-account access via bucket policy, grant EC2 instances permissions to S3 and KMS, retrieve and decrypt the token on EC2, then send the message.
Correct answer: C
Explanation:
The question centers on securely storing and retrieving an access token that needs encryption at rest and in transit, cross-account accessibility, and minimal management overhead. Let's evaluate the options based on these criteria.
Option A uses AWS Systems Manager Parameter Store with SecureString parameters. Parameter Store encrypts data at rest with AWS KMS and in transit with TLS, and cross-account sharing is possible, though only for advanced-tier parameters. However, Parameter Store is primarily designed for configuration data rather than secrets that need lifecycle management: it has no built-in automatic rotation, so managing a credential such as an access token carries more manual operational overhead.
Option B suggests manually encrypting the token using a customer-managed KMS key and storing it in DynamoDB. This approach complicates implementation because developers must handle encryption and decryption explicitly within their application code. Moreover, managing cross-account access to DynamoDB tables and ensuring security is complex. This option increases both development and maintenance burdens, thus not minimizing overhead.
Option C is the most suitable because AWS Secrets Manager is purpose-built for managing sensitive credentials. It natively encrypts secrets at rest with KMS and encrypts in transit. It supports resource-based policies, allowing secure cross-account access. Secrets Manager also simplifies secret lifecycle management by providing features like automatic rotation, versioning, and audit logging. EC2 instances only require permission to access Secrets Manager and retrieve the decrypted secret. This reduces operational complexity and management overhead significantly.
Option D involves storing the encrypted token in an S3 bucket with cross-account access enabled. While S3 supports encryption and bucket policies for access control, it is not optimized for managing secrets. This approach requires additional logic to decrypt and securely retrieve tokens, increases security risk if bucket policies are misconfigured, and lacks secret-specific management features such as rotation or audit trails.
In summary, Option C delivers the strongest balance of security, ease of management, and cross-account accessibility tailored to secret management. Hence, it is the best choice for storing and accessing the token.
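As a concrete illustration of the cross-account piece of Option C, the sketch below attaches a resource-based policy to a secret with boto3. The secret name and consumer account ID are hypothetical; if the secret is encrypted with a customer-managed KMS key, the key policy must also grant the consuming account kms:Decrypt.

```python
import json
import boto3

secretsmanager = boto3.client("secretsmanager")

# Hypothetical secret name and consumer account ID for illustration.
SECRET_ID = "chat-api/access-token"
CONSUMER_ACCOUNT_ID = "222222222222"

# Resource-based policy allowing the other account to read the secret value.
resource_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{CONSUMER_ACCOUNT_ID}:root"},
            "Action": "secretsmanager:GetSecretValue",
            "Resource": "*",
        }
    ],
}

secretsmanager.put_resource_policy(
    SecretId=SECRET_ID,
    ResourcePolicy=json.dumps(resource_policy),
)
```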
Question 2:
A company has Amazon EC2 instances running across several AWS accounts. A developer needs to build an application that collects EC2 instance lifecycle events from all these accounts into a single centralized Amazon SQS queue located in the main AWS account.
What is the best approach to achieve this?
A. Configure EC2 to send lifecycle events from all accounts directly to the EventBridge event bus in the main account, add a rule there to match events, and route them to the SQS queue.
B. Use SQS resource policies to grant cross-account write permissions, create EventBridge rules in each account to send events to the main account’s SQS queue.
C. Implement a Lambda function that polls all EC2 instances in all accounts for lifecycle changes, and on detection, sends messages to the main account’s SQS queue; trigger the function every minute via EventBridge scheduled rule.
D. Set permissions on the main account’s EventBridge event bus to accept events from other accounts, create EventBridge rules in each account to forward lifecycle events to the main account’s event bus, then add a rule in the main account to route these events to the SQS queue.
Correct answer: D
Explanation:
The requirement is to aggregate EC2 instance lifecycle events from multiple AWS accounts and funnel them into a single SQS queue in a central (main) account, while ensuring scalability, security, and manageability.
Option A proposes having EC2 send lifecycle events directly to the main account’s EventBridge event bus. EC2 automatically emits instance state-change events to the default event bus of the account the instance runs in; it cannot be pointed at another account’s bus. Cross-account delivery requires explicit permissions on the central bus plus forwarding rules in each source account, and this option omits those steps, making it incomplete.
Option B suggests granting each account permission to write directly to the central SQS queue and configuring EventBridge rules in each account to send events there. This is problematic because EventBridge rules cannot target SQS queues in other AWS accounts natively. Also, managing direct cross-account access to SQS is complex and potentially risky from a security perspective.
Option C relies on a Lambda function that polls EC2 instance states across all accounts on a scheduled basis. This polling approach is inefficient, resource-intensive, and introduces latency. It bypasses the real-time event-driven model that AWS EventBridge supports, making it a poor architectural choice.
Option D provides the most robust, scalable, and secure solution. It configures the main account’s EventBridge event bus to accept events from other AWS accounts by setting appropriate resource policies. Then, in each AWS account, an EventBridge rule is created to forward EC2 lifecycle events to the main account’s event bus. Finally, a rule in the main account routes these events to the local SQS queue. This method leverages native EventBridge cross-account event routing capabilities, simplifies management, and ensures centralized, real-time collection of lifecycle events.
In conclusion, Option D is the AWS-recommended approach that efficiently centralizes multi-account EC2 lifecycle events using native EventBridge features, providing security, scalability, and operational simplicity.
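A minimal boto3 sketch of the Option D wiring, assuming hypothetical account IDs, role, bus, and rule names: the central account grants a source account permission to put events, and the source account forwards matching EC2 lifecycle events to the central bus. A final rule in the central account (not shown) would then target the SQS queue.

```python
import json
import boto3

# --- In the main (central) account: allow a source account to put events ---
central_events = boto3.client("events")
central_events.put_permission(
    EventBusName="default",
    Action="events:PutEvents",
    Principal="111111111111",          # hypothetical source account ID
    StatementId="AllowAccount111111111111",
)

# --- In each source account: forward EC2 lifecycle events to the central bus ---
source_events = boto3.client("events")
source_events.put_rule(
    Name="forward-ec2-lifecycle",
    EventPattern=json.dumps({
        "source": ["aws.ec2"],
        "detail-type": ["EC2 Instance State-change Notification"],
    }),
)
source_events.put_targets(
    Rule="forward-ec2-lifecycle",
    Targets=[{
        "Id": "central-bus",
        "Arn": "arn:aws:events:us-east-1:222222222222:event-bus/default",
        # IAM role in the source account allowed to call events:PutEvents
        # on the central bus (hypothetical role name).
        "RoleArn": "arn:aws:iam::111111111111:role/forward-to-central-bus",
    }],
)
```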
Question 3:
A developer is working on an application that uses Amazon Cognito user pools and identity pools for authentication and authorization. The app allows authenticated users to upload and download their own files to Amazon S3. File sizes vary from 3 KB up to 300 MB. The developer wants to ensure that files are securely stored and accessed, with strict controls so that each user can only interact with their own files.
Which method offers the highest level of security for this scenario?
A. Use S3 Event Notifications to verify file upload and download requests and update the user interface accordingly.
B. Store metadata of uploaded files in a DynamoDB table, and filter the file list in the UI by matching user IDs.
C. Use Amazon API Gateway and AWS Lambda to handle file uploads/downloads, validating every request in Lambda.
D. Apply an IAM policy that scopes access by the Amazon Cognito identity prefix, restricting each user to their own folder in S3.
Correct answer: D
Explanation:
When building an app that uses Amazon Cognito and Amazon S3 for user file storage, the most critical security requirement is ensuring that users can only access their own files, preventing unauthorized data access. To accomplish this, the best practice is to use fine-grained access control through IAM policies linked with Cognito identity pools.
Option D is the most secure and maintainable approach. It leverages IAM policy variables such as ${cognito-identity.amazonaws.com:sub}, which IAM substitutes at request time with the caller’s Cognito identity ID, restricting each user’s permissions to a unique folder or prefix within the S3 bucket (for example, s3://bucket/private/${cognito-identity.amazonaws.com:sub}/). This automatically prevents users from accessing prefixes that belong to others. The approach relies on AWS’s built-in access controls, requires minimal custom code, and scales easily as the user base grows.
Looking at the other options:
Option A relies on S3 Event Notifications, which trigger after uploads/downloads. They cannot enforce security proactively or prevent unauthorized access. They are meant for asynchronous processing, not access control.
Option B only enforces security at the UI level by filtering files in DynamoDB based on user ID. This approach does not prevent malicious users from directly accessing files in S3 outside the UI, as no IAM restrictions are enforced.
Option C uses API Gateway and Lambda to proxy file access, which could enforce security but introduces significant overhead and complexity for files up to 300 MB: API Gateway caps request and response payloads at 10 MB, and Lambda has its own payload, execution-time, and memory limits, making this approach inefficient and hard to scale for large objects.
Therefore, using IAM policies scoped to each user’s Cognito identity prefix (Option D) offers the most robust, scalable, and straightforward security model for this use case.
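For illustration, here is a minimal sketch of the kind of policy Option D attaches to the identity pool’s authenticated role, expressed as a Python dict that serializes to the JSON policy document. The bucket name is hypothetical; IAM substitutes the ${cognito-identity.amazonaws.com:sub} variable with the caller’s identity ID at evaluation time.

```python
import json

# Hypothetical bucket name for illustration.
BUCKET = "example-user-files-bucket"

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Each user may read/write only objects under their own prefix.
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": (
                f"arn:aws:s3:::{BUCKET}/private/"
                "${cognito-identity.amazonaws.com:sub}/*"
            ),
        },
        {
            # Listing is limited to the user's own prefix as well.
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": f"arn:aws:s3:::{BUCKET}",
            "Condition": {
                "StringLike": {
                    "s3:prefix": ["private/${cognito-identity.amazonaws.com:sub}/*"]
                }
            },
        },
    ],
}

print(json.dumps(policy, indent=2))
```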
Question 4:
A company is creating a scalable data processing system using AWS services to accelerate development and increase flexibility. This system must ingest large datasets from multiple sources and sequentially apply several business rules and transformations in a specific order. Additionally, the solution should support error reprocessing if any step fails. The company seeks a scalable, low-maintenance method to automate and orchestrate these workflows.
Which AWS service best meets these requirements?
A. AWS Batch
B. AWS Step Functions
C. AWS Glue
D. AWS Lambda
Correct answer: B
Explanation:
Designing a scalable and maintainable data processing pipeline that requires ordered execution of multiple steps, error handling, and reprocessing demands a robust orchestration service.
Option B, AWS Step Functions, is specifically built for orchestrating complex workflows involving multiple AWS services and steps that must execute in sequence. It offers a visual workflow interface, built-in error handling with retry policies, and state management that tracks progress and failures. Step Functions can coordinate calls to Lambda, Glue, Batch, and other services, enabling seamless automation of data processing pipelines with minimal manual oversight. Its serverless nature means it scales automatically without requiring infrastructure management, perfectly aligning with the company’s needs for scalability and low maintenance.
Option A, AWS Batch, is designed for running large-scale batch compute jobs, not for workflow orchestration. Although it supports job dependencies and per-job retry strategies, it lacks native multi-step orchestration features such as branching, per-step error handling with catch routes, and visual workflow management, which this scenario requires.
Option C, AWS Glue, is an ETL service ideal for cataloging and transforming data at scale. While it has job scheduling features, Glue’s workflow orchestration capabilities are limited and less flexible than Step Functions when managing complex, multi-step business logic with error reprocessing.
Option D, AWS Lambda, excels at serverless event-driven tasks but does not inherently provide workflow orchestration. To manage sequencing and error handling with Lambda alone, developers would need to build custom orchestration logic, which increases complexity and maintenance overhead.
Thus, AWS Step Functions (Option B) offers the best balance of scalability, ease of use, and built-in orchestration features to automate ordered data workflows with error handling in a low-maintenance way.
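To make the orchestration concrete, below is a minimal Amazon States Language sketch with ordered steps, a retry policy, and a catch route for reprocessing, registered via boto3. All ARNs and names are hypothetical.

```python
import json
import boto3

# Two ordered processing steps plus a failure-handling state for reprocessing.
definition = {
    "StartAt": "ApplyBusinessRules",
    "States": {
        "ApplyBusinessRules": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111111111111:function:apply-rules",
            "Retry": [{
                "ErrorEquals": ["States.TaskFailed"],
                "IntervalSeconds": 5,
                "MaxAttempts": 3,
                "BackoffRate": 2.0,
            }],
            "Catch": [{
                "ErrorEquals": ["States.ALL"],
                "Next": "HandleFailure",
            }],
            "Next": "TransformData",
        },
        "TransformData": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111111111111:function:transform-data",
            "End": True,
        },
        "HandleFailure": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111111111111:function:reprocess-failed",
            "End": True,
        },
    },
}

sfn = boto3.client("stepfunctions")
sfn.create_state_machine(
    name="data-processing-pipeline",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::111111111111:role/stepfunctions-execution-role",
)
```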
Question 5:
A developer has created a Python-based AWS Lambda function that reads data from Amazon S3 objects and writes the processed data to a DynamoDB table. The Lambda function is triggered successfully by S3 event notifications when new objects are created.
However, the function fails specifically when trying to write to the DynamoDB table. What is the most probable reason for this failure?
A. The Lambda function has exceeded its concurrency limit.
B. The DynamoDB table requires a global secondary index (GSI) for write operations.
C. The Lambda function lacks the necessary IAM permissions to write to DynamoDB.
D. The DynamoDB table is not located in the same Availability Zone as the Lambda function.
Correct answer: C
Explanation:
When a Lambda function is triggered successfully but fails during interaction with another AWS service—such as writing to a DynamoDB table—the first area to investigate is permissions, particularly IAM roles and policies.
In this scenario, the Lambda function is triggered by an S3 event successfully, indicating that the function itself is properly deployed, and it has sufficient permissions to access S3. However, it fails during the write to DynamoDB, narrowing the problem down to access issues related to DynamoDB.
Option A, regarding concurrency limits, is unlikely to be the cause here. Exceeding concurrency would result in throttling or invocation failures, not a specific failure during DynamoDB writes. Lambda throttling tends to affect invocation itself rather than isolated API calls.
Option B is incorrect because Global Secondary Indexes (GSIs) in DynamoDB are optional and used solely to enable queries on alternate keys. They do not affect the ability to perform write operations on the main table.
Option D reflects a misunderstanding of AWS architecture. DynamoDB is a fully managed regional service and is not tied to a specific Availability Zone (AZ). Lambda functions within the same region can access DynamoDB tables regardless of AZ, as AWS services are designed to be highly available and redundant.
Option C is the most likely cause. Lambda functions operate under an IAM execution role that grants temporary credentials. If this role does not explicitly grant permissions such as dynamodb:PutItem, dynamodb:UpdateItem, or dynamodb:BatchWriteItem on the target DynamoDB table, the function’s write attempts will be denied. This typically surfaces as an AccessDeniedException in the Lambda logs, signaling insufficient IAM permissions.
Therefore, verifying and updating the Lambda function’s IAM role to include the necessary DynamoDB write permissions will most likely resolve the failure.
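A minimal boto3 sketch of that fix, assuming hypothetical role, policy, and table names: add an inline policy granting the needed DynamoDB write actions to the function’s execution role.

```python
import json
import boto3

iam = boto3.client("iam")

# Hypothetical role name and table ARN for illustration.
ROLE_NAME = "lambda-s3-to-dynamodb-role"
TABLE_ARN = "arn:aws:dynamodb:us-east-1:111111111111:table/ProcessedData"

write_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "dynamodb:PutItem",
            "dynamodb:UpdateItem",
            "dynamodb:BatchWriteItem",
        ],
        "Resource": TABLE_ARN,
    }],
}

# Attach the write permissions as an inline policy on the execution role.
iam.put_role_policy(
    RoleName=ROLE_NAME,
    PolicyName="AllowDynamoDBWrites",
    PolicyDocument=json.dumps(write_policy),
)
```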
Question 6:
How can a developer restrict the Amazon EC2 instance types that can be specified in an AWS CloudFormation template so that only approved instance types are allowed when deploying stacks across multiple AWS accounts?
A. Create individual CloudFormation templates for each allowed EC2 instance type.
B. Define separate resource entries for each EC2 instance type in the Resources section of the template.
C. Add multiple parameters in the CloudFormation template, one for each approved EC2 instance type.
D. Use a single parameter in the CloudFormation template that restricts allowed values to a predefined list of EC2 instance types.
Correct answer: D
Explanation:
When building CloudFormation templates intended for deployment across multiple accounts and teams, it’s essential to enforce constraints on configurable options—such as EC2 instance types—to ensure compliance, cost control, and operational consistency.
Option D describes the use of a CloudFormation parameter with an AllowedValues property, which restricts input values to a specific set. This approach is considered best practice because it combines flexibility and governance. It allows users launching the stack to choose from a predefined list of approved instance types, and if they try to specify an unapproved type, the deployment fails validation before any resources are provisioned. This mechanism enforces policy compliance without requiring template duplication or complex logic.
Option A suggests creating a separate template for each instance type, which is inefficient and difficult to maintain. Managing multiple templates leads to redundancy, increased risk of configuration drift, and more overhead when updates are required.
Option B involves defining multiple EC2 resources in the template for each instance type. This wastes resources and complicates the template unnecessarily, as CloudFormation will attempt to create all those resources unless conditionals are applied. Such complexity is unwarranted for simply restricting instance types.
Option C suggests multiple parameters—one for each instance type. This approach does not provide a selection mechanism or validation and makes the template input process confusing. It also complicates the stack deployment, requiring additional logic to select the correct parameter.
In summary, using a single parameter with AllowedValues (Option D) is the most maintainable and scalable solution. It enables validation, simplifies user input, enforces governance policies, and is widely recommended by AWS best practices. This approach ensures only approved instance types can be deployed via CloudFormation across multiple AWS accounts efficiently and securely.
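As a sketch, the Parameters section below (expressed as a Python dict that serializes to the JSON template syntax) restricts the instance type to a hypothetical approved list; the EC2 resource would then reference it with {"Ref": "InstanceType"}.

```python
import json

# Sketch of a CloudFormation "Parameters" section. The approved instance
# types listed here are hypothetical examples.
template_fragment = {
    "Parameters": {
        "InstanceType": {
            "Type": "String",
            "Default": "t3.micro",
            "AllowedValues": ["t3.micro", "t3.small", "m5.large"],
            "Description": "Approved EC2 instance types only.",
        }
    }
}

print(json.dumps(template_fragment, indent=2))
```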
Question 7:
A developer is using the BatchGetItem API of Amazon DynamoDB to retrieve multiple items at once. However, the response frequently contains items listed under the UnprocessedKeys attribute, meaning some requests weren't handled.
Which strategies should the developer implement to improve the application's handling of these unprocessed keys? (Choose two.)
A. Retry the batch request immediately without delay.
B. Retry the batch request using exponential backoff combined with a randomized delay.
C. Switch the application to use an AWS SDK for making DynamoDB requests.
D. Increase the provisioned read capacity units of the DynamoDB tables involved.
E. Increase the provisioned write capacity units of the DynamoDB tables involved.
Correct answers: B, D
Explanation:
When using DynamoDB’s BatchGetItem operation, the presence of UnprocessedKeys in the response indicates that DynamoDB couldn’t complete some read requests, usually due to exceeding provisioned throughput limits or temporary throttling. To build resilience, the developer should adopt strategies that address both the immediate retry behavior and the underlying capacity limits.
Why option B is correct: Implementing exponential backoff with jitter (randomized delay) is an industry best practice for handling throttling. Rather than retrying immediately—which can overwhelm the service further—exponential backoff progressively increases wait times between retry attempts, reducing the likelihood of repeated throttling. The random jitter helps spread retry attempts across time and clients, preventing retry storms and improving the chances of successful subsequent calls. AWS explicitly recommends this approach for managing UnprocessedKeys responses.
Why option D is correct: UnprocessedKeys often signal that the read capacity of the DynamoDB table is insufficient to handle the request load. Increasing provisioned read capacity units (RCUs) allows the table to process more read requests per second, directly reducing throttling and the occurrence of unprocessed items. Since BatchGetItem is a read operation, write capacity has no effect here.
Why option A is incorrect: Immediate retries without delay risk amplifying throttling by increasing traffic during already constrained periods, worsening failures rather than improving throughput.
Why option C is incorrect: The AWS SDKs already provide automatic retries with exponential backoff for throttled API calls, but they do not automatically resubmit the keys returned in UnprocessedKeys; the application must handle those itself. Switching to an SDK alone therefore won’t resolve the problem unless backoff-based resubmission is implemented and sufficient capacity is available.
Why option E is incorrect: Write capacity units affect write operations (PutItem, UpdateItem, BatchWriteItem), not read operations like BatchGetItem. Increasing write capacity won't alleviate read throttling.
In summary, combining exponential backoff with jitter and increasing provisioned read capacity creates a robust, scalable approach to handling UnprocessedKeys and improving application reliability.
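A minimal Python sketch of that retry loop, assuming the caller supplies a valid RequestItems map: it resubmits only the UnprocessedKeys, with exponentially increasing, jittered delays.

```python
import random
import time

import boto3

dynamodb = boto3.client("dynamodb")

def batch_get_with_backoff(request_items, max_attempts=5):
    """Call BatchGetItem and retry UnprocessedKeys with exponential backoff
    plus jitter. Illustrative sketch, not production-hardened code."""
    results = {}
    attempt = 0
    while request_items and attempt < max_attempts:
        response = dynamodb.batch_get_item(RequestItems=request_items)

        # Accumulate the items returned on this attempt, grouped by table.
        for table, items in response.get("Responses", {}).items():
            results.setdefault(table, []).extend(items)

        # Anything DynamoDB could not process is retried after a delay.
        request_items = response.get("UnprocessedKeys", {})
        if request_items:
            delay = (2 ** attempt) + random.uniform(0, 1)  # backoff + jitter
            time.sleep(delay)
            attempt += 1
    return results
```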
Question 8:
A company runs a custom application on on-premises Linux servers, which are accessed through Amazon API Gateway. AWS X-Ray tracing is enabled on the API’s test stage.
What is the easiest method for a developer to enable X-Ray tracing on these on-premises servers with minimal setup?
A. Install and run the AWS X-Ray SDK on the on-premises servers to capture and send trace data.
B. Install and run the AWS X-Ray daemon on the on-premises servers to collect and forward trace data to AWS X-Ray.
C. Configure AWS Lambda to pull, process, and forward trace data from on-premises servers using the PutTraceSegments API.
D. Configure AWS Lambda to pull, process, and forward telemetry data using the PutTelemetryRecords API.
Correct answer: B
Explanation:
To enable AWS X-Ray tracing on on-premises servers with minimal configuration, it’s important to understand how the X-Ray system components interact. AWS X-Ray works by collecting trace segments generated by instrumented applications (using the X-Ray SDK) and forwarding these segments to the AWS X-Ray service. The forwarding is handled by the X-Ray daemon, which listens locally and batches segments before sending them over the network.
Why option B is correct: Installing and running the X-Ray daemon on on-premises Linux servers is the simplest and most effective way to enable tracing. The daemon acts as a local proxy that receives trace segments from instrumented application code (using the SDK) and sends the data to AWS X-Ray. This requires minimal setup: download the daemon, configure basic credentials or network access if needed, and run the service. The daemon is lightweight and specifically designed for this purpose, making it ideal for hybrid or on-premises environments.
Why option A is incorrect: The X-Ray SDK is essential for instrumenting application code to generate trace segments but cannot send data directly to the X-Ray service. It depends on the daemon to forward data, so SDK installation alone is insufficient.
Why option C is incorrect: Using a Lambda function to pull trace data and push it via the PutTraceSegments API would involve complex custom development. It would require capturing trace data, formatting it properly, managing retries, and securing API calls — all adding significant overhead compared to running the daemon locally.
Why option D is incorrect: The PutTelemetryRecords API is intended for sending telemetry about the X-Ray daemon itself (such as health and metrics), not actual trace segment data. Using this API would not fulfill the requirement to forward trace data and would misalign with its purpose.
In conclusion, installing and running the X-Ray daemon locally on on-premises servers is the most straightforward, least complex method to enable AWS X-Ray tracing outside of AWS infrastructure.
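Once the daemon is running locally (it listens on UDP port 2000 by default), application code instrumented with the X-Ray SDK only needs to be pointed at it. A minimal Python sketch, assuming the aws-xray-sdk package and a hypothetical service name:

```python
from aws_xray_sdk.core import xray_recorder

# Point the SDK at the locally running X-Ray daemon (UDP port 2000 is the
# daemon's default listener). The service name is hypothetical.
xray_recorder.configure(
    service="on-prem-custom-app",
    daemon_address="127.0.0.1:2000",
)

# Record a segment around a unit of work; the daemon batches and forwards
# the segment data to the AWS X-Ray service.
segment = xray_recorder.begin_segment("process-request")
try:
    pass  # ... application logic goes here ...
finally:
    xray_recorder.end_segment()
```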
Question 9:
A company needs to securely share data with an external system through an HTTP API using an existing API key. The solution should allow programmatic management of the API key, ensure strong security, and maintain application performance without degradation.
Which method best fulfills these requirements?
A. Store the API key in AWS Secrets Manager and retrieve it at runtime via the AWS SDK for API calls.
B. Embed the API key in a local variable within the application code, commit the code to a private Git repository, and use the variable during runtime for API calls.
C. Save the API key as an object in a private Amazon S3 bucket with restricted access controlled by IAM policies, then retrieve it at runtime using the AWS SDK.
D. Store the API key in an Amazon DynamoDB table with access controlled by resource-based policies, retrieving it at runtime using the AWS SDK.
Correct answer: A
Explanation:
The ideal approach to securely manage sensitive credentials like API keys must balance security, ease of management, and performance impact. AWS Secrets Manager is purpose-built for this task, offering a managed, highly secure service designed specifically for storing and managing secrets such as API keys.
Option A leverages Secrets Manager’s robust features, including encryption at rest with AWS KMS, fine-grained access control using IAM policies, and audit logging through CloudTrail. Secrets Manager allows applications to retrieve credentials programmatically at runtime, ensuring secrets are never hardcoded or exposed in the codebase. It also supports secret rotation, which is critical for maintaining long-term security (even if rotation is not immediately needed here). Retrieving the secret through the AWS SDK and caching the value in memory between requests keeps the performance impact on the application negligible.
Option B is insecure because embedding API keys directly in application code—even in a private Git repository—exposes credentials to potential leaks through repository access or accidental sharing. This practice violates security best practices and does not support auditing or rotation.
Option C uses Amazon S3, which offers encryption and access control, but S3 is not designed for secret management. It lacks native secret rotation and fine-grained audit capabilities, increasing operational risk and complexity. Managing secret retrieval and parsing adds further overhead.
Option D suggests DynamoDB, which can store data securely but is not intended for secrets management. It lacks integrated rotation and detailed auditing features, making it unsuitable compared to Secrets Manager.
In summary, AWS Secrets Manager (Option A) is the most secure, scalable, and efficient service for managing API keys programmatically with minimal performance impact, aligning perfectly with the scenario’s requirements.
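A minimal sketch of the runtime retrieval in Option A with boto3, assuming a hypothetical secret name; in practice the value would be cached in memory rather than fetched on every request.

```python
import boto3

# Hypothetical secret name holding the external API key.
SECRET_ID = "external-system/api-key"

def get_api_key():
    client = boto3.client("secretsmanager")
    response = client.get_secret_value(SecretId=SECRET_ID)
    return response["SecretString"]

# Use the key when calling the external HTTP API; cache it in memory to
# avoid a Secrets Manager call per request.
api_key = get_api_key()
```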
Question 10:
A developer is deploying an application to Amazon ECS that requires securely storing and retrieving configuration variables such as API URLs and authentication credentials. These variables must be accessible across multiple environments (development, testing, production) and support all current and future application versions.
What is the best way for the developer to retrieve these variables with minimal changes to the application code?
A. Use AWS Systems Manager Parameter Store to store variables with unique paths per environment, and use AWS Secrets Manager to store credentials, retrieving both at runtime.
B. Use AWS Key Management Service (KMS) to store and retrieve the API URLs and credentials as unique encryption keys for each environment.
C. Store encrypted configuration files with the application, maintaining separate files per environment, and decrypt them at runtime.
D. Define the authentication information and API URLs directly in the ECS task definitions as environment variables, customizing them per environment during deployment.
Correct answer: A
Explanation:
Securely managing configuration and secrets across multiple environments while minimizing application code changes is critical in containerized deployments like Amazon ECS.
Option A is the most practical and secure approach. AWS Systems Manager Parameter Store is designed to manage configuration data, such as API URLs and other non-sensitive variables, using hierarchical paths that enable environment-specific segmentation (e.g., /dev/api/url, /prod/api/url). AWS Secrets Manager complements Parameter Store by securely storing sensitive credentials like API keys and passwords. Both services integrate natively with ECS, which can inject the values into containers at launch or let the application fetch them at runtime, so sensitive information is neither hardcoded nor bundled with the application itself. AWS KMS encrypts these values at rest, and TLS protects them in transit. This method requires minimal modification to application code because ECS injection and the AWS SDKs both provide straightforward access to these services.
Option B is not well suited because AWS KMS is intended for managing encryption keys, not as a direct store for application variables. Using KMS to store and retrieve API URLs and credentials would require custom encryption/decryption logic in the application, increasing complexity and risk.
Option C involves bundling encrypted files with the application. This method complicates secret management since the app must handle decryption and key rotation manually. It also complicates deployment workflows and scaling across environments, making it less flexible and harder to maintain.
Option D involves embedding variables directly in ECS task definitions as environment variables. While simple, this practice exposes sensitive information in plain text within task definitions, which is a security risk. It also requires updating task definitions with each change, adding operational overhead and reducing flexibility.
In conclusion, Option A provides a secure, scalable, and maintainable way to manage configuration variables and secrets across environments with minimal disruption to the application, aligning with AWS best practices for ECS deployments.
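As an illustration of how Option A keeps code changes minimal, the sketch below registers an ECS task definition whose container receives the API URL from Parameter Store and the credentials from Secrets Manager through the secrets/valueFrom fields; ECS injects both as environment variables at container start. All ARNs, names, and the image URI are hypothetical.

```python
import boto3

ecs = boto3.client("ecs")

# Hypothetical ARNs: a Parameter Store parameter for the API URL and a
# Secrets Manager secret for the credentials. The task execution role must
# be allowed to read both.
ecs.register_task_definition(
    family="my-app-prod",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",
    memory="512",
    executionRoleArn="arn:aws:iam::111111111111:role/ecsTaskExecutionRole",
    containerDefinitions=[{
        "name": "app",
        "image": "111111111111.dkr.ecr.us-east-1.amazonaws.com/my-app:latest",
        "secrets": [
            {
                "name": "API_URL",
                "valueFrom": "arn:aws:ssm:us-east-1:111111111111:parameter/prod/api/url",
            },
            {
                "name": "API_CREDENTIALS",
                "valueFrom": "arn:aws:secretsmanager:us-east-1:111111111111:secret:prod/api/credentials",
            },
        ],
    }],
)
```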