Amazon AWS Certified Solutions Architect - Professional SAP-C02 Exam Dumps & Practice Test Questions
Question 1:
A business is designing a hybrid DNS architecture using Amazon Route 53 to handle domain name resolution for both its AWS-hosted VPC resources and its on-premises systems. The solution must allow all VPCs, as well as on-premises systems, to resolve names in the private hosted zone.
Which setup offers the highest performance for DNS resolution across this hybrid environment?
A. Associate the private hosted zone with all VPCs. Deploy a Route 53 inbound resolver in the shared services VPC. Attach all VPCs to the Transit Gateway and configure forwarding rules on the on-premises DNS server that direct queries to the inbound resolver.
B. Associate the private hosted zone with all VPCs. Deploy an EC2-based conditional forwarder in the shared services VPC. Attach all VPCs to the Transit Gateway and configure the on-premises DNS to forward queries to the forwarder.
C. Associate the private hosted zone with the shared services VPC. Deploy a Route 53 outbound resolver in that VPC. Attach all VPCs to the Transit Gateway and configure forwarding on the on-premises DNS server to use the outbound resolver.
D. Associate the private hosted zone with the shared services VPC. Deploy a Route 53 inbound resolver there. Connect only the shared VPC to the Transit Gateway and forward queries from the on-prem DNS to that resolver.
Correct Answer: A
Explanation:
The objective is to build a high-performance DNS resolution system for both cloud and on-premises environments. Option A delivers the most scalable and efficient solution. By associating the private hosted zone with all VPCs, each VPC can directly resolve AWS internal domains. Deploying a Route 53 inbound resolver in the shared services VPC allows DNS queries originating from on-premises systems to be securely forwarded into AWS. Using AWS Transit Gateway ensures all VPCs are interconnected and can access shared DNS infrastructure without manual route configuration between VPCs. The on-premises DNS server can forward relevant queries to the inbound resolver, allowing on-prem systems to resolve AWS-hosted names.
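As a rough illustration of these steps, here is a minimal boto3 sketch; the hosted zone ID, VPC IDs, subnet IDs, and security group ID are placeholders, not values from the scenario:

```python
import boto3

route53 = boto3.client("route53")
resolver = boto3.client("route53resolver")

# Associate the private hosted zone with each additional VPC
# (zone and VPC IDs below are placeholders).
for vpc_id in ["vpc-aaa111", "vpc-bbb222", "vpc-ccc333"]:
    route53.associate_vpc_with_hosted_zone(
        HostedZoneId="Z0123456789EXAMPLE",
        VPC={"VPCRegion": "us-east-1", "VPCId": vpc_id},
    )

# Create an inbound Resolver endpoint in the shared services VPC so the
# on-premises DNS server can forward queries for the private zone to it.
resolver.create_resolver_endpoint(
    CreatorRequestId="shared-services-inbound-1",
    Name="shared-services-inbound",
    Direction="INBOUND",
    SecurityGroupIds=["sg-0123456789example"],
    IpAddresses=[
        {"SubnetId": "subnet-aaa111"},  # one IP per AZ for redundancy
        {"SubnetId": "subnet-bbb222"},
    ],
)
```

The on-premises DNS server is then configured (outside AWS) to conditionally forward the private zone's domain to the endpoint's IP addresses.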
Option B involves using a conditional forwarder hosted on EC2, which adds complexity and overhead in comparison to using native AWS services.
Option C introduces an outbound resolver, which forwards DNS queries from AWS out to on-premises resolvers, the opposite direction of what is needed here.
Option D is incomplete: the private hosted zone is linked only to the shared services VPC, so the other VPCs could not resolve its records.
Therefore, Option A is the most robust and efficient design for hybrid DNS resolution.
Question 2:
A weather data provider hosts its RESTful API using Amazon API Gateway integrated with AWS Lambda functions. The API stores and retrieves data from DynamoDB. DNS for the API is managed through Amazon Route 53. The company needs a disaster recovery solution that automatically switches the API to a secondary AWS Region if the primary region experiences a failure.
Which approach best satisfies these requirements?
A. Deploy a second set of Lambda functions in a different region. Use an edge-optimized API Gateway that includes Lambda integrations from both regions. Set up DynamoDB global tables.
B. Create a new API Gateway and Lambda setup in a separate region. Use a multivalue DNS record in Route 53 pointing to both API endpoints. Enable health checks. Convert DynamoDB to global tables.
C. Deploy a new API Gateway and Lambda functions in another AWS region. Set up a Route 53 failover record with health checks enabled. Convert DynamoDB to global tables.
D. Add a second API Gateway in a new region. Convert Lambda to global functions. Use multivalue DNS routing with health checks. Convert DynamoDB tables to global.
Correct Answer: C
Explanation:
For disaster recovery and automatic regional failover, Option C is the best choice. It involves deploying the full API infrastructure—API Gateway and Lambda functions—in a second AWS region. The DNS record in Route 53 is configured using a failover routing policy, which ensures traffic is directed to the secondary region only if the primary region becomes unhealthy.
Route 53 health checks actively monitor the health of the primary API endpoint. If the endpoint becomes unavailable, Route 53 automatically redirects traffic to the standby region, minimizing downtime.
Additionally, converting DynamoDB tables to global tables ensures consistent and synchronized data across both regions, eliminating the need for manual replication or reconciliation during failover.
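A hedged boto3 sketch of the three moving parts described above; the domain names, hosted zone ID, table name, and Regions are placeholders:

```python
import boto3

route53 = boto3.client("route53")
dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Health check against the primary Region's API endpoint.
hc = route53.create_health_check(
    CallerReference="api-primary-hc-1",
    HealthCheckConfig={
        "Type": "HTTPS",
        "FullyQualifiedDomainName": "api-primary.example.com",
        "ResourcePath": "/health",
        "RequestInterval": 30,
        "FailureThreshold": 3,
    },
)

# PRIMARY/SECONDARY failover records for the public API name.
route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",
    ChangeBatch={"Changes": [
        {"Action": "UPSERT", "ResourceRecordSet": {
            "Name": "api.example.com", "Type": "CNAME", "TTL": 60,
            "SetIdentifier": "primary", "Failover": "PRIMARY",
            "HealthCheckId": hc["HealthCheck"]["Id"],
            "ResourceRecords": [{"Value": "api-primary.example.com"}]}},
        {"Action": "UPSERT", "ResourceRecordSet": {
            "Name": "api.example.com", "Type": "CNAME", "TTL": 60,
            "SetIdentifier": "secondary", "Failover": "SECONDARY",
            "ResourceRecords": [{"Value": "api-secondary.example.com"}]}},
    ]},
)

# Add a replica Region to convert the table to a global table.
dynamodb.update_table(
    TableName="WeatherData",
    ReplicaUpdates=[{"Create": {"RegionName": "us-west-2"}}],
)
```

With these records in place, resolvers receive the primary endpoint while its health check passes and the secondary endpoint otherwise.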
Option A suggests edge-optimized APIs and routing to both regions, but it doesn’t provide the automated failover required.
Option B uses multivalue answer routing, which returns multiple healthy endpoints and spreads traffic across both regions during normal operation. That is active-active behavior, not the primary/standby failover the company asked for, and it does not guarantee that traffic shifts to the secondary region only when the primary fails.
Option D introduces the concept of "global Lambda functions," which is not a native AWS feature. Multivalue routing also fails to guarantee failover behavior.
Thus, Option C delivers seamless failover, reliable performance, and data integrity across regions.
Question 3:
A company manages multiple AWS accounts under a single Organizational Unit (OU) named Production in AWS Organizations. The organization enforces strict restrictions using deny list Service Control Policies (SCPs) applied at the root. After acquiring a new business unit and adding its existing AWS account to the organization, administrators find they cannot modify AWS Config rules within the new account because of these root SCP restrictions.
Which approach allows administrators in the new business unit to update AWS Config while keeping current policies intact and minimizing ongoing maintenance?
A. Delete the root SCPs restricting AWS Config, then deploy standard AWS Config rules via AWS Service Catalog across all accounts, including the new one.
B. Create a temporary OU called Onboarding for the new account, apply an SCP permitting AWS Config actions to this OU, and after updating the rules, move the account into the Production OU.
C. Switch the root SCPs from deny lists to allow lists to permit only required services, and temporarily allow AWS Config actions for the new account at the root level.
D. Create a temporary Onboarding OU, apply an SCP allowing AWS Config actions, relocate the root SCP to the Production OU, then move the new account to Production after rule updates.
Correct Answer: B
Explanation:
In AWS Organizations, SCPs govern the maximum permissions available to accounts by defining service access boundaries. In this case, the root SCP restricts access to AWS Config, so when the new business unit joins, administrators there cannot adjust AWS Config rules due to those restrictions.
Option B presents the best solution because it allows the new account to temporarily bypass the restrictive root SCP without altering the organization’s core policies or security posture. Creating a temporary OU named Onboarding and applying an SCP to permit AWS Config actions enables administrators in the new account to update configuration rules. After completing the necessary updates, the account can be moved back to the Production OU, where the existing root SCP restrictions remain in effect.
This approach isolates the temporary exception to just the new account during onboarding, preventing unnecessary changes to the broader organizational SCPs, thus minimizing long-term maintenance and risk.
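The steps in Option B can be scripted with the Organizations API. This is a sketch only; the root ID, account ID, policy name, and policy content are illustrative:

```python
import json
import boto3

org = boto3.client("organizations")

# Create the temporary Onboarding OU under the root (IDs are placeholders).
ou = org.create_organizational_unit(ParentId="r-examplerootid", Name="Onboarding")

# SCP that explicitly allows the AWS Config actions the administrators need.
scp_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["config:*"],
        "Resource": "*",
    }],
}
policy = org.create_policy(
    Name="AllowAWSConfigDuringOnboarding",
    Description="Temporary SCP permitting AWS Config changes",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp_document),
)
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId=ou["OrganizationalUnit"]["Id"],
)

# Move the acquired account into the Onboarding OU; move it to the
# Production OU once the Config rules have been updated.
org.move_account(
    AccountId="111122223333",
    SourceParentId="r-examplerootid",
    DestinationParentId=ou["OrganizationalUnit"]["Id"],
)
```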
The other options pose risks or complexities:
A removes root SCP restrictions, potentially exposing other restricted services organization-wide, undermining security.
C requires converting deny lists to allow lists at the root, which is a significant, risk-prone change and might open unintended permissions.
D involves moving the root SCP to a lower OU and restructuring SCP placement, complicating governance and increasing management overhead.
Thus, B is the most secure, manageable, and efficient way to enable the new account’s administrators to modify AWS Config during onboarding.
Question 4:
A company runs a two-tier web application on-premises with a stateful application server and a PostgreSQL database on separate servers. Expecting a surge in users, the company plans to migrate to AWS using Amazon Aurora PostgreSQL, EC2 Auto Scaling, and Elastic Load Balancing (ELB).
Which architecture will ensure seamless scaling of both application and database layers while maintaining a consistent user experience?
A. Enable Aurora Auto Scaling for read replicas, use a Network Load Balancer with least outstanding requests routing and enable sticky sessions.
B. Enable Aurora Auto Scaling for writer instances, use an Application Load Balancer with round robin routing and enable sticky sessions.
C. Enable Aurora Auto Scaling for read replicas, use an Application Load Balancer with round robin routing and enable sticky sessions.
D. Enable Aurora Auto Scaling for writer instances, use a Network Load Balancer with least outstanding requests routing and enable sticky sessions.
Correct Answer: C
Explanation:
When migrating a two-tier web application to AWS with expectations for user growth, it is critical to design for horizontal scalability in both the application and database layers while preserving session consistency for users.
For the database layer, enabling Aurora Auto Scaling for read replicas is the optimal choice. Amazon Aurora supports automatic scaling of Aurora Replicas based on workload, which distributes read queries efficiently, reduces load on the primary writer, and improves read throughput. Because each Aurora cluster has a single writer instance, "Auto Scaling for writer instances" (as in options B and D) is not a supported pattern; Aurora Auto Scaling applies only to replicas.
For the application layer, using an Application Load Balancer (ALB) is recommended because it is designed to handle HTTP and HTTPS traffic and supports features like content-based routing and session affinity (sticky sessions). Sticky sessions ensure that stateful user sessions persist on the same backend EC2 instance, which is important for applications that store session data locally. The round robin routing algorithm provided by ALB helps evenly distribute incoming requests across the pool of application servers.
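Both pieces can be configured via boto3. The following is a sketch under assumed names; the cluster identifier, target group ARN, capacity limits, and CPU target are placeholders:

```python
import boto3

aas = boto3.client("application-autoscaling")
elbv2 = boto3.client("elbv2")

# Register the Aurora cluster's replica count as a scalable target.
aas.register_scalable_target(
    ServiceNamespace="rds",
    ResourceId="cluster:my-aurora-cluster",
    ScalableDimension="rds:cluster:ReadReplicaCount",
    MinCapacity=1,
    MaxCapacity=5,
)

# Track average reader CPU; Aurora adds or removes replicas to hold ~60%.
aas.put_scaling_policy(
    PolicyName="aurora-replica-cpu-tracking",
    ServiceNamespace="rds",
    ResourceId="cluster:my-aurora-cluster",
    ScalableDimension="rds:cluster:ReadReplicaCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
        },
    },
)

# Enable load-balancer-generated cookie stickiness on the ALB target group.
elbv2.modify_target_group_attributes(
    TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/web/0123456789abcdef",
    Attributes=[
        {"Key": "stickiness.enabled", "Value": "true"},
        {"Key": "stickiness.type", "Value": "lb_cookie"},
        {"Key": "stickiness.lb_cookie.duration_seconds", "Value": "3600"},
    ],
)
```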
The Network Load Balancer (NLB), while excellent for low-latency, high-throughput TCP/UDP traffic, operates at layer 4: it offers neither the least outstanding requests algorithm (an ALB routing option) nor cookie-based sticky sessions, making options A and D less suitable.
In summary, Option C leverages Aurora’s read replica auto-scaling for database scalability and ALB’s HTTP capabilities with sticky sessions for consistent and balanced application load distribution. This combination provides both efficient scaling and a seamless user experience as demand grows.
Question 5:
A company collects metadata from its on-premises hosted applications, which are accessed by consumer devices like smart TVs and internet radios. Many older devices fail due to unsupported HTTP headers in responses. Currently, an on-premises load balancer strips these problematic headers based on the User-Agent header. The company has migrated its applications to AWS Lambda functions and wants to use serverless technologies while still supporting these legacy devices.
Which architecture best supports the older devices, processes metadata effectively, and utilizes serverless components?
A. Use Amazon CloudFront to distribute the metadata service, forward requests to an Application Load Balancer (ALB), which invokes the right Lambda functions. Use a CloudFront function to strip headers based on User-Agent.
B. Use Amazon API Gateway REST API to invoke the Lambda functions and modify gateway responses to remove unsupported headers by User-Agent.
C. Use Amazon API Gateway HTTP API to invoke Lambda, use a response mapping template to remove headers based on User-Agent, and associate it with the API.
D. Use Amazon CloudFront distribution forwarding to ALB, which invokes Lambda functions, and implement a Lambda@Edge function to remove unsupported headers based on User-Agent.
Correct Answer: D
Explanation:
The company's goal is to maintain legacy device compatibility by removing unsupported HTTP headers while migrating to a fully serverless, scalable AWS architecture. Since applications are now Lambda-based, the design must integrate serverless compute and edge processing to efficiently handle requests and responses.
Option D is the best fit because it leverages Amazon CloudFront as a Content Delivery Network (CDN), reducing latency and distributing the metadata service globally. CloudFront can forward requests to an Application Load Balancer (ALB), which routes them to the appropriate Lambda function. This keeps the compute logic serverless and scalable.
The key innovation here is the use of Lambda@Edge, a feature that lets you execute Lambda functions at CloudFront edge locations close to users. Lambda@Edge can inspect responses in real time and modify them before they reach the client. By using the User-Agent header, it can selectively strip unsupported HTTP headers for legacy devices, ensuring compatibility without adding latency at the origin.
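For illustration, a minimal Lambda@Edge origin-response handler in Python might look like the following; the User-Agent markers and the header names being stripped are assumptions, not values from the scenario:

```python
# Lambda@Edge origin-response handler (Python runtime).
LEGACY_UA_MARKERS = ("SmartTV", "InternetRadio")   # hypothetical legacy-device markers
UNSUPPORTED_HEADERS = ("strict-transport-security", "x-frame-options")  # illustrative

def handler(event, context):
    cf = event["Records"][0]["cf"]
    request = cf["request"]
    response = cf["response"]

    # CloudFront lowercases header keys; each maps to a list of key/value pairs.
    ua_header = request["headers"].get("user-agent", [{"value": ""}])
    user_agent = ua_header[0]["value"]

    # Strip headers that older devices cannot parse.
    if any(marker in user_agent for marker in LEGACY_UA_MARKERS):
        for name in UNSUPPORTED_HEADERS:
            response["headers"].pop(name, None)

    return response
```

Because the function runs at the edge location serving the request, modern devices see unmodified responses while legacy devices get sanitized ones, with no extra hop to the origin.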
Option A also uses CloudFront and an ALB but relies on CloudFront Functions, which are limited to lightweight, short-running viewer-side processing and are more constrained than Lambda@Edge for response manipulation. Options B (API Gateway REST API) and C (API Gateway HTTP API) manipulate headers at the gateway level, which provides no edge-level processing and keeps all header rewriting at the origin, increasing latency.
Therefore, Option D offers a seamless, globally distributed, and fully serverless architecture that supports legacy devices by removing problematic headers efficiently at the edge, ensuring compatibility and performance.
Question 6:
A company runs a traditional web application on Amazon EC2 instances but wants to refactor it into a containerized microservices architecture. The application needs separate environments for production and testing. The workload varies within known minimum and maximum limits. The company prefers a serverless design that minimizes operational overhead and costs.
What is the most cost-effective way to design this new application architecture?
A. Deploy container images as AWS Lambda functions with concurrency limits and create separate Lambda integrations via API Gateway for production and testing.
B. Store container images in Amazon Elastic Container Registry (ECR). Use two Amazon ECS clusters with Fargate to run containers, each with an Application Load Balancer for production and testing.
C. Store container images in ECR and deploy on two Amazon EKS clusters with Fargate, using Application Load Balancers for traffic routing.
D. Upload container images to AWS Elastic Beanstalk, create separate environments for production and testing, and use two Application Load Balancers.
Correct Answer: B
Explanation:
The company wants to modernize its application by migrating from EC2-based monolithic deployment to a microservices-based container architecture, maintaining separate production and testing environments, handling variable load efficiently, and minimizing operational management with serverless infrastructure.
Option B (Amazon ECS with Fargate) provides a managed container service with serverless compute. ECS abstracts server and cluster management, while Fargate allows running containers without provisioning or managing servers. This reduces operational complexity significantly. The container images stored in ECR can be easily deployed as ECS tasks. Auto-scaling can dynamically handle varying loads, and separate ECS clusters isolate production and testing. Application Load Balancers route traffic appropriately. This setup is cost-effective because billing is based on actual resource usage, without paying for idle capacity.
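A condensed boto3 sketch of Option B's building blocks; the cluster names, image URI, role and target group ARNs, subnets, and ports are placeholders:

```python
import boto3

ecs = boto3.client("ecs")

# One cluster per environment keeps production and testing isolated.
for env in ("production", "testing"):
    ecs.create_cluster(clusterName=f"web-{env}")

# Fargate task definition referencing the image in ECR.
ecs.register_task_definition(
    family="web-service",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",
    memory="512",
    executionRoleArn="arn:aws:iam::111122223333:role/ecsTaskExecutionRole",
    containerDefinitions=[{
        "name": "web",
        "image": "111122223333.dkr.ecr.us-east-1.amazonaws.com/web-service:latest",
        "portMappings": [{"containerPort": 8080, "protocol": "tcp"}],
        "essential": True,
    }],
)

# Service in the production cluster, fronted by its ALB target group.
ecs.create_service(
    cluster="web-production",
    serviceName="web",
    taskDefinition="web-service",
    desiredCount=2,
    launchType="FARGATE",
    networkConfiguration={"awsvpcConfiguration": {
        "subnets": ["subnet-aaa111", "subnet-bbb222"],
        "securityGroups": ["sg-0123456789example"],
        "assignPublicIp": "DISABLED",
    }},
    loadBalancers=[{
        "targetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/web-prod/0123456789abcdef",
        "containerName": "web",
        "containerPort": 8080,
    }],
)
```

The testing environment would repeat the service creation against the web-testing cluster and its own target group, keeping the two environments fully separated.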
Option A involves running containers inside AWS Lambda, which is technically possible but not optimal. Lambda functions have execution time limits (15 minutes), memory constraints, and concurrency limits. It’s better suited for short-lived, event-driven tasks than long-running microservices. Managing concurrency for variable workloads adds complexity and may increase costs.
Option C uses Amazon EKS (Kubernetes) with Fargate, which offers advanced container orchestration but requires more expertise and operational effort. Kubernetes management introduces complexity that contradicts the company’s goal to minimize operational overhead.
Option D employs Elastic Beanstalk, which simplifies deployment but is less flexible for containerized microservices and does not offer the same granularity in scaling and environment isolation as ECS.
In summary, Option B best meets the company’s goals by providing a fully managed, serverless, container-based microservices architecture that scales with demand and maintains environment separation at a lower operational and financial cost.
Question 7:
A company runs a multi-tier web application hosted on Amazon EC2 instances behind an Application Load Balancer (ALB). These EC2 instances are managed by an Auto Scaling group, which has a minimum and maximum capacity set to zero. Both the ALB and Auto Scaling group are duplicated in a secondary AWS Region as a backup. The application data resides in an Amazon RDS Multi-AZ instance, with a read replica available in the backup Region. Users access the application via a Route 53 DNS record.
The company wants to reduce its Recovery Time Objective (RTO) to less than 15 minutes by enabling automatic failover to the backup Region without adopting an active-active setup due to budget constraints.
Which solution should a solutions architect recommend?
A. Use a latency-based routing policy on Route 53 to distribute traffic between both ALBs. Deploy a Lambda function in the backup Region to promote the read replica and adjust Auto Scaling group settings. Trigger this Lambda from a CloudWatch alarm monitoring 5XX errors on the primary Region ALB.
B. Implement a Route 53 health check on the web application. When the health check fails, trigger an SNS notification that invokes a Lambda function in the backup Region to promote the read replica and update Auto Scaling group settings. Use a failover routing policy on Route 53 to switch traffic to the backup ALB.
C. Match the Auto Scaling group capacity in the backup Region to the primary. Use latency-based routing between ALBs and replace the read replica with a standalone RDS instance using cross-region replication with snapshots and S3.
D. Configure AWS Global Accelerator with both ALBs as equal-weight targets. Use a CloudWatch alarm to trigger a Lambda function promoting the read replica and adjusting Auto Scaling when 5XX errors appear in the primary Region.
Correct Answer: B
Explanation:
To achieve a Recovery Time Objective (RTO) under 15 minutes with automatic failover to the backup Region in a cost-effective manner, the design must enable rapid detection of failures and quick traffic redirection.
Option B is the most suitable because it leverages Route 53 health checks to monitor application availability in the primary Region. If the health check detects an outage, it triggers an SNS notification that activates a Lambda function in the backup Region. This Lambda function promotes the RDS read replica to primary and updates the Auto Scaling group to launch EC2 instances. The failover routing policy in Route 53 ensures DNS switches automatically to the backup Region’s ALB. This approach is reliable, event-driven, and cost-efficient since it doesn’t require both Regions to be active simultaneously.
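The backup-Region Lambda function that SNS invokes could be as small as the following sketch; the replica identifier, Auto Scaling group name, Region, and capacities are placeholders:

```python
import boto3

# Runs in the backup Region, invoked via the SNS topic tied to the
# Route 53 health check failure.
rds = boto3.client("rds", region_name="us-west-2")
autoscaling = boto3.client("autoscaling", region_name="us-west-2")

def handler(event, context):
    # Promote the cross-Region read replica to a standalone primary.
    rds.promote_read_replica(DBInstanceIdentifier="app-db-replica")

    # Scale the standby Auto Scaling group up from zero to serve traffic.
    autoscaling.update_auto_scaling_group(
        AutoScalingGroupName="web-asg-backup",
        MinSize=2,
        MaxSize=6,
        DesiredCapacity=2,
    )
```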
Option A uses latency-based routing, which balances traffic between regions based on latency but doesn’t provide a true failover mechanism. This can delay or complicate failover since traffic may still be directed to an unhealthy region.
Option C involves complex replication and higher operational costs by replacing the read replica with a standalone instance and managing snapshots. It also does not guarantee the required RTO due to manual snapshot delays.
Option D introduces AWS Global Accelerator, adding unnecessary complexity and cost. While Global Accelerator provides global traffic management, it does not directly automate failover actions like Lambda-triggered promotion and scaling.
Therefore, Option B best balances cost, simplicity, and speed to meet the company’s automatic failover and RTO goals.
Question 8:
A company hosts a critical application on a single EC2 instance. The application depends on the following infrastructure components:
A single-node Amazon ElastiCache for Redis cluster for caching
An Amazon RDS MariaDB instance for relational data storage
To maintain high availability, all infrastructure components must be kept active and recover quickly from failures. A solutions architect must design improvements to automatically recover infrastructure components with minimal downtime.
Which combination of actions should the architect take to improve reliability and reduce downtime? (Choose three)
A. Use an Elastic Load Balancer (ELB) to distribute traffic across multiple EC2 instances within an Auto Scaling group that has a minimum size of two.
B. Use an ELB with EC2 instances configured in unlimited mode.
C. Create a read replica of the RDS instance in the same Availability Zone (AZ) and promote it during failure.
D. Modify the RDS instance to use Multi-AZ deployment across two Availability Zones for automatic failover.
E. Create a replication group for the ElastiCache cluster and use an Auto Scaling group with minimum two nodes.
F. Enable Multi-AZ for the ElastiCache cluster to replicate data across AZs and support failover.
Correct Answer: A, D, F
Explanation:
Ensuring infrastructure resilience involves addressing fault tolerance and failover capabilities at the compute, database, and caching layers.
Option A is essential because distributing traffic using an Elastic Load Balancer (ELB) across multiple EC2 instances increases availability. Pairing this with an Auto Scaling group set to a minimum of two instances ensures that if one instance fails, another automatically handles the load, reducing downtime and preventing single points of failure.
Option D is critical for database availability. Enabling Multi-AZ deployment for RDS MariaDB replicates data synchronously across two Availability Zones. If the primary AZ experiences failure, Amazon RDS automatically fails over to the standby instance in the other AZ, providing near-instant recovery without manual intervention.
Option F enhances the caching layer's fault tolerance. Enabling Multi-AZ for ElastiCache Redis creates a replica in a different AZ. If the primary node fails, the secondary node is promoted automatically, minimizing cache downtime and data loss risk.
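Options D and F each amount to a single API call. A hedged boto3 sketch with placeholder identifiers follows; Option F additionally assumes the single-node cluster has first been recreated as a replication group with at least one replica:

```python
import boto3

rds = boto3.client("rds")
elasticache = boto3.client("elasticache")

# Option D: convert the MariaDB instance to Multi-AZ; RDS provisions a
# synchronous standby in a second AZ and fails over automatically.
rds.modify_db_instance(
    DBInstanceIdentifier="app-mariadb",
    MultiAZ=True,
    ApplyImmediately=True,
)

# Option F: enable automatic failover and Multi-AZ on the Redis
# replication group.
elasticache.modify_replication_group(
    ReplicationGroupId="app-redis",
    AutomaticFailoverEnabled=True,
    MultiAZEnabled=True,
    ApplyImmediately=True,
)
```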
The other options have drawbacks:
Option B: Unlimited mode relates to CPU credit usage on burstable instances and does not improve fault tolerance or recovery.
Option C: Creating a read replica in the same AZ provides no protection if that AZ fails.
Option E: An EC2 Auto Scaling group cannot manage ElastiCache nodes, and creating a replication group alone, without Multi-AZ automatic failover enabled, does not provide the automatic recovery that Option F delivers.
By implementing Options A, D, and F, the company gains automated failover, redundancy, and improved resilience across all critical components, ensuring minimal downtime and faster recovery from failures.
Question 9:
A retail company runs its e-commerce application on AWS, using Amazon EC2 instances behind an Application Load Balancer (ALB), with an Amazon RDS database. The application uses Amazon CloudFront with the ALB as its origin for caching static content. Amazon Route 53 manages the domain’s DNS. After a recent update, the ALB occasionally returns 502 Bad Gateway errors due to malformed HTTP headers, although refreshing the page immediately fixes the issue. While the team works on a fix, the solutions architect needs to display a custom error page instead of the default ALB error page whenever the 502 error occurs.
Which two actions should the architect take to achieve this with minimal operational effort?
A. Create an Amazon S3 bucket configured for static website hosting and upload the custom error page there.
B. Set up a CloudWatch alarm to trigger a Lambda function on ALB health check failures, which modifies the ALB forwarding rules to a public web server.
C. Add health checks to Route 53 DNS records and configure a fallback target to a public webpage if the health check fails.
D. Use a CloudWatch alarm to trigger a Lambda function on ALB internal errors, updating forwarding rules to point to a public web server.
E. Configure CloudFront to serve a custom error page from the S3 bucket and modify DNS to point to a publicly accessible web page.
Correct Answer: A and E
Explanation:
The problem requires showing a custom error page during 502 Bad Gateway errors from the ALB, with minimal operational overhead while the root cause is being addressed. The solution must be simple, reliable, and easy to maintain.
Option A: Creating an Amazon S3 bucket configured for static website hosting is a straightforward, cost-effective method to serve custom error pages. Static hosting in S3 ensures the error page is always available, independent of backend issues. Uploading custom HTML pages allows the company to present a branded and user-friendly error page without involving complex infrastructure changes.
Option E: Configuring Amazon CloudFront to handle custom error pages leverages CloudFront’s built-in error handling and caching capabilities. When a 502 error occurs at the ALB origin, CloudFront can serve the custom error page stored in the S3 bucket. This offloads error management from the ALB and ensures consistent user experience globally. It also reduces latency and operational complexity compared to reactive Lambda functions or DNS-level routing.
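One way to wire this up is CloudFront's custom error response configuration. This is a boto3 sketch, assuming the S3 error bucket has already been added as an origin of the distribution; the distribution ID and page path are placeholders:

```python
import boto3

cloudfront = boto3.client("cloudfront")

# Fetch the current config, add a 502 custom error response that points
# at the error page served from the S3 origin, and push the update.
dist_id = "E1EXAMPLE2345"
cfg = cloudfront.get_distribution_config(Id=dist_id)
config = cfg["DistributionConfig"]

config["CustomErrorResponses"] = {
    "Quantity": 1,
    "Items": [{
        "ErrorCode": 502,
        "ResponsePagePath": "/errors/502.html",  # object in the S3 error bucket
        "ResponseCode": "502",
        "ErrorCachingMinTTL": 30,
    }],
}

cloudfront.update_distribution(
    Id=dist_id,
    IfMatch=cfg["ETag"],  # required optimistic-locking ETag
    DistributionConfig=config,
)
```

Once the change deploys, CloudFront intercepts 502 responses from the ALB origin and serves the branded page instead, with no runtime automation to maintain.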
Why not the other options?
Options B and D introduce complexity by using CloudWatch alarms and Lambda functions to dynamically change ALB forwarding rules, which increases maintenance overhead and can introduce additional failure points.
Option C modifies DNS failover policies in Route 53, which is designed for routing and availability, not for displaying custom error content during HTTP errors. This approach is less precise and can cause DNS propagation delays.
Thus, the combination of hosting custom error pages in Amazon S3 and using CloudFront’s custom error response feature offers a simple, scalable, and low-maintenance solution for showing tailored error messages during temporary ALB issues.
Question 10:
A company is migrating its multi-tier web application to AWS. The application requires high availability, fault tolerance, and minimal downtime during deployments. The web tier runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The company wants to implement a deployment strategy that allows gradual traffic shifting from the current version to the new version without impacting end users.
Which AWS service or feature should be used to achieve this?
A. Use AWS Elastic Beanstalk with rolling deployments
B. Use AWS CodeDeploy with blue/green deployment on the EC2 instances behind the ALB
C. Manually update the EC2 instances one by one using SSH and deregister from the ALB before update
D. Launch new EC2 instances with the new version and update the ALB listener rules to point to the new instances
Correct Answer: B
Explanation:
In this scenario, the goal is to perform a deployment that minimizes downtime and allows gradual traffic shifting. AWS CodeDeploy with blue/green deployment is the most suitable solution for this. CodeDeploy can manage deployments on EC2 instances behind an Application Load Balancer and enables seamless traffic shifting from the old version to the new version. This strategy reduces risk by running both versions simultaneously and slowly rerouting users to the new environment.
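A sketch of the CodeDeploy side of this setup using boto3; the application name, service role ARN, target group, and Auto Scaling group names are placeholders:

```python
import boto3

codedeploy = boto3.client("codedeploy")

# Blue/green deployment group behind the ALB: CodeDeploy provisions a
# green fleet by copying the Auto Scaling group, registers it with the
# target group, shifts traffic, then terminates the blue fleet.
codedeploy.create_deployment_group(
    applicationName="web-app",
    deploymentGroupName="web-app-bluegreen",
    serviceRoleArn="arn:aws:iam::111122223333:role/CodeDeployServiceRole",
    deploymentStyle={
        "deploymentType": "BLUE_GREEN",
        "deploymentOption": "WITH_TRAFFIC_CONTROL",
    },
    blueGreenDeploymentConfiguration={
        "terminateBlueInstancesOnDeploymentSuccess": {
            "action": "TERMINATE",
            "terminationWaitTimeInMinutes": 60,
        },
        "deploymentReadyOption": {"actionOnTimeout": "CONTINUE_DEPLOYMENT"},
        "greenFleetProvisioningOption": {"action": "COPY_AUTO_SCALING_GROUP"},
    },
    loadBalancerInfo={"targetGroupInfoList": [{"name": "web-target-group"}]},
    autoScalingGroups=["web-asg"],
)
```

The one-hour termination wait keeps the old fleet available for a fast rollback if the new version misbehaves after cutover.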
Option A (Elastic Beanstalk rolling deployments) can update applications but may not provide as granular control or as smooth traffic shifting as CodeDeploy blue/green deployments.
Option C is a manual and error-prone approach, which risks downtime and inconsistent user experience.
Option D requires manual management of ALB listener rules and does not inherently provide gradual traffic shifting or automation.
Thus, Option B offers the best combination of automation, reliability, and minimal downtime for the migration process.