Amazon AWS Certified Advanced Networking - Specialty ANS-C01 Exam Dumps & Practice Test Questions
Question 1:
Which approach best fulfills the requirement for encrypted communication using mutual TLS (mTLS) between clients and backend services in an Amazon EKS environment, ensuring traffic remains encrypted without being decrypted by the load balancer?
A. Deploy the AWS Load Balancer Controller on Kubernetes and configure a Network Load Balancer with a TCP listener on port 443 that forwards traffic directly to the backend service Pods' IPs.
B. Deploy the AWS Load Balancer Controller on Kubernetes and configure an Application Load Balancer with an HTTPS listener on port 443 that forwards traffic directly to the backend service Pods' IPs.
C. Create a target group linked to the EKS node group's Auto Scaling group. Use an Application Load Balancer with an HTTPS listener on port 443 to forward traffic to the target group.
D. Create a target group linked to the EKS node group's Auto Scaling group. Use a Network Load Balancer with a TLS listener on port 443 to forward traffic to the target group.
Answer: A
Explanation:
The core requirement here is to enable mutual TLS (mTLS) for secure, two-way encrypted communication between the client and backend service. This means traffic must stay encrypted in transit and must not be decrypted at the load balancer. Additionally, the solution should integrate well with Kubernetes scaling mechanisms like the Cluster Autoscaler and Horizontal Pod Autoscaler.
Option A is the best fit because configuring a Network Load Balancer (NLB) with a TCP listener supports TLS passthrough, meaning the NLB simply forwards encrypted TCP traffic directly to the backend pods without decrypting it. This allows the backend service itself to handle the mutual TLS handshake and certificate validation, maintaining true end-to-end encryption. The AWS Load Balancer Controller facilitates this integration with Kubernetes, and NLBs handle high volumes of TCP traffic efficiently, making this ideal for scenarios such as gRPC over port 443.
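As an illustration, here is a minimal boto3 sketch of the listener/target-group shape option A describes: a TCP (not TLS) listener on port 443 forwarding to an IP-type target group, so the encrypted stream passes through untouched. In a real EKS cluster the AWS Load Balancer Controller creates these resources for you from Service annotations and registers the Pod IPs; the VPC ID and ARNs below are placeholders.

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# IP-type target group so the NLB forwards straight to the Pod IPs (placeholder VPC ID)
target_group = elbv2.create_target_group(
    Name="mtls-backend-pods",
    Protocol="TCP",          # TCP, not TLS: the NLB never terminates the handshake
    Port=443,
    VpcId="vpc-0123456789abcdef0",
    TargetType="ip",
)["TargetGroups"][0]

# TCP listener on 443: encrypted bytes pass through to the Pods, which perform
# the mutual TLS handshake and certificate validation themselves
nlb_arn = "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/example/abc123"
elbv2.create_listener(
    LoadBalancerArn=nlb_arn,
    Protocol="TCP",
    Port=443,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": target_group["TargetGroupArn"]}],
)
```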
Option B uses an Application Load Balancer (ALB) with an HTTPS listener, which by design terminates TLS at the load balancer. This decrypts the traffic before forwarding, violating the requirement to keep traffic encrypted end-to-end, thus making it unsuitable for mTLS scenarios.
Option C also uses an ALB, which again decrypts traffic at the load balancer, conflicting with the goal of maintaining encryption without termination at the load balancer.
Option D suggests an NLB with a TLS listener, which does terminate TLS at the load balancer. While still encrypted, this means the load balancer decrypts traffic, breaking the requirement of TLS passthrough and mTLS handling by the backend.
In summary, Option A best supports mTLS by forwarding encrypted traffic directly to backend pods without termination, ensuring security and seamless Kubernetes integration.
Question 2:
Which design will correctly route HTTPS requests based on domain names while preserving the original client IP address for logging purposes?
A. Use an Application Load Balancer (ALB) with an HTTPS listener and path-based routing rules. Forward traffic to target groups and include the X-Forwarded-For header.
B. Use an Application Load Balancer (ALB) with an HTTPS listener and host-based routing rules per domain. Forward traffic to target groups and include the X-Forwarded-For header.
C. Use a Network Load Balancer (NLB) with a TLS listener and path-based routing rules. Forward traffic to target groups and preserve client IP addresses.
D. Use a Network Load Balancer (NLB) with a TLS listener per domain and host-based routing rules. Forward traffic to target groups and preserve client IP addresses.
Answer: B
Explanation:
This question focuses on the need to route HTTPS traffic based on domain names (host-based routing), ensure secure communication via HTTPS, and preserve the original client IP for accurate logging.
Option B stands out as the best choice. The Application Load Balancer (ALB) supports host-based routing, allowing it to route requests to different backend target groups depending on the domain name in the HTTP Host header. This directly addresses the requirement to route traffic by domain (URL). The ALB terminates TLS at the load balancer, offloading encryption processing from backend servers, which helps performance. Crucially, ALB adds the X-Forwarded-For header, passing the original client IP to backend servers so they can log client activity accurately for security audits.
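A minimal boto3 sketch of host-based routing on an existing HTTPS listener follows; the listener and target-group ARNs and the domain names are placeholders, and in practice you would also attach the ACM certificate for each domain to the listener.

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Placeholder ARNs for an existing HTTPS listener and per-domain target groups
listener_arn = "arn:aws:elasticloadbalancing:us-east-1:111122223333:listener/app/example/abc/def"
target_groups = {
    "domain1.example.com": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/domain1/111",
    "domain2.example.com": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/domain2/222",
}

# Host-based rules: the ALB inspects the Host header and forwards per domain.
# The original client IP is appended to the X-Forwarded-For header automatically.
for priority, (host, tg_arn) in enumerate(target_groups.items(), start=1):
    elbv2.create_rule(
        ListenerArn=listener_arn,
        Priority=priority,
        Conditions=[{"Field": "host-header", "HostHeaderConfig": {"Values": [host]}}],
        Actions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
    )
```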
Option A also uses an ALB, but it applies path-based routing instead of host-based routing. While path-based routing is useful for routing based on URL paths, it does not fully meet the requirement to route based on domains, which can differ entirely (e.g., domain1.com vs. domain2.com).
Option C uses a Network Load Balancer (NLB) with TLS listeners and path-based routing, but NLB operates at Layer 4 and does not support content-based routing such as path or host-based routing. While it preserves the client IP naturally, it cannot perform HTTP-level routing, making it unsuitable for domain-based routing.
Option D suggests NLB with TLS listeners and host-based routing per domain. However, NLBs do not natively support Layer 7 routing (host-based routing), so this option is technically inaccurate.
Therefore, Option B is the optimal solution because it fully meets all requirements: HTTPS termination, domain-based routing, and client IP preservation for logging.
Question 3:
Which configuration will fulfill the requirement that the application must only be accessible through AWS Global Accelerator, preventing any direct internet access to the Application Load Balancer (ALB)?
A. Place the ALB in a private subnet, attach an internet gateway to the VPC, but do not add routes in the subnet’s route tables to the internet gateway. Set up the accelerator with endpoint groups pointing to the ALB. Configure the ALB’s security group to allow inbound traffic only from the internet on the ALB listener port.
B. Place the ALB in a private subnet. Configure the accelerator with endpoint groups pointing to the ALB. Set the ALB’s security group to accept inbound traffic only from the internet on the ALB listener port.
C. Place the ALB in a public subnet, attach an internet gateway, add routes in the subnet route table to the internet gateway. Configure the accelerator with endpoint groups pointing to the ALB. Restrict the ALB’s security group to allow inbound traffic only from the accelerator’s IP addresses on the ALB listener port.
D. Place the ALB in a private subnet, attach an internet gateway, and add routes in the subnet route table to the internet gateway. Configure the accelerator with endpoint groups pointing to the ALB. Set the ALB’s security group to allow inbound traffic only from the accelerator’s IP addresses on the ALB listener port.
Correct Answer: D
Explanation:
The key requirement here is to ensure that the application is only reachable through the AWS Global Accelerator and cannot be accessed directly over the internet via the ALB. To achieve this, the ALB must be secured from direct internet exposure while still being accessible via the accelerator.
First, placing the ALB in a private subnet ensures it is not directly accessible from the public internet. Private subnets do not have automatic internet access unless routes and gateways are explicitly configured. This limits exposure by default.
Second, the internet gateway is attached to the VPC (and referenced in the routing described in the option) because AWS Global Accelerator requires an internet gateway on the VPC in order to reach endpoints in private subnets. This does not expose the ALB directly: an ALB deployed into private subnets receives no public IP addresses, so internet clients cannot reach it without going through the accelerator.
Third, configuring the AWS Global Accelerator with endpoint groups targeting the ALB allows the traffic to be routed securely and efficiently from edge locations close to users, improving availability and latency.
Fourth, the security group on the ALB must explicitly allow inbound connections only from the IP addresses of the Global Accelerator. This ensures no other traffic, including direct internet traffic, can reach the ALB.
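The boto3 sketch below shows the two pieces that do the real work in option D: an endpoint group that points the accelerator at the ALB, and a security-group rule that admits only the accelerator's source addresses. All identifiers are placeholders, and the CIDR is only a stand-in for whatever source ranges your accelerator actually uses in your account and Region.

```python
import boto3

ga = boto3.client("globalaccelerator", region_name="us-west-2")  # the Global Accelerator API is served from us-west-2
ec2 = boto3.client("ec2", region_name="us-east-1")

# Placeholder identifiers
accelerator_listener_arn = "arn:aws:globalaccelerator::111122223333:accelerator/abcd1234/listener/xyz"
alb_arn = "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/internal-alb/123"
alb_security_group = "sg-0123456789abcdef0"

# Point the accelerator's endpoint group at the ALB in the private subnets
ga.create_endpoint_group(
    ListenerArn=accelerator_listener_arn,
    EndpointGroupRegion="us-east-1",
    EndpointConfigurations=[{"EndpointId": alb_arn, "Weight": 128}],
)

# Allow inbound 443 only from the accelerator's source addresses.
# The CIDR below is a placeholder -- substitute the ranges your accelerator uses.
accelerator_source_cidr = "192.0.2.0/24"
ec2.authorize_security_group_ingress(
    GroupId=alb_security_group,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": accelerator_source_cidr, "Description": "Global Accelerator only"}],
    }],
)
```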
Option D correctly combines all of these elements: the ALB sits in a private subnet, the internet gateway and routing satisfy the accelerator's reachability requirement without giving the ALB a public address, the accelerator's endpoint group targets the ALB, and the security group admits only the accelerator's IP addresses.
Option A is incorrect because its security group allows inbound traffic from the internet at large rather than only from the accelerator's IP addresses, so direct access is not actually blocked. Option B has the same overly permissive security group and also omits the internet gateway that the accelerator needs in order to reach an ALB in a private subnet. Option C places the ALB in a public subnet, which contradicts the requirement to avoid direct internet exposure.
Thus, D is the best choice to secure the ALB behind the Global Accelerator, meeting all requirements.
Question 4:
A global delivery company has multiple business units, each with applications running in separate AWS accounts and isolated application VPCs. All these applications require data access from a central shared services VPC in the same AWS Region. The company wants a network architecture that offers detailed security controls and can easily scale as more business units connect in the future.
Which option provides the most secure and scalable network connectivity?
A. Deploy a central transit gateway and create VPC attachments from each application VPC. Establish full mesh connectivity between all VPCs using the transit gateway.
B. Establish VPC peering connections directly between the central shared services VPC and each application VPC.
C. Use AWS PrivateLink by creating VPC endpoint services in the central shared services VPC and corresponding VPC endpoints in each application VPC.
D. Deploy a central transit VPC using a VPN appliance from AWS Marketplace. Create VPN connections from each VPC to this transit VPC to achieve full mesh connectivity.
Correct Answer: A
Explanation:
The company's network needs are complex: multiple business units with independent VPCs, a centralized shared services VPC, granular security requirements, and scalability as new units join. Among the options, a central AWS Transit Gateway (TGW) solution provides the most secure and scalable approach.
The Transit Gateway acts as a central hub to which all VPCs can attach. This hub-and-spoke architecture greatly simplifies network management by avoiding the complexity of numerous point-to-point connections. Instead of managing many individual VPC peering links (which scale poorly), each VPC only needs a single attachment to the TGW.
Using TGW also enhances security and control because routing policies and network segmentation can be implemented centrally. You can apply granular route tables per attachment, controlling what traffic is allowed between each VPC. This makes it easier to enforce security policies across business units.
TGW also scales seamlessly. As new business units add VPCs, you simply attach the new VPCs to the TGW. The TGW supports thousands of attachments, so it handles large-scale environments efficiently.
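For illustration, here is a minimal boto3 sketch of the hub-and-spoke pattern: create the Transit Gateway once with default route-table association and propagation disabled so routing stays explicit per attachment, then onboard each business unit with a single attachment call. The VPC and subnet IDs are placeholders, and in a multi-account setup the gateway would also be shared via AWS Resource Access Manager.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create the central hub once; disabling the default route table keeps
# association and propagation explicit, so segmentation stays under your control.
tgw = ec2.create_transit_gateway(
    Description="Central hub for shared services and business-unit VPCs",
    Options={
        "DefaultRouteTableAssociation": "disable",
        "DefaultRouteTablePropagation": "disable",
    },
)["TransitGateway"]

# Onboarding a new business unit is a single attachment call (placeholder IDs)
attachment = ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw["TransitGatewayId"],
    VpcId="vpc-0a1b2c3d4e5f67890",
    SubnetIds=["subnet-0aaa1111bbbb2222c"],
)["TransitGatewayVpcAttachment"]
print("Attached:", attachment["TransitGatewayAttachmentId"])
```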
Option B (VPC peering) is simple but not scalable. Peering requires a connection for each VPC pair, so the number of peering connections grows quadratically: a full mesh of n VPCs needs n(n-1)/2 peering links. It also lacks centralized route control.
Option C (PrivateLink) is designed primarily for exposing specific services privately and securely between VPCs. It’s great for service endpoints but not ideal for broad, scalable network connectivity between multiple VPCs.
Option D involves deploying and managing VPN appliances, which introduces operational complexity and overhead. While possible, it’s less efficient and scalable than using TGW, which is a fully managed AWS service optimized for this purpose.
Therefore, option A best meets the requirements of scalability, granular security control, and simplified management in a multi-account, multi-VPC environment.
Question 5:
Which approach best satisfies the requirement to identify the cause of network slowness and resolve throughput limitations for an AWS Direct Connect setup?
A. Examine the CloudWatch metrics VirtualInterfaceBpsEgress and VirtualInterfaceBpsIngress to find the Virtual Interface (VIF) with the highest traffic during slow periods, then create a new 10 Gbps dedicated connection and transfer traffic to it.
B. Examine the CloudWatch metrics VirtualInterfaceBpsEgress and VirtualInterfaceBpsIngress to identify the VIF causing high throughput, then upgrade the current dedicated connection to 10 Gbps bandwidth.
C. Review CloudWatch metrics ConnectionBpsIngress and ConnectionPpsEgress to find the high-traffic VIF, then upgrade the dedicated connection to a 5 Gbps hosted connection.
D. Check CloudWatch metrics ConnectionBpsIngress and ConnectionPpsEgress to identify the VIF with high throughput, create a new 10 Gbps dedicated connection, and shift traffic to it.
Correct answer: B
Explanation:
In this scenario, the primary goal is to detect which Virtual Interface (VIF) contributes the most to network slowness by monitoring throughput metrics, then to apply an effective solution to overcome the bandwidth bottleneck on AWS Direct Connect. AWS CloudWatch provides multiple metrics, but selecting the most granular and relevant ones is key.
Options A and B correctly recommend reviewing VirtualInterfaceBpsEgress and VirtualInterfaceBpsIngress metrics because these provide specific insight into each VIF’s throughput. This level of granularity helps pinpoint the exact VIF responsible for high traffic during observed slowness.
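A small boto3 sketch of pulling those per-VIF metrics from CloudWatch is shown below; it queries the AWS/DX namespace for VirtualInterfaceBpsEgress on a single virtual interface, and the VIF ID is a placeholder.

```python
import boto3
from datetime import datetime, timedelta, timezone

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Per-VIF throughput over the last 24 hours (placeholder virtual interface ID)
now = datetime.now(timezone.utc)
response = cloudwatch.get_metric_statistics(
    Namespace="AWS/DX",
    MetricName="VirtualInterfaceBpsEgress",
    Dimensions=[{"Name": "VirtualInterfaceId", "Value": "dxvif-0123abcd"}],
    StartTime=now - timedelta(hours=24),
    EndTime=now,
    Period=300,                      # 5-minute data points
    Statistics=["Average", "Maximum"],
)
for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], int(point["Average"]), int(point["Maximum"]))
```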
Option A suggests creating an entirely new 10 Gbps connection and moving traffic to it. Although this might temporarily alleviate bandwidth constraints, it introduces additional operational overhead and complexity in managing multiple dedicated connections. It also doesn’t directly enhance the current connection’s capacity.
Option B is more straightforward and efficient: after identifying the bandwidth-hungry VIF, simply upgrade the existing dedicated connection to 10 Gbps. This directly increases the available bandwidth and resolves throughput saturation without unnecessary traffic shifts or added infrastructure.
Options C and D advise using ConnectionBpsIngress and ConnectionPpsEgress metrics, which measure aggregate connection traffic but lack specificity for individual VIFs, making it harder to isolate problematic traffic sources. Also, upgrading to a 5 Gbps hosted connection in C offers less bandwidth than required and might not solve the issue.
In summary, B is the most effective choice because it uses the right metrics to identify the problem and applies a direct, scalable solution by upgrading the existing connection to 10 Gbps, efficiently resolving the bandwidth saturation.
Question 6:
Given a scenario where customers have overlapping internal IP addresses and cannot share or expose their IPs over the internet, which combination of solutions ensures private, secure connectivity without internet exposure?
A. Place the SaaS service endpoint behind a Network Load Balancer (NLB).
B. Set up an endpoint service and allow customers to connect to it.
C. Place the SaaS service endpoint behind an Application Load Balancer (ALB).
D. Use VPC peering with customers' VPCs and route traffic via NAT gateways.
E. Deploy an AWS Transit Gateway, connect the SaaS VPC to it, share the gateway with customers, and configure routing.
Correct answers: A, B, and E
Explanation:
When customers have overlapping IP addresses and strict privacy requirements—such as no sharing of internal IPs or internet exposure—the solution must provide isolated, private connectivity. Each option here targets different aspects of networking in AWS, but not all fit these strict needs.
Option A involves placing the SaaS endpoint behind a Network Load Balancer (NLB). NLBs operate at Layer 4, support TCP/UDP traffic, and can be deployed as internal load balancers with private IP addressing, so nothing is exposed publicly. Just as importantly, an NLB is the load balancer type that AWS PrivateLink endpoint services are fronted by, which makes it the right building block for this scenario.
Option B refers to creating an endpoint service via AWS PrivateLink. PrivateLink allows customers to connect privately to the SaaS provider over secure, private IP connections without traversing the public internet. This is critical for avoiding exposure and handling overlapping IP addresses because traffic is encapsulated within private AWS networks.
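As a sketch of the PrivateLink pattern behind options A and B, the snippet below shows both sides in one place for brevity; the ARNs, VPC, and subnet IDs are placeholders, and the customer-side call would run in the customer's own account.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Provider side: expose the NLB-fronted SaaS service as a PrivateLink endpoint service
service = ec2.create_vpc_endpoint_service_configuration(
    NetworkLoadBalancerArns=[
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/saas-nlb/abc123"
    ],
    AcceptanceRequired=True,   # the provider approves each customer connection request
)["ServiceConfiguration"]

# Customer side (run in the customer's account/VPC): create an interface endpoint.
# Traffic stays on private AWS networking, so the customer's internal CIDR never
# has to be shared with, or routable from, the provider.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0customer1234567a",
    ServiceName=service["ServiceName"],
    SubnetIds=["subnet-0customer111aaa"],
)
```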
Option C suggests using an Application Load Balancer (ALB), which operates at Layer 7 for HTTP/S traffic. Although an ALB can be deployed internally, it is not the load balancer type used to front a PrivateLink endpoint service, and by itself it does nothing to address overlapping address spaces, so it is not the right building block here.
Option D proposes VPC peering with NAT gateways for routing. While VPC peering offers private connectivity, it does not support overlapping IP address spaces, which directly conflicts with the scenario. Additionally, routing through NAT gateways implies internet traffic, which the requirements explicitly forbid.
Option E involves deploying an AWS Transit Gateway. A Transit Gateway provides scalable, private connectivity among many VPCs from a central hub. Sharing the Transit Gateway with customers (for example, through AWS Resource Access Manager) and configuring per-attachment route tables keeps traffic on private AWS networking with no internet exposure.
The best approach combines A, B, and E: an NLB hosts the SaaS service privately, PrivateLink endpoint services let customers connect securely without exposing or renumbering their internal addresses, and the Transit Gateway orchestrates private routing across the participating VPCs. Together these provide a robust, scalable, and secure solution for private connectivity without internet traffic, fulfilling the scenario's requirements.
Question 7:
Which AWS service is best suited for securely connecting multiple geographically dispersed on-premises data centers to an AWS VPC with minimal latency and consistent network performance?
A. Direct Connect
B. VPN Gateway
C. Transit Gateway
D. AWS Global Accelerator
Answer: A
Explanation:
When connecting multiple on-premises data centers to AWS VPCs, ensuring secure, low-latency, and consistent network performance is crucial. Among the listed options, AWS Direct Connect is the optimal choice. Direct Connect provides a dedicated, private network connection from your premises to AWS, bypassing the public internet. This dedicated connection reduces network jitter, latency, and bandwidth variability that VPN over the internet can face. It is designed for high throughput and stable connections, which is essential for production workloads requiring predictable performance.
A VPN Gateway (B), while secure, transmits data over the public internet, which can cause variability in latency and bandwidth due to congestion and routing factors outside your control. VPN is typically used for lower-cost, quick connectivity or as a backup to Direct Connect.
Transit Gateway (C) is a powerful AWS service for interconnecting multiple VPCs and on-premises networks but does not itself provide the physical connection; it often complements Direct Connect by aggregating multiple VPCs and on-prem networks for simplified routing.
AWS Global Accelerator (D) is designed to optimize the path of internet traffic to AWS applications globally but does not provide private, dedicated connections like Direct Connect.
Therefore, Direct Connect is the most appropriate service when minimal latency, consistent network performance, and private connectivity are required for multiple on-premises data centers.
Question 8:
In a hybrid architecture, how can you achieve high availability for Direct Connect connections between an on-premises data center and AWS?
A. Use multiple VPN tunnels over a single Direct Connect connection
B. Deploy multiple Direct Connect connections at a single AWS Direct Connect location
C. Use multiple Direct Connect connections in different AWS Direct Connect locations and configure BGP for failover
D. Use a Transit Gateway without Direct Connect for redundancy
Answer: C
Explanation:
High availability for Direct Connect connections is typically achieved by deploying multiple connections across different physical locations or AWS Direct Connect sites and configuring Border Gateway Protocol (BGP) for failover. This approach ensures that if one connection or site goes down, traffic automatically reroutes through the other connection, maintaining availability.
Option A is incorrect because VPN tunnels over a single Direct Connect link do not provide redundancy at the physical connection level. If the Direct Connect circuit fails, both VPN tunnels fail.
Option B suggests deploying multiple Direct Connect connections at a single location. While it may provide some redundancy, it does not protect against location-level failures such as facility outages, power failures, or fiber cuts at that site.
Option C correctly describes using multiple Direct Connect connections across physically separate AWS Direct Connect locations and leveraging BGP routing for automatic failover, providing a resilient, highly available architecture.
Option D does not offer Direct Connect redundancy. Transit Gateway is for network aggregation and routing but does not replace the physical connection or provide failover for Direct Connect links.
Thus, the recommended architecture for Direct Connect high availability is multiple Direct Connect connections across distinct locations with BGP failover.
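A brief boto3 sketch of the resilient pattern option C describes follows: one private virtual interface per dedicated connection, each terminating at a different Direct Connect location but attached to the same Direct Connect gateway, so BGP can advertise the same prefixes over both paths and fail over between them. The connection IDs, gateway ID, and ASN are placeholders.

```python
import boto3

dx = boto3.client("directconnect", region_name="us-east-1")

# One private VIF per dedicated connection, each at a different DX location.
# BGP advertises the same on-premises prefixes over both paths, so routing
# converges onto the surviving circuit if one location fails.
for connection_id, vlan in [("dxcon-primary-loc-a", 101), ("dxcon-secondary-loc-b", 102)]:
    dx.create_private_virtual_interface(
        connectionId=connection_id,
        newPrivateVirtualInterface={
            "virtualInterfaceName": f"vif-{connection_id}",
            "vlan": vlan,
            "asn": 65000,                                   # on-premises BGP ASN (example)
            "directConnectGatewayId": "dx-gw-0123456789abcdef",
        },
    )
```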
Question 9:
What is the primary function of AWS Transit Gateway in a multi-account AWS environment?
A. To provide a centralized hub to connect multiple VPCs and on-premises networks with simplified routing management
B. To serve as a replacement for VPN Gateway for internet connectivity
C. To monitor network traffic between VPCs for security compliance
D. To automatically encrypt all data in transit between AWS Regions
Answer: A
Explanation:
AWS Transit Gateway acts as a centralized hub that interconnects multiple VPCs, on-premises networks, and VPN connections in a scalable manner. It simplifies network architecture by reducing the number of point-to-point connections required between VPCs or from on-premises to AWS. Instead of managing multiple peering connections, Transit Gateway provides a single gateway that handles routing between connected networks.
Option B is incorrect because Transit Gateway is not a replacement for VPN Gateway. VPN Gateway is used specifically for establishing secure VPN tunnels to on-premises environments.
Option C is incorrect because Transit Gateway does not inherently monitor network traffic for security compliance. Monitoring is done via other AWS services like VPC Flow Logs, AWS Network Firewall, or third-party solutions.
Option D is incorrect because encrypting data in transit is not a function you configure through Transit Gateway. Encryption in transit is handled by mechanisms such as TLS or IPsec VPNs; Transit Gateway does support inter-Region peering, but that capability is about routing between Regions, not a user-facing encryption service.
Thus, the main function of AWS Transit Gateway is to simplify and centralize routing management across multiple VPCs and on-premises networks, making A the correct answer.
Question 10:
Which AWS networking service should you use to optimize the performance and availability of a public-facing global web application hosted on AWS?
A. AWS Global Accelerator
B. AWS Direct Connect
C. AWS VPN Gateway
D. AWS Transit Gateway
Answer: A
Explanation:
AWS Global Accelerator is designed specifically to improve the performance, availability, and reliability of public-facing global applications. It uses the AWS global network to direct user traffic to the nearest healthy AWS endpoints (e.g., Application Load Balancers, EC2 instances, or Elastic IPs), optimizing latency and availability by routing through AWS’s private backbone instead of the public internet.
Option B, Direct Connect, is for dedicated private connectivity from on-premises to AWS, not for optimizing internet-facing global applications.
Option C, VPN Gateway, provides secure connections between AWS and on-premises but does not optimize internet user traffic for a public web app.
Option D, Transit Gateway, aggregates and manages routing among VPCs and on-premises networks but is not designed to improve global internet-facing application performance.
Therefore, AWS Global Accelerator is the most suitable service for enhancing performance and availability of a global web application, making A the correct choice.