Google Professional Cloud Network Engineer Exam Dumps & Practice Test Questions

Question 1:

You need to limit access to your application hosted behind a Google Cloud HTTP(S) Load Balancer so that only specific client IP addresses can connect. 

What is the best way to enforce this access control?

A. Use Access Context Manager with VPC Service Controls to create a secure perimeter restricting access to allowed client IP ranges and Google health check IPs.
B. Use VPC Service Controls to mark the load balancer as a service restricted to the allowed client IP ranges and Google health check IPs.
C. Assign a network tag to backend instances (e.g., "application") and create a firewall rule targeting that tag, allowing traffic only from the allowed client IP ranges and Google health check IPs.
D. Label backend instances "application" and create a firewall rule targeting that label with allowed client IP ranges and Google health check IPs.

Correct Answer: C

Explanation:

In Google Cloud, when you deploy an external HTTP(S) Load Balancer, the backend instances do not receive traffic directly from the clients’ IP addresses; instead, traffic originates from Google’s global load balancing infrastructure. Because of this, traditional source IP filtering on the load balancer itself is not directly feasible. However, you can restrict access by using Google Cloud firewall rules that apply to the backend instances.

Option C is the best approach because it uses network tags on the backend instances—such as tagging instances with "application"—and then creates a firewall rule that allows ingress traffic only from specific IP address ranges (the trusted client IPs and Google’s health check IPs). Firewall rules in Google Cloud can be targeted by these tags, enabling granular control over which sources can connect to your backend instances. This approach effectively restricts access without interfering with the load balancer operation and health checks.
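As a sketch, the tagging and firewall rule could look like the following. The instance name, zone, network, and trusted client range (203.0.113.0/24) are hypothetical; the ranges 130.211.0.0/22 and 35.191.0.0/16 are Google's documented health check and proxy source ranges.

```shell
# Tag the backend instance (name and zone are illustrative).
gcloud compute instances add-tags backend-vm-1 \
    --zone=us-central1-a \
    --tags=application

# Allow ingress only from the trusted client range and Google's
# health check / proxy ranges, targeting the tagged backends.
gcloud compute firewall-rules create allow-app-clients \
    --network=default \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:80,tcp:443 \
    --target-tags=application \
    --source-ranges=203.0.113.0/24,130.211.0.0/22,35.191.0.0/16
```

Because the rule targets the network tag rather than individual instances, new backends pick up the same policy simply by carrying the "application" tag.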

Options A and B involve VPC Service Controls and Access Context Manager, which are primarily designed for protecting Google-managed services (like Cloud Storage or BigQuery) and controlling API access based on identity and attributes. They are not applicable for restricting network access to compute instances behind a load balancer, making them unsuitable here.

Option D suggests using instance labels as targets for firewall rules. While labels are useful for organization and filtering in many GCP services, firewall rules cannot directly use labels as target selectors—only network tags are supported for this purpose.

In summary, the recommended method to restrict access to backends behind a Google Cloud HTTP(S) Load Balancer is to assign network tags to backend instances and create firewall rules targeting those tags with specific allowed source IP ranges. This balances security needs with the operational behavior of Google’s load balancer infrastructure.

Question 2:

Your users are primarily located near the Google Cloud regions us-east1 and europe-west1. Their workloads need to communicate efficiently with one another. 

What network design best reduces cost and optimizes network performance?

A. Set up two separate VPCs in each region, each with its own subnet, and connect them via VPN gateways.
B. Create two separate VPCs in each region and configure connectivity between instances using their external IP addresses.
C. Use a single VPC with subnets in both regions and employ a global load balancer to facilitate cross-region communication.
D. Create a single VPC with regional subnets in us-east1 and europe-west1, deploy workloads in these subnets, and allow communication using private RFC1918 IP addresses.

Correct Answer: D

Explanation:

Designing an efficient, cost-effective network topology that enables workloads in us-east1 and europe-west1 to communicate requires a solution that maximizes internal network use while minimizing overhead and costs.

Option D is the best solution. Google Cloud’s Virtual Private Cloud (VPC) networks are global resources, allowing you to create a single VPC spanning multiple regions, each with its own regional subnet. By deploying workloads in these subnets, instances can communicate using private internal IP addresses (RFC1918). This setup benefits from Google’s high-speed, low-latency backbone network for cross-region traffic, avoiding the public internet entirely. Using private IP communication eliminates egress charges typically incurred for public IP traffic, reducing costs significantly. It also simplifies firewall and security policy management by keeping all workloads within one network boundary.
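A minimal sketch of this topology, assuming hypothetical network, subnet, and range names:

```shell
# One global VPC in custom mode.
gcloud compute networks create prod-vpc --subnet-mode=custom

# One regional subnet near each user population; workloads in these
# subnets reach each other over private RFC1918 addresses across
# Google's backbone.
gcloud compute networks subnets create us-subnet \
    --network=prod-vpc --region=us-east1 --range=10.0.1.0/24

gcloud compute networks subnets create eu-subnet \
    --network=prod-vpc --region=europe-west1 --range=10.0.2.0/24
```

No gateways, peering, or VPNs are needed: subnets in the same VPC route to one another automatically.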

Option A suggests creating two separate VPCs connected by VPN gateways. Although VPNs can connect separate networks, they incur additional latency and cost, especially for cross-region traffic. VPN traffic often traverses the public internet or VPN infrastructure, which is less efficient and more expensive compared to native VPC routing.

Option B relies on using external IP addresses for connectivity between workloads in different VPCs. This approach exposes traffic to the public internet, increasing latency, security risks, and egress costs. It is inefficient and not recommended for internal workload communication.

Option C proposes using a global load balancer for inter-workload communication within a single VPC. Global load balancers are designed for distributing external traffic to backend services and are not intended for direct internal workload-to-workload communication. This introduces unnecessary overhead and cost.

In conclusion, a single VPC with regional subnets that enables private IP communication between workloads in different regions is the optimal design. It provides better performance, cost savings, security, and simpler management, making it the ideal choice for cross-region workload communication in Google Cloud.

Question 3:

Your organization is implementing a single project serving three distinct departments. Two departments require network communication between them, while the third must remain completely isolated. The design must ensure separate network administrative control for each department and minimize operational complexity. 

Which network topology should you choose?

A. Use a Shared VPC Host Project along with separate Service Projects for each department
B. Create three separate VPCs and connect the two communicating departments via Cloud VPN
C. Set up three separate VPCs and use VPC peering to link the two departments that need to communicate
D. Deploy all departments within one project and use firewall rules and network tags to isolate traffic

Correct answer: C

Explanation:

In this scenario, the primary objectives are to maintain strict isolation for one department while allowing secure communication between the other two, and to establish distinct network administrative boundaries with low operational complexity. The most suitable approach is to create three separate VPC networks, one for each department, and then use VPC peering to interconnect only the two departments requiring communication.

VPC peering is a native, low-latency, high-bandwidth method of privately connecting two VPCs without exposing traffic to the public internet. This solution offers granular control: only the two relevant VPCs are connected, while the third remains completely isolated by not being peered. Additionally, because each department’s network is in a separate VPC, this naturally establishes clear administrative domains—each department can manage its VPC resources independently, fulfilling the requirement for distinct network administration.
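As a sketch, with hypothetical VPC and project names, the peering is created from both sides (each side must define its half before the connection becomes active), while the third department's VPC is simply never peered:

```shell
# Peering from department A's VPC to department B's VPC.
gcloud compute networks peerings create dept-a-to-b \
    --network=dept-a-vpc \
    --peer-network=dept-b-vpc \
    --peer-project=my-project

# The matching half from department B's side.
gcloud compute networks peerings create dept-b-to-a \
    --network=dept-b-vpc \
    --peer-network=dept-a-vpc \
    --peer-project=my-project
```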

Option A, which proposes a Shared VPC, centralizes network resources in one Host Project and delegates resource ownership to Service Projects. While this reduces operational overhead, it does not provide the needed network isolation because all subnets and routes remain within the shared Host Project, thus not fully separating administrative control for each department.

Option B suggests using Cloud VPN to connect VPCs. While VPN offers strong isolation and encryption, it introduces more complexity through the need to manage gateways, tunnels, and IPsec configurations. VPC peering is simpler to configure and better integrated for inter-VPC communication within the same cloud environment.

Option D relies on a single VPC with firewall rules and network tags for segmentation. This approach doesn’t create true isolation or separate administrative domains, increasing the risk of misconfiguration and security issues.

Therefore, option C achieves the best balance between security, clear administrative separation, and ease of management.

Question 4:

When migrating your DNS management to Google Cloud DNS, and you have a BIND-formatted zone file to import, which command is the correct one to use?

A. gcloud dns record-sets import ZONE_FILE --zone MANAGED_ZONE
B. gcloud dns record-sets import ZONE_FILE --replace-origin-ns --zone MANAGED_ZONE
C. gcloud dns record-sets import ZONE_FILE --zone-file-format --zone MANAGED_ZONE
D. gcloud dns record-sets import ZONE_FILE --delete-all-existing --zone MANAGED_ZONE

Correct answer: C

Explanation:

When transitioning DNS records from another provider into Google Cloud DNS, it is common to export your DNS zone as a BIND zone file, a standard plain-text format describing DNS resource records. Importing this file correctly requires the gcloud CLI tool to recognize the file format precisely.

The key flag that enables this recognition is --zone-file-format, which explicitly tells Cloud DNS that the input file follows the BIND format. Without this flag, the import process assumes the default gcloud record-set format (the YAML-style output produced by gcloud dns record-sets export), which may cause errors, partial imports, or incorrect DNS entries when fed a BIND file.

Option C is the only choice that includes the --zone-file-format flag along with specifying the managed zone via the --zone option. This ensures a smooth and accurate import of your BIND zone file into the correct Cloud DNS zone.
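A concrete invocation might look like this (the file name and managed zone name are illustrative):

```shell
# Import a BIND-formatted zone file into an existing managed zone.
gcloud dns record-sets import example.com.zone \
    --zone-file-format \
    --zone=example-managed-zone
```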

Looking at the other options:

  • Option A omits the --zone-file-format flag. Although it specifies the zone and the file, Cloud DNS may misinterpret the format, leading to failed or incorrect imports.

  • Option B adds the --replace-origin-ns flag, which would overwrite the nameserver records in your zone file with the Cloud DNS defaults. This is only necessary if you want to replace the NS records, and it still lacks the essential --zone-file-format flag, risking import errors.

  • Option D includes --delete-all-existing, instructing Cloud DNS to clear all existing records before importing. While sometimes useful, this option carries risk and does not solve the format recognition issue, as it also lacks the --zone-file-format flag.

In conclusion, option C is the safest, most reliable command to import a BIND-formatted zone file into Google Cloud DNS because it explicitly tells the system how to interpret the file, ensuring that your DNS records are imported correctly and fully.

Question 5:

You have an auto mode VPC network named Retail and want to create another VPC called Distribution that can be peered with Retail. 

How should you configure the Distribution VPC to ensure the peering is set up correctly?

A. Create the Distribution VPC in auto mode and establish peering between both VPCs
B. Create the Distribution VPC in custom mode with the CIDR 10.0.0.0/9, create necessary subnets, then peer the VPCs
C. Create the Distribution VPC in custom mode with the CIDR 10.128.0.0/9, create necessary subnets, then peer the VPCs
D. Rename the default VPC as "Distribution" and peer it with Retail

Correct Answer: B

Explanation:

When configuring VPC peering in Google Cloud, it is critical to ensure that the IP address ranges of the VPCs do not overlap. The Retail VPC is an auto mode network, which means it automatically creates subnets using the 10.128.0.0/9 CIDR block (covering IPs from 10.128.0.0 to 10.255.255.255). This IP range is reserved for Retail, so any new VPC intended to peer with Retail must have a non-overlapping CIDR range.

If you create the Distribution VPC in auto mode (Option A), Google Cloud will attempt to allocate subnets in the same 10.128.0.0/9 range, leading to overlapping IP addresses. This makes peering impossible due to IP conflicts.

Option C suggests creating Distribution with the 10.128.0.0/9 block, which directly conflicts with Retail’s existing IP range and thus would cause peering failure.

Renaming the default VPC to "Distribution" (Option D) does not solve the underlying issue of overlapping IP ranges. Peering requires explicit control over subnet CIDRs and non-overlapping IP address spaces, which renaming alone cannot achieve.

The correct approach is Option B: create the Distribution VPC in custom mode, allowing full control over subnet IP ranges. Assign it the CIDR block 10.0.0.0/9 (covering 10.0.0.0 to 10.127.255.255), which does not overlap with Retail’s range. Then create the required subnets manually. This configuration ensures the two VPCs have distinct IP spaces, enabling successful peering.
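This setup can be sketched as follows; the subnet name, region, and exact subnet range are illustrative, as long as every subnet falls inside 10.0.0.0/9 and therefore outside Retail's 10.128.0.0/9:

```shell
# Custom mode gives full control over subnet ranges.
gcloud compute networks create distribution --subnet-mode=custom

# Carve subnets out of 10.0.0.0/9, which cannot collide with Retail.
gcloud compute networks subnets create dist-subnet-1 \
    --network=distribution --region=us-east1 --range=10.0.0.0/20

# Peer Distribution with Retail (repeat from the Retail side too).
gcloud compute networks peerings create distribution-to-retail \
    --network=distribution \
    --peer-network=retail
```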

In summary, controlling subnet allocation through custom mode with a non-overlapping CIDR block is the key to properly setting up VPC peering with an existing auto mode VPC network.

Question 6:

You have deployed a third-party next-generation firewall and created a custom route 0.0.0.0/0 to route all outbound traffic through this firewall. However, you want your VPC instances without public IP addresses to access Google APIs like BigQuery and Cloud Pub/Sub directly, bypassing the firewall. 

What two actions should you take?

A. Enable Private Google Access at the subnet level
B. Enable Private Google Access at the VPC level
C. Enable Private Services Access at the VPC level
D. Create static routes to direct traffic for Google API external IPs via the default internet gateway
E. Create static routes to direct traffic for Google API internal IPs via the default internet gateway

Correct Answers: A and D

Explanation:

To allow VMs without public IP addresses to access Google APIs such as BigQuery and Cloud Pub/Sub while bypassing a third-party firewall that routes all 0.0.0.0/0 traffic through itself, two key configurations are needed: enabling private access to Google services, and ensuring that API traffic does not get routed through the firewall.

Firstly, Private Google Access must be enabled at the subnet level (Option A). This feature allows VMs without public IP addresses to reach Google APIs and services using internal IP routing rather than relying on external IP addresses. It is essential for allowing private VMs to communicate directly with Google services without going through the internet or a firewall. Note that this setting is configured per subnet, not at the VPC level, so Option B is incorrect.

Secondly, the custom route that directs all traffic (0.0.0.0/0) through the firewall will cause even Google API traffic to flow through the firewall, which is undesirable in this case. To avoid this, you must create custom static routes (Option D) that direct traffic destined for the external IP addresses of Google APIs via the default internet gateway, bypassing the firewall. Google publicly publishes the IP ranges used by their APIs, which you can use to define these routes. This setup ensures that traffic to BigQuery and Pub/Sub reaches Google directly, improving efficiency and reducing latency.
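The two actions together might be sketched like this. The subnet, network, and route names are hypothetical; 199.36.153.8/30 is Google's documented private.googleapis.com range, shown here as one example of a Google API destination routed past the firewall, and routes for Google's published public API ranges can be added the same way.

```shell
# 1. Enable Private Google Access on the subnet hosting the VMs.
gcloud compute networks subnets update my-subnet \
    --region=us-east1 \
    --enable-private-ip-google-access

# 2. Send Google API traffic via the default internet gateway so it
#    bypasses the 0.0.0.0/0 route through the firewall appliance.
gcloud compute routes create google-apis-direct \
    --network=my-vpc \
    --destination-range=199.36.153.8/30 \
    --next-hop-gateway=default-internet-gateway \
    --priority=100
```

The lower priority value (100) makes this more specific route win over the 0.0.0.0/0 custom route for API-bound traffic only.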

Option C refers to Private Services Access, which is used to connect to Google-managed services like Cloud SQL but does not affect public Google APIs. Therefore, it’s not relevant here.

Option E suggests routing traffic to internal IPs of Google APIs, which is technically invalid because Google APIs are accessed over public IP ranges, not internal IPs.

In conclusion, enabling Private Google Access at the subnet level and creating static routes to send API traffic directly via the default internet gateway ensure your private VMs can efficiently and securely access Google APIs without unnecessary inspection or latency introduced by the firewall.

Question 7:

All instances in your Google Cloud project have the custom metadata key enable-oslogin set to FALSE and project-wide SSH keys are blocked. None of the instances have any SSH keys assigned individually, and no project-wide SSH keys exist. Firewall rules permit SSH traffic from any IP. 

How can you successfully SSH into one of these instances?

A. Use Cloud Shell and connect via gcloud compute ssh.
B. Change the enable-oslogin metadata to TRUE and SSH with an external client like PuTTY or SSH.
C. Generate a new SSH key pair, ensure the private key format is correct, add the public key to the instance metadata, then SSH with a third-party client.
D. Generate a new SSH key pair, verify the public key format, add it to the project metadata, and SSH using an external client.

Correct Answer: C

Explanation:

In Google Cloud Platform (GCP), accessing Compute Engine instances via SSH requires valid SSH keys linked either to the project metadata or directly to the instance metadata. Here, the environment is specifically configured to block project-wide SSH keys and has enable-oslogin set to FALSE, disabling OS Login (the IAM-based SSH access control). This configuration means no project-wide keys can be used, and OS Login cannot manage SSH access, leaving only instance-level SSH keys as a viable option.

Option D is not viable because adding keys to project metadata won’t be effective when project-wide SSH keys are blocked by policy. This prevents instances from accepting project-level SSH keys altogether.

Option B suggests enabling OS Login. However, simply setting the metadata key to TRUE is insufficient. OS Login requires correct IAM permissions and instance OS support, making this option impractical without further configuration.

Option A—using Cloud Shell with gcloud compute ssh—depends on OS Login or project-wide SSH keys to automatically inject keys. Since both are disabled or blocked, this method will fail because Cloud Shell cannot push your key to the instance.

The only reliable solution is C: generate a new SSH key pair locally, verify the private key format is compatible with your SSH client, then manually add the corresponding public key to the instance’s metadata. This instance-level key overrides project-wide settings and bypasses OS Login restrictions. Once the key is added, you can SSH into the instance using any standard SSH client (PuTTY, OpenSSH, or Google Cloud SDK), assuming firewall rules allow SSH (port 22), which they do here.
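As a sketch, assuming a hypothetical username (alice), instance name, and zone — note that Compute Engine expects the ssh-keys metadata value in "USERNAME:PUBLIC_KEY" form:

```shell
# Generate a key pair locally.
ssh-keygen -t rsa -f ~/.ssh/gcp-key -C alice

# Prepare the metadata value in the USERNAME:KEY format and attach
# it to the instance (not project) metadata.
echo "alice:$(cat ~/.ssh/gcp-key.pub)" > /tmp/ssh-keys.txt
gcloud compute instances add-metadata my-instance \
    --zone=us-central1-a \
    --metadata-from-file ssh-keys=/tmp/ssh-keys.txt

# Connect with any standard SSH client.
ssh -i ~/.ssh/gcp-key alice@EXTERNAL_IP
```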

This method directly grants access by associating an authorized key with the instance, enabling secure connection despite project-wide restrictions.

Question 8:

Your university requires centralized network control for multiple departments connecting on-premises infrastructure to Google Cloud. The solution must support 10 Gbps throughput, low latency, and be cost-efficient. 

Which setup best meets these criteria?

A. Use Shared VPC with VLAN attachments and Dedicated Interconnect deployed in the host project.
B. Use Shared VPC, deploy VLAN attachments in service projects, and connect to the host project's Shared VPC.
C. Use standalone projects, deploy VLAN attachments in each project, and connect via individual Interconnects.
D. Use standalone projects, deploying VLAN attachments and Interconnects separately in each project.

Correct Answer: A

Explanation:

For a university environment requiring multiple departments to access Google Cloud while maintaining centralized network control, the optimal architecture must be scalable, efficient, and manageable. The key requirements include high throughput (10 Gbps), low latency, centralized administration, and cost-effectiveness.

Option A, using a Shared VPC with VLAN attachments and Dedicated Interconnect in the host project, satisfies all these demands. Shared VPC enables the network team to centrally manage networking resources—such as subnets, firewall rules, and routing—while departments use service projects for their workloads. This preserves centralized control over the entire network architecture.

Deploying the VLAN attachments and Dedicated Interconnect in the host project optimizes resource usage by consolidating connectivity infrastructure. Dedicated Interconnect provides physical, direct connections between on-premises environments and Google Cloud, supporting very high bandwidth and low latency. Centralizing it reduces duplication and cost, as the bandwidth can be shared efficiently among all departments.
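A sketch of the wiring, with hypothetical project, router, and interconnect names — the Shared VPC and the VLAN attachment are both anchored in the host project:

```shell
# Designate the host project and attach a department service project.
gcloud compute shared-vpc enable network-host-project
gcloud compute shared-vpc associated-projects add dept-a-project \
    --host-project=network-host-project

# Create the VLAN attachment for the Dedicated Interconnect in the
# host project, where the Interconnect and Cloud Router live.
gcloud compute interconnects attachments dedicated create campus-attach \
    --region=us-east1 \
    --router=campus-router \
    --interconnect=campus-interconnect \
    --project=network-host-project
```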

Option B is invalid because VLAN attachments must reside in the same project as the Interconnect. Deploying VLAN attachments in service projects connected to a host project Interconnect is technically unsupported, breaking the architecture.

Options C and D propose standalone projects with independent VLAN attachments and Interconnects for each department. This results in excessive duplication, higher operational costs, decentralized management, and complexity. It also violates the requirement for centralized network control.

In summary, option A delivers a scalable, centralized, and cost-effective solution, leveraging Shared VPC for network governance and Dedicated Interconnect for high-speed, low-latency connectivity shared among multiple departments. This approach streamlines management and reduces costs while meeting technical requirements.

Question 9:

You have an internal application running on multiple Compute Engine instances that serves HTTP and TFTP traffic to on-premises clients. You want to distribute incoming requests evenly but ensure that each client consistently connects to the same instance for both services. 

Which session affinity method should you use?

A. None
B. Client IP
C. Client IP and protocol
D. Client IP, port, and protocol

Correct Answer: B

Explanation:

When using load balancing in Google Cloud, session affinity—or “sticky sessions”—ensures that requests from the same client are consistently directed to the same backend instance. This is crucial for applications that maintain session state or require clients to interact with a particular server instance over time.

In your scenario, the application supports two different protocols: HTTP and TFTP. You want to make sure that a given client from on-premises always hits the same Compute Engine instance regardless of whether it is communicating via HTTP or TFTP.

Option A, “None,” means no session affinity is applied. This would cause each request to potentially route to different backend instances, breaking any session consistency your application might rely on. Therefore, it’s not suitable.

Option C, “Client IP and protocol,” uses both the client’s IP and the protocol type (HTTP or TFTP) to determine affinity. This means HTTP requests from a client might be routed to one instance, while TFTP requests could go to another, which does not meet your requirement of consistent routing across both services.

Option D, “Client IP, port, and protocol,” is even more restrictive, requiring the client’s IP, the source port, and the protocol to match for session affinity. Since client ports are typically ephemeral and change frequently, this option is impractical for ensuring consistent routing.

Option B, “Client IP,” is the best choice because it directs all traffic from the same client IP address to the same backend instance, regardless of protocol or port. This satisfies the need for stickiness across both HTTP and TFTP traffic, allowing clients to maintain consistent sessions across multiple protocols with the same backend.
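On an internal passthrough load balancer, this is a single setting on the backend service; the service name and region below are illustrative:

```shell
# Pin each client IP to one backend, regardless of protocol or port.
gcloud compute backend-services update my-backend-service \
    --region=us-east1 \
    --session-affinity=CLIENT_IP
```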

In summary, selecting “Client IP” session affinity balances effective traffic distribution with consistent backend routing, making it the most appropriate solution for your multi-protocol internal application.

Question 10:

You have set up a VPC network named “Dev” with a single subnet and a firewall rule allowing only HTTP traffic. Logging is enabled for this rule. However, when you try to connect to a VM using Remote Desktop Protocol (RDP), the connection fails. 

You check the firewall logs but see no record of blocked traffic. How can you enable logging for the blocked RDP traffic?

A. Review VPC flow logs for the VM instance
B. Attempt to connect via SSH and check logs
C. Add a firewall rule allowing traffic on port 22 and enable logging
D. Create a deny-all firewall rule with priority 65500 and enable logging

Correct Answer: D

Explanation:

In Google Cloud Platform, firewall logging records only the traffic that matches a firewall rule where logging is explicitly enabled. Importantly, the default implied deny rule—which blocks all traffic not explicitly allowed—is not logged because it is not an explicit rule.

In this situation, the existing firewall rule allows only HTTP traffic (typically on port 80). Since RDP uses port 3389, the connection attempts are blocked by the default implied deny rule. Because this deny rule is implicit and does not have logging enabled, you see no blocked traffic in the firewall logs.

To log denied traffic, you must create an explicit firewall rule that denies traffic and enable logging on that rule. Setting this deny-all rule at priority 65500 means it evaluates after your more specific allow rules (which use lower, higher-precedence priority numbers) but before the implied deny at priority 65535, so it captures and logs all traffic that would otherwise be silently dropped.
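Such a rule could be created as follows (the network name "dev" matches the scenario; the rule name is illustrative):

```shell
# Explicit low-priority deny rule with logging, so blocked traffic
# (e.g. RDP on TCP 3389) appears in the firewall logs.
gcloud compute firewall-rules create deny-all-ingress \
    --network=dev \
    --direction=INGRESS \
    --action=DENY \
    --rules=all \
    --source-ranges=0.0.0.0/0 \
    --priority=65500 \
    --enable-logging
```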

Option A, checking VPC flow logs, offers metadata about network flows (source/destination IPs, ports, bytes transferred) but does not directly show whether traffic was allowed or denied by a firewall. Thus, flow logs are insufficient to track firewall-denied traffic explicitly.

Option B, attempting an SSH connection instead, does not help because the problem is related to logging denied traffic, not successful connection attempts.

Option C, creating a rule to allow port 22 (SSH) with logging, would log allowed SSH traffic but not denied RDP attempts. This does not address the core issue of logging blocked traffic.

Therefore, the only way to see logs for blocked RDP traffic (or any blocked traffic due to the default deny) is to create a specific deny firewall rule with logging enabled. This explicit deny rule captures and logs all blocked attempts, giving visibility into why connections like RDP fail, which is critical for troubleshooting and security auditing.
