Google Professional Cloud Security Engineer Exam Dumps & Practice Test Questions
Question 1:
Your team needs to ensure that a Compute Engine instance neither has internet access nor access to any Google APIs or services. Which two settings should be disabled to guarantee this restriction? (Select two.)
A. Public IP
B. IP Forwarding
C. Private Google Access
D. Static Routes
E. IAM Network User Role
Correct Answers: A, C
Explanation:
To prevent a Compute Engine instance from connecting to the internet or any Google APIs/services, disabling the right settings is essential. The two settings that must be disabled are Public IP and Private Google Access.
Public IP (A): Assigning a public IP address to a Compute Engine instance allows it to communicate directly over the internet. This means the instance could reach external systems freely. By disabling the public IP, the instance will not have a routable external address, effectively cutting off direct internet access.
Private Google Access (C): This feature enables instances without a public IP to access Google Cloud APIs and services through Google’s internal network. If your goal is to block any access to Google services as well, this setting must be disabled. Otherwise, the instance could still communicate with Google APIs privately, even without a public IP.
Other options do not meet the requirement for the following reasons:
IP Forwarding (B): This controls the instance's ability to route packets on behalf of other devices. While useful for some network configurations, it doesn’t impact the instance’s ability to access the internet or Google services directly.
Static Routes (D): Static routes define manual paths for network traffic. Unless routes specifically allow external access, disabling them won’t by itself prevent internet or Google API access.
IAM Network User Role (E): IAM roles govern user permissions to manage network resources but do not affect the network connectivity of the instance itself.
Thus, to completely block both internet and Google API access for the instance, disabling Public IP and Private Google Access is necessary.
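As a quick verification of both settings, here is a minimal sketch using the google-cloud-compute Python client; the project, region, zone, subnet, and instance names are all hypothetical:

```python
from google.cloud import compute_v1

project = "my-project"  # placeholder

# Private Google Access is configured per subnet; it must be disabled so that
# instances without external IPs cannot reach Google APIs over the internal path.
subnet = compute_v1.SubnetworksClient().get(
    project=project, region="us-central1", subnetwork="restricted-subnet"
)
print("Private Google Access enabled:", subnet.private_ip_google_access)

# An instance whose NICs have no access_configs has no external (public) IP.
instance = compute_v1.InstancesClient().get(
    project=project, zone="us-central1-a", instance="locked-down-vm"
)
has_public_ip = any(nic.access_configs for nic in instance.network_interfaces)
print("Has public IP:", has_public_ip)
```

Both checks should report False for a fully isolated instance.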
Question 2:
Which two default firewall rules are automatically created and applied in a Virtual Private Cloud (VPC) network? (Select two.)
A. A rule permitting all outbound traffic
B. A rule denying all inbound traffic
C. A rule blocking inbound connections on port 25
D. A rule blocking all outbound traffic
E. A rule allowing all inbound traffic on port 80
Correct Answers: A, B
Explanation:
When you create a VPC network, Google Cloud automatically generates certain implied firewall rules to manage traffic by default. These rules provide a baseline security posture before any custom rules are added.
The first default rule is allowing all outbound traffic (A). By default, VPC networks let instances initiate outbound connections without restriction. This is essential because most workloads need to access external resources, updates, or services. The rule ensures that unless explicitly restricted, outbound traffic flows freely.
The second default implied rule is denying all inbound traffic (B). This is a fundamental security principle — inbound connections are blocked by default to protect instances from unsolicited or potentially harmful incoming network traffic. To allow inbound access, administrators must explicitly create firewall rules opening specific ports like SSH (22) or HTTP (80).
The other options are incorrect for the following reasons:
Blocking inbound port 25 (C): While blocking SMTP traffic is common to reduce spam, this is not part of the default implied firewall rules. This would be a custom security rule if implemented.
Blocking all outbound traffic (D): This is the opposite of the default behavior, which permits outbound traffic by default.
Allowing all inbound port 80 traffic (E): Allowing HTTP traffic inbound requires explicit firewall rules. It is not allowed by default, which is why this option is incorrect.
In summary, the VPC’s default firewall behavior permits outbound traffic and denies inbound traffic until custom rules are defined. Therefore, A and B correctly represent the implied firewall rules.
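One practical consequence: the implied rules are invisible to the API. A short sketch with the google-cloud-compute Python client (the project ID is a placeholder) lists only explicitly created rules, so an empty result still means allow-all-egress and deny-all-ingress are in effect:

```python
from google.cloud import compute_v1

# Only user-created rules are returned; the implied allow-egress and
# deny-ingress rules never appear in this listing.
client = compute_v1.FirewallsClient()
for rule in client.list(project="my-project"):
    print(rule.name, rule.direction, "priority", rule.priority)
```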
Question 3:
What is the most secure way for a customer to store plain text secrets in Google Cloud Platform?
A. Use Cloud Source Repositories for version control and store secrets in Cloud SQL.
B. Encrypt secrets using a Customer-Managed Encryption Key (CMEK) and store them in Cloud Storage.
C. Scan secrets using the Cloud Data Loss Prevention API and then save them in Cloud SQL.
D. Deploy a Source Code Management system on a Compute Engine VM with local SSDs and preemptible VM instances.
Correct Answer: B
Explanation:
Storing sensitive information such as plain text secrets requires careful handling to prevent unauthorized access or data leaks. In Google Cloud Platform (GCP), it’s essential to encrypt secrets before storage and control the encryption keys to maintain confidentiality and security. Option B best addresses these requirements by suggesting that secrets be encrypted using a Customer-Managed Encryption Key (CMEK) before being stored in Cloud Storage.
CMEKs allow customers to manage their own encryption keys via Google Cloud Key Management Service (KMS), giving full control over key lifecycle, rotation, and access policies. Encrypting secrets with CMEK means even if the underlying storage is compromised, the secrets remain protected and unreadable without the correct encryption key. Storing the encrypted secrets in Cloud Storage offers scalability, reliability, and integrates well with GCP’s security features.
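As a rough sketch of this pattern with the google-cloud-storage Python client (the key ring, key, bucket, and object names are hypothetical, and the key is assumed to already exist in Cloud KMS), a bucket can be given a default CMEK so every uploaded object is encrypted with the customer-managed key:

```python
from google.cloud import storage

# Hypothetical CMEK created beforehand in Cloud KMS.
kms_key = "projects/my-project/locations/us/keyRings/secrets-ring/cryptoKeys/secrets-key"

client = storage.Client()
bucket = client.bucket("my-secrets-bucket")
bucket.default_kms_key_name = kms_key  # new objects use this CMEK by default
bucket = client.create_bucket(bucket, location="us")

# The object's bytes at rest are now protected by the customer-managed key.
bucket.blob("db-password").upload_from_string("s3cr3t-value")
```

Note that the Cloud Storage service agent must hold the Cloud KMS CryptoKey Encrypter/Decrypter role on the key for writes and reads to succeed.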
Looking at other options:
A suggests using Cloud Source Repositories combined with Cloud SQL. However, storing secrets in Cloud SQL or source repositories is not best practice because they are not designed for secret management and do not guarantee encryption or secure key control. Source code repositories especially should not contain secrets, as they can be exposed during version control.
C proposes scanning secrets with the Cloud Data Loss Prevention (DLP) API and then storing them in Cloud SQL. While DLP is helpful for detecting sensitive data, it does not provide storage or encryption capabilities. Storing unencrypted secrets in Cloud SQL is insecure.
D involves deploying a source code management system on VMs with local SSDs and preemptible instances. This approach does not inherently secure secrets and risks data loss due to preemptible VM shutdowns, besides adding operational complexity.
In summary, Option B provides the most secure, scalable, and manageable method for handling secrets in GCP by combining encryption with customer-controlled keys and reliable cloud storage.
Question 4:
How can your team centrally manage Google Cloud Platform IAM permissions using on-premises Active Directory groups?
A. Configure Cloud Directory Sync to synchronize AD groups and assign IAM roles based on those groups.
B. Implement SAML 2.0 Single Sign-On (SSO) and assign IAM roles to the groups.
C. Use the Cloud Identity and Access Management API to create groups and IAM permissions from Active Directory.
D. Use the Admin SDK to create groups and assign IAM permissions from Active Directory.
Correct Answer: A
Explanation:
For organizations using on-premises Active Directory (AD) to manage user and group identities, it’s common to want centralized control over cloud permissions based on existing AD group membership. The goal is to have IAM permissions in Google Cloud Platform (GCP) reflect AD group memberships automatically and avoid managing permissions separately in the cloud.
Option A, setting up Google Cloud Directory Sync (GCDS), is the recommended solution. Cloud Directory Sync enables the synchronization of on-premises AD users and groups to Google Cloud Identity. This way, AD groups and their members are mirrored in Google’s cloud environment. Once synced, IAM roles and permissions can be assigned directly to these Google groups that correspond to AD groups. This allows centralized permission management through AD group membership without duplicating administration efforts in GCP.
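After the sync, granting access is an ordinary group binding. A minimal sketch with the google-cloud-resource-manager Python client follows; the project ID, role, and group address are hypothetical:

```python
from google.cloud import resourcemanager_v3
from google.iam.v1.policy_pb2 import Binding

# The group exists in Cloud Identity because Cloud Directory Sync mirrored it
# from on-prem AD; membership changes in AD flow through on the next sync run.
client = resourcemanager_v3.ProjectsClient()
resource = "projects/my-project"

# Read-modify-write of the project's IAM policy.
policy = client.get_iam_policy(request={"resource": resource})
policy.bindings.append(
    Binding(
        role="roles/compute.viewer",
        members=["group:gcp-network-admins@example.com"],
    )
)
client.set_iam_policy(request={"resource": resource, "policy": policy})
```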
Examining other options:
B, configuring SAML 2.0 Single Sign-On (SSO), is useful for authenticating users with their AD credentials but does not synchronize group membership or manage IAM permissions by groups. SSO addresses user login but not permission mapping.
C suggests using the Cloud IAM API to create groups and assign permissions, but the IAM API does not provide direct integration or synchronization with on-prem AD. Using the API would require manual or custom tooling to sync groups and manage permissions, increasing complexity.
D involves the Admin SDK to manage Google groups, but it is not designed to sync groups from on-prem AD directly. The Admin SDK can manipulate cloud groups, but syncing requires additional tooling like Cloud Directory Sync.
Therefore, Cloud Directory Sync provides the most streamlined, automated, and manageable way to reflect on-prem AD group memberships in GCP IAM permissions, fulfilling the requirement for centralized, group-based permission management.
Question 5:
When building a secure container image, which two practices should you include if possible? (Select two.)
A. Make sure the application does not run as PID 1.
B. Package only a single application inside the container.
C. Remove any unnecessary tools that the application does not require.
D. Use publicly available container images as the base for your application.
E. Use multiple container image layers to conceal sensitive data.
Correct Answers: B, C
Explanation:
Creating a secure container image requires adhering to best practices that reduce the attack surface and improve manageability and efficiency. Two key practices are packaging a single application per container and removing any unnecessary tools.
Option B, packaging a single app per container, is fundamental in container security and design. This approach aligns with the principle of "one process per container," which simplifies security management by limiting what the container does. It isolates applications, reduces potential vulnerabilities, and makes monitoring and debugging easier. Containers with multiple apps or services increase complexity and can unintentionally expose security flaws.
Option C emphasizes minimizing the container image by stripping away all tools and dependencies not required by the application. Keeping only what is necessary reduces the container's size and attack surface, minimizing opportunities for attackers to exploit vulnerabilities in unused software. Smaller images also load faster and consume fewer resources, improving performance and security.
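Both practices fall out naturally from a multi-stage Dockerfile. The sketch below (image names and paths are hypothetical) keeps the build toolchain out of the final image and ships a single application on a distroless base:

```dockerfile
# Build stage: the toolchain lives here and never ships in the final image.
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app ./cmd/server

# Final stage: one statically linked binary on a distroless base, with no
# shell, package manager, or extra tools for an attacker to leverage.
FROM gcr.io/distroless/static-debian12:nonroot
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```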
The other options are less ideal or incorrect for security reasons:
A suggests ensuring the application does not run as PID 1. The underlying concern (signal handling and zombie reaping when a process runs as PID 1) is real, but it is typically addressed with an init process or proper signal handlers rather than at image-build time, making it less critical than B and C.
D involves using public base images, which may contain vulnerabilities or outdated components. It’s safer to use official, trusted base images or build minimal custom images.
E suggests using many layers to hide data, but multiple layers can actually expose sensitive data and complicate image management. Best practice is to minimize layers and handle secrets securely using environment variables or secret management tools.
In summary, focusing on packaging a single app and removing unneeded tools is essential for secure container images, promoting simplicity, efficiency, and minimized attack risk.
Question 6:
A customer needs to deploy a 3-tier internal web application on Google Cloud Platform with a compliance requirement that only traffic originating from specific, trusted CIDR ranges can access the app. The customer agrees to rely solely on Google Cloud’s native SYN flood DDoS protection.
Which Google Cloud service best meets these criteria?
A. Cloud Armor
B. VPC Firewall Rules
C. Cloud Identity and Access Management (IAM)
D. Cloud CDN
Correct Answer: B
Explanation:
To meet the customer’s need for restricting access based on trusted CIDR blocks and relying on Google Cloud’s native SYN flood protection, VPC Firewall Rules are the most suitable solution.
VPC Firewall Rules operate within Google Cloud’s Virtual Private Cloud network and provide granular control over incoming and outgoing traffic based on IP address ranges, protocols, and ports. This directly aligns with the compliance requirement to only allow traffic from specified CIDR blocks, ensuring that the application is protected from unauthorized network access at the network level.
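A sketch of such a rule with the google-cloud-compute Python client follows; the project, network, tag, and CIDR ranges are hypothetical. Anything that matches no allow rule is dropped by the implied deny-ingress rule:

```python
from google.cloud import compute_v1

# Allow HTTPS only from the trusted corporate ranges to tagged web-tier VMs.
allowed = compute_v1.Allowed()
allowed.I_p_protocol = "tcp"
allowed.ports = ["443"]

rule = compute_v1.Firewall()
rule.name = "allow-trusted-cidrs-https"
rule.network = "projects/my-project/global/networks/internal-vpc"
rule.direction = "INGRESS"
rule.source_ranges = ["10.20.0.0/16", "192.168.8.0/24"]  # trusted CIDRs
rule.target_tags = ["web-tier"]
rule.allowed = [allowed]

compute_v1.FirewallsClient().insert(
    project="my-project", firewall_resource=rule
).result()  # block until the rule is live
```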
Option A, Cloud Armor, offers advanced DDoS protection and web application firewall capabilities, including IP allow and deny lists, but it attaches to external Application Load Balancers and protects internet-facing applications. Because this web app is internal and not exposed through an external load balancer, Cloud Armor cannot be applied to it, making it unsuitable for this scenario.
Option C, Cloud IAM, manages user permissions and access control for Google Cloud resources but does not handle network traffic filtering or IP-based restrictions. IAM controls who can perform actions on cloud resources but cannot restrict traffic based on CIDR blocks.
Option D, Cloud CDN, improves performance by caching and delivering content globally but does not provide network-level security controls or IP filtering. It does not offer SYN flood protection or CIDR-based access restrictions.
In conclusion, since the customer requires IP-based traffic filtering for an internal application with native SYN flood protection, VPC Firewall Rules are the best fit. They provide the required CIDR-based access control directly within the GCP network, fulfilling the compliance and security requirements efficiently.
Question 7:
A company operates workloads within a private server room that must only be accessed through its internal company network. You want to enable access to these workloads from Compute Engine instances running in a Google Cloud Platform (GCP) project.
Which two methods would allow secure connectivity while ensuring the workloads remain accessible only within the private network? (Choose two.)
A. Set up Cloud VPN for the project.
B. Use Shared VPC for the project.
C. Use Cloud Interconnect for the project.
D. Establish VPC peering.
E. Enable Private Access on all Compute Engine instances.
Correct Answers: A, C
Explanation:
To securely connect Compute Engine instances in Google Cloud to workloads residing in an on-premises server room while maintaining restricted access within a private company network, the two most suitable solutions are Cloud VPN and Cloud Interconnect.
Cloud VPN creates an encrypted tunnel over the public internet between your on-premises network and your Google Cloud Virtual Private Cloud (VPC). This ensures secure data transmission and restricts access so only authorized traffic can reach the workloads. It is straightforward to configure and commonly used to bridge cloud and on-premises networks securely.
Cloud Interconnect, on the other hand, provides a dedicated physical or partner-based connection between the on-premises environment and Google Cloud. Unlike VPN, this connection does not traverse the public internet, offering higher bandwidth, lower latency, and improved reliability. It is ideal for enterprises with heavy network demands and strict performance or compliance requirements. It also keeps traffic isolated within the private network.
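Of the two, Cloud VPN is the quicker to stand up. A skeletal sketch with the google-cloud-compute Python client follows (names and region are hypothetical; the peer gateway, tunnels, and Cloud Router BGP sessions that complete the setup are omitted for brevity):

```python
from google.cloud import compute_v1

# First piece of an HA VPN setup: the cloud-side gateway in the project's VPC.
gateway = compute_v1.VpnGateway()
gateway.name = "onprem-ha-vpn"
gateway.network = "projects/my-project/global/networks/default"

compute_v1.VpnGatewaysClient().insert(
    project="my-project", region="us-central1", vpn_gateway_resource=gateway
).result()  # wait for the gateway to be provisioned
```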
The other options do not meet the requirements as effectively:
Shared VPC is designed to share networking resources across multiple GCP projects within an organization, but it doesn’t facilitate connectivity between on-premises networks and Google Cloud.
VPC peering connects two separate VPC networks within Google Cloud but cannot extend connectivity to on-premises infrastructure.
Private Access allows instances to access Google Cloud APIs privately but does not enable connection to on-premises workloads.
In summary, to meet the requirement of secure, private access from Google Cloud Compute Engine instances to on-premises workloads, Cloud VPN and Cloud Interconnect are the optimal choices because they provide encrypted, controlled connectivity suitable for private network scenarios.
Question 8:
A customer uses Cloud Identity-Aware Proxy (IAP) to protect their ERP system running on Compute Engine instances. Their security team requires that the ERP system only accepts requests coming through Cloud IAP.
What must the ERP system do to enforce this requirement effectively?
A. Validate the JWT token included in incoming HTTP requests.
B. Validate the identity-related headers in incoming HTTP requests.
C. Validate the x-forwarded-for headers in incoming HTTP requests.
D. Validate the user’s unique identifier headers in incoming HTTP requests.
Correct Answer: A
Explanation:
When Cloud Identity-Aware Proxy (IAP) is enabled for an application, such as an ERP system hosted on Compute Engine, it acts as an access control layer that authenticates and authorizes user requests before allowing them to reach the backend. To ensure that the ERP system only accepts traffic routed through IAP—and not any unauthorized direct requests—the system must validate the authentication tokens that IAP generates.
Cloud IAP injects a JWT (JSON Web Token) into the HTTP headers of requests that pass through it. This token carries cryptographically signed claims about the user’s identity and authorization status. The ERP system should verify this JWT to confirm the request’s authenticity and that it originated from IAP. Proper JWT validation involves checking the token’s signature, issuer, expiration, and claims.
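Google documents this verification pattern for IAP-protected backends; a Python sketch follows, where the audience string is deployment-specific and shown only as a placeholder comment:

```python
from google.auth.transport import requests as google_requests
from google.oauth2 import id_token

def validate_iap_jwt(iap_jwt: str, expected_audience: str):
    """Verify the assertion IAP places in the x-goog-iap-jwt-assertion header."""
    decoded = id_token.verify_token(
        iap_jwt,
        google_requests.Request(),
        audience=expected_audience,  # e.g. "/projects/NUM/global/backendServices/ID"
        certs_url="https://www.gstatic.com/iap/verify/public_key",
    )
    # verify_token checks the signature, expiry, and audience; a failure raises
    # ValueError, so reaching this point means the request came through IAP.
    return decoded["sub"], decoded["email"]
```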
Simply trusting other headers like identity-related information or IP addresses (e.g., x-forwarded-for) is insecure because these headers can be spoofed or altered by clients or intermediaries. The JWT token is cryptographically protected, making it a reliable indicator of the request’s legitimacy.
Validating identity headers or unique user ID headers alone does not guarantee that the traffic was authorized by IAP. These headers may not be protected and could be fabricated by an attacker.
Using the JWT validation mechanism ensures the ERP system only processes requests that have been successfully authenticated and authorized by Cloud IAP, thereby providing a robust security layer and meeting the security team’s requirements.
In conclusion, the ERP system must validate the JWT assertion in HTTP requests to enforce that only requests routed through Cloud IAP are accepted. This makes option A the correct and secure approach.
Question 9:
You want to be alerted if a malicious actor tries to rerun a script that causes your Compute Engine instance to crash. What is the best approach to monitor script executions and receive notifications?
A. Set up an Alerting Policy in Stackdriver using a Process Health condition to monitor that the script execution count stays under a threshold, then enable notifications.
B. Create an Alerting Policy in Stackdriver based on CPU utilization, triggering an alert if CPU usage exceeds 80%.
C. Log each script execution to Stackdriver Logging, define a user-defined metric based on these logs, and configure a Stackdriver Dashboard and alerting on that metric.
D. Log all executions to Stackdriver Logging, export logs to BigQuery with a scheduled query that counts executions over time.
Correct Answer: C
Explanation:
To effectively detect if a malicious script is being rerun and causing system issues, it’s important to have precise monitoring on the script execution itself. The best solution is to log every execution event and create alerts based on those logs.
Option C is the most targeted and practical solution. By logging each script execution to Stackdriver Logging, you gain a precise record of when the script runs. You can then create a user-defined metric from these logs that counts executions, enabling you to set specific thresholds to trigger alerts. Additionally, a Stackdriver Dashboard can visualize this metric in real time, and alert notifications can be configured to inform you immediately if the script runs unexpectedly often. This approach directly monitors the event of interest—the script execution—giving you accurate and actionable information.
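Here is a sketch of the metric half of this setup with the google-cloud-logging Python client; the project, log name, and filter text are assumptions about how the executions are logged:

```python
from google.cloud import logging

# User-defined (log-based) metric counting entries written on each script run.
client = logging.Client(project="my-project")
metric = client.metric(
    "suspect-script-executions",
    filter_='logName="projects/my-project/logs/script-audit"'
    ' AND textPayload:"script started"',
)
metric.create()
# An alerting policy can then watch the resulting series,
# logging.googleapis.com/user/suspect-script-executions, and notify when the
# execution count crosses the chosen threshold.
```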
Option A is less suitable because Process Health monitoring focuses on system-level indicators like uptime or availability rather than tracking specific application-level events such as script runs. It’s not designed to count script executions and would not give a precise alert on that specific activity.
Option B involves monitoring CPU usage. While high CPU usage might indicate a problem, it’s an indirect and unreliable signal for malicious script execution. Legitimate processes could cause CPU spikes, resulting in false alarms and noisy alerts.
Option D uses BigQuery for analyzing logs, which is powerful for complex queries and long-term data analysis but introduces complexity and latency. It’s less suitable for real-time alerting needs, and setting up scheduled queries requires more maintenance compared to Stackdriver’s built-in alerting features.
In summary, Option C is the most effective because it provides direct, real-time monitoring of script executions with minimal overhead and precise alerting capabilities.
Question 10:
What is the optimal logging export strategy to collect and centralize logs from all development projects under a NONPROD folder into a SIEM system for unified monitoring?
A. Export logs from the NONPROD folder (including all child projects) to a Cloud Pub/Sub topic in a dedicated SIEM project and subscribe SIEM to this topic.
B. Export logs from the billing account level to a Cloud Storage sink in a dedicated SIEM project, with no inclusion of child projects, then process these storage objects in the SIEM.
C. Export logs from each individual development project to separate Cloud Pub/Sub topics in a dedicated SIEM project and subscribe SIEM to all topics.
D. Export logs to publicly accessible Cloud Storage buckets within each project and process the storage objects in the SIEM.
Correct Answer: A
Explanation:
To achieve a unified log view across all development projects for your Security Information and Event Management (SIEM) system, it’s essential to choose an export method that centralizes logs efficiently, supports real-time processing, and scales well as the number of projects grows.
Option A provides the best solution by exporting logs from the entire NONPROD folder—this includes all child development projects—to a single Cloud Pub/Sub topic in a dedicated SIEM project. By setting the includeChildren property to True, logs from every project beneath the folder are automatically captured. The SIEM can then subscribe to this topic and receive logs in real time, ensuring centralized and continuous monitoring. This method is both scalable and manageable since you only maintain one export configuration and one Pub/Sub topic, reducing operational overhead.
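A sketch of that single folder-level sink using the google-cloud-logging Python client follows; the folder number, SIEM project, and topic name are hypothetical:

```python
from google.cloud.logging_v2.services.config_service_v2 import ConfigServiceV2Client
from google.cloud.logging_v2.types import LogSink

# include_children=True is the key setting: it sweeps in logs from every
# project under the NONPROD folder through one export configuration.
sink = LogSink(
    name="nonprod-to-siem",
    destination="pubsub.googleapis.com/projects/siem-project/topics/nonprod-logs",
    include_children=True,
)
created = ConfigServiceV2Client().create_sink(parent="folders/123456789", sink=sink)

# Grant this service account the Pub/Sub Publisher role on the topic, then
# point the SIEM's subscription at it to receive logs in real time.
print(created.writer_identity)
```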
Option B is less effective because exporting logs only from the billing account with includeChildren set to False means you miss logs from individual projects under the NONPROD folder. Also, using Cloud Storage for log exports introduces latency and is less suitable for real-time alerting or analysis compared to Pub/Sub.
Option C requires exporting logs individually from each development project to separate Pub/Sub topics. While this uses Pub/Sub’s real-time capabilities, managing exports and subscriptions for multiple projects quickly becomes cumbersome as your environment scales. It increases operational complexity compared to the centralized approach in Option A.
Option D involves storing logs in publicly accessible Cloud Storage buckets, which raises serious security concerns. Public buckets expose sensitive data and violate best practices for log security. Additionally, Cloud Storage is not optimal for real-time log ingestion by SIEM due to batch processing delays.
Therefore, Option A stands out as the most secure, scalable, and real-time capable logging export strategy for consolidating development project logs under a NONPROD folder into a SIEM environment.