Snowflake SnowPro Core Recertification Exam Dumps & Practice Test Questions
What is the default Time Travel retention duration that Snowflake provides across all account types for accessing historical data?
A. 0 days
B. 1 day
C. 7 days
D. 14 days
Correct Answer: B
Explanation:
Snowflake offers a unique capability known as Time Travel, which allows users to view, query, and restore historical data. This feature is incredibly valuable for recovering from mistakes like accidental deletions, auditing data modifications, and analyzing changes over time.
By default, Snowflake’s Time Travel feature allows access to historical data for 1 day (24 hours) across all account types and editions. This means any data that was changed or dropped within the past 24 hours can be retrieved without any special configuration or upgraded edition. This 1-day default applies to every edition, including Standard, and strikes a balance between functionality and storage cost.
Let’s examine the options:
Option A (0 days) is incorrect. While Time Travel can be disabled by setting the retention to 0 days, this is not the default state. It’s something the administrator must configure manually.
Option C (7 days) and Option D (14 days) are also incorrect. These longer retention periods are available only to upgraded accounts—specifically Enterprise Edition and higher—and are not configured by default. Users on higher-tier plans can set the Time Travel retention period up to 90 days, but this comes with additional storage costs.
The 1-day default makes it easy for all Snowflake users, even those on Standard Edition, to recover data within a reasonable window of time. If a table is mistakenly dropped or updated incorrectly, users can quickly use SQL statements like SELECT ... AT or UNDROP TABLE to revert changes.
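As a rough illustration, using a hypothetical orders table, these statements show what Time Travel recovery typically looks like in practice:
-- Query the table as it existed one hour ago
SELECT * FROM orders AT(OFFSET => -3600);
-- Query the table as of a specific timestamp
SELECT * FROM orders AT(TIMESTAMP => '2024-06-01 09:00:00'::TIMESTAMP_LTZ);
-- Restore the table after an accidental DROP
UNDROP TABLE orders;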
In addition to Time Travel, Snowflake also includes a Fail-safe period of 7 days after the Time Travel period ends. However, Fail-safe is only for recovery purposes managed by Snowflake support and cannot be accessed directly by users.
So, to summarize:
1 day is the default Time Travel window for all accounts.
Longer periods require Enterprise Edition or above.
The 1-day window enables basic rollback and auditing capabilities with minimal configuration.
Thus, the best answer is B, since it reflects the out-of-the-box, default Time Travel duration across all Snowflake accounts.
When a network policy is applied to a user who is already logged in, and their IP address doesn't meet the new policy rules, how does Snowflake handle their current session?
A. Immediately log the user out.
B. Disable the network policy temporarily.
C. Stop the user from running further queries.
D. Allow the session to continue until it expires or the user logs out.
Correct Answer: D
Explanation:
In Snowflake, network policies are security configurations that define which IP addresses are allowed to access the system. These can be set at the account level or at the user level to restrict login attempts based on IP filtering. However, an important aspect of how Snowflake handles these policies is when and how they are enforced.
If a user is already logged in and a network policy is subsequently applied to them, Snowflake will not disrupt the existing session, even if the user’s current IP address does not comply with the new rules. Instead, the session is allowed to continue until it naturally ends — either through timeout or logout. From that point forward, the user must comply with the new policy in order to log in again.
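A minimal sketch of how such a policy is defined and attached (policy and user names here are hypothetical):
-- Allow logins only from a corporate IP range
CREATE NETWORK POLICY corp_only ALLOWED_IP_LIST = ('192.168.1.0/24');
-- Attach the policy to an individual user
ALTER USER jsmith SET NETWORK_POLICY = 'corp_only';
-- An active session for jsmith continues; the policy is enforced at the next login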
Now let’s evaluate the answer options:
Option A (Log the user out) is incorrect. Snowflake does not forcibly terminate existing sessions simply because a network policy has changed. Doing so would be disruptive and contrary to Snowflake's commitment to session stability.
Option B (Deactivate the policy) is also wrong. Snowflake does not automatically deactivate security settings in response to user activity. The policy remains in force and will be applied during the next login attempt.
Option C (Prevent further queries) is inaccurate. Snowflake does not restrict user activity mid-session, even if the user is in violation of newly assigned network rules. All privileges and capabilities remain active until the session concludes.
Option D (Allow the session to continue) is correct. This behavior ensures a graceful transition between policy changes and avoids unplanned disruptions, particularly in environments where users may be running long-duration tasks.
This design also gives administrators time to coordinate changes and inform users without cutting them off unexpectedly. However, once the session expires or the user logs out, they’ll need to reconnect — and at that point, Snowflake enforces the updated network policy by validating the IP address during the login process.
In short, Snowflake prioritizes session continuity while ensuring that future access conforms to security configurations. Therefore, the correct and most secure response is D.
In Snowflake, what is the policy for modifying the privileges that come pre-assigned to system-defined roles like ACCOUNTADMIN or SYSADMIN?
A. These privileges cannot be revoked.
B. An ACCOUNTADMIN can revoke these privileges.
C. An ORGADMIN can remove these privileges.
D. Any custom role with adequate permissions can revoke them.
Correct Answer: A
Explanation:
Snowflake utilizes a role-based access control (RBAC) system that relies heavily on both system-defined roles and user-defined roles to manage access to resources within an account. System-defined roles such as ACCOUNTADMIN, SYSADMIN, and SECURITYADMIN are built into Snowflake and come with a predefined set of permissions essential to their intended administrative functions.
The critical policy to understand is that privileges granted to these system-defined roles are non-revocable. This is a deliberate architectural design by Snowflake to maintain security, stability, and continuity of administrative capabilities. Option A is therefore the correct answer: privileges cannot be revoked from system-defined roles.
Let’s take the example of the ACCOUNTADMIN role. This role is the highest-level administrative role within a Snowflake account. It is automatically granted all global privileges needed to oversee the account. If users were allowed to remove privileges from this role, it could undermine core functions like user management, billing visibility, or account recovery. For this reason, Snowflake hardwires these privileges into the role to prevent accidental or intentional disruptions to core operations.
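To make this concrete, here is a hypothetical attempt to strip a privilege from ACCOUNTADMIN; per the policy described above, Snowflake will not honor it, because the grants that define system roles are managed internally:
-- Not honored: privileges built into system-defined roles cannot be revoked
REVOKE CREATE USER ON ACCOUNT FROM ROLE ACCOUNTADMIN;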
Now, why are the other options incorrect?
B. An ACCOUNTADMIN can revoke privileges: While this role has extensive control over most account resources and roles, it cannot revoke privileges that are inherently part of the system-defined roles, including its own. These assignments are managed internally by Snowflake.
C. An ORGADMIN can remove privileges: The ORGADMIN role operates at a multi-account organizational level, primarily to manage account creation, billing, and org-wide configurations. It doesn’t have authority to manipulate role privileges within individual accounts, especially not system roles.
D. Any custom role with adequate permissions can revoke them: Custom roles may be granted powerful privileges, but they do not have access to alter or remove predefined privileges from built-in roles. Doing so would create significant security risks and violate the RBAC model’s integrity.
In conclusion, system-defined roles in Snowflake are structurally protected to ensure that they always have the permissions necessary to maintain operational integrity. The inability to revoke these privileges helps safeguard against misconfigurations, access loss, or security breaches. As such, Option A is the only valid choice.
In Snowflake, what is the initial access level assigned to a newly created securable object before any privileges are granted?
A. No access
B. Read access
C. Write access
D. Full access
Correct Answer: A
Explanation:
Snowflake enforces a “deny by default” policy for all securable objects, which includes items like databases, schemas, tables, views, and warehouses. This means that newly created objects are completely inaccessible to users and roles until permissions are explicitly granted. Therefore, the correct answer is A – No access.
This access policy is a cornerstone of Snowflake’s security-first architecture. It ensures that sensitive data or compute resources are never exposed accidentally. The one built-in exception is ownership: the role used to create an object automatically holds the OWNERSHIP privilege on it and therefore has full control. To every other role, a newly created object remains completely inaccessible until the appropriate privileges, such as SELECT, are explicitly granted.
Privileges in Snowflake must be granted via roles using the GRANT command. These privileges include standard actions like SELECT, INSERT, UPDATE, DELETE, as well as structural actions like USAGE on schemas or databases. Once a role has been granted access to a securable object, that role can then be assigned to users or other roles, enabling access based on organizational policies.
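A minimal sketch of that grant chain, with hypothetical database, role, and user names:
-- Structural access to the containers
GRANT USAGE ON DATABASE sales_db TO ROLE analyst;
GRANT USAGE ON SCHEMA sales_db.public TO ROLE analyst;
-- Object-level privilege on the table itself
GRANT SELECT ON TABLE sales_db.public.orders TO ROLE analyst;
-- Finally, assign the role to a user
GRANT ROLE analyst TO USER jsmith;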
Let’s break down why the other choices are incorrect:
B. Read access: No role other than the owning role receives automatic read access. Any other role must be explicitly granted SELECT before it can read from the object.
C. Write access: Similarly, no INSERT, UPDATE, or DELETE privileges are granted by default. All these permissions must be granted manually.
D. Full access: This would be a serious security flaw if granted by default. It would mean any user might inadvertently or maliciously perform unintended actions on data or compute resources.
The default “no access” stance aligns with best practices in cloud security. It ensures that access is always intentional and explicitly configured. This minimizes data leaks, protects against privilege creep, and ensures compliance with standards like least privilege access and auditable access control.
In conclusion, until permissions are granted to roles through Snowflake’s RBAC model, securable objects remain completely inaccessible. This ensures strong security postures by default, making A – No access the correct and safest answer.
Within Snowflake’s Change Data Capture (CDC) capabilities, on which two types of objects can you enable streams to track data changes? (Choose two options.)
A. Pipe
B. Stage
C. Secure view
D. Materialized view
E. Shared table
Correct Answers: C and E
Explanation:
Snowflake offers streams as a native feature for Change Data Capture (CDC), enabling users to detect changes—such as inserts, updates, and deletes—on supported data objects. Streams are especially useful in incremental ETL pipelines, where tracking what’s changed since the last query allows systems to avoid full data refreshes, thereby optimizing performance and reducing cost.
Streams can be created only on objects whose underlying data supports change tracking. In Snowflake this includes standard tables (including tables shared via Secure Data Sharing), external tables, directory tables, and views (including secure views) whose underlying tables have change tracking enabled. Let's evaluate each option:
A. Pipe:
Pipes are components used with Snowpipe, which automates data ingestion from stages into tables. A pipe defines the logic for continuous loading but does not store data or support DML operations. Since there are no row-level changes to track in a pipe, streams cannot be applied to it.
B. Stage:
Stages are storage locations (either internal or external, such as AWS S3 or Azure Blob) for loading/unloading files. They are not database tables and do not contain structured data in the form Snowflake tables do. As such, stages are not suitable for streaming operations or CDC tracking.
C. Secure view:
Streams are supported on views, including secure views, as long as change tracking is enabled on the view’s underlying tables. The stream records the inserts, updates, and deletes that surface through the view, which makes secure views a valid CDC source and one of the correct answers. This is particularly useful when only a filtered, protected slice of a table should be exposed for change tracking.
D. Materialized view:
Unlike regular views, materialized views physically store the results of a query and are refreshed automatically as the underlying data changes. Despite this, Snowflake does not support creating streams on materialized views. Change tracking must instead be performed on the underlying tables or on a regular or secure view over them, so this option is incorrect.
E. Shared table:
Shared tables are part of Snowflake’s Secure Data Sharing feature, allowing data to be shared across different accounts. Snowflake permits streams on shared tables, provided the data provider has enabled change tracking on them, which enables the recipient to implement change tracking or incremental processing on the shared data.
In summary, of the options listed, only secure views and shared tables are valid sources for streams. Together they allow organizations to implement robust, near-real-time CDC solutions inside Snowflake.
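A minimal sketch of both cases, using hypothetical object names (for a view, change tracking must be enabled on its underlying tables):
-- Stream on a table shared into this account via Secure Data Sharing
CREATE STREAM orders_stream ON TABLE shared_db.public.orders;
-- Stream on a secure view
CREATE STREAM view_stream ON VIEW sales_db.public.secure_orders_v;
-- Consume only the rows changed since the stream's current offset
SELECT * FROM orders_stream;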
When building dashboards in Snowsight, which chart type is currently supported for visualizing data trends within the Snowflake platform?
A. Area chart
B. Box plot
C. Heat grid
D. Pie chart
Correct Answer: C
Explanation:
Snowsight, Snowflake’s modern user interface, is more than just a SQL editor—it includes features for data exploration, visualization, and dashboarding directly within the Snowflake environment. Its native chart types are bar charts, line charts, scatterplots, heat grids, and scorecards, and of the options listed here, only the heat grid is among them.
A heat grid renders the relationship between two dimensions as a matrix of cells whose color intensity encodes a measure (for example, query volume by hour of day across days of the week). In Snowsight, users can select this chart type after executing a SQL query and map result columns to the grid’s rows, columns, and cell color, making patterns and trends visible without leaving the platform.
Let’s analyze the other options:
A. Area chart:
An area chart is essentially a line chart with the region beneath the line filled in, and it is popular for emphasizing cumulative totals over time. However, Snowsight does not currently offer area charts as a native chart type; a plain line chart is the closest built-in alternative.
B. Box plot:
Box plots are statistical charts that display the distribution of data, highlighting medians, quartiles, and outliers. While incredibly useful in analytical scenarios, Snowsight does not currently offer box plots as a native visualization option. Users requiring these must export data to tools like Tableau, Power BI, or Python notebooks.
D. Pie chart:
Despite being one of the most widely recognized chart formats, Snowsight does not support pie charts. This may be intentional, as pie charts are frequently criticized in the data visualization community for being less effective than bar charts at displaying proportional relationships clearly.
By contrast, heat grids are fully supported in Snowsight dashboards. They integrate seamlessly with SQL query results and provide real-time views of two-dimensional patterns without exporting data to external tools.
In summary, among the given options, only the heat grid is supported and available in Snowsight for visualizing data directly within Snowflake dashboards.
What occurs when the “Notify & Suspend” option is activated on a Snowflake resource monitor after a specified credit usage threshold is reached?
A. An alert is sent to all account users with notifications enabled.
B. An alert is sent to all virtual warehouse users once credit usage surpasses 100%.
C. A notification is sent to all administrators with notifications enabled, and assigned warehouses are suspended after completing any active queries.
D. A notification is sent to all administrators with notifications enabled, and all assigned warehouses are immediately suspended, terminating ongoing queries.
Correct Answer: C
Explanation:
Snowflake provides resource monitors to help organizations manage and control the consumption of compute credits. These monitors are vital for ensuring cost control, especially in environments with multiple users and large-scale data processing. Resource monitors allow administrators to configure specific actions when usage thresholds are reached, such as “Notify,” “Suspend,” or “Notify & Suspend.”
The “Notify & Suspend” action is designed to take a balanced approach between cost management and operational continuity. Here’s what happens when this action is triggered:
Notification Mechanism:
When the defined credit usage threshold is reached, Snowflake sends a notification to account administrators only—specifically, those who have notifications enabled in their user settings. This ensures the message reaches only responsible individuals who are expected to respond to such events, rather than overloading all users with irrelevant alerts.
Suspension Behavior:
After notifying the administrators, Snowflake suspends all warehouses assigned to the resource monitor—but importantly, only after all current SQL statements have finished executing. This controlled approach ensures that important queries or transactions are not interrupted, which protects data integrity and avoids job failures. It's a graceful suspension that halts future credit usage without disrupting in-progress tasks.
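A minimal sketch of a monitor that exercises all three trigger actions (names and thresholds here are hypothetical):
CREATE RESOURCE MONITOR monthly_cap WITH CREDIT_QUOTA = 100
  TRIGGERS ON 80 PERCENT DO NOTIFY              -- alert administrators only
           ON 100 PERCENT DO SUSPEND            -- "Notify & Suspend": running queries finish first
           ON 110 PERCENT DO SUSPEND_IMMEDIATE; -- cancels running queries
ALTER WAREHOUSE etl_wh SET RESOURCE_MONITOR = monthly_cap;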
Let’s examine why the other options are incorrect:
A is inaccurate because alerts are sent only to account administrators, not all users.
B is wrong on two counts: virtual warehouse users do not receive alerts, and the action is not dependent on exceeding 100%—thresholds can be set at custom percentages.
D describes the behavior of the “Suspend Immediately” action, where running queries are terminated. That’s not the case with “Notify & Suspend,” which waits for active processes to complete before suspending.
In summary, “Notify & Suspend” ensures a smooth transition to halt credit use by alerting key personnel and gracefully suspending warehouses. It combines cost management with operational reliability, which is essential for production environments. Thus, the correct answer is C.
What are two benefits of Snowflake’s architectural approach that separates compute from storage? (Choose two.)
A. Enables independent scaling of computing power.
B. Guarantees consistent data encryption across all compute instances.
C. Automatically transforms semi-structured data into structured formats.
D. Requires that storage growth and compute growth happen together.
E. Allows compute to scale without needing to increase storage capacity.
Correct Answers: A and E
Explanation:
One of the most innovative aspects of Snowflake’s cloud data platform is its decoupled architecture, meaning compute and storage are separated. Unlike legacy systems that often bind these two components tightly together, Snowflake gives organizations the freedom to scale each independently, which delivers both flexibility and cost efficiency.
Let’s break down the two correct options:
A. Independent scaling of computing resources
This is one of the most powerful features of Snowflake. In traditional systems, adding more storage often meant adding compute, and vice versa—even if only one was needed. Snowflake removes this limitation. If a workload becomes more compute-intensive (e.g., during complex query processing), you can scale up compute clusters without changing the storage configuration. This allows teams to manage performance dynamically based on real-time demands.
E. Scaling compute without needing additional storage
Another direct benefit of the separation is the elastic scaling of compute resources without altering storage. For example, during heavy data transformation tasks or high query concurrency periods, compute clusters (virtual warehouses) can be scaled up or spun out in parallel. This provides the needed performance boost without provisioning unnecessary storage, making resource usage more efficient and cost-effective.
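A minimal sketch of both scaling patterns on a hypothetical warehouse; note that storage is untouched in either case, and multi-cluster scaling requires Enterprise Edition or higher:
-- Scale up for a heavier workload
ALTER WAREHOUSE etl_wh SET WAREHOUSE_SIZE = 'XLARGE';
-- Scale out for higher concurrency (multi-cluster)
ALTER WAREHOUSE etl_wh SET MIN_CLUSTER_COUNT = 1 MAX_CLUSTER_COUNT = 4;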
Now, examining the incorrect options:
B. Data encryption
Snowflake does offer end-to-end encryption, but this is a standard security feature. It is not a result of separating compute and storage. Encryption is applied at rest and in transit, independently of architecture design.
C. Automatic conversion of semi-structured data
While Snowflake excels at handling semi-structured formats like JSON, Avro, and Parquet, this capability is due to its native support for VARIANT data types and powerful SQL engine—not because of the storage-compute separation.
D. Coupling of compute and storage growth
This contradicts the entire design philosophy of Snowflake. One of the platform’s biggest advantages is that compute and storage growth are independent, which enhances flexibility and scalability.
In conclusion, Snowflake’s architecture enables organizations to maximize resource efficiency and performance by allowing compute and storage to scale independently, making A and E the correct choices.
Which parameter in Snowflake allows an account administrator to set the default number of days that historical data is retained for Time Travel at the account level?
A. DATA_RETENTION_TIME_IN_DAYS
B. MAX_DATA_EXTENSION_TIME_IN_DAYS
C. MIN_DATA_RETENTION_TIME_IN_DAYS
D. MAX_CONCURRENCY_LEVEL
Correct Answer: A
Explanation:
In Snowflake, Time Travel is a powerful feature that allows users to query and restore data from previous states after it has been modified or deleted. This capability is critical for data recovery, auditing, and change tracking. The number of days that historical data remains available through Time Travel is controlled by specific parameters—chiefly the DATA_RETENTION_TIME_IN_DAYS parameter.
Option A, DATA_RETENTION_TIME_IN_DAYS, is the correct answer because it directly specifies the duration (in days) that Snowflake retains historical data for Time Travel. This parameter can be set at various levels—account, database, schema, and table—with the account-level setting serving as the default unless explicitly overridden at lower levels. It ensures that all new objects inherit this default unless customized.
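A minimal sketch of setting the account default and overriding it per object (object names are hypothetical, and values above 1 day require Enterprise Edition or higher):
-- Account-wide default (requires ACCOUNTADMIN)
ALTER ACCOUNT SET DATA_RETENTION_TIME_IN_DAYS = 7;
-- Per-table override
ALTER TABLE sales_db.public.orders SET DATA_RETENTION_TIME_IN_DAYS = 30;
-- Verify the effective setting
SHOW PARAMETERS LIKE 'DATA_RETENTION_TIME_IN_DAYS' IN ACCOUNT;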
Here’s why the other options are incorrect:
B. MAX_DATA_EXTENSION_TIME_IN_DAYS – This parameter does exist, but it serves a different purpose: it limits how many days Snowflake may automatically extend a table’s data retention period to prevent streams on that table from becoming stale. It does not define the default Time Travel retention window.
C. MIN_DATA_RETENTION_TIME_IN_DAYS – This parameter does exist, but it does not define the actual retention period. Instead, it sets a floor (i.e., a minimum threshold) at the account level, ensuring that no object can have a retention time shorter than this. It acts as a compliance control, not the active setting for retention duration.
D. MAX_CONCURRENCY_LEVEL – This relates to Snowflake’s warehouse performance, specifying how many queries a virtual warehouse can handle concurrently. It has no connection to Time Travel or data retention policies.
Additionally, consider:
For Standard Edition, the maximum DATA_RETENTION_TIME_IN_DAYS is 1 day (it can be set only to 0 or 1).
For Enterprise Edition and above, retention can be extended up to 90 days.
Administrators must weigh storage costs, as longer retention increases storage consumption.
Therefore, the correct and most relevant parameter for defining Time Travel’s data retention period is A. DATA_RETENTION_TIME_IN_DAYS.
At which system level is the Snowflake parameter MIN_DATA_RETENTION_TIME_IN_DAYS configured to enforce a minimum Time Travel retention policy?
A. Account
B. Database
C. Schema
D. Table
Correct Answer: A
Explanation:
The MIN_DATA_RETENTION_TIME_IN_DAYS parameter in Snowflake serves as a compliance safeguard that ensures Time Travel retention periods across the system meet a minimum required threshold. Its primary role is to prevent administrators or developers from configuring retention settings that fall below an organizational or regulatory minimum standard.
This parameter is set exclusively at the account level, which makes Option A the correct choice. By configuring this setting at the account level, organizations can enforce a consistent and compliant baseline for all Time Travel retention configurations across databases, schemas, and tables.
Here’s how it works:
When an object (e.g., a table) attempts to set a DATA_RETENTION_TIME_IN_DAYS value that is less than the account’s MIN_DATA_RETENTION_TIME_IN_DAYS, Snowflake overrides it, enforcing the higher value.
This prevents any object from slipping below the minimum data recovery and audit standards the organization has deemed necessary.
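A minimal sketch of that interplay, with a hypothetical table name:
-- Account-level floor (ACCOUNTADMIN only)
ALTER ACCOUNT SET MIN_DATA_RETENTION_TIME_IN_DAYS = 7;
-- The effective retention is MAX(account minimum, object setting)
ALTER TABLE sales_db.public.orders SET DATA_RETENTION_TIME_IN_DAYS = 1;  -- effective retention is still 7 days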
Now let’s look at why the other options are incorrect:
B. Database – While databases can have specific DATA_RETENTION_TIME_IN_DAYS settings, they do not control or enforce the minimum across the account.
C. Schema – Like databases, schemas can inherit retention settings but are not eligible to define the minimum. They follow whatever constraint is set at the account level.
D. Table – Tables can have individual retention periods, but again, any value set must comply with the account-level minimum. Tables do not define policy—they adhere to it.
This structure ensures centralized governance. By enforcing this rule from the top level of the Snowflake hierarchy, the platform enables organizations to uphold audit integrity, data recoverability, and legal retention standards.
Thus, for ensuring a minimum Time Travel retention duration across the entire Snowflake account, the parameter is configured at the Account level, making A the correct answer.