Snowflake SnowPro Core Exam Dumps & Practice Test Questions
Question 1:
What feature does Snowflake offer that lets customers override its automatic clustering behavior to control data organization?
A. Micro-partitions
B. Clustering keys
C. Key partitions
D. Clustered partitions
Correct Answer: B
Explanation:
Snowflake’s architecture relies heavily on automatic data clustering through a system called micro-partitions, which are small, contiguous units of data storage. These micro-partitions allow Snowflake to automatically and efficiently manage how data is physically stored and accessed, without user intervention. However, this automatic clustering is not always optimized for specific query patterns or organizational requirements.
To address this, Snowflake provides a feature known as clustering keys. Clustering keys allow users to explicitly define one or more columns to guide how data is physically organized within these micro-partitions. When clustering keys are defined, Snowflake uses them to co-locate similar data together, improving the performance of queries that filter or aggregate data based on those keys. This is particularly important for large tables where scanning irrelevant data could be expensive and slow.
Micro-partitions, while fundamental to Snowflake’s storage design, are automatically managed by the platform and cannot be directly controlled or overridden by users. The terms “key partitions” and “clustered partitions” do not correspond to recognized Snowflake features or terminology in this context.
Therefore, the correct answer is B — clustering keys provide customers with a mechanism to override Snowflake’s default automatic clustering, allowing better control over data storage organization and optimized query performance.
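For illustration, a clustering key can be defined when a table is created or added later with ALTER TABLE. The table and column names below are hypothetical:

```sql
-- Define a clustering key at table creation (hypothetical names)
CREATE TABLE sales (
    sale_date DATE,
    region    VARCHAR,
    amount    NUMBER(12,2)
) CLUSTER BY (sale_date, region);

-- Add or change the clustering key on an existing table
ALTER TABLE sales CLUSTER BY (sale_date);

-- Remove the clustering key, reverting to natural clustering
ALTER TABLE sales DROP CLUSTERING KEY;
```

Queries that filter on the clustering key columns (for example, a date range on sale_date) benefit most, since Snowflake can prune micro-partitions whose metadata falls outside the filter.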
Question 2:
Which two of the following are official Snowflake Virtual Warehouse scaling policies?
A. Custom
B. Economy
C. Optimized
D. Standard
Correct Answers: B, D
Explanation:
In Snowflake, virtual warehouses are compute clusters that execute queries and manage workloads. For multi-cluster warehouses, Snowflake offers scaling policies that govern when additional clusters are started or shut down in response to workload demand. There are exactly two official scaling policies: Standard and Economy.
The Standard policy is the default. It favors performance and responsiveness, starting additional clusters as soon as queries begin to queue. This minimizes wait times for users, at the cost of consuming credits more readily.
The Economy policy favors cost savings. It starts an additional cluster only when the system estimates there is enough queued work to keep that cluster busy for at least six minutes, and it shuts clusters down more conservatively. It suits workloads where some queuing is acceptable in exchange for lower credit consumption.
Custom is not an official scaling policy. Snowflake does not let administrators define arbitrary named scaling policies; scaling behavior is controlled through the two built-in policies together with the warehouse's minimum and maximum cluster counts.
Likewise, despite sounding plausible, Optimized is not a recognized Snowflake scaling policy.
Therefore, the two official Snowflake Virtual Warehouse scaling policies are Economy and Standard, making B and D the correct answers.
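For reference, a scaling policy is assigned when creating or altering a multi-cluster warehouse. The warehouse name and settings below are hypothetical:

```sql
-- Hypothetical multi-cluster warehouse using the Economy scaling policy
CREATE WAREHOUSE reporting_wh
    WAREHOUSE_SIZE    = 'MEDIUM'
    MIN_CLUSTER_COUNT = 1
    MAX_CLUSTER_COUNT = 4
    SCALING_POLICY    = 'ECONOMY'
    AUTO_SUSPEND      = 300      -- seconds of inactivity before suspending
    AUTO_RESUME       = TRUE;
```

Note that SCALING_POLICY only accepts 'STANDARD' or 'ECONOMY', and it only has an effect when MIN_CLUSTER_COUNT and MAX_CLUSTER_COUNT differ (i.e., the warehouse can actually scale out).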
Question 3:
Is it possible for a single Snowflake database to be accessible from multiple Snowflake accounts?
A. True
B. False
Correct Answer: A
Explanation:
In Snowflake, a database is a logical container that holds tables, views, and other data objects. Typically, each database exists within the scope of a single Snowflake account. However, Snowflake offers a powerful feature called data sharing which allows for seamless, secure sharing of database objects across different Snowflake accounts.
Data sharing does not replicate or copy data between accounts; instead, it provides direct, read-only access to the shared database objects from one account to another. This enables multiple accounts to query the same dataset without the overhead or risk of data duplication, ensuring data consistency and reducing storage costs.
The ability to share databases or specific tables across accounts fosters collaboration between different teams, business units, or even external partners, while the original database owner retains control over what data is shared and who has access. This secure sharing mechanism enables a single database to effectively exist and be accessible across multiple Snowflake accounts simultaneously.
This multi-account accessibility is a core capability that distinguishes Snowflake’s multi-cloud data platform and simplifies data governance and collaboration at scale.
Therefore, the statement is true: a single Snowflake database can be accessed by multiple Snowflake accounts through data sharing.
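A minimal sketch of the sharing workflow, with hypothetical object and account names, might look like this:

```sql
-- In the provider account: create a share and grant read access
CREATE SHARE sales_share;
GRANT USAGE ON DATABASE sales_db TO SHARE sales_share;
GRANT USAGE ON SCHEMA sales_db.public TO SHARE sales_share;
GRANT SELECT ON TABLE sales_db.public.orders TO SHARE sales_share;
ALTER SHARE sales_share ADD ACCOUNTS = consumer_account;

-- In the consumer account: create a read-only database from the share
CREATE DATABASE shared_sales FROM SHARE provider_account.sales_share;
```

The consumer queries shared_sales like any other database, but the data itself never leaves the provider's storage, and the provider can revoke access at any time.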
Question 4:
Which Snowflake role is best suited for handling the creation and management of users and roles within a Snowflake account?
A. SYSADMIN
B. SECURITYADMIN
C. PUBLIC
D. ACCOUNTADMIN
Correct Answer: B
Explanation:
In Snowflake, access control and administrative tasks are governed by a role-based access control (RBAC) system. Different roles are designed with specific privileges to manage various aspects of the platform, including data objects, security, and overall account administration. When the task is to create and manage users and roles, the SECURITYADMIN role is the most appropriate.
The SYSADMIN role (A) primarily manages data infrastructure components such as databases, schemas, and warehouses. While it holds extensive privileges over these objects, it does not inherently have the rights to create or manage users and roles unless elevated by a higher-level role. Therefore, SYSADMIN is not recommended for user and role management.
The SECURITYADMIN role (B) is explicitly designed for security administration, including user and role management. It can create and drop users, assign roles to users, create new roles, and manage role hierarchies. Positioned just below ACCOUNTADMIN in the role hierarchy, SECURITYADMIN provides the necessary control to administer security without granting full system-wide administrative privileges. This separation aligns with best practices of limiting privileges to reduce risk.
The PUBLIC role (C) is the default role granted to all Snowflake users, giving minimal access rights. It cannot perform any administrative functions like managing users or roles, making it unsuitable for this purpose.
The ACCOUNTADMIN role (D) has the highest level of privilege in Snowflake, with unrestricted access across the account. Although ACCOUNTADMIN can manage users and roles, it is best reserved for critical account-wide tasks due to the risk of accidental misconfiguration. Delegating routine user and role management to SECURITYADMIN supports the principle of least privilege.
In summary, SECURITYADMIN strikes the right balance for securely managing users and roles without exposing the entire system to unnecessary risk, making B the correct answer.
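A typical user-and-role administration session under SECURITYADMIN might look like the sketch below (user, role, and password values are hypothetical placeholders):

```sql
-- Performed while using the SECURITYADMIN role
USE ROLE SECURITYADMIN;

-- Create a role and a user, then link them (hypothetical names)
CREATE ROLE analyst_role;
CREATE USER analyst_user
    PASSWORD = '<placeholder>'      -- use a real secret in practice
    DEFAULT_ROLE = analyst_role
    MUST_CHANGE_PASSWORD = TRUE;
GRANT ROLE analyst_role TO USER analyst_user;
```

Granting privileges on databases and warehouses to analyst_role would then typically be handled by SYSADMIN, keeping security administration and object administration separated.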
Question 5:
Is it true or false that Snowflake's bulk data unloading supports using a SELECT statement to specify the data extracted?
A. True
B. False
Correct Answer: A
Explanation:
In Snowflake, unloading data in bulk from tables or query results is commonly done using the COPY INTO <location> command. This powerful feature allows exporting data to flat files stored either in internal Snowflake stages or external cloud storage services like Amazon S3, Google Cloud Storage, or Microsoft Azure Blob Storage.
A key advantage of this command is its flexibility to accept a full SELECT statement as the source of the data to be unloaded. This means you can export the results of complex queries rather than being limited to entire tables. For example, you might select specific columns, filter rows, join multiple tables, or perform aggregations before unloading.
The ability to use a SELECT statement enables sophisticated data transformations as part of the unloading process. Functions such as OBJECT_CONSTRUCT allow converting rows into JSON format, while CAST functions can adjust data types for formats like Parquet. Additionally, Snowflake supports partitioning the output files using a PARTITION BY clause, which organizes the unloaded data into directory structures that optimize downstream processing.
By supporting SELECT statements, Snowflake gives users significant control over what data gets unloaded and how it is formatted. This feature enhances data pipeline flexibility, supports ETL processes, and allows efficient data exports tailored to analytical or operational needs.
Because of these capabilities, the assertion that bulk unloading supports a SELECT statement is True, making A the correct choice.
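As a sketch of both capabilities described above (stage, table, and column names are hypothetical; when unloading to JSON, the query should return a single VARIANT column):

```sql
-- Unload a filtered query as JSON to a named internal stage
COPY INTO @my_stage/exports/customers_
FROM (
    SELECT OBJECT_CONSTRUCT('id', customer_id, 'name', name, 'city', city)
    FROM customers
    WHERE signup_date >= '2023-01-01'
)
FILE_FORMAT = (TYPE = 'JSON');

-- Unload as CSV, partitioned into per-region directories
COPY INTO @my_stage/by_region/
FROM (SELECT region, customer_id, name FROM customers)
PARTITION BY ('region=' || region)
FILE_FORMAT = (TYPE = 'CSV');
```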
Question 6:
Which three types of internal stages are available in Snowflake for managing staged data?
A. Named Stage
B. User Stage
C. Table Stage
D. Schema Stage
Correct Answers: A, B, C
Explanation:
Snowflake offers internal stages as dedicated storage locations to temporarily hold data files during the data loading and unloading processes. These internal stages are fully managed by Snowflake and provide users with the convenience of staging files without relying on external cloud storage services. There are three main types of internal stages, each serving a distinct purpose:
Named Stage: This is a user-defined stage created explicitly using the CREATE STAGE command. Named stages can be customized to include specific file formats, encryption options, and storage integration details. They serve as reusable storage locations that multiple users or processes can access. Named stages are useful for centralized file management, sharing data across workflows, and enforcing access controls.
User Stage: Every Snowflake user automatically has a private user stage. This stage is personal to the user and does not require explicit creation. It’s ideal for individual use cases, such as temporary storage or testing scenarios. Files uploaded to the user stage remain private unless the user explicitly shares them. The user stage is accessed via commands like PUT to upload files and GET to download them.
Table Stage: Every table in Snowflake comes with its own dedicated stage that is automatically managed by Snowflake. This table stage is primarily used for unloading data from that table into files or temporarily staging files before loading them into the table. It simplifies export/import operations tied directly to specific tables.
The option D (Schema Stage) is incorrect because Snowflake does not support any internal stage concept linked to schemas. Schemas are logical database containers organizing objects but are not used for staging data.
Understanding these three internal stages—Named, User, and Table—is crucial for optimizing data ingestion, export workflows, and secure file management within Snowflake.
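The three stage types are referenced with distinct prefixes. A brief sketch, using hypothetical names and a hypothetical local file path, as run from SnowSQL:

```sql
-- Named stage: created explicitly, reusable across users and workflows
CREATE STAGE my_stage FILE_FORMAT = (TYPE = 'CSV');
PUT file:///tmp/data.csv @my_stage;

-- User stage: implicit per user, referenced with @~
PUT file:///tmp/data.csv @~/staged/;

-- Table stage: implicit per table, referenced with @%table_name
PUT file:///tmp/data.csv @%my_table;
LIST @%my_table;
```

Note that PUT and GET are client-side commands (e.g., SnowSQL) rather than statements you can run from the web UI worksheet.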
Question 7:
Is it true or false that a customer using SnowSQL or Snowflake’s native connectors cannot access the Snowflake Web Interface unless support explicitly grants permission?
A. True
B. False
Correct Answer: B
Explanation:
Snowflake provides multiple interfaces for users to interact with their data and cloud data warehouse environment. These include the Snowflake Web Interface (also known as Snowsight), SnowSQL (the command-line client), and various native connectors for integrating with external applications. Importantly, the ability to use one interface does not restrict access to the others.
The Snowflake Web Interface is a browser-based graphical UI that allows users to perform many administrative and querying functions such as writing SQL queries, managing databases, reviewing query history, and monitoring warehouse performance. Access to this web UI is granted based on user credentials and roles, without requiring any additional permissions from Snowflake support just because the user employs other interfaces.
SnowSQL is a command-line tool used to execute SQL commands and manage Snowflake resources programmatically. It operates independently of the Web UI and provides an alternative way to interact with Snowflake. Similarly, native connectors facilitate integration with external software and do not affect UI accessibility.
Therefore, the claim that users who connect via SnowSQL or native connectors must have explicit access granted by support to use the Web Interface is false. Users can freely switch between or simultaneously use these interfaces as long as they have appropriate credentials. This design ensures flexibility and convenience, enabling users to work with Snowflake in the manner that best suits their workflows and technical preferences.
Question 8:
Where can you monitor storage usage for an entire Snowflake account?
A. In the Snowflake Web UI under the Databases section
B. In the Snowflake Web UI under Account → Billing & Usage
C. Through the Information Schema’s ACCOUNT_USAGE_HISTORY view
D. Via the Account Usage Schema’s ACCOUNT_USAGE_METRICS view
Correct Answer: B
Explanation:
Monitoring storage consumption at the account level in Snowflake is crucial for organizations to track costs and optimize resource usage. Snowflake offers multiple tools to examine data usage, but only some provide a full, account-wide overview.
Option A refers to the Databases section in the Snowflake Web UI. While it shows storage details for individual databases, this does not aggregate usage across the entire account. This means you can see how much storage each database consumes, but not a consolidated total. Therefore, this option is not suited for overall account-level monitoring.
Option B, the Account → Billing & Usage section in the Snowflake Web UI, is designed to provide a comprehensive snapshot of account-wide storage metrics. This section covers database storage, staged files, and Fail-safe storage. It also offers historical trends and visual reports to help administrators track usage and manage billing effectively. The information here is derived from Snowflake’s internal ACCOUNT_USAGE schema and is intended for billing administrators and stakeholders responsible for cost management. This makes option B the correct and most practical choice for account-level storage monitoring.
Option C mentions an Information Schema view named ACCOUNT_USAGE_HISTORY. However, such a view does not exist in Snowflake’s Information Schema. Information Schema primarily provides metadata about database objects but does not include comprehensive account usage history views. The proper views for usage are under the ACCOUNT_USAGE schema, not Information Schema, making this option invalid.
Option D refers to a view named ACCOUNT_USAGE_METRICS in the Account Usage schema, which also does not exist. While Snowflake’s ACCOUNT_USAGE schema contains relevant views such as STORAGE_USAGE and DATABASE_STORAGE_USAGE_HISTORY, ACCOUNT_USAGE_METRICS is not a documented or standard view, thus invalidating this choice.
In summary, for a centralized and accurate account-level storage overview, the Billing & Usage section in the Snowflake Web UI is the best place to monitor usage.
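The same account-wide figures can also be queried directly from the ACCOUNT_USAGE schema, assuming a role with the necessary privileges (e.g., ACCOUNTADMIN or a role granted access to the SNOWFLAKE database):

```sql
-- Account-wide daily storage from the ACCOUNT_USAGE schema
SELECT usage_date,
       storage_bytes,
       stage_bytes,
       failsafe_bytes
FROM snowflake.account_usage.storage_usage
ORDER BY usage_date DESC
LIMIT 30;
```

This is the programmatic counterpart to the Billing & Usage page and is useful for automating storage-cost reporting.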
Question 9:
Which two factors directly affect the credit consumption of Snowflake’s Compute Layer, specifically virtual warehouses? (Select two.)
A. The number of users
B. The size of the warehouse
C. The amount of data processed
D. The number of clusters configured for the warehouse
Correct Answers: B, D
Explanation:
In Snowflake, credits for compute resources are primarily consumed by virtual warehouses, which are clusters of compute resources running SQL queries and other operations. Understanding what influences credit consumption helps optimize costs and performance.
Option B, warehouse size, is a key determinant of credit consumption. Snowflake offers warehouses ranging from X-Small up to 6X-Large, with each size tier consuming credits at a different hourly rate. For example, an X-Small warehouse uses 1 credit per hour, while a Large warehouse consumes 8 credits per hour. The larger the warehouse, the more compute resources are allocated, resulting in higher credit consumption, regardless of whether it is fully utilized or idle. Therefore, warehouse size directly affects how many credits are consumed.
Option D refers to the number of clusters configured within a warehouse. Snowflake supports multi-cluster warehouses that can scale horizontally to handle high concurrency. When a multi-cluster warehouse is set up with a minimum and maximum number of clusters, additional clusters are spun up during workload spikes. Each active cluster consumes credits equivalent to a full warehouse of the configured size. For instance, if a Large warehouse has three active clusters, credits are consumed at three times the single Large warehouse rate. Thus, the number of clusters directly impacts total credit usage.
Option A, the number of users, is not a direct factor in credit calculation. While more users may increase query traffic, Snowflake bills compute usage based on warehouse runtime and configuration, not on user count. A single user running complex queries on a large warehouse can consume more credits than many users on a small warehouse.
Option C, the volume of data processed, might influence query duration, indirectly affecting how long the warehouse runs, but credit consumption is not calculated based on data volume. It is purely tied to warehouse uptime and size.
In conclusion, credit consumption in Snowflake's compute layer is primarily controlled by the warehouse size and the number of clusters running. This ensures scalability and performance but at increased cost with larger or multi-cluster setups. Therefore, the correct answers are B and D.
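As a rough worked example: a Large warehouse bills 8 credits per hour per cluster, so three active clusters running for two hours consume about 8 × 3 × 2 = 48 credits (Snowflake bills per second, with a 60-second minimum each time a warehouse resumes). Actual consumption can be reviewed from the ACCOUNT_USAGE schema:

```sql
-- Credit consumption per warehouse over the last 7 days
SELECT warehouse_name,
       SUM(credits_used) AS total_credits
FROM snowflake.account_usage.warehouse_metering_history
WHERE start_time >= DATEADD(day, -7, CURRENT_TIMESTAMP())
GROUP BY warehouse_name
ORDER BY total_credits DESC;
```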
Question 10:
Which option best describes the concept of clustering in Snowflake?
A. Clustering refers to how data is organized and stored within Snowflake’s micro-partitions
B. It is required for a database administrator to specify a clustering method for every Snowflake table
C. The clustering key must be included when using the COPY command to load data into Snowflake
D. Clustering can be turned off at the Snowflake account level
Correct Answer: A
Explanation:
Clustering in Snowflake is fundamentally about how data is internally organized and stored across micro-partitions. Snowflake automatically breaks tables into micro-partitions as data is loaded, and each micro-partition holds metadata such as the minimum and maximum values of each column within that partition. This metadata helps Snowflake’s query optimizer determine which micro-partitions can be skipped during query execution, significantly enhancing query performance by avoiding unnecessary data scans.
Option A accurately captures this concept by stating that clustering is the way data is grouped and stored within these micro-partitions. This natural clustering allows Snowflake to efficiently prune partitions when processing queries, which speeds up data retrieval.
Option B is incorrect because it is not mandatory for a database administrator to specify a clustering method for every table. While you can define a clustering key to manually control how data is clustered—especially useful for very large tables with specific query patterns—Snowflake does not require this. By default, tables are naturally clustered based on the order of data insertion, and Snowflake handles clustering automatically.
Option C is false since the COPY command, which loads data into Snowflake tables, does not require or accept a clustering key. Clustering keys are specified separately during table creation or via SQL commands later. COPY simply loads the data, and Snowflake decides how to assign it to micro-partitions.
Option D is also incorrect. Clustering is a fundamental part of Snowflake’s architecture and cannot be disabled globally. You can choose not to specify a clustering key, in which case Snowflake relies on natural clustering, or you can disable automatic reclustering at the table level. However, there is no account-wide setting to disable clustering altogether.
In summary, clustering is an internal data organization method in Snowflake that optimizes query efficiency by grouping data into micro-partitions and enabling intelligent pruning during queries. This process is mostly automatic but can be manually fine-tuned for specific performance needs.
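Two commands are useful for observing and tuning this behavior; the table and column names below are hypothetical:

```sql
-- Inspect how well a table is clustered on the given column(s)
SELECT SYSTEM$CLUSTERING_INFORMATION('sales', '(sale_date)');

-- Pause or resume automatic reclustering at the table level
ALTER TABLE sales SUSPEND RECLUSTER;
ALTER TABLE sales RESUME RECLUSTER;
```

SYSTEM$CLUSTERING_INFORMATION returns a JSON summary (e.g., average depth and overlap of micro-partitions), which helps decide whether a clustering key is worth maintaining.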