Which of the following significantly improves the performance of selective point lookup queries on a table?
Clustering
Materialized Views
Zero-copy Cloning
Search Optimization Service
The Search Optimization Service significantly improves the performance of selective point lookup queries on tables by creating and maintaining a persistent data structure called a search access path, which allows some micro-partitions to be skipped when scanning the table.
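As a minimal sketch, search optimization can be enabled on an entire table or targeted at specific columns (the table and column names here are hypothetical):

ALTER TABLE sales ADD SEARCH OPTIMIZATION;
-- Or target only equality lookups on a specific column:
ALTER TABLE sales ADD SEARCH OPTIMIZATION ON EQUALITY(customer_id);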
Which columns are part of the result set of the Snowflake LATERAL FLATTEN command? (Choose two.)
CONTENT
PATH
BYTE_SIZE
INDEX
DATATYPE
The LATERAL FLATTEN command in Snowflake produces a result set that includes several columns, among them PATH and INDEX. PATH indicates the path to the element within the data structure being flattened, and INDEX represents the index of the element if it is an array.
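The full FLATTEN output also includes SEQ, KEY, VALUE, and THIS. A minimal sketch, assuming a hypothetical table CAR_SALES with a VARIANT column SRC:

SELECT f.key, f.path, f.index, f.value
FROM car_sales,
     LATERAL FLATTEN(input => src) f;  -- PATH and INDEX are part of the result set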
How can a row access policy be applied to a table or a view? (Choose two.)
Within the policy DDL
Within the create table or create view DDL
By future APPLY for all objects in a schema
Within a control table
Using the command ALTER
A row access policy can be applied to a table or a view within the CREATE TABLE or CREATE VIEW DDL, using the WITH ROW ACCESS POLICY clause. Additionally, a row access policy can be applied to an existing table or view using the ALTER TABLE or ALTER VIEW command.
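A minimal sketch of both approaches, using hypothetical object and role names:

CREATE ROW ACCESS POLICY region_policy
  AS (region VARCHAR) RETURNS BOOLEAN ->
  CURRENT_ROLE() = 'SALES_ADMIN' OR region = 'EMEA';

-- Apply within the CREATE TABLE DDL:
CREATE TABLE orders (id INT, region VARCHAR)
  WITH ROW ACCESS POLICY region_policy ON (region);

-- Or apply to an existing object using ALTER:
ALTER TABLE orders ADD ROW ACCESS POLICY region_policy ON (region);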
How should a virtual warehouse be configured if a user wants to ensure that additional multi-clusters are resumed with no delay?
Configure the warehouse to a size larger than generally required
Set the minimum and maximum clusters to autoscale
Use the standard warehouse scaling policy
Use the economy warehouse scaling policy
To ensure that additional clusters in a multi-cluster warehouse are resumed with no delay, the warehouse should use the standard scaling policy. With the standard policy, Snowflake starts additional clusters immediately when queuing is detected, whereas the economy policy waits until there is enough load to keep a new cluster busy, trading responsiveness for credit savings.
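A minimal multi-cluster warehouse definition illustrating the scaling policy (the name and sizes are illustrative; multi-cluster warehouses require Enterprise Edition or higher):

CREATE WAREHOUSE reporting_wh
  WAREHOUSE_SIZE = 'MEDIUM'
  MIN_CLUSTER_COUNT = 1
  MAX_CLUSTER_COUNT = 4
  SCALING_POLICY = 'STANDARD';  -- additional clusters start as soon as queuing is detected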
Which of the following statements apply to Snowflake in terms of security? (Choose two.)
Snowflake leverages a Role-Based Access Control (RBAC) model.
Snowflake requires a user to configure an IAM user to connect to the database.
All data in Snowflake is encrypted.
Snowflake can run within a user's own Virtual Private Cloud (VPC).
All data in Snowflake is compressed.
Snowflake uses a Role-Based Access Control (RBAC) model to manage access to data and resources. Additionally, Snowflake ensures that all data is encrypted, both at rest and in transit, to provide a high level of security for data stored within the platform. References: [COF-C02] SnowPro Core Certification Exam Study Guide
Which Snowflake feature allows a user to substitute a randomly generated identifier for sensitive data, in order to prevent unauthorized users access to the data, before loading it into Snowflake?
External Tokenization
External Tables
Materialized Views
User-Defined Table Functions (UDTF)
The feature in Snowflake that allows a user to substitute a randomly generated identifier for sensitive data before loading it into Snowflake is known as External Tokenization. This process helps to secure sensitive data by ensuring that it is not exposed in its original form, thus preventing unauthorized access3.
Which Snowflake architectural layer is responsible for a query execution plan?
Compute
Data storage
Cloud services
Cloud provider
In Snowflake’s architecture, the Cloud Services layer is responsible for generating the query execution plan. This layer handles all the coordination, optimization, and management tasks, including query parsing, optimization, and compilation into an execution plan that can be processed by the Compute layer.
Which of the following objects are contained within a schema? (Choose two.)
Role
Stream
Warehouse
External table
User
Share
In Snowflake, a schema is a logical grouping of database objects, which can include streams and external tables. A stream is an object that allows users to query data that has changed in specified tables or views, and an external table is a table that references data stored outside of Snowflake. Roles, warehouses, users, and shares are not contained within a schema. References: SHOW OBJECTS, Database, Schema, & Share DDL
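As a brief illustration, a stream is created inside a schema like any other schema-level object (names here are hypothetical):

CREATE STREAM orders_stream ON TABLE mydb.sales.orders;  -- tracks changes to the table
SHOW STREAMS IN SCHEMA mydb.sales;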
Which command should be used to load data from a file, located in an external stage, into a table in Snowflake?
INSERT
PUT
GET
COPY
The COPY command is used in Snowflake to load data from files located in an external stage into a table. This command allows for efficient and parallelized data loading from various file formats.
References: [COF-C02] SnowPro Core Certification Exam Study Guide, Snowflake Documentation
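A minimal sketch of such a load, assuming a hypothetical external stage my_ext_stage and target table my_table:

COPY INTO my_table
FROM @my_ext_stage/data/
FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1);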
A company needs to allow some users to see Personally Identifiable Information (PII) while limiting other users from seeing the full value of the PII.
Which Snowflake feature will support this?
Row access policies
Data masking policies
Data encryption
Role based access control
Data masking policies in Snowflake allow for the obfuscation of specific data within a field, enabling some users to see the full data while limiting others. This feature is particularly useful for handling PII, ensuring that sensitive information is only visible to authorized users.
What affects whether the query results cache can be used?
If the query contains a deterministic function
If the virtual warehouse has been suspended
If the referenced data in the table has changed
If multiple users are using the same virtual warehouse
The query results cache can be used as long as the data in the table has not changed since the last time the query was run. If the underlying data has changed, Snowflake will not use the cached results and will re-execute the query.
A table needs to be loaded. The input data is in JSON format and is a concatenation of multiple JSON documents. The file size is 3 GB. A warehouse size small is being used. The following COPY INTO command was executed:
COPY INTO SAMPLE FROM @~/SAMPLE.JSON (TYPE=JSON)
The load failed with this error:
Max LOB size (16777216) exceeded, actual size of parsed column is 17894470.
How can this issue be resolved?
Compress the file and load the compressed file.
Split the file into multiple files in the recommended size range (100 MB - 250 MB).
Use a larger-sized warehouse.
Set STRIP_OUTER_ARRAY=TRUE in the COPY INTO command.
The error “Max LOB size (16777216) exceeded” indicates that the size of the parsed column exceeds the maximum size allowed for a single column value in Snowflake, which is 16 MB. To resolve this issue, the file should be split into multiple smaller files that are within the recommended size range of 100 MB to 250 MB. This will ensure that each JSON document within the files is smaller than the maximum LOB size allowed. Compressing the file, using a larger-sized warehouse, or setting STRIP_OUTER_ARRAY=TRUE will not resolve the issue of the column size exceeding the maximum allowed. References: COPY INTO Error during Structured Data Load: “Max LOB size (16777216) exceeded…”
What is the default file size when unloading data from Snowflake using the COPY command?
5 MB
8 GB
16 MB
32 MB
When unloading data from Snowflake, the COPY INTO <location> command writes output files of approximately 16 MB by default. The file size is controlled by the MAX_FILE_SIZE copy option, whose default value is 16777216 bytes (16 MB).
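A sketch of an unload that sets the option explicitly (the stage and table names are hypothetical):

COPY INTO @my_stage/unload/
FROM my_table
FILE_FORMAT = (TYPE = 'CSV')
MAX_FILE_SIZE = 16777216;  -- 16 MB, which is also the default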
Which snowflake objects will incur both storage and cloud compute charges? (Select TWO)
Materialized view
Sequence
Secure view
Transient table
Clustered table
In Snowflake, materialized views and transient tables incur both storage and compute charges. A materialized view consumes storage for its precomputed results and compute credits for the automatic background maintenance that keeps those results up to date. A transient table consumes storage until it is dropped and compute whenever it is loaded or queried. References: [COF-C02] SnowPro Core Certification Exam Study Guide
Which command can be used to load data files into a Snowflake stage?
JOIN
COPY INTO
PUT
GET
The PUT command is used to load data files into a Snowflake stage. This command uploads data files from a local file system to a specified stage in Snowflake.
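For example, uploading a local file to a named internal stage from a client such as SnowSQL (the paths and stage name are hypothetical):

PUT file:///tmp/data/orders.csv @my_int_stage AUTO_COMPRESS = TRUE;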
Which command should be used to download files from a Snowflake stage to a local folder on a client's machine?
PUT
GET
COPY
SELECT
The GET command is used to download files from a Snowflake stage to a local folder on a client’s machine.
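For example (the stage, file, and local path are hypothetical):

GET @my_int_stage/orders.csv.gz file:///tmp/downloads/;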
What type of query benefits the MOST from search optimization?
A query that uses only disjunction (i.e., OR) predicates
A query that includes analytical expressions
A query that uses equality predicates or predicates that use IN
A query that filters on semi-structured data types
Search optimization in Snowflake is designed to improve the performance of queries that are selective and involve point lookup operations using equality and IN predicates. It is particularly beneficial for queries that access columns with a high number of distinct values.
References: [COF-C02] SnowPro Core Certification Exam Study Guide, Snowflake Documentation
What happens to historical data when the retention period for an object ends?
The data is cloned into a historical object.
The data moves to Fail-safe
Time Travel on the historical data is dropped.
The object containing the historical data is dropped.
When the retention period for an object ends in Snowflake, the historical data moves into Fail-safe. Once in Fail-safe, the data can no longer be queried through Time Travel; it is retained for a fixed period solely for disaster recovery by Snowflake.
Which Snowflake role can manage any object grant globally, including modifying and revoking grants?
USERADMIN
ORGADMIN
SYSADMIN
SECURITYADMIN
The SECURITYADMIN role in Snowflake can manage any object grant globally, including modifying and revoking grants. This role has the necessary privileges to oversee and control access to all securable objects within the Snowflake environment.
In a Snowflake role hierarchy, what is the top-level role?
SYSADMIN
ORGADMIN
ACCOUNTADMIN
SECURITYADMIN
In a Snowflake role hierarchy, the top-level role is ACCOUNTADMIN. This role has the highest level of privileges and is capable of performing all administrative functions within the Snowflake account.
What function can be used with the recursive argument to return a list of distinct key names in all nested elements in an object?
FLATTEN
GET_PATH
CHECK_JSON
PARSE_JSON
The FLATTEN function can be used with the recursive argument to return a list of distinct key names in all nested elements within an object. This function is particularly useful for working with semi-structured data in Snowflake.
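A minimal sketch that lists the distinct key names in a VARIANT column SRC (the table name is hypothetical):

SELECT DISTINCT f.key
FROM car_sales,
     LATERAL FLATTEN(input => src, RECURSIVE => TRUE) f
WHERE f.key IS NOT NULL;  -- array elements have a NULL key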
Which Snowflake feature allows a user to track sensitive data for compliance, discovery, protection, and resource usage?
Tags
Comments
Internal tokenization
Row access policies
Tags in Snowflake allow users to track sensitive data for compliance, discovery, protection, and resource usage. They enable the categorization and tracking of data, supporting compliance with privacy regulations. References: [COF-C02] SnowPro Core Certification Exam Study Guide
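A brief sketch of creating a tag and attaching it to a column (the names and value are hypothetical):

CREATE TAG pii_type COMMENT = 'Classifies columns containing PII';
ALTER TABLE customers MODIFY COLUMN email SET TAG pii_type = 'EMAIL';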
What does a masking policy consist of in Snowflake?
A single data type, with one or more conditions, and one or more masking functions
A single data type, with only one condition, and only one masking function
Multiple data types, with only one condition, and one or more masking functions
Multiple data types, with one or more conditions, and one or more masking functions
A masking policy in Snowflake consists of a single data type, with one or more conditions, and one or more masking functions. These components define how the data is masked based on the specified conditions.
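A minimal sketch showing all three components, one data type (STRING), a condition, and a masking function (the role and table names are hypothetical):

CREATE MASKING POLICY email_mask AS (val STRING) RETURNS STRING ->
  CASE
    WHEN CURRENT_ROLE() = 'SUPPORT_ADMIN' THEN val   -- condition: full value for this role
    ELSE REGEXP_REPLACE(val, '.+@', '*****@')        -- masking function for everyone else
  END;

ALTER TABLE customers MODIFY COLUMN email SET MASKING POLICY email_mask;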
The following JSON is stored in a VARIANT column called src of the CAR_SALES table:
A user needs to extract the dealership information from the JSON.
How can this be accomplished?
select src:dealership from car_sales;
select src.dealership from car_sales;
select src:Dealership from car_sales;
select dealership from car_sales;
In Snowflake, to extract a specific element from a JSON stored in a VARIANT column, the correct syntax is to use the dot notation. Therefore, the query select src.dealership from car_sales; will return the dealership information contained within each JSON object in the src column.
References: For a detailed explanation, please refer to the Snowflake documentation on querying semi-structured data.
Which Snowflake layer is always leveraged when accessing a query from the result cache?
Metadata
Data Storage
Compute
Cloud Services
The Cloud Services layer in Snowflake is responsible for managing the result cache. When a query is executed, the results are stored in this cache, and subsequent identical queries can leverage these cached results without re-executing the entire query.
The Snowflake Search Optimization Service supports improved performance of which kind of query?
Queries against large tables where frequent DML occurs
Queries against tables larger than 1 TB
Selective point lookup queries
Queries against a subset of columns in a table
The Snowflake Search Optimization Service is designed to support improved performance for selective point lookup queries. These are queries that retrieve specific records from a database, often based on a unique identifier or a small set of criteria.
What is the purpose of the Snowflake SPLIT_TO_TABLE function?
To count the number of characters in a string
To split a string into an array of sub-strings
To split a string and flatten the results into rows
To split a string and flatten the results into columns
The purpose of the Snowflake SPLIT_TO_TABLE function is to split a string based on a specified delimiter and flatten the results into rows. This table function is useful for transforming a delimited string into a set of rows that can be further processed or queried.
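For example:

SELECT t.value
FROM TABLE(SPLIT_TO_TABLE('a,b,c', ',')) t;  -- returns one row per sub-string: a, b, c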
Which commands are restricted in owner's rights stored procedures? (Select TWO).
SHOW
MERGE
INSERT
DELETE
DESCRIBE
In owner’s rights stored procedures, certain commands are restricted to maintain security and integrity. The SHOW and DESCRIBE commands are limited because they can reveal metadata and structure information that may not be intended for all roles.
If a size Small virtual warehouse is made up of two servers, how many servers make up a Large warehouse?
4
8
16
32
In Snowflake, each size increase in virtual warehouses doubles the number of servers. Therefore, if a size Small virtual warehouse is made up of two servers, a Large warehouse, which is two sizes larger, would be made up of eight servers (2 servers for Small, 4 for Medium, and 8 for Large).
Size specifies the amount of compute resources available per cluster in a warehouse. Snowflake supports the following warehouse sizes:
https://docs.snowflake.com/en/user-guide/warehouses-overview.html
When floating-point number columns are unloaded to CSV or JSON files, Snowflake truncates the values to approximately what?
(12,2)
(10,4)
(14,8)
(15,9)
When unloading floating-point number columns to CSV or JSON files, Snowflake truncates the values to approximately 15 significant digits with 9 digits following the decimal point, which can be represented as (15,9). This ensures a balance between accuracy and efficiency in representing floating-point numbers in text-based formats, which is essential for data interchange and processing applications that consume these files.
What is it called when a customer managed key is combined with a Snowflake managed key to create a composite key for encryption?
Hierarchical key model
Client-side encryption
Tri-secret secure encryption
Key pair authentication
Tri-secret secure encryption is a security model employed by Snowflake that involves combining a customer-managed key with a Snowflake-managed key to create a composite key for encrypting data. This model enhances data security by requiring both the customer-managed key and the Snowflake-managed key to decrypt data, thus ensuring that neither party can access the data independently. It represents a balanced approach to key management, leveraging both customer control and Snowflake's managed services for robust data encryption.
What is the Fail-safe period for a transient table in the Snowflake Enterprise edition and higher?
0 days
1 day
7 days
14 days
The Fail-safe period for a transient table in Snowflake, regardless of the edition (including Enterprise edition and higher), is 0 days. Fail-safe is a data protection feature that provides additional retention beyond the Time Travel period for recovering data in case of accidental deletion or corruption. However, transient tables are designed for temporary or short-term use and do not benefit from the Fail-safe feature, meaning that once their Time Travel period expires, data cannot be recovered.
When using the ALLOW_CLIENT_MFA_CACHING parameter, how long is a cached Multi-Factor Authentication (MFA) token valid for?
1 hour
2 hours
4 hours
8 hours
A cached MFA token is valid for up to four hours. https://docs.snowflake.com/en/user-guide/security-mfa#using-mfa-token-caching-to-minimize-the-number-of-prompts-during-authentication-optional
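The parameter is set at the account level, for example:

ALTER ACCOUNT SET ALLOW_CLIENT_MFA_CACHING = TRUE;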
How are network policies defined in Snowflake?
They are a set of rules that define the network routes within Snowflake.
They are a set of rules that dictate how Snowflake accounts can be used between multiple users.
They are a set of rules that define how data can be transferred between different Snowflake accounts within an organization.
They are a set of rules that control access to Snowflake accounts by specifying the IP addresses or ranges of IP addresses that are allowed to connect to Snowflake.
Network policies in Snowflake are defined as a set of rules that manage the network-level access to Snowflake accounts. These rules specify which IP addresses or IP ranges are permitted to connect to Snowflake, enhancing the security of Snowflake accounts by preventing unauthorized access. Network policies are an essential aspect of Snowflake's security model, allowing administrators to enforce access controls based on network locations.
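A minimal sketch of creating and activating a network policy (the policy name and IP ranges are illustrative):

CREATE NETWORK POLICY corp_policy
  ALLOWED_IP_LIST = ('192.168.1.0/24')
  BLOCKED_IP_LIST = ('192.168.1.99');

ALTER ACCOUNT SET NETWORK_POLICY = corp_policy;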
What will happen if a Snowflake user increases the size of a suspended virtual warehouse?
The provisioning of new compute resources for the warehouse will begin immediately.
The warehouse will remain suspended but new resources will be added to the query acceleration service.
The provisioning of additional compute resources will be in effect when the warehouse is next resumed.
The warehouse will resume immediately and start to share the compute load with other running virtual warehouses.
When a Snowflake user increases the size of a suspended virtual warehouse, the changes to compute resources are queued but do not take immediate effect. The provisioning of additional compute resources occurs only when the warehouse is resumed. This ensures that resources are allocated efficiently, aligning with Snowflake's commitment to cost-effective and on-demand scalability.
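For example, resizing a suspended warehouse (the name is hypothetical):

ALTER WAREHOUSE my_wh SET WAREHOUSE_SIZE = 'LARGE';  -- takes effect when the warehouse next resumes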
Which function should be used to insert JSON-formatted string data into a VARIANT field?
FLATTEN
CHECK_JSON
PARSE_JSON
TO_VARIANT
To insert JSON formatted string data into a VARIANT field in Snowflake, the correct function to use is PARSE_JSON. The PARSE_JSON function is specifically designed to interpret a JSON formatted string and convert it into a VARIANT type, which is Snowflake's flexible format for handling semi-structured data like JSON, XML, and Avro. This function is essential for loading and querying JSON data within Snowflake, allowing users to store and manage JSON data efficiently while preserving its structure for querying purposes. This function's usage and capabilities are detailed in the Snowflake documentation, providing users with guidance on how to handle semi-structured data effectively within their Snowflake environments.
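A minimal sketch (the table name is hypothetical; note that PARSE_JSON must be used in a SELECT rather than a VALUES clause):

CREATE TABLE events (payload VARIANT);
INSERT INTO events
  SELECT PARSE_JSON('{"type": "click", "page": "/home"}');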
When referring to User-Defined Function (UDF) names in Snowflake, what does the term overloading mean?
There are multiple SQL UDFs with the same names and the same number of arguments.
There are multiple SQL UDFs with the same names and the same number of argument types.
There are multiple SQL UDFs with the same names but with a different number of arguments or argument types.
There are multiple SQL UDFs with different names but the same number of arguments or argument types.
In Snowflake, overloading refers to the creation of multiple User-Defined Functions (UDFs) with the same name but differing in the number or types of their arguments. This feature allows for more flexible function usage, as Snowflake can differentiate between functions based on the context of their invocation, such as the types or the number of arguments passed. Overloading helps to create more adaptable and readable code, as the same function name can be used for similar operations on different types of data.
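A brief sketch of two overloaded SQL UDFs sharing a name (the function names and bodies are illustrative):

CREATE OR REPLACE FUNCTION area(radius FLOAT)
  RETURNS FLOAT
  AS 'PI() * radius * radius';

CREATE OR REPLACE FUNCTION area(length FLOAT, width FLOAT)
  RETURNS FLOAT
  AS 'length * width';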
Which command can be used to load data into an internal stage?
LOAD
COPY
GET
PUT
The PUT command is used to load data into an internal stage in Snowflake. This command uploads data files from a local file system to a named internal stage, making the data available for subsequent loading into a Snowflake table using the COPY INTO command.
Which Snowflake technique can be used to improve the performance of a query?
Clustering
Indexing
Fragmenting
Using INDEX_HINTS
Clustering is a technique used in Snowflake to improve the performance of queries. It involves organizing the data in a table into micro-partitions based on the values of one or more columns. This organization allows Snowflake to efficiently prune non-relevant micro-partitions during a query, which reduces the amount of data scanned and improves query performance.
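For example, defining a clustering key on an existing table (the names are hypothetical):

ALTER TABLE sales CLUSTER BY (sale_date, region);
-- Clustering quality can then be inspected with:
SELECT SYSTEM$CLUSTERING_INFORMATION('sales', '(sale_date, region)');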
A marketing co-worker has requested the ability to change a warehouse size on their medium virtual warehouse called MKTG_WH.
Which of the following statements will accommodate this request?
ALLOW RESIZE ON WAREHOUSE MKTG_WH TO USER MKTG_LEAD;
GRANT MODIFY ON WAREHOUSE MKTG_WH TO ROLE MARKETING;
GRANT MODIFY ON WAREHOUSE MKTG_WH TO USER MKTG_LEAD;
GRANT OPERATE ON WAREHOUSE MKTG_WH TO ROLE MARKETING;
The statement that will accommodate this request is GRANT MODIFY ON WAREHOUSE MKTG_WH TO ROLE MARKETING. In Snowflake, privileges are granted to roles rather than directly to users, and the MODIFY privilege allows a role to change warehouse properties, including its size.
What happens when a virtual warehouse is resized?
When increasing the size of an active warehouse the compute resource for all running and queued queries on the warehouse are affected
When reducing the size of a warehouse the compute resources are removed only when they are no longer being used to execute any current statements.
The warehouse will be suspended while the new compute resource is provisioned and will resume automatically once provisioning is complete.
Users who are trying to use the warehouse will receive an error message until the resizing is complete
When a virtual warehouse in Snowflake is resized, specifically when it is increased in size, the additional compute resources become immediately available to all running and queued queries. This means that the performance of these queries can improve due to the increased resources. Conversely, when the size of a warehouse is reduced, the compute resources are not removed until they are no longer being used by any current operations.
Which of the following Snowflake capabilities are available in all Snowflake editions? (Select TWO)
Customer-managed encryption keys through Tri-Secret Secure
Automatic encryption of all data
Up to 90 days of data recovery through Time Travel
Object-level access control
Column-level security to apply data masking policies to tables and views
In all Snowflake editions, two key capabilities are universally available: automatic encryption of all data and object-level access control. Customer-managed keys through Tri-Secret Secure, extended Time Travel of up to 90 days, and column-level security with data masking policies all require higher editions.
These features are part of Snowflake’s commitment to security and governance, and they are included in every edition of the Snowflake Data Cloud.
Which cache type is used to cache data output from SQL queries?
Metadata cache
Result cache
Remote cache
Local file cache
The Result cache is used in Snowflake to cache the data output from SQL queries. This feature is designed to improve performance by storing the results of queries for a period of time. When the same or similar query is executed again, Snowflake can retrieve the result from this cache instead of re-computing the result, which saves time and computational resources.
True or False: A 4X-Large Warehouse may, at times, take longer to provision than an X-Small Warehouse.
True
False
Provisioning time can vary based on the size of the warehouse. A 4X-Large warehouse has many more compute resources and may, at times, take longer to provision than an X-Small warehouse, which can generally be provisioned almost immediately.
What is a responsibility of Snowflake's virtual warehouses?
Infrastructure management
Metadata management
Query execution
Query parsing and optimization
Management of the storage layer
The primary responsibility of Snowflake’s virtual warehouses is to execute queries. Virtual warehouses are one of the key components of Snowflake’s architecture, providing the compute power required to perform data processing tasks such as running SQL queries, performing joins, aggregations, and other data manipulations.
Where would a Snowflake user find information about query activity from 90 days ago?
ACCOUNT_USAGE.QUERY_HISTORY view
ACCOUNT_USAGE.QUERY_HISTORY_ARCHIVE view
INFORMATION_SCHEMA.QUERY_HISTORY view
INFORMATION_SCHEMA.QUERY_HISTORY_BY_SESSION view
To find information about query activity from 90 days ago, a Snowflake user should query the ACCOUNT_USAGE.QUERY_HISTORY view in the shared SNOWFLAKE database. This view retains query history for 365 days, well beyond the 7-day window covered by the INFORMATION_SCHEMA.QUERY_HISTORY table function, so it can be used to analyze and audit query activity from 90 days ago.
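For example, a sketch of a query for activity roughly 90 days old:

SELECT query_id, user_name, query_text, start_time
FROM snowflake.account_usage.query_history
WHERE start_time BETWEEN DATEADD(day, -91, CURRENT_TIMESTAMP())
                     AND DATEADD(day, -89, CURRENT_TIMESTAMP());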
What happens when a cloned table is replicated to a secondary database? (Select TWO)
A read-only copy of the cloned tables is stored.
The replication will not be successful.
The physical data is replicated
Additional costs for storage are charged to a secondary account
Metadata pointers to cloned tables are replicated
When a cloned table is replicated to a secondary database in Snowflake, the physical data is replicated. Although a clone initially shares micro-partitions with its source table, replication materializes it as a full, independent copy in the secondary account, so additional storage costs are charged to that account.
It’s important to note that while the physical data is replicated, the secondary database is read-only and cannot be used for write operations.
Which of the following are valid methods for authenticating users for access into Snowflake? (Select THREE)
SCIM
Federated authentication
TLS 1.2
Key-pair authentication
OAuth
OCSP authentication
Snowflake supports several methods for authenticating users, including federated authentication, key-pair authentication, and OAuth. Federated authentication allows users to authenticate using their organization’s identity provider. Key-pair authentication uses a public-private key pair for secure login, and OAuth is an open standard for access delegation commonly used for token-based authentication. References: Authentication policies | Snowflake Documentation, Authenticating to the server | Snowflake Documentation, External API authentication and secrets | Snowflake Documentation.
Which of the following commands cannot be used within a reader account?
CREATE SHARE
ALTER WAREHOUSE
DROP ROLE
SHOW SCHEMAS
DESCRIBE TABLE
In Snowflake, a reader account is a type of account that is intended for consuming shared data rather than performing any data management or DDL operations. The CREATE SHARE command is used to share data from your account with another account, which is not a capability provided to reader accounts. Reader accounts are typically restricted from creating shares, as their primary purpose is to read shared data rather than to share it themselves.
What is a best practice after creating a custom role?
Create the custom role using the SYSADMIN role.
Assign the custom role to the SYSADMIN role
Assign the custom role to the PUBLIC role
Add _CUSTOM to all custom role names
Assigning the custom role to the SYSADMIN role is considered a best practice because it allows the SYSADMIN role to manage objects created by the custom role. This is important for maintaining proper access control and ensuring that the SYSADMIN can perform necessary administrative tasks on objects created by users with the custom role.
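For example (the custom role name is hypothetical):

USE ROLE USERADMIN;
CREATE ROLE analyst;
GRANT ROLE analyst TO ROLE SYSADMIN;  -- places the custom role in the SYSADMIN hierarchy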
What happens when an external or an internal stage is dropped? (Select TWO).
When dropping an external stage, the files are not removed and only the stage is dropped
When dropping an external stage, both the stage and the files within the stage are removed
When dropping an internal stage, the files are deleted with the stage and the files are recoverable
When dropping an internal stage, the files are deleted with the stage and the files are not recoverable
When dropping an internal stage, only selected files are deleted with the stage and are not recoverable
When an external stage is dropped in Snowflake, the reference to the external storage location is removed, but the actual files within the external storage (like Amazon S3, Google Cloud Storage, or Microsoft Azure) are not deleted. This means that the data remains intact in the external storage location, and only the stage object in Snowflake is removed.
On the other hand, when an internal stage is dropped, any files that were uploaded to the stage are deleted along with the stage itself. These files are not recoverable once the internal stage is dropped, as they are permanently removed from Snowflake’s storage.
Which of the following are best practice recommendations that should be considered when loading data into Snowflake? (Select TWO).
Load files that are approximately 25 MB or smaller.
Remove all dates and timestamps.
Load files that are approximately 100-250 MB (or larger)
Avoid using embedded characters such as commas for numeric data types
Remove semi-structured data types
When loading data into Snowflake, it is recommended to load files that are approximately 100-250 MB (or larger) compressed, which allows the load to be parallelized efficiently across compute resources, and to avoid embedded characters such as commas in numeric data types, which can cause parsing errors during the load.
These best practices are designed to optimize the data loading process, ensuring that data is loaded quickly and accurately into Snowflake.
In the query profiler view for a query, which components represent areas that can be used to help optimize query performance? (Select TWO)
Bytes scanned
Bytes sent over the network
Number of partitions scanned
Percentage scanned from cache
External bytes scanned
In the query profiler view, the components that represent areas that can be used to help optimize query performance include ‘Bytes scanned’ and ‘Number of partitions scanned’. ‘Bytes scanned’ indicates the total amount of data the query had to read and is a direct indicator of the query’s efficiency. Reducing the bytes scanned can lead to lower data transfer costs and faster query execution. ‘Number of partitions scanned’ reflects how well the data is clustered; fewer partitions scanned typically means better performance because the system can skip irrelevant data more effectively.
A developer is granted ownership of a table that has a masking policy. The developer's role is not able to see the masked data. Will the developer be able to modify the table to read the masked data?
Yes, because a table owner has full control and can unset masking policies.
Yes, because masking policies only apply to cloned tables.
No, because masking policies must always reference specific access roles.
No, because ownership of a table does not include the ability to change masking policies
Even if a developer is granted ownership of a table with a masking policy, they will not be able to modify the table to read the masked data if their role does not have the necessary permissions. Ownership of a table does not automatically confer the ability to alter masking policies, which are designed to protect sensitive data. Masking policies are applied at the schema level and require specific privileges to modify.
A company strongly encourages all Snowflake users to self-enroll in Snowflake's default Multi-Factor Authentication (MFA) service to provide increased login security for users connecting to Snowflake.
Which application will the Snowflake users need to install on their devices in order to connect with MFA?
Okta Verify
Duo Mobile
Microsoft Authenticator
Google Authenticator
Snowflake’s default Multi-Factor Authentication (MFA) service is powered by Duo Security. Users are required to install the Duo Mobile application on their devices to use MFA for increased login security when connecting to Snowflake. This service is managed entirely by Snowflake, and users do not need to sign up separately with Duo.
Which ACCOUNT_USAGE views are used to evaluate the details of dynamic data masking? (Select TWO)
ROLES
POLICY_REFERENCES
QUERY_HISTORY
RESOURCE_MONITORS
ACCESS_HISTORY
To evaluate the details of dynamic data masking, the POLICY_REFERENCES and ACCESS_HISTORY views in the account_usage schema are used. The POLICY_REFERENCES view provides information about the objects to which a masking policy is applied, and the ACCESS_HISTORY view contains details about access to the masked data, which can be used to audit and verify the application of dynamic data masking policies.
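For example, a sketch of checking where masking policies are applied:

SELECT policy_name, ref_entity_name, ref_column_name
FROM snowflake.account_usage.policy_references
WHERE policy_kind = 'MASKING_POLICY';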
When using the ALLOW_CLIENT_MFA_CACHING parameter, how long is a cached Multi-Factor Authentication (MFA) token valid for?
1 hour
2 hours
4 hours
8 hours
When using the ALLOW_CLIENT_MFA_CACHING parameter, a cached Multi-Factor Authentication (MFA) token is valid for up to 4 hours. This allows for continuous, secure connectivity without users needing to respond to an MFA prompt at the start of each connection attempt to Snowflake within this timeframe.
At what level is the MIN_DATA_RETENTION_TIME_IN_DAYS parameter set?
Account
Database
Schema
Table
The MIN_DATA_RETENTION_TIME_IN_DAYS parameter is set at the account level. This parameter determines the minimum number of days Snowflake retains historical data for Time Travel operations.
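For example:

ALTER ACCOUNT SET MIN_DATA_RETENTION_TIME_IN_DAYS = 7;  -- enforces a minimum retention account-wide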
A Snowflake account has activated federated authentication.
What will occur when a user with a password that was defined by Snowflake attempts to log in to Snowflake?
The user will be unable to enter a password.
The user will encounter an error, and will not be able to log in.
The user will be able to log into Snowflake successfully.
After entering the username and password, the user will be redirected to an Identity Provider (IdP) login page.
Enabling federated authentication does not, by itself, disable Snowflake’s native authentication. A user whose password was defined in Snowflake can therefore still log in to Snowflake successfully with that username and password, while other users authenticate through the external identity provider (IdP).
What value provides information about disk usage for operations where intermediate results do not fit in memory in a Query Profile?
IO
Network
Pruning
Spilling
In Snowflake, when a query execution requires more memory than what is available, Snowflake handles these situations by spilling the intermediate results to disk. This process is known as "spilling." The Query Profile in Snowflake includes a metric that helps users identify when and how much data spilling occurs during the execution of a query. This information is crucial for optimizing queries as excessive spilling can significantly slow down query performance. The value that provides this information about disk usage due to intermediate results not fitting in memory is appropriately labeled as "Spilling" in the Query Profile.
A user wants to access files stored in a stage without authenticating into Snowflake. Which type of URL should be used?
File URL
Staged URL
Scoped URL
Pre-signed URL
A Pre-signed URL should be used to access files stored in a Snowflake stage without requiring authentication into Snowflake. Pre-signed URLs are simple HTTPS URLs that provide temporary access to a file via a web browser, using a pre-signed access token. The expiration time for the access token is configurable, and this type of URL allows users or applications to directly access or download the files without needing to authenticate into Snowflake.
References: [COF-C02] SnowPro Core Certification Exam Study Guide
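A pre-signed URL can be generated with the GET_PRESIGNED_URL function, for example (the stage and file path are hypothetical):

SELECT GET_PRESIGNED_URL(@my_stage, 'reports/summary.pdf', 3600);  -- token valid for 3600 seconds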
Which data types can be used in Snowflake to store semi-structured data? (Select TWO)
ARRAY
BLOB
CLOB
JSON
VARIANT
Snowflake supports the storage of semi-structured data using the ARRAY and VARIANT data types. The ARRAY data type can directly contain VARIANT, and thus indirectly contain any other data type, including itself. The VARIANT data type can store a value of any other type, including OBJECT and ARRAY, and is often used to represent semi-structured data formats like JSON, Avro, ORC, Parquet, or XML.
References: [COF-C02] SnowPro Core Certification Exam Study Guide
Which Snowflake function will parse a JSON-null into a SQL-null?
TO_CHAR
TO_VARIANT
TO_VARCHAR
STRIP_NULL_VALUE
The STRIP_NULL_VALUE function in Snowflake is used to convert a JSON null value into a SQL NULL value.
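For example:

SELECT STRIP_NULL_VALUE(PARSE_JSON('{"a": null, "b": 1}'):a);  -- returns SQL NULL instead of a JSON null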
Who can grant object privileges in a regular schema?
Object owner
Schema owner
Database owner
SYSADMIN
In a regular schema within Snowflake, the object owner has the privilege to grant object privileges. The object owner is typically the role that created the object or to whom ownership of the object has been transferred.
References: [COF-C02] SnowPro Core Certification Exam Study Guide
Which Snowflake function is maintained separately from the data and helps to support features such as Time Travel, Secure Data Sharing, and pruning?
Column compression
Data clustering
Micro-partitioning
Metadata management
Metadata management is maintained separately from the data and supports features such as Time Travel, Secure Data Sharing, and pruning. Snowflake’s cloud services layer keeps extensive metadata about table versions and the contents of each micro-partition (such as the range of values it holds), which allows these features to operate without scanning or copying the underlying data.
What is the primary purpose of a directory table in Snowflake?
To store actual data from external stages
To automatically expire file URLs for security
To manage user privileges and access control
To store file-level metadata about data files in a stage
A directory table in Snowflake is used to store file-level metadata about the data files in a stage. It is conceptually similar to an external table and provides information such as file size, last modified timestamp, and file URL. References: [COF-C02] SnowPro Core Certification Exam Study Guide
Which Snowflake feature provides increased login security for users connecting to Snowflake that is powered by Duo Security service?
OAuth
Network policies
Single Sign-On (SSO)
Multi-Factor Authentication (MFA)
Multi-Factor Authentication (MFA) provides increased login security for users connecting to Snowflake. Snowflake’s MFA is powered by Duo Security service, which adds an additional layer of security during the login process.
What is a directory table in Snowflake?
A separate database object that is used to store file-level metadata
An object layered on a stage that is used to store file-level metadata
A database object with grantable privileges for unstructured data tasks
A Snowflake table specifically designed for storing unstructured files
A directory table in Snowflake is an object layered on a stage that is used to store file-level metadata. It is not a separate database object but is conceptually similar to an external table because it stores metadata about the data files in the stage.
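A minimal sketch of enabling and querying a directory table (the stage name is hypothetical):

CREATE STAGE docs_stage DIRECTORY = (ENABLE = TRUE);
SELECT relative_path, size, last_modified, file_url
FROM DIRECTORY(@docs_stage);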
Which statistics are displayed in a Query Profile that indicate that intermediate results do not fit in memory? (Select TWO).
Bytes scanned
Partitions scanned
Bytes spilled to local storage
Bytes spilled to remote storage
Percentage scanned from cache
The Query Profile statistics that indicate intermediate results do not fit in memory are the bytes spilled to local storage and bytes spilled to remote storage.
What tasks can an account administrator perform in the Data Exchange? (Select TWO).
Add and remove members.
Delete data categories.
Approve and deny listing approval requests.
Transfer listing ownership.
Transfer ownership of a provider profile.
An account administrator in the Data Exchange can perform tasks such as adding and removing members and approving or denying listing approval requests. These tasks are part of managing the Data Exchange and ensuring that only authorized listings and members are part of it.
Which statistics can be used to identify queries that have inefficient pruning? (Select TWO).
Bytes scanned
Bytes written to result
Partitions scanned
Partitions total
Percentage scanned from cache
The statistics that can be used to identify queries with inefficient pruning are ‘Partitions scanned’ and ‘Partitions total’. These statistics indicate how much of the data was actually needed and scanned versus the total available, which can highlight inefficiencies in data pruning.
What happens to the objects in a reader account when the DROP MANAGED ACCOUNT command is executed?
The objects are dropped.
The objects enter the Fail-safe period.
The objects enter the Time Travel period.
The objects are immediately moved to the provider account.
When the DROP MANAGED ACCOUNT command is executed in Snowflake, it removes the managed account, including all objects created within the account, and access to the account is immediately restricted.
References: [COF-C02] SnowPro Core Certification Exam Study Guide
Which command is used to unload data from a Snowflake database table into one or more files in a Snowflake stage?
CREATE STAGE
COPY INTO