Which of the following are benefits of using a multi-cluster warehouse in Snowflake? (Choose any 2 options)
Reduced storage costs
Improved concurrency for handling many simultaneous queries
Automatic scaling to meet varying query loads
Faster data loading
A multi-cluster warehouse provides Snowflake’s highest level of concurrency support by allocating multiple compute clusters behind a single warehouse definition. It automatically adds clusters during peak workload to eliminate query queuing and maintain stable performance for dashboards, BI tools, and multi-user environments.
Improved concurrency is the primary benefit: large numbers of simultaneous queries are distributed across clusters.
Automatic scaling (scaling out and in) ensures compute cost efficiency, as Snowflake adds clusters only when needed and removes them when load decreases.
Incorrect options:
Storage costs are unaffected because compute and storage are independent.
Faster data loading may occur if warehouse size increases, but multi-cluster design is driven by concurrency, not ingestion performance.
The design enables smooth operation in high-demand enterprise environments.
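As a sketch of how this is configured (warehouse name and limits are illustrative, not from the question), a multi-cluster warehouse is defined with minimum and maximum cluster counts:

```sql
-- Hypothetical example: name, size, and limits are illustrative.
CREATE WAREHOUSE reporting_wh
  WAREHOUSE_SIZE    = 'MEDIUM'
  MIN_CLUSTER_COUNT = 1
  MAX_CLUSTER_COUNT = 4          -- Snowflake scales out toward this ceiling under load
  SCALING_POLICY    = 'STANDARD' -- favors starting clusters quickly to avoid queuing
  AUTO_SUSPEND      = 300
  AUTO_RESUME       = TRUE;
```

With MIN_CLUSTER_COUNT below MAX_CLUSTER_COUNT, the warehouse runs in auto-scale mode, adding and removing clusters as concurrency rises and falls.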
====================================================
What is a primary function of a view in Snowflake?
Providing a virtual table based on the result set of a query
Executing network configurations
Managing user authentication
Storing raw data
A view in Snowflake represents a virtual table whose contents are defined by a stored SQL query. Views allow users to encapsulate transformation logic, simplify complex joins, enforce column-level security, and provide curated datasets for downstream consumers.
Views do not store data themselves; the underlying tables store the data, and Snowflake executes the view’s query each time the view is referenced. This makes views ideal for abstraction layers, semantic modeling, and separating compute costs across user groups.
Incorrect options:
Snowflake views do not store raw data.
Views have no role in authentication or network configuration.
Views help streamline business logic and control data access efficiently.
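A minimal sketch (table and column names are hypothetical) showing that a view stores only its defining query, not data:

```sql
-- Hypothetical base table and columns; the view holds no data of its own.
CREATE VIEW active_customers AS
SELECT customer_id, name, region
FROM customers
WHERE status = 'ACTIVE';

-- Referencing the view re-executes its defining query against the base table.
SELECT * FROM active_customers;
```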
====================================================
How can users integrate Streamlit with Snowflake notebooks?
By writing HTML code
By creating manual scripts
By configuring network settings
By using Streamlit functions within Python code in the notebook
Snowflake Notebooks provide native integration with Streamlit, allowing users to build interactive data apps directly inside Snowflake. Users can call Streamlit functions such as st.write(), st.dataframe(), st.button(), and st.slider() within Python cells. No external HTML, scripting, or network configuration is required. Streamlit components execute seamlessly in the notebook environment, enabling dashboards and interactive visualizations on top of Snowflake data.
=======================================
What information can be accessed using the Snowsight Monitoring tab?
Virtual warehouse usage metrics
Query execution history
Database Time Travel snapshots
Database schema changes history
The Snowsight Monitoring tab provides a centralized view of virtual warehouse usage metrics, enabling administrators and developers to evaluate how compute resources are being consumed. This includes critical insights such as credit usage, query load, concurrency levels, average queue times, execution durations, and auto-scaling activity (for multi-cluster warehouses). These metrics help determine whether a warehouse is correctly sized, whether concurrency issues are occurring, or whether workloads require scaling up or adding clusters.
Query history is available in a different section—“Activity → Query History”—not under Monitoring. Time Travel snapshots are not visualized within Monitoring; Time Travel is controlled via retention parameters and accessed with SQL (AT/BEFORE clauses). Schema change history is also not part of Monitoring and instead is discoverable through ACCOUNT_USAGE or specific metadata views.
The Monitoring tab exists specifically to help evaluate warehouse performance and resource consumption, enabling optimization of compute spending and better workload management.
==================
Which Snowflake component is responsible for data encryption?
Database storage
Virtual warehouse
Data loading utility
Cloud services
Snowflake’s database storage layer is responsible for encrypting data at rest. All persisted data—whether structured, semi-structured, or unstructured—is encrypted using strong encryption algorithms such as AES-256. This process is automatic and transparent to users, ensuring that files and micro-partitions stored in Snowflake-managed cloud storage are encrypted by default. Data in transit is protected separately through TLS.
Virtual warehouses provide compute resources to execute queries and do not perform storage-level encryption. Data loading utilities (such as COPY INTO and client tools) orchestrate data movement but do not handle at-rest encryption. The Cloud Services layer manages metadata, session management, security policies, and query optimization, but the actual encryption of stored data is part of the storage subsystem.
==================
Which modes are available in the PARSE_DOCUMENT function? (Choose two)
OCR
OMR
CONTENT
LAYOUT
PARSE_DOCUMENT supports two processing modes: OCR and LAYOUT. OCR mode performs Optical Character Recognition, extracting raw text from scanned documents, images within PDFs, or low-quality text-based documents. It is ideal for scenarios like contract ingestion, receipt processing, or older scanned documents. LAYOUT mode extracts structured layout elements (tables, paragraphs, lines, bounding boxes), preserving the original document’s spatial organization. This enables downstream analytical tasks such as table reconstruction or semantic segmentation of document content. OMR (Optical Mark Recognition) is not a supported feature, and “CONTENT” is not a valid mode. By supporting OCR and LAYOUT modes, Snowflake Cortex provides robust document intelligence capabilities directly within the Snowflake environment.
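A hedged sketch of a call (the stage name and file path are hypothetical; the mode argument selects OCR or LAYOUT):

```sql
-- Hypothetical stage and file; 'mode' is either 'OCR' or 'LAYOUT'.
SELECT SNOWFLAKE.CORTEX.PARSE_DOCUMENT(
         @docs_stage,
         'contracts/agreement.pdf',
         {'mode': 'LAYOUT'}
       ) AS parsed;
```

====================================================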
What is the name for a Snowflake account holder who shares data with other Snowflake accounts?
A data provider
A data consumer
A data reader
A data administrator
In Snowflake’s Secure Data Sharing model, the account sharing data is known as the data provider. Providers create shares that include selected objects (tables, views, secure views, etc.) and grant access to consumer accounts.
A data consumer receives the shared data. Data reader and data administrator are not defined terms in Snowflake’s sharing architecture.
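The provider-side workflow can be sketched as follows (all object and account names are hypothetical):

```sql
-- Provider side: create a share, grant objects into it, add a consumer account.
CREATE SHARE sales_share;
GRANT USAGE  ON DATABASE sales_db               TO SHARE sales_share;
GRANT USAGE  ON SCHEMA   sales_db.public        TO SHARE sales_share;
GRANT SELECT ON TABLE    sales_db.public.orders TO SHARE sales_share;
ALTER SHARE sales_share ADD ACCOUNTS = consumer_account;
```

The consumer then creates a database from the share and queries the live data without any copy being made.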
==================
Which of the following are benefits of using a multi-clustered warehouse in Snowflake? (Choose any 3 options)
Improved query performance during peak load
Automatic scaling of clusters based on query demand
Reduced storage costs
Increased compute capacity
A multi-cluster warehouse raises Snowflake’s capability to handle concurrent workloads efficiently. When large numbers of users or queries hit the system, Snowflake can automatically start additional clusters (scaling out) to distribute workloads, reducing queuing and improving performance. As query demand drops, Snowflake scales in by shutting down clusters to conserve credits. This dynamic auto-scaling provides significantly increased compute capacity during peak usage periods and enhances responsiveness. Importantly, multi-cluster warehouses affect only compute, not storage, so they do not reduce storage costs. Their primary benefits revolve around performance stability, concurrency handling, workload isolation, and compute elasticity.
=======================================
Which package is needed for Python worksheet?
NumPy
Snowpark
Matplotlib
Pandas
Snowflake’s Python Worksheets require the Snowpark for Python package. Snowpark integrates deeply with Snowflake’s compute engine, enabling DataFrame operations, UDF creation, ML workflows, and direct execution of Python logic inside Snowflake compute.
While additional libraries like Pandas, NumPy, and Matplotlib may be available in worksheets, they are not required to enable the Python execution environment. Snowpark is the core computational interface that connects Python to Snowflake’s data and compute layers.
It enables:
Distributed data processing
Transformations expressed in Python but executed inside Snowflake
Access to Snowflake tables as DataFrames
Integration with Snowpark ML and Cortex functions
Thus, Snowpark is the essential package for Python worksheet support.
====================================================
Which of the following statements are true about Zero-Copy Cloning in Snowflake? (Select TWO)
It requires significant additional storage.
It is an immediate, metadata-only operation.
It duplicates the underlying data storage.
It creates a writable copy of an object.
Zero-Copy Cloning is a metadata-only operation, creating instantaneous clones without copying underlying data. Clones are fully writable and operate independently. Snowflake only allocates storage when data diverges due to modifications. It does not require significant storage nor duplicate physical data.
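A minimal sketch (table names are hypothetical) of cloning and subsequent divergence:

```sql
-- Hypothetical names; the clone is writable and initially shares the
-- source table's micro-partitions, so creation is near-instant.
CREATE TABLE orders_dev CLONE orders;

-- Modifying the clone allocates new storage only for the changed data.
UPDATE orders_dev SET status = 'TEST' WHERE order_id = 1;
```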
====================================================
What is a key characteristic of a Snowflake virtual warehouse?
It provides compute resources.
It manages account roles.
It permanently stores data.
It encrypts data.
A virtual warehouse is the compute engine of Snowflake. It provides CPU, memory, and temporary storage needed to execute SQL queries, data loading operations, and DML actions. Warehouses can be sized dynamically and suspended or resumed to optimize cost.
Warehouses do not store data; Snowflake’s storage is independent and centralized. Warehouses do not manage roles; access control is handled through Snowflake’s RBAC system. Encryption is performed automatically by Snowflake’s storage and cloud services, not by warehouses.
Thus, the correct characteristic is that virtual warehouses supply compute.
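A minimal sketch (warehouse name and settings are illustrative) of a compute-only warehouse tuned for cost:

```sql
-- Hypothetical warehouse: provides compute only, stores no data.
CREATE WAREHOUSE etl_wh
  WAREHOUSE_SIZE = 'SMALL'
  AUTO_SUSPEND   = 60    -- suspend after 60 seconds idle to save credits
  AUTO_RESUME    = TRUE; -- resume automatically when a query arrives
```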
==================
Which command is used to grant privileges to a role in Snowflake?
ASSIGN
CREATE
REVOKE
GRANT
The GRANT command is used in Snowflake to assign privileges to a role. GRANT allows administrators to authorize roles to perform actions on objects such as tables, warehouses, schemas, or databases. The REVOKE command removes privileges, CREATE creates objects, and ASSIGN is not a valid Snowflake command. Therefore, GRANT is the correct command for privilege assignment.
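A short sketch (role and object names are hypothetical) contrasting GRANT with REVOKE:

```sql
-- Hypothetical role and objects.
GRANT SELECT ON TABLE sales_db.public.orders TO ROLE analyst_role;
GRANT USAGE  ON WAREHOUSE reporting_wh       TO ROLE analyst_role;

-- REVOKE removes a previously granted privilege.
REVOKE SELECT ON TABLE sales_db.public.orders FROM ROLE analyst_role;
```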
=======================================
What is the highest level object in the Snowflake object hierarchy?
Database
Virtual Warehouse
Schema
Account
The Account is the top-level Snowflake container encompassing:
• All databases
• All schemas
• All compute resources (warehouses)
• All roles, users, and governance structures
All other objects exist within the Account context.
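The hierarchy shows up in fully qualified object names, which always resolve within the current account (names below are hypothetical):

```sql
-- database.schema.object: all three levels live inside the Account.
SELECT * FROM sales_db.public.orders;

-- Equivalent, once session context is set:
USE DATABASE sales_db;
USE SCHEMA public;
SELECT * FROM orders;
```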
====================================================
What Snowflake parameter is configured in the Query Processing layer?
The minimum and maximum serverless compute limits
The types of tables available in an account
The sizing of virtual warehouses
The minimum and maximum micro-partition limits
The Query Processing layer of Snowflake is where virtual warehouses operate, so warehouse sizing parameters (X-Small to 6X-Large) fall under this layer. Warehouse size determines compute power, concurrency, and performance behavior for SQL workloads. Administrators configure warehouse size based on workload intensity, response time requirements, and cost considerations.
Serverless compute limits and micro-partition limits belong to storage and services layers. Table types (permanent, transient, temporary) are storage-level configurations, not part of Query Processing.
Thus, warehouse sizing is the correct parameter configured at the Query Processing layer.
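Sizing is adjusted with a simple DDL statement (warehouse name is hypothetical):

```sql
-- Hypothetical warehouse; the new size applies to subsequent query execution.
ALTER WAREHOUSE reporting_wh SET WAREHOUSE_SIZE = 'LARGE';
```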
==================
Which SQL function is used to parse a string as JSON data within a Snowflake query?
CONVERT_JSON()
TO_JSON()
PARSE_JSON()
EXTRACT_JSON()
The PARSE_JSON() function converts a valid JSON string into a VARIANT value. This allows Snowflake to store and query nested, hierarchical data using dot and bracket notation.
Example:
SELECT PARSE_JSON('{"name":"John","age":30}') AS data;
After parsing, fields can be accessed like:
data:name or data['age']
Incorrect options:
TO_JSON() converts VARIANT to a JSON string (opposite direction).
CONVERT_JSON and EXTRACT_JSON are not Snowflake functions.
PARSE_JSON is essential for dynamically loading, transforming, or analyzing JSON content from files, streams, or external applications.
====================================================
Which statement is true about Snowflake Data Exchange? (Choose any 2 options)
It is limited to internal data sharing only
It requires complex ETL processes to transfer data
It supports data sharing between different regions and cloud providers
It allows organizations to securely share live, governed data
Snowflake Data Exchange provides governed, real-time data collaboration between Snowflake accounts. It enables providers to publish live datasets while consumers query that data without copying or moving it. Because Snowflake uses secure data sharing primitives at the metadata layer, no ETL pipelines or data duplication are required.
A key advantage is support for cross-region and cross-cloud sharing, allowing collaboration across AWS, Azure, and GCP regions seamlessly.
Data Exchange listings support controlled visibility, entitlement management, and auditing. Providers maintain full control over updates since consumers always access the live, authoritative version of the dataset.
Incorrect statements:
It is not limited to internal sharing—external sharing is a major feature.
ETL is not required because Snowflake’s architecture exposes shared objects directly.
====================================================
How do you specify a custom delimiter for a CSV file when using COPY INTO
