A retail company has over 3000 stores all using the same Point of Sale (POS) system. The company wants to deliver near real-time sales results to category managers. The stores operate in a variety of time zones and exhibit a dynamic range of transactions each minute, with some stores having higher sales volumes than others.
Sales results are provided in a uniform fashion using data engineered fields that will be calculated in a complex data pipeline. Calculations include exceptions, aggregations, and scoring using external functions interfaced to scoring algorithms. The source data for aggregations has over 100M rows.
Every minute, the POS sends all sales transaction files to a cloud storage location, with a naming convention that includes store numbers and timestamps to identify the set of transactions contained in the files. The files are typically less than 10MB in size.
How can the near real-time results be provided to the category managers? (Select TWO).
Correct Answer: BC
To provide near real-time sales results to category managers, the Architect can use the following steps:
✑ Create an external stage that references the cloud storage location where the POS sends the sales transaction files. The external stage should use file format and encryption settings that match the source files [2].
✑ Create a Snowpipe that loads the files from the external stage into a target table in Snowflake. The pipe should be configured with AUTO_INGEST = TRUE so that it automatically ingests new files as they arrive in the external stage, driven by cloud storage event notifications. Note that Snowpipe does not support the PURGE copy option; already-loaded files should be removed by an external process or a storage lifecycle policy, while Snowpipe's load history prevents duplicate ingestion of the same file [3].
✑ Create a stream on the target table that captures the rows inserted by the Snowpipe. If file-level details are needed downstream, the COPY statement in the pipe definition can populate columns from METADATA$FILENAME so that the store number and timestamp encoded in each file name are available in the table. Note that a stream becomes stale if it is not consumed within the table's data retention period, so the processing cadence must match the real-time analytics needs [4].
✑ Create a task that runs a query on the stream to process the near real-time data. The query should extract the store number and timestamps from the loaded file metadata, and perform the calculations for exceptions, aggregations, and scoring using external functions. The query should write the results to another table or view that the category managers can access. The task should be scheduled at a frequency that matches the real-time analytics needs, such as every minute or every 5 minutes.
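The steps above can be sketched in SQL as follows. All object names, the storage URL, the integration, and the file format are illustrative assumptions, not part of the question:

```sql
-- Hypothetical names throughout; adjust to your environment.
CREATE STAGE pos_stage
  URL = 's3://retail-pos-bucket/transactions/'
  STORAGE_INTEGRATION = pos_s3_int
  FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1);

-- Auto-ingest pipe; METADATA$FILENAME preserves the store/timestamp naming.
CREATE PIPE pos_pipe AUTO_INGEST = TRUE AS
  COPY INTO raw_sales (store_no, txn_ts, amount, source_file)
  FROM (SELECT $1, $2, $3, METADATA$FILENAME FROM @pos_stage);

-- Capture newly loaded rows for incremental processing.
CREATE STREAM raw_sales_stream ON TABLE raw_sales;

-- Process new rows roughly once a minute, only when there is data.
CREATE TASK process_sales
  WAREHOUSE = transform_wh
  SCHEDULE = '1 MINUTE'
WHEN SYSTEM$STREAM_HAS_DATA('RAW_SALES_STREAM')
AS
  INSERT INTO sales_results
  SELECT store_no, DATE_TRUNC('minute', txn_ts), SUM(amount)
  FROM raw_sales_stream
  GROUP BY 1, 2;

ALTER TASK process_sales RESUME;
```

In practice the task body would also call the external scoring functions; the aggregation shown here is only a placeholder for that pipeline logic.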
The other options are not optimal or feasible for providing near real-time results:
✑ All files should be concatenated before ingestion into Snowflake to avoid micro-ingestion. This option is not recommended because it would introduce additional latency and complexity into the data pipeline. Concatenating files would require an external process or service that monitors the cloud storage location and performs the file merging operation. This would delay the ingestion of new files into Snowflake and increase the risk of data loss or corruption. Moreover, concatenating files would not avoid micro-ingestion, as Snowpipe would still ingest each concatenated file as a separate load.
✑ An external scheduler should examine the contents of the cloud storage location and issue SnowSQL commands to process the data at a frequency that matches the real-time analytics needs. This option is not necessary because Snowpipe can automatically ingest new files from the external stage without requiring an external trigger or scheduler. Using an external scheduler would add more overhead and dependencies to the data pipeline, and it would not guarantee near real-time ingestion, as that would depend on the polling interval and the availability of the external scheduler.
✑ The COPY INTO command with a task scheduled to run every second should be used to achieve the near real-time requirement. This option is not feasible because tasks cannot be scheduled to run every second in Snowflake. The minimum task schedule interval is one minute, and even that is not guaranteed, as tasks are subject to scheduling delays and concurrency limits. Moreover, using the COPY INTO command with a task would not leverage the benefits of Snowpipe, such as automatic file detection and serverless, load-balanced ingestion.
References:
✑ 1: SnowPro Advanced: Architect | Study Guide
✑ 2: Snowflake Documentation | Creating Stages
✑ 3: Snowflake Documentation | Loading Data Using Snowpipe
✑ 4: Snowflake Documentation | Using Streams and Tasks for ELT
✑ Snowflake Documentation | Creating Tasks
✑ Snowflake Documentation | Best Practices for Loading Data
✑ Snowflake Documentation | Using the Snowpipe REST API
✑ Snowflake Documentation | Scheduling Tasks
A table contains five columns and it has millions of records. The cardinality distribution of the columns is shown below:
Columns C4 and C5 are mostly used in the GROUP BY and ORDER BY clauses of SELECT queries, whereas columns C1, C2, and C3 are heavily used in the filter and join conditions of SELECT queries.
The Architect must design a clustering key for this table to improve query performance. Based on Snowflake recommendations, how should the clustering key columns be ordered when defining the multi-column clustering key?
Correct Answer: D
According to the Snowflake documentation, the following are some considerations for choosing a clustering key for a table [1]:
✑ Clustering is most effective when the clustering key is used in query predicates, particularly filter (WHERE) and join conditions.
✑ Clustering is less effective when the clustering key is not used in any such predicates, or when a function or expression must be applied to the key in the predicate (e.g. DATE_TRUNC, TO_CHAR, etc.).
✑ For most tables, Snowflake recommends a maximum of 3 or 4 columns (or expressions) per key. Adding more than 3-4 columns tends to increase costs more than benefits.
✑ When defining a multi-column clustering key, Snowflake recommends ordering the columns from lowest cardinality to highest cardinality.
Based on these considerations, the clustering key columns should be ordered as C1, C3, C2, because:
✑ These columns are heavily used in the filter and join conditions of SELECT queries, which are the most effective types of predicates for clustering.
✑ Ordering them from lowest to highest cardinality (per the cardinality chart) follows Snowflake's recommendation and helps co-locate similar rows in the same micro-partitions, improving partition pruning and scan efficiency.
✑ These columns do not require any functions or expressions to be applied to them in the predicates, so the clustering key can be leveraged directly.
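A minimal sketch of defining such a key. The column names come from the question; the table name and types are illustrative assumptions:

```sql
-- Hypothetical table; the clustering key lists the filter/join
-- columns ordered from lowest to highest cardinality.
CREATE TABLE sales_facts (
  c1 NUMBER,
  c2 NUMBER,
  c3 NUMBER,
  c4 NUMBER,
  c5 NUMBER
) CLUSTER BY (c1, c3, c2);

-- Or add the key to an existing table:
ALTER TABLE sales_facts CLUSTER BY (c1, c3, c2);

-- Check clustering quality afterwards:
SELECT SYSTEM$CLUSTERING_INFORMATION('sales_facts', '(c1, c3, c2)');
```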
References: 1: Considerations for Choosing Clustering for a Table | Snowflake Documentation
An Architect needs to grant a group of ORDER_ADMIN users the ability to clean old data in an ORDERS table (deleting all records older than 5 years), without granting any privileges on the table. The group's manager (ORDER_MANAGER) has full DELETE privileges on the table.
How can the ORDER_ADMIN role be enabled to perform this data cleanup, without needing the DELETE privilege held by the ORDER_MANAGER role?
Correct Answer: C
This is the correct answer because it allows the ORDER_ADMIN role to perform the data cleanup without needing the DELETE privilege on the ORDERS table. A stored procedure in Snowflake can run with either the caller's rights or the owner's rights. A caller's rights stored procedure runs with the privileges of the role that called it, while an owner's rights stored procedure runs with the privileges of the role that created it. By creating a stored procedure that runs with owner's rights, the ORDER_MANAGER role can delegate the specific task of deleting old data to the ORDER_ADMIN role, without granting the ORDER_ADMIN role more general privileges on the ORDERS table. The stored procedure must include the appropriate business logic to delete only the records older than 5 years, and the ORDER_MANAGER role must grant the USAGE privilege on the stored procedure to the ORDER_ADMIN role. The ORDER_ADMIN role can then execute the stored procedure to perform the data cleanup [1][2].
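A minimal sketch of this pattern, assuming the ORDERS table has an ORDER_DATE column (the actual schema is not given in the question):

```sql
-- Executed as ORDER_MANAGER (the role holding DELETE on the table).
CREATE OR REPLACE PROCEDURE purge_old_orders()
  RETURNS STRING
  LANGUAGE SQL
  EXECUTE AS OWNER   -- runs with ORDER_MANAGER's privileges
AS
$$
BEGIN
  DELETE FROM orders
  WHERE order_date < DATEADD(year, -5, CURRENT_DATE());
  RETURN 'old orders purged';
END;
$$;

-- ORDER_ADMIN only needs USAGE on the procedure, not DELETE on the table.
GRANT USAGE ON PROCEDURE purge_old_orders() TO ROLE order_admin;

-- ORDER_ADMIN then runs:
CALL purge_old_orders();
```

Because the procedure runs as its owner, the deletion logic is fixed inside it; ORDER_ADMIN cannot use it to delete anything other than records older than 5 years.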
References:
✑ Snowflake Documentation: Stored Procedures
✑ Snowflake Documentation: Understanding Caller's Rights and Owner's Rights Stored Procedures
What does a Snowflake Architect need to consider when implementing a Snowflake Connector for Kafka?
Correct Answer: D
The Snowflake Connector for Kafka is a Kafka Connect sink connector that reads data from one or more Apache Kafka topics and loads the data into Snowflake tables. The connector authenticates to Snowflake using key pair authentication (an encrypted private key is supported); it does not use username/password authentication [1]. The officially supported message formats are JSON and Avro, handled via the corresponding Kafka Connect converters [2]. The default retention time for Kafka topics is not a concern for the connector, as it only consumes the messages that are available in the topics and does not store them in Kafka. By default, the connector creates one table and one pipe to ingest data for each topic, but this behavior can be customized using the snowflake.topic2table.map configuration property [3]. If the connector cannot create the table or the pipe, it logs an error and retries the operation until it succeeds or the connector is stopped [4].
References:
✑ Installing and Configuring the Kafka Connector
✑ Overview of the Kafka Connector
✑ Managing the Kafka Connector
✑ Troubleshooting the Kafka Connector
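The per-topic table and pipe mapping described above can be overridden in the connector configuration. A sketch of a sink configuration, where all names, the account URL, and topic-to-table mappings are illustrative assumptions:

```properties
# Kafka Connect sink configuration for the Snowflake connector
# (names, account, and topics are hypothetical).
name=pos_sales_sink
connector.class=com.snowflake.kafka.connector.SnowflakeSinkConnector
topics=pos_sales,pos_returns
snowflake.url.name=myaccount.snowflakecomputing.com:443
snowflake.user.name=kafka_loader
snowflake.private.key=<encrypted-private-key>
snowflake.private.key.passphrase=<passphrase>
snowflake.database.name=RETAIL
snowflake.schema.name=RAW
# Map topics to explicit tables instead of the one-table-per-topic default:
snowflake.topic2table.map=pos_sales:RAW_SALES,pos_returns:RAW_RETURNS
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=com.snowflake.kafka.connector.records.SnowflakeJsonConverter
```

Without the snowflake.topic2table.map property, the connector derives a table name from each topic name and creates one table and one pipe per topic.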
Which Snowflake objects can be used in a data share? (Select TWO).
Correct Answer: BD
https://docs.snowflake.com/en/user-guide/data-sharing-intro
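Per the linked documentation, the objects that can be shared include tables, external tables, secure views, secure materialized views, and secure UDFs. Assuming the correct options here are tables and secure views, a minimal provider-side sketch (database, schema, object, and account names are illustrative):

```sql
-- Provider account: create a share and expose objects to it.
CREATE SHARE sales_share;

GRANT USAGE ON DATABASE retail      TO SHARE sales_share;
GRANT USAGE ON SCHEMA retail.public TO SHARE sales_share;

GRANT SELECT ON TABLE retail.public.orders        TO SHARE sales_share;
GRANT SELECT ON VIEW  retail.public.orders_secure TO SHARE sales_share;  -- must be a SECURE view

-- Make the share available to a consumer account:
ALTER SHARE sales_share ADD ACCOUNTS = consumer_account;
```

Note that only secure views (not regular views) can be granted to a share, which is why the view in this sketch is assumed to be created with CREATE SECURE VIEW.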