Free ARA-C01 Exam Dumps

Question 16

Files arrive in an external stage every 10 seconds from a proprietary system. The files range in size from 500 KB to 3 MB. The data must be accessible by dashboards as soon as it arrives.
How can a Snowflake Architect meet this requirement with the LEAST amount of coding? (Choose two.)

Correct Answer:AE
The requirement is for the data to be accessible as quickly as possible after it arrives in the external stage with minimal coding effort.
Option A: Snowpipe with auto-ingest is a service that continuously loads data as it arrives in the stage. With auto-ingest, Snowpipe automatically detects new files as they arrive in a cloud stage and loads the data into the specified Snowflake table with minimal delay and no intervention required. This is an ideal low-maintenance solution for the given scenario where files are arriving at a very high frequency.
Option E: Using a combination of a task and a stream allows for near real-time change data capture in Snowflake. A stream records changes (inserts, updates, and deletes) made to a table, and a task can run on a short schedule, or be conditioned on the stream having data, ensuring that changes are processed into the dashboard tables shortly after they occur.
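The two options above can be sketched in Snowflake SQL. This is a minimal, hypothetical setup: the stage, table, warehouse, and object names (raw_stage, raw_events, dashboard_events, etl_wh) are assumptions, not part of the question.

```sql
-- Option A: Snowpipe with auto-ingest loads files as they land in the stage
CREATE PIPE raw_events_pipe
  AUTO_INGEST = TRUE
AS
  COPY INTO raw_events
  FROM @raw_stage
  FILE_FORMAT = (TYPE = 'JSON');

-- Option E: a stream tracks changes, and a task processes them
CREATE STREAM raw_events_stream ON TABLE raw_events;

CREATE TASK refresh_dashboard
  WAREHOUSE = etl_wh
  SCHEDULE = '1 MINUTE'
  WHEN SYSTEM$STREAM_HAS_DATA('raw_events_stream')
AS
  INSERT INTO dashboard_events
  SELECT * FROM raw_events_stream;
```

The WHEN clause lets the task skip runs when the stream is empty, so no warehouse credits are consumed between file arrivals.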

Question 17

A table, EMP_TBL, has three records as shown:
[Exhibit: EMP_TBL records]
The following variables are set for the session:
[Exhibit: session variable SET statements]
Which SELECT statements will retrieve all three records? (Select TWO).

Correct Answer:BE
✑ The correct answer is B and E because they use the correct syntax and values for the identifier function and the session variables.
✑ The IDENTIFIER function allows you to use a variable or expression as an identifier (such as a table name or column name) in a SQL statement. It takes a single argument and returns it as an identifier. For example, IDENTIFIER($tbl_ref) returns EMP_TBL as an identifier.
✑ The session variables are set using the SET command and can be referenced using the $ sign. For example, $var1 returns Name1 as a value.
✑ Option A is incorrect because it uses Stbl_ref and Scol_ref, which are not valid session variables or identifiers. They should be $tbl_ref and $col_ref instead.
✑ Option C is incorrect because it uses identifier<Stbl_ref>, which is not a valid syntax for the identifier function. It should be identifier($tbl_ref) instead.
✑ Option D is incorrect because it uses Cvarl, var2, and var3, which are not valid session variables or values. They should be $var1, $var2, and $var3 instead.
References:
✑ Snowflake Documentation: Identifier Function
✑ Snowflake Documentation: Session Variables
✑ Snowflake Learning: SnowPro Advanced: Architect Exam Study Guide
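The correct usage can be sketched as follows. This is an illustrative setup assuming a single NAME column in EMP_TBL; the exact table definition is not shown in the question.

```sql
-- Hypothetical table matching the exhibit
CREATE TABLE EMP_TBL (name STRING);
INSERT INTO EMP_TBL VALUES ('Name1'), ('Name2'), ('Name3');

-- Session variables referenced with the $ sign
SET tbl_ref = 'EMP_TBL';
SET var1 = 'Name1';

-- IDENTIFIER() resolves a variable as an object name,
-- so this retrieves all three records
SELECT * FROM IDENTIFIER($tbl_ref);

-- A bare $var1 resolves as a value, not an identifier
SELECT * FROM EMP_TBL WHERE name = $var1;
```

Note the distinction: IDENTIFIER($tbl_ref) substitutes an object name, while $var1 on its own substitutes a literal value.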

Question 18

Which of the following ingestion methods can be used to load near real-time data by using the messaging services provided by a cloud provider?

Correct Answer:A
Snowflake Connector for Kafka and Snowpipe are two ingestion methods that can be used to load near real-time data using the messaging services provided by a cloud provider. The Snowflake Connector for Kafka streams structured and semi-structured data from Apache Kafka topics into Snowflake tables. Snowpipe loads data from files that are continuously added to a cloud storage location, such as Amazon S3 or Azure Blob Storage. Both methods leverage Snowflake's micro-partitioning and columnar storage to optimize data ingestion and query performance. Snowflake streams and Spark are not ingestion methods, but rather components of the Snowflake architecture: streams provide change data capture (CDC) functionality by tracking data changes in a table, and Spark is a distributed computing framework that can process large-scale data and write it to Snowflake using the Snowflake Spark Connector.
References:
✑ Snowflake Connector for Kafka
✑ Snowpipe
✑ Snowflake Streams
✑ Snowflake Spark Connector
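A Kafka-based ingestion pipeline is configured on the Kafka Connect side rather than in Snowflake. The sketch below shows the general shape of a sink connector configuration; the connector name, topic, account URL, user, and database/schema values are placeholders, not values from the question.

```json
{
  "name": "snowflake-sink",
  "config": {
    "connector.class": "com.snowflake.kafka.connector.SnowflakeSinkConnector",
    "topics": "events",
    "snowflake.url.name": "myaccount.snowflakecomputing.com:443",
    "snowflake.user.name": "KAFKA_LOADER",
    "snowflake.private.key": "<private-key>",
    "snowflake.database.name": "RAW",
    "snowflake.schema.name": "PUBLIC",
    "buffer.flush.time": "10"
  }
}
```

Lowering buffer thresholds such as buffer.flush.time reduces ingestion latency at the cost of more, smaller loads.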

Question 19

The Business Intelligence team reports that when some team members run queries for their dashboards in parallel with others, the query response time gets significantly slower. What can a Snowflake Architect do to identify what is occurring and troubleshoot this issue?
A)
[Exhibit: query option A]
B)
[Exhibit: query option B]
C)
[Exhibit: query option C]
D)
[Exhibit: query option D]

Correct Answer:A
The exhibit in option A shows a SQL query that identifies which queries spilled to remote storage, and suggests changing the warehouse parameters to address the issue. Spilling to remote storage occurs when the memory allocated to a warehouse is insufficient to process a query, so Snowflake uses local disk and then cloud storage as a temporary cache. This can significantly slow down query performance and increase cost. To troubleshoot this issue, a Snowflake Architect can run the query to find out which queries are spilling, how much data they are spilling, and which warehouses they are using. Then, the architect can adjust the warehouse size, type, or scaling policy to provide enough memory for the queries and avoid spilling [1][2].
References:
✑ Recognizing Disk Spilling
✑ Managing the Kafka Connector
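The exhibits themselves are not reproduced here, but a query of the kind described can be written against the ACCOUNT_USAGE.QUERY_HISTORY view. This is a sketch of the general approach, not the exact query from option A; the 24-hour window is an arbitrary choice.

```sql
-- Find recent queries that spilled to remote storage,
-- ranked by how much data they spilled
SELECT query_id,
       warehouse_name,
       bytes_spilled_to_local_storage,
       bytes_spilled_to_remote_storage
FROM snowflake.account_usage.query_history
WHERE bytes_spilled_to_remote_storage > 0
  AND start_time > DATEADD('hour', -24, CURRENT_TIMESTAMP())
ORDER BY bytes_spilled_to_remote_storage DESC;
```

Queries that appear repeatedly here point to warehouses that are undersized for their workload.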

Question 20

Based on the Snowflake object hierarchy, what securable objects belong directly to a Snowflake account? (Select THREE).

Correct Answer:AEF
✑ A securable object is an entity to which access can be granted in Snowflake. Securable objects include databases, schemas, tables, views, stages, pipes, functions, procedures, sequences, tasks, streams, roles, warehouses, and shares [1].
✑ The Snowflake object hierarchy is a logical structure that organizes securable objects in a nested manner. The top-most container is the account, which holds all of the databases, roles, and warehouses for the customer organization. Each database contains schemas, which in turn contain tables, views, stages, pipes, functions, procedures, sequences, tasks, and streams. Each role can be granted privileges on other roles or securable objects, and each warehouse can be used to execute queries on securable objects [2].
✑ Based on this hierarchy, the securable objects that belong directly to a Snowflake account are databases, roles, and warehouses. These objects are created and managed at the account level and do not depend on any other securable object. The other options are not correct because schemas belong to databases [3], while tables [4] and stages [5] belong to schemas rather than directly to the account.
References:
✑ 1: Overview of Access Control | Snowflake Documentation
✑ 2: Securable Objects | Snowflake Documentation
✑ 3: CREATE SCHEMA | Snowflake Documentation
✑ 4: CREATE TABLE | Snowflake Documentation
✑ [5]: CREATE STAGE | Snowflake Documentation
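The hierarchy can be illustrated with DDL. The object names below (analytics_db, analyst_role, reporting_wh, and so on) are hypothetical.

```sql
-- These objects are created directly under the account:
CREATE DATABASE analytics_db;
CREATE ROLE analyst_role;
CREATE WAREHOUSE reporting_wh WITH WAREHOUSE_SIZE = 'XSMALL';

-- By contrast, schemas live inside a database, and tables
-- and stages live inside a schema:
CREATE SCHEMA analytics_db.reporting;
CREATE TABLE analytics_db.reporting.sales (id INT);
CREATE STAGE analytics_db.reporting.sales_stage;
```

The fully qualified names in the second group make the nesting explicit: database.schema.object, with only the first group addressable without a parent container.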