
ARA-C01 PDF

$38.50

$109.99

3 Months Free Update

  • Printable Format
  • Value for Money
  • 100% Pass Assurance
  • Verified Answers
  • Researched by Industry Experts
  • Based on Real Exam Scenarios
  • 100% Real Questions

ARA-C01 PDF + Testing Engine

$61.60

$175.99

3 Months Free Update

  • Exam Name: SnowPro Advanced: Architect Certification Exam
  • Last Update: May 4, 2024
  • Questions and Answers: 162
  • Free Real Questions Demo
  • Recommended by Industry Experts
  • Most Economical Package
  • Immediate Access

ARA-C01 Engine

$46.20

$131.99

3 Months Free Update

  • Best Testing Engine
  • One-Click Installation
  • Recommended by Teachers
  • Easy to use
  • 3 Modes of Learning
  • State-of-the-Art Technology
  • 100% Real Questions included

ARA-C01 Practice Exam Questions with Answers for the SnowPro Advanced: Architect Certification Exam

Question # 6

What is a valid object hierarchy when building a Snowflake environment?

A. Account --> Database --> Schema --> Warehouse
B. Organization --> Account --> Database --> Schema --> Stage
C. Account --> Schema --> Table --> Stage
D. Organization --> Account --> Stage --> Table --> View
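For orientation, the containment relationships can be expressed as plain DDL. A minimal sketch with hypothetical object names, showing that schemas live inside databases, that tables and stages live inside schemas, and that warehouses are account-level compute objects:

-- A database contains schemas; a schema contains tables and stages.
CREATE DATABASE sales_db;
CREATE SCHEMA sales_db.staging;
CREATE TABLE sales_db.staging.raw_events (payload VARIANT);
CREATE STAGE sales_db.staging.raw_stage;
-- A warehouse is created at the account level, outside any database.
CREATE WAREHOUSE analytics_wh WITH WAREHOUSE_SIZE = 'XSMALL';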

Question # 7

How do Snowflake databases that are created from shares differ from standard databases that are not created from shares? (Choose three.)

A. Shared databases are read-only.
B. Shared databases must be refreshed in order for new data to be visible.
C. Shared databases cannot be cloned.
D. Shared databases are not supported by Time Travel.
E. Shared databases will have the PUBLIC or INFORMATION_SCHEMA schemas without explicitly granting these schemas to the share.
F. Shared databases can also be created as transient databases.
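As a syntax reminder, a consumer mounts a share as a database of its own, and that database is read-only. A minimal sketch, assuming a hypothetical provider account PROVIDER_ACCT and share SALES_SHARE:

-- Consumer side: create a database directly from the share.
CREATE DATABASE shared_sales FROM SHARE provider_acct.sales_share;
-- Grant read access on the mounted database to a role.
GRANT IMPORTED PRIVILEGES ON DATABASE shared_sales TO ROLE analyst;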

Question # 8

What are some of the characteristics of result set caches? (Choose three.)

A. Time Travel queries can be executed against the result set cache.
B. Snowflake persists the data results for 24 hours.
C. Each time persisted results for a query are used, a 24-hour retention period is reset.
D. The data stored in the result cache will contribute to storage costs.
E. The retention period can be reset for a maximum of 31 days.
F. The result set cache is not shared between warehouses.
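For hands-on testing of result-cache behavior, the cache can be switched off per session and a previous result can be re-read with RESULT_SCAN. A minimal sketch using standard session parameters and functions:

-- Disable reuse of persisted query results for this session.
ALTER SESSION SET USE_CACHED_RESULT = FALSE;
-- Re-read the result of the most recent query in this session.
SELECT * FROM TABLE(RESULT_SCAN(LAST_QUERY_ID()));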

Question # 9

What are purposes for creating a storage integration? (Choose three.)

A. Control access to Snowflake data using a master encryption key that is maintained in the cloud provider’s key management service.
B. Store a generated identity and access management (IAM) entity for an external cloud provider regardless of the cloud provider that hosts the Snowflake account.
C. Support multiple external stages using one single Snowflake object.
D. Avoid supplying credentials when creating a stage or when loading or unloading data.
E. Create private VPC endpoints that allow direct, secure connectivity between VPCs without traversing the public internet.
F. Manage credentials from multiple cloud providers in one single Snowflake object.
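For context, a storage integration holds the cloud IAM entity once and can then back many external stages, with no credentials appearing in any stage definition. A minimal sketch, with a hypothetical bucket and role ARN:

CREATE STORAGE INTEGRATION s3_int
  TYPE = EXTERNAL_STAGE
  STORAGE_PROVIDER = 'S3'
  ENABLED = TRUE
  STORAGE_AWS_ROLE_ARN = 'arn:aws:iam::123456789012:role/snowflake_access'
  STORAGE_ALLOWED_LOCATIONS = ('s3://mybucket/landing/')
  STORAGE_BLOCKED_LOCATIONS = ('s3://mybucket/restricted/');

-- Any number of stages can reference the integration; no credentials in the DDL.
CREATE STAGE landing_stage
  STORAGE_INTEGRATION = s3_int
  URL = 's3://mybucket/landing/';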

Question # 10

An Architect needs to design a Snowflake account and database strategy to store and analyze large amounts of structured and semi-structured data. There are many business units and departments within the company. The requirements are scalability, security, and cost efficiency.

What design should be used?

A. Create a single Snowflake account and database for all data storage and analysis needs, regardless of data volume or complexity.
B. Set up separate Snowflake accounts and databases for each department or business unit, to ensure data isolation and security.
C. Use Snowflake's data lake functionality to store and analyze all data in a central location, without the need for structured schemas or indexes.
D. Use a centralized Snowflake database for core business data, and use separate databases for departmental or project-specific data.

Question # 11

How can an Architect enable optimal clustering to enhance performance for different access paths on a given table?

A. Create multiple clustering keys for a table.
B. Create multiple materialized views with different cluster keys.
C. Create super projections that will automatically create clustering.
D. Create a clustering key that contains all columns used in the access paths.
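As a syntax reminder, a materialized view can define its own clustering key, so different materialized views over the same table can each be clustered for a different access path (an Enterprise Edition feature). A minimal sketch with a hypothetical ORDERS table:

-- Base table clustered for one access path.
ALTER TABLE orders CLUSTER BY (order_date);
-- Materialized view clustered for a different access path.
CREATE MATERIALIZED VIEW orders_by_customer
  CLUSTER BY (customer_id)
  AS SELECT customer_id, order_id, order_date, amount FROM orders;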

Question # 12

A table contains five columns and it has millions of records. The cardinality distribution of the columns is shown below:

[Image in original: cardinality distribution of columns C1 to C5]

Columns C4 and C5 are mostly used by SELECT queries in the GROUP BY and ORDER BY clauses, whereas columns C1, C2, and C3 are heavily used in the filter and join conditions of SELECT queries.

The Architect must design a clustering key for this table to improve the query performance.

Based on Snowflake recommendations, how should the clustering key columns be ordered while defining the multi-column clustering key?

A. C5, C4, C2
B. C3, C4, C5
C. C1, C3, C2
D. C2, C1, C3
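For reference, a multi-column clustering key is declared with ALTER TABLE ... CLUSTER BY; Snowflake's general guidance is to order the key columns from lowest to highest cardinality and to favor columns used in filters and joins. A syntax-only sketch with hypothetical column names:

-- Column order in the key matters; see Snowflake's clustering key guidelines.
ALTER TABLE sales CLUSTER BY (low_card_col, mid_card_col, high_card_col);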

Question # 13

Which of the following commands will use warehouse credits?

A. SHOW TABLES LIKE 'SNOWFL%';
B. SELECT MAX(FLAKE_ID) FROM SNOWFLAKE;
C. SELECT COUNT(*) FROM SNOWFLAKE;
D. SELECT COUNT(FLAKE_ID) FROM SNOWFLAKE GROUP BY FLAKE_ID;

Question # 14

An Architect has chosen to separate their Snowflake Production and QA environments using two separate Snowflake accounts.

The QA account is intended to run and test changes on data and database objects before pushing those changes to the Production account. It is a requirement that all database objects and data in the QA account be an exact copy of the database objects, privileges, and data in the Production account, on at least a nightly basis.

Which is the LEAST complex approach to use to populate the QA account with the Production account’s data and database objects on a nightly basis?

A.
1) Create a share in the Production account for each database
2) Share access to the QA account as a Consumer
3) The QA account creates a database directly from each share
4) Create clones of those databases on a nightly basis
5) Run tests directly on those cloned databases

B.
1) Create a stage in the Production account
2) Create a stage in the QA account that points to the same external object-storage location
3) Create a task that runs nightly to unload each table in the Production account into the stage
4) Use Snowpipe to populate the QA account

C.
1) Enable replication for each database in the Production account
2) Create replica databases in the QA account
3) Create clones of the replica databases on a nightly basis
4) Run tests directly on those cloned databases

D.
1) In the Production account, create an external function that connects into the QA account and returns all the data for one specific table
2) Run the external function as part of a stored procedure that loops through each table in the Production account and populates each table in the QA account
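As a syntax reminder for the replication-based approach described in option C, a minimal sketch assuming a hypothetical organization MYORG with accounts PROD and QA:

-- In the Production account: allow the database to be replicated to the QA account.
ALTER DATABASE sales_db ENABLE REPLICATION TO ACCOUNTS myorg.qa;

-- In the QA account: create a local replica, refresh it nightly, then clone it for testing
-- (the nightly clone mirrors the step described in the option).
CREATE DATABASE sales_db AS REPLICA OF myorg.prod.sales_db;
ALTER DATABASE sales_db REFRESH;
CREATE DATABASE sales_db_test CLONE sales_db;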

Question # 15

How can the Snowpipe REST API be used to keep a log of data load history?

A. Call insertReport every 20 minutes, fetching the last 10,000 entries.
B. Call loadHistoryScan every minute for the maximum time range.
C. Call insertReport every 8 minutes for a 10-minute time range.
D. Call loadHistoryScan every 10 minutes for a 15-minute range.

Question # 16

An Architect is designing a pipeline to stream event data into Snowflake using the Snowflake Kafka connector. The Architect’s highest priority is to configure the connector to stream data in the MOST cost-effective manner.

Which of the following is recommended for optimizing the cost associated with the Snowflake Kafka connector?

A. Utilize a higher Buffer.flush.time in the connector configuration.
B. Utilize a higher Buffer.size.bytes in the connector configuration.
C. Utilize a lower Buffer.size.bytes in the connector configuration.
D. Utilize a lower Buffer.count.records in the connector configuration.

Question # 17

An Architect for a multi-national transportation company has a system that is used to check the weather conditions along vehicle routes. The data is provided to drivers.

The weather information is delivered regularly by a third-party company, and this information is generated as a JSON structure. The data is then loaded into Snowflake in a column with a VARIANT data type. This table is queried directly to deliver the statistics to the drivers with minimal delay.

A single entry includes (but is not limited to):

- Weather condition; cloudy, sunny, rainy, etc.

- Degree

- Longitude and latitude

- Timeframe

- Location address

- Wind

The table holds more than 10 years' worth of data in order to deliver the statistics from different years and locations. The amount of data on the table increases every day.

The drivers report that they are not receiving the weather statistics for their locations in time.

What can the Architect do to deliver the statistics to the drivers faster?

A. Create an additional table in the schema for longitude and latitude. Determine a regular task to fill this information by extracting it from the JSON dataset.
B. Add search optimization service on the variant column for longitude and latitude in order to query the information by using specific metadata.
C. Divide the table into several tables for each year by using the timeframe information from the JSON dataset in order to process the queries in parallel.
D. Divide the table into several tables for each location by using the location address information from the JSON dataset in order to process the queries in parallel.
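For reference, the search optimization service is enabled with ALTER TABLE; newer releases also allow scoping it to specific expressions, including paths inside a VARIANT column (treat the path-scoped form as an assumption to confirm against current documentation). A minimal sketch, assuming a hypothetical WEATHER table with a VARIANT column V:

-- Enable search optimization for the whole table.
ALTER TABLE weather ADD SEARCH OPTIMIZATION;
-- Optionally target equality lookups on specific VARIANT paths (verify support on your account/edition).
ALTER TABLE weather ADD SEARCH OPTIMIZATION ON EQUALITY(v:longitude, v:latitude);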

Question # 18

Company A would like to share data in Snowflake with Company B. Company B is not on the same cloud platform as Company A.

What is required to allow data sharing between these two companies?

A. Create a pipeline to write shared data to a cloud storage location in the target cloud provider.
B. Ensure that all views are persisted, as views cannot be shared across cloud platforms.
C. Set up data replication to the region and cloud platform where the consumer resides.
D. Company A and Company B must agree to use a single cloud platform, as data sharing is only possible if the companies share the same cloud provider.

Question # 19

A company wants to integrate its main enterprise identity provider with Snowflake using federated authentication.

The authentication integration has been configured and roles have been created in Snowflake. However, the users are not automatically appearing in Snowflake when created, and their group membership is not reflected in their assigned roles.

How can the missing functionality be enabled with the LEAST amount of operational overhead?

A. OAuth must be configured between the identity provider and Snowflake. Then the authorization server must be configured with the right mapping of users and roles.
B. OAuth must be configured between the identity provider and Snowflake. Then the authorization server must be configured with the right mapping of users, and the resource server must be configured with the right mapping of role assignment.
C. SCIM must be enabled between the identity provider and Snowflake. Once both are synchronized through SCIM, their groups will get created as group accounts in Snowflake and the proper roles can be granted.
D. SCIM must be enabled between the identity provider and Snowflake. Once both are synchronized through SCIM, users will automatically get created and their group membership will be reflected as roles in Snowflake.
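As a sketch of the SCIM side of such a setup (the SCIM_CLIENT and RUN_AS_ROLE values below are illustrative for an Azure AD provider; adjust for the actual identity provider):

-- Role under which the identity provider provisions users and roles.
CREATE ROLE IF NOT EXISTS aad_provisioner;
GRANT CREATE USER ON ACCOUNT TO ROLE aad_provisioner;
GRANT CREATE ROLE ON ACCOUNT TO ROLE aad_provisioner;

-- SCIM security integration (illustrative values).
CREATE SECURITY INTEGRATION aad_scim
  TYPE = SCIM
  SCIM_CLIENT = 'AZURE'
  RUN_AS_ROLE = 'AAD_PROVISIONER';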

Question # 20

How is the change of local time due to daylight savings time handled in Snowflake tasks? (Choose two.)

A. A task scheduled in a UTC-based schedule will have no issues with the time changes.
B. Task schedules can be designed to follow specified or local time zones to accommodate the time changes.
C. A task will move to a suspended state during the daylight savings time change.
D. A frequent task execution schedule like minutes may not cause a problem, but will affect the task history.
E. A task schedule will follow only the specified time and will fail to handle lost or duplicated hours.
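For reference, a task schedule can be a simple repeating interval or a cron expression pinned to a named time zone. A minimal sketch with hypothetical objects:

-- Interval-based schedule (a plain repeating interval).
CREATE TASK refresh_stats
  WAREHOUSE = analytics_wh
  SCHEDULE = '60 MINUTE'
AS
  CALL refresh_stats_proc();

-- Cron-based schedule tied to a specific time zone (follows that zone's clock changes).
CREATE TASK nightly_load
  WAREHOUSE = analytics_wh
  SCHEDULE = 'USING CRON 0 2 * * * America/Los_Angeles'
AS
  CALL nightly_load_proc();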

Question # 21

Database DB1 has schema S1 which has one table, T1.

DB1 --> S1 --> T1

The retention period of DB1 is set to 10 days.

The retention period of S1 is set to 20 days.

The retention period of T1 is set to 30 days.

The user runs the following command:

Drop Database DB1;

What will the Time Travel retention period be for T1?

A. 10 days
B. 20 days
C. 30 days
D. 37 days
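For reference, the retention periods in this scenario would be set per object like this (a sketch using the object names from the question):

ALTER DATABASE db1 SET DATA_RETENTION_TIME_IN_DAYS = 10;
ALTER SCHEMA db1.s1 SET DATA_RETENTION_TIME_IN_DAYS = 20;
ALTER TABLE db1.s1.t1 SET DATA_RETENTION_TIME_IN_DAYS = 30;

DROP DATABASE db1;
-- A dropped database can be restored while it is still within its retention window.
UNDROP DATABASE db1;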

Question # 22

A company has a table named Data that contains corrupted data. The company wants to recover the data as it was 5 minutes ago using cloning and Time Travel.

What command will accomplish this?

A. CREATE CLONE TABLE Recover_Data FROM Data AT(OFFSET => -60*5);
B. CREATE CLONE Recover_Data FROM Data AT(OFFSET => -60*5);
C. CREATE TABLE Recover_Data CLONE Data AT(OFFSET => -60*5);
D. CREATE TABLE Recover Data CLONE Data AT(TIME => -60*5);
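As a general syntax reminder (independent of which option is keyed as correct), a point-in-time clone uses CREATE TABLE ... CLONE with an AT clause, where OFFSET is a negative number of seconds:

-- Clone the table as it existed 300 seconds (5 minutes) ago.
CREATE TABLE recover_data CLONE data AT (OFFSET => -60*5);
-- Equivalent form using an explicit timestamp.
CREATE TABLE recover_data_ts CLONE data AT (TIMESTAMP => '2024-05-04 10:00:00'::TIMESTAMP_LTZ);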

Question # 23

What Snowflake system functions are used to view and/or monitor the clustering metadata for a table? (Select TWO).

A. SYSTEM$CLUSTERING
B. SYSTEM$TABLE_CLUSTERING
C. SYSTEM$CLUSTERING_DEPTH
D. SYSTEM$CLUSTERING_RATIO
E. SYSTEM$CLUSTERING_INFORMATION
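For reference, these system functions are invoked in ordinary SELECT statements. A minimal sketch against a hypothetical table:

-- JSON summary of clustering quality, optionally for a specific set of columns.
SELECT SYSTEM$CLUSTERING_INFORMATION('orders', '(order_date, customer_id)');
-- Average clustering depth for the specified columns.
SELECT SYSTEM$CLUSTERING_DEPTH('orders', '(order_date)');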

Question # 24

A company has a Snowflake environment running in AWS us-west-2 (Oregon). The company needs to share data privately with a customer who is running their Snowflake environment in Azure East US 2 (Virginia).

What is the recommended sequence of operations that must be followed to meet this requirement?

A.
1. Create a share and add the database privileges to the share
2. Create a new listing on the Snowflake Marketplace
3. Alter the listing and add the share
4. Instruct the customer to subscribe to the listing on the Snowflake Marketplace

B.
1. Ask the customer to create a new Snowflake account in Azure East US 2 (Virginia)
2. Create a share and add the database privileges to the share
3. Alter the share and add the customer's Snowflake account to the share

C.
1. Create a new Snowflake account in Azure East US 2 (Virginia)
2. Set up replication between AWS us-west-2 (Oregon) and Azure East US 2 (Virginia) for the database objects to be shared
3. Create a share and add the database privileges to the share
4. Alter the share and add the customer's Snowflake account to the share

D.
1. Create a reader account in Azure East US 2 (Virginia)
2. Create a share and add the database privileges to the share
3. Add the reader account to the share
4. Share the reader account's URL and credentials with the customer

Question # 25

In a managed access schema, what are characteristics of the roles that can manage object privileges? (Select TWO).

A. Users with the SYSADMIN role can grant object privileges in a managed access schema.
B. Users with the SECURITYADMIN role or higher can grant object privileges in a managed access schema.
C. Users who are database owners can grant object privileges in a managed access schema.
D. Users who are schema owners can grant object privileges in a managed access schema.
E. Users who are object owners can grant object privileges in a managed access schema.
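As a syntax reminder, managed access is a property of the schema, set either at creation time or afterwards. A minimal sketch with hypothetical names:

-- Create a schema with managed access enabled.
CREATE SCHEMA finance.restricted WITH MANAGED ACCESS;
-- Or switch an existing schema to managed access.
ALTER SCHEMA finance.reporting ENABLE MANAGED ACCESS;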

Question # 26

Assuming all Snowflake accounts are using an Enterprise edition or higher, in which development and testing scenarios would copying of data be required, and zero-copy cloning not be suitable? (Select TWO).

A. Developers create their own datasets to work against transformed versions of the live data.
B. Production and development run in different databases in the same account, and Developers need to see production-like data but with specific columns masked.
C. Data is in a production Snowflake account that needs to be provided to Developers in a separate development/testing Snowflake account in the same cloud region.
D. Developers create their own copies of a standard test database previously created for them in the development account, for their initial development and unit testing.
E. The release process requires pre-production testing of changes with data of production scale and complexity. For security reasons, pre-production also runs in the production account.

Question # 27

What integration object should be used to place restrictions on where data may be exported?

A. Stage integration
B. Security integration
C. Storage integration
D. API integration

Question # 28

A media company needs a data pipeline that will ingest customer review data into a Snowflake table, and apply some transformations. The company also needs to use Amazon Comprehend to do sentiment analysis and make the de-identified final data set available publicly for advertising companies who use different cloud providers in different regions.

The data pipeline needs to run continuously and efficiently as new records arrive in the object storage, leveraging event notifications. Also, the operational complexity, the maintenance of the infrastructure (including platform upgrades and security), and the development effort should be minimal.

Which design will meet these requirements?

A. Ingest the data using COPY INTO and use streams and tasks to orchestrate transformations. Export the data into Amazon S3 to do model inference with Amazon Comprehend and ingest the data back into a Snowflake table. Then create a listing in the Snowflake Marketplace to make the data available to other companies.
B. Ingest the data using Snowpipe and use streams and tasks to orchestrate transformations. Create an external function to do model inference with Amazon Comprehend and write the final records to a Snowflake table. Then create a listing in the Snowflake Marketplace to make the data available to other companies.
C. Ingest the data into Snowflake using Amazon EMR and PySpark using the Snowflake Spark connector. Apply transformations using another Spark job. Develop a Python program to do model inference by leveraging the Amazon Comprehend text analysis API. Then write the results to a Snowflake table and create a listing in the Snowflake Marketplace to make the data available to other companies.
D. Ingest the data using Snowpipe and use streams and tasks to orchestrate transformations. Export the data into Amazon S3 to do model inference with Amazon Comprehend and ingest the data back into a Snowflake table. Then create a listing in the Snowflake Marketplace to make the data available to other companies.
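As a sketch of the Snowpipe plus streams-and-tasks pattern referenced in the options (object names and the sentiment function are illustrative assumptions, and RAW_REVIEWS is assumed to have a single VARIANT column V; the external function would wrap Amazon Comprehend):

-- Continuous ingestion from object storage, driven by event notifications.
CREATE PIPE reviews_pipe AUTO_INGEST = TRUE AS
  COPY INTO raw_reviews FROM @reviews_stage FILE_FORMAT = (TYPE = 'JSON');

-- Capture newly ingested rows and transform them on a schedule.
CREATE STREAM raw_reviews_stream ON TABLE raw_reviews;

CREATE TASK transform_reviews
  WAREHOUSE = etl_wh
  SCHEDULE = '5 MINUTE'
  WHEN SYSTEM$STREAM_HAS_DATA('RAW_REVIEWS_STREAM')
AS
  INSERT INTO curated_reviews
  SELECT v:review_id::STRING,
         sentiment_udf(v:text::STRING)  -- hypothetical external function for sentiment scoring
  FROM raw_reviews_stream;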

Question # 29

A company is storing large numbers of small JSON files (ranging from 1-4 bytes) that are received from IoT devices and sent to a cloud provider. In any given hour, 100,000 files are added to the cloud provider.

What is the MOST cost-effective way to bring this data into a Snowflake table?

A. An external table
B. A pipe
C. A stream
D. A copy command at regular intervals
