
Practice Free Professional-Data-Engineer Google Professional Data Engineer Exam Questions and Answers With Explanations

We at Crack4sure are committed to giving students who are preparing for the Google Professional-Data-Engineer exam the most current and reliable questions. To help people study, we've made some of our Google Professional Data Engineer Exam materials available to everyone for free. You can take the Free Professional-Data-Engineer Practice Test as many times as you want. The answers to the practice questions are given, and each answer is explained.

Question # 6

You need to compose visualization for operations teams with the following requirements:

Telemetry must include data from all 50,000 installations for the most recent 6 weeks (sampling once every minute)

The report must not be more than 3 hours delayed from live data.

The actionable report should only show suboptimal links.

Most suboptimal links should be sorted to the top.

Suboptimal links can be grouped and filtered by regional geography.

User response time to load the report must be <5 seconds.

You create a data source to store the last 6 weeks of data, and create visualizations that allow viewers to see multiple date ranges, distinct geographic regions, and unique installation types. You always show the latest data without any changes to your visualizations. You want to avoid creating and updating new visualizations each month. What should you do?

A.

Look through the current data and compose a series of charts and tables, one for each possiblecombination of criteria.

B.

Look through the current data and compose a small set of generalized charts and tables bound to criteria filters that allow value selection.

C.

Export the data to a spreadsheet, compose a series of charts and tables, one for each possiblecombination of criteria, and spread them across multiple tabs.

D.

Load the data into relational database tables, write a Google App Engine application that queries all rows, summarizes the data across each criteria, and then renders results using the Google Charts and visualization API.

Question # 7

Flowlogistic’s CEO wants to gain rapid insight into their customer base so his sales team can be better informed in the field. This team is not very technical, so they’ve purchased a visualization tool to simplify the creation of BigQuery reports. However, they’ve been overwhelmed by all the data in the table, and are spending a lot of money on queries trying to find the data they need. You want to solve their problem in the most cost-effective way. What should you do?

A.

Export the data into a Google Sheet for visualization.

B.

Create an additional table with only the necessary columns.

C.

Create a view on the table to present to the visualization tool.

D.

Create identity and access management (IAM) roles on the appropriate columns, so only they appear in a query.
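
For illustration, a minimal sketch of the view approach described in option C, assuming the google-cloud-bigquery Python client and made-up project, dataset, and column names:

from google.cloud import bigquery

client = bigquery.Client()

# A narrow view exposes only the columns the sales team needs, so the
# visualization tool no longer scans (and bills for) the full table.
view = bigquery.Table("my-project.sales.customer_summary_view")  # assumed IDs
view.view_query = """
    SELECT customer_id, region, last_order_date, lifetime_value
    FROM `my-project.sales.customers`
"""
client.create_table(view)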

Question # 8

MJTelco needs you to create a schema in Google Bigtable that will allow for the historical analysis of the last 2 years of records. Each record that comes in is sent every 15 minutes, and contains a unique identifier of the device and a data record. The most common query is for all the data for a given device for a given day. Which schema should you use?

A.

Rowkey: date#device_id; Column data: data_point

B.

Rowkey: date; Column data: device_id, data_point

C.

Rowkey: device_id; Column data: date, data_point

D.

Rowkey: data_point; Column data: device_id, date

E.

Rowkey: date#data_point; Column data: device_id
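
As a rough illustration of how a date#device_id row key (option A) serves the stated query, here is a sketch using the google-cloud-bigtable Python client; the instance, table, and key values are assumptions:

from google.cloud import bigtable
from google.cloud.bigtable.row_set import RowSet

client = bigtable.Client(project="my-project")
table = client.instance("telemetry-instance").table("device_records")

# With keys shaped like "YYYYMMDD#device_id", all records for one device on one
# day sit in a contiguous range, so the common query is a single prefix scan.
row_set = RowSet()
row_set.add_row_range_with_prefix("20240115#device-42")
for row in table.read_rows(row_set=row_set):
    print(row.row_key)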

Question # 9

You need to compose visualizations for operations teams with the following requirements:

Which approach meets the requirements?

A.

Load the data into Google Sheets, use formulas to calculate a metric, and use filters/sorting to show only suboptimal links in a table.

B.

Load the data into Google BigQuery tables, write Google Apps Script that queries the data, calculates the metric, and shows only suboptimal rows in a table in Google Sheets.

C.

Load the data into Google Cloud Datastore tables, write a Google App Engine Application that queries all rows, applies a function to derive the metric, and then renders results in a table using the Google charts and visualization API.

D.

Load the data into Google BigQuery tables, write a Google Data Studio 360 report that connects to your data, calculates a metric, and then uses a filter expression to show only suboptimal rows in a table.

Question # 10

You create a new report for your large team in Google Data Studio 360. The report uses Google BigQuery as its data source. It is company policy to ensure employees can view only the data associated with their region, so you create and populate a table for each region. You need to enforce the regional access policy to the data.

Which two actions should you take? (Choose two.)

A.

Ensure all the tables are included in global dataset.

B.

Ensure each table is included in a dataset for a region.

C.

Adjust the settings for each table to allow a related region-based security group view access.

D.

Adjust the settings for each view to allow a related region-based security group view access.

E.

Adjust the settings for each dataset to allow a related region-based security group view access.
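
To make the dataset-per-region idea concrete, here is a hedged sketch of granting a regional security group read access at the dataset level, using the google-cloud-bigquery Python client; the dataset and group names are invented:

from google.cloud import bigquery

client = bigquery.Client()

# One dataset per region; each regional security group gets READER on its dataset.
dataset = client.get_dataset("my-project.sales_emea")  # assumed dataset ID
entries = list(dataset.access_entries)
entries.append(
    bigquery.AccessEntry(
        role="READER",
        entity_type="groupByEmail",
        entity_id="emea-analysts@example.com",  # assumed group
    )
)
dataset.access_entries = entries
client.update_dataset(dataset, ["access_entries"])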

Question # 11

You are deploying a new storage system for your mobile application, which is a media streaming service. You decide the best fit is Google Cloud Datastore. You have entities with multiple properties, some of which can take on multiple values. For example, in the entity ‘Movie’ the property ‘actors’ and the property ‘tags’ have multiple values but the property ‘date released’ does not. A typical query would ask for all movies with actor= ordered by date_released or all movies with tag=Comedy ordered by date_released. How should you avoid a combinatorial explosion in the number of indexes?

[Image: Professional-Data-Engineer question answer options — figure not reproduced]

A.

Option A

B.

Option B

C.

Option C

D.

Option D

Question # 12

Flowlogistic’s management has determined that the current Apache Kafka servers cannot handle the data volume for their real-time inventory tracking system. You need to build a new system on Google Cloud Platform (GCP) that will feed the proprietary tracking software. The system must be able to ingest data from a variety of global sources, process and query in real-time, and store the data reliably. Which combination of GCP products should you choose?

A.

Cloud Pub/Sub, Cloud Dataflow, and Cloud Storage

B.

Cloud Pub/Sub, Cloud Dataflow, and Local SSD

C.

Cloud Pub/Sub, Cloud SQL, and Cloud Storage

D.

Cloud Load Balancing, Cloud Dataflow, and Cloud Storage

Question # 13

You work for a manufacturing plant that batches application log files together into a single log file once a day at 2:00 AM. You have written a Google Cloud Dataflow job to process that log file. You need to make sure the log file is processed once per day as inexpensively as possible. What should you do?

A.

Change the processing job to use Google Cloud Dataproc instead.

B.

Manually start the Cloud Dataflow job each morning when you get into the office.

C.

Create a cron job with Google App Engine Cron Service to run the Cloud Dataflow job.

D.

Configure the Cloud Dataflow job as a streaming job so that it processes the log data immediately.

Question # 14

Flowlogistic is rolling out their real-time inventory tracking system. The tracking devices will all send package-tracking messages, which will now go to a single Google Cloud Pub/Sub topic instead of the Apache Kafka cluster. A subscriber application will then process the messages for real-time reporting and store them in Google BigQuery for historical analysis. You want to ensure the package data can be analyzed over time.

Which approach should you take?

A.

Attach the timestamp on each message in the Cloud Pub/Sub subscriber application as they are received.

B.

Attach the timestamp and Package ID on the outbound message from each publisher device as they are sent to Cloud Pub/Sub.

C.

Use the NOW() function in BigQuery to record the event’s time.

D.

Use the automatically generated timestamp from Cloud Pub/Sub to order the data.
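
For context, attaching a timestamp and package ID at publish time (option B) might look like the following sketch with the google-cloud-pubsub Python client; the topic, attribute names, and payload are assumptions:

from datetime import datetime, timezone
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "package-tracking")  # assumed names

# Pub/Sub attributes are plain string key/value pairs set by the publisher,
# so the original event time survives any later ingestion delays.
future = publisher.publish(
    topic_path,
    b'{"status": "IN_TRANSIT"}',
    event_timestamp=datetime.now(timezone.utc).isoformat(),
    package_id="PKG-000123",
)
print(future.result())  # message ID once the publish succeeds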

Question # 15

Flowlogistic wants to use Google BigQuery as their primary analysis system, but they still have Apache Hadoop and Spark workloads that they cannot move to BigQuery. Flowlogistic does not know how to store the data that is common to both workloads. What should they do?

A.

Store the common data in BigQuery as partitioned tables.

B.

Store the common data in BigQuery and expose authorized views.

C.

Store the common data encoded as Avro in Google Cloud Storage.

D.

Store the common data in the HDFS storage for a Google Cloud Dataproc cluster.

Question # 16

Your company is loading comma-separated values (CSV) files into Google BigQuery. The data is fully imported successfully; however, the imported data is not matching byte-to-byte to the source file. What is the most likely cause of this problem?

A.

The CSV data loaded in BigQuery is not flagged as CSV.

B.

The CSV data has invalid rows that were skipped on import.

C.

The CSV data loaded in BigQuery is not using BigQuery’s default encoding.

D.

The CSV data has not gone through an ETL phase before loading into BigQuery.

Question # 17

Your company’s customer and order databases are often under heavy load. This makes performing analytics against them difficult without harming operations. The databases are in a MySQL cluster, with nightly backups taken using mysqldump. You want to perform analytics with minimal impact on operations. What should you do?

A.

Add a node to the MySQL cluster and build an OLAP cube there.

B.

Use an ETL tool to load the data from MySQL into Google BigQuery.

C.

Connect an on-premises Apache Hadoop cluster to MySQL and perform ETL.

D.

Mount the backups to Google Cloud SQL, and then process the data using Google Cloud Dataproc.

Question # 18

You are building a model to make clothing recommendations. You know a user’s fashion preference is likely to change over time, so you build a data pipeline to stream new data back to the model as it becomes available. How should you use this data to train the model?

A.

Continuously retrain the model on just the new data.

B.

Continuously retrain the model on a combination of existing data and the new data.

C.

Train on the existing data while using the new data as your test set.

D.

Train on the new data while using the existing data as your test set.

Question # 19

You designed a database for patient records as a pilot project to cover a few hundred patients in three clinics. Your design used a single database table to represent all patients and their visits, and you used self-joins to generate reports. The server resource utilization was at 50%. Since then, the scope of the project has expanded. The database must now store 100 times more patient records. You can no longer run the reports, because they either take too long or they encounter errors with insufficient compute resources. How should you adjust the database design?

A.

Add capacity (memory and disk space) to the database server by a factor of 200.

B.

Shard the tables into smaller ones based on date ranges, and only generate reports with prespecified date ranges.

C.

Normalize the master patient-record table into the patient table and the visits table, and create other necessary tables to avoid self-join.

D.

Partition the table into smaller tables, with one for each clinic. Run queries against the smaller table pairs, and use unions for consolidated reports.

Question # 20

You need to store and analyze social media postings in Google BigQuery at a rate of 10,000 messages per minute in near real-time. You initially design the application to use streaming inserts for individual postings. Your application also performs data aggregations right after the streaming inserts. You discover that the queries after streaming inserts do not exhibit strong consistency, and reports from the queries might miss in-flight data. How can you adjust your application design?

A.

Re-write the application to load accumulated data every 2 minutes.

B.

Convert the streaming insert code to batch load for individual messages.

C.

Load the original message to Google Cloud SQL, and export the table every hour to BigQuery via streaming inserts.

D.

Estimate the average latency for data availability after streaming inserts, and always run queries after waiting twice as long.

Question # 21

Your company handles data processing for a number of different clients. Each client prefers to use their own suite of analytics tools, with some allowing direct query access via Google BigQuery. You need to secure the data so that clients cannot see each other’s data. You want to ensure appropriate access to the data. Which three steps should you take? (Choose three.)

A.

Load data into different partitions.

B.

Load data into a different dataset for each client.

C.

Put each client’s BigQuery dataset into a different table.

D.

Restrict a client’s dataset to approved users.

E.

Only allow a service account to access the datasets.

F.

Use the appropriate identity and access management (IAM) roles for each client’s users.

Question # 22

You are designing a cloud-native historical data processing system to meet the following conditions:

The data being analyzed is in CSV, Avro, and PDF formats and will be accessed by multiple analysis tools including Cloud Dataproc, BigQuery, and Compute Engine.

A streaming data pipeline stores new data daily.

Performance is not a factor in the solution.

The solution design should maximize availability.

How should you design data storage for this solution?

A.

Create a Cloud Dataproc cluster with high availability. Store the data in HDFS, and perform analysis as needed.

B.

Store the data in BigQuery. Access the data using the BigQuery Connector or Cloud Dataproc and Compute Engine.

C.

Store the data in a regional Cloud Storage bucket. Access the bucket directly using Cloud Dataproc, BigQuery, and Compute Engine.

D.

Store the data in a multi-regional Cloud Storage bucket. Access the data directly using Cloud Dataproc, BigQuery, and Compute Engine.

Question # 23

You work for a large fast food restaurant chain with over 400,000 employees. You store employee information in Google BigQuery in a Users table consisting of a FirstName field and a LastName field. A member of IT is building an application and asks you to modify the schema and data in BigQuery so the application can query a FullName field consisting of the value of the FirstName field concatenated with a space, followed by the value of the LastName field for each employee. How can you make that data available while minimizing cost?

A.

Create a view in BigQuery that concatenates the FirstName and LastName field values to produce the FullName.

B.

Add a new column called FullName to the Users table. Run an UPDATE statement that updates the FullName column for each user with the concatenation of the FirstName and LastName values.

C.

Create a Google Cloud Dataflow job that queries BigQuery for the entire Users table, concatenates the FirstName value and LastName value for each user, and loads the proper values for FirstName, LastName, and FullName into a new table in BigQuery.

D.

Use BigQuery to export the data for the table to a CSV file. Create a Google Cloud Dataproc job to process the CSV file and output a new CSV file containing the proper values for FirstName, LastName and FullName. Run a BigQuery load job to load the new CSV file into BigQuery.
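
A minimal sketch of the view approach in option A, run through the google-cloud-bigquery Python client; the project and dataset names are assumptions:

from google.cloud import bigquery

client = bigquery.Client()

# The view computes FullName at query time, so no extra storage or UPDATE
# scans are needed.
client.query("""
CREATE OR REPLACE VIEW `my-project.hr.UsersWithFullName` AS
SELECT
  FirstName,
  LastName,
  CONCAT(FirstName, ' ', LastName) AS FullName
FROM `my-project.hr.Users`
""").result()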

Question # 24

You are designing the database schema for a machine learning-based food ordering service that will predict what users want to eat. Here is some of the information you need to store:

The user profile: What the user likes and doesn’t like to eat

The user account information: Name, address, preferred meal times

The order information: When orders are made, from where, to whom

The database will be used to store all the transactional data of the product. You want to optimize the data schema. Which Google Cloud Platform product should you use?

A.

BigQuery

B.

Cloud SQL

C.

Cloud Bigtable

D.

Cloud Datastore

Question # 25

You work for a global shipping company. You want to train a model on 40 TB of data to predict which ships in each geographic region are likely to cause delivery delays on any given day. The model will be based on multiple attributes collected from multiple sources. Telemetry data, including location in GeoJSON format, will be pulled from each ship and loaded every hour. You want to have a dashboard that shows how many and which ships are likely to cause delays within a region. You want to use a storage solution that has native functionality for prediction and geospatial processing. Which storage solution should you use?

A.

BigQuery

B.

Cloud Bigtable

C.

Cloud Datastore

D.

Cloud SQL for PostgreSQL

Question # 26

The marketing team at your organization provides regular updates of a segment of your customer dataset. The marketing team has given you a CSV with 1 million records that must be updated in BigQuery. When you use the UPDATE statement in BigQuery, you receive a quotaExceeded error. What should you do?

A.

Reduce the number of records updated each day to stay within the BigQuery UPDATE DML statement limit.

B.

Increase the BigQuery UPDATE DML statement limit in the Quota management section of the Google Cloud Platform Console.

C.

Split the source CSV file into smaller CSV files in Cloud Storage to reduce the number of BigQuery UPDATE DML statements per BigQuery job.

D.

Import the new records from the CSV file into a new BigQuery table. Create a BigQuery job that merges the new records with the existing records and writes the results to a new BigQuery table.
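
One common pattern behind option D is to load the CSV into a staging table and run a single MERGE; a sketch with assumed table and column names, using the google-cloud-bigquery Python client:

from google.cloud import bigquery

client = bigquery.Client()

# One MERGE job applies all 1 million updates instead of many UPDATE statements,
# so the DML quota is no longer the bottleneck.
client.query("""
MERGE `my-project.crm.customers` AS target
USING `my-project.crm.marketing_updates` AS source
ON target.customer_id = source.customer_id
WHEN MATCHED THEN
  UPDATE SET target.segment = source.segment, target.email = source.email
WHEN NOT MATCHED THEN
  INSERT (customer_id, segment, email)
  VALUES (source.customer_id, source.segment, source.email)
""").result()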

Question # 27

You are choosing a NoSQL database to handle telemetry data submitted from millions of Internet-of-Things (IoT) devices. The volume of data is growing at 100 TB per year, and each data entry has about 100 attributes. The data processing pipeline does not require atomicity, consistency, isolation, and durability (ACID). However, high availability and low latency are required.

You need to analyze the data by querying against individual fields. Which three databases meet your requirements? (Choose three.)

A.

Redis

B.

HBase

C.

MySQL

D.

MongoDB

E.

Cassandra

F.

HDFS with Hive

Question # 28

You are deploying a batch pipeline in Dataflow. This pipeline reads data from Cloud Storage, transforms the data, and then writes the data into BigQuery. The security team has enabled an organizational constraint in Google Cloud, requiring all Compute Engine instances to use only internal IP addresses and no external IP addresses. What should you do?

A.

Ensure that the firewall rules allow access to Cloud Storage and BigQuery. Use Dataflow with only internal IPs.

B.

Ensure that your workers have network tags to access Cloud Storage and BigQuery. Use Dataflow with only internal IP addresses.

C.

Create a VPC Service Controls perimeter that contains the VPC network and add Dataflow, Cloud Storage, and BigQuery as allowed services in the perimeter. Use Dataflow with only internal IP addresses.

D.

Ensure that Private Google Access is enabled in the subnetwork. Use Dataflow with only internal IP addresses.

Question # 29

You work for an economic consulting firm that helps companies identify economic trends as they happen. As part of your analysis, you use Google BigQuery to correlate customer data with the average prices of the 100 most common goods sold, including bread, gasoline, milk, and others. The average prices of these goods are updated every 30 minutes. You want to make sure this data stays up to date so you can combine it with other data in BigQuery as cheaply as possible. What should you do?

A.

Load the data every 30 minutes into a new partitioned table in BigQuery.

B.

Store and update the data in a regional Google Cloud Storage bucket and create a federated data source in BigQuery

C.

Store the data in Google Cloud Datastore. Use Google Cloud Dataflow to query BigQuery and combine the data programmatically with the data stored in Cloud Datastore

D.

Store the data in a file in a regional Google Cloud Storage bucket. Use Cloud Dataflow to query BigQuery and combine the data programmatically with the data stored in Google Cloud Storage.

Question # 30

You have a variety of files in Cloud Storage that your data science team wants to use in their models. Currently, users do not have a method to explore, cleanse, and validate the data in Cloud Storage. You are looking for a low-code solution that can be used by your data science team to quickly cleanse and explore data within Cloud Storage. What should you do?

A.

Load the data into BigQuery and use SQL to transform the data as necessary Provide the data science team access to staging tables to explore the raw data.

B.

Provide the data science team access to Dataflow to create a pipeline to prepare and validate the raw data and load data into BigQuery for data exploration.

C.

Provide the data science team access to Dataprep to prepare, validate, and explore the data within Cloud Storage.

D.

Create an external table in BigQuery and use SQL to transform the data as necessary Provide the data science team access to the external tables to explore the raw data.

Question # 31

Your company produces 20,000 files every hour. Each data file is formatted as a comma separated values (CSV) file that is less than 4 KB. All files must be ingested on Google Cloud Platform before they can be processed. Your company site has a 200 ms latency to Google Cloud, and your Internet connection bandwidth is limited to 50 Mbps. You currently deploy a secure FTP (SFTP) server on a virtual machine in Google Compute Engine as the data ingestion point. A local SFTP client runs on a dedicated machine to transmit the CSV files as is. The goal is to make reports with data from the previous day available to the executives by 10:00 a.m. each day. This design is barely able to keep up with the current volume, even though the bandwidth utilization is rather low.

You are told that due to seasonality, your company expects the number of files to double for the next three months. Which two actions should you take? (choose two.)

A.

Introduce data compression for each file to increase the rate of file transfer.

B.

Contact your internet service provider (ISP) to increase your maximum bandwidth to at least 100 Mbps.

C.

Redesign the data ingestion process to use the gsutil tool to send the CSV files to a storage bucket in parallel.

D.

Assemble 1,000 files into a tape archive (TAR) file. Transmit the TAR files instead, and disassemble the CSV files in the cloud upon receiving them.

E.

Create an S3-compatible storage endpoint in your network, and use Google Cloud Storage Transfer Service to transfer on-premises data to the designated storage bucket.

Question # 32

An organization maintains a Google BigQuery dataset that contains tables with user-level data. They want to expose aggregates of this data to other Google Cloud projects, while still controlling access to the user-level data. Additionally, they need to minimize their overall storage cost and ensure the analysis cost for other projects is assigned to those projects. What should they do?

A.

Create and share an authorized view that provides the aggregate results.

B.

Create and share a new dataset and view that provides the aggregate results.

C.

Create and share a new dataset and table that contains the aggregate results.

D.

Create dataViewer Identity and Access Management (IAM) roles on the dataset to enable sharing.
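
To illustrate the authorized-view idea in option A, here is a hedged sketch with the google-cloud-bigquery Python client; all project, dataset, and table names are invented:

from google.cloud import bigquery

client = bigquery.Client()

# The aggregate view lives in a dataset that is shared with other projects.
view = bigquery.Table("my-project.shared_aggregates.daily_totals")
view.view_query = """
    SELECT event_date, COUNT(*) AS events
    FROM `my-project.private_data.user_events`
    GROUP BY event_date
"""
client.create_table(view)

# Authorizing the view on the source dataset lets readers query the aggregates
# without ever having access to the user-level rows.
source_dataset = client.get_dataset("my-project.private_data")
entries = list(source_dataset.access_entries)
entries.append(
    bigquery.AccessEntry(
        role=None,
        entity_type="view",
        entity_id={
            "projectId": "my-project",
            "datasetId": "shared_aggregates",
            "tableId": "daily_totals",
        },
    )
)
source_dataset.access_entries = entries
client.update_dataset(source_dataset, ["access_entries"])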

Question # 33

You have an upstream process that writes data to Cloud Storage. This data is then read by an Apache Spark job that runs on Dataproc. These jobs are run in the us-central1 region, but the data could be stored anywhere in the United States. You need to have a recovery process in place in case of a catastrophic single region failure. You need an approach with a maximum of 15 minutes of data loss (RPO=15 mins). You want to ensure that there is minimal latency when reading the data. What should you do?

A.

1. Create a dual-region Cloud Storage bucket in the us-central1 and us-south1 regions. 2. Enable turbo replication. 3. Run the Dataproc cluster in a zone in the us-central1 region, reading from the bucket in the us-south1 region. 4. In case of a regional failure, redeploy your Dataproc cluster to the us-south1 region and continue reading from the same bucket.

B.

1. Create a dual-region Cloud Storage bucket in the us-central1 and us-south1 regions. 2. Enable turbo replication. 3. Run the Dataproc cluster in a zone in the us-central1 region, reading from the bucket in the same region. 4. In case of a regional failure, redeploy the Dataproc clusters to the us-south1 region and read from the same bucket.

C.

1. Create a Cloud Storage bucket in the US multi-region. 2. Run the Dataproc cluster in a zone in the us-central1 region, reading data from the US multi-region bucket. 3. In case of a regional failure, redeploy the Dataproc cluster to the us-central2 region and continue reading from the same bucket.

D.

1. Create two regional Cloud Storage buckets, one in the us-central1 region and one in the us-south1 region. 2. Have the upstream process write data to the us-central1 bucket. Use the Storage Transfer Service to copy data hourly from the us-central1 bucket to the us-south1 bucket. 3. Run the Dataproc cluster in a zone in the us-central1 region, reading from the bucket in that region. 4. In case of regional failure, redeploy your Dataproc cluster

Question # 34

You work for a large real estate firm and are preparing 6 TB of home sales data to be used for machine learning. You will use SQL to transform the data and use BigQuery ML to create a machine learning model. You plan to use the model for predictions against a raw dataset that has not been transformed. How should you set up your workflow in order to prevent skew at prediction time?

A.

When creating your model, use BigQuery's TRANSFORM clause to define preprocessing steps. At prediction time, use BigQuery's ML.EVALUATE clause without specifying any transformations on the raw input data.

B.

When creating your model, use BigQuery's TRANSFORM clause to define preprocessing steps. Before requesting predictions, use a saved query to transform your raw input data, and then use ML.EVALUATE.

C.

Use a BigQuery view to define your preprocessing logic. When creating your model, use the view as your model training data. At prediction time, use BigQuery's ML.EVALUATE clause without specifying any transformations on the raw input data.

D.

Preprocess all data using Dataflow. At prediction time, use BigQuery's ML.EVALUATE clause without specifying any further transformations on the input data.
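
As a rough sketch of declaring preprocessing in the model itself (the TRANSFORM clause referenced in options A and B), with invented dataset and column names; BigQuery ML replays these steps automatically when the model is later used for predictions:

from google.cloud import bigquery

client = bigquery.Client()

# Preprocessing lives inside the model definition, so raw rows can be passed
# straight to it later without re-applying the transformations by hand.
client.query("""
CREATE OR REPLACE MODEL `my-project.real_estate.price_model`
TRANSFORM(
  ML.STANDARD_SCALER(square_feet) OVER() AS square_feet_scaled,
  ML.QUANTILE_BUCKETIZE(year_built, 10) OVER() AS year_built_bucket,
  sale_price
)
OPTIONS(model_type = 'linear_reg', input_label_cols = ['sale_price']) AS
SELECT square_feet, year_built, sale_price
FROM `my-project.real_estate.home_sales`
""").result()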

Question # 35

Your company has data assets across multiple Cloud Storage buckets and BigQuery datasets containing raw and processed data. The requirement is to establish a unified data governance framework that allows for centralized metadata discovery, data quality monitoring, and consistent security policy application across these various data stores without physically moving or duplicating the data. You need to implement a solution to achieve this federated governance. What should you do?

A.

Deploy a centralized Cloud SQL database to store metadata extracted from BigQuery and Cloud Storage using custom scripts.

Integrate the database with Looker Studio for data discovery and visualization.

Implement a custom policy engine using Cloud Run functions triggered by changes in IAM policies to enforce consistent security across projects.

B.

Create a Looker Studio dashboard on BigQuery INFORMATION_SCHEMA views to visualize and monitor data quality.

Manage security using IAM policies at the project level, supplemented by BigQuery authorized views for granular access control.

C.

Export metadata out of Dataplex Universal Catalog by running a metadata export job.

Implement Dataproc Metastore to manage table schemas and Apache Hive metastore for metadata discovery.

Manage security using a combination of BigQuery row-level security and Cloud Storage policies.

D.

Use Dataplex to organize the BigQuery datasets and Cloud Storage buckets into lakes and zones.

Use Dataplex for automated metadata discovery, centralized security policy management, data profiling, and data quality tasks.

Question # 36

You work for a manufacturing company that sources up to 750 different components, each from a different supplier. You’ve collected a labeled dataset that has on average 1000 examples for each unique component. Your team wants to implement an app to help warehouse workers recognize incoming components based on a photo of the component. You want to implement the first working version of this app (as Proof-Of-Concept) within a few working days. What should you do?

A.

Use Cloud Vision AutoML with the existing dataset.

B.

Use Cloud Vision AutoML, but reduce your dataset twice.

C.

Use Cloud Vision API by providing custom labels as recognition hints.

D.

Train your own image recognition model leveraging transfer learning techniques.

Question # 37

Your organization uses a multi-cloud data storage strategy, storing data in Cloud Storage and in Amazon Web Services' (AWS) S3 storage buckets. All data resides in US regions. You want to query up-to-date data by using BigQuery, regardless of which cloud the data is stored in. You need to allow users to query the tables from BigQuery without giving them direct access to the data in the storage buckets. What should you do?

A.

Set up a BigQuery Omni connection to the AWS S3 bucket data. Create BigLake tables over the Cloud Storage and S3 data, and query the data using BigQuery directly.

B.

Set up a BigQuery Omni connection to the AWS S3 bucket data. Create external tables over the Cloud Storage and S3 data, and query the data using BigQuery directly.

C.

Use the Storage Transfer Service to copy data from the AWS S3 buckets to Cloud Storage buckets. Create BigLake tables over the Cloud Storage data, and query the data using BigQuery directly.

D.

Use the Storage Transfer Service to copy data from the AWS S3 buckets to Cloud Storage buckets. Create external tables over the Cloud Storage data, and query the data using BigQuery directly.

Question # 38

Your company is performing data preprocessing for a learning algorithm in Google Cloud Dataflow. Numerous data logs are being generated during this step, and the team wants to analyze them. Due to the dynamic nature of the campaign, the data is growing exponentially every hour.

The data scientists have written the following code to read the data for new key features in the logs.

BigQueryIO.Read

.named("ReadLogData")

.from("clouddataflow-readonly:samples.log_data")

You want to improve the performance of this data read. What should you do?

A.

Specify the TableReference object in the code.

B.

Use .fromQuery operation to read specific fields from the table.

C.

Use of both the Google BigQuery TableSchema and TableFieldSchema classes.

D.

Call a transform that returns TableRow objects, where each element in the PCollection represents a single row in the table.
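
The code in the question uses the Dataflow Java SDK; as a hedged illustration of option B's idea (read only the fields you need via a query), the equivalent in the Apache Beam Python SDK might look like this, with assumed field names:

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

with beam.Pipeline(options=PipelineOptions()) as p:
    logs = (
        p
        # Reading through a query lets BigQuery project only the needed columns.
        | "ReadLogData" >> beam.io.ReadFromBigQuery(
            query="SELECT request_id, latency_ms "
                  "FROM `clouddataflow-readonly.samples.log_data`",
            use_standard_sql=True,
        )
        | "PrintRow" >> beam.Map(print)
    )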

Question # 39

You want to use Google Stackdriver Logging to monitor Google BigQuery usage. You need an instant notification to be sent to your monitoring tool when new data is appended to a certain table using an insert job, but you do not want to receive notifications for other tables. What should you do?

A.

Make a call to the Stackdriver API to list all logs, and apply an advanced filter.

B.

In the Stackdriver logging admin interface, enable a log sink export to BigQuery.

C.

In the Stackdriver logging admin interface, enable a log sink export to Google Cloud Pub/Sub, and subscribe to the topic from your monitoring tool.

D.

Using the Stackdriver API, create a project sink with advanced log filter to export to Pub/Sub, and subscribe to the topic from your monitoring tool.
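
For illustration, creating a filtered sink that exports matching BigQuery audit log entries to a Pub/Sub topic (option D) might look like the sketch below with the google-cloud-logging Python client; the filter expression, table name, and topic are assumptions rather than the exact required values:

from google.cloud import logging

client = logging.Client()

# Illustrative filter: only completed insert (load) jobs against one table should match.
log_filter = (
    'resource.type="bigquery_resource" '
    'AND protoPayload.methodName="jobservice.jobcompleted" '
    'AND protoPayload.serviceData.jobCompletedEvent.job.jobConfiguration.load.'
    'destinationTable.tableId="sensor_events"'
)

sink = client.sink(
    "sensor-events-insert-sink",
    filter_=log_filter,
    destination="pubsub.googleapis.com/projects/my-project/topics/bq-inserts",
)
sink.create()  # the monitoring tool then subscribes to the bq-inserts topic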

Question # 40

Your startup has never implemented a formal security policy. Currently, everyone in the company has access to the datasets stored in Google BigQuery. Teams have freedom to use the service as they see fit, and they have not documented their use cases. You have been asked to secure the data warehouse. You need to discover what everyone is doing. What should you do first?

A.

Use Google Stackdriver Audit Logs to review data access.

B.

Get the identity and access management (IAM) policy of each table.

C.

Use Stackdriver Monitoring to see the usage of BigQuery query slots.

D.

Use the Google Cloud Billing API to see what account the warehouse is being billed to.

Question # 41

Your weather app queries a database every 15 minutes to get the current temperature. The frontend is powered by Google App Engine and serves millions of users. How should you design the frontend to respond to a database failure?

A.

Issue a command to restart the database servers.

B.

Retry the query with exponential backoff, up to a cap of 15 minutes.

C.

Retry the query every second until it comes back online to minimize staleness of data.

D.

Reduce the query frequency to once every hour until the database comes back online.

Question # 42

You have a job that you want to cancel. It is a streaming pipeline, and you want to ensure that any data that is in-flight is processed and written to the output. Which of the following commands can you use on the Dataflow monitoring console to stop the pipeline job?

A.

Cancel

B.

Drain

C.

Stop

D.

Finish

Question # 43

Your company maintains a hybrid deployment with GCP, where analytics are performed on your anonymized customer data. The data are imported to Cloud Storage from your data center through parallel uploads to a data transfer server running on GCP. Management informs you that the daily transfers take too long and have asked you to fix the problem. You want to maximize transfer speeds. Which action should you take?

A.

Increase the CPU size on your server.

B.

Increase the size of the Google Persistent Disk on your server.

C.

Increase your network bandwidth from your datacenter to GCP.

D.

Increase your network bandwidth from Compute Engine to Cloud Storage.

Question # 44

You are designing a data mesh on Google Cloud by using Dataplex to manage data in BigQuery and Cloud Storage. You want to simplify data asset permissions. You are creating a customer virtual lake with two user groups:

  • Data engineers, who require full data lake access

  • Analytic users, who require access to curated data

You need to assign access rights to these two groups. What should you do?

A.

1. Grant the dataplex.dataOwner role to the data engineer group on the customer data lake. 2. Grant the dataplex.dataReader role to the analytic user group on the customer curated zone.

B.

1. Grant the dataplex.dataReader role to the data engineer group on the customer data lake. 2. Grant the dataplex.dataOwner role to the analytic user group on the customer curated zone.

C.

1. Grant the bigquery.dataOwner role on BigQuery datasets and the storage.objectCreator role on Cloud Storage buckets to data engineers. 2. Grant the bigquery.dataViewer role on BigQuery datasets and the storage.objectViewer role on Cloud Storage buckets to analytic users.

D.

1. Grant the bigquery.dataViewer role on BigQuery datasets and the storage.objectViewer role on Cloud Storage buckets to data engineers. 2. Grant the bigquery.dataOwner role on BigQuery datasets and the storage.objectEditor role on Cloud Storage buckets to analytic users.

Question # 45

Your infrastructure team has set up an interconnect link between Google Cloud and the on-premises network. You are designing a high-throughput streaming pipeline to ingest data in streaming from an Apache Kafka cluster hosted on-premises. You want to store the data in BigQuery with as little latency as possible. What should you do?

A.

Use a proxy host in the VPC in Google Cloud connecting to Kafka. Write a Dataflow pipeline, read data from the proxy host, and write the data to BigQuery.

B.

Set up a Kafka Connect bridge between Kafka and Pub/Sub. Use a Google-provided Dataflow template to read the data from Pub/Sub, and write the data to BigQuery.

C.

Set up a Kafka Connect bridge between Kafka and Pub/Sub. Write a Dataflow pipeline, read the data from Pub/Sub, and write the data to BigQuery.

D.

Use Dataflow, write a pipeline that reads the data from Kafka, and writes the data to BigQuery.

Question # 46

What are two of the benefits of using denormalized data structures in BigQuery?

A.

Reduces the amount of data processed, reduces the amount of storage required

B.

Increases query speed, makes queries simpler

C.

Reduces the amount of storage required, increases query speed

D.

Reduces the amount of data processed, increases query speed

Question # 47

Which of the following statements is NOT true regarding Bigtable access roles?

A.

Using IAM roles, you cannot give a user access to only one table in a project, rather than all tables in a project.

B.

To give a user access to only one table in a project, grant the user the Bigtable Editor role for that table.

C.

You can configure access control only at the project level.

D.

To give a user access to only one table in a project, you must configure access through your application.

Question # 48

Which of the following is not true about Dataflow pipelines?

A.

Pipelines are a set of operations

B.

Pipelines represent a data processing job

C.

Pipelines represent a directed graph of steps

D.

Pipelines can share data between instances

Question # 49

What is the general recommendation when designing your row keys for a Cloud Bigtable schema?

A.

Include multiple time series values within the row key

B.

Keep the row key as an 8-bit integer

C.

Keep your row key reasonably short

D.

Keep your row key as long as the field permits

Question # 50

Which is not a valid reason for poor Cloud Bigtable performance?

A.

The workload isn't appropriate for Cloud Bigtable.

B.

The table's schema is not designed correctly.

C.

The Cloud Bigtable cluster has too many nodes.

D.

There are issues with the network connection.

Question # 51

You have a data processing application that runs on Google Kubernetes Engine (GKE). Containers need to be launched with their latest available configurations from a container registry. Your GKE nodes need to have GPUs, local SSDs, and 8 Gbps bandwidth. You want to efficiently provision the data processing infrastructure and manage the deployment process. What should you do?

A.

Use Compute Engine startup scripts to pull container images, and use gcloud commands to provision the infrastructure.

B.

Use GKE to autoscale containers, and use gcloud commands to provision the infrastructure.

C.

Use Cloud Build to schedule a job that uses Terraform to provision the infrastructure and launch with the most current container images.

D.

Use Dataflow to provision the data pipeline, and use Cloud Scheduler to run the job.

Question # 52

Which of the following is not possible using primitive roles?

A.

Give a user viewer access to BigQuery and owner access to Google Compute Engine instances.

B.

Give UserA owner access and UserB editor access for all datasets in a project.

C.

Give a user access to view all datasets in a project, but not run queries on them.

D.

Give GroupA owner access and GroupB editor access for all datasets in a project.

Question # 53

Which of these numbers are adjusted by a neural network as it learns from a training dataset (select 2 answers)?

A.

Weights

B.

Biases

C.

Continuous features

D.

Input values

Question # 54

You are developing a software application using Google's Dataflow SDK, and want to use conditionals, for loops, and other complex programming structures to create a branching pipeline. Which component will be used for the data processing operation?

A.

PCollection

B.

Transform

C.

Pipeline

D.

Sink API

Question # 55

Your company has recently grown rapidly and is now ingesting data at a significantly higher rate than it was previously. You manage the daily batch MapReduce analytics jobs in Apache Hadoop. However, the recent increase in data has meant the batch jobs are falling behind. You were asked to recommend ways the development team could increase the responsiveness of the analytics without increasing costs. What should you recommend they do?

A.

Rewrite the job in Pig.

B.

Rewrite the job in Apache Spark.

C.

Increase the size of the Hadoop cluster.

D.

Decrease the size of the Hadoop cluster but also rewrite the job in Hive.

Question # 56

Your company is streaming real-time sensor data from their factory floor into Bigtable and they have noticed extremely poor performance. How should the row key be redesigned to improve Bigtable performance on queries that populate real-time dashboards?

A.

Use a row key of the form .

B.

Use a row key of the form .

C.

Use a row key of the form #.

D.

Use a row key of the form >##.

Question # 57

You want to use a database of information about tissue samples to classify future tissue samples as either normal or mutated. You are evaluating an unsupervised anomaly detection method for classifying the tissue samples. Which two characteristics support this method? (Choose two.)

A.

There are very few occurrences of mutations relative to normal samples.

B.

There are roughly equal occurrences of both normal and mutated samples in the database.

C.

You expect future mutations to have different features from the mutated samples in the database.

D.

You expect future mutations to have similar features to the mutated samples in the database.

E.

You already have labels for which samples are mutated and which are normal in the database.

Question # 58

You have Google Cloud Dataflow streaming pipeline running with a Google Cloud Pub/Sub subscription as the source. You need to make an update to the code that will make the new Cloud Dataflow pipeline incompatible with the current version. You do not want to lose any data when making this update. What should you do?

A.

Update the current pipeline and use the drain flag.

B.

Update the current pipeline and provide the transform mapping JSON object.

C.

Create a new pipeline that has the same Cloud Pub/Sub subscription and cancel the old pipeline.

D.

Create a new pipeline that has a new Cloud Pub/Sub subscription and cancel the old pipeline.

Question # 59

Your company is using wildcard tables to query data across multiple tables with similar names. The SQL statement is currently failing with the following error:

# Syntax error : Expected end of statement but got "-" at [4:11]

SELECT age

FROM

bigquery-public-data.noaa_gsod.gsod

WHERE

age != 99

AND_TABLE_SUFFIX = '1929'

ORDER BY

age DESC

Which table name will make the SQL statement work correctly?

A.

'bigquery-public-data.noaa_gsod.gsod'

B.

bigquery-public-data.noaa_gsod.gsod*

C.

'bigquery-public-data.noaa_gsod.gsod'*

D.

`bigquery-public-data.noaa_gsod.gsod*`
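
For reference, a sketch of the repaired query using the backtick-quoted wildcard form of the table name (option D's shape) together with _TABLE_SUFFIX as a separate token, run through the google-cloud-bigquery Python client; the column names mirror the question rather than the public dataset's real schema:

from google.cloud import bigquery

client = bigquery.Client()

# 'age' mirrors the question's fictional column; the real noaa_gsod schema differs.
query = """
SELECT age
FROM `bigquery-public-data.noaa_gsod.gsod*`
WHERE age != 99
  AND _TABLE_SUFFIX = '1929'
ORDER BY age DESC
"""
for row in client.query(query).result():
    print(row.age)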

Question # 60

Your company’s on-premises Apache Hadoop servers are approaching end-of-life, and IT has decided to migrate the cluster to Google Cloud Dataproc. A like-for-like migration of the cluster would require 50 TB of Google Persistent Disk per node. The CIO is concerned about the cost of using that much block storage. You want to minimize the storage cost of the migration. What should you do?

A.

Put the data into Google Cloud Storage.

B.

Use preemptible virtual machines (VMs) for the Cloud Dataproc cluster.

C.

Tune the Cloud Dataproc cluster so that there is just enough disk for all data.

D.

Migrate some of the cold data into Google Cloud Storage, and keep only the hot data in Persistent Disk.

Question # 61

You are building a model to predict whether or not it will rain on a given day. You have thousands of input features and want to see if you can improve training speed by removing some features while having a minimum effect on model accuracy. What can you do?

A.

Eliminate features that are highly correlated to the output labels.

B.

Combine highly co-dependent features into one representative feature.

C.

Instead of feeding in each feature individually, average their values in batches of 3.

D.

Remove the features that have null values for more than 50% of the training records.

Question # 62

Business owners at your company have given you a database of bank transactions. Each row contains the user ID, transaction type, transaction location, and transaction amount. They ask you to investigate what type of machine learning can be applied to the data. Which three machine learning applications can you use? (Choose three.)

A.

Supervised learning to determine which transactions are most likely to be fraudulent.

B.

Unsupervised learning to determine which transactions are most likely to be fraudulent.

C.

Clustering to divide the transactions into N categories based on feature similarity.

D.

Supervised learning to predict the location of a transaction.

E.

Reinforcement learning to predict the location of a transaction.

F.

Unsupervised learning to predict the location of a transaction.

Question # 63

You work for a car manufacturer and have set up a data pipeline using Google Cloud Pub/Sub to capture anomalous sensor events. You are using a push subscription in Cloud Pub/Sub that calls a custom HTTPS endpoint that you have created to take action on these anomalous events as they occur. Your custom HTTPS endpoint keeps getting an inordinate amount of duplicate messages. What is the most likely cause of these duplicate messages?

A.

The message body for the sensor event is too large.

B.

Your custom endpoint has an out-of-date SSL certificate.

C.

The Cloud Pub/Sub topic has too many messages published to it.

D.

Your custom endpoint is not acknowledging messages within the acknowledgement deadline.

Question # 64

Your company is in a highly regulated industry. One of your requirements is to ensure individual users have access only to the minimum amount of information required to do their jobs. You want to enforce this requirement with Google BigQuery. Which three approaches can you take? (Choose three.)

A.

Disable writes to certain tables.

B.

Restrict access to tables by role.

C.

Ensure that the data is encrypted at all times.

D.

Restrict BigQuery API access to approved users.

E.

Segregate data across multiple tables or databases.

F.

Use Google Stackdriver Audit Logging to determine policy violations.

Question # 65

You are deploying 10,000 new Internet of Things devices to collect temperature data in your warehouses globally. You need to process, store and analyze these very large datasets in real time. What should you do?

A.

Send the data to Google Cloud Datastore and then export to BigQuery.

B.

Send the data to Google Cloud Pub/Sub, stream Cloud Pub/Sub to Google Cloud Dataflow, and store the data in Google BigQuery.

C.

Send the data to Cloud Storage and then spin up an Apache Hadoop cluster as needed in Google Cloud Dataproc whenever analysis is required.

D.

Export logs in batch to Google Cloud Storage and then spin up a Google Cloud SQL instance, import the data from Cloud Storage, and run an analysis as needed.

Question # 66

Cloud Dataproc charges you only for what you really use with _____ billing.

A.

month-by-month

B.

minute-by-minute

C.

week-by-week

D.

hour-by-hour

Question # 67

Which of the following statements about Legacy SQL and Standard SQL is not true?

A.

Standard SQL is the preferred query language for BigQuery.

B.

If you write a query in Legacy SQL, it might generate an error if you try to run it with Standard SQL.

C.

One difference between the two query languages is how you specify fully-qualified table names (i.e. table names that include their associated project name).

D.

You need to set a query language for each dataset and the default is Standard SQL.

Question # 68

What are two methods that can be used to denormalize tables in BigQuery?

A.

1) Split table into multiple tables; 2) Use a partitioned table

B.

1) Join tables into one table; 2) Use nested repeated fields

C.

1) Use a partitioned table; 2) Join tables into one table

D.

1) Use nested repeated fields; 2) Use a partitioned table

Question # 69

What is the recommended action to do in order to switch between SSD and HDD storage for your Google Cloud Bigtable instance?

A.

create a third instance and sync the data from the two storage types via batch jobs

B.

export the data from the existing instance and import the data into a new instance

C.

run parallel instances where one is HDD and the other is SDD

D.

the selection is final and you must resume using the same storage type

Question # 70

What are the minimum permissions needed for a service account used with Google Dataproc?

A.

Execute to Google Cloud Storage; write to Google Cloud Logging

B.

Write to Google Cloud Storage; read to Google Cloud Logging

C.

Execute to Google Cloud Storage; execute to Google Cloud Logging

D.

Read and write to Google Cloud Storage; write to Google Cloud Logging

Question # 71

Which of the following are feature engineering techniques? (Select 2 answers)

A.

Hidden feature layers

B.

Feature prioritization

C.

Crossed feature columns

D.

Bucketization of a continuous feature

Question # 72

You are planning to use Google's Dataflow SDK to analyze customer data such as displayed below. Your project requirement is to extract only the customer name from the data source and then write to an output PCollection.

Tom,555 X street

Tim,553 Y street

Sam, 111 Z street

Which operation is best suited for the above data processing requirement?

A.

ParDo

B.

Sink API

C.

Source API

D.

Data extraction
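
A hedged sketch of the ParDo approach (option A) in the Apache Beam Python SDK, using the sample lines from the question as an in-memory source:

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

class ExtractName(beam.DoFn):
    """Emit only the customer name from a 'name,address' line."""
    def process(self, element):
        yield element.split(",")[0].strip()

with beam.Pipeline(options=PipelineOptions()) as p:
    names = (
        p
        | beam.Create(["Tom,555 X street", "Tim,553 Y street", "Sam, 111 Z street"])
        | "ExtractCustomerName" >> beam.ParDo(ExtractName())
        | beam.Map(print)
    )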

Question # 73

Which of these statements about exporting data from BigQuery is false?

A.

To export more than 1 GB of data, you need to put a wildcard in the destination filename.

B.

The only supported export destination is Google Cloud Storage.

C.

Data can only be exported in JSON or Avro format.

D.

The only compression option available is GZIP.

Question # 74

When you store data in Cloud Bigtable, what is the recommended minimum amount of stored data?

A.

500 TB

B.

1 GB

C.

1 TB

D.

500 GB

Question # 75

By default, which of the following windowing behavior does Dataflow apply to unbounded data sets?

A.

Windows at every 100 MB of data

B.

Single, Global Window

C.

Windows at every 1 minute

D.

Windows at every 10 minutes

Question # 76

If a dataset contains rows with individual people and columns for year of birth, country, and income, how many of the columns are continuous and how many are categorical?

A.

1 continuous and 2 categorical

B.

3 categorical

C.

3 continuous

D.

2 continuous and 1 categorical

Question # 77

You are building a Dataflow pipeline to ingest customer feedback. Before loading to your data warehouse, you must validate email addresses and enrich unstructured comment strings with a generative AI sentiment classification. Invalid records need to be routed for manual review. How should you implement this pipeline?

A.

Apply a ParDo transform in Dataflow to validate each element, use a RunInference transform to assign sentiment scores, and use side outputs to route valid/invalid records.

B.

After the Dataflow load completes, execute a Cloud Run function to scan for invalid entries and call Vertex AI to assign sentiment scores.

C.

Use Dataflow to load all data into BigQuery and execute a SQL MERGE statement to flag invalid records and BigQuery ML to assign sentiment scores.

D.

Configure the data source system to pre-validate data before sending it to Dataflow and use the BigQuery ML.GENERATE_TEXT command to assign a sentiment score.
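
To make option A's shape concrete, here is a minimal Apache Beam Python sketch of ParDo validation with tagged side outputs; the email regex and record fields are assumptions, and the sentiment step (for example, a RunInference transform with a model handler) is left out:

import re
import apache_beam as beam
from apache_beam import pvalue
from apache_beam.options.pipeline_options import PipelineOptions

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

class ValidateFeedback(beam.DoFn):
    """Route records with a malformed email to a side output for manual review."""
    def process(self, record):
        if EMAIL_RE.match(record.get("email", "")):
            yield record  # main output: valid records, ready for enrichment
        else:
            yield pvalue.TaggedOutput("invalid", record)

with beam.Pipeline(options=PipelineOptions()) as p:
    feedback = p | beam.Create([
        {"email": "a@example.com", "comment": "great service"},
        {"email": "not-an-email", "comment": "late delivery"},
    ])
    results = feedback | beam.ParDo(ValidateFeedback()).with_outputs(
        "invalid", main="valid"
    )
    results.valid | "HandleValid" >> beam.Map(print)
    results.invalid | "HandleInvalid" >> beam.Map(print)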

Question # 78

You launched a new gaming app almost three years ago. You have been uploading log files from the previous day to a separate Google BigQuery table with the table name format LOGS_yyyymmdd. You have been using table wildcard functions to generate daily and monthly reports for all time ranges. Recently, you discovered that some queries that cover long date ranges are exceeding the limit of 1,000 tables and failing. How can you resolve this issue?

A.

Convert all daily log tables into date-partitioned tables

B.

Convert the sharded tables into a single partitioned table

C.

Enable query caching so you can cache data from previous months

D.

Create separate views to cover each month, and query from these views
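
As a sketch of collapsing the LOGS_yyyymmdd shards into one date-partitioned table (option B), with assumed project and dataset names, using the google-cloud-bigquery Python client:

from google.cloud import bigquery

client = bigquery.Client()

# One partitioned table replaces the daily shards, so long date ranges no longer
# hit the 1,000-table wildcard limit.
client.query("""
CREATE TABLE `my-project.game_logs.logs_partitioned`
PARTITION BY log_date AS
SELECT
  PARSE_DATE('%Y%m%d', _TABLE_SUFFIX) AS log_date,
  t.*
FROM `my-project.game_logs.LOGS_*` AS t
""").result()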

Question # 79

Your United States-based company has created an application for assessing and responding to user actions. The primary table’s data volume grows by 250,000 records per second. Many third parties use your application’s APIs to build the functionality into their own frontend applications. Your application’s APIs should comply with the following requirements:

Single global endpoint

ANSI SQL support

Consistent access to the most up-to-date data

What should you do?

A.

Implement BigQuery with no region selected for storage or processing.

B.

Implement Cloud Spanner with the leader in North America and read-only replicas in Asia and Europe.

C.

Implement Cloud SQL for PostgreSQL with the master in North America and read replicas in Asia and Europe.

D.

Implement Cloud Bigtable with the primary cluster in North America and secondary clusters in Asia and Europe.

Question # 80

You have created an external table for Apache Hive partitioned data that resides in a Cloud Storage bucket, which contains a large number of files. You notice that queries against this table are slow. You want to improve the performance of these queries. What should you do?

A.

Migrate the Hive partitioned data objects to a multi-region Cloud Storage bucket.

B.

Create an individual external table for each Hive partition by using a common table name prefix. Use wildcard table queries to reference the partitioned data.

C.

Change the storage class of the Hive partitioned data objects from Coldline to Standard.

D.

Upgrade the external table to a BigLake table. Enable metadata caching for the table.
