
Practice Free Professional-Cloud-Architect Google Certified Professional - Cloud Architect (GCP) Exam Questions and Answers With Explanations

We at Crack4sure are committed to giving students who are preparing for the Google Professional-Cloud-Architect exam the most current and reliable questions. To help people study, we have made some of our Google Certified Professional - Cloud Architect (GCP) exam materials available to everyone for free. You can take the free Professional-Cloud-Architect practice test as many times as you want. The answers to the practice questions are provided, and each answer is explained.

Question # 6

For this question, refer to the EHR Healthcare case study. You need to define the technical architecture for hybrid connectivity between EHR's on-premises systems and Google Cloud. You want to follow Google's recommended practices for production-level applications. Considering the EHR Healthcare business and technical requirements, what should you do?

A.

Configure two Partner Interconnect connections in one metro (City), and make sure the Interconnect connections are placed in different metro zones.

B.

Configure two VPN connections from on-premises to Google Cloud, and make sure the VPN devices on-premises are in separate racks.

C.

Configure Direct Peering between EHR Healthcare and Google Cloud, and make sure you are peering at least two Google locations.

D.

Configure two Dedicated Interconnect connections in one metro (City) and two connections in another metro, and make sure the Interconnect connections are placed in different metro zones.

Question # 7

For this question, refer to the EHR Healthcare case study. You are responsible for designing the Google Cloud network architecture for Google Kubernetes Engine. You want to follow Google best practices. Considering the EHR Healthcare business and technical requirements, what should you do to reduce the attack surface?

A.

Use a private cluster with a private endpoint with master authorized networks configured.

B.

Use a public cluster with firewall rules and Virtual Private Cloud (VPC) routes.

C.

Use a private cluster with a public endpoint with master authorized networks configured.

D.

Use a public cluster with master authorized networks enabled and firewall rules.

Question # 8

For this question, refer to the EHR Healthcare case study. EHR has a single Dedicated Interconnect connection between their primary data center and Google's network. This connection satisfies EHR's network and security policies:

• On-premises servers without public IP addresses need to connect to cloud resources without public IP addresses.

• Traffic flowing from production network management servers to Compute Engine virtual machines should never traverse the public internet.

You need to upgrade the EHR connection to comply with their requirements. The new connection design must support business-critical needs and meet the same network and security policy requirements. What should you do?

A.

Add a new Dedicated Interconnect connection

B.

Upgrade the bandwidth on the Dedicated Interconnect connection to 100 G

C.

Add three new Cloud VPN connections

D.

Add a new Carrier Peering connection

Question # 9

For this question, refer to the EHR Healthcare case study. You are responsible for ensuring that EHR's use of Google Cloud will pass an upcoming privacy compliance audit. What should you do? (Choose two.)

A.

Verify EHR's product usage against the list of compliant products on the Google Cloud compliance page.

B.

Advise EHR to execute a Business Associate Agreement (BAA) with Google Cloud.

C.

Use Firebase Authentication for EHR's user facing applications.

D.

Implement Prometheus to detect and prevent security breaches on EHR's web-based applications.

E.

Use GKE private clusters for all Kubernetes workloads.

Question # 10

For this question, refer to the EHR Healthcare case study. You need to define the technical architecture for securely deploying workloads to Google Cloud. You also need to ensure that only verified containers are deployed using Google Cloud services. What should you do? (Choose two.)

A.

Enable Binary Authorization on GKE, and sign containers as part of a CI/CD pipeline.

B.

Configure Jenkins to utilize Kritis to cryptographically sign a container as part of a CI/CD pipeline.

C.

Configure Container Registry to only allow trusted service accounts to create and deploy containers from the registry.

D.

Configure Container Registry to use vulnerability scanning to confirm that there are no vulnerabilities before deploying the workload.

Question # 11

You need to upgrade the EHR connection to comply with their requirements. The new connection design must support business-critical needs and meet the same network and security policy requirements. What should you do?

A.

Add a new Dedicated Interconnect connection.

B.

Upgrade the bandwidth on the Dedicated Interconnect connection to 100 G.

C.

Add three new Cloud VPN connections.

D.

Add a new Carrier Peering connection.

Question # 12

For this question, refer to the EHR Healthcare case study. You are a developer on the EHR customer portal team. Your team recently migrated the customer portal application to Google Cloud. The load has increased on the application servers, and now the application is logging many timeout errors. You recently incorporated Pub/Sub into the application architecture, and the application is not logging any Pub/Sub publishing errors. You want to improve publishing latency. What should you do?

A.

Increase the Pub/Sub Total Timeout retry value.

B.

Move from a Pub/Sub subscriber pull model to a push model.

C.

Turn off Pub/Sub message batching.

D.

Create a backup Pub/Sub message queue.

Question # 13

For this question, refer to the EHR Healthcare case study. In the past, configuration errors put public IP addresses on backend servers that should not have been accessible from the Internet. You need to ensure that no one can put external IP addresses on backend Compute Engine instances and that external IP addresses can only be configured on frontend Compute Engine instances. What should you do?

A.

Create an Organizational Policy with a constraint to allow external IP addresses only on the frontend Compute Engine instances.

B.

Revoke the compute.networkAdmin role from all users in the project with front end instances.

C.

Create an Identity and Access Management (IAM) policy that maps the IT staff to the compute.networkAdmin role for the organization.

D.

Create a custom Identity and Access Management (IAM) role named GCE_FRONTEND with the compute.addresses.create permission.
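Option A refers to an organization policy constraint. As a rough illustration only, the sketch below builds the body of such a policy using the standard constraints/compute.vmExternalIpAccess constraint; the project and instance names are made-up placeholders, and applying the policy (via the Org Policy API or gcloud) is not shown.

```python
# Illustrative sketch only: builds the body of an org policy that allows
# external IPs solely on hypothetical frontend instances. The constraint name
# is the standard Compute Engine constraint; the project and instance paths
# below are made-up placeholders.
import json

frontend_instances = [
    # Format: projects/PROJECT/zones/ZONE/instances/INSTANCE (placeholders)
    "projects/ehr-prod/zones/us-central1-a/instances/frontend-1",
    "projects/ehr-prod/zones/us-central1-a/instances/frontend-2",
]

policy = {
    "constraint": "constraints/compute.vmExternalIpAccess",
    "listPolicy": {
        "allowedValues": frontend_instances,  # anything not listed is denied an external IP
    },
}

# In practice this body would be applied with the Org Policy API or gcloud;
# printing it keeps the sketch self-contained and runnable.
print(json.dumps(policy, indent=2))
```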

Question # 14

For this question, refer to the Dress4Win case study.

At Dress4Win, an operations engineer wants to create a low-cost solution to remotely archive copies of database backup files. The database files are compressed tar files stored in their current data center. How should he proceed?

A.

Create a cron script using gsutil to copy the files to a Coldline Storage bucket.

B.

Create a cron script using gsutil to copy the files to a Regional Storage bucket.

C.

Create a Cloud Storage Transfer Service Job to copy the files to a Coldline Storage bucket.

D.

Create a Cloud Storage Transfer Service job to copy the files to a Regional Storage bucket.
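For context on the mechanism described in option A, here is a minimal Python sketch of what a cron-invoked archive step could look like using the Cloud Storage client library. The bucket name and backup directory are hypothetical, and the Coldline storage class is assumed to be set as the bucket's default.

```python
# Minimal sketch of a nightly archive step, assuming a Coldline bucket already
# exists. Bucket name and backup directory are hypothetical placeholders; a
# cron entry would invoke this script on a schedule.
import pathlib
from google.cloud import storage  # pip install google-cloud-storage

BUCKET_NAME = "dress4win-db-backups-coldline"    # placeholder bucket
BACKUP_DIR = pathlib.Path("/var/backups/mysql")  # placeholder local path

client = storage.Client()
bucket = client.bucket(BUCKET_NAME)

for tarball in BACKUP_DIR.glob("*.tar.gz"):
    blob = bucket.blob(f"archive/{tarball.name}")
    if not blob.exists():                        # skip files already archived
        blob.upload_from_filename(str(tarball))
        print(f"uploaded {tarball.name}")
```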

Question # 15

For this question, refer to the Dress4Win case study.

Dress4Win has end-to-end tests covering 100% of their endpoints. They want to ensure that the move to the cloud does not introduce any new bugs. Which additional testing methods should the developers employ to prevent an outage?

A.

They should enable Google Stackdriver Debugger on the application code to show errors in the code.

B.

They should add additional unit tests and production scale load tests on their cloud staging environment.

C.

They should run the end-to-end tests in the cloud staging environment to determine if the code is working as intended.

D.

They should add canary tests so developers can measure how much of an impact the new release causes to latency.

Question # 16

For this question, refer to the Mountkirk Games case study

Mountkirk Games needs to create a repeatable and configurable mechanism for deploying isolated application environments. Developers and testers can access each other's environments and resources, but they cannot access staging or production resources. The staging environment needs access to some services from production.

What should you do to isolate development environments from staging and production?

A.

Create a project for development and test and another for staging and production.

B.

Create a network for development and test and another for staging and production.

C.

Create one subnetwork for development and another for staging and production.

D.

Create one project for development, a second for staging and a third for production.

Question # 17

For this question, refer to the Mountkirk Games case study.

Mountkirk Games has deployed their new backend on Google Cloud Platform (GCP). You want to create a thorough testing process for new versions of the backend before they are released to the public. You want the testing environment to scale in an economical way. How should you design the process?

A.

Create a scalable environment in GCP for simulating production load.

B.

Use the existing infrastructure to test the GCP-based backend at scale.

C.

Build stress tests into each component of your application using resources internal to GCP to simulate load.

D.

Create a set of static environments in GCP to test different levels of load — for example, high, medium, and low.

Question # 18

For this question, refer to the Mountkirk Games case study.

Mountkirk Games' gaming servers are not automatically scaling properly. Last month, they rolled out a new feature, which suddenly became very popular. A record number of users are trying to use the service, but many of them are getting 503 errors and very slow response times. What should they investigate first?

A.

Verify that the database is online.

B.

Verify that the project quota hasn't been exceeded.

C.

Verify that the new feature code did not introduce any performance bugs.

D.

Verify that the load-testing team is not running their tool against production.

Question # 19

For this question, refer to the Mountkirk Games case study.

Mountkirk Games wants you to design their new testing strategy. How should the test coverage differ from their existing backends on the other platforms?

A.

Tests should scale well beyond the prior approaches.

B.

Unit tests are no longer required, only end-to-end tests.

C.

Tests should be applied after the release is in the production environment.

D.

Tests should include directly testing the Google Cloud Platform (GCP) infrastructure.

Question # 20

For this question, refer to the Mountkirk Games case study.

Mountkirk Games wants to set up a continuous delivery pipeline. Their architecture includes many small services that they want to be able to update and roll back quickly. Mountkirk Games has the following requirements:

• Services are deployed redundantly across multiple regions in the US and Europe.

• Only frontend services are exposed on the public internet.

• They can provide a single frontend IP for their fleet of services.

• Deployment artifacts are immutable.

Which set of products should they use?

A.

Google Cloud Storage, Google Cloud Dataflow, Google Compute Engine

B.

Google Cloud Storage, Google App Engine, Google Network Load Balancer

C.

Google Container Registry, Google Container Engine, Google HTTP(S) Load Balancer

D.

Google Cloud Functions, Google Cloud Pub/Sub, Google Cloud Deployment Manager

Question # 21

For this question, refer to the Mountkirk Games case study.

Mountkirk Games wants to set up a real-time analytics platform for their new game. The new platform must meet their technical requirements. Which combination of Google technologies will meet all of their requirements?

A.

Container Engine, Cloud Pub/Sub, and Cloud SQL

B.

Cloud Dataflow, Cloud Storage, Cloud Pub/Sub, and BigQuery

C.

Cloud SQL, Cloud Storage, Cloud Pub/Sub, and Cloud Dataflow

D.

Cloud Dataproc, Cloud Pub/Sub, Cloud SQL, and Cloud Dataflow

E.

Cloud Pub/Sub, Compute Engine, Cloud Storage, and Cloud Dataproc

Question # 22

For this question, refer to the Helicopter Racing League (HRL) case study. The HRL development team releases a new version of their predictive capability application every Tuesday evening at 3 a.m. UTC to a repository. The security team at HRL has developed an in-house penetration test Cloud Function called Airwolf. The security team wants to run Airwolf against the predictive capability application as soon as it is released every Tuesday. You need to set up Airwolf to run at the recurring weekly cadence. What should you do?

A.

Set up Cloud Tasks and a Cloud Storage bucket that triggers a Cloud Function.

B.

Set up a Cloud Logging sink and a Cloud Storage bucket that triggers a Cloud Function.

C.

Configure the deployment job to notify a Pub/Sub queue that triggers a Cloud Function.

D.

Set up Identity and Access Management (IAM) and Confidential Computing to trigger a Cloud Function.
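To make option C concrete, the following is a hedged sketch of a first-generation, Pub/Sub-triggered Cloud Function; the message payload shape and the idea of calling the Airwolf test from inside the function are assumptions for illustration only.

```python
# Sketch of option C, assuming a first-generation background Cloud Function
# subscribed to a Pub/Sub topic that the deployment job publishes to.
# The payload format and the Airwolf invocation are hypothetical.
import base64
import json

def trigger_airwolf(event, context):
    """Entry point for a Pub/Sub-triggered Cloud Function."""
    payload = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
    release = payload.get("release", "unknown")
    print(f"New predictive-capability release detected: {release}")
    # Here the function would kick off the in-house Airwolf penetration test,
    # for example by calling another Cloud Function or internal API (not shown).
```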

Question # 23

For this question, refer to the Helicopter Racing League (HRL) case study. HRL wants better prediction accuracy from their ML prediction models. They want you to use Google’s AI Platform so HRL can understand and interpret the predictions. What should you do?

A.

Use Explainable AI.

B.

Use Vision AI.

C.

Use Google Cloud’s operations suite.

D.

Use Jupyter Notebooks.

Question # 24

For this question, refer to the Mountkirk Games case study. You need to analyze and define the technical architecture for the compute workloads for your company, Mountkirk Games. Considering the Mountkirk Games business and technical requirements, what should you do?

A.

Create network load balancers. Use preemptible Compute Engine instances.

B.

Create network load balancers. Use non-preemptible Compute Engine instances.

C.

Create a global load balancer with managed instance groups and autoscaling policies. Use preemptible Compute Engine instances.

D.

Create a global load balancer with managed instance groups and autoscaling policies. Use non-preemptible Compute Engine instances.

Question # 25

For this question, refer to the Mountkirk Games case study. You are in charge of the new Game Backend Platform architecture. The game communicates with the backend over a REST API.

You want to follow Google-recommended practices. How should you design the backend?

A.

Create an instance template for the backend. For every region, deploy it on a multi-zone managed instance group. Use an L4 load balancer.

B.

Create an instance template for the backend. For every region, deploy it on a single-zone managed instance group. Use an L4 load balancer.

C.

Create an instance template for the backend. For every region, deploy it on a multi-zone managed instance group. Use an L7 load balancer.

D.

Create an instance template for the backend. For every region, deploy it on a single-zone managed instance group. Use an L7 load balancer.

Question # 26

For this question, refer to the Helicopter Racing League (HRL) case study. Recently HRL started a new regional racing league in Cape Town, South Africa. In an effort to give customers in Cape Town a better user experience, HRL has partnered with the Content Delivery Network provider, Fastly. HRL needs to allow traffic coming from all of the Fastly IP address ranges into their Virtual Private Cloud network (VPC network). You are a member of the HRL security team and you need to configure the update that will allow only the Fastly IP address ranges through the External HTTP(S) load balancer. Which command should you use?

A.

Apply a Cloud Armor security policy to external load balancers using a named IP list for Fastly.

B.

Apply a Cloud Armor security policy to external load balancers using the IP addresses that Fastly has published.

C.

Apply a VPC firewall rule on port 443 for Fastly IP address ranges.

D.

Apply a VPC firewall rule on port 443 for network resources tagged with sourceiplist-fastly.

Question # 27

For this question, refer to the Helicopter Racing League (HRL) case study. A recent finance audit of cloud infrastructure noted an exceptionally high number of Compute Engine instances are allocated to do video encoding and transcoding. You suspect that these Virtual Machines are zombie machines that were not deleted after their workloads completed. You need to quickly get a list of which VM instances are idle. What should you do?

A.

Log into each Compute Engine instance and collect disk, CPU, memory, and network usage statistics for analysis.

B.

Use the gcloud compute instances list to list the virtual machine instances that have the idle: true label set.

C.

Use the gcloud recommender command to list the idle virtual machine instances.

D.

From the Google Console, identify which Compute Engine instances in the managed instance groups are no longer responding to health check probes.
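Option C refers to the Recommender. As a rough Python equivalent (assuming the google-cloud-recommender client library), the sketch below lists recommendations from the idle-VM recommender; the project and zone are placeholders.

```python
# Rough Python equivalent of option C's `gcloud recommender` call, assuming
# the google-cloud-recommender client library is installed. Project and zone
# are placeholders; the recommender ID shown is the idle-VM recommender.
from google.cloud import recommender_v1  # pip install google-cloud-recommender

PROJECT = "hrl-video-encoding"   # placeholder
ZONE = "us-central1-a"           # placeholder

client = recommender_v1.RecommenderClient()
parent = (
    f"projects/{PROJECT}/locations/{ZONE}/recommenders/"
    "google.compute.instance.IdleResourceRecommender"
)

# Each recommendation names a VM that the recommender considers idle.
for rec in client.list_recommendations(parent=parent):
    print(rec.name, rec.description)
```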

Question # 28

For this question, refer to the Helicopter Racing League (HRL) case study. HRL is looking for a cost-effective approach for storing their race data such as telemetry. They want to keep all historical records, train models using only the previous season's data, and plan for data growth in terms of volume and information collected. You need to propose a data solution. Considering HRL business requirements and the goals expressed by CEO S. Hawke, what should you do?

A.

Use Firestore for its scalable and flexible document-based database. Use collections to aggregate race data by season and event.

B.

Use Cloud Spanner for its scalability and ability to version schemas with zero downtime. Split race data using season as a primary key.

C.

Use BigQuery for its scalability and ability to add columns to a schema. Partition race data based on season.

D.

Use Cloud SQL for its ability to automatically manage storage increases and compatibility with MySQL. Use separate database instances for each season.

Question # 29

For this question, refer to the Helicopter Racing League (HRL) case study. Your team is in charge of creating a payment card data vault for card numbers used to bill tens of thousands of viewers, merchandise consumers, and season ticket holders. You need to implement a custom card tokenization service that meets the following requirements:

• It must provide low latency at minimal cost.

• It must be able to identify duplicate credit cards and must not store plaintext card numbers.

• It should support annual key rotation.

Which storage approach should you adopt for your tokenization service?

A.

Store the card data in Secret Manager after running a query to identify duplicates.

B.

Encrypt the card data with a deterministic algorithm stored in Firestore using Datastore mode.

C.

Encrypt the card data with a deterministic algorithm and shard it across multiple Memorystore instances.

D.

Use column-level encryption to store the data in Cloud SQL.
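Options B and C both rely on a deterministic transformation so that the same card always yields the same token. The snippet below is a minimal, purely illustrative sketch of that idea using HMAC-SHA256 with a key-version prefix for annual rotation; it is not PCI guidance, and the key handling shown is a placeholder.

```python
# Minimal illustration of why a deterministic, keyed transformation lets you
# detect duplicate cards without storing plaintext numbers. HMAC-SHA256 and
# the key-version prefix (to support annual rotation) are assumptions of this
# sketch, not a prescription from the case study.
import hmac
import hashlib

KEY_VERSION = "2024"                                  # rotated annually (placeholder)
KEYS = {"2024": b"replace-with-kms-managed-key"}      # in practice, fetched from Cloud KMS

def tokenize(card_number: str, version: str = KEY_VERSION) -> str:
    digest = hmac.new(KEYS[version], card_number.encode(), hashlib.sha256).hexdigest()
    return f"{version}:{digest}"                      # same card + same key -> same token

t1 = tokenize("4111111111111111")
t2 = tokenize("4111111111111111")
print(t1 == t2)  # True: duplicates are detectable without plaintext storage
```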

Question # 30

For this question, refer to the Cymbal Retail case study. Cymbal has a centralized project that supports large video files for Vertex AI model training. Standard storage costs have suddenly increased this month, and you need to determine why. What should you do?

A.

Investigate if the project owner disabled a soft-delete policy on the bucket holding the video files.

B.

Investigate if the project owner moved from dual-region storage to regional storage.

C.

Investigate if the project owner enabled a soft-delete policy on the bucket holding the video files.

D.

Investigate if the project owner moved from multi-region storage to regional storage.

Question # 31

For this question, refer to the Cymbal Retail case study. Cymbal's generative AI models require high-performance storage for temporary files generated during model training and inference. These files are ephemeral and frequently accessed and modified. You need to select a storage solution that minimizes latency and cost and maximizes performance for generative AI workloads. What should you do?

A.

Use a Cloud Storage bucket in the same region as your virtual machines. Configure lifecycle policies to delete files after processing.

B.

Use Filestore to store temporary files.

C.

Use performance persistent disks.

D.

Use Local SSDs attached to the VMs running the generative AI models.

Question # 32

For this question, refer to the Altostrat Media case study, which covers cost optimization for batch processing and microservices testing strategies.

Altostrat is experiencing fluctuating computational demands for its batch processing jobs. These jobs are not time-critical and can tolerate occasional interruptions. You want to optimize cloud costs and address batch processing needs. What should you do?

A.

Configure reserved VM instances

B.

Deploy spot VM instances.

C.

Set up standard VM instances.

D.

Use Cloud Run functions.

Question # 33

A lead engineer wrote a custom tool that deploys virtual machines in the legacy data center. He wants to migrate the custom tool to the new cloud environment. You want to advocate for the adoption of Google Cloud Deployment Manager. What are two business risks of migrating to Cloud Deployment Manager? (Choose 2 answers.)

A.

Cloud Deployment Manager uses Python.

B.

Cloud Deployment Manager APIs could be deprecated in the future.

C.

Cloud Deployment Manager is unfamiliar to the company's engineers.

D.

Cloud Deployment Manager requires a Google APIs service account to run.

E.

Cloud Deployment Manager can be used to permanently delete cloud resources.

F.

Cloud Deployment Manager only supports automation of Google Cloud resources.

Question # 34

For this question, refer to the Cymbal Retail case study. Cymbal wants to migrate its diverse database environment to Google Cloud while ensuring high availability and performance for online customers. The company also wants to efficiently store and access large product images. These images typically stay in the catalog for more than 90 days and are accessed less and less frequently. You need to select the appropriate Google Cloud services for each database. You also need to design a storage solution for the product images that optimizes cost and performance. What should you do?

A.

Migrate all databases to Spanner for consistency, and use Cloud Storage Standard for image storage.

B.

Migrate all databases to self-managed instances on Compute Engine, and use a persistent disk for image storage.

C.

Migrate MySQL and SQL Server to Spanner, Redis to Memorystore, and MongoDB to Firestore. Use Cloud Storage Standard for image storage, and move images to Cloud Storage Nearline storage when products become less popular.

D.

Migrate MySQL to Cloud SQL, SQL Server to Cloud SQL, Redis to Memorystore, and MongoDB to Firestore. Use Cloud Storage Standard for image storage, and move images to Cloud Storage Coldline storage when products become less popular.

Question # 35

For this question, refer to the Cymbal Retail case study. Cymbal wants you to connect their on-premises systems to Google Cloud while maintaining secure communication between their on-premises and cloud environments. You want to follow Google's recommended approach to ensure the most secure and manageable solution. What should you do?

A.

Use a bastion host to provide secure access to Google Cloud resources from Cymbal's on-premises systems.

B.

Configure a static VPN connection using SSH tunnels to connect the on-premises systems to Google Cloud.

C.

Configure a Cloud VPN gateway and establish a VPN tunnel. Configure firewall rules to restrict access to specific resources and services based on IP addresses and ports.

D.

Use Google Cloud's VPC peering to connect Cymbal's on-premises network to Google Cloud.

Question # 36

For this question, refer to the Cymbal Retail case study. Cymbal wants to migrate their product catalog management processes to Google Cloud. You need to ensure a smooth migration with proper change management to minimize disruption and risks to the business. You want to follow Google-recommended practices to automate product catalog enrichment, improve product discoverability, increase customer engagement, and minimize costs. What should you do?

A.

Design a migration plan to move all of Cymbal's data to Cloud Storage, and use Compute Engine for all business logic.

B.

Design a migration plan to move all of Cymbal's data to Cloud Storage, and use Cloud Run functions for all business logic.

C.

Design a migration plan, starting with a pilot project focusing on a specific product category, and gradually expand to other categories.

D.

Design a migration plan with a scheduled window to move all components at once. Perform extensive testing to ensure a successful migration.

Question # 37

For this question, refer to the Cymbal Retail case study. Cymbal wants you to design a cloud-first data storage infrastructure for the product catalog modernization project. You want to ensure efficient data access and high availability for Cymbal's web application and virtual agents while minimizing operational costs. What should you do?

A.

Use AlloyDB for structured product data, and Cloud Storage for product images.

B.

Use Spanner for the structured product data, and Bigtable for product images.

C.

Use Filestore for the structured product data, and Cloud Storage for product images.

D.

Use Cloud Storage for structured product data, and BigQuery for product images.

Question # 38

For this question, refer to the Cymbal Retail case study. Cymbal plans to migrate their existing on-premises systems to Google Cloud and implement AI-powered virtual agents to handle customer interactions. You need to provision the compute resources that can scale for the AI-powered virtual agents. What should you do?

A.

Use Cloud SQL to store the customer data and product catalog.

B.

Configure Cloud Build to call AI Applications (formerly Vertex AI Agent Builder).

C.

Deploy a Google Kubernetes Engine (GKE) cluster with autoscaling enabled.

D.

Create a single, large Compute Engine VM instance with a high CPU allocation.

Question # 39

For this question, refer to the TerramEarth case study.

TerramEarth plans to connect all 20 million vehicles in the field to the cloud. This increases the volume to 20 million 600-byte records per second, or about 40 TB per hour. How should you design the data ingestion?

A.

Vehicles write data directly to GCS.

B.

Vehicles write data directly to Google Cloud Pub/Sub.

C.

Vehicles stream data directly to Google BigQuery.

D.

Vehicles continue to write data using the existing system (FTP).
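As a quick sanity check of the stated volume, and to illustrate the publishing path in option B, here is a short sketch; the project and topic names are hypothetical placeholders.

```python
# Back-of-envelope check of the stated ingest volume, plus a minimal publish
# sketch using the Pub/Sub client library. Project and topic names are
# hypothetical placeholders.
from google.cloud import pubsub_v1  # pip install google-cloud-pubsub

records_per_second = 20_000_000
bytes_per_record = 600
tb_per_hour = records_per_second * bytes_per_record * 3600 / 1e12
print(f"{tb_per_hour:.1f} TB/hour")  # ~43 TB/hour, in line with the ~40 TB/hour figure above

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("terramearth-telemetry", "vehicle-ingest")  # placeholders

def publish_record(record: bytes) -> None:
    """Publish one 600-byte telemetry record; Pub/Sub absorbs the fan-in at scale."""
    future = publisher.publish(topic_path, record)
    future.result()  # blocks until Pub/Sub acknowledges the message
```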

Question # 40

For this question, refer to the TerramEarth case study.

Which of TerramEarth's legacy enterprise processes will experience significant change as a result of increased Google Cloud Platform adoption?

A.

Opex/capex allocation, LAN changes, capacity planning

B.

Capacity planning, TCO calculations, opex/capex allocation

C.

Capacity planning, utilization measurement, data center expansion

D.

Data Center expansion, TCO calculations, utilization measurement

Question # 41

For this question, refer to the TerramEarth case study.

The TerramEarth development team wants to create an API to meet the company's business requirements. You want the development team to focus their development effort on business value versus creating a custom framework. Which method should they use?

A.

Use Google App Engine with Google Cloud Endpoints. Focus on an API for dealers and partners.

B.

Use Google App Engine with a JAX-RS Jersey Java-based framework. Focus on an API for the public.

C.

Use Google App Engine with the Swagger (open API Specification) framework. Focus on an API for the public.

D.

Use Google Container Engine with a Django Python container. Focus on an API for the public.

E.

Use Google Container Engine with a Tomcat container with the Swagger (Open API Specification) framework. Focus on an API for dealers and partners.

Question # 42

For this question, refer to the TerramEarth case study.

Operational parameters such as oil pressure are adjustable on each of TerramEarth's vehicles to increase their efficiency, depending on their environmental conditions. Your primary goal is to increase the operating efficiency of all 20 million cellular and unconnected vehicles in the field. How can you accomplish this goal?

A.

Have your engineers inspect the data for patterns, and then create an algorithm with rules that make operational adjustments automatically.

B.

Capture all operating data, train machine learning models that identify ideal operations, and run locally to make operational adjustments automatically.

C.

Implement a Google Cloud Dataflow streaming job with a sliding window, and use Google Cloud Messaging (GCM) to make operational adjustments automatically.

D.

Capture all operating data, train machine learning models that identify ideal operations, and host in Google Cloud Machine Learning (ML) Platform to make operational adjustments automatically.

Question # 43

For this question, refer to the TerramEarth case study.

To speed up data retrieval, more vehicles will be upgraded to cellular connections and be able to transmit data to the ETL process. The current FTP process is error-prone and restarts the data transfer from the start of the file when connections fail, which happens often. You want to improve the reliability of the solution and minimize data transfer time on the cellular connections. What should you do?

A.

Use one Google Container Engine cluster of FTP servers. Save the data to a Multi-Regional bucket. Run the ETL process using data in the bucket.

B.

Use multiple Google Container Engine clusters running FTP servers located in different regions. Save the data to Multi-Regional buckets in us, eu, and asia. Run the ETL process using the data in the bucket.

C.

Directly transfer the files to different Google Cloud Multi-Regional Storage bucket locations in us, eu, and asia using Google APIs over HTTP(S). Run the ETL process using the data in the bucket.

D.

Directly transfer the files to a different Google Cloud Regional Storage bucket location in us, eu, and asia using Google APIs over HTTP(S). Run the ETL process to retrieve the data from each Regional bucket.

Question # 44

For this question, refer to the TerramEarth case study.

TerramEarth's CTO wants to use the raw data from connected vehicles to help identify approximately when a vehicle in the field will have a catastrophic failure. You want to allow analysts to centrally query the vehicle data. Which architecture should you recommend?

The answer options A through D are architecture diagrams, which are not reproduced here.

A.

Option A

B.

Option B

C.

Option C

D.

Option D

Question # 45

You want to create a private connection between your instances on Compute Engine and your on-premises data center. You require a connection of at least 20 Gbps. You want to follow Google-recommended practices.

How should you set up the connection?

A.

Create a VPC and connect it to your on-premises data center using Dedicated Interconnect.

B.

Create a VPC and connect it to your on-premises data center using a single Cloud VPN.

C.

Create a Cloud Content Delivery Network (Cloud CDN) and connect it to your on-premises data center using Dedicated Interconnect.

D.

Create a Cloud Content Delivery Network (Cloud CDN) and connect it to your on-premises data center using a single Cloud VPN.

Question # 46

For this question, refer to the TerramEarth case study.

TerramEarth has equipped unconnected trucks with servers and sensors to collect telemetry data. Next year they want to use the data to train machine learning models. They want to store this data in the cloud while reducing costs. What should they do?

A.

Have the vehicle's computer compress the data in hourly snapshots, and store it in a Google Cloud Storage (GCS) Nearline bucket.

B.

Push the telemetry data in real time to a streaming Dataflow job that compresses the data, and store it in Google BigQuery.

C.

Push the telemetry data in real time to a streaming Dataflow job that compresses the data, and store it in Cloud Bigtable.

D.

Have the vehicle's computer compress the data in hourly snapshots, and store it in a GCS Coldline bucket.

Question # 47

For this question, refer to the TerramEarth case study.

TerramEarth's 20 million vehicles are scattered around the world. Based on the vehicle's location, its telemetry data is stored in a Google Cloud Storage (GCS) regional bucket (US, Europe, or Asia). The CTO has asked you to run a report on the raw telemetry data to determine why vehicles are breaking down after 100K miles. You want to run this job on all the data. What is the most cost-effective way to run this job?

A.

Move all the data into 1 zone, then launch a Cloud Dataproc cluster to run the job.

B.

Move all the data into 1 region, then launch a Google Cloud Dataproc cluster to run the job.

C.

Launch a cluster in each region to preprocess and compress the raw data, then move the data into a multi-region bucket and use a Dataproc cluster to finish the job.

D.

Launch a cluster in each region to preprocess and compress the raw data, then move the data into a regional bucket and use a Cloud Dataproc cluster to finish the job.

Question # 48

For this question, refer to the TerramEarth case study. A new architecture that writes all incoming data to BigQuery has been introduced. You notice that the data is dirty, and want to ensure data quality on an automated daily basis while managing cost.

What should you do?

A.

Set up a streaming Cloud Dataflow job, receiving data by the ingestion process. Clean the data in a Cloud Dataflow pipeline.

B.

Create a Cloud Function that reads data from BigQuery and cleans it. Trigger the Cloud Function from a Compute Engine instance.

C.

Create a SQL statement on the data in BigQuery, and save it as a view. Run the view daily, and save the result to a new table.

D.

Use Cloud Dataprep and configure the BigQuery tables as the source. Schedule a daily job to clean the data.
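To illustrate the kind of daily cleaning step that options C and D describe, here is a hedged sketch using the BigQuery client library. The dataset, table names, and cleaning rules are placeholders, and the scheduling itself (Cloud Scheduler, a scheduled query, or Dataprep) is not shown.

```python
# Hedged sketch of a daily cleaning job over a raw telemetry table. Project,
# dataset, table names, and the specific cleaning rules are placeholders.
from google.cloud import bigquery  # pip install google-cloud-bigquery

client = bigquery.Client()

CLEAN_SQL = """
SELECT DISTINCT *
FROM `terramearth.telemetry.raw_events`        -- placeholder source table
WHERE vehicle_id IS NOT NULL
  AND TIMESTAMP_DIFF(CURRENT_TIMESTAMP(), event_time, DAY) <= 1
"""

job_config = bigquery.QueryJobConfig(
    destination="terramearth.telemetry.clean_events",  # placeholder destination table
    write_disposition="WRITE_TRUNCATE",
)

# Run the cleaning query and wait for it to finish.
client.query(CLEAN_SQL, job_config=job_config).result()
```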

Question # 49

For this question, refer to the JencoMart case study.

JencoMart wants to move their User Profiles database to Google Cloud Platform. Which Google Database should they use?

A.

Cloud Spanner

B.

Google BigQuery

C.

Google Cloud SQL

D.

Google Cloud Datastore

Question # 50

You have broken down a legacy monolithic application into a few containerized RESTful microservices. You want to run those microservices on Cloud Run. You also want to make sure the services are highly available with low latency to your customers. What should you do?

A.

Deploy Cloud Run services to multiple availability zones. Create Cloud Endpoints that point to the services. Create a global HTTP(S) Load Balancing instance and attach the Cloud Endpoints to its backend.

B.

Deploy Cloud Run services to multiple regions. Create serverless network endpoint groups (NEGs) pointing to the services. Add the serverless NEGs to a backend service that is used by a global HTTP(S) Load Balancing instance.

C.

Deploy Cloud Run services to multiple regions. In Cloud DNS, create a latency-based DNS name that points to the services.

D.

Deploy Cloud Run services to multiple availability zones. Create a TCP/IP global load balancer. Add the Cloud Run Endpoints to its backend service.

Question # 51

For this question, refer to the JencoMart case study.

The migration of JencoMart’s application to Google Cloud Platform (GCP) is progressing too slowly. The infrastructure is shown in the diagram. You want to maximize throughput. What are three potential bottlenecks? (Choose 3 answers.)

A.

A single VPN tunnel, which limits throughput

B.

A tier of Google Cloud Storage that is not suited for this task

C.

A copy command that is not suited to operate over long distances

D.

Fewer virtual machines (VMs) in GCP than on-premises machines

E.

A separate storage layer outside the VMs, which is not suited for this task

F.

Complicated internet connectivity between the on-premises infrastructure and GCP

Question # 52

For this question, refer to the JencoMart case study.

JencoMart has decided to migrate user profile storage to Google Cloud Datastore and the application servers to Google Compute Engine (GCE). During the migration, the existing infrastructure will need access to Datastore to upload the data. What service account key-management strategy should you recommend?

A.

Provision service account keys for the on-premises infrastructure and for the GCE virtual machines (VMs).

B.

Authenticate the on-premises infrastructure with a user account and provision service account keys for the VMs.

C.

Provision service account keys for the on-premises infrastructure and use Google Cloud Platform (GCP) managed keys for the VMs

D.

Deploy a custom authentication service on GCE/Google Container Engine (GKE) for the on-premises infrastructure and use GCP managed keys for the VMs.

Question # 53

For this question, refer to the JencoMart case study.

A few days after JencoMart migrates the user credentials database to Google Cloud Platform and shuts down the old server, the new database server stops responding to SSH connections. It is still serving database requests to the application servers correctly. What three steps should you take to diagnose the problem? (Choose 3 answers.)

A.

Delete the virtual machine (VM) and disks and create a new one.

B.

Delete the instance, attach the disk to a new VM, and investigate.

C.

Take a snapshot of the disk and connect to a new machine to investigate.

D.

Check inbound firewall rules for the network the machine is connected to.

E.

Connect the machine to another network with very simple firewall rules and investigate.

F.

Print the Serial Console output for the instance for troubleshooting, activate the interactive console, and investigate.

Question # 54

For this question, refer to the JencoMart case study.

JencoMart has built a version of their application on Google Cloud Platform that serves traffic to Asia. You want to measure success against their business and technical goals. Which metrics should you track?

A.

Error rates for requests from Asia

B.

Latency difference between US and Asia

C.

Total visits, error rates, and latency from Asia

D.

Total visits and average latency for users in Asia

E.

The number of character sets present in the database

Question # 55

For this question, refer to the JencoMart case study.

The JencoMart security team requires that all Google Cloud Platform infrastructure is deployed using a least privilege model with separation of duties for administration between production and development resources. What Google domain and project structure should you recommend?

A.

Create two G Suite accounts to manage users: one for development/test/staging and one for production. Each account should contain one project for every application.

B.

Create two G Suite accounts to manage users: one with a single project for all development applications and one with a single project for all production applications.

C.

Create a single G Suite account to manage users with each stage of each application in its own project.

D.

Create a single G Suite account to manage users with one project for the development/test/staging environment and one project for the production environment.

Question # 56

Your team plans to use Vertex AI to develop and deploy machine learning models for various use cases, such as fraud detection, product recommendations, and customer churn prediction. You want to enhance the security posture of the Vertex AI and Workbench environment by restricting data exfiltration. What should you do?

A.

Create a service perimeter and include ml.googleapis.com and document.googleapis.com as protected services.

B.

Enable VPC Flow Logs to monitor network traffic to and from Vertex AI services and to identify suspicious activity.

C.

Create a service perimeter and include aiplatform.googleapis.com and notebooks.googleapis.com as protected services.

D.

Enable Private Google Access for the VPC network to allow Vertex AI services to access public Google services without traversing the public internet.

Question # 57

You need to optimize batch file transfers into Cloud Storage for Mountkirk Games’ new Google Cloud solution. The batch files contain game statistics that need to be staged in Cloud Storage and be processed by an extract transform load (ETL) tool. What should you do?

A.

Use gsutil to batch move files in sequence.

B.

Use gsutil to batch copy the files in parallel.

C.

Use gsutil to extract the files as the first part of ETL.

D.

Use gsutil to load the files as the last part of ETL.
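Option B corresponds to gsutil's parallel copy (gsutil -m cp). As an illustration of the same idea in Python, the sketch below uploads files concurrently with a thread pool; the bucket name and staging directory are placeholders.

```python
# Sketch of the "copy in parallel" idea behind option B, using a thread pool
# and the Cloud Storage client. Bucket name and staging directory are
# hypothetical placeholders.
import pathlib
from concurrent.futures import ThreadPoolExecutor
from google.cloud import storage  # pip install google-cloud-storage

BUCKET = "mountkirk-stats-staging"            # placeholder
BATCH_DIR = pathlib.Path("/data/game-stats")  # placeholder

client = storage.Client()
bucket = client.bucket(BUCKET)

def upload(path: pathlib.Path) -> str:
    """Upload one batch file into the staging prefix."""
    bucket.blob(f"incoming/{path.name}").upload_from_filename(str(path))
    return path.name

# Upload up to 8 files at a time, roughly what `gsutil -m cp` would do.
with ThreadPoolExecutor(max_workers=8) as pool:
    for name in pool.map(upload, BATCH_DIR.glob("*.csv")):
        print("staged", name)
```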

Question # 58

You need to implement a network ingress for a new game that meets the defined business and technical requirements. Mountkirk Games wants each regional game instance to be located in multiple Google Cloud regions. What should you do?

A.

Configure a global load balancer connected to a managed instance group running Compute Engine instances.

B.

Configure kubemci with a global load balancer and Google Kubernetes Engine.

C.

Configure a global load balancer with Google Kubernetes Engine.

D.

Configure Ingress for Anthos with a global load balancer and Google Kubernetes Engine.

Question # 59

Your development teams release new versions of games running on Google Kubernetes Engine (GKE) daily. You want to create service level indicators (SLIs) to evaluate the quality of the new versions from the user’s perspective. What should you do?

A.

Create CPU Utilization and Request Latency as service level indicators.

B.

Create GKE CPU Utilization and Memory Utilization as service level indicators.

C.

Create Request Latency and Error Rate as service level indicators.

D.

Create Server Uptime and Error Rate as service level indicators.
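Option C names the two user-facing SLIs. As a small illustration, the sketch below computes an error rate and a p95 latency from a made-up sample of request records.

```python
# Minimal illustration of the two user-facing SLIs named in option C,
# computed over a made-up sample of request records.
sample_requests = [
    {"latency_ms": 120, "status": 200},
    {"latency_ms": 340, "status": 200},
    {"latency_ms": 95,  "status": 500},
    {"latency_ms": 210, "status": 200},
]

# Error rate: fraction of requests that failed from the user's point of view.
errors = sum(1 for r in sample_requests if r["status"] >= 500)
error_rate = errors / len(sample_requests)

# Request latency: 95th-percentile latency over the sample (nearest-rank style).
latencies = sorted(r["latency_ms"] for r in sample_requests)
p95_index = min(len(latencies) - 1, int(round(0.95 * (len(latencies) - 1))))
p95_latency = latencies[p95_index]

print(f"error rate: {error_rate:.1%}, p95 latency: {p95_latency} ms")
```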

Question # 60

You are implementing Firestore for Mountkirk Games. Mountkirk Games wants to give a new game programmatic access to a legacy game's Firestore database. Access should be as restricted as possible. What should you do?

A.

Create a service account (SA) in the legacy game's Google Cloud project, add this SA in the new game's IAM page, and then give it the Firebase Admin role in both projects.

B.

Create a service account (SA) in the legacy game's Google Cloud project, add a second SA in the new game's IAM page, and then give the Organization Admin role to both SAs.

C.

Create a service account (SA) in the legacy game's Google Cloud project, give it the Firebase Admin role, and then migrate the new game to the legacy game's project.

D.

Create a service account (SA) in the legacy game's Google Cloud project, give the SA the Organization Admin role, and then give it the Firebase Admin role in both projects.

Question # 61

Mountkirk Games wants you to secure the connectivity from the new gaming application platform to Google Cloud. You want to streamline the process and follow Google-recommended practices. What should you do?

A.

Configure Workload Identity and service accounts to be used by the application platform.

B.

Use Kubernetes Secrets, which are obfuscated by default. Configure these Secrets to be used by the application platform.

C.

Configure Kubernetes Secrets to store the secret, enable Application-Layer Secrets Encryption, and use Cloud Key Management Service (Cloud KMS) to manage the encryption keys. Configure these Secrets to be used by the application platform.

D.

Configure HashiCorp Vault on Compute Engine, and use customer managed encryption keys and Cloud Key Management Service (Cloud KMS) to manage the encryption keys. Configure these Secrets to be used by the application platform.

Question # 62

Mountkirk Games wants to limit the physical location of resources to their operating Google Cloud regions. What should you do?

A.

Configure an organizational policy which constrains where resources can be deployed.

B.

Configure IAM conditions to limit what resources can be configured.

C.

Configure the quotas for resources in the regions not being used to 0.

D.

Configure a custom alert in Cloud Monitoring so you can disable resources as they are created in other regions.

Question # 63

Your development team has created a mobile game app. You want to test the new mobile app on Android and iOS devices with a variety of configurations. You need to ensure that testing is efficient and cost-effective. What should you do?

A.

Upload your mobile app to the Firebase Test Lab, and test the mobile app on Android and iOS devices.

B.

Create Android and iOS VMs on Google Cloud, install the mobile app on the VMs, and test the mobile app.

C.

Create Android and iOS containers on Google Kubernetes Engine (GKE), install the mobile app on the containers, and test the mobile app.

D.

Upload your mobile app with different configurations to Firebase Hosting and test each configuration.

Question # 64

For this question, refer to the Dress4Win case study. You want to ensure that your on-premises architecture meets business requirements before you migrate your solution.

What change in the on-premises architecture should you make?

A.

Replace RabbitMQ with Google Pub/Sub.

B.

Downgrade MySQL to v5.7, which is supported by Cloud SQL for MySQL.

C.

Resize compute resources to match predefined Compute Engine machine types.

D.

Containerize the microservices and host them in Google Kubernetes Engine.
