
Databricks-Generative-AI-Engineer-Associate PDF

$38.50

$109.99

3 Months Free Update

  • Printable Format
  • Value for Money
  • 100% Pass Assurance
  • Verified Answers
  • Researched by Industry Experts
  • Based on Real Exam Scenarios
  • 100% Real Questions

Databricks-Generative-AI-Engineer-Associate PDF + Testing Engine

$61.60

$175.99

3 Months Free Update

  • Exam Name: Databricks Certified Generative AI Engineer Associate
  • Last Update: Oct 16, 2025
  • Questions and Answers: 61
  • Free Real Questions Demo
  • Recommended by Industry Experts
  • Best Economical Package
  • Immediate Access

Databricks-Generative-AI-Engineer-Associate Engine

$46.20

$131.99

3 Months Free Update

  • Best Testing Engine
  • One-Click Installation
  • Recommended by Teachers
  • Easy to Use
  • 3 Modes of Learning
  • State-of-the-Art Technology
  • 100% Real Questions Included

Databricks-Generative-AI-Engineer-Associate Practice Exam Questions with Answers: Databricks Certified Generative AI Engineer Associate Certification

Question # 6

A Generative AI Engineer is creating an LLM-based application. The documents for its retriever have been chunked to a maximum of 512 tokens each. The Generative AI Engineer knows that cost and latency are more important than quality for this application. They have several context length levels to choose from.

Which will fulfill their need?

A.

context length 514; smallest model is 0.44 GB and embedding dimension 768

B.

context length 2048; smallest model is 11 GB and embedding dimension 2560

C.

context length 32768; smallest model is 14 GB and embedding dimension 4096

D.

context length 512; smallest model is 0.13 GB and embedding dimension 384

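For intuition on the cost side of this question: both the model size and the embedding dimension drive memory and latency, and the context window only needs to cover the 512-token chunks. A back-of-the-envelope sketch in Python (the chunk count and float32 storage assumption are illustrative, not from the question):

```python
# Back-of-the-envelope: a float32 vector index costs roughly
# num_chunks x embedding_dim x 4 bytes; smaller models and smaller
# embedding dimensions mean lower cost and latency.
NUM_CHUNKS = 100_000  # illustrative corpus size, not from the question

options = [("A", 768, 0.44), ("B", 2560, 11.0),
           ("C", 4096, 14.0), ("D", 384, 0.13)]

for label, dim, model_gb in options:
    index_gb = NUM_CHUNKS * dim * 4 / 1e9
    print(f"Option {label}: ~{index_gb:.2f} GB index, {model_gb} GB model")
```
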
Question # 7

A Generative AI Engineer is responsible for developing a chatbot to enable their company's internal HelpDesk Call Center team to more quickly find related tickets and provide resolutions. While creating the GenAI application work breakdown tasks for this project, they realize they need to start planning which data sources (either Unity Catalog volume or Delta table) they could choose for this application. They have collected several candidate data sources for consideration:

call_rep_history: a Delta table with primary keys representative_id, call_id. This table is maintained to calculate representatives' call resolution from fields call_duration and call_start_time.

transcript Volume: a Unity Catalog Volume of all recordings as *.wav files, along with text transcripts as *.txt files.

call_cust_history: a Delta table with primary keys customer_id, call_id. This table is maintained to calculate how much internal customers use the HelpDesk, to make sure that the chargeback model is consistent with actual service use.

call_detail: a Delta table that includes a snapshot of all call details updated hourly. It includes root_cause and resolution fields, but those fields may be empty for calls that are still active.

maintenance_schedule: a Delta table that lists both HelpDesk application outages and planned upcoming maintenance downtimes.

They need sources that could add context to best identify ticket root cause and resolution.

Which TWO sources do that? (Choose two.)

A.

call_cust_history

B.

maintenance_schedule

C.

call_rep_history

D.

call_detail

E.

transcript Volume

Question # 8

A Generative AI Engineer is building a production-ready LLM system that replies directly to customers. The solution makes use of the Foundation Model API via provisioned throughput. They are concerned that the LLM could potentially respond in a toxic or otherwise unsafe way. They also wish to do this with the least amount of effort.

Which approach will do this?

A.

Host Llama Guard on Foundation Model API and use it to detect unsafe responses

B.

Add some LLM calls to their chain to detect unsafe content before returning text

C.

Add regular expressions on inputs and outputs to detect unsafe responses

D.

Ask users to report unsafe responses

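As an illustration of the pattern named in option A, here is a minimal sketch that screens a candidate reply with a served Llama Guard model, assuming an OpenAI-compatible Databricks serving endpoint; the base URL, token, endpoint name, and output parsing are placeholders, not a confirmed API contract:

```python
from openai import OpenAI

# Databricks Foundation Model APIs expose an OpenAI-compatible interface;
# the base URL, token, and endpoint name below are placeholders.
client = OpenAI(base_url="https://<workspace>/serving-endpoints",
                api_key="<databricks-token>")

def is_unsafe(candidate_reply: str) -> bool:
    # Llama Guard classifies content as "safe" or "unsafe" (plus violated
    # categories); the exact output parsing here is an assumption.
    verdict = client.chat.completions.create(
        model="llama-guard",  # hypothetical endpoint name
        messages=[{"role": "user", "content": candidate_reply}],
    )
    text = verdict.choices[0].message.content.strip().lower()
    return text.startswith("unsafe")
```
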
Question # 9

A Generative AI Engineer interfaces, via prompts and responses, with an LLM that has been trained on customer calls inquiring about product availability. The LLM is designed to output "In Stock" if the product is available, or only the term "Out of Stock" if not.

Which prompt will allow the engineer to elicit the correct call classification labels?

A.

Respond with “In Stock” if the customer asks for a product.

B.

You will be given a customer call transcript where the customer asks about product availability. The outputs are either “In Stock” or “Out of Stock”. Format the output in JSON, for example: {“call_id”: “123”, “label”: “In Stock”}.

C.

Respond with “Out of Stock” if the customer asks for a product.

D.

You will be given a customer call transcript where the customer inquires about product availability. Respond with “In Stock” if the product is available or “Out of Stock” if not.

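To make the prompt in option D concrete, here is a minimal sketch that sends it to a chat-completion endpoint; the client setup and model name are illustrative placeholders:

```python
from openai import OpenAI

client = OpenAI()  # placeholder; any OpenAI-compatible endpoint works

SYSTEM_PROMPT = (
    'You will be given a customer call transcript where the customer '
    'inquires about product availability. Respond with "In Stock" if the '
    'product is available or "Out of Stock" if not.'
)

def classify_call(transcript: str) -> str:
    # The system prompt carries the classification instruction; the
    # transcript is passed as the user message.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "system", "content": SYSTEM_PROMPT},
                  {"role": "user", "content": transcript}],
    )
    return response.choices[0].message.content.strip()
```
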
Question # 10

A Generative AI Engineer is using an LLM to classify species of edible mushrooms based on text descriptions of certain features. The model is returning accurate responses in testing, and the Generative AI Engineer is confident they have the correct list of possible labels. However, the output frequently contains additional reasoning in the answer, when the Generative AI Engineer only wants to return the label with no additional text.

Which action should they take to elicit the desired behavior from this LLM?

A.

Use few-shot prompting to instruct the model on the expected output format

B.

Use zero-shot prompting to instruct the model on the expected output format

C.

Use zero-shot chain-of-thought prompting to prevent a verbose output format

D.

Use a system prompt to instruct the model to be succinct in its answer

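As an illustration of the zero-shot approach in option B, a prompt can state the label set and the output format directly, with no examples; the label list below is invented for this sketch:

```python
# Zero-shot: state the label set and output format directly in the prompt,
# with no examples. The label list is invented for this sketch.
LABELS = ["chanterelle", "morel", "porcini", "death cap"]

system_prompt = (
    "Classify the mushroom described by the user as exactly one of: "
    + ", ".join(LABELS)
    + ". Respond with only the label, with no reasoning or additional text."
)
```
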
Question # 11

A Generative AI Engineer is helping a cinema extend its website's chatbot to be able to respond to questions about specific showtimes for movies currently playing at their local theater. They already have the user's location, provided to their agent by location services, and a Delta table that is continually updated with the latest showtime information by location. They want to implement this new capability in their RAG application.

Which option will do this with the least effort and in the most performant way?

A.

Create a Feature Serving Endpoint from a FeatureSpec that references an online store synced from the Delta table. Query the Feature Serving Endpoint as part of the agent logic / tool implementation.

B.

Query the Delta table directly via a SQL query constructed from the user's input using a text-to-SQL LLM in the agent logic / tool implementation.

C.

Write the Delta table contents to a text column, then embed those texts using an embedding model and store these in the vector index. Look up the information based on the embedding as part of the agent logic / tool implementation.

D.

Set up a task in Databricks Workflows to write the information in the Delta table periodically to an external database such as MySQL and query the information from there as part of the agent logic / tool implementation.

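To make option A concrete, the agent tool would look up showtimes from a Feature Serving endpoint at inference time. A minimal sketch of that call, assuming an endpoint named showtimes already serves a FeatureSpec keyed by location; the host, token, endpoint name, and key column are all hypothetical:

```python
import requests

def lookup_showtimes(location_id: str) -> dict:
    # Databricks serving endpoints are invoked over REST; the host, token,
    # endpoint name, and key column here are all hypothetical.
    url = "https://<workspace>/serving-endpoints/showtimes/invocations"
    resp = requests.post(
        url,
        headers={"Authorization": "Bearer <databricks-token>"},
        json={"dataframe_records": [{"location_id": location_id}]},
    )
    resp.raise_for_status()
    return resp.json()
```
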
Question # 12

A Generative AI Engineer received the following business requirements for an external chatbot.

The chatbot needs to identify the type of question a user asks and route it to the appropriate model to answer it. For example, one user might ask about upcoming event details, while another might ask about purchasing tickets for a particular event.

What is an ideal workflow for such a chatbot?

A.

The chatbot should only look at previous event information

B.

There should be two different chatbots handling different types of user queries.

C.

The chatbot should be implemented as a multi-step LLM workflow. First, identify the type of question asked, then route the question to the appropriate model. If it’s an upcoming event question, send the query to a text-to-SQL model. If it’s about ticket purchasing, the customer should be redirected to a payment platform.

D.

The chatbot should only process payments

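A minimal sketch of the multi-step workflow described in option C: first classify the question type, then route it. The keyword classifier and both handlers are stand-ins for real LLM calls and integrations:

```python
def classify_intent(question: str) -> str:
    # Stand-in for an LLM classification call; keyword matching keeps the
    # sketch self-contained.
    return "ticketing" if "ticket" in question.lower() else "event_info"

def query_text_to_sql_model(question: str) -> str:
    return f"[text-to-SQL model would answer: {question!r}]"  # placeholder

def redirect_to_payment_platform(question: str) -> str:
    return "Redirecting you to the ticketing/payment platform..."  # placeholder

def handle(question: str) -> str:
    # Step 1: identify the question type; step 2: route to the right handler.
    if classify_intent(question) == "event_info":
        return query_text_to_sql_model(question)
    return redirect_to_payment_platform(question)

print(handle("What time does the concert start next week?"))
print(handle("I want to buy tickets for Saturday."))
```
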
Question # 13

A Generative AI Engineer is developing an LLM application that users can use to generate personalized birthday poems based on their names.

Which technique would be most effective in safeguarding the application, given the potential for malicious user inputs?

A.

Implement a safety filter that detects any harmful input and asks the LLM to respond that it is unable to assist

B.

Reduce the time that the users can interact with the LLM

C.

Ask the LLM to remind the user that the input is malicious but continue the conversation with the user

D.

Increase the amount of compute that powers the LLM to process input faster

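As an illustration of option A, here is a minimal sketch of an input-side safety filter that short-circuits before any poem is generated; the filter and LLM callables are stand-ins for a real moderation model and serving endpoint:

```python
REFUSAL = "I'm sorry, I can't help with that request."

def generate_poem(name: str, llm, safety_filter) -> str:
    # safety_filter: placeholder for a moderation model/endpoint that flags
    # harmful input; llm: placeholder for the poem-generating model call.
    if safety_filter(name):
        return REFUSAL
    return llm(f"Write a short, friendly birthday poem for {name}.")

# Toy usage with stand-in callables:
flagged = lambda text: "ignore previous instructions" in text.lower()
echo_llm = lambda prompt: f"[LLM output for: {prompt}]"
print(generate_poem("Alice", echo_llm, flagged))
```
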
Question # 14

A Generative AI Engineer developed an LLM application using the provisioned throughput Foundation Model API. Now that the application is ready to be deployed, they realize their volume of requests is not high enough to justify creating their own provisioned throughput endpoint. They want to choose a strategy that ensures the best cost-effectiveness for their application.

What strategy should the Generative AI Engineer use?

A.

Switch to using External Models instead

B.

Deploy the model using pay-per-token throughput as it comes with cost guarantees

C.

Change to a model with fewer parameters in order to reduce hardware constraint issues

D.

Throttle the incoming batch of requests manually to avoid rate limiting issues

Question # 15

After changing the response-generating LLM in a RAG pipeline from GPT-4 to a self-hosted model with a shorter context length, the Generative AI Engineer is getting the following error:

[Error message screenshot omitted.]

Which TWO solutions should the Generative AI Engineer implement without changing the response-generating model? (Choose two.)

A.

Use a smaller embedding model to generate the embeddings

B.

Reduce the maximum output tokens of the new model

C.

Decrease the chunk size of embedded documents

D.

Reduce the number of records retrieved from the vector database

E.

Retrain the response-generating model using ALiBi

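The reasoning behind options C and D comes down to a token budget: the prompt, the retrieved chunks, and the generated output must all fit within the new model's shorter context window. A rough sketch of that arithmetic (every number below is illustrative, not from the question):

```python
CONTEXT_LIMIT = 4096       # shorter context of the new self-hosted model
PROMPT_TOKENS = 200        # system prompt + user question
MAX_OUTPUT_TOKENS = 512    # reserved for the generated answer

budget = CONTEXT_LIMIT - PROMPT_TOKENS - MAX_OUTPUT_TOKENS  # room for chunks

# Smaller chunks (option C) or fewer retrieved records (option D) both
# shrink the retrieved context to fit the budget.
for chunk_tokens, k in [(512, 10), (256, 10), (512, 5)]:
    retrieved = chunk_tokens * k
    status = "fits" if retrieved <= budget else "overflows"
    print(f"{k} chunks x {chunk_tokens} tokens = {retrieved} ({status} {budget})")
```
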
Question # 16

A team wants to serve a code generation model as an assistant for their software developers. It should support multiple programming languages. Quality is the primary objective.

Which of the Databricks Foundation Model APIs, or models available in the Marketplace, would be the best fit?

A.

Llama2-70b

B.

BGE-large

C.

MPT-7b

D.

CodeLlama-34B

Question # 17

A Generative AI Engineer wants to build an LLM-based solution to help a restaurant improve its online customer experience with bookings by automatically handling common customer inquiries. The goal of the solution is to minimize escalations to human intervention and phone calls while maintaining a personalized interaction. To design the solution, the Generative AI Engineer needs to define the input data to the LLM and the task it should perform.

Which input/output pair will support their goal?

A.

Input: Online chat logs; Output: Group the chat logs by users, followed by summarizing each user’s interactions

B.

Input: Online chat logs; Output: Buttons that represent choices for booking details

C.

Input: Customer reviews; Output: Classify review sentiment

D.

Input: Online chat logs; Output: Cancellation options

Question # 18

A Generative AI Engineer has created a RAG application to look up answers to questions about a series of fantasy novels that are being asked on the author's web forum. The fantasy novel texts are chunked and embedded into a vector store with metadata (page number, chapter number, book title), retrieved with the user's query, and provided to an LLM for response generation. The Generative AI Engineer used their intuition to pick the chunking strategy and associated configurations, but now wants to more methodically choose the best values.

Which TWO strategies should the Generative AI Engineer take to optimize their chunking strategy and parameters? (Choose two.)

A.

Change embedding models and compare performance.

B.

Add a classifier for user queries that predicts which book will best contain the answer. Use this to filter retrieval.

C.

Choose an appropriate evaluation metric (such as recall or NDCG) and experiment with changes in the chunking strategy, such as splitting chunks by paragraphs or chapters. Choose the strategy that gives the best performance metric.

D.

Pass known questions and best answers to an LLM and instruct the LLM to provide the best token count. Use a summary statistic (mean, median, etc.) of the best token counts to choose chunk size.

E.

Create an LLM-as-a-judge metric to evaluate how well previous questions are answered by the most appropriate chunk. Optimize the chunking parameters based upon the values of the metric.

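To make option C concrete, here is a minimal sketch of computing recall@k for a candidate chunking strategy; the retriever interface and the labeled evaluation set are assumptions for the sketch, not part of the question:

```python
def recall_at_k(retriever, eval_set, k: int = 5) -> float:
    # eval_set: list of (query, relevant_chunk_ids) pairs labeled in advance.
    # retriever.search(query, k) returning chunk objects with an .id field
    # is an assumed interface, not any specific library's API.
    hits = 0
    for query, relevant_ids in eval_set:
        retrieved_ids = {chunk.id for chunk in retriever.search(query, k=k)}
        if retrieved_ids & set(relevant_ids):
            hits += 1
    return hits / len(eval_set)

# Rebuild the index once per candidate chunking strategy (by paragraph,
# by chapter, different sizes/overlaps) and keep the best-scoring one.
```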