
Note! 1z0-1127-24 has been withdrawn. The new exam code is 1z0-1127-25

1z0-1127-24 Practice Exam Questions with Answers: Oracle Cloud Infrastructure 2024 Generative AI Professional Certification

Question # 6

What is LangChain?

A.

A JavaScript library for natural language processing

B.

A Ruby library for text generation

C.

A Python library for building applications with Large Language Models

D.

A Java library for text summarization

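As a study aid for Question # 6, here is a minimal sketch of the kind of LLM application code LangChain supports, assuming the langchain Python package is installed; the template text is illustrative and import paths can vary by version.

```python
# Minimal LangChain sketch: build a reusable prompt template for an LLM call.
# Assumes the langchain package is installed; import paths vary by version.
from langchain.prompts import PromptTemplate

template = PromptTemplate.from_template(
    "Summarize the following text in one sentence:\n\n{text}"
)

# In a real application the rendered prompt would be sent to an LLM wrapper
# (for example, an OCI Generative AI or OpenAI chat model).
prompt = template.format(
    text="LangChain is a Python library for building LLM applications."
)
print(prompt)
```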
Question # 7

How does the Retrieval-Augmented Generation (RAG) Token technique differ from RAG Sequence when generating a model's response?

A.

Unlike RAG Sequence, RAG Token generates the entire response at once without considering individual parts.

B.

RAG Token does not use document retrieval but generates responses based on pre-existing knowledge only.

C.

RAG Token retrieves documents only at the beginning of the response generation and uses those documents for the entire content.

D.

RAG Token retrieves relevant documents for each part of the response and constructs the answer incrementally.

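A hypothetical sketch of the distinction in Question # 7: RAG Sequence retrieves once and conditions the whole answer on that document set, while RAG Token can retrieve again for each part of the response. The retriever and generator below are trivial stand-ins, not a real RAG implementation.

```python
# Hypothetical contrast of RAG Sequence vs. RAG Token; retrieve() and
# generate_next_token() are illustrative stand-ins, not a real library API.

def retrieve(query, context=()):
    # Stand-in retriever: pretend retrieval depends on the partial answer.
    return [f"doc-for:{query}:{len(context)}"]

def generate_next_token(query, docs, answer):
    # Stand-in generator: emit a fixed short answer, then stop.
    canned = ["RAG", " Token", " retrieves", " per", " step", "<eos>"]
    return canned[min(len(answer), len(canned) - 1)]

def rag_sequence(query):
    docs = retrieve(query)   # one retrieval for the whole response
    # The same documents condition the entire generated sequence.
    return f"answer conditioned once on {docs}"

def rag_token(query, max_tokens=20):
    answer = []
    for _ in range(max_tokens):
        docs = retrieve(query, context=answer)  # retrieval can change per step
        token = generate_next_token(query, docs, answer)
        if token == "<eos>":
            break
        answer.append(token)
    return "".join(answer)

print(rag_sequence("How does RAG Token differ?"))
print(rag_token("How does RAG Token differ?"))
```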
Question # 8

Which is NOT a category of pretrained foundational models available in the OCI Generative AI service?

A.

Translation models

B.

Summarization models

C.

Generation models

D.

Embedding models

Question # 9

What is the primary function of the "temperature" parameter in the OCI Generative AI Generation models?

A.

Determines the maximum number of tokens the model can generate per response

B.

Specifies a string that tells the model to stop generating more content

C.

Assigns a penalty to tokens that have already appeared in the preceding text

D.

Controls the randomness of the model's output, affecting its creativity

Question # 10

Accuracy in vector databases contributes to the effectiveness of Large Language Models (LLMs) by preserving a specific type of relationship.

What is the nature of these relationships, and why are they crucial for language models?

A.

Hierarchical relationships; important for structuring database queries

B.

Linear relationships; they simplify the modeling process

C.

Semantic relationships; crucial for understanding context and generating precise language

D.

Temporal relationships; necessary for predicting future linguistic trends

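A toy illustration of the semantic relationships in Question # 10: embeddings place related meanings close together in vector space, which is what lets an LLM retrieve context-relevant material. The vectors below are made up for the example.

```python
# Toy demo: semantically related words get nearby embedding vectors.
# These 3-dimensional vectors are fabricated purely for illustration.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

king  = np.array([0.90, 0.80, 0.10])
queen = np.array([0.88, 0.82, 0.15])
apple = np.array([0.10, 0.20, 0.95])

print(cosine_similarity(king, queen))  # high: semantically related
print(cosine_similarity(king, apple))  # low: unrelated concepts
```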
Question # 11

What issue might arise from using small data sets with the Vanilla fine-tuning method in the OCI Generative AI service?

A.

Overfitting

B.

Underfitting

C.

Data Leakage

D.

Model Drift

Question # 12

Which is a distinguishing feature of "Parameter-Efficient Fine-tuning (PEFT)" as opposed to classic "fine-tuning" in Large Language Model training?

A.

PEFT involves only a few or new parameters and uses labeled, task-specific data.

B.

PEFT modifies all parameters and uses unlabeled, task-agnostic data.

C.

PEFT does not modify any parameters but uses soft prompting with unlabeled data.

D.

PEFT modifies all parameters and is typically used when no training data exists.

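A sketch of the parameter-efficient idea from Question # 12 using PyTorch: freeze the pretrained weights and train only a small set of new parameters. Real PEFT methods (LoRA, adapters, soft prompts) differ in detail; this is illustrative only.

```python
# PEFT-style sketch: freeze the pretrained backbone, train only a small head.
import torch.nn as nn

backbone = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True),
    num_layers=2,
)
for param in backbone.parameters():
    param.requires_grad = False        # pretrained weights stay frozen

task_head = nn.Linear(64, 2)           # only these few parameters are trained

trainable = sum(p.numel() for p in task_head.parameters())
total = sum(p.numel() for p in backbone.parameters()) + trainable
print(f"training {trainable} of {total} parameters")
```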
Question # 13

What is the primary purpose of LangSmith Tracing?

A.

To monitor the performance of language models

B.

To generate test cases for language models

C.

To analyze the reasoning process of language models

D.

To debug issues in language model outputs

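For Question # 13, LangSmith tracing is commonly switched on through environment variables before running a LangChain program; the variable names below follow the LangSmith documentation at the time of writing and should be verified against current docs.

```python
import os

# Enable LangSmith tracing for subsequent LangChain runs; the API key value
# is a placeholder. With tracing on, each step of a chain's execution is
# recorded so its behavior can be inspected and analyzed in the LangSmith UI.
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "<your-langsmith-api-key>"
```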
Question # 14

How does the temperature setting in a decoding algorithm influence the probability distribution over the vocabulary?

A.

Increasing the temperature flattens the distribution, allowing for more varied word choices.

B.

Increasing the temperature removes the impact of the most likely word.

C.

Temperature has no effect on probability distribution; it only changes the speed of decoding.

D.

Decreasing the temperature broadens the distribution, making less likely words more probable.

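A numeric demonstration of the effect in Question # 14: applying softmax(logits / T) with a higher temperature T flattens the distribution, while a lower T sharpens it. The logits are made up.

```python
# Temperature scaling demo: softmax(logits / T).
import numpy as np

def softmax_with_temperature(logits, temperature):
    scaled = np.array(logits) / temperature
    exp = np.exp(scaled - scaled.max())   # subtract max for numerical stability
    return exp / exp.sum()

logits = [4.0, 2.0, 1.0]
print(softmax_with_temperature(logits, 0.5))  # sharp: top token dominates
print(softmax_with_temperature(logits, 1.0))  # baseline
print(softmax_with_temperature(logits, 2.0))  # flat: more varied word choices
```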
Question # 15

In the context of generating text with a Large Language Model (LLM), what does the process of greedy decoding entail?

A.

Selecting a random word from the entire vocabulary at each step

B.

Choosing the word with the highest probability at each step of decoding

C.

Picking a word based on its position in a sentence structure

D.

Using a weighted random selection based on a modulated distribution

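A runnable sketch of the greedy decoding asked about in Questions # 15 and # 16: at every step the single highest-probability token is chosen. The toy model below is a stand-in for a real language model's next-token distribution.

```python
# Greedy decoding: always pick the argmax of the next-token distribution.
import numpy as np

def greedy_decode(next_token_distribution, max_steps=10, eos="<eos>"):
    tokens = []
    for _ in range(max_steps):
        vocab, probs = next_token_distribution(tokens)
        best = vocab[int(np.argmax(probs))]   # most likely token at each step
        if best == eos:
            break
        tokens.append(best)
    return tokens

def toy_model(tokens):
    # Stand-in for a language model: the probability peak shifts each step.
    vocab = ["the", "cat", "sat", "<eos>"]
    probs = np.roll([0.7, 0.1, 0.1, 0.1], len(tokens))
    return vocab, probs

print(greedy_decode(toy_model))  # ['the', 'cat', 'sat']
```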
Question # 16

Which is the main characteristic of greedy decoding in the context of language model word prediction?

A.

It chooses words randomly from the set of less probable candidates.

B.

It requires a large temperature setting to ensure diverse word selection.

C.

It selects words based on a flattened distribution over the vocabulary.

D.

It picks the most likely word at each step of decoding.

Question # 17

In the simplified workflow for managing and querying vector data, what is the role of indexing?

A.

To compress vector data for minimized storage usage

B.

To convert vectors into a nonindexed format for easier retrieval

C.

To categorize vectors based on their originating data type (text, images, audio)

D.

To map vectors to a data structure for faster searching, enabling efficient retrieval

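A small indexing sketch for Question # 17 using the FAISS library (assuming faiss and numpy are installed): vectors are mapped into an index structure so nearest neighbors can be retrieved efficiently. The vectors here are random placeholders.

```python
# Indexing demo with FAISS: map vectors to a search structure for fast lookup.
import numpy as np
import faiss

dim = 64
vectors = np.random.rand(1000, dim).astype("float32")

index = faiss.IndexFlatL2(dim)   # exact L2 index; ANN indexes trade accuracy for speed
index.add(vectors)               # map vectors into the index structure

query = np.random.rand(1, dim).astype("float32")
distances, ids = index.search(query, 5)   # efficient retrieval of 5 nearest vectors
print(ids)
```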
Question # 18

What does accuracy measure in the context of fine-tuning results for a generative model?

A.

The depth of the neural network layers used in the model

B.

The number of predictions a model makes, regardless of whether they are correct or incorrect

C.

How many predictions the model made correctly out of all the predictions in an evaluation

D.

The proportion of incorrect predictions made by the model during an evaluation

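The arithmetic behind the accuracy metric in Question # 18, with made-up labels: accuracy is the fraction of predictions that match the ground truth.

```python
# Accuracy = correct predictions / total predictions (labels are fabricated).
predictions  = ["cat", "dog", "dog", "bird", "cat"]
ground_truth = ["cat", "dog", "cat", "bird", "cat"]

correct = sum(p == t for p, t in zip(predictions, ground_truth))
accuracy = correct / len(ground_truth)
print(f"accuracy = {correct}/{len(ground_truth)} = {accuracy:.2f}")  # 0.80
```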
Question # 19

You create a fine-tuning dedicated AI cluster to customize a foundational model with your custom training data. How many unit hours are required for fine-tuning if the cluster is active for 10 hours?

A.

10 unit hours

B.

30 unit hours

C.

15 unit hours

D.

40 unit hours

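The general arithmetic behind Question # 19 is unit hours = (units in the cluster) x (hours the cluster is active). The per-cluster unit count for fine-tuning is defined by OCI's documentation; the value below is a placeholder assumption, not the answer key.

```python
# Unit-hour arithmetic: units consumed per hour times hours active.
UNITS_PER_FINE_TUNING_CLUSTER = 2   # hypothetical value; check OCI documentation
hours_active = 10

unit_hours = UNITS_PER_FINE_TUNING_CLUSTER * hours_active
print(f"{unit_hours} unit hours")
```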