
Note: 1z0-1110-23 has been withdrawn. The new exam code is 1z0-1110-24.

1z0-1110-23 Practice Exam Questions with Answers: Oracle Cloud Infrastructure Data Science 2023 Professional Certification

Question # 6

You want to write a Python script to create a collection of different projects for your data science team. Which Oracle Cloud Infrastructure (OCI) Data Science Interface would you use?

A.

Programming Language Software Development Kit (SDK)

B.

Mobile App

C.

Command Line Interface (CLI)

D.

OCI Console

Question # 7

You have built a machine learning model to predict whether a bank customer is going to default on a loan. You want to use Local Interpretable Model-Agnostic Explanations (LIME) to understand a specific prediction. What is the key idea behind LIME?

A.

Global behaviour of a machine learning model may be complex, while the local behaviour may be approximated with a simpler surrogate model.

B.

Model-agnostic techniques are more interpretable than techniques that are dependent on the types of models.

C.

Global and local behaviours of machine learning models are similar.

D.

Local explanation techniques are model-agnostic, while global explanation techniques are not.

Question # 8

You want to write a program that performs document analysis tasks such as extracting text and tables from a document. Which Oracle AI service would you use?

A.

OCI Language

B.

Oracle Digital Assistant

C.

OCI Speech

D.

OCI Vision

Question # 9

You are preparing a configuration object necessary to create a Data Flow application. Which THREE parameter values should you provide?

A.

The path to the archive.zip file.

B.

The local path to your pySpark script.

C.

The compartment of the Data Flow application.

D.

The bucket used to read/write the pySpark script in Object Storage.

E.

The display name of the application.

Question # 10

The Oracle AutoML pipeline automates hyperparameter tuning by training the model with different parameters in parallel. You have created an instance of Oracle AutoML as oracle_automl and now you want an output with all the different trials performed by Oracle AutoML. Which of the following commands gives you the results of all the trials?

A.

oracle_automl.visualize_algorithm_selection_trials()

B.

oracle_automl.visualize_adaptive_sampling_trials()

C.

oracle_automl.print_trials()

D.

oracle_automl.visualize_tuning_trials()

Question # 11

You realize that your model deployment is about to reach its utilization limit. What would you do to avoid the issue before requests start to fail?

A.

Update the deployment to add more instances.

B.

Delete the deployment.

C.

Update the deployment to use fewer instances.

D.

Update the deployment to use a larger virtual machine (more CPUs/memory).

E.

Reduce the load balancer bandwidth limit so that fewer requests come in.

Question # 12

What preparation steps are required to access an Oracle AI service SDK from a Data Science notebook session?

A.

Create and upload score.py and runtime.yaml.

B.

Create and upload the API signing key and config file.

C.

Import the REST API.

D.

Call the ADS command to enable AI integration.

Question # 13

As you are working in your notebook session, you find that it does not have enough CPU and memory for your workload. How would you scale up your notebook session without losing your work?

A.

Create a temporary bucket on Object Storage, write all your files and data to Object Storage, delete your notebook session, provision a new notebook session on a larger compute shape, and copy your files and data from your temporary bucket onto your new notebook session.

B.

Ensure your files and environments are written to the block volume storage under the /home/datascience directory, deactivate the notebook session, and activate the notebook session with a larger compute shape selected.

C.

Download all your files and data to your local machine, delete your notebook session, provision a new notebook session on a larger compute shape, and upload your files from your local machine to the new notebook session.

D.

Deactivate your notebook session, provision a new notebook session on a larger compute shape, and re-create all of your file changes.

Question # 14

As a data scientist, you have stored sensitive data in a database. You need to protect this data by using a master encryption algorithm, which uses symmetric keys. Which master encryption algorithm would you choose in the Oracle Cloud Infrastructure (OCI) Vault service?

A.

Triple Data Encryption Standard Algorithm

B.

Elliptic Curve Cryptography Digital Signature Algorithm

C.

Advanced Encryption Standard Keys

D.

Rivest-Shamir-Adleman Keys

Question # 15

You are a data scientist building a pipeline in the Oracle Cloud Infrastructure (OCI) Data Science service for your machine learning project. You want to optimize the pipeline completion time by running some steps in parallel. Which statement is true about running pipeline steps in parallel?

A.

Steps in a pipeline can be run only sequentially.

B.

Pipeline steps can be run in sequence or in parallel, as long as they create a directed acyclic graph (DAG).

C.

All pipeline steps are always run in parallel.

D.

Parallel steps cannot be run if they are completely independent of each other.

Question # 16

You want to evaluate the relationship between feature values and target variables. You have a large number of observations having a near uniform distribution and the features are highly correlated. Which model explanation technique should you choose?

A.

Feature Permutation Importance Explanations

B.

Local Interpretable Model-Agnostic Explanations

C.

Feature Dependence Explanations

D.

Accumulated Local Effects

Question # 17

You want to ensure that all stdout and stderr from your code are automatically collected and logged, without implementing additional logging in your code. How would you achieve this with Data Science Jobs?

A.

On job creation, enable logging and select a log group. Then, select either a log or the option to enable automatic log creation.

B.

Make sure that your code is using the standard logging library and then store all the logs to Object Storage at the end of the job.

C.

Create your own log group and use a third-party logging service to capture job run details for log collection and storing.

D.

You can implement custom logging in your code by using the Data Science Jobs logging service.

Question # 18

You are creating an Oracle Cloud Infrastructure (OCI) Data Science job that will run on a recurring basis in a production environment. This job will pick up sensitive data from an Object Storage bucket, train a model, and save it to the model catalog. How would you design the authentication mechanism for the job?

A.

Package your personal OCI config file and keys in the job artifact.

B.

Use the resource principal of the job run as the signer in the job code, ensuring there is a dynamic group for this job run with appropriate access to Object Storage and the model catalog.

C.

Store your personal OCI config file and keys in the Vault, and access the Vault through the job run resource principal.

D.

Create a pre-authenticated request (PAR) for the Object Storage bucket, and use that in the job code.

Question # 19

You are a data scientist trying to load data into your notebook session. You understand that the Accelerated Data Science (ADS) SDK supports loading various data formats. Which THREE of the following are ADS-supported data formats?

A.

DOCX

B.

Pandas DataFrame

C.

JSON

D.

Raw Images

E.

XML

Question # 20

You are a data scientist working for a manufacturing company. You have developed a forecasting model to predict the sales demand in the upcoming months. You created a model artifact that contained custom logic requiring third-party libraries. When you deployed the model, it failed to run because you did not include all the third-party dependencies in the model artifact. What file should be modified to include the missing libraries?

A.

model_artifact_validate.py

B.

score.py

C.

requirements.txt

D.

runtime.yaml

Question # 21

You are working as a data scientist for a healthcare company. They decide to analyze the data to find patterns in a large volume of electronic medical records. You are asked to build a PySpark solution to analyze these records in a JupyterLab notebook. What is the order of recommended steps to develop a PySpark application in Oracle Cloud Infrastructure (OCI) Data Science?

A.

Launch a notebook session. Install a PySpark conda environment. Configure core-site.xml. Develop your PySpark application. Create a Data Flow application with the Accelerated Data Science (ADS) SDK.

B.

Install a Spark conda environment. Configure core-site.xml. Launch a notebook session. Create a Data Flow application with the Accelerated Data Science (ADS) SDK. Develop your PySpark application.

C.

Configure core-site.xml. Install a PySpark conda environment. Create a Data Flow application with the Accelerated Data Science (ADS) SDK. Develop your PySpark application. Launch a notebook session.

D.

Launch a notebook session. Configure core-site.xml. Install a PySpark conda environment. Develop your PySpark application. Create a Data Flow application with the Accelerated Data Science (ADS) SDK.

Question # 22

You are building a model and need input that represents data as morning, afternoon, or evening. However, the data contains a time stamp. What part of the Data Science life cycle would you be in when creating the new variable?

A.

Data access

B.

Feature engineering

C.

Model type selection

D.

Model validation

Question # 23

You have just received a new data set from a colleague. You want to quickly find out summary information about the data set, such as the types of features, the total number of observations, and distributions of the data. Which Accelerated Data Science (ADS) SDK method from the ADSDataset class would you use?

A.

show_corr()

B.

to_xgb()

C.

compute()

D.

show_in_notebook()

Question # 24

Which Oracle Accelerated Data Science (ADS) class can be used for easy access to data sets from reference libraries and index websites such as scikit-learn?

A.

DataLabeling

B.

DatasetBrowser

C.

SecretKeeper

D.

ADSTuner
