We at Crack4sure are committed to giving students preparing for the Microsoft AI-900 exam the most current and reliable questions. To help people study, we have made some of our Microsoft Azure AI Fundamentals exam materials available for free to everyone. You can take the free AI-900 practice test as many times as you want. Each practice question includes its answer, and each answer is explained.
You have a webchat bot that provides responses from a QnA Maker knowledge base.
You need to ensure that the bot uses user feedback to improve the relevance of the responses over time.
What should you use?
key phrase extraction
sentiment analysis
business logic
active learning
According to the Microsoft Azure AI Fundamentals (AI-900) study guide and the official Microsoft Learn module “Describe features of common AI workloads”, QnA Maker (now part of Azure AI Language services) allows developers to build, train, and publish a knowledge base that provides natural-language answers to user queries. A key capability of this service is active learning, which enables the knowledge base to automatically suggest improvements by analyzing user feedback and usage patterns.
Active learning is an iterative process in which the service observes real user interactions and identifies ambiguous questions or pairs of similar questions that produce uncertain or multiple answers. The system then recommends updates or refinements to the knowledge base to improve the accuracy and relevance of responses. This feedback loop helps ensure that over time, the chatbot’s responses align more closely with actual user expectations and language variations.
In contrast:
A. Key phrase extraction identifies main ideas in text and is used in content summarization, not in response optimization.
B. Sentiment analysis detects emotional tone (positive, negative, neutral), but it doesn’t refine QnA responses.
C. Business logic defines operational rules in an application, not machine learning-driven feedback.
The AI-900 guide specifically emphasizes that QnA Maker supports active learning to improve the quality of answers based on end-user feedback, making this the verified and official Microsoft answer.
Reference (from Microsoft Learn AI-900 content):
“Active learning uses feedback from end users to automatically suggest improvements to a knowledge base, helping improve the accuracy of answers over time.”
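The feedback loop described above depends on feedback records being sent back to the service. As a rough sketch only (the field names follow the documented `feedbackRecords` schema of the QnA Maker Train API, but the endpoint call and authentication are omitted, and the sample values are invented), building such a payload might look like:

```python
import json

def build_feedback_payload(records):
    """Build the JSON body for the QnA Maker Train API, which accepts
    explicit user feedback for active learning. Field names follow the
    documented feedbackRecords schema; values here are illustrative."""
    return json.dumps({
        "feedbackRecords": [
            {
                "userId": r["user_id"],          # identifies the end-user session
                "userQuestion": r["question"],   # the question the user actually asked
                "qnaId": r["qna_id"],            # the KB answer the user confirmed
            }
            for r in records
        ]
    })

payload = build_feedback_payload(
    [{"user_id": "u1", "question": "How do I reset my password?", "qna_id": 42}]
)
print(payload)
```

Sending a body like this tells the service which knowledge-base answer actually satisfied a given user question, which is the raw signal active learning aggregates into suggested improvements.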
Providing contextual information to improve the response quality of a generative AI solution is an example of which prompt engineering technique?
providing examples
fine-tuning
grounding data
system messages
In Microsoft Azure OpenAI Service and the AI-900/AI-102 study materials, grounding data is the correct term used to describe the process of providing contextual or external information to improve the accuracy, relevance, and quality of responses generated by a generative AI model such as GPT-3.5 or GPT-4.
Grounding is a prompt engineering technique where the AI model is supplemented with relevant background data, such as company documents, knowledge bases, or user context, that helps the model generate factually correct and context-aware responses. Microsoft Learn defines grounding as a way to connect the model’s general knowledge to specific, real-world information. For example, if you ask a GPT-3.5 model about your organization’s HR policies, the base model will not know them unless that policy information is provided (grounded) in the prompt. By embedding this contextual data, the AI becomes “grounded” in the facts it needs to respond reliably.
This technique differs from other prompt engineering concepts:
A. Providing examples (few-shot prompting) shows the model sample inputs and outputs to guide formatting or style, not factual context.
B. Fine-tuning involves retraining the model with labeled data to permanently adjust its behavior — it’s not a prompt-based technique.
D. System messages define the model’s role, tone, or style (for example, “You are a helpful assistant”) but do not add factual context.
Therefore, when you provide contextual information (like product details, policy documents, or reference text) within a prompt to enhance the quality and factual reliability of the model’s responses, you are applying the grounding data technique.
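A minimal sketch of grounding in practice is simply assembling the prompt so that the retrieved context precedes the question. The instruction wording and source formatting below are illustrative choices, not an official template:

```python
def build_grounded_prompt(question, grounding_docs):
    """Assemble a prompt that grounds the model in supplied context.
    The model is instructed to answer only from the provided excerpts."""
    context = "\n\n".join(
        f"[Source {i + 1}] {doc}" for i, doc in enumerate(grounding_docs)
    )
    return (
        "Answer the question using ONLY the context below. "
        "If the answer is not in the context, say you don't know.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_grounded_prompt(
    "How many weeks of parental leave do employees get?",
    ["HR policy 4.2: Employees receive 12 weeks of paid parental leave."],
)
print(prompt)
```

The same pattern underlies retrieval-augmented generation: the grounding documents are fetched from a search index at request time and spliced into the prompt before it reaches the model.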
You need to track multiple versions of a model that was trained by using Azure Machine Learning. What should you do?
Provision an inference cluster.
Explain the model.
Register the model.
Register the training data.
According to the Microsoft Azure AI Fundamentals (AI-900) Official Study Guide and the Microsoft Learn module “Explore Azure Machine Learning,” registering a model is the correct way to track multiple versions of models in Azure Machine Learning.
When you train models in Azure Machine Learning, each trained version can be registered in the workspace’s Model Registry. Registration stores the model’s metadata, including version, training environment, parameters, and lineage. Each registration automatically increments the version number, enabling you to manage, deploy, and compare multiple model iterations efficiently.
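The version auto-increment behavior can be illustrated with a toy in-memory stand-in (this class is purely conceptual; real registration goes through the Azure ML SDK or studio, not code like this):

```python
class ModelRegistry:
    """Toy in-memory stand-in for the Azure ML model registry,
    illustrating how registering the same model name bumps the version."""

    def __init__(self):
        self._models = {}  # model name -> list of (version, metadata)

    def register(self, name, metadata):
        versions = self._models.setdefault(name, [])
        version = len(versions) + 1  # auto-incremented, as in Azure ML
        versions.append((version, metadata))
        return version

registry = ModelRegistry()
v1 = registry.register("disease-classifier", {"auc": 0.91})
v2 = registry.register("disease-classifier", {"auc": 0.93})
print(v1, v2)  # 1 2
```

Because each registration under the same name yields a new version with its own metadata, you can later compare iterations or roll a deployment back to an earlier version.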
The other options are incorrect:
A. Provision an inference cluster – Used for model deployment, not version tracking.
B. Explain the model – Provides interpretability but does not track versions.
D. Register the training data – Registers data assets, not models.
You have an Azure Machine Learning model that uses clinical data to predict whether a patient has a disease.
You clean and transform the clinical data.
You need to ensure that the accuracy of the model can be proven.
What should you do next?
Train the model by using the clinical data.
Split the clinical data into two datasets.
Train the model by using automated machine learning (automated ML).
Validate the model by using the clinical data.
According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and Microsoft Learn modules on machine learning concepts, ensuring that the accuracy of a predictive model can be proven requires data partitioning—specifically splitting the available data into training and testing datasets. This is a foundational concept in supervised machine learning.
When you split the data, typically about 70–80% of the dataset is used for training the model, while the remaining 20–30% is used for testing (or validation). The reason behind this approach is to ensure that the model’s performance metrics—such as accuracy, precision, recall, and F1-score—are evaluated on data the model has never seen before. This prevents overfitting and allows you to demonstrate that the model generalizes well to new, unseen data.
In the AI-900 Microsoft Learn content under “Describe the machine learning process”, it is explained that after cleaning and transforming the data, the next essential step is data splitting to “evaluate model performance objectively.” By keeping training and testing data separate, you can prove the reliability and accuracy of the model’s predictions, which is particularly crucial in sensitive domains like clinical or healthcare analytics, where decision transparency and validation are vital.
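A pure-Python sketch of an 80/20 hold-out split looks like this (in practice you would typically reach for a library utility such as scikit-learn's `train_test_split`; the patient records here are synthetic):

```python
import random

def split_dataset(rows, train_fraction=0.8, seed=0):
    """Shuffle once with a fixed seed, then hold out the tail for testing.
    The held-out rows are never shown to the model during training."""
    rows = list(rows)
    random.Random(seed).shuffle(rows)
    cut = int(len(rows) * train_fraction)
    return rows[:cut], rows[cut:]

# Synthetic stand-in for the cleaned clinical dataset.
data = [{"patient_id": i, "label": i % 2} for i in range(100)]
train, test = split_dataset(data)
print(len(train), len(test))  # 80 20
```

Accuracy measured on `test` is credible evidence of generalization precisely because those rows were excluded from training.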
Option A (Train the model by using the clinical data) is incorrect because you should not train and evaluate on the same data—it would lead to biased results.
Option C (Train the model using automated ML) is incorrect because automated ML is a method for training and tuning, but it doesn’t inherently prove accuracy.
Option D (Validate the model by using the clinical data) is also incorrect if you use the same dataset for validation and training—it would not prove true accuracy.
Therefore, per Microsoft’s official AI-900 study content, the verified correct answer is B. Split the clinical data into two datasets.
Select the answer that correctly completes the sentence.



According to the Microsoft Azure AI Fundamentals (AI-900) study materials and Microsoft’s Responsible AI guidelines, customers must obtain approval based on their intended usage before accessing and deploying Azure OpenAI Service. This requirement ensures that Microsoft upholds its commitment to Responsible AI principles, which include fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.
The Azure OpenAI Service provides access to powerful language models such as GPT series and Codex, which can generate, summarize, and understand natural language and code. Because of the potential for misuse—such as generating harmful content, misinformation, or unethical automation—Microsoft enforces a use case review and approval process before granting customers access to the service. This process involves submitting an application describing the intended purpose, deployment method, and compliance measures. Only after Microsoft validates that the proposed use aligns with responsible AI practices will access be approved.
This aligns with Microsoft’s documented commitment that “customers are required to submit an application that describes their intended use of the Azure OpenAI Service,” ensuring that all deployments follow ethical and legal standards. This approval step helps maintain transparency and prevent harmful or non-compliant use cases such as deepfake generation, biased automation, or malicious chatbot deployment.
Other options listed in the question are incorrect:
Commit to a minimum level of expenditure – Microsoft does not require financial commitments for ethical approval.
Pay an upfront fee – Payment is handled through normal Azure billing, not a special fee.
Provide credit card details – Not a responsible AI requirement; this is standard for any Azure subscription.
Therefore, the correct and verified answer per Microsoft’s Responsible AI framework and the Azure AI-900 study materials is that customers must obtain approval based on their intended usage.
Select the answer that correctly completes the sentence



According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and Microsoft’s Responsible AI Framework, the Reliability and Safety principle ensures that AI systems operate consistently, accurately, and as intended, even when confronted with unexpected data or edge cases. It emphasizes that AI systems must be tested, validated, and monitored to ensure stable performance and to prevent harm caused by inaccurate or unreliable outputs.
In the given scenario, the AI system is designed not to provide predictions when key fields contain unusual or missing values. This approach demonstrates that the system is built to avoid unreliable or unsafe outputs that could result from incomplete or corrupted data. Microsoft explicitly outlines that reliable AI systems must handle data anomalies and input validation properly to prevent incorrect predictions.
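The "decline to predict" behavior can be sketched as a validation gate in front of the model. The field names and plausibility ranges below are hypothetical, chosen only to illustrate the pattern:

```python
def safe_predict(record, model, required_fields=("age", "blood_pressure")):
    """Return a prediction only when all key fields are present and plausible;
    otherwise decline rather than risk an unreliable output."""
    for field in required_fields:
        if record.get(field) is None:
            return {"prediction": None, "reason": f"missing field: {field}"}
    if not (0 < record["age"] < 120):  # illustrative plausibility check
        return {"prediction": None, "reason": "age out of expected range"}
    return {"prediction": model(record), "reason": "ok"}

def dummy_model(record):
    """Stand-in for a trained clinical model."""
    return "low risk"

print(safe_predict({"age": 45, "blood_pressure": 120}, dummy_model))
print(safe_predict({"age": 45}, dummy_model))
```

Returning an explicit reason instead of a guess is what makes the system safe: downstream consumers can distinguish "no prediction" from a confident result.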
Here’s how the other options differ:
Inclusiveness ensures accessibility for all users, including those with disabilities or from different backgrounds. It’s unrelated to prediction control or data reliability.
Privacy and Security protects sensitive data and ensures proper handling of personal information, not system prediction logic.
Transparency ensures that users understand how an AI system makes its decisions but doesn’t address prediction reliability.
Thus, stopping a prediction when data is incomplete or abnormal directly supports the Reliability and Safety principle — it ensures that the AI model functions correctly under valid conditions and avoids unintended or harmful outcomes.
This principle aligns with Microsoft’s Responsible AI guidance, which highlights that AI solutions must “operate reliably and safely, even under unexpected conditions, to protect users and maintain trust.”
Select the answer that correctly completes the sentence.


Privacy and security.
According to Microsoft’s Responsible AI Principles, implementing filters to block harmful or inappropriate content in a Generative AI chat solution demonstrates a commitment to the Privacy and Security principle. This principle ensures that AI systems are designed and operated in a way that protects users, their data, and society from harm.
When a chat system uses Generative AI models (like Azure OpenAI’s GPT-based services), there is a risk that the model might produce unsafe, offensive, or sensitive content. Microsoft addresses this through content filters and safety systems, which automatically detect and block violent, hate-based, or sexually explicit outputs. This is part of responsible deployment practices to ensure that user interactions remain safe, private, and compliant with ethical standards.
Implementing these filters aligns with the Privacy and Security principle because it:
Protects users from exposure to harmful or abusive content.
Ensures that conversations are safeguarded against malicious or unsafe use.
Upholds user trust by maintaining a safe digital environment for all participants.
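The filtering step described above can be sketched as a post-generation check. The classifier, category names, and threshold here are hypothetical stand-ins for a real content-safety system:

```python
BLOCKED_CATEGORIES = {"hate", "violence", "sexual"}  # illustrative category names

def apply_content_filter(generated_text, classifier):
    """Block the response if a (hypothetical) classifier flags any
    disallowed category at or above a severity threshold."""
    scores = classifier(generated_text)  # e.g. {"hate": 0.1, "violence": 0.8}
    flagged = sorted(
        c for c, s in scores.items() if c in BLOCKED_CATEGORIES and s >= 0.5
    )
    if flagged:
        return "This response was filtered.", flagged
    return generated_text, []

def fake_classifier(text):
    """Toy severity scorer used only for this demonstration."""
    return {"violence": 0.9 if "fight" in text else 0.0}

print(apply_content_filter("Let's fight", fake_classifier))
print(apply_content_filter("Hello there", fake_classifier))
```

Production systems run checks like this on both the user's prompt and the model's output, so harmful content is blocked in either direction.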
Let’s briefly clarify why the other options are incorrect:
Fairness deals with ensuring unbiased treatment and equitable outcomes in AI decisions.
Transparency focuses on explaining how AI systems make decisions.
Accountability refers to human oversight and responsibility for AI actions.
Thus, content filtering mechanisms are explicitly an example of Privacy and Security, as they protect users and data from harm or misuse while maintaining ethical AI behavior.
Therefore, the verified correct answer is Privacy and security.
When you design an AI system to assess whether loans should be approved, the factors used to make the decision should be explainable.
This is an example of which Microsoft guiding principle for responsible AI?
transparency
inclusiveness
fairness
privacy and security
Microsoft’s Responsible AI Principles, as outlined in the AI-900 certification materials and official Microsoft documentation, emphasize six guiding principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. The principle of transparency means that AI systems should be designed so their decisions and processes are understandable and explainable to users and stakeholders.
In this scenario, the AI system is being developed to decide whether a loan should be approved. Such a decision directly affects people’s lives and finances, so it is essential that the system can explain which factors influenced its decision—for example, credit score, income, or payment history. Microsoft’s Responsible AI framework stresses that transparency helps ensure trust between humans and AI systems. When decisions are explainable, users can understand and contest the reasoning if necessary.
The other options do not align precisely with this scenario:
B. Inclusiveness focuses on making AI accessible to all people, regardless of ability or background.
C. Fairness ensures that AI systems treat all individuals equally and do not discriminate. While fairness is important for loan assessment, the question specifically highlights the need for explainability, not equality.
D. Privacy and Security deals with safeguarding user data, which is separate from explaining decisions.
Therefore, the principle demonstrated here is transparency, as it ensures decision-making processes are clear, explainable, and traceable—directly aligning with Microsoft’s responsible AI guidance.
You are building an AI-based app.
You need to ensure that the app uses the principles for responsible AI.
Which two principles should you follow? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
Implement an Agile software development methodology
Implement a process of AI model validation as part of the software review process
Establish a risk governance committee that includes members of the legal team, members of the risk management team, and a privacy officer
Prevent the disclosure of the use of AI-based algorithms for automated decision making
The correct answers are B. Implement a process of AI model validation as part of the software review process and C. Establish a risk governance committee that includes members of the legal team, members of the risk management team, and a privacy officer.
According to the Microsoft Azure AI Fundamentals (AI-900) Official Study Guide and the Microsoft Responsible AI principles, responsible AI emphasizes six key principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. These principles ensure that AI systems are trustworthy, ethical, and safe for users and society.
Option B aligns with the reliability and safety principle. Model validation ensures that AI models behave as expected, perform accurately across different data conditions, and produce consistent results. Microsoft teaches that AI models should be validated, tested, and monitored regularly to avoid unintended outcomes, bias, or failures. Validation processes help ensure that the AI behaves responsibly before deployment and continues to perform reliably over time.
Option C aligns with the accountability and governance principle. Establishing a risk governance committee that includes legal, privacy, and risk management experts ensures that AI development and deployment are overseen responsibly. This committee is responsible for reviewing compliance with data protection laws, ensuring ethical practices, and managing risks associated with AI-driven decisions. Microsoft emphasizes that accountability requires human oversight and governance structures to ensure ethical alignment throughout the AI system’s lifecycle.
The incorrect options are:
A. Implement an Agile software development methodology: Agile is a software project management approach, not a Responsible AI principle.
D. Prevent the disclosure of the use of AI-based algorithms: This violates the transparency principle, which requires organizations to disclose when and how AI is used.
Therefore, following the official Responsible AI framework taught in AI-900, the correct and verified answers are B and C, as they directly promote reliability, safety, accountability, and governance in AI systems.
To complete the sentence, select the appropriate option in the answer area.



According to the Microsoft Azure AI Fundamentals (AI-900) official study materials, object detection is a type of computer vision workload that not only identifies objects within an image but also determines their location by drawing bounding boxes around them. This functionality is clearly described in the Microsoft Learn module “Identify features of computer vision workloads.”
In this scenario, the AI system analyzes an image to find a vehicle and then returns a bounding box showing where that vehicle is located within the image frame. That ability — to detect, classify, and localize multiple objects — perfectly defines object detection.
Microsoft’s study content contrasts object detection with other computer vision workloads as follows:
Image classification: Determines what object or scene is present in an image as a whole but does not locate it (e.g., “this is a car”).
Object detection: Identifies what objects are present and where they are, usually returning coordinates for bounding boxes (e.g., “car detected at position X, Y”).
Optical Character Recognition (OCR): Extracts text content from images or scanned documents.
Facial detection: Specifically locates human faces within an image or video feed, often as part of face recognition systems.
In Azure, object detection capabilities are available through services such as Azure Computer Vision, Custom Vision, and Azure Cognitive Services for Vision, which can be trained to detect vehicles, products, or other objects in various image datasets.
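Consuming an object-detection result typically means filtering the returned boxes by label and confidence. The response shape below mirrors the general `objects` structure of a Computer Vision analyze result, but the sample values are invented:

```python
def find_objects(response, target, min_confidence=0.5):
    """Pull bounding boxes for a target label out of an object-detection
    response shaped like the Computer Vision 'objects' result."""
    return [
        d["rectangle"]
        for d in response.get("objects", [])
        if d["object"] == target and d["confidence"] >= min_confidence
    ]

# Invented sample response containing one car and one tree.
sample = {
    "objects": [
        {"rectangle": {"x": 25, "y": 43, "w": 172, "h": 194},
         "object": "car", "confidence": 0.91},
        {"rectangle": {"x": 300, "y": 10, "w": 40, "h": 60},
         "object": "tree", "confidence": 0.80},
    ]
}
print(find_objects(sample, "car"))
```

The rectangle coordinates are what distinguish object detection from image classification: they localize each instance rather than only naming what the image contains.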
Therefore, based on the AI-900 study guide and Microsoft Learn materials, the verified and correct answer is Object detection, as it accurately describes the process of returning a bounding box indicating an object’s position in an image.
You plan to apply Text Analytics API features to a technical support ticketing system.
Match the Text Analytics API features to the appropriate natural language processing scenarios.
To answer, drag the appropriate feature from the column on the left to its scenario on the right. Each feature may be used once, more than once, or not at all.
NOTE: Each correct selection is worth one point.



Box 1: Sentiment analysis
Sentiment Analysis is the process of determining whether a piece of writing is positive, negative or neutral.
Box 2: Broad entity extraction
Broad entity extraction: Identify important concepts in text, including key phrases and named entities such as people, places, and organizations.
Box 3: Entity Recognition
Named Entity Recognition: Identify and categorize entities in your text as people, places, organizations, date/time, quantities, percentages, currencies, and more. Well-known entities are also recognized and linked to more information on the web.
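How these three features might combine in a ticketing workflow can be sketched with hand-written mock outputs in the general shape of each feature's result (the values and the routing rule are illustrative, not API output):

```python
# Mock outputs shaped like the Text Analytics features discussed above.
ticket = "The new router keeps dropping my connection. Contoso support was unhelpful."

sentiment = {"sentiment": "negative", "scores": {"negative": 0.92}}
key_phrases = ["new router", "connection", "Contoso support"]
entities = [{"text": "Contoso", "category": "Organization"}]

def triage(sentiment, entities):
    """Escalate negative tickets that mention an organization;
    everything else takes the standard queue."""
    if sentiment["sentiment"] == "negative" and any(
        e["category"] == "Organization" for e in entities
    ):
        return "escalate"
    return "standard"

print(triage(sentiment, entities))  # escalate
```

In a real system, each mock value would come from a call to the corresponding Text Analytics feature before the routing rule runs.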
Which two components can you drag onto a canvas in Azure Machine Learning designer? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.
dataset
compute
pipeline
module
In Azure Machine Learning designer, a low-code drag-and-drop interface, users can visually build machine learning workflows. According to the AI-900 study guide and Microsoft Learn module “Create and publish models with Azure Machine Learning designer”, two key components that can be dragged onto the designer canvas are datasets and modules.
Datasets (A): These are collections of data that serve as the input for training or evaluating models. They can be registered in the workspace and then dragged onto the canvas for use in transformations or model training.
Modules (D): These are prebuilt processing and modeling components that perform operations such as data cleaning, feature engineering, model training, and evaluation. Examples include “Split Data,” “Train Model,” and “Evaluate Model.”
Compute (B) and Pipeline (C) are not drag-and-drop items within the designer. Compute targets are infrastructure resources used to run the pipeline, while a pipeline represents the overall workflow, not a component that can be added like a dataset or module.
Hence, the correct answers are A. Dataset and D. Module.
You are developing a conversational AI solution that will communicate with users through multiple channels including email, Microsoft Teams, and webchat.
Which service should you use?
Text Analytics
Azure Bot Service
Translator
Form Recognizer
According to the Microsoft Azure AI Fundamentals official study guide and Microsoft Learn module “Describe features of conversational AI workloads on Azure”, Azure Bot Service is the core Azure platform for building, testing, deploying, and managing conversational agents or chatbots. These bots can communicate with users across multiple channels, including email, Microsoft Teams, Slack, Facebook Messenger, and webchat.
Azure Bot Service integrates deeply with the Bot Framework SDK and Azure Cognitive Services such as Language Understanding (LUIS) or Azure AI Language, enabling natural language processing and multi-channel message delivery. The service abstracts away channel management, meaning that developers can build one bot logic that connects seamlessly to several communication platforms.
Option analysis:
A. Text Analytics is a Cognitive Service used for text mining tasks like key phrase extraction, language detection, and sentiment analysis — not for building chatbots.
C. Translator provides language translation but cannot manage conversations or multi-channel delivery.
D. Form Recognizer extracts structured information from documents and forms — unrelated to conversational interaction.
The AI-900 course explicitly defines Azure Bot Service as “a managed platform that enables intelligent, multi-channel conversational experiences between users and bots.” This service allows businesses to unify chat experiences across multiple digital communication channels.
Thus, based on the official Microsoft Learn content and AI-900 syllabus, the best and verified answer is B. Azure Bot Service, as it is the designated Azure solution for deploying a single conversational AI experience accessible from multiple platforms such as email, Teams, and webchat.
You run a charity event that involves posting photos of people wearing sunglasses on Twitter.
You need to ensure that you only retweet photos that meet the following requirements:
Include one or more faces.
Contain at least one person wearing sunglasses.
What should you use to analyze the images?
the Verify operation in the Face service
the Detect operation in the Face service
the Describe Image operation in the Computer Vision service
the Analyze Image operation in the Computer Vision service
The scenario requires two checks on each photo: (1) there is at least one face, and (2) at least one detected face is wearing sunglasses. The Azure AI Face service – Detect operation is purpose-built for this combination. It detects faces and returns per-face attributes, including glasses type, so you can enforce both rules in a single pass. From the official guidance, the Detect API “detects human faces in an image and returns the rectangle coordinates of their locations” and exposes face attributes such as glasses. A concise attribute extract states: “Glasses: NoGlasses, ReadingGlasses, Sunglasses, Swimming Goggles.” With this, you can count faces (requirement 1) and then verify that at least one face’s glasses attribute equals sunglasses (requirement 2).
By contrast, other options don’t align as precisely:
A. Verify (Face service) compares whether two detected faces belong to the same person. It does not provide content attributes like sunglasses; it requires face inputs for identity/one-to-one scenarios, which doesn’t meet your content-filter goal.
C. Describe Image (Computer Vision) returns a natural-language caption of the whole image. While a caption might mention “a person wearing sunglasses,” it’s not guaranteed, is not face-scoped, and offers less deterministic filtering than a structured attribute on a detected face.
D. Analyze Image (Computer Vision) can return tags such as “person” or sometimes “sunglasses,” but those tags are image-level and not bound to specific faces. You need to ensure that a detected face (not just any region) is wearing sunglasses. Face-scoped attributes from Face Detect are more reliable for this logic.
Therefore, the most accurate and exam-aligned choice is B. the Detect operation in the Face service, because it allows you to programmatically confirm face presence and per-face sunglasses in a precise, rule-driven workflow.
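The two-rule check can be sketched directly against the shape of a Detect response requested with the glasses attribute (the sample faces below are hand-written, not real API output):

```python
def photo_passes(faces):
    """Apply the retweet rules to a Face Detect response:
    at least one face, and at least one face wearing sunglasses."""
    has_face = len(faces) > 0
    has_sunglasses = any(
        f.get("faceAttributes", {}).get("glasses") == "Sunglasses" for f in faces
    )
    return has_face and has_sunglasses

# Invented sample: two detected faces, one wearing sunglasses.
sample = [
    {"faceRectangle": {"top": 10, "left": 20, "width": 80, "height": 80},
     "faceAttributes": {"glasses": "Sunglasses"}},
    {"faceRectangle": {"top": 15, "left": 150, "width": 75, "height": 78},
     "faceAttributes": {"glasses": "NoGlasses"}},
]
print(photo_passes(sample))  # True
print(photo_passes([]))      # False
```

Because the attribute is bound to a specific detected face, both requirements are verified in one pass over one response, which is exactly why Detect beats image-level tagging here.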
Select the answer that correctly completes the sentence.



The correct completion of the sentence is:
“The Form Recognizer service can be used to extract information from a driver’s license to populate a database.”
According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and the Microsoft Learn module “Identify features of computer vision workloads,” Azure Form Recognizer (part of Azure AI Document Intelligence) is a document processing service that uses machine learning and optical character recognition (OCR) to extract structured data, key-value pairs, and text from documents such as invoices, receipts, identity cards, and driver’s licenses.
This service allows businesses to automate data entry and document processing workflows by converting physical or scanned documents into machine-readable formats. For example, with a driver’s license, Form Recognizer can extract structured data fields such as Name, Date of Birth, License Number, and Expiration Date, and automatically populate those values into a database or CRM system.
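The "populate a database" step can be sketched with an in-memory SQLite table; the extracted field names below are hypothetical stand-ins for a prebuilt ID-document result, not guaranteed API output:

```python
import sqlite3

# Hypothetical fields standing in for a prebuilt ID-document extraction result.
extracted = {
    "FirstName": "Avery",
    "LastName": "Howard",
    "DocumentNumber": "D1234567",
    "DateOfExpiration": "2028-06-01",
}

conn = sqlite3.connect(":memory:")  # in-memory DB standing in for a real store
conn.execute(
    "CREATE TABLE licenses (first_name TEXT, last_name TEXT, number TEXT, expires TEXT)"
)
conn.execute(
    "INSERT INTO licenses VALUES (?, ?, ?, ?)",
    (extracted["FirstName"], extracted["LastName"],
     extracted["DocumentNumber"], extracted["DateOfExpiration"]),
)
row = conn.execute("SELECT * FROM licenses").fetchone()
print(row)
```

The value of the service is that the extraction output is already keyed by field, so the mapping from document to database columns is a straightforward lookup rather than free-text parsing.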
The AI-900 study materials emphasize that Form Recognizer is designed to handle both structured and unstructured document layouts. It includes prebuilt models for common document types (like invoices, receipts, and identity documents) and supports custom models for domain-specific forms.
By comparison:
Computer Vision extracts general text or image content but doesn’t structure or label extracted fields.
Custom Vision is used for training image classification or object detection models.
Conversational Language Understanding is for processing text or speech to determine intent, not extracting document data.
Therefore, based on the Microsoft Learn AI-900 official study content, the Form Recognizer service is the correct choice, as it is explicitly designed to extract and structure data from documents like driver’s licenses, forms, and receipts — making it ideal for automatically populating a database.
You send an image to a Computer Vision API and receive back the annotated image shown in the exhibit.

Which type of computer vision was used?
object detection
semantic segmentation
optical character recognition (OCR)
image classification
Object detection is similar to tagging, but the API returns the bounding box coordinates (in pixels) for each object found. For example, if an image contains a dog, cat and person, the Detect operation will list those objects together with their coordinates in the image. You can use this functionality to process the relationships between the objects in an image. It also lets you determine whether there are multiple instances of the same tag in an image.
The Detect API applies tags based on the objects or living things identified in the image. There is currently no formal relationship between the tagging taxonomy and the object detection taxonomy. At a conceptual level, the Detect API only finds objects and living things, while the Tag API can also include contextual terms like "indoor", which can't be localized with bounding boxes.
Which AI service should you use to create a bot from a frequently asked questions (FAQ) document?
QnA Maker
Language Understanding (LUIS)
Text Analytics
Speech
According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and Microsoft Learn modules under the topic “Describe features of common AI workloads” and “Identify capabilities of Azure AI services”, QnA Maker is the service designed specifically to create a knowledge base (KB) or question-and-answer bot from existing content such as FAQ documents, product manuals, support pages, or structured knowledge sources.
QnA Maker enables developers to take semi-structured text (for example, an FAQ document or webpage) and automatically generate a knowledge base of pairs of questions and corresponding answers. This knowledge base can then be connected to a chatbot, typically through the Azure Bot Service, so that users can interact with it conversationally. The key advantage is that the process does not require deep machine learning or programming expertise. The service uses natural language processing (NLP) to match user queries with the most relevant pre-defined answers in the knowledge base.
In the AI-900 curriculum, this falls under the Conversational AI workload—creating intelligent bots that can respond naturally to user questions. Microsoft’s training content explains that “QnA Maker extracts pairs of question and answer from your content and builds a knowledge base that can be queried by bots and other applications.” The output, as shown in the example diagram, demonstrates how user input (the question) triggers a request to the QnA Maker API, which returns a JSON response containing the best-matched answer.
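Consuming the JSON answer response typically means taking the top-scoring match and applying a confidence threshold. The payload shape below is illustrative of a QnA-style answer response, with invented answers and scores:

```python
import json

# Invented sample shaped like a QnA answer response.
response_text = json.dumps({
    "answers": [
        {"answer": "You can donate online at our website.", "score": 88.2, "id": 7},
        {"answer": "No good match found in KB.", "score": 0.0, "id": -1},
    ]
})

def best_answer(payload, threshold=50.0):
    """Return the top-scoring answer if it clears a confidence threshold,
    otherwise None so the bot can fall back to a default reply."""
    answers = json.loads(payload)["answers"]
    top = max(answers, key=lambda a: a["score"])
    return top["answer"] if top["score"] >= threshold else None

print(best_answer(response_text))
```

The threshold is what lets the bot say "I don't know" gracefully instead of surfacing a low-confidence match.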
The other options are not correct because:
B. Language Understanding (LUIS) is used to interpret user intent and extract entities, not to create FAQs.
C. Text Analytics performs text extraction, sentiment analysis, and key-phrase detection but does not build a Q&A knowledge base.
D. Speech handles speech-to-text or text-to-speech, not Q&A matching.
Therefore, per the AI-900 study guide and Microsoft Learn, the verified and correct answer is A. QnA Maker.
You plan to use Azure Cognitive Services to develop a voice-controlled personal assistant app.
Match the Azure Cognitive Services to the appropriate tasks.
To answer, drag the appropriate service from the column on the left to its description on the right. Each service may be used once, more than once, or not at all.
NOTE: Each correct selection is worth one point.



According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and Microsoft Learn Cognitive Services documentation, developing a voice-controlled personal assistant app involves integrating multiple Azure AI services that specialize in different aspects of language and speech processing. The three services in focus—Azure AI Speech, Azure AI Language Service, and Azure AI Translator Text—perform unique but complementary roles in conversational AI systems.
Convert a user’s speech to text → Azure AI Speech. The Azure AI Speech service provides speech-to-text (STT) capabilities. It enables applications to recognize spoken language and convert it into written text in real time. This is often the first step in voice-enabled applications, transforming audio input into a machine-readable format that can be analyzed further.
Identify a user’s intent → Azure AI Language service. Once speech has been transcribed, the Azure AI Language service (which includes capabilities like Conversational Language Understanding and Text Analytics) interprets the meaning of the text. It detects the user’s intent (what the user wants to accomplish) and extracts entities (key data points) from the input. This service helps the assistant understand commands like “Book a flight” or “Set a reminder.”
Provide a spoken response to the user → Azure AI Speech. After determining an appropriate response, the system uses the text-to-speech (TTS) feature of Azure AI Speech to convert the assistant’s text-based reply back into natural-sounding spoken language, allowing the user to hear the response.
Together, these services form the backbone of a conversational AI system: Speech-to-Text → Language Understanding → Text-to-Speech, aligning precisely with the AI-900 curriculum’s explanation of how Azure Cognitive Services enable intelligent voice-based interactions.
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.


Yes, Yes, and No.
According to the Microsoft Azure AI Fundamentals (AI-900) official study materials and the Microsoft Learn module “Identify features of natural language processing (NLP) workloads on Azure”, the Azure Translator service is a cloud-based AI service within Azure Cognitive Services that provides real-time text translation across multiple languages.
“You can use the Translator service to translate text between languages.” – Yes. This is the core function of the Translator service. It takes text as input in one language and returns it in another using advanced neural machine translation models. This aligns with the AI-900 learning objective: “Describe the capabilities of Azure Cognitive Services for language”, which specifically names Azure Translator as the service used to perform automatic text translation. The service supports over 100 languages and dialects, offering both single-sentence and document-level translations.
“You can use the Translator service to detect the language of a given text.” – Yes. This statement is also true. The Translator service automatically detects the source language if it is not specified in the request. This feature is documented in the Azure Translator API, where the system identifies the input language before performing translation. The AI-900 exam content emphasizes this as one of the Translator service’s built-in capabilities—language detection for untagged text.
“You can use the Translator service to transcribe audible speech into text.” – No. This is not a function of Translator. Transcription (converting speech to text) is a speech AI workload, handled by the Azure Speech Service, not Translator. The Speech-to-Text capability in Azure Cognitive Services processes spoken audio input and returns the text transcription. The Translator service only works with text input, not direct audio.
Therefore, based on official AI-900 guidance, the verified configuration is:
Yes – for text translation
Yes – for language detection
No – for speech transcription.
This aligns precisely with the AI-900 learning outcomes describing Text Translation and Language Detection as Translator capabilities, and Speech Transcription as part of the separate Speech service.
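To make the translation and auto-detection behavior concrete, here is a minimal sketch of how a Translator v3 REST request is composed in Python. The subscription key and region are placeholders (assumptions), and the code only builds the request object rather than sending it, so no live service is contacted.

```python
import json
import urllib.parse
import urllib.request

# Placeholder credentials -- substitute real values before sending.
SUBSCRIPTION_KEY = "<your-translator-key>"
REGION = "<your-resource-region>"

def build_translate_request(text, to_lang, from_lang=None):
    """Compose (but do not send) a Translator v3 /translate request.

    Omitting 'from' asks the service to auto-detect the source language,
    which is the language-detection capability described above.
    """
    params = {"api-version": "3.0", "to": to_lang}
    if from_lang:
        params["from"] = from_lang
    url = ("https://api.cognitive.microsofttranslator.com/translate?"
           + urllib.parse.urlencode(params))
    body = json.dumps([{"text": text}]).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
            "Ocp-Apim-Subscription-Region": REGION,
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_translate_request("Hello, world", to_lang="de")
print(req.full_url)
```

Passing the request to `urllib.request.urlopen` would return a JSON array of translations; speech transcription, by contrast, would require the separate Speech service.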
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.



The correct answers are based on the Microsoft Azure AI Fundamentals (AI-900) Official Study Guide and the Microsoft Learn module “Explore fundamental principles of machine learning.”
In supervised machine learning, data is typically divided into three main subsets:
Training set – used to train the model, i.e., to teach the algorithm the patterns and relationships between input features and output labels.
Validation set – used to evaluate the model during training to tune hyperparameters and prevent overfitting.
Test set – used after training to assess the final model’s performance on unseen data.
Let’s analyze each statement in light of these definitions:
“A validation set includes the set of input examples that will be used to train a model.” → No. This is incorrect because the training set, not the validation set, contains the input examples used for model training. The validation set is separate from the training data to ensure unbiased evaluation.
“A validation set can be used to determine how well a model predicts labels.” → Yes. This is correct. The validation set helps assess how effectively the model generalizes during training. It measures performance and helps tune model parameters for optimal results.
“A validation set can be used to verify that all the training data was used to train the model.” → No. This is false. The validation set is not used to verify the completeness of training data usage. It exists independently to evaluate the model’s performance during training cycles.
According to Microsoft Learn, using a validation set helps ensure that a model generalizes well and avoids overfitting to the training data. It plays a crucial role in refining and optimizing models before final testing.
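The three-way split described above can be sketched in a few lines of plain Python. The 70/15/15 proportions are illustrative assumptions, not a Microsoft-mandated ratio.

```python
import random

def split_dataset(rows, train=0.7, val=0.15, seed=42):
    """Shuffle rows and cut them into train / validation / test subsets.

    The validation set is held out of training and used to tune the model
    during training; the remaining rows form the final test set.
    """
    rows = list(rows)
    random.Random(seed).shuffle(rows)
    n = len(rows)
    n_train = int(n * train)
    n_val = int(n * val)
    return (rows[:n_train],
            rows[n_train:n_train + n_val],
            rows[n_train + n_val:])

data = list(range(100))  # stand-in for 100 labeled examples
train_set, val_set, test_set = split_dataset(data)
print(len(train_set), len(val_set), len(test_set))  # 70 15 15
```

Because the three slices are disjoint, no validation or test example ever influences training, which is exactly why validation metrics give an unbiased view of generalization.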
You need to use Azure Machine Learning designer to build a model that will predict automobile prices.
Which type of modules should you use to complete the model? To answer, drag the appropriate modules to the correct locations. Each module may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.



Box 1: Select Columns in Dataset
Box 2: Split Data
Box 3: Linear Regression
The task is to build a machine learning model in Azure Machine Learning designer to predict automobile prices, which is a regression problem since the output (price) is a continuous numeric value. The pipeline must follow the logical data preparation, training, and evaluation flow as outlined in the Microsoft Azure AI Fundamentals (AI-900) study guide and Microsoft Learn module “Create a machine learning model with Azure Machine Learning designer.”
Here’s the correct sequence and reasoning:
Select Columns in Dataset:The first step after loading the raw automobile dataset is to choose the relevant columns that will be used as features (inputs) and the label (output). This module ensures that only necessary fields (for example, horsepower, engine size, mileage, etc.) are used to train the model while excluding irrelevant columns like vehicle ID or serial number.
Split Data:Next, the cleaned and filtered dataset must be split into two subsets: training data and testing data (often 70/30 or 80/20). This allows the model to be trained on one portion and evaluated on the other to measure predictive accuracy.
Linear Regression:Since automobile price prediction is a numeric prediction task, the appropriate learning algorithm is Linear Regression. This supervised algorithm learns relationships between numeric features and the target (price).
Finally, the workflow connects the training data and Linear Regression module to the Train Model module, which outputs a trained regression model. The trained model is then linked to the Score Model module to compare predicted vs. actual prices.
This pipeline fully aligns with Microsoft’s recommended process for regression in Azure ML Designer.
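As a toy illustration of what the Linear Regression module learns, here is a single-feature ordinary least squares fit in plain Python. The engine-size/price numbers are made up for the example (deliberately on an exact line, price = 6x + 6, so the fit is easy to verify); a real Designer pipeline would of course use many features and the managed modules instead.

```python
def fit_linear_regression(xs, ys):
    """Ordinary least squares for one feature: y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical data: engine size (litres) vs. price (thousands of dollars).
engine_size = [1.0, 1.6, 2.0, 2.5, 3.0]
price = [12.0, 15.6, 18.0, 21.0, 24.0]  # lies exactly on price = 6x + 6

slope, intercept = fit_linear_regression(engine_size, price)
predicted = slope * 2.2 + intercept  # score a new 2.2 L automobile
print(slope, intercept, predicted)
```

The Train Model module performs the fitting step, and the Score Model module performs the `slope * x + intercept` prediction step, just at much larger scale.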
You need to reduce the load on telephone operators by implementing a chatbot to answer simple questions with predefined answers.
Which two AI services should you use to achieve the goal? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
Azure Bot Service
Azure Machine Learning
Translator
Language Service
According to the Microsoft Azure AI Fundamentals (AI-900) Official Study Guide and the Microsoft Learn module “Explore conversational AI in Microsoft Azure,” to create a chatbot that can automatically answer simple, predefined user questions, you need two main Azure AI components — one to handle the conversation interface and another to manage the knowledge and language understanding aspect.
Azure Bot Service (A). This service is used to create, manage, and deploy chatbots that interact with users through text or voice. The Bot Service provides the framework for conversation management, user interaction, and channel integration (e.g., webchat, Microsoft Teams, Skype). It serves as the backbone of conversational AI applications and supports integration with other cognitive services like the Language Service.
Language Service (D). The Azure AI Language Service (which now includes Question Answering, formerly QnA Maker) is used to build and manage the knowledge base of predefined questions and answers. This service enables the chatbot to understand user queries and return appropriate responses automatically. The QnA capability allows you to import documents, FAQs, or structured data to create a searchable database of responses for the bot.
Why the other options are incorrect:
B. Azure Machine Learning: This service is used for building, training, and deploying custom machine learning models, not for chatbot Q & A automation.
C. Translator: This service performs language translation, which is not required for answering predefined questions unless multilingual support is specifically needed.
Therefore, to implement a chatbot that can answer simple, repetitive user questions and reduce the load on human operators, you combine Azure Bot Service (for interaction) with the Language Service (for question-answering intelligence).
What is an advantage of using a custom model in Form Recognizer?
Only a custom model can be deployed on-premises.
A custom model can be trained to recognize a variety of form types.
A custom model is less expensive than a prebuilt model.
A custom model always provides higher accuracy.
Azure AI Form Recognizer extracts information from structured and semi-structured documents. A custom model in Form Recognizer allows an organization to train the system on its specific document layouts and data fields.
As per the AI-900 study guide, a key advantage of a custom model is its flexibility. It can be trained with a set of labeled examples (e.g., invoices, purchase orders, receipts) that match the company’s format. Once trained, the model learns where to locate and extract fields such as invoice numbers, dates, or totals—regardless of layout differences between form types.
Option B is correct because a custom model can be trained to recognize a variety of form types, making it adaptable for diverse business processes.
Options A, C, and D are incorrect:
A: Both prebuilt and custom models are cloud-based; on-premises deployment is not an exclusive feature.
C: Custom models are not cheaper; they may involve additional training costs.
D: Custom models do not always guarantee higher accuracy—accuracy depends on the training data quality.
Which metric can you use to evaluate a classification model?
true positive rate
mean absolute error (MAE)
coefficient of determination (R2)
root mean squared error (RMSE)
For evaluating a classification model, the appropriate metric from the options provided is the True Positive Rate (TPR), also known as Sensitivity or Recall. According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and Microsoft Learn module “Evaluate model performance”, classification models are evaluated using metrics that measure how accurately the model predicts categorical outcomes such as “yes/no,” “spam/not spam,” or “approved/denied.”
The True Positive Rate measures the proportion of correctly identified positive cases out of all actual positive cases. Mathematically, it is expressed as:
True Positive Rate (Recall) = True Positives / (True Positives + False Negatives)
This metric is important when missing positive predictions carries a high cost, such as in medical diagnosis or fraud detection. Microsoft Learn highlights classification evaluation metrics such as accuracy, precision, recall, F1 score, and AUC (Area Under the Curve) as suitable for classification models.
The other options—Mean Absolute Error (MAE), Root Mean Squared Error (RMSE), and Coefficient of Determination (R²)—are regression metrics used to evaluate models that predict numeric values rather than categories. For example, they apply to predicting house prices or temperatures, not yes/no decisions.
Therefore, the correct classification evaluation metric among the choices is A. True Positive Rate.
Which type of natural language processing (NLP) entity is used to identify a phone number?
regular expression
machine-learned
list
Pattern-any
In Natural Language Processing (NLP), entities are pieces of information extracted from text, such as names, locations, or phone numbers. According to the Microsoft Learn module “Explore natural language processing in Azure,” Azure’s Language Understanding (LUIS) supports several entity types:
Machine-learned entities – Automatically learned based on context in training data.
List entities – Used for predefined, limited sets of values (e.g., colors or product names).
Pattern.any entities – Capture flexible, unstructured phrases in user input.
Regular expression entities – Use regex patterns to match specific data formats such as phone numbers, postal codes, or dates.
A regular expression is ideal for recognizing phone numbers because phone numbers follow specific numeric or symbol-based patterns (e.g., (555)-123-4567 or +1 212 555 0199). By defining a regex pattern, the AI model can accurately extract phone numbers regardless of text context.
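A minimal sketch of such a regular expression, written to cover the two example formats quoted above ((555)-123-4567 and +1 212 555 0199); a production LUIS regex entity would likely use a broader pattern.

```python
import re

# Optional country code, area code with or without parentheses,
# then the seven-digit local number with optional separators.
PHONE_PATTERN = re.compile(
    r"(?:\+\d{1,2}[ -]?)?"    # optional country code, e.g. "+1 "
    r"(?:\(\d{3}\)|\d{3})"    # area code: "(555)" or "555"
    r"[ -]?\d{3}[ -]?\d{4}"   # local number: "123-4567" or "555 0199"
)

def extract_phone_numbers(text):
    """Return every substring of text matching the phone pattern."""
    return PHONE_PATTERN.findall(text)

sample = "Call (555)-123-4567 today, or after hours try +1 212 555 0199."
print(extract_phone_numbers(sample))
```

Because the matching is purely format-based, no training data is needed, which is the advantage of regular-expression entities over machine-learned ones for rigidly structured values.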
For which two workloads can you use computer vision? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point.
creating photorealistic images by using three-dimensional models
assigning the color pixels in an image to object names
describing the contents of an image
detecting inconsistencies and anomalies in a stream of data
creating visual representations of numerical data
The correct answers are B. assigning the color pixels in an image to object names and C. describing the contents of an image.
According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and Microsoft Learn module “Describe features of computer vision workloads on Azure,” computer vision is a branch of AI that enables systems to analyze, interpret, and understand visual data from images and videos. It allows machines to identify objects, people, text, and even describe scenes automatically.
Option B: Assigning color pixels in an image to object names represents image classification or object detection, which are key computer vision workloads. In these tasks, AI analyzes pixel patterns to determine which pixels correspond to specific objects (for example, classifying pixels as “car,” “tree,” or “road”).
Option C: Describing the contents of an image corresponds to image captioning, another computer vision workload. It involves using AI models trained to generate natural language descriptions of what is visible in an image, such as “A group of people sitting at a dining table.” Azure’s Computer Vision service provides this functionality through its “Describe Image” API.
Incorrect options:
A. Creating photorealistic images involves generative AI and 3D modeling, not traditional computer vision.
D. Detecting inconsistencies and anomalies in a data stream relates to anomaly detection, not computer vision.
E. Creating visual representations of numerical data involves data visualization, not AI-driven image analysis.
You use Azure Machine Learning designer to publish an inference pipeline.
Which two parameters should you use to consume the pipeline? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
the model name
the training endpoint
the authentication key
the REST endpoint
According to the Microsoft Azure AI Fundamentals (AI-900) Official Study Guide and the Microsoft Learn module “Explore Azure Machine Learning”, when you publish an inference pipeline (a deployed web service for real-time predictions) using Azure Machine Learning designer, you make the model accessible as a RESTful endpoint. Consumers—such as applications, scripts, or services—interact with this endpoint to submit data and receive predictions.
To securely access this deployed pipeline, two critical parameters are required:
REST endpoint (Option D): The REST endpoint is a URL automatically generated when the inference pipeline is deployed. It defines the network location where clients send HTTP POST requests containing input data (usually in JSON format). The endpoint routes these requests to the deployed model, which processes the data and returns prediction results. The REST endpoint acts as the primary access point for consuming the model’s inferencing capability programmatically.
Authentication key (Option C): The authentication key (or API key) is a security token provided by Azure to ensure that only authorized users or systems can access the endpoint. When invoking the REST service, the key must be included in the request header (typically as the value of the Authorization header). This mechanism enforces secure, authenticated access to the deployed model.
The other options are incorrect:
A. The model name is not required to consume the endpoint; it is used internally within the workspace.
B. The training endpoint is used for training pipelines, not for inference.
Therefore, according to Microsoft’s official AI-900 learning objectives and Azure Machine Learning documentation, when consuming a published inference pipeline, you must use both the REST endpoint (D) and the authentication key (C). These parameters ensure secure, controlled, and programmatic access to the deployed AI model for real-time predictions.
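A sketch of how a client combines the two parameters. The endpoint URL, key, and input fields below are hypothetical placeholders (the real values appear on the deployed pipeline's Consume tab); the code only composes the authenticated request and does not send it.

```python
import json
import urllib.request

# Hypothetical values -- copy the real REST endpoint and authentication
# key from the deployed endpoint in Azure Machine Learning studio.
REST_ENDPOINT = "https://example-region.azurecontainer.io/score"
AUTH_KEY = "<your-authentication-key>"

def build_score_request(rows):
    """Compose a scoring request: JSON body plus Bearer auth header."""
    body = json.dumps({"data": rows}).encode("utf-8")
    return urllib.request.Request(
        REST_ENDPOINT,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {AUTH_KEY}",
        },
        method="POST",
    )

req = build_score_request([{"engine-size": 130, "horsepower": 111}])
# urllib.request.urlopen(req) would send it and return the predictions.
```

Note that neither the model name nor any training endpoint appears anywhere in the call: the REST endpoint locates the service and the key authorizes the caller, which is why those are the two required parameters.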
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.



According to the Microsoft Azure AI Fundamentals (AI-900) official study materials and the Microsoft Learn module: “Describe features of common AI workloads”, conversational AI solutions like chatbots can be created using various methods—not only through custom code. Azure provides both no-code/low-code and developer-focused approaches. For instance, users can design chatbots using Power Virtual Agents, which requires no programming knowledge, or they can use Azure Bot Service with the Bot Framework SDK for fully customized scenarios. Hence, the statement “Chatbots can only be built by using custom code” is False (No) because Azure supports multiple levels of technical involvement for building bots.
The second statement is True (Yes) because the Azure Bot Service is designed specifically to host, manage, and connect conversational bots to users across different channels. Microsoft Learn explicitly explains that the service provides integrated hosting, connection management, and telemetry for bots built using the Bot Framework or Power Virtual Agents. It acts as the foundation for deploying, scaling, and managing chatbot workloads in Azure.
The third statement is also True (Yes) because Azure Bot Service supports integration with Microsoft Teams, among many other channels such as Skype, Facebook Messenger, Slack, and web chat. Microsoft documentation states that Azure-hosted bots can communicate directly with Teams users through the Teams channel, enabling intelligent virtual assistants within the Teams environment.
Which Azure AI Language feature can be used to retrieve data, such as dates and people's names, from social media posts?
language detection
speech recognition
key phrase extraction
entity recognition
The Azure AI Language service provides several NLP features, including language detection, key phrase extraction, sentiment analysis, and named entity recognition (NER).
When you need to extract specific data points such as dates, names, organizations, or locations from unstructured text (for example, social media posts), the correct feature is Entity Recognition.
Entity Recognition identifies and classifies information in text into predefined categories like:
Person names (e.g., “John Smith”)
Organizations (e.g., “Contoso Ltd.”)
Dates and times (e.g., “October 22, 2025”)
Locations, events, and quantities
This capability helps transform unstructured textual data into structured data that can be analyzed or stored.
Option analysis:
A (Language detection): Determines the language of a text (e.g., English, French).
B (Speech recognition): Converts spoken audio to text; not applicable here.
C (Key phrase extraction): Identifies important phrases or topics but not specific entities like names or dates.
D (Entity recognition): Correctly extracts names, dates, and other specific data from text.
Hence, the accurate feature for this scenario is D. Entity Recognition.
Select the answer that correctly completes the sentence.



According to the Microsoft Azure AI Fundamentals (AI-900) Official Study Guide and the Microsoft Learn module “Identify guiding principles for responsible AI,” Fairness is one of Microsoft’s six core principles of Responsible AI. The principle of fairness ensures that AI systems treat all individuals and groups equitably, and that the models do not produce biased or discriminatory outcomes.
Bias in AI systems can occur when training data reflects existing prejudices, inequalities, or imbalances. For example, if a dataset used for a hiring model underrepresents a certain demographic group, the AI system might produce unfair recommendations. Microsoft emphasizes that AI should not reflect or reinforce bias and that developers must actively design, test, and monitor models to mitigate unfairness.
Microsoft’s Six Responsible AI Principles:
Fairness – AI systems should treat everyone equally and avoid bias.
Reliability and safety – AI systems must operate as intended even under unexpected conditions.
Privacy and security – AI must protect personal and business data.
Inclusiveness – AI should empower all people and be accessible to diverse users.
Transparency – AI systems should be understandable and their decisions explainable.
Accountability – Humans should be accountable for AI system outcomes.
The other options do not fit this context:
Accountability ensures human responsibility for AI decisions.
Inclusiveness focuses on accessibility and empowering all users.
Transparency relates to making AI systems understandable.
Therefore, the correct answer is fairness, as it directly addresses the principle that AI systems should NOT reflect biases from the datasets used to train them.
In which scenario should you use key phrase extraction?
translating a set of documents from English to German
generating captions for a video based on the audio track
identifying whether reviews of a restaurant are positive or negative
identifying which documents provide information about the same topics
According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and Microsoft Learn module “Extract insights from text with the Text Analytics service”, key phrase extraction is a feature of the Text Analytics service that identifies the most important words or phrases in a given document. It helps summarize the main ideas by isolating significant concepts or terms that describe what the text is about.
In this scenario, the goal is to determine which documents share similar topics or themes. By extracting key phrases from each document (for example, “policy renewal,” “coverage limits,” “claim process”), you can compare and categorize documents based on overlapping keywords. This is exactly how key phrase extraction is used—to summarize and group text content by topic relevance.
The other options do not fit this use case:
A. Translation uses the Translator service, not key phrase extraction.
B. Generating video captions involves speech recognition and computer vision.
C. Identifying sentiment relates to sentiment analysis, not key phrase extraction.
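One simple way to act on extracted key phrases, as described above, is to score the overlap between documents' phrase sets, for example with the Jaccard coefficient. The phrase lists below are invented for illustration; in practice they would come from the key phrase extraction API.

```python
def jaccard(a, b):
    """Overlap between two key-phrase sets: |A ∩ B| / |A ∪ B|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

# Hypothetical key phrases extracted from three documents.
doc_phrases = {
    "doc1": ["policy renewal", "coverage limits", "claim process"],
    "doc2": ["claim process", "coverage limits", "premium payment"],
    "doc3": ["flight booking", "hotel reservation"],
}

score_12 = jaccard(doc_phrases["doc1"], doc_phrases["doc2"])  # shared topic
score_13 = jaccard(doc_phrases["doc1"], doc_phrases["doc3"])  # unrelated
print(score_12, score_13)
```

Documents with high pairwise scores (here doc1 and doc2, both about insurance) can be grouped as covering the same topics, while a score of zero indicates no shared key phrases.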
Select the answer that correctly completes the sentence.



In Azure OpenAI Service, the temperature parameter directly controls the creativity and determinism of responses generated by models such as GPT-3.5. According to the Microsoft Learn documentation for Azure OpenAI models, temperature is a numeric value (typically between 0.0 and 2.0) that determines how “random” or “deterministic” the output should be.
A lower temperature value (for example, 0 or 0.2) makes the model’s responses more deterministic, meaning the same prompt consistently produces nearly identical outputs.
A higher temperature value (for example, 0.8 or 1.0) encourages creativity and variety, causing the model to generate different phrasing or interpretations each time it responds.
When a question specifies the need for more deterministic responses, Microsoft’s guidance is to decrease the temperature parameter. This adjustment makes the model focus on the most probable tokens (words) rather than exploring less likely options, improving reliability and consistency—ideal for business or technical applications where consistent answers are essential.
The other parameters serve different purposes:
Frequency penalty reduces repetition of the same phrases but does not control randomness.
Max response (max tokens) limits the maximum length of the generated output.
Stop sequence defines specific tokens that tell the model when to stop generating text.
Thus, the correct and Microsoft-verified completion is:
“You can modify the Temperature parameter to produce more deterministic responses from a chat solution that uses the Azure OpenAI GPT-3.5 model.”
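The effect of temperature can be demonstrated with a self-contained softmax sketch: dividing token scores by the temperature before normalizing sharpens or flattens the resulting probability distribution. The logits here are toy values, not output from any real model.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Turn raw token scores into probabilities; temperature rescales them."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # toy scores for three candidate tokens

low = softmax_with_temperature(logits, 0.2)   # sharply peaked
high = softmax_with_temperature(logits, 2.0)  # much flatter
print([round(p, 3) for p in low])
print([round(p, 3) for p in high])
```

At temperature 0.2 nearly all the probability mass lands on the top token, so sampling almost always picks the same word (deterministic behavior); at 2.0 the mass spreads across candidates, producing varied output.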
You need to develop a mobile app for employees to scan and store their expenses while travelling.
Which type of computer vision should you use?
semantic segmentation
image classification
object detection
optical character recognition (OCR)
According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and Microsoft Learn module “Explore computer vision”, Optical Character Recognition (OCR) is a form of computer vision that enables a system to detect and extract printed or handwritten text from images or documents. OCR is particularly useful in scenarios where the goal is to digitize textual information from physical documents, such as receipts, invoices, or travel expense forms — exactly as described in this question.
In the given scenario, employees need a mobile application that allows them to scan and store expenses while traveling. The process involves taking photos of receipts that contain printed text, such as vendor names, totals, dates, and item descriptions. The OCR technology automatically detects the text areas within the image and converts them into machine-readable and searchable data that can be stored in a database or processed further for expense management.
Microsoft’s Azure Cognitive Services include the Computer Vision API and the Form Recognizer service, both of which use OCR technology. The Form Recognizer builds upon OCR by adding intelligent document understanding, enabling it to extract structured data from expense receipts automatically.
Other answer options are incorrect for the following reasons:
A. Semantic segmentation assigns labels to every pixel in an image, typically used in autonomous driving or medical imaging, not for text extraction.
B. Image classification identifies the overall category of an image (e.g., “This is a receipt”), but it does not extract the textual content.
C. Object detection identifies and locates objects in an image with bounding boxes but is not used for text reading or conversion.
Therefore, based on the official AI-900 training and Microsoft Learn content, the correct answer is D. Optical Character Recognition (OCR) — the technology that enables extracting textual information from scanned expense receipts.
Select the answer that correctly completes the sentence.


“features.”
According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and Microsoft Learn module “Describe fundamental principles of machine learning on Azure,” in a machine learning model, the data used as inputs are known as features, while the data that represents the output or target prediction is known as the label.
Features are measurable attributes or properties of the data used by a model to learn patterns and make predictions. They are also referred to as independent variables because they influence the result that the model tries to predict. For example, in a machine learning model that predicts house prices:
Features might include square footage, location, and number of bedrooms, while
The label would be the house price (the value being predicted).
In the context of Azure Machine Learning, during model training, features are passed into the algorithm as input variables (X-values), and the label is the corresponding output (Y-value). The model then learns the relationship between the features and the label.
Let’s review the incorrect options:
Functions: These are mathematical operations or relationships used inside algorithms, not the input data itself.
Labels: These are the outputs or results that the model predicts, not the inputs.
Instances: These refer to individual data records or rows in the dataset, not the input fields themselves.
Hence, in any supervised or unsupervised learning process, the input data (independent variables) are called features, and the model uses them to predict labels (dependent variables).
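The feature/label split from the house-price example can be shown in a few lines; the record values are invented for illustration.

```python
# One instance (data record); every column except the label is a feature.
LABEL = "price"

house = {
    "square_footage": 1850,
    "bedrooms": 3,
    "location": "suburb",
    "price": 325000,   # the label: the value the model should predict
}

features = {k: v for k, v in house.items() if k != LABEL}
label = house[LABEL]
print(features)
print(label)
```

During training the model sees many such (features, label) pairs; at prediction time it receives only the features and must output the label.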
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.


Yes – Extract key phrases
No – Generate press releases
Yes – Detect sentiment
The Azure AI Language service is a powerful set of natural language processing (NLP) tools within Azure Cognitive Services, designed to analyze, understand, and interpret human language in text form. According to the Microsoft Azure AI Fundamentals (AI-900) study guide and Microsoft Learn documentation, this service includes several capabilities such as key phrase extraction, sentiment analysis, language detection, named entity recognition (NER), and question answering.
Extract key phrases from documents → Yes. The Key Phrase Extraction feature identifies the most relevant words or short phrases within a document, helping summarize important topics. This is useful for indexing, summarizing, or organizing content. For instance, from “Azure AI Language helps analyze customer feedback,” it may extract “Azure AI Language” and “customer feedback” as key phrases.
Generate press releases based on user prompts → No. This functionality falls under generative AI, specifically within Azure OpenAI Service, which uses models such as GPT-4 for text creation. The Azure AI Language service focuses on analyzing and understanding existing text, not generating new content like press releases or articles.
Build a social media feed analyzer to detect sentiment → Yes. The Sentiment Analysis capability determines the emotional tone (positive, neutral, negative, or mixed) of text data, making it ideal for analyzing social media posts, reviews, or feedback. Businesses often use this to gauge customer satisfaction or brand reputation.
In summary, the Azure AI Language service analyzes text to extract insights and detect sentiment but does not generate new textual content.
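The two "Yes" capabilities above can be sketched with a toy illustration in plain Python. This is NOT the Azure AI Language API — the real service uses trained models — but it shows the shape of the output each analysis produces (the stopword and sentiment lexicons here are made up for the example):

```python
# Toy illustration of key phrase extraction and sentiment analysis --
# not the Azure AI Language service, just a sketch of what each returns.

STOPWORDS = {"the", "a", "an", "is", "was", "and", "helps", "analyze"}
POSITIVE = {"great", "love", "excellent", "helpful"}
NEGATIVE = {"bad", "terrible", "slow", "broken"}

def extract_key_phrases(text):
    """Return the non-stopword terms as crude 'key phrases'."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    return [w for w in words if w and w not in STOPWORDS]

def detect_sentiment(text):
    """Classify text as positive / negative / neutral by word counts."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    pos, neg = len(words & POSITIVE), len(words & NEGATIVE)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

print(extract_key_phrases("Azure AI Language helps analyze customer feedback"))
print(detect_sentiment("The support was excellent and very helpful!"))
```

The real service returns the same kinds of results — a list of key phrases and a sentiment classification with confidence scores — from models trained on large text corpora rather than hand-written word lists.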
Match the AI solution to the appropriate task.
To answer, drag the appropriate solution from the column on the left to its task on the right. Each solution may be used once, more than once, or not at all.
NOTE: Each correct match is worth one point.



This question evaluates your understanding of how different Azure AI workloads correspond to specific tasks in image, text, and content generation scenarios, as explained in the Microsoft Azure AI Fundamentals (AI-900) study guide and Microsoft Learn modules covering common AI workloads and Azure services.
Generate a caption from a given image → Computer Vision. This is a computer vision task because it involves analyzing the visual elements of an image and producing descriptive text (a caption). Azure AI Vision provides image analysis and captioning capabilities through its Describe Image API, which uses deep learning models to recognize objects, scenes, and actions in an image and automatically generate natural-language descriptions (e.g., “A cat sitting on a sofa”).
Generate an image from a given caption → Generative AI. This task belongs to Generative AI, which focuses on creating new content such as text, code, or images based on prompts. Tools like Azure OpenAI Service with DALL-E can interpret text descriptions and generate realistic images that match the given caption. Generative AI is capable of creative synthesis, not just analysis, making it the appropriate category.
Generate a 200-word summary from a 2,000-word article → Text Analytics. Text analytics (a subset of natural language processing) allows summarization, sentiment analysis, and entity recognition from large text corpora. Azure AI Language includes text summarization capabilities that condense long documents into concise summaries while preserving meaning and key information.
You need to build an app that will read recipe instructions aloud to support users who have reduced vision.
Which service should you use?
Text Analytics
Translator Text
Speech
Language Understanding (LUIS)
According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and the Microsoft Learn module “Identify features of speech capabilities in Azure Cognitive Services”, the Azure Speech service provides functionality for converting text to spoken words (speech synthesis) and speech to text (speech recognition).
In this scenario, the app must read recipe instructions aloud to assist users with visual impairments. This task is achieved through speech synthesis, also known as text-to-speech (TTS). The Azure Speech service uses advanced neural network models to generate natural-sounding voices in many languages and accents, making it ideal for accessibility scenarios such as screen readers, virtual assistants, and educational tools.
Microsoft Learn defines Speech service as a unified offering that includes:
Speech-to-text (speech recognition): Converts spoken words into text.
Text-to-speech (speech synthesis): Converts written text into natural-sounding audio output.
Speech translation: Translates spoken language into another language in real time.
Speaker recognition: Identifies or verifies a person based on their voice.
The other options do not fit the requirements:
A. Text Analytics – Performs text-based natural language analysis such as sentiment, key phrase extraction, and entity recognition, but it cannot produce audio output.
B. Translator Text – Translates text between languages but does not generate speech output.
D. Language Understanding (LUIS) – Interprets user intent from text or speech for conversational bots but does not read text aloud.
Therefore, based on the AI-900 curriculum and Microsoft Learn documentation, the correct service for converting recipe text to spoken audio is the Azure Speech service.
Final Answer: C. Speech
You are evaluating whether to use a basic workspace or an enterprise workspace in Azure Machine Learning.
What are two tasks that require an enterprise workspace? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.
Use a graphical user interface (GUI) to run automated machine learning experiments.
Create a compute instance to use as a workstation.
Use a graphical user interface (GUI) to define and run machine learning experiments from Azure Machine Learning designer.
Create a dataset from a comma-separated value (CSV) file.
The correct answers are A. Use a graphical user interface (GUI) to run automated machine learning experiments and C. Use a graphical user interface (GUI) to define and run machine learning experiments from Azure Machine Learning designer.
According to the Microsoft Azure AI Fundamentals (AI-900) official documentation and Microsoft Learn module “Create and manage Azure Machine Learning workspaces”, there are two workspace tiers: Basic and Enterprise. The Enterprise workspace provides advanced capabilities for automation, visualization, and collaboration that are not available in the Basic tier.
Specifically:
Automated machine learning (AutoML) using a GUI is only available in the Enterprise tier. AutoML automatically selects algorithms and tunes hyperparameters through the Azure Machine Learning studio interface.
Azure Machine Learning designer, which allows users to visually drag and drop datasets and modules to create machine learning pipelines, also requires the Enterprise workspace.
In contrast:
B. Create a compute instance and D. Create a dataset from a CSV file are fundamental actions supported in both Basic and Enterprise workspaces. These do not require the advanced licensing features of the Enterprise edition.
Therefore, tasks involving the graphical, no-code tools—Automated ML (AutoML) and the Designer—require the Enterprise workspace, aligning with AI-900’s learning objectives.
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.


Yes, Yes, No.
According to the Microsoft Azure AI Fundamentals (AI-900) study materials, conversational AI enables applications, websites, and digital assistants to interact with users via natural language. A chatbot is a key conversational AI workload and can be integrated into multiple channels such as web pages, Microsoft Teams, Facebook Messenger, and Cortana using Azure Bot Service and Bot Framework.
“A restaurant can use a chatbot to answer queries through Cortana” — Yes. Azure Bot Service supports multi-channel deployment, which includes Cortana integration. This means the same bot can respond to voice or text input via Cortana, making it a valid use case for a restaurant to provide menu details, reservations, or order tracking through voice-based AI assistants.
“A restaurant can use a chatbot to answer inquiries about business hours from a webpage” — Yes. This is a standard scenario for chatbots embedded on a company website. As per Microsoft Learn’s Describe features of conversational AI module, a chatbot can be added to a website to handle FAQs such as business hours, location, or menu details, thereby improving response time and reducing repetitive human workload.
“A restaurant can use a chatbot to automate responses to customer reviews on an external website” — No. Azure bots and other conversational AI tools cannot automatically interact with or post on external third-party platforms where the business does not control the data or API integration. Automated posting or replying to reviews on external review sites (e.g., Yelp or Google Reviews) would violate both ethical and technical boundaries of responsible AI usage outlined by Microsoft.
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.



These answers align with the Microsoft Azure AI Fundamentals (AI-900) Official Study Guide and the Microsoft Learn module “Explore conversational AI in Microsoft Azure.”
1. A webchat bot can interact with users visiting a website → Yes
This statement is true. The Azure Bot Service allows developers to create intelligent chatbots that can be integrated into a webchat interface. This enables visitors to interact with the bot directly from a website, asking questions and receiving automated responses. This is a typical use case of conversational AI, where natural language processing (NLP) is used to interpret and respond to user input conversationally.
2. Automatically generating captions for pre-recorded videos is an example of conversational AI → No
This statement is false. Automatically generating captions from video content is an example of speech-to-text (speech recognition) technology, not conversational AI. While it uses AI to convert spoken words into text, it lacks the two-way interactive communication characteristic of conversational AI. This task is typically handled by the Azure AI Speech service, which transcribes spoken content.
3. A smart device in the home that responds to questions such as “What will the weather be like today?” is an example of conversational AI → Yes
This statement is true. Smart home assistants that engage in dialogue with users are powered by conversational AI. These devices use speech recognition to understand spoken input, natural language understanding (NLU) to determine intent, and speech synthesis (text-to-speech) to respond audibly. This represents the full conversational AI loop, where machines communicate naturally with humans.
Which OpenAI model does GitHub Copilot use to make suggestions for client-side JavaScript?
GPT-4
Codex
DALL-E
GPT-3
According to the Microsoft Azure AI Fundamentals (AI-900) learning path and Microsoft Learn documentation on GitHub Copilot, GitHub Copilot is powered by OpenAI Codex, a specialized language model derived from the GPT-3 family but fine-tuned specifically on programming languages and code data.
OpenAI Codex was designed to translate natural language prompts into executable code in multiple programming languages, including JavaScript, Python, C#, TypeScript, and Go. It can understand comments, function names, and code structure to generate relevant code suggestions in real time.
When a developer writes client-side JavaScript, GitHub Copilot uses Codex to analyze the context of the file and generate intelligent suggestions, such as completing functions, writing boilerplate code, or suggesting improvements. Codex can also explain what specific code does and provide inline documentation, which enhances developer productivity.
Option A (GPT-4): While some newer versions of GitHub Copilot (Copilot X) may integrate GPT-4 for conversational explanations, the core code completion engine remains based on Codex, as per the AI-900-level content.
Option C (DALL-E): Used for image generation, not for programming tasks.
Option D (GPT-3): Codex was fine-tuned from GPT-3 but has been further trained specifically for code generation tasks.
Therefore, the verified and official answer from Microsoft’s AI-900 curriculum is B. Codex — the OpenAI model used by GitHub Copilot to make suggestions for client-side JavaScript and other programming languages.
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.



According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and Microsoft Learn documentation for Azure AI Custom Vision, this service is a specialized part of the Azure AI Vision family that enables developers to train custom image classification and object detection models. It allows organizations to build tailored computer vision models that recognize images or specific objects relevant to their business needs.
Detect objects in an image → Yes. The Azure AI Custom Vision service supports both image classification (assigning an image to one or more categories) and object detection (identifying and locating objects within an image using bounding boxes). This means it can indeed detect and differentiate multiple objects in a single image, making this statement true.
Requires your own data to train the model → Yes. The Custom Vision service is designed to be customizable. Unlike prebuilt Azure AI Vision models that work out of the box, Custom Vision requires you to upload and label your own dataset for training. The model then learns from your examples to perform specialized image recognition tasks relevant to your domain. Thus, this statement is also true.
Analyze video files → No. While Custom Vision can analyze images, it does not directly process or analyze video files. Video analysis is handled by a different service—Azure Video Indexer—which can extract insights such as spoken words, scenes, and faces from videos.
In summary:
Yes – Detect objects in images
Yes – Requires your own data
No – Does not analyze video files.
You are building a chatbot that will use natural language processing (NLP) to perform the following actions based on the text input of a user:
• Accept customer orders.
• Retrieve support documents.
• Retrieve order status updates.
Which type of NLP should you use?
sentiment analysis
translation
language modeling
named entity recognition
In the Microsoft Azure AI Fundamentals (AI-900) curriculum, language modeling is described as a core component of Natural Language Processing (NLP) that enables an AI system to understand, interpret, and generate human language in context. The AI-900 Microsoft Learn module “Identify features of Natural Language Processing workloads” explains that language modeling is used to analyze user input and determine intent — essential for conversational systems like chatbots.
In this question, the chatbot must:
Accept customer orders.
Retrieve support documents.
Retrieve order status updates.
All these tasks require the bot to understand user intent and context from text input. This understanding process is driven by language modeling, which predicts meaning and structure within sentences, enabling the system to decide what action to take next.
Microsoft Learn distinguishes between various NLP techniques:
Sentiment analysis detects emotional tone (positive/negative/neutral).
Translation converts text between languages.
Named Entity Recognition (NER) identifies specific entities like names or dates. However, none of these individually allows a system to process commands, requests, or user intents — that capability is part of language modeling, which powers LUIS (Language Understanding Intelligent Service) or the modern Azure Cognitive Service for Language.
Therefore, to build a chatbot that can interpret commands and respond contextually — such as processing orders or retrieving documents — you must use language modeling.
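The intent-understanding step described above can be sketched with a minimal keyword matcher. This stands in for a trained language model such as LUIS or Azure AI Language — the intent names and keyword lists below are hypothetical, and a real service would score intents with learned models rather than word overlap:

```python
# Toy intent matcher -- a stand-in for a trained language model such as
# LUIS / Azure AI Language. Intent names and keywords are hypothetical.
INTENTS = {
    "PlaceOrder": {"order", "buy", "purchase"},
    "GetSupportDocs": {"help", "support", "manual", "documentation"},
    "OrderStatus": {"status", "track", "shipped", "delivery"},
}

def recognize_intent(utterance):
    """Return the intent whose keywords best overlap the utterance."""
    words = set(utterance.lower().split())
    best, best_score = "None", 0
    for intent, keywords in INTENTS.items():
        score = len(words & keywords)
        if score > best_score:
            best, best_score = intent, score
    return best

print(recognize_intent("I want to track my delivery"))   # OrderStatus
```

Once the intent is recognized, application code routes the request — placing the order, fetching a document, or looking up order status — which mirrors the three chatbot actions in the question.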
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.



The Azure OpenAI DALL-E model is a generative image model designed to create original images from textual descriptions (prompts). According to the Microsoft Learn documentation and the AI-900 study guide, DALL-E’s primary function is text-to-image generation—it converts creative or descriptive text input into visually relevant imagery.
“Generate captions for uploaded images” → No. DALL-E cannot create image captions. Captioning an image (describing what’s in an uploaded image) is a vision analysis task, not an image generation task. That functionality belongs to Azure AI Vision, which can analyze and describe images, detect objects, and generate captions automatically.
“Reliably generate technically accurate diagrams” → No. While DALL-E can create visually appealing artwork or conceptual sketches, it is not designed for producing precise or technically correct diagrams, such as engineering schematics or architectural blueprints. The model’s generative process emphasizes creativity and visual diversity rather than factual or geometric accuracy. Thus, it cannot be relied upon for professional technical outputs.
“Generate decorative images to enhance learning materials” → Yes. This is one of DALL-E’s strongest use cases. It can generate decorative, conceptual, or illustrative images to enhance presentations, educational materials, and marketing content. It enables educators and designers to quickly produce unique visuals aligned with specific themes or topics, enhancing engagement and creativity.
To complete the sentence, select the appropriate option in the answer area.


facial analysis.
According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and the Microsoft Learn module “Describe features of computer vision workloads on Azure,” facial analysis is a computer vision capability that detects faces and extracts attributes such as facial expressions, emotions, pose, occlusion, and image quality factors like exposure and noise. It does not identify or verify individual identities; rather, it interprets facial features and image characteristics to analyze conditions in an image.
In this question, the AI solution helps photographers take better portrait photos by providing feedback on exposure, noise, and occlusion — tasks directly linked to facial analysis. The model analyzes the detected face to determine if the image is well-lit, clear, and unobstructed, thereby improving photo quality. These capabilities are part of the Azure Face service in Azure Cognitive Services, which includes both facial detection and facial analysis functionalities.
Here’s how the other options differ:
Facial detection only identifies that a face exists in an image and provides its location using bounding boxes, without further interpretation.
Facial recognition goes a step further — it attempts to identify or verify a person’s identity by comparing the detected face with stored images. This is not what the scenario describes.
Thus, when an AI solution evaluates image quality aspects like exposure, noise, and occlusion, it’s performing facial analysis, which focuses on understanding image and facial characteristics rather than identification.
In summary, based on Microsoft’s AI-900 study material, this scenario demonstrates facial analysis, a subcategory of computer vision tasks within Azure Cognitive Services.
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.



According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and the Microsoft Learn module “Identify features of common machine learning types”, there are three main types of machine learning: supervised learning, unsupervised learning, and reinforcement learning. Within supervised learning, two common approaches are regression and classification, while clustering is a primary example of unsupervised learning.
“You train a regression model by using unlabeled data.” – No. Regression models are trained with labeled data, meaning the input data includes both features (independent variables) and target labels (dependent variables) representing continuous numerical values. Examples include predicting house prices or sales forecasts. Unlabeled data (data without target output values) cannot be used to train regression models; such data is used in unsupervised learning tasks like clustering.
“The classification technique is used to predict sequential numerical data over time.” – No. Classification is used for categorical predictions, where outputs belong to discrete classes, such as spam/not spam or disease present/absent. Predicting sequential numerical data over time refers to time series forecasting, which is typically a regression or forecasting problem, not classification. The AI-900 syllabus clearly separates classification (categorical prediction) from regression (continuous value prediction) and time series (temporal pattern analysis).
“Grouping items by their common characteristics is an example of clustering.” – Yes. This statement is correct. Clustering is an unsupervised learning technique used to group similar data points based on their features. The AI-900 study materials describe clustering as the process of “discovering natural groupings in data without predefined labels.” Common examples include customer segmentation or document grouping.
Therefore, based on Microsoft’s AI-900 training objectives and definitions:
Regression → supervised learning using labeled continuous data (No)
Classification → categorical prediction, not sequential numeric forecasting (No)
Clustering → grouping by similarity (Yes)
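The clustering idea — grouping items by similarity with no labels at all — can be sketched with a minimal one-dimensional k-means. The customer-spend numbers below are made up for the example; the point is that the algorithm finds the two groups on its own:

```python
# Minimal 1-D k-means sketch: grouping items by similarity without any
# labels -- the unsupervised "clustering" technique described above.
def kmeans_1d(values, centroids, iterations=10):
    for _ in range(iterations):
        # Assign each value to its nearest centroid.
        groups = [[] for _ in centroids]
        for v in values:
            nearest = min(range(len(centroids)),
                          key=lambda i: abs(v - centroids[i]))
            groups[nearest].append(v)
        # Move each centroid to the mean of its group.
        centroids = [sum(g) / len(g) if g else c
                     for g, c in zip(groups, centroids)]
    return groups

# Hypothetical monthly spend per customer -- two natural groups.
spend = [12, 15, 14, 90, 95, 88]
low, high = kmeans_1d(spend, centroids=[0, 100])
print(low, high)   # [12, 15, 14] [90, 95, 88]
```

Contrast this with the regression sketch of supervised learning: here no target values are ever provided, yet natural groupings emerge — exactly the distinction the Yes/No statements test.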
You are building a tool that will process images from retail stores and identify the products of competitors.
The solution must be trained on images provided by your company.
Which Azure AI service should you use?
Azure AI Custom Vision
Azure AI Computer Vision
Face
Azure AI Document Intelligence
According to the Microsoft Azure AI Fundamentals (AI-900) official study materials and Microsoft Learn documentation, Azure AI Custom Vision is specifically designed for training custom image classification and object detection models using images that a company provides. In this scenario, the company wants to identify competitor products from images captured in retail stores — a classic use case for custom image classification or object detection, depending on whether you are labeling entire images or identifying multiple items within an image.
Azure AI Custom Vision allows users to:
Upload their own labeled training images.
Train a model that learns to recognize specific objects (in this case, competitor products).
Evaluate, iterate, and deploy the model as an API endpoint for real-time inference.
This fits perfectly with the requirement that the solution “must be trained on images provided by your company.” The key phrase here indicates the need for a custom-trained model rather than a prebuilt one.
The other options are not suitable for this scenario:
B. Azure AI Computer Vision provides prebuilt models for general-purpose image understanding (e.g., detecting common objects, reading text, describing scenes). It is not intended for training on custom datasets.
C. Face service is limited to detecting and recognizing human faces; it cannot be trained to identify products.
D. Azure AI Document Intelligence (formerly Form Recognizer) is focused on extracting structured data from documents and forms, not analyzing retail images.
Therefore, per Microsoft’s official AI-900 training content, when a solution must be trained on custom company images to recognize specific products, the appropriate service is Azure AI Custom Vision.
Which Azure service can use the prebuilt receipt model in Azure AI Document Intelligence?
Azure AI Computer Vision
Azure Machine Learning
Azure AI Services
Azure AI Custom Vision
The prebuilt receipt model is part of Azure AI Document Intelligence (formerly Form Recognizer), which belongs to the broader Azure AI Services family. The prebuilt receipt model is designed to automatically extract key information such as merchant names, dates, totals, and tax amounts from receipts without requiring custom training.
Among the given options, C. Azure AI Services is correct because it encompasses all cognitive AI capabilities—vision, language, speech, and document processing. Specifically, Azure AI Document Intelligence is included within Azure AI Services and provides both prebuilt and custom models for processing invoices, receipts, business cards, and identity documents.
Options A (Computer Vision) and D (Custom Vision) are image-based services, not form-processing tools. Option B (Azure Machine Learning) focuses on building custom predictive models, not using prebuilt document models.
Therefore, the correct answer is C. Azure AI Services, which includes the prebuilt receipt model in Document Intelligence.
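To make the prebuilt receipt model's output concrete, here is a toy sketch of the kinds of fields it returns (merchant, date, total), extracted here with regular expressions from plain text. This is NOT a call to Azure AI Document Intelligence — the receipt text and patterns are invented for illustration; the real service works on scanned images and requires no patterns at all:

```python
import re

# Toy sketch of the fields the prebuilt receipt model returns
# (merchant, date, total) -- extracted here with regexes from plain
# text, NOT by calling Azure AI Document Intelligence.
RECEIPT = """Contoso Coffee
2024-03-15
Latte        4.50
Muffin       3.25
Total        7.75
"""

def parse_receipt(text):
    lines = [l for l in text.splitlines() if l.strip()]
    merchant = lines[0].strip()
    date = re.search(r"\d{4}-\d{2}-\d{2}", text).group()
    total = float(re.search(r"Total\s+(\d+\.\d{2})", text).group(1))
    return {"merchant": merchant, "date": date, "total": total}

print(parse_receipt(RECEIPT))
```

The prebuilt model produces this kind of structured record directly from a receipt photo, with confidence scores per field and no training or hand-written rules.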
Select the answer that correctly completes the sentence.



“Optical Character Recognition (OCR) extracts text from handwritten documents.”
According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and Microsoft Learn module “Identify features of computer vision workloads,” Optical Character Recognition (OCR) is a computer vision capability that enables AI systems to detect and extract printed or handwritten text from images, scanned documents, and photographs.
Microsoft Learn explains that OCR uses machine learning algorithms to analyze visual data, locate regions containing text, and then convert that text into machine-readable digital format. This capability is essential for automating processes such as document digitization, form processing, and data extraction.
OCR technology is provided through services such as the Azure Cognitive Services Computer Vision API and Azure Form Recognizer. The Computer Vision API’s OCR feature can extract text from both typed and handwritten sources, including receipts, invoices, letters, and forms. Once extracted, this text can be processed, searched, or stored electronically, enabling automation and efficiency in document management systems.
Let’s review the incorrect options:
Object detection identifies and locates objects in an image by drawing bounding boxes (e.g., detecting vehicles or people).
Facial recognition identifies or verifies individuals by comparing facial features.
Image classification assigns an image to one or more predefined categories (e.g., “dog,” “car,” “tree”).
None of these perform the task of extracting textual content from images — that is uniquely handled by Optical Character Recognition (OCR).
Therefore, based on the AI-900 official study content, the verified and correct answer is Optical Character Recognition (OCR), as it specifically extracts text (printed or handwritten) from image-based documents.
Match the types of natural language processing workloads to the appropriate scenarios.
To answer, drag the appropriate workload type from the column on the left to its scenario on the right. Each workload type may be used once, more than once, or not at all.
NOTE: Each correct selection is worth one point.


According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and the Microsoft Learn module “Identify features of Natural Language Processing (NLP) workloads on Azure”, Azure Cognitive Services provides several text analytics and language understanding workloads that perform different types of language processing tasks. Each workload extracts specific information or performs distinct analysis operations on text data.
Entity Recognition → Extracts persons, locations, and organizations from the text. Entity recognition is a feature of Azure Cognitive Service for Language (formerly Text Analytics). It identifies and categorizes named entities in unstructured text, such as people, organizations, locations, dates, and more. The study guide defines this workload as: “Entity recognition locates and classifies named entities in text into predefined categories.” This allows applications to extract structured information from raw text data—for example, identifying “Microsoft” as an organization and “Seattle” as a location.
Sentiment Analysis → Evaluates text along a positive–negative scale. Sentiment analysis determines the emotional tone or opinion expressed in a piece of text. It classifies text as positive, negative, neutral, or mixed, which is widely used for social media monitoring, customer feedback, and product reviews. Microsoft’s official documentation describes it as: “Sentiment analysis evaluates text and returns a sentiment score indicating whether the sentiment is positive, negative, neutral, or mixed.”
Translation → Returns text translated to the specified target language. The Translator service, part of Azure Cognitive Services, automatically translates text from one language to another. It supports multiple languages and provides near real-time translation. The AI-900 content specifies that “translation workloads are used to automatically translate text between languages using machine translation models.”
In summary:
Entity Recognition → Extracts entities like names and locations.
Sentiment Analysis → Determines emotional tone.
Translation → Converts text between languages.
Final Answers:
Extracts persons, locations, and organizations → Entity recognition
Evaluates text along a positive–negative scale → Sentiment analysis
Returns text translated to the specified target language → Translation
Match the Azure AI service to the appropriate generative AI capability.
To answer, drag the appropriate service from the column on the left to its capability on the right. Each service may be used once, more than once, or not at all.
NOTE: Each correct match is worth one point.



This question maps each Azure AI service to its correct capability based on the Microsoft Azure AI Fundamentals (AI-900) syllabus and Microsoft Learn documentation on Azure Cognitive Services.
Classify and label images → Azure AI Vision. Azure AI Vision (formerly Computer Vision) provides capabilities to analyze visual content, detect objects, classify images, and extract information from pictures. It includes object detection, image classification, and tagging, which are core vision tasks. This service enables businesses to build solutions that understand visual input, such as identifying products, reading signs, or detecting faces in images.
Generate conversational responses → Azure OpenAI Service. Azure OpenAI Service integrates powerful large language models such as GPT-3.5 and GPT-4, capable of generating human-like text responses, summarizations, translations, and dialogues. These models are designed for natural language generation (NLG) and conversational AI, making them ideal for chatbots, virtual agents, and intelligent assistants that produce dynamic, context-aware replies.
Convert speech to text in real time → Azure AI Speech. Azure AI Speech provides speech-to-text capabilities (speech recognition) that convert spoken language into written text instantly. It is commonly used in transcription services, voice command systems, and live captioning applications. Additionally, the Speech service supports text-to-speech (speech synthesis) and speech translation, making it versatile for voice-based AI applications.
By understanding each service’s specialization—Vision for visual data, OpenAI for generative text, and Speech for audio processing—you can correctly match the capabilities.
You plan to create an AI application by using Azure AI Foundry. The solution will be deployed to dedicated virtual machines. Which deployment option should you use?
serverless API
Azure Kubernetes Service (AKS) cluster
Azure virtual machines
managed compute
To complete the sentence, select the appropriate option in the answer area.


Confidence.
According to the Microsoft Azure AI Fundamentals (AI-900) Official Study Guide and the Microsoft Learn module “Explore computer vision in Microsoft Azure,” the confidence score represents the calculated probability that a model’s prediction is correct. In image classification, when an AI model analyzes an image and assigns it to a specific category, it also produces a confidence value—a numerical probability (usually between 0 and 1) indicating how certain the model is about its prediction.
For example, if an image classification model identifies an image as a “cat” with a confidence of 0.92, it means the model is 92% certain that the image depicts a cat. The confidence value helps developers and users understand the model’s certainty level about its classification output.
Microsoft Learn emphasizes that in Azure Cognitive Services—such as the Custom Vision Service—each prediction result includes both the predicted label (class) and a confidence score. These confidence scores are essential for evaluating model performance and determining thresholds for automated decisions (e.g., accepting predictions only above a 0.8 probability).
Let’s evaluate the other options:
Accuracy: This is an overall performance metric measuring the percentage of correct predictions across the dataset, not a probability for a single prediction.
Root Mean Square Error (RMSE): This is a metric for regression models, not classification tasks. It measures average error magnitude between predicted and actual values.
Sentiment: This is a type of prediction (positive, negative, neutral) in text analysis, not a probability metric.
Therefore, based on Microsoft’s AI-900 training materials and Azure Cognitive Services documentation, the calculated probability of a correct image classification is called Confidence, which expresses how sure the model is about its prediction for a specific input.
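The thresholding idea described above can be sketched in a few lines of plain Python. This is illustrative only: the prediction structure below is a simplified stand-in for the kind of label/probability pairs returned by services such as Custom Vision, not the exact API schema.

```python
# Illustrative sketch: accept only predictions whose confidence score
# meets a threshold (0.8, as in the example above).

def accept_predictions(predictions, threshold=0.8):
    """Keep only predictions whose confidence meets the threshold."""
    return [p for p in predictions if p["confidence"] >= threshold]

predictions = [
    {"label": "cat", "confidence": 0.92},
    {"label": "dog", "confidence": 0.55},
    {"label": "fox", "confidence": 0.81},
]

accepted = accept_predictions(predictions)
print([p["label"] for p in accepted])  # → ['cat', 'fox']
```

A real application would tune the threshold per scenario: a higher value reduces false positives at the cost of rejecting more uncertain but correct predictions.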
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.



In Microsoft Azure AI Language Service, both Named Entity Recognition (NER) and Key Phrase Extraction are core features for text analytics. They serve distinct purposes in analyzing and structuring unstructured text data.
Named Entity Recognition (NER): NER is used to identify and categorize specific entities within text, such as people, organizations, locations, dates, times, and quantities. According to Microsoft Learn’s “Analyze text with Azure AI Language” module, NER scans text to extract these entities along with their types. Therefore, the statement “Named entity recognition can be used to retrieve dates and times in a text string” is True (Yes).
Key Phrase Extraction: This feature identifies the most important phrases or main topics in a block of text. It is useful for summarization or highlighting central ideas without classifying them into specific categories. Therefore, the statement “Key phrase extraction can be used to retrieve important phrases in a text string” is also True (Yes).
City Name Retrieval: While key phrase extraction highlights major phrases, it does not extract specific entities like cities or dates. Extracting such details requires Named Entity Recognition, which is designed to find named entities such as city names, people, or organizations. Hence, the statement “Key phrase extraction can be used to retrieve all the city names in a text string” is False (No).
Match the principles of responsible AI to the appropriate descriptions.
To answer, drag the appropriate principle from the column on the left to its description on the right. Each principle may be used once, more than once, or not at all.
NOTE: Each correct match is worth one point.



The correct answers are derived from the Microsoft Azure AI Fundamentals (AI-900) Official Study Guide and the Microsoft Learn module “Identify guiding principles for responsible AI.”
Microsoft defines six core principles of Responsible AI:
Fairness
Reliability and safety
Privacy and security
Inclusiveness
Transparency
Accountability
Each principle addresses a key ethical and operational requirement for developing and deploying trustworthy AI systems.
Reliability and safety – “AI systems must consistently operate as intended, even under unexpected conditions.” This principle ensures that AI models are dependable, robust, and perform accurately under diverse circumstances. Microsoft emphasizes that systems should be thoroughly tested and monitored to guarantee predictable behavior, prevent harm, and maintain safety. A reliable AI solution should continue to function properly when faced with unusual or noisy inputs, and fail safely when issues arise. This principle focuses on stability, testing, and dependable performance.
Privacy and security – “AI systems must protect and secure personal and business information.” This principle ensures that AI systems comply with data privacy laws and ethical standards. It protects users’ sensitive data against unauthorized access and misuse. Microsoft highlights that organizations must implement strong encryption, data anonymization, and access control mechanisms to maintain confidentiality. Protecting user data is essential to building trust and compliance with global standards like GDPR.
Other principles such as fairness and inclusiveness apply to ensuring equitable and accessible AI, but they do not directly relate to system operation or information protection.
Final Answers:
“Operate as intended” → Reliability and safety
“Protect and secure information” → Privacy and security
To complete the sentence, select the appropriate option in the answer area.



The correct answer is object detection. According to the Microsoft Azure AI Fundamentals (AI-900) official study materials and Microsoft Learn module “Explore computer vision”, object detection is the process of identifying and locating objects within an image or video. The primary characteristic of object detection, as emphasized in the study guide, is its ability to return a bounding box around each detected object along with a corresponding label or class.
In this question, the task involves returning a bounding box that indicates the location of a vehicle in an image. This is the exact definition of object detection — identifying that the object exists (a vehicle) and determining its position within the frame. Microsoft Learn clearly differentiates this from other computer vision tasks. Image classification, for example, only determines what an image contains as a whole (for instance, “this image contains a vehicle”), but it does not indicate where in the image the object is located. Optical character recognition (OCR) is specifically used for extracting printed or handwritten text from images, and semantic segmentation involves classifying every pixel in an image to understand boundaries in greater detail, often used in autonomous driving or medical imaging.
The official AI-900 guide highlights object detection as one of the key computer vision workloads supported by Azure Computer Vision, Custom Vision, and Azure Cognitive Services. These services are designed to detect multiple instances of various object types in a single image, outputting bounding boxes and confidence scores for each.
Therefore, based on the AI-900 official curriculum and Microsoft Learn concepts, returning a bounding box that shows the location of a vehicle is a textbook example of object detection, as it involves both recognition and localization of the object within the image frame.
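The distinction above can be made concrete with a small sketch. The field names below are a simplification for illustration, not the exact Azure Computer Vision or Custom Vision response schema; the key point is that each detection carries a location, while a classification result does not.

```python
# Illustrative shape of an object detection result: each detected object
# has a label, a confidence score, and a bounding box (left, top, width,
# height, here as fractions of the image dimensions).

detections = [
    {"label": "vehicle", "confidence": 0.94,
     "box": {"left": 0.12, "top": 0.40, "width": 0.35, "height": 0.22}},
    {"label": "person", "confidence": 0.71,
     "box": {"left": 0.60, "top": 0.35, "width": 0.10, "height": 0.30}},
]

# An image *classification* result, by contrast, has no location at all:
classification = {"label": "street scene", "confidence": 0.88}

for d in detections:
    b = d["box"]
    print(f"{d['label']} ({d['confidence']:.2f}) at left={b['left']}, top={b['top']}")
```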
Match the principles of responsible AI to appropriate requirements.
To answer, drag the appropriate principle from the column on the left to its requirement on the right. Each principle may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.



According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and the Microsoft Learn module “Identify guiding principles for responsible AI”, responsible AI is built upon six foundational principles: Fairness, Reliability and Safety, Privacy and Security, Inclusiveness, Transparency, and Accountability. Each principle serves to guide the ethical design, deployment, and management of artificial intelligence systems.
Fairness – This principle ensures that AI systems treat all people fairly and do not discriminate based on personal attributes such as gender, race, or age. The Microsoft Learn content emphasizes that “AI systems should treat everyone fairly” and that organizations must evaluate datasets and model outputs for bias. In this scenario, “The system must not discriminate based on gender, race” clearly aligns with Fairness because it directly addresses equitable treatment and unbiased decision-making.
Privacy and Security – Microsoft’s responsible AI framework stresses that “AI systems must be secure and respect privacy.” This means personal data should be safeguarded, processed lawfully, and visible only to authorized users. The statement “Personal data must be visible only to approved users” reflects the importance of protecting sensitive information and controlling access—precisely the intent of the Privacy and Security principle.
Transparency – Transparency refers to ensuring that users understand how AI systems operate and make decisions. Microsoft notes that “AI systems should be understandable and users should be able to know why decisions are made.” The requirement “Automated decision-making processes must be recorded so that approved users can identify why a decision was made” directly supports this principle. Transparency promotes trust and accountability by documenting the reasoning behind AI outputs.
Reliability and Safety, though another core principle, does not directly relate to any of the provided statements in this question.
Select the answer that correctly completes the sentence.



The correct completion of the sentence is:
“The interactive answering of questions entered by a user as part of an application is an example of natural language processing.”
According to the Microsoft Azure AI Fundamentals (AI-900) official study materials, Natural Language Processing (NLP) is a branch of Artificial Intelligence that focuses on enabling computers to understand, interpret, and respond to human language in a way that is both meaningful and useful. It is one of the key AI workloads described in the “Describe features of common AI workloads” module on Microsoft Learn.
When a user types a question into an application and the system responds interactively — such as in a chatbot, Q & A system, or virtual assistant — this process requires language understanding. NLP allows the system to process the input text, determine user intent, extract relevant entities, and generate an appropriate response. This is the foundational capability behind services such as Azure Cognitive Service for Language, Language Understanding (LUIS), and QnA Maker (now integrated as Question Answering in the Language service).
Microsoft’s study guide explains that NLP workloads include the following key scenarios:
Language understanding: Determining intent and context from text or speech.
Text analytics: Extracting meaning, key phrases, sentiment, or named entities.
Conversational AI: Powering bots and virtual agents to interact using natural language. These systems rely on NLP models to analyze user inputs and respond accordingly.
In contrast:
Anomaly detection identifies data irregularities.
Computer vision analyzes images or video.
Forecasting predicts future values based on historical data.
Therefore, based on the AI-900 official materials, the interactive answering of user questions through an application clearly falls under Natural Language Processing (NLP).
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.


Statement
Yes / No
Providing an explanation of the outcome of a credit loan application is an example of the Microsoft transparency principle for responsible AI.
Yes
A triage bot that prioritizes insurance claims based on injuries is an example of the Microsoft reliability and safety principle for responsible AI.
Yes
An AI solution that is offered at different prices for different sales territories is an example of the Microsoft inclusiveness principle for responsible AI.
No
This question is based on the Responsible AI principles defined by Microsoft, a major topic in the AI-900: Microsoft Azure AI Fundamentals certification. The goal of Responsible AI is to ensure that artificial intelligence is developed and used ethically, safely, and transparently to benefit people and society. Microsoft’s framework defines six core principles: Fairness, Reliability and Safety, Privacy and Security, Inclusiveness, Transparency, and Accountability.
Transparency Principle – Yes. Providing an explanation for a loan application decision clearly reflects transparency. According to Microsoft’s Responsible AI guidelines, transparency involves ensuring that users and stakeholders understand how AI systems make decisions. When a financial AI model explains why a loan was approved or denied, it promotes user trust and confidence in automated decision-making. Transparency helps individuals understand influencing factors (like income or credit score), thereby fostering ethical AI deployment.
Reliability and Safety Principle – Yes. A triage bot that prioritizes insurance claims based on injury severity demonstrates reliability and safety. This principle ensures that AI systems consistently operate as intended, handle data accurately, and do not cause unintended harm. For a triage bot, safety means it must correctly interpret medical or claim information and consistently provide appropriate prioritization. Microsoft emphasizes that reliable AI systems must be tested rigorously, function correctly in various scenarios, and maintain user safety at all times.
Inclusiveness Principle – No. An AI solution priced differently for various sales territories is unrelated to inclusiveness. Inclusiveness focuses on designing AI systems that are accessible and fair to all users, including those with disabilities or from different demographic backgrounds. Price variation across territories is a business strategy, not an ethical AI inclusion concern. Hence, this statement does not align with any Responsible AI principle.
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.



Box 1: No
Box 2: Yes
Box 3: Yes
Anomaly detection encompasses many important tasks in machine learning:
Identifying transactions that are potentially fraudulent.
Learning patterns that indicate that a network intrusion has occurred.
Finding abnormal clusters of patients.
Checking values entered into a system.
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.



The correct answers are Yes, Yes, and Yes.
According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and Microsoft Learn content in the section “Describe features of conversational AI workloads on Azure”, bots created using Azure Bot Service can interact with users across multiple channels. The AI-900 syllabus explains that Azure Bot Service integrates with various communication platforms, allowing developers to build a single bot that can be deployed in many contexts without rewriting the logic.
“You can communicate with a bot by using Cortana.” – Yes. The AI-900 learning materials explain that Cortana, Microsoft’s intelligent personal assistant, can serve as a channel for bots built with the Azure Bot Service. Through the Bot Framework, bots can be connected to Cortana to allow users to interact via voice or text. Although Cortana is less prominent now, it remains conceptually included in the AI-900 coverage as an example of a voice-based conversational AI channel.
“You can communicate with a bot by using Microsoft Teams.” – Yes. This statement is true and directly referenced in the AI-900 syllabus. Microsoft Teams is a fully supported communication channel for Azure Bot Service. Bots in Teams can handle chat messages, commands, and interactions in team or personal contexts. The Microsoft Learn materials specify Teams as one of the native connectors where enterprise users can interact with organizational bots.
“You can communicate with a bot by using a webchat interface.” – Yes. This is also true. The Web Chat channel is one of the most common ways to deploy bots publicly. Azure Bot Service provides a Web Chat control that can be embedded directly into a webpage or web application. This allows users to interact with the bot using a chat window, just like on customer service websites.
Therefore, all three interfaces—Cortana (voice-based), Microsoft Teams (enterprise chat), and Web Chat (browser-based)—are valid and officially supported communication channels for Azure bots.
You need to predict the sea level in meters for the next 10 years.
Which type of machine learning should you use?
classification
regression
clustering
In the most basic sense, regression refers to prediction of a numeric target.
Linear regression attempts to establish a linear relationship between one or more independent variables and a numeric outcome, or dependent variable.
You use this module to define a linear regression method, and then train a model using a labeled dataset. The trained model can then be used to make predictions.
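The idea of fitting a line to labeled numeric data can be sketched in plain Python. The data below is hypothetical, made-up sea level values chosen to be perfectly linear so the arithmetic is easy to follow; a real solution would use historical measurements and a library such as scikit-learn.

```python
# A minimal least-squares linear regression sketch (illustrative only).

def fit_line(xs, ys):
    """Return slope and intercept minimizing squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

years = [2020, 2021, 2022, 2023, 2024]
levels = [0.10, 0.12, 0.14, 0.16, 0.18]   # hypothetical, perfectly linear

slope, intercept = fit_line(years, levels)
# Predict a numeric value for a future year -- the hallmark of regression.
prediction_2030 = slope * 2030 + intercept
print(round(prediction_2030, 2))  # → 0.3
```

Because the target is a continuous number rather than a category or a group, this is regression; a classification model would instead output a discrete class.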
You have the following dataset.

You plan to use the dataset to train a model that will predict the house price categories of houses.
What are Household Income and House Price Category? To answer, select the appropriate option in the answer area.
NOTE: Each correct selection is worth one point.



In machine learning, especially within the Microsoft Azure AI Fundamentals (AI-900) framework, datasets used for supervised learning are composed of features (inputs) and labels (outputs). According to the Microsoft Learn module “Explore the machine learning process”, a feature is any measurable property or attribute used by the model to make predictions, whereas a label is the actual value or category the model is trying to predict.
Household Income → Feature. A feature (also known as an independent variable) represents the input data that the machine learning algorithm uses to detect patterns or correlations. In this dataset, Household Income is a numeric value that influences the prediction of house price categories. During training, the model learns how variations in household income correlate with changes in the house price category. Microsoft Learn defines features as “the attributes or measurable inputs that are used to train the model.” Thus, Household Income serves as a predictive input or feature.
House Price Category → Label. The label (or dependent variable) represents the output the model aims to predict. It is the known result during training that helps the algorithm learn correct mappings between features and outcomes. In this scenario, House Price Category—which can take values such as “Low,” “Middle,” or “High”—is the classification outcome that the model will predict based on household income (and possibly other variables). According to Microsoft Learn, “the label is the variable that contains the known values that the model is trained to predict.”
In summary, the dataset defines a supervised learning classification problem, where Household Income is the feature (input) and House Price Category is the label (output) that the model will learn to predict.
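The feature/label split can be shown concretely. The rows below are hypothetical values invented for illustration; the point is only how a supervised-learning dataset separates inputs from the value to predict.

```python
# Illustrative only: splitting a toy dataset into features and a label.

rows = [
    {"household_income": 20_000, "house_price_category": "Low"},
    {"household_income": 55_000, "house_price_category": "Middle"},
    {"household_income": 120_000, "house_price_category": "High"},
]

# Features: the measurable inputs the model learns from.
features = [[r["household_income"]] for r in rows]
# Label: the known value the model is trained to predict.
labels = [r["house_price_category"] for r in rows]

print(features)  # → [[20000], [55000], [120000]]
print(labels)    # → ['Low', 'Middle', 'High']
```

These two lists are exactly what a classifier's training function would receive: one array of inputs and one array of known outputs.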
Which statement is an example of a Microsoft responsible AI principle?
AI systems must use only publicly available data.
AI systems must protect the interests of the company.
AI systems must be understandable.
AI systems must keep personal details public.
The correct answer is C. AI systems must be understandable, which corresponds to the Transparency principle of Microsoft’s Responsible AI framework.
According to the Microsoft Azure AI Fundamentals (AI-900) Official Study Guide and the Microsoft Learn module “Identify guiding principles for responsible AI”, Microsoft defines six key principles for responsible AI:
Fairness – AI systems should treat everyone equitably.
Reliability and safety – AI should function as intended, even under unexpected conditions.
Privacy and security – AI must protect personal and business data.
Inclusiveness – AI should empower everyone and engage diverse users.
Transparency – AI systems should be understandable.
Accountability – People should be accountable for AI systems.
The statement “AI systems must be understandable” aligns directly with the Transparency principle, ensuring that AI decisions and behaviors can be explained and interpreted by developers, users, and stakeholders. Microsoft emphasizes that transparent AI builds trust, allows debugging, and ensures ethical usage.
Other options are incorrect:
A. Use only publicly available data – Not a principle of Responsible AI.
B. Protect the interests of the company – Focused on business goals, not ethical AI.
D. Keep personal details public – Violates the Privacy and Security principle.
Final Answer: C. AI systems must be understandable.
You need to develop a chatbot for a website. The chatbot must answer users’ questions based on the information in the following documents:
A product troubleshooting guide in a Microsoft Word document
A frequently asked questions (FAQ) list on a webpage
Which service should you use to process the documents?
Azure Bot Service
Language Understanding
Text Analytics
QnA Maker
QnA Maker is an Azure Cognitive Service used to build question-and-answer knowledge bases from structured and unstructured documents, such as FAQs, product manuals, or webpages. According to the AI-900 study guide and Microsoft Learn module “Build a knowledge base with QnA Maker”, this service allows you to extract question-answer pairs from existing data sources like FAQ pages, PDF files, or Word documents.
In this scenario, you have:
A product troubleshooting guide (Word document)
A FAQ webpage
QnA Maker can automatically read both sources, extract relevant Q & A pairs, and create a knowledge base that your chatbot can use to respond to user queries intelligently.
To clarify the other options:
A. Azure Bot Service provides the chatbot interface and conversation logic but doesn’t extract knowledge from documents.
B. Language Understanding (LUIS) identifies intents and entities in natural language input, but it’s not used to read document content.
C. Text Analytics is used for key phrase extraction and sentiment analysis, not Q & A creation.
Therefore, the correct service for processing FAQ-style and document-based content into a question-answering bot is QnA Maker.
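Conceptually, a knowledge base maps questions to answers. The toy sketch below uses exact-match lookup purely for illustration; QnA Maker (and its successor, Question Answering in the Language service) instead uses machine-learned ranking to match paraphrased questions, and the questions and answers here are invented examples.

```python
# Toy knowledge-base lookup (illustrative only -- not how QnA Maker
# actually matches questions, which is done by ML-based ranking).

knowledge_base = {
    "how do i reset the device": "Hold the power button for 10 seconds.",
    "what is the warranty period": "The product is covered for 12 months.",
}

def answer(question, kb, default="Sorry, I don't know that one."):
    """Normalize the question and look it up in the knowledge base."""
    key = question.strip().lower().rstrip("?")
    return kb.get(key, default)

print(answer("What is the warranty period?", knowledge_base))
# → The product is covered for 12 months.
```

In the real service, the knowledge base entries would be extracted automatically from the Word document and FAQ webpage rather than written by hand.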
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.



According to the Microsoft Azure AI Fundamentals (AI-900) study guide and Azure Cognitive Services documentation, the Custom Vision service is a specialized computer vision tool that allows users to build, train, and deploy custom image classification and object detection models. It is part of the Azure Cognitive Services suite, designed for scenarios where pre-built Computer Vision models do not meet specific business requirements.
“The Custom Vision service can be used to detect objects in an image.” → Yes. This statement is true. The Custom Vision service supports object detection, enabling the model to identify and locate multiple objects within a single image using bounding boxes. For example, it can locate cars, products, or animals in photos.
“The Custom Vision service requires that you provide your own data to train the model.” → Yes. This statement is true. Unlike pre-trained models such as the standard Computer Vision API, the Custom Vision service requires users to upload and label their own images. The system uses this labeled dataset to train a model specific to the user’s scenario, improving accuracy for custom use cases.
“The Custom Vision service can be used to analyze video files.” → No. This statement is false. The Custom Vision service works only with static images, not videos. To analyze video files, Azure provides Video Indexer and Azure Media Services, which are designed for extracting insights from moving visual content.
Match the AI workload to the appropriate task.
To answer, drag the appropriate AI workload from the column on the left to its task on the right. Each workload may be used once, more than once, or not at all.
NOTE: Each correct match is worth one point.


You plan to build a conversational AI solution that can be surfaced in Microsoft Teams, Microsoft Cortana, and Amazon Alexa. Which service should you use?
Azure Bot Service
Azure Cognitive Search
Language service
Speech
According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and Microsoft Learn module “Describe features of conversational AI workloads on Azure,” the Azure Bot Service is the dedicated Azure service for building, connecting, deploying, and managing conversational AI experiences across multiple channels — such as Microsoft Teams, Cortana, and Amazon Alexa.
The Azure Bot Service integrates with the Bot Framework SDK to design intelligent chatbots that can communicate with users in natural language. It also connects seamlessly with other Azure Cognitive Services, such as Language Service (LUIS) for intent understanding and Speech Service for voice input/output.
The question specifies that the conversational AI must be accessible through multiple platforms, including Microsoft Teams, Cortana, and Alexa. Azure Bot Service supports this multi-channel communication model out of the box, allowing developers to configure a single bot that interacts through many endpoints simultaneously.
Other options:
B. Azure Cognitive Search: Used for information retrieval and knowledge mining, not conversational AI.
C. Language Service: Provides natural language understanding, key phrase extraction, sentiment analysis, etc., but doesn’t handle multi-channel communication.
D. Speech: Provides speech-to-text and text-to-speech conversion but is not a chatbot platform.
Therefore, the best solution for building and deploying a multi-channel conversational AI system is Azure Bot Service, as clearly defined in Microsoft’s AI-900 learning content.
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.



“You can fine-tune some Azure OpenAI models by using your own data.” – Yes. This statement is true. Azure OpenAI allows customers to fine-tune certain models like GPT-3, GPT-3.5, and some embedding models with their own data. Fine-tuning customizes a model to perform better on specific tasks or match a company’s domain terminology, tone, or context. According to Microsoft Learn’s AI-900 and Azure OpenAI documentation, fine-tuning is supported for approved use cases while maintaining Microsoft’s Responsible AI oversight and compliance process.
“Pretrained generative AI models are a component of Azure OpenAI.” – Yes. This statement is also true. Azure OpenAI provides access to pretrained large language and generative AI models such as GPT-3.5, GPT-4, Codex, and DALL·E. These models are pretrained on vast datasets and made available via APIs, allowing developers to generate text, code, and images without needing to train their own models. This is a core feature of Azure OpenAI’s service offering.
“To build a solution that complies with Microsoft responsible AI principles, you must build and train your own model.” – No. This statement is false. Compliance with Microsoft Responsible AI principles (Fairness, Reliability & Safety, Privacy & Security, Inclusiveness, Transparency, Accountability) does not require building custom models. Prebuilt Azure AI and OpenAI services already align with Responsible AI standards. Developers simply need to use these services responsibly, applying governance and ethical design practices.
Which feature of the Azure AI Language service should you use to automate the masking of names and phone numbers in text data?
Personally Identifiable Information (PII) detection
entity linking
custom text classification
custom named entity recognition (NER)
The correct answer is A. Personally Identifiable Information (PII) detection.
In the Azure AI Language service, PII detection is a built-in feature designed to automatically identify and redact sensitive or confidential information from text data. According to the Microsoft Learn module “Identify capabilities of Azure AI Language” and the AI-900 study guide, this capability can detect personal data such as names, phone numbers, email addresses, credit card numbers, and other identifiers.
When applied, the service scans input text and either masks or removes these PII elements based on configurable parameters, ensuring compliance with data privacy regulations like GDPR or HIPAA.
For example, if a document contains “John Doe’s phone number is 555-123-4567,” PII detection can return “******’s phone number is ***********,” thereby preventing exposure of sensitive personal details.
Option analysis:
A. Personally Identifiable Information (PII) detection: Correct. It identifies and masks sensitive data in text.
B. Entity linking: Connects recognized entities to known data sources like Wikipedia; not used for redaction.
C. Custom text classification: Classifies text into predefined categories; not designed for masking personal data.
D. Custom named entity recognition (NER): Detects domain-specific entities you define but doesn’t automatically mask them.
Therefore, to automate masking of names and phone numbers, the appropriate Azure AI Language feature is PII detection.
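The masking effect can be illustrated with a toy sketch. To be clear about what this is not: the real PII detection feature uses trained ML models to find names, phone numbers, and many other entity types, while the regular expression below only catches one simple phone format and cannot detect names at all. It only demonstrates what "redacted" output looks like.

```python
import re

# Toy sketch of redaction (illustrative only; Azure AI Language PII
# detection is ML-based, not regex-based, and also handles names).

def mask_phone_numbers(text):
    """Replace simple US-style phone numbers with asterisks."""
    return re.sub(r"\b\d{3}-\d{3}-\d{4}\b",
                  lambda m: "*" * len(m.group()), text)

text = "John Doe's phone number is 555-123-4567."
print(mask_phone_numbers(text))
# → John Doe's phone number is ************.
```

The actual service returns both the redacted text and the list of detected entities with their categories and confidence scores.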
You need to identify groups of rows with similar numeric values in a dataset. Which type of machine learning should you use?
clustering
regression
classification
When you need to identify groups of rows with similar numeric values in a dataset, the correct machine learning approach is clustering. This method belongs to unsupervised learning, where the model groups data points based on similarity without using pre-labeled training data.
In Azure AI-900 study modules, clustering is introduced as a technique for discovering natural groupings in data. For instance, clustering could be used to group customers with similar purchase histories or to find products with similar features. The algorithm—such as K-means or hierarchical clustering—calculates distances between data points and organizes them into clusters based on how close they are numerically or statistically.
The other options are incorrect:
B. Regression predicts continuous numeric values (e.g., predicting sales or prices).
C. Classification assigns data to predefined categories (e.g., spam or not spam).
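The K-means idea mentioned above can be sketched for one-dimensional data in plain Python. This is a minimal illustration with made-up values; a real workload would use a library such as scikit-learn and multi-dimensional features.

```python
# Minimal 1-D k-means sketch (illustrative only): group values around
# k centroids by nearest distance, then recompute centroids as means.

def kmeans_1d(values, centroids, iterations=10):
    """Group numeric values into clusters around the given centroids."""
    for _ in range(iterations):
        clusters = {c: [] for c in centroids}
        for v in values:
            nearest = min(centroids, key=lambda c: abs(v - c))
            clusters[nearest].append(v)
        centroids = [sum(vs) / len(vs) if vs else c
                     for c, vs in clusters.items()]
    return sorted(clusters.values(), key=min)

# Note: no labels anywhere -- the algorithm discovers the groups itself,
# which is what makes clustering *unsupervised* learning.
values = [1.0, 1.2, 0.9, 10.1, 10.3, 9.8]
groups = kmeans_1d(values, centroids=[0.0, 5.0])
print(groups)  # → [[1.0, 1.2, 0.9], [10.1, 10.3, 9.8]]
```

Contrast this with regression or classification, where each training row would need a known target value or category.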
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.



This question evaluates understanding of clustering—an unsupervised learning technique explained in the Microsoft Azure AI Fundamentals (AI-900) official study guide and Microsoft Learn module “Explore fundamental principles of machine learning.” Clustering involves finding natural groupings within data without prior knowledge of output labels. The algorithm identifies similarities among data points and groups them accordingly, with each group (or cluster) containing items that are more similar to each other than to those in other groups.
Organizing documents into groups based on similarities of the text contained in the documents → Yes. This is a classic clustering application. In text analytics or natural language processing (NLP), clustering algorithms such as K-means or hierarchical clustering are used to group documents with similar content or topics. According to Microsoft Learn, “clustering identifies relationships in data and groups items that share common characteristics.” Therefore, organizing text documents based on content similarity is a textbook example of clustering.
Grouping similar patients based on symptoms and diagnostic test results → Yes. This is another example of clustering. In healthcare analytics, clustering can be used to segment patients with similar health patterns or risks. The study guide emphasizes that clustering can “discover natural groupings in data such as customers with similar buying patterns or patients with similar clinical results.” Thus, this task correctly describes unsupervised clustering because it does not involve predicting a known outcome but grouping based on similarity.
Predicting whether a person will develop mild, moderate, or severe allergy symptoms based on pollen count → No. This is a classification problem, not clustering. Classification is a supervised learning technique where the model is trained with labeled data to predict predefined categories (in this case, mild, moderate, or severe). Microsoft Learn clearly distinguishes between clustering (discovering hidden patterns) and classification (predicting predefined categories).
You have a database that contains a list of employees and their photos.
You are tagging new photos of the employees.
For each of the following statements select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.



These answers are derived from the Microsoft Azure AI Fundamentals (AI-900) Official Study Guide and the Microsoft Learn module “Explore computer vision in Microsoft Azure.” The Azure Face service, part of Azure Cognitive Services, provides advanced facial recognition capabilities including detection, verification, identification, grouping, and similarity analysis.
Let’s analyze each statement:
“The Face service can be used to group all the employees who have similar facial characteristics.” → Yes. The Face service supports a grouping function that automatically organizes a collection of unknown faces into groups based on visual similarity. It doesn’t require labeled data; instead, it identifies clusters of similar-looking faces. This is particularly useful when building or validating datasets of people.
“The Face service will be more accurate if you provide more sample photos of each employee from different angles.” → Yes. According to Microsoft documentation, model accuracy improves when you provide multiple high-quality images of each person under different conditions—such as varying lighting, poses, and angles. This diversity allows the service to better learn unique facial characteristics and improves recognition reliability, especially for identification and verification tasks.
“If an employee is wearing sunglasses, the Face service will always fail to recognize the employee.” → No. While occlusions (like sunglasses or hats) can reduce accuracy, the service may still recognize the person depending on how much of the face remains visible. Microsoft Learn explicitly notes that partial occlusion affects recognition confidence but does not guarantee failure.
In conclusion, the Face service can group similar faces (Yes), become more accurate with diverse samples (Yes), and still recognize partially covered faces though with lower confidence (No). These principles align directly with the Face API’s core functions and AI-900 learning objectives regarding computer vision and responsible AI-based facial recognition.
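The verification capability discussed here maps to the Face REST API's verify operation, which compares two previously detected face IDs. The sketch below only builds the request rather than sending it; the endpoint and key are placeholder values, and the exact route can vary by API version, so treat this as an illustration of the request shape rather than a definitive client.

```python
import json

def build_verify_request(endpoint: str, key: str, face_id1: str, face_id2: str):
    """Build (url, headers, body) for a Face API verify call, which asks
    whether two detected faces belong to the same person."""
    url = f"{endpoint}/face/v1.0/verify"
    headers = {
        "Ocp-Apim-Subscription-Key": key,   # standard Cognitive Services key header
        "Content-Type": "application/json",
    }
    body = json.dumps({"faceId1": face_id1, "faceId2": face_id2})
    return url, headers, body

# Placeholder endpoint, key, and face IDs; a real call would POST this
# with an HTTP client against a provisioned Azure Face resource.
url, headers, body = build_verify_request(
    "https://contoso.cognitiveservices.azure.com", "<key>", "id-a", "id-b")
```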
Match the types of machine learning to the appropriate scenarios.
To answer, drag the appropriate machine learning type from the column on the left to its scenario on the right. Each machine learning type may be used once, more than once, or not at all.
NOTE: Each correct selection is worth one point.



According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and the Microsoft Learn module “Identify features of computer vision workloads on Azure”, computer vision models can perform different types of image analysis depending on the goal of the task. The main types include image classification, object detection, and semantic segmentation. Each method analyzes images at a different level of granularity.
Image Classification → Separate images of polar bears and brown bears. Image classification assigns an entire image to a specific category or label. The model analyzes the image as a whole and determines which predefined class it belongs to. For example, in this case, the model would look at the features of each image and decide whether it shows a polar bear or a brown bear. The Microsoft Learn materials define classification as “assigning an image to a specific category.”
Object Detection → Determine the location of a bear in a photo. Object detection identifies where objects appear within an image by drawing bounding boxes around them. This type of model not only classifies what object is present but also provides its location. Microsoft Learn explains that object detection “detects and locates individual objects within an image.” For instance, the model can detect a bear in a forest scene and highlight its position.
Semantic Segmentation → Determine which pixels in an image are part of a bear. Semantic segmentation is the most detailed form of image analysis. It classifies each pixel in an image according to the object it belongs to. In this scenario, the model identifies every pixel corresponding to the bear’s body. The AI-900 content defines this as “classifying every pixel in an image into a category.”
To summarize:
Image classification → Categorizes entire images.
Object detection → Locates and labels objects within images.
Semantic segmentation → Labels each pixel for precise object boundaries.
https://nanonets.com/blog/how-to-do-semantic-segmentation-using-deep-learning/
To complete the sentence, select the appropriate option in the answer area.



The correct answer is “adding and connecting modules on a visual canvas.”
According to the Microsoft Azure AI Fundamentals (AI-900) Official Study Guide and the Microsoft Learn module “Explore automated machine learning in Azure Machine Learning,” the Azure Machine Learning designer is a drag-and-drop, no-code environment that allows users to create, train, and deploy machine learning models visually. It is specifically designed for users who prefer an intuitive graphical interface rather than writing extensive code.
Microsoft Learn defines Azure Machine Learning designer as a tool that allows you to “build, test, and deploy machine learning models by dragging and connecting pre-built modules on a visual canvas.” These modules can represent data inputs, transformations, training algorithms, and evaluation processes. By linking them together, users can create an end-to-end machine learning pipeline.
The designer simplifies the machine learning workflow by allowing data scientists, analysts, and even non-developers to:
Import and prepare datasets visually.
Choose and connect algorithm modules (e.g., classification, regression, clustering).
Train and evaluate models interactively.
Publish inference pipelines as web services for prediction.
Let’s analyze the other options:
Automatically performing common data preparation tasks – This describes Automated ML (AutoML), not the Designer.
Automatically selecting an algorithm to build the most accurate model – Also a characteristic of AutoML, where the system tests multiple algorithms automatically.
Using a code-first notebook experience – This describes the Azure Machine Learning notebooks environment, which uses Python and SDKs, not the Designer interface.
Therefore, based on the official AI-900 learning objectives and Microsoft Learn documentation, the Azure Machine Learning designer allows you to create models by adding and connecting modules on a visual canvas, providing a no-code, interactive experience ideal for users building custom machine learning workflows visually.
You need to determine the location of cars in an image so that you can estimate the distance between the cars.
Which type of computer vision should you use?
optical character recognition (OCR)
object detection
image classification
face detection
Object detection is similar to tagging, but the API returns the bounding box coordinates (in pixels) for each object found. For example, if an image contains a dog, cat and person, the Detect operation will list those objects together with their coordinates in the image. You can use this functionality to process the relationships between the objects in an image. It also lets you determine whether there are multiple instances of the same tag in an image.
The Detect API applies tags based on the objects or living things identified in the image. There is currently no formal relationship between the tagging taxonomy and the object detection taxonomy. At a conceptual level, the Detect API only finds objects and living things, while the Tag API can also include contextual terms like "indoor", which can't be localized with bounding boxes.
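Because detection returns pixel bounding boxes, estimating the distance between two detected cars reduces to simple geometry on the box coordinates. The boxes below are hypothetical detections, not real API output:

```python
import math

def center(box):
    """Center point of a bounding box given as (x, y, width, height) in pixels."""
    x, y, w, h = box
    return (x + w / 2, y + h / 2)

def center_distance(box_a, box_b):
    """Euclidean distance in pixels between two bounding-box centers."""
    (ax, ay), (bx, by) = center(box_a), center(box_b)
    return math.hypot(ax - bx, ay - by)

# Hypothetical detections for two cars in the same frame
car1 = (100, 200, 80, 40)   # center (140, 220)
car2 = (400, 200, 80, 40)   # center (440, 220)
print(center_distance(car1, car2))  # 300.0
```

Converting that pixel distance to a real-world distance would additionally require camera calibration, which is outside what the detection API itself provides.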
Select the answer that correctly completes the sentence.



In the Microsoft Azure AI Fundamentals (AI-900) curriculum, computer vision capabilities refer to artificial intelligence systems that can analyze and interpret visual content such as images and videos. The Azure AI Vision and Face API services provide pretrained models for detecting, recognizing, and analyzing visual information, enabling developers to build intelligent applications that understand what they "see."
When asked how computer vision capabilities can be deployed, the correct answer is to integrate a face detection feature into an app. This aligns with Microsoft Learn’s module “Describe features of computer vision workloads,” which explains that computer vision can identify objects, classify images, detect faces, and extract text (OCR). The Face API, a part of Azure AI Vision, specifically provides face detection, verification, and emotion recognition capabilities.
Integrating these services into an application allows it to perform actions such as:
Detecting human faces in photos or video streams.
Recognizing facial attributes like age, emotion, or head pose.
Enabling secure authentication based on face recognition.
The other options are incorrect because they relate to different AI workloads:
Develop a text-based chatbot for a website: This falls under Conversational AI, implemented with Azure Bot Service or Conversational Language Understanding (CLU).
Identify anomalous customer behavior on an online store: This task relates to machine learning and anomaly detection models, not computer vision.
Suggest automated responses to incoming email: This uses Natural Language Processing (NLP) capabilities, not visual analysis.
Therefore, the correct and Microsoft-verified completion of the statement is:
“Computer vision capabilities can be deployed to integrate a face detection feature into an app.”
Select the answer that correctly completes the sentence.



According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and the Microsoft Learn module “Describe features of natural language processing (NLP) workloads on Azure,” Natural Language Processing refers to the branch of AI that enables computers to interpret, understand, and generate human language. One of the main NLP workloads identified by Microsoft is speech-to-text conversion, which transforms spoken words into written text.
Creating a text transcript of a voice recording perfectly fits this definition because it involves converting audio language data into text form — a process handled by speech recognition models. These models analyze the acoustic features of human speech, segment phonemes, identify words, and produce a text transcript. On Azure, this function is implemented using the Azure Cognitive Services Speech-to-Text API, part of the Language and Speech services.
Let’s examine the other options to clarify why they are incorrect:
Computer vision workload: Involves interpreting and analyzing visual data such as images and videos (e.g., object detection, facial recognition). It does not deal with speech or audio.
Knowledge mining workload: Refers to extracting useful information from large amounts of structured and unstructured data using services like Azure Cognitive Search, not transcribing audio.
Anomaly detection workload: Involves identifying unusual patterns in data (e.g., fraud detection or sensor anomalies), unrelated to language or speech.
In summary, when a system creates a text transcript from spoken audio, it is performing a speech recognition task—classified under Natural Language Processing (NLP). This workload helps make spoken content searchable, analyzable, and accessible, aligning with Microsoft’s Responsible AI goal of enhancing accessibility through language understanding.
In which two scenarios can you use speech recognition? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.
an in-car system that reads text messages aloud
providing closed captions for recorded or live videos
creating an automated public address system for a train station
creating a transcript of a telephone call or meeting
The correct answers are B and D.
Speech recognition, part of Azure’s Speech service, converts spoken audio into written text. It is a core feature of Azure Cognitive Services for speech-to-text scenarios.
Providing closed captions for recorded or live videos (B) – This is a typical application of speech recognition. The AI system listens to audio content from a video and generates real-time or post-event captions. Azure’s Speech-to-Text API is frequently used in broadcasting and video platforms to improve accessibility and searchability.
Creating a transcript of a telephone call or meeting (D) – Another common use case is automated transcription. The Speech service can process real-time audio streams (such as meetings or calls) and produce accurate text transcripts. This is widely used in customer service, call analytics, and meeting documentation.
The incorrect options are:
A. an in-car system that reads text messages aloud – This uses Text-to-Speech, not speech recognition.
C. creating an automated public address system for a train station – This also uses Text-to-Speech, since it generates spoken output from text.
Therefore, scenarios that convert spoken words into text correctly represent speech recognition, making B and D the right answers.
You have a dataset that contains information about taxi journeys that occurred during a given period.
You need to train a model to predict the fare of a taxi journey.
What should you use as a feature?
the number of taxi journeys in the dataset
the trip distance of individual taxi journeys
the fare of individual taxi journeys
the trip ID of individual taxi journeys
The label is the column you want to predict. The identified features are the inputs you give the model to predict the label.
Example:
The provided data set contains the following columns:
vendor_id: The ID of the taxi vendor is a feature.
rate_code: The rate type of the taxi trip is a feature.
passenger_count: The number of passengers on the trip is a feature.
trip_time_in_secs: The amount of time the trip took. You want to predict the fare of the trip before the trip is completed. At that moment, you don't know how long the trip would take. Thus, the trip time is not a feature and you'll exclude this column from the model.
trip_distance: The distance of the trip is a feature.
payment_type: The payment method (cash or credit card) is a feature.
fare_amount: The total taxi fare paid is the label.
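The column roles above can be expressed directly in code: keep the predictive columns as features, pull out `fare_amount` as the label, and drop the trip ID (no signal) and `trip_time_in_secs` (unknown at prediction time). The two records are illustrative values, not the real dataset.

```python
# Illustrative taxi records; in practice these would come from the dataset.
records = [
    {"trip_id": 1, "vendor_id": "A", "rate_code": 1, "passenger_count": 2,
     "trip_time_in_secs": 840, "trip_distance": 3.2, "payment_type": "card",
     "fare_amount": 14.5},
    {"trip_id": 2, "vendor_id": "B", "rate_code": 1, "passenger_count": 1,
     "trip_time_in_secs": 420, "trip_distance": 1.1, "payment_type": "cash",
     "fare_amount": 6.0},
]

LABEL = "fare_amount"
# Columns excluded from the features: the label itself, an ID with no
# predictive value, and a value not known before the trip completes.
EXCLUDED = {"trip_id", "trip_time_in_secs", LABEL}

features = [{k: v for k, v in r.items() if k not in EXCLUDED} for r in records]
labels = [r[LABEL] for r in records]
```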
Which term is used to describe uploading your own data to customize an Azure OpenAI model?
completion
grounding
fine-tuning
prompt engineering
In Azure OpenAI Service, fine-tuning refers to the process of uploading your own labeled dataset to customize or adapt a pretrained model (such as GPT-3.5 or Curie) for a specific use case. According to the Microsoft Learn documentation and AI-900 official study guide, fine-tuning allows organizations to improve a model’s performance on domain-specific tasks or to align responses with brand tone and context.
Fine-tuning differs from simple prompting because it requires providing structured training data (usually in JSONL format) that contains pairs of input prompts and ideal completions. The model uses this data to adjust its internal weights, thereby “learning” your organization’s language patterns, terminology, or industry context.
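The prompt/completion JSONL structure described above can be sketched as follows. The example records are invented, and the exact schema varies by model generation (newer chat models use a messages-based format), so treat this as an illustration of the file shape, not a definitive template.

```python
import json

# Two illustrative training examples in the prompt/completion style
examples = [
    {"prompt": "Summarize: Azure AI services overview",
     "completion": "A family of cloud APIs for vision, speech, and language."},
    {"prompt": "Summarize: responsible AI principles",
     "completion": "Fairness, reliability, privacy, inclusiveness, "
                   "transparency, accountability."},
]

# JSONL means one standalone JSON object per line
jsonl = "\n".join(json.dumps(e) for e in examples)

# Each line parses independently, which is what makes the format streamable
parsed = [json.loads(line) for line in jsonl.splitlines()]
```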
Option review:
A. Completion: Refers to the text generated by a model in response to a prompt. It’s the output, not the customization process.
B. Grounding: Integrates external, up-to-date data sources (like search results or databases) during inference but doesn’t alter the model’s parameters.
C. Fine-tuning: Correct — this is the process of uploading and training with your own data.
D. Prompt engineering: Involves designing effective prompts but does not change the underlying model.
Thus, fine-tuning is the term used for customizing an Azure OpenAI model using your own uploaded data.
Select the answer that correctly completes the sentence.


validation.
In the Microsoft Azure AI Fundamentals (AI-900) study materials, a key concept in machine learning model development is splitting data into subsets for training, validation, and testing. A randomly extracted subset of data from a dataset is most commonly used for validation — that is, for evaluating the performance of the model during or after training.
Here’s how this process works:
Training set – This portion of the dataset is used to train the machine learning model. The model learns patterns, relationships, and parameters from this data.
Validation set – This is a randomly selected subset (separate from training data) used to fine-tune model hyperparameters and evaluate how well the model generalizes to unseen data. It helps detect overfitting — when the model performs well on training data but poorly on new data.
Test set – A final, untouched dataset used to measure the model’s real-world performance after all training and tuning are complete.
By reserving a random subset for validation, data scientists ensure that the model’s performance metrics reflect generalization, not memorization of the training data.
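The three-way split described above can be sketched with a shuffle and two cut points. The 70/15/15 proportions and the seed are arbitrary choices for illustration:

```python
import random

def split(rows, train=0.7, validation=0.15, seed=42):
    """Randomly partition rows into train / validation / test subsets.
    Shuffling first ensures each subset is a random sample."""
    rows = rows[:]                       # copy so the caller's list is untouched
    random.Random(seed).shuffle(rows)
    n_train = int(len(rows) * train)
    n_val = int(len(rows) * validation)
    return (rows[:n_train],
            rows[n_train:n_train + n_val],
            rows[n_train + n_val:])

data = list(range(100))
train_set, val_set, test_set = split(data)
```

Keeping the test set untouched until the very end is what makes its metrics an honest estimate of real-world performance.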
Let’s review the incorrect options:
Algorithms – These are the mathematical frameworks or methods used to build models (e.g., decision trees, neural networks). They are not data subsets.
Features – These are input variables (attributes) used by the model, not randomly selected data subsets.
Labels – These are target values or outcomes the model predicts; again, not data subsets.
Therefore, in alignment with Azure AI-900’s machine learning fundamentals, the correct completion is:
“A randomly extracted subset of data from a dataset is commonly used for validation of the model.”
Match the machine learning tasks to the appropriate scenarios.
To answer, drag the appropriate task from the column on the left to its scenario on the right. Each task may be used once, more than once, or not at all.
NOTE: Each correct selection is worth one point.



This question tests your understanding of machine learning workflow tasks as described in the Microsoft Azure AI Fundamentals (AI-900) study guide and the Microsoft Learn module “Explore the machine learning process.” The AI-900 curriculum divides the machine learning lifecycle into key phases: data preparation, feature engineering and selection, model training, model evaluation, and model deployment. Each phase has specific tasks designed to prepare, build, and assess predictive models before deployment.
Examining the values of a confusion matrix → Model evaluation. In Azure Machine Learning, evaluating a model involves checking its performance using metrics such as accuracy, precision, recall, and F1-score. The confusion matrix is one of the most common tools for this purpose. According to Microsoft Learn, “model evaluation is the process of assessing a trained model’s performance against test data to ensure reliability before deployment.” Analyzing the confusion matrix helps determine whether predictions align with actual outcomes, making this task part of model evaluation.
Splitting a date into month, day, and year fields → Feature engineering. Feature engineering refers to transforming raw data into features that better represent the underlying patterns to improve model performance. The study guide describes it as “the process of creating new input features from existing data.” Splitting a date field into separate numeric fields (month, day, year) is a classic example of feature engineering because it enables the model to learn from temporal patterns that might otherwise remain hidden.
Picking temperature and pressure to train a weather model → Feature selection. Feature selection involves identifying the most relevant variables that have predictive power for the model. As defined in Microsoft Learn, “feature selection is the process of choosing the most useful subset of input features for training.” In this scenario, selecting temperature and pressure variables as inputs for a weather prediction model fits perfectly within the feature selection stage.
Therefore, the correct matches are:
Examining confusion matrix → Model evaluation
Splitting date field → Feature engineering
Picking temperature & pressure → Feature selection
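The evaluation metrics mentioned above all derive from the four cells of a binary confusion matrix. The counts below are illustrative, not from a real model:

```python
# Binary confusion matrix for an illustrative model:
#                 predicted Pass   predicted Fail
# actual Pass           tp=40           fn=10
# actual Fail           fp=5            tn=45
tp, fn, fp, tn = 40, 10, 5, 45

accuracy  = (tp + tn) / (tp + tn + fp + fn)   # share of all predictions correct
precision = tp / (tp + fp)                    # of predicted Pass, how many were right
recall    = tp / (tp + fn)                    # of actual Pass, how many were found
f1        = 2 * precision * recall / (precision + recall)  # harmonic mean
```

Reading the off-diagonal cells (fn and fp) is often more informative than accuracy alone, which is why the confusion matrix anchors the model-evaluation phase.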
Select the answer that correctly completes the sentence.



According to the Microsoft Azure AI Fundamentals (AI-900) official study guide and the Microsoft Learn module “Identify features of common machine learning types”, the classification technique is a type of supervised machine learning used to predict which category or class a new observation belongs to, based on patterns learned from labeled training data.
In this scenario, a banking system that predicts whether a loan will be repaid is dealing with a binary outcome—either the loan will be repaid or will not be repaid. These two possible results represent distinct classes, making this problem a classic example of binary classification. During training, the model learns from historical data containing features such as customer income, credit score, loan amount, and repayment history, along with labeled outcomes (repaid or defaulted). After training, it can classify new applications into one of these two categories.
The AI-900 curriculum distinguishes between three key machine learning approaches, spanning supervised and unsupervised learning:
Classification: Predicts discrete categories (e.g., spam/not spam, fraud/not fraud, will repay/won’t repay).
Regression: Predicts continuous numerical values (e.g., house prices, sales forecast, temperature).
Clustering: Groups data based on similarity without predefined labels (e.g., customer segmentation).
Since the banking problem focuses on predicting a categorical outcome rather than a continuous numeric value, it fits squarely into the classification domain. In Azure Machine Learning, such tasks can be performed using algorithms like Logistic Regression, Decision Trees, or Support Vector Machines (SVMs), all configured for categorical prediction.
Therefore, per Microsoft’s official AI-900 learning objectives, a banking system predicting whether a loan will be repaid represents a classification type of machine learning problem.
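To make the classification idea concrete, here is a toy logistic-style scorer that maps loan features to one of the two classes. The weights, bias, and inputs are made up purely for illustration; a real model would learn these values from the labeled historical data described above.

```python
import math

def sigmoid(z):
    """Squash a raw score into a probability between 0 and 1."""
    return 1 / (1 + math.exp(-z))

def predict_repayment(income_k, credit_score, loan_k, weights, bias):
    """Score an application and map it to one of two discrete classes.
    The weights are invented for illustration, not a trained model."""
    z = (weights[0] * income_k + weights[1] * credit_score
         + weights[2] * loan_k + bias)
    p = sigmoid(z)
    return ("will repay" if p >= 0.5 else "will not repay", p)

label, p = predict_repayment(income_k=65, credit_score=720, loan_k=20,
                             weights=(0.02, 0.01, -0.05), bias=-7.0)
```

The key point for the exam is the output type: a discrete class (repay / not repay) rather than a continuous number, which is what distinguishes classification from regression.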
You need to generate images based on user prompts. Which Azure OpenAI model should you use?
GPT-4
DALL-E
GPT-3
Whisper
According to the Microsoft Azure OpenAI Service documentation and AI-900 official study materials, the DALL-E model is specifically designed to generate and edit images from natural language prompts. When a user provides a descriptive text input such as “a futuristic city skyline at sunset”, DALL-E interprets the textual prompt and produces an image that visually represents the content described. This functionality is known as text-to-image generation and is one of the creative AI capabilities supported by Azure OpenAI.
DALL-E belongs to the family of generative models that can create new visual content, expand existing images, or apply transformations to images based on textual instructions. Within Azure OpenAI, the DALL-E API enables developers to integrate image creation directly into applications—useful for design assistance, marketing content generation, or visualization tools. The model learns from vast datasets of text–image pairs and is optimized to ensure alignment, diversity, and accuracy in the produced visuals.
By contrast, the other options serve different purposes:
A. GPT-4 is a large language model for text-based generation, reasoning, and conversation, not for creating images.
C. GPT-3 is an earlier text generation model, primarily used for language tasks like summarization, classification, and question answering.
D. Whisper is an automatic speech recognition (ASR) model used to convert spoken language into written text; it has no image-generation capability.
Therefore, when the requirement is to generate images based on user prompts, the only Azure OpenAI model that fulfills this purpose is DALL-E. This aligns directly with the AI-900 learning objective covering Azure OpenAI generative capabilities for text, code, and image creation.
Match the AI workload to the appropriate task.
To answer, drag the appropriate AI workload from the column on the left to its task on the right. Each workload may be used once, more than once, or not at all.
NOTE: Each correct match is worth one point.



This question tests your understanding of AI workloads as described in the Microsoft Azure AI Fundamentals (AI-900) study guide. Each Azure AI workload is designed to handle specific types of data and tasks: text, images, documents, or content generation.
Extract data from medical admission forms for import into a patient tracking database → Azure AI Document Intelligence. Formerly known as Form Recognizer, this service belongs to the Azure AI Document Intelligence workload. It extracts key-value pairs, tables, and textual information from structured and semi-structured documents such as forms, invoices, and admission sheets. For medical forms, Document Intelligence can identify fields like patient name, admission date, and diagnosis and export them into structured formats for database import.
Automatically create drafts for a monthly newsletter → Generative AI. This task involves creating original written content, which is a capability of Generative AI. Microsoft’s Azure OpenAI Service uses large language models (like GPT-4) to generate human-like text, summaries, or articles. Generative AI workloads are ideal for automating creative writing, drafting newsletters, producing blogs, or summarizing reports.
Analyze aerial photos to identify flooded areas → Computer Vision. Computer Vision workloads involve analyzing and interpreting visual data from images or videos. This includes detecting objects, classifying scenes, and identifying patterns such as flooded regions in aerial imagery. Azure’s Computer Vision or Custom Vision services can be trained to detect water coverage or terrain changes using image recognition techniques.
Thus, the correct matches are:
Azure AI Document Intelligence → Extract medical form data
Generative AI → Create newsletter drafts
Computer Vision → Identify flooded areas from aerial photos
Match the facial recognition tasks to the appropriate questions.
To answer, drag the appropriate task from the column on the left to its question on the right. Each task may be used once, more than once, or not at all.
NOTE: Each correct selection is worth one point.



The correct matches are based on the Microsoft Azure AI Fundamentals (AI-900) Official Study Guide and the Microsoft Learn module “Explore computer vision in Microsoft Azure.” These materials explain that facial recognition tasks can be categorized into four major operations: verification, identification, similarity, and grouping. Each task serves a distinct purpose in facial recognition scenarios.
Verification – “Do two images of a face belong to the same person?” The verification task determines whether two facial images represent the same individual. Azure Face API compares the facial features and returns a confidence score indicating the likelihood that the two faces belong to the same person.
Similarity – “Does this person look like other people?” The similarity task compares a face against a collection of faces to find visually similar individuals. It does not confirm identity but measures how closely two or more faces resemble each other.
Grouping – “Do all the faces belong together?” Grouping organizes a set of unknown faces into clusters based on similar facial features. This is used when identities are not known beforehand, helping discover potential duplicates or visually similar clusters within an image dataset.
Identification – “Who is this person in this group of people?” The identification task is used when the system tries to determine who a specific person is by comparing their face against a known collection (face database or gallery). It returns the identity that best matches the input face.
According to Microsoft’s AI-900 training, these tasks form the basis of Azure Face API’s capabilities. Each helps solve a different type of facial recognition problem—from matching pairs to discovering unknown identities—making them essential components of responsible AI-based vision systems.
You have an Azure Machine Learning model that predicts product quality. The model has a training dataset that contains 50,000 records. A sample of the data is shown in the following table.

For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.



This question tests the understanding of features and labels in machine learning, a core concept covered in the Microsoft Azure AI Fundamentals (AI-900) syllabus under “Describe fundamental principles of machine learning on Azure.”
In supervised machine learning, data is divided into features (inputs) and labels (outputs).
Features are the independent variables — measurable properties or characteristics used by the model to make predictions.
Labels are the dependent variables — the target outcome the model is trained to predict.
From the provided dataset, the goal of the Azure Machine Learning model is to predict product quality (Pass or Fail). Therefore:
Mass (kg) is a feature – Yes. “Mass (kg)” represents an input variable used by the model to learn patterns that influence product quality. It helps the algorithm understand how variations in mass might correlate with passing or failing the quality test. Thus, it is correctly classified as a feature.
Quality Test is a label – Yes. The “Quality Test” column indicates the outcome of the manufacturing process, marked as either Pass or Fail. This is the target the model tries to predict during training. In Azure ML terminology, this column is the label, as it represents the dependent variable.
Temperature (C) is a label – No. “Temperature (C)” is an input that helps the model determine quality outcomes, not the outcome itself. It influences the quality result but is not the value being predicted. Therefore, temperature is another feature, not a label.
In conclusion, per Microsoft Learn and AI-900 study materials, features are measurable inputs (like mass and temperature), while the label is the target output (like the quality test result).
Select the answer that correctly completes the sentence.


Text extraction.
According to the Microsoft Azure AI Fundamentals (AI-900) study guide and Microsoft Learn documentation for Azure AI Vision (formerly Computer Vision), text extraction—also known as Optical Character Recognition (OCR)—is the computer vision capability that detects and extracts printed or handwritten text from images and video frames.
In this scenario, a traffic monitoring system collects vehicle registration numbers (license plates) from CCTV footage. These registration numbers are alphanumeric text that must be read and converted into digital form for processing, storage, or analysis. The Azure AI Vision service’s OCR (text extraction) feature performs this function. It analyzes each frame from the video feed, detects text regions (the license plates), and converts the visual text into machine-readable text data.
This process is widely used in Automatic Number Plate Recognition (ANPR) systems that support law enforcement, toll booths, and parking management solutions. The OCR model can handle variations in font, lighting, and angle to accurately extract license plate numbers.
The other options describe different vision capabilities:
Image classification assigns an image to a general category (e.g., “car,” “truck,” or “bike”), not text extraction.
Object detection identifies and locates objects in images using bounding boxes (e.g., detecting the car itself), but not the text written on the car.
Spatial analysis tracks people or objects in a defined physical space (e.g., counting individuals entering a building), not reading text.
Therefore, for a traffic monitoring system that identifies vehicle registration numbers from CCTV footage, the most accurate Azure AI Vision capability is Text extraction (OCR).
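As a hedged illustration of the post-processing step an ANPR pipeline might apply to OCR results, the sketch below filters detected text lines with a regular expression. The plate pattern (a UK-style two-letter/two-digit/three-letter format) and the function name are assumptions for illustration, not part of the Azure API:

```python
import re

# Assumed plate format (UK-style) for illustration only; real ANPR systems
# would use region-specific patterns and confidence scores from the OCR API.
PLATE_RE = re.compile(r"^[A-Z]{2}\d{2}\s?[A-Z]{3}$")

def extract_plates(ocr_lines):
    """Return the OCR-detected lines that look like registration numbers."""
    plates = []
    for line in ocr_lines:
        text = line.strip().upper()
        if PLATE_RE.match(text):
            plates.append(text.replace(" ", ""))
    return plates

# Example OCR output for one CCTV frame (made-up values):
print(extract_plates(["AB12 CDE", "SPEED LIMIT 30", "xy34 zzz"]))
# → ['AB12CDE', 'XY34ZZZ']
```

This is only the filtering stage; the OCR call itself is handled by the Azure AI Vision service, which returns the raw text lines consumed here.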
You need to develop a web-based AI solution for a customer support system. Users must be able to interact with a web app that will guide them to the best resource or answer.
Which service should you integrate with the web app to meet the goal?
Azure AI Language Service
Face
Azure AI Translator
Azure AI Custom Vision
The correct choice is the Azure AI Language service, which now hosts the question-answering capability formerly offered as QnA Maker. QnA Maker is a cloud-based API service that lets you create a conversational question-and-answer layer over your existing data. Use it to build a knowledge base by extracting questions and answers from your semi-structured content, including FAQs, manuals, and documents, and it automatically answers users’ questions with the best matches from that knowledge base. The knowledge base also gets smarter over time as it continually learns from user behavior.
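A minimal sketch of the request a web app would send to a published knowledge base’s generateAnswer endpoint; the endpoint URL, knowledge-base ID, and key below are placeholders:

```python
import json

# Placeholder endpoint, knowledge-base ID, and EndpointKey; a real web app
# would take these from its QnA Maker deployment settings and POST `body`
# to `url` with the returned headers.
def build_generate_answer_request(endpoint, kb_id, endpoint_key, question, top=1):
    url = f"{endpoint}/qnamaker/knowledgebases/{kb_id}/generateAnswer"
    headers = {
        "Authorization": f"EndpointKey {endpoint_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"question": question, "top": top})
    return url, headers, body

url, headers, body = build_generate_answer_request(
    "https://contoso-qna.azurewebsites.net", "kb-123", "key-placeholder",
    "How do I reset my password?")
print(url)
```

The service responds with the best-scoring answers from the knowledge base, which the web app can render back to the user.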
You need to make the press releases of your company available in a range of languages.
Which service should you use?
Translator Text
Text Analytics
Speech
Language Understanding (LUIS)
The Translator Text service (part of Azure Cognitive Services) provides real-time text translation across multiple languages. According to Microsoft Learn’s AI-900 module on “Identify features of Natural Language Processing (NLP) workloads”, translation is one of the four main NLP tasks, alongside key phrase extraction, sentiment analysis, and language understanding.
In this scenario, the company wants to make press releases available in a range of languages, which requires converting text from one language to another while preserving meaning and tone. The Translator Text API supports more than 100 languages and can be integrated into web apps, chatbots, or content management systems for automatic multilingual publishing.
The other options perform different functions:
Text Analytics (B) extracts insights such as key phrases or sentiment but does not translate.
Speech (C) focuses on converting between speech and text, not text translation.
Language Understanding (LUIS) (D) identifies user intent but does not perform translation.
Therefore, to provide multilingual press releases, the appropriate service is A. Translator Text, which ensures accurate, fast, and scalable translation across global audiences.
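As a sketch, a multilingual press-release request to the Translator v3 REST API could be assembled like this; the subscription key and region are placeholders, and real code would POST `body` to `url` with these headers:

```python
import json

BASE_URL = "https://api.cognitive.microsofttranslator.com/translate"

# Placeholder key and region; real code sends this as an HTTP POST.
def build_translate_request(text, target_langs, key, region):
    params = "api-version=3.0&" + "&".join(f"to={lang}" for lang in target_langs)
    url = f"{BASE_URL}?{params}"
    headers = {
        "Ocp-Apim-Subscription-Key": key,
        "Ocp-Apim-Subscription-Region": region,
        "Content-Type": "application/json",
    }
    body = json.dumps([{"text": text}])
    return url, headers, body

url, headers, body = build_translate_request(
    "Contoso announces record results.", ["fr", "de", "ja"],
    "key-placeholder", "westeurope")
print(url)
```

One request can target several languages at once by repeating the `to` parameter, which suits the press-release scenario of publishing the same text in a range of languages.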
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.



This question assesses knowledge of the Azure Cognitive Services Speech and Text Analytics capabilities, as described in the Microsoft Azure AI Fundamentals (AI-900) official study guide and Microsoft Learn modules “Explore natural language processing” and “Explore speech capabilities.” These services are part of Azure Cognitive Services, which provide prebuilt AI capabilities for speech, language, and text understanding.
You can use the Speech service to transcribe a call to text – Yes. The Speech-to-Text feature in the Azure Speech service automatically converts spoken words into written text. Microsoft Learn explains: “The Speech-to-Text capability enables applications to transcribe spoken audio to text in real time or from recorded files.” This makes it ideal for call transcription, voice assistants, and meeting captioning.
You can use the Text Analytics service to extract key entities from a call transcript – Yes. Once a call has been transcribed into text, the Text Analytics service (part of Azure Cognitive Services for Language) can process that text to extract key entities, key phrases, and sentiment. For example, it can identify names, organizations, locations, and product mentions. Microsoft Learn notes: “Text Analytics can extract key phrases and named entities from text to derive insights and structure from unstructured data.”
You can use the Speech service to translate the audio of a call to a different language – Yes. The Azure Speech service also includes Speech Translation, which can translate spoken language in real time. It converts audio input from one language into translated text or speech output in another language. Microsoft Learn describes this as: “Speech Translation combines speech recognition and translation to translate spoken audio to another language.”
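The first two statements compose into a simple pipeline: transcribe the call audio with the Speech service, then pass the transcript to Text Analytics for entity extraction. The sketch below represents the two services as placeholder callables; the toy `fake_transcribe` and `fake_ner` functions are illustrations only, not real API calls:

```python
# Placeholder callables stand in for the Speech SDK and the Text Analytics
# client; real code would wire the actual Azure clients into this pipeline.
def process_call(audio, transcribe, extract_entities):
    transcript = transcribe(audio)           # Speech-to-Text
    entities = extract_entities(transcript)  # Text Analytics entity extraction
    return transcript, entities

# Toy stand-ins (assumptions for illustration, not real service behavior):
fake_transcribe = lambda audio: "Contoso called about invoice 42"
fake_ner = lambda text: [w for w in text.split() if w[0].isupper()]

print(process_call(b"...", fake_transcribe, fake_ner))
# → ('Contoso called about invoice 42', ['Contoso'])
```

The ordering is the point: entity extraction operates on text, so transcription must happen first, which is why both statements in the question are true together.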
3 Months Free Update
TESTED 14 Apr 2026