
DBS-C01 PDF

$44

$109.99

3 Months Free Update

  • Printable Format
  • Value for Money
  • 100% Pass Assurance
  • Verified Answers
  • Researched by Industry Experts
  • Based on Real Exam Scenarios
  • 100% Real Questions

DBS-C01 PDF + Testing Engine

$70.4

$175.99

3 Months Free Update

  • Exam Name: AWS Certified Database - Specialty
  • Last Update: Apr 17, 2024
  • Questions and Answers: 324
  • Free Real Questions Demo
  • Recommended by Industry Experts
  • Best Economical Package
  • Immediate Access

DBS-C01 Engine

$52.8

$131.99

3 Months Free Update

  • Best Testing Engine
  • One-Click Installation
  • Recommended by Teachers
  • Easy to use
  • 3 Modes of Learning
  • State-of-the-Art Technology
  • 100% Real Questions included

DBS-C01 Practice Exam Questions with Answers for the AWS Certified Database - Specialty Certification

Question # 6

A financial services company has an application deployed on AWS that uses an Amazon Aurora PostgreSQL DB cluster. A recent audit showed that no log files contained database administrator activity. A database specialist needs to recommend a solution to provide database access and activity logs. The solution should use the least amount of effort and have a minimal impact on performance.

Which solution should the database specialist recommend?

A.

Enable Aurora Database Activity Streams on the database in synchronous mode. Connect the Amazon Kinesis data stream to Kinesis Data Firehose. Set the Kinesis Data Firehose destination to an Amazon S3 bucket.

B.

Create an AWS CloudTrail trail in the Region where the database runs. Associate the database activity logs with the trail.

C.

Enable Aurora Database Activity Streams on the database in asynchronous mode. Connect the Amazon Kinesis data stream to Kinesis Data Firehose. Set the Firehose destination to an Amazon S3 bucket.

D.

Allow connections to the DB cluster through a bastion host only. Restrict database access to the bastion host and application servers. Push the bastion host logs to Amazon CloudWatch Logs using the CloudWatch Logs agent.
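
For context on options A and C, enabling a database activity stream is a single RDS API call. A minimal boto3 sketch, assuming a placeholder cluster ARN and KMS key:

```python
import boto3

rds = boto3.client("rds")

# Start an activity stream in asynchronous mode to minimize the
# performance impact. The cluster ARN and KMS key are placeholders.
response = rds.start_activity_stream(
    ResourceArn="arn:aws:rds:us-east-1:123456789012:cluster:my-aurora-cluster",
    Mode="async",  # "sync" favors log durability over performance
    KmsKeyId="alias/my-activity-stream-key",
    ApplyImmediately=True,
)

# Kinesis Data Firehose can then read this stream and deliver it to S3.
print(response["KinesisStreamName"])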

Question # 7

A Database Specialist is designing a disaster recovery strategy for a production Amazon DynamoDB table. The table uses provisioned read/write capacity mode, global secondary indexes, and time to live (TTL). The Database Specialist has restored the latest backup to a new table.

To prepare the new table with identical settings, which steps should be performed? (Choose two.)

A.

Re-create global secondary indexes in the new table

B.

Define IAM policies for access to the new table

C.

Define the TTL settings

D.

Encrypt the table from the AWS Management Console or use the update-table command

E.

Set the provisioned read and write capacity
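
Settings such as TTL and capacity configuration are not carried over by a DynamoDB restore and must be re-applied to the new table. A sketch of options C and E in boto3, with placeholder table and attribute names:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# TTL settings are not restored, so re-enable them.
# "expires_at" is a placeholder attribute name.
dynamodb.update_time_to_live(
    TableName="restored-table",
    TimeToLiveSpecification={"Enabled": True, "AttributeName": "expires_at"},
)

# Match the source table's provisioned read/write capacity.
dynamodb.update_table(
    TableName="restored-table",
    ProvisionedThroughput={"ReadCapacityUnits": 100, "WriteCapacityUnits": 100},
)
```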

Question # 8

A company needs a data warehouse solution that keeps data in a consistent, highly structured format. The company requires fast responses for end-user queries when looking at data from the current year, and users must have access to the full 15-year dataset, when needed. This solution also needs to handle a fluctuating number of incoming queries. Storage costs for the 100 TB of data must be kept low.

Which solution meets these requirements?

A.

Leverage an Amazon Redshift data warehouse solution using a dense storage instance type while keeping all the data on local Amazon Redshift storage. Provision enough instances to support high demand.

B.

Leverage an Amazon Redshift data warehouse solution using a dense storage instance to store the most recent data. Keep historical data on Amazon S3 and access it using the Amazon Redshift Spectrum layer. Provision enough instances to support high demand.

C.

Leverage an Amazon Redshift data warehouse solution using a dense storage instance to store the most recent data. Keep historical data on Amazon S3 and access it using the Amazon Redshift Spectrum layer. Enable Amazon Redshift Concurrency Scaling.

D.

Leverage an Amazon Redshift data warehouse solution using a dense storage instance to store the most recent data. Keep historical data on Amazon S3 and access it using the Amazon Redshift Spectrum layer. Leverage Amazon Redshift elastic resize.

Question # 9

A company requires near-real-time notifications when changes are made to Amazon RDS DB security groups.

Which solution will meet this requirement with the LEAST operational overhead?

A.

Configure an RDS event notification subscription for DB security group events.

B.

Create an AWS Lambda function that monitors DB security group changes. Create an Amazon Simple Notification Service (Amazon SNS) topic for notification.

C.

Turn on AWS CloudTrail. Configure notifications for the detection of changes to DB security groups.

D.

Configure an Amazon CloudWatch alarm for RDS metrics about changes to DB security groups.
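
Option A maps to a single CreateEventSubscription call. A minimal boto3 sketch, assuming an existing SNS topic (the ARN is a placeholder):

```python
import boto3

rds = boto3.client("rds")

# Subscribe an existing SNS topic to DB security group events.
rds.create_event_subscription(
    SubscriptionName="db-security-group-changes",
    SnsTopicArn="arn:aws:sns:us-east-1:123456789012:db-sg-alerts",
    SourceType="db-security-group",
    Enabled=True,
)
```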

Question # 10

A Database Specialist has migrated an on-premises Oracle database to Amazon Aurora PostgreSQL. The schema and the data have been migrated successfully. The on-premises database server was also being used to run database maintenance cron jobs written in Python to perform tasks including data purging and generating data exports. The logs for these jobs show that, most of the time, the jobs completed within 5 minutes, but a few jobs took up to 10 minutes to complete. These maintenance jobs need to be set up for Aurora PostgreSQL.

How can the Database Specialist schedule these jobs so the setup requires minimal maintenance and provides high availability?

A.

Create cron jobs on an Amazon EC2 instance to run the maintenance jobs following the required schedule.

B.

Connect to the Aurora host and create cron jobs to run the maintenance jobs following the required schedule.

C.

Create AWS Lambda functions to run the maintenance jobs and schedule them with Amazon CloudWatch Events.

D.

Create the maintenance job using the Amazon CloudWatch job scheduling plugin.
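
Option C can be wired up with two CloudWatch Events calls once the Lambda function exists. A sketch with illustrative names; the function also needs a resource-based permission for events.amazonaws.com, omitted here:

```python
import boto3

events = boto3.client("events")

# Run the maintenance Lambda function every night at 02:00 UTC.
events.put_rule(
    Name="nightly-data-purge",
    ScheduleExpression="cron(0 2 * * ? *)",
    State="ENABLED",
)
events.put_targets(
    Rule="nightly-data-purge",
    Targets=[{
        "Id": "purge-lambda",
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:data-purge",
    }],
)
```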

Question # 11

A Database Specialist modified an existing parameter group currently associated with a production Amazon RDS for SQL Server Multi-AZ DB instance. The change is associated with a static parameter type, which controls the number of user connections allowed on the most critical RDS SQL Server DB instance for the company. This change has been approved for a specific maintenance window to help minimize the impact on users.

How should the Database Specialist apply the parameter group change for the DB instance?

A.

Select the option to apply the change immediately

B.

Allow the preconfigured RDS maintenance window for the given DB instance to control when the change is applied

C.

Apply the change manually by rebooting the DB instance during the approved maintenance window

D.

Reboot the secondary Multi-AZ DB instance

Question # 12

A company is using an Amazon Aurora PostgreSQL DB cluster with an xlarge primary instance and two large Aurora Replicas for high availability and read-only workload scaling. A failover event occurs and application performance is poor for several minutes. During this time, application servers in all Availability Zones are healthy and responding normally.

What should the company do to eliminate this application performance issue?

A.

Configure both of the Aurora Replicas to the same instance class as the primary DB instance. Enable cache coherence on the DB cluster, set the primary DB instance failover priority to tier-0, and assign a failover priority of tier-1 to the replicas.

B.

Deploy an AWS Lambda function that calls the DescribeDBInstances action to establish which instance has failed, and then use the PromoteReadReplica operation to promote one Aurora Replica to be the primary DB instance. Configure an Amazon RDS event subscription to send a notification to an Amazon SNS topic to which the Lambda function is subscribed.

C.

Configure one Aurora Replica to have the same instance class as the primary DB instance. Implement Aurora PostgreSQL DB cluster cache management. Set the failover priority to tier-0 for the primary DB instance and one replica with the same instance class. Set the failover priority to tier-1 for the other replicas.

D.

Configure both Aurora Replicas to have the same instance class as the primary DB instance. Implement Aurora PostgreSQL DB cluster cache management. Set the failover priority to tier-0 for the primary DB instance and to tier-1 for the replicas.

Question # 13

A Database Specialist is performing a proof of concept with Amazon Aurora using a small instance to confirm a simple database behavior. When loading a large dataset and creating the index, the Database Specialist encounters the following error message from Aurora:

ERROR: could not write block 7507718 of temporary file: No space left on device

What is the cause of this error and what should the Database Specialist do to resolve this issue?

A.

The scaling of Aurora storage cannot catch up with the data loading. The Database Specialist needs to modify the workload to load the data slowly.

B.

The scaling of Aurora storage cannot catch up with the data loading. The Database Specialist needs to enable Aurora storage scaling.

C.

The local storage used to store temporary tables is full. The Database Specialist needs to scale up the instance.

D.

The local storage used to store temporary tables is full. The Database Specialist needs to enable local storage scaling.

Question # 14

A company uses Amazon Aurora for secure financial transactions. The data must always be encrypted at rest and in transit to meet compliance requirements.

Which combination of actions should a database specialist take to meet these requirements? (Choose two.)

A.

Create an Aurora Replica with encryption enabled using AWS Key Management Service (AWS KMS). Then promote the replica to master.

B.

Use SSL/TLS to secure the in-transit connection between the financial application and the Aurora DB cluster.

C.

Modify the existing Aurora DB cluster and enable encryption using an AWS Key Management Service (AWS KMS) encryption key. Apply the changes immediately.

D.

Take a snapshot of the Aurora DB cluster and encrypt the snapshot using an AWS Key Management Service (AWS KMS) encryption key. Restore the snapshot to a new DB cluster and update the financial application database endpoints.

E.

Use AWS Key Management Service (AWS KMS) to secure the in-transit connection between the financial application and the Aurora DB cluster.
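
For option B, enforcing TLS happens on the client side of the connection. A minimal psycopg2 sketch, assuming the RDS certificate bundle has been downloaded to a local path (all values are placeholders):

```python
import psycopg2

# Connect to the Aurora PostgreSQL cluster endpoint over TLS.
# sslmode="verify-full" both encrypts the connection and verifies the
# server certificate against the downloaded RDS certificate bundle.
conn = psycopg2.connect(
    host="my-cluster.cluster-abc123.us-east-1.rds.amazonaws.com",
    port=5432,
    dbname="finance",
    user="app_user",
    password="example-password",
    sslmode="verify-full",
    sslrootcert="/opt/certs/global-bundle.pem",
)
```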

Question # 15

A company hosts a 2 TB Oracle database in its on-premises data center. A database specialist is migrating the database from on premises to an Amazon Aurora PostgreSQL database on AWS.

The database specialist identifies a compatibility problem: Oracle stores metadata in its data dictionary in uppercase, but PostgreSQL stores the metadata in lowercase. The database specialist must resolve this problem to complete the migration.

What is the MOST operationally efficient solution that meets these requirements?

A.

Override the default uppercase format of Oracle schema by encasing object names in quotation marks during creation.

B.

Use AWS Database Migration Service (AWS DMS) mapping rules with rule-action as convert-lowercase.

C.

Use the AWS Schema Conversion Tool conversion agent to convert the metadata from uppercase to lowercase.

D.

Use an AWS Glue job that is attached to an AWS Database Migration Service (AWS DMS) replication task to convert the metadata from uppercase to lowercase.

Question # 16

A company is looking to migrate a 1 TB Oracle database from on-premises to an Amazon Aurora PostgreSQL DB cluster. The company’s Database Specialist discovered that the Oracle database is storing 100 GB of large binary objects (LOBs) across multiple tables. The Oracle database has a maximum LOB size of 500 MB with an average LOB size of 350 MB. The Database Specialist has chosen AWS DMS to migrate the data using the largest replication instance.

How should the Database Specialist optimize the database migration using AWS DMS?

A.

Create a single task using full LOB mode with a LOB chunk size of 500 MB to migrate the data and LOBs together

B.

Create two tasks: task1 with LOB tables using full LOB mode with a LOB chunk size of 500 MB and task2 without LOBs

C.

Create two tasks: task1 with LOB tables using limited LOB mode with a maximum LOB size of 500 MB and task 2 without LOBs

D.

Create a single task using limited LOB mode with a maximum LOB size of 500 MB to migrate data and LOBs together
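
Limited LOB mode, as used in options C and D, lives in the DMS task settings JSON. A hedged boto3 sketch with a placeholder task ARN; note that LobMaxSize is expressed in kilobytes:

```python
import json
import boto3

dms = boto3.client("dms")

# Task settings enabling limited LOB mode with a 500 MB cap.
task_settings = {
    "TargetMetadata": {
        "LimitedSizeLobMode": True,
        "LobMaxSize": 500 * 1024,  # 500 MB, in KB
    }
}

dms.modify_replication_task(
    ReplicationTaskArn="arn:aws:dms:us-east-1:123456789012:task:EXAMPLE",
    ReplicationTaskSettings=json.dumps(task_settings),
)
```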

Question # 17

A company migrated one of its business-critical database workloads to an Amazon Aurora Multi-AZ DB cluster. The company requires a very low RTO and needs to improve the application recovery time after database failovers.

Which approach meets these requirements?

A.

Set the max_connections parameter to 16,000 in the instance-level parameter group.

B.

Modify the client connection timeout to 300 seconds.

C.

Create an Amazon RDS Proxy database proxy and update client connections to point to the proxy endpoint.

D.

Enable the query cache at the instance level.

Question # 18

A business is operating an on-premises application that is divided into three tiers: web, application, and MySQL database. The database is predominantly accessed during business hours, with occasional bursts of activity throughout the day. As part of the company's shift to AWS, a database expert wants to increase the availability and minimize the cost of the MySQL database tier.

Which MySQL database choice satisfies these criteria?

A.

Amazon RDS for MySQL with Multi-AZ

B.

Amazon Aurora Serverless MySQL cluster

C.

Amazon Aurora MySQL cluster

D.

Amazon RDS for MySQL with read replica

Question # 19

A company uses an on-premises Microsoft SQL Server database to host relational and JSON data and to run daily ETL and advanced analytics. The company wants to migrate the database to the AWS Cloud. A database specialist must choose one or more AWS services to run the company's workloads.

Which solution will meet these requirements in the MOST operationally efficient manner?

A.

Use Amazon Redshift for relational data. Use Amazon DynamoDB for JSON data

B.

Use Amazon Redshift for relational data and JSON data.

C.

Use Amazon RDS for relational data. Use Amazon Neptune for JSON data

D.

Use Amazon Redshift for relational data. Use Amazon S3 for JSON data.

Question # 20

A company has a production Amazon Aurora DB cluster that serves both online transaction processing (OLTP) transactions and compute-intensive reports. The reports run for 10% of the total cluster uptime while the OLTP transactions run all the time. The company has benchmarked its workload and determined that a six-node Aurora DB cluster is appropriate for the peak workload.

The company is now looking at cutting costs for this DB cluster, but needs to have a sufficient number of nodes in the cluster to support the workload at different times. The workload has not changed since the previous benchmarking exercise.

How can a Database Specialist address these requirements with minimal user involvement?

A.

Split up the DB cluster into two different clusters: one for OLTP and the other for reporting. Monitor and set up replication between the two clusters to keep data consistent.

B.

Review and evaluate the peak combined workload. Ensure that utilization of the DB cluster node is at an acceptable level. Adjust the number of instances, if necessary.

C.

Use the stop cluster functionality to stop all the nodes of the DB cluster during times of minimal workload. The cluster can be restarted again depending on the workload at the time.

D.

Set up automatic scaling on the DB cluster. This will allow the number of reader nodes to adjust automatically to the reporting workload, when needed.

Question # 21

The website of a manufacturing firm makes use of an Amazon Aurora PostgreSQL database cluster.

Which settings will result in the LEAST amount of downtime for the application during failover? (Select three.)

A.

Use the provided read and write Aurora endpoints to establish a connection to the Aurora DB cluster.

B.

Create an Amazon CloudWatch alert triggering a restore in another Availability Zone when the primary Aurora DB cluster is unreachable.

C.

Edit and enable Aurora DB cluster cache management in parameter groups.

D.

Set TCP keepalive parameters to a high value.

E.

Set JDBC connection string timeout variables to a low value.

F.

Set Java DNS caching timeouts to a high value.

Question # 22

A software-as-a-service (SaaS) company is using an Amazon Aurora Serverless DB cluster for its production MySQL database. The DB cluster has general logs and slow query logs enabled. A database engineer must use the most operationally efficient solution with minimal resource utilization to retain the logs and facilitate interactive search and analysis.

Which solution meets these requirements?

A.

Use an AWS Lambda function to ship database logs to an Amazon S3 bucket. Use Amazon Athena and Amazon QuickSight to search and analyze the logs.

B.

Download the logs from the DB cluster and store them in Amazon S3 by using manual scripts. Use Amazon Athena and Amazon QuickSight to search and analyze the logs.

C.

Use an AWS Lambda function to ship database logs to an Amazon S3 bucket. Use Amazon Elasticsearch Service (Amazon ES) and Kibana to search and analyze the logs.

D.

Use Amazon CloudWatch Logs Insights to search and analyze the logs when the logs are automatically uploaded by the DB cluster.

Question # 23

A large ecommerce company uses Amazon DynamoDB to handle the transactions on its web portal. Traffic patterns throughout the year are usually stable; however, a large event is planned. The company knows that traffic will increase by up to 10 times the normal load over the 3-day event. When sale prices are published during the event, traffic will spike rapidly.

How should a Database Specialist ensure DynamoDB can handle the increased traffic?

A.

Ensure the table is always provisioned to meet peak needs

B.

Allow burst capacity to handle the additional load

C.

Set an AWS Application Auto Scaling policy for the table to handle the increase in traffic

D.

Preprovision additional capacity for the known peaks and then reduce the capacity after the event

Question # 24

A company uses an Amazon RDS for PostgreSQL DB instance for its customer relationship management (CRM) system. New compliance requirements specify that the database must be encrypted at rest.

Which action will meet these requirements?

A.

Create an encrypted copy of manual snapshot of the DB instance. Restore a new DB instance from the encrypted snapshot.

B.

Modify the DB instance and enable encryption.

C.

Restore a DB instance from the most recent automated snapshot and enable encryption.

D.

Create an encrypted read replica of the DB instance. Promote the read replica to a standalone instance.

Question # 25

A company uses an Amazon RDS for PostgreSQL database in the us-east-2 Region. The company wants to have a copy of the database available in the us-west-2 Region as part of a new disaster recovery strategy.

A database architect needs to create the new database. There can be little to no downtime to the source database. The database architect has decided to use AWS Database Migration Service (AWS DMS) to replicate the database across Regions. The database architect will use full load mode and then will switch to change data capture (CDC) mode.

Which parameters must the database architect configure to support CDC mode for the RDS for PostgreSQL database? (Choose three.)

A.

Set wal_level = logical.

B.

Set wal_level = replica.

C.

Set max_replication_slots to 1 or more, depending on the number of DMS tasks.

D.

Set max_replication_slots to 0 to support dynamic allocation of slots.

E.

Set wal_sender_timeout to 20,000 milliseconds.

F.

Set wal_sender_timeout to 5,000 milliseconds.

Question # 26

A company has a 20 TB production Amazon Aurora DB cluster. The company runs a large batch job overnight to load data into the Aurora DB cluster. To ensure the company’s development team has the most up-to-date data for testing, a copy of the DB cluster must be available in the shortest possible time after the batch job completes.

How should this be accomplished?

A.

Use the AWS CLI to schedule a manual snapshot of the DB cluster. Restore the snapshot to a new DB cluster using the AWS CLI.

B.

Create a dump file from the DB cluster. Load the dump file into a new DB cluster.

C.

Schedule a job to create a clone of the DB cluster at the end of the overnight batch process.

D.

Set up a new daily AWS DMS task that will use cloning and change data capture (CDC) on the DB cluster to copy the data to a new DB cluster. Set up a time for the AWS DMS stream to stop when the new cluster is current.
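
Option C relies on Aurora fast cloning, which is exposed through the point-in-time-restore API with a copy-on-write restore type. A boto3 sketch with illustrative identifiers:

```python
import boto3

rds = boto3.client("rds")

# Copy-on-write clone of the production cluster: storage is shared until
# pages diverge, so the clone is available quickly even at 20 TB.
rds.restore_db_cluster_to_point_in_time(
    DBClusterIdentifier="dev-clone",
    SourceDBClusterIdentifier="prod-cluster",
    RestoreType="copy-on-write",
    UseLatestRestorableTime=True,
)

# A DB instance must then be added to the new cluster before it can serve queries.
```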

Question # 27

A Database Specialist is setting up a new Amazon Aurora DB cluster with one primary instance and three Aurora Replicas for a highly intensive, business-critical application. The Aurora DB cluster has one medium-sized primary instance, one large-sized replica, and two medium-sized replicas. The Database Specialist did not assign a promotion tier to the replicas.

In the event of a primary failure, what will occur?

A.

Aurora will promote an Aurora Replica that is of the same size as the primary instance

B.

Aurora will promote an arbitrary Aurora Replica

C.

Aurora will promote the largest-sized Aurora Replica

D.

Aurora will not promote an Aurora Replica

Question # 28

A large company is using an Amazon RDS for Oracle Multi-AZ DB instance with a Java application. As part of its annual disaster recovery testing, the company would like to simulate an Availability Zone failure and record how the application reacts during the DB instance failover activity. The company does not want to make any code changes for this activity.

What should the company do to achieve this in the shortest amount of time?

A.

Use a blue-green deployment with a complete application-level failover test

B.

Use the RDS console to reboot the DB instance by choosing the option to reboot with failover

C.

Use RDS fault injection queries to simulate the primary node failure

D.

Add a rule to the NACL to deny all traffic on the subnets associated with a single Availability Zone
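
Option B is a single API call (or one console action). A boto3 sketch with a placeholder instance identifier:

```python
import boto3

rds = boto3.client("rds")

# Reboot the Multi-AZ instance and force a failover to the standby,
# simulating an Availability Zone failure for the DR test.
rds.reboot_db_instance(
    DBInstanceIdentifier="prod-oracle-db",
    ForceFailover=True,
)
```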

Question # 29

A company is using Amazon RDS for PostgreSQL. The Security team wants all database connection requests to be logged and retained for 180 days. The RDS for PostgreSQL DB instance is currently using the default parameter group. A Database Specialist has identified that setting the log_connections parameter to 1 will enable connections logging.

Which combination of steps should the Database Specialist take to meet the logging and retention requirements? (Choose two.)

A.

Update the log_connections parameter in the default parameter group

B.

Create a custom parameter group, update the log_connections parameter, and associate the parameter with the DB instance

C.

Enable publishing of database engine logs to Amazon CloudWatch Logs and set the event expiration to 180 days

D.

Enable publishing of database engine logs to an Amazon S3 bucket and set the lifecycle policy to 180 days

E.

Connect to the RDS PostgreSQL host and update the log_connections parameter in the postgresql.conf file
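
Options B and C translate to a handful of API calls. A hedged boto3 sketch with illustrative names; the parameter group family and log group name must match the engine version and instance actually in use:

```python
import boto3

rds = boto3.client("rds")
logs = boto3.client("logs")

# Create a custom parameter group and turn on connection logging.
rds.create_db_parameter_group(
    DBParameterGroupName="custom-postgres",
    DBParameterGroupFamily="postgres14",  # match the engine version in use
    Description="Connection logging enabled",
)
rds.modify_db_parameter_group(
    DBParameterGroupName="custom-postgres",
    Parameters=[{
        "ParameterName": "log_connections",
        "ParameterValue": "1",
        "ApplyMethod": "immediate",  # log_connections is a dynamic parameter
    }],
)

# Associate the custom group with the DB instance.
rds.modify_db_instance(
    DBInstanceIdentifier="prod-postgres",
    DBParameterGroupName="custom-postgres",
)

# Once the engine logs are published to CloudWatch Logs, keep them 180 days.
logs.put_retention_policy(
    logGroupName="/aws/rds/instance/prod-postgres/postgresql",
    retentionInDays=180,
)
```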

Question # 30

A company is planning to migrate a 40 TB Oracle database to an Amazon Aurora PostgreSQL DB cluster by using a single AWS Database Migration Service (AWS DMS) task within a single replication instance. During early testing, AWS DMS is not scaling to the company's needs. Full load and change data capture (CDC) are taking days to complete.

The source database server and the target DB cluster have enough network bandwidth and CPU bandwidth for the additional workload. The replication instance has enough resources to support the replication. A database specialist needs to improve database performance, reduce data migration time, and create multiple DMS tasks.

Which combination of changes will meet these requirements? (Choose two.)

A.

Increase the value of the ParallelLoadThreads parameter in the DMS task settings for the tables.

B.

Use a smaller set of tables with each DMS task. Set the MaxFullLoadSubTasks parameter to a higher value.

C.

Use a smaller set of tables with each DMS task. Set the MaxFullLoadSubTasks parameter to a lower value.

D.

Use parallel load with different data boundaries for larger tables.

E.

Run the DMS tasks on a larger instance class. Increase local storage on the instance.

Question # 31

A marketing company is developing an application to track responses to email message campaigns. The company needs a database storage solution that is optimized to work with highly connected data. The database needs to limit connections and programmatic access to the data by using IAM policies.

Which solution will meet these requirements?

A.

Amazon ElastiCache for Redis cluster

B.

Amazon Aurora MySQL DB cluster

C.

Amazon DynamoDB table

D.

Amazon Neptune DB cluster

Question # 32

A company is running a blogging platform. A security audit determines that the Amazon RDS DB instance that is used by the platform is not configured to encrypt the data at rest. The company must encrypt the DB instance within 30 days.

What should a database specialist do to meet this requirement with the LEAST amount of downtime?

A.

Create a read replica of the DB instance, and enable encryption. When the read replica is available, promote the read replica and update the endpoint that is used by the application. Delete the unencrypted DB instance.

B.

Take a snapshot of the DB instance. Make an encrypted copy of the snapshot. Restore the encrypted snapshot. When the new DB instance is available, update the endpoint that is used by the application. Delete the unencrypted DB instance.

C.

Create a new encrypted DB instance. Perform an initial data load, and set up logical replication between the two DB instances When the new DB instance is in sync with the source DB instance, update the endpoint that is used by the application. Delete the unencrypted DB instance.

D.

Convert the DB instance to an Amazon Aurora DB cluster, and enable encryption. When the DB cluster is available, update the endpoint that is used by the application to the cluster endpoint. Delete the unencrypted DB instance.

Question # 33

After restoring an Amazon RDS snapshot from 3 days ago, a company’s Development team cannot connect to the restored RDS DB instance. What is the likely cause of this problem?

A.

The restored DB instance does not have Enhanced Monitoring enabled

B.

The production DB instance is using a custom parameter group

C.

The restored DB instance is using the default security group

D.

The production DB instance is using a custom option group

Question # 34

A company with branch offices in Portland, New York, and Singapore has a three-tier web application that leverages a shared database. The database runs on Amazon RDS for MySQL and is hosted in the us-west-2 Region. The application has a distributed front end deployed in the us-west-2, ap-southeast-1, and us-east-2 Regions.

This front end is used as a dashboard for Sales Managers in each branch office to see current sales statistics. There are complaints that the dashboard performs more slowly in the Singapore location than it does in Portland or New York. A solution is needed to provide consistent performance for all users in each location.

Which set of actions will meet these requirements?

A.

Take a snapshot of the instance in the us-west-2 Region. Create a new instance from the snapshot in the ap-southeast-1 Region. Reconfigure the ap-southeast-1 front-end dashboard to access this instance.

B.

Create an RDS read replica in the ap-southeast-1 Region from the primary RDS DB instance in the us- west-2 Region. Reconfigure the ap-southeast-1 front-end dashboard to access this instance.

C.

Create a new RDS instance in the ap-southeast-1 Region. Use AWS DMS and change data capture (CDC) to update the new instance in the ap-southeast-1 Region. Reconfigure the ap-southeast-1 front-end dashboard to access this instance.

D.

Create an RDS read replica in the us-west-2 Region where the primary instance resides. Create a read replica in the ap-southeast-1 Region from the read replica located on the us-west-2 Region. Reconfigure the ap-southeast-1 front-end dashboard to access this instance.

Question # 35

A company developed an AWS CloudFormation template used to create all new Amazon DynamoDB tables in its AWS account. The template configures provisioned throughput capacity using hard-coded values. The company wants to change the template so that the tables it creates in the future have independently configurable read and write capacity units assigned.

Which solution will enable this change?

A.

Add values for the rcuCount and wcuCount parameters to the Mappings section of the template. Configure DynamoDB to provision throughput capacity using the stack’s mappings.

B.

Add values for two Number parameters, rcuCount and wcuCount, to the template. Replace the hard-coded values with calls to the Ref intrinsic function, referencing the new parameters.

C.

Add values for the rcuCount and wcuCount parameters as outputs of the template. Configure DynamoDB to provision throughput capacity using the stack outputs.

D.

Add values for the rcuCount and wcuCount parameters to the Mappings section of the template. Replace the hard-coded values with calls to the Ref intrinsic function, referencing the new parameters.
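
Option B in template form: two Number parameters consumed with the Ref intrinsic function. A sketch of a JSON-form template, built here as a Python dict, with illustrative resource and key names:

```python
import json
import boto3

# Two Number parameters replace the hard-coded throughput values.
template = {
    "Parameters": {
        "rcuCount": {"Type": "Number", "Default": 5},
        "wcuCount": {"Type": "Number", "Default": 5},
    },
    "Resources": {
        "MyTable": {
            "Type": "AWS::DynamoDB::Table",
            "Properties": {
                "AttributeDefinitions": [{"AttributeName": "pk", "AttributeType": "S"}],
                "KeySchema": [{"AttributeName": "pk", "KeyType": "HASH"}],
                "ProvisionedThroughput": {
                    "ReadCapacityUnits": {"Ref": "rcuCount"},
                    "WriteCapacityUnits": {"Ref": "wcuCount"},
                },
            },
        }
    },
}

# Each stack can now set its own read and write capacity independently.
boto3.client("cloudformation").create_stack(
    StackName="dynamodb-table",
    TemplateBody=json.dumps(template),
    Parameters=[
        {"ParameterKey": "rcuCount", "ParameterValue": "25"},
        {"ParameterKey": "wcuCount", "ParameterValue": "10"},
    ],
)
```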

Question # 36

A Database Specialist is constructing a new Amazon Neptune DB cluster and is trying to load data from Amazon S3 using the Neptune bulk loader API. The Database Specialist is confronted with the following error message:

"Unable to establish a connection to the S3 endpoint. The source URL is s3://mybucket/graphdata/ and the region code is us-east-1. Kindly confirm your S3 configuration."

Which of the following actions should the Database Specialist take to resolve the issue? (Select two.)

A.

Check that Amazon S3 has an IAM role granting read access to Neptune

B.

Check that an Amazon S3 VPC endpoint exists

C.

Check that a Neptune VPC endpoint exists

D.

Check that Amazon EC2 has an IAM role granting read access to Amazon S3

E.

Check that Neptune has an IAM role granting read access to Amazon S3

Question # 37

A company is developing a new web application. An AWS CloudFormation template was created as a part of the build process.

Recently, a change was made to an AWS::RDS::DBInstance resource in the template. The CharacterSetName property was changed to allow the application to process international text. A change set was generated using the new template, which indicated that the existing DB instance should be replaced during an upgrade.

What should a database specialist do to prevent data loss during the stack upgrade?

A.

Create a snapshot of the DB instance. Modify the template to add the DBSnapshotIdentifier property with the ID of the DB snapshot. Update the stack.

B.

Modify the stack policy using the aws cloudformation update-stack command and the set-stack-policy command, then make the DB resource protected.

C.

Create a snapshot of the DB instance. Update the stack. Restore the database to a new instance.

D.

Deactivate any applications that are using the DB instance. Create a snapshot of the DB instance. Modify the template to add the DBSnapshotIdentifier property with the ID of the DB snapshot. Update the stack and reactivate the applications.

Question # 38

A company recently migrated its line-of-business (LOB) application to AWS. The application uses an Amazon RDS for SQL Server DB instance as its database engine.

The company must set up cross-Region disaster recovery for the application. The company needs a solution with the lowest possible RPO and RTO.

Which solution will meet these requirements?

A.

Create a cross-Region read replica of the DB instance. Promote the read replica at the time of failover.

B.

Set up SQL replication from the DB instance to an Amazon EC2 instance in the disaster recovery Region. Promote the EC2 instance as the primary server.

C.

Use AWS Database Migration Service (AWS DMS) for ongoing replication of the DB instance in the disaster recovery Region.

D.

Take manual snapshots of the DB instance in the primary Region. Copy the snapshots to the disaster recovery Region.

Question # 39

A company performs an audit on various data stores and discovers that an Amazon S3 bucket is storing a credit card number. The S3 bucket is the target of an AWS Database Migration Service (AWS DMS) continuous replication task that uses change data capture (CDC). The company determines that this field is not needed by anyone who uses the target data. The company has manually removed the existing credit card data from the S3 bucket.

What is the MOST operationally efficient way to prevent new credit card data from being written to the S3 bucket?

A.

Add a transformation rule to the DMS task to ignore the column from the source data endpoint.

B.

Add a transformation rule to the DMS task to mask the column by using a simple SQL query.

C.

Configure the target S3 bucket to use server-side encryption with AWS KMS keys (SSE-KMS).

D.

Remove the credit card number column from the data source so that the DMS task does not need to be altered.

Question # 40

An online gaming company is using an Amazon DynamoDB table in on-demand mode to store game scores. After an intensive advertisement campaign in South America, the average number of concurrent users rapidly increases from 100,000 to 500,000 in less than 10 minutes every day around 5 PM.

The on-call software reliability engineer has observed that the application logs contain a high number of DynamoDB throttling exceptions caused by game score insertions around 5 PM. Customer service has also reported that several users are complaining about their scores not being registered.

How should the database administrator remediate this issue at the lowest cost?

A.

Enable auto scaling and set the target usage rate to 90%.

B.

Switch the table to provisioned mode and enable auto scaling.

C.

Switch the table to provisioned mode and set the throughput to the peak value.

D.

Create a DynamoDB Accelerator cluster and use it to access the DynamoDB table.

Question # 41

An electric utility company wants to store power plant sensor data in an Amazon DynamoDB table. The utility company has over 100 power plants and each power plant has over 200 sensors that send data every 2 seconds. The sensor data includes a timestamp with millisecond precision, a value, and a fault attribute if the sensor is malfunctioning. Power plants are identified by a globally unique identifier. Sensors are identified by a unique identifier within each power plant. A database specialist needs to design the table to support an efficient method of finding all faulty sensors within a given power plant.

Which schema should the database specialist use when creating the DynamoDB table to achieve the fastest query time when looking for faulty sensors?

A.

Use the plant identifier as the partition key and the measurement time as the sort key. Create a global secondary index (GSI) with the plant identifier as the partition key and the fault attribute as the sort key.

B.

Create a composite of the plant identifier and sensor identifier as the partition key. Use the measurement time as the sort key. Create a local secondary index (LSI) on the fault attribute.

C.

Create a composite of the plant identifier and sensor identifier as the partition key. Use the measurement time as the sort key. Create a global secondary index (GSI) with the plant identifier as the partition key and the fault attribute as the sort key.

D.

Use the plant identifier as the partition key and the sensor identifier as the sort key. Create a local secondary index (LSI) on the fault attribute.
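
Option C as a table definition. A boto3 sketch with illustrative attribute names; because the fault attribute is only present on malfunctioning readings, the GSI is sparse, so a single Query on it returns exactly the faulty sensors for a plant:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Base table: composite "plant#sensor" partition key with measurement
# time as the sort key; GSI keyed on plant and the fault attribute.
dynamodb.create_table(
    TableName="sensor-data",
    AttributeDefinitions=[
        {"AttributeName": "plant_sensor_id", "AttributeType": "S"},
        {"AttributeName": "measured_at", "AttributeType": "S"},
        {"AttributeName": "plant_id", "AttributeType": "S"},
        {"AttributeName": "fault", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "plant_sensor_id", "KeyType": "HASH"},
        {"AttributeName": "measured_at", "KeyType": "RANGE"},
    ],
    GlobalSecondaryIndexes=[{
        "IndexName": "faulty-sensors",
        "KeySchema": [
            {"AttributeName": "plant_id", "KeyType": "HASH"},
            {"AttributeName": "fault", "KeyType": "RANGE"},
        ],
        "Projection": {"ProjectionType": "KEYS_ONLY"},
    }],
    BillingMode="PAY_PER_REQUEST",
)
```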

Question # 42

A social media company is using Amazon DynamoDB to store user profile data and user activity data. Developers are reading and writing the data, causing the size of the tables to grow significantly. Developers have started to face performance bottlenecks with the tables.

Which solution should a database specialist recommend to read items the FASTEST without consuming all the provisioned throughput for the tables?

A.

Use the Scan API operation in parallel with many workers to read all the items. Use the Query API operation to read multiple items that have a specific partition key and sort key. Use the GetItem API operation to read a single item.

B.

Use the Scan API operation with a filter expression that allows multiple items to be read. Use the Query API operation to read multiple items that have a specific partition key and sort key. Use the GetItem API operation to read a single item.

C.

Use the Scan API operation with a filter expression that allows multiple items to be read. Use the Query API operation to read a single item that has a specific primary key. Use the BatchGetItem API operation to read multiple items.

D.

Use the Scan API operation in parallel with many workers to read all the items. Use the Query API operation to read a single item that has a specific primary key Use the BatchGetItem API operation to read multiple items.

Question # 43

A corporation wishes to move a 1 TB Oracle database from its current location to an Amazon Aurora PostgreSQL DB cluster. The firm's database specialist noticed that the Oracle database stores 100 GB of large binary objects (LOBs) across many tables, with LOBs up to 500 MB in size and an average of 350 MB. The database specialist picked AWS DMS to transfer the data using the largest replication instance.

How should the database specialist optimize the database migration with AWS DMS?

A.

Create a single task using full LOB mode with a LOB chunk size of 500 MB to migrate the data and LOBs together

B.

Create two tasks: task1 with LOB tables using full LOB mode with a LOB chunk size of 500 MB and task2 without LOBs

C.

Create two tasks: task1 with LOB tables using limited LOB mode with a maximum LOB size of 500 MB and task 2 without LOBs

D.

Create a single task using limited LOB mode with a maximum LOB size of 500 MB to migrate data and LOBs together

Question # 44

A company plans to use AWS Database Migration Service (AWS DMS) to migrate its database from one Amazon EC2 instance to another EC2 instance as a full load task. The company wants the database to be inactive during the migration. The company will use a dms.t3.medium instance to perform the migration and will use the default settings for the migration.

Which solution will MOST improve the performance of the data migration?

A.

Increase the number of tables that are loaded in parallel.

B.

Drop all indexes on the source tables.

C.

Change the processing mode from the batch optimized apply option to transactional mode.

D.

Enable Multi-AZ on the target database while the full load task is in progress.

Question # 45

An online retailer uses Amazon DynamoDB for its product catalog and order data. Some popular items have led to frequently accessed keys in the data, and the company is using DynamoDB Accelerator (DAX) as the caching solution to cater to the frequently accessed keys. As the number of popular products is growing, the company realizes that more items need to be cached. The company observes a high cache miss rate and needs a solution to address this issue.

What should a database specialist do to accommodate the changing requirements for DAX?

A.

Increase the number of nodes in the existing DAX cluster.

B.

Create a new DAX cluster with more nodes. Change the DAX endpoint in the application to point to the new cluster.

C.

Create a new DAX cluster using a larger node type. Change the DAX endpoint in the application to point to the new cluster.

D.

Modify the node type in the existing DAX cluster.

Question # 46

An application reads and writes data to an Amazon RDS for MySQL DB instance. A new reporting dashboard needs read-only access to the database. When the application and reports are both under heavy load, the database experiences performance degradation. A database specialist needs to improve the database performance.

What should the database specialist do to meet these requirements?

A.

Create a read replica of the DB instance. Configure the reports to connect to the replication instance endpoint.

B.

Create a read replica of the DB instance. Configure the application and reports to connect to the cluster endpoint.

C.

Enable Multi-AZ deployment. Configure the reports to connect to the standby replica.

D.

Enable Multi-AZ deployment. Configure the application and reports to connect to the cluster endpoint.

Question # 47

A company has deployed an e-commerce web application in a new AWS account. An Amazon RDS for MySQL Multi-AZ DB instance is part of this deployment with a database-1.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com endpoint listening on port 3306. The company’s Database Specialist is able to log in to MySQL and run queries from the bastion host using these details.

When users try to utilize the application hosted in the AWS account, they are presented with a generic error message. The application servers are logging a “could not connect to server: Connection times out” error message to Amazon CloudWatch Logs.

What is the cause of this error?

A.

The user name and password the application is using are incorrect.

B.

The security group assigned to the application servers does not have the necessary rules to allow inbound connections from the DB instance.

C.

The security group assigned to the DB instance does not have the necessary rules to allow inbound connections from the application servers.

D.

The user name and password are correct, but the user is not authorized to use the DB instance.

Question # 48

A company is concerned about the cost of a large-scale, transactional application using Amazon DynamoDB that only needs to store data for 2 days before it is deleted. In looking at the tables, a Database Specialist notices that much of the data is months old, and goes back to when the application was first deployed.

What can the Database Specialist do to reduce the overall cost?

A.

Create a new attribute in each table to track the expiration time and create an AWS Glue transformation to delete entries more than 2 days old.

B.

Create a new attribute in each table to track the expiration time and enable DynamoDB Streams on each table.

C.

Create a new attribute in each table to track the expiration time and enable time to live (TTL) on each table.

D.

Create an Amazon CloudWatch Events event to export the data to Amazon S3 daily using AWS Data Pipeline and then truncate the Amazon DynamoDB table.
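
With TTL enabled on the table (option C), each item carries its own expiry timestamp in epoch seconds and DynamoDB deletes expired items at no extra cost. A sketch with placeholder table and attribute names, reusing the update_time_to_live call shown under Question # 7:

```python
import time
import boto3

dynamodb = boto3.client("dynamodb")

# Writers stamp each item with an expiry 2 days in the future; DynamoDB
# removes the item shortly after "expires_at" passes.
dynamodb.put_item(
    TableName="transactions",
    Item={
        "transaction_id": {"S": "txn-0001"},
        "amount": {"N": "42.50"},
        "expires_at": {"N": str(int(time.time()) + 2 * 24 * 60 * 60)},
    },
)
```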

Question # 49

A global digital advertising company captures browsing metadata to contextually display relevant images, pages, and links to targeted users. A single page load can generate multiple events that need to be stored individually. The maximum size of an event is 200 KB and the average size is 10 KB. Each page load must query the user’s browsing history to provide targeting recommendations. The advertising company expects over 1 billion page visits per day from users in the United States, Europe, Hong Kong, and India. The structure of the metadata varies depending on the event. Additionally, the browsing metadata must be written and read with very low latency to ensure a good viewing experience for the users.

Which database solution meets these requirements?

A.

Amazon DocumentDB

B.

Amazon RDS Multi-AZ deployment

C.

Amazon DynamoDB global table

D.

Amazon Aurora Global Database

Question # 50

A company is using a Single-AZ Amazon RDS for MySQL DB instance for development. The DB instance is experiencing slow performance when queries are executed. Amazon CloudWatch metrics indicate that the instance requires more I/O capacity.

Which actions can a database specialist perform to resolve this issue? (Choose two.)

A.

Restart the application tool used to execute queries.

B.

Change to a database instance class with higher throughput.

C.

Convert from Single-AZ to Multi-AZ.

D.

Increase the I/O parameter in Amazon RDS Enhanced Monitoring.

E.

Convert from General Purpose to Provisioned IOPS (PIOPS).

Question # 51

An AWS CloudFormation stack that included an Amazon RDS DB instance was accidentally deleted and recent data was lost. A Database Specialist needs to add RDS settings to the CloudFormation template to reduce the chance of accidental instance data loss in the future.

Which settings will meet this requirement? (Choose three.)

A.

Set DeletionProtection to True

B.

Set MultiAZ to True

C.

Set TerminationProtection to True

D.

Set DeleteAutomatedBackups to False

E.

Set DeletionPolicy to Delete

F.

Set DeletionPolicy to Retain

Question # 52

A business needs a data warehouse system that stores data consistently and in a highly organized fashion. The organization demands rapid response times for end-user inquiries involving current-year data, and users must have access to the whole 15-year dataset when necessary. Additionally, this solution must be able to manage a variable volume of incoming inquiries. Costs associated with storing the 100 TB of data must be kept to a minimum.

Which solution satisfies these criteria?

A.

Leverage an Amazon Redshift data warehouse solution using a dense storage instance type while keeping all the data on local Amazon Redshift storage. Provision enough instances to support high demand.

B.

Leverage an Amazon Redshift data warehouse solution using a dense storage instance to store the most recent data. Keep historical data on Amazon S3 and access it using the Amazon Redshift Spectrum layer. Provision enough instances to support high demand.

C.

Leverage an Amazon Redshift data warehouse solution using a dense storage instance to store the most recent data. Keep historical data on Amazon S3 and access it using the Amazon Redshift Spectrum layer. Enable Amazon Redshift Concurrency Scaling.

D.

Leverage an Amazon Redshift data warehouse solution using a dense storage instance to store the most recent data. Keep historical data on Amazon S3 and access it using the Amazon Redshift Spectrum layer. Leverage Amazon Redshift elastic resize.

Question # 53

A Database Specialist is creating a new Amazon Neptune DB cluster, and is attempting to load data from Amazon S3 into the Neptune DB cluster using the Neptune bulk loader API. The Database Specialist receives the following error:

“Unable to connect to s3 endpoint. Provided source = s3://mybucket/graphdata/ and region = us-east-1. Please verify your S3 configuration.”

Which combination of actions should the Database Specialist take to troubleshoot the problem? (Choose two.)

A.

Check that Amazon S3 has an IAM role granting read access to Neptune

B.

Check that an Amazon S3 VPC endpoint exists

C.

Check that a Neptune VPC endpoint exists

D.

Check that Amazon EC2 has an IAM role granting read access to Amazon S3

E.

Check that Neptune has an IAM role granting read access to Amazon S3

Question # 54

An ecommerce company is running AWS Database Migration Service (AWS DMS) to replicate an on-premises Microsoft SQL Server database to Amazon RDS for SQL Server. The company has set up an AWS Direct Connect connection from its on-premises data center to AWS. During the migration, the company's security team receives an alarm that is related to the migration. The security team mandates that the DMS replication instance must not be accessible from public IP addresses.

What should a database specialist do to meet this requirement?

A.

Set up a VPN connection to encrypt the traffic over the Direct Connect connection.

B.

Modify the DMS replication instance by disabling the publicly accessible option.

C.

Delete the DMS replication instance. Recreate the DMS replication instance with the publicly accessible option disabled.

D.

Create a new replication VPC subnet group with private subnets. Modify the DMS replication instance by selecting the newly created VPC subnet group.

Question # 55

A company is running a mobile app that has a backend database in Amazon DynamoDB. The app experiences sudden increases and decreases in activity throughout the day. The company's operations team notices that DynamoDB read and write requests are being throttled at different times, resulting in a negative customer experience.

Which solution will solve the throttling issue without requiring changes to the app?

A.

Add a DynamoDB table in a secondary AWS Region. Populate the additional table by using DynamoDB Streams.

B.

Deploy an Amazon ElastiCache cluster in front of the DynamoDB table.

C.

Use on-demand capacity mode for the DynamoDB table.

D.

Use DynamoDB Accelerator (DAX).

Question # 56

A financial services organization employs an Amazon Aurora PostgreSQL DB cluster to host an application on AWS. No log files detailing database administrator activity were discovered during a recent examination. A database professional must suggest a solution that enables access to the database and maintains activity logs. The solution should be simple to implement and have a negligible effect on performance.

Which solution should the database specialist recommend?

A.

Enable Aurora Database Activity Streams on the database in synchronous mode. Connect the Amazon Kinesis data stream to Kinesis Data Firehose. Set the Kinesis Data Firehose destination to an Amazon S3 bucket.

B.

Create an AWS CloudTrail trail in the Region where the database runs. Associate the database activity logs with the trail.

C.

Enable Aurora Database Activity Streams on the database in asynchronous mode. Connect the Amazon Kinesis data stream to Kinesis Data Firehose. Set the Firehose destination to an Amazon S3 bucket.

D.

Allow connections to the DB cluster through a bastion host only. Restrict database access to the bastion host and application servers. Push the bastion host logs to Amazon CloudWatch Logs using the CloudWatch Logs agent.

Question # 57

A company with 500,000 employees needs to supply its employee list to an application used by human resources. Every 30 minutes, the data is exported using the LDAP service to load into a new Amazon DynamoDB table. The data model has a base table with Employee ID for the partition key and a global secondary index with Organization ID as the partition key.

While importing the data, a database specialist receives ProvisionedThroughputExceededException errors. After increasing the provisioned write capacity units (WCUs) to 50,000, the specialist receives the same errors. Amazon CloudWatch metrics show a consumption of 1,500 WCUs.

What should the database specialist do to address the issue?

A.

Change the data model to avoid hot partitions in the global secondary index.

B.

Enable auto scaling for the table to automatically increase write capacity during bulk imports.

C.

Modify the table to use on-demand capacity instead of provisioned capacity.

D.

Increase the number of retries on the bulk loading application.
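
One common way to implement option A is write sharding: suffixing the Organization ID so that one organization's writes spread across several GSI partitions instead of one hot partition. This sketch is illustrative and not part of the exam material:

```python
import random

NUM_SHARDS = 10  # tune to the per-organization write rate

def sharded_org_key(org_id: str) -> str:
    """Spread one organization's GSI writes across NUM_SHARDS partitions."""
    return f"{org_id}#{random.randint(0, NUM_SHARDS - 1)}"

# A reader then fans out one Query per shard and merges the results:
#   for shard in range(NUM_SHARDS):
#       query(gsi_partition_key=f"{org_id}#{shard}")
```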

Question # 58

A Database Specialist needs to speed up any failover that might occur on an Amazon Aurora PostgreSQL DB cluster. The Aurora DB cluster currently includes the primary instance and three Aurora Replicas.

How can the Database Specialist ensure that failovers occur with the least amount of downtime for the application?

A.

Set the TCP keepalive parameters low

B.

Call the AWS CLI failover-db-cluster command

C.

Enable Enhanced Monitoring on the DB cluster

D.

Start a database activity stream on the DB cluster

Question # 59

A company is running Amazon RDS for MySQL for its workloads. There is downtime when AWS operating system patches are applied during the Amazon RDS-specified maintenance window.

What is the MOST cost-effective action that should be taken to avoid downtime?

A.

Migrate the workloads from Amazon RDS for MySQL to Amazon DynamoDB

B.

Enable cross-Region read replicas and direct read traffic to them when Amazon RDS is down

C.

Enable a read replica and direct read traffic to it when Amazon RDS is down

D.

Enable an Amazon RDS for MySQL Multi-AZ configuration

Question # 60

A financial company is running an Amazon Redshift cluster for one of its data warehouse solutions. The company needs to generate connection logs, user logs, and user activity logs. The company also must make these logs available for future analysis.

Which combination of steps should a database specialist take to meet these requirements? (Choose two.)

A.

Edit the database configuration of the cluster by enabling audit logging. Direct the logging to a specified log group in Amazon CloudWatch Logs.

B.

Edit the database configuration of the cluster by enabling audit logging. Direct the logging to a specified Amazon S3 bucket

C.

Modify the cluster by enabling continuous delivery of AWS CloudTrail logs to Amazon S3.

D.

Create a new parameter group with the enable_user_activity_logging parameter set to true. Configure the cluster to use the new parameter group.

E.

Modify the system table to enable logging for each user.
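
Options B and D map to two Redshift API calls. A boto3 sketch with placeholder cluster, bucket, and parameter group names:

```python
import boto3

redshift = boto3.client("redshift")

# Deliver connection, user, and user activity logs to S3 for later analysis.
redshift.enable_logging(
    ClusterIdentifier="finance-dw",
    BucketName="finance-dw-audit-logs",
    S3KeyPrefix="audit/",
)

# User activity logging also requires a parameter group setting; the
# cluster must be configured to use this parameter group.
redshift.modify_cluster_parameter_group(
    ParameterGroupName="finance-dw-params",
    Parameters=[{
        "ParameterName": "enable_user_activity_logging",
        "ParameterValue": "true",
    }],
)
```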

Question # 61

A company uses a large, growing, high-performance on-premises Microsoft SQL Server instance with an Always On availability group cluster size of 120 TB. The company uses a third-party backup product that requires system-level access to the databases. The company will continue to use this third-party backup product in the future.

The company wants to move the DB cluster to AWS with the least possible downtime and data loss. The company needs a 2 Gbps connection to sustain Always On asynchronous data replication between the company's data center and AWS.

Which combination of actions should a database specialist take to meet these requirements? (Select THREE.)

A.

Establish an AWS Direct Connect hosted connection between the company's data center and AWS

B.

Create an AWS Site-to-Site VPN connection between the company's data center and AWS over the internet

C.

Use AWS Database Migration Service (AWS DMS) to migrate the on-premises SQL Server databases to Amazon RDS for SQL Server. Configure Always On availability groups for SQL Server.

D.

Deploy a new SQL Server Always On availability group DB cluster on Amazon EC2. Configure Always On distributed availability groups between the on-premises DB cluster and the AWS DB cluster. Fail over to the AWS DB cluster when it is time to migrate.

E.

Grant system-level access to the third-party backup product to perform backups of the Amazon RDS for SQL Server DB instance.

F.

Configure the third-party backup product to perform backups of the DB cluster on Amazon EC2.

Question # 62

A company is building a software as a service application. As part of the new user sign-on workflow, a Python script invokes the CreateTable operation using the Amazon DynamoDB API. After the call returns, the script attempts to call PutItem.

Occasionally, the PutItem request fails with a ResourceNotFoundException error, which causes the workflow to fail. The development team has confirmed that the same table name is used in the two API calls.

How should a database specialist fix this issue?

A.

Add an allow statement for the dynamodb:PutItem action in a policy attached to the role used by the application creating the table.

B.

Set the StreamEnabled property of the StreamSpecification parameter to true, then call PutItem.

C.

Change the application to call DescribeTable periodically until the TableStatus is ACTIVE, then call PutItem.

D.

Add a ConditionExpression parameter in the PutItem request.
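
Option C does not need a hand-rolled polling loop: boto3 ships a table_exists waiter that calls DescribeTable until the TableStatus is ACTIVE. A minimal sketch with a placeholder table name:

```python
import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.create_table(
    TableName="users",
    AttributeDefinitions=[{"AttributeName": "user_id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "user_id", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
)

# Poll DescribeTable until the table is ACTIVE, avoiding the
# ResourceNotFoundException raised by an early PutItem.
dynamodb.get_waiter("table_exists").wait(TableName="users")

dynamodb.put_item(TableName="users", Item={"user_id": {"S": "u-1"}})
```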

Question # 63

A gaming company has implemented a leaderboard in AWS using a Sorted Set data structure within Amazon ElastiCache for Redis. The ElastiCache cluster has been deployed with cluster mode disabled and has a replication group deployed with two additional replicas. The company is planning for a worldwide gaming event and is anticipating a higher write load than what the current cluster can handle.

Which method should a Database Specialist use to scale the ElastiCache cluster ahead of the upcoming event?

A.

Enable cluster mode on the existing ElastiCache cluster and configure separate shards for the Sorted Set across all nodes in the cluster.

B.

Increase the size of the ElastiCache cluster nodes to a larger instance size.

C.

Create an additional ElastiCache cluster and load-balance traffic between the two clusters.

D.

Use the EXPIRE command and set a higher time to live (TTL) after each call to increment a given key.

Full Access
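As background on the data structure involved, a short redis-py sketch of a Sorted Set leaderboard is below. The endpoint and key names are hypothetical. Note that each key hashes to a single shard, so spreading write load across a cluster-mode-enabled deployment (option A) implies splitting the leaderboard across multiple keys.

```python
import redis  # redis-py; the endpoint and key names are hypothetical

# With cluster mode disabled, a single primary absorbs every write.
# Enabling cluster mode shards the keyspace across several primaries;
# RedisCluster discovers the shards through the configuration endpoint.
r = redis.RedisCluster(
    host="leaderboard.abc123.clustercfg.use1.cache.amazonaws.com",
    port=6379,
)

# Each key hashes to exactly one shard, so spreading write load means
# splitting the leaderboard into multiple keys (for example, by region).
r.zadd("leaderboard:{eu}", {"player:42": 9001})

# Top 10 players in that partition, highest score first.
print(r.zrevrange("leaderboard:{eu}", 0, 9, withscores=True))
```
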
Question # 64

A Database Specialist must create a read replica to isolate read-only queries for an Amazon RDS for MySQL DB instance. Immediately after creating the read replica, users that query it report slow response times.

What could be causing these slow response times?

A.

New volumes created from snapshots load lazily in the background

B.

Long-running statements on the master

C.

Insufficient resources on the master

D.

Overload of a single replication thread by excessive writes on the master

Full Access
Question # 65

A database specialist was alerted that a production Amazon RDS MariaDB instance with 100 GB of storage was out of space. In response, the database specialist modified the DB instance and added 50 GB of storage capacity. Three hours later, a new alert is generated due to a lack of free space on the same DB instance. The database specialist decides to modify the instance immediately to increase its storage capacity by 20 GB.

What will happen when the modification is submitted?

A.

The request will fail because this storage capacity is too large.

B.

The request will succeed only if the primary instance is in active status.

C.

The request will succeed only if CPU utilization is less than 10%.

D.

The request will fail as the most recent modification was too soon.

Full Access
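For reference, a hedged boto3 sketch of what the second modification request would look like. The instance identifier is hypothetical; RDS typically rejects a storage modification submitted within six hours of the previous one, or while storage optimization is still running.

```python
import boto3
from botocore.exceptions import ClientError

rds = boto3.client("rds")

try:
    # Second storage increase, submitted ~3 hours after the previous one.
    rds.modify_db_instance(
        DBInstanceIdentifier="prod-mariadb",  # hypothetical identifier
        AllocatedStorage=170,                 # 150 GB current + 20 GB more
        ApplyImmediately=True,
    )
except ClientError as err:
    # RDS generally refuses storage modifications made within six hours
    # of the previous one (or during storage optimization).
    print(err.response["Error"]["Code"], err.response["Error"]["Message"])
```
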
Question # 66

A database specialist is responsible for an Amazon RDS for MySQL DB instance with one read replica. The DB instance and the read replica are assigned to the default parameter group. The database team currently runs test queries against a read replica. The database team wants to create additional tables in the read replica that will only be accessible from the read replica to benefit the tests.

What should the database specialist do to allow the database team to create the test tables?

A.

Contact AWS Support to disable read-only mode on the read replica. Reboot the read replica. Connect to the read replica and create the tables.

B.

Change the read_only parameter to false (read_only=0) in the default parameter group of the read replica. Perform a reboot without failover. Connect to the read replica and create the tables using the local_only MySQL option.

C.

Change the read_only parameter to false (read_only=0) in the default parameter group. Reboot the read replica. Connect to the read replica and create the tables.

D.

Create a new DB parameter group. Change the read_only parameter to false (read_only=0). Associate the read replica with the new group. Reboot the read replica. Connect to the read replica and create the tables.

Full Access
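For reference, a minimal boto3 sketch of the approach in option D. The group, family, and instance names are hypothetical; default parameter groups cannot be edited, which is why a custom group is created first.

```python
import boto3

rds = boto3.client("rds")

# Create a custom group (the default group cannot be modified).
rds.create_db_parameter_group(
    DBParameterGroupName="replica-writable",
    DBParameterGroupFamily="mysql8.0",  # hypothetical engine family
    Description="Read replica group with read_only disabled",
)
rds.modify_db_parameter_group(
    DBParameterGroupName="replica-writable",
    Parameters=[{
        "ParameterName": "read_only",
        "ParameterValue": "0",
        "ApplyMethod": "immediate",
    }],
)
# Associate the group with the read replica; a reboot is still needed
# for the new association to take full effect.
rds.modify_db_instance(
    DBInstanceIdentifier="mysql-replica-1",
    DBParameterGroupName="replica-writable",
)
rds.reboot_db_instance(DBInstanceIdentifier="mysql-replica-1")
```
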
Question # 67

A company is developing an application that performs intensive in-memory operations on advanced data structures such as sorted sets. The application requires sub-millisecond latency for reads and writes. The application occasionally must run a group of commands as an ACID-compliant operation. A database specialist is setting up the database for this application. The database specialist needs the ability to create a new database cluster from the latest backup of the production cluster.

Which type of cluster should the database specialist create to meet these requirements?

A.

Amazon ElastiCache for Memcached

B.

Amazon Neptune

C.

Amazon ElastiCache for Redis

D.

Amazon DynamoDB Accelerator (DAX)

Full Access
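As background on the ACID-style requirement, a short redis-py sketch is below. A pipeline with transaction=True wraps the queued commands in MULTI/EXEC so they execute as one atomic unit; the endpoint and keys are hypothetical.

```python
import redis  # the endpoint and key names are hypothetical

r = redis.Redis(host="game.abc123.ng.0001.use1.cache.amazonaws.com", port=6379)

# transaction=True makes redis-py wrap the queued commands in MULTI/EXEC,
# so they run atomically without interleaving. (Redis transactions give
# atomicity and isolation; durability depends on the persistence
# configuration, so "ACID-compliant" is approximate.)
pipe = r.pipeline(transaction=True)
pipe.zincrby("scores", 50, "player:7")       # bump the sorted-set score
pipe.hset("player:7", "last_event", "boss")  # update player state together
pipe.execute()                               # EXEC runs the queued commands
```
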
Question # 68

A company has a web-based survey application that uses Amazon DynamoDB. During peak usage, when survey responses are being collected, a Database Specialist sees the ProvisionedThroughputExceededException error.

What can the Database Specialist do to resolve this error? (Choose two.)

A.

Change the table to use Amazon DynamoDB Streams

B.

Purchase DynamoDB reserved capacity in the affected Region

C.

Increase the write capacity units for the specific table

D.

Change the table capacity mode to on-demand

E.

Change the table type to throughput optimized

Full Access
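For reference, a minimal boto3 sketch of the capacity changes described in options C and D. The table name and capacity figures are hypothetical.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Option D: switch the table to on-demand so DynamoDB absorbs the peaks.
# (The billing mode can only be switched once per 24 hours.)
dynamodb.update_table(
    TableName="survey-responses",  # hypothetical table name
    BillingMode="PAY_PER_REQUEST",
)

# Option C (alternative, staying provisioned): raise the write capacity.
# dynamodb.update_table(
#     TableName="survey-responses",
#     ProvisionedThroughput={"ReadCapacityUnits": 100,
#                            "WriteCapacityUnits": 500},
# )
```
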
Question # 69

A database specialist must create nightly backups of an Amazon DynamoDB table in a mission-critical workload as part of a disaster recovery strategy.

Which backup methodology should the database specialist use to MINIMIZE management overhead?

A.

Install the AWS CLI on an Amazon EC2 instance. Write a CLI command that creates a backup of the DynamoDB table. Create a scheduled job or task that executes the command on a nightly basis.

B.

Create an AWS Lambda function that creates a backup of the DynamoDB table. Create an Amazon CloudWatch Events rule that executes the Lambda function on a nightly basis.

C.

Create a backup plan using AWS Backup, specify a backup frequency of every 24 hours, and give the plan a nightly backup window.

D.

Configure DynamoDB backup and restore for an on-demand backup frequency of every 24 hours.

Full Access
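For reference, a minimal boto3 sketch of the AWS Backup approach in option C. The vault name, role ARN, account ID, and table ARN are hypothetical.

```python
import boto3

backup = boto3.client("backup")

# One managed backup plan; no servers or functions to maintain.
plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "dynamodb-nightly",
        "Rules": [{
            "RuleName": "nightly",
            "TargetBackupVaultName": "Default",
            # 05:00 UTC every day, with a one-hour start window.
            "ScheduleExpression": "cron(0 5 * * ? *)",
            "StartWindowMinutes": 60,
            "Lifecycle": {"DeleteAfterDays": 35},
        }],
    }
)

# Assign the DynamoDB table to the plan by ARN.
backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "critical-table",
        "IamRoleArn": "arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole",
        "Resources": [
            "arn:aws:dynamodb:us-east-1:123456789012:table/mission-critical"
        ],
    },
)
```
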
Question # 70

A company wants to build a new invoicing service for its cloud-native application on AWS. The company has a small development team and wants to focus on service feature development and minimize operations and maintenance as much as possible. The company expects the service to handle billions of requests and millions of new records every day. The service feature requirements, including data access patterns, are well-defined. The service has an availability target of 99.99% with a millisecond latency requirement. The database for the service will be the system of record for invoicing data.

Which database solution meets these requirements at the LOWEST cost?

A.

Amazon Neptune

B.

Amazon Aurora PostgreSQL Serverless

C.

Amazon RDS for PostgreSQL

D.

Amazon DynamoDB

Full Access
Question # 71

A bike rental company operates an application to track its bikes. The application receives location and condition data from bike sensors. The application also receives rental transaction data from the associated mobile app.

The application uses Amazon DynamoDB as its database layer. The company has configured DynamoDB with provisioned capacity set to 20% above the expected peak load of the application. On an average day, DynamoDB used 22 billion read capacity units (RCUs) and 60 billion write capacity units (WCUs). The application is running well. Usage changes smoothly over the course of the day and is generally shaped like a bell curve. The timing and magnitude of peaks vary based on the weather and season, but the general shape is consistent.

Which solution will provide the MOST cost optimization of the DynamoDB database layer?

A.

Change the DynamoDB tables to use on-demand capacity.

B.

Use AWS Auto Scaling and configure time-based scaling.

C.

Enable DynamoDB capacity-based auto scaling.

D.

Enable DynamoDB Accelerator (DAX).

Full Access
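For reference, a minimal boto3 sketch of DynamoDB capacity auto scaling (option C) using Application Auto Scaling. The table name, capacity bounds, and target value are hypothetical.

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

# Target tracking follows the bell-curve load up and down automatically.
autoscaling.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/bike-telemetry",
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    MinCapacity=500,
    MaxCapacity=10000,
)
autoscaling.put_scaling_policy(
    PolicyName="wcu-target-tracking",
    ServiceNamespace="dynamodb",
    ResourceId="table/bike-telemetry",
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        # Keep consumed capacity near 70% of provisioned capacity.
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBWriteCapacityUtilization"
        },
    },
)
```
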
Question # 72

A company is using Amazon Aurora MySQL as the database for its retail application on AWS. The company receives a notification of a pending database upgrade and wants to ensure upgrades do not occur before or during the most critical time of year. Company leadership is concerned that an Amazon RDS maintenance window will cause an outage during data ingestion.

Which step can be taken to ensure that the application is not interrupted?

A.

Disable weekly maintenance on the DB cluster.

B.

Clone the DB cluster and migrate it to a new copy of the database.

C.

Choose to defer the upgrade and then find an appropriate down time for patching.

D.

Set up an Aurora Replica and promote it to primary at the time of patching.

Full Access
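For reference, a hedged boto3 sketch of inspecting and deferring pending maintenance. The cluster ARN is hypothetical; undo-opt-in cancels an existing next-maintenance opt-in so the action can be applied later, for example with OptInType="immediate" during an agreed downtime.

```python
import boto3

rds = boto3.client("rds")

# List what maintenance is pending and when it is scheduled.
pending = rds.describe_pending_maintenance_actions()
for resource in pending["PendingMaintenanceActions"]:
    print(resource["ResourceIdentifier"])
    for action in resource["PendingMaintenanceActionDetails"]:
        print("  ", action["Action"], action.get("CurrentApplyDate"))

# "undo-opt-in" cancels an existing next-maintenance opt-in, effectively
# deferring the action until it is explicitly applied later.
rds.apply_pending_maintenance_action(
    ResourceIdentifier="arn:aws:rds:us-east-1:123456789012:cluster:retail-aurora",
    ApplyAction="system-update",
    OptInType="undo-opt-in",
)
```
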
Question # 73

A database specialist maintains a fleet of Amazon RDS DB instances that are configured to use the default DB parameter group. The database specialist must associate a custom parameter group with some of the DB instances.

When will the instances be assigned to this new parameter group after the database specialist performs this change?

A.

Instantaneously after the change is made to the parameter group

B.

In the next scheduled maintenance window of the DB instances

C.

After the DB instances are manually rebooted

D.

Within 24 hours after the change is made to the parameter group

Full Access
Question # 74

A security team is conducting an audit for a financial company. The security team discovers that the database credentials of an Amazon RDS for MySQL DB instance are hardcoded in the source code. The source code is stored in a shared location for automatic deployment and is exposed to all users who can access the location.

A database specialist must use encryption to ensure that the credentials are not visible in the source code.

Which solution will meet these requirements?

A.

Use an AWS Key Management Service (AWS KMS) key to encrypt the most recent database backup. Restore the backup as a new database to activate encryption.

B.

Store the source code to access the credentials in an AWS Systems Manager Parameter Store secure string parameter that is encrypted by AWS Key Management Service (AWS KMS). Access the code with calls to Systems Manager.

C.

Store the credentials in an AWS Systems Manager Parameter Store secure string parameter that is encrypted by AWS Key Management Service (AWS KMS). Access the credentials with calls to Systems Manager.

D.

Use an AWS Key Management Service (AWS KMS) key to encrypt the DB instance at rest. Activate RDS encryption in transit by using SSL certificates.

Full Access
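For reference, a minimal boto3 sketch of the Parameter Store approach in option C. The parameter name, key alias, and credential payload are hypothetical.

```python
import boto3

ssm = boto3.client("ssm")

# One-time setup: store the credentials as a SecureString encrypted with
# a KMS key, instead of hardcoding them in source code.
ssm.put_parameter(
    Name="/prod/mysql/credentials",
    Value='{"username": "app", "password": "REPLACE_ME"}',
    Type="SecureString",
    KeyId="alias/prod-db-secrets",
)

# Application code fetches and decrypts the value at runtime.
response = ssm.get_parameter(
    Name="/prod/mysql/credentials",
    WithDecryption=True,
)
credentials = response["Parameter"]["Value"]
```
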
Question # 75

A financial company has allocated an Amazon RDS MariaDB DB instance with large storage capacity to accommodate migration efforts. Post-migration, the company purged unwanted data from the instance. The company now wants to downsize the storage to save money. The solution must have the least impact on production and near-zero downtime.

Which solution would meet these requirements?

A.

Create a snapshot of the old databases and restore the snapshot with the required storage

B.

Create a new RDS DB instance with the required storage and move the databases from the old instances to the new instance using AWS DMS

C.

Create a new database using native backup and restore

D.

Create a new read replica and make it the primary by terminating the existing primary

Full Access
Question # 76

A company wants to migrate its on-premises MySQL databases to Amazon RDS for MySQL. To comply with the company’s security policy, all databases must be encrypted at rest. RDS DB instance snapshots must also be shared across various accounts to provision testing and staging environments.

Which solution meets these requirements?

A.

Create an RDS for MySQL DB instance with an AWS Key Management Service (AWS KMS) customer managed CMK. Update the key policy to include the Amazon Resource Name (ARN) of the other AWS accounts as a principal, and then allow the kms:CreateGrant action.

B.

Create an RDS for MySQL DB instance with an AWS managed CMK. Create a new key policy to include the Amazon Resource Name (ARN) of the other AWS accounts as a principal, and then allow the kms:CreateGrant action.

C.

Create an RDS for MySQL DB instance with an AWS owned CMK. Create a new key policy to include the administrator user name of the other AWS accounts as a principal, and then allow the kms:CreateGrant action.

D.

Create an RDS for MySQL DB instance with an AWS CloudHSM key. Update the key policy to include the Amazon Resource Name (ARN) of the other AWS accounts as a principal, and then allow the kms:CreateGrant action.

Full Access
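For reference, a hedged boto3 sketch of the cross-account sharing mechanics behind option A. The key ARN, snapshot identifier, and account IDs are hypothetical; the grant complements a key policy that lists the other account as a principal.

```python
import boto3

kms = boto3.client("kms")
rds = boto3.client("rds")

TARGET_ACCOUNT = "111122223333"  # hypothetical target account ID

# A grant lets the other account use the customer managed key to decrypt
# and re-encrypt the shared, encrypted snapshot.
kms.create_grant(
    KeyId="arn:aws:kms:us-east-1:123456789012:key/EXAMPLE-KEY-ID",
    GranteePrincipal=f"arn:aws:iam::{TARGET_ACCOUNT}:root",
    Operations=["Decrypt", "DescribeKey", "CreateGrant"],
)

# Share the encrypted snapshot with the target account.
rds.modify_db_snapshot_attribute(
    DBSnapshotIdentifier="mysql-prod-snap",
    AttributeName="restore",
    ValuesToAdd=[TARGET_ACCOUNT],
)
```
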
Question # 77

A company is using an Amazon Aurora PostgreSQL database for a project with a government agency. All database communications must be encrypted in transit. All non-SSL/TLS connection requests must be rejected.

What should a database specialist do to meet these requirements?

A.

Set the rds.force_ssl parameter in the DB cluster parameter group to the default value.

B.

Set the rds.force_ssl parameter in the DB cluster parameter group to 1.

C.

Set the rds.force_ssl parameter in the DB cluster parameter group to 0.

D.

Set the SQLNET.SSL_VERSION option in the DB cluster option group to 12.

Full Access
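For reference, a minimal boto3 sketch of setting the parameter from option B. The parameter group name is hypothetical; with rds.force_ssl set to 1, the cluster rejects non-SSL/TLS connection attempts.

```python
import boto3

rds = boto3.client("rds")

# rds.force_ssl = 1 makes the engine refuse connections that do not use
# SSL/TLS. The parameter group name is hypothetical.
rds.modify_db_cluster_parameter_group(
    DBClusterParameterGroupName="aurora-pg-gov",
    Parameters=[{
        "ParameterName": "rds.force_ssl",
        "ParameterValue": "1",
        "ApplyMethod": "immediate",
    }],
)
```
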
Question # 78

A gaming company has recently acquired a successful iOS game, which is particularly popular during the holiday season. The company has decided to add a leaderboard to the game that uses Amazon DynamoDB. The application load is expected to ramp up over the holiday season.

Which solution will meet these requirements at the lowest cost?

A.

DynamoDB Streams

B.

DynamoDB with DynamoDB Accelerator

C.

DynamoDB with on-demand capacity mode

D.

DynamoDB with provisioned capacity mode with Auto Scaling

Full Access
Question # 79

A business is launching a new Amazon RDS for SQL Server DB instance. The organization wants to enable auditing of the SQL Server database.

Which steps should a database professional take in combination to achieve this requirement? (Select two.)

A.

Create a service-linked role for Amazon RDS that grants permissions for Amazon RDS to store audit logs on Amazon S3.

B.

Set up a parameter group to configure an IAM role and an Amazon S3 bucket for audit log storage. Associate the parameter group with the DB instance.

C.

Disable Multi-AZ on the DB instance, and then enable auditing. Enable Multi-AZ after auditing is enabled.

D.

Disable automated backup on the DB instance, and then enable auditing. Enable automated backup after auditing is enabled.

E.

Set up an options group to configure an IAM role and an Amazon S3 bucket for audit log storage. Associate the options group with the DB instance.

Full Access
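For reference, a hedged boto3 sketch of the option group approach in option E. All names, ARNs, and the engine version are hypothetical; the SQLSERVER_AUDIT option delivers audit files to S3 through the configured IAM role.

```python
import boto3

rds = boto3.client("rds")

# SQL Server auditing is configured through an option group, not a
# parameter group.
rds.create_option_group(
    OptionGroupName="sqlserver-audit",
    EngineName="sqlserver-se",
    MajorEngineVersion="15.00",
    OptionGroupDescription="SQL Server audit to S3",
)
rds.modify_option_group(
    OptionGroupName="sqlserver-audit",
    OptionsToInclude=[{
        "OptionName": "SQLSERVER_AUDIT",
        "OptionSettings": [
            {"Name": "IAM_ROLE_ARN",
             "Value": "arn:aws:iam::123456789012:role/rds-audit-role"},
            {"Name": "S3_BUCKET_ARN",
             "Value": "arn:aws:s3:::example-sqlserver-audit-logs"},
        ],
    }],
    ApplyImmediately=True,
)
rds.modify_db_instance(
    DBInstanceIdentifier="sqlserver-prod",
    OptionGroupName="sqlserver-audit",
    ApplyImmediately=True,
)
```
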
Question # 80

A financial services company is using AWS Database Migration Service (AWS DMS) to migrate its databases from on-premises to AWS. A database administrator is working on replicating a database to AWS from on-premises using full load and change data capture (CDC). During the CDC replication, the database administrator observed that the target latency was high and slowly increasing.

What could be the root causes for this high target latency? (Select TWO.)

A.

There was ongoing maintenance on the replication instance

B.

The source endpoint was changed by modifying the task.

C.

Loopback changes had affected the source and target instances.

D.

There was no primary key or index in the target database.

E.

There were resource bottlenecks in the replication instance

Full Access
Question # 81

A company is using an Amazon Aurora MySQL database with Performance Insights enabled. A database specialist is checking Performance Insights and observes an alert message that starts with the following phrase: "Performance Insights is unable to collect SQL Digest statistics on new queries..."

Which action will resolve this alert message?

A.

Truncate the events_statements_summary_by_digest table.

B.

Change the AWS Key Management Service (AWS KMS) key that is used to enable Performance Insights.

C.

Set the value for the performance_schema parameter in the parameter group to 1.

D.

Disable and reenable Performance Insights to be effective in the next maintenance window.

Full Access
Question # 82

A company is developing a multi-tier web application hosted on AWS using Amazon Aurora as the database. The application needs to be deployed to production and other non-production environments. A Database Specialist needs to specify different MasterUsername and MasterUserPassword properties in the AWS CloudFormation templates used for automated deployment. The CloudFormation templates are version controlled in the company’s code repository. The company also needs to meet compliance requirement by routinely rotating its database master password for production.

What is the most secure solution to store the master password?

A.

Store the master password in a parameter file in each environment. Reference the environment-specific parameter file in the CloudFormation template.

B.

Encrypt the master password using an AWS KMS key. Store the encrypted master password in the CloudFormation template.

C.

Use the secretsmanager dynamic reference to retrieve the master password stored in AWS Secrets Manager and enable automatic rotation.

D.

Use the ssm dynamic reference to retrieve the master password stored in the AWS Systems Manager Parameter Store and enable automatic rotation.

Full Access
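For reference, a minimal sketch of the secretsmanager dynamic reference from option C, embedded in a template created with boto3. The secret name, stack name, and resource properties are hypothetical; CloudFormation resolves the reference at deployment time, so the password never appears in version control.

```python
import json

import boto3

# Minimal template fragment using the secretsmanager dynamic reference.
# The reference is resolved by CloudFormation at deploy time.
template = {
    "Resources": {
        "AuroraCluster": {
            "Type": "AWS::RDS::DBCluster",
            "Properties": {
                "Engine": "aurora-mysql",
                "MasterUsername":
                    "{{resolve:secretsmanager:prod/aurora:SecretString:username}}",
                "MasterUserPassword":
                    "{{resolve:secretsmanager:prod/aurora:SecretString:password}}",
            },
        }
    }
}

cloudformation = boto3.client("cloudformation")
cloudformation.create_stack(
    StackName="invoicing-db-prod",
    TemplateBody=json.dumps(template),
)
```
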
Question # 83

A gaming company is building a mobile game that will have as many as 25,000 active concurrent users in the first 2 weeks after launch. The game has a leaderboard that shows the 10 highest scoring players over the last 24 hours. The leaderboard calculations are processed by an AWS Lambda function, which takes about 10 seconds. The company wants the data on the leaderboard to be no more than 1 minute old.

Which architecture will meet these requirements in the MOST operationally efficient way?

A.

Deliver the player data to an Amazon Timestream database. Create an Amazon ElastiCache for Redis cluster. Configure the Lambda function to store the results in Redis. Create a scheduled event with Amazon EventBridge to invoke the Lambda function once every minute. Reconfigure the game server to query the Redis cluster for the leaderboard data.

B.

Deliver the player data to an Amazon Timestream database. Create an Amazon DynamoDB table. Configure the Lambda function to store the results in DynamoDB. Create a scheduled event with Amazon EventBridge to invoke the Lambda function once every minute. Reconfigure the game server to query the DynamoDB table for the leaderboard data.

C.

Deliver the player data to an Amazon Aurora MySQL database. Create an Amazon DynamoDB table. Configure the Lambda function to store the results in MySQL. Create a scheduled event with Amazon EventBridge to invoke the Lambda function once every minute. Reconfigure the game server to query the DynamoDB table for the leaderboard data.

D.

Deliver the player data to an Amazon Neptune database. Create an Amazon ElastiCache for Redis cluster. Configure the Lambda function to store the results in Redis. Create a scheduled event with Amazon EventBridge to invoke the Lambda function once every minute. Reconfigure the game server to query the Redis cluster for the leaderboard data.

Full Access
Question # 84

A company's production databases are hosted on a 3 TB Amazon Aurora MySQL DB cluster. The DB cluster is deployed in the us-east-1 Region. For disaster recovery (DR) requirements, the company's database specialist needs to be able to quickly deploy the DB cluster in another AWS Region to handle the production load with an RTO of less than 2 hours.

Which approach is the MOST operationally effective way to meet these requirements?

A.

Implement an AWS Lambda function to take a snapshot of the production DB cluster every 2 hours, and copy that snapshot to an Amazon S3 bucket in the DR Region. Restore the snapshot to an appropriately sized DB cluster in the DR Region.

B.

Add a cross-Region read replica in the DR Region with the same instance type as the current primary instance. If the read replica in the DR Region needs to be used for production, promote the read replica to become a standalone DB cluster.

C.

Create a smaller DB cluster in the DR Region. Configure an AWS Database Migration Service (AWS DMS) task with change data capture (CDC) enabled to replicate data from the current production DB cluster to the DB cluster in the DR Region.

D.

Create an Aurora global database that spans two Regions. Use AWS Database Migration Service (AWS DMS) to migrate the existing database to the new global database.

Full Access
Question # 85

A retail company with its main office in New York and another office in Tokyo plans to build a database solution on AWS. The company’s main workload consists of a mission-critical application that updates its application data in a data store. The team at the Tokyo office is building dashboards with complex analytical queries using the application data. The dashboards will be used to make buying decisions, so they need to have access to the application data in less than 1 second.

Which solution meets these requirements?

A.

Use an Amazon RDS DB instance deployed in the us-east-1 Region with a read replica instance in the ap-northeast-1 Region. Create an Amazon ElastiCache cluster in the ap-northeast-1 Region to cache application data from the replica to generate the dashboards.

B.

Use an Amazon DynamoDB global table in the us-east-1 Region with replication into the ap-northeast-1 Region. Use Amazon QuickSight for displaying dashboard results.

C.

Use an Amazon RDS for MySQL DB instance deployed in the us-east-1 Region with a read replica instance in the ap-northeast-1 Region. Have the dashboard application read from the read replica.

D.

Use an Amazon Aurora global database. Deploy the writer instance in the us-east-1 Region and the replica in the ap-northeast-1 Region. Have the dashboard application read from the replica in the ap-northeast-1 Region.

Full Access
Question # 86

A Database Specialist is working with a company to launch a new website built on Amazon Aurora with several Aurora Replicas. This new website will replace an on-premises website connected to a legacy relational database. Due to stability issues in the legacy database, the company would like to test the resiliency of Aurora.

Which action can the Database Specialist take to test the resiliency of the Aurora DB cluster?

A.

Stop the DB cluster and analyze how the website responds

B.

Use Aurora fault injection to crash the master DB instance

C.

Remove the DB cluster endpoint to simulate a master DB instance failure

D.

Use Aurora Backtrack to crash the DB cluster

Full Access
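For reference, a hedged sketch of the fault injection approach in option B using a MySQL client from Python. The connection details are hypothetical; ALTER SYSTEM CRASH INSTANCE is an Aurora MySQL fault injection query that forces a crash so failover and recovery behavior can be observed.

```python
import pymysql  # connection details are hypothetical; any MySQL client works

# Aurora MySQL exposes fault injection queries for resiliency testing.
conn = pymysql.connect(
    host="aurora-cluster.cluster-xyz.us-east-1.rds.amazonaws.com",
    user="admin",
    password="REPLACE_ME",
)
with conn.cursor() as cur:
    # Forces a crash of the DB instance; the connection drops when the
    # crash occurs. Other variants include CRASH DISPATCHER / NODE and
    # ALTER SYSTEM SIMULATE ... PERCENT READ REPLICA FAILURE.
    cur.execute("ALTER SYSTEM CRASH INSTANCE;")
```
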
Question # 87

A manufacturing company has an inventory system that stores information in an Amazon Aurora MySQL DB cluster. The database tables are partitioned. The database size has grown to 3 TB. Users run one-time queries by using a SQL client. Queries that use an equijoin to join large tables are taking a long time to run.

Which action will improve query performance with the LEAST operational effort?

A.

Migrate the database to a new Amazon Redshift data warehouse.

B.

Enable hash joins on the database by setting the variable optimizer_switch to hash_join=on.

C.

Take a snapshot of the DB cluster. Create a new DB instance by using the snapshot, and enable parallel query mode.

D.

Add an Aurora read replica.

Full Access
Question # 88

An AWS CloudFormation stack that included an Amazon RDS DB instance was mistakenly deleted, resulting in the loss of recent data. A Database Specialist must apply RDS settings to the CloudFormation template to minimize the possibility of future inadvertent instance data loss.

Which settings will satisfy this requirement? (Select three.)

A.

Set DeletionProtection to True

B.

Set MultiAZ to True

C.

Set TerminationProtection to True

D.

Set DeleteAutomatedBackups to False

E.

Set DeletionPolicy to Delete

F.

Set DeletionPolicy to Retain

Full Access
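For reference, a minimal template fragment combining the protective settings named in the options: DeletionPolicy Retain keeps the instance if the stack is deleted, DeletionProtection blocks delete calls against the instance, and DeleteAutomatedBackups set to false preserves backups even if the instance is removed. The resource properties are hypothetical and abbreviated.

```python
import json

# Minimal CloudFormation template fragment, built as a Python dict for
# illustration; json.dumps renders the booleans as true/false.
template = {
    "Resources": {
        "Database": {
            "Type": "AWS::RDS::DBInstance",
            "DeletionPolicy": "Retain",  # keep the instance on stack delete
            "Properties": {
                "Engine": "mysql",
                "DBInstanceClass": "db.m5.large",
                "AllocatedStorage": 100,
                "DeletionProtection": True,       # block DeleteDBInstance
                "DeleteAutomatedBackups": False,  # keep automated backups
            },
        }
    }
}
print(json.dumps(template, indent=2))
```
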
Question # 89

To meet new data compliance requirements, a company needs to keep critical data durably stored and readily accessible for 7 years. Data that is more than 1 year old is considered archival data and must automatically be moved out of the Amazon Aurora MySQL DB cluster every week. On average, around 10 GB of new data is added to the database every month. A database specialist must choose the most operationally efficient solution to migrate the archival data to Amazon S3.

Which solution meets these requirements?

A.

Create a custom script that exports archival data from the DB cluster to Amazon S3 using a SQL view, then deletes the archival data from the DB cluster. Launch an Amazon EC2 instance with a weekly cron job to execute the custom script.

B.

Configure an AWS Lambda function that exports archival data from the DB cluster to Amazon S3 using a SELECT INTO OUTFILE S3 statement, then deletes the archival data from the DB cluster. Schedule the Lambda function to run weekly using Amazon EventBridge (Amazon CloudWatch Events).

C.

Configure two AWS Lambda functions: one that exports archival data from the DB cluster to Amazon S3 using the mysqldump utility, and another that deletes the archival data from the DB cluster. Schedule both Lambda functions to run weekly using Amazon EventBridge (Amazon CloudWatch Events).

D.

Use AWS Database Migration Service (AWS DMS) to continually export the archival data from the DB cluster to Amazon S3. Configure an AWS Data Pipeline process to run weekly that executes a custom SQL script to delete the archival data from the DB cluster.

Full Access
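For reference, a hedged sketch of the Lambda handler described in option B. The connection details, table, and bucket are hypothetical; SELECT ... INTO OUTFILE S3 is an Aurora MySQL extension that writes query results directly to S3, provided the cluster has an IAM role allowing it.

```python
import pymysql  # connection details, table, and bucket are hypothetical

def lambda_handler(event, context):
    conn = pymysql.connect(
        host="aurora-cluster.cluster-xyz.us-east-1.rds.amazonaws.com",
        user="archiver",
        password="REPLACE_ME",
        database="compliance",
    )
    with conn.cursor() as cur:
        # Export rows older than one year directly to S3.
        cur.execute(
            """
            SELECT * FROM records
            WHERE created_at < NOW() - INTERVAL 1 YEAR
            INTO OUTFILE S3 's3-us-east-1://example-archive-bucket/records'
            FIELDS TERMINATED BY ',' LINES TERMINATED BY '\\n'
            OVERWRITE ON;
            """
        )
        # Only after a successful export, purge the archived rows.
        cur.execute(
            "DELETE FROM records WHERE created_at < NOW() - INTERVAL 1 YEAR;"
        )
    conn.commit()
    conn.close()
```
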
Question # 90

A team of Database Specialists is currently investigating performance issues on an Amazon RDS for MySQL DB instance and is reviewing related metrics. The team wants to narrow the possibilities down to specific database wait events to better understand the situation.

How can the Database Specialists accomplish this?

A.

Enable the option to push all database logs to Amazon CloudWatch for advanced analysis

B.

Create appropriate Amazon CloudWatch dashboards to contain specific periods of time

C.

Enable Amazon RDS Performance Insights and review the appropriate dashboard

D.

Enable Enhanced Monitoring with the appropriate settings.

Full Access
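For reference, a minimal boto3 sketch of enabling Performance Insights (option C) on an existing instance. The identifier is hypothetical; Performance Insights breaks database load down by wait event, which is the drill-down the team wants.

```python
import boto3

rds = boto3.client("rds")

# Performance Insights can be enabled on an existing instance without
# downtime; identifier and retention are hypothetical.
rds.modify_db_instance(
    DBInstanceIdentifier="mysql-prod",
    EnablePerformanceInsights=True,
    PerformanceInsightsRetentionPeriod=7,  # days; 7 is the free tier
    ApplyImmediately=True,
)
```
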
Question # 91

A business just transitioned from an on-premises Oracle database to Amazon Aurora PostgreSQL. Following the move, the organization observed that every day around 3:00 PM, the application's response time is substantially slower. The firm has determined that the problem is with the database, not the application.

Which set of steps should the Database Specialist take to identify the problematic PostgreSQL query most efficiently?

A.

Create an Amazon CloudWatch dashboard to show the number of connections, CPU usage, and disk space consumption. Watch these dashboards during the next slow period.

B.

Launch an Amazon EC2 instance, and install and configure an open-source PostgreSQL monitoring tool that will run reports based on the output error logs.

C.

Modify the logging database parameter to log all the queries related to locking in the database and then check the logs after the next slow period for this information.

D.

Enable Amazon RDS Performance Insights on the PostgreSQL database. Use the metrics to identify any queries that are related to spikes in the graph during the next slow period.

Full Access
Question # 92

A retail company uses Amazon Redshift Spectrum to run complex analytical queries on objects that are stored in an Amazon S3 bucket. The objects are joined with multiple dimension tables that are stored in an Amazon Redshift database. The company uses the database to create monthly and quarterly aggregated reports. Users who attempt to run queries are reporting the following error message: "error: Spectrum Scan Error: Access throttled".

Which solution will resolve this error?

A.

Check file sizes of fact tables in Amazon S3, and look for large files. Break up large files into smaller files of equal size between 100 MB and 1 GB.

B.

Reduce the number of queries that users can run in parallel.

C.

Check file sizes of fact tables in Amazon S3, and look for small files. Merge the small files into larger files of at least 64 MB in size.

D.

Review and optimize queries that submit a large aggregation step to Redshift Spectrum.

Full Access