DBS-C01 Sample Questions and Answers

Questions 4

A global company is creating an application. The application must be highly available. The company requires an RTO and an RPO of less than 5 minutes. The company needs a database that will provide the ability to set up an active-active configuration and near real-time synchronization of data across tables in multiple AWS Regions.

Which solution will meet these requirements?

Options:

A.

Amazon RDS for MariaDB with cross-Region read replicas

B.

Amazon RDS with a Multi-AZ deployment

C.

Amazon DynamoDB global tables

D.

Amazon DynamoDB with a global secondary index (GSI)

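For reference, DynamoDB global tables (version 2019.11.21) provide active-active, multi-Region replication with typically sub-second lag. A minimal boto3 sketch, assuming a hypothetical Orders table and illustrative Regions:

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Add a replica Region to an existing table; every replica accepts writes
# (active-active) and changes propagate across Regions in near real time.
dynamodb.update_table(
    TableName="Orders",  # hypothetical table name
    ReplicaUpdates=[{"Create": {"RegionName": "eu-west-1"}}],
)
```
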
Questions 5

A Database Specialist is working with a company to launch a new website built on Amazon Aurora with several Aurora Replicas. This new website will replace an on-premises website connected to a legacy relational database. Due to stability issues in the legacy database, the company would like to test the resiliency of Aurora.

Which action can the Database Specialist take to test the resiliency of the Aurora DB cluster?

Options:

A.

Stop the DB cluster and analyze how the website responds

B.

Use Aurora fault injection to crash the master DB instance

C.

Remove the DB cluster endpoint to simulate a master DB instance failure

D.

Use Aurora Backtrack to crash the DB cluster

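For reference, Aurora MySQL exposes fault injection queries for exactly this kind of resiliency test. A minimal sketch, assuming the PyMySQL driver and placeholder endpoint and credentials:

```python
import pymysql  # assumes the PyMySQL driver is installed

conn = pymysql.connect(
    host="mycluster.cluster-xxxxxxxx.us-east-1.rds.amazonaws.com",
    user="admin",
    password="example-password",
)
with conn.cursor() as cur:
    # Simulates a crash of the writer instance; Aurora recovers it
    # automatically, so the connection drops and the application's
    # reconnect path is exercised.
    cur.execute("ALTER SYSTEM CRASH INSTANCE;")
```
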
Questions 6

A company has an on-premises Oracle Real Application Clusters (RAC) database. The company wants to migrate the database to AWS and reduce licensing costs. The company's application team wants to store JSON payloads that expire after 28 hours. The company has development capacity if code changes are required.

Which solution meets these requirements?

Options:

A.

Use Amazon DynamoDB and leverage the Time to Live (TTL) feature to automatically expire the data.

B.

Use Amazon RDS for Oracle with Multi-AZ. Create an AWS Lambda function to purge the expired data. Schedule the Lambda function to run daily using Amazon EventBridge.

C.

Use Amazon DocumentDB with a read replica in a different Availability Zone. Use DocumentDB change streams to expire the data.

D.

Use Amazon Aurora PostgreSQL with Multi-AZ and leverage the Time to Live (TTL) feature to automatically expire the data.

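For reference, DynamoDB Time to Live expires items based on an epoch-seconds attribute at no extra cost. A minimal boto3 sketch with placeholder table and attribute names:

```python
import time
import boto3

dynamodb = boto3.client("dynamodb")

# Enable TTL on the table, keyed on an epoch-seconds attribute.
dynamodb.update_time_to_live(
    TableName="Payloads",
    TimeToLiveSpecification={"Enabled": True, "AttributeName": "expires_at"},
)

# Write a JSON payload that DynamoDB will expire about 28 hours from now.
dynamodb.put_item(
    TableName="Payloads",
    Item={
        "id": {"S": "payload-123"},
        "body": {"S": '{"invoice": 42}'},
        "expires_at": {"N": str(int(time.time()) + 28 * 3600)},
    },
)
```
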
Questions 7

A company is hosting critical business data in an Amazon Redshift cluster. Due to the sensitive nature of the data, the cluster is encrypted at rest using AWS KMS. As a part of disaster recovery requirements, the company needs to copy the Amazon Redshift snapshots to another Region.

Which steps should be taken in the AWS Management Console to meet the disaster recovery requirements?

Options:

A.

Create a new KMS customer master key in the source Region. Switch to the destination Region, enable Amazon Redshift cross-Region snapshots, and use the KMS key of the source Region.

B.

Create a new IAM role with access to the KMS key. Enable Amazon Redshift cross-Region replication using the new IAM role, and use the KMS key of the source Region.

C.

Enable Amazon Redshift cross-Region snapshots in the source Region, and create a snapshot copy grant and use a KMS key in the destination Region.

D.

Create a new KMS customer master key in the destination Region and create a new IAM role with access to the new KMS key. Enable Amazon Redshift cross-Region replication in the source Region and use the KMS key of the destination Region.

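For reference, copying snapshots of a KMS-encrypted Redshift cluster to another Region requires a snapshot copy grant in the destination Region. A minimal boto3 sketch with placeholder names and Regions:

```python
import boto3

# Create a snapshot copy grant in the destination Region for a KMS key
# that lives in that Region.
dest = boto3.client("redshift", region_name="us-west-2")
dest.create_snapshot_copy_grant(
    SnapshotCopyGrantName="dr-copy-grant",
    KmsKeyId="alias/redshift-dr",  # KMS key in the destination Region
)

# Then enable cross-Region snapshot copy from the source Region.
src = boto3.client("redshift", region_name="us-east-1")
src.enable_snapshot_copy(
    ClusterIdentifier="analytics-cluster",
    DestinationRegion="us-west-2",
    SnapshotCopyGrantName="dr-copy-grant",
    RetentionPeriod=7,
)
```
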
Questions 8

A database specialist is managing an application in the us-west-1 Region and wants to set up disaster recovery in the us-east-1 Region. The Amazon Aurora MySQL DB cluster needs an RPO of 1 minute and an RTO of 2 minutes.

Which approach meets these requirements with no negative performance impact?

Options:

A.

Enable synchronous replication.

B.

Enable asynchronous binlog replication.

C.

Create an Aurora Global Database.

D.

Copy Aurora incremental snapshots to the us-east-1 Region.

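For reference, an Aurora global database replicates storage-level changes to secondary Regions with typically sub-second lag and supports fast cross-Region promotion. A minimal boto3 sketch with placeholder identifiers:

```python
import boto3

# Promote an existing regional cluster into a global database.
rds_west = boto3.client("rds", region_name="us-west-1")
rds_west.create_global_cluster(
    GlobalClusterIdentifier="app-global",
    SourceDBClusterIdentifier="arn:aws:rds:us-west-1:123456789012:cluster:app-primary",
)

# Attach a secondary cluster in the DR Region (add reader instances with
# create_db_instance as needed).
rds_east = boto3.client("rds", region_name="us-east-1")
rds_east.create_db_cluster(
    DBClusterIdentifier="app-secondary",
    Engine="aurora-mysql",
    GlobalClusterIdentifier="app-global",
)
```
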
Questions 9

An online bookstore uses Amazon Aurora MySQL as its backend database. After the online bookstore added a popular book to the online catalog, customers began reporting intermittent timeouts on the checkout page. A database specialist determined that increased load was causing locking contention on the database. The database specialist wants to automatically detect and diagnose database performance issues and to resolve bottlenecks faster.

Which solution will meet these requirements?

Options:

A.

Turn on Performance Insights for the Aurora MySQL database. Configure and turn on Amazon DevOps Guru for RDS.

B.

Create a CPU usage alarm. Select the CPU utilization metric for the DB instance. Create an Amazon Simple Notification Service (Amazon SNS) topic to notify the database specialist when CPU utilization is over 75%.

C.

Use the Amazon RDS query editor to get the process ID of the query that is causing the database to lock. Run a command to end the process.

D.

Use the SELECT INTO OUTFILE S3 statement to query data from the database. Save the data directly to an Amazon S3 bucket. Use Amazon Athena to analyze the files for long-running queries.

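For reference, Performance Insights is enabled per DB instance, and Amazon DevOps Guru for RDS builds its anomaly detection and recommendations on top of it. A minimal boto3 sketch with a placeholder identifier:

```python
import boto3

rds = boto3.client("rds")

# Turn on Performance Insights for an existing instance; DevOps Guru for RDS
# is then enabled separately from the DevOps Guru console or API.
rds.modify_db_instance(
    DBInstanceIdentifier="bookstore-aurora-instance-1",
    EnablePerformanceInsights=True,
    PerformanceInsightsRetentionPeriod=7,  # days (free tier)
    ApplyImmediately=True,
)
```
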
Questions 10

A company wants to build a new invoicing service for its cloud-native application on AWS. The company has a small development team and wants to focus on service feature development and minimize operations and maintenance as much as possible. The company expects the service to handle billions of requests and millions of new records every day. The service feature requirements, including data access patterns, are well-defined. The service has an availability target of 99.99% with a millisecond latency requirement. The database for the service will be the system of record for invoicing data.

Which database solution meets these requirements at the LOWEST cost?

Options:

A.

Amazon Neptune

B.

Amazon Aurora PostgreSQL Serverless

C.

Amazon RDS for PostgreSQL

D.

Amazon DynamoDB

Questions 11

A company is using an Amazon ElastiCache for Redis cluster to host its online shopping website. Shoppers receive an out-of-memory error when the website's application queries the cluster.

Which solutions will resolve these memory issues with the LEAST amount of effort? (Choose three.)

Options:

A.

Reduce the TTL value for keys on the node.

B.

Choose a larger node type.

C.

Test different values in the parameter group for the maxmemory-policy parameter to find the ideal value to use.

D.

Increase the number of nodes.

E.

Monitor the EngineCPUUtilization Amazon CloudWatch metric. Create an AWS Lambda function to delete keys on nodes when a threshold is reached.

F.

Increase the TTL value for keys on the node.

Questions 12

A Database Specialist needs to speed up any failover that might occur on an Amazon Aurora PostgreSQL DB cluster. The Aurora DB cluster currently includes the primary instance and three Aurora Replicas.

How can the Database Specialist ensure that failovers occur with the least amount of downtime for the application?

Options:

A.

Set the TCP keepalive parameters low

B.

Call the AWS CLI failover-db-cluster command

C.

Enable Enhanced Monitoring on the DB cluster

D.

Start a database activity stream on the DB cluster

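For reference, failover behavior can be rehearsed on demand; client-side settings such as low TCP keepalive values largely determine how quickly the application notices the switch. A minimal boto3 sketch of a manual failover, with placeholder identifiers:

```python
import boto3

rds = boto3.client("rds")

# Trigger a manual failover of the Aurora cluster to observe how quickly
# clients detect the change and reconnect.
rds.failover_db_cluster(
    DBClusterIdentifier="app-cluster",
    TargetDBInstanceIdentifier="app-cluster-replica-2",  # optional target
)
```
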
Questions 13

A company is about to launch a new product, and test databases must be re-created from production data. The company runs its production databases on an Amazon Aurora MySQL DB cluster. A Database Specialist needs to deploy a solution to create these test databases as quickly as possible with the least amount of administrative effort.

What should the Database Specialist do to meet these requirements?

Options:

A.

Restore a snapshot from the production cluster into test clusters

B.

Create logical dumps of the production cluster and restore them into new test clusters

C.

Use database cloning to create clones of the production cluster

D.

Add an additional read replica to the production cluster and use that node for testing

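For reference, Aurora fast cloning is surfaced in the API as a copy-on-write point-in-time restore, so a clone is available in minutes without copying the full data set. A minimal boto3 sketch with placeholder identifiers:

```python
import boto3

rds = boto3.client("rds")

# Create a copy-on-write clone of the production cluster; storage pages are
# shared until either cluster modifies them.
rds.restore_db_cluster_to_point_in_time(
    DBClusterIdentifier="prod-clone-for-testing",
    SourceDBClusterIdentifier="prod-cluster",
    RestoreType="copy-on-write",
    UseLatestRestorableTime=True,
)
```
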
Questions 14

An ecommerce business is migrating its main application database to Amazon Aurora MySQL. The firm is currently performing OLTP stress testing with concurrent database connections. During the first round of testing, a database professional noticed slow performance for some specific write operations.

Reviewing the Amazon CloudWatch metrics for the Aurora DB cluster revealed a CPU utilization of 90%.

Which actions should the database professional take to most effectively determine the root cause of the high CPU usage and slow performance? (Select two.)

Options:

A.

Enable Enhanced Monitoring at less than 30 seconds of granularity to review the operating system metrics before the next round of tests.

B.

Review the VolumeBytesUsed metric in CloudWatch to see if there is a spike in write I/O.

C.

Review Amazon RDS Performance Insights to identify the top SQL statements and wait events.

D.

Review Amazon RDS API calls in AWS CloudTrail to identify long-running queries.

E.

Enable Advanced Auditing to log QUERY events in Amazon CloudWatch before the next round of tests.

Questions 15

A Database Specialist is creating Amazon DynamoDB tables, Amazon CloudWatch alarms, and associated infrastructure for an Application team using a development AWS account. The team wants a deployment method that will standardize the core solution components while managing environment-specific settings separately, and wants to minimize rework due to configuration errors.

Which process should the Database Specialist recommend to meet these requirements?

Options:

A.

Organize common and environment-specific parameters hierarchically in the AWS Systems Manager Parameter Store, then reference the parameters dynamically from an AWS CloudFormation template. Deploy the CloudFormation stack using the environment name as a parameter.

B.

Create a parameterized AWS CloudFormation template that builds the required objects. Keep separate environment parameter files in separate Amazon S3 buckets. Provide an AWS CLI command that deploys the CloudFormation stack directly referencing the appropriate parameter bucket.

C.

Create a parameterized AWS CloudFormation template that builds the required objects. Import the template into the CloudFormation interface in the AWS Management Console. Make the required changes to the parameters and deploy the CloudFormation stack.

D.

Create an AWS Lambda function that builds the required objects using an AWS SDK. Set the required parameter values in a test event in the Lambda console for each environment that the Application team can modify, as needed. Deploy the infrastructure by triggering the test event in the console.

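For reference, hierarchical Parameter Store paths pair naturally with CloudFormation dynamic references such as {{resolve:ssm:/invoicing/dev/ReadCapacity}}. A minimal boto3 sketch with placeholder paths and values:

```python
import boto3

ssm = boto3.client("ssm")

# Store common settings once and per-environment settings under their own
# path; a single template can then resolve either set by environment name.
ssm.put_parameter(Name="/invoicing/common/AlarmEmail",
                  Value="dba@example.com", Type="String", Overwrite=True)
ssm.put_parameter(Name="/invoicing/dev/ReadCapacity",
                  Value="5", Type="String", Overwrite=True)
ssm.put_parameter(Name="/invoicing/prod/ReadCapacity",
                  Value="100", Type="String", Overwrite=True)
```
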
Questions 16

A pharmaceutical company uses Amazon Quantum Ledger Database (Amazon QLDB) to store its clinical trial data records. The company has an application that runs as AWS Lambda functions. The application is hosted in the private subnet in a VPC.

The application does not have internet access and needs to read some of the clinical data records. The company is concerned that traffic between the QLDB ledger and the VPC could leave the AWS network. The company needs to secure access to the QLDB ledger and allow the VPC traffic to have read-only access.

Which security strategy should a database specialist implement to meet these requirements?

Options:

A.

Move the QLDB ledger into a private database subnet inside the VPC. Run the Lambda functions inside the same VPC in an application private subnet. Ensure that the VPC route table allows read-only flow from the application subnet to the database subnet.

B.

Create an AWS PrivateLink VPC endpoint for the QLDB ledger. Attach a VPC policy to the VPC endpoint to allow read-only traffic for the Lambda functions that run inside the VPC.

C.

Add a security group to the QLDB ledger to allow access from the private subnets inside the VPC where the Lambda functions that access the QLDB ledger are running.

D.

Create a VPN connection to ensure pairing of the private subnet where the Lambda functions are running with the private subnet where the QLDB ledger is deployed.

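For reference, an interface VPC endpoint (AWS PrivateLink) for the QLDB session API keeps the Lambda traffic on the AWS network, and an endpoint policy can limit it to read-oriented actions. A minimal boto3 sketch with placeholder IDs:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create an interface endpoint for the QLDB session API; a PolicyDocument
# could additionally be attached to restrict the allowed actions.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.qldb.session",
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    PrivateDnsEnabled=True,
)
```
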
Questions 17

A large gaming firm is developing a centralized method for storing the status of user sessions across its various online games. The workload requires low-latency key-value storage and will consist of an equal number of reads and writes. Across the games' geographically dispersed user base, data should be written to the AWS Region nearest to the user. The design should reduce the burden associated with managing data replication across Regions.

Which solution satisfies these criteria?

Options:

A.

Amazon RDS for MySQL with multi-Region read replicas

B.

Amazon Aurora global database

C.

Amazon RDS for Oracle with GoldenGate

D.

Amazon DynamoDB global tables

Questions 18

A company with 500,000 employees needs to supply its employee list to an application used by human resources. Every 30 minutes, the data is exported using the LDAP service to load into a new Amazon DynamoDB table. The data model has a base table with Employee ID for the partition key and a global secondary index with Organization ID as the partition key.

While importing the data, a database specialist receives ProvisionedThroughputExceededException errors. After increasing the provisioned write capacity units (WCUs) to 50,000, the specialist receives the same errors. Amazon CloudWatch metrics show a consumption of 1,500 WCUs.

What should the database specialist do to address the issue?

Options:

A.

Change the data model to avoid hot partitions in the global secondary index.

B.

Enable auto scaling for the table to automatically increase write capacity during bulk imports.

C.

Modify the table to use on-demand capacity instead of provisioned capacity.

D.

Increase the number of retries on the bulk loading application.

Questions 19

A company has an application that uses an Amazon DynamoDB table as its data store. During normal business days, the throughput requirements from the application are uniform and consist of 5 standard write calls per second to the DynamoDB table. Each write call has 2 KB of data.

For 1 hour each day, the company runs an additional automated job on the DynamoDB table that makes 20 write requests per second. No other application writes to the DynamoDB table. The DynamoDB table does not have to meet any additional capacity requirements.

How should a database specialist configure the DynamoDB table's capacity to meet these requirements MOST cost-effectively?

Options:

A.

Use DynamoDB provisioned capacity with 5 WCUs and auto scaling.

B.

Use DynamoDB provisioned capacity with 5 WCUs and a write-through cache that DynamoDB Accelerator (DAX) provides.

C.

Use DynamoDB provisioned capacity with 10 WCUs and auto scaling.

D.

Use DynamoDB provisioned capacity with 10 WCUs and no auto scaling.

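A quick sanity check of the write-capacity arithmetic, assuming the job's 20 writes per second arrive on top of the steady 5 writes per second:

```python
# 1 WCU = one standard write of up to 1 KB per second, so each 2 KB item
# consumes 2 WCUs per write.
wcu_per_write = 2

baseline_wcu = 5 * wcu_per_write    # steady state all day -> 10 WCUs
job_wcu = 20 * wcu_per_write        # extra load, 1 hour/day -> 40 WCUs
peak_wcu = baseline_wcu + job_wcu   # 50 WCUs during the job hour

print(baseline_wcu, peak_wcu)       # 10 50
```

A 10 WCU baseline therefore covers normal traffic, and the one-hour job is the kind of short, predictable spike that auto scaling is designed to absorb.
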
Questions 20

A company wants to automate the creation of secure test databases with random credentials to be stored safely for later use. The credentials should have sufficient information about each test database to initiate a connection and perform automated credential rotations. The credentials should not be logged or stored anywhere in an unencrypted form.

Which steps should a Database Specialist take to meet these requirements using an AWS CloudFormation template?

Options:

A.

Create the database with the MasterUserName and MasterUserPassword properties set to the default values. Then, create the secret with the user name and password set to the same default values. Add a Secret Target Attachment resource with the SecretId and TargetId properties set to the Amazon Resource Names (ARNs) of the secret and the database. Finally, update the secret’s password value with a randomly generated string set by the GenerateSecretString property.

B.

Add a Mapping property from the database Amazon Resource Name (ARN) to the secret ARN. Then, create the secret with a chosen user name and a randomly generated password set by the GenerateSecretString property. Add the database with the MasterUserName and MasterUserPassword properties set to the user name of the secret.

C.

Add a resource of type AWS::SecretsManager::Secret and specify the GenerateSecretString property. Then, define the database user name in the SecureStringTemplate template. Create a resource for the database and reference the secret string for the MasterUserName and MasterUserPassword properties. Then, add a resource of type AWS::SecretsManager::SecretTargetAttachment with the SecretId and TargetId properties set to the Amazon Resource Names (ARNs) of the secret and the database.

D.

Create the secret with a chosen user name and a randomly generated password set by the GenerateSecretString property. Add an SecretTargetAttachment resource with the SecretId property set to the Amazon Resource Name (ARN) of the secret and the TargetId property set to a parameter value matching the desired database ARN. Then, create a database with the MasterUserName and MasterUserPassword properties set to the previously created values in the secret.

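For reference, the CloudFormation GenerateSecretString property has a direct API analogue. A minimal boto3 sketch of generating and storing credentials with enough connection metadata for rotation (names and endpoint are placeholders):

```python
import json
import boto3

sm = boto3.client("secretsmanager")

# Generate a random password server-side so it never appears in plaintext
# templates or logs.
password = sm.get_random_password(
    PasswordLength=32, ExcludeCharacters='"@/\\')["RandomPassword"]

# Store it with the connection details that rotation functions expect.
sm.create_secret(
    Name="test-db-credentials",
    SecretString=json.dumps({
        "username": "admin",
        "password": password,
        "engine": "mysql",
        "host": "test-db.xxxxxxxx.us-east-1.rds.amazonaws.com",
        "port": 3306,
    }),
)
```
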
Questions 21

A healthcare company is running an application on Amazon EC2 in a public subnet and using Amazon DocumentDB (with MongoDB compatibility) as the storage layer. An audit reveals that the traffic between the application and Amazon DocumentDB is not encrypted and that the DocumentDB cluster is not encrypted at rest. A database specialist must correct these issues and ensure that the data in transit and the data at rest are encrypted.

Which actions should the database specialist take to meet these requirements? (Select TWO.)

Options:

A.

Download the SSH RSA public key for Amazon DocumentDB. Update the application configuration to use the instance endpoint instead of the cluster endpoint and run queries over SSH.

B.

Download the SSL .pem public key for Amazon DocumentDB. Add the key to the application package and make sure the application is using the key while connecting to the cluster.

C.

Create a snapshot of the unencrypted cluster. Restore the unencrypted snapshot as a new cluster with the --storage-encrypted parameter set to true. Update the application to point to the new cluster.

D.

Create an Amazon DocumentDB VPC endpoint to prevent the traffic from going to the Amazon DocumentDB public endpoint. Set a VPC endpoint policy to allow only the application instance's security group to connect.

E.

Activate encryption at rest using the modify-db-cluster command with the --storage-encrypted parameter set to true. Set the security group of the cluster to allow only the application instance's security group to connect.

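For reference, encrypting DocumentDB traffic in transit is a client-side concern: the driver must trust the Amazon-provided CA bundle. A minimal sketch assuming pymongo and a locally downloaded bundle file (filename illustrative):

```python
from pymongo import MongoClient  # assumes pymongo is installed

# Connect over TLS, validating the server against the downloaded CA bundle.
# Host and credentials are placeholders.
client = MongoClient(
    "mongodb://admin:example-password@mycluster.cluster-xxxxxxxx"
    ".us-east-1.docdb.amazonaws.com:27017/"
    "?replicaSet=rs0&readPreference=secondaryPreferred",
    tls=True,
    tlsCAFile="global-bundle.pem",
)
print(client.admin.command("ping"))
```
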
Questions 22

A database specialist was alerted that a production Amazon RDS MariaDB instance with 100 GB of storage was out of space. In response, the database specialist modified the DB instance and added 50 GB of storage capacity. Three hours later, a new alert is generated due to a lack of free space on the same DB instance. The database specialist decides to modify the instance immediately to increase its storage capacity by 20 GB.

What will happen when the modification is submitted?

Options:

A.

The request will fail because this storage capacity is too large.

B.

The request will succeed only if the primary instance is in active status.

C.

The request will succeed only if CPU utilization is less than 10%.

D.

The request will fail as the most recent modification was too soon.

Questions 23

A business maintains a SQL Server database on-premises. Active Directory authentication is used to provide users access to the database. The organization transferred their database successfully to Amazon RDS for SQL Server. The organization, however, has reservations regarding user authentication in the AWS Cloud environment.

Which authentication solution should a database professional provide?

Options:

A.

Deploy Active Directory Federation Services (AD FS) on premises and configure it with an on-premises Active Directory. Set up delegation between the on-premises AD FS and AWS Security Token Service (AWS STS) to map user identities to a role using the AmazonRDSDirectoryServiceAccess managed IAM policy.

B.

Establish a forest trust between the on-premises Active Directory and AWS Directory Service for Microsoft Active Directory. Use AWS SSO to configure an Active Directory user delegated to access the databases in RDS for SQL Server.

C.

Use Active Directory Connector to redirect directory requests to the company's on-premises Active Directory without caching any information in the cloud. Use the RDS master user credentials to connect to the DB instance and configure SQL Server logins and users from the Active Directory users and groups.

D.

Establish a forest trust between the on-premises Active Directory and AWS Directory Service for Microsoft Active Directory. Ensure RDS for SQL Server is using mixed mode authentication. Use the RDS master user credentials to connect to the DB instance and configure SQL Server logins and users from the Active Directory users and groups.

Questions 24

A large financial services company uses Amazon ElastiCache for Redis for its new application that has a global user base. A database administrator must develop a caching solution that will be available across AWS Regions and include low-latency replication and failover capabilities for disaster recovery (DR). The company's security team requires the encryption of cross-Region data transfers.

Which solution meets these requirements with the LEAST amount of operational effort?

Options:

A.

Enable cluster mode in ElastiCache for Redis. Then create multiple clusters across Regions and replicate the cache data by using AWS Database Migration Service (AWS DMS). Promote a cluster in the failover Region to handle production traffic when DR is required.

B.

Create a global datastore in ElastiCache for Redis. Then create replica clusters in two other Regions. Promote one of the replica clusters as primary when DR is required.

C.

Disable cluster mode in ElastiCache for Redis. Then create multiple replication groups across Regions and replicate the cache data by using AWS Database Migration Service (AWS DMS). Promote a replication group in the failover Region to primary when DR is required.

D.

Create a snapshot of ElastiCache for Redis in the primary Region and copy it to the failover Region. Use the snapshot to restore the cluster from the failover Region when DR is required.

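For reference, a global datastore is created from an existing Redis replication group and then extended with secondaries in other Regions; the service encrypts the cross-Region traffic. A minimal boto3 sketch with placeholder names (AWS prepends a generated prefix to the global group ID):

```python
import boto3

# Promote an existing replication group into a global datastore.
ec = boto3.client("elasticache", region_name="us-east-1")
ec.create_global_replication_group(
    GlobalReplicationGroupIdSuffix="app-cache",
    PrimaryReplicationGroupId="app-cache-use1",
)

# Attach a secondary replication group from another Region.
ec_west = boto3.client("elasticache", region_name="us-west-2")
ec_west.create_replication_group(
    ReplicationGroupId="app-cache-usw2",
    ReplicationGroupDescription="Secondary for global datastore",
    GlobalReplicationGroupId="ldgnf-app-cache",  # AWS-assigned prefix is illustrative
)
```
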
Questions 25

A financial services organization uses Amazon RDS for Oracle with Transparent Data Encryption (TDE). At all times, the organization is obligated to encrypt its data at rest. The decryption key must be widely distributed, and access to the key must be restricted. The organization must be able to rotate the encryption key on demand to comply with regulatory requirements. If any possible security vulnerabilities are discovered, the organization must be able to disable the key. Additionally, the company's overhead must be kept to a minimum.

What method should the database administrator use to configure the encryption to fulfill these specifications?

Options:

A.

AWS CloudHSM

B.

AWS Key Management Service (AWS KMS) with an AWS managed key

C.

AWS Key Management Service (AWS KMS) with server-side encryption

D.

AWS Key Management Service (AWS KMS) CMK with customer-provided material

Questions 26

A company has a hybrid environment in which a VPC connects to an on-premises network through an AWS Site-to-Site VPN connection. The VPC contains an application that is hosted on Amazon EC2 instances. The EC2 instances run in private subnets behind an Application Load Balancer (ALB) that is associated with multiple public subnets. The EC2 instances need to securely access an Amazon DynamoDB table.

Which solution will meet these requirements?

Options:

A.

Use the internet gateway of the VPC to access the DynamoDB table. Use the ALB to route the traffic to the EC2 instances.

B.

Add a NAT gateway in one of the public subnets of the VPC. Configure the security groups of the EC2 instances to access the DynamoDB table through the NAT gateway.

C.

Use the Site-to-Site VPN connection to route all DynamoDB network traffic through the on-premises network infrastructure to access the EC2 instances.

D.

Create a VPC endpoint for DynamoDB. Assign the endpoint to the route table of the private subnets that contain the EC2 instances.

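For reference, DynamoDB is reached through a gateway VPC endpoint, which is attached to route tables rather than subnets. A minimal boto3 sketch with placeholder IDs:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# A gateway endpoint keeps DynamoDB traffic on the AWS network; attach it to
# the route tables of the private subnets.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.dynamodb",
    RouteTableIds=["rtb-0123456789abcdef0"],
)
```
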
Questions 27

A large company is using an Amazon RDS for Oracle Multi-AZ DB instance with a Java application. As a part of its disaster recovery annual testing, the company would like to simulate an Availability Zone failure and record how the application reacts during the DB instance failover activity. The company does not want to make any code changes for this activity.

What should the company do to achieve this in the shortest amount of time?

Options:

A.

Use a blue-green deployment with a complete application-level failover test

B.

Use the RDS console to reboot the DB instance by choosing the option to reboot with failover

C.

Use RDS fault injection queries to simulate the primary node failure

D.

Add a rule to the NACL to deny all traffic on the subnets associated with a single Availability Zone

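For reference, a Multi-AZ failover can be forced through a reboot without touching application code. A minimal boto3 sketch with a placeholder identifier:

```python
import boto3

rds = boto3.client("rds")

# Reboot with a forced failover so the standby in the other Availability
# Zone is promoted, simulating an AZ failure for the application.
rds.reboot_db_instance(
    DBInstanceIdentifier="oracle-prod",
    ForceFailover=True,
)
```
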
Questions 28

A large company has a variety of Amazon RDS DB clusters. Each of these clusters has various configurations that adhere to various requirements. Depending on the team and use case, these configurations can be organized into broader categories.

A database administrator wants to make the process of storing and modifying these parameters more systematic. The database administrator also wants to ensure that changes to individual categories of configurations are automatically applied to all instances when required.

Which AWS service or feature will help automate and achieve this objective?

Options:

A.

AWS Systems Manager Parameter Store

B.

DB parameter group

C.

AWS Config

D.

AWS Secrets Manager

Questions 29

A company wants to migrate its on-premises MySQL databases to Amazon RDS for MySQL. To comply with the company’s security policy, all databases must be encrypted at rest. RDS DB instance snapshots must also be shared across various accounts to provision testing and staging environments.

Which solution meets these requirements?

Options:

A.

Create an RDS for MySQL DB instance with an AWS Key Management Service (AWS KMS) customer managed CMK. Update the key policy to include the Amazon Resource Name (ARN) of the other AWS accounts as a principal, and then allow the kms:CreateGrant action.

B.

Create an RDS for MySQL DB instance with an AWS managed CMK. Create a new key policy to include the Amazon Resource Name (ARN) of the other AWS accounts as a principal, and then allow the kms:CreateGrant action.

C.

Create an RDS for MySQL DB instance with an AWS owned CMK. Create a new key policy to include the administrator user name of the other AWS accounts as a principal, and then allow the kms:CreateGrant action.

D.

Create an RDS for MySQL DB instance with an AWS CloudHSM key. Update the key policy to include the Amazon Resource Name (ARN) of the other AWS accounts as a principal, and then allow the kms:CreateGrant action.

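For reference, cross-account sharing of encrypted snapshots is governed by the key policy of a customer managed KMS key. A sketch of the relevant statement shape, with a placeholder account ID and an illustrative action list:

```python
# Shape of a key policy statement allowing another account to use the key
# for shared encrypted snapshots. Account ID and actions are illustrative.
cross_account_statement = {
    "Sid": "AllowUseOfTheKeyFromTestAccount",
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::222233334444:root"},
    "Action": [
        "kms:Decrypt",
        "kms:DescribeKey",
        "kms:CreateGrant",
    ],
    "Resource": "*",
}
```
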
Questions 30

A small startup company is looking to migrate a 4 TB on-premises MySQL database to AWS using an Amazon RDS for MySQL DB instance.

Which strategy would allow for a successful migration with the LEAST amount of downtime?

Options:

A.

Deploy a new RDS for MySQL DB instance and configure it for access from the on-premises data center. Use the mysqldump utility to create an initial snapshot from the on-premises MySQL server, and copy it to an Amazon S3 bucket. Import the snapshot into the DB instance utilizing the MySQL utilities running on an Amazon EC2 instance. Immediately point the application to the DB instance.

B.

Deploy a new Amazon EC2 instance, install the MySQL software on the EC2 instance, and configure networking for access from the on-premises data center. Use the mysqldump utility to create a snapshot of the on-premises MySQL server. Copy the snapshot into the EC2 instance and restore it into the EC2 MySQL instance. Use AWS DMS to migrate data into a new RDS for MySQL DB instance. Point the application to the DB instance.

C.

Deploy a new Amazon EC2 instance, install the MySQL software on the EC2 instance, and configure networking for access from the on-premises data center. Use the mysqldump utility to create a snapshot of the on-premises MySQL server. Copy the snapshot into an Amazon S3 bucket and import the snapshot into a new RDS for MySQL DB instance using the MySQL utilities running on an EC2 instance. Point the application to the DB instance.

D.

Deploy a new RDS for MySQL DB instance and configure it for access from the on-premises data center. Use the mysqldump utility to create an initial snapshot from the on-premises MySQL server, and copy it to an Amazon S3 bucket. Import the snapshot into the DB instance using the MySQL utilities running on an Amazon EC2 instance. Establish replication into the new DB instance using MySQL replication. Stop application access to the on-premises database, allow replication to catch up, and then point the application to the DB instance.

Questions 31

A company is going to use an Amazon Aurora PostgreSQL DB cluster for an application backend. The DB cluster contains some tables with sensitive data. A Database Specialist needs to control the access privileges at the table level.

How can the Database Specialist meet these requirements?

Options:

A.

Use AWS IAM database authentication and restrict access to the tables using an IAM policy.

B.

Configure the rules in a NACL to restrict outbound traffic from the Aurora DB cluster.

C.

Execute GRANT and REVOKE commands that restrict access to the tables containing sensitive data.

D.

Define access privileges to the tables containing sensitive data in the pg_hba.conf file.

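For reference, table-level privileges in Aurora PostgreSQL are ordinary SQL. A minimal sketch assuming the psycopg2 driver and placeholder connection details:

```python
import psycopg2  # assumes the psycopg2 driver is installed

conn = psycopg2.connect(
    host="app-cluster.cluster-xxxxxxxx.us-east-1.rds.amazonaws.com",
    dbname="appdb", user="admin", password="example-password",
)
with conn, conn.cursor() as cur:
    # Lock down a sensitive table, then grant read access to one role only.
    cur.execute("REVOKE ALL ON customer_pii FROM PUBLIC;")
    cur.execute("GRANT SELECT ON customer_pii TO reporting_role;")
```
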
Questions 32

A company runs a customer relationship management (CRM) system that is hosted on-premises with a MySQL database as the backend. A custom stored procedure is used to send email notifications to another system when data is inserted into a table. The company has noticed that the performance of the CRM system has decreased due to database reporting applications used by various teams. The company requires an AWS solution that would reduce maintenance, improve performance, and accommodate the email notification feature.

Which AWS solution meets these requirements?

Options:

A.

Use MySQL running on an Amazon EC2 instance with Auto Scaling to accommodate the reporting applications. Configure a stored procedure and an AWS Lambda function that uses Amazon SES to send email notifications to the other system.

B.

Use Amazon Aurora MySQL in a multi-master cluster to accommodate the reporting applications. Configure Amazon RDS event subscriptions to publish a message to an Amazon SNS topic and subscribe the other system's email address to the topic.

C.

Use MySQL running on an Amazon EC2 instance with a read replica to accommodate the reporting applications. Configure Amazon SES integration to send email notifications to the other system.

D.

Use Amazon Aurora MySQL with a read replica for the reporting applications. Configure a stored procedure and an AWS Lambda function to publish a message to an Amazon SNS topic. Subscribe the other system's email address to the topic.

Questions 33

A database specialist needs to review and optimize an Amazon DynamoDB table that is experiencing performance issues. A thorough investigation by the database specialist reveals that the partition key is causing hot partitions, so a new partition key is created. The database specialist must effectively apply this new partition key to all existing and new data.

How can this solution be implemented?

Options:

A.

Use Amazon EMR to export the data from the current DynamoDB table to Amazon S3. Then use Amazon EMR again to import the data from Amazon S3 into a new DynamoDB table with the new partition key.

B.

Use AWS DMS to copy the data from the current DynamoDB table to Amazon S3. Then import the DynamoDB table to create a new DynamoDB table with the new partition key.

C.

Use the AWS CLI to update the DynamoDB table and modify the partition key.

D.

Use the AWS CLI to back up the DynamoDB table. Then use the restore-table-from-backup command and modify the partition key.

Questions 34

An AWS CloudFormation stack that included an Amazon RDS DB instance was accidentally deleted and recent data was lost. A Database Specialist needs to add RDS settings to the CloudFormation template to reduce the chance of accidental instance data loss in the future.

Which settings will meet this requirement? (Choose three.)

Options:

A.

Set DeletionProtection to True

B.

Set MultiAZ to True

C.

Set TerminationProtection to True

D.

Set DeleteAutomatedBackups to False

E.

Set DeletionPolicy to Delete

F.

Set DeletionPolicy to Retain

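For reference, the deletion-safety settings live in two places: DeletionPolicy is a resource attribute, while DeletionProtection and DeleteAutomatedBackups are resource properties. A sketch of the fragment, shown as a Python dict for illustration:

```python
# CloudFormation template fragment for an RDS instance, expressed as a
# Python dict for illustration.
db_resource = {
    "MyDBInstance": {
        "Type": "AWS::RDS::DBInstance",
        "DeletionPolicy": "Retain",           # keep the instance if the stack is deleted
        "Properties": {
            "DeletionProtection": True,       # block DeleteDBInstance calls
            "DeleteAutomatedBackups": False,  # keep automated backups after deletion
            # ... engine, storage, credentials ...
        },
    }
}
```
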
Questions 35

A database specialist wants to ensure that an Amazon Aurora DB cluster is always automatically upgraded to the most recent minor version available. Noticing that there is a new minor version available, the database specialist has issued an AWS CLI command to enable automatic minor version updates. The command runs successfully, but checking the Aurora DB cluster indicates that no update to the Aurora version has been made.

What might account for this? (Choose two.)

Options:

A.

The new minor version has not yet been designated as preferred and requires a manual upgrade.

B.

Configuring automatic upgrades using the AWS CLI is not supported. This must be enabled expressly using the AWS Management Console.

C.

Applying minor version upgrades requires sufficient free space.

D.

The AWS CLI command did not include an apply-immediately parameter.

E.

Aurora has detected a breaking change in the new minor version and has automatically rejected the upgrade.

Questions 36

A small startup firm wishes to move a 4 TB MySQL database from on-premises to AWS through an Amazon RDS for MySQL DB instance.

Which migration approach would result in the LEAST amount of downtime?

Options:

A.

Deploy a new RDS for MySQL DB instance and configure it for access from the on-premises data center. Use the mysqldump utility to create an initial snapshot from the on-premises MySQL server, and copy it to an Amazon S3 bucket. Import the snapshot into the DB instance utilizing the MySQL utilities running on an Amazon EC2 instance. Immediately point the application to the DB instance.

B.

Deploy a new Amazon EC2 instance, install the MySQL software on the EC2 instance, and configure networking for access from the on-premises data center. Use the mysqldump utility to create a snapshot of the on-premises MySQL server. Copy the snapshot into the EC2 instance and restore it into the EC2 MySQL instance. Use AWS DMS to migrate data into a new RDS for MySQL DB instance. Point the application to the DB instance.

C.

Deploy a new Amazon EC2 instance, install the MySQL software on the EC2 instance, and configure networking for access from the on-premises data center. Use the mysqldump utility to create a snapshot of the on-premises MySQL server. Copy the snapshot into an Amazon S3 bucket and import the snapshot into a new RDS for MySQL DB instance using the MySQL utilities running on an EC2 instance. Point the application to the DB instance.

D.

Deploy a new RDS for MySQL DB instance and configure it for access from the on-premises data center. Use the mysqldump utility to create an initial snapshot from the on-premises MySQL server, and copy it to an Amazon S3 bucket. Import the snapshot into the DB instance using the MySQL utilities running on an Amazon EC2 instance. Establish replication into the new DB instance using MySQL replication. Stop application access to the on-premises database, allow replication to catch up, and then point the application to the DB instance.

Questions 37

A company is running a website on Amazon EC2 instances deployed in multiple Availability Zones (AZs). The site performs a high number of repetitive reads and writes each second on an Amazon RDS for MySQL Multi-AZ DB instance with General Purpose SSD (gp2) storage. After comprehensive testing and analysis, a database specialist discovers that there is high read latency and high CPU utilization on the DB instance.

Which approach should the database specialist take to resolve this issue without changing the application?

Options:

A.

Implement sharding to distribute the load across multiple RDS for MySQL databases.

B.

Use the same RDS for MySQL instance class with Provisioned IOPS (PIOPS) storage.

C.

Add an RDS for MySQL read replica.

D.

Modify the RDS for MySQL database class to a bigger size and implement Provisioned IOPS (PIOPS).

Questions 38

A company has two separate AWS accounts: one for the business unit and another for corporate analytics. The company wants to replicate the business unit data stored in Amazon RDS for MySQL in us-east-1 to its corporate analytics Amazon Redshift environment in us-west-1. The company wants to use AWS DMS with Amazon RDS as the source endpoint and Amazon Redshift as the target endpoint.

Which action will allow AWS DMS to perform the replication?

Options:

A.

Configure the AWS DMS replication instance in the same account and Region as Amazon Redshift.

B.

Configure the AWS DMS replication instance in the same account as Amazon Redshift and in the same Region as Amazon RDS.

C.

Configure the AWS DMS replication instance in its own account and in the same Region as Amazon Redshift.

D.

Configure the AWS DMS replication instance in the same account and Region as Amazon RDS.

Questions 39

A company has an on-premises system that tracks various database operations that occur over the lifetime of a database, including database shutdown, deletion, creation, and backup.

The company recently moved two databases to Amazon RDS and is looking for a solution that would satisfy these tracking requirements. The data could be used by other systems within the company.

Which solution will meet these requirements with minimal effort?

Options:

A.

Create an Amazon CloudWatch Events rule with the operations that need to be tracked on Amazon RDS. Create an AWS Lambda function to act on these rules and write the output to the tracking systems.

B.

Create an AWS Lambda function to trigger on AWS CloudTrail API calls. Filter on specific RDS API calls and write the output to the tracking systems.

C.

Create RDS event subscriptions. Have the tracking systems subscribe to specific RDS event system notifications.

D.

Write RDS logs to Amazon Kinesis Data Firehose. Create an AWS Lambda function to act on these rules and write the output to the tracking systems.

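For reference, RDS event subscriptions push lifecycle notifications (creation, deletion, backup, and so on) to an SNS topic that downstream systems can consume. A minimal boto3 sketch with placeholder names:

```python
import boto3

rds = boto3.client("rds")

# Subscribe an SNS topic to the DB lifecycle event categories of interest.
rds.create_event_subscription(
    SubscriptionName="db-lifecycle-tracking",
    SnsTopicArn="arn:aws:sns:us-east-1:123456789012:db-tracking",
    SourceType="db-instance",
    EventCategories=["creation", "deletion", "backup", "availability"],
    Enabled=True,
)
```
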
Questions 40

A company is running a blogging platform. A security audit determines that the Amazon RDS DB instance that is used by the platform is not configured to encrypt the data at rest. The company must encrypt the DB instance within 30 days.

What should a database specialist do to meet this requirement with the LEAST amount of downtime?

Options:

A.

Create a read replica of the DB instance, and enable encryption. When the read replica is available, promote the read replica and update the endpoint that is used by the application. Delete the unencrypted DB instance.

B.

Take a snapshot of the DB instance. Make an encrypted copy of the snapshot. Restore the encrypted snapshot. When the new DB instance is available, update the endpoint that is used by the application. Delete the unencrypted DB instance.

C.

Create a new encrypted DB instance. Perform an initial data load, and set up logical replication between the two DB instances When the new DB instance is in sync with the source DB instance, update the endpoint that is used by the application. Delete the unencrypted DB instance.

D.

Convert the DB instance to an Amazon Aurora DB cluster, and enable encryption. When the DB cluster is available, update the endpoint that is used by the application to the cluster endpoint. Delete the unencrypted DB instance.

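For reference, an unencrypted RDS instance cannot be encrypted in place; the snapshot-copy path looks like this. A minimal boto3 sketch with placeholder identifiers and KMS alias:

```python
import boto3

rds = boto3.client("rds")

# Snapshot the unencrypted instance.
rds.create_db_snapshot(DBInstanceIdentifier="blog-db",
                       DBSnapshotIdentifier="blog-db-snap")
waiter = rds.get_waiter("db_snapshot_available")
waiter.wait(DBSnapshotIdentifier="blog-db-snap")

# Copy it with a KMS key to produce an encrypted snapshot.
rds.copy_db_snapshot(SourceDBSnapshotIdentifier="blog-db-snap",
                     TargetDBSnapshotIdentifier="blog-db-snap-encrypted",
                     KmsKeyId="alias/rds-at-rest")
waiter.wait(DBSnapshotIdentifier="blog-db-snap-encrypted")

# Restore the encrypted snapshot as a new, encrypted instance.
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="blog-db-encrypted",
    DBSnapshotIdentifier="blog-db-snap-encrypted")
```
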
Questions 41

A company is using an Amazon Aurora PostgreSQL database for a project with a government agency. All database communications must be encrypted in transit. All non-SSL/TLS connection requests must be rejected.

What should a database specialist do to meet these requirements?

Options:

A.

Set the rds.force_ssl parameter in the DB cluster parameter group to default.

B.

Set the rds.force_ssl parameter in the DB cluster parameter group to 1.

C.

Set the rds.force_ssl parameter in the DB cluster parameter group to 0.

D.

Set the SQLNET.SSL_VERSION option in the DB cluster option group to 12.

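For reference, rds.force_ssl is a DB cluster parameter group setting. A minimal boto3 sketch with a placeholder group name:

```python
import boto3

rds = boto3.client("rds")

# Setting rds.force_ssl to 1 makes the engine reject non-SSL/TLS connections.
rds.modify_db_cluster_parameter_group(
    DBClusterParameterGroupName="aurora-pg-ssl-required",
    Parameters=[{
        "ParameterName": "rds.force_ssl",
        "ParameterValue": "1",
        "ApplyMethod": "immediate",
    }],
)
```
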
Questions 42

In one AWS account, a business runs a two-tier ecommerce application. An Amazon RDS for MySQL Multi-AZ DB instance serves as the application's backend. A developer accidentally deleted the DB instance in the production environment. Although the organization recovered the database, the incident resulted in hours of outage and financial loss.

Which combination of adjustments would reduce the likelihood that this error will occur again in the future? (Select three.)

Options:

A.

Grant least privilege to groups, IAM users, and roles.

B.

Allow all users to restore a database from a backup.

C.

Enable deletion protection on existing production DB instances.

D.

Use an ACL policy to restrict users from DB instance deletion.

E.

Enable AWS CloudTrail logging and Enhanced Monitoring.

Questions 43

A financial organization must ensure that the most recent 90 days of MySQL database backups are available. Amazon RDS for MySQL DB instances are used to host all MySQL databases. A database expert must create a solution that satisfies the backup retention requirement with the least amount of development effort.

Which strategy should the database administrator take?

Options:

A.

Use AWS Backup to build a backup plan for the required retention period. Assign the DB instances to the backup plan.

B.

Modify the DB instances to enable the automated backup option. Select the required backup retention period.

C.

Automate a daily cron job on an Amazon EC2 instance to create MySQL dumps, transfer to Amazon S3, and implement an S3 Lifecycle policy to meet the retention requirement.

D.

Use AWS Lambda to schedule a daily manual snapshot of the DB instances. Delete snapshots that exceed the retention requirement.

Questions 44

A company recently migrated its line-of-business (LOB) application to AWS. The application uses an Amazon RDS for SQL Server DB instance as its database engine.

The company must set up cross-Region disaster recovery for the application. The company needs a solution with the lowest possible RPO and RTO.

Which solution will meet these requirements?

Options:

A.

Create a cross-Region read replica of the DB instance. Promote the read replica at the time of failover.

B.

Set up SQL replication from the DB instance to an Amazon EC2 instance in the disaster recovery Region. Promote the EC2 instance as the primary server.

C.

Use AWS Database Migration Service (AWS DMS) for ongoing replication of the DB instance in the disaster recovery Region.

D.

Take manual snapshots of the DB instance in the primary Region. Copy the snapshots to the disaster recovery Region.

Questions 45

A company is migrating its on-premises database workloads to the AWS Cloud. A database specialist performing the move has chosen AWS DMS to migrate an Oracle database with a large table to Amazon RDS. The database specialist notices that AWS DMS is taking significant time to migrate the data.

Which actions would improve the data migration speed? (Choose three.)

Options:

A.

Create multiple AWS DMS tasks to migrate the large table.

B.

Configure the AWS DMS replication instance with Multi-AZ.

C.

Increase the capacity of the AWS DMS replication server.

D.

Establish an AWS Direct Connect connection between the on-premises data center and AWS.

E.

Enable an Amazon RDS Multi-AZ configuration.

F.

Enable full large binary object (LOB) mode to migrate all LOB data for all large tables.

Questions 46

A gaming company has recently acquired a successful iOS game, which is particularly popular during the holiday season. The company has decided to add a leaderboard to the game that uses Amazon DynamoDB. The application load is expected to ramp up over the holiday season.

Which solution will meet these requirements at the lowest cost?

Options:

A.

DynamoDB Streams

B.

DynamoDB with DynamoDB Accelerator

C.

DynamoDB with on-demand capacity mode

D.

DynamoDB with provisioned capacity mode with Auto Scaling

Questions 47

A ride-hailing application stores bookings in a persistent Amazon RDS for MySQL DB instance. This program is very popular, and the corporation anticipates a tenfold rise in the application's user base over the next several months. The application receives a higher volume of traffic in the morning and evening.

This application is divided into two sections:

- An internal booking component that takes online reservations in response to concurrent user queries.

- A component of a third-party customer relationship management (CRM) system that customer service professionals utilize. Booking data is accessed using queries in the CRM.

To manage this workload effectively, a database professional must create a cost-effective database system.

Which solution satisfies these criteria?

Options:

A.

Use Amazon ElastiCache for Redis to accept the bookings. Associate an AWS Lambda function to capture changes and push the booking data to the RDS for MySQL DB instance used by the CRM.

B.

Use Amazon DynamoDB to accept the bookings. Enable DynamoDB Streams and associate an AWS Lambda function to capture changes and push the booking data to an Amazon SQS queue. This triggers another Lambda function that pulls data from Amazon SQS and writes it to the RDS for MySQL DB instance used by the CRM.

C.

Use Amazon ElastiCache for Redis to accept the bookings. Associate an AWS Lambda function to capture changes and push the booking data to an Amazon Redshift database used by the CRM.

D.

Use Amazon DynamoDB to accept the bookings. Enable DynamoDB Streams and associate an AWS Lambda function to capture changes and push the booking data to Amazon Athena, which is used by the CRM.

Questions 48

A company has a heterogeneous six-node production Amazon Aurora DB cluster that handles online transaction processing (OLTP) for the core business and OLAP reports for the human resources department. To match compute resources to the use case, the company has decided to have the reporting workload for the human resources department be directed to two small nodes in the Aurora DB cluster, while every other workload goes to four large nodes in the same DB cluster.

Which option would ensure that the correct nodes are always available for the appropriate workload while meeting these requirements?

Options:

A.

Use the writer endpoint for OLTP and the reader endpoint for the OLAP reporting workload.

B.

Use automatic scaling for the Aurora Replica to have the appropriate number of replicas for the desired workload.

C.

Create additional readers to cater to the different scenarios.

D.

Use custom endpoints to satisfy the different workloads.

Questions 49

A financial company wants to store sensitive user data in an Amazon Aurora PostgreSQL DB cluster. The database will be accessed by multiple applications across the company. The company has mandated that all communications to the database be encrypted and the server identity must be validated. Any non-SSL-based connections should be disallowed access to the database.

Which solution addresses these requirements?

Options:

A.

Set the rds.force_ssl=0 parameter in DB parameter groups. Download and use the Amazon RDS certificate bundle and configure the PostgreSQL connection string with sslmode=allow.

B.

Set the rds.force_ssl=1 parameter in DB parameter groups. Download and use the Amazon RDS certificate bundle and configure the PostgreSQL connection string with sslmode=disable.

C.

Set the rds.force_ssl=0 parameter in DB parameter groups. Download and use the Amazon RDS certificate bundle and configure the PostgreSQL connection string with sslmode=verify-ca.

D.

Set the rds.force_ssl=1 parameter in DB parameter groups. Download and use the Amazon RDS certificate bundle and configure the PostgreSQL connection string with sslmode=verify-full.

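For reference, server identity validation happens on the client: sslmode=verify-full checks both the certificate chain and the hostname against the RDS CA bundle. A minimal sketch assuming psycopg2 and a locally downloaded bundle (filename illustrative):

```python
import psycopg2  # assumes the psycopg2 driver is installed

# Encrypts the connection and validates the server identity; connection
# details are placeholders.
conn = psycopg2.connect(
    host="aurora-pg.cluster-xxxxxxxx.us-east-1.rds.amazonaws.com",
    dbname="appdb",
    user="app_user",
    password="example-password",
    sslmode="verify-full",
    sslrootcert="global-bundle.pem",
)
```
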
Questions 50

A company runs online transaction processing (OLTP) workloads on an Amazon RDS for PostgreSQL Multi-AZ DB instance. The company recently conducted tests on the database after business hours, and the tests generated additional database logs. As a result, free storage of the DB instance is low and is expected to be exhausted in 2 days.

The company wants to recover the free storage that the additional logs consumed. The solution must not result in downtime for the database.

Which solution will meet these requirements?

Options:

A.

Modify the rds.log_retention_period parameter to 0. Reboot the DB instance to save the changes.

B.

Modify the rds.log_retention_period parameter to 1440. Wait up to 24 hours for database logs to be deleted.

C.

Modify the temp_file_limit parameter to a smaller value to reclaim space on the DB instance.

D.

Modify the rds.log_retention_period parameter to 1440. Reboot the DB instance to save the changes.

Questions 51

An ecommerce company is running AWS Database Migration Service (AWS DMS) to replicate an on-premises Microsoft SQL Server database to Amazon RDS for SQL Server. The company has set up an AWS Direct Connect connection from its on-premises data center to AWS. During the migration, the company's security team receives an alarm that is related to the migration. The security team mandates that the DMS replication instance must not be accessible from public IP addresses.

What should a database specialist do to meet this requirement?

Options:

A.

Set up a VPN connection to encrypt the traffic over the Direct Connect connection.

B.

Modify the DMS replication instance by disabling the publicly accessible option.

C.

Delete the DMS replication instance. Recreate the DMS replication instance with the publicly accessible option disabled.

D.

Create a new replication VPC subnet group with private subnets. Modify the DMS replication instance by selecting the newly created VPC subnet group.

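For reference, the publicly accessible setting of a DMS replication instance is fixed at creation time, so a private replacement is created rather than modified. A minimal boto3 sketch with placeholder identifiers:

```python
import boto3

dms = boto3.client("dms")

# Recreate the replication instance as private, inside a subnet group made
# up of private subnets.
dms.create_replication_instance(
    ReplicationInstanceIdentifier="sqlserver-migration-private",
    ReplicationInstanceClass="dms.c5.large",
    ReplicationSubnetGroupIdentifier="private-subnets",
    PubliclyAccessible=False,
)
```
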
Questions 52

A company is using Amazon Aurora MySQL as the database for its retail application on AWS. The company receives a notification of a pending database upgrade and wants to ensure upgrades do not occur before or during the most critical time of year. Company leadership is concerned that an Amazon RDS maintenance window will cause an outage during data ingestion.

Which step can be taken to ensure that the application is not interrupted?

Options:

A.

Disable weekly maintenance on the DB cluster.

B.

Clone the DB cluster and migrate it to a new copy of the database.

C.

Choose to defer the upgrade and then find an appropriate down time for patching.

D.

Set up an Aurora Replica and promote it to primary at the time of patching.

Questions 53

An electric utility company wants to store power plant sensor data in an Amazon DynamoDB table. The utility company has over 100 power plants and each power plant has over 200 sensors that send data every 2 seconds. The sensor data includes time with millisecond precision, a value, and a fault attribute if the sensor is malfunctioning. Power plants are identified by a globally unique identifier. Sensors are identified by a unique identifier within each power plant. A database specialist needs to design the table to support an efficient method of finding all faulty sensors within a given power plant.

Which schema should the database specialist use when creating the DynamoDB table to achieve the fastest query time when looking for faulty sensors?

Options:

A.

Use the plant identifier as the partition key and the measurement time as the sort key. Create a global secondary index (GSI) with the plant identifier as the partition key and the fault attribute as the sort key.

B.

Create a composite of the plant identifier and sensor identifier as the partition key. Use the measurement time as the sort key. Create a local secondary index (LSI) on the fault attribute.

C.

Create a composite of the plant identifier and sensor identifier as the partition key. Use the measurement time as the sort key. Create a global secondary index (GSI) with the plant identifier as the partition key and the fault attribute as the sort key.

D.

Use the plant identifier as the partition key and the sensor identifier as the sort key. Create a local secondary index (LSI) on the fault attribute.

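For reference, a boto3 sketch of one of the designs under discussion: a composite partition key for uniqueness plus a sparse GSI on the fault attribute (only malfunctioning sensors carry the attribute, so the index stays small). Names are placeholders:

```python
import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.create_table(
    TableName="SensorReadings",
    AttributeDefinitions=[
        {"AttributeName": "plant_sensor_id", "AttributeType": "S"},
        {"AttributeName": "measured_at", "AttributeType": "S"},
        {"AttributeName": "plant_id", "AttributeType": "S"},
        {"AttributeName": "fault", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "plant_sensor_id", "KeyType": "HASH"},
        {"AttributeName": "measured_at", "KeyType": "RANGE"},
    ],
    GlobalSecondaryIndexes=[{
        "IndexName": "faulty-by-plant",
        "KeySchema": [
            {"AttributeName": "plant_id", "KeyType": "HASH"},
            {"AttributeName": "fault", "KeyType": "RANGE"},
        ],
        "Projection": {"ProjectionType": "KEYS_ONLY"},
    }],
    BillingMode="PAY_PER_REQUEST",
)
```
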
Questions 54

A company is using an Amazon RDS for MySQL DB instance for its internal applications. A security audit shows that the DB instance is not encrypted at rest. The company’s application team needs to encrypt the DB instance.

What should the team do to meet this requirement?

Options:

A.

Stop the DB instance and modify it to enable encryption. Apply this setting immediately without waiting for the next scheduled RDS maintenance window.

B.

Stop the DB instance and create an encrypted snapshot. Restore the encrypted snapshot to a new encrypted DB instance. Delete the original DB instance, and update the applications to point to the new encrypted DB instance.

C.

Stop the DB instance and create a snapshot. Copy the snapshot into another encrypted snapshot. Restore the encrypted snapshot to a new encrypted DB instance. Delete the original DB instance, and update the applications to point to the new encrypted DB instance.

D.

Create an encrypted read replica of the DB instance. Promote the read replica to master. Delete the original DB instance, and update the applications to point to the new encrypted DB instance.

Questions 55

A company has an ecommerce website that runs on AWS. The website uses an Amazon RDS for MySQL database. A database specialist wants to enforce the use of temporary credentials to access the database.

Which solution will meet this requirement?

Options:

A.

Use MySQL native database authentication.

B.

Use AWS Secrets Manager to rotate the credentials.

C.

Use AWS Identity and Access Management (IAM) database authentication.

D.

Use AWS Systems Manager Parameter Store for authentication.

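For reference, IAM database authentication swaps the static password for a signed token that is valid for 15 minutes. A minimal sketch assuming boto3 and the PyMySQL driver, with placeholder endpoint and user:

```python
import boto3
import pymysql  # assumes the PyMySQL driver is installed

# Generate a short-lived authentication token in place of a password.
rds = boto3.client("rds", region_name="us-east-1")
token = rds.generate_db_auth_token(
    DBHostname="shop-db.xxxxxxxx.us-east-1.rds.amazonaws.com",
    Port=3306,
    DBUsername="iam_app_user",
)

conn = pymysql.connect(
    host="shop-db.xxxxxxxx.us-east-1.rds.amazonaws.com",
    user="iam_app_user",
    password=token,
    ssl={"ca": "global-bundle.pem"},  # IAM auth requires SSL
)
```
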
Questions 56

A manufacturing firm's website uses an Amazon Aurora PostgreSQL DB cluster.

Which settings will result in the LEAST amount of downtime for the application during failover? (Select three.)

Options:

A.

Use the provided read and write Aurora endpoints to establish a connection to the Aurora DB cluster.

B.

Create an Amazon CloudWatch alert triggering a restore in another Availability Zone when the primary Aurora DB cluster is unreachable.

C.

Edit and enable Aurora DB cluster cache management in parameter groups.

D.

Set TCP keepalive parameters to a high value.

E.

Set JDBC connection string timeout variables to a low value.

F.

Set Java DNS caching timeouts to a high value.

Questions 57

An AWS CloudFormation stack that included an Amazon RDS DB instance was mistakenly deleted, resulting in the loss of recent data. A Database Specialist must add RDS settings to the CloudFormation template to minimize the possibility of future inadvertent instance data loss.

Which settings will satisfy this criterion? (Select three.)

Options:

A.

Set DeletionProtection to True

B.

Set MultiAZ to True

C.

Set TerminationProtection to True

D.

Set DeleteAutomatedBackups to False

E.

Set DeletionPolicy to Delete

F.

Set DeletionPolicy to Retain

Questions 58

A company uses Microsoft SQL Server on Amazon RDS in a Multi-AZ deployment as the database engine for its application. The company was recently acquired by another company. A database specialist must rename the database to follow a new naming standard.

Which combination of steps should the database specialist take to rename the database? (Choose two.)

Options:

A.

Turn off automatic snapshots for the DB instance. Rename the database with the rdsadmin.dbo.rds_modify_db_name stored procedure. Turn on the automatic snapshots.

B.

Turn off Multi-AZ for the DB instance. Rename the database with the rdsadmin.dbo.rds_modify_db_name stored procedure. Turn on Multi-AZ Mirroring.

C.

Delete all existing snapshots for the DB instance. Use the rdsadmin.dbo.rds_modify_db_name stored procedure.

D.

Update the application with the new database connection string.

E.

Update the DNS record for the DB instance.
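
For reference, RDS for SQL Server exposes the rename operation through a stored procedure, because the master user lacks the privileges for ALTER DATABASE ... MODIFY NAME. A minimal sketch using pyodbc; the server, credentials, and database names are hypothetical:

    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=mydb.abc123xyz.us-east-1.rds.amazonaws.com,1433;"
        "UID=admin;PWD=example-password",
        autocommit=True,
    )
    # Rename the database once Multi-AZ has been turned off for the instance.
    conn.execute("EXEC rdsadmin.dbo.rds_modify_db_name N'OldSales', N'NewSales'")
    conn.close()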

Questions 59

A database specialist deployed an Amazon RDS DB instance in Dev-VPC1 used by their development team. Dev-VPC1 has a peering connection with Dev-VPC2 that belongs to a different development team in the same department. The networking team confirmed that the routing between VPCs is correct; however, the database engineers in Dev-VPC2 are getting connection timeout errors when trying to connect to the database in Dev-VPC1.

What is likely causing the timeouts?

Options:

A.

The database is deployed in a VPC that is in a different Region.

B.

The database is deployed in a VPC that is in a different Availability Zone.

C.

The database is deployed with misconfigured security groups.

D.

The database is deployed with the wrong client connect timeout configuration.

Questions 60

A company's database specialist implements an AWS Database Migration Service (AWS DMS) task for change data capture (CDC) to replicate data from an on-premises Oracle database to Amazon S3. When usage of the company's application increases, the database specialist notices multiple hours of latency with the CDC.

Which solutions will reduce this latency? (Choose two.)

Options:

A.

Configure the DMS task to run in full large binary object (LOB) mode.

B.

Configure the DMS task to run in limited large binary object (LOB) mode.

C.

Create a Multi-AZ replication instance.

D.

Load tables in parallel by creating multiple replication instances for sets of tables that participate in common transactions.

E.

Replicate tables in parallel by creating multiple DMS tasks for sets of tables that do not participate in common transactions.

Questions 61

A user has a non-relational key-value database. The user is looking for a fully managed AWS service that will offload the administrative burdens of operating and scaling distributed databases. The solution must be cost-effective and able to handle unpredictable application traffic.

What should a Database Specialist recommend for this user?

Options:

A.

Create an Amazon DynamoDB table with provisioned capacity mode

B.

Create an Amazon DocumentDB cluster

C.

Create an Amazon DynamoDB table with on-demand capacity mode

D.

Create an Amazon Aurora Serverless DB cluster

Questions 62

A company is loading sensitive data into an Amazon Aurora MySQL database. To meet compliance requirements, the company needs to enable audit logging on the Aurora MySQL DB cluster to audit database activity. This logging will include events such as connections, disconnections, queries, and tables queried. The company also needs to publish the DB logs to Amazon CloudWatch to perform real-time data analysis.

Which solution meets these requirements?

Options:

A.

Modify the default option group parameters to enable Advanced Auditing. Restart the database for the changes to take effect.

B.

Create a custom DB cluster parameter group. Modify the parameters for Advanced Auditing. Modify the cluster to associate the new custom DB parameter group with the Aurora MySQL DB cluster.

C.

Take a snapshot of the database. Create a new DB instance, and enable custom auditing and logging to CloudWatch. Deactivate the DB instance that has no logging.

D.

Enable AWS CloudTrail for the DB instance. Create a filter that provides only connections, disconnections, queries, and tables queried.
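
For reference, Aurora MySQL Advanced Auditing is controlled through cluster parameters, and the audit log can then be exported to CloudWatch Logs. A minimal boto3 sketch; the group and cluster names are hypothetical, and the parameter group family is assumed to be Aurora MySQL 5.7:

    import boto3

    rds = boto3.client("rds")

    rds.create_db_cluster_parameter_group(
        DBClusterParameterGroupName="aurora-mysql-audit",
        DBParameterGroupFamily="aurora-mysql5.7",
        Description="Advanced Auditing enabled",
    )
    rds.modify_db_cluster_parameter_group(
        DBClusterParameterGroupName="aurora-mysql-audit",
        Parameters=[
            {"ParameterName": "server_audit_logging", "ParameterValue": "1",
             "ApplyMethod": "immediate"},
            {"ParameterName": "server_audit_events",
             "ParameterValue": "CONNECT,QUERY,TABLE", "ApplyMethod": "immediate"},
        ],
    )
    # Associate the custom group and publish the audit log to CloudWatch Logs.
    rds.modify_db_cluster(
        DBClusterIdentifier="prod-aurora-cluster",
        DBClusterParameterGroupName="aurora-mysql-audit",
        CloudwatchLogsExportConfiguration={"EnableLogTypes": ["audit"]},
    )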

Questions 63

A company is moving its fraud detection application from on premises to the AWS Cloud and is using Amazon Neptune for data storage. The company has set up a 1 Gbps AWS Direct Connect connection to migrate 25 TB of fraud detection data from the on-premises data center to a Neptune DB instance. The company already has an Amazon S3 bucket and an S3 VPC endpoint, and 80% of the company’s network bandwidth is available.

How should the company perform this data load?

Options:

A.

Use an AWS SDK with a multipart upload to transfer the data from on premises to the S3 bucket. Use the Copy command for Neptune to move the data in bulk from the S3 bucket to the Neptune DB instance.

B.

Use AWS Database Migration Service (AWS DMS) to transfer the data from on premises to the S3 bucket. Use the Loader command for Neptune to move the data in bulk from the S3 bucket to the Neptune DB instance.

C.

Use AWS DataSync to transfer the data from on premises to the S3 bucket. Use the Loader command for Neptune to move the data in bulk from the S3 bucket to the Neptune DB instance.

D.

Use the AWS CLI to transfer the data from on premises to the S3 bucket. Use the Copy command for Neptune to move the data in bulk from the S3 bucket to the Neptune DB instance.
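
For reference, the Neptune Loader command is an HTTP API on the cluster endpoint that pulls files directly from S3 through the VPC endpoint. A minimal sketch using the requests library; the endpoint, bucket, format, and IAM role ARN are hypothetical:

    import requests

    resp = requests.post(
        "https://my-neptune.cluster-abc123.us-east-1.neptune.amazonaws.com:8182/loader",
        json={
            "source": "s3://fraud-data-bucket/graph/",
            "format": "csv",
            "iamRoleArn": "arn:aws:iam::123456789012:role/NeptuneLoadFromS3",
            "region": "us-east-1",
            "failOnError": "FALSE",
        },
        timeout=30,
    )
    print(resp.json())  # returns a loadId that can be polled for status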

Questions 64

A company is running a mobile app that has a backend database in Amazon DynamoDB. The app experiences sudden increases and decreases in activity throughout the day. The company's operations team notices that DynamoDB read and write requests are being throttled at different times, resulting in a negative customer experience.

Which solution will solve the throttling issue without requiring changes to the app?

Options:

A.

Add a DynamoDB table in a secondary AWS Region. Populate the additional table by using DynamoDB Streams.

B.

Deploy an Amazon ElastiCache cluster in front of the DynamoDB table.

C.

Use on-demand capacity mode for the DynamoDB table.

D.

Use DynamoDB Accelerator (DAX).
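
For reference, switching a table to on-demand capacity is a single API call and needs no application changes; the table name below is hypothetical:

    import boto3

    dynamodb = boto3.client("dynamodb")
    dynamodb.update_table(
        TableName="mobile-app-backend",
        BillingMode="PAY_PER_REQUEST",  # on-demand capacity mode
    )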

Questions 65

A company is planning to use Amazon RDS for SQL Server for one of its critical applications. The company's security team requires that the users of the RDS for SQL Server DB instance are authenticated with on-premises Microsoft Active Directory credentials.

Which combination of steps should a database specialist take to meet this requirement? (Choose three.)

Options:

A.

Extend the on-premises Active Directory to AWS by using AD Connector.

B.

Create an IAM user that uses the AmazonRDSDirectoryServiceAccess managed IAM policy.

C.

Create a directory by using AWS Directory Service for Microsoft Active Directory.

D.

Create an Active Directory domain controller on Amazon EC2.

E.

Create an IAM role that uses the AmazonRDSDirectoryServiceAccess managed IAM policy.

F.

Create a one-way forest trust from the AWS Directory Service for Microsoft Active Directory directory to the on-premises Active Directory.
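
For reference, an RDS for SQL Server instance is joined to an AWS Managed Microsoft AD directory by passing the directory ID and an IAM role that carries the AmazonRDSDirectoryServiceAccess managed policy. A minimal boto3 sketch with hypothetical identifiers:

    import boto3

    rds = boto3.client("rds")
    rds.create_db_instance(
        DBInstanceIdentifier="finance-sqlserver",
        Engine="sqlserver-se",
        DBInstanceClass="db.m5.xlarge",
        AllocatedStorage=500,
        MasterUsername="admin",
        MasterUserPassword="example-password",   # hypothetical
        Domain="d-1234567890",                   # AWS Managed Microsoft AD ID
        DomainIAMRoleName="rds-directoryservice-role",
    )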

Questions 66

A company is closing one of its remote data centers. This site runs a 100 TB on-premises data warehouse solution. The company plans to use the AWS Schema Conversion Tool (AWS SCT) and AWS DMS for the migration to AWS. The site network bandwidth is 500 Mbps. A Database Specialist wants to migrate the on-premises data using Amazon S3 as the data lake and Amazon Redshift as the data warehouse. This move must take place during a 2-week period when source systems are shut down for maintenance. The data should stay encrypted at rest and in transit.

Which approach has the least risk and the highest likelihood of a successful data transfer?

Options:

A.

Set up a VPN tunnel for encrypting data over the network from the data center to AWS. Leverage AWS SCT and apply the converted schema to Amazon Redshift. Once complete, start an AWS DMS task to move the data from the source to Amazon S3. Use AWS Glue to load the data from Amazon S3 to Amazon Redshift.

B.

Leverage AWS SCT and apply the converted schema to Amazon Redshift. Start an AWS DMS task with two AWS Snowball Edge devices to copy data from on-premises to Amazon S3 with AWS KMS encryption. Use AWS DMS to finish copying data to Amazon Redshift.

C.

Leverage AWS SCT and apply the converted schema to Amazon Redshift. Once complete, use a fleet of 10 TB dedicated encrypted drives using the AWS Import/Export feature to copy data from on-premises to Amazon S3 with AWS KMS encryption. Use AWS Glue to load the data to Amazon Redshift.

D.

Set up a VPN tunnel for encrypting data over the network from the data center to AWS. Leverage a native database export feature to export the data and compress the files. Use the aws s3 cp command with multipart upload to upload these files to Amazon S3 with AWS KMS encryption. Once complete, load the data to Amazon Redshift using AWS Glue.

Questions 67

A company migrated one of its business-critical database workloads to an Amazon Aurora Multi-AZ DB cluster. The company requires a very low RTO and needs to improve the application recovery time after database failovers.

Which approach meets these requirements?

Options:

A.

Set the max_connections parameter to 16,000 in the instance-level parameter group.

B.

Modify the client connection timeout to 300 seconds.

C.

Create an Amazon RDS Proxy, and update client connections to point to the proxy endpoint.

D.

Enable the query cache at the instance level.
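
For reference, RDS Proxy keeps a warm pool of database connections and routes around failovers without dropping clients, which shortens recovery. A minimal boto3 sketch; the proxy name, secret and role ARNs, subnet IDs, and cluster identifier are hypothetical:

    import boto3

    rds = boto3.client("rds")
    rds.create_db_proxy(
        DBProxyName="aurora-app-proxy",
        EngineFamily="MYSQL",
        Auth=[{
            "AuthScheme": "SECRETS",
            "SecretArn": "arn:aws:secretsmanager:us-east-1:123456789012:secret:db-creds",
            "IAMAuth": "DISABLED",
        }],
        RoleArn="arn:aws:iam::123456789012:role/rds-proxy-secrets-role",
        VpcSubnetIds=["subnet-aaaa1111", "subnet-bbbb2222"],
        RequireTLS=True,
    )
    # Register the Aurora cluster as the proxy target.
    rds.register_db_proxy_targets(
        DBProxyName="aurora-app-proxy",
        DBClusterIdentifiers=["prod-aurora"],
    )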

Questions 68

Recently, an ecommerce business transferred one of its SQL Server databases to an Amazon RDS for SQL Server Enterprise Edition database instance. The corporation anticipates an increase in read traffic as a result of an approaching sale. To accommodate the projected read load, a database professional must establish a read replica of the database instance.

Which steps should the database professional take before creating the read replica? (Select two.)

Options:

A.

Identify a potential downtime window and stop the application calls to the source DB instance.

B.

Ensure that automatic backups are enabled for the source DB instance.

C.

Ensure that the source DB instance is a Multi-AZ deployment with Always ON Availability Groups.

D.

Ensure that the source DB instance is a Multi-AZ deployment with SQL Server Database Mirroring (DBM).

E.

Modify the read replica parameter group setting and set the value to 1.

Questions 69

A company has a 20 TB production Amazon Aurora DB cluster. The company runs a large batch job overnight to load data into the Aurora DB cluster. To ensure the company’s development team has the most up-to-date data for testing, a copy of the DB cluster must be available in the shortest possible time after the batch job completes.

How should this be accomplished?

Options:

A.

Use the AWS CLI to schedule a manual snapshot of the DB cluster. Restore the snapshot to a new DB cluster using the AWS CLI.

B.

Create a dump file from the DB cluster. Load the dump file into a new DB cluster.

C.

Schedule a job to create a clone of the DB cluster at the end of the overnight batch process.

D.

Set up a new daily AWS DMS task that will use cloning and change data capture (CDC) on the DB cluster to copy the data to a new DB cluster. Set up a time for the AWS DMS stream to stop when the new cluster is current.
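
For reference, Aurora cloning uses a copy-on-write protocol, so a clone of even a 20 TB cluster is available in minutes. A minimal boto3 sketch with hypothetical identifiers (the engine is assumed to be Aurora MySQL):

    import boto3

    rds = boto3.client("rds")
    rds.restore_db_cluster_to_point_in_time(
        SourceDBClusterIdentifier="prod-aurora",
        DBClusterIdentifier="dev-aurora-clone",
        RestoreType="copy-on-write",   # clone instead of a full restore
        UseLatestRestorableTime=True,
    )
    # The clone starts with no instances; add one so developers can connect.
    rds.create_db_instance(
        DBInstanceIdentifier="dev-aurora-clone-1",
        DBClusterIdentifier="dev-aurora-clone",
        Engine="aurora-mysql",
        DBInstanceClass="db.r5.large",
    )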

Questions 70

A database professional maintains a fleet of Amazon RDS DB instances that are configured to use the default database parameter group. The database professional must associate a custom parameter group with some of the DB instances.

After the database specialist makes this change, when will the instances be assigned to the new parameter group?

Options:

A.

Instantaneously after the change is made to the parameter group

B.

In the next scheduled maintenance window of the DB instances

C.

After the DB instances are manually rebooted

D.

Within 24 hours after the change is made to the parameter group

Questions 71

A database expert is responsible for building a highly available online transaction processing (OLTP) solution that makes use of Amazon RDS for MySQL production databases. Disaster recovery criteria include a cross-regional deployment and an RPO and RTO of 5 and 30 minutes, respectively.

What should the database professional do to ensure that the database meets the criteria for high availability and disaster recovery?

Options:

A.

Use a Multi-AZ deployment in each Region.

B.

Use read replica deployments in all Availability Zones of the secondary Region.

C.

Use Multi-AZ and read replica deployments within a Region.

D.

Use Multi-AZ and deploy a read replica in a secondary Region.

Questions 72

A company has an Amazon Redshift cluster with database audit logging enabled. A security audit shows that raw SQL statements that run against the Redshift cluster are being logged to an Amazon S3 bucket. The security team requires that authentication logs are generated for use in an intrusion detection system (IDS), but the security team does not require SQL queries.

What should a database specialist do to remediate this issue?

Options:

A.

Set the enable_user_activity_logging parameter to true in the database parameter group.

B.

Turn off the query monitoring rule in the Redshift cluster's workload management (WLM).

C.

Set the enable_user_activity_logging parameter to false in the database parameter group.

D.

Disable audit logging on the Redshift cluster.
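
For reference, user activity (SQL text) logging in Amazon Redshift is governed by a single cluster parameter, separate from the connection and user logs that audit logging always delivers. A minimal boto3 sketch; the parameter group name is hypothetical:

    import boto3

    redshift = boto3.client("redshift")
    redshift.modify_cluster_parameter_group(
        ParameterGroupName="prod-redshift-params",
        Parameters=[
            {"ParameterName": "enable_user_activity_logging",
             "ParameterValue": "false"},
        ],
    )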

Questions 73

A company plans to use AWS Database Migration Service (AWS DMS) to migrate its database from one Amazon EC2 instance to another EC2 instance as a full load task. The company wants the database to be inactive during the migration. The company will use a dms.t3.medium instance to perform the migration and will use the default settings for the migration.

Which solution will MOST improve the performance of the data migration?

Options:

A.

Increase the number of tables that are loaded in parallel.

B.

Drop all indexes on the source tables.

C.

Change the processing mode from the batch optimized apply option to transactional mode.

D.

Enable Multi-AZ on the target database while the full load task is in progress.

Questions 74

A company's application development team wants to share an automated snapshot of its Amazon RDS database with another team. The database is encrypted with a custom AWS Key Management Service (AWS KMS) key under the "WeShare" AWS account. The application development team needs to share the DB snapshot under the "WeReceive" AWS account.

Which combination of actions must the application development team take to meet these requirements? (Choose two.)

Options:

A.

Add access from the "WeReceive" account to the custom AWS KMS key policy of the sharing team.

B.

Make a copy of the DB snapshot, and set the encryption option to disable.

C.

Share the DB snapshot by setting the DB snapshot visibility option to public.

D.

Make a copy of the DB snapshot, and set the encryption option to enable.

E.

Share the DB snapshot by using the default AWS KMS encryption key.
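
For reference, automated snapshots cannot be shared directly; a manual copy encrypted with the custom key is shared instead, and the receiving account must be granted use of that key. A minimal boto3 sketch from the sharing account, with hypothetical identifiers and account IDs:

    import boto3

    rds = boto3.client("rds")

    rds.copy_db_snapshot(
        SourceDBSnapshotIdentifier="rds:prod-db-2024-01-01-00-00",
        TargetDBSnapshotIdentifier="prod-db-manual-copy",
        KmsKeyId="arn:aws:kms:us-east-1:111111111111:key/weshare-custom-key",
    )
    rds.modify_db_snapshot_attribute(
        DBSnapshotIdentifier="prod-db-manual-copy",
        AttributeName="restore",
        ValuesToAdd=["222222222222"],  # hypothetical "WeReceive" account ID
    )
    # The custom KMS key policy must also grant the receiving account
    # kms:Decrypt, kms:DescribeKey, and kms:CreateGrant.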

Questions 75

Developers have requested a new Amazon Redshift cluster so they can load new third-party marketing data. The new cluster is ready and the user credentials are given to the developers. The developers indicate that their copy jobs fail with the following error message:

“Amazon Invalid operation: S3ServiceException:Access Denied,Status 403,Error AccessDenied.”

The developers need to load this data soon, so a database specialist must act quickly to solve this issue.

What is the MOST secure solution?

Options:

A.

Create a new IAM role with the same user name as the Amazon Redshift developer user ID. Provide the IAM role with read-only access to Amazon S3 with the assume role action.

B.

Create a new IAM role with read-only access to the Amazon S3 bucket and include the assume role action. Modify the Amazon Redshift cluster to add the IAM role.

C.

Create a new IAM role with read-only access to the Amazon S3 bucket with the assume role action. Add this role to the developer IAM user ID used for the copy job that ended with an error message.

D.

Create a new IAM user with access keys and a new role with read-only access to the Amazon S3 bucket. Add this role to the Amazon Redshift cluster. Change the copy job to use the access keys created.

Questions 76

A media company hosts a highly available news website on AWS but needs to improve its page load time, especially during very popular news releases. Once a news page is published, it is very unlikely to change unless an error is identified. The company has decided to use Amazon ElastiCache.

What is the recommended strategy for this use case?

Options:

A.

Use ElastiCache for Memcached with write-through and long time to live (TTL)

B.

Use ElastiCache for Redis with lazy loading and short time to live (TTL)

C.

Use ElastiCache for Memcached with lazy loading and short time to live (TTL)

D.

Use ElastiCache for Redis with write-through and long time to live (TTL)

Questions 77

In North America, a business launched a mobile game that swiftly expanded to 10 million daily active players. The game's backend is hosted on AWS and makes considerable use of a TTL-configured Amazon DynamoDB table.

When an item is added or changed, its TTL is set to the current epoch time plus 600 seconds. The game logic relies on the purging of expired data to compute reward points correctly. At times, items are read from the table many hours after their TTL has expired.

How should a database administrator resolve this issue?

Options:

A.

Use a client library that supports the TTL functionality for DynamoDB.

B.

Include a query filter expression to ignore items with an expired TTL.

C.

Set the ConsistentRead parameter to true when querying the table.

D.

Create a local secondary index on the TTL attribute.
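
For reference, TTL deletion is asynchronous and best-effort, so reads must filter out items whose expiration time has passed. A minimal boto3 sketch; the table, key, and attribute names are hypothetical:

    import time
    import boto3

    dynamodb = boto3.client("dynamodb")
    now = str(int(time.time()))

    resp = dynamodb.query(
        TableName="game-sessions",
        KeyConditionExpression="pk = :pk",
        FilterExpression="expires_at > :now",  # skip items past their TTL
        ExpressionAttributeValues={
            ":pk": {"S": "player#42"},
            ":now": {"N": now},
        },
    )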

Questions 78

A company’s ecommerce website uses Amazon DynamoDB for purchase orders. Each order is made up of a Customer ID and an Order ID. The DynamoDB table uses the Customer ID as the partition key and the Order ID as the sort key.

To meet a new requirement, the company also wants the ability to query the table by using a third attribute named Invoice ID. Queries using the Invoice ID must be strongly consistent. A database specialist must provide this capability with optimal performance and minimal overhead.

What should the database administrator do to meet these requirements?

Options:

A.

Add a global secondary index on Invoice ID to the existing table.

B.

Add a local secondary index on Invoice ID to the existing table.

C.

Recreate the table by using the latest snapshot while adding a local secondary index on Invoice ID.

D.

Use the partition key and a FilterExpression parameter with a filter on Invoice ID for all queries.

Questions 79

A business is transferring its on-premises database workloads to the Amazon Web Services (AWS) Cloud. A database professional migrating an Oracle database with a huge table to Amazon RDS has chosen AWS DMS. The database professional observes that AWS DMS is taking a considerable amount of time to migrate the data.

Which actions would speed up the data migration? (Select three.)

Options:

A.

Create multiple AWS DMS tasks to migrate the large table.

B.

Configure the AWS DMS replication instance with Multi-AZ.

C.

Increase the capacity of the AWS DMS replication server.

D.

Establish an AWS Direct Connect connection between the on-premises data center and AWS.

E.

Enable an Amazon RDS Multi-AZ configuration.

F.

Enable full large binary object (LOB) mode to migrate all LOB data for all large tables.

Questions 80

A company is going through a security audit. The audit team has identified cleartext master user passwords in the AWS CloudFormation templates for Amazon RDS for MySQL DB instances. The audit team has flagged this as a security risk to the database team.

What should a database specialist do to mitigate this risk?

Options:

A.

Change all the databases to use AWS IAM for authentication and remove all the cleartext passwords in CloudFormation templates.

B.

Use an AWS Secrets Manager resource to generate a random password and reference the secret in the CloudFormation template.

C.

Remove the passwords from the CloudFormation templates so Amazon RDS prompts for the password when the database is being created.

D.

Remove the passwords from the CloudFormation template and store them in a separate file. Replace the passwords by running CloudFormation using a sed command.
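
For reference, CloudFormation can generate the password in Secrets Manager and pull it in with a dynamic reference, so no cleartext value ever appears in the template. A minimal sketch of such a template, built as a Python dictionary with hypothetical logical IDs:

    import json

    template = {
        "Resources": {
            "DBSecret": {
                "Type": "AWS::SecretsManager::Secret",
                "Properties": {
                    "GenerateSecretString": {
                        "SecretStringTemplate": "{\"username\": \"admin\"}",
                        "GenerateStringKey": "password",
                        "PasswordLength": 32,
                        "ExcludeCharacters": "\"@/\\",
                    }
                },
            },
            "AppDatabase": {
                "Type": "AWS::RDS::DBInstance",
                "Properties": {
                    "Engine": "mysql",
                    "DBInstanceClass": "db.t3.medium",
                    "AllocatedStorage": 100,
                    "MasterUsername": "admin",
                    # Resolved at deploy time; never stored in the template.
                    "MasterUserPassword": {"Fn::Sub":
                        "{{resolve:secretsmanager:${DBSecret}:SecretString:password}}"},
                },
            },
        }
    }
    print(json.dumps(template, indent=2))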

Questions 81

A company uses an Amazon Redshift cluster to run its analytical workloads. Corporate policy requires that the company's data be encrypted at rest with customer managed keys. The company's disaster recovery plan requires that backups of the cluster be copied into another AWS Region on a regular basis.

How should a database specialist automate the process of backing up the cluster data in compliance with these policies?

Options:

A.

Copy the AWS Key Management Service (AWS KMS) customer managed key from the source Region to the destination Region. Set up an AWS Glue job in the source Region to copy the latest snapshot of the Amazon Redshift cluster from the source Region to the destination Region. Use a time-based schedule in AWS Glue to run the job on a daily basis.

B.

Create a new AWS Key Management Service (AWS KMS) customer managed key in the destination Region. Create a snapshot copy grant in the destination Region specifying the new key. In the source Region, configure cross-Region snapshots for the Amazon Redshift cluster specifying the destination Region, the snapshot copy grant, and retention periods for the snapshot.

C.

Copy the AWS Key Management Service (AWS KMS) customer-managed key from the source Region to the destination Region. Create Amazon S3 buckets in each Region using the keys from their respective Regions. Use Amazon EventBridge (Amazon CloudWatch Events) to schedule an AWS Lambda function in the source Region to copy the latest snapshot to the S3 bucket in that Region. Configure S3 Cross-Region Replication to copy the snapshots to the destination bucket.

D.

Use the same customer-supplied key materials to create a CMK with the same private key in the destination Region. Configure cross-Region snapshots in the source Region targeting the destination Region. Specify the corresponding CMK in the destination Region to encrypt the snapshot.
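
For reference, a snapshot copy grant lets Amazon Redshift use a KMS key in the destination Region when it copies encrypted snapshots across Regions. A minimal boto3 sketch of the flow option B describes, with hypothetical names, key ARN, and Regions:

    import boto3

    # In the destination Region: a grant for the new customer managed key.
    dest = boto3.client("redshift", region_name="us-west-2")
    dest.create_snapshot_copy_grant(
        SnapshotCopyGrantName="dr-copy-grant",
        KmsKeyId="arn:aws:kms:us-west-2:123456789012:key/dest-region-cmk",
    )

    # In the source Region: turn on cross-Region snapshot copy.
    src = boto3.client("redshift", region_name="us-east-1")
    src.enable_snapshot_copy(
        ClusterIdentifier="analytics-cluster",
        DestinationRegion="us-west-2",
        SnapshotCopyGrantName="dr-copy-grant",
        RetentionPeriod=7,  # days to keep the copied automated snapshots
    )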

Questions 82

A company is looking to move an on-premises IBM Db2 database running on AIX on an IBM POWER7 server. Due to escalating support and maintenance costs, the company is exploring the option of moving the workload to an Amazon Aurora PostgreSQL DB cluster.

What is the quickest way for the company to gather data on the migration compatibility?

Options:

A.

Perform a logical dump from the Db2 database and restore it to an Aurora DB cluster. Identify the gaps and compatibility of the objects migrated by comparing row counts from source and target tables.

B.

Run AWS DMS from the Db2 database to an Aurora DB cluster. Identify the gaps and compatibility of the objects migrated by comparing the row counts from source and target tables.

C.

Run native PostgreSQL logical replication from the Db2 database to an Aurora DB cluster to evaluate the migration compatibility.

D.

Run the AWS Schema Conversion Tool (AWS SCT) from the Db2 database to an Aurora DB cluster. Create a migration assessment report to evaluate the migration compatibility.

Questions 83

A database specialist is planning to migrate a 4 TB Microsoft SQL Server DB instance from on premises to Amazon RDS for SQL Server. The database is primarily used for nightly batch processing.

Which RDS storage option meets these requirements MOST cost-effectively?

Options:

A.

General Purpose SSD storage

B.

Provisioned IOPS storage

C.

Magnetic storage

D.

Throughput Optimized hard disk drives (HDD)

Questions 84

An IT consulting company wants to reduce costs when operating its development environment databases. The company’s workflow creates multiple Amazon Aurora MySQL DB clusters for each development group. The Aurora DB clusters are only used for 8 hours a day. The DB clusters can then be deleted at the end of the development cycle, which lasts 2 weeks.

Which of the following provides the MOST cost-effective solution?

Options:

A.

Use AWS CloudFormation templates. Deploy a stack with the DB cluster for each development group. Delete the stack at the end of the development cycle.

B.

Use the Aurora DB cloning feature. Deploy a single development and test Aurora DB instance, and create clone instances for the development groups. Delete the clones at the end of the development cycle.

C.

Use Aurora Replicas. From the master automatic pause compute capacity option, create replicas for each development group, and promote each replica to master. Delete the replicas at the end of the development cycle.

D.

Use Aurora Serverless. Restore current Aurora snapshot and deploy to a serverless cluster for each development group. Enable the option to pause the compute capacity on the cluster and set an appropriate timeout.

Questions 85

A major automotive manufacturer is switching a mission-critical finance application's database to Amazon DynamoDB. According to the company's risk and compliance policy, any update to the database must be documented as a log entry for auditing purposes. The system anticipates about 500,000 log entries each minute. Log entries should be kept in Apache Parquet files in batches of at least 100,000 records per file.

How can a database professional meet these requirements while using DynamoDB?

Options:

A.

Enable Amazon DynamoDB Streams on the table. Create an AWS Lambda function triggered by the stream. Write the log entries to an Amazon S3 object.

B.

Create a backup plan in AWS Backup to back up the DynamoDB table once a day. Create an AWS Lambda function that restores the backup in another table and compares both tables for changes. Generate the log entries and write them to an Amazon S3 object.

C.

Enable AWS CloudTrail logs on the table. Create an AWS Lambda function that reads the log files once an hour and filters DynamoDB API actions. Write the filtered log files to Amazon S3.

D.

Enable Amazon DynamoDB Streams on the table. Create an AWS Lambda function triggered by the stream. Write the log entries to an Amazon Kinesis Data Firehose delivery stream with buffering and Amazon S3 as the destination.
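
For reference, a stream-triggered Lambda function can forward each change record to a Kinesis Data Firehose delivery stream, which buffers the records and can convert them to Parquet before writing to S3. A minimal sketch of such a handler; the delivery stream name is hypothetical:

    import json
    import boto3

    firehose = boto3.client("firehose")

    def handler(event, context):
        records = [
            {"Data": (json.dumps(r["dynamodb"], default=str) + "\n").encode()}
            for r in event["Records"]
        ]
        # put_record_batch accepts at most 500 records per call.
        for i in range(0, len(records), 500):
            firehose.put_record_batch(
                DeliveryStreamName="audit-log-stream",
                Records=records[i:i + 500],
            )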

Questions 86

A financial services organization uses an Amazon Aurora PostgreSQL DB cluster to host an application on AWS. A recent examination found no log files detailing database administrator activity. A database professional must recommend a solution that captures database access and activity logs. The solution should be simple to implement and have a negligible effect on performance.

Which solution should the database specialist recommend?

Options:

A.

Enable Aurora Database Activity Streams on the database in synchronous mode. Connect the Amazon Kinesis data stream to Kinesis Data Firehose. Set the Kinesis Data Firehose destination to an Amazon S3 bucket.

B.

Create an AWS CloudTrail trail in the Region where the database runs. Associate the database activity logs with the trail.

C.

Enable Aurora Database Activity Streams on the database in asynchronous mode. Connect the Amazon Kinesis data stream to Kinesis Data Firehose. Set the Firehose destination to an Amazon S3 bucket.

D.

Allow connections to the DB cluster through a bastion host only. Restrict database access to the bastion host and application servers. Push the bastion host logs to Amazon CloudWatch Logs using the CloudWatch Logs agent.

Questions 87

A company conducted a security audit of its AWS infrastructure. The audit identified that data was not encrypted in transit between application servers and a MySQL database that is hosted in Amazon RDS.

After the audit, the company updated the application to use an encrypted connection. To prevent this problem from occurring again, the company's database team needs to configure the database to require in-transit encryption for all connections.

Which solution will meet this requirement?

Options:

A.

Update the parameter group in use by the DB instance, and set the require_secure_transport parameter to ON.

B.

Connect to the database, and use ALTER USER to enable the REQUIRE SSL option on the database user.

C.

Update the security group in use by the DB instance, and remove port 80 to prevent unencrypted connections from being established.

D.

Update the DB instance, and enable the Require Transport Layer Security option.
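
For reference, RDS for MySQL enforces TLS for all connections through a single parameter, which is dynamic and therefore takes effect without a reboot. A minimal boto3 sketch; the parameter group name is hypothetical:

    import boto3

    rds = boto3.client("rds")
    rds.modify_db_parameter_group(
        DBParameterGroupName="prod-mysql-params",
        Parameters=[
            {"ParameterName": "require_secure_transport",
             "ParameterValue": "ON",
             "ApplyMethod": "immediate"},
        ],
    )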

Questions 88

A company stores critical data for a department in Amazon RDS for MySQL DB instances. The department was closed for 3 weeks and notified a database specialist that access to the RDS DB instances should not be granted to anyone during this time. To meet this requirement, the database specialist stopped all the DB instances used by the department but did not select the option to create a snapshot. Before the 3 weeks expired, the database specialist discovered that users could connect to the database successfully.

What could be the reason for this?

Options:

A.

When stopping the DB instance, the option to create a snapshot should have been selected.

B.

When stopping the DB instance, the duration for stopping the DB instance should have been selected.

C.

Stopped DB instances will automatically restart if the number of attempted connections exceeds the threshold set.

D.

Stopped DB instances will automatically restart if the instance is not manually started after 7 days.

Questions 89

A database professional is developing an application that will respond to individual requests by querying large amounts of client data and presenting the results to end users.

These reports may include a variety of fields. The database specialist wants to enable users to query the database using any of the fields offered.

During peak periods, the database's traffic volume will be significant yet variable. However, the database will see little activity for the rest of the day.

Which approach will be the most cost-effective in meeting these requirements?

Options:

A.

Amazon DynamoDB with provisioned capacity mode and auto scaling

B.

Amazon DynamoDB with on-demand capacity mode

C.

Amazon Aurora with auto scaling enabled

D.

Amazon Aurora in a serverless mode

Questions 90

A startup company in the travel industry wants to create an application that includes a personal travel assistant to display information for nearby airports based on user location. The application will use Amazon DynamoDB and must be able to access and display attributes such as airline names, arrival times, and flight numbers. However, the application must not be able to access or display pilot names or passenger counts.

Which solution will meet these requirements MOST cost-effectively?

Options:

A.

Use a proxy tier between the application and DynamoDB to regulate access to specific tables, items, and attributes.

B.

Use IAM policies with a combination of IAM conditions and actions to implement fine-grained access control.

C.

Use DynamoDB resource policies to regulate access to specific tables, items, and attributes.

D.

Configure an AWS Lambda function to extract only allowed attributes from tables based on user profiles.
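
For reference, DynamoDB fine-grained access control is expressed through IAM condition keys that restrict which attributes a query may return. A minimal sketch of such a policy, built as a Python dictionary; the table ARN and attribute names are hypothetical:

    import json

    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:Query"],
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/Flights",
            "Condition": {
                # Only these attributes may be requested or returned.
                "ForAllValues:StringEquals": {
                    "dynamodb:Attributes": [
                        "airline", "arrival_time", "flight_number"
                    ]
                },
                # Force requests to name specific attributes.
                "StringEquals": {"dynamodb:Select": "SPECIFIC_ATTRIBUTES"}
            }
        }]
    }
    print(json.dumps(policy, indent=2))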

Questions 91

A company maintains several databases using Amazon RDS for MySQL and PostgreSQL. Each RDS database generates log files with retention periods set to their default values. The company has now mandated that database logs be maintained for up to 90 days in a centralized repository to facilitate real-time and after-the-fact analyses.

What should a Database Specialist do to meet these requirements with minimal effort?

Options:

A.

Create an AWS Lambda function to pull logs from the RDS databases and consolidate the log files in an Amazon S3 bucket. Set a lifecycle policy to expire the objects after 90 days.

B.

Modify the RDS databases to publish logs to Amazon CloudWatch Logs. Change the log retention policy for each log group to expire the events after 90 days.

C.

Write a stored procedure in each RDS database to download the logs and consolidate the log files in an Amazon S3 bucket. Set a lifecycle policy to expire the objects after 90 days.

D.

Create an AWS Lambda function to download the logs from the RDS databases and publish the logs to Amazon CloudWatch Logs. Change the log retention policy for the log group to expire the events after 90 days.
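
For reference, publishing engine logs to CloudWatch Logs and setting a retention policy on the resulting log group are both single API calls. A minimal boto3 sketch for one MySQL instance, with hypothetical names:

    import boto3

    rds = boto3.client("rds")
    rds.modify_db_instance(
        DBInstanceIdentifier="crm-mysql",
        CloudwatchLogsExportConfiguration={
            "EnableLogTypes": ["error", "general", "slowquery"]
        },
        ApplyImmediately=True,
    )

    logs = boto3.client("logs")
    logs.put_retention_policy(
        logGroupName="/aws/rds/instance/crm-mysql/error",
        retentionInDays=90,  # events expire after 90 days
    )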

Questions 92

A company uses an Amazon RDS for PostgreSQL DB instance for its customer relationship management (CRM) system. New compliance requirements specify that the database must be encrypted at rest.

Which action will meet these requirements?

Options:

A.

Create an encrypted copy of a manual snapshot of the DB instance. Restore a new DB instance from the encrypted snapshot.

B.

Modify the DB instance and enable encryption.

C.

Restore a DB instance from the most recent automated snapshot and enable encryption.

D.

Create an encrypted read replica of the DB instance. Promote the read replica to a standalone instance.
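
For reference, RDS encryption at rest cannot be switched on for an existing unencrypted instance; the standard path is an encrypted snapshot copy followed by a restore. A minimal boto3 sketch with hypothetical identifiers (each step must complete before the next begins):

    import boto3

    rds = boto3.client("rds")

    rds.copy_db_snapshot(
        SourceDBSnapshotIdentifier="crm-postgres-manual",
        TargetDBSnapshotIdentifier="crm-postgres-encrypted",
        KmsKeyId="alias/aws/rds",  # copying with a KMS key encrypts the copy
    )
    rds.restore_db_instance_from_db_snapshot(
        DBInstanceIdentifier="crm-postgres-encrypted-db",
        DBSnapshotIdentifier="crm-postgres-encrypted",
    )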

Questions 93

An ecommerce company is running Amazon RDS for Microsoft SQL Server. The company is planning to perform testing in a development environment with production data. The development environment and the production environment are in separate AWS accounts. Both environments use AWS Key Management Service (AWS KMS) encrypted databases with both manual and automated snapshots. A database specialist needs to share a KMS encrypted production RDS snapshot with the development account.

Which combination of steps should the database specialist take to meet these requirements? (Select THREE.)

Options:

A.

Create an automated snapshot. Share the snapshot from the production account to the development account.

B.

Create a manual snapshot. Share the snapshot from the production account to the development account.

C.

Share the snapshot that is encrypted by using the development account default KMS encryption key.

D.

Share the snapshot that is encrypted by using the production account custom KMS encryption key.

E.

Allow the development account to access the production account KMS encryption key.

F.

Allow the production account to access the development account KMS encryption key.

Questions 94

A security team is conducting an audit for a financial company. The security team discovers that the database credentials of an Amazon RDS for MySQL DB instance are hardcoded in the source code. The source code is stored in a shared location for automatic deployment and is exposed to all users who can access the location.

A database specialist must use encryption to ensure that the credentials are not visible in the source code.

Which solution will meet these requirements?

Options:

A.

Use an AWS Key Management Service (AWS KMS) key to encrypt the most recent database backup. Restore the backup as a new database to activate encryption.

B.

Store the source code to access the credentials in an AWS Systems Manager Parameter Store secure string parameter that is encrypted by AWS Key Management Service (AWS KMS). Access the code with calls to Systems Manager.

C.

Store the credentials in an AWS Systems Manager Parameter Store secure string parameter that is encrypted by AWS Key Management Service (AWS KMS). Access the credentials with calls to Systems Manager.

D.

Use an AWS Key Management Service (AWS KMS) key to encrypt the DB instance at rest. Activate RDS encryption in transit by using SSL certificates.
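
For reference, a SecureString parameter is encrypted with KMS at rest and decrypted only when the caller requests it. A minimal boto3 sketch; the parameter name and values are hypothetical:

    import boto3

    ssm = boto3.client("ssm")

    # Stored once by an administrator, outside the source code.
    ssm.put_parameter(
        Name="/app/db/credentials",
        Value='{"username": "app", "password": "example"}',
        Type="SecureString",
        Overwrite=True,
    )

    # Application code retrieves and decrypts the value at runtime.
    resp = ssm.get_parameter(Name="/app/db/credentials", WithDecryption=True)
    creds = resp["Parameter"]["Value"]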

Questions 95

A retail company uses Amazon Redshift for its 1 PB data warehouse. Several analytical workloads run on a Redshift cluster. The tables within the cluster have grown rapidly. End users are reporting poor performance of daily reports that run on the transaction fact tables.

A database specialist must change the design of the tables to improve the reporting performance. All the changes must be applied dynamically. The changes must have the least possible impact on users and must optimize the overall table size.

Which solution will meet these requirements?

Options:

A.

Use the STL_SCAN view to understand how the tables are getting scanned. Identify the columns that are used in filter and group by conditions. Create a temporary table with the identified columns as sort keys and compression as Zstandard (ZSTD) by copying the data from the original table. Drop the original table. Give the temporary table the same name that the original table had.

B.

Run an explain plan to analyze the queries on the tables. Consider recommendations from Amazon Redshift Advisor. Identify the columns that are used in filter and group by conditions. Convert the recommended columns from Redshift Advisor into sort keys with compression encoding set to RAW. Set the rest of the column compression encoding to AZ64.

C.

Run an explain plan to analyze the queries on the tables. Consider recommendations from Amazon Redshift Advisor. Identify the columns that are used in filter and group by conditions. Convert the recommended columns from Redshift Advisor into sort keys with compression encoding set to LZO. Set the rest of the column compression encoding to Zstandard (ZSTD).

D.

Run an explain plan to analyze the queries on the tables. Consider recommendations from Amazon Redshift Advisor. Identify the columns that are used in filter and group by conditions. Create a deep copy of the table with the identified columns as sort keys and compression for all columns as Zstandard (ZSTD) by using a bulk insert. Drop the original table. Give the copy table the same name that the original table had.

Questions 96

A company is concerned about the cost of a large-scale, transactional application using Amazon DynamoDB that only needs to store data for 2 days before it is deleted. In looking at the tables, a Database Specialist notices that much of the data is months old, going back to when the application was first deployed.

What can the Database Specialist do to reduce the overall cost?

Options:

A.

Create a new attribute in each table to track the expiration time and create an AWS Glue transformation to delete entries more than 2 days old.

B.

Create a new attribute in each table to track the expiration time and enable DynamoDB Streams on each table.

C.

Create a new attribute in each table to track the expiration time and enable time to live (TTL) on each table.

D.

Create an Amazon CloudWatch Events event to export the data to Amazon S3 daily using AWS Data Pipeline and then truncate the Amazon DynamoDB table.
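
For reference, TTL is enabled per table against a numeric attribute holding an epoch-seconds expiration, and DynamoDB then deletes expired items in the background at no extra cost. A minimal boto3 sketch with hypothetical names:

    import time
    import boto3

    dynamodb = boto3.client("dynamodb")

    dynamodb.update_time_to_live(
        TableName="transactions",
        TimeToLiveSpecification={"Enabled": True, "AttributeName": "expires_at"},
    )
    # New items carry an expiration two days out.
    dynamodb.put_item(
        TableName="transactions",
        Item={
            "pk": {"S": "order#1234"},
            "expires_at": {"N": str(int(time.time()) + 2 * 24 * 3600)},
        },
    )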

Questions 97

A database specialist must create nightly backups of an Amazon DynamoDB table in a mission-critical workload as part of a disaster recovery strategy.

Which backup methodology should the database specialist use to MINIMIZE management overhead?

Options:

A.

Install the AWS CLI on an Amazon EC2 instance. Write a CLI command that creates a backup of the DynamoDB table. Create a scheduled job or task that executes the command on a nightly basis.

B.

Create an AWS Lambda function that creates a backup of the DynamoDB table. Create an Amazon CloudWatch Events rule that executes the Lambda function on a nightly basis.

C.

Create a backup plan using AWS Backup, specify a backup frequency of every 24 hours, and give the plan a nightly backup window.

D.

Configure DynamoDB backup and restore for an on-demand backup frequency of every 24 hours.
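
For reference, AWS Backup centralizes scheduling, retention, and monitoring, so no custom code or scheduling hosts are needed. A minimal boto3 sketch of a nightly plan; the plan name, vault, role ARN, and selection tag are hypothetical:

    import boto3

    backup = boto3.client("backup")

    plan = backup.create_backup_plan(
        BackupPlan={
            "BackupPlanName": "dynamodb-nightly",
            "Rules": [{
                "RuleName": "nightly",
                "TargetBackupVaultName": "Default",
                "ScheduleExpression": "cron(0 5 * * ? *)",  # 05:00 UTC daily
                "StartWindowMinutes": 60,
                "Lifecycle": {"DeleteAfterDays": 35},
            }],
        }
    )
    backup.create_backup_selection(
        BackupPlanId=plan["BackupPlanId"],
        BackupSelection={
            "SelectionName": "ddb-tables",
            "IamRoleArn": "arn:aws:iam::123456789012:role/service-role/"
                          "AWSBackupDefaultServiceRole",
            "ListOfTags": [{
                "ConditionType": "STRINGEQUALS",
                "ConditionKey": "backup",
                "ConditionValue": "nightly",
            }],
        },
    )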
