
Professional-Cloud-Database-Engineer Sample Questions and Answers

Questions 4

Your online delivery business that primarily serves retail customers uses Cloud SQL for MySQL for its inventory and scheduling application. The required recovery time objective (RTO) and recovery point objective (RPO) must be in minutes rather than hours as a part of your high availability and disaster recovery design. You need a high availability configuration that can recover without data loss during a zonal or a regional failure. What should you do?

Options:

A.

Set up all read replicas in a different region using asynchronous replication.

B.

Set up all read replicas in the same region as the primary instance with synchronous replication.

C.

Set up read replicas in different zones of the same region as the primary instance with synchronous replication, and set up read replicas in different regions with asynchronous replication.

D.

Set up read replicas in different zones of the same region as the primary instance with asynchronous replication, and set up read replicas in different regions with synchronous replication.

Questions 5

You recently launched a new product to the US market. You currently have two Bigtable clusters in one US region to serve all the traffic. Your marketing team is planning an immediate expansion to APAC. You need to roll out the regional expansion while implementing high availability according to Google-recommended practices. What should you do?

Options:

A.

Maintain a target of 23% CPU utilization by locating:

cluster-a in zone us-central1-a

cluster-b in zone europe-west1-d

cluster-c in zone asia-east1-b

B.

Maintain a target of 23% CPU utilization by locating:

cluster-a in zone us-central1-a

cluster-b in zone us-central1-b

cluster-c in zone us-east1-a

C.

Maintain a target of 35% CPU utilization by locating:

cluster-a in zone us-central1-a

cluster-b in zone australia-southeast1-a

cluster-c in zone europe-west1-d

cluster-d in zone asia-east1-b

D.

Maintain a target of 35% CPU utilization by locating:

cluster-a in zone us-central1-a

cluster-b in zone us-central2-a

cluster-c in zone asia-northeast1-b

cluster-d in zone asia-east1-b

Questions 6

You are the DBA of a Cloud SQL for PostgreSQL instance. You want your applications to have password-less authentication for read and write access to the database. Which authentication mechanism should you use?

Options:

A.

Use Identity and Access Management (IAM) authentication.

B.

Use Managed Active Directory authentication.

C.

Use Cloud SQL federated queries.

D.

Use PostgreSQL database's built-in authentication.
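
For reference, IAM database authentication is enabled through a database flag and an IAM-type database user. A minimal sketch, assuming a hypothetical instance name and user email:

    # Assumption: the instance and user names below are placeholders.
    # Note: --database-flags overwrites existing flags, so include any flags already set.
    gcloud sql instances patch my-pg-instance \
        --database-flags=cloudsql.iam_authentication=on
    # Add an IAM user as a database user; no password is stored in the database.
    gcloud sql users create dev-user@example.com \
        --instance=my-pg-instance --type=cloud_iam_user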

Questions 7

You are a DBA on a Cloud Spanner instance with multiple databases. You need to assign these privileges to all members of the application development team on a specific database:

Can read tables, views, and DDL

Can write rows to the tables

Can add columns and indexes

Cannot drop the database

What should you do?

Options:

A.

Assign the Cloud Spanner Database Reader and Cloud Spanner Backup Writer roles.

B.

Assign the Cloud Spanner Database Admin role.

C.

Assign the Cloud Spanner Database User role.

D.

Assign the Cloud Spanner Admin role.
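
For context, a database-level grant such as the Cloud Spanner Database User role can be sketched with gcloud; the instance, database, and group names below are illustrative only:

    # Grant the Database User role on one database to the development team's group.
    gcloud spanner databases add-iam-policy-binding orders-db \
        --instance=prod-spanner-instance \
        --member='group:app-dev-team@example.com' \
        --role='roles/spanner.databaseUser'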

Questions 8

You are managing a small Cloud SQL instance for developers to do testing. The instance is not critical and has a recovery point objective (RPO) of several days. You want to minimize ongoing costs for this instance. What should you do?

Options:

A.

Take no backups, and turn off transaction log retention.

B.

Take one manual backup per day, and turn off transaction log retention.

C.

Turn on automated backup, and turn off transaction log retention.

D.

Turn on automated backup, and turn on transaction log retention.
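
A minimal sketch of a low-cost setup for a non-critical instance, assuming a hypothetical instance named dev-sql; the exact flag for transaction log retention depends on the database engine:

    # Disable automated backups and point-in-time recovery (PostgreSQL/SQL Server flag shown;
    # MySQL instances use --no-enable-bin-log instead).
    gcloud sql instances patch dev-sql --no-backup --no-enable-point-in-time-recovery
    # Take an on-demand backup when needed, for example from a daily scheduled job.
    gcloud sql backups create --instance=dev-sql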

Questions 9

You are choosing a database backend for a new application. The application will ingest data points from IoT sensors. You need to ensure that the application can scale up to millions of requests per second with sub-10ms latency and store up to 100 TB of history. What should you do?

Options:

A.

Use Cloud SQL with read replicas for throughput.

B.

Use Firestore, and rely on automatic serverless scaling.

C.

Use Memorystore for Memcached, and add nodes as necessary to achieve the required throughput.

D.

Use Bigtable, and add nodes as necessary to achieve the required throughput.

Questions 10

You are managing a Cloud SQL for PostgreSQL instance in Google Cloud. You need to test the high availability of your Cloud SQL instance by performing a failover. You want to use the gcloud command.

What should you do?

Options:

A.

Use gcloud sql instances failover with the primary instance name.

B.

Use gcloud sql instances failover with the read replica instance name.

C.

Use gcloud sql instances promote-replica with the primary instance name.

D.

Use gcloud sql instances promote-replica with the read replica instance name.
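
To make the distinction concrete, a brief sketch with a hypothetical HA primary named prod-pg and a replica named prod-pg-replica:

    # failover switches an HA primary to its standby in the other zone (used to test HA).
    gcloud sql instances failover prod-pg
    # promote-replica, by contrast, detaches a read replica into a standalone primary.
    gcloud sql instances promote-replica prod-pg-replica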

Questions 11

Your company has PostgreSQL databases on-premises and on Amazon Web Services (AWS). You are planning multiple database migrations to Cloud SQL in an effort to reduce costs and downtime. You want to follow Google-recommended practices and use Google native data migration tools. You also want to closely monitor the migrations as part of the cutover strategy. What should you do?

Options:

A.

Use Database Migration Service to migrate all databases to Cloud SQL.

B.

Use Database Migration Service for one-time migrations, and use third-party or partner tools for change data capture (CDC) style migrations.

C.

Use data replication tools and CDC tools to enable migration.

D.

Use a combination of Database Migration Service and partner tools to support the data migration strategy.

Questions 12

Your application uses Cloud SQL for MySQL. Your users run reports on data that relies on near-real-time updates; however, the additional analytics workload caused excessive load on the primary database. You created a read replica for the analytics workloads, but now your users are complaining about the lag in data changes and that their reports are still slow. You need to improve the report performance and shorten the lag in data replication without making changes to the current reports. Which two approaches should you implement? (Choose two.)

Options:

A.

Create secondary indexes on the replica.

B.

Create additional read replicas, and partition your analytics users to use different read replicas.

C.

Disable replication on the read replica, and set the flag for parallel replication on the read replica. Re-enable replication and optimize performance by setting flags on the primary instance.

D.

Disable replication on the primary instance, and set the flag for parallel replication on the primary instance. Re-enable replication and optimize performance by setting flags on the read replica.

E.

Move your analytics workloads to BigQuery, and set up a streaming pipeline to move data and update BigQuery.

Questions 13

Your team recently released a new version of a highly consumed application to accommodate additional user traffic. Shortly after the release, you received an alert from your production monitoring team that there is consistently high replication lag between your primary instance and the read replicas of your Cloud SQL for MySQL instances. You need to resolve the replication lag. What should you do?

Options:

A.

Identify and optimize slow running queries, or set parallel replication flags.

B.

Stop all running queries, and re-create the replicas.

C.

Edit the primary instance to upgrade to a larger disk, and increase vCPU count.

D.

Edit the primary instance to add additional memory.

Questions 14

You are managing a Cloud SQL for MySQL environment in Google Cloud. You have deployed a primary instance in Zone A and a read replica instance in Zone B, both in the same region. You are notified that the replica instance in Zone B was unavailable for 10 minutes. You need to ensure that the read replica instance is still working. What should you do?

Options:

A.

Use the Google Cloud Console or gcloud CLI to manually create a new clone database.

B.

Use the Google Cloud Console or gcloud CLI to manually create a new failover replica from backup.

C.

Verify that the new replica is created automatically.

D.

Start the original primary instance and resume replication.

Questions 15

You are managing a set of Cloud SQL databases in Google Cloud. Regulations require that database backups reside in the region where the database is created. You want to minimize operational costs and administrative effort. What should you do?

Options:

A.

Configure the automated backups to use a regional Cloud Storage bucket as a custom location.

B.

Use the default configuration for the automated backups location.

C.

Disable automated backups, and create an on-demand backup routine to a regional Cloud Storage bucket.

D.

Disable automated backups, and configure serverless exports to a regional Cloud Storage bucket.
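
As an illustration, the automated-backup location can be pinned to a specific region with a single patch command; the instance name and region below are placeholders:

    # Keep automated backups in the same region as the instance.
    gcloud sql instances patch sales-db --backup-location=us-central1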

Questions 16

You need to issue a new server certificate because your old one is expiring. You need to avoid a restart of your Cloud SQL for MySQL instance. What should you do in your Cloud SQL instance?

Options:

A.

Issue a rollback, and download your server certificate.

B.

Create a new client certificate, and download it.

C.

Create a new server certificate, and download it.

D.

Reset your SSL configuration, and download your server certificate.
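
For reference, a server CA certificate can be rotated without restarting the instance. A rough outline with a placeholder instance name:

    # Create the upcoming server CA certificate.
    gcloud sql ssl server-ca-certs create --instance=shop-mysql
    # Download the new certificate for clients, then complete the rotation.
    gcloud sql ssl server-ca-certs list --format='value(cert)' --instance=shop-mysql > server-ca.pem
    gcloud sql ssl server-ca-certs rotate --instance=shop-mysql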

Questions 17

Your organization has a security policy to ensure that all Cloud SQL for PostgreSQL databases are secure. You want to protect sensitive data by using a key that meets specific locality or residency requirements. Your organization needs to control the key's lifecycle activities. You need to ensure that data is encrypted at rest and in transit. What should you do?

Options:

A.

Create the database with Google-managed encryption keys.

B.

Create the database with customer-managed encryption keys.

C.

Create the database persistent disk with Google-managed encryption keys.

D.

Create the database persistent disk with customer-managed encryption keys.
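
A hedged sketch of creating a CMEK-protected Cloud SQL instance; the instance name, machine size, region, and Cloud KMS key path are illustrative, and the key must already exist in the instance's region:

    # Assumption: all names below are placeholders.
    gcloud sql instances create secure-pg \
        --database-version=POSTGRES_14 --region=europe-west3 \
        --cpu=2 --memory=8GiB \
        --disk-encryption-key=projects/my-project/locations/europe-west3/keyRings/sql-ring/cryptoKeys/sql-key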

Questions 18

Your team is building an application that stores and analyzes streaming time series financial data. You need a database solution that can perform time series-based scans with sub-second latency. The solution must scale into the hundreds of terabytes and be able to write up to 10k records per second and read up to 200 MB per second. What should you do?

Options:

A.

Use Firestore.

B.

Use Bigtable.

C.

Use BigQuery.

D.

Use Cloud Spanner.

Questions 19

Your organization has a ticketing system that needs an online marketing analytics and reporting application. You need to select a relational database that can manage hundreds of terabytes of data to support this new application. Which database should you use?

Options:

A.

Cloud SQL

B.

BigQuery

C.

Cloud Spanner

D.

Bigtable

Questions 20

Your digital-native business runs its database workloads on Cloud SQL. Your website must be globally accessible 24/7. You need to prepare your Cloud SQL instance for high availability (HA). You want to follow Google-recommended practices. What should you do? (Choose two.)

Options:

A.

Set up manual backups.

B.

Create a PostgreSQL database on-premises as the HA option.

C.

Configure single zone availability for automated backups.

D.

Enable point-in-time recovery.

E.

Schedule automated backups.
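
To make the backup-related choices concrete, a rough sketch with placeholder values of enabling automated backups and point-in-time recovery on an existing instance:

    # Schedule daily automated backups and keep transaction logs for point-in-time recovery.
    gcloud sql instances patch web-db \
        --backup-start-time=03:00 --enable-point-in-time-recovery
    # MySQL instances use binary logs for point-in-time recovery instead:
    # gcloud sql instances patch web-db --enable-bin-log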

Questions 21

You are setting up a Bare Metal Solution environment. You need to update the operating system to the latest version. You need to connect the Bare Metal Solution environment to the internet so you can receive software updates. What should you do?

Options:

A.

Set up a static external IP address in your VPC network.

B.

Set up bring your own IP (BYOIP) in your VPC.

C.

Set up a Cloud NAT gateway on the Compute Engine VM.

D.

Set up Cloud NAT service.
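
As a sketch of the Cloud NAT approach, assuming a hypothetical VPC, router, and region attached to the Bare Metal Solution environment:

    # Create a Cloud Router and a Cloud NAT gateway in the VPC and region used by the environment.
    gcloud compute routers create bms-router --network=bms-vpc --region=us-central1
    gcloud compute routers nats create bms-nat \
        --router=bms-router --region=us-central1 \
        --auto-allocate-nat-external-ips --nat-all-subnet-ip-ranges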

Questions 22

Your customer is running a MySQL database on-premises with read replicas. The nightly incremental backups are expensive and add maintenance overhead. You want to follow Google-recommended practices to migrate the database to Google Cloud, and you need to ensure minimal downtime. What should you do?

Options:

A.

Create a Google Kubernetes Engine (GKE) cluster, install MySQL on the cluster, and then import the dump file.

B.

Use the mysqldump utility to take a backup of the existing on-premises database, and then import it into Cloud SQL.

C.

Create a Compute Engine VM, install MySQL on the VM, and then import the dump file.

D.

Create an external replica, and use Cloud SQL to synchronize the data to the replica.

Questions 23

You work for a large retail and ecommerce company that is starting to extend their business globally. Your company plans to migrate to Google Cloud. You want to use platforms that will scale easily, handle transactions with the least amount of latency, and provide a reliable customer experience. You need a storage layer for sales transactions and current inventory levels. You want to retain the same relational schema that your existing platform uses. What should you do?

Options:

A.

Store your data in Firestore in a multi-region location, and place your compute resources in one of the constituent regions.

B.

Deploy Cloud Spanner using a multi-region instance, and place your compute resources close to the default leader region.

C.

Build an in-memory cache in Memorystore, and deploy to the specific geographic regions where your application resides.

D.

Deploy a Bigtable instance with a cluster in one region and a replica cluster in another geographic region.

Questions 24

Your organization has hundreds of Cloud SQL for MySQL instances. You want to follow Google-recommended practices to optimize platform costs. What should you do?

Options:

A.

Use Query Insights to identify idle instances.

B.

Remove inactive user accounts.

C.

Run the Recommender API to identify overprovisioned instances.

D.

Build indexes on heavily accessed tables.
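
A hedged example of querying the Recommender for overprovisioned Cloud SQL instances; the project and region are placeholders:

    # List rightsizing recommendations for Cloud SQL instances in one region.
    gcloud recommender recommendations list \
        --project=my-project --location=us-central1 \
        --recommender=google.cloudsql.instance.OverprovisionedRecommender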

Questions 25

Your hotel booking company is expanding into Country A, where personally identifiable information (PII) must comply with regional data residency requirements and audits. You need to isolate customer data in Country A from the rest of the customer data. You want to design a multi-tenancy strategy to efficiently manage costs and operations. What should you do?

Options:

A.

Apply a schema data management pattern.

B.

Apply an instance data management pattern.

C.

Apply a table data management pattern.

D.

Apply a database data management pattern.

Questions 26

Your company wants to migrate an Oracle-based application to Google Cloud. The application team currently uses Oracle Recovery Manager (RMAN) to back up the database to tape for long-term retention (LTR). You need a cost-effective backup and restore solution that meets a 2-hour recovery time objective (RTO) and a 15-minute recovery point objective (RPO). What should you do?

Options:

A.

Migrate the Oracle databases to Bare Metal Solution for Oracle, and store backups on tapes on-premises.

B.

Migrate the Oracle databases to Bare Metal Solution for Oracle, and use Actifio to store backup files on Cloud Storage using the Nearline Storage class.

C.

Migrate the Oracle databases to Bare Metal Solution for Oracle, and back up the Oracle databases to Cloud Storage using the Standard Storage class.

D.

Migrate the Oracle databases to Compute Engine, and store backups on tapes on-premises.

Questions 27

You are the primary DBA of a Cloud SQL for PostgreSQL database that supports 6 enterprise applications in production. You used Cloud SQL Insights to identify inefficient queries and now need to identify the application that is originating the inefficient queries. You want to follow Google-recommended practices. What should you do?

Options:

A.

Shut down and restart each application.

B.

Write a utility to scan database query logs.

C.

Write a utility to scan application logs.

D.

Use query tags to add application-centric database monitoring.
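
For illustration, query tags are structured comments (sqlcommenter style) appended to each statement so Cloud SQL Insights can attribute queries to an application. A sketch with made-up connection details and tag values:

    # Hypothetical tagged query issued by one application; Insights groups queries by these tags.
    psql "host=10.0.0.5 dbname=orders user=reporter" -c \
        "SELECT order_id, total FROM orders /*application='billing-service',controller='daily_invoice_job'*/"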

Questions 28

Your ecommerce application connecting to your Cloud SQL for SQL Server is expected to have additional traffic due to the holiday weekend. You want to follow Google-recommended practices to set up alerts for CPU and memory metrics so you can be notified by text message at the first sign of potential issues. What should you do?

Options:

A.

Use a Cloud Function to pull CPU and memory metrics from your Cloud SQL instance and to call a custom service to send alerts.

B.

Use Error Reporting to monitor CPU and memory metrics and to configure SMS notification channels.

C.

Use Cloud Logging to set up a log sink for CPU and memory metrics and to configure a sink destination to send a message to Pub/Sub.

D.

Use Cloud Monitoring to set up an alerting policy for CPU and memory metrics and to configure SMS notification channels.

Questions 29

You have a large Cloud SQL for PostgreSQL instance. The database instance is not mission-critical, and you want to minimize operational costs. What should you do to lower the cost of backups in this environment?

Options:

A.

Set the automated backups to occur every other day to lower the frequency of backups.

B.

Change the storage tier of the automated backups from solid-state drive (SSD) to hard disk drive (HDD).

C.

Select a different region to store your backups.

D.

Reduce the number of automated backups that are retained to two (2).
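
As a sketch, the retained-backup count can be reduced with a single patch command; the instance name is a placeholder:

    # Keep only the two most recent automated backups.
    gcloud sql instances patch analytics-pg --retained-backups-count=2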

Questions 30

You are configuring the networking of a Cloud SQL instance. The only application that connects to this database resides on a Compute Engine VM in the same project as the Cloud SQL instance. The VM and the Cloud SQL instance both use the same VPC network, and both have an external (public) IP address and an internal (private) IP address. You want to improve network security. What should you do?

Options:

A.

Disable and remove the internal IP address assignment.

B.

Disable both the external IP address and the internal IP address, and instead rely on Private Google Access.

C.

Specify an authorized network with the CIDR range of the VM.

D.

Disable and remove the external IP address assignment.
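
For reference, removing the public IP is a single patch operation (instance name is illustrative); the VM keeps connecting over the private IP in the shared VPC:

    # Drop the external (public) IP assignment; traffic continues over the private IP.
    gcloud sql instances patch orders-db --no-assign-ip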

Questions 31

You are running a mission-critical application on a Cloud SQL for PostgreSQL database with a multi-zonal setup. The primary and read replica instances are in the same region but in different zones. You need to ensure that you split the application load between both instances. What should you do?

Options:

A.

Use Cloud Load Balancing for load balancing between the Cloud SQL primary and read replica instances.

B.

Use PgBouncer to set up database connection pooling between the Cloud SQL primary and read replica instances.

C.

Use HTTP(S) Load Balancing for database connection pooling between the Cloud SQL primary and read replica instances.

D.

Use the Cloud SQL Auth proxy for database connection pooling between the Cloud SQL primary and read replica instances.

Questions 32

During an internal audit, you realized that one of your Cloud SQL for MySQL instances does not have high availability (HA) enabled. You want to follow Google-recommended practices to enable HA on your existing instance. What should you do?

Options:

A.

Create a new Cloud SQL for MySQL instance, enable HA, and use the export and import option to migrate your data.

B.

Create a new Cloud SQL for MySQL instance, enable HA, and use Cloud Data Fusion to migrate your data.

C.

Use the gcloud instances patch command to update your existing Cloud SQL for MySQL instance.

D.

Shut down your existing Cloud SQL for MySQL instance, and enable HA.
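
A minimal sketch of enabling HA in place on an existing instance, assuming a hypothetical instance named shop-mysql; the full command group is gcloud sql instances patch:

    # Convert a zonal instance to a regional (HA) configuration in place.
    gcloud sql instances patch shop-mysql --availability-type=REGIONAL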

Questions 33

Your company is developing a new global transactional application that must be ACID-compliant and have 99.999% availability. You are responsible for selecting the appropriate Google Cloud database to serve as a datastore for this new application. What should you do?

Options:

A.

Use Firestore.

B.

Use Cloud Spanner.

C.

Use Cloud SQL.

D.

Use Bigtable.

Questions 34

Your organization is running a Firestore-backed Firebase app that serves the same top ten news stories on a daily basis to a large global audience. You want to optimize content delivery while decreasing cost and latency. What should you do?

Options:

A.

Enable serializable isolation in the Firebase app.

B.

Deploy a US multi-region Firestore location.

C.

Build a Firestore bundle, and deploy bundles to Cloud CDN.

D.

Create a Firestore index on the news story date.

Questions 35

Your project is using Bigtable to store data that should not be accessed from the public internet under any circumstances, even if the requestor has a valid service account key. You need to secure access to this data. What should you do?

Options:

A.

Use Identity and Access Management (IAM) for Bigtable access control.

B.

Use VPC Service Controls to create a trusted network for the Bigtable service.

C.

Use customer-managed encryption keys (CMEK).

D.

Use Google Cloud Armor to add IP addresses to an allowlist.

Questions 36

You host an application in Google Cloud. The application is located in a single region and uses Cloud SQL for transactional data. Most of your users are located in the same time zone and expect the application to be available 7 days a week, from 6 AM to 10 PM. You want to ensure regular maintenance updates to your Cloud SQL instance without creating downtime for your users. What should you do?

Options:

A.

Configure a maintenance window during a period when no users will be on the system. Control the order of updates by setting non-production instances to update earlier and production instances to update later.

B.

Create your database with one primary node and one read replica in the region.

C.

Enable maintenance notifications for users, and reschedule maintenance activities to a specific time after notifications have been sent.

D.

Configure your Cloud SQL instance with high availability enabled.
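
To make the maintenance-window option concrete, a rough sketch with placeholder instance names and an off-hours slot for this user base:

    # Schedule maintenance for Sundays in an off-hours window.
    gcloud sql instances patch app-db \
        --maintenance-window-day=SUN --maintenance-window-hour=9
    # Non-production instances can opt into the earlier release channel so they update first.
    gcloud sql instances patch staging-db --maintenance-release-channel=preview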

Questions 37

You need to perform a one-time migration of data from a running Cloud SQL for MySQL instance in the us-central1 region to a new Cloud SQL for MySQL instance in the us-east1 region. You want to follow Google-recommended practices to minimize performance impact on the currently running instance. What should you do?

Options:

A.

Create and run a Dataflow job that uses JdbcIO to copy data from one Cloud SQL instance to another.

B.

Create two Datastream connection profiles, and use them to create a stream from one Cloud SQL instance to another.

C.

Create a SQL dump file in Cloud Storage using a temporary instance, and then use that file to import into a new instance.

D.

Create a CSV file by running the SQL statement SELECT...INTO OUTFILE, copy the file to a Cloud Storage bucket, and import it into a new instance.
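
A sketch of the dump-based approach with hypothetical instance, bucket, and database names; the export is taken from a temporary clone so the running primary is not loaded:

    # Export from the temporary instance to Cloud Storage, then import into the new region's instance.
    gcloud sql export sql temp-copy gs://my-migration-bucket/app.sql.gz --database=app
    gcloud sql import sql new-useast1-instance gs://my-migration-bucket/app.sql.gz --database=app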

Questions 38

Your organization has a busy transactional Cloud SQL for MySQL instance. Your analytics team needs access to the data so they can build monthly sales reports. You need to provide data access to the analytics team without adversely affecting performance. What should you do?

Options:

A.

Create a read replica of the database, provide the database IP address, username, and password to the analytics team, and grant read access to required tables to the team.

B.

Create a read replica of the database, enable the cloudsql.iam_authentication flag on the replica, and grant read access to required tables to the analytics team.

C.

Enable the cloudsql.iam_authentication flag on the primary database instance, and grant read access to required tables to the analytics team.

D.

Provide the database IP address, username, and password of the primary database instance to the analytics team, and grant read access to required tables to the team.
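
For context, creating a read replica for the analytics team can be sketched as follows; instance names are placeholders:

    # Create a read replica of the busy primary for analytics queries.
    gcloud sql instances create sales-db-replica \
        --master-instance-name=sales-db --region=us-central1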

Questions 39

Your company is developing a global ecommerce website on Google Cloud. Your development team is working on a shopping cart service that is durable and elastically scalable with live traffic. Business disruptions from unplanned downtime are expected to be less than 5 minutes per month. In addition, the application needs to have very low latency writes. You need a data storage solution that has high write throughput and provides 99.99% uptime. What should you do?

Options:

A.

Use Cloud SQL for data storage.

B.

Use Cloud Spanner for data storage.

C.

Use Memorystore for data storage.

D.

Use Bigtable for data storage.

Exam Code: Professional-Cloud-Database-Engineer
Exam Name: Google Cloud Certified - Professional Cloud Database Engineer
Last Update: May 5, 2024
Questions: 132