
Associate-Data-Practitioner Sample Questions and Answers

Question 4

Your organization has highly sensitive data that gets updated once a day and is stored across multiple datasets in BigQuery. You need to provide a new data analyst access to query specific data in BigQuery while preventing access to sensitive data. What should you do?

Options:

A. Grant the data analyst the BigQuery Job User IAM role in the Google Cloud project.
B. Create a materialized view with the limited data in a new dataset. Grant the data analyst the BigQuery Data Viewer IAM role on the dataset and the BigQuery Job User IAM role in the Google Cloud project.
C. Create a new Google Cloud project, and copy the limited data into a BigQuery table. Grant the data analyst the BigQuery Data Owner IAM role in the new Google Cloud project.
D. Grant the data analyst the BigQuery Data Viewer IAM role in the Google Cloud project.
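To make option B concrete, a minimal sketch with the BigQuery Python client. The project, dataset, table, column, and user names are all invented, and the dataset holding the view is assumed to exist already; the project-level BigQuery Job User role would be granted through IAM rather than SQL.

from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # hypothetical project

# A view in a separate dataset exposes only the non-sensitive columns;
# since the source data updates once a day, a materialized view stays fresh.
client.query("""
    CREATE MATERIALIZED VIEW analyst_ds.orders_limited AS
    SELECT order_id, order_date, region
    FROM sales_ds.orders
""").result()

# Dataset-level read access on the limited dataset only.
client.query("""
    GRANT `roles/bigquery.dataViewer`
    ON SCHEMA analyst_ds
    TO "user:analyst@example.com"
""").result()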

Question 5

Your organization has several datasets in BigQuery. The datasets need to be shared with your external partners so that they can run SQL queries without needing to copy the data to their own projects. You have organized each partner’s data in its own BigQuery dataset. Each partner should be able to access only their data. You want to share the data while following Google-recommended practices. What should you do?

Options:

A. Use Analytics Hub to create a listing on a private data exchange for each partner dataset. Allow each partner to subscribe to their respective listings.
B. Create a Dataflow job that reads from each BigQuery dataset and pushes the data into a dedicated Pub/Sub topic for each partner. Grant each partner the pubsub.subscriber IAM role.
C. Export the BigQuery data to a Cloud Storage bucket. Grant the partners the storage.objectUser IAM role on the bucket.
D. Grant the partners the bigquery.user IAM role on the BigQuery project.
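To make option A concrete, a hedged sketch using the Analytics Hub client library (google-cloud-bigquery-analyticshub). The exchange, listing, project, and dataset names are placeholders, and the exact field names should be checked against the client version you use.

from google.cloud import bigquery_analyticshub_v1 as ah

client = ah.AnalyticsHubServiceClient()
parent = "projects/my-project/locations/us"  # hypothetical

# One private exchange; each partner gets subscriber access on their
# own listing only, so each sees just their own dataset.
exchange = client.create_data_exchange(
    parent=parent,
    data_exchange_id="partner_exchange",
    data_exchange=ah.DataExchange(display_name="Partner exchange"),
)

listing = client.create_listing(
    parent=exchange.name,
    listing_id="partner_a",
    listing=ah.Listing(
        display_name="Partner A data",
        bigquery_dataset=ah.Listing.BigQueryDatasetSource(
            dataset="projects/my-project/datasets/partner_a_ds"
        ),
    ),
)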

Question 6

Your organization’s business analysts require near real-time access to streaming data. However, they are reporting that their dashboard queries are loading slowly. After investigating BigQuery query performance, you discover the slow dashboard queries perform several joins and aggregations.

You need to improve the dashboard loading time and ensure that the dashboard data is as up-to-date as possible. What should you do?

Options:

A. Disable BigQuery query result caching.
B. Modify the schema to use parameterized data types.
C. Create a scheduled query to calculate and store intermediate results.
D. Create materialized views.
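To make option D concrete: a materialized view can precompute the expensive aggregation so dashboard queries read stored results, while BigQuery incrementally refreshes the view as rows stream into the base table. A minimal sketch with invented dataset, table, and column names:

from google.cloud import bigquery

client = bigquery.Client()

# Dashboard queries hit this precomputed aggregate instead of
# re-aggregating the raw streaming table on every load.
client.query("""
    CREATE MATERIALIZED VIEW dashboards.sales_by_region AS
    SELECT region, product_id,
           SUM(amount) AS total_amount,
           COUNT(*) AS order_count
    FROM streaming.sales_events
    GROUP BY region, product_id
""").result()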

Question 7

You are a data analyst at your organization. You have been given a BigQuery dataset that includes customer information. The dataset contains inconsistencies and errors, such as missing values, duplicates, and formatting issues. You need to effectively and quickly clean the data. What should you do?

Options:

A. Develop a Dataflow pipeline to read the data from BigQuery, apply data quality rules and transformations, and write the cleaned data back to BigQuery.
B. Use Cloud Data Fusion to create a data pipeline that reads the data from BigQuery, performs data quality transformations, and writes the clean data back to BigQuery.
C. Export the data from BigQuery to CSV files. Resolve the errors using a spreadsheet editor, and re-import the cleaned data into BigQuery.
D. Use BigQuery's built-in functions to perform data quality transformations.
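To make option D concrete, one statement of built-in functions can handle all three problem classes. Table and column names below are invented:

from google.cloud import bigquery

client = bigquery.Client()

# ROW_NUMBER drops duplicates, IFNULL fills missing values, and
# string functions normalize formatting.
client.query("""
    CREATE OR REPLACE TABLE crm.customers_clean AS
    SELECT
      customer_id,
      INITCAP(TRIM(full_name)) AS full_name,   -- normalize formatting
      LOWER(TRIM(email)) AS email,
      IFNULL(country, 'unknown') AS country    -- fill missing values
    FROM (
      SELECT *,
             ROW_NUMBER() OVER (PARTITION BY customer_id
                                ORDER BY updated_at DESC) AS rn
      FROM crm.customers_raw
    )
    WHERE rn = 1                               -- keep one row per customer
""").result()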

Question 8

You used BigQuery ML to build a customer purchase propensity model six months ago. You want to compare the current serving data with the historical serving data to determine whether you need to retrain the model. What should you do?

Options:

A. Compare the two different models.
B. Evaluate the data skewness.
C. Evaluate data drift.
D. Compare the confusion matrix.
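Evaluating data drift (option C) amounts to comparing the distribution of current serving features against the historical serving data. A minimal sketch using a two-sample Kolmogorov-Smirnov test; the tables and feature column are invented, and a managed service such as Vertex AI Model Monitoring can perform the same comparison for you:

from google.cloud import bigquery
from scipy import stats

client = bigquery.Client()

# Pull the same feature from the historical and current serving logs.
# (to_dataframe requires the pandas/db-dtypes extras.)
hist = client.query(
    "SELECT basket_value FROM serving.logs_history"
).to_dataframe()
curr = client.query(
    "SELECT basket_value FROM serving.logs_current"
).to_dataframe()

# A small p-value means the serving distribution has shifted away from
# the historical one -- a signal that the model should be retrained.
statistic, p_value = stats.ks_2samp(hist["basket_value"], curr["basket_value"])
print(f"KS statistic={statistic:.3f}, p-value={p_value:.4f}")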

Question 9

Your company is setting up an enterprise business intelligence platform. You need to limit data access between many different teams while following the Google-recommended approach. What should you do first?

Options:

A. Create a separate Looker Studio report for each team, and share each report with the individuals within each team.
B. Create one Looker Studio report with multiple pages, and add each team's data as a separate data source to the report.
C. Create a Looker (Google Cloud core) instance, and create a separate dashboard for each team.
D. Create a Looker (Google Cloud core) instance, and configure different Looker groups for each team.

Question 10

You work for an online retail company. Your company collects customer purchase data in CSV files and pushes them to Cloud Storage every 10 minutes. The data needs to be transformed and loaded into BigQuery for analysis. The transformation involves cleaning the data, removing duplicates, and enriching it with product information from a separate table in BigQuery. You need to implement a low-overhead solution that initiates data processing as soon as the files are loaded into Cloud Storage. What should you do?

Options:

A. Use Cloud Composer sensors to detect files loading in Cloud Storage. Create a Dataproc cluster, and use a Composer task to execute a job on the cluster to process and load the data into BigQuery.
B. Schedule a directed acyclic graph (DAG) in Cloud Composer to run hourly to batch load the data from Cloud Storage to BigQuery, and process the data in BigQuery using SQL.
C. Use Dataflow to implement a streaming pipeline using an OBJECT_FINALIZE notification from Pub/Sub to read the data from Cloud Storage, perform the transformations, and write the data to BigQuery.
D. Create a Cloud Data Fusion job to process and load the data from Cloud Storage into BigQuery. Create an OBJECT_FINALIZE notification in Pub/Sub, and trigger a Cloud Run function to start the Cloud Data Fusion job as soon as new files are loaded.
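To make option C concrete, a hedged Apache Beam sketch: Pub/Sub delivers the OBJECT_FINALIZE notification, the pipeline reads the new CSV object, transforms the rows, and streams them into BigQuery. The subscription, bucket layout, schema, and table names are invented, the destination table is assumed to exist, and the deduplication and BigQuery product-enrichment joins are elided:

import json
import apache_beam as beam
from apache_beam.io.textio import ReadAllFromText
from apache_beam.options.pipeline_options import PipelineOptions

def to_gcs_path(message: bytes) -> str:
    # OBJECT_FINALIZE notifications carry the bucket and object name.
    event = json.loads(message.decode("utf-8"))
    return f"gs://{event['bucket']}/{event['name']}"

def parse_and_clean(line: str):
    # Placeholder transformation: parse the CSV row, drop malformed rows.
    fields = line.split(",")
    if len(fields) == 3:
        yield {"order_id": fields[0], "sku": fields[1], "amount": float(fields[2])}

opts = PipelineOptions(streaming=True)
with beam.Pipeline(options=opts) as p:
    (
        p
        | "ReadNotifications" >> beam.io.ReadFromPubSub(
            subscription="projects/my-project/subscriptions/gcs-finalize")
        | "NotificationToPath" >> beam.Map(to_gcs_path)
        | "ReadNewFiles" >> ReadAllFromText()
        | "Transform" >> beam.FlatMap(parse_and_clean)
        | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
            "my-project:retail.orders",
            create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER,
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND)
    )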

Question 11

You are a database administrator managing sales transaction data by region stored in a BigQuery table. You need to ensure that each sales representative can only see the transactions in their region. What should you do?

Options:

A. Add a policy tag in BigQuery.
B. Create a row-level access policy.
C. Create a data masking rule.
D. Grant the appropriate IAM permissions on the dataset.
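To make option B concrete, a minimal sketch with invented table, group, and region values; one policy is created per region:

from google.cloud import bigquery

client = bigquery.Client()

# Reps in the EMEA group see only EMEA rows; other regions get their
# own policies on the same table.
client.query("""
    CREATE ROW ACCESS POLICY emea_reps_only
    ON sales.transactions
    GRANT TO ("group:emea-reps@example.com")
    FILTER USING (region = "EMEA")
""").result()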

Question 12

Your organization has decided to migrate their existing enterprise data warehouse to BigQuery. The existing data pipeline tools already support connectors to BigQuery. You need to identify a data migration approach that optimizes migration speed. What should you do?

Options:

A. Create a temporary file system to facilitate data transfer from the existing environment to Cloud Storage. Use Storage Transfer Service to migrate the data into BigQuery.
B. Use the Cloud Data Fusion web interface to build data pipelines. Create a directed acyclic graph (DAG) that facilitates pipeline orchestration.
C. Use the existing data pipeline tool’s BigQuery connector to reconfigure the data mapping.
D. Use the BigQuery Data Transfer Service to recreate the data pipeline and migrate the data into BigQuery.

Question 13

You need to create a data pipeline that streams event information from applications in multiple Google Cloud regions into BigQuery for near real-time analysis. The data requires transformation before loading. You want to create the pipeline using a visual interface. What should you do?

Options:

A. Push event information to a Pub/Sub topic. Create a Dataflow job using the Dataflow job builder.
B. Push event information to a Pub/Sub topic. Create a Cloud Run function to subscribe to the Pub/Sub topic, apply transformations, and insert the data into BigQuery.
C. Push event information to a Pub/Sub topic. Create a BigQuery subscription in Pub/Sub.
D. Push event information to Cloud Storage, and create an external table in BigQuery. Create a BigQuery scheduled job that executes once each day to apply transformations.

Question 14

You work for a financial services company that handles highly sensitive data. Due to regulatory requirements, your company must retain complete and manual control of data encryption. Which type of keys should you recommend for data storage?

Options:

A. Use customer-supplied encryption keys (CSEK).
B. Use a dedicated third-party key management system (KMS) chosen by the company.
C. Use Google-managed encryption keys (GMEK).
D. Use customer-managed encryption keys (CMEK).
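To make option A concrete: with CSEK, your company generates and holds the raw key, supplies it on every request, and Google stores only a hash of it. A minimal sketch with the Cloud Storage client; the bucket, object, and file names are invented, and in practice the key would come from your own key management system rather than os.urandom:

import os
from google.cloud import storage

# A 32-byte AES-256 key generated and managed entirely by your company.
csek = os.urandom(32)

client = storage.Client()
bucket = client.bucket("finance-archive")  # hypothetical bucket

# The key is supplied with the request; Google keeps only a hash of it.
blob = bucket.blob("ledgers/2024.csv", encryption_key=csek)
blob.upload_from_filename("2024.csv")

# Reads require supplying the same key again.
blob.download_to_filename("/tmp/2024.csv")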

Question 15

You have millions of customer feedback records stored in BigQuery. You want to summarize the data by using the large language model (LLM) Gemini. You need to plan and execute this analysis using the most efficient approach. What should you do?

Options:

A. Query the BigQuery table from within a Python notebook, use the Gemini API to summarize the data within the notebook, and store the summaries in BigQuery.
B. Use a BigQuery ML model to pre-process the text data, export the results to Cloud Storage, and use the Gemini API to summarize the pre-processed data.
C. Create a BigQuery Cloud resource connection to a remote model in Vertex AI, and use Gemini to summarize the data.
D. Export the raw BigQuery data to a CSV file, upload it to Cloud Storage, and use the Gemini API to summarize the data.
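To make option C concrete, a hedged sketch: a remote model over a Vertex AI connection lets Gemini run against the table with SQL, so the records never leave BigQuery. The connection name, endpoint string, and dataset, model, table, and column names are all placeholders; the exact endpoint depends on the Gemini version available to you:

from google.cloud import bigquery

client = bigquery.Client()

# Remote model backed by a Vertex AI Gemini endpoint, via a Cloud
# resource connection created beforehand.
client.query("""
    CREATE OR REPLACE MODEL feedback.gemini_model
    REMOTE WITH CONNECTION `my-project.us.vertex-conn`
    OPTIONS (endpoint = 'gemini-1.5-flash')
""").result()

# Summarize directly over the table, millions of rows at a time.
rows = client.query("""
    SELECT ml_generate_text_llm_result
    FROM ML.GENERATE_TEXT(
      MODEL feedback.gemini_model,
      (SELECT CONCAT('Summarize this review: ', review_text) AS prompt
       FROM feedback.customer_reviews),
      STRUCT(TRUE AS flatten_json_output))
""").result()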

Question 16

Your company has developed a website that allows users to upload and share video files. These files are most frequently accessed and shared when they are initially uploaded. Over time, the files are accessed and shared less frequently, although some old video files may remain very popular. You need to design a storage system that is simple and cost-effective. What should you do?

Options:

A. Create a single-region bucket with custom Object Lifecycle Management policies based on upload date.
B. Create a single-region bucket with Autoclass enabled.
C. Create a single-region bucket. Configure a Cloud Scheduler job that runs every 24 hours and changes the storage class based on upload date.
D. Create a single-region bucket with Archive as the default storage class.
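To make option B concrete, a minimal sketch; the bucket name and location are invented, and the autoclass property follows the current python-storage client:

from google.cloud import storage

client = storage.Client()

# Autoclass moves each object between storage classes based on its own
# access pattern, so old videos that become popular again move back
# to a hot class automatically.
bucket = storage.Bucket(client, name="video-uploads")  # hypothetical
bucket.autoclass_enabled = True
client.create_bucket(bucket, location="us-central1")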

Question 17

You manage an ecommerce website that has a diverse range of products. You need to forecast future product demand accurately to ensure that your company has sufficient inventory to meet customer needs and avoid stockouts. Your company's historical sales data is stored in a BigQuery table. You need to create a scalable solution that takes into account the seasonality and historical data to predict product demand. What should you do?

Options:

A. Use the historical sales data to train and create a BigQuery ML time series model. Use the ML.FORECAST function call to output the predictions into a new BigQuery table.
B. Use Colab Enterprise to create a Jupyter notebook. Use the historical sales data to train a custom prediction model in Python.
C. Use the historical sales data to train and create a BigQuery ML linear regression model. Use the ML.PREDICT function call to output the predictions into a new BigQuery table.
D. Use the historical sales data to train and create a BigQuery ML logistic regression model. Use the ML.PREDICT function call to output the predictions into a new BigQuery table.
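Option A corresponds to BigQuery ML's ARIMA_PLUS model type, which handles seasonality and trains one series per product ID. A sketch with invented dataset, table, and column names:

from google.cloud import bigquery

client = bigquery.Client()

# Train one time-series model per product from historical sales.
client.query("""
    CREATE OR REPLACE MODEL sales.demand_model
    OPTIONS (
      model_type = 'ARIMA_PLUS',
      time_series_timestamp_col = 'sale_date',
      time_series_data_col = 'units_sold',
      time_series_id_col = 'product_id'
    ) AS
    SELECT sale_date, units_sold, product_id
    FROM sales.history
""").result()

# Materialize 30 days of forecasts into a new table.
client.query("""
    CREATE OR REPLACE TABLE sales.demand_forecast AS
    SELECT *
    FROM ML.FORECAST(MODEL sales.demand_model,
                     STRUCT(30 AS horizon, 0.9 AS confidence_level))
""").result()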

Question 18

Your company is adopting BigQuery as their data warehouse platform. Your team has experienced Python developers. You need to recommend a fully-managed tool to build batch ETL processes that extract data from various source systems, transform the data using a variety of Google Cloud services, and load the transformed data into BigQuery. You want this tool to leverage your team’s Python skills. What should you do?

Options:

A. Use Dataform with assertions.
B. Deploy Cloud Data Fusion and included plugins.
C. Use Cloud Composer with pre-built operators.
D. Use Dataflow and pre-built templates.

Question 19

You are working with a large dataset of customer reviews stored in Cloud Storage. The dataset contains several inconsistencies, such as missing values, incorrect data types, and duplicate entries. You need to clean the data to ensure that it is accurate and consistent before using it for analysis. What should you do?

Options:

A. Use the PythonOperator in Cloud Composer to clean the data and load it into BigQuery. Use SQL for analysis.
B. Use BigQuery to batch load the data into BigQuery. Use SQL for cleaning and analysis.
C. Use Storage Transfer Service to move the data to a different Cloud Storage bucket. Use event triggers to invoke Cloud Run functions to load the data into BigQuery. Use SQL for analysis.
D. Use Cloud Run functions to clean the data and load it into BigQuery. Use SQL for analysis.
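To make option B concrete, the load half might look like this (bucket, dataset, and table names invented); the SQL cleaning half can reuse the ROW_NUMBER pattern sketched under Question 7:

from google.cloud import bigquery

client = bigquery.Client()

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
    autodetect=True,  # infer the schema from the files
)

load_job = client.load_table_from_uri(
    "gs://reviews-bucket/raw/*.csv",   # hypothetical source files
    "my-project.reviews.raw_reviews",  # staging table, cleaned with SQL next
    job_config=job_config,
)
load_job.result()  # wait for the batch load to finish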

Question 20

Your company currently uses an on-premises network file system (NFS) and is migrating data to Google Cloud. You want to be able to control how much bandwidth is used by the data migration while capturing detailed reporting on the migration status. What should you do?

Options:

A. Use a Transfer Appliance.
B. Use Cloud Storage FUSE.
C. Use Storage Transfer Service.
D. Use gcloud storage commands.
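To make option C concrete, a heavily hedged sketch: Storage Transfer Service agent pools carry a bandwidth cap for on-premises transfer agents, and each job's transfer operations provide the detailed migration reporting. All project IDs, pool names, paths, and the bandwidth figure are placeholders; check the field names against your google-cloud-storage-transfer version:

from google.cloud import storage_transfer_v1 as st

client = st.StorageTransferServiceClient()

# Agent pools carry the bandwidth limit applied to on-premises agents.
pool = client.create_agent_pool(
    request={
        "project_id": "my-project",
        "agent_pool_id": "nfs-pool",
        "agent_pool": st.AgentPool(
            bandwidth_limit=st.AgentPool.BandwidthLimit(limit_mbps=200)
        ),
    }
)

# Transfer job from the NFS mount (POSIX source) into Cloud Storage.
job = client.create_transfer_job(
    request={
        "transfer_job": st.TransferJob(
            project_id="my-project",
            status=st.TransferJob.Status.ENABLED,
            transfer_spec=st.TransferSpec(
                source_agent_pool_name=pool.name,
                posix_data_source=st.PosixFilesystem(root_directory="/mnt/nfs"),
                gcs_data_sink=st.GcsData(bucket_name="migration-bucket"),
            ),
        )
    }
)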

Question 21

Your team uses the Google Ads platform to visualize metrics. You want to export the data to BigQuery to get more granular insights. You need to execute a one-time transfer of historical data and automatically update data daily. You want a solution that is low-code, serverless, and requires minimal maintenance. What should you do?

Options:

A. Export the historical data to BigQuery by using BigQuery Data Transfer Service. Use Cloud Composer for daily automation.
B. Export the historical data to Cloud Storage by using Storage Transfer Service. Use Pub/Sub to trigger a Dataflow template that loads data for daily automation.
C. Export the historical data as a CSV file. Import the file into BigQuery for analysis. Use Cloud Composer for daily automation.
D. Export the historical data to BigQuery by using BigQuery Data Transfer Service. Use BigQuery Data Transfer Service for daily automation.
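To make option D concrete, a hedged sketch with the Data Transfer Service client: one transfer config handles the daily schedule, and a manual backfill run covers the historical window. The google_ads data source ID and its params are assumptions, and the customer ID, project, and dataset are placeholders:

from datetime import datetime, timedelta, timezone
from google.cloud import bigquery_datatransfer_v1 as dts

client = dts.DataTransferServiceClient()
parent = "projects/my-project/locations/us"  # hypothetical

config = client.create_transfer_config(
    parent=parent,
    transfer_config=dts.TransferConfig(
        destination_dataset_id="ads_reporting",
        display_name="Google Ads daily transfer",
        data_source_id="google_ads",             # assumed data source ID
        params={"customer_id": "123-456-7890"},  # placeholder Ads account
        schedule="every 24 hours",               # daily automation
    ),
)

# One-time historical backfill over the past year.
now = datetime.now(timezone.utc)
client.start_manual_transfer_runs(
    request={
        "parent": config.name,
        "requested_time_range": {
            "start_time": now - timedelta(days=365),
            "end_time": now,
        },
    }
)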

Question 22

Your team is building several data pipelines that contain a collection of complex tasks and dependencies that you want to execute on a schedule, in a specific order. The tasks and dependencies consist of files in Cloud Storage, Apache Spark jobs, and data in BigQuery. You need to design a system that can schedule and automate these data processing tasks using a fully managed approach. What should you do?

Options:

A. Use Cloud Scheduler to schedule the jobs to run.
B. Use Cloud Tasks to schedule and run the jobs asynchronously.
C. Create directed acyclic graphs (DAGs) in Cloud Composer. Use the appropriate operators to connect to Cloud Storage, Spark, and BigQuery.
D. Create directed acyclic graphs (DAGs) in Apache Airflow deployed on Google Kubernetes Engine. Use the appropriate operators to connect to Cloud Storage, Spark, and BigQuery.
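To make option C concrete, a trimmed Airflow DAG wiring the three systems with standard Google provider operators; the project, bucket, cluster, table, and query names are all invented:

from datetime import datetime
from airflow import DAG
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator
from airflow.providers.google.cloud.operators.dataproc import DataprocSubmitJobOperator
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

with DAG("nightly_pipeline", start_date=datetime(2025, 1, 1),
         schedule_interval="@daily", catchup=False) as dag:

    # Spark transformation on Dataproc reads and writes Cloud Storage.
    spark_job = DataprocSubmitJobOperator(
        task_id="transform_with_spark",
        region="us-central1",
        project_id="my-project",
        job={
            "placement": {"cluster_name": "etl-cluster"},
            "pyspark_job": {"main_python_file_uri": "gs://my-bucket/jobs/transform.py"},
        },
    )

    # Load the Spark output files into a BigQuery staging table.
    load = GCSToBigQueryOperator(
        task_id="load_to_bq",
        bucket="my-bucket",
        source_objects=["output/part-*.csv"],
        destination_project_dataset_table="my-project.warehouse.staging",
        write_disposition="WRITE_TRUNCATE",
    )

    # Final aggregation runs as a BigQuery job.
    aggregate = BigQueryInsertJobOperator(
        task_id="aggregate",
        configuration={"query": {
            "query": "SELECT region, SUM(amount) AS total FROM warehouse.staging GROUP BY region",
            "useLegacySql": False,
        }},
    )

    spark_job >> load >> aggregate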

Question 23

You have a BigQuery dataset containing sales data. This data is actively queried for the first 6 months. After that, the data is not queried but needs to be retained for 3 years for compliance reasons. You need to implement a data management strategy that meets access and compliance requirements, while keeping cost and administrative overhead to a minimum. What should you do?

Options:

A. Use BigQuery long-term storage for the entire dataset. Set up a Cloud Run function to delete the data from BigQuery after 3 years.
B. Partition a BigQuery table by month. After 6 months, export the data to Coldline storage. Implement a lifecycle policy to delete the data from Cloud Storage after 3 years.
C. Set up a scheduled query to export the data to Cloud Storage after 6 months. Write a stored procedure to delete the data from BigQuery after 3 years.
D. Store all data in a single BigQuery table without partitioning or lifecycle policies.
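To make option B concrete, the Cloud Storage half plus the export statement might look like this; the bucket is assumed to default to Coldline, the names and dates are placeholders, and dropping the exported partition afterward is elided:

from google.cloud import bigquery, storage

# Cloud Storage side: lifecycle rule deletes exported objects
# 3 years after creation.
gcs = storage.Client()
bucket = gcs.get_bucket("sales-archive")  # hypothetical Coldline bucket
bucket.add_lifecycle_delete_rule(age=3 * 365)
bucket.patch()

# BigQuery side: export a partition that just turned 6 months old.
bq = bigquery.Client()
bq.query("""
    EXPORT DATA OPTIONS (
      uri = 'gs://sales-archive/orders/2024-01/*.parquet',
      format = 'PARQUET'
    ) AS
    SELECT * FROM sales.orders
    WHERE DATE_TRUNC(order_date, MONTH) = DATE '2024-01-01'
""").result()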

Question 24

Your organization needs to store historical customer order data. The data will only be accessed once a month for analysis and must be readily available within a few seconds when it is accessed. You need to choose a storage class that minimizes storage costs while ensuring that the data can be retrieved quickly. What should you do?

Options:

A. Store the data in Cloud Storage using Nearline storage.
B. Store the data in Cloud Storage using Coldline storage.
C. Store the data in Cloud Storage using Standard storage.
D. Store the data in Cloud Storage using Archive storage.
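To make option A concrete, a minimal sketch; the bucket name and location are invented:

from google.cloud import storage

client = storage.Client()

# Nearline is priced for roughly once-a-month access while still
# serving reads immediately, with millisecond latency.
bucket = storage.Bucket(client, name="order-history")  # hypothetical
bucket.storage_class = "NEARLINE"
client.create_bucket(bucket, location="us-central1")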

Question 25

You have a Cloud SQL for PostgreSQL database that stores sensitive historical financial data. You need to ensure that the data is uncorrupted and recoverable in the event that the primary region is destroyed. The data is valuable, so you need to prioritize recovery point objective (RPO) over recovery time objective (RTO). You want to recommend a solution that minimizes latency for primary read and write operations. What should you do?

Options:

A. Configure the Cloud SQL for PostgreSQL instance for multi-region backup locations.
B. Configure the Cloud SQL for PostgreSQL instance for regional availability (HA). Back up the Cloud SQL for PostgreSQL database hourly to a Cloud Storage bucket in a different region.
C. Configure the Cloud SQL for PostgreSQL instance for regional availability (HA) with synchronous replication to a secondary instance in a different zone.
D. Configure the Cloud SQL for PostgreSQL instance for regional availability (HA) with asynchronous replication to a secondary instance in a different region.

Question 26

You manage a web application that stores data in a Cloud SQL database. You need to improve the read performance of the application by offloading read traffic from the primary database instance. You want to implement a solution that minimizes effort and cost. What should you do?

Options:

A. Use Cloud CDN to cache frequently accessed data.
B. Store frequently accessed data in a Memorystore instance.
C. Migrate the database to a larger Cloud SQL instance.
D. Enable automatic backups, and create a read replica of the Cloud SQL instance.
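To make option D concrete, a hedged sketch using the Cloud SQL Admin API via the Google API discovery client; the project, instance names, region, and tier are placeholders, and credentials come from Application Default Credentials:

from googleapiclient import discovery

service = discovery.build("sqladmin", "v1beta4")

# The replica is a new instance that names the primary as its master;
# the application then points read-only queries at the replica.
body = {
    "name": "webapp-db-replica",        # hypothetical replica name
    "masterInstanceName": "webapp-db",  # existing primary instance
    "region": "us-central1",
    "settings": {"tier": "db-custom-2-8192"},
}
response = service.instances().insert(project="my-project", body=body).execute()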

Exam Code: Associate-Data-Practitioner
Exam Name: Google Cloud Associate Data Practitioner (ADP Exam)
Last Update: Sep 28, 2025
Questions: 106