Google Associate Data Practitioner Practice Exams
Last updated on Mar 30, 2025
- Exam Code: Associate Data Practitioner
- Exam Name: Google Cloud Associate Data Practitioner (ADP Exam)
- Certification Provider: Google
- Latest update: Mar 30, 2025
You are working with a large dataset of customer reviews stored in Cloud Storage. The dataset contains several inconsistencies, such as missing values, incorrect data types, and duplicate entries. You need to clean the data to ensure that it is accurate and consistent before using it for analysis.
What should you do?
- A . Use the PythonOperator in Cloud Composer to clean the data and load it into BigQuery. Use SQL for analysis.
- B . Use BigQuery to batch load the data into BigQuery. Use SQL for cleaning and analysis.
- C . Use Storage Transfer Service to move the data to a different Cloud Storage bucket. Use event triggers to invoke Cloud Run functions to load the data into BigQuery. Use SQL for analysis.
- D . Use Cloud Run functions to clean the data and load it into BigQuery. Use SQL for analysis.
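For context, the SQL cleaning step that several of these options end with could look like the following minimal sketch using the google-cloud-bigquery Python client. The project, dataset, table, and column names (including ingest_time) are placeholders for illustration only.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Illustrative cleaning query: keep one row per review_id, cast rating to an
# integer, and replace missing review text with an empty string.
cleaning_sql = """
CREATE OR REPLACE TABLE `my_project.reviews.customer_reviews_clean` AS
SELECT review_id, review_text, rating
FROM (
  SELECT
    review_id,
    IFNULL(review_text, '') AS review_text,
    SAFE_CAST(rating AS INT64) AS rating,
    ROW_NUMBER() OVER (PARTITION BY review_id ORDER BY ingest_time DESC) AS rn
  FROM `my_project.reviews.customer_reviews_raw`
)
WHERE rn = 1
"""

client.query(cleaning_sql).result()  # blocks until the cleaning job finishes
```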
Your company currently uses an on-premises network file system (NFS) and is migrating data to Google Cloud. You want to be able to control how much bandwidth is used by the data migration while capturing detailed reporting on the migration status.
What should you do?
- A . Use a Transfer Appliance.
- B . Use Cloud Storage FUSE.
- C . Use Storage Transfer Service.
- D . Use gcloud storage commands.
You are developing a data ingestion pipeline to load small CSV files into BigQuery from Cloud Storage. You want to load these files upon arrival to minimize data latency. You want to accomplish this with minimal cost and maintenance.
What should you do?
- A . Use the bq command-line tool within a Cloud Shell instance to load the data into BigQuery.
- B . Create a Cloud Composer pipeline to load new files from Cloud Storage to BigQuery and schedule it to run every 10 minutes.
- C . Create a Cloud Run function to load the data into BigQuery that is triggered when data arrives in Cloud Storage.
- D . Create a Dataproc cluster to pull CSV files from Cloud Storage, process them using Spark, and write the results to BigQuery.
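As a point of reference, the event-driven pattern described in option C could be sketched as follows with the Python Functions Framework and the BigQuery client. The destination table ID is a placeholder, and schema autodetection is assumed to be acceptable for these CSV files.

```python
import functions_framework
from google.cloud import bigquery

bq_client = bigquery.Client()
TABLE_ID = "my_project.my_dataset.raw_events"  # placeholder destination table

@functions_framework.cloud_event
def load_csv(cloud_event):
    """Triggered by a Cloud Storage object-finalized event; loads the new CSV into BigQuery."""
    data = cloud_event.data
    uri = f"gs://{data['bucket']}/{data['name']}"

    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,
        autodetect=True,
        write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
    )
    # Wait for the load job so any errors surface in the function logs.
    bq_client.load_table_from_uri(uri, TABLE_ID, job_config=job_config).result()
```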
Your organization’s ecommerce website collects user activity logs using a Pub/Sub topic. Your organization’s leadership team wants a dashboard that contains aggregated user engagement metrics. You need to create a solution that transforms the user activity logs into aggregated metrics, while ensuring that the raw data can be easily queried.
What should you do?
- A . Create a Dataflow subscription to the Pub/Sub topic, and transform the activity logs. Load the transformed data into a BigQuery table for reporting.
- B . Create an event-driven Cloud Run function to trigger a data transformation pipeline to run. Load the transformed activity logs into a BigQuery table for reporting.
- C . Create a Cloud Storage subscription to the Pub/Sub topic. Load the activity logs into a bucket using the Avro file format. Use Dataflow to transform the data, and load it into a BigQuery table for reporting.
- D . Create a BigQuery subscription to the Pub/Sub topic, and load the activity logs into the table. Create a materialized view in BigQuery using SQL to transform the data for reporting.
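To make the aggregation step concrete, a materialized view like the one option D mentions could be defined along these lines (run here through the google-cloud-bigquery client); the table, column, and event names are placeholders.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Illustrative daily engagement rollup over raw logs written by a BigQuery subscription.
client.query("""
CREATE MATERIALIZED VIEW `my_project.analytics.user_engagement_daily` AS
SELECT
  user_id,
  DATE(event_timestamp) AS event_date,
  COUNT(*) AS event_count,
  COUNTIF(event_type = 'purchase') AS purchases
FROM `my_project.analytics.activity_logs_raw`
GROUP BY user_id, event_date
""").result()
```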
You have millions of customer feedback records stored in BigQuery. You want to summarize the data by using the large language model (LLM) Gemini. You need to plan and execute this analysis using the most efficient approach.
What should you do?
- A . Query the BigQuery table from within a Python notebook, use the Gemini API to summarize the data within the notebook, and store the summaries in BigQuery.
- B . Use a BigQuery ML model to pre-process the text data, export the results to Cloud Storage, and use the Gemini API to summarize the pre-processed data.
- C . Create a BigQuery Cloud resource connection to a remote model in Vertex AI, and use Gemini to summarize the data.
- D . Export the raw BigQuery data to a CSV file, upload it to Cloud Storage, and use the Gemini API to summarize the data.
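For orientation, the remote-model approach in option C could look roughly like the following sketch. The connection name, model name, Gemini endpoint string, and feedback table are placeholders, and the exact ML.GENERATE_TEXT options may differ by BigQuery ML release.

```python
from google.cloud import bigquery

client = bigquery.Client()

# One-time setup: a remote model that reaches Gemini through an existing
# BigQuery Cloud resource connection (all names below are placeholders).
client.query("""
CREATE OR REPLACE MODEL `my_project.feedback.gemini_model`
  REMOTE WITH CONNECTION `my_project.us.vertex_ai_connection`
  OPTIONS (ENDPOINT = 'gemini-1.5-flash-001')
""").result()

# Summarize feedback rows in place with ML.GENERATE_TEXT.
rows = client.query("""
SELECT ml_generate_text_llm_result AS summary
FROM ML.GENERATE_TEXT(
  MODEL `my_project.feedback.gemini_model`,
  (SELECT CONCAT('Summarize this customer feedback: ', feedback_text) AS prompt
   FROM `my_project.feedback.customer_feedback`
   LIMIT 100),
  STRUCT(0.2 AS temperature, 256 AS max_output_tokens, TRUE AS flatten_json_output)
)
""").result()
```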
Your organization needs to implement near real-time analytics for thousands of events arriving each second in Pub/Sub. The incoming messages require transformations. You need to configure a pipeline that processes, transforms, and loads the data into BigQuery while minimizing development time.
What should you do?
- A . Use a Google-provided Dataflow template to process the Pub/Sub messages, perform transformations, and write the results to BigQuery.
- B . Create a Cloud Data Fusion instance and configure Pub/Sub as a source. Use Data Fusion to process the Pub/Sub messages, perform transformations, and write the results to BigQuery.
- C . Load the data from Pub/Sub into Cloud Storage using a Cloud Storage subscription. Create a Dataproc cluster, use PySpark to perform transformations in Cloud Storage, and write the results to BigQuery.
- D . Use Cloud Run functions to process the Pub/Sub messages, perform transformations, and write the results to BigQuery.
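For contrast with the no-code template route in option A, a hand-written streaming pipeline for the same Pub/Sub-to-BigQuery flow could look like this Apache Beam (Python SDK) sketch; the subscription path, table, schema, and message fields are placeholders.

```python
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def to_row(message_bytes):
    """Parse a Pub/Sub message payload and reshape it for BigQuery (illustrative transform)."""
    event = json.loads(message_bytes.decode("utf-8"))
    return {"user_id": event["user_id"], "event_type": event["event_type"]}

options = PipelineOptions(streaming=True)  # project, region, and runner flags added in practice

with beam.Pipeline(options=options) as pipeline:
    (
        pipeline
        | "ReadFromPubSub" >> beam.io.ReadFromPubSub(
            subscription="projects/my-project/subscriptions/events-sub")
        | "Transform" >> beam.Map(to_row)
        | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
            "my_project:analytics.events",
            schema="user_id:STRING,event_type:STRING",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
        )
    )
```

A Google-provided Dataflow template packages essentially this kind of pipeline, which is what makes the template route attractive when development time matters.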
You are a Looker analyst. You need to add a new field to your Looker report that generates SQL that will run against your company’s database. You do not have the Develop permission.
What should you do?
- A . Create a new field in the LookML layer, refresh your report, and select your new field from the field picker.
- B . Create a calculated field using the Add a field option in Looker Studio, and add it to your report.
- C . Create a table calculation from the field picker in Looker, and add it to your report.
- D . Create a custom field from the field picker in Looker, and add it to your report.