Google Associate Data Practitioner Practice Exams
Last updated on Mar 31, 2025
- Exam Code: Associate Data Practitioner
- Exam Name: Google Cloud Associate Data Practitioner (ADP Exam)
- Certification Provider: Google
- Latest update: Mar 31, 2025
You are a Looker analyst. You need to add a new field to your Looker report that generates SQL that will run against your company’s database. You do not have the Develop permission.
What should you do?
- A . Create a new field in the LookML layer, refresh your report, and select your new field from the field picker.
- B . Create a calculated field using the Add a field option in Looker Studio, and add it to your report.
- C . Create a table calculation from the field picker in Looker, and add it to your report.
- D . Create a custom field from the field picker in Looker, and add it to your report.
Your organization has several datasets in their data warehouse in BigQuery. Several analyst teams in different departments use the datasets to run queries. Your organization is concerned about the variability of their monthly BigQuery costs. You need to identify a solution that creates a fixed budget for costs associated with the queries run by each department.
What should you do?
- A . Create a custom quota for each analyst in BigQuery.
- B . Create a single reservation by using BigQuery editions. Assign all analysts to the reservation.
- C . Assign each analyst to a separate project associated with their department. Create a single reservation by using BigQuery editions. Assign all projects to the reservation.
- D . Assign each analyst to a separate project associated with their department. Create a single reservation for each department by using BigQuery editions. Create assignments for each project in the appropriate reservation.
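For context on the reservation-and-assignment mechanics these options refer to, here is a minimal Python sketch using the google-cloud-bigquery-reservation client. The admin project, location, slot capacity, and department project IDs are hypothetical, and the edition field is included only as an assumption about how BigQuery editions are specified in this client.

```python
# pip install google-cloud-bigquery-reservation
from google.cloud import bigquery_reservation_v1 as reservation

client = reservation.ReservationServiceClient()
parent = "projects/billing-admin-project/locations/US"  # hypothetical admin project

# A reservation gives a department a fixed slot capacity, i.e. a fixed monthly budget.
res = client.create_reservation(
    parent=parent,
    reservation_id="finance-dept",  # hypothetical reservation name
    reservation=reservation.Reservation(
        slot_capacity=100,
        edition=reservation.Edition.ENTERPRISE,  # assumption: edition enum for BigQuery editions
    ),
)

# Assigning a project to the reservation makes that project's queries consume its slots.
client.create_assignment(
    parent=res.name,
    assignment=reservation.Assignment(
        assignee="projects/finance-analytics",  # hypothetical department project
        job_type=reservation.Assignment.JobType.QUERY,
    ),
)
```

Repeating the create_assignment call for each of a department's projects attaches all of them to that department's reservation.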
Your company uses Looker as its primary business intelligence platform. You want to use LookML to visualize the profit margin for each of your company’s products in your Looker Explores and dashboards. You need to implement a solution quickly and efficiently.
What should you do?
- A . Create a derived table that pre-calculates the profit margin for each product, and include it in the Looker model.
- B . Define a new measure that calculates the profit margin by using the existing revenue and cost fields.
- C . Create a new dimension that categorizes products based on their profit margin ranges (e.g., high, medium, low).
- D . Apply a filter to only show products with a positive profit margin.
Another team in your organization is requesting access to a BigQuery dataset. You need to share the dataset with the team while minimizing the risk of unauthorized copying of data. You also want to create a reusable framework in case you need to share this data with other teams in the future.
What should you do?
- A . Create authorized views in the team’s Google Cloud project that is only accessible by the team.
- B . Create a private exchange using Analytics Hub with data egress restriction, and grant access to the team members.
- C . Enable domain restricted sharing on the project. Grant the team members the BigQuery Data Viewer IAM role on the dataset.
- D . Export the dataset to a Cloud Storage bucket in the team’s Google Cloud project that is only accessible by the team.
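For reference, the Analytics Hub flow described in option B can be scripted with the google-cloud-bigquery-analyticshub client. The sketch below is an assumption-heavy outline: the project, exchange, listing, and dataset names are made up, and restricted_export_config is my reading of the egress-restriction setting, so verify the exact message fields against the current client library.

```python
# pip install google-cloud-bigquery-analyticshub
from google.cloud import bigquery_analyticshub_v1 as ah

client = ah.AnalyticsHubServiceClient()
parent = "projects/data-owner-project/locations/us"  # hypothetical owner project

# 1. Create a private exchange once; it can be reused for future sharing requests.
exchange = client.create_data_exchange(
    parent=parent,
    data_exchange_id="internal_shared_data",  # hypothetical exchange ID
    data_exchange=ah.DataExchange(display_name="Internal shared data"),
)

# 2. Publish the dataset as a listing with data egress restricted
#    (field names below are assumptions; check the Listing message in the client).
client.create_listing(
    parent=exchange.name,
    listing_id="orders_dataset",  # hypothetical listing ID
    listing=ah.Listing(
        display_name="Customer orders",
        bigquery_dataset=ah.Listing.BigQueryDatasetSource(
            dataset="projects/data-owner-project/datasets/orders"
        ),
        restricted_export_config=ah.Listing.RestrictedExportConfig(enabled=True),
    ),
)
```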
You are designing a pipeline to process data files that arrive in Cloud Storage by 3:00 am each day. Data processing is performed in stages, where the output of one stage becomes the input of the next. Each stage takes a long time to run. Occasionally a stage fails, and you have to address the problem. You need to ensure that the final output is generated as quickly as possible.
What should you do?
- A . Design a Spark program that runs under Dataproc. Code the program to wait for user input when an error is detected. Rerun the last action after correcting any stage output data errors.
- B . Design the pipeline as a set of PTransforms in Dataflow. Restart the pipeline after correcting any stage output data errors.
- C . Design the workflow as a Cloud Workflow instance. Code the workflow to jump to a given stage based on an input parameter. Rerun the workflow after correcting any stage output data errors.
- D . Design the processing as a directed acyclic graph (DAG) in Cloud Composer. Clear the state of the failed task after correcting any stage output data errors.
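To make the staged-DAG idea concrete, here is a minimal Airflow 2 sketch of the shape such a pipeline could take in Cloud Composer; the DAG ID and bash commands are placeholders for the real long-running stage jobs. If a stage fails, only that task and its downstream tasks need to be cleared and re-run, and the outputs of earlier stages are reused as-is.

```python
import pendulum
from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_staged_processing",  # hypothetical DAG name
    schedule="0 3 * * *",              # input files arrive by 3:00 am
    start_date=pendulum.datetime(2025, 1, 1, tz="UTC"),
    catchup=False,
) as dag:
    # Placeholder commands stand in for the real long-running stage jobs.
    stage_1 = BashOperator(task_id="stage_1", bash_command="echo 'run stage 1'")
    stage_2 = BashOperator(task_id="stage_2", bash_command="echo 'run stage 2'")
    stage_3 = BashOperator(task_id="stage_3", bash_command="echo 'run stage 3'")

    # Each stage's output is the next stage's input.
    stage_1 >> stage_2 >> stage_3
```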
Your company’s ecommerce website collects product reviews from customers. The reviews are loaded as CSV files daily to a Cloud Storage bucket. The reviews are in multiple languages and need to be translated to Spanish. You need to configure a pipeline that is serverless, efficient, and requires minimal maintenance.
What should you do?
- A . Load the data into BigQuery using Dataproc. Use Apache Spark to translate the reviews by invoking the Cloud Translation API. Set BigQuery as the sink.
- B . Use a Dataflow templates pipeline to translate the reviews using the Cloud Translation API. Set BigQuery as the sink.
- C . Load the data into BigQuery using a Cloud Run function. Use the BigQuery ML create model statement to train a translation model. Use the model to translate the product reviews within BigQuery.
- D . Load the data into BigQuery using a Cloud Run function. Create a BigQuery remote function that invokes the Cloud Translation API. Use a scheduled query to translate new reviews.
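As a reference point for the Dataflow-based options, this is a minimal Apache Beam sketch that calls the Cloud Translation API from a DoFn and writes the results to BigQuery. The bucket, table, schema, and CSV layout are hypothetical, and a Google-provided or custom Dataflow template would package similar logic.

```python
# pip install "apache-beam[gcp]" google-cloud-translate
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


class TranslateToSpanish(beam.DoFn):
    def setup(self):
        # Create the Translation client once per worker, not per element.
        from google.cloud import translate_v2 as translate
        self._client = translate.Client()

    def process(self, line):
        review_id, review_text = line.split(",", 1)  # simplistic CSV parsing for the sketch
        result = self._client.translate(review_text, target_language="es")
        yield {"review_id": review_id, "review_es": result["translatedText"]}


with beam.Pipeline(options=PipelineOptions()) as pipeline:
    (
        pipeline
        | "Read CSVs" >> beam.io.ReadFromText("gs://reviews-landing/*.csv")  # hypothetical bucket
        | "Translate" >> beam.ParDo(TranslateToSpanish())
        | "Write" >> beam.io.WriteToBigQuery(
            "my-project:reviews.reviews_es",  # hypothetical table
            schema="review_id:STRING,review_es:STRING",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
        )
    )
```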
Your organization needs to store historical customer order data. The data will only be accessed once a month for analysis and must be readily available within a few seconds when it is accessed. You need to choose a storage class that minimizes storage costs while ensuring that the data can be retrieved quickly.
What should you do?
- A . Store the data in Cloud Storage using Nearline storage.
- B . Store the data in Cloud Storage using Coldline storage.
- C . Store the data in Cloud Storage using Standard storage.
- D . Store the data in Cloud Storage using Archive storage.
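For reference, a bucket's default storage class is set when the bucket is created (and can be changed later). A minimal sketch with the Cloud Storage Python client is below; the bucket name and location are hypothetical, and the same pattern applies to whichever class in the options you choose.

```python
from google.cloud import storage

client = storage.Client()

# Hypothetical bucket for the historical order exports.
bucket = client.bucket("historical-customer-orders")
bucket.storage_class = "NEARLINE"  # or "STANDARD", "COLDLINE", "ARCHIVE"
client.create_bucket(bucket, location="us-central1")
```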
You work for an online retail company. Your company collects customer purchase data in CSV files and pushes them to Cloud Storage every 10 minutes. The data needs to be transformed and loaded into BigQuery for analysis. The transformation involves cleaning the data, removing duplicates, and enriching it with product information from a separate table in BigQuery. You need to implement a low-overhead solution that initiates data processing as soon as the files are loaded into Cloud Storage.
What should you do?
- A . Use Cloud Composer sensors to detect files loading in Cloud Storage. Create a Dataproc cluster, and use a Composer task to execute a job on the cluster to process and load the data into BigQuery.
- B . Schedule a directed acyclic graph (DAG) in Cloud Composer to run hourly to batch load the data from Cloud Storage to BigQuery, and process the data in BigQuery using SQL.
- C . Use Dataflow to implement a streaming pipeline using an OBJECT_FINALIZE notification from Pub/Sub to read the data from Cloud Storage, perform the transformations, and write the data to BigQuery.
- D . Create a Cloud Data Fusion job to process and load the data from Cloud Storage into BigQuery. Create an OBJECT_FINALIZE notification in Pub/Sub, and trigger a Cloud Run function to start the Cloud Data Fusion job as soon as new files are loaded.
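Several of these options hinge on Cloud Storage publishing an OBJECT_FINALIZE notification to Pub/Sub when a new file finishes uploading. A minimal sketch of wiring that up with the Cloud Storage Python client follows; the bucket and topic names are hypothetical, and the Pub/Sub topic is assumed to exist already.

```python
from google.cloud import storage
from google.cloud.storage.notification import (
    JSON_API_V1_PAYLOAD_FORMAT,
    OBJECT_FINALIZE_EVENT_TYPE,
)

client = storage.Client()
bucket = client.bucket("purchase-csv-landing")  # hypothetical landing bucket

# Publish a Pub/Sub message each time an object upload completes (OBJECT_FINALIZE).
notification = bucket.notification(
    topic_name="purchase-file-events",  # hypothetical, pre-created Pub/Sub topic
    event_types=[OBJECT_FINALIZE_EVENT_TYPE],
    payload_format=JSON_API_V1_PAYLOAD_FORMAT,
)
notification.create()
```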