Microsoft DP-420 Practice Exams
Last updated on Mar 31, 2025
- Exam Code: DP-420
- Exam Name: Designing and Implementing Cloud-Native Applications Using Microsoft Azure Cosmos DB
- Certification Provider: Microsoft
HOTSPOT
You are creating a database in an Azure Cosmos DB Core (SQL) API account. The database will be used by an application that will provide users with the ability to share online posts. Users will also be able to submit comments on other users’ posts.
You need to store the data shown in the following table.
The application has the following characteristics:
– Users can submit an unlimited number of posts.
– The average number of posts submitted by a user will be more than 1,000.
– Posts can have an unlimited number of comments from different users.
– The average number of comments per post will be 100, but many posts will exceed 1,000 comments.
– Users will be limited to having a maximum of 20 interests.
For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point.
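A common way to reason about this scenario is the embed-vs-reference rule: embed a related list only when it is small and bounded. The sketch below is illustrative only (the document shapes and field names are assumptions, not the exam's actual table), showing why bounded interests can live inside the user document while unbounded comments belong in their own documents.

```python
# Hypothetical document shapes for the posts/comments scenario.
# Field names and the bound of 100 are illustrative assumptions.

def should_embed(max_items, bound=100):
    """Embed a related collection only when its size is small and bounded;
    unbounded or large collections belong in separate documents."""
    return max_items is not None and max_items <= bound

# Interests are bounded (max 20 per user) -> safe to embed in the user doc.
user_doc = {
    "id": "user-1",
    "username": "alice",
    "interests": ["cosmosdb", "azure"],  # bounded list, embedded
}

# Comments are unbounded (often > 1,000 per post) -> reference, not embed:
# each comment is its own document, grouped by postId.
comment_doc = {
    "id": "comment-42",
    "postId": "post-7",   # groups a post's comments together
    "authorId": "user-2",
    "text": "Great post!",
}

print(should_embed(20))    # interests: True  -> embed
print(should_embed(None))  # comments: False -> separate documents
```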
You have a database named db1 in an Azure Cosmos DB for NoSQL account.
You are designing an application that will use db1.
In db1, you are creating a new container named coll1 that will store order data.
The following is a sample of a document that will be stored in coll1.
The application will have the following characteristics:
• New orders will be created frequently by different customers.
• Customers will often view their past order history.
You need to select the partition key value for coll1 to support the application.
The solution must minimize costs.
To what should you set the partition key?
- A . id
- B . customerId
- C . orderDate
- D . orderId
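The workload hinted at above (frequent writes by many customers, frequent reads of one customer's history) can be sketched in plain Python. This is an illustrative mock, not the exam's actual sample document: with `/customerId` as the partition key, "view my order history" maps to a single logical partition, whereas `/id` or `/orderId` would force a cross-partition fan-out.

```python
# Mock order documents; field names mirror the answer options and are assumptions.
orders = [
    {"id": "1", "customerId": "c1", "orderDate": "2025-01-01", "orderId": "o1"},
    {"id": "2", "customerId": "c2", "orderDate": "2025-01-02", "orderId": "o2"},
    {"id": "3", "customerId": "c1", "orderDate": "2025-01-03", "orderId": "o3"},
]

def order_history(customer_id):
    # With /customerId as the partition key, this filter corresponds to a
    # single-partition query; keys unique per item (id, orderId) would
    # scatter one customer's orders across partitions instead.
    return [o for o in orders if o["customerId"] == customer_id]

print([o["orderId"] for o in order_history("c1")])  # ['o1', 'o3']
```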
HOTSPOT
You have an Azure Cosmos DB container named container1 that has a provisioned throughput and two physical partitions.
You monitor the following metrics for container1:
• Normalized RU consumption
• The percentage of requests that have an HTTP status code of 429
You need to confirm that container1 is configured to maximize resource utilization.
What are the optimal values for each metric? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
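As a rough intuition for how these two metrics are read together: high normalized RU consumption means the provisioned throughput is actually being used, while a high 429 (rate-limited) percentage means requests are being throttled. The thresholds in the sketch below are illustrative assumptions, not official guidance.

```python
def utilization_ok(normalized_ru_pct, rate_429_pct):
    """Heuristic check: throughput is well utilized when normalized RU
    consumption is high and only a small fraction of requests are
    throttled (429). The 80%/5% cut-offs are assumed for illustration."""
    return normalized_ru_pct >= 80.0 and rate_429_pct <= 5.0

print(utilization_ok(100.0, 1.0))  # True: busy, barely throttled
print(utilization_ok(30.0, 0.0))   # False: over-provisioned capacity
```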
You are designing an Azure Cosmos DB Core (SQL) API solution to store data from IoT devices. Writes from the devices will occur every second.
The following is a sample of the data.
You need to select a partition key that meets the following requirements for writes:
– Minimizes the partition skew
– Avoids capacity limits
– Avoids hot partitions
What should you do?
- A . Use timestamp as the partition key.
- B . Create a new synthetic key that contains deviceId and sensor1Value.
- C . Create a new synthetic key that contains deviceId and deviceManufacturer.
- D . Create a new synthetic key that contains deviceId and a random number.
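The synthetic-key idea in option D can be sketched as follows. Appending a random suffix to the device ID spreads one chatty device's writes across several logical partitions, avoiding a hot partition and the per-partition storage limit; the bucket count and key format here are assumptions for illustration.

```python
import random

def synthetic_partition_key(device_id, buckets=10):
    """Build a synthetic partition key of the form '<deviceId>-<n>' where
    n is a random bucket. Reads for a device then fan out over the known
    bucket suffixes; the bucket count (10) is an assumed tuning choice."""
    return f"{device_id}-{random.randint(0, buckets - 1)}"

key = synthetic_partition_key("device-001")
print(key)  # e.g. 'device-001-7'
```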
HOTSPOT
You have an Azure subscription that contains an Azure Cosmos DB for NoSQL database named DB1. The shared throughput provisioned for DB1 is 10,000 RU/s.
DB1 contains the containers shown in the following table.
You need to modify the throughput for the containers.
The solution must meet the following requirements:
• The maximum throughput for Container1 must be 4,000 RU/s.
• The throughput for the other containers must be shared across the containers.
• Administrative effort must be minimized.
What should you do? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have an Azure Cosmos DB Core (SQL) API account named account1 that uses autoscale throughput.
You need to run an Azure function when the normalized request units per second for a container in account1 exceeds a specific value.
Solution: You configure an application to use the change feed processor to read the change feed, and you configure the application to trigger the function.
Does this meet the goal?
- A . Yes
- B . No
The settings for a container in an Azure Cosmos DB Core (SQL) API account are configured as shown in the following exhibit.
Which statement describes the configuration of the container?
- A . All items will be deleted after one year.
- B . Items stored in the collection will always be retained, regardless of the item's time-to-live value.
- C . Items stored in the collection will expire only if the item has a time-to-live value.
- D . All items will be deleted after one hour.
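The answer options above all hinge on Cosmos DB's TTL semantics: a container-level `DefaultTimeToLive` of `null` disables expiry, `-1` enables TTL but retains items unless they carry their own `ttl`, and a positive value expires items that many seconds after their last write unless an item-level `ttl` overrides it. The helper below is a minimal sketch of those rules, not an SDK call.

```python
def expires_after(default_ttl, item_ttl=None):
    """Return the effective TTL in seconds for an item, or None if it
    never expires. Mirrors documented Cosmos DB TTL semantics:
      default_ttl None -> TTL disabled at the container level
      default_ttl -1   -> enabled, but items persist unless they set ttl
      ttl -1 on an item -> that item never expires
    """
    if default_ttl is None:
        return None                      # container TTL is off
    if item_ttl is not None:
        return None if item_ttl == -1 else item_ttl
    return None if default_ttl == -1 else default_ttl

print(expires_after(-1))        # None: retained unless the item sets ttl
print(expires_after(-1, 3600))  # 3600: this item expires after one hour
print(expires_after(3600))      # 3600: container default applies
```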
You are implementing an Azure Data Factory data flow that will use an Azure Cosmos DB (SQL API) sink to write a dataset. The data flow will use 2,000 Apache Spark partitions.
You need to ensure that the ingestion from each Spark partition is balanced to optimize throughput.
Which sink setting should you configure?
- A . Throughput
- B . Write throughput budget
- C . Batch size
- D . Collection action
HOTSPOT
You need to select the capacity mode and scale configuration for account2 to support the planned changes and meet the business requirements.
What should you select? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have a container named container1 in an Azure Cosmos DB Core (SQL) API account.
You need to make the contents of container1 available as reference data for an Azure Stream Analytics job.
Solution: You create an Azure Synapse pipeline that uses Azure Cosmos DB Core (SQL) API as the input and Azure Blob Storage as the output.
Does this meet the goal?
- A . Yes
- B . No