DP-420 Designing and Implementing Cloud-Native Applications Using Microsoft Azure Cosmos DB Part 4
Question #: 181
Topic #: 3
You have an Azure Cosmos DB Core (SQL) API account.
The change feed is enabled on a container named invoice.
You create an Azure function that has a trigger on the change feed.
What is received by the Azure function?
A. only the changed properties and the system-defined properties of the updated items
B. only the partition key and the changed properties of the updated items
C. all the properties of the original items and the updated items
D. all the properties of the updated items
Selected Answer: D
———————————————————————-
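Answer D reflects how the change feed works: it delivers the full, latest version of each changed item, not a diff. A toy in-memory model (illustrative only, not the Azure SDK) makes the point:

```python
# Toy in-memory model of the Cosmos DB change feed (illustrative only).
# It shows the behavior behind answer D: the feed delivers the FULL current
# version of each changed item, not a delta against the previous version.

class ChangeFeedContainer:
    def __init__(self):
        self._items = {}
        self.change_feed = []  # what a change-feed trigger would receive

    def upsert(self, item):
        self._items[item["id"]] = dict(item)
        # The feed records the complete latest item, with all its properties.
        self.change_feed.append(dict(item))

invoice = ChangeFeedContainer()
invoice.upsert({"id": "1", "customer": "Contoso", "total": 100})
invoice.upsert({"id": "1", "customer": "Contoso", "total": 120})  # update

latest_change = invoice.change_feed[-1]
```

Note that the update changed only `total`, yet the feed entry still carries every property of the item, including the untouched `customer`.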
Question #: 182
Topic #: 2
You have an Azure Cosmos DB Core (SQL) API account named account1 that supports an application named App1. App1 uses the consistent prefix consistency level.
You configure account1 to use a dedicated gateway and integrated cache.
You need to ensure that App1 can use the integrated cache.
Which two actions should you perform for App1? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
A. Change the consistency level of requests to session.
B. Change the account endpoint to https://account1.documents.azure.com.
C. Change the account endpoint to https://account1.sqlx.cosmos.azure.com.
D. Change the connection mode to direct.
E. Change the consistency level of requests to strong.
Selected Answer: AC
———————————————————————-
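The two selected actions (A and C) correspond to the documented preconditions for the integrated cache: requests must go through the dedicated gateway endpoint (`*.sqlx.cosmos.azure.com`) in gateway connection mode, using session or eventual consistency. A small hypothetical checker summarizes the rule:

```python
# Illustrative check of the integrated-cache preconditions (answers A and C):
# dedicated gateway endpoint, gateway connection mode, and session (or
# eventual) consistency. The function is a teaching aid, not an SDK API.

def can_use_integrated_cache(endpoint: str, connection_mode: str,
                             consistency: str) -> bool:
    return (".sqlx.cosmos.azure.com" in endpoint
            and connection_mode == "gateway"
            and consistency in ("session", "eventual"))

ok = can_use_integrated_cache(
    "https://account1.sqlx.cosmos.azure.com", "gateway", "session")
bad_endpoint = can_use_integrated_cache(
    "https://account1.documents.azure.com", "gateway", "session")
bad_mode = can_use_integrated_cache(
    "https://account1.sqlx.cosmos.azure.com", "direct", "session")
```

This also explains why options D and E are wrong: direct mode bypasses the gateway entirely, and strong consistency is not served from the cache.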
Question #: 183
Topic #: 1
You have an Azure Cosmos DB Core (SQL) API account.
You run the following query against a container in the account.
What is the output of the query?
A. [{"A": false, "B": true, "C": false}]
B. [{"A": true, "B": false, "C": true}]
C. [{"A": true, "B": true, "C": false}]
D. [{"A": true, "B": true, "C": true}]
Selected Answer: A
———————————————————————-
Question #: 184
Topic #: 5
You need to configure an Apache Kafka instance to ingest data from an Azure Cosmos DB Core (SQL) API account. The data from a container named telemetry must be added to a Kafka topic named iot. The solution must store the data in a compact binary format.
Which three configuration items should you include in the solution? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
A. "connector.class": "com.azure.cosmos.kafka.connect.source.CosmosDBSourceConnector"
B. "key.converter": "org.apache.kafka.connect.json.JsonConverter"
C. "key.converter": "io.confluent.connect.avro.AvroConverter"
D. "connect.cosmos.containers.topicmap": "iot#telemetry"
E. "connect.cosmos.containers.topicmap": "iot"
F. "connector.class": "com.azure.cosmos.kafka.connect.source.CosmosDBSinkConnector"
Selected Answer: ACD
———————————————————————-
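The three selected settings (A, C, D) fit together as one source-connector configuration: the source connector reads from Cosmos DB, the Avro converter produces the compact binary format, and the topic map pairs topic and container as `topic#container`. Assembled as a sketch (the schema-registry URL is a placeholder, and a real config needs further connection settings):

```python
# Answers A, C, and D combined into a Kafka Connect source-connector config.
# Avro provides the compact binary serialization; the topic map uses the
# documented "topic#container" format to bind "iot" to "telemetry".

source_config = {
    "connector.class":
        "com.azure.cosmos.kafka.connect.source.CosmosDBSourceConnector",
    "key.converter": "io.confluent.connect.avro.AvroConverter",
    "value.converter": "io.confluent.connect.avro.AvroConverter",
    # Placeholder: Avro converters need a schema registry to resolve schemas.
    "key.converter.schema.registry.url": "http://schema-registry:8081",
    "connect.cosmos.containers.topicmap": "iot#telemetry",
}

# The topic map format is "<topic>#<container>".
topic, container = source_config["connect.cosmos.containers.topicmap"].split("#")
```

Option E fails because it omits the container half of the mapping, and option F names a sink connector, which writes to Cosmos DB rather than ingesting from it.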
Question #: 185
Topic #: 4
You have an application that queries an Azure Cosmos DB Core (SQL) API account.
You discover that the following two queries run frequently.
You need to minimize the request units (RUs) consumed by reads and writes.
What should you create?
A. a composite index for (name DESC, timestamp ASC)
B. a composite index for (name ASC, timestamp ASC) and a composite index for (name DESC, timestamp DESC)
C. a composite index for (name ASC, timestamp ASC)
D. a composite index for (name ASC, timestamp DESC)
Selected Answer: D
———————————————————————-
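The two queries are not captured here, but the logic behind answer D is that a composite index also serves the exact reverse of its declared order, so a single (name ASC, timestamp DESC) index covers both mixed-direction ORDER BY clauses while keeping write RU cost to one index. A sketch of the policy and the matching rule (the helper function is illustrative, not a Cosmos DB API):

```python
# Indexing policy fragment for answer D: one composite index on
# (name ASC, timestamp DESC). A composite index also serves its exact
# reverse, (name DESC, timestamp ASC), so one index covers both
# mixed-direction ORDER BY queries.

indexing_policy = {
    "indexingMode": "consistent",
    "compositeIndexes": [
        [
            {"path": "/name", "order": "ascending"},
            {"path": "/timestamp", "order": "descending"},
        ]
    ],
}

def serves_order_by(policy, order_by):
    """True if a composite index matches the ORDER BY or its exact reverse."""
    flip = {"ascending": "descending", "descending": "ascending"}
    for composite in policy["compositeIndexes"]:
        spec = [(c["path"], c["order"]) for c in composite]
        reverse = [(path, flip[order]) for path, order in spec]
        if order_by in (spec, reverse):
            return True
    return False
```

A same-direction query such as (name ASC, timestamp ASC) would not be served by this index, which is why the single index in option C cannot replace it.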
Question #: 187
Topic #: 1
You need to implement a trigger in Azure Cosmos DB Core (SQL) API that will run before an item is inserted into a container.
Which two actions should you perform to ensure that the trigger runs? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
A. Append pre to the name of the JavaScript function trigger.
B. For each create request, set the access condition in RequestOptions.
C. Register the trigger as a pre-trigger.
D. For each create request, set the consistency level to session in RequestOptions.
E. For each create request, set the trigger name in RequestOptions.
Selected Answer: CE
———————————————————————-
Question #: 188
Topic #: 5
You are implementing an Azure Data Factory data flow that will use an Azure Cosmos DB (SQL API) sink to write a dataset. The data flow will use 2,000 Apache Spark partitions.
You need to ensure that the ingestion from each Spark partition is balanced to optimize throughput.
Which sink setting should you configure?
A. Throughput
B. Write throughput budget
C. Batch size
D. Collection action
Selected Answer: B
———————————————————————-
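The idea behind the write throughput budget (answer B) is an RU/s ceiling for the write operation, which the data flow can then divide evenly across its Spark partitions so no single partition saturates the container. A sketch of the sink options (key names here mirror the ADF UI labels, not necessarily the exact JSON property names):

```python
# Sketch of data-flow sink options for an Azure Cosmos DB sink. The
# "write throughput budget" (answer B) caps the RU/s the write may consume,
# letting 2,000 Spark partitions share the container's throughput evenly.
# Keys follow the ADF UI labels and are illustrative, not an exact schema.

sink_settings = {
    "batch size": 1000,                # items per bulk request
    "write throughput budget": 10000,  # RU/s ceiling for this write
    "collection action": "none",       # leave the target container as-is
}

spark_partitions = 2000
# Each partition's fair share of the budgeted throughput:
ru_per_partition = sink_settings["write throughput budget"] / spark_partitions
```

Batch size and collection action shape individual requests and container lifecycle, but neither balances ingestion across partitions.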
Question #: 189
Topic #: 4
You have a container in an Azure Cosmos DB Core (SQL) API account.
Data update volumes are unpredictable.
You need to process the change feed of the container by using a web app that has multiple instances. The change feed will be processed by using the change feed processor from the Azure Cosmos DB SDK. The multiple instances must share the workload.
Which three actions should you perform? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
A. Configure the same processor name for all the instances
B. Configure a different processor name for each instance
C. Configure a different instance name for each instance
D. Configure a different lease container configuration for each instance
E. Configure the same instance name for all the instances
F. Configure the same lease container configuration for all the instances
Selected Answer: ACF
———————————————————————-
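Answers A, C, and F describe one deployment unit of the change feed processor: all instances share a processor name and a lease container, while each instance gets a unique instance name so the lease documents (one per partition key range) can be divided among them. A toy sketch of that lease distribution (the round-robin assignment is a simplification of the real load-balancing algorithm):

```python
# Toy sketch of change feed processor load balancing (answers A, C, F):
# one shared processor name and lease container, unique instance names.
# Real lease balancing is dynamic; round-robin here is a simplification.

def distribute_leases(leases, instances):
    """Assign leases (one per partition key range) across instances."""
    return {lease: instances[i % len(instances)]
            for i, lease in enumerate(sorted(leases))}

leases = ["range-0", "range-1", "range-2", "range-3"]   # shared lease container (F)
instances = ["host-a", "host-b"]                        # unique instance names (C)
ownership = distribute_leases(leases, instances)
```

Different processor names or different lease containers (options B and D) would create independent processors that each read the entire feed, duplicating work instead of sharing it.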
Question #: 190
Topic #: 2
You have an Azure Cosmos DB Core (SQL) API account named account1 that has a single read-write region and one additional read region. Account1 uses the strong default consistency level.
You have an application that uses the eventual consistency level when submitting requests to account1.
How will writes from the application be handled?
A. Writes will use the eventual consistency level.
B. Azure Cosmos DB will reject writes from the application.
C. Writes will use the strong consistency level.
D. The write order is not guaranteed during replication.
Selected Answer: C
———————————————————————-
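The rule behind answer C: a per-request consistency override can only relax reads; writes are always committed at the account's default level, so an eventual-consistency client still gets strong writes against this account. As a hedged sketch (the function is a teaching aid, not an SDK API):

```python
# Sketch of the consistency-override rule behind answer C: requests may
# weaken READ consistency below the account default, but writes always use
# the account default, regardless of what the client requests.

LEVELS = ["eventual", "consistent_prefix", "session",
          "bounded_staleness", "strong"]  # weakest to strongest

def effective_consistency(operation, account_default, requested):
    if operation == "write":
        return account_default  # writes ignore the requested level
    # reads are served at the weaker of the two levels
    return min(requested, account_default, key=LEVELS.index)

write_level = effective_consistency("write", "strong", "eventual")
read_level = effective_consistency("read", "strong", "eventual")
```

Option B is wrong because the service accepts such writes rather than rejecting them, and option D is wrong because replication order guarantees follow the account-level configuration.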
Question #: 191
Topic #: 3
DRAG DROP –
You have an Azure Synapse Analytics workspace named workspace1 that contains a serverless SQL pool.
You have an Azure Table Storage account that stores operational data.
You need to replace the Table storage account with Azure Cosmos DB Core (SQL) API. The solution must meet the following requirements:
✑ Support queries from the serverless SQL pool.
✑ Only pay for analytical compute when running queries.
✑ Ensure that analytical processes do NOT affect operational processes.
Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Select and Place:
No answers
Selected Answer: C
———————————————————————-
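The drag-and-drop options were not captured in this dump, so the following is a plausible sequence based on how Azure Synapse Link normally satisfies the three requirements: the analytical store isolates analytics from the operational workload, and a serverless SQL pool bills per query. The step wording is an assumption, not the exam's answer text:

```python
# Plausible answer sequence (assumed, since the drag-drop list is missing):
# Synapse Link + analytical store separate analytical reads from the
# transactional workload, and serverless SQL pays only per query run.

sequence = [
    "Enable Azure Synapse Link on the Azure Cosmos DB account",
    "Create a container that has the analytical store enabled",
    "Query the analytical store from the serverless SQL pool by using OPENROWSET",
]
```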
Question #: 222
Topic #: 5
You have a container named container1 in an Azure Cosmos DB Core (SQL) API account.
You need to provide a user named User1 with the ability to insert items into container1 by using role-based access control (RBAC). The solution must use the principle of least privilege.
Which roles should you assign to User1?
A. CosmosDB Operator only
B. DocumentDB Account Contributor and Cosmos DB Built-in Data Contributor
C. DocumentDB Account Contributor only
D. Cosmos DB Built-in Data Contributor only
Selected Answer: D
———————————————————————-
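The split behind answer D: CosmosDB Operator and DocumentDB Account Contributor are control-plane roles that manage the account but grant no data-plane access, while the Cosmos DB Built-in Data Contributor role grants data reads and writes. A small illustrative capability map (the capability strings are simplified labels, not actual RBAC actions):

```python
# Illustrative least-privilege check for answer D. Control-plane roles
# manage the account but cannot touch data; only the built-in data
# contributor role can insert items. Capability labels are simplified.

ROLE_CAPABILITIES = {
    "CosmosDB Operator": {"manage account"},
    "DocumentDB Account Contributor": {"manage account"},
    "Cosmos DB Built-in Data Contributor": {"read items", "write items"},
}

def least_privilege_roles(required, roles=ROLE_CAPABILITIES):
    """Single roles whose capabilities cover the requirement."""
    return [name for name, caps in roles.items() if required <= caps]

candidates = least_privilege_roles({"write items"})
```

Option B grants the needed data role plus an unnecessary control-plane role, which violates least privilege even though it would work.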
Question #: 223
Topic #: 1
HOTSPOT –
You have a container in an Azure Cosmos DB Core (SQL) API account.
You need to use the Azure Cosmos DB SDK to replace a document by using optimistic concurrency.
What should you include in the code? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
No answers
Selected Answer: C
———————————————————————-
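The hotspot image is missing, but the technique it asks about is ETag-based optimistic concurrency: read the item, then replace it with an if-match condition so the write fails if another writer changed the item in between. An in-memory sketch of that flow (the exception class stands in for the HTTP 412 the service returns; this is not the Azure SDK):

```python
# In-memory sketch of ETag-based optimistic concurrency. Each write issues
# a fresh ETag; a replace must present the ETag it read, and a stale ETag
# raises the stand-in for the service's HTTP 412 Precondition Failed.

import uuid

class PreconditionFailed(Exception):
    """Stand-in for the HTTP 412 returned on an ETag mismatch."""

class Store:
    def __init__(self):
        self._data = {}   # id -> (etag, item)

    def upsert(self, item):
        etag = str(uuid.uuid4())          # every write gets a new ETag
        self._data[item["id"]] = (etag, dict(item))
        return etag

    def replace(self, item, if_match_etag):
        current_etag, _ = self._data[item["id"]]
        if current_etag != if_match_etag:
            raise PreconditionFailed()
        return self.upsert(item)

store = Store()
etag1 = store.upsert({"id": "1", "price": 10})
etag2 = store.replace({"id": "1", "price": 12}, if_match_etag=etag1)  # ok
try:
    store.replace({"id": "1", "price": 99}, if_match_etag=etag1)  # stale
    stale_write_succeeded = True
except PreconditionFailed:
    stale_write_succeeded = False
```

A caller that catches the 412 typically re-reads the item, reapplies its change, and retries with the fresh ETag.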
Question #: 224
Topic #: 2
HOTSPOT
–
You have a multi-region Azure Cosmos DB account named account1 that has a default consistency level of strong.
You have an app named App1 that is configured to request a consistency level of session.
How will the read and write operations of App1 be handled? To answer, select the appropriate options in the answer area.
No answers
Selected Answer: B
———————————————————————-
Question #: 225
Topic #: 3
You have a database named db1 in an Azure Cosmos DB Core (SQL) API account named account1.
You need to write JSON data to db1 by using Azure Stream Analytics. The solution must minimize costs.
Which should you do before you can use db1 as an output of Stream Analytics?
A. In account1, add a private endpoint
B. In db1, create containers that have a custom indexing policy and analytical store disabled
C. In db1, create containers that have an automatic indexing policy and analytical store enabled
D. In account1, enable a dedicated gateway
Selected Answer: B
———————————————————————-
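Answer B minimizes cost on both axes: a trimmed custom indexing policy lowers the RU charge of every Stream Analytics write, and leaving the analytical store disabled avoids its extra storage cost. A sketch of such container settings (the included path is illustrative; `analytical_storage_ttl=None` follows the Python SDK's convention for a disabled analytical store):

```python
# Container settings matching answer B: index only what queries need and
# keep the analytical store off. The "/deviceId" path is an illustrative
# example of a property worth indexing for a telemetry workload.

container_options = {
    "analytical_storage_ttl": None,   # analytical store disabled
    "indexing_policy": {
        "indexingMode": "consistent",
        "includedPaths": [{"path": "/deviceId/?"}],
        "excludedPaths": [{"path": "/*"}],   # index nothing else
    },
}
```

Private endpoints and dedicated gateways (options A and D) add cost without being prerequisites for a Stream Analytics output.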
Question #: 226
Topic #: 4
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have a database in an Azure Cosmos DB Core (SQL) API account that is configured for multi-region writes.
You need to use the Azure Cosmos DB SDK to implement the conflict resolution policy for a container. The solution must ensure that any conflicts are sent to the conflicts feed.
Solution: You set ConflictResolutionMode to Custom and you use the default settings for the policy.
Does this meet the goal?
A. Yes
B. No
Selected Answer: A
———————————————————————-
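Answer A holds because the custom conflict resolution mode, left at its default settings, registers no resolver stored procedure; the service then cannot resolve conflicts itself and writes every conflict to the conflicts feed. A toy model of that routing (illustrative only, not the SDK's policy object):

```python
# Toy model of conflict routing: Custom mode with default settings (no
# resolver procedure) sends every conflict to the conflicts feed, which is
# exactly what the scenario requires.

def handle_conflict(policy, conflict, conflicts_feed):
    if policy["mode"] == "LastWriterWins":
        return "resolved"                    # highest timestamp wins, no feed entry
    if policy["mode"] == "Custom" and policy.get("procedure") is None:
        conflicts_feed.append(conflict)      # default Custom settings
        return "sent to conflicts feed"
    return "resolved by stored procedure"

feed = []
outcome = handle_conflict({"mode": "Custom"}, {"id": "1"}, feed)
```

Had the policy instead used LastWriterWins (the account default), conflicts would be resolved automatically and never reach the feed.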
Question #: 227
Topic #: 5
You have a container named container1 in an Azure Cosmos DB Core (SQL) API account.
You need to provide a user named User1 with the ability to insert items into container1 by using role-based access control (RBAC). The solution must use the principle of least privilege.
Which roles should you assign to User1?
- CosmosDB Operator only
B. DocumentDB Account Contributor and Cosmos DB Built-in Data Contributor
C. DocumentDB Account Contributor only
D. Cosmos DB Built-in Data Contributor only
Selected Answer: D
———————————————————————-
Question #: 228
Topic #: 1
HOTSPOT –
You have a container in an Azure Cosmos DB Core (SQL) API account.
You need to use the Azure Cosmos DB SDK to replace a document by using optimistic concurrency.
What should you include in the code? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
No answers
Selected Answer: D
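Optimistic concurrency in Cosmos DB relies on the item's system-generated `_etag`: you read the item, keep its ETag, and send it back as a precondition on the replace; the service rejects the write with HTTP 412 (Precondition Failed) if the item changed in between. The sketch below simulates that contract with an in-memory stand-in (the `FakeContainer` class and its methods are illustrative, not the real SDK, which uses e.g. `replace_item(..., etag=..., match_condition=MatchConditions.IfNotModified)` in Python):

```python
# Sketch: ETag-based optimistic concurrency, simulated in memory.
# FakeContainer and PreconditionFailed are illustrative stand-ins for the
# real SDK container client and the service's 412 response.

class PreconditionFailed(Exception):
    """Stands in for the service's HTTP 412 Precondition Failed response."""

class FakeContainer:
    def __init__(self):
        self._items = {}      # id -> (etag, body)
        self._version = 0

    def upsert_item(self, body):
        # Every successful write produces a fresh ETag.
        self._version += 1
        etag = f'"{self._version}"'
        self._items[body["id"]] = (etag, dict(body))
        return {**body, "_etag": etag}

    def read_item(self, item_id):
        etag, body = self._items[item_id]
        return {**body, "_etag": etag}

    def replace_item(self, item_id, body, if_match):
        # The replace succeeds only if the caller's ETag is still current.
        current_etag, _ = self._items[item_id]
        if if_match != current_etag:
            raise PreconditionFailed("ETag mismatch: item changed since read")
        return self.upsert_item(body)
```

Catching the 412 and retrying (re-read, re-apply the change, replace again) is the usual client-side loop.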
———————————————————————-
Question #: 229
Topic #: 2
HOTSPOT –
You have a multi-region Azure Cosmos DB account named account1 that has a default consistency level of strong.
You have an app named App1 that is configured to request a consistency level of session.
How will the read and write operations of App1 be handled? To answer, select the appropriate options in the answer area.
No answers
Selected Answer: D
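The rule being tested: a client may request a consistency level per operation, but only one *weaker* than the account default; writes are always replicated at the account's default level. So with a strong default, App1's reads are served at session consistency while its writes remain strong. A small sketch of that downgrade rule (the level ordering is from the Cosmos DB documentation; the helper is illustrative):

```python
# Sketch of the consistency downgrade rule (helper is illustrative).
# Ordered weakest -> strongest, per the Cosmos DB consistency spectrum.
LEVELS = ["eventual", "consistent_prefix", "session", "bounded_staleness", "strong"]

def effective_read_level(account_default: str, requested: str) -> str:
    """A request is honored only if it is not stronger than the account default."""
    if LEVELS.index(requested) <= LEVELS.index(account_default):
        return requested
    return account_default
```

Requesting a level stronger than the account default is not honored; the account default applies instead.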
———————————————————————-
Question #: 230
Topic #: 3
You have a database named db1 in an Azure Cosmos DB Core (SQL) API account named account1.
You need to write JSON data to db1 by using Azure Stream Analytics. The solution must minimize costs.
What should you do before you can use db1 as an output of Stream Analytics?
A. In account1, add a private endpoint
B. In db1, create containers that have a custom indexing policy and analytical store disabled
C. In db1, create containers that have an automatic indexing policy and analytical store enabled
D. In account1, enable a dedicated gateway
Selected Answer: B
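Stream Analytics writes into containers that already exist, so they must be created up front. To minimize cost, a custom indexing policy that excludes all paths except those actually queried keeps per-write RU charges low, and leaving the analytical store disabled avoids its extra storage cost. A sketch of such a policy using the Python azure-cosmos dict shapes (the container name and paths are illustrative assumptions):

```python
# Sketch (assumption: azure-cosmos Python SDK dict shapes; container name
# and indexed paths are illustrative). A narrow custom indexing policy
# reduces the RU cost of every write from Stream Analytics.

indexing_policy = {
    "indexingMode": "consistent",
    "includedPaths": [{"path": "/deviceId/?"}],   # index only what is queried
    "excludedPaths": [{"path": "/*"}],            # skip everything else
}

analytical_storage_ttl = None  # analytical store stays disabled: no extra cost

# With a live account the container would be created like:
#   database.create_container(
#       id="telemetryOut",
#       partition_key=PartitionKey(path="/deviceId"),
#       indexing_policy=indexing_policy,
#   )
```

The other options add cost rather than reduce it: a private endpoint and a dedicated gateway are billed extras, and enabling the analytical store incurs additional storage charges.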
———————————————————————-
Question #: 231
Topic #: 4
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have a database in an Azure Cosmos DB Core (SQL) API account that is configured for multi-region writes.
You need to use the Azure Cosmos DB SDK to implement the conflict resolution policy for a container. The solution must ensure that any conflicts are sent to the conflicts feed.
Solution: You set ConflictResolutionMode to Custom and you use the default settings for the policy.
Does this meet the goal?
A. Yes
B. No
Selected Answer: A
———————————————————————-
Question #: 232
Topic #: 5
You have a container named container1 in an Azure Cosmos DB Core (SQL) API account.
You need to provide a user named User1 with the ability to insert items into container1 by using role-based access control (RBAC). The solution must use the principle of least privilege.
Which roles should you assign to User1?
A. CosmosDB Operator only
B. DocumentDB Account Contributor and Cosmos DB Built-in Data Contributor
C. DocumentDB Account Contributor only
D. Cosmos DB Built-in Data Contributor only
Selected Answer: D
———————————————————————-
Question #: 233
Topic #: 1
HOTSPOT –
You have a container in an Azure Cosmos DB Core (SQL) API account.
You need to use the Azure Cosmos DB SDK to replace a document by using optimistic concurrency.
What should you include in the code? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
No answers
Selected Answer: D
———————————————————————-
Question #: 234
Topic #: 2
HOTSPOT –
You have a multi-region Azure Cosmos DB account named account1 that has a default consistency level of strong.
You have an app named App1 that is configured to request a consistency level of session.
How will the read and write operations of App1 be handled? To answer, select the appropriate options in the answer area.
No answers
Selected Answer: A
———————————————————————-
Question #: 235
Topic #: 3
You have a database named db1 in an Azure Cosmos DB Core (SQL) API account named account1.
You need to write JSON data to db1 by using Azure Stream Analytics. The solution must minimize costs.
What should you do before you can use db1 as an output of Stream Analytics?
A. In account1, add a private endpoint
B. In db1, create containers that have a custom indexing policy and analytical store disabled
C. In db1, create containers that have an automatic indexing policy and analytical store enabled
D. In account1, enable a dedicated gateway
Selected Answer: B
———————————————————————-
Question #: 236
Topic #: 4
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have a database in an Azure Cosmos DB Core (SQL) API account that is configured for multi-region writes.
You need to use the Azure Cosmos DB SDK to implement the conflict resolution policy for a container. The solution must ensure that any conflicts are sent to the conflicts feed.
Solution: You set ConflictResolutionMode to Custom and you use the default settings for the policy.
Does this meet the goal?
A. Yes
B. No
Selected Answer: A
———————————————————————-
Question #: 237
Topic #: 5
You have a container named container1 in an Azure Cosmos DB Core (SQL) API account.
You need to provide a user named User1 with the ability to insert items into container1 by using role-based access control (RBAC). The solution must use the principle of least privilege.
Which roles should you assign to User1?
A. CosmosDB Operator only
B. DocumentDB Account Contributor and Cosmos DB Built-in Data Contributor
C. DocumentDB Account Contributor only
D. Cosmos DB Built-in Data Contributor only
Selected Answer: D
———————————————————————-
Question #: 238
Topic #: 1
HOTSPOT –
You have a container in an Azure Cosmos DB Core (SQL) API account.
You need to use the Azure Cosmos DB SDK to replace a document by using optimistic concurrency.
What should you include in the code? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
No answers
Selected Answer: D
———————————————————————-
Question #: 239
Topic #: 2
HOTSPOT –
You have a multi-region Azure Cosmos DB account named account1 that has a default consistency level of strong.
You have an app named App1 that is configured to request a consistency level of session.
How will the read and write operations of App1 be handled? To answer, select the appropriate options in the answer area.
No answers
Selected Answer: D
———————————————————————-
Question #: 240
Topic #: 3
You have a database named db1 in an Azure Cosmos DB Core (SQL) API account named account1.
You need to write JSON data to db1 by using Azure Stream Analytics. The solution must minimize costs.
What should you do before you can use db1 as an output of Stream Analytics?
A. In account1, add a private endpoint
B. In db1, create containers that have a custom indexing policy and analytical store disabled
C. In db1, create containers that have an automatic indexing policy and analytical store enabled
D. In account1, enable a dedicated gateway
Selected Answer: B
———————————————————————-
