[Free] 2019(Nov) EnsurePass Microsoft DP-201 Dumps with VCE and PDF 1-10

Get Full Version of the Exam
http://www.EnsurePass.com/DP-201.html

Question No.1

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You are designing an HDInsight/Hadoop cluster solution that uses Azure Data Lake Gen1 Storage.

The solution requires POSIX permissions and enables diagnostics logging for auditing. You need to recommend solutions that optimize storage.

Proposed Solution: Implement compaction jobs to combine small files into larger files. Does the solution meet the goal?

  A. Yes

  B. No

Correct Answer: A

Explanation:

Depending on what services and workloads are using the data, a good size to consider for files is 256 MB or greater. If the file sizes cannot be batched when landing in Data Lake Storage Gen1, you can have a separate compaction job that combines these files into larger ones.
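
As an illustration (not part of the original explanation), a compaction job can be a small Spark step that reads a folder of many small files and rewrites them as a handful of larger ones. The sketch below is a minimal example; the adl:// paths, file format, and output partition count are assumptions chosen for illustration.

```python
# Minimal PySpark compaction sketch (illustrative only).
# Assumes the cluster already has credentials configured for the
# Data Lake Storage Gen1 account; paths and partition count are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("small-file-compaction").getOrCreate()

source_path = "adl://mystore.azuredatalakestore.net/raw/events/2019/11/"      # many small files
target_path = "adl://mystore.azuredatalakestore.net/curated/events/2019/11/"  # few large files

# Read every small file in the landing folder.
df = spark.read.json(source_path)

# Coalesce to a small number of partitions so that each output file is large
# (aim for roughly 256 MB or more per file, per the guidance above).
df.coalesce(8).write.mode("overwrite").parquet(target_path)
```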

Note:

POSIX permissions and auditing in Data Lake Storage Gen1 come with an overhead that becomes apparent when working with numerous small files. As a best practice, you must batch your data into larger files versus writing thousands or millions of small files to Data Lake Storage Gen1. Avoiding small files has multiple benefits, such as:

- Lowering the authentication checks across multiple files
- Reduced open file connections
- Faster copying/replication
- Fewer files to process when updating Data Lake Storage Gen1 POSIX permissions

References:

https://docs.microsoft.com/en-us/azure/data-lake-store/data-lake-store-best-practices

Question No.2

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You are designing an HDInsight/Hadoop cluster solution that uses Azure Data Lake Gen1 Storage.

The solution requires POSIX permissions and enables diagnostics logging for auditing. You need to recommend solutions that optimize storage.

Proposed Solution: Ensure that files stored are smaller than 250MB. Does the solution meet the goal?

  A. Yes

  B. No

Correct Answer: B

Explanation:

Files stored should be larger than 250 MB, not smaller. You can have a separate compaction job that combines the small files into larger ones.

Note:

The POSIX permissions and auditing in Data Lake Storage Gen1 come with an overhead that becomes apparent when working with numerous small files. As a best practice, you must batch your data into larger files versus writing thousands or millions of small files to Data Lake Storage Gen1. Avoiding small files has multiple benefits, such as:

- Lowering the authentication checks across multiple files
- Reduced open file connections
- Faster copying/replication
- Fewer files to process when updating Data Lake Storage Gen1 POSIX permissions

References:

https://docs.microsoft.com/en-us/azure/data-lake-store/data-lake-store-best-practices

Question No.3

You are designing an Azure Databricks cluster that runs user-defined local processes. You need to recommend a cluster configuration that meets the following requirements:

- Minimize query latency.
- Reduce overall costs.
- Maximize the number of users that can run queries on the cluster at the same time.

Which cluster type should you recommend?

  A. Standard with Autoscaling

  B. High Concurrency with Auto Termination

  C. High Concurrency with Autoscaling

  D. Standard with Auto Termination

Correct Answer: A

Question No.4

You design data engineering solutions for a company.

A project requires analytics and visualization of large set of data. The project has the following requirements:

- Notebook scheduling
- Cluster automation
- Power BI visualization

You need to recommend the appropriate Azure service. Which Azure service should you recommend?

  A. Azure Batch

  B. Azure Stream Analytics

  C. Azure ML Studio

  D. Azure Databricks

  E. Azure HDInsight

Correct Answer: D

Explanation:

A Databricks job is a way of running a notebook or JAR either immediately or on a scheduled basis.
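
For illustration only, the snippet below sketches how a notebook could be scheduled as a job through the Databricks Jobs REST API. The workspace URL, token, notebook path, cluster settings, and cron expression are placeholder assumptions, not values from the question.

```python
# Hypothetical sketch: create a scheduled notebook job via the Databricks Jobs REST API.
# Every value below is a placeholder; check the Jobs API documentation for exact fields.
import requests

WORKSPACE_URL = "https://adb-1234567890123456.7.azuredatabricks.net"  # placeholder workspace
TOKEN = "dapiXXXXXXXXXXXXXXXX"                                        # placeholder access token

job_definition = {
    "name": "nightly-sales-report",
    "new_cluster": {
        "spark_version": "5.5.x-scala2.11",   # example runtime
        "node_type_id": "Standard_DS3_v2",
        "num_workers": 2,
    },
    "notebook_task": {"notebook_path": "/Shared/reports/sales_notebook"},
    "schedule": {
        "quartz_cron_expression": "0 0 2 * * ?",  # every day at 02:00
        "timezone_id": "UTC",
    },
}

response = requests.post(
    f"{WORKSPACE_URL}/api/2.0/jobs/create",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=job_definition,
)
print(response.json())  # should contain the new job_id on success
```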

Azure Databricks has two types of clusters: interactive and job. Interactive clusters are used to analyze data collaboratively with interactive notebooks. Job clusters are used to run fast and robust automated workloads using the UI or API.

You can visualize data with Azure Databricks and Power BI Desktop.

References:

https://docs.azuredatabricks.net/user-guide/clusters/index.html
https://docs.azuredatabricks.net/user-guide/jobs.html

Question No.5

You are developing a solution that performs real-time analysis of IoT data in the cloud. The solution must remain available during Azure service updates.

You need to recommend a solution.

Which two actions should you recommend? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.

  A. Deploy an Azure Stream Analytics job to two separate regions that are not in a pair.

  B. Deploy an Azure Stream Analytics job to each region in a paired region.

  C. Monitor jobs in both regions for failure.

  D. Monitor jobs in the primary region for failure.

  E. Deploy an Azure Stream Analytics job to one region in a paired region.

Correct Answer: BC

Explanation:

Stream Analytics guarantees that jobs in paired regions are updated in separate batches. As a result, there is a sufficient time gap between the updates to identify potential breaking bugs and remediate them.

Customers are advised to deploy identical jobs to both paired regions.

In addition to Stream Analytics internal monitoring capabilities, customers are also advised to monitor the jobs as if both are production jobs. If a break is identified to be a result of the Stream Analytics service update, escalate appropriately and fail over any downstream consumers to the healthy job output. Escalation to support will prevent the paired region from being affected by the new deployment and maintain the integrity of the paired jobs.
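
As a rough illustration of "identical jobs in both paired regions" (not taken from the exam material), the sketch below deploys the same ARM template for a Stream Analytics job to two regions of a pair by shelling out to the Azure CLI. The region pair, resource-group names, template file, and parameter names are assumptions.

```python
# Illustrative sketch only: deploy the same Stream Analytics job template to both
# regions of an Azure paired region (East US / West US used as an example pair).
# Assumes the Azure CLI is installed and logged in, and that streamjob.json is an
# ARM template parameterized by location and job name.
import subprocess

PAIRED_REGIONS = ["eastus", "westus"]   # example paired regions
TEMPLATE_FILE = "streamjob.json"        # placeholder ARM template

for region in PAIRED_REGIONS:
    resource_group = f"rg-iot-analytics-{region}"  # one resource group per region
    subprocess.run(
        [
            "az", "deployment", "group", "create",
            "--resource-group", resource_group,
            "--template-file", TEMPLATE_FILE,
            "--parameters", f"location={region}", f"jobName=iot-asa-{region}",
        ],
        check=True,  # fail fast if either regional deployment fails
    )
```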

References:

https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-job-reliability

Question No.6

HOTSPOT

A company stores large datasets in Azure, including sales transactions and customer account information.

You must design a solution to analyze the data. You plan to create the following HDInsight clusters:

You need to ensure that the clusters support the query requirements.

Which cluster types should you recommend? To answer, select the appropriate configuration in the answer area.

NOTE: Each correct selection is worth one point.

(Answer area image not available in this document.)

Correct Answer:

(Correct answer image not available in this document.)

Question No.7

A company purchases IoT devices to monitor manufacturing machinery. The company uses an IoT appliance to communicate with the IoT devices.

The company must be able to monitor the devices in real-time. You need to design the solution.

What should you recommend?

  A. Azure Stream Analytics cloud job using Azure PowerShell

  B. Azure Analysis Services using Azure Portal

  C. Azure Data Factory instance using Azure Portal

  D. Azure Analysis Services using Azure PowerShell

Correct Answer: A

Question No.8

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You are designing an Azure SQL Database that will use elastic pools. You plan to store data about customers in a table. Each record uses a value for CustomerID.

You need to recommend a strategy to partition data based on values in CustomerID. Proposed Solution: Separate data into customer regions by using horizontal partitioning. Does the solution meet the goal?

  A. Yes

  B. No

Correct Answer: B

Explanation:

Horizontal partitioning should be implemented through sharding on CustomerID, not by dividing the data into customer regions.

Note:

Horizontal Partitioning – Sharding: Data is partitioned horizontally to distribute rows across a scaled-out data tier. With this approach, the schema is identical on all participating databases. This approach is also called "sharding". Sharding can be performed and managed using (1) the elastic database tools libraries or (2) self-sharding. An elastic query is used to query or compile reports across many shards.
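
As a simple illustration of sharding on CustomerID (not part of the original explanation), the sketch below hash-partitions customer IDs across a fixed set of shard databases. The shard names and shard count are assumptions; a production design would normally use the Elastic Database tools' shard map rather than a hand-rolled function.

```python
# Illustrative sketch: map a CustomerID to one of N shard databases by hashing.
import hashlib

SHARD_DATABASES = [
    "customers_shard_0",
    "customers_shard_1",
    "customers_shard_2",
    "customers_shard_3",
]

def shard_for_customer(customer_id: int) -> str:
    """Return the shard database that holds the given CustomerID."""
    digest = hashlib.md5(str(customer_id).encode()).hexdigest()
    return SHARD_DATABASES[int(digest, 16) % len(SHARD_DATABASES)]

if __name__ == "__main__":
    for cid in (1001, 1002, 424242):
        print(cid, "->", shard_for_customer(cid))
```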

References:

https://docs.microsoft.com/en-us/azure/sql-database/sql-database-elastic-query-overview

Question No.9

HOTSPOT

You are designing a recovery strategy for your Azure SQL Databases.

The recovery strategy must use the default automated backup settings. The solution must include a point-in-time restore recovery strategy.

You need to recommend which backups to use and the order in which to restore backups.

What should you recommend? To answer, select the appropriate configuration in the answer area. NOTE: Each correct selection is worth one point.

(Answer area image not available in this document.)

Correct Answer:

(Correct answer image not available in this document.)

Question No.10

You are designing a real-time stream solution based on Azure Functions. The solution will process data uploaded to Azure Blob Storage.

The solution requirements are as follows:

- New blobs must be processed with as little delay as possible.
- Scaling must occur automatically.
- Costs must be minimized.

What should you recommend?

  A. Deploy the Azure Function in an App Service plan and use a Blob trigger.

  B. Deploy the Azure Function in a Consumption plan and use an Event Grid trigger.

  C. Deploy the Azure Function in a Consumption plan and use a Blob trigger.

  D. Deploy the Azure Function in an App Service plan and use an Event Grid trigger.

Correct Answer: C

Explanation:

Create a function, using the blob trigger template, that is triggered when files are uploaded to or updated in Azure Blob storage.

Use a Consumption plan, which is a hosting plan that defines how resources are allocated to your function app. In the default Consumption plan, resources are added dynamically as required by your functions. With this serverless hosting model, you pay only for the time your functions run. When you run in an App Service plan, by contrast, you must manage the scaling of your function app yourself.
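
For illustration, a blob-triggered function on the Consumption plan could look like the Python sketch below. It uses the newer Python programming model rather than the portal template mentioned above, and the container name and connection setting are assumptions.

```python
# Illustrative sketch of a blob-triggered Azure Function (Python v2 programming model).
# The container name and connection setting are placeholders; a Consumption-plan
# function app with an Azure Storage connection string is assumed.
import logging

import azure.functions as func

app = func.FunctionApp()

@app.blob_trigger(arg_name="blob",
                  path="uploads/{name}",             # container/path to watch
                  connection="AzureWebJobsStorage")  # app setting holding the storage connection string
def process_new_blob(blob: func.InputStream) -> None:
    # Runs each time a blob is added to or updated in the 'uploads' container.
    logging.info("Processing blob %s (%s bytes)", blob.name, blob.length)
```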

References:

https://docs.microsoft.com/en-us/azure/azure-functions/functions-create-storage-blob-triggered-function

