Microsoft DP-300 Exam Dumps & Practice Test Questions

Question 1:

You manage 20 Azure SQL databases using the vCore purchasing model and want to create an Azure SQL Database elastic pool to include all these databases. 

Which three metrics should you consider to accurately size the elastic pool to support your workload? (Choose three.)

A. Total storage size of all databases
B. Geo-replication support
C. Number of databases concurrently hitting peak CPU utilization multiplied by each database’s peak CPU usage
D. Maximum number of concurrent sessions across all databases
E. Total number of databases multiplied by average CPU usage per database

Correct Answers: A, C, E

Explanation:

When configuring an Azure SQL Database elastic pool, it is critical to estimate the resource needs carefully to balance performance and cost efficiency. Elastic pools share resources such as CPU, memory, and storage across multiple databases. To size the pool properly, you need to understand the cumulative resource demand.

Option A—the total storage size of all databases—is a fundamental metric because the pool must have enough configured storage to hold all the data without hitting its limit. Because pool storage is a fixed allocation at a given service level, underestimating the total size can cause out-of-space failures or require urgent scaling.

Option C is key for CPU capacity planning. Databases in the pool may have varying workloads with periods of peak usage. The pool must support the number of databases that simultaneously hit their peak CPU demands. By multiplying the number of concurrently peaking databases by their peak CPU usage, you can gauge the maximum CPU resources the pool must handle during busy periods, ensuring no bottlenecks.

Option E provides an aggregate baseline CPU estimate. It calculates total CPU demand based on the average usage of all databases combined. This helps size the pool to efficiently handle typical, non-peak workloads without overprovisioning resources.
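
To make these two CPU metrics concrete, consider a purely illustrative calculation (the numbers are hypothetical, not taken from the scenario):

    Baseline (Option E): 20 databases × 0.5 vCores average each = 10 vCores
    Peak (Option C):     4 concurrently peaking databases × 3 vCores peak each = 12 vCores

Here the pool would need roughly 12 vCores of compute—the larger of the two estimates—plus enough configured storage to hold the combined data size from Option A.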

The incorrect options clarify common misconceptions:
B—geo-replication support relates to disaster recovery and does not affect resource sizing of the pool.
D—the maximum number of concurrent sessions reflects connection volume, but Azure pools focus on resource consumption metrics (CPU, storage, IO) rather than session counts for sizing.

In summary, to size an Azure SQL elastic pool effectively, consider total storage requirements, peak CPU demands of concurrently busy databases, and overall average CPU consumption. These metrics enable balancing cost and performance for your 20 databases, making A, C, and E the correct selections.

Question 2:

You have an Azure SQL database with a very large table named factSales containing 6 billion rows. This table is refreshed nightly through batch processing. 

Which compression type should you use to maximize space savings and optimize query performance?

A. Page compression
B. Row compression
C. Columnstore compression
D. Columnstore archival compression

Correct Answer: D

Explanation:

Choosing the right compression method for massive datasets like the factSales table (6 billion rows) can greatly impact both storage costs and query performance. Each compression type offers different trade-offs suited to particular workloads.

Option A, page compression, reduces data size by compressing repeated values and patterns at the 8 KB page level. It suits rowstore tables in transactional databases, though it adds some CPU overhead on writes. It also does not scale as well for very large analytic tables and delivers considerably lower compression ratios than columnstore methods.

Option B, row compression, saves space by minimizing row-level redundancies but is the least effective among these options for large datasets. It is more suited for OLTP workloads than for large-scale analytics.

Option C, columnstore compression, is designed specifically for large analytical workloads. It stores data in a columnar format, enabling high compression ratios and faster analytical queries like aggregations and filtering. It’s well-suited for tables with billions of rows, improving both space efficiency and performance.

Option D, columnstore archival compression, is an enhanced, more aggressive form of columnstore compression. It is optimized for large historical datasets that are infrequently updated but often queried for analysis. Since factSales is batch-loaded nightly and presumably contains large volumes of relatively static data, archival compression maximizes storage savings beyond standard columnstore compression. Although it slightly reduces query speed compared to regular columnstore, it strikes the best balance by minimizing storage footprint while maintaining good query performance for typical analytic workloads.
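
As a brief sketch of how this is applied, archival compression uses the COLUMNSTORE_ARCHIVE data compression option in T-SQL. The example below assumes dbo.factSales already has a clustered columnstore index; the object names are illustrative:

    -- Rebuild all partitions with archival columnstore compression.
    ALTER TABLE dbo.factSales
    REBUILD PARTITION = ALL
    WITH (DATA_COMPRESSION = COLUMNSTORE_ARCHIVE);

If the table is partitioned, the same option can target only cold partitions (for example, PARTITION = 1) so that recent, frequently queried data keeps standard columnstore compression.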

In summary, for a large, mostly read-only factSales table, columnstore archival compression (Option D) provides the greatest space reduction with acceptable performance, making it the ideal choice.

Question 3:

You have a Microsoft SQL Server 2019 database named DB1, which currently uses clustered columnstore indexes, automatic tuning, change tracking, and PolyBase. You want to migrate this database to Azure SQL Database. 

Which feature must you remove or replace before the migration?

A. Clustered columnstore indexes
B. PolyBase
C. Change tracking
D. Automatic tuning

Correct Answer: B

Explanation:

When migrating from SQL Server 2019 to Azure SQL Database, it’s important to verify feature compatibility because not all SQL Server features are fully supported in Azure SQL Database. Among the features listed—clustered columnstore indexes, automatic tuning, change tracking, and PolyBase—only one is unsupported in Azure SQL Database.

Clustered columnstore indexes are fully supported in Azure SQL Database. These indexes improve performance for large-scale analytics by compressing data and optimizing queries, making them useful and compatible in the cloud environment.

Automatic tuning is another feature that Azure SQL Database actively supports. It automatically detects and applies performance improvements like creating or dropping indexes and fixing query plans, so there’s no need to remove or change this.

Change tracking is a lightweight method to track data changes and is supported in Azure SQL Database as well. It’s commonly used in synchronization scenarios and data auditing, making it a supported feature for migration.

PolyBase, however, is not supported in Azure SQL Database. PolyBase allows SQL Server to query external data sources like Hadoop or Azure Blob Storage directly. Because Azure SQL Database does not include PolyBase, any functionality relying on it must be replaced before migration. Common replacements include using Azure Data Factory or Azure Synapse Analytics, which provide more integrated external data query and ETL capabilities in the Azure ecosystem.
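
As a quick pre-migration check (a minimal sketch using standard SQL Server 2019 catalog views), you can inventory the PolyBase objects that would need to be replaced:

    -- List PolyBase dependencies in DB1 before migration.
    SELECT name, location FROM sys.external_data_sources;
    SELECT name FROM sys.external_tables;

Any objects returned here must be redesigned, for example as Azure Data Factory pipelines, before the database can move to Azure SQL Database.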

Therefore, PolyBase must be removed or replaced prior to migration, making option B the correct answer.

Question 4:

You manage a Microsoft SQL Server 2019 instance on-premises with a 4 TB database named DB1. You want to migrate DB1 to an Azure SQL Database managed instance while minimizing downtime and data loss. 

Which tool or method should you use?

A. Distributed availability groups
B. Database mirroring
C. Always On Availability Group
D. Azure Database Migration Service

Correct Answer: D

Explanation:

Migrating a large database, such as a 4-terabyte on-premises SQL Server database, to an Azure SQL Database managed instance requires careful planning to reduce downtime and avoid data loss. Several SQL Server features and tools exist for high availability and disaster recovery, but their suitability for cloud migration varies.

Distributed availability groups are primarily designed for high availability across multiple geographically dispersed data centers. While they enhance availability, they don’t specifically facilitate migration to Azure SQL Database and aren’t the ideal tool for minimizing downtime during such migrations.

Database mirroring provides a redundant copy of a database to maintain high availability, but this technology is deprecated in newer SQL Server versions. Moreover, it’s not designed for cloud migration and involves complex setup with a principal and mirror server, making it less practical for migration tasks.

Always On Availability Groups offer high availability and disaster recovery through replication and failover capabilities in clustered environments. Although powerful for on-premises setups or hybrid configurations, they don’t provide a streamlined method to migrate databases directly to Azure SQL Database managed instances.

Azure Database Migration Service (DMS), however, is purpose-built to migrate databases from on-premises SQL Server to Azure with minimal downtime. DMS supports continuous data synchronization, allowing the source and target databases to stay in sync during the migration process. This reduces downtime because the final cutover happens quickly once the bulk of data is already copied. It also ensures transactional consistency and minimizes the risk of data loss, making it the most effective tool for migrating large databases like DB1.

In summary, Azure Database Migration Service (DMS) is the best option for minimizing downtime and data loss during migration, making option D the correct choice.

Question 5:

You are building a streaming data ingestion system that needs to handle fluctuating data volumes. 

Which Azure service should you select if you want the ability to change the number of partitions after the service has been created?

A. Azure Event Hubs Standard
B. Azure Stream Analytics
C. Azure Data Factory
D. Azure Event Hubs Dedicated

Correct Answer: D

Explanation:

The best choice for ingesting streaming data when the partition count must be adjustable after creation is Azure Event Hubs Dedicated. Event Hubs is a highly scalable data streaming platform designed to ingest millions of events per second from multiple sources, making it well suited to variable and high-volume workloads, but the ability to change the partition count after an event hub has been created depends on the tier.

In the Standard tier (Option A), the partition count is fixed when the event hub is created and cannot be changed afterward. Dynamically adding partitions is available only in the Premium and Dedicated tiers, so Standard does not meet the requirement despite otherwise being a capable ingestion service.

Looking at the alternatives, Azure Stream Analytics (Option B) is primarily a real-time analytics engine rather than an ingestion service. It consumes data from sources like Event Hubs but does not manage partitions or ingestion at the infrastructure level. Its focus is on processing and analyzing data streams rather than controlling ingestion details.

Azure Data Factory (Option C) is an orchestration and integration service designed to create data pipelines and workflows. While it supports batch and streaming data movement, it does not provide direct control over partitions in streaming sources, so it isn’t suited for managing ingestion partitioning.

Azure Event Hubs Dedicated (Option D) is a single-tenant offering for the most demanding, high-throughput streaming workloads, and it is the only option listed that supports increasing the number of partitions on an event hub after it has been created. This flexibility lets the ingestion layer adapt to fluctuating data volumes without redeploying or recreating the service.

In summary, Azure Event Hubs Dedicated is the right choice because it allows the partition count to be changed after creation, supporting variable streaming data volumes without disruption. This makes it the appropriate service for scenarios requiring adaptable, scalable data ingestion.

Question 6:

Which Azure service is best suited to host a fully managed relational database that offers built-in high availability and automated backup capabilities?

A. Azure Blob Storage
B. Azure SQL Database
C. Azure Virtual Machines
D. Azure Cosmos DB

Correct Answer: B

Explanation:

The most appropriate Azure service for hosting a fully managed relational database with built-in high availability and automatic backups is Azure SQL Database. This service is a Platform-as-a-Service (PaaS) offering that abstracts much of the operational complexity involved in managing a traditional SQL Server environment.

Azure SQL Database is designed to handle the core needs of modern relational databases, such as scalability, reliability, and ease of administration. It automatically manages infrastructure concerns like patching, backups, and failover, enabling developers and administrators to focus on application development rather than database maintenance.

A standout feature of Azure SQL Database is its built-in high availability. The service replicates data across multiple nodes within an Azure region and supports auto-failover groups that provide cross-region disaster recovery. These features ensure that the database remains accessible even during hardware failures or planned maintenance, maintaining uptime and resilience without manual intervention.

Automated backups are another critical feature; Azure SQL Database retains backups for up to 35 days (depending on the tier), allowing for point-in-time recovery. This protects data integrity and minimizes downtime during accidental data loss or corruption.

Compared to other options, Azure Virtual Machines (Option C) offer flexibility to run SQL Server, but as an Infrastructure-as-a-Service (IaaS) model, they require manual handling of OS updates, backups, and high availability configurations, increasing operational overhead. Azure Blob Storage (Option A) is designed for unstructured data and cannot host relational databases. Azure Cosmos DB (Option D) excels at globally distributed, multi-model NoSQL databases but is not tailored for traditional relational database workloads.

Additionally, Azure SQL Database includes intelligent features like automatic tuning, threat detection, auditing, and deep integration with Azure monitoring tools, reducing manual administrative tasks and improving security.

For anyone preparing for the DP-300 exam or managing Azure databases, understanding the capabilities of Azure SQL Database is fundamental, making it the clear choice for fully managed relational database hosting with robust availability and backup.

Question 7:

When setting up automatic tuning in an Azure SQL Database, which of the following actions can Azure SQL Database perform automatically as part of this tuning?

A. Analyze index fragmentation
B. Clean up the Query Store
C. Automatically create and remove indexes
D. Rotate backup encryption keys

Correct Answer: C

Explanation:

Azure SQL Database provides an intelligent feature called Automatic Tuning that helps optimize database performance without requiring manual intervention. This feature continuously monitors query workloads and index usage patterns, then automatically applies tuning actions to enhance efficiency and speed. Among these tuning actions, the automatic creation and removal of indexes is a core capability.

Indexes are critical for query performance, but managing them manually—especially in a dynamic environment where query patterns change frequently—can be cumbersome and error-prone. Azure SQL uses telemetry collected from the Query Store to understand which queries would benefit from new indexes and which existing indexes are underutilized or redundant. When the system detects a frequently run query that lacks a helpful index, it automatically creates the index to improve performance. Conversely, if an existing index is seldom used and degrades write performance or consumes unnecessary storage, Azure SQL will automatically drop it.
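
As a sketch, automatic index management is enabled per database through the documented AUTOMATIC_TUNING option (shown here against the current database):

    -- Enable automatic index creation and removal.
    ALTER DATABASE CURRENT
    SET AUTOMATIC_TUNING (CREATE_INDEX = ON, DROP_INDEX = ON);

    -- Review the desired versus actual tuning configuration.
    SELECT name, desired_state_desc, actual_state_desc
    FROM sys.database_automatic_tuning_options;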

This proactive index management reduces the DBA’s workload and ensures that the database maintains optimal query performance and resource utilization. It’s important to note that other maintenance tasks such as index fragmentation analysis or query store cleanup are handled differently and are not part of automatic tuning. For example, index fragmentation is traditionally managed by manual maintenance or scheduled jobs, while Query Store cleanup is controlled through retention policies. Backup encryption key rotation is related to security and does not fall under performance tuning.

For candidates preparing for the DP-300 exam, understanding Azure SQL Database’s automatic tuning capabilities—especially automatic index management—is essential because it highlights how Azure’s intelligent features simplify database administration and improve performance in cloud environments.

Question 8:

To automate regular index maintenance tasks on your Azure SQL Database with minimal management effort, which tool should you choose?

A. Azure Logic Apps
B. SQL Server Agent
C. Elastic Jobs for Azure SQL
D. Azure DevOps Pipelines

Correct Answer: C

Explanation:

In Azure SQL Database environments, especially those involving single databases or elastic pools, automating routine tasks like index maintenance can be challenging since traditional tools such as SQL Server Agent are not available in these deployments. To address this, Microsoft offers Elastic Jobs for Azure SQL, which provide a cloud-native, scalable solution for managing administrative jobs across one or many databases.

Elastic Jobs are hosted in a dedicated job agent database within Azure SQL and allow database administrators to define jobs made up of multiple steps targeting one or multiple databases. These jobs can be scheduled or run on-demand and are ideal for tasks such as rebuilding fragmented indexes, updating statistics, or running custom maintenance scripts consistently across large numbers of databases.
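
As a minimal sketch (run inside the job agent's job database; the server name, credentials, and maintenance procedure are illustrative assumptions), a nightly index-maintenance job could be defined like this:

    -- Target every database on a logical server.
    EXEC jobs.sp_add_target_group @target_group_name = 'AllDatabases';
    EXEC jobs.sp_add_target_group_member
         @target_group_name = 'AllDatabases',
         @target_type = 'SqlServer',
         @server_name = 'myserver.database.windows.net',
         @refresh_credential_name = 'RefreshCred';

    -- Create the job and a step that runs in each target database.
    EXEC jobs.sp_add_job
         @job_name = 'NightlyIndexMaintenance',
         @description = 'Rebuild fragmented indexes';
    EXEC jobs.sp_add_jobstep
         @job_name = 'NightlyIndexMaintenance',
         @step_name = 'RebuildIndexes',
         @command = N'EXEC dbo.usp_IndexMaintenance;', -- hypothetical procedure in each database
         @credential_name = 'JobCred',
         @target_group_name = 'AllDatabases';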

Unlike SQL Server Agent, which is available only in SQL Server running on Azure VMs or Azure SQL Managed Instances, Elastic Jobs are the native choice for automating tasks within Azure SQL Database’s Platform as a Service (PaaS) offerings. They provide centralized management, support for retries on failure, and detailed logging to track job execution history.

Azure Logic Apps focus more on integrating workflows between cloud services and are not designed for database maintenance automation. Azure DevOps Pipelines, while powerful for continuous integration and delivery scenarios, are excessive and not specifically intended for routine database maintenance.

Understanding how to leverage Elastic Jobs is crucial for Azure Database Administrators preparing for the DP-300 exam, as it reflects Microsoft’s emphasis on scalable, automated cloud-native solutions to maintain performance and reduce manual overhead in Azure SQL environments.

Question 9:

You want to improve the performance of your Azure SQL Database by forcing a particular query plan known to be efficient. 

Which automatic tuning option should you enable to achieve this?

A. Automatic index creation
B. Force last good plan
C. Automatic plan cleanup
D. Query store retention policy

Correct Answer: B

Explanation:

One of the powerful capabilities within Azure SQL Database’s automatic tuning framework is the ability to force the use of a previously successful query execution plan. This is known as “Force last good plan.” When SQL Server detects that a query’s current execution plan is causing performance degradation (for example, after a plan regression due to statistics updates or query recompilations), it can automatically revert to a plan that previously provided better performance.

Enabling “Force last good plan” allows Azure SQL Database to monitor query performance continuously. If a new plan leads to slower query execution times, the system automatically switches back to the prior “good” plan without manual intervention. This helps prevent performance drops caused by plan regressions, ensuring more stable and reliable query performance.
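
As a sketch, the option is enabled per database with the documented AUTOMATIC_TUNING syntax, and detected regressions can be inspected through a dynamic management view:

    -- Enable automatic correction of plan regressions.
    ALTER DATABASE CURRENT
    SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON);

    -- Inspect recommendations, including plan-forcing actions.
    SELECT reason, score, state
    FROM sys.dm_db_tuning_recommendations;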

This option differs from automatic index creation, which focuses on creating missing indexes, or index dropping, which removes unused indexes. Plan forcing is specifically about managing query execution plans rather than physical database structures.

Automatic plan cleanup or Query Store retention policies manage the lifecycle and storage of query performance data but do not directly impact which plan is used during execution. Query Store collects and retains historical query data, which supports features like forcing good plans and troubleshooting but isn’t a tuning action itself.

For database administrators preparing for the DP-300 exam, understanding “Force last good plan” is important because it demonstrates Azure SQL Database’s ability to self-correct query performance issues and reduce the need for manual tuning efforts. This feature is particularly valuable in cloud environments with dynamic workloads where query plans can frequently change due to updates or varying data distributions.

In summary, “Force last good plan” is the automatic tuning action that helps maintain stable query performance by reverting to effective execution plans, making it essential for efficient Azure SQL Database administration.

Question 10:

You are tasked with scheduling and automating T-SQL scripts across multiple Azure SQL Databases in an elastic pool. 

Which tool provides a native, scalable solution designed for this purpose?

A. Azure Automation Runbooks
B. SQL Server Agent on Azure VM
C. Elastic Jobs in Azure SQL
D. Azure Data Factory

Correct Answer: C

Explanation:

When managing multiple Azure SQL Databases grouped in an elastic pool, automating repetitive administrative tasks such as running T-SQL scripts across all databases can be challenging. Traditional SQL Server Agent jobs are unavailable in this Platform as a Service (PaaS) context unless you use SQL Server on Azure Virtual Machines or Managed Instances. To address this gap, Microsoft introduced Elastic Jobs as a native Azure SQL feature to handle job scheduling and execution across multiple databases efficiently.

Elastic Jobs consist of a job agent and jobs made up of multiple job steps targeting one or more databases. They allow DBAs to schedule maintenance tasks like rebuilding indexes, updating statistics, or running custom scripts across multiple Azure SQL Databases or elastic pools from a centralized location. This capability simplifies management and ensures consistency in maintenance operations at scale.
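
Building on the sketch shown for Question 8, targeting an elastic pool only requires a different target type. The example below (with illustrative names and credentials) schedules a daily statistics update across every database in a pool:

    -- Target all databases in one elastic pool.
    EXEC jobs.sp_add_target_group @target_group_name = 'Pool1Databases';
    EXEC jobs.sp_add_target_group_member
         @target_group_name = 'Pool1Databases',
         @target_type = 'SqlElasticPool',
         @server_name = 'myserver.database.windows.net',
         @elastic_pool_name = 'pool1',
         @refresh_credential_name = 'RefreshCred';

    -- Run sp_updatestats once a day across the pool.
    EXEC jobs.sp_add_job
         @job_name = 'DailyUpdateStats',
         @enabled = 1,
         @schedule_interval_type = 'Days',
         @schedule_interval_count = 1;
    EXEC jobs.sp_add_jobstep
         @job_name = 'DailyUpdateStats',
         @command = N'EXEC sp_updatestats;',
         @credential_name = 'JobCred',
         @target_group_name = 'Pool1Databases';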

Azure Automation Runbooks, while useful for broader cloud automation and scripting tasks, are not specifically tailored to execute SQL commands directly within multiple Azure SQL Databases efficiently. Similarly, Azure Data Factory focuses on orchestrating data workflows and ETL pipelines rather than database maintenance.

SQL Server Agent is a familiar scheduling tool for on-premises SQL Server and Azure Managed Instances but is not supported in single databases or elastic pools within Azure SQL Database PaaS.

Understanding Elastic Jobs is critical for Azure Database Administrators preparing for the DP-300 exam, as it reflects Microsoft’s cloud-native approach to automation and scalability. Elastic Jobs allow administrators to reduce manual overhead and maintain performance through automated maintenance while scaling effortlessly with cloud database environments.

