
Esat Erkec

A case study of SQL Query tuning in SQL Server

Gaining experience in SQL query tuning can be difficult and complicated for database developers and administrators. For this reason, in this article we will work through a case study and learn, step by step, how to tune the performance of a problematic query. In this way, we will gain a practical understanding of how to approach query performance issues.

Prerequisites

In this article, we will use the AdventureWorks2017 sample database. We will also use the Create Enlarged AdventureWorks Tables script to obtain enlarged versions of the SalesOrderHeader and SalesOrderDetail tables, because the size of this database is not sufficient for performance tests. After installing the AdventureWorks2017 database, we can execute the table-enlarging script.

Case Study: SQL Query tuning without creating a new index

Imagine that you are employed as a full-time database administrator in a company that is still using SQL Server 2017. You have received an e-mail from the software development team in which they complain about the performance of the following query.
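
The query itself appeared as a screenshot in the original article. A hypothetical sketch of a query with the shape described in the rest of the article (the enlarged AdventureWorks tables, the dbo.ufnGetStock scalar function, a LEN() filter, and an ORDER BY) might look like this; the column list is purely illustrative:

-- Illustrative sketch only; column choices are assumptions, not the original query.
SELECT SOH.SalesOrderID,
       SOH.OrderDate,
       SOH.ShipDate,
       SOD.OrderQty,
       SOD.LineTotal,
       dbo.ufnGetStock(SOD.ProductID) AS CurrentStock
FROM Sales.SalesOrderHeaderEnlarged AS SOH
    INNER JOIN Sales.SalesOrderDetailEnlarged AS SOD
        ON SOH.SalesOrderID = SOD.SalesOrderID
WHERE LEN(SOH.CreditCardApprovalCode) > 10
ORDER BY SOH.ShipDate;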

Your objective is to improve the performance of the above query without creating a new index on the tables but you can re-write the query.

The first step of the SQL query tuning: Identify the problems

Firstly, we will enable the actual execution plan in the SSMS and execute the problematic query. Using the actual execution plan is the best approach to analyze a query because the actual plan includes all accurate statistics and information about a query. However, if a query is taking a long time, we can refer to the estimated execution plan. After this explanation, let’s examine the select operator of the execution plan.

  • Interpreting execution plans of T-SQL queries
  • Main Concepts of SELECT operators in SQL Server execution plans

The ElapsedTime attribute indicates the execution time of a query, and from this value we see that the query completed in 142 seconds. For this query, we also see the UdfElapsedTime attribute, which indicates how much time the database engine spent invoking the user-defined functions in the query. For this particular query, these two elapsed times are very close, so we can deduce that the user-defined function might be the cause of the problem.

Select operator details in the execution plan

Another point to take into consideration for this query is parallelism. The Estimated Subtree Cost value exceeds the Cost Threshold for Parallelism setting of the server, but the query optimizer does not generate a parallel execution plan because of the scalar function. Scalar functions prevent the query optimizer from generating a parallel plan.

Why a query does not generate a parallel execution plan?

The last problem with this query is the tempdb spill issue, which is indicated by the warning signs in the execution plan.

Analyze an execution plan for SQL query tuning

Outdated statistics, poorly written queries, and ineffective index usage can all cause tempdb spill issues.

Improve the performance of the scalar function in a query

Scalar functions can be performance killers for queries, and that is exactly the case for our sample query. SQL Server invokes a scalar function once for every row of the result set. Another problem related to scalar functions is the black-box problem: the query optimizer has no idea about the code inside a scalar function, so it does not consider the cost impact of the scalar function on the query.

A new feature announced with SQL Server 2019, Scalar UDF Inlining, can help overcome most of the performance issues associated with scalar functions. However, if we are using an earlier version of SQL Server, we should inline the scalar function logic into the query explicitly where possible. The common method is to transform the scalar function into a subquery and incorporate it into the query with the help of the CROSS APPLY operator. When we look inside the ufnGetStock function, we can see that it sums product quantities by ProductID, but only for a specific LocationID.
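
For reference, the AdventureWorks ufnGetStock function is defined approximately as follows (simplified sketch; consult the sample database for the exact definition):

-- Simplified form of dbo.ufnGetStock: it sums on-hand quantity for a ProductID,
-- but only for one fixed LocationID.
CREATE FUNCTION dbo.ufnGetStock (@ProductID int)
RETURNS int
AS
BEGIN
    DECLARE @ret int;

    SELECT @ret = SUM(p.Quantity)
    FROM Production.ProductInventory AS p
    WHERE p.ProductID = @ProductID
      AND p.LocationID = '6';   -- a single, fixed location

    IF (@ret IS NULL)
        SET @ret = 0;

    RETURN @ret;
END;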

Scalar functions affect SQL query tuning negatively

We can transform and implement the ufnGetStock scalar function as shown below. In this way, we ensure that our sample query can run in parallel and will be faster than the first version.
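
A sketch of such a rewrite is shown here; the column list is illustrative, and the scalar function is replaced by a correlated subquery applied with CROSS APPLY so that the optimizer can cost it and consider a parallel plan:

SELECT SOH.SalesOrderID,
       SOH.OrderDate,
       SOH.ShipDate,
       SOD.OrderQty,
       SOD.LineTotal,
       Stock.CurrentStock
FROM Sales.SalesOrderHeaderEnlarged AS SOH
    INNER JOIN Sales.SalesOrderDetailEnlarged AS SOD
        ON SOH.SalesOrderID = SOD.SalesOrderID
    CROSS APPLY (
        SELECT ISNULL(SUM(p.Quantity), 0) AS CurrentStock
        FROM Production.ProductInventory AS p
        WHERE p.ProductID = SOD.ProductID
          AND p.LocationID = '6'
    ) AS Stock
WHERE LEN(SOH.CreditCardApprovalCode) > 10
ORDER BY SOH.ShipDate;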

This query took 71 seconds to complete, and when we look at the execution plan, we see a parallel execution plan. However, the tempdb spill issue persists. This clearly shows that we need to expend more effort and try other methods to overcome the tempdb spill problem.

Tempdb spill issue affects SQL query tuning negatively

Think more creatively for SQL query tuning

To get rid of the tempdb spill issue, we will create a temporary table and insert all rows into it. Temporary tables offer very flexible usage, so we can add a computed column that replaces the LEN function in the WHERE clause. The insert query will be as below.
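
A sketch of this step, with an illustrative column list: the temporary table carries a computed column for the approval-code length, and the TABLOCK hint enables a parallel insert into the heap.

CREATE TABLE #SalesTemp
(
    SalesOrderID           int,
    OrderDate              datetime,
    ShipDate               datetime,
    OrderQty               smallint,
    LineTotal              numeric(38, 6),
    CurrentStock           int,
    CreditCardApprovalCode varchar(15),
    ApprovalCodeLen AS LEN(CreditCardApprovalCode)   -- computed column replacing LEN() in the WHERE clause
);

INSERT INTO #SalesTemp WITH (TABLOCK)
    (SalesOrderID, OrderDate, ShipDate, OrderQty, LineTotal, CurrentStock, CreditCardApprovalCode)
SELECT SOH.SalesOrderID,
       SOH.OrderDate,
       SOH.ShipDate,
       SOD.OrderQty,
       SOD.LineTotal,
       Stock.CurrentStock,
       SOH.CreditCardApprovalCode
FROM Sales.SalesOrderHeaderEnlarged AS SOH
    INNER JOIN Sales.SalesOrderDetailEnlarged AS SOD
        ON SOH.SalesOrderID = SOD.SalesOrderID
    CROSS APPLY (
        SELECT ISNULL(SUM(p.Quantity), 0) AS CurrentStock
        FROM Production.ProductInventory AS p
        WHERE p.ProductID = SOD.ProductID
          AND p.LocationID = '6'
    ) AS Stock;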

When we analyze this query, we can see the TABLOCK hint after the INSERT statement. The purpose of this hint is to enable a parallel insert so that we gain more performance. This can be seen in the execution plan.

SQL query tuning and parallel insert

In this way, we have inserted 1,286,520 rows into the temporary table in just one second. However, the temporary table still holds more data than we need, because we have not filtered the CreditCardApprovalCode column on values whose character length is greater than 10 during the insert operation. At this point, we will use a little trick and delete the rows whose character length is smaller than or equal to 10. After the insert statement, we will add the following delete statement so that the temporary table contains only the qualified records.
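
A sketch of the delete, using the computed column (NULL approval codes are removed as well, since they never satisfied the original LEN() > 10 filter):

DELETE FROM #SalesTemp
WHERE ApprovalCodeLen <= 10
   OR ApprovalCodeLen IS NULL;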

SQL Query tuning: Using indexes to improve sort performance

When we design an effective index for a query that includes an ORDER BY clause, the execution plan does not need to sort the result set, because the relevant index returns the rows in the required order. Building on this idea, we can create a non-clustered index that satisfies the sort operation requirements. The important point about this SQL query tuning practice is that we must get rid of the sort operator, and the benefit of the generated index should outweigh its cost. The following index will help eliminate the sort operation in the execution plan.
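
A sketch of such an index on the temporary table; the key and included columns are assumptions and should match the ORDER BY and SELECT list of the final query:

CREATE NONCLUSTERED INDEX IX_Sort
ON #SalesTemp (ShipDate ASC)
INCLUDE (SalesOrderID, OrderDate, OrderQty, LineTotal, CurrentStock);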

Now, we execute the following query and then examine the execution plan of the select query.
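
A sketch of the final select over the temporary table; since the non-qualifying rows have already been deleted, only the ORDER BY remains, and IX_Sort can return the rows pre-sorted:

SELECT SalesOrderID, OrderDate, ShipDate, OrderQty, LineTotal, CurrentStock
FROM #SalesTemp
ORDER BY ShipDate;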

Improve sort operator performance with an index

As we can see in the execution plan, the database engine used the IX_Sort index to access the records, and it did not need a sort operator because the rows are already in the required order. In the properties of the index scan operator, we see an attribute named Scan Direction.

Non-clustered index scan direction

The scan direction attribute indicates that SQL Server reads the rows at the leaf level of the b-tree structure from beginning to end. At the same time, this index helps us overcome the tempdb spill issue.

Non-clustered index structure  and scan direction

Finally, we see that the query execution time was reduced from 220 seconds to 33 seconds.

In this article, we learned practical details about SQL query tuning, and these techniques can help when you try to solve a query performance problem. In the case study, the query with the performance problem contained three main problems:

  • Scalar-function problem
  • Using a serial execution plan
  • Tempdb spill issue

First, we transformed the scalar function into a subquery and implemented it in the query with the CROSS APPLY operator. In the second step, we eliminated the tempdb spill problem by using a temporary table. As a result, the performance of the query improved significantly.


SQL DBA School

Case Studies and Real-World Scenarios

Case Study 1: Query Optimization

A financial institution noticed a significant performance slowdown in their central database application, affecting their ability to serve customers promptly. After monitoring and analyzing SQL Server performance metrics, the IT team found that a specific query, part of a core banking operation, took much longer than expected.

Using SQL Server’s Query Execution Plan feature, they found that the query was doing a full table scan on a large transaction table. The team realized the query could be optimized by adding an index on the columns involved in the WHERE clause. After adding the index and testing, the query’s execution time dropped significantly, resolving the application slowdown.

Case Study 2: TempDB Contention

An online retail company was experiencing sporadic slowdowns during peak times, which affected its website’s responsiveness. SQL Server Performance Monitoring revealed that the tempDB database was experiencing latch contention issues, a common performance problem.

The company’s DBA team divided the tempDB into multiple data files equal to the number of logical cores, up to eight, as recommended by Microsoft. This reduced contention and improved the performance of operations using the tempDB.
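
A minimal sketch of this kind of change, with illustrative file names, sizes, and paths: the existing tempdb data file is resized and equally sized files are added up to the number of logical cores (maximum of eight).

ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 8192MB, FILEGROWTH = 512MB);

ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'T:\TempDB\tempdev2.ndf', SIZE = 8192MB, FILEGROWTH = 512MB);

ALTER DATABASE tempdb
ADD FILE (NAME = tempdev3, FILENAME = 'T:\TempDB\tempdev3.ndf', SIZE = 8192MB, FILEGROWTH = 512MB);
-- ...repeat for the remaining files, keeping all files the same size.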

Case Study 3: Inefficient Use of Hardware Resources

A software development company was experiencing poor performance on their SQL Server, despite running on a high-end server with ample resources. Performance metrics showed that SQL Server was not utilizing all the available CPU cores and memory.

Upon investigation, the team found that SQL Server was running on default settings, which did not allow it to utilize all available resources. By adjusting SQL Server configuration settings, such as max degree of parallelism (MAXDOP) and cost threshold for parallelism, they were able to allow SQL Server to better use the available hardware, significantly improving server performance.
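
A sketch of the kind of configuration change described; the values shown are placeholders, not recommendations for every workload:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 8;
EXEC sp_configure 'cost threshold for parallelism', 50;
RECONFIGURE;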

Case Study 4: Database Locking Issues

A large manufacturing company’s ERP system started to experience slowdowns that were affecting their production lines. The IT department, upon investigation, found that there were blocking sessions in their SQL Server database, causing delays.

Using the SQL Server’s built-in reports for “All Blocking Transactions” and “Top Sessions,” they found a poorly designed stored procedure holding locks for extended periods, causing other transactions to wait. After refactoring the stored procedure to hold locks for as short as possible, the blocking issue was resolved, and the system’s performance was back to normal.

These case studies represent common scenarios in SQL Server performance tuning. The specifics can vary, but the process of identifying the problem, isolating the cause, and resolving the issue remains the same.

Case Study 5: Poor Indexing Strategy

A hospital’s patient records system began to experience performance issues over time. The system was built on a SQL Server database and took longer and longer to pull up patient records. The IT team noticed that the database had grown significantly larger over the years due to increased patient volume.

They used SQL Server’s Dynamic Management Views (DMVs) to identify the most expensive queries in terms of I/O. The team found that the most frequently used queries lacked appropriate indexing, causing SQL Server to perform costly table scans.
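
A sketch of such a DMV query: the top cached statements ordered by cumulative logical reads.

SELECT TOP (20)
       qs.total_logical_reads,
       qs.execution_count,
       SUBSTRING(st.text,
                 (qs.statement_start_offset / 2) + 1,
                 ((CASE qs.statement_end_offset
                        WHEN -1 THEN DATALENGTH(st.text)
                        ELSE qs.statement_end_offset
                   END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_logical_reads DESC;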

They worked on a comprehensive indexing strategy, including creating new indexes and removing unused or duplicate ones. They also set up periodic index maintenance tasks (rebuilding or reorganizing) to keep the indexes optimized. After these changes, the time to retrieve patient records improved dramatically.

Case Study 6: Outdated Statistics

An e-commerce platform was dealing with sluggish performance during peak shopping hours. Their SQL Server-based backend was experiencing slow query execution times. The DBA team found that several execution plans were inefficient even though there were appropriate indexes.

After further investigation, they discovered that the statistics for several large tables in the database were outdated. SQL Server uses statistics to create the most efficient query execution plans. With outdated statistics, it was generating poor execution plans, leading to performance degradation.

The team updated the statistics for these tables and set up an automatic statistics update job to run during off-peak hours. This change brought a noticeable improvement in the overall system responsiveness during peak hours.
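
A minimal sketch of the two steps, assuming an illustrative table name:

-- Refresh statistics on a large table with a full scan:
UPDATE STATISTICS dbo.SalesHistory WITH FULLSCAN;

-- Let SQL Server refresh statistics automatically when they become stale:
ALTER DATABASE CURRENT SET AUTO_UPDATE_STATISTICS ON;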

Case Study 7: Memory Pressure

A cloud-based service provider was experiencing erratic performance issues on their SQL Server databases. The database performance would degrade severely at certain times, affecting all their customers.

Performance monitoring revealed that SQL Server was experiencing memory pressure during these periods. It turned out that the SQL Server instance was hosted on a shared virtual machine, and other applications used more memory during specific times, leaving SQL Server starved for resources.

The team decided to move SQL Server to a dedicated VM where it could have all the memory it needed. They also tweaked the ‘min server memory’ and ‘max server memory’ configurations to allocate memory to SQL Server optimally. This reduced memory pressure, and the erratic performance issues were solved.

Case Study 8: Network Issues

A multinational company with several branches worldwide had a centralized SQL Server-based application. Departments complained about slow performance, while the head office had no issues.

Upon investigation, it turned out to be a network latency issue. The branches that were geographically far from the server had higher latency, which resulted in slow performance. The company used a Content Delivery Network (CDN) to cache static content closer to remote locations and implemented database replication to create read replicas in each geographical region. This reduced network latency and improved the application performance for all branches.

These examples demonstrate the wide range of potential SQL Server performance tuning issues. The key to effective problem resolution is a thorough understanding of the system, systematic troubleshooting, and the application of appropriate performance-tuning techniques.

Case Study 9: Bad Parameter Sniffing

An insurance company’s SQL Server database was experiencing fluctuating performance. Some queries ran fast at times, then slowed down unexpectedly. This inconsistent behavior impacted the company’s ability to process insurance claims efficiently.

After studying the execution plans and the SQL Server’s cache, the DBA team discovered that the issue was due to bad parameter sniffing. SQL Server uses parameter sniffing to create optimized plans based on the parameters passed the first time a stored procedure is compiled. However, if later queries have different data distributions, the initial execution plan might be suboptimal.

To resolve this, they used OPTIMIZE FOR UNKNOWN query hint for the stored procedure parameters, instructing SQL Server to use statistical data instead of the initial parameter values to build an optimized plan. After implementing this, the fluctuating query performance issue was resolved.
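
A sketch with hypothetical object names: the hint tells the optimizer to build the plan from average density statistics rather than the first sniffed parameter value.

CREATE OR ALTER PROCEDURE dbo.GetClaimsByState
    @State char(2)
AS
BEGIN
    SELECT ClaimID, ClaimAmount, ClaimDate
    FROM dbo.Claims
    WHERE State = @State
    OPTION (OPTIMIZE FOR UNKNOWN);
END;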

Case Study 10: Inadequate Disk I/O

An online gaming company started receiving complaints about slow game loading times. The issue was traced back to their SQL Server databases. Performance metrics showed that the disk I/O subsystem was a bottleneck, with high disk queue length and disk latency.

Upon investigation, they found that all their databases were hosted on a single, slower disk. To distribute the I/O load, they moved their TempDB and log files to separate, faster SSD drives. They also enabled Instant File Initialization (IFI) for data files to speed up the creation and growth of data files. These changes significantly improved disk I/O performance and reduced game loading times.

Case Study 11: SQL Server Fragmentation

A logistics company’s SQL Server database began to experience slower data retrieval times. Their system heavily relied on GPS tracking data, and they found that fetching this data was becoming increasingly slower.

The DBA team discovered high fragmentation on the GPS tracking data table, which had frequent inserts and deletes. High fragmentation can lead to increased disk I/O and degrade performance. They implemented a routine maintenance plan that reorganized or rebuilt indexes depending on the fragmentation level. They set up fill factor settings to reduce future fragmentation. This greatly improved data retrieval times.
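
A sketch of this kind of maintenance, with hypothetical table and index names: check fragmentation first, then rebuild heavily fragmented indexes with a lower fill factor or reorganize moderately fragmented ones.

-- Check fragmentation:
SELECT index_id, avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.GpsTracking'), NULL, NULL, 'LIMITED');

-- Heavy fragmentation: rebuild with free space left on the pages.
ALTER INDEX IX_GpsTracking_DeviceId ON dbo.GpsTracking
REBUILD WITH (FILLFACTOR = 90);

-- Moderate fragmentation: reorganize instead.
ALTER INDEX IX_GpsTracking_DeviceId ON dbo.GpsTracking
REORGANIZE;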

Case Study 12: Excessive Compilation and Recompilation

A web hosting provider had a SQL Server database with high CPU usage. No heavy queries were running, and the server was not low on resources.

The DBA team found that the issue was due to excessive compilations and recompilations of queries. SQL Server compiles queries into execution plans, which can be CPU intensive. When queries are frequently compiled and recompiled, it can lead to high CPU usage.

They discovered that the application used non-parameterized queries, which led SQL Server to compile a new plan for each query. They worked with the development team to modify the application to use parameterized queries or stored procedures, allowing SQL Server to reuse existing execution plans and thus reducing CPU usage.

These cases emphasize the importance of deep knowledge of SQL Server internals, observant monitoring, and a systematic approach to identifying and resolving performance issues.

Case Study 13: Database Auto-growth Misconfiguration

A social media company faced performance issues on its SQL Server database during peak usage times. Their IT team noticed that the performance drops coincided with auto-growth events on the database.

SQL Server databases are configured by default to grow automatically when they run out of space. However, this operation is I/O intensive and can cause performance degradation if it happens during peak times.

The team decided to manually grow the database during off-peak hours to a size that could accommodate several months of data growth. They also configured auto-growth to a fixed amount rather than a percentage to avoid more extensive growth operations as the database size increased. This prevented auto-growth operations from occurring during peak times, improving overall performance.
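
A minimal sketch, with an illustrative database and file name, of switching auto-growth from a percentage to a fixed amount:

ALTER DATABASE SocialDB
MODIFY FILE (NAME = SocialDB_Data, FILEGROWTH = 1024MB);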

Case Study 14: Unoptimized Cursors

A travel booking company’s SQL Server application was suffering from poor performance. The application frequently timed out during heavy load times, frustrating their users.

Upon analyzing, the DBA team found that the application heavily used SQL Server cursors. Cursors perform poorly compared to set-based operations as they process one row at a time.

The team worked with the developers to refactor the application code to use set-based operations wherever possible. They also ensured that the remaining cursors were correctly optimized. The change resulted in a significant improvement in application performance.

Case Study 15: Poorly Configured SQL Server Instance

An IT service company deployed a new SQL Server instance for one of their clients, but the client reported sluggish performance. The company’s DBA team checked the server and found it was not correctly configured.

The server was running on the default SQL Server settings, which weren’t optimized for the client’s workload. The team performed a series of optimizations, including:

  • Configuring the ‘max server memory’ option to leave enough memory for the OS.
  • Setting ‘max degree of parallelism’ to limit the number of processors used for parallel plan execution.
  • Enabling ‘optimize for ad hoc workloads’ to improve the efficiency of the plan cache.

After these changes, the SQL Server instance ran much more efficiently, and the client reported a noticeable performance improvement.

Case Study 16: Lack of Partitioning in Large Tables

A telecommunications company stored call records in a SQL Server database. The call records table was huge, with billions of rows, which caused queries to take a long time to run.

The DBA team decided to implement table partitioning. They partitioned the call records table by date, a standard filter condition in their queries. This allowed SQL Server to eliminate irrelevant partitions and only scan the necessary data when running queries. As a result, query performance improved dramatically.
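
A sketch of date-based partitioning with hypothetical names and boundary values; the partitioning column is part of the clustered key so the table stays aligned with the scheme.

CREATE PARTITION FUNCTION pf_CallDate (date)
AS RANGE RIGHT FOR VALUES ('2024-01-01', '2024-02-01', '2024-03-01');

CREATE PARTITION SCHEME ps_CallDate
AS PARTITION pf_CallDate ALL TO ([PRIMARY]);

CREATE TABLE dbo.CallRecords
(
    CallID          bigint NOT NULL,
    CallDate        date   NOT NULL,
    DurationSeconds int    NOT NULL,
    CONSTRAINT PK_CallRecords PRIMARY KEY CLUSTERED (CallDate, CallID)
) ON ps_CallDate (CallDate);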

In all these cases, thorough investigation and an in-depth understanding of SQL Server’s features and best practices led to performance improvement. Regular monitoring and proactive optimization are crucial to preventing performance problems and ensuring the smooth operation of SQL Server databases.

Case Study 17: Inappropriate Data Types

An educational institution’s student management system, built on a SQL Server database, suffered from slow performance when dealing with student records. The IT department discovered that the database design included many columns with data types that were larger than necessary.

For instance, student ID numbers were stored as NVARCHAR(100) even though they were always 10-digit numbers. This wasted space and slowed down queries due to the increased data size. The IT team worked on redesigning the database schema to use more appropriate data types and transformed existing data. The database size was significantly reduced, and query performance improved.

Case Study 18: Lack of Database Maintenance

A software firm’s application was facing intermittent slow performance issues. The application was built on a SQL Server database which had not been maintained properly for a long time.

The DBA team discovered that several maintenance tasks, including index maintenance and statistics updates, had been neglected. High index fragmentation and outdated statistics were causing inefficient query execution. They implemented a regular maintenance plan, including index defragmentation and statistics updates, which helped improve the query performance.

Case Study 19: Deadlocks

A stock trading company faced frequent deadlock issues in their SQL Server database, affecting their trading operations. Deadlocks occur when two or more tasks permanently block each other, each holding a lock on a resource that the other task is trying to lock.

Upon reviewing the deadlock graph (a tool provided by SQL Server to analyze deadlocks), the DBA team found that certain stored procedures accessed tables in different orders. They revised the stored procedures to access tables in the same order and introduced error-handling logic to retry the operation in case of a deadlock. This reduced the occurrence of deadlocks and improved the application’s stability.
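
A sketch of such retry logic: the transaction is retried when it is chosen as the deadlock victim (error 1205); the statements inside the transaction are placeholders.

DECLARE @retries int = 3;

WHILE @retries > 0
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;
        -- ...trade-processing statements, with tables accessed in a consistent order...
        COMMIT TRANSACTION;
        SET @retries = 0;          -- success: leave the loop
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0
            ROLLBACK TRANSACTION;

        IF ERROR_NUMBER() = 1205 AND @retries > 1
            SET @retries -= 1;     -- deadlock victim: try again
        ELSE
        BEGIN
            SET @retries = 0;
            THROW;                 -- any other error (or retries exhausted): re-raise
        END;
    END CATCH;
END;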

Case Study 20: Improper Use of SQL Server Functions

A retail company’s inventory management system was suffering from poor performance. The DBA team, upon investigation, discovered that a critical query was using a scalar-valued function that contained a subquery.

Scalar functions can cause performance issues by forcing SQL Server to perform row-by-row operations instead of set-based ones. They refactored the query to eliminate the scalar function and replaced the subquery with a join operation. This change significantly improved the performance of the critical query.

In all these situations, the DBA teams had first to understand the problem, investigate the cause, and apply appropriate techniques to resolve the issues. Understanding SQL Server internals and keeping up with its best practices is vital for the smooth functioning of any application built on SQL Server.

Case Study 21: Excessive Use of Temp Tables

A media company faced a slow response time in its content management system (CMS). A SQL Server database powered this CMS. The system became particularly sluggish during peak hours when content-related activities surged.

Upon investigating, the DBA team found that several stored procedures excessively used temporary tables for intermediate calculations. While temporary tables can be handy, their excessive use can increase I/O on tempDB, leading to slower performance.

The team revised these stored procedures to minimize the use of temporary tables. Wherever possible, they used table variables or derived tables, which often have lower overhead. After the optimization, the CMS significantly improved performance, especially during peak hours.

Case Study 22: Frequent Table Scans

An e-commerce company experienced a gradual decrease in its application performance. The application was backed by a SQL Server database, which was found to be frequently performing table scans on several large tables upon investigation.

Table scans can be resource-intensive, especially for large tables, as they involve reading the entire table to find relevant records. Upon closer examination, the DBA team realized that many of the queries issued by the application did not have appropriate indexes.

The team introduced well-thought-out indexes on the tables and made sure the application queries were written in a way that utilized these indexes. After these adjustments, the application performance improved significantly, with most queries executing much faster due to the reduced number of table scans.

Case Study 23: Unoptimized Views

A financial institution noticed slow performance in their loan processing application. This application relied on several complex views in a SQL Server database.

On review, the DBA team found that these views were not optimized. Some views were nested within other views, creating multiple layers of complexity, and some were returning more data than needed, including columns not used by the application.

They flattened the views to remove the unnecessary nesting and adjusted them to return only the required data. They also created indexed views for the ones most frequently used. These optimizations significantly improved the performance of the loan processing application.
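
A sketch of an indexed view with hypothetical object names; indexed views require SCHEMABINDING and COUNT_BIG(*), and the aggregated column is assumed to be non-nullable.

CREATE VIEW dbo.vLoanTotalsByBranch
WITH SCHEMABINDING
AS
SELECT l.BranchID,
       COUNT_BIG(*)      AS LoanCount,
       SUM(l.LoanAmount) AS TotalLoanAmount   -- LoanAmount assumed NOT NULL
FROM dbo.Loans AS l
GROUP BY l.BranchID;
GO

CREATE UNIQUE CLUSTERED INDEX IX_vLoanTotalsByBranch
ON dbo.vLoanTotalsByBranch (BranchID);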

Case Study 24: Log File Management Issues

A data analytics firm was facing a slowdown in their SQL Server-based data processing tasks. On investigation, the DBA team discovered that the log file for their central database was becoming extremely large, causing slow write operations.

The team found that the recovery model for the database was set to Full. Still, no transaction log backups were taken. In the Full recovery model, transaction logs continue to grow until a log backup is taken. They set up regular transaction log backups to control the log file size. They also moved the log file to a faster disk to improve the write operation speed. These changes helped in speeding up the data processing tasks.
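
A minimal sketch, with an illustrative database name and path: regular log backups keep the transaction log from growing without bound under the Full recovery model.

BACKUP LOG AnalyticsDB
TO DISK = N'B:\SQLBackups\AnalyticsDB_log.trn'
WITH COMPRESSION, CHECKSUM;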

In all these situations, systematic problem identification, root cause analysis, and applying the appropriate solutions were vital to improving SQL Server performance. Regular monitoring, preventive maintenance, and understanding SQL Server’s working principles are crucial in maintaining optimal database performance.

Case Study 25: Locking and Blocking Issues

A healthcare institution’s patient management system, running on a SQL Server database, was encountering slow performance. This was especially noticeable when multiple users were updating patient records simultaneously.

Upon investigation, the DBA team identified locking and blocking as the root cause. In SQL Server, when a transaction modifies data, locks are placed on the data until the transaction is completed to maintain data integrity. However, excessive locking can lead to blocking, where other transactions must wait until the lock is released.

To reduce the blocking issues, the team implemented row versioning-based isolation levels (like Snapshot or Read Committed Snapshot Isolation). They also optimized the application code to keep transactions as short as possible, thus reducing the time locks were held. These steps significantly improved the system’s performance.
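
A minimal sketch with a hypothetical database name: enabling Read Committed Snapshot Isolation so readers no longer block writers and vice versa.

ALTER DATABASE PatientDB
SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;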

Case Study 26: Outdated Statistics

An online marketplace experienced slow performance with its product recommendation feature. The feature relied on a SQL Server database that contained historical sales data.

The DBA team identified that the statistics on the sales data table were outdated. SQL Server uses statistics to create efficient query plans. However, if the statistics are not up-to-date, SQL Server may choose sub-optimal query plans.

The team implemented a routine job to update statistics more frequently. They also enabled the ‘Auto Update Statistics’ option on the database to ensure statistics were updated automatically when necessary. This led to an immediate improvement in the recommendation feature’s performance.

Case Study 27: Non-Sargable Queries

A sports statistics website saw a decrease in its website performance, especially when visitors were querying historical game statistics. A SQL Server database backed their site.

Upon reviewing the SQL queries, the DBA team found several non-sargable queries. These queries cannot take full advantage of indexes due to how they are written (e.g., using functions on the column in the WHERE clause).

The team worked with the developers to rewrite these queries in a sargable manner, ensuring they could fully use the indexes. This led to a substantial increase in query performance and improved the website’s speed.
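
A sketch with hypothetical names showing the same filter written non-sargably and sargably:

-- Non-sargable: the function wrapped around the column defeats an index seek on GameDate.
SELECT GameID, HomeScore, AwayScore
FROM dbo.GameStats
WHERE YEAR(GameDate) = 2020;

-- Sargable: a range predicate on the bare column allows an index seek.
SELECT GameID, HomeScore, AwayScore
FROM dbo.GameStats
WHERE GameDate >= '20200101'
  AND GameDate <  '20210101';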

Case Study 28: Over-Normalization

An HR application backed by a SQL Server database ran slowly, particularly when generating reports. The database schema was highly normalized, following the principles of reducing data redundancy.

However, the DBA team found that over-normalization led to excessive JOIN operations, resulting in slow query performance. They implemented denormalization in certain areas, introducing calculated and redundant fields where it made sense. This reduced the need for some JOIN operations and improved the application’s overall performance.

These cases show that performance troubleshooting in SQL Server involves understanding various components and how they interact. Addressing performance problems often requires a comprehensive approach, combining database configuration, query tuning, hardware adjustments, and, occasionally, changes to application design.

Case Study 29: Poor Query Design

A manufacturing company’s inventory management system was experiencing slow performance, especially when generating specific reports. The system was built on a SQL Server database.

The DBA team found that some queries used in report generation were poorly designed. They used SELECT * statements, which return all columns from the table, even though only a few columns were needed. This caused unnecessary data transfer and slowed down the performance.

The team revised these queries only to fetch the necessary columns. They also made other optimizations, such as avoiding unnecessary nested queries and replacing correlated subqueries with more efficient JOINs. These changes significantly improved the performance of the report generation process.

Case Study 30: Inefficient Indexing

A logistics company’s tracking system, running on a SQL Server database, was experiencing slow performance. Users were complaining about long loading times when tracking shipments.

Upon investigation, the DBA team discovered that the main shipment table in the database was not optimally indexed. Some critical queries didn’t have corresponding indexes, leading to table scans, while some existing indexes were barely used.

The DBA team created new indexes based on the query patterns and removed the unused ones. They also ensured to keep the indexing balanced, as excessive indexing could hurt performance by slowing down data modifications. After these indexing changes, the tracking system’s performance noticeably improved.

Case Study 31: Network Latency

A multinational corporation used a SQL Server database hosted in a different geographical location from the main user base. Users were experiencing slow response times when interacting with the company’s internal applications.

The IT team identified network latency as a critical issue. The physical distance between the server and the users was causing a delay in data transfer.

To solve this, they used SQL Server’s Always On Availability Groups feature to create a secondary replica of the database closer to the users. The read-only traffic was then directed to this local replica, reducing the impact of network latency and improving application response times.

Case Study 32: Resource-Intensive Reports

A fintech company ran daily reports on their SQL Server database during business hours. These reports were resource-intensive and caused the application performance to degrade when they were running.

The DBA team offloaded the reporting workload to a separate reporting server using SQL Server’s transaction replication feature. This ensured that the resource-intensive reports didn’t impact the performance of the primary server. They also scheduled the reports during off-peak hours to minimize user impact. This significantly improved the overall application performance during business hours.

These case studies underline the necessity of a proactive and comprehensive approach to managing SQL Server performance. Regular monitoring, appropriate database design, optimized queries, and a good understanding of how the database interacts with hardware and network can go a long way in maintaining optimal performance.

Case Study 33: Application with Heavy Write Operations

A social media application powered by a SQL Server database was facing slow performance due to a high volume of write operations from user posts, likes, and comments.

The DBA team found that the frequent write operations were causing high disk I/O, slowing down the application performance. They decided to use In-Memory OLTP, a feature in SQL Server designed for high-performance transactional workloads, by migrating the most frequently accessed tables to memory-optimized tables.

The team also introduced natively compiled stored procedures for the most common operations. In-memory OLTP significantly improved the write operation speed and overall application performance.
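
A sketch of a memory-optimized table with hypothetical names; this requires a MEMORY_OPTIMIZED_DATA filegroup in the database, and the bucket count is illustrative.

CREATE TABLE dbo.PostLikes
(
    LikeID  bigint    IDENTITY NOT NULL PRIMARY KEY NONCLUSTERED,
    PostID  bigint    NOT NULL,
    UserID  bigint    NOT NULL,
    LikedAt datetime2 NOT NULL DEFAULT SYSUTCDATETIME(),
    INDEX IX_PostLikes_PostID HASH (PostID) WITH (BUCKET_COUNT = 1048576)
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);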

Case Study 34: Large Transactional Tables with No Archiving

A telecom company’s billing system was experiencing performance degradation over time. The system was built on a SQL Server database and retained years of historical data in the main transactional tables.

The DBA team found that the large size of the transactional tables was leading to slow performance, especially for queries involving range or full table scans. They introduced a data archiving strategy, moving older data to separate archive tables and keeping only recent data in the main transactional tables.

This reduced the transactional tables’ size, leading to faster queries and improved performance. In addition, it made maintenance tasks such as backups and index rebuilds quicker and less resource-intensive.

Case Study 35: Suboptimal Storage Configuration

A gaming company’s game-state tracking application was experiencing slow response times. A SQL Server database backed the application.

Upon investigation, the DBA team discovered that the database files were spread across multiple disks in a way that was not optimizing I/O performance. Some of the heavily used database files were located on slower disks.

The team reconfigured the storage, placing the most frequently accessed database files on SSDs (Solid State Drives) to benefit from their higher speed. They also ensured that data files and log files were separated onto different disks to balance the I/O load. After these adjustments, the application’s performance improved noticeably.

Case Study 36: Inefficient Use of Cursors

A government department’s record-keeping system, built on a SQL Server database, ran slow. The system was particularly sluggish when executing operations involving looping over large data sets.

The DBA team identified that the system used SQL Server cursors to perform these operations. Cursors are database objects used to manipulate rows a query returns on a row-by-row basis. However, they can be inefficient compared to set-based operations.

The team rewrote these operations to use set-based operations, replacing cursors with joins, subqueries, or temporary tables. These changes significantly improved the efficiency and performance of the data looping operations.

Each case study presents a unique scenario and solution, highlighting that SQL Server performance tuning can involve many factors. From the application design to the database schema, from the hardware configuration to the SQL Server settings – each aspect can significantly impact performance. By taking a methodical approach to identifying and addressing performance bottlenecks, it is possible to achieve substantial improvements.

Case Study 37: Use of Entity Framework without Optimization

A logistics company’s web application, backed by a SQL Server database, was experiencing slow load times. The application was built using .NET’s Entity Framework (EF), which allows developers to interact with the database using .NET objects.

Upon review, the DBA team found that the Entity Framework was not optimally configured. For instance, “lazy loading” was enabled, which can lead to performance problems due to excessive and unexpected queries.

The team worked with developers to make necessary optimizations, like turning off lazy loading and using eager loading where appropriate, filtering data at the database level instead of the application level, and utilizing stored procedures for complex queries. After these optimizations, the web application’s performance significantly improved.

Case Study 38: Poorly Defined Data Types

An e-commerce platform was noticing slow performance when processing transactions. The platform’s backend was a SQL Server database.

The DBA team discovered that some of the columns in the transaction table were using data types larger than necessary. For instance, a column storing a small range of values used an INT data type when a TINYINT would suffice.

They adjusted the data types to match the data being stored more closely. This reduced the storage space and memory used by these tables, resulted in faster queries, and improved overall performance.

Case Study 39: Fragmented Indexes

A banking application was experiencing slow response times during peak usage hours. The application’s data was stored in a SQL Server database.

Upon reviewing the database, the DBA team found that the indexes on several critical tables were heavily fragmented. Index fragmentation can happen over time as data is added, updated, or deleted, leading to decreased query performance.

The DBA team implemented a regular maintenance plan to rebuild or reorganize fragmented indexes. They also adjusted some indexes’ fill factors to leave more free space and reduce future fragmentation. These steps led to improved query performance and faster response times for the banking application.

Case Study 40: Misconfigured Memory Settings

A CRM system was running slow, especially during data-heavy operations. The system was running on a SQL Server database.

Upon checking the SQL Server settings, the DBA team found that the maximum server memory was not correctly configured. The server was not utilizing the available memory to its full potential, which can impact SQL Server’s performance.

The team adjusted the memory settings to allow SQL Server to use more of the available memory, leaving enough memory for the operating system and other applications. This allowed more data to be kept in memory, reducing disk I/O and improving SQL Server performance.

These case studies further illustrate that performance tuning in SQL Server requires a multifaceted approach involving the database system and the related applications. Regular monitoring and maintenance and a good understanding of SQL Server’s working principles are essential in ensuring optimal database performance.

Case Study 41: Underutilized Parallelism

An analytics company was struggling with slow data processing times. They had a SQL Server database running on multi-core processors, but the performance was not what they expected.

The DBA team found that the server’s parallelism settings were not optimally configured. The “max degree of parallelism” (MAXDOP) setting, which controls how many processors SQL Server can use for single query execution, was set to 1, which meant SQL Server was not fully utilizing the available cores.

The team adjusted the MAXDOP setting to a more appropriate value, considering the number of available cores and the workload characteristics. This allowed SQL Server to execute large queries more efficiently by spreading the work across multiple cores, improving data processing times.

Case Study 42: Bad Parameter Sniffing

An insurance company’s application was experiencing sporadic slow performance. The application was built on a SQL Server database and used stored procedures extensively.

Upon investigation, the DBA team discovered that the performance issues were due to “bad parameter sniffing.” SQL Server can create sub-optimal execution plans for stored procedures based on the parameters of the first execution, which may not work for subsequent executions with different parameters.

The team implemented the OPTION (RECOMPILE) query hint for the problematic stored procedures to force SQL Server to generate a new execution plan for each execution. They also used parameter masking for some procedures. This helped avoid bad parameter sniffing and improved the application’s performance consistency.

Case Study 43: Auto-Shrink Enabled

A retail company’s inventory management system, backed by a SQL Server database, was experiencing performance problems, slowing down irregularly.

The DBA team found that the “auto-shrink” option was enabled on the database. Auto-shrink can cause performance issues because it is resource-intensive and can lead to index fragmentation.

The team disabled auto-shrink and implemented a proper database size management strategy, manually shrinking the database only when necessary and immediately reorganizing indexes afterward. This resolved the irregular performance slowdowns and stabilized the system’s performance.
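
A minimal sketch with a hypothetical database name:

ALTER DATABASE InventoryDB SET AUTO_SHRINK OFF;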

Case Study 44: Tempdb Contention

A travel booking website was noticing performance degradation during peak hours. Their system was built on a SQL Server database.

Upon review, the DBA team found signs of contention in tempdb, a system database used for temporary storage. Tempdb contention can slow down the system as queries wait for resources.

The team implemented several measures to reduce tempdb contention, including configuring multiple equally sized tempdb data files, adding more tempdb files, and using trace flag 1118 to change how SQL Server allocates extents. These steps helped alleviate the tempdb contention and improved the system’s peak performance.

These case studies showcase that SQL Server performance tuning is dynamic, requiring ongoing adjustments and a deep understanding of SQL Server’s various features and settings. By monitoring the system closely and being ready to investigate and address issues promptly, you can ensure your SQL Server databases run efficiently and reliably.

Case Study 45: Locking and Blocking

A healthcare company’s patient record system, powered by a SQL Server database, was experiencing slow performance during high user activity periods.

Upon investigation, the DBA team found high locking and blocking. This was due to a few long-running transactions that were locking critical tables for a significant amount of time, preventing other transactions from accessing these tables.

The DBA team optimized the problematic transactions to make them more efficient and faster. They also implemented row versioning by enabling Read Committed Snapshot Isolation (RCSI) on the database to allow readers not to block writers and vice versa. This alleviated the locking and blocking issue and led to a significant improvement in performance.

Case Study 46: Over-normalization

An e-commerce website was experiencing slow load times, particularly in product categories and search pages. The company’s product catalog was stored in a SQL Server database.

Upon review, the DBA team found that the database schema was overly normalized. While normalization is generally a good practice as it reduces data redundancy, in this case, it led to an excessive number of query joins, causing slower performance.

The DBA team worked with the developers to denormalize the database schema slightly. They created computed columns for frequently calculated fields and materialized views for commonly executed queries with multiple joins. These changes reduced the number of joins required in the queries and improved the website’s performance.

Case Study 47: Suboptimal Statistics

A software company’s project management application was running slow. The application was built on a SQL Server database.

Upon checking the database, the DBA team found that the statistics were not up-to-date on several large tables. Statistics in SQL Server provide critical information about the data distribution in a table, which the query optimizer uses to create efficient query execution plans.

The team set up a maintenance job to regularly update statistics on the database tables. They also adjusted the “auto update statistics” option to ensure that statistics are updated more frequently. These steps helped the query optimizer generate more efficient execution plans, improving query performance.

Case Study 48: Improper Use of Functions in Queries

A media company’s content management system was experiencing slow response times. The system was built on a SQL Server database.

The DBA team identified several frequently executed queries using scalar functions on columns in the WHERE clause. This practice prevents SQL Server from effectively using indexes on those columns, leading to table scans and slower performance.

The team avoided using functions on indexed columns in the WHERE clause, allowing SQL Server to use the indexes efficiently. This significantly improved the performance of these queries and the overall response time of the system.

As these case studies illustrate, various issues can affect SQL Server performance. Addressing them requires a good understanding of SQL Server, a methodical approach to identifying problems, and collaboration with other teams, such as developers, to implement optimal solutions.

Case Study 49: Excessive Use of Temp Tables

A finance firm’s risk assessment software, built on SQL Server, was experiencing slower performance. The software was executing numerous calculations and transformations, using temp tables extensively.

Upon reviewing the operations, the DBA team found that the excessive use of temp tables led to high I/O operations and caused contention in tempdb. They also found that some temp tables were unnecessary as the same operations could be achieved using more straightforward queries or table variables, which have a lower overhead than temp tables.

The DBA team and developers collaborated to refactor the procedures to reduce the use of temp tables. They replaced temp tables with table variables where possible and sometimes rewrote queries to avoid needing temporary storage. This reduced the load on tempdb and improved the software’s performance.

Case Study 50: High Network Latency

An international company was experiencing slow performance with its distributed applications. These applications interacted with a centralized SQL Server database in their headquarters.

Upon investigation, the DBA team found that network latency was a significant factor causing the slow performance. The network latency was exceptionally high for the company’s overseas offices.

To address this, they implemented SQL Server’s data compression feature to reduce the amount of data sent over the network. They also combined caching data at the application level and local read-only replicas for overseas offices. This resulted in reduced network latency and improved application performance.

Case Study 51: Large Data Loads During Business Hours

A manufacturing company’s ERP system was experiencing slow performance during specific periods of the day. A SQL Server database backed the system.

The DBA team found that large data loads were being run during business hours, impacting the system’s performance. These data loads were locking tables and consuming significant server resources.

The team rescheduled the data loads to off-peak hours, ensuring minimal impact on business users. They also optimized the data load processes using techniques such as bulk insert and minimally logged operations to make them run faster and consume fewer resources.
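
A sketch of such a load with hypothetical paths and a staging table: TABLOCK allows a minimally logged, faster bulk load (under the simple or bulk-logged recovery model).

BULK INSERT dbo.StagingOrders
FROM 'D:\Loads\orders.csv'
WITH (TABLOCK, FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);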

Case Study 52: Inefficient Code

A software company’s internal tool was running slow. The tool was built on a SQL Server database and used stored procedures extensively.

The DBA team found that some of the stored procedures were written inefficiently. There were instances of cursor use where set-based operations would be more appropriate, and some procedures called other procedures in a loop, causing many executions.

The team worked with developers to optimize the stored procedures. They replaced cursors with set-based operations and unrolled loops where possible, reducing the number of procedure executions. They also added appropriate indexes to support the queries in the stored procedures. These changes improved the code’s efficiency and the tool’s overall performance.

These case studies underscore that SQL Server performance issues can arise from different areas – from inefficient code to infrastructure factors like network latency. Keeping a keen eye on performance metrics, proactively managing server resources, and maintaining efficient database code are all part of the toolkit for managing SQL Server performance.


Microsoft SQL Server Case Study

Tom Jenkins

Microsoft SQL Server

Microsoft SQL Server® is a market-leading, enterprise-level database solution used by a large number and variety of applications to host their databases and store their data. Microsoft SQL Server is an incredibly powerful, scalable and robust solution; however, it is its robustness that often leads customers into a false sense of security.

As with anything in life, things can go wrong, and this is true with SQL Server. Your valuable data can be lost for a number of reasons, such as hardware failure, theft, fire, flood or user error, so it is worth planning for such events to make recovery as painless as possible.

With SQL Server, there are many ways to improve recovery from data loss, such as mirroring, transaction log shipping and Always On availability groups, all of which offer differing levels of protection at a variety of price points. Here we will look at the simplest and most cost-effective solution for an SME to protect their data – a decent backup.

A Real-Life Example

Before we look at how we should implement SQL Server backups, let us look at a real life example of how a good backup strategy works.

In this particular example, the customer was running Microsoft SQL Server Standard edition to host their Microsoft Dynamics® NAV database. Microsoft SQL Server was running on its own dedicated server, with the Dynamics NAV database configured to use the full recovery model, full backups running daily at night, and log backups running hourly during the working day to a network share on a different server.

On this particular day, the customer’s IT manager decided to test the UPS protecting the SQL Server by unplugging it from the wall, something he had diligently been doing on a regular basis. This time, however, the UPS failed and the server immediately lost power. The server was powered up, and at first all seemed to be OK, until after a short while (about an hour) it became apparent that the G/L Entry table (a somewhat important table in a NAV database) was corrupt. The customer in question was a distribution company and had a number of shipments that they needed to get out of the door before the end of the day, so the prospect of recovering the database at that point in time was not very appealing. After a short discussion with Dynamics Consultants, we made a small tweak to the setup to allow them to continue processing warehouse transactions without needing to write to the G/L Entry table, allowing them to continue to ship orders for the rest of the day.

This still left the customer with a corrupt database, and now with a number of shipments processed, as well as other database activity, since the corruption had happened. However, because their database was configured with the full recovery model, we were able to restore the last full backup prior to the failure to a fresh database, followed by all transaction log backups since, including a final log backup taken before disabling the damaged database. This left the customer with an uncorrupted database, no data loss, and an extremely relieved IT manager.

Backup Strategy

So, what can we learn from this? Firstly, don’t test your UPS during the working day; but more importantly, make sure you have an appropriate backup strategy. (NOTE: Many VM-level backup strategies would not have worked in the above situation, as they would also have backed up the corruption.)

So what is the correct backup strategy? Well, there is no right or wrong answer, as it depends very much on the level of database updates, what the database is used for, the size of the database and an individual assessment of the risks associated with a failure in terms of acceptable downtime and data loss. As a starting point, however, an SME should consider the following for a production database (a minimal T-SQL sketch follows the list):

  • Full Recovery Model
  • Backups to a separate network server.
  • Backups to an offsite location.
  • Daily full and hourly log backups
  • Backup encryption
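
A minimal sketch of such a schedule, assuming a production database named ProductionDB, a backup certificate named BackupCert for encryption, and a network share path; all names are illustrative.

-- Nightly full backup:
BACKUP DATABASE ProductionDB
TO DISK = N'\\BackupServer\SQLBackups\ProductionDB_full.bak'
WITH COMPRESSION, CHECKSUM,
     ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = BackupCert);

-- Hourly transaction log backups during the working day:
BACKUP LOG ProductionDB
TO DISK = N'\\BackupServer\SQLBackups\ProductionDB_log.trn'
WITH COMPRESSION, CHECKSUM,
     ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = BackupCert);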

If you are unsure about your database backups, then you should seek advice from an experienced SQL Server administrator, or alternatively attend our SQL Server Basics course to obtain a good working overview.

Find out More

If you would like to find out more about SQL Server, why not join one of our extremely popular SQL training courses? Based at our comfortable offices on the outskirts of Southampton, Hampshire, our expert consultants have 4.5-star review ratings.

SQL Server Training >

Tom Jenkins

Tom is one of the founding directors of Dynamics Consultants. He has worked on ERP / CRM systems since 1995, initially as an end user and later as a developer/consultant. Before founding Dynamics Consultants, Tom worked in a management role for a machinery importer / reseller, where his work included inventory management and purchase control, systems development, and IT project management.


SQL Server Case Study

Established in 1972, the port’s primary business is offloading foreign crude oil from tankers, and storing and distributing the inventory to refineries throughout the Gulf Coast and Midwest. As the single largest point of entry for crude oil coming into the U.S., the client must serve its customers 24 hours a day, seven days a week.

With significant SQL Server infrastructure, including a custom Oil Management System, internal SharePoint, and various third-party applications, the client needed a strategic partner who could serve as an extension of its IT team to:

  • Set up and review its disaster recovery solution, SQL Server maintenance plans, administrative auditing, and basic SQL Server performance
  • Develop centrally managed and automated backups and SQL Servers to address critical issues quickly and effectively
  • Build consistent SQL database index maintenance plans and integrity checks for all staff

Using a checklist-based approach, Sparkhound met with senior staff to review all production SQL Servers and bring all systems up to the appropriate SQL Server service pack level for long-term success. Sparkhound also:

  • Built custom-developed SQL scripts that work alongside the enterprise backup software for backup redundancy, along with automated maintenance plans for the client's Oil Management System
  • Implemented auditing and custom-developed SQL Server traces to fulfill government standards
  • Trained system administrators and developers on SQL Server best practices
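The case study doesn't publish Sparkhound's scripts, so the following is only a generic sketch of the kind of integrity-check and index-maintenance statements such plans typically wrap; the database and table names are placeholders.

-- Illustrative only: verify database integrity (placeholder database name).
DBCC CHECKDB (N'OilManagementDB') WITH NO_INFOMSGS;

-- Illustrative only: rebuild the indexes on a heavily used table.
ALTER INDEX ALL ON dbo.Shipments REBUILD;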

By utilizing Sparkhound’s SQL Server consultant, the client received a Microsoft-certified expert who understood the complexity of the client’s mission-critical data, ensured all disaster recovery and security auditing requirements were met, and delivered a seamless knowledge transfer post-implementation.

You May Also Like

These Related Stories

logistics-and-transportation-of-cargo-freight-ship

Citrix-XenApp Case Study

refinery-storage-tanks

IMTT Service Desk Case Study

 Oil & Gas Service Provider

Improving Forecasting and Utilization for Oil & Gas Service Provider

case study on microsoft sql server

  • Announcements
  • Best Practices
  • Thought Leadership
  • SQL Server 2022
  • SQL Server 2019
  • SQL Server Management
  • SQL Server on Azure VMs
  • SQL Server on Linux
  • Azure Data Studio
  • Azure SQL Database
  • Azure Synapse Analytics
  • Machine Learning Server
  • Data analytics
  • Data Security
  • Data warehousing
  • Hybrid data solutions
  • Uncategorized

New SQL Server 2008 Case Studies

  • By SQL Server Team

The momentum for SQL Server 2008 continues. Here are some of the latest case studies published:

  • Russia’s Baltika Breweries Links its ERP Databases using SQL Server 2008 Replication Baltika Breweries, the largest producer of beer in Russia, has about 12,000 employees and 11 breweries. Its popular brands have been the market leader in Russia for more than a decade and are exported to 46 countries. The company coordinates enterprise resource planning (ERP) across its operations using an ERP solution created by Microsoft® Certified Partner Monolit-Info. To enhance management of several geographically dispersed ERP databases holding more than 2 terabytes of information, the company deployed them as a 6-node multi-server configuration. Baltika is upgrading to Microsoft SQL Server® 2008, from the earlier version, to take advantage of new features introduced in SQL Server 2008. Baltika’s lab testing demonstrated that SQL Server 2008 provides the efficient replication required for its operations, as well as reliability.
  • Big Hammer – Software as a Service Provider Lets Shoppers Furnish Homes in an Online 3D World Edgenet, and its Big Hammer Data Division, helps retailers and manufacturers serve customers by hosting a Global Data Synchronization Network (GDSN) and Marketing data pools of basic product attributes, providing a common classification for products. As the company prepared to launch its Edgenet Vision™ product that enables users to create a 3-dimensional (3D) representation of their homes and populate rooms with 3D images of appliances and furniture from the Big Hammer data pools, it needed an enterprise-grade platform. Big Hammer deployed Edgenet Vision hosted on Microsoft® SQL Server® 2008 Enterprise Edition (64-bit) database software running on Windows Server® 2008, using a Software as a Service distribution model. The application developers used Microsoft Visual Studio® 2008. The solution runs on Unisys ES7000 enterprise server computers.
  • bwin – Global Online Gaming Company Deploying SQL Server 2008 to Support 100 Terabytes Sports enthusiasts around the world place up to 1 million bets per day using the online sports betting services of Gibraltar-based bwin International. Performance is paramount at bwin, which hosts more than 100 terabytes of information on some 100 instances of Microsoft® SQL Server®. The company was very happy with its deployment of SQL Server 2005, but it enjoyed three-digit annual growth in 2005 and 2006 and is eager to take advantage of new technological advancements to help it keep pace with that growth. The bwin Data Management Systems group has started upgrading its database infrastructure to SQL Server 2008 to take advantage of Backup Compression (a brief T-SQL sketch follows this list), management tools, and other new features. During peak gaming periods SQL Server processes 30,000 database transactions per second, while supporting the reliability required by the bwin database group’s motto: “Failure is not an option.”
  • McLaren Electronics Fuels Analysis of Formula One Racing Data with SQL Server McLaren Electronic Systems, part of a family of companies that includes the McLaren Racing organization, is a leader in developing specialized motor racing products including the engine control unit (ECU) that manages the complex engine, transmission, and other key elements of Formula One race cars. After the Fédération Internationale de l’Automobile (FIA), motor sport’s world governing body, awarded McLaren Electronics and Microsoft the contract to provide the ECU solution that will be used in all Formula One race cars, McLaren Electronics sought a more efficient way to manage the terabytes of ECU data that a team generates in a year. After performing a proof of concept study, McLaren Electronics found that Microsoft® SQL Server® 2008 provided the solution it needed for storing data while providing Formula One type retrieval speeds.
  • RSS Aggregator NewsGator Manages 2.5 Billion Articles with SQL Server 2008 NewsGator makes life easier for individuals and companies by aggregating Really Simple Syndication (RSS) data feeds from across the Web to provide users with customized content delivery, enabling everyone to essentially create their own electronic newspaper. The company, which also provides Software as a Service to more than 50 media outlets including CNN and USA Today, stores some 2.5 billion RSS articles totalling about 4 terabytes on clustered databases running Microsoft® SQL Server® database Software. NewsGator is upgrading its database infrastructure to SQL Server 2008 Enterprise Edition (64-bit) running on the Windows Server® 2008 for 64-Bit Systems operating system to take advantage of a number of new features, including enhanced Database Mirroring for high availability, Backup Compression to reduce storage needs, and Resource Governor for allocating processing resources.
  • Siemens PLM Software Validated to Easily Support 5,000 Users with SQL Server 2008 Siemens PLM Software scales to 5,000 concurrent users and gains 50 percent compression for database files running on SQL Server® 2008 and Windows Server® 2008 and Intel®-based hardware.
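Several of the studies above call out Backup Compression, which SQL Server 2008 exposes as a simple option on the BACKUP statement. The sketch below is illustrative only; the database name and path are placeholders rather than anything from these customers.

-- Illustrative only: take a compressed full backup (SQL Server 2008 and later).
BACKUP DATABASE [SalesDB]
TO DISK = N'D:\Backups\SalesDB_Full.bak'
WITH COMPRESSION, CHECKSUM, STATS = 10;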



Contains solutions for #8WeekSQLChallenge case studies https://8weeksqlchallenge.com/

sharkawy98/sql-case-studies

8 Week SQL Challenge

This repository contains solutions for the #8WeekSQLChallenge: interesting real-world case studies that let you apply and sharpen your SQL skills across many use cases. I used Microsoft SQL Server to write the SQL queries that solve these case studies.

Table of Contents

  • Case study 1
  • Case study 2
  • Case study 3
  • Some interesting queries from my solutions

SQL skills gained

  • Data cleaning & transformation
  • Aggregations
  • Ranking (ROW_NUMBER, DENSE_RANK)
  • Analytics (LEAD, LAG)
  • CASE WHEN statements
  • UNION & INTERSECT
  • DATETIME functions
  • Data type conversion
  • TEXT functions, text and string manipulation

Case Study #1: Danny's Diner

Case Study #2: Pizza Runner

Case Study #3: Foodie-Fi

Case Study #4: Data Bank

Some interesting queries
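The repository stores the full solutions in per-case-study folders. Purely as an illustration of the ranking and aggregation skills listed above (this is not a query from the repo, and dbo.sales is a hypothetical table), a typical answer looks like this:

-- Illustrative only: rank customers by total spend over a hypothetical sales table.
SELECT
    customer_id,
    SUM(price)                                   AS total_spend,
    DENSE_RANK() OVER (ORDER BY SUM(price) DESC) AS spend_rank
FROM dbo.sales
GROUP BY customer_id;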


AdventureWorks sample databases


This article provides direct links to download AdventureWorks sample databases, and instructions for restoring them to SQL Server, Azure SQL Database, and Azure SQL Managed Instance.

For more information about samples, see the Samples GitHub repository .

Prerequisites

  • SQL Server or Azure SQL Database
  • SQL Server Management Studio (SSMS) or Azure Data Studio

Download backup files

Use these links to download the appropriate sample database for your scenario.

  • OLTP data is for most typical online transaction processing workloads.
  • Data Warehouse (DW) data is for data warehousing workloads.
  • Lightweight (LT) data is a lightweight and pared down version of the OLTP sample.

If you're not sure what you need, start with the OLTP version that matches your SQL Server version.

Additional files can be found directly on GitHub:

  • SQL Server 2014 - 2022
  • SQL Server 2012
  • SQL Server 2008 and 2008R2

Restore to SQL Server

You can use the .bak file to restore your sample database to your SQL Server instance. You can do so using the RESTORE (Transact-SQL) command, or using the graphical interface (GUI) in SQL Server Management Studio (SSMS) or Azure Data Studio .

  • SQL Server Management Studio (SSMS)
  • Transact-SQL (T-SQL)
  • Azure Data Studio

If you're not familiar with SQL Server Management Studio (SSMS), see connect & query to get started.

To restore your database in SSMS, follow these steps:

Download the appropriate .bak file from one of links provided in the download backup files section.

Move the .bak file to your SQL Server backup location. This varies depending on your installation location, instance name and version of SQL Server. For example, the default location for a default instance of SQL Server 2019 (15.x) is:

C:\Program Files\Microsoft SQL Server\MSSQL15.MSSQLSERVER\MSSQL\Backup .

Open SSMS and connect to your SQL Server instance.

Right-click Databases in Object Explorer > Restore Database... to launch the Restore Database wizard.

Screenshot showing how to choose to restore your database by right-clicking databases in Object Explorer and then selecting Restore Database.

Select Device and then select the ellipses (...) to choose a device.

Select Add and then choose the .bak file you recently moved to the backup location. If you moved your file to this location but you're not able to see it in the wizard, this typically indicates a permissions issue - SQL Server or the user signed into SQL Server doesn't have permission to this file in this folder.

Select OK to confirm your database backup selection and close the Select backup devices window.

Check the Files tab to confirm the Restore as location and file names match your intended location and file names in the Restore Database wizard.

Select OK to restore your database.

Screenshot showing the Restore Database window with the backup set to restore highlighted and the OK option highlighted.

For more information on restoring a SQL Server database, see Restore a database backup using SSMS .

You can restore your sample database using Transact-SQL (T-SQL). An example to restore AdventureWorks2022 is provided below, but the database name and installation file path may vary depending on your environment.

To restore AdventureWorks2022 on Windows, modify values as appropriate to your environment, and then run the following Transact-SQL (T-SQL) command:
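A representative command, assuming the .bak file sits in the default backup folder of a default SQL Server 2022 instance (adjust the path and database name for your environment):

USE [master];
GO
-- Assumes the default backup folder of a default SQL Server 2022 instance.
RESTORE DATABASE [AdventureWorks2022]
FROM DISK = N'C:\Program Files\Microsoft SQL Server\MSSQL16.MSSQLSERVER\MSSQL\Backup\AdventureWorks2022.bak'
WITH RECOVERY;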

To restore AdventureWorks2022 on Linux, change the Windows filesystem path to a Linux path, and then run the following Transact-SQL (T-SQL) command:
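A representative command for Linux, assuming the backup was copied to /var/opt/mssql/backup; the logical file names below are assumptions, so verify them with RESTORE FILELISTONLY if the restore complains:

-- Assumes the backup file was copied to /var/opt/mssql/backup on the Linux host.
RESTORE DATABASE [AdventureWorks2022]
FROM DISK = N'/var/opt/mssql/backup/AdventureWorks2022.bak'
WITH MOVE 'AdventureWorks2022' TO '/var/opt/mssql/data/AdventureWorks2022.mdf',
     MOVE 'AdventureWorks2022_log' TO '/var/opt/mssql/data/AdventureWorks2022_log.ldf',
     RECOVERY;

-- To check the logical file names inside the backup:
-- RESTORE FILELISTONLY FROM DISK = N'/var/opt/mssql/backup/AdventureWorks2022.bak';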

If you're not familiar with Azure Data Studio, see connect & query to get started.

To restore your database in Azure Data Studio, follow these steps:

Open Azure Data Studio and connect to your SQL Server instance.

Right-click on your server and select Manage .

Screenshot showing Azure Data Studio with the Manage option highlighted and called out.

Select Restore from the top menu to restore your database.

On the General tab, fill in the values listed under Source .

  • Under Restore from , select Backup file .
  • Under Backup file path , select the location you stored the .bak file.

This auto-populates the rest of the fields, such as Database, Target database, and Restore to.

Once you're ready, select Restore to restore your database.

Deploy to Azure SQL Database

You have two options to view sample Azure SQL Database data. You can use a sample when you create a new database, or you can deploy a database from SQL Server directly to Azure using SSMS.

To get sample data for Azure SQL Managed Instance instead, see restore World Wide Importers to SQL Managed Instance .

Deploy new sample database

When you create a new database in Azure SQL Database, you can create a blank database, restore from a backup or select sample data to populate your new database.

Follow these steps to add sample data to your new database:

Connect to your Azure portal.

Select Create a resource in the top left of the navigation pane.

Select Databases and then select SQL Database .

Fill in the requested information to create your database.

On the Additional settings tab, choose Sample as the existing data under Data source :

Choose sample as the data source on the Additional settings tab in the Azure portal when creating your Azure SQL Database

Select Create to create your new SQL Database, which is the restored copy of the AdventureWorksLT database.

Deploy database from SQL Server

SSMS allows you to deploy a database directly to Azure SQL Database. This method doesn't currently provide data validation, so it is intended for development and testing and shouldn't be used for production.

To deploy a sample database from SQL Server to Azure SQL Database, follow these steps:

Connect to your SQL Server in SSMS.

If you haven't already done so, restore the sample database to SQL Server .

Right-click your restored database in Object Explorer > Tasks > Deploy Database to Microsoft Azure SQL Database... .

Choose to deploy your database to Microsoft Azure SQL Database from right-clicking your database and selecting Tasks

Follow the wizard to connect to Azure SQL Database and deploy your database.

Creation scripts

Instead of restoring a database, you can alternatively use scripts to create the AdventureWorks databases, regardless of version.

The below scripts can be used to create the entire AdventureWorks database:

  • AdventureWorks OLTP Scripts Zip
  • AdventureWorks DW Scripts Zip

Additional information about using the scripts can be found on GitHub .

Once you've restored your sample database, use the following tutorials to get started with SQL Server:

  • Tutorials for SQL Server database engine
  • Connect and query with SQL Server Management Studio (SSMS)
  • Connect and query with Azure Data Studio


Migrate to Microsoft SQL Server | Case Study


Very few companies would want to immediately eliminate all of their Microsoft Access database applications. The best results are achieved by focusing on modernizing, then migrating, those select legacy database applications that are limiting business efficiency and agility. With our team's extensive Microsoft Access migration experience and tools, we've developed a proven 5-step approach to migrating any MS Access database application.

Learn how we approached the project, faced challenges, and created solutions best suited to the client's needs. Download the case study now.

  • Global access to conformed data.
  • Increased scalability, security and reliability.
  • 360 degree view of your customers and products.
  • No clunky software to install or maintain.


Help4Access is the only Microsoft Gold Cloud Partner and Amazon AWS Premier Partner in the world with a core competency of supporting and modernizing legacy Microsoft Access database applications, leveraging its US-based team of 250+ senior technical consultants.


AZ-500 Microsoft Azure Security Exam Study Guide

By: Daniel Calbimonte   |   Updated: 2024-04-12   |   Comments   |   Related: More > Professional Development Certifications

Handling Azure security is critical for keeping information and resources safe. Is there a certification for Azure security?

This tip will help you pass the AZ-500 certification exam by answering common questions and providing resources for each of the exam objectives.

What is the AZ-500 Exam?

This official Microsoft exam is related to Azure security.  You will learn about Microsoft Entra, multi-factor authentication (MFA), single sign-on (SSO), Microsoft apps security, virtual network security, endpoints security, gateways, firewalls, Azure Kubernetes Service (AKS), encryption, and other related topics.

Is the Exam Difficult?

This exam will not be difficult if you already have a lot of experience in Azure, especially with security. If you do not have experience with Azure, it is strongly recommended that you take other Azure exams first.

The AZ-900 exam is recommended for beginners in the Azure world.

What is the Minimum Passing Score for the AZ-500 Exam?

The minimum score to pass is approximately 700/1000.

What Books are Recommended for this Exam?

The following books will help you pass this exam:

  • Microsoft Azure Security Technologies Certification and Beyond: Gain practical skills to secure your Azure environment and pass the AZ-500 exam
  • Exam Ref AZ-500 Microsoft Azure Security Technologies, 2/e
  • AZ-500: Microsoft Azure Security Technologies - Exam Cram Notes: Third Edition - 2023
  • AZ-500: Microsoft Azure Security Technologies - Study Guide with Practice Questions & Labs: Third Edition - 2023
  • MCA Microsoft Certified Associate Azure Security Engineer Study Guide: Exam AZ-500 (Sybex Study Guide)
  • AZ-500: Microsoft Azure Security Technologies +200 Exam Practice Questions with Detailed Explanations and Reference Links: Second Edition - 2023
  • Microsoft AZ-500 Certification: Azure Security Technologies Full Preparation: Pass Your Microsoft AZ-500 on the First Try (Latest Questions & Detailed ... Preparation Books - NEW & EXCLUSIVE Book 8)
  • Microsoft Azure Security Technologies (AZ-500) - A Certification Guide: Get qualified to secure Azure AD, Network, Compute, Storage and Data services through ... security best practices (English Edition)
  • AZURE AZ 500 STUDY GUIDE-2: Microsoft Certified Associate Azure Security Engineer: Exam-AZ 500

Are There Links Available for Studying for the Exam?

Yes. The following links can be helpful for the exam:

Administer Identity and Access

Administer Microsoft Entra Identities

  • Safeguard Microsoft Entra users
  • Safeguard Microsoft Entra groups
  • Advise on the appropriate use of external identities
  • Safeguard external identities
  • Deploy Microsoft Entra ID Protection

Administer Microsoft Entra Authentication

  • Set up Microsoft Entra Verified ID
  • Deploy multi-factor authentication (MFA)
  • Deploy passwordless authentication
  • Deploy password protection
  • Deploy single sign-on (SSO)
  • Integrate SSO and identity providers
  • Advise on and enforce modern authentication protocols

Administer Microsoft Entra Authorization

  • Set up Azure role permissions for management groups, resource groups, subscriptions, and resources
  • Assign the Microsoft Entra built-in roles
  • Assign the Azure built-in role
  • Create and assign customized roles like Azure roles and Microsoft Entra roles
  • Deploy and administer Microsoft Entra Permissions Management
  • Set up Microsoft Entra Privileged Identity Management
  • Set up role management and access reviews in Microsoft Entra
  • Deploy Conditional Access policies

Administer Microsoft Entra Application Access

  • Administer access to Enterprise applications in Microsoft Entra ID
  • Administer the Microsoft Entra app registrations
  • Set up app registration permission scopes
  • Administer app registration permission consent
  • Administer and utilize service principals
  • Administer managed identities for Azure resources
  • Advise on when to utilize and configure a Microsoft Entra Application Proxy, including authentication

Secure Networking

Design and Enforce Security for Virtual Networks

  • Design and enforce Network Security Groups (NSGs) and Application Security Groups (ASGs)
  • Design and enforce user-defined routes (UDRs)
  • Design and enforce Virtual Network peering or VPN gateway
  • Design and enforce Virtual WAN, including secured virtual hub
  • Ensure VPN connectivity security, including point-to-site and site-to-site
  • Implement encryption via ExpressRoute
  • Configure firewall configurations on PaaS resources
  • Monitor network security using Network Watcher, including NSG flow logging

Design and Enforce Security for Private Access to Azure Resources

  • Design and enforce virtual network Service Endpoints
  • Design and enforce Private Endpoints
  • Design and enforce Private Link services
  • Design and enforce network integration for Azure App Service and Azure Functions
  • Design and enforce network security settings for an App Service Environment (ASE)
  • Design and enforce network security settings for an Azure SQL Managed Instance

Design and Enforce Security for Public Access to Azure Resources

  • Design and enforce Transport Layer Security (TLS) for applications, including Azure App Service and API Management
  • Design, implement, and oversee an Azure Firewall, including Azure Firewall Manager and firewall policies
  • Design and implement an Azure Application Gateway
  • Design and implement an Azure Front Door, including Content Delivery Network (CDN)
  • Design and implement a Web Application Firewall (WAF)
  • Provide recommendations for the utilization of Azure DDoS Protection Standard

Secure Computing, Storage, and Databases

Design and Implement Advanced Security Measures for Computing

  • Design and implement remote access to public endpoints, including Azure Bastion and just-in-time (JIT) virtual machine (VM) access
  • Set up network segregation for Azure Kubernetes Service (AKS)
  • Secure and monitor Azure Kubernetes Service (AKS)
  • Set up authentication for Azure Kubernetes Service (AKS)
  • Set up security monitoring for Azure Container Instances (ACIs)
  • Set up security monitoring for Azure Container Apps (ACAs)
  • Administer access to Azure Container Registry (ACR)
  • Set up disk encryption, including Azure Disk Encryption (ADE), host-based encryption, and confidential disk encryption
  • Provide recommendations for security configurations for Azure API Management

Design and Implement Security for Storage

  • Set up access controls for storage accounts
  • Administer lifecycle management for storage account access keys
  • Choose and set up a suitable method for accessing Azure Files
  • Choose and set up a suitable method for accessing Azure Blob Storage
  • Choose and set up a suitable method for accessing Azure Tables
  • Choose and set up a suitable method for accessing Azure Queues
  • Choose and set up appropriate methods for safeguarding against data security threats, including soft delete, backups, versioning, and immutable storage
  • Set up Bring Your Own Key (BYOK)
  • Enable double encryption at the Azure Storage infrastructure level

Design and Implement Security for Azure SQL Database and Azure SQL Managed Instance

  • Set up Microsoft Entra database authentication
  • Apply database auditing
  • Identify when to use the Microsoft Purview governance portal
  • Apply data classification of sensitive information using the Microsoft Purview governance portal
  • Design and implement dynamic masking
  • Apply Transparent Data Encryption (TDE); a brief T-SQL sketch follows this list
  • Provide recommendations for utilizing Azure SQL Database Always Encrypted based on specific scenarios
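For the TDE item above, here is a minimal, generic sketch; the certificate and database names are placeholders, and note that Azure SQL Database enables service-managed TDE by default, so these explicit steps mostly matter for SQL Server or customer-managed-key scenarios.

-- Illustrative only: enable TDE with placeholder certificate/database names.
USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password here>';
CREATE CERTIFICATE TDECert WITH SUBJECT = 'TDE certificate';

USE SalesDB;
CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_256
    ENCRYPTION BY SERVER CERTIFICATE TDECert;

ALTER DATABASE SalesDB SET ENCRYPTION ON;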

Administer Security Operations

Design, Implement, and Oversee Governance for Security

  • Establish, allocate, and interpret security protocols and strategies in Azure Policy
  • Adjust security configurations through Azure Blueprints
  • Deploy fortified infrastructures using a landing zone approach
  • Create and set up an Azure Key Vault
  • Advise on the appropriate usage of a dedicated Hardware Security Module (HSM)
  • Adjust access to Key Vault, encompassing vault access policies and Azure Role Based Access Control
  • Administer certificates, confidential information, and cryptographic keys
  • Set up key rotation procedures
  • Configure the backup and restoration of certificates, confidential information, and cryptographic keys

Administer Security Stance using Microsoft Defender for Cloud

  • Identify and rectify security vulnerabilities via Microsoft Defender for Cloud Secure Score and Inventory
  • Evaluate adherence to security frameworks and Microsoft Defender for Cloud
  • Incorporate industry and regulatory benchmarks into Microsoft Defender for Cloud
  • Incorporate tailored strategies into Microsoft Defender for Cloud
  • Connect hybrid cloud and multi-cloud environments with Microsoft Defender for Cloud
  • Identify and oversee external assets through Microsoft Defender External Attack Surface Management

Configure and Manage Threat Protection with Microsoft Defender for Cloud

  • Activate protective services within Microsoft Defender for Cloud, such as Microsoft Defender for Storage, Databases, Containers, App Service, Key Vault, Resource Manager, and DNS
  • Set up Microsoft Defender for Servers
  • Set up Microsoft Defender for Azure SQL Database
  • Handle and address security alerts within Microsoft Defender for Cloud
  • Set up workflow automation through Microsoft Defender for Cloud
  • Assess vulnerability scans conducted by Microsoft Defender for Server

Configure and Oversee Security Monitoring and Automation Solutions

  • Track security incidents via Azure Monitor
  • Configure data integrations in Microsoft Sentinel
  • Develop and tailor detection rules in Microsoft Sentinel
  • Assess alerts and events in Microsoft Sentinel
  • Configure automated processes in Microsoft Sentinel

For more information about Microsoft exams, refer to the following links.

  • DP-500 Certification Exam Preparation for Microsoft Azure and Power BI
  • Power BI Certification FAQ for Exams PL-300 and PL-900
  • Prepare for the AZ-900 Microsoft Azure Fundamentals Certification
  • Study material for exam AZ-100 Microsoft Azure Infrastructure and Deployment
  • Study material for exam AZ-400 Microsoft Azure DevOps Solutions
  • Study material for exam AZ-203 Developing Solutions for Microsoft Azure


Unlock AI Collaboration at Microsoft BUILD 2024 with Semantic Kernel

April 16th, 2024

The moment we’ve all been waiting for is nearly here. Microsoft BUILD 2024 , happening from May 21 – 23rd, is poised to be a groundbreaking event, especially for our community working at the intersection of AI and application development.

I’m thrilled to announce our Semantic Kernel session – Bridge the chasm between your ML and app devs with Semantic Kernel .

Developing cutting-edge AI solutions requires a symphony between machine learning (AI experts) and app development teams, a collaboration that’s been historically hindered by different tech stacks and conceptual frameworks. However, with the release of our v1.0 kernels, available in Python , C# , and Java , we are pioneering a universal language for AI development. This transformational approach ensures consistency, efficiency, and most importantly, bridges the gap that has long existed between these two critical areas of AI product development, and we can’t wait to share more. Our session will be recorded and available after BUILD for those who will not be able to join us live.

For those joining us in person @ BUILD, the experience gets even richer with 7 additional sessions showing how to use Semantic Kernel:

  • Combine Semantic Kernel with your existing apps and services – this demo will show how to use AI Agents to call existing code.
  • Build AI Apps with Azure Cosmos DB for MongoDB and Semantic Kernel – in this hands-on lab you will use Azure Cosmos DB for MongoDB with Semantic Kernel to create a RAG pattern app over transactional data.
  • Transform your RAG deployment at scale with real time results – this demo will show how WikiChat (a RAG-based chatbot for Wikipedia) was built, which updates in real time.
  • Build apps from your data and LLMs to find answers to key questions – this demo will show how to use the MongoDB developer data platform to ground the AI in actual data.
  • Generative AI application stack and providing long term memory to LLMs – this pre-recorded session will show how to use long-term memory for LLMs and AI-agent-powered applications using a variety of tools.
  • Build scalable chat history and conversational memory into LLM apps – this demo will show how to implement highly scalable chat history solutions using Semantic Kernel with a DiskANN-based vector database in Azure Cosmos DB for NoSQL.
  • Learn how to easily integrate AI into your .NET apps – in this hands-on lab you will learn how to use Semantic Kernel with Azure services to integrate AI into your .NET applications. You will also learn more about .NET Aspire!

Don’t miss this unique opportunity to elevate your skills, network with experts, and understand the power of Semantic Kernel. Whether it’s through attending our session, connecting with the team in person, or engaging with our open-source community, your journey into integrated AI development begins at BUILD .

See you there!


Evan Chaki Semantic Kernel


Microsoft Fabric March 2024 Update


Welcome to the March 2024 update.

We have a lot of great features this month including OneLake File Explorer, Autotune Query Tuning, Test Framework for Power Query SDK in VS Code, and many more!

Earn a free Microsoft Fabric certification exam!  

We are thrilled to announce the general availability of Exam DP-600 , which leads to the Microsoft Certified: Fabric Analytics Engineer Associate certification.   

Microsoft Fabric’s common analytics platform is built on the instantly familiar Power BI experience , making your transition to Fabric Analytics Engineer easier. With Fabric, you can build on your prior knowledge – whether that is Power BI, SQL, or Python – and master how to enrich data for analytics in the era of AI.  

To help you learn quickly and get certified, we created the Fabric Career Hub. We have curated the best free on-demand and live training, exam crams, practice tests and more .  

And because the best way to learn is live, we will have free live learning sessions  led by the best Microsoft Fabric experts from Apr 16 to May 8, in English and Spanish. Register now at the Learn Together page.

Also, become eligible for a free certification exam by completing the Fabric AI Skills Challenge. But hurry! The challenge only runs from March 19 to April 19, and free certs are first-come, first-served (limit one per participant; terms and conditions apply).

In this update:

  • Visual calculations update (preview)
  • On-object interaction updates
  • Mobile layout auto-create (preview)
  • Expanding Spatial Data Integration: Shapefile Support in Azure Maps Visual
  • Data bars in matrix subtotal/total conditional formatting
  • Data labels alignment
  • Write DAX queries in DAX query view with Copilot (preview)
  • Enhanced row-level security editor is enabled by default (preview)
  • Selection expressions for calculation groups (preview)
  • DAX query view improvements (preview)
  • Edit your data model in the Power BI service – updates
  • Undo/Redo, Clear all, and New filter cards in Explore
  • Deliver report subscriptions to OneDrive SharePoint (preview)
  • Custom visual SSO support
  • New title flyout for Power BI Desktop developer mode
  • Rename to "Semantic Model" in Power BI Project files
  • System file updates for Git integration
  • Hierarchical Identity filter API
  • New visuals in AppSource
  • Dumbbell Bar Chart by Nova Silva
  • Date Picker by Powerviz
  • Drill Down Combo PRO
  • PDF Uploader/Viewer
  • Inforiver Premium Matrix
  • Connect to new data sources from Power BI Report Builder (preview)
  • Localized parameter prompts in Power BI Report Builder
  • OneLake File Explorer: Editing via Excel
  • Simplifying table clones: Automatic RLS and DDM Transfer
  • Extract and publish .sqlproj from the Warehouse Editor
  • Cold query performance improvements
  • Warehouse Takeover API
  • Autotune Query Tuning
  • Experimental Runtime 1.3 (Spark 3.5 and Delta 3.0 OSS)
  • Queueing for Notebook Jobs
  • New validation enhancement for "Load to table"
  • Notebook Spark executors resource utilization
  • Spark advisor feedback setting
  • Enable upstream view for notebooks and Spark job definitions' related pipelines
  • New AI samples
  • Accessibility improvements
  • Support for mandatory MIP label enforcement
  • Compare Nested Runs
  • Code-First AutoML in public preview
  • Code-First Hyperparameter Tuning in public preview
  • Eventhouse
  • Eventhouse Minimum Consumption
  • Query Azure Data Explorer data from Queryset
  • Update records in a KQL Database (public preview)
  • Recent enhancements to the Event Processor in Eventstream
  • Incoming events throughput up to 100 MB/s
  • Retention for longer than 1 day
  • Capacity Metrics support for Pause and Resume
  • Privacy levels support in Dataflows
  • Enhancement to Manage Connections
  • Test Framework for Power Query SDK in VS Code
  • General availability of the VNet data gateway for Fabric and Power BI
  • Browse Azure resources in Get Data
  • Block sharing SCC tenant level
  • Allow schema changes in output destinations
  • Cancel Dataflow Refresh
  • Certified connector updates
  • UC Support in Azure Databricks activity
  • Semantic Model Refresh activity
  • Performance tuning tips: improved experience, including wording, visualization, etc.
  • On-Premises Connectivity with Fabric Pipeline (public preview)
  • New expressions "Changes by", "Increases by", and "Decreases by"
  • Compliance

Visual calculations update (preview)

You can now add and edit visual calculations on the service. You can add a visual calculation by selecting New calculation from the context menu on a visual after you publish a report to the service.


Also, after you publish a report that has visual calculations in it, you can access the visual calculations edit mode by selecting a visual calculation and choosing Edit calculation .


To learn more about visual calculations, read our announcement blog and our documentation.

Blogs: https://powerbi.microsoft.com/blog/visual-calculations-preview/

Docs: https://aka.ms/visual-calculations-docs

Why not both? To balance the needs of existing users who prefer to build visuals quickly in the pane with the needs of new users who need guidance when picking a visual type or appropriate field wells, you no longer have to choose one path or the other: now there's both!

This month, we streamlined the build pane and moved the visual suggestions feature to be inside the on-object build button only. Need help building your visual? Use the on-object “suggest a visual” experience. Already know your way around, use the build pane as you already do today.


Gauge visual is now supported! The gauge visual now supports the new on-object formatting sub selections. Simply double click on your gauge visual to enter format mode, then right-click on which part of the visual you’d like to format using the mini-toolbar.


Mobile layout auto-create (Preview)

You know that mobile optimized report layouts are the best way to view data in the Power BI mobile apps. But you also know that it requires extra work to create that layout. Well, not anymore…

As of this monthly update, you can generate a mobile-optimized layout with the click of a button! This long-awaited feature allows you to easily create mobile-optimized layouts for any new or existing report page, saving you tons of time!

When you switch to the mobile layout view in Power BI Desktop , if the mobile canvas is empty, you can generate a mobile layout just by selecting the Auto-create button.

The auto-create engine understands the desktop layout of your report and builds a mobile layout that considers the position, size, type, and order of the visuals that the report contains. It places both visible and hidden visuals, so if you have bookmarks that change a visual’s visibility, they will work in the automatically created mobile layout as well.

You can edit the automatically created mobile layout, so if the result is not exactly what you expected, you can tweak it to make it perfect for your needs. Think of it as a starting point you can use to shorten the way to that beautiful, effective, mobile-optimized report you envision.

To enjoy the new mobile layout auto-create capabilities, switch on the “ Auto-create mobile layout ” preview feature in Power BI Desktop: File > Options and settings > Options > Preview features > Auto-create mobile layout .


We invite you to try out the mobile layout Auto-create feature and share your feedback with us!

Expanding Spatial Data Integration: Shapefile Support in Azure Maps Visual

After successfully integrating WKT and KML formats in February, we’re now stepping it up a notch by extending our support to include the Shapefile format. With just two clicks, you can now seamlessly overlay your spatial data onto Azure Maps’ base map. Whether through file upload or a hosted file, Azure Maps’ reference layer empowers you to effortlessly incorporate your data. Get ready to elevate your data storytelling to new heights, embracing flexibility and unlocking fresh insights with our upcoming release!



In this Power BI release, we’re excited to introduce an upgrade to the data bars for Matrix and Table visuals. Now, you have the flexibility to apply data bars to the following options:

  • Values Only: Display data bars based solely on the values within your visual.
  • Values and Totals: Extend data bars to include both individual values and their corresponding totals.
  • Total Only: Show data bars exclusively for the overall total.

This enhancement provides better control over your tabular visuals, reducing unnecessary noise and ensuring cleaner presentation.


We’ve made significant improvements to the data labels in our charts. Now, when you use a multi-line layout with title, value, and detail labels, you have the flexibility to horizontally align them. This means you can create cleaner, more organized visualizations by ensuring that your labels are neatly positioned. To experience this enhancement, follow these steps: 1) navigate to the Data Labels section, 2) click on Layout , and finally, 3) explore the Horizontal alignment options for aligning your labels.


The DAX query view with Copilot is now available in public preview! Enable the feature in the Preview section of File > Options and settings > Options, click on DAX query view, and launch the in-line Copilot by clicking the Copilot button in the ribbon or using the shortcut CTRL+I.

With Fabric Copilot, you can generate DAX queries from natural language, get explanations of DAX queries and functions, and even get help on specific DAX topics. Try it out today and see how it can boost your productivity with DAX query view!


A more detailed blog post will be available soon.

We are excited to announce the enhanced row-level security editor as the default experience in Desktop! With this editor, you can quickly and easily define row-level security roles and filters without having to write any DAX! Simply choose ‘Manage roles’ from the ribbon to access the default drop-down interface for creating and editing security roles. If you prefer using DAX or need it for your filter definitions, you can switch between the default drop-down editor and a DAX editor.


Calculation groups just got more powerful! This month, we are introducing the preview of selection expressions for calculation groups, which allow you to influence what happens in case the user makes multiple selections for a single calculation group or does not select at all. This provides a way to do better error handling, but also opens interesting scenarios that provide some good default behavior, for example, automatic currency conversion. Selection expressions are optionally defined on a calculation group and consist of an expression and an optional dynamic format expression.

This new capability comes with an extra benefit from potential performance improvements when evaluating complex calculation group items.

To define and manage selection expressions for calculation groups you can leverage the same tools you use today to work with calculation groups.

On a calculation group, you will be able to specify the following selection expressions, each consisting of the Expression itself and an optional FormatStringDefinition:

  • multipleOrEmptySelectionExpression . This expression has a default value of SELECTEDMEASURE() and will be returned if the user selects multiple calculation items on the same calculation group or if a conflict between the user’s selections and the filter context occurs.
  • noSelectionExpression . This expression has a default value of SELECTEDMEASURE() and will be returned if the user did not select any items on the calculation group.

Here’s an overview of the type of selection compared to the current behavior that we shipped before this preview, and the new behavior both when the expression is defined on the calculation group and when it’s not. The items in bold are where the new behavior differs from the current behavior.

Let’s look at some examples.

Multiple or Empty selections

If the user makes multiple selections on the same calculation group, the current behavior is to return the same result as if the user did not make any selections. In this preview, you can specify a multipleOrEmptySelectionExpression on the calculation group. If you do, we evaluate that expression and its related dynamic format string and return the result. You can, for example, use this to inform the user about what is being filtered.

In the case of a conflict or empty selection on a calculation group, you might have seen an error in the past.

With our new behavior this error is a thing of the past and we will evaluate the multipleOrEmptySelectionExpression if present on the calculation group. If that expression is not defined, we will not filter the calculation group.

No selections

One of the best showcases for this scenario is automatic currency conversion. Today, if you use calculation groups to do currency conversion, the report author and user must remember to select the right calculation group item for the currency conversion to happen. With this preview, you are now empowered to do automatic currency conversion using a default currency. On top of that, if the user wants to convert to another currency altogether, they can still do that, but even if they deselect all currencies the default currency conversion will still be applied.

Note how both the currency to convert to as well as the “conversion” calculation group item is selected.


Notice how the user must only select the currency to convert to.


The selection expressions for calculation groups are currently in preview. Please let us know what you think!

We released the public preview of DAX query view in November 2023, and in this release, we made the following improvements:

  • Re-ordering of query tabs is now available.
  • The share feedback link has been added to the command bar.
  • Coach marks for DAX query view.

And we have released additional INFO DAX functions.

  • INFO.STORAGETABLECOLUMNS() equivalent to DISCOVER_STORAGE_TABLE_COLUMNS
  • INFO.STORAGETABLECOLUMNSEGMENTS() equivalent to DISCOVER_STORAGE_TABLE_COLUMN_SEGMENTS
  • INFO.STORAGETABLES() equivalent to DISCOVER_STORAGE_TABLES

Learn more with these resources.

  • DAX query view: https://learn.microsoft.com/power-bi/transform-model/dax-query-view
  • DAX queries: https://aka.ms/dax-queries

Below are the improvements coming this month to the data model editing in the Service preview:

Autodetect relationships

Creating relationships for your semantic model on the web is now easier using autodetect relationships. Simply go to the Home ribbon and select the Manage relationships dialog . Then, choose ‘Autodetect’ and let Power BI find and create relationships for you.


Sort by column

Within the web you can now edit the sort by property for a column in your semantic model.

Row-level security

We have made several improvements to the row-level security editor in the web. In the DAX editor you can now do the following actions:

  • Utilize IntelliSense to assist in defining your DAX expression.
  • Verify the validity of your DAX expression by clicking the check button.
  • Revert changes to your DAX expression by selecting the X button.


Please continue to submit your feedback directly in the comments of this blog post or in our  feedback forum.

Undo/Redo, Clear all, and New filter cards in Explore

This month we’ve added a few new features to the new Explore experience.

Undo/Redo  

Now it’s simply to undo your previous action or use the ‘Reset all changes’ to go back to the last save state of your exploration.

Note: If you haven’t saved your exploration yet, then reset will clear your canvas back to blank.


Clear all  

The new ‘clear all’ feature allows you to wipe your canvas back to blank. This works great when using Explore as a whiteboarding space, maybe you have a new thought you’d like to explore and want to essentially erase what you have in one click. This is made simple with the new ‘clear all’ option.


New filter card styling  

When using the filtering experience in Explore you’ll now notice an update to the filter cards style and readability. We hope these improvements make filters easier to use and accessible for more users. Let us know what you think!


Deliver report subscriptions to OneDrive SharePoint (Preview)

You can now send subscriptions to OneDrive SharePoint (ODSP). With this update, all your large reports, both PBIX and paginated reports, can be sent to ODSP. At this time, the workspace must be backed by a premium capacity or equivalent fabric capacity .


We currently support “Standard” subscriptions.


You need to select the “Attach full report” option.


We support more output formats for paginated reports.


Once you select the output format, you can select the OneDrive or SharePoint option, the location and enter the subscription schedule to have your report delivered.


Learn more about subscribing to ODSP here. This feature will start lighting up in certain regions as soon as this week, but depending on the geography in which your Power BI tenant is located, it may take up to three weeks to appear. Also, note that this feature will not be supported in sovereign clouds while in preview.

Custom visuals that use the new authentication API are also supported when viewed in the Power BI Mobile apps. No additional authentication is required, making sure that the data exploration experience in the mobile app is as delightful as possible, without any interruptions.

New title flyout for Power BI Desktop developer mode

You can quickly recognize when you are working on a Power BI Project (PBIP) by looking at the title bar:


If you click on the title bar, you will see a new flyout that is specific for Power BI Project. This lets you easily locate the Power BI Project files as well as the display name settings for the report and the semantic model. You can also open the folder in file explorer by clicking on the paths.


Rename to “Semantic Model” in Power BI Project files

Following the rename to “Semantic Model,” announced last November, Power BI Project files (PBIP) also adhere to that naming change. Now, when saving as PBIP, the following changes will be verified:

  • The semantic model folder, "*.Dataset", will be saved as "*.SemanticModel".
  • This only applies to new PBIP files; existing projects keep the current folder name.
  • The "definition.pbidataset" file is renamed to "definition.pbism".

Currently, when synchronizing Fabric items with Git, every item directory is equipped with two automatically generated system files— item.metadata.json and item.config.json . These files are vital for establishing and maintaining the connection between the two platforms.


As part of our continuous efforts to simplify the integration with Git, we have consolidated these files into a single system file, .platform. This new system file will encompass all the attributes that were previously distributed between the two files.


API 5.9.0 introduces a new filter API. This API allows you to create a visual that can filter matrix data hierarchically based on data points. This is useful for custom visuals that leverage group-on keys and allow hierarchical filtering using identities. For more information see the documentation .   

Visualizations

  • Waterfall-Visual-Extended
  • Stacked Insights
  • Waterfall – What’s driving my variation?
  • Untap Text Box
  • CloudScope Image
  • neas-spc
  • Donut Chart image
  • orcaviz-enterprise

Your valuable feedback continues to shape our Power BI visuals, and we’re thrilled to announce exciting enhancements to the Dumbbell Bar Chart. In the latest release, we’ve introduced the capability to display multiple dumbbell bars in a single row, allowing for the presentation of more than two values in a streamlined manner. This update opens new possibilities, including the creation of the Adverse Event Timeline plot, or AE Timeline.

Experience the enhanced Dumbbell Bar Chart and the innovative AE Timeline by downloading it from AppSource . All features are readily accessible within Power BI Desktop, empowering you to evaluate this visual on your own data. Dive into enhanced functionality and discover new insights effortlessly.

Questions or remarks? Visit us at: https://visuals.novasilva.com/ .

Date Picker by Powerviz

The Ultimate Date Slicer for Power BI.

The “First Day of Week” option was added in the recent version update.

The Date Picker visual offers a modern calendar view, Presets, Pop-up mode, Default Selection, Themes, and more, making it a must-have date slicer for Power BI reports.  Its rich formatting options help with brand consistency and a seamless UI experience.

Key Features:

  • Display Mode: Choose between Pop-up and Canvas modes.
  • Presets: Many commonly used presets like Today, Last Week, YTD, MTD, or create your own preset using a field.
  • Default Selection: Control the date period selected when the user refreshes or reopens the report.
  • Filter Type: Choose between Range and Start/End types.
  • Month Style: Select a single- or double-month date slicer.
  • Multiple Date Ranges: Flexibility to select multiple date ranges.
  • Themes: 15+ pre-built themes with full customization.
  • Holidays and Weekends: Customize holidays/weekends representation.
  • Import/Export JSON: Build templates and share your designs.

Many more features and customizable options.

🔗 Try Date Picker for FREE from  AppSource

📊 Check out all features of the visual: Demo file

📃 Step-by-step instructions: Documentation 💡 YouTube Video:  Video Link

📍 Learn more about visuals: https://powerviz.ai/

✅ Follow Powerviz: https://lnkd.in/gN_9Sa6U

Drill Down Combo PRO

Drill Down Combo PRO lets report creators build impressive charts of categorical data. Choose from multiple chart types and create column, line, area, and their combination charts. Use vast customization options to make your chart unique while enhancing the readability of your data with features like conditional formatting and dynamic thresholds.

MAIN FEATURES:

  • Conditional formatting – compare results against forecasts by automatically adjusting formatting based on a numerical value.
  • Full customization – customize X and Y axes, the legend, outline, and fill settings.
  • Choose normal, 100% proportional, or zero-based stacking.
  • Set up to 4 static and/or dynamic thresholds to demonstrate targets.
  • Customize multiple series simultaneously with series and value label defaults.

POPULAR USE CASES:

  • Sales and marketing – sales strategies, results, marketing metrics
  • Human resources – hiring, overtime, and efficiency ratios by department.
  • Accounting and finance – financial performance by region, office, or business line
  • Manufacturing – production and quality metrics

ZoomCharts Drill Down Visuals are known for interactive drilldowns, cross-filtering, and rich customization options. They support interactions, selections, custom and native tooltips, filtering, bookmarks, and context menu.

Try Drill Down Combo PRO now by downloading the visual from AppSource. 

Learn More about Drill Down Combo PRO by ZoomCharts.  

Upload and securely share any PDF file with your colleagues.

Introducing our PDF Uploader/Viewer visual !

Simply upload any PDF file and instantly share it with your colleagues.

This visual boasts several impressive capabilities:

  • Microsoft certification ensures that the visual does not interact with external services, guaranteeing that your PDF files are securely stored and encrypted within the report, in alignment with your report sensitivity settings.
  • It automatically saves your preferences , allowing you to navigate pages, adjust the zoom level, and scroll to emphasize specific sections. Your colleagues will view the exact portion of the PDF you highlighted.
  • You have the flexibility to add text or draw lines to underline key content.
  • Users can conveniently download the PDF file directly from the visual.

Learn more: https://appsource.microsoft.com/en-us/product/power-bi-visuals/pbicraft1694192953706.pdfuploaderandviewer?tab=Overview

Inforiver Premium Matrix by Lumel delivers superior reporting capabilities for financial, paginated, IBCS, variance, management reporting, and executive scorecards with the flexibility and familiar user experience of Excel.

To bring visual formulas and a ton of additional functionality frequently sought after by the Power BI community, Inforiver leveraged a differentiated architecture compared to the native matrix. With the recently released dynamic drill SDK/API, we now offer the Performance Mode, so you don’t have to compromise between the initial load performance offered by the native matrix and the advanced capabilities offered by Inforiver. You can now load the first two levels as the default dimensions of the hierarchy and then drill down to the lower levels on demand as needed, giving you the best of both worlds.

In addition to manual data input and what-if simulation capabilities, Inforiver’s planning and forecasting capabilities are significantly enhanced with the upcoming 2.8 release. This includes a dedicated forecast toolbar, support for automatic rolling forecasts, dynamic handling of time series extensions, and an option to distribute deficits to other time periods.

Inforiver notes and annotations are now context-aware and are dynamically updated based on the filter/slicer selection.

Try Inforiver today!

YouTube video: https://youtu.be/uBLw8xOWujc

Paginated Reports

You can now connect to new data sources such as Snowflake and Databricks using the “Get Data” button in Power BI Report Builder.

Follow the simple, click-through experience of Power Query online. Select the data source that you want to connect to.

If you want to use AAD, you need to create a shareable cloud connection. You can create one as documented here or use one that has been shared with you.

You can also select the shareable cloud connection from the “Connection” dropdown. Make sure that the report consumer has permissions to the shareable cloud connection.

Once you have a connection, select Next.

You can transform the data that was selected.

In the Power Query editor, you can perform all the supported operations. Learn more about the capabilities of the Power Query editor.

The M query will be used to build your RDL dataset.

You can use this dataset to build your paginated report. You can publish the report to the service and share it. Learn more about connecting to more data sources from Power BI Report builder here .

Need a paginated report to support parameter prompts in more than one language? You no longer need to create several reports. You can simply set an expression for the prompt in Power BI Report Builder and specify the translated labels for a given language that the prompt should be displayed in. Learn more from the documentation on Localizing parameter prompts .

When you make new changes to Git, your system files will be automatically updated to the new .platform format in conjunction with your modifications. Both your own changes and the new file updates will show as part of the commit operation. Additionally, any new projects exported from Power BI Desktop via developer mode will adopt the new system file format, which means you need to update to the latest Power BI Desktop version in order to open items exported from Fabric. Beyond these adjustments, there will be no impact on your Git workflow.

More about this file and the attributes within it can be found here .

OneLake File Explorer: Editing via Excel

With the latest release (v1.0.11.0) of OneLake File Explorer, we are excited to announce that you can now update your files directly using Excel, mirroring the user-friendly experience available in OneDrive. This enhancement aims to streamline your workflow and provide a more intuitive approach to managing and editing your Excel documents.

Here’s how it works:

  • Open your file using Excel within your OneLake file explorer.
  • Make the necessary updates and save your data.
  • Close the file.

And that’s it! The moment you close the file, your file is updated, and you can view the latest changes through your browser online. This feature offers a convenient, hassle-free way to manage and update your data files via Excel.

Data Warehouse

Simplifying table clones: automatic RLS and DDM transfer

In the sphere of data management, ensuring the security and confidentiality of sensitive information is critical. As part of our previous releases of table clones, we delivered the ability to clone tables within and across schemas as of the current point in time, as well as clone with time travel. However, the process of cloning tables inherently involves cloning the sensitive data they contain, presenting potential risks to data security and privacy. So, table clones in Synapse Data Warehouse within Microsoft Fabric now offer the ability to automatically transfer the row-level security (RLS) and dynamic data masking (DDM) definitions from the source to the cloned table near-instantaneously.

Row-level security (RLS) enables organizations to restrict access to rows in a table. When a table is cloned, the same limitations that exist at the source table are automatically applied to the cloned table as well. Dynamic data masking (DDM) allows organizations to define masking rules on specific columns, thereby helping protect sensitive information from unauthorized access. When a table is cloned, the masking rules that are applied at the source table are automatically applied to the cloned table.

Effective data management is interwoven with robust security practices. During the process of cloning, it is crucial not only to transfer security configurations accurately but also to ensure the tables that are cloned inherit the security and privacy configurations. This helps ensure compliance with the organization’s privacy regulations.

Extract and publish .sqlproj from the Warehouse Editor

We’re excited to announce the ability to extract and publish a SQL database project directly through the DW editor!

SQL Database Projects is an extension to design, edit, and publish schemas for SQL databases from a source-controlled environment. A SQL project is a local representation of SQL objects that comprise the schema for a single database, such as tables, stored procedures, or functions.

This feature enables three main use cases with no need for additional tooling:

  • Download a database project, which can be used to develop the warehouse schema in client tools such as SQL Database Projects in Azure Data Studio or VS Code.
  • Publish existing database projects to a new Fabric Warehouse.
  • Extract a schema from a warehouse/SQL analytics endpoint and publish it to another warehouse.

To extract:

Click download database project in the ribbon (or click on the context menu of the database in the object explorer):

To publish:

Create a brand-new warehouse in the Fabric Portal. Upon entry, select SQL database projects:

Cold query performance improvements

Fabric stores data in Delta tables, and when the data is not cached, it needs to transcode data from Parquet file format structures into in-memory structures for query processing. With this improvement, transcoding is optimized further; in our tests, we observed up to 9% faster queries when data was not previously cached.

Warehouse Takeover API

Warehouses use the data item owner’s identity to connect to OneLake. This causes issues when the owner of the warehouse leaves the organization, has their account disabled, or has an expired password.

To solve this problem, we are happy to announce the availability of the Takeover API, which allows you to change the warehouse owner from the current owner to a new owner, which can be an SPN or an Organizational Account.

For more information, see Change Ownership of Fabric Warehouse .

Data Engineering

Autotune Query Tuning

We’re excited to introduce the Autotune Query Tuning feature for Apache Spark, now available across all regions. Autotune leverages historical data from your Spark SQL queries to automatically fine-tune your configurations using the newest machine learning algorithms, ensuring faster execution times and enhanced efficiency. With Autotune, you can now surpass the performance gains of manually tuned workloads without the extensive effort and experimentation traditionally required. It starts with a baseline model for initial runs and iteratively improves as more data becomes available from repeated executions of the same workload. This smart tuning covers key Spark configurations, including spark.sql.shuffle.partitions, spark.sql.autoBroadcastJoinThreshold, and spark.sql.files.maxPartitionBytes, optimizing your Spark environment dynamically.

To activate on the session level, simply enable it in your Spark session with:

If you use Spark SQL:

SET spark.ms.autotune.enabled=TRUE

If you use PySpark:

spark.conf.set('spark.ms.autotune.enabled', 'true')

If you use Scala:

spark.conf.set("spark.ms.autotune.enabled", "true")

If you use SparkR:

library(SparkR)

sparkR.conf("spark.ms.autotune.enabled", "true")

To enable Autotune Query Tuning for all notebooks and jobs attached to the environment, you can configure the Spark Setting on the environment level. This way, you can enjoy the benefits of automatic tuning without having to set it up for each session.
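
As a quick, hedged illustration of the session-level opt-in, here is a minimal PySpark sketch. It assumes a Fabric notebook with a Lakehouse attached; the table name sales_orders and the query itself are purely illustrative, and the spark.conf.set call is the same one shown above.

# Minimal sketch: enable Autotune for this session, then run a query that Autotune
# can learn from across repeated executions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # in a Fabric notebook, `spark` already exists
spark.conf.set("spark.ms.autotune.enabled", "true")  # session-level opt-in

result = spark.sql("""
    SELECT SalesOrderID, SUM(LineTotal) AS OrderTotal
    FROM sales_orders
    GROUP BY SalesOrderID
""")
result.show(10)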

This feature aligns with our commitment to Responsible AI, emphasizing transparency, security, and privacy. It stands as a testament to our dedication to enhancing customer experience through technology, ensuring that Autotune not only meets but exceeds the performance standards and security requirements expected by our users.

Experimental Runtime 1.3 (Spark 3.5 and Delta 3.0 OSS)

We are introducing the Experimental Public Preview of Fabric Runtime 1.3 — the latest update to our Azure-integrated big data execution engine, optimized for data engineering and science workflows based on Apache Spark.

Fabric Runtime 1.3, in its experimental public preview phase, allows users early access to test and experiment with the newest Apache Spark 3.5 and Delta Lake 3.0 OSS features and APIs.

Queueing for Notebook Jobs

We are thrilled to announce a new feature: Job Queueing for Notebook Jobs. This feature aims to eliminate manual retries and improve the user experience for our customers who run notebook jobs on Microsoft Fabric.

Notebook jobs are a popular way to run data analysis and machine learning workflows on Fabric. They can be triggered by pipelines or a job scheduler, depending on the user’s needs. However, in the current system, notebook jobs are not queued when the Fabric capacity is at its max utilization. They are rejected with a Capacity Limit Exceeded error, which forces the user to retry the job later when the resources are available. This can be frustrating and time-consuming, especially for enterprise users who run many notebook jobs.

With Job Queueing for Notebook Jobs, this problem is solved. Notebook jobs that are triggered by pipelines or the job scheduler will be added to a queue and retried automatically when capacity frees up. The user does not need to do anything to resubmit the job. The status of these notebook jobs will be Not Started while queued and will change to In Progress when execution starts.

We hope that this feature will help our customers run their notebook jobs more smoothly and efficiently on Fabric.

We are excited to announce an enhancement to the beloved “Load to table” feature to help mitigate any validation issues and make your data loading experience smoother and faster.

The new validation features will run on the source files before the load to table job is fired to catch any probable failures that might cause the job to fail. This way, you can fix the issues immediately, without needing to wait until the job runs into an error. The validation features will check for the following:

  • Unsupported table name: The validation feature will alert you if the table name is not in the right format and provide you with the supported naming conventions.
  • Unsupported file extension: The load to table experience currently only supports CSV and Parquet files, therefore the validation feature will alert you if the file is not in one of those formats ahead of time.
  • Incompatible file format: The file format of the source files must be compatible with the destination table. For example, if the destination table is in Parquet format, the source files must be in a format that can be converted to Parquet, such as CSV or JSON. The validation feature will alert you if the file format is not compatible.
  • Invalid CSV file header: If your CSV file header is not valid, the validation feature will catch it and alert you before the job is fired.
  • Unsupported relative path: The validation feature will alert you if the relative path is not supported so you can make the needed changes.
  • Empty data files: The source files must contain some data loaded onto the table. The validation feature will alert you if the source files are empty and suggest you remove them or add some data.

The validation feature is fully integrated with the “Load to table” feature, so you won’t need any additional steps to leverage this functionality.

We hope you enjoy the new validation enhancement and find it useful for your data loading needs.

We are excited to inform you that the feature for analyzing executors’ resource utilization has been integrated into Synapse Notebook. Now, you can view both the allocated and the running executor cores, as well as monitor their utilization during Spark executions within a Notebook cell. This new feature offers insights into the allocation and utilization of Spark executors behind the scenes, enabling you to identify resource bottlenecks and optimize the utilization of your executors.

We are thrilled to announce the introduction of new feedback settings for the Fabric Spark Advisor. With these settings, you can choose whether to show or hide specific types of Spark advice according to your needs. Additionally, you have the flexibility to enable or disable the Spark Advisor for your Notebooks within a workspace, based on your preferences.

Incorporating the Spark Advisor settings at the Fabric Notebook level allows you to maximize the benefits of the Spark Advisor, while ensuring a productive Notebook authoring experience.

With the introduction of the hierarchy view in the Fabric Monitoring Hub, you can now observe the relationship between the Pipeline and Spark activities for your Synapse Notebook and Spark Job Definition (SJD). In the new ‘Upstream’ column of your Notebook or SJD run, you can see the corresponding parent Pipeline and click to view all sibling activities within that pipeline.

Data Science

New AI samples

We are happy to announce the addition of three new samples to the Quick Tutorial category of DS samples on Microsoft Fabric. Two of these samples are designed to help streamline your data science workflow, enabling you to automatically and efficiently determine the optimal machine learning model for your case. The third sample walks you through the process to seamlessly access the data in your Power BI semantic model, while also empowering Power BI users to streamline their workflows by leveraging Python for various automation tasks.

Our AutoML sample guides you through the process of automatically selecting the best machine learning model for your dataset. By automating repetitive tasks, such as model selection, feature engineering, and hyperparameter tuning, AutoML allows users to concentrate on data analysis, insights, and problem-solving.

Our Model Tuning sample provides a comprehensive walkthrough of the necessary steps to fine-tune your models effectively using FLAML. From adjusting hyperparameters to optimizing model architecture, this sample empowers you to enhance model accuracy and efficiency without the need for extensive manual adjustments.

Our Semantic Link sample provides a walkthrough on how to extract and calculate Power BI measures from a Fabric notebook using both the SemPy Python library and Spark APIs. Additionally, it explains how to use the Tabular Model Scripting Language to retrieve and create semantic models, as well as how to utilize the advanced refresh API to automate data refreshing for Power BI users.
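
If you want to try the core idea of the Semantic Link sample yourself, here is a brief sketch using the SemPy fabric module. Treat it as an assumption-laden example: the semantic model name "Sales Model", the measure "Total Sales", and the grouping column are placeholders you would replace with names from your own workspace.

# Sketch: evaluate a Power BI measure from a Fabric notebook with SemPy (semantic-link).
import sempy.fabric as fabric

print(fabric.list_datasets())  # semantic models visible from the current workspace

result = fabric.evaluate_measure(
    dataset="Sales Model",                   # placeholder semantic model name
    measure="Total Sales",                   # placeholder measure name
    groupby_columns=["Customer[Country]"],   # optional Table[Column] grouping
)
print(result.head())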

We are confident these new samples are useful resources to maximize the efficiency and effectiveness of your machine learning workflows. Please check them out and let us know your thoughts, as we are committed to continually improving your data science experience on Microsoft Fabric.

Exciting news! We’ve introduced several accessibility enhancements for ML experiments and model items in Fabric. Now, when you resize your window, the item pages will dynamically reflow to accommodate the change, ensuring a seamless user experience and improved accessibility for users with different screen sizes and devices. Additionally, we’ve added the ability to resize the customized columns and filter panels, empowering users to customize their view according to their preferences. Furthermore, users can hover over property, metric, or parameter names to see the full text, which is particularly helpful for quick browsing of the various properties.

ML Model and Experiment items in Fabric now offer enhanced support for Microsoft Information Protection (MIP) labels. These labels ensure secure handling and classification of sensitive data. With the mandatory enforcement of MIP labels enabled, users are prompted to provide a label when creating an ML experiment or model. This feature ensures compliance with data protection policies and reinforces security measures throughout the development process.

Compare Nested Runs

We have added support for nested child runs in the Run List View for ML Experiments. This enhanced experience streamlines the analysis of nested runs, allowing users to effortlessly view various parent and child runs within a single view and seamlessly interact with them to visually compare results. At its core, MLflow empowers users to track experiments, which are essentially named groups of runs. A “run” denotes a single execution of a model training event, where parameters, metrics, tags, and artifacts associated with the training process are logged. The introduction of parent and child runs introduces a hierarchical structure to these runs. This approach brings several benefits, including organizational clarity, enhanced traceability, scalability, and improved collaboration among team members.
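
To make the parent/child structure concrete, here is a small MLflow sketch; the experiment name, parameters, and metric values are illustrative only and not part of the Fabric sample.

# Sketch: log a parent run with two nested child runs, the structure that the
# Run List View groups together.
import mlflow

mlflow.set_experiment("demo-nested-runs")  # hypothetical experiment name

with mlflow.start_run(run_name="hyperparameter-sweep"):
    for lr in (0.01, 0.1):
        with mlflow.start_run(run_name=f"lr-{lr}", nested=True):
            mlflow.log_param("learning_rate", lr)
            mlflow.log_metric("accuracy", 0.90 if lr == 0.01 else 0.87)  # illustrative values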

With the new AutoML feature in Fabric, you can automate your machine learning workflow and get the best results with less effort. AutoML, or Automated Machine Learning, is a set of techniques and tools that can automatically train and optimize machine learning models for any given data and task type. You don’t need to worry about choosing the right model and hyperparameters, as AutoML will do that for you. You can also track and examine your AutoML runs using Fabric’s MLFlow integration and use the new flaml.visualization module to generate interactive plots of your outcomes. Fabric also supports many Spark and single-node model learners, ensuring that you can find the best fit for your machine learning problem.

Read this article for more information on how to get started with AutoML in Fabric notebooks.
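
For a flavor of what that looks like in code, here is a minimal FLAML sketch on a toy scikit-learn dataset. It is not the Fabric sample itself, just an illustration of the AutoML fit call under a small time budget.

# Sketch: let FLAML's AutoML pick a model and hyperparameters within a 30-second budget.
from flaml import AutoML
from sklearn.datasets import load_iris
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

automl = AutoML()
automl.fit(X_train, y_train, task="classification", time_budget=30, metric="accuracy")

print(automl.best_estimator)                           # name of the best learner found
print(accuracy_score(y_test, automl.predict(X_test)))  # hold-out accuracy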

Code-First Hyperparameter Tuning in Public Preview

Hyperparameters are set prior to the training phase and include elements like learning rate, number of hidden layers in a neural network, and batch size. These settings are crucial as they greatly influence a model’s accuracy and ability to generalize from the training data.

We’re excited to announce that FLAML is now integrated into Fabric for hyperparameter tuning. Fabric’s `flaml.tune` feature streamlines this process, offering a cost-effective and efficient approach to hyperparameter tuning. The workflow involves three key steps: defining your tuning objectives, setting a hyperparameter search space, and establishing tuning constraints.

Additionally, Fabric now also includes enhanced MLFlow Integration, allowing for more effective tracking and management of your tuning trials. Plus, with the new `flaml.visualization` module, you can easily analyze your tuning trial. This suite of plotting tools is designed to make your data exploration and analysis more intuitive and insightful.
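
Here is a compact, illustrative flaml.tune sketch that follows those three steps with a toy objective; the function, search space, and budget are placeholders rather than the Fabric sample code.

# Sketch: define an objective, a search space, and a time budget, then tune.
from flaml import tune

def evaluate(config):
    # Toy objective; in practice this would train a model and return a validation metric.
    score = -(config["x"] - 3) ** 2 + config["y"]
    return {"score": score}

search_space = {
    "x": tune.uniform(0, 10),
    "y": tune.randint(0, 5),
}

analysis = tune.run(
    evaluate,
    config=search_space,
    metric="score",
    mode="max",
    time_budget_s=10,   # tuning constraint: stop after roughly 10 seconds
    num_samples=-1,     # run as many trials as the budget allows
)
print(analysis.best_config)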

Real-time Analytics

Eventhouse is now available for external customers, offering a groundbreaking solution that optimizes performance and cost by sharing capacity and resources across multiple databases. With unified monitoring and management features, Eventhouse provides comprehensive oversight at both aggregate and per-database levels.

This tool efficiently handles large data volumes, making it ideal for real-time analytics and exploration scenarios. It excels in managing real-time data streams, allowing organizations to ingest, process, and analyze data with near real-time capabilities. Eventhouses are scalable, ensuring optimal performance and resource utilization as data volumes grow.

In Fabric, Eventhouses serve as the storage solution for streaming data and support semi-structured and free-text analysis. They provide a flexible workspace for databases, enabling efficient management across multiple projects.

Learn more:

Create an Eventhouse (Preview) – Microsoft Fabric | Microsoft Learn

Eventhouse overview (Preview) – Microsoft Fabric | Microsoft Learn

Eventhouse Minimum Consumption

To optimize costs, Eventhouse suspends the service when not in use, with a brief reactivation latency. For highly time-sensitive systems, use Minimum Consumption to maintain service availability at a selected minimum level, paying for the chosen compute without premium storage charges. This compute is available to all databases within the Eventhouse.

For instructions on enabling minimum consumption, see Enable minimum consumption .

Connecting to and using data in an Azure Data Explorer cluster is now available from Fabric’s KQL Queryset. This feature enables you to connect to Azure Data Explorer clusters from Fabric using a user-friendly interface. Once a connection is made, you can easily and seamlessly access and analyze your data in Azure Data Explorer.

Fabric’s powerful query management and collaboration tools are now available for you, over Azure Data Explorer clusters data. You can save, organize, and share your queries using Fabric’s KQL Queryset, which supports different levels of sharing permissions for your team members. Whether you want to explore your data, or collaborate on insights, you can do it all with Fabric and Azure Data Explorer.

Learn more:   Query data in a KQL queryset – Microsoft Fabric | Microsoft Learn

Update records in a KQL Database (Public Preview)

Fabric KQL Databases are optimized for append ingestion.

KQL Databases already support the .delete command, allowing you to selectively delete records.

We are now introducing the .update command. This command allows you to update records by deleting existing records and appending new ones in a single transaction.

This command comes with two syntaxes: a simplified syntax covering most scenarios efficiently, and an expanded syntax giving you maximum control.

For more details, please go to this dedicated blog.

Event Processor is a no-code editor in Eventstream that enables you to design stream transformation logic, such as filtering, aggregating, and converting data types, before routing to various destinations in Fabric. With the recent enhancements to the Event Processor, you now have even greater flexibility in transforming your data stream. Here are the updates:

  • Personalize operation nodes and easily filter out ‘null’ values from your data
  • Manage and rename your column fields easily in the Aggregate operation
  • Change your values to different data types using the Manage Field operation

Incoming events throughput up to 100 MB/s

With the introduction of the ‘Event Throughput’ setting, you now can select the incoming events throughput rate for your Eventstream. This feature allows you to scale your Eventstream, ranging from less than 1 MB/s to 100 MB/s.

With the addition of the ‘Retention’ setting, you now can specify the duration for which your incoming data needs to be retained. The default retention period is set to 1 day.

Platform Monitoring

Capacity Metrics support for pause and resume

Fabric Pause and Resume is a capacity management feature that lets you pause F SKU capacities to manage costs. When your capacity isn’t operational, you can pause it to enable cost savings, and then reactivate it later when you want to resume work. Fabric Capacity Metrics has been updated with new system events and reconciliation logic to simplify analysis of paused capacities.

Pause and resume your capacity – Microsoft Fabric | Microsoft Learn

Monitor a paused capacity – Microsoft Fabric | Microsoft Learn

Data Factory

Dataflow Gen2: Privacy levels support in Dataflows

You can now set privacy levels for your connections in Dataflows. Privacy levels are critical to configure correctly so that sensitive data is only viewed by authorized users.

Furthermore, data sources must also be isolated from other data sources so that combining data has no undesirable data transfer impact. Incorrectly setting privacy levels may lead to sensitive data being leaked outside of a trusted environment.

You can set this privacy level when creating a new connection:

Enhancement to Manage Connections

Manage connections is a feature that allows you to see, at a glance, the connections in use by your Dataflows and general information about those connections.

We are happy to release a new enhancement to this experience: you can now see a list of all the data sources available in your Dataflow, even the ones without a connection set for them!

For the data sources without a connection, you can set a new connection from within the manage connections experience by clicking the plus sign in the specific row of your source.

Furthermore, whenever you unlink a connection now the data source will not disappear from this list if it still exists in your Dataflow definition. It will simply appear as a data source without a connection set until you can link a connection either in this dialog or throughout the Power Query editor experience.

We’re excited to announce the availability of a new Test Framework in the latest release of Power Query SDK ! The Test Framework allows Power Query SDK Developers to have access to standard tests and a test harness to verify the direct query (DQ) capabilities of an extension connector. With this new capability, developers will have a standard way of verifying connectors and a platform for adding additional custom tests.  We envision this as the first step in enhancing the developer workflow with increased flexibility & productivity in terms of the testing capabilities provided by the Power Query SDK.

The Power Query SDK Test Framework is available on GitHub. It requires the latest release of the Power Query SDK, which wraps the Microsoft.PowerQuery.SdkTools NuGet package containing the PQTest compare command.

What is Power Query SDK Test Framework?

Power Query SDK Test Framework is a ready-to-go test harness with pre-built tests to standardize the testing of new and existing extension connectors, providing the ability to perform functional, compliance, and regression testing that can be extended for testing at scale. It will help address the need for a comprehensive test framework to satisfy the testing needs of extension connectors.

Follow the links below to get started:

  • Power Query SDK overview
  • Create your first Power Query custom connector
  • Get started with the new Test Framework for the Power Query SDK

The VNET Data Gateway is a network security offer that lets you connect your Azure and other data services to Microsoft Fabric and the Power Platform. You can run Dataflow Gen2, Power BI Semantic Models, Power Platform Dataflows, and Power BI Paginated Reports on top of a VNET Data Gateway to ensure that no traffic is exposed to public endpoints. In addition, you can force all traffic to your data source to go through a gateway, allowing for comprehensive auditing of secure data sources. To learn more and get started, visit VNET Data Gateways .

Using the regular path in Get Data to create a new connection, you always need to fill in your endpoint, URL or server, and database name when connecting to Azure resources like Azure Blob Storage, Azure Data Lake Storage Gen2, and Synapse. This is a tedious process and does not allow for easy data discovery.

With the new ‘browse Azure’ functionality in Get Data, you can easily browse all your Azure resources and automatically connect to them, without going through manually setting up a connection, saving you a lot of time.

Browse Azure resources with Get Data | Microsoft Fabric Blog | Microsoft Fabric

By default, any user in Fabric can share their connections if they have the following user role on the connection:

  • Connection owner or admin
  • Connection user with sharing

Sharing a connection in Fabric is sometimes needed for collaboration within the same workload or when sharing the workload with others. Connection sharing in Fabric makes this easy by providing a secure way to share connections with others for collaboration, but without exposing the secrets at any time. These connections can only be used within the Fabric environment.

If your organization does not allow connection sharing or wants to limit the sharing of connections, a tenant admin can restrict sharing as a tenant policy. The policy allows you to block sharing within the entire tenant.

When loading into a new table, the automatic settings are on by default. With the automatic settings, Dataflow Gen2 manages the mapping for you, which gives you the following behavior:

  • Update method replace: Data will be replaced at every dataflow refresh. Any data in the destination will be removed, and the data in the destination will be replaced with the output data of the dataflow.
  • Managed mapping: Mapping is managed for you. When you need to make changes to your data/query to add an additional column or change a data type, mapping is automatically adjusted when you republish your dataflow. You do not have to go into the data destination experience every time you make changes to your dataflow, allowing for easy schema changes when you republish the dataflow.
  • Drop and recreate table: To allow for these schema changes, on every dataflow refresh, the table will be dropped and recreated. Your dataflow refresh will fail if you have any relationships or measures added to your table.

Manual settings

By turning off the automatic settings, you get full control over how to load your data into the data destination. You can make any changes to the column mapping by changing the source type or excluding any column that you do not need in your data destination.

Cancel Dataflow Refresh

Canceling a dataflow refresh is useful when you want to stop a refresh during peak time, if a capacity is nearing its limits, or if refresh is taking longer than expected. Use the refresh cancellation feature to stop refreshing dataflows.

To cancel a dataflow refresh, select the Cancel refresh option found in the workspace list or lineage views for a dataflow with an in-progress refresh:

Once a dataflow refresh is canceled, the dataflow’s refresh history status is updated to reflect the cancellation status:

Certified connector updates  

We’re pleased to announce the following updates to certified connectors:  

  • Delta Sharing  

Data pipeline

Unity Catalog support in Azure Databricks activity

We are excited to announce that Unity Catalog support for Databricks Activity is now supported. With this update, you will now be able to configure your Unity Catalog Access Mode for added data security.

Find this update under Additional cluster settings. 

For more information about this activity, read https://aka.ms/AzureDatabricksActivity . 

Semantic Model Refresh activity

We are excited to announce the availability of the Semantic Model Refresh activity for data pipelines. With this new activity, you will be able to create connections to your Power BI semantic model datasets and refresh them.

To learn more about this activity, read https://aka.ms/SemanticModelRefreshActivity

A more intuitive user experience and more insightful performance tuning tips are now available in Data Factory data pipelines. These tips provide useful and accurate advice regarding staging, the degree of copy parallelism settings, and more, to optimize your pipeline performance.

On-Premises Connectivity with Fabric Pipeline Public Preview

On-premises connectivity for Fabric pipelines is now in public preview. This enhancement empowers users to effortlessly transfer data from their on-premises environments to Fabric OneLake, Fabric’s centralized data lake solution.

With this capability, users can harness high-performance data copying mechanisms to efficiently move their data to Fabric OneLake. Whether it’s critical business information, historical records, or sensitive data residing in on-premises systems, on-premises connectivity ensures seamless integration into Fabric’s centralized data lake infrastructure.

On-premises connectivity with Fabric pipeline enables faster and more reliable data transfers, significantly reducing the time and resources required for data migration and integration tasks. This efficiency not only streamlines data integration processes but also enhances the accessibility and availability of on-premises data within the Fabric ecosystem.

Data Activator

New expressions: “changes by”, “increases by”, and “decreases by”

When setting conditions on a trigger, we’ve added a feature that allows you to detect when there’s been a change in your data by absolute number or percentage.

You can also specify whether the condition should be in comparison to the last measurement or from a specified point in time, which is denoted as “from time ago”. “From last measurement” computes the difference between two consecutive measurements, regardless of the amount of time that elapsed between the two measurements.

Meanwhile, “from time ago” compares your data to a previous point in time that you have specified. For example, you can monitor your refrigerator temperature and see if the temperature has changed from 32 degrees five minutes ago. If the temperature has changed after five minutes, the trigger will send an alert. However, if the temperature spiked within those five minutes and then fell back to 32 degrees, the trigger will not send an alert.
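
To make the difference between the two modes concrete, here is a small, purely illustrative Python sketch (not Data Activator code) that evaluates a “changes by at least 2 degrees” condition over a few hypothetical timestamped readings, once per mode:

# Illustrative only: "from last measurement" vs. "from time ago" comparisons.
from datetime import datetime, timedelta

readings = [  # (timestamp, temperature) pairs, hypothetical sensor data
    (datetime(2024, 3, 1, 12, 0), 32.0),
    (datetime(2024, 3, 1, 12, 2), 35.0),   # spike
    (datetime(2024, 3, 1, 12, 5), 32.0),   # back to the original value
]
threshold = 2.0

# "From last measurement": compare each reading to the one immediately before it.
for (_, prev), (ts, cur) in zip(readings, readings[1:]):
    if abs(cur - prev) >= threshold:
        print(f"{ts}: alert (changed by {abs(cur - prev)} since the last measurement)")

# "From time ago": compare each reading to the latest value at least 5 minutes earlier.
window = timedelta(minutes=5)
for ts, cur in readings:
    earlier = [value for t, value in readings if t <= ts - window]
    if earlier and abs(cur - earlier[-1]) >= threshold:
        print(f"{ts}: alert (changed by {abs(cur - earlier[-1])} from 5 minutes ago)")

With these readings, the first loop alerts twice (the spike and the drop back), while the second loop never alerts, because the value five minutes earlier is unchanged, mirroring the refrigerator example above.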

When setting conditions on a trigger, we’ve added a feature that allows you to detect when new data does or doesn’t arrive on a specified column.

To use “New data arrives”, you simply specify the column you want to monitor in the “Select” card. In the “Detect” card, specify that you want to monitor “New data arrives”. Your trigger will now send an alert every time new data comes in. Note, even if new data comes in and the “value” of that data is the same, you will be sent an alert. Also, keep in mind that null values will not cause an alert.

For example, suppose you want to be sent an alert every time there’s new data on a truck’s location. If the system gets data that says the truck is in Redmond, an alert will be sent. Next, if the system gets data that says the truck is in Bellevue, an alert will be sent. Then if the system gets more data that says the truck is in Bellevue, an alert will be sent.

To use “No new data arrives”, in the “Detect” card, you need to specify the duration over which the trigger monitors. Duration is the maximum time that you want the trigger to monitor if new data has come in. If new data has not come in, an alert will be sent.

For example, suppose you have a temperature sensor that sends data every second. You want to be alerted if the sensor stops sending data for more than 10 seconds. You can set up the “No new data arrives” condition with duration = 10. If the sensor keeps sending data, you will not get any alert.

Microsoft Fabric is now HIPAA compliant. We are excited to announce that Microsoft Fabric, our all-in-one analytics solution for enterprises, has achieved new certifications for HIPAA and ISO 27017, ISO 27018, ISO 27001, ISO 27701. These certifications demonstrate our commitment to providing the highest level of security and privacy for our customers’ data.  Read the full announcement .


AI study guide: The no-cost tools from Microsoft to jump start your generative AI journey

By Natalie Mickey Product Marketing Manager, Data and AI Skilling, Azure

Posted on April 15, 2024

The world of AI is constantly changing. Every day it seems there are new ways we can work with generative AI and large language models. It can be hard to know where to start your own learning journey when it comes to AI. Microsoft has put together several resources to help you get started. Whether you are ready to build your own copilot or you’re at the very beginning of your learning journey, read on to find the best and free resources from Microsoft on generative AI training.


Azure AI fundamentals

If you’re just starting out in the world of AI, I highly recommend Microsoft’s Azure AI Fundamentals course. It includes hands-on exercises, covers Azure AI Services, and dives into the world of generative AI. You can either take the full course in one sitting or break it up and complete a few modules a day.

Learning path: Azure AI fundamentals

Course highlight: Fundamentals of generative AI module

Azure AI engineer

For those who are more advanced in AI knowledge, or are perhaps software engineers, this learning path is for you. This path will guide you through building AI infused applications that leverage Azure AI Services, Azure AI Search, and Open AI.

Course highlight: Get started with Azure OpenAI Service module

Let’s get building with Azure AI Studio

Imagine a collaborative workshop where you can build AI apps, test pre-trained models, and deploy your creations to the cloud, all without getting lost in mountains of code. In our newest learning path , you will learn how to build generative AI applications like custom copilots that use language models to provide value to your users.

Learning path: Create custom copilots with Azure AI Studio (preview)

Course highlight: Build a RAG-based copilot solution with your own data using Azure AI Studio (preview) module

Dive deep into generative AI with Azure OpenAI Service

If you have some familiarity with Azure and experience programming with C# or Python, you can dive right into the Microsoft comprehensive generative AI training.

Learning path: Develop generative AI solutions with Azure OpenAI Service

Course highlight: Implement Retrieval Augmented Generation (RAG) with Azure OpenAI Service module

Cloud Skills Challenges

Microsoft Azure’s Cloud Skills Challenges are free and interactive events that provide access to our tailored skilling resources for specific solution areas. Each 30-day accelerated learning experience helps users get trained in Microsoft AI. The program offers learning modules, virtual training days, and even a virtual leaderboard to compete head-to-head with your peers in the industry. Learn more about Cloud Skills Challenges here , then check out these challenges to put your AI skills to the test.

Challenges 1-3 will help you prepare for Microsoft AI Applied Skills, scenario-based credentials. Challenges 4 and 5 will help you prepare for Microsoft Azure AI Certifications, with the potential of a 50% exam discount on your certification of choice 1 .

Challenge #1: Generative AI with Azure OpenAI

In about 18 hours, you’ll learn how to train models to generate original content based on natural language input. You should already have familiarity with Azure and experience programming with C# or Python. Begin now!

Challenge #2: Azure AI Language

Build a natural language processing solution with Azure AI Language. In about 20 hours, you’ll learn how to use language models to interpret the semantic meaning of written or spoken language. You should already have familiarity with the Azure portal and experience programming with C# or Python. Begin now!

Challenge #3: Azure AI Document Intelligence

Show off your smarts with Azure AI Document Intelligence Solutions. In about 21 hours, you’ll learn how to use natural language processing (NLP) solutions to interpret the meaning of written or spoken language. You should already have familiarity with the Azure portal and C# or Python programming. Begin now!

Challenge #4: Azure AI Fundamentals

Build a robust understanding of machine learning and AI principles, covering computer vision, natural language processing, and conversational AI. Tailored for both technical and non-technical backgrounds, this learning adventure guides you through creating no-code predictive models, delving into conversational AI, and more—all in just about 10 hours.

Complete the challenge within 30 days and you’ll be eligible for 50% off the cost of a Microsoft Certification exam. Earning your Azure AI Fundamentals certification can supply the foundation you need to build your career and demonstrate your knowledge of common AI and machine learning workloads—and what Azure services can solve for them. Begin now!

Challenge #5: Azure AI Engineer

Go beyond theory to build the future. This challenge equips you with practical skills for managing and leveraging Microsoft Azure’s Cognitive Services. Learn everything from secure resource provisioning to real-time performance monitoring. You’ll be crafting cutting-edge AI solutions in no time, all while preparing for Exam AI-102 and your Azure AI Engineer Associate certification . Dive into interactive tutorials, hands-on labs, and real-world scenarios. Complete the challenge within 30 days and you’ll be eligible for 50% off the cost of a Microsoft Certification exam 2 . Begin now!

Finally, our free Microsoft AI Virtual Training Days are a great way to immerse yourself in one- or two-day training sessions. We have three great options for Azure AI training:

  • Azure AI Fundamentals
  • Generative AI Fundamentals
  • Building Generative Apps with Azure OpenAI Service

Start your AI learning today

For any and all AI-related learning opportunities, check out the Microsoft Learn AI Hub including tailored AI training guidance . You can also follow our Azure AI and Machine Learning Tech Community Blogs for monthly study guides .

  • Microsoft Cloud Skills Challenge | 30 Days to Learn It – Official Rules
  • https://developer.microsoft.com/en-us/offers/30-days-to-learn-it/official-rules#terms-and-conditions




  23. Critical Patches Issued for Microsoft Products, April 09, 2024

    DATE (S) ISSUED: 04/09/2024. OVERVIEW: Multiple vulnerabilities have been discovered in Microsoft products, the most severe of which could allow for remote code execution in the context of the logged on user. Depending on the privileges associated with the user, an attacker could then install programs; view, change, or delete data; or create ...

  24. Unlock AI Collaboration at Microsoft BUILD 2024 with Semantic Kernel

    Microsoft BUILD 2024, happening from May 21 - 23rd, is poised to be a groundbreaking event, especially for our community working at the intersection of AI and application development. I'm thrilled to announce our Semantic Kernel session - Bridge the chasm between your ML and app devs with Semantic Kernel. Developing cutting-edge AI ...

  25. Lesson Learned #458: High Login Impact on Azure SQL Database Worker

    Lesson Learned #458: High Login Impact on Azure SQL Database Worker Utilization: A Case Study. Back to Blog; Newer Article; Older Article; Lesson Learned #458: High Login Impact on Azure SQL Database Worker Utilization: A Case Study. ... We are using Microsoft ODBC Driver 18 for SQL Server with connection pooling enable.

  26. Microsoft Fabric March 2024 Update

    Microsoft Fabric March 2024 Update. Welcome to the March 2024 update. We have a lot of great features this month including OneLake File Explorer, Autotune Query Tuning, Test Framework for Power Query SDK in VS Code, and many more! Earn a free Microsoft Fabric certification exam! We are thrilled to announce the general availability of Exam DP ...

  27. Microsoft's April 2024 Patch Tuesday includes two actively exploited

    The April 2024 Patch Tuesday update includes patches for 149 Microsoft vulnerabilities and republishes 6 non-Microsoft CVEs. Three of those 149 vulnerabilities are listed as critical, and one is listed as actively exploited by Microsoft. Another vulnerability is claimed to be a zero-day by researchers that have found it to be used in the wild.

  28. AI study guide: The no-cost tools from Microsoft to jump start your

    Build your business case for the cloud with key financial and technical guidance from Azure. ... Securely migrate Windows Server and SQL Server to Microsoft Azure. Back Data and analytics. Back ... Share AI study guide: The no-cost tools from Microsoft to jump start your generative AI journey on Facebook Share AI study guide: ...