SQL Server Performance Troubleshooting Free Scripts and Tools List

Back in the day, I used to collect a lot of different scripts, tools, and other goodies for troubleshooting SQL Server performance issues. These days, however, I tend to use what is publicly and freely available (as well as some internal stuff), and I keep a list of those in my head.

I’ve been meaning to write that list down for a while, and today Chrissy asked.

So here it is…

Disclaimer: While I do work as a Premier Field Engineer for Microsoft, this is my list; this is not an official list from my employer. Others likely have a list that differs from mine.

Free scripts and tools from Microsoft
These are scripts and tools provided for free by Microsoft that I use. Some of them come “as-is” without any official support.

  • BPCheck – This is by far my favorite script and one I use all the time. I like to joke that there is one good thing and one bad thing about this script. The good thing is that it gives me a lot of information. The bad thing is that it gives me a lot of information. It can be quite overwhelming the first time you run it, but once you have used it a few times, it is really useful.
  • Pssdiag/Sqldiag Manager – This tool creates a customized pssdiag utility that uses sqldiag to collect diagnostics data, such as profiler traces, extended events, perfmon counters, and DMV information. I use this all the time for data collection when troubleshooting performance problems. There is also a Linux version available (see this blog post for how to use it).
  • SQL Nexus – Loads and analyzes performance data collected by Pssdiag. It loads the data into a database and does some reporting on top of that. Very useful.
  • PAL – The Performance Analysis of Logs (PAL) tool reads a perfmon counter log and analyzes it using different thresholds. It generates an HTML-based report (or XML, if you prefer), which shows which counters to look out for. It has templates for SQL Server workloads, but works with a lot of other workloads too.
  • RML – The Replay Markup Language (RML) utilities can be used to analyze profiler traces or replay the trace file against another instance of SQL Server. For how to use it, read this blog post.
  • WinDbg – Windows Debugger. Useful for debugging a memory dump.
  • Tigertoolbox – This is the repository for the SQL Server Tiger Team and contains a lot of good stuff. Maybe you need some SQL Performance Baseline reports, want to fix your VLFs, or want to view the waits and latches. It is also the home of BPCheck – go check it out!
  • diskspd – Storage load generator / performance test tool. It replaces SQLIO. I use this if I want to benchmark the IO subsystem of a new machine / VM (see the example command after this list). There are a couple of good blog posts about how to use diskspd here and here.
  • SQL Server Management Studio – SSMS is now a standalone application (it used to ship with the SQL Server installation), and got quite a few improvements in the newer versions.
  • sysinternals – Tools for advanced troubleshooting on Windows, which I use from time to time.
  • SQL Server Data Tools – Not really a tool that I use for performance troubleshooting, but it is in my toolbox for when I have to look at an SSIS package.
  • Visual Studio Code – My go-to editor for… pretty much everything.
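To give an idea of what a diskspd run looks like, here is a hedged starting point (the file path, test file size, and workload mix are assumptions to adjust for your own environment, not recommendations):

diskspd -b8K -d60 -o8 -t4 -r -w30 -Sh -L -c10G D:\iotest.dat

This creates a 10 GB test file and runs 60 seconds of random 8 KB I/O with 30% writes across 4 threads with 8 outstanding I/Os each, disables software caching and hardware write caching (-Sh), and captures latency statistics (-L).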

Free scripts, tools, and resources from the community
I also use a lot of stuff that is made available by the awesome SQL Server community.

  • SQL Server Diagnostic Information Queries – Before I discovered Glenn Berry’s DMV queries, I used to have my own script collection. Not anymore. Glenn updates his queries every month. While I personally prefer to use BPCheck to get the information from the DMVs, I find that Glenn’s scripts might be a bit better structured and easier to use.
  • sp_whoisactive – Think sp_who2… just much, much better. This script by Adam Machanic shows which statements are being executed right now, and can include the estimated execution plan, acquired locks, tempdb and resource allocation, and more (see the example after this list).
  • sp_helpindex rewrite – sp_helpindex that ships with SQL Server does not show newer functionality (such as included columns and filters), so Kimberly Tripp rewrote it to include this information. Also, be sure to read her blog post on indexing strategies.
  • Ola Hallengren’s SQL Server Maintenance Solution – Ola’s scripts are great and really easy to use, and make life easier for a lot of DBAs around the world. The index and statistics maintenance part of the solution is an easy way to reduce index fragmentation and make sure your statistics are up to date.
  • Plan Explorer – While SSMS can show the execution plan just fine, Plan Explorer makes it just a little easier to read.
  • Statistics Parser – Parses the output of SET STATISTICS IO and SET STATISTICS TIME and makes it a lot easier to read and analyze (see the snippet after this list).
  • Poor SQL – Ever gotten a SQL query that is just really poorly formatted? Or even all on one line? This site will help you format it into something readable.
  • dbatools – I don’t use these PowerShell modules for performance troubleshooting, but they are useful for checking best practices and other DBA tasks.
  • SQL Server Version List – I use this up-to-date list of SQL Server versions to check if any service packs or cumulative updates are missing.
  • SQL Server Wait Types Library and Latches Classes Library – While I do remember the most common wait types and latch classes that I see all the time, I sometimes run into one that I haven’t seen before or just can’t remember what it is. Paul Randal’s libraries are really useful resources for looking those up.
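As an example of the sp_whoisactive call I tend to reach for first, here is a minimal sketch (@get_plans and @get_locks are two of the script’s many optional parameters; both are off by default):

EXEC sp_WhoIsActive @get_plans = 1, @get_locks = 1;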
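And for Statistics Parser, the input is simply the Messages output produced after turning on the session settings:

SET STATISTICS IO ON;
SET STATISTICS TIME ON;
-- run the query you want to analyze, then paste the Messages tab output into Statistics Parser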

That’s it! If you have a free tool or script that you use (or wrote yourself), feel free to leave a comment – would love to check it out!


Data movement when partitioning existing table in SQL Server

The other day, Jørgen Guldmann (@jorgenguldmann) asked whether data would move if we partitioned an existing table. The table was quite large, billions of rows, so moving data could take quite a bit of time. He wanted all the existing data in one partition, with new data loaded in the future placed in new partitions. I have seen this with a number of Data Warehouses, where the partitioning strategy for the fact tables was designed and applied after the database had been put into production and had been running for a while. So I thought I would test it out to make sure I could provide a good answer.

In one of my previous blog posts, I wrote about when a partition split moves data. So the question is: would we see the same behavior, with LOP_INSERT_ROWS and LOP_DELETE_ROWS entries in the transaction log?

tl;dr: We don’t see data movement with LOP_INSERT_ROWS and LOP_DELETE_ROWS entries in the transaction log. However, the creation of the clustered index copies the data pages from the old structure (heap or existing clustered index) to the new structure – even if it is on the same filegroup. So yes, data will move, and it can have a significant performance impact.

In my test, I have a small-ish heap called Facts.Sales, with 1,000,000 rows, in a database called PartitionTest. I have then created a right range partition function SalesPF on the TransactionDate column and a partition scheme SalesPS, which uses the same filegroup as the heap. The partition function has its boundary value to the right of all existing values in TransactionDate, making sure that all existing data will end up in the first partition.
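A minimal sketch of that setup could look like the following (the date data type, the boundary value '2020-01-01', and the PRIMARY filegroup are assumptions for illustration; the boundary just needs to be to the right of all existing TransactionDate values):

-- All existing rows fall to the left of the boundary, i.e. into partition 1
CREATE PARTITION FUNCTION SalesPF (date)
AS RANGE RIGHT FOR VALUES ('2020-01-01');

-- Map all partitions to the same filegroup as the heap
CREATE PARTITION SCHEME SalesPS
AS PARTITION SalesPF ALL TO ([PRIMARY]);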

To partition the existing table, we need to create a clustered index on the heap. If the table already had a clustered index, we would instead recreate that index on the partition scheme using WITH (DROP_EXISTING = ON), as sketched below.
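For that hypothetical case, assuming an existing clustered index named CIX_FactsSales, the recreation could look like this (a sketch, not the test actually run below):

-- Rebuild the existing clustered index onto the partition scheme
CREATE UNIQUE CLUSTERED INDEX CIX_FactsSales
ON Facts.Sales(SalesID, TransactionDate)
WITH (DROP_EXISTING = ON)
ON SalesPS(TransactionDate);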

But before we create the clustered index and partition our table, let’s take a look at which pages the heap’s data is located on by using the undocumented DBCC IND command:

DBCC IND ('PartitionTest', 'Facts.Sales', 1);

This shows that data is placed on the data pages between page 8 and page 3131.

Now, let’s create the clustered index on the partition scheme SalesPS:

CREATE UNIQUE CLUSTERED INDEX CIX_FactsSales
ON Facts.Sales(SalesID, TransactionDate)
ON SalesPS(TransactionDate);

After this, DBCC IND shows us that the data has been moved to the data pages between page 24,600 and page 27,599.

So, data has been moved.

If we take a look at the Books Online article for index disk space, it states that:

Whenever an index is created, rebuilt, or dropped, disk space for both the old (source) and new (target) structures is required in their appropriate files and filegroups. The old structure is not deallocated until the index creation transaction commits. Additional temporary disk space for sorting operations may also be needed.

This aligns with what we saw in the above test.

If we take a look at the transaction log using fn_dblog for the entries related to the transaction for the clustered index creation, we only get 3,123 entries. These entries are related to metadata operations and extent allocation. This is a very small footprint on the transaction log compared to when data is moved due to a partition split. I reran the test with data placed in several partitions, and the result was the same. That being said, large-scale index operations can generate large data loads that can cause the transaction log to fill quickly.
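Here is a sketch of how those log records can be inspected with the undocumented fn_dblog function, run in the same database right after the index creation (the grouping is just one convenient way to summarize the entries):

-- Summarize log records in the active portion of the log by operation
SELECT Operation, Context, COUNT(*) AS LogRecords
FROM fn_dblog(NULL, NULL)
GROUP BY Operation, Context
ORDER BY LogRecords DESC;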

In conclusion, when designing Data Warehouses and large fact tables, it is a good idea to design the partitioning strategy up front rather than applying it after the database is already in production and the fact table contains a large amount of data. If applied afterwards, partitioning an already existing table will cause data movement when creating the clustered index, which can have a significant performance impact.


Partitioning on Microsoft SQL Server

Here are a few links to articles, white papers, and best practices on partitioning on Microsoft SQL Server. I have these at the end of my slide deck on layered partitioning, which uses a combination of partitioned tables and a partitioned view, and thought that I would put them up here as well.