Data movement when partitioning existing table in SQL Server

The other day, Jørgen Guldmann (@jorgenguldmann) asked whether data would move if we partitioned an existing table. The table was quite large, billions of rows, so moving data could take quite a bit of time. He wanted all the existing data in one partition, and new data loaded in the future to be placed in new partitions. I have seen this with a number of Data Warehouses, where the partitioning strategy for the fact tables was designed and applied after the database had been put into production and had been running for a while. So I thought I would test it out to make sure I could provide a good answer.

In one of my previous blog posts, I wrote about when a partition split moves data. So the question is: would we see the same behavior, with LOB_INSERT_ROWS and LOB_DELETE_ROWS entries in the transaction log?

tl;dr: We don’t see data movement with LOB_INSERT_ROWS and LOB_DELETE_ROWS entries in the transaction log. However, the creation of the clustered index copies the data pages from the old structure (heap or existing clustered index) to the new structure, even if it is on the same filegroup. So yes, data will move, and it can have a significant performance impact.

In my test, I have a small-ish heap called Facts.Sales, with 1,000,000 rows, in a database called PartitionTest. I have then created a right range partition function SalesPF on the TransactionDate column and a partition scheme SalesPS, which uses the same filegroup as the heap. The partition function has its boundary value to the right of the existing values in TransactionDate, making sure that all existing data will end up in the first partition.
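For reference, a minimal sketch of what such a partition function and scheme could look like is shown below. The DATE data type, the boundary value, and the PRIMARY filegroup are assumptions for illustration, not taken from the actual test:

-- RANGE RIGHT: the boundary value belongs to the partition on its right.
-- Any boundary value above the highest existing TransactionDate keeps all
-- current rows in the first partition.
CREATE PARTITION FUNCTION SalesPF (DATE)
AS RANGE RIGHT FOR VALUES ('2015-01-01');

-- Map all partitions to the same filegroup as the heap (assumed here to be PRIMARY).
CREATE PARTITION SCHEME SalesPS
AS PARTITION SalesPF ALL TO ([PRIMARY]);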

To partition the existing table, we need to create a clustered index on the heap, on the partition scheme. If the table already had a clustered index, we would need to rebuild it on the partition scheme using WITH (DROP_EXISTING = ON).
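In that case, a sketch of the rebuild could look like this, assuming the index has the same name and key as the one created further down:

CREATE UNIQUE CLUSTERED INDEX CIX_FactsSales
ON Facts.Sales(SalesID, TransactionDate)
WITH (DROP_EXISTING = ON)
ON SalesPS(TransactionDate);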

But before we create the clustered index, and partition our table, let’s take a look at the pages the data is located on in our heap by using DBCC IND:

DBCC IND ('PartitionTest', 'Facts.Sales', 1);

This shows that the data is placed on data pages between page 8 and page 3,131.

Now, let’s create the clustered index on the partition scheme SalesPS:

CREATE UNIQUE CLUSTERED INDEX CIX_FactsSales
ON Facts.Sales(SalesID, TransactionDate)
ON SalesPS(TransactionDate);

After this, DBCC IND shows us that the data has been moved to the data pages between page 24,600 and page 27,599.

So, data has been moved.
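A quick way to confirm that all rows really ended up in the first partition is to check the row counts per partition, for example:

-- Row count per partition of the new clustered index (index_id = 1)
SELECT p.partition_number, p.rows
FROM sys.partitions AS p
WHERE p.object_id = OBJECT_ID('Facts.Sales')
  AND p.index_id = 1
ORDER BY p.partition_number;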

If we take a look at the Books Online article on index disk space, it states:

Whenever an index is created, rebuilt, or dropped, disk space for both the old (source) and new (target) structures is required in their appropriate files and filegroups. The old structure is not deallocated until the index creation transaction commits. Additional temporary disk space for sorting operations may also be needed.

This aligns with what we saw in the above test.

If we take a look at the transaction log using fn_dblog for the entries related to the transaction for the clustered index creation, we only get 3,123 entries. These entries are related to metadata operations and extent allocation. This is a very small footprint on the transaction log compared to when data is moved due to a partition split. I reran the test with the data placed in several partitions, and the result was the same. That being said, large-scale index operations can generate large data loads that can cause the transaction log to fill quickly.
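For reference, the log records can be inspected with a query along these lines (a sketch, not the exact query used in the test):

-- Group and count the log records returned by fn_dblog. Adding a filter on the
-- [Transaction ID] of the index creation (found via its LOP_BEGIN_XACT record)
-- narrows the result down to that single transaction.
SELECT Operation, Context, COUNT(*) AS LogRecordCount
FROM fn_dblog(NULL, NULL)
GROUP BY Operation, Context
ORDER BY LogRecordCount DESC;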

In conclusion, when designing Data Warehouses and large fact tables, it is a good idea to design the partitioning strategy up front rather than apply it after the database is already in production and the fact table contains a large amount of data. If applied afterwards, partitioning an existing table will cause data movement when the clustered index is created, which can have a significant performance impact.


Integration Services (SSIS) for the DBA

Below you will find slides, demos, and references for my PASS Summit 2014 talk on “Integration Services (SSIS) for the DBA”.

Slides

Demos

Demos can be found on GitHub.

Further reading / references


Bulk load: methods for better data warehouse load performance

Slides and demo scripts are now available for my talk at SQLSaturday #337 Oregon today.

Abstract

When loading a data warehouse you want the data inserted into the tables as fast as possible. You know you have to use bulk loading, but what do you need to do to ensure a minimal footprint on the transaction log? In this session, you will learn about minimal logged operations. You will also learn about the different methods for bulk loading data into your data warehouse; using SSIS, BCP and T-SQL.
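As a small taste of the T-SQL part, a minimally logged load can be as simple as a BULK INSERT with a table lock. The table name and file path below are made up for the example, and minimal logging also requires the database to be in the SIMPLE or BULK_LOGGED recovery model and the target to be a heap or an empty B-tree:

BULK INSERT dbo.FactSalesExtract
FROM 'D:\Staging\sales.csv'
WITH (
    TABLOCK,                 -- table lock is one of the prerequisites for minimal logging
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n'
);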

Slides

Demos

Demos can be found on GitHub.


Regional and language settings for Oracle client used with SQL Server linked server

Today, I ran into a problem while migrating a Data Warehouse solution to another server. The solution has linked servers for extracting data from an Oracle source system.

The query would go something like this:

INSERT INTO dbo.MyTable
SELECT *
FROM OPENQUERY(UNDPFSP1ICC, 'SELECT
	TO_CHAR(COL1_DT, ''YYYY'') AS Col1Year
	, COL2
	, COL3
	, SUM(AMOUNT) AS Amount
	, SYSDATE
FROM MYTABLE
GROUP BY
	TO_CHAR(COL1_DT, ''YYYY'')
	, COL2
	, COL3
')

Everything had been migrated, but while running the extract job, which ran queries like the above, we would get errors like these:

Conversion failed when converting the nvarchar value '123456,78' to data type int.

Error converting data type nvarchar to numeric.

The problem was that instead of using a dot (.) as the decimal symbol, a comma (,) was being returned. This would make the data convert to nvarchar, and an implicit conversion to a NUMERIC or INT would fail.
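The behavior is easy to reproduce in plain T-SQL, since CAST and CONVERT only accept a dot as the decimal separator for numeric types:

SELECT CAST(N'123456.78' AS NUMERIC(18, 2));  -- works
SELECT CAST(N'123456,78' AS NUMERIC(18, 2));  -- Error converting data type nvarchar to numeric.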

Everything had been running fine on the old server. The collations of the server and databases were identical (Latin1_General_CI_AS), but the regional and language settings were different: on the old server they were set to English (United States), on the new server to Danish. Changing the settings on the new server to English (United States) did not help.

After troubleshooting a bit on #sqlhelp with Amit Banerjee (twitter | blog) and William Durkin (twitter | blog), as well as via Skype with Jørgen Guldmann (twitter | blog), Jørgen ended up pointing me in the right direction.

The problem was that the NLS_LANG parameter was set to Danish (which uses a comma (,) as the decimal symbol). It is stored in the registry under the following subkey:

HKEY_LOCAL_MACHINE\SOFTWARE\ORACLE\HOMEID\NLS_LANG

Where ID is the unique number identifying the Oracle home.

The trick was to change this subkey from:

DANISH_DENMARK.WE8MSWIN1252

To:

AMERICAN_AMERICA.WE8MSWIN1252

And then everything worked fine, and the decimal symbol was now a dot (.) instead of a comma (,).
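A way to double-check what the Oracle session behind the linked server actually ends up using is to query NLS_SESSION_PARAMETERS through the same linked server; a sketch, using the linked server name from the query above:

SELECT *
FROM OPENQUERY(UNDPFSP1ICC, 'SELECT PARAMETER, VALUE
FROM NLS_SESSION_PARAMETERS
WHERE PARAMETER IN (''NLS_LANGUAGE'', ''NLS_TERRITORY'', ''NLS_NUMERIC_CHARACTERS'')');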


Create Data Warehouse extract tables from SQL Server source system

I have often found that I needed a quick script for giving me CREATE TABLE statements for each table in a source system database, so I could create these tables in my extract schema in my Data Warehouse. This can obviously be done using SQL Server Management Studio, but I found that I would then manually have to edit each statement to remove keys, indexes, identity columns, and everything else that I usually do not want in my extract tables.

Not having been able to find one, I wrote my own, and thought I’d share it here. The following script writes a CREATE TABLE statement for each table in a SQL Server source database, so the statements can be used for creating extract tables in a data warehouse.

DECLARE @ExtractSchema NVARCHAR(100) = 'Extract';

SELECT 'CREATE TABLE [' + @ExtractSchema + '].['
      + SCHEMA_NAME(t.schema_id)
      + '_' + t.name + '] (' + LEFT(cols.column_list, LEN(cols.column_list) - 1) + ')'
FROM sys.tables t
CROSS APPLY (SELECT '[' + c.name + '] ' + UPPER(ty.name)
                    + CASE WHEN ty.name IN ('decimal', 'numeric') THEN
                              '(' + CONVERT(NVARCHAR(10), c.precision)
                              + ', '
                              + CONVERT(NVARCHAR(10), c.scale)
                              + ')'
                      WHEN ty.name = 'float' THEN
                         '(' + CONVERT(NVARCHAR(10), c.precision) + ')'
                      WHEN ty.name IN ('datetime2', 'datetimeoffset', 'time') THEN
                         '(' + CONVERT(NVARCHAR(10), c.scale) + ')'
                      WHEN ty.name IN ('char', 'nchar', 'varchar', 'nvarchar') THEN
                         -- max_length is in bytes, so halve it for nchar/nvarchar
                         '(' + CASE WHEN c.max_length = -1
                                   THEN 'MAX'
                                   ELSE CAST(c.max_length / CASE WHEN ty.name IN ('nchar', 'nvarchar')
                                                                 THEN 2 ELSE 1 END AS NVARCHAR(10))
                              END + ')'
                      ELSE ''
                      END
                    + CASE WHEN c.is_nullable = 0 THEN ' NOT'
                      ELSE ''
                      END
                    + ' NULL, '
               FROM sys.columns c
               INNER JOIN sys.types ty
               ON c.user_type_id = ty.user_type_id
               WHERE t.object_id = c.object_id
               ORDER BY c.column_id
               FOR XML PATH('')
      ) cols (column_list);