The other day, Jørgen Guldmann (@jorgenguldmann) asked whether data would move if we partitioned an existing table. The table was quite large, billions of rows, so moving data could take quite a bit of time. He wanted all the existing data in one partition, and new data loaded in the future to be placed in new partitions. I have seen this with a number of Data Warehouses, where the partitioning strategy for the fact tables was designed and applied after the database had been put into production and had been running for a while. So I thought I would test it out to make sure I could provide a good answer.
In one of my previous blog posts, I wrote about when a partition split moves data. So the question is, would we see the same behavior, with LOB_DELETE_ROWS entries in the transaction log?
tl;dr: We don’t see data movement in the form of LOB_DELETE_ROWS entries in the transaction log. However, the creation of the clustered index copies the data pages from the old structure (heap or existing clustered index) to the new structure, even if it is on the same filegroup. So yes, data will move, and it can have a significant performance impact.
In my test, I have a small-ish heap called Facts.Sales, with 1,000,000 rows, in a database called PartitionTest. I have then created a right range partition function SalesPF on the TransactionDate column and a partition scheme SalesPS, which uses the same filegroup as the heap. The partition function has its boundary value to the right of the existing values in TransactionDate, making sure that all existing data will end up in the first partition.
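The setup described above could look something like the following sketch. The boundary value '2015-01-01' and the TransactionDate data type are assumptions for illustration; pick a boundary to the right of the newest TransactionDate in your table.

```sql
-- Right range partition function with a single boundary value.
-- '2015-01-01' is a placeholder; it must lie to the right of all
-- existing TransactionDate values so existing rows land in partition 1.
CREATE PARTITION FUNCTION SalesPF (date)
AS RANGE RIGHT FOR VALUES ('2015-01-01');
GO

-- Map all partitions to the same filegroup as the heap.
CREATE PARTITION SCHEME SalesPS
AS PARTITION SalesPF ALL TO ([PRIMARY]);
GO
```

With RANGE RIGHT, the boundary value itself belongs to the second partition, so everything strictly before it stays in the first.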
To partition the existing table, we need to create a clustered index on the heap, on the partition scheme. If the table already had a clustered index, we would need to re-create it on the partition scheme using WITH (DROP_EXISTING = ON).
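For the clustered-index case, the rebuild onto the partition scheme could look like this sketch (the index name and key columns are assumed to match the example table):

```sql
-- Re-create an existing clustered index onto the partition scheme.
-- DROP_EXISTING = ON drops and rebuilds the index in one operation.
CREATE UNIQUE CLUSTERED INDEX CIX_FactsSales
ON Facts.Sales (SalesID, TransactionDate)
WITH (DROP_EXISTING = ON)
ON SalesPS (TransactionDate);
```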
But before we create the clustered index and partition our table, let’s take a look at the pages the data is located on in our heap, using DBCC IND:
DBCC IND ('PartitionTest', 'Facts.Sales', 1);
This shows that data is placed on the data pages between page 8 and page 3131.
Now, let’s create the clustered index on the partition scheme:
CREATE UNIQUE CLUSTERED INDEX CIX_FactsSales
ON Facts.Sales (SalesID, TransactionDate)
ON SalesPS (TransactionDate);
DBCC IND shows us that the data has been moved to the data pages between page 24,600 and page 27,599.
So, data has been moved.
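As a sanity check (not part of the original test), we can confirm that all rows ended up in the first partition by querying the catalog:

```sql
-- Row counts per partition for the new clustered index (index_id = 1).
SELECT p.partition_number, p.rows
FROM sys.partitions AS p
WHERE p.object_id = OBJECT_ID('Facts.Sales')
  AND p.index_id = 1
ORDER BY p.partition_number;
```

With the boundary value placed to the right of all existing TransactionDate values, partition 1 should report all 1,000,000 rows and partition 2 should be empty.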
If we take a look at the Books Online article for index disk space, it states that:
Whenever an index is created, rebuilt, or dropped, disk space for both the old (source) and new (target) structures is required in their appropriate files and filegroups. The old structure is not deallocated until the index creation transaction commits. Additional temporary disk space for sorting operations may also be needed.
This aligns with what we saw in the above test.
If we take a look at the transaction log using fn_dblog for the entries related to the transaction for the clustered index creation, we only get 3,123 entries. These entries are related to metadata operations and extent allocation. This is a very small footprint on the transaction log compared to when data is moved due to a partition split. I reran the test with the data spread across several partitions, and the result was the same. That being said, large-scale index operations can generate large data loads that can cause the transaction log to fill quickly.
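One way to inspect those log entries (a sketch, not necessarily the exact query used in the test) is to summarize the log records by operation and context right after the index has been created:

```sql
-- Summarize log records in the active portion of the transaction log.
-- Run shortly after CREATE INDEX; on a busy system the output will
-- also include records from unrelated transactions.
SELECT Operation, Context, COUNT(*) AS LogRecords
FROM fn_dblog(NULL, NULL)
GROUP BY Operation, Context
ORDER BY LogRecords DESC;
```

For the page-copying index build, the bulk of the entries show up as allocation and metadata operations rather than per-row LOB_DELETE_ROWS records.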
In conclusion, when designing Data Warehouses and large fact tables, it is a good idea to design the partitioning strategy up front rather than apply it after the database is already in production and the fact table contains a large amount of data. If applied afterwards, partitioning an existing table will cause data movement when the clustered index is created, and that can have a significant performance impact.