SQL Server Index Fill Factor

Index Fill Factor

The FILLFACTOR option lets you reserve a percentage (0 to 100) of free space on the leaf-level pages of an index to reduce page splitting. This option is often overlooked and simply left at the default engine setting, without realizing that this can lead to serious performance issues.

The availability of free space in an index page can have a significant effect on the performance of index update operations. If an index record must be inserted and there is no free space, a new index page must be created and the contents of the old page split across the two pages. This can hurt performance if it happens too frequently. SQL Server offers two important options that give you control over the amount of free space maintained within an index: FILLFACTOR and PAD_INDEX.

Fill Factor

The fill factor specifies how full the leaf-level pages of an index should be. When an index is created or rebuilt, the leaf-level pages are filled up to the specified threshold and the remainder of each page is left empty for future use. This applies whenever a value other than 0 or 100 is specified. For example, with a fill factor of 60, the index pages are all written 60% full, leaving 40% of each page free for future inserts. By default, the fill factor is taken from the instance-level setting, as displayed below:
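As a minimal sketch (the table and index names here are illustrative, not from the post), a fill factor can be specified when the index is created:

```sql
-- Leaf pages are written 60% full, leaving 40% free for future inserts.
-- Table and index names are illustrative.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
ON dbo.Orders (CustomerID)
WITH (FILLFACTOR = 60);
```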

[Screenshot: 1_SQL_Server_Index_Fill_Factor]

The PAD_INDEX Option

The PAD_INDEX option enables you to specify whether or not the fill factor applied to the leaf pages also applies to the non-leaf pages. You can use the PAD_INDEX option only when FILLFACTOR is specified, because the PAD_INDEX percentage value is determined by the percentage value specified for FILLFACTOR.

By default, SQL Server always leaves enough room to accommodate at least one row of the maximum index size for each non-leaf-level page, regardless of the fill factor value.
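As a hedged sketch (names are illustrative), PAD_INDEX is simply combined with FILLFACTOR so that the intermediate-level pages get the same free space:

```sql
-- Apply the 60% fill factor to the non-leaf (intermediate) pages as well.
-- Table and index names are illustrative.
CREATE NONCLUSTERED INDEX IX_Orders_OrderDate
ON dbo.Orders (OrderDate)
WITH (FILLFACTOR = 60, PAD_INDEX = ON);
```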

The fill factor value that was last applied to an index is stored, along with other index information, in the sys.indexes catalog view (the older sysindexes compatibility view also exposes it). You can change the default fill factor value at the server level either by using the sp_configure system stored procedure or through SQL Server Management Studio.
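As a sketch, the stored fill factors and the server-wide default can be inspected and changed like this (the table name is illustrative):

```sql
-- Inspect the fill factor stored for each index (0 means the server default).
SELECT OBJECT_NAME(object_id) AS table_name,
       name AS index_name,
       fill_factor
FROM sys.indexes
WHERE object_id = OBJECT_ID('dbo.Orders');  -- illustrative table name

-- Change the server-wide default fill factor via sp_configure.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'fill factor', 90;
RECONFIGURE;
```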

Finding the ideal fill factor for your database is neither easy nor straightforward. There is no magic number; multiple factors need to be considered before changing it. A high fill factor packs more rows onto each data page, but there is a good chance that page splits will increase, especially in OLTP systems. This is undesirable, since fewer page splits means better performance. On the other hand, a low fill factor stores fewer rows per data page, which reduces page splits but requires more resources, such as IO, to read the same amount of data, since the data is spread across more pages.

Apart from setting a fill factor for the whole instance, we can also override this setting for an individual index while creating or rebuilding it (REORGANIZE does not accept a new fill factor; it compacts pages to the value already stored for the index).
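For example, a per-index override during a rebuild looks like this (a sketch with illustrative names):

```sql
-- Override the fill factor for one index only, during a rebuild.
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders
REBUILD WITH (FILLFACTOR = 80);
```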

Guidelines for Setting the FILLFACTOR Option

The fill factor value that you specify on a table depends on how often data is modified (INSERT and UPDATE statements) and your organization’s environment. Generally, you should:

• Use a low fill factor value for online transaction processing (OLTP) environments. This provides maximum room for growth in tables where rows are inserted frequently or index key values are frequently modified.

• Use a high fill factor value for online analytical processing (OLAP) environments.

Test Scenarios

Now let’s validate some facts:

  1. Using a fill factor other than 0 or 100 forces SQL Server to leave free space on the leaf-level pages for future use, so the index will always occupy more pages.
  2. With a low fill factor, SQL Server needs to traverse more pages to return the same result set.
  3. A low fill factor reduces fragmentation and page splitting, which cannot be avoided with a high fill factor on a heavily inserted table.

Scenario 1

I created two tables named “Table_FF_0” and “Table_FF_50” and inserted 100,000 records into each; after that I created one clustered index on each table. The clustered index on “Table_FF_0” was created with a fill factor of 100% and the one on “Table_FF_50” with a fill factor of 50%.

Now when you check the space used by each table, you will notice that the index size of the second table is almost double that of the first. This is because SQL Server kept half of every page empty to accommodate later inserts, and therefore had to reserve roughly twice as many pages to store the same number of records. (This proves the fill factor works.)
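The original scripts were not included in the post, but the setup can be sketched as follows (the column definitions are assumptions for illustration):

```sql
-- Two identical tables; only the clustered-index fill factor differs.
CREATE TABLE dbo.Table_FF_0  (id INT NOT NULL, filler CHAR(100) NULL);
CREATE TABLE dbo.Table_FF_50 (id INT NOT NULL, filler CHAR(100) NULL);

-- ...insert the same 100,000 rows into both tables here...

CREATE CLUSTERED INDEX CI_FF_0  ON dbo.Table_FF_0  (id) WITH (FILLFACTOR = 100);
CREATE CLUSTERED INDEX CI_FF_50 ON dbo.Table_FF_50 (id) WITH (FILLFACTOR = 50);

-- Compare the reserved and index sizes of the two tables.
EXEC sp_spaceused 'dbo.Table_FF_0';
EXEC sp_spaceused 'dbo.Table_FF_50';
```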

[Screenshot: 2_SQL_Server_Index_Fill_Factor]

Index Size of both the tables

Scenario 2 – Select Performance

Now let’s analyse the SELECT performance for both tables/indexes. The SET STATISTICS IO option is enabled, which returns the IO information needed to analyse the performance difference.

When you read the same data from both tables, SQL Server has to read more pages from the second one to return the same rowset. This can be proved with the help of STATISTICS IO: just enable it and execute a SELECT against both tables, and the IO information will be returned along with the output.

The screenshot below clearly shows that the SQL engine reads 45 pages from the first table, where the pages were 100% full, whereas it ends up reading 86 pages from the second table to get the same data, because those pages were half empty.
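The exact SELECT used in the post is not shown; as a sketch, any query that scans both indexes will expose the difference in logical reads:

```sql
-- Compare the logical reads reported for a scan of each table.
SET STATISTICS IO ON;

SELECT COUNT(*) FROM dbo.Table_FF_0;   -- pages 100% full: fewer reads
SELECT COUNT(*) FROM dbo.Table_FF_50;  -- pages 50% full: roughly double

SET STATISTICS IO OFF;
```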

[Screenshot: 3_SQL_Server_Index_Fill_Factor]

Scenario 3

Execute the DBCC SHOWCONTIG command against both tables and you can verify both of the above-mentioned points from the output:

  1. Pages Scanned will be almost double for the second table.
  2. Avg. Bytes Free per Page will be quite high for the second table.
  3. Avg. Page Density will be around 50%.
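A sketch of the check (DBCC SHOWCONTIG is deprecated on newer versions; sys.dm_db_index_physical_stats reports the equivalent figures):

```sql
-- Pages Scanned, Avg. Bytes Free per Page and Avg. Page Density
-- are all reported in the SHOWCONTIG output.
DBCC SHOWCONTIG ('dbo.Table_FF_0');
DBCC SHOWCONTIG ('dbo.Table_FF_50');
```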

[Screenshot: 4_SQL_Server_Index_Fill_Factor]

Scenario 4 – Page Splitting

You can also test whether Page Splitting happens with high Fill Factor or not.

The same two tables are used again: “Table_FF_0” with its clustered index built at a fill factor of 100% and “Table_FF_50” at 50%, each holding 100,000 records.

The trick here is to take one INT column in each table, insert only EVEN numbers, create the indexes on that column, and then insert ODD numbers. This forces SQL Server to split pages in the first index, because no room was left for the later insertions, as the screenshot below shows.
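A sketch of the insert pattern, assuming an INT key column named id (the actual script is not shown in the post):

```sql
-- The pages of Table_FF_0 are packed with even keys, so every odd key
-- inserted between them forces a page split; Table_FF_50 still has room.
DECLARE @i INT = 1;
WHILE @i < 200000
BEGIN
    INSERT INTO dbo.Table_FF_0  (id) VALUES (@i);
    INSERT INTO dbo.Table_FF_50 (id) VALUES (@i);
    SET @i = @i + 2;
END;

-- Compare fragmentation after the odd-key inserts.
DBCC SHOWCONTIG ('dbo.Table_FF_0');
DBCC SHOWCONTIG ('dbo.Table_FF_50');
```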

Here you can see that the Logical Fragmentation of the First Table has increased to 66%.

When you look at the Scan Density figures, you will notice that the Actual Count is many times greater than the Best Count because of the page splitting.

[Screenshot: 5_SQL_Server_Index_Fill_Factor]

Conclusion

So, in a nutshell, it is clear that a high fill factor will improve SELECTs but penalize DML due to more page splits, while a low fill factor will improve DML statements but penalize SELECT statements.

After going through this doc, DO NOT rush to your production server and modify the fill factor of your indexes. However, from the tests performed it is clear that by carefully monitoring the index fragmentation of heavily updated tables, the fill factor option can help reduce page splits and index fragmentation, thus speeding up DML statements. The best value is the one that strikes the right balance between SELECTs and DML operations.

 

Regards

Sarabpreet Anand


4 Comments on “SQL Server Index Fill Factor”

  1. Sarab,

    Great article.

    But an important point you are missing here is the size of the data you have. Page splitting happens only on indexes (the clustered index, to be precise) and not on the physical storage of the data.
    If I have an index on an INT datatype, whose size is 4 bytes, my index will use only a single page until the number of rows reaches 2015.

    Since each page is 8192 bytes, with 8060 bytes of actual user data, the number of rows on a single page in our situation will be 2015 (8060/4 = 2015 rows). So no matter how many updates you make to those 2015 rows, if your limit is 2015 rows, page splits won’t occur. So if you have defined a fill factor of 50, even though your number of rows is under 2015, you will unnecessarily have an extra page to hold the same number of records that a single page could have held.

    Now once you want to insert the 2016th row, that single page moves down a level and becomes a leaf page, and 2 more pages are allocated to the structure, one becoming the root page and the other becoming the second leaf page (i.e. a page split occurs).

    So now you will have 3 pages (one root page, two leaf pages) to hold 2015*2 = 4030 rows, plus 2 rows in the root page holding references to the two leaf pages.

    The logical structure will remain the same until the root page itself holds 2015 rows and the number of leaf pages is 2015. So until the number of rows reaches 2015 * 2015 = 4,060,225 there will be just a single root page and 2015 leaf pages.

    Now if you insert the 4,060,226th row, the root page becomes an intermediate page, a second intermediate page comes into the picture (causing a page split at the intermediate level), and a new root page is allocated. The structure then remains the same until (2015 rows in the 1st intermediate page * 2015 rows in the 2nd intermediate page) * 2015 rows per leaf page = 8,181,353,375 rows.

    The current root page continues to be the root page until the number of pages at the intermediate level exceeds 2015 and the number of rows at the leaf level exceeds 8,181,353,375.

    So in short, what I suggest is that the fill factor value should be set ONLY after a proper analysis, a thorough understanding of the B-Tree, and consideration of the kind of data you are going to have in the index.

    Also, one should not blindly follow a thumb rule of a fill factor value of 50 or 60 or 40 etc., or else you will unnecessarily end up with a lot of empty pages, wasted space, and a never-ending performance bottleneck.

  2. Thanks Sachin,

    But just to clarify a few things:

    “Page splitting causes only on indexes (clustered index to be precise) and not on the physical storage of the data”

    What exactly goes under the leaf node of the clustered index? It’s data only.

    Physical storage –> leaf node only; that’s part of the clustered index.

    “index will use only single page till the number of rows reach 2015”

    Where is the rest of the B-Tree structure? The root node and the intermediate nodes?

    “So no matter how many updates you have in those 2015 rows if your limit is 2015 rows page splits wont occur”

    I completely agree on this, and I made the same point. The only additional thing I was referring to is: when you create an index with fill factor 0, insert even numbers to fill the page completely, and later try to insert odd numbers, then, since the index was created in ascending order (the default), it is bound to keep all the data in ascending order in the leaf nodes. My page is already full with values like 2, 4, 6, 8, … n, so when you insert values 1, 3, 5, the index leaf-node page will split.

    “Now once you want to insert 2016th row that single page will move down the level and become a leaf page and 2 more pages will be allocated to the structure with one becoming the root page and other becoming the second leaf page (i.e a page split occurs)”

    Even if you have a single record, the SQL engine is bound to create the full B-Tree structure for your index.

    “The current root page continues to be root page until the no of pages at intermediate level does not exceed 2015 pages and no of rows at leaf levels is 8181353375 rows”

    No matter how many records you have in your index, the root page will remain the same; only the pointers and values will change.

    “Also one should not blindly follow a thumb rule of having a Fill factor value of 50 or 60 or 40 etc etc”

    I concluded with the same statement, so we both have no doubts about this.

    Hope I was able to clear all the doubts.

    Thanks

    Sarabpreet Singh

  3. “What exactly goes under the leaf node of the Clustered Index? It’s Data only.”

    Yes, it is. But the data is specific to the index, not to the underlying table.

    “Where is the rest of the BTree Structure? Root Node, Intermediate Node.”
    “Even if you have a single record SQL Engine is bound to create Full BTree structure for your Index.”

    That’s not necessary. There will be ONLY a single page assigned to the B-Tree until that particular page (call it the root page or the leaf page, it does not matter) gets full.

    “No matter how many records you have in your Index, Root page will remain the same, only the pointers & values will get changed.”
    That also is not true. SQL Server uses a dynamic page-allocation algorithm: it assigns pages (8 KB) or extents (64 KB) only when it finds there aren’t enough pages or extents for the data to be saved.

    Try running this. For a start, let’s insert just 5 records into the table:

    CREATE TABLE test (id INT IDENTITY);
    GO
    CREATE CLUSTERED INDEX test_index ON test (id);
    GO

    INSERT INTO test DEFAULT VALUES;
    WHILE SCOPE_IDENTITY() <= 5
        INSERT INTO test DEFAULT VALUES;

    SELECT avg_page_space_used_in_percent,
           avg_fragmentation_in_percent,
           index_level,
           record_count,
           page_count,
           fragment_count,
           avg_record_size_in_bytes,
           index_depth
    FROM sys.dm_db_index_physical_stats
         (DB_ID(N'sac'), OBJECT_ID(N'test'), NULL, NULL, 'DETAILED');
    GO

    DROP TABLE test;

    Have a look at the output of the above query (sys.dm_db_index_physical_stats) for "index_level" and "page_count". The index level will always be 0 (indicating the root page, or the leaf page; in this case it doesn’t matter) and the page_count will be 1 (since only a single page has been allocated to the B-Tree) until that page gets FULL.

    Now try inserting 2000 records into the table by replacing 5 with 2000 in the above query. You will see that the index has now allocated an extra leaf page: a count of 2 will be displayed in the "page_count" column and 0 in "index_level", indicating that there are now 2 pages at the leaf level and a single page at the root level.

    In this particular example, if you look at "avg_record_size_in_bytes" it displays 11 bytes being occupied, because the page stores a lot more than the actual data, for example the data offsets, page-header metadata, etc. You can get more details by using the DBCC PAGE command.
    So my example in the previous comment of using exactly 4 bytes may not hold true.
    If you use a SMALLINT datatype instead of INT in my example, you will see that a single page is still assigned even for a record count above 1000, as SMALLINT takes only 2 bytes of storage.

    Now, to get more information about the actual storage of data saved on disk, and NOT in an index, try using the DBCC IND command. In our case:

    DBCC IND('YourDatabaseName','Test',-1)

    You will see the difference.

