SQL Server: Guidelines for Effective Transaction Log Management

Transaction Log Management

The transaction log file is the key to keeping a database highly available in the event of a disaster. As database administrators, we need to ensure that transaction logs are managed efficiently for high availability and good throughput. Below is a summary of some of the best practices suggested for transaction logs.

  1. Create only ONE transaction log file per database. Although SQL Server allows the creation of multiple transaction log files, it does not stripe data across them. SQL Server writes to the log sequentially, so there is no benefit in having multiple log files.

Note: If your current architecture already has multiple log files, consider removing the extra log files after taking a log backup.

Please refer to [http://technet.microsoft.com/en-us/library/ms191433(v=sql.105).aspx] for more information on how to delete a transaction log file from a database.
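As a rough sketch (the database and file names SalesDB and SalesDB_log2 are hypothetical, so substitute your own), removing an extra log file looks like this. The file can only be dropped once it contains no active VLFs, so additional log backups may be needed first:

    USE master;
    GO
    -- Back up the log so inactive VLFs in the extra file can be cleared.
    BACKUP LOG SalesDB TO DISK = N'X:\Backups\SalesDB_log.trn';
    GO
    -- Drop the secondary log file (fails if it still holds active log records).
    ALTER DATABASE SalesDB REMOVE FILE SalesDB_log2;
    GO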

  2. If possible, try to dedicate a separate drive to the transaction log files of all user databases. The advantage of this arrangement is that the disk head stays positioned to write the next byte into the log file. If the transaction log files are placed on a disk that hosts other files, the disk head will be pulled away to serve those files. This performance gain is especially valuable for high-volume OLTP systems.

Note: If you have log files spread across disks, consider moving them to a dedicated disk, for example as sketched below.
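A minimal sketch of such a move (the names and paths are hypothetical): MODIFY FILE only updates SQL Server's catalog, so the physical file must be moved at the operating system level while the database is offline:

    -- Point the catalog at the new location (takes effect when the database comes back online).
    ALTER DATABASE SalesDB
        MODIFY FILE (NAME = SalesDB_log, FILENAME = N'L:\SQLLogs\SalesDB_log.ldf');
    ALTER DATABASE SalesDB SET OFFLINE WITH ROLLBACK IMMEDIATE;
    -- Move SalesDB_log.ldf to L:\SQLLogs\ at the OS level here.
    ALTER DATABASE SalesDB SET ONLINE;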

  3. External fragmentation is one of the reasons for poor transaction log throughput. The best way to get rid of this problem is to defragment the disk. However, this is recommended as a one-time operation, to be carried out during an agreed business downtime.

Note: This is a server-level operation typically carried out by server administrators. Please ensure you have the proper permissions in place before defragmenting the disk.

  4. Choosing the correct RAID configuration is one of the important decisions when building a new database system. For both performance and redundancy, RAID 1+0 or RAID 0+1 is highly recommended for transaction logs. The availability of a database is determined by its ability to recover in the event of a disaster. Point-in-time recovery is the way to achieve this, and it requires that the last transaction log taken before the crash survives the failure; a redundant RAID configuration protects that log. A sketch of a point-in-time restore follows.
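As an illustration of why the surviving log matters (all names, paths, and the STOPAT time below are hypothetical): if the data disk fails but the mirrored log disk survives, a tail-log backup still captures the final transactions, enabling a point-in-time restore:

    -- Capture the tail of the log even though the data files are damaged.
    BACKUP LOG SalesDB TO DISK = N'X:\Backups\SalesDB_tail.trn'
        WITH NO_TRUNCATE, NORECOVERY;

    -- Restore the last full backup, then roll forward to just before the failure.
    RESTORE DATABASE SalesDB FROM DISK = N'X:\Backups\SalesDB_full.bak'
        WITH NORECOVERY;
    RESTORE LOG SalesDB FROM DISK = N'X:\Backups\SalesDB_tail.trn'
        WITH STOPAT = '2014-01-15 10:30:00', RECOVERY;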
  5. Consider allocating an appropriate amount of space to the transaction log while creating the database. The initial size determines how much space is allocated to the log file at the time of creation. While there are many rules of thumb for the initial size, the factors below should be considered when choosing it (see the sketch after this list):
  • The type of activity the database will carry.
  • The frequency of DML statements against the database.
  • The recovery model of the database.
  • How frequently the transaction log is backed up.
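For illustration, a log sized up front at creation time might look like this (the database name, paths, and the sizes themselves are hypothetical; choose values based on the factors above):

    CREATE DATABASE SalesDB
    ON PRIMARY
        (NAME = SalesDB_data, FILENAME = N'D:\SQLData\SalesDB.mdf',
         SIZE = 10GB, FILEGROWTH = 1GB)
    LOG ON
        -- Pre-size the log so routine activity does not trigger auto-growth.
        (NAME = SalesDB_log, FILENAME = N'L:\SQLLogs\SalesDB_log.ldf',
         SIZE = 4GB, FILEGROWTH = 512MB);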
  6. Internal fragmentation is one of the major suspects for any underperforming transaction log. This fragmentation depends on the growth increment specified for the transaction log. The auto-growth / manual growth value should be large enough to absorb a healthy increase in database activity. Every transaction log file is internally broken into smaller units called Virtual Log Files (VLFs); the size and number of VLFs depend on the amount of space added to the transaction log. Having an excessive number of VLFs can negatively impact all transaction-log-related activities and degrades performance when transaction log backups occur. A query to check the VLF count follows the list below.
  • Always set all the values (initial size, max size, growth) for the transaction log based on the volume of transactions.
  • Always specify a fixed size rather than a percentage for auto-growth; a percentage increase is always extra overhead for SQL Server.
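To see whether a log already suffers from too many VLFs, the count can be checked as below (sys.dm_db_log_info is available from SQL Server 2016 SP2 onward; on older versions, DBCC LOGINFO returns one row per VLF instead):

    -- One row per VLF; a count in the thousands usually signals trouble.
    SELECT COUNT(*) AS vlf_count
    FROM sys.dm_db_log_info(DB_ID(N'SalesDB'));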


Regards

Rama Chandra Gowtham Peddada


6 Comments on “SQL Server: Guidelines for Effective Transaction Log Management”

  1. Nice article, Rama. Good luck with your future articles… Including block size and offset for disks would be one more point that could be added here…

  2. Nice article, Rama. Good luck with your future articles… Please also include disk block size and offset in this article.

  3. One very important clarification.

    The extent size of the log file is very important, as there will always be a number of Virtual Log Files (VLFs) in each extent (whether from the initial allocation or from file growth). The number of VLFs differs depending on the extent size. On recovery, every single VLF has to be inspected to see whether it contains needed REDO or UNDO information.

    To avoid external fragmentation, always keep the extent size uniform for all files on a disk.

    One example from a recovery scenario a few years ago: I had a log file of 150 GB in 1 MB extents.

    Restoring one single transaction log backup took 2-3 minutes, which could have been OK if it weren't for the fact that there were transaction log backups for every hour, and I had to restore 7 months of transaction log backups, as the client did not have a more recent full backup.

    And by the way, it makes sense to have multiple transaction log files if they reside on different physical disks.

  4. Thank you, Leif, for the suggestions.

    I will discuss VLFs and the factors that affect transaction log file performance in my next article, “Internal Fragmentation – Explained”.

  5. I disagree a bit on the single log file. If you really want to enhance performance in a highly transactional environment, then a single file will be the primary cause of your performance hit…

    So always create secondary log files depending on the DB size and its I/O ratio; if it is high, then break the log file into multiple files with the default size and put them on different disks.

    Another important thing: the log file breakup also depends on the available processors as well… So your log file distribution also affects load management, if you create files according to your processors, disks and usage.
