Transaction Log Management

The transaction log file is the key to keeping a database highly available in the event of a disaster. As database administrators, we need to ensure that transaction logs are managed efficiently for high availability and good throughput. Below is a summary of some best practices suggested for transaction logs.

  1. Create only ONE transaction log file per database. Though SQL Server allows the creation of multiple transaction log files, it does not stripe data across them; SQL Server writes to the log sequentially, so there is no benefit in having multiple log files.

Note: If your current architecture already has multiple log files, consider removing the extra log files after taking a log backup.

Please refer to [http://technet.microsoft.com/en-us/library/ms191433(v=sql.105).aspx] for more information on how to delete a transaction log file from a database.
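A minimal sketch of the removal, assuming a hypothetical database SalesDB with an extra log file SalesDB_log2 and a hypothetical backup path:

    -- Back up the log so the inactive portion of the extra file is cleared.
    BACKUP LOG SalesDB TO DISK = N'E:\Backup\SalesDB_log.trn';

    -- A log file can be removed only once it holds no active log records;
    -- if this fails, take further log backups and retry.
    ALTER DATABASE SalesDB REMOVE FILE SalesDB_log2;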

  2. If possible, dedicate a separate drive to the transaction log files of all user databases. The advantage is that the disk head stays positioned to write the next byte of the log file; if the log is placed on a disk that hosts other files, the head is constantly pulled away to serve them. This gain matters most on high-volume OLTP systems.

Note: If your log files currently share disks with other files, consider moving them to a dedicated disk.
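A sketch of the check and the move, assuming a hypothetical database SalesDB and a dedicated log drive L:\SQLLogs:

    -- See where every database's log file currently resides.
    SELECT DB_NAME(database_id) AS database_name, name, physical_name
    FROM sys.master_files
    WHERE type_desc = 'LOG';

    -- Re-point the log at the dedicated drive; the new path takes effect
    -- when the database next comes online.
    ALTER DATABASE SalesDB
        MODIFY FILE (NAME = SalesDB_log, FILENAME = N'L:\SQLLogs\SalesDB_log.ldf');

    ALTER DATABASE SalesDB SET OFFLINE WITH ROLLBACK IMMEDIATE;
    -- Move SalesDB_log.ldf to L:\SQLLogs at the operating system level, then:
    ALTER DATABASE SalesDB SET ONLINE;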

  3. External fragmentation is one of the reasons for poor transaction log throughput. The best way to remove it is to defragment the disk that holds the log files. However, this is recommended as a one-time operation, carried out during an agreed business downtime.

Note: Disk defragmentation is a server-level operation typically carried out by the administrators of the server. Please ensure you have the proper permissions in place before defragmenting the disk.

  4. Choosing the correct RAID configuration is one of the important decisions when building a new database system. For transaction logs, RAID 1+0 (or RAID 0+1) is highly recommended for both performance and redundancy. A database's availability is determined by its ability to recover from a disaster, and point-in-time recovery requires the last transaction log taken before the crash; a redundant RAID level protects that log so point-in-time recovery remains possible (a tail-log backup sketch follows this list).
  5. Consider allocating a realistic amount of space to the transaction log while creating the database; the initial size determines how much space the log file receives at creation time (a sizing sketch follows this list). While there are many rules of thumb for the initial size, these are the factors to consider when choosing it:
  • The type of activity the database will carry.
  • The frequency of DML statements against the database.
  • The recovery model of the database.
  • How frequently the transaction logs are backed up.
  6. Internal fragmentation is one of the major suspects behind an underperforming transaction log. It depends on the growth increment specified for the log: the auto-growth (or manual growth) value should be large enough to absorb a healthy increase in database activity. Transaction log files are internally broken into smaller units called virtual log files (VLFs), and the size and number of VLFs depend on how much space each growth adds to the log. An excessive number of VLFs negatively impacts every log-related activity and degrades performance during transaction log backups in particular (a quick VLF check follows this list).
  • Always set all the values [initial size, max size, growth] for the transaction log according to the volume of transactions.
  • Always specify a fixed size rather than a percentage for auto-growth; a percentage increase is extra overhead for SQL Server.
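On point 4: the reason a surviving log drive matters is that it allows a tail-log backup, which is what makes point-in-time recovery possible. A hedged sketch, with hypothetical names, paths, and timestamp (intermediate log backups are omitted):

    -- Capture the tail of the log after the data drive fails; this works
    -- only if the drive holding the log survived the crash.
    BACKUP LOG SalesDB
        TO DISK = N'E:\Backup\SalesDB_tail.trn'
        WITH NO_TRUNCATE, INIT;

    -- Restore the last full backup, then roll the log forward to a point in time.
    RESTORE DATABASE SalesDB
        FROM DISK = N'E:\Backup\SalesDB_full.bak' WITH NORECOVERY;
    RESTORE LOG SalesDB
        FROM DISK = N'E:\Backup\SalesDB_tail.trn'
        WITH STOPAT = '2014-06-01 09:30:00', RECOVERY;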
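On point 5 and the growth settings above, the shape of the command looks like this; the sizes are placeholders to illustrate fixed-size growth, not recommendations:

    -- Pre-sized log with a capped maximum and fixed-MB growth steps,
    -- avoiding percentage-based auto-growth (all values hypothetical).
    CREATE DATABASE SalesDB
    ON PRIMARY
        (NAME = SalesDB_data, FILENAME = N'D:\SQLData\SalesDB.mdf', SIZE = 10GB)
    LOG ON
        (NAME = SalesDB_log, FILENAME = N'L:\SQLLogs\SalesDB_log.ldf',
         SIZE = 4GB, MAXSIZE = 32GB, FILEGROWTH = 512MB);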
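On point 6: the VLF count is easy to check. DBCC LOGINFO returns one row per VLF in the current database:

    USE SalesDB;  -- hypothetical database name
    -- Counts in the thousands usually mean the log grew in many small
    -- increments and should be resized in larger, fixed steps.
    DBCC LOGINFO;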

 

Regards

Rama Chandra Gowtham Peddada
