Last month, we decided to migrate one of our Data Centers, located in New York, to Los Angeles. As part of this migration, we were going to perform the following tasks:

  • Change the IP addresses of the nodes, MSDTC, Windows Cluster, and SQL Server Virtual Name.
  • Change the subnet mask.

This Data Center contained all the Development as well as Staging database servers. As per our corporate policy, we transferred copies of the Production database backups (from servers residing in the Dallas Data Center) onto the staging machines located in New York. In addition, the Transactional Log backups of the Production database, generated every 15 minutes through Log Shipping, were also transferred onto the staging server where the DR copy of the same database resided.

Due to the Data Center migration, the staging and dev servers would be shut down for a period of 4 days, which meant that with the existing backup policies in place, I would no longer be able to perform Log Shipping. Without Log Shipping in place, my data would always be at risk, and being a Production DBA, I never wanted my Production data to be in danger. I decided to have a thorough look at the infrastructure to work out how a proper backup strategy could be implemented for those 4 days, so that even while the Data Center was being migrated there would be no threat to my Production server. It also wasn't possible for me to build a separate dedicated box with additional hardware just for the database backups, because the servers were maintained by an external vendor whose SLA was around 30 days once a Service Request for hardware was approved. Besides, this would only have resulted in additional storage cost for my client.

After having a detailed look at all the servers in my infrastructure, I identified one database server, named ABC and located at a Data Center in Washington, which had around 488 GB in total on the D drive, of which around 388 GB was free. This was a SAN drive with RAID 10. With 388 GB of free space on the SAN, 32 GB of memory, and 32 CPUs, I was confident that it could hold the database backups of one of my Production servers. By database backups I mean the Full, Differential, and Transactional Log backups, so that Point in Time Recovery would be possible in case any disaster happened on the Production server.

The database server in the Washington Data Center which I am referring to above used a different domain account for SQL Server than the domain account used for the Production servers residing in Dallas. On the machine named ABC, I decided to make a folder named DBBackups on the D drive, which would be a shared folder. Since the D drive on ABC was a SAN drive, I needed to create a File Share resource on the cluster so that the folder became shared, and then assign the domain accounts on both machines Full permissions. The domain account used to start the SQL Server services on the Production servers in the Dallas Data Center had to be given full permissions to the shared folder DBBackups on the machine ABC; otherwise, when the backup command was fired from the Production box, the system would not be able to reach the shared folder on the SAN drive. Once the File Share resource was created, I performed a manual failover just to be 100% sure that there would be no issues if a failover happened. When I did a manual failover of the cluster, all the resources moved onto the other node successfully, which indicated everything was fine.

I then logged on to my Production server named PQR, located in the Dallas Data Center. One important point I would like to mention here is that all our servers located across the different Data Centers were under the same domain. On the server PQR, I had a drive named B whose total size was 149 GB. This was a SAN drive with RAID 5. The server had around 5 databases of sizes 84 GB, 7 GB, 680 MB, 3.8 GB, and 896 MB. Keeping these sizes in mind, I was pretty sure that storing daily Full backups locally would be impractical: even with a 1-day retention period, the first copy would only be deleted after the second Full backup completed successfully, and the overall size of the disk would never permit holding two copies at once. I never wanted to go with the option of deleting the original copy before the new backup completed successfully, because that would have been a huge mistake.

I decided to change the Backup Strategy as follows:

The Production server named PQR located in the Dallas Data Center had around 5 databases, as mentioned above, named P, Q, R, S, and T. On the D drive of the server named ABC, inside the shared folder named DBBackups, I created 5 subfolders named P, Q, R, S, and T. Each subfolder had 2 more subfolders inside it, named Full and Differential, to hold the copies of the Full and Differential backups. To perform a Full backup of the database named P, I executed the below T-SQL query against the master database on the server named PQR.
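A minimal sketch of such a Full backup command is shown below. The UNC path and the date-stamped file-name convention are assumptions for illustration; only the share name DBBackups, the server ABC, and the database P come from the setup described above.

```sql
-- Full backup of database P to the shared folder on ABC.
-- The file-name convention (date stamp) is an assumption.
DECLARE @BackupFile NVARCHAR(400);
SET @BackupFile = N'\\ABC\DBBackups\P\Full\P_Full_'
    + CONVERT(NVARCHAR(8), GETDATE(), 112) + N'.bak';

BACKUP DATABASE [P]
TO DISK = @BackupFile
WITH INIT, CHECKSUM, STATS = 10;
```

CHECKSUM is worth the small overhead when backing up across the network to a remote share, since it verifies page integrity as the backup is written.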

The above script was included in a SQL Server Agent job scheduled to execute daily at 0300 hrs EST.

The retention period of the Full backups was kept as 1 day, which meant that at any point in time I would have at least 1 copy of the Full backup on the disk.

The next step was to perform a Differential backup of the database. I decided to schedule it every 3 hours, the shortest interval that was practical for us, to ensure that in case of any emergency I would always have a Differential backup taken just a couple of hours earlier. To perform a Differential backup of the database, I wrote the below T-SQL script.
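A sketch of the Differential backup command follows, under the same assumptions as the Full backup above (the UNC path and the timestamped file name are illustrative, not the article's original script):

```sql
-- Differential backup of database P into the Differential subfolder on ABC.
-- The timestamped file name is an assumption.
DECLARE @BackupFile NVARCHAR(400);
SET @BackupFile = N'\\ABC\DBBackups\P\Differential\P_Diff_'
    + REPLACE(CONVERT(NVARCHAR(19), GETDATE(), 126), ':', N'') + N'.bak';

BACKUP DATABASE [P]
TO DISK = @BackupFile
WITH DIFFERENTIAL, INIT, CHECKSUM, STATS = 10;
```

A Differential backup contains all changes since the last Full backup, so a restore needs only the latest Full backup, the latest Differential, and the subsequent log backups.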

I included the above T-SQL script in a SQL Server Agent job scheduled to execute every 3 hours.

Once the Differential backup strategy was in place, the next step was to handle the Transactional Log backups. The Production server already had Log Shipping in place. Log Shipping always consists of 3 jobs:

  • Backup
  • Copy
  • Restore

Since the staging servers were down, I disabled the Copy and the Restore jobs. The retention period for the copies of the Transactional Log backups was specified as 10 hrs. Transactional Log backups not only ensure that we can perform a Point in Time Recovery, but also ensure that the size of the ldf file of the database doesn't grow drastically. One more important point I would like to mention here is that the copies of the Transactional Log backups were kept on the OLTP Production server itself, because their size was pretty small.
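Disabling the two jobs can be done through SQL Server Agent as sketched below. The job names here follow the default LSCopy_/LSRestore_ naming convention and are purely an assumption; the article does not state the actual job names.

```sql
-- Disable the Log Shipping Copy and Restore jobs on the secondary.
-- Job names are assumptions (default LSCopy_/LSRestore_ convention).
EXEC msdb.dbo.sp_update_job @job_name = N'LSCopy_PQR_P',    @enabled = 0;
EXEC msdb.dbo.sp_update_job @job_name = N'LSRestore_PQR_P', @enabled = 0;
```

The Backup job is left enabled so that log backups continue on the primary and the log chain remains unbroken.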

The same strategy was implemented for the remaining 4 databases as well. This was the approach I used to keep my data secure, so that even though our staging servers were down due to the Data Center migration, there would be no threat at all to the backups of my Production server.

Many thanks to all the readers for giving their valuable time to read this article. Please do let us know if you have any feedback or recommendations that would have made the approach better.



Satnam Singh
