Writing a proper T-SQL query that has no adverse impact on the server should ideally be the job of a good developer. But many a time, developers don't focus on this area, resulting in a poorly structured query that does real damage to server resources. In this article, Satnam Singh, a Senior SQL Server DBA from Larsen & Toubro Infotech, Mumbai, India, discusses one such case wherein a T-SQL query took more than 36 minutes to execute just to fetch around 89 rows of data, thereby causing a timeout error in the application itself.

A developer came to me and said that a particular portion of the application was timing out. Her initial investigation suggested that there was an issue with the T-SQL query involved. She suspected that there could be some problem with the SQL Server indexes and asked me to investigate further.

To take the case forward, I asked her to send me the T-SQL query involved, which is as follows:

When I studied the logic of the query, I noticed that something was wrong with the date condition written inside the WHERE clause.

The date conditions inside the WHERE clause were written as follows:

If you have a look, the condition below is the one that was repeated again and again:

If you look closely at the SELECT statement above, we are trying to access the maximum InsertedWhen by joining the table named tblstatuslog and the view vewwork. The main aim was simply to obtain a single value to compare against, yet the query attempted to perform 3 iterations as shown above, each of them scanning more than 1,000,000 rows of data, which slowed down the entire operation.
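The original query text did not survive in this copy of the article, but based on the description above, the anti-pattern would look roughly like the following sketch. Only tblstatuslog, vewwork, and InsertedWhen come from the article; every other table, column, and alias name is an assumption for illustration:

```sql
-- Hypothetical sketch of the original anti-pattern: the same
-- MAX(InsertedWhen) subquery is written out repeatedly in the WHERE
-- clause, forcing a 1,000,000+ row scan for each occurrence.
SELECT w.WorkID, w.Status                            -- columns are assumed
FROM   vewwork AS w
WHERE  w.StatusDate  = (SELECT MAX(s.InsertedWhen)
                        FROM   tblstatuslog AS s
                        JOIN   vewwork      AS v ON v.WorkID = s.WorkID)
   OR  w.CreatedDate = (SELECT MAX(s.InsertedWhen)   -- same subquery again
                        FROM   tblstatuslog AS s
                        JOIN   vewwork      AS v ON v.WorkID = s.WorkID)
   OR  w.ClosedDate  = (SELECT MAX(s.InsertedWhen)   -- and a third time
                        FROM   tblstatuslog AS s
                        JOIN   vewwork      AS v ON v.WorkID = s.WorkID);
```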

To get rid of this problem, we need to use a temporary table and dump the value into it using the same condition, as shown below:
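A minimal sketch of that temp-table step, assuming the same tblstatuslog and vewwork objects described above (the temp-table name and join column are assumptions):

```sql
-- Compute the maximum InsertedWhen once and store it in a temporary table,
-- so the expensive scan runs a single time instead of once per condition.
SELECT MAX(s.InsertedWhen) AS InsertedWhen
INTO   #MaxInserted                                -- temp-table name assumed
FROM   tblstatuslog AS s
JOIN   vewwork      AS v ON v.WorkID = s.WorkID;   -- join column assumed
```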

Now, instead of checking the entire set of data as in the original code, you can simply refer to the InsertedWhen column in the temporary table itself. Another question that comes to mind is whether it should be an INNER JOIN or a LEFT OUTER JOIN. Let's see the query execution plan in both cases:



From the above screen capture, we can see that there is a HASH MATCH with an operator cost of 135, scanning 3,963,160 records, which slows down the entire operation.


Now let's have a look at the query execution plan for the LEFT OUTER JOIN.


As you can see from the above screen capture, there is now just an index scan, fetching the single row of required data. This means the query now uses the proper index associated with it, which makes the performance excellent.

Also, since you are trying to fetch the maximum record from the tblstatuslog table itself, I would say a LEFT OUTER JOIN is a much better candidate than an INNER JOIN.

The overall query should now be written as below:
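The final query is also missing from this copy, but under the same assumptions as the sketches above, the rewritten shape would be roughly as follows (all names other than tblstatuslog, vewwork, and InsertedWhen are illustrative):

```sql
-- Step 1: materialize the expensive MAX(InsertedWhen) exactly once.
SELECT MAX(s.InsertedWhen) AS InsertedWhen
INTO   #MaxInserted                                -- temp-table name assumed
FROM   tblstatuslog AS s
JOIN   vewwork      AS v ON v.WorkID = s.WorkID;   -- join column assumed

-- Step 2: the main query compares against the single stored value through
-- a LEFT OUTER JOIN instead of repeating the correlated subquery.
SELECT w.WorkID, w.Status                          -- columns are assumed
FROM   vewwork AS w
LEFT OUTER JOIN #MaxInserted AS m
       ON w.StatusDate = m.InsertedWhen;           -- join condition assumed

DROP TABLE #MaxInserted;
```

The design point is that the 1,000,000+ row aggregation now runs once, and the optimizer can use a suitable index when probing the single-row temp table.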

With this change, the query now executes in just 28 seconds when fetching data for a period of 2 years, instead of the earlier 28 to 39 minutes.

My sincere thanks to all the readers for giving their valuable time to this article. This is my first ever article on performance tuning, so I would like to hear comments from readers that would help make the approach much better. My sincere thanks to the entire offshore development team at L&T Infotech, with whom I spend a considerable amount of time researching various topics. Sincere thanks to my colleague Arti Samkaria as well, with whom I spend a considerable amount of time focusing on performance tuning. My sincere thanks to Amit Bansal and Sarabpreet Singh for their valuable guidance, which helps me present more and more interesting topics to the audience.



Satnam Singh

