SQL Server: Why NTEXT Is Such a Pain

Whenever we talk about performance tuning in a SQL Server environment, the cause is not always bad indexes, stale statistics, or poor query structure; in some cases a query performs poorly simply because of an incorrect data type. In this article, Satnam Singh, a Senior SQL Server DBA from Larsen and Toubro Infotech, Mumbai, India, discusses one such situation he faced a couple of days back, in which a T-SQL query was performing very badly because a column involved in the query used the NTEXT data type.

Scenario:

One of my developers suddenly told me that the query behind an SSRS report was performing so poorly that it was taking a very long time to generate its output. Initially I thought there might be a communication problem, with the application taking a long time to reach the database server since it resided in Australia, but I later found that wasn't the case. After a detailed look at the query, I noticed that a few columns in the T-SQL were of the NTEXT data type, and they were causing the damage. NTEXT leads to a heavy Clustered Index Scan along with very heavy LOB logical reads. Let us understand this via an example.

Let us create a dummy table named comment using the T-SQL below:

create table comment
(
comment_text NTEXT
)

Once the table gets created, I insert a few dummy records into it using the below T-SQL:

insert comment
select 'ABC'

insert comment
select 'PQR'

insert comment
select 'XYZ'

Now let us check the Statistics IO count of the table named comment using the below T-SQL.

set statistics io on
select * from comment
set statistics io off

The Statistics IO count is as follows:

(3 row(s) affected)
Table 'comment'. Scan count 1, 
logical reads 1, 
physical reads 0, 
read-ahead reads 0, 
lob logical reads 6, 
lob physical reads 0, 
lob read-ahead reads 0.

As you can see from the output above, though the value of logical reads is a mere 1, the value of LOB logical reads is 6, which is enough to make the query perform poorly. This is expected behaviour: NTEXT never stores its data in-row with the rest of the table; instead it stores it in a separate Large Object (LOB) structure.
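You can see this structure in the catalog views. One way to check where the table's data actually lives (a sketch, assuming you have permission to view the metadata) is to look at its allocation units:

```sql
-- Sketch: show the allocation units behind the comment table.
-- NTEXT values live in the LOB_DATA allocation unit, separate from the
-- IN_ROW_DATA pages that hold the rest of the row.
-- Note: LOB allocation units join to sys.partitions on partition_id and
-- the others on hobt_id, so we match container_id against both.
SELECT au.type_desc,   -- IN_ROW_DATA, LOB_DATA or ROW_OVERFLOW_DATA
       au.used_pages
FROM sys.allocation_units AS au
JOIN sys.partitions AS p
  ON au.container_id IN (p.hobt_id, p.partition_id)
WHERE p.object_id = OBJECT_ID('comment');
```

With the NTEXT column in place, the LOB_DATA row should report used pages even for these tiny values, which is exactly what the LOB logical reads above are paying for.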

Now, in order to get rid of this situation, I decided to convert the NTEXT column to the NVARCHAR(MAX) data type using the T-SQL below:

alter table comment
alter column comment_text nvarchar(max)

Once I changed the data type of the column to NVARCHAR(MAX), I decided to check the Statistics IO count again using the below T-SQL:

set statistics io on
select * from comment
set statistics io off

The Statistics IO is as shown below:

(3 row(s) affected)
Table 'comment'. Scan count 1, 
logical reads 1, 
physical reads 0, 
read-ahead reads 0, 
lob logical reads 6, 
lob physical reads 0, 
lob read-ahead reads 0.

Oops, that is still very bad. In spite of changing the data type of the column, the LOB logical read count is still the same, i.e. 6. This is because the existing values are still kept in the Large Object structure, with the row holding only a pointer used to fetch the text. In order to fix this problem, I executed the below T-SQL, which rewrites every value in place:

update comment
set comment_text = comment_text

Once executed, I viewed the Statistics IO count again using the same T-SQL:

set statistics io on
select * from comment
set statistics io off

The output is as shown below:

(3 row(s) affected)
Table 'comment'. Scan count 1, logical reads 1, 
physical reads 0, read-ahead reads 0, 
lob logical reads 0, 
lob physical reads 0, 
lob read-ahead reads 0.

Now you can see that the LOB logical read count has dropped to 0, which is excellent. This is because the rewritten values are now small enough to be stored in-row, so the data is read directly from the table's data pages instead of the Large Objects structure.
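The same fix applies wherever NTEXT columns are hiding. To find candidates across a database before planning a conversion, one option (a sketch using the standard catalog views) is:

```sql
-- Sketch: list every deprecated LOB column (text, ntext, image) in the
-- current database, i.e. candidates for conversion to VARCHAR(MAX),
-- NVARCHAR(MAX) or VARBINARY(MAX) respectively.
SELECT s.name  AS schema_name,
       t.name  AS table_name,
       c.name  AS column_name,
       ty.name AS data_type
FROM sys.columns AS c
JOIN sys.tables  AS t  ON c.object_id    = t.object_id
JOIN sys.schemas AS s  ON t.schema_id    = s.schema_id
JOIN sys.types   AS ty ON c.user_type_id = ty.user_type_id
WHERE ty.name IN ('text', 'ntext', 'image')
ORDER BY s.name, t.name, c.name;
```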

This was my recommendation on how to improve performance when a table has columns of the NTEXT data type. A detailed POC needs to be done before the above strategy is implemented. In general, one should avoid NTEXT in future development, because Microsoft has confirmed that it is deprecated and will be removed in a future version of the product. Also, be careful with NVARCHAR(MAX) and VARCHAR(MAX) columns: on SQL Server 2008 R2 and earlier, an index that contains such a column cannot be rebuilt ONLINE (SQL Server 2012 removed this restriction).
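To make the online-rebuild caveat concrete, here is a sketch (the id column and index name are hypothetical additions to the demo table). On SQL Server 2008 R2 and earlier, the ONLINE rebuild below raises an error because the clustered index covers the NVARCHAR(MAX) column; on SQL Server 2012 and later (Enterprise Edition, which ONLINE operations require) it succeeds:

```sql
-- Sketch (hypothetical names): give the demo table a key and a clustered
-- index, then attempt an ONLINE rebuild. A clustered index implicitly
-- includes every column, so the NVARCHAR(MAX) column comment_text is part
-- of it; pre-2012 this blocks ONLINE = ON.
ALTER TABLE comment ADD id INT IDENTITY(1, 1);
CREATE CLUSTERED INDEX ix_comment_id ON comment (id);

ALTER INDEX ix_comment_id ON comment
REBUILD WITH (ONLINE = ON);  -- fails pre-2012 due to the LOB column
```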

Many thanks to all the viewers for taking their valuable time to read this article. This article is dedicated to my colleagues Arti Samkaria, Yadu Nair, and the members of the Support team at the Offshore Development Centre, with whom I have worked on plenty of SQL Server performance tuning cases. Please do let us know if you have any valuable suggestions for improving the article.

 

Regards

Satnam Singh

