Hello Geeks,

Welcome to the new series, the seven wonders of SQL Server. These are not the wonders that make you say "Aww". These are the ones that make you really wonder why. I will be covering seven such jaw-dropping situations where you wonder what is happening, and a few bizarre reasons why things are the way they are.

Last week we saw the disastrous auto shrink feature. I also covered a few steps to follow if shrinking a data file is inevitable. Today I will be talking about another setting that is very important to configure correctly – Max Server Memory.

In many organizations it is a very common practice for system administrators to install SQL Server, sometimes as part of an OS image. There is no harm in doing so, as long as a DBA is asked to do the initial configuration. Yes, it is a DBA's responsibility to make sure SQL Server is configured correctly. If a DBA has control over the machine configuration – processor, memory, and storage – he should be the one who has a say on these requirements. Unfortunately, a DBA is often seen only as a person who needs to be available 24/7 when a database performs slowly or breaks. Only a handful of organizations involve a DBA in the design phase.

So, what is max server memory? Max server memory, by definition, is the maximum amount of memory that SQL Server can acquire for the buffer pool. This means SQL Server can acquire more than max server memory in total, because some components allocate memory outside the buffer pool. But the buffer pool is the major component of SQL Server memory, and it is controlled by the max server memory setting.

The other configuration setting that controls buffer pool memory is min server memory. It sets the minimum amount of memory used for the buffer pool, which, once acquired, will never be released.
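Both settings are instance-level options changed through sp_configure. Here is a minimal sketch; the 2048 MB and 8192 MB values are only illustrative placeholders, so size them for your own server:

```sql
-- Both memory settings are advanced options, so expose them first
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Values are in MB; 2048 and 8192 are illustrative placeholders
EXEC sp_configure 'min server memory (MB)', 2048;
EXEC sp_configure 'max server memory (MB)', 8192;
RECONFIGURE;
```

Both options take effect immediately after RECONFIGURE, without a service restart.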

When the SQL Server service starts, memory is allocated to the buffer pool as data is read from physical storage into memory. Until the buffer pool grows to min server memory, no pages are deallocated from memory. Once min server memory is reached, that memory is never released from the buffer pool. The buffer pool keeps growing up to the max server memory size. Once max server memory is reached, pages are deallocated from memory by the checkpoint or lazy writer processes.

Max server memory does not guarantee that this much memory is allocated at all times. The operating system, when under pressure, can trim the SQL Server buffer pool to free memory. In that case, SQL Server has to perform disk IO to fetch the data pages again, which has a direct performance impact. If max server memory is not set, the chance of the buffer pool being trimmed is high, leading to noticeable performance issues. This is one main reason max server memory should be set.
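To check whether max server memory has ever been set on an instance, you can read sys.configurations; an untouched instance shows the default of 2147483647 MB, which is effectively unlimited:

```sql
-- value = configured value, value_in_use = currently active value
SELECT name, value, value_in_use
FROM sys.configurations
WHERE name IN ('min server memory (MB)', 'max server memory (MB)');
```

If max server memory still reads 2147483647, you have found one of today's wonders.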

When multiple applications run on the same machine as SQL Server, it is especially important to set max server memory. On such machines, if max server memory is set too high, the other applications may be starved of memory and take more time to start.

If there are multiple instances on a single machine, it is a recommended practice to set the max server memory of each instance in proportion to its workload. For example, take a machine with 128 GB of memory, where instance 1 needs 40% of resources and instance 2 requires 60%. It is recommended to leave 10% or 10 GB, whichever is lower, for the OS and allocate the remainder to the SQL Server instances. Here 10% of 128 GB is 12.8 GB, so we reserve 10 GB, leaving 118 GB; I would allocate roughly 48 GB to instance 1 and 70 GB to instance 2. This gives both instances appropriate memory to share the machine.
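The arithmetic above can be sketched as a script run once against each instance (connecting to each instance separately; the values are in MB and assume the split described):

```sql
-- 128 GB total; reserve MIN(10% = 12.8 GB, 10 GB) = 10 GB for the OS,
-- leaving 118 GB to split 40/60 between the two instances.

-- max server memory is an advanced option
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Run while connected to instance 1 (40% of 118 GB ≈ 48 GB = 49152 MB)
EXEC sp_configure 'max server memory (MB)', 49152;
RECONFIGURE;

-- Run while connected to instance 2 (60% of 118 GB ≈ 70 GB = 71680 MB)
-- EXEC sp_configure 'max server memory (MB)', 71680;
-- RECONFIGURE;
```

Because sp_configure applies to the instance you are connected to, the second block is commented out here; uncomment and run it in a session against instance 2.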

To conclude, as DBAs it is our responsibility to configure SQL Server to achieve the best possible results. So, find a wonder today in your environment where max server memory is not set, and correct it.

Happy Learning,
Manu
