Replication Agent has not logged a message in 10 minutes

I saw multiple posts on MSDN regarding the following message: 

“The replication agent has not logged a progress message in 10 minutes. This might indicate an unresponsive agent or high system activity.” 

More often than not, the above message can be safely ignored. The next obvious question is: when can it be ignored? If your replication agent history is not reporting any error messages, then the message above is benign. Typically, it means the agent is busy doing its work and has no spare resources to respond to status inquiries.
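If you want to verify this yourself, you can query the agent history directly. Here is a minimal sketch for a Distribution Agent, assuming the default distribution database name of 'distribution' (other agent types log to their own tables, e.g. MSlogreader_history):

-- Most recent Distribution Agent history entries
-- A non-zero error_id points to a real error; otherwise the heartbeat message is benign
select top 20 time, runstatus, comments, error_id
from distribution.dbo.MSdistribution_history
order by time desc;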

Why does this error message get generated? 

This message is generated because of the Distributor's heartbeat interval property, which governs how long an agent can run without logging a progress message. If your replication agents are not reporting any errors and you are still seeing the message above, you can change your heartbeat interval to a higher value. Another possibility is that you changed the history logging option for your replication agent so that it no longer logs any messages.

Query: 

exec sp_changedistributor_property @property = 'heartbeat_interval', @value = <value in minutes>;
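For example, to raise the interval to 30 minutes (run this at the Distributor; 30 is just an illustrative value):

-- Allow agents to run for 30 minutes without logging a progress message
exec sp_changedistributor_property @property = 'heartbeat_interval', @value = 30;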

 

Reference: sp_changedistributor_property (Transact-SQL) 

I have read lots of opinions arguing that this is a bug. If you subscribe to that school of thought, please create a Connect item at http://connect.microsoft.com/sqlserver.


Non-BPool region on 64-bit SQL instances may be larger than you think

One of my older posts talked about SQL Server Memory Basics. The infamous MemToLeave region was shown with a default value of 384MB. That post was penned a long time back, and the SQL landscape has changed since then. Since it was mainly focused on the memory architecture of SQL Server in a 32-bit environment, the calculations shown were based on the 2GB user address space limitation of the x86 architecture.

With the adoption of the 64-bit architecture, the 2GB user-mode address space (3GB if you use the /3GB switch) went out the window. A lesser known fact, which I have seen come up multiple times on MSDN while addressing non-BPool memory consumption concerns, is that the non-BPool region is no longer just 384MB.

Before I go on to explain this, I would like to point out that the Max Server Memory setting for SQL Server (irrespective of architecture) governs only the BPool region. It does not limit the amount of memory that can be used by components which request contiguous allocations larger than 8KB. In 64-bit environments, the non-BPool region is limited not by your virtual address space but by the physical memory (RAM) on the server. Let me explain this with an example.

If you have a SQL Server instance on a 64-bit box with Max Server Memory set to 24GB and the total physical memory on the server is 32GB, then your SQL instance can theoretically use the remaining 8GB for contiguous memory allocations larger than 8KB. This is why I keep advocating an understanding of the environment before setting Max Server Memory for the SQL instance. Leaving 2 or 3GB for the Operating System and the non-BPool memory is just not going to cut it if you have components active within SQL Server which require huge amounts of contiguous memory. I have already listed the most common non-BPool memory consumers in a previous post.

The reason this happens is that on 64-bit environments you are no longer constrained by the 2 or 3GB user address space limitation, because your address space runs into a few TBs. The limiting factor here is your physical memory (RAM). Once you have set Max Server Memory to a finite value, the SQL instance knows that the Buffer Pool cannot grow beyond that value, but this does not limit non-BPool allocations. As long as there is available physical memory on the server, SQL can continue using it for contiguous allocation requests larger than 8KB.

This makes it highly important to evaluate which components in your SQL environment can make such allocation requests, and to set Max Server Memory so that you do not starve either of the two memory regions in SQL Server, any other applications running on the server, or the Operating System. Unless you do this, you could find yourself troubleshooting SQL memory pressure very frequently, especially issues like paging or general server unresponsiveness/sluggishness, before you hit the nail on the head!

To summarize, two points that you need to keep in mind for 64-bit SQL environments:

1. Max Server Memory setting only limits Buffer Pool

2. Non-BPool memory region for SQL is effectively now: Total Physical Memory – (Max Server Memory + Physical Memory used by OS and other applications)
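As a quick illustration, here is how the 24GB cap from the example above would be applied, leaving the remaining physical memory for non-BPool allocations, other applications, and the OS (a sketch; size the value for your own workload):

-- Cap the Buffer Pool at 24GB; the value is in MB
-- Non-BPool allocations are NOT bound by this setting
exec sp_configure 'show advanced options', 1;
reconfigure;
exec sp_configure 'max server memory (MB)', 24576;
reconfigure;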

How to open Management Studio with the Object Explorer and a Query Window from the command line

SQL Server Management Studio gives you the option of launching it from the command line. If you want to launch SSMS with a New Query window and Object Explorer from the command line, you first need to make the following change.

Tools –> Options opens the Options dialog for SSMS.

Under Environment –> General you will find the At Startup property. Use the drop-down list to select "Open Object Explorer and New Query". The default setting is shown in the screenshot below.

[Screenshot: the default At Startup setting under Tools –> Options –> Environment –> General]

Once you have made the change, you can use the following command to launch SSMS connected to the instance you want:

ssms -S <server name> -E -nosplash

SSMS Command Line Options

—————————
Microsoft SQL Server Management Studio
—————————
Usage:
ssms.exe [-S server_name[\instance_name]] [-d database] [-U user] [-P password] [-E] [file_name[, file_name]] [/?]

    [-S    The name of the SQL Server instance to which to connect]
    [-d    The name of the SQL Server database to which to connect]
    [-E]    Use Windows Authentication to login to SQL Server
    [-U    The name of the SQL Server login with which to connect]
    [-P    The password associated with the login]
    [file_name[, file_name]] names of files to load
    [-nosplash] Suppress splash screen
    [/?]    Displays this usage information
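Putting the options together, the following command (server, database, and file names below are placeholders) opens a saved script against a named instance using Windows Authentication and skips the splash screen:

ssms -S MYSERVER\INST1 -d AdventureWorks -E -nosplash MyScript.sql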

Cursor coding horrors

Most coders have stereotypes of one kind or another, and given an opportunity, a developer will give in to theirs. Coding guidelines are what keep them in check. My personal T-SQL stereotype shows up when defining a loop. Cursors are one of the most commonly used coding constructs, but I tend to stick to my trusted friend, the WHILE loop. I have worked on multiple issues where the T-SQL developer got the next FETCH wrong due to an oversight. Let me demonstrate that with an example. Here is a T-SQL sample which prints out all the user sessions connected to the SQL instance.

declare @session_id int
declare cur_sysprocesses cursor for
select session_id from sys.dm_exec_requests
where session_id > 50

open cur_sysprocesses
fetch next from cur_sysprocesses into @session_id

while @@FETCH_STATUS <> -1
begin
    print @session_id
    if (@session_id > 50)
        fetch next from cur_sysprocesses into @session_id
end

close cur_sysprocesses
deallocate cur_sysprocesses

I have the next FETCH inside the WHILE loop, and the loop is governed by the value of @@FETCH_STATUS. Since I have defined it as NOT EQUAL TO -1, the cursor loop ends when the next fetch is unsuccessful. This is perfectly alright. But not all cursors in a production environment are this simple. Some have deeply nested WHILE loops along with conditional blocks. So, let me modify the example and show you:

declare @session_id int
declare cur_sysprocesses cursor for
select session_id from sys.dm_exec_requests

open cur_sysprocesses
fetch next from cur_sysprocesses into @session_id

while @@FETCH_STATUS <> -1
begin
    print @session_id
    if (@session_id > 50)
        fetch next from cur_sysprocesses into @session_id
end

close cur_sysprocesses
deallocate cur_sysprocesses

Now the problem with the above cursor is that system processes generally have a session id of 50 or below. My cursor can now loop infinitely: the first fetch returns a system spid of 50 or less, the check in the conditional block never evaluates to true, so the next fetch never occurs and @@FETCH_STATUS stays at 0. This seems like a really amateur mistake which you would think is not possible. But believe me, this happens! Since I am demonstrating it with a common system DMV, it may look contrived; with user-defined tables and cursors, however, it is quite possible. Never define the next fetch from the cursor inside a conditional block within the WHILE loop when that block depends on the data being fetched from the table.

Always have the next FETCH at the first level of the WHILE loop. Never put it in a conditional block. If you need to decide whether to process the data fetched from the cursor, find a way to do so that does not gate the FETCH itself. The next FETCH from the cursor should NOT be defined in an IF conditional block, as shown in the sketch below.
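Here is how the problematic cursor above could be rewritten along these lines: the next FETCH sits at the first level of the loop, and the conditional block only decides whether to process the fetched row (a sketch based on the example above):

declare @session_id int
declare cur_sysprocesses cursor for
select session_id from sys.dm_exec_requests

open cur_sysprocesses
fetch next from cur_sysprocesses into @session_id

while @@FETCH_STATUS <> -1
begin
    -- the conditional block only filters the row...
    if (@session_id > 50)
        print @session_id
    -- ...while the next FETCH always executes, so the loop cannot stall
    fetch next from cur_sysprocesses into @session_id
end

close cur_sysprocesses
deallocate cur_sysprocesses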

It is a human being coding the cursor, and there is always a chance of some unforeseen circumstance (an un-thought-of scenario) that sends your cursor into a doomed state (read: infinite loop).

You can never depend on your data to decide the fate of your code path.

To summarize: the next FETCH, while you are looping through your cursor, should be at the first level of your code, unless you have no other choice; in that case, make sure you have guarded against the infinite loop scenario.
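For completeness, here is the cursor-free WHILE pattern I alluded to at the start of this post, applied to the same example (a sketch; it walks the key set in ascending order instead of holding a cursor open):

declare @session_id int
select @session_id = min(session_id) from sys.dm_exec_requests where session_id > 50

while @session_id is not null
begin
    print @session_id
    -- advance to the next session_id; MIN returns NULL when no rows remain, ending the loop
    select @session_id = min(session_id)
    from sys.dm_exec_requests
    where session_id > @session_id
end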

Blog post update

I published a new post on SQL Server FAQ on how an incorrect BufferCount value, or not specifying one at all, can lead to Out of Memory errors in SQL Server.

Read about it here:

Incorrect BufferCount data transfer option can lead to OOM condition