SQL Server 2012: Debugging Deadlocked Schedulers

I had blogged earlier about debugging deadlocked schedulers on SQL Server 2008 R2 and earlier releases. Since there have been some fundamental changes in SQL Server 2012, I thought it would be a good idea to post about debugging the same scenario on SQL Server 2012 instances.

Older posts:
Debugging Deadlocked Schedulers Part 1
Debugging Deadlocked Schedulers Part 2



SQL Server is hung!

In the recent past, I had to work on a SQL Server 2000 instance which became unresponsive a short while after the service was restarted. Since this was SQL Server 2000, I didn't have the option of using a Dedicated Administrator Connection (DAC) to log into the SQL Server instance to see whether a DAC connection would succeed and, if it did, to figure out why the engine was not accepting new connections.

After the SQL Server service restart, the ERRORLOG very happily indicated no issues, and if the lack of error messages wasn't already enough to make you want to tear your hair out, the connection failure reported the most generic of error messages:

Server: Msg 11, Level 16, State 1
[Microsoft][ODBC SQL Server Driver][TCP/IP Sockets]General network error. Check your network documentation.

I did the basic due diligence: checked that the network protocols were enabled and that the port on which the SQL Server instance was supposed to listen was actually open. The netstat output showed a large number of connections on the SQL Server port, so I did a quick count of them to determine whether this was a TCP port exhaustion issue. But that was not the case either! The Errorlog didn't even report a deadlocked schedulers condition to tell me that something was wrong.


SQL Saturday 76: The WiFi gods were happy


I just concluded my presentation on “Debugging the Deadlock for the Scheduler” at SQL Saturday 76 over a Lync meeting. The WiFi and Lync gods decided to be benevolent today and let me present without an untoward incident! As always, it was fun to demonstrate the use of a debugger… 😉 The aim of the presentation was to show what can be done both proactively and reactively when dealing with a deadlocked schedulers condition.

A special thank you is due to all the sponsors without whom the event today wouldn’t have been possible: Microsoft New Zealand, RedGate, SQL Pass, New Zealand MS Communities and SQL Services!

The slide deck used for the presentation today is available on SlideShare.

The demo files used are available on SkyDrive in the file SQLSat76_Demo.zip. The debugging walkthrough that I explained, along with the PowerShell script that automatically collects DMV outputs based on messages in the Windows Application event log, is available in the posts under the walkthroughs link below as well as in the zip file mentioned above.

And last but not least, a big thank you to Dave [Twitter | Blog] for helping arrange all the logistics so that I could present, and also for giving me this opportunity!

Resources:

Whitepaper: How To Diagnose and Correct Errors 17883, 17884, 17887, and 17888
http://msdn.microsoft.com/en-us/library/cc917684.aspx

Walkthroughs
https://troubleshootingsql.com/tag/deadlocked-schedulers/

WinDBG
http://msdn.microsoft.com/en-us/windows/hardware/gg463009

Microsoft Public Symbol Server
http://msdl.microsoft.com/download/symbols

KB: New concurrency and scheduling diagnostics have been added to SQL Server
http://support.microsoft.com/kb/319892/en-us

Cheers!

Debugging that latch timeout

My last post, on debugging an assertion, didn't have any cool debugging tips since there is not much that you can do with an assertion dump unless you have access to private symbols and sometimes even to the source code. In this post I am not going to disappoint, and will show you some more cool things that the Windows debugger can do for you with public symbols for a latch timeout issue.

When you encounter a latch timeout (buffer or non-buffer latch), the first occurrence of its type generates a mini-dump. Further occurrences of the same latch timeout are reported as error messages in the SQL Errorlog.

Buffer latch timeouts are typically reported using errors 844 and 845. The common reasons for such errors are documented in a KB Article. For a non-buffer latch timeout, you will get an 847 error.

Error #  Error message template (from sys.messages)
844      Time out occurred while waiting for buffer latch -- type %d, bp %p, page %d:%d, stat %#x, database id: %d, allocation unit id: %I64d%ls, task 0x%p : %d, waittime %d, flags 0x%I64x, owning task 0x%p. Continuing to wait.
845      Time-out occurred while waiting for buffer latch type %d for page %S_PGID, database ID %d.
846      A time-out occurred while waiting for buffer latch -- type %d, bp %p, page %d:%d, stat %#x, database id: %d, allocation unit Id: %I64d%ls, task 0x%p : %d, waittime %d, flags 0x%I64x, owning task 0x%p. Not continuing to wait.
847      Timeout occurred while waiting for latch: class '%ls', id %p, type %d, Task 0x%p : %d, waittime %d, flags 0x%I64x, owning task 0x%p. Continuing to wait.
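
If you want to confirm the exact message templates on the build you are running, you can pull them straight from sys.messages. A quick sketch (the language_id filter below assumes an English installation):

SELECT message_id, text
FROM sys.messages
WHERE message_id IN (844, 845, 846, 847)
      AND language_id = 1033;   -- 1033 = English; adjust for other languages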

This is what you will see in the SQL Errorlog when a latch timeout occurs.

spid148     Time out occurred while waiting for buffer latch -- type 4, bp 0000000832FE1200, page 3:11234374, stat 0x7c20009, database id: 120, allocation unit id: 72057599731367936, task 0x0000000003C4F2E8 : 0, waittime 300, flags 0x1a, owning task 0x0000000003C129B8.  Continuing to wait.
spid148     **Dump thread – spid = 148, PSS = 0x000000044DC17BD0, EC = 0x000000044DC17BE0
spid148     ***Stack Dump being sent to D:\Microsoft SQL Server\MSSQL.1\MSSQL\LOG\SQLDump0001.txt

spid148     * Latch timeout
spid148     * Input Buffer 84 bytes –
spid148     *             DBCC CHECKDB WITH ALL_ERRORMSGS
External dump process returned no errors.

I have only pasted the relevant portion of the Errorlog for brevity. As I have outlined in my previous blog posts on similar topics, there is a lot of due diligence that can be done with the help of the Windows Event Logs and the SQL Server Errorlogs before you fire up the Windows debugger to analyze the memory dump on your system. The first few obvious things you will notice are that SPID 148 encountered the issue while performing a CHECKDB on database ID 120, and that the timeout occurred while waiting for a buffer latch on a page (the page ID is available in the message above). I don't see a "Timeout waiting for external dump process" message in the SQL Errorlog, which means that I have a good chance of extracting useful information from the mini-dump generated by SQLDumper.
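
Much of the message can also be decoded without a debugger. A minimal T-SQL sketch, assuming you can still connect to the instance and run the second query in the context of database ID 120 (the join below only covers in-row and row-overflow allocation units):

SELECT DB_NAME(120) AS database_name;    -- resolve the database id from the 844 message

-- Run this inside the database identified above to map the allocation unit id
-- from the message back to the object and index being latched.
SELECT OBJECT_NAME(p.object_id) AS object_name,
       p.index_id
FROM sys.allocation_units AS au
JOIN sys.partitions AS p
    ON au.container_id = p.hobt_id       -- valid for allocation unit types 1 and 3
WHERE au.allocation_unit_id = 72057599731367936;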

Latch timeouts are typically the after-effect of either a system-related issue (hardware, drivers or the operating system) or a previous error encountered by SQL Server. So the next obvious action item is to look through the SQL Errorlog and find out whether there were any additional errors prior to the latch timeout. I see a number of OS Error 1450 messages reported by the same SPID 148 for the same file handle but different offsets.

spid148     The operating system returned error 1450(Insufficient system resources exist to complete the requested service.) to SQL Server during a write at offset 0x0000156bf36000 in file with handle 0x0000000000001358. This is usually a temporary condition and the SQL Server will keep retrying the operation. If the condition persists then immediate action must be taken to correct it.

Additionally, within 5-15 minutes before and after the latch timeout, I see multiple other SPIDs reporting the same 1450 error for different offsets, but again for the same file.

spid137     The operating system returned error 1450(Insufficient system resources exist to complete the requested service.) to SQL Server during a write at offset 0x000007461f8000 in file with handle 0x0000000000001358. This is usually a temporary condition and the SQL Server will keep retrying the operation. If the condition persists then immediate action must be taken to correct it.
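
Rather than scrolling through a busy Errorlog by hand, you can have xp_readerrorlog do the filtering. A hedged sketch (the extended parameters of xp_readerrorlog are undocumented and can differ between builds, so treat the parameter list as an assumption):

-- Search the current Errorlog (log 0, type 1 = SQL Server) for lines that
-- contain both '1450' and 'operating system returned error'.
EXEC master.dbo.xp_readerrorlog 0, 1, N'1450', N'operating system returned error';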

I also see the latch timeout message being repeated at regular intervals for the same page and the same database.

spid148     Time out occurred while waiting for buffer latch — type 4, bp 0000000832FE1200, page 3:11234374, stat 0x7c20009, database id: 120, allocation unit id: 72057599731367936, task 0x0000000003C4F2E8 : 0, waittime 82800, flags 0x1a, owning task 0x0000000003C129B8.  Continuing to wait.

Notice the waittime above: it has increased from 300 to 82800! So the next thing I do is look up issues related to CHECKDB and 1450 error messages on the web using Bing (yes, I do use Bing!). These are the relevant posts for the issue above:

http://blogs.msdn.com/b/psssql/archive/2008/07/10/sql-server-reports-operating-system-error-1450-or-1452-or-665-retries.aspx
http://blogs.msdn.com/b/psssql/archive/2009/03/04/sparse-file-errors-1450-or-665-due-to-file-fragmentation-fixes-and-workarounds.aspx

By now it is quite clear that the issue is possibly a sparse file limitation related to file fragmentation. Now it is time for me to check whether there are other threads in the dump waiting on SyncWritePreemptive calls.
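
If the server is still up while the latch waits are piling up, the DMVs give you a live picture before you ever open the dump. A quick sketch (SQL Server 2005 and later; use the DAC if normal connections are not being accepted):

-- Tasks currently waiting on buffer or non-buffer latches, longest waits first.
SELECT session_id, wait_type, wait_duration_ms, resource_description, blocking_session_id
FROM sys.dm_os_waiting_tasks
WHERE wait_type LIKE 'PAGE%LATCH%'      -- PAGELATCH_* / PAGEIOLATCH_* (buffer latches)
   OR wait_type LIKE 'LATCH%'           -- non-buffer latches
ORDER BY wait_duration_ms DESC;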

Use the location provided in the Errorlog snippet reporting the Latch Timeout message to locate the mini-dump for the issue (in this case SQLDump0001.mdmp).

Now when you load the dump using WinDBG, you will see the following information:

Loading Dump File [D:\Microsoft SQL Server\MSSQL.1\MSSQL\LOG\SQLDump0001.mdmp]
User Mini Dump File: Only registers, stack and portions of memory are available

Comment: ‘Stack Trace’
Comment: ‘Latch timeout’

This dump file has an exception of interest stored in it.

The above tells you that this is a mini-dump for a Latch Timeout condition, along with the location from which you loaded the dump. Next, I use the following command to set my symbol path and direct the symbols downloaded from the Microsoft symbol server to a local symbol cache on my machine.

.sympath srv*D:\PublicSymbols*http://msdl.microsoft.com/download/symbols

Then I issue a reload command to load the symbols for sqlservr.exe. This can also be done using CTRL+L, providing the complete string above (without .sympath), checking the Reload checkbox and clicking OK. The only difference is that all the available public symbols for every module loaded in the dump will then be downloaded from the Microsoft Symbol Server.

.reload /f sqlservr.exe

The next thing is to verify that the symbols were loaded correctly using the lmvm sqlservr command. If they were, you should see output like the following; note the "(pdb symbols)" annotation and the path to the downloaded PDB.

0:005> lmvm sqlservr

start             end                 module name
00000000`01000000 00000000`03668000   sqlservr T (pdb symbols)          d:\publicsymbols\sqlservr.pdb\2A3969D78EE24FD494837AF090F5EDBC2\sqlservr.pdb


If the symbols were not loaded, you will see output like the following, with only export symbols available.

0:005> lmvm sqlservr
start end module name
00000000`01000000 00000000`03668000 sqlservr (export symbols) sqlservr.exe

I will use the !findstack command to locate all threads which have the function call SyncWritePreemptive on their callstack.

0:070> !findstack sqlservr!FCB::SyncWritePreemptive 0

Thread 069, 1 frame(s) match
Thread 074, 1 frame(s) match
Thread 076, 1 frame(s) match
Thread 079, 1 frame(s) match
Thread 081, 1 frame(s) match
Thread 082, 1 frame(s) match
Thread 086, 1 frame(s) match
Thread 089, 1 frame(s) match
Thread 091, 1 frame(s) match
Thread 095, 1 frame(s) match
Thread 098, 1 frame(s) match
Thread 099, 1 frame(s) match
Thread 104, 1 frame(s) match
Thread 106, 1 frame(s) match
Thread 107, 1 frame(s) match
Thread 131, 1 frame(s) match
Thread 136, 1 frame(s) match

0:070> ~81s
ntdll!ZwWaitForSingleObject+0xa:
00000000`77ef0a2a c3              ret
0:081> kL100

ntdll!ZwDelayExecution
kernel32!SleepEx
sqlservr!FCB::SyncWritePreemptive
sqlservr!FCB::PullPageToReplica
sqlservr!alloca_probe
sqlservr!BUF::CopyOnWrite
sqlservr!PageRef::PrepareToDirty
sqlservr!RecoveryMgr::DoCOWPreWrites

You could get all the callstacks with the function that you are searching for using the command: !findstack sqlservr!FCB::SyncWritePreemptive 3

If you look at the thread that ended up raising the Latch Timeout warning, you will see that it too was performing a CHECKDB.

0:074> .ecxr

0:074> kL100

kernel32!RaiseException
sqlservr!CDmpDump::Dump
sqlservr!CImageHelper::DoMiniDump
sqlservr!stackTrace
sqlservr!LatchBase::DumpOnTimeoutIfNeeded
sqlservr!LatchBase::PrintWarning
sqlservr!alloca_probe
sqlservr!BUF::AcquireLatch


sqlservr!UtilDbccCreateReplica
sqlservr!UtilDbccCheckDatabase
sqlservr!DbccCheckDB
sqlservr!DbccCommand::Execute

If you cannot find the thread which raised the Latch Timeout warning, you can print out all the callstacks in the dump using ~*kL100 and then search for the latch timeout frames shown in the callstack above (for example, LatchBase::PrintWarning or LatchBase::DumpOnTimeoutIfNeeded). It is quite clear from the callstack above that this thread was also performing a CHECKDB operation, which matches the input buffer reported in the SQL Errorlog for the Latch Timeout dump.

In case you are not able to identify the issue right off the bat, you need to check the build that you are on and look for latch timeout issues that were addressed for the SQL Server release that you are using. The symptoms section of a KB Article should have sufficient information for you to compare against your current symptoms and scenario and determine whether the article is applicable in your case.
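
Checking the exact build you are on is a one-liner, and works on SQL Server 2005 and later:

SELECT SERVERPROPERTY('ProductVersion') AS build,          -- e.g. 9.00.xxxx.xx
       SERVERPROPERTY('ProductLevel')   AS service_pack,   -- RTM, SP1, SP2, ...
       SERVERPROPERTY('Edition')        AS edition;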

Now is the time when you need some context about the operations that were occurring on the server to determine the actual issue. From what I heard from the system administrators, a CHECKDB was being executed on the database while the application was executing DML operations against it. Additionally, the volume on which the database files reside is fragmented and the database in question is large (>750 GB).

Based on the two MSDN blog posts mentioned above, it is quite clear that it is possible to run into sparse file limitations when there is a high amount of fragmentation on the disks, or when a large number of concurrent changes occur on the database while a CHECKDB is executing on it. A number of Windows and SQL Server updates, along with workarounds for running CHECKDB on such databases, are mentioned in the second blog post. On a separate note, this is not an issue with CHECKDB! It is a sparse file limitation that you are hitting at the OS layer. Remember that CHECKDB, starting with SQL Server 2005, creates an internal snapshot (a sparse file) to execute the consistency check. Paul Randal's tweet made me add this line to call this out explicitly!
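
If you keep running into the sparse file limitation and can afford a maintenance window, one commonly used workaround is to skip the internal snapshot altogether by running CHECKDB with the TABLOCK option (the database name below is a placeholder):

-- TABLOCK makes CHECKDB take table locks instead of creating an internal
-- database snapshot (the sparse file), at the cost of blocking concurrent DML.
-- Note that some checks, such as DBCC CHECKCATALOG, are skipped with TABLOCK.
DBCC CHECKDB (N'YourLargeDatabase') WITH TABLOCK, ALL_ERRORMSGS, NO_INFOMSGS;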

As always… if you are still stuck, contact Microsoft CSS with the mini-dump file, the SQL Errorlog and the Windows Event Logs. It is quite possible that CSS will ask you to collect additional data, as most Latch Timeout issues are generally an after-effect of a previous issue. In this case, it was OS Error 1450.

Well… That’s it for today! Hope this is helpful for the next time you encounter a Latch Timeout issue.

Additional References

Four stages of NTFS File Growth
KB 315263 – How to read the small memory dump files that Windows creates for debugging


Debugging that Assert condition: Maybe Not

Last week I had shown how to debug non-yielding scheduler and deadlocked schedulers memory dumps. In this post, I shall talk about Assertions. When an assert condition check fails in the SQL Server code base, a mini-dump of the SQL Server process is created which is found by default in the SQL Server LOG folder.

An assert is basically a predicate (a true/false condition) placed in a program's code by the developer, which he/she expects to always evaluate to TRUE. If it ever evaluates to FALSE, the assert failure code written by the developer is executed.

This is one of those scenarios where you do NOT actually need to debug the assert dump; you can achieve a lot without even opening a debugger. So if you were hoping for some more cool debugging steps, I will have to disappoint you until my next post! Whenever an assert condition fails, the message is logged in the SQL Server Errorlog along with the failing assert condition.

Let me walk you through an example. You would see messages similar to the one below in the Windows Application Event log when an Assertion check fails:

MSSQLSERVER    Error    (2)    3624    N/A    <server name> A system assertion check has failed. Check the SQL Server error log for details. Typically, an assertion failure is caused by a software bug or data corruption. To check for database corruption, consider running DBCC CHECKDB. If you agreed to send dumps to Microsoft during setup, a mini dump will be sent to Microsoft. An update might be available from Microsoft in the latest Service Pack or in a QFE from Technical Support.
MSSQLSERVER    Error    (2)    17066    N/A    <server name> "SQL Server Assertion: File: <""logmgr.cpp"">, line=<line number> Failed Assertion = ‘!(minLSN.m_fSeqNo < lfcb->lfcb_fSeqNo)‘. This error may be timing-related. If the error persists after rerunning the statement, use DBCC CHECKDB to check the database for structural integrity, or restart the server to ensure in-memory data structures are not corrupted."

So if you are monitoring your application event logs, 17066 and 3624 are the error numbers that you need to watch for. Once you are aware that an assertion failure has occurred, you can check the SQL Server Errorlog from the time period when the issue was reported and see what errors were logged. You will find the same verbiage in the SQL Errorlog as in the application log (see the Errorlog excerpt further below).
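
If you would rather be notified than go hunting for these, a SQL Agent alert on error 3624 works nicely since the error is written to the event log. A minimal sketch; the operator name is a placeholder and must already exist on your instance:

USE msdb;
GO
-- Fire an alert whenever error 3624 (assertion check failed) is logged.
EXEC dbo.sp_add_alert
    @name = N'SQL Server assertion failure (3624)',
    @message_id = 3624,
    @severity = 0,                      -- must be 0 when @message_id is specified
    @include_event_description_in = 1;  -- include the logged text in the e-mail notification
GO
-- Route the alert to an existing operator (placeholder name).
EXEC dbo.sp_add_notification
    @alert_name = N'SQL Server assertion failure (3624)',
    @operator_name = N'DBA Team',
    @notification_method = 1;           -- 1 = e-mail
GO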

Location: "logmgr.cpp" <line number>
Expression: !(minLSN.m_fSeqNo < lfcb->lfcb_fSeqNo)


While acting as a mirroring partner for database ‘<database name>’, server instance ‘<server instance name>’ encountered error 3624, status 1, severity 20. Database mirroring will be suspended.  Try to resolve the error and resume mirroring.

Notice the additional Errorlog entry reported above pertaining to Database Mirroring. I find that the database mirroring setup for a user database encountered the assertion that was reported earlier in the SQL Errorlog. The build number of the SQL Server instance in question is 5000 (SQL Server 2005 Service Pack 4). The first thing you should do is search for the assert expression [!(minLSN.m_fSeqNo < lfcb->lfcb_fSeqNo)] on Bing (or any other search engine, if you insist on using a particular one 😉).

Additionally, as the error message suggests, run a CHECKDB on the database in question and find out whether any inconsistencies are reported. If there are, the first task is to restore the last known good backup or correct the inconsistencies before doing anything else.
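
A quick sketch of that check, along with a look at msdb's suspect_pages table, which records pages SQL Server has already flagged as damaged (the database name is a placeholder):

-- Full consistency check: report every error and suppress informational messages.
DBCC CHECKDB (N'YourMirroredDatabase') WITH ALL_ERRORMSGS, NO_INFOMSGS;

-- Pages previously flagged by 823/824 errors, checksum failures, etc.
SELECT database_id, file_id, page_id, event_type, error_count, last_update_date
FROM msdb.dbo.suspect_pages;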

Getting back to the search, you will find KB981273 and KB2403218 in the results. Pay close attention to the symptoms section of each KB Article, as fixes for assertion failures are very specific. If the symptoms do not match, then the KB Article you found might not apply to your situation.

Symptoms from KB981273:

In Microsoft SQL Server 2005, when you restart the SQL Server service, error messages that resemble the following are logged in the SQL Server Errorlog file.

Symptoms from KB2403218:

Consider the following scenario:

  • You create a database mirroring session between two instances of SQL Server 2005 and SQL Server 2008 and SQL Server 2008 R2 by using the High-Performance (asynchronous) mode. One instance is the principal SQL Server that contains the principal database, and the other instance is the mirror SQL Server that contains the mirror database.
  • A long delay occurs between the principal and mirror databases. For example, one of the following delays occurs:
    • The redo process of the mirror database is slower than the transfer of the transaction from the principal database to the mirror database.
    • A transaction that requires a long time runs on the principal database. For example, an ALTER INDEX REBUILD query runs on the principal database.

In this scenario, the session is suspended, and a mini-dump file is generated in the log folder on the mirror server.

As you can see, KB2403218 is the one applicable in this scenario, whereas the symptoms described in the first KB Article do not match the issue! Next, check the release vehicle of the fix for the SQL Server release that you are using and the releases affected by this issue. You will find that there are two builds for SQL Server 2005 which contain the fix:

2438344 Cumulative update package 13 for SQL Server 2005 Service Pack 3 (Build: 4315)
2489409 Cumulative update package 2 for SQL Server 2005 Service Pack 4 (Build: 5254)

The interesting point to note here is that the build of the instance in question (5000) is higher than the Service Pack 3 Cumulative Update #13 build (4315) that contains the fix. That does not mean the instance has the fix, though: Service Pack 4 RTM (build 5000) does not include it, and on the Service Pack 4 branch the fix only shipped in Cumulative Update #2 (build 5254). So you would need to apply either CU2 or the latest Cumulative Update package for Service Pack 4 to resolve this issue.

As always, if in doubt, contact Microsoft CSS with the assertion mini-dump, the SQL Errorlogs and the Windows Event logs. CSS engineers might request additional data such as profiler traces, a filtered dump from the next occurrence, or maybe even a repro if the available data is not sufficient to perform a complete root cause analysis and identify the problem.

To summarize:
1. Just because an assert condition matches a KB Article, it doesn’t mean that you have identified your issue.
2. Verify that the symptoms match the condition that you are facing as assert conditions might be common for two completely different issues.
3. Even though you might be on a higher build than the build number of a particular release vehicle that contains the fix, you still might not have the fix at the Service Pack level you are on, as the fix might only have been included in a later cumulative update for that Service Pack.
4. If you cannot find a matching KB Article, don't have any inconsistencies in the database and are able to reproduce the scenario consistently, then check whether you can reproduce it on the latest available build for the SQL Server release that you are using, or contact CSS.

Debugging is not always a necessity to investigate an issue! 🙂