Debugging the deadlock but for the scheduler: Part 1 of 2

I recently blogged about debugging a non-yielding scheduler condition for a SQL Server instance. In today’s post, I shall talk about debugging a “deadlocked schedulers” issue, also known as the infamous 17884 to you old-school folks. Starting with SQL Server 2005, if ALL schedulers are considered stuck and SQL Server is not under active memory pressure, a 17884 or 17888 condition is reported. In this post, I shall show you how to dig into the root cause of a 17888 message reported in the SQL Errorlog. There are subtle differences between the 17888 and 17884 conditions detected and reported by the SchedulerMonitor. A 17884 generally tells you that no work is being processed by a particular NUMA node (for non-NUMA machines, it denotes the entire instance). A 17888, on the other hand, tells you that no new work requests are being processed on a particular NUMA node because worker threads are waiting on a particular resource (or resources). The whitepaper mentioned at the end of this post covers the nitty-gritty details of the detection algorithm.
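
As an aside (this is not something you get from the dump, but a sketch I find handy): if the instance is still reachable, for example over the dedicated administrator connection (sqlcmd -A), the standard sys.dm_os_schedulers DMV shows the very symptom the SchedulerMonitor is reporting, namely work requests queuing up on the schedulers of a node without being picked up by a worker.

-- Minimal sketch using the standard sys.dm_os_schedulers DMV: a steadily growing
-- work_queue_count with no free workers on a node is the live view of the
-- condition behind a 17884/17888 report.
SELECT scheduler_id,
       parent_node_id,          -- NUMA node this scheduler belongs to
       current_tasks_count,
       runnable_tasks_count,
       work_queue_count,        -- new work requests not yet picked up by a worker
       active_workers_count,
       current_workers_count
FROM sys.dm_os_schedulers
WHERE status = 'VISIBLE ONLINE'
ORDER BY parent_node_id, scheduler_id;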

Description of the error messages from the sys.messages catalog

17884 – New queries assigned to process on Node %d have not been picked up by a worker thread in the last %d seconds. Blocking or long-running queries can contribute to this condition, and may degrade client response time. Use the "max worker threads" configuration option to increase number of allowable threads, or optimize current running queries. SQL Process Utilization: %d%%. System Idle: %d%%.

17888 – All schedulers on Node %d appear deadlocked due to a large number of worker threads waiting on %ls. Process Utilization %d%%.

When a 17888 is reported, you will find messages of the following nature in the SQL Errorlog:

2008-03-25 14:14:53.37 Server      ***Stack Dump being sent to g:\mssql\data\MSSQL\LOG\SQLDump0006.txt
2008-03-25 14:14:53.38 Server      * BEGIN STACK DUMP:
2008-03-25 14:14:53.38 Server      *   03/25/08 14:14:53 spid 0
2008-03-25 14:14:53.38 Server      *
2008-03-25 14:14:53.38 Server      * Deadlocked Schedulers
2008-03-25 14:14:53.38 Server      * Short Stack Dump
2008-03-25 14:14:53.41 Server      Stack Signature for the dump is 0x00000260
2008-03-25 14:14:56.44 Server      External dump process return code 0x20000001.
External dump process returned no errors.

2008-03-25 14:14:56.44 Server      All schedulers on Node 0 appear deadlocked due to a large number of worker threads waiting on LCK_M_S. Process Utilization 1%.

Again, note the highlighted lines and text; where their significance is not self-explanatory at this point, I shall talk about them later in the post. The last message in the above excerpt very clearly tells you that you have a blocking issue on the SQL instance: most of the worker threads on Node 0 are waiting for a LCK_M_S lock which they cannot acquire because of conflicting locks held on the object(s) by another session. This time around the SQLDumpXXXX.txt filename is mentioned in the Errorlog, unlike the non-yielding scheduler message. So the next step is to locate the .mdmp file (in this case SQLDump0006.mdmp) in the same location and load it into WinDBG for analysis.

If you encounter this condition on a SQL Server Failover Cluster instance, it will result in IsAlive check failures and the SQL service being restarted on the same node, or could even result in a failover if the SQL instance doesn’t respond to the stop request in time. Note the “External dump process returned no errors.” message in the output above: it means that the dump generation routine didn’t encounter any errors. Before I move on to the interesting part of looking through the data, let me set expectations: there is very little information that you can coax out of a mini-dump for a deadlocked schedulers issue while using public symbols. Even with the private symbols that CSS engineers have access to, it might not be possible to perform a complete root cause analysis of the issue. The main reason is that even if CSS engineers can determine the queries being executed on the SQL instance from the mini-dump, the origin of the blocking chain (which is clearly the culprit here, going by the error message excerpt in the Errorlog) cannot be determined from the mini-dump. This invariably ends up requiring the collection of the SQL Server Perf Stats script output (to capture blocking information) and profiler traces to identify the queries executed by the head blocker which caused the blocking condition.
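
If you can still connect while the condition is active (the dedicated administrator connection usually still works even when regular connections are starved of workers), a query along the following lines, a sketch built only on the standard sys.dm_os_waiting_tasks DMV, captures the same kind of blocking-chain snapshot that the Perf Stats script collects and points you at the head blocker.

-- Who is waiting on locks and who is blocking them (sketch).
SELECT wt.session_id,
       wt.wait_type,
       wt.wait_duration_ms,
       wt.blocking_session_id,
       wt.resource_description
FROM sys.dm_os_waiting_tasks AS wt
WHERE wt.wait_type LIKE 'LCK_%'
ORDER BY wt.wait_duration_ms DESC;

-- Candidate head blockers: sessions that block others but are not themselves
-- waiting on a lock.
SELECT DISTINCT wt.blocking_session_id AS head_blocker_spid
FROM sys.dm_os_waiting_tasks AS wt
WHERE wt.wait_type LIKE 'LCK_%'
  AND wt.blocking_session_id IS NOT NULL
  AND wt.blocking_session_id NOT IN (SELECT session_id
                                     FROM sys.dm_os_waiting_tasks
                                     WHERE wait_type LIKE 'LCK_%'
                                       AND session_id IS NOT NULL);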

Now when you load the dump using WinDBG, just as with the non-yielding scheduler issue, you will see the following information:

Loading Dump File [g:\mssql\data\MSSQL\LOG\SQLDump0006.mdmp]
User Mini Dump File: Only registers, stack and portions of memory are available

Comment: ‘Stack Trace’
Comment: ‘Deadlocked Schedulers’

The above tells you that this is a mini-dump generated for a deadlocked schedulers condition, along with the location it was loaded from. Next, I use the .sympath command to set my symbol path, directing the symbols downloaded from the Microsoft symbol server to a local symbol cache on my machine.

.sympath srv*D:\PublicSymbols*http://msdl.microsoft.com/download/symbols

Then I issue a reload command to load the symbols for sqlservr.exe. This can also be done using CTRL+L and providing the complete string above (without .sympath), checking the Reload checkbox and clicking on OK. The only difference is that public symbols will then be downloaded from the Microsoft Symbol Server for all the loaded modules in the dump for which they are available, not just for sqlservr.exe.

.reload /f sqlservr.exe

The next thing is to verify that the symbols were loaded correctly using the lmvm sqlservr command. If the symbols were loaded correctly, you should see output like the following; note the (pdb symbols) tag and the path to the downloaded PDB file.

0:019> lmvm sqlservr
start    end        module name
01000000 02c0a000   sqlservr T (pdb symbols)          d:\publicsymbols\sqlservr.pdb\389EF554D94A4947846D85FCDC4233382\sqlservr.pdb

If symbols were not loaded, then you will see an output as shown below.

0:005> lmvm sqlservr
start    end        module name
01000000 02c0a000   sqlservr (export symbols) sqlservr.exe

Since I know that the deadlocked schedulers condition was reported because of a blocking issue, I shall use the !findstack debugger command to find all the call-stacks that are potentially waiting for a lock:

!findstack sqlservr!LockOwner::Sleep 2

Output for one of the threads from the above command:

Thread 271, 1 frame(s) match
          00 6db5eef8 7c821fc4 ntdll!KiFastSystemCallRet
          01 6db5eefc 77e43c1d ntdll!NtSignalAndWaitForSingleObject
          02 6db5ef70 010034f2 kernel32!SignalObjectAndWait
          03 6db5ef94 01002172 sqlservr!SOS_Scheduler::Switch
          04 6db5f108 01002cb9 sqlservr!SOS_Scheduler::SwitchContext
          05 6db5f120 01001c75 sqlservr!SOS_Scheduler::SuspendNonPreemptive
          06 6db5f138 01002f25 sqlservr!SOS_Scheduler::Suspend
          07 6db5f15c 01056f6a sqlservr!SOS_Event::Wait
       * 08 6db5f198 01057081 sqlservr!LockOwner::Sleep
          09 6db5f238 0100a75d sqlservr!lck_lockInternal
          10 6db5f2b4 0100d506 sqlservr!GetLock
          11 6db5f324 010432c7 sqlservr!BTreeRow::AcquireLock
          12 6db5f3a4 0100dc00 sqlservr!IndexDataSetSession::GetRowByKeyValue
      ….
      ….
          33 6db5feb0 010b9064 sqlservr!SystemThread::RunWorker
          34 6db5ff10 010b9201 sqlservr!SystemThreadDispatcher::ProcessWorker
          35 6db5ff78 781329bb sqlservr!SchedulerManager::ThreadEntryPoint
          36 6db5ffb0 78132a47 msvcr80!_callthreadstartex
          37 6db5ffb8 77e6608b msvcr80!_threadstartex
          38 6db5ffec 00000000 kernel32!BaseThreadStart

As you can see from the above call-stack, the worker thread is waiting for the lock it requested to become available. Most of the call-stacks reported in the !findstack output were similar: 183 threads (counted by the number of occurrences of the "match" keyword in the !findstack output) reported LockOwner::Sleep in their call-stack. This confirms a serious blocking issue, caused either by a large number of concurrent requests asking for the same conflicting lock(s) held by other thread(s), or by a single long-running or uncommitted transaction which forces all other threads to wait until the query completes or the transaction commits and the lock is released. In my experience of handling such issues, the latter cause is the most prevalent.
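
Since a long-running or uncommitted transaction at the head of the chain is the most common culprit, a quick check along these lines on a live system will usually surface it. This is only a sketch using the standard sys.dm_tran_* DMVs, not something you can run against the dump.

-- Open transactions ordered by age; the oldest one is a good suspect for the
-- head of the blocking chain.
SELECT st.session_id,
       at.transaction_id,
       at.name AS transaction_name,
       at.transaction_begin_time,
       DATEDIFF(second, at.transaction_begin_time, GETDATE()) AS open_for_seconds
FROM sys.dm_tran_active_transactions AS at
JOIN sys.dm_tran_session_transactions AS st
     ON st.transaction_id = at.transaction_id
ORDER BY at.transaction_begin_time;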

If you need to find out the total number of threads in the dump, use the ~* command, which lists every thread.

0:278> ~*
   0  Id: 45c.43c Suspend: 1 Teb: 7ffdd000 Unfrozen
      Start: sqlservr!wmainCRTStartup (02432df8)
      Priority: 0  Priority class: 32  Affinity: ff
   1 Id: 45c.51c Suspend: 1 Teb: 7ffda000 Unfrozen
      Start: msvcr80!_threadstartex (781329e1)
      Priority: 0  Priority class: 32  Affinity: 22
….
.278  Id: 45c.2b4c Suspend: 1 Teb: 7fea3000 Unfrozen
      Start: msvcr80!_threadstartex (781329e1)
      Priority: 0  Priority class: 32  Affinity: ff

This means that 183 out of 279 threads (roughly 65% of the worker threads) are waiting for locks! But what if the wait type reported in the 17888 message is not a lock wait type — what keyword would you search for with !findstack then? That is an interesting scenario: in that case you can use another debugger command, !uniqstack, to list the unique call-stacks in the dump.

0:278> !uniqstack

Partial output of the above command:
Processing 279 threads, please wait

.134  Id: 45c.30a8 Suspend: 1 Teb: 7ff36000 Unfrozen
      Start: msvcr80!_threadstartex (781329e1)
      Priority: 0  Priority class: 32  Affinity: 22
ChildEBP RetAddr
6db5eef8 7c821fc4 ntdll!KiFastSystemCallRet
6db5eefc 77e43c1d ntdll!NtSignalAndWaitForSingleObject
6db5ef70 010034f2 kernel32!SignalObjectAndWait
6db5ef94 01002172 sqlservr!SOS_Scheduler::Switch
6db5f108 01002cb9 sqlservr!SOS_Scheduler::SwitchContext
6db5f120 01001c75 sqlservr!SOS_Scheduler::SuspendNonPreemptive
6db5f138 01002f25 sqlservr!SOS_Scheduler::Suspend
6db5f15c 01056f6a sqlservr!SOS_Event::Wait
6db5f198 01057081 sqlservr!LockOwner::Sleep
6db5f238 0100a75d sqlservr!lck_lockInternal
6db5f2b4 0100d506 sqlservr!GetLock
6db5f324 010432c7 sqlservr!BTreeRow::AcquireLock
6db5f3a4 0100dc00 sqlservr!IndexDataSetSession::GetRowByKeyValue
….
….
6db5feb0 010b9064 sqlservr!SystemThread::RunWorker
6db5ff10 010b9201 sqlservr!SystemThreadDispatcher::ProcessWorker
6db5ff78 781329bb sqlservr!SchedulerManager::ThreadEntryPoint
6db5ffb0 78132a47 msvcr80!_callthreadstartex
6db5ffb8 77e6608b msvcr80!_threadstartex
6db5ffec 00000000 kernel32!BaseThreadStart

Total threads: 279
Duplicate callstacks: 219 (windbg thread #s follow):
3, 12, 13, 15, 16, 17, 19, 21, 22, … …. 276, 277, 278


With the thread IDs reported in the duplicate call-stacks output (the numbers in the list above), you will have to manually (with some soul searching) look through a few thread call-stacks using the commands below and check which call-stack pattern is most common among the duplicate threads. Disclaimer: this is not an easy task unless you engage in this kind of debugging on a regular basis, and it is definitely not for the faint-hearted or easily irritated!

~[Thread ID]s
kL100

Example:

0:052> ~53s
eax=553985c0 ebx=7ff2ce38 ecx=00000000 edx=00000000 esi=00001bac edi=00000000
eip=7c82ed54 esp=693aeefc ebp=693aef70 iopl=0         nv up ei pl zr na pe nc
cs=001b  ss=0023  ds=0023  es=0023  fs=003b  gs=0000             efl=00000246
ntdll!KiFastSystemCallRet:
7c82ed54 c3              ret

0:053> kL100

ChildEBP RetAddr
6db5eef8 7c821fc4 ntdll!KiFastSystemCallRet
6db5eefc 77e43c1d ntdll!NtSignalAndWaitForSingleObject
6db5ef70 010034f2 kernel32!SignalObjectAndWait
6db5ef94 01002172 sqlservr!SOS_Scheduler::Switch
6db5f108 01002cb9 sqlservr!SOS_Scheduler::SwitchContext
6db5f120 01001c75 sqlservr!SOS_Scheduler::SuspendNonPreemptive
6db5f138 01002f25 sqlservr!SOS_Scheduler::Suspend
6db5f15c 01056f6a sqlservr!SOS_Event::Wait
6db5f198 01057081 sqlservr!LockOwner::Sleep
6db5f238 0100a75d sqlservr!lck_lockInternal
6db5f2b4 0100d506 sqlservr!GetLock
6db5f324 010432c7 sqlservr!BTreeRow::AcquireLock
6db5f3a4 0100dc00 sqlservr!IndexDataSetSession::GetRowByKeyValue
….
….
6db5feb0 010b9064 sqlservr!SystemThread::RunWorker
6db5ff10 010b9201 sqlservr!SystemThreadDispatcher::ProcessWorker
6db5ff78 781329bb sqlservr!SchedulerManager::ThreadEntryPoint
6db5ffb0 78132a47 msvcr80!_callthreadstartex
6db5ffb8 77e6608b msvcr80!_threadstartex
6db5ffec 00000000 kernel32!BaseThreadStart

TIP: The interesting thing to note is that the thread IDs that you see above directly match the OS thread ID reported in the os_thread_id column of sys.dm_os_threads or the kpid column of sys.sysprocesses (keep in mind that WinDBG displays thread IDs in hexadecimal). So if you have a monitoring tool that captures the queries being executed by the active SQL Server sessions, you can go back to your monitoring repository and dig up the queries that these threads were executing. If such data is not available from the time of the issue, more often than not you will end up having to collect additional data for the next occurrence of the problem!
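
On a live system (or in a monitoring tool that polls the DMVs), the mapping from an OS thread ID seen in the debugger to the session and its current statement can be done with a sketch like the one below, using only standard DMVs. The 0x30A8 value is a hypothetical thread ID lifted from the WinDBG output above; remember that WinDBG shows thread IDs in hexadecimal while os_thread_id is decimal.

-- Map an OS thread id from the dump to its session and current batch (sketch).
-- 0x30A8 (12456 decimal) is a hypothetical value taken from the WinDBG output.
SELECT th.os_thread_id,
       ta.session_id,
       er.status,
       er.wait_type,
       er.blocking_session_id,
       est.text AS current_batch
FROM sys.dm_os_threads AS th
JOIN sys.dm_os_workers AS w ON w.thread_address = th.thread_address
JOIN sys.dm_os_tasks AS ta ON ta.worker_address = w.worker_address
LEFT JOIN sys.dm_exec_requests AS er ON er.session_id = ta.session_id
OUTER APPLY sys.dm_exec_sql_text(er.sql_handle) AS est
WHERE th.os_thread_id = 0x30A8;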

Also check whether the deadlocked schedulers condition matches any KB article (update) already released. Remember that deadlocked schedulers is a symptom and not the cause! So if you find a KB article which mentions a deadlocked schedulers issue being addressed in a later build, make sure that its symptoms match the symptoms you are experiencing on your server. If in doubt, ask CSS!
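
To compare your build against the fix level called out in a KB article, the standard SERVERPROPERTY function reports the version and patch level:

-- Current build and patch level, for comparison against the build mentioned in a KB article.
SELECT SERVERPROPERTY('ProductVersion') AS product_version,
       SERVERPROPERTY('ProductLevel')   AS product_level,   -- RTM or SPn
       SERVERPROPERTY('Edition')        AS edition;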

If you are stuck and have no clue why the issue happened, this would be a good time to engage Microsoft CSS and provide them with the SQL Errorlogs, the dump file, and the Windows event logs saved in .txt format for further analysis.

Reference:
KB 315263 – How to read the small memory dump files that Windows creates for debugging
How To Diagnose and Correct Errors 17883, 17884, 17887, and 17888 whitepaper
