SQL Saturday #76: Gearing up for the fun remotely


I am quite excited about this Saturday (January 28th, 2012), which will witness SQL Saturday #76 being organized in Auckland. This time around I will not be physically present for the event but will be presenting via Live Meeting on “Debugging the deadlock for the scheduler”. It would have been awesome to be there in person, but I shall make the most of the remote opportunity! As they say… if wishes were horses, beggars would ride! I am going to show you some cool stuff that you can do when you encounter a Deadlocked Schedulers scenario in SQL Server, including a demo on debugging a SQL dump!

There are a bunch of other interesting sessions on a whole gamut of topics for you to pick and choose from! Space is limited, so if you want to attend and be part of the fun and learning, get yourself registered here pronto. The list of sessions for the day is available here.

Venue: Microsoft New Zealand, Level 5, 22 Viaduct Harbour Ave, Auckland CBD
Date: January 28th, 2012
Time: Registration opens at 8.30AM NZ time | Event will conclude around 5PM NZ time

Looking forward to another fun-filled Saturday!

Deadlocked Schedulers and event notifications with a bit of Powershell


I had written 2 posts recently about Deadlocked Schedulers debugging (Part 1 | Part 2) which basically walked through 17884 and 17888 conditions and the steps that you could take to debug such an issue. A quote from both these articles:

So if you have a monitoring tool that is capturing the output of queries being executed by the active SQL Server sessions, then you can go back to your monitoring repository and dig up the queries which these threads were executing. If such data is not available from the time of the issue, more often than not you will end up having to collect additional data for the next problem occurrence!

You will need to collect additional data if there wasn’t a monitoring tool collecting the same, in order to determine the set of queries and the sequence of events leading up to the issue. CSS engineers can dig into the dump using the private symbols and probably provide you with additional information which can point you in the right direction. But alas, even they do not have magic wands if you cannot provide them with additional information like blocking output/profiler traces for 17884 and 17888 conditions caused by excessive blocking or parallelism.

SQL Server 2008 Service Pack 2 provided an update which made the FailoverAnalysis.sql script available again. This script was available in SQL Server 2005 and is now available for SQL Server 2008 as well. It is particularly useful for SQL Server failover clusters, where you can configure Sqldumper to generate a dump before the failover occurs and also set the SqlDMVScriptTimeout property to get an output of the DMVs used in the script before a failover happens.
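These settings are exposed as private properties of the clustered SQL Server resource. As a rough sketch (not a recipe) of setting them with the FailoverClusters PowerShell module, where the resource name and the values are placeholders and the property names should be verified against the KB article referenced below:

# Sketch: set the Sqldumper-related private properties on a clustered SQL Server resource
# (requires the FailoverClusters module; resource name and values below are placeholders)
Import-Module FailoverClusters
$res = Get-ClusterResource "SQL Server (MSSQLSERVER)"
$res | Set-ClusterParameter -Name SqlDumperDumpTimeOut -Value 10000
$res | Set-ClusterParameter -Name SqlDMVScriptTimeout -Value 30000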


You can refer to the “Cluster failovers and the Sqldumper.exe utility” section in KB 917825 for more details on the above mentioned feature. So what am I doing writing this post?

This post is aimed at SQL Server standalone instances, where you cannot benefit from the Sqldumper integration above. Deadlocked Schedulers are reported with EventIDs 17884 or 17888. I am going to use this to create a scheduled task that fires on an Event Log entry for these two specific EventIDs. I am a big fan of automating repetitive tasks, and the benefit of such automation is that you can get additional data from the problem occurrence without having to rely solely on a SQL Server mini-dump memory file and be at the mercy of the Windows debugging tools. For this post, you can say a small good-bye to your favorite debugging tool and read on for some automation in monitoring!

However, there are some requirements for this kind of indigenous monitoring to work:

1. Dedicated Administrator Connection (DAC) needs to be enabled for your SQL Server instance(s) – This is required because during a Deadlocked Schedulers condition, you would have run out of worker threads, so new connection requests cannot be processed until you establish a connection through the DAC. If you are planning to use Event Subscription and remote data gathering, then you need Remote DAC enabled for the SQL instance(s) being monitored (a small sketch for enabling this follows the list). Note: Event subscription is a feature available on Windows Server 2008 and above.

2. You need sysadmin privileges for the account that will be used to collect the output of the FailoverAnalysis.sql script.

3. Powershell and SQLPS need to be available on the server being monitored. If you don’t want to install the Powershell components, then the same functionality can be achieved using SMO and VBScript, which I had used extensively in the past till I became a fan of POSH awesomeness!
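For the first pre-requisite, remote DAC can be turned on with sp_configure. A minimal sketch, assuming the SQLPS snap-in (and hence Invoke-Sqlcmd) is already loaded and that your account has sysadmin rights; the server name is a placeholder:

# Sketch: enable remote Dedicated Administrator Connections on the instance being monitored
$query = "EXEC sp_configure 'remote admin connections', 1; RECONFIGURE;"
Invoke-Sqlcmd -ServerInstance "MYSERVER\MYINSTANCE" -Query $query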

Now that I have the pre-requisites out of the way, let’s get down to the good part…

The Powershell Script

I had already mentioned that I will be using a Powershell script to get the output of the FailoverAnalysis.sql script for the SQL instance that is encountering the 17884 condition. The piping feature of Powershell really makes it easy to write compact scripts to perform a wide variety of tasks. The script is divided into three parts.

The first part of the script uses the script in the section “Adding the SQL Server Snap-ins to Windows PowerShell” of the article “Running SQL Server PowerShell” to initialize the SQLPS snap-in in the session in which you will be running the script.

The second part of the script reads the Application event log for the most recent informational message containing the text substring “have not been picked up by a worker thread in the last 60 seconds” from any SQL Server instance as the source.

The third part of the script figures out the SQL Server instance name and connects to it to execute the FailoverAnalysis.sql script. You can modify the output location in the script below as appropriate for your environment.

Script filename: DeadlockSchedulerMonitor.ps1


#################################################################################
# Script Name: DeadlockSchedulerMonitor
# Author: Amit Banerjee
# Date: 9/1/2011
# Description:
# The script reads the first 17884 (Deadlocked Schedulers) event from the
# Windows Application Event log and identifies the source. Based on the
# source name, it connects to the SQL Server instance using a DAC
# connection and collects the FailoverAnalysis.sql script output.
#################################################################################

$vMsg = (Get-Date).Year.ToString() + (Get-Date).Month.ToString() + (Get-Date).Day.ToString() + (Get-Date).Hour.ToString() + (Get-Date).Minute.ToString() + ":Performing SQLPS snap-in initialization`n"
Write-Host $vMsg

# Call the initialization script to load the SQLPS snap-in in the session in which you are running this script
# Add the SQL Server Provider.
$ErrorActionPreference = "Stop"
$sqlpsreg = "HKLM:\SOFTWARE\Microsoft\PowerShell\1\ShellIds\Microsoft.SqlServer.Management.PowerShell.sqlps"
if (!(Test-Path $sqlpsreg))
{
    # The SQLPS registration key is missing, so the provider cannot be loaded
    throw "SQL Server Provider for Windows PowerShell is not installed."
}
else
{
    $item = Get-ItemProperty $sqlpsreg
    $sqlpsPath = [System.IO.Path]::GetDirectoryName($item.Path)
}

# Set mandatory variables for the SQL Server provider
Set-Variable -Scope Global -Name SqlServerMaximumChildItems -Value 0
Set-Variable -Scope Global -Name SqlServerConnectionTimeout -Value 30
Set-Variable -Scope Global -Name SqlServerIncludeSystemObjects -Value $false
Set-Variable -Scope Global -Name SqlServerMaximumTabCompletion -Value 1000

# Load the snap-ins, type data and format data
Push-Location
cd $sqlpsPath
Add-PSSnapin SqlServerCmdletSnapin100 -ErrorAction SilentlyContinue
Add-PSSnapin SqlServerProviderSnapin100 -ErrorAction SilentlyContinue
Update-TypeData -PrependPath SQLProvider.Types.ps1xml -ErrorAction SilentlyContinue
Update-FormatData -PrependPath SQLProvider.Format.ps1xml -ErrorAction SilentlyContinue
Pop-Location

$vMsg = (Get-Date).Year.ToString() + (Get-Date).Month.ToString() + (Get-Date).Day.ToString() + (Get-Date).Hour.ToString() + (Get-Date).Minute.ToString() + ":SQLPS snap-in initialization completed`n"
Write-Host $vMsg

# Local variables to store the event log source and the instance name to which the DAC connection needs to be made
[string] $vSource
[string] $vInstName
[string] $vFileName

$vMsg = (Get-Date).Year.ToString() + (Get-Date).Month.ToString() + (Get-Date).Day.ToString() + (Get-Date).Hour.ToString() + (Get-Date).Minute.ToString() + ":Reading Application Event Log`n"
Write-Host $vMsg

# Get the source which generated the 17884 message
$vSource = Get-EventLog -LogName "Application" -Message "*have not been picked up by a worker thread in the last 60 seconds*" -Source MSSQL* -Newest 1
$vSource = $vSource.Source.ToString()

$vMsg = (Get-Date).Year.ToString() + (Get-Date).Month.ToString() + (Get-Date).Day.ToString() + (Get-Date).Hour.ToString() + (Get-Date).Minute.ToString() + ":Connecting to SQL instance and executing script`n"
Write-Host $vMsg

# Check if it is a default instance or a named instance and collect the output accordingly
if ($vSource.Equals("MSSQLSERVER"))
{
    # Create the filename for the default instance
    $vFileName = "F:\" + (Get-Date).Year.ToString() + (Get-Date).Month.ToString() + (Get-Date).Day.ToString() + (Get-Date).Hour.ToString() + (Get-Date).Minute.ToString() + (Get-Date).Second.ToString() + "_" + $env:ComputerName.ToString() + "_FailoverAnalysis.OUT"
    # Use Invoke-Sqlcmd over a DAC connection to get the output of the script
    Invoke-Sqlcmd -InputFile "C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Install\FailoverAnalysis.sql" -ServerInstance $env:ComputerName -DedicatedAdministratorConnection | Out-File -FilePath $vFileName
}
else
{
    # Construct the SQL Server name to connect to for a named instance
    $vInstName = $env:ComputerName + "\" + $vSource.Split("$")[-1]
    # Create the filename for the named instance (replace the backslash so the name is valid in a file name)
    $vFileName = "F:\" + (Get-Date).Year.ToString() + (Get-Date).Month.ToString() + (Get-Date).Day.ToString() + (Get-Date).Hour.ToString() + (Get-Date).Minute.ToString() + (Get-Date).Second.ToString() + "_" + $vInstName.Replace("\", "_") + "_FailoverAnalysis.OUT"
    # Use Invoke-Sqlcmd over a DAC connection to get the output of the script
    Invoke-Sqlcmd -InputFile "C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Install\FailoverAnalysis.sql" -ServerInstance $vInstName -DedicatedAdministratorConnection | Out-File -FilePath $vFileName
}

$vMsg = (Get-Date).Year.ToString() + (Get-Date).Month.ToString() + (Get-Date).Day.ToString() + (Get-Date).Hour.ToString() + (Get-Date).Minute.ToString() + ":Script execution completed`n"
Write-Host $vMsg

The Task Scheduler Configuration

The next thing that needs to be done is to create a Task Scheduler task which executes the Powershell script above when a 17884 condition is reported.

1. Open the Task Scheduler MMC using Start –> Administrative Tools –> Task Scheduler to get the Task Scheduler UI. Right click on Event Viewer Tasks and click on Create Task (see screenshot below). Provide the appropriate settings, description and name. The salient points to keep in mind here are:
a. Use the Change User or Group option to provide an account which has permissions to log into the SQL Server instance(s) to gather the output of the FailoverAnalysis.sql script.
b. You can use the Configure for drop-down box to configure the task for Windows Server 2003 as well.
[Screenshot: Create Task dialog box with the general settings]

2. Next, click on the Triggers tab. Click on New, which will open the New Trigger dialog box. Use the Begin the task drop-down box to select the On an event option (see screenshot below). Then select the Custom radio button and click on the New Event Filter button, which will open the New Event Filter dialog box. Provide the options as shown by the highlighted items in the screenshot. In the event sources drop-down list, I selected all the SQL Server instances installed on my server. You can choose to leave the default option in case you plan to install more instances on this server; the default option is also beneficial if you choose to export this task to other servers. Configure the Advanced Settings options in the New Trigger dialog box as needed; I chose to stop the task if it runs for more than 30 minutes. Once that is done, you should see a new entry in the Triggers tab showing On an event with a custom event filter and the status as Enabled. (A small sketch for verifying the event filter appears after these steps.)
[Screenshot: New Trigger and New Event Filter dialog boxes]

3. Under the Actions tab, click on New. Select Start a program from the Action drop-down box. In the Program/script text box, provide Powershell.exe. In the Add arguments (optional) text box, provide the following: -File “F:\DeadlockSchedulerMonitor.ps1”. Alternatively, you can provide the path to a .cmd file whose contents are “Powershell.exe -File “F:\DeadlockSchedulerMonitor.ps1” > F:\DeadlockSchedulerMonitorLog.txt”. I went with the second option. The F: drive root is where my .ps1 script and .cmd file are present; you would need to modify this depending on where your script is located. Once that is done, your Actions tab should be similar to the screenshot below.

[Screenshot: Actions tab with the Start a program action configured]

4. I used the default settings for the Conditions and Settings tab. You can modify this as appropriate for your environment.

Once the task is created, all you need is for a 17884 condition to occur for the script output to be generated.
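If you want to sanity-check the trigger before an actual 17884 shows up, you can run the equivalent XPath filter against the Application log from PowerShell. This is a minimal sketch; the provider name below assumes a default instance, so substitute MSSQL$<InstanceName> for a named instance:

# Sketch: list any deadlocked scheduler events (17884/17888) already present in the Application log
$xpath = "*[System[Provider[@Name='MSSQLSERVER'] and (EventID=17884 or EventID=17888)]]"
Get-WinEvent -LogName "Application" -FilterXPath $xpath -ErrorAction SilentlyContinue |
    Select-Object TimeCreated, Id, ProviderName, Message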

Let’s see the configuration in action

[Screenshot: Application event log entries for the simulated 17884 condition]

I simulated a 17884 condition on my SQL Server instance to check if the newly created task gets fired (see the Application Log snippet above). Looking into the F: drive, I find that the following files were created:

20119212197_<sql instance name>_FailoverAnalysis.OUT
201192121918_<sql instance name>_FailoverAnalysis.OUT

The reason I have two files is that this is a two NUMA node box, and the deadlocked schedulers condition was reported for both nodes. Note that this script picks up the first occurrence of the message, which is reported at 60 seconds, since the message text used to fetch the event from the Application log specifically looks for the “60 seconds” keyword.

NOTE: You can monitor for 17888 events by using the following message text instead of the one used in the script above for 17884, and create a new task for the same. Message text: *appear deadlocked due to a large number of worker threads waiting*
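For example, the event lookup in the script would then become something along these lines (a sketch using the same cmdlet and parameters as the 17884 version):

# Sketch: fetch the newest 17888 (deadlocked due to worker threads waiting) event instead of 17884
$vSource = Get-EventLog -LogName "Application" -Message "*appear deadlocked due to a large number of worker threads waiting*" -Source MSSQL* -Newest 1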

What next?

Well with automation the possibilities are endless… Some of the quick things that come to my mind are:

1. You can create an event subscription and use the Forwarded Events log to track all 17884 events from the different servers which have SQL Server instances installed on them.
2. You can modify the Powershell script to send out email notifications to your DBAs when such an event occurs (a small sketch follows this list).
3. You can add further post-processing steps to the Powershell script to perform additional tasks.
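For the notification piece, Send-MailMessage (available in PowerShell 2.0) can be appended to the end of DeadlockSchedulerMonitor.ps1. The SMTP server and addresses below are placeholders, so treat this as a sketch:

# Sketch: notify the DBA team that a deadlocked schedulers condition was captured
Send-MailMessage -SmtpServer "smtp.mydomain.com" -From "sqlmonitor@mydomain.com" -To "dbateam@mydomain.com" `
    -Subject "Deadlocked Schedulers detected on $env:ComputerName" `
    -Body "FailoverAnalysis.sql output saved to $vFileName" -Attachments $vFileName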

If you have SCOM or any other Event log monitoring software, the only thing that you need to do is set up a custom alert for the 17884 or 17888 error messages.

Have fun monitoring and customizing further!

Reference:

Understanding and Using PowerShell Support in SQL Server 2008
Windows Server 2008 Event Subscription with Task Scheduling

Disclaimer: This script and the solution provided are “AS IS”, and any deployment that you do using similar logic requires due diligence and testing on your part. The testing that I did for this was on Windows Server 2008 R2 & SQL Server 2008 R2.

Debugging the deadlock but for the scheduler: Part 2 of 2


This is the second part of my Deadlocked Schedulers debugging posts. In the first post, I discussed the difference between a 17888 and a 17884 dump and showed the amount of information that can be extracted from a mini-dump generated for the first occurrence of a Deadlocked Schedulers issue. In this post, I shall walk you through a 17884 condition.

The first difference that you will notice with a 17884 message, i.e. Deadlocked Schedulers reported in the SQL Errorlog, is that, unlike the 17888 message, no information about the blocking resource is mentioned. If you inspect the SQL Errorlog, you will see only an error message of the following type:

2010-09-20 14:04:07.78 Server      ***Stack Dump being sent to D:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLServer\MSSQL\LOG\SQLDump0001.txt
2010-09-20 14:04:07.78 Server      *   09/20/10 14:04:07 spid 27276
2010-09-20 14:04:07.78 Server      *
2010-09-20 14:04:07.78 Server      *
2010-09-20 14:04:07.78 Server      * Deadlocked Schedulers
2010-09-20 14:04:07.78 Server      *
2010-09-20 14:04:07.78 Server      * *******************************************************************************
2010-09-20 14:04:07.78 Server      * -------------------------------------------------------------------------------
2010-09-20 14:04:07.78 Server      * Short Stack Dump
2010-09-20 14:04:07.82 Server      Stack Signature for the dump is 0x0000000000000285
External dump process returned no errors.

As mentioned in my previous posts, the dump process returned no errors (note the message in green above). This means that I can extract some useful information regarding the issue from the dump. One of the important things about a Deadlocked Schedulers (17884) condition is that you need to look through the Errorlogs for any other Access Violation or Assertion that occurred prior to the Deadlocked Schedulers event. Always check for previous errors first. The 17884 error messages are often symptoms pointing to a problem. Before investigating the dump, always review overall system health information such as performance information, event logs, SQL Server error logs, and other application logging information. Understanding prior problems on the system may quickly clarify why SQL Server detected and reported a health error. Previous errors should always be addressed before the investigation continues.
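If you want a quick pass over the Errorlogs for such prior errors before opening the dump, a small PowerShell sketch would be something like the following (the LOG directory path and the search patterns are illustrative, so adjust them for your instance):

# Sketch: look for access violations, assertions or earlier stack dumps logged before the deadlocked schedulers event
Select-String -Path "D:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLServer\MSSQL\LOG\ERRORLOG*" -Pattern "Access Violation", "SQL Server Assertion", "Stack Dump" |
    Select-Object Filename, LineNumber, Line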

The common reasons for a Deadlocked Schedulers are:

  1. All schedulers have encountered a 17883 condition.
  2. All workers are blocked on a critical resource.
  3. All workers are executing a long-running query.

I have very rarely seen #1 being the cause of Deadlocked Schedulers. More often than not it is #2 or #3.

Now, when you load the dump using WinDBG, as noted in my post on debugging a non-yielding scheduler issue, you will note similar things. On loading the dump, you will see the following information:

Loading Dump File [D:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLServer\MSSQL\LOG\SQLDump0001.mdmp]
User Mini Dump File: Only registers, stack and portions of memory are available

Comment: 'Stack Trace'
Comment: 'Deadlocked Schedulers'

The above tells you that this is a mini-dump for a Deadlocked Schedulers condition and the location from which you loaded the dump. Then I use the following command to set my symbol path and direct the symbols downloaded from the Microsoft symbol server to a local symbol file cache on my machine:

.sympath srv*D:\PublicSymbols*http://msdl.microsoft.com/download/symbols

Then I issue a reload command to load the symbols for sqlservr.exe. This can also be done using CTRL+L and providing the complete string above (without .sympath), checking the Reload checkbox and clicking on OK. The only difference here is that the public symbols for all loaded modules in the dump that are available on the Microsoft Symbol Server will be downloaded.

.reload /f sqlservr.exe

The next thing is to verify that the symbols were loaded correctly using the lmvm sqlservr command. If the symbols were loaded correctly, you should see the following output (note the text in green).

0:005> lmvm sqlservr
start             end                 module name
00000000`00530000 00000000`040de000   sqlservr T (pdb symbols)          d:\publicsymbols\sqlservr.pdb\D90C4EC67D442D692A791A19FBA638F2\sqlservr.pdb

If symbols were not loaded, then you will see an output as shown below.

0:005> lmvm sqlservr
start end module name
00000000`00530000 00000000`040de000 sqlservr (export symbols) sqlservr.exe

Since I didn’t have any errors prior to the deadlocked schedulers issue in the Errorlog, I don’t have a search string to look for in the dump. This makes it tricky, so I will have to manually look through the callstacks of all the threads in the dump and establish a common pattern. You can use the !uniqstack command to dump out all the unique callstacks in the dump along with the count of the duplicate callstacks.

0:005> !uniqstack

Output excerpt from previous command:

Total threads: 868
Duplicate callstacks: 810 (windbg thread #s follow):
4, 8, 10, … … 865, 866, 867

As you can see above, there are 868 threads in the dump, and 810 of them have callstacks listed as duplicates.

On manual inspection of the threads using the thread IDs mentioned in the Duplicate callstacks list (italicized numbers above), I did find a common pattern: there are multiple threads waiting for locks, and there are quite a few threads which are executing in parallel.

Two common thread stacks that emerged were:

Waiting on lock:

0:804> kL100
ntdll!ZwSignalAndWaitForSingleObject
kernel32!SignalObjectAndWait
sqlservr!SOS_Scheduler::Switch
sqlservr!SOS_Scheduler::SuspendNonPreemptive
sqlservr!SOS_Scheduler::Suspend
sqlservr!EventInternal<Spinlock<153,1,0> >::Wait
sqlservr!LockOwner::Sleep
sqlservr!lck_lockInternal


sqlservr!process_request
sqlservr!process_commands
sqlservr!SOS_Task::Param::Execute
sqlservr!SOS_Scheduler::RunTask
sqlservr!SOS_Scheduler::ProcessTasks
sqlservr!SchedulerManager::WorkerEntryPoint
sqlservr!SystemThread::RunWorker
sqlservr!SystemThreadDispatcher::ProcessWorker
sqlservr!SchedulerManager::ThreadEntryPoint
msvcr80!endthreadex
msvcr80!endthreadex
kernel32!BaseThreadInitThunk
ntdll!RtlUserThreadStart

Parallel thread:

0:852> kL100

ntdll!ZwWaitForSingleObject
KERNELBASE!WaitForSingleObjectEx
sqlservr!SOS_Scheduler::SwitchContext
sqlservr!SOS_Scheduler::SuspendNonPreemptive
sqlservr!SOS_Scheduler::Suspend
sqlservr!EventInternal<Spinlock<153,1,0> >::Wait
sqlservr!EventInternal<Spinlock<153,1,0> >::WaitAllowPrematureWakeup
sqlservr!XchngWait<Spinlock<38,10,0> >
sqlservr!CXPort::Open
sqlservr!CXTransLocal::Synchronize
sqlservr!CQScanXProducerNew::Open
sqlservr!FnProducerOpen
sqlservr!FnProducerThread
sqlservr!SubprocEntrypoint
sqlservr!SOS_Task::Param::Execute
sqlservr!SOS_Scheduler::RunTask
sqlservr!SOS_Scheduler::ProcessTasks
sqlservr!SchedulerManager::WorkerEntryPoint
sqlservr!SystemThread::RunWorker
sqlservr!SystemThreadDispatcher::ProcessWorker
sqlservr!SchedulerManager::ThreadEntryPoint
msvcr80!endthreadex
msvcr80!endthreadex
kernel32!BaseThreadInitThunk
ntdll!RtlUserThreadStart

The next thing that I shall do is pick one of the common relevant functions from the above callstacks and search for the number of occurrences of these frames in all the threads present in the dump using the !findstack command.

!findstack sqlservr!LockOwner::Sleep 0

Excerpt from the output of the above command:

Thread 054, 1 frame(s) match
Thread 056, 1 frame(s) match
Thread 059, 1 frame(s) match
Thread 061, 1 frame(s) match
Thread 062, 1 frame(s) match


Thread 799, 1 frame(s) match
Thread 800, 1 frame(s) match
Thread 801, 1 frame(s) match
Thread 802, 1 frame(s) match
Thread 804, 1 frame(s) match
Thread 805, 1 frame(s) match

!findstack sqlservr!FnProducerThread 0

Excerpt from the output of the above command:

Thread 055, 1 frame(s) match
Thread 057, 1 frame(s) match
Thread 066, 1 frame(s) match
Thread 082, 1 frame(s) match
Thread 087, 1 frame(s) match
Thread 093, 1 frame(s) match


Thread 840, 1 frame(s) match
Thread 843, 1 frame(s) match
Thread 844, 1 frame(s) match
Thread 850, 1 frame(s) match
Thread 852, 1 frame(s) match

I found that there were 584 threads (67%) waiting for locks and 166 threads (19%) involved in parallelism. Note that the lines colored in blue are the callstacks which I had mentioned above to determine that there were waits on locks and parallelism. This confirms that the problem is due to concurrent requests waiting for lock(s) on object(s) while some of the queries are executing in parallel. If a CSS engineer is examining this dump, then they could probably extract additional information, like the queries being executed by the threads (if the information is available in the dump), using private symbols. If you were monitoring the server using a monitoring tool which captured the query requests, then you could go back to your monitoring repository and get the queries that the above threads were executing using the thread IDs that you have identified.

TIP: The interesting thing to note is that the thread IDs that you see above will directly match the OS thread ID reported in the sys.dm_os_threads os_thread_id column value or the sys.sysprocesses kpid column value. So if you have a monitoring tool that is capturing the output of queries being executed by the active SQL Server sessions, then you can go back to your monitoring repository and dig up the queries which these threads were executing. If such data is not available from the time of the issue, more often than not you will end up having to collect additional data (profiler traces along with DMV outputs to track blocking chains) for the next problem occurrence!
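As an illustration of that mapping, here is a hedged sketch of a DMV query, wrapped in Invoke-Sqlcmd to stay consistent with the monitoring script earlier in this post, that walks the standard thread –> worker –> task –> request chain. You could capture this periodically as part of your monitoring (during an actual 17884 condition you would need the DAC to run it); the server name is a placeholder:

# Sketch: map OS thread IDs (as seen in the debugger) to sessions and the statements they were running
$query = @"
SELECT t.os_thread_id, tk.session_id, r.status, r.wait_type, r.command, st.text AS sql_text
FROM sys.dm_os_threads AS t
JOIN sys.dm_os_workers AS w ON w.thread_address = t.thread_address
JOIN sys.dm_os_tasks AS tk ON tk.worker_address = w.worker_address
JOIN sys.dm_exec_requests AS r ON r.task_address = tk.task_address
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS st;
"@
Invoke-Sqlcmd -ServerInstance "MYSERVER\MYINSTANCE" -Query $query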

Also check if the Deadlocked Schedulers issue matches any KB Article (update) already released. Remember that deadlocked schedulers is a symptom and not the cause! So make sure that if you find a KB Article which talks about a Deadlocked Schedulers issue being addressed in a later build, the symptoms in the article match the symptoms that you are experiencing on your server! If in doubt, ask CSS!

If you are stuck and have no clue why the issue happened, then this would be a good time to engage Microsoft CSS Support and provide them with the SQL Errorlogs, the dump file and the windows event logs in .txt format for further analysis.

Reference:
KB 315263 – How to read the small memory dump files that Windows creates for debugging
How To Diagnose and Correct Errors 17883, 17884, 17887, and 17888 whitepaper