SQL Server 2008 Trace Flags

SQL Server exposes multiple trace flags which can be used to set specific server characteristics or to switch off a particular behavior. Some trace flags are required to enable a fix after the corresponding update has been installed.
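As a quick illustration (a minimal sketch using the documented deadlock reporting trace flag 1222; substitute whichever flag applies to your scenario), a trace flag can be enabled globally with DBCC TRACEON, verified with DBCC TRACESTATUS, and made persistent across restarts via the -T startup parameter:

-- Enable trace flag 1222 globally (-1) on the running instance
DBCC TRACEON (1222, -1);

-- List the trace flags currently enabled
DBCC TRACESTATUS (-1);

-- Disable it again
DBCC TRACEOFF (1222, -1);

-- To persist a trace flag across restarts, add -T1222 to the SQL Server startup parameters instead.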

This is currently a work-in-progress. Each trace flag mentioned in the table below has an associated public article link which provides more information about it.

If you want a trace flag to be added to the list below, feel free to leave a comment. Note that only publicly documented trace flags will appear in this list.

The list of trace flags available for SQL Server 2005 is documented here.

Disclaimer
1. Data mentioned below is as of 17th July, 2012.
2. Trace flags should be used under the guidance of Microsoft SQL Server support. They are used in this post for discussion purposes only and may not be supported in future versions.
3. Trace flags for hotfixes should only be enabled if the fix is applicable to the SQL Server instance on which you are enabling the trace flag.


System Health Session: Part 4

This is the last post for the System Health Session series. In this post, I shall show you how to extract deadlock related information from the deadlock graph captured by the System Health Session.

The deadlock graph captured typically has three distinct nodes:

victim-list – Deadlock victim’s process identifier
process-list – Information pertaining to all the processes involved in the deadlock
resource-list – Information about the resources involved in the deadlock
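If you simply want to eyeball the full graphs before parsing out individual nodes, here is a minimal sketch (following the same ring buffer pattern as the queries below) that returns each captured deadlock graph as a clickable XML column in Management Studio:

-- Return each captured deadlock graph as an XML column
SELECT CAST(C.query('.').value('(/event/data/value)[1]', 'varchar(max)') AS XML) AS DeadlockGraph
FROM
(
SELECT CAST(xet.target_data AS XML) AS XMLDATA
FROM sys.dm_xe_session_targets xet
JOIN sys.dm_xe_sessions xe
ON (xe.address = xet.event_session_address)
WHERE xe.name = 'system_health'
) a
CROSS APPLY a.XMLDATA.nodes('/RingBufferTarget/event') as T(C)
WHERE C.query('.').value('(/event/@name)[1]', 'varchar(255)') = 'xml_deadlock_report'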

The query below will provide you with the time stamp when the deadlock was reported, along with the victim process identifier.

-- Fetch the Health Session data into a temporary table
SELECT CAST(xet.target_data AS XML) AS XMLDATA
INTO #SystemHealthSessionData
FROM sys.dm_xe_session_targets xet
JOIN sys.dm_xe_sessions xe
ON (xe.address = xet.event_session_address)
WHERE xe.name = 'system_health'

-- Get the Deadlock Event Time and Victim Process
SELECT C.query('.').value('(/event/@timestamp)[1]', 'datetime') as EventTime,
CAST(C.query('.').value('(/event/data/value)[1]', 'varchar(MAX)') AS XML).value('(/deadlock/victim-list/victimProcess/@id)[1]','varchar(100)') as VictimProcess
FROM #SystemHealthSessionData a
CROSS APPLY a.XMLDATA.nodes('/RingBufferTarget/event') as T(C)
WHERE C.query('.').value('(/event/@name)[1]', 'varchar(255)') = 'xml_deadlock_report'

-- Drop the temporary table
DROP TABLE #SystemHealthSessionData

The next query, when provided with an event time from the above query's output, will give you a parsed version of the process list in a tabular format, which can be easier to read when a large number of sessions are involved in the deadlock.

 -- Fetch the Health Session data into a temporary table

SELECT CAST(xet.target_data AS XML) AS XMLDATA
INTO #SystemHealthSessionData
FROM sys.dm_xe_session_targets xet
JOIN sys.dm_xe_sessions xe
ON (xe.address = xet.event_session_address)
WHERE xe.name = 'system_health'

-- Parses the process list for a specific deadlock once provided with an event time for the deadlock from the above output

;WITH CTE_HealthSession (EventXML) AS
(
SELECT CAST(C.query('.').value('(/event/data/value)[1]', 'varchar(MAX)') AS XML) EventXML
FROM #SystemHealthSessionData a
CROSS APPLY a.XMLDATA.nodes('/RingBufferTarget/event') as T(C)
WHERE C.query('.').value('(/event/@name)[1]', 'varchar(255)') = 'xml_deadlock_report'
AND C.query('.').value('(/event/@timestamp)[1]', 'datetime') = '2011-09-28 06:24:44.700' -- Replace with relevant timestamp
)
SELECT DeadlockProcesses.value('(@id)[1]','varchar(50)') as id
,DeadlockProcesses.value('(@taskpriority)[1]','bigint') as taskpriority
,DeadlockProcesses.value('(@logused)[1]','bigint') as logused
,DeadlockProcesses.value('(@waitresource)[1]','varchar(100)') as waitresource
,DeadlockProcesses.value('(@waittime)[1]','bigint') as waittime
,DeadlockProcesses.value('(@ownerId)[1]','bigint') as ownerId
,DeadlockProcesses.value('(@transactionname)[1]','varchar(50)') as transactionname
,DeadlockProcesses.value('(@lasttranstarted)[1]','varchar(50)') as lasttranstarted
,DeadlockProcesses.value('(@XDES)[1]','varchar(20)') as XDES
,DeadlockProcesses.value('(@lockMode)[1]','varchar(5)') as lockMode
,DeadlockProcesses.value('(@schedulerid)[1]','bigint') as schedulerid
,DeadlockProcesses.value('(@kpid)[1]','bigint') as kpid
,DeadlockProcesses.value('(@status)[1]','varchar(20)') as status
,DeadlockProcesses.value('(@spid)[1]','bigint') as spid
,DeadlockProcesses.value('(@sbid)[1]','bigint') as sbid
,DeadlockProcesses.value('(@ecid)[1]','bigint') as ecid
,DeadlockProcesses.value('(@priority)[1]','bigint') as priority
,DeadlockProcesses.value('(@trancount)[1]','bigint') as trancount
,DeadlockProcesses.value('(@lastbatchstarted)[1]','varchar(50)') as lastbatchstarted
,DeadlockProcesses.value('(@lastbatchcompleted)[1]','varchar(50)') as lastbatchcompleted
,DeadlockProcesses.value('(@clientapp)[1]','varchar(150)') as clientapp
,DeadlockProcesses.value('(@hostname)[1]','varchar(50)') as hostname
,DeadlockProcesses.value('(@hostpid)[1]','bigint') as hostpid
,DeadlockProcesses.value('(@loginname)[1]','varchar(150)') as loginname
,DeadlockProcesses.value('(@isolationlevel)[1]','varchar(150)') as isolationlevel
,DeadlockProcesses.value('(@xactid)[1]','bigint') as xactid
,DeadlockProcesses.value('(@currentdb)[1]','bigint') as currentdb
,DeadlockProcesses.value('(@lockTimeout)[1]','bigint') as lockTimeout
,DeadlockProcesses.value('(@clientoption1)[1]','bigint') as clientoption1
,DeadlockProcesses.value('(@clientoption2)[1]','bigint') as clientoption2
FROM (select EventXML as DeadlockEvent FROM CTE_HealthSession) T
CROSS APPLY DeadlockEvent.nodes('//deadlock/process-list/process') AS R(DeadlockProcesses)

-- Drop the temporary table
DROP TABLE #SystemHealthSessionData

The script file for the above queries can be downloaded from here.

A sample output of the above two queries is shown below:

The second dataset, which shows the parsed process list from the deadlock graph, is for the timestamp highlighted above.

As mentioned in one of my previous posts, the custom reports used in the previous posts can be downloaded from here (Filename: System_Health_Session_Custom_Reports.zip).

Modification April 20th, 2012: Just updated the .sql files and added the Deadlock Report to the SkyDrive location.

Note: To make use of the deadlock graph captured by the System Health Session, you need to have the required update applied to avoid the issue mentioned in KB978629. The issue is addressed in:
KB981355: Cumulative Update package 1 for SQL Server 2008 R2
http://support.microsoft.com/default.aspx?scid=kb;EN-US;981355
KB977443: Cumulative update package 6 for SQL Server 2008 Service Pack 1
http://support.microsoft.com/default.aspx?scid=kb;EN-US;977443

If you don't have the updates installed, then Jonathan (Blog | Twitter) has shown in his article Retrieving Deadlock Graphs with SQL Server 2008 Extended Events how to work around the issue. Michael Zilberstein's (Blog) article, Parsing Extended Events xml_deadlock_report, has an updated/corrected version of the T-SQL to fetch the deadlock information.

References:

Bart Duncan’s Weblog:
Deadlock Troubleshooting, Part 1
Deadlock Troubleshooting, Part 2
Deadlock Troubleshooting, Part 3

System Health Session: Part 3

In my last post, I had demonstrated how to fetch the waits information captured by the System Health session. In this post, I shall show the custom reports that can be built using the waits information fetched.

[Screenshot: summary of waits recorded by the System Health session]

The above report gives a summary of all the waits recorded by the health session. I have a second level drill-down available which allows me to get specific information for each of the distinct waits reported above and lets me get the SQL Query that experienced the wait.

[Screenshot: second-level drill-down showing the SQL query and wait duration statistics]

As you can see above, I have the SQL query and the time when the wait was reported, along with duration statistics for the wait.

Another category of events tracked by the System Health session is non-yielding scheduler conditions (reported using error number 17883) in SQL Server. So if your SQL Server instance encountered a non-yielding scheduler condition, this information can be tracked using the System Health session. The query below will give you information on all the non-yielding conditions detected by the SQL Server instance, provided the ring buffer storing this information has not been overwritten.

-- Query to fetch non-yielding errors captured by the System Health Session
SET NOCOUNT ON

SELECT CAST(xet.target_data AS XML) AS XMLDATA
INTO #SystemHealthSessionData
FROM sys.dm_xe_session_targets xet
JOIN sys.dm_xe_sessions xe ON (xe.address = xet.event_session_address)
WHERE xe.name = 'system_health'

;WITH CTE_HealthSession (EventXML) AS
( SELECT C.query('.') EventXML
FROM #SystemHealthSessionData a
CROSS APPLY a.XMLDATA.nodes('/RingBufferTarget/event') as T(C)
WHERE C.query('.').value('(/event/@name)[1]', 'varchar(255)') = 'scheduler_monitor_non_yielding_ring_buffer_recorded' )
SELECT EventXML.value('(/event/@timestamp)[1]', 'datetime') as EventTime,
EventXML.value('(/event/data/value)[4]', 'int') as NodeID,
EventXML.value('(/event/data/value)[5]', 'int') as SchedulerID,
CASE EventXML.value('(/event/data/value)[3]', 'int') WHEN 0 THEN 'BEGIN' WHEN 1 THEN 'END' ELSE '' END AS DetectionStage,
EventXML.value('(/event/data/value)[6]', 'varchar(50)') as Worker,
EventXML.value('(/event/data/value)[7]', 'bigint') as Yields,
EventXML.value('(/event/data/value)[8]', 'int') as Worker_Utilization,
EventXML.value('(/event/data/value)[9]', 'int') as Process_Utilization,
EventXML.value('(/event/data/value)[10]', 'int') as System_Idle,
EventXML.value('(/event/data/value)[11]', 'bigint') as User_Mode_Time,
EventXML.value('(/event/data/value)[12]', 'bigint') as Kernel_Mode_Time,
EventXML.value('(/event/data/value)[13]', 'bigint') as Page_Faults,
EventXML.value('(/event/data/value)[14]', 'float') as Working_Set_Delta,
EventXML.value('(/event/data/value)[15]', 'bigint') as Memory_Utilization
FROM CTE_HealthSession
ORDER BY EventTime,Worker

DROP TABLE #SystemHealthSessionData

The .sql file for the above script is available here.

[Screenshot: sample output of the non-yielding scheduler query]

A sample output of the above query is shown in the screenshot. An important piece of information in this output is the worker address. Using this worker address, I can get the relevant Errorlog messages pertaining to the non-yielding condition.

2011-09-27 21:57:51.560 Server       Process 0:0:0 (0x18c0) Worker 0x000000000606A1A0 appears to be non-yielding on Scheduler 5. Thread creation time: 12961597452926. Approx Thread CPU Used: kernel 0 ms, user 0 ms. Process Utilization 0%. System Idle 98%. Interval: 70077 ms.
2011-09-27 21:58:51.660 Server       Process 0:0:0 (0x18c0) Worker 0x000000000606A1A0 appears to be non-yielding on Scheduler 5. Thread creation time: 12961597452926. Approx Thread CPU Used: kernel 0 ms, user 0 ms. Process Utilization 0%. System Idle 97%. Interval: 133017 ms.
2011-09-27 21:59:51.760 Server       Process 0:0:0 (0x18c0) Worker 0x000000000606A1A0 appears to be non-yielding on Scheduler 5. Thread creation time: 12961597452926. Approx Thread CPU Used: kernel 0 ms, user 0 ms. Process Utilization 2%. System Idle 94%. Interval: 193116 ms.
2011-09-27 22:00:51.860 Server       Process 0:0:0 (0x18c0) Worker 0x000000000606A1A0 appears to be non-yielding on Scheduler 5. Thread creation time: 12961597452926. Approx Thread CPU Used: kernel 0 ms, user 0 ms. Process Utilization 3%. System Idle 93%. Interval: 253215 ms.

If you look at the Errorlog snippet above, you will notice that the entries for the highlighted worker address are the same as the one reported in the Management Studio output screenshot above. The time reported in the Errorlog is the server time (in my case UTC+5:30), whereas the timestamps reported by the health session are in UTC, which means that the timestamps match as well. Using the events from the System Health session and the Errorlog entries, I am able to figure out when a non-yielding condition occurred on the server instance.

The non-yielding condition report is quite plain and has a table output showing the different columns returned by the query above:

[Screenshot: non-yielding condition report]

In the last post for this series, I shall provide the set of queries that can be used to parse a deadlock graph collected by the System Health session along with the .rdl files for the custom reports used in this series.

System Health Session: Part 2

I had written an introductory post on monitoring system health using the default Extended Events session that runs on SQL Server 2008 instances and above. Now it is time for the next part of this series. In the first post, I provided a set of queries for getting all the errors recorded by the System Health Extended Events session. Now I can create a set of reports using Business Intelligence Development Studio which can be used with the Custom Reports feature of Management Studio.

I put together a three-level drill-down report to get a summary for all the errors reported by the T-SQL queries in my previous post. The dashboard report, which will serve as the landing page for what I am terming the System Health Dashboard, looks something like this:

[Screenshot: System Health Dashboard landing page]

As you can see above, the report shows me the different events captured by the Extended Events session. The first level drill down provides a summary of all the different errors reported.

[Screenshot: first-level drill-down summarizing the errors reported]

The last drill-down option lets you look into every occurrence of a specific error number. This report shows each occurrence along with the query text (if captured) and the specifics of the recorded event, rather than the generic error description that you see in the above report.

[Screenshot: drill-down showing each occurrence of a specific error number]

The other category of events captured by the Health Session is wait information, which falls under the following categories:

  • Any sessions that have waited on latches (or other interesting resources) for > 15 seconds.
  • Any sessions that have waited on locks for > 30 seconds.
  • Any sessions that have waited for a long time for preemptive waits. The duration varies by wait type. A preemptive wait is where SQL Server is waiting for external API calls.

The query below will give you the query which experienced the wait along with the wait times which were recorded by the System Health session.

SELECT CAST(xet.target_data AS XML) AS XMLDATA
INTO #SystemHealthSessionData
FROM sys.dm_xe_session_targets xet
JOIN sys.dm_xe_sessions xe
ON (xe.address = xet.event_session_address)
WHERE xe.name = 'system_health'

;WITH CTE_HealthSession (EventXML) AS
(
SELECT C.query('.') EventXML
FROM #SystemHealthSessionData a
CROSS APPLY a.XMLDATA.nodes('/RingBufferTarget/event') as T(C)
WHERE C.query('.').value('(/event/@name)[1]', 'varchar(255)') in ('wait_info','wait_info_external')
)
SELECT
EventXML.value('(/event/@timestamp)[1]', 'datetime') as EventTime,
EventXML.value('(/event/data/text)[1]', 'varchar(50)') as WaitType,
EventXML.value('(/event/data/value)[3]', 'int') as Duration,
EventXML.value('(/event/data/value)[4]', 'int') as Max_Duration,
EventXML.value('(/event/data/value)[5]', 'int') as Total_Duration,
EventXML.value('(/event/action/value)[2]', 'varchar(10)') as Session_ID,
EventXML.value('(/event/action/value)[3]', 'varchar(max)') as sql_text
FROM CTE_HealthSession
ORDER BY EventTime DESC

DROP TABLE #SystemHealthSessionData

A sample output of the above query is shown below. The .sql file for the above query can be downloaded from here.

[Screenshot: sample output of the waits query]

Note: Beware of false positives for PREEMPTIVE_OS_GETPROCADDRESS waits described here.
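If you decide you are not interested in those waits at all, one option (a sketch; it simply reuses the ring buffer pattern from the query above with the parsed wait type filtered out) is:

-- Same wait query pattern as above, with the known false positive excluded
SELECT CAST(xet.target_data AS XML) AS XMLDATA
INTO #SystemHealthSessionData
FROM sys.dm_xe_session_targets xet
JOIN sys.dm_xe_sessions xe
ON (xe.address = xet.event_session_address)
WHERE xe.name = 'system_health'

;WITH CTE_HealthSession (EventXML) AS
(
SELECT C.query('.') EventXML
FROM #SystemHealthSessionData a
CROSS APPLY a.XMLDATA.nodes('/RingBufferTarget/event') as T(C)
WHERE C.query('.').value('(/event/@name)[1]', 'varchar(255)') in ('wait_info','wait_info_external')
)
SELECT EventXML.value('(/event/@timestamp)[1]', 'datetime') as EventTime,
EventXML.value('(/event/data/text)[1]', 'varchar(50)') as WaitType,
EventXML.value('(/event/data/value)[3]', 'int') as Duration
FROM CTE_HealthSession
WHERE EventXML.value('(/event/data/text)[1]', 'varchar(50)') <> 'PREEMPTIVE_OS_GETPROCADDRESS'
ORDER BY EventTime DESC

DROP TABLE #SystemHealthSessionData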

In the next post, I shall give a preview of the wait reports and provide another set of queries to track one more category of events captured by the System Health session.

Once I have completed this series, I shall upload all the reports in the series to my SkyDrive so that they can be downloaded for your benefit.

System Health Session: Part 1

SQL Server 2008 introduced a new feature called Extended Events which opened a new vista for troubleshooting SQL Server issues. Bob Ward, in a post on the CSS SQL Server Engineers blog, had written about the system_health Extended Events session which is started by default on any instance of SQL Server 2008 or above. This session uses a ring buffer target, so the information stored about the events tracked by this session doesn't stick around forever, much like the SQL Server default trace. Jonathan (Blog | Twitter) blogged about the System Health session and the way it is set up in detail. Note that if you have an event session set up with a ring_buffer target and the data you feed the target exceeds 4 MB, you may not be able to retrieve all XML nodes from the target data. Bob Ward covered this in detail in his blog post, and the System Health session suffers from this drawback as well.
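If you want a feel for how close your instance is to that limit, a quick sketch that reports the current size of the system_health ring buffer payload:

-- Report the size (in bytes) of the XML currently held by the system_health ring buffer target
SELECT xe.name AS session_name,
xet.target_name,
DATALENGTH(xet.target_data) AS target_data_bytes
FROM sys.dm_xe_session_targets xet
JOIN sys.dm_xe_sessions xe
ON (xe.address = xet.event_session_address)
WHERE xe.name = 'system_health'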

This is the first post of a series that I will be writing about the System Health session and how you can use it to take your SQL Server 2008 monitoring to the next level without having to install anything new on your servers… maybe just for the first few posts!

The first thing that you would need to do is get the different events reported, along with the oldest and the most recent event times present in the ring buffer, using the query below:

SET NOCOUNT ON

-- Store the XML data in a temporary table
SELECT CAST(xet.target_data as xml) as XMLDATA
INTO #SystemHealthSessionData
FROM sys.dm_xe_session_targets xet
JOIN sys.dm_xe_sessions xe
ON (xe.address = xet.event_session_address)
WHERE xe.name = 'system_health'

-- Get the different events reported in a grouped manner
;WITH CTE_HealthSession AS
(
SELECT C.query('.').value('(/event/@name)[1]', 'varchar(255)') as EventName,
C.query('.').value('(/event/@timestamp)[1]', 'datetime') as EventTime
FROM #SystemHealthSessionData a
CROSS APPLY a.XMLDATA.nodes('/RingBufferTarget/event') as T(C))
SELECT EventName,
COUNT(*) as Occurrences,
MAX(EventTime) as LastReportedEventTime,
MIN(EventTime) as OldestRecordedEventTime
FROM CTE_HealthSession
GROUP BY EventName
ORDER BY 2 DESC

-- Drop the temporary table
DROP TABLE #SystemHealthSessionData

One set of events tracked is errors reported by the database engine. The u_tables.sql script (C:\Program Files\Microsoft SQL Server\<install folder>\MSSQL\Install), which contains the specification of the system_health session, shows that the following errors are tracked:
1. Any error which has a severity greater than or equal to 20.
2. Errors reported for out-of-memory conditions (error numbers 17803, 701, 802, 8645, 8651, 8657 or 8902).

The script below will give you the total number of errors reported along with the per-error counts. The second query will list out all the recorded information for a particular error number shown in the first query's output:

SET NOCOUNT ON

-- Store the XML data in a temporary table
SELECT CAST(xet.target_data as xml) as XMLDATA
INTO #SystemHealthSessionData
FROM sys.dm_xe_session_targets xet
JOIN sys.dm_xe_sessions xe
ON (xe.address = xet.event_session_address)
WHERE xe.name = 'system_health'

-- Get statistical information about all the errors reported
;WITH CTE_HealthSession (EventXML) AS
(
SELECT C.query('.') EventXML
FROM #SystemHealthSessionData a
CROSS APPLY a.XMLDATA.nodes('/RingBufferTarget/event') as T(C)
),
CTE_ErrorReported (EventTime, ErrorNum) AS
(
SELECT EventXML.value('(/event/@timestamp)[1]', 'datetime') as EventTime,
EventXML.value('(/event/data/value)[1]', 'int') as ErrorNum
FROM CTE_HealthSession
WHERE EventXML.value('(/event/@name)[1]', 'varchar(255)') = 'error_reported'
)
SELECT ErrorNum,
MAX(EventTime) as LastRecordedEvent,
MIN(EventTime) as FirstRecordedEvent,
COUNT(*) as Occurrences,
b.[text] as ErrDescription
FROM CTE_ErrorReported a
INNER JOIN sys.messages b
ON a.ErrorNum = b.message_id
WHERE b.language_id = SERVERPROPERTY('LCID')
GROUP BY a.ErrorNum,b.[text]

-- Get information about each of the errors reported
;WITH CTE_HealthSession (EventXML) AS
(
SELECT C.query('.') EventXML
FROM #SystemHealthSessionData a
CROSS APPLY a.XMLDATA.nodes('/RingBufferTarget/event') as T(C)
WHERE C.query('.').value('(/event/@name)[1]', 'varchar(255)') = 'error_reported'
)
SELECT
EventXML.value('(/event/@timestamp)[1]', 'datetime') as EventTime,
EventXML.value('(/event/data/value)[1]', 'int') as ErrNum,
EventXML.value('(/event/data/value)[2]', 'int') as ErrSeverity,
EventXML.value('(/event/data/value)[3]', 'int') as ErrState,
EventXML.value('(/event/data/value)[5]', 'varchar(max)') as ErrText,
EventXML.value('(/event/action/value)[2]', 'varchar(10)') as Session_ID
FROM CTE_HealthSession

-- Drop the temporary table
DROP TABLE #SystemHealthSessionData

Now you have a set of queries which you can use to fetch information about all the errors reported for your SQL Server instance. Remember that the target of the Extended Events session is a ring buffer, so if your server instance has been running for a long period of time, the older data will have been overwritten.
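To get a feel for the window of data you can expect, you can check how long the session has been running (a small sketch; sys.dm_xe_sessions exposes the session start time in create_time):

-- Check when the system_health session was started
SELECT name,
create_time,
DATEDIFF(HOUR, create_time, GETDATE()) AS hours_running
FROM sys.dm_xe_sessions
WHERE name = 'system_health'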

Modification: Sept 15, 2011: WordPress doesn’t seem to like the @name XML attribute in the XQuery. After multiple attempts at formatting the query above, I finally decided to upload the query as System_Health_Session_1.sql to SkyDrive. You can download it here.

Modification: Sept 21, 2011: Jonathan (Blog | Twitter) dropped me an email stating that the above script could fall prey to an incorrect timestamp issue. This issue is already filed as a Connect item. Jonathan described a viable workaround for this issue in his blog post.

Modification: March 3, 2012: The timestamp value reported is in UTC (GMT) format. So if you want to report event time as per your SQL Server instance time, then you would need to perform the calculation and account for the GMT time difference.
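A minimal sketch of that calculation, using the server's current UTC offset (note this does not account for daylight-saving changes between the event time and now):

-- Shift a UTC event timestamp to the server's local time
SELECT DATEADD(MINUTE,
DATEDIFF(MINUTE, GETUTCDATE(), GETDATE()),
'2011-09-28 06:24:44.700') AS LocalEventTime -- replace with an EventTime value from the queries above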

As the title says, Part 1… There is definitely more to come on this topic. Stay tuned!

Deadlocked Schedulers and event notifications with a bit of Powershell

I had written 2 posts recently about Deadlocked Schedulers debugging (Part 1 | Part 2) which basically walked through 17884 and 17888 conditions and the steps that you could take to debug such an issue. A quote from both these articles:

So if you have a monitoring tool that is capturing the output of queries being executed by the active SQL Server sessions, then you can go back to your monitoring repository and dig up the queries which these threads were executing. If such data is not available from the time of the issue, more often than not you will end up having to collect additional data for the next problem occurrence!

You will need to collect additional data, if a monitoring tool wasn't already collecting it, to determine the set of queries and the sequence of events leading up to the issue. CSS engineers can dig into the dump using private symbols and probably provide you with additional information which can point you in the right direction. But alas, even they do not have magic wands if you cannot provide them with additional information like blocking output or profiler traces for 17884 and 17888 conditions caused by excessive blocking or parallelism.

SQL Server 2008 Service Pack 2 provided an update which made the FailoverAnalysis.sql script available again. This script was available for SQL Server 2005 and is now available for SQL Server 2008 as well. It is particularly useful for SQL Server failover clusters, where you can configure SQLDumper to generate a dump before the failover occurs and also set the SqlDMVScriptTimeout property to get an output of the DMVs used in the script before a failover happens.

[Screenshot: SQL Server cluster resource properties]

You can refer to the "Cluster failovers and the Sqldumper.exe utility" section in KB917825 for more details on the above-mentioned feature. So why am I writing this post?

This post is aimed at SQL Server standalone instances, where you cannot benefit from SQLDumper. Deadlocked Schedulers are reported with EventID 17884 or 17888. I am going to use these to create a scheduled task that fires on an Event Log entry for these two specific EventIDs. I am a big fan of automating repetitive tasks, and the benefit of such automation is that you can get additional data from the problem occurrence without having to rely solely on a SQL Server mini-dump memory file and be at the mercy of the Windows debugging tools. For this post, you can say a small good-bye to your favorite debugging tool and read on for some automation in monitoring!

However, there are some requirements for this kind of indigenous monitoring to work:

1. The Dedicated Administrator Connection (DAC) needs to be enabled for your SQL Server instance(s). This is required because during a Deadlocked Schedulers condition you will have run out of worker threads, so new connection requests cannot be processed unless you establish a connection through the DAC. If you are planning to use Event Subscription and remote data gathering, then you need Remote DAC enabled for the SQL instance(s) being monitored; a configuration snippet follows this list. Note: Event Subscription is a feature available on Windows Server 2008 and above.

2. You need sysadmin privileges for the account that will be used to collect the output of the FailoverAnalysis.sql script.

3. PowerShell and SQLPS need to be available on the server being monitored. If you don't want to install the PowerShell components, the same functionality can be achieved using SMO and VBScript, which I used extensively in the past till I became a fan of POSH awesomeness!
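For reference, remote DAC (prerequisite 1 above) can be enabled with a couple of lines of T-SQL on the monitored instance:

-- Enable remote Dedicated Administrator Connections
EXEC sp_configure 'remote admin connections', 1;
RECONFIGURE;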

Now that I have the pre-requisites out of the way, let’s get down to the good part…

The Powershell Script

I had already mentioned that I will be using a PowerShell script to get the output of the FailoverAnalysis.sql script for the SQL instance that encounters the 17884 condition. The piping feature of PowerShell really makes it easy to write compact scripts for a wide variety of tasks. The script is divided into three parts.

The first part of the script uses the code in the section "Adding the SQL Server Snap-ins to Windows PowerShell" of the article "Running SQL Server PowerShell" to initialize the SQLPS snap-in in the session that you will be running the script in.

The second part of the script reads the Application event log for the first occurrence of an informational message with the text substring “have not been picked up by a worker thread in the last 60 seconds” from any SQL Server instance as the source.

The third part of the script figures out the SQL Server instance name and connects to it to execute the FailoverAnalysis.sql script. You can modify the output location of the script output below as appropriate for your environment.

Script filename: DeadlockSchedulerMonitor.ps1


#################################################################################
# Script Name: DeadlockSchedulerMonitor
# Author: Amit Banerjee
# Date: 9/1/2011
# Description:
# The script reads the first 17884 (Deadlocked Schedulers) event from the
# Windows Application Event log and identifies the source. Based on the
# source name, it connects to the SQL Server instance using a DAC
# connection and collects the FailoverAnalysis.sql script output.
#################################################################################

$vMsg = (Get-Date).Year.ToString() + (Get-Date).Month.ToString() + (Get-Date).Day.ToString() + (Get-Date).Hour.ToString() + (Get-Date).Minute.ToString() + ":Performing SQLPS snap-in initialization`n"
Write-Host $vMsg

# Load the SQLPS snap-in (SQL Server provider) in the session that you are running the script in
$ErrorActionPreference = "Stop"
$sqlpsreg = "HKLM:\SOFTWARE\Microsoft\PowerShell\1\ShellIds\Microsoft.SqlServer.Management.PowerShell.sqlps"
# If the registry key is absent, the SQL Server provider is not installed
if (-not (Test-Path $sqlpsreg))
{
Write-Host "SQL Server Provider for Windows PowerShell is not installed.`n"
}
else
{
$item = Get-ItemProperty $sqlpsreg
$sqlpsPath = [System.IO.Path]::GetDirectoryName($item.Path)
}

# Set mandatory variables for the SQL Server provider
Set-Variable -scope Global -name SqlServerMaximumChildItems -Value 0
Set-Variable -scope Global -name SqlServerConnectionTimeout -Value 30
Set-Variable -scope Global -name SqlServerIncludeSystemObjects -Value $false
Set-Variable -scope Global -name SqlServerMaximumTabCompletion -Value 1000

# Load the snap-ins, type data and format data
Push-Location
cd $sqlpsPath
Add-PSSnapin SqlServerCmdletSnapin100 -ErrorAction SilentlyContinue
Add-PSSnapin SqlServerProviderSnapin100 -ErrorAction SilentlyContinue
Update-TypeData -PrependPath SQLProvider.Types.ps1xml -ErrorAction SilentlyContinue
Update-FormatData -PrependPath SQLProvider.Format.ps1xml -ErrorAction SilentlyContinue
Pop-Location

$vMsg = (Get-Date).Year.ToString() + (Get-Date).Month.ToString() + (Get-Date).Day.ToString() + (Get-Date).Hour.ToString() + (Get-Date).Minute.ToString() + ":SQLPS snap-in initialization completed`n"
Write-Host $vMsg

# Local variables to store the event log source and the instance name to which the DAC connection needs to be made
[string] $vSource = ""
[string] $vInstName = ""
[string] $vFileName = ""

$vMsg = (Get-Date).Year.ToString() + (Get-Date).Month.ToString() + (Get-Date).Day.ToString() + (Get-Date).Hour.ToString() + (Get-Date).Minute.ToString() + ":Reading Application Event Log`n"
Write-Host $vMsg

# Get the source which generated the 17884 message
$vSource = Get-EventLog -LogName "Application" -Message "*have not been picked up by a worker thread in the last 60 seconds*" -Source MSSQL* -Newest 1
$vSource = $vSource.Source.ToString()

$vMsg = (Get-Date).Year.ToString() + (Get-Date).Month.ToString() + (Get-Date).Day.ToString() + (Get-Date).Hour.ToString() + (Get-Date).Minute.ToString() + ":Connecting to SQL instance and executing script`n"
Write-Host $vMsg

# Check if it is a default instance or a named instance and collect the output accordingly
if ($vSource.Equals("MSSQLSERVER"))
{
# Create the filename
$vFileName = "F:\" + (Get-Date).Year.ToString() + (Get-Date).Month.ToString() + (Get-Date).Day.ToString() + (Get-Date).Hour.ToString() + (Get-Date).Minute.ToString() + (Get-Date).Second.ToString() + "_" + $env:ComputerName.ToString() + "_FailoverAnalysis.OUT"
# Use Invoke-Sqlcmd to get the output of the script over a DAC connection
Invoke-Sqlcmd -InputFile "C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Install\FailoverAnalysis.sql" -ServerInstance $env:ComputerName -DedicatedAdministratorConnection | Out-File -filepath $vFileName
}
else
{
# Construct the SQL Server name to connect to for a named instance
$vInstName = $env:ComputerName + "\" + $vSource.Split("$")[-1]
# Create the filename (join the machine and instance names with an underscore so the file path stays valid)
$vFileName = "F:\" + (Get-Date).Year.ToString() + (Get-Date).Month.ToString() + (Get-Date).Day.ToString() + (Get-Date).Hour.ToString() + (Get-Date).Minute.ToString() + (Get-Date).Second.ToString() + "_" + $env:ComputerName + "_" + $vSource.Split("$")[-1] + "_FailoverAnalysis.OUT"
# Use Invoke-Sqlcmd to get the output of the script over a DAC connection
Invoke-Sqlcmd -InputFile "C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Install\FailoverAnalysis.sql" -ServerInstance $vInstName -DedicatedAdministratorConnection | Out-File -filepath $vFileName
}

$vMsg = (Get-Date).Year.ToString() + (Get-Date).Month.ToString() + (Get-Date).Day.ToString() + (Get-Date).Hour.ToString() + (Get-Date).Minute.ToString() + ":Script execution completed`n"
Write-Host $vMsg

The Task Scheduler Configuration

The next thing that needs to be done is to create a Task Scheduler task that executes the PowerShell script above when a 17884 condition is reported.

1. Open the Task Scheduler MMC using Start -> Administrative Tools -> Task Scheduler to get the Task Scheduler UI. Right-click on Event Viewer Tasks and click on Create Task (see screenshot below). Provide the appropriate settings, description and name. The salient points to keep in mind here are:
a. Use the Change User or Group option to provide an account which has permissions to log into the SQL Server instance(s) to gather the output of the FailoverAnalysis.sql script.
b. You can use the Configure for drop-down box to configure the task for Windows Server 2003 as well.

[Screenshot: Create Task dialog]

2. Next, click on the Triggers tab. Click on New, which will open the New Trigger dialog box. Use the Begin the task drop-down box to select the On an event option (see screenshot below). Then select the Custom radio button and click on the New Event Filter button, which will open the New Event Filter dialog box. Provide the options as shown by the highlighted items in the screenshot. In the event sources drop-down list, I selected all the SQL Server instances installed on my server. You can choose to leave the default option in case you plan to install more instances on this server; using the default option can also be beneficial if you choose to export this task to other servers. In the Advanced settings section of the New Trigger dialog box, I chose to stop the task if it runs for more than 30 minutes. Once that is done, you should see a new entry in the Triggers tab: On an event, with a custom event filter and the status Enabled.

[Screenshot: New Trigger and New Event Filter configuration]

3. Under the Actions tab, click on New. Select Start a program from the Action drop-down box. In the Program/script text box, provide Powershell.exe. In the Add arguments (optional) text box, provide the following: -File "F:\DeadlockSchedulerMonitor.ps1". Alternatively, you can provide the path of a .cmd file whose contents are: Powershell.exe -File "F:\DeadlockSchedulerMonitor.ps1" > F:\DeadlockSchedulerMonitorLog.txt. I went with the second option. The F: drive root is where my .ps1 script and .cmd file are present; you would need to modify this depending on where your script is. Once that is done, your Actions tab should be similar to the screenshot below.

[Screenshot: Actions tab configuration]

4. I used the default settings for the Conditions and Settings tabs. You can modify these as appropriate for your environment.

Once the task is created, all you need is for a 17884 condition to occur for the script output to be generated.

Let’s see the configuration in action

[Screenshot: Application Event Log entries showing the 17884 condition]

I simulated a 17884 condition on my SQL Server instance to see if the newly created task gets fired (see the Application Log snippet above). Looking into the F: drive, I find that the following files were created:

20119212197_<sql instance name>_FailoverAnalysis.OUT
201192121918_<sql instance name>_FailoverAnalysis.OUT

The reason I have two files is that this is a two-NUMA-node box, and the deadlocked schedulers condition was reported for both nodes. Note that this script reports the first occurrence, which is reported at 60 seconds, since the message text snippet used to fetch the event from the Application log specifically looks for the 60 seconds keyword.

NOTE: You can monitor for 17888 events by creating a similar task that uses the following message text in place of the 17884 text used in the script above: *appear deadlocked due to a large number of worker threads waiting*

What next?

Well, with automation the possibilities are endless… Some of the quick things that come to my mind are:

1. You can create an event subscription and use the Forwarded Events log to track all 17884 events from the different servers which have SQL Server instances installed on them.
2. You can modify the PowerShell script to perform additional post-processing, or to send out email notifications to your DBAs on the occurrence of such events.

If you have SCOM or any other Event Log monitoring software, the only thing that you need to do is set up a custom alert for the 17884 or 17888 error messages.

Have fun monitoring and customizing further!

Reference:

Understanding and Using PowerShell Support in SQL Server 2008
Windows Server 2008 Event Subscription with Task Scheduling

Disclaimer: This script and the solution provided are "AS IS"; any deployment that you do using similar logic requires due diligence and testing on your part. The testing that I did for this was on Windows Server 2008 R2 and SQL Server 2008 R2.

Debugging that latch timeout

My last post, on debugging an assertion, didn't have any cool debugging tips, since there is not much you can do with an assertion dump unless you have access to private symbols and sometimes even the source code. In this post, I am not going to disappoint: I will show you some more cool things the Windows debugger can do for you with public symbols, for a latch timeout issue.

When you encounter a latch timeout (buffer or non-buffer latch), the first occurrence of its type generates a mini-dump. Any further occurrences of the same latch timeout are reported as error messages in the SQL Errorlog.

Buffer latch timeouts are typically reported using errors 844 and 845. The common reasons for such errors are documented in a KB article. For a non-buffer latch timeout, you will get an 847 error.

Error #   Error message template (from sys.messages)
844       Time out occurred while waiting for buffer latch -- type %d, bp %p, page %d:%d, stat %#x, database id: %d, allocation unit id: %I64d%ls, task 0x%p : %d, waittime %d, flags 0x%I64x, owning task 0x%p.  Continuing to wait.
845       Time-out occurred while waiting for buffer latch type %d for page %S_PGID, database ID %d.
846       A time-out occurred while waiting for buffer latch -- type %d, bp %p, page %d:%d, stat %#x, database id: %d, allocation unit Id: %I64d%ls, task 0x%p : %d, waittime %d, flags 0x%I64x, owning task 0x%p. Not continuing to wait.
847       Timeout occurred while waiting for latch: class '%ls', id %p, type %d, Task 0x%p : %d, waittime %d, flags 0x%I64x, owning task 0x%p. Continuing to wait.
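If you want to see the exact templates on your own build, you can pull them straight from sys.messages, for example:

-- List the latch timeout error message templates (1033 = English)
SELECT message_id, severity, [text]
FROM sys.messages
WHERE message_id IN (844, 845, 846, 847)
AND language_id = 1033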

This is what you will see in the SQL Errorlog when a latch timeout occurs.

spid148     Time out occurred while waiting for buffer latch -- type 4, bp 0000000832FE1200, page 3:11234374, stat 0x7c20009, database id: 120, allocation unit id: 72057599731367936, task 0x0000000003C4F2E8 : 0, waittime 300, flags 0x1a, owning task 0x0000000003C129B8.  Continuing to wait.
spid148     **Dump thread – spid = 148, PSS = 0x000000044DC17BD0, EC = 0x000000044DC17BE0
spid148     ***Stack Dump being sent to D:\Microsoft SQL Server\MSSQL.1\MSSQL\LOG\SQLDump0001.txt

spid148     * Latch timeout
spid148     * Input Buffer 84 bytes –
spid148     *             DBCC CHECKDB WITH ALL_ERRORMSGS
External dump process returned no errors.

I have only pasted the relevant portion from the Errorlog for brevity. As I have outlined in previous blog posts on similar topics, there is a lot of due diligence that can be done with the help of the Windows Event Logs and the SQL Errorlogs before you start spawning the Windows debugger to analyze the memory dump on your system. The first few obvious things you will notice are that SPID 148 encountered the issue while performing a CHECKDB on database ID 120, and that the timeout occurred while waiting for a buffer latch on a page (the page ID is available in the message above). I don't see a "Timeout waiting for external dump process" message in the SQL Errorlog, which means that I have a good chance of extracting useful information from the mini-dump that was generated by SQLDumper.

Latch timeouts are typically the victims of either a system-related issue (hardware, drivers or the operating system) or a previous error encountered by SQL Server. So the next obvious action item would be to look in the SQL Errorlogs and find out if there were any additional errors prior to the latch timeout. I see a number of OS Error 1450 messages reported by the same SPID 148 for the same file handle but different offsets.

spid148     The operating system returned error 1450(Insufficient system resources exist to complete the requested service.) to SQL Server during a write at offset 0x0000156bf36000 in file with handle 0x0000000000001358. This is usually a temporary condition and the SQL Server will keep retrying the operation. If the condition persists then immediate action must be taken to correct it.

Additionally, both prior to and after the latch timeout issue (within 5-15 minutes), I see multiple other SPIDs reporting the same 1450 error message for different offsets, but again on the same file.

spid137     The operating system returned error 1450(Insufficient system resources exist to complete the requested service.) to SQL Server during a write at offset 0x000007461f8000 in file with handle 0x0000000000001358. This is usually a temporary condition and the SQL Server will keep retrying the operation. If the condition persists then immediate action must be taken to correct it.

I also see the latch timeout message being reported after every 300 ms for the same page and the database.

spid148     Time out occurred while waiting for buffer latch — type 4, bp 0000000832FE1200, page 3:11234374, stat 0x7c20009, database id: 120, allocation unit id: 72057599731367936, task 0x0000000003C4F2E8 : 0, waittime 82800, flags 0x1a, owning task 0x0000000003C129B8.  Continuing to wait.

Notice the waittime above: it has increased from 300 to 82800! So the next thing I do is look up issues related to CHECKDB and 1450 error messages on the web using Bing (yes, I do use Bing!). These are the relevant posts related to the above issue:

http://blogs.msdn.com/b/psssql/archive/2008/07/10/sql-server-reports-operating-system-error-1450-or-1452-or-665-retries.aspx
http://blogs.msdn.com/b/psssql/archive/2009/03/04/sparse-file-errors-1450-or-665-due-to-file-fragmentation-fixes-and-workarounds.aspx

As of now, it is quite clear that the issue is possibly a sparse file limitation triggered by file fragmentation. Now it is time for me to check if there are other threads in the dump waiting on SyncWritePreemptive calls.

Use the location provided in the Errorlog snippet reporting the Latch Timeout message to locate the mini-dump for the issue (in this case SQLDump0001.mdmp).

Now when you load the dump using WinDBG, you will see the following information:

Loading Dump File [D:\Microsoft SQL Server\MSSQL.1\MSSQL\LOG\SQLDump0001.mdmp]
User Mini Dump File: Only registers, stack and portions of memory are available

Comment: 'Stack Trace'
Comment: 'Latch timeout'

This dump file has an exception of interest stored in it.

The above tells you that this is a mini-dump for a latch timeout condition and the location from which you loaded the dump. Next, I use the following command to set my symbol path and direct the symbols downloaded from the Microsoft symbol server to a local symbol file cache on my machine.

.sympath srv*D:\PublicSymbols*http://msdl.microsoft.com/download/symbols

Then I issue a reload command to load the symbols for sqlservr.exe. This can also be done using CTRL+L and providing the complete string above (without .sympath), checking the Reload checkbox and clicking on OK. The only difference is that public symbols for all the modules loaded in the dump will then be downloaded from the Microsoft Symbol Server, where available.

.reload /f sqlservr.exe

The next thing is to verify that the symbols were correctly loaded using the lmvm sqlservr command. If the symbols were loaded correctly, you should see the following output, with the local path to the downloaded sqlservr.pdb listed.

0:005> lmvm sqlservr

start             end                 module name
00000000`01000000 00000000`03668000   sqlservr T (pdb symbols)          d:\publicsymbols\sqlservr.pdb\2A3969D78EE24FD494837AF090F5EDBC2\sqlservr.pdb

 

If symbols were not loaded, then you will see an output as shown below.

0:005> lmvm sqlservr
start end module name
00000000`01000000 00000000`03668000 sqlservr (export symbols) sqlservr.exe

I will use the !findstack command to locate all threads which have the function call SyncWritePreemptive on their callstack.

0:070> !findstack sqlservr!FCB::SyncWritePreemptive 0

Thread 069, 1 frame(s) match
Thread 074, 1 frame(s) match
Thread 076, 1 frame(s) match
Thread 079, 1 frame(s) match
Thread 081, 1 frame(s) match
Thread 082, 1 frame(s) match
Thread 086, 1 frame(s) match
Thread 089, 1 frame(s) match
Thread 091, 1 frame(s) match
Thread 095, 1 frame(s) match
Thread 098, 1 frame(s) match
Thread 099, 1 frame(s) match
Thread 104, 1 frame(s) match
Thread 106, 1 frame(s) match
Thread 107, 1 frame(s) match
Thread 131, 1 frame(s) match
Thread 136, 1 frame(s) match

0:070> ~81s
ntdll!ZwWaitForSingleObject+0xa:
00000000`77ef0a2a c3              ret
0:081> kL100

ntdll!ZwDelayExecution
kernel32!SleepEx
sqlservr!FCB::SyncWritePreemptive
sqlservr!FCB::PullPageToReplica
sqlservr!alloca_probe
sqlservr!BUF::CopyOnWrite
sqlservr!PageRef::PrepareToDirty
sqlservr!RecoveryMgr::DoCOWPreWrites

You could get all the callstacks with the function that you are searching for using the command: !findstack sqlservr!FCB::SyncWritePreemptive 3

If you look at the thread that ended up raising the latch timeout warning, you will see that it was also performing a CHECKDB.

0:074> .ecxr

0:074> kL100

kernel32!RaiseException
sqlservr!CDmpDump::Dump
sqlservr!CImageHelper::DoMiniDump
sqlservr!stackTrace
sqlservr!LatchBase::DumpOnTimeoutIfNeeded
sqlservr!LatchBase::PrintWarning
sqlservr!alloca_probe
sqlservr!BUF::AcquireLatch


sqlservr!UtilDbccCreateReplica
sqlservr!UtilDbccCheckDatabase
sqlservr!DbccCheckDB
sqlservr!DbccCommand::Execute

If you cannot find the thread which raised the latch timeout warning, you could print out all the callstacks in the dump using ~*kL100 and then search for the CHECKDB-related frames (such as sqlservr!UtilDbccCheckDatabase) shown above. It is quite clear from the callstack above that the thread was involved in a CHECKDB operation, as reported in the SQL Errorlog in the input buffer for the latch timeout dump.

In case you were not able to identify the issue right off the bat, check the build that you are on and look for fixes related to latch timeouts released for the SQL Server version that you are using. The Symptoms section of a KB article usually has enough information for you to compare with your current symptoms and scenario and determine whether that article is applicable to your case.

Now is the time when you need some context about the operations that were occurring on the server to determine the actual issue. From what I heard from the system administrators, a CHECKDB was being executed on the database while the application was executing DML operations against it. Additionally, the volume on which the database files reside is fragmented, and the database in question is large (>750 GB).

Based on the two MSDN blog posts mentioned above, it is quite clear that you can run into sparse file limitations when there is a high amount of fragmentation on the disks, or when a large number of concurrent changes occur on the database while CHECKDB is executing on it. A number of Windows and SQL Server updates, along with workarounds for running CHECKDB on such databases, are mentioned in the second blog post above. On a separate note, this is not an issue with CHECKDB! It is a limitation of sparse files that you are hitting at the OS layer. Remember that CHECKDB, starting from SQL Server 2005, creates an internal snapshot (a sparse file) to execute the consistency check. Paul Randal's tweet made me add this line to call this out explicitly!
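For completeness, one of the workarounds discussed in those posts is to avoid the internal snapshot altogether (a sketch; 'YourDatabase' is a placeholder name): running CHECKDB WITH TABLOCK uses locks instead of a snapshot, at the cost of blocking concurrent modifications while it runs.

-- Run CHECKDB without the internal snapshot (sparse file); this blocks concurrent DML for the duration
DBCC CHECKDB (N'YourDatabase') WITH TABLOCK, ALL_ERRORMSGS;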

As always, if you are still stuck, contact Microsoft CSS with the mini-dump file, the SQL Errorlog and the Windows Event Logs. It is quite possible that CSS will ask you to collect additional data, as most latch timeout issues are generally the after-effect of a previous issue. In this case, it was the OS Error 1450.

Well… that's it for today! I hope this is helpful the next time you encounter a latch timeout issue.

Additional References

Four stages of NTFS File Growth
KB 315263 – How to read the small memory dump files that Windows creates for debugging