Tools Tips and Tricks #4: RML Utilities

It is time again for your friendly neighborhood (or virtual neighborhood) SQL CSS person to provide you with the next set of tips for the various tools that he uses in his day-to-day work. I probably use RML Utilities, or the database created by it, at least once a day and sometimes even more. Not kidding!!

This tool has become synonymous with the analysis of Profiler Traces collected from customer environments in CSS. Most of you might be familiar with the tool "SQL Nexus", which calls RML Utilities in the background to load the collected Profiler Traces into a SQL Server database. This calls an executable, ReadTrace.exe. In this post, I will talk about a few conditions when you should be calling ReadTrace.exe directly instead of invoking it from SQL Nexus.

[Screenshot: SQL Nexus trace import options]

As you can see above, SQL Nexus provides you the option of breaking down the activity of each SPID into individual .trc files, but they can only be directed to the %TEMP%\RML folder on your machine. The previous server that I had didn't have sufficient disk space to hold over 100GB of data in the TEMP directory. Yes, we do analyze data which runs into a few hundred GBs at times!

Now you have two options: change the %TEMP% environment path on your machine to a different drive, or use ReadTrace.exe to generate the .trc files in the required path.
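
If you go with the first option, one way to do it without changing the machine-wide setting is to override the variables in the command prompt session used to launch SQL Nexus. A minimal sketch, assuming the SQL Nexus executable is on the path and using an illustrative folder:

rem Repoint TEMP/TMP at a drive with sufficient free space (folder is illustrative)
set TEMP=E:\RMLWork
set TMP=E:\RMLWork
rem Launch SQL Nexus from this same prompt so it inherits the new values
sqlnexus.exe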

Another thing that cannot be controlled from SQL Nexus is the addition of SPID, Hostname and Application filters while importing the data.

Some of the ReadTrace defaults, which apply to any profiler trace import, are shown below:

-o Output files to be generated in the current directory
-S Connect to default SQL Server instance on the machine
-d Load profiler data into PerfAnalysis database
-E Use Windows authentication while connecting
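
Since these are the defaults, a minimal import needs nothing more than the input trace file. A sketch, with an illustrative path:

readtrace -I"D:\RML\MyTrace.trc"

This would load the trace into the PerfAnalysis database on the local default instance using Windows authentication, with the output files written to the current directory.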

Here are a few scenarios where I use ReadTrace.exe directly…

Analyzing a deadlock scenario where I know the Session IDs involved

Sometimes the deadlock graph is not sufficient to figure out why a lock is still held by that session and why that session acquired the lock on the object in the first place. At that point, you need to track down the sequence of statements executed by the session. In this case, I will use the following command to import the data.

readtrace -S. -E -ddbRMLUtilDemo -I"D:\RML\RMLUtil_demo.trc" -A"!SQLCMD" -s53 -s55 -MF -o"D:\RML\output" -f

The above command will import the trace file into the dbRMLUtilDemo database on the default instance of the machine, with a SPID filter of 53 and 55, excluding events generated by the application named SQLCMD. Additionally, individual .trc files will be generated containing the activity of SPIDs 53 and 55.

Generating .RML files to be used by OStress

Use the -o parameter to change the output path and use the -s, -A and -H filters to apply the necessary SPID, application name and host name filters respectively. Here is an example that applies filters for SPIDs 51, 52 and 53, and excludes events generated by the application name PSSDIAG or the host name MyLaptop:

readtrace -S. -E -ddbRMLUtilDemo -I"D:\RML\RMLUtil_demo.trc" -A"!SQLCMD" -s51 -s52 -s53 -o"D:\RML\output" -H"!MyLaptop" -A"!PSSDIAG"

The command line window and the ReadTrace.log file available in the output directory will confirm whether the filters were applied correctly. Below is a screenshot of the command line window showing that the filters were applied correctly:

[Screenshot: ReadTrace command line output confirming the applied filters]

Handling MARS sessions

ReadTrace has limited ability to process sessions using MARS (Multiple Active Result Sets). You have to use -f and -T35 to import data from trace files which contain MARS sessions. ReadTrace uses the formula ((Session ID * 10) + BatchID) so that each MARS batch appears as a separate session. E.g. if you see Session IDs 1620, 1621 and so on in the ReadTrace/SQL Nexus database but cannot find such sessions in the DMV outputs captured during that time, then the activity corresponds to Session ID 162, which was using MARS.
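
For instance, a MARS-aware import reusing the paths and database name from the earlier examples would look like this sketch:

readtrace -S. -E -ddbRMLUtilDemo -I"D:\RML\RMLUtil_demo.trc" -o"D:\RML\output" -f -T35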

Note: Remember to always use a server-side trace whenever possible to capture the profiler trace output to a local disk.
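
If you have never scripted one, a bare-bones server-side trace looks like the sketch below; the file path is hypothetical and the .trc extension is appended automatically. Profiler itself can also export a complete trace definition script for you.

-- Minimal server-side trace sketch (file path is hypothetical)
DECLARE @TraceID int, @maxfilesize bigint, @on bit
SET @maxfilesize = 512 -- maximum size in MB of each rollover file
SET @on = 1
EXEC sp_trace_create @TraceID OUTPUT, 2, N'D:\Traces\MyServerSideTrace', @maxfilesize -- 2 = TRACE_FILE_ROLLOVER
EXEC sp_trace_setevent @TraceID, 12, 1, @on -- event 12 = SQL:BatchCompleted, column 1 = TextData
EXEC sp_trace_setstatus @TraceID, 1 -- 1 = start the trace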

In case you want more information on the above, the PDF help files for RML Utilities are located by default in the <system drive>:\Program Files\Microsoft Corporation\RMLUtils\Help folder on a machine where RML Utilities is installed.

RML Utilities Download
http://blogs.msdn.com/b/psssql/archive/2008/11/12/cumulative-update-1-to-the-rml-utilities-for-microsoft-sql-server-released.aspx

Keep following the tag "Tools Tips and Tricks" for more posts on the various tools that we use. That's it for today!

Tools Tips and Tricks #1: Process Monitor

I recently wrote about importing a Process Monitor trace into a SQL Server database table and crunching the data to extract the events and call stacks. This prompted me to think about capturing data with Process Monitor and some things I learnt along the way while using this tool at CSS.

The first tip is to disable any activity that you don't want to capture or that is not required for the issue that you are troubleshooting. The capture tracks three main classes of operations: File System, Registry and Process. In the toolbar shown on the left in the screenshot, you can enable/disable the following captures:

a. Registry activity
b. File System activity
c. Network activity
d. Process and Thread activity
e. Profiling events

More information about the above is available in the Process Monitor help file.

The command line options are immensely helpful if you are scripting the capture of a trace using a batch file or if you are building an automation routine to load the captured data into another data source. I had used the /OpenLog and /SaveAs1 options to generate the XML file from the saved .PML file.
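
For example, converting a saved .PML file to XML from a batch file would look something like this sketch (the paths are illustrative):

procmon.exe /OpenLog "D:\ProcmonTraces\capture.pml" /SaveAs1 "D:\ProcmonTraces\capture.xml"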

One of the most useful options that I suggest using when capturing a Process Monitor trace is the backing file option (the /BackingFile command line parameter, or CTRL+B when using the GUI). This prevents using the page file as the backing store for the trace capture and avoids running into unresponsive-server issues where the paging file fills up while you are still capturing your trace. I normally point the backing file to a local drive on the machine which has a sufficient amount of disk space.
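
Starting a capture with a backing file from the command line is a one-liner; the path below is illustrative:

procmon.exe /BackingFile "E:\ProcmonTraces\capture.pml"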

Process Monitor can use symbol information, if available, to show the functions referenced on event stacks. You can point to the symbol path (a local symbol cache or the Microsoft Symbol Server: http://msdl.microsoft.com/download/symbols) using Options –> Configure Symbols. Additionally, you can specify the path to the source files for the application in the same dialog. This will help you resolve the function calls using the symbol path and, if a source path is present, open a text viewer dialog with the referenced source line highlighted. The symbol path is needed when the /SaveAs2 option is used for converting the .PML file to .XML format. Note that this option considerably increases the export time due to the symbol resolution involved.

I am starting a series tagged "Tools Tips and Tricks" which will document the various tweaks that I use with the data collection/analysis tools I work with on a day-to-day basis.

T-SQL Tuesday #18: CTEs, XML and Process Monitor

This month's recurring SQL blog party, a.k.a. T-SQL Tuesday, started by Adam Machanic (blog | @AdamMachanic), is being hosted by Bob Pusateri (blog | @SQLBob). This month's topic is Common Table Expressions, which were introduced for the first time in SQL Server 2005. A CTE can be thought of as a derived table which can be self-referenced within a query. With the advent of CTEs, a lot of T-SQL developers made full use of this feature. Personally, I have used CTEs in various data collection and parsing scripts for my day-to-day work. I am a big fan of CTEs and have introduced their usage in one of the blocking reports for SQL Nexus. As with any coding construct, the use of CTEs has its pros and cons, but I shall not go down that path in this post, as whether a particular scenario is a fit for a CTE or a temporary table depends on many factors. That decision has to be made based on testing against the benchmarks or response-time SLAs that you have defined for your application.

Since my primary expertise lies with SQL Server, I examine every opportunity for data analysis with the intention of importing the data into a SQL Server database table and then writing an automation routine over it, to save myself the repetitive iterations when I am analyzing the same data or need to perform the same set of steps over and over again. Recently I have shifted over to PowerShell due to my affinity for scripting, but that is a story for some other blog post. CSS uses Process Monitor, a vastly popular tool from Sysinternals, to troubleshoot various problems that we encounter while working on SQL Server related issues. Some of our adventures with Process Monitor are mentioned below:

PRB: SQL Backups to a UNC path fail with OS Error 1 (Incorrect Function)
INF: Permissions required for SQL Server Service account to use SSL certificate

During my tenure here with CSS, I too have relied on Process Monitor to troubleshoot some fairly complex issues. The Process Monitor .pml files can be quite large if collected without appropriate filters, and sometimes I am just interested in a particular process (or processes). By importing the .PML file into a database, I can not only query the imported data using T-SQL queries, but I can also write quick automations to look for common stack traces that have known issues/problems associated with them. Yes, with Process Monitor you can use locally cached symbols to get the call stack of the function call as well. Note that saving a .PML into XML with resolved stack symbols is slow due to the time taken for symbol resolution. The .XML file that I will use as an example was saved using the "Extensible Markup Language (XML)" option with the "Include stack traces (will increase file size)" option enabled. I captured a Process Monitor trace while trying to access a directory which was not present on my server.

I already have a stored procedure to import the saved .XML file into a database table, parse that data, and insert the parsed data into the two tables that I have created to store the event and stack information.

How does the use of CTEs fit in here?

The .XML file has the following structure for the stack and the frames:

<ProcessIndex>2083</ProcessIndex>
<Time_of_Day>5:52:38.8471548 PM</Time_of_Day>
<Process_Name>Explorer.EXE</Process_Name>
<PID>6588</PID>
<Operation>NotifyChangeDirectory</Operation>
<Path>D:\Tools</Path>
<Result></Result>
<Detail>Filter: FILE_NOTIFY_CHANGE_FILE_NAME, FILE_NOTIFY_CHANGE_ATTRIBUTES, FILE_NOTIFY_CHANGE_LAST_WRITE</Detail>
<stack>
<frame>
<depth>0</depth>
<address>0xfffff880013ab027</address>
<path>C:\Windows\system32\drivers\fltmgr.sys</path>
<location>fltmgr.sys + 0x2027</location>
</frame>

<frame>
<depth>1</depth>
<address>0xfffff880013abbe9</address>
<path>C:\Windows\system32\drivers\fltmgr.sys</path>
<location>fltmgr.sys + 0x2be9</location>
</frame>
….
….
</stack>

Now each operation has a call stack associated with it, which I am interested in importing into my database. But with the existing schema above, I cannot get the information that I require. After exploring multiple ways of parsing the data (temporary variables, temporary tables, recursive CTEs, XML indexes and so on), I found that the quickest way to do this was:

1. Get the event list imported into a temporary table
2. Using the event list generated above, create the call stacks from the <frame> nodes (see code snippet below)

The logic here is that the ctexml CTE used below extracts all frames for a particular event using a counter variable and creates XML data for each of the frames, which are then parsed by referencing the CTE in the INSERT INTO…SELECT query.

-- Extract events from the XML file and put them into a staging table
INSERT INTO #tblXMLstaging (event, fname)
SELECT C.query('.') AS event, @fname
FROM dbo.tbl_XMLData
CROSS APPLY xmldata.nodes('/procmon/eventlist/event') AS T(C)
WHERE fname = @fname;

-- Fetch values from the staging table, store parsed frames in an XML and parse them to insert data into the procmon_stacks table
WITH ctexml AS
(
    SELECT TOP 15 eventid, C.query('.') AS frames, fname
    FROM #tblXMLstaging
    CROSS APPLY event.nodes('/event/stack/frame') AS T(C)
    WHERE eventid = @counter
)
INSERT INTO dbo.tblProcMon_Stacks (StackID, FName, FrameDepth, [Address], [Path], Location)
SELECT
    eventid AS StackID,
    fname,
    frames.value('(/frame/depth)[1]', 'int') AS FrameDepth,
    frames.value('(/frame/address)[1]', 'varchar(50)') AS [address],
    frames.value('(/frame/path)[1]', 'varchar(255)') AS [Path],
    frames.value('(/frame/location)[1]', 'varchar(255)') AS [location]
FROM ctexml;


If you are curious as to how long it takes to import the data, it took me a little over a minute (84 seconds to be precise) to load a 100MB XML file, parse and shred it! I am sure that there are optimizations possible to this method. The screenshot on the left shows the load times. The specifications of the system where the data import was done are:

Operating System: Microsoft Windows Server 2008 R2 Enterprise (x64)
Machine: Hewlett-Packard HP Z800 Workstation
Processor: 2 Quad Core [Intel(R) Xeon(R) CPU E5506 @ 2.13GHz, 2128 Mhz, 4 Core(s), 4 Logical Processor(s)]
RAM: 16.0 GB
Hard Disk: Barracuda 7200.12 SATA 3Gb/s 500GB Hard Drive
SQL Server: SQL Server 2008 R2

Now that the data is available in a database table, I can query the information using T-SQL. I had attempted to open the directory C:\Foo on my server using Windows Explorer. This information and the call stack can be viewed using the following T-SQL queries:

-- Get the event information associated with the directory that I was looking up
SELECT ProcessName, PID, Operation, Path, Result, Detail, StackID
FROM dbo.tblprocmon_events
WHERE fname = 'FileNotFound.xml'
AND path LIKE '%C:\Foo%'
ORDER BY StackID

-- Get information for the call stack using the StackID obtained above
SELECT FrameDepth, address, Path, Location
FROM tblProcmon_stacks
WHERE fname = 'FileNotFound.xml' AND StackID = 662
ORDER BY FrameDepth

Below is a screenshot showing me that the directory doesn’t exist and the call-stack associated with the function call that failed.

The options are now limitless. You can take this forward by creating searches for common call stacks, in case you are troubleshooting an application developed by you and have access to the private symbols. Or you could look for common results for certain operations, like Operation = QueryDirectory and Result = NO SUCH FILE. My use of CTEs is for the following tasks:

1. Parsing XML data as shown here
2. Extracting useful information from Ring Buffers (see the sketch after this list)
3. Tracking available contiguous memory used and available in the SQL Server process address space
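
As a small illustration of the second task, here is a sketch that pulls the error number and severity out of the exception ring buffer; the ring buffer type and XML paths are from memory and may vary across SQL Server builds:

;WITH cteRingBuffer AS
(
    SELECT CAST(record AS xml) AS record
    FROM sys.dm_os_ring_buffers
    WHERE ring_buffer_type = 'RING_BUFFER_EXCEPTION'
)
SELECT
    record.value('(/Record/Exception/Error)[1]', 'int') AS ErrorNumber,
    record.value('(/Record/Exception/Severity)[1]', 'int') AS Severity
FROM cteRingBuffer;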

The scripts containing the Stored Procedure definitions and table schemas can be downloaded from here.

T-SQL Tuesday #17: It’s all about APPLYcation this time

It’s time for another round of T-SQL Tuesday and this round of the revolving blog party is being hosted by Matt Velic [Blog | Twitter].

APPLY – that is the topic for this month’s T-SQL Tuesday! The APPLY operator was added to the T-SQL repertoire and has resulted in reduced use of cursors in a large number of the diagnostic scripts that CSS uses to collect data while working on SQL performance issues. In this blog post, I shall share a few examples of such queries that we use to collect data while working on SQL performance cases.

TOP Query Plan Statistics

The following query gives you a list of the SQL batches/procedures with their CPU usage, query/batch duration and physical reads rank. This query helps identify the top CPU/duration/read consuming queries by making use of system DMVs. The output is useful for the following reasons:

1. I get the use count of the procedure/batch, and if the batch is called multiple times but the use count of a Compiled Proc cached object is only 1, then the plan is not being re-used. This tells me that I need to look at the reasons behind the lack of plan re-use.

2. I get the total and average resource usage statistics for each of the queries listed in the output.

3. A quick glance at the output gives me an idea of the most expensive queries on the instance w.r.t. reads and/or CPU and/or query duration.


SELECT
    LEFT(p.cacheobjtype + ' (' + p.objtype + ')', 35) AS cacheobjtype,
    p.usecounts,
    p.size_in_bytes/1024 AS size_in_kb,
    PlanStats.total_worker_time/1000 AS tot_cpu_ms,
    PlanStats.total_elapsed_time/1000 AS tot_duration_ms,
    PlanStats.total_physical_reads,
    PlanStats.total_logical_writes,
    PlanStats.total_logical_reads,
    PlanStats.CpuRank,
    PlanStats.PhysicalReadsRank,
    PlanStats.DurationRank,
    LEFT(CASE WHEN pa.value = 32767 THEN 'ResourceDb' ELSE ISNULL(DB_NAME(CONVERT(sysname, pa.value)), CONVERT(sysname, pa.value)) END, 40) AS dbname,
    sql.objectid,
    CONVERT(nvarchar(50), CASE WHEN sql.objectid IS NULL THEN NULL ELSE REPLACE(REPLACE(sql.[text], CHAR(13), ' '), CHAR(10), ' ') END) AS procname,
    REPLACE(REPLACE(SUBSTRING(sql.[text], PlanStats.statement_start_offset/2 + 1,
        CASE WHEN PlanStats.statement_end_offset = -1 THEN LEN(CONVERT(nvarchar(max), sql.[text]))
             ELSE PlanStats.statement_end_offset/2 - PlanStats.statement_start_offset/2 + 1 END), CHAR(13), ' '), CHAR(10), ' ') AS stmt_text
FROM
(
    SELECT
        stat.plan_handle,
        statement_start_offset,
        statement_end_offset,
        stat.total_worker_time,
        stat.total_elapsed_time,
        stat.total_physical_reads,
        stat.total_logical_writes,
        stat.total_logical_reads,
        ROW_NUMBER() OVER (ORDER BY stat.total_worker_time DESC) AS CpuRank,
        ROW_NUMBER() OVER (ORDER BY stat.total_physical_reads DESC) AS PhysicalReadsRank,
        ROW_NUMBER() OVER (ORDER BY stat.total_elapsed_time DESC) AS DurationRank
    FROM sys.dm_exec_query_stats stat
) AS PlanStats
INNER JOIN sys.dm_exec_cached_plans p
    ON p.plan_handle = PlanStats.plan_handle
OUTER APPLY sys.dm_exec_plan_attributes(p.plan_handle) pa
OUTER APPLY sys.dm_exec_sql_text(p.plan_handle) AS sql
WHERE
    (PlanStats.CpuRank < 50
     OR PlanStats.PhysicalReadsRank < 50
     OR PlanStats.DurationRank < 50)
    AND pa.attribute = 'dbid'
ORDER BY tot_cpu_ms DESC


Top Queries with Similar Query Hash and Query Plan Hash


SELECT TOP 10 query_plan_hash, query_hash,
    COUNT(DISTINCT query_plan_hash) AS 'distinct query_plan_hash count',
    SUM(execution_count) AS 'execution_count',
    SUM(total_worker_time) AS 'total_worker_time',
    SUM(total_elapsed_time) AS 'total_elapsed_time',
    SUM(total_logical_reads) AS 'total_logical_reads',
    MAX(REPLACE(REPLACE(SUBSTRING(st.[text], qs.statement_start_offset/2 + 1, CASE WHEN qs.statement_end_offset = -1 THEN LEN(CONVERT(nvarchar(max), st.[text])) ELSE qs.statement_end_offset/2 - qs.statement_start_offset/2 + 1 END), CHAR(13), ' '), CHAR(10), ' ')) AS sample_statement_text,
    MIN(CAST(query_plan AS varchar(max))) AS 'ShowPlan XML'
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS sp
GROUP BY query_plan_hash, query_hash
ORDER BY SUM(total_worker_time) DESC;

This query can help you identify queries which have the same query plan hash.

The SQL Server Books Online topic "Finding and Tuning Similar Queries by Using Query and Query Plan Hashes" has more information on this. The query hash feature was added in SQL Server 2008, and it makes it easier to troubleshoot performance issues caused by ad-hoc queries which differ in just their literal values. RML Utilities does a similar task by creating query hashes, but now, if you are troubleshooting on the server, you can do this using DMVs without having to capture a profiler trace.

The right operand supplied to the APPLY operator is a function of one or more column values that are present in the left operand. So basically, the right operand is a table-valued expression which is evaluated once for each row that appears in the left operand. CROSS APPLY and OUTER APPLY are the two flavors of the APPLY operator. If I wanted to simulate an APPLY operation without the operator itself, it would require the use of temporary tables or table variables.
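
As a minimal illustration of that per-row evaluation, the sketch below fetches the most recent request for each session; the right operand references s.session_id from the left operand, so it is evaluated once per session row:

SELECT s.session_id, r.start_time, r.status
FROM sys.dm_exec_sessions AS s
CROSS APPLY
(
    SELECT TOP 1 req.start_time, req.status
    FROM sys.dm_exec_requests AS req
    WHERE req.session_id = s.session_id
    ORDER BY req.start_time DESC
) AS r;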

I use the APPLY operator a lot while parsing XML data like Process Monitor traces or XML query plans which make life a lot easier and saves me from writing a huge bunch of T-SQL code.

Profiler events for different scenarios

A few moons back, I had blogged about the high frequency events in SQL Server. In this post, I shall talk about the common scenarios where profiler traces are collected for SQL performance issues and the kind of events that need to be captured in the server-side traces configured for the data collection.

While configuring a profiler trace, always include all data columns for an event, as data columns do not add nearly as much overhead as the number of events being captured does.

In this post, I shall cover two of the most common issues that CSS collects profiler data for addressing SQL Performance issues: Attentions/Timeouts and High CPU usage/Long Running queries.

Attentions or Timeouts

Attentions or timeouts can occur due to a client-side application timeout setting (this is most often the case) or due to a database engine side issue (like a failure to acquire memory, a problem in the server-side connectivity layer, etc.). During this time, you will have a lot of queries starting but not completing successfully. If you are dealing with such a scenario, then you need to capture the Starting events using a profiler trace to find out which statements didn't complete and encountered the Attention event.

The events that need to be captured are:

All Database Events

Log File Auto Shrink
Data File Auto Shrink
Log File Auto Grow
Data File Auto Grow

All Errors and Warnings

Attention (Important for tracking the timeouts)
ErrorLog
EventLog
Exception
Hash Warning
Execution Warnings
Sort Warnings
Missing Column Statistics
Missing Join Predicate
Exchange Spill Event
Blocked process report
User Error Message
Background Job Error
Bitmap Warning
Database Suspect Data Page
CPU threshold exceeded

Lock:Deadlock Chain and Lock:Timeout (timeout > 0)

Performance: Auto Stats

Showplan All or Showplan XML (As execution plan will not be available for queries which didn’t complete)

Showplan Statistics Profile or Showplan XML Statistics Profile (For comparing estimated and actual query plan)

Security Audit: Audit Login, Audit Login Failed and Audit Logout

Stored Procedures events:

RPC:Starting
SP:Recompile
SP:Starting
SP:Completed
SP:StmtStarting
SP:StmtCompleted
RPC:Completed

Transactions: SQLTransaction is required for tracing back the transaction if there are nested transactions. DTCTransaction is required only if DTC transactions are getting timed out.

TSQL events:

SQL:BatchCompleted
SQL:BatchStarting
SQL:StmtStarting
SQL:StmtCompleted
SQL:StmtRecompile

Cursor events: CursorRecompile and CursorImplicitConversion (Required for checking if the requested cursor is being implicitly converted or if the cursor is being frequently recompiled).

The above events should be sufficient to track down any query timeouts and the common root causes for the same.

High CPU usage/Long Running queries

Another set of issues commonly dealt with while troubleshooting SQL performance issues are long running queries or queries causing high CPU usage on the server. In such a case, the above set of events would definitely be useful. If you are certain that the profiler trace can be captured for the entire duration of the query execution, then for highly transactional servers you can omit the capture of the following events:

SQL:BatchStarting
SQL:StmtStarting
SP:StmtStarting
SP:Starting
RPC:Starting

The above logic of configuring a profiler trace will also help you capture trace data when you are trying to track down queries which are causing an excessive amount of IO on the SQL instance.

Reference for capturing blocking information on a SQL instance: SQL 2000 Blocker Script output/SQL 2005 PerfStats

Keep in mind that the profiler data captured may not be sufficient for root cause analysis if you are not capturing SQL blocking information and performance monitor data in conjunction with the profiler traces.

In case you need additional events like deadlock events or lock events, refer to my previous post to find out if they fall in the high frequency event category and when to capture a particular event.

TIP: As far as possible, avoid adding Text filters to your profiler trace configuration, as filter evaluation can cause high CPU usage on highly active OLTP environments.
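
If you do need a filter, numeric columns such as the SPID are far cheaper to evaluate than Text filters. A sketch of adding a SPID filter to an existing server-side trace (the trace ID value is hypothetical):

DECLARE @TraceID int, @spid int
SET @TraceID = 2 -- ID of an existing trace, e.g. as returned by sp_trace_create
SET @spid = 53
EXEC sp_trace_setfilter @TraceID, 12, 0, 0, @spid -- column 12 = SPID, 0 = AND, 0 = equal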

Hope the above information helps you avoid excessive data collection while troubleshooting SQL performance issues.