Profiler events for different scenarios


A few moons back I had blogged about the high frequency events in SQL Server. In this post, I shall talk about the common scenarios where profiler traces are collected for SQL Performance issues and what kind of events need to be captured in the server side traces that are configured for the data collection.

While configuring a profiler trace, always include all data columns for each event; the data columns add far less overhead than the number of events being captured during the trace capture.

In this post, I shall cover two of the most common issues that CSS collects profiler data for addressing SQL Performance issues: Attentions/Timeouts and High CPU usage/Long Running queries.

Attentions or Timeouts

Attentions or Timeouts can occur due to a client-side application timeout setting (this is most often the case) or due to a database engine issue (such as a failure to acquire memory, a problem in the server-side connectivity layer, etc.). During this time, you will have a lot of queries starting but not completing successfully. If you are dealing with such a scenario, you will need to capture the Starting events using a profiler trace to find out which statements didn't complete and encountered the Attention event.

The events that need to be captured are:

All Database Events

Log File Auto Shrink
Data File Auto Shrink
Log File Auto Grow
Data File Auto Grow

All Errors and Warnings

Attention (Important for tracking the timeouts)
ErrorLog
EventLog
Exception
Hash Warning
Execution Warnings
Sort Warnings
Missing Column Statistics
Missing Join Predicate
Exchange Spill Event
Blocked process report
User Error Message
Background Job Error
Bitmap Warning
Database Suspect Data Page
CPU threshold exceeded

Lock:Deadlock Chain and Lock:Timeout (timeout > 0)

Performance: Auto Stats

Showplan All or Showplan XML (As execution plan will not be available for queries which didn’t complete)

Showplan Statistics Profile or Showplan XML Statistics Profile (For comparing estimated and actual query plan)

Security Audit: Audit Login, Audit Login Failed and Audit Logout

Stored Procedures events:

RPC:Starting
SP:Recompile
SP:Starting
SP:Completed
SP:StmtStarting
SP:StmtCompleted
RPC:Completed

Transactions: SQLTransaction is required for tracing back the transaction if there are nested transactions. DTCTransaction is required only if DTC transactions are timing out.

TSQL events:

SQL:BatchCompleted
SQL:BatchStarting
SQL:StmtStarting
SQL:StmtCompleted
SQL:StmtRecompile

Cursor events: CursorRecompile and CursorImplicitConversion (Required for checking if the requested cursor is being implicitly converted or if the cursor is being frequently recompiled).

The above events should be sufficient to track down any query timeouts and the common root causes for the same.
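Once the trace file has been collected, one quick way to locate the Attention events and the other activity on the same SPIDs is to load the file with fn_trace_gettable. This is a minimal sketch: the file path is a placeholder, and EventClass 16 is the Attention event in SQL Trace.

```sql
-- Sketch: find Attention events in a captured trace file, plus all events
-- captured for the SPIDs that encountered an Attention. Path is a placeholder.
SELECT t.SPID, t.StartTime, t.EventClass, t.TextData, t.ApplicationName, t.HostName
FROM fn_trace_gettable(N'C:\Traces\timeout_trace.trc', DEFAULT) AS t
WHERE t.SPID IN (SELECT a.SPID
                 FROM fn_trace_gettable(N'C:\Traces\timeout_trace.trc', DEFAULT) AS a
                 WHERE a.EventClass = 16) -- 16 = Attention
ORDER BY t.SPID, t.StartTime;
```

The Starting events captured just before an Attention row for the same SPID identify the statement that timed out.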

High CPU usage/Long Running queries

Another set of issues commonly dealt with while troubleshooting SQL Performance issues are Long Running Queries or queries causing high CPU usage on the server. In such a case, the above set of events would definitely be useful. If you are certain that the profiler trace can be captured for the entire duration of the query execution, then for highly transactional servers you can omit the capture of the following events:

SQL:BatchStarting
SQL:StmtStarting
SP:StmtStarting
SP:Starting
RPC:Starting

The above logic of configuring a profiler trace will also help you capture trace data when you are trying to track down queries which are causing excessive amount of IO on the SQL instance.

Reference for capturing blocking information on a SQL instance: SQL 2000 Blocker Script output/SQL 2005 PerfStats

Keep in mind that the profiler data captured may not be sufficient for root cause analysis if you are not capturing SQL blocking information and performance monitor data in conjunction with the profiler traces.

In case you need additional events like deadlock events or lock events, then refer to my previous post and find out if they fall in the high frequency event category and when to capture the particular event.

TIP: As much as possible, avoid adding Text filters to your profiler trace configuration as it can lead to high CPU usage for highly active OLTP environments during filter evaluation.
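If you do need to limit the volume of trace data, a Duration filter is much cheaper to evaluate than a Text filter. As a sketch (the trace id is assumed to come from a prior sp_trace_create call; on SQL Server 2005 and later the Duration filter value is expressed in microseconds):

```sql
DECLARE @traceid INT = 1;            -- assumed: returned earlier by sp_trace_create
DECLARE @duration BIGINT = 5000000;  -- 5 seconds, in microseconds (SQL 2005 and later)
-- Column 13 = Duration, logical operator 0 = AND, comparison operator 4 = >=
EXEC sp_trace_setfilter @traceid, 13, 0, 4, @duration;
```

This keeps only events that ran for at least the specified duration, without forcing the trace to evaluate a LIKE pattern against every TextData value.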

Hope that the above information will help you reduce the excessive amount of data collection for troubleshooting SQL performance issues.

Wikis: Disk Contention Issues


I had written this post a while back and decided to add a bit more context with respect to Perfmon and other DMVs that can be used to fetch this information.

The old recommendation was that Average Disk Queue Length (a Perfmon counter in the Logical/Physical Disk objects) for a disk be less than 2 * the number of spindles on the disk. So if your disk had 10 spindles, then values above 20 would be a cause for concern. But on a SAN or in virtualized environments, this particular counter can be misleading at times. So what was valid a few years ago has become somewhat of an urban legend. I personally am not a big fan of calling disk performance bad based on this counter alone, without knowing what the underlying physical disk structure looks like.

To get a complete picture of your I/O performance, you might want to check the following Perfmon counters too (keep in mind I/O vendor logging; and if you are using virtual machines, logging I/O calls at the virtualization bus layer can become equally important in some cases):
Physical Disk:
Avg. Disk sec/Write
Avg. Disk sec/Read
Avg. Disk sec/Transfer

Logical Disk:
Avg. Disk sec/Write
Avg. Disk sec/Read
Avg. Disk sec/Transfer

Also, you would want to find out if a lot of Split I/O (another Perfmon disk counter) is occurring on the server. If so, you most probably have disk-level fragmentation on the drives storing the SQL database files. That doesn't mean you should immediately start de-fragmenting the disk drives on which the SQL Server database files reside.

For disks on which SQL Server data (MDF, NDF) and log (LDF) files reside, we recommend that these counters (Avg. Disk sec/Read, Write, Transfer) show values less than 30 milliseconds (a raw counter value of 0.030, since these counters report seconds), without frequent spikes or prolonged response times above 30 milliseconds. If they constantly show values of 0.5 and above, then we have a serious disk contention issue on our hands. This means your disk is taking roughly 0.5 seconds to perform an I/O operation, which in a database server environment is a performance bottleneck. However, a low average value with very frequent spikes for the counters mentioned above during the timeframe of your performance issues is also not a good sign and would require further investigation.
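The same latency picture can be cross-checked from inside SQL Server: sys.dm_io_virtual_file_stats accumulates I/O stalls per database file since the instance last started, so the averages below are cumulative rather than point-in-time. A minimal sketch:

```sql
-- Sketch: cumulative average read/write latency per database file (in ms)
SELECT DB_NAME(fs.database_id) AS database_name,
       mf.physical_name,
       fs.io_stall_read_ms  / NULLIF(fs.num_of_reads, 0)  AS avg_read_latency_ms,
       fs.io_stall_write_ms / NULLIF(fs.num_of_writes, 0) AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS fs
INNER JOIN sys.master_files AS mf
    ON fs.database_id = mf.database_id AND fs.[file_id] = mf.[file_id]
ORDER BY avg_read_latency_ms DESC;
```

Because the numbers are cumulative, compare two snapshots taken some minutes apart if you want the latency during a specific window.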

The other counters of interest in Perfmon are:

Disk Bytes/Sec
Disk Read Bytes/Sec
Disk Write Bytes/Sec

You can drill down into the above counters on the basis of logical disk or physical disk to determine which disk is receiving the highest amount of I/O activity. As always with performance issues, what constitutes a good value *depends* on the throughput of the disks. Depending on how much I/O these disks are receiving and their throughput, you will be able to determine whether the disks are capable of handling the load.

The disk contention could be due to these reasons:
1. There is an underlying H/W issue. This could mean that some driver or firmware needs an upgrade (essentially all your firmware/drivers should always be on the latest builds available) or it could be a problem with the SAN configuration or it could be that this particular SAN is not dedicated to SQL Server.
2. OR it could be that the disk contention is due to the fact that queries executing on SQL Server are performing an excessive amount of reads due to sub-optimal plans. Sub-optimal plans are generated if your indexes are out of shape or if your statistics are skewed; the latter would require a statistics update with a full scan. (Please refer to my statistics update post.) You should be able to find out which process is driving the I/O by looking at the Process performance object in Perfmon and the IO Data Bytes/sec counter. There is no fixed value for good and bad performance for this counter; the amount of I/O being pushed through should be compared with the response time of the disks and the I/O thresholds the disks can handle.
3. OR it could be that the disk activity on the disks has increased over a period of time due to increase in data or addition of new databases. In such a case, it would be a good idea to provide a separate disk or storage area for some of the databases as the current activity might be too much for the disk to handle. Sometimes, load balancing for physical storage also helps in alleviating a disk contention issue.

Also, it would be a good idea to check if you are receiving any stuck/stalled IO warnings for any particular database in the SQL Server ERRORLOGs. (Reference: Stuck Stalled IO post)
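One way to scan the current ERRORLOG for these warnings without leaving Management Studio is the (undocumented but widely used) xp_readerrorlog procedure; the search string below matches the standard stalled-I/O message text:

```sql
-- 0 = current ERRORLOG file, 1 = SQL Server log (as opposed to SQL Agent log)
-- Searches for the "I/O requests taking longer than 15 seconds" warnings
EXEC xp_readerrorlog 0, 1, N'I/O requests taking longer than';
```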

What kind of waits would you monitor when looking at wait statistics while troubleshooting disk contention? The most common waits that you would see in the sys.sysprocesses or sys.dm_exec_requests output are PAGEIOLATCH_XX waits (XX = EX or SH most commonly) or WRITELOG wait types. Keep in mind that the wait duration for these wait types should be non-zero. It could either keep increasing over a period of time for a particular request, or it could show non-zero values that keep dropping to zero and climbing again, which means that multiple requests are waiting on I/O to complete. If you experience WRITELOG wait types of high duration for user transactions, that will have a cascading effect, since SQL Server follows a Write-Ahead Logging (WAL) protocol, which means the log is written to in sequential order. If one transaction is waiting for a log write operation to complete, then another concurrent transaction on the same database will also experience delays.
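A quick way to see which sessions are currently stuck on these waits is to filter sys.dm_exec_requests on the wait types mentioned above (wait_time is reported in milliseconds):

```sql
-- Sketch: currently executing requests waiting on disk I/O or log writes
SELECT r.session_id, r.wait_type, r.wait_time AS wait_time_ms,
       r.command, DB_NAME(r.database_id) AS database_name
FROM sys.dm_exec_requests AS r
WHERE r.wait_type LIKE N'PAGEIOLATCH%'
   OR r.wait_type = N'WRITELOG';
```

Sampling this query a few times in succession shows whether the same requests keep waiting (one long I/O) or new requests keep appearing (many short I/O waits).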

You can identify the queries performing a high amount of I/O by collecting profiler traces and tracking the reads/writes done by the statements and batches executing on the server during that time. You only need the Statement and Batch Completed events if you are not encountering any attentions. Refer to my post on profiler for the set of profiler events to be avoided.

Addition: May 5, 2011:

You can also use the query below to find out which database is receiving the maximum amount of I/O and on which database are the IO stalls being experienced. This information is retrieved from sys.dm_io_virtual_file_stats.


SELECT DB_NAME (f.database_id) AS database_name, f.database_id, f.name AS logical_file_name, f.[file_id], f.type_desc,
    CAST (CASE WHEN LEFT (LTRIM (f.physical_name), 2) = '\\' THEN LEFT (LTRIM (f.physical_name), CHARINDEX ('\', LTRIM (f.physical_name), CHARINDEX ('\', LTRIM (f.physical_name), 3) + 1) - 1)
        WHEN CHARINDEX ('\', LTRIM(f.physical_name), 3) > 0 THEN UPPER (LEFT (LTRIM (f.physical_name), CHARINDEX ('\', LTRIM (f.physical_name), 3) - 1))
        ELSE f.physical_name END AS nvarchar(255)) AS logical_disk,
    fs.num_of_reads, fs.num_of_bytes_read, fs.io_stall_read_ms, fs.num_of_writes, fs.num_of_bytes_written,
    fs.io_stall_write_ms, fs.size_on_disk_bytes
FROM sys.dm_io_virtual_file_stats (default, default) AS fs
INNER JOIN sys.master_files AS f ON fs.database_id = f.database_id AND fs.[file_id] = f.[file_id]

*The above query is the same query that is used by Management Data Warehouse to collect a subset of the data using the "Server Activity" collection set.

If you want to drill down further into the current low-level I/O, locking, latching, and access method activity for each partition of a table or index in the database identified by the above query, then you can use the query below to get that information for a particular database and also identify if there is a missing index recommended for that table. The information is retrieved using sys.dm_db_index_operational_stats. Keep in mind that the data returned by sys.dm_db_index_operational_stats exists only as long as the metadata cache object that represents the heap or index is available. This data is neither persistent nor transactionally consistent, which means you cannot use these counters to determine whether an index has been used or not, or when the index was last used.

-- Shows objects which have page_io_latch_wait_count > 0
select db_name(d.database_id) as database_name,
    quotename(object_schema_name(d.object_id, d.database_id)) + N'.' + quotename(object_name(d.object_id, d.database_id)) as object_name,
    d.database_id,
    d.object_id,
    d.page_io_latch_wait_count,
    d.page_io_latch_wait_in_ms,
    d.range_scans,
    d.index_lookups,
    case when mid.database_id is null then 'N' else 'Y' end as missing_index_identified
from (select
        database_id,
        object_id,
        row_number() over (partition by database_id order by sum(page_io_latch_wait_in_ms) desc) as row_number,
        sum(page_io_latch_wait_count) as page_io_latch_wait_count,
        sum(page_io_latch_wait_in_ms) as page_io_latch_wait_in_ms,
        sum(range_scan_count) as range_scans,
        sum(singleton_lookup_count) as index_lookups
    from sys.dm_db_index_operational_stats(NULL, NULL, NULL, NULL)
    where page_io_latch_wait_count > 0
    group by database_id, object_id) as d
left join (select distinct database_id, object_id from sys.dm_db_missing_index_details) as mid on mid.database_id = d.database_id and mid.object_id = d.object_id
where d.row_number <= 20 and d.database_id = DB_ID('AdventureWorks') -- Replace database name with the database that you are interested in
-- Comment out the line below if you want to look at I/O for system tables as well
and d.object_id not in (select object_id from sys.objects where is_ms_shipped = 1)

*The query above is a variation of the query used by the "IO Statistics" report in the SQL Server Performance Dashboard.

If you are interested in index usage statistics for that database, then you can retrieve them using sys.dm_db_index_usage_stats.

Example:


select OBJECT_NAME(object_id, database_id) as ObjectName, object_id, index_id, user_seeks, user_scans, user_lookups, user_updates, last_user_seek, last_user_scan, last_user_lookup, last_user_update
from sys.dm_db_index_usage_stats
where database_id = DB_ID('AdventureWorks') -- Replace database name with the database that you are interested in
and (user_lookups <> 0 or user_scans <> 0 or user_seeks <> 0)
order by user_scans desc

Related Articles

Sometimes, it’s also worthwhile checking the activity on tempdb and finding out if the contention is due to tempdb activity. The links below could be helpful: