Profiler events for different scenarios

A few moons back, I blogged about the high frequency events in SQL Server. In this post, I shall talk about the common scenarios in which profiler traces are collected for SQL Performance issues and the kind of events that need to be captured in the server side traces configured for the data collection.

While configuring a profiler trace, always include all data columns for the events you select, as the data columns do not add as much overhead as the number of events being captured during the trace capture.

In this post, I shall cover two of the most common issues for which CSS collects profiler data when addressing SQL Performance issues: Attentions/Timeouts and High CPU usage/Long Running queries.

Attentions or Timeouts

Attentions or Timeouts can occur due to a client side application timeout setting (this is most often the case) or due to an issue on the database engine side (like a failure to acquire memory, a problem on the server side connectivity layer, etc.). During this time, you will see a lot of queries starting but not completing successfully. If you are dealing with such a scenario, you need to capture the Starting events using a profiler trace to find out which statements didn't complete and encountered the Attention event.

The events that need to be captured are:

All Database Events

Log File Auto Shrink
Data File Auto Shrink
Log File Auto Grow
Data File Auto Grow

All Errors and Warnings

Attention (Important for tracking the timeouts)
ErrorLog
EventLog
Exception
Hash Warning
Execution Warnings
Sort Warnings
Missing Column Statistics
Missing Join Predicate
Exchange Spill Event
Blocked process report
User Error Message
Background Job Error
Bitmap Warning
Database Suspect Data Page
CPU threshold exceeded

Lock:Deadlock Chain and Lock:Timeout (timeout > 0)

Performance: Auto Stats

Showplan All or Showplan XML (as the execution plan will not be available for queries which didn't complete)

Showplan Statistics Profile or Showplan XML Statistics Profile (For comparing estimated and actual query plan)

Security Audit: Audit Login, Audit Login Failed and Audit Logout

Stored Procedures events:

RPC:Starting
SP:Recompile
SP:Starting
SP:Completed
SP:StmtStarting
SP:StmtCompleted
RPC:Completed

Transactions: SQLTransaction is required for tracing back the transaction if there are nested transactions. DTCTransaction is required only if DTC transactions are getting timed out.

TSQL events:

SQL:BatchCompleted
SQL:BatchStarting
SQL:StmtStarting
SQL:StmtCompleted
SQL:StmtRecompile

Cursor events: CursorRecompile and CursorImplicitConversion (Required for checking if the requested cursor is being implicitly converted or if the cursor is being frequently recompiled).

The above events should be sufficient to track down any query timeouts and their common root causes.
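If you prefer to script this as a server-side trace rather than configuring it through the Profiler UI, the sketch below shows the general pattern using the sp_trace_* procedures. Only a couple of the events and data columns listed above are wired up for brevity, and the trace file path is purely illustrative (Attention is event class 16, SQL:BatchStarting is 13; TextData and SPID are data columns 1 and 12):

declare @traceid int, @maxfilesize bigint, @on bit
set @maxfilesize = 256   -- MB per rollover file
set @on = 1

-- option 2 = TRACE_FILE_ROLLOVER; the output path is just an example
exec sp_trace_create @traceid output, 2, N'C:\Traces\Timeout_Trace', @maxfilesize

-- Attention (event class 16) with TextData (1) and SPID (12)
exec sp_trace_setevent @traceid, 16, 1, @on
exec sp_trace_setevent @traceid, 16, 12, @on

-- SQL:BatchStarting (event class 13) so that the statement which never completed is captured
exec sp_trace_setevent @traceid, 13, 1, @on
exec sp_trace_setevent @traceid, 13, 12, @on

-- start the trace and note the trace id for stopping/closing it later
exec sp_trace_setstatus @traceid, 1
select @traceid as TraceID

The remaining events and data columns from the list above would be added with additional sp_trace_setevent calls in the same fashion.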

High CPU usage/Long Running queries

Another set of issues commonly dealt with while troubleshooting SQL Performance problems are Long Running Queries or queries causing high CPU usage on the server. In such a case, the above set of events would definitely be useful. If you are certain that the profiler trace can be captured for the entire duration of the query execution, then on highly transactional servers you can omit the capture of the following events:

SQL:BatchStarting
SQL:StmtStarting
SP:StmtStarting
SP:Starting
RPC:Starting

The above logic of configuring a profiler trace will also help you capture trace data when you are trying to track down queries which are causing an excessive amount of IO on the SQL instance.
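Once such a trace has been captured, a quick way to surface the top resource consumers without importing the files anywhere is to read them back with fn_trace_gettable. A rough sketch (the file path is hypothetical; event classes 10 and 12 are RPC:Completed and SQL:BatchCompleted):

select top 20 TextData, SPID, Duration, CPU, Reads, Writes, StartTime
from fn_trace_gettable('C:\Traces\Timeout_Trace.trc', default)
where EventClass in (10, 12)   -- RPC:Completed and SQL:BatchCompleted
order by CPU desc              -- or by Duration/Reads depending on what you are chasing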

Reference for capturing blocking information on a SQL instance: SQL 2000 Blocker Script output/SQL 2005 PerfStats

Keep in mind that the profiler data captured may not be sufficient for root cause analysis if you are not capturing SQL blocking information and performance monitor data in conjunction with the profiler traces.
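On that note, the Blocked process report event listed earlier only fires if a blocking threshold has been configured on the instance. A minimal sketch (the 30 second threshold is just an example value):

exec sp_configure 'show advanced options', 1
reconfigure
-- report blocking that lasts 30 seconds or longer via the Blocked process report trace event
exec sp_configure 'blocked process threshold', 30
reconfigure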

If you need additional events like deadlock events or lock events, refer to my previous post to find out whether they fall in the high frequency event category and when to capture the particular event.

TIP: As much as possible, avoid adding Text filters to your profiler trace configuration, as filter evaluation can lead to high CPU usage on highly active OLTP environments.
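If you do need to cut down the volume of a server-side trace, a numeric filter (for example on Duration) is far cheaper to evaluate than a Text filter. A sketch using sp_trace_setfilter, assuming the trace id returned by sp_trace_create earlier; column 13 is Duration (in microseconds on SQL Server 2005 and later), logical operator 0 is AND and comparison operator 4 is "greater than or equal":

declare @traceid int, @duration bigint
set @traceid = 2            -- example value: the id returned by sp_trace_create
set @duration = 5000000     -- 5 seconds expressed in microseconds

-- filters can only be added while the trace is stopped
exec sp_trace_setstatus @traceid, 0
exec sp_trace_setfilter @traceid, 13, 0, 4, @duration
exec sp_trace_setstatus @traceid, 1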

Hope the above information helps you avoid collecting an excessive amount of data while troubleshooting SQL performance issues.

Profiler: Why do you collect so much data?

This has always been a classic Catch-22. Should I include that event? But that will bloat my profiler trace size very quickly!! And then suddenly, either your gut feeling takes over and you make a split second decision, or you make the same decision based on your SQL knowledge (plus gut feeling) to include or not include that event. The decision is always between collecting too much data at the cost of degrading server performance further, or collecting too little data and requiring another round of data collection to figure out the root cause of the performance issue that you are troubleshooting.

At the outset, let me make it clear that I am not *perfect* at configuring a profiler trace for every performance issue that you face. But I will say that I have handled enough SQL performance issues and looked at enough profiler traces (I actually do this to earn a living) to help outline a set of DOs and DON'Ts for capturing profiler traces for diagnosing performance issues.

In this post, I shall outline some Profiler event categories and events which generate a plethora of noise more than anything else. These events are required for troubleshooting some very specific scenarios (which is why they were introduced) and can be done away with when you are capturing data for evaluating general server performance or looking to identify the resource consuming queries on your instance.

High frequency event categories

  • Broker – None of the events in the Broker Category are required unless you are troubleshooting an issue related to Database Mail or Service Broker in SQL Server 2005 or above. However, the Broker event category can generate a lot of noise if you have Service Broker/Database Mail configured on the server and are using it heavily.
  • Cursors – If the client application or provider that connects to your SQL Server instance uses Server Side cursors, then this event category can lead to an unmanageable amount of data which will take you ages to sift through. The sp_cursor* calls will be captured either way in your Stored Procedure event classes, which makes it a moot point to track Cursor:Open, Close and Execute through this event category for general performance issues. The only useful events in this event category are CursorImplicitConversion (which lets you track implicit conversion of a requested non-supported cursor type by the SQL engine) and CursorRecompile (which helps you track down T-SQL cursor recompiles due to schema changes).
  • Locks – Now this event category is a whale, if you want my frank opinion. The Lock:Acquired and Lock:Released event classes (if captured) can make your data analysis more difficult than scaling Mt. Everest! If you need to track the number of locks for each session, that is best done outside the profiler using DMVs like sys.dm_tran_locks (see the sketch after this list). However, this event category has some useful event classes like:
    1. Lock: Deadlock and Lock: Deadlock Chain – Help a lot when troubleshooting deadlock issues.
    2. Lock: Timeout and Lock: Timeout (timeout > 0) – Help troubleshoot timeout related issues due to lock blocking.
    3. Lock: Escalation – This one is debatable. It helps track down Lock Escalations, but on servers with a high number of Lock Escalations this can become a high frequency event.
  • OLEDB – Unless you know based on the symptoms seen regarding the issue that OLEDB calls need to be traced, you shouldn’t be collecting any event from this Event category except for OLEDB Errors. So if your OLEDB provider was returning a large number of errors, you would be able to track it down using the HRESULT obtained from this event class.
  • Query Notifications – Again the same logic mentioned above for Broker event category applies to Query Notifications event category too.
  • Scans – Now this might be useful when you are doing a repro of sorts on a test or development environment but on a production server this is a strict NO-NO! You really should be able to figure out looking at the plans whether there was a table/index scan without having to capture a separate noisy event in the profiler traces.
  • Security Audit – A very good event category when troubleshooting security or permissions related issues on a SQL instance. For Performance related issues, this is just plain *noise*, because the events under this category are fired nineteen-to-the-dozen on a SQL instance and would just bloat your profiler trace size rather than do anything useful! The two really (and I mean really) useful events in this category are Audit: Login and Audit: Logout, which help track the session's connection settings and other attributes like host name, user name etc.
  • Transactions – This event category also has a lot of noise events which would aid in bloating the profiler trace size rather than doing anything useful. In particular, the Transaction Manager (TM:*) event classes are not to be captured unless you are troubleshooting an issue where Begin/Commit/Rollback is taking a long time to complete. The SQLTransaction event is quite useful for troubleshooting Deadlock related issues as it helps track transaction begin and commit/rollback and nest levels for a particular session. The DTCTransaction event class is useful for troubleshooting DTC Transaction related issues and tracking the different states of a DTC transaction.
  • Performance Event Category – This is one of the most widely used event categories in CSS for troubleshooting query performance issues, the reason being that it helps you capture Query Plans. There are a bunch of different event classes in this category which help you capture query plans; I shall divide them into two groups based on the data to be obtained from these events:
    1. Compile time details: Showplan All, Showplan XML & Showplan Text (occur during query execution but don't contain run-time details like rows returned) and Showplan XML For Query Compile & Showplan All For Query Compile (show you the compile time query plan). These events can be very useful when you want to capture a query plan for timeout related issues, as the other events which show you the execution time query plans may not be generated when you are dealing with query timeouts. So if you are not facing any timeouts and need the query plans with run-time details, you need not capture any of the events mentioned above. The Query Compile event classes (especially the XML event) are required when you are trying to nail down compilation related issues and want to find out the resources (CPU/Memory) consumed during query compilation.
    2. Run-time details: Showplan Statistics Profile and Showplan XML Statistics Profile show you the query plan with run-time details in text and XML formats respectively. Based on whether you prefer sifting through text or XML, you could capture either one. Keep in mind that XML events are not the best when it comes to toning down the size of the profiler traces captured.
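As mentioned under the Locks category above, per-session lock counts are better pulled from the DMVs than from a trace. A minimal sketch against sys.dm_tran_locks:

-- number of lock requests currently held or waiting per session, heaviest sessions first
select request_session_id, count(*) as lock_count
from sys.dm_tran_locks
group by request_session_id
order by lock_count desc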

There are some other high frequency event classes which are mentioned below:

1. SP:CacheMiss, SP:CacheInsert, SP:CacheRemove, SP:CacheHit – These events are helpful when tracking plan caching issues in the procedure cache, but if you are not dealing with a specific issue related to the Procedure Cache, you are going to be dealing with a lot of noise in the profiler traces if these events are captured.
2. TSQL: Prepare SQL, TSQL: Exec Prepared SQL, TSQL: Unprepare SQL – Useful for troubleshooting specific server side cursor calls, but otherwise plain noise and should be avoided.

TIP: Always check the "Show all Columns" option in the Profiler UI when configuring the trace, as some of the event data columns capture valuable information about the event (like ObjectID, ObjectName etc.) which is essential for troubleshooting a performance issue.

The events to be captured using SQL Profiler for common issues like High CPU usage, excessive I/O, timeouts etc. will be covered through a series of posts later.

To summarize, the profiler can be a friendly tool when run using a Server Side trace with the right set of events being captured.

How to fetch all Login Failed messages from default traces

I was recently investigating a login failed issue on a SQL instance and needed to quickly find out all the Login Failed events that had occurred on the server. I could have scoured the SQL Errorlogs, but I didn't want to do that due to their bloated size. An easier way is to read the Default Traces (available in SQL Server 2005 and above) and look for all the Audit Login Failed events (event class = 20) using the fn_trace_gettable function.

The query that can be used to get this information is given below:

    
declare @path varchar(1000)

-- path of the default trace (trace id 1)
select @path = path from sys.traces where id = 1

-- EventClass 20 = Audit Login Failed
select trc.Spid, TextData, State, Error, StartTime, EventSubClass, LoginName, NTDomainName, NTUserName, ClientProcessID, HostName, ApplicationName
from fn_trace_gettable(@path, DEFAULT) trc
where EventClass = 20
order by StartTime desc
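As a follow-up, the same function can be used to summarize the failures by login and host so that the offending application stands out at a glance (a sketch built on the query above):

declare @path varchar(1000)
select @path = path from sys.traces where id = 1

select LoginName, HostName, ApplicationName, count(*) as failed_logins
from fn_trace_gettable(@path, DEFAULT)
where EventClass = 20   -- Audit Login Failed
group by LoginName, HostName, ApplicationName
order by failed_logins desc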
    
    

How to use Reporter to analyze SQL Profiler traces

I had a question on the SQL Nexus discussion forum regarding the following error while using Reporter:

When I run Reporter, with the default settings (i.e. DB=PerfAnalysis), I get an error:
Server: (local) Error: 4060 Severity: 11 State: 1 Line: 65536 Source: .Net SqlClient Data Provider
Cannot open database "PerfAnalysis" requested by the login. The login failed.

When you launch Reporter, which is part of the RML Utilities installation, PerfAnalysis is the default database that Reporter will attempt to connect to.

You will need to change the baseline database to the database into which you imported the profiler traces using SQL Nexus. In my case, I had imported the captured SQL Profiler traces into a database called nexus_blocking.

I shall cover a walkthrough on importing data into SQL Nexus sometime later.

Unable to import a trace using SQL Nexus

While trying to import a profiler trace using SQL Nexus, you might get the following error in the SQL Nexus log file:

"The system cannot find the file specified (System)"
——————————
Program Location:
   at System.Diagnostics.Process.StartWithCreateProcess(ProcessStartInfo startInfo)
   at System.Diagnostics.Process.Start()
   at System.Diagnostics.Process.Start(ProcessStartInfo startInfo)
   at ReadTrace.ReadTraceNexusImporter.DoImport() in C:\jacklidocs3\cprsrc\main\src\sqlnexus_pass\ReadTraceNexusImporter\ReadTraceNexusImporter.cs:line 364
   at sqlnexus.fmImport.DoImport() in C:\jacklidocs3\cprsrc\main\src\sqlnexus_pass\sqlnexus\fmImport.cs:line 557

The issue could be due to one of two reasons:

1. Your RML Utilities version is older than the current version – install the latest version of RML Utilities.
2. Your RML Utilities location is not registered with SQL Nexus – this can be corrected by navigating to C:\Program Files\Microsoft Corporation\RMLUtils from a Command Prompt and executing the following: orca.exe /R