Profiler: Why do you collect so much data?


This has always been a classic Catch-22. Should I include that event? But that will bloat my profiler trace size very quickly! And then either your gut feeling takes over and you make a split-second decision, or you make the decision based on your SQL knowledge (plus gut feeling) to include or exclude that event. The decision is always between collecting too much data, at the cost of degrading server performance further, or collecting too little data and requiring another round of data collection to figure out the root cause of the performance issue that you are troubleshooting.

At the outset, let me make it clear that I am not *perfect* at configuring a profiler trace for every performance issue that you face. But I will say that I have handled enough SQL performance issues and looked at enough profiler traces (I actually do this to earn a living) to help outline a set of DOs and DON’Ts for capturing profiler traces when diagnosing performance issues.

In this post, I shall outline some Profiler event categories and events which generate more noise than anything else. These events are required for troubleshooting some very specific scenarios (which is why they were introduced) and can be done away with when you are capturing data to evaluate general server performance or to identify the resource-consuming queries on your instance.

High frequency event categories

  • Broker – None of the events in the Broker category is required unless you are troubleshooting an issue related to Database Mail or Service Broker on SQL Server 2005 or above. Be aware that this event category can generate a lot of noise if you have Service Broker/Database Mail configured on the server and are using it heavily.
  • Cursors – If the client application or provider that connects to your SQL Server instance uses server-side cursors, then this event category can lead to an unmanageable amount of collected data which will take you ages to sift through. The sp_cursor* calls will be captured either way by your Stored Procedure event classes, which makes it a moot point to track Cursor:Open, Cursor:Close and Cursor:Execute through this event category for general performance issues (a DMV alternative is sketched after this list). The only useful events in this event category are CursorImplicitConversion (which lets you track when the SQL engine implicitly converts a requested cursor type it doesn’t support) and CursorRecompile (which helps you track down T-SQL cursor recompiles due to schema changes).
  • Locks – Now this event category is a whale, if you want my frank opinion. The Lock:Acquired and Lock:Released event classes, if captured, can make your data analysis more difficult than scaling Mt. Everest! If you need to track the number of locks held by each session, that is best done outside the profiler using DMVs like sys.dm_tran_locks (see the sketch after this list). However, this event category has some useful event classes:
    1. Lock: Deadlock and Lock: Deadlock Chain – Help a lot when troubleshooting deadlock issues.
    2. Lock: Timeout and Lock: Timeout (timeout > 0) – Help troubleshoot timeout-related issues caused by lock blocking.
    3. Lock: Escalation – This one is debatable. It helps track down lock escalations, but on servers with a high number of lock escalations this can itself become a high frequency event.
  • OLEDB – Unless the symptoms of the issue tell you that OLEDB calls need to be traced, you shouldn’t collect any event from this event category except OLEDB Errors. That way, if your OLEDB provider were returning a large number of errors, you would be able to track them down using the HRESULT obtained from this event class.
  • Query Notifications – The same logic mentioned above for the Broker event category applies to the Query Notifications event category as well.
  • Scans – Now this might be useful when you are reproducing an issue on a test or development environment, but on a production server this is a strict NO-NO! You really should be able to figure out from the query plans whether there was a table/index scan, without having to capture a separate noisy event in the profiler traces.
  • Security Audit – A very good event category when troubleshooting security or permissions-related issues on a SQL instance. For performance-related issues, this is just plain *noise*! The events under this category fire nineteen-to-the-dozen on a SQL instance, and capturing them would just bloat your profiler trace size rather than do anything useful. The two really (and I mean really) useful events in this category are Audit: Login and Audit: Logout, which help track a session’s connection settings and other attributes like host name, user name etc.
  • Transactions – This event category also has a lot of noise events which bloat the profiler trace size rather than doing anything useful. In particular, the Transaction Manager (TM:*) event classes are not to be captured unless you are troubleshooting an issue where Begin/Commit/Rollback operations take a long time to complete. The SQLTransaction event is quite useful for troubleshooting deadlock-related issues, as it helps track transaction begin and commit/rollback and nest levels for a particular session (a DMV alternative for watching open transactions is sketched after this list). The DTCTransaction event class is useful for troubleshooting DTC transaction related issues and tracking the different states of a DTC transaction.
  • Performance Event Category – This is one of the most widely used event categories in CSS for troubleshooting query performance issues. The reason is that this event category helps you capture query plans. There are a number of event classes in this category which capture query plans; I shall divide them into two groups based on the data to be obtained from these events:
    1. Compile-time details: Showplan All, Showplan XML & Showplan Text (these occur during query execution but don’t contain run-time details like rows returned) and Showplan XML For Query Compile & Showplan All For Query Compile (these show you the compile-time query plan). These events can be very useful when you want to capture a query plan for timeout-related issues, as the other events, which show you the execution-time query plans, may not be generated when you are dealing with query timeouts. So if you are not facing any timeouts and need the query plans with run-time details, you need not capture any of the events mentioned above. The Query Compile event classes (especially the XML event) are required when you are trying to nail down compilation-related issues and want to find out the resources (CPU/memory) consumed during query compilation.
    2. Run-time details: Showplan Statistics Profile and Showplan XML Statistics Profile show you the query plan with run-time details, in text and XML formats respectively. Depending on whether you prefer sifting through text or XML, you could capture either one. Keep in mind that the XML events are not the best choice when it comes to toning down the size of the captured profiler traces.
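Incidentally, for the Cursors, Locks and Transactions categories above, the live DMVs usually answer the same questions with none of the tracing overhead. A minimal sketch, assuming SQL Server 2005 or above (the queries are illustrative, not part of any trace definition):

-- Open server-side cursors per session (instead of the Cursor:Open/Close/Execute events)
select session_id, name, properties, creation_time
from sys.dm_exec_cursors(0)

-- Lock count per session (instead of Lock:Acquired/Lock:Released)
select request_session_id, count(*) as lock_count
from sys.dm_tran_locks
group by request_session_id
order by lock_count desc

-- Open user transactions per session (instead of the TM:* event classes)
select st.session_id, at.name, at.transaction_begin_time
from sys.dm_tran_session_transactions as st
join sys.dm_tran_active_transactions as at
    on st.transaction_id = at.transaction_id
where st.is_user_transaction = 1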

There are some other high frequency event classes which are worth mentioning:

1. SP:CacheMiss, SP:CacheInsert, SP:CacheRemove, SP:CacheHit – These events are helpful when tracking plan caching issues in the procedure cache, but if you are not dealing with a specific procedure cache issue, capturing them means dealing with a lot of noise in the profiler traces (a DMV alternative is sketched below).
2. TSQL: Prepare SQL, TSQL: Exec Prepared SQL, TSQL: Unprepare SQL – Useful for troubleshooting specific server-side cursor calls, but otherwise plain noise and best avoided.
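If plan caching is what you suspect, the procedure cache can also be inspected after the fact instead of tracing the SP:Cache* events. A minimal sketch, assuming SQL Server 2005 or above:

-- Most reused cached plans, with their text and compile-time XML plan
select top 20 cp.usecounts, cp.objtype, st.text as sql_text, qp.query_plan
from sys.dm_exec_cached_plans as cp
cross apply sys.dm_exec_sql_text(cp.plan_handle) as st
cross apply sys.dm_exec_query_plan(cp.plan_handle) as qp
order by cp.usecounts desc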

*TIP*: Always check the “Show all Columns” option in the Profiler UI when configuring the trace, as some of the event data columns capture valuable information about the event which is essential to troubleshooting a performance issue, like ObjectID, ObjectName etc.

The events to capture using SQL Profiler for common issues like high CPU usage, excessive I/O, timeouts etc. will be covered in a series of later posts.

To summarize, the profiler can be a friendly tool when run as a server-side trace with the right set of events being captured.
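For reference, here is what a minimal server-side trace definition looks like using the sp_trace_* procedures, capturing only RPC:Completed and SQL:BatchCompleted with a handful of columns. Treat it as a sketch: the file path and size below are placeholders, and in practice you would script the trace from the Profiler UI (File –> Export) once you have settled on your event list.

-- Create the trace; option 2 = TRACE_FILE_ROLLOVER, path must NOT include the .trc extension
declare @TraceID int, @maxfilesize bigint, @on bit
set @maxfilesize = 512
set @on = 1
exec sp_trace_create @TraceID output, 2, N'D:\Traces\PerfTrace', @maxfilesize, NULL

-- Event 10 = RPC:Completed, event 12 = SQL:BatchCompleted
-- Columns: 1 = TextData, 12 = SPID, 13 = Duration, 16 = Reads, 17 = Writes, 18 = CPU
exec sp_trace_setevent @TraceID, 10, 1, @on
exec sp_trace_setevent @TraceID, 10, 12, @on
exec sp_trace_setevent @TraceID, 10, 13, @on
exec sp_trace_setevent @TraceID, 10, 16, @on
exec sp_trace_setevent @TraceID, 10, 17, @on
exec sp_trace_setevent @TraceID, 10, 18, @on
exec sp_trace_setevent @TraceID, 12, 1, @on
exec sp_trace_setevent @TraceID, 12, 12, @on
exec sp_trace_setevent @TraceID, 12, 13, @on
exec sp_trace_setevent @TraceID, 12, 16, @on
exec sp_trace_setevent @TraceID, 12, 17, @on
exec sp_trace_setevent @TraceID, 12, 18, @on

-- Start the trace (status 1 = start, 0 = stop, 2 = close and delete the definition)
exec sp_trace_setstatus @TraceID, 1
select @TraceID as TraceID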


T-SQL Tuesday #13 – What the Business Says Is Not What the Business Wants


This is my second post for T-SQL Tuesday. This time around, T-SQL Tuesday is being hosted by Steve Jones (Twitter | Blog), The Voice of the DBA on SQLServerCentral.com. The topic this month is not a technical one, but it is nonetheless very important in the grander scheme of things.

I have seen multiple scenarios where the teams involved in deploying solutions/projects work in silos. There is the management team which decides what the final outcome of the project should look like and what the end deliverables are. Then there is the set of developers who can throw together a gazillion lines of code (in multiple languages) to put together a front-end, and maybe a middle-tier, which talks to a database server, fetches the data and presents it to the end-user. And finally a set of DBAs who are always guilty of the post-deployment issues “till the database is proven innocent”, since most applications hit a database to fetch data at one time or another.

There are some key things to consider and contemplate before you start writing code and deploying your solutions.

What’s the business?


Knowing your business is like knowing your backyard: the bigger it is, the more lethargic you tend to become. Business acumen is not something that is missing in individuals; it has more to do with “I want to” versus “I don’t want to”! The positive side of knowing enough about your business is that it can make your life easier. It will prevent you from spending additional hours troubleshooting database performance issues because you deployed something that was not needed by the business and was unfriendly to your current database environment. So unless you have a fair idea of why you are implementing what you are implementing, you are going to invite trouble later, if not sooner!

Requirement Analysis: The Who, What and How

If you have a background in Software Design or have been part of a project, then you will know what Requirement Analysis (RA) is. However, with time-to-market commitments becoming tighter and tighter in a competitive environment, RA becomes a non-entity or a mere formality, with developers developing without knowing why they are doing what they are doing. A definite recipe for disaster! Unless you understand WHO is going to be using what you are developing, WHAT purpose it is going to serve and HOW they will be using it, you will not be in a position to explain why a module or feature may be overkill or completely unnecessary. Figuring out the Who, What and How requires a certain understanding of the business (my first point), and without this you are bound to face challenges!

Good-to-have and Must-have

You must have heard this a million times: what is a must-have as opposed to a good-to-have? Without knowing this, you might not be able to land a project where the end deliverables are met. It is never a good idea to over-commit. Timelines and deliverables need to be frozen before you start developing, which points me back to the planning phase. Just because the business asks for it doesn’t mean that the business needs it. Find out if it is going to serve the end-goal. What is being prioritized may not be a must-have but actually a good-to-have. Must-haves in a project are non-negotiable (if the right RA has been done), and the good-to-have is, well… (the phrase is self-explanatory).

The shops where there are smart people who can look beyond their egos and work efficiently in a cross-team environment usually don’t land in a soup post-deployment, using band-aids to fix a “gone-to-hell” deployment.

Conclusion: Cohesion in communication among teams is required to formulate a concise design goal that achieves or solves a specific business need. Without this, you will be back to the drawing board in no time, after spending a substantial amount of effort and $$ in a futile exercise.

Note: The above post is not penned to point fingers at any set of individuals or any group. It is just an attempt to help understand the need for cross-collaboration between business groups. Additionally, it is equally pertinent to know your backyard before you deploy something. This is applicable to all business groups and not just DBAs!

Volume Shadow barfs with 0x80040e14 code


I was replying to an MSDN forum thread recently and found that a VSS backup was failing due to a large number of database files residing on a particular volume.

I re-created the issue on my box by limiting Max Worker Threads to 128 on my SQL Server 2008 instance. After making the change to the number of worker threads, I created 99 databases on the D: drive of my server using the following script:

set nocount on
declare @cntr int = 1, @sql varchar(8000)
while @cntr < 100
begin
    -- Build a CREATE DATABASE statement placing both files on the D: drive
    set @sql = 'CREATE DATABASE [db' + cast(@cntr as varchar(5)) + '] ON PRIMARY
( NAME = N''db' + cast(@cntr as varchar(5)) + ''', FILENAME = N''D:\DBFiles\db' + cast(@cntr as varchar(5)) + '.mdf'' )
LOG ON
( NAME = N''db' + cast(@cntr as varchar(5)) + '_log'', FILENAME = N''D:\DBFiles\db' + cast(@cntr as varchar(5)) + '.ldf'' )'
    exec(@sql)
    set @cntr = @cntr + 1
end

I then attempted to back up the DBFiles folder on the D: drive using Windows Server Backup, and it failed.


    After that I looked into the SQL Server Errorlog and found the following errors:

2010-12-06 03:39:15.280 Backup       Error: 3041, Severity: 16, State: 1.
2010-12-06 03:39:15.280 Backup       BACKUP failed to complete the command BACKUP DATABASE db68. Check the backup application log for detailed messages.

2010-12-06 03:39:15.370 spid53       Error: 18210, Severity: 16, State: 1.
2010-12-06 03:39:15.370 spid53       BackupVirtualDeviceFile::PrepareToFreeze: failure on backup device '{DB840D1B-DD17-4D73-83F1-7390F676DEE4}1'. Operating system error 995(The I/O operation has been aborted because of either a thread exit or an application request.).

     

I had already documented earlier that the infamous 995 error could be caused by a problem with sqlvdi.dll, but in this case it is not!

The next thing I did was pick out the errors from the Application Event log, where I found the following errors repeated multiple times:

Error    12/6/2010 3:39:15 AM    SQLVDI    1    None    SQLVDI: Loc=SignalAbort. Desc=Client initiates abort. ErrorCode=(0). Process=3920. Thread=11604. Client. Instance=SQL2008. VD=Global\{DB840D1B-DD17-4D73-83F1-7390F676DEE4}1_SQLVDIMemoryName_0.

Error    12/6/2010 3:39:15 AM    SQLWRITER    24583    None    Sqllib error: OLEDB Error encountered calling ICommandText::Execute. hr = 0x80040e14. SQLSTATE: 42000, Native Error: 3013

    Error state: 1, Severity: 16

    Source: Microsoft SQL Server Native Client 10.0

    Error message: BACKUP DATABASE is terminating abnormally.

    SQLSTATE: 42000, Native Error: 3224

    Error state: 1, Severity: 16

    Source: Microsoft SQL Server Native Client 10.0

    Error message: Cannot create worker thread.

If you look closely at the above errors, one message stands out like a sore thumb: “Cannot create worker thread”. After this issue occurs, you will find that the SQL Server VSS Writer reports the following when you execute vssadmin list writers from a command prompt window:

Writer name: 'SqlServerWriter'

   Writer Id: {a65faa63-5ea8-4ebc-9dbd-a0c4db26912a}

       Writer Instance Id: {9075f235-fdee-4445-851b-a69c54bd8b33}

    State: [8] Failed

    Last error: Non-retryable error

Ensure that there are no memory-related errors in the SQL Server Errorlogs and no DBCC MEMORYSTATUS outputs printed in the Errorlog around the time of the backup failure. If there aren’t any, then the above issue occurs because the SQL instance is hitting its worker thread limit.
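A quick way to confirm the ceiling you are running with is to compare the configured value of ‘max worker threads’ against the limit the server actually computed at startup (assuming SQL Server 2005 or above):

-- 0 means SQL Server computes the limit at startup based on CPU count and architecture
select name, value_in_use from sys.configurations where name = 'max worker threads'

-- The worker thread limit actually in effect
select max_workers_count from sys.dm_os_sys_info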

    The reason for this is explained by Rob Dorr (Principal SQL Server Escalation Engineer at Microsoft) in his blog post:

    The controller and database worker come from the worker pool.  So if your max worker thread is configured for 255 the answer would be a max of 255 / 2 = 127 databases.    You probably won’t get to 127 with this configuration as you have system workers (lazy writer, checkpoint, log writer, and others) that already are using workers.

On my SQL instance, I have 120 databases and need about 240 worker threads to complete this operation. However, I have only 128 worker threads configured for my SQL instance. If you have the Dedicated Administrator Connection enabled for the SQL instance, then you can take this troubleshooting further and execute a select against the DMV sys.dm_os_schedulers (on SQL Server 2005 or above) to get output of the following type:

status                 counts
---------------------- --------
HIDDEN ONLINE          168
VISIBLE ONLINE         8
VISIBLE ONLINE (DAC)   1

    Query used:

    select status, count(*) as counts
    from sys.dm_os_schedulers
    group by status

As you can see, there are about 168 hidden online schedulers created for performing the snapshot backups, but the SQL instance in question doesn’t have sufficient worker threads to facilitate this operation.

What can you do in such situations?

1. Increase the number of worker threads to account for the additional worker threads required to complete the VSS backups (a sketch follows this list). This can be counter-productive, as it can lead to non-Buffer Pool memory contention on 32-bit instances of SQL Server: the higher the max worker threads setting on your SQL Server instance, the more non-BPool memory is consumed for thread stacks, which depletes the available Buffer Pool memory.

    2. Change the physical location of the SQL Server database files so that all the database files do not reside on the same volume.

    3. Perform a VSS Backup with SQL Server VSS Writer service turned OFF. However, you cannot recover any SQL Server database files from such a Volume Shadow Backup.

    4. Use SQL Native backups or other VDI backup tools (non-Snapshot) to perform SQL Server database backups.
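For option 1 above, the worker thread limit is raised through sp_configure; the value of 512 below is only an example, and you should weigh the non-BPool memory cost described earlier before going higher.

-- 'max worker threads' is an advanced option
exec sp_configure 'show advanced options', 1
reconfigure
exec sp_configure 'max worker threads', 512  -- example value only
reconfigure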

Note: This can also happen when you select any file on a drive which hosts a high number of SQL Server database files. This is because the SQL Server VSS Writer tries to enumerate all the database files of the SQL Server instance that reside on the volume selected for the snapshot backup.