SQL Server Backup Simulator: Cumulative update

The SQL Server Backup Simulator was launched on Microsoft Code Gallery to assist in troubleshooting backup issues related to SQLVDI. Thank you to everyone who has evangelized the tool and provided feedback.

The current release is marked as v1.2.

We released two minor updates for the tool to incorporate the following changes:

  1. SQL Backup Simulator was reporting ProductVersion instead of FileVersion while reading the sqlvdi.dll information – an issue we uncovered while validating the sqlvdi DLL version; it was corrected in v1.1 and the fix is now part of the v1.2 release.

  2. COPY_ONLY option while simulating backups – The COPY_ONLY option was added to ensure that any full backup taken for a database by the tool does not break the database's existing backup chain.
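
For illustration, this is what a COPY_ONLY full backup looks like in plain T-SQL; the database name and backup path below are placeholders:

    -- A COPY_ONLY full backup stays out of the backup chain, so differential
    -- backups continue to use the last regular full backup as their base
    BACKUP DATABASE AdventureWorks
    TO DISK = N'D:\Backups\AdventureWorks_COPY_ONLY.bak'
    WITH COPY_ONLY, INIT, STATS = 10;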

The latest executable can be downloaded from: https://github.com/Microsoft/tigertoolbox/releases/latest

What can you tell me, Default Trace?

The default trace functionality introduced in SQL Server 2005 is still a much-underappreciated feature of the product. It is somewhat like a black box trace, analogous to an airline flight recorder, and can help you with certain post-mortem analysis as a DBA. I am sure a lot of you have a host of in-house scripts in place, or some monitoring/auditing tool that reports inadvertent/unscrupulous activity on the databases that you administer. Nonetheless, it is always a good idea to be aware of the out-of-the-box features that can make your job a wee bit easier.

Deepak Rangarajan (Blog), a Microsoft MVP, has blogged about the set of events captured by this default trace. So, I shall not attempt to duplicate his efforts here, but instead collate a set of issues that CSS has been able to resolve using default traces in the past and will continue to do so in the future.
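
Before diving into the scenarios, here is a minimal sketch of how to locate and read the default trace files; the TOP clause and column list are just for illustration:

    -- Find the default trace file and read it back with fn_trace_gettable
    DECLARE @path NVARCHAR(260);
    SELECT @path = path FROM sys.traces WHERE is_default = 1;

    SELECT TOP (100) te.name AS EventName, t.DatabaseName, t.ObjectName,
           t.LoginName, t.HostName, t.StartTime
    FROM fn_trace_gettable(@path, DEFAULT) AS t
    INNER JOIN sys.trace_events AS te ON t.EventClass = te.trace_event_id
    ORDER BY t.StartTime DESC;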

Dude: Where’s my database?
Not really a question that you want to be pondering if you do not have any monitoring/auditing tools in place. Nor do you want to BING that search term. (Ok, if you prefer, even Google it!) The default trace captures the Object:Created, Object:Altered and Object:Deleted events, which will help you track down inadvertent deletion of databases. SQL Server Management Studio also exposes this through a standard report called "Schema Changes History", which can help you track such changes on a SQL instance in the form of a report.
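
A sketch of the same default trace query narrowed down to just the schema change events:

    DECLARE @path NVARCHAR(260);
    SELECT @path = path FROM sys.traces WHERE is_default = 1;

    SELECT te.name AS EventName, t.DatabaseName, t.ObjectName,
           t.LoginName, t.StartTime
    FROM fn_trace_gettable(@path, DEFAULT) AS t
    INNER JOIN sys.trace_events AS te ON t.EventClass = te.trace_event_id
    WHERE te.name IN ('Object:Created', 'Object:Altered', 'Object:Deleted')
    ORDER BY t.StartTime DESC;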

Memory pressure: Where, When?
Ideally this can be tracked by Perfmon data collection, and most of the SQL monitoring tools available in the market today track the Target and Total Server Memory counters for SQL instances. When you monitor the delta changes for these counters, you can track how much fluctuation is occurring. The default trace tracks these changes using the Server Memory Change profiler event, which will help you decide if there is a large fluctuation in the memory usage of the SQL instance. This event is raised when SQL Server memory usage has increased or decreased by either 1 megabyte (MB) or 5 percent of the maximum server memory (Buffer Pool only), whichever is greater. This is quite useful when tracking memory pressure experienced by a SQL instance due to factors external to the instance.
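
The same pattern works here too; a sketch filtering on that event (EventSubClass distinguishes an increase from a decrease, and IntegerData carries the new memory size in MB):

    DECLARE @path NVARCHAR(260);
    SELECT @path = path FROM sys.traces WHERE is_default = 1;

    SELECT t.StartTime,
           t.EventSubClass,    -- 1 = memory increase, 2 = memory decrease
           t.IntegerData       -- new memory size in MB
    FROM fn_trace_gettable(@path, DEFAULT) AS t
    INNER JOIN sys.trace_events AS te ON t.EventClass = te.trace_event_id
    WHERE te.name = 'Server Memory Change'
    ORDER BY t.StartTime DESC;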

Mirrored Database: When did your state change?
All database mirroring state changes for your mirrored databases are also captured by the default trace via the Database mirroring status change event. This helps track all database mirroring state changes without having to parse the SQL Errorlogs for the same.

OMG: Someone’s mucking around with my database security?
We have handled multiple issues in the past where we needed to identify security-related audit activity on a SQL instance, like new Logins/Users being added, mostly for post-mortem analysis. The "Security Audit" event classes captured by the default trace show their usefulness by letting you trace exactly this kind of activity.

Full-Text: What are you up to?
The default traces also help trace Full-Text crawl activity, which at times has been known to cause performance degradation when very large FT catalogs are involved.

Database: Did you just grow or shrink?
Enabling Autogrow on SQL Server databases helps prevent a database from going down when a file becomes full, but at the same time Autogrow events can cause performance issues like blocking. Additionally, database auto shrink (both log and data files) operations can severely impact database performance. With the help of the default traces, you can track all Auto Grow/Shrink operations for your databases on the instance.
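
A sketch pulling the grow/shrink events out of the default trace (Duration is reported in microseconds in trace output on SQL Server 2005 and later):

    DECLARE @path NVARCHAR(260);
    SELECT @path = path FROM sys.traces WHERE is_default = 1;

    SELECT te.name AS EventName, t.DatabaseName, t.FileName,
           t.StartTime, t.Duration
    FROM fn_trace_gettable(@path, DEFAULT) AS t
    INNER JOIN sys.trace_events AS te ON t.EventClass = te.trace_event_id
    WHERE te.name IN ('Data File Auto Grow', 'Log File Auto Grow',
                      'Data File Auto Shrink', 'Log File Auto Shrink')
    ORDER BY t.StartTime DESC;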

SQL Performance: Now how can I use the default trace for this?
Well, the answer is not a definitive one, but the default trace can give you an indication if you have a baseline established. The default trace captures the following events: Hash Warning, Missing Column Statistics, Missing Join Predicate and Sort Warning. These events can be used to find out if an excessive amount of sorting or hashing is occurring on your SQL instance (without a baseline, this data is a moot point though). You can also identify if a number of your queries are suffering from missing statistics and missing join predicates.

One thing to note here is that the default traces roll over, which means that if the trace files are not copied to an alternate location, they will be overwritten and valuable information required for post-mortem analysis will be lost.

Profiler events for different scenarios

A few moons back I had blogged about the high frequency events in SQL Server. In this post, I shall talk about the common scenarios for which profiler traces are collected for SQL Performance issues, and what kind of events need to be captured in the server-side traces configured for the data collection.

While configuring a profiler trace, always include all data columns for each event, as the overhead of extra data columns is small compared to the overhead of the number of events captured during the trace.

In this post, I shall cover two of the most common issues that CSS collects profiler data for addressing SQL Performance issues: Attentions/Timeouts and High CPU usage/Long Running queries.

Attentions or Timeouts

Attentions or Timeouts can occur due to a client-side application timeout setting (this is mostly the case) or due to a database engine side issue (like failure to acquire memory, a problem on the server-side connectivity layer, etc.). During this time, you will have a lot of queries starting but not completing successfully. If you are dealing with such a scenario, then you need to capture the Starting events using a profiler trace to find out which statements didn't complete and encountered the Attention event.

The events that need to be captured are:

All Database Events

Log File Auto Shrink
Data File Auto Shrink
Log File Auto Grow
Data File Auto Grow

All Errors and Warnings

Attention (Important for tracking the timeouts)
ErrorLog
EventLog
Exception
Hash Warning
Execution Warnings
Sort Warnings
Missing Column Statistics
Missing Join Predicate
Exchange Spill Event
Blocked process report
User Error Message
Background Job Error
Bitmap Warning
Database Suspect Data Page
CPU threshold exceeded

Lock:Deadlock Chain and Lock:Timeout (timeout > 0)

Performance: Auto Stats

Showplan All or Showplan XML (As execution plan will not be available for queries which didn’t complete)

Showplan Statistics Profile or Showplan XML Statistics Profile (For comparing estimated and actual query plan)

Security Audit: Audit Login, Audit Login Failed and Audit Logout

Stored Procedures events:

RPC:Starting
SP:Recompile
SP:Starting
SP:Completed
SP:StmtStarting
SP:StmtCompleted
RPC:Completed

Transactions: SQLTransaction is required for tracing back the transaction if there are nested transactions. DTCTransaction is required only if DTC transactions are getting timed out.

TSQL events:

SQL:BatchCompleted
SQL:BatchStarting
SQL:StmtStarting
SQL:StmtCompleted
SQL:StmtRecompile

Cursor events: CursorRecompile and CursorImplicitConversion (Required for checking if the requested cursor is being implicitly converted or if the cursor is being frequently recompiled).

The above events should be sufficient to track down any query timeouts and the common root causes for the same.
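
To give an idea of what the server-side version of such a trace looks like, here is a heavily trimmed sketch covering just three of the events above (Attention, SQL:BatchStarting, RPC:Starting). The file path and size are placeholders, and in practice you would script the full event/column list out of Profiler (File > Export > Script Trace Definition) rather than hand-code it:

    DECLARE @TraceID INT;
    DECLARE @maxfilesize BIGINT;
    DECLARE @on BIT;
    SET @maxfilesize = 256;   -- MB per trace file (placeholder)
    SET @on = 1;

    -- Option 2 = TRACE_FILE_ROLLOVER; the path must not include the .trc extension
    EXEC sp_trace_create @TraceID OUTPUT, 2, N'D:\Traces\timeout_trace', @maxfilesize, NULL;

    -- Event IDs: 16 = Attention, 13 = SQL:BatchStarting, 11 = RPC:Starting
    -- Column IDs: 1 = TextData, 12 = SPID, 14 = StartTime
    EXEC sp_trace_setevent @TraceID, 16, 1, @on;
    EXEC sp_trace_setevent @TraceID, 16, 12, @on;
    EXEC sp_trace_setevent @TraceID, 16, 14, @on;
    EXEC sp_trace_setevent @TraceID, 13, 1, @on;
    EXEC sp_trace_setevent @TraceID, 13, 12, @on;
    EXEC sp_trace_setevent @TraceID, 13, 14, @on;
    EXEC sp_trace_setevent @TraceID, 11, 1, @on;
    EXEC sp_trace_setevent @TraceID, 11, 12, @on;
    EXEC sp_trace_setevent @TraceID, 11, 14, @on;

    -- 1 = start the trace
    EXEC sp_trace_setstatus @TraceID, 1;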

High CPU usage/Long Running queries

Another set of issues commonly dealt with while troubleshooting SQL Performance issues are Long Running Queries or queries causing high CPU usage on the server. In such cases, the above set of events would definitely be useful. If you are certain that the profiler trace can be captured for the entire duration of the query execution, then for highly transactional servers you can omit the capture of the following events:

SQL:BatchStarting
SQL:StmtStarting
SP:StmtStarting
SP:Starting
RPC:Starting

The above logic for configuring a profiler trace will also help you capture trace data when you are trying to track down queries causing an excessive amount of IO on the SQL instance.

Reference for capturing blocking information on a SQL instance: SQL 2000 Blocker Script output/SQL 2005 PerfStats

Keep in mind that the profiler data captured may not be sufficient for root cause analysis if you are not capturing SQL blocking information and performance monitor data in conjunction with the profiler traces.

In case you need additional events like deadlock events or lock events, then refer to my previous post and find out if they fall in the high frequency event category and when to capture the particular event.

TIP: As far as possible, avoid adding Text filters to your profiler trace configuration, as filter evaluation can lead to high CPU usage in highly active OLTP environments.
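
If you must narrow a trace, a numeric filter (Duration, SPID, DatabaseID) is far cheaper to evaluate than a Text filter. A sketch, reusing the trace created in the earlier example (the path pattern is a placeholder; note the filter only affects events that actually report a Duration):

    DECLARE @TraceID INT;
    SELECT @TraceID = id FROM sys.traces WHERE path LIKE N'%timeout_trace%';

    -- Filters can only be set while the trace is stopped (status = 0)
    EXEC sp_trace_setstatus @TraceID, 0;

    -- Column 13 = Duration (microseconds on 2005+), logical operator 0 = AND,
    -- comparison operator 4 = greater than or equal; 5000000 = 5 seconds
    DECLARE @duration BIGINT;
    SET @duration = 5000000;
    EXEC sp_trace_setfilter @TraceID, 13, 0, 4, @duration;

    -- Restart the trace
    EXEC sp_trace_setstatus @TraceID, 1;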

Hope the above information helps you reduce excessive data collection when troubleshooting SQL performance issues.

Profiler: Why do you collect so much data?

This has always been a classic Catch-22. Should I include that event? But that will bloat my profiler trace size very quickly!! And then suddenly, either your gut feeling takes over and you make a split-second decision, or you decide again based on your SQL knowledge (+ gut feeling) to include or not include that event. The decision is always between collecting too much data at the cost of degrading server performance further, or collecting too little data and needing another round of data collection to figure out the root cause of the performance issue that you are troubleshooting.

At the outset, let me make it clear that I am not *perfect* at configuring a profiler trace for every performance issue that you face. But I will say that I have handled enough SQL performance issues and looked at enough profiler traces (I actually do this to earn a living ;)) to help outline a set of DOs and DON'Ts for capturing profiler traces for diagnosing performance issues.

In this post, I shall outline some Profiler event categories and events which generate more noise than anything else. These events are required for troubleshooting some very specific scenarios (which is why they were introduced) and can be done away with when you are capturing data to evaluate general server performance or to identify the resource-consuming queries on your instance.

High frequency event categories

  • Broker – None of the events in the Broker category are required unless you are troubleshooting an issue related to Database Mail or Service Broker in SQL Server 2005 or above. Be aware that this category can generate a lot of noise if you have Service Broker/Database Mail configured on the server and are using it heavily.
  • Cursors – If the client application or provider that connects to your SQL Server instance uses server-side cursors, then this event category can basically lead to an unmanageable amount of data which will take you ages to sift through. The sp_cursor* calls will be captured either way in your Stored Procedure event classes, which makes it a moot point to track Cursor:Open, Cursor:Close and Cursor:Execute through this event category for general performance issues. The only useful events in this event category are CursorImplicitConversion (which lets you track implicit conversion of a requested non-supported cursor type by the SQL engine) and CursorRecompile (which helps you track down T-SQL cursor recompiles due to schema changes).
  • Locks – Now this event category is a whale if you want my frank opinion. The Lock:Acquired and Lock:Released event classes (if captured) can make your data analysis more difficult than scaling Mt. Everest! If you need to track the number of locks for each session, then it is best done outside the profiler using DMVs like sys.dm_tran_locks (see the sketch after these lists). However, this event category has some useful event classes like:
    1. Lock: Deadlock and Lock: Deadlock Chain – Helps a lot when troubleshooting deadlock issues
    2. Lock: Timeout and Lock: Timeout (timeout > 0) – Help troubleshoot timeout-related issues due to lock blocking.
    3. Lock: Escalation – This one is debatable. Helps track down Lock Escalations, but on servers with a high number of Lock Escalations this can become a high frequency event.
  • OLEDB – Unless the symptoms of the issue indicate that OLEDB calls need to be traced, you shouldn't be collecting any event from this event category except OLEDB Errors. That way, if your OLEDB provider were returning a large number of errors, you would be able to track them down using the HRESULT obtained from this event class.
  • Query Notifications – The same logic mentioned above for the Broker event category applies to the Query Notifications event category too.
  • Scans – Now this might be useful when you are doing a repro of sorts on a test or development environment, but on a production server this is a strict NO-NO! You really should be able to figure out from the plans whether there was a table/index scan without having to capture a separate noisy event in the profiler traces.
  • Security Audit – A very good event category when troubleshooting security or permissions related issues on a SQL instance. For performance related issues, this is just plain *noise*! The events under this category fire nineteen-to-the-dozen on a SQL instance, and they will just bloat your profiler trace size rather than do anything useful! The two really (and I mean really) useful events in this category are Audit:Login and Audit:Logout, which help track the session's connection settings and other attributes like host name, user name, etc.
  • Transactions – This event category also has a lot of noise events which aid in bloating the profiler trace size rather than doing anything useful. In particular, the Transaction Manager (TM:*) event classes are not to be captured unless you are troubleshooting an issue where a Begin/Commit/Rollback is taking a long time to complete. The SQLTransaction event is quite useful for troubleshooting deadlock related issues, as it helps track transaction begin and commit/rollback and nest levels for a particular session. The DTCTransaction event class is useful for troubleshooting DTC transaction related issues and tracking the different states of a DTC transaction.
  • Performance – This is one of the most widely used event categories in CSS for troubleshooting query performance issues, the reason being that it helps you capture query plans. There are a bunch of different event classes in this category which help you capture query plans, and I shall divide them into two classes based on the data to be obtained from these events:
    1. Compile time details: Showplan All, Showplan XML & Showplan Text (occur during query execution but don't contain run-time details like rows returned) and Showplan XML For Query Compile & Showplan All For Query Compile (show you the compile time query plan). These events can be very useful when you want to capture a query plan for timeout related issues, as the events which show the execution time query plans may not be generated when you are dealing with query timeouts. So if you are not facing any timeouts and need the query plans with run-time details, you need not capture any of the events mentioned above. The Query Compile event classes (especially the XML event) are required when you are trying to nail down compilation related issues and want to find out the resources (CPU/Memory) consumed during query compilation.
    2. Run-time details: Showplan Statistics Profile and Showplan XML Statistics Profile show you the query plan with run-time details in text and XML formats respectively. Based on whether you like sifting through text or XML, you can capture either one. Keep in mind that XML events are not the best choice when it comes to toning down the size of the profiler traces captured.

There are some other high frequency event classes which are mentioned below:

  1. SP:CacheMiss, SP:CacheInsert, SP:CacheRemove, SP:CacheHit – These events are helpful when tracking plan caching issues in the procedure cache, but if you are not dealing with a specific issue related to the procedure cache, then you are going to be dealing with a lot of noise in the profiler traces if these events are captured.
  2. TSQL:Prepare SQL, TSQL:Exec Prepared SQL, TSQL:Unprepare SQL – Useful for troubleshooting specific server-side cursor calls, but otherwise plain noise and should be avoided.
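
As an aside, here is the DMV alternative mentioned in the Locks bullet above: a sketch that counts lock requests per session with sys.dm_tran_locks instead of capturing Lock:Acquired/Lock:Released:

    -- Count lock requests per session, broken down by mode and
    -- by whether the request is granted or waiting
    SELECT request_session_id,
           request_mode,
           request_status,
           COUNT(*) AS lock_count
    FROM sys.dm_tran_locks
    GROUP BY request_session_id, request_mode, request_status
    ORDER BY lock_count DESC;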

TIP: Always check the "Show all Columns" option in the Profiler UI when configuring the trace, as some of the event data columns capture valuable information about the event which is essential to troubleshooting a performance issue, like ObjectID, ObjectName, etc.

The events to be captured using SQL Profiler for common issues like High CPU usage, excessive I/O, timeouts etc. will be covered through a series of posts later.

To summarize, the profiler can be a friendly tool when run using a server-side trace with the right set of events being captured.

T-SQL Tuesday #13 – What the Business Says Is Not What the Business Wants

This is my second post for T-SQL Tuesday. This time around, T-SQL Tuesday is being hosted by Steve Jones (Twitter | Blog), The Voice of the DBA on SQLServerCentral.com. The topic is not a technical one, but it is nonetheless very important in the grander scheme of things.

I have seen multiple scenarios where the teams involved in deploying solutions/projects work in silos. There is the management team which decides what the final outcome of the project should look like and what the end deliverables are. Then there is the set of developers who can throw together a gazillion lines of code (in multiple languages) to put together a front-end, and maybe a middle-tier, which talks to a database server, fetches data and presents it to the end-user. And finally, a set of DBAs who are always guilty of the post-deployment issues "till the database is proven innocent", as most applications hit a database to fetch data at one time or another.

There are some key things to consider and contemplate before you start writing code and deploying your solutions.

What's the business?

Knowing your business is like knowing your backyard: the bigger it is, the more lethargic you tend to become. Business acumen is not something that is missing in individuals; it has more to do with "I want to" versus "I don't want to"! The positive side of knowing enough about your business is that it can make your life easier. It will prevent you from spending additional hours troubleshooting database performance issues because you deployed something that was not needed by the business and was unfriendly to your current database environment. So unless you have a fair idea of why you are implementing what you are implementing, you are going to invite trouble later if not sooner!

Requirement Analysis: The Who, What and How

If you have a background in Software Design or have been part of a project, then you will know what RA is. However, with time-to-market commitments becoming tighter and tighter in a competitive environment, RA becomes a non-entity or a mere formality, with developers developing without knowing why they are doing what they are doing. A definite recipe for disaster! Unless you understand WHO is going to use what you are developing, WHAT purpose it is going to serve and HOW they will be using it, you will not be in a position to explain why a module or feature may be overkill or completely unnecessary. Figuring out the Who, What and How requires a certain understanding of the business (my first point), and without this you are bound to face challenges!

Good-to-have and Must-have

You must have heard this a million times: what is a must-have as opposed to a good-to-have? Without knowing this, you might not be able to land a project where the end deliverables are met. It is never a good idea to over-commit. Timelines and deliverables need to be frozen before you start developing, which points back to the planning phase. Just because the business asks for it doesn't mean that the business needs it. Find out if it is going to serve the end-goal. What is being prioritized may not be a must-have but actually a good-to-have. Must-haves in a project are non-negotiable (if the right RA has been done) and the good-to-have is, well… (the phrase is self-explanatory).

The shops where there are smart people who can look beyond their egos and work efficiently in a cross-team environment usually don't end up in a soup post-deployment, using band-aids to fix a "gone-to-hell" deployment.

Conclusion: Cohesion in communication among teams is required to formulate a concise design goal to achieve or solve a specific business need. Without this, you will be back to the drawing board in no time after spending a substantial amount of effort and $$ in a futile exercise.

Note: The above post is not penned to finger-point at any set of individuals or groups. It is just an attempt to help understand the need for cross collaboration between business groups. Additionally, it is equally pertinent to know your backyard before you deploy something. This is applicable to all business groups and not just DBAs!