WOOT: Schema Changes History Report on Power View

The previous post in this series talked about using Power View to analyze the data stored in SQL Server's default trace. I decided to take this a step further by creating the Schema Changes History report with the help of the data retrieved from the default trace. The advantage of a report created in Power View is the interactivity that is missing in the standard report.

The way I created this report was to filter the data in the Power Pivot table on EventClass IDs 46, 47 and 164, to look only at the Create, Drop and Alter commands that the default trace tracks. After that, I created a table with a tile on the Database Name and a 100% stacked bar chart to show the activity at a database level.
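If you want to sanity-check the same filter directly against the default trace, here is a minimal T-SQL sketch of the equivalent query (the event class IDs are the ones named above; the column list is illustrative):

declare @path nvarchar(260)
select @path = path from sys.traces where is_default = 1

-- 46 = Object:Created, 47 = Object:Deleted, 164 = Object:Altered
-- (reads the current rollover file onwards; use the base log.trc file
-- as in the previous post's query to read all rollover files)
select StartTime, DatabaseName, ObjectName, EventClass, ObjectType
from fn_trace_gettable(@path, default)
where EventClass in (46, 47, 164)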

I also had to create linked tables to get the Object Type and the Event Class Name that you see in the table below.
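In case you want to build similar lookups yourself, both can be pulled from the catalog views; a minimal sketch (trace_column_id 28 should be the ObjectType trace column, but verify it against sys.trace_columns on your build):

-- Event Class Name lookup: EventClass number -> event name
select trace_event_id, name from sys.trace_events

-- Object Type lookup: ObjectType number -> description
-- (assumes trace_column_id 28 is the ObjectType trace column)
select distinct subclass_value, subclass_name
from sys.trace_subclass_values
where trace_column_id = 28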

I will provide a final version of the Excel sheet once I have completed the other dashboards and sanitized the information available in the Power Pivot table.

[Image: Schema Changes History report in Power View]

Previous Post in the Series:

Default Trace Dashboard
https://troubleshootingsql.com/2013/09/26/woot-default-trace-and-power-view/


WOOT: Default Trace and Power View

I have been working on building visualizations for various kinds of analysis that I perform for my customers. One such useful visualization was the use of Power View for analyzing the data available in the SQL Server Default Trace. The query below lets you retrieve all the information in the default traces. This same query is used to populate the Power Pivot table in the Excel file.


declare @enable int

-- check whether the default server side trace is running
select top 1 @enable = convert(int, value_in_use)
from sys.configurations
where name = 'default trace enabled'

if @enable = 1 -- default trace is enabled
begin

declare @curr_tracefilename varchar(500);
declare @base_tracefilename varchar(500);
declare @indx int;

-- path of the trace file the default trace is currently writing to
select @curr_tracefilename = path from sys.traces where is_default = 1;

-- strip off the rollover file name and point at the base log.trc file
-- so that fn_trace_gettable picks up all rollover files in the folder
set @curr_tracefilename = reverse(@curr_tracefilename)
select @indx = patindex('%\%', @curr_tracefilename)
set @curr_tracefilename = reverse(@curr_tracefilename)
set @base_tracefilename = left(@curr_tracefilename, len(@curr_tracefilename) - @indx) + '\log.trc';

-- read the trace files and decode the event class and category names
select EventCat.name as Category, EventID.name as EventName, Events.*
from ::fn_trace_gettable(@base_tracefilename, default) Events
inner join sys.trace_events EventID
on Events.EventClass = EventID.trace_event_id
inner join sys.trace_categories EventCat
on EventID.category_id = EventCat.category_id

end

Once I had all the trace data available in my Power Pivot table, I created calculated columns for Day, Hour and Minute. With all the data readily available, I went about creating the main dashboard, which provides a view of all the events that occurred along with a timeline view. All this took me less than 5 minutes after I had finished writing the query! Pretty quick. Now I have an interactive report that I can use for performing various kinds of analysis.
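For reference, the same Day, Hour and Minute columns could also be derived in the source query itself instead of as Power Pivot calculated columns; a minimal sketch:

declare @path nvarchar(260)
select @path = path from sys.traces where is_default = 1

-- derive the Day/Hour/Minute buckets in T-SQL instead of Power Pivot
select convert(date, StartTime) as [Day],
       datepart(hour, StartTime) as [Hour],
       datepart(minute, StartTime) as [Minute],
       t.*
from fn_trace_gettable(@path, default) as t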

The screenshot below shows that there was only one event raised for the Server event category, and the actual time of occurrence is shown in the line graph. A simple mouse-over on the point will give you the exact details. Now isn't that a simple way to track down events!

[Image: default trace dashboard showing event counts and the timeline view]

I will provide a final version of the Excel sheet once I have completed the other dashboards and sanitized the information available in the Power Pivot table.

Tools Tips and Tricks #4: RML Utilities

It is time again for your friendly neighborhood (or virtual neighborhood) SQL CSS person to provide you with the next set of tips for the various tools that he uses in his day-to-day work. I probably use this tool, RML Utilities, or the database it creates at least once a day and sometimes even more. Not kidding!!

This tool has become synonymous with profiler trace analysis collected from customer environments in CSS. The majority of you might be familiar with the tool SQL Nexus, which calls RML Utilities in the background to load the collected profiler traces into a SQL Server database. This in turn calls an executable, ReadTrace.exe. In this post, I will talk about a few conditions when you should call ReadTrace.exe directly instead of invoking it from SQL Nexus.

[Image: SQL Nexus import options for ReadTrace]

As you can see above, SQL Nexus provides you the option of breaking down the activity of each SPID into individual .trc files, but they can only be directed to the %TEMP%\RML folder on your machine. The previous server that I had didn't have sufficient disk space to hold over 100GB of data in the TEMP directory. Yes, we do analyze data which runs into a few hundred GBs at times!

Now you have two options: change the %TEMP% environment path on your machine to a different drive, or use ReadTrace.exe to generate the .trc files in the required path.

Another thing that cannot be controlled from SQL Nexus is the ability to add SPID, hostname and application filters while importing the data.

Some of the ReadTrace defaults, which apply to any profiler trace import, are shown below:

-o Output files are generated in the current directory
-S Connects to the default SQL Server instance on the machine
-d Loads the profiler data into the PerfAnalysis database
-E Uses Windows authentication while connecting
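Putting those defaults together, the simplest possible import needs only the input trace file; everything else falls back to the defaults listed above (the path here is just an illustration):

readtrace -I"D:\RML\RMLUtil_demo.trc"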

Here are a few scenarios where I use ReadTrace.exe directly:

Analyzing a deadlock scenario where I know the Session IDs involved

Sometimes the deadlock graph is not sufficient to figure out why a lock is still held by that session and why that session acquired the lock on the object in the first place. At that point, you need to track down the sequence of statements executed by the session. In this case, I will use the following command to import the data.

readtrace -S. -E -ddbRMLUtilDemo -I"D:\RML\RMLUtil_demo.trc" -A"!SQLCMD" -s53 -s55 -MF -o"D:\RML\output" -f

The above command will import the trace files into a database named dbRMLUtilDemo on the default instance on the machine, with SPID filters of 53 and 55, and exclude events generated with the application name SQLCMD. Additionally, individual .trc files will be generated containing the activity of SPIDs 53 and 55.

Generating .RML files to be used by OStress

Use the -o parameter to change the output path, and use the -s, -A and -H filters to apply the necessary SPID, application name and host name filters respectively. Here is an example that applies filters for SPIDs 51, 52 and 53, and excludes events generated by the application name PSSDIAG or the hostname MyLaptop:

readtrace -S. -E -ddbRMLUtilDemo -I"D:\RML\RMLUtil_demo.trc" -A"!SQLCMD" -s51 -s52 -s53 -o"D:\RML\output" -H"!MyLaptop" -A"!PSSDIAG"

The command line window and the ReadTrace.log file available in the output directory will confirm whether the filters were applied correctly. Below is a screenshot of the command line window showing that the filters were applied correctly:

[Image: ReadTrace command line output confirming the applied filters]

Handling MARS sessions

ReadTrace has limited processing ability for handling sessions using MARS (Multiple Active Result Sets). You have to use -f and -T35 to import data from trace files which contain MARS sessions. ReadTrace uses the formula ((Session ID * 10) + BatchID) so that a MARS session appears as a separate session. E.g. if you see Session IDs 1620, 1621 and so on in the ReadTrace/SQL Nexus database but cannot find such a session in the DMV outputs captured during that time, then the activity corresponds to Session ID 162, which was using MARS.
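The reverse mapping is plain integer arithmetic, which you can sanity-check in T-SQL:

-- composite ID 1620 -> original session 162, batch 0
select 1620 / 10 as OriginalSessionID, 1620 % 10 as BatchID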

Note: Remember to always use a server side trace whenever possible to capture the profiler trace output to a local disk.

In case you want more information on the above, the PDF help files for RML Utilities are located in the <system drive>:\Program Files\Microsoft Corporation\RMLUtils\Help folder by default on a machine where RML Utilities is installed.

RML Utilities Download
http://blogs.msdn.com/b/psssql/archive/2008/11/12/cumulative-update-1-to-the-rml-utilities-for-microsoft-sql-server-released.aspx

Keep following the tag “Tools Tips and Tricks” for more posts on the various tools that we use. That’s it for today!

Two starts, one recompile and a confusing place

I used to be a regular viewer of the TV series “Two Guys, A Girl and A Pizza Place” co-starring Ryan Reynolds. The blog post title is a play on that name. In the past week, I have heard about this phenomenon of the “ghost Stmt:Starting” event more than thrice, which is what triggered this blog post.

When you are analyzing such a trace file using RML Utilities, you will find an entry in the tblStatements table showing a statement for a particular Batch Sequence (column name: BatchSeq) which has a Start Time but no End Time. Additionally, the AttnSeq column for that row will have a NULL value, indicating that the statement neither timed out nor was cancelled due to a server/client side error.

So what is this ghost entry all about? If you manually dig through the profiler trace, you will find that the statement in question has a Recompile event associated with it. This is by design. When a stored procedure statement encounters a Recompile event, it has an SP:StmtStarting event associated with it but no corresponding SP:StmtCompleted event. This can lead to confusion during trace file analysis.

Here is an example illustrating the same. I have a Stored Procedure whose definition is as follows:

CREATE PROC usp_recompile
AS
SELECT * FROM tbl1 OPTION (RECOMPILE)

The RECOMPILE hint is added to forcefully induce a recompile of the stored procedure when it is executed. If you look at the profiler events captured below during the stored procedure execution, you will see that there is an SP:StmtStarting event followed by an SP:Recompile and an SP:StmtRecompile event. Then there is another SP:StmtStarting event, indicating that the recompile operation is complete and marking the start of the execution of the SP statement.

[Image: profiler output showing SP:StmtStarting, SP:Recompile, SP:StmtRecompile and a second SP:StmtStarting]
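If you want to reproduce this yourself, a minimal setup is shown below; run it while a profiler/server side trace is capturing the SP:StmtStarting, SP:StmtCompleted, SP:Recompile and SP:StmtRecompile events (the table name matches the procedure definition above):

-- create the table referenced by usp_recompile, then execute the procedure
create table tbl1 (col1 int)
go
exec usp_recompile
go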

Hope this helps clear up the confusion around this topic.

What can you tell me, Default Trace?

The default trace functionality introduced in SQL Server 2005 is still a much under-appreciated feature of the product. It is somewhat like a black box trace: analogous to an airline's black box, it can help you in certain post-mortem analysis as a DBA. I am sure a lot of you have a host of in-house scripts in place, or some monitoring/auditing tool that reports inadvertent/unscrupulous activity on the databases that you administer. Nonetheless, it is always a good idea to be aware of the out-of-the-box features that can make your job a wee bit easier.

Deepak Rangarajan (Blog), a Microsoft MVP, has blogged about the set of events which are captured by this default trace. So, I shall not attempt to duplicate his efforts here, but rather collate a set of issues that CSS has been able to resolve using the default trace in the past and will continue to resolve in the future.

Dude: Where’s my database?
Not really a question that you want to be pondering if you do not have any monitoring/auditing tools in place. Nor do you want to BING that search term. (OK, if you prefer, even Google it!) The default trace captures the following events: Object:Created, Object:Altered and Object:Deleted, which will help you track down inadvertent deletion of databases. SQL Server Management Studio has a standard report called the “Schema Changes History” report which can help you track such changes on a SQL instance in the form of a report.
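If you prefer to query the default trace yourself rather than use the standard report, here is a minimal sketch (ObjectType 16964 should be the Database subclass value, but verify it against sys.trace_subclass_values on your build):

declare @path nvarchar(260)
select @path = path from sys.traces where is_default = 1

-- 46 = Object:Created, 47 = Object:Deleted; ObjectType 16964 = Database (assumed)
select StartTime, LoginName, DatabaseName, EventClass
from fn_trace_gettable(@path, default)
where EventClass in (46, 47)
and ObjectType = 16964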

Memory pressure: Where, When?
Ideally this can be tracked by Perfmon data collection, and most of the SQL monitoring tools available in the market today track the Target and Total Server Memory counters for SQL instances. When you monitor the delta changes for these counters, you can track the amount of fluctuation that is occurring. The default trace tracks these changes using the Server Memory Change profiler event, which will help you decide if there are large fluctuations in memory usage for the SQL instance. This event gets raised when SQL Server memory usage has increased or decreased by either 1 megabyte (MB) or 5 percent of the maximum server memory (Buffer Pool only), whichever is greater. This is quite useful when tracking memory pressure being experienced by a SQL instance due to factors external to the SQL instance.
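A quick way to look at these events in the default trace; a minimal sketch (for this event, IntegerData should report the new memory size in MB and EventSubClass whether memory grew or shrank, but verify on your build):

declare @path nvarchar(260)
select @path = path from sys.traces where is_default = 1

-- event class 81 = Server Memory Change
-- EventSubClass: 1 = memory increased, 2 = memory decreased (assumed)
select StartTime, EventSubClass, IntegerData as NewMemorySizeMB
from fn_trace_gettable(@path, default)
where EventClass = 81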

Mirrored Database: When did your state change?
All database mirroring state changes for your mirrored databases are also tracked by the default trace using the Database Mirroring State Change event. This helps you follow all mirroring state changes without having to parse the SQL Errorlogs for them.
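A minimal sketch for pulling these state changes out of the default trace, filtering on the event name so that no event ID has to be hard-coded:

declare @path nvarchar(260)
select @path = path from sys.traces where is_default = 1

select t.StartTime, t.DatabaseName, t.TextData
from fn_trace_gettable(@path, default) as t
inner join sys.trace_events as e
on t.EventClass = e.trace_event_id
where e.name = 'Database Mirroring State Change'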

OMG: Someone’s mucking around with my database security?
We have handled multiple issues in the past where we needed to identify various security-related audit activity on a SQL instance, like new logins/users being added, mostly for post-mortem analysis. The default trace's “Security Audit” event classes show their usefulness by letting you trace this kind of activity.
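A minimal sketch that lists all Security Audit activity captured by the default trace, filtering on the category name:

declare @path nvarchar(260)
select @path = path from sys.traces where is_default = 1

select t.StartTime, e.name as EventName, t.LoginName, t.TargetLoginName
from fn_trace_gettable(@path, default) as t
inner join sys.trace_events as e
on t.EventClass = e.trace_event_id
inner join sys.trace_categories as c
on e.category_id = c.category_id
where c.name = 'Security Audit'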

Full-Text: What are you up to?
The default trace also helps trace Full-Text crawl activity, which at times has been known to cause performance degradation when very large FT catalogs are involved.

Database: Did you just grow or shrink?
SQL Server databases enabled for autogrow help prevent a database from going down when a file becomes full, but at the same time autogrow events can cause performance issues, like blocking. Additionally, database auto shrink (both log and data files) operations can severely impact database performance. With the help of the default trace, you can track all Auto Grow/Shrink operations for your databases on the instance.
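A minimal sketch for listing those operations from the default trace (for these events, IntegerData should be the size change in 8 KB pages and Duration is reported in microseconds, but verify on your build):

declare @path nvarchar(260)
select @path = path from sys.traces where is_default = 1

-- 92 = Data File Auto Grow, 93 = Log File Auto Grow,
-- 94 = Data File Auto Shrink, 95 = Log File Auto Shrink
select StartTime, DatabaseName, [FileName], EventClass, Duration, IntegerData
from fn_trace_gettable(@path, default)
where EventClass in (92, 93, 94, 95)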

SQL Performance: Now how can I use the default trace for this?
Well, the answer is not a definitive one, but the default trace can give you an indication if you have a baseline established. The default trace captures the following events: Hash Warning, Missing Column Statistics, Missing Join Predicate and Sort Warnings. These events can be used to find out if there is an excessive amount of sorting or hashing occurring on your SQL instance (without a baseline, this data is a moot point, though). You can also identify if a number of your queries are suffering from missing statistics and missing join predicates.
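A minimal sketch that counts these warning events from the default trace, as a starting point for comparison against your baseline:

declare @path nvarchar(260)
select @path = path from sys.traces where is_default = 1

-- 55 = Hash Warning, 69 = Sort Warnings,
-- 79 = Missing Column Statistics, 80 = Missing Join Predicate
select EventClass, count(*) as Occurrences
from fn_trace_gettable(@path, default)
where EventClass in (55, 69, 79, 80)
group by EventClass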

One thing to note here is that the default trace files roll over, which means that if they are not copied to an alternate location, they will be overwritten and valuable information required for post-mortem analysis will be lost.