SQL Server Backup Simulator Download Location

I have received a number of questions about the download location of the SQL Server Backup Simulator. This post is to let you know that v2.0 of the SQL Server Backup Simulator is now available on the tigertoolbox GitHub repository under Releases.

We are also working on an updated version that will account for the latest VDC_Complete enhancements.


SQL Server 2016 Public Preview (CTP2) – Live Query Statistics

I have spent a lot of time at customer sites resolving SQL Server performance issues in business-critical environments. Part of that work is helping customers understand how to track down performance bottlenecks and what remediation steps are needed to remove them. This involves two kinds of troubleshooting: post-mortem analysis and troubleshooting the issue while it is happening! Yes, I am talking about live troubleshooting, which is a scary thing to do on a production server.

So if you share my deep-rooted passion for working on SQL Server performance issues, the Live Query Statistics feature in SQL Server 2016 CTP2 is definitely worth knowing more about!

Live Query Statistics can be enabled in one of the following ways:

1. Using Management Studio

The screenshot below shows the button that enables Live Query Statistics. This can be a very powerful tool for troubleshooting bad query performance while the query is executing: you get insight into the plan and can see which part of it is taking time as the query runs.

Live Query Stats button on toolbar

You get a host of information from Live Query Statistics, as seen in the screenshot below. You will be able to pinpoint the part of the plan that is the culprit because you have completion-percentage statistics for each and every operator in the plan; the completed operators show you the efficient parts of the plan. Additionally, you get the time spent in each operator, which helps identify the most time-consuming part of the plan. And what’s more, you even know how much of the query has completed. This is one of the most common questions I used to receive from customers while troubleshooting long-running queries: “How long will the query take to complete?” Well now, there is an answer!

[Screenshot: Live Query Statistics showing per-operator progress and timings]

2. Using Activity Monitor

A new grid named “Active Expensive Queries” has been added to Activity Monitor, which allows you to right-click a query and click the “Show Live Execution Plan” option.

Live Query Stats button in Activity Monitor

However, the “Show Live Execution Plan” option will only be enabled if the application or user has done one of the following (a short example script follows this list):

  • Executed SET STATISTICS XML ON; or SET STATISTICS PROFILE ON; in the target session.

  • Enabled the query_post_execution_showplan extended event. This is a server-wide setting that enables live query statistics on all sessions.
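A minimal sketch of both options is shown below; the event session name LQS_AllSessions is a hypothetical placeholder of mine, not something the product mandates.

-- Option 1: per-session. Run this in the session whose query you want to watch.
SET STATISTICS XML ON;

-- Option 2: server-wide. The session name below is a hypothetical placeholder.
CREATE EVENT SESSION [LQS_AllSessions] ON SERVER
ADD EVENT sqlserver.query_post_execution_showplan;

ALTER EVENT SESSION [LQS_AllSessions] ON SERVER STATE = START;

-- Stop and drop the session once you are done troubleshooting, since
-- collecting these statistics carries a performance cost (see the note below).
ALTER EVENT SESSION [LQS_AllSessions] ON SERVER STATE = STOP;
DROP EVENT SESSION [LQS_AllSessions] ON SERVER;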

And if you are a developer, you can use this feature in conjunction with the Transact-SQL debugger to pinpoint slow parts of the execution plan while the query is running. A truly powerful way to write and optimize queries! The debugging experience is now enhanced, as the live query plan can be used along with breakpoints. The screenshot below shows what the debugging experience looks like.

[Screenshot: Live Query Statistics alongside the Transact-SQL debugger]

Do keep in mind that this feature is intended for troubleshooting and debugging slow-running queries and should not be left enabled in production environments: there is a moderate performance impact for collecting and displaying the statistics mentioned above.

Reference:

Live Query Statistics
https://msdn.microsoft.com/en-us/library/dn831878.aspx 

Disclaimer: Some information in this topic is preview and subject to change in future releases. Preview information describes new features or changes to existing features in Microsoft SQL Server 2016 Community Technology Preview 2 (CTP2).

Running SQL Nexus using Command Line Parameters

SQL Nexus is a tool that helps you identify the root cause of SQL Server performance issues. It loads and analyzes performance data collected by SQLDiag and PSSDiag, and it can dramatically reduce the amount of time you spend manually analyzing data. One lesser-known fact is that SQL Nexus also allows you to run its reports using command-line arguments, or even to import diagnostic data automatically. The command-line help reference is shown below. You can find more details on GitHub.

sqlnexus.exe [ [ /S [ /E | /Uuser /Ppwd ] [/D"database"] ] | [/C"connstr"] ] [/Iinputpath] [/Ooutputpath] [/Rreport] [/X] [/Q] [/N]

/S"server"      Specifies a SQL Server name to connect to.
/D"database"    Database to connect to
/E              Log in to SQL using Windows/integrated security
/Uuser          Specifies a SQL (non-Windows) login name
/Ppassword      Specifies the password for a SQL (non-Windows) login
/C"connstring"  Specifies the SqlClient ConnectionString to use (can be used instead of /S /E).
/I"path"        Import SQL diagnostic data from this path
/R"report"      Specifies a report file name to run.
/O"path"        Specifies an export path for reports executed via /R. Also this is where the sqlnexus.000.log would get created
/X              Exit after importing (/I) or exporting (/O) the specified report (/R)
/Vparam=value   Specify the value of a form parameter
/Q              Quiet Mode - minimize windows in console mode
/N              Create a new SQLNexus database before importing (drop existing).

Automating Report Execution

The first scenario I want to outline is automating report execution, which is especially useful when you need to run the reports repeatedly, for example after performance-testing runs or during any other iterative activity that requires analyzing multiple sets of diagnostic data.

This can be achieved using the command line parameters shown below:

sqlnexus.exe /S"." /X /D"sqlnexus" /R"\Summary.rdl" /O""

As you can see from the screenshot below, a standard report, “Bottleneck Analysis_C.xls”, was exported along with a custom report I created, Summary.xls, in the same folder. This also gives you the opportunity to create your own performance reports and schedule them using the SQL Nexus executable (a sketch of that follows the screenshot).

Report Output
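One way to do that scheduling is a SQL Server Agent job with a CmdExec step. The sketch below makes some assumptions: the job name and output path are hypothetical placeholders, SQL Server Agent is running, and the job owner is permitted to run CmdExec steps.

USE msdb;
GO

-- Hypothetical job name; add a schedule with sp_add_jobschedule as needed.
EXEC dbo.sp_add_job @job_name = N'SQLNexus - Nightly Reports';

-- CmdExec step that runs SQL Nexus with the same arguments shown above.
EXEC dbo.sp_add_jobstep
    @job_name  = N'SQLNexus - Nightly Reports',
    @step_name = N'Export Summary report',
    @subsystem = N'CmdExec',
    @command   = N'sqlnexus.exe /S"." /X /D"sqlnexus" /R"\Summary.rdl" /O"C:\NexusReports"';

-- Target the local server so the job is eligible to run.
EXEC dbo.sp_add_jobserver @job_name = N'SQLNexus - Nightly Reports';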

After you have the exported reports, you could also write additional automation to email the reports as required.
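For instance, if Database Mail is configured, a call like the one below could send an exported file; the profile name, recipient, and attachment path are assumptions for illustration.

-- Assumes a Database Mail profile named 'DBA Mail Profile' exists and the
-- attachment path matches the export folder used with /O.
EXEC msdb.dbo.sp_send_dbmail
    @profile_name     = N'DBA Mail Profile',
    @recipients       = N'dba-team@contoso.com',
    @subject          = N'SQL Nexus - Summary report',
    @body             = N'Attached is the latest SQL Nexus summary report.',
    @file_attachments = N'C:\NexusReports\Summary.xls';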

Automating Data Import

SQL Nexus also allows you to perform data imports automatically using command-line parameters. The command line shown below imports data from the specified folder into a SQL Server database called sqlnexustest on the default SQL Server instance installed on the machine.

sqlnexus.exe /S"." /X /D"sqlnexustest" /I"C:\temp\output"

If the database does not exist on the SQL Server instance, it will be created. The SQL Nexus settings saved the last time the UI was used will be applied during the import. So if you disabled the profiler trace import from the UI and saved your settings, the command-line execution will not import the profiler traces.
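As a quick sanity check after an automated import, you can list the tables that were created in the target database. This is plain catalog metadata, not a SQL Nexus-specific interface:

-- List the tables SQL Nexus created during the import.
SELECT name
FROM sqlnexustest.sys.tables
ORDER BY name;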

WOOT: Default Trace and Power View

I have been working on building visualizations for various kinds of analysis that I perform for my customers. One such useful visualization is the use of Power View for analyzing the data available in the SQL Server default trace. The query below retrieves all the information in the default trace files, and the same query is used to populate the Power Pivot table in the Excel file.


declare @enable int;

-- Check whether the default server-side trace is enabled
select top 1 @enable = convert(int, value_in_use)
from sys.configurations
where name = 'default trace enabled';

if @enable = 1 -- default trace is enabled
begin

    declare @curr_tracefilename varchar(500);
    declare @base_tracefilename varchar(500);
    declare @indx int;

    select @curr_tracefilename = path from sys.traces where is_default = 1;

    -- Strip the rollover file number and point at the base log.trc file so
    -- fn_trace_gettable reads all rollover files in sequence
    set @curr_tracefilename = reverse(@curr_tracefilename);
    select @indx = patindex('%\%', @curr_tracefilename);
    set @curr_tracefilename = reverse(@curr_tracefilename);
    set @base_tracefilename = left(@curr_tracefilename, len(@curr_tracefilename) - @indx) + '\log.trc';

    -- Return every default trace event with its category and event name
    select EventCat.name as Category, EventID.name as EventName, Events.*
    from sys.fn_trace_gettable(@base_tracefilename, default) Events
    inner join sys.trace_events EventID
        on Events.EventClass = EventID.trace_event_id
    inner join sys.trace_categories EventCat
        on EventID.category_id = EventCat.category_id;

end

Once I had all the trace data available in my Power Pivot table, I created calculated columns for Day, Hour, and Minute. With the data readily available, I went about creating the main dashboard, which provides a view of all the events that occurred along with a timeline view. All this took me less than 5 minutes after I had finished writing the query! Pretty quick. Now I have an interactive report that I can use for performing various kinds of analysis.
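If you would rather compute those columns in the source query than as Power Pivot calculated columns, a variation like the one below works; it reuses the @base_tracefilename variable and the joins from the query above.

-- Alternative: derive Day/Hour/Minute in T-SQL from the trace's StartTime column.
select
    datepart(day,    Events.StartTime) as [Day],
    datepart(hour,   Events.StartTime) as [Hour],
    datepart(minute, Events.StartTime) as [Minute],
    EventCat.name as Category,
    EventID.name as EventName,
    Events.*
from sys.fn_trace_gettable(@base_tracefilename, default) Events
inner join sys.trace_events EventID
    on Events.EventClass = EventID.trace_event_id
inner join sys.trace_categories EventCat
    on EventID.category_id = EventCat.category_id;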

The screenshot below shows that there was only one event raised for the Server event category, and the actual time of occurrence is shown in the line graph. A simple mouse-over on the point will give you the exact details. Now isn’t that a simple way to track down events?

[Screenshot: Power View dashboard of default trace events with timeline]

I will provide a final version of the Excel sheet once I have completed the other dashboards and sanitized the information available in the Power Pivot table.

PowerView and System Health Session–System

Previous posts in this series:

PowerView and System Health Session–CPU health

PowerView and System Health Session–Scheduler Health

PowerView and System Health Session–SQL Memory Health

PowerView and System Health Session–IO Health

In the last post in this series, I explained how to retrieve I/O statistics from the System Health Session data. In this post, I will describe how to build a dashboard using the SYSTEM component of the sp_server_diagnostics output. This view will help DBAs track the kinds of errors that can send their blood pressure shooting to abnormal levels: the SYSTEM component tracks issues like non-yielding conditions, latch-related warnings, inconsistent pages detected, and access violations for the SQL Server instance.
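If you want a quick look at the raw component output before building any dashboards, you can execute the procedure directly; with the default repeat interval of 0 it runs once and returns one row per component.

-- Runs once (default @repeat_interval = 0) and returns one row per component:
-- system, resource, query_processing, io_subsystem, and events.
EXEC sp_server_diagnostics;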

Armed with this information in a Power Pivot table, I created two calculated columns, DAY and HOUR, based on the time the event was reported. After that, I created KPIs on the maximum number of non-yielding conditions, latch-related warnings, inconsistent pages, and access violations reported.

Once the Power Pivot data was ready, I created a new Power View sheet that tracks the KPIs for each day and hour. The screenshot below shows the final view.

The first part is a 100% stacked bar graph showing the various errors that were reported each day. A slicer for Day allows you to filter the data quickly.

The second part of the report is a matrix that shows the KPI status for each day, with drill-down capability to the hour.

The third part of the report shows a card view with the actual number of issues reported for each event at a particular time.

As usual, the Excel sheet is available on SkyDrive at: http://sdrv.ms/10O0udO

Issue Statistics

The query to fetch the data required to build this report is available below.


SET NOCOUNT ON;

-- Fetch data only for SQL Server 2012 (or later) instances
IF (SUBSTRING(CAST(SERVERPROPERTY('ProductVersion') AS varchar(50)), 1, CHARINDEX('.', CAST(SERVERPROPERTY('ProductVersion') AS varchar(50))) - 1) >= 11)
BEGIN
    -- Get UTC time difference for reporting event times local to server time
    DECLARE @UTCDateDiff int = DATEDIFF(mi, GETUTCDATE(), GETDATE());

    -- Store the XML data retrieved from the ring buffer in a temp table
    SELECT TOP 1 CAST(xet.target_data AS XML) AS XMLDATA
    INTO #SystemHealthSessionData
    FROM sys.dm_xe_session_targets xet
    JOIN sys.dm_xe_sessions xe
        ON (xe.address = xet.event_session_address)
    WHERE xe.name = 'system_health'
        AND xet.target_name = 'ring_buffer';

    -- Parse the XML data and provide the required values in the form of a table
    ;WITH CTE_HealthSession (EventXML) AS
    (
        SELECT C.query('.') EventXML
        FROM #SystemHealthSessionData a
        CROSS APPLY a.XMLDATA.nodes('/RingBufferTarget/event') AS T(C)
    )
    SELECT
        DATEADD(mi, @UTCDateDiff, EventXML.value('(/event/@timestamp)[1]', 'datetime')) AS [Event Time],
        EventXML.value('(/event/data/text)[1]', 'varchar(255)') AS Component,
        EventXML.value('(/event/data/value/system/@latchWarnings)[1]', 'bigint') AS [Latch Warnings],
        EventXML.value('(/event/data/value/system/@isAccessViolationOccurred)[1]', 'bigint') AS [Access Violations],
        EventXML.value('(/event/data/value/system/@nonYieldingTasksReported)[1]', 'bigint') AS [Non Yields Reported],
        EventXML.value('(/event/data/value/system/@BadPagesDetected)[1]', 'bigint') AS [Bad Pages Detected],
        EventXML.value('(/event/data/value/system/@BadPagesFixed)[1]', 'bigint') AS [Bad Pages Fixed]
    FROM CTE_HealthSession
    WHERE EventXML.value('(/event/@name)[1]', 'varchar(255)') = 'sp_server_diagnostics_component_result'
        AND EventXML.value('(/event/data/text)[1]', 'varchar(255)') = 'SYSTEM'
    ORDER BY [Event Time];

    DROP TABLE #SystemHealthSessionData;
END