Awesomesauce: Perf Dashboard on Native SSRS

I have been asked multiple times about serving the Perf Dashboard reports from a Reporting Services instance. Though this was not the intent behind launching the SQL Server Performance Dashboard Reports, there is at times a compelling need to have them accessible from a web URL. In today’s age, you will find a lot of DBAs monitoring their SQL Server instances remotely. Sometimes another layer of complexity is added when they want to check the health of a SQL Server instance just by browsing to a URL exposed through a corporate server in their environment. In such cases, you wouldn’t want to jump through hoops like setting up a VPN connection and so on.

So with all that in mind, let’s talk about how you can deploy the SQL Server 2012 Performance Dashboard reports to your existing SQL Server Reporting Services instance.

  • First you need to create a new Reporting Services Project using SQL Server Data Tools (SSDT).
  • Use the solution explorer to add all the existing performance dashboard reports from the C:\Program Files (x86)\Microsoft SQL Server\110\Tools\Performance Dashboard folder.
  • Modify the project properties to ensure that the data source gets written to the same folder as the reports.
  • Create a shared data source pointing to your SQL Server instance.
  • Open each report that was imported into the project and modify the data source properties to use the shared data source that you created in the above step.
  • Modify the project deployment properties as shown below.

[Screenshot: project deployment properties]

Now you are ready to deploy your Performance Dashboard SSRS project to your reporting services instance.

What can you do next?

You can now set up Report Server subscriptions for the set of reports that you wish to receive via email. You can get a status report of your SQL Server instance without even having to lift a finger, and that too right in your mailbox!

Note that Performance Dashboard is a tool provided “AS-IS” by Microsoft. The steps mentioned above will help you deploy the existing Performance Dashboard reports to a SQL Server Reporting Services instance. However, any failure encountered while deploying these reports in this manner is not eligible for support from the Microsoft SQL Server Support team.

SQL Feature Discovery Script

As part of my work, I frequently have to collect information about the various database engine features that are currently being used on a particular SQL Server instance. Sometimes this requires me to write T-SQL scripts to fetch the required information. I had updated my initial data collection script some time back, and this gave me the idea to write another set of T-SQL queries to fetch information about the database engine features in use.

The script collects information which is categorized under the following headings (a small sample of the kind of checks involved is shown after the list):

1. General Server Configuration
        Server Info
        Non-default sp_configure settings
        Server Settings
        Active Trace Flags
2. Replication Configuration
        Replication Publishers
        Merge Replication Publishers
        Replication Subscribers
        Replication Distributors
3. Full-text enabled databases
4. Linked Servers
5. SQL Agent information
6. Databases
        Database information
        Database file information
7. Server Triggers
8. Policy Based Management
9. Resource Governor
10. Database Mail
11. Log Shipping
12. Database Mirroring
13. SQL CLR Assemblies
14. sp_OA* procedures
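To give a flavor of what the script runs under the covers, here is a minimal sketch of a few of the simpler checks. The actual script wraps many such queries and formats the combined output as HTML.

-- Active trace flags currently enabled
DBCC TRACESTATUS(-1)

-- Full-text enabled databases
select name from sys.databases where is_fulltext_enabled = 1

-- Linked servers defined on the instance
select name, product, provider from sys.servers where is_linked = 1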

Usage

  1. Download the script using the link given at the bottom of the page and save it to a file named SQL_DISCOVERY.SQL. Open the script file in a SSMS Query Window.
  2. Press CTRL+SHIFT+F so that the output results are put into a file. Doing this will not produce a message or any other notification.
  3. Execute the script and specify SQL_DISCOVERY.html as the output file name so that we can get the output in the required HTML format.
  4. Once the script is completed, open the HTML file.

Script download: [link]

If you have any feedback about the script, or feel that anything should be added to the data currently being captured, please feel free to leave a comment!


SQLDIAG and SQL Server 2012

SQLDIAG is a data collection utility that is used for collecting T-SQL script output, Perfmon data and Profiler traces in a consolidated manner. This allows database administrators to collect a single output without having to configure multiple data collection utilities to capture the required data.

SQLDIAG has shipped with the SQL Server product since SQL Server 2005. The reason I am writing this post is to talk about a specific issue that you can encounter when you already have a previous version of SQLDIAG installed on your machine along with SQL Server 2012.

Using the command below, I tried to execute a SQLDIAG data collection using a SQL Server 2012 SQLDIAG configuration file. The command specifies the output folder and the default SQLDIAG configuration file available at C:\Program Files\Microsoft SQL Server\110\Tools\Binn\SQLDiag.XML:

C:\>sqldiag /O "F:\Temp\SQLDIAG Output" /I "C:\Program Files\Microsoft SQL Server\110\Tools\Binn\SQLDiag.XML"

The output that I got was:

SQLDIAG Output path: F:\Temp\SQLDIAG Output\

SQLDIAG Invalid SQL Server version specified.  SQL Server version 11 is not supported by this version of the collector

SQLDIAG . Function result: 87. Message: The parameter is incorrect.

The reason for the above issue is that my environment PATH variable lists the path of a previous version of SQLDIAG before the path of the SQL Server 2012 SQLDIAG. My PATH variable has the directory “C:\Program Files\Microsoft SQL Server\100\Tools\Binn\” listed before “C:\Program Files (x86)\Microsoft SQL Server\110\Tools\Binn\”, which is the default location of the SQLDIAG utility. The PATH variable is updated with the SQL Server specific directories during a SQL Server installation. In my case, I have a SQL Server 2008 R2 instance installed on the box, so the configuration file which specifies a data collection for SQL Server 2012 fails because the SQLDIAG being picked up is the one from the SQL Server 2008 R2 installation.

If I execute the following command instead, the SQLDIAG initialization works correctly:

"C:\Program Files\Microsoft SQL Server\110\Tools\Binn\sqldiag.exe" /O "F:\Temp\SQLDIAG Output" /I "C:\Program Files\Microsoft SQL Server\110\Tools\Binn\SQLDiag.XML"

To summarize, you need to fully qualify the SQLDIAG executable path when collecting SQLDIAG data on a machine which has multiple versions of SQLDIAG installed.

Tools Tips and Tricks #3: Custom Rowsets using SQL Nexus

This post is part three of the Tools Tips and Tricks series that I started last week. In my T-SQL Tuesday post for this month, I already explained my inclination to import collected data into a SQL Server database. I shall touch upon this yet again, but this time through the use of SQL Nexus. I am going to use a small script to collect data about the user requests which are executing queries against a particular SQL Server instance. The script that I used to collect the data is shown below:


set nocount on
go

while (1=1)
begin

-- rowset identifier that the SQL Nexus custom rowset importer looks for
print '-- RequestsRowset'

select GETDATE() as runtime,
a.session_id as session_id,
a.start_time as start_time,
a.[status] as [status],
a.command as command,
a.database_id as database_id,
b.objectid
from sys.dm_exec_requests a
cross apply sys.dm_exec_sql_text(sql_handle) b
where session_id <> @@spid

-- flush the output buffer so the snapshot is written out immediately
raiserror ('', 0, 1) with nowait

-- capture a snapshot every 5 seconds
waitfor delay '00:00:05'
end

I have stored the output captured by the above query in a file called CustomRowset.OUT. Using the Edit Custom Rowset option in the SQL Nexus UI (available in the third expandable tab, named Data, on the left hand side), I can pull up a UI where I can specify the table name into which the data needs to be imported (tbl_RequestsExample in this case) and the identifier for the data which needs to be imported (-- RequestsRowset in this case). I was executing a WAITFOR DELAY command from another session while the above script was capturing data. I then import the data into a SQL Nexus database using the Import option. Once this is done, I can query the database and look at the results which were imported (see the screenshot below).

You can extend this functionality to any degree you want and even combine multiple T-SQL commands to capture different result sets in the same loop. You just need to make sure that the rowset identifier for each query result set is unique.
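As an illustration, a second rowset could be added to the same loop along the lines of the sketch below. The rowset identifier, the DMV chosen and the session_id filter here are purely illustrative; any uniquely identified result set with named columns will work.

print '-- WaitingTasksRowset'

select GETDATE() as runtime,
session_id,
wait_type,
wait_duration_ms,
blocking_session_id
from sys.dm_os_waiting_tasks
where session_id > 50 -- illustrative filter to skip most system sessions

raiserror ('', 0, 1) with nowait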

Additionally, if you attempt to capture outputs which have columns with data type length greater than 8000, then the import will fail with the following error in the SQL Nexus log file:

SQLNexus Information: 0 : RowsetImportEngine Error: An unexpected error has occurred:

System.Data.SqlClient.SqlException: The size (8192) given to the column ‘query_text’ exceeds the maximum allowed for any data type (8000).

SQL Nexus 3.0 doesn’t give you the option to add your own column data types using the UI. The custom rowset form will treat all columns as varchar. If you want to change this behavior, then modify C:\Users\<user name>\AppData\Roaming\sqlnexus\TextRowsetsCustom.xml directly to add or modify the data types that you want.

Where is this helpful?
Let’s say I decided to collect the output of a customized T-SQL script for an extended period of time. Instead of scouring the .txt or .out file manually using a text editor, I can import the data into a table and then run queries on it to save myself a hair-raising experience and valuable time!
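As a minimal sketch, assuming the data was imported into the tbl_RequestsExample table mentioned above, the collected snapshots can then be queried like any other table (the column names come from the collection script):

select runtime, session_id, [status], command, database_id
from tbl_RequestsExample
order by runtime, session_id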

How do I make sure that the data is imported correctly?
1. If you have data which is larger than varchar(8000) in the result sets collected, then make sure to modify the TextRowsetsCustom.xml before you import the data.
2. Give each rowset that you collect a unique identifier. You don’t want the importer to mix-n-match the data you are importing.
3. Add a runtime column using GETDATE() or a variable for scripts capturing data in a loop to ensure that you can track the trend easily without having to second-guess.
4. All columns that are collected in the result set(s) have to be named.
5. Avoid CR/LFs in the result set, i.e. don’t use CHAR(13) in your T-SQL script while capturing the data, as this seriously confuses the importer, which treats CR/LFs as the end-of-row indicator.
6. Either capture the data directly into a file by running the data collection script from Management Studio, or use the sqlcmd -W parameter if you are capturing the data using sqlcmd. -W ensures that trailing spaces are removed from the result sets.

That is all I have for today. Happy customizing and importing!

Tools Tips and Tricks #1: Process Monitor

I recently wrote about importing a Process Monitor trace into a SQL Server database table and crunching the data to extract the events and call stacks. This prompted me to think about capturing data with Process Monitor and some of the things I learnt along the way while using this tool at CSS.

The first tip is to disable any activity that you don’t want to capture or that is not required for the issue you are troubleshooting. Process Monitor tracks file system, registry and process/thread activity, along with network and profiling events. Using the toolbar in the Process Monitor window, you can enable/disable the following captures:

a. Registry activity
b. File System activity
c. Network activity
d. Process and Thread activity
e. Profiling events

More information about the above is available in the Process Monitor help file.

The command line options are immensely helpful if you are scripting the capture of a trace using a batch file, or if you are building an automation routine to load the captured data into another data source. I used the /OpenLog and /SaveAs1 options to generate the XML file from the saved .PML file.

One of the most useful options that I suggest using when capturing a Process Monitor trace is the backing file option (the /BackingFile command line parameter, or CTRL+B when using the GUI). This prevents the page file from being used as the backing store for the trace capture, and avoids running into unresponsive-server issues when the paging file fills up while you are still capturing your trace. I normally point the backing file to a local drive on the machine which has a sufficient amount of disk space.

Process Monitor can use symbol information, if available, to show the functions referenced on event stacks. You can point to the symbol path (a local symbol cache or the Microsoft Symbol Server: http://msdl.microsoft.com/download/symbols) using Options –> Configure Symbols. Additionally, you can specify the path to the source files for the application in the same dialog. This will help you resolve the function calls using the symbol path and, if a source path is present, open a text viewer dialog with the referenced source line highlighted. The symbol path is also needed when the /SaveAs2 option is used to convert the .PML file to .XML format. Note that this option considerably increases the export time due to the symbol resolution involved.

I am starting a series tagged with “Tools Tips and Tricks” which will document the various tweaks that I use with the data collection and analysis tools I work with on a day-to-day basis.