SQLDIAG and SQL Server 2012

SQLDIAG is a data collection utility used for collecting T-SQL script output, Perfmon data and Profiler traces in a consolidated manner. This allows database administrators to collect a single consolidated output without having to configure multiple data collection utilities to capture the required data.

SQLDIAG has been shipping with the SQL Server product since SQL Server 2005. Now the reason I am writing this post is to talk about a specific issue that you can encounter when you already have a previous version of SQLDIAG installed on your machine along with SQL Server 2012.

Using the command below, I tried to execute a SQLDIAG data collection with a SQL Server 2012 specific configuration file. The command specifies the output folder and the default SQL Server 2012 SQLDIAG configuration file available at C:\Program Files\Microsoft SQL Server\110\Tools\Binn\SQLDiag.XML:

C:\>sqldiag /O "F:\Temp\SQLDIAG Output" /I "C:\Program Files\Microsoft SQL Server\110\Tools\Binn\SQLDiag.XML"

The output that I got was:

SQLDIAG Output path: F:\Temp\SQLDIAG Output\

SQLDIAG Invalid SQL Server version specified.  SQL Server version 11 is not supported by this version of the collector

SQLDIAG . Function result: 87. Message: The parameter is incorrect.

The reason for the above issue is that my PATH environment variable has the path for a previous version of SQLDIAG listed before the path of the SQL Server 2012 SQLDIAG. My PATH variable has the directory “C:\Program Files\Microsoft SQL Server\100\Tools\Binn\” listed before “C:\Program Files (x86)\Microsoft SQL Server\110\Tools\Binn\”, which is the default location of the SQL Server 2012 SQLDIAG utility. The PATH variable is updated with the SQL Server specific directories during a SQL Server installation. In my case, I have a SQL Server 2008 R2 instance installed on the box, so a configuration file that specifies a data collection against SQL Server 2012 fails because the SQLDIAG being launched comes from the SQL Server 2008 R2 installation.
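A quick way to check which copy of SQLDIAG a plain "sqldiag" invocation will pick up is the WHERE command, which lists the matches in the order in which they are resolved. On a machine with the PATH described above, the output would look something like this:

C:\>where sqldiag.exe
C:\Program Files\Microsoft SQL Server\100\Tools\Binn\sqldiag.exe
C:\Program Files (x86)\Microsoft SQL Server\110\Tools\Binn\sqldiag.exe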

If I execute the following command instead, the SQLDIAG initialization works correctly:

"C:\Program Files\Microsoft SQL Server\110\Tools\Binn\sqldiag.exe" /O "F:\Temp\SQLDIAG Output" /I "C:\Program Files\Microsoft SQL Server\110\Tools\Binn\SQLDiag.XML"

To summarize, you need to fully qualify the path to the SQLDIAG executable when collecting data on a machine that has multiple versions of SQLDIAG installed.

Visual Studio 2010: Are you being cordial to Report Viewer

If you are using Visual Studio 2010 and a Windows Forms application which uses the Report Viewer control, then you have probably been scratching your head after suddenly losing the ability to use any of the drill-through options in your reports.

The problem happens only after you install Visual Studio 2010 Service Pack 1! So, what is the solution? Install an update released for Visual Studio 2010 Service Pack 1 (KB2549864).

If you are using only the Microsoft Report Viewer 2010 SP1 Redistributable Package, then you need to install the file named “ReportViewer.exe”. If you use Microsoft Visual Studio 2010 Service Pack 1, install the file that is named “VS10SP1-KB2549864-x86.exe”. I saw yet another question being raised on this again this week… so I thought that a quick blog post on this would definitely be worth the effort!

Reference: Update fixes several Report Viewer issues after you install Visual Studio 2010 Service Pack 1 http://support.microsoft.com/kb/2549864

Brian Hartman’s Blog
http://blogs.msdn.com/b/brianhartman/archive/2011/03/31/visual-studio-2010-sp1.aspx

Update (10/22/2014): A number of people have reported that the above link no longer works for downloading the hotfix files. You will need to download them from the link mentioned below:

Update for Microsoft Visual Studio 2010 Service Pack 1 Report Viewer (KB2549864)
http://www.microsoft.com/en-us/download/details.aspx?id=27231

SQL Server Backup Simulator v2 available now

SQL Server Backup Simulator is used by CSS to troubleshoot SQLVDI related issues and to identify if the SQLVDI DLL is functioning correctly. Based on the feedback received from the use of the tool and the current troubleshooting needs, we decided to do a v2 release of SQL Server Backup Simulator.

The new features for the v2 release are:

  1. Log backup – Now the tool can perform log backups. The tool performs COPY_ONLY backups so that your LSN chain is not broken.
  2. Compression support – Starting from v2, the tool will allow you to take backups with compression enabled for SQL Server 2008 and higher.

The compression option drop-down list has three options (a rough T-SQL equivalent of these options is sketched after the list):
a. With Compression: This option will allow you to perform a backup with compression enabled even if the server default is not to use compression for backups.
b. No Compression: This option will allow you to perform a backup with compression disabled even if the server default is to use compression for backups.
c. Server Default: This option uses the server default setting (configuration setting: backup compression default) to perform the backup.
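For reference, the three options correspond conceptually to the COMPRESSION and NO_COMPRESSION clauses of the native BACKUP command. The tool itself exercises the virtual device interface, so the statements below are only an illustrative T-SQL sketch; the database name and backup path are placeholders:

-- "With Compression": forces a compressed backup regardless of the server default
BACKUP LOG [YourDatabase] TO DISK = N'F:\Temp\YourDatabase_log.bak'
WITH COPY_ONLY, COMPRESSION;

-- "No Compression": forces an uncompressed backup regardless of the server default
BACKUP LOG [YourDatabase] TO DISK = N'F:\Temp\YourDatabase_log.bak'
WITH COPY_ONLY, NO_COMPRESSION;

-- "Server Default": omit both keywords and the 'backup compression default' setting decides
BACKUP LOG [YourDatabase] TO DISK = N'F:\Temp\YourDatabase_log.bak'
WITH COPY_ONLY;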

[Screenshot of the v2 UI]

A Log Restore option is not available in this release; this feature will be evaluated while planning for the next release.

A big thank you to Sakthi [Blog | Twitter] for his assistance on the v2 release.

The latest release can be downloaded here.

Previous posts for Backup Simulator:
SQL Server Backup Simulator v1.0
http://blogs.msdn.com/b/sqlserverfaq/archive/2010/10/27/sql-server-backup-simulator.aspx
SQL Server Backup Simulator v1.2
https://troubleshootingsql.com/2011/01/17/sql-server-backup-simulator-cumulative-update/

Tools Tips and Tricks: Round-up

Last month I ran a blog series on the different tools that CSS uses for troubleshooting certain SQL Server related issues. There were 12 posts that covered the use of different tools like Debug Diagnostic, XPerf, tools from Sysinternals, SQLDIAG and SQL Nexus, along with some debugging tips.

Now it is time to do a SELECT [PostURL], [Summary] FROM [TroubleshootingSQL].[tblBlogSeries] WHERE [Series] = ‘Tools Tips and Tricks’…..

Post#1: Tools Tips and Tricks #1: Process Monitor – Some best practices to be followed when capturing Process Monitor traces to ensure that the data collection doesn’t cause additional performance issues.

Post#2: Tools Tips and Tricks #2: SQL Express RANU instances – Explains how to connect to SQL Express Run As User Instances (RANU) using existing tools.

Post#3: Tools Tips and Tricks #3: Custom Rowsets using SQL Nexus – Another quick tip on using SQL Nexus to import the output of .sql scripts used to capture diagnostic data, using the Rowset Importer.

Post#4: Tools Tips and Tricks #4: RML Utilities – Some helpful tips on the use of RML Utilities which is used by SQL Nexus under the hood for importing SQL Profiler traces into a SQL Server database.

Post#5: Tools Tips and Tricks #5: SQLDIAG and RANU – Explains how to capture diagnostic data for SQL Express Run As User Instances (RANU) using SQLDIAG.

Post#6: Tools Tips and Tricks #6: Custom Reports in SQL Nexus – A quick tip on how to create custom reports for SQL Nexus, a tool widely used within CSS for analyzing performance diagnostic data collected from a SQL Server instance.

Post#7: Tools Tips and Tricks #7: PsExec as parent and ProcMon as child – Explains how to use PsExec (a tool from Sysinternals) to launch a process remotely and capture data. In this example, I have used Process Monitor as the remote process.

Post#8: Tools Tips and Tricks #8 – Debug Diagnostic and Crash Rules – A walkthrough on using Debug Diagnostic tool for capturing crash dumps and analyzing them using the Crash Analysis rule.

Post#9: Tools Tips and Tricks #9: PSSDIAG Configuration Manager – Explains how to configure PSSDIAG collection using Configuration Manager GUI with a few tips and tricks on tweaking the XML configuration file.

Post#10: Tools Tips and Tricks #10: Caching PDB files locally – Explains how to cache symbol files locally using CDB.exe.

Post#11: Tools Tips and Tricks #11: Debug Diag and Memory leaks – A walkthrough on configuring Debug Diag for tracking memory leaks for a program which can be extended to tracking non-BPool allocations for SQL Server.

Post#12: Tools Tips and Tricks #12: XPerf, Memory usage and much more – A walkthrough on how to use XPerf Heap allocation tracking for identifying memory consumers for a program. Can be extended to SQL Server Out-of-Memory (OOM) issues for non-BPool memory crunch.

Webcast Material for Virtual Tech Days

In May, I delivered a webcast on “Understanding Performance Bottlenecks using Performance Dashboard”. The presentation material is now available on the SQLServerFAQ MSDN blog and the webcast videos are available for download on MSDN. Refer to the links below for the presentation decks and webcast video download links.

Managing and Optimizing Resources for SQL Server [PPT | Webcast Download] – Balmukund Lakhani [Blog | @Blakhani]
Optimizing and Tuning Full Text Search for SQL Server [PPT | Webcast Download] – Sudarshan Narasimhan [Blogs @ SQLServerFAQ]
Understanding Performance Bottlenecks using Performance Dashboard [PPT | Webcast Download | QnA]
Cool Tools to have for SQL Server DBA [Webcast Download] – Pradeep Adiga [Blog | @PradeepAdiga]
Learn Underappreciated Features of SQL Server to Improve Productivity [Webcast Download] – Nakul Vachhrajani [Blog]

 


Tools Tips and Tricks #12: XPerf, Memory usage and much more

This is the last post for the Tools Tips and Tricks series as May draws to a close. Today I shall talk about another tool that we use for performance troubleshooting called XPerf. Though it is not a tool that the SQL CSS team uses on a regular basis, when we do decide to use it for very specific scenarios, its usefulness cannot be overstated. I had talked about using Debug Diag for monitoring memory usage and tracking down allocations right up to the function call. There is another way to track heap allocations, which is what I shall be talking about today. I shall use the same MemAllocApp that I used last week. I start off the XPerf monitoring using the following commands (the process ID of MemAllocApp.exe in this example is 9532):

xperf -on PROC_THREAD+LOADER -BufferSize 1024 -MinBuffers 16 -MaxBuffers 16
xperf -start HeapSession -heap -Pids 9532 -BufferSize 1024 -MinBuffers 128 -MaxBuffers 128 -stackwalk HeapAlloc+HeapRealloc+HeapCreate

Once I have captured the data I need, the following command stops the data collection:

xperf -stop HeapSession -stop -d F:\MemAlloc.etl

Once that is done, you should have an ETL file in the location specified by the -d parameter. Since I am interested in the functions which were allocating the maximum amount of memory, I will use the command below to generate a summary report of the heap allocations traced by XPerf:

xperf -i "F:\MemAlloc.etl" -o "F:\MemAlloc.txt" -symbols -a heap -stacks -top 5

/* Output of MemAlloc.txt file */

Results for process MemAllocApp.exe (9532):

———————————————————————

GLOBAL ALLOCATIONS:
Alloc       :         100,     512000.0 KB
Realloc     :           0
Outstanding :         100,     512000.0 KB

———————————————————————

TOP 1:
Alloc       :         100,     512000.0 KB
Realloc     :           0
Outstanding :         100,     512000.0 KB

———————————————————————

MemAllocApp.exe!fn_allocatememory
MemAllocApp.exe!wmain
MemAllocApp.exe!__tmainCRTStartup
MemAllocApp.exe!wmainCRTStartup
kernel32.dll!BaseThreadInitThunk
ntdll.dll!RtlUserThreadStart

Alloc       :         100,     512000.0 KB
Realloc     :           0
Outstanding :         100,     512000.0 KB

 

As you can see from the above output, the function fn_allocatememory was responsible for 100 outstanding allocations of 5MB each (512000 KB in total). With just a single command I was able to figure out the reason behind the outstanding allocations for my EXE. Troubleshooting outstanding heap allocations for SQL Server may not be as easy as this, but it definitely saves time compared to digging the allocations out of a memory dump.

This method is quite useful when you have a very large ETL file to analyze. You can also configure a circular buffer for capturing the data by appending the following options to your HeapSession tracing command:

-BufferSize 1024 -MaxBuffers 1024 -MaxFile 1024 -FileMode Circular
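For example, with these options the HeapSession start command shown earlier might look roughly like this (the process ID is from this demo and the 1024 MB file limit is only illustrative):

xperf -start HeapSession -heap -Pids 9532 -stackwalk HeapAlloc+HeapRealloc+HeapCreate -BufferSize 1024 -MaxBuffers 1024 -MaxFile 1024 -FileMode Circular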

Note: Make sure that you set your _NT_SYMBOL_PATH environment variable correctly if you want the function calls to be resolved correctly.
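A typical value that resolves symbols from the Microsoft public symbol server and caches them locally looks like the line below (the local cache folder is just an example):

set _NT_SYMBOL_PATH=srv*C:\PublicSymbols*http://msdl.microsoft.com/download/symbols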

Hope you enjoyed this series of Tools Tips and Tricks as much as I had fun posting the various methods that I use to collect diagnostic data while troubleshooting SQL performance related issues.

References:
Using Actions to process Heap Data
Enabling Data Capture using XPerf
XPerf Options

Tools Tips and Tricks #11: Debug Diag and Memory leaks

Earlier this week I had shown how to use the Debug Diagnostic tool to capture a dump for a first chance exception encountered by an application and analyze it using the Crash Analysis rule. Today I am going to show how to use the Debug Diagnostic tool to track outstanding memory allocations for a process.

Steps

1. Launch the Debug Diagnostic tool and click on the Rules tab. Click on Add Rule.

2. Select the Native (non-.NET) Memory and Handle Leak rule. (see screenshot on the right)

3. You cannot set up a memory leak tracking rule for a process that is not running, as the Leak Tracking DLL has to hook onto the process. In this example, I will be using the tool to track an executable called MemAllocApp.exe. Select the required process using the Select Target window. (see screenshot on the left)

4. The next window, titled “Configure Leak Rule”, lets you go granular with your tracking requirements. I have opted not to generate a dump after n minutes of tracking (Option: Generate final userdump after x minutes of tracking). I have selected auto-unload of the Leak Tracking DLL once the rule is completed or deactivated (Option: Auto-unload Leak Track when rule is completed or deactivated). (see screenshot below)

5. Click on the Configure button to set additional options for userdump generation for the process being tracked (Configure userdumps for Leak Rule window in the screenshot below). I have configured the rule to capture a dump automatically if the process unexpectedly shuts down (Option: Auto-create a crash rule to get userdump on unexpected process exit). Additionally, I have configured the rule to capture a userdump once the private bytes for the process reach 350MB (Option: Generate a userdump when private bytes reach x MB). As you can see in the screenshot below, there are additional options that you can configure, but I don’t need them for this particular demo.

6. Next you get the “Select Dump Location and Rule Name” window where you can change the rule name and the location where dumps are generated. By default, the dumps are generated in the <Debug Diagnostic Install Path>\Logs\<Rule Name> folder.

7. Click on Activate Rule in the next window to start the tracking.

Note: If you are not in the same session as the Debug Diag service, then you will get the following pop-up message once you have configured the rule. Click on Yes, and you should then get a pop-up stating that Debug Diag is monitoring the EXE for leaks.

Process MemAllocApp.exe(15316) is in the same logon session as DebugDiag (session 2), but it is not in the same logon session as the DbgSvc service (session 0).  Do you want to return to ‘Offline Mode’ and continue?

On the Rules tab, you should see two rules: one for the leak tracking and the other for the crash rule. Once the process hits the threshold of 350MB of private bytes, a dump will be generated and the Userdump Count column value should change to 1. I was monitoring my application’s Private Bytes perfmon counter and the graph showed a steady increase (see screenshot below). Now that the rule is active, the Being Debugged column shows the value “Yes” and the LeakTrack Status column shows Tracking for MemAllocApp.exe under the Processes tab. I then used the Analyze Data button under the Rules tab to generate the memory tracking report for a memory dump that I had captured earlier; a few excerpts from that report follow.

The Analysis Summary tells me that I have outstanding memory allocations of 205MB. This dump was generated using a rule to capture a userdump when Private Bytes for the process exceeded 200MB. Next I shall look at the Virtual Memory Analysis Summary sub-heading…

[Screenshot: Virtual Memory Analysis Summary]

This clearly tells me that the memory allocations are coming from the native heaps, and I know from the previous screenshot that heap allocation functions (HeapAlloc) are being called. Digging into the Outstanding Allocation Summary, I find that over 200MB of allocations have been made by my application and all of them have been made on the heap. In the Heap Analysis summary, I find that the allocations have all come from the default process heap. Drilling down into the MemAllocApp hyperlink, I get the offset making these allocations, which is MemAllocApp+2cbb.

The function details from the report are available in the quoted text below. Since I have the debug symbols of the application, I know that this offset corresponds to my function fn_allocatememory, which makes 5MB allocations using HeapAlloc on the default process heap. If you align your symbols correctly for the analysis, the report also gives you the correct function names.

Function details

Function: MemAllocApp+2cbb
Allocation type: Heap allocation(s)
Heap handle: 0x00000000`00000000
Allocation Count: 41 allocation(s)
Allocation Size: 205.00 MBytes
Leak Probability: 52%

So without any debugging commands, I was able to drill down to the culprit making the maximum number of allocations. This can be quite a useful way of tracking down non-BPool (commonly known as MemToLeave on 32-bit SQL instances) allocations when the Multi Page Allocations don’t show a high count but you are experiencing non-BPool memory pressure.

The fn_allocatememory function code is shown below:

void fn_allocatememory(int cntr)
{
    printf("Sleeping for 10 seconds\n");
    Sleep(10000);
    // Allocate 5MB (5,242,880 bytes) from the default process heap
    BYTE* pByte = (BYTE*) HeapAlloc(GetProcessHeap(), 0, 5242880);
    // Write to the first byte so the allocation is actually touched
    (*pByte) = 10;
    printf("Iteration %d: Allocating 5MB using HeapAlloc\n", cntr + 1);
}

I used the same HeapAlloc function that Sudarshan had used in his latest blog post to explain behavior changes in Windows Server 2008 for tracking heap corruptions.