Earlier this week I showed how to use the Debug Diagnostic tool to capture a dump for a first chance exception encountered by an application and analyze it using the Crash Analysis rule. Today I am going to show how to use the Debug Diagnostic tool to track outstanding memory allocations for a process.
Steps
1. Launch the Debug Diagnostic tool and click on the Rules tab. Click on Add Rule.
2. Select the Native (non-.NET) Memory and Handle Leak rule. (see screenshot on the right)
3. You cannot set up a memory leak tracking rule for a process that is not running, because the Leak Tracking DLL has to hook into the process. In this example, I will be using the tool to track an executable called MemAllocApp.exe. Select the required process using the Select Target window. (see screenshot on the left)
4. The next window, titled “Configure Leak Rule”, lets you get granular with your tracking requirements. I have opted not to generate a dump after n minutes of tracking (Option: Generate final userdump after x minutes of tracking). I have selected an auto-unload of the Leak Tracking DLL once the rule is completed or deactivated (Option: Auto-unload Leak Track when rule is completed or deactivated). (see screenshot below)
5. Click on the Configure button to configure additional options for userdump generation for the process being tracked (Configure userdumps for Leak Rule window in the screenshot below). I have configured the rule to capture a dump automatically if the process that I am tracking unexpectedly shuts down (Option: Auto-create a crash rule to get userdump on unexpected process exit). Additionally, I have configured the rule to capture a userdump once the private bytes for the process reach 350MB (Option: Generate a userdump when private bytes reach x MB). As you can see in the screenshot below, there are additional options that you can configure, but I don’t need them for this particular demo.
6. Next you get the “Select Dump Location and Rule Name” window where you can change the rule name and the location where the dumps are generated. By default, the dumps are generated in the <Debug Diagnostic Install Path>\Logs\<Rule Name> folder.
7. Click on Activate Rule in the next window to start the tracking.
Note: If you are not in the same session as the Debug Diag service, then you will get the following pop-up once you have configured the rule. Click on Yes, and you should then get a pop-up stating that Debug Diag is monitoring the EXE for leaks.
Process MemAllocApp.exe(15316) is in the same logon session as DebugDiag (session 2), but it is not in the same logon session as the DbgSvc service (session 0). Do you want to return to ‘Offline Mode’ and continue?
On the Rules tab, you should now see two rules: one for the leak tracking and the other for the crash rule. Once the process hits the threshold of 350MB of private bytes, a dump will be generated and the Userdump Count column value should change to 1. I was monitoring my application’s Private Bytes perfmon counter and the graph showed a steady increase. (see screenshot below) Now that the rule is active, the Being Debugged column shows “Yes” and the LeakTrack Status column shows Tracking for MemAllocApp.exe under the Processes tab. I then used the Analyze Data button under the Rules tab to generate the memory tracking report for a memory dump that I had captured earlier; a few excerpts from that report follow.
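As a quick aside before the report excerpts: if you want to sanity-check the Private Bytes number from inside the application itself rather than through Perfmon, a minimal sketch like the one below works. GetProcessMemoryInfo and the PrivateUsage field are standard psapi APIs, but the helper name fn_printprivatebytes and the idea of wiring it into MemAllocApp are my own illustration, not part of the original application.

#include <windows.h>
#include <psapi.h>
#include <stdio.h>

#pragma comment(lib, "psapi.lib")

// Illustrative helper (not in the original MemAllocApp): prints the
// current process' private bytes (PrivateUsage) in MB.
void fn_printprivatebytes(void)
{
    PROCESS_MEMORY_COUNTERS_EX pmc;
    if (GetProcessMemoryInfo(GetCurrentProcess(),
        (PROCESS_MEMORY_COUNTERS*)&pmc, sizeof(pmc)))
    {
        printf("Private Bytes: %.2f MB\n", pmc.PrivateUsage / (1024.0 * 1024.0));
    }
}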
The Analysis Summary tells me that I have outstanding memory allocations of 205MB. This dump was generated using a rule to capture a userdump when Private Bytes for the process exceeded 200MB. Next I shall look at the Virtual Memory Analysis Summary sub-heading…
This clearly tells me that the memory allocations are coming from the Native Heaps. And I know from the previous screenshot that the heap allocation function (HeapAlloc) is being called. Digging into the Outstanding Allocation Summary, I find that over 200MB of allocations have come from my application and all of them have been made on the heap. In the Heap Analysis summary, I find that the allocations have all come from the default process heap. Drilling down into the MemAllocApp hyperlink, I get the offset making these allocations, which is MemAllocApp+2cbb (i.e. an offset of 0x2cbb bytes from the module’s load address).
The function details from the report are available in the quoted text below. Since I have the debug symbols for the application, I can tell that this offset corresponds to my function fn_allocatememory, which makes 5MB allocations using HeapAlloc on the default process heap. If you align your symbols correctly for the analysis, you will find that the report also gives you the correct function names. (A sketch of what that module+offset resolution looks like follows the function details below.)
Function details
Function: MemAllocApp+2cbb
Allocation type: Heap allocation(s)
Heap handle: 0x00000000`00000000
Allocation Count: 41 allocation(s)
Allocation Size: 205.00 MBytes
Leak Probability: 52%
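To make “aligning your symbols” a little more concrete, the sketch below shows the idea behind resolving a module+offset such as MemAllocApp+2cbb into a function name using the DbgHelp symbol APIs. This is only an illustration of the concept; it is not what DebugDiag runs internally, and the helper name fn_resolveoffset is mine.

#include <windows.h>
#include <dbghelp.h>
#include <stdio.h>

#pragma comment(lib, "dbghelp.lib")

// Illustrative only: resolve "module load address + offset" (for example
// MemAllocApp+2cbb) to a symbol name, provided the matching PDB can be found.
void fn_resolveoffset(DWORD64 moduleBase, DWORD64 offset)
{
    char buffer[sizeof(SYMBOL_INFO) + MAX_SYM_NAME] = {0};
    SYMBOL_INFO* sym = (SYMBOL_INFO*)buffer;
    DWORD64 displacement = 0;

    sym->SizeOfStruct = sizeof(SYMBOL_INFO);
    sym->MaxNameLen = MAX_SYM_NAME;

    // TRUE = load symbols for the modules already present in this process
    SymInitialize(GetCurrentProcess(), NULL, TRUE);
    if (SymFromAddr(GetCurrentProcess(), moduleBase + offset, &displacement, sym))
    {
        printf("%s+0x%I64x\n", sym->Name, displacement);
    }
    SymCleanup(GetCurrentProcess());
}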
So without any debugging commands, I was able to drill down to the culprit making the maximum number of allocations. This can be quite a useful way of tracking down non-BPool (commonly known as MemToLeave on 32-bit SQL instances) allocations when the Multi Page Allocations don’t show a high count but you are experiencing non-BPool memory pressure.
The fn_allocatememory function code is shown below:
#include <windows.h>
#include <stdio.h>

// Allocates 5MB on the default process heap and never frees it,
// which is what shows up as an outstanding allocation in the report.
void fn_allocatememory(int cntr)
{
    printf("Sleeping for 10 seconds\n");
    Sleep(10000);
    BYTE* pByte = (BYTE*) HeapAlloc(GetProcessHeap(), 0, 5242880);
    if (pByte != NULL)
        (*pByte) = 10;   // touch the allocation so the page is committed
    printf("Iteration %d: Allocating 5MB using HeapAlloc\n", cntr + 1);
}
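For completeness, here is a hypothetical driver for the function above. The original MemAllocApp source for main() isn’t shown in this post, but 41 iterations of 5MB each lines up with the 41 outstanding allocations (205.00 MBytes) that the report flagged.

// Hypothetical driver (not shown in the original MemAllocApp source):
// 41 iterations x 5MB matches the 41 allocations / 205.00 MBytes reported.
int main(void)
{
    int cntr;
    for (cntr = 0; cntr < 41; cntr++)
    {
        fn_allocatememory(cntr);
    }
    printf("Done allocating. Waiting so the dump can be captured...\n");
    Sleep(INFINITE);   // keep the process alive with the allocations outstanding
    return 0;
}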
I used the same HeapAlloc function that Sudarshan had used in his latest blog post to explain behavior changes in Windows Server 2008 for tracking heap corruptions.