System Health Session and beyond

I had blogged extensively about the System Health Session available in SQL Server 2008 and above. I had also demonstrated how the System Health Session can be used in conjunction with SQL Server Management Studio reports in a previous series on my blog.

System Health Session Dashboard Reports for SQL Server 2008 and SQL Server 2008 R2 – A set of reports for SQL Server 2008/SQL Server 2008 R2 which provide visualizations for the events tracked by the System Health Session. The reports can be opened using the Custom Reports option in SQL Server Management Studio.

System Health Session Dashboard – Shows an overview of the information that can be fetched from the System Health Session.

System Health Session Dashboard – sp_server_diagnostics and more – A look into the different issues tracked by the sp_server_diagnostics procedure and how to build visualizations using that information.

System Health Session Dashboard – sp_server_diagnostics – The basics of what is covered by the sp_server_diagnostics procedure and how to build visualizations on top of the data captured.

System Health Session Dashboard – Error Summary Report – Shows how to write queries to fetch details about the various errors tracked by the System Health Session, including the events tracked by the enhanced System Health Session in SQL Server 2012.

System Health Session Dashboard – Health Summary Report – Gives you information about SQL Server instance health such as CPU usage, memory usage, etc. A screenshot of the report is shown below:

I have added UTC time formatting so that the time is displayed consistently, regardless of the time zone of the server on which you are viewing the report.
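
For reference, this is the kind of conversion involved; a minimal T-SQL sketch (illustrative only, since the reports themselves do this through report expressions) would be:

-- Illustrative only: shift a UTC timestamp to the local time of the server.
-- Note that this uses the current UTC offset, so it does not account for DST
-- changes between the event time and now.
DECLARE @event_time_utc datetime = '2012-09-06 10:00:00';
SELECT DATEADD(MINUTE, DATEDIFF(MINUTE, GETUTCDATE(), GETDATE()), @event_time_utc) AS event_time_local;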

I will keep updating this post as I add new reports. As usual, feedback is always welcome.

The report definitions (.rdl) can be downloaded from here.

Awesomesauce: Finding out missing sequences

Another of those #sqlhelp inspired posts. This one was around ways to track down missing numbers in a sequence table. The task can be done in multiple ways and, as I like to say, there are multiple ways to skin a cat! However, this blog post is about using the new T-SQL enhancements in SQL Server 2012 to figure out missing sequential numbers. I will demonstrate this using an example.

I have a table tblsequences which has two integer columns, with the primary key being an int datatype. I did some random inserts into the table.
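
For context, here is a minimal sketch of such a setup; column a matches the script that follows, while the second column and the sample values are assumptions for illustration.

-- Sample table: column a (the int primary key) is the one analyzed below;
-- the second integer column and the inserted values are assumed for illustration
CREATE TABLE tblsequences (a int PRIMARY KEY, b int);

INSERT INTO tblsequences (a, b)
VALUES (1, 10), (2, 20), (5, 50), (6, 60), (9, 90);

The script below can be used to find missing sequential numbers in an identity column or an integer column which is supposed to store sequential values.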


-- Replace the starting value and increment with those for your sequence
-- Replace the table name with the table that you are interested in
declare @startvalue int = 1, @increment int = 1;

with cte as
(
    -- Difference between each value and the previous value, in sequence order
    select a, (a - lag(a, 1) over (order by a)) as MissingSequences
    from tblsequences
)
-- Gaps between existing values: how many values are missing below each value
select a, (MissingSequences / @increment) - 1 as MissingSequences
from cte
where MissingSequences > @increment
union all
-- Values missing between the starting value and the smallest value in the table
select MIN(a),
       CASE (MIN(a) - @startvalue) / @increment
            when 0 then null
            else (MIN(a) - @startvalue) / @increment
       end as MissingSequences
from tblsequences
order by a;

The output is shown in the screenshot below. You will notice that the first column reports the primary key value and the MissingSequences column reports the number of missing sequential values less than the value in the first column. You will need to adjust the two scripts shown in this blog post so that they point at the tables you want analyzed.

[Screenshot: output of the missing sequence query]

Next, let's talk about sequences, which are a new feature in SQL Server 2012. I created a new sequence using a start value of 10 and an increment value of 3.
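
A minimal sketch of that setup is shown below. The object names (dbo.TestSeq, tblTestSeq, tblTestSeq_2) match the script later in this post; the exact column definitions are assumptions for illustration.

-- Sequence starting at 10 and incrementing by 3
CREATE SEQUENCE dbo.TestSeq
    AS int
    START WITH 10
    INCREMENT BY 3;

-- Two tables drawing their OrderID values from the same sequence
CREATE TABLE tblTestSeq (OrderID int NOT NULL DEFAULT (NEXT VALUE FOR dbo.TestSeq));
CREATE TABLE tblTestSeq_2 (OrderID int NOT NULL DEFAULT (NEXT VALUE FOR dbo.TestSeq));

INSERT INTO tblTestSeq DEFAULT VALUES;
INSERT INTO tblTestSeq_2 DEFAULT VALUES;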

I used this sequence to generate OrderID values in two different tables. Now we are posed with an interesting problem: I have two different tables across which I want to find missing sequence numbers. This can be done with the T-SQL code below.


-- Replace schema name, table name(s) and sequence name as appropriate
declare @startvalue int = 1, @interval int = 1, @seqname sysname = 'TestSeq', @schemaname sysname = 'dbo';

-- Fetch the start value and increment of the sequence from the metadata
select @startvalue = TRY_CAST(TRY_CAST(start_value as varchar(255)) as int),
       @interval = TRY_CAST(TRY_CAST(increment as varchar(255)) as int)
from sys.sequences
where name = @seqname
and [schema_id] = (select [schema_id] from sys.schemas where name = @schemaname);

if (@startvalue IS NOT NULL and @interval IS NOT NULL)
begin
    ;with cte as
    (
        -- All values drawn from the sequence, across both tables which use it
        select OrderID, (OrderID - lag(OrderID, 1) over (order by OrderID)) as MissingSequences
        from
        (
            select OrderID from tblTestSeq
            union all
            select OrderID from tblTestSeq_2
        ) A
    )
    -- Gaps between existing values
    select OrderID, (MissingSequences / @interval) - 1 as MissingSequences
    from cte
    where MissingSequences > @interval
    union all
    -- Values missing between the sequence start value and the smallest value used
    -- (the minimum is taken across both tables, not just one of them)
    select MIN(OrderID),
           CASE (MIN(OrderID) - @startvalue) / @interval
                when 0 then null
                else (MIN(OrderID) - @startvalue) / @interval
           end as MissingSequences
    from
    (
        select OrderID from tblTestSeq
        union all
        select OrderID from tblTestSeq_2
    ) B
    order by OrderID;
end
else
begin
    PRINT 'CAST FAILED';
end


The logic is the same as the one used for finding missing sequences in identity/integer columns in the first half of this post. However, since a sequence can be used by one or more tables, I have used a UNION ALL query to bring together all the columns which use the same sequence object. The second result set of the output in the screenshot below shows the tables involved along with the sequential OrderIDs. As you can see, there are 4 sequence numbers missing: 10, 13, 16 and 28. The first result set reports exactly that by notifying that there are 3 sequence ids missing less than 19 and 1 sequence number missing less than 31.

The output of the above T-SQL script is shown in the screenshot below.

[Screenshot: output of the sequence gap script]

This is another example of how the T-SQL enhancements in SQL Server 2012 can make your life easier.

Happy T-SQL coding and have a great weekend!

Moving those large files for secondary databases

I remember seeing a question on the #sqlhelp hashtag about how to move secondary database files to a new physical location. While this might seem like a mundane task, it can throw up a few surprises. If you don’t want to tear down your log shipping configuration and re-establish it with the files in the new physical location, then this post will definitely interest you.

If you are running your log shipping in standby mode, you will first need to switch to norecovery mode. This is because you cannot execute an ALTER DATABASE command on a standby database, as it is not writable. You will be presented with the following error if you attempt to do so:

Msg 5004, Level 16, State 4, Line 1
To use ALTER DATABASE, the database must be in a writable state in which a checkpoint can be executed.

So to avoid the above issue, you need to switch to norecovery mode. If you are already running in this mode, then that is one less task on your list. Keep in mind that the switch of the operating mode takes effect only after the next log restore operation.
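
A minimal sketch of the mode switch, run on the secondary server, is shown below; the database name LSDemo is hypothetical.

-- Sketch: switch the restore mode of the secondary database to norecovery
-- (@restore_mode: 0 = norecovery, 1 = standby); LSDemo is a hypothetical name
EXEC master.dbo.sp_change_log_shipping_secondary_database
    @secondary_database = N'LSDemo',
    @restore_mode = 0;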

Now, here are the exact steps that you need to follow:

1. Disable the log shipping restore SQL Agent job.
2. Use the ALTER DATABASE command to change the location of the secondary database files (see the sketch after this list).
3. Stop the SQL Server instance hosting the secondary database files.
4. Move the secondary database files to the new location as updated in Step #2 (OS file copy).
5. Start the SQL Server instance.
6. Enable the log shipping restore SQL Agent job.
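
A sketch of the ALTER DATABASE command from step 2 is shown below; the database name, logical file name and target path are all hypothetical.

-- Sketch for step 2: point the logical data file at the new physical location
-- (LSDemo, LSDemo_Data and the path are hypothetical names)
ALTER DATABASE LSDemo
MODIFY FILE (NAME = LSDemo_Data, FILENAME = N'E:\NewLocation\LSDemo.mdf');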

Verify that the log shipping jobs are running without any errors. The reason you need to stop the SQL Server instance is that an ALTER DATABASE…OFFLINE doesn’t work when the database is in the RESTORING state. You will be blocked by the following error if you attempt to set the database offline:

Msg 5052, Level 16, State 1, Line 1
ALTER DATABASE is not permitted while a database is in the Restoring state.
Msg 5069, Level 16, State 1, Line 1
ALTER DATABASE statement failed.

Hope the above helps in moving the database files of secondary databases which are configured for log shipping. The steps mentioned above work for both data and log files.

Why did the restore fail on the log shipped secondary database

I was doing some random testing for a different log shipping related scenario when I ran into a restore failure reported by the log shipping restore job. A snippet of the error message is shown below:

*** Error: Could not apply log backup file ‘<backup file name>’ to secondary database ‘<database name>’.(Microsoft.SqlServer.Management.LogShipping) ***
*** Error: The operation cannot be performed on a database with database snapshots or active DBCC replicas.

The above error is reported as Error #5094 (the second line in the snippet above). This error occurred because I had created a database snapshot on the log shipped secondary database, which was operating in standby (warm standby) mode. In such an event, you can only perform a log shipping restore on the secondary server after the database snapshot is dropped.
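
A minimal sketch for locating and dropping such a snapshot is shown below; the database and snapshot names (LSDemo, LSDemo_Snapshot) are hypothetical.

-- List database snapshots created on the secondary database (hypothetical name)
SELECT name AS snapshot_name
FROM sys.databases
WHERE source_database_id = DB_ID(N'LSDemo');

-- Drop the snapshot so that the log shipping restore can proceed
DROP DATABASE LSDemo_Snapshot;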

This applies to active DBCC replicas as well, which means a DBCC CHECKDB executing against the standby database (it uses an internal database snapshot) could also cause a log shipping restore to fail. This issue would never occur if the secondary database was in norecovery mode, since a database snapshot cannot be created in that log shipping operating mode.

The Ring_Buffer_Exception reports the error as:

<Error>5094</Error>
<Severity>16</Severity>
<State>2</State>
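
If you want to look for occurrences of this error yourself, a minimal sketch along the following lines can pull error_reported events out of the System Health Session ring buffer; note that the data element is named error on SQL Server 2008 and error_number on SQL Server 2012, so adjust the XPath accordingly.

-- Sketch: find occurrences of error 5094 captured by the system_health session
-- (the data element is "error" on SQL Server 2008 and "error_number" on 2012)
SELECT x.xed.value('@timestamp', 'datetime2') AS event_time_utc,
       x.xed.value('(data[@name="error_number"]/value)[1]', 'int') AS error_number
FROM
(
    SELECT CAST(st.target_data AS xml) AS target_data
    FROM sys.dm_xe_session_targets st
    JOIN sys.dm_xe_sessions s
        ON s.address = st.event_session_address
    WHERE s.name = 'system_health'
    AND st.target_name = 'ring_buffer'
) AS t
CROSS APPLY t.target_data.nodes('RingBufferTarget/event[@name="error_reported"]') AS x(xed)
WHERE x.xed.value('(data[@name="error_number"]/value)[1]', 'int') = 5094;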

The idea behind putting this blog post together was to ensure that you are aware of this behavior before you create database snapshots on your standby log shipped secondary databases for reporting purposes.

Get your log shipping restore to run faster

Recently I was working on a customer environment where the requirement was to use a log shipped secondary database for reporting purposes. The catch here was that the restore of all the transaction log backups needed to complete within a fixed time.

Right at the outset, I would like to point out that the restore of transaction logs is dependent on a number of parameters: disk subsystem performance, the number of transactions present in the log backup, and resource availability on the server where the restore is occurring. Assuming all these factors remain constant, there is a basic difference between a secondary log shipped database in norecovery mode and one in standby mode. In standby mode, a partial recovery is done with the help of a transaction undo file to get the database into a read-only state. This partial recovery, and the undo that occurs while performing the next log restore (with the use of the transaction undo file), requires time and resources, and can slow down each transaction log restore on the secondary database, especially if there are resource contention issues on the server. In this blog post, I shall outline how you can work around this in case you have to meet strict restore time SLAs.


System Health Session and Deadlocks

I had blogged about retrieving deadlock related information using the default Extended Events session (the System Health Session) which runs on all SQL Server 2008 instances and above. However, once you have retrieved the XML deadlock graph, it can be quite cumbersome to read if the deadlock happens to be complex or involves multiple nodes. I frequently need to fetch information about past deadlocks from the System Health Session data while working on customer environments. Due to the repetitive nature of this data collection, I decided to automate the task.

I again decided to use a combination of PowerShell and T-SQL to extract this information. The PowerShell script (TransformtoXDL.ps1), which requires PowerShell 2.0 or above, uses a T-SQL script (TransformtoXDL.sql) to extract data from the System Health Session and writes each individual deadlock graph as a separate .XDL file into a folder of your choice, named with the timestamp of the occurrence of the deadlock. Note that the time reported will be in the GMT time zone.

The PowerShell script accepts two parameters: vServername, the SQL Server instance that you want to extract the data from, and vPath, the folder into which the XDL files should be saved.

.\TransformToXDL.ps1 -vServername "<server name>" -vPath "C:\Tempdb\"

Yes… I have a folder called Tempdb on my C: drive!

A sample output is shown in the screenshot below:

[Screenshot: sample output of TransformToXDL.ps1]

The Transact-SQL script called TransformtoXDL.sql does the following:

1. Extracts the System Health Session data into a temporary table (a sketch of this kind of extraction is shown after this list).
2. Based on the version of your SQL Server instance, it performs the parsing to extract the deadlock graph. The script accounts for the issues mentioned in KB978629. I would like to thank Michael Zilberstein [B] for the proposed corrective action on an issue that Jonathan Kehayias [B|T] had blogged about.
3. Finally, the script performs XML modifications to get the deadlock data into the format recognized by SQL Server 2012 Management Studio when viewing XDL files.
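
For reference, a minimal sketch of the kind of extraction performed in step 1 is shown below. This is not the full TransformtoXDL.sql; it omits the KB978629 fix-ups and the version-specific parsing.

-- Sketch: pull xml_deadlock_report events from the system_health ring buffer.
-- On SQL Server 2008 the graph is stored differently and needs the fix-ups
-- mentioned in KB978629; this XPath matches the SQL Server 2012 layout.
SELECT x.xed.value('@timestamp', 'datetime2') AS event_time_utc,
       x.xed.query('(data[@name="xml_report"]/value/deadlock)[1]') AS deadlock_graph
FROM
(
    SELECT CAST(st.target_data AS xml) AS target_data
    FROM sys.dm_xe_session_targets st
    JOIN sys.dm_xe_sessions s
        ON s.address = st.event_session_address
    WHERE s.name = 'system_health'
    AND st.target_name = 'ring_buffer'
) AS t
CROSS APPLY t.target_data.nodes('RingBufferTarget/event[@name="xml_deadlock_report"]') AS x(xed);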

The PowerShell and T-SQL scripts can be downloaded here.


#    Script Name: TransformToXDL
#    Author: Amit Banerjee
#    Date: September 6, 2012
#    Description:
#    The script reads the deadlock graphs from the System Health Session
#    Ring Buffer and parses them to create an individual deadlock graph
#    in a folder of your choice.
#    Usage: .\TransformToXDL.ps1 -vServername "INST1" -vPath "C:\Tempdb\"
# This Sample Code is provided for the purpose of illustration only and is not intended to be used in a production environment. THIS SAMPLE CODE AND ANY RELATED INFORMATION ARE PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A PARTICULAR PURPOSE. We grant You a nonexclusive, royalty-free right to use and modify the Sample Code and to reproduce and distribute the object code form of the Sample Code, provided that You agree: (i) to not use Our name, logo, or trademarks to market Your software product in which the Sample Code is embedded; (ii) to include a valid copyright notice on Your software product in which the Sample Code is embedded; and (iii) to indemnify, hold harmless, and defend Us and Our suppliers from and against any claims or lawsuits, including attorneys’ fees, that arise or result from the use or distribution of the Sample Code.

Param ([string] $vServername,
       [string] $vPath)

cls

# Load the SQL Server PowerShell module for the Invoke-Sqlcmd cmdlet
$ErrorActionPreference = "SilentlyContinue"
Import-Module sqlps
$ErrorActionPreference = "Continue"

Write-Host "`nConnecting to SQL Server instance " $vServerName " to extract deadlock information"
# Extract the deadlock graphs from the system health session and parse them
Invoke-Sqlcmd -InputFile "C:\Tempdb\TransformToXDL.sql" -ServerInstance $vServerName

# Function to get the information from the table stored in tempdb
function Get-SqlData
{
    param(
        [string]$serverName=$(throw 'serverName is required.'),
        [string]$databaseName,
        [string]$query
    )

    $connString = "Server=$serverName;Database=$databaseName;Integrated Security=SSPI;"
    $da = New-Object "System.Data.SqlClient.SqlDataAdapter" ($query,$connString)
    $dt = New-Object "System.Data.DataTable"
    [void]$da.fill($dt)
    $dt
}

# Get the data stored in tempdb using the function defined above
$rows = Get-SqlData $vServername "tempdb" "select row_id,event_time,deadlockgraph from tempdb.dbo.deadlock_graphs"

$vCount = 0
# Write each row retrieved to an individual XDL file named with the timestamp of the deadlock
foreach ($row in $rows)
{
    if($row -ne $null)
    {
        $vCount++
        $vFileName = $vPath + $vServername.Replace("\","_") + "_" + $row.event_time.ToString().Replace(":","_").Replace("/","_") + ".xdl"
        Write-Host "`nCreating file: " $vFileName
        $row.deadlockgraph | Out-File $vFileName
    }
}

Write-Host "`nDeadlocks found: " $vCount.ToString()
Write-Host "`nPerforming cleanup"
Invoke-Sqlcmd -Query "IF EXISTS (SELECT TOP 1 name FROM tempdb.sys.objects where name = 'deadlock_graphs')
BEGIN
DROP TABLE tempdb.dbo.deadlock_graphs
END" -ServerInstance $vServerName

The above has been tested on SQL Server 2008, SQL Server 2008 R2 and SQL Server 2012. The resulting XDL files can be opened in SQL Server 2012 Management Studio. I am always looking for feedback, so please feel free to tweet, message me on Facebook, or email me regarding any issues or enhancements that you need.

Keep in mind that the T-SQL query used is resource intensive, so it is preferable to run this extraction exercise during non-business hours, especially if your SQL Server instance is experiencing a large number of deadlocks.

Another saga for corrupt images

I had blogged in the past about how to track corrupt images while rendering a SSRS report containing images. In this post, I shall talk about generating reports containing images which have a corrupted JPEG stream.

If you have an image which has a corrupt JPEG stream, then the report will render without any issues and an export of the report to a PDF document will report no errors. However, when you open the PDF document, you will be greeted with the error seen in the screenshot below reporting “An internal error occurred”:

[Screenshot: PDF viewer error "An internal error occurred"]

Now this can be a bit disconcerting when you are automating your report generation and are not aware of any exceptions being reported. Interestingly, when this report is rendered in the MHTML format, the file can be opened without issues, though you still see the corrupt image stream. The crux of the matter lies in how PDF documents are rendered when a corrupt JPEG stream is encountered; this issue manifests itself when creating a PDF document with such a corrupt image stream even outside of SQL Server Reporting Services.

Now that we know that there are image coding standards at play here and nothing much can be done, what are the options?

For the curious mind: when viewed in a picture viewing application, the image shows the corrupt stream as seen below:

[Screenshot: the corrupt JPEG as rendered in a picture viewer]

The first option is to use an image processing executable to check all the images that your report will process, before you render the report to PDF.

ImageMagick DLLs and binaries are one such option. You could either choose to use the binaries that are available on the site or create your own picture verification package using their DLLs. Their code is available on CodePlex as well. The binaries available on their site include an executable called "identify", which can be used to detect corrupt image streams.
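
The basic invocation just takes the path of the image to check, for example:

identify.exe C:\Tempdb\1.jpg

The output of the tool when run against a corrupt image will be along these lines: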

identify.exe: Corrupt JPEG data: bad Huffman code `C:\Tempdb\1.jpg' @ warning/jpeg.c/JPEGWarningHandler/346.
identify.exe: Corrupt JPEG data: 20 extraneous bytes before marker 0xd9 `C:\Tempdb\1.jpg' @ warning/jpeg.c/JPEGWarningHandler/346.

The other way to work around this issue would be to use custom code within your report to re-save the image. This doesn’t remove the corrupt image stream, but the corruption artifact that you see in the screenshot above now becomes part of your image. This will help you generate your PDF documents correctly and open them for viewing later. However, if you want to correct the corrupt JPEG stream itself, then you will need to store a corrected image in your database.

Steps for using custom code to prevent PDF opening failure

1. Paste the VB code below into the Code section of the report.

' The database returns the image as a byte array, so this function accepts
' a byte array as its parameter and returns the re-saved image as a byte array.
Public Shared Function ConvertImage(ByVal imgData As Byte()) As Byte()

    ' Stream to hold the source image (jpeg).
    Dim msSource As New System.IO.MemoryStream()

    ' Stream to hold the converted image (png).
    Dim msDest As New System.IO.MemoryStream()

    ' Read the image from the byte array into the source stream.
    msSource.Write(imgData, 0, imgData.Length)
    msSource.Flush()

    ' Image object that will hold the image extracted from the source stream.
    Dim imgSource As System.Drawing.Image = System.Drawing.Image.FromStream(msSource)

    ' Save the new, converted image into the destination stream.
    imgSource.Save(msDest, System.Drawing.Imaging.ImageFormat.Png)

    ' Return the destination stream as a byte array, to be used inside the
    ' Reporting Services image control. ToArray() is used rather than
    ' GetBuffer() so that unused buffer capacity is not included.
    Return msDest.ToArray()

End Function

2. Now click on the References tab. Under Assembly Name, click on the ellipsis button "…".

3. Locate and select System.Drawing.dll, click Add and then click OK.

4. Now go to the image control properties in the report and look at the Value expression, which would look something similar to this: =First(Fields!Photo.Value, "DataSetName")

5. Replace the above value with the expression: =Code.ConvertImage(First(Fields!Photo.Value, "DataSetName"))

6. Export the report to PDF and you will notice that the previous "internal error" is no longer reported.

Since you are adding custom code to your report to re-save your binary image stream, you will incur additional processing overhead while generating the reports. If your PDF contains a large number of images, then you will have to factor in the additional resources and time consumed by this sanitization process!

IMHO, using the first option of checking the correctness of the JPEG stream with a separate executable is much more feasible if you are processing 1000+ images in a single report.

A special shout-out to my colleagues Selva [Blog] and Arvind [Blog] for helping me out on this issue!

Disclaimer: The Sample Code is provided for the purpose of illustration only and is not intended to be used in a production environment. THIS SAMPLE CODE AND ANY RELATED INFORMATION ARE PROVIDED “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A PARTICULAR PURPOSE. ImageMagick is a third-party utility which has been referenced in this post. Please refer to the documentation on the ImageMagick website regarding licensing, warranty and usage before implementing it in your environment.