Azure Storage and SQL Server – Part 6

In the last post of this series, I described how to determine if any database files were hosted on the temporary drive of the Azure virtual machine. In this post, I shall check if any of the drives hosting the SQL Server database files have a block size other than 64KB. The best practice recommendation for SQL Server workloads running on Azure virtual machines is to use a 64-KB allocation unit size for the drives hosting the data, log, and tempdb files.
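Incidentally, if you need to format a new data disk with the recommended allocation unit size, a minimal sketch is shown below. This assumes Windows Server 2012 or later (where the Format-Volume cmdlet is available) and uses F: purely as an example drive letter; formatting wipes the disk, so run it only against an empty data disk.

# Example only: format an empty data disk with a 64-KB (65536-byte) allocation unit size
# F: is a placeholder drive letter - do not run this against a disk that already has data on it
Format-Volume -DriveLetter F -FileSystem NTFS -AllocationUnitSize 65536 -NewFileSystemLabel "SQLData" -Confirm:$false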

Determining the block size

In the first part, I will talk about how to determine whether the block size of the drives hosting the database files is 64KB. The first thing that needs to be done is to retrieve the volume details using the Win32_Volume class and use the BlockSize value to determine the block size of each volume. To avoid false positives, you will also need the list of disks on which the SQL Server database files are hosted. This can be retrieved from the sys.master_files catalog view. Using both sets of information, you can determine if any volume hosting a SQL Server database file has a block size other than 64KB.

The PowerShell script that I used is available below. It also makes use of the PowerShell snippet for identifying the temporary drive, which I blogged about last week.


# Script for discovering temporary drive is available in Part 4 of this series

# Hence, not re-writing the script here

$TempDrive = "D:\"

# Fetching the disks on which the SQL database files reside

$sqldisks = Invoke-Sqlcmd -ServerInstance "." -Database "master" -Query "select distinct substring(physical_name,0,4) as disk from sys.master_files" | Select-Object disk

# Getting the block size for each volume

$volumes = Get-WmiObject -Class Win32_Volume -Filter "DriveType = 3" | Select-Object BlockSize, Name

# Foreach loop to determine if the block size for SQL disk drives varies from 64K

foreach ($volume in $volumes)
{
    # Flag only volumes that host SQL Server database files
    if ($volume.BlockSize -ne 65536 -and $volume.Name.ToString() -in $sqldisks.disk)
    {
        if ($volume.Name.ToString() -ne $TempDrive)
        {
            Write-Host "WARNING:" $volume.Name "has a block size =" $volume.BlockSize "bytes" -ForegroundColor Red
        }
        else
        {
            Write-Host "INFO:" $volume.Name "has a block size =" $volume.BlockSize "bytes. This is the temporary drive and should only host the tempdb files" -ForegroundColor Green
        }
    }
}

You might also want to read the following articles about disk partition alignment for SQL Server:
Disk Partition Alignment Best Practices for SQL Server
http://technet.microsoft.com/en-us/library/dd758814(v=SQL.100).aspx

Disk Partition Alignment: It Still Matters–DPA for Windows Server 2012, SQL Server 2012, and SQL Server 2014
http://blogs.msdn.com/b/jimmymay/archive/2014/03/14/disk-partition-alignment-for-windows-server-2012-sql-server-2012-and-sql-server-2014.aspx
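If you want a quick on-box check of the partition starting offsets discussed in those articles, a minimal sketch using the Win32_DiskPartition class is shown below; how to interpret the offsets is covered in the links above.

# Report the starting offset of each partition in KB
# Offsets that are multiples of 64 KB (or 1024 KB) are generally considered aligned
Get-WmiObject -Class Win32_DiskPartition | Select-Object Name, @{Name="StartingOffsetKB"; Expression={$_.StartingOffset / 1KB}}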

Data and log files on the same drive

This is quite a common recommendation and has been taught to DBAs when they were in DBA elementary school! But sometimes, these recommendations are not followed for various reasons… some known and some unknown. So I decided that while I was at it, I would write up a quick PowerShell script to determine if the data and log files are present on the same drive, using information retrieved from the sys.master_files catalog view.

The PowerShell script is given below along with a screenshot of the output.


$sqldisks = Invoke-Sqlcmd -ServerInstance "." -Database "master" -Query "select distinct substring(physical_name,0,4) as disk, type_desc from sys.master_files"

$datafiles = $sqldisks | Where-Object {$_.type_desc -eq "ROWS"}

$logfiles = $sqldisks | Where-Object {$_.type_desc -eq "LOG"}

# Find the drives that host both data and log files
$shareddrives = $datafiles | Where-Object {$logfiles.disk -contains $_.disk}

if ($shareddrives)
{
    Write-Host "ISSUE: Data and log files found on the same drive" -ForegroundColor Red

    foreach ($drive in $shareddrives)
    {
        Write-Host "ISSUE: Drive" $drive.disk "hosts data and log files" -ForegroundColor Red
    }
}

[Screenshot: script output]

Previous post in the series

Azure Storage and SQL Server – Part 1
https://troubleshootingsql.com/2014/11/10/azure-storage-for-sql-server/
Azure Storage and SQL Server – Part 2
https://troubleshootingsql.com/2014/11/11/azure-storage-and-sql-server-part-2
Azure Storage and SQL Server – Part 3
https://troubleshootingsql.com/2014/11/12/azure-storage-and-sql-server-part-3
Azure Storage and SQL Server – Part 4
https://troubleshootingsql.com/2014/11/13/azure-storage-and-sql-server-part-4/
Azure Storage and SQL Server – Part 5
https://troubleshootingsql.com/2014/11/18/azure-storage-and-sql-server-part-5

References

Azure Storage
http://azure.microsoft.com/en-us/services/storage/
Azure Subscription and Service Limits, Quotas, and Constraints
http://azure.microsoft.com/en-us/documentation/articles/azure-subscription-service-limits/#storagelimits

* This blog post has been written based on the service details available on 16th November, 2014.

Azure Storage and SQL Server – Part 5

In this post, we will start checking whether the SQL Server files follow the best practice recommendations for Azure storage. In Part 4 of this series, I wrote about how to identify the temporary drive on an Azure virtual machine. I am going to use that snippet of code for the file layout check performed here.

The script performs the following operations:

1. Finds out the temporary drive letter.

2. Determines whether the Azure virtual machine is a D-series machine.

3. Fetches the database file locations from the SQL Server instance using the Invoke-Sqlcmd cmdlet.

4. Uses the above information to determine if any files are present on the temporary drive.

If this is a D-series virtual machine, the script will not report a warning when it finds tempdb files on the temporary drive, as this is supported. For any other series, the script will report an issue, since hosting database files on the temporary drive is only supported on the D-series.

The PowerShell script is available below:


# Find out the IDE drives on the machine which has a SCSI target = 1

$Name = Get-WmiObject -Class Win32_DiskDrive -Filter "InterfaceType = `"IDE`" and SCSITargetId = 1" | Select-Object Name

# Find the partition id corresponding to the IDE temporary drive

$Antecedent = "*" + $Name.Name.Replace("\","").Replace(".","") + "*"

$Dependent = Get-WmiObject -Class Win32_DiskDriveToDiskPartition | Where-Object {$_.Antecedent -like $Antecedent} | Select-Object Dependent

# Find the logical disk which corresponds to the temporary drive partition

$Antecedent = "*" + $Dependent.Dependent.Split("`"")[1] + "*"

$TempDrive = (Get-WmiObject -Class Win32_LogicalDiskToPartition | Where-Object {$_.Antecedent -like $Antecedent} | Select-Object Dependent).Dependent.Split("`"")[1]

Write-Host "INFO: Temporary drive on the machine is:" $TempDrive -Foregroundcolor Red

# Variable to track if this is a D-Series VM

[boolean] $DSeries = $false

# Using Get-AzureVM to get the instance size property

$InstanceSize = Get-AzureVM -ServiceName "<azure service name>" -Name "<azure vm name>" | Select-Object InstanceSize

# Find out if this is a D-Series VM

if ($InstanceSize.InstanceSize -like "*_D*")

{

$DSeries = $true

}

# Get the location of the database files from the system catalog

$dbfiles = Invoke-Sqlcmd -ServerInstance "." -Database "master" -Query "select physical_name, name, db_name(database_id) as dbname, type_desc from sys.master_files"

# Run a foreach loop to determine if any user or system database files are on the temporary drive other than the tempdb

foreach ($file in $dbfiles)

{

# Tempdb can be hosted on the temporary drive on D-Series VMs

if ($DSeries -eq $true)

{

if ($file.physical_name.substring(0,2) -eq $TempDrive -and $file.dbname -ne "tempdb")

{

Write-Host "ISSUE:" $file.dbname "(Filename:" $file.name " Location:" $file.physical_name") found on temporary drive" $TempDrive -ForegroundColor Red

}

}

# Report any databases on temporary drive for non-D-Series VMs

else

{

if ($file.physical_name.substring(0,2) -eq $TempDrive)

{

Write-Host "ISSUE:" $file.dbname "(Filename:" $file.name " Location:" $file.physical_name") found on temporary drive" $TempDrive -ForegroundColor Red

}

}

}

Please keep in mind that the above script has not been tested for mount points.
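If your database files do live on mount points, a possible starting point (untested, purely illustrative) would be to enumerate volumes rather than logical disks, since mounted volumes do not have a drive letter:

# Illustrative only: list local NTFS volumes, including mount points without a drive letter
Get-WmiObject -Class Win32_Volume -Filter "DriveType = 3" | Select-Object Name, Label, BlockSize | Format-Table -AutoSize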

Previous post in the series

Azure Storage and SQL Server – Part 1
https://troubleshootingsql.com/2014/11/10/azure-storage-for-sql-server/
Azure Storage and SQL Server – Part 2
https://troubleshootingsql.com/2014/11/11/azure-storage-and-sql-server-part-2
Azure Storage and SQL Server – Part 3
https://troubleshootingsql.com/2014/11/12/azure-storage-and-sql-server-part-3
Azure Storage and SQL Server – Part 4
https://troubleshootingsql.com/2014/11/13/azure-storage-and-sql-server-part-4/

References

Azure Storage
http://azure.microsoft.com/en-us/services/storage/

Azure Subscription and Service Limits, Quotas, and Constraints
http://azure.microsoft.com/en-us/documentation/articles/azure-subscription-service-limits/#storagelimits

* This blog post has been written based on the service details available on 10th November, 2014.

GOTCHA: Executing PowerShell scripts using scheduled tasks

This is another gotcha for setting up scheduled tasks which execute PowerShell scripts. I have a SQL Server instance installed on an Azure virtual machine. I am using a D-series machine, which allows me to store my tempdb files on the D: drive, which is an SSD. However, the D: drive on Azure virtual machines is not a persistent drive, so when the virtual machine restarts, the D: drive is re-created and my folder structure is lost. (If you have changed the drive letters on your Azure VM, you can use the PowerShell script in my earlier blog post to identify the temporary drive.) I already have a scheduled task on my Azure virtual machine which re-creates the folder structure on the D: drive; the blog post in the reference section has more details on how to achieve this, and a rough sketch is shown below.
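For context, a minimal sketch of what such a folder-creation script might look like follows. The folder path D:\SQLTemp and the service name MSSQLSERVER are illustrative assumptions for a default instance; the blog post in the reference section describes the actual approach in detail.

# Illustrative sketch: re-create the tempdb folder on the temporary drive and then start SQL Server
# D:\SQLTemp and MSSQLSERVER are placeholder values - adjust them for your environment
$tempdbPath = "D:\SQLTemp"

if (-not (Test-Path $tempdbPath))
{
    New-Item -Path $tempdbPath -ItemType Directory | Out-Null
}

# SQL Server fails to start if the tempdb path is missing, so start the service after creating the folder
Start-Service -Name "MSSQLSERVER"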

However, when the scheduled task executes, the following error is reported in the Task Scheduler logs.

[Screenshot: Task Scheduler showing the failed task]

The last run result is reported as 0xFFFD0000 and the task history would show the following message:

Task Scheduler successfully completed task "\Tempdb Folder Creation", instance "{35ec7a4f-6669-437f-b12f-40b95689896c}", action "C:\Windows\System32\WindowsPowerShell\v1.0\powershell.EXE" with return code 4294770688.

First, let us find out why this error message gets reported. This issue occurs if you have set up the scheduled task to execute the script in the following manner, with single quotes around the file path:

powershell -file 'C:\Automation\TempdbFolder.ps1'

If you have not redirected the output of the script to a file, this is not an easy thing to troubleshoot. However, if you execute the above command from a command prompt window, the following error is reported:

Processing -File 'C:\Automation\TempdbFolder.ps1' failed: The given path's format is not supported. Specify a valid path for the -File parameter.

Changing the command to use double quotes would make it execute without any issues.
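For example, the same command with double quotes around the path executes without errors:

powershell -file "C:\Automation\TempdbFolder.ps1"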

Some other things to keep in mind when creating the scheduled task:

1. Use a fully qualified path to the script file rather than a relative path.

2. Ensure that the account running the script has the correct privileges.

3. Configure the task to run whether or not the user is logged on (a sketch of registering such a task is shown below).
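As an illustration, on Windows Server 2012 and later the task can also be registered from PowerShell itself. The task name, script path, and the use of the SYSTEM account below are assumptions made for this sketch; adjust them for your environment.

# Illustrative sketch: register a startup task that runs the folder-creation script
# The task name, script path, and SYSTEM principal are placeholder choices
$action    = New-ScheduledTaskAction -Execute "powershell.exe" -Argument '-ExecutionPolicy Bypass -File "C:\Automation\TempdbFolder.ps1"'
$trigger   = New-ScheduledTaskTrigger -AtStartup
$principal = New-ScheduledTaskPrincipal -UserId "SYSTEM" -LogonType ServiceAccount -RunLevel Highest

Register-ScheduledTask -TaskName "Tempdb Folder Creation" -Action $action -Trigger $trigger -Principal $principal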

Reference:
Using SSDs in Azure VMs to store SQL Server TempDB and Buffer Pool Extensions
http://blogs.technet.com/b/dataplatforminsider/archive/2014/09/25/using-ssds-in-azure-vms-to-store-sql-server-tempdb-and-buffer-pool-extensions.aspx

Azure Storage and SQL Server – Part 4

In Part 3 of this series, I talked about how to determine the IOPS supported by the data disks attached to the virtual machine and how to check whether all the data disks belong to the same storage account. In this post, I shall talk about how to determine if your compute and storage resources are located in the same region. It would not make sense to have your compute and storage resources in different data centers, especially for high-performance workloads. This would be equivalent to having geographically dispersed storage, which would not give you good throughput. It is recommended that you create your Azure storage account in the same data center as your SQL Server virtual machines to reduce transfer delays.

Colocation of service and storage account

Again, we are going to use Azure cmdlets to make our work easier. The first script finds out the location of your Azure service and the Azure storage account and checks whether they are in the same location. If not, it flags an issue. The Get-AzureService cmdlet provides the location of the Azure service, and the Get-AzureStorageAccount cmdlet gives the location of the storage account.

The PowerShell script and a screenshot of the output are shown below.


$AzureServiceName = "<Service Name>"

$AzureStorageName = "<Storage Account Name>"

$ComputeLocation = Get-AzureService -ServiceName $AzureServiceName | Select-Object Location

$StorageLocation = Get-AzureStorageAccount -StorageAccountName $AzureStorageName | Select-Object Location

if ($ComputeLocation.Location -ne $StorageLocation.Location)

{

Write-Host "ISSUE: Service and storage are NOT co-located. Storage location:" $StorageLocation.Location "Service location:" $ComputeLocation.Location -ForegroundColor Red

}

else

{

Write-Host "PASSED: Service and storage are co-located in" $StorageLocation.Location -ForegroundColor Green

}

[Screenshot: script output]

Finding out the temporary drive

It is possible that someone has re-mapped the temporary drive to a drive letter other than D:. In such scenarios, it is important to find out the temporary drive letter; it is best not to assume that the temporary drive is always the D: drive. I had blogged earlier on how to re-map the temporary drive.

Once you have logged into the Azure virtual machine, you would need to jump a few PowerShell and WMI hoops to get the temporary drive letter:

1. Find out the IDE drive which has a SCSI Target ID of 1. The C: drive would also be an IDE drive but with a SCSI Target ID = 0. This information is available from the Win32_DiskDrive class.

2. Now that you have the device name, it is time to get the disk # and partition # using the Win32_DiskDriveToDiskPartition class.

3. And the last hoop that you need to jump through is to use the disk # and partition # to get the logical drive name using the Win32_LogicalDiskToPartition class.

This information will be used in future automation posts to determine if the temporary drive is being used by SQL Server or not. The temporary drive can be used for Buffer Pool Extensions and the tempdb files only on the D-Series virtual machines.

The PowerShell script and a screenshot of the output are shown below.


# Find out the IDE drives on the machine which has a SCSI target = 1

$Name = Get-WmiObject -Class Win32_DiskDrive -Filter "InterfaceType = `"IDE`" and SCSITargetId = 1" | Select-Object Name

# Find the partition id corresponding to the IDE temporary drive

$Antecedent = "*" + $Name.Name.Replace("\","").Replace(".","") + "*"

$Dependent = Get-WmiObject -Class Win32_DiskDriveToDiskPartition | Where-Object {$_.Antecedent -like $Antecedent} | Select-Object Dependent

# Find the logical disk which corresponds to the temporary drive partition

$Antecedent = "*" + $Dependent.Dependent.Split("`"")[1] + "*"

$TempDrive = (Get-WmiObject -Class Win32_LogicalDiskToPartition | Where-Object {$_.Antecedent -like $Antecedent} | Select-Object Dependent).Dependent.Split("`"")[1]

Write-Host "INFO: Temporary drive on the machine is:" $TempDrive -Foregroundcolor Red

[Screenshot: script output]

Previous post in the series

Azure Storage and SQL Server – Part 1
https://troubleshootingsql.com/2014/11/10/azure-storage-for-sql-server/

Azure Storage and SQL Server – Part 2
https://troubleshootingsql.com/2014/11/11/azure-storage-and-sql-server-part-2
Azure Storage and SQL Server – Part 3
https://troubleshootingsql.com/2014/11/12/azure-storage-and-sql-server-part-3

References

Azure Storage
http://azure.microsoft.com/en-us/services/storage/

Azure Subscription and Service Limits, Quotas, and Constraints
http://azure.microsoft.com/en-us/documentation/articles/azure-subscription-service-limits/#storagelimits

* This blog post has been written based on the service details available on 10th November, 2014.

Azure Storage and SQL Server – Part 3

In the last post of this series, I talked about how to determine the number of page blobs present in a particular storage account and whether the limits of the Azure storage service will be exceeded based on the current configuration. In this post, I shall talk about something a bit closer to the virtual machine which is running your SQL Server workload. The script shown below determines the IOPS supported by the data disks attached to the virtual machine and checks whether all the data disks belong to the same storage account. It is recommended that your data disks belong to a single Azure storage account. If they are not part of the same storage account, then a warning is flagged in the output, as seen in the screenshot.

The first part of the PowerShell script uses the Get-AzureVM cmdlet to find out the details of the Azure virtual machine which is running your workload. The output of this cmdlet is used as input for the Get-AzureDataDisk cmdlet, which allows me to find out all the data disks that are attached to my Azure virtual machine. Then a simple calculation lets me understand how much I/O workload the virtual machine can handle.

The next part of the script performs an important task of determining if the data disks attached to the virtual machine belong to the same storage account. If not, then it flags a warning. Typically, for high performance VMs, it is recommended that a single storage account be used for hosting the data disks of the virtual machine.

The PowerShell script and a screenshot of the output are shown below.


# Temporary variables

$AccountName = ""

$fail = 0

# Get the list of data disks for the Azure virtual machine

$DataDisks = Get-AzureVM -ServiceName "<Azure Service Name>" -Name "<Azure VM Name>" | Get-AzureDataDisk

Write-Host "INFO: This virtual machine has" $DataDisks.Count "data disks which can support a total of"($DataDisks.Count*500)" IOPS" -ForegroundColor Green

# Check if there is a data disk which belongs to a different storage account

foreach ($disk in $DataDisks)

{

if ($AccountName -eq "")

{

$AccountName = $disk.MediaLink.AbsoluteUri.Substring(8,$disk.MediaLink.AbsoluteUri.IndexOf(".")-8)

}

if ($AccountName -ne $disk.MediaLink.AbsoluteUri.Substring(8,$disk.MediaLink.AbsoluteUri.IndexOf(".")-8))

{

$fail = 1

}

}

# Show a warning if the data disks span across multiple storage accounts

if ($fail -eq 1)

{

Write-Host "WARNING: Mulitple storage accounts found for disks attached to the virtual machine" -ForegroundColor Red

}

else

{

Write-Host "PASSED: All disks attached to the virtual machine belong to a single storage account" -ForegroundColor Green

}

[Screenshot: script output]

Previous post in the series

Azure Storage and SQL Server – Part 1
https://troubleshootingsql.com/2014/11/10/azure-storage-for-sql-server/

Azure Storage and SQL Server – Part 2
https://troubleshootingsql.com/2014/11/11/azure-storage-and-sql-server-part-2

References

Azure Storage
http://azure.microsoft.com/en-us/services/storage/

Azure Subscription and Service Limits, Quotas, and Constraints
http://azure.microsoft.com/en-us/documentation/articles/azure-subscription-service-limits/#storagelimits

* This blog post has been written based on the service details available on 10th November, 2014.