SQL on Azure VM BPCheck

After a long hiatus, I am back on the blogosphere and have decided to dedicate a series to running best practices checks using PowerShell on Azure virtual machines deployed through the new deployment model: ARM (Azure Resource Manager). If you have worked on the classic deployment model, then you will need to unlearn a few things and re-learn a little more. However, the learning curve isn't that steep. Think of Azure Resource Manager as a container for all the cloud components that make up your solution. If you had a SQL Server instance running on an Azure virtual machine, you would be using compute, networking, and storage components in Azure, which would together be encapsulated in a resource group.

I had previously done a series of posts around running best practices checks on Azure virtual machines running SQL Server. Some of those checks are still valid today as they only dealt with the SQL Server instance, and you will see components of those scripts reused. Other checks are now much easier due to the way ARM deployments are managed in Azure and the endpoints that Azure PowerShell 1.0 exposes.

The PowerShell scripts available on the GitHub repository are listed below. I will write a post on each of these scripts to explain what it does and what to expect in its output.

  • Get-AllocationUnitCheck.ps1 – Checks if the allocation unit size for the disks attached to the VM is 64K
  • Get-DBProperties.ps1 – Checks if any database has AUTO CLOSE or AUTO SHRINK enabled
  • Get-FilesOnTemp.ps1 – Checks to see if any database files are hosted on the temporary drive
  • Get-IFI.ps1 – Checks to see if the SQL Server service account has instant file initialization security privileges
  • Get-LPIM.ps1 – Checks to see if Lock Pages in Memory privilege is granted to the SQL Server service account
  • Get-OSFilesDB.ps1 – Checks to see if database files are hosted on the OS drive
  • Get-StorageAccountBP.ps1 – Checks to see if the storage account has replication enabled
  • Get-VMSize.ps1 – Checks if the right virtual machine tier is being used
  • Temporary Drive.ps1 – Finds out the temporary drive on the virtual machine
  • Get-Backups.ps1 – Finds out if any backups are being taken to local disk

The PowerShell scripts are available in the GitHub repository SqlOnAzureVM. Since these scripts are now on GitHub, please feel free to pull them and enhance them to suit your needs.
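For readers who have not used the ARM cmdlets yet, the checks in this new series assume that you have already signed in with the Azure PowerShell 1.0 (AzureRM) module and selected the right subscription. Below is a minimal sketch of that setup; the subscription, resource group and VM names are placeholders that you will need to replace with your own.

# Sign in and pick the subscription that hosts the virtual machine (AzureRM / Azure PowerShell 1.0 cmdlets)
Login-AzureRmAccount
Select-AzureRmSubscription -SubscriptionName "<subscription name>"

# Quick sanity check: the virtual machine should be visible in its resource group before running the checks
Get-AzureRmVM -ResourceGroupName "<resource group name>" -Name "<azure vm name>"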


Azure Storage and SQL Server – Part 5

In this post, we will start checking whether the SQL Server files follow the best practice recommendations for Azure storage. In Part 4 of this series, I wrote about how to identify the temporary drive on an Azure virtual machine. I am going to reuse that snippet of code for the file layout check in this post.

The script performs the following operations:

1. Finds out the temporary drive letter on the virtual machine.

2. Determines whether the Azure virtual machine is a D-series machine or not.

3. Fetches the database file locations from the SQL Server instance using the Invoke-Sqlcmd cmdlet.

4. Uses the above information to determine whether any files are present on the temporary drive.

If this is a D-series virtual machine, the script will not report a warning when it finds the tempdb files on the temporary drive, as this is supported. For any other series, the script will report an issue, since hosting database files on the temporary drive is only supported on the D-series.

The PowerShell script is available below:


# Find out the IDE drive on the machine which has a SCSI target = 1 (the temporary drive)
$Name = Get-WmiObject -Class Win32_DiskDrive -Filter "InterfaceType = `"IDE`" and SCSITargetId = 1" | Select-Object Name

# Find the partition id corresponding to the IDE temporary drive
$Antecedent = "*" + $Name.Name.Replace("\","").Replace(".","") + "*"
$Dependent = Get-WmiObject -Class Win32_DiskDriveToDiskPartition | Where-Object {$_.Antecedent -like $Antecedent} | Select-Object Dependent

# Find the logical disk which corresponds to the temporary drive partition
$Antecedent = "*" + $Dependent.Dependent.Split("`"")[1] + "*"
$TempDrive = (Get-WmiObject -Class Win32_LogicalDiskToPartition | Where-Object {$_.Antecedent -like $Antecedent} | Select-Object Dependent).Dependent.Split("`"")[1]

Write-Host "INFO: Temporary drive on the machine is:" $TempDrive -ForegroundColor Red

# Variable to track if this is a D-series VM
[boolean] $DSeries = $false

# Using Get-AzureVM to get the instance size property
$InstanceSize = Get-AzureVM -ServiceName "<azure service name>" -Name "<azure vm name>" | Select-Object InstanceSize

# Find out if this is a D-series VM
if ($InstanceSize.InstanceSize -like "*_D*")
{
    $DSeries = $true
}

# Get the location of the database files from the system catalog
$dbfiles = Invoke-Sqlcmd -ServerInstance "." -Database "master" -Query "select physical_name, name, db_name(database_id) as dbname, type_desc from sys.master_files"

# Run a foreach loop to determine if any user or system database files other than tempdb are on the temporary drive
foreach ($file in $dbfiles)
{
    # Tempdb can be hosted on the temporary drive on D-series VMs
    if ($DSeries -eq $true)
    {
        if ($file.physical_name.Substring(0,2) -eq $TempDrive -and $file.dbname -ne "tempdb")
        {
            Write-Host "ISSUE:" $file.dbname "(Filename:" $file.name " Location:" $file.physical_name") found on temporary drive" $TempDrive -ForegroundColor Red
        }
    }
    # Report any databases on the temporary drive for non-D-series VMs
    else
    {
        if ($file.physical_name.Substring(0,2) -eq $TempDrive)
        {
            Write-Host "ISSUE:" $file.dbname "(Filename:" $file.name " Location:" $file.physical_name") found on temporary drive" $TempDrive -ForegroundColor Red
        }
    }
}

Please keep in mind that the above script has not been tested for mount points.
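If your database files live on mount points, one possible way to extend the check (a rough, untested sketch only, not something the script above does) would be to enumerate volumes through the Win32_Volume WMI class and compare each file's physical_name against the full mount path of the volume instead of just the first two characters of the path.

# List all local volumes, including mount points that do not have a drive letter
$Volumes = Get-WmiObject -Class Win32_Volume | Where-Object { $_.DriveType -eq 3 } | Select-Object Name, Label, Capacity

# A file could then be treated as residing on a volume if its physical_name starts with the volume's Name,
# for example: $file.physical_name -like ($volume.Name + "*")
$Volumes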

Previous posts in the series

Azure Storage and SQL Server – Part 1
https://troubleshootingsql.com/2014/11/10/azure-storage-for-sql-server/
Azure Storage and SQL Server – Part 2
https://troubleshootingsql.com/2014/11/11/azure-storage-and-sql-server-part-2
Azure Storage and SQL Server – Part 3
https://troubleshootingsql.com/2014/11/12/azure-storage-and-sql-server-part-3
Azure Storage and SQL Server – Part 4
https://troubleshootingsql.com/2014/11/13/azure-storage-and-sql-server-part-4/

References

Azure Storage
http://azure.microsoft.com/en-us/services/storage/

Azure Subscription and Service Limits, Quotas, and Constraints
http://azure.microsoft.com/en-us/documentation/articles/azure-subscription-service-limits/#storagelimits

* This blog post has been written based on the service details available on 10th November, 2014.

Azure Storage and SQL Server – Part 3

In the last post of this series, I talked about how to determine the number of page blobs present in a particular storage account and whether the limit for the Azure storage service will be exceeded based on the current configuration. In this post, I shall talk about something a bit closer to the virtual machine which is running your SQL Server workload. The script shown below determines the IOPS supported by the data disks attached to the virtual machine and checks whether all the data disks belong to the same storage account. It is recommended that your data disks belong to a single Azure storage account. If they are not part of the same storage account, then a warning is flagged in the output.

The first part of the PowerShell script uses the Get-AzureVM cmdlet to find out the details of the Azure virtual machine which is running your workload. The output of this cmdlet is used as the input for the Get-AzureDataDisk cmdlet, which lists all the data disks attached to the virtual machine. A simple calculation then shows how much I/O the virtual machine can handle: each data disk can sustain roughly 500 IOPS, so a VM with, say, 4 data disks can support about 2,000 IOPS.

The next part of the script performs the important task of determining whether the data disks attached to the virtual machine belong to the same storage account. If not, it flags a warning. Typically, for high-performance VMs, it is recommended that a single storage account be used for hosting the data disks of the virtual machine.

The PowerShell script is shown below.


# Temporary variables
$AccountName = ""
$fail = 0

# Get the list of data disks for the Azure virtual machine
$DataDisks = Get-AzureVM -ServiceName "<Azure Service Name>" -Name "<Azure VM Name>" | Get-AzureDataDisk

Write-Host "INFO: This virtual machine has" $DataDisks.Count "data disks which can support a total of" ($DataDisks.Count*500) "IOPS" -ForegroundColor Green

# Check if there is a data disk which belongs to a different storage account
foreach ($disk in $DataDisks)
{
    if ($AccountName -eq "")
    {
        $AccountName = $disk.MediaLink.AbsoluteUri.Substring(8,$disk.MediaLink.AbsoluteUri.IndexOf(".")-8)
    }

    if ($AccountName -ne $disk.MediaLink.AbsoluteUri.Substring(8,$disk.MediaLink.AbsoluteUri.IndexOf(".")-8))
    {
        $fail = 1
    }
}

# Show a warning if the data disks span multiple storage accounts
if ($fail -eq 1)
{
    Write-Host "WARNING: Multiple storage accounts found for disks attached to the virtual machine" -ForegroundColor Red
}
else
{
    Write-Host "PASSED: All disks attached to the virtual machine belong to a single storage account" -ForegroundColor Green
}
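Since the new series targets the ARM deployment model, here is a rough, untested sketch of how the same storage account check could look with the Get-AzureRmVM cmdlet from Azure PowerShell 1.0, assuming unmanaged (VHD-based) data disks; the resource group and VM names below are placeholders.

# ARM flavour of the same check: data disks are exposed via the VM's storage profile
$vm = Get-AzureRmVM -ResourceGroupName "<resource group name>" -Name "<azure vm name>"
$DataDisks = $vm.StorageProfile.DataDisks

Write-Host "INFO: This virtual machine has" $DataDisks.Count "data disks which can support a total of" ($DataDisks.Count*500) "IOPS" -ForegroundColor Green

# For unmanaged disks, the VHD URI carries the storage account name as the host name prefix
$Accounts = $DataDisks | ForEach-Object { ([System.Uri]$_.Vhd.Uri).Host.Split(".")[0] } | Select-Object -Unique

if ($Accounts.Count -gt 1)
{
    Write-Host "WARNING: Multiple storage accounts found for disks attached to the virtual machine" -ForegroundColor Red
}
else
{
    Write-Host "PASSED: All disks attached to the virtual machine belong to a single storage account" -ForegroundColor Green
}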


Previous posts in the series

Azure Storage and SQL Server – Part 1
https://troubleshootingsql.com/2014/11/10/azure-storage-for-sql-server/

Azure Storage and SQL Server – Part 2
https://troubleshootingsql.com/2014/11/11/azure-storage-and-sql-server-part-2

References

Azure Storage
http://azure.microsoft.com/en-us/services/storage/

Azure Subscription and Service Limits, Quotas, and Constraints
http://azure.microsoft.com/en-us/documentation/articles/azure-subscription-service-limits/#storagelimits

* This blog post has been written based on the service details available on 10th November, 2014.

Azure Storage and SQL Server – Part 2

In the first post of this series, I talked about the cheat sheet which documents the storage best practices for SQL Server workloads running on Azure virtual machines. In this post, I am going to show how to automate additional checks for your virtual machine and your storage configuration, starting with determining whether you have more than 40 disks in a storage account, since beyond that point they cannot sustain more than 20,000 IOPS as per the limits for Azure storage accounts.

The first part of the PowerShell script finds the number of page blobs in the storage account using the Get-AzureStorageContainer cmdlet. This allows me to fetch all the containers in a particular storage account, which is specified as a parameter at the beginning of the script. The storage account context is created using the New-AzureStorageContext cmdlet. This is required for all the subsequent commands to ensure that each cmdlet executes against the correct storage account, especially if you have more than one storage account associated with your subscription.

The next part of the script uses the Get-AzureStorageBlob cmdlet to determine the number of blobs in a container. I used a foreach loop to walk through the contents of each container. The reason I am using local variables is to minimize the network calls to the Azure Storage service while filtering for both types of blobs in each storage container.

The last if-else block determines whether you have more than 40 disks in the storage account. If yes, then it flags an issue, because if all the disks are active, they will not be able to scale to their potential of 500 IOPS each. This is because the 20,000 IOPS per storage account limit is divided across the disks in the storage account; for example, with 50 active disks each one could average only around 400 IOPS.

The PowerShell script is shown below.

# Assign the storage account name
$StorageAccount = "<storage account name>"

# Get the storage account key as this is needed for creating the storage context
$StorageKey = Get-AzureStorageKey $StorageAccount | %{ $_.Primary }

# Create a new storage context for use in the next sections of the code block
$StorageContext = New-AzureStorageContext -StorageAccountName $StorageAccount -StorageAccountKey $StorageKey

# Find out all the containers in the storage account
$Containers = Get-AzureStorageContainer -Context $StorageContext

# Local variables for tracking the page and block blobs
$PageBlob = 0
$BlockBlob = 0

# Get the disks in each container and count each type of blob present for the summary output
foreach ($name in $Containers)
{
    $temp = Get-AzureStorageBlob -Container $name.Name -Context $StorageContext

    $Blob = $temp | Where-Object {$_.BlobType -eq "PageBlob"}
    $PageBlob += $Blob.Count

    $Blob = $temp | Where-Object {$_.BlobType -eq "BlockBlob"}
    $BlockBlob += $Blob.Count
}

Write-Host "INFO: There are" $Containers.Count "containers in the storage account:" $StorageAccount "which have" $PageBlob "Page Blob(s) and" $BlockBlob "Block Blob(s)" -ForegroundColor Green

# Determine if the 20K IOPS limit will be crossed due to having 40+ disks
if ($PageBlob -le 40)
{
    Write-Host "INFO: This storage account has" $PageBlob "disk(s) which can support a total of" ($PageBlob*500) "IOPS" -ForegroundColor Green
}
else
{
    Write-Host "ISSUE: This storage account has" $PageBlob "disk(s) which can ONLY support a total of 20,000 IOPS" -ForegroundColor Red
}
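If your subscription has more than one storage account, a simple way to extend this check is to wrap the same logic in a loop over the classic Get-AzureStorageAccount cmdlet. A minimal, untested sketch of that idea is shown below.

# Rough sketch: run the same page blob count across every classic storage account in the subscription
foreach ($account in Get-AzureStorageAccount)
{
    $key     = Get-AzureStorageKey $account.StorageAccountName | %{ $_.Primary }
    $context = New-AzureStorageContext -StorageAccountName $account.StorageAccountName -StorageAccountKey $key

    # Count the page blobs (disks) across all containers in this storage account
    $pageBlobs = Get-AzureStorageContainer -Context $context |
        ForEach-Object { Get-AzureStorageBlob -Container $_.Name -Context $context } |
        Where-Object { $_.BlobType -eq "PageBlob" }

    Write-Host "INFO:" $account.StorageAccountName "has" $pageBlobs.Count "page blob(s)" -ForegroundColor Green
}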


Previous post in the series

Azure Storage and SQL Server – Part 1
https://troubleshootingsql.com/2014/11/10/azure-storage-for-sql-server/

 

References

Azure Storage
http://azure.microsoft.com/en-us/services/storage/

Azure Subscription and Service Limits, Quotas, and Constraints
http://azure.microsoft.com/en-us/documentation/articles/azure-subscription-service-limits/#storagelimits

* This blog post has been written based on the service details available on 9th November, 2014.

T-SQL Tuesday #21: It’s easy to be lazy

This month’s revolving blog party is being hosted by the creator himself, Adam Machanic (blog|twitter). The topic Adam chose is about the forgettable practices and habits of the past that can be avoided. This is a topic I could write long rants about… but I shall limit myself to three areas where I have managed to chuck some bad habits and thereby reduced the amount of redundant time I used to spend because of them!

Your code needs some English!

One of the practices to be avoided in T-SQL coding, brought out in unison by the community members during last month’s T-SQL Tuesday, was the failure to add relevant comments to your code. I shall take this a notch higher and talk about adding comments for two reasons:

a. To remember why a change/modification was made to the code
b. To help someone supporting your code understand the logic behind what you have written

More often than not it is convenience, and sometimes sheer laziness, which has prevented this. It happens especially during crisis situations where a quick change to the code resolves the situation, but you forget to mention in the file why and when the change was made. This little indiscretion, of which I have been a victim as well as a perpetrator, has cost me a lot of precious time in the past. So, don’t be lazy… add a few English lines to your code to help the person supporting it!

Simplicity in source control can prevent hair loss

Not maintaining any form of version control for your source code is quite common when a single individual is managing the entire project or tool development. But I still believe that even simple forms of source control can save you a lot of painful hours and wishing you had the pill that Bradley Cooper had in Limitless. You don’t need a version control system or a VSTS setup to manage your code. You could use something as simple as taking a backup of your code, or naming your files v1, v1.2 and so on, before making a change to your application code that is too large to be explained by comments without writing a small essay. This is also a form of laziness which in the past has cost me hours of coding time. So be smart and maintain your code in such a way that you can revert to a previous version in a matter of minutes!

Documentation is an important key to saving time

I am sure all of you are well aware that documentation is necessary, even though it can be mundane at times. The entire reason I set up this blog was to document those unique quirks about SQL Server and those unique solutions that I arrived at by looking at disparate pieces of information. I must admit that I do not have a photographic memory, and I need to refer back to documentation that can be found with the help of a keyword search. I use the search on my blog to find my old posts! So if I didn’t document a unique solution, I would end up spending the same number of hours working on the same issue the next time I encounter it. The value of documentation is that it saves time!

A wise man/woman learns from his/her mistakes. But a wiser man/woman learns from other people’s mistakes.