SQL on Azure VM BPCheck


After a long hiatus, I am back on the blogosphere and have decided to dedicate a series to running best practice checks using PowerShell on Azure virtual machines deployed using the new deployment model: ARM (Azure Resource Manager). If you have worked with the classic deployment model, you will need to unlearn a few things and re-learn a little more; however, the learning curve isn't that steep. Think of an Azure resource group as a container for all the cloud components that make up your solution. If you have a SQL Server instance running on an Azure virtual machine, you are using compute, networking and storage components in Azure, which are encapsulated together in a resource group.
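To see what this grouping looks like with the ARM cmdlets, below is a minimal sketch using Azure PowerShell 1.0 (the AzureRM module); the resource group and virtual machine names are placeholders that you would replace with your own.

# A minimal sketch using the AzureRM cmdlets from Azure PowerShell 1.0
# The resource group and VM names below are placeholders

# Sign in to the subscription
Login-AzureRmAccount

# The resource group is the container for the solution
Get-AzureRmResourceGroup -Name "<resource group name>"

# Compute: the virtual machine running SQL Server
Get-AzureRmVM -ResourceGroupName "<resource group name>" -Name "<azure vm name>"

# Storage: the storage account(s) backing the VM disks
Get-AzureRmStorageAccount -ResourceGroupName "<resource group name>"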

I had previously done a series of posts on running best practice checks on Azure virtual machines running SQL Server. Some of those checks are still valid today as they only dealt with the SQL Server instance, and you will see components of those scripts reused. Other checks are now much easier due to the way ARM deployments are managed in Azure and the endpoints that Azure PowerShell 1.0 exposes.

The PowerShell scripts available in the GitHub repository are listed below. I will write a post on each of these scripts to explain what it does and what to expect in its output.

  • Get-AllocationUnitCheck.ps1 – Checks if the allocation unit size for the disks attached to the VM is 64 KB (a minimal sketch of this style of check follows the list)
  • Get-DBProperties.ps1 – Checks if any database has AUTO CLOSE or AUTO SHRINK enabled
  • Get-FilesOnTemp.ps1 – Checks to see if any database files are hosted on the temporary drive
  • Get-IFI.ps1 – Checks to see if the SQL Server service account has instant file initialization security privileges
  • Get-LPIM.ps1 – Checks to see if Lock Pages in Memory privilege is granted to the SQL Server service account
  • Get-OSFilesDB.ps1 – Checks to see if database files are hosted on the OS drive
  • Get-StorageAccountBP.ps1 – Checks to see if the storage account has replication enabled
  • Get-VMSize.ps1 – Checks if the right virtual machine tier is being used
  • Temporary Drive.ps1 – Finds out the temporary drive on the virtual machine
  • Get-Backups.ps1 – Finds out if any backups are being taken to local disk
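To give a flavour of what these checks look like, here is a minimal sketch of the allocation unit check mentioned in the first bullet. This is not the repository script itself, only an illustration of the approach: it queries Win32_Volume through WMI and flags any local NTFS volume whose allocation unit (block) size is not 64 KB.

# Sketch: flag local NTFS volumes whose allocation unit size is not 64 KB (65536 bytes)
$volumes = Get-WmiObject -Class Win32_Volume -Filter "FileSystem = 'NTFS' and DriveType = 3"

foreach ($volume in $volumes)
{
    if ($volume.BlockSize -ne 65536)
    {
        Write-Host "ISSUE: Volume" $volume.Name "has an allocation unit size of" $volume.BlockSize "bytes" -ForegroundColor Red
    }
    else
    {
        Write-Host "PASSED: Volume" $volume.Name "uses a 64K allocation unit size" -ForegroundColor Green
    }
}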

The PowerShell scripts are available on GitHub repository SqlOnAzureVM. Since these scripts are now on GitHub, please feel free to pull them and enhance them as per your needs.


Azure Storage and SQL Server – Part 5


In this post, we will start checking whether the SQL Server database files follow the Azure storage best practice recommendations. In Part 4 of this series, I had written about how to identify the temporary drive on an Azure virtual machine. I am going to reuse that snippet of code for the file layout check performed here.

The script performs the following operations:

1. Finds out the temporary drive letter on the virtual machine.

2. Determines whether the Azure virtual machine is a D-series machine or not.

3. Fetches the database file locations from the SQL Server instance using the Invoke-Sqlcmd cmdlet.

4. Uses the above information to determine whether any files are present on the temporary drive.

If this is a D-series virtual machine, the script will not report a warning when it finds the tempdb files on the temporary drive, since that configuration is supported. For any other series, the script will report an issue, because database files residing on the temporary drive are not supported outside the D-series.

The PowerShell script is available below:


# Find out the IDE drives on the machine which have a SCSI target = 1
$Name = Get-WmiObject -Class Win32_DiskDrive -Filter "InterfaceType = `"IDE`" and SCSITargetId = 1" | Select-Object Name

# Find the partition id corresponding to the IDE temporary drive
$Antecedent = "*" + $Name.Name.Replace("\","").Replace(".","") + "*"
$Dependent = Get-WmiObject -Class Win32_DiskDriveToDiskPartition | Where-Object {$_.Antecedent -like $Antecedent} | Select-Object Dependent

# Find the logical disk which corresponds to the temporary drive partition
$Antecedent = "*" + $Dependent.Dependent.Split("`"")[1] + "*"
$TempDrive = (Get-WmiObject -Class Win32_LogicalDiskToPartition | Where-Object {$_.Antecedent -like $Antecedent} | Select-Object Dependent).Dependent.Split("`"")[1]

Write-Host "INFO: Temporary drive on the machine is:" $TempDrive -ForegroundColor Red

# Variable to track if this is a D-Series VM
[boolean] $DSeries = $false

# Using Get-AzureVM to get the instance size property
$InstanceSize = Get-AzureVM -ServiceName "<azure service name>" -Name "<azure vm name>" | Select-Object InstanceSize

# Find out if this is a D-Series VM
if ($InstanceSize.InstanceSize -like "*_D*")
{
    $DSeries = $true
}

# Get the location of the database files from the system catalog
$dbfiles = Invoke-Sqlcmd -ServerInstance "." -Database "master" -Query "select physical_name, name, db_name(database_id) as dbname, type_desc from sys.master_files"

# Run a foreach loop to determine if any user or system database files are on the temporary drive other than tempdb
foreach ($file in $dbfiles)
{
    # Tempdb can be hosted on the temporary drive on D-Series VMs
    if ($DSeries -eq $true)
    {
        if ($file.physical_name.Substring(0,2) -eq $TempDrive -and $file.dbname -ne "tempdb")
        {
            Write-Host "ISSUE:" $file.dbname "(Filename:" $file.name " Location:" $file.physical_name") found on temporary drive" $TempDrive -ForegroundColor Red
        }
    }
    # Report any database files on the temporary drive for non-D-Series VMs
    else
    {
        if ($file.physical_name.Substring(0,2) -eq $TempDrive)
        {
            Write-Host "ISSUE:" $file.dbname "(Filename:" $file.name " Location:" $file.physical_name") found on temporary drive" $TempDrive -ForegroundColor Red
        }
    }
}

Please keep in mind that the above script has not been tested for mount points.
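If you do need to handle mount points, one possible (untested) approach is to map each database file to its hosting volume through Win32_Volume instead of comparing the first two characters of the path. The sketch below reuses the $dbfiles and $TempDrive variables from the script above and is only meant to illustrate the idea for the D-series case.

# Sketch: resolve the volume hosting each file by finding the longest Win32_Volume
# name that is a prefix of the file path; this covers mounted folders as well as drive letters
$volumes = Get-WmiObject -Class Win32_Volume | Where-Object {$_.Name -ne $null}

foreach ($file in $dbfiles)
{
    $hostVolume = $volumes | Where-Object {$file.physical_name.StartsWith($_.Name, [System.StringComparison]::OrdinalIgnoreCase)} | Sort-Object {$_.Name.Length} -Descending | Select-Object -First 1

    if ($hostVolume.DriveLetter -eq $TempDrive -and $file.dbname -ne "tempdb")
    {
        Write-Host "ISSUE:" $file.dbname "(" $file.physical_name ") resolves to the temporary drive" $TempDrive -ForegroundColor Red
    }
}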

Previous posts in the series

Azure Storage and SQL Server – Part 1
https://troubleshootingsql.com/2014/11/10/azure-storage-for-sql-server/
Azure Storage and SQL Server – Part 2
https://troubleshootingsql.com/2014/11/11/azure-storage-and-sql-server-part-2
Azure Storage and SQL Server – Part 3
https://troubleshootingsql.com/2014/11/12/azure-storage-and-sql-server-part-3
Azure Storage and SQL Server – Part 4
https://troubleshootingsql.com/2014/11/13/azure-storage-and-sql-server-part-4/

References

Azure Storage
http://azure.microsoft.com/en-us/services/storage/

Azure Subscription and Service Limits, Quotas, and Constraints
http://azure.microsoft.com/en-us/documentation/articles/azure-subscription-service-limits/#storagelimits

* This blog post has been written based on the service details available on 10th November, 2014.

Azure Storage and SQL Server – Part 3


In the last post of this series, I had talked about how to determine the number of page blobs present in a particular storage account and whether the Azure storage service limit would be exceeded based on the current configuration. In this post, I shall talk about something a bit closer to the virtual machine which is running your SQL Server workload. The script shown below determines the IOPS supported by the data disks attached to the virtual machine and checks whether all the data disks belong to the same storage account. It is recommended that your data disks belong to a single Azure storage account; if they are not part of the same storage account, a warning is flagged in the output as seen in the screenshot.

The first part of the PowerShell script uses the Get-AzureVM cmdlet to find out the details of the Azure virtual machine which is running your workload. The output of this cmdlet is used as input for the Get-AzureDataDisk cmdlet, which lets me find all the data disks attached to the virtual machine. Then a simple calculation (500 IOPS per standard data disk) shows how much I/O workload the virtual machine can handle.

The next part of the script determines whether the data disks attached to the virtual machine belong to the same storage account; if not, it flags a warning. Typically, for high performance VMs, it is recommended that a single storage account be used for hosting the data disks of the virtual machine.

The PowerShell script and a screenshot of the output are shown below.


# Temporary variables
$AccountName = ""
$fail = 0

# Get the list of data disks for the Azure virtual machine
$DataDisks = Get-AzureVM -ServiceName "<Azure Service Name>" -Name "<Azure VM Name>" | Get-AzureDataDisk

Write-Host "INFO: This virtual machine has" $DataDisks.Count "data disks which can support a total of" ($DataDisks.Count*500) "IOPS" -ForegroundColor Green

# Check if any data disk belongs to a different storage account
foreach ($disk in $DataDisks)
{
    if ($AccountName -eq "")
    {
        $AccountName = $disk.MediaLink.AbsoluteUri.Substring(8,$disk.MediaLink.AbsoluteUri.IndexOf(".")-8)
    }

    if ($AccountName -ne $disk.MediaLink.AbsoluteUri.Substring(8,$disk.MediaLink.AbsoluteUri.IndexOf(".")-8))
    {
        $fail = 1
    }
}

# Show a warning if the data disks span multiple storage accounts
if ($fail -eq 1)
{
    Write-Host "WARNING: Multiple storage accounts found for disks attached to the virtual machine" -ForegroundColor Red
}
else
{
    Write-Host "PASSED: All disks attached to the virtual machine belong to a single storage account" -ForegroundColor Green
}

[Screenshot of the script output]

Previous posts in the series

Azure Storage and SQL Server – Part 1
https://troubleshootingsql.com/2014/11/10/azure-storage-for-sql-server/

Azure Storage and SQL Server – Part 2
https://troubleshootingsql.com/2014/11/11/azure-storage-and-sql-server-part-2

References

Azure Storage
http://azure.microsoft.com/en-us/services/storage/

Azure Subscription and Service Limits, Quotas, and Constraints
http://azure.microsoft.com/en-us/documentation/articles/azure-subscription-service-limits/#storagelimits

* This blog post has been written based on the service details available on 10th November, 2014.