Book on Azure and SQL Server


My last contribution to a book was in 2012. With the advent of the cloud and my continuing work with SQL Server, I jumped at the opportunity when my friends and colleagues, Pranab Mazumdar and Sourabh Agarwal, talked to me about contributing to a book on running SQL Server on Azure.

The book “Pro SQL Server on Microsoft Azure” teaches the basics of Microsoft Azure and shows how SQL Server on Azure VMs (Infrastructure-as-a-Service) and Azure SQL Database (Platform-as-a-Service) work. The book shows you how to deploy, operate, and maintain your data using any combination of these offerings along with your on-premises environments. You will also find architecture details which are very important for an end user to know in order to run operations on Azure.

The book is available on Apress and Amazon.

We would love to hear any feedback about the book. It could be good, bad or ugly. You will find the resources available for download on the site.

24HOP Session: How to be a Ninja – Troubleshooting SQL PERF on Azure VMs

I have been a bit quiet on this blog, partly because I have moved into a new role and a new country. I am now a part of the SQL Server Product Group and based out of Redmond.

I am really excited about this. Sourabh Agarwal and I are going to be presenting a preview of our SQL PASS Summit 2015 pre-con session "How to be a Ninja: Troubleshooting SQL performance on Azure Virtual Machines". Yes, we are starting the initiation program for becoming a SQL performance troubleshooting NINJA on SQL Server!

Troubleshooting is an art, but the tricks of the trade have changed with the advent of Azure Virtual Machines. Performance troubleshooting is different and at the same time very similar to what you are used to with SQL Server. SQL Server performance on Azure VMs can be a sore point for many, as the host troubleshooting entry points are limited and knowledge of the internal workings is scarce.

In this session, we will show you the best practices for SQL Server instances running on Azure Virtual Machines! We will share tips on automating the implementation of all these best practices during deployment, making this a single one-click deployment. This session will be a precursor to our pre-con, where we will go the whole nine yards and detail how to automate deployments from scratch, implement best practices automatically, and analyze performance issues magically!

We hope you can join us for this session online, and we do hope to see you during our pre-con! The 24HOP lineup is full of great sessions from great speakers in the SQL family. See the full list here. I would recommend looking through the list and signing up for the ones that interest you. This will also give you a preview of what you can expect at the PASS Summit this year.

This 24 Hours of PASS: Summit 2015 Preview event takes place over 24 hours, beginning September 17, 2015, 12:00 UTC. Featuring 24 webcasts delivered over 24 hours, this event provides a glimpse into the unparalleled content on offer at PASS Summit 2015, October 27-30, in Seattle, WA.

WHEN: September 17th at 8AM Pacific (3PM GMT)
WHERE: ONLINE
Facebook Event for our session: https://www.facebook.com/events/938656286172663/
Registration link for the event: http://www.sqlpass.org/24hours/2015/summitpreview/Registration.aspx

[UPDATE] September 29th, 2015

Thank you for the feedback that you shared after the session. It is always great to know what people liked in the session and even better to know where we need to improve. This helps ensure that our next iteration has the necessary tweaks. We received an overall 90% positive feedback and we thank everyone who attended for that!

The replies to the questions from the session are available below.

Q. Regarding the performance fixes recommended as best practices (hotfixes/CUs), do we have a separate set of hotfixes (.msi/.msp) for the Azure environment compared to the on-premises environment?
A. The SQL Server installation bits that you run on Azure VMs and on virtualized/physical on-premises environments are the same, so there isn't a different set of fixes that needs to be applied on Azure VMs.
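
If you want to confirm that the builds match across environments, a quick check such as the one below works identically on an Azure VM and on-premises (a minimal sketch using Invoke-Sqlcmd against the local default instance):

# Report the build, patch level and edition of the instance
Invoke-Sqlcmd -ServerInstance "." -Database "master" -Query "select serverproperty('ProductVersion') as ProductVersion, serverproperty('ProductLevel') as ProductLevel, serverproperty('Edition') as Edition"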

Q. Are these Cheat Sheets available online?
A. The cheat sheets are available in the presentation PDF on the 24HOP site.

Q. Does using "Lock Pages in Memory" mean that the total memory allocated by the SQL Server process is not visible in Windows Task Manager?
A. Task Manager is not a good place to look for allocated memory when you want to find out about allocations made after enabling the Lock Pages in Memory privilege for the SQL Server service account. You could either look at the Total Server Memory perfmon counter or the memory DMVs to track SQL Server memory usage. Additional reference: https://msdn.microsoft.com/en-us/library/ms176018.aspx
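
As a sketch of the DMV route, the query below uses sys.dm_os_process_memory (SQL Server 2008 and later) to report the memory in use by the process, including locked page allocations that Task Manager does not attribute to the process working set:

# Report SQL Server process memory, including locked page allocations
Invoke-Sqlcmd -ServerInstance "." -Database "master" -Query "select physical_memory_in_use_kb / 1024 as MemoryInUseMB, locked_page_allocations_kb / 1024 as LockedPagesMB from sys.dm_os_process_memory"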

Q. Why are you disabling caching on the log file drive?
A. This is due to the IO patterns that the SQL Server transaction log file receives and how Azure storage is structured. We have seen in tests that SQL Server transaction log performance is best when write caching is disabled for the disks that host transaction log files. We will talk about this in detail during our pre-con session.
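
For reference, with the Azure service management cmdlets used elsewhere in this series, write caching on a data disk can be turned off along these lines (a sketch; the service name, VM name and LUN are placeholders for your own values):

# Disable host caching on the data disk (LUN 1 here) that hosts the transaction log files
Get-AzureVM -ServiceName "<azure service name>" -Name "<azure vm name>" | Set-AzureDataDisk -LUN 1 -HostCaching "None" | Update-AzureVM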

Q. For data warehousing workloads, do you recommend the Lock Pages in Memory setting on an on-premises/Azure VM hosting SQL Server?
A. For on-premises workloads, we recommend you test and ascertain the need before enabling the Lock Pages in Memory (LPIM) privilege. For Azure VM workloads, the first important task is to pick a machine with the right SKU. We recommend enabling LPIM to prevent paging to the local disk on the rack, which can negatively affect performance.
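
To check whether LPIM is actually in effect on an instance, one option is to look for locked page allocations in the memory DMV mentioned above (a minimal sketch):

# A non-zero locked page allocation indicates LPIM is in effect for the instance
$lpim = Invoke-Sqlcmd -ServerInstance "." -Database "master" -Query "select locked_page_allocations_kb from sys.dm_os_process_memory"
if ($lpim.locked_page_allocations_kb -gt 0)
{
    Write-Host "INFO: Lock Pages in Memory is in effect for this instance" -ForegroundColor Green
}
else
{
    Write-Host "INFO: No locked page allocations found; LPIM does not appear to be enabled" -ForegroundColor Yellow
}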

Q. Why are there different storage options based on the Windows version? Is there any dependency on SQL Server versions?
A. There aren't different storage options based on the Windows version. The different storage options are based on the performance tier that you want to be on, and they are agnostic to the Windows and SQL Server versions and releases.

Q. Can you point me to resources on Azure Storage?
A. The Azure storage documentation is a good place to start for this. We will talk about this in detail in the IaaS introduction part of our pre-con.

If we have missed any question, please leave your question in the comment section of this post and we will answer it.

Lastly, we loved the notes that Matt Penny took during our session. A screenshot of them is shown below. Thank you, Matt! The 24HOP session presentation is attached to the session page.

[screenshot: Matt Penny's session notes]

SQL Server 2016 Public Preview (CTP2) – Deploying to Azure VM

I had written a post earlier on deploying a SQL Server instance on an Azure Virtual Machine. Now that SQL Server 2016 CTP2 is out, let's see how that looks on Azure. The wizard is the same as before, but a new gallery option exists for deploying SQL Server 2016 CTP2. The catch is that any virtual machine created with this gallery image will expire on June 30th, 2016. The locations where this image can be deployed are East Asia, Southeast Asia, North Europe, West Europe, Central US, East US, East US 2 and South Central US. The gallery image gets provisioned with a single disk.


After the deployment is complete, you will need to enable connectivity for your SQL Server database engine as outlined in an earlier post of mine. What you get is a default instance with the Database Engine, Analysis Services, Integration Services and Reporting Services. The deployment does not include the "PolyBase Query Service for External Data" or the Distributed Replay feature. So if you are planning to play around with either of these in SQL Server 2016, you will need to run the installer again from the C:\SQLServer_13.0_Full folder.
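
As a sketch, adding the PolyBase feature from the media already on the VM could look like the command below, run from an elevated prompt (treat the feature and parameter names as assumptions to verify against the SQL Server 2016 setup documentation):

# Add the PolyBase feature to the existing default instance from the local setup media
C:\SQLServer_13.0_Full\Setup.exe /ACTION=Install /FEATURES=PolyBase /INSTANCENAME=MSSQLSERVER /IACCEPTSQLSERVERLICENSETERMS /QUIETSIMPLE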

Another feature which the gallery image does not use is the multiple-tempdb-file setup parameter, "SQLTEMPDBFILECOUNT". This is left at 1, so you will end up with the default tempdb configuration that you saw in older releases. I would recommend using a virtual machine instance which has an SSD as the temporary drive, so that you can use the SSD for testing any intensive workload which requires either high tempdb usage or a local disk that supports high IOPS.
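
If you want to move tempdb to multiple files yourself after deployment, a minimal sketch looks like this (the file path, sizes and number of files are placeholders; size them to your workload and core count):

# Add a second tempdb data file; repeat for tempdev3, tempdev4, ... as appropriate
Invoke-Sqlcmd -ServerInstance "." -Database "master" -Query "alter database tempdb add file (name = 'tempdev2', filename = 'D:\tempdb\tempdev2.ndf', size = 1024MB, filegrowth = 256MB)"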

So now you have an option to play around with SQL Server 2016 CTP2 without having to hunt down a separate virtual machine or physical box in your environment.

Azure Storage and SQL Server – Part 7

One of the recommendations for running SQL Server instances on Azure virtual machines is to ensure that the default paths and directory setup are changed once the gallery image has been deployed. If you are performing a custom install of your SQL Server instance and not using an Azure gallery image, then you would want to instruct the setup program to install the system databases and the log directory in a folder other than the operating system drive or the temporary drive.
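
For a command-line installation, the directory choices can be handed to the setup program directly. A sketch of the relevant parameters (the drive letters and paths are placeholders for your data disks):

# Keep system databases, user databases and logs off the OS and temporary drives
Setup.exe /ACTION=Install /FEATURES=SQLEngine /INSTANCENAME=MSSQLSERVER /INSTALLSQLDATADIR="F:\SQLData" /SQLUSERDBDIR="F:\SQLData" /SQLUSERDBLOGDIR="G:\SQLLogs" /SQLTEMPDBDIR="F:\TempDB" /IACCEPTSQLSERVERLICENSETERMS /QUIETSIMPLE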

If you have used a virtual machine gallery image, then you will have a default instance of SQL Server with the binaries and system database files on the C: drive. I had earlier blogged about how to find out the startup parameters using a WMI class and VBScript. Now I just use PowerShell to make the WMI query. Yes, even I have learnt a few new tricks over the years!

The PowerShell script below lets you check whether the startup parameters, which contain the locations of the SQL Server ERRORLOG, the master data file and the master log file, point to the OS drive. If they do, it will report that you are doing something that we do not recommend! Time to move the files and the ERRORLOG to a data disk.


# Find out the boot drive on the virtual machine
$BootDrive = gwmi -Class Win32_Volume -Filter "BootVolume = 'True'" | Select-Object DriveLetter

# Get the startup parameters using the service name
# Depending on the version of SQL Server installed, the WMI management namespace varies
# The code block below checks the relevant WMI namespace
$Service = gwmi -Class Win32_Service -Filter "Name = 'MSSQLServer'" | Select-Object Name, PathName
if ($Service.PathName.ToString().Contains("MSSQL12"))
{
    $Params = gwmi -Namespace root\Microsoft\SqlServer\ComputerManagement12 -Class SqlServiceAdvancedProperty -Filter "SqlServiceType = 1 and PropertyName = 'STARTUPPARAMETERS' and ServiceName = 'MSSQLSERVER'"
    $Values = $Params.PropertyStrValue.Split(";")
}
elseif ($Service.PathName.ToString().Contains("MSSQL11"))
{
    $Params = gwmi -Namespace root\Microsoft\SqlServer\ComputerManagement11 -Class SqlServiceAdvancedProperty -Filter "SqlServiceType = 1 and PropertyName = 'STARTUPPARAMETERS' and ServiceName = 'MSSQLSERVER'"
    $Values = $Params.PropertyStrValue.Split(";")
}
elseif ($Service.PathName.ToString().Contains("MSSQL10"))
{
    $Params = gwmi -Namespace root\Microsoft\SqlServer\ComputerManagement10 -Class SqlServiceAdvancedProperty -Filter "SqlServiceType = 1 and PropertyName = 'STARTUPPARAMETERS' and ServiceName = 'MSSQLSERVER'"
    $Values = $Params.PropertyStrValue.Split(";")
}
else
{
    Write-Host "Issue: No instances found running SQL Server 2008 or above" -ForegroundColor Red
}

# Run a foreach loop to check if the boot drive is present in the startup parameters. If yes, report it.
foreach ($StartupParam in $Values)
{
    if ($StartupParam.Contains($BootDrive.DriveLetter))
    {
        Write-Host "Boot drive used in" $StartupParam -ForegroundColor Red
    }
}

[screenshot of the script output]

Previous posts in the series

Azure Storage and SQL Server – Part 1
https://troubleshootingsql.com/2014/11/10/azure-storage-for-sql-server/
Azure Storage and SQL Server – Part 2
https://troubleshootingsql.com/2014/11/11/azure-storage-and-sql-server-part-2
Azure Storage and SQL Server – Part 3
https://troubleshootingsql.com/2014/11/12/azure-storage-and-sql-server-part-3
Azure Storage and SQL Server – Part 4
https://troubleshootingsql.com/2014/11/13/azure-storage-and-sql-server-part-4/
Azure Storage and SQL Server – Part 5
https://troubleshootingsql.com/2014/11/18/azure-storage-and-sql-server-part-5
Azure Storage and SQL Server – Part 6
https://troubleshootingsql.com/2014/11/19/azure-storage-and-sql-server-part-6

References

Azure Storage
http://azure.microsoft.com/en-us/services/storage/
Azure Subscription and Service Limits, Quotas, and Constraints
http://azure.microsoft.com/en-us/documentation/articles/azure-subscription-service-limits/#storagelimits

* This blog post has been written based on the service details available on 23rd November, 2014.

Azure Storage and SQL Server – Part 6

In the last post of this series, I described how to determine if any database files were hosted on the temporary drive of an Azure virtual machine. In this post, I shall check if any of the drives hosting SQL Server database files have a block size other than 64KB. The best practice recommendation for SQL Server workloads running on Azure virtual machines is to use a 64-KB allocation unit size for data and log files as well as tempdb.
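
When you format a new data disk, the allocation unit size can be set at format time. A sketch using the Windows Server 2012+ storage cmdlets (the drive letter and label are placeholders):

# Format the data disk with a 64-KB allocation unit size for SQL Server files
Format-Volume -DriveLetter F -FileSystem NTFS -AllocationUnitSize 65536 -NewFileSystemLabel "SQLData" -Confirm:$false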

Determining the block size

In the first part, I will talk about how to determine whether the block size for the drives hosting the database files is 64KB. The first thing to do is retrieve the volume details using the Win32_Volume class and use the BlockSize value to determine the block size of each volume. To avoid false positives, you will also need the disks on which the SQL Server database files are hosted. This can be retrieved from the sys.master_files system catalog. Using both sets of information, you can determine if any volume that hosts a SQL Server database file has a block size other than 64KB.

The PowerShell script that I used is available below. It also makes use of the PowerShell snippet to identify the temporary drive which I blogged about last week.


# Script for discovering the temporary drive is available in Part 4 of this series
# Hence, not re-writing the script here
$TempDrive = "D:\"

# Fetching the disks on which the SQL database files reside
$sqldisks = Invoke-Sqlcmd -ServerInstance "." -Database "master" -Query "select distinct substring(physical_name,0,4) as disk from sys.master_files" | Select-Object disk

# Getting the block size for each volume
$volumes = gwmi -Class Win32_Volume -Filter "DriveType = 3" | Select-Object BlockSize, Name

# Foreach loop to determine if the block size for any SQL disk drive varies from 64K
foreach ($volume in $volumes)
{
    if ($volume.BlockSize -ne 65536 -and $volume.Name.ToString() -in $sqldisks.disk)
    {
        if ($volume.Name.ToString() -ne $TempDrive)
        {
            Write-Host "WARNING:" $volume.Name "has a block size =" $volume.BlockSize "bytes" -ForegroundColor Red
        }
        else
        {
            Write-Host "INFO:" $volume.Name "has a block size =" $volume.BlockSize "bytes. This is the temporary drive and should only host the tempdb files" -ForegroundColor Green
        }
    }
}

You might also want to read the following articles about disk partition alignment for SQL Server:
Disk Partition Alignment Best Practices for SQL Server
http://technet.microsoft.com/en-us/library/dd758814(v=SQL.100).aspx

Disk Partition Alignment: It Still Matters–DPA for Windows Server 2012, SQL Server 2012, and SQL Server 2014
http://blogs.msdn.com/b/jimmymay/archive/2014/03/14/disk-partition-alignment-for-windows-server-2012-sql-server-2012-and-sql-server-2014.aspx

Data and log files on the same drive

This is quite a common recommendation and has been taught to DBAs when they were in DBA elementary school! But sometimes these recommendations are not followed for various reasons… some known and some unknown. So I decided that while I was at it, I would write a quick PowerShell script to determine if the data and log files are present on the same drive, using information retrieved from the sys.master_files catalog view.

The PowerShell script is given below along with a screenshot of the output.


$sqldisks = Invoke-Sqlcmd -ServerInstance "." -Database "master" -Query "select distinct substring(physical_name,0,4) as disk, type_desc from sys.master_files"
$datafiles = $sqldisks | Where-Object {$_.type_desc -eq "ROWS"}
$logfiles = $sqldisks | Where-Object {$_.type_desc -eq "LOG"}

# Check if any drive hosting data files also hosts log files
if ($datafiles | Where-Object {$logfiles.disk -contains $_.disk})
{
    Write-Host "ISSUE: Data and log files found on the same drive" -ForegroundColor Red
    foreach ($drive in $datafiles)
    {
        if ($logfiles.disk -contains $drive.disk)
        {
            Write-Host "ISSUE: Drive" $drive.disk "hosts data and log files" -ForegroundColor Red
        }
    }
}

[screenshot of the script output]

Previous posts in the series

Azure Storage and SQL Server – Part 1
https://troubleshootingsql.com/2014/11/10/azure-storage-for-sql-server/
Azure Storage and SQL Server – Part 2
https://troubleshootingsql.com/2014/11/11/azure-storage-and-sql-server-part-2
Azure Storage and SQL Server – Part 3
https://troubleshootingsql.com/2014/11/12/azure-storage-and-sql-server-part-3
Azure Storage and SQL Server – Part 4
https://troubleshootingsql.com/2014/11/13/azure-storage-and-sql-server-part-4/
Azure Storage and SQL Server – Part 5
https://troubleshootingsql.com/2014/11/18/azure-storage-and-sql-server-part-5

References

Azure Storage
http://azure.microsoft.com/en-us/services/storage/
Azure Subscription and Service Limits, Quotas, and Constraints
http://azure.microsoft.com/en-us/documentation/articles/azure-subscription-service-limits/#storagelimits

* This blog post has been written based on the service details available on 16th November, 2014.

Azure Storage and SQL Server – Part 5

In this post, we will start checking if the SQL Server files follow the best practices for Azure storage. In Part 4 of this series, I had written about how to identify the temporary drive on an Azure virtual machine. I am going to use that snippet of code in the file layout check performed here.

The script performs the following operations:

1. Finds the temporary drive letter.

2. Determines whether the Azure virtual machine is a D-Series machine.

3. Fetches the database file locations from the SQL Server instance using the Invoke-Sqlcmd cmdlet.

4. Uses the above information to determine whether any files are present on the temporary drive.

If this is a D-Series virtual machine, the script will not report a warning if it finds tempdb files on the temporary drive, as that is supported. For any other series, the script will report an issue, since hosting database files on the temporary drive is not supported.

The PowerShell script is available below:


# Find out the IDE drive on the machine which has a SCSI target = 1
$Name = Get-WmiObject -Class Win32_DiskDrive -Filter "InterfaceType = `"IDE`" and SCSITargetId = 1" | Select-Object Name

# Find the partition id corresponding to the IDE temporary drive
$Antecedent = "*" + $Name.Name.Replace("\","").Replace(".","") + "*"
$Dependent = Get-WmiObject -Class Win32_DiskDriveToDiskPartition | Where-Object {$_.Antecedent -like $Antecedent} | Select-Object Dependent

# Find the logical disk which corresponds to the temporary drive partition
$Antecedent = "*" + $Dependent.Dependent.Split("`"")[1] + "*"
$TempDrive = (Get-WmiObject -Class Win32_LogicalDiskToPartition | Where-Object {$_.Antecedent -like $Antecedent} | Select-Object Dependent).Dependent.Split("`"")[1]
Write-Host "INFO: Temporary drive on the machine is:" $TempDrive -ForegroundColor Red

# Variable to track if this is a D-Series VM
[boolean] $DSeries = $false

# Using Get-AzureVM to get the instance size property
$InstanceSize = Get-AzureVM -ServiceName "<azure service name>" -Name "<azure vm name>" | Select-Object InstanceSize

# Find out if this is a D-Series VM
if ($InstanceSize.InstanceSize -like "*_D*")
{
    $DSeries = $true
}

# Get the location of the database files from the system catalog
$dbfiles = Invoke-Sqlcmd -ServerInstance "." -Database "master" -Query "select physical_name, name, db_name(database_id) as dbname, type_desc from sys.master_files"

# Run a foreach loop to determine if any user or system database files are on the temporary drive other than tempdb
foreach ($file in $dbfiles)
{
    # Tempdb can be hosted on the temporary drive on D-Series VMs
    if ($DSeries -eq $true)
    {
        if ($file.physical_name.substring(0,2) -eq $TempDrive -and $file.dbname -ne "tempdb")
        {
            Write-Host "ISSUE: $($file.dbname) (Filename: $($file.name), Location: $($file.physical_name)) found on temporary drive $TempDrive" -ForegroundColor Red
        }
    }
    # Report any databases on the temporary drive for non-D-Series VMs
    else
    {
        if ($file.physical_name.substring(0,2) -eq $TempDrive)
        {
            Write-Host "ISSUE: $($file.dbname) (Filename: $($file.name), Location: $($file.physical_name)) found on temporary drive $TempDrive" -ForegroundColor Red
        }
    }
}

Please keep in mind that the above script has not been tested for mount points.

Previous posts in the series

Azure Storage and SQL Server – Part 1
https://troubleshootingsql.com/2014/11/10/azure-storage-for-sql-server/
Azure Storage and SQL Server – Part 2
https://troubleshootingsql.com/2014/11/11/azure-storage-and-sql-server-part-2
Azure Storage and SQL Server – Part 3
https://troubleshootingsql.com/2014/11/12/azure-storage-and-sql-server-part-3
Azure Storage and SQL Server – Part 4
https://troubleshootingsql.com/2014/11/13/azure-storage-and-sql-server-part-4/

References

Azure Storage
http://azure.microsoft.com/en-us/services/storage/

Azure Subscription and Service Limits, Quotas, and Constraints
http://azure.microsoft.com/en-us/documentation/articles/azure-subscription-service-limits/#storagelimits

* This blog post has been written based on the service details available on 10th November, 2014.

Azure Storage and SQL Server – Part 4

In Part 3 of this series, I had talked about how to determine the IOPS supported by the data disks attached to the virtual machine and how to determine if all the data disks belong to the same storage account. In this post, I shall talk about how to determine if your compute and storage resources are located in the same region. It would not make sense to have your compute and storage resources in different data centers, especially for high performance workloads. This would be the equivalent of having geographically dispersed storage, which would not really give you good throughput. It is recommended that you create your Azure storage account in the same data center as your SQL Server virtual machines to reduce transfer delays.

Colocation of service and storage account

Again, we are going to use the Azure cmdlets to make our work easier. The script below finds the locations of your Azure service and your Azure storage account and checks whether they are the same. If not, it flags an issue. The Get-AzureService cmdlet provides the location of the Azure service, and Get-AzureStorageAccount gives the location of the storage account.

The PowerShell script and a screenshot of the output are shown below.


$AzureServiceName = "<Service Name>"
$AzureStorageName = "<Storage Account Name>"

$ComputeLocation = Get-AzureService -ServiceName $AzureServiceName | Select-Object Location
$StorageLocation = Get-AzureStorageAccount -StorageAccountName $AzureStorageName | Select-Object Location

if ($ComputeLocation.Location -ne $StorageLocation.Location)
{
    Write-Host "ISSUE: Service and storage are NOT co-located. Storage location:" $StorageLocation.Location "Service location:" $ComputeLocation.Location -ForegroundColor Red
}
else
{
    Write-Host "PASSED: Service and storage are co-located in" $StorageLocation.Location -ForegroundColor Green
}

[screenshot of the script output]

Finding out the temporary drive

It is possible that someone might re-map the temporary drive to a drive letter other than D:. In such scenarios, it is important to find out the temporary drive letter; it would do well not to assume that the temporary drive is always D:. I had blogged earlier about how to re-map the temporary drive.

Once you have logged into the Azure virtual machine, you would need to jump a few PowerShell and WMI hoops to get the temporary drive letter:

1. Find the IDE drive which has a SCSI Target ID of 1. The C: drive is also an IDE drive, but with a SCSI Target ID of 0. This information is available from the Win32_DiskDrive class.

2. Now that you have the device name, use the Win32_DiskDriveToDiskPartition class to get the disk # and partition #.

3. And the last hoop that you need to jump is to use the disk # and partition # to get the logical drive letter from the Win32_LogicalDiskToPartition class.

This information will be used in future automation posts to determine whether the temporary drive is being used by SQL Server. The temporary drive can be used for Buffer Pool Extension and the tempdb files only on D-Series virtual machines.
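
For reference, pointing Buffer Pool Extension at the local SSD of a D-Series VM is a one-liner (a sketch for SQL Server 2014 and later; the folder and size are placeholders, and the folder must exist on the temporary drive):

# Enable Buffer Pool Extension on the temporary (local SSD) drive of a D-Series VM
Invoke-Sqlcmd -ServerInstance "." -Database "master" -Query "alter server configuration set buffer pool extension on (filename = 'D:\BPE\bpe.bpe', size = 64 GB)"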

The PowerShell script for the temporary drive discovery and a screenshot of the output are shown below.


# Find out the IDE drive on the machine which has a SCSI target = 1
$Name = Get-WmiObject -Class Win32_DiskDrive -Filter "InterfaceType = `"IDE`" and SCSITargetId = 1" | Select-Object Name

# Find the partition id corresponding to the IDE temporary drive
$Antecedent = "*" + $Name.Name.Replace("\","").Replace(".","") + "*"
$Dependent = Get-WmiObject -Class Win32_DiskDriveToDiskPartition | Where-Object {$_.Antecedent -like $Antecedent} | Select-Object Dependent

# Find the logical disk which corresponds to the temporary drive partition
$Antecedent = "*" + $Dependent.Dependent.Split("`"")[1] + "*"
$TempDrive = (Get-WmiObject -Class Win32_LogicalDiskToPartition | Where-Object {$_.Antecedent -like $Antecedent} | Select-Object Dependent).Dependent.Split("`"")[1]
Write-Host "INFO: Temporary drive on the machine is:" $TempDrive -ForegroundColor Red

[screenshot of the script output]

Previous posts in the series

Azure Storage and SQL Server – Part 1
https://troubleshootingsql.com/2014/11/10/azure-storage-for-sql-server/

Azure Storage and SQL Server – Part 2
https://troubleshootingsql.com/2014/11/11/azure-storage-and-sql-server-part-2
Azure Storage and SQL Server – Part 3
https://troubleshootingsql.com/2014/11/12/azure-storage-and-sql-server-part-3

References

Azure Storage
http://azure.microsoft.com/en-us/services/storage/

Azure Subscription and Service Limits, Quotas, and Constraints
http://azure.microsoft.com/en-us/documentation/articles/azure-subscription-service-limits/#storagelimits

* This blog post has been written based on the service details available on 10th November, 2014.