Lessons learnt while using the Cloud Adapter

During the last week of August, I had blogged about how to get your on-premise database to your SQL Server instance running on an Azure virtual machine. I had run into a few issues while trying to run the deployment wizard provided by Management Studio.

The First Stumble

This error is easy to circumvent and is mentioned in the online documentation. The error message reads:

Failed to locate a SQL Server of version 12.0.2000 or later installed on the remote machine. Please verify that a SQL Server of the same or higher version than the source SQL Server is installed on the remote machine.

The above error is self-explanatory. There is a requirement that the source database engine version be lower than or equal to the version of the SQL Server instance running on Azure. For example, you cannot deploy a database from a SQL Server 2014 instance to a SQL Server 2012 instance running on an Azure VM.
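A quick way to verify this before running the wizard is to query the engine version on both sides. A minimal sketch using the SQLPS cmdlets is below; the server names are placeholders for your own instances.

# Compare engine versions: the source (on-premise) version must be lower than or
# equal to the version of the instance on the Azure VM. Server names are placeholders.
Invoke-Sqlcmd -ServerInstance "OnPremServer" -Query "SELECT SERVERPROPERTY('ProductVersion') AS SourceVersion;"
Invoke-Sqlcmd -ServerInstance "AzureVMName.cloudapp.net" -Query "SELECT SERVERPROPERTY('ProductVersion') AS TargetVersion;"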

The Second Stumble

The second common error that you might run into is:

The Cloud Adapter port configuration is not valid. Verify the virtual machine endpoint configurations.

The above error will be encountered when an endpoint is not configured for the Azure virtual machine to accept connections from the outer realm! This can be easily rectified by adding a TCP endpoint to your Azure virtual machine for port 11435, which is the port that the SQL Server Cloud Adapter service listens on. This is also mentioned in the online documentation. Once you have created the endpoint on your Azure virtual machine so that your on-premise server can connect to the Cloud Adapter service, your endpoint configuration should look like the one in the screenshot below:

[Screenshot: Azure virtual machine endpoint configuration showing the TCP endpoint for the Cloud Adapter service on port 11435]
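If you prefer PowerShell over the management portal for this step, the endpoint can be added with the classic Azure (Service Management) cmdlets. A minimal sketch is below; the cloud service and VM names are placeholders.

# Add a TCP endpoint for the SQL Server Cloud Adapter service (port 11435).
# "MyCloudService" and "MySqlVM" are placeholder names - substitute your own.
Get-AzureVM -ServiceName "MyCloudService" -Name "MySqlVM" |
    Add-AzureEndpoint -Name "SqlCloudAdapter" -Protocol tcp -PublicPort 11435 -LocalPort 11435 |
    Update-AzureVM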

The Third Stumble

The next issue could be with permissions or authentication, or it might not be as simple as it seems. The error reported is:

Cloud Adapter operation failed due to invalid authentication. Verify the virtual machine name, user name, and password.

So the first thing to check is whether you have the correct account name and password. If it is indeed an authentication error, then the application event log of the Azure virtual machine will show the following error with the source listed as the SQL Server Cloud Adapter service, as shown in Screenshot 2. The text of the error message is mentioned below.

Access denied for user <user name>

[Screenshot 2: "Access denied for user" error reported by the SQL Server Cloud Adapter service in the application event log]
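If you would rather query the application log from PowerShell than open Event Viewer, a filter along the lines below will pull out the Cloud Adapter entries; the source-name match is an assumption based on the events shown above.

# List recent application log entries written by the SQL Server Cloud Adapter service.
Get-EventLog -LogName Application -Newest 200 |
    Where-Object { $_.Source -like "*Cloud Adapter*" } |
    Select-Object TimeGenerated, EntryType, Message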

The other error that you might encounter occurs when the SQL Server Cloud Adapter service tries to enumerate the database engines installed on the virtual machine. The Management Studio wizard will still report an authentication failure, but a little investigation into the application event logs of the virtual machine will show the following error:

[Error] <ip address> Exception in GetSqlInstances(): SQL Server WMI provider is not available on <machine name>.. Stack trace:
   at Microsoft.SqlServer.Management.Smo.Wmi.ManagedComputer.TryConnect()
   at Microsoft.SqlServer.Management.Smo.Wmi.WmiSmoObject.get_Proxy()
   at Microsoft.SqlServer.Management.Smo.Wmi.WmiSmoObject.EnumChildren(String childTypeName, WmiCollectionBase coll)
   at Microsoft.SqlServer.Management.Smo.Wmi.ServerInstanceCollection.InitializeChildCollection()
   at Microsoft.SqlServer.Management.CloudAdapter.Tasks.GetSqlInstances()
   at Microsoft.SqlServer.Management.CloudAdapter.CloudAdapter.GetSqlInstances(String username, String password).
Inner Exception: Invalid namespace .

The above error clearly states that the GetSqlInstances() method failed. The Microsoft.SqlServer.Management.Smo.Wmi namespace contains classes that provide programmatic access to Windows Management Instrumentation (WMI) from an SMO application. I had talked about needing the Shared Management Objects in an earlier post. The SQL Server 2014 WMI provider is also required, and it can be installed via the client connectivity components from any SQL Server 2014 setup, including SQL Server Express. The components that I had installed were:

a. Client Tools Connectivity

b. Client Tools Backwards Compatibility

If you are not sure whether you have the WMI provider, then look for the file "C:\Program Files (x86)\Microsoft SQL Server\120\Shared\sqlmgmproviderxpsp2up.mof". This is the SQL Server 2014 MOF file. Another way to test whether the WMI provider is working, without running the wizard every time and having it fail, is to run the PowerShell commands below on your Azure virtual machine. This script will tell you whether the instance enumeration performed by the deployment wizard will work or fail.

# Load the SMO and SQL WMI management assemblies.
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo")
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SqlWmiManagement")

# Connect to the local machine ('.') through the SQL Server WMI provider;
# this fails with "Invalid namespace" if the WMI provider is not installed.
$m = New-Object ('Microsoft.SqlServer.Management.Smo.Wmi.ManagedComputer') '.'

# Enumerate the SQL Server instances - the same call the deployment wizard makes.
foreach ($svi in $m.ServerInstances)
{
	$svi.Name;
}
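And for the file check mentioned above, a one-liner is enough:

# Returns True if the SQL Server 2014 WMI provider MOF file is present.
Test-Path "C:\Program Files (x86)\Microsoft SQL Server\120\Shared\sqlmgmproviderxpsp2up.mof"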

This post was intended to document the common issues that you might run into while deploying a database from an on-premise SQL Server instance to a SQL Server instance running on an Azure virtual machine.

Exporting Data to a SQL Server on an Azure VM

If you have played around with a SQL Server installation on an Azure Virtual Machine, then you will invariably need to move a database from an on-premise environment to your Azure virtual machine for testing, deployment and a host of other activities that you are involved in on a regular basis at work!

Books Online has complete documentation on this wizard. In this post, we will attempt to understand what happens under the hood. Read on to find out more.


Configuring the Azure VM for SQL Server connectivity

In the last SQL Bangalore UG meeting, I had talked about how to use the Custom Script extension in Azure to run post-configuration operations on an Azure VM hosting a SQL Server instance. The post-configuration steps that I am going to talk about in this post are necessary for you to be able to connect to your SQL Server instance on an Azure VM from a Management Studio running on your on-premise machine.

Before you can connect to the instance of SQL Server from the Internet, the following tasks must be completed:

  • Configure SQL Server to listen on the TCP protocol and restart the Database Engine.
  • Open TCP ports in the Windows firewall.
  • Configure SQL Server for mixed mode authentication.
  • Create a SQL Server authentication login.
  • Create a TCP endpoint for the virtual machine. This would normally be done while providing the endpoint configuration if you are using the Azure Management Portal wizard.

If you used an image from the image gallery, then you will get a default database engine installation with the TCP/IP port configured as 1433. I had written a post earlier which walks through an Azure VM creation using a SQL Server image from the image gallery.

Here I am going to talk about how to automate the bulleted points mentioned above using PowerShell and the Custom Script extension that Azure provides. This is going to be a long read… so I suggest you get a coffee before you start reading further!
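As a preview of what the script does, here is a minimal sketch of those steps run directly on the VM, assuming a default instance (MSSQLSERVER) and a hypothetical login name.

# 1. Enable the TCP protocol for the default instance via the SQL Server WMI provider.
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SqlWmiManagement") | Out-Null
$wmi = New-Object Microsoft.SqlServer.Management.Smo.Wmi.ManagedComputer
$tcp = $wmi.ServerInstances['MSSQLSERVER'].ServerProtocols['Tcp']
$tcp.IsEnabled = $true
$tcp.Alter()

# 2. Open TCP port 1433 in the Windows firewall.
New-NetFirewallRule -DisplayName "SQL Server (TCP 1433)" -Direction Inbound -Protocol TCP -LocalPort 1433 -Action Allow

# 3. Switch the instance to mixed mode authentication and create a SQL Server login.
#    The login name and password below are placeholders.
Invoke-Sqlcmd -ServerInstance "." -Query "EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE', N'Software\Microsoft\MSSQLServer\MSSQLServer', N'LoginMode', REG_DWORD, 2;"
Invoke-Sqlcmd -ServerInstance "." -Query "CREATE LOGIN [AppLogin] WITH PASSWORD = N'<StrongPasswordHere>'; ALTER SERVER ROLE [sysadmin] ADD MEMBER [AppLogin];"

# 4. Restart the database engine so the protocol and authentication mode changes take effect.
Restart-Service -Name "MSSQLSERVER" -Force

# 5. The TCP endpoint for the virtual machine itself is created from the management portal
#    (or with Add-AzureEndpoint), as noted in the bulleted list above.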


Remapping the temporary drive on an Azure VM

If you have used an Azure Virtual Machine, it is made clear in various articles that the temporary drive, i.e. the D: drive, should not be used. However, there might be explicit requirements from an application standpoint that require the D: drive to be available, and while re-mapping your application to use another drive may be a simple suggestion, it might not be viable in certain scenarios. In this blog post, I shall show you how to reclaim the letter D from your Azure Virtual Machine and assign it to another drive that might be craving this particular letter of the English alphabet.

Azure VM Disk Management

The first thing that you will need to do is assign a new data disk or an existing data disk to your Azure VM if you do not have a spare disk. This can be accomplished easily by following the steps mentioned here.

Once you have initialized the disks, your disk management view should look similar to what you see in Screenshot 1. The temporary disk shows that it hosts the page file. Remapping the drive first requires you to move the page file to the new disk or to some other data disk that was already present on the server. You will have to reboot the machine for the changes to take effect.

Once the machine is back up, change the drive letter mapping from Disk Management.
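If you would rather script the swap than click through Disk Management, something along these lines should work; the drive letters below are assumptions, so check your actual layout first.

# Move the temporary disk off D: and give D: to the data disk.
# Assumes the temporary disk is currently D: and the new data disk is F:.
Set-Partition -DriveLetter D -NewDriveLetter Z
Set-Partition -DriveLetter F -NewDriveLetter D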

Change the page file settings to use the temporary storage again, but this time under its new drive letter. Once you reconfigure the page file settings, you will need to reboot the virtual machine again.

When the virtual machine is finally online again, you will have your desired drive letter mapping. As you can see in Screenshot 2, the page file and my temporary storage are now on the Z: drive, whereas the D: drive is assigned to a data drive.

[Screenshot 2: Azure VM page file changed - temporary storage now on the Z: drive, data disk on D:]

Terminating an Azure SQL Database Replication

In my last post, I talked about setting up geo-replication for Azure SQL databases. There might be situations where you need to terminate the replication between your replicas, and this could be for various reasons: you want to move your replica to a different region, you want to remove replication temporarily, or you want to bring your secondary replica online and allow DML operations on it.

To remove replication, Azure provides two options: planned and forced termination. Again, if you have worked with on-premise database mirroring or availability groups, then this will seem familiar to you. Planned termination incurs ZERO data loss for the replica. Forced termination carries a chance of data loss.

Planned Termination is intended for use in planned operations where data loss is unacceptable. Termination can only be performed on the primary database, after the active secondary has been seeded.

Forced termination is intended for when the primary database or one of its active secondary databases is lost or is inaccessible. A forced termination can be performed on either the primary database or the active secondary. Every forced termination results in the irreversible loss of connectivity between the primary database and at least one active secondary. In addition, forced termination on an active secondary causes the loss of any transactions that have not been replicated from the primary database. If the primary database has only one continuous copy relationship, after termination, updates to the primary database will no longer be protected.

You will have to set up the replication again in case you want a synchronized copy of the database.


The steps to accomplish this are mentioned below.

Select the Geo-Replication tab for the database. This tab is only enabled for databases in subscriptions that are enrolled in the Premium preview program. Active Geo-Replication is currently only supported for Premium databases. You should see that the Replication Role for the database is displayed as source.

  1. Select the desired active secondary from the REPLICAS list.
  2. To terminate the continuous copy relationship, click Stop Replica. This launches the Stop Active Geo-Replication dialog, which allows you to select the type of termination you want to perform.
  3. The Stop Active Geo-Replication dialog box presents two options when launched from the primary database:
     • Stop replication after synchronization completes: This option ensures that the termination happens after the committed transactions on the primary have been replicated to the active secondary.
     • Stop replication immediately: This option terminates the continuous copy relationship between the primary and the selected active secondary immediately. You should expect some data loss for the active secondary in this scenario.
  4. Select the Stop replication after synchronization completes option and click to confirm.
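The same termination can also be scripted with the classic Azure PowerShell cmdlets. A minimal sketch, with placeholder server and database names, is below.

# Planned termination: the secondary is synchronized before the relationship is broken.
Stop-AzureSqlDatabaseCopy -ServerName "primarysrv" -DatabaseName "MyDatabase" -PartnerServer "secondarysrv" -PartnerDatabase "MyDatabase"

# Forced termination: breaks the relationship immediately; unreplicated transactions may be lost.
Stop-AzureSqlDatabaseCopy -ServerName "primarysrv" -DatabaseName "MyDatabase" -PartnerServer "secondarysrv" -PartnerDatabase "MyDatabase" -ForcedTermination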

Reference:

Terminate a Continuous Copy Relationship
http://msdn.microsoft.com/en-us/library/azure/dn741323.aspx