Moving those large files for secondary databases

I remember seeing a question on the #sqlhelp hashtag about how to move the secondary database files to a new physical location. While this might seem like a mundane task, it can throw up a few surprises. If you don't want to tear down your log shipping configuration and re-establish it with the files in the new physical location, then this post will definitely interest you.

If you are running your log shipping secondary in standby mode, you will first need to switch it to norecovery mode. This is because you cannot execute an ALTER DATABASE command against a standby database, as it is not writable. You will be presented with the following error if you attempt to do so:

Msg 5004, Level 16, State 4, Line 1
To use ALTER DATABASE, the database must be in a writable state in which a checkpoint can be executed.

So to avoid the above issue, you need to switch to norecovery mode. If you are already running in this mode, then that is one task off your list. Keep in mind that the switch of the operating mode takes effect only after the next log restore operation.
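One way to switch the operating mode from standby to norecovery is through the log shipping stored procedures on the secondary server. This is a minimal sketch; LSDemo is a hypothetical secondary database name, so substitute your own.

-- Run on the secondary server; LSDemo is a hypothetical database name
-- @restore_mode: 0 = RESTORE WITH NORECOVERY, 1 = RESTORE WITH STANDBY
EXEC sp_change_log_shipping_secondary_database
    @secondary_database = N'LSDemo',
    @restore_mode = 0;

Remember that the new mode only takes effect after the next log restore, as mentioned above.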

Now here are the exact steps that you need to follow:

1. Disable the Log Shipping Restore SQL Agent job.
2. Use the ALTER DATABASE command to change the location of the secondary database files (see the T-SQL sketch after this list).
3. Stop the SQL Server instance hosting the secondary database files.
4. Move the secondary database files to the new location specified in Step #2 (OS file copy).
5. Start the SQL Server instance.
6. Enable the Log Shipping Restore SQL Agent job.
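As a minimal sketch of Step #2, one MODIFY FILE statement is needed for every file being moved. The database name, logical file names and target folder below are hypothetical placeholders, so substitute your own.

-- Run on the secondary server while the database is in norecovery (RESTORING) mode
ALTER DATABASE LSDemo
MODIFY FILE (NAME = N'LSDemo_Data', FILENAME = N'E:\NewPath\LSDemo_Data.mdf');

ALTER DATABASE LSDemo
MODIFY FILE (NAME = N'LSDemo_Log', FILENAME = N'E:\NewPath\LSDemo_Log.ldf');

The new paths only become effective once the files have been physically moved and the instance restarted, which is what Steps #3 to #5 take care of.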

Verify that the log shipping jobs are running without any errors. The reason you need to stop the SQL Server instance is that an ALTER DATABASE…OFFLINE doesn't work when the database is in the RESTORING state. You will be blocked by the following error if you attempt to set the database offline:

Msg 5052, Level 16, State 1, Line 1
ALTER DATABASE is not permitted while a database is in the Restoring state.
Msg 5069, Level 16, State 1, Line 1
ALTER DATABASE statement failed.

Hope the above helps in moving the database files of secondary databases which are configured for log shipping. The steps mentioned above work for both data and log files.


Getting the Default Data and Log path using PowerShell

I remember a while back being asked about a way to fetch the default data and log paths for a SQL Server instance. This information is present in the registry, and the relevant keys (DefaultData and DefaultLog) are created only if you change the default paths to something other than the SQLDataRoot path with which the SQL instance was installed.

So I thought I would put together a quick post to fetch this information using a PowerShell script. The function takes the instance name as its parameter value. Note that for a default instance, the instance name is MSSQLServer.


############################################################
# Author: Amit Banerjee
# Information: Finds the default data and log file paths for a given SQL Server instance
############################################################

Function fnGetDefaultDBLocation
{
    Param ([string] $vInstance)

    # Get the registry key name associated with the instance name
    $vRegInst = (Get-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server\Instance Names\SQL" -ErrorAction SilentlyContinue).$vInstance
    $vRegPath = "HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server\" + $vRegInst + "\MSSQLServer"

    # Get the data and log file paths if available
    $vDataPath = (Get-ItemProperty -Path $vRegPath -ErrorAction SilentlyContinue).DefaultData
    $vLogPath = (Get-ItemProperty -Path $vRegPath -ErrorAction SilentlyContinue).DefaultLog

    # Fall back to the SQLDataRoot path from the Setup key when the defaults are not set
    $vSetupPath = "HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server\" + $vRegInst + "\Setup"

    # Report the entries found
    if ($vDataPath.Length -lt 1)
    {
        $vDataPath = (Get-ItemProperty -Path $vSetupPath -ErrorAction SilentlyContinue).SQLDataRoot + "\Data\"
    }
    Write-Host "Default Data Path:" $vDataPath

    if ($vLogPath.Length -lt 1)
    {
        $vLogPath = (Get-ItemProperty -Path $vSetupPath -ErrorAction SilentlyContinue).SQLDataRoot + "\Data\"
    }
    Write-Host "Default Log Path:" $vLogPath
}

fnGetDefaultDBLocation "MSSQLServer"

Volume Shadow barfs with 0x80040e14 code

I was replying to an MSDN forum thread recently and found that a VSS backup was failing due to a large number of database files residing on a particular volume.

I re-created the issue on my box by limiting Max Worker Threads to 128 on my SQL Server 2008 instance (the sp_configure sketch after the script below shows this change). After making the change to the number of worker threads, I created 99 databases on the D: drive of my server using the following script:

set nocount on
declare @cntr int = 1, @sql varchar(8000)
while @cntr < 100
begin
    set @sql = 'CREATE DATABASE [db' + cast(@cntr as varchar(5)) + '] ON PRIMARY
( NAME = N''db' + cast(@cntr as varchar(5)) + ''', FILENAME = N''D:\DBFiles\db' + cast(@cntr as varchar(5)) + '.mdf'' )
LOG ON
( NAME = N''db' + cast(@cntr as varchar(5)) + '_log'', FILENAME = N''D:\DBFiles\db' + cast(@cntr as varchar(5)) + '.ldf'' )'
    set @cntr = @cntr + 1
    exec(@sql)
end
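For reference, the Max Worker Threads change mentioned above can be made with sp_configure. This is a minimal sketch; 128 is the value used for the repro, and the setting may require an instance restart to take effect.

-- Limit Max Worker Threads to 128 to reproduce the issue (advanced option)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max worker threads', 128;
RECONFIGURE;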

I then attempted to back up the DBFiles folder on the D: drive using Windows Server Backup, and it failed.


After that I looked into the SQL Server Errorlog and found the following errors:

2010-12-06 03:39:15.280 Backup       Error: 3041, Severity: 16, State: 1.
2010-12-06 03:39:15.280 Backup       BACKUP failed to complete the command BACKUP DATABASE db68. Check the backup application log for detailed messages.
2010-12-06 03:39:15.370 spid53       Error: 18210, Severity: 16, State: 1.
2010-12-06 03:39:15.370 spid53       BackupVirtualDeviceFile::PrepareToFreeze: failure on backup device '{DB840D1B-DD17-4D73-83F1-7390F676DEE4}1'. Operating system error 995(The I/O operation has been aborted because of either a thread exit or an application request.).

I had already documented earlier that the infamous 995 error message could be due to a problem with sqlvdi.dll, but in this case it is not!

The next thing I did was to pick out the errors from the Application event log, and I found the following errors repeated multiple times:

Error    12/6/2010 3:39:15 AM    SQLVDI    1    None    SQLVDI: Loc=SignalAbort. Desc=Client initiates abort. ErrorCode=(0). Process=3920. Thread=11604. Client. Instance=SQL2008. VD=Global\{DB840D1B-DD17-4D73-83F1-7390F676DEE4}1_SQLVDIMemoryName_0.

Error    12/6/2010 3:39:15 AM    SQLWRITER    24583    None    Sqllib error: OLEDB Error encountered calling ICommandText::Execute. hr = 0x80040e14. SQLSTATE: 42000, Native Error: 3013

Error state: 1, Severity: 16

Source: Microsoft SQL Server Native Client 10.0

Error message: BACKUP DATABASE is terminating abnormally.

SQLSTATE: 42000, Native Error: 3224

Error state: 1, Severity: 16

Source: Microsoft SQL Server Native Client 10.0

Error message: Cannot create worker thread.

If you look closely at the errors above, one message stands out like a sore thumb: "Cannot create worker thread". After this issue occurs, the SQL Server writer shows the following state when you execute vssadmin list writers from a command prompt window:

Writer name: 'SqlServerWriter'
   Writer Id: {a65faa63-5ea8-4ebc-9dbd-a0c4db26912a}
   Writer Instance Id: {9075f235-fdee-4445-851b-a69c54bd8b33}
   State: [8] Failed
   Last error: Non-retryable error

Ensure that you do not have any memory-related errors in the SQL Server Errorlog, or any DBCC MEMORYSTATUS output printed in the Errorlog, around the time of the backup failure. If there aren't any, then the above issue occurs due to the SQL instance hitting the worker thread limit.

The reason for this is explained by Rob Dorr (Principal SQL Server Escalation Engineer at Microsoft) in his blog post:

The controller and database worker come from the worker pool.  So if your max worker thread is configured for 255 the answer would be a max of 255 / 2 = 127 databases.    You probably won’t get to 127 with this configuration as you have system workers (lazy writer, checkpoint, log writer, and others) that already are using workers.

On my SQL instance, I have 120 databases and I need about 240 worker threads to complete this operation. However, I have only 128 worker threads configured for my SQL instance. If you have the Dedicated Administrator Connection enabled for the SQL instance, then you can take this troubleshooting further and execute a select against the DMV sys.dm_os_schedulers (on SQL Server 2005 or above) to get an output of the following type:

status                 counts
---------------------- -------
HIDDEN ONLINE          168
VISIBLE ONLINE         8
VISIBLE ONLINE (DAC)   1

Query used:

select status, count(*) as counts
from sys.dm_os_schedulers
group by status

As you can see, there are about 168 hidden online schedulers created for performing the snapshot backups, but the SQL instance in question doesn't have sufficient worker threads to facilitate this operation.

What can you do in such situations?

1. Increase the number of worker threads to account for the additional worker threads required for completing the VSS backups. This can be counter-productive, as it can lead to non-Buffer Pool memory contention on 32-bit instances of SQL Server. The higher the number of max worker threads on your SQL Server instance, the more non-BPool memory is consumed for thread stacks. This results in depletion of the available Buffer Pool memory.

2. Change the physical location of the SQL Server database files so that all the database files do not reside on the same volume.

3. Perform a VSS Backup with SQL Server VSS Writer service turned OFF. However, you cannot recover any SQL Server database files from such a Volume Shadow Backup.

4. Use SQL native backups or other VDI (non-snapshot) backup tools to perform SQL Server database backups (see the sketch below).
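As a small sketch of option #4, a native streaming backup bypasses the VSS snapshot path entirely. The database name and backup location below are hypothetical, so substitute your own.

-- Native (non-snapshot) SQL Server backup to a local file
BACKUP DATABASE [db1]
TO DISK = N'D:\Backups\db1.bak'
WITH INIT, STATS = 10;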

Note: This also happens when you select any file on a drive which has a high number of SQL Server database files. This is because the SQL Server VSS Writer tries to enumerate the database files for the SQL Server instance that reside on the volume that has been selected for the Snapshot backup.

Location of SQL Binaries can flip out Bare Metal backups

Bare metal recovery allows system administrators to recover a system from failure within a short period of time and to manage system state backups in a more efficient manner. However, there is a slight catch when you have SQL Server binaries installed in a non-default location.

I recently worked on an issue where Windows Backup Admin was listing a non-system drive as a critical component when trying to perform a bare-metal backup using the following command:

wbadmin start backup -allcritical -backupTarget:<any existing drive name>:

When I looked into the contents of the drive, I found that the drive only had SQL Server database files and binaries on it. This is the output that I got:

wbadmin start backup -allcritical -backupTarget:S:
wbadmin 1.0 – Backup command-line tool
(C) Copyright 2004 Microsoft Corp.

Retrieving volume information…
This will back up volume OS(C:),New Disk(D:),New Disk(E:) to S:.
Do you want to start the backup operation?
[Y] Yes [N] No N

The operation ended before completion.

I then decided to use DiskShadow to show me all the critical components marked by the VSS Writers. I got the following output:

WRITER “System Writer”:

– Volumes affected by this component:

– \\?\Volume{2407f815-f9c4-11df-aef7-806e6f6e6963}\ [C:\]
– \\?\Volume{22c3ff31-f965-11df-a20b-00155df1c42a}\ [D:\]
– \\?\Volume{0342683d-f96a-11df-8a1f-00155df1c42a}\ [E:\]

This is because the SQL Server program binaries were installed on the D: and E: drives. And these are the paths that show up as critical as per the System Writer:

– d:\program files\microsoft sql server\100\dts\binn
– d:\program files\microsoft sql server\msas10.mssqlserver\olap\bin
– d:\program files\microsoft sql server\msrs10.mssqlserver\reporting services\reportserver\bin
– d:\program files\microsoft sql server\mssql10.mssqlserver\mssql\binn
– e:\program files\microsoft sql server\msas10.inst01\olap\bin
– e:\program files\microsoft sql server\msrs10.inst01\reporting services\reportserver\bin
– e:\program files\microsoft sql server\mssql10.inst01\mssql\binn

DiskShadow command used: list writers detailed

From http://msdn.microsoft.com/en-us/library/aa384630(VS.85).aspx

A volume is a critical volume if it contains system state information. The boot and system volumes are included automatically. The requester must include all volumes that contain system-critical components reported by writers, such as the volumes that contain the Active Directory. System-critical components are marked as “not selectable for backup.” In VSS, “not selectable” means “not optional.” Thus, the requester is required to back them up as part of system state.

Based on the above description, it is clear that I cannot perform a Bare Metal backup without including D: and E: drives.

Note: The above is just an example. There might be other system-critical components which are marked as not selectable for backup by other VSS Writers. VSS works at the volume level, so if a particular path on a volume is marked as critical, then the entire volume would need to be backed up.

As I mentioned earlier, for system-critical components, "Is selectable: FALSE" is set (the VSS_CF_NOT_SYSTEM_STATE flag is set for non-system components). In the above example, the drives on which the SQL binaries exist are marked as critical by the System Writer.

Removing primary transaction log file

Recently I was discussing removing transaction log files on the #sqlhelp hashtag on Twitter. The question was whether we could remove an existing transaction log file after adding a new one. During the course of the discussion, I learnt that the file being removed was the primary transaction log file of the database.

When you attempt to remove the primary log file of a database from Management Studio or using the ALTER DATABASE command, you will receive the following error:

Drop failed for LogFile ‘dbLogTest_log’.  (Microsoft.SqlServer.Smo)

The primary data or log file cannot be removed from a database. (Microsoft SQL Server, Error: 5020)

This is by design: SQL Server does not allow the removal of the primary transaction log file of a user database. This restriction has existed since SQL Server 2000.
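As a minimal repro sketch (the database name, file path and logical names below are hypothetical), adding a second log file and then attempting to drop the original primary log file raises the error above:

-- Hypothetical database; the default primary log file is named dbLogTest_log
CREATE DATABASE dbLogTest;

-- Add a second transaction log file
ALTER DATABASE dbLogTest
ADD LOG FILE (NAME = N'dbLogTest_log2', FILENAME = N'D:\DBFiles\dbLogTest_log2.ldf');

-- Attempting to drop the primary log file fails with Msg 5020
ALTER DATABASE dbLogTest REMOVE FILE [dbLogTest_log];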

Tibor (Blog), a SQL Server MVP, has already blogged about how to remove transaction log files using T-SQL commands here. So, I am not going to duplicate the post here again by providing the same T-SQL commands.