How to back up SQL Server databases to a mapped drive

When backing up SQL Server databases to a mapped drive, you might encounter the following error:

"The system cannot find the path specified."

This is because a network share that you map to a local drive letter is not visible to a SQL Server instance, since the instance runs as a service. The SQL Server service runs in the local console context under the security context of its startup account, and mapped drives are specific to a user session; they are not visible to a service started in the local console context.

So, if you want to back up a SQL Server database to a mapped drive using a local drive letter, you have the following options:

1. Run the following command from a query window: EXEC xp_cmdshell 'net use <drive name> <share name>', where <drive name> is the letter used to map the drive and <share name> is the UNC path to the share

2. After that, you should be able to back up using the mapped drive letter

3. Your Management Studio Object Explorer should be able to list the above drive
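As a sketch, the whole sequence looks like this. The share \\fileserver\backups, the drive letter Z:, and the database name are placeholders; substitute your own values. Note also that xp_cmdshell is disabled by default on SQL Server 2005 and must be enabled via sp_configure first.

```sql
-- Map the share inside the SQL Server service's session
-- (\\fileserver\backups and Z: are hypothetical placeholders)
EXEC xp_cmdshell 'net use Z: \\fileserver\backups';
GO

-- The mapped letter is now visible to the service, so the backup succeeds
BACKUP DATABASE AdventureWorks TO DISK = 'Z:\AdventureWorks.bak';
```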

Net use documentation:

http://technet.microsoft.com/en-us/library/bb490717.aspx

The drawback here is that once the SQL Server service is restarted, the mapped drive will no longer be visible because the mapping is lost. If you want the mapping to persist, you need to create a startup procedure that executes the script in Step 1.
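A minimal sketch of such a startup procedure, using the same hypothetical share and drive letter (the procedure name is made up; sp_procoption marks a procedure in master for automatic execution at service startup):

```sql
USE master;
GO

-- Hypothetical procedure that re-maps the drive on every service start
CREATE PROCEDURE dbo.usp_MapBackupDrive
AS
    EXEC xp_cmdshell 'net use Z: \\fileserver\backups';
GO

-- Mark it for automatic execution when SQL Server starts
EXEC sp_procoption @ProcName = 'dbo.usp_MapBackupDrive',
                   @OptionName = 'startup',
                   @OptionValue = 'on';
```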

The easiest way, however, is to create a backup device using the UNC path of the remote share that you want to write the database backups to. Keep in mind that the SQL Server startup account needs full permissions on the remote share.
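For example, a sketch using sp_addumpdevice (the share path, device name, and database name are placeholders):

```sql
-- Create a backup device that points directly at the UNC path
EXEC sp_addumpdevice @devtype = 'disk',
                     @logicalname = 'NetworkBackup',
                     @physicalname = '\\fileserver\backups\AdventureWorks.bak';

-- Back up to the device; no drive mapping is required
BACKUP DATABASE AdventureWorks TO NetworkBackup;
```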

Retrieving TCP/IP Properties using WMI

This can be done using the following script:

******* START OF SCRIPT ********

strComputer = "."
Set objWMIService = GetObject("winmgmts:\\" & strComputer & "\root\Microsoft\SqlServer\ComputerManagement")
Set colItems = objWMIService.ExecQuery( _
    "SELECT * FROM ServerNetworkProtocolProperty WHERE InstanceName='MSSQLSERVER' AND ProtocolName='Tcp'",,48)
For Each objItem in colItems
    Wscript.Echo "-----------------------------------"
    Wscript.Echo "ServerNetworkProtocolProperty instance"
    Wscript.Echo "-----------------------------------"
    Wscript.Echo "InstanceName: " & objItem.InstanceName
    Wscript.Echo "IPAddressName: " & objItem.IPAddressName
    Wscript.Echo "PropertyName: " & objItem.PropertyName
    Wscript.Echo "PropertyNumVal: " & objItem.PropertyNumVal
    Wscript.Echo "PropertyStrVal: " & objItem.PropertyStrVal
    Wscript.Echo "PropertyType: " & objItem.PropertyType
    Wscript.Echo "PropertyValType: " & objItem.PropertyValType
    Wscript.Echo "ProtocolName: " & objItem.ProtocolName
Next

******* END OF SCRIPT ********

This script uses the ServerNetworkProtocolProperty class. Similarly, you can modify the above script to obtain information for the other server network protocols.

Performance issues due to USERSTORE_TOKENPERM

In SQL Server 2005, a known issue can degrade query performance on 32-bit and 64-bit systems with 2 gigabytes (GB) or more of memory. When you execute queries under the context of a login that is not a member of the sysadmin fixed server role, you might encounter performance degradation symptoms that arise from a large and growing Security Token cache. These issues can include performance degradation of queries, high CPU usage for the SQL Server process, and a sharp increase in worker threads and SQL user connections. Systems that have less than 2 GB of memory should not experience these issues because normal memory usage requirements keep the Security Token cache from growing too large.

If you are facing the symptoms below, it is a good time to discuss applying SP3 with CU1 with your management, or to apply the workarounds mentioned in the articles below. The most common workaround is a job running at periodic intervals that cleans up the TOKENPERM cache using DBCC FREESYSTEMCACHE ('TokenAndPermUserStore').
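A sketch of that cleanup step, gated on the cache size so the cache is only flushed when it has actually grown large. The 500 MB threshold is an assumption; tune it for your environment. The column names below are the SQL Server 2005 names in sys.dm_os_memory_clerks.

```sql
-- Flush the security token cache only when it exceeds a chosen threshold
IF EXISTS (SELECT 1
           FROM sys.dm_os_memory_clerks
           WHERE name = 'TokenAndPermUserStore'
             AND (single_pages_kb + multi_pages_kb) > 500 * 1024)  -- 500 MB, assumed
BEGIN
    DBCC FREESYSTEMCACHE ('TokenAndPermUserStore');
END
```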

Symptoms

  1. A large number of users run queries on the SQL instance, and you intermittently find that query durations increase
  2. Most queries in the environment exhibit this pattern with random parameters, rather than a specific query/proc with a specific parameter
  3. DBCC MEMORYSTATUS output for the SQL instance shows that SQL memory usage is gradually increasing over time
  4. High CPU during the problem period
  5. DBCC FREEPROCCACHE or DBCC FREESYSTEMCACHE fixes the issue temporarily

Related Articles

  • KB article 927396: Queries take a longer time to finish running when the size of the TokenAndPermUserStore cache grows in SQL Server 2005
  • KB article 933564: FIX: A gradual increase in memory consumption for the USERSTORE_TOKENPERM cache store occurs in SQL Server 2005
  • KB article 937745: FIX: You may receive error messages when you try to log in to an instance of SQL Server 2005 and SQL Server handles many concurrent connections
  • KB Article 959823: How to customize the quota for the TokenAndPermUserStore cache store in SQL Server 2005 Service Pack 3
  • http://blogs.msdn.com/psssql/archive/2008/06/16/query-performance-issues-associated-with-a-large-sized-security-cache.aspx

Deadlock Troubleshooting

Deadlock troubleshooting has always been a bit tricky. A deadlock is a special type of blocking scenario, but blocking and deadlocking are not the same thing. Deadlocks are a natural side effect of blocking. The most typical deadlock solution is either a stored proc/app code optimization, or a change in the database schema or indexing strategy.

To capture a deadlock graph in the SQL Server ERRORLOG, use the following trace flags:

• For SQL Server 2000: 1204 and 3605
• For SQL Server 2005: 1222 and 3605

The trace flags can be enabled using DBCC TRACEON (<trace flag #>, -1)
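For example, on SQL Server 2005 the two flags above would be enabled globally like this:

```sql
-- 1222 writes deadlock graphs; 3605 routes the output to the ERRORLOG;
-- -1 applies the flag to all sessions
DBCC TRACEON (1222, -1);
DBCC TRACEON (3605, -1);
```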

Refer to Bart Duncan's weblog for deadlock troubleshooting. Bart was an Escalation Engineer for SQL Server, and his three-part blog series on deadlock troubleshooting is probably the most definitive guide on the topic.

http://blogs.msdn.com/bartd/archive/2006/09/09/747119.aspx

The -T1204 output can be interpreted using the following file: http://blogs.msdn.com/bartd/attachment/747119.ashx

If you find that your SQL Server is encountering deadlocks, you need to collect the following data:

1. SQL Server Profiler traces (refer to my server-side profiler traces post for setting up profiler traces) with the following events:

• All statement-level events
• Showplan All events, because the statement chosen as the deadlock victim will not have a Statistics Profile event since it would have been terminated
• Lock: Deadlock, and if you are using SQL Server 2005, capture Deadlock Graphs as well

2. Perfmon logs

3. SQL 2000 Blocker Script output / SQL 2005 PerfStats

Additionally, if as a developer you are concerned about your statements being terminated due to deadlocks, you must build retry logic into your code to handle the deadlock and resubmit the query. This can be done using a TRY...CATCH block within the query batch that you are executing.
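A sketch of such retry logic in T-SQL follows. Error 1205 is the "chosen as deadlock victim" error; the retry count of 3 is an arbitrary choice.

```sql
DECLARE @retry INT;
SET @retry = 3;

WHILE @retry > 0
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;
        -- ... the statements you want to protect go here ...
        COMMIT TRANSACTION;
        SET @retry = 0;  -- success: leave the loop
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0
            ROLLBACK TRANSACTION;

        IF ERROR_NUMBER() = 1205 AND @retry > 1
            SET @retry = @retry - 1;  -- deadlock victim: try again
        ELSE
        BEGIN
            SET @retry = 0;  -- out of retries, or a different error
            RAISERROR ('Batch failed after deadlock retries.', 16, 1);
        END
    END CATCH
END
```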

SQL Nexus

This is a tool used by the Microsoft PSS team to analyze PSSDIAG data. It can load the collected Profiler traces (provided ReadTrace is installed on the machine).

In addition, it loads the other .OUT files and the PerfStats script output file into the Nexus database. I use this tool regularly to analyze PSSDIAG data for the following reasons:
1. It saves me the time of manually opening each file and looking through them
2. Based on the graphs provided in the reports, I can quickly drill down to the major bottleneck

The SQL Nexus reports act as a brilliant guide when you are analyzing PSSDIAG for the following scenarios:
1. High CPU
2. Blocking
3. High IO
4. General Performance Problems

Since the SQL Nexus reports use the Reporting Services client-side report viewer, you can create your own reports against the Nexus database and deploy them to the %appdata%/SQL Nexus/Reports folder. Once this is done, the new reports will show up along with the default reports every time SQL Nexus is launched.

What are the advantages of this tool?
1. It shows aggregation reports for the Profiler traces loaded into it
2. It presents blocking and wait statistics information based on the PerfStats script output, which helps in analyzing blocking on the server
3. Since the collected data is loaded into tables, you can run queries against the database to draw inferences. If you use these queries frequently enough, you can create your own reports and deploy them to the above-mentioned location
4. You can export the data from the SQL Nexus reports into files (.XLS, .PDF, .JPG, etc.) and send them as reports
5. The reports let you narrow down to specific time frames, which makes analysis easier

Download location: http://www.codeplex.com/sqlnexus

Addition: April 4, 2011

A list of known issues documented with SQL Nexus:

http://blogs.msdn.com/b/sqlserverfaq/archive/2011/02/07/sql-nexus-and-rml-utilities-a-primer-on-the-issues-that-i-have-dealt-with-last-year.aspx

Addition: April 6, 2011

A very common issue that I worked on multiple times, after which I realized it warranted a blog post:

SQLDIAG: There are no instances of version 10 on this computer