SQL performance gotchas for Distributed Queries

I am not sure how many of us pay attention to BOL notes under certain topics. I must admit that I have overlooked quite a few useful notes once in a while. But while working on a Linked Server permissions issue, I came across this point under the “Security for Linked Servers”:

To create the best query plans when you are using a table on a linked server, the query processor must have data distribution statistics from the linked server. Users that have limited permissions on any columns of the table might not have sufficient permissions to obtain all the useful statistics, and might receive a less efficient query plan and experience poor performance. If the linked server is an instance of SQL Server, to obtain all available statistics, the user must own the table or be a member of the sysadmin fixed server role, the db_owner fixed database role, or the db_ddladmin fixed database role on the linked server.

The reason for this is that the query optimizer is dependent on the table/index statistics to choose the optimal plan for the query. For remote queries that perform a large number of joins, or that have complex sub-queries or complex filter conditions, you are bound to run into permission issues if the linked server remote login doesn’t have the necessary permissions to extract the statistics information. If you are not willing to give the linked server login the above rights on the remote table, then it would be better to either use an alternative means to fetch the remote data, or maintain a local copy of the data using mirroring/log shipping/replication and operate on the local data, so that your business logic doesn’t get affected as the data grows on your server.
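If granting those rights is acceptable, a minimal sketch of what that looks like on the remote SQL Server instance is shown below; the database name SalesDB and the mapped remote user linked_srv_user are hypothetical names used only for illustration.

-- Run on the remote (linked) server, in the database the remote query touches
USE SalesDB
GO
-- Add the user that the linked server login maps to into db_ddladmin
-- so the optimizer on the local server can obtain the statistics it needs
EXEC sp_addrolemember 'db_ddladmin', 'linked_srv_user'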

IMHO linked server queries should never be used for implementing complex business logic! Remote queries should fetch as little data as possible. But Utopia is not always a reality!
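As a simple illustration of keeping the remote data volume small (the linked server, database, and table names below are made up), OPENQUERY evaluates the filter entirely on the remote server, whereas a four-part name may end up pulling rows across the wire before filtering them locally:

-- Four-part name: depending on the plan, rows may be pulled locally and then filtered
SELECT OrderID, OrderDate
FROM LNKSRV.SalesDB.dbo.Orders
WHERE OrderDate >= '20100101'

-- OPENQUERY: the filter runs on the remote server, so only matching rows come back
SELECT OrderID, OrderDate
FROM OPENQUERY(LNKSRV, 'SELECT OrderID, OrderDate FROM SalesDB.dbo.Orders WHERE OrderDate >= ''20100101''')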

Unable to import a trace using SQL Nexus

While trying to import a profiler trace using SQL Nexus, you might get the following error in the SQL Nexus log file:

"The system cannot find the file specified (System)"
——————————
Program Location:
   at System.Diagnostics.Process.StartWithCreateProcess(ProcessStartInfo startInfo)
   at System.Diagnostics.Process.Start()
   at System.Diagnostics.Process.Start(ProcessStartInfo startInfo)
   at ReadTrace.ReadTraceNexusImporter.DoImport() in C:\jacklidocs3\cprsrc\main\src\sqlnexus_pass\ReadTraceNexusImporter\ReadTraceNexusImporter.cs:line 364
   at sqlnexus.fmImport.DoImport() in C:\jacklidocs3\cprsrc\main\src\sqlnexus_pass\sqlnexus\fmImport.cs:line 557

The issue is usually due to one of two reasons:

  1. Your RML Utilities version is older than the current version – install the latest version of RML Utilities.
  2. Your RML Utilities location is not registered with SQL Nexus – this can be corrected by opening a Command Prompt, navigating to C:\Program Files\Microsoft Corporation\RMLUtils, and executing the following: orca.exe /R

Now SQL Server 2008 has BPA too

This has been an ask from the community for a long time now. We just launched the SQL Server 2008 R2 Best Practices Analyzer (BPA). The SQL Server rules are executed using the Microsoft Baseline Configuration Analyzer (MBCA) framework and are all PowerShell based, so you need PowerShell 2.0 and MBCA 2.0 to use SQL Server 2008 R2 BPA. This version of BPA has an extensive set of rules that can validate a lot of commonly known issues in areas like SQL Server setup, the database engine, replication, etc., which makes the work of a DBA much easier. Another cool feature of this tool is that you can also run remote scans against other SQL Server instances.

Addendum: April 13th, 2011

Note: It works with SQL Server 2008 and SQL Server 2008 R2.

Upcoming Zeollar Presentation

I will be presenting a 30-minute demo on how to perform post-mortem analysis of SQL Server performance issues using RML Utilities and SQL Nexus on 24th June, 2010. The recorded and live session will be available on Zeollar. Zeollar is a Microsoft India DPE initiative that gets you the latest technical content on a daily basis across different channels. Think of it as a personal television that switches on every day at a specified hour, letting you switch channels and view the one that interests you.

You can download the calendar invite from here.

Token Perm issues on SQL Server 2008

Performance issues were identified on certain SQL Server 2005 builds because of the large size of the access check cache store (TokenAndPermUserStore) on servers with huge ad-hoc query workloads. I had blogged about this earlier, along with information on how to customize the cache store size using trace flags and registry entries available after a certain build of SQL Server 2005.

With SQL Server 2008, we provide customization of the 'access check cache bucket count' and 'access check cache quota' settings through sp_configure. So, if you do need to change these values, you do not have to add trace flags or make registry changes; it is as simple as changing max degree of parallelism or max server memory on your server.

However, be advised that changing these values is not recommended unless CSS advises you to do so.
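Purely for illustration, this is how the two options are surfaced through sp_configure; both are advanced options, and the numbers below are placeholders rather than recommended values:

-- Both settings are advanced options, so they have to be exposed first
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
-- Placeholder values for illustration only; change them only on CSS guidance
EXEC sp_configure 'access check cache bucket count', 256
EXEC sp_configure 'access check cache quota', 1024
RECONFIGURE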

Reference:

Description of the "access check bucket count" and "access check cache quota" options that are available in the sp_configure stored procedure in SQL Server 2008

How to create a Profiler Trace during Server startup

I think I answered this question more than once on multiple different forums. So, I thought it was time to create a blog post on it.   

One of my previous posts has the steps to create a .sql file containing the server-side trace definition. Once you have the .sql file generated by the SQL Server Profiler GUI, use that script to create the stored procedure with the following changes:

declare @tracefilename nvarchar(255)

-- Build a unique file name from the current date and time (day, month, year, hour, minute)
set @tracefilename = N'D:\StartupTrace'+CAST(DATEPART(d,GETDATE()) AS varchar(2))+CAST(DATEPART(m,GETDATE()) AS varchar(2))+CAST(DATEPART(yyyy,GETDATE()) AS varchar(4))+CAST(DATEPART(hh,GETDATE()) AS varchar(2))+CAST(DATEPART(mi,GETDATE()) AS varchar(2))

-- @rc, @TraceID and @maxfilesize are already declared in the Profiler-generated script
exec @rc = sp_trace_create @TraceID output, 0, @tracefilename, @maxfilesize, NULL

You will have to add the DECLARE and SET statements shown above, and change the sp_trace_create call so that it uses @tracefilename instead of the hard-coded file name. This ensures that every time your SQL instance starts up, the trace file name is unique. If the file name is not unique, then the second time the service restarts you will get a trace creation error stating that the file already exists, and you will see the following in your SQL Server Errorlog:

Launched startup procedure ‘usp_startup_trace’.

Windows error occurred while running SP_TRACE_CREATE. Error = 0x80070050(The file exists.).

Error: 19062, Severity: 16, State: 1.
Could not create a trace file.

After this is done, wrap the trace definition in a stored procedure; let’s say the procedure is named “usp_startup_trace”. A sketch of what the wrapper could look like is shown below.
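A minimal sketch of the wrapper, assuming the sp_trace_setevent and sp_trace_setfilter calls generated by Profiler are pasted where indicated (the path D:\StartupTrace and the max file size are just examples):

CREATE PROCEDURE usp_startup_trace
AS
BEGIN
    DECLARE @rc int, @TraceID int, @maxfilesize bigint, @tracefilename nvarchar(255)
    SET @maxfilesize = 5

    -- Unique file name based on the current date and time
    SET @tracefilename = N'D:\StartupTrace'+CAST(DATEPART(d,GETDATE()) AS varchar(2))+CAST(DATEPART(m,GETDATE()) AS varchar(2))+CAST(DATEPART(yyyy,GETDATE()) AS varchar(4))+CAST(DATEPART(hh,GETDATE()) AS varchar(2))+CAST(DATEPART(mi,GETDATE()) AS varchar(2))

    EXEC @rc = sp_trace_create @TraceID OUTPUT, 0, @tracefilename, @maxfilesize, NULL

    -- Paste the sp_trace_setevent and sp_trace_setfilter calls generated by Profiler here

    -- Start the trace
    EXEC sp_trace_setstatus @TraceID, 1
END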

Now, you will need to mark this stored procedure as a startup procedure using the following:

EXEC sp_procoption 'usp_startup_trace', 'startup', 'true'

Once that is done, you will need to set the configuration option for the instance to scan for startup procedures; the default is 0. Since 'scan for startup procs' is an advanced option, 'show advanced options' must be enabled for sp_configure to accept it.

EXEC sp_configure 'scan for startup procs', 1
RECONFIGURE

The next time your SQL instance restarts, a new trace file will be created and sys.traces will show the new trace as active on the server; a quick query to verify this is shown below.
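For example, after the restart you can list the active traces; the startup trace will appear alongside the default trace:

SELECT id, path, status
FROM sys.traces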

Reference: sp_procoption