T-SQL Tuesday #22–Data presentation…What’s that?

It’s time for T-SQL Tuesday again and this time around the topic is “Data Presentation”. The host for this month’s revolving blog party is Robert Pearl (Blog|Twitter) who stresses the need for efficient data presentation techniques… To which I can just sit and say “+1” or nod my head in the affirmative. However, I shall add my $1 worth of thoughts to the topic:

Just presenting raw data to an end-user will not create much of an impact, for two reasons:

a. The end-user may not be in the same line of work or hail from the same technical background as you.
b. Even if he/she does, the data will be viewed by additional people who shouldn’t have to depend on you or on the single end-user who understood the presentation.

I shall limit this post to data representation done using Excel/PowerPoint or even Reporting Services. The underlying data does come from a data source (facts and figures), but it is the job of the presenter to spruce up the raw format into something presentable.

If you have seen The Mask of Zorro, you will remember that Diego De La Vega (Anthony Hopkins) taught the disheveled Alejandro Murrieta (Antonio Banderas) to fight. But before sending him into the lion’s den, he taught him the finer skills of acting like a gentleman, i.e. acting the part of Don Alejandro! I view raw data as Alejandro Murrieta (a.k.a. Zorro) and the visual representation of that data as Don Alejandro!

Let me explain this with a SQL Server example. Let’s say the SQL Server CPU utilization and the system idle CPU utilization are being tracked using ring buffers; I had blogged about this in the past. Now the raw data retrieved needs to be shown in the format of a report. Here is a crude pivot chart that I generated based on the output of the query below:

select EventTime, system_idle_cpu, sql_cpu_utilization
from dbo.tbl_SQL_CPU_HEALTH
where EventTime is not null
order by EventTime
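For context, here is a minimal sketch of how such data could be captured from the scheduler monitor ring buffer. The table name and column names are taken from the query above; the column types and the exact collection logic are assumptions on my part, and the full script is in the earlier blog post I mentioned.

-- Sketch: capture SQL/idle CPU utilization from the scheduler monitor ring buffer
-- (dbo.tbl_SQL_CPU_HEALTH is assumed to exist with the columns used below)
;WITH RingBufferCPU AS
(
    SELECT CONVERT(xml, record) AS record, [timestamp]
    FROM sys.dm_os_ring_buffers
    WHERE ring_buffer_type = N'RING_BUFFER_SCHEDULER_MONITOR'
          AND record LIKE N'%<SystemHealth>%'
)
INSERT INTO dbo.tbl_SQL_CPU_HEALTH (EventTime, system_idle_cpu, sql_cpu_utilization)
SELECT DATEADD(ms, -1 * (inf.ms_ticks - rb.[timestamp]), GETDATE()) AS EventTime,
       rb.record.value('(./Record/SchedulerMonitorEvent/SystemHealth/SystemIdle)[1]', 'int') AS system_idle_cpu,
       rb.record.value('(./Record/SchedulerMonitorEvent/SystemHealth/ProcessUtilization)[1]', 'int') AS sql_cpu_utilization
FROM RingBufferCPU AS rb
CROSS JOIN sys.dm_os_sys_info AS inf;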

It is quite obvious from the report that the SQL CPU usage increased between 2:15 PM and 2:59 PM. However, since I made this chart, it is quite obvious to me. For an end-user, it might not be a pleasure to look at in the first place, and for a non-technical person, it might require more time than he/she would want to invest in deciphering the graph. The second graph shows some basic aesthetic changes which make it more readable at first glance. If you look at the Bottleneck Analysis report of SQL Nexus, you will find that a more refined version of this graph is provided.


While creating a report or summary which has visual representation, always think from a third person’s view! Effective visualization can really help boost the value of your presentation and drive the point home! An ineffective representation of data will just de-value your content, no matter how relevant it is!

T-SQL Tuesday #21: It’s easy to be lazy

This month’s revolving blog party is being hosted by the creator himself, Adam Machanic (blog|twitter). The topic Adam chose this month is about the forgettable practices/habits of the past that can be avoided. This is a topic I could write long rants about… but I shall limit myself to three areas where I have managed to chuck some bad habits and thereby reduced the amount of redundant time I used to spend because of them!

Your code needs some English!

One of the practices pertaining to T-SQL coding that was brought out in unison by the community members during last month’s T-SQL Tuesday was adding relevant comments to your code. I shall take this a notch higher and talk about adding comments for two reasons:

a. To remember why a change/modification was made to the code
b. To help someone supporting your code understand the logic behind what you have written

More often than not it’s convenience, and sometimes sheer laziness, which has prevented this. It happens especially during crisis situations where a quick change to the code resolves the situation but you forget to mention in the file why and when the change was made. This little indiscretion, of which I have been a victim as well as a perpetrator, has cost me a lot of precious time in the past. So don’t be lazy… Add a few English lines to your code to help the person supporting it!
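As an illustration, even a tiny change-history block at the top of a stored procedure answers the “why and when” questions months later. Everything below (object names, dates, initials) is a made-up example, not code from any real system:

/*
    Change history (hypothetical example)
    -------------------------------------
    2011-06-28  AB   Added the date filter on OrderDate as a stop-gap for the
                     month-end blocking issue; revisit once the new index ships.
    2011-05-02  CD   Initial version.
*/
ALTER PROCEDURE dbo.usp_GetOpenOrders
AS
BEGIN
    SET NOCOUNT ON;
    SELECT OrderID, CustomerID, OrderDate
    FROM dbo.Orders
    WHERE Status = 'OPEN'
          AND OrderDate >= DATEADD(MONTH, -3, GETDATE());  -- 2011-06-28: limit to recent orders
END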

Simplicity in source control can prevent hair loss

Not maintaining any form of version control for your source code is quite common when a single individual manages the entire project or tool development. But I still believe that even simple forms of source control can save you a lot of painful hours spent wishing you had the pill that Bradley Cooper’s character had in Limitless. You don’t need a version control system or a VSTS setup to manage your code. You could use simple approaches like taking a backup of your code, or naming your files v1, v1.2 and so on, before making a change to your application code that is too big to be described by way of comments without writing a small essay. This too is a form of laziness which in the past has cost me hours of coding time. So be smart and maintain your code in such a way that you can revert to a previous version in a matter of minutes!!

Documentation is an important key to saving time

I am sure all of you are well aware that documentation is necessary, but it can be mundane at times. The entire reason I set up this blog was to document those unique quirks about SQL Server and those unique solutions that I arrived at by looking at disparate pieces of information. I must admit that I do not have a photographic memory, so I need documentation that can be found with the help of a keyword search. I use the search on my blog to find my old posts!! If I didn’t document a unique solution, I would end up spending the same X amount of hours the next time I encounter the same issue. The value of documentation is that it saves time!

A wise man/woman learns from his/her mistakes. But a wiser man/woman learns from other people’s mistakes.

T-SQL Tuesday #20 Wrap-up and a few smiles

I had the privilege of hosting this month’s T-SQL Tuesday which is a recurring blog party, the brainchild of MVP Adam Machanic (blog|twitter). It goes without saying that the posts this time were a wonderful read and it’s very encouraging to see such brilliant engagement from the community. A big “thank you” to all the people who took the time to pen down a post for this month’s topic: “T-SQL Best Practices”. Whether you are a newcomer to the world of T-SQL or a seasoned pro, the insights shared in these posts are definitely a good read if you work with SQL Server.

Below is a round-up of all the posts from this month (in chronological order of comments appearing on my invitation post):

1. Jason Bacani [Blog | Twitter] writes about why T-SQL formatting is both a best practice and simply good practice, making life easier for the person supporting the code that you have written. I feel motivated to work on code only if I like looking at it.

2. Rob Farley [Blog | Twitter], a popular SQL Server MVP, talks about why understanding query plans can help in writing better and more efficient T-SQL code. Totally agree with Rob here and from experience I can say that this goes a long way in writing efficient T-SQL code.

3. Dirk Wegener [Blog] talks about why including comments to explain your code logic is important and reiterates why T-SQL formatting matters, in his first T-SQL Tuesday post. Way to go, Dirk, and we look forward to your participation in the future.

4. Noel McKinney [Blog | Twitter] talks about coding best practices which are quite pertinent in today’s development world. I completely agree that everything shouldn’t be done through T-SQL. Sometimes exploring other avenues may be the right answer!

5. Pinal Dave [Blog | Twitter], a friend and now a colleague at Microsoft, talks about practices that can be avoided in environments generally and not just from a T-SQL perspective.

6. Thomas Rushton [Blog | Twitter] talks about coding best practices, some of which are generally applicable to the world outside T-SQL as well. Absolutely loved the quote at the end of your post!

7. Rich Brown [Blog | Twitter] shows with an example how a developer managed to get into trouble by using an ORDER BY construct for an INSERT statement.

8. Sebastian Meine [Blog | Twitter] shows why an interface layer is required between the application and database. This is quite difficult to achieve due to various reasons (some of it maybe just ignorance sometimes) but if you can pull it off, it really does save a lot of time during a post-deployment crisis scenario.

9. Muthukkumaran Kaliyamoorthy [Blog | Twitter] talks about a list of DOs and DON’Ts while writing T-SQL code.

10. Nick Haslam [Blog | Twitter] talks about how multiple database files can help improve throughput of your DML queries for large data warehouses, with the help of some TPC-H test numbers that he shares in the same post.

11. Marcos Freccia [Blog | Twitter] talks about a recent performance problem he resolved by avoiding an implicit convert operation in the query plan with the help of a small change in his T-SQL query. An example of how following best practices can help performance.

12. Matt Velic [Blog | Twitter] talks about best practices to follow while writing T-SQL code from a readability, formatting and code clarity standpoint. Supporting and fixing a code is much faster if it’s easy to read!

13. Robert Matthew Cook [Blog | Twitter] shows how not following simple best practices led to a cascading effect where a stored procedure failed to execute. He goes the extra mile to explain why the issue happened!

14. Erin Stellato [Blog | Twitter] explains why following T-SQL coding standards and adding comments to your code can help save painful hours of searching and brain-wracking during the time of a crisis.

15. Josh Feierman [Blog | Twitter] talks about what can be done in the current scheme of things with the business angle kept in mind. What he describes in his post are ground realities, along with what can be done to counter them so that we stay efficient in what we do without having sleepless nights while handling a critical issue.

16. Aaron Bertrand [Blog | Twitter], a popular MVP, talks about some of the best practices that should be followed while writing T-SQL code. He provides examples on what kind of issues you could run into when such practices are not followed.

17. Jack Vamvas [Blog | Twitter] gives an example and shows how filtered indexes come to the rescue when T-SQL code which doesn’t follow best practices cannot be tuned due to support and license agreements.

18. Gabriel Villa [Blog | Twitter] talks about T-SQL coding best practices which would help in better management of the code base in the post-deployment phase. This would definitely lead to less consumption of aspirins for the team managing and supporting the code.

19. Nancy Hidy Wilson [Blog | Twitter] talks about adding proper comments to your T-SQL code in a crisp and concise post.

20. Ricardo [Blog | Twitter] reiterates why comments in the code are necessary, but also that they don’t necessarily need to be short stories. Additionally, he mentions testing your T-SQL code to ensure that it works in production and not just on the development box that you are using to write the code. The post is in Spanish, but Bing Translate helped me read the English version and I did have a smile on my face while reading through it.

21. Colleen Morrow [Blog | Twitter] explains with an example why using functions in the predicate of a T-SQL query can be bad karma for your code.

22. Carlos Bossy [Blog | Twitter] explains in his post why T-SQL code written to function in a procedural manner is probably not the best approach.

23. Jes Borland [Blog | Twitter] shares her top three T-SQL best practices and how keeping it simple is the best idea!

Once again, my sincere thanks to all the SQL folks that contributed to this month’s T-SQL Tuesday! And also, my thanks to Adam for letting me host this virtual blog party for this month!

Since a major part of my daily job involves troubleshooting SQL Server performance related issues, I do come across T-SQL code which, from a best practices standpoint, leaves a lot to be desired. So, I would say that the next few lines are my 2 cents based on what I have seen from experience, but I’m afraid it’s going to turn out to be a $1 worth of rambling!

1. If your company has established coding standards/formats, follow them – This helps keep things consistent across the board, and the standards and formats were, the majority of the time, put in place with good intentions. If there aren’t any defined, then define a set for yourself and follow them!
2. Use some form of source control – It could be as simple as adding comments in the procedure code itself. It really saves time, and helps blood pressure levels stay normal, when you are trying to find out why and when a small change was made to the code that is now the cause of a performance bottleneck!
3. Add comments/description to your code – I am yet to come across a production application which has a single person in charge of production, support, maintenance and version upgrades. More often than not, your code will be supported and maintained by someone else. Help that person understand why that piece of code is there and what it is supposed to do. What is obvious to you may not be obvious to someone else. So, rather than having a DBA curse you at an unearthly hour, mention (a single line is sufficient most times) why a best practice was not followed.
4. T-SQL is not the answer for everything – There are some things that can probably be done much better through code in some other language like C#. Don’t be afraid to explore such an option. At the end of the day, SQL Server is a database platform and NOT an application server. (*ducks and runs*)
5. It doesn’t hurt to look under the hood – It’s not strictly necessary to know how the query optimizer works, how to read a query plan or how indexes should be designed in order to write T-SQL code. But if you understand even some of this, you will find that you end up writing much more efficient T-SQL code than you did before you knew anything about the under-the-hood workings of the database engine. A small example follows this list.
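As a trivial illustration of point 5 (the table and column names here are hypothetical), a small rewrite can change what the optimizer is able to do with an index on the OrderDate column:

-- Non-sargable: wrapping the column in a function typically forces an index scan
SELECT OrderID
FROM dbo.Orders
WHERE YEAR(OrderDate) = 2011;

-- Sargable: the equivalent range predicate lets the optimizer seek on an OrderDate index
SELECT OrderID
FROM dbo.Orders
WHERE OrderDate >= '20110101' AND OrderDate < '20120101';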

Do not misunderstand the above points to mean that I write the best T-SQL code that can be written. All I am trying to say is: write T-SQL code which is easy to read, decipher and support! Even then, as a developer you might not become a DBA’s best friend overnight, but you would have taken a giant step towards getting there and bridging the chasm that exists between the two worlds!

While reading Thomas Rushton’s [Blog | Twitter] submission this month, I came across the quote mentioned below. Going forward, this is going to be my punch-line when I talk about best practices. If you are one of those individuals for whom a visual aid helps, then this quote should probably aid in following best practices while writing T-SQL code.

Always code as if the person who ends up maintaining your code is a violent psychopath who knows where you live.

*courtesy Jeff Atwood (blog|twitter)


Disclaimer: I have provided the correct links to your blog, twitter accounts and posts to the best of my knowledge. In case there are discrepancies, please leave a comment and I shall make the necessary corrections. Such an error was purely unintentional!

Invitation for T-SQL Tuesday #20 – T-SQL Best Practices

[T-SQL Tuesday logo]

What is T-SQL Tuesday?

T-SQL Tuesday is the brain-child of MVP Adam Machanic (blog|twitter). It happens once a month on the 2nd Tuesday and is hosted by a different person in the SQL community each time. The idea is to get the SQL community blogging on a common topic on a single day of the month, after which a round-up of all the participating posts is done by the hosting blog owner to give the SQL community a one-stop shop of some great articles on a single topic.

What is the topic for July?

The topic for July is “T-SQL Best Practices”. If you work with SQL Server, then undoubtedly you have had to write T-SQL queries at least once, or have had to debug a seemingly useful piece of T-SQL code written by your developers to find out where the performance bottleneck or problem was. Your post for this month’s revolving blog party could be along one of these areas:

a. A set of T-SQL best practices that you follow in your shop or that you believe should always be followed. It could be as specific as best practices for writing linked server queries or for writing queries in SSIS packages, etc.
b. An issue that you resolved because certain T-SQL best practices were not followed.
c. A workaround that you used (like query hints) to resolve an issue where T-SQL best practices couldn’t be implemented due to involvement of a third party solution.

Why did I choose this topic?

Over the years of troubleshooting SQL performance related issues, I have found on multiple occasions that the T-SQL query in question was performing badly because certain best practices for writing that piece of code were not followed, and the person responsible for the development had not foreseen that such an oversight could become a bottleneck when the data volume or the number of users increased. So, I thought it would be a good idea to get the SQL community’s thoughts around best practices in this area. Sometimes, the most obvious things are the easiest to overlook!

What are the Rules?

T-SQL Tuesday has some basic rules which are pretty easy to follow.

1. Your post must be published between 00:00 GMT Tuesday July 12, 2011, and 00:00 GMT Wednesday July 13, 2011.
2. Your post must contain the T-SQL Tuesday logo from above and the image should link back to this blog post.
3. Trackbacks should work, but if you don’t see one please link to your post in the comments section below so everyone can view your blog post.

Good-to-have

  • Include a reference to T-SQL Tuesday in the title of your post
  • Tweet about your post using the hash tag #TSQL2sDay
  • Being creative always fetches brownie points and gets a smile from the reader!

So now it’s time for:

SELECT [content] FROM [dbo].[tblbrain] WHERE [content_type] = 'T-SQL'

Before I wrap-up this post, a big thanks to Adam for letting me host T-SQL Tuesday for this month!

T-SQL Tuesday #19: Disasters and Recovery

This month’s revolving blog party a.k.a. T-SQL Tuesday is being hosted by Allen Kinsel (blog | twitter). The topic for this month is Disasters and Recovery. The creation of fail-safe mechanisms is probably one of the most important facets of any IT administrator’s role in today’s world, where online transactions have become synonymous with daily life. When creating a DR strategy, you need to keep three things in mind:

1. RPO – Recovery Point Objective
2. RTO – Recovery Time Objective
3. SLA – Service Level Agreements

Mike Walsh has already documented the above three points in an earlier post for SQL University DBA Week. So what am I going to talk about in this post….. Well, since a major part of my daily job involves working on critical situations which sometimes involve disaster recovery, I will talk about some key but simple points that you should be aware of while restoring your database[s] in case your server encounters a disaster.

1. Always have backups – This point cannot be reiterated enough! You should always have backups of your databases, and you should store those backups on separate media, not on the same disk drives that store the database files. This ensures that you don’t have a single point of failure. I have seen a lot of scenarios where the backups and the database files are stored on the same drive. Once the drive goes BOOM!!… you are left with zilch!! A bad scenario to be in!

2. Test your backups – Just taking regular backups doesn’t ensure that you will be safe when a disaster strikes. You need to restore your backups and ensure that they can be restored successfully. If you have an automated DR strategy in place, then it is always good to perform dry-runs to ensure that your team is well versed with the recovery process when the need arises. You don’t want to be grappling with your restore scripts during a crisis. The next nugget of information is to ensure that a DBCC CHECKDB on the restored database completes without any errors; just because the restore was successful doesn’t mean that the database is consistent! (A small sketch of such a verification pass appears after this list.)

3. Know your environment – An application doesn’t just depend on your database[s]. There might be customized connection settings, connection aliases, specific logins, database users, linked servers etc. which need to be kept handy in case you need to bring online a new environment which is a clone of your previous, disaster-ridden system. I have seen multiple occasions where the databases were restored successfully but the logins and linked servers specific to the application were missing. So now you have an environment which has the application databases, but other specifics pertaining to the application’s functioning are missing.

4. System databases need to be backed up also – System databases do need to be backed up as well. E.g. without a backup of the master database, in a disaster scenario you will be missing the logins that your application needs in order to connect to the user database.

5. Benchmarking is very important – As I mentioned earlier, a dry-run is very important. This is primarily because if you do not know how much time a restore is going to take, you cannot define your RTO and adhere to your agreed SLAs. A classic situation is that the application needs to be up within 4 hours, but since no one tested the entire restore cycle, no one knows how long it will take to restore the available set of full/differential/log backups.

6. Avoid a single point of failure – This is mostly considered a good-to-have, but in critical environments I consider it a must-have! A simple implementation of this is redundancy: keep two copies of your database backups. If one set of backups is inconsistent, you have a redundant set to fall back on. A decision taken to save disk space by reducing the number of redundant copies can look very daft when you are not able to bring a production system online due to the unavailability of consistent backups.

7. Never rely on REPAIR_ALLOW_DATA_LOSS as your savior – The REPAIR_ALLOW_DATA_LOSS option provided with CHECKDB should always be your last resort! This means that only when all else fails do you resort to repair options. This repair option should never be your first option for recovering from a disaster because, as the name states, it can result in data loss!!

8. Know how long a CHECKDB takes to complete on the database – If you do not run CHECKDB regularly on the database for which you are creating a DR strategy, then you are inviting trouble. Always run periodic CHECKDB on your databases and note the time taken so that you have a fair estimate on how long a CHECKDB should take to complete successfully on the given database.

9. Redundant database copies – A lot of environments use Database Mirroring, Log Shipping and Replication to maintain duplicate copies of the database. If you are using any of these features to maintain copies of your databases, then you need to note two things. The first is the latency between the primary and secondary copies: this defines your RPO, as the average latency is the amount of data loss that you should be prepared to deal with, and it also affects your RTO to some measure, since the time taken to recover the missing data is governed by that latency. The second point to keep in mind is that if you decide to use one of the alternate copies of the database as the new production database, then you need to ensure that you avoid certain gotchas. Example: orphaned users for SQL-authenticated logins when you use log shipping or database mirroring.

10. Keep in mind the additional SQL Server features being used – If you are using replication, mirroring or log shipping on the primary database being recovered, then you need to account for additional steps before restoring the databases, as a simple restore of a database backup will not do for such a database. E.g. special considerations need to be followed for restoring replicated databases.
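Tying point 2 and the CHECKDB advice together, a minimal sketch of a verification pass might look like the following. The database name, backup path and logical file names are hypothetical placeholders, not from any real environment:

-- 1. Quick sanity check that the backup media is readable
RESTORE VERIFYONLY FROM DISK = N'D:\Backups\SalesDB.bak';

-- 2. Actually restore it to a scratch database (the only real test of a backup)
RESTORE DATABASE SalesDB_Verify
FROM DISK = N'D:\Backups\SalesDB.bak'
WITH MOVE 'SalesDB'     TO N'E:\Verify\SalesDB_Verify.mdf',
     MOVE 'SalesDB_log' TO N'E:\Verify\SalesDB_Verify_log.ldf',
     STATS = 10;

-- 3. A successful restore still doesn't prove consistency
DBCC CHECKDB (N'SalesDB_Verify') WITH NO_INFOMSGS;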

For the non-technical aspects, a disaster recovery plan should include the following (from Books Online):

  1. A list of people to be contacted if a disaster occurs
  2. Information about who owns the administration of the plan
  3. A checklist of required tasks for each recovery scenario. To help you review how disaster recovery progressed, initial each task as it is completed, and indicate the time when it finished on the checklist.

The above points might seem like basics, but it is surprising how often they are not religiously followed in production environments!

T-SQL Tuesday #18: CTEs, XML and Process Monitor

This month’s recurring SQL blog party a.k.a. T-SQL Tuesday, started by Adam Machanic (blog | @AdamMachanic), is being hosted by Bob Pusateri (blog | @SQLBob). This month’s topic is Common Table Expressions, which were introduced for the first time in SQL Server 2005. A CTE can be thought of as a derived table which can be self-referenced within a query. With the advent of CTEs, a lot of T-SQL developers have made full use of this feature. Personally, I have used CTEs in various data collection and parsing scripts that I use daily for my day-to-day work. I am a big fan of CTEs and have introduced their usage in one of the blocking reports for SQL Nexus. As with any coding construct, the use of CTEs has its PROs and CONs, but in this post I shall not go down that path, as a lot depends on whether a particular scenario is a better fit for a CTE or a temporary table. That decision has to be made based on testing against the benchmarks or response-time SLAs that have been defined for your application.
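For anyone new to the construct, here is a minimal sketch of the self-referencing behavior mentioned above: a recursive CTE that generates the numbers 1 through 10.

;WITH Numbers (n) AS
(
    SELECT 1            -- anchor member
    UNION ALL
    SELECT n + 1        -- recursive member references the CTE itself
    FROM Numbers
    WHERE n < 10
)
SELECT n FROM Numbers;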

Since my primary expertise lies with SQL Server, I examine every opportunity for data analysis with the intention of importing the data into a SQL Server database table and then writing an automation routine over it, to save myself the repetitive iterations when I am analyzing the same data or need to perform the same set of steps over and over again. Recently I have shifted over to PowerShell due to my affinity for scripting, but that is a story for some other blog post. CSS uses Process Monitor, a vastly popular tool from Sysinternals, to troubleshoot various problems that we encounter while working on SQL Server related issues. Some of our adventures with Process Monitor are mentioned below:

PRB: SQL Backups to a UNC path fail with OS Error 1 (Incorrect Function)
INF: Permissions required for SQL Server Service account to use SSL certificate

During my tenure here with CSS, I too have relied on Process Monitor to troubleshoot some fairly complex issues. Sometimes the Process Monitor .PML files can be quite large if collected without adding appropriate filters, and I am interested in only a particular process (or processes). By importing the .PML file into a database, I can not only query the imported data using T-SQL but also write quick automations to look for common stack traces that have known issues/problems associated with them. Yes, with Process Monitor you can use locally cached symbols to get the call stack of the function calls as well. Note that saving a .PML into XML with resolved stack symbols is slow due to the time taken for symbol resolution. The .XML file that I will use as an example was saved using the “Extensible Markup Language (XML)” option with the “Include stack traces (will increase file size)” option enabled. I captured a Process Monitor trace while trying to access a directory which was not present on my server.

I already have a stored procedure which imports the saved .XML file into a database table, then parses that data and inserts the parsed data into the two tables that I have created to store the event and stack information.
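The actual table and stored procedure definitions are in the download linked at the end of this post. As a rough sketch of the shapes involved (the column data types below are my assumptions), the tables referenced in the queries that follow look roughly like this:

-- Raw file contents: one row per imported .XML file
CREATE TABLE dbo.tbl_XMLData       (fname varchar(255), xmldata xml);

-- Staging table: one row per <event> node extracted from the file
CREATE TABLE #tblXMLstaging        (eventid int IDENTITY(1,1), [event] xml, fname varchar(255));

-- Final tables: one row per event, and one row per stack frame of an event
CREATE TABLE dbo.tblProcMon_Events (StackID int, fname varchar(255), ProcessName varchar(255), PID int,
                                    Operation varchar(255), [Path] varchar(1000), Result varchar(255), Detail varchar(max));
CREATE TABLE dbo.tblProcMon_Stacks (StackID int, FName varchar(255), FrameDepth int,
                                    [Address] varchar(50), [Path] varchar(255), Location varchar(255));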

How does the use of CTEs fit in here?

The .XML file has the following structure for the stack and the frames:

<ProcessIndex>2083</ProcessIndex>
<Time_of_Day>5:52:38.8471548 PM</Time_of_Day>
<Process_Name>Explorer.EXE</Process_Name>
<PID>6588</PID>
<Operation>NotifyChangeDirectory</Operation>
<Path>D:\Tools</Path>
<Result></Result>
<Detail>Filter: FILE_NOTIFY_CHANGE_FILE_NAME, FILE_NOTIFY_CHANGE_ATTRIBUTES, FILE_NOTIFY_CHANGE_LAST_WRITE</Detail>
<stack>
<frame>
<depth>0</depth>
<address>0xfffff880013ab027</address>
<path>C:\Windows\system32\drivers\fltmgr.sys</path>
<location>fltmgr.sys + 0x2027</location>
</frame>

<frame>
<depth>1</depth>
<address>0xfffff880013abbe9</address>
<path>C:\Windows\system32\drivers\fltmgr.sys</path>
<location>fltmgr.sys + 0x2be9</location>
</frame>
….
….
</stack>

Now each operation has a call stack associated with it which I am interested in importing into my database. But with the existing schema above, I cannot directly get the information that I require. After exploring multiple ways of parsing the data using temporary variables, temporary tables, recursive CTEs, XML indexes and blah.. blah.., I found that the quickest way to do this was:

1. Get the event list imported into a temporary table
2. Using the event list generated above, create the call stacks from the <frame> nodes (see code snippet below)

The logic here is that the ctexml CTE used below extracts all the frames for a particular event using a counter variable, creating XML data for each of the frames, which are then parsed by referencing ctexml in the INSERT INTO…SELECT query.

-- Extract events from the XML file and put them into a staging table
INSERT INTO #tblXMLstaging (event, fname)
SELECT C.query('.') AS event, @fname
FROM dbo.tbl_XMLData
CROSS APPLY xmldata.nodes('/procmon/eventlist/event') AS T(C)
WHERE fname = @fname

-- Fetch values from staging table, store parsed frames in an XML and parse them to insert data into procmon_stacks table
WITH ctexml AS
(
    SELECT TOP 15 eventid, C.query('.') AS frames, fname
    FROM #tblXMLstaging
    CROSS APPLY event.nodes('/event/stack/frame') AS T(C)
    WHERE eventid = @counter
)
INSERT INTO dbo.tblProcMon_Stacks (StackID, FName, FrameDepth, [Address], [Path], Location)
SELECT
    eventid AS StackID,
    fname,
    frames.value('(/frame/depth)[1]', 'int') AS FrameDepth,
    frames.value('(/frame/address)[1]', 'varchar(50)') AS [address],
    frames.value('(/frame/path)[1]', 'varchar(255)') AS [Path],
    frames.value('(/frame/location)[1]', 'varchar(255)') AS [location]
FROM ctexml
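For completeness, in the stored procedure the statement above runs once per event inside a loop that increments the @counter variable. A rough sketch of that driving loop (the full procedure is in the download at the end of the post, so treat this as an approximation) might look like:

DECLARE @counter int = 1, @maxevent int;
SELECT @maxevent = MAX(eventid) FROM #tblXMLstaging;

WHILE (@counter <= @maxevent)
BEGIN
    -- the WITH ctexml ... INSERT INTO dbo.tblProcMon_Stacks statement shown above goes here
    SET @counter += 1;
END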


If you are curious as to how long it takes to import the data, it took me a little over a minute (84 seconds to be precise) to load a 100MB XML file, parse it and shred it! I am sure that there are optimizations possible to this method. The screenshot on the left shows the load times. The specifications of the system on which the data import was done are:

Operating System: Microsoft Windows Server 2008 R2 Enterprise (x64)

Machine: Hewlett-Packard HP Z800 Workstation

Processor: 2 Quad Core [Intel(R) Xeon(R) CPU E5506  @ 2.13GHz, 2128 Mhz, 4 Core(s), 4 Logical Processor(s)]

RAM: 16.0 GB

Hard Disk: Barracuda 7200.12 SATA 3Gb/s 500GB Hard Drive

SQL Server: SQL Server 2008 R2

Now that the data is available in a database table, I can query the information using T-SQL queries. I had attempted to open the directory C:\Foo on my server using Windows Explorer. This information and the call stack can be viewed using the following T-SQL queries:

-- Get the event information associated with the directory that I was looking up
select ProcessName, PID, Operation, Path, Result, Detail, StackID
from dbo.tblprocmon_events
where fname = 'FileNotFound.xml'
and path like '%C:\Foo%'
order by StackID

-- Get information for the call-stack using the StackID obtained above
select FrameDepth, address, Path, Location
from tblProcmon_stacks
where fname = 'FileNotFound.xml' and StackID = 662
order by FrameDepth

Below is a screenshot showing me that the directory doesn’t exist and the call-stack associated with the function call that failed.

The options now are limitless. You can take this forward by searching for common call stacks, in case you are troubleshooting an application developed by you and have access to the private symbols. Or you could look for common results for certain operations, like Operation = QueryDirectory and Result = NO SUCH FILE. My use of CTEs is for the following tasks:

1. Parsing XML data as shown here
2. Extracting useful information from Ring Buffers
3. Tracking available contiguous memory used and available in the SQL Server process address space

The scripts containing the Stored Procedure definitions and table schemas can be downloaded from here.

T-SQL Tuesday#17: It’s all about APPLYcation this time

It’s time for another round of T-SQL Tuesday and this round of the revolving blog party is being hosted by Matt Velic [Blog | Twitter].

APPLY – that is the topic for this month’s T-SQL Tuesday! The APPLY operator was added to the T-SQL repertoire and has resulted in reduced use of cursors in a large number of diagnostic scripts that CSS uses to collect data while working on SQL performance issues. In this post, I shall share a few examples of such queries that we use to collect data while working on SQL performance cases.

TOP Query Plan Statistics

The following query gives you a list of the SQL batches/procedures with their CPU usage, Query/Batch duration and Physical Reads rank. This query helps identify the TOP CPU/Duration/Read consuming queries by making use of system DMVs. The output below is useful for the following reasons:

1. I get the use count of the procedure/batch, and if the batch is called multiple times but the use count of a Compiled Proc cached object is only 1, then the plan is not being re-used. This tells me that I need to look at the reasons behind the inability to re-use the plan.

2. I get the total and average resource usage statistics for each of the queries listed in the output.

3. A quick glance at the output gives me an idea of the most expensive queries on the instance w.r.t. reads and/or CPU and/or query duration.


SELECT
    LEFT(p.cacheobjtype + ' (' + p.objtype + ')', 35) AS cacheobjtype,
    p.usecounts,
    p.size_in_bytes/1024 AS size_in_kb,
    PlanStats.total_worker_time/1000 AS tot_cpu_ms,
    PlanStats.total_elapsed_time/1000 AS tot_duration_ms,
    PlanStats.total_physical_reads,
    PlanStats.total_logical_writes,
    PlanStats.total_logical_reads,
    PlanStats.CpuRank,
    PlanStats.PhysicalReadsRank,
    PlanStats.DurationRank,
    LEFT(CASE WHEN pa.value = 32767 THEN 'ResourceDb' ELSE ISNULL(DB_NAME(CONVERT(sysname, pa.value)), CONVERT(sysname, pa.value)) END, 40) AS dbname,
    sql.objectid,
    CONVERT(nvarchar(50), CASE WHEN sql.objectid IS NULL THEN NULL ELSE REPLACE(REPLACE(sql.[text], CHAR(13), ' '), CHAR(10), ' ') END) AS procname,
    REPLACE(REPLACE(SUBSTRING(sql.[text], PlanStats.statement_start_offset/2 + 1, CASE WHEN PlanStats.statement_end_offset = -1 THEN LEN(CONVERT(nvarchar(max), sql.[text])) ELSE PlanStats.statement_end_offset/2 - PlanStats.statement_start_offset/2 + 1 END), CHAR(13), ' '), CHAR(10), ' ') AS stmt_text
FROM
(
    SELECT
        stat.plan_handle,
        statement_start_offset,
        statement_end_offset,
        stat.total_worker_time,
        stat.total_elapsed_time,
        stat.total_physical_reads,
        stat.total_logical_writes,
        stat.total_logical_reads,
        ROW_NUMBER() OVER (ORDER BY stat.total_worker_time DESC) AS CpuRank,
        ROW_NUMBER() OVER (ORDER BY stat.total_physical_reads DESC) AS PhysicalReadsRank,
        ROW_NUMBER() OVER (ORDER BY stat.total_elapsed_time DESC) AS DurationRank
    FROM sys.dm_exec_query_stats stat
) AS PlanStats
INNER JOIN sys.dm_exec_cached_plans p
    ON p.plan_handle = PlanStats.plan_handle
OUTER APPLY sys.dm_exec_plan_attributes (p.plan_handle) pa
OUTER APPLY sys.dm_exec_sql_text (p.plan_handle) AS sql
WHERE (PlanStats.CpuRank < 50
       OR PlanStats.PhysicalReadsRank < 50
       OR PlanStats.DurationRank < 50)
  AND pa.attribute = 'dbid'
ORDER BY tot_cpu_ms DESC


Top Queries with Similar Query Hash and Query Plan Hash

 

SELECT TOP 10
    query_plan_hash,
    query_hash,
    COUNT(DISTINCT query_plan_hash) AS 'distinct query_plan_hash count',
    SUM(execution_count) AS 'execution_count',
    SUM(total_worker_time) AS 'total_worker_time',
    SUM(total_elapsed_time) AS 'total_elapsed_time',
    SUM(total_logical_reads) AS 'total_logical_reads',
    MAX(REPLACE(REPLACE(SUBSTRING(st.[text], qs.statement_start_offset/2 + 1, CASE WHEN qs.statement_end_offset = -1 THEN LEN(CONVERT(nvarchar(max), st.[text])) ELSE qs.statement_end_offset/2 - qs.statement_start_offset/2 + 1 END), CHAR(13), ' '), CHAR(10), ' ')) AS sample_statement_text,
    MIN(CAST(query_plan AS varchar(max))) AS 'ShowPlan XML'
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS sp
GROUP BY query_plan_hash, query_hash
ORDER BY SUM(total_worker_time) ASC;

This query can help you identify queries which share the same query hash or query plan hash. The SQL Server Books Online topic “Finding and Tuning Similar Queries by Using Query and Query Plan Hashes” has more information on this. The query hash feature was added in SQL Server 2008, and it makes it easier to troubleshoot performance issues caused by ad-hoc queries which differ only in literal values. RML Utilities performs a similar task by creating a query hash, but now, if you are troubleshooting on the server, you can do this using DMVs without having to capture a profiler trace.
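As a quick illustration of the idea (a minimal sketch; the two statements below are throwaway examples), ad-hoc statements that differ only in a literal value end up with the same query_hash, which makes them easy to group and count from the DMV:

-- Run two ad-hoc statements that differ only in the literal value
-- (assuming they are not auto-parameterized by the engine)
SELECT name FROM sys.objects WHERE object_id = 3;
SELECT name FROM sys.objects WHERE object_id = 5;

-- Look for query hashes that map to more than one cached statement
SELECT query_hash,
       COUNT(*)             AS stmt_count,
       SUM(execution_count) AS total_executions
FROM sys.dm_exec_query_stats
GROUP BY query_hash
HAVING COUNT(*) > 1
ORDER BY stmt_count DESC;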

The right operand supplied to the APPLY operator is a function of one or more column values present in the left operand. So basically, the right operand is a table-valued expression which is evaluated once for each row that appears in the left operand. CROSS APPLY and OUTER APPLY are the two flavors of the APPLY operator. If I wanted to simulate an APPLY operation without the operator itself, it would require the use of temporary tables or table variables.
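A minimal sketch of the two flavors against a couple of DMVs (purely illustrative, not one of the CSS diagnostic queries): CROSS APPLY behaves like an inner join against the table-valued expression, while OUTER APPLY keeps left-hand rows even when the expression returns no rows.

-- CROSS APPLY: only databases that currently have at least one active request
SELECT d.name, r.session_id, r.status
FROM sys.databases AS d
CROSS APPLY (SELECT TOP (1) session_id, status
             FROM sys.dm_exec_requests
             WHERE database_id = d.database_id) AS r;

-- OUTER APPLY: every database, with NULLs where no request exists
SELECT d.name, r.session_id
FROM sys.databases AS d
OUTER APPLY (SELECT TOP (1) session_id
             FROM sys.dm_exec_requests
             WHERE database_id = d.database_id) AS r;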

I use the APPLY operator a lot while parsing XML data, like Process Monitor traces or XML query plans; it makes life a lot easier and saves me from writing a huge amount of T-SQL code.