Virtual Tech Days Session

The 12th edition of Virtual Tech Days is scheduled from 28th September to 30th September. I will be delivering a session, Managing SQL Server Made Easy, under the Manage Your Data – DBA Track. You can register for this edition of Virtual Tech Days here.

In this session, I shall demonstrate how to use Management Studio and the existing features in SQL Server to manage your SQL Server instances. The session will cover Extended Events, Custom Reports, the Management Data Warehouse and much more.

Event Date: 29th September
Event Time: 4:15PM – 5:15PM IST

The DBA track will also feature sessions from well known SQL Server community members: Pinal Dave [Blog | Twitter] and Vinod Kumar [Blog | Twitter].

Pinal’s session: Ancient Trade of Performance Tuning – Index, Beyond Index and No Index | Timing: 11:45am-12:45pm

Vinod’s session: Monitoring Performance in Cloud for SQL Azure Applications | Timing: 3:00pm-4:00pm

My colleague, Balmukund Lakhani [Blog | Twitter], will also be delivering two sessions during Virtual Tech Days:

Tips for Successful SQL Server Deployment in Enterprise Environment | Timing: 10:30am-11:30am IST

High Availability – A Story from Past to Future | Timing: 1:45pm-2:45pm IST

Day 2 Agenda: http://www.virtualtechdays.com/agendaDay2.aspx

And there are some cool prizes to be won!! Will you be there?

T-SQL Tuesday #19: Disasters and Recovery

This month’s revolving blog party a.k.a. T-SQL Tuesday is being hosted by Allen Kinsel (blog | twitter). The topic for this month is Disasters and Recovery. The creation of fail-safe mechanisms is probably one of the most important facets of any IT administrator’s role in today’s world, where online transactions have become synonymous with daily life. When creating a DR strategy, you need to keep three things in mind:

1. RPO – Recovery Point Objective
2. RTO – Recovery Time Objective
3. SLA – Service Level Agreement

Mike Walsh has already documented the above three points in an earlier post for SQL University DBA Week. So what am I going to talk about in this post? Well, since a major part of my daily job involves working on critical situations, which sometimes involve disaster recovery, I will talk about some key but simple points that you should be aware of while restoring your database(s) in case your server encounters a disaster.

1. Always have backups – This point cannot be reiterated enough! You should always have backups of your databases, stored on separate media – not on the same disk drives that hold the database files. This ensures that you don’t have a single point of failure. I have seen a lot of scenarios where the backups and the database files are stored on the same drive. Once the drive goes BOOM!!… you are left with zilch! A bad scenario to be in!
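As a minimal sketch, a full backup written to a drive separate from the data files might look like this (the database name and paths below are illustrative placeholders, not from any real environment):

```sql
-- Full backup written to a dedicated backup drive, not the drive holding the .mdf/.ldf files.
-- [SalesDB] and E:\SQLBackups are placeholders; substitute your own database and target.
BACKUP DATABASE [SalesDB]
TO DISK = N'E:\SQLBackups\SalesDB_Full.bak'
WITH CHECKSUM,      -- verify page checksums while writing the backup
     INIT,          -- overwrite any existing backup set on this media
     STATS = 10;    -- report progress every 10%
```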

2. Test your backups – Just taking regular backups doesn’t ensure that you will be safe when disaster strikes. You need to restore your backups and verify that they restore successfully. If you have an automated DR strategy in place, it is always good to perform dry runs to ensure that your team is well versed in the recovery process when the need arises. You don’t want to be grappling with your restore scripts during a crisis. The next nugget of information: ensure that a DBCC CHECKDB on the restored database completes without any errors. Just because the restore was successful doesn’t mean that the database is consistent!
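A dry run of this verification, with hypothetical database names and paths, could look like the following sketch:

```sql
-- 1. Verify that the backup media is readable (this is NOT a consistency check).
RESTORE VERIFYONLY
FROM DISK = N'E:\SQLBackups\SalesDB_Full.bak' WITH CHECKSUM;

-- 2. Restore to a scratch database to prove the backup is actually usable.
--    The logical file names (SalesDB, SalesDB_log) are assumptions; confirm them
--    with RESTORE FILELISTONLY against your own backup first.
RESTORE DATABASE [SalesDB_Verify]
FROM DISK = N'E:\SQLBackups\SalesDB_Full.bak'
WITH MOVE N'SalesDB'     TO N'F:\TestRestore\SalesDB_Verify.mdf',
     MOVE N'SalesDB_log' TO N'F:\TestRestore\SalesDB_Verify.ldf',
     STATS = 10;

-- 3. Confirm the restored copy is consistent before trusting the backup.
DBCC CHECKDB (N'SalesDB_Verify') WITH NO_INFOMSGS;
```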

3. Know your environment – An application doesn’t depend on your database(s) alone. There might be customized connection settings, connection aliases, specific logins, database users, linked servers etc. which need to be kept handy in case you need to bring online a new environment that is a clone of your previous, disaster-ridden system. I have seen multiple cases where the databases were restored successfully but the logins and linked servers specific to the application were missing. You are then left with an environment that has the application databases, but the other specifics pertaining to the application’s functioning are missing.
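As a starting point for such an inventory, the server-level objects that a database restore alone will not bring back can be listed from the system catalog. This is only a sketch: connection aliases live client-side in SQL Server Configuration Manager and cannot be enumerated this way.

```sql
-- SQL and Windows logins that the restored databases will expect to exist.
SELECT name, type_desc, is_disabled
FROM sys.server_principals
WHERE type IN ('S', 'U', 'G')        -- SQL logins, Windows logins, Windows groups
  AND name NOT LIKE '##%';           -- skip internal certificate-based principals

-- Linked servers the application may depend on.
SELECT name, product, provider, data_source
FROM sys.servers
WHERE is_linked = 1;
```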

4. System databases need to be backed up also – System databases do need to be backed up as well. E.g., without a backup of the master database, in a disaster scenario you will be missing the logins that your application needs to log in to the user database.
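A minimal sketch of system database backups (the paths are placeholders):

```sql
-- master holds logins and server configuration; msdb holds Agent jobs, operators
-- and backup history; model seeds every newly created database.
BACKUP DATABASE [master] TO DISK = N'E:\SQLBackups\master.bak' WITH CHECKSUM, INIT;
BACKUP DATABASE [msdb]   TO DISK = N'E:\SQLBackups\msdb.bak'   WITH CHECKSUM, INIT;
BACKUP DATABASE [model]  TO DISK = N'E:\SQLBackups\model.bak'  WITH CHECKSUM, INIT;
```

Note that restoring master has its own procedure: the instance must be started in single-user mode (the -m startup option) before the restore can be performed.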

5. Benchmarking is very important – As I mentioned earlier, a dry run is very important. This is primarily because if you do not know how much time a restore is going to take, you cannot define your RTO and adhere to your agreed SLAs. A classic situation: the application needs to be up within 4 hours, but since no one tested the entire restore cycle, no one knows how long it will take to restore the available set of full/differential/log backups.

6. Avoid single points of failure – This is mostly considered a nice-to-have, but in critical environments I consider it a must-have! A simple implementation of this is redundancy: keep two copies of your database backups. If one set of backups is inconsistent, you have a redundant set to fall back on. A decision taken to save disk space by reducing the number of redundant copies can look very daft when you are unable to bring a production system online due to the unavailability of consistent backups.

7. Never rely on REPAIR_ALLOW_DATA_LOSS as your savior – The REPAIR_ALLOW_DATA_LOSS option of CHECKDB should always be your last resort! That is, only when all else fails do you resort to the repair options. This repair option should never be your first option for recovering from a disaster because, as the name states, it can result in data loss!
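For completeness, the shape of that last-resort invocation is sketched below ([SalesDB] is a placeholder). Restoring from a known-good backup is always preferable:

```sql
-- LAST RESORT ONLY: this repair may silently deallocate data to restore
-- physical consistency.
ALTER DATABASE [SalesDB] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;  -- repair needs exclusive access
DBCC CHECKDB (N'SalesDB', REPAIR_ALLOW_DATA_LOSS);
ALTER DATABASE [SalesDB] SET MULTI_USER;
-- Follow up with application-level reconciliation: CHECKDB reports what it
-- deallocated, and that data is gone unless it can be recovered from a backup.
```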

8. Know how long a CHECKDB takes to complete on the database – If you do not run CHECKDB regularly on the database for which you are creating a DR strategy, you are inviting trouble. Always run periodic CHECKDB on your databases and note the time taken, so that you have a fair estimate of how long a CHECKDB should take to complete successfully on the given database.
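A simple way to record that baseline (the database name is assumed):

```sql
-- Time a routine CHECKDB so the DR runbook has a realistic estimate.
DECLARE @start datetime2 = SYSDATETIME();

DBCC CHECKDB (N'SalesDB') WITH NO_INFOMSGS;

SELECT DATEDIFF(SECOND, @start, SYSDATETIME()) AS checkdb_duration_seconds;
-- Log this value over time; a sudden jump can itself be an early warning sign.
```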

9. Redundant database copies – A lot of environments use Database Mirroring, Log Shipping and Replication to maintain duplicate copies of the database. If you are using any of these features to maintain copies of your databases, you need to note two things. The first is the latency between the primary and secondary copies: this defines your RPO, as the average latency is the amount of data loss you should be prepared to deal with, and it also influences your RTO, since the time taken to recover the missing data is governed by that latency. The second point to keep in mind is that if you decide to promote one of the alternate copies of the database as the new production database, you need to ensure that you avoid certain gotchas. Example: orphaned users for SQL-authenticated logins when you use log shipping or database mirroring.
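For the orphaned-user gotcha specifically, a quick check and fix might look like this ([app_user] is a hypothetical user/login name):

```sql
-- List database users whose SIDs no longer match any server login.
EXEC sp_change_users_login @Action = 'Report';

-- Re-map an orphaned user to the login of the same name
-- (ALTER USER ... WITH LOGIN is available from SQL Server 2005 SP2 onwards).
ALTER USER [app_user] WITH LOGIN = [app_user];
```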

10. Keep in mind the additional SQL Server features being used – If you are using replication, mirroring or log shipping on the primary database being recovered, you need to account for additional steps before restoring the databases, as a simple restore of a database backup will not do for such a database. E.g., special considerations need to be followed for restoring replicated databases.

For the non-technical aspects, a disaster recovery plan should include the following (from Books Online):

  1. A list of people to be contacted if a disaster occurs
  2. Information about who owns the administration of the plan
  3. A checklist of required tasks for each recovery scenario. To help you review how disaster recovery progressed, initial each task as it is completed, and indicate the time when it finished on the checklist.

The above points might seem like basics, but it is surprising how often they are not religiously followed in production environments!

Tools Tips and Tricks: Round-up

Last month I ran a blog series on the different tools that CSS uses for troubleshooting certain SQL Server related issues. There were 12 posts that covered the use of different tools like the Debug Diagnostic Tool, XPerf, tools from Sysinternals, SQLDIAG, SQL Nexus, and some debugging tips.

Now it is time to do a SELECT [PostURL], [Summary] FROM [TroubleshootingSQL].[tblBlogSeries] WHERE [Series] = 'Tools Tips and Tricks'…

Post#1: Tools Tips and Tricks #1: Process Monitor – Some best practices to be followed when capturing Process Monitor traces to ensure that the data collection doesn’t cause additional performance issues.

Post#2: Tools Tips and Tricks #2: SQL Express RANU instances – Explains how to connect to SQL Express Run As User Instances (RANU) using existing tools.

Post#3: Tools Tips and Tricks #3: Custom Rowsets using SQL Nexus – Another quick tip on using SQL Nexus to import outputs of .sql scripts used to capture diagnostic data using the Rowset Importer.

Post#4: Tools Tips and Tricks #4: RML Utilities – Some helpful tips on the use of RML Utilities which is used by SQL Nexus under the hood for importing SQL Profiler traces into a SQL Server database.

Post#5: Tools Tips and Tricks #5: SQLDIAG and RANU – Explains how to capture diagnostic data for SQL Express Run As User Instances (RANU) using SQLDIAG.

Post#6: Tools Tips and Tricks #6: Custom Reports in SQL Nexus – A quick tip on how to create custom reports for SQL Nexus, a tool widely used within CSS for analyzing performance diagnostic data collected from a SQL Server instance.

Post#7: Tools Tips and Tricks #7: PsExec as parent and ProcMon as child – Explains how to use PsExec (a tool from Sysinternals) to launch a process remotely and capture data. In this example, I have used Process Monitor as the remote process.

Post#8: Tools Tips and Tricks #8 – Debug Diagnostic and Crash Rules – A walkthrough on using Debug Diagnostic tool for capturing crash dumps and analyzing them using the Crash Analysis rule.

Post#9: Tools Tips and Tricks #9: PSSDIAG Configuration Manager – Explains how to configure PSSDIAG collection using Configuration Manager GUI with a few tips and tricks on tweaking the XML configuration file.

Post#10: Tools Tips and Tricks #10: Caching PDB files locally – Explains how to cache symbol files locally using CDB.exe.

Post#11: Tools Tips and Tricks #11: Debug Diag and Memory leaks – A walkthrough on configuring Debug Diag for tracking memory leaks for a program which can be extended to tracking non-BPool allocations for SQL Server.

Post#12: Tools Tips and Tricks #12: XPerf, Memory usage and much more – A walkthrough on how to use XPerf Heap allocation tracking for identifying memory consumers for a program. Can be extended to SQL Server Out-of-Memory (OOM) issues for non-BPool memory crunch.

Webcast Material for Virtual Tech Days

In May, I delivered a webcast on “Understanding Performance Bottlenecks using Performance Dashboard”. The presentation material is now available on the SQLServerFAQ MSDN blog and the webcast videos are available for download on MSDN. Refer to the links below for the presentation decks and webcast video download links.

Managing and Optimizing Resources for SQL Server [PPT | Webcast Download] – Balmukund Lakhani [Blog | @Blakhani]
Optimizing and Tuning Full Text Search for SQL Server [PPT | Webcast Download] – Sudarshan Narasimhan [Blogs @ SQLServerFAQ]
Understanding Performance Bottlenecks using Performance Dashboard [PPT | Webcast Download | QnA]
Cool Tools to have for SQL Server DBA [Webcast Download] – Pradeep Adiga [Blog | @PradeepAdiga]
Learn Underappreciated Features of SQL Server to Improve Productivity [Webcast Download] – Nakul Vachhrajani [Blog]

 


QnA: Performance Dashboard Webcast

I had an overflow of questions during the webcast today and wasn’t able to answer a few as we ran short of time. I shall attempt to address the most common questions asked during the webcast.

How do I configure SQL Server Performance Dashboards?
SQL Server Performance Dashboard was originally released for SQL Server 2005. Setup instructions are available with the download link. However, the same install can be tweaked to work for SQL Server 2008 and SQL Server 2008 R2 using the blog post by Sudarshan: Configuring Performance Dashboard for SQL Server 2008
Download link: SQL Server 2005 Performance Dashboard

The SQL Server instance being monitored must be running SQL Server 2005 SP2 or later. After completing the installation, you must:
1. Run the Setup.sql file on each instance of SQL Server 200x that you wish to monitor with the SQL Server 2005 Performance Dashboard Reports.
2. Open the performance_dashboard_main.rdl file with the Custom Reports functionality new to Management Studio in SQL Server 2005 Service Pack 2.

Does Performance Dashboard work for SQL Server 2000?
No. Performance Dashboard uses DMVs to generate the data for the reports, and DMVs are only available in SQL Server 2005 and higher.

Where can I get the presentation and webcast recording?
The presentation and webcast recording for the entire webcast series will be made available on the SQLServerFAQ MSDN blog. I shall post an update once both are available.

What is the performance impact of running these reports?
If your server is unresponsive or experiencing high CPU usage on all processors, then these reports will also face issues while retrieving diagnostic data from the DMVs. However, the overall impact of running these reports on a SQL Server instance is minimal compared to most other performance diagnostic tools.

Can these reports be used on SQL Azure or any other RDBMS?
No, they cannot be used to monitor SQL Azure databases or any other non-SQL Server RDBMS environment.

What permissions are required for using Performance Dashboard?
You need to have VIEW SERVER STATE permission to view all the information exposed by Performance Dashboard as most of the information is collected from DMVs.
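Granting that permission to a non-sysadmin login is a one-liner (the login name below is purely illustrative):

```sql
USE [master];
-- Allows the login to query the server-scoped DMVs that the dashboard reports rely on.
GRANT VIEW SERVER STATE TO [CONTOSO\PerfDashboardUser];
```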

Can these reports be deployed to a local shared repository?
You can use Visual Studio to create custom reports similar to the Performance Dashboard reports, or modify the existing reports as per your needs and deploy them. Be advised that this will be a customized solution and will not be supported by CSS with respect to performance or configuration issues encountered while running these reports.

Lastly, thank you for attending the session! And for your patience.

In case there are more questions, please feel free to add a comment and I shall reply back accordingly.

Cheers!

 

Webcast: Understanding Performance Bottlenecks using Performance Dashboard

I will be delivering a webcast on “Understanding Performance Bottlenecks using Performance Dashboard”. The idea of this webcast is to demonstrate the use of tools offered by Microsoft to analyze performance bottlenecks while the issue is happening. This will be a good follow-up to the recent write-up that I did for SSWUG on tackling SQL performance issues using tools provided by Microsoft.

Time: 2:30PM IST – 3:45PM IST
Date: 4th May, 2011

Here is the session abstract:

This session will demonstrate how to troubleshoot a SQL Server performance issue using out-of-the-box features without having to collect diagnostic data for post-mortem analysis.

Apart from this session, there are other sessions being delivered as part of this webcast series.

Managing and Optimizing Resources for SQL Server (Date: May 2, 2011 Time: 2:30 pm – 3:45 pm IST)

Optimizing and Tuning Full Text Search for SQL Server (Date: May 3, 2011 Time: 2:30 pm – 3:45 pm IST)

Cool Tools to have for SQL Server DBA (Date: May 5, 2011 Time: 2:30 pm – 3:45 pm IST)

Learn Underappreciated Features of SQL Server to Improve Productivity (Date: May 6, 2011 Time: 2:30 pm – 3:45 pm IST)

If you are interested, here is the registration link for signing up: http://virtualtechdays.com/SQLServer2008R2/

 

SQL MEME Monday <= 11 words

I just came across this post by Thomas LaRock [Blog | Twitter] about writing a blog post on SQL Server in 11 words or less. So here are my 11-or-fewer words of wisdom:

Document unique solutions so that others can re-use them!

Tagged:

Pinal Dave [Blog | Twitter]

Pradeep Adiga [Blog | Twitter]

Argenis Fernandez [Blog | Twitter]
