Performance benchmarking: Explicit vs Implicit Transactions


Benchmarking… The single most important word in Performance Tuning. Without an established benchmark, you really don't know when to actually call it a day!

If you have ever worked on testing for database projects, you have probably tried to narrow down the data set to something closely representative of the actual production data set, or used a subset of queries representative of the workload that is expected or active on the production environment.

Now this brings me to the reason for this post. It is quite often during testing that we forget that SQL Server executes all DML operations in autocommit mode by default. This means that each individual statement is committed if it completes successfully. If you want to avoid this behavior, then you either need to set the IMPLICIT_TRANSACTIONS setting to ON for your database connection or use a BEGIN TRANSACTION command before you execute your query.
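To illustrate the two alternatives, here is a minimal sketch (the actual test batch used in this post appears later):

-- Option 1: implicit transaction mode -- SQL Server opens a transaction on the first DML
-- statement, but nothing is persisted until an explicit COMMIT is issued
SET IMPLICIT_TRANSACTIONS ON;
-- ... your DML statements ...
COMMIT TRANSACTION;
SET IMPLICIT_TRANSACTIONS OFF;

-- Option 2: wrap the work in an explicit transaction
BEGIN TRANSACTION;
-- ... your DML statements ...
COMMIT TRANSACTION;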

SQL Server follows Write Ahead Logging protocol which means no data modifications are written to disk before the associated log record is written to disk. This maintains the ACID properties for a transaction. (when you involve disk-caching, that opens up another can of worms but that discussion is outside the scope of this blog post)

My belief is that if you are comparing execution times between two different environments, then you should be doing that on exactly the same hardware configuration, and if that is not feasible, you should at least match the CPU, physical RAM and disk sub-system on both sides. I had already documented in an earlier blog post why RAM/CPU configurations are important when comparing execution statistics between two different SQL Server environments. Ideally, your test environment should be exactly the same as production, including the workload (i.e. number of users, connections, etc.). However, we all know that Utopia is not a place we live in, and hence the toned-down test scripts, scaled down/up environments, shared resources and I could keep lamenting!!

In the last month, I dealt with two such issues where a T-SQL batch performing a large number of inserts on a database table was being used to compare the performance between two different environments. Since I get called in to fix problems, and no one generally calls CSS to tell us that their server is running as expected… the problem invariably was that a bigger, beefier server was taking longer to execute the same batch. I shall demonstrate how not knowing about the WAL protocol can skew such test mechanisms and take you down the proverbial rabbit hole!

The script that I shall be using for demonstrating the pitfalls of this behavior is pretty simple:

declare @cntr int = 1
while @cntr <= 1000000
begin
    insert into tblInserts (SNo, RowVal) values (@cntr, 'Record ID ' + CAST(@cntr as varchar(7)))
    set @cntr += 1
end

The script inserts 1 million rows into a database table (a heap) using a while loop. While the inserts execute, I shall capture various performance counter values along with wait statistics for the query.
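The target table itself is not defined in the post, and the wait information shown below was sampled from a DMV; here is a minimal sketch of both, where the column data types and the session_id value are assumptions:

-- assumed definition of the heap target table (no clustered index)
create table tblInserts (SNo int, RowVal varchar(20))
go

-- sample the wait information for the inserting session from another connection
-- (session_id 51 is illustrative; poll this at regular intervals)
select getdate() as Runtime, session_id, status, wait_type, wait_time, last_wait_type
from sys.dm_exec_requests
where session_id = 51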

Modification: September 9th, 2011: Based on Kendra's (Twitter) feedback, changing the sub-headings. Test 1 makes use of auto-commit mode of SQL Server which is the default and Test 2 can be achieved either by using implicit transaction mode or performing an explicit transaction (BEGIN…COMMIT).

Test 1: With AutoCommit mode

For two iterations, the above script on average took 8 minutes and 30 seconds to complete. When I looked into the wait statistics captured (at 5 second intervals), I didn't see anything standing out other than a few non-zero wait durations for WRITELOG during certain periods. The non-zero wait times (in milliseconds) are shown below, with the highest value captured being 10 milliseconds. The Avg. Disk Sec/(Read/Write) counters didn't show any outstanding values during the data capture to indicate that the disk was a bottleneck, so why does the data below show so many WRITELOG waits? Keep in mind that the total time waited, as indicated by the data below, is also not a significantly large amount. So why is it taking over eight minutes to insert the data??

Runtime Session_id Status Wait_type Wait_time (ms) Last_Wait_Type
12:46 AM 51 suspended WRITELOG 10 WRITELOG
12:46 AM 51 suspended WRITELOG 7 WRITELOG
12:47 AM 51 suspended WRITELOG 1 WRITELOG
12:47 AM 51 suspended WRITELOG 2 WRITELOG
12:47 AM 51 suspended WRITELOG 3 WRITELOG
12:47 AM 51 suspended WRITELOG 2 WRITELOG
12:48 AM 51 suspended WRITELOG 7 WRITELOG
12:49 AM 51 suspended WRITELOG 8 WRITELOG
12:49 AM 51 suspended WRITELOG 1 WRITELOG
12:49 AM 51 suspended WRITELOG 10 WRITELOG
12:49 AM 51 suspended WRITELOG 6 WRITELOG
12:50 AM 51 suspended WRITELOG 2 WRITELOG
12:51 AM 51 suspended WRITELOG 3 WRITELOG
12:51 AM 51 suspended WRITELOG 4 WRITELOG
12:51 AM 51 suspended WRITELOG 1 WRITELOG
12:51 AM 51 suspended WRITELOG 2 WRITELOG
12:52 AM 51 suspended WRITELOG 5 WRITELOG
12:52 AM 51 suspended WRITELOG 7 WRITELOG
12:53 AM 51 suspended WRITELOG 1 WRITELOG
12:53 AM 51 suspended WRITELOG 8 WRITELOG

To explain where the query spent its time, I would need to fall back on SQL Server Performance Monitor counters (sampled at 5 second intervals; a sketch of how the same counters can be read from a DMV follows this list). On analysis of the performance monitor counters, I found the following:
a. SQLServer:Databases: Log Bytes Flushed/sec showed that on average 3.1 MB of log bytes were being flushed every 5 seconds for the database on which I was performing the inserts.
b. SQLServer:Databases: Log Flushes/sec showed that about 6000 log flushes were occurring for this database every 5 seconds on average.
c. SQLServer:Wait Statistics: Log write waits showed that on average 9000+ waits were started per second. However, the Cumulative wait time (ms) per second for the same counter showed negligible values, and the Average wait time (ms) value was nearly zero throughout the data collection period.
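If you prefer T-SQL over Performance Monitor, the raw values behind these counters can also be read from a DMV. A minimal sketch is below; note that the counters are cumulative, so you need to sample twice and compute the difference, and the database name is a placeholder:

-- raw values behind the Perfmon counters referenced above
-- object_name carries an instance prefix on named instances, hence the LIKE
select object_name, counter_name, instance_name, cntr_value
from sys.dm_os_performance_counters
where (counter_name in (N'Log Bytes Flushed/sec', N'Log Flushes/sec')
       and object_name like N'%:Databases%'
       and instance_name = N'<your database name>')
   or (counter_name = N'Log write waits'
       and object_name like N'%:Wait Statistics%')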

So how can I explain where the time is being spent?? I decided to run another test without autocommit, wrapping the inserts in a single transaction.

Test 2: Without AutoCommit mode

I ran two iterations of the above T-SQL batch within a BEGIN TRANSACTION…COMMIT block (a sketch of the wrapped batch follows the list below) and the average duration was 14 seconds! Whattttt!??!?! Yes… and all of this can be explained using the same counters that I looked at above. This time around, the performance monitor data told me the following story:
a. SQLServer:Databases: Log Bytes Flushed/sec showed that on average 26 MB of log bytes were being flushed every 5 seconds for the database on which I was performing the inserts.
b. SQLServer:Databases: Log Flushes/sec showed that about 468 log flushes were occurring for this database every 5 seconds on average.
c. SQLServer:Wait Statistics: Log write waits showed that on average only about 23 waits were started per second.
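For clarity, this is roughly what the Test 2 batch looked like: the same loop from above wrapped in a single explicit transaction (a sketch, not the exact script used):

begin transaction
declare @cntr int = 1
while @cntr <= 1000000
begin
    insert into tblInserts (SNo, RowVal) values (@cntr, 'Record ID ' + CAST(@cntr as varchar(7)))
    set @cntr += 1
end
-- the log buffer is now flushed when it fills up and at the final commit,
-- instead of once per INSERT
commit transaction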

If you look at the Performance Monitor graphs for the disk statistics that I captured for a single run of Test 1 (screenshot above) and Test 2 (screenshot on the left), you will see that the %Idle Time for the disk on which the database files resided (in my case, the F: drive) shows that the disk was busy about 50% of the time during Test 1 (see blue squiggly above), with the value climbing back to ~100% after the test completed. In contrast, the %Idle Time for the same disk has a very zig-zag pattern during Test 2 (see black squiggly on the left), which indicates that the disk was only sporadically busy, not continuously busy as observed in Test 1.

The Avg. Disk Sec/Write counter shows absolutely no latency, which means that there is no problem in terms of latency on the disks.

During Test 1, data was being written to the disk at a rate of 907 KB/s, whereas during Test 2 the write rate was 5 MB/s, as determined by monitoring the Disk Write Bytes/sec counter.

The reason for this difference is that every insert in Test 1 is followed by an autocommit, which means the log buffer has to be flushed for each insert. In Test 2, the log buffer was flushed far less often, but more data was written to disk with each log flush. Since SQL Server follows the WAL protocol, until the earlier log flush has completed, it cannot commit the transaction and move on to the next one.

If you are running a similar test on two different sets of hardware, then you would start having to chase down disk speeds/RPMs between the two servers. For servers whose disks are performing optimally, this behavior is quite difficult to pin down without the right set of data.

With SQL Server 2008, tracking down the waits for a single session is much, much easier with the help of XEvents. But that shall be a story for another day… errrr… post!!
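As a teaser, a minimal sketch of such a session is shown below; the session name and the session_id filter are illustrative, and the ring_buffer target is used only to keep the example self-contained:

-- capture wait information for a single session using Extended Events
CREATE EVENT SESSION [TrackSessionWaits] ON SERVER
ADD EVENT sqlos.wait_info
    (WHERE (sqlserver.session_id = 51))
ADD TARGET package0.ring_buffer;
GO
ALTER EVENT SESSION [TrackSessionWaits] ON SERVER STATE = START;
GO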

Moral of the story: If you are using a test query similar to the one shown above to test/benchmark performance and the query batch is not indicative of your actual workload, then you would probably not be able to gauge or establish an accurate performance benchmark for the queries executing on your system. And REMEMBER that SQL Server runs in auto-commit mode by default!

HAPPY TESTING!


Removing primary transaction log file


Recently I was discussing the removal of transaction log files on the #sqlhelp hashtag on Twitter. The question was whether we could remove an existing transaction log file after adding a new one. During the course of the discussion, I learnt that the file being removed was the primary transaction log file of the database.

When you attempt to remove the primary log file of a database from Management Studio or using the ALTER DATABASE command, you will receive the following error:

Drop failed for LogFile 'dbLogTest_log'.  (Microsoft.SqlServer.Smo)

The primary data or log file cannot be removed from a database. (Microsoft SQL Server, Error: 5020)
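For example, a minimal T-SQL repro of this error would look something like the following (the database and logical file names are inferred from the error text above and are illustrative):

-- attempt to drop the primary transaction log file of the database
ALTER DATABASE dbLogTest REMOVE FILE dbLogTest_log;
-- fails with Msg 5020: the primary data or log file cannot be removed from a database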

This is by design. We do not allow the removal of the primary transaction log file of user databases; this restriction was first introduced in SQL Server 2000.

Tibor (Blog), a SQL Server MVP, has already blogged about how to remove transaction log files using T-SQL commands here, so I am not going to duplicate his post by providing the same commands.

Why you shouldn't NUKE your transaction log file?


I have seen multiple scenarios where DBAs will delete the T-LOG file of a SQL Server database because it is eating up disk space.

THIS IS WRONG AND SHOULDN'T BE DONE UNDER ANY CIRCUMSTANCE!

The transaction log of a SQL Server database maintains information about all logged activity that occurs in that database. During database startup, a recovery phase is initiated which performs the Analysis, Redo and Undo phases to bring the database into a consistent state. The complete process is dependent on the availability of the database transaction log. Without it, you are exposed to the loss of transactional consistency. Sounds like a fancy term, doesn't it? All it means is that you can end up with data in your database tables which shouldn't be there.

Let me elaborate on this with an example. I created a small test database which has a simple table containing two columns (record, amount). I inserted 1000 rows in the table. The T-SQL commands used are:

create database dbLogTest
go
use dbLogTest
go
create table tblTest (record int, amount bigint)
go
set nocount on
declare @cntr int
set @cntr = 1
while (@cntr <= 1000)
begin
insert into tblTest values (@cntr, @cntr * 100)
set @cntr = @cntr + 1
end

A SELECT of the first 9 records in the table looks like this:

(Screenshot: records 1 through 9, each with amount = record * 100, i.e. 100 through 900.)

Now I start a transaction in the database to update the first 10 records with an increment of 100. I do NOT commit the transaction, and then something happens to my SQL Server transaction log file. (Maybe someone deleted the file while SQL Server was offline!!) After I restart the SQL Server service and try to use the database, I get the following error in the SQL Server ERRORLOG (a sketch of the uncommitted transaction follows the log excerpt):

2010-05-04 18:20:25.740 spid19s      Starting up database 'dbLogTest'.
2010-05-04 18:20:26.300 spid19s      Error: 17207, Severity: 16, State: 1.
2010-05-04 18:20:26.300 spid19s      FileMgr::StartLogFiles: Operating system error 2(The system cannot find the file specified.) occurred while creating or opening file 'D:\Database Files\dbLogTest_log.LDF'. Diagnose and correct the operating system error, and retry the operation.
2010-05-04 18:20:26.700 spid19s      File activation failure. The physical file name "D:\Database Files\dbLogTest_log.LDF" may be incorrect.
2010-05-04 18:20:26.700 spid19s      The log cannot be rebuilt because the database was not cleanly shut down.
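For reference, the uncommitted transaction described above would have looked something like this minimal sketch (the exact statement was not shown in the original post):

begin transaction
    -- increment the first 10 records by 100; the COMMIT is deliberately never issued
    update tblTest set amount = amount + 100 where record <= 10
-- the SQL Server service is then stopped and the log file deleted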

Since the transaction log file is not available, I cannot start the database. Now I will put the database in EMERGENCY mode and run a CHECKDB on the database. The CHECKDB output states that the database has no consistency errors. So the data file is intact and there are no errors.
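A minimal sketch of those two steps, assuming the database name used above (the exact options were not shown in the original post):

-- make the damaged database accessible and check the data file for corruption
ALTER DATABASE dbLogTest SET EMERGENCY;
GO
DBCC CHECKDB ('dbLogTest');
GO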

CHECKDB found 0 allocation errors and 0 consistency errors in database 'dbLogTest'.
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

You would now expect that the data in the data file is consistent and can be exported out and merged with your database backup (if one exists). Before we pass the verdict of "Everything is OK", let's have a look at the first 9 records in the tblTest table which we modified before killing the SQL Server service.

(Screenshot: the same 9 records, each amount now incremented by 100, i.e. 200 through 1000.)

And now you have all values incremented by 100 even though you didn't commit the transaction. This is because a CHECKPOINT on the database would have flushed all the dirty buffers to the data file. SQL Server follows WAL (Write Ahead Logging) for the transaction log, and the transaction log would have recorded that the database had an open, uncommitted transaction which had modified data. If the transaction log file had been present, the database recovery process would have rolled back the above changes and you would have the data which is supposed to be present in the database. The transaction log is the heart of the transactional consistency of a SQL Server database. If your database was not cleanly shut down, you will not be able to rebuild the transaction log with the ATTACH_REBUILD_LOG clause of CREATE DATABASE. The only option would be to export the data into another database with the same schema.

Whenever CSS is left with no option other than rebuilding the transaction log to get the database online, we explain in detail the ramifications and how rebuilding the transaction log can affect your transactional consistency. We never recommend rebuilding the transaction log. The best option is always to restore the last known good database backup and reconcile the remaining portion of the data (from the last backup to the present) manually. It is a manual operation, but something that cannot be compromised on when you are handling databases in a financial institution or other critical systems, where a mishap like the one described above could prove disastrous.

A CREATE DATABASE with ATTACH_REBUILD_LOG for the database fails with the following error:

File activation failure. The physical file name "D:\Database Files\dbLogTest_log.LDF" may be incorrect.
The log cannot be rebuilt because the database was not cleanly shut down.
Msg 1813, Level 16, State 2, Line 1
Could not open new database 'dbLogTest'. CREATE DATABASE is aborted.
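For reference, the attach attempt that produces this error would look something like the sketch below (the data file path is an assumption based on the log file path shown earlier):

-- try to attach the data file and rebuild a new transaction log
CREATE DATABASE dbLogTest
    ON (FILENAME = N'D:\Database Files\dbLogTest.mdf')
    FOR ATTACH_REBUILD_LOG;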

This post was an attempt at explaining how the loss of a transaction log file can affect you adversely. So, the takeaways here are:

1. Ensure that you have a tried and tested disaster recovery plan in place to recover from hardware/software failures that may occur in your environment

2. Never ever DELETE your transaction log file

Rebuilding the transaction log of a SQL Server database is one of the TOP 2 worst things that you can do as a DBA. The second one is running CHECKDB with REPAIR_ALLOW_DATA_LOSS. I have already written about REPAIR_ALLOW_DATA_LOSS in a previous post of mine.

Read more about these in Paul Randal’s posts on MSDN:

Ta da! Emergency mode repair

When should you rebuild the transaction log?

A snippet from the above post:

Product Support will recommend rebuilding the transaction log only if something has corrupted it and you have no backups (plus gently reminding you to get a backup strategy). Before they do so they'll make sure you're aware of the possible consequences of doing it and advise you to run DBCC CHECKDB to determine whether the database is in a structurally and transactionally consistent state. If not, you'll most likely need to run REPAIR_ALLOW_DATA_LOSS and then begin the process of working out what data was lost (and start dealing with irate customers?)

There is no excuse for not having a backup strategy, and there is definitely no excuse for performing the NOT recommended steps above when you have other options.

I have found various MVPs stressing the same point. Here is one such example from Gail [Blog].