Why you shouldn’t NUKE your transaction log file


I have seen multiple scenarios where DBAs delete the transaction log (T-LOG) file of a SQL Server database because it is eating up disk space.

THIS IS WRONG AND SHOULDN’T BE DONE UNDER ANY CIRCUMSTANCES!

The transaction log of a SQL Server database maintains information about all logged activity that occurs in the database. During database startup, a recovery phase is initiated which performs the Analysis, Redo and Undo phases to bring the database into a consistent state. This entire process is dependent on the availability of the database transaction log. Without it, you are exposed to the loss of transactional consistency. Sounds like a fancy term, doesn’t it? All it means is that you can now have data in your database tables which shouldn’t be there.

Let me elaborate on this with an example. I created a small test database with a simple table containing two columns (record, amount) and inserted 1000 rows into it. The T-SQL commands used are:

create database dbLogTest
go
use dbLogTest
go
create table tblTest (record int, amount bigint)
go
set nocount on
declare @cntr int
set @cntr = 1
while (@cntr <= 1000)
begin
    insert into tblTest values (@cntr, @cntr * 100)
    set @cntr = @cntr + 1
end

A SELECT of the first 9 records in the table looks like this:

record      amount
----------- -----------
1           100
2           200
3           300
4           400
5           500
6           600
7           700
8           800
9           900

Now I start a transaction in the database to update the first 10 records, incrementing each amount by 100. I do NOT commit the transaction, and then something happens to my SQL Server transaction log file (maybe someone deleted the file while SQL Server was offline!).
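A minimal sketch of that open transaction (the exact statement is an assumption; any uncommitted update against those rows behaves the same way):

begin transaction
update tblTest
set amount = amount + 100
where record <= 10
-- deliberately no COMMIT: the transaction stays open

After I restart the SQL Server service and try to use the database, I get the following error in the SQL Server ERRORLOG: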

2010-05-04 18:20:25.740 spid19s      Starting up database ‘dbLogTest’.
2010-05-04 18:20:26.300 spid19s      Error: 17207, Severity: 16, State: 1.
2010-05-04 18:20:26.300 spid19s      FileMgr::StartLogFiles: Operating system error 2(The system cannot find the file specified.) occurred while creating or opening file ‘D:\Database Files\dbLogTest_log.LDF’. Diagnose and correct the operating system error, and retry the operation.
2010-05-04 18:20:26.700 spid19s      File activation failure. The physical file name "D:\Database Files\dbLogTest_log.LDF" may be incorrect.
2010-05-04 18:20:26.700 spid19s      The log cannot be rebuilt because the database was not cleanly shut down.

Since the transaction log file is not available, I cannot start the database. Now I will put the database in EMERGENCY mode and run CHECKDB against it, as sketched below. The CHECKDB output states that the database has no consistency errors, so the data file itself is intact.
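A sketch of those commands (EMERGENCY mode makes the damaged database accessible so that CHECKDB can run against it):

alter database dbLogTest set emergency
go
dbcc checkdb ('dbLogTest')
go

The relevant CHECKDB output: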

CHECKDB found 0 allocation errors and 0 consistency errors in database ‘dbLogTest’.
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

You would now expect that the data in the data file is consistent and can be exported out and merged with your database backup (if one exists). Before we pass the verdict of “everything is OK”, let’s have a look at the first 9 records in the tblTest table, which we modified before killing the SQL Server service.

record      amount
----------- -----------
1           200
2           300
3           400
4           500
5           600
6           700
7           800
8           900
9           1000

And now all the values are incremented by 100 even though you never committed the transaction. This is because a CHECKPOINT on the database would have flushed the dirty buffers to the data file while the transaction was still open. SQL Server follows WAL (Write Ahead Logging) for the transaction log, so the log held the information that the database had an open, uncommitted transaction which had modified data. Had the transaction log file been present, the database recovery process would have rolled those changes back, and you would have the data that is actually supposed to be in the database. The transaction log is the heart of the transactional consistency of a SQL Server database. If your database was not cleanly shut down, you will not be able to rebuild the transaction log with the ATTACH_REBUILD_LOG clause of CREATE DATABASE; the only option would be to export the data into another database with the same schema.

Whenever CSS is left with no option other than rebuilding the transaction log to get the database online, we explain in detail how rebuilding the transaction log can affect your transactional consistency. We never recommend rebuilding the transaction log. So, the best option is always to restore the last known good database backup and reconcile the remaining portion of the data (from the last backup to the present) manually. It is a manual operation, but one that cannot be compromised on when you are handling databases in a financial institution or other critical systems, where a mishap like the one described above could prove disastrous.
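A sketch of such a restore (the backup path here is a made-up example):

restore database dbLogTest
from disk = 'D:\Backups\dbLogTest_full.bak'  -- hypothetical backup location
with replace, recovery  -- REPLACE overwrites the damaged database
go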

A CREATE DATABASE with ATTACH_REBUILD_LOG for the database fails as well.
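The statement would look something like this (the .mdf path is an assumption based on the log path in the errors above):

create database dbLogTest
on (filename = 'D:\Database Files\dbLogTest.mdf')
for attach_rebuild_log
go

It fails with the following error: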

File activation failure. The physical file name "D:\Database Files\dbLogTest_log.LDF" may be incorrect.
The log cannot be rebuilt because the database was not cleanly shut down.
Msg 1813, Level 16, State 2, Line 1
Could not open new database ‘dbLogTest’. CREATE DATABASE is aborted.

This post was an attempt at explaining how the loss of a transaction log file can affect you adversely. So, the takeaways here are:

1. Ensure that you have a tried and tested disaster recovery plan in place to recover from hardware/software failures that may occur in your environment

2. Never ever DELETE your transaction log file

Rebuilding the transaction log of a SQL Server database is one of the TOP 2 worst things that you can do as a DBA. The second one is running CHECKDB with REPAIR_ALLOW_DATA_LOSS. I have already written about REPAIR_ALLOW_DATA_LOSS in a previous post of mine.

Read more about these in Paul Randal’s posts on MSDN:

Ta da! Emergency mode repair

When should you rebuild the transaction log?

A snippet from the above post:

Product Support will recommend rebuilding the transaction log only if something has corrupted it and you have no backups (plus gently reminding you to get a backup strategy). Before they do so they’ll make sure you’re aware of the possible consequences of doing it and advise you to run DBCC CHECKDB to determine whether the database is in a structurally and transactionally consistent state. If not, you’ll most likely need to run REPAIR_ALLOW_DATA_LOSS and then begin the process of working out what data was lost (and start dealing with irate customers!)

There is no excuse for not having a backup strategy, and there is definitely no excuse for running the not-recommended steps above when you have other options.

I have found various MVPs stressing the same point. Here is one such example from Gail [Blog].


How to find out how many objects of different types there are in a SQL database


On data corruption cases, I have sometimes found the need to compare the number of objects in the destination database with the number in the source database, to find out which objects got exported and which didn’t. You would need to run this script against both the source and the destination database and compare the output.

One of the drawbacks of the “Generate Scripts” option in SQL Server Management Studio is that SSMS scripts out the PRIMARY KEY constraints as part of the CREATE TABLE script. So, if the table has already been created, the primary key will not be created when you use the script generated by the SSMS Generate Scripts wizard; it has to be added separately, as sketched below.
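A hypothetical example of adding the primary key separately after the fact (the table, column and constraint names are illustrative):

alter table tblTest
add constraint PK_tblTest primary key (record)
go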

Script 

select
    case xtype
        when 'C'  then 'CHECK constraint'
        when 'D'  then 'Default or DEFAULT constraint'
        when 'F'  then 'FOREIGN KEY constraint'
        when 'L'  then 'Log'
        when 'FN' then 'Scalar function'
        when 'IF' then 'In-lined table-function'
        when 'P'  then 'Stored procedure'
        when 'PK' then 'PRIMARY KEY constraint (type is K)'
        when 'RF' then 'Replication filter stored procedure'
        when 'S'  then 'System table'
        when 'TF' then 'Table function'
        when 'TR' then 'Trigger'
        when 'U'  then 'User table'
        when 'UQ' then 'UNIQUE constraint (type is K)'
        when 'V'  then 'View'
        when 'X'  then 'Extended stored procedure'
        else 'UNKNOWN'
    end as object_type,
    count(*) as counts
from sys.sysobjects
group by
    -- T-SQL cannot group by a column alias, so the CASE expression is repeated
    case xtype
        when 'C'  then 'CHECK constraint'
        when 'D'  then 'Default or DEFAULT constraint'
        when 'F'  then 'FOREIGN KEY constraint'
        when 'L'  then 'Log'
        when 'FN' then 'Scalar function'
        when 'IF' then 'In-lined table-function'
        when 'P'  then 'Stored procedure'
        when 'PK' then 'PRIMARY KEY constraint (type is K)'
        when 'RF' then 'Replication filter stored procedure'
        when 'S'  then 'System table'
        when 'TF' then 'Table function'
        when 'TR' then 'Trigger'
        when 'U'  then 'User table'
        when 'UQ' then 'UNIQUE constraint (type is K)'
        when 'V'  then 'View'
        when 'X'  then 'Extended stored procedure'
        else 'UNKNOWN'
    end
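As a side note, sys.sysobjects is a deprecated compatibility view. On SQL Server 2005 and later, a near-equivalent (a sketch, not the script above) can lean on sys.objects, whose type_desc column already carries a readable description:

select type_desc, count(*) as counts
from sys.objects
group by type_desc
order by type_desc

Note that sys.objects lists only schema-scoped user objects, so the counts will not match the script above exactly (system tables, for instance, are excluded).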

CHECKDB with Repair Allow Data Loss: Should I shoot myself?


Over the past years, I have seen multiple DBAs execute DBCC CHECKDB with REPAIR_ALLOW_DATA_LOSS to repair corruption in their SQL Server databases without understanding the repercussions. Even when you are working with CSS on such issues, CSS engineers explain what the command does and also send across a disclaimer stating why this command should NOT be executed on a database and what the repercussions are. Alas, due to the lack of a sound database backup strategy, a lot of DBAs are forced to resort to losing data. In a nutshell, REPAIR_ALLOW_DATA_LOSS first tries to repair the corruption on the page; if it cannot, it checks whether it can drop the page to repair the error, hence the name Allow Data Loss! I have a question for all the DBAs who advocate the use of this command:

How would you like your Payroll DBA to run the command? (Good enough food for thought!!)

The only way to gauge how much data you lost is to run CHECKDB before executing CHECKDB with REPAIR_ALLOW_DATA_LOSS and again after the repair has completed, and then compare the row counts reported by the two CHECKDB outputs; the workflow is sketched below. If CHECKDB is failing due to system catalog corruption, then you are plain out of luck.
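A sketch of that workflow, reusing the database name from the earlier example (the repair options require single-user mode):

-- before the repair: note the per-object row counts in the CHECKDB output
dbcc checkdb ('dbLogTest')
go
alter database dbLogTest set single_user with rollback immediate
go
dbcc checkdb ('dbLogTest', repair_allow_data_loss)
go
alter database dbLogTest set multi_user
go
-- after the repair: run CHECKDB again and compare the row counts
dbcc checkdb ('dbLogTest')
go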

What most people fail to understand is that CHECKDB can verify the integrity of a database only against the rules SQL Server imposes on it: referential integrity, data lengths, record structures, page structures, header correctness, etc. But these are the database integrity rules SQL Server is aware of. What about the business logic rules that you imposed while designing the database?

E.g.: let’s say you had a database table called Employee which depended on a DepartmentID value being present in the Department table, but no foreign key relationship exists between the two tables. You experienced corruption in the database and ran CHECKDB with REPAIR_ALLOW_DATA_LOSS, which dropped a few pages from the Department table. CHECKDB cannot know that a DepartmentID is now missing from the Department table, because you never created a foreign key between the two tables. So when your application tries to retrieve the Department of an Employee, it will BARF on you! This is the kind of scenario we specifically want to avoid, which is why we advise against the use of data-loss commands. A reconciliation query like the sketch below is how you would even find such orphaned rows after the fact.
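A hypothetical reconciliation query for this scenario (table and column names are illustrative, not from a real schema):

-- find Employee rows whose DepartmentID no longer exists in Department
select e.*
from Employee as e
left join Department as d on d.DepartmentID = e.DepartmentID
where d.DepartmentID is null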

SQL Server cannot know what data is supposed to be present in the table rows. The only thing SQL Server can verify, while you retrieve data or check the integrity of the database, is that the column data matches the column datatype. I have heard questions like “Why can’t SQL Server figure out if the value is correct?”. Well, would you know why a person’s name is Uncle Sam and not Uncle Bill? 🙂 So how would SQL Server know that the FirstName column (as an example) was supposed to store Sam and not Bam? As far as SQL Server is concerned, both strings match the column datatype, so from a structural and logical integrity standpoint, the data is correct.