Troubleshooting SQL Setup Failures

  1. The setup logs for SQL Server 2005 are created in the following location:
    “(system drive):\Program Files\Microsoft SQL Server\90\Setup Bootstrap\LOG” 
    There are two folders underneath this location:
    a. Files
    b. Hotfix
  2. The files under the Files folder are created during the RTM installation of SQL Server 2005
  3. The files under the Hotfix folder are created during Service Pack/Hotfix installation
  4. For every failed setup of SQL Server 2005, a .CAB (cabinet) file is created under the Files or Hotfix folder, depending on whether you are performing an RTM, Service Pack, or hotfix installation
  5. The LOG folder also contains a summary.txt file which gives a brief overview of which component(s) failed to install
  6. SQL Server 2005 uses Windows Installer (MSI) to install its components: RTM setup uses .MSI files and patches use .MSP files. The error number that you see in the setup summary.txt file will also be present in the associated MSI or MSP log, along with extended information about which function was being called. For example, if setup fails during the configuration phase, the log might contain connectivity errors from the attempts setup made to connect to SQL Server, which provides further hints for troubleshooting the issue
  7. Using the error number and the component name, search http://support.microsoft.com/ for any helpful articles
  8. If you are getting nowhere, post a question on the MSDN/TechNet forums (Setup and Upgrade) or open a support incident with Microsoft Product Support Services

Please refer to the following for more details:
http://technet.microsoft.com/en-us/library/ms143702.aspx
http://msdn2.microsoft.com/en-us/library/ms144287.aspx

Another comprehensive blog post which talks about SQL Server 2005 Setup Logs:

http://blogs.msdn.com/raja_sivagaminathan/archive/2010/01/07/sql-server-2005-setup-log-files.aspx

SQL Server 2000 Setup Failures
For RTM installations of SQL Server 2000, setup creates a log named sqlstp.log in the Windows (%windir%) folder.
The Service Pack setup logs are created with the filename sqlsp.log in the same location.
The hotfix logs, however, are created in the same location with the prefix "KB" (KB*****.log).

SQL Server 2000 setup doesn't create multiple setup logs. You can open sqlstp.log (RTM setup), sqlsp.log (Service Pack setup), or KB*****.log (patch setup) and look for "Return Value 3". When an action performed by a setup function fails, setup records a return value of 3 in the log.
The drawback with the SQL Server 2000 setup logs is that they get overwritten every time setup is run. In SQL Server 2005 this is not the case: all the logs are retained in the Setup Bootstrap folder, and a new set of log files is created with incremental numbers in the same location.
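As a quick illustration (assuming the log is in the default %windir% location), the failing action can be located by searching the log for that string, either from a command prompt or, as sketched below, through xp_cmdshell:

-- Search the SQL Server 2000 RTM setup log for failed actions
exec xp_cmdshell 'findstr /i /n /c:"return value 3" "%windir%\sqlstp.log"'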

Database Integrity Checks

Database integrity checks are recommended to ensure that database consistency is intact; if there is a problem with consistency, it should be reported to the appropriate team(s) so that the necessary action can be taken to rectify it. These checks can be scheduled with the help of Database Maintenance Plans. In the event that a CHECKDB on a database fails, the failure needs to be reported by email (using xp_sendmail, SQL Agent Mail, or Database Mail for SQL Server 2005) or by events fired in the operating system event logs (if these are monitored regularly), with the help of operators configured for SQL Server Agent. Note that the xp_sendmail feature is not available for the SQL Server 2005 64-bit versions.
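For illustration, a minimal sketch of the check itself is shown below; the database name is a placeholder, and the options simply restrict the output to errors only:

-- Routine consistency check: suppress informational messages, report every error
DBCC CHECKDB (N'YourDatabase') WITH NO_INFOMSGS, ALL_ERRORMSGS;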
The frequency of these checks largely depends on the following factors:
1. Importance of the database
2. How often data changes in the database (if an integrity check fails for a database where data is not modified, it would be advisable to restore the last known good backup rather than trying to repair the database)
3. The size of the database
4. In the event of consistency checks failing, it needs to be determined which is the most feasible option:
a. Restore the last known good backups in accordance with the recovery model for that database to allow for a minimum amount of data loss
b. Try to repair the database, falling back on option (a) only if the repair fails.
If a repair option is suggested in the CHECKDB output, it is important to note that REPAIR_ALLOW_DATA_LOSS should never be performed on a database without understanding its full consequences and consulting Microsoft PSS. If that route has to be taken, it is always recommended to fall back on the last known good backups if possible. The REPAIR_FAST and REPAIR_REBUILD options can be performed without any data loss. Please note that repairs are time-consuming operations, and when a database is inconsistent it is not possible to predict how long they will run. Likewise, the time taken by CHECKDB on a database cannot be predicted exactly; an educated guess can be made from the durations of previous CHECKDB operations on that particular database. For the above-mentioned repair options, please refer to the following article: http://msdn2.microsoft.com/en-us/library/aa258278(SQL.80).aspx

REPAIR_ALLOW_DATA_LOSS
Performs all repairs done by REPAIR_REBUILD and includes allocation and deallocation of rows and pages for correcting allocation errors, structural row or page errors, and deletion of corrupted text objects. These repairs can result in some data loss. The repair may be done under a user transaction to allow the user to roll back the changes made. If repairs are rolled back, the database will still contain errors and should be restored from a backup. If a repair for an error has been skipped due to the provided repair level, any repairs that depend on the repair are also skipped. After repairs are completed, back up the database. This option shouldn’t be used without the assistance of a SQL Support Engineer.
REPAIR_FAST
Performs minor, nontime-consuming repair actions such as repairing extra keys in nonclustered indexes. These repairs can be done quickly and without risk of data loss.
REPAIR_REBUILD
Performs all repairs done by REPAIR_FAST and includes time-consuming repairs such as rebuilding indexes. These repairs can be done without risk of data loss. In SQL Server 2005, you also have the option of checking the suspect_pages table in the msdb database to find out which pages are affected.
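As a hedged sketch of how a repair level is actually invoked (using REPAIR_REBUILD, which carries no risk of data loss; the database name is a placeholder), the database must first be placed in single-user mode:

-- Repair requires exclusive access to the database
ALTER DATABASE YourDatabase SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DBCC CHECKDB (N'YourDatabase', REPAIR_REBUILD) WITH NO_INFOMSGS;
ALTER DATABASE YourDatabase SET MULTI_USER;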

Please refer to the following articles for more detailed information:
Suspect_pages table (SQL 2005)
http://msdn2.microsoft.com/en-us/library/ms174425.aspx
Understanding and managing the suspect_pages table
http://msdn2.microsoft.com/en-us/library/ms191301.aspx
Designing a Backup and Restore Strategy
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/adminsql/ad_bkprst_63eh.asp

If only a few pages have a problem, then in SQL Server 2005 a page-level restore can be performed. Online page restore is a feature of SQL Server 2005 Enterprise Edition; all other editions of SQL Server 2005 support offline page-level restores. A page-level restore is performed with the NORECOVERY option for that database; a backup of the current transaction log is then taken and applied to the database with the RECOVERY option. This feature applies to databases using the FULL or BULK_LOGGED recovery models.
For performing page-level restores, please refer to the following article: http://msdn2.microsoft.com/en-us/library/ms175168.aspx
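The commands below are only a sketch of that sequence; the database name, backup paths, and page ID (in file:page format, taken from the suspect_pages table or from the CHECKDB errors) are placeholders.

-- Identify the damaged pages recorded by SQL Server 2005
SELECT database_id, file_id, page_id, event_type, error_count, last_update_date
FROM msdb.dbo.suspect_pages;

-- Restore the damaged page from the last known good full backup
RESTORE DATABASE YourDatabase PAGE = '1:57'
FROM DISK = 'D:\Backups\YourDatabase_Full.bak'
WITH NORECOVERY;

-- Apply any existing log backups WITH NORECOVERY, then back up and restore
-- the tail of the log to bring the page up to date
BACKUP LOG YourDatabase TO DISK = 'D:\Backups\YourDatabase_TailLog.trn';
RESTORE LOG YourDatabase FROM DISK = 'D:\Backups\YourDatabase_TailLog.trn' WITH RECOVERY;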
It is highly important that a disaster recovery plan is in place to ensure the following:

  • A plan to acquire hardware in the event of hardware failure.
  • A communication plan.
  • A list of people to be contacted in the event of a disaster.
  • Instructions for contacting the people involved in the response to the disaster.
  • Information on who owns the administration of the plan.
  • A checklist of required tasks for each recovery scenario. To help you review how disaster recovery progressed, initial each task as it is completed, and indicate the time of completion on the checklist.

In conclusion, if the database is of a considerably large size, the integrity check needs to be scheduled during a window when the load on the server is at a minimum, that is, a load lower than the normal workload on the server. If the database sizes are quite small, then daily integrity checks on the databases would be in order.
It is recommended that DBCC CHECKDB be run during hours when the load is light on the server. If DBCC CHECKDB is run during heavy peak usage time, expect a performance hit on the transaction throughput as well as DBCC CHECKDB completion time.

Recommendations for Good DBCC Performance
· Run CHECKDB when the system usage is low.
· Be sure that you are not performing other disk I/O operations, such as disk backups.
· Place tempdb on a separate disk system or a fast disk subsystem.
· Allow enough room for tempdb to expand on the drive. Use DBCC CHECKDB WITH ESTIMATEONLY to estimate how much space will be needed in tempdb (see the example after this list).
· Avoid running CPU-intensive queries or batch jobs.
· Reduce active transactions while a DBCC command is running.
· Use the NO_INFOMSGS option to reduce processing and tempdb usage significantly.
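For example, the tempdb space estimate mentioned above can be obtained as follows (the database name is a placeholder):

-- Estimate the tempdb space CHECKDB would need, without running the checks
DBCC CHECKDB (N'YourDatabase') WITH ESTIMATEONLY;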

How to find out the CPU usage information for the SQL Server process using Ring Buffers

SQL Server 2005 ring buffers store information related to CPU usage. This information is updated every minute, so at any given point in time you can trace back the CPU usage for approximately the last 4 hours and 15 minutes. The sample output below shows the SQL instance's CPU utilization against the CPU utilization of the other processes active on the server. This helps identify whether the SQL Server process is actually driving CPU.

Sample Output:

Event Time                SQL Process Utilization    System Idle    Other Process Utilization
2009-11-19 20:00:49.947   0                          96             4
2009-11-19 19:59:49.933   0                          92             8
2009-11-19 19:58:49.910   0                          91             9
2009-11-19 19:57:49.893   0                          92             8

For SQL Server 2005: 

declare @ts_now bigint
select @ts_now = cpu_ticks / convert(float, cpu_ticks_in_ms) from sys.dm_os_sys_info

select record_id,
       dateadd(ms, -1 * (@ts_now - [timestamp]), GetDate()) as EventTime,
       SQLProcessUtilization,
       SystemIdle,
       100 - SystemIdle - SQLProcessUtilization as OtherProcessUtilization
from (
    select record.value('(./Record/@id)[1]', 'int') as record_id,
           record.value('(./Record/SchedulerMonitorEvent/SystemHealth/SystemIdle)[1]', 'int') as SystemIdle,
           record.value('(./Record/SchedulerMonitorEvent/SystemHealth/ProcessUtilization)[1]', 'int') as SQLProcessUtilization,
           timestamp
    from (
        select timestamp, convert(xml, record) as record
        from sys.dm_os_ring_buffers
        where ring_buffer_type = N'RING_BUFFER_SCHEDULER_MONITOR'
        and record like '%<SystemHealth>%') as x
    ) as y
order by record_id desc

For SQL Server 2008: 

declare @ts_now bigint
select @ts_now = ms_ticks from sys.dm_os_sys_info

select record_id,
       dateadd(ms, (y.[timestamp] - @ts_now), GETDATE()) as EventTime,
       SQLProcessUtilization,
       SystemIdle,
       100 - SystemIdle - SQLProcessUtilization as OtherProcessUtilization
from (
    select record.value('(./Record/@id)[1]', 'int') as record_id,
           record.value('(./Record/SchedulerMonitorEvent/SystemHealth/SystemIdle)[1]', 'int') as SystemIdle,
           record.value('(./Record/SchedulerMonitorEvent/SystemHealth/ProcessUtilization)[1]', 'int') as SQLProcessUtilization,
           timestamp
    from (
        select timestamp, convert(xml, record) as record
        from sys.dm_os_ring_buffers
        where ring_buffer_type = N'RING_BUFFER_SCHEDULER_MONITOR'
        and record like '%<SystemHealth>%') as x
    ) as y
order by record_id desc

When are memory dumps generated in SQL Server

If your SQL Server instance encounters the following conditions, a mini-dump is created in the SQL Server LOG folder:

  1. Non-yielding scheduler (SQL Server 2005 and above, for the first occurrence. This is equivalent to a 17883 dump in SQL Server 2000.)
  2. Non-yielding resource monitor (SQL Server 2005 and above, for the first occurrence)
  3. Non-yielding IOCP listener (SQL Server 2005 and above, for the first occurrence)
  4. Deadlocked schedulers (SQL Server 2005 and above. This is equivalent to a 17884 dump in SQL Server 2000.)
  5. Exceptions/Assertions
  6. Database Corruption
  7. Latch Timeout
  8. Other conditions (e.g., when DUMPTRIGGER is used to generate a dump for a particular exception)

The first 4 conditions are documented under the following whitepaper:

How To Diagnose and Correct Errors 17883, 17884, 17887 and 17888

In the SQL Server error logs, you will see a message indicating which of the conditions listed above resulted in the generation of the dump. You can use the KB article mentioned below to list the stack using public symbols with the help of WinDbg:

Use the Microsoft Symbol Server to obtain debug symbol files

http://support.microsoft.com/kb/311503
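As an illustrative, hedged way to locate these messages and the dump files from a query window, the sketch below uses the undocumented xp_readerrorlog procedure and xp_cmdshell; the LOG folder path shown is only an example for a default SQL Server 2005 instance and will differ on your server.

-- Search the current SQL Server error log for dump-related messages
-- (undocumented procedure: 0 = current log, 1 = SQL Server error log, N'dump' = search string)
EXEC master.dbo.xp_readerrorlog 0, 1, N'dump';

-- List the mini-dump files (SQLDump*.mdmp) in the instance LOG folder (example path)
EXEC xp_cmdshell 'dir "C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\LOG\SQLDump*" /b';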

Addendum: April 14, 2011

Paul Randal wrote a blog post on how to download a PDB file using symchk utility which is installed along with Windows Debugging Tools:

http://www.sqlskills.com/BLOGS/PAUL/post/How-to-download-a-sqlservrpdb-symbol-file.aspx


Export Import Wizard throws "Class not registered" error

When you launch the Export/Import Wizard from SQL Server Management Studio, you get the following error: 

TITLE: Microsoft SQL Server
——————————
This wizard will close because it encountered the following error:
——————————
ADDITIONAL INFORMATION:
Class not registered
 

When you look into the Technical Details of the error, you will find the following call-stack:
===================================
This wizard will close because it encountered the following error: (Microsoft SQL Server)
===================================
Class not registered
——————————
Program Location:
at Microsoft.SqlServer.Dts.Runtime.Wrapper.ApplicationClass.get_DBProviderInfos()
at Microsoft.SqlServer.Dts.DtsWizard.DTSWizard.GetDtsDbProviderInfos(WizardInputs wizardInputs)
at Microsoft.SqlServer.Dts.DtsWizard.DataSourceCombo.PopulateProviders(Boolean fSources, WizardInputs wizardInputs)
at Microsoft.SqlServer.Dts.DtsWizard.Step1.OnInitializePage(EventArgs e)
at Microsoft.SqlServer.Management.UI.WizardPage.RaiseEnterPage()
at Microsoft.SqlServer.Management.UI.WizardForm.NextPage(WizardPage nextPage)
at Microsoft.SqlServer.Management.UI.WizardForm.Next_Click(Object sender, EventArgs e)
 

If you try to execute the following query, you will either get a “Class not registered” error or 0 rows as the output:

EXEC master..xp_enum_oledb_providers  

When you expand Server Objects -> Linked Servers -> Providers, you will find no entries. 

This issue happens because the OLE DB provider DLLs are not registered on the machine. The Export/Import Wizard fails because its landing page executes code that enumerates all the registered provider DLLs.

You would have to manually register the OLE DB provider DLLs present in the C:\Program Files\Common Files\System\Ole DB\ directory. Alternatively, you can run the script below from a query window to obtain the regsvr32 commands for all the DLLs present in the OLE DB folder. After executing the regsvr32 commands, check whether the providers are listed under Object Explorer in Management Studio.

/*** Script start ***/
set nocount on

-- Capture the DLL file names from the OLE DB folder
create table #dlltable (rowid int identity(1,1), dllname varchar(4000))

insert into #dlltable
exec xp_cmdshell 'dir "C:\Program Files\Common Files\System\Ole DB\"*.dll /b'

-- Remove the empty rows returned by xp_cmdshell
delete from #dlltable where dllname is null

-- Build a silent regsvr32 command for each DLL
update #dlltable set dllname = 'regsvr32 /s "C:\Program Files\Common Files\System\Ole DB\'+dllname+'"'

select dllname as cmd from #dlltable

drop table #dlltable

set nocount off
/*** Script end ***/