
Microsoft SQL Server Tuning Tips (Part 2)

Microsoft SQL Server Tuning Tips (Part 1)

The Performance Monitor console for Microsoft Windows 2000 Server can monitor both the operating system and the utilization of SQL Server resources. Unfortunately, it stores the monitoring results as a binary file or a comma-delimited text file, which is inconvenient for further analysis. Although Windows XP and Windows 2003 can write performance data directly to a SQL Server database, the format of the stored data is also hard to use. However, we can work around this problem.
A typical monitoring setup includes the following components:
  1. A Performance Monitor Counter Log, saved as an HTML file, that contains the set of counters to monitor.
  2. A script that creates the SQL Server table whose fields match the Counter Log’s list of counters.
  3. A script that copies data from the performance log file into that table.
  4. Scripts to analyze the resulting performance data.
The typical SQLperfmon Counter Logs I usually use contain the following counters:
Memory counters
     - Pages/sec
     - Page Reads/sec
Disk counters for every physical disk
     - Avg. Disk Queue Length,
     - Avg. Disk sec/Transfer
Processor counters
     - % Processor Time – _Total
System counters
     - Processor Queue Length
SQL Server counters
     - Access Methods – Page Splits/sec
     - General Statistics – User Connections
     - Buffer Manager – Buffer Cache Hit Ratio
Looking at the Counter Log HTML file, you’ll notice that no computer name is mentioned anywhere in the script, so this SQLperfmon.htm file can be installed on any machine running Windows 2000 Server with SQL Server 2000. Here is the script for SQLperfmon.htm:
<HTML>
<HEAD>
<META NAME="GENERATOR" Content="Microsoft System Monitor">
<META HTTP-EQUIV="Content-Type" content="text/html; charset=iso-8859-1">
</HEAD><BODY>
<OBJECT ID="DISystemMonitor1" WIDTH="100%" HEIGHT="100%" CLASSID="CLSID:C4D2D8E0-D1DD-11CE-940F-008029004347">
     <PARAM NAME="_Version" VALUE="196611">
     <PARAM NAME="LogName" VALUE="SQLperfmon">
     <PARAM NAME="Comment" VALUE="">
     <PARAM NAME="LogType" VALUE="0">
     <PARAM NAME="CurrentState" VALUE="1">
     <PARAM NAME="LogFileMaxSize" VALUE="-1">
     <PARAM NAME="LogFileBaseName" VALUE="SQLperfmon">
     <PARAM NAME="LogFileSerialNumber" VALUE="1">
     <PARAM NAME="LogFileFolder" VALUE="C:\PerfLogs">
     <PARAM NAME="LogFileAutoFormat" VALUE="6">
     <PARAM NAME="LogFileType" VALUE="0">
     <PARAM NAME="StartMode" VALUE="0">
     <PARAM NAME="StopMode" VALUE="0">
     <PARAM NAME="RestartMode" VALUE="0">
     <PARAM NAME="LogFileName" VALUE="C:\PerfLogs\SQLperfmon_10271514.csv">
     <PARAM NAME="EOFCommandFile" VALUE="">
     <PARAM NAME="Counter00001.Path" VALUE="\Memory\Page Reads/sec">
     <PARAM NAME="Counter00002.Path" VALUE="\Memory\Pages/sec">
     <PARAM NAME="Counter00003.Path" VALUE="\PhysicalDisk(0 C:)\Avg. Disk Queue Length">
     <PARAM NAME="Counter00004.Path" VALUE="\PhysicalDisk(0 C:)\Avg. Disk sec/Transfer">
     <PARAM NAME="Counter00005.Path" VALUE="\PhysicalDisk(1 D:)\Avg. Disk Queue Length">
     <PARAM NAME="Counter00006.Path" VALUE="\PhysicalDisk(1 D:)\Avg. Disk sec/Transfer">
     <PARAM NAME="Counter00007.Path" VALUE="\Processor(_Total)\% Processor Time">
     <PARAM NAME="Counter00008.Path" VALUE="\SQLServer:Access Methods\Page Splits/sec">
     <PARAM NAME="Counter00009.Path" VALUE="\SQLServer:Buffer Manager\Buffer cache hit ratio">
     <PARAM NAME="Counter00010.Path" VALUE="\SQLServer:General Statistics\User Connections">
     <PARAM NAME="Counter00011.Path" VALUE="\System\Processor Queue Length">
     <PARAM NAME="CounterCount" VALUE="11">
     <PARAM NAME="UpdateInterval" VALUE="60">
     <PARAM NAME="SampleIntervalUnitType" VALUE="2">
     <PARAM NAME="SampleIntervalValue" VALUE="1">
</OBJECT>
</BODY>
</HTML>
In order to prepare and execute the monitoring process, perform the following steps on the server you want to monitor:
  1. Create a C:\PerfLogs folder and copy SQLperfmon.htm to this folder.
  2. Start the Windows Performance Monitor console by clicking the Start button on your desktop, selecting the Run option, and typing the “perfmon” command.
  3. On the Tree tab, expand Performance Logs and Alerts. Right-click on Counter Logs and choose New Log Settings From… (Figure 1).


    Figure 1
  4. Select the SQLperfmon.htm file from the C:\PerfLogs folder (Figure 2).


    Figure 2
  5. The Counter Log properties window appears (Figure 3).


    Figure 3

     On the General tab, you can see the performance log file name, a list of all the predefined counters, and the monitoring interval. You can change the log file name and the monitoring interval without any impact on the monitoring process described here. The counter list assumes that the monitored server has two physical disks: “0 C:” and “1 D:”. If the disk configuration is different, you can modify the counter list along with the corresponding PerfmonDataCustomer01 table and import scripts (a sketch of such a change follows this step).

    On the Schedule tab, you can define when to start and stop monitoring. For example, you may want to monitor the server only during business hours. In order to change the location of the file or the file name pattern, use the Log Files tab. When editing, make sure that the file type remains “Text File – CSV.”
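
     If the monitored server had, say, a third physical disk “2 E:”, a minimal sketch of the corresponding table change might look like this (the column names are hypothetical; a matching pair of PhysicalDisk(2 E:) counters must be added to the Counter Log, and the import script’s column list and SELECT must be extended as well):

     ALTER TABLE [PerfmonDataCustomer01] ADD
          [Avg Disk2 Queue Length] [numeric](18, 2) NULL,
          [Avg Disk2 sec/Transfer] [numeric](18, 2) NULL
     GO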
  6. Click OK, and the new SQLperfmon counter log will be added to the log list. You can start the monitoring process immediately, or wait for it to be started by the scheduler, if set (Figure 4).


    Figure 4
  7. When the monitoring process is completed, you can transfer the performance log file data to your SQL Server. For this purpose, create a table in which to save the monitoring data using the following script:

    CREATE TABLE [PerfmonDataCustomer01] (
         [CounterDateTime] [datetime] NOT NULL,
         [Page Reads/sec] [numeric](18, 2) NULL,
         [Pages/sec] [numeric](18, 2) NULL,
         [Avg Disk0 Queue Length] [numeric](18, 2) NULL,
         [Avg Disk0 sec/Transfer] [numeric](18, 2) NULL,
         [Avg Disk1 Queue Length] [numeric](18, 2) NULL,
         [Avg Disk1 sec/Transfer] [numeric](18, 2) NULL,
         [Processor Time] [numeric](18, 2) NULL,
         [Page Splits/sec] [numeric](18, 2) NULL,
         [Cache Hit Ratio] [numeric](18, 2) NULL,
         [User Connections] [numeric](18, 2) NULL,
         [Processor Queue Length] [numeric](18, 2) NULL
    )
    GO
    ALTER TABLE [dbo].[PerfmonDataCustomer01] WITH NOCHECK ADD
         CONSTRAINT [PK_PerfmonDataCustomer01] PRIMARY KEY CLUSTERED
         (
          [CounterDateTime]
         )
    GO
  8. I use the OpenDataSource function in the import script because in my opinion it is a more flexible solution than the BCP utility, a DTS package, or a BULK INSERT statement.

     The following script imports data from the C:\PerfLogs\SQLperfmon_02271405.csv file into the PerfmonDataCustomer01 table:

    INSERT INTO [PerfmonDataCustomer01] (
         [CounterDateTime]
         ,[Page Reads/sec]
         ,[Pages/sec]
         ,[Avg Disk0 Queue Length]
         ,[Avg Disk0 sec/Transfer]
         ,[Avg Disk1 Queue Length]
         ,[Avg Disk1 sec/Transfer]
         ,[Processor Time]
         ,[Page Splits/sec]
         ,[Cache Hit Ratio]
         ,[User Connections]
         ,[Processor Queue Length]
    )
    SELECT
          [(PDH-CSV 4#0) (Pacific Standard Time)(480)]
          ,cast([\Memory\Page Reads/sec] as float)
          ,cast([\Memory\Pages/sec] as float)
          ,cast([\PhysicalDisk(0 C:)\Avg# Disk Queue Length] as float)
          ,cast([\PhysicalDisk(0 C:)\Avg# Disk sec/Transfer] as float)
          ,cast([\PhysicalDisk(1 D:)\Avg# Disk Queue Length] as float)
          ,cast([\PhysicalDisk(1 D:)\Avg# Disk sec/Transfer] as float)
          ,cast([\Processor(_Total)\% Processor Time] as float)
          ,cast([\SQLServer:Access Methods\Page Splits/sec] as float)
          ,cast([\SQLServer:Buffer Manager\Buffer cache hit ratio] as float)
          ,cast([\SQLServer:General Statistics\User Connections] as float)
          ,cast([\System\Processor Queue Length] as float)
     FROM OpenDataSource('Microsoft.Jet.OLEDB.4.0',
     -- csv file on a local drive
     'Data Source=C:\PerfLogs;Extended properties=Text')...SQLperfmon_02271405#csv

     If your performance log file is stored in a location different from the one specified in the script, modify the last line of the script, which specifies the Data Source parameter and the file name. Please note that you should use the “#” character instead of “.” in the file name. If the performance log file is stored in a shared folder on a remote server, the Data Source part could look like this:

     -- csv file on a shared folder
     'Data Source=\\server01\PerfLogs;Extended properties=Text')...SQLperfmon_02271405#csv

     Please note that the monitoring server’s time zone determines the first field name of the performance log file. The first field name in the .csv file must match the first field name in the SELECT statement of the import script; if it does not, edit the first selected item in the script above. Again, make sure that you use the “#” character instead of “.” in the field name. For example, if the field name in the .csv file is “(PDH-CSV 4.0) (Pacific Standard Time)(480)”, you should use “[(PDH-CSV 4#0) (Pacific Standard Time)(480)]” in this script.
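
     Before running the INSERT, a quick sanity check helps confirm the column names in the .csv file (a sketch, using the same Data Source assumptions as the import script above):

     SELECT TOP 5 *
     FROM OpenDataSource('Microsoft.Jet.OLEDB.4.0',
     -- csv file on a local drive
     'Data Source=C:\PerfLogs;Extended properties=Text')...SQLperfmon_02271405#csv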
  9. When the import is completed, you can compare the performance data of different servers, analyze recorded data using Microsoft recommendations, and decide how to resolve the issues that were found. Several sample queries for the data analysis are provided below:

    SELECT AVG([Processor Time]) FROM PerfmonDataCustomer01
    SELECT AVG([Processor Queue Length]) FROM PerfmonDataCustomer01
    SELECT TOP 10 [Processor Time], [Processor Queue Length] FROM PerfmonDataCustomer01
    ORDER BY [Processor Queue Length] DESC
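
     As a further sketch of this kind of analysis (the threshold of 2 is a commonly cited rule of thumb, not a value taken from this article), the share of samples with a sustained processor queue can be computed like this:

     SELECT CAST(SUM(CASE WHEN [Processor Queue Length] > 2 THEN 1 ELSE 0 END) AS float)
            / COUNT(*) * 100 AS [Pct samples with queue over 2]
     FROM PerfmonDataCustomer01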
  10. Another monitoring setup that could be useful for SQL Server performance monitoring was recommended in “Performance Monitoring — Basic Counters” by Steve Jones. It also uses the Counter Log file with the corresponding table and import scripts. I included all the counters recommended in that article except the “Network Interface Object” because it uses a particular network card name, which may not be the same on different servers. The following script assumes that the monitored server has a physical disk, “0 C:”, and that the Transactions/Sec counter will check the “pubs” database.
    SJperformanceset.htm:
     <HTML>
     <HEAD>
     <META NAME="GENERATOR" Content="Microsoft System Monitor">
     <META HTTP-EQUIV="Content-Type" content="text/html; charset=iso-8859-1">
     </HEAD><BODY>
     <OBJECT ID="DISystemMonitor1" WIDTH="100%" HEIGHT="100%" CLASSID="CLSID:C4D2D8E0-D1DD-11CE-940F-008029004347">
          <PARAM NAME="_Version" VALUE="196611">
          <PARAM NAME="LogName" VALUE="SJperformanceset">
          <PARAM NAME="Comment" VALUE="">
          <PARAM NAME="LogType" VALUE="0">
          <PARAM NAME="CurrentState" VALUE="0">
          <PARAM NAME="LogFileMaxSize" VALUE="-1">
          <PARAM NAME="LogFileBaseName" VALUE="SJperformanceset">
          <PARAM NAME="LogFileSerialNumber" VALUE="1">
          <PARAM NAME="LogFileFolder" VALUE="C:\PerfLogs">
          <PARAM NAME="LogFileAutoFormat" VALUE="6">
          <PARAM NAME="LogFileType" VALUE="0">
          <PARAM NAME="StartMode" VALUE="0">
          <PARAM NAME="StopMode" VALUE="0">
          <PARAM NAME="RestartMode" VALUE="0">
          <PARAM NAME="LogFileName" VALUE="C:\PerfLogs\SJperformanceset_02281337.csv">
          <PARAM NAME="EOFCommandFile" VALUE="">
          <PARAM NAME="Counter00001.Path" VALUE="\Memory\Available MBytes">
          <PARAM NAME="Counter00002.Path" VALUE="\Memory\Pages/sec">
          <PARAM NAME="Counter00003.Path" VALUE="\PhysicalDisk(0 C:)\% Idle Time">
          <PARAM NAME="Counter00004.Path" VALUE="\PhysicalDisk(0 C:)\Avg. Disk Queue Length">
          <PARAM NAME="Counter00005.Path" VALUE="\Processor(_Total)\% Processor Time">
          <PARAM NAME="Counter00006.Path" VALUE="\SQLServer:Access Methods\Full Scans/sec">
          <PARAM NAME="Counter00007.Path" VALUE="\SQLServer:Buffer Manager\Buffer cache hit ratio">
          <PARAM NAME="Counter00008.Path" VALUE="\SQLServer:Databases(pubs)\Transactions/Sec">
          <PARAM NAME="Counter00009.Path" VALUE="\SQLServer:General Statistics\User Connections">
          <PARAM NAME="Counter00010.Path" VALUE="\SQLServer:Locks(_Total)\Average Wait Time (ms)">
          <PARAM NAME="Counter00011.Path" VALUE="\System\Processor Queue Length">
          <PARAM NAME="CounterCount" VALUE="11">
          <PARAM NAME="UpdateInterval" VALUE="60">
          <PARAM NAME="SampleIntervalUnitType" VALUE="2">
          <PARAM NAME="SampleIntervalValue" VALUE="1">
     </OBJECT>
     </BODY>
     </HTML>
    SJperformanceset Table Script:
    CREATE TABLE [SJperformanceset] (
         [CounterDateTime] [datetime] NOT NULL,
         [Available MBytes] [numeric](18, 2) NULL,
          [Pages/sec] [numeric](18, 2) NULL,
         [Percent Idle Time] [numeric](18, 2) NULL,
         [Avg Disk Queue Length] [numeric](18, 2) NULL,
         [Processor Time] [numeric](18, 2) NULL,
         [Full Scans/sec] [numeric](18, 2) NULL,
         [Cache Hit Ratio] [numeric](18, 2) NULL,
         [Transactions/Sec] [numeric](18, 2) NULL,
         [User Connections] [numeric](18, 2) NULL,
         [Average Wait Time] [numeric](18, 2) NULL,
         [Processor Queue Length] [numeric](18, 2) NULL
    )
    GO
    ALTER TABLE [SJperformanceset] WITH NOCHECK ADD
         CONSTRAINT [PK_SJperformanceset] PRIMARY KEY CLUSTERED
         (
          [CounterDateTime]
         )
    GO
    Insert Monitoring Data Script:
    INSERT INTO [SJperformanceset] (
         [CounterDateTime]
         ,[Available MBytes]
          ,[Pages/sec]
         ,[Percent Idle Time]
         ,[Avg Disk Queue Length]
         ,[Processor Time]
         ,[Full Scans/sec]
         ,[Cache Hit Ratio]
         ,[Transactions/Sec]
         ,[User Connections]
         ,[Average Wait Time]
         ,[Processor Queue Length]
    )
    SELECT
          [(PDH-CSV 4#0) (Pacific Standard Time)(480)]
          ,cast([\Memory\Available MBytes] as float)
          ,cast([\Memory\Pages/sec] as float)
          ,cast([\PhysicalDisk(0 C:)\% Idle Time] as float)
          ,cast([\PhysicalDisk(0 C:)\Avg# Disk Queue Length] as float)
          ,cast([\Processor(_Total)\% Processor Time] as float)
          ,cast([\SQLServer:Access Methods\Full Scans/sec] as float)
          ,cast([\SQLServer:Buffer Manager\Buffer cache hit ratio] as float)
          ,cast([\SQLServer:Databases(pubs)\Transactions/Sec] as float)
          ,cast([\SQLServer:General Statistics\User Connections] as float)
          ,cast([\SQLServer:Locks(_Total)\Average Wait Time (ms)] as float)
          ,cast([\System\Processor Queue Length] as float)
     FROM OpenDataSource('Microsoft.Jet.OLEDB.4.0',
     -- csv file on a local drive
     'Data Source=C:\PerfLogs;Extended properties=Text')...SJperformanceset_02272122#csv
     If you need to consistently monitor the database server’s performance and analyze the log data, you can run both the file copying process and the data importing process from the scheduler; a sketch of such a scheduled SQL Server Agent job follows the import template below.
    A command to copy a file is shown below:
     copy /Y SJperformanceset_03010952.csv \\yetoc\c$\PerfLogs
     The /Y option overwrites the existing performance log file without prompting.
    The following template of an import script with an added WHERE clause will insert only new records into the SJperformanceset table — those added after the last insert:
    INSERT INTO [SJperformanceset] (
         [CounterDateTime]

    )
    SELECT
         [(PDH-CSV 4#0) (Pacific Standard Time)(480)]
         …
     FROM OpenDataSource('Microsoft.Jet.OLEDB.4.0',
     -- csv file on a local drive
     'Data Source=C:\PerfLogs;Extended properties=Text')...SJperformanceset_02272122#csv
     -- add new records only
     WHERE [(PDH-CSV 4#0) (Pacific Standard Time)(480)] > (SELECT max([CounterDateTime]) FROM SJperformanceset)
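
     As a rough sketch of scheduling the import with SQL Server Agent (the job name, target database, and script file below are hypothetical; it assumes the SQL Server Agent service is running and that the import script has been saved to a .sql file):

     USE msdb
     GO
     EXEC sp_add_job @job_name = N'Import SJperformanceset data'
     EXEC sp_add_jobstep
          @job_name  = N'Import SJperformanceset data',
          @step_name = N'Run import script',
          @subsystem = N'CMDEXEC',
          -- osql runs the saved import script against the (hypothetical) PerfDB database
          @command   = N'osql -E -d PerfDB -i C:\PerfLogs\import_SJperformanceset.sql'
     EXEC sp_add_jobschedule
          @job_name = N'Import SJperformanceset data',
          @name     = N'Nightly import',
          @freq_type = 4,               -- daily
          @freq_interval = 1,
          @active_start_time = 233000   -- 11:30 PM
     EXEC sp_add_jobserver @job_name = N'Import SJperformanceset data'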
     The monitoring sets provided in this article should be enough for most of your database performance analysis, but you can modify them to your liking by adding or deleting particular counters and changing the tables and import scripts accordingly. Using these examples, you can also create your own monitoring tool set for fine-grained monitoring of resource utilization in particular areas such as memory, I/O, and processor.
     Predefined monitoring sets make it easier to set up a performance baseline, compare the performance of multiple servers, find possible performance bottlenecks, and monitor the overall health of your system. You can also use these sets for additional system analysis, generating reports, and predicting future problems.

Microsoft SQL Server Tuning Tips (Part 1)

Microsoft SQL Server Tuning Tips (Part 2)

The following excerpts are summaries from Microsoft material on tuning SQL Server for large production environments; see the URL at the end for the full 94-page article. We found that the following performance-tuning extracts highlight the article well. Judge for yourself.
RAM Caching

A major part of any database server environment is the management of 
random access memory (RAM) buffer cache. Access to data in RAM cache is 
much faster than access to the same information from disk. But RAM is a 
limited resource. If database I/O (input/output operations to the 
physical disk subsystem) can be reduced to the minimal required set of 
data and index pages, these pages will stay in RAM longer. Too much 
unneeded data and index information flowing into buffer cache will 
quickly push out valuable pages. The primary goal of performance tuning 
is to reduce I/O so that buffer cache is best utilized. 
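
A quick way to spot-check the buffer cache from T-SQL is the sysperfinfo 
system table (a sketch; the actual hit ratio must be computed from the 
two counters returned):

-- Raw counters behind the perfmon Buffer cache hit ratio counter.
SELECT object_name, counter_name, cntr_value
FROM master.dbo.sysperfinfo
WHERE counter_name IN ('Buffer cache hit ratio', 'Buffer cache hit ratio base')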

To enable Windows 2000 Advanced Server or Windows 2000 Datacenter Server 
to support more than 4 GB of physical memory, you must add the /pae 
parameter to the Boot.ini file. 

For computers with 16 GB or less you can use the /3gb parameter in 
the Boot.ini file. This enables Windows 2000 Advanced Server and 
Windows 2000 Datacenter Server to allow user applications to address 
extended memory through the 3 GB of virtual memory, and it reserves 1 GB 
of virtual memory for the operating system itself.

If more than 16 GB of physical memory is available on a computer, the 
Windows 2000 operating system needs 2 GB of virtual memory address 
space for system purposes. Therefore, it can support only a 2 GB virtual 
address space for application usage. For systems with more than 16 GB of 
physical memory, be sure not to use the /3gb parameter in the Boot.ini 
file; otherwise the operating system cannot address memory above 16 GB. 
See how to turn on these settings here.
Memory Usage Considerations

To enable the use of AWE memory by an instance of SQL Server 2000, use 
sp_configure to set the awe enabled option. Next, restart SQL Server to 
activate AWE. Because AWE support is enabled during SQL Server startup and 
continues until SQL Server is shut down, SQL Server will notify users when AWE 
is in use by sending an "Address Windowing Extension enabled" message to 
the SQL Server error log.

When you enable AWE memory, instances of SQL Server 2000 do not 
dynamically manage the size of the address space. Therefore, when you 
enable AWE memory and start an instance of SQL Server 2000, one 
of the following occurs, depending on how you have set max server memory.

- If max server memory has been set and there are at least 3 GB of free 
memory available on the computer, the instance acquires the amount of 
memory specified in max server memory. If the amount of memory available 
on the computer is less than max server memory (but more than 3 GB), then the
instance acquires almost all of the available memory and may leave only 
up to 128 MB of memory free.
- If max server memory has not been set and there is at least 3 GB of 
free memory available on the computer, the instance acquires almost all 
of the available memory and may leave only up to 128 MB of memory free.

If there is less than 3 GB of free memory available on the computer, 
memory is dynamically allocated and, regardless of the parameter setting 
for awe enabled, SQL Server will run in non-AWE mode.

When allocating SQL Server AWE memory on a 32-GB system, Windows 2000 may 
require at least 1 GB of available memory to manage AWE. Therefore, when 
starting an instance of SQL Server with AWE enabled, it is recommended 
that you do not use the default max server memory setting, but instead 
limit it to 31 GB or less. 
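
As a minimal sketch of the two settings discussed above (the 31744 MB 
value reflects the 32-GB example; adjust it for your hardware):

-- Both settings are advanced options.
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO
EXEC sp_configure 'awe enabled', 1
RECONFIGURE
GO
-- 31 GB expressed in MB, leaving roughly 1 GB for the operating system.
EXEC sp_configure 'max server memory', 31744
RECONFIGURE
GO
-- Restart the SQL Server service for the AWE setting to take effect.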

IO Considerations

Performance: Hardware RAID controllers divide reads/writes of all data 
from Windows NT 4.0, Windows 2000, and applications (like SQL Server) 
into slices (usually 16-128 KB) that are then spread across all disks 
participating in the RAID array. Splitting data across physical drives 
like this has the effect of distributing the read/write I/O workload 
evenly across all physical hard drives participating in the RAID array. 
This increases disk I/O performance because the hard disks participating 
in the RAID array, as a whole, are kept equally busy, instead of some 
disks becoming a bottleneck due to uneven distribution of the I/O requests.

Disk I/O Parallelism

An effective technique for improving the performance of large SQL Server 
databases that are stored on multiple disk drives is to create disk I/O 
parallelism, which is the simultaneous reading from and writing to multiple 
disk drives. RAID implements disk I/O parallelism through hardware and 
software. The next topic discusses using partitioning to organize SQL 
Server data to further increase disk I/O parallelism. 

Objects For Partitioning Consideration

The following areas of SQL Server activity can be separated across 
different hard drives, RAID controllers, and PCI channels 
(or combinations of the three):
- Transaction log
- tempdb
- Database
- Tables
- Nonclustered indexes

Segregating the Transaction Log

Transaction log files should be maintained on a storage device physically 
separate from devices that contain data files. Depending on your database 
recovery model setting, most update activity generates both data device 
activity and log activity. If both are set up to share the same device, 
the operations to be performed will compete for the same limited resources. 
Most installations benefit from separating these competing I/O activities. 
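
As a minimal sketch of this separation (the database name, file names, and 
drive letters are illustrative only), a new database can be created with 
its transaction log on a physically separate drive from its data:

create database salesdb
on primary
     (name = 'salesdb_data', filename = 'e:\mssql7\salesdb_data.mdf')
log on
     (name = 'salesdb_log', filename = 'f:\mssql7\salesdb_log.ldf')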

Segregating tempdb

The tempdb database can likewise be moved to its own device with the 
ALTER DATABASE command:

alter database tempdb modify file 
     (name='tempdev',filename= 'e:\mssql7\tempnew_location.mdf')
alter database tempdb modify file 
     (name='templog',filename= 'c:\temp\tempnew_loglocation.ldf')

The new locations above are on physically separate disks.

Segregating Nonclustered Indexes

Indexes reside in B-tree structures, which can be separated from their 
related database tables (except for clustered indexes) by using the 
ALTER DATABASE command to set up a distinct filegroup. In the example below, 
the first ALTER DATABASE creates a filegroup. The second ALTER DATABASE adds 
a file to the newly created filegroup.

alter database testdb add filegroup testgroup1
alter database testdb add file (name = 'testfile', 
   filename = 'e:\mssql7\test1.ndf') to filegroup testgroup1

After a filegroup and its associated files have been created, the filegroup 
can be used to store indexes by specifying the filegroup when the indexes are created.

create table test1(col1 char(8))
create index index1 on test1(col1) on testgroup1
SQL Coding Tips

Index Intersection 



The following example makes use of index intersection:
Create index Indexname1 on Table1(col2)
Create index Indexname2 on Table1(col3)
Select col3 from table1 where col2 = 'value'

When the previous query is performed, the indexes can be combined to 
quickly and efficiently resolve the query. 
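
For comparison, a single covering index on both columns would also satisfy 
this query without touching the base table (the index name is illustrative):

-- The composite index covers the query: col2 leads the key for the seek,
-- and col3 is returned straight from the index leaf level.
Create index Indexname3 on Table1(col2, col3)
Select col3 from table1 where col2 = 'value'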

Execution plan options can also be set for a connection by executing 
either of the following commands:

set showplan_all on
go
set showplan_text on
go

Query 1
select ckey1,col2 from testtable where ckey1 = 'a'

Text-based execution plan output
|--Clustered Index Seek
     (OBJECT:([TraceDB].[dbo].[testtable].[testtable1]),
     SEEK:([testtable].[ckey1]='a') ORDERED FORWARD)

The execution plan takes advantage of the clustered index on column ckey1 
to resolve the query, as indicated by Clustered Index Seek. If the clustered 
index is removed from the table and the same query is executed again, the 
query reverts to using a table scan. The following text-based execution plan 
output indicates the change in behavior. 

Text-based execution plan output
|--Table Scan(OBJECT:([TraceDB].[dbo].[testtable]), 
     WHERE:([testtable].[ckey1]=[@1]))

A table scan is used instead, so we need the index.
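
To get the index seek back, the clustered index can simply be re-created 
(a sketch using the object names from the plan output above):

create clustered index testtable1 on testtable(ckey1)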

The full 94-page article is here: SQL Server 2000. Although written for SQL Server 2000, the methodology still applies to the latest versions of SQL Server.
Another useful DIY article on SQL Server performance monitoring describes how to use perfmon output together with a performance table to save the data, and how to use queries to display useful performance statistics.
The typical SQLperfmon counter logs (Windows 2003) include the following:

Memory counters 
     - Pages/sec 
     - Page Reads/sec 
Disk counters for every physical disk 
     - Avg. Disk Queue Length, 
     - Avg. Disk sec/Transfer 
Processor counters 
     - % Processor Time - _Total 
System counters 
     - Processor Queue Length 
SQL Server counters 
     - Access Methods - Page Splits/sec 
     - General Statistics - User Connections 
     - Buffer Manager - Buffer Cache Hit Ratio

On the SQL Server database, run:

CREATE TABLE [PerfmonDataCustomer01] ( 
     [CounterDateTime] [datetime] NOT NULL, 
     [Page Reads/sec] [numeric](18, 2) NULL, 
     [Pages/sec] [numeric](18, 2) NULL, 
     [Avg Disk0 Queue Length] [numeric](18, 2) NULL, 
     [Avg Disk0 sec/Transfer] [numeric](18, 2) NULL, 
     [Avg Disk1 Queue Length] [numeric](18, 2) NULL, 
     [Avg Disk1 sec/Transfer] [numeric](18, 2) NULL, 
     [Processor Time] [numeric](18, 2) NULL, 
     [Page Splits/sec] [numeric](18, 2) NULL, 
     [Cache Hit Ratio] [numeric](18, 2) NULL, 
     [User Connections] [numeric](18, 2) NULL, 
     [Processor Queue Length] [numeric](18, 2) NULL 
) 
GO 
ALTER TABLE [dbo].[PerfmonDataCustomer01] WITH NOCHECK ADD 
     CONSTRAINT [PK_PerfmonDataCustomer01] PRIMARY KEY CLUSTERED 
     (  [CounterDateTime]  )
GO
See the full article in Part 2.