Reading and writing to the disk is the heart of what any database management system does, SQL Server included. Input/Output (I/O) performance can make or break an application. This article discusses the diagnostic tools that you can use to examine SQL Server's I/O statistics so that you can make fact-based judgments about disk configurations. There are several ways to request I/O statistics from SQL Server, such as the system statistical functions, sp_monitor, and fn_virtualfilestats. Each method has its advantages and disadvantages. I'll show you how they work and their pros and cons.
I rely primarily on fn_virtualfilestats because it gives the most detailed information. The other methods aggregate information at the instance level. The instance level may be the only meaningful alternative when you are accounting for the CPU, but when working with file I/O, having the detailed breakdown is helpful.
One limitation shared by the system statistical functions and fn_virtualfilestats is that their reports are always based on the resources consumed since the instance started. This includes both peak usage times and low usage times. If your instance has been running through several cycles of peak and low usage, these overall aggregates may be of some interest, but the numbers are usually most interesting during times of peak usage. After we discuss the various methods for gathering statistics, I will show you a stored procedure for gathering I/O statistics during peak time periods.
Ways to get I/O Statistics
Although the statistics are nearly identical, there are several ways to request them from SQL Server 2000. The methods are:
- The system statistical functions such as @@CPU_BUSY
- sp_monitor
- fn_virtualfilestats
The first two methods give you information that is aggregated at the instance level. Let's take a look at them first.
Using the System Statistical Functions
The system statistical functions cover I/O, network and CPU resource utilization. Table 1 lists them.
Table 1 System Statistical Functions
Function | Description |
@@CONNECTIONS | The number of connections or attempted connections. |
@@CPU_BUSY | Timer ticks that the CPU has been working for SQL Server. |
@@IDLE | Time in timer ticks that SQL Server has been idle. |
@@IO_BUSY | Timer ticks that SQL Server has spent performing I/O operations. |
@@PACKET_ERRORS | Number of network packet errors that have occurred. |
@@PACK_RECEIVED | Number of packets read from the network. |
@@PACK_SENT | Number of packets written to the network. |
@@TIMETICKS | Number of millionths of a second in a timer tick. |
@@TOTAL_ERRORS | Number of read/write errors during I/O operations. |
@@TOTAL_READ | Number of disk reads. |
@@TOTAL_WRITE | Number of disk writes. |
For monitoring I/O, the most interesting numbers are @@IO_BUSY, @@TOTAL_READ, and @@TOTAL_WRITE. Here is a simple query that shows the raw statistics:
-- Take a look at raw I/O Statistics
SELECT @@TOTAL_READ [Total Reads]
, @@TOTAL_WRITE as [Total Writes]
, CAST(@@IO_BUSY as FLOAT) * @@TIMETICKS / 1000000.0 as [IO Sec]
GO
(Results) Total Reads Total Writes IO Sec
----------- ------------ -----------
85336 322109 25.375
The functions @@IO_BUSY, @@CPU_BUSY, and @@IDLE return clock ticks. To convert ticks to seconds, multiply by @@TIMETICKS and then divide by one million. Be sure to convert the quantities to floating point, numeric, or bigint to avoid integer overflow during intermediate calculations.
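For example, the same conversion applied to the CPU counters looks like this; the CAST to FLOAT keeps the intermediate multiplication from overflowing an int:
-- Convert CPU busy and idle ticks to seconds
SELECT CAST(@@CPU_BUSY as FLOAT) * @@TIMETICKS / 1000000.0 as [CPU Sec]
, CAST(@@IDLE as FLOAT) * @@TIMETICKS / 1000000.0 as [Idle Sec]
GO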
The raw numbers alone aren't very interesting. It is more informative to turn the numbers into rates. To do that you need to know how long the instance has been running. This next script uses a user-defined function (UDF), udf_SQL_StartDT, which uses the start time of the Lazy Writer process as a proxy for the start time of the instance. udf_SQL_StartDT is available from my free T-SQL UDF of the Week Newsletter Volume 1 #11.
The start time is turned into a number of seconds and the script performs the division, being careful to CAST to data types that will not lose information due to rounding or integer division:
-- Turn the raw statistics into rates
DECLARE @SecFromStart bigint
SET @SecFromStart = DATEDIFF(s, dbo.udf_SQL_StartDT(), getdate())
SELECT CAST(CAST(@@TOTAL_READ as Numeric (18,2))/@SecFromStart
as Numeric (18,2)) as [Reads/Sec]
, CAST(CAST(@@TOTAL_WRITE as Numeric (18,2))/@SecFromStart
as Numeric (18,2)) as [Writes/Sec]
, CAST(CAST(@@IO_BUSY as FLOAT) * @@TIMETICKS/10000.0/@SecFromStart
as Numeric (18,2)) as [Percent I/O Time]
GO
(Results) Reads/Sec Writes/Sec Percent I/O Time
-------------------- -------------------- --------------------
24.34 92.53 .42
The read and write rates are often in the tens or hundreds, at least over short time spans. You might ask, "Why do you bother to retain even two digits to the right of the decimal?" Most of the time these extra two digits do not come into play. However, when a system has been idle for a long time, let's say over the weekend after being restarted on Friday night, it's possible to have rates that are less than one. Showing zero for the rates is confusing, so I have tried to be sure that at least a small number shows up.
Showing rates from the time the instance started until the query is run forces you to average over a long time period. SQL Server supplies a stored procedure that shows the values of the system statistical functions since it was last run; this lets you get a quick snapshot of your I/O rates.
Using sp_monitor
sp_monitor is a system stored procedure that produces four recordsets that display the values of system statistical functions since sp_monitor was last run. It also shows the last_run time, the current_run time, and the number of seconds in the measurement. There are no parameters to sp_monitor. Here is a short script to see its output:
-- run sp_monitor
sp_monitor
go
(Results) last_run current_run seconds
------------------------------ ------------------------------ -------
2003-08-03 13:52:50.000 2003-08-03 13:52:53.000 3
cpu_busy io_busy idle
----------------- --------------- ------------------
498(0)-0% 773(0)-0% 276153(2)-66%
packets_received packets_sent packet_errors
------------------ ------------------ ----------------
826(1) 1741(1) 0(0)
total_read total_write total_errors connections
------------- ------------- -------------- -------------
139818(0) 392372(1) 0(0) 52(0)
sp_monitor uses the system statistical functions discussed in the previous section to get its information. It stores the values from the last time the procedure was invoked in the table spt_monitor in the master database.
Each of the measurements, such as cpu_busy and total_read, is a character string with several parts. The first number is the measurement since the instance started. The second number, the one in parentheses, is the measurement since the last time sp_monitor was run. Finally, for the CPU-related measurements in the second recordset, the percentage of time spent in the column's category follows.
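If you do want to pull the parenthesized "since last run" number out of one of these strings programmatically, ordinary string functions will do it. Here is a minimal sketch that works on a value already captured into a variable; the sample value is hypothetical:
-- A minimal sketch: extract the number in parentheses from a value
-- such as '498(0)-0%', i.e. the measurement since the last run
DECLARE @val varchar(30), @open int, @close int
SET @val = '498(0)-0%' -- hypothetical captured value
SET @open = CHARINDEX('(', @val)
SET @close = CHARINDEX(')', @val)
SELECT CAST(SUBSTRING(@val, @open + 1, @close - @open - 1) as int)
as [Since Last Run]
GO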
I generally find that sp_monitor is the best way to get a quick picture of what is happening on your system: run it, wait 10 to 100 seconds, and run it again. The result is a snapshot of your system activity.
Other than getting a quick look at your system, several factors weigh against trying to use sp_monitor for long-term information gathering:
- There is only one set of statistics saved at a time. If someone else runs sp_monitor while you're waiting, you see the results since the last time anyone ran it, not since your own run (a sketch of keeping a private baseline follows this list).
- The four recordsets of output are difficult to work with in T-SQL.
- Each value combines the measurement since instance startup with the measurement since the last run, and sometimes a percentage, which makes the output difficult to work with from a program.
- The numbers are aggregated to the instance level.
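One way around the shared-baseline problem is to keep your own snapshots of the statistical functions in a table and difference consecutive rows yourself. This is a minimal sketch; the table name IOBaseline is just an illustrative choice:
-- A minimal sketch of keeping a private baseline instead of
-- relying on the single copy in master..spt_monitor
CREATE TABLE IOBaseline (
SampleDT datetime NOT NULL
, TotalRead int NOT NULL
, TotalWrite int NOT NULL
, IOBusy int NOT NULL
)
GO
-- Record a sample; subtract any two rows to get interval numbers
INSERT INTO IOBaseline (SampleDT, TotalRead, TotalWrite, IOBusy)
VALUES (getdate(), @@TOTAL_READ, @@TOTAL_WRITE, @@IO_BUSY)
GO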
These factors combine to limit the usefulness of the procedure. I generally use a built-in system function, fn_virtualfilestats, to get a more detailed look at I/O statistics.
Using fn_virtualfilestats to Get I/O Statistics
fn_virtualfilestats returns a table of I/O statistics at the file level. It takes two parameters: the Db_ID of the database and the file_id of the file to retrieve information for. Supplying -1 for either parameter requests all values along that dimension. For example, executing this query:
select * from ::fn_virtualfilestats(-1, -1)
asks for information about all files in all databases. Executing
select * from ::fn_virtualfilestats(-1, 2)
asks for information about file number 2, usually the first log file, for all databases.
Table 2 lists the output columns of fn_virtualfilestats. All measurements are "since the instance started."
Table 2 Output columns from fn_virtualfilestats
Column | Data Type | Description |
Dbid | Smallint | Database ID from master..sysdatabases |
FileID | Smallint | File ID from sysfiles |
TimeStamp | Int | Number of milliseconds since the instance started. |
NumberReads | Bigint | Number of reads issued against the file. |
NumberWrites | Bigint | Number of writes issued against the file. |
BytesRead | Bigint | Number of bytes read from the file. |
BytesWritten | Bigint | Number of bytes written to the file. |
IoStallMS | Bigint | Total milliseconds that users waited for I/O to complete on the file. |
Here is a short query that shows the I/O statistics for the current database with sample output:
-- Get information about the current files
DECLARE @myDBID int
SET @myDBID = db_id()
SELECT * FROM ::fn_virtualfilestats (@myDBID, -1)
go
(Results) DbId FileId TimeStamp NumberReads NumberWrites BytesRead BytesWritten IoStallMS
---- ------ --------- ----------- ------------ ---------- ------------ ---------
16 1 288070359 134585 195377 6172327936 8675352576 833376
16 2 288070359 390 191536 2158592 11449939968 6384
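Before aggregating, it is often worth deriving per-operation figures from the raw columns. Here is a minimal sketch that computes average bytes per read and per write and the average stall per I/O for each file in the current database; NULLIF guards against dividing by zero for a file that has not been touched:
-- Derive per-operation figures for the current database's files
DECLARE @myDBID int
SET @myDBID = db_id()
SELECT FileId
, BytesRead / NULLIF(NumberReads, 0) as [Avg Bytes/Read]
, BytesWritten / NULLIF(NumberWrites, 0) as [Avg Bytes/Write]
, CAST(IoStallMS as FLOAT) / NULLIF(NumberReads + NumberWrites, 0)
as [Avg Stall ms/IO]
FROM ::fn_virtualfilestats (@myDBID, -1)
GO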
The raw numbers are useful, but they are even more interesting when aggregated in different ways. Some of the ways that you might want to aggregate are:
- For the system as a whole
- By database
- By drive
- By log files vs. data files
The following UDF, udf_Perf_FS_ByDriveTAB, aggregates the files by drive letter. It uses information in master..sysaltfiles to get the path to the physical file. Here is the CREATE FUNCTION script:
SET QUOTED_IDENTIFIER OFF
SET ANSI_NULLS OFF
GO
CREATE FUNCTION dbo.udf_Perf_FS_ByDriveTAB (
@DriveLetter CHAR(1) = NULL -- Drive or NULL for all
) RETURNS TABLE
/*
* Returns a table of statistics by Drive letters for all drives
* with database files in this instance. They must match
* @Driveletter (or NULL for all). Returns one row for each
* Drive. Information about physical files is taken from
* master..sysaltfiles which has the physical file name needed.
* Warning: Drive letters do not always correspond to physical
* disk drives.
*
* Example:
select * from dbo.udf_Perf_FS_ByDriveTAB (default)
****************************************************************/
AS RETURN
SELECT TOP 100 PERCENT WITH TIES
LEFT(saf.[Filename], 1) + ':' as DriveLetter
, Count(saf.FileID) as NumFiles
, CAST (Sum(saf.[size] / 128.0) as numeric(18,3)) as SizeMB
-- convert to Megabytes from 8-k pages
, Sum(NumberReads) as NumberReads
, Sum(NumberWrites) as NumberWrites
, Sum(BytesRead) as BytesRead
, Sum(BytesWritten) as BytesWritten
, Sum(IoStallMS) as IoStallMS
, Avg(vfs.[TimeStamp]/1000) as SecondsInMeasurement
FROM ::fn_virtualfilestats(-1, -1) vfs -- -1 = all db & files
inner join master..sysaltfiles saf
on vfs.DbId = saf.DbId
and vfs.FileID = saf.FileID
WHERE (@DriveLetter is NULL
OR LEFT(saf.[Filename], 1) = @DriveLetter)
GROUP BY LEFT(saf.[Filename], 1) + ':'
ORDER BY DriveLetter asc -- by Drive letter C, D, ....
GO
The parameter to udf_Perf_FS_ByDriveTAB is a drive letter, which you can use if you are only interested in information about a single drive. Otherwise, use NULL to retrieve information for all drives that have SQL Server data or log files.
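For example, to restrict the output to the D: drive you would pass the letter instead of the default:
-- Statistics for the D: drive only
SELECT * FROM dbo.udf_Perf_FS_ByDriveTAB ('D')
GO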
Here is the result of executing it on a test server:
-- Get input output by drive
SELECT DriveLetter as Drive, NumFiles as Files, SizeMB
, NumberReads as Reads, NumberWrites as Writes
, BytesRead, BytesWritten, IOStallMS
, SecondsInMeasurement as Sec
FROM dbo.udf_Perf_FS_ByDriveTAB (default)
GO
(Results) Drive Files SizeMB Reads Writes BytesRead BytesWritten IoStallMS Sec
----- ----- ---------- -------- ------- ---------- ------------ --------- -------
D: 47 5032.625 140293 392340 6258849792 20267508224 11024112 290955
E: 21 6004.125 340876 882233 9558932367 43481143983 13666091 290955
The most important factor needed to analyze these numbers is knowledge of the disk configuration that is associated with each drive letter. Is it a single drive? Is there a RAID configuration? If so, which one? Are there multiple partitions on any of the drives? A SAN? The answers to all of these questions make a big difference in how you evaluate the results. Different configurations, such as RAID 0 vs. RAID 5, have different capacities and you will have to know your configuration to make meaningful comparisons.
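One way to put drives with different workloads on a comparable footing is to turn the totals into rates and per-operation stall times, much as was done earlier with the system statistical functions. Here is a sketch built on the UDF's own columns:
-- Turn the per-drive totals into rates and per-operation stalls
SELECT DriveLetter as Drive
, CAST(NumberReads * 1.0 / NULLIF(SecondsInMeasurement, 0)
as numeric(18,2)) as [Reads/Sec]
, CAST(NumberWrites * 1.0 / NULLIF(SecondsInMeasurement, 0)
as numeric(18,2)) as [Writes/Sec]
, CAST(IoStallMS * 1.0 / NULLIF(NumberReads + NumberWrites, 0)
as numeric(18,2)) as [Avg Stall ms/IO]
FROM dbo.udf_Perf_FS_ByDriveTAB (default)
GO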
There are other ways to aggregate the output of fn_virtualfilestats. You will find one such alternative in a user-defined function, udf_Perf_FS_byDBTab, that aggregates by database.
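The by-database version follows the same pattern as the by-drive function. Here is a minimal sketch of that kind of aggregation (not the actual udf_Perf_FS_byDBTab, just an illustration of the idea):
-- A minimal sketch of aggregating fn_virtualfilestats by database
SELECT db_name(vfs.DbId) as DatabaseName
, Count(vfs.FileId) as NumFiles
, Sum(vfs.NumberReads) as NumberReads
, Sum(vfs.NumberWrites) as NumberWrites
, Sum(vfs.BytesRead) as BytesRead
, Sum(vfs.BytesWritten) as BytesWritten
, Sum(vfs.IoStallMS) as IoStallMS
FROM ::fn_virtualfilestats(-1, -1) vfs
GROUP BY db_name(vfs.DbId)
ORDER BY DatabaseName
GO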
A problem with the numbers produced by fn_virtualfilestats is that they include everything since the start of the instance. The numbers during peak usage are more interesting, and the next section develops a solution for gathering them that is similar to the approach sp_monitor uses.