Article 16727 of comp.sys.ibm.pc:
From: gerard@tscs.UUCP (Stephen M. Gerard)
Subject: Re: Question about hard disk interleave factors
Keywords: fixed disk interleave
Message-ID: <187@tscs.UUCP>
Date: 20 Apr 88 05:54:19 GMT
Organization: Total Support Computer Systems, Tampa, Florida

Computing the optimum interleave factor for MS-DOS machines is indeed a
"Bag of Worms"!

I have attempted to describe the key problems that come into play when
attempting to obtain optimal performance from your disk drive.  It is hard
to do this without getting into Operating System internals or hardware design.

There are basically four factors that affect the performance of the disk
subsystem.  They may be classified as:

1.) Ability of the disk controller.				(hardware)

2.) Ability of system board to transfer data.			(hardware)

3.) Intelligence of the Operating System buffering scheme.	(software)

4.) (lacking #3) Intelligence of the applications program.	(software)

At the hardware level, the disk controller must be able to read/write
the data at a high enough rate that it can accept the command for the
next logical record in the interleave sequence being used.  If it
cannot, the controller is forced to wait until the next disk revolution
brings the desired sector back under the disk drive's read/write head.  Even
if the controller can handle the data quickly enough, it may still be
held up by the inability of the system board's data bus to transfer the data
before the next logical sector passes under the disk head.  At the hardware
level this is all pretty much straightforward: either the disk controller
and the system board can support the selected interleave factor or they
cannot.
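To make the mechanics concrete, here is a little sketch (Python, used purely
as illustration -- the function and the numbers are mine, not from any
particular controller) of how an interleave factor lays logical sectors
around a track:

```python
# Illustrative sketch: how an interleave factor arranges logical sectors
# in the physical slots around one track.  With interleave N, consecutive
# logical sectors sit N physical slots apart, giving the controller
# N-1 sector times to get ready for the next transfer.

def interleave_layout(sectors_per_track, interleave):
    """Return the track layout: physical slot index -> logical sector."""
    layout = [None] * sectors_per_track
    slot = 0
    for logical in range(sectors_per_track):
        while layout[slot] is not None:          # slot taken, slide forward
            slot = (slot + 1) % sectors_per_track
        layout[slot] = logical
        slot = (slot + interleave) % sectors_per_track
    return layout

print(interleave_layout(17, 1))   # 0 1 2 ... 16: sectors in physical order
print(interleave_layout(17, 3))   # 0 6 12 1 7 13 ...: two slots of slack
```

With an interleave of 3 the drive needs three revolutions to deliver a whole
track, but a slow controller never has to eat a full extra revolution for
each sector it misses.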

At the software level, the applications program makes a request to the
Operating System (O/S) for a particular chunk of data to be read from or
written to the disk.  Without getting into a discussion of how an O/S does or
should handle disk I/O, for reads let's say that the O/S will check its
internal buffers to see if the requested block of data resides in memory.  If
the requested block is in the O/S's buffers, it will pass the block to the
applications program.  If the block is not in the buffers, the O/S will issue a
command to the disk controller to read the selected block from the disk drive.
When that block has been read from the disk, it will be given to the
applications program.  Each time the applications program requests a block of
data from the O/S, a certain amount of overhead is incurred.  Generally
speaking, the larger the block of data requested by the applications program,
the less overhead is incurred.  That is to say, the less the O/S is
involved, the higher the data transfer rate.  Not all applications programs
read/write the same size blocks of data.  Applications programs that use large
blocks of data (buffers) are better able to exploit the higher data transfer
rates attainable with lower interleaves.  Of course, up to now we have
assumed that the applications program is not attempting to process any of the
data in between reads and writes.  Overhead from the applications program
processing the data as it is read may force the disk controller to wait
for the next revolution of the disk before it can perform the requested read.
For example, a word processor that reads the entire document into memory
before it attempts to figure out how to format the document can achieve a
higher data transfer rate than a word processor that formats each block as it
reads it.
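The buffering scheme just described can be sketched in a few lines (again
illustrative Python -- no real O/S works exactly like this, and the names
are hypothetical):

```python
# Hypothetical sketch of the O/S buffering scheme described above: check
# the internal buffers first, and only go to the disk controller on a miss.

class BufferedDisk:
    def __init__(self, controller_read):
        self.buffers = {}                     # block number -> data
        self.controller_read = controller_read
        self.physical_reads = 0

    def read_block(self, block):
        if block in self.buffers:             # buffer hit: no disk access
            return self.buffers[block]
        self.physical_reads += 1              # buffer miss: ask the controller
        data = self.controller_read(block)
        self.buffers[block] = data
        return data

disk = BufferedDisk(lambda b: ("data", b))
disk.read_block(7)
disk.read_block(7)                            # second read served from memory
print(disk.physical_reads)                    # -> 1
```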

Ok, what does all of this mean?

Well, quite simply, an applications program may perform better with a higher
interleave factor than with a lower one selected by using
a program such as "CORETEST".  CORETEST does nothing but read data; by default
it uses a 64K buffer.  A typical applications program may read only 512 bytes
or even less with each read.  By the time you add in the overhead caused by
the O/S, chances are pretty good that the next record has already passed
the disk's read/write head.

Programs should load faster with a lower interleave factor.  This is because
DOS allocates a chunk of memory and loads the program into that memory
using large transfers.  By the same token, the DOS copy command should also
achieve better performance with a lower interleave, as it again only has to
handle large chunks of data.

Database applications may run faster with a higher interleave factor depending
on how poorly the disk I/O code was written and what data is being processed
between disk reads.

What can be done to improve performance?

Use a better disk controller.  Some controllers now have built-in cache
memory.  If the requested disk record is already in cache memory, it may be
transferred without waiting for it to be read from the disk.

Improve DOS: if DOS read an entire track into its buffers, it could,
in many cases, supply the next record requested by the applications program
without needing to go to the disk controller.

Improve applications programs: use large buffers for disk I/O.

Optimize your disk drive often.  Use a program like "SD" which is included
with Peter Norton's Advanced Utilities.

Summary:

The best way to tell which interleave factor you should be using is to
try each one with the applications program that you use the most.  With
the WD-1002 you are using, an interleave of 5 is most likely the best you
can do.  With an Adaptec ACB-2010, try 2 through 5.  With an OMTI 5520, try
1 through 5.

The following formula may be used to calculate the maximum data transfer rate
of a disk drive.  The actual transfer rate will be lower due to system speed,
disk controller, bus width, Operating System overhead, the applications program,
etc.

		 Sectors-Per-Track * Sector-Size * RPM
	KB/S =  ---------------------------------------
		          Interleave * 61440

KB/S = Kilo Bytes per Second

Sectors-Per-Track = 17 for MFM drives
		    25 or 26 (controller dependent) for RLL
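Plugging in typical numbers (3600 RPM is assumed here as a common spindle
speed for drives of this type -- check your drive's spec sheet) gives:

```python
# The formula above, as a function.  61440 = 60 seconds * 1024 bytes,
# converting bytes per minute into KB per second.

def max_transfer_kbs(sectors_per_track, sector_size, rpm, interleave):
    return sectors_per_track * sector_size * rpm / (interleave * 61440)

# 17-sector MFM drive, 512-byte sectors, 3600 RPM (assumed):
print(max_transfer_kbs(17, 512, 3600, 1))   # -> 510.0 KB/S
print(max_transfer_kbs(17, 512, 3600, 3))   # -> 170.0 KB/S
```

Note how each step up in interleave divides the theoretical maximum, which
is why getting the factor right matters so much.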

I hope this helps take a little bit of the mystery out of this Bag of Worms.

 