                                                                 page 1 of 4

March 7, 1988                                         X3T9.2/88-030

To: X3T9.2 Committee (SCSI)   

From: James McGrath (Quantum)

Subject: Proposed Definition of Caching




  This proposal is in response to an action item to define what the
standard means when it refers to a cache memory.  I have tried to
retain the distinctions made in the current text referring to caching
devices, but I have also tried to broaden it to allow for a greater
diversity of cache implementations


Replace 6.2.4 (pg 6-6) with the following:


    6.2.4   Cache Control Bits
 
    6.2.4.1    Definition of Cache

      Some targets may implement a secondary media, referred to as
    "cache memory" or "cache," which is used in the following manner.
    When data is read from the primary media (referred to throughout
    this standard as "the media"), all or part of the data may be
    stored, or "cached," in the secondary media.  Reading this data
    from the primary media may be in response to a specific SCSI
    command or at the device's discretion (in which case it shall be
    referred to as "auto-prefetching").  A request for data by an
    initiator which would normally involve an access to the primary
    media may instead be satisfied by accessing the secondary media
    if the identical data is currently stored there as well.  In this
    case the secondary media is acting as a "read cache," and such an
    access shall be denoted a "read hit" for the data requested.  

      When data is to be written to the primary media, all or part of
    the data may instead be stored, or "cached," in the secondary
    media.  At this point the write operation shall be successfully
    terminated.  In this case the secondary media is acting as a
    "write cache," and such an access shall be referred to as a
    "write hit" for the data written.  At a later time that data may
    be transferred from the secondary media to the primary media.  As
    long as the most recent data is obtainable from the secondary
    media, such a "copy-back" operation is not strictly required.

      Targets may cache data for any or all of the attached logical
    units.  The logical units themselves may also contain cache
    memory.
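
  [Informal illustration, not proposed standard text.]  The read-cache
and write-cache behavior defined above can be modeled as a sketch; all
names here (CachingTarget, copy_back, etc.) are invented for
illustration:

```python
# Illustrative model of the read-cache / write-cache behavior described
# above.  Names are invented; this is not proposed standard text.
class CachingTarget:
    def __init__(self, media):
        self.media = media     # primary media: lba -> data
        self.cache = {}        # secondary media (cache): lba -> data
        self.dirty = set()     # write-cached blocks not yet copied back

    def read(self, lba):
        if lba in self.cache:            # read hit: no media access
            return self.cache[lba]
        data = self.media[lba]           # miss: access the primary media
        self.cache[lba] = data           # cache the data just read
        return data

    def write(self, lba, data):
        self.cache[lba] = data           # write hit: command terminates here
        self.dirty.add(lba)              # copy-back is deferred

    def copy_back(self):
        for lba in self.dirty:           # transfer cached writes to the media
            self.media[lba] = self.cache[lba]
        self.dirty.clear()
```

Note that, as the text states, the copy-back is not strictly required
as long as the most recent data remains obtainable from the cache.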
      A "cache consistency" problem arises if there is more than one
    data path to the primary media of a logical unit, and cache
    memory resides in at least two different data access paths.
    These cache memories must, whenever they can be accessed, reflect
    the same state of data for the same portions of the primary
    media.  However, there are no mechanisms available in this
    standard to ensure this degree of consistency under all possible
    system configurations.

      What data to store in a cache memory is determined by the
    "fetching strategy" (e.g. demand, prefetch, demand and prefetch,
    heuristic).  At times, additions of data to the cache memory
    may require simultaneous displacement of other data from the
    cache.  Which data is displaced is determined by the cache's
    "replacement strategy" (e.g. FIFO, random, LRU, heuristic).
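
  [Informal illustration, not proposed standard text.]  One of the
example strategies above, demand fetching with LRU replacement, can be
sketched as follows; the class and its names are invented:

```python
from collections import OrderedDict

# Sketch of demand fetching with LRU (least recently used) replacement,
# one of the example strategies named above.  Not proposed standard text.
class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()   # logical block address -> data

    def read(self, lba, read_media):
        if lba in self.blocks:                 # read hit
            self.blocks.move_to_end(lba)       # mark most recently used
            return self.blocks[lba]
        data = read_media(lba)                 # miss: demand fetch
        if len(self.blocks) >= self.capacity:  # displace the LRU block
            self.blocks.popitem(last=False)
        self.blocks[lba] = data
        return data
```

A FIFO strategy would simply omit the move_to_end() step, so that
blocks are displaced in the order they were fetched.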

      Normally the cache memory has a much smaller capacity, and a
    faster access time, than the primary media.  Its small capacity
    implies that the fetching and replacement strategies are
    important in determining the "hit rate" (proportion of accesses
    that can be satisfied by accessing only the cache and not the
    primary media).  Since a read or write hit is normally serviced
    faster than a normal read or write that accesses the primary
    media, potentially improving the target's overall performance,
    the hit rate is an important performance parameter.  To precisely
    compute a cache's contribution to improved performance, the
    average service times of hits (SH) and misses (SM) (misses are
    requests which are not hits) using the cache, the average service
    time of accesses without the cache (S), and the hit rate (HR) are
    used as follows:

      % improvement = 100 * ([S / ( SH * HR + SM * (1 - HR) )] - 1)


      Note that the hit rate may be based upon either blocks of
    data for which hits were made or SCSI commands for which hits
    were made.  Which basis is used is important in computing the
    improvement, so the above formula must be carefully and
    consistently applied.
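
  [Informal illustration, not proposed standard text.]  Applying the
formula above to assumed service times (hits 2 ms, misses 22 ms,
uncached accesses 20 ms) and an assumed hit rate of 80%:

```python
# Worked example of the %-improvement formula above.  The numbers are
# assumptions chosen for illustration, not measurements.
SH, SM, S, HR = 2.0, 22.0, 20.0, 0.80   # ms, ms, ms, hit rate

avg_with_cache = SH * HR + SM * (1 - HR)      # 2*0.8 + 22*0.2 = 6.0 ms
improvement = 100 * (S / avg_with_cache - 1)  # 100 * (20/6 - 1)

print("%.1f%% improvement" % improvement)     # prints "233.3% improvement"
```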


    6.2.4.2    Cache Control

      The initiator may explicitly request a prefetch of data into
    the cache memory by issuing the PRE-FETCH command.  The initiator
    may also ensure that the data in the cache memory and the data in
    the primary media are the same for a given logical block address
    by issuing a SYNCHRONIZE CACHE command.  Finally, the initiator
    may force the target to give the highest retention priority to
    some data in the cache memory, or to subsequently remove that
    priority, by issuing the LOCK/UNLOCK CACHE command.
      The cache control bits, disable page out (DPO) and force unit
    access (FUA), are defined for peripheral device types 0, 4, 5,
    and 7 (see Table 7-__).  They provide a means to manage the cache
    memory.  SCSI-2 devices that do not contain cache memory shall
    ignore the cache control bits.  

      IMPLEMENTORS NOTE:  One may determine whether a SCSI-2 device
      contains a cache memory by examining the cache bit returned by
      the MODE SENSE command.  Some SCSI version 1 devices that do
      not support cache memory may reject commands in which the DPO
      or FUA bits are set to one.


      A DPO bit of one indicates that the target shall give the
    logical blocks accessed by this command the lowest priority for
    being fetched into or retained by the cache.  A DPO bit of zero
    indicates that the priority assigned shall be determined by the
    target in a vendor unique manner.  All other aspects of the
    algorithm implementing the cache replacement strategy are not
    defined by this standard.

      IMPLEMENTORS NOTE:  The DPO bit is used to control replacement
      of the logical blocks in the cache when the host has
      information on their usage.  If the DPO bit is set to one,
      the host knows the logical blocks accessed by the command are
      not likely to be accessed again in the near future and should
      not be put in the cache or retained by the cache.  If the DPO
      bit is zero, the host expects that logical blocks accessed by
      this command are likely to be accessed again in the near
      future.  


      An FUA bit of one indicates that the SCSI device shall access
    the primary media in performing the command prior to returning
    GOOD status.  In particular, write commands shall not return GOOD
    status until the logical blocks have actually been written on the
    media (i.e. the data is not write cached).  Read commands shall
    access the specified logical blocks from the primary media (i.e.
    the data is not directly retrieved from the cache).  In the case
    where the cache contains a more recent version of a logical block
    than the primary media, the logical block shall first be written
    to the primary media.

      An FUA bit of zero indicates that the SCSI device may satisfy
    the command by accessing the cache.  In the case of a read
    access, any logical blocks that are contained in the cache may be
    transferred to the initiator directly from the cache (i.e. there
    may be a read hit).  In the case of a write access, logical
    blocks may be transferred directly to the cache (i.e. there may
    be a write hit).  GOOD status may be returned to the initiator
    prior to writing the logical blocks from the cache to the primary
    media.  Therefore, error information may not be reported until a
    subsequent command (e.g., SYNCHRONIZE CACHE command).  
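
  [Informal illustration, not proposed standard text.]  One way a
target might honor the DPO and FUA bits on a read command, given the
cache/media split described above, can be sketched as follows; the
function and its parameters are invented for illustration:

```python
# Sketch of DPO/FUA handling for a read command, per the rules above.
# All names are invented; this is not proposed standard text.
def read_command(lba, dpo, fua, cache, dirty, media):
    if fua:
        if lba in dirty:            # cache holds newer data: copy back first
            media[lba] = cache[lba]
            dirty.discard(lba)
        data = media[lba]           # FUA = 1: must access the primary media
    elif lba in cache:
        data = cache[lba]           # FUA = 0: a read hit is permitted
    else:
        data = media[lba]
    if dpo:
        cache.pop(lba, None)        # DPO = 1: lowest retention priority
    else:
        cache[lba] = data           # DPO = 0: target's own (vendor) policy
    return data
```

The write-command case is symmetric: FUA = 1 withholds GOOD status
until the blocks reach the primary media, while FUA = 0 permits a
write hit with deferred error reporting as described above.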
  Delete the last sentence of the CONDITION MET paragraph in 6.3 (pg
6-9), i.e. "This status is also returned by the PRE-FETCH command
when there is sufficient space in the cache memory for all of the
addressed logical blocks." 

  Add as the next to the last paragraph of the LOCK/UNLOCK CACHE
command (8.1.2, pg 8-14):

      GOOD status shall be returned even if the cache memory cannot
    allocate sufficient space to lock all of the logical blocks in
    the range of logical blocks.


  Replace the next to the last paragraph of the PRE-FETCH command
(8.1.4, pg 8-44) with [I believe we resolved this issue in the
following manner at San Jose, but my memory may be faulty]:

      GOOD status shall be returned even if the cache memory cannot
    allocate sufficient space to pre-fetch all of the logical blocks
    in the specified range of logical blocks.  In this case, the
    target shall transfer only the number of logical blocks that fit
    into the cache memory.

