
Hard Drives and Computer System Performance

From Higher Intellect Vintage Wiki

Highlights

  • As the complexity of hardware designs, operating systems, and applications software increases, users of mid- to high-end desktop computer systems require disk drives with greater performance and larger capacities than ever before.

  • Disk drive performance is a critical factor in optimizing computer system performance. Although many factors contribute to disk drive performance, the ultimate measure is data throughput rate, provided other system components can process the data as fast as the disk drive can read or write it. In general, higher data transfer rates from the disk to the host lead to improved system performance.

  • Quantum's ProDrive LPS 270/340/540 products complement the performance of today's mid- to high-end desktop computer systems by implementing several performance-enhancing technologies, including:

    Cache-Related Technologies:
    Adaptive Segmentation Firmware
    AutoRead and AutoWrite ASIC
    DisCache Firmware
    WriteCache Firmware

    Interface Enhancements:
    Fast Multiword DMA
    Local Bus IDE Compatibility
    SCSI-2 and Fast SCSI-2

    Other Performance Improvements:
    Advanced Embedded Servo Technology
    AutoTransfer ASIC Technology
    Error Correction Code (ECC) On-The-Fly
    Read-On-Arrival Firmware
    Read/Write Multiple Firmware

Quick Look

This issue of Quantum Technical Information Papers (TIPS) examines disk drive performance factors that impact computer system performance - specifically for mid- to high-end desktop computer systems. The evolution of computer system design and the increased complexity of computer applications used in the workplace, educational arena, and home have driven the search for greater computer system performance.

Chip designers have made dramatic improvements in the processing speeds of the computer's central processing unit (CPU), with 33 megahertz (MHz) now common in 386 and 486 systems. With clock doubler and tripler chip designs on the way, these processors can reach speeds of up to 99 MHz. In addition, Intel has introduced the Pentium, the long-awaited P5 microprocessor, which runs at speeds of 50 and 66 MHz. While these enhancements on the CPU side have a tremendous effect on overall computer system performance, manufacturers of peripheral components, such as hard disk drives, must match this technological progress to ensure optimum computer system performance and minimize the possibility of the disk drive becoming a system bottleneck.

Even in the early days of CPU and operating system design, peripheral components such as hard disk drives represented a key factor in overall system performance. The advent of high performing CPUs (the 486, Pentium, and 68040) and the adoption of graphical operating systems (Microsoft's Windows and Windows NT, Apple's System 7, and IBM's OS/2) has added to the need for increased performance and capacity in disk drives. As the speed and size of data transfers have increased in the system design, the hard disk has kept pace using techniques such as caching, "look-ahead" fetching, and higher areal densities. Decreasing disk access times and new, high speed disk-to-host interfaces also have enabled disk drives to support the demands of today's high performance applications.

One way Quantum has met the challenge of keeping pace with today's mid- to high-end computer systems is through the introduction of a high performance line of hard disk drives for mid- to high-end desktop computer systems: the ProDrive LPS 270/340/540 products. This TIPS discusses the advanced technologies Quantum has implemented in these drives to meet high system performance requirements.

Marketplace Perspective

As with many sectors of the electronics industry, hard disk drive technologies and products are evolving continually. Each new technological development spurs yet another competitive thrust among vendors to develop new drives that are smaller, faster, and "smarter" - at a reduced cost. In barely a decade, hard drives have evolved from being devices that only received instructions from external controllers to today's intelligent subsystems complete with their own resident controller comprising a CPU, application specific integrated circuits (ASICs), and microcode. These intelligent devices free the host CPU from most data management tasks - a crucial factor to increasing performance for emerging multitasking and networked computing environments.

In addition to system performance improvements, increases in storage requirements are also driving the need for higher performance drives. Only a few years ago, computer systems were commonly sold with hard drive capacities of 80 megabytes (MB) or less. But for today's mid- to high-end desktop computer systems, entry-level storage requirements have more than doubled to over 200 MB. As the complexity of hardware designs, operating systems, and applications software increases, computer systems require disk drives with greater performance and larger capacities than ever before - and at increasingly aggressive prices. For example, Windows NT requires 70 MB of hard drive storage simply to operate, and multimedia applications can require hundreds of megabytes of storage capacity.

Close Up

Even as computer system designers incorporated advanced technologies, performance lags by any one component - for example, the hard disk drive - could create a system bottleneck and decrease overall system performance. Historically, overcoming the "wait states" created by host systems performing at faster rates than disk drives has challenged both computer system and disk drive designers to develop solutions that increase overall system performance. Increased areal data density, integrated components, and ASICs have had the greatest positive impact on both drive performance and overall system performance. In addition, advances in heads and media, reliability, and environmental characteristics have contributed to increased drive performance. Quantum, for example, has met the performance challenge in the latest generation of disk drive products with improvements to cache-related technologies and internal drive performance, as well as in the use of the interface between the host system and the disk drive.

Efficient use of the bus is a key factor to increasing computer system performance. The width of the bus and the speed of data transfer along it have the greatest effect on system performance. This interface, which was once 4 or 8 bits wide, typically transferred data at rates only up to 1 megabyte per second (MB/sec.). Now the bus is commonly 16 bits or 32 bits wide and transfers data at speeds of 10, 20, and even 40 MB/sec. - thanks to the use of a faster bus, called the local bus. The next logical development for the bus is a 64-bit wide interface, which will allow drives and other peripherals to reach even higher data transfer speeds.

By developing more highly integrated chips, drive manufacturers not only are reducing the space requirements for drive electronics, they also are fine-tuning each new generation of disk drive products. As drive electronics become more integrated, fewer data transfer and communication problems occur between system components, and overall system performance increases. For example, Quantum has implemented a single-chip read channel integrated circuit (IC), which combines functions that previously were performed in two or more separate IC components. This new read channel improves disk-to-buffer data transfer rates by more than 80% over previous multi-chip implementations.

The pursuit of improving disk drive performance and reliability while decreasing cost has led to advances in firmware design that are now implemented in ASICs. For example, Quantum's dedicated ASIC design team has developed several new technologies that contribute to the high performance of our disk drives. AutoTransfer ASIC Technology, one of these technologies, decreases host overhead delays while completing a disk read or write data transfer of one sector. AutoTransfer ASIC Technology also generates a time savings of over 60% compared to previous firmware implementations. This technology significantly contributes towards improving computer system performance, especially with larger files such as those for multimedia or full-motion video applications.

New head and media advances provide lower flying heights over the disk platters and allow data to be positioned closer together. The result is higher areal data density and shorter seek and rotational latency times. Faster platter spindle speeds, measured in revolutions per minute (RPM) also contribute to the reduction in overall seek times. In general, however, the higher the RPM of a drive, the more power required, the noisier the drive, and the more "wear and tear" on the drive components. Because high RPMs cause greater stress on all moving drive components, the net effect can decrease mean time between failure (MTBF) ratings. But, with the movement toward energy-efficient computer systems, system designers are achieving higher RPMs with lower power, which reduces stress on components and provides better acoustics. Today's drive designers are learning how to optimize the balance among RPM speed, power consumption, acoustics, and drive reliability.
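As a simple illustration of the relationship between spindle speed and rotational latency (the RPM figures below are generic examples, not specifications of the ProDrive LPS products), average rotational latency is half the time of one platter revolution:

```python
def avg_rotational_latency_ms(rpm: float) -> float:
    """Return average rotational latency in milliseconds for a given spindle speed."""
    ms_per_revolution = 60_000.0 / rpm    # 60,000 ms per minute / revolutions per minute
    return ms_per_revolution / 2          # on average, half a revolution passes the head

for rpm in (3600, 4500, 5400):
    print(f"{rpm} RPM -> {avg_rotational_latency_ms(rpm):.2f} ms average latency")
```

At 3600 RPM the average latency is about 8.3 ms; raising the spindle speed cuts it proportionally, which is why drive designers pursue higher RPMs despite the power, acoustic, and reliability costs described above.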

Quantum is one of the industry leaders in developing advanced technologies for disk drive products. We have received over 50 patents in the U.S. alone for disk drive technology and have incorporated many technological "firsts" into our ProDrive LPS 270/340/540 products including Adaptive Segmentation firmware and AutoTransfer ASIC Technology.

In addition, we have continued our tradition of designing "Intelligent Performance" into our ProDrive LPS 270/340/540 disk drives. Intelligent Performance is a design philosophy that favors the further development of built-in drive software and custom circuitry to increase performance, without increasing drive cost or power consumption. In our latest generation of disk drives, our Intelligent Performance philosophy has yielded several performance-enhancing technologies that, in turn, enhance overall computer system performance. These technologies are detailed in the sections that follow.

Cache-Related Technologies

Quantum has engineered several cache-related technologies, which improve the performance of disk drives by speeding data throughput between the drive and the host CPU. In today's demanding multitasking, multi-user environments, Quantum's unique caching technologies - Adaptive Segmentation, DisCache and WriteCache firmware, and the AutoRead and AutoWrite ASIC - simply outperform other caching methods and contribute significantly to computer system performance.

Figure 1 is a simplified diagram of the relationships among Quantum's cache-related technologies. As you read about the cache-related technologies in the sections that follow, refer back to this figure, and you will gain a better understanding of how these technologies work together to make Quantum drives the best drives on the market.


Adaptive Segmentation Firmware

The first cache systems for disk drives were implemented as a means of bridging the performance gap between disk drives and faster computer system components and preventing disk drives from being system bottlenecks. The idea was that, by adding a buffer from which previously requested data could be retrieved very quickly, computer system performance would be improved. If the host CPU requested data in the buffer, it could retrieve the data from the buffer in microseconds, instead of in the milliseconds required to retrieve data from the hard disk. The first cache systems used single, non-segmented buffers in which the entire buffer was overwritten each time the host requested data from a non-sequential disk address. The single-buffer design worked well for the single-user systems in use at that time.

As Intel's 386 emerged and computer system designs, operating systems, and applications software grew in complexity, higher performance disk drives were required. Drive vendors upgraded their cache systems by moving from a single buffer design to a segmented buffer design. Quantum's drives, for example, were improved by creating a segmented buffer, which divided the buffer into four fixed segments, each 64K. Three buffer segments were allocated only for read operations, and one was allocated only for write operations. This design works much better for higher performance computer systems because it increases the chances that the requested data is in the buffer. Thus, drive performance was enhanced for non-sequential data transfers, and to a greater extent for sequential data transfers in which the data the host CPU requests is stored on the disk immediately following the previously requested data.

Recently, to further enhance the performance of desktop computer systems, Quantum has developed a new buffer segmentation method using Adaptive Segmentation Firmware that uses the available buffer space more efficiently. With Adaptive Segmentation Firmware, no longer is the buffer divided into fixed length segments, nor is a particular part of the buffer dedicated to use with only read or write operations. Instead, an algorithm determines the size of the data transfer, regardless of whether the operation is a read or write, and calculates the optimum amount of the buffer to allocate. Available buffer space is optimized because - with Adaptive Segmentation Firmware - the drive can write data to the buffer contiguously.

As the example in Figure 2 illustrates, a 128K buffer with Adaptive Segmentation can provide as much or more usable buffer space as a 256K buffer with fixed segmentation. After the three data requests, R1, R2, and R3, the buffer with fixed segmentation is full. At the next request, if the data is not already in the buffer, at least one 64K segment will be overwritten with new data. With the same three requests, the buffer with Adaptive Segmentation still has space available. In addition, when this buffer is filled, the drive will overwrite only the amount of buffer space required. Thus, Adaptive Segmentation minimizes the buffer space required and uses it for optimum performance.
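The contrast between the two allocation schemes can be sketched in a few lines; the request sizes below are invented for illustration and are not the actual values from Figure 2:

```python
# Hypothetical sketch of the two buffer-allocation strategies described above.
# All sizes are in kilobytes.

def fixed_segmentation_used(requests, segment_kb=64, segments=4):
    """Each request consumes a whole fixed segment, wasting the remainder."""
    served = requests[:segments]              # at most `segments` requests fit at once
    return len(served) * segment_kb           # whole segment consumed regardless of size

def adaptive_segmentation_used(requests):
    """Each request consumes only the space it needs, packed contiguously."""
    return sum(requests)

requests = [20, 48, 10]                       # three reads: R1, R2, R3
print(fixed_segmentation_used(requests))      # 192 KB consumed (3 x 64 KB)
print(adaptive_segmentation_used(requests))   # 78 KB consumed; the rest stays free
```

With fixed segmentation, a 10K request ties up a full 64K segment; with adaptive segmentation the same three requests leave most of the buffer available for further data.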


AutoRead and AutoWrite ASIC

Quantum's AutoRead and AutoWrite ASIC speeds sequential data access for the ProDrive 270/340/540 by transferring several time-consuming read and write functions, previously implemented in the firmware, to an ASIC. Working in conjunction with Quantum's other cache-related technologies, AutoRead and AutoWrite receive commands, interpret them, and perform the data transfers required to complete them.

AutoRead controls data transfer from the cache buffer to the host CPU, while AutoWrite controls data transfer from the host CPU to the cache buffer. This means that, in the event of a cache hit - that is, if the required data is already in one of the buffer segments - the data is transferred automatically from the buffer to the host. On a cache hit, Quantum's caching scheme decreases command overhead by up to 90%, resulting in increased throughput.

DisCache Firmware

DisCache firmware uses a "look-ahead" design and an on-board cache buffer to optimize disk drive performance. When the host CPU requests data, DisCache not only provides the requested data, but it also adapts its cache algorithm based on the next request. If the next request is for sequential data, the "look-ahead" caching scheme continues filling the buffer with new sequential data. Thus, especially for long sequences of sequential commands, the continuous prefetch provides round-robin filling and emptying of the buffer, resulting in increased throughput.

Typically, over 50% of all disk requests are sequential. So, once DisCache fills the buffer with sequential data, there is a high probability that data requested by the CPU will be in the on-board cache. If it is, DisCache eliminates both the seek time and rotational latency delays that dominate non-cached disk transactions. Because the host CPU doesn't have to access the disk drive to transfer the data, the data retrieval process occurs almost instantaneously.

The DisCache "look-ahead" prefetch strategy complements most computers' system-level caches, which typically store large blocks of previously requested data and prefetch data on a file-by-file basis. The most effective system-level caches are very large - up to several megabytes - and require system overhead to maintain. The reuse of previously accessed data is one of the primary benefits of a system-level cache. Quantum's DisCache complements all system-level caches by increasing data throughput in applications, such as multimedia, that require large transfers of sequential data.

In a multitasking environment where a hard disk services multiple CPU operations, the disk must divide available time among all the operations, even though each might be requesting data sequentially from the disk. In conventional disk drive systems, the read/write heads seek from one location to another to service multiple data requests. With DisCache, however, the number of seeks required typically will be reduced. After the first seek and read has been performed for each task, DisCache often can transfer the data directly from the high-speed, on-board cache memory to the host CPU.
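The "look-ahead" idea can be sketched roughly as follows; the prefetch depth and block numbers are invented for illustration, and Quantum's actual firmware is considerably more sophisticated:

```python
# Minimal sketch of a look-ahead read cache: after servicing a request, it
# prefetches the following blocks so a sequential workload hits in the cache.

class LookAheadCache:
    def __init__(self, prefetch_depth=4):
        self.cache = set()
        self.prefetch_depth = prefetch_depth
        self.hits = self.misses = 0

    def read(self, block):
        if block in self.cache:
            self.hits += 1                    # served from cache: no seek, no latency
        else:
            self.misses += 1                  # must go to the platters
        # prefetch the blocks that follow, anticipating a sequential stream
        self.cache.update(range(block + 1, block + 1 + self.prefetch_depth))

cache = LookAheadCache()
for block in range(100, 110):                 # ten sequential reads
    cache.read(block)
print(cache.hits, cache.misses)               # only the first read misses
```

Because each read triggers a prefetch of the blocks behind it, a long sequential stream pays the seek and rotational latency cost only once, at the first request.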

WriteCache Firmware

Ordinary disk drive technology allows host-to-buffer and buffer-to-disk transfers to occur simultaneously during data writing operations. Quantum's proprietary WriteCache technology takes caching a step further by allowing the host-to-buffer transfer of disk drive data to occur while the buffer-to-disk transfer of a prior command still is executing. With WriteCache - when a write command is executed - the drive stores the data to be written in its cache buffer and immediately sends a "command complete" message to the host before the data actually is written to the disk. The host is then free to perform other host-to-buffer tasks. This process eliminates rotational latencies during sequential access, and overlaps rotational latency and seek time with system processing during random access. As a result, sustained data transfer rates increase by up to a factor of ten for sequential writes and up to 30% for random writes.
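The write-back behavior described above can be sketched as follows; this buffering model is a deliberate simplification, not a representation of Quantum's actual firmware:

```python
from collections import deque

class WriteCache:
    """Sketch of a write-back cache: acknowledge immediately, flush to media later."""

    def __init__(self):
        self.pending = deque()                # buffered writes awaiting the media

    def write(self, sector, data):
        self.pending.append((sector, data))
        return "command complete"             # host is freed before the media write

    def flush(self):
        flushed = 0
        while self.pending:
            self.pending.popleft()            # buffer-to-disk transfer happens here
            flushed += 1
        return flushed

drive = WriteCache()
print(drive.write(7, b"..."))                 # host sees completion immediately
print(drive.write(8, b"..."))
print(drive.flush())                          # both sectors reach the disk later
```

The host never waits for seek or rotational latency on a write; those delays are overlapped with useful host processing, which is the source of the throughput gains quoted above.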

Fast Multiword DMA

Used with IDE-AT interface system designs, Fast Multiword DMA is an alternative data transfer technique to Programmed Input/Output (PIO). It increases the disk drive's data transfer rate by transferring multiple words of data with only one set of overhead commands. This reduction in command overhead relative to the total amount of data transferred means that the drive can transfer large data blocks more efficiently. Typically, high-end desktop computer systems running the following applications receive the most benefit from Fast Multiword DMA technology:

  • Multitasking systems

  • File servers and networking environments

  • Multimedia or full-motion desktop video

Figure 3 illustrates the difference between a regular DMA data transfer and a Fast Multiword DMA data transfer.


By transferring multiple words of data directly between the disk drive and system memory, bypassing the CPU, Fast Multiword DMA reduces host overhead and improves overall system performance. The ProDrive LPS 270/340/540 supports industry-standard EISA Type B and Type F Direct Memory Access (DMA) compatible transfers on a PC with an IDE-AT interface. The LPS drives also support Mode 1 ATA DMA, which produces a 13 MB/sec. buffer-to-host data transfer rate. Earlier designs supported only single-word DMA transfers, which reached data transfer rates of just 2 MB/sec., and the Mode 0 standard supported by competitive disk drives transfers data at 4 MB/sec. Because the CPU is freed to process additional requests, Fast Multiword DMA keeps pace with today's advanced system and application requirements, and its 13 MB/sec. rate more than triples the speed of previous data transfers.
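The overhead argument can be made concrete with a toy model; the per-command overhead and per-word transfer times below are invented numbers, chosen only to show the shape of the gain:

```python
# Toy model: single-word DMA pays command overhead on every 16-bit word,
# while multiword DMA pays it once per block.

def transfer_time_us(total_words, words_per_command, overhead_us=2.0, word_us=0.1):
    """Total time = (number of commands x overhead) + (words x per-word time)."""
    commands = -(-total_words // words_per_command)   # ceiling division
    return commands * overhead_us + total_words * word_us

total = 256                                   # one 512-byte sector = 256 words
single = transfer_time_us(total, words_per_command=1)
multi = transfer_time_us(total, words_per_command=256)
print(f"single-word: {single:.1f} us, multiword: {multi:.1f} us")
```

As the block size grows, the fixed command overhead is amortized over more data, so the multiword transfer approaches the raw bus speed while the single-word transfer remains dominated by overhead.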

Local Bus IDE Compatibility

Another very significant advancement making today's disk drives perform better in IBM-compatible PC systems is their use of the PC's local bus, which traditionally has been the high speed communication pathway between the central processing unit (CPU) and the system memory. As CPU speeds and bus widths have increased from 8 MHz and 16 bits to today's advanced systems with 50 MHz CPUs and 64-bit buses, the CPU and memory could operate at maximum throughput because the local bus matched these performance improvements with faster clock cycles (higher frequencies in megahertz than the peripheral bus) and a wider data channel. Video and graphics controllers, able to meet the high speed requirements for communication over the local bus, also were designed into the local bus.

However, peripherals such as hard disk drives traditionally have interfaced to the system over the PC's expansion bus, which generally has limited communication to 8 MHz and 16 bits. The expansion bus is usually referred to as the Industry Standard Architecture (ISA) bus. Additional expansion buses have been introduced over time, including the Extended Industry Standard Architecture (EISA) bus and the Micro Channel bus. Even though these newer buses improve on ISA's limited speed and bus width, each still has been exceeded in performance by modern high-end 486 and Pentium-based PC designs. Only recently have disk drives tapped into the increased performance capabilities provided by the local bus. A PC local bus supporting peripherals can deliver up to six times the throughput of an ISA system.

Figure 4 illustrates the internal bus architecture of an IBM-compatible PC system and shows an example of the new connection between the disk drive and the local bus.


Two standards for local bus implementations have been developed: the Video Electronics Standards Association (VESA) Local (VL) bus and Intel's Peripheral Components Interface (PCI) local bus. VL has been in use since system designers started putting video controllers on the local bus and has evolved into a standard for 486 and compatible systems. The PCI local bus is the standard for all Pentium-based PC designs and offers a more complete command set for communications than the VL bus. However, Intel's Pentium system design has not enhanced the communication capabilities of the traditional expansion bus. Therefore, disk drives on the expansion bus in Pentium systems will perform at the same rate as disk drives on the expansion bus in a 486 system. For this reason, it is almost a necessity for Pentium system designers to put hard disk drives on the local bus to avoid performance bottlenecks. The net result of using either PCI or VL local bus designs is that system transfer rates will exceed those of any ISA, EISA, or Micro Channel system by at least three times.

Quantum's ProDrive LPS 270/340/540 AT products work with either VL or PCI local bus designs. The drives support Mode 3 ATA PIO (programmed input/output) transfers, which increase data transfer speeds to 11 MB/sec. Consequently, the Local Bus IDE Compatibility these LPS drives support represents nearly a tripling of normal AT-PIO data transfers, which produce only 2 MB/sec. to 4 MB/sec. rates.

SCSI-2 and Fast SCSI-2

Quantum's continued leadership and involvement in setting SCSI standards led to the development of the new SCSI-2 standard, which has an improved command set for data transfer operations and allows for controlling the interface in firmware rather than in software. Quantum put the SCSI-2 protocol enhancements to work and leveraged the design of our very high-end, top-performance drive lines to implement Fast SCSI-2 support in its ProDrive LPS 270/340/540 products for mid- to high-end desktop computer systems. To maximize the advantages of SCSI-2 technology, the Quantum ASIC design team developed custom SCSI-2 controller chips and brought products with SCSI-2 support to market before any other disk drive company.

Quantum's Fast SCSI-2 provides data transfers at 10 MB/sec., twice the SCSI-1 protocol data transfer rate. In addition, Fast SCSI-2 enhances data integrity by improving the data signal quality at faster transfer rates through active termination, in which a voltage regulator drives a constant voltage along the bus. The result of active termination is that the data signal remains constant over the entire length of the bus transfer.

Advanced Embedded Servo Technology

ProDrive LPS 270/340/540 products efficiently utilize the disk surface to achieve a higher areal density using proprietary Advanced Embedded Servo Technology. Traditionally, embedded servos for disk head positioning were difficult to implement in drives with multiple data zones: the varying number of sectors on tracks in different zones vastly complicates the task of the drive's servo electronics during data reads. The lack of mature, off-the-shelf chips, capable of handling embedded servo information, further complicates matters for drive designers. Quantum solved these problems by developing proprietary servo feedback and controller ASICs, which intersperse servo control information within each sector's user data, as shown in Figure 5. The drive's sophisticated custom controller electronics efficiently handle the task of separating the servo information from the data.


Although Quantum previously has implemented this technology in drives of greater capacities, this is the first time we have incorporated Advanced Embedded Servo Technology in drives of this capacity. The increased areal density provided by Advanced Embedded Servo means data can be packed more tightly on the disk surface, resulting in shorter seek times and rotational latency.

AutoTransfer ASIC Technology

In IDE-AT systems, system interrupts occur repeatedly during data transfer in response to a host read or write request. More specifically, regardless of the total data transfer size, the drive generates a new interrupt for each sector (512 bytes) of data transferred. Each system interrupt causes tasks to be completed by both the drive and the host CPU before the transfer of data can begin:

  • At the drive, all of the status registers must be set up before the interrupt is generated.

  • At the host, the interrupt must be identified and processed before the transfer can take place.

Quantum's AutoTransfer ASIC Technology resets the status registers almost instantaneously - nearly eliminating the command overhead required to complete each sector data transfer. In previous product generations, the status registers were reset through firmware, creating a delay of up to 400 microseconds (µs).

As shown in Figure 6, AutoTransfer ASIC Technology dramatically decreases host overhead delays per sector of data transferred, so the drive can transfer each sector of data more than 60% faster. System performance increases because the CPU can process additional requests faster, and the wait states for command overhead are decreased.
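A rough calculation shows the scale of the saving on a large transfer. The 400 µs figure comes from the text above; the near-zero ASIC figure is an assumption used purely for illustration:

```python
SECTOR_BYTES = 512
FIRMWARE_OVERHEAD_US = 400                    # per-sector delay in older firmware

def overhead_ms(transfer_bytes, per_sector_us):
    """Total per-sector command overhead, in milliseconds, for one transfer."""
    sectors = transfer_bytes // SECTOR_BYTES
    return sectors * per_sector_us / 1000.0

one_megabyte = 1024 * 1024                    # e.g. a fragment of a multimedia file
print(f"firmware: {overhead_ms(one_megabyte, FIRMWARE_OVERHEAD_US):.1f} ms of overhead")
print(f"ASIC (assumed ~10 us): {overhead_ms(one_megabyte, 10):.1f} ms of overhead")
```

At 2,048 sectors per megabyte, a 400 µs per-sector delay alone costs over 800 ms, which is why eliminating it matters most for the large multimedia and full-motion video transfers mentioned above.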


AutoRead and AutoWrite cache scanning techniques and the increased data transfer speeds of Fast Multiword DMA or Read/Write Multiple firmware complement the performance benefits of the AutoTransfer ASIC Technology. These technologies, working together, significantly improve overall computer system performance in IDE-AT systems.

Error Correction Code (ECC) On-The-Fly

The increasing demand for drives with higher capacities and areal densities requires more powerful error correction schemes. High speed, "on-the-fly" error correction saves precious milliseconds on single burst errors. In disk drives without the capability for error correction on-the-fly, ECC correction of a single burst error requires at least the amount of time it takes for a full disk revolution (about 13 ms) to re-read the sector and apply the ECC. Thus, in drives without ECC on-the-fly, throughput rates decrease substantially.

Quantum uses custom ASICs to implement a state-of-the-art error correction code (ECC) scheme. Quantum has leveraged the design of our very high-end, top-performance drive lines to implement a sophisticated Reed-Solomon error-correction polynomial in its ProDrive LPS 270/340/540 products for mid- to high-end desktop computer systems. The error-correction polynomial preserves high data throughput by correcting single-burst errors of up to 3 bytes per 512 byte sector "on-the-fly." When an error is corrected on-the-fly, the microprocessor handles the correction, and the sequencer continues running unless more than one error occurs in the same sector. The result is that most recoverable errors have no noticeable effect on performance; effectively, they are transparent to the user.

When the ECC ASIC cannot correct an error on-the-fly, it automatically retries with a more rigorous correction algorithm, which enables the correction of double-burst errors of up to 3 bytes each. This feature results in an unrecoverable error rate of less than 1 error in 10^14 bits read.
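To put the quoted error rate in perspective, the following converts 1 error per 10^14 bits into the volume of data read, on average, between unrecoverable errors:

```python
# One unrecoverable error per 1e14 bits read, converted to bytes and gigabytes.
bits_between_errors = 10**14
bytes_between_errors = bits_between_errors // 8
gigabytes = bytes_between_errors / 10**9
print(f"~{gigabytes:,.0f} GB read per unrecoverable error on average")
```

That works out to roughly 12,500 GB read per unrecoverable error - many times the total capacity of the drives discussed here.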

Read-On-Arrival Firmware

Typically, a drive head must "settle" - that is, lateral movement of the actuator must stop - before a read or write operation can begin. While the head must completely settle before a write can occur, Quantum's Read-On-Arrival Firmware lets the drive start a normal read operation before settling is complete. (Then, if a read error caused by settling does occur, it is detected and corrected by advanced ECC algorithms.) This feature can typically cut up to 10% from the average seek time for reads.

Read/Write Multiple Firmware

As previously mentioned in the "AutoTransfer ASIC Technology" section, a system interrupt occurs in an IDE-AT system for each sector (512 bytes) of data transferred. All of this overhead takes away from the primary work of transferring data.

In PCs using PIO system designs, Quantum's Read/Write Multiple Firmware significantly reduces the number of interrupts handled during the processing of an I/O request by allowing the transfer of multiple sectors of data per interrupt. (Read/Write Multiple Firmware is similar to the Fast Multiword DMA but is used in the more common and less expensive PIO, not DMA, system designs.) For each system interrupt avoided through the use of Read/Write Multiple Firmware, valuable CPU time is reclaimed for the primary process of transferring data, and processing overhead is cut substantially.
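The interrupt-count arithmetic can be sketched as follows; the 16-sector block size is an assumed setting chosen for illustration, not a documented drive parameter:

```python
SECTOR_BYTES = 512

def interrupts(transfer_bytes, sectors_per_interrupt=1):
    """Number of interrupts needed to move `transfer_bytes` of data."""
    total_sectors = -(-transfer_bytes // SECTOR_BYTES)        # ceiling division
    return -(-total_sectors // sectors_per_interrupt)

transfer = 64 * 1024                          # a 64 KB read
print(interrupts(transfer))                   # 128 interrupts, one per sector
print(interrupts(transfer, sectors_per_interrupt=16))         # 8 interrupts
```

Cutting 128 interrupts down to 8 for the same transfer returns the interrupt-handling time to the host CPU, which is exactly the overhead reduction the firmware provides.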

In Summary

As the complexity of computer systems, operating systems, and applications software increases, computer system components face ever-increasing performance demands. Not only should each component perform efficiently and avoid becoming a system bottleneck, it also should contribute to enhancing overall computer system performance. Disk drives are no exception.

Quantum continues to provide industry-leading technological advancements for the disk drive industry. Driven by the philosophy of Intelligent Performance, Quantum favors the further development of built-in drive software and custom circuitry (ASICs) to increase performance, without increasing drive cost or power consumption. The most recent enhancements for the ProDrive 270/340/540 product line include Adaptive Segmentation and AutoTransfer ASIC Technology. In addition, Quantum has ported features previously implemented in higher-end workstation products, such as the AutoRead and AutoWrite ASIC, ECC on-the-fly, Fast Multiword DMA, and Fast SCSI-2, to disk drive products for the mid- to high-end desktop computer system.

At Quantum, we take our commitment to providing the highest possible performance disk drives at competitive prices very seriously. That commitment shows in our ProDrive LPS 270/340/540 products that deliver high performance - through advanced technological design - and maximum value to the customer needing a disk drive for mid- to high-end desktop computer systems.
