The Future of SCSI

Posted on May 01, 1999

SCSI is keeping pace with workstation/server performance demands, with 160MBps versions due later this year.

By Mark Delsman

Since its inception in the early 1980s, SCSI has been used to connect disk drives, tape drives, optical devices, scanners, printers, removable disks, and other devices. Demand for the interface continues to increase at about 14% per year, with an estimated 22 million SCSI peripherals to ship in 1999.

Challenged by several peripheral interfaces, such as Fibre Channel, that offer new feature sets and end-user value, SCSI has not stood still. The interface continues to grow and develop because it offers a steady stream of improvements, works with legacy devices, and has proven compatibility across vendors and device types.

The original SCSI interface was an 8-bit bus that supported seven peripherals and transferred data at a maximum of 5MBps. The latest specification, which will be introduced later this year, has a 16-bit bus (15 devices per bus) and a data transfer rate of 160MBps. New reliability features have also been added.

This article looks at SCSI's future over the next four or so years and the role it will continue to play in servers and workstations.

High-end computer systems need increased storage subsystem speed to keep up with the rapidly increasing data transfer rates disk drives are delivering, thanks to higher track densities and faster spin speeds. Historically, these improvements occur at a 35% compounded annual growth rate, which means an individual drive's transfer rate roughly doubles every two years. To prevent data from overrunning the bus (or from limiting the number of drives that can be connected to the bus), bus speeds must also increase at this rate.

So, it's necessary to double the bus speed every two years to keep available bandwidth at four times the data rate of one drive (see chart). This ratio has been a traditional guideline for bus performance. This does not mean that it takes four drives to saturate a bus; other factors such as I/O patterns also need to be considered. What the ratio does suggest, however, is that Ultra2's 80MBps transfer rate may not be sufficient for heavily loaded systems in 1999.
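
Only the 35% growth figure above comes from the article; the starting drive rate in the sketch below is an assumption picked simply to show the arithmetic behind the 4x guideline.

```c
/* Illustrative sketch only: project per-drive transfer rate at a 35%
 * compounded annual growth rate and the bus bandwidth needed to stay at
 * four times one drive's rate. The starting figure is an assumption. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    double drive_mbps = 20.0;   /* assumed per-drive rate (MBps) at year 0 */
    const double cagr = 0.35;   /* 35% compounded annual growth, per the article */

    for (int year = 0; year <= 6; year += 2) {
        double rate = drive_mbps * pow(1.0 + cagr, year);
        printf("year %d: drive ~%.0f MBps -> bus should offer ~%.0f MBps (4x rule)\n",
               year, rate, 4.0 * rate);
    }
    return 0;
}
```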

A second driver of increased bus throughput is faster network I/O, such as Gigabit Ethernet. To support both inbound and outbound data on Gigabit links, a storage subsystem may require more than 200MBps. By improving the speed and width of the PCI bus to 66MHz and 64 bits, such high-speed data movement may be possible.
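
As a rough check on that figure, a full-duplex Gigabit link works out to about 250MBps of combined traffic; the sketch below only spells out the conversion, treating 1Gbps as 125MBps per direction (an illustrative simplification, not a figure from the article).

```c
/* Back-of-the-envelope sketch: bandwidth needed to feed both directions
 * of a Gigabit Ethernet link from the storage subsystem. */
#include <stdio.h>

int main(void)
{
    const double link_gbps    = 1.0;                      /* Gigabit Ethernet */
    const double mbps_per_dir = link_gbps * 1000.0 / 8.0; /* ~125 MBps each way */
    printf("inbound + outbound: %.0f MBps\n", 2.0 * mbps_per_dir);
    return 0;
}
```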

The third trend that drives storage subsystem performance is the rapidly increasing amount of data in corporate databases and from the Internet. All this data makes servers work harder every year. Expanding data storage, and the need to retrieve that data quickly, stresses traditional subsystem connections.

SCSI keeps pace

Can SCSI--or any bus--improve at a rate similar to Moore's law for semiconductors? The SCSI Trade Association (www.scsita.org) recently agreed that the plan to continue doubling SCSI's transfer rate every two years looks reasonable for many years to come. 160MBps SCSI is due this year, 320MBps in 2001, and 640MBps in 2003.

Getting to 160MBps was an easy upgrade, technically speaking. By using double-transition clocking--which changes the state of the data on the rising and falling edge of each clock pulse, or ACK--data can be moved twice as fast over the interface without increasing the overall frequency of the bus (see diagram).
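
A toy model of the idea (not the actual SPI-3 signalling): with the same number of REQ/ACK clock periods, latching data on both edges moves twice as many words.

```c
/* Conceptual sketch of double-transition clocking: with an identical number
 * of clock periods (i.e., the same bus frequency), latching on both the
 * rising and the falling edge doubles the words moved per period. */
#include <stdio.h>

int main(void)
{
    const int clock_periods = 8;      /* same bus frequency in both cases */
    int single = 0, dual = 0;

    for (int p = 0; p < clock_periods; p++) {
        single += 1;                  /* classic SCSI: latch on one edge only */
        dual   += 2;                  /* Ultra160: latch on rising AND falling edge */
    }
    printf("words per %d periods: single-edge=%d, double-edge=%d\n",
           clock_periods, single, dual);
    return 0;
}
```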

To get to the next level (i.e., 320MBps), more sophisticated techniques will be needed to control two attributes of the bus: the skew between the signals and a bus capacitance effect called "intersymbol interference." Skew results when the 16 data lines run through the cable at slightly different rates, causing them to arrive at the receiving end slightly out of step. Circuitry will be required to adjust for this effect, adding slight delays to some of the data lines so that they are clocked into the receiver simultaneously.
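
Conceptually, the compensation amounts to padding each fast line with just enough delay that its total matches the slowest line; the sketch below uses invented per-line delay values purely for illustration.

```c
/* Illustrative sketch of receiver-side skew compensation: give each data
 * line enough added delay that every line's total (propagation + added)
 * delay equals the slowest line's, so all 16 bits can be latched together.
 * The delay values are made up for illustration. */
#include <stdio.h>

#define LINES 16

int main(void)
{
    /* assumed per-line propagation delays through the cable, in nanoseconds */
    double prop_ns[LINES] = { 4.0, 4.3, 4.1, 4.6, 4.2, 4.5, 4.0, 4.4,
                              4.7, 4.1, 4.3, 4.2, 4.6, 4.0, 4.5, 4.2 };
    double slowest = 0.0;
    for (int i = 0; i < LINES; i++)
        if (prop_ns[i] > slowest)
            slowest = prop_ns[i];

    /* add just enough delay to each fast line so all lines align */
    for (int i = 0; i < LINES; i++)
        printf("line %2d: add %.1f ns of delay\n", i, slowest - prop_ns[i]);
    return 0;
}
```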

If a signal is left at a high state for a period of time, it "charges" up the wire much like a capacitor. When the signal returns to a low state, it takes time for that charge to dissipate, slowing the switching time. This "intersymbol interference" means that, once again, a few of the bits on the bus may arrive at the receiver after the others. To make sure this does not happen, the data bits must switch between states on a regular basis so that charge cannot build up. Various data encoding schemes can guarantee regular transitions to avoid this situation.
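
One simple way to guarantee regular transitions is bit stuffing, shown below only as an illustration of the principle; it is not the encoding scheme SCSI adopted.

```c
/* Toy illustration of a run-limiting encoding (bit stuffing, as used by some
 * other serial protocols): after a fixed number of identical bits, a
 * complemented bit is inserted so the line is forced to switch state. */
#include <stdio.h>

#define MAX_RUN 4   /* force a transition after 4 identical bits (assumed limit) */

static void stuff(const int *in, int n)
{
    int last = -1, run = 0;
    for (int i = 0; i < n; i++) {
        printf("%d", in[i]);
        run = (in[i] == last) ? run + 1 : 1;
        last = in[i];
        if (run == MAX_RUN) {          /* break the run with a complemented bit */
            printf("%d", !last);
            last = !last;
            run = 1;
        }
    }
    printf("\n");
}

int main(void)
{
    int bits[] = { 1,1,1,1,1,1,1,0,0,0,0,0,0,1 };
    stuff(bits, (int)(sizeof bits / sizeof bits[0]));
    return 0;
}
```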

Performance can also improve through more efficient protocols across the bus. By the end of the year, SCSI will use packetized protocol, which groups commands, data, and messages into single data transfers. Packetized protocol eliminates slow transitions between phases on the bus and allows commands to move quickly to target devices. This will allow systems to use the bus more efficiently, by making sure more of the bandwidth is used for data and less for management overhead.
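
Conceptually, a packetized transfer bundles what used to be separate bus phases into one unit; the sketch below is illustrative only, and its field names and sizes are assumptions rather than the actual SPI-3 information-unit layout.

```c
/* Illustrative sketch: one "packet" carrying message bytes, the command, and
 * data in a single transfer, instead of separate bus phases for each. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct scsi_packet {
    uint8_t  message[2];   /* e.g., identify/queue-tag message bytes */
    uint8_t  cdb[10];      /* the SCSI command descriptor block */
    uint32_t data_len;     /* length of the data that follows */
    uint8_t  data[512];    /* first data segment, sent in the same transfer */
};

int main(void)
{
    struct scsi_packet pkt;
    memset(&pkt, 0, sizeof pkt);
    pkt.cdb[0] = 0x28;                 /* READ(10) opcode */
    pkt.data_len = sizeof pkt.data;
    printf("one bus transaction carries %zu bytes of message+command+data\n",
           sizeof pkt);
    return 0;
}
```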

SCSI vendors and the SCSI Trade Association realize that while performance is important, it is not the only critical attribute of a computer bus. Other improvements are necessary. For example, Ultra160/m SCSI offers a powerful cyclic redundancy code (CRC) and a communication test called domain validation. These features help protect the data environment as on-the-fly upgrades or fixes are made to the system.

With CRC, extra bytes are transferred with each block of data. The extra bytes are a mathematical code that allows the receiving circuitry to verify that the data is correct. The code used by SCSI is a powerful 32-bit version of CRC that protects against poor connections or unexpected events such as hot-plugging a new drive.
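
The check works the same way on both ends: the sender appends the CRC bytes and the receiver recomputes them over the received block. The sketch below uses the widely known CRC-32 polynomial purely as an illustration; the exact polynomial and procedure SCSI uses are defined in the SPI-3 specification.

```c
/* Minimal sketch of a 32-bit CRC over a data block, using the common CRC-32
 * polynomial (reflected form 0xEDB88320) for illustration only. The receiver
 * recomputes the CRC and compares it with the value appended by the sender. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint32_t crc32_calc(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++)
            crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
    }
    return ~crc;
}

int main(void)
{
    uint8_t block[512];
    memset(block, 0xA5, sizeof block);                 /* pretend data block */

    uint32_t sent = crc32_calc(block, sizeof block);   /* appended by sender   */
    uint32_t recv = crc32_calc(block, sizeof block);   /* recomputed by receiver */

    printf("CRC %08X %s\n", sent, sent == recv ? "OK" : "MISMATCH");
    return 0;
}
```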

Domain validation is a new concept for a subsystem bus and will lead to many other improvements. The idea behind domain validation is to accept a negotiated speed only after a validation test is successfully completed. With Ultra160/m SCSI, this test will be implemented as a simple communication check, similar to that used by modems attempting to connect.

After determining the speed capabilities of the target, the initiator will send out a Write Buffer command to the device. The data transfer will initially occur at full speed. The initiator will then read back the data and check to see if the data compares and whether there are any CRC errors. If the test fails, the initiator will shift down to the next lower speed and repeat the test. In this manner, a compatible speed will be found and locked in before user data transfers begin.
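
In outline, the step-down loop might look like the sketch below; write_buffer, read_buffer, and crc_errors_seen are hypothetical stand-ins for the real initiator/target exchange, and the speed table is illustrative.

```c
/* Sketch of the step-down negotiation described above. Speeds are tried from
 * fastest to slowest until a write/read-back compare passes with no CRC
 * errors. The three helper routines are hypothetical stubs. */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define PATTERN_LEN 128

/* hypothetical stand-ins for the actual Write Buffer / Read Buffer commands */
static bool write_buffer(int mbps, const unsigned char *p, int n) { (void)mbps; (void)p; (void)n; return true; }
static bool read_buffer(int mbps, unsigned char *p, int n)        { (void)mbps; memset(p, 0x5A, n); return true; }
static bool crc_errors_seen(void)                                 { return false; }

int main(void)
{
    const int speeds[] = { 160, 80, 40, 20 };      /* MBps, fastest first */
    unsigned char out[PATTERN_LEN], in[PATTERN_LEN];
    memset(out, 0x5A, sizeof out);                 /* test pattern */

    for (size_t i = 0; i < sizeof speeds / sizeof speeds[0]; i++) {
        write_buffer(speeds[i], out, PATTERN_LEN);
        read_buffer(speeds[i], in, PATTERN_LEN);
        if (memcmp(out, in, PATTERN_LEN) == 0 && !crc_errors_seen()) {
            printf("validated and locked at %d MBps\n", speeds[i]);
            return 0;
        }
        printf("%d MBps failed validation, stepping down\n", speeds[i]);
    }
    printf("no compatible speed found\n");
    return 1;
}
```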

In the future, this feature will be extended to bus margining and more frequent testing to better define the best operating characteristics of a given connection. Management software may also track frequency of errors to determine if adjustments are required.

SCSI's long history of compatibility with a broad range of devices is expected to continue. Speeds will double every other year and other improvements, such as packetized protocol, will keep SCSI competitive.

[Chart] It's necessary to double the bus speed every two years to keep available bandwidth at four times the data rate of one drive.

Mark Delsman is director, advanced technology, for Adaptec Inc., in Milpitas, CA. He is also a director and secretary for the board of directors of the SCSI Trade Association (STA). For more information, visit www.scsita.org.
