Ultra 160 SCSI enters the limelight

Posted on June 01, 2000


By Jack Fegreus

For the past five years, hard-disk-drive (HDD) vendors have consistently made impressive performance improvements by increasing both the areal density and the rotational speed of drives. When drive manufacturers implement both techniques simultaneously, there is a corresponding compound increase in the rate at which data comes off the drive. As a result, high-end drives are starting to sport read-channel rates pushing 500Mbps, and the race is on to deliver the first Gbps channels.

This growth in internal drive-data rates in turn puts pressure on the storage-systems designer. A host bus adapter (HBA) should not be in the position of being overrun by a single drive, nor should it be limited in the number of drives it can support. With scalability an increasingly hot button for IT, the heuristic for storage-bus design is four times the maximum sustainable data rate of a single drive. With internal data rates of 300Mbps to 400Mbps fairly common, the top end of sustainable drive throughput is now typically in the neighborhood of 40MBps. That calculation puts the desired I/O bus speed at 160MBps.
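That sizing arithmetic reduces to a one-line rule of thumb. A minimal sketch, noting that the 4x headroom factor is this article's heuristic rather than any formal standard:

```python
def required_bus_speed_mbps(drive_throughput_mbps, headroom_factor=4):
    """Storage-bus sizing heuristic: the bus should sustain roughly
    four times the maximum sustainable data rate of a single drive."""
    return drive_throughput_mbps * headroom_factor

# A drive sustaining ~40MBps calls for a 160MBps bus.
print(required_bus_speed_mbps(40))
```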

Figure 1: In all but Ultra160 SCSI implementations, the REQ signal operates at twice the frequency of the data signal. Under the double-edge clocking scheme of Ultra160 SCSI, the REQ and data-signal clocks run at the same frequency, but are skewed in order to use both the leading and trailing edge of the REQ signal.

As a result, there is an immediate need for SCSI drives and HBAs that support throughput of 160MBps. This is the logic driving the introduction of Ultra160 SCSI, which doubles the 80MBps throughput of the Ultra2 SCSI standard. HDD and HBA manufacturers have brought new SCSI products to market quickly by focusing strictly on the data-transfer aspects of the Ultra3 SCSI spec (double-transition clocking, cyclic redundancy checking (CRC), and domain validation) and tabling its more complex features, such as packetized command traffic and quick arbitration, which deal with the transfer of commands, messages, and status information.

For our first examination of Ultra160 SCSI, we examined four Quantum Atlas V 18.3GB drives and four Seagate Cheetah 18.4GB drives, as well as an Adaptec 29160 controller and a QLogic QLA12160 controller. While all of these devices occupy the high-end sector of the Ultra160 SCSI market, they also demonstrate an incredible range of performance diversity.

Clocking edge

The attention-grabber for Ultra160 SCSI is a well-known technique known as double-transition clocking; this technique is already used on Ultra ATA-66 drives. In previous progressions, from Fast Wide SCSI (20MBps), to Ultra SCSI (40MBps), to Ultra2 SCSI (80MBps), each doubling of the data-transfer rate was achieved by doubling the bus clock speed.

In the double-transition clocking scheme, often called dual-edge clocking because both the leading edge and the trailing edge of the REQ (request) signal are used to clock data-line sampling, two bits of data are sent per clock cycle instead of one.

With SCSI, data is transferred in synchronous mode, while all command, status, and message transfers are made in asynchronous mode, which is usually limited to 3MBps to 7MBps, depending on cable distance. In all earlier SCSI implementations, data sampling occurs only with the trailing edge of the REQ signal, which operates at twice the frequency of the data signal.


Figure 2: In all sequential streaming throughput tests, performance results with both the Adaptec 29160 and QLogic QLA12160 controllers varied by only a few percent. For reads of less than 16KB-most applications use 8KB reads, while many system utilities use 64KB reads-the 7,200rpm Quantum Atlas V exhibited a slightly higher throughput than the 10,000rpm Seagate Cheetah drives. For large 64KB reads, however, the Seagate drives displayed a distinct advantage: Maximal sustained throughput is achieved at these large data transfers and the controller scaled linearly for four drives.

Under the Ultra160 SCSI standard, the data-line clock speed is raised to equal the frequency of the REQ line, while the fundamental frequency on the cable remains the same. By skewing the REQ and data signals, both the leading and trailing edges of the REQ signal can be used to sample the data. As a result, it is possible to double the maximum data-transfer rate from 80MBps to 160MBps for Ultra160 SCSI while maintaining the same SCSI bus clock rate as Ultra2 SCSI. Moreover, Ultra160 can use the same Low Voltage Differential (LVD) SCSI cables, cable lengths, connectors, connector spacing, terminators, backplane designs, and devices as Ultra2 SCSI.
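The arithmetic can be restated in a few lines. This sketch assumes a 16-bit (2-byte) wide bus and a 40MHz REQ frequency, which are the figures implied by the 80MBps and 160MBps rates above:

```python
def scsi_throughput_mbps(bus_width_bytes, req_clock_mhz, edges_sampled):
    # One bus-wide transfer occurs on each sampled edge of REQ.
    return bus_width_bytes * req_clock_mhz * edges_sampled

ultra2 = scsi_throughput_mbps(2, 40, 1)    # trailing edge only: 80MBps
ultra160 = scsi_throughput_mbps(2, 40, 2)  # both edges: 160MBps
print(ultra2, ultra160)
```

Doubling the sampled edges doubles throughput without raising the cable's fundamental frequency, which is why the Ultra2 cabling rules carry over unchanged.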

Doubling the data- and parity-line frequency, however, also increases the likelihood of increased error rates. To ensure data integrity, especially during hot-plug operations, Ultra160 replaces simple parity checking with CRC on data-transfer operations. Because the CRC calculation is applied to the entire block of data being transferred, it is significantly more powerful than a simple parity check, which applies only to a single byte. As a result, CRC can detect the following:

  • Single-bit errors
  • Double-bit errors
  • Odd numbers of errors
  • Error bursts up to 32 bits long
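A small illustration of the difference: flipping two bits inside one byte leaves that byte's parity unchanged, but a CRC over the whole block still flags the corruption. Python's `zlib.crc32` stands in here for the bus-level CRC; the exact polynomial used on the SCSI bus is defined by the spec, so treat this as an analogy rather than the wire format:

```python
import zlib

data = bytearray(b"Ultra160 SCSI data block")
parity = [bin(b).count("1") % 2 for b in data]  # per-byte even parity
crc = zlib.crc32(bytes(data))

corrupted = bytearray(data)
corrupted[0] ^= 0b00000101  # flip two bits within the same byte

# Parity is computed per byte, so a double-bit error slips through...
assert [bin(b).count("1") % 2 for b in corrupted] == parity
# ...while the block-wide CRC detects it.
assert zlib.crc32(bytes(corrupted)) != crc
```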

The addition of CRC is also essential for the SCSI roadmap, which calls for doubling the bus speed again when Ultra320 SCSI is introduced some time next year.

Figure 3: In testing streaming read throughput on software-based (Windows 2000 Advanced Server) RAID volumes, we discovered that the operating system takes a major performance hit on RAID-5 configurations. A RAID-0 volume had virtually identical performance to a JBOD array with the same number of drives. A RAID-5 volume, however, exhibited a performance hit of approximately 40%.

Nonetheless, a quick calculation for today's 32-bit PCI HBAs plugged into host buses clocked at 33MHz yields a maximum throughput of only 132MBps, short of the 160MBps target. To overcome this limitation, high-end Ultra160 SCSI HBAs come in a 64-bit PCI form factor. In addition, these devices are able to function on host buses clocked at either 33MHz or 66MHz. We conducted our Ultra160 SCSI tests on a Dell PowerEdge 2400 server, which is configured with a 64-bit PCI bus clocked at 33MHz.
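The PCI arithmetic is simply bus width in bytes times clock rate. These are theoretical ceilings; real PCI buses deliver somewhat less after arbitration overhead:

```python
def pci_peak_mbps(bus_width_bits, clock_mhz):
    # Peak PCI transfer rate: bytes per cycle times cycles per microsecond.
    return (bus_width_bits // 8) * clock_mhz

print(pci_peak_mbps(32, 33))  # 132MBps: below the 160MBps Ultra160 rate
print(pci_peak_mbps(64, 33))  # 264MBps
print(pci_peak_mbps(64, 66))  # 528MBps
```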

Configuration flexibility

The transparency of the double-transition clocking scheme creates an interesting problem: Ultra160 SCSI drives are electrically indistinguishable from Ultra2 SCSI devices.

At boot time, older SCSI HBAs negotiate with each device on the bus to determine the transfer speed of the devices. This negotiation, however, is done asynchronously at about 5MBps, so the HBA must assume that the negotiated speed will work. If the device cannot actually operate at the negotiated speed, it simply disappears from the bus.


Figure 4: While throughput on streaming reads was virtually identical for the Quantum Atlas V and Seagate Cheetah Ultra160 SCSI drives, this was not the case when we measured the sustainable I/O load in read requests per second. Running the Nova Technica Load benchmark, the Seagate Cheetah drives supported an I/O request load 50% greater than the load the Quantum Atlas V drives supported. In addition, these tests also revealed a 30% decline in performance for a software-based RAID-5 volume with respect to a RAID-0 volume.

For its part, Ultra160 SCSI goes beyond this simple negotiation function by introducing what has been dubbed "SCSI domain validation." At a bus reset, particularly during the initial boot-up, an Ultra160 SCSI HBA will test a negotiated speed to determine if it will work. These host controllers use a new mode of the "Write Data Modified" command to transmit data to a device at its negotiated data transfer rate.

The SCSI device holds that data until it receives a "Read Data Modified" command from the host controller. When it receives the "Read Data Modified" command, it sends back the data previously received. When the host controller receives the identical data that it sent, it tags the negotiated speed as usable.

If the data is corrupted or the bus experiences a bus hang or a CRC error, then the negotiated speed is considered unusable. The controller then immediately renegotiates using different parameters until all possibilities are exhausted or until a connection is successfully created.
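The validation loop described above can be sketched as follows. The `ToyDevice` model and the rate table are purely illustrative stand-ins for the real "Write Data Modified"/"Read Data Modified" exchange:

```python
def validate_domain(device, rates_mbps=(160, 80, 40)):
    """Try each negotiated rate, fastest first, until a loopback
    of test data comes back intact."""
    pattern = bytes(range(256))
    for rate in rates_mbps:
        device.write_data_modified(pattern, rate)  # host sends test data
        if device.read_data_modified(rate) == pattern:
            return rate                            # echo matched: usable
    return None                                    # no usable rate found

class ToyDevice:
    """Hypothetical device that transfers cleanly only up to max_rate."""
    def __init__(self, max_rate):
        self.max_rate, self.buf = max_rate, b""
    def write_data_modified(self, data, rate):
        # Model corruption when driven faster than the device can handle.
        self.buf = data if rate <= self.max_rate else bytes(len(data))
    def read_data_modified(self, rate):
        return self.buf

print(validate_domain(ToyDevice(80)))  # falls back from 160 to 80
```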

Adaptec has also added a proprietary technology dubbed SpeedFlex to its Ultra160 HBA. SpeedFlex creates two electronically isolated SCSI bus segments to provide a means for connecting older single-ended SCSI devices to the HBA in addition to Ultra160 and Ultra2 SCSI devices, which require LVD buses. Directly attaching a single-ended SCSI device to an LVD bus degrades the performance of the bus and all attached LVD devices.

Real performance

In our tests, the maximum streaming throughput rate for either a Quantum Atlas V (7,200rpm) or Seagate Cheetah (10,000rpm) Ultra160 SCSI drive was comparable to that of a 10,000rpm Ultra2 SCSI drive. Peak sustainable throughput for a single drive averaged about 35MBps. With four independent Seagate Cheetah drives functioning simultaneously, peak throughput was 128MBps with the Adaptec 29160 and 126MBps with the QLogic QLA12160.

The importance of this throughput headroom can be seen when software-based RAID schemes are placed into the performance equation. Moreover, the throughput capabilities of all the tested drive and controller combinations provided an excellent way to probe the efficiency of software-based RAID. In all of our tests, we used Windows 2000 Advanced Server as our operating system test bed.

We fully expected that both RAID-0 and RAID-5 configurations would have similar read throughput as a comparable JBOD configuration. This proved to be consistently the case for RAID 0, which simply stripes the data over all of the drives in the set. RAID 5, which adds parity data, proved to be a very different story, however.

Once again, with both controllers and both sets of drives, streaming throughput performance parameters were similar among all of the RAID-5 implementations. Nonetheless, these profiles diverged significantly from both JBOD and RAID-0 performance. Except for very small (2KB and 4KB) data transfers, throughput degraded by upwards of 40%. Such degradation would be expected on writes with RAID 5 because of the added overhead of handling the parity bits, but finding this level of consistent degradation during reads was quite surprising.

Figure 5: Only with the Seagate Cheetah drives were we able to detect a distinct performance difference between the QLogic and Adaptec HBAs. In I/O stress testing, the QLogic HBA was able to push out roughly 25% more I/Os per second than the Adaptec HBA, which had not saturated the Cheetah drives.

Equally interesting was the near-parity in the streaming throughput between the Quantum and Seagate drives given the difference in rotational speed. We consistently measured slightly higher throughput with the Quantum drives for data transfers less than 16KB and slightly higher throughput with the Seagate drives for data transfers greater than 16KB. A closer look at the underlying architecture of these drives helps explain both the similarity in streaming throughput and the difference in sustainable I/Os per second between these two drives: Platters on the Quantum Atlas V sport an areal density of 6.8 GB/in2, as opposed to 6.3 GB/in2 for the Seagate Cheetahs.

In addition, the platters in the Seagate drive have a smaller diameter. In fact, the Seagate drive uses three platters and six heads, while the Quantum drive uses two platters and four heads. So, while the Quantum drive spins at only 7,200 rpm, the higher bits per inch and longer track length compensate for the lower rotational speed when streaming sequential I/O.
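The compensation can be approximated with a simple media-rate model: bits per inch times track length gives bits per revolution, and spindle speed converts that to a data rate. The densities and track lengths below are invented for illustration and are not these drives' actual specifications:

```python
def media_rate_mbps(bits_per_inch, track_inches, rpm):
    # Bits passing under the head each second, expressed in MBps.
    bits_per_rev = bits_per_inch * track_inches
    return bits_per_rev * (rpm / 60) / 8 / 1e6

# Hypothetical figures: slower spindle but denser, longer tracks...
atlas_like = media_rate_mbps(400_000, 9.4, 7_200)
# ...versus a faster spindle with shorter tracks on smaller platters.
cheetah_like = media_rate_mbps(380_000, 7.5, 10_000)
print(round(atlas_like, 1), round(cheetah_like, 1))
```

With numbers in this neighborhood, the two designs land within a few MBps of each other on sequential streaming, which matches the near-parity we measured.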

The lower rotational speed also helps account for the lower acoustic noise from the Quantum drives. At idle, the Quantum drive is rated at 3.3Bels, while the Seagate is rated at 3.8Bels. This makes the Atlas V drives very desirable for high-end workstation applications such as multimedia and image editing. On the other hand, the lower mass of the platters in the 10,000rpm Seagate drive pegs the idle power rating for these drives at only 8.5W, as compared to 8.7W for the Atlas V drive.

Still, it is not until the actuator arms are stressed that these design differences manifest as significant performance differentials. When we ran the Nova Technica Load benchmark, which stresses the number of read requests per second that the storage subsystem can sustain (read sizes are normally distributed around 8KB and are split between a hot spot and the entire drive), performance with the Adaptec 29160 controller showed very distinct differences between the Quantum and Seagate drives.

The Seagate Cheetah drives were able to support twice the I/O request load per second that the Quantum Atlas V drives could support. More importantly, software-based RAID-5 performance on Windows 2000 Advanced Server once again showed significant degradation for RAID 5. In this case, performance degradation was on the order of 30%.

Finally, I/O loading uncovered a performance difference between the QLogic QLA12160 HBA and the Adaptec 29160. With a RAID-0 configuration using the Seagate Cheetah disks, the QLogic controller was able to push approximately 25% more I/O requests onto the Seagate drives.
