This year marks the 20th anniversary of the publication of the paper that launched the RAID revolution in the storage industry. It’s rare that a technology development in this industry truly qualifies as revolutionary, but RAID fits the bill.

Over the years, there haven’t been many radical departures from the fundamental concept of RAID, although the original RAID configurations, or levels, have mushroomed well beyond the traditional RAID 0, 1, and 5 into RAID 1E (aka striped mirroring, enhanced mirroring, or hybrid mirroring), 3, 6 (dual-parity RAID), and hybrid configurations such as RAID 10, 50, 60, and others.

In the context of the popular Serial ATA (SATA) disk drives, RAID 6 has emerged as an important configuration. This is due in part to the high capacity and perceived reliability drawbacks of SATA drives. Virtually all RAID controller vendors have implemented RAID 6, albeit in different ways.

Although RAID 6 is typically described as the ability to protect against two simultaneous drive failures, that description is somewhat misleading because simultaneous drive failures, even with SATA, are extremely unlikely. More accurately, RAID 6 protects against the failure of, or an error on, Drive B while failed Drive A is being rebuilt. And since the higher the capacity of a disk drive, the longer the rebuild time, RAID-6 protection becomes most important in the case of SATA drives, which can store up to 1TB. Furthermore, the larger the drive, the more potential errors.
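To make that rebuild-window risk concrete, here is a minimal back-of-the-envelope sketch (in Python) of the chance of hitting an unrecoverable read error while rebuilding a degraded RAID-5 set. The 1-in-10^14-bit error rate and the 7+1 drive geometry are assumptions typical of SATA arrays of this class, not figures from the article or any particular vendor.

```python
# Back-of-the-envelope estimate of the chance of hitting an unrecoverable
# read error (URE) while rebuilding a degraded array -- the exposure window
# that RAID 6 is designed to cover. The 1-in-10^14-bit URE rate is an
# assumed, typical SATA spec, not a figure from the article.

def rebuild_ure_probability(drive_tb, surviving_drives, ure_per_bit=1e-14):
    """Probability of at least one URE while reading every surviving drive."""
    bits_read = surviving_drives * drive_tb * 1e12 * 8  # decimal TB -> bits
    p_clean = (1.0 - ure_per_bit) ** bits_read           # chance of no error
    return 1.0 - p_clean

# A 7+1 RAID-5 set of 1TB SATA drives: rebuilding means reading 7TB cleanly.
print(f"1TB drives: {rebuild_ure_probability(1.0, 7):.0%} chance of a URE")
# The same geometry with 250GB drives is far less exposed.
print(f"250GB drives: {rebuild_ure_probability(0.25, 7):.0%} chance of a URE")
```

Under those assumptions the larger drives roughly triple the odds of stumbling on an error before the rebuild completes, which is exactly the window a second parity stripe covers.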

To date, the use of RAID 6 has not been widespread, largely because of the write penalty associated with it. But most controller manufacturers claim to have minimized the RAID-6 write penalty. For example, Scott Cleland, director of marketing at AMCC, claims that with AMCC’s controllers there is only a 5% to 7% increase in the write penalty when you move from RAID 5 to RAID 6, which is attributable to the company’s simultaneous parity calculations. In contrast, some RAID-6 implementations exact a 20% to 30% penalty on write operations. However, it should be noted that in many of the applications in which SATA drives are used (e.g., nearline or secondary storage, disk-based backup, etc.) a write penalty of even 30% may not be a significant drawback.
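For context on where that penalty comes from, the sketch below counts the textbook worst-case disk I/Os per small random write for RAID 5 versus RAID 6. These are idealized read-modify-write figures, not any specific controller's measured behavior, and the closing comment about one-pass parity is a general observation rather than a description of AMCC's implementation.

```python
# Classic small-write I/O counts: each random write that doesn't span a full
# stripe turns into a read-modify-write of the parity block(s). These are the
# textbook worst-case figures, not any specific controller's behavior.

RAID5_IOS_PER_WRITE = 4   # read old data, read P, write new data, write P
RAID6_IOS_PER_WRITE = 6   # as above, plus a read and write of the Q parity

extra = RAID6_IOS_PER_WRITE / RAID5_IOS_PER_WRITE - 1
print(f"Naive RAID-6 small-write overhead vs RAID 5: {extra:.0%}")  # 50%
# Computing P and Q in one pass and coalescing full-stripe writes are the
# kinds of optimizations that pull that figure down toward the 5% to 30%
# range cited above.
```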

Cleland estimates that less than 5% of RAID users are taking advantage of RAID 6 today, but that “RAID 6 should become the new RAID 5 once users see the benefits, because you should be able to get virtually the same performance as RAID 5 with double the data protection.”

Suresh Paniker, director of worldwide marketing at Adaptec, estimates that 20% or fewer of Adaptec’s SATA customers are taking advantage of RAID 6 today, but that use of RAID 6 will pick up because the associated write penalty is no longer an issue.

Mix ’n’ match SAS, SATA

RAID 6 may be important for any disk array populated with SATA drives, but it becomes even more important as end users and integrators increasingly put SATA drives in SAS enclosures (sometimes mixing the two drive types) and deploy them in mission-critical applications where drive/array reliability is critical.

The ability to put relatively expensive, high-performance SAS drives in the same enclosure as low-cost, high-capacity SATA drives is often touted as the key advantage of SAS. However, just because you can intermix drive types doesn’t mean you should.

Users and systems/storage integrators experienced a number of problems when trying to intermix the two drive types in early implementations of SAS subsystems. Some of those problems were related to vibrations from high-speed 15,000rpm SAS drives causing errors on, or failures of, 7,200rpm SATA drives, as well as performance degradation on the SATA drives.

Controller and subsystem vendors are addressing those issues with recommendations regarding how users and integrators should arrange the drives (for example, putting drive types in their own vertical columns). Another recommendation from some vendors is to put each drive type behind a different controller.

“We recommend putting drive types in their own columns,” says Jerry Hoetger, director of product management, RAID, at Xyratex. “Our analysis found that you can’t stack unlike devices because the differences in vibration characteristics between 15,000rpm SAS drives and 7,200rpm SATA drives can cause performance problems in a RAID scenario.” In fact, in internal testing, Xyratex discovered an 80% performance degradation in a SATA drive that was positioned between two SAS drives.

However, Hoetger says that, assuming you follow vendors’ configuration guidelines, you shouldn’t have any problems mixing SAS and SATA drives in the same enclosure. He estimates that 20% to 40% of Xyratex’s SAS subsystems are configured with mixed drive types.

“Very few users are mixing drive types in the same enclosure, but that’s largely because most users aren’t using SAS drives yet,” says Alan Johnson, director of marketing at Infortrend. “It’s much more common for users to run mission-critical applications on a RAID array with SAS drives, and attach it to JBOD arrays with SATA drives for applications such as virtual tape backup.”

Infortrend allows intermixing SAS and SATA drives with its controllers and enclosures as long as the drive types are arranged in different vertical columns.

Adaptec’s Paniker notes that intermixing drive types in the same enclosure is very rare today, but that the practice will ramp up as SAS drives become less expensive. (In the channel, SAS drives can be 3x to 5x more expensive than SATA drives.)

But intermixing drive types is not without controversy, and some vendors are vehemently against it. “You should not mix SAS and SATA drives in the same enclosure,” states AMCC’s Cleland. “Some RAID vendors are doing an injustice to SAS by selling SAS as a SATA controller and trying to push so-called ‘unified storage.’ There aren’t many applications that will benefit from mixing SAS and SATA drives in the same enclosure.” Cleland says that virtually none of AMCC’s controllers are configured with SAS and SATA drives. “Right now, we won’t allow both drive types to be in the same array,” he notes.

Luca Bert, director of architecture and strategy for DAS storage at LSI, says, “You should not mix SAS and SATA in the same logical drive, or virtual disk. You need to create separate virtual disks.” He also recommends keeping the different drive types as physically separate as possible, ideally in different enclosures, to avoid the rotational vibration problems.

“Mixing SAS and SATA drives is definitely the wave of the future, and we haven’t run into any problems, but we do encourage users to put the different drive types in different enclosures,” says Mike Joyce, senior director of marketing at Promise Technology.

Controller and subsystem vendors are also addressing the RAID reliability issue with software that increases reliability by proactively scanning and monitoring disks and detecting, and potentially fixing, errors before they lead to drive failures. Xyratex, for example, uses a “predictive drive cloning” feature that clones a drive that may have errors on it.

Despite the controversy and caveats associated with mixing drive types in the same enclosure, clear-cut benefits exist. Mixing SAS and SATA drives enables in-the-box tiered storage, where users can put frequently accessed data on high-speed SAS drives and infrequently accessed data on lower-cost, lower-performance SATA drives. Although this may require manual migration of data in some instances, a variety of vendors have software that automates the migration of data between drive tiers based on administrator-defined policies keyed to, say, access frequency or file/data types (a simple illustration of such a policy appears below).

However, intermixing SAS and SATA drives in the same enclosure is still rare among end users. But certain vertical markets have more of a need to intermix drive types, and adoption of the practice in those markets is picking up. One example is the entertainment market and applications such as collaborative, real-time editing of graphics, animation, and special effects, which can require a combination of high-speed disk access and very high capacities for storing inactive files or data.
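As a rough illustration of what such administrator-defined tiering policies might look like, the following sketch promotes frequently accessed files to a SAS tier and demotes cold files to SATA. The thresholds, field names, and plan_migrations() helper are hypothetical and not modeled on any vendor's software.

```python
# A toy illustration of administrator-defined tiering policy: promote hot
# files to the SAS tier, demote cold ones to SATA. Field names, thresholds,
# and the plan_migrations() helper are hypothetical, not any vendor's API.
from dataclasses import dataclass

@dataclass
class FileRecord:
    path: str
    tier: str               # "sas" or "sata"
    accesses_last_30d: int

HOT_THRESHOLD = 50          # promote above this many accesses per month
COLD_THRESHOLD = 5          # demote below this

def plan_migrations(files):
    """Return (path, target_tier) pairs the policy engine would act on."""
    moves = []
    for f in files:
        if f.tier == "sata" and f.accesses_last_30d >= HOT_THRESHOLD:
            moves.append((f.path, "sas"))
        elif f.tier == "sas" and f.accesses_last_30d <= COLD_THRESHOLD:
            moves.append((f.path, "sata"))
    return moves

catalog = [FileRecord("/video/project.mov", "sata", 120),
           FileRecord("/archive/2006_q3.tar", "sas", 1)]
print(plan_migrations(catalog))
# [('/video/project.mov', 'sas'), ('/archive/2006_q3.tar', 'sata')]
```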

In these markets, some vendors are going beyond two-tier schemes (e.g., SAS and SATA drives) into three-tier configurations. For example, at last month’s Siggraph show, Apace Systems introduced its fxStor RAID-5 disk arrays, which allow users to mix 15,000rpm SAS drives, 7,200rpm SATA drives, and very high-speed solid-state disks, or SSDs. (Specifically, fxStor arrays include 32GB of DRAM configured as SSDs, or RAM disks.) A combination of firmware and software manages migration of data among the three tiers.

Apace’s fxStor NAS arrays come with Gigabit Ethernet interfaces that can be upgraded to 10Gbps Ethernet via cards from vendors such as Chelsio. Two pre-configured models, both of which are based on enclosures from AIC, are available:

  • The 3U FX3000-3U6T (based on AIC’s RSC-3E chassis) includes four 147GB SAS drives (588GB total capacity), 12 500GB SATA drives (6TB), 32GB of RAM, two dual-core 64-bit CPUs, and four Gigabit Ethernet ports in a RAID-5 configuration.
  • The 4U FX3000-4U8T (based on AIC’s RSC-4E enclosure) includes eight 147GB SAS drives (1.2TB), 16 500GB SATA drives (8TB), 32GB of RAM, two dual-core Opteron CPUs, and six Gigabit Ethernet ports in a RAID-5 configuration.

Software-based RAID

At the lower end of the RAID market (at least for now), one intriguing trend is a newfound interest in software-based RAID. Software RAID isn’t new, but has until recently been plagued by high consumption of host CPU cycles and performance drawbacks. However, the advent of multi-core processors has fueled renewed interest in software-based RAID.

Ciprico, which purchased RAID stack vendor RAIDcore, is a particularly vocal proponent of software-based RAID.

“The main driver behind software-based RAID is multi-core CPUs,” says Andy Mills, Ciprico’s senior vice president of marketing and development. “System performance is so high now that you can easily do software RAID without negatively impacting applications.” Software-based RAID takes advantage of under-utilized CPU cycles rather than relying on dedicated RAID hardware.
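The parity math itself helps explain why spare cores can absorb the work: RAID-5 parity is simply a byte-wise XOR across the data blocks in a stripe. The sketch below is a minimal illustration of that calculation and of regenerating a lost block, not Ciprico's (or anyone else's) actual RAID stack.

```python
# RAID-5-style parity in pure software: XOR every data block in a stripe.
# Losing any one block, the XOR of the parity with the survivors rebuilds it.
# A minimal sketch of the principle, not a production RAID implementation.

def xor_parity(blocks):
    """Compute the parity block for one stripe of equal-sized data blocks."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

def rebuild_missing(surviving_blocks, parity):
    """XOR of the parity with the surviving blocks regenerates the lost one."""
    return xor_parity(list(surviving_blocks) + [parity])

stripe = [b"\x11" * 4, b"\x22" * 4, b"\x44" * 4]    # three data blocks
p = xor_parity(stripe)
assert rebuild_missing(stripe[1:], p) == stripe[0]  # recover the "failed" block
```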

Ciprico sells software RAID that only requires one of the company’s I/O cards, as opposed to a dedicated hardware RAID controller. Ciprico admits that software RAID takes up more CPU cycles (anywhere from 5% to 30%, according to company officials) than hardware RAID, and hardware RAID generally provides higher performance. However, Ciprico claims that in some internal tests they have demonstrated higher performance than hardware RAID, particularly with I/O loads of very small file sizes.

Other vendors argue that software-based RAID is only for low-end platforms and applications. “Software RAID doesn’t make any sense beyond performance workstations or perhaps entry-level servers,” says Promise Technology’s Joyce. (Promise offers software-based RAID as well as hardware RAID controllers and subsystems.)

Regardless of whether you use RAID 5 or RAID 6, intermix SATA and SAS drives, or opt for software- or hardware-based RAID, SAS itself may be the hottest trend in the RAID market, with SAS drive shipments exceeding early predictions. For example, International Data Corp. (IDC) predicts SAS will account for 15.7% of all enterprise (i.e., non-desktop) disk drive shipments this year, rising to 23.2% next year and 25.9% in 2009 (see figure on p. 18).

Estimates from Gartner Dataquest are even more optimistic for SAS. For example, Gartner predicts that SAS drives will account for 16.4% of all multi-user drive shipments this year, 41.2% next year, and 44.9% in 2009 (see figure on p. 21).

In an InfoStor QuickVote reader poll (which included end users, VARs, and integrators), 35% said that Fibre Channel will account for the majority of their enterprise drive purchases over the next year, followed by SATA (32%) and SAS (29%). Only 4% cited parallel SCSI (see figure).

Although there was a slowdown this summer in terms of SAS-specific product introductions, the pace is picking up again, and there are expected to be dozens of SAS-related product announcements at next month’s Storage Networking World (SNW) conference in Dallas.

For example, Atto Technology is expected to begin shipments next month of two new SAS host bus adapters (HBAs) in its ExpressSAS line of adapters. Targeted at high-performance applications, the ExpressSAS H380 and H308 are based on Intel’s IOC340 I/O processor (built on XScale technology) and an x8 PCI Express interface. The H380 has two external x4 MiniSAS SFF8088 connectors, while the H308 has two internal x4 MiniSAS SFF8087 connectors supporting up to 256 devices via SAS expanders. The adapters support both SAS and SATA drives.

This month, Arena Maxtronic rolled out two SAS-SATA subsystems. The JanusRAID2 SS-6651E features an AMCC PPC440SP CPU, dual 4Gbps Fibre Channel host interfaces, 16 SAS or SATA drive bays, 512MB to 2GB of cache memory, support for all RAID levels (including RAID 6), and support for up to three JBOD expansion units. The JanusRAID2 SS-6652E has dual x4 SAS host channels.

Also this month, Promise Technology introduced the VTE610fD and VTE610sD RAID subsystems, as well as the VTJ610sD expansion chassis. The VTE610fD has a Fibre Channel host interface, and the VTE610sD has a SAS host interface. Both 3U, 16-drive subsystems support SAS or SATA disk drives and are part of Promise’s E-Class arrays. Features include support for all RAID levels (including RAID 6), 512MB to 2GB of cache memory, dual controllers, and support for up to four expansion units or 80TB of capacity.

Dynamic Network Factory introduced the SASmaster 12sz, 16sz, and 16sz-HA (high availability via redundant controllers) disk arrays this month. The subsystems have 12 or 16 SAS or SATA drives (including 1TB SATA drives), support for RAID 6, and support for up to three expansion arrays for capacities ranging from 1TB to 64TB. Pricing ranges from $10,000 to $41,000, with expansion arrays starting at $8,000.

Last month, AIC’s Xtore unit began production shipments of three SAS-SATA drive canisters. The XC-23D1-SA10-0-R supports three 3.5-inch drives (up to 2.2TB) and bandwidth up to 900MBps. The entry-level XC-34D1-SA10-0-R and higher-end XC-34D1-SA1C-0-R support four 3.5-inch SAS/SATA drives (up to 3TB) and up to 1.2GBps of bandwidth.

SAS drives are also showing up in the iSCSI market. For example, Qsan last month began shipments of the P200C and S500C RAID controllers. The P200C is an iSCSI-SAS controller with four Gigabit Ethernet host ports, an Intel IOP342 processor, 16 SAS/SATA drives, support for up to four JBOD expansion units (with 80 drives and 128TB of capacity), and support for RAID 6. The company claims performance of 120,000 I/Os per second with 512-byte block sizes, or up to 600MBps throughput.

Qsan’s S500C is a SAS-SAS/SATA RAID controller with an Intel IOP341 CPU, LSI controller, 16 drives, and two x4 SAS host ports. The company claims throughput performance up to 1GBps.

Enhance Technology’s recently introduced UltraStor RS16 SS is a 3U SAS-to-SAS/SATA subsystem with a 64-bit RAID controller, two SAS host ports and one SAS expansion port, up to 1GB of cache, 16 drive bays, support for all RAID levels (including RAID 6), and performance up to 800MBps. The RS16 SS can be configured with 15,000rpm SAS drives and/or 7,200 or 10,000rpm SATA drives.

De-dupe, RAID rebuilds, and multiple drive failures

By Mark Ferelli

In the scramble to conserve disk space and achieve cost-effective capacity management, data de-duplication seems to be a Holy Grail of storage management. However, de-duplication is also a relatively new technology that can potentially pose risks to critical data.

Karen Dutch, general manager of NEC’s Advanced Storage Products Group, points to the fundamental threats of lost backups and frustrated recovery efforts inherent in first-generation de-duplication products.

“Data de-duplication reduces duplicates to where there is one file that never changes…one spot or one disk where the file is stored,” says Dutch. “Should that file or disk be lost, all of the backup images could be un-recoverable.” With more and more primary backups committed to disk arrays, this problem would make disk failure a threat not only to primary transactional data, but also to backed-up business records.

In addition to this single-point-of-failure risk, Dutch also points to degraded performance that de-duplication can cause in data rebuilds. For those businesses that depend on a rigid recovery time objective (RTO) in the event of disk failures, the I/O-intensive nature of rebuilding can significantly impact CPU use.

The commonplace use of RAID in virtual tape libraries (VTLs) and other disk-based backup appliances provides a measure of protection, but RAID has never been a guarantor of rock-solid data availability. RAID 5 fails if there is more than one disk failure inside the same RAID group. RAID 6, growing in popularity due to widespread use of SATA drives, uses double parity for fault protection, but can impose an overhead burden.

Additionally, both RAID 5 and RAID 6 suffer a write penalty due to parity calculations, and the burden of de-duplication overhead may compound the problem.

NEC’s DataRedux de-duplication technology is a part of the company’s recently introduced HYDRAstor architecture. But Dutch suggests that the ideal solution would be one that handles more than two disk failures, suffers no write-performance penalty during rebuild, and occupies no more disk space than current RAID-5 implementations.

With HYDRAstor, NEC addresses this with a recovery assurance technology called Distributed Resilient Data (DRD), which provides flexible layers of parity. “We protect against three or more disk failures, and match the parity to the data,” notes Dutch. After a unique chunk of data is identified by DataRedux, it is broken into a given number of data fragments (the default is nine). HYDRAstor’s DRD feature has a default parity setting of three, protecting against three disk failures in the RAID group, but users can designate as many parity chunks as they require. After the system makes the parity calculation, data is distributed across as many storage nodes as possible in a configuration. Backups can be accomplished during a rebuild without a performance hit, and rebuilds operate in background mode.
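A quick calculation shows the capacity trade-off implied by those defaults. The 3+1 RAID-5 group used for comparison is an assumed example geometry, not a figure from NEC.

```python
# Raw-capacity overhead implied by the DRD defaults described above
# (9 data fragments + 3 parity fragments) versus an assumed 3+1 RAID-5 group.

def overhead(data_fragments, parity_fragments):
    """Raw capacity consumed per byte of user data."""
    return (data_fragments + parity_fragments) / data_fragments

print(f"DRD 9+3 default: {overhead(9, 3):.2f}x raw capacity, tolerates 3 failures")
print(f"RAID-5 3+1 group: {overhead(3, 1):.2f}x raw capacity, tolerates 1 failure")
# Both work out to 1.33x, illustrating how a 9+3 layout can tolerate three
# concurrent disk failures for roughly the space of a small RAID-5 group.
```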
