Testing SQL 7.0 with various RAID systems

Posted on February 01, 1999



With the rise of data warehousing technologies, many sites need to seek a platform for SQL Server that can provide optimal performance in decision-support applications. In a decision-support scenario, database performance is characterized by complex queries generating random 8KB reads and the building of OLAP cubes that involve large (64KB) sequential reads and writes.

By Jack Fegreus

Version 7.0 of SQL Server will go a long way toward changing the dynamics of how databases are used and deployed by corporate IT. Today, fast access to information no longer provides a competitive advantage: it is a critical success factor. At a time when your next product may be obsolete in a year or your next competitor may pop up on the Web from an office in Kuala Lumpur, keeping on top of business takes on entirely new dimensions.

For line-of-business managers, the new release of SQL Server opens up all of the business advantages previously restricted to those lucky enough to have a sophisticated data warehousing operation up and running. For departmental and divisional business users, version 7.0 of SQL Server begins to commoditize the infrastructure technologies used in large data warehouses.

For the first time, single-subject data marts can be built on a highly accelerated schedule and still provide all of the comprehensive analysis and decision-support capabilities previously found only in a sophisticated corporate data warehouse. What's more, both Web- and Windows-based applications can be built that enable casual users to query a SQL Server database in plain English.

For IT managers and Solutions Providers, SQL Server 7.0 promises to overturn the conventional wisdom surrounding database support requirements. The claim from Redmond is that the database engine is now largely self-configuring and self-tuning. There is an even more extravagant claim that SQL Server is now self-managing. Say what you will about the hype component of that claim, the fact remains that this version of SQL Server takes a long stride toward a self-healing service.

About the only performance parameter that will need to be manually tuned is max async I/O, which sets the maximum number of asynchronous I/O operations that can be outstanding for each file at any time. In essence, this parameter sets the maximum number of I/O requests that a server can issue against a file. The default setting is 32 (32 read and 32 write operations).

This setting is adequate for a basic SCSI or ATA (IDE) disk, which can typically handle about 100 random 8KB read operations per second. Sites serious about database performance, however, will use intelligent I/O subsystems with hardware-based RAID, and in such a configuration the default setting is woefully inadequate.
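The gap between the single-disk default and a multi-spindle RAID set can be sketched with simple arithmetic. The scaling rule below is a hypothetical rule of thumb for illustration, not Microsoft's documented sizing guidance:

```python
# Hypothetical sizing sketch: estimate a max async I/O setting for a
# hardware RAID array, assuming each spindle can absorb roughly the
# same queue of outstanding requests as the single-disk default of 32.
def suggest_max_async_io(spindles, per_disk_depth=32):
    """Scale the per-file outstanding I/O limit with the number of drives."""
    return spindles * per_disk_depth

# A 12-drive RAID set could justify a far larger setting than the default.
print(suggest_max_async_io(1))   # single disk: the version 7.0 default of 32
print(suggest_max_async_io(12))  # 12-drive array: 384
```

The point is not the exact number but the direction: an intelligent RAID controller can keep many more requests in flight than one disk, so the default starves it.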

It is not unreasonable to expect a robust single-subject data mart, complete with several OLAP cubes, to consume upwards of 50GB to 100GB of storage. Speeding OLAP analysis by storing 80% or more of the possible dimension aggregates of a cube can consume significant disk resources. In addition, opening up a data mart to casual users via an English Query application can significantly raise the amount of SQL (via translation) query traffic.

The right stuff

With this in mind, CTO Labs examined what's available for a high-performance data mart configuration. Our target hardware budget for this scenario was $50,000. The server we evaluated was a Gateway ALR 9200. This server can be configured with up to four Pentium II Xeon processors and supports up to 4GB of main memory. Our test configuration was a two-way SMP system with Pentium II Xeon processors pegged at 400MHz and 512KB of L2 cache per processor. Main memory was 512MB of PC100 SDRAM with ECC. For n+1 power redundancy, our system had three power supplies.

For I/O, the system's architecture was built on two PCI buses. Our system came with an Ultra2 SCSI RAID disk controller with 4MB of cache and four 10,000-rpm SCSI drives. The price for our Gateway ALR 9200 test system was $13,844.

We began by getting a baseline of the CPU power of each of the Pentium II Xeon CPUs. Within a 95% confidence interval, the CTO Labs CPU benchmark pegged the performance of the Xeon CPUs at six to ten times that of a 100MHz Pentium-based system. The geometric mean of the 33 benchmark kernels rendered an NTPI rating of 715. This performance was fully 33% faster than a 333MHz Pentium II system.

Even more important for SQL Query performance are the results of the CTO Labs thread-execution benchmark. In version 7.0, Microsoft has significantly improved the query-processing capabilities of SQL Server to speed up the processing of queries over very large databases. These changes include new algorithms for query execution as well as parallel query execution.

The new parallel query processing capability is geared towards database queries that consume significant CPU cycles. Typical candidates for parallel execution are the kinds of large table joins, substantial aggregations, and large result set sorts that mark data warehouse and data mart operations. The overhead incurred in executing a query in parallel, however, outweighs any power boost that any simple transaction-oriented query would get.

At query execution time, SQL Server determines if both the current workload and system configuration support parallel execution. If SQL Server determines that a query should be executed in parallel, it next determines the optimal number of threads over which to spread the query`s execution. Each time a query execution plan is retrieved, SQL Server calculates the optimal number of threads to use based on the current execution environment. As a result, parallel execution of a query can vary in the number of threads used from query execution to query execution.
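The run-to-run variation in plan width can be illustrated with a toy decision function. SQL Server's actual cost model is internal to the query processor; the thresholds and heuristic below are invented purely for illustration:

```python
# Hypothetical sketch of a degree-of-parallelism decision, loosely
# modeled on the behavior described above. The cost threshold and the
# idle-CPU heuristic are invented; SQL Server's real algorithm is
# internal to the query processor.
def choose_thread_count(query_cost, cpu_count, busy_threads, max_dop=None):
    """Return 1 (serial) for cheap queries, otherwise a thread count
    scaled to the CPUs not already occupied by other work."""
    if query_cost < 5.0:                 # parallel overhead would dominate
        return 1
    idle = max(cpu_count - busy_threads, 1)
    return max(min(idle, max_dop or cpu_count), 1)

# The same query can get a different plan width from run to run,
# depending on the load at execution time.
print(choose_thread_count(query_cost=50.0, cpu_count=4, busy_threads=0))  # 4
print(choose_thread_count(query_cost=50.0, cpu_count=4, busy_threads=3))  # 1
```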

The Gateway ALR 9200 showed excellent thread-execution scaling in the CTO Labs thread execution benchmark. Compared to a two-way Pentium II and a four-way Pentium Pro-based system, thread execution scaled linearly through a system load of 32 concurrent threads. From that point on, performance of the ALR 9200 stayed well ahead of the two-way Pentium II and paralleled the performance level of the four-way Pentium Pro-based system.

I/O flashback

Whether or not SQL Server determines it is beneficial to execute a query in parallel, the database engine still maintains a pool of Windows threads that are used to service all batches of SQL Server commands. Microsoft refers to these threads as "worker threads." These worker threads are responsible for writing out dirty 8KB pages from the SQL Server buffer cache. Under SQL Server version 7.0, data is stored in 8KB pages, and each group of eight contiguous pages is called a 64KB extent. The buffer cache is also divided into 8KB pages. Worker threads schedule these I/O operations asynchronously for maximum performance.

The database administrator sets the total number of worker threads available to service all incoming command batches via the max worker threads setting. If the number of active client connections exceeds the max worker threads setting, then the client connections will share worker threads.
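The sharing behavior is the familiar fixed-pool pattern: a bounded set of workers services an unbounded queue of requests. The sketch below illustrates it with Python's standard thread pool; the names are illustrative stand-ins, not SQL Server internals:

```python
# Sketch of the worker-thread pooling described above: a fixed pool
# services batches from many client connections, so when connections
# exceed the "max worker threads" setting they share workers.
from concurrent.futures import ThreadPoolExecutor

MAX_WORKER_THREADS = 4  # stand-in for the max worker threads setting

def run_batch(conn_id):
    # A real worker would parse, plan, and execute the command batch.
    return f"connection {conn_id}: batch done"

with ThreadPoolExecutor(max_workers=MAX_WORKER_THREADS) as pool:
    # Ten connections, four workers: batches queue up and share threads.
    results = list(pool.map(run_batch, range(10)))

print(len(results))  # 10 batches serviced by 4 shared workers
```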

Another key component of the SQL Server 7.0 I/O subsystem is the Read-Ahead Manager. In accord with the basic design philosophy that Microsoft has set for SQL Server, the Read-Ahead Manager is completely self-configuring and self-tuning. In addition, Read-Ahead Manager is tightly integrated with the Query Processor.

The Query Processor notifies the Read-Ahead Manager whenever it determines that a query would benefit from performing a 64KB read-ahead scan. Like parallel queries, read-ahead scans tend to have the most application in a data warehouse or data mart environment. Typically it is when SQL Server is doing large table scans, large index range scans, and probes into B-trees that the Read-Ahead Manager triggers 64KB read-ahead scans.

To facilitate this new feature, version 7.0 of SQL Server also introduces a new storage structure dubbed the Index Allocation Map (IAM). SQL Server 7.0 records the location of 64KB database extents, each comprising eight pages of SQL Server data or index information. The Read-Ahead Manager can then construct multiple sequential 64KB read requests by combining the query information from the Query Processor with the extent locations retrieved from the IAM.
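The payoff of the extent map is coalescing: pages that would otherwise be fetched as separate 8KB reads collapse into a handful of 64KB extent reads. A minimal sketch of that grouping, with an invented page-number representation rather than SQL Server's on-disk format:

```python
# Sketch of read-ahead coalescing: given the 8KB page numbers a query
# will touch, group them into the 64KB extents (eight contiguous pages)
# an IAM-like map records, so each extent becomes one sequential 64KB
# read instead of up to eight random 8KB reads.
PAGES_PER_EXTENT = 8  # 8 x 8KB pages = one 64KB extent

def extents_to_read(page_numbers):
    """Map page numbers to the distinct extents that cover them."""
    return sorted({p // PAGES_PER_EXTENT for p in page_numbers})

# Pages 0-9 span extents 0 and 1: two 64KB reads replace ten 8KB ones.
print(extents_to_read(range(10)))  # [0, 1]
```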

These underlying I/O architecture constraints affect SQL Server I/O performance. As noted, a typical SCSI or ATA disk can handle about 100 random 8KB reads per second. As a result, peak throughput on 8KB reads will be only about 800KBps. That's far below the typical 40MBps theoretical throughput numbers claimed for Ultra SCSI and Ultra ATA drives. Only on sequential 64KB reads will throughput approach these specs.
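The arithmetic behind those numbers is worth making explicit, since it drives the whole RAID argument that follows:

```python
# Back-of-the-envelope throughput math from the paragraph above:
# ~100 random 8KB reads per second caps a single disk well below
# its sequential rating.
KB = 1024

random_iops = 100                    # typical late-90s SCSI/ATA disk
random_throughput = random_iops * 8  # KB per second on random 8KB reads
print(random_throughput)             # 800 KBps

sequential_spec = 40 * KB            # ~40MBps claimed Ultra SCSI rate, in KBps
print(round(sequential_spec / random_throughput))  # ~51x gap
```

Closing a gap of that size on random I/O takes many spindles and a cache in front of them, which is exactly what a smart RAID subsystem provides.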

These constraints place a significant share of the burden for overall performance on the I/O subsystem. A smart RAID 10 subsystem with I/O caching is an absolute must. For our test configuration, we turned to a FlashDisk subsystem from Winchester Systems. FlashDisk uses a 100MHz Intel 486 DX4 CPU to manage up to 128MB of EDO DRAM for read-ahead and write-back caching. Because of the danger of a two-tier caching scheme in which the first tier (the Windows NT OS) has no idea of the existence of the second tier (the I/O subsystem cache), CTO Labs always disables write-back caching by default.

The FlashDisk unit that CTO Labs tested was configured with 64MB of EDO cache and 12 Cheetah SCSI drives from Seagate. Each of these drives has a capacity of 9GB and spins at 10,000 rpm. For maximum write performance--remember we would be creating large OLAP cubes for decision support--we set our FlashDisk up to do RAID 10. This meant we had an effective storage capacity of 54GB.

On 64KB sequential reads, the FlashDisk provided a throughput of about 35MBps. This is typical of what we have measured with high-end SCSI subsystems. By contrast, RAIDzone, the Consensys Ultra ATA RAID subsystem, provided 70MBps.

The FlashDisk subsystem truly shined in write performance. For RAID 10, write performance was within 5% of read performance. The FlashDisk was incredibly efficient when writing data in stripes.

On random 8KB reads, the behavior of FlashDisk was also distinctly different from the RAIDzone. RAIDzone quickly ramped up to 5,000 I/Os per second at 100 disk-daemon processes. It then slowly increased to 7,000 I/Os per second with 700 daemons before average response time exceeded 100ms. In contrast, FlashDisk with only three simultaneous daemon I/O processes hit 6,000 I/Os per second. Then, as more daemons fought for cache hits, performance settled back to a very stable 4,000 I/Os per second.

Solutions Summary

Gateway ALR 9200 Server

Gateway, (800) 444-4267, www.gatewaypartners.com

price as tested: $13,844; includes two 400MHz Pentium II Xeon CPUs, 512KB L2 cache per processor, 512MB SDRAM, four 4GB Seagate Cheetah SCSI drives, and a hardware RAID controller with 4MB cache.

FlashDisk

Winchester Systems, (800) 328-3700, www.winsys.com

price as tested: $36,157; includes twelve 9GB hot-swap Seagate Cheetah drives, two (n+1) redundant SCSI controllers, 64MB EDO cache, dual hot-swap fans, and dual hot-swap power supplies.

Consensys Dual SmartCan Subsystem

Consensys Computers, (905) 940-2900, www.raidzone.com

price as tested: $8,665; includes ten 18.2GB Seagate Ultra ATA drives.

Bottom Line

It will take work to get each product working just right, and none is the complete solution to all your BackOffice management problems. Nonetheless, for any organization of any substantial size, we can't imagine living without at least one of them.

NOTE: This article is reprinted with permission from BackOfficeCTO magazine, a sister publication of InfoStor. For more information or to subscribe, visit www.backofficemag.com


In the CTO Labs thread execution benchmark, the two-way Gateway ALR 9200 server scaled linearly right through 32 simultaneous threads. With more than 32 threads, performance settled back to the level of a four-way Pentium Pro-based SMP server from Dell.


A remarkable feature of the FlashDisk RAID subsystem was the virtually identical read and write performance of a RAID 10 configuration (top). The CTO Labs load benchmark measures disk subsystem performance when doing random I/O. The results can be plotted as the total number of I/Os sustained by the subsystem versus the number of disk daemons that the subsystem can satisfy with an average access time of 100ms or less (middle). Alternatively, the results can be viewed as average response time versus the number of I/Os per second. An interesting aspect of the FlashDisk performance was the subsystem's ability to sustain a high number of I/Os per second from as few as three simultaneous I/O processes (bottom).

