Data Movement vs. Data Sharing
By Gerry Machi
As the song goes, "Everything old is new again." Mainframes, said to be on the decline since the advent of open-systems superservers and the increasing popularity of networks, are back in vogue due to the very factors that once made them popular: reliability, speed, and huge storage capacity.
The trend toward recentralization and the ever-increasing need for 7x24 operations present key challenges. Gigabytes and terabytes of data need to be backed up, extracted, and loaded between data warehouses in very small timeframes. Intranets need to be seamlessly connected at high speeds to System/390s. More and more, servers are designed to do single tasks.
In many cases, these bandwidth-hungry applications are overloading existing network infrastructures. What's needed is a fast, cost-effective, and simple method of moving data between mainframes and servers, and between servers, on a separate "data highway."
Market forces are accelerating the need to move tremendous volumes of data between these heterogeneous systems. For example:
- The data-mining market is expected to grow to more than $8 billion in 2001.
- 80% of the world's data still resides on mainframes.
- System/390 disk storage is expected to grow at 50% per year through 2000.
- More than 50% of all large UNIX systems are co-located with mainframes in the data center.
Data warehouse users are increasingly moving their warehouses or data marts to UNIX or NT servers to facilitate data mining. These server environments are usually optimized for complex database queries and for specialized data analysis tools and applications. Mainframes remain the central repositories for the data that's regularly "moved" to data warehouses or marts for ready access and data mining.
Centralized computing offers reduced costs and improved application availability. Reports indicate that more than 80% of IT organizations have consolidated their servers--or are planning to. This centralization has created the opportunity to leverage the backup-and-restore capabilities of mainframes for UNIX and NT servers.
Historically, users have had only two options for moving large amounts of data: use the corporate network, or use large dedicated storage server systems designed primarily for data sharing. The first option hasn't been viable, because network throughput cannot move gigabytes of data in minutes and because bulk data movement over the corporate network disrupts normal network traffic.
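The throughput gap can be illustrated with a rough back-of-envelope calculation. The figures below are illustrative assumptions, not measurements: roughly 10 MB/s of effective throughput for a shared 100 Mbps LAN, and roughly 17 MB/s per ESCON channel.

```python
# Back-of-envelope transfer-time comparison (illustrative figures only).
# Assumed sustained rates: ~10 MB/s for a shared 100 Mbps Ethernet LAN,
# ~17 MB/s per ESCON channel. Real rates vary with protocol overhead
# and contention.

def transfer_minutes(data_gb, throughput_mb_per_s):
    """Minutes needed to move data_gb gigabytes at a sustained MB/s rate."""
    return (data_gb * 1024) / throughput_mb_per_s / 60

# Moving a 100 GB warehouse extract:
lan_time = transfer_minutes(100, 10)       # shared 100 Mbps LAN
escon_time = transfer_minutes(100, 2 * 17) # two ESCON channels in parallel

print(f"100 GB over shared LAN: {lan_time:.0f} min")   # roughly 3 hours
print(f"100 GB over 2x ESCON:   {escon_time:.0f} min") # under an hour
```

Even under these generous assumptions for the LAN, a bulk transfer monopolizes the corporate network for hours, while dedicated channels finish in a fraction of the time without touching it.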
The second option--data sharing through dedicated storage server systems--is very costly (often more than a $1 million initial investment) and requires a significant technology commitment. IS managers need a cost-effective alternative. This solution needs to:
- Be fast--in the range of gigabytes-per-minute throughput capacity.
- Support high-volume data movement, including backup and restore, bulk data transfers, and database loads.
- Be simple, cost-effective, reliable, and compatible with existing systems and applications without affecting corporate network traffic.
The "data movement" solution connects existing high-bandwidth interfaces on both enterprise systems (via ESCON channels) and UNIX/NT servers to create a separate "data highway" used exclusively for high-volume data movement. ESCON and SCSI interfaces are familiar, have enormous bandwidth, and are designed for simple, low-overhead, high-performance connectivity. This solution supports any data type and parallel data transfers without protocol overhead.
Gerry Machi is vice president and general manager of Bus-Tech Inc.'s data movement business unit in Burlington, MA.