Data-center storage trends, challenges

Posted on April 01, 2007


From connectivity to data management, the mantra remains the same: Do more with less.

By Mike McNamara

Data centers have become extremely complex. New technologies, fast growth, acquisitions, the online data explosion, and increased security concerns have driven complexity up and utilization rates down. While priorities may change over time, one priority remains constant: the need to do more with less.

This article addresses eight key trends and challenges that affect data centers today and will continue to do so: connectivity, tiered storage management, thin provisioning, storage system resiliency, application availability, multi-protocol support, security, and integrated management.

Connectivity

Based on data from various market research firms, growth in the Fibre Channel SAN market is slowing, but Fibre Channel will remain the dominant storage network technology for the foreseeable future. Fibre Channel currently runs at 4Gbps, with 8Gbps on the road map.

However, Ethernet-based iSCSI is making inroads into the SAN market and represents the biggest challenge to Fibre Channel’s dominance. 10Gbps Ethernet (10GbE) is entering the mainstream and prices are dropping. (For more information on iSCSI trends, see “iSCSI goes beyond SMBs and Windows,” InfoStor, March 2007, p. 22.)

InfiniBand offers a low-cost, low-latency, high-speed switching technology, but today it is still deployed predominantly in niche markets such as high-performance computing (HPC) and clustered architectures, and only a few vendors offer native InfiniBand storage systems.

At the disk-drive connectivity level, Serial ATA (SATA) continues to be the high-capacity, low-cost leader. Serial Attached SCSI (SAS) offers a low-cost alternative to Fibre Channel for low-end to midrange direct-attached storage (DAS), server/storage cluster environments and, potentially, small SANs. And Fibre Channel is positioned as the high-performance, high-reliability interface. (For more information on the relative positioning of the various disk/array interfaces, see “SAS: The new kid on the I/O block,” InfoStor, January 2007, p. 24.)

The good news in this segment of the IT storage market is that data centers have a plethora of options to address their connectivity requirements.

Tiered storage management

IT organizations need ways to enhance capacity utilization and optimize storage costs based on the value of data to the organization. By implementing tiers of storage, organizations can overcome key data-management challenges such as

  • Growing storage requirements placed on primary (expensive) storage while secondary storage remains underutilized;
  • The inability to effectively place data on different types of storage based on its relative business value; and
  • The high cost of backup in the absence of appropriate classification of data, which results in excess data protection for data that is not mission-critical.


Although some products provide tiers of storage, including traditional hierarchical storage management (HSM) software, their narrow scope in terms of platforms supported or the disruption they cause to users limits their benefit. Some key deterrents to implementing tiers of storage include

  • User downtime resulting from tedious data migration procedures and long restore windows for archived data;
  • Significant administrative effort required to restore user access to migrated data;
  • Lack of solutions that can migrate data across heterogeneous storage platforms;
  • Lack of centralized management of distributed data to reduce administrative complexity; and
  • Inability to organize data intelligently and present it to users logically.

Data-management products are overcoming these limitations to deliver simple yet powerful solutions for implementing tiers of storage across multiple storage systems, independent of their location or performance characteristics (see figure; a simple policy sketch follows the list below). Tiered storage solutions should provide

  • A high level of automation;
  • Centralized management of heterogeneous storage;
  • Policy-based migration of data across storage tiers;
  • Non-disruptive data movement; and
  • A business view of data independent of physical storage tiers.
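To make the policy-based migration requirement concrete, here is a minimal sketch of what an age-based placement policy might look like. The tier names, thresholds, and function are illustrative assumptions, not any particular vendor's implementation.

```python
from datetime import datetime, timedelta

# Hypothetical age-based tiering policy: thresholds and tier names are
# illustrative only. Data is placed on a tier according to how recently
# it was accessed.
TIER_POLICY = [
    (timedelta(days=30), "tier1-fc"),       # active data stays on Fibre Channel
    (timedelta(days=180), "tier2-sata"),    # aging data moves to low-cost SATA
    (timedelta.max, "tier3-archive"),       # cold data goes to archive/tape
]

def select_tier(last_accessed, now=None):
    """Return the target tier for a data set based on time since last access."""
    now = now or datetime.now()
    age = now - last_accessed
    for threshold, tier in TIER_POLICY:
        if age <= threshold:
            return tier
    return TIER_POLICY[-1][1]

# Example: data untouched for 90 days would be migrated to tier2-sata.
print(select_tier(datetime.now() - timedelta(days=90)))
```

In a real tiered storage product the migration itself would be non-disruptive and driven by richer policies (business value, compliance class, and so on); the sketch captures only the policy-evaluation step.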

Thin provisioning

Data-center IT managers and storage administrators routinely report using only 30% to 40% of their total disk capacity. Whether the problem stems from a DAS infrastructure with its inherent islands of stranded capacity or inefficient data-management software, utilization won’t improve unless the storage architecture is improved. The good news for data-center managers is that by enabling maximum storage utilization, the right storage architecture can dramatically improve capacity/cost ratios to satisfy IT management, users, and corporate financial managers.


Evaluating storage architectures requires weighing a myriad of factors that impact utilization: operating system efficiency, provisioning techniques, volume management, data protection, and backup facilities. How each service is implemented directly affects the ability to achieve optimal storage efficiency while still delivering on application and business objectives.

The first step in the implementation of any storage system is the allocation of space to servers and applications. Most storage systems require the storage administrator to pre-allocate specific physical disk space to applications, and once allocated that space is no longer available (free) to be used by other applications that may need it. The problem is that in the early stages of deployment, storage administrators seldom know the exact requirements of users and applications, and most administrators have no way to assign storage space to applications without “locking in” specific disk drives to volumes and LUNs.

For example, if a 500GB volume is allocated to an application with only 100GB of actual data, the other 400GB has no data stored on it (see figure). That unused capacity still belongs to the application, and no other application can use it. The unused portion of that 500GB is wasted storage space and money; even though all of the capacity is eventually used, it could take months or years to fill.

Thin provisioning eliminates this waste. Using the same example, the system administrator provisions 500GB to the application with only 100GB of actual data. With thin provisioning, the unused 400GB is still available for other applications. This approach allows the application to grow transparently and at the same time ensures capacity is not wasted.

Thin provisioning is essentially “just-in-time storage.” The application thinks it has 500GB of capacity, but the storage system gives it capacity only as it needs it. The rest stays in the pool, and administrators can set thresholds to be alerted when to add more disks to the storage pool. Thin provisioning benefits data centers by improving storage utilization to as much as 65% to 85% and by reducing storage costs and complexity.
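The following is a minimal sketch of the thin-provisioning idea described above, using the same 500GB/100GB numbers; the class, method names, and 80% alert threshold are assumptions for illustration only.

```python
class ThinPool:
    """Minimal sketch of thin provisioning: logical capacity is promised up
    front, but physical capacity is drawn from the shared pool only as data
    is actually written. Names and the 80% alert threshold are illustrative."""

    def __init__(self, physical_gb, alert_threshold=0.8):
        self.physical_gb = physical_gb
        self.used_gb = 0
        self.alert_threshold = alert_threshold
        self.volumes = {}   # volume name -> [logical_gb, written_gb]

    def provision(self, name, logical_gb):
        # Promise capacity to the application without reserving physical space.
        self.volumes[name] = [logical_gb, 0]

    def write(self, name, gb):
        logical_gb, written_gb = self.volumes[name]
        if written_gb + gb > logical_gb:
            raise ValueError("write exceeds the volume's logical size")
        if self.used_gb + gb > self.physical_gb:
            raise RuntimeError("pool exhausted; add disks to the pool")
        self.volumes[name][1] += gb
        self.used_gb += gb
        if self.used_gb / self.physical_gb >= self.alert_threshold:
            print(f"ALERT: pool is {self.used_gb / self.physical_gb:.0%} full")

pool = ThinPool(physical_gb=1000)
pool.provision("app_a", 500)   # the application sees 500GB...
pool.write("app_a", 100)       # ...but only 100GB of pool space is consumed
print(pool.physical_gb - pool.used_gb, "GB still free for other volumes")
```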

Storage system resiliency

Enterprise data centers must provide high levels of application availability and consistent data integrity to support business-critical applications around the world. Data-center managers continually wrestle with the challenges of avoiding unplanned downtime to ensure application data is available while avoiding data corruption to ensure the data is correct and up-to-date.

Compromises to either data availability or integrity can have disastrous consequences for a company’s bottom line and reputation.

Although regional disasters and site failures get the most attention by virtue of causing the most pain, the most common causes of unplanned outages are local errors due to operational failures, followed by component or system faults (see figure). Achieving 99.999% application availability requires a highly reliable storage environment that prevents downtime and data corruption, whatever the cause.
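To put the five-nines figure in perspective, a quick back-of-the-envelope calculation shows how little downtime each availability level actually allows per year:

```python
# Back-of-the-envelope: allowed downtime per year at a given availability level.
MINUTES_PER_YEAR = 365.25 * 24 * 60

for label, availability in [("three nines", 0.999),
                            ("four nines", 0.9999),
                            ("five nines", 0.99999)]:
    downtime_min = MINUTES_PER_YEAR * (1 - availability)
    print(f"{label} ({availability:.3%}): about {downtime_min:.1f} minutes of downtime per year")
```

Five nines works out to roughly five minutes of total downtime per year, which is why resiliency has to be designed into every layer rather than bolted on.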

Two industry trends—storage consolidation and the widespread adoption of larger-capacity storage—make high availability a more urgent priority for storage and IT managers. With consolidation, even higher availability is required as more data and applications are at risk. At the same time, increased adoption of SATA arrays with larger capacities increases the risk and probability of failures. Productivity growth, increased global competition, and stringent regulatory requirements create additional demands. As a result, data centers require a comprehensive portfolio of storage resiliency technologies that support very high levels of application availability. To protect against business interruption, storage resiliency should be built into every aspect of the storage solution.

True storage resiliency has two aspects, and storage architectures need to provide both: (1) preventing errors and system failures by means of early detection and self-healing processes, and (2) recovering quickly and unobtrusively from errors and system failures when they do happen. Disk systems should include tools to predict and fix disk drive faults before they occur, protect against multiple drive failures cost-effectively and with minimal impact on performance, maintain data availability in the case of enclosure or storage loop failures, and support synchronous and asynchronous replication, clustered fail-over (local and remote), and full redundancy for fault tolerance.

Application availability

Data-center managers are under relentless pressure to improve application availability, striving for 100% availability for mission-critical applications. Due to the increasing cost of downtime, organizations need to focus on the various causes of outages and adopt a systematic approach to reducing the risks of downtime. Typical infrastructure issues must be addressed, but so must people and process issues, with a plan in place to recover quickly from unforeseen disasters.

Organizations recognize that disasters and disruptions will occur, and the focus has accordingly shifted from disaster avoidance to disaster recovery (DR).

There are two categories of application downtime: planned and unplanned. Failures of one type or another are, for the most part, unavoidable and can lead to unplanned downtime. Improving application availability not only depends on preventing unscheduled downtime and recovering seamlessly from unexpected hardware and software failures, but it also depends on the ability of administrators and operators to perform their daily tasks without reducing the availability of system resources.

Improved application availability and speedy disaster recovery require a storage architecture that protects against all planned and unplanned downtime and allows quick recovery when downtime does occur. A storage system should address all causes of application downtime: preventing operator errors, recovering from operator and application errors, minimizing planned downtime, maximizing system uptime, and recovering from a disaster. All storage vendors deliver availability, but not all focus on delivering protection against the most frequent causes of downtime: application and operational failure.

Site and natural disasters are less likely than operator error, but they can have a much greater impact. Data centers require a flexible, cost-effective DR solution that makes it affordable to cover all application tiers under a single DR plan and that puts the DR site to active business use. Application and database administrators need application-integrated storage systems that perform frequent, non-disruptive backups in a matter of seconds to ensure recovery time objectives (RTOs) and recovery point objectives (RPOs) are met.
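As a rough illustration of how snapshot frequency relates to those objectives, the sketch below estimates worst-case data loss for a snapshot-plus-asynchronous-replication scheme; the RPO target, snapshot interval, and replication lag are hypothetical figures, not from the article.

```python
from datetime import timedelta

# Illustrative only: the RPO target, snapshot interval, and replication lag
# below are assumptions.
def worst_case_data_loss(snapshot_interval, replication_lag):
    """Effective RPO for a snapshot-plus-async-replication scheme."""
    return snapshot_interval + replication_lag

rpo_target = timedelta(minutes=15)
effective_rpo = worst_case_data_loss(snapshot_interval=timedelta(minutes=10),
                                     replication_lag=timedelta(minutes=2))
print("RPO met" if effective_rpo <= rpo_target else "RPO missed",
      "- worst-case loss:", effective_rpo)
```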

Multi-protocol support

To fully realize the consolidation and management benefits of networked storage, some data centers need to deploy systems with a single set of management tools that meet both SAN and NAS requirements. A unified pool of storage provides higher-capacity utilization, a single data-recovery solution and a single data-management model, as well as greater leverage of IT staff and skills.

These benefits can result in better return on investment (ROI) and reduced total cost of ownership (TCO).

Storage systems that support multiple protocols, sometimes referred to as “unified storage,” can abstract and virtualize the specifics of SAN and NAS into a common form that can be allocated and managed using common tools. In this case, all of the internal workings required to deal with the specifics of each networked storage approach (FC SAN, IP SAN, and NAS) are transparent to users.
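The sketch below illustrates the unified-storage idea in the abstract: one provisioning call, with the FC SAN, IP SAN, and NAS specifics hidden behind interchangeable backends. All class and method names are hypothetical.

```python
from abc import ABC, abstractmethod

# Hypothetical sketch of the "unified storage" idea: one management call,
# with protocol-specific details hidden behind interchangeable backends.
class StorageBackend(ABC):
    @abstractmethod
    def provision(self, name, size_gb):
        ...

class FcLunBackend(StorageBackend):
    def provision(self, name, size_gb):
        return f"FC LUN '{name}' ({size_gb}GB) mapped to host WWPNs"

class IscsiLunBackend(StorageBackend):
    def provision(self, name, size_gb):
        return f"iSCSI LUN '{name}' ({size_gb}GB) exposed on an IQN target"

class NasShareBackend(StorageBackend):
    def provision(self, name, size_gb):
        return f"NFS/CIFS share '{name}' ({size_gb}GB) exported"

def provision_storage(backend, name, size_gb):
    # Callers use the same call whether the result is SAN block storage
    # or NAS file storage.
    return backend.provision(name, size_gb)

print(provision_storage(NasShareBackend(), "home_dirs", 500))
print(provision_storage(IscsiLunBackend(), "exchange_logs", 200))
```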

Security

The advantages of networked storage technologies such as SAN and NAS are well-established, but storing an organization’s data on a network creates significant security risks. Data in networked storage environments is significantly more vulnerable to unauthorized access, theft, or misuse than data stored in more-traditional DAS configurations. Aggregated storage is not designed to compartmentalize the data it contains, and data from different departments or divisions becomes co-mingled. Replication, backup, off-site mirroring, and other disaster-recovery techniques increase the risk of unauthorized access from people both inside and outside the enterprise. Partner access through firewalls and other legitimate business needs also create security risks.

With storage networks, a single security breach can threaten the data assets of an entire organization. Technologies such as firewalls, intrusion detection systems (IDSs), and virtual private networks (VPNs) secure data assets by protecting the perimeter of the network. While important, these approaches do not adequately secure storage; they leave data at the core open to both internal and external attacks, and once these perimeter barriers are breached, data assets are fully exposed.


Businesses that don’t encrypt sensitive data may wind up spending a lot of money on corrective measures and reparations because of failure to comply with regulatory or contractual data-protection requirements.

Providing wire-speed encryption and protecting data at rest with secure access controls, authentication, and secure logging simplifies the security model for networked storage. Security appliances can be deployed transparently in the data center without changes to applications, servers, desktops, or storage resources.
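For illustration, the snippet below shows the kind of authenticated, at-rest encryption such an appliance applies, using AES-256-GCM from the Python cryptography package; in practice the encryption happens transparently in the data path rather than in application code, and the identifiers here are hypothetical.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Illustration of authenticated data-at-rest encryption (AES-256-GCM) using
# the third-party "cryptography" package. An inline security appliance or the
# array itself would do this transparently in the data path; this sketch only
# shows the primitive, with hypothetical identifiers.
key = AESGCM.generate_key(bit_length=256)   # in production, keys live in a key manager
aesgcm = AESGCM(key)

block = b"sensitive customer record"
nonce = os.urandom(12)                      # must be unique per encryption
ciphertext = aesgcm.encrypt(nonce, block, b"lun-42/block-1001")

# Decryption fails loudly if the data or its bound metadata was tampered with.
plaintext = aesgcm.decrypt(nonce, ciphertext, b"lun-42/block-1001")
assert plaintext == block
```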

Integrated management

In traditional data centers, application, database, system, and storage administrators each focus narrowly on only a part of the data/storage management problem. Each has distinct areas of responsibility and accountability. As a result, end-to-end data management depends on communication between data administrators and storage administrators and on manual mapping of data to storage. This disruptive, error-prone process can result in critical mistakes and lost productivity.

Traditional approaches have left a gap between the management of data and the management of storage. This has resulted in inefficient operations, with considerable duplication of effort and frequent interruptions to the activities of highly interdependent administrative groups.

An integrated management approach simplifies management by encompassing both the storage devices and the data that resides on those devices (see figure).


Using this approach, storage administrators can operate more efficiently and with minimal interruptions by automating routine management processes and by linking those processes to specific application requirements. This can be accomplished without sacrificing control over the storage environment by defining appropriate, reusable policies that support different quality of service (QoS) requirements for each application.

By creating linkages between application requirements and storage management processes in a controlled environment, system, application, and database administrators can control their data in a language that they understand, without the need for extensive storage management skills. Because the data owners can perform certain data-management tasks, their ability to respond to changing business conditions is enhanced. In addition, the use of process automation, role-based access, and policy-based management enables business-centric control of data and reduces the interdependencies between storage and data administrators to deliver productivity and flexibility gains.

The storage and system administrators still have all of the tools and capabilities they always did, but they can now create policies that control capacity allocation, protection levels, performance requirements, and replicas. For example, a storage manager can set up policies for different classes of applications. A Tier-1 application can have up to 2TB of capacity, snapshot policies of once a day, remote replication to a specific data center, and a nightly backup to a virtual tape library (VTL) target. A Tier-2 application that requires a lot of capacity can have up to 10TB of capacity, snapshot policies of once a week, no remote replication, and monthly backups to tape.
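Expressed as data, the Tier-1 and Tier-2 examples above might look like the following sketch; the field names, policy structure, and replication target are illustrative assumptions, not any vendor's actual API.

```python
from dataclasses import dataclass
from typing import Optional

# The Tier-1/Tier-2 examples above expressed as reusable policy objects.
# Field names and structure are illustrative, not any vendor's actual API.
@dataclass
class StoragePolicy:
    max_capacity_tb: int
    snapshot_schedule: str
    remote_replication_target: Optional[str]
    backup_schedule: str
    backup_target: str

POLICIES = {
    "tier1": StoragePolicy(max_capacity_tb=2, snapshot_schedule="daily",
                           remote_replication_target="dr-site-east",
                           backup_schedule="nightly", backup_target="VTL"),
    "tier2": StoragePolicy(max_capacity_tb=10, snapshot_schedule="weekly",
                           remote_replication_target=None,
                           backup_schedule="monthly", backup_target="tape"),
}

def apply_policy(app_name, tier):
    policy = POLICIES[tier]
    # A real system would translate the policy into provisioning, snapshot,
    # replication, and backup jobs; here we simply report the intent.
    print(f"{app_name}: up to {policy.max_capacity_tb}TB, "
          f"{policy.snapshot_schedule} snapshots, "
          f"replication to {policy.remote_replication_target or 'none'}, "
          f"{policy.backup_schedule} backup to {policy.backup_target}")

apply_policy("erp-database", "tier1")   # hypothetical Tier-1 application
```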

Tight integration with business applications, allowing application and server administrators to manage data without having special storage management skills, will allow data centers to be more efficient and cost-effective.

Mike McNamara is chair of the Fibre Channel Industry Association (FCIA) marketing committee. The FCIA recently formed an alliance with the Storage Networking Industry Association (SNIA). McNamara is also a SAN product marketing manager at Network Appliance.

