One objective of any IT organization is to streamline operations to reduce costs, and well-defined, clearly communicated service offerings are essential to achieving that goal.
By David Vellante and John R. Blackman
The basic premise behind a storage services architecture is to treat infrastructure as a granular set of reusable services that can be invoked as needed by the appropriate business application. This approach allows organizations to optimize cost, performance, recovery, and other metrics that are fundamental to business processes.
One way to conceptualize a storage services architecture is via a three-dimensional cube (see Figure, below) with the following vectors:
- Governance/management/process (e.g., service components);
- Standards/technology tiers (e.g., cost, performance); and
- Business requirements (e.g., availability, classes of service).
In this model, services are defined as a stack of technology components, from devices through networks, storage, and server platforms all the way up to applications. Technology tiers provide various performance and cost levels built on tiered storage. The class-of-service dimension accommodates a spectrum of protection methodologies, from "always on" to "generally reliable."
From this we can define three top-level services: data protection, infrastructure management, and information management. Sets of storage services are layered within each dimension: provisioning services within infrastructure management, replication services within data protection, and data classification and retention/archive services within information management. The intersection of these dimensions and their corresponding services determines the technologies used, service levels, and costs. Importantly, not all intersections are viable. For example, iSCSI as a protocol today will not meet continuous-availability, Tier-1 performance requirements and therefore would not be an option on the services menu.
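The cube can be thought of as a lookup structure in which some intersections are simply not offered. The following is a hypothetical Python sketch (the axis values and the `NON_VIABLE` set are illustrative assumptions; only the iSCSI/Tier-1/"always on" exclusion comes from the text):

```python
# Sketch of the three-dimensional services "cube": each axis is an
# enumeration, and a service offering appears on the menu only at
# viable intersections. All specific values below are illustrative.

TOP_LEVEL_SERVICES = ("data protection", "infrastructure management",
                      "information management")
TIERS = ("tier-1", "tier-2", "tier-3")                    # performance/cost
CLASSES_OF_SERVICE = ("always on", "highly available", "generally reliable")

# Intersections that are NOT offered. The one entry below reflects the
# article's example: iSCSI today cannot meet continuous-availability,
# Tier-1 performance requirements.
NON_VIABLE = {
    ("iscsi", "tier-1", "always on"),
}

def on_menu(protocol: str, tier: str, service_class: str) -> bool:
    """Return True if this intersection is a valid services-menu option."""
    return (protocol, tier, service_class) not in NON_VIABLE

print(on_menu("iscsi", "tier-1", "always on"))   # False: not offered
print(on_menu("fc", "tier-1", "always on"))      # True
```

In practice the viability rules would be richer than a single exclusion set, but the point is the same: the menu is the subset of intersections the organization has decided to support.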
What are the benefits of a storage services architecture?
In addition to lengthened depreciation schedules, Wikibon.org users cite four primary benefits of this approach:
- Provides granular acquisition options for business lines with a high degree of cost transparency;
- Forces businesses and IT to make trade-offs between cost and function, leading to more efficient use of storage as a resource;
- Minimizes the complexity of the technology portfolio and avoids "one off" solutions that can create migration problems down the road; and
- Limits the number of suppliers, further reducing complexity.
What drawbacks and organizational impediments will a storage services architecture bring?
To be sure, there are political and practical minefields users should consider with such an approach. First, reducing the number of technologies and vendor choices will limit technology options and may create friction, because leading technologists will naturally want to deploy the latest innovations. It is therefore critical that organizations use their "sandbox" lab specifically as a means of evolving the services model (i.e., their cube of offerings) so that the services offered remain competitive and current.
In addition, for storage services to pay off, service level agreements (SLAs) and chargeback models must be in place and aligned, which requires deliberate effort, planning, and action.
Finally, users should be aware that in gaining efficiency, they will be trading away best-of-breed optimization on an application-by-application basis. The storage services portfolio will, by definition, become more homogeneous and serve a wider spectrum of applications from the core services menu.
What does it take to accomplish this?
The Wikibon.org community believes there are a few key items that will make this transition successful, including the following:
- Create a targeted, qualified, and limited portfolio of service offerings (i.e., reduced but capable). Define this offering using the three dimensions depicted in the Figure and provide top-level services around infrastructure management, information management, and data protection;
- Limit the number of technology suppliers: for example, two array providers with a supporting suite of software based on SNIA standards, one tape supplier, one common resource management tool, and one or two fabric suppliers. Provide direction via a road map and measures of success;
- Secure top-level management support. This is critical, and we don't just mean signing the check; we mean buying into the concept, advocating it to users, and ensuring success in delivering services; and
- Construct a chargeback model that enables the acquisition of storage ahead of demand while providing a mechanism for refreshing technology and, if needed, acquiring human capital as workload increases.
Importantly, our research indicates that successful organizations revisit the services portfolio and technology options every 12 to 18 months and preferred/approved suppliers every 24 to 36 months. As an example, a few years ago, energy efficiency was hardly perceived as an issue, whereas today it is becoming a fundamental component of every organization's strategy. Companies must assess each piece of the service offering to ensure it continues to meet changing business requirements.
What best practice advice should be considered?
Organizations should pursue several actions regarding storage services implementation, including the following:
- Construct SLAs and chargeback models consistent with the service offerings, with a one-time cost and an ongoing monthly fee that includes technology refreshes. Build switching costs (aka early withdrawal penalties) into the model, and offer lower pricing for longer contract terms;
- Limit the number of suppliers and negotiate on-demand contracts in which vendors install equipment but charge for usage at the price in effect when the equipment is turned on, not when it was installed; and
- Be aware that LAN-based backup will present the most difficult SLA challenge and consider disk-based backup for distributed networks; possibly even outsourcing to a remote service provider.
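The chargeback practice above can be made concrete with a little arithmetic. The following is an illustrative Python sketch; every number (setup fee, base rate, refresh reserve, discounts, penalty factor) is a made-up assumption, chosen only to show how a one-time cost, a refresh-funding monthly fee, term discounts, and an early-withdrawal penalty fit together:

```python
# Hypothetical chargeback calculator: a one-time cost, a monthly fee
# that embeds a technology-refresh reserve, lower pricing for longer
# contract terms, and an early-exit penalty that builds switching
# costs into the model. All rates below are illustrative assumptions.

SETUP_FEE = 500.00                  # one-time, per terabyte (assumed)
BASE_MONTHLY = 40.00                # per terabyte per month (assumed)
REFRESH_RESERVE = 0.15              # 15% of the fee funds refreshes (assumed)
TERM_DISCOUNTS = {12: 0.00, 24: 0.05, 36: 0.10}   # longer term, lower price

def monthly_fee(term_months: int) -> float:
    """Monthly fee per TB, including refresh reserve, discounted by term."""
    discount = TERM_DISCOUNTS.get(term_months, 0.0)
    return BASE_MONTHLY * (1 + REFRESH_RESERVE) * (1 - discount)

def early_exit_penalty(term_months: int, months_used: int) -> float:
    """Switching cost: assumed here as half the remaining contract value."""
    remaining = term_months - months_used
    return 0.5 * remaining * monthly_fee(term_months)

# A 36-month commitment is priced below a 12-month one, and leaving
# 6 months early costs half of the remaining contract value.
fee_36 = monthly_fee(36)
fee_12 = monthly_fee(12)
penalty = early_exit_penalty(36, 30)
```

The design point is that the refresh reserve is what funds the periodic technology refresh the article recommends, so the model is self-sustaining rather than requiring new budget each cycle.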
Most organizations will manage such initiatives as a cost center, or at the very least share some of the savings with business lines or invest them back in the business. Creating mechanisms to fund a refresh of the portfolio every couple of years is important.
Storage in many ways has been insulated from the market forces of service-oriented architecture (SOA), Software as a Service (SaaS), and consumer-like business models. This is changing, and users should evaluate the feasibility of constructing service-oriented models for storage that emphasize granularity, reusability, and cost transparency. This will allow organizations to make intelligent make-versus-buy decisions about an emerging set of managed storage services from a variety of providers.
Architecting storage services is a way to transform IT from a cost necessity into a respected service group and value producer, where infrastructure is an enabler of, not an impediment to, business progress. While the hurdles to this transformation are significant, overcoming them can make an IT organization much more cost-efficient and productive. One trade-off is less technology choice; as such, a self-funding model that charges consumers a modest fee to refresh the technology portfolio periodically will allow such initiatives to remain competitive and thrive.
David Vellante is a co-founder and contributor to The Wikibon Project, a research and advisory community of practitioners dedicated to the open sharing of business technology knowledge. John Blackman is a former infrastructure architect at a Fortune 50 financial institution. He is currently an independent consultant specializing in infrastructure architecture and is a member of The Wikibon Project.