Rethinking 'one-size-fits-all' data protection

Posted on July 01, 2003


Not all data is created equal, so you have to match your data-protection strategies and budget to the criticality of the data.

By Mark Bradley

With data-storage requirements continuing to double annually at some sites, throwing more storage capacity at the problem is a flawed strategy. Instead, the problem demands evaluating the importance of data and applying data-protection safeguards appropriately.

Simply stated, different data has different value within the enterprise. Business-critical databases and financial systems, among other applications, are typically placed at the top of the pyramid. Similarly, corporate systems are granted more importance than departmental systems and databases. But what about sales department prospect information? Sales managers will tell you that they are completely sunk without this data. Yet this type of information is often not protected adequately. Similarly, vital information is often stored on a multitude of laptops that are only backed up to corporate servers sporadically—if at all.

So how do you place value on data? To deal with this array of variables, the cost of data protection should be considered against the projected costs that could result from the loss of that data. A "one-size-fits-all" approach might leave you spread so thin across the enterprise that invaluable data is relatively poorly protected, while you spend inordinate amounts to over-protect data that may never need to be accessed.

The key is to undertake a full risk assessment of the potential effects of data loss, including

  • Evaluating the criticality of information;
  • Identifying risk points;
  • Prioritizing those risks;
  • Identifying solutions; and
  • Implementing the most appropriate solutions, based on data value against cost of protection.
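
To make that last step concrete, here is a minimal back-of-the-envelope sketch in Python. The data classes, loss estimates, incident rates, and protection costs are hypothetical placeholders, not figures from any survey; the point is simply to rank data by expected loss against the cost of protecting it.

```python
# Hypothetical risk-assessment sketch: rank data classes by how much expected
# loss each dollar of protection offsets. All names and numbers are placeholders.

data_classes = [
    # (name, loss per incident ($), incidents per year, protection cost per year ($))
    ("Financial database",    500_000, 0.5, 120_000),
    ("Sales prospect lists",   80_000, 2.0,  15_000),
    ("Laptop working files",   20_000, 4.0,  10_000),
    ("Archived project data",   5_000, 0.2,  25_000),
]

def expected_annual_loss(loss_per_incident, incidents_per_year):
    """Simple expected-value estimate of annual loss if the data is unprotected."""
    return loss_per_incident * incidents_per_year

# The highest ratio of expected loss to protection cost gets protected first.
ranked = sorted(
    data_classes,
    key=lambda d: expected_annual_loss(d[1], d[2]) / d[3],
    reverse=True,
)

for name, loss, freq, cost in ranked:
    eal = expected_annual_loss(loss, freq)
    verdict = "protect" if eal > cost else "re-evaluate the spend"
    print(f"{name:22s} expected loss ${eal:>9,.0f}  protection ${cost:>9,.0f}  -> {verdict}")
```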

Evaluation of data importance

Too much money is wasted protecting non-critical information, money that could be more wisely invested in protecting truly business-critical data. If you create a table plotting data-protection methods against data valuation, you can assign increasingly stringent protection to higher levels of data value. Be careful, however, to consider both the cost of data loss and the cost of recovery.
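
As a rough illustration of such a table, the fragment below maps hypothetical value tiers to progressively more stringent protection methods. The tier names and methods are examples only, not the chart that accompanied the original article.

```python
# Illustrative data-valuation matrix: higher-value tiers receive more stringent
# (and more expensive) protection. Tier names and methods are hypothetical.

protection_matrix = {
    "mission-critical":   ["synchronous replication", "redundant systems", "offsite backup"],
    "business-important": ["nightly backup", "RAID storage", "periodic restore tests"],
    "departmental":       ["weekly backup", "RAID storage"],
    "individual":         ["user-initiated backup to a network share"],
}

def methods_for(tier: str) -> list[str]:
    """Return the protection methods for a tier, flagging unclassified data."""
    return protection_matrix.get(tier, ["unclassified -- review and assign a tier"])

print(methods_for("business-important"))
```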

Based on the chart, here are some factors to consider in evaluating the importance of your data:

[Chart: data-protection methods plotted against data valuation]

Organizational layers

Core databases and mission-critical files are always prioritized and protected, but what about the applications that access them? Without these up and running, their data is useless. Similarly, locally hosted applications such as Microsoft Outlook or Lotus Notes contain a plethora of valuable data—address books, calendars, and sales prospect lists, as well as the running record of daily business communication typically contained in the e-mail database. While these can be networked and backed up with Exchange Server and Domino, they often are not afforded much in the way of safeguards. It is largely left up to individuals to back up essential files, and few users give the matter much thought.

Cost versus value

If budgets are to continue their downtrend without a consequent drop in service levels, it is essential to assess the impact and cost to the organization if information is lost for, say, an hour, a day, or permanently. Likewise, at the local level, how long can an organization survive the non-productivity of an individual or a specific team because of lost data? By answering these and other questions in a structured risk assessment process, it is possible to understand the potential cost of losing data and the comparative value of different types of organizational and individual information, and to budget accordingly.

It may appear cost-prohibitive to achieve complete redundancy and availability for business-critical data. But that price tag can only be evaluated against the potential dollar cost of the operational losses incurred while those systems are down. In many organizations where redundancy has been deemed too expensive, such an assessment could perhaps isolate specific data to be comprehensively protected, with less-expensive methods deployed for other systems.
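
One way to frame that comparison is a small downtime-cost calculation like the one below. The hourly impact, recovery cost, outage frequency, and redundancy price tag are hypothetical figures chosen only to show the arithmetic.

```python
# Hypothetical downtime-cost estimate: what an outage of a given length costs,
# compared with the annual price of full redundancy. All figures are placeholders.

hourly_revenue_impact      = 4_000    # lost revenue per hour of outage ($)
hourly_productivity_impact = 1_500    # idle-staff cost per hour of outage ($)
recovery_cost              = 25_000   # one-time cost to rebuild and restore data ($)
redundancy_cost_per_year   = 150_000  # price tag of a fully redundant system ($)
outages_per_year           = 2        # expected number of serious outages

def outage_cost(hours):
    """Cost of a single outage lasting the given number of hours."""
    return hours * (hourly_revenue_impact + hourly_productivity_impact) + recovery_cost

annual_outage_cost = outages_per_year * outage_cost(24)   # assume day-long outages

print(f"1-hour outage          : ${outage_cost(1):,.0f}")
print(f"1-day outage           : ${outage_cost(24):,.0f}")
print(f"Expected annual outages: ${annual_outage_cost:,.0f}")
print(f"Redundancy per year    : ${redundancy_cost_per_year:,.0f}")
print("Redundancy pays for itself" if annual_outage_cost > redundancy_cost_per_year
      else "Consider less-expensive protection for this system")
```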

Identifying risk points

Once the organization has considered the potential effects and costs of data loss, the next step is to identify risk points and take mitigating actions. The principal sources of data loss are, in decreasing order:

1. Human error

2. Software failures and bugs

3. Hardware failures and bugs

4. Security breaches

Despite survey after survey finding that human error and software failures outrank hardware as sources of data loss, most money is invested in hardware backup and security systems. However, the human and software elements of the risk equation can't be ignored.

Software failures and bugs include operating system crashes, memory faults, and device driver conflicts, with buffer overflow the usual suspect. Buffer overflow is most commonly caused by software coding errors, but it is also an increasingly frequent avenue of security attack.

Human error actually spills over into software, as many software failures are caused by human error, whether in coding, installation, configuration, or operation.

User errors, too, continue to climb. These include incorrect disk formatting, botched software installations, faulty or missing backups, improper shutdowns and, most common of all, accidental deletions.

Human factors also range from system manager errors, such as failure to keep patches current, to malicious internal attacks and network intrusions. Hacking, viruses, and industrial spying cost businesses billions of dollars per year.

Mitigating data risk

Comprehensive corporate data-protection strategies, therefore, must be built around a value matrix based on cost of loss and potential risk.

This matrix should cover what must be protected for the company as a whole, what must be safeguarded at a departmental or regional level, and what must be protected by individuals.

Mitigating actions can include

  • Regular, automated backups
  • Frequent archiving
  • Backup power
  • Redundant disks
  • Redundant networks
  • Redundant systems
  • Education of users and administrators
  • Security

Traditional backup and redundancy procedures will probably be enough for routine data-protection requirements. For software failures, the only realistic way to mitigate a catastrophic failure is to keep offline copies of complete disk data, including software, patches, and settings, in case a full restore from backup is required.
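
As a minimal sketch of the "offline complete copy" idea, the script below writes a dated archive of a directory tree and prunes old copies. The paths and retention count are hypothetical, and a production backup would add verification, media rotation, and scheduling (for example, via cron).

```python
#!/usr/bin/env python3
"""Minimal illustration of an automated full-copy backup.

Paths and the retention count are hypothetical placeholders.
"""
import tarfile
from datetime import datetime
from pathlib import Path

SOURCE = Path("/srv/app-data")       # data, software, and settings to capture (hypothetical)
DEST = Path("/mnt/offline-backup")   # removable or offline-mounted media (hypothetical)
KEEP = 7                             # number of full copies to retain

def run_backup() -> Path:
    """Write a timestamped full archive of SOURCE and prune old copies."""
    DEST.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    archive = DEST / f"full-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(SOURCE, arcname=SOURCE.name)  # complete copy of the tree
    # Prune the oldest archives beyond the retention count.
    archives = sorted(DEST.glob("full-*.tar.gz"))
    for old in archives[:-KEEP]:
        old.unlink()
    return archive

if __name__ == "__main__":
    print(f"Backup written to {run_backup()}")
```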

Up-to-date security systems and processes, and a structured education and training program for users, will go a long way toward mitigating human error.

Probably the highest priority, however, should be system and storage automation. Automated enterprise-class storage systems minimize the risks inherent in human involvement.

Therefore, automation must be regarded as the most effective long-term solution to minimizing human error and protecting corporate data.

Many 'sizes,' multiple risks

In addressing backup and recovery within the realities of budgetary constraints, data valuation and risk assessment play a vital role. Instead of "one size fits all," data protection now demands more sophistication than ever.

Many "sizes" of data protection and a multiplicity of risk mean that for those who can't spend an unlimited amount of money on several layers of top-of-the-line safeguards, there is a solution: Separate out the crucial from the merely desirable or the unnecessary. Evaluate the criticality of information, identify weak points, and prioritize risk. Then identify the mitigating solutions and implement them based on the data value vs. cost-of-protection equation.

Mark Bradley is a chief storage architect at Computer Associates (www.ca.com) in Islandia, NY.

