Henry Newman's Storage Blog Archives for October 2013

Is Your Data Storage Safe?

There's quite a bit of discussion these days about network security and the lack thereof, including the latest proposal from Huawei, which says “The web needs globally backed, verifiable security standards.” But why is storage security absent from that overall security discussion?

We still have the same basic storage security we had in the 1990s. And most of it came from the 1980s: users, groups and ACLs (access control lists). I think this has not changed for a few reasons. First and foremost, making changes requires changes to file systems or object stores. That is a big deal for vendors: it means significant development cost and, most important, significant testing cost, including the long-term expense of maintaining hardware and running the tests for each release.
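To make concrete how little that 1980s model expresses, here is a minimal Python sketch of the classic discretionary check (the function name and simplifications are mine): access comes down to who you are via the owner, group and other mode bits, with real kernels also consulting POSIX ACLs that extend the same identity-based idea.

```python
import os
import stat

def classic_read_check(path, uid, gids):
    """Sketch of the 1980s-era discretionary check: owner, group and 'other'
    mode bits decide access. Real kernels also consult POSIX ACLs, which add
    more user/group entries but follow the same identity-based model; root
    override and other details are omitted here."""
    st = os.stat(path)
    if uid == st.st_uid:                        # owner bits apply first
        return bool(st.st_mode & stat.S_IRUSR)
    if st.st_gid in gids:                       # then the group bits
        return bool(st.st_mode & stat.S_IRGRP)
    return bool(st.st_mode & stat.S_IROTH)      # finally 'other'
```

Nothing in that check says anything about the sensitivity of the data itself, which is exactly what mandatory access control adds.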

Another reason is that, among common operating systems, only one supports enhanced security: Linux with SELinux. There are a few specialized operating systems that provide this support, but they come at a very high cost. Yet SELinux is not widely used, I think because there is a chicken-and-egg problem. There is no scalable file system available under SELinux that supports high-speed I/O and hundreds of terabytes of space, and NFS does not support SELinux mandatory access controls – so back to the chicken-and-egg problem.
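Part of that dependency is mechanical: SELinux keeps each file's security context in an extended attribute, so the file system (and the protocol, in the case of NFS) has to store and return that attribute before the kernel can enforce mandatory access control. A minimal sketch on Linux, using Python's standard os.getxattr (the example output is illustrative):

```python
import os

def selinux_context(path):
    """Read the SELinux security context, which labeling-aware file systems
    store in the 'security.selinux' extended attribute. If the file system,
    or an NFS mount without label support, cannot carry this attribute, the
    kernel has no per-file label with which to enforce mandatory access
    control."""
    try:
        return os.getxattr(path, "security.selinux").rstrip(b"\x00").decode()
    except OSError:
        return None  # no label: the file system or protocol does not carry one

# On an SELinux-enabled system this prints something like
# 'system_u:object_r:passwd_file_t:s0'; on an unlabeled mount it prints None.
print(selinux_context("/etc/passwd"))
```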

The storage security problem needs to be addressed in a holistic way, globally. Someone needs to create a reference model that works and is successful so that the problem gets addressed. Given that SELinux is available, works and solves the problem – at least in the kernel – it is now up to the file system community to develop a solution. As Linux seems to be becoming the OS of choice for many environments, I hope there is some movement forward in the area of security.

Labels: data storage, storage security

posted by: Henry Newman

Data Storage: No, Tape Isn't Dead

I've seen a number of articles in trade magazines stating that tape is not dead. The articles include:

Tape Never Died, It Was Just Resting

Cloud Backup vs. Tape: Think Hybrid

Hadoop, Web 2.0 Get Path to Tape for Cheap Long-Term Storage

Even six months ago there were few if any articles on tape in the mainstream tech media, yet today I'm seeing at least an article per week and sometimes more. It could be because a few vendors have released products. But I think the real reason is that there is more interest in tape given that there has not been any increase in disk storage density in over 18 months, while during the same period there has been a significant increase in tape density from both LTO and Oracle. And we are all waiting for the IBM announcement on their density increases.

Disk drive density growth is slowing and costs for current drives are not dropping anywhere near as fast as data is growing. Given the cost of disk drives and the density available, this is forcing companies to look at alternatives for long-term storage of data that is not accessed often. There has been lots of industry speculation on future disk drive density, but none of it has disk drives doubling in density in 2014, which I think is good news for the tape vendors.

So will tape make a resurgence, or is the industry hype a short-term flash? (Forgive the pun.) I think the answer depends on what happens in two areas. First, if HAMR (heat-assisted magnetic recording) disks come out in 2015 with huge density increases, then it could be a flash. On the other hand, if HAMR does not come out or the density increases are modest, tape will continue to grow.

The second is whether a technology emerges that provides an alternative to tape with tape-like reliability and low cost. The candidate today is holographic storage. Without either of those happening, tape will continue to resurge, given that the cost of storing data needs to go down.

Labels: Flash, data storage, tape backup

posted by: Henry Newman

Congress, the Shutdown and Technology

The Federal shutdown has me thinking about what the US will lose if we stop funding long-term basic research. Research for High Performance Computing (HPC) systems is generally funded by the US government. Indeed, the government funds a good deal of the basic science simulation research in the US.

If you didn't see this about the NCSA Blue Waters system and the HIV virus, it is well worth listening to. My understanding of this basic research on viruses is that it will allow us to model other virus structures, enabling us to model the next potential flu pandemic. Basic research has suffered over the last year because of funding, and is suffering greatly now because of the shutdown.

For our nation to be a leader for the rest of this century we must be a leader in basic research in many scientific areas. That is the kind of research most companies don't do, not because they don't want to do basic research, but because they've been structured to meet Wall Street demands for quarterly profits. Basic research is for the long term and rarely has short-term payoffs.

The way I see it, we have three choices:

1. Do nothing about funding basic research and let our businesses falter in 5 to 15 years while other nations fund their own research – and transfer the technological advances to their domestic industries.

2. Realign Wall Street expectations to invest for the long term, where companies self-fund basic research, and change anti-trust laws to allow companies to work together.

3. Demand that Congress fund basic research.

The chances of scenarios 1 or 2 improving anytime soon resemble the chances that a pig will sprout wings and fly. There has been no discussion of this in any of the communications from Congress. Whether you are on the right or the left, there needs to be a way to fund basic research. If 3 is not the answer, we need to figure out the funding for basic research some other way. And figure it out quickly, as other nations understand the needs and the long-term implications.

Labels: funding, research and development

posted by: Henry Newman

Violin Memory IPO Didn't Go Well. No Surprise.

The IPO for SSD maker Violin Memory didn’t go so well. After some opening gyrations, the stock fell considerably.

It is not clear what the specific reasons for Violin's poor stock showing are, but what is clear to me is that this might signal the end of the SSD madness on Wall Street. The insane amount of money being spent on SSD companies, through both IPOs and acquisitions, might finally be drying up. I am not anti-SSD in any way, shape or form, but as with anything, SSDs cannot solve all storage problems. And the market size is limited by the cost per TB compared to hard disk.

I feel sorry for the people at Violin, as they presumably expected the stock to rise, but Violin, like most SSD companies, is not making a profit. Wall Street likes new technologies that promise to change an industry, but the providers of SSD products for the storage market (not the manufacturers of NAND) have not changed things that much. SSDs have had a significant impact on the storage industry, but I would argue that the changes have been more evolutionary than revolutionary in most cases.

Yes, there are some applications that have seen significant benefit from this technology, both PCIe and external SSDs. But I think there are two major reasons why SSDs are an evolutionary technology and not a revolutionary one: 1) operating systems and file systems, along with the hardware stack, cannot take full advantage of the technology, and 2) applications designed around the POSIX I/O framework and synchronous I/O cannot fully utilize the major performance benefits that SSDs provide, as the sketch below illustrates.
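As an illustration of that second point, here is a minimal sketch (hypothetical code, not drawn from any particular product) of the synchronous POSIX read loop most applications still use. Each call blocks until the previous block has arrived, so the device never sees more than one outstanding request, and a flash device built to service thousands of requests in parallel sits mostly idle.

```python
import os

def read_file_synchronously(path, block_size=4096):
    """Classic synchronous POSIX I/O: issue one read, wait for it, issue the
    next. Queue depth at the device stays at 1, so most of an SSD's internal
    parallelism goes unused."""
    fd = os.open(path, os.O_RDONLY)
    try:
        data, offset = bytearray(), 0
        while True:
            block = os.pread(fd, block_size, offset)  # blocks until data returns
            if not block:
                break
            data += block
            offset += len(block)
        return bytes(data)
    finally:
        os.close(fd)
```

Keeping an SSD busy means having many requests in flight at once, and that requires exactly the kind of restructuring of applications and the I/O stack that has not yet happened broadly.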

Labels: Flash, data storage, SSD, Violin Memory

posted by: Henry Newman