Archive Needs To Succeed For SSD To Dominate

In my <a href="http://www.informationweek.com/blog/main/archives/2008/09/speed_is_the_ss.html">last entry</a> I wrote that speed is solid state disk's "killer app," but for SSD to really become the primary storage mechanism in tier one, the archive tier needs to be fully established.

George Crump, President, Storage Switzerland

September 27, 2008


With a fully functioning archive tier in place, SSD-based primary storage doesn't need to become less expensive than hard drive-based primary storage. SSD does need to narrow today's price gap (and it will), but it doesn't need to become cheaper. So what is a functioning archive tier?

Today's archive, assuming you really have a separate archive storage platform (backups don't count), typically migrates stale data after a year or two of inactivity. The future archive tier will move data in and out of the primary tier far more aggressively, within days of its becoming inactive. Primary storage will, in effect, become a very large cache.
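To make the idea concrete, here is a minimal sketch of that kind of aggressive migration policy. It is purely illustrative, not any vendor's implementation: the directory names and the 30-day staleness threshold are assumptions, and a real tiering product would work on metadata catalogs or at the block level rather than walking a file system.

```python
import os
import shutil
import time

# Hypothetical policy threshold; a "days, not years" archive tier
# would set this far lower than traditional archiving.
STALE_AFTER_DAYS = 30

def find_stale_files(primary_dir, now=None):
    """Return files under primary_dir not accessed within the threshold."""
    now = now if now is not None else time.time()
    cutoff = now - STALE_AFTER_DAYS * 86400
    stale = []
    for root, _dirs, files in os.walk(primary_dir):
        for name in files:
            path = os.path.join(root, name)
            if os.stat(path).st_atime < cutoff:
                stale.append(path)
    return stale

def migrate(paths, primary_dir, archive_dir):
    """Move stale files to the archive tier, preserving relative layout."""
    for path in paths:
        rel = os.path.relpath(path, primary_dir)
        dest = os.path.join(archive_dir, rel)
        os.makedirs(os.path.dirname(dest), exist_ok=True)
        shutil.move(path, dest)
```

Run on a schedule, a policy like this keeps the primary tier holding only recently touched data, which is exactly the "very large cache" behavior described above.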

We have the foundations for a functioning archive tier in place today. There are solid disk-archiving hardware platforms, like those from Copan Systems, Permabit, and Nexsan, that provide reasonably quick access to stale data while offering massive scale and retention capabilities. The ability to move data to these platforms is improving through products that offer a global file system, whether integrated into the NAS head, as with OnStor, or standalone, as with EMC's Rainfinity or Attune Systems. Data-movement utilities, such as those from Atempo and Enigma Data, are improving as well. Finally, intelligent storage controllers like those from Compellent or 3PAR can move data at the block level with no user intervention.

If all the components of the solution are there, why haven't we seen adoption of the archive tier? First, I think there is a perception that simply expanding tier one is easier than integrating an archive tier; in actuality, growing tier one is quite complex and expensive. Second, I think aggressive server virtualization rollouts have hurt the development of the archive tier: with IT personnel so focused on virtualization deployments, they've had little time for data management projects. Last, the archive components above need continued improvement, both to simplify implementation and to further automate migration.

That said, taking the time today to develop an archive tier will deliver immediate cost and power savings, as well as longer-term preparedness for retention requirements and the eventual move to an SSD-based primary storage tier. The companies that implement an SSD-based primary storage strategy first will have a significant competitive advantage.

For more on cloud storage, sign up for our Webcast Sept. 29: Cloud Storage 101.

Track us on Twitter: http://twitter.com/storageswiss.


George Crump is founder of Storage Switzerland, an analyst firm focused on the virtualization and storage marketplaces. It provides strategic consulting and analysis to storage users, suppliers, and integrators. An industry veteran of more than 25 years, Crump has held engineering and sales positions at various IT industry manufacturers and integrators. Prior to Storage Switzerland, he was CTO at one of the nation's largest integrators.

