SSD Domination, Sooner Than You Think

Based on the recent news that Intel has announced an 80-GB solid state disk for less than $600, the end of the mechanical drive may arrive within the next five years.

George Crump, President, Storage Switzerland

September 10, 2008


Based on the recent news that Intel has announced an 80-GB solid state disk for less than $600, the end of the mechanical drive may arrive within the next five years. Think about it: a 15K-RPM, 300-GB Fibre Channel drive costs a little more than $400. While the 80-GB SSD from Intel isn't what I would call enterprise class, and there is certainly a gap in capacity, it is more than reasonable to expect the gap between solid state and mechanical drives to close relatively fast. And in Tier One storage, the gap doesn't need to close completely -- if you subscribe to the conventional wisdom that 80% of your data is inactive, you only need enough SSD capacity to hold the 20% of data that is most active.

Viewed from a performance-per-watt perspective, SSD is also greener. With mechanical drives you often have to buy extra drives just to get the performance you need; that isn't the case with SSD.

Some maturing still needs to happen. First, SSD technology is 30X or more faster than the current state of the art in mechanical drive performance, which means that today's storage drive shelves and controllers need to be optimized, if not totally redesigned, for this zero-latency environment. The current practice of storage manufacturers plugging SSD modules into their existing drive shelves is a short-term workaround to get SSD to the masses. Eventually these manufacturers will need to follow the model of companies like Texas Memory Systems, Solid Data, and Violin Memory, which have built systems from the ground up for the zero-latency environment.

Second, there needs to be intelligence at the storage controller level to move data in and out of the SSD area. Right now, SSD is expensive enough that most customers know exactly which files or components of a database they want to put on SSD. As SSD becomes less expensive and its capacity grows, its use will broaden, and the need to automate data movement in and out of the SSD tier will become more critical. This will continue at least until SSD becomes so inexpensive that all of your Tier One storage is SSD in some form. Even when we get to that point, maybe within the next five years, there will still need to be some intelligence to move data down to Tier 3 archive storage. That move will likely not be controller driven; it will be handled either by a global file system or by a specific but simple software data mover.

From a timeline perspective, I would expect SSD to remain application or even file specific for the next 18 months, although the number of applications that use it will grow. I don't expect to see the wild growth that some research firms have predicted. In the next two to four years, I would expect broader application of SSD across ever-growing chunks of Tier One storage, with some sort of automated data movement in and out of the SSD areas. Finally, within the next five years I would expect most data centers to begin moving toward a two-tier strategy of polar opposites, SSD and archive, with nothing in between.

Don't think, though, that once everything in Tier One is on SSD your performance problems will be solved. Initially, a lot of time will be spent addressing the latency issues that SSD exposes. For example, who thought we would be complaining that drive shelves aren't fast enough? Once the SSD-exposed latency issues are resolved, there will be complaints that SSD itself is not fast enough, and then we will have a whole new tiering system for SSD drives.
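To put rough numbers behind the 80/20 sizing argument above, here is a minimal back-of-the-envelope sketch in Python. The drive prices and the 80%-inactive rule of thumb are the figures from the column; the 10-TB Tier One total is an arbitrary example value, not data from any real environment.

```python
# Back-of-the-envelope sizing: how much SSD do you need if only the
# active 20% of Tier One data has to live on solid state?
# Prices are the figures quoted in the column (2008 dollars); the
# 10 TB Tier One figure is a hypothetical example.

FC_DRIVE_GB, FC_DRIVE_COST = 300, 400       # 15K Fibre Channel drive
SSD_DRIVE_GB, SSD_DRIVE_COST = 80, 600      # Intel 80-GB SSD
ACTIVE_FRACTION = 0.20                      # conventional 80/20 wisdom

tier_one_gb = 10_000                        # example: 10 TB of Tier One data
active_gb = tier_one_gb * ACTIVE_FRACTION   # only this slice needs SSD

ssd_drives = -(-active_gb // SSD_DRIVE_GB)  # ceiling division
ssd_cost = ssd_drives * SSD_DRIVE_COST

print(f"Cost per GB, Fibre Channel: ${FC_DRIVE_COST / FC_DRIVE_GB:.2f}")
print(f"Cost per GB, SSD:           ${SSD_DRIVE_COST / SSD_DRIVE_GB:.2f}")
print(f"SSD needed for active data: {active_gb:.0f} GB "
      f"({ssd_drives:.0f} drives, ~${ssd_cost:,.0f})")
```

Even at 2008 prices, the point is that the SSD premium applies only to the active slice of the data, not to the whole tier.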
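The column also argues that, as SSD spreads beyond hand-picked files, some intelligence has to automate movement between the SSD tier and the archive tier. The sketch below is a hypothetical illustration of that idea, not any vendor's actual controller logic: a toy policy that promotes frequently accessed files to SSD and demotes files untouched for a set number of days to archive. The thresholds and the File record are assumptions made up for the example.

```python
# Toy illustration of an automated tiering policy: promote hot files to
# the SSD tier, demote cold files to archive. The thresholds and data
# model are hypothetical, not a real product's logic.
from dataclasses import dataclass
from datetime import datetime, timedelta

HOT_ACCESSES_PER_DAY = 10          # promote if accessed at least this often
COLD_AFTER = timedelta(days=90)    # demote to archive after 90 idle days

@dataclass
class File:
    name: str
    tier: str                      # "ssd" or "archive"
    accesses_last_day: int
    last_access: datetime

def plan_moves(files, now=None):
    """Return a list of (file name, target tier) moves for this pass."""
    now = now or datetime.now()
    moves = []
    for f in files:
        if f.tier != "ssd" and f.accesses_last_day >= HOT_ACCESSES_PER_DAY:
            moves.append((f.name, "ssd"))          # hot data belongs on SSD
        elif f.tier == "ssd" and now - f.last_access > COLD_AFTER:
            moves.append((f.name, "archive"))      # idle data goes to archive
    return moves

if __name__ == "__main__":
    now = datetime(2008, 9, 10)
    catalog = [
        File("orders.db", "archive", 250, now),
        File("q1_report.pdf", "ssd", 0, now - timedelta(days=200)),
    ]
    print(plan_moves(catalog, now))  # [('orders.db', 'ssd'), ('q1_report.pdf', 'archive')]
```

Whether this logic lives in the storage controller, a global file system, or a standalone data mover is exactly the design question raised above.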

Track us on Twitter: http://twitter.com/storageswiss.

Subscribe to our RSS feed.

George Crump is founder of Storage Switzerland, an analyst firm focused on the virtualization and storage marketplaces. It provides strategic consulting and analysis to storage users, suppliers, and integrators. An industry veteran of more than 25 years, Crump has held engineering and sales positions at various IT industry manufacturers and integrators. Prior to Storage Switzerland, he was CTO at one of the nation's largest integrators.

About the Author

George Crump

President, Storage Switzerland

George Crump is president and founder of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. With 25 years of experience designing storage solutions for datacenters across the US, he has seen the birth of such technologies as RAID, NAS, and SAN. Prior to founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration, and product selection. George is responsible for the storage blog on InformationWeek's website and is a regular contributor to publications such as Byte and Switch, SearchStorage, eWeek, SearchServerVirtualization, and SearchDataBackup.

