Increasing Storage Utilization Rates

George Crump, President, Storage Switzerland

April 19, 2010

3 Min Read

In a recent entry (http://www.informationweek.com/news/government/cloud-saas/showArticle.jhtml?articleID=224202488), John Foley discusses some of the pros and cons of leveraging cloud computing to increase IT efficiency in the Federal Government. One of the more startling statements is how low storage utilization is. Of course, low utilization is not solely a Federal IT problem; the private sector has its challenges with storage utilization as well. What can be done to increase storage utilization rates?

Increasing storage utilization can come from two areas: making the data you currently have more space efficient and making the capacity you have more efficient. Making the data you have more space efficient usually amounts to leveraging compression, deduplication, or both. You can also manage older data and migrate it off to less expensive and potentially optimized secondary storage, or even to cloud storage. Technologies to migrate that data, like file virtualization or data optimization software with migration capabilities, are maturing rapidly, becoming more seamless and easier to use.
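To make the deduplication idea concrete, here is a minimal sketch that estimates how much duplicate data a directory tree contains by fingerprinting fixed-size chunks. The chunk size, the scan path, and SHA-256 fingerprinting are illustrative assumptions; production deduplication engines typically use variable-size chunking and far more sophisticated indexing.

```python
# Minimal sketch: estimate deduplication potential by hashing fixed-size chunks.
# Chunk size, path, and SHA-256 fingerprints are assumptions for illustration;
# real dedup engines use variable-size chunking and persistent indexes.
import hashlib
import os

CHUNK_SIZE = 64 * 1024  # 64 KB chunks (an assumption, not a vendor default)

def chunk_hashes(path):
    """Yield a SHA-256 fingerprint for each fixed-size chunk of a file."""
    with open(path, "rb") as f:
        while True:
            chunk = f.read(CHUNK_SIZE)
            if not chunk:
                break
            yield hashlib.sha256(chunk).hexdigest()

def estimate_dedup(root):
    """Walk a directory tree and compare total chunks to unique chunks."""
    total, unique = 0, set()
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            for digest in chunk_hashes(os.path.join(dirpath, name)):
                total += 1
                unique.add(digest)
    return total, len(unique)

if __name__ == "__main__":
    total, uniq = estimate_dedup("/data/archive")  # hypothetical path
    if uniq:
        print(f"{total} chunks, {uniq} unique -> ~{total / uniq:.1f}:1 dedup ratio")
```

A quick scan like this is also a cheap way to decide which shares are worth migrating to a deduplicating secondary tier and which would see little benefit.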

The problem, however, is how do you optimize your capacity? This is storage that has been hard assigned to a server but has not yet been consumed, that "little extra" you give to each server to keep from having to continuously provision more storage to it. Thin provisioning has established itself as the ideal way to free up this captive capacity. Be careful, though: thin provisioning is at its most effective on its first day and on net new data. As data is deleted from the volume, the system does not reclaim that capacity. Modern thin provisioning systems, as we discussed in our white paper "Thin Provisioning Basics," have evolved to overcome this weakness; reclamation can now be done with certain file systems and on second-generation thin provisioning systems.
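Here is a minimal sketch of the allocate-on-write and reclamation behavior described above, assuming a shared physical block pool and an unmap-style hint from the file system. The class names, block granularity, and API are hypothetical, not any vendor's implementation.

```python
# Minimal sketch of thin provisioning: physical blocks are allocated only when a
# logical block is first written, and an unmap/TRIM-style call releases them back
# to the shared free pool. Names and sizes are illustrative assumptions.
class BlockPool:
    def __init__(self, physical_blocks):
        self.free = set(range(physical_blocks))

    def allocate(self):
        if not self.free:
            raise RuntimeError("physical pool exhausted")
        return self.free.pop()

    def release(self, block):
        self.free.add(block)

class ThinVolume:
    def __init__(self, logical_blocks, pool):
        self.logical_blocks = logical_blocks  # advertised (virtual) size
        self.pool = pool                      # shared physical block pool
        self.mapping = {}                     # logical block -> physical block

    def write(self, logical_block, data):
        if logical_block not in self.mapping:
            # Physical capacity is consumed only on the first write to a block.
            self.mapping[logical_block] = self.pool.allocate()
        # ... write data to the mapped physical block ...

    def unmap(self, logical_block):
        # Reclamation: without an unmap hint from the file system, deleted data
        # stays mapped and the capacity remains captive.
        physical = self.mapping.pop(logical_block, None)
        if physical is not None:
            self.pool.release(physical)

pool = BlockPool(physical_blocks=1000)
vol = ThinVolume(logical_blocks=4000, pool=pool)  # over-provisioned 4:1
vol.write(0, b"payload")
vol.unmap(0)  # returns the physical block to the shared pool
```

The design choice the sketch highlights is exactly the weakness noted above: the unmap path only runs when the file system tells the array about deletions, which is why first-generation thin provisioning loses effectiveness as volumes age.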

If you are like many data centers, only 25% to 35% of your capacity is actually storing real data, and if you have added one of the data optimization technologies above, the percentage may be even lower. Nor can you simply turn drive shelves off: most storage systems are designed to write data vertically across shelves for increased redundancy, so there is typically no free shelf of storage.
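A small sketch of why that is, assuming a round-robin (vertical) layout; the shelf, disk, and block counts are made-up numbers chosen only to show that even 25% utilization leaves live data on every shelf.

```python
# Why shelves can't be powered down: vertical striping rotates consecutive
# blocks across every shelf, so low overall utilization still touches them all.
# Counts below are illustrative assumptions, not a real array configuration.
SHELVES = 4
DISKS_PER_SHELF = 12
BLOCKS_PER_DISK = 1000
TOTAL_BLOCKS = SHELVES * DISKS_PER_SHELF * BLOCKS_PER_DISK

def shelf_for_block(block_id):
    # Consecutive blocks land on successive shelves (vertical striping).
    return block_id % SHELVES

used_blocks = range(int(TOTAL_BLOCKS * 0.25))  # 25% utilization
occupied = {shelf_for_block(b) for b in used_blocks}
print(f"25% utilization, shelves holding live data: {len(occupied)} of {SHELVES}")
```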

These advanced thin provisioning systems are ideal when you are refreshing your legacy storage system, but what if you are not able to do so? If you have a legacy storage system with low utilization rates but no thin provisioning capabilities, is there a downside? After all, you have more than likely already bought and paid for the equipment. Why buy something new to get one capability?

There are downsides to not being able to increase utilization. First, the wasted capacity may cause you to buy more storage sooner than you need to. Second, you are paying to power and cool that additional capacity. Third, that capacity is taking up floor space in your data center, floor space that you probably need for something else. If you are anywhere close to the 25% utilization statistic, that means you could reduce purchases, power and cooling, and floor space by as much as 75%. That alone may justify a sooner-than-planned storage refresh.
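A back-of-the-envelope version of that argument, with every figure (capacity, cost per TB, watts per TB) a hypothetical placeholder rather than vendor data:

```python
# Rough savings estimate for captive capacity. All inputs are assumptions
# chosen for illustration; substitute your own numbers.
raw_tb = 100          # installed raw capacity (TB)
utilization = 0.25    # share of capacity actually storing data
cost_per_tb = 500     # $ per TB for the next purchase (assumption)
watts_per_tb = 8      # power plus cooling per TB (assumption)

captive_tb = raw_tb * (1 - utilization)
print(f"Captive capacity: {captive_tb:.0f} TB of {raw_tb} TB")
print(f"Deferred purchase value: ${captive_tb * cost_per_tb:,.0f}")
print(f"Power and cooling tied up: {captive_tb * watts_per_tb:.0f} W")
```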

Over the next several entries we will cover how to get more utilization out of what you have, what to look for if you're going to refresh, and how to manually re-optimize a storage system so you can power down parts of it.

Track us on Twitter: http://twitter.com/storageswiss

Subscribe to our RSS feed.

George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Find Storage Switzerland's disclosure statement here.

About the Author

George Crump

President, Storage Switzerland

George Crump is president and founder of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. With 25 years of experience designing storage solutions for datacenters across the US, he has seen the birth of such technologies as RAID, NAS, and SAN. Prior to founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration, and product selection. George is responsible for the storage blog on InformationWeek's website and is a regular contributor to publications such as Byte and Switch, SearchStorage, eWeek, SearchServerVirtualization, and SearchDataBackup.
