Storage Compute Flexibility Can Maximize Storage Dollars

Storage compute is the amount of processing power that the storage system/array has to be able to handle storage I/O tasks. How powerful your storage processor is directly affects how many drives can be sustained by your system while maintaining full performance and how long it will be before you need to add an additional storage system. The flexibility and efficiency of storage compute engines can maximize the storage dollars you have in your budget.

George Crump, President, Storage Switzerland

July 2, 2009

4 Min Read

Storage systems are basically servers with storage software running on them, and just like any other server they have a processing core to handle the tasks they are asked to accomplish. In fact, many storage systems now run on standard Intel architectures supplied by companies like Xyratex and Intel itself.

The tasks performed by the storage processor include everything from basic array and volume management all the way up to advanced features like snapshots and thin provisioning. Just like a server processor, the more you ask a storage processor to do, the less capacity it has left for everything else. A rough sense of that trade-off is sketched below.
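To make the trade-off concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it is an assumption chosen purely for illustration, not a measurement of any real array; the point is only that per-I/O feature overhead reduces the I/O rate the same processor can sustain.

```python
# Illustrative only: rough model of a storage processor's cycle budget.
# All constants are assumptions, not figures from any vendor's array.

CONTROLLER_BUDGET = 100_000      # abstract "work units" per second the processor can do
COST_PER_IO = 1.0                # work units to route one basic I/O request
SNAPSHOT_OVERHEAD = 0.25         # assumed extra work per I/O when snapshots are active
THIN_PROV_OVERHEAD = 0.15        # assumed extra work per I/O for thin-provisioning bookkeeping

def max_iops(features_enabled):
    """Return the I/O rate the processor can sustain given the enabled features."""
    per_io_cost = COST_PER_IO
    if "snapshots" in features_enabled:
        per_io_cost += SNAPSHOT_OVERHEAD
    if "thin_provisioning" in features_enabled:
        per_io_cost += THIN_PROV_OVERHEAD
    return CONTROLLER_BUDGET / per_io_cost

print(f"Base I/O only:          {max_iops(set()):,.0f} IOPS")
print(f"With snapshots:         {max_iops({'snapshots'}):,.0f} IOPS")
print(f"Snapshots + thin prov.: {max_iops({'snapshots', 'thin_provisioning'}):,.0f} IOPS")
```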

The primary responsibility of a storage processor is to keep disk drives running as fast as they are able to go. Assuming there are enough inbound storage I/O requests, the storage processor becomes very busy routing those requests to the right drives. In a typical environment you can improve performance by adding drives: the more drives you have, the faster the system can perform and the better the application will perform.

This method of improving performance usually works until one of two situations occurs. First, you run out of storage processing power and the system can no longer scale performance as drives are added; essentially, as we describe in our article "Searching for High Performance Storage," the storage compute has become the bottleneck. The second situation that causes the "add drives" method to fail is when the applications themselves can't generate enough simultaneous storage I/O requests to keep the drives busy. In this case the only way to increase performance is to improve the response time of the drives themselves, which we will cover in our next entry.
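A simple way to see both failure modes is to treat delivered performance as the minimum of three limits: what the drives can supply, what the controller can route, and what the application can keep in flight. The sketch below is a hypothetical model with assumed figures; in this particular set of numbers the controller ceiling is what flattens the curve, and shrinking the application's outstanding I/O count instead makes the application the limit.

```python
# Illustrative model: delivered IOPS is capped by the weakest of three limits.
# All constants are assumptions chosen only to show the shape of the curve.

PER_DRIVE_IOPS = 180          # assumed sustainable rate for one 15K disk drive
CONTROLLER_MAX_IOPS = 40_000  # assumed ceiling of the storage processor
APP_OUTSTANDING_IOS = 256     # simultaneous requests the application keeps in flight
DRIVE_LATENCY_S = 0.005       # assumed ~5 ms average service time per I/O

def delivered_iops(drive_count):
    drive_limit = drive_count * PER_DRIVE_IOPS
    # Little's Law: concurrency / latency bounds what the application can drive.
    app_limit = APP_OUTSTANDING_IOS / DRIVE_LATENCY_S
    return min(drive_limit, CONTROLLER_MAX_IOPS, app_limit)

for drives in (24, 48, 96, 192, 384):
    print(f"{drives:4d} drives -> {delivered_iops(drives):>10,.0f} IOPS")
```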

When the storage controller itself becomes the bottleneck, the only options are to replace the storage processor with a new one or to add a second storage system. The problem is that the cost to upgrade storage processing power is usually substantial. More concerning, the entire storage controller must be replaced. Some vendors have made this less painful by at least not forcing you to replace the drives as well, but adding processing power is expensive and may require downtime.

The other option, adding a second storage system, has its own challenges. First, data must be migrated to it, which means that data needs to be identified and downtime for those applications needs to be scheduled. Second, two units will typically require more power than one, which certainly does not maximize the storage I/O budget. Finally, the second system is now a second point of management, which means reduced administrator efficiency.

This impending upgrade forces you as a storage administrator to buy a more powerful storage controller than you need up front. This is technology; we all know that today's processing power will be much less expensive in a year. Ideally you only want to buy the processing power you need at the time you need it. This is where storage compute flexibility can maximize storage dollars. If you select a storage system with storage processing flexibility, you can buy the processing power as you need it, when it is cheaper.
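A simple way to put dollars on that argument is to compare buying a controller sized for future demand today against buying compute increments only as growth requires them, with per-unit prices falling over time. The sketch below is hypothetical; the prices, the purchase schedule, and the 25% annual price erosion are assumptions for illustration, not vendor figures.

```python
# Hypothetical cost comparison: oversize up front vs. add compute as needed.
# Every number here is an assumption for illustration only.

BIG_CONTROLLER_NOW = 120_000     # price today of a controller sized for year-3 demand
INCREMENT_PRICE_NOW = 45_000     # price today of one add-on compute increment
PRICE_EROSION = 0.25             # assumed 25% per-year decline in compute pricing
INCREMENTS_NEEDED = [1, 1, 1]    # one additional increment purchased each year

upfront_cost = BIG_CONTROLLER_NOW

incremental_cost = 0.0
for year, count in enumerate(INCREMENTS_NEEDED):
    unit_price = INCREMENT_PRICE_NOW * (1 - PRICE_EROSION) ** year
    incremental_cost += count * unit_price

print(f"Buy big controller today: ${upfront_cost:,.0f}")
print(f"Buy increments as needed: ${incremental_cost:,.0f}")
```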

There are two storage deployment methods to achieve this flexibility. The first is to use clustered storage solutions, like those from 3PAR or Isilon, that can add processing power as needed and claim to sustain maximum drive performance across the full drive population. As more processing power is required, clustered storage maintains performance by simply adding compute power to the existing system.
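In terms of the earlier model, a scale-out cluster effectively raises the controller ceiling each time a node is added, so adding drives keeps paying off. The short extension below reuses the earlier assumed figures; no specific vendor's scaling behavior is implied.

```python
# Illustrative: in a scale-out cluster the controller ceiling grows with node count.
# Assumed figures only.

PER_DRIVE_IOPS = 180
PER_NODE_CONTROLLER_IOPS = 40_000
DRIVES_PER_NODE = 96

def cluster_iops(nodes):
    drive_limit = nodes * DRIVES_PER_NODE * PER_DRIVE_IOPS
    controller_limit = nodes * PER_NODE_CONTROLLER_IOPS  # ceiling rises with each node
    return min(drive_limit, controller_limit)

for nodes in (1, 2, 4, 8):
    print(f"{nodes} node(s): {cluster_iops(nodes):>9,} IOPS")
```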

The other option is to use storage solutions, like those from DataCore or StarWind Software, that are essentially software and can run on virtually any hardware platform. Because these storage software solutions run on off-the-shelf hardware, upgrading is less expensive: it becomes a matter of acquiring the Intel platform that offers the performance you need, once it reaches a price you can justify.

In our next entry we will cover the second issue, where adding drives won't help performance because the application itself does not generate enough simultaneous storage I/O requests. The traditional methods of solving this are very expensive, so knowing the options here can also maximize storage dollars.

Track us on Twitter: http://twitter.com/storageswiss

Subscribe to our RSS feed.

George Crump is founder of Storage Switzerland, an analyst firm focused on the virtualization and storage marketplaces. It provides strategic consulting and analysis to storage users, suppliers, and integrators. An industry veteran of more than 25 years, Crump has held engineering and sales positions at various IT industry manufacturers and integrators. Prior to Storage Switzerland, he was CTO at one of the nation's largest integrators.


About the Author

George Crump

President, Storage Switzerland

George Crump is president and founder of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. With 25 years of experience designing storage solutions for datacenters across the US, he has seen the birth of such technologies as RAID, NAS, and SAN. Prior to founding Storage Switzerland, he was CTO at one of the nation's largest storage integrators, where he was in charge of technology testing, integration, and product selection. George is responsible for the storage blog on InformationWeek's website and is a regular contributor to publications such as Byte and Switch, SearchStorage, eWeek, SearchServerVirtualization, and SearchDataBackup.
