Understanding Storage Bandwidth Performance
Storage bandwidth is the connectivity between servers and the storage they are attached to. When it comes to understanding storage bandwidth performance, you have two challenges to deal with. The first, and most obvious, is whether the storage can get the data to the application or user fast enough. The second, and less obvious, is whether the applications, and the hardware those applications run on, can take advantage of that bandwidth.
For most data centers, more than enough storage bandwidth is available: 8Gb Fibre Channel, 10Gb FCoE and 10Gb Ethernet. The first step should be to make sure the application can actually take advantage of that performance before you upgrade; if it can't, don't. This is one of the reasons smaller environments are served just fine by iSCSI, even on 1Gb connections; for them, that bandwidth is good enough.
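As a rough back-of-envelope illustration (not from the original article), a link's nominal rate can be converted into approximate usable throughput. The `protocol_overhead` factor below is an assumed allowance for encoding and framing, not a measured figure:

```python
def usable_mb_per_s(data_rate_gbit: float, protocol_overhead: float = 0.9) -> float:
    """Approximate usable throughput in MB/s for a link with the given
    nominal data rate in Gbit/s. The overhead factor is an assumption."""
    return data_rate_gbit * 1000 / 8 * protocol_overhead

# Nominal rates for the link types mentioned above (Gbit/s).
for name, gbit in [("1Gb iSCSI", 1), ("8Gb FC", 8), ("10Gb FCoE/Ethernet", 10)]:
    print(f"{name}: ~{usable_mb_per_s(gbit):.0f} MB/s usable")
```

If an application's peak demand sits well below even the 1Gb figure, a faster link buys nothing.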
Understanding the speed at which I/O is coming out of the server to the storage connection is key. Again, as mentioned in our last entry, some of the operating systems' built-in utilities can give you this information. As the environment gets more complicated, with server virtualization for example, we suggest using tools that can give you a more accurate picture, like those from Akorri, Virtual Instruments or Tek-Tools.
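The utilities differ by operating system (iostat on Linux, Performance Monitor on Windows), but the underlying arithmetic is the same: sample a cumulative byte counter twice and divide by the interval. A minimal sketch, with hypothetical sample values:

```python
def throughput_mb_s(bytes_start: int, bytes_end: int, interval_s: float) -> float:
    """Throughput in MB/s between two samples of a cumulative byte counter
    (e.g. the totals reported by iostat or /proc/diskstats on Linux)."""
    return (bytes_end - bytes_start) / interval_s / 1_000_000

# Hypothetical samples: 500 MB transferred over a 5-second window.
print(throughput_mb_s(1_200_000_000, 1_700_000_000, 5.0))  # 100.0 MB/s
```

Comparing that measured rate against the link's usable bandwidth tells you whether the connection, or something else, is the constraint.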
In addition to the application, you also need to examine the server hardware itself. Does it have the CPU power and memory capacity needed to get the data requests to and from the storage network? In most cases it does; if you have an application that needs high I/O, you have probably already upgraded the server hardware.
In the case of virtualized server environments, can that storage bandwidth be channeled or better allocated? Server virtualization changes the rules. Instead of a single server accessing storage or the network through a single interface, you now have multiple servers all accessing storage simultaneously. As we discussed in our article "Why Quality of Service is Even More Important in a Virtual Environment", high bandwidth in virtual environments may be put to better use when priorities can be assigned to each virtual machine.
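One common way to express such priorities is proportional shares: each virtual machine receives a slice of the link in proportion to its weight. A minimal sketch (the VM names and share values are illustrative, not tied to any particular hypervisor):

```python
def allocate_bandwidth(link_mb_s: float, shares: dict[str, int]) -> dict[str, float]:
    """Divide a link's bandwidth among VMs in proportion to their shares."""
    total = sum(shares.values())
    return {vm: link_mb_s * s / total for vm, s in shares.items()}

# A 1000 MB/s link split among three VMs with 2:1:1 priorities.
print(allocate_bandwidth(1000, {"db": 2, "web": 1, "batch": 1}))
```

Real QoS implementations typically treat these shares as minimum guarantees under contention rather than hard caps, so idle bandwidth can still be borrowed.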
When it comes to I/O at the storage system, most systems today have multiple connections leading up to the switch infrastructure, or in the case of clustered storage, the connections to the switch infrastructure scale as more storage is added. It is important that there is plenty of bandwidth to the storage controller, since it is likely to receive storage requests from many servers and virtual machines simultaneously. Most storage managers overbuy on storage system bandwidth, so it is typically not the primary bottleneck in the performance chain.
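Whether the controller's front-end links have headroom is simple arithmetic: compare the aggregate port capacity against the sum of expected peak demand from the attached servers. A back-of-envelope sketch with made-up numbers:

```python
def controller_headroom_mb_s(num_ports: int, port_mb_s: float,
                             server_demands_mb_s: list[float]) -> float:
    """Remaining front-end bandwidth after peak demand.
    A negative result means the controller links are oversubscribed."""
    return num_ports * port_mb_s - sum(server_demands_mb_s)

# Four ~800 MB/s (8Gb FC) ports serving three busy servers.
print(controller_headroom_mb_s(4, 800, [500, 600, 700]))  # 1400
```

In practice not every server peaks at once, which is why some oversubscription is usually safe and why overbuying here is so common.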
The disadvantage to this is that you are often paying for bandwidth that you will never use, or at least not until a future date. Ideally, you should buy just the bandwidth you need today and upgrade when you need more. This is one of the benefits of clustered storage systems like those from 3PAR, Isilon and HP's LeftHand Networks: bandwidth can grow as you need it, providing maximum flexibility and CAPEX optimization.
Clustered storage also plays a big role in addressing the next area of performance concern, the storage controller itself, which we will discuss in our next entry.
George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Find Storage Switzerland's disclosure statement here.