Proceed Gradually With Fibre Channel Over Ethernet
There has been some concern recently about Fibre Channel over Ethernet's (FCoE's) readiness to be deployed in the IT infrastructure. While the technology will continue to mature, it should already be suitable for many environments. No one is suggesting that the move to FCoE be a total rip-and-replace; rather, it should be a gradual move as the opportunity arises.
An example would be deploying a new rack of servers in a virtual environment; this is an area FCoE is ideal for. Without FCoE, virtual servers demand a lot of HBAs and I/O bandwidth. A typical configuration includes four quad-port 1Gb Ethernet cards and two dual-port 4Gb Fibre Channel cards. Doing the math, that is up to 20 cables going into each server. Multiplied over a rack of servers, this can be 100 or more cables per rack.
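To put numbers on that cable sprawl, here is a quick back-of-the-envelope sketch of the arithmetic; the rack density of 10 servers is my assumption for illustration, not a fixed figure:

```python
# Cable count for a virtualized rack, using the configuration described
# above. Rack size of 10 servers is an illustrative assumption.

ETH_CABLES_PER_SERVER = 4 * 4   # four quad-port 1Gb Ethernet NICs
FC_CABLES_PER_SERVER = 2 * 2    # two dual-port 4Gb Fibre Channel HBAs
SERVERS_PER_RACK = 10           # assumed rack density

traditional = ETH_CABLES_PER_SERVER + FC_CABLES_PER_SERVER  # 20 per server
fcoe = 2                                                    # one dual-port FCoE card

print(f"Traditional: {traditional} cables/server, "
      f"{traditional * SERVERS_PER_RACK} per rack")
print(f"FCoE:        {fcoe} cables/server, "
      f"{fcoe * SERVERS_PER_RACK} per rack")
```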
While installing 10Gb Ethernet (10GbE) can help alleviate the cabling problem on the Ethernet side, it does nothing for the storage side if Fibre Channel is your storage connection methodology. Today, for many environments, the jump from 1GbE to 10GbE also provides more bandwidth than any single workload needs. 10GbE needs QoS-type functionality to allocate that bandwidth properly across virtual machines, which requires more advanced and more expensive cards, even though 10GbE would allow greater virtual machine density. Even then, much of the bandwidth goes unused by the physical hosts.
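To illustrate why that QoS functionality matters, here is a minimal sketch of a weighted split of one 10Gb link across virtual machines. The VM names and weights are hypothetical, and real QoS engines do far more (shaping, policing, burst handling), but the proportional-share idea is the heart of it:

```python
# Illustrative weighted split of one 10Gb link across virtual machines.
# VM names and weights are made up for the example.

LINK_GBPS = 10.0

vm_weights = {"db-vm": 4, "web-vm": 2, "test-vm": 1, "backup-vm": 1}

total = sum(vm_weights.values())
for vm, weight in vm_weights.items():
    share = LINK_GBPS * weight / total
    print(f"{vm}: {share:.2f} Gb/s guaranteed minimum")
```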
FCoE addresses some of these challenges. First, it provides 10Gb of bandwidth to both IP traffic and Fibre Channel traffic, consolidating the cabling. Today, FCoE cards come in single-port or dual-port configurations; for most environments that means either two or four cables per physical host. FCoE also provides built-in QoS. On the storage side, the early form of this is likely to be prioritization, leveraging N_Port ID Virtualization (NPIV) to provide additional buffer credits to virtual machines, as we discuss in our article "Using NPIV to Optimize Server Virtualization's Storage". In addition, the Converged Enhanced Ethernet (CEE) specification includes provisions for complete QoS functionality, and I expect vendors to add that capability to their cards within the year. Finally, FCoE provides a better chance of fully utilizing the available 10Gb of bandwidth by placing both storage traffic and IP traffic on the same infrastructure.
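For a feel of how CEE-style bandwidth grouping works, here is a rough sketch: storage and IP traffic classes each get a guaranteed share of the 10Gb link, and either class can borrow headroom the other isn't using. The 50/50 split is illustrative, not a recommended configuration:

```python
# Sketch of the CEE bandwidth-grouping concept: guaranteed shares per
# traffic class, with unused headroom available to the other class.
# Percentages here are illustrative only.

LINK_GBPS = 10.0

priority_groups = {"fcoe": 0.5, "ip": 0.5}  # guaranteed shares

def available(group, demand_gbps, other_demand_gbps):
    """Guaranteed share plus whatever the other group leaves idle."""
    guaranteed = LINK_GBPS * priority_groups[group]
    other_guaranteed = LINK_GBPS - guaranteed
    spare = max(other_guaranteed - other_demand_gbps, 0.0)
    return min(demand_gbps, guaranteed + spare)

# FCoE wants 7Gb/s while IP traffic only needs 2Gb/s: FCoE uses the slack.
print(available("fcoe", 7.0, 2.0))  # -> 7.0
```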
With FCoE, the two or four cables that run from the physical hosts within the rack will typically go to a top-of-rack switch, which then splits traffic to either the IP infrastructure or the Fibre Channel infrastructure. There are also blades available for organizations using a larger backbone as their core. In either case, FCoE can be added to the environment gradually, providing the benefits of reduced cabling and fewer interface cards in the connected servers.
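Conceptually, that top-of-rack split comes down to steering frames by EtherType: FCoE frames head to the Fibre Channel fabric, IP frames to the network. Real switches do this in silicon; this toy classifier just shows the decision:

```python
# Toy illustration of the split a top-of-rack FCoE switch performs.

FCOE_ETHERTYPE = 0x8906  # FCoE
FIP_ETHERTYPE = 0x8914   # FCoE Initialization Protocol
IPV4_ETHERTYPE = 0x0800  # IPv4

def steer(ethertype):
    if ethertype in (FCOE_ETHERTYPE, FIP_ETHERTYPE):
        return "fibre-channel-fabric"
    if ethertype == IPV4_ETHERTYPE:
        return "ip-network"
    return "other"

print(steer(0x8906))  # -> fibre-channel-fabric
print(steer(0x0800))  # -> ip-network
```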
An interesting wrinkle is how some of the I/O virtualization vendors will work in this environment. As I mentioned in my blog over on Network Computing, these systems can work in conjunction with FCoE to provision that 10Gb of bandwidth across multiple servers, further improving utilization efficiency while driving down costs.
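A rough comparison shows why sharing the pipe helps; the per-server demand figure below is hypothetical, chosen only to make the point:

```python
# Dedicated 10Gb links per server versus a shared pool of 10Gb links
# provisioned across servers. Demand figure is hypothetical.

SERVERS = 8
AVG_DEMAND_GBPS = 1.5
LINK_GBPS = 10.0

# Dedicated: one 10Gb link per server, mostly idle.
dedicated_util = AVG_DEMAND_GBPS / LINK_GBPS
print(f"Dedicated links: {dedicated_util:.0%} average utilization each")

# Shared: pool sized closer to aggregate demand (two links here).
pool_links = 2
shared_util = (SERVERS * AVG_DEMAND_GBPS) / (pool_links * LINK_GBPS)
print(f"Shared pool of {pool_links} links: {shared_util:.0%} utilization")
```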
While FCoE will continue to evolve and improve, the time for FCoE is now; just move gradually, and don't throw out your current infrastructure. The technology is well suited to a step-by-step changeover and can peacefully co-exist with what you already have.
Track us on Twitter: http://twitter.com/storageswiss
Subscribe to our RSS feed.
George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments. Find Storage Switzerland's disclosure statement here.