What to expect from iSCSI Link Aggregation on your network

More available bandwidth, not higher transfer rates from LACP/LAG


Configuring a datacenter network is extremely complicated. If you're not a network engineer, it's really easy to get yourself into trouble by slapping things together (note: I am not a network engineer). Among the many difficult tasks, properly configuring a Storage Area Network (SAN) is one of the most critical. It's so critical because the network requirements can be demanding, and it's paramount that the connected devices stay online and respond promptly.

A common goal in the datacenter is consolidation. It's much nicer to have relatively few storage devices than it is to have many servers each with their own directly attached storage, otherwise you're running all over the place dealing with dead drives. There is a trade-off with this strategy, however. Direct-attached storage is going to be the fastest storage since there is little in the way between the host and the disk. With a SAN you're using network-attached storage, so the host needs to communicate with the disk over a network cable, whether that's Fibre Channel or Ethernet. That means your I/O is now subject to two (or more) performance bottlenecks instead of one. Making things more difficult, a SAN generally has many hosts accessing each storage array, so a single host needs to compete for I/O with other hosts.

iSCSI, a SAN technology that runs over normal IP networks instead of Fibre Channel, has risen in popularity due to its low cost and respectable performance. It's usually recommended that you run a 10GbE network for your iSCSI SAN, but the reality is that most folks are running the much less expensive 1GbE infrastructure. That means that if your mighty storage array is connected to its hosts via 1GbE links, the maximum data rate you can achieve over a single link is going to be roughly 100MB/s, regardless of the fact that your storage array can crank out 500MB/s (for example). Your choice then is to either upgrade to 10GbE end-to-end or try to take advantage of something called Link Aggregation.
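To put rough numbers on that 100MB/s figure: a 1Gb/s link carries at most 125MB/s of raw bits, and Ethernet framing, TCP/IP, and iSCSI headers eat into that. Here's a back-of-the-envelope sketch; the 10% overhead figure is an illustrative assumption, not a measurement:

```python
# Rough arithmetic for what a single Ethernet link can actually deliver.
# The overhead fraction is an assumed ballpark, not a measured value.

def usable_throughput_mb_s(link_gbps: float, overhead_fraction: float = 0.10) -> float:
    """Raw line rate in MB/s minus an assumed protocol overhead
    (Ethernet framing + TCP/IP + iSCSI headers, ~10% here)."""
    raw_mb_s = link_gbps * 1000 / 8  # 1 Gb/s = 125 MB/s of raw bits
    return raw_mb_s * (1 - overhead_fraction)

print(usable_throughput_mb_s(1))   # one 1GbE link: ~112 MB/s best case
print(usable_throughput_mb_s(10))  # one 10GbE link: ~1125 MB/s
```

Real-world iSCSI numbers land somewhere near 100-115MB/s per 1GbE link depending on frame size (jumbo frames help) and tuning.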

Link Aggregation (Cisco calls this EtherChannel) is the combining of physical ports on a switch to form a single logical channel. By combining four 1GbE ports into a Link Aggregation Group (LAG), for example, you increase the possible bandwidth to the connected device to 4Gb/s. That previous sentence is the cause of monumental confusion over what this actually does.

Imagine that you have a storage server connected to a switch by four 1Gb Ethernet cables, and the switch is using LACP (the standard protocol for link aggregation) to form a LAG. You've also bonded the four NICs on the storage server to act as one. Now imagine that you have a database server connected to the SAN for storage, also connected via four NICs. Boom, 400MB/s, right? Kind of, but not really.

Without going nuts on details like MPIO round robin + LACP + NIC teaming and whatever else you should or shouldn't do, the gist of what you're going to get here is the potential for four different processes to each get 100MB/s of throughput. My favorite depiction of this comes from the folks over at altaro.com:

[Image: Altaro's train analogy diagram]

When a process establishes a connection with the SAN, MPIO on the database server determines which single NIC to use for the connection based on its path-selection policy. When the connection hits the LAG on the switch, LACP determines which single 1GbE port carries it (typically via a hash of source/destination addresses rather than measured load). The end result is that the communication with the storage server happens over just one pathway at a time and is subject to the bandwidth of that single path.
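A toy model makes the consequence obvious. LACP-style transmit hashing assigns each conversation to exactly one member link, so a single flow can never exceed one link's speed no matter how many links are in the LAG. The hash policy below (source/destination IP pair) is one common choice among several that real switches offer; the link names are made up for illustration:

```python
# Toy model of LACP-style transmit hashing: each "conversation" is hashed
# to exactly one member link of the LAG.

import hashlib

LINKS = ["eth0", "eth1", "eth2", "eth3"]  # four hypothetical 1GbE LAG members

def pick_link(src_ip: str, dst_ip: str) -> str:
    """Deterministically map a flow to a single member link."""
    digest = hashlib.md5(f"{src_ip}->{dst_ip}".encode()).digest()
    return LINKS[digest[0] % len(LINKS)]

# The same flow always lands on the same link, so one file copy
# is capped at that single link's ~100MB/s...
assert pick_link("10.0.0.5", "10.0.0.50") == pick_link("10.0.0.5", "10.0.0.50")

# ...while flows from different hosts can spread across the members.
for host in ["10.0.0.1", "10.0.0.2", "10.0.0.3", "10.0.0.4"]:
    print(host, "->", pick_link(host, "10.0.0.50"))
```

Note that nothing guarantees four hosts land on four different links; hashing balances flows statistically, not perfectly.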

What you do gain, however, is the ability for four of these paths to exist simultaneously at full speed. If you have four database servers, for example, they can each get the full 100MB/s of throughput to the storage server, assuming your disks can sustain 400MB/s. That's the major point of confusion. You're not going to be able to perform a single file copy at 400MB/s over a 1Gb link, even with four of them in aggregate.

There are fault tolerance benefits to this as well, but that's a different topic. If one (or more) of the 4 links to the storage server fail, the device will still appear online via the remaining connections.
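That failover behavior can be sketched with the same flow-hashing idea: when a member link dies, flows that were pinned to it simply get re-hashed across the survivors, so the storage target stays reachable, just with less aggregate bandwidth. This is a simplified illustration with made-up link names, not a model of any particular switch's failover logic:

```python
# Sketch of LAG fault tolerance: a flow pinned to a failed member link
# is re-mapped onto the surviving links.

import hashlib

def pick_link(flow: str, links: list) -> str:
    """Deterministically map a flow identifier to one active link."""
    digest = hashlib.md5(flow.encode()).digest()
    return links[digest[0] % len(links)]

links = ["eth0", "eth1", "eth2", "eth3"]  # hypothetical LAG members
flow = "10.0.0.2->10.0.0.50"

before = pick_link(flow, links)
links.remove(before)              # simulate that link going down
after = pick_link(flow, links)    # the flow remaps to a surviving link

print(f"{flow}: {before} (failed) -> {after}")
assert after in links             # still reachable, just on another port
```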

This should hopefully clear up what you can expect from using link aggregation if you were unsure. It may also push you to invest in 10GbE if you're starting from scratch and have the resources.

This story, "What to expect from iSCSI Link Aggregation on your network" was originally published by ITworld.

Copyright © 2014 IDG Communications, Inc.
