This isn't a trick question, but one with a lot of tricky answers depending on how you define "big" and "fast."
Ethernet switch vendors such as 3Com Corp., Force10 Networks Inc., Cisco Systems Inc., Extreme Networks Inc., Foundry Networks Inc. and Hewlett-Packard Co.'s ProCurve constantly tussle over claims of the highest performance, density and lowest latency. But keep in mind that what's available right now from such vendors is three-year-old technology, on average. Meanwhile, a host of hungry start-ups such as Raptor Networks Technology Inc. and Woven Systems Inc. have a new take on how to build the "biggest" Ethernet switch. Their approach diverges from the single big-iron chassis and looks more like clustered supercomputing or InfiniBand networking topologies.
How fast Ethernet can go is bounded by the current 802.3ae standard -- 10Gbit/sec. -- so no single port is speedier than that, at least on paper. Other ways to measure switch heftiness are the bandwidth of the switch fabric and the density of ports the chassis or box supports. Then there's the performance of the ports themselves. Latency -- how long a switch holds onto a packet -- is also a factor in switch performance, as is jitter, a measure of how much that latency varies.
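For a concrete sense of how those last two measures relate, here is a minimal Python sketch that computes latency and jitter from per-packet timestamps. The timestamp values are invented for illustration, and jitter is taken here as the standard deviation of the per-packet latencies.

    # Minimal sketch: latency is the per-packet forwarding delay; jitter is how
    # much that delay varies. Timestamps below are hypothetical, in microseconds.
    from statistics import mean, pstdev

    send_times    = [0.0, 10.0, 20.0, 30.0, 40.0]   # packet enters the switch
    receive_times = [3.1, 13.0, 23.4, 32.9, 43.2]   # packet exits the switch

    latencies = [rx - tx for tx, rx in zip(send_times, receive_times)]
    print("mean latency (us):", round(mean(latencies), 2))
    print("jitter, as std dev of latency (us):", round(pstdev(latencies), 2))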
"Everybody does line rate on a per-port basis," says David Newman, president of Network Test and a member of Network World's Test Alliance. "The question then becomes how many ports do you do line rate on before you start dropping packets?"
In terms of published specifications, among the biggest of the core enterprise switches are Force10's E1200, Foundry's RX series, Cisco's Catalyst 6500, and Extreme's BlackDiamond.
Highest capacity
Comparing published specs, Foundry's RX-16 is the highest-capacity switch. It can run 64 10 Gigabit Ethernet ports at full speed, and up to 192 10-Gigabit ports in a chassis in an oversubscribed configuration (where the combined bandwidth of all ports exceeds the switch's capacity). Force10's E1200 TeraScale switch can run 56 10-Gigabit ports, or up to 224 10-Gigabit ports when oversubscribed. Extreme's BlackDiamond 10808 chassis can support 48 nonblocking 10-Gigabit ports. Cisco's Catalyst 6513 can handle 32 10 Gigabit Ethernet connections, all running at full duplex.
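A back-of-the-envelope calculation, using only the RX-16 port counts above, shows what "oversubscribed" means in practice:

    # Back-of-the-envelope oversubscription check, using the RX-16 figures above:
    # 64 ports can run at the full 10Gbit/sec. rate, but the chassis accepts up to 192.
    port_speed_gbps   = 10
    nonblocking_ports = 64    # ports the fabric can serve at line rate
    installed_ports   = 192   # ports present in the oversubscribed configuration

    attached_bw  = installed_ports * port_speed_gbps     # 1,920 Gbit/sec. offered
    forwarded_bw = nonblocking_ports * port_speed_gbps   #   640 Gbit/sec. guaranteed
    print("oversubscription ratio: %.0f:1" % (attached_bw / forwarded_bw))  # 3:1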
Some say that how a switch handles variables such as jitter and packet loss when it is running full blast is less important than how the vendors carve up per-slot and overall system bandwidth. "What I think is a more useful metric than throughput is latency," Newman says. "There, I'd say clearly Cisco is the best."
Newman says he has clocked a Cisco Catalyst 4948 at around 3 microseconds at 10-Gigabit rates, "which is the lowest I've measured," he adds. "Force10 was in the low double digits [in microseconds of delay]. They used to be hundreds of times higher, which would mean thousands of packets outstanding. But they've fixed it some over time."
With Force10's newest switch offering -- the S-series, which Newman says he has not yet tested -- the company claims latency in the 200- to 300-nanosecond range, roughly an order of magnitude lower than 3 microseconds. (The claim is based on a test of the product conducted by network-testing firm The Tolly Group Inc. and sponsored by Force10.)
Lawrence Berkeley National Laboratory, a U.S. Department of Energy research lab in Berkeley, Calif., uses both Force10 and Cisco switches in its data center and LAN core. Putting a finger on which of the two products is "fastest" or "best-performing" is difficult because the switches are used in different applications, says Mike Bennett, senior network engineer at the LBLnet Services Group.
"[I've] tested a 6500 with two ports of 10G and the E1200 with two ports of 10G -- and neither of them are oversubscribed, and neither drops packets," says Bennett. "So it's not that one's faster than the other. It's just that they both work as advertised."
When large numbers of 10Gbit/sec. ports are needed, Bennett uses the Force10 E1200 for its higher port density. "Typically, when you buy Cisco, you buy the kitchen sink when it comes to features," says Bennett. "Force10 is different in that they don't have everything in a particular version of an operating system." This is preferable for applications at the lab that require a high-density, nonblocking switch that simply moves packets fast. "We try to go with the simplest, fastest solution in order to minimize the number of variables to keep operating costs low," he says.
Next-generation architectures
Across town at Lawrence Livermore National Laboratory, also a DOE lab, the network team is looking into next-generation switch architectures that will move the Ethernet network to a level of bandwidth and latency found in storage-area networks.
"We're looking at the next-generation machines, and everything is going to go up by a factor of 10," says Dave Wiltzius, network division leader at the lab.
"Everything will be 10G. So we're looking for a switch, or switch fabric, that can give us on the order of 2,000 10G ports," he says. "We're basically interested in building a federated switch environment using fat-tree topologies and things like that."
The "fat tree" topology Wiltzius envisions involves a meshed nonblocking switching architecture modeled somewhat after the traditional public telephone network, where switches are simple devices with a few connectivity ports interconnected via multiple paths. What's more, they effectively utilize the bandwidth.
Ethernet doesn't yet do this -- or do it very well, at least.
One technique Wiltzius already uses to approximate this fat-tree effect is port aggregation, or Layer 2 "hashing," in which multiple Gigabit or 10 Gigabit Ethernet links are bonded into a larger virtual pipe. Tying switches together, or servers to switches, with hashed Ethernet pipes gives a larger virtual throughput, but this linking is limited to eight ports (up to 80Gbit/sec. with eight hashed 10Gbit/sec. links). The method uses an algorithm that spreads packets across the bonded connections. "With hashing, you get an uneven distribution, because of the random nature of the algorithm, which doesn't necessarily offer the best utilization of the bandwidth," he says.
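As a rough illustration of the technique, the Python sketch below hashes each flow's addresses onto one of eight bonded links. The flows and the CRC-based hash are invented for this example rather than being any vendor's actual algorithm, but they show how the resulting distribution can come out uneven.

    # Sketch of Layer 2 link-aggregation hashing: each flow's header fields are
    # hashed to pick one of the bonded links, so a single flow stays in order but
    # traffic may land unevenly across links. Flows and hash are illustrative only.
    from collections import Counter
    from zlib import crc32

    links = 8  # bonded 10Gbit/sec. links, the eight-port aggregation limit
    flows = [("00:1a:%02x" % i, "00:2b:%02x" % (i * 7 % 256)) for i in range(100)]

    def pick_link(src_mac: str, dst_mac: str) -> int:
        """Hash source/destination addresses to choose a member link."""
        return crc32(f"{src_mac}-{dst_mac}".encode()) % links

    load = Counter(pick_link(s, d) for s, d in flows)
    print("flows per link:", dict(sorted(load.items())))  # rarely an even split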
Some start-ups looking to address the limitations of switching include Arastra Inc., a closely held router company in Palo Alto, Calif., and Woven Systems in Santa Clara, Calif., which is in semi-stealth mode, developing an Ethernet-based mesh network product.
"What we're trying to do is deliver the best features of Fibre Channel/InfiniBand on a 10G Ethernet fabric," says Harry Quackenboss, president and CEO of Woven.
The approach Woven is taking is similar to the trend of grid or distributed, clustered computing, where large, symmetric multiprocessor (SMP) servers are being replaced by single- or dual-processor nodes coupled together over a network.
"The same thing is going to happen to LAN switching in the data center, with respect to scale out," Quackenboss says. "The big [LAN] switches are expensive, and the biggest nonblocking switch you can buy for data center applications is a 64-port Foundry system."
Woven is working on Layer 2 Gigabit and 10 Gigabit Ethernet data center switches that use special algorithms that allow the boxes to emulate InfiniBand or Fibre Channel networks in some ways. Multiple paths can be established among switches in the fabric, allowing bandwidth to be allocated more dynamically over the paths, since traffic lanes are not shut down, as in spanning tree-based Ethernet, Quackenboss says.
"If you want to build out a network of more than two switches, you can use link aggregation or trunking to bond groups of Ethernet segments," Quackenboss says. "But if you want to put three or more switches in a network, one switch becomes the bottleneck." Layer 3 switching, and protocols such as Open Shortest Path First and Equal Cost Multipath Protocol, can be used to create multipath networks, but these methods add cost. Layer 3 switch ports cost, on average, five times as much as Layer 2 ports, according to IDC.
Plugging servers into multiple ports on different devices in a fabric of switches would also make server reconfiguring easier, he says.
"Data center managers would like to be able to dynamically reconfigure applications and servers without physically recalling them," Quackenboss says. Leveling connectivity in the data center to multipath, Layer 2 Ethernet would help achieve this. The idea is somewhat analogous to the Layer 2 Metro Ethernet technologies being developed by carrier gear makers.
"In a sense, the modern data centers have enough servers in them that they resemble a collapsed a metro-area network into one room," he says.
Raptor's route
Another company that already has products shipping is Raptor Networks Technology, which makes fixed-configuration Gigabit and 10 Gigabit Ethernet switches that connect to one another to form a meshed fabric. Rather than focusing on high-density data centers, Raptor aims its gear at the LAN backbone and at aggregating wiring-closet traffic. "We've created the ability to do at L2 what everyone else must go to Layer 3 to do," says CEO Tom Wittenschlager.
The three-year-old company makes low-cost, fixed-configuration switches with 24 Gigabit Ethernet ports, six 10G Ethernet ports and 160Gbit/sec. of total switching bandwidth. The single-rack-unit boxes support Layer 2-4 switching and run a proprietary modification of 10G Ethernet that allows the devices to be hooked together in a multipath mesh at Layer 2 without using the spanning tree protocol.
Instead, the switches connect with the Raptor Adaptive Switch Technology (RAST), a protocol that binds the switches in a way similar to how modules in a chassis switch are hooked to the backplane or switch fabric. Citing internal company test data, Wittenschlager says the technology can move packets through a mesh of four Raptor switches -- passing a packet in and out of a 10G Ethernet port eight times among four boxes -- with 6.48 microseconds of latency.
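Taken at face value, those figures work out to well under a microsecond per port traversal, as a quick calculation based solely on the numbers Wittenschlager cites shows:

    # Quick arithmetic on the figures Wittenschlager cites: 6.48 microseconds of
    # total latency across eight 10G port traversals in a four-switch mesh.
    total_latency_us = 6.48
    port_traversals  = 8
    print("average latency per traversal (us):", total_latency_us / port_traversals)  # ~0.81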
"This creates the effect of each Raptor switch acting like a blade in a module, which allows traffic to travel among the switches very fast with low latency," he says. "To achieve this, routing information is inserted into unused header space in a standard Ethernet frame, which gives delivers switch heartbeat and route-path data among Raptor switches in a cluster, according to Wittenschlager.
"We've created the ability for physically separate blades [the Raptor switches] to communicate on a common backplane as if they were all inside one chassis. It's really one virtual switch, with blades that can be sitting up to 80 kilometers apart" when connected via 10G Ethernet over single-mode long-haul fiber, he says.
Non-Raptor switches connected to the 10G or Gigabit ports see the Raptor mesh as a single large LAN switch and can connect as simple Ethernet without added configuration, he says.
A mesh of four Raptor boxes recently replaced a core of two Catalyst 6509 switches in the network of L.A. Care Health Plan, the health care management firm for Los Angeles County employees.
The Raptor boxes were deployed to segment the company's flat Layer 2 LAN into virtual LAN subnets, keeping it at Layer 2, with 10G Ethernet in the core. Three 10G Ethernet pipes connect each box in the mesh; the Catalyst switches have been moved to the LAN edge for connecting the organization's 350 end users and other devices. Servers are plugged into the Raptor core on non-RAST Gigabit Ethernet ports.
After some initial spanning tree loop issues between the IOS-based Cisco routers and the RAST-based Raptor switches were resolved, the network is running "smooth and very fast," says Rayne Johnson, director of IT and security at L.A. Care. The Raptor product cost around $180,000 to install, while Cisco's quote to upgrade the core with 10G Ethernet and VLAN capabilities came in at around $500,000. "I usually don't get myself involved with a product" in its initial development, Johnson says. "But it was worth it. In the end, you can't beat the price."
This story, "What's the biggest, fastest LAN switch?" was originally published by Network World.