Aaron Martin likes to plan ahead. One year ago, the IT manager at Loro Piana, an Italian luxury goods manufacturer with U.S. operations in New York, plunked down $30,000 for a 10Gbit/sec Ethernet storage array from Nimbus Data Systems Inc. At the time, it was one of the few iSCSI-based storage systems available that took advantage of 10Gbit/sec Ethernet speeds; most systems supported only 1Gbit/sec Ethernet.
As a result, Martin had to do a bit of research to piece together the rest of the infrastructure, including an upgrade to his existing Cisco Systems Inc. Gigabit Ethernet switch and, later, a 10Gbit/sec Ethernet network adapter from Neterion Inc. Even his new ESX servers from VMware Inc. -- which he bought at the same time as the Nimbus array -- did not support 10Gbit/sec Ethernet but instead connected to the Nimbus system at 1Gbit/sec each.
"I had a lot of systems coming at the Nimbus, one G at a time," he says, including other physical servers that used Nimbus for CIFS-based storage. "Even though I couldn't use it at full capacity, I could use it as an aggregate because I had one big, fat pipe always connected to the Nimbus." The array consists of 12 500GB drives, configured with four terabytes of capacity.
Now, with the formation this week of the 10 Gigabit Ethernet Storage Alliance, users who want to implement 10Gbit/sec Ethernet storage systems may not feel quite so far out on the frontier. Nimbus created the alliance in partnership with switch vendors Arastra Inc., Force10 Networks Inc., Fujitsu Microelectronics America Inc. and Fulcrum Microsystems Inc., as well as adapter vendors Mellanox Technologies Ltd., Neterion and NetXen Inc. The purpose of the alliance, according to Thomas Isakovich, CEO at Nimbus, is to create "a multi-vendor ecosystem" focused on raising awareness of 10Gbit/sec Ethernet as "the superior open and unified platform for storage and networking." Isakovich also expects blade server vendors and systems integrators to join.
Compared with 4Gbit/sec Fibre Channel, a 10Gbit/sec Ethernet-based storage infrastructure can cut storage network costs by 30% to 75% and increase bandwidth 2.5 times, according to Isakovich. And because block and file storage can be combined on one network, he says, customers can cut costs by 50% and simplify IT administration. By bringing together switch, storage and adapter vendors, Isakovich says, the alliance can demonstrate to customers the interoperable pieces now available for 10Gbit/sec Ethernet storage deployment. "The perception is that there isn't a complete product family," he says. "This will help bring it all together."
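The bandwidth multiple, at least, is simple line-rate arithmetic (the cost percentages are Isakovich's claims and can't be derived the same way). A quick sketch, noting that encoding overhead actually widens the gap in Ethernet's favor:

```python
# Nominal line rates: 10Gbit/sec Ethernet vs. 4Gbit/sec Fibre Channel.
ethernet_gbps = 10.0
fibre_channel_gbps = 4.0
print(f"Nominal multiple: {ethernet_gbps / fibre_channel_gbps:.1f}x")  # 2.5x

# Effective payload rates differ further: 4G FC uses 8b/10b encoding
# (~80% efficient), while 10GbE uses 64b/66b (~97% efficient).
fc_payload = fibre_channel_gbps * 8 / 10
eth_payload = ethernet_gbps * 64 / 66
print(f"Effective multiple: {eth_payload / fc_payload:.1f}x")  # ~3.0x
```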
"There are very few players in the 10Gbit/sec Ethernet realm, so having them all available in an alliance makes more sense ," Martin agrees. "Since I was so early in market, I was really watching it, so it fell into place for me, but having an alliance to say, 'Here are the players in the industry,' could get the market moving and make it more prevalent to IT managers."
Michael Peterson, president of Strategic Research Corp., a Santa Barbara, Calif.-based consultancy, agrees that creating a community is key to building credibility and differentiation for the 10Gbit/sec Ethernet storage market. But there's a bigger picture that the alliance is addressing: "This is more than storage we're talking about here," he says. "This isn't just Fibre Channel vs. IP." The fact is, he says, 10Gbit/sec Ethernet is an outstanding platform to pool and virtualize server I/O, storage and network resources and to manage them together to reduce complexity. "With 10Gbit/sec Ethernet the dominant platform in servers and blade racks, why not just extend storage directly to it instead of through another interconnect?" he says. "You can extend virtualization from the server to the storage and create a managed environment that can be automated more simply because it's less complex."
In fact, Peterson says, he wouldn't be surprised to see the new alliance create ties with other organizations, such as the Storage Networking Industry Association (SNIA) and the Blade System Alliance (BladeS), to address larger interoperability and management needs.
"The whole vision is consolidate, consolidate, consolidate," Isakovich agrees, "which means collapsing all these applications onto one big server with a ton of memory to run as many virtual machines as possible. But if your connection to storage is a mere 1Gb Ethernet, that's nowhere near the performance you need." By putting in 10Gbit/sec Ethernet, you can consolidate more virtual machines, he says. "It not only leapfrogs Fibre Channel and brings iSCSI up-market; it also enables greater consolidation by delivering a big fat pipe with the simplicity of IP."
Early adopter
For his part, Martin is happy that 10Gbit/sec Ethernet is catching on. He hopped on the 10Gbit/sec Ethernet bandwagon early because, as a small IT shop, his whole focus is on consolidating his infrastructure onto a minimum number of powerful boxes. His VMware servers are each outfitted with two quad-core processors, for a total of eight cores per server. "If I can run that many VMs on one box, I need everything wide open," he says, "not just on the box but on my network connection and the iSCSI array. I need it all running as fast as it can so I can put all my infrastructure on a few boxes and not manage all that hardware." Fibre Channel, he says, was out of the question because it would have required a whole separate investment in staff and infrastructure.
Plus, with growing acceptance of 10Gbit/sec Ethernet, he says, VMware ESX 3.5 now supports the faster standard, and in March, he upgraded his two ESX servers with Neterion adapters to run at the higher speed, letting him take full advantage of the 10Gbit/sec Ethernet capacity of the Nimbus storage. Neterion is one of the few vendors that sell 10Gbit/sec Ethernet adapters with VMware support, he says. Martin also had to buy a new 10Gbit/sec Ethernet switch from Fujitsu, since the VMware servers and the Nimbus array now all plug into the same switch, and his existing Cisco switch didn't have enough ports to accommodate all that hardware.
Before the upgrade, the VMware servers posed a bit of a bottleneck, he says: while he could bond together four network cards for traffic coming into each VMware server, ESX allowed only a single 1Gbit/sec Ethernet pathway to the storage array. "Traffic would go through four cards and then drop down to one card of 1Gbit/sec Ethernet, and that was a bottleneck," he says. "But with a 10Gbit/sec Ethernet card on VMware, everything has sped up -- my VM transfers have gone through the roof in comparison to what I was doing before."
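The before-and-after follows directly from the rule Martin cites below: an end-to-end path runs at the rate of its slowest segment. A small illustrative sketch (the segment names and rates are assumptions based on his description):

```python
def bottleneck_gbps(path):
    """An end-to-end path runs at the rate of its slowest segment."""
    return min(path.values())

# Before: four bonded 1GbE NICs into the ESX host, but only one 1GbE
# pathway from the host to the iSCSI array.
before = {"bonded front-end NICs": 4.0,
          "host-to-array iSCSI path": 1.0,
          "array uplink": 10.0}
# After: a Neterion 10GbE adapter removes the 1GbE choke point.
after = dict(before, **{"host-to-array iSCSI path": 10.0})

for label, path in (("before", before), ("after", after)):
    print(f"{label}: {bottleneck_gbps(path):.0f} Gbit/sec end to end")
# Host-to-array traffic itself (e.g., VM transfers) jumps from 1 to 10 Gbit/sec.
```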
Now, he's limited only by the speed of the hard drives in the Nimbus array -- but not for long. He plans to replace some of the array's disks with solid-state drives for specific database servers that can use the extra speed. "You can only go as fast as your slowest link," Martin says. In the future, Martin plans to virtualize 40 of the company's desktops as well, putting more pressure on the VMware servers.
"I was ahead of the curve because I was doing this a year ago," Martin says. "I realized 10Gbit/sec Ethernet was where it was going to be eventually."