Greg Scherer was nearly a plank owner at Fibre Channel networking vendor Emulex Corp., where the engineer worked for 24 of the company's 28 years. Scherer, who retired last year, rose through the ranks to become chief technical officer at Emulex. This year, however, Scherer reentered the networking industry as vice president of product planning at Neterion Inc., a manufacturer of emerging 10 Gigabit Ethernet server and storage network adapter cards. The introduction of 10 Gigabit Ethernet, in combination with iSCSI, threatens to unseat the long-entrenched Fibre Channel protocol as the storage technology of choice in the data center and offers a single protocol for multiple networking needs. Computerworld spoke with Scherer about why he left more than a dozen years of Fibre Channel development to re-embrace Ethernet. The following are excerpts from that interview.
Why did you leave Emulex? I retired in the middle of last year. I had no intention of jumping back into the technology world. I planned on doing some consulting but not working for any one entity. When I got to know the team of people at Neterion ... it made me scratch my head. It reminded me of some of the world-class teams that I'd been involved with in the earlier days. It's a company that has good technology and works in an emerging growth area. We really saw a lot of the trends eye to eye and saw how to run a business in terms of not being afraid to take risks. That's one of the things I enjoyed about Emulex in the early '90s when Emulex decided to go after Fibre Channel. It was competing against the then-giants in storage, and there were scores of people saying you just need to get out of that market or you're going to get crushed by the elephants. I see the same spunk in Neterion.
It's the excitement of a new emerging technology area. There's been a view in the industry for a long time that you could have a converged network ... one network to run all of our protocols: storage, networking, maybe even clustering. At 10Gbit/sec. you start to think realistically: if most servers need 1Gbit/sec. to 2Gbit/sec. and today's state of the art is dual 4Gbit/sec. Fibre Channel interfaces, that all fits within the bandwidth footprint of a single 10 Gigabit Ethernet port. The idea of finally being able to broach the new world of using one fabric type to consolidate the needs of storage, networking and maybe even some clustering is a paradigm shift in the industry. To me, it's very attractive.
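To make the back-of-envelope arithmetic behind that claim concrete, here is a minimal sketch. The per-server figures are illustrative assumptions drawn only from the numbers Scherer cites, not measurements.

```python
# Rough check: does one 10GbE port cover a server's combined LAN and
# storage traffic? All figures are illustrative assumptions.
lan_gbps = 2.0            # assumed LAN need: 1-2 Gbit/sec. per server
fc_ports = 2              # dual Fibre Channel adapters
fc_gbps_per_port = 4.0    # 4Gbit/sec. Fibre Channel, today's state of the art
ethernet_10g_gbps = 10.0  # a single 10 Gigabit Ethernet port

combined = lan_gbps + fc_ports * fc_gbps_per_port
print(f"Combined demand: {combined} Gbit/sec.; "
      f"fits in one 10GbE port: {combined <= ethernet_10g_gbps}")
```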
What did you start out doing at Emulex? Emulex started out in the Digital Equipment plug-compatible controller market. I started out looking at emulating disk and tape and communication subsystems. I was involved in some of the very early [DEC] Ethernet adapters.
I kind of grew up in the initial SCSI marketplace and a lot of the early Ethernet marketplace. Then in the early 1990s, I was recruited by a team inside Emulex to start up a Fibre Channel effort. We really didn't know what we were going to do with it at the time, but it was just such an interesting serial technology that could, at the time, scale to quarter speed -- 256Mbit/sec. -- but it also had this outside idea that it could be scaled to gigabit speeds, which in the '90s seemed incredible. The fastest serial bus at the time was a bus that Digital had that they referred to as the CI, and that was 70Mbit/sec. Here we were looking at a technology that was going to become commercially viable at 256Mbit/sec. and scale to 1Gbit/sec. and it seemed like, "Wow, man. What will we do with all that speed?"
I got involved in a lot of the early standards work with Fibre Channel.
One detriment to Fibre Channel's growth has been its high cost of deployment. What's kept it so pricey? If you think about where Fibre Channel is deployed, it's really mainly in tier three of the data center. And it never jumped into being a real channel product. It's really an OEM-driven product. The OEM channel is an expensive channel. The channel [vendors] will make maybe 15 points. It's a pretty slim margin, but there's also very little support. The channel [market] is really just a facilitator to get the product from point A to point B. In the Fibre Channel market, which is OEM driven, there is anywhere from a 40-to-65-point margin. It's very lucrative.
I know there were times when some of the OEMs were making more money off the sales of Fibre Channel adapters and switches than they were off the system sales. It's one of those protected ecosystems where people were willing to pay it and they [the vendors] were willing to take it.
Would you say there was a higher margin of profit in the Fibre Channel market than any other? Certainly for longer periods of time, yes. In the early '90s all the way through 2000, storage arrays enjoyed 70% margins on the hardware. It's still hard to figure out how the big iron folks are doing. They've changed their model so much. Now they charge separately for software versus array hardware. It looks like the array hardware has very slim margins. Big guys like EMC, it looks like they're reporting somewhere between a 20- and 30-point margin on the array, but then they sell software upgrades, like replication software ... and all those pieces of software that used to be bundled into the array are called out independently.
Fourteen years in Fibre Channel. Now you're in the Ethernet market. What's changed? If you look over the past three decades' worth of technology in terms of household words, there are only a couple of technologies that have been pervasive. One is SCSI in the disk-attached market and the other is Ethernet.
Fibre Channel was an easy technology for me to understand in terms of how it would get to market. We have serial-attached SCSI today, but in the early days of Fibre Channel, it really was a serial SCSI bus. The whole notion of storage area networks came after Fibre Channel started to be adopted as a longer, more featured SCSI bus for the enterprise.
In my role at Emulex, without betraying any confidential information, I was involved in investigating lots of Ethernet companies ... culminating in the acquisition of a company named Aarohi last year (see "Emulex acquires chip maker Aarohi"). So it's a market I've been following for some time, professionally and personally. I don't think Fibre Channel has anything to worry about in the short term from Ethernet, but if you look out 10 years from now, Ethernet has been like a tidal wave. Anything that can be done with IP eventually will be, just because of its cost structure. I think long term that's going to be Fibre Channel's detriment. It's never been able to penetrate below that data center level. Any technology that can't be pervasive across more than one tier, long term, has to watch its back against technologies that can. If you look at where general Ethernet and its standards are going, there are lots of new technologies that augment Ethernet to give it 100% of, or at least within spitting distance of, what more tailored protocols can do.
10 Gigabit Ethernet still seems too expensive. What's the selling point? 10 Gigabit Ethernet today requires a more dedicated infrastructure, but the lay of the land changes quite a bit when we get to 10GBase-T, twisted pair for 10 Gigabit Ethernet, in that it can downshift and run at other speeds as well because it's the same cabling plant at that point. The reason it's not practical with short-reach or even long-reach optics is that if you're spending $500 to buy an optical interconnect, running 1Gbit/sec. over it is probably not your best investment. With 10GBase-T, if you're running CAT6 or CAT6a cable, it's becoming just as cheap as CAT5. So there's no reason not to install something that's 10Gbit/sec.-ready and just upgrade your switch ports as you need them.
So why would you need 10Gbit/sec. bandwidth? If you look at processor road maps, we're at two cores per socket today and four cores have been announced. You can extrapolate that we're going to go to other powers of two that are significantly larger. Couple that with the trend of virtualization in the data center ... imagine a year or two from now, when you have systems with four or eight times more processors in them in the same footprint as today. What do you do with all that processing power? The data center knows what to do with it. They'll just run a lot more guest operating systems and be able to do some consolidation of older applications without having to rewrite them. Now take systems that could have easily lived with dual Gigabit Ethernet interfaces: you consolidate 10 or 20 of them and all of a sudden that network pipe can't stay at 1Gbit/sec.
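A quick sketch of the consolidation math Scherer describes. The server count and utilization figure are hypothetical, chosen only to fall in the 10-to-20-server range he mentions.

```python
# Hypothetical consolidation: N physical servers, each sized for dual
# Gigabit Ethernet, collapsed onto one virtualized host.
servers_consolidated = 15   # somewhere in the 10-20 range cited
nics_per_server = 2         # dual Gigabit Ethernet interfaces
gbps_per_nic = 1.0
avg_utilization = 0.3       # assume ~30% average utilization per NIC

aggregate_gbps = (servers_consolidated * nics_per_server
                  * gbps_per_nic * avg_utilization)
print(f"Aggregate uplink demand on the consolidated host: "
      f"{aggregate_gbps:.1f} Gbit/sec.")
# Even at modest utilization the aggregate far exceeds a single
# 1Gbit/sec. pipe, which is the argument for 10 Gigabit Ethernet here.
```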