Is it time to close the wireless patent office?

I am a big fan of innovation. All of high-tech benefits when new ideas with demonstrable technical or economic advantages are put into practice. Exciting new approaches to improving reliability, throughput, capacity and other dimensions of wireless technologies are always welcome. Sometimes, these ideas result in great leaps forward.

But I wonder: Are we reaching the point where the basic technologies now in place, or soon to be in place, obviate the need for significant innovation? Should we be getting ready to close the wireless patent office?

OK, that's overstating things just a bit. But consider: We have seen remarkable improvements in radio technologies in recent years, and the full effect of these is yet to be realized.

The best example is multiple input/multiple output (MIMO), which I covered in a previous column. MIMO is a technique designed to make the best possible use of the scarce resource that is the radio spectrum. MIMO adds a third, spatial dimension to the frequency (bandwidth) and time that otherwise define all communications systems. Adding space to frequency and time means higher throughput, capacity and reliability.
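To put a rough number on that intuition, here's a back-of-the-envelope sketch of my own -- not anything taken from the 802.11n work -- using an idealized Shannon-capacity estimate that simply scales with the number of independent spatial streams. Real MIMO channels are correlated and won't reach these figures; the scaling is the point.

import math

# Idealized illustration: aggregate capacity grows with the number of spatial
# streams. Assumes independent, equal-SNR streams -- an upper bound, not a
# prediction for any real product.
def mimo_capacity_mbps(bandwidth_hz, snr_db, spatial_streams):
    """Shannon-style estimate: streams * B * log2(1 + SNR), in Mbit/sec."""
    snr_linear = 10 ** (snr_db / 10.0)
    return spatial_streams * bandwidth_hz * math.log2(1 + snr_linear) / 1e6

for streams in (1, 2, 3):
    # 20-MHz channel at 20 dB SNR -- purely illustrative numbers
    print(streams, "stream(s):", round(mimo_capacity_mbps(20e6, 20.0, streams)), "Mbit/sec.")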

MIMO also increases effective range at any given level of throughput and transmitter power output. When fully implemented in products based on the upcoming (now mid-2008) IEEE 802.11n wireless LAN standard, we'll likely see 100Mbit/sec. of application-layer throughput per channel. And keep in mind there are up to 26 802.11 channels available in the U.S. A level of 200Mbit/sec. per channel is not out of the question, and the maturing of advanced technologies like 60-GHz radio (which I'll cover in a few weeks) and ultrawideband could result in gigabit-level performance, albeit likely over shorter distances than today's WLANs. And even this is OK if you're a believer, as I am, in dense deployments. Which leads me to ...

I've often marveled at how well Skype works over the Internet with no special support for quality of service. The reason for this, and the reason we often see such amazing throughput no matter what the application, is the fundamental overprovisioning of the Internet. In other words, there's a lot more capacity available at any given moment than could possibly be used. This is because it's really no more expensive to implement very high-performance connections than those with lesser throughput.

I often see throughput up to the limit of what my cable modem is provisioned for; lower throughput is more likely the result of limitations at the other end of the connection than of congestion in the middle. This performance is itself the result of earlier innovations in physical-layer technologies (primarily fiber) as well as advances in routing techniques, protocols and router implementation.

Can we also overprovision wireless? Sure -- that's exactly what we'd have if we had 26 simultaneous channels at 50Mbit/sec. to 100Mbit/sec. each. If you'd like to see an example of this thinking in action, have a look at WLAN arrays, which I discussed in architectural terms a few weeks ago. We can also use microcells and picocells in wide-area applications, and split cells to add more capacity in essentially any wireless network. Do we really need to be clever in our use of bandwidth when we have that kind of raw capacity, implemented via a brute-force technique?
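For the record, the arithmetic behind that claim is trivial. The channel count and per-channel rates are the ballpark figures above; the assumption that every channel is fully usable at once is mine:

# Rough aggregate-capacity arithmetic for the overprovisioning argument.
# Assumes all channels are simultaneously usable at the quoted rates.
channels = 26                  # U.S. 802.11 channels, as noted above
low_mbps, high_mbps = 50, 100  # assumed application-layer throughput per channel

print("Aggregate:", channels * low_mbps, "to", channels * high_mbps, "Mbit/sec.")
# Prints: Aggregate: 1300 to 2600 Mbit/sec. -- i.e., 1.3 to 2.6 Gbit/sec. of raw WLAN capacity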

Some years ago, I met with a company that had a truly exciting innovation: it adjusted the size of packets sent over the air (or, in fact, over any packet-based network) based on the likelihood of a collision or other interference. The company made packets smaller when interference was greater, and larger in the opposite case. We saw throughput improvements of up to 10 times, depending upon traffic type. And yet no one, to my knowledge, adopted this technology. Part of the reason was a very common bias against adding cost to any product, especially those with commodity pricing. But the other part was a general feeling that the basic technologies would simply continue to get faster, obviating the need for this kind of cleverness and the corresponding investment.
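I never saw the company's actual algorithm, but the general idea is easy to sketch: shrink frames when the channel looks noisy, grow them when it looks clean. The thresholds, step sizes and limits below are hypothetical placeholders of my own, not the vendor's values.

MIN_PAYLOAD = 256    # bytes; all constants here are illustrative assumptions
MAX_PAYLOAD = 1500   # bytes, a typical Ethernet MTU

def next_payload_size(current, error_rate):
    """Pick the payload size for the next frame.

    error_rate: observed fraction of recent frames lost to collisions or
    interference (0.0 to 1.0), e.g. derived from retry counters.
    """
    if error_rate > 0.10:
        # Noisy channel: shorter frames occupy less airtime, so each one is
        # less likely to overlap a burst of interference.
        return max(MIN_PAYLOAD, current // 2)
    if error_rate < 0.02:
        # Clean channel: longer frames amortize per-frame overhead.
        return min(MAX_PAYLOAD, current + 256)
    return current

# Example: a channel that degrades, then recovers.
size = MAX_PAYLOAD
for observed in (0.01, 0.15, 0.20, 0.05, 0.01):
    size = next_payload_size(size, observed)
    print("error rate", observed, "-> next payload", size, "bytes")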

It's really too bad, but the fundamental economics of networking in general argue against further clever innovation. The ace in the hole for those wishing to innovate in wireless, though, is the fundamental variability of wireless communications. And since even wired networks suffer from similar artifacts (I'm always amused when someone argues against a WLAN on the grounds that it's a shared medium; just what do they think Ethernet is?), I think we'll be seeing all kinds of enhancements to wireless technologies in the future. The only real question is whether the economics of the moment argue for or against them, and thus for or against their deployment in production systems.

A final point: How do we measure the performance of a WLAN and determine if we're really seeing any benefit from a given innovation? I've been spending months on the development of an enterprise-oriented approach to answering that question, and I'll have more on this for you shortly.

Craig J. Mathias is a principal at Farpoint Group, an advisory firm specializing in wireless networking and mobile computing. He can be reached at craig@farpointgroup.com.

Copyright © 2006 IDG Communications, Inc.
