Should you go to an all-wireless network?

Networking pro Greg Schaffer explains exactly how to perform a network-design analysis on a case-by-case basis

A popular stance in network design is that wireless connectivity should augment, but not replace, wired connections, primarily because of the disadvantages of wireless networking. Yet wireless LAN technology continues to mature in features and usability. Higher speeds, increased security, quality of service (QoS) and centralized management are just a few of the wireless developments of the past few years, and more advances are coming. Does this mean the time has come to completely abandon the traditional design of a wired port to every desktop? Certainly, there are numerous successful wireless-only deployments out there.

Determining whether a network should be wired, wireless or a mixture of both should be part of every new network design process. Often a decision on how to proceed is based on what worked well the year before for a similar project. However, since the available offerings change rapidly, the question of wired vs. wireless should be explored in-depth as part of every network design process, regardless of what worked for a previous project of similar scope.

I recently had the task of designing a LAN for a new addition to an existing medical building, and I went through the process of determining what application worked best for that environment. While doing so, I carefully considered technologies and trends, network needs and security, and cost and management factors.

Technologies and trends

Understanding the available and upcoming technologies is necessary in designing a network. In widespread deployment today are 100Mbit/sec. and, to a lesser extent, 1Gbit/sec. Ethernet over twisted-pair copper cabling to the desktop. On the wireless side, 802.11b has proven to be the workhorse of WLAN connectivity, with 802.11a and 802.11g providing higher speeds.

The need for bandwidth is ever-growing, particularly in medical, financial, advanced computing and research environments. To address these needs, the IEEE has focused on expanding both wireless and wired capabilities. 802.3an was approved last year as a standard for providing 10Gbit/sec. Ethernet over copper cabling, and recently the IEEE 802.3 Higher Speed Study Group announced it will focus on developing a 100Gbit/sec. Ethernet over copper standard.

On the wireless side, according to the 802.11 Official Timelines, 802.11n promises up to 540Mbit/sec. throughput and is projected for approval as a standard in April 2008. Devices based on the 802.11n draft, such as the Linksys Wireless-N Broadband Router, are currently on the market. However, installing based on a standard before it is ratified can potentially introduce future interoperability issues.

The important concept to keep in mind is that while the network equipment industry is producing higher-bandwidth products for both media, wired will most likely continue to hold a significant edge in throughput. The question that then needs to be addressed, whether the projected uses of the network actually need the added bandwidth, is covered in the next section.

Convergence is a trend that will continue. Voice over IP (VoIP) is no longer a wired-only option; softphones and handheld 802.11b sets, such as the Cisco Unified Wireless IP Phone 7920, are currently available. Many access points offer QoS to ensure the small amount of bandwidth needed per call is available. Voice terminals -- wired or wireless -- don't need much bandwidth per call (64Kbit/sec.) but demand low latency. A QoS-configured access point can ensure that someone downloading a large file will not interfere with a sales call to a client.
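As a rough illustration of that 64Kbit/sec. figure, the payload-only call count for a reserved slice of access-point bandwidth can be estimated with simple division. The 2Mbit/sec. reservation below is an assumption for illustration, and real 802.11 call capacity is far lower than this naive count because of per-packet overhead:

```python
# Back-of-the-envelope estimate of how many 64Kbit/sec. voice calls fit
# in a reserved slice of wireless bandwidth. Payload-only arithmetic;
# real 802.11 call capacity is much lower due to per-packet overhead.

def max_calls(reserved_mbps: float, call_kbps: float = 64.0) -> int:
    """Naive payload-only call count for a reserved bandwidth slice."""
    return int(reserved_mbps * 1000 // call_kbps)

# Reserving a hypothetical 2Mbit/sec. for voice covers, in payload
# terms, about 31 simultaneous calls.
print(max_calls(2.0))  # 31
```

The point of the sketch is that raw bandwidth is rarely the constraint for voice; latency and overhead are.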

Convergence does not end with VoIP. Roaming between IP and cellular networks is already offered in a pilot project by T-Mobile in limited areas. As long as the service is sound and the technology matures, it is reasonable to expect offerings of this type of converged service to expand.

Keeping up with the developments in technology and trends is important, but only if applied properly. To do so, a clear understanding of the needs of the network users is necessary.

Network needs and security

When designing a network, it's necessary to consider what type of data the network will transmit. A network is simply a tool, and an old mechanic's axiom is that you must use the right tool for the job. A thorough analysis of the intended uses will ensure that the performance of the network is satisfactory.

Bandwidth needs are dictated by the applications that will use the network. Two application-specific performance requirements need to be considered: throughput and latency. Throughput is the data transmission rate, measured in bits per second, whereas latency is the delay, or lag, in data transmission.

Some applications, such as software downloads, Web browsing and e-mail, work well with some amount of latency because the induced lag is not noticeable. As noted above, real-time applications such as VoIP don't require a large amount of throughput but do demand low latency. Even where data transfer is heavy and high throughput is necessary, such as with online backups, some latency is acceptable.

One user connected to an access point at 54Mbit/sec. that has a hardwired 100Mbit/sec. switched connection will most likely experience acceptable performance. But not all wireless designs are equal, and several factors can decrease the end user's perceived performance.

Access point connectivity is a shared network model. One user connected to an access point downloading a Linux DVD ISO image of, say, 3.5GB from an intranet site (eliminating the variable of "last mile" Internet connectivity) may experience average throughput of (for the sake of argument) 50Mbit/sec. from an 802.11g access point. At this rate, the download will finish in about (3.5GB * 8 bits/byte) / 0.05Gbit/sec. = 560 seconds, or somewhat over nine minutes.

Add nine more computers downloading the same image, and the task will be completed in slightly over one and a half hours ((3.5GB * 10 * 8 bits/byte) / 0.05Gbit/sec. = 5,600 seconds). Those same 10 computers connected to a switch at 100Mbit/sec. with an uplink to the server at 1Gbit/sec. would complete downloading the image in 1/20 the time -- or about five minutes. Other variables, such as contention, packet overhead and signal strength, are ignored in this example, but it illustrates the magnitude of the bandwidth difference well.

There are inherent limitations to the number of connections an access point can support. A good design rule of thumb is that each access point can support 20 to 30 simultaneous users. Even applications that don't demand high bandwidth or low latency will suffer degraded performance when the number of simultaneous access point users is too great.
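A quick calculation shows why the rule of thumb matters: splitting even the nominal 54Mbit/sec. rate evenly among 30 users leaves each with under 2Mbit/sec., and effective 802.11g throughput is lower still (roughly 20 to 25Mbit/sec. in practice), so the real per-user share shrinks further. A minimal sketch, using the nominal rate as an assumption:

```python
# Per-user share of an access point under the 20-to-30-user rule of
# thumb. The nominal 54Mbit/sec. rate is used here for illustration;
# effective 802.11g throughput is lower in practice.

def per_user_mbps(ap_mbps: float, users: int) -> float:
    """Even split of access point capacity among simultaneous users."""
    return ap_mbps / users

print(per_user_mbps(54.0, 30))  # 1.8 Mbit/sec. per user
```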

How the access points connect to the backbone is important as well. In the previous example, the wireless access point was assumed to have a wired 100Mbit/sec. connection as an uplink. What about in mesh networks? Depending on how the mesh is laid out, an uplink at 54Mbit/sec. may connect to an access point that feeds two others, and the uplink capacity to the backbone network for each of the two edge access points is therefore decreased by 50%. More access points relying on the uplink would mean a further reduction in bandwidth.
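The dilution described above reduces to a simple division, assuming the uplink is split evenly among the edge access points it feeds (real mesh protocols add their own forwarding overhead on top of this):

```python
# Sketch of how backbone capacity dilutes in a simple mesh layout: one
# 54Mbit/sec. uplink access point feeding `edge_aps` edge access points.
# Assumes an even split; mesh forwarding overhead would reduce it further.

def edge_uplink_mbps(uplink_mbps: float, edge_aps: int) -> float:
    """Backbone capacity available to each edge access point."""
    return uplink_mbps / edge_aps

print(edge_uplink_mbps(54.0, 2))  # 27.0, the 50% reduction in the example
```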

If the primary use is Internet access, the bandwidth limitation typically lies in the ISP connection. In those cases, LAN bandwidth limitations are usually not much of a consideration. If mobility and cost are the primary considerations, wireless may be the way to go, but there is also the security aspect to weigh. Wireless network connectivity adds security issues because of the inherent nature of its unrestricted physical medium.

While there is some validity to the argument that security can be handled effectively above the physical layer, the reality is that a physical cable is more secure than a radio signal that has no physical boundaries. Interception of data, penetration of the network and misuse of network resources by unauthorized users can expose the corporate network to theft of bandwidth and information. Implementing technologies such as SSL/SSH, VPNs, 802.11i and network access/admission control on wireless networks can reduce these risks.

The obvious aspect where a wireless network excels above wired is mobility. Having the ability to connect anywhere, anytime is a powerful motivator for wireless. If the risks and bandwidth issues can be mitigated to an acceptable level for the intended use of the network, wireless may be the way to go. Remember, this is not purely a technological decision, and the technology is only a tool towards a desired result.

Costs and management

A networking rule of thumb is that cabling costs are primarily rooted in labor. A purely wireless client network would still entail some Cat6 cabling, but only to the access points (or, in the case of a mesh design, only to some of the access points), so cabling costs for a wireless design are much lower.

Costs can also be lowered by implementing "thin" wireless access points, particularly in larger networks. Traditional wireless deployments use "fat" access points, each of which is configured individually. A centralized management system, such as Enterasys' RoamAbout Switch System, moves the intelligence from the access points to one appliance. Because of the benefits described below, centralized access point management has become a popular method of wireless installation.

In a traditional WLAN design, "fat" wireless access points are connected to the corporate network via a separate Layer 3 virtual LAN. A VLAN delivered via trunking, or built as a physically separate network, can involve significant upfront configuration and equipment costs. A separate VLAN is desirable for security reasons and is often separated from the corporate network by a firewall and a VPN concentrator.

A centralized wireless deployment allows for the VLAN to be extended over the existing wired network. The access point creates a tunnel to the central manager, regardless of what VLAN it is placed on. In other words, one access point may be placed on the accounting VLAN, another on the sales VLAN, yet in both instances wireless clients would be on the wireless VLAN. This makes deployment of a wireless network where a wired infrastructure exists much easier.

The centralized model provides other attractive features as well. Configuration changes are applied at the management switch instead of at each access point. Since the access points communicate with a central device, advanced capabilities, such as automatic channel and power configuration and rogue detection, are possible. In addition, each "thin" access point generally costs significantly less than its more feature-rich cousin.

The downside to a centralized model is the upfront costs. The central management switch is usually expensive. However, if the deployment involves many access points, or wireless expansion is anticipated in the future, the upfront costs of a centrally managed application are often eclipsed by the benefits.

The bottom line

As previously stated, often a mixed network is the preferred application. However, because of the fluidity of network technology, engaging in a "wired vs. wireless" exercise is necessary for every network design project. Knowing what capabilities are available and around the corner, coupled with your company's needs and policies, is paramount in arriving at the optimal network design.

Ultimately, I decided that a mixed wired and wireless design was best for the network I was designing. Since the corporate network was already using centralized access point management technology, the cost of adding a half-dozen access points was minimal compared with the overall cost of the project. As this was a medical facility, there were security (such as Health Insurance Portability and Accountability Act compliance) and application bandwidth (such as medical imaging) concerns that necessitated wired connections. However, since the facility was expected to host visitors, the need to provide them with wireless access, with appropriate security measures, pointed to a mixed solution.

If faced with the same customer parameters on a similar project scope six months from now, I would do the exact same analysis. Because available tools and costs inevitably change, I wouldn't be surprised if my final design differed.

Greg Schaffer is the director of network services at Middle Tennessee State University. He has over 15 years of experience in networking, primarily in higher education. He can be reached at

Copyright © 2007 IDG Communications, Inc.
