
The New Cluster Model

By Pimm Fox
June 9, 2003 12:00 PM ET

Computerworld - High-availability clustering is too compelling to ignore. Typical clustering models for Unix have one server doing the work, with another standing by idle in case of failure. This active/passive approach can double hardware costs and add time and expense to deployment and management. That's a fair amount of capital to sink into unused computing resources. But the confluence of three factors may change the way clustering is approached.
First, Intel's Xeon processors offer a one-two punch of lower price and competitive performance when compared with RISC chips running Unix.
Second, the emergence of Linux on Intel servers, coupled with low-latency interconnects that bind servers together over Gigabit Ethernet or InfiniBand, means better server-to-server communication.
Finally, storage decoupled from servers and the acceptance of storage-area networks and Fibre Channel technology make it possible to aggregate servers that then can act as one large machine.
Companies such as PolyServe Inc. in Beaverton, Ore., and Sistina Software Inc. in Minneapolis offer software to tackle different aspects of clustering. Sistina's products enable the sharing of stored data, while PolyServe software makes it possible to manage Intel-based server clusters.
Compare prices. A Unix clustering product for Solaris starts at around $6,000 and can run as high as $50,000 per server. For two Sun Fire 6800 machines with active/passive clustering, the cost is about $30,000 per server. That setup is designed purely for high availability; it offers no shared-data clustering or multiple-server combinations. PolyServe says its software costs $3,000 per CPU, which on a cluster of four two-processor machines (eight CPUs) would run $24,000.
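The arithmetic behind that comparison is easy to check. A minimal sketch, using only the 2003 list prices cited in this article (hardware, support and deployment costs excluded, so the real gap is larger still):

```python
# Rough cost check on the clustering-software figures cited above.
# Illustrative only: 2003 list prices, software licenses only.

# Active/passive Unix clustering: about $30,000 per server
# for a pair of Sun Fire 6800 machines (one active, one idle).
unix_servers = 2
unix_cluster_cost = unix_servers * 30_000

# PolyServe: $3,000 per CPU on a cluster of four two-processor
# Intel machines -- and every node does useful work.
polyserve_cpus = 4 * 2
polyserve_cost = polyserve_cpus * 3_000

print(unix_cluster_cost)  # 60000
print(polyserve_cost)     # 24000
```

On those numbers alone, the Intel/Linux cluster's software bill is less than half the Unix pair's, while putting eight CPUs to work instead of idling one whole machine.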
Rather than dump a ton of money into a big box for a mission-critical data center, buying a bunch of Intel servers and coupling them to a SAN could be just as effective as a Unix setup, at much lower cost.
Another trend is new clustering and shared-data storage software that goes beyond the availability layer down to the file-system level, so all servers can see and share the same data. The software arbitrates which server reads or writes data to disk at what time, and concurrent processing of data means more for your hardware dollar.
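A toy sketch of that arbitration idea, using POSIX advisory file locks to serialize writers. (This is an assumption-laden illustration: a real cluster file system such as Sistina's GFS coordinates access through a distributed lock manager at the block level, not with per-file `fcntl` locks, but the principle of one arbitrated writer at a time is the same.)

```python
import fcntl
import os
import tempfile

def append_record(path: str, record: str) -> None:
    """Append one record to a shared file, holding an exclusive lock.

    Any node trying to write blocks here until it is the sole writer,
    which is (in miniature) what a cluster file system's lock manager
    guarantees across machines.
    """
    with open(path, "a") as f:
        fcntl.flock(f, fcntl.LOCK_EX)   # wait until we are the only writer
        try:
            f.write(record + "\n")
            f.flush()
            os.fsync(f.fileno())        # force the data out to disk
        finally:
            fcntl.flock(f, fcntl.LOCK_UN)

# Two "nodes" writing to the same shared file, one at a time.
path = os.path.join(tempfile.mkdtemp(), "shared.log")
append_record(path, "node-1: order 1001")
append_record(path, "node-2: order 1002")
print(open(path).read())
```

The payoff the article describes is that every node can do this concurrently against the same storage, rather than leaving a standby server idle.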
Also, consider the flexibility that clustering brings. When infrastructure has to grow, you can connect the new server to the SAN and add clustering software to that node. Applications don't need to change, and the IT learning curve for moving from Unix to Linux shouldn't be too arduous.
Perhaps some of the savings could go to IT staff bonuses.
Pimm Fox is a freelance writer in Santa Barbara, Calif.

