Computerworld - High-availability clustering is too compelling to ignore. Typical clustering models for Unix have one server doing the work, with another standing by idle in case of failure. This active/passive approach can double hardware costs and add time and expense to deployment and management. That's a fair amount of capital to sink into unused computing resources. But the confluence of three factors may change the way clustering is approached.
First, Intel's Xeon processors offer a one-two punch of lower price and competitive performance when compared with RISC chips running Unix.
Second, Linux has matured on Intel servers, and low-latency interconnects such as Gigabit Ethernet and InfiniBand can bind those servers together with fast inter-server communication.
Finally, storage has been decoupled from servers: the acceptance of storage-area networks (SANs) and Fibre Channel technology makes it possible to aggregate servers so they act as one large machine.
Companies such as PolyServe Inc. in Beaverton, Ore., and Sistina Software Inc. in Minneapolis offer software to tackle different aspects of clustering. Sistina's products enable the sharing of stored data, while PolyServe software makes it possible to manage Intel-based server clusters.
Compare prices. Unix clustering software for Solaris starts at around $6,000 per server and can run as high as $50,000 per server. On a pair of Sun Fire 6800 machines, active/passive clustering costs about $30,000 per server, and that buys high availability only, with no shared-data clustering or multiserver combinations. PolyServe says its software costs $3,000 per CPU, which on a cluster of four two-processor machines (eight CPUs) works out to $24,000.
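The arithmetic behind that comparison can be sketched quickly; the figures below are the illustrative list prices quoted above, not vendor quotes, and the function name is mine:

```python
def per_cpu_license_cost(price_per_cpu, cpus_per_server, servers):
    """Total software cost when clustering is licensed per CPU."""
    return price_per_cpu * cpus_per_server * servers

# PolyServe-style licensing: $3,000 per CPU on four two-processor servers.
intel_cluster = per_cpu_license_cost(3_000, 2, 4)
print(intel_cluster)  # 24000

# Active/passive Unix clustering at roughly $30,000 per server for a pair.
unix_pair = 30_000 * 2
print(unix_pair)  # 60000
```

On these numbers the four-node Intel cluster's software costs less than half the two-node Unix pair's, while offering more nodes to share the work.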
Rather than dump a ton of money into a big box for a mission-critical data center, buying a bunch of Intel servers and coupling them to a SAN could be just as effective as a Unix setup, at much lower cost.
Another trend is new clustering and shared-data storage software that goes beyond the availability layer down to the file system level, so all servers can see and share the same data. The software arbitrates which server reads or writes to disk and when, and that concurrent processing of data means more for your hardware dollar.
Also, consider the flexibility that clustering brings. When infrastructure has to grow, you can connect the new server to the SAN and add clustering software to that node. Applications don't need to change, and the IT learning curve for moving from Unix to Linux shouldn't be too arduous.
Perhaps some of the savings could go to IT staff bonuses.
Pimm Fox is a freelance writer in Santa Barbara, Calif. Contact him at firstname.lastname@example.org.