Computerworld - High-availability clustering is too compelling to ignore. Typical clustering models for Unix have one server doing the work, with another standing by idle in case of failure. This active/passive approach can double hardware costs and add time and expense to deployment and management. That's a fair amount of capital to sink into unused computing resources. But the confluence of three factors may change the way clustering is approached.
First, Intel's Xeon processors offer a one-two punch of lower price and competitive performance when compared with RISC chips running Unix.
Second, Linux has emerged on Intel servers, and low-latency interconnects such as Gigabit Ethernet and InfiniBand can bind those servers together, improving server-to-server communication.
Finally, storage decoupled from servers, along with the acceptance of storage-area networks (SANs) and Fibre Channel technology, makes it possible to aggregate servers that can then act as one large machine.
Companies such as PolyServe Inc. in Beaverton, Ore., and Sistina Software Inc. in Minneapolis offer software to tackle different aspects of clustering. Sistina's products enable the sharing of stored data, while PolyServe software makes it possible to manage Intel-based server clusters.
Compare prices. Unix clustering software for Solaris starts at around $6,000 per server and can run as high as $50,000 per server. For a pair of Sun Fire 6800 machines in an active/passive configuration, clustering costs about $30,000 per server, and that buys only high availability: no shared-data clustering, no combining of multiple servers. PolyServe says its software costs $3,000 per CPU, which on a cluster of four two-processor machines (eight CPUs) works out to $24,000.
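The arithmetic behind that comparison is simple enough to check. A quick sketch, using only the list prices quoted above (the node and CPU counts match the configurations described; everything else here is illustration):

```python
# Unix active/passive: two Sun Fire 6800 nodes at roughly $30,000 per server
unix_nodes = 2
unix_per_server = 30_000
unix_total = unix_nodes * unix_per_server          # $60,000 in clustering software alone

# PolyServe: $3,000 per CPU on four two-processor Intel servers
intel_nodes = 4
cpus_per_node = 2
price_per_cpu = 3_000
polyserve_total = intel_nodes * cpus_per_node * price_per_cpu   # $24,000

print(f"Unix active/passive: ${unix_total:,}")       # prints "Unix active/passive: $60,000"
print(f"PolyServe cluster:   ${polyserve_total:,}")  # prints "PolyServe cluster:   $24,000"
print(f"Difference:          ${unix_total - polyserve_total:,}")
```

And the PolyServe figure buys four active nodes rather than one active and one idle.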
Rather than dump a ton of money into a big box for a mission-critical data center, buying a bunch of Intel servers and coupling them to a SAN could be just as effective as a Unix setup, at much lower cost.
Another trend is new clustering and shared-data storage software that goes beyond the availability layer down to the file system level, so all servers can see and share the same data. The software arbitrates which server reads or writes data to disk at any given time, and concurrent processing means more for your hardware dollar.
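The idea of arbitrating writes can be sketched in miniature. The toy below uses in-process locks to guarantee that only one writer touches a given disk block at a time; this is purely illustrative, since a real shared-disk cluster file system (such as Sistina's GFS) coordinates across machines with a distributed lock manager, not Python threads:

```python
import threading

class BlockArbiter:
    """Toy arbiter: a writer must hold a block's lock before writing it."""

    def __init__(self):
        self._locks = {}                  # block id -> lock
        self._guard = threading.Lock()    # protects the lock table itself

    def _lock_for(self, block):
        with self._guard:
            return self._locks.setdefault(block, threading.Lock())

    def write(self, node, block, data, disk):
        with self._lock_for(block):       # only one writer per block at a time
            disk[block] = (node, data)

# Four "nodes" race to write the same block; the arbiter serializes them.
disk = {}
arbiter = BlockArbiter()
threads = [
    threading.Thread(target=arbiter.write,
                     args=(f"node{i}", 7, f"payload-{i}", disk))
    for i in range(4)
]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(disk[7])   # exactly one node's write lands last; none are interleaved
```

Blocks that different nodes don't contend for can be written concurrently, which is where the "more for your hardware dollar" comes from.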
Also, consider the flexibility that clustering brings. When infrastructure has to grow, you can connect the new server to the SAN and add clustering software to that node. Applications don't need to change, and the IT learning curve for moving from Unix to Linux shouldn't be too arduous.
Perhaps some of the savings could go to IT staff bonuses.
Pimm Fox is a freelance writer in Santa Barbara, Calif. Contact him at firstname.lastname@example.org.