Grid computing: Term may fade, but features will live on

Grid computing's goal of sharing resources is still just a plan for many corporate customers; the question is how to get there most effectively.

An estimated five-year plan was put in place last year, with computational services running on Platform Computing's Symphony software. In other words, the goal is to move from servers to services.

Step 2 will be to share the hardware, using a single platform for all services. "Once everyone is comfortable on the software, it's a small [technical] leap to share hardware, but it's a valiant leap in terms of trust," Mitsolides says. That's because before the advent of Platform Symphony, users accessed each application as it ran on a specified server or server farm. Now, users will be able to run applications without worrying about where the applications reside or whether there's enough capacity available to support demand. In this new world, a range of servers will supply processing power, and the software will be "smart" enough to figure out where each job should run. Mitsolides says this notion of availability on demand has been a difficult concept for business managers to grasp, believe and trust.
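To make the placement idea concrete, here is a simplified Python sketch of demand-based scheduling: jobs are submitted without naming a server, and a scheduler picks whichever host in the pool has the most free capacity. The class names, host names and job names are invented for illustration; this is not Platform Symphony's actual API.

```python
# Hypothetical illustration of demand-based placement: callers submit work
# without naming a server, and the scheduler decides where it runs.
from dataclasses import dataclass, field


@dataclass
class Host:
    name: str
    slots: int          # total compute slots on this host
    running: int = 0    # slots currently in use

    @property
    def free(self) -> int:
        return self.slots - self.running


@dataclass
class GridScheduler:
    hosts: list[Host] = field(default_factory=list)

    def submit(self, job_name: str) -> str:
        """Place a job on the host with the most free capacity."""
        candidates = [h for h in self.hosts if h.free > 0]
        if not candidates:
            raise RuntimeError("no capacity available for " + job_name)
        target = max(candidates, key=lambda h: h.free)
        target.running += 1
        return f"{job_name} -> {target.name}"


# Users never specify a server; the pool absorbs demand wherever room exists.
grid = GridScheduler([Host("risk-01", 8), Host("risk-02", 8), Host("pricing-01", 4)])
print(grid.submit("overnight-var-calc"))   # e.g. "overnight-var-calc -> risk-01"
print(grid.submit("intraday-pricing"))
```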

Step 3 will be to move the grid implementation to other offices around the globe. So far, users see services that are accessible via the Web and backed by Platform's load-balancing, scaling, prioritizing, scheduling and monitoring capabilities. Ultimately, Mitsolides says, users will be able to access both compute-intensive and noncompute-intensive services through the same application programming interface.

Grid's expansion

In the end, the evolution of grid computing is all about IT's transformation from vertically integrated silos to horizontally integrated, service-oriented systems. In fact, if Fellows' predictions are accurate (see info box below), grid computing will likely be absorbed into network fabrics.

Predictions about grid computing's evolution

In 2007, the 451 Group expects several pivotal changes in the IT landscape, including:

•  Virtualization will go mainstream, changing the data center

•  Grid infrastructure will get baked in to support utility computing and on-demand activities

•  SOA will move from experimentation to implementation

•  Open-source technology will move up the value chain

•  Web 2.0 will morph into Enterprise 2.0 and change how companies interact internally and with others

•  Silos will become horizontally integrated resources ('flat IT')

•  Virtualization will allow grids to be absorbed into enterprise fabrics

Meanwhile, developments expected by 2010 include:

•  Grid technology will move beyond analytics into mainstream applications

•  As virtualization allows grids to be absorbed into the fabric, the term 'grid' will fade away

•  Enterprise utilities will form

•  A wide range of providers will offer some form of grid-enabled, utility-type computing, from telcos to IT vendors to systems integrators to Amazon and Google

Source: 451 Group

Research firm IDC sees several different types of grids growing in importance over the next few years, a market it details in its report, "Worldwide Grid Computing 2006-2010 Forecast." The first is the compute grid, which is mostly driven by hardware and uses freeware and staff-provided services. The second is the data grid, which ties together information from disparate sources. The third, the optimization grid, is how many customers define the term: pooled resources that are allocated based on demand.
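As a rough illustration of the data-grid idea, the short Python sketch below presents one query interface over two separate (simulated) data sources; the source names, fields and figures are invented and don't reflect any particular product.

```python
# Hypothetical sketch of a data grid: one query interface federates
# records held in two separate (simulated) sources.
trades_db = [{"id": 1, "desk": "fx", "notional": 5_000_000}]
risk_feed = {1: {"var_95": 120_000}}          # keyed by trade id


def federated_trade_view(trade_id: int) -> dict:
    """Join trade and risk data on the fly, as a data grid would."""
    trade = next(t for t in trades_db if t["id"] == trade_id)
    risk = risk_feed.get(trade_id, {})
    return {**trade, **risk}                   # one logical record from two sources


print(federated_trade_view(1))
# {'id': 1, 'desk': 'fx', 'notional': 5000000, 'var_95': 120000}
```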

In the future, the sweet spot for grid computing technology is expected to be in enterprise utilities. According to Ian Foster, director of the Computation Institute at Argonne National Laboratory and the University of Chicago, "grid computing 'fabrics' are now poised to become the underpinning for next-generation enterprise IT architectures and be used by a much greater part of many organizations."

In Foster's vision, these fabrics combine virtualization, SOA, commodity hardware and open-source software.

Grid in the cloud

One example of these worlds converging is Amazon.com Inc.'s Elastic Compute Cloud (EC2), a service in beta that provides computing power on demand over the Internet. EC2 lets software and Web developers set up virtual servers instead of having to buy computers or hire a hosting company. Developers can quickly set up a virtual machine and, in minutes or hours, add or subtract capacity based on their needs.

Much like Amazon's S3 storage service, EC2 lets developers experiment and get a new service running almost instantly, with no capital costs. EC2 costs 10 cents per hour to run a custom server, plus data and bandwidth charges.
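For a concrete picture of what on-demand provisioning looks like in code, here is a minimal sketch using boto3, the current Python SDK for Amazon Web Services (which postdates the beta service described in this article). The machine image ID and instance type are placeholders, and real use would also involve credentials, security groups and error handling.

```python
# Minimal sketch of on-demand provisioning with boto3 (the modern AWS SDK for
# Python); the image ID and instance type below are placeholders, not real values.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Start one virtual server on demand -- no hardware purchase, no hosting contract.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder machine image
    InstanceType="t3.micro",           # placeholder size
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print("launched", instance_id)

# When demand drops, capacity is released just as quickly.
ec2.terminate_instances(InstanceIds=[instance_id])
```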

Although Amazon doesn't consider its implementation a grid environment, some testers are running compute grids on EC2. Werner Vogels, vice president and chief technology officer of Amazon.com, describes the situation this way: EC2 is a "resource management layer that can run under a grid environment," he says. "Ultimately, we provide the resources for grid computing to exist on top of our service."

Examples of application grids that use EC2's resources include Randall Render Rockets, which uses EC2 as a grid for image rendering, letting customers have digital images and short movies rendered. Others have built environments on EC2 for digital file encoding, translating MP2 files to MP3, for example. "We're using virtualization technology to make EC2 instances appear like full-blown servers, using dynamic allocation to assign virtual images to replicate physical servers," Vogels explains.
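A batch job of that kind is simple to script. The hypothetical worker below shells out to ffmpeg, a common open-source encoder not named in the article, to convert MP2 files to MP3; it is the sort of self-contained task a grid can farm out across many instances.

```python
# Hypothetical batch worker of the kind run across many grid instances:
# converts MP2 files to MP3 by shelling out to ffmpeg (assumed to be installed).
import subprocess
import sys
from pathlib import Path


def transcode(src: str) -> Path:
    """Encode a single file; a grid would fan jobs like this out across nodes."""
    dst = Path(src).with_suffix(".mp3")
    subprocess.run(["ffmpeg", "-y", "-i", src, str(dst)], check=True)
    return dst


if __name__ == "__main__":
    for audio_file in sys.argv[1:]:
        print("wrote", transcode(audio_file))
```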

Amazon currently uses EC2 internally for efficient resource management: excess capacity on otherwise idle servers is put to work as application and user demand requires throughout the day. That contrasts with nongrid environments, in which one application depends on one server or server farm for compute power and must carry copious spare capacity to cover any potential burst in demand.
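To make that contrast concrete, here is a back-of-the-envelope calculation with invented numbers, comparing a server farm provisioned for its daily peak against pooled capacity that follows demand hour by hour.

```python
# Back-of-the-envelope comparison with invented numbers: static provisioning
# must cover the daily peak all day, while pooled/on-demand capacity tracks load.
hourly_demand = [2, 2, 2, 3, 5, 9, 14, 18, 20, 19, 16, 12,   # servers needed, hour by hour
                 10, 9, 8, 8, 9, 11, 10, 7, 5, 4, 3, 2]

static_server_hours = max(hourly_demand) * 24   # provision for the peak, always on
elastic_server_hours = sum(hourly_demand)       # pooled capacity follows demand

print("static  :", static_server_hours, "server-hours/day")    # 480
print("elastic :", elastic_server_hours, "server-hours/day")   # 208
print("idle overhead avoided: "
      f"{1 - elastic_server_hours / static_server_hours:.0%}")  # 57%
```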

But it's the business opportunity created by on-demand compute resources that appeals to a whole range of developers, from Web 2.0 developers to specialists in advanced enterprise computing, Vogels says.

Still, despite grid's criticality to the applications it underpins, Vogels says that in the past few years the grid concept has been overhyped and grid computing's limitations have been exposed. Grid "still requires middleware and resource management capabilities" to make it usable for on-demand services, he explains.

In fact, the biggest challenge to the success of grid computing's evolution is the current lack of standardization, says Foster. "Different vendors are all trying to compete, creating different methods or features that likely won't integrate easily without standardization," he notes. Would-be resource sharers must also navigate layers of technology to create a grid architecture.

But Foster, considered a father of the grid computing concept, believes the need for grid computing may create sufficient momentum to overcome the challenges. "The grid vision ... of resource federation and resource-sharing across enterprises is something a lot of people are eager to see happen," Foster says.

Barb DePompa has been writing about technology and business issues for a wide range of publishers for more than two decades, since before the dawn of grid computing. Accurately capturing this technology during its "trough of disillusionment" phase has been a challenge. Let her know what you think.
