Companies looking for more agile data centers are increasingly turning to public (external) or private (internal) clouds with virtualized servers, storage and networks. Getting the lowest cost and the best speed and flexibility from those systems requires assessing everything from performance to control and interoperability. And the larger your organization is, the more planning it takes to create an "enterprise grade" cloud that meets your performance, security and compliance needs.
Smaller and newer organizations with simple needs often get the most dramatic and immediate results from public cloud services. That's because they have no "sunk costs" in existing infrastructure and fewer internal applications to integrate with cloud platforms. In addition, the per-seat pricing plans offered by software as a service (SaaS) vendors tend to favor smaller customers.
Some of the biggest beneficiaries of cloud setups are SaaS vendors themselves. Digital Technology International, for example, manages its own hardware in remote data centers to provide hosted applications to publishers, and it found the cloud to be "five times more profitable and half as expensive" as it predicted, says Byron Oldham, vice president of engineering and development at the Springville, Utah-based company.
Public Cloud Caveats
Sometimes, the public cloud is no bargain. "If you had an application running constantly, where the size is the same, the load is the same, you're pretty much winding up paying more to have that constant load running on the public cloud than in a dedicated environment," says Paul Carmody, senior vice president at Internap Network Services, an Atlanta-based provider of routing and content delivery services designed to speed data transmission over the Internet for cloud customers. The reason, he says, is that large customers pay a disproportionate share of the provider's overhead.
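Carmody's point can be made concrete with simple arithmetic: metered public-cloud pricing only beats a flat-rate dedicated server when there are idle hours to avoid paying for. The sketch below illustrates that trade-off; all rates and utilization figures are hypothetical placeholders, not vendor quotes.

```python
# Illustrative only: compare the monthly cost of a steady, constant load on
# metered public-cloud pricing versus a flat-rate dedicated server.
# All prices below are made-up assumptions, not real vendor rates.

HOURS_PER_MONTH = 730

def monthly_cost_public(hourly_rate: float, utilization: float) -> float:
    """Metered cloud: you pay for every hour the instance runs."""
    return hourly_rate * HOURS_PER_MONTH * utilization

def monthly_cost_dedicated(flat_monthly_rate: float) -> float:
    """Dedicated host: the bill is the same regardless of utilization."""
    return flat_monthly_rate

# A constant, 100%-utilized workload leaves no idle hours for metering to save.
constant = monthly_cost_public(hourly_rate=0.50, utilization=1.0)  # 365.0
dedicated = monthly_cost_dedicated(300.0)
print(constant > dedicated)  # True: for constant load, dedicated wins here

# A bursty workload averaging 30% utilization flips the comparison.
bursty = monthly_cost_public(hourly_rate=0.50, utilization=0.3)    # 109.5
print(bursty < dedicated)    # True: metering pays off when load is variable
```

The crossover point depends entirely on the actual rates and duty cycle, which is why the analysis below has to be done per workload.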
Given the complexities of cloud setups, it's worthwhile to do an analysis that considers not only short-term savings from server consolidation and virtualization, but also long-term management costs and compliance requirements.
Generally, the older an application is, the less well-suited it is to the cloud. Systems built with hard-wired interfaces among components, or those that rely on large central databases, are harder to scale up or down as business needs change and may therefore lack the agility that organizations seek from the cloud.
Applications that are better suited to the cloud use Web service standards and multiple tiers, as well as modern distributed databases rather than one giant database linked to one server. Those features make it easier to increase or reduce the number of servers, storage and network bandwidth available to an application.
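The architectural contrast above can be sketched in a few lines: when the service tier holds no state of its own and shared data lives in an external store, adding or removing instances is trivial. This is a minimal illustration, with an in-memory dictionary standing in for a real distributed database; the class and method names are invented for the example.

```python
# Illustrative sketch: a stateless service tier scales out easily because
# shared state lives in an external store, not inside any one process.
# SharedStore is a stand-in for a real distributed database.

class SharedStore:
    """Placeholder for a distributed data store (e.g., a key-value service)."""
    def __init__(self):
        self._data = {}

    def get(self, key):
        return self._data.get(key)

    def put(self, key, value):
        self._data[key] = value

class StatelessWorker:
    """Keeps no session state, so any instance can serve any request."""
    def __init__(self, store: SharedStore):
        self.store = store

    def handle(self, user, action, value=None):
        if action == "save":
            self.store.put(user, value)
            return "ok"
        return self.store.get(user)

store = SharedStore()
# "Scaling out" is just creating more workers against the same store...
pool = [StatelessWorker(store) for _ in range(3)]
pool[0].handle("alice", "save", "draft-1")
# ...and every worker sees the same state, so the pool can grow or shrink.
print(pool[2].handle("alice", "read"))  # draft-1
```

A monolithic design, by contrast, would trap that state inside one process tied to one database server, which is exactly what makes it hard to resize.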
Processes, Terms and Conditions
Constant Contact, a provider of SaaS-based email delivery systems, chose the Puppet open-source systems management tool from Puppet Labs because of the richness of its scripting language and its ability to identify all of the applications, middleware and other components on which a server depends, says Mark Schena, manager of systems automation. Puppet "pretty much eliminated human error and helped eliminate service downtime," he says, and it allowed Constant Contact to reduce its administrator-to-server ratio from 1-to-300 last year to 1-to-400 this year, with a drop to 1-to-600 expected next year.
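The error reduction Schena describes comes from the declare-then-converge model that tools like Puppet implement: administrators declare a desired state, and applying it repeatedly is idempotent, so there is no per-server manual drift. The sketch below illustrates that model in Python; real Puppet manifests use their own declarative DSL, and the resource names here are invented for the example.

```python
# Illustrative sketch of declarative, idempotent configuration management,
# the model behind tools like Puppet (which uses its own DSL, not Python).
# Resources declare a desired state; "applying" converges actual state
# toward it and is a no-op when nothing has drifted.

desired_state = {
    "package:ntp": {"ensure": "installed"},
    "service:ntp": {"ensure": "running"},
}

def apply_catalog(desired, actual):
    """Converge actual state toward desired; return the resources changed."""
    changes = []
    for resource, props in desired.items():
        if actual.get(resource) != props:
            actual[resource] = dict(props)  # a real tool would perform the change
            changes.append(resource)
    return changes

actual_state = {}
print(apply_catalog(desired_state, actual_state))  # first run changes both resources
print(apply_catalog(desired_state, actual_state))  # second run: [] -- idempotent
```

Because the second run makes no changes, repeated applies are safe to automate across hundreds of servers, which is what lets one administrator's ratio climb the way Schena describes.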
Puppet also enabled developers to create applications that require less work to deploy, shifting to developers tasks that would otherwise fall to operations staff. That's the sort of cross-functional teamwork that some observers say is critical to properly managing a public, private or hybrid cloud architecture.