When a customer places an order over the Web, it's not unusual for the site to lock up during the transaction -- it's all part of the Web experience, and customers know to call a customer service rep to see if the order went through. But take the human factor out of the equation, as in a Web services transaction: Who can be 100% sure the messages got through, unduplicated and in the right order? After all, you're relying on two applications to communicate with each other to complete the order, using standards such as XML, HTTP and SOAP.
"HTTP is a nice, lightweight mechanism to send HTML or almost any type of format. The problem is, there's no guarantee that the message will get delivered," says Anne Thomas Manes, an analyst at Burton Group in Midvale, Utah. "It offers 92% to 96% reliability, but if you need 99.9% reliability, or at the very least notification if a message doesn't get there, HTTP won't do that for you."
Although analysts don't think reliability is the only problem impeding Web services adoption, it certainly casts a pall of uncertainty over the idea of building anything mission-critical on such a platform.
Some users, such as Peter Osbourne, director of Internet research and development at Dollar Thrifty Automotive Group Inc. in Tulsa, Okla., say they do just fine with TCP/IP and HTTP. "We use Web services extensively [for Web-based car reservations] but have not encountered message reliability problems," he says. Dollar has mechanisms to detect message bottlenecks, but Osbourne says the company hasn't lost any messages.
At companies that need a higher degree of reliability, it's a matter of "hacking their way through the jungle with a machete," says Ron Schmeltzer, an analyst at ZapThink LLC. These users build reliability into the business logic of the application or employ a mature messaging platform such as IBM's MQSeries.
Once Isn't Enough
ShopNBC is one organization that has given Web services reliability a lot of thought, according to Steve Craig, chief technology officer and vice president of IT development at the shopping network, which is broadcast into 56 million homes.
ShopNBC relies on Web services to fulfill customer orders over the Web and to receive real-time pricing updates from its participating merchants. Because it's such a mission-critical endeavor, Craig and his group structured a three-layer system that supports three types of Web services messaging: real time, near real time and periodic. Each layer reinforces the others to create message redundancy in case the first message doesn't get through.
"If a real-time transaction fired and was missed, there'd be another job downstream to cover that need," Craig explains. Say, for example, a merchant system sends a message to the Web services system at ShopNBC to take a 10% markdown on a product that's airing on TV. If the real-time message wasn't received, another message sent to another server would eventually lower the price -- maybe not while the product was airing, but on the next database sync.
Customer orders placed over the Web follow the same pattern: The Web server attempts to commit the order in real time with the back-end server, but if that server doesn't respond, the order is routed to a separate server to be synced up later.
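A rough sketch of that layered-fallback idea is below, assuming a hypothetical real-time endpoint and a local queue drained by a periodic job. ShopNBC's actual system is not public, so the class and method names are purely illustrative.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Illustrative sketch of layered message redundancy: try the real-time layer,
// fall back to a periodic sync layer if the first message doesn't get through.
public class RedundantOrderSender {

    /** Hypothetical real-time back-end call; returns false or throws on failure. */
    interface RealTimeEndpoint {
        boolean commitOrder(String orderXml) throws Exception;
    }

    private final RealTimeEndpoint realTime;
    // Orders that missed the real-time window; drained on the next periodic sync.
    private final Queue<String> batchSyncQueue = new ArrayDeque<>();

    public RedundantOrderSender(RealTimeEndpoint realTime) {
        this.realTime = realTime;
    }

    /** Try the real-time layer first; fall back to the periodic layer on failure. */
    public void submitOrder(String orderXml) {
        try {
            if (realTime.commitOrder(orderXml)) {
                return; // real-time layer succeeded
            }
        } catch (Exception e) {
            // swallow and fall through: the downstream job covers the miss
        }
        batchSyncQueue.add(orderXml); // picked up on the next database sync
    }

    /** Called by the periodic job ("another job downstream") to cover missed messages. */
    public void runPeriodicSync() {
        String order;
        while ((order = batchSyncQueue.poll()) != null) {
            try {
                if (!realTime.commitOrder(order)) {
                    batchSyncQueue.add(order); // back end still down; retry next cycle
                    break;
                }
            } catch (Exception e) {
                batchSyncQueue.add(order);
                break;
            }
        }
    }
}
```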
Build, Don't Add
As ShopNBC demonstrates, reliability must be considered from the very beginning. "You never add reliability to something after the fact -- it's not something you can just slap on top of the application," says Don Reeves, vice president of engineering at Black Pearl Inc. in San Francisco.
Black Pearl's B4 platform uses Web services to collect data from disparate sources to provide real-time customer profiles and data analysis. For instance, it sends financial services agents daily lists of high-priority prospects, as well as real-time alerts on products that might interest particular customers. The system needs to collect data from a variety of sources, such as legacy customer databases, marketing campaign systems and live data feeds on stock prices.
Reeves estimates that half the code that makes up B4 is aimed at functionality, and half is intended for error checking. "Whenever you connect to a data source, you have to assume it will fail and have a planned response," he says.
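The sketch below illustrates that discipline under stated assumptions: every data-source call gets a bounded retry and a planned fallback. The DataSource interface and fetchProfileData method are hypothetical, not Black Pearl's B4 code.

```java
import java.util.Optional;

// Illustrative "assume the connection will fail" wrapper around a data source.
public class GuardedDataSource {

    interface DataSource {
        String fetch(String customerId) throws Exception; // may time out or throw
    }

    private final DataSource source;
    private final int maxAttempts;

    public GuardedDataSource(DataSource source, int maxAttempts) {
        this.source = source;
        this.maxAttempts = maxAttempts;
    }

    /**
     * Every call has a planned response to failure: retry a bounded number of
     * times, then return an empty result so the caller can degrade gracefully
     * (e.g. build the customer profile without this particular feed).
     */
    public Optional<String> fetchProfileData(String customerId) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return Optional.ofNullable(source.fetch(customerId));
            } catch (Exception e) {
                // log and retry; the connection is expected to fail sometimes
            }
        }
        return Optional.empty(); // planned response: proceed without the feed
    }
}
```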
Another approach is to employ a reliable transport protocol, such as a message queuing (MQ) service, rather than using HTTP to transmit Simple Object Access Protocol (SOAP) messages. The dominant MQ service is IBM's WebSphere MQ, which acts as a mediator to guarantee delivery by storing messages locally until it receives delivery acknowledgment.
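The store-and-forward principle behind such products can be sketched in a few lines: persist the message locally first, and remove the copy only after the receiver acknowledges delivery. This is a simplified illustration of the pattern, not the WebSphere MQ API.

```java
import java.nio.charset.StandardCharsets;
import java.nio.file.*;
import java.util.UUID;

// Illustrative store-and-forward sender: the local outbox guarantees the
// message survives until delivery is acknowledged.
public class StoreAndForwardSender {

    interface Transport {
        /** Returns true only if the receiver acknowledged the message. */
        boolean sendAndWaitForAck(String message) throws Exception;
    }

    private final Path outbox;       // local store that survives crashes and restarts
    private final Transport transport;

    public StoreAndForwardSender(Path outbox, Transport transport) {
        this.outbox = outbox;
        this.transport = transport;
    }

    public void send(String message) throws Exception {
        // 1. Persist first, so the message survives a crash before delivery.
        Files.createDirectories(outbox);
        Path stored = outbox.resolve(UUID.randomUUID() + ".msg");
        Files.write(stored, message.getBytes(StandardCharsets.UTF_8));

        // 2. Delete the local copy only once delivery is acknowledged.
        if (transport.sendAndWaitForAck(message)) {
            Files.delete(stored);
        }
        // Otherwise the message stays in the outbox, and a redelivery pass
        // (not shown) resends everything still on disk.
    }
}
```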
"You can send it over SMTP or FTP or an MQ system. Web services is completely independent of the underlying protocol," Manes says. The trouble is, MQ systems are proprietary -- IBM's WebSphere MQ doesn't talk with, say, Sonic Software Corp.'s MQ system. This is less of a problem if you're just using Web services internally, but even then, it's an expensive proposition. According to Manes, MQ deployments can go into the seven figures.
Using Your Header
To get beyond dependence on a proprietary protocol, companies will have to code specifications into their SOAP headers, Schmeltzer says -- such as having a system try to resend a message if an acknowledgment of receipt isn't received within 300 milliseconds.
Today, Schmeltzer points out, individual companies need to do this sort of coding themselves, which can be extremely difficult. However, this coding will eventually be built into standards such as WS-Reliability and WS-ReliableMessaging.
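A hand-rolled version of that logic might look something like the sketch below: each message carries an ID in a custom SOAP header, and the sender retransmits if no acknowledgment arrives within 300 milliseconds. The header names and Transport interface are assumptions for illustration, not elements of WS-Reliability or WS-ReliableMessaging.

```java
import java.util.UUID;
import java.util.concurrent.*;

// Illustrative do-it-yourself reliability: tag each SOAP message with an ID and
// resend until an acknowledgment arrives within the deadline.
public class AckingSoapSender {

    interface Transport {
        /** Sends the envelope and completes when the receiver's ack arrives. */
        CompletableFuture<Void> send(String soapEnvelope);
    }

    private final Transport transport;

    public AckingSoapSender(Transport transport) {
        this.transport = transport;
    }

    public boolean sendReliably(String body, int maxAttempts) {
        String messageId = UUID.randomUUID().toString();
        String envelope =
            "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">"
          + "<soap:Header>"
          + "<rel:MessageId xmlns:rel=\"urn:example:reliability\">" + messageId + "</rel:MessageId>"
          + "<rel:AckRequested xmlns:rel=\"urn:example:reliability\">true</rel:AckRequested>"
          + "</soap:Header>"
          + "<soap:Body>" + body + "</soap:Body>"
          + "</soap:Envelope>";

        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                // Resend unless the ack comes back within 300 milliseconds.
                transport.send(envelope).get(300, TimeUnit.MILLISECONDS);
                return true;
            } catch (TimeoutException | InterruptedException | ExecutionException e) {
                // no ack in time: retry with the same MessageId so the receiver
                // can discard duplicates
            }
        }
        return false; // give up and surface the failure to the application
    }
}
```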
In the midst of this industry confusion, the lack of a reliable messaging specification is pretty far down the list of what's stopping people from implementing Web services, says Dwight Davis, vice president of Summit Strategies Inc. in Kirkland, Wash. "There's a lot of baggage that comes into play on the list of factors," he says, including the relative newness of the technology and the need to focus on day-to-day crises rather than long-term architectural change. Just the same, before mission-critical Web services applications enter the mainstream, reliable messaging will have to become less complex and costly.
Brandel is a freelance writer in Grand Rapids, Mich. Contact her at mary.brandel@comcast.net.
Danske Bank A/S's trailblazing work to build a service-oriented architecture had gotten so advanced that it exposed more than 1,000 services from its mainframes and application servers. But the Copenhagen-based bank found itself in a frustrating predicament.
"We couldn't find them," says Claus Torp, the company's chief architect.
The problem threatened to wipe out one of the main benefits of service-oriented architectures (SOA)—reuse. So Danske set about revising its concept of a service, refining its repository and establishing a governance process to enforce best practices.
The result was a collection of 140 services that is far more manageable.
An in-depth look at several SOA pioneers shows that the steps Danske Bank took are key to a company's ability to reuse code, build applications with greater speed and efficiency—and ultimately save money.
But it's not easy, and the implementation sequence is important. Sun Microsystems Inc., for instance, built a registry and set up an architecture review board. But the IT department is just now circling back to do a closer examination of Sun's 80 to 100 Web services.
Karen Casella, an IT director at Sun, recommends that a company starting down the SOA path first look at its business requirements and identify which Web services are needed. "We learned the hard way," she says. "We put some of the infrastructure in place before we completely understood what we needed to have in play."
The Services
Companies need to figure out which business processes can be turned into services, carefully design and define the services and distinguish them from components.
When Danske Bank began building standard interfaces to expose its legacy programs, it defined a service as "one function." Now it describes a service at a higher level, as a logical grouping of functionality and data, such as "customer" or "account."
The company's 140 services are each composed of about 10 "operations," or components, that are essentially more granular services. There are currently more than 1,365 operations. Danske expects to eventually have 250 services.
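In code, that granularity might look like a coarse-grained "customer" service whose interface groups roughly ten finer-grained operations. The operation and type names below are illustrative, not Danske Bank's actual service contract.

```java
// Illustrative coarse-grained service: a logical grouping of functionality and
// data ("customer"), with its finer-grained operations listed beneath it.
public interface CustomerService {
    CustomerRecord getCustomer(String customerId);
    void updateAddress(String customerId, Address newAddress);
    void addAccountRelationship(String customerId, String accountId);
    java.util.List<AccountSummary> listAccounts(String customerId);
    // ...further operations (create, close, search, etc.), up to roughly ten

    // Supporting data types, kept minimal for the sketch.
    record CustomerRecord(String customerId, String name, Address address) {}
    record Address(String street, String city, String postalCode) {}
    record AccountSummary(String accountId, String type) {}
}
```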
How well a company can break down its business processes and application functions into services will determine the level of flexibility and reuse it gets, Torp says.
Danske uses modeling tools to develop logical maps of the functional building blocks and business processes. Then it matches the business processes to the services to make sure it has solved the right problem.
"A lot of doing service-oriented development is making sure you can run different business processes on top of the same service building blocks," says Torp. "If you want to be effective, you have to make sure there is only one place to do the same function."
Cendant Corp.'s Travel Distribution Services division spends a considerable amount of time determining the optimal granularity of its services and service components, according to Chief Technology Officer Robert Wiseman. A service is something that can be called externally through Cendant's business domain model, dubbed Rosetta Stone. A service component, such as logging, is called only internally.
So a "get hotel" service might call several low-level services, such as a latitude/longitude "destination finder" that the company makes available to customers. But Cendant's currency converter is a component, since it currently isn't exposed to customers.
Cendant expects an ongoing project to extract components from monolithic applications to have a big payoff, Wiseman says. For instance, passenger name record (PNR) is a basic unit of data used by booking engines and global distribution systems such as Cendant's Galileo. By making "Super PNR" available as a service, the IT department won't have to maintain six or seven instances of PNR in different applications.
The Hartford Financial Services Group Inc. built pockets of Web services and other services more than three years ago, but its enterprise-scale SOA work didn't start until 18 months ago.
A good candidate for an enterprise service is one that two or more applications need, says Benjamin Moreland, manager of application infrastructure delivery at the Connecticut-based insurance company. "But not everything should be a service," Moreland warns, noting the potential performance hit from exposing services.
Establishing the Registry
Vendors may have expected Internet-based registries based on the Universal Description, Discovery and Integration (UDDI) standard to spread like wildfire. But early SOA adopters care more about internal registries.
That doesn't mean UDDI is dead, though. UDDI was so important to The Hartford that it chose its registry based on the product's conformance to UDDI 3.0. (Officials declined to name the product due to a company policy against endorsing vendors.) The registry includes metadata describing the services and the means to connect to services via particular transports.
But the UDDI registry isn't meant for everything. Departments continue to maintain local registries for some services they create, because The Hartford is selective about what goes into its enterprise registry.
"We don't want to create a junk drawer of services," says Moreland. "What we feel should be in the enterprise UDDI are services that will give us leverage and flexibility across the enterprise."
Providence Health System uses the Infravio Inc. management framework for its service library, and much to the surprise of company skeptics, its developers are actually reusing services, now that they can find the Web Services Description Language (WSDL) files defining the interfaces.
"We commonly refer to this as 'Google-izing' Web services," says Michael Reagin, Providence Health's Portland, Ore.-based director of research and development. "They can reuse services with minor modifications in a couple of hours. People are more productive. Everyone's happy."
Providence Health's greater concern these days is managing its growing number of Web services and its SOA framework from an operational standpoint. The company has close to 50 composite services, each one comprising one to 20 more granular subservices.
Early adopters that couldn't find a registry to suit their needs built their own. Danske Bank maintains separate repositories for components from its mainframes and J2EE- and Microsoft .Net-based application servers. The repositories replicate between each other, forming one logical repository that essentially is a superset of a UDDI registry, adding operational parameters for functions such as load balancing, says CIO Peter Schleidt. A service integrator agent dynamically selects the most efficient way to call a service, using SOAP over HTTP or more efficient, proprietary protocols.
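The agent's job can be sketched as a simple preference-ordered transport selection, falling back to SOAP over HTTP when nothing faster is shared by both sides. The protocol names and interfaces here are assumptions for illustration, not Danske Bank's implementation.

```java
import java.util.*;

// Illustrative service integrator agent: pick the most efficient binding a
// service advertises, with SOAP over HTTP as the interoperable fallback.
public class ServiceIntegrator {

    interface Binding {
        String protocol();                    // e.g. "native-mainframe", "soap-http"
        String invoke(String operation, String payload) throws Exception;
    }

    // Most efficient first; SOAP over HTTP is the universal fallback.
    private static final List<String> PREFERENCE =
            List.of("native-mainframe", "proprietary-binary", "soap-http");

    public String call(String operation, String payload, List<Binding> available)
            throws Exception {
        for (String preferred : PREFERENCE) {
            for (Binding binding : available) {
                if (binding.protocol().equals(preferred)) {
                    return binding.invoke(operation, payload);
                }
            }
        }
        throw new IllegalStateException("No supported binding for " + operation);
    }
}
```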