How to avoid Web application pitfalls

Chances are your users have a better idea of how your Web application investments are serving your business than you do. They may not know the intricacies of your infrastructure or understand how the applications work behind the scenes; they probably don't care. But they do know when they can't complete a transaction or when they can't get the information they need. In those cases, your customers are painfully aware when your technology investments aren't serving your business goals. Are you?

Because the Web is an integral part of business for many companies, organizations spend billions of dollars each year monitoring the software and hardware systems powering their sites. Outages, slowdowns and other errors are costly. However, due to the increasing complexity of Web applications, tracing the source of errors is becoming more and more difficult.

Today's tools, and the business processes that have grown up around them, measure Web performance based on infrastructure components -- not on the actual experiences of users. Because applications are being monitored in a piecemeal fashion, every troubleshooting exercise begins with having to re-create the initial problem. This is always the most time-consuming part of the process. But what if that step went away? What if we changed perspectives and began to quantify Web application success from the customer's point of view?

Questions such as "Is the network up?" and "Are the pages loading quickly?" provide only limited visibility into the success or failure of an application. No one assumes the customer perspective. Does the application deliver the right information? Which users are affected by application failures, and how much is that costing the business? With so much invested in the success of mission-critical Web applications, why are we still relying on outdated success metrics such as page download speed and system uptime? Are these measurements really telling us how technology is enabling business?

Robert Wenig

Absent a completely predictable and unchanging technology environment, Web application support specialists are limited to creating synthetic test scripts based on every instance they can imagine and/or afford to model. Using 20th-century systems management tools and methodologies to extrapolate Web application success is really nothing more than "best-guess computing."
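To make the limitation concrete, here is a minimal sketch of the kind of synthetic test script described above. The scenario names and URLs are illustrative placeholders, not real endpoints: each scripted case covers only a path someone thought to model in advance, which is exactly why this approach amounts to best-guess computing.

```python
import time
import urllib.request

# Hypothetical scripted scenarios -- one per imagined use case.
SCENARIOS = [
    ("home page loads", "https://example.com/"),
    ("search responds", "https://example.com/search?q=widgets"),
]

def passes(status, elapsed, max_seconds=2.0):
    """Pass/fail rule for one scenario: expected status, fast enough."""
    return status == 200 and elapsed <= max_seconds

def run_scenario(name, url, timeout=10.0):
    """Fetch one URL and score it with the pass/fail rule above."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            status = resp.status
    except OSError:  # covers urllib.error.URLError and timeouts
        status = None
    elapsed = time.monotonic() - start
    return {"scenario": name,
            "ok": status is not None and passes(status, elapsed),
            "seconds": round(elapsed, 3)}
```

A script like this can confirm that the modeled paths still work, but it says nothing about the millions of real user paths nobody scripted.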

All systems go, but the car won't start

A great way to illustrate best-guess computing is with an example from another discipline where components, uptime and infrastructure reign supreme: automotive repair. Recently, my car decided not to start. It was running fine the night before, but a turn of the key a mere 12 hours later inexplicably yielded nothing. So it was off to the mechanic. She looked under the hood and reported that all the parts and belts and liquids were intact and healthy. She ran several diagnostics that assured us that all of my car's systems and components were fine. By all accounts, my car should be running smoothly. Only it wasn't.

After a lot of head scratching, the mechanic and I started running through everything I had done with the car in the past 24 hours. When I mentioned that I had lost my keys the night before, the light bulb went on. Apparently, the spare key hadn't been properly activated by the dealer. Once the key was activated, the problem was solved.

On the way back to work, cursing myself, the dealer and the mechanic for hours of wasted time, the epiphany occurred: The crucial information the mechanic needed to solve my car's failure was nowhere to be found in the car's myriad systems and components. The cause could be found only by taking a closer look at the experience of the owner -- me. The methodology to derive that information from me was frustrating, inefficient and time-consuming.

Now consider the Web application support person, or "mechanic." How much access do they have to the real users who are experiencing application issues? They have plenty of data with respect to speeds and feeds, but little in the way of how the final application was presented to the end user. Furthermore, IT organizations deal with millions of customers accessing thousands of applications with billions of different possible use cases. The idea of "running through" the actions leading up to every application failure is simply ludicrous.

The sad fact is, most of us are relying on our end users -- our customers -- to bring Web application problems to our attention, and to help us solve them. Not only is this alarming because it involves drafting your customers as unwilling members of your QA team, it's also ineffective. A very small percentage of users will actually report a problem, and once they do, the time to find and repair it grows rapidly with the number of dynamic interdependencies involved. Finally, even if you can reproduce the error within your Web environment, can you determine the scope of the problem? Did it affect one or 1,000 users? Did it occur one time or 1,000 times?
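Answering those scope questions requires counting, not guessing. As a rough sketch, assuming you log each user-visible error with a user identifier and an error label (the `TXN_FAILED` code below is a made-up example), the two numbers fall out directly:

```python
def failure_scope(log_events, error_code="TXN_FAILED"):
    """Estimate blast radius from (user, event) log records:
    how many distinct users hit a given error, and how often it occurred."""
    users_affected = set()
    occurrences = 0
    for user, event in log_events:
        if event == error_code:
            users_affected.add(user)
            occurrences += 1
    return {"users_affected": len(users_affected),
            "occurrences": occurrences}
```

The hard part in practice is not the arithmetic but capturing user-level error events in the first place -- which is precisely what component-centric monitoring fails to do.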

If uptime isn't a reliable means of measuring application and technology ecosystem health, and application-centric monitoring can tell you only which pieces of the puzzle are behaving as designed, what can IT departments do to ensure that their Web applications are performing as intended?

The first step is to realize the limitations of your current application management tools and understand that incorporating the customer perspective is crucial to measuring the success of your IT investments. While it may seem like an impossibility, finding a way to run through every application failure from the customer point of view is critical to moving beyond best-guess computing.

Once you've embraced the importance of the customer perspective, you can begin examining strategies and tools that will give you visibility into what your users are really doing with your applications. Application performance technologies are emerging that can monitor the transactions between real users and your applications, tracking what the user requests and what the application returns. More importantly, these management tools will put this information in context for you and enable you to capture and sessionize the events leading up to the failure, linking them across disparate applications so that the related chain is viewable.
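The core idea behind sessionizing, as described above, can be sketched in a few lines. This is an illustrative simplification, not any vendor's implementation: events carry a user, a timestamp and an action, and a gap of inactivity (the 30-minute threshold is an assumed convention) starts a new session, so the chain of actions leading up to a failure can be replayed in order.

```python
from collections import defaultdict

SESSION_GAP = 30 * 60  # seconds of inactivity that starts a new session (assumed)

def sessionize(events, gap=SESSION_GAP):
    """Group (user, timestamp, action) events into per-user sessions.

    Events within `gap` seconds of the same user's previous event belong
    to one session, reconstructing the chain that led to a failure."""
    sessions = defaultdict(list)  # user -> list of sessions
    last_seen = {}
    for user, ts, action in sorted(events, key=lambda e: (e[0], e[1])):
        if user not in last_seen or ts - last_seen[user] > gap:
            sessions[user].append([])  # start a new session for this user
        sessions[user][-1].append((ts, action))
        last_seen[user] = ts
    return dict(sessions)
```

Given a session that ends in an error, the support "mechanic" can see every step the user took, instead of having to run through them by interview, as with the car.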

While network uptime and speed remain important indicators of your IT performance, these metrics alone can't tell you how your Web applications are achieving the business goals they were implemented to enable. And, chances are, your CEO and other business stakeholders aren't satisfied with uptime and speed, and they certainly won't accept the idea of best-guess computing when it comes to business-critical systems. The systems management tools we rely on to tell us how our applications are performing are myopic. The narrow picture that they paint is costing us money, losing us customers and preventing us from truly measuring the value of our applications from a business perspective.
