In our last newsletter we discussed how it is common practice to develop an application with little or no focus on how the application will perform once implemented. We also discussed how in most cases IT organizations make changes to the IT infrastructure with little knowledge as to what impact the change will have on application performance. In both of these examples the IT organization is hoping for acceptable application performance. In this newsletter we will begin to develop an alternative approach to merely hoping for acceptable application performance. We will refer to this approach as Application Performance Engineering (APE).
Formally, APE is the set of roles, skills, activities, practices, tools and deliverables applied at every phase of the application life cycle to ensure that an application is designed, implemented and operationally supported to meet its non-functional performance requirements.
In a less formal fashion, APE is the recognition of the fact that IT organizations need to design for application performance and then test, measure and tune performance throughout the application life cycle. Since APE can be difficult and consume scarce resources, we are not suggesting that an IT organization would necessarily apply APE to the hundreds of applications that it supports. What we are suggesting is that an IT organization needs to apply APE to the handful of applications that the company relies on to run its business.
As noted, APE needs to occur throughout the application life cycle. For example, a key component of the typical application life cycle is requirements gathering. What APE brings to requirements gathering is a focus on performance. At a minimum this involves establishing objectives for how long it will take an application to complete a particular operation, such as a user transaction or a file transfer.
However, a complex application is typically composed of multiple modules, each of which can have different response time objectives. For example, assume that an IT organization were either developing or evaluating the acquisition of a unified communications application. The IT organization might establish a performance objective that states that 95% of all instant messages would be successfully delivered within 15 minutes.
However, this type of performance objective wouldn't make any sense for the voice component of unified communications. Recognizing that, the IT organization might establish a performance objective that states that 98% of all voice calls would have a Mean Opinion Score (MOS) of 4.2 or higher.
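Objectives such as these can be evaluated mechanically: measure a sample of operations, then check what fraction of the sample satisfies the objective's threshold. The Python sketch below illustrates that check; the sample values are hypothetical measurements invented for illustration, not data from this newsletter.

```python
def fraction_meeting(samples, ok):
    """Return the fraction of measured samples that satisfy the predicate `ok`."""
    return sum(1 for s in samples if ok(s)) / len(samples)

# Hypothetical instant-message delivery times, in seconds.
im_delivery = [0.4, 0.9, 1.2, 2.5, 0.7, 3.1, 0.6, 1.8, 0.5, 12.0]
# Objective: 95% of messages delivered within a threshold (5 s here is illustrative).
print(fraction_meeting(im_delivery, lambda t: t <= 5.0) >= 0.95)  # False: 9 of 10 qualify

# Hypothetical per-call Mean Opinion Scores.
mos_scores = [4.3, 4.4, 4.1, 4.5, 4.3, 4.2, 4.4, 4.0, 4.3, 4.4]
# Objective: 98% of calls score MOS 4.2 or higher.
print(fraction_meeting(mos_scores, lambda s: s >= 4.2) >= 0.98)  # False: 8 of 10 qualify
```

The same predicate-based check works for either style of objective — an upper bound on delivery time or a lower bound on call quality — which is why a single helper suffices.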
Of course, it is well known that, due to its attendant latency, jitter and packet loss, the WAN has a major impact on application performance. This raises a key question that we will address in a future newsletter: Does an IT organization have 'one size fits all' performance objectives? That is, will the IT organization commit to the same performance objectives independent of where the user resides?
Alternatively, will the IT organization provide one set of performance objectives for users who reside in the U.S. and a different set of performance objectives for users who reside outside of the U.S.?
In our next newsletter we will discuss the linkage between APE and SLAs.
This story, "Application performance engineering" was originally published by NetworkWorld.