Is it time to fire your SLAs?

The metrics used to measure IT success have become detached from IT’s mission: aligning technology with business needs. Digital experience monitoring is a promising way forward.

“What to measure” and “how to measure” are well-known dilemmas for IT executives. The reality is that many IT projects are measured by criteria more closely aligned with completion than with success. In effect, IT is telling the business, “measure us by our adherence to the project plan rather than by the benefits the project delivers to the organization.”

Unsurprisingly, business managers are often left unsatisfied with the outcomes of IT initiatives when success is not easily measurable. 

Consider a Windows 10 migration. A sales director likely couldn’t care less about what flavor and version of an operating system runs on their laptop, but they might care a lot if their laptop performance slows down after IT “did something to it.” So, while IT might regard the Windows 10 migration project as a success once all devices in the project plan have been upgraded, end-users suffering performance degradation as a result of migration might have a very different take on the project’s “success.” 

I posit that the key to supporting today’s workforce lies in the concept of the “quantified user.” Just as we quantify the number of steps we take each day to improve our personal health, the quantified user is one whose experience within the digital workspace is quantified and given a score, so that IT’s impact on that user’s productivity can be measured.

Gartner calls this “Digital Experience Monitoring” (DEM): an availability and performance monitoring discipline that applies visualization, analytics and machine learning to a combination of datasets ingested from various IT metrics. These datasets may then be observed and analyzed with the goal of optimizing the holistic operational experience, productivity and behavior of end-users.

Calculating an employee’s “digital experience score” may be accomplished by analyzing all the factors that could impact their productivity, using a data-collection agent right where they conduct most, if not all, of their work: the endpoint.
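To make that concrete, here is a minimal sketch of how such a score might be computed from endpoint telemetry. The metric names, weights and thresholds are hypothetical illustrations, not any vendor’s actual formula:

```python
# A minimal sketch of a "digital experience score" computed from
# endpoint telemetry. All metric names, weights and thresholds are
# hypothetical; a real DEM agent would collect far richer data.

# Raw samples an endpoint agent might report (illustrative values).
sample = {
    "boot_time_s": 45.0,        # time from power-on to usable desktop
    "app_crashes_per_week": 2,  # application crash count
    "cpu_wait_pct": 12.0,       # % of time the user waits on CPU
    "network_latency_ms": 80.0, # round-trip time to key services
}

# "Worst acceptable" values used to scale each metric onto 0..1,
# where 0 is the worst acceptable experience and 1 is ideal.
worst = {
    "boot_time_s": 120.0,
    "app_crashes_per_week": 10,
    "cpu_wait_pct": 50.0,
    "network_latency_ms": 400.0,
}

weights = {
    "boot_time_s": 0.2,
    "app_crashes_per_week": 0.3,
    "cpu_wait_pct": 0.3,
    "network_latency_ms": 0.2,
}

def experience_score(sample, worst, weights):
    """Weighted average of per-metric sub-scores, scaled to 0-100."""
    total = 0.0
    for name, value in sample.items():
        sub = max(0.0, 1.0 - value / worst[name])  # lower raw value is better
        total += weights[name] * sub
    return round(100 * total, 1)

print(experience_score(sample, worst, weights))  # 75.3 for the values above
```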

Why the endpoint? Because as critical IT functions are outsourced and managed by third parties, reduced visibility into network transactions, data center transactions and overall IT infrastructure is inevitable. The end-user’s workspace, the endpoint, has therefore become the most privileged vantage point IT has into the state and health of an increasingly scattered IT environment.

Like a credit score, a user’s digital experience score might be calculated from the availability and performance of all the factors that could impact their technology experience: everything from network and resource problems to infrastructure and system configuration issues, distilled into a number that is normalized and trustworthy. Also like a credit score, the digital experience score will likely be relied on for critical business decisions, compared across teams and groups of users, and used by IT to improve service delivery and better align with business productivity.
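That “normalized and trustworthy” property is what makes scores comparable across teams. One simple way to get it, assumed here purely for illustration, is to re-centre raw scores on a fixed scale, loosely mirroring how credit scores are banded:

```python
# A minimal sketch of normalizing raw experience scores so they are
# comparable across teams and platforms. The scaling approach and the
# fleet data below are illustrative assumptions, not a standard.

from statistics import mean, stdev

fleet_scores = {"alice": 75.3, "bob": 62.0, "carol": 88.4, "dave": 70.1}

mu = mean(fleet_scores.values())
sigma = stdev(fleet_scores.values())

# Re-centre each score on a fixed scale (mean 500, spread 100).
normalized = {
    user: round(500 + 100 * (score - mu) / sigma)
    for user, score in fleet_scores.items()
}
print(normalized)  # e.g. {'alice': 512, 'bob': 392, 'carol': 630, 'dave': 465}
```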

Unlike the credit score analogy, drawing parallels between digital experience scores and the traditional Service Level Agreement (SLA) may be risky and even dangerous. Traditional SLAs include service metrics that measure different criteria based on performance levels, so that “good” service can be differentiated from “bad” service. While the importance and critical role of an SLA is undeniable, most SLAs focus on specifying the amount of effort to be spent on a certain task, project or process rather than on the results that enhance business productivity. That is, SLAs rarely align service effectiveness with business objectives. For example, an IT SLA might state that “in case of any downtime in application X, IT is supposed to resolve it in a certain amount of time.”

First, this does not take into consideration end-user business objectives, such as getting an order entered into a CRM system before a quarter-end deadline. Second, service specifications are usually stated in terms of metrics that are unclear or impossible to measure meaningfully, such as “uptime.” End-users don’t care whether their systems are “up” if they are unusable; they will perceive that state as “down” regardless of what the SLA measures. It is difficult to determine the exact meaning of “good” in the context of business value.

For instance, what exactly does the following statement, a very common service level, mean? “The availability percentage of the network should be 99.99%.” This service level raises several questions. What is the difference between 99.99% availability and 99.999%? What if the service is available on a weekend when nobody uses it and unavailable at a peak time, such as a Monday morning? Does 99.99% availability actually bring business value to the customer?
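The raw arithmetic shows how little the extra nine says about business value. The short sketch below computes the allowed downtime per year for a few targets; note that nothing in it says when that downtime falls:

```python
# The arithmetic behind the "nines": allowed downtime per year for a
# given availability target. A raw percentage says nothing about
# *when* the downtime occurs, which is the objection raised above.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

for target in (0.999, 0.9999, 0.99999):
    downtime = (1 - target) * MINUTES_PER_YEAR
    print(f"{target:.3%} availability -> {downtime:.1f} min/year down")

# 99.900% availability -> 525.6 min/year down
# 99.990% availability -> 52.6 min/year down
# 99.999% availability -> 5.3 min/year down
```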

My main gripe with SLAs, however, is that they tend to be long, technical documents with a deep focus on terminology that may be understandable to IT and service providers but not to the end-users and business-line managers who also need a clear understanding of them.

Still, shifting SLAs to focus on digital experience scores doesn’t mean starting from scratch. In fact, the shift behooves us to borrow what we know works in traditional SLAs: proper, well-documented mechanisms must ensure that IT, service providers, end-users and business-line managers all understand what is being measured and what recourse they have in the event of service-level deterioration. DEM SLA frameworks should be based on agreed-upon, measurable, technically valid, communicable and understandable metrics; be normalized so that measurements are consistent across users and platforms; and come with a stated set of remediations should service levels drop.
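As a thought experiment, a DEM-based SLA entry might be as small as the sketch below. Every field name and threshold here is hypothetical, but it captures the three properties just listed: an agreed metric, a normalized target and a stated remediation:

```python
# A minimal sketch of what a DEM-based SLA entry might look like.
# All field names, values and thresholds are hypothetical.

from dataclasses import dataclass

@dataclass
class DemSla:
    business_function: str   # who the target protects
    metric: str              # agreed-upon, measurable metric
    target_score: float      # normalized 0-100 experience score
    measured_at: str         # where and when measurement happens
    remediation: str         # agreed recourse if the target is missed

sales_sla = DemSla(
    business_function="Field sales (CRM order entry)",
    metric="digital_experience_score",
    target_score=80.0,
    measured_at="endpoint agent, business hours only",
    remediation="Escalate to desktop engineering within 1 business day",
)
```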

Digital experience scoring provides a more informed basis for key analytical decisions because it stems from observation of real user needs and behavior. By providing a measurable metric for how well the current computing environment meets the needs of end-users, DEM opens the door to exploring each aspect of their usage: application behaviors, mobility requirements, system resource consumption, and so on. From there, users can be assigned SLAs that are aligned with their business function, creating a measurable mapping between IT and business requirements. This, in turn, leads to more data-driven procurement practices, easier budget rationalization and enhanced business productivity.
