Putting an end to IT monitoring sprawl

“Silo-fication” of IT monitoring and analytics has created a modern-day Tower of Babel that threatens the success of large organizations.


Enterprise IT groups are often at odds with each other, especially at large corporations. IT professionals have few options for driving their organizations toward solutions that scale and address the needs of every IT silo. The result is point solutions deployed and managed separately by each group, which only deepens the silo-fication of IT. Part of the problem is that niche monitoring solutions provide views that are either too narrow to be relevant to the larger organization or too broad to be useful in solving niche issues. Each IT domain, be it application, network or infrastructure, tends to be monitored, analyzed and sometimes even remediated via its own runbook; these processes happen per domain rather than holistically across the business. Increasing TCO pressures, evolving best practices and the dynamic nature of modern IT environments have only intensified this fragmentation.

Despite the trend of bolting capabilities from other domains onto these monitoring tools via systems of record, large IT organizations often struggle to deliver the unified analysis, the "source of truth," needed to ensure digital business success. As a result, IT leaders find it difficult to combine the "hints" (or "puzzle pieces") that each domain-specific monitoring tool uncovers into a cohesive understanding of overall service behavior, the root causes of problems and the niche views that individual groups require across the entire IT landscape.

User experience as “The Source of Truth”

Today, end-user computing leaders have standardized on toolsets called Digital Experience Monitoring (DEM) to measure and respond to end-user experience. However, end-user computing is but one silo in the larger IT organization. How can DEM be used as the definitive system of record for the entire IT estate?

DEM technologies gather data from multiple applications and services, across multiple channels, to observe and measure the quality of a person's, device's or application's interactions with all facets of IT, whether managed internally, externally or by the user. To make this information useful to the entire organization, IT leaders must initiate projects that use DEM to identify anything that affects user experience, for better or worse, as users interface and interact with IT assets, services and locations. For each silo, an approach must be developed that contextually gathers measurements from multiple applications, devices, networks and service interactions to provide a meaningful and useful picture of the impact that technology has on business productivity.

For example, if a SaaS service outage impacts all users in a particular region more than others, it’s important to understand the cause of the deterioration (internal or external), the class of users impacted, whether there is any commonality among the devices being used to access the impacted service, and insights on the scope of the problem for triage purposes.
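The regional-outage triage described above can be sketched as a simple cohort comparison over experience measurements. The sample data, metric names and threshold below are all hypothetical; real DEM products expose far richer telemetry, but the grouping logic is the same.

```python
from statistics import mean

# Hypothetical DEM samples: (region, device_type, latency_ms). Illustrative data only.
samples = [
    ("emea", "laptop", 1400), ("emea", "thin-client", 1550), ("emea", "laptop", 1500),
    ("apac", "laptop", 210), ("apac", "thin-client", 190),
    ("amer", "laptop", 240), ("amer", "thin-client", 260),
]

def regions_over_baseline(samples, factor=1.5):
    """Flag regions whose mean latency exceeds the global mean by `factor`."""
    overall = mean(latency for _, _, latency in samples)
    by_region = {}
    for region, _, latency in samples:
        by_region.setdefault(region, []).append(latency)
    # Keep only regions degraded well beyond the global baseline for triage.
    return {r: mean(v) for r, v in by_region.items() if mean(v) > factor * overall}

print(regions_over_baseline(samples))
```

The same grouping can be repeated over device type or user class to find the commonality the paragraph above calls for.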

IT tools are only as valuable as the actionable insights they provide

Today's enterprise IT environments are like Frankenstein's monster: patchworked, emotionally charged and difficult to wrangle. To manage them, IT uses a variety of monitoring and management tools to gain understanding from multiple perspectives (e.g., endpoint, network, infrastructure, application). But correlating events across multiple tools is challenging. Duplicate alarms and events make environments noisy, and the speed of support drops as IT tries to sort through a flurry of notifications while managing multiple data silos.
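The duplicate-alarm noise described above is often tamed by collapsing alerts that share a fingerprint within a time window into a single incident. The alert shape, field names and 30-second window below are assumptions for illustration, not any particular product's schema.

```python
# Hypothetical alerts from different tools describing overlapping incidents.
alerts = [
    {"tool": "netmon", "host": "db01",  "symptom": "high_latency", "ts": 100},
    {"tool": "apm",    "host": "db01",  "symptom": "high_latency", "ts": 103},
    {"tool": "infra",  "host": "db01",  "symptom": "high_latency", "ts": 107},
    {"tool": "apm",    "host": "web02", "symptom": "error_rate",   "ts": 150},
]

def dedupe(alerts, window=30):
    """Collapse alerts sharing (host, symptom) within `window` seconds into one incident."""
    incidents = []
    for alert in sorted(alerts, key=lambda a: a["ts"]):
        for inc in incidents:
            if (inc["host"], inc["symptom"]) == (alert["host"], alert["symptom"]) \
                    and alert["ts"] - inc["last_ts"] <= window:
                inc["sources"].add(alert["tool"])  # same incident seen by another tool
                inc["last_ts"] = alert["ts"]
                break
        else:
            incidents.append({"host": alert["host"], "symptom": alert["symptom"],
                              "sources": {alert["tool"]}, "last_ts": alert["ts"]})
    return incidents

print(len(dedupe(alerts)))  # four alerts collapse into two incidents
```

Tracking which tools contributed to each incident also preserves the cross-domain "puzzle pieces" rather than discarding them.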

DEM tools cut through the clutter and provide a unified viewpoint by ingesting and storing data from multiple sources, with end-user experience as the core normalized metric for scoring, much like a credit score. The richer the data streams, the better the insights. DEM tools should therefore be measured by the quality of the data and the correlations they generate, and every department must be an equal stakeholder to achieve quantifiable success.
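The credit-score analogy above can be made concrete: normalize each domain's raw metric onto a common 0-to-1 scale, weight it, and map the blend onto a familiar 300-850 range. The metric names, ranges and weights here are entirely illustrative assumptions, not a published DEM scoring model.

```python
# Hypothetical per-domain metrics: (observed_worst, observed_best, weight).
# Weights sum to 1.0; ranges and choices are illustrative only.
METRICS = {
    "app_response_ms": (5000, 100, 0.4),  # lower is better
    "login_time_s":    (60, 2, 0.3),
    "crash_rate_pct":  (10, 0, 0.3),
}

def experience_score(readings):
    """Map raw readings onto 0..1 per metric, then onto a 300-850 scale."""
    total = 0.0
    for name, (worst, best, weight) in METRICS.items():
        normalized = (readings[name] - worst) / (best - worst)  # 0 at worst, 1 at best
        normalized = max(0.0, min(1.0, normalized))             # clamp outliers
        total += weight * normalized
    return round(300 + total * (850 - 300))

good = experience_score({"app_response_ms": 150, "login_time_s": 3, "crash_rate_pct": 0.2})
poor = experience_score({"app_response_ms": 4000, "login_time_s": 45, "crash_rate_pct": 8})
```

Because every silo's metric feeds the same scale, a drop in the score is comparable across departments, which is precisely what a shared "source of truth" requires.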

Eventually, the goal is for these tools to become incorporated as feeds for mechanisms such as Artificial Intelligence for IT Operations (AIOps) that might automatically resolve issues without human intervention.

DEM + AIOps = scalable IT success

Without jumping the acronym shark, a sound DEM strategy should encompass all IT systems and silos across the enterprise, and it must interoperate with, and provide benefits to, all of them. Enterprises should chart a Venn-diagram-like path toward collapsing traditional application performance management (APM), IT service management (ITSM) and IT operations management (ITOM) into AIOps.

AIOps is gaining traction in nearly every IT silo. Gartner forecasts that by 2019, "25% of global enterprises will have strategically implemented an AIOps platform supporting two or more major IT operations functions."

But, again, the benefits of AIOps are directly tethered to the quality of the DEM data ingested. Bad or noisy data means a deluge of false positives, false negatives and costly cross-silo finger-pointing. Further, before embarking on DEM, the strategy must have consensus at the highest executive level of the organization, so the effort becomes a coordinated, top-down mission. The question should not be whether a business should do this, but whether it can afford not to.

This article is published as part of the IDG Contributor Network.
