Use DataOps to support your changing data culture

Organisations have spent handsomely on a range of data and analytics technologies to gain a competitive edge. According to Gartner, the analytics and business intelligence (BI) market grew to over $21.5 billion globally in 2018, while the database management market grew to over $46 billion.

Despite years of investment in technologies, roles and skills, organisations still struggle to extract value from their data and analytics initiatives. Many have difficulty deploying analytics into existing business processes and applications. Others cannot demonstrate ROI, or adequately secure and govern the inputs and outputs of their initiatives.

At the same time, consumers of data and analytics capabilities are multiplying rapidly across organisations, each with their own tribal knowledge, skills and tooling expertise. Clearly, the challenges are diverse.

Data management professionals are looking for alternative approaches to scale delivery, respond to ad hoc demands and still maintain some sense of control and oversight. DataOps has emerged as a response to these needs.

What is DataOps?

Gartner defines DataOps as a collaborative data management practice focused on improving the communication, integration and automation of data flows between data managers and data consumers across an organisation.

DataOps applies the traditional DevOps concepts of agility, continuous integration and deployment, as well as end-user feedback, to data and analytics efforts. The key difference is that DataOps focuses on data service delivery and resilience for data producers and consumers.
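
As a concrete illustration, consider what “continuous integration for data” can look like in practice. The sketch below is a minimal, hypothetical example in Python: a data-quality check that a CI job could run before a pipeline change is promoted. The sample extract, column names and rules are all invented for illustration; this is one possible expression of the idea, not a prescribed DataOps toolchain.

```python
# A hypothetical data-quality gate for a CI job. Everything here,
# including the sample extract, column names and rules, is invented.
import csv
import io

# Stand-in for a real extract; in practice this would be pulled
# from the pipeline's staging area.
SAMPLE_EXTRACT = """customer_id,country,revenue
1001,AU,250.00
1002,NZ,
1003,AU,410.50
"""

def check_extract(raw: str) -> list[str]:
    """Return rule violations; an empty list means the check passes."""
    errors = []
    for i, row in enumerate(csv.DictReader(io.StringIO(raw)), start=1):
        if not row["revenue"]:
            errors.append(f"row {i}: revenue is missing")
        if row["country"] not in {"AU", "NZ"}:
            errors.append(f"row {i}: unexpected country {row['country']!r}")
    return errors

if __name__ == "__main__":
    violations = check_extract(SAMPLE_EXTRACT)
    for v in violations:
        print("FAIL:", v)
    # A CI job fails the build on any violation, blocking the
    # pipeline change from being promoted.
    raise SystemExit(1 if violations else 0)
```

Run automatically on every change, a failing check blocks promotion rather than letting a broken extract flow downstream to consumers.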

However, DataOps isn’t strictly a technical competency. The real focus and benefit of DataOps is as a lever for organisational change, to steer behaviour and enable agility.

The point is to change how people collaborate around data and how it’s used in the company. Rather than simply throwing data over the virtual wall, where it becomes someone else’s problem, the development of data pipelines and products becomes a collaborative exercise with a shared understanding of the value proposition.

Overcoming complex and siloed workflows

In the physical world, pipelines move materials from one location or environment to another. Of course, there are often other challenges – pipelines can leak, or they may not be allowed to pass through certain areas, like protected lands.

The same is true of data pipelines. While it’s easy to conceptualise data pipelines as smoothly flowing data delivery mechanisms, the reality is often quite different.

Data pipelines also suffer from leaks, but this typically doesn’t mean you’re losing data. The leak we’re concerned with is lost context. As data moves through the pipeline, from source to target, what that data means and represents may change at each step. Values may be handled differently, leaving the next stage to figure out what the data means. This is repeated throughout the pipeline.

The challenge is that each stage has its own understanding of the data based on its use. This results in brittle pipelines that are incredibly slow to react to change. The problem gets worse as more consumers of data pipelines arise.
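
To make the idea of lost context concrete, the contrived Python sketch below shows the difference between a stage that emits a bare value, forcing downstream stages to guess its meaning, and one that carries explicit context along with the data. The record fields, units and the Amount wrapper are all invented for illustration.

```python
# A contrived example of the "context leak". All field names,
# units and the Amount wrapper are invented for illustration.
from dataclasses import dataclass

# Stage 1 emits a bare number. Is `amount` in dollars or cents?
# Gross or net? The answer lives only in someone's head.
raw_record = {"customer": 1001, "amount": 25000}

# Stage 2 guesses "cents" and converts; stage 3 may guess differently.
dollars_guess = raw_record["amount"] / 100  # the guess is baked into code
print(f"stage 2's guess: {dollars_guess:.2f}")

@dataclass(frozen=True)
class Amount:
    """Carries the context that bare values silently drop."""
    value: int
    currency: str  # e.g. "AUD"
    unit: str      # e.g. "cents"
    basis: str     # e.g. "gross"

typed_record = {"customer": 1001,
                "amount": Amount(25000, "AUD", "cents", "gross")}

# Downstream stages can now check their assumptions instead of guessing.
amt = typed_record["amount"]
assert amt.unit == "cents", "unexpected unit: renegotiate with upstream"
dollars = amt.value / 100
print(f"customer {typed_record['customer']}: {dollars:.2f} {amt.currency}")
```

Carrying the contract with the data doesn’t make change free, but it turns silent reinterpretation into an explicit, testable failure.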

The challenges around localised understanding of data, brittleness in the face of change and slow expansion of new uses have been the catalyst for the development of DataOps.

Data and analytics diversity

Organisations are consuming and combining more data types than ever, and the variety of analytical methods continues to increase. While the most traditional data types and analytics still dominate, diversity in both is increasing.

Coupled with this diversity is a wave of decentralisation among analytical consumers. While most analytical questions are still answered with familiar data and models, demand for new models and new data types is growing independently of IT, from highly skilled data scientists and from “citizen” roles that are adept with analytical tooling.

All these factors are pushing data producers and consumers to adopt a new model of collaboration. The need for DataOps has never been clearer.

DataOps has significant potential

Currently, there are no standards or known frameworks for DataOps. Today’s loose interpretation makes it difficult to know where to begin, what success looks like or if organisations are even “doing DataOps” at all. This lack of a documented discipline will likely inhibit adoption of the practice over the next 12 to 18 months, feeding confusion and driving hype further.

Despite this, and although DataOps is still an early concept in data management, Gartner believes it will attract significant interest, given the tremendous pressure organisations are under to deliver new and enhanced data and analytics capabilities faster.

The key takeaway when it comes to DataOps is that tooling won’t solve your collaboration problems. Start with people and culture first; save the tooling discussions for later.

Nick Heudecker is a VP analyst at Gartner. He offers guidance on data infrastructure for operations and analytics, as well as information management strategy. Nick will be speaking at the upcoming Gartner Data & Analytics Summit in Sydney, 17-18 February.
