What does quality look like?

As John Ruskin, the great Victorian social thinker, once famously argued, ‘Quality is never an accident; it is always the result of intelligent effort.’ This is the principle that underpins the current buzz around ‘building quality into the enterprise’. But what does quality look like?

It may be a subjective measure, but when it comes to application development, quality can generally be considered inversely proportional to the number of defects found in production. However, you can’t simply test quality into systems; to adapt an old English proverb, you are what you test. Yet it is surprising how many projects begin with a fundamental misstep.

All too often, thinking about test data management extends no further than the hopeful assumption that the right data will appear in the right place, at the right time. The result? Many testers still spend as much as 50% of their time simply finding and preparing the correct data for testing. Not only is this time-consuming and a compliance risk under data protection regulations, it also makes it almost impossible to achieve the functional coverage required to rigorously test the application. Invariably, this leads to poor quality, inefficient testing cycles and bottlenecks in downstream systems.

As market forces have forced organisations to choose between satisfying the consumer’s demand for greater functionality in shorter iterations and delivering quality, it is usually quality that is compromised. With this in mind, it is perhaps no wonder that Richard Bender has noted a 15% increase in software defect rates in recent years, unnecessarily costing the software industry billions in the process. In this article series, I will set out practical solutions for implementing an enterprise-wide test data management strategy that will enable you to ‘build quality in’ and deliver valuable, working software to market more quickly.

* * * *

If our key metric for quality is the absence of defects in production, then to meet the demands of the market we need to consider how we can ‘shift testing left’ and discover more defects earlier in development. Like all good stories, this one starts at the beginning: to test software, you need data. That raises the question: how do we ‘build’ the quality data that rigorous testing cycles require?

Many organisations still use copies of production for testing. This is often based on the misconception that production provides ‘the truth’ when it comes to testing. However, whilst production provides volume, it typically offers only 15-20% of the functional coverage required to rigorously test the outlying scenarios that break your application. Consider a supermarket: most transactions on any given day contain similar items - bread, milk, meat and so on - which make for high volume from a small number of product lines. What happens if someone buys, for example, Japanese sake that is on offer? It is these unusual transactions that break your application and cause irritation to your customers.
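The skew described above is easy to demonstrate. The sketch below is a toy simulation, not real supermarket data - the catalogue size, product names and Zipf-like weighting are all illustrative assumptions. It samples a day’s worth of ‘production’ transactions from a heavily skewed catalogue and counts how few product lines they actually touch:

```python
import random

random.seed(1)  # deterministic for this illustration

# Hypothetical catalogue of 100 product lines; purchase frequency follows
# a heavy Zipf-like skew, mirroring the bread-and-milk pattern above.
catalogue = [f"product_{i:03d}" for i in range(100)]
weights = [1.0 / (rank + 1) ** 2 for rank in range(100)]

# A day's worth of 'production' transactions, one product line per draw.
transactions = random.choices(catalogue, weights=weights, k=1000)

seen = set(transactions)
coverage = len(seen) / len(catalogue)
print(f"{len(transactions)} transactions touch only "
      f"{len(seen)} of {len(catalogue)} product lines ({coverage:.0%})")
```

Even a large production extract, in other words, exercises only the well-trodden paths; the sake-on-offer cases simply aren’t in it.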

So, how do we create data that is ‘up to the job’? With the sheer range of variables in most modern systems, manual techniques are out. Instead, organisations are increasingly looking to create synthetic data; an approach which is fast becoming the industry standard.

Synthetic data - known elsewhere as manufactured data - is based on a model of your production database and so contains all the characteristics of the ‘live’ data with none of the sensitive content, eliminating the risk of data leaks. So far, so good, but the real value of synthetic data is that you can generate exactly what you need, rather than making the best of what you have.
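As a minimal sketch of the idea - the table, field names and value domains below are hypothetical, standing in for a profile of your own production schema - synthetic records can carry realistic characteristics, including the boundary values production rarely holds, while containing nothing sensitive:

```python
import random
from datetime import date, timedelta

random.seed(7)  # reproducible output for this sketch

# Hypothetical profile of a production `customers` table: the value
# domains stand in for what profiling the live schema would yield,
# but every generated record is entirely fictitious.
COUNTRIES = ["GB", "DE", "FR", "JP"]
ACCOUNT_TYPES = ["standard", "premium", "suspended"]  # include rare states

def synthetic_customer(customer_id):
    return {
        "id": customer_id,
        "name": f"customer_{customer_id}",  # no real names can leak out
        "country": random.choice(COUNTRIES),
        "account_type": random.choice(ACCOUNT_TYPES),
        "signup_date": (date(2012, 1, 1)
                        + timedelta(days=random.randrange(365))).isoformat(),
        # deliberately generate the boundary values production rarely holds:
        "balance": random.choice([-0.01, 0.00, 0.01, 99999.99]),
    }

customers = [synthetic_customer(i) for i in range(1, 6)]
for c in customers:
    print(c)
```

The point is the direction of control: instead of masking what production happens to contain, you decide which states and edge cases the data must exhibit, then generate them.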

It is because of this that synthetic data is central to building quality into testing. Given the complexity of the task, any improvement in coverage that can be achieved manually is minimal, as well as incredibly time-consuming and error-prone - far too little to begin ‘shifting testing left’. Through intelligent coverage techniques such as cause-and-effect graphing and all-pairs, however, synthetic data can be designed to provide full functional coverage in much smaller, richer, more sophisticated sets of data.
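All-pairs (pairwise) selection is one of the coverage techniques mentioned above. The greedy sketch below - the four checkout parameters and their values are hypothetical examples, not a prescribed model - shows how every pair of parameter values can be covered in far fewer test cases than the exhaustive Cartesian product:

```python
from itertools import combinations, product

def all_pairs(params):
    """Greedy all-pairs selection: repeatedly pick the candidate row that
    covers the most not-yet-covered pairs of parameter values."""
    names = list(params)
    # every pair of values, across every pair of parameters, that must
    # co-occur in at least one test case
    uncovered = {
        ((a, va), (b, vb))
        for a, b in combinations(names, 2)
        for va in params[a]
        for vb in params[b]
    }
    candidates = list(product(*params.values()))
    suite = []
    while uncovered:
        def gain(row):
            vals = dict(zip(names, row))
            return sum(1 for (a, va), (b, vb) in uncovered
                       if vals[a] == va and vals[b] == vb)
        vals = dict(zip(names, max(candidates, key=gain)))
        suite.append(vals)
        uncovered = {((a, va), (b, vb)) for (a, va), (b, vb) in uncovered
                     if not (vals[a] == va and vals[b] == vb)}
    return suite

# Hypothetical parameters for an online checkout (names are illustrative):
params = {
    "browser": ["Chrome", "Firefox", "Safari"],
    "os":      ["Windows", "macOS", "Linux"],
    "payment": ["card", "PayPal", "voucher"],
    "locale":  ["en-GB", "de-DE", "ja-JP"],
}
suite = all_pairs(params)
print(f"{len(suite)} pairwise test cases instead of "
      f"{3 ** 4} exhaustive combinations")
```

Production tools use more sophisticated algorithms than this greedy loop, but the principle is the same: a small, deliberately designed data set can exercise every pairwise interaction that a much larger production copy covers only by accident.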

As a result, testers receive ‘fit for purpose’ test data to work with, rather than wasting vast amounts of time manipulating it. These shorter, more rigorous test cycles, which provide 100% of the required functional coverage, are imperative if we want to ‘shift testing left’. In today’s iterative market, only by achieving this can we detect defects early enough in the application development lifecycle - essential if organisations are to deliver valuable, working software that meets the requirements of the market.

Posted by Ray Scott, GT Agile

Copyright © 2013 IDG Communications, Inc.
