Bursting the CMM hype

As soon as she walked into the meeting, Jane Smith knew that the executive on the other side of the desk wanted to buy something that she wasn't supposed to sell: a trumped-up rating for the executive's software development division so his company could qualify to bid on contracts from the U.S. Department of Defense.

Smith (not her real name) is one of a select group of experienced IT pros, called lead appraisers, who go into companies and assess the effectiveness of their software development processes on a scale from 1 (utter chaos) to 5 (continuously improving) under a system known as the Capability Maturity Model, or CMM. The company she was visiting wanted to move up to Level 2, but based on some initial discussions, Smith knew that the company was a 1. Level 1 describes most of the software development organizations in the world: no standard methods for writing software and little ability to predict costs or delivery times. Project management consists mostly of ordering more pizza after midnight.

After a few initial niceties, the executive leaned across the table to Smith and another lead appraiser who had accompanied her to the meeting and asked, "How much for a Level 2?"

"That's when I got up and left the room," Smith recalls. "The other appraiser stayed. And the company got its rating."

The stakes for a good CMM assessment have gotten only higher since Smith's close encounter with corruption some 10 years ago. Today, many U.S. government agencies in addition to the Defense Department insist that companies that bid for their business obtain at least a CMM Level 3 assessment -- meaning the development organization has a codified, repeatable process for an entire division or company. CIOs increasingly use CMM assessments to whittle down the lists of dozens of unfamiliar offshore service providers -- especially in India -- wanting their business. For CIOs, the magic number is 5, and software development and services companies that don't have it risk losing billions of dollars' worth of business from U.S. and European corporations.

"Level 5 was once a differentiator, but now it is a condition of getting into the game," says Dennis Callahan, senior vice president and CIO at The Guardian Life Insurance Company of America. "Having said that, there are some Level 3 or 4 start-ups that we might consider, but they have a lot more convincing to do before I would do business with them. They would be at a disadvantage."

With CIOs increasingly dependent on outside service providers to help with software projects, some have come to view CMM (and its new, more comprehensive successor, CMM Integration, or CMMI) as a seal of approval for software providers. Yet CIOs who buy the services of a provider claiming that seal without doing their own due diligence could be making a multimillion-dollar, career-threatening mistake.

That's because software providers routinely exaggerate their assessments, leading CIOs to believe that the entire company has been assessed at a certain level when only a small slice of the company was examined. And once providers have been assessed at a certain level, there is no requirement that they test themselves ever again -- even if they change dramatically or grow much bigger than they were when they were first assessed. They can continue to claim their CMM level forever.

Worse, some simply lie and say they have a CMM assessment when they don't. And appraisers say they occasionally hear about colleagues who have had their licenses revoked because of poor performance or outright cheating in making assessments.

Yet CIOs who want to check up on CMM rating claims are out of luck. There is no organization that verifies such claims. Furthermore, the Software Engineering Institute (SEI), which developed CMM and is principally funded by the Defense Department, won't release any information about companies that have been assessed, even though appraisers are required to file records of their final assessments with the institute.

As U.S. and European companies stampede offshore to find companies to do their development work, they first need to understand what CMM ratings really mean. Yet few CIOs bother to ask crucial questions, say IT industry analysts and the service providers themselves. "Not even 10% of customers ask for the proof of our CMM," says V. Srinivasan, managing director and CEO of ICICI Infotech Inc., a software services provider that claims a Level 5 certification. "They inevitably take it for granted, and they don't ask for the details."

CIOs who don't ask for the details won't be able to distinguish between companies that are using CMM in the spirit it was intended -- as a powerful, complex model for continuous internal improvement -- and those that are simply going through the motions to qualify for business. Buying by the CMM number alone could mire CIOs in the same problems that caused them to look offshore in the first place: high costs, poor quality and shattered project timetables -- not to mention the loss of thousands of U.S. IT jobs.

"When you talk about something simple like a number, and lots of money is involved, someone's going to cheat," says Watts Humphrey, the man who led the development of CMM and is currently a fellow at the SEI. "If CIOs don't know enough to ask the right questions, they will get hornswoggled."

Where CMM Comes From

The CMM was a direct response to the U.S. Air Force's frustration with its software buying process in the 1980s. The Air Force and other defense agencies had begun farming out increasing amounts of development work and had trouble figuring out which companies to pick. Carnegie Mellon University in Pittsburgh won a bid to create an organization, the SEI, to improve the vendor vetting process. It hired Humphrey, IBM's former software development chief, to participate in this effort in 1986.

Humphrey decided immediately that the Air Force was chasing the wrong problem. "We were focused on identifying competent people, but we saw that all the projects [the Air Force] had were in trouble -- it didn't matter who they had doing the work," he recalls. "So we said let's focus on improving the work rather than just the proposals."

The first version of CMM in 1987 was a questionnaire designed to identify good software practices within the companies doing the bidding. But the questionnaire format meant that companies didn't have to be good at anything besides filling out forms. "It was easy to cram for the test," says Jesse Martak, former head of a development group for the defense contracting arm at Westinghouse, which is now owned by Northrop Grumman Corp. "We knew how to work the system."

So the SEI refined it in 1991 to become a detailed model of software development best practices and trained and authorized a group of lead appraisers to verify that companies were actually doing what they said they were doing. The lead appraisers head up a team of people from inside the company being assessed (usually three to seven, depending on the size of the company).

Together, they look for proof that the company is implementing the policies and procedures of CMM across a "representative" subset (usually 10% to 30%) of the company's software projects. The team also conducts a series of confidential interviews with project managers and developers -- usually over the course of one to three weeks, again depending on the size of the organization -- to verify what's really happening. It's a tough assignment for the internal people on the team because they're being asked to tattle on their colleagues.

"It can be very stressful for the [internal] assessment team," says a lead appraiser, who asked to remain anonymous. "They have conflicting objectives. They need to be objective, but the organization wants to be assessed at a certain level."

David Constant, a lead appraiser and partner at Process Inc., a software projects consultancy, recalls assessing a company where all the developers had been coached by management on what to say. "I had to stop the interviews and demand to see people on an ad hoc basis, telling the company who I wanted to speak to just before each interview began," Constant recalls. "And the sad part was that they didn't need to coach anybody. They would have easily gotten the level they were looking for anyway -- they were very good."

The new model is much tougher to exploit than the original questionnaire. In 1991, says Westinghouse's Martak, he told management, "This is a different ballgame now. If you have a good lead appraiser, you can't fake it out." Martak led his group to a Level 4 assessment and eventually became a lead appraiser himself.

The depth and wisdom of the CMM itself are unquestioned by experts on software development. If companies truly adopt it and move up the ladder of levels, they will get better at serving their customers, according to anecdotal evidence. But a high CMM level isn't a guarantee of quality or performance -- only process. It means that the company has created processes for monitoring and managing software development that companies lower on the CMM scale don't have. It doesn't necessarily mean those companies are using the processes well.

"Having a higher maturity level significantly reduces the risk over hiring a [company with a lower level], but it does not guarantee anything," says Jay Douglass, director of business development at the SEI. "You can be a Level 5 organization that produces software that might be garbage."

That assessment is borne out by a recent survey of 89 applications by Reasoning Inc., an automated software inspection company, which on average found no difference in the number of code defects between software from companies that identified themselves at one of the CMM levels and software from those that didn't. In fact, the study found that Level 5 companies on average had higher defect rates than anyone else. But Reasoning did see a difference when it sent the code back to the developers for repairs and then tested it again. The second time around, the code from CMM companies improved, while the code from the non-CMM companies showed no improvement.

Truth in Advertising

Stories about false claims abound. Ron Radice, a longtime lead appraiser and former official with the SEI, worked with a Chicago company that was duped in 2003 by an offshore service provider that falsely claimed to have a CMM rating. "They said they were Level 4, but in fact they had never been assessed," says Radice, who declined to name the guilty provider.

When done correctly, CMM is a costly, time-consuming effort. The average time for a company to move from Level 1 to Level 5 is seven years, and the expense of building a really robust, repeatable software development process with project and metric tracking is many times the cost of a CMM assessment (which alone costs about $100,000). For small companies short on funds and staff, or start-ups, forgoing business while building a software process capable of receiving a Level 5 assessment may seem more risky than fudging a number -- especially when your customers don't know enough to ask about it. And mature companies that already have a high CMM level may not want to risk the disruption, cost and potential disappointment of getting assessed again regularly.

Officials at the SEI deny that companies are exaggerating or lying about their CMM claims.

"There is no one who will declare 'We are CMM Level 3 as an organization,'" says the SEI's Douglass. "They'll say they are Level 3 in this development center or that product group."

Not true. A quick Nexis search revealed four companies -- Cognizant Technology Solutions Corp., Patni Computer Systems Ltd., Satyam Computer Services Ltd. and Zensar Technologies Inc. -- claiming "enterprise CMM 5," with no explanation of where the assessments were conducted or how many projects were assessed, or by whom. Dozens more companies trumpet their CMM levels with little or no explanation.

Indeed, all of the services companies we interviewed for this story claimed that their CMM assessments applied across the company when in fact only 10% to 30% of their projects had been assessed. That's partly because, experts say, assessing every project at a big company would be too unwieldy and expensive.

Yet few of those same experts support the idea that assessing a 10% slice of projects -- even those considered representative of all the different types of work a company does -- should lead to claims of "enterprisewide CMM." Vendors argue that there is logic behind these claims. The higher CMM levels (3 and above) require that a company have a centralized process for software development and project tracking, among other things. Since everyone in the company is supposed to use the same process as the projects that were assessed -- at Level 5, for example -- all of the company's projects, the argument goes, can be assumed to be at Level 5.
