Measuring Software Development and Testing

by on Sep.08, 2009, under society, tech

I posted this to the context-driven software testing list I’m on. If you want a quick primer on where the people I am talking with are coming from, go here. I’ll comment on anything interesting that comes back from it. This was going to be a response to a request for industry-standard metrics, but then a distillation of what I think about discussing software development and testing with managers fell out.

It’s easy, and true, to say that applying universal/typical/broadly applicable measurements of software quality is a bad and dangerous idea. Manufacturing measurements of quality don’t work for software, every “quantitative” measurement is built on qualitative data points and collection choices, and so on.

The management theory we are engaging with holds that all business activities can and should be measured: that there is always a way to measure success and progress toward goals, and to see the impact of changes to process. Whether it is volume/cost/productivity/calls per unit/worker/mile/day, or baseball statistics, management of virtually every industry and occupation is believed to have been improved by this type of analysis.

It’s easy enough to see how someone approaching software development from this context grasps at bug counts, reporting dates, lines of code, or anything else that smells like it can be used to get a handle on what is happening and what to expect. Learning a business and developing a good-enough “feel” (or bundle of heuristics, if you like) is hard and time-consuming anyway; those without the background, even if they have the intellectual horsepower, are not going to learn fast enough from the insular, insecure, and often brilliant people they find in our field.

In the Power-Point(y-Headed boss) world most of us work in, managers want easy and universally applicable ways to get their bearings. These people need some idea of whether to start shaking things up or stay the course, and they need to be able to demonstrate their own positive impact. How do we help them feel warm and fuzzy about progress toward goals, measuring quality, success, and so on? What can they do to help, and how will they know?

We need not only to be able to say what It Depends on, but to educate them about how to think about software quality and productivity in testing. How can we positively reframe the context? I’ve thought about what other contexts we might substitute for the manufacturing quality approach. Here are three I’ve tried so far.

Since writing software is really more of a creative exercise, I have had some limited success asking people to think of it like other kinds of writing: focus on the verb and reflect on what it means. I’ve seen many writers say that they never finished an article, story, or novel; they simply ran out of time or patience (usually someone else’s) to keep improving it. This gets some nods sometimes, but doesn’t end any “But how do we measure it?” conversations.

Another approach I’ve tried is to say that writing software is like building a house while having to fabricate all the materials. Maybe you have a really skilled 2×6 developer and a really crappy electrical-box developer, or maybe they’re the same guy. This seems even farther away, though, and lends itself too easily to people participating in the metaphor incorrectly and missing the message that the process is not as simple as proper component assembly.

The other context I could compare it to is managing a research lab. How do you measure breakthroughs? How do you compare the impact of one breakthrough to another? How do you measure the impact of a breakthrough coming one month, week, or day earlier than it otherwise would have? This may be satisfying when you consider that managing a research lab consists of creating a quiet environment with sufficient resources and reliable experimental facilities so the best thinking can happen, but it still doesn’t address the issue of measurement and management. It also usually translates as “just stay out of the way,” which no one really wants to hear.

None of these frames the problem of the testing context. Testing metrics would measure how good we are at finding some percentage of a variable quantity that depends on multiple factors. I can probably list plenty of factors to explain what “It Depends” on here, but nothing anywhere near coherent enough for 8th-grade reading levels and three-sentence attention spans.

I’m thinking we may need to solve the software development context problem before we get to the testing one. That requires explaining and selling qualitative methodologies as intellectually rigorous and somewhat reproducible. Measurement? Er…

What do you think? Can you help me reframe the problem statement?

1 Comment for this entry

  • Bobo

    As much as I wish there were a good way to give management (me included) the warm fuzzy they all want by gut feeling, we have to fall back on metrics whether we like it or not. 1) It allows us to TRY (not necessarily successfully) to measure how things are going without having to query people to death. 2) It all comes down to numbers for the business/accounting side. As much as I hate the idea, testing and development are a cost. Though you can argue that without them you would have nothing to sell and therefore not exist as a company, they do not produce direct income. That’s sales’ job. If you can’t boil things down to numbers, the business side may assume you aren’t being “efficient and effective” and choose to cut your budget. I wish we could have an environment where testing and development could be left alone to get things done, where management had the time and resources to talk to people and get a feeling about things without trying to boil everything down to stats.
