Lately, there has been much buzz about Rapid Software Testing, which has been described as “the closest thing in the business to a martial art of software testing.” Michael Bolton – the Rapid Software Testing guru – explains to Kristoffer Nordström of Softhouse.


Now, what is “Rapid Software Testing” – and how does it differ from “traditional testing”?

Well, until we establish what “traditional testing” is, it’s hard to identify the differences. But let me describe Rapid Testing, and people can decide on the differences between that and “traditional testing” as they think of it.

James Bach and I describe the Rapid Software Testing approach as a skill set and a mindset focused on doing excellent software testing in a way that is very fast and inexpensive, yet entirely credible and accountable, so that managers can make informed decisions about the product, the project, and related risk.

Michael Bolton is the co-author (with senior author James Bach) of Rapid Software Testing, a course that presents a methodology and mindset for testing software expertly in uncertain conditions and under extreme time pressure.

Testing’s job is to help to defend the value of the product by helping people to become aware of problems and risks. When there’s value on the line, unskilled, slapdash testing isn’t going to cut it. Ponderous testing, where everything is buried under mounds of paperwork and bureaucracy, is too expensive and takes too long unless you have lots of time and lots of money – and you don’t mind wasting them. “Complete” or “exhaustive” testing has several problems: it would take too long, it would be so expensive that management wouldn’t fund it, and nobody knows how to do it because it’s an infinite task and therefore impossible.

So, in Rapid Testing, we focus on doing things quickly, with minimal fuss and busywork, and with all the skill we can bring to the table. We also focus on the fact that excellent testing is far more than confirmation and verification and validation. Instead, great testing is focused on loops of exploration, discovery, investigation, and learning – and reporting quickly and concisely and cogently on what we’ve learned.

Now, as soon as we’ve said something like that, it’s common for people to say, “So, you don’t believe in documentation.” Not true; we do believe in documentation. But we believe in communication more. We don’t believe in wasteful documentation, and we don’t believe in documentation in circumstances where conversation is clearly faster and more effective. Some people say “Exploratory testing? So that’s manual testing; you don’t believe in test automation.” Not true; we do believe in automation as a tool. We favour interacting with the machine as the users of the product do, but we also love using automation for things that automation can help us with.

What is the context-driven testing school?

For a long time, discussions in the testing world seemed to be oriented towards identifying The Right Way to do testing. Various writers and speakers touted their approaches as being appropriate or correct, as being best practices. Cem Kaner, at least, had noticed fairly early on that “best practices” coming from academia didn’t work so well when you tried to apply them in industry; that “best practices” published in books by people who worked in the telecom business didn’t work so well for people in Silicon Valley; that “best practices” for mass-market commercial software went against the cultural grain in banks and in the medical business. The trouble, I think, was that people were coming to conclusions about testing without really considering the premises. Based on those conclusions, they were declaring that testing should be thus and so. It’s not that the conclusions were wrong; they may have been right for their context. The bigger problem is that the premises aren’t universal.

As time went by, Cem recognized that more and more people were having similar experiences and making observations similar to his. Controversy about testing among reasonable people suggested that you can’t label something a “best practice” and expect it to be relevant to all domains in which testing happens. Standards relevant to the defense industry were ruinously expensive and unhelpful for people developing computer games. Unless you want to put serious limits on your career opportunities, you can’t declare, as some people have suggested, that you should refuse to test until you get a clear, complete, up-to-date, and unambiguous specification. It wasn’t helpful for the testing mice to declare that someone should put a bell on the business and development cats.

James Bach on Rapid Testing. “The closest thing in the business to a martial art of software testing.”

The people who recognized these issues went on to declare a set of principles based on the idea that if you wanted to do testing well, you’d have to consider the context first, before anything else. Then you could make pragmatic choices about what was right for your situation. You can read that declaration and some clarifying remarks at www.context-driven-testing.com. Cem, James Bach, and Bret Pettichord wrote Lessons Learned in Software Testing; the sub-title of that book is “A Context-Driven Approach”.

Not too long after that, Bret Pettichord published a paper identifying what he and others called the four schools of software testing. These included the factory or routine school (following templated processes and documents will guide us best), the mathematical school (intensive and rigorous functional analysis and graph theory will guide us best), and the quality school (requiring other people to adhere to rigid processes and driving them to high standards is our real calling). The context-driven school suggested that ideas from the other schools might be relevant, but that no idea was universally best, and that people were the most important part of any project’s context.

Naturally, this was controversial. It put the whole idea of best practices in question, and it meant that certain cherished ideas about testing would have to be questioned. Testing itself would have to be tested. This represented a threat to received wisdom and offended some people’s idealized world-views. After all, experts are typically heavily invested in their own expertise. Other people weren’t quite so threatened, and pointed out that, sure, they considered context. What many of them seemed to miss was (is) that to be context-driven, you’d have to consider the context first.

In addition, people didn’t like being labeled or pigeon-holed. I don’t think that was the intention; the intention was to identify schools of thought, or paradigms. Perhaps “schools” was an unfortunate choice of name. People seem to have a lot of bugbears about the word “school”. It might have been more politically palatable to identify them as four cultures of testing. We’ll never know.

Thinking back, how did you get involved in “Rapid Software Testing”?

I was lucky enough to learn important things about the craft of testing at Quarterdeck in the early 1990s. At that time, Quarterdeck was the publisher of memory management software that regularly topped PC Magazine’s best-seller list; that is, with the exception of DOS, it was the best-selling piece of software in the world. In the context of commercial software that had to be compatible with all of the other software on the market, we had no choice but to test rapidly. That was the style the company had, and that I took on: exploratory, highly collaborative, concisely reported, lots of face-to-face – and fast. Like all software, our products shipped with bugs, but we found and eradicated the important ones, and the decision to ship was well-informed. When the product reached the field, we were rarely surprised by bugs that we hadn’t known about.

James Bach introduced me to his Rapid Software Testing course in 2003. As he and I both realized, I had been practicing rapid testing, or something like it, for a good long time before that. I had been missing something, though. I hadn’t paid a great deal of attention to the structures of rapid testing, although I had been using many of them intuitively. As a consequence, it was harder than necessary to describe my work in contexts outside of commercial software. In large financial institutions, for example, I could find more important bugs, more quickly, than the in-house or contracted testers who were given scripts to follow. I could learn what was important more quickly because I used fast cycles of reading about the product, interacting directly with it, exploring its behaviours, applying curiosity, and asking questions of the people around me. For me, the focus was finding out how the product worked – and how it didn’t work – rather than simply checking to see whether test cases passed or failed. I could create and edit and narrate stories about my work, showing how I had made responsible choices in the face of uncertainty. Yet those stories were often more improvisational than I might have liked. James and Rapid Testing offered a means of identifying the structures of testing, and of describing testing work in a more structured way. Moreover, becoming aware of the structures offered a means of developing testing skill even further – not only my own skills, but the skills of our testing students; not only testing skills, but the skills needed to develop and teach testing skills.

I started teaching the Rapid Testing class in 2004, and I’ve been a coauthor and co-developer of the material since 2006. James and I have been refining the class continuously. That’s for three reasons at least: one, because we’re always learning about new things in the world that we can relate to testing; two, because based on that, we’re constantly learning new ways to observe and to model and to describe testing; and three, because testing itself is constantly changing as the world and technology change.

Would you say that anyone can do “Rapid Software Testing”?

Anyone who is capable of learning and practicing the skills can do Rapid Testing. That requires some personal commitment and some work, though. First, there’s a lot of information-gathering and learning to do. It’s important to be an information sponge, inside your working life and outside of it. It’s a great exercise to relate the things that you see, hear, and do back to testing. It’s also very important to observe and reflect on what’s going on, and to be self-critical.

Rapid testing also takes practice in certain patterns of thought. It’s our job to question things, to put “unless …” at the end of sentences that we hear or read, to see the complexity behind apparent simplicity, and to consider alternative interpretations of what we hear or see. As rapid testers, we are professionally uncertain about what we know, even when everyone else is sure.

Why do you think there is the general notion that “anyone” can do software testing?

Anyone can do unskilled and valueless testing. The certification peddlers don’t help the image of the craft when they provide “certification” for passing a 40-question multiple-choice test that involves doing no testing work that anyone observes. Pretty much anyone can do pretty much any kind of work if all that the work requires is following a set of steps laid out by someone else. Anyone can do software testing if, as the person commissioning it, you’re indifferent about the quality of the work.

I think the reason that people (managers, mostly, so it seems) think that anyone can do software testing is that those people simply haven’t been exposed to excellent testing and they haven’t thought very much about what good testing would be. So there’s this feedback loop: poor testing fosters poor expectations about testing. That’s why outsourcing to the lowest bidder seems to make sense; there’s no point in paying big money for valueless work. I’d argue that there’s no point in paying any money for valueless work. Instead, focus on training and requiring people to do valuable work, and then get the benefit from that.

Is there a downside to “Rapid Software Testing”?

If you follow the principles, Rapid Testing makes it hard to do fake testing. That can be a problem for the people who are afraid of learning about scary things in the project. It can be frustrating, for a while, for rapid testers to work in non-rapid environments, unless they pick up a kind of Zen about the situation. Even if you’re doing excellent work, people have a big investment in their security blankets, and won’t drop them lightly. In a highly non-rapid context, it takes time and patience to help people recognize valuable testing.

Michael Bolton Recommends!

Perfect Software: And Other Illusions About Testing

Gerald Weinberg, Dorset House (2008), ISBN-10: 0932633692

“This is a book that I encourage testers not only to read, but also to give to their managers to expand their concept of what testing can do for them.”

“Agile testing is context-driving, not context-driven”


Lessons Learned in Software Testing

Kaner, Bach & Pettichord, Wiley (2001), ISBN-10: 0471081124

“A great practical guide for testers that suggests lots of ways to do testing well.”

