I’m almost free. . . . The final testing project for my software testing techniques course is all that I have left. I can’t wait to finish it. I will soon be able to read the paper, read a book, go photographing, and everything else without thinking that I should be studying.
Without getting into it too much, I’m a little disappointed with the course. I have learned a bit — especially that it really is not possible to completely test software — but I do wish that it didn’t feel straight out of 1990. I wish the course were more about (you know) testing techniques. Oh, and what’s with the 100-question true/false final exam that lifts statements straight from obscure parts of the book?
What are the big things that I did learn? Primarily, The MathWorks does testing very well. We have detailed test plans that say what’s going to be tested, what the expected output is for given inputs, and so on. We set targets for code coverage. We keep metrics about
defects. We have extensive sets of automated regression tests that enable continuous integration. Our testers work with the developers in every phase of the development lifecycle. In fact, our developers and testers usually have collaborative rather than combative relationships. We have a clear, rational way of classifying the importance of bugs. . . . We’re not perfect, but we try hard.
Believe it or not, for the overwhelming majority of my 13-person class, most of those statements don’t apply. In general, software quality assurance is the red-headed stepchild of the development organization. Companies, especially small ones, just don’t want to spend money on testing. The authors of the textbook seem to accept this as an immutable fact of testing. QA is a “low-level department” in organizations with hierarchical, top-down decision-making. By making the avoidance of other people’s insecurities and issues an essential part of doing testing well, the book takes corporate dysfunction and disenfranchised employees as its premise. But the authors do seem to be describing the real world my colleagues experience.
So what did I learn that I can apply?
- The purpose of testing is to find bugs and get them fixed. So focus on testing that finds defects.
- Try to find the most serious defects first.
- Testing should start with requirements analysis — requirements are often poorly specified or impossible to implement.
- Unit testing (and test-driven development) is very important but isn’t software testing.
- Focusing on code coverage — especially condition coverage — is important but can’t find all latent defects (such as dividing by zero).
- Black-box testing — where you assume you know nothing about the code — is touted as the best way to find latent errors.
- Think of ways that software could fail and work backwards toward test cases.
- Focus on equivalence classes and boundary tests. Don’t forget state transitions, especially for GUIs.
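To make the last point concrete, here is a minimal sketch of equivalence-class and boundary-value testing. The function and its valid range (ages 0–120) are invented for illustration; they aren’t from the course or the book.

```python
# Hypothetical spec for illustration: ages 0..120 inclusive are valid.
def is_valid_age(age):
    return 0 <= age <= 120

# Equivalence classes: below range, in range, above range.
# One representative per class is assumed to stand in for the whole class.
assert is_valid_age(-5) is False   # class: below range
assert is_valid_age(30) is True    # class: in range
assert is_valid_age(200) is False  # class: above range

# Boundary values: defects cluster at the edges of each class,
# so test on and just outside each boundary.
assert is_valid_age(-1) is False
assert is_valid_age(0) is True
assert is_valid_age(120) is True
assert is_valid_age(121) is False
```

Three class representatives plus four boundary probes cover the spec with seven cheap cases instead of testing arbitrary values.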
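And a sketch of the coverage caveat mentioned above: a test suite can exercise every condition outcome in a function and still miss a latent divide-by-zero. The function name and behavior here are hypothetical, chosen only to make the point.

```python
# Hypothetical example: full condition coverage can still miss a latent defect.
def average_rate(total, count, fallback=0.0):
    """Return total/count, or fallback when count is invalid (negative)."""
    if count < 0:             # the only explicit condition in the code
        return fallback
    return total / count      # latent defect: count == 0 raises ZeroDivisionError

# These two tests exercise both outcomes of the condition,
# i.e. 100% condition/branch coverage -- yet the defect goes undetected.
assert average_rate(10, -1) == 0.0   # condition True
assert average_rate(10, 2) == 5.0    # condition False

# Only a boundary-value test at count == 0 reveals the bug.
try:
    average_rate(10, 0)
    found_defect = False
except ZeroDivisionError:
    found_defect = True
assert found_defect
```

The coverage tool reports nothing left to test, which is exactly why coverage is a useful target but not a stopping criterion.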
Fun fact (that may be out of date): Software bugs cost 0.6% of GDP, or roughly $60 billion/year. One-third of this might have been saved by better testing.
UPDATE: Cem Kaner, one of the authors of the course text, admits that some of the ideas are out-of-date. A new version of the book is in the works. Apparently, “it is not the primary purpose of testing to find bugs.” Damn! I thought I knew what was important.