Plug-and-play replacement ideas for testing


Introduction

1) Okay, it's time for some testing tweets. Our reluctance to talk about problems (and potential problems) is undermining our most important job as testers: finding problems that threaten the value of the product. Problems happen. If we want to help, we must focus on that. So...

With these words, Michael Bolton introduced a Twitter thread (unrolled) about how software testing could evolve.

I reformatted it into a more structured form. Over time I want to add more links and my own thoughts to these entries.

As an introduction, here is one of his last posts; I added all the others at the end.

The general idea is this: try replacing all those tired, empty, folkloric, mythodological phrases and ideas that lead to unhelpfully shallow testing. Replace them with words and approaches that focus testing on finding problems that threaten value.

(Dammit. He writes so much interesting stuff. I can't quote everything in the beginning.)

Plug-and-play replacements

This is not just about words and phrases, but about using other concepts, changing mindsets and attitudes.

"Try replacing ... with ..." is Michael's original phrasing.

Try replacing "verify that..." with "challenge the belief that..."

Try replacing "validate" with "investigate"

Try replacing "confirm that..." with "find problems with..."
 
Try replacing "show that it works" with "discover where it *doesn't* work"

Try replacing "pass or fail?" with "is there a problem here?"

Try replacing "test case" with "experiment"

Try replacing "test case" with an explicit description of test conditions and coverage ideas

Try replacing "actual result" and "expected result" with "observation" and "oracle" (that is, the means by which we recognize a problem)
FEW HICCUPPS
 
Try replacing "pass/fail ratio" with a list of problems that matter (no one cares particularly about "passing" test cases; and even if the test case passes, there can still be terrible problems on which the test case is mute)

Try replacing "counting test cases" with "describing coverage"

And make sure you describe important testing not yet done; aspects of the product that your testing *hasn't* covered.
 
Try replacing "automated testing" with "programmed checking"

Testing can't be automated, but occasionally automated checks can alert us to problems. Note, though, that creating automated checks requires programming, even if it's very high-level programming.
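A minimal sketch of what such a programmed check might look like. The function and values here are illustrative assumptions, not from the thread; the point is that the check can only alert us to the one condition it encodes.

```python
# A programmed check: it can only alert us to the specific
# condition it was programmed to look for -- nothing else.

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical function under check: apply a percentage discount."""
    return round(price * (1 - percent / 100), 2)

def check_discount_is_applied() -> None:
    # The check encodes one explicit expectation...
    assert apply_discount(100.0, 25.0) == 75.0
    # ...and is mute about everything it doesn't assert:
    # rounding policy, negative percentages, huge prices, and so on.

check_discount_is_applied()
print("check passed -- which does not mean there are no problems")
```

Even when this check passes, a tester still has to ask what problems it could never have noticed.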
 
Try replacing "test automation" with "tool-assisted testing"

Automated checks are among the *least* powerful and useful ways that we can use programming to help us gain insight about a product, and to help us test. Use programming for data generation, probing, analysis...
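As one hedged example of tool-assisted testing beyond checks, a short script can generate awkward input data for a tester to probe a product with. The categories below are illustrative assumptions, not a standard list.

```python
# Using programming to *assist* testing: generate varied,
# awkward input data for a human tester to probe a product with.
import random

def generate_awkward_names(seed: int = 0) -> list:
    """Return a shuffled list of deliberately troublesome name inputs."""
    random.seed(seed)
    candidates = [
        "",                        # empty string
        "   ",                     # whitespace only
        "O'Brien",                 # embedded quote
        "José-María",              # accents and a hyphen
        "a" * 10_000,              # absurdly long input
        "Robert'); DROP TABLE--",  # hostile-looking input
    ]
    random.shuffle(candidates)
    return candidates

for name in generate_awkward_names():
    print(repr(name[:40]))  # truncate for readable output
```

The tool generates the data; the tester still decides what to try, what to observe, and what counts as a problem.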
 
Try replacing "use cases" with "use cases AND misuse cases AND abuse cases AND abstruse cases"

Also remember that some people are obtuse sometimes, so think about "obtuse cases" too.
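To make the distinction concrete, here is a small sketch probing one hypothetical function from those different angles. The `transfer` function and its rules are assumptions for illustration only.

```python
# One hypothetical function, probed as a use case, a misuse
# case, and an abuse case. Names and rules are illustrative.

def transfer(balance: float, amount: float) -> float:
    """Hypothetical: withdraw `amount`; reject invalid requests."""
    if amount <= 0 or amount > balance:
        raise ValueError("invalid transfer")
    return balance - amount

# Use case: an ordinary, intended interaction.
assert transfer(100.0, 30.0) == 70.0

# Misuse case: an innocent mistake (a typo in the amount).
try:
    transfer(100.0, 300.0)
except ValueError:
    print("misuse rejected")

# Abuse case: a hostile attempt (negative amount to *gain* money).
try:
    transfer(100.0, -50.0)
except ValueError:
    print("abuse rejected")
```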
 
Try replacing "measurement" with "assessment"

Most of what goes on in complex, cognitive, social domains (like software development and testing) can't be measured easily in valid and reliable ways. But such things can be assessed reasonably.
Assess Quality, Don’t Measure It
 
Try replacing "KPIs and KLoCs and 'measuring quality'" with something far more important: *learning from every bug*. Please don't reduce engineering to scorekeeping.
Try replacing "preventing bugs" with "identifying bugs early"

We testers CANNOT prevent bugs; whether it's running code, a design, a story, or an idea that someone gives us to test, the bug is there when we encounter it. But we can identify the bug before it goes any farther.
 
Try replacing "AI might put me out of a job!" with "AI is going to need *even more* critical evaluation than stuff for which we have the source code."

That behooves us to become better, more critical technologists and social scientists. Let's get on that, stat.
 
Try replacing "ponderous, overly formalized, scripted, procedural test cases" with "concise charters"

A charter is a mission statement that guides a burst of testing activity. Encourage testers to vary their behaviour *and to keep reliable, professional notes on what they did*.

Try providing a description or diagram of a workflow, and charter testers to report on anything that they find difficult, confusing, annoying, frustrating, surprising, or interesting.
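A concise charter can be recorded very simply; here is one sketch, noting that the fields below are an illustrative assumption, not a prescribed format.

```python
# A concise charter: a mission, not a script. Fields are illustrative.
charter = {
    "mission": (
        "Explore the checkout workflow with unusual quantities, "
        "looking for problems with totals and error handling."
    ),
    "timebox_minutes": 90,
    "notes": [],  # the tester keeps reliable, professional notes here
}

print(charter["mission"])
```

The mission guides the burst of testing; the notes record what the tester actually did and found.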
 
Try replacing "we have to..." with "we choose to..."

Nothing in testing is mandatory in any absolute sense. When you fall into the mistaken belief that you have NO choice, you limit your ability to find problems that matter.
We Have to Automate
 
Try replacing "the user" with something more specific. Consider:
- novice user (who may be confused)
- expert user (who may be sharply critical)
- distracted user (who may forget to do necessary things)
- disabled user (whose needs might have been overlooked)

Context

These are the surrounding posts from the thread, for more context.

1) Okay, it's time for some testing tweets. Our reluctance to talk about problems (and potential problems) is undermining our most important job as testers: finding problems that threaten the value of the product. Problems happen. If we want to help, we must focus on that. So...

2) We must obtain experience with the product; perform (thought) experiments; explore the product or its requirements. But it seems to me that a lot of common talk about testing undermines that, and focuses on demonstration and confirmation, not *testing*, not *challenging*.

3) Here is a set of straightforward, plug-and-play replacements that provide alternatives to common testing talk. These can help us to focus on finding problems that matter, at pretty much any stage of development. Ready? Let's begin.

25) That's almost all for now. The general idea is this: try replacing all those tired, empty, folkloric, mythodological phrases and ideas that lead to unhelpfully shallow testing. Replace them with words and approaches that focus testing on finding problems that threaten value.

26) "Almost all", because there's something else. Dear testers, we need to help create better clients for testing. We need to be able to articulate the things I've said here to each other for sure, but we've also got to help our clients understand the significance of these ideas.

27) We must start to speak and think like experts, challenging the misbegotten ideas that most people have about testing. This starts with cutting out the "manual testing" and "automated testing" business. Testing isn't manual, and it can't be automated.
The End of Manual Testing

28) No skilled profession allows other people to talk about (for instance) "manual medicine" or "automated medicine"; "manual research" or "automated research"; "manual programming" or "automated programming"; "manual management" or "automated management". It's ridiculous.

29) The worst of it is, testers keep referring to their work this way. "I'm a manual tester." No, you're not. You're a TESTER. You use tools; that's normal. Maybe you don't write programs, but so what? Most doctors don't build blood assay machines either. But they DO use tools.

30) It's typically a good idea to have programming skills if you're working with computers. I'd recommend learning them — even if you only learn to read code without writing it. But if you're disinclined to do so, don't. The world already has too many barely-competent coders.

31) If you need code written to help you test, learn to code or ask for code to be written. But please don't call yourself or others a "manual tester". You're doing complex, cognitive, critical, analytical, scientific work, right? Try replacing "manual tester" with TESTER.

32) And finally, some words from various sponsors. Check out my blog at https://developsense.com/blog, and @jamesmarcusbach's blog at http://satisfice.com. We're performing a first-time-ever experiment next week: Rapid Software Testing Explored Online: https://improveqs.nl/english/rapid-software-testing-explored/

33) We're also presenting Rapid Software Testing for Managers online the following week: https://improveqs.nl/english/rapid-software-testing-for-managers/ You could join yourself; you could also tell your manager. In the coming weeks and months, we'll be providing more stuff online. Stay tuned!