On one testing project in industry we ran into a common situation: we did not know how to interpret a given requirement. So we asked the requirements manager for advice. Since it was a tricky question, he forwarded our request to the customer and to the developers. We were fortunate to get quick responses. However, there was a problem: the answers contradicted each other.
What had happened? The customer and the developers had interpreted the requirement differently; there was a misunderstanding. The consequence: the developers were implementing an incorrect system from the customer's point of view.
So what about the tester? A tester's understanding of the requirements is crucial. In the example above, our question as testers exposed a serious problem. Without a clear understanding of the requirements, no reliable test verdict (pass or fail) can be given.
However, how do we know whether a tester understands the requirements? He could have made the same mistake as the developers. What if two testers (say, the customer's and the developer's) have different understandings of the requirements for a given system under test (SUT)? They would give different test verdicts: one might accept the SUT, the other might not.
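A small sketch makes this concrete. Suppose a (hypothetical) requirement says "fees shall be rounded to the nearest whole euro" but does not say how to break ties such as 2.5. The SUT, the requirement wording, and both testers' interpretations below are invented for illustration:

```python
import math
from decimal import Decimal, ROUND_HALF_EVEN

def sut_round_fee(fee):
    # The developers used Python's built-in round(), which applies
    # banker's rounding: ties go to the nearest even integer.
    return round(fee)

def verdict_tester_a(fee):
    # Tester A reads the requirement as "round half up" (2.5 -> 3).
    expected = math.floor(fee + 0.5)
    return "pass" if sut_round_fee(fee) == expected else "fail"

def verdict_tester_b(fee):
    # Tester B reads it as "round half to even" (2.5 -> 2).
    expected = int(Decimal(str(fee)).quantize(Decimal("1"),
                                              rounding=ROUND_HALF_EVEN))
    return "pass" if sut_round_fee(fee) == expected else "fail"

# Same SUT, same input, contradictory verdicts:
print(verdict_tester_a(2.5))  # fail
print(verdict_tester_b(2.5))  # pass
```

Both testers run the same SUT on the same input, yet one rejects it and the other accepts it, purely because the requirement left its tie-breaking rule open to interpretation.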
Is there a way to prevent such misunderstandings?
Not in general, because misunderstanding is a psychological process of wrong interpretation. However, we can limit the room for such misinterpretations: we need to define the semantics, i.e. the meaning, of the requirements. And if nobody else does it in a project, the tester should.
How do we define the semantics of requirements?
Answer: by writing them in a formal notation with a precise semantics. There are modeling languages that come with a precise semantics, such as VDM-SL, Z, B, Alloy, RAISE, CSP, and LOTOS. These languages serve different purposes, but what they have in common is that their meaning is precisely defined: there is no ambiguity about how to interpret what is written down.
Therefore, model-based testing should always use models with a precise, formal semantics. Admittedly, most of the time a tester will not need it, because the meaning seems obvious; but as testers know, the rare cases matter. The issue becomes even more critical for model-based testing tools: if there is no precise standard semantics, different tools may behave differently on the same models (a common problem with compilers for programming languages that lack a precise semantics).
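A toy illustration of this tool-divergence problem. The state-machine model and both "tool" semantics below are invented for the sketch; the point is only that a model which leaves unspecified inputs undefined can be processed very differently by two tools:

```python
# A tiny state-machine model: (state, input) -> next state.
MODEL = {("locked", "coin"): "unlocked",
         ("unlocked", "push"): "locked"}

def run_tool_a(trace, state="locked"):
    # Tool A's semantics: unspecified inputs are silently ignored,
    # i.e. the machine stays in its current state.
    for event in trace:
        state = MODEL.get((state, event), state)
    return state

def run_tool_b(trace, state="locked"):
    # Tool B's semantics: unspecified inputs are errors.
    for event in trace:
        if (state, event) not in MODEL:
            return "error"
        state = MODEL[(state, event)]
    return state

trace = ["coin", "coin", "push"]
print(run_tool_a(trace))  # locked  (second "coin" ignored)
print(run_tool_b(trace))  # error   (second "coin" is unspecified)
```

The same model and the same input trace lead tool A to generate or accept tests that tool B rejects outright; a standard semantics for the modeling notation is what would force the two tools to agree.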
Here is my advice: if somebody tries to sell you a model-based testing technique, ask whether the notation has a formal semantics. If the answer is yes, double-check and ask for formal proofs done in that notation. No proofs, no formal semantics. Don't accept notations with misunderstanding built in!
More on model-based testing with formal notations can be found in my publications.