Wednesday, 29 July 2009

Modeling and Testing Multi-Threaded Asynchronous Systems with Creol


The Bosphorus Bridge in Istanbul, viewed from the Asian side towards Europe.
After a long wait, our TTSS 2008 paper was published by Elsevier yesterday. The paper is a result of the CREDO project.

Bernhard Aichernig, Andreas Griesmayer, Rudolf Schlatte, and Andries Stam. Modeling and testing multi-threaded asynchronous systems with Creol. In Proceedings of the 2nd International Workshop on Harnessing Theories for Tool Support in Software (TTSS 2008), Istanbul, Turkey, 30 August 2008, volume 243 of Electronic Notes in Theoretical Computer Science, pages 3–14. Elsevier, July 2009. (PDF) (doi:10.1016/j.entcs.2009.07.002)

The paper describes a model-based testing technique for asynchronous, concurrent systems and shows how the modeling language Creol can serve as a test oracle. Creol is a nice object-oriented modeling language for distributed systems, developed at the University of Oslo. We applied the technique to a commercial agent-based information system.

The basic steps of our method are:
  • Create an abstract model of the system under test (SUT) in Creol.
  • Instrument the source code of the SUT with aspect-oriented programming techniques (here, AspectC) to record events.
  • Execute tests on the SUT and record the events.
  • Translate the event traces to Creol test cases.
  • Replay each test case on the model.
  • If the replay succeeds, the test passes.
  • If it fails, model check whether a different non-deterministic path of the model reproduces the trace; if there is none, the test fails (see the sketch after this list).
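
To make the verdict logic concrete, here is a minimal Python sketch of the last three steps. It is an illustration only: the non-deterministic Creol model is abstracted as a list of event traces its executions allow, whereas in the actual tool chain the model is executed by the Creol interpreter and alternative interleavings are explored by model checking.

    # Illustrative sketch of the replay-and-verdict step, not the real
    # tool chain. The Creol model is abstracted here as a list of event
    # traces; the first entry plays the role of the default replay run,
    # the rest stand for the alternative non-deterministic interleavings
    # a model checker would search for.

    def verdict(model_traces, recorded_trace):
        """Give a test verdict for one recorded trace of the SUT."""
        preferred, *alternatives = model_traces
        # Replay the test case on the model.
        if recorded_trace == preferred:
            return "pass"
        # Replay failed: model check whether a different
        # non-deterministic path reproduces the observed trace.
        if recorded_trace in alternatives:
            return "pass"
        return "fail"

    # Hypothetical model with two legal interleavings of events a, b, c.
    model = [("a", "b", "c"), ("a", "c", "b")]
    print(verdict(model, ("a", "c", "b")))  # pass via an alternative path
    print(verdict(model, ("b", "a", "c")))  # fail: no model path matches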

Check out my other conference papers.

Monday, 27 July 2009

ICTAC 2010, Sept 1-3, Natal, Brazil


Photo courtesy of Dany Sakugawa
I have been invited to serve on the Program Committee of ICTAC 2010.

It will have a special track on Formal Testing Approaches headed by Marie-Claude Gaudel.

ICTAC 2010 is the 7th International Colloquium on Theoretical Aspects of Computing, the latest in a series founded by the International Institute for Software Technology of the United Nations University (UNU-IIST). ICTAC brings together practitioners and researchers to present their research and to exchange ideas and experience, addressing challenges both in theoretical aspects of computing and in the exploitation of theory through methods and tools for system development.

The seventh edition of the event, ICTAC 2010, will be held from 1 to 3 September 2010 in Natal, Brazil.

ICTAC 2010 invites submissions of research papers of up to 15 pages and of tool papers of up to 6 pages. Research papers should contain original research and sufficient detail to assess the merits and relevance of the contribution. Submissions reporting on industrial case studies are also welcome. Tool demonstration papers should stress the technical and practical side, illustrating how the theoretical contributions can be applied in practice. The schedule for submission and evaluation of papers is as follows.

Submission of abstracts: 08 March, 2010
Submission deadline: 15 March, 2010
Notification of acceptance: 30 April, 2010
Final version: 16 May, 2010

Thursday, 23 July 2009

Semantics? Yes, tester, you need it!

Once I had a testing project in industry in which we ran into the common situation that we did not know how to interpret a given requirement. Well, we asked the requirements manager for advice. Since it was a tricky question, he forwarded our request to the customer and to the developers. We were fortunate to get quick responses. However, there was a problem: the answers contradicted each other.
What had happened? The customer and the developers had different interpretations of the requirement. The consequence: the developers were implementing an incorrect system from the customer's point of view.
So what about the tester? His understanding of the requirements is crucial. In the example above, our question as testers uncovered a serious problem. Without a clear understanding of the requirements, no reliable test verdict (pass or fail) can be given.
However, how do we know if a tester understands the requirements? He could have made the same mistake as the developers. What if two testers (e.g. the customer's and the developer's) have different understandings of the requirements of a given SUT? Well, they would give different test verdicts: one might accept the SUT, the other reject it. Is there a way to prevent such misunderstandings?
Not in general, because misunderstanding is a psychological process of wrong interpretation. However, we can limit the sources of such misinterpretations: we need to define the semantics, i.e. the meaning, of the requirements. And if nobody else does it in a project, the tester should.
How do we define the semantics of requirements? Answer: by writing them in a formal notation with a precise semantics. There are modeling languages that come with a precise semantics, such as VDM-SL, Z, B, Alloy, RAISE, CSP, and LOTOS. These languages serve different purposes, but what they have in common is that their meaning is precisely defined, i.e. there is no ambiguity about how to interpret what is written down.
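To illustrate the point, here is a toy example of my own (not from the project above). The informal requirement "after three failed login attempts the account is locked" is ambiguous: does a successful attempt reset the failure counter? Writing the requirement as an executable specification, sketched below in Python, forces that decision and makes it unambiguous for customer, developer and tester alike.

    # Toy example: one precise interpretation of the informal requirement
    # "after three failed login attempts the account is locked". The
    # ambiguity (does a success reset the counter?) must be resolved
    # explicitly; here we decide that it does.

    MAX_FAILURES = 3

    def next_state(failures, locked, attempt_ok):
        """One step of the specification's state machine."""
        if locked:
            return failures, True   # a locked account stays locked
        if attempt_ok:
            return 0, False         # reset on success (our decision!)
        failures += 1
        return failures, failures >= MAX_FAILURES

    # The formalised requirement now yields reliable test verdicts:
    state = (0, False)
    for ok in [False, True, False, False, False]:
        state = next_state(*state, ok)
    print(state)  # (3, True): locked after three consecutive failures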
Therefore, model-based testing should always use models with a precise, formal semantics. Ok, most of the time a tester will not need it, because the meaning seems obvious; but as testers know, the rare cases matter. It becomes even more critical for model-based testing tools: if there is no precise standard semantics, different tools might behave differently on the same models (a common problem with compilers for programming languages that lack a precise semantics).
Here is my advice: if somebody tries to sell you a model-based testing technique, ask him if his notation has a formal semantics. If he answers positively, double-check and ask for formal proofs done in this notation. No proofs, no formal semantics. Don't accept notations with misunderstanding built in!
More on model-based testing with formal notations can be found in my publications.

Lecturing in China

From 2002 to 2006 I worked for the UN as a Research Fellow of UNU-IIST in Macao, a special administrative region of China. Part of the job was to hold advanced computer-science courses in developing countries. The participants were post-graduates, typically young assistant professors.

In 2003, at a course in Lanzhou, I certainly got the most impressive welcome: my name on a banner at the entrance to the campus (see picture). In China, protocol is important, but Lanzhou Jiaotong University was special. As a UN representative I was also officially welcomed by the rector and interviewed by the campus TV. I must admit, I was slightly impressed. Even more so by the fact that they had their own hotel on campus, in which I stayed.

The course, Formal Methods in Software Development, was held at Lanzhou Jiaotong University, Lanzhou, China, December 15–19, 2003.

Teaching formal methods is fun!
The course participants.

Tuesday, 21 July 2009

Lecturing in Africa

From 2002 to 2006 I worked for the UN as a Research Fellow of UNU-IIST in Macao, China. Part of the job was to hold advanced computer-science courses in developing countries. The participants were post-graduates, typically young assistant professors.
Most fascinating and adventurous were my two courses in Africa, one in Lagos, Nigeria and one in Dakar, Senegal.
University of Lagos, Nigeria, November 4–8, 2002
The course title was Formal Methods in Software Development. It was part of the School on Methods, Tools and Techniques for Industrial Software Development jointly organized by UNU-IIST and University of Lagos.
The Lagos course participants and their lecturer (guess who it is!).
Our lunch was colorful.
Université Cheikh Anta Diop, Dakar, Senegal, November 24–28, 2003

The course in Dakar was on Foundations of Software Testing covering formal testing techniques.
The Dakar course participants and myself.
On the weekend there was time to enjoy the beauty of Africa.

Thursday, 9 July 2009

Bernhard's Research Area

Bernhard's research focuses on the foundations of software engineering. In particular, he is interested in
  • formal development methods
  • specification, modeling and design of reliable software
  • theories of programming, refinement calculi, concurrency
  • foundations of software testing.

His current research is mainly concerned with the automated generation of test cases from formal requirements specifications, as well as with the associated testing theories. This includes the investigation of existing testing, specification and refinement techniques. The aim is to develop testing theories that unify the existing results on specification-based testing and advance the field.

His current emphasis is on fault-based testing. Fault-based testing is a technique where testers anticipate faults in a system under test in order to assess or generate test cases. The idea is to have enough test cases capable of detecting these anticipated faults. He applies this method at the specification level. His latest testing theory is based on Hoare and He's Unifying Theories of Programming (UTP) and has been instantiated in test-case generators for OCL, Spec#, LOTOS and, most recently, Action Systems.
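
As a minimal illustration of the fault-based idea at the specification level (my own toy sketch in Python, not the UTP-based theory from the papers): mutate a specification with an anticipated fault, then search for inputs on which the original and the mutant disagree. Those inputs, paired with the original's expected outputs, form fault-detecting test cases.

    # Illustrative sketch of fault-based test-case generation. A
    # specification is modelled as a function from input to expected
    # output; a mutant injects an anticipated fault; a useful test case
    # is an input on which original and mutant disagree, paired with the
    # original's expected output.

    def spec(x):
        return abs(x)    # original specification: absolute value

    def mutant(x):
        return x         # anticipated fault: negation forgotten

    def distinguishing_tests(original, mutated, domain):
        """Inputs that reveal the fault, with the correct expected output."""
        return [(x, original(x)) for x in domain if original(x) != mutated(x)]

    print(distinguishing_tests(spec, mutant, range(-3, 4)))
    # [(-3, 3), (-2, 2), (-1, 1)] -- any of these detects the mutation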

MOGENTES Project

I am the project manager and senior researcher for TU Graz in the MOGENTES project.

MOGENTES stands for Model-based Generation of Tests for Dependable Embedded Systems and is a European STREP project in the EU's 7th Framework Programme.

The coordinator is the Austrian Institute of Technology.

MOGENTES aims at significantly enhancing the testing and verification of dependable embedded systems by means of automated test-case generation, relying both on the development of new approaches and on the innovative integration of state-of-the-art techniques.

Driven by the needs of its industrial partners, it will address both the testing of non-functional properties such as reliability, e.g. by system stress and overload tests, and functional safety testing, meeting the requirements of standards such as IEC 61508, ISO WD 26262, and AUTOSAR.

MOGENTES will demonstrate that different domains with a wide variety of requirements can benefit significantly from a common model-based approach to the automated generation of efficient test cases and to the verification of system safety using formal methods and fault injection. This approach increases system-development productivity while achieving predictable system dependability. For that purpose, proof-of-concept demonstrations will show the applicability of the developed technologies in two application domains: railway and automotive (on- and off-road).

In particular, MOGENTES aims at applying these technologies in large industrial systems while enabling application-domain experts with little knowledge of or experience with formal methods to use them with minimal learning effort. All in all, MOGENTES will increase knowledge and develop new techniques and tools for the verification and validation of dependable embedded systems that can be applied in model-based development processes, even by non-experts in formal methods.