Thursday 12 November 2009

SEFM 2010, 13-17 September, Pisa, Italy


Photo: Pisa 2007
I have been invited to serve on the Program Committee of SEFM 2010, the 8th IEEE International Conference on Software Engineering and Formal Methods. SEFM 2010 will take place in Pisa. Submission deadline for abstracts is 22 March 2010. The PC-chairs are Jose Luis Fiadeiro and Stefania Gnesi.

The aim of the conference is to bring together practitioners and researchers from academia, industry and government to advance the state of the art in formal methods, to facilitate their uptake in the software industry and to encourage their integration with practical engineering methods. Papers that combine formal methods and software engineering are especially welcome.

I served as a PC co-chair of SEFM 2005. For information on past SEFM conferences, visit the general SEFM webpage.

The call for papers will be published here as soon as it is out.

Friday 23 October 2009

Fault-based Test Case Generation for Component Connectors

Sun Meng at CWI had the idea of translating my UTP theory of mutation testing to Reo connectors. This resulted in a TASE 2009 paper published in July:

Bernhard K. Aichernig, Farhad Arbab, Lacramioara Astefanoaei, Frank S. de Boer, Sun Meng, and Jan Rutten. Fault-based test case generation for component connectors. In TASE 2009, Third IEEE International Symposium on Theoretical Aspects of Software Engineering, Tianjin, China, July 29–31, pages 147–154. IEEE Computer Society, July 2009. Copyright by IEEE. (PDF) (doi:10.1109/TASE.2009.14)

The paper demonstrates that the idea of formalizing basic software engineering concepts in Hoare and He's Unifying Theories of Programming really helps to transfer knowledge and technology. In this case, mutation testing found a new application domain: component connectors for modeling coordination in service-oriented architectures.

Abstract. The complex interactions appearing in service-oriented computing make coordination a key concern in service-oriented systems. In this paper, we present a fault-based method to generate test cases for component connectors from specifications. For connectors, faults are caused by possible errors during the development process, such as wrongly used channels, missing or redundant subcircuits, or circuits with wrongly constructed topology. We give test cases and connectors a unifying formal semantics by using the notion of design, and generate test cases by solving constraints obtained from the specification and faulty connectors. A prototype symbolic test case generator serves to demonstrate the automatizing of the approach.
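To give a flavour of the fault-based idea (in Scala rather than in the constraint notation of the paper, and with a spec, mutant and search domain that are entirely made up for illustration): a design is a precondition plus a postcondition, a mutant is a faulty variant of it, and any input/output pair that the mutant allows but the specification forbids yields a fault-detecting test case.

```scala
// A toy sketch of fault-based test-case generation over pre/postcondition
// "designs". Not the constraint solver of the paper; just the principle.
object FaultBasedSketch {

  // A design: a precondition on the input plus a postcondition relating
  // input and output (here simply over small integers).
  final case class Design(pre: Int => Boolean, post: (Int, Int) => Boolean)

  // Specification: for non-negative input, the output doubles the input.
  val spec   = Design(x => x >= 0, (x, y) => y == 2 * x)
  // Faulty variant (the "mutant"): an off-by-one slipped into the postcondition.
  val mutant = Design(x => x >= 0, (x, y) => y == 2 * x + 1)

  // Search a finite domain for an observation the mutant allows but the spec
  // forbids; such an input is a fault-detecting test case.
  def distinguishing(domain: Range): Option[(Int, Int)] =
    (for {
      x <- domain if spec.pre(x) && mutant.pre(x)
      y <- domain if mutant.post(x, y) && !spec.post(x, y)
    } yield (x, y)).headOption

  def main(args: Array[String]): Unit =
    distinguishing(0 to 10) match {
      case Some((x, y)) =>
        val expected = (0 to 10).filter(y => spec.post(x, y))
        println(s"test input $x detects the mutant: it may produce $y, " +
          s"but the spec allows only {${expected.mkString(", ")}}")
      case None =>
        println("the mutant is equivalent on this domain: no test case generated")
    }
}
```

The real approach works symbolically on connector designs and hands the non-refinement constraint to a solver; the brute-force search above merely stands in for that step.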

More conference papers.

Tuesday 13 October 2009

FMCO 2008: Conformance Testing of Distributed Concurrent Systems with Executable Designs


Photo: *clairity*

In August our FMCO 2008 paper was published:

Bernhard K. Aichernig, Andreas Griesmayer, Einar Broch Johnsen, Rudolf Schlatte, and Andries Stam. Conformance testing of distributed concurrent systems with executable designs. In Formal Methods for Components and Objects: 7th International Symposium, FMCO 2008, Sophia Antipolis, France, October 21–23, 2008, Revised Lectures, volume 5751 of Lecture Notes in Computer Science, pages 61–81, Berlin/Heidelberg, August 2009. Springer. (PDF) (doi:10.1007/978-3-642-04167-9_4)

The paper is a result of the CREDO project. It shows our testing process based on the modeling language Creol. Concolic execution on the model generates test stimuli, which are then executed on the system under test. During a test run, the event traces are recorded and then replayed on the model. A small GUI helps to identify the controllable events for generating the test driver for Creol.

Abstract. This paper presents a unified approach to test case generation and conformance test execution in a distributed setting. A model in the object-oriented, concurrent modeling language Creol is used both for generating test inputs and as a test oracle. For test case generation, we extend Dynamic Symbolic Execution (also called Concolic Execution) to work with multi-threaded models and use this to generate test inputs that maximize model coverage. For test case execution, we establish a conformance relation based on trace inclusion by recording traces of events in the system under test and replaying them in the model. User input is handled by generating a test driver that supplies the needed stimuli to the model. An industrial case study of the Credo project serves to demonstrate the approach.
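For readers unfamiliar with dynamic symbolic (concolic) execution, here is a rough illustration of the input-generation step. This is a toy, single-input, sequential stand-in in Scala, not the Creol tooling: the model is executed on a concrete input while the branch conditions are recorded; negating a recorded condition and "solving" the resulting constraint (here by brute-force search over a small domain) yields an input that drives the model down a path not seen before.

```scala
// Minimal concolic-execution sketch: concrete runs record symbolic branch
// conditions; flipping a branch yields an input for an unexplored path.
object ConcolicSketch {

  // One branch observed during a concrete run: its symbolic form
  // (a predicate on the input) and the outcome taken in this run.
  final case class Branch(pred: Int => Boolean, taken: Boolean)

  // Toy model under exploration: decides on a single integer input and
  // returns an observable output together with the branches it evaluated.
  def model(x: Int): (String, List[Branch]) = {
    var trace = List.empty[Branch]
    def branch(pred: Int => Boolean): Boolean = {
      val b = pred(x)
      trace = trace :+ Branch(pred, b)
      b
    }
    val out =
      if (branch(_ > 10)) "espresso"
      else if (branch(_ == 0)) "refund"
      else "tea"
    (out, trace)
  }

  // "Solver" stand-in: brute-force search for an input that follows the path
  // prefix but flips its last branch.
  def flipLast(path: List[Branch], domain: Range): Option[Int] = {
    val prefix = path.init
    val last   = path.last
    domain.find(v => prefix.forall(b => b.pred(v) == b.taken) && last.pred(v) != last.taken)
  }

  def main(args: Array[String]): Unit = {
    var pending = List(0)                  // start from an arbitrary concrete input
    var seen    = Set.empty[List[Boolean]] // path signatures already covered
    while (pending.nonEmpty) {
      val x = pending.head
      pending = pending.tail
      val (out, trace) = model(x)
      val path = trace.map(_.taken)
      if (!seen(path)) {
        seen += path
        println(s"input $x -> output $out, path $path")
        // derive new inputs by flipping each branch of this path in turn
        for (i <- trace.indices; next <- flipLast(trace.take(i + 1), -5 to 20))
          pending = pending :+ next
      }
    }
  }
}
```

The paper's contribution is to make this work for multi-threaded Creol models and to use the generated inputs for maximizing model coverage; the sketch only shows the basic explore-and-flip loop.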

More conference papers.

Monday 12 October 2009

MBT 2010, March 21, Paphos, Cyprus


Photo: Grabowski
I have been invited to serve on the Program Committee of MBT 2010, the 6th Workshop on Model-Based Testing. MBT 2010 is a satellite workshop of ETAPS 2010, to be held in Paphos, the birthplace of Aphrodite.

The workshop is devoted to model-based testing of both software and hardware. Model-based testing is closely related to model-based specification. Models are used to describe the behavior of the system under consideration and to guide such efforts as test selection and test results evaluation. Both testing and verification are used to validate models against the requirements and check that the implementation conforms to the specification model.

The call for papers is out. Submission deadline: December 10, 2009.

Model-based testing has gained attention with the popularization of models in software/hardware design and development. Of particular importance are formal models with precise semantics, such as state-based formalisms. Testing with such models allows one to measure the degree of the product's conformance with the model.

Techniques to support model-based testing are drawn from diverse areas, like formal verification, model checking, control and data flow analysis, grammar analysis, and Markov decision processes.

The intent of this workshop is to bring together researchers and users of models to discuss the state of the art in theory, applications, tools, and industrialization of model-based specification, testing and verification.

Original submissions are solicited from industry and academia. They are invited to present their work, plans, and views related to model-based testing. The topics of interest include but are not limited to:

  • Online and offline test sequence generation methods and tools
  • Test data selection methods and tools
  • Runtime verification
  • Model-based test coverage metrics
  • Automatic domain/partition analysis
  • Combination of verification and testing
  • Models as test oracles
  • Scenario-based test generation
  • Meta programming support for testing
  • Formalisms suitable for model-based testing
  • Application of model checking techniques in model-based testing
  • Game-theoretic approaches to testing
  • Model-based testing in industry: problems and achievements

Wednesday 7 October 2009

Journal Paper: Fault-Based Conformance Testing in Practice


Photo: marcusuke

We got our work on mutation testing of communication protocols published in the International Journal of Software and Informatics.

The article is part of a special double issue on Formal Methods edited by Prof. Dines Bjorner. The journal is published by the Chinese Academy of Sciences and has an international editorial board.

Martin Weiglhofer, Bernhard Aichernig, and Franz Wotawa. Fault-based conformance testing in practice. International Journal of Software and Informatics, 3(2–3):375–411, June/September 2009. Copyright by Institute for Software, Chinese Academy of Science. (PDF)

In this work we present our results on testing two communication protocols: the Session Initiation Protocol and the Conference Protocol. The protocols are modeled in LOTOS, and mutation testing is then applied to the LOTOS specifications to generate test cases. We present detailed data from the test process, including the nice result that we were able to find a new bug in an implementation, a bug that random and scenario-based testing had not detected.

Paper abstract: Conforming to protocol specifications is a critical issue in modern distributed software systems. Nowadays, complex service infrastructures, such as Voice-over-IP systems, are usually built by combining components of different vendors. If the components do not correctly implement the various protocol specifications, failures will certainly occur. In the case of emergency calls this may be even life-threatening. Functional black-box conformance testing, where one checks the conformance of the implemented protocol to a specification becomes therefore a major issue. In this work, we report on our experiences and findings when applying fault-based conformance testing to an industrial application. Besides a discussion on modeling and simplifications we present a technique that prevents an application from implementing particular faults. Faults are modeled at the level of the specification. We show how such a technique can be adapted to specifications with large state spaces and present results obtained when applying our technique to the Session Initiation Protocol and to the Conference Protocol. Finally, we compare our results to random and scenario based testing.
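Stripped of all LOTOS machinery, the principle looks roughly like this (a toy Scala sketch with an invented three-state protocol and an assumed mutation operator, not the tool chain of the paper): mutate one transition of the specification and keep any trace on which the original and the mutant disagree as a test case.

```scala
// Spec-level mutation on a tiny labelled transition system: traces that
// distinguish the original from the mutant become fault-detecting tests.
object ProtocolMutationSketch {

  type State  = String
  type Action = String
  // Deterministic LTS as a partial transition function.
  type LTS = Map[(State, Action), State]

  // Tiny, made-up "session" protocol: INVITE -> OK -> BYE.
  val spec: LTS = Map(
    ("idle", "INVITE")   -> "ringing",
    ("ringing", "OK")    -> "connected",
    ("connected", "BYE") -> "idle"
  )

  // Mutation operator: replace the action of one transition
  // (e.g. a wrongly used event in the model).
  def mutate(lts: LTS, from: (State, Action), newAction: Action): LTS =
    (lts - from) + ((from._1, newAction) -> lts(from))

  // Does the LTS accept the trace from the initial state?
  def accepts(lts: LTS, init: State, trace: List[Action]): Boolean =
    trace.foldLeft(Option(init)) { (st, a) => st.flatMap(s => lts.get((s, a))) }.isDefined

  def main(args: Array[String]): Unit = {
    val mutant = mutate(spec, ("ringing", "OK"), "RINGING") // assumed fault
    val candidateTraces = List(
      List("INVITE", "OK", "BYE"),
      List("INVITE", "RINGING", "BYE")
    )
    // Any trace on which spec and mutant disagree detects the mutation.
    val tests = candidateTraces.filter(t =>
      accepts(spec, "idle", t) != accepts(mutant, "idle", t))
    tests.foreach(t => println(s"test case detecting the mutation: ${t.mkString(" . ")}"))
  }
}
```

The real work operates on LOTOS specifications with large state spaces and derives the distinguishing traces automatically rather than filtering a handful of candidates.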

Wednesday 23 September 2009

AI meets Formal Methods: Qualitative Reasoning and Action Systems


Photo: cokada

We got our work on combining Qualitative Reasoning Techniques with Action Systems accepted at ICFEM 2009 in Rio.

This year the conference was highly competitive, with an acceptance rate below 30 percent. The paper integrates Qualitative Differential Equations into Back's Action System formalism for modeling hybrid systems. The work forms part of the MOGENTES Project and aims at model-based testing of hybrid systems.

To our knowledge, this is the first time that Qualitative Differential Equations have been merged into a formal development technique. The results are not limited to Action Systems, but also apply to similar formalisms such as Event-B.

Bernhard K. Aichernig, Harald Brandl, and Willibald Krenn. Qualitative action systems. In Proceedings of ICFEM 2009: 11th International Conference on Formal Engineering Methods, Dec 9-12, 2009, Rio de Janeiro, Lecture Notes in Computer Science. Springer-Verlag, 2009. in press. (PDF)

Paper abstract: An extension to action systems is presented facilitating the modeling of continuous behavior in the discrete domain. The original action system formalism has been developed by Back et al. in order to describe parallel and distributed computations of discrete systems, i.e. systems with discrete state space and discrete control. In order to cope with hybrid systems, i.e. systems with continuous evolution and discrete control, two extensions have been proposed: hybrid action systems and continuous action systems. Both use differential equations (relations) to describe continuous evolution. Our version of action systems takes an alternative approach by adding a level of abstraction: continuous behavior is modeled by Qualitative Differential Equations that are the preferred choice when it comes to specifying abstract and possibly non-deterministic requirements of continuous behavior. Because their solutions are transition systems, all evolutions in our qualitative action systems are discrete. Based on hybrid action systems, we develop a new theory of qualitative action systems and discuss how we have applied such models in the context of automated test-case generation for hybrid systems.
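To convey the flavour of "qualitative" (a toy illustration in Scala, not the formalism of the paper): a continuous quantity is abstracted to a qualitative magnitude plus a direction of change, continuity restricts the possible successors, and a guarded action can then react to the purely discrete qualitative state. The tank example and its three magnitudes are invented for this sketch.

```scala
// A toy qualitative abstraction of a continuous tank level: the evolution
// becomes a non-deterministic, discrete transition relation.
object QualitativeSketch {

  sealed trait Dir
  case object Dec extends Dir
  case object Std extends Dir
  case object Inc extends Dir

  // Ordered qualitative magnitudes (landmarks and the space between them).
  val magnitudes = Vector("empty", "between", "full")

  final case class QState(mag: Int, dir: Dir) // mag indexes into `magnitudes`

  // Continuity: the magnitude may only stay or move to a neighbour consistent
  // with the current direction; the direction itself may also change.
  def successors(s: QState): Set[QState] = {
    val mags = s.dir match {
      case Inc => Set(s.mag, math.min(s.mag + 1, magnitudes.size - 1))
      case Dec => Set(s.mag, math.max(s.mag - 1, 0))
      case Std => Set(s.mag)
    }
    for (m <- mags; d <- Set[Dir](Dec, Std, Inc)) yield QState(m, d)
  }

  // A guarded action of the discrete controller: when the tank is
  // qualitatively full, switch to draining.
  def controller(s: QState): QState =
    if (magnitudes(s.mag) == "full") s.copy(dir = Dec) else s

  def main(args: Array[String]): Unit = {
    val init = QState(0, Inc) // qualitatively empty and rising
    // One discrete step: non-deterministic qualitative evolution, then the action.
    val next = successors(init).map(controller)
    next.foreach(s => println(s"${magnitudes(s.mag)}, ${s.dir}"))
  }
}
```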

Monday 7 September 2009

TAP 2010, June 28 - July 2, Malaga, Spain


Photo: Pat McDonald
I have been invited to serve on the Program Committee of TAP 2010, the 4th International Conference on Tests & Proofs.

The TAP conference is devoted to the convergence of proofs and tests. It combines ideas from both sides for the advancement of software quality.

Gordon Fraser, a former colleague of mine at TU Graz, is co-chairing the PC with Angelo Gargantini. The conference chairs are Yuri Gurevich (Microsoft Research, USA) and Bertrand Meyer (ETH Zürich, Switzerland).

At TAP 2009, we had a paper on concolic execution of distributed systems (see my conference papers).

To prove the correctness of a program is to demonstrate, through impeccable mathematical techniques, that it has no bugs; to test a program is to run it with the expectation of discovering bugs. The two techniques seem contradictory: if you have proved your program, it's fruitless to comb it for bugs; and if you are testing it, that is surely a sign that you have given up on any hope to prove its correctness.

Accordingly, proofs and tests have, since the onset of software engineering research, been pursued by distinct communities using rather different techniques and tools.

And yet the development of both approaches leads to the discovery of common issues and to the realization that each may need the other. The emergence of model checking has been one of the first signs that contradiction may yield to complementarity, but in the past few years an increasing number of research efforts have encountered the need for combining proofs and tests, dropping earlier dogmatic views of incompatibility and taking instead the best of what each of these software engineering domains has to offer.

The conference will include a mix of invited and submitted presentations, and a generous allocation of panels and informal discussions. All papers will be published in Springer's LNCS series.

Monday 10 August 2009

Two in One: functional plus OO-programming in Scala


Photo: PyEuler

I believe in the functional-first approach to teaching computer programming to computer scientists. This means that students start with a functional programming language before programming in an imperative, object-oriented style.

The advantage of this approach is that, at the beginning, students can focus on problem solving, writing functions over a given set of data structures while ignoring the computer architecture. Only later do they concentrate on how to implement such data structures efficiently by exploiting the computer architecture.

From 1998 to 2001 I was involved in such a course for first-year students at TU Graz. We started with SML in the first semester and then switched to Java. Today, all of this could be done in Scala, a programming language that combines the pros of both programming paradigms.

I recommend taking a closer look at Scala. It might be the Java of the future. Scala combines features of both worlds. The most prominent features that I have missed for so long in OO languages are all there:
  • functions are values
  • pattern matching
  • higher order functions
  • closures
  • lazy evaluation
  • Milner's type inference system
However, it is far from being unorthodox, with OO features like:
  • all values are objects
  • compiles to the Java Virtual Machine
  • full overloading including operators
  • (multiple) inheritance
  • direct definition of (singleton) objects 
  • support for concurrency
The power this combination of paradigms brings should not be underestimated: Scala's control structures can be extended in the language itself. Hence, it is possible to write domain-specific commands. An example is Scala's support for the Actor communication model known from Erlang. It has been defined purely in Scala and is included as a library. Voilà! Parallel programming with Actors on the JVM. Use the summer break to dive into Scala! Fun included.
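Here is a small, self-contained taste of the points above (an illustrative snippet only): an algebraic data type with pattern matching, a closure over a mutable variable, lazy evaluation, and a do/until-style control structure defined as an ordinary method with by-name parameters, the kind of extensibility just mentioned.

```scala
object ScalaTaste {

  // A do/until-style loop defined as an ordinary method, not a built-in:
  // `body` and `until` are by-name parameters, so they are re-evaluated on
  // each iteration, and currying lets the call site read almost like syntax.
  def repeat(body: => Unit)(until: => Boolean): Unit = {
    body
    if (!until) repeat(body)(until)
  }

  // An algebraic data type with pattern matching, as in ML.
  sealed trait Expr
  case class Num(n: Int)           extends Expr
  case class Add(l: Expr, r: Expr) extends Expr

  def eval(e: Expr): Int = e match {
    case Num(n)    => n
    case Add(l, r) => eval(l) + eval(r)
  }

  def main(args: Array[String]): Unit = {
    // Higher-order function and a closure over the mutable variable `total`.
    var total = 0
    List(1, 2, 3).foreach(x => total += x)

    // The home-made control structure in use.
    var i = 0
    repeat {
      i += 1
    } (i >= 3)

    // Lazy evaluation: the right-hand side runs only on first access.
    lazy val report = s"sum = $total, iterations = $i"

    println(eval(Add(Num(40), Num(2))))   // prints 42
    println(report)                       // prints "sum = 6, iterations = 3"
  }
}
```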

Wednesday 29 July 2009

Modeling and Testing Multi-Threaded Asynchronous Systems with Creol


Bosphorus bridge in Istanbul, viewed from Asia towards Europe.
After a long wait, our TTSS 2008 paper was published by Elsevier yesterday. The paper is a result of the CREDO project.

Bernhard Aichernig, Andreas Griesmayer, Rudolf Schlatte, and Andries Stam. Modeling and testing multi-threaded asynchronous systems with Creol. In Proceedings of the 2nd International Workshop on Harnessing Theories for Tool Support in Software (TTSS 2008), Istanbul, Turkey, 30 August 2008, volume 243 of Electronic Notes in Theoretical Computer Science, pages 3–14. Elsevier, July 2009. (PDF) (doi:10.1016/j.entcs.2009.07.002)

The paper describes a model-based testing technique for asynchronous, concurrent systems and shows how the modeling language Creol can serve as a test oracle. Creol is a nice object-oriented modeling language for distributed systems developed at the University of Oslo. We applied the technique to a commercial agent-based information system.

The basic steps of our method are as follows (a small sketch of the replay step follows the list):
  • Create an abstract model of the system under test (SUT) in Creol.
  • Instrument the source code of the SUT by aspect-oriented programming techniques (here AspectC) for recording events.
  • Execute tests on the SUT and record the events.
  • Translate the event traces to Creol test cases.
  • Replay the test case on the model.
  • If the replay succeeds, the test passes.
  • If it fails, model-check whether a different non-deterministic path of the model accepts this test case; if there is none, the test fails.
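Roughly, the replay step amounts to the following (a toy Scala sketch with an invented three-state model, not the Creol implementation): the recorded trace passes if the non-deterministic model can reproduce it along at least one path, i.e. the verdict is based on trace inclusion.

```scala
// Minimal sketch of trace replay on a non-deterministic model.
object ReplaySketch {

  type State = Int
  type Event = String

  // Non-deterministic model: each state offers a set of (event, successor) pairs.
  val model: Map[State, Set[(Event, State)]] = Map(
    0 -> Set(("req", 1)),
    1 -> Set(("ack", 2), ("nack", 0)),   // the model may answer either way
    2 -> Set(("data", 0))
  )

  // Depth-first search for a path of the model that produces the trace.
  def replay(state: State, trace: List[Event]): Boolean = trace match {
    case Nil => true
    case e :: rest =>
      model.getOrElse(state, Set.empty)
        .exists { case (ev, next) => ev == e && replay(next, rest) }
  }

  def main(args: Array[String]): Unit = {
    val recorded = List("req", "ack", "data")   // trace observed on the SUT
    println(s"trace ${recorded.mkString(" ")} -> " +
      (if (replay(0, recorded)) "pass" else "fail"))

    val faulty = List("req", "data")            // the SUT skipped the ack
    println(s"trace ${faulty.mkString(" ")} -> " +
      (if (replay(0, faulty)) "pass" else "fail"))
  }
}
```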

Check out my other conference papers.

Monday 27 July 2009

ICTAC 2010, Sept 1-3, Natal, Brazil


Photo courtesy of Dany Sakugawa
I have been invited to serve on the Program Committee of ICTAC 2010.

It will have a special track on Formal Testing Approaches headed by Marie-Claude Gaudel.

ICTAC 2010 is the 7th International Colloquium on Theoretical Aspects of Computing, the latest in a series founded by the International Institute for Software Technology of the United Nations University (UNU-IIST). ICTAC brings together practitioners and researchers to present their research and to exchange ideas and experience addressing challenges in both theoretical aspects of computing and in the exploitation of theory through methods and tools for system development.

The seventh edition of the event, ICTAC2010, will be held from the 1st to the 3rd of September, 2010, in Natal, Brazil.

ICTAC2010 will invite submissions of research papers of up to 15 pages and of tool papers of up to 6 pages. Research papers should contain original research, and sufficient detail to assess the merits and relevance of the contribution. Submissions reporting on industrial case studies are also welcome. Tool demonstration papers should stress the technical and practical side, illustrating how one can apply the theoretic contributions in practice. The schedule for submission and evaluation of papers is as follows.

Submission of abstracts: 08 March, 2010
Submission deadline: 15 March, 2010
Notification of acceptance: 30 April, 2010
Final version: 16 May, 2010

Thursday 23 July 2009

Semantics? Yes tester, you need it!

Once I had a testing project in industry in which we ran into the common situation that we did not know how to interpret a given requirement. Well, we asked the requirements manager for advice. Since it was a tricky question, he forwarded our request to the customer and to the developers. We were fortunate in getting quick responses. However, there was a problem: the answers contradicted each other.
What had happened? Well, the customer and the developers had different interpretations of the requirements. There was a misunderstanding about this requirement. The consequence: the developers were implementing an incorrect system from the customer's point of view.
So what about the tester? His understanding of the requirements is crucial. In the example above, it was our question as testers that highlighted a serious problem. Without a clear understanding of the requirements, no reliable test verdict (pass or fail) can be given.
However, how do we know whether a tester understands the requirements? He could have made the same mistake as the developers. What if two testers (e.g. the customer's and the developer's) have different understandings of the requirements of a given SUT? Well, they would give different test verdicts: one might accept the SUT, the other not. Is there a way to prevent such misunderstandings?
Not in general, because misunderstanding is a psychological process of wrong interpretation. However, we can limit the roots of such misinterpretations. We need to define the semantics, i.e. the meaning, of the requirements. And if nobody does it in a project, the tester should.
How do we define the semantics of requirements? Answer: by writing them in a formal notation with a precise semantics. There are modeling languages that come with a precise semantics, like VDM-SL, Z, B, Alloy, RAISE, CSP, LOTOS etc. These languages serve different purposes, but what they have in common is that their meaning is precisely defined, i.e. there is no ambiguity about how to interpret what is written down.
Therefore, model-based testing should always apply models with a precise, formal semantics. Ok, most of the time a tester will not need it, because the meaning seems obvious, but as testers know, the rare cases matter. It becomes even more critical for model-based testing tools: if there is no precise standard semantics, different tools might behave differently on the same models (a common problem with compilers for programming languages that lack a precise semantics).
Here is my advice: if somebody tries to sell you a model-based testing technique, ask him if his notation has a formal semantics. If he answers positively, double-check and ask for formal proofs done in this notation. No proofs, no formal semantics. Don't accept notations with misunderstanding built in!
More on model-based testing with formal notations can be found in my publications.

Lecturing in China

From 2002 to 2006 I worked for the UN as a Research Fellow of UNU-IIST in Macao, a special administrative region of China. Part of the job was to hold advanced computer-science courses in developing countries. The participants were post-graduates, typically young assistant professors.

In 2003, at a course in Lanzhou I certainly got the most impressive welcome: my name on a banner at the entrance to the campus (see picture). In China protocol is important, but Lanzhou Jiaotong University was special. As a UN representative I was also officially welcomed by the rector and interviewed by their campus TV. I must admit, I was slightly impressed. Even more so by the fact that they had their own hotel on campus, in which I stayed.

The course title was Formal Methods in Software Development. Lanzhou Jiaotong University, Lanzhou, China, December 15–19, 2003.

Teaching formal methods is fun!
The course participants.

Tuesday 21 July 2009

Lecturing in Africa

From 2002 to 2006 I worked for the UN as a Research Fellow of UNU-IIST in Macao, China. Part of the job was to hold advanced computer-science courses in developing countries. The participants were post-graduates, typically young assistant professors.
Most fascinating and adventurous were my two courses in Africa, one in Lagos, Nigeria and one in Dakar, Senegal.
University of Lagos, Nigeria, Nov. 4-8, 2002
The course title was Formal Methods in Software Development. It was part of the School on Methods, Tools and Techniques for Industrial Software Development jointly organized by UNU-IIST and University of Lagos.
The Lagos course participants and their lecturer (guess who it is!).
Our lunch was colorful.
Université Cheikh Anta Diop, Dakar, Senegal, November 24–28, 2003

The course in Dakar was on Foundations of Software Testing covering formal testing techniques.
The Dakar course participants and myself.
On the weekend there was time to enjoy the beauty of Africa.

Thursday 9 July 2009

Bernhard's Research Area

Bernhard’s research focuses on the foundations of software engineering. In particular, he is interested in
  • formal development methods
  • specification, modeling and design of reliable software
  • theories of programming, refinement calculi, concurrency
  • foundations of software testing.

His current research is mainly concerned with the automated generation of test cases from formal requirements specifications, as well as with the associated testing theories. This includes the investigation of existing testing, specification and refinement techniques. The aim is to develop testing theories that are able to unify the existing results on specification-based testing and advance the field.

His current emphasis is on fault-based testing. Fault-based testing is a technique where testers anticipate errors in a system under test in order to assess or generate test cases. The idea is to have enough test cases capable of detecting these anticipated errors. He applies this method at the specification level. His latest testing theory is based on Hoare and He's Unifying Theories of Programming (UTP) and has been instantiated to test case generators for OCL, Spec#, LOTOS and, most recently, Action Systems.
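In its simplest, program-level form the idea looks like this (a made-up Scala example, not one of the theories or tools mentioned above): anticipate a fault as a mutant of the implementation and check whether the existing test cases are strong enough to detect it; if no test kills the mutant, the suite needs strengthening.

```scala
// Tiny illustration of the fault-based idea: does the test suite detect
// an anticipated fault (the mutant)?
object MutationScoreSketch {

  def abs(x: Int): Int       = if (x < 0) -x else x   // implementation under test
  def absMutant(x: Int): Int = if (x > 0) -x else x   // anticipated fault: comparison flipped

  // A test case is an input together with the expected output.
  val tests = List(0 -> 0, 5 -> 5, -5 -> 5)

  def main(args: Array[String]): Unit = {
    // A test "kills" the mutant if the original passes it but the mutant does not.
    val killing = tests.filter { case (in, expected) =>
      abs(in) == expected && absMutant(in) != expected
    }
    if (killing.isEmpty)
      println("the anticipated fault goes undetected: the test suite needs strengthening")
    else
      println(s"mutant killed by inputs ${killing.map(_._1).mkString(", ")}")
  }
}
```

The theories mentioned above lift exactly this question from programs to specifications and derive the killing test cases automatically.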

MOGENTES Project

I am the project manager and senior researcher for TU Graz in the MOGENTES project.

MOGENTES stands for Model-based Generation of Tests for Dependable Embedded Systems and is a European STREP project in the 7th EU Framework Programme.

The coordinator is the Austrian Institute of Technology.

MOGENTES aims at significantly enhancing the testing and verification of dependable embedded systems by means of automated generation of test cases, relying on the development of new approaches as well as on the innovative integration of state-of-the-art techniques.

Driven by the needs of its industrial partners, it will address both testing of non-functional issues like reliability, e.g. by system stress and overload tests, and functional safety tests, meeting the requirements of standards such as IEC 61508, ISO WD 26262, or AUTOSAR.

MOGENTES will demonstrate that different domains with a wide variety of requirements can significantly benefit from a common model-based approach for achieving automated generation of efficient test cases and for verifying system safety correctness using formal methods and fault injection, as this approach increases system development productivity while achieving predictable system dependability properties. For that purpose, proof-of-concept demonstrations will show the applicability of the developed technologies in two application domains: railway and automotive (on- and off-road).

In particular, MOGENTES aims at the application of these technologies in large industrial systems, simultaneously enabling application domain experts (with rather little knowledge and experience in usage of formal methods) to use them with minimal learning effort. All in all, MOGENTES will increase knowledge and develop new techniques and tools in the area of verification and validation of dependable embedded systems which can be applied in model-based development processes also by non-experts in formal methods.