Friday, 23 October 2009

Fault-based Test Case Generation for Component Connectors

Sun Meng at CWI had the idea of translating my UTP theory of mutation testing to Reo connectors. This resulted in a TASE 2009 paper, published in July:

Bernhard K. Aichernig, Farhad Arbab, Lacramioara Astefanoaei, Frank S. de Boer, Sun Meng, and Jan Rutten. Fault-based test case generation for component connectors. In TASE 2009, Third IEEE International Symposium on Theoretical Aspects of Software Engineering, Tianjin, China, July 29–31, pages 147–154. IEEE Computer Society, July 2009. Copyright by IEEE. (PDF) (doi:10.1109/TASE.2009.14)

The paper demonstrates that the idea of formalizing basic software engineering concepts in Hoare and He's Unifying Theories of Programming really helps to transfer knowledge and technology. In this case, mutation testing found a new application domain: component connectors for modeling coordination in service-oriented architectures.

Abstract. The complex interactions appearing in service-oriented computing make coordination a key concern in service-oriented systems. In this paper, we present a fault-based method to generate test cases for component connectors from specifications. For connectors, faults are caused by possible errors during the development process, such as wrongly used channels, missing or redundant subcircuits, or circuits with wrongly constructed topology. We give test cases and connectors a unifying formal semantics by using the notion of design, and generate test cases by solving constraints obtained from the specification and faulty connectors. A prototype symbolic test case generator serves to demonstrate the automation of the approach.
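The underlying idea can be sketched in a few lines of Python: a fault-detecting test case is simply an input on which a mutated connector disagrees with the specification. The lossless channel, the lossy mutant, and the brute-force search below are all invented for illustration; the paper's generator works symbolically, by solving constraints over UTP designs, not by enumeration.

    # Toy sketch of fault-based test case generation: search for an
    # input on which a mutant (a deliberately faulty variant of the
    # specification) behaves differently. Example functions invented.

    def channel(x):
        """Specification: a lossless channel delivers x unchanged."""
        return x

    def lossy_mutant(x):
        """Faulty variant: silently drops even-numbered messages."""
        return None if x % 2 == 0 else x

    def generate_test(spec_fn, mutant_fn, domain):
        """Return an input where mutant and spec disagree, if any exists."""
        for x in domain:
            if spec_fn(x) != mutant_fn(x):
                return x      # a fault-detecting test case
        return None           # mutant is equivalent on this domain

    print(generate_test(channel, lossy_mutant, range(10)))  # -> 0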

More conference papers.

Tuesday, 13 October 2009

FMCO 2008: Conformance Testing of Distributed Concurrent Systems with Executable Designs

In August our FMCO 2008 paper was published:

Bernhard K. Aichernig, Andreas Griesmayer, Einar Broch Johnsen, Rudolf Schlatte, and Andries Stam. Conformance testing of distributed concurrent systems with executable designs. In Formal Methods for Components and Objects: 7th International Symposium, FMCO 2008, Sophia Antipolis, France, October 21–23, 2008, Revised Lectures, volume 5751 of Lecture Notes in Computer Science, pages 61–81, Berlin/Heidelberg, August 2009. Springer. (PDF) (doi:10.1007/978-3-642-04167-9_4)

The paper is a result of the Credo project. It presents our testing process based on the modeling language Creol: concolic execution on the model generates test stimuli, which are then executed on the system under test. During a test run the event traces are recorded and afterwards replayed on the model. A small GUI helps to identify the controllable events when generating the test driver for Creol.

Abstract. This paper presents a unified approach to test case generation and conformance test execution in a distributed setting. A model in the object-oriented, concurrent modeling language Creol is used both for generating test inputs and as a test oracle. For test case generation, we extend Dynamic Symbolic Execution (also called Concolic Execution) to work with multi-threaded models and use this to generate test inputs that maximize model coverage. For test case execution, we establish a conformance relation based on trace inclusion by recording traces of events in the system under test and replaying them in the model. User input is handled by generating a test driver that supplies the needed stimuli to the model. An industrial case study of the Credo project serves to demonstrate the approach.
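The replay step of this conformance check fits in a few lines: a recorded trace conforms if the model can reproduce it event by event, i.e. trace inclusion. The following Python sketch uses an invented request/reply model; the actual tool replays traces on Creol models, which are concurrent and may be nondeterministic.

    # Sketch of conformance checking by trace replay: the model is a
    # labelled transition system, and a trace recorded on the system
    # under test conforms if the model can perform every event in order.
    # States and events are invented for illustration.

    MODEL = {  # state -> {event: next_state}
        "idle": {"request": "busy"},
        "busy": {"reply": "idle", "timeout": "idle"},
    }

    def replays(model, trace, state="idle"):
        """True iff the model can perform the whole trace from `state`."""
        for event in trace:
            if event not in model.get(state, {}):
                return False  # the SUT produced an event the model forbids
            state = model[state][event]
        return True

    print(replays(MODEL, ["request", "reply", "request", "timeout"]))  # True
    print(replays(MODEL, ["reply"]))                                   # False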

More conference papers.

Monday, 12 October 2009

MBT 2010, March 21, Paphos, Cyprus

I have been invited to serve on the Program Committee of MBT 2010, the 6th Workshop on Model-Based Testing. MBT 2010 is a satellite workshop of ETAPS 2010, to be held in Paphos, the birthplace of Aphrodite.

The workshop is devoted to model-based testing of both software and hardware. Model-based testing is closely related to model-based specification: models are used to describe the behavior of the system under consideration and to guide efforts such as test selection and test result evaluation. Both testing and verification are used to validate models against the requirements and to check that the implementation conforms to the specification model.

The call for papers is out. Submission deadline: December 10, 2009.

Model-based testing has gained attention with the popularization of models in software/hardware design and development. Of particular importance are formal models with precise semantics, such as state-based formalisms. Testing with such models allows one to measure the degree of the product's conformance with the model.

Techniques to support model-based testing are drawn from diverse areas, like formal verification, model checking, control and data flow analysis, grammar analysis, and Markov decision processes.
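As a flavour of what such state-based models enable, the sketch below generates an offline test suite from a finite state machine by covering every transition via breadth-first search. The vending-machine example and its encoding are made up; real tools add data, nondeterminism, and coverage heuristics on top.

    # Sketch of offline test sequence generation from a finite state
    # machine: BFS builds one input sequence per transition, giving a
    # transition-coverage test suite. The machine is a made-up example.

    from collections import deque

    FSM = {  # state -> {input: next_state}
        "s0": {"coin": "s1"},
        "s1": {"coffee": "s0", "tea": "s0"},
    }

    def sequence_for(fsm, start, target_state, target_input):
        """Shortest input sequence reaching target_state, then firing target_input."""
        queue, seen = deque([(start, [])]), {start}
        while queue:
            state, path = queue.popleft()
            if state == target_state:
                return path + [target_input]
            for inp, nxt in fsm[state].items():
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, path + [inp]))
        return None  # target transition unreachable

    suite = [sequence_for(FSM, "s0", s, i) for s in FSM for i in FSM[s]]
    print(suite)  # [['coin'], ['coin', 'coffee'], ['coin', 'tea']]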

The intent of this workshop is to bring together researchers and users of models to discuss the state of the art in theory, applications, tools, and industrialization of model-based specification, testing, and verification.

Original submissions are solicited from industry and academia; authors are invited to present their work, plans, and views related to model-based testing. The topics of interest include but are not limited to:

  • Online and offline test sequence generation methods and tools
  • Test data selection methods and tools
  • Runtime verification
  • Model-based test coverage metrics
  • Automatic domain/partition analysis
  • Combination of verification and testing
  • Models as test oracles
  • Scenario-based test generation
  • Meta-programming support for testing
  • Formalisms suitable for model-based testing
  • Application of model checking techniques in model-based testing
  • Game-theoretic approaches to testing
  • Model-based testing in industry: problems and achievements

Wednesday, 7 October 2009

Journal Paper: Fault-Based Conformance Testing in Practice

We got our work on mutation testing of communication protocols published in the International Journal of Software and Informatics.

The article is part of a special double issue on Formal Methods edited by Prof. Dines Bjørner. The journal is published by the Chinese Academy of Sciences and has an international editorial board.

Martin Weiglhofer, Bernhard Aichernig, and Franz Wotawa. Fault-based conformance testing in practice. International Journal of Software and Informatics, 3(2–3):375–411, June/September 2009. Copyright by Institute of Software, Chinese Academy of Sciences. (PDF)

In this work we present our results on testing two communication protocols: the Session Initiation Protocol and the Conference Protocol. The protocols are modeled in LOTOS, and mutation testing is then applied to the LOTOS specifications to generate test cases. We present detailed data from the test process, including the nice result that we found a new bug in an implementation, a bug that random and scenario-based testing had missed.
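The gist of spec mutation can be pictured on a toy protocol machine: flip one action in the specification and search for a shortest trace that the mutant exhibits but the original forbids; any such trace is a fault-detecting test case. The SIP-flavoured actions and the tiny state machines below are invented; the actual work operates on LOTOS semantics and copes with very large state spaces.

    # Toy illustration of specification mutation: one action label is
    # flipped (OK -> BYE), then BFS finds a shortest trace the mutant
    # accepts but the original specification rejects. Machines invented.

    from collections import deque

    SPEC   = {"s0": {"INVITE": "s1"}, "s1": {"OK": "s2"}, "s2": {}}
    MUTANT = {"s0": {"INVITE": "s1"}, "s1": {"BYE": "s2"}, "s2": {}}

    def accepts(lts, trace, state="s0"):
        """True iff the machine can perform the whole trace."""
        for act in trace:
            if act not in lts[state]:
                return False
            state = lts[state][act]
        return True

    def discriminating_trace(spec, mutant):
        """BFS over the mutant's traces; return the first one spec rejects."""
        queue = deque([("s0", [])])
        while queue:
            state, trace = queue.popleft()
            if trace and not accepts(spec, trace):
                return trace  # a fault-detecting test case
            for act, nxt in mutant[state].items():
                queue.append((nxt, trace + [act]))
        return None  # every mutant trace is allowed by the spec

    print(discriminating_trace(SPEC, MUTANT))  # -> ['INVITE', 'BYE']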

Paper abstract: Conforming to protocol specifications is a critical issue in modern distributed software systems. Nowadays, complex service infrastructures, such as Voice-over-IP systems, are usually built by combining components of different vendors. If the components do not correctly implement the various protocol specifications, failures will certainly occur. In the case of emergency calls this may be even life-threatening. Functional black-box conformance testing, where one checks the conformance of the implemented protocol to a specification, becomes therefore a major issue. In this work, we report on our experiences and findings when applying fault-based conformance testing to an industrial application. Besides a discussion on modeling and simplifications we present a technique that prevents an application from implementing particular faults. Faults are modeled at the level of the specification. We show how such a technique can be adapted to specifications with large state spaces and present results obtained when applying our technique to the Session Initiation Protocol and to the Conference Protocol. Finally, we compare our results to random and scenario-based testing.