Model Learning and Model-Based Testing

Model learning aims to construct state diagram models of black-box software and hardware systems by providing inputs and observing outputs. Model-based testing is a testing method that uses a model (or specification) of the black-box behavior of a system to generate test cases and to establish whether the system conforms to its specification. Model learning and model-based testing are dual activities: (1) we typically use model-based testing to increase our confidence in the correctness of a learned model and to find counterexamples, and (2) a learned model may often serve as a specification, for instance to check conformance of a refactored implementation.
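
As a concrete illustration of this duality, the following Python sketch (a toy example of ours, not our actual tooling) approximates the equivalence query of a learning algorithm by model-based testing: random input sequences are generated, and the outputs of the learned hypothesis are compared with those of the black-box system.

    import random

    class MealyMachine:
        """Deterministic Mealy machine: transitions map (state, input) to (output, next state)."""

        def __init__(self, initial, transitions):
            self.initial = initial
            self.transitions = transitions

        def run(self, inputs):
            state, outputs = self.initial, []
            for symbol in inputs:
                output, state = self.transitions[(state, symbol)]
                outputs.append(output)
            return outputs

    def find_counterexample(system, hypothesis, alphabet, tests=1000, max_length=10):
        """Approximate equivalence query via random model-based testing: return an
        input sequence on which the hypothesis disagrees with the system, or None."""
        for _ in range(tests):
            sequence = [random.choice(alphabet) for _ in range(random.randint(1, max_length))]
            if system.run(sequence) != hypothesis.run(sequence):
                return sequence
        return None

    # Toy example: the hypothesis wrongly stays in state 1 after a second 'a'.
    system = MealyMachine(0, {(0, 'a'): ('x', 1), (0, 'b'): ('y', 0),
                              (1, 'a'): ('x', 0), (1, 'b'): ('z', 1)})
    hypothesis = MealyMachine(0, {(0, 'a'): ('x', 1), (0, 'b'): ('y', 0),
                                  (1, 'a'): ('x', 1), (1, 'b'): ('z', 1)})
    print(find_counterexample(system, hypothesis, ['a', 'b']))  # prints some input sequence exposing the difference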

Systematic testing is important for software quality, but it is also error-prone, expensive, and time-consuming. Model-based testing can improve the effectiveness and efficiency of the testing process. Model learning is emerging as a highly effective bug-finding technique, with applications in areas such as banking cards, network protocols and legacy software.

Within our group, we tackle the following research challenges:

  • The design of algorithms for model learning constitutes a fundamental research problem. Recently, our group has designed the L# learning algorithm, which is one of the most efficient algorithms to date. Our aim is to make learning algorithms even more powerful, that is, able to learn models with more states and input events using less data. We also aim to extend this algorithm to richer settings, allowing us to learn models of real-time systems and of systems with data parameters.
  • High-tech systems come in many variants, customized for different users, and they evolve over time to adapt to changing requirements and contexts. As a result, the number of possible variants grows exponentially, making testing of systems with high variability and evolution a major challenge. Our aim is to avoid completely re-testing every version and variant, while still providing high, well-argued test coverage and confidence in the quality of the whole system. We follow a model-based approach using component-based and feature-driven testing.
  • A mature research field is characterized by a rich set of shared benchmarks that can be used to compare different approaches. We therefore maintain the automata wiki: a publicly available set of benchmarks of state machines that model real protocols and embedded systems. These benchmarks allow researchers to compare the performance of learning and testing algorithms (see the sketch after this list).
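
As a hypothetical illustration of how such benchmarks are used (the class and the toy model below are illustrative, not part of the wiki), a benchmark state machine can be wrapped as a simulated system under learning that counts the input symbols sent to it, a common cost measure when comparing learning and testing algorithms:

    class SimulatedSUL:
        """Wrap a benchmark Mealy machine, given as a transition dictionary
        (state, input) -> (output, next state), and count the input symbols sent."""

        def __init__(self, initial, transitions):
            self.initial = initial
            self.transitions = transitions
            self.symbols_sent = 0

        def query(self, inputs):
            self.symbols_sent += len(inputs)
            state, outputs = self.initial, []
            for symbol in inputs:
                output, state = self.transitions[(state, symbol)]
                outputs.append(output)
            return outputs

    # A toy two-state model standing in for a real benchmark model.
    sul = SimulatedSUL(0, {(0, 'a'): ('x', 1), (0, 'b'): ('y', 0),
                           (1, 'a'): ('x', 0), (1, 'b'): ('z', 1)})
    print(sul.query(['a', 'a', 'b']), sul.symbols_sent)  # ['x', 'x', 'y'] 3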
