
Our group works on the design of algorithms that allow computers to learn complex state diagrams by providing inputs and observing outputs. This is an exciting area in which we collaborate with the Digital Security group to learn models of security protocols, and also with several companies (e.g. ASML, Philips Healthcare, Canon Production Printing, Thermo Fisher and Axini). We refer to the review article by Frits Vaandrager for a general introduction. In model-based testing, a model is the starting point for testing. This model expresses precisely and completely what a system under test should and should not do, and is therefore a good basis for systematically generating test cases. Model learning and model-based testing are complementary techniques: in particular, model-based testing can be used to find counterexamples for candidate models produced by model learning. There are several opportunities for theoretical, tool/programming-oriented, and application-oriented projects related to model learning and model-based testing:
Did you like the Algorithms & Data Structures course and would you like to design algorithms? Or did you like the Languages and Automata course and are you interested in theory? Well, we can define dozens of projects on automata learning that involve algorithm design and/or theory. We proposed a new learning algorithm for finite state machines, L#, that outperforms existing active learning algorithms. But many questions remain related to the design, implementation and analysis of learning algorithms, ranging from super practical to super theoretical; a small sketch of the basic ingredients is given below (contact Frits Vaandrager or Jurriaan Rot).
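To give a flavour of what is involved, here is a minimal Python sketch of two basic ingredients of active learning: output queries on a black-box Mealy machine, and an "apartness" check in the spirit of L# that decides whether two access sequences must lead to different states. The machine, the alphabet and all names are invented for illustration; this is not the actual L# implementation.

```python
# A minimal sketch of active learning ingredients (not the actual L#
# implementation). The learner may only access the hidden Mealy machine
# through output queries: feed an input word, observe the output word.

# transitions[state][input] = (next_state, output)
TRANSITIONS = {
    0: {'a': (1, 'x'), 'b': (0, 'y')},
    1: {'a': (0, 'y'), 'b': (1, 'x')},
}

def output_query(word):
    """Feed an input word to the black box and observe the output word."""
    state, outputs = 0, []
    for symbol in word:
        state, out = TRANSITIONS[state][symbol]
        outputs.append(out)
    return tuple(outputs)

def apart(access1, access2, suffixes):
    """Two access sequences are 'apart' if some suffix yields different
    outputs after them, so they cannot lead to the same state."""
    return any(
        output_query(access1 + s)[len(access1):] !=
        output_query(access2 + s)[len(access2):]
        for s in suffixes
    )

# The empty word and 'a' reach states that the suffix 'a' tells apart,
# so any correct hypothesis needs at least two states.
print(apart((), ('a',), [('a',), ('b',)]))   # True
```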
Did you like the Computer Networks course and do you find it interesting to understand the details of protocols? There are many possibilities for projects in which the goal is to learn models of protocol components. We are particularly interested in learning models of security-related protocols (such as TCP, SSH, TLS, DTLS, EMV and BLE). Model learning may help us find specific sequences of input events that expose security vulnerabilities. (contact Frits Vaandrager or Erik Poll).
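To give an impression of the practical side of such projects, the sketch below shows a minimal socket-based adapter that maps abstract input symbols to concrete protocol messages, so that a learning tool can query a live implementation. The alphabet, the messages, the host and the port are all made up; an adapter for a real protocol such as SSH or TLS requires proper message encoding and session handling.

```python
import socket

# A minimal sketch of a test harness ("adapter") that lets a learner query
# a protocol implementation over TCP. All messages below are invented.
ABSTRACT_TO_CONCRETE = {
    'CONNECT': b'HELLO\r\n',
    'AUTH':    b'AUTH user secret\r\n',
    'DATA':    b'DATA ping\r\n',
}

def run_query(word, host='localhost', port=9999, timeout=1.0):
    """Send a sequence of abstract inputs; return the observed outputs."""
    outputs = []
    with socket.create_connection((host, port), timeout=timeout) as sock:
        for symbol in word:
            sock.sendall(ABSTRACT_TO_CONCRETE[symbol])
            try:
                reply = sock.recv(4096)
                outputs.append(reply.decode(errors='replace').strip() or 'EMPTY')
            except socket.timeout:
                outputs.append('TIMEOUT')  # absence of output is an observation too
    return outputs

# e.g. run_query(['CONNECT', 'DATA', 'AUTH']): an unexpected reply to DATA
# before AUTH may point at a missing state check in the implementation.
```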
We are also collaborating with several other companies that are interested in the potential of active learning and/or model-based testing (contact Jan Tretmans or Frits Vaandrager).
Model learning and model-based testing are popular research topics, and in recent years numerous researchers have proposed new learning and testing algorithms for slightly different modelling frameworks, which are then evaluated on a few case studies. However, to make real scientific progress, it is important to have a large, shared collection of benchmarks that can be used to evaluate all these algorithms, supported by tools that translate between the different modelling frameworks. In 2019, we set up a repository with a publicly available set of benchmarks of state machines that model real protocols and embedded systems, allowing researchers to compare the performance of learning and testing algorithms. There is still much work to do in extending this repository with new benchmarks and software, and in properly comparing the performance of existing learning and testing algorithms. This is an area where, as a student, you can have real impact and do work that is much appreciated by scientists around the globe. (contact Frits Vaandrager).
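As a small illustration of the kind of tooling involved, the sketch below reads a Mealy machine benchmark and reports some basic statistics. It assumes the edges are written in a DOT-like style with labels of the form input/output; real benchmark files may need a more robust parser, and the file name is made up.

```python
import re
from collections import defaultdict

# Minimal sketch for inspecting a benchmark Mealy machine, assuming a
# DOT-like edge format:  s0 -> s1 [label="input/output"]
EDGE = re.compile(r'(\w+)\s*->\s*(\w+)\s*\[label="([^/"]+)/([^"]+)"\]')

def load_mealy(path):
    """Read transitions[src][input] = (dst, output) from a DOT-like file."""
    transitions = defaultdict(dict)
    with open(path) as f:
        for src, dst, inp, out in EDGE.findall(f.read()):
            transitions[src][inp.strip()] = (dst, out.strip())
    return transitions

def stats(transitions):
    """Number of states and size of the input alphabet."""
    states = set(transitions) | {dst for t in transitions.values()
                                 for dst, _ in t.values()}
    inputs = {i for t in transitions.values() for i in t}
    return len(states), len(inputs)

# transitions = load_mealy('benchmark.dot')
# print('states: %d, inputs: %d' % stats(transitions))
```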
Theorists have developed a powerful set of algorithms for conformance testing of deterministic Finite State Machine (FSM) models, see e.g. Lee & Yannakakis. A problem, however, is that in practice systems are often nondeterministic and do not exhibit the strict alternation of inputs and outputs of the FSM model: sometimes an input is not followed by any output at all, and sometimes an input is followed by a series of outputs. Jan Tretmans has developed a popular theory of testing for this more general class of systems, but there are still very few algorithms for efficient conformance testing in this general setting. Recently, we have published papers on combining Tretmans' theory with the theory of Finite State Machines, but this theory requires some next steps to make it applicable and to implement it in the model-based testing tool TorXakis. The goal of this project would be to extend, adapt, and apply this theory. (contact Frits Vaandrager or Jan Tretmans).
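As an illustration of the classical deterministic setting that this project starts from, here is a minimal Python sketch of W-method-style test generation (in the spirit of Lee & Yannakakis): a transition cover is combined with a characterization set W to obtain a conformance test suite. The toy machine and W are our own; extending such constructions to the nondeterministic ioco setting is exactly the open problem.

```python
from itertools import product

# Minimal sketch of W-method-style test generation for a deterministic,
# complete Mealy machine. The machine and W below are toy examples.
TRANS = {  # TRANS[state][input] = (next_state, output)
    0: {'a': (1, 'x'), 'b': (0, 'y')},
    1: {'a': (0, 'y'), 'b': (1, 'x')},
}
INPUTS = ['a', 'b']
W = [('a',)]   # suffixes that pairwise distinguish all states of TRANS

def state_cover():
    """Shortest access sequence for every state (BFS from the initial state)."""
    cover, frontier = {0: ()}, [0]
    while frontier:
        q = frontier.pop(0)
        for i in INPUTS:
            nxt = TRANS[q][i][0]
            if nxt not in cover:
                cover[nxt] = cover[q] + (i,)
                frontier.append(nxt)
    return cover

def w_method(extra_states=0):
    """Test suite P . Sigma^{<=k} . W: complete under the standard assumption
    that the implementation has at most `extra_states` additional states."""
    sc = list(state_cover().values())
    P = sc + [p + (i,) for p in sc for i in INPUTS]   # transition cover
    middles = [m for k in range(extra_states + 1)
               for m in product(INPUTS, repeat=k)]
    return {p + m + w for p in P for m in middles for w in W}

print(sorted(w_method(extra_states=0)))
```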
Concolic testing is a combination of symbolic and concrete testing used for white-box testing, which, among others, is successfully applied by Microsoft. Model-based testing (MBT) is a promising approach for black-box testing. The goal is to combine these two approaches by applying the concepts of concolic testing to (ioco-based) MBT, in order to improve test selection for MBT (contact Jan Tretmans).
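To make the concolic idea concrete, the toy sketch below runs an invented program on a concrete input, mirrors its branch conditions symbolically, negates the last branch, and asks an SMT solver for an input that takes the other path. It assumes the z3-solver Python package; a real concolic engine would instrument the program rather than duplicate its conditions by hand.

```python
from z3 import Int, Not, Solver, sat

# Toy sketch of the core concolic step: concrete run -> path condition ->
# negate the last branch -> solve for an input exercising a new path.
# The program under test is invented for illustration.

def program_path(x):
    """Concrete run of a toy program; returns the branch decisions taken."""
    return [('big', x > 10), ('even', x % 2 == 0)]

def symbolic_conditions(x):
    """The same branch conditions, but over a symbolic variable."""
    return {'big': x > 10, 'even': x % 2 == 0}

def next_input(concrete_x):
    """Negate the last observed branch and solve for a new input."""
    path = program_path(concrete_x)
    x = Int('x')
    conds = symbolic_conditions(x)
    s = Solver()
    for name, taken in path[:-1]:                 # keep the path prefix
        s.add(conds[name] if taken else Not(conds[name]))
    name, taken = path[-1]                        # flip the final branch
    s.add(Not(conds[name]) if taken else conds[name])
    return s.model()[x].as_long() if s.check() == sat else None

print(next_input(4))   # 4 takes (big=False, even=True); solver returns an odd x <= 10
```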
Current systems are huge. Scaling model-based testing to such large systems is a challenge, in particular the selection of appropriate test cases. One approach is to use usage (operational) profiles as the basis for testing, i.e., to use users' scenarios as the basis for test selection. The goal is to combine usage-profile-based testing with ioco model-based testing (contact Jan Tretmans).
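As a minimal illustration of profile-based test selection, the sketch below models user operations as a Markov chain whose probabilities reflect how often users perform each action, and samples scenarios from it; frequent behaviour then automatically dominates the generated tests. The profile is invented; in the project, such scenarios would drive an ioco-based tester such as TorXakis.

```python
import random

# Minimal sketch of test selection from a usage profile: user operations
# form a Markov chain, and test scenarios are sampled according to how
# often real users perform each action. The profile below is invented.
PROFILE = {
    'start':  [('login', 0.9), ('help', 0.1)],
    'login':  [('browse', 0.7), ('logout', 0.3)],
    'browse': [('browse', 0.5), ('buy', 0.2), ('logout', 0.3)],
    'help':   [('start', 1.0)],
    'buy':    [('logout', 1.0)],
}

def sample_scenario(max_len=10):
    """Draw one user scenario; frequent user behaviour is drawn more often."""
    state, trace = 'start', []
    while state != 'logout' and len(trace) < max_len:
        actions, weights = zip(*PROFILE[state])
        state = random.choices(actions, weights=weights)[0]
        trace.append(state)
    return trace

random.seed(1)
for _ in range(3):
    print(sample_scenario())
```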