Research Track "AI-Based Quality, Testing, and Security"

PhD Candidate: Leonhard Applis
Track leader: Annibale Panichella

Over the past few years, the software engineering research community has made substantial progress in reformulating many software engineering processes as search-based problems.

In particular, testing can be cast as a search for a set of test cases that together meet a given adequacy criterion, such as line coverage. The search can start from randomly generated test inputs, which are then iteratively mutated and recombined so that the resulting suites cover more and more of the code. This gives rise to the application of evolutionary algorithms to software testing, as sketched below. A well-known research tool that reflects the state of the art is EvoSuite (evosuite.org), to which TU Delft has contributed as well.
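The following minimal sketch illustrates the idea at the unit level, assuming a toy function under test. The function `classify`, the branch-counting fitness, and all parameters are hypothetical illustrations chosen for brevity; real tools such as EvoSuite use far richer encodings, coverage instrumentation, and many-objective search.

```python
# Minimal genetic algorithm for test-input generation (illustrative only).
import random

def classify(x: int) -> str:
    """Toy function under test with three branches (our coverage targets)."""
    if x < 0:
        return "negative"
    if x == 0:
        return "zero"
    return "positive"

BRANCHES = ("negative", "zero", "positive")

def coverage(suite: list[int]) -> set[str]:
    """Branches exercised by a test suite (a list of integer inputs)."""
    return {classify(x) for x in suite}

def fitness(suite: list[int]) -> int:
    """Number of branches covered; higher is better."""
    return len(coverage(suite))

def mutate(suite: list[int]) -> list[int]:
    """Randomly perturb one input in the suite."""
    child = suite[:]
    child[random.randrange(len(child))] = random.randint(-100, 100)
    return child

def crossover(a: list[int], b: list[int]) -> list[int]:
    """Single-point crossover of two suites."""
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def evolve(pop_size: int = 20, suite_size: int = 3, generations: int = 50):
    """Evolve test suites toward full branch coverage."""
    population = [[random.randint(-100, 100) for _ in range(suite_size)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == len(BRANCHES):
            break  # full branch coverage reached
        elite = population[: pop_size // 2]
        offspring = [mutate(crossover(random.choice(elite), random.choice(elite)))
                     for _ in range(pop_size - len(elite))]
        population = elite + offspring
    return max(population, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print("best suite:", best, "covers:", sorted(coverage(best)))
```

The core design choice carries over to industrial-strength generators: a fitness function that measures progress toward the adequacy criterion, plus mutation and crossover operators that let good partial solutions combine.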

In the context of AFR, search-based testing techniques offer unique opportunities to further advance automated testing approaches within ING. In particular, in this track we seek to lift search-based test generation techniques from the unit level to the integration and system test levels. Furthermore, we will explore how search-based techniques can be used for security testing, bringing more intelligence, for example, to the current state of the art in fuzzing and penetration testing (a minimal fuzzing loop is sketched below).
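To make the fuzzing direction concrete, the sketch below shows a minimal coverage-guided mutation loop in the spirit of greybox fuzzers such as AFL. The target `parse`, its path feedback, and the mutation operators are hypothetical stand-ins, not an existing tool's API; real fuzzers instrument the program under test to observe edge coverage.

```python
# Minimal coverage-guided fuzzing loop (illustrative only).
import random

def parse(data: bytes) -> set[str]:
    """Toy parser; returns the set of code paths it took (our feedback)."""
    paths = {"entry"}
    if data.startswith(b"HDR"):
        paths.add("header")
        if len(data) > 8:
            paths.add("body")
            if data[3:4] == b"!":
                paths.add("bang")  # deep path a purely random input rarely hits
    return paths

def mutate(data: bytes) -> bytes:
    """Flip a bit, insert a byte, or delete a byte at a random position."""
    buf = bytearray(data)
    op = random.choice(("flip", "insert", "delete"))
    if op == "flip" and buf:
        buf[random.randrange(len(buf))] ^= 1 << random.randrange(8)
    elif op == "insert":
        buf.insert(random.randrange(len(buf) + 1), random.randrange(256))
    elif op == "delete" and buf:
        del buf[random.randrange(len(buf))]
    return bytes(buf)

def fuzz(iterations: int = 10_000):
    """Keep any mutated input that exercises previously unseen paths."""
    corpus = [b"HDR" + bytes(8)]          # seed input
    seen: set[str] = set()
    for _ in range(iterations):
        child = mutate(random.choice(corpus))
        paths = parse(child)
        if not paths <= seen:             # new coverage: add input to corpus
            seen |= paths
            corpus.append(child)
    return seen, corpus

if __name__ == "__main__":
    covered, corpus = fuzz()
    print("paths covered:", sorted(covered), "corpus size:", len(corpus))
```

The coverage-as-fitness feedback loop is what makes such fuzzers a search-based technique, and it is the natural hook for adding more intelligent search operators.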

Selected publications

  • Leonhard Applis, Ruben Marang, Annibale Panichella (2023). Searching for Quality: Genetic Algorithms and Metamorphic Testing for Software Engineering ML. The Genetic and Evolutionary Computation Conference (GECCO 2023) (preprint).

  • Leonhard Applis, Annibale Panichella (2023). HasBugs - Handpicked Haskell Bugs. International Conference on Mining Software Repositories (MSR 2023) (preprint).

  • Matthías Páll Gissurarson, Leonhard Applis, Annibale Panichella, Arie van Deursen, David Sands (2022). PropR: Property-Based Automatic Program Repair. The 44th IEEE/ACM International Conference on Software Engineering (ICSE 2022) (preprint).

  • Leonhard Applis, Annibale Panichella, Arie van Deursen (2021). Assessing Robustness of ML-Based Program Analysis Tools using Metamorphic Program Transformations. The 36th IEEE/ACM International Conference on Automated Software Engineering - New Ideas and Emerging Results (ASE-NIER 2021) (preprint).