3rd Workshop on
Validation, Analysis and Evolution of Software Tests

February 18, 2020 | co-located with SANER 2020, London, Ontario, Canada

Call for Papers

Aims, scope and topics of interest.

Software projects accumulate large sets of test cases that encode valuable expert knowledge about the software under test, often representing many person-years of effort. Over time, the reliability of these tests decreases and they become difficult to understand and maintain. Extra effort is required to repair broken tests and to adapt test suites and models to evolving software systems.

The International Workshop on Validation, Analysis and Evolution of Software Tests (VST) is a unique event bringing together academics, industrial researchers, and practitioners for exchanging experiences, solutions and new ideas in applying methods, techniques and tools from software analysis, evolution and re-engineering to advance the state of the art in test development and maintenance.

The workshop invites high-quality submissions related, but not limited, to:

 ●  Test minimization and simplification

 ●  Fault localization and automated repair

 ●  Change analysis for software tests

 ●  Test visualization

 ●  Test validation

 ●  Documentation analysis

 ●  Bug report analysis

 ●  Test evolution

 ●  Test case generation

 ●  Model-based testing

 ●  Combinations of the topics above


Important Dates

All deadlines are Anywhere on Earth (AoE).

Paper submission deadline (extended): December 20, 2019 AoE

Notifications: January 10, 2020

Camera Ready: January 14, 2020


Instructions and submission site.

We encourage submissions on the topics mentioned above, with a limit of 8 pages in IEEE format. In addition, we also welcome position papers and tool demo papers of two to four pages.

Papers will be reviewed by at least three program committee members. Paper selection is based on scientific originality, novelty, and the potential to generate interesting discussions. Accepted papers will be published in the IEEE Digital Library along with the SANER proceedings.

Submission Instructions

  • Papers must not exceed the page limit of 8 pages (including all text, references, appendices, and figures); position papers and tool demos must be 2-4 pages

  • Papers must conform to the IEEE formatting guidelines for conference proceedings

  • Papers must be original work that has neither appeared elsewhere nor is currently under review for another publication

  • Papers must be submitted in PDF format via EasyChair at https://easychair.org/conferences/?conf=vst2020



Location and schedule.

09.15-09.30 - Welcome at Somerville House, Room 1 (SH 2316)
Keynote: "Next Level" Test Automation
Serge Demeyer


Abstract - With the rise of agile development, software teams all over the world embrace faster release cycles as *the* way to incorporate customer feedback into product development processes. Yet, faster release cycles imply rethinking the traditional notion of software quality: agile teams must balance reliability (minimize known defects) against agility (maximize ease of change). This talk will explore the state-of-the-art in software test automation and the opportunities this may present for maintaining this balance. We will address questions like: Will our test suite detect critical defects early? If not, how can we improve our test suite? Where should we fix a defect? The research underpinning all of this has been validated under "in vivo" circumstances through the TESTOMAT project, a European project with 34 partners coming from 6 different countries.

Short Bio - Serge Demeyer is a professor at the University of Antwerp and the spokesperson for the ANSYMO (Antwerp System Modelling) research group. He directs a research lab investigating the theme of "Software Reengineering" (LORE - Lab On REengineering). Serge Demeyer is a spokesperson for the NEXOR interdisciplinary research consortium and an affiliated member of the Flanders Make Research Centre. In 2007 he received a "Best Teachers Award" from the Faculty of Sciences at the University of Antwerp and as a consequence remains very active in all matters related to teaching quality. His main research interest concerns software evolution, more specifically how to strike the right balance between reliability (striving for perfection) and agility (optimising for improvements). He is an active member of the corresponding international research communities, serving in various conference organization and program committees. He has written a book entitled "Object-Oriented Reengineering" and edited a book on "Software Evolution". He also authored numerous peer reviewed articles, many of them in top conferences and journals.

Slides - Keynote slides available at Slideshare

10.30-11.00 - Coffee Break
Do Bug-Fix Types Affect Spectrum-Based Fault Localization Algorithms' Efficiency?
Attila Szatmári, Béla Vancsics, and Árpád Beszédes



Abstract - Finding a bug in software is an expensive task, yet debugging is a crucial part of the software development life cycle. Spectrum-Based Fault Localization (SBFL) algorithms can reduce the time spent on debugging. Although SBFL is a well-researched topic, few tools implement it. Many studies have examined the effectiveness of SBFL algorithms, but these have mostly been evaluated on programs written in Java and C++. We performed an empirical study on JavaScript programs (using the BugsJS benchmark) to evaluate the relationship between the algorithms' efficiency and bug-fix types. First we implemented three popular SBFL approaches, i.e. Tarantula, Ochiai and DStar, then examined whether there was a correlation between the positions of the faulty methods in the suspiciousness ranks and the bug-fix types. Results show that certain bug-fix types can be significantly differentiated from the others (in both the positive and negative direction) based on the fault localization effectiveness of the investigated algorithms.

Attila Szatmári

Béla Vancsics

Árpád Beszédes

(University of Szeged, Hungary)
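For readers less familiar with the three SBFL formulas named in the abstract, here is a minimal Python sketch of how they are commonly defined over per-element coverage spectra (variable names are illustrative; `ef`/`ep` count the failing/passing tests that execute the element, `nf`/`np_` those that do not):

```python
import math

def tarantula(ef, ep, nf, np_):
    # Ratio of the failing-test fraction to the total fraction covering the element.
    fail = ef / (ef + nf) if ef + nf else 0.0
    passed = ep / (ep + np_) if ep + np_ else 0.0
    return fail / (fail + passed) if fail + passed else 0.0

def ochiai(ef, ep, nf, np_):
    # Geometric-mean style similarity between coverage and failure.
    denom = math.sqrt((ef + nf) * (ef + ep))
    return ef / denom if denom else 0.0

def dstar(ef, ep, nf, np_, star=2):
    # D* with the customary exponent star = 2.
    denom = ep + nf
    return ef ** star / denom if denom else math.inf

# An element covered by all 3 failing tests but only 1 of 5 passing tests
# is ranked as highly suspicious by all three formulas.
print(tarantula(3, 1, 0, 4))  # ~0.833
print(ochiai(3, 1, 0, 4))     # ~0.866
print(dstar(3, 1, 0, 4))      # 9.0
```

Elements are then ranked by descending score, and the developer inspects them from the top of the list.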

An Empirical Evaluation for Object Initialization of Member Variables in Unit Testing
Stefan Fischer, Evelyn Haslinger, Markus Zimmermann, and Hannes Thaller



Abstract - Automated test case generation techniques usually aim to maximize some coverage criterion. For object-oriented languages like Java, the branches that can be reached in source code frequently depend on the internal object state; that is, certain branches will only be taken if fields inside the tested class are set to specific values. It is, however, not obvious how much of the internal object state can be controlled. In this paper, we analyzed a corpus of 110 open source systems to evaluate how settable their classes are, i.e., we looked for ways in which fields inside classes can be written. For instance, we analyzed the source code to identify setter methods that can be used to set the value of a field. Our results show that 66.5% of fields can be set to a desired value, while 31.5% of fields may only be settable to particular values or require a more in-depth analysis. For only 2% of fields did we find no way to set their values.

Stefan Fischer

(Software Competence Center Hagenberg, Austria)

Evelyn Haslinger

Markus Zimmermann

(Symflower GmbH, Austria)

Hannes Thaller

(Johannes Kepler University Linz, Austria)

Semi-Automatic Test Case Expansion for Mutation Testing
Zhong Xi Lu, Sten Vercammen, and Serge Demeyer



Abstract - Mutation testing is the state-of-the-art technique for detecting weaknesses in a test suite. Unfortunately, alleviating these weaknesses (i.e. "killing the surviving mutants") is quite labour-intensive. In this paper we investigate a recommender system which expands test cases with extra asserts for the easy-to-fix mutants. We evaluated a proof-of-concept tool on ten open-source projects, and killed up to 6% of the surviving mutants. This illustrates that such a test expansion system would free up valuable time to address the harder-to-fix mutants.

Zhong Xi Lu

Sten Vercammen

Serge Demeyer

(University of Antwerp, Belgium)
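To make the idea concrete for readers new to mutation testing, here is a hand-made toy example (not the authors' tool; all names are illustrative) of a surviving mutant being killed by expanding a test with one extra assert:

```python
def clamp(x, lo, hi):
    """Original implementation: keep x within [lo, hi]."""
    return max(lo, min(x, hi))

def clamp_mutant(x, lo, hi):
    """Mutant: the upper-bound check 'min(x, hi)' was deleted."""
    return max(lo, x)

def original_test(f):
    # Only exercises the lower bound, so the mutant also passes (survives).
    return f(-5, 0, 10) == 0

def expanded_test(f):
    # Expanded with an extra assert on the upper bound: kills the mutant.
    return f(-5, 0, 10) == 0 and f(99, 0, 10) == 10

print(original_test(clamp_mutant))  # True  -> mutant survives the original test
print(expanded_test(clamp_mutant))  # False -> the extra assert kills it
```

A recommender system as described in the paper would propose the extra assert automatically for such easy-to-fix mutants.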

12.30-13.30 - Lunch Break
An Early Investigation of Unit Testing Practices of Component-based Software Systems
Georg Buchgeher, Stefan Fischer, Michael Moser, and Josef Pichler



Abstract - Component-based software development (CBSD) is one of the main programming paradigms of the last decades. The main idea of CBSD is to modularize a system as a configuration of multiple interacting components. Components interact with each other via dedicated component interfaces hiding a component's implementation and making components interchangeable. In this paper, we present an early investigation of unit testing practices of open source component-based software systems with the goal of finding out how component-based software systems are actually tested and how to improve unit testing practices as part of future research. Our preliminary results show that unit tests typically directly test the component implementation and not dedicated component APIs. The method coverage of component APIs varied between 17% and 34% in the analyzed projects.

Georg Buchgeher

Stefan Fischer

Michael Moser

(Software Competence Center Hagenberg, Austria)

Josef Pichler

(University of Applied Sciences Upper Austria, Austria)

Towards Fault Localization via Probabilistic Software Modeling
Hannes Thaller, Lukas Linsbauer, Alexander Egyed, and Stefan Fischer



Abstract - Software testing helps developers to identify bugs. However, awareness of bugs is only the first step. Finding and correcting the faulty program components is equally hard and essential for high-quality software. Fault localization automatically pinpoints the location of an existing bug in a program. It is a hard problem, and existing methods are not yet precise enough for widespread industrial adoption. We propose fault localization via Probabilistic Software Modeling (PSM). PSM analyzes the structure and behavior of a program and synthesizes a network of Probabilistic Models (PMs). Each PM models a method with its inputs and outputs and is capable of evaluating the likelihood of runtime data. We use this likelihood evaluation to find fault locations and their impact on dependent code elements. Results indicate that PSM is a robust framework for accurate fault localization.

Hannes Thaller

Lukas Linsbauer

Alexander Egyed

(Johannes Kepler University Linz, Austria)

Stefan Fischer

(Software Competence Center Hagenberg, Austria)

Simulating the Effect of Test Flakiness on Fault Localization Effectiveness
Béla Vancsics, Tamás Gergely, and Árpád Beszédes



Abstract - Test flakiness (non-deterministic behavior of test cases) is an increasingly serious concern in industrial practice. However, relatively few research results are available that systematically address the analysis and mitigation of this phenomenon. The dominant approach to handling flaky tests is still to detect and remove them from automated test executions. However, some reports show that the number of flaky tests is in many cases so high that we should rather start working on approaches that operate in the presence of flaky tests. In this work, we investigate how test flakiness affects the effectiveness of Spectrum-Based Fault Localization (SBFL), a popular class of software Fault Localization (FL) methods, which heavily relies on test case execution outcomes. We performed a simulation-based experiment to find out the relationship between the level of test flakiness and fault localization effectiveness. Our results could help the users of automated FL methods to understand the implications of flaky tests in this area and to design novel FL algorithms that take test flakiness into account.

Béla Vancsics

Tamás Gergely

Árpád Beszédes

(University of Szeged, Hungary)
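The kind of simulation described can be sketched in a few lines of Python (illustrative names, not the paper's actual setup): flip each test's outcome with a given flakiness probability and observe how the rank of the faulty statement under the Ochiai formula degrades.

```python
import math
import random

def ochiai(ef, ep, nf, np_):
    d = math.sqrt((ef + nf) * (ef + ep))
    return ef / d if d else 0.0

def fault_rank(coverage, outcomes, faulty):
    """Rank of the faulty statement; coverage[t][s] is True if test t runs s."""
    scores = []
    for s in range(len(coverage[0])):
        ef = sum(1 for t, ok in enumerate(outcomes) if not ok and coverage[t][s])
        ep = sum(1 for t, ok in enumerate(outcomes) if ok and coverage[t][s])
        nf = outcomes.count(False) - ef
        np_ = outcomes.count(True) - ep
        scores.append(ochiai(ef, ep, nf, np_))
    # Rank = number of statements scoring at least as high as the faulty one.
    return sum(1 for sc in scores if sc >= scores[faulty])

def mean_rank(coverage, outcomes, faulty, flakiness, trials=200, seed=0):
    """Average rank when each outcome is flipped with probability `flakiness`."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        noisy = [not ok if rng.random() < flakiness else ok for ok in outcomes]
        total += fault_rank(coverage, noisy, faulty)
    return total / trials

# 4 tests x 3 statements; statement 1 is faulty, tests 0 and 1 fail.
cov = [[True, True, False], [False, True, False],
       [True, False, False], [False, False, True]]
outcomes = [False, False, True, True]
print(mean_rank(cov, outcomes, faulty=1, flakiness=0.0))  # 1.0 (perfect rank)
```

With zero flakiness the faulty statement is ranked first; increasing the flakiness probability lets the mean rank drift upward, which is the effect the experiment quantifies.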

15.00-15.30 - Coffee Break
15.30-16.00 - Wrap-Up Discussion & Closing
18.30 - Workshop Reception at The Grad Club, Middlesex College

The reception is open to all workshop and conference attendees and will start at 18:30.

For details see https://saner2020.csd.uwo.ca/socialEvents


Chairs and program committee.

Program Committee

Emil Alégroth, Blekinge Institute of Technology, Sweden

Árpád Beszédes, University of Szeged, Hungary

Serge Demeyer, University of Antwerp, Belgium

Vahid Garousi, Queen's University Belfast, United Kingdom

Takashi Kitamura, National Institute of Advanced Industrial Science and Technology (AIST), Japan

Christian Macho, Alpen-Adria Universität Klagenfurt, Austria

Sebastiano Panichella, Zurich University of Applied Sciences, Switzerland

Fiorella Zampetti, University of Sannio, Italy

Andy Zaidman, Delft University of Technology, The Netherlands


Get in touch.

Email us