8th Workshop on
Validation, Analysis and Evolution of Software Tests

March 4, 2025 | co-located with SANER 2025, Montréal, Canada

Call for Papers

Aims, scope and topics of interest.

Software projects accumulate large sets of test cases that encode valuable expert knowledge about the system under test, often representing many person-years of effort. Over time, the reliability of the tests decreases, and they become difficult to understand and maintain. Extra effort is required to repair broken tests and to adapt test suites and models to evolving software systems.

The International Workshop on Validation, Analysis and Evolution of Software Tests (VST) is a unique event bringing together academics, industrial researchers, and practitioners to exchange experiences, solutions, and new ideas on applying methods, techniques, and tools from software analysis, evolution, and re-engineering to advance the state of the art in test development and maintenance.

The workshop invites high-quality submissions related, but not limited, to:

 ●  Analysis and validation of test code, test data, and test models

 ●  Test execution monitoring, results analysis, and visualization

 ●  Fault detection and localization

 ●  Co-evolution of tests and production code

 ●  Test case maintenance and automated repair

 ●  Clone detection for test code and data

 ●  Test generation and amplification

 ●  Test prioritization, minimization and simplification

 ●  Application of generative AI and LLMs in testing

 ●  Combinations of the topics above


  Download Call for Papers (txt)


Previous editions of this workshop: VST 2024, VST 2023, VST 2022, VST 2021, VST 2020, VST 2018, and VST 2016.

Important Dates

All deadlines are Anywhere on Earth (AoE).

Abstract submission deadline November 22, 2024 AoE

Paper submission deadline November 29, 2024 AoE

Notifications December 20, 2024

Camera Ready (extended) January 19, 2025

Workshop March 4, 2025

Submission

Instructions and submission site.

We encourage submissions on the topics mentioned above, with a limit of 8 pages in IEEE format. In addition, we also welcome position papers and tool demo papers of two to four pages.

Papers will be reviewed by at least three program committee members following a full double-blind review process. Paper selection is based on scientific originality, novelty, and the potential to generate interesting discussions. Accepted papers will be published in the IEEE Digital Library along with the SANER proceedings.

Submission Instructions

  • Papers must not exceed the page limit of 8 pages (including all text, references, appendices, and figures); position papers and tool demos must be 2-4 pages

  • Papers must conform to the IEEE formatting guidelines for conference proceedings

  • Submissions should be prepared for a full double-blind review process (author names and affiliations should be omitted, and references to the authors' own work should be in the third person)

  • Papers must comply with the IEEE policy on authorship. Papers must be original work that has neither been published elsewhere nor is under review for another publication

  • Submissions are required in PDF format via EasyChair at https://easychair.org/conferences/?conf=vst2025

  Submit your paper at: VST 2025 EasyChair submission site

Program

Location and schedule.

Montréal, Québec, Canada
Timezone: EST (UTC-5)

Venue: Polytechnique Montréal


KEYNOTE
How Can Large Language Models Improve Software Testing: Lessons, Challenges, and Opportunities.

Abstract - Large Language Models (LLMs) have demonstrated remarkable capabilities in addressing a range of software engineering tasks. However, their integration into software testing presents unique challenges, primarily due to LLMs' limited understanding of domain-specific knowledge. In this presentation, I will discuss key lessons and challenges encountered in LLM-based testing, such as the critical need to eliminate noisy data and the benefits of combining LLMs with program analysis to enhance context capture. Additionally, I will explore how LLMs are transforming various software testing methodologies, including test case generation, test updates, and test migrations. This talk aims to explore the limitations and possibilities of employing LLMs in software testing, providing insights into future opportunities in this field.

Bio - Xin Xia is a Chief Expert in Software Engineering Application Technologies at Huawei Technologies, China. His research spans AI and SE, mining software repositories, and empirical software engineering. Xin has published over 340 papers and has been honored with 15 best or distinguished paper awards, including eight ACM SIGSOFT Distinguished Paper Awards for his work presented at ASE (2018-2021), ICPC (2018, 2020), ICSE (2024), and MSR (2024). He also received the ACM SIGSOFT Early Career Researcher Award in 2022. Xin has played a significant role in the software engineering community, serving as a steering committee member for conferences such as MSR, SANER, Internetware, and PROMISE. He has also been involved in organizing numerous SE conferences, including ICSE (2023-2025) and ASE (2020-2021). He serves as Program Co-Chair for ICSME 2026, SANER 2023, TechDebt 2023, and PROMISE 2021, and as General Co-Chair for Internetware 2023 and FORGE (2024-2025). Additionally, Xin is an Associate Editor for several SE journals, including TOSEM, EMSE, ASEJ, and JSEP.

10:30-11:00 - Coffee break
12:30-14:00 - Lunch Break
15:30-16:00 - Coffee break

Organization

Chairs and program committee.

Program Committee

Cyrille Artho, KTH Royal Institute of Technology, Sweden

Wesley K. G. Assunção, North Carolina State University, United States

Peter Backeman, Mälardalen University, Sweden

Carolin Brandt, Delft University of Technology, The Netherlands

Serge Demeyer, University of Antwerp, Belgium

Michael Felderer, German Aerospace Center (DLR), Germany

Gordon Fraser, University of Passau, Germany

Alessio Gambi, Austrian Institute of Technology, Austria

Angelo Gargantini, University of Bergamo, Italy

Malte Lochau, University of Siegen, Germany

Mirosław Ochodek, Poznan University of Technology, Poland

Dietmar Pfahl, University of Tartu, Estonia

Josef Pichler, University of Applied Sciences Upper Austria, Austria

Contact

Get in touch.

Email us