Workshop on State-space Exploration for Automated Testing (SSEAT 2008)
July 24, 2008 (afternoon only)
One of the main conclusions of the workshop was that we should attempt to create a benchmark set for comparing SSEAT tools. We created a mailing list for discussing potential benchmarks.
Testing is the most widely used approach for validating software, but it is labor-intensive and error-prone. Automated testing has the potential to make testing more cost-effective. A number of recent research approaches to automated testing use state-space exploration techniques, including explicit-state model checking, symbolic execution, search-based techniques, heuristic-guided exploration, and combinations of these. These approaches can be used in various scenarios such as model checking, model-based testing, and code-based test-case generation, and they are implemented in several tools used in both industry and academia. To improve performance, the tools incorporate different methods for state representation, state comparison, function summaries, etc. Some tools work only with the code under test, while others can exploit additional information such as state abstraction, state comparison, existing tests, and oracles.
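To make the notion of state-space exploration for test generation concrete, here is a minimal illustrative sketch in Python. It is not taken from any particular SSEAT tool, and the names explore, successors, and is_error are hypothetical; it simply enumerates reachable states breadth-first, compares states by hashing to avoid revisiting them, and returns the action sequence that reaches an error state, which can then be replayed as a test.

    # Minimal illustrative sketch (hypothetical, not from any specific tool):
    # explicit-state exploration of a small transition system, recording the
    # action trace that reaches an error state so it can be replayed as a test.
    from collections import deque

    def explore(initial_state, successors, is_error):
        """Breadth-first search over reachable states.

        successors(state) yields (action, next_state) pairs;
        is_error(state) flags property violations.
        Returns the action sequence (a test) leading to the first error
        found, or None if the explored state space contains no error.
        """
        visited = {initial_state}            # state comparison via hashing
        queue = deque([(initial_state, [])])
        while queue:
            state, trace = queue.popleft()
            if is_error(state):
                return trace                 # replayable test: the actions taken
            for action, nxt in successors(state):
                if nxt not in visited:       # prune previously seen states
                    visited.add(nxt)
                    queue.append((nxt, trace + [action]))
        return None

    # Example: a counter that must never exceed 2.
    if __name__ == "__main__":
        succ = lambda n: [("inc", n + 1), ("reset", 0)]
        print(explore(0, succ, lambda n: n > 2))   # -> ['inc', 'inc', 'inc']

Real tools replace the explicit visited set with more compact state representations, abstractions, or symbolic constraints, which is precisely where the performance and scalability issues discussed below arise.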
While state-space exploration for automated testing has already shown promising results, a number of challenges remain, including how to improve the performance of tools, how to scale to larger code bases, how to achieve wider adoption in industry, how to handle more advanced language features, and how to reduce false alarms. Another important issue is how to compare various tools and techniques, since they are typically implemented on different platforms and evaluated on code chosen in an ad hoc manner.
The goal of this workshop is to bring together researchers from both industry and academia to discuss and evaluate the latest successes and remaining challenges in the use of state-space exploration for automated testing. One important aspect of the workshop is to discuss techniques that were tried and did not work well in certain contexts. Another important aspect is to try to identify a set of programs that can be used for comparing various tools and techniques.
The topics of this workshop include, but are not limited to, techniques and tools that automate testing using explicit-state model checking, symbolic execution, search-based techniques, heuristic-guided exploration, and combinations of these.
This will be a half-day workshop aimed at discussing remaining challenges in state-space exploration for automated testing and at identifying a set of programs useful for comparing various techniques and tools. There will be a small number of short presentations and explicitly allocated time for discussion sessions.
The organizers invite short position papers (up to 2 pages) that should briefly explain the work being done on the relevant topic, describe remaining challenges, and/or present potential programs for comparison. There will be no special proceedings for the papers, but we will update this web page with the abstracts of the papers before the workshop and a brief summary of the discussion after the workshop. The summary will also appear in the ISSTA proceedings.
Page design based on original work by Ben Liblit for WODA 2008.