Supported by
Microsoft Research

Theme

Testing is the most widely used approach for validating software, but it is labor-intensive and error-prone. Automated testing has the potential to make testing more cost-effective. A number of recent research approaches to automated testing use state-space exploration techniques, including explicit-state model checking, symbolic execution, search-based techniques, heuristic-guided exploration, and combinations of these. These approaches can be applied in scenarios such as model checking, model-based testing, and code-based test-case generation, and they are implemented in several tools used in both industry and academia. To improve performance, the tools incorporate different methods for state representation, state comparison, function summaries, and so on. Some tools work only with the code under test, while others can exploit additional information such as state abstractions, state comparisons, existing tests, and oracles.
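
As a simple, hypothetical illustration (not taken from any existing benchmark) of the kind of problem these tools target, consider the following Java method. Its nested branch is guarded by a condition that random inputs almost never satisfy, whereas a symbolic-execution-based tool can collect the branch condition as a path constraint and ask a solver for a covering input.

    // Hypothetical example: the nested branch is reachable only when the
    // two inputs satisfy a narrow arithmetic relation.
    public class Discount {
        // Returns the discounted price in cents.
        public static int apply(int priceInCents, int couponCode) {
            if (couponCode > 0 && priceInCents > 100) {
                if (priceInCents - couponCode == 4242) {
                    // Hard-to-reach statement: random input pairs rarely satisfy
                    // priceInCents - couponCode == 4242, but a constraint solver
                    // finds a solution (e.g., priceInCents = 4343, couponCode = 101).
                    return 0; // special promotion: the item is free
                }
                return priceInCents - couponCode;
            }
            return priceInCents;
        }
    }

A symbolic-execution-based tool treats priceInCents and couponCode as symbolic values, accumulates the conditions along each explored path, and solves them to generate one test per path; explicit-state or search-based tools instead need heuristics to steer exploration toward the nested branch.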

While state-space exploration for automated testing has already shown promising results, a number of challenges remain, including how to improve the performance of tools, how to scale to larger code bases, how to achieve wider adoption in industry, how to handle more advanced language features, and how to reduce false alarms. Another important issue is how to compare various tools and techniques, since they are typically implemented on different platforms and evaluated on code chosen in an ad hoc manner.

Goal

The goal of this workshop is to bring together researchers from both industry and academia to identify a set of programs that can be used to compare various tools and techniques that perform state-space exploration for automated testing (SSEAT). The eventual aim is to build a benchmark suite for comparing SSEAT tools, as discussed at SSEAT 2008.

Slides

Tao Xie prepared slides on Benchmarking for Object-Oriented Unit Test Generation, which we used to start our discussion about benchmarks.

Topics

The topics of this workshop include, but are not limited to, techniques and tools that automate testing using the state-space exploration approaches described above, such as explicit-state model checking, symbolic execution, search-based techniques, heuristic-guided exploration, and combinations of these.

Format

This will be a half-day workshop aimed at identifying a set of programs for comparing various techniques and tools. There will be a small number of short presentations about candidate programs and time explicitly allocated for discussion sessions.

Submissions

The organizers invite proposals for presentations at the workshop and submissions of potential programs for comparison. We are interested in a variety of example programs. They should contain some statements or states that are hard to reach; such statements may be marked, and interesting states should be characterized in some way as well.
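
For concreteness, the following hypothetical Java sketch shows one way a submitted program might mark a hard-to-reach statement; the Benchmark.target marker method and its name are illustrative assumptions, not a prescribed format.

    // Hypothetical no-op marker used to flag hard-to-reach statements; a tool
    // reports whether its generated test sequences reach each labeled target.
    final class Benchmark {
        static void target(String label) { /* record that the labeled target was reached */ }
    }

    public class BoundedStack {
        private final int[] data = new int[8];
        private int size = 0;

        public void push(int x) {
            if (size == data.length) {
                // Marked hard-to-reach statement: requires eight prior pushes
                // with no intervening pops before this call.
                Benchmark.target("stack-overflow");
                throw new IllegalStateException("stack overflow");
            }
            data[size++] = x;
        }

        public int pop() {
            if (size == 0) {
                throw new IllegalStateException("stack underflow");
            }
            return data[--size];
        }
    }

Here a tool would be judged on whether the call sequences it generates drive the stack into the overflow state and reach the marked statement.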

Please email your proposals to sseat2009@cs.illinois.edu. The proposal should identify who you are, provide a brief description of your work on or interest in the topics of the workshop, and describe at least one program that you propose to be discussed for inclusion in the benchmark suite. Presentations at the workshop will be by invitation only, decided based on the proposals. We expect up to 15 participants.

You are also welcome to discuss potential benchmark programs on the mailing list. The list is currently dormant, but we expect it to become active in connection with this workshop.

Dates

Submissions: open (email the organizers if you have questions)
Workshop: Thursday, July 23, 2009

Organizers


Page design based on original work by Ben Liblit for WODA 2008.