This is an intensive, week-long course at the Summer School RIO 2011, XVIII Escuela de Verano de Ciencias Informáticas - Río 2011, held in Rio Cuarto, Argentina on February 14-19, 2011.
Instructor: Darko Marinov
Meetings: 6pm-8:30pm daily for a week at the National University of Rio Cuarto
Advance description: This course will review techniques and tools for automated generation and repair of tests, with an emphasis on object-oriented unit tests. Such a test consists of a sequence of method calls, where each call takes a list of parameters. These sequences and parameters can be written manually or generated automatically; automated techniques include random generation, symbolic execution, and bounded-exhaustive exploration. Developers are encouraged to execute tests frequently while changing their software, to check whether the changes cause any tests to fail. Some failures are caused by bugs in the code changes, and some by old tests that do not reflect the changes. In the latter case, developers can use an automated tool that suggests repairs which make the failing tests pass. We will review in more detail how these techniques work in several tools for test generation (including Pex, Randoop, and UDITA) and one tool for test repair (ReAssert). Time permitting, we will also cover some basics of the Java PathFinder tool (JPF).
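For readers who have not written such tests before, here is a minimal sketch of an object-oriented unit test in JUnit 4. The `BoundedStack` class and its methods are hypothetical, included only so the example is self-contained; the test itself is the kind of call sequence a developer could write by hand or a tool could generate automatically.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical class under test, included only to keep the example self-contained.
class BoundedStack {
    private final int[] elems;
    private int count;

    BoundedStack(int capacity) { elems = new int[capacity]; }

    void push(int x) { elems[count++] = x; }

    int pop() { return elems[--count]; }

    int size() { return count; }
}

// An object-oriented unit test is a sequence of method calls with concrete
// parameters, followed by assertions on the results.
public class BoundedStackTest {

    @Test
    public void pushThenPopReturnsLastElement() {
        BoundedStack stack = new BoundedStack(10); // the call sequence starts here
        stack.push(1);
        stack.push(2);
        assertEquals(2, stack.pop());  // the last element pushed comes out first
        assertEquals(1, stack.size()); // one element remains on the stack
    }
}
```

If a later change to the class under test made these assertions fail even though the code is correct, a repair tool in the spirit of ReAssert could suggest updating the expected values so that the test passes again.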
Recommended knowledge: familiarity with an object-oriented language (e.g., Java or C#); knowledge of a unit testing framework (e.g., JUnit or NUnit) and an IDE (e.g., Eclipse or Visual Studio) is helpful but not necessary.
Below is the list of papers and slides to be used in the lectures, plus some additional websites and papers for those who want to explore these topics in more depth. For even more information, see the Illinois Fall 2010 course CS527, on which this intensive C5 course is based.
| Topic | Reading | Slides | Lab |
|---|---|---|---|
| Introduction | (optional) How to Read an Engineering Research Paper, William G. Griswold<br>(optional) Writing Good Software Engineering Research Papers, Mary Shaw (ICSE 2003)<br>Website: book mentioned in Lab1, Chapter 1 | Introduction | Lab1 |
| Randoop | Feedback-Directed Random Test Generation, Carlos Pacheco, Shuvendu K. Lahiri, Michael D. Ernst, and Thomas Ball (ICSE 2007)<br>(optional) Finding Errors in .NET with Feedback-Directed Random Testing, Carlos Pacheco, Shuvendu K. Lahiri, and Thomas Ball (ISSTA 2008)<br>(a simplified sketch of the feedback-directed idea appears after this table) | Randoop | |
| Pex | Pex – White Box Test Generation for .NET, Nikolai Tillmann and Jonathan de Halleux (TAP 2008)<br>(optional) Moles: Tool-Assisted Environment Isolation with Closures, Jonathan de Halleux and Nikolai Tillmann (TOOLS 2010)<br>Websites: PexForFun, TeachPex | Pex | Puzzle |
| UDITA | Test Generation through Programming in UDITA, Milos Gligoric, Tihomir Gvero, Vilas Jagannath, Sarfraz Khurshid, Viktor Kuncak, and Darko Marinov (ICSE 2010)<br>(optional) Automated Testing of Refactoring Engines, Brett Daniel, Danny Dig, Kely Garcia, and Darko Marinov (ESEC/FSE 2007)<br>Website: ASTGen | UDITA | |
| ReAssert | ReAssert: Suggesting Repairs for Broken Unit Tests, Brett Daniel, Vilas Jagannath, Danny Dig, and Darko Marinov (ASE 2009)<br>(optional) On Test Repair Using Symbolic Execution, Brett Daniel, Tihomir Gvero, and Darko Marinov (ISSTA 2010) | ReAssert | |
| JPF | Model Checking Programs, Willem Visser, Klaus Havelund, Guillaume Brat, SeungJoon Park, and Flavio Lerda (J-ASE, vol. 10, no. 2, April 2003). Note: this is a journal paper, so feel free to skip or skim some sections (3.2, 3.3, and 4). | Selected full slides | |
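As a rough illustration of the feedback-directed random generation idea behind the Randoop readings above, the sketch below grows call sequences at random, executes them immediately, and keeps only those that complete normally as seeds for longer sequences. It targets `java.util.ArrayList` with a few hard-coded calls; the real tool works on arbitrary classes via reflection and checks richer contracts, so this is only a simplified approximation, not Randoop's actual algorithm or API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import java.util.function.Consumer;

// A highly simplified sketch of feedback-directed random test generation:
// call sequences are grown at random, executed immediately, and only the
// sequences that execute normally are kept as seeds for longer ones.
public class RandomSequenceSketch {

    public static void main(String[] args) {
        Random rnd = new Random(42);
        List<List<Consumer<List<Integer>>>> pool = new ArrayList<>();
        pool.add(new ArrayList<>()); // start from the empty sequence

        for (int i = 0; i < 200; i++) {
            // Pick an existing sequence and extend it with one random call.
            List<Consumer<List<Integer>>> extended =
                    new ArrayList<>(pool.get(rnd.nextInt(pool.size())));
            extended.add(randomCall(rnd));

            // Execute the extended sequence; the outcome (the "feedback")
            // decides whether it is kept for further extension.
            List<Integer> receiver = new ArrayList<>();
            try {
                extended.forEach(call -> call.accept(receiver));
                pool.add(extended); // executed normally: keep it
            } catch (RuntimeException e) {
                // The sequence threw (e.g., remove on an empty list): discard it.
            }
        }

        System.out.println("Kept " + pool.size() + " executable call sequences.");
    }

    // Choose one of a few hard-coded calls on the list under test.
    static Consumer<List<Integer>> randomCall(Random rnd) {
        int value = rnd.nextInt(10);
        switch (rnd.nextInt(3)) {
            case 0:  return list -> list.add(value);
            case 1:  return list -> list.remove(0); // throws if the list is empty
            default: return list -> list.clear();
        }
    }
}
```

A real feedback-directed generator would additionally check API contracts (e.g., equals/hashCode properties) on each executed sequence and turn contract-violating sequences into failing tests; that part is omitted here for brevity.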