Instrumented Container Classes - Predicate Coverage

About This Page

Our study "Testing Container Classes: Random or Systematic?" compared random testing with shape abstraction, the systematic testing technique that showed the best results in a previous study. We compared the two techniques using predicate coverage and mutation score on 13 containers taken from various sources. It was of great help to us that the authors of the previous studies shared the code and instructions needed to repeat their experiments; we thank Marcelo Frias, Juan Pablo Galeotti, Corina Pasareanu, and Willem Visser for their help. Based on this experience, we are making available the exact instrumented code we used to measure predicate coverage for all 13 containers in our experiments. We hope that these files will be a good starting point for further research. If you have any additional questions, please feel free to contact us.

Testing Container Classes: Random or Systematic?

By Rohan Sharma, Milos Gligoric, Andrea Arcuri, Gordon Fraser, Darko Marinov.
Fundamental Approaches to Software Engineering
(FASE), pages 262–277, Saarbrucken, Germany, March 2011.

Abstract: Container classes such as lists, sets, or maps are elementary data structures common to many programming languages. Since they are a part of standard libraries, they are important to test, which led to research on advanced testing techniques targeting such containers and research on comparing testing techniques using such containers. However, these techniques have not been thoroughly compared to simpler techniques such as random testing. We present the results of a larger case study in which we compare random testing with shape abstraction, a systematic technique that showed the best results in a previous study. Our experiments show that random testing is about as effective as shape abstraction for testing these containers, which raises the question whether containers are well suited as a benchmark for comparing advanced testing techniques.

Subject Containers Used in Our Evaluation

Container Reference Original Source
AvlTree [1] link
BinomialHeap [2] link
BinTree [2] link
FibHeap [2] link
FibonacciHeap [3] this page
HeapArray [3] this page
IntAVLTreeMap [3] this page
IntRedBlackTree [3] this page
LinkedList [1] authors of [1]
NodeCachingLinkedList [1] authors of [1]
SinglyLinkedList [1] authors of [1]
TreeMap [2] link
TreeSet [1] authors of [1]

[1] J. Galeotti, N. Rosner, C. Lopez Pombo, and M. Frias.
Analysis of invariants for efficient bounded verification.
International Symposium on Software Testing and Analysis
(ISSTA), pages 25–36, 2010.

[2] W. Visser, C. S. Pasareanu, and R. Pelanek.
Test input generation for Java containers using state matching.
International Symposium on Software Testing and Analysis
(ISSTA), pages 37–48, 2006.

[3] R. Sharma, M. Gligoric, V. Jagannath, and D. Marinov.
A comparison of constraint-based and sequence-based generation of complex input data structures.
Workshop on Constraints in Software Testing, Verification and Analysis
(CSTVA), pages 337–342, 2010.

Download Instrumented Source Code

The instrumented versions of all containers that we used to measure predicate coverage are available for download. To instrument the code, we used a semi-automated approach: we automatically found all the branches in the code and then manually added predicates. Our choice of predicates follows a pattern similar to the one used by Visser, Pasareanu, and Pelanek [2].
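To illustrate what this kind of instrumentation can look like, below is a minimal, hypothetical sketch (not taken from the downloadable archive): a binary search tree whose insert method records which branch predicates evaluate to true during execution. The class name, predicate identifiers, and recording mechanism are our own illustration; the actual instrumented containers and their predicate sets are those in the download above.

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch of predicate-coverage instrumentation on a
// binary search tree. Each branch condition is also reported to a
// predicate() probe, which records the identifiers of predicates
// observed to hold at least once during execution.
public class InstrumentedBinTree {
    static class Node {
        int value;
        Node left, right;
        Node(int value) { this.value = value; }
    }

    private Node root;
    // Identifiers of predicates that have evaluated to true so far.
    private final Set<String> observed = new HashSet<>();

    private void predicate(String id, boolean holds) {
        if (holds) observed.add(id);
    }

    public void insert(int x) {
        predicate("rootIsNull", root == null);
        if (root == null) { root = new Node(x); return; }
        Node cur = root;
        while (true) {
            predicate("xEqualsCur", x == cur.value);
            if (x == cur.value) return; // no duplicates
            predicate("xLessThanCur", x < cur.value);
            if (x < cur.value) {
                predicate("leftIsNull", cur.left == null);
                if (cur.left == null) { cur.left = new Node(x); return; }
                cur = cur.left;
            } else {
                predicate("rightIsNull", cur.right == null);
                if (cur.right == null) { cur.right = new Node(x); return; }
                cur = cur.right;
            }
        }
    }

    public Set<String> coveredPredicates() { return observed; }

    public static void main(String[] args) {
        InstrumentedBinTree t = new InstrumentedBinTree();
        t.insert(5);
        t.insert(3);
        t.insert(7);
        // Four predicates held at least once on this run.
        System.out.println(t.coveredPredicates().size()); // prints 4
    }
}
```

A testing tool then measures predicate coverage as the fraction of such predicate identifiers (and their negations, if tracked) that a generated test suite causes to be observed.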


We thank Marcelo Frias, Juan Pablo Galeotti, Corina Pasareanu, and Willem Visser for providing clarifications about their studies and code used in their experiments. We also thank David Schuler and Andreas Zeller for help with Javalanche. Andrea Arcuri is funded by the Norwegian Research Council. Gordon Fraser is funded by the Cluster of Excellence on Multimodal Computing and Interaction at Saarland University, Germany.