Feedback-directed random test generation (2007)

by Carlos Pacheco, Shuvendu K. Lahiri, Michael D. Ernst, Thomas Ball
Venue: ICSE
Citations: 188 (17 self)

BibTeX

@INPROCEEDINGS{Pacheco07feedback-directedrandom,
    author = {Carlos Pacheco and Shuvendu K. Lahiri and Michael D. Ernst and Thomas Ball},
    title = {Feedback-directed random test generation},
    booktitle = {ICSE},
    year = {2007},
    publisher = {IEEE Computer Society}
}

Abstract

We present a technique that improves random test generation by incorporating feedback obtained from executing test inputs as they are created. Our technique builds inputs incrementally by randomly selecting a method call to apply and finding arguments from among previously-constructed inputs. As soon as an input is built, it is executed and checked against a set of contracts and filters. The result of the execution determines whether the input is redundant, illegal, contract-violating, or useful for generating more inputs. The technique outputs a test suite consisting of unit tests for the classes under test. Passing tests can be used to ensure that code contracts are preserved across program changes; failing tests (those that violate one or more contracts) point to potential errors that should be corrected. Our experimental results indicate that feedback-directed random test generation can outperform systematic and undirected random test generation in terms of coverage and error detection. On four small but nontrivial data structures (used previously in the literature), our technique achieves block and predicate coverage higher than or equal to that of model checking (with and without abstraction) and undirected random generation. On 14 large, widely-used libraries (comprising 780 KLOC), feedback-directed random test generation finds many previously unknown errors not found by either model checking or undirected random generation.
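The generate-execute-classify loop described in the abstract can be illustrated with a short sketch. The following Java code is a minimal, simplified illustration, not the authors' actual Randoop implementation: the class under test (java.util.ArrayList), the seed values, the string-based redundancy check, and the single equals-reflexivity contract are assumptions chosen here only to make the example self-contained.

import java.lang.reflect.Method;
import java.util.*;

// Minimal sketch of a feedback-directed random generation loop (illustrative, not Randoop).
public class FeedbackDirectedSketch {

    public static void main(String[] args) {
        Random rnd = new Random(0);
        // Pool of previously-constructed values, seeded with a few primitives (an assumption).
        List<Object> pool = new ArrayList<>(List.of(-1, 0, 1, "hi", new ArrayList<Integer>()));
        Set<String> seen = new HashSet<>();            // discards redundant inputs
        List<String> failing = new ArrayList<>();      // inputs that violated a contract

        Method[] methods = ArrayList.class.getMethods();  // example class under test

        for (int i = 0; i < 1000; i++) {
            // 1. Randomly select a method call to apply.
            Method m = methods[rnd.nextInt(methods.length)];
            // 2. Find a receiver and arguments among previously-constructed values.
            Object receiver = pickOfType(pool, m.getDeclaringClass(), rnd);
            if (receiver == null) continue;
            Object[] argv = new Object[m.getParameterCount()];
            for (int p = 0; p < argv.length; p++)
                argv[p] = pickOfType(pool, m.getParameterTypes()[p], rnd);

            // 3. Execute the new input immediately; the outcome is the feedback.
            try {
                Object result = m.invoke(receiver, argv);
                String key = m.getName() + Arrays.toString(argv) + Objects.toString(result);
                // Example contract: equals must be reflexive.
                if (result != null && !result.equals(result)) {
                    failing.add(key);                  // contract-violating: report as failing test
                } else if (seen.add(key) && result != null) {
                    pool.add(result);                  // useful: may feed future inputs
                }                                      // otherwise: redundant, discard
            } catch (Exception e) {
                // Illegal input (e.g. unusable arguments, thrown precondition): discard.
            }
        }
        System.out.println("pool size = " + pool.size() + ", failing inputs = " + failing.size());
    }

    // Returns a pool element assignable to the requested type, or null if none exists.
    static Object pickOfType(List<Object> pool, Class<?> type, Random rnd) {
        List<Object> candidates = new ArrayList<>();
        for (Object o : pool)
            if (wrap(type).isInstance(o)) candidates.add(o);
        return candidates.isEmpty() ? null : candidates.get(rnd.nextInt(candidates.size()));
    }

    // Maps the primitive parameter types used here to wrapper classes so isInstance works.
    static Class<?> wrap(Class<?> t) {
        if (t == int.class) return Integer.class;
        if (t == boolean.class) return Boolean.class;
        if (t == long.class) return Long.class;
        return t;
    }
}

In this sketch, an exception marks the input as illegal, a violated contract marks it as error-revealing, and only non-redundant, non-null results are fed back into the pool from which later inputs are assembled.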

Keyphrases

feedback-directed random test generation, undirected random generation, model checking, error detection, previously-unknown errors, predicate coverage, test input, widely-used library, previously-constructed input, nontrivial data structure, experimental result, random test generation, program change, unit test, block coverage, undirected random test generation, method call, potential error, test suite, code contract
