CiteSeerX

Generating Tests from UML Specifications (1999)

by J. Offutt, A. Abdurazik

Results 1 - 10 of 111

Korat: Automated testing based on Java predicates

by Chandrasekhar Boyapati, Sarfraz Khurshid, Darko Marinov - In Proc. International Symposium on Software Testing and Analysis (ISSTA), 2002
Abstract - Cited by 331 (53 self)
This paper presents Korat, a novel framework for automated testing of Java programs. Given a formal specification for a method, Korat uses the method precondition to automatically generate all nonisomorphic test cases bounded by a given size. Korat then executes the method on each of these test cases, and uses the method postcondition as a test oracle to check the correctness of each output. To generate test cases for a method, Korat constructs a Java predicate (i.e., a method that returns a boolean) from the method’s precondition. The heart of Korat is a technique for automatic test case generation: given a predicate and a bound on the size of its inputs, Korat generates all nonisomorphic inputs for which the predicate returns true. Korat exhaustively explores the input space of the predicate but does so efficiently by monitoring the predicate’s executions and pruning large portions of the search space. This paper illustrates the use of Korat for testing several data structures, including some from the Java Collections Framework. The experimental results show that it is feasible to generate test cases from Java predicates, even when the search space for inputs is very large. This paper also compares Korat with a testing framework based on declarative specifications. Contrary to our initial expectation, the experiments show that Korat generates test cases much faster than the declarative framework.
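
The generate-and-filter idea in this abstract can be sketched in a few lines. This is a minimal illustration, not Korat's actual algorithm: real Korat monitors field accesses during the predicate's execution to prune the search, while this sketch naively enumerates every candidate. The `rep_ok` predicate and the small integer-list domain are invented for the example.

```python
from itertools import product

def rep_ok(xs):
    # Hypothetical precondition: the input must be strictly sorted.
    return all(a < b for a, b in zip(xs, xs[1:]))

def generate_inputs(bound, domain):
    # Enumerate every candidate input up to 'bound' elements and keep
    # those for which the predicate returns true. (Korat itself avoids
    # this naive enumeration by pruning the search space.)
    valid = []
    for n in range(bound + 1):
        for xs in product(domain, repeat=n):
            if rep_ok(xs):
                valid.append(list(xs))
    return valid

tests = generate_inputs(3, range(3))
```

For `bound=3` over the domain `{0, 1, 2}` this yields the 8 strictly sorted lists of length 0 to 3, each of which would then be passed to the method under test and checked against its postcondition.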

Test Input Generation with Java PathFinder

by Willem Visser, Corina S. Pasareanu, Sarfraz Khurshid
Abstract - Cited by 185 (7 self)
We show how model checking and symbolic execution can be used to generate test inputs to achieve structural coverage of code that manipulates complex data structures. We focus on obtaining branch coverage during unit testing of some of the core methods of the red-black tree implementation in the Java TreeMap library, using the Java PathFinder model checker. Three different test generation techniques are introduced and compared, namely, straight model checking of the code, model checking used in a black-box fashion to generate all inputs up to a fixed size, and lastly, model checking used during white-box test input generation. The main contribution of this work is to show how efficient white-box test input generation can be done for code manipulating complex data, taking into account complex method preconditions.

A UML-Based Approach to System Testing

by Lionel Briand, Yvan Labiche , 2002
Abstract - Cited by 133 (3 self)
System testing is concerned with testing an entire system based on its specifications. In the context of object-oriented, UML development, this means that system test requirements are derived from UML analysis artifacts such as use cases, their corresponding sequence and collaboration diagrams, class diagrams, and possibly Object Constraint Language (OCL) expressions across all these artifacts. Our goal here is to support the derivation of functional system test requirements, which will be transformed into test cases, test oracles, and test drivers once we have detailed design information. In this paper, we describe a methodology in a practical way and illustrate it with an example. In this context, we address testability and automation issues, as the ultimate goal is to fully support system testing activities with high-capability tools.

TestEra: A Novel Framework for Automated Testing of Java Programs

by Darko Marinov, Sarfraz Khurshid , 2001
Abstract - Cited by 115 (32 self)
We present TestEra, a novel framework for automated testing of Java programs. TestEra automatically generates all non-isomorphic test cases, within a given input size, and evaluates correctness criteria. As an enabling technology, TestEra uses Alloy, a first-order relational language, and the Alloy Analyzer. Checking a program with TestEra involves modeling the correctness criteria for the program in Alloy and specifying abstraction and concretization translations between instances of Alloy models and Java data structures. TestEra produces concrete Java inputs as counterexamples to violated correctness criteria. This paper discusses TestEra's analyses of several case studies: methods that manipulate singly linked lists and red-black trees, a naming architecture, and a part of the Alloy Analyzer.

Testing web applications by modeling with FSMs

by Anneliese A. Andrews, Jeff Offutt, Roger T. Alexander - Software and Systems Modeling , 2005
Abstract - Cited by 91 (6 self)
Researchers and practitioners are still trying to find effective ways to model and test Web applications. This paper proposes a system-level testing technique that combines test generation based on finite state machines with constraints. We use a hierarchical approach to model potentially large Web applications. The approach builds hierarchies of Finite State Machines (FSMs) that model subsystems of the Web applications, and then generates test requirements as subsequences of states in the FSMs. These subsequences are then combined and refined to form complete executable tests. The constraints are used to select a reduced set of inputs with the goal of reducing the state space explosion otherwise inherent in using FSMs. The paper illustrates the technique with a running example of a Web-based course student information system and introduces a prototype implementation to support the technique.
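
The FSM-to-test-sequence step described above can be sketched as follows. This is a simplified, flat illustration, not the paper's hierarchical method with input constraints; the course-system states and input labels are invented for the example.

```python
from collections import deque

# Invented example: a fragment of a Web course-information system
# modeled as a flat FSM (state -> list of (input, next_state)).
fsm = {
    "Login": [("valid", "Main"), ("invalid", "Login")],
    "Main": [("view", "Grades"), ("logout", "Login")],
    "Grades": [("back", "Main")],
}

def transition_cover(fsm, start):
    # BFS for the shortest input sequence reaching each state, then
    # extend each such prefix by one transition, yielding one test
    # sequence per transition (simple transition coverage).
    prefix, queue = {start: []}, deque([start])
    while queue:
        s = queue.popleft()
        for inp, t in fsm[s]:
            if t not in prefix:
                prefix[t] = prefix[s] + [inp]
                queue.append(t)
    return [prefix[s] + [inp] for s in fsm for inp, _ in fsm[s]]

suite = transition_cover(fsm, "Login")
```

Each returned sequence exercises one transition of the model; a real suite would then be refined into concrete HTTP requests or UI actions.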

Citation Context

...ation language Z. Luo, Bochmann and Petrenko [20] applied Fujiwara’s method to communicating concurrently executing FSMs. FSMs have also been used to test object-oriented programs [9, 18] and designs [24, 31]. Kung et al. [9, 18] extract the FSM from the code using symbolic execution, while Turner and Robson [31] derive the FSM from the design of classes. Offutt and Abdurazik [24] derive tests from UML st...

Generating test data from state-based specifications

by Jeff Offutt, Shaoying Liu, Aynur Abdurazik, Paul Ammann - The Journal of Software Testing, Verification and Reliability , 2003
Abstract - Cited by 77 (8 self)
Although the majority of software testing in industry is conducted at the system level, most formal research has focused on the unit level. As a result, most system level testing techniques are only described informally. This paper presents formal testing criteria for system level testing that are based on formal specifications of the software. Software testing can only be formalized and quantified when a solid basis for test generation can be defined. Formal specifications represent a significant opportunity for testing because they precisely describe what functions the software is supposed to provide in a form that can be automatically manipulated. This paper presents general criteria for generating test inputs from state-based specifications. The criteria include techniques for generating tests at several levels of abstraction for specifications (transition predicates, transitions, pairs of transitions and sequences of transitions). These techniques provide coverage criteria that are based on the specifications, and are made up of several parts, including test prefixes that contain inputs necessary to put the software into the appropriate state for the test values. The test generation process includes several steps for transforming specifications to tests. These criteria have been applied to a case study to compare their ability to detect seeded faults.
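
One of the criteria levels mentioned above, covering pairs of adjacent transitions, can be sketched directly. The FSM and its labels are invented for the example; this illustrates the criterion itself, not the paper's test generation procedure.

```python
# Invented example FSM: state -> list of (input, next_state).
fsm = {
    "Idle": [("start", "Running")],
    "Running": [("pause", "Paused"), ("stop", "Idle")],
    "Paused": [("resume", "Running")],
}

def transition_pairs(fsm):
    # For every transition ending in state t, pair it with each
    # transition leaving t. Covering all such pairs is the
    # "pairs of transitions" level of the criteria described above.
    pairs = []
    for s, outs in fsm.items():
        for inp1, t in outs:
            for inp2, u in fsm.get(t, []):
                pairs.append(((s, inp1, t), (t, inp2, u)))
    return pairs

pairs = transition_pairs(fsm)
```

A test satisfying this criterion would, for each pair, include a prefix reaching the first transition's source state followed by the two inputs in order.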

Citation Context

...ed to a variety of specification languages that use a state-based representation. They have been applied to Software Cost Reduction (SCR) [1,2], CoRE [3], Unified Modelling Language (UML) Statecharts [4,5] and the Structured Object-oriented Formal Language (SOFL) [6,7]. The test data generation model includes techniques for generating tests at several levels of detail, with views moving from clauses in ...

Test Adequacy Criteria for UML Design Models

by Anneliese Andrews, Robert France, Sudipto Ghosh, Gerald Craig - Journal of Software Testing, Verification and Reliability , 2003
Abstract - Cited by 58 (11 self)
Systematic design testing, in which executable models of behaviours are tested using inputs that exercise scenarios, can help reveal flaws in designs before they are implemented in code. In this paper a technique for testing executable forms of UML (Unified Modelling Language) models is described and test adequacy criteria based on UML model elements are proposed. The criteria can be used to define test objectives for UML designs. The UML design test criteria are based on the same premise underlying code test criteria: coverage of relevant building blocks of models is highly likely to uncover faults. The test adequacy criteria proposed in this paper are based on building blocks for UML class and interaction diagrams. Class diagram criteria are used to determine the object configurations on which tests are run, while interaction diagram criteria are used to determine the sequences of messages that should be tested.

Citation Context

...g whereas the proposed approach is targeted toward integration testing related to interactions and behaviours of objects. Moreover, this approach does not evaluate UML artifacts. Offutt and Abdurazik [18] developed a technique for generating test cases for code (rather than designs) from a restricted form of UML state diagrams. The state diagrams used in their approach utilize only enabled transitions...

Automatic Test Generation: A Use Case Driven Approach

by Clémentine Nebut, Franck Fleurey, Yves Le Traon, Jean-marc Jézéquel - IEEE Transactions on Software Engineering , 2006
Abstract - Cited by 51 (2 self)
Use cases are believed to be a good basis for system testing. Yet, to automate the test generation process, there is a large gap to bridge between high-level use cases and concrete test cases. We propose a new approach for automating the generation of system test scenarios in the context of object-oriented embedded software, taking into account traceability problems between high-level views and concrete test case execution. Starting from a formalization of the requirements based on use cases extended with contracts, we automatically build a transition system from which we synthesize test cases. Our objective is to cover the system in terms of statement coverage with those generated tests: an empirical evaluation of our approach is given based on this objective and several case studies. We briefly discuss the experimental deployment of our approach in the field at Thalès Airborne Systems. Index Terms: use case, test generation, scenarios, contracts, UML.

Assessing and Improving State-Based Class Testing: A Series of Experiments

by Lionel C. Briand, Massimiliano Di Penta, Yvan Labiche , 2004
Abstract - Cited by 37 (5 self)
This paper describes an empirical investigation of the cost effectiveness of well-known state-based testing techniques for classes or clusters of classes that exhibit a state-dependent behavior. This is practically relevant as many object-oriented methodologies recommend modeling such components with statecharts which can then be used as a basis for testing. Our results, based on a series of three experiments, show that in most cases state-based techniques are not likely to be sufficient by themselves to catch most of the faults present in the code. Though useful, they need to be complemented with black-box, functional testing. We focus here on a particular technique, Category Partition, as this is the most commonly used and referenced black-box, functional testing technique. Two different oracle strategies have been applied for checking the success of test cases. One is a very precise oracle checking the concrete state of objects whereas the other one is based on the notion of state invariant (abstract states). Results show that there is a significant difference between them, both in terms of fault detection and cost. This is therefore an important choice to make that should be driven by the characteristics of the component to be tested, such as its criticality, complexity, and test budget.

An evaluation of exhaustive testing for data structures

by Darko Marinov, Alexandr Andoni, Dumitru Daniliuc, Sarfraz Khurshid, Martin Rinard - MIT Computer Science and Artificial Intelligence Laboratory Report MIT-LCS-TR-921, 2003
Abstract - Cited by 34 (14 self)
We present an evaluation of exhaustive testing of linked data structures with sophisticated structural constraints. Specifically, we use the Korat testing framework to systematically enumerate all legal inputs within a certain size. We then evaluate the quality of this test suite according to several measurements: ability to detect injected faults in the original correct implementations, code coverage, and specification coverage. Our results indicate that it is feasible to use exhaustive testing to obtain, within a reasonable amount of time, a high-quality test suite that can detect almost all faults and achieve complete code and specification coverage. Moreover, our results show that our exhaustive tests are of higher quality than randomly selected test suites that contain the same number of inputs selected from a larger potential input set. We conclude that exhaustive testing is a practical and effective testing methodology for sophisticated linked data structures.

Developed at and hosted by The College of Information Sciences and Technology

© 2007-2019 The Pennsylvania State University