Results 11 - 20 of 61
Using the case-based ranking methodology for test case prioritization
- In Proceedings of the 22nd IEEE International Conference on Software Maintenance, 2006
"... Abstract The ..."
(Show Context)
How Well Does Test Case Prioritization Integrate with Statistical Fault Localization?
- INFORMATION AND SOFTWARE TECHNOLOGY, 2012
"... Context: Effective test case prioritization shortens the time to detect failures, and yet the use of fewer test cases may compromise the effectiveness of subsequent fault localization.
Objective: The paper aims at finding whether several previously identified effectiveness factors of test case prior ..."
Cited by 7 (4 self)
Context: Effective test case prioritization shortens the time to detect failures, and yet the use of fewer test cases may compromise the effectiveness of subsequent fault localization.
Objective: The paper aims at finding whether several previously identified effectiveness factors of test case prioritization techniques, namely strategy, coverage granularity, and time cost, have observable consequences on the effectiveness of statistical fault localization techniques.
Method: This paper uses a controlled experiment to examine these factors. The experiment includes 16 test case prioritization techniques and 4 statistical fault localization techniques, using the Siemens suite of programs as well as grep, gzip, sed, and flex as subjects. It measures the percentage of code that must be examined to locate faults in these benchmark subjects after a given number of failures have been observed.
Result: We find that if testers have a budgetary concern over the number of test cases for regression testing, test case prioritization can save up to 40% of test case executions for commit builds without significantly affecting the effectiveness of fault localization. However, a statistical fault localization technique that uses only a small fraction of a prioritized test suite is found to be seriously compromised in effectiveness. Despite some variations, including more failed test cases generally improves fault localization effectiveness during the integration process. Interestingly, during the variation periods, adding more failed test cases actually degrades fault localization effectiveness. In terms of strategies, Random is found to be the most effective, followed by the ART and Additional strategies, while the Total strategy is the least effective. We do not observe sufficient empirical evidence to conclude that using different coverage granularity levels has different overall effects.
Conclusion: The paper empirically identifies that the strategy and time cost of test case prioritization techniques are key factors affecting the effectiveness of statistical fault localization, while coverage granularity is not a significant factor. It also identifies a mid-range deterioration in fault localization effectiveness as more test cases are added to facilitate debugging.
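The Total and Additional strategies compared in this study are, at their core, simple greedy orderings over coverage data. The following sketch illustrates both in Python; the test names and statement-coverage sets are hypothetical stand-ins, not data from the experiment.

```python
# Sketch of the "Total" and "Additional" greedy strategies over
# statement coverage. The coverage data below are hypothetical.

def total_prioritization(coverage):
    """Order tests by the total number of statements each covers."""
    return sorted(coverage, key=lambda t: len(coverage[t]), reverse=True)

def additional_prioritization(coverage):
    """Repeatedly pick the test covering the most not-yet-covered
    statements; reset the covered set when no test adds coverage."""
    remaining = dict(coverage)
    covered, order = set(), []
    while remaining:
        best = max(remaining, key=lambda t: len(remaining[t] - covered))
        if covered and not (remaining[best] - covered):
            covered = set()  # classic "reset" step of the Additional strategy
            continue
        order.append(best)
        covered |= remaining.pop(best)
    return order

cov = {"t1": {1, 2, 3}, "t2": {3, 4}, "t3": {5}, "t4": {1, 2, 3, 4}}
print(total_prioritization(cov))       # ['t4', 't1', 't2', 't3']
print(additional_prioritization(cov))  # ['t4', 't3', 't1', 't2']
```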
Taking advantage of service selection: a study on the testing of location-based web services through test case prioritization
- Proceedings of ICWS ’10, 2010
"... Abstract—Dynamic service compositions pose new verification and validation challenges such as uncertainty in service membership. Moreover, applying an entire test suite to loosely coupled services one after another in the same composition can be too rigid and restrictive. In this paper, we investiga ..."
Cited by 7 (5 self)
Dynamic service compositions pose new verification and validation challenges such as uncertainty in service membership. Moreover, applying an entire test suite to loosely coupled services one after another in the same composition can be too rigid and restrictive. In this paper, we investigate the impact of service selection on service-centric testing techniques. Specifically, we propose to incorporate service selection in executing a test suite and develop a suite of metrics and test case prioritization techniques for the testing of location-aware services. A case study shows that a test case prioritization technique that incorporates service selection can outperform its traditional counterpart: the impact of service selection is noticeable on software engineering techniques in general and on test case prioritization techniques in particular. Furthermore, we find that points-of-interest-aware techniques can be significantly more effective than input-guided techniques in terms of the number of invocations required to expose the first failure of a service composition.
Keywords: test case prioritization, location-based web service, service-centric testing, service selection
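As a rough, hedged illustration of the contrast between points-of-interest-aware and input-guided ordering (the paper's actual metrics are more elaborate and are not reproduced here), one could rank each test case, modeled as a route of location fixes, by the number of distinct POIs it touches. All names, locations, and routes below are hypothetical.

```python
# Toy POI-aware prioritization: rank each test case (a route of
# location fixes) by the number of distinct points of interest it
# touches. The location-to-POI map and the routes are invented.

def poi_aware_order(tests, poi_of):
    """Order test case names by distinct POIs visited, descending."""
    def distinct_pois(route):
        return len({poi_of[loc] for loc in route if loc in poi_of})
    return sorted(tests, key=lambda name: distinct_pois(tests[name]),
                  reverse=True)

poi_of = {"l1": "museum", "l2": "museum", "l3": "cafe", "l4": "station"}
tests = {
    "tA": ["l1", "l2"],        # two fixes, but a single POI (museum)
    "tB": ["l1", "l3", "l4"],  # three distinct POIs
    "tC": ["l3"],              # one POI
}
print(poi_aware_order(tests, poi_of))  # ['tB', 'tA', 'tC']
```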
Optimizing Cost and Quality by Integrating Inspection and Test Processes
"... Inspections and testing are two of the most commonly performed software quality assurance processes today. Typically, these processes are applied in isolation, which, however, fails to exploit the benefits of systematically combining and integrating them. Expected benefits of such process integratio ..."
Abstract
-
Cited by 7 (6 self)
- Add to MetaCart
(Show Context)
Inspections and testing are two of the most commonly performed software quality assurance processes today. Typically, these processes are applied in isolation, which fails to exploit the benefits of systematically combining and integrating them. Expected benefits of such process integration are higher defect detection rates or reduced quality assurance effort. Moreover, when conducting testing without any prior information regarding the system’s quality, it is often unclear which parts or which defect types should be prioritized. Existing approaches do not explicitly use information from inspections in a systematic way to focus testing processes. In this article, we present an integrated two-stage approach that routes inspection data to test processes in order to prioritize code classes and defect types. While an initial version of the approach focused on prioritizing code classes, this article focuses on the prioritization of defect types for testing. Results from a case study, in which the approach was applied on the code level, show that the defect types that actually showed up most often during the test process could be prioritized beforehand. In addition, an overview of related work and an outlook on future research directions are given.
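As a minimal sketch of the idea of routing inspection data to the test process (not the authors' actual model), one can tally how often each defect type surfaced during inspections and prioritize defect types for testing by those counts; the defect taxonomy and findings below are invented placeholders.

```python
# Rank defect types for testing by how often inspections revealed them.
# Taxonomy and findings are made-up placeholders, not the paper's data.

from collections import Counter

inspection_findings = [
    "interface", "logic", "logic", "data", "logic", "interface", "checking",
]

def prioritized_defect_types(findings):
    """Defect types ordered by inspection frequency, most frequent first."""
    return [dtype for dtype, _ in Counter(findings).most_common()]

print(prioritized_defect_types(inspection_findings))
# ['logic', 'interface', 'data', 'checking']
```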
A General Noise-Reduction Framework for Fault Localization of Java Programs
- INFORMATION AND SOFTWARE TECHNOLOGY, 2013
"... Context: Existing fault-localization techniques combine various program features and similarity coefficients with the aim of precisely assessing the similarities among the dynamic spectra of these program features to predict the locations of faults. Many such techniques estimate the probability of a ..."
Cited by 5 (1 self)
Context: Existing fault-localization techniques combine various program features and similarity coefficients with the aim of precisely assessing the similarities among the dynamic spectra of these program features to predict the locations of faults. Many such techniques estimate the probability of a particular program feature causing the observed failures. They often ignore the noise introduced by other features on the same set of executions that may lead to the observed failures. It is unclear to what extent such noise can be alleviated.
Objective: This paper aims to develop a framework that reduces the noise in fault-failure correlation measurements.
Method: We develop a fault-localization framework that uses chains of key basic blocks as program features and a noise-reduction methodology to improve on the similarity coefficients of fault-localization techniques. We evaluate our framework on five base techniques using five real-life, medium-sized programs in different application domains. We also conduct a case study on subjects with multiple faults.
Results: The experimental results show that the synthesized techniques are more effective than their base techniques by almost 10%. Moreover, the runtime overhead of collecting the required feature values is practical. The case study also shows that the synthesized techniques work well on subjects with multiple faults.
Conclusion: We conclude that the proposed framework has a significant and positive effect on improving the effectiveness of the corresponding base techniques.
Putting Your Best Tests Forward
- IEEE Software, 2003
"... helps software accommodate new technologies and user needs but can also affect its quality. So, when software engineers modify software, they regression test it, rerunning existing tests to verify that existing functionality hasn’t been harmed and creating new tests to validate new functionality. Re ..."
Cited by 5 (0 self)
… helps software accommodate new technologies and user needs but can also affect its quality. So, when software engineers modify software, they regression test it, rerunning existing tests to verify that existing functionality hasn’t been harmed and creating new tests to validate new functionality. Regression testing is one of the most widely used testing techniques [1] but can be expensive. For example, one company we work with has a regression test suite for a system of only 20,000 lines of code that takes seven weeks and costs several hundred thousand dollars to execute. A second company runs their regression …
A comparison of test case prioritization criteria for software product lines
- In IEEE International Conference on Software Testing, Verification, and Validation, 2014
"... Abstract—Software Product Line (SPL) testing is challenging due to the potentially huge number of derivable products. To alleviate this problem, numerous contributions have been proposed to reduce the number of products to be tested while still having a good coverage. However, not much attention has ..."
Cited by 5 (3 self)
Software Product Line (SPL) testing is challenging due to the potentially huge number of derivable products. To alleviate this problem, numerous contributions have been proposed to reduce the number of products to be tested while still achieving good coverage. However, not much attention has been paid to the order in which the products are tested. Test case prioritization techniques reorder test cases to meet a certain performance goal. For instance, testers may wish to order their test cases so as to detect faults as soon as possible, which would translate into faster feedback and earlier fault correction. In this paper, we explore the applicability of test case prioritization techniques to SPL testing. We propose five different prioritization criteria based on common metrics of feature models, and we compare their effectiveness in increasing the rate of early fault detection, i.e., a measure of how quickly faults are detected. The results show that different orderings of the same SPL suite may lead to significant differences in the rate of early fault detection. They also show that our approach may contribute to accelerating the detection of faults in SPL test suites based on combinatorial testing.
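The rate of early fault detection mentioned above is conventionally measured with the APFD metric (Average Percentage of Faults Detected). The sketch below implements the standard APFD formula; the product ordering and fault matrix are invented for illustration, not taken from the paper.

```python
# Standard APFD (Average Percentage of Faults Detected):
#   APFD = 1 - (TF_1 + ... + TF_m) / (n * m) + 1 / (2n)
# where TF_i is the 1-based position of the first test (here: product)
# that detects fault i, n is the suite size, and m the number of faults.

def apfd(ordering, detecting_tests_per_fault):
    n, m = len(ordering), len(detecting_tests_per_fault)
    tf_sum = 0
    for detecting in detecting_tests_per_fault.values():
        # Position of the first test in the ordering that detects this fault.
        tf_sum += next(i + 1 for i, t in enumerate(ordering) if t in detecting)
    return 1 - tf_sum / (n * m) + 1 / (2 * n)

order = ["p3", "p1", "p4", "p2"]             # prioritized products (invented)
faults = {"f1": {"p1", "p2"}, "f2": {"p3"}}  # products exposing each fault
print(apfd(order, faults))  # TF = [2, 1] -> 1 - 3/8 + 1/8 = 0.75
```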
A dynamic fault localization technique with noise reduction for Java programs
- In QSIC 2011
"... Abstract—Existing fault localization techniques combine various program features and similarity coefficients with the aim of precisely assessing the similarities among the dynamic spectra of these program features to predict the locations of faults. Many such techniques estimate the probability of a ..."
Cited by 4 (2 self)
Existing fault localization techniques combine various program features and similarity coefficients with the aim of precisely assessing the similarities among the dynamic spectra of these program features to predict the locations of faults. Many such techniques estimate the probability of a particular program feature causing the observed failures. They ignore the noise introduced by the other features on the same set of executions that may lead to the observed failures. In this paper, we propose both the use of chains of key basic blocks as program features and an innovative similarity coefficient that has a noise-reduction effect. We have implemented our proposal in a technique known as MKBC. We have empirically evaluated MKBC using three real-life, medium-sized programs with real faults. The results show that MKBC outperforms Tarantula, Jaccard, SBI, and Ochiai significantly.
Keywords: fault localization; key block chain; noise reduction
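The four baseline techniques named above score each statement with well-known spectrum-based formulas. The sketch below computes them for a single statement from its execution spectrum; the example counts are invented, and MKBC's own key-block-chain features and noise-reducing coefficient are not reproduced here.

```python
# Standard definitions of the four baseline coefficients, computed for
# one statement from its spectrum: a_ef / a_ep are the failed / passed
# runs executing it; F / P are the total failed / passed runs.

import math

def suspiciousness(a_ef, a_ep, F, P):
    scores = {}
    # Tarantula: (a_ef/F) / (a_ef/F + a_ep/P)
    fail_ratio = a_ef / F if F else 0.0
    pass_ratio = a_ep / P if P else 0.0
    denom = fail_ratio + pass_ratio
    scores["tarantula"] = fail_ratio / denom if denom else 0.0
    # Jaccard: a_ef / (F + a_ep)
    scores["jaccard"] = a_ef / (F + a_ep) if F + a_ep else 0.0
    # Ochiai: a_ef / sqrt(F * (a_ef + a_ep))
    scores["ochiai"] = (a_ef / math.sqrt(F * (a_ef + a_ep))
                        if F and a_ef + a_ep else 0.0)
    # SBI: a_ef / (a_ef + a_ep)
    scores["sbi"] = a_ef / (a_ef + a_ep) if a_ef + a_ep else 0.0
    return scores

# A statement executed by 3 of 4 failed runs and 1 of 6 passed runs.
print(suspiciousness(a_ef=3, a_ep=1, F=4, P=6))
# tarantula ~ 0.818, jaccard = 0.6, ochiai = 0.75, sbi = 0.75
```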
Towards model-based testing of electronic funds transfer systems
- In Proc. of FSEN’11, 2011
"... Abstract We report on our first experience with applying model-based testing techniques to an operational Electronic Funds Transfer (EFT) switch. The goal is to test the conformance of the EFT switch to the standard flows described by the ISO 8583 standard. To this end, we first make a formalizatio ..."
Cited by 4 (3 self)
We report on our first experience with applying model-based testing techniques to an operational Electronic Funds Transfer (EFT) switch. The goal is to test the conformance of the EFT switch to the standard flows described by the ISO 8583 standard. To this end, we first formalize the transaction flows specified in the ISO 8583 standard in terms of a Labeled Transition System (LTS). This formalization paves the way for model-based testing based on the formal notion of Input-Output Conformance (IOCO) testing. We adopt and augment IOCO testing for our particular application domain. We develop a prototype implementation and apply our proposed techniques in practice. We discuss the encouraging results obtained and the observed shortcomings of the present approach. We outline a roadmap to remedy the shortcomings and enhance the test results.
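To make the LTS and IOCO vocabulary concrete, here is a deliberately naive sketch: a labeled transition system with input (?) and output (!) actions, and a check that after a given trace the implementation produces only outputs the specification allows. Real IOCO testing also handles quiescence and nondeterminism, and the toy authorization flow below is invented rather than taken from the ISO 8583 model.

```python
# Minimal deterministic LTS and a naive output-conformance check:
# after a trace, out(impl) must be a subset of out(spec).

class LTS:
    def __init__(self, transitions, initial="s0"):
        self.transitions = transitions  # {state: {action: next_state}}
        self.initial = initial

    def after(self, trace):
        state = self.initial
        for action in trace:
            state = self.transitions[state][action]
        return state

    def outputs(self, state):
        return {a for a in self.transitions.get(state, {}) if a.startswith("!")}

spec = LTS({"s0": {"?auth_request": "s1"},
            "s1": {"!approved": "s2", "!declined": "s2"}})
impl = LTS({"s0": {"?auth_request": "s1"},
            "s1": {"!approved": "s2", "!timeout": "s2"}})  # '!timeout' not in spec

trace = ["?auth_request"]
extra = impl.outputs(impl.after(trace)) - spec.outputs(spec.after(trace))
print("conforms" if not extra else f"non-conformance: {extra}")
# non-conformance: {'!timeout'}
```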
Evolutionary algorithm for prioritized pairwise test data generation, 2012
"... ABSTRACT Combinatorial Interaction Testing (CIT) is a technique used to discover faults caused by parameter interactions in highly configurable systems. These systems tend to be large and exhaustive testing is generally impractical. Indeed, when the resources are limited, prioritization of test cas ..."
Cited by 4 (1 self)
Combinatorial Interaction Testing (CIT) is a technique used to discover faults caused by parameter interactions in highly configurable systems. These systems tend to be large, and exhaustive testing is generally impractical. Indeed, when resources are limited, prioritization of test cases is a must. Important test cases are assigned a high priority and should be executed earlier. On the one hand, the prioritization of test cases may reveal faults in early stages of the testing phase. On the other hand, the generation of minimal test suites that fulfill the demanded coverage criteria is an NP-hard problem; therefore, search-based approaches are required to find (near-)optimal test suites. In this work, we present a novel evolutionary algorithm to deal with this problem. The experimental analysis compares five techniques on a set of benchmarks and reveals that the evolutionary approach is clearly the best in our comparison. The presented algorithm can be integrated into the CTE XL Professional tool.
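As a generic illustration of the search-based angle (a sketch, not the paper's algorithm), the following evolves small test suites toward high weighted pairwise coverage with a simple (mu+lambda)-style loop; the parameter model and pair weights are invented.

```python
# Generic GA sketch for prioritized pairwise coverage: individuals are
# test suites, fitness rewards weighted pairwise coverage, and mutation
# replaces one test. Parameters and weights are invented placeholders.

import itertools
import random

PARAMS = [["a0", "a1"], ["b0", "b1", "b2"], ["c0", "c1"]]

def weight(pair):
    return 2.0 if "b2" in pair else 1.0  # hypothetical priority weights

def pairs(test):
    return set(itertools.combinations(test, 2))

def fitness(suite):
    covered = set().union(*(pairs(t) for t in suite))
    return sum(weight(p) for p in covered)

def random_test():
    return tuple(random.choice(values) for values in PARAMS)

def mutate(suite):
    child = list(suite)
    child[random.randrange(len(child))] = random_test()
    return child

random.seed(0)
population = [[random_test() for _ in range(4)] for _ in range(20)]
for _ in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                 # elitist selection
    population = parents + [mutate(random.choice(parents)) for _ in range(10)]
best = max(population, key=fitness)
print(fitness(best), best)
```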