Results 1 - 10 of 96,970
Table 1. Pilot Results for Test Generation Tool
2005
"... In PAGE 9: ... We have shown a decrease in the effort required to execute the test suite and an improvement in the coverage of the functionality of the system. Table 1 shows the results of the initial pilots performed with an early prototype version of this approach. ..."
Cited by 5
Table 1: The average performance on OAEI 2005 benchmark test suite
2005
"... In PAGE 6: ... 5.2 Performance of Falcon-AO The partial experimental results of our Falcon-AO are presented in Table 1, which shows that Falcon-AO performs well for all these test cases. The matched pairs generated by LMO are fed into GMO as input. ... In PAGE 6: ... The details of the decision will be presented in our accompanying experimental paper. As can be seen from Table 1, our tool Falcon-AO works very well for test cases #101-104 and test cases #201- ..."
Cited by 8
Table 6: Test generation and execution results.
2006
"... In PAGE 13: ...or test specifications) represented as timed traces, i.e., alternating sequences of states and delays or discrete transitions, in the output format of the UPPAAL tool. Results: Table 6 shows the results of the test-suite generation. Each row of the table gives the numbers for a given coverage criterion and the automata it covers (used as input). ... In PAGE 15: ... The behavior of the web server is controlled by sending parameters in the PDUs that are interpreted as commands by a PHP script running on the web server. Results: The test cases presented in Table 6 have been executed on an in-house version of the WAP gateway at Ericsson. As shown in the rightmost column of Table 6, most of the test cases went well. ..."
Cited by 2
Table 11: Summary of the test-case generation results.
2006
"... In PAGE 39: ...1.2 Experimental Results and Analysis We used our tools to automatically generate and run two test suites for the FGS; one suite for requirements coverage and one for requirements UFC coverage. The results from our experiment are summarized in Table 11 and Table 12. ... In PAGE 40: ... Table 11 shows the number of test cases in each test suite and the time it took to generate them. It is evident from the table that the UFC coverage test suite is three times larger than the requirements coverage test suite and can therefore be expected to provide better coverage of the model than the requirements coverage test suite. ..."
Table 1 Test Suite of SDL Games and Language Processing Tools.
"... In PAGE 9: ...2.1. The test suite of SDL games and language processing tools Table 1 lists eight applications, or test cases, that form the test suite we use in our study. The top row of the table lists the names that we use to refer to each of the test cases. ... In PAGE 9: ... The four language processing tools are: Doxygen [48], g4re, Jikes [22], and Keystone [25,36]. The rows of Table 1 ... In PAGE 10: ... cases: the first row lists the version number, Version 1; the second row lists the number of source files, Source Files; the third row lists the number of translation units, Translation Units, which includes both C and C++ translation units; the fourth row lists the number of C++ translation units, C++ Translation Units; and finally, the last row of the table lists the (approximate) number of thousands of lines of code (KLOC) for each test case, not counting blank or comment lines. Table 1 shows that, for the test cases that we have chosen for our study, the SDL games are larger than the language processing tools. For example, the average number of KLOC for the games is 231, whereas the average number of KLOC for the language processing tools is 78. ..."
Table 3 The result of applying different tools to a test suite
2003
"... In PAGE 4: ... Type-varying arguments in Cyclone are treated as tagged unions. A DEEPER STUDY We applied all available tools mentioned above to a set of small programs to evaluate their performance; the results are shown in Table 1 through Table 3. All tested tools are the latest versions available on the Internet: PurifyPlus 2003a. ... In PAGE 5: ... The test suite was composed of a number of C program files, each containing one or more errors caused by improper use of unsafe features of C. The results are listed in Table 3, showing that binary-level instrumentation tools and source instrumentation tools are more sensitive to buffer overflows, and generally do a better job of detecting buffer overflows and memory-management errors than source checkers. However, none of these tools detected the error of reading uninitialized locals or reading uninitialized non-buffer objects on the heap. ..."
Table 3: Results of the application of mutation operators (columns: Operator, Nr. Mutants, Feasible Mutants, Nr. Test Cases)
2005
"... In PAGE 40: ... The resulting mutated specifications produce several test cases. Table 3 presents a summary of the number of mutants, feasible mutants, and unique test cases generated by each of the first three mutation operators. Subsequent analysis of the generated test cases showed that, by applying the MCO operator, our tool produces a test suite just like the one generated by means of DNF partitioning. ..."
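The mutant / feasible-mutant / test-case pipeline described in the snippet can be illustrated on a toy boolean specification. This is a hypothetical stand-in for the paper's actual mutation operators and tool, sketched under the assumption that a mutant is "feasible" when some input distinguishes it from the original spec:

```python
from itertools import product

def mutate_and_test(spec_src, var_names):
    """Toy specification mutation: each mutant replaces one 'and' in a
    boolean spec (given as a Python expression string) with 'or'.
    A mutant is *feasible* if some input distinguishes it from the
    original spec; that distinguishing input becomes a test case."""
    make = lambda src: eval("lambda %s: %s" % (", ".join(var_names), src))
    spec = make(spec_src)
    inputs = list(product([False, True], repeat=len(var_names)))
    pieces = spec_src.split(" and ")
    feasible, tests = [], []
    for i in range(len(pieces) - 1):
        # Replace only the i-th 'and' with 'or'.
        mut_src = (" and ".join(pieces[:i + 1]) + " or "
                   + " and ".join(pieces[i + 1:]))
        mutant = make(mut_src)
        killer = next((x for x in inputs if spec(*x) != mutant(*x)), None)
        if killer is not None:  # distinguishable, hence feasible
            feasible.append(mut_src)
            tests.append(killer)
    return feasible, tests
```

For `"a and b and c"` this yields two feasible mutants and two unique distinguishing test cases, mirroring the counts the paper tabulates per operator.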
Table 2: Overview of test suites.
2002
"... In PAGE 8: ... The reader is referred to Singer et al. (2000) for a recent contribution on this subject. In our experiments, we use random 3-SAT benchmark instances with a fixed clause-to-variable ratio, generated using the mkcnf generator (ftp://dimacs...) with the forced option to ensure that they are satisfiable. Table 2 presents an overview of these instances, which are grouped into three test suites that are available online. ..."
Cited by 6
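The "forced" construction mentioned in the snippet guarantees satisfiability by choosing a hidden assignment first and only keeping clauses that the assignment satisfies. A minimal sketch of that idea (an illustration of the technique, not mkcnf itself; function names are invented):

```python
import random

def forced_3sat(num_vars, num_clauses, seed=0):
    """Generate a random 3-SAT instance that is satisfiable by
    construction: pick a hidden assignment, then keep only random
    3-literal clauses that it satisfies.  Literals are signed
    integers, e.g. -3 means 'not x3'."""
    rng = random.Random(seed)
    hidden = {v: rng.choice([False, True]) for v in range(1, num_vars + 1)}
    clauses = []
    while len(clauses) < num_clauses:
        vs = rng.sample(range(1, num_vars + 1), 3)
        clause = tuple(v if rng.random() < 0.5 else -v for v in vs)
        # Keep the clause only if the hidden assignment satisfies it.
        if any((lit > 0) == hidden[abs(lit)] for lit in clause):
            clauses.append(clause)
    return hidden, clauses

def satisfies(assignment, clauses):
    """True iff every clause has at least one literal made true."""
    return all(any((lit > 0) == assignment[abs(lit)] for lit in c)
               for c in clauses)
```

The hidden assignment is a witness, so every generated instance is satisfiable by construction, which is what makes such benchmarks suitable for evaluating incomplete (local-search) SAT solvers.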
Table 5: Test suite 3
1998
"... In PAGE 12: ... Sequoia-16) 971 rectangles. Both samples are mapped onto the same universe, i.e., the underlying grids are identical. (Table 4: Test suite 2, with columns Predicate, Sample 1, Sample 2, Grid and rows: intersects, Sequoia-16, Sequoia-11, same; northwest, Sequoia-16, Sequoia-11, same.) Test suite 3 (Table 5) compares two randomly generated samples of the same model, but shifted against each other by a random vector V ~ U((x_min, y_min), (x_max, y_max)). This idea is applied to all three synthetic models and a variety of sample sizes. ..."
Cited by 19
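The test-suite-3 construction in the snippet, shifting an entire sample by one uniformly drawn vector, can be sketched as follows (a minimal illustration with invented names; the paper's actual data generator is not shown in the excerpt):

```python
import random

def shift_by_random_vector(rects, lo, hi, seed=0):
    """Shift a whole sample of rectangles (x1, y1, x2, y2) by a single
    random vector V ~ U(lo, hi), where lo = (x_min, y_min) and
    hi = (x_max, y_max).  Every rectangle moves by the same offset,
    so shapes and relative positions within the sample are preserved."""
    rng = random.Random(seed)
    dx = rng.uniform(lo[0], hi[0])
    dy = rng.uniform(lo[1], hi[1])
    return [(x1 + dx, y1 + dy, x2 + dx, y2 + dy)
            for (x1, y1, x2, y2) in rects]
```

Because one vector shifts the whole sample, the two samples have identical internal geometry but a controlled relative displacement, which is what the spatial-join predicates are then evaluated against.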