Results 1 - 10 of 22,992

Table 3: Time (in seconds) to find all bottlenecks with search directives from different application versions. Times reported are medians over several runs; standard deviations range from 3 to 17 seconds. Each row contains the data for a particular application version, A through D. Each column contains the data for a particular source of the search directives used with the Performance Consultant. For example, the cell at row C, column B contains the time to diagnose C using directives from a previous run of B. Time relative to the base version (column None) is shown in parentheses.

in Improving Online Performance Diagnosis by the Use of Historical Performance Data
by Karen L. Karavanic, Barton P. Miller 1999
"... In PAGE 8: ...2. The full results are shown in Table 3. In every case, adding historical knowledge to the Performance Consultant greatly improved its ability to quickly diagnose performance bottlenecks: diagnosis time was reduced a minimum of 75% in all executions using historical knowledge. In Table 3, each row represents the version of the application currently being diagnosed. Each column represents the source from which we extracted the search directives used.... ..."
Cited by 13

Table 2 presents overall Word Error Rate (WER), Sentence Error Rate (SER) and Semantic Error Rate (SemER) for both versions of the system, where SemER is measured as the proportion of utterances not receiving an acceptable back-translation. Since performance of the recognisers, particularly the GLM version, differs greatly depending on whether or not it was within the coverage of the GLM grammar, we present separate figures for in-coverage data (417 utterances) and out-of-coverage data (453 utterances).

in unknown title
by unknown authors 2005
"... In PAGE 3: ... Table 2: WER, SER and SemER for SLM and GLM versions of the recogniser, on in-coverage and out-of-coverage data. Translations to French and Japanese were judged for acceptability by native speaker judges for each language: there were six judges for French, and three for Japanese.... ..."
Cited by 1
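The error rates named in the entry above are standard metrics: WER is the word-level Levenshtein (edit) distance between the recogniser output and the reference transcript, normalised by reference length, and SER is the fraction of utterances containing any error. A minimal sketch of both (function names and data are illustrative, not from the paper):

```python
def wer(ref: str, hyp: str) -> float:
    """Word Error Rate: word-level edit distance / reference length."""
    r, h = ref.split(), hyp.split()
    # Dynamic-programming table for Levenshtein distance over words.
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i  # cost of deleting all reference words
    for j in range(len(h) + 1):
        d[0][j] = j  # cost of inserting all hypothesis words
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # match / substitution
    return d[len(r)][len(h)] / len(r)

def ser(pairs) -> float:
    """Sentence Error Rate: fraction of (ref, hyp) pairs that differ at all."""
    return sum(r != h for r, h in pairs) / len(pairs)
```

SemER, by contrast, cannot be computed mechanically; per the snippet it required human judges to decide whether each back-translation was acceptable.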

Table 1. Definition of !(s; t) In the long version of the paper, we show that this definition leads to the results given by our examples, and in particular, to the three nets shown on the right-hand side of Figure 1 (Example A), depending on the particular choices of

in Refinement of Coloured Petri Nets
by Eike Best, Thomas Thielke
"... In PAGE 11: ... We give these nine cases explicitly for reasons of clarity, even though some of them can be combined into single cases. The full definition of !(s; t) is given by the if–fi case distinction in Table 1. In the definition, w denotes a value, m denotes a mode of N, and ‘’ denotes multiset inclusion.... ..."

Table 2: Versioning software. Vendors can maximize their profit by creating software versions with maximum value and selling these products at the highest price possible. For a software marketplace there are two principles in designing a product line: vendors must offer software versions adapted to the requirements of different types of customers, and vendors have to present the value of each software version in a way that is transparent to the customers. They must describe the particularities of the software in meta-data such as XML. There are several dimensions for versioning a software product.

in On-Demand Application Integration Business Concepts And Strategies For The ASP Market
by G. Tamm, O. Günther

Table 4: Similarity of Extracted Priorities Across Code Versions. Each column represents the source(s) of the priority directives: a run of one or more of versions A, B, and C. The rows contain data for high priority, low priority, and the complete set of both. The values are the number of priority directives for the particular category. For example, of the total 107 different high priority directives, 16 were unique to version A and 46 were common to versions A, B, and C.

in Improving Online Performance Diagnosis by the Use of Historical Performance Data
by Karen L. Karavanic, Barton P. Miller 1999
"... In PAGE 9: ... We examined the different runs of Version C, noting the differences in the sets of search directives extracted from the base runs of Versions A, B, and C. As shown in Table 4, 36% of the priorities were common across all three sets of directives, 41% were unique to a single set, and the remaining 23% occurred in two of the three sets. High priority settings have a bigger impact; for this category, 43% were common to all three, 30% were unique to one, and the remaining 27% were common to two.... ..."
Cited by 13
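The breakdown quoted above (directives common to all three versions, unique to exactly one, or shared by exactly two) is a straightforward set computation over the extracted directive sets. A minimal sketch with made-up directive names (the sets below are illustrative, not the paper's data):

```python
# Hypothetical directive sets extracted from base runs of versions A, B, C.
A = {"d1", "d2", "d3", "d4"}
B = {"d2", "d3", "d5"}
C = {"d2", "d4", "d5", "d6"}

all_dirs = A | B | C                       # every directive seen in any run
common_to_all = A & B & C                  # present in all three sets
unique = {d for d in all_dirs
          if sum(d in s for s in (A, B, C)) == 1}  # present in exactly one
in_two = all_dirs - common_to_all - unique         # present in exactly two

# Fractions of the total, as reported in the table.
frac_common = len(common_to_all) / len(all_dirs)
frac_unique = len(unique) / len(all_dirs)
frac_two = len(in_two) / len(all_dirs)
```

The three fractions partition the full directive set, so they always sum to 1, matching the 36% + 41% + 23% breakdown in the snippet.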

Table 4.6: Test 3 bis: complete type reduction + module optimization (for the mpdus type). There may well be some room left for optimizations, in particular: better memory management, a version with security checks removed, a globalisation of security checks within the "light weight" routine, and an implementation of the "mappings" authorized for the FTLWS routine. These optimisations will be done in the next version and will be incorporated into the INRIA-2 deliverable.

in Applicability of the Session and the Presentation Layers for. . .
by Walid Dabbous, Christian Huitema, Leon Vidaller Siso, Joaquin Seoane, Julio Berrocal

Table 8.1 shows the word error rates for using Schmid smoothing and my variant of Modified Kneser-Ney Smoothing respectively. This version outperforms Schmid smoothing, in particular on the phoneme error rates when stress is not counted, and on input that is not morphologically annotated. Significant differences (p < 0.00001) with respect to Schmid smoothing are marked with an asterisk.

in Letter-to-Phoneme Conversion for a German Text-to-Speech System
by Vera Demberg 2005

Table 3: Baseline test on the Prolog versions

in An Experimental Evaluation of Methodological Diversity in Multiversion Software Reliability
by Derek Partridge, Niall Griffith, Dan Tallis, Phillis Jones 1996
"... In PAGE 9: ... They are used to compute a measure of distinct-failure diversity, DFD. For the same test set applied to the four Prolog versions we have the simple test results (Table 3) and the coincident and identical failure results (Table 4). The joint coincident failure results are given in Table 5.... In PAGE 17: ...scope in the size of N (the number of versions) and high DFD, e.g. the Prolog set, with just four versions, has no scope in the size of N, so although its DFD = 0.966, there can be no gain. Two of the individual Prolog versions (Table 3) exhibit particularly good performances (versions 1 and 2), and from Table 4 we see a curious distribution of coincident failures. There are no instances of either 2 or 3 versions failing coincidently, yet on 30 of the tests all 4 versions fail.... ..."
Cited by 2

Table V. Dynamic Region Asymptotic Speedups without a Particular Optimization. This table compares asymptotic speedups with all optimizations enabled to that with a particular optimization disabled. Only those entries that correspond to optimizations that were applied (those with a check mark in Table IV) are shown. A number greater than 1.0 indicates that a dynamic region with a particular dynamic optimization disabled was still faster than its statically compiled version; a number less than 1.0 indicates that it was slower.

in The Benefits and Costs of DyC's Run-Time Optimizations
by Brian Grant, Markus Mock, Matthai Philipose, Craig Chambers, Susan J. Eggers

Table 2: Exact and approximate (second version of sequential imputations) calculations in a 4 × 4 contingency table for comparing performance of students from different programs. Monte Carlo standard errors are indicated in parentheses. (?) Sequential imputations did not produce this particular clustering for M = 5,000.

in Nonparametric Bayesian Analysis for Assessing Homogeneity in k×l Contingency TABLES WITH FIXED RIGHT MARGIN TOTALS
by Fernando A. Quintana 1998
"... In PAGE 17: ... As a comparison, we also implemented the sequential imputations algorithm (second version) with M = 5,000. Both sets of results are displayed in Table 2. The exact Bayes factor is .... ..."
Cited by 6