CiteSeerX
Results 1 - 10 of 14,044

Average Error Filter Wavelength

by Figure Original Averaged, X Subsampled, 21
"... unication, and FP = further processing. • Fig. 4 a), b) Stereo pair. c) Disparity map, the filter's width, σ = λ/2.1, is 6 pixels. • Fig. 5 a) Image of a chair and a dustbin. b) Disparity map, σ = 6 pixels. c) The filter's energy ρ in the nonsingular regions that survive constraint ..."
Abstract
constraint (9). • Fig. 6 One of the images of the Translating Tree sequence. • Fig. 7 The disparity field of the Translating Tree computed for an input image of size 128 × 128 (averaged). • Fig. 8 Behavior of the algorithm's error as a function of the filter wavelength, λ = 2.1σ

Average Error for Spectral Asymptotics on Surfaces

by Robert S. Strichartz
"... Let N(t) denote the eigenvalue counting function of the Laplacian on a compact surface of constant nonnegative curvature, with or without boundary. We define a refined asymptotic formula Ñ(t) = At + Bt^(1/2) + C, where the constants are expressed in terms of the geometry of the surface and its boundary ..."
Abstract
boundary, and consider the average error A(t) = (1/t) ∫₀ᵗ D(s) ds for D(t) = N(t) − Ñ(t). We present a conjecture for the asymptotic behavior of A(t), and study some examples that support the conjecture. "The mills of God grind slowly, yet they grind exceeding small." — Proverb
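Collecting the abstract's definitions in one display (same symbols as the abstract, restated for readability):

```latex
\[
\tilde N(t) = A t + B t^{1/2} + C, \qquad
D(t) = N(t) - \tilde N(t), \qquad
A(t) = \frac{1}{t}\int_0^t D(s)\,ds .
\]
\]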

Reducing the Average Error of Underresolved Approximations

by Anton Kast, Alexandre J. Chorin
"... There exist problems of practical interest (in particular in turbulence) whose solutions are too complex to be accurately resolved numerically, but where one is interested only in large scale features averaged over the random details. We define the appropriate averages as expectations conditioned by ..."
Abstract
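The idea of averaging over random small-scale details can be illustrated with a toy Monte Carlo sketch; the model, field sizes, and sample count below are invented for illustration and are not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model (invented): a "solution" that is a smooth large-scale
# field plus random small-scale fluctuations we cannot afford to
# resolve individually.
x = np.linspace(0.0, 1.0, 256)
large_scale = np.sin(2.0 * np.pi * x)

def realization():
    # One draw of the full solution, random details included.
    return large_scale + rng.normal(scale=0.5, size=x.size)

# The "appropriate average": an expectation over the random details,
# approximated here by a plain Monte Carlo mean over realizations.
samples = np.stack([realization() for _ in range(2000)])
averaged = samples.mean(axis=0)

# The average recovers the large-scale field to within sampling noise.
print(float(np.abs(averaged - large_scale).max()))
```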

A comparison of mean average error ...

by Sijing Liu
Abstract not found

Approximation for Average Error Probability of BPSK in the Presence of Phase Error

by Yeonsoo Jang, Dongweon Yoon, Ki Ho Kwon, Jaeyoon Lee, Wooju Lee
"... Abstract—Phase error in communications systems degrades error performance. In this paper, we present a simple approximation for the average error probability of the binary phase shift keying (BPSK) in the presence of phase error having a uniform distribution on arbitrary intervals. For the simple ap ..."
Abstract
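The quantity being approximated can be computed directly by numerical averaging; the sketch below integrates the conditional BPSK error probability Q(√(2γ)·cos φ) over a uniform phase error (the SNR value, interval, and function names are illustrative, not from the paper):

```python
import math

def q(x):
    # Gaussian tail probability: Q(x) = 0.5 * erfc(x / sqrt(2)).
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def avg_bpsk_error(snr_db, phi_max, n=10_000):
    # Average BPSK error probability when the phase error is uniform
    # on [-phi_max, phi_max]: by symmetry, average Q(sqrt(2g)*cos(phi))
    # over [0, phi_max] with a midpoint rule.
    g = 10.0 ** (snr_db / 10.0)
    total = 0.0
    for k in range(n):
        phi = (k + 0.5) / n * phi_max
        total += q(math.sqrt(2.0 * g) * math.cos(phi))
    return total / n

p_ideal = avg_bpsk_error(6.0, 1e-9)         # essentially no phase error
p_phase = avg_bpsk_error(6.0, math.pi / 8)  # uniform error up to 22.5 deg
print(p_ideal, p_phase)
```

With a vanishing phase interval this reduces to the textbook Q(√(2γ)); a nonzero interval strictly increases the average error, since cos φ < 1 shrinks the effective signal amplitude.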

Validation of Average Error Rate Over Classifiers

by Eric Bax , 1997
"... We examine methods to estimate the average and variance of test error rates over a set of classifiers. We begin with the process of drawing a classifier at random for each example. Given validation data, the average test error rate can be estimated as if validating a single classifier. Given the tes ..."
Abstract - Cited by 3 (3 self)
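The randomized-classifier construction the abstract sketches is easy to simulate; a minimal sketch with invented error rates and data sizes (nothing here is from the paper):

```python
import random

random.seed(0)

# Hypothetical validation set: true labels and each classifier's
# predictions on them (the error rates below are invented).
labels = [0, 1, 1, 0, 1, 0, 0, 1, 1, 0] * 50
classifiers = [
    [y if random.random() > err else 1 - y for y in labels]
    for err in (0.1, 0.2, 0.3)
]

# Draw a classifier at random for each example, as the abstract
# describes, and score the result like a single classifier.
randomized = [random.choice(classifiers)[i] for i in range(len(labels))]
rand_error = sum(p != y for p, y in zip(randomized, labels)) / len(labels)

# Its expectation is the average of the individual error rates.
per_clf = [sum(p != y for p, y in zip(c, labels)) / len(labels)
           for c in classifiers]
avg_error = sum(per_clf) / len(per_clf)
print(rand_error, avg_error)
```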

On Various Definitions of Shadowing with Average Error in Tracing

by Xinxing Wu, Piotr Oprocha, Guanrong Chen
Abstract not found

Ensemble Methods in Machine Learning

by Thomas G. Dietterich - MULTIPLE CLASSIFIER SYSTEMS, LNCS 1857, 2000
"... Ensemble methods are learning algorithms that construct a set of classifiers and then classify new data points by taking a (weighted) vote of their predictions. The original ensemble method is Bayesian averaging, but more recent algorithms include error-correcting output coding, Bagging, and boostin ..."
Abstract - Cited by 625 (3 self)
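The (weighted) vote of classifier predictions named in the abstract reduces to a few lines; the labels and weights below are illustrative, not from the paper:

```python
from collections import Counter

def weighted_vote(predictions, weights):
    # Weighted vote: each classifier's predicted label contributes its
    # weight; the label with the largest total wins.
    tally = Counter()
    for label, w in zip(predictions, weights):
        tally[label] += w
    return max(tally, key=tally.get)

# Three hypothetical classifiers vote on one example: the two "cat"
# votes (0.5 + 0.5) outweigh the single heavier "dog" vote (0.9).
print(weighted_vote(["cat", "dog", "cat"], [0.5, 0.9, 0.5]))
```

With all weights equal this is plain majority voting; Bagging uses equal weights, while boosting assigns each classifier a weight from its training performance.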

Base-calling of automated sequencer traces using phred. I. Accuracy Assessment

by Brent Ewing, LaDeana Hillier, Michael C. Wendl, Phil Green - GENOME RES, 1998
"... The availability of massive amounts of DNA sequence information has begun to revolutionize the practice of biology. As a result, current large-scale sequencing output, while impressive, is not adequate to keep pace with growing demand and, in particular, is far short of what will be required to obta ..."
Abstract - Cited by 1653 (4 self)
accuracy. phred appears to be the first base-calling program to achieve a lower error rate than the ABI software, averaging 40%–50% fewer errors in the data sets examined, independent of position in read, machine running conditions, or sequencing chemistry.
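For context, base-caller error rates are conventionally reported on the phred quality scale, q = −10 log₁₀(p). The arithmetic below shows how a "40%–50% fewer errors" claim translates into quality points; the baseline rate and the 45% reduction are invented numbers, not the paper's data:

```python
import math

def phred_quality(p):
    # Phred convention: quality q = -10 * log10(p) for a per-base
    # error probability p (q = 20 means 1 error in 100 bases).
    return -10.0 * math.log10(p)

# Invented illustration: if a baseline caller errs on 1% of bases,
# "45% fewer errors" gives 0.55%, i.e. roughly 2.6 quality points more.
baseline = 0.01
reduced = baseline * (1.0 - 0.45)
print(phred_quality(baseline), phred_quality(reduced))
```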

An empirical comparison of voting classification algorithms: Bagging, boosting, and variants.

by Eric Bauer, Ron Kohavi - Machine Learning, 1999
"... Abstract. Methods for voting classification algorithms, such as Bagging and AdaBoost, have been shown to be very successful in improving the accuracy of certain classifiers for artificial and real-world datasets. We review these algorithms and describe a large empirical study comparing several vari ..."
Abstract - Cited by 707 (2 self)
in the average tree size in AdaBoost trials and its success in reducing the error. We compare the mean-squared error of voting methods to non-voting methods and show that the voting methods lead to large and significant reductions in the mean-squared errors. Practical problems that arise in implementing boosting
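The mean-squared-error comparison between voting and non-voting methods rests on variance reduction from averaging independent predictions; a synthetic sketch (all numbers invented, independence assumed):

```python
import random

random.seed(1)

# Compare the mean-squared error of a single noisy predictor with
# that of an averaging ensemble of ten such predictors.
TRUE = 1.0

def noisy_predictor():
    return TRUE + random.gauss(0.0, 0.5)

trials = 5000
single_mse = 0.0
ensemble_mse = 0.0
for _ in range(trials):
    preds = [noisy_predictor() for _ in range(10)]
    single_mse += (preds[0] - TRUE) ** 2
    ensemble_mse += (sum(preds) / len(preds) - TRUE) ** 2
single_mse /= trials
ensemble_mse /= trials

# For independent predictors, averaging ten cuts the variance (hence
# the MSE about the truth) by about a factor of ten.
print(single_mse, ensemble_mse)
```

Real ensemble members are correlated, so the reduction in practice is smaller than this independent-predictor bound.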

Developed at and hosted by The College of Information Sciences and Technology

© 2007-2019 The Pennsylvania State University