Results 11-20 of 3,014,685
Sequential data assimilation with a nonlinear quasigeostrophic model using Monte Carlo methods to forecast error statistics
J. Geophys. Res., 1994
"... . A new sequential data assimilation method is discussed. It is based on forecasting the error statistics using Monte Carlo methods, a better alternative than solving the traditional and computationally extremely demanding approximate error covariance equation used in the extended Kalman filter. The ..."
Cited by 786 (23 self)
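The method this abstract describes became known as the ensemble Kalman filter. Below is a minimal sketch of its Monte Carlo forecast step, assuming a generic model function step, an illustrative model-noise level q_std, and toy dimensions; none of these names come from the paper.

import numpy as np

def forecast_error_stats(ensemble, step, q_std, rng):
    """Advance a Monte Carlo ensemble through the (nonlinear) model and
    estimate forecast error statistics from the sample spread, instead of
    integrating the extended Kalman filter's approximate covariance equation."""
    forecast = np.array([step(x) + q_std * rng.standard_normal(x.shape)
                         for x in ensemble])             # perturbed model step
    mean = forecast.mean(axis=0)
    anomalies = forecast - mean                          # deviations from ensemble mean
    cov = anomalies.T @ anomalies / (len(forecast) - 1)  # sample error covariance
    return forecast, mean, cov

# e.g. 100 members of a 3-variable state with a toy model step:
rng = np.random.default_rng(0)
ens, mean, P = forecast_error_stats(rng.standard_normal((100, 3)),
                                    step=lambda x: x + 0.05 * np.sin(x),
                                    q_std=0.1, rng=rng)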
Model-Based Analysis of Oligonucleotide Arrays: Model Validation, Design Issues and Standard Error Application
2001
"... Background: A modelbased analysis of oligonucleotide expression arrays we developed previously uses a probesensitivity index to capture the response characteristic of a specific probe pair and calculates modelbased expression indexes (MBEI). MBEI has standard error attached to it as a measure of ..."
Cited by 755 (28 self)
correlations with the original 20-probe PM/MM difference model. The MBEI method is able to extend the reliable detection limit of expression to a lower mRNA concentration. The standard errors of MBEI can be used to construct confidence intervals of fold changes, and the lower confidence bound of fold change is a
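For orientation, the underlying Li-Wong model fits the PM-MM differences as y[i, j] = theta[i] * phi[j] (expression index theta per array, probe-sensitivity index phi per probe pair). The alternating least-squares loop below is a hand-written sketch of that fit, not the authors' software; all names are illustrative.

import numpy as np

def fit_mbei(y, n_iter=50):
    """Alternating least squares for y[i, j] ~ theta[i] * phi[j]:
    theta are model-based expression indexes (MBEI), phi probe sensitivities."""
    n_arrays, n_probes = y.shape
    phi = np.ones(n_probes)
    for _ in range(n_iter):
        theta = y @ phi / (phi @ phi)        # LS update of expression indexes
        phi = y.T @ theta / (theta @ theta)  # LS update of probe sensitivities
        phi *= np.sqrt(n_probes) / np.linalg.norm(phi)  # identifiability constraint
    resid_var = ((y - np.outer(theta, phi)) ** 2).mean()
    se_theta = np.sqrt(resid_var / (phi @ phi))  # standard error attached to each MBEI
    return theta, phi, se_theta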
Expected Time Bounds for Selection
1975
"... A new selection algorithm is presented which is shown to be very efficient on the average, both theoretically and practically. The number of comparisons used to select the ith smallest of n numbers is n q min(i,ni) q o(n). A lower bound within 9 percent of the above formula is also derived. ..."
Cited by 456 (4 self)
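The bound above is the Floyd-Rivest result. The sketch below shows only the plain quickselect skeleton that their SELECT algorithm refines: SELECT picks pivots from a small random sample to reach n + min(i, n-i) + o(n) comparisons, whereas the uniformly random pivot here gives expected O(n) time with a worse constant.

import random

def quickselect(a, i):
    """Return the i-th smallest element (0-based) of sequence a
    in expected linear time, by repeated random-pivot partitioning."""
    a = list(a)
    while True:
        if len(a) == 1:
            return a[0]
        pivot = random.choice(a)
        lo = [x for x in a if x < pivot]
        eq = [x for x in a if x == pivot]
        if i < len(lo):
            a = lo                       # answer lies among the smaller elements
        elif i < len(lo) + len(eq):
            return pivot                 # pivot itself is the i-th smallest
        else:
            i -= len(lo) + len(eq)       # answer lies among the larger elements
            a = [x for x in a if x > pivot]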
Experiments with a New Boosting Algorithm
1996
"... In an earlier paper, we introduced a new “boosting” algorithm called AdaBoost which, theoretically, can be used to significantly reduce the error of any learning algorithm that consistently generates classifiers whose performance is a little better than random guessing. We also introduced the relate ..."
Cited by 2175 (20 self)
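A minimal sketch of the AdaBoost reweighting loop over axis-aligned threshold stumps, assuming binary labels in {-1, +1}; the paper's experiments run boosting over stronger base learners, so the exhaustive stump search here is illustrative only.

import numpy as np

def adaboost(X, y, n_rounds=50):
    """AdaBoost with threshold stumps; y must be in {-1, +1}."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)                     # example weights
    stumps = []
    for _ in range(n_rounds):
        best = None
        for j in range(d):                      # exhaustive stump search
            for t in np.unique(X[:, j]):
                for s in (1, -1):
                    pred = s * np.where(X[:, j] <= t, 1, -1)
                    err = w[pred != y].sum()    # weighted training error
                    if best is None or err < best[0]:
                        best = (err, j, t, s)
        err, j, t, s = best
        err = min(max(err, 1e-12), 1 - 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)   # weight of this weak hypothesis
        pred = s * np.where(X[:, j] <= t, 1, -1)
        w *= np.exp(-alpha * y * pred)          # up-weight the examples it got wrong
        w /= w.sum()
        stumps.append((alpha, j, t, s))
    return stumps

def predict(stumps, X):
    votes = sum(a * s * np.where(X[:, j] <= t, 1, -1) for a, j, t, s in stumps)
    return np.sign(votes)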
Boosting the margin: A new explanation for the effectiveness of voting methods
IN PROCEEDINGS INTERNATIONAL CONFERENCE ON MACHINE LEARNING, 1997
"... One of the surprising recurring phenomena observed in experiments with boosting is that the test error of the generated classifier usually does not increase as its size becomes very large, and often is observed to decrease even after the training error reaches zero. In this paper, we show that this ..."
Cited by 885 (52 self)
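The paper's explanation is in terms of the margin of the voted classifier on each training example; for binary labels this is y * f(x), where f is the weighted vote normalized to [-1, 1]. A self-contained sketch with hypothetical weak hypotheses:

import numpy as np

def margins(hypotheses, X, y):
    """Margin of each example under a weighted majority vote:
    margin = y * sum_t(alpha_t * h_t(x)) / sum_t(alpha_t), in [-1, 1]
    when each h_t outputs -1/+1 and y is in {-1, +1}."""
    total = sum(alpha for alpha, _ in hypotheses)
    f = sum(alpha * h(X) for alpha, h in hypotheses) / total
    return y * f   # positive = classified correctly; larger = more confident

# e.g. two weak hypotheses voting on 1-D inputs:
X = np.array([[-2.0], [-0.5], [1.0], [3.0]])
y = np.array([-1, -1, 1, 1])
hs = [(0.7, lambda X: np.where(X[:, 0] > 0, 1, -1)),
      (0.3, lambda X: np.where(X[:, 0] > -1, 1, -1))]
print(margins(hs, X, y))   # per-example margin distribution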
A new approach to the maximum flow problem
JOURNAL OF THE ACM, 1988
"... All previously known efficient maximumflow algorithms work by finding augmenting paths, either one path at a time (as in the original Ford and Fulkerson algorithm) or all shortestlength augmenting paths at once (using the layered network approach of Dinic). An alternative method based on the pre ..."
Cited by 665 (33 self)
to be shortest paths. The algorithm and its analysis are simple and intuitive, yet the algorithm runs as fast as any other known method on dense graphs, achieving an O(n³) time bound on an n-vertex graph. By incorporating the dynamic tree data structure of Sleator and Tarjan, we obtain a version
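The alternative method the abstract introduces is the preflow-push (push-relabel) approach. A minimal sketch over a dense capacity matrix follows; the O(n³) bound comes from a careful vertex-selection order, and the dynamic-tree version is substantially more involved, so this is a bare-bones illustration.

def max_flow_push_relabel(cap, s, t):
    """Generic push-relabel maximum flow on an n x n capacity matrix.
    Maintains a preflow (vertices may hold excess) and vertex heights;
    pushes excess downhill along residual edges, relabels when stuck."""
    n = len(cap)
    flow = [[0] * n for _ in range(n)]
    height = [0] * n
    excess = [0] * n
    height[s] = n
    for v in range(n):                  # saturate every edge out of the source
        flow[s][v] = cap[s][v]
        flow[v][s] = -cap[s][v]
        excess[v] = cap[s][v]
    excess[s] = -sum(cap[s])
    active = [v for v in range(n) if v not in (s, t) and excess[v] > 0]
    while active:
        u = active[0]
        pushed = False
        for v in range(n):
            residual = cap[u][v] - flow[u][v]
            if residual > 0 and height[u] == height[v] + 1:
                d = min(excess[u], residual)        # push excess to v
                flow[u][v] += d
                flow[v][u] -= d
                excess[u] -= d
                excess[v] += d
                if v not in (s, t) and v not in active:
                    active.append(v)
                pushed = True
                if excess[u] == 0:
                    break
        if not pushed:                              # no admissible edge: relabel
            height[u] = 1 + min(height[v] for v in range(n)
                                if cap[u][v] - flow[u][v] > 0)
        if excess[u] == 0:
            active.pop(0)
    return excess[t]

# e.g. a 4-vertex network with source 0 and sink 3:
cap = [[0, 3, 2, 0],
       [0, 0, 1, 2],
       [0, 0, 0, 3],
       [0, 0, 0, 0]]
print(max_flow_push_relabel(cap, 0, 3))   # 5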
New results in linear filtering and prediction theory
TRANS. ASME, SER. D, J. BASIC ENG., 1961
"... A nonlinear differential equation of the Riccati type is derived for the covariance matrix of the optimal filtering error. The solution of this "variance equation " completely specifies the optimal filter for either finite or infinite smoothing intervals and stationary or nonstationary sta ..."
Cited by 585 (0 self)
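In the standard continuous-time Kalman-Bucy notation (state matrix F, observation matrix H, process and measurement noise covariances Q and R; these are the conventional symbols, not necessarily the paper's own), the variance equation and the optimal gain read:

\frac{dP}{dt} = FP + PF^{\top} + Q - PH^{\top}R^{-1}HP, \qquad K = PH^{\top}R^{-1}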
RCV1: A new benchmark collection for text categorization research
JOURNAL OF MACHINE LEARNING RESEARCH, 2004
"... Reuters Corpus Volume I (RCV1) is an archive of over 800,000 manually categorized newswire stories recently made available by Reuters, Ltd. for research purposes. Use of this data for research on text categorization requires a detailed understanding of the real world constraints under which the data ..."
Cited by 646 (11 self)
errorful data. We refer to the original data as RCV1-v1, and the corrected data as RCV1-v2. We benchmark several widely used supervised learning methods on RCV1-v2, illustrating the collection's properties, suggesting new directions for research, and providing baseline results for future studies. We make
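For reference, scikit-learn ships a loader for the corrected RCV1-v2 collection described here (the download is several hundred megabytes). A minimal benchmarking sketch on one topic; the topic code and classifier choice are illustrative, not the paper's exact experimental setup.

from sklearn.datasets import fetch_rcv1
from sklearn.linear_model import LogisticRegression

rcv1 = fetch_rcv1()    # ~800k newswire docs as sparse cosine-normalized TF-IDF
# binary labels for one topic, e.g. 'CCAT' (corporate/industrial):
y = rcv1.target[:, rcv1.target_names.tolist().index('CCAT')].toarray().ravel()
X_train, y_train = rcv1.data[:23149], y[:23149]   # chronological LYRL2004 split
X_test, y_test = rcv1.data[23149:], y[23149:]
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print('test accuracy:', clf.score(X_test, y_test))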
Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
2001
"... Variable selection is fundamental to highdimensional statistical modeling, including nonparametric regression. Many approaches in use are stepwise selection procedures, which can be computationally expensive and ignore stochastic errors in the variable selection process. In this article, penalized ..."
Cited by 911 (60 self)
functions are symmetric, nonconcave on (0, ∞), and have singularities at the origin to produce sparse solutions. Furthermore, the penalty functions should be bounded by a constant to reduce bias and satisfy certain conditions to yield continuous solutions. A new algorithm is proposed for optimizing
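A sketch of the SCAD penalty from this paper in its usual parameterization (tuning parameters lambda and a > 2, with a = 3.7 as the paper's suggested default): linear near zero for sparsity, then a quadratic blend, then constant so that large coefficients are left nearly unbiased.

import numpy as np

def scad_penalty(theta, lam, a=3.7):
    """SCAD penalty p_lambda(|theta|): lam*|t| for |t| <= lam,
    a quadratic piece on (lam, a*lam], and the constant (a+1)*lam^2/2 beyond,
    so the three pieces join continuously."""
    t = np.abs(theta)
    small = lam * t
    mid = -(t**2 - 2 * a * lam * t + lam**2) / (2 * (a - 1))
    large = (a + 1) * lam**2 / 2
    return np.where(t <= lam, small, np.where(t <= a * lam, mid, large))

# e.g. at lam = 1 the penalty flattens at (a+1)/2 = 2.35 once |theta| > 3.7:
print(scad_penalty(np.linspace(0.0, 5.0, 6), lam=1.0))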
Learning quickly when irrelevant attributes abound: A new linear-threshold algorithm
Machine Learning, 1988
"... learning Boolean functions, linearthreshold algorithms Abstract. Valiant (1984) and others have studied the problem of learning various classes of Boolean functions from examples. Here we discuss incremental learning of these functions. We consider a setting in which the learner responds to each ex ..."
Cited by 766 (5 self)
algorithms are available that make a bounded number of mistakes, with the bound independent of the number of examples seen by the learner. We present one such algorithm that learns disjunctive Boolean functions, along with variants for learning other classes of Boolean functions. The basic method can
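The algorithm is Littlestone's Winnow. A minimal sketch of the multiplicative-update variant for monotone disjunctions over Boolean attributes; the promotion/demotion factor and the threshold follow one common presentation and are assumptions, not the paper's only settings.

def winnow(examples, n, alpha=2.0):
    """Winnow for x in {0,1}^n with labels in {0,1}. The mistake bound grows
    with the number of *relevant* attributes and only logarithmically in n,
    hence 'learning quickly when irrelevant attributes abound'."""
    w = [1.0] * n                    # one weight per attribute
    theta = n                        # fixed linear threshold
    for x, y in examples:
        pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0
        if pred == 1 and y == 0:     # false positive: demote active weights
            w = [wi / alpha if xi else wi for wi, xi in zip(w, x)]
        elif pred == 0 and y == 1:   # false negative: promote active weights
            w = [wi * alpha if xi else wi for wi, xi in zip(w, x)]
    return w

# e.g. learn the disjunction x1 OR x3 over n = 5 attributes:
data = [((1, 0, 0, 0, 0), 1), ((0, 1, 0, 0, 0), 0),
        ((0, 0, 1, 1, 0), 1), ((0, 0, 0, 0, 1), 0)]
w = winnow(data, n=5)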