Results 1–10 of 68,033
The Nature of Statistical Learning Theory, 1999
"... Statistical learning theory was introduced in the late 1960s. Until the 1990s it was a purely theoretical analysis of the problem of function estimation from a given collection of data. In the middle of the 1990s new types of learning algorithms (called support vector machines) based on the deve ..."
Cited by 13236 (32 self)
A new theoretical insight on the time-dependent diffusion coefficient
"... In NMR, measuring the time-dependent diffusion coefficient D(t) is an efficient tool to probe the geometry of porous media [1–3]. Although the diffusive motion is well understood in single-scale domains (slab, cylinder, and sphere) [4], many issues remain unclear for multi-scale porous structures like sedimentary rocks, cements, or biological tissues. To get a better theoretical insight into restricted diffusion in multi-scale geometries, we study the spin-echo signal attenuation due to diffusion in circular and spherical layers, {x ∈ R^d : L − l < |x| < L}, presenting two geometrical lengths, the radius L ..."
On Lloyd’s algorithm: new theoretical insights for clustering in practice
"... A paradox for “k-means clustering”: the k-means objective φ of C = {c_i, i ∈ [k]} on a dataset X is φ_X(C) = Σ_{x∈X} ‖x − C(x)‖², where C(x) = arg min_{c∈C} ‖x − c‖. Even though approximation algorithms exist, they are rarely used for applications. Instead, a few heuristics, most notably Lloyd’s algorithm, are preferred and often successful in practice. Lloyd’s algorithm (a.k.a. the “k-means” algorithm). Input: dataset X, |X| = n; number of clusters k; sample size m, m > k. ..."
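The entry above states the k-means objective and the inputs of Lloyd's algorithm. A minimal pure-Python sketch of the classic alternation (assign each point to its nearest center, then move each center to its cluster mean), assuming random seeding from the data; this is an illustration of the standard heuristic, not the paper's own implementation:

```python
import random

def kmeans_cost(X, centers):
    """phi_X(C): sum over x of the squared distance to its nearest center."""
    return sum(min(sum((xi - ci) ** 2 for xi, ci in zip(x, c))
                   for c in centers) for x in X)

def lloyd(X, k, iters=100, seed=0):
    """X: list of points as tuples; returns k centers after convergence."""
    rng = random.Random(seed)
    centers = rng.sample(X, k)          # seeding: k distinct data points
    for _ in range(iters):
        # assignment step: each point joins the cluster of its nearest center
        clusters = [[] for _ in range(k)]
        for x in X:
            j = min(range(k),
                    key=lambda j: sum((a - b) ** 2
                                      for a, b in zip(x, centers[j])))
            clusters[j].append(x)
        # update step: each center moves to its cluster's mean
        new = [tuple(sum(col) / len(c) for col in zip(*c)) if c
               else centers[j] for j, c in enumerate(clusters)]
        if new == centers:              # fixed point reached
            break
        centers = new
    return centers

X = [(0.0,), (0.2,), (0.1,), (9.0,), (9.3,), (8.8,)]
centers = lloyd(X, k=2)
```

Each iteration can only decrease φ_X(C), which is why the alternation terminates at a local optimum even though the objective is NP-hard to minimize globally.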
Theoretical improvements in algorithmic efficiency for network flow problems, 1972
"... This paper presents new algorithms for the maximum flow problem, the Hitchcock transportation problem, and the general minimum-cost flow problem. Upper bounds on ... the numbers of steps in these algorithms are derived, and are shown to compare favorably with upper bounds on the numbers of steps req ..."
Cited by 560 (0 self)
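The shortest-augmenting-path rule analyzed in work of this kind can be sketched as BFS-based max flow; a minimal pure-Python version, with capacities in a nested dict (an illustrative sketch, not the paper's own formulation):

```python
from collections import deque

def max_flow(cap, s, t):
    """cap[u][v] = capacity of edge u -> v; returns the max s-t flow value."""
    # residual capacities, with zero-capacity reverse edges added
    res = {u: dict(vs) for u, vs in cap.items()}
    for u in list(cap):
        for v in cap[u]:
            res.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS: shortest augmenting path s -> t in the residual graph
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:          # no augmenting path: flow is maximum
            return flow
        # recover the path, find its bottleneck capacity, and augment
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        b = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= b
            res[v][u] += b
        flow += b

cap = {'s': {'a': 3, 'b': 2}, 'a': {'b': 1, 't': 2}, 'b': {'t': 3}}
```

Choosing the *shortest* augmenting path is what yields a step bound polynomial in the graph size alone, independent of the capacity values.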
The Ensemble Kalman Filter: Theoretical formulation and practical implementation. Ocean Dynamics, 2003
"... The purpose of this paper is to provide a comprehensive presentation and interpretation of the Ensemble Kalman Filter (EnKF) and its numerical implementation. The EnKF has a large user group and numerous publications have discussed applications and theoretical aspects of it. This paper reviews the important results from these studies and also presents new ideas and alternative interpretations which further explain the success of the EnKF. In addition to providing the theoretical framework needed for using the EnKF, there is also a focus on the algorithmic formulation and optimal ..."
Cited by 496 (5 self)
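A single analysis step of the standard stochastic (perturbed-observations) EnKF formulation discussed in the entry above can be sketched as follows; the state dimension, observation operator H, and covariances here are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def enkf_analysis(ens, y, H, R, rng):
    """One EnKF analysis step with perturbed observations.
    ens: (n, N) forecast ensemble; y: (m,) observation;
    H: (m, n) observation operator; R: (m, m) obs-error covariance."""
    n, N = ens.shape
    xbar = ens.mean(axis=1, keepdims=True)
    A = ens - xbar                                  # ensemble anomalies
    P = A @ A.T / (N - 1)                           # sample forecast covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)    # Kalman gain
    # perturbed observations: one noisy copy of y per ensemble member
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, N).T
    return ens + K @ (Y - H @ ens)

rng = np.random.default_rng(0)
ens = rng.standard_normal((1, 500))                 # prior ensemble ~ N(0, 1)
post = enkf_analysis(ens, np.array([2.0]),
                     np.array([[1.0]]), np.array([[1.0]]), rng)
```

With prior and observation variances both 1, the gain is about 0.5, so the analysis mean sits roughly halfway between the prior mean and the observation, and the ensemble spread shrinks, which is the Kalman update realized through samples.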
Dropout from higher education: A theoretical synthesis of recent research. Review of Educational Research, 1975
"... Despite the very extensive literature on dropout from higher education, much remains unknown about the nature of the dropout process. In large measure, the failure of past research to delineate more clearly the multiple characteristics of dropout can be traced to two major shortcomings; namely, inadequate attention given to questions of definition and to the development of theoretical models that seek to explain, not simply to describe, the processes that bring individuals to leave institutions of higher education. With regard to the former, inadequate attention given to definition has often led ..."
Cited by 798 (2 self)
Why there are complementary learning systems in the hippocampus and neocortex: insights from the successes and failures of connectionist models of learning and memory, 1995
"... Damage to the hippocampal system disrupts recent memory but leaves remote memory intact. The account presented here suggests that memories are first stored via synaptic changes in the hippocampal system, that these changes support reinstatement of recent memories in the neocortex, that neocortical s ... with learning about other items. This suggests that the neocortex learns slowly to discover the structure in ensembles of experiences. The hippocampal system permits rapid learning of new items without disrupting this structure, and reinstatement of new memories interleaves them with others to integrate them ..."
Cited by 675 (39 self)
Experiments with a New Boosting Algorithm, 1996
"... In an earlier paper, we introduced a new “boosting” algorithm called AdaBoost which, theoretically, can be used to significantly reduce the error of any learning algorithm that consistently generates classifiers whose performance is a little better than random guessing. We also introduced the relate ..."
Cited by 2213 (20 self)
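The boosting idea described in the entry above can be sketched with AdaBoost over 1-D threshold stumps; a minimal pure-Python illustration of the standard algorithm (reweight mistakes up, combine stumps by weighted vote), not the paper's own code:

```python
import math

def stump_predict(theta, sign, x):
    """Threshold stump: sign if x > theta, else -sign (labels in {-1, +1})."""
    return sign if x > theta else -sign

def adaboost(X, y, T=20):
    n = len(X)
    w = [1.0 / n] * n                       # uniform initial weights
    H = []                                  # list of (alpha, theta, sign)
    thresholds = sorted(set(X))
    for _ in range(T):
        # weak learner: stump with the smallest weighted training error
        err, theta, sign = min(
            (sum(wi for wi, xi, yi in zip(w, X, y)
                 if stump_predict(th, s, xi) != yi), th, s)
            for th in thresholds for s in (+1, -1))
        if err == 0:                        # perfect stump: take it and stop
            H.append((1.0, theta, sign))
            break
        if err >= 0.5:                      # no better than random: stop
            break
        alpha = 0.5 * math.log((1 - err) / err)
        H.append((alpha, theta, sign))
        # reweight: up-weight mistakes, down-weight hits, renormalize
        w = [wi * math.exp(-alpha * yi * stump_predict(theta, sign, xi))
             for wi, xi, yi in zip(w, X, y)]
        Z = sum(w)
        w = [wi / Z for wi in w]
    return H

def predict(H, x):
    """Weighted majority vote of the accumulated stumps."""
    return 1 if sum(a * stump_predict(th, s, x) for a, th, s in H) > 0 else -1
```

On the interval-labeled data `X = [0, 1, 2, 3]`, `y = [-1, 1, 1, -1]`, no single stump is correct, but three boosted stumps classify every point, which is the "weak learners into a strong one" effect the abstract refers to.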
New results in linear filtering and prediction theory. Trans. ASME, Ser. D, J. Basic Eng., 1961
"... A nonlinear differential equation of the Riccati type is derived for the covariance matrix of the optimal filtering error. The solution of this "variance equation" completely specifies the optimal filter for either finite or infinite smoothing intervals and stationary or nonstationary sta ... in this field. The Duality Principle relating stochastic estimation and deterministic control problems plays an important role in the proof of theoretical results. In several examples, the estimation problem and its dual are discussed side-by-side. Properties of the variance equation are of great interest ..."
Cited by 607 (0 self)
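The "variance equation" idea in the entry above can be illustrated with its discrete-time scalar analogue (the paper itself derives the continuous-time matrix equation): iterate the Riccati recursion for the filtering error variance P of the model x' = a·x + w, y = c·x + v until it reaches steady state. All parameter values below are illustrative assumptions:

```python
def riccati_step(P, a, c, q, r):
    """One discrete-time Riccati update of the scalar error variance P,
    with process-noise variance q and measurement-noise variance r."""
    K = P * c / (c * P * c + r)        # Kalman gain
    P_post = (1 - K * c) * P           # variance after the measurement update
    return a * P_post * a + q          # predicted variance for the next step

def steady_state(a, c, q, r, P0=1.0, iters=200):
    """Iterate the variance equation to its fixed point."""
    P = P0
    for _ in range(iters):
        P = riccati_step(P, a, c, q, r)
    return P
```

For a = c = q = r = 1 the fixed point solves P = P/(P + 1) + 1, i.e. P² − P − 1 = 0, so the steady-state variance is the golden ratio (1 + √5)/2; once P has converged, the gain K is fixed and the optimal filter is completely specified, which is the sense in which the variance equation determines the filter.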
Boosting the margin: A new explanation for the effectiveness of voting methods. In Proceedings, International Conference on Machine Learning, 1997
"... One of the surprising recurring phenomena observed in experiments with boosting is that the test error of the generated classifier usually does not increase as its size becomes very large, and often is observed to decrease even after the training error reaches zero. In this paper, we show that this ... that techniques used in the analysis of Vapnik’s support vector classifiers and of neural networks with small weights can be applied to voting methods to relate the margin distribution to the test error. We also show theoretically and experimentally that boosting is especially effective at increasing the margins ..."
Cited by 897 (52 self)