Results 1 - 10 of 179,781
The Nature of Statistical Learning Theory
1999
"... Statistical learning theory was introduced in the late 1960’s. Until the 1990’s it was a purely theoretical analysis of the problem of function estimation from a given collection of data. In the middle of the 1990’s new types of learning algorithms (called support vector machines) based on the deve ..."
Cited by 13236 (32 self)
A Simple, Fast, and Accurate Algorithm to Estimate Large Phylogenies by Maximum Likelihood
2003
"... The increase in the number of large data sets and the complexity of current probabilistic sequence evolution models necessitates fast and reliable phylogeny reconstruction methods. We describe a new approach, based on the maximumlikelihood principle, which clearly satisfies these requirements. The ..."
Cited by 2182 (27 self)
... The core of this method is a simple hill-climbing algorithm that adjusts tree topology and branch lengths simultaneously. This algorithm starts from an initial tree built by a fast distance-based method and modifies this tree to improve its likelihood at each iteration. Due to this simultaneous adjustment ...
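The excerpt describes the method only at the level of its search strategy: start from a quickly built tree and repeatedly apply the move that most improves the score. The Python skeleton below illustrates that generic hill-climbing pattern; the score() and neighbours() callables are placeholder assumptions, not the paper's likelihood computation or tree-rearrangement operators.

# Generic hill-climbing skeleton: keep taking the best-scoring neighbouring
# solution until no move improves the current score. score() and neighbours()
# are hypothetical stand-ins for the paper's likelihood evaluation and
# topology/branch-length adjustments.
def hill_climb(initial, neighbours, score, max_iters=1000):
    current, current_score = initial, score(initial)
    for _ in range(max_iters):
        best, best_score = None, current_score
        for candidate in neighbours(current):
            s = score(candidate)
            if s > best_score:
                best, best_score = candidate, s
        if best is None:                 # local optimum: no improving move left
            break
        current, current_score = best, best_score
    return current, current_score

# Toy usage: maximise -(x - 3)^2 over the integers by +/-1 moves.
print(hill_climb(0, lambda x: [x - 1, x + 1], lambda x: -(x - 3) ** 2))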
The CN2 Induction Algorithm
Machine Learning, 1989
"... Systems for inducing concept descriptions from examples are valuable tools for assisting in the task of knowledge acquisition for expert systems. This paper presents a description and empirical evaluation of a new induction system, cn2, designed for the efficient induction of simple, comprehensib ..."
Cited by 890 (6 self)
Rapid object detection using a boosted cascade of simple features
Conference on Computer Vision and Pattern Recognition, 2001
"... This paper describes a machine learning approach for visual object detection which is capable of processing images extremely rapidly and achieving high detection rates. This work is distinguished by three key contributions. The first is the introduction of a new image representation called the " ..."
Cited by 3283 (9 self)
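The quoted abstract breaks off before naming the representation; the well-known contribution of this paper is the integral image (a summed-area table), which makes the pixel sum over any rectangle available from a handful of array lookups. A minimal NumPy sketch of that data structure follows; it illustrates the representation only, not the paper's feature set or cascaded detector.

import numpy as np

def integral_image(img):
    # ii[y, x] = sum of img[0:y+1, 0:x+1] (summed-area table)
    return img.cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, top, left, bottom, right):
    # Sum of img[top:bottom+1, left:right+1] from at most four table lookups.
    total = ii[bottom, right]
    if top > 0:
        total -= ii[top - 1, right]
    if left > 0:
        total -= ii[bottom, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

# Quick check on a small image.
img = np.arange(16, dtype=float).reshape(4, 4)
ii = integral_image(img)
assert box_sum(ii, 1, 1, 2, 2) == img[1:3, 1:3].sum()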
Experiments with a New Boosting Algorithm
1996
"... In an earlier paper, we introduced a new “boosting” algorithm called AdaBoost which, theoretically, can be used to significantly reduce the error of any learning algorithm that consistently generates classifiers whose performance is a little better than random guessing. We also introduced the relate ..."
Cited by 2213 (20 self)
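The snippet states the guarantee behind AdaBoost: weak classifiers that are only slightly better than random guessing can be combined, by reweighting the training examples, into a much more accurate ensemble. Below is a compact sketch of the standard discrete AdaBoost loop; the decision stumps and the {-1, +1} label convention are illustrative choices, not the experimental setup of the paper.

import numpy as np

def train_stump(X, y, w):
    # Weighted best single-feature threshold classifier (the weak learner here).
    best = (0, 0.0, 1, np.inf)                  # feature, threshold, polarity, error
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = np.where(pol * (X[:, j] - thr) > 0, 1, -1)
                err = w[pred != y].sum()
                if err < best[3]:
                    best = (j, thr, pol, err)
    return best

def adaboost(X, y, rounds=20):
    # Discrete AdaBoost for labels y in {-1, +1}.
    w = np.full(len(y), 1.0 / len(y))           # example weights
    ensemble = []
    for _ in range(rounds):
        j, thr, pol, err = train_stump(X, y, w)
        err = min(max(err, 1e-12), 1 - 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)   # weight of this weak learner's vote
        pred = np.where(pol * (X[:, j] - thr) > 0, 1, -1)
        w *= np.exp(-alpha * y * pred)          # up-weight the examples it got wrong
        w /= w.sum()
        ensemble.append((alpha, j, thr, pol))
    return ensemble

def predict(ensemble, X):
    # Weighted majority vote of the weak learners.
    votes = sum(a * np.where(p * (X[:, j] - t) > 0, 1, -1) for a, j, t, p in ensemble)
    return np.where(votes >= 0, 1, -1)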
Discriminative Training Methods for Hidden Markov Models: Theory and Experiments with Perceptron Algorithms
2002
"... We describe new algorithms for training tagging models, as an alternative to maximum-entropy models or conditional random fields (CRFs). The algorithms rely on Viterbi decoding of training examples, combined with simple additive updates. We describe theory justifying the algorithms through a modific ..."
Cited by 660 (13 self)
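The snippet names the two ingredients: Viterbi decoding of each training example under the current weights, and a simple additive update toward the gold tag sequence when the decode is wrong. A small sketch of that loop follows; the feature templates (tag/word emission and tag-bigram counts) are a simplifying assumption, not the feature set used in the paper's experiments.

from collections import defaultdict

def features(words, tags):
    # Emission (tag, word) and transition (previous tag, tag) counts.
    f = defaultdict(int)
    prev = "<s>"
    for word, tag in zip(words, tags):
        f[("emit", tag, word)] += 1
        f[("trans", prev, tag)] += 1
        prev = tag
    return f

def viterbi(words, tagset, w):
    # Highest-scoring tag sequence under first-order transition features.
    delta, back = {"<s>": 0.0}, []
    for word in words:
        new_delta, pointers = {}, {}
        for tag in tagset:
            scores = {prev: s + w.get(("trans", prev, tag), 0.0)
                              + w.get(("emit", tag, word), 0.0)
                      for prev, s in delta.items()}
            best_prev = max(scores, key=scores.get)
            new_delta[tag], pointers[tag] = scores[best_prev], best_prev
        delta = new_delta
        back.append(pointers)
    tag = max(delta, key=delta.get)
    tags = [tag]
    for i in range(len(words) - 1, 0, -1):
        tag = back[i][tag]
        tags.append(tag)
    return list(reversed(tags))

def train(data, tagset, epochs=5):
    # Structured perceptron: add gold features, subtract predicted features.
    # data: list of (words, gold_tags) pairs; tagset: the set of possible tags.
    w = defaultdict(float)
    for _ in range(epochs):
        for words, gold in data:
            pred = viterbi(words, tagset, w)
            if pred != gold:
                for feat, count in features(words, gold).items():
                    w[feat] += count
                for feat, count in features(words, pred).items():
                    w[feat] -= count
    return w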
The geometry of graphs and some of its algorithmic applications
Combinatorica, 1995
"... In this paper we explore some implications of viewing graphs as geometric objects. This approach offers a new perspective on a number of graph-theoretic and algorithmic problems. There are several ways to model graphs geometrically and our main concern here is with geometric representations that res ..."
Cited by 524 (19 self)
Simple Heuristics That Make Us Smart
2008
"... To survive in a world where knowledge is limited, time is pressing, and deep thought is often an unattainable luxury, decision-makers must use bounded rationality. In this precis of Simple heuristics that make us smart, we explore fast and frugal heuristics—simple rules for making decisions with re ..."
Cited by 456 (15 self)
... algorithms, particularly when generalizing to new data—simplicity leads to robustness.
A new learning algorithm for blind signal separation
1996
"... A new on-line learning algorithm which minimizes a statistical de-pendency among outputs is derived for blind separation of mixed signals. The dependency is measured by the average mutual in-formation (MI) of the outputs. The source signals and the mixing matrix are unknown except for the number of ..."
Cited by 622 (80 self)
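The snippet describes the objective (make the separator's outputs as independent as possible, with mutual information as the measure); a standard update of this kind, associated with this line of work, is the natural-gradient rule W <- W + lr * (I - g(y) y^T) W for a fixed nonlinearity g. The batch sketch below uses g = tanh, which suits super-Gaussian sources and is an assumption of the example, not a detail taken from the quoted text.

import numpy as np

def ica_natural_gradient(X, lr=0.01, epochs=300, seed=0):
    # Adapt an unmixing matrix W so the rows of Y = W X become (roughly) independent.
    # Batch update: W += lr * (I - g(Y) Y^T / T) W, with g = tanh (assumed nonlinearity).
    rng = np.random.default_rng(seed)
    n, T = X.shape
    W = np.eye(n) + 0.01 * rng.standard_normal((n, n))
    I = np.eye(n)
    for _ in range(epochs):
        Y = W @ X                                # current source estimates
        W += lr * (I - (np.tanh(Y) @ Y.T) / T) @ W
    return W

# Toy usage: mix two independent super-Gaussian sources, then unmix them.
rng = np.random.default_rng(1)
S = rng.laplace(size=(2, 5000))                  # unknown sources
A = np.array([[1.0, 0.6], [0.5, 1.0]])           # unknown mixing matrix
X = A @ S
W = ica_natural_gradient(X)
Y = W @ X                                        # estimates, up to scale and permutation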
A New Polynomial-Time Algorithm for Linear Programming
Combinatorica, 1984
"... We present a new polynomial-time algorithm for linear programming. In the worst case, the algorithm requires O(tf'SL) arithmetic operations on O(L) bit numbers, where n is the number of variables and L is the number of bits in the input. The running,time of this algorithm is better than the ell ..."
Cited by 860 (3 self)