
CiteSeerX

Results 1 - 10 of 114,579

The Nature of Statistical Learning Theory

by Vladimir N. Vapnik, 1999
"... Statistical learning theory was introduced in the late 1960’s. Until the 1990’s it was a purely theoretical analysis of the problem of function estimation from a given collection of data. In the middle of the 1990’s new types of learning algorithms (called support vector machines) based on the deve ..."
Cited by 13236 (32 self)
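
The support vector machines the snippet refers to fit a maximum-margin separator. As a rough illustration of the idea (not Vapnik's formulation), here is a toy soft-margin linear SVM trained by subgradient descent on the regularized hinge loss; the data, step size, and regularization constant are invented for the demo.

    import numpy as np

    def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
        # Minimize lam/2 * ||w||^2 + mean(max(0, 1 - y*(X@w + b))).
        n, d = X.shape
        w, b = np.zeros(d), 0.0
        for _ in range(epochs):
            margins = y * (X @ w + b)
            active = margins < 1                       # margin violators
            grad_w = lam * w - (y[active][:, None] * X[active]).sum(axis=0) / n
            grad_b = -y[active].sum() / n
            w -= lr * grad_w
            b -= lr * grad_b
        return w, b

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
    y = np.array([-1] * 50 + [1] * 50)                 # labels in {-1, +1}
    w, b = train_linear_svm(X, y)
    print("training accuracy:", (np.sign(X @ w + b) == y).mean())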

Invitation to Fixed-Parameter Algorithms

by Rolf Niedermeier, 2002
"... ..."
Abstract - Cited by 447 (79 self) - Add to MetaCart
Abstract not found

Techniques For Practical Fixed-Parameter Algorithms

by Falk Hüffner, Rolf Niedermeier, Sebastian Wernicke, 2007
"... The fixed-parameter approach is an algorithm design technique for solving combinatorially hard (mostly NP-hard) problems. For some of these problems, it can lead to algorithms that are both efficient and yet at the same time guaranteed to find optimal solutions. Focusing on their application to solv ..."
Abstract - Cited by 23 (8 self) - Add to MetaCart
to solving NP-hard problems in practice, we survey three main techniques to develop fixed-parameter algorithms, namely: kernelization (data reduction with provable performance guarantee), depthbounded search trees and a new technique called iterative compression. Our discussion is circumstantiated by several
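
Of the three techniques, the depth-bounded search tree is the quickest to demonstrate. The sketch below applies it to Vertex Cover: pick an uncovered edge, branch on which endpoint enters the cover, and cut the search off at depth k, for O(2^k * |E|)-style running time. The edge-list encoding and the example graph are mine, not the survey's.

    def vertex_cover(edges, k):
        # Depth-bounded search tree: return a cover of size <= k, else None.
        if not edges:
            return set()
        if k == 0:
            return None
        u, v = edges[0]
        # Branch on which endpoint of the chosen edge joins the cover.
        for endpoint in (u, v):
            rest = [e for e in edges if endpoint not in e]
            sub = vertex_cover(rest, k - 1)
            if sub is not None:
                return sub | {endpoint}
        return None

    cycle = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]   # 5-cycle
    print(vertex_cover(cycle, 2))   # None: a 5-cycle needs 3 vertices
    print(vertex_cover(cycle, 3))   # a size-3 cover, e.g. {0, 1, 3}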

Experiments with a New Boosting Algorithm

by Yoav Freund, Robert E. Schapire, 1996
"... In an earlier paper, we introduced a new “boosting” algorithm called AdaBoost which, theoretically, can be used to significantly reduce the error of any learning algorithm that consistently generates classifiers whose performance is a little better than random guessing. We also introduced the relate ..."
Cited by 2213 (20 self)
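
The AdaBoost loop itself is compact enough to sketch. This NumPy version boosts decision stumps, reweighting examples by exp(-alpha * y * h(x)) each round so the next stump focuses on previous mistakes; the exhaustive stump search and the toy data are illustrative choices, not the paper's experimental setup.

    import numpy as np

    def stump_predict(X, j, t, s):
        return np.where(s * (X[:, j] - t) >= 0, 1, -1)

    def adaboost(X, y, rounds=50):
        n = len(y)
        w = np.full(n, 1.0 / n)                  # example weights
        ensemble = []
        for _ in range(rounds):
            # Pick the stump (feature j, threshold t, sign s) with the
            # lowest weighted training error.
            err, j, t, s = min(
                ((w[stump_predict(X, j, t, s) != y].sum(), j, t, s)
                 for j in range(X.shape[1])
                 for t in np.unique(X[:, j])
                 for s in (1, -1)),
                key=lambda cand: cand[0])
            err = np.clip(err, 1e-12, 1 - 1e-12)
            alpha = 0.5 * np.log((1 - err) / err)
            w *= np.exp(-alpha * y * stump_predict(X, j, t, s))  # upweight mistakes
            w /= w.sum()
            ensemble.append((alpha, j, t, s))
        return ensemble

    def predict(ensemble, X):
        return np.sign(sum(a * stump_predict(X, j, t, s)
                           for a, j, t, s in ensemble))

    # An interval concept that no single stump can represent.
    X = np.arange(10, dtype=float).reshape(-1, 1)
    y = np.where((X[:, 0] >= 3) & (X[:, 0] <= 6), 1, -1)
    ens = adaboost(X, y)
    print("train accuracy:", (predict(ens, X) == y).mean())   # typically 1.0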

A new learning algorithm for blind signal separation

by S. Amari, A. Cichocki, H. H. Yang, 1996
"... A new on-line learning algorithm which minimizes a statistical de-pendency among outputs is derived for blind separation of mixed signals. The dependency is measured by the average mutual in-formation (MI) of the outputs. The source signals and the mixing matrix are unknown except for the number of ..."
Cited by 622 (80 self)
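
The rule in question is the natural-gradient update dW = lr * (I - g(y) y^T) W with a nonlinearity g matched to the sources. The batch sketch below applies it with g = tanh to a 2 x 2 synthetic mixture; the mixing matrix, Laplacian sources, and step size are all invented for the demo.

    import numpy as np

    rng = np.random.default_rng(1)
    n, T = 2, 5000
    S = rng.laplace(size=(n, T))               # super-Gaussian source signals
    A = np.array([[1.0, 0.6], [0.4, 1.0]])     # "unknown" mixing matrix
    X = A @ S                                  # observed mixtures

    W = np.eye(n)                              # unmixing matrix estimate
    lr = 0.01
    for _ in range(300):
        Y = W @ X                              # current output estimates
        g = np.tanh(Y)                         # nonlinearity for the MI gradient
        W += lr * (np.eye(n) - (g @ Y.T) / T) @ W   # natural-gradient step

    # If separation worked, W @ A is close to a scaled permutation matrix.
    print(np.round(W @ A, 2))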

A New Polynomial-Time Algorithm for Linear Programming

by N. Karmarkar - COMBINATORICA, 1984
"... We present a new polynomial-time algorithm for linear programming. In the worst case, the algorithm requires O(tf'SL) arithmetic operations on O(L) bit numbers, where n is the number of variables and L is the number of bits in the input. The running,time of this algorithm is better than the ell ..."
Cited by 860 (3 self)
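
Karmarkar's projective method is intricate, but the interior-point idea behind it can be sketched with the simpler affine-scaling iteration: rescale the problem by the current iterate, move along the projected negative cost, and stop short of the boundary. This is a generic interior-point toy, not Karmarkar's algorithm, and the small LP is made up for the demo.

    import numpy as np

    def affine_scaling(A, b, c, x, gamma=0.66, tol=1e-8, max_iter=200):
        # Minimize c@x subject to A@x == b, x >= 0, from strictly feasible x.
        for _ in range(max_iter):
            D2 = np.diag(x ** 2)                           # rescale by iterate
            w = np.linalg.solve(A @ D2 @ A.T, A @ D2 @ c)  # dual estimate
            r = c - A.T @ w                                # reduced costs
            if abs(x @ r) < tol:                           # complementarity gap
                break
            dx = -D2 @ r                     # descent direction in scaled space
            neg = dx < 0
            if not neg.any():
                raise ValueError("LP is unbounded below")
            step = gamma * np.min(-x[neg] / dx[neg])       # stay interior
            x = x + step * dx
        return x

    # min -x1 - 2*x2  s.t.  x1 + x2 <= 4 and x1 + 3*x2 <= 6 (x3, x4 slacks)
    A = np.array([[1.0, 1, 1, 0], [1, 3, 0, 1]])
    b = np.array([4.0, 6])
    c = np.array([-1.0, -2, 0, 0])
    x0 = np.array([1.0, 1, 2, 2])          # strictly feasible interior point
    print(np.round(affine_scaling(A, b, c, x0), 4))   # about [3, 1, 0, 0]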

The geometry of algorithms with orthogonality constraints

by Alan Edelman, Tomás A. Arias, Steven T. Smith - SIAM J. MATRIX ANAL. APPL., 1998
"... In this paper we develop new Newton and conjugate gradient algorithms on the Grassmann and Stiefel manifolds. These manifolds represent the constraints that arise in such areas as the symmetric eigenvalue problem, nonlinear eigenvalue problems, electronic structures computations, and signal proces ..."
Cited by 640 (1 self)
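
For a taste of what such algorithms look like, the sketch below runs Riemannian gradient ascent on the Stiefel manifold for the symmetric eigenvalue problem, maximizing trace(Y^T A Y) subject to Y^T Y = I: project the Euclidean gradient onto the tangent space, step, and retract with a QR factorization. The tangent projection and QR retraction are one standard pairing, not necessarily the paper's Newton or conjugate gradient machinery.

    import numpy as np

    def stiefel_ascent(A, p, steps=1000, lr=0.01, seed=0):
        # Maximize trace(Y.T @ A @ Y) over n-by-p Y with Y.T @ Y = I.
        rng = np.random.default_rng(seed)
        Y, _ = np.linalg.qr(rng.normal(size=(A.shape[0], p)))
        for _ in range(steps):
            G = 2 * A @ Y                          # Euclidean gradient
            sym = (Y.T @ G + G.T @ Y) / 2
            xi = G - Y @ sym                       # tangent-space projection
            Y, _ = np.linalg.qr(Y + lr * xi)       # QR retraction to the manifold
        return Y

    rng = np.random.default_rng(42)
    M = rng.normal(size=(6, 6))
    A = (M + M.T) / 2                              # random symmetric test matrix
    Y = stiefel_ascent(A, p=2)
    # The trace should approach the sum of the two largest eigenvalues.
    print(np.trace(Y.T @ A @ Y), np.sort(np.linalg.eigvalsh(A))[-2:].sum())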

The Cache Performance and Optimizations of Blocked Algorithms

by Monica S. Lam, Edward E. Rothberg, Michael E. Wolf - In Proceedings of the Fourth International Conference on Architectural Support for Programming Languages and Operating Systems, 1991
"... Blocking is a well-known optimization technique for improving the effectiveness of memory hierarchies. Instead of operating on entire rows or columns of an array, blocked algorithms operate on submatrices or blocks, so that data loaded into the faster levels of the memory hierarchy are reused. This ..."
Cited by 574 (5 self)
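
The transformation is easy to see in code. The sketch below computes a matrix product block by block so that each submatrix is reused while it is resident in cache; block size and matrix shapes are arbitrary choices for the demo, and in NumPy the slicing mainly illustrates the loop structure rather than winning performance.

    import numpy as np

    def blocked_matmul(A, B, block=64):
        # C = A @ B computed on block-by-block submatrices, so each block of
        # A, B, and C is reused while it sits in the faster memory levels.
        n, k = A.shape
        k2, m = B.shape
        assert k == k2
        C = np.zeros((n, m))
        for i in range(0, n, block):
            for j in range(0, m, block):
                for l in range(0, k, block):
                    C[i:i+block, j:j+block] += (
                        A[i:i+block, l:l+block] @ B[l:l+block, j:j+block]
                    )
        return C

    A = np.random.rand(200, 300)
    B = np.random.rand(300, 150)
    print(np.allclose(blocked_matmul(A, B), A @ B))   # True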

The CN2 Induction Algorithm

by Peter Clark, Tim Niblett - MACHINE LEARNING, 1989
"... Systems for inducing concept descriptions from examples are valuable tools for assisting in the task of knowledge acquisition for expert systems. This paper presents a description and empirical evaluation of a new induction system, cn2, designed for the efficient induction of simple, comprehensib ..."
Cited by 890 (6 self)
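
The full CN2 system couples a beam search over rule conditions with a significance test, which is more than fits here. As a drastically simplified taste of rule induction, this sketch greedily scores single attribute-value tests by the class entropy of the examples they cover; the toy attributes and data are invented.

    from collections import Counter
    import math

    def entropy(labels):
        n = len(labels)
        return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

    def best_condition(examples, labels):
        # Find the attribute=value test whose covered examples are "purest".
        best = None
        for attr in examples[0]:
            for value in {e[attr] for e in examples}:
                covered = [y for e, y in zip(examples, labels) if e[attr] == value]
                score = entropy(covered)
                if best is None or score < best[0]:
                    best = (score, attr, value, Counter(covered).most_common(1)[0][0])
        return best

    examples = [
        {"outlook": "sunny", "windy": "no"},
        {"outlook": "sunny", "windy": "yes"},
        {"outlook": "rain",  "windy": "no"},
        {"outlook": "rain",  "windy": "yes"},
    ]
    labels = ["play", "play", "play", "stay"]
    score, attr, value, cls = best_condition(examples, labels)
    print(f"IF {attr}={value} THEN {cls}  (entropy {score:.2f})")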

Data Streams: Algorithms and Applications

by S. Muthukrishnan, 2005
"... In the data stream scenario, input arrives very rapidly and there is limited memory to store the input. Algorithms have to work with one or few passes over the data, space less than linear in the input size or time significantly less than the input size. In the past few years, a new theory has emerg ..."
Cited by 533 (22 self)
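
One classic result from this theory is easy to show: the Misra-Gries summary finds the heavy hitters of a stream in a single pass with at most k - 1 counters, guaranteeing that any item occurring more than n/k times survives. It is a textbook streaming algorithm offered here as a flavor of the area, not one specific to this survey.

    def misra_gries(stream, k):
        # One-pass summary using at most k-1 counters. Any item with true
        # frequency > n/k is guaranteed to end up among the counters.
        counters = {}
        for item in stream:
            if item in counters:
                counters[item] += 1
            elif len(counters) < k - 1:
                counters[item] = 1
            else:
                # Decrement every counter, dropping those that reach zero:
                # this "charges" the new item against k-1 existing ones.
                for key in list(counters):
                    counters[key] -= 1
                    if counters[key] == 0:
                        del counters[key]
        return counters

    stream = list("abracadabra" * 3)   # 'a' occurs 15 of 33 times
    print(misra_gries(stream, k=3))    # 'a' must appear; counts are lower bounds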