Results 11 – 20 of 824,525
Training Support Vector Machines: an Application to Face Detection
1997
"... We investigate the application of Support Vector Machines (SVMs) in computer vision. SVM is a learning technique developed by V. Vapnik and his team (AT&T Bell Labs.) that can be seen as a new method for training polynomial, neural network, or Radial Basis Functions classifiers. The decision sur ..."
Cited by 715 (1 self)
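The snippet frames SVMs as a single training procedure covering polynomial, neural-network-like, and RBF classifiers. A minimal sketch of that framing using scikit-learn's SVC on toy data (the data and hyperparameters are mine; this is not the authors' 1997 face-detection system):

```python
import numpy as np
from sklearn import svm

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1.0).astype(int)  # toy labels, not linearly separable

# One training procedure, three classical classifier families, selected by kernel:
# polynomial, sigmoid (neural-network-like), and Gaussian RBF.
for kernel in ("poly", "sigmoid", "rbf"):
    clf = svm.SVC(kernel=kernel, degree=3, gamma="scale").fit(X, y)
    print(f"{kernel}: train accuracy = {clf.score(X, y):.2f}")
```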
Greed is Good: Algorithmic Results for Sparse Approximation
2004
"... This article presents new results on using a greedy algorithm, orthogonal matching pursuit (OMP), to solve the sparse approximation problem over redundant dictionaries. It provides a sufficient condition under which both OMP and Donoho’s basis pursuit (BP) paradigm can recover the optimal representa ..."
Cited by 904 (9 self)
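The greedy loop the snippet refers to is short enough to state directly. A minimal numpy sketch (the function name and toy setup are mine; BP, which requires solving a linear program, is not shown):

```python
import numpy as np

def omp(D, x, k):
    """Orthogonal matching pursuit: greedily pick k columns (atoms) of the
    redundant dictionary D (unit-norm columns) to approximate the signal x."""
    residual, support = x.copy(), []
    for _ in range(k):
        # Greedy selection: atom most correlated with the current residual.
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        # Orthogonal update: least-squares refit on the chosen support.
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    return support, coef

# Recover a 3-sparse signal over a random overcomplete dictionary.
rng = np.random.default_rng(1)
D = rng.normal(size=(64, 256))
D /= np.linalg.norm(D, axis=0)
x = D[:, [5, 50, 200]] @ np.array([1.0, -2.0, 0.5])
print(sorted(omp(D, x, 3)[0]))  # typically recovers [5, 50, 200]
```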
A tutorial on support vector machines for pattern recognition
Data Mining and Knowledge Discovery, 1998
"... The tutorial starts with an overview of the concepts of VC dimension and structural risk minimization. We then describe linear Support Vector Machines (SVMs) for separable and nonseparable data, working through a nontrivial example in detail. We describe a mechanical analogy, and discuss when SV ..."
Cited by 3306 (12 self)
... large (even infinite) VC dimension by computing the VC dimension for homogeneous polynomial and Gaussian radial basis function kernels. While very high VC dimension would normally bode ill for generalization performance, and while at present there exists no theory which shows that good generalization ...
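The two kernels named in the excerpt are one-liners; a small sketch assuming the usual definitions (the tutorial's VC-dimension computation itself is not reproduced here):

```python
import numpy as np

def poly_kernel(x, z, p):
    # Homogeneous polynomial kernel K(x, z) = (x . z)^p; its feature space
    # has finite (though large) dimension, hence finite VC dimension.
    return (x @ z) ** p

def rbf_kernel(x, z, sigma):
    # Gaussian RBF kernel; its feature space is infinite-dimensional,
    # the "even infinite" VC-dimension case the excerpt mentions.
    return np.exp(-np.sum((x - z) ** 2) / (2 * sigma ** 2))
```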
Large margin methods for structured and interdependent output variables
Journal of Machine Learning Research, 2005
"... Learning general functional dependencies between arbitrary input and output spaces is one of the key challenges in computational intelligence. While recent progress in machine learning has mainly focused on designing flexible and powerful input representations, this paper addresses the complementary ..."
Cited by 607 (12 self)
Proof verification and hardness of approximation problems
In Proc. 33rd Ann. IEEE Symp. on Found. of Comp. Sci., 1992
"... We show that every language in NP has a probablistic verifier that checks membership proofs for it using logarithmic number of random bits and by examining a constant number of bits in the proof. If a string is in the language, then there exists a proof such that the verifier accepts with probabilit ..."
Cited by 793 (39 self)
... in the proof (though this number is a very slowly growing function of the input length). As a consequence we prove that no MAX SNP-hard problem has a polynomial time approximation scheme, unless NP = P. The class MAX SNP was defined by Papadimitriou and Yannakakis [82] and hard problems for this class include ...
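In the PCP(r(n), q(n)) notation that became standard for this result, the verifier statement and its hardness consequence read:

```latex
% Verifier statement: membership checkable with O(log n) random bits
% and a constant number of queried proof bits.
\mathrm{NP} \;=\; \mathrm{PCP}\bigl(O(\log n),\, O(1)\bigr)
% Consequence from the second excerpt: no MAX SNP-hard problem admits a
% polynomial-time approximation scheme unless P = NP.
```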
Complete discrete 2D Gabor transforms by neural networks for image analysis and compression
1988
"... A threelayered neural network is described for transforming twodimensional discrete signals into generalized nonorthogonal 2D “Gabor” representations for image analysis, segmentation, and compression. These transforms are conjoint spatial/spectral representations [lo], [15], which provide a comp ..."
Cited by 471 (8 self)
... because the elementary expansion functions are not orthogonal. One orthogonalizing approach developed for 1D signals by Bastiaans [8], based on biorthonormal expansions, is restricted by constraints on the conjoint sampling rates and invariance of the windowing function, as well as by the fact ...
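A minimal numpy sketch of one 2D Gabor elementary function, a plane-wave carrier under a Gaussian envelope (this simplified isotropic parameterization and the function name are mine):

```python
import numpy as np

def gabor_2d(size, u0, v0, sigma):
    """One 2D Gabor elementary function: a complex plane wave at spatial
    frequency (u0, v0), windowed by an isotropic Gaussian envelope."""
    y, x = np.mgrid[-(size // 2): size // 2 + 1, -(size // 2): size // 2 + 1]
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return envelope * np.exp(2j * np.pi * (u0 * x + v0 * y))

# Non-orthogonality, the crux of the excerpt: two members of the family
# generally have a nonzero inner product, so expansion coefficients cannot
# be obtained by simple projection (hence the paper's network-based solution).
g1, g2 = gabor_2d(32, 0.10, 0.0, 4.0), gabor_2d(32, 0.15, 0.0, 4.0)
print(abs(np.vdot(g1, g2)))  # clearly nonzero
```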
A hard-core predicate for all one-way functions
In Proceedings of the Twenty-First Annual ACM Symposium on Theory of Computing, 1989
"... Abstract rity of f. In fact, for inputs (to f*) of practical size, the pieces effected by f are so small A central tool in constructing pseudorandom that f can be inverted (and the “hardcore” generators, secure encryption functions, and bit computed) by exhaustive search. in other areas are “hardc ..."
Cited by 435 (5 self)
... (within a polynomial) ... 50-50) given only f(x). Both b, f are computable in polynomial time. [Yao 82] transforms any one-way function f into a more complicated one, f ... security. Namely, we prove a conjecture of [Levin 87, sec. 5.6.2] that the scalar product of boolean vectors p, x is a hard-core of every ...
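The predicate itself is tiny; a sketch of the scalar product over GF(2) and of the paired function the result applies to (the names and the toy stand-in for f are mine):

```python
def hard_core_bit(x: int, p: int) -> int:
    """Scalar product of boolean vectors x, p over GF(2):
    the parity of the bitwise AND, i.e. <p, x> mod 2."""
    return bin(x & p).count("1") % 2

# The theorem concerns f*(p, x) = (p, f(x)): given only that pair, the bit
# <p, x> mod 2 is as hard to guess as f is to invert.
def f(x: int) -> int:          # NOT one-way; placeholder for illustration only
    return (x * 0x9E3779B1) & 0xFFFFFFFF

p, x = 0b1011, 0b0110
print(hard_core_bit(x, p))     # the bit that f*(p, x) = (p, f(x)) hides
```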
Interprocedural dataflow analysis via graph reachability
1994
"... The paper shows how a large class of interprocedural dataflowanalysis problems can be solved precisely in polynomial time by transforming them into a special kind of graphreachability problem. The only restrictions are that the set of dataflow facts must be a finite set, and that the dataflow fun ..."
Cited by 449 (34 self)
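A toy sketch of the reduction's shape: facts are paired with program points to form graph nodes, flow functions become edges, and "fact d holds at point p" becomes plain reachability. The example graph is mine, and the paper's framework additionally matches calls with returns (context sensitivity), which simple BFS does not:

```python
from collections import deque

# Nodes are (program point, fact); "0" is the distinguished zero fact that
# lets distributive flow functions be represented purely as edges.
edges = {
    ("entry", "0"): [("p1", "0"), ("p1", "x")],  # x is generated at entry -> p1
    ("p1", "0"):    [("p2", "0")],
    ("p1", "x"):    [("p2", "x"), ("p2", "y")],  # x survives and also implies y
    ("p2", "0"):    [("exit", "0")],
    ("p2", "x"):    [],                          # x is killed before exit
    ("p2", "y"):    [("exit", "y")],
}

def reachable(src):
    seen, work = {src}, deque([src])
    while work:
        for succ in edges.get(work.popleft(), ()):
            if succ not in seen:
                seen.add(succ)
                work.append(succ)
    return seen

# Fact d holds at point p iff (p, d) is reachable from ("entry", "0").
print(sorted(n for n in reachable(("entry", "0")) if n[0] == "exit"))
```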
Benchmarking Least Squares Support Vector Machine Classifiers
Neural Processing Letters, 2001
"... In Support Vector Machines (SVMs), the solution of the classification problem is characterized by a (convex) quadratic programming (QP) problem. In a modified version of SVMs, called Least Squares SVM classifiers (LSSVMs), a least squares cost function is proposed so as to obtain a linear set of eq ..."
Cited by 450 (46 self)
... stage by gradually pruning the support value spectrum and optimizing the hyperparameters during the sparse approximation procedure. In this paper, twenty public domain benchmark datasets are used to evaluate the test set performance of LS-SVM classifiers with linear, polynomial and radial basis function ...
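The "linear set of equations" in the snippet is the classifier's KKT system; a numpy sketch under the usual LS-SVM formulation with an RBF kernel (variable names and toy hyperparameters are mine; labels are assumed to be in {-1, +1}):

```python
import numpy as np

def rbf(A, B, sigma):
    sq = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma=1.0, sigma=1.0):
    """LS-SVM classifier: the SVM's QP is replaced by one linear KKT system
       [[0, y^T], [y, Omega + I/gamma]] [b; alpha] = [0; 1],
    where Omega_ij = y_i y_j K(x_i, x_j)."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:], A[1:, 0] = y, y
    A[1:, 1:] = np.outer(y, y) * rbf(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.r_[0.0, np.ones(n)])
    return sol[0], sol[1:]                      # bias b, multipliers alpha

def lssvm_predict(X, y, b, alpha, Xq, sigma=1.0):
    # Decision function: sign(sum_i alpha_i y_i K(x, x_i) + b)
    return np.sign(rbf(Xq, X, sigma) @ (alpha * y) + b)
```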
The Wiener-Askey Polynomial Chaos for Stochastic Differential Equations
SIAM J. Sci. Comput., 2002
"... We present a new method for solving stochastic differential equations based on Galerkin projections and extensions of Wiener's polynomial chaos. Specifically, we represent the stochastic processes with an optimum trial basis from the Askey family of orthogonal polynomials that reduces the dime ..."
Cited by 369 (38 self)
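For the Gaussian member of the Askey family (probabilists' Hermite polynomials), the orthogonal projection the abstract describes fits in a few lines of numpy; a sketch (the helper name is mine; hermegauss and hermeval are real numpy functions):

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as H

def hermite_chaos_coeffs(f, order, quad_deg=40):
    """Project f(xi), xi ~ N(0,1), onto probabilists' Hermite polynomials:
    f = sum_k c_k He_k, with c_k = E[f He_k] / k! (since E[He_k^2] = k!)."""
    nodes, weights = H.hermegauss(quad_deg)   # quadrature for weight exp(-x^2/2)
    weights = weights / np.sqrt(2 * np.pi)    # renormalize to the N(0,1) density
    fx = f(nodes)
    return np.array([
        np.sum(weights * fx * H.hermeval(nodes, [0] * k + [1])) / math.factorial(k)
        for k in range(order + 1)
    ])

# Worked check: exp(xi) has Hermite chaos coefficients c_k = e^{1/2} / k!.
c = hermite_chaos_coeffs(np.exp, 6)
exact = np.exp(0.5) / np.array([math.factorial(k) for k in range(7)])
print(np.allclose(c, exact, atol=1e-8))       # True
```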