Results 1 - 10 of 1,258
Max-margin Markov networks, 2003
"... In typical classification tasks, we seek a function which assigns a label to a single object. Kernel-based approaches, such as support vector machines (SVMs), which maximize the margin of confidence of the classifier, are the method of choice for many such tasks. Their popularity stems both from the ..."
Cited by 604 (15 self)
"... for learning M3 networks based on a compact quadratic program formulation. We provide a new theoretical bound for generalization in structured domains. Experiments on the task of handwritten character recognition and collective hypertext classification demonstrate very significant gains over previous ..."
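The core of such max-margin structured formulations is a margin-rescaled hinge loss: the required margin for each wrong output scales with how wrong it is. A minimal sketch with a toy joint feature map and a 0/1 loss standing in for the paper's Hamming loss; the function names and data are hypothetical, not from the paper:

```python
import numpy as np

def joint_feature(x, y, n_labels=3):
    # toy joint feature map: copy the input into the block for label y
    f = np.zeros((n_labels, x.size))
    f[y] = x
    return f.ravel()

def structured_hinge(w, x, y_true, n_labels=3):
    # loss-augmented inference: find the output that most violates the
    # margin, where the required margin equals the (here 0/1) loss
    s_true = w @ joint_feature(x, y_true)
    violations = [(y != y_true) + w @ joint_feature(x, y) - s_true
                  for y in range(n_labels)]
    return max(0.0, max(violations))

x = np.array([1.0, -0.5])
w = np.zeros(x.size * 3)
print(structured_hinge(w, x, y_true=1))   # 1.0 for an untrained model
```

Summing this loss over training examples plus a quadratic regularizer on w gives the quadratic program the excerpt refers to.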
Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems
- IEEE Journal of Selected Topics in Signal Processing, 2007
"... Many problems in signal processing and statistical inference involve finding sparse solutions to under-determined, or ill-conditioned, linear systems of equations. A standard approach consists in minimizing an objective function which includes a quadratic (squared ℓ2) error term combined with a spa ..."
Cited by 539 (17 self)
"... bound-constrained quadratic programming (BCQP) formulation of these problems. We test variants of this approach that select the line search parameters in different ways, including techniques based on the Barzilai-Borwein method. Computational experiments show that these GP approaches perform well in a wide range ..."
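The recipe behind this family of methods: split x = u - v with u, v >= 0, which turns l1-regularized least squares into a bound-constrained QP, then take projected gradient steps. A minimal sketch using a plain fixed step size instead of the Barzilai-Borwein rules the paper tests; the function name, data, and parameter values are ours:

```python
import numpy as np

def gpsr_sketch(A, y, tau, n_iter=2000):
    # min 0.5*||y - A(u - v)||^2 + tau*1'u + tau*1'v  s.t.  u, v >= 0
    m, n = A.shape
    step = 1.0 / (2 * np.linalg.norm(A, 2) ** 2)   # safe fixed step
    u, v = np.zeros(n), np.zeros(n)
    for _ in range(n_iter):
        g = A.T @ (A @ (u - v) - y)                # gradient of the LS term
        u = np.maximum(u - step * (g + tau), 0)    # projected gradient steps
        v = np.maximum(v - step * (-g + tau), 0)
    return u - v

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100); x_true[[3, 50, 97]] = [1.0, -2.0, 0.5]
x_hat = gpsr_sketch(A, A @ x_true, tau=0.1)
print(np.flatnonzero(np.abs(x_hat) > 0.1))        # indices of the recovered support
```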
Large margin methods for structured and interdependent output variables
- Journal of Machine Learning Research, 2005
"... Learning general functional dependencies between arbitrary input and output spaces is one of the key challenges in computational intelligence. While recent progress in machine learning has mainly focused on designing flexible and powerful input representations, this paper addresses the complementary ..."
Cited by 624 (12 self)
"... to accomplish this, we propose to appropriately generalize the well-known notion of a separation margin and derive a corresponding maximum-margin formulation. While this leads to a quadratic program with a potentially prohibitive, i.e. exponential, number of constraints, we present a cutting plane algorithm ..."
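The cutting-plane idea: never instantiate the exponential constraint set; instead, repeatedly find the most violated margin constraint, add it to a small working set, and re-solve the restricted QP. A hedged sketch on a toy 3-class problem, with scipy's SLSQP standing in for a dedicated QP solver; the feature map, data, and tolerances are ours:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.standard_normal((15, 2)) + 3 * np.eye(3).repeat(5, axis=0)[:, :2]
Y = np.repeat(np.arange(3), 5)

def feat(x, y):                        # joint feature map, one block per class
    f = np.zeros(6); f[2 * y:2 * y + 2] = x
    return f

C, cuts, w_xi = 1.0, [], np.zeros(7)   # w_xi[:6] = w, w_xi[6] = shared slack
for _ in range(10):
    w, xi = w_xi[:6], w_xi[6]
    new = []
    for x, y in zip(X, Y):             # most violated constraint per example
        yb = max(range(3), key=lambda c: (c != y) + feat(x, c) @ w)
        if yb != y and 1 + feat(x, yb) @ w - feat(x, y) @ w > xi + 1e-6:
            new.append(feat(x, y) - feat(x, yb))
    if not new:
        break                          # no violated constraints left: done
    cuts += new
    cons = [{"type": "ineq", "fun": lambda v, d=d: v[:6] @ d - 1 + v[6]}
            for d in cuts]
    cons.append({"type": "ineq", "fun": lambda v: v[6]})   # slack >= 0
    w_xi = minimize(lambda v: 0.5 * v[:6] @ v[:6] + C * v[6],
                    w_xi, constraints=cons, method="SLSQP").x

pred = [max(range(3), key=lambda c: feat(x, c) @ w_xi[:6]) for x in X]
print(np.mean(np.array(pred) == Y))    # training accuracy of the sketch
```

The point of the loop is that the working set stays small: only constraints that were actually violated ever enter the QP.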
Benchmarking Least Squares Support Vector Machine Classifiers
- Neural Processing Letters, 2001
"... In Support Vector Machines (SVMs), the solution of the classification problem is characterized by a (convex) quadratic programming (QP) problem. In a modified version of SVMs, called Least Squares SVM classifiers (LS-SVMs), a least squares cost function is proposed so as to obtain a linear set of eq ..."
Cited by 476 (46 self)
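The LS-SVM trick in a nutshell: the least squares cost turns the SVM's QP into a single linear (KKT) system. A minimal sketch in the Suykens-style formulation; the RBF kernel, regularization value, and toy data are our choices:

```python
import numpy as np

def rbf(A, B, gamma=0.5):
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

def lssvm_fit(X, y, C=10.0):
    # KKT system: [[0, y^T], [y, Omega + I/C]] @ [b, alpha] = [0, 1]
    n = len(y)
    Omega = (y[:, None] * y[None, :]) * rbf(X, X)
    M = np.zeros((n + 1, n + 1))
    M[0, 1:] = y; M[1:, 0] = y
    M[1:, 1:] = Omega + np.eye(n) / C
    sol = np.linalg.solve(M, np.r_[0.0, np.ones(n)])
    return sol[0], sol[1:]             # bias b, dual weights alpha

def lssvm_predict(X, y, alpha, b, Xnew):
    return np.sign(rbf(Xnew, X) @ (alpha * y) + b)

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1, 0.5, (20, 2)), rng.normal(1, 0.5, (20, 2))])
y = np.r_[-np.ones(20), np.ones(20)]
b, alpha = lssvm_fit(X, y)
print((lssvm_predict(X, y, alpha, b, X) == y).mean())  # training accuracy
```

One linear solve replaces the QP, at the cost of losing sparseness in alpha, which is part of what the paper benchmarks.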
Multiple kernel learning, conic duality, and the SMO algorithm
- In Proceedings of the 21st International Conference on Machine Learning (ICML), 2004
"... While classical kernel-based classifiers are based on a single kernel, in practice it is often desirable to base classifiers on combinations of multiple kernels. Lanckriet et al. (2004) considered conic combinations of kernel matrices for the support vector machine (SVM), and showed that the optimiz ..."
Cited by 445 (31 self)
"... that the optimization of the coefficients of such a combination reduces to a convex optimization problem known as a quadratically-constrained quadratic program (QCQP). Unfortunately, current convex optimization toolboxes can solve this problem only for a small number of kernels and a small number of data points ..."
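What that QCQP looks like when handed to a generic toolbox: maximize the soft-margin dual objective with one quadratic constraint per candidate kernel. A hedged sketch in the Lanckriet et al. style, assuming cvxpy is available; the trace normalization and the value of C are our choices, not the paper's:

```python
import numpy as np
import cvxpy as cp

def mkl_qcqp(kernels, y, C=1.0):
    # SVM dual over a conic combination of kernels: the kernel weights
    # disappear into one quadratic constraint per kernel (variable t)
    n = len(y)
    alpha, t = cp.Variable(n), cp.Variable()
    cons = [alpha >= 0, alpha <= C, y @ alpha == 0]
    for K in kernels:
        Kn = K * n / np.trace(K)                    # trace-normalize
        G = (y[:, None] * y[None, :]) * Kn
        cons.append(cp.quad_form(alpha, cp.psd_wrap(G)) <= t)
    prob = cp.Problem(cp.Maximize(cp.sum(alpha) - 0.5 * t), cons)
    prob.solve()
    return alpha.value, t.value

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 3))
y = np.where(X[:, 0] > 0, 1.0, -1.0)
Ks = [X @ X.T,                                           # linear kernel
      np.exp(-((X[:, None] - X[None]) ** 2).sum(-1))]    # RBF kernel
alpha, t = mkl_qcqp(Ks, y)
print(alpha.round(3))
```

The excerpt's complaint is visible here: every kernel contributes an n-by-n quadratic constraint, which is what the paper's SMO-style algorithm is designed to avoid.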
Large scale multiple kernel learning
- Journal of Machine Learning Research, 2006
"... While classical kernel-based learning algorithms are based on a single kernel, in practice it is often desirable to use multiple kernels. Lanckriet et al. (2004) considered conic combinations of kernel matrices for classification, leading to a convex quadratically constrained quadratic program. We s ..."
Cited by 340 (20 self)
Support vector machines for multiple-instance learning
- Advances in Neural Information Processing Systems 15, 2003
"... This paper presents two new formulations of multiple-instance learning as a maximum margin problem. The proposed extensions of the Support Vector Machine (SVM) learning approach lead to mixed integer quadratic programs that can be solved heuristically. Our generalization of SVMs makes a state-of-the ..."
Cited by 314 (2 self)
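A common heuristic for such mixed integer programs alternates between training an ordinary SVM on imputed instance labels and re-imputing the labels inside positive bags, keeping at least one positive instance per positive bag. A sketch along the lines of the mi-SVM heuristic, with scikit-learn's SVC as the inner solver; the data and parameter choices are ours:

```python
import numpy as np
from sklearn.svm import SVC

def mi_svm_sketch(bags, bag_labels, n_iter=10, C=1.0):
    X = np.vstack(bags)
    idx = np.cumsum([0] + [len(b) for b in bags])
    # init: every instance inherits its bag's label
    y = np.concatenate([np.full(len(b), lb) for b, lb in zip(bags, bag_labels)])
    for _ in range(n_iter):
        clf = SVC(kernel="rbf", C=C).fit(X, y)
        f = clf.decision_function(X)
        y_new = y.copy()
        for i, lb in enumerate(bag_labels):
            s, e = idx[i], idx[i + 1]
            if lb == 1:
                y_new[s:e] = np.where(f[s:e] > 0, 1, -1)
                if (y_new[s:e] == 1).sum() == 0:      # MIL constraint:
                    y_new[s + np.argmax(f[s:e])] = 1  # >=1 positive per pos bag
        if np.array_equal(y_new, y):
            break                                     # labels stable: done
        y = y_new
    return clf

rng = np.random.default_rng(2)
neg = [rng.normal(0, 1, (5, 2)) for _ in range(10)]
pos = [np.vstack([rng.normal(0, 1, (4, 2)), rng.normal(4, 0.5, (1, 2))])
       for _ in range(10)]
clf = mi_svm_sketch(neg + pos, [-1] * 10 + [1] * 10)
```

Each pass is a standard convex SVM; only the discrete label imputation makes the overall problem mixed integer, which is why a heuristic alternation works well in practice.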
LOQO: An interior point code for quadratic programming, 1994
"... ABSTRACT. This paper describes a software package, called LOQO, which implements a primaldual interior-point method for general nonlinear programming. We focus in this paper mainly on the algorithm as it applies to linear and quadratic programming with only brief mention of the extensions to convex ..."
Cited by 194 (10 self)
"... formulated in such a way as to provide quasidefiniteness. Computational results are included for a variety of linear and quadratic programming problems. ..."
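To make the interior-point mechanics concrete, here is a bare-bones primal-dual iteration for the simplest QP with bounds, min 0.5 x'Qx + c'x subject to x >= 0. This is our own sketch of the flavor of method, not LOQO's algorithm: it omits equality constraints, the predictor-corrector step, and the quasidefinite factorizations the excerpt alludes to:

```python
import numpy as np

def qp_ipm_sketch(Q, c, n_iter=50, sigma=0.1, tol=1e-9):
    n = len(c)
    x, z = np.ones(n), np.ones(n)            # strictly positive start
    for _ in range(n_iter):
        mu = x @ z / n                        # duality measure
        if mu < tol:
            break
        r_d = Q @ x + c - z                   # dual residual
        # eliminate dz from the Newton system for the perturbed KKT
        # conditions:  Q dx - dz = -r_d,  Z dx + X dz = sigma*mu*1 - X Z 1
        H = Q + np.diag(z / x)
        dx = np.linalg.solve(H, -r_d + (sigma * mu - x * z) / x)
        dz = (sigma * mu - x * z - z * dx) / x
        a = 1.0                               # damped step keeps (x, z) > 0
        for v, dv in ((x, dx), (z, dz)):
            neg = dv < 0
            if neg.any():
                a = min(a, 0.99 * np.min(-v[neg] / dv[neg]))
        x, z = x + a * dx, z + a * dz
    return x

Q = np.array([[2.0, 0.5], [0.5, 1.0]])
c = np.array([-1.0, 1.0])
print(qp_ipm_sketch(Q, c))   # approx (0.5, 0): x[1] driven to its bound
```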
Autocalibration and the absolute quadric
- In Proc. IEEE Conf. Computer Vision and Pattern Recognition, 1997
"... We describe a new method for camera autocalibration and scaled Euclidean structure and motion, from three or more views taken by a moving camera with fixed but unknown intrinsic parameters. The motion constancy of these is used to rectify an initial projective reconstruction. Euclidean scene structu ..."
Cited by 248 (7 self)
"... easily. The nonlinear method is stabler, faster, more accurate and more general than the quasi-linear one. It is based on a general constrained optimization technique, sequential quadratic programming, that may well be useful in other vision problems."
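Sequential quadratic programming solves a nonlinear constrained problem through a sequence of local QP approximations. The smallest possible illustration uses scipy's SLSQP, an SQP implementation; the toy problem is ours and unrelated to autocalibration:

```python
from scipy.optimize import minimize

# find the point on the circle x^2 + y^2 = 4 nearest to (1, 2.5):
# a nonlinear cost under a nonlinear equality constraint
res = minimize(
    lambda p: (p[0] - 1) ** 2 + (p[1] - 2.5) ** 2,
    x0=[2.0, 0.0],
    method="SLSQP",
    constraints=[{"type": "eq", "fun": lambda p: p[0] ** 2 + p[1] ** 2 - 4}],
)
print(res.x)   # approx (0.743, 1.857)
```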
Compact integer programming formulations . . ., 2007
"... We present a new polyhedral approach to nonlinear Boolean optimization. Compared to other methods, it produces much smaller integer programming models, making it more efficient from a practical point of view. We mainly obtain this by two different ideas: first, we do not require the objective funct ..."
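For context, the baseline that such polyhedral approaches compete with is the classical linearization of Boolean products, which adds one auxiliary variable and three inequalities per product term; the paper's compact formulation is different and not reproduced here. A quick self-check of that standard construction:

```python
from itertools import product

# classical linearization of y = x1*x2 over binary variables:
#   y <= x1,  y <= x2,  y >= x1 + x2 - 1,  0 <= y <= 1
def linearization_ok(x1, x2, y):
    return y <= x1 and y <= x2 and y >= x1 + x2 - 1 and 0 <= y <= 1

# over binary inputs, the only feasible y is exactly the product x1*x2
for x1, x2 in product((0, 1), repeat=2):
    feas = [y for y in (0, 1) if linearization_ok(x1, x2, y)]
    assert feas == [x1 * x2]
print("standard product linearization verified on all binary inputs")
```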