Results 11 - 20 of 15,208
Minimax Programs - University of California Press, 1997
"... We introduce an optimization problem called a minimax program that is similar to a linear program, except that the addition operator is replaced in the constraint equations by the maximum operator. We clarify the relation of this problem to some better-known problems. We identify an interesting spec ..."
Cited by 482 (5 self)
special case and present an efficient algorithm for its solution. 1 Introduction Over the last fifty years, thousands of problems of practical interest have been formulated as a linear program. Not only has the linear programming model proven to be widely applicable, but ongoing research has discovered
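A hedged sketch of the constraint change described above (one plausible reading; the snippet does not show the paper's exact form): a linear program bounds the sum of the terms in each constraint row, while the minimax program bounds their maximum,

    \[ \text{LP:}\ \sum_{j=1}^{n} a_{ij} x_j \le b_i \qquad\longrightarrow\qquad \text{minimax:}\ \max_{1 \le j \le n} a_{ij} x_j \le b_i . \]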
Interior-point Methods, 2000
"... The modern era of interior-point methods dates to 1984, when Karmarkar proposed his algorithm for linear programming. In the years since then, algorithms and software for linear programming have become quite sophisticated, while extensions to more general classes of problems, such as convex quadrati ..."
Cited by 612 (15 self)
quadratic programming, semidefinite programming, and nonconvex and nonlinear problems, have reached varying levels of maturity. We review some of the key developments in the area, including comments on both the complexity theory and practical algorithms for linear programming, semidefinite programming
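For orientation, the classical primal log-barrier subproblem behind these methods (standard textbook material, not quoted from this survey) replaces the nonnegativity constraints of the linear program min c^T x s.t. Ax = b, x >= 0 with a logarithmic penalty and traces the central path as the barrier weight shrinks:

    \[ \min_{x}\ c^{\top}x \;-\; \mu \sum_{i=1}^{n} \log x_i \quad \text{s.t.}\ Ax = b, \qquad \mu \downarrow 0 . \]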
Distance metric learning for large margin nearest neighbor classification - In NIPS, 2006
"... We show how to learn a Mahanalobis distance metric for k-nearest neighbor (kNN) classification by semidefinite programming. The metric is trained with the goal that the k-nearest neighbors always belong to the same class while examples from different classes are separated by a large margin. On seven ..."
Cited by 695 (14 self)
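A brief formal sketch of the setup the abstract describes (the notation here is assumed, not quoted from the paper): the learned metric is D_M(x_i, x_j) = (x_i - x_j)^T M (x_i - x_j) with M positive semidefinite, which is what makes the training problem a semidefinite program, and the large-margin goal asks that any differently labeled point x_l lie farther from x_i than each target neighbor x_j by a unit margin:

    \[ D_M(x_i, x_l) \;\ge\; D_M(x_i, x_j) + 1, \qquad M \succeq 0 . \]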
Randomized Gossip Algorithms - IEEE TRANSACTIONS ON INFORMATION THEORY, 2006
"... Motivated by applications to sensor, peer-to-peer, and ad hoc networks, we study distributed algorithms, also known as gossip algorithms, for exchanging information and for computing in an arbitrarily connected network of nodes. The topology of such networks changes continuously as new nodes join a ..."
Cited by 532 (5 self)
stochastic matrix characterizing the algorithm. Designing the fastest gossip algorithm corresponds to minimizing this eigenvalue, which is a semidefinite program (SDP). In general, SDPs cannot be solved in a distributed fashion; however, exploiting problem structure, we propose a distributed subgradient
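A minimal executable sketch of pairwise randomized gossip averaging, the class of algorithms this entry studies (this is not the paper's optimized protocol; edge selection here is uniform rather than tuned by the SDP):

    import random

    def randomized_gossip(values, edges, steps=10_000, seed=0):
        """Pairwise randomized gossip: on a connected graph, every
        node's value converges to the average of the initial values."""
        rng = random.Random(seed)
        x = list(values)
        for _ in range(steps):
            i, j = rng.choice(edges)           # activate a random edge
            x[i] = x[j] = (x[i] + x[j]) / 2.0  # both endpoints average
        return x

    # Ring of five nodes; the initial values average to 3.0.
    ring = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
    print(randomized_gossip([1.0, 2.0, 3.0, 4.0, 5.0], ring))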
Distortion invariant object recognition in the dynamic link architecture - IEEE TRANSACTIONS ON COMPUTERS, 1993
"... We present an object recognition system based on the Dynamic Link Architecture, which is an extension to classical Artificial Neural Networks. The Dynamic Link Architecture ex-ploits correlations in the fine-scale temporal structure of cellular signals in order to group neurons dynamically into hig ..."
Cited by 637 (80 self)
into higher-order entities. These entities represent a very rich structure and can code for high level objects. In order to demonstrate the capabilities of the Dynamic Link Architecture we implemented a program that can recognize human faces and other objects from video images. Memorized objects
Structured Semidefinite Programs and Semialgebraic Geometry Methods in Robustness and Optimization, 2000
Large margin methods for structured and interdependent output variables - JOURNAL OF MACHINE LEARNING RESEARCH, 2005
"... Learning general functional dependencies between arbitrary input and output spaces is one of the key challenges in computational intelligence. While recent progress in machine learning has mainly focused on designing flexible and powerful input representations, this paper addresses the complementary ..."
Cited by 624 (12 self)
to accomplish this, we propose to appropriately generalize the well-known notion of a separation margin and derive a corresponding maximum-margin formulation. While this leads to a quadratic program with a potentially prohibitive, i.e. exponential, number of constraints, we present a cutting plane algorithm
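A compact statement of the maximum-margin formulation sketched above (standard structured-SVM notation, assumed rather than quoted): with joint feature map \Psi and label loss \Delta, one solves

    \[ \min_{w}\ \tfrac{1}{2}\lVert w \rVert^{2} \quad \text{s.t.}\quad \langle w,\, \Psi(x_i, y_i) - \Psi(x_i, y) \rangle \;\ge\; \Delta(y_i, y) \qquad \forall i,\ \forall y \ne y_i , \]

an exponentially large constraint set that the cutting-plane algorithm manages by repeatedly adding only the currently most violated constraint to a small working set.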
Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems - IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, 2007
"... Many problems in signal processing and statistical inference involve finding sparse solutions to under-determined, or ill-conditioned, linear systems of equations. A standard approach consists in minimizing an objective function which includes a quadratic (squared ℓ2) error term combined with a spa ..."
Cited by 539 (17 self)
bound-constrained quadratic programming (BCQP) formulation of these problems. We test variants of this approach that select the line search parameters in different ways, including techniques based on the Barzilai-Borwein method. Computational experiments show that these GP approaches perform well in a wide range
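A minimal sketch of gradient projection with a Barzilai-Borwein step length on a toy BCQP, min 0.5 z^T B z + c^T z subject to z >= 0 (this illustrates the general scheme the snippet names, not the authors' GPSR code; the step-length guard is an added assumption):

    import numpy as np

    def gp_bb(B, c, z0, iters=200):
        """Projected gradient descent for a bound-constrained QP,
        with the Barzilai-Borwein step alpha = (s's)/(s'y)."""
        z = np.maximum(z0, 0.0)
        g = B @ z + c                       # gradient of the quadratic
        alpha = 1.0
        for _ in range(iters):
            z_new = np.maximum(z - alpha * g, 0.0)  # project onto z >= 0
            s = z_new - z
            if s @ s < 1e-12:               # no movement: converged
                break
            g_new = B @ z_new + c
            y = g_new - g
            sy = s @ y
            alpha = (s @ s) / sy if sy > 1e-12 else 1.0
            z, g = z_new, g_new
        return z

    # Tiny example: the unconstrained minimizer has a negative entry,
    # so the bound z >= 0 is active at the solution (about [0.5, 0]).
    B = np.array([[2.0, 0.5], [0.5, 1.0]])
    c = np.array([-1.0, 2.0])
    print(gp_bb(B, c, np.zeros(2)))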
Max-margin Markov networks, 2003
"... In typical classification tasks, we seek a function which assigns a label to a single object. Kernel-based approaches, such as support vector machines (SVMs), which maximize the margin of confidence of the classifier, are the method of choice for many such tasks. Their popularity stems both from the ..."
Cited by 604 (15 self)
for learning M³ networks based on a compact quadratic program formulation. We provide a new theoretical bound for generalization in structured domains. Experiments on the task of handwritten character recognition and collective hypertext classification demonstrate very significant gains over previous
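In the formulation this refers to (standard M³N notation, assumed here rather than quoted), the margin required of each alternative labeling y scales with its Hamming distance from the true labeling, and it is this per-clique structure that lets the exponentially many constraints collapse into a compact quadratic program:

    \[ w^{\top} \Delta f_i(y) \;\ge\; \Delta t_i(y), \qquad \Delta t_i(y) = \sum_{j} \mathbf{1}\!\left[\, y_j \ne (y_i)_j \,\right], \qquad \forall i,\ \forall y . \]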
Benchmarking Least Squares Support Vector Machine Classifiers - NEURAL PROCESSING LETTERS, 2001
"... In Support Vector Machines (SVMs), the solution of the classification problem is characterized by a (convex) quadratic programming (QP) problem. In a modified version of SVMs, called Least Squares SVM classifiers (LS-SVMs), a least squares cost function is proposed so as to obtain a linear set of eq ..."
Cited by 476 (46 self)
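The linear set of equations the abstract alludes to is, in the standard LS-SVM derivation (stated from general knowledge of the method, not from this paper's text): with \Omega_{kl} = y_k y_l K(x_k, x_l), regularization constant \gamma, bias b, and dual variables \alpha, training reduces to the single linear solve

    \[ \begin{bmatrix} 0 & y^{\top} \\ y & \Omega + \gamma^{-1} I \end{bmatrix} \begin{bmatrix} b \\ \alpha \end{bmatrix} = \begin{bmatrix} 0 \\ \mathbf{1} \end{bmatrix}, \]

in place of the SVM's quadratic program.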