Results 1 - 10 of 97
An interior-point method for large-scale ℓ1-regularized logistic regression - Journal of Machine Learning Research, 2007
"... Logistic regression with ℓ1 regularization has been proposed as a promising method for feature selection in classification problems. In this paper we describe an efficient interior-point method for solving large-scale ℓ1-regularized logistic regression problems. Small problems with up to a thousand ..."
Abstract
-
Cited by 290 (9 self)
- Add to MetaCart
Logistic regression with ℓ1 regularization has been proposed as a promising method for feature selection in classification problems. In this paper we describe an efficient interior-point method for solving large-scale ℓ1-regularized logistic regression problems. Small problems with up to a thousand or so features and examples can be solved in seconds on a PC; medium-sized problems, with tens of thousands of features and examples, can be solved in tens of seconds (assuming some sparsity in the data). A variation on the basic method, which uses a preconditioned conjugate gradient method to compute the search step, can solve very large problems, with a million features and examples (e.g., the 20 Newsgroups data set), in a few minutes on a PC. Using warm-start techniques, a good approximation of the entire regularization path can be computed much more efficiently than by solving a family of problems independently.
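As a rough illustration of the warm-start idea, the sketch below refits an ℓ1-regularized logistic regression along a regularization grid, reusing each solution as the starting point for the next solve. It uses scikit-learn's saga solver on synthetic data as a stand-in for the paper's interior-point method; the data and grid are illustrative.

    # Sketch: warm-started l1-regularized logistic regression path.
    # scikit-learn's saga solver stands in for the paper's interior-point
    # method; data and regularization grid are synthetic/illustrative.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=500, n_features=100,
                               n_informative=10, random_state=0)

    # One estimator, refit along a grid of C = 1/lambda values with
    # warm starts, so each solve starts from the previous solution.
    clf = LogisticRegression(penalty="l1", solver="saga",
                             warm_start=True, max_iter=5000)
    for C in np.logspace(-2, 1, 10):
        clf.set_params(C=C)
        clf.fit(X, y)
        print(f"C={C:.3g}: {np.count_nonzero(clf.coef_)} nonzero coefficients")

Because each fit starts near the previous solution, the whole grid costs far less than ten independent solves, which is the effect the abstract describes.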
The Entire Regularization Path for the Support Vector Machine, 2004
"... The Support Vector Machine is a widely used tool for classification. Many efficient imple-mentations exist for fitting a two-class SVM model. The user has to supply values for the tuning parameters: the regularization cost parameter, and the kernel parameters. It seems a common practice is to use a ..."
Abstract
-
Cited by 204 (11 self)
- Add to MetaCart
(Show Context)
The Support Vector Machine is a widely used tool for classification. Many efficient implementations exist for fitting a two-class SVM model. The user has to supply values for the tuning parameters: the regularization cost parameter and the kernel parameters. A common practice seems to be to use a default value for the cost parameter, often leading to the least restrictive model. In this paper we argue that the choice of the cost parameter can be critical. We then derive an algorithm that can fit the entire path of SVM solutions for every value of the cost parameter, with essentially the same computational cost as fitting one SVM model. We illustrate our algorithm on some examples, and use our representation to give further insight into the range of SVM solutions.
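To see why the cost parameter matters, the sketch below refits an RBF SVM over a grid of C values and reports how the support set and training fit change. Grid refitting here is only a stand-in for the paper's exact piecewise-linear path algorithm; the data and grid are illustrative.

    # Sketch: how the SVM solution varies with the cost parameter C.
    # Refits on a grid rather than tracing the exact solution path
    # derived in the paper; data and grid are illustrative.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=200, n_features=2, n_redundant=0,
                               class_sep=0.8, random_state=0)

    for C in [0.01, 0.1, 1.0, 10.0, 100.0]:
        svm = SVC(kernel="rbf", gamma=1.0, C=C).fit(X, y)
        print(f"C={C:>6}: {svm.n_support_.sum()} support vectors, "
              f"train accuracy {svm.score(X, y):.3f}")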
Interior methods for nonlinear optimization - SIAM Review, 2002
"... Interior methods are an omnipresent, conspicuous feature of the constrained optimization landscape today, but it was not always so. Primarily in the form of barrier methods, interior-point techniques were popular during the 1960s for solving nonlinearly constrained problems. However, their use for ..."
Abstract
-
Cited by 127 (6 self)
- Add to MetaCart
(Show Context)
Interior methods are an omnipresent, conspicuous feature of the constrained optimization landscape today, but it was not always so. Primarily in the form of barrier methods, interior-point techniques were popular during the 1960s for solving nonlinearly constrained problems. However, their use for linear programming was not even contemplated because of the total dominance of the simplex method. Vague but continuing anxiety about barrier methods eventually led to their abandonment in favor of newly emerging, apparently more efficient alternatives such as augmented Lagrangian and sequential quadratic programming methods. By the early 1980s, barrier methods were almost without exception regarded as a closed chapter in the history of optimization. This picture changed dramatically with Karmarkar’s widely publicized announcement in 1984 of a fast polynomial-time interior method for linear programming; in 1985, a formal connection was established between his method and classical barrier methods. Since then, interior methods have advanced so far, so fast, that their influence has transformed both the theory and practice of constrained optimization. This article provides a condensed, selective look at classical material and recent research about interior methods for nonlinearly constrained optimization.
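As a concrete reminder of what a barrier method does, the sketch below solves a one-dimensional inequality-constrained problem by running Newton's method on a logarithmic-barrier subproblem while the barrier parameter t increases. The problem and schedule are invented for illustration and are not taken from the article.

    # Sketch: a bare-bones logarithmic-barrier method for
    #   minimize (x - 2)^2  subject to  -1 <= x <= 1,
    # whose solution is x = 1. Problem and schedule are illustrative.
    def barrier_newton(t, x, iters=50):
        """Newton's method on t*(x-2)^2 - log(1-x) - log(1+x)."""
        for _ in range(iters):
            g = 2*t*(x - 2) + 1/(1 - x) - 1/(1 + x)    # gradient
            h = 2*t + 1/(1 - x)**2 + 1/(1 + x)**2      # Hessian (positive)
            step = -g / h
            # Backtrack so the iterate stays strictly inside (-1, 1).
            while not (-1 < x + step < 1):
                step *= 0.5
            x += step
            if abs(step) < 1e-12:
                break
        return x

    x, t = 0.0, 1.0
    while 2.0 / t > 1e-8:        # duality-gap bound for 2 constraints: m/t
        x = barrier_newton(t, x)
        t *= 10.0
    print(f"barrier solution x = {x:.6f} (exact: 1.0)")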
Numerical Decomposition of the Solution Sets of Polynomial Systems into Irreducible Components, 2001
"... In engineering and applied mathematics, polynomial systems arise whose solution sets contain components of different dimensions and multiplicities. In this article we present algorithms, based on homotopy continuation, that compute much of the geometric information contained in the primary decomposi ..."
Abstract
-
Cited by 75 (33 self)
- Add to MetaCart
In engineering and applied mathematics, polynomial systems arise whose solution sets contain components of different dimensions and multiplicities. In this article we present algorithms, based on homotopy continuation, that compute much of the geometric information contained in the primary decomposition of the solution set. In particular, ignoring multiplicities, our algorithms lay out the decomposition of the set of solutions into irreducible components, by finding, at each dimension, generic points on each component. As by-products, the computation also determines the degree of each component and an upper bound on its multiplicity. The bound is sharp (i.e., equal to one) for reduced components. The algorithms make essential use of generic projection and interpolation, and can, if desired, describe each irreducible component precisely as the common zeroes of a finite number of polynomials.
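The "generic points at each dimension" idea can be seen on a toy example: slicing a one-dimensional component with a generic line yields as many intersection points as the component's degree. The sketch below does this for the circle x^2 + y^2 = 1 (degree 2); the variety and the random line are illustrative, while the paper's algorithms handle general systems via homotopy continuation.

    # Sketch: recovering the degree of a 1-dimensional component by
    # slicing with a generic complex line. Toy variety: the circle
    # x^2 + y^2 = 1, which has degree 2.
    import numpy as np

    rng = np.random.default_rng(0)
    a, b, c = rng.normal(size=3) + 1j * rng.normal(size=3)  # generic line

    # Substitute y = (c - a*x)/b into x^2 + y^2 - 1 = 0:
    #   (a^2 + b^2) x^2 - 2ac x + (c^2 - b^2) = 0
    coeffs = [a**2 + b**2, -2*a*c, c**2 - b**2]
    xs = np.roots(coeffs)
    witness = [(x, (c - a*x) / b) for x in xs]

    print(f"degree = number of generic points = {len(witness)}")
    for x, y in witness:
        print(f"  residual |x^2+y^2-1| = {abs(x**2 + y**2 - 1):.2e}")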
Numerical Homotopies to Compute Generic Points on Positive Dimensional Algebraic Sets - Journal of Complexity, 1999
"... Many applications modeled by polynomial systems have positive dimensional solution components (e.g., the path synthesis problems for four-bar mechanisms) that are challenging to compute numerically by homotopy continuation methods. A procedure of A. Sommese and C. Wampler consists in slicing the com ..."
Abstract
-
Cited by 64 (30 self)
- Add to MetaCart
(Show Context)
Many applications modeled by polynomial systems have positive dimensional solution components (e.g., the path synthesis problems for four-bar mechanisms) that are challenging to compute numerically by homotopy continuation methods. A procedure of A. Sommese and C. Wampler consists in slicing the components with linear subspaces in general position to obtain generic points of the components as the isolated solutions of an auxiliary system. Since this requires the solution of a number of larger overdetermined systems, the procedure is computationally expensive and also wasteful because many solution paths diverge. In this article an embedding of the original polynomial system is presented, which leads to a sequence of homotopies, with solution paths leading to generic points of all components as the isolated solutions of an auxiliary system. The new procedure significantly reduces the number of paths to solutions that need to be followed. This approach has been implemented and applied to...
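For readers unfamiliar with homotopy continuation itself, the sketch below tracks the known roots of a start system to a target polynomial along a straight-line homotopy, using an Euler predictor and a Newton corrector. The univariate target, fixed step size, and gamma constant are all illustrative; the trackers used in this line of work handle multivariate systems with adaptive stepping.

    # Sketch: minimal homotopy-continuation path tracking, deforming a
    # start system g(x) = x^3 - 1 (roots known) into a target
    # f(x) = x^3 - 2x + 3. Fixed steps; real solvers adapt step size.
    import numpy as np

    f  = lambda x: x**3 - 2*x + 3
    df = lambda x: 3*x**2 - 2
    g  = lambda x: x**3 - 1
    dg = lambda x: 3*x**2

    gamma = np.exp(1j * 0.6)          # random complex constant (gamma trick)
    H  = lambda x, t: (1 - t) * gamma * g(x) + t * f(x)
    Hx = lambda x, t: (1 - t) * gamma * dg(x) + t * df(x)
    Ht = lambda x, t: f(x) - gamma * g(x)

    dt = 1.0 / 200
    starts = [np.exp(2j * np.pi * k / 3) for k in range(3)]  # roots of g
    for x in starts:
        for t in np.linspace(0.0, 1.0, 201)[:-1]:
            x = x - dt * Ht(x, t) / Hx(x, t)       # Euler predictor
            for _ in range(3):                     # Newton corrector at t+dt
                x = x - H(x, t + dt) / Hx(x, t + dt)
        print(f"root {x:.6f}, |f(x)| = {abs(f(x)):.2e}")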
Computing Regularization Paths for Learning Multiple Kernels, 2005
"... The problem of learning a sparse conic combination of kernel functions or kernel matrices for classification or regression can be achieved via the regularization by a block 1-norm [1]. In this paper, we present an algorithm that computes the entire regularization path for these problems. ..."
Abstract
-
Cited by 55 (11 self)
- Add to MetaCart
A sparse conic combination of kernel functions or kernel matrices for classification or regression can be learned via regularization with a block 1-norm [1]. In this paper, we present an algorithm that computes the entire regularization path for these problems.
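The block 1-norm in question sums the Euclidean norms of parameter blocks, one per kernel, which drives whole blocks to zero. The sketch below implements its proximal operator (group soft-thresholding) on an illustrative vector; the block structure and threshold are made up for the example.

    # Sketch: proximal operator of the block 1-norm
    #   Omega(w) = sum_k ||w_k||_2,
    # the penalty that induces sparse kernel combinations.
    import numpy as np

    def prox_block_l1(w, blocks, lam):
        """Group soft-thresholding: shrink each block toward 0 by lam."""
        out = w.copy()
        for idx in blocks:
            norm = np.linalg.norm(w[idx])
            out[idx] = 0.0 if norm <= lam else (1 - lam / norm) * w[idx]
        return out

    w = np.array([3.0, 4.0, 0.1, 0.1, 2.0])
    blocks = [np.arange(0, 2), np.arange(2, 4), np.arange(4, 5)]
    print(prox_block_l1(w, blocks, lam=1.0))
    # First block (norm 5) shrinks, second block (norm ~0.14) is zeroed,
    # illustrating how the block 1-norm discards whole kernels.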
Geometric constraint solving - Computing in Euclidean Geometry, 1995
"... We survey the current state of the art in geometric constraint solving. Both 2D and 3D constraint solving is considered, and different approaches are characterized. ..."
Abstract
-
Cited by 45 (8 self)
- Add to MetaCart
(Show Context)
We survey the current state of the art in geometric constraint solving. Both 2D and 3D constraint solving are considered, and different approaches are characterized.
Tracking curved regularized optimization solution paths - Advances in Neural Information Processing Systems (NIPS 2004), 2004
"... Regularization plays a central role in the analysis of modern data, where non-regularized fitting is likely to lead to over-fitted models, useless for both prediction and interpretation. We consider the design of incremental algorithms which follow paths of regularized solutions, as the regularizati ..."
Abstract
-
Cited by 34 (3 self)
- Add to MetaCart
(Show Context)
Regularization plays a central role in the analysis of modern data, where non-regularized fitting is likely to lead to over-fitted models, useless for both prediction and interpretation. We consider the design of incremental algorithms which follow paths of regularized solutions, as the regularization varies. These approaches often result in methods which are both efficient and highly flexible. We suggest a general path-following algorithm based on second-order approximations, prove that under mild conditions it remains “very close” to the path of optimal solutions and illustrate it with examples.
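A minimal version of second-order path following: warm-started Newton corrector steps for ridge-penalized logistic regression as the regularization weight decreases along a grid. This is a simplified stand-in for the paper's predictor-corrector algorithm; the data, grid, and loss are illustrative.

    # Sketch: following the curved ridge-logistic-regression path
    #   w(lam) = argmin_w sum_i log(1 + exp(-y_i x_i.w)) + lam/2 ||w||^2
    # with warm-started Newton (second-order) steps as lam decreases.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    y = np.sign(X @ np.array([1., -2., 0., 0., 3.])
                + 0.5 * rng.normal(size=200))

    def newton_correct(w, lam, steps=25):
        for _ in range(steps):
            m = y * (X @ w)
            p = 1.0 / (1.0 + np.exp(m))            # sigmoid(-m)
            grad = -X.T @ (y * p) + lam * w
            S = p * (1.0 - p)                      # logistic Hessian weights
            H = X.T @ (X * S[:, None]) + lam * np.eye(X.shape[1])
            w = w - np.linalg.solve(H, grad)
            if np.linalg.norm(grad) < 1e-10:
                break
        return w

    w = np.zeros(5)
    for lam in np.logspace(2, -2, 9):              # decreasing regularization
        w = newton_correct(w, lam)                 # warm start from previous lam
        print(f"lam={lam:8.3g}  ||w|| = {np.linalg.norm(w):.4f}")

Unlike the lasso path, this path is curved in lam, which is why second-order corrector steps (rather than piecewise-linear updates) are the natural tool.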
Numerical Irreducible Decomposition using Projections from Points on the Components - In Symbolic Computation: Solving Equations in Algebra, Geometry, and Engineering, volume 286 of Contemporary Mathematics
"... To classify positive dimensional solution components of a polynomial system, we construct polynomials interpolating points sampled from each component. In previous work, points on an i-dimensional component were linearly projected onto a generically chosen (i + 1)-dimensional subspace. In this p ..."
Abstract
-
Cited by 27 (15 self)
- Add to MetaCart
(Show Context)
To classify positive dimensional solution components of a polynomial system, we construct polynomials interpolating points sampled from each component. In previous work, points on an i-dimensional component were linearly projected onto a generically chosen (i + 1)-dimensional subspace. In this paper, we present two improvements. First, we reduce the dimensionality of the ambient space by determining the linear span of the component and restricting to it. Second, if the dimension of the linear span is greater than i + 1, we use a less generic projection that leads to interpolating polynomials of lower degree, thus reducing the number of samples needed. While this more efficient approach still guarantees, with probability one, the correct determination of the degree of each component, the mere evaluation of an interpolating polynomial no longer certifies the membership of a point to that component. We present an additional numerical test that certifies membership in this new situation. We illustrate the performance of our new approach on some well-known test systems.
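The first improvement, determining the linear span of a component from sampled points, can be sketched numerically: the rank of the centered sample matrix reveals the span's dimension. The component below is a random 2-plane in R^5, a linear toy stand-in for the algebraic components the paper treats.

    # Sketch: estimating the dimension of a component's linear span from
    # sample points via the numerical rank of the centered sample matrix.
    # Samples lie on a random 2-plane in R^5 (illustrative).
    import numpy as np

    rng = np.random.default_rng(1)
    basis = rng.normal(size=(2, 5))               # spans a 2-plane through p0
    p0 = rng.normal(size=5)
    samples = p0 + rng.normal(size=(20, 2)) @ basis

    centered = samples - samples.mean(axis=0)
    sv = np.linalg.svd(centered, compute_uv=False)
    dim = int(np.sum(sv > 1e-8 * sv[0]))
    print(f"singular values: {np.round(sv, 3)}")
    print(f"estimated dimension of linear span = {dim}")   # 2, as constructed

Restricting the interpolation to this span is what lets the paper work in a lower-dimensional ambient space before projecting.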