Results 1-10 of 26
Square-root lasso: pivotal recovery of sparse signals via conic programming
Biometrika, 2011
Convex Optimization Methods for Dimension Reduction and Coefficient Estimation in Multivariate Linear Regression.
2008
Cited by 21 (3 self)
Abstract In this paper, we study convex optimization methods for computing the nuclear (or, trace) norm regularized least squares estimate in multivariate linear regression. The so-called factor estimation and selection (FES) method, recently proposed by Yuan et al.
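To make the object of study concrete, here is a minimal numpy sketch of nuclear-norm regularized least squares solved by proximal gradient, whose prox step is singular value thresholding; the data sizes, parameter values, and function names are our own illustration, not the FES method or the algorithms proposed in the paper.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: prox of tau * (nuclear norm)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Proximal gradient for  min_B  0.5*||Y - X B||_F^2 + tau*||B||_*
# (synthetic multivariate-regression data; all names are illustrative)
rng = np.random.default_rng(0)
n, p, q, r = 50, 20, 15, 3
X = rng.standard_normal((n, p))
B_true = rng.standard_normal((p, r)) @ rng.standard_normal((r, q))  # low rank
Y = X @ B_true + 0.1 * rng.standard_normal((n, q))

step = 1.0 / np.linalg.norm(X, 2) ** 2   # 1/L, L = Lipschitz const of gradient
B, tau = np.zeros((p, q)), 5.0
for _ in range(300):
    grad = X.T @ (X @ B - Y)              # gradient of the smooth term
    B = svt(B - step * grad, step * tau)  # thresholding shrinks the rank
```

The nuclear-norm prox shrinks singular values the way soft-thresholding shrinks coordinates in the lasso, which is why low-rank estimates emerge.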
Convex Relaxations for Mixed Integer Predictive Control
http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva60254
A primal-dual algorithmic framework for constrained convex minimization
2014
Cited by 3 (2 self)
Abstract We present a primal-dual algorithmic framework to obtain approximate solutions to a prototypical constrained convex optimization problem, and rigorously characterize how common structural assumptions affect the numerical efficiency. Our main analysis technique provides a fresh perspective on Nesterov's excessive gap technique in a structured fashion and unifies it with smoothing and primal-dual methods. For instance, through the choices of a dual smoothing strategy and a center point, our framework subsumes decomposition algorithms, the augmented Lagrangian method, and the alternating direction method of multipliers as special cases, and provides optimal convergence rates on the primal objective residual as well as the primal feasibility gap of the iterates for all of these cases.
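One of the special cases named in the abstract, the augmented Lagrangian (method of multipliers), can be sketched for a toy equality-constrained quadratic program; the problem data below are our own, and the snippet is not the paper's framework.

```python
import numpy as np

# Method of multipliers (augmented Lagrangian) for
#     min 0.5*||x||^2   s.t.   A x = b
# a toy instance of the constrained convex template; data are illustrative.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
b = np.array([1.0, 2.0])
m, n = A.shape

rho = 1.0                       # penalty parameter
x, y = np.zeros(n), np.zeros(m)
for _ in range(100):
    # x-step: argmin 0.5||x||^2 + y@(Ax-b) + 0.5*rho*||Ax-b||^2 (closed form)
    x = np.linalg.solve(np.eye(n) + rho * A.T @ A, rho * A.T @ b - A.T @ y)
    # dual ascent on the feasibility gap
    y = y + rho * (A @ x - b)
```

The two quantities the framework bounds are visible here: the primal objective residual (via x) and the primal feasibility gap ||Ax - b||, driven to zero by the dual updates.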
Constrained convex minimization via model-based excessive gap
In Proceedings of Neural Information Processing Systems (NIPS), 2014
Cited by 3 (2 self)
We introduce a model-based excessive gap technique to analyze first-order primal-dual methods for constrained convex minimization. As a result, we construct first-order primal-dual methods with optimal convergence rates on the primal objective residual and the primal feasibility gap of their iterates separately. Through a dual smoothing and prox-center selection strategy, our framework subsumes the augmented Lagrangian, alternating direction, and dual fast-gradient methods as special cases, where our rates apply.
Multidimensional FIR filter design via trigonometric sum-of-squares optimization
2007
Cited by 2 (1 self)
We discuss a method for multidimensional FIR filter design via sum-of-squares formulations of spectral mask constraints. The sum-of-squares optimization problem is expressed as a semidefinite program with low-rank structure, by sampling the constraints using discrete cosine and sine transforms. The resulting semidefinite program is then solved by a customized primal-dual interior-point method that exploits low-rank structure. This leads to a substantial reduction in the computational complexity, compared to general-purpose semidefinite programming methods that exploit sparsity.
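As a rough one-dimensional illustration of the sampling idea (the paper works with multidimensional DCT/DST sampling inside a semidefinite program), one can evaluate a filter's magnitude response on a frequency grid and inspect a mask region; the filter and mask below are our own example, not the paper's design procedure.

```python
import numpy as np

# 1-D illustration of checking a spectral mask by frequency sampling.
# The paper encodes such constraints exactly via sum-of-squares/SDP and
# DCT/DST sampling; here we only sample |H(w)| on a grid for a fixed filter.
h = np.array([0.25, 0.5, 0.25])     # simple lowpass FIR filter (our example)
N = 512
mag = np.abs(np.fft.fft(h, N))      # |H| at N equispaced frequencies

w = 2 * np.pi * np.arange(N) / N
stopband = (w >= 0.8 * np.pi) & (w <= np.pi)
# for this filter |H(w)| = 0.5*(1 + cos w), so the stopband level is small
```

A mask constraint like "stopband attenuation below some level" becomes, after sampling, a finite set of inequalities on `mag`, which is the step the SDP formulation makes exact.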
Convex optimization of charging infrastructure design and component sizing of a plug-in series HEV powertrain
In IFAC World Congress, 2011
Cited by 2 (1 self)
Abstract: In the context of plug-in HEV city buses, this paper studies the highly coupled optimization problem of finding the most cost-efficient compromise between investing in on-board electric powertrain components and installing a charging infrastructure along the bus line. The paper describes how convex optimization can be used to find the optimal battery sizing for a series HEV with a fixed engine-generator unit and a fixed charging infrastructure along the bus line. The novelty of the proposed optimization approach is that both the battery sizing and the energy management strategy are optimized simultaneously by solving a convex problem. In this approach the power characteristics of the engine-generator unit are approximated by a convex second-order polynomial, and the convex battery model assumes quadratic losses. The paper also presents an example for a specific bus line, showing the dependence between the optimal battery sizing and the number of charging stations on the bus line.
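The convex second-order approximation mentioned in the abstract can be illustrated by a least-squares polynomial fit in numpy; the data, units, and coefficient values below are synthetic placeholders, not the paper's bus model.

```python
import numpy as np

# Fitting a convex second-order polynomial to an engine-generator
# characteristic, as the optimization approach assumes (synthetic data,
# illustrative names and units only).
P = np.linspace(0.0, 100.0, 50)               # generator power [kW]
fuel = 0.002 * P**2 + 0.1 * P + 1.0           # synthetic fuel-rate data [g/s]

V = np.vstack([P**2, P, np.ones_like(P)]).T   # quadratic design matrix
a, b, c = np.linalg.lstsq(V, fuel, rcond=None)[0]
# a >= 0 makes the fitted model convex in P, which is what keeps the
# joint sizing / energy-management problem convex
```

The same pattern applies to the battery: modeling losses as a convex quadratic in battery power keeps every constraint and objective term convex, so sizing and energy management can be optimized in one solve.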
Square-root lasso: pivotal recovery of sparse signals via conic programming
2011
SUMMARY We propose a pivotal method for estimating high-dimensional sparse linear regression models, where the overall number of regressors p is large, possibly much larger than n, but only s regressors are significant. The method is a modification of the lasso, called the square-root lasso. The method neither relies on knowledge of the standard deviation σ of the regression errors nor needs to pre-estimate σ. Despite not knowing σ, the square-root lasso achieves near-oracle performance, attaining the prediction-norm convergence rate σ√((s/n) log p), and thus matching the performance of the lasso that knows σ. Moreover, we show that these results are valid for both Gaussian and non-Gaussian errors, under some mild moment restrictions, using moderate deviation theory. Finally, we formulate the square-root lasso as the solution to a convex conic programming problem. This formulation allows us to implement the estimator using efficient algorithmic methods, such as interior-point and first-order methods specialized to conic programming problems of very large size. Some key words: conic programming; high-dimensional sparse model; unknown sigma.
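As an illustration only (not the authors' conic-programming implementation), the square-root lasso can be approximated by a simple alternating scheme: estimate σ from the current residuals, then take ISTA steps on a lasso with penalty λσ̂, and repeat; all function and parameter names below are our own.

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft-thresholding, the l1 proximal operator."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sqrt_lasso(X, y, lam, outer=25, inner=200):
    """Square-root lasso via an alternating scheme (illustrative sketch):
    estimate sigma_hat = ||y - X b|| / sqrt(n) from the residuals, then take
    ISTA steps on the lasso with penalty lam * sigma_hat."""
    n, p = X.shape
    step = n / np.linalg.norm(X, 2) ** 2   # 1/L for the (1/2n)-scaled loss
    beta = np.zeros(p)
    for _ in range(outer):
        sigma = max(np.linalg.norm(y - X @ beta) / np.sqrt(n), 1e-12)
        for _ in range(inner):
            grad = X.T @ (X @ beta - y) / n
            beta = soft_threshold(beta - step * grad, step * lam * sigma)
    return beta
```

Because σ is re-estimated from the residuals rather than supplied, the penalty level λ can be chosen from (s/n) log p considerations alone, which is the pivotal property the abstract emphasizes.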
Properties of a Cutting Plane Method for Semidefinite Programming
2007
We analyze the properties of an interior-point cutting plane algorithm that is based on a semi-infinite linear formulation of the dual semidefinite program. The cutting plane algorithm approximately solves a linear relaxation of the dual semidefinite program in every iteration and relies on a separation oracle that returns linear cutting planes. We show that the complexity of a variant of the interior-point cutting plane algorithm is slightly smaller than that of a direct interior-point solver for semidefinite programs where the number of constraints is approximately equal to the dimension of the matrix. Our primary focus in this paper is the design of good separation oracles that return cutting planes that support the feasible region of the dual semidefinite program. Furthermore, we introduce a concept called the tangent space induced by a supporting hyperplane that measures the strength of a cutting plane, characterize the supporting hyperplanes that give higher-dimensional tangent spaces, and show how such cutting planes can be found efficiently. Our procedures are analogous to finding facets of an integer polytope in cutting plane methods for integer programming. We illustrate these concepts with two examples in the paper. Finally, we describe separation oracles that return non-polyhedral cutting surfaces. Recently, Krishnan et al. [41] and Oskoorouchi and Goffin [32] have adopted these separation oracles in conic interior-point cutting plane algorithms for solving semidefinite programs.
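The basic cutting-plane loop the paper builds on can be sketched in one dimension with Kelley's method: repeatedly minimize a piecewise-linear model assembled from subgradient cuts (here via scipy's `linprog`). This generic sketch is not the paper's semidefinite variant, and the objective and names are our own.

```python
from scipy.optimize import linprog

# Kelley's cutting-plane method in one dimension (a generic sketch, not the
# paper's SDP variant): minimize a convex f over [lo, hi] by repeatedly
# minimizing a piecewise-linear model built from subgradient cuts.
f = lambda x: (x - 1.0) ** 2          # toy convex objective (ours)
g = lambda x: 2.0 * (x - 1.0)         # subgradient oracle

lo, hi = -2.0, 3.0
xs = [lo, hi]                         # points where cuts were generated
for _ in range(30):
    # LP in (x, t): minimize t  s.t.  t >= f(xi) + g(xi) * (x - xi)
    A_ub = [[g(xi), -1.0] for xi in xs]
    b_ub = [g(xi) * xi - f(xi) for xi in xs]
    res = linprog(c=[0.0, 1.0], A_ub=A_ub, b_ub=b_ub,
                  bounds=[(lo, hi), (None, None)])
    xs.append(res.x[0])               # query the oracle at the model minimizer
```

In the paper's setting the scalar LP becomes a linear relaxation of the dual SDP, and the subgradient oracle becomes a separation oracle returning linear (or non-polyhedral) cuts; the quality of those cuts, measured by the induced tangent space, is exactly what governs how fast a loop like this closes in.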