On the implementation and usage of SDPT3 -- a Matlab software package for semidefinite-quadratic-linear programming, version 4.0 (2006)

by K. C. Toh, R. H. Tütüncü, M. J. Todd
Results 1 - 10 of 26

Square-root lasso: pivotal recovery of sparse signals via conic programming

by Alexandre Belloni, Victor Chernozhukov, Lie Wang - Biometrika , 2011
Cited by 57 (10 self)
Abstract not found

Citation Context

...the authors’ webpages. The square-root lasso runs at least as fast as the corresponding implementations of these methods for the lasso, for instance, the SDPT3 implementation of the interior-point method [24], and the TFOCS implementation of first-order methods by Becker, Candès and Grant described in [2]. We report the exact running times in the Supplementary Material. 5. Empirical performance of square...

Convex Optimization Methods for Dimension Reduction and Coefficient Estimation in Multivariate Linear Regression.

by Zhaosong Lu , Renato D C Monteiro , Ming Yuan , 2008
Cited by 21 (3 self)
Abstract In this paper, we study convex optimization methods for computing the nuclear (or, trace) norm regularized least squares estimate in multivariate linear regression. The so-called factor estimation and selection (FES) method, recently proposed by Yuan et al.

Citation Context

...compared using a set of randomly generated instances. We show that the variant of Nesterov’s smooth method [20] generally outperforms the interior point method implemented in SDPT3 version 4.0 (beta) [19] substantially. Moreover, the former method is much more memory efficient. Key words: Cone programming, smooth saddle point problem, first-order method, interior point method, multivariate linear reg...

Convex Relaxations for Mixed Integer Predictive Control (http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-60254)

by Daniel Axehill, Lieven Vandenberghe, Anders Hansson
Cited by 3 (0 self)

Citation Context

...Two processors of the type Dual Core AMD Opteron 270 sharing 4 GB RAM (only one core was used) running CentOS release 5.3 Kernel 2.6.18 (64 bit) and Matlab 7.8. The solvers used were SDPT3 version 4.0 [19] and CPLEX version 11, and they were called using Yalmip [13]. 5.1 Comparisons of relaxations In this experiment, the relative gaps of the different relaxations are compared for different prediction ...

A primal-dual algorithmic framework for constrained convex minimization

by Quoc Tran-Dinh , Volkan Cevher , 2014
Cited by 3 (2 self)
Abstract We present a primal-dual algorithmic framework to obtain approximate solutions to a prototypical constrained convex optimization problem, and rigorously characterize how common structural assumptions affect the numerical efficiency. Our main analysis technique provides a fresh perspective on Nesterov's excessive gap technique in a structured fashion and unifies it with smoothing and primal-dual methods. For instance, through the choices of a dual smoothing strategy and a center point, our framework subsumes decomposition algorithms, augmented Lagrangian as well as the alternating direction method-of-multipliers methods as its special cases, and provides optimal convergence rates on the primal objective residual as well as the primal feasibility gap of the iterates for all.

Citation Context

... In this test, we choose xc = 0 ∈ [l,u] and db(x,xc) := (1/2)‖x−xc‖². We then evaluate DX numerically, given X := [l,u]. We estimate D⋆Y and f⋆ by solving (68) with an interior-point solver (SDPT3) [65] up to accuracy 10⁻⁸. In the (2P1D) scheme, we set γ0 = β0 = √L̄g, while, in the (1P2D) scheme, we set γ0 := 2√2‖A‖/(K+1) with K := 10⁴ and generate the theoretical bounds defined in Theorem 4.1. We ...
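The abstract above lists the augmented Lagrangian method among the special cases of the framework. Purely as a hypothetical toy illustration of that special case (not the paper's algorithm; the function name, penalty parameter, and data below are invented), here is a pure-Python augmented-Lagrangian loop for min ½‖x‖² subject to a single linear constraint a·x = b₀, where both the primal step and the dual ascent have closed forms:

```python
def augmented_lagrangian(a, b0, rho=1.0, iters=50):
    """Toy ALM for min 1/2 ||x||^2  s.t.  a.x = b0 (one linear constraint)."""
    alpha = sum(ai * ai for ai in a)          # ||a||^2
    x = [0.0] * len(a)
    y = 0.0                                   # dual multiplier
    for _ in range(iters):
        # Primal step: minimize 1/2||x||^2 + y*(a.x - b0) + rho/2*(a.x - b0)^2.
        # The minimizer is x = s*a with s = (rho*b0 - y) / (1 + rho*alpha).
        s = (rho * b0 - y) / (1.0 + rho * alpha)
        x = [s * ai for ai in a]
        # Dual ascent on the constraint residual a.x - b0 = s*alpha - b0.
        y += rho * (s * alpha - b0)
    return x, y
```

For a = (1, 1), b₀ = 2 the dual error contracts by a factor 1/(1 + ρ‖a‖²) = 1/3 per sweep, so the iterates converge to the analytic solution x⋆ = (1, 1) with multiplier y⋆ = −1.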

Constrained convex minimization via model-based excessive gap

by Quoc Tran-Dinh, Volkan Cevher - in Proceedings of Neural Information Processing Systems (NIPS), 2014
Cited by 3 (2 self)
We introduce a model-based excessive gap technique to analyze first-order primal-dual methods for constrained convex minimization. As a result, we construct first-order primal-dual methods with optimal convergence rates on the primal objective residual and the primal feasibility gap of their iterates separately. Through a dual smoothing and prox-center selection strategy, our framework subsumes the augmented Lagrangian, alternating direction, and dual fast-gradient methods as special cases, where our rates apply.

Citation Context

...e augmented Lagrangian smoother (S = A). In this test, we fix xc = 0n and db(x,xc) := (1/2)‖x‖². Since ρ is given, we can evaluate DX numerically. By solving (22) with the SDPT3 interior-point solver [30] up to the accuracy 10⁻⁸, we can estimate D⋆Y and f⋆. In the (2P1D) scheme, we set γ0 = β0 = √L̄g, while, in the (1P2D) scheme, we set γ0 := 2√2‖A‖(K+1)⁻¹ with K := 10⁴ and generate the theoret...

Multidimensional FIR filter design via trigonometric sum-of-squares optimization

by Tae Roh, Bogdan Dumitrescu, Lieven Vandenberghe , 2007
Cited by 2 (1 self)
We discuss a method for multidimensional FIR filter design via sum-of-squares formulations of spectral mask constraints. The sum-of-squares optimization problem is expressed as a semidefinite program with low-rank structure, by sampling the constraints using discrete cosine and sine transforms. The resulting semidefinite program is then solved by a customized primal-dual interior-point method that exploits lowrank structure. This leads to a substantial reduction in the computational complexity, compared to general-purpose semidefinite programming methods that exploit sparsity.

Citation Context

...e that if A = I, the mapping A(X) is a vector of inner products with r rank-one matrices cᵢcᵢᵀ. This special case can be handled by the general-purpose solvers DSDP [3–5] and SDPT3 (ver. 4.0 beta) [27, 29]. Including a non-square matrix A is often useful, and allows us to handle the constraint (26), for example, with non-square G. Example In the 2-D FIR lowpass filter design problem (28) with filter ord...

Convex optimization of charging infrastructure design and component sizing of a plug-in series HEV powertrain

by Nikolce Murgovski, Lars Johannesson, Jonas Hellgren, Bo Egardt - In IFAC World Congress , 2011
Cited by 2 (1 self)
Abstract: With the topic of plug-in HEV city buses, this paper studies the highly coupled optimization problem of finding the most cost efficient compromise between investing in onboard electric powertrain components and installing a charging infrastructure along the bus line. The paper describes how convex optimization can be used to find the optimal battery sizing for a series HEV with fixed engine and generator unit and a fixed charging infrastructure along the bus line. The novelty of the proposed optimization approach is that both the battery sizing and the energy management strategy are optimized simultaneously by solving a convex problem. In the optimization approach the power characteristics of the engine-generator unit are approximated by a convex, second order polynomial, and the convex battery model assumes quadratic losses. The paper also presents an example for a specific bus line, showing the dependence between the optimal battery sizing and the number of charging stations on the bus line.
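The abstract approximates the power characteristics of the engine-generator unit by a convex second-order polynomial. Purely as an illustration of that modelling step (the data and function name below are invented, not from the paper), one can fit f ≈ c₀ + c₁p + c₂p² by ordinary least squares via the 3×3 normal equations:

```python
def fit_quadratic(ps, fs):
    """Least-squares fit of f ~ c0 + c1*p + c2*p^2 via the normal equations."""
    # Power sums S[k] = sum_i p_i^k build the 3x3 Gram matrix.
    S = [sum(p ** k for p in ps) for k in range(5)]
    A = [[S[i + j] for j in range(3)] for i in range(3)]
    b = [sum(f * p ** i for p, f in zip(ps, fs)) for i in range(3)]
    # Gaussian elimination without pivoting (fine for this small toy system).
    for i in range(3):
        for r in range(i + 1, 3):
            m = A[r][i] / A[i][i]
            for k in range(i, 3):
                A[r][k] -= m * A[i][k]
            b[r] -= m * b[i]
    c = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        c[i] = (b[i] - sum(A[i][j] * c[j] for j in range(i + 1, 3))) / A[i][i]
    return c
```

Convexity of the fitted polynomial is the requirement c₂ ≥ 0; a plain least-squares fit does not enforce it, so a convex-optimization pipeline would check (or constrain) the sign of c₂.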

Advance Access publication on ?

by A Belloni , V Chernozhukov , L Wang , 2011
SUMMARY We propose a pivotal method for estimating high-dimensional sparse linear regression models, where the overall number of regressors p is large, possibly much larger than n, but only s regressors are significant. The method is a modification of Lasso, called square-root Lasso. The method neither relies on the knowledge of the standard deviation σ of the regression errors nor does it need to pre-estimate σ. Despite not knowing σ, square-root Lasso achieves near-oracle performance, attaining the prediction norm convergence rate σ√((s/n) log p), and thus matching the performance of the Lasso that knows σ. Moreover, we show that these results are valid for both Gaussian and non-Gaussian errors, under some mild moment restrictions, using moderate deviation theory. Finally, we formulate the square-root Lasso as a solution to a convex conic programming problem. This formulation allows us to implement the estimator using efficient algorithmic methods, such as interior point and first order methods specialized to conic programming problems of a very large size. Some key words: conic programming; high-dimensional sparse model; unknown sigma.
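The square-root Lasso summarized above minimizes ‖y − Xβ‖₂/√n + (λ/n)‖β‖₁, which the paper solves by conic programming (interior-point or first-order conic solvers). As a hedged toy sketch only (not the authors' implementation; the function names, step size, λ scaling, and data are illustrative), a proximal subgradient loop in pure Python shows the shape of the estimator:

```python
import math

def sqrt_lasso(X, y, lam, step=0.1, iters=500):
    """Toy proximal-subgradient loop for ||y - X b||_2 / sqrt(n) + (lam/n) ||b||_1."""
    n, p = len(X), len(X[0])
    b = [0.0] * p
    for _ in range(iters):
        r = [y[i] - sum(X[i][j] * b[j] for j in range(p)) for i in range(n)]
        nrm = math.sqrt(sum(ri * ri for ri in r)) or 1e-12
        # Subgradient of ||r||_2 / sqrt(n) with respect to b (away from r = 0).
        g = [-sum(X[i][j] * r[i] for i in range(n)) / (math.sqrt(n) * nrm)
             for j in range(p)]
        t = step * lam / n
        for j in range(p):
            z = b[j] - step * g[j]
            b[j] = math.copysign(max(abs(z) - t, 0.0), z)   # soft-threshold
    return b

def sqrt_lasso_objective(X, y, b, lam):
    n, p = len(X), len(X[0])
    r = [y[i] - sum(X[i][j] * b[j] for j in range(p)) for i in range(n)]
    return math.sqrt(sum(ri * ri for ri in r) / n) + lam / n * sum(map(abs, b))
```

With a fixed step the iterates only settle into a small neighborhood of a minimizer; the conic formulation in the paper is what makes large problems and high accuracy practical.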

Revisiting several problems and . . .

by Victor Blanco, Justo Puerto, et al.
Abstract not found

Properties of a Cutting Plane Method for Semidefinite Programming

by Kartik Krishnan Sivaramakrishnan, John E. Mitchell , 2007
We analyze the properties of an interior point cutting plane algorithm that is based on a semi-infinite linear formulation of the dual semidefinite program. The cutting plane algorithm approximately solves a linear relaxation of the dual semidefinite program in every iteration and relies on a separation oracle that returns linear cutting planes. We show that the complexity of a variant of the interior point cutting plane algorithm is slightly smaller than that of a direct interior point solver for semidefinite programs where the number of constraints is approximately equal to the dimension of the matrix. Our primary focus in this paper is the design of good separation oracles that return cutting planes that support the feasible region of the dual semidefinite program. Furthermore, we introduce a concept called the tangent space induced by a supporting hyperplane that measures the strength of a cutting plane, characterize the supporting hyperplanes that give higher dimensional tangent spaces, and show how such cutting planes can be found efficiently. Our procedures are analogous to finding facets of an integer polytope in cutting plane methods for integer programming. We illustrate these concepts with two examples in the paper. Finally, we describe separation oracles that return nonpolyhedral cutting surfaces. Recently, Krishnan et al. [41] and Oskoorouchi and Goffin [32] have adopted these separation oracles in conic interior point cutting plane algorithms for solving semidefinite programs.

Citation Context

...semidefinite programs. However, current semidefinite solvers based on IPMs can only handle problems with dimension n and number of equality constraints k up to a few thousand (see, for example, Toh et al. [40]). Each iteration of a primal-dual IPM solver needs to form a dense Schur matrix, store this matrix in memory, and finally factorize and solve a dense system of linear equations of size k with this co...
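The separation oracle described in the abstract can be sketched on the smallest nontrivial case. For a symmetric 2×2 matrix S that is not positive semidefinite, a unit eigenvector v for its most negative eigenvalue yields the linear cut ⟨vvᵀ, S⟩ ≥ 0, which the current S violates by exactly λ_min(S). The code below is a hypothetical toy (closed-form 2×2 eigensolve, invented names; the paper's oracles additionally strengthen cuts via tangent spaces):

```python
import math

def min_eig_2x2(a, b, c):
    """Smallest eigenvalue and a unit eigenvector of [[a, b], [b, c]]."""
    lam = (a + c) / 2.0 - math.sqrt(((a - c) / 2.0) ** 2 + b * b)
    if b == 0.0:
        return lam, ((1.0, 0.0) if a <= c else (0.0, 1.0))
    v = (b, lam - a)                          # satisfies (S - lam*I) v = 0
    nrm = math.hypot(v[0], v[1])
    return lam, (v[0] / nrm, v[1] / nrm)

def separation_oracle(S):
    """Return the rank-one cut matrix v v^T if S is not PSD, else None."""
    lam, v = min_eig_2x2(S[0][0], S[0][1], S[1][1])
    if lam >= 0.0:
        return None                           # S is PSD: no violated cut
    return [[v[0] * v[0], v[0] * v[1]],
            [v[1] * v[0], v[1] * v[1]]]
```

For S = [[1, 2], [2, 1]] (eigenvalues 3 and −1) the oracle returns vvᵀ with ⟨vvᵀ, S⟩ = −1; appending the linear inequality ⟨vvᵀ, ·⟩ ≥ 0 to the LP relaxation cuts off the current point.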


Developed at and hosted by The College of Information Sciences and Technology

© 2007-2019 The Pennsylvania State University