Linearly Convergent First-Order Algorithms for Semi-definite Programming (2014)
Citations
1100 | Semidefinite Programming
- Vandenberghe, Boyd
- 1994
Citation Context: ...twenty years. Semi-definite Programming can be used to model many practical problems in various fields such as constrained Convex Optimization, Combinatorial Optimization, and Control Theory, ... We refer to [32] for a general survey and applications of SDP. Algorithms for solving SDP have been studied extensively since the major works by Nesterov and Nemirovski [18], [19], [20], [21], in which they sh...
546 | Interior-Point Polynomial Algorithms in Convex Programming.
- Nesterov, Nemirovskii
- 1994
Citation Context: ...ings are used in the context of Linear Matrix Inequality (LMI) constraints, see [28], [31]. LMIs can also be solved numerically by recent interior-point methods for semi-definite programming, see [6], [26]. Linear Programming is a special case of Semidefinite Programming, as wel...
540 | Introductory Lectures on Convex Optimization. A Basic Course, volume 87 of Applied Optimization
- Nesterov
- 2004
Citation Context: ...d the growth condition of the objective function. We are now ready to describe our non-smooth algorithm as follows. At each main step (Step 1), to obtain the new iterate, we run the sub-gradient method (see [23]) for $K = 4M^2\mu^2$ iterations, where $\mu$ is defined in (2.5), with the current iterate as the input. In other words, we restart the sub-gradient algorithm after a constant number $K = 4M^2\mu^2$ of iterations. We denote {x...
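The restart scheme described in this snippet is simple enough to sketch. Below is a minimal, hypothetical Python sketch, not the paper's exact method: `f`, `subgrad`, and `project` stand for an objective oracle, a sub-gradient oracle, and a projection onto the feasible set, `M` and `mu` for the constants M and μ mentioned above; the diminishing step size and the tracking of the best point seen are assumed standard choices, not taken from the snippet.

```python
import numpy as np

def restarted_subgradient(f, subgrad, project, x0, M, mu, n_restarts):
    """Hypothetical sketch of the restart scheme quoted above: run a plain
    projected sub-gradient method for a fixed budget of K = 4*M^2*mu^2
    iterations, then restart it with the current iterate as the new input."""
    K = int(np.ceil(4 * M**2 * mu**2))        # fixed inner iteration budget
    x = x0
    best = x0
    for _ in range(n_restarts):
        for k in range(1, K + 1):
            g = subgrad(x)
            step = 1.0 / (M * np.sqrt(k))     # assumed non-smooth step-size rule
            x = project(x - step * g)
            if f(x) < f(best):                # keep track of the best point seen
                best = x
    return best
```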
521 | Smooth minimization of non-smooth functions
- Nesterov
Citation Context: ...onstraints increases because of the computational cost per iteration. Recently, first-order methods have attracted attention because of their efficiency in solving large-scale SDPs, such as Nesterov's optimal methods [24], [25], Nemirovski's prox-method [17], and spectral bundle methods [5]. In system and control theory, system identification, and signal processing, Semi-definite Programs are used in the context of Line...
274 | Lectures on Modern Convex Optimization: Analysis, Algorithms, and Engineering Applications
- Ben-Tal, Nemirovski
- 2001
Citation Context: ...n that, in view of Assumption 1, the pair of primal and dual SDP problems (2.1) and (2.2) satisfies Slater's condition; hence they have optimal solutions and their duality gap is zero, see [1]. Moreover, a primal-dual optimal solution of (2.1) and (2.2) can be found by solving the complementarity problem given by the following Linear Matrix Inequality constraints: $Ax \le B$, $A^T y = c$, $y \le 0$, $\langle x, \ldots$
252 | An interior-point method for semidefinite programming
- Helmberg, Rendl, et al.
- 1994
Citation Context: ...grammings are used in the context of Linear Matrix Inequality (LMI) constraints, see [28], [31]. LMIs can also be solved numerically by recent interior-point methods for semi-definite programming, see [6], [26]. Linear Programming is a special case of Semidefinite Programming, ...
170 | A spectral bundle method for semidefinite programming
- Helmberg, Rendl
Citation Context: ...Recently, first-order methods have attracted attention because of their efficiency in solving large-scale SDPs, such as Nesterov's optimal methods [24], [25], Nemirovski's prox-method [17], and spectral bundle methods [5]. In system and control theory, system identification, and signal processing, Semi-definite Programs are used in the context of Linear Matrix Inequality (LMI) constraints, see [28], [31]. LMIs can a...
139 | On the convergence of the coordinate descent method for convex differentiable minimization
- Luo, Tseng
- 1992
Citation Context: ...t role in algorithmic convergence proofs. In particular, Luo and Tseng showed the power of the error-bound idea in deriving linear convergence rates for many algorithms over a variety of problem classes, see [16], [15], [30]. However, it is not easy to obtain an error bound except in the linear and quadratic cases, or when the Slater constraint qualification holds, see [3]. In [33], Zhang derived error ...
128 | On approximate solutions of systems of linear inequalities
- Hoffman
- 1952
Citation Context: ...from an arbitrary point to the feasible solution set of (6.34). The smallest constant $L_H$ satisfying the growth condition (6.35) is called the Hoffman constant, which is well studied in [33], [27], [13], [4], and [34]. This constant can be estimated easily in some cases, in particular for a linear system of equations, where the Hoffman constant is determined by the smallest non-zero singular value of the matrix A. Th...
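To make the linear-equation case concrete, here is a small numpy check on toy data, using one common convention (ours, since the exact form of condition (6.35) is not visible in the snippet): for the affine set S = {z : Az = b}, a Hoffman-type bound dist(x, S) ≤ L_H‖Ax − b‖ holds with L_H = 1/σ_min, the reciprocal of the smallest non-zero singular value of A.

```python
import numpy as np

# Toy illustration (assumed data) of a Hoffman-type bound for S = {z : A z = b}:
#   dist(x, S) <= (1 / sigma_min) * ||A x - b||.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))              # full row rank with probability one
b = A @ rng.standard_normal(5)               # consistent right-hand side
x = rng.standard_normal(5)                   # arbitrary test point

residual = np.linalg.norm(A @ x - b)
sigma_min = np.linalg.svd(A, compute_uv=False).min()
dist_to_S = np.linalg.norm(np.linalg.pinv(A) @ (A @ x - b))   # exact distance to S

assert dist_to_S <= residual / sigma_min + 1e-9
print(dist_to_S, residual / sigma_min)
```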
119 | Prox-method with rate of convergence O(1/t) for variational inequalities with Lipschitz continuous monotone operators and smooth convex-concave saddle point problems
- Nemirovski
- 2004
Citation Context: ...ational cost per iteration. Recently, first-order methods have attracted attention because of their efficiency in solving large-scale SDPs, such as Nesterov's optimal methods [24], [25], Nemirovski's prox-method [17], and spectral bundle methods [5]. In system and control theory, system identification, and signal processing, Semi-definite Programs are used in the context of Linear Matrix Inequality constraints (L...
109 | A method for unconstrained convex minimization problem with the rate of convergence O(1/k^2)
- Nesterov
- 1983
Citation Context: ...2: Go to Step 1. The convergence property is described in the following theorem. Theorem 15: For any $k \ge 1$, $f(x_k) \le \tfrac{1}{2} f(x_{k-1})$. Proof. By the convergence properties of Nesterov's accelerated gradient method [22], we have $f(x_k) - f^* \le \frac{4\|A\|^2}{(K+2)^2}\,\|x_{k-1} - x^*\|^2$. Using the Hoffman error bound and noting that $f^* = 0$, we obtain $f(x_k) \le \frac{4\|A\|^2 L_H^2}{(K+2)^2}\, f(x_{k-1}) \le \tfrac{1}{2} f(x_{k-1})$, where the second inequality follo...
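A hedged reconstruction of how the halving factor arises, assuming, as the snippet suggests, that $f^* = 0$ and that the Hoffman error bound is used in the form $\|x_{k-1} - x^*\|^2 \le L_H^2\, f(x_{k-1})$:

```latex
\[
  f(x_k) \;\le\; \frac{4\|A\|^2}{(K+2)^2}\,\|x_{k-1} - x^*\|^2
         \;\le\; \frac{4\|A\|^2 L_H^2}{(K+2)^2}\, f(x_{k-1}),
\]
% so any inner iteration budget K with (K+2)^2 >= 8 \|A\|^2 L_H^2, i.e.
% K >= 2\sqrt{2}\,\|A\| L_H - 2, makes the contraction factor at most 1/2
% and yields f(x_k) <= (1/2) f(x_{k-1}), i.e. linear convergence of the restarts.
```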
100 | Error bounds in mathematical programming
- Pang
- 1997
Citation Context: ...nds for general convex conic problems under various conditions. Error bounds for Semidefinite Programming were studied by Deng and Hu in [3] and by Jourani and Ye in [8]. Related topics can be found in [27], [29], [14]. The paper is organized as follows. In Section 2, we introduce the problem of interest and state the Slater constraint qualification condition. Respectively, in Section 3 and...
67 | An optimal method for stochastic composite optimization.
- Lan
- 2012
Citation Context: ...en, because of the convexity of the objective function, it is easy to see that $x^*$ is a feasible solution to (5.16) at every step, i.e., $x^* \in L_t$, $t = 1, 2, \ldots, T-1$. Furthermore, using Lemma 1 in [9] and the subproblem (5.16), we have $\|x_\tau - x^*\|^2 + \|x_{\tau-1} - x_\tau\|^2 \le \|x_{\tau-1} - x^*\|^2$ for $\tau = 1, 2, \ldots, T$. Summing up these inequalities, we obtain $\|x_T - x^*\|^2 + \sum_{\tau=1}^{T} \|x_{\tau-1} - x_\tau\|^2 \le \|x_0 - x^*\|^2$ for all $x^* \in X^*$. T...
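The summation step quoted above is a plain telescoping argument; spelled out, using only the per-step inequality from the snippet:

```latex
% Summing \|x_\tau - x^*\|^2 + \|x_{\tau-1} - x_\tau\|^2 \le \|x_{\tau-1} - x^*\|^2
% over \tau = 1, \dots, T, the terms \|x_\tau - x^*\|^2 telescope, leaving
\[
  \|x_T - x^*\|^2 + \sum_{\tau=1}^{T} \|x_{\tau-1} - x_\tau\|^2
  \;\le\; \|x_0 - x^*\|^2 ,
\]
% so in particular every iterate stays in the ball of radius \|x_0 - x^*\|
% around x^*, and the sum of squared successive displacements is bounded.
```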
54 | Superlinear convergence of a symmetric primal-dual path following algorithm for semidefinite programming
- Luo, Sturm, et al.
- 1996
Citation Context: ...made by Nesterov and Nemirovski [18], [19], [20], [21], in which they showed that Interior Point (IP) methods for Linear Programming (LP) can be extended to SDP. Related topics can be found in [29], [14]. Despite the fact that SDP can be solved in polynomial time by IP methods, these methods become impractical as the number of constraints increases because of the computational cost per iteration. Recently, ...
51 | Smoothing technique and its applications in semidefinite optimization
- Nesterov
- 2004
Citation Context: ...ints increases because of the computational cost per iteration. Recently, first-order methods have attracted attention because of their efficiency in solving large-scale SDPs, such as Nesterov's optimal methods [24], [25], Nemirovski's prox-method [17], and spectral bundle methods [5]. In system and control theory, system identification, and signal processing, Semi-definite Programs are used in the context of Linear Mat...
37 | Randomized methods for linear constraints: convergence rates and conditioning.
- Leventhal, Lewis
- 2006
Citation Context: ...an be applied for solving LP. In this paper, we propose a linearly convergent algorithm for linear systems of inequalities, which requires a weaker assumption than the one for the LMI problem. We refer to [12] for other linearly convergent algorithms for linear systems of inequalities. Error bounds usually play an important role in algorithmic convergence proofs. In particular, Luo and Tseng showed the power ...
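Reference [12] (Leventhal and Lewis) analyzes randomized projection methods for linear constraints. The sketch below, which is not the algorithm proposed in this paper, illustrates that flavor for a system Ax ≤ b: pick a row at random and, if it is violated, project the current point onto the corresponding half-space.

```python
import numpy as np

def randomized_halfspace_projection(A, b, x0, n_iters, seed=0):
    """Minimal sketch (not the paper's algorithm) of a randomized projection
    method for a linear system of inequalities A x <= b, in the spirit of
    Leventhal-Lewis [12]."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    row_norms_sq = np.sum(A * A, axis=1)      # squared norms of the rows of A
    for _ in range(n_iters):
        i = rng.integers(A.shape[0])          # uniformly random row index
        violation = A[i] @ x - b[i]
        if violation > 0:                     # constraint i is violated
            x -= (violation / row_norms_sq[i]) * A[i]   # project onto {z : A[i] z <= b[i]}
    return x
```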
22 | Optimization over Positive Semidefinite Matrices: Mathematical Background and User’s Manual
- Nesterov, Nemirovskii
- 1990
Citation Context: ...ontrol Theory, ... We refer to [32] for a general survey and applications of SDP. Algorithms for solving SDP have been studied extensively since the major works by Nesterov and Nemirovski [18], [19], [20], [21], in which they showed that Interior Point (IP) methods for Linear Programming (LP) can be extended to SDP. Related topics can be found in [29], [14]. Despite the fact that SDP can be solv...
16 | A general approach to polynomial-time algorithms design for convex programming
- Nesterov, Nemirovsky
- 1988
Citation Context: ...ion, Control Theory, ... We refer to [32] for a general survey and applications of SDP. Algorithms for solving SDP have been studied extensively since the major works by Nesterov and Nemirovski [18], [19], [20], [21], in which they showed that Interior Point (IP) methods for Linear Programming (LP) can be extended to SDP. Related topics can be found in [29], [14]. Despite the fact that SDP can b...
16 | On linear convergence of iterative methods for the variational inequality problem.
- Tseng
- 1995
Citation Context: ...gorithmic convergence proofs. In particular, Luo and Tseng showed the power of the error-bound idea in deriving linear convergence rates for many algorithms over a variety of problem classes, see [16], [15], [30]. However, it is not easy to obtain an error bound except in the linear and quadratic cases, or when the Slater constraint qualification holds, see [3]. In [33], Zhang derived error bounds for g...
14 | Conic formulation of a convex programming problem and duality
- Nesterov, Nemirovski
- 1992
Citation Context: ...y, ... We refer to [32] for a general survey and applications of SDP. Algorithms for solving SDP have been studied extensively since the major works by Nesterov and Nemirovski [18], [19], [20], [21], in which they showed that Interior Point (IP) methods for Linear Programming (LP) can be extended to SDP. Related topics can be found in [29], [14]. Despite the fact that SDP can be solved in polyno...
13 | Primal-dual first-order methods with O(1/ε) iteration-complexity for cone programming
- Lan, Lu, et al.
Citation Context: ...re $\|A\|$ denotes the operator norm of A with respect to the pair of norms $\|\cdot\|_2$ and $\|\cdot\|_F$, defined as follows: $\|A\| := \max\{\|Au\|_F^* : \|u\| \le 1\}$ (4.10). Proof. The proof follows immediately from Proposition 1 of [11], in which $U = U^* = \mathbb{R}^n$, $V = V^* = S^n$, and $\psi = (\mathrm{dist}_{S^n_-})^2$, where $\mathrm{dist}_{S^n_-}$ is the distance function to the cone $S^n_-$ measured in terms of the norm $\|\cdot\|_F$. Note that $\mathrm{dist}_{S^n_-}$ is a convex function with 2-...
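The function $\psi = (\mathrm{dist}_{S^n_-})^2$ is easy to evaluate: projecting a symmetric matrix onto the cone $S^n_-$ clips its positive eigenvalues to zero, so the squared Frobenius distance is the sum of the squared positive eigenvalues. A minimal numpy sketch (the function name is ours):

```python
import numpy as np

def squared_dist_to_negative_semidefinite_cone(X):
    """Squared Frobenius distance from a symmetric matrix X to the cone S^n_-
    (a hedged sketch of the function psi referred to above). The projection
    onto S^n_- keeps only the non-positive part of the spectrum, so the
    distance equals the Frobenius norm of the positive part of X."""
    X = 0.5 * (X + X.T)                                   # symmetrize defensively
    eigvals, eigvecs = np.linalg.eigh(X)
    positive_part = eigvecs @ np.diag(np.maximum(eigvals, 0.0)) @ eigvecs.T
    return np.linalg.norm(positive_part, 'fro') ** 2
```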
12 | On sensitivity of central solutions in semidefinite programming
- Sturm, Zhang
- 2001
Citation Context: ...ks are made by Nesterov and Nemirovski [18], [19], [20], [21], in which they showed that Interior Point (IP) methods for Linear Programming (LP) can be extended to SDP. Related topics can be found in [29], [14]. Despite the fact that SDP can be solved in polynomial time by IP methods, these methods become impractical as the number of constraints increases because of the computational cost per iteration. Rece...
10 | The sharp Lipschitz constants for feasible and optimal solutions of a perturbed linear program
- Li
- 1993
Citation Context: ...stance from an arbitrary point to the feasible solution set of (6.34). The smallest constant $L_H$ satisfying the growth condition (6.35) is called the Hoffman constant, which is well studied in [33], [27], [13], [4], and [34]. This constant can be estimated easily in some cases, in particular for a linear system of equations, where the Hoffman constant is determined by the smallest non-zero singular value of the matrix ...
9 | Computable error bounds for convex inequality systems in reflexive Banach spaces
- Deng
Citation Context: ...at $\sigma I_n - Ad \in S^n_-$, and denote $\mu = \|d\|/\sigma$ (2.5). Note that Assumption 2 implies the Slater constraint qualification condition for the feasible set of (2.4); hence S is nonempty, see [8], [33], [3], [2]. In Sections 2 and 3, we will present two equivalent SDP optimization formulations of the LMI and linearly convergent algorithms for solving these formulations. 3 A non-smooth SDP Optimization For...
7 | Computable error bounds for semidefinite programming
- Deng, Hu
- 1999
Citation Context: ...ariety of problem classes, see [16], [15], [30]. However, it is not easy to obtain an error bound except in the linear and quadratic cases, or when the Slater constraint qualification holds, see [3]. In [33], Zhang derived error bounds for general convex conic problems under various conditions. Error bounds for Semidefinite Programming were studied by Deng and Hu in [3] and by Jourani and Ye in [8...
6 | Linear Matrix Inequalities for signal processing: An overview
- Balakrishnan, Vandenberghe
- 1998
Citation Context: ...ndle methods [5]. In system and control theory, system identification, and signal processing, Semi-definite Programs are used in the context of Linear Matrix Inequality (LMI) constraints, see [28], [31]. LMIs can also be solved numerically by recent interior-point methods for semi-definite programming, see [6], [26]. ...
4 | Global error bounds for convex conic problems
- Zhang
- 1999
Citation Context: ...lass of problems, see [16], [15], [30]. However, it is not easy to obtain an error bound except in the linear and quadratic cases, or when the Slater constraint qualification holds, see [3]. In [33], Zhang derived error bounds for general convex conic problems under various conditions. Error bounds for Semidefinite Programming were studied by Deng and Hu in [3] and by Jourani and Ye in [8]. Relate...
2 | Error bounds for eigenvalue and semidefinite matrix inequality systems
- Jourani, Ye
Citation Context: ...3]. In [33], Zhang derived error bounds for general convex conic problems under various conditions. Error bounds for Semidefinite Programming were studied by Deng and Hu in [3] and by Jourani and Ye in [8]. Related topics can be found in [27], [29], [14]. The paper is organized as follows. In Section 2, we introduce the problem of interest and the Slater constraint qualification condition is...
2 | Error bound and reduced-gradient projection algorithms for convex minimization over a polyhedral set
- Luo, Tseng
- 1993
Citation Context: ...in algorithmic convergence proofs. In particular, Luo and Tseng showed the power of the error-bound idea in deriving linear convergence rates for many algorithms over a variety of problem classes, see [16], [15], [30]. However, it is not easy to obtain an error bound except in the linear and quadratic cases, or when the Slater constraint qualification holds, see [3]. In [33], Zhang derived error bounds...
1 | Bundle-type methods uniformly optimal for smooth and non-smooth convex optimization
- Lan
- 2010
Citation Context: ...ms covers both the non-smooth formulation (3.6), corresponding to $L = 0$, $M = \|A\|$, and the smooth formulation (4.9), corresponding to $L = 2\|A\|^2$, $M = 0$, where the optimal values of both formulations are zero. In [10], Lan proposes two algorithms which are uniformly optimal for solving both non-smooth and smooth convex programming problems. More interestingly, these algorithms do not require any smoothness informati...
1 | Self-concordant functions and polynomial time methods in convex programming
- Nesterov, Nemirovski
- 1990
Citation Context: ...Theory, ... We refer to [32] for a general survey and applications of SDP. Algorithms for solving SDP have been studied extensively since the major works by Nesterov and Nemirovski [18], [19], [20], [21], in which they showed that Interior Point (IP) methods for Linear Programming (LP) can be extended to SDP. Related topics can be found in [29], [14]. Despite the fact that SDP can be solved in ...
1 | Linear matrix inequalities in system and control theory
- Boyd, El Ghaoui, et al.
- 1994
Citation Context: ...ral bundle methods [5]. In system and control theory, system identification, and signal processing, Semi-definite Programs are used in the context of Linear Matrix Inequality (LMI) constraints, see [28], [31]. LMIs can also be solved numerically by recent interior-point methods for semi-definite programming, see [6], [26]. ...
1 | Hoffman's least error bounds for systems of linear inequalities
- Zheng, Ng
Citation Context: ...arbitrary point to the feasible solution set of (6.34). The smallest constant $L_H$ satisfying the growth condition (6.35) is called the Hoffman constant, which is well studied in [33], [27], [13], [4], and [34]. This constant can be estimated easily in some cases, in particular for a linear system of equations, where the Hoffman constant is determined by the smallest non-zero singular value of the matrix A. The algorit...