Results 1–10 of 2,664
Distributed Nesterov-like gradient algorithms
In Proc. 51st IEEE Conference on Decision and Control (CDC), 2012
"... Abstract — In classical, centralized optimization, the Nesterov gradient algorithm reduces the number of iterations needed to produce an ε-accurate solution (in terms of the cost function), with respect to the ordinary gradient method, from O(1/ε) to O(1/√ε). This improvement is achieved on a class of convex functions ..."
Cited by 8 (7 self)
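The accelerated rate quoted in this abstract (O(1/√ε) iterations versus O(1/ε) for the ordinary gradient method) refers to the classical, centralized Nesterov scheme. The sketch below is a minimal generic accelerated-gradient loop in that spirit, not the paper's distributed algorithm; the quadratic objective and all names in it are illustrative assumptions.

```python
import numpy as np

def nesterov_gradient(grad, x0, lipschitz, iters):
    """Generic Nesterov accelerated gradient descent for a smooth convex f.

    grad:      gradient of f
    lipschitz: Lipschitz constant L of grad (step size 1/L)
    """
    x = y = np.asarray(x0, dtype=float)
    t = 1.0
    for _ in range(iters):
        x_next = y - grad(y) / lipschitz                  # gradient step at the lookahead point
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)  # momentum extrapolation
        x, t = x_next, t_next
    return x

# Illustrative ill-conditioned quadratic f(x) = 0.5 * x'Ax, minimized at 0.
A = np.diag([1.0, 100.0])           # gradient Lipschitz constant L = 100
grad = lambda x: A @ x
x_acc = nesterov_gradient(grad, [1.0, 1.0], lipschitz=100.0, iters=200)
print(np.linalg.norm(x_acc))        # close to the minimizer at 0
```

With this scheme the standard guarantee f(x_k) − f* ≤ 2L‖x0 − x*‖²/(k+1)² gives the O(1/√ε) iteration count the snippet mentions.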
Nesterov-Todd Directions are Newton Directions
1999
"... The theory of self-scaled conic programming provides a unified framework for the theories of linear programming, semidefinite programming and convex quadratic programming with convex quadratic constraints. The standard search directions for interior-point methods applied to self-scaled conic programming problems are the so-called Nesterov-Todd directions. In this article we show that these direction fields are special cases of so-called target directions, a unifying concept for primal-dual interior-point methods for self-scaled conic programming. In particular, this implies that Nesterov ..."
Cited by 2 (2 self)
A Mathematical View of Interior-Point Methods for Convex Optimization
MPS/SIAM Series on Optimization, SIAM, 2001
"... These lecture notes aim at developing a thorough understanding of the core theory for interior-point methods. The overall theory continues to grow at a rapid rate, but the core ideas have remained largely unchanged for several years, since Nesterov and Nemirovskii [1] published their path-breaking, br ..."
Cited by 269 (1 self)
A study of Nesterov's scheme for Lagrangian decomposition and MAP labeling
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2011
"... We study the MAP-labeling problem for graphical models by optimizing a dual problem obtained by Lagrangian decomposition. In this paper, we focus specifically on Nesterov's optimal first-order optimization scheme for non-smooth convex programs, which has been studied for a range of other problems ..."
Cited by 25 (9 self)
An Infeasible Interior-Point Algorithm with Full Nesterov-Todd Step for Semidefinite Programming
"... This paper proposes an infeasible interior-point algorithm with full Nesterov-Todd step for semidefinite programming, which is an extension of the work of Roos (SIAM J. Optim., 16(4):1110–1136, 2006). The polynomial bound coincides with that of infeasible interior-point methods for linear programming ..."
Cited by 5 (2 self)
Convergence Rates of Distributed Nesterov-Like Gradient Methods on Random Networks
"... Abstract — We consider distributed optimization in random networks where nodes cooperatively minimize the sum of their individual convex costs. Existing literature proposes distributed gradient-like methods that are computationally cheap and resilient to link failures, but have slow convergence rates ..."
Cited by 2 (2 self)
"... the set of symmetric, stochastic matrices with positive diagonals. The network is connected on average and the cost functions are convex, differentiable, with Lipschitz-continuous and bounded gradients. We design two distributed Nesterov-like gradient methods that modify the D–NG and D–NC methods that we ..."
Distributed Nesterov Gradient Methods for Random Networks: Convergence in Probability and Convergence Rates
"... We consider distributed optimization where N nodes in a generic, connected network minimize the sum of their individual, locally known, convex costs. Existing literature proposes distributed gradient-like methods that are attractive due to computationally cheap iterations and provable resilience to random inter-node communication failures, but such methods have slow theoretical and empirical convergence rates. Building from the centralized Nesterov gradient methods, we propose accelerated distributed gradient-like methods and establish that they achieve strictly faster rates than existing ..."
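The baseline these distributed abstracts build on alternates a consensus step, using a symmetric, stochastic weight matrix with positive diagonal, with a local gradient step at each node. Below is a minimal sketch of that generic consensus-plus-gradient pattern, not the papers' accelerated D–NG/D–NC variants; the network, weights, step-size rule, and local costs are all illustrative assumptions.

```python
import numpy as np

def distributed_gradient(W, grads, x0, step, iters):
    """Generic distributed gradient method over a fixed network.

    W:     symmetric, stochastic weight matrix with positive diagonal
    grads: per-node gradient functions of the local costs f_i
    x0:    one initial scalar estimate per node
    step:  diminishing step-size rule step(k)
    Each node averages with its neighbors, then steps along its local gradient.
    """
    x = np.asarray(x0, dtype=float)
    for k in range(iters):
        x = W @ x                                                      # consensus: mix neighbor estimates
        x = x - step(k) * np.array([g(xi) for g, xi in zip(grads, x)]) # local gradient steps
    return x

# Three nodes on a path graph; local costs f_i(x) = 0.5 * (x - b_i)^2,
# so the minimizer of the sum f_1 + f_2 + f_3 is the average of b, i.e. 3.
W = np.array([[0.50, 0.50, 0.00],
              [0.50, 0.25, 0.25],
              [0.00, 0.25, 0.75]])
b = [0.0, 3.0, 6.0]
grads = [lambda x, bi=bi: x - bi for bi in b]
x = distributed_gradient(W, grads, x0=b, step=lambda k: 1.0 / (k + 2), iters=2000)
print(x)  # all node estimates near 3
```

The slow convergence the abstracts criticize comes from the diminishing step sizes this baseline needs; the accelerated methods they propose add Nesterov-style momentum on top of the same consensus structure.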
Convergence Rates of Distributed Nesterov-like Gradient Methods on Random Networks
"... Abstract — We consider distributed optimization in random networks where N nodes cooperatively minimize the sum ∑_{i=1}^N f_i(x) of their individual convex costs. Existing literature proposes distributed gradient-like methods that are computationally cheap and resilient to link failures, but have slow co ..."
"... drawn from the set of symmetric, stochastic matrices with positive diagonals. The network is connected on average and the cost functions are convex, differentiable, with Lipschitz-continuous and bounded gradients. We design two distributed Nesterov-like gradient methods that modify the D ..."