Results 11–20 of 22
On the Convergence of Alternating Direction Lagrangian Methods for Nonconvex Structured Optimization Problems
Cited by 1 (0 self)
Abstract—This paper investigates convergence properties of scalable algorithms for nonconvex and structured optimization. We focus on two methods that combine the fast convergence properties of augmented Lagrangian-based methods with the separability properties of alternating optimization. The first method is adapted from the classic quadratic penalty function method and is called the Alternating Direction Penalty Method (ADPM). Unlike the original quadratic penalty function method, in which single-step optimizations are adopted, ADPM uses alternating optimization, which in turn is exploited to enable scalability of the algorithm. The second method is the well-known Alternating Direction Method of Multipliers (ADMM). We show that the ADPM asymptotically converges to a primal feasible point under mild conditions. Moreover, we give numerical evidence to demonstrate the potential of the ADPM for computing a good objective value. In the case of the ADMM, we give sufficient conditions under which the algorithm asymptotically reaches the standard first-order necessary conditions for local optimality. Throughout the paper, we substantiate the theory with numerical examples and finally demonstrate possible applications of ADPM and ADMM to a nonconvex localization problem in wireless sensor networks. Index Terms—Nonconvex Optimization, ADMM, Localization
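For context on the splitting this abstract refers to, here is a minimal sketch of ADMM on a convex scalar toy problem (minimizing 0.5(x−a)² + λ|x| with the split f(x) + g(z), x = z). This is only an illustration of the alternating-direction pattern, not the paper's ADPM/ADMM for nonconvex problems; the function names and the toy objective are my own.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*|.| (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso_scalar(a=3.0, lam=1.0, rho=1.0, iters=100):
    """ADMM sketch for min_x 0.5*(x-a)^2 + lam*|x|, split as f(x)+g(z), x=z."""
    x = z = u = 0.0
    for _ in range(iters):
        x = (a + rho * (z - u)) / (1.0 + rho)   # x-update: quadratic minimization
        z = soft_threshold(x + u, lam / rho)    # z-update: prox of lam*|.|
        u = u + x - z                           # scaled dual (multiplier) update
    return x

# the closed-form solution of this toy problem is soft_threshold(a, lam) = 2.0
```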
A Proximal Dual Consensus ADMM Method for MultiAgent Constrained Optimization
TimeAverage Stochastic Optimization with Nonconvex Decision Set and its Convergence
Abstract—This paper considers time-average stochastic optimization, where a time-average decision vector, an average of decision vectors chosen in every time step from a time-varying (possibly nonconvex) set, minimizes a convex objective function and satisfies convex constraints. This formulation has applications in networking and operations research. In general, time-average stochastic optimization can be solved by a Lyapunov optimization technique. This paper shows that the technique exhibits a transient phase and a steady-state phase. When the problem has a unique vector of Lagrange multipliers, the convergence time can be improved. By starting the time average in the steady state, the convergence times become O(1/ε) under a locally-polyhedral assumption and O(1/ε^1.5) under a locally-non-polyhedral assumption, where ε denotes the proximity to the optimal objective cost.
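The Lyapunov technique mentioned here is commonly realized as a drift-plus-penalty rule: a virtual queue tracks constraint violation and each slot greedily trades off objective against queue backlog. The following toy sketch (my own problem instance, not the paper's setting) minimizes the time average of x² subject to a time-average constraint mean(x) ≥ 1 over x ∈ [0, 2]:

```python
def drift_plus_penalty(V=50.0, T=5000):
    """Drift-plus-penalty sketch: minimize time-avg of x^2 s.t. mean(x) >= 1,
    x in [0, 2]. Toy illustration of the Lyapunov optimization technique."""
    Q = 0.0           # virtual queue tracking the constraint residual 1 - x
    total = 0.0
    for _ in range(T):
        # greedy slot decision: minimize V*x^2 + Q*(1 - x) over x in [0, 2]
        x = min(max(Q / (2.0 * V), 0.0), 2.0)
        total += x
        Q = max(Q + (1.0 - x), 0.0)   # queue stability => constraint satisfied
    return total / T  # time-average decision

# the time average approaches the constrained optimum x* = 1 as V and T grow
```

One can see the transient/steady-state split the abstract describes: the queue Q ramps up during a transient phase before settling near its steady value of about 2V.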
On linear convergence of a distributed dual gradient algorithm for linearly constrained separable convex problems
, 2014
In this paper we propose a distributed dual gradient algorithm for minimizing linearly constrained separable convex problems and analyze its rate of convergence. In particular, we prove that under the assumptions of strong convexity and Lipschitz continuity of the gradient of the primal objective function, we have a global error-bound-type property for the dual problem. Using this error-bound property we devise a new fully distributed dual gradient scheme, i.e. a gradient scheme based on a weighted step size, for which we derive a global linear rate of convergence for both dual and primal suboptimality and for primal feasibility violation. Many real applications, e.g. distributed model predictive control, network utility maximization or optimal power flow, can be posed as linearly constrained separable convex problems for which dual gradient-type methods from the literature have a sublinear convergence rate. In the present paper we prove for the first time that in fact we can achieve a linear convergence rate for such algorithms when they are used for solving these applications. Numerical simulations are also provided to confirm our theory.
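The basic dual gradient mechanism for a linearly constrained separable problem can be sketched on a two-block toy instance (my own example, not the paper's weighted scheme): each block minimizes its own Lagrangian term independently, and the multiplier is updated with the coupling-constraint residual.

```python
def dual_gradient(alpha=0.2, iters=200):
    """Dual gradient sketch for the separable problem
        min x1^2 + x2^2  s.t.  x1 + x2 = 2.
    Each block solves its own subproblem; the dual variable ascends on the
    constraint residual. Toy illustration only."""
    lam = 0.0
    x1 = x2 = 0.0
    for _ in range(iters):
        # separable primal steps: argmin_{xi} xi^2 + lam*xi  =>  xi = -lam/2
        x1 = -lam / 2.0
        x2 = -lam / 2.0
        lam += alpha * (x1 + x2 - 2.0)   # dual update with the residual
    return x1, x2, lam

# converges to the optimum x1 = x2 = 1 with multiplier lam = -2
```

The separability is what makes the scheme distributed: each xi update needs only the shared multiplier, never the other block's data.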
An Asynchronous Distributed Proximal Gradient Method for Composite Convex Optimization
Abstract—We propose a distributed first-order augmented Lagrangian (DFAL) algorithm to minimize the sum of composite convex functions, where each term in the sum is a private cost function belonging to a node, and only nodes connected by an edge can directly communicate with each other. This optimization model abstracts a number of applications in distributed sensing and machine learning. We show that any limit point of DFAL iterates is optimal; and for any ε > 0, an ε-optimal and ε-feasible solution can be computed within O(log(ε^-1)) DFAL iterations, which require O(ψ_max^1.5 / d_min · ε^-1) proximal gradient computations and communications per node in total, where ψ_max denotes the largest eigenvalue of the graph Laplacian, and d_min is the minimum degree of the graph. We also propose an asynchronous version of DFAL by incorporating randomized block coordinate descent methods, and demonstrate the efficiency of DFAL on large-scale sparse-group LASSO problems.
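The proximal gradient computations counted in this abstract follow the standard forward-backward pattern. As a generic reference (not the DFAL algorithm itself), here is ISTA for the LASSO-type objective 0.5‖Ax−b‖² + λ‖x‖₁:

```python
import numpy as np

def ista(A, b, lam, step, iters=500):
    """Proximal gradient (ISTA) sketch for min 0.5*||Ax-b||^2 + lam*||x||_1.
    A generic forward-backward illustration; step should satisfy
    step <= 1/||A^T A||."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - b)      # gradient of the smooth part
        v = x - step * g           # forward (gradient) step
        # backward step: prox of step*lam*||.||_1, i.e. soft-thresholding
        x = np.sign(v) * np.maximum(np.abs(v) - step * lam, 0.0)
    return x
```

For sparse-group LASSO the prox would combine soft-thresholding with group-wise shrinkage, but the iteration skeleton is the same.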
Spatial Reuse in Dense Wireless Areas: A Cross-layer Optimization Approach via ADMM (IEEE Transactions on Wireless Communications)
Abstract—This paper introduces an efficient method for communication resource use in dense wireless areas where all nodes must communicate with a common destination node. The proposed method groups nodes based on their distance from the destination and creates a structured multi-hop configuration in which each group can relay its neighbor's data. The large number of active radio nodes and the common direction of communication toward a single destination are exploited to reuse the limited spectrum resources in spatially separated groups. Spectrum allocation constraints among groups are then embedded in a joint routing and resource allocation framework to optimize the route and amount of resources allocated to each node. The solution to this problem uses coordination among the lower layers of the wireless-network protocol stack to outperform conventional approaches where these layers are decoupled. Furthermore, the structure of this problem is exploited to obtain a semi-distributed optimization algorithm based on the alternating direction method of multipliers (ADMM) where each node can optimize its resources independently based on local channel information. Index Terms—Alternating direction method of multipliers (ADMM), cross-layer optimization, dynamic resource allocation, routing.
Distributed Optimization Algorithms for Wide-Area Oscillation Monitoring in Power Systems Using Inter-Regional PMU–PDC Architectures
Abstract—In this paper, we present a set of distributed algorithms for estimating the electromechanical oscillation modes of large power system networks using Synchrophasors. With the number of Phasor Measurement Units (PMUs) in the North American grid scaling up to thousands, system operators are gradually inclining towards distributed cyber-physical architectures for executing wide-area monitoring and control operations. Traditional centralized approaches, in fact, are anticipated to become untenable soon due to various factors such as data volume, security, communication overhead, and failure to adhere to real-time deadlines. To address this challenge, we propose three different communication and computational architectures by which estimators located at the control centers of various utility companies can run local optimization algorithms using local PMU data, and thereafter communicate with other estimators to reach a global solution. Both synchronous and asynchronous communications are considered. Each architecture integrates a centralized Prony-based algorithm with several variants of the Alternating Direction Method of Multipliers (ADMM). We discuss the relative advantages and bottlenecks of each architecture using simulations of the IEEE 68-bus and IEEE 145-bus power systems as well as an ExoGENI-based software-defined network. Index Terms—Distributed optimization, Prony, phasor measurements
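The centralized building block referred to here, classical Prony analysis, fits oscillation modes by solving a linear-prediction least-squares problem and rooting the resulting characteristic polynomial. A minimal sketch of that centralized step (my own implementation, not the paper's distributed ADMM variant):

```python
import numpy as np

def prony_modes(y, p):
    """Classical Prony step: fit order-p linear-prediction coefficients by
    least squares and return the signal modes as polynomial roots."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    # linear-prediction system: y[n] = sum_j a[j] * y[n-1-j], n = p..T-1
    A = np.column_stack([y[p - 1 - j : T - 1 - j] for j in range(p)])
    b = y[p:]
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    # modes are the roots of z^p - a[0]*z^(p-1) - ... - a[p-1]
    return np.roots(np.concatenate(([1.0], -a)))
```

In the distributed setting of the paper, each estimator would hold its own rows of this least-squares system and the regional solutions would be reconciled via ADMM.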
Fast Decentralized Gradient Descent Method and Applications to In-situ Seismic Tomography
Abstract—We consider the decentralized consensus optimization problem arising from in-situ seismic tomography in large-scale sensor networks. Unlike traditional seismic imaging performed at a centralized location, each node in this setting privately holds an objective function and partial data. The goal of each node is to obtain the optimal solution of the whole seismic image while communicating only with its immediate neighbors. We present a fast decentralized gradient descent method and prove that this new method can reach the optimal convergence rate of O(1/k^2), where k is the number of communication/iteration rounds. Extensive numerical experiments on synthetic and real-world sensor network seismic data demonstrate that the proposed algorithms significantly outperform existing methods.
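As background on the problem class, plain decentralized gradient descent (DGD) mixes each node's iterate with its neighbors' through a doubly stochastic weight matrix and takes a local gradient step. The sketch below is the generic baseline with a constant step (which leaves an O(α) bias), not the paper's accelerated O(1/k²) method; the graph and local objectives are my own toy choices.

```python
import numpy as np

def decentralized_gd(a, W, alpha=0.01, iters=2000):
    """DGD sketch: node i holds f_i(x) = 0.5*(x - a[i])^2 and mixes with
    neighbors through the doubly stochastic matrix W."""
    x = np.zeros_like(a)
    for _ in range(iters):
        x = W @ x - alpha * (x - a)   # consensus mixing + local gradient step
    return x

# Metropolis weights for the path graph 1-2-3 (doubly stochastic)
W = np.array([[2/3, 1/3, 0.0],
              [1/3, 1/3, 1/3],
              [0.0, 1/3, 2/3]])
a = np.array([0.0, 3.0, 6.0])
# every node approaches the global minimizer mean(a) = 3, up to O(alpha) bias
```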
Stochastic Graph Filtering on Time-Varying Graphs (Extended Version)
Abstract—We have recently seen a surge of work on distributed graph filters, extending classical results to the graph setting. State-of-the-art filters have, however, only been examined from a deterministic standpoint, ignoring the impact of stochasticity in the computation (e.g., temporal fluctuation of links) and in the input (e.g., the value of each node is a random process). Initiating the study of stochastic graph signal processing, this paper shows that a prominent class of graph filters, namely autoregressive moving average (ARMA) filters, are suitable for the stochastic setting. In particular, we prove that an ARMA filter that operates on a stochastic signal over a stochastic graph is equivalent, in the mean, to the same filter operating on the expected signal over the expected graph. We also characterize the variance of the output and provide an upper bound for its average value among different nodes. Our results are validated by numerical simulations.
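The deterministic ARMA(1) graph filter underlying this analysis is a simple distributed recursion: each node repeatedly combines its neighbors' current outputs (through the graph Laplacian L) with its own input. A minimal sketch on a fixed graph, assuming the stability condition |ψ|·‖L‖ < 1 (parameter names are mine):

```python
import numpy as np

def arma1_graph_filter(L, x, psi, phi, iters=200):
    """ARMA(1) graph filter sketch: iterate y <- psi*L@y + phi*x, which
    converges to phi*(I - psi*L)^{-1} x when |psi|*||L|| < 1. Each iteration
    needs only one exchange with graph neighbors, so it runs distributedly."""
    y = np.zeros_like(x)
    for _ in range(iters):
        y = psi * (L @ y) + phi * x
    return y
```

The paper's contribution is the analysis of this recursion when both L and x are stochastic; in the mean, the output matches the filter applied to the expected signal on the expected graph.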
Distributed Interference Management Policies for Heterogeneous Small Cell Networks
We study the problem of distributed interference management in a network of heterogeneous small cells with different cell sizes, different numbers of user equipments (UEs) served, and different throughput requirements by UEs. We consider the uplink transmission, where each UE determines when and at what power level it should transmit to its serving small cell base station (SBS). We propose a general framework for designing distributed interference management policies, which exploits weak interference among non-neighboring UEs by letting them transmit simultaneously (i.e. spatial reuse), while eliminating strong interference among neighboring UEs by letting them transmit in different time slots. The design of optimal interference management policies has two key steps. Ideally, we need to find all the subsets of non-interfering UEs, i.e. the maximal independent sets (MISs) of the interference graph, but this is NP-hard even when solved in a centralized manner. Then, in order to maximize some given network performance criterion subject to UEs' minimum throughput requirements, we need to determine the optimal fraction of time occupied by each MIS, which requires global information (e.g. all the UEs' throughput requirements and channel gains). In our framework, we first propose a distributed algorithm for the UE-SBS pairs to find a subset of MISs in logarithmic …
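Since finding all MISs is NP-hard, practical schemes fall back on heuristics. As a point of reference (a simple centralized heuristic of my own, not the paper's distributed logarithmic-time algorithm), one maximal independent set of an interference graph can be found greedily:

```python
def greedy_mis(adj):
    """Greedy maximal independent set sketch for an interference graph given
    as an adjacency dict {node: set(neighbors)}. Nodes in the returned set
    can transmit simultaneously; neighbors of a chosen node are excluded."""
    chosen = set()
    blocked = set()
    # visit lowest-degree nodes first (a common heuristic)
    for v in sorted(adj, key=lambda u: len(adj[u])):
        if v not in blocked:
            chosen.add(v)
            blocked.add(v)
            blocked |= adj[v]    # neighbors interfere, so exclude them
    return chosen

# e.g. on the path 0-1-2-3, the non-adjacent endpoints {0, 3} are selected
```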