Results 1–10 of 21
Average Consensus in the Presence of Delays and Dynamically Changing Directed Graph Topologies
IEEE Transactions on Automatic Control, 2012
Cited by 4 (2 self)
Multidimensional Newton-Raphson consensus for distributed convex optimization
Cited by 3 (2 self)
Abstract — In this work we consider a multidimensional distributed optimization technique that is suitable for multi-agent systems subject to limited communication connectivity. In particular, we consider a convex unconstrained additive problem, i.e. a case where the global convex unconstrained multidimensional cost function is given by the sum of local cost functions available only to the specific owning agents. We show how, by exploiting the separation-of-time-scales principle, the multidimensional consensus-based strategy approximates a Newton-Raphson descent algorithm. We propose two alternative optimization strategies corresponding to approximations of the main procedure. These approximations introduce trade-offs between the required communication bandwidth and the convergence speed/accuracy of the results. We provide analytical proofs of convergence and numerical simulations supporting the intuitions developed throughout the paper. Index Terms — multidimensional distributed optimization, multidimensional convex optimization, consensus algorithms, multi-agent systems, Newton-Raphson methods
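The core idea behind this line of work can be illustrated with a minimal sketch for scalar quadratic costs: each agent runs dynamic average consensus on the quantities f_i''(x)x − f_i'(x) and f_i''(x), and a slow local update moves its estimate toward their ratio, which approximates a global Newton-Raphson step. The quadratic costs, ring topology, and all parameter values below are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

# Hypothetical local costs f_i(x) = 0.5 * a_i * (x - b_i)^2 (illustrative).
a = np.array([1.0, 2.0, 0.5, 3.0])   # local curvatures f_i''
b = np.array([4.0, -1.0, 2.0, 0.5])  # local minimisers
n = len(a)

# Doubly stochastic mixing matrix for a 4-agent ring.
P = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

g = lambda x: a * x - a * (x - b)    # f_i''(x) x - f_i'(x)  (= a_i b_i here)
h = lambda x: a * np.ones_like(x)    # f_i''(x)

x = np.zeros(n)
y, z = g(x), h(x)                    # consensus trackers of avg g and avg h
g_old, h_old = g(x), h(x)

eps = 0.05                           # slow local step vs. fast consensus mixing
for _ in range(2000):
    x = (1 - eps) * x + eps * (y / z)    # local Newton-like step
    y = P @ (y + g(x) - g_old)           # dynamic average tracking of g
    z = P @ (z + h(x) - h_old)           # dynamic average tracking of h
    g_old, h_old = g(x), h(x)

x_star = np.sum(a * b) / np.sum(a)       # centralized minimiser of sum_i f_i
print(x.round(4), round(x_star, 4))
```

For quadratic costs the ratio y/z converges at every agent to the global minimiser, matching the time-scale-separation intuition described in the abstract.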
A Polyhedral Approximation Framework for Convex and Robust Distributed Optimization
2013
Cited by 2 (0 self)
In this paper we consider a general problem setup for a wide class of convex and robust distributed optimization problems in peer-to-peer networks. In this setup convex constraint sets are distributed to the network processors, which have to compute the optimizer of a linear cost function subject to the constraints. We propose a novel fully distributed algorithm, named cutting-plane consensus, to solve the problem, based on an outer polyhedral approximation of the constraint sets. Processors running the algorithm compute and exchange linear approximations of their locally feasible sets. Independently of the number of processors in the network, each processor stores only a small number of linear constraints, making the algorithm scalable to large networks. The cutting-plane consensus algorithm is presented and analyzed for the general framework. Specifically, we prove that all processors running the algorithm agree on an optimizer of the global problem, and that the algorithm is tolerant to node and link failures as long as network connectivity is preserved. Then, the cutting-plane consensus algorithm is specified to three different classes of distributed optimization problems, namely (i) inequality-constrained problems, (ii) robust optimization problems, and (iii) almost separable optimization problems with separable objective functions and coupling constraints. For each of these problem classes we solve a concrete problem that can be expressed in that framework and present computational results. That is, we show how to solve position estimation in wireless sensor networks, a distributed robust linear program, and a distributed microgrid control problem.
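A degenerate one-dimensional case conveys the mechanism: maximizing x subject to halfspace cuts x ≤ b_j held by different processors, where each processor stores only its binding cut and exchanges it with neighbours, so all agree on the global optimiser min_j b_j. The constraint values and ring topology below are illustrative assumptions; the actual algorithm handles general linear cuts via local LP solves.

```python
# Each processor j initially knows only the single cut x <= cuts[j].
cuts = [5.0, 3.2, 7.1, 4.4]
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}  # 4-node ring

# Synchronous rounds: pool own cut with neighbours' cuts, keep the binding one.
for _ in range(len(cuts)):           # a few rounds (>= graph diameter) suffice
    new_cuts = []
    for i, my_cut in enumerate(cuts):
        pooled = [my_cut] + [cuts[j] for j in neighbors[i]]
        new_cuts.append(min(pooled))  # the tightest cut is the outer model here
    cuts = new_cuts

print(cuts)  # every processor holds the global optimum 3.2
```

The storage property from the abstract is visible even in this toy: each processor keeps one constraint regardless of network size.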
Distributed Robust Optimization via Cutting-Plane Consensus
Cited by 1 (0 self)
This paper addresses the problem of robust optimization in large-scale networks of identical processors. General convex optimization problems are considered, where uncertain constraints are distributed to the processors in the network. The processors have to compute a maximizer of a linear objective over the robustly feasible set, defined as the intersection of all locally known feasible sets. We propose a novel asynchronous algorithm, based on outer approximations of the robustly feasible set, to solve such problems. Each processor stores a small set of linear constraints that form an outer approximation of the robustly feasible set. Based on its locally available information and the data exchanged with neighboring processors, each processor repeatedly updates its local approximation. A computational study for robust linear programming illustrates that the completion time of the algorithm depends primarily on the diameter of the communication graph and is independent of the network size.
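The outer-approximation step for a single robust constraint can be sketched in one dimension. The example below is an illustrative assumption, not the paper's experiment: maximise x subject to (1 + u)·x ≤ 5 for all u in [−0.2, 0.2]; a pessimising oracle returns the most violated scenario cut at the current iterate, and the model is re-optimised until the iterate is robustly feasible.

```python
cuts = []      # scenario cuts (1 + u) * x <= 5 collected so far
x = 100.0      # optimiser of the (initially empty) outer model

for _ in range(5):
    # Oracle: for x > 0 the worst-case scenario maximising (1 + u) * x is u = +0.2.
    u_worst = 0.2 if x > 0 else -0.2
    if (1 + u_worst) * x <= 5 + 1e-9:
        break                            # iterate is robustly feasible: stop
    cuts.append(u_worst)
    x = min(5 / (1 + u) for u in cuts)   # re-optimise over the outer model

print(round(x, 4))  # 4.1667 = 5 / 1.2, the robust optimum
```

In the distributed algorithm these cuts are also exchanged between neighboring processors, so the local outer models tighten toward the intersection of all feasible sets.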
Distributed line search via dynamic convex combinations
Abstract — This paper considers multi-agent systems seeking to optimize a convex aggregate function. We assume that the gradient of this function is distributed, meaning that each agent can compute its corresponding partial derivative with state information about its neighbors and itself only. In such scenarios, the discrete-time implementation of the gradient descent method poses the fundamental challenge of determining appropriate agent step sizes that guarantee the monotonic evolution of the objective function. We provide a distributed algorithmic solution to this problem based on the aggregation of agent step sizes via adaptive convex combinations. Simulations illustrate our results.
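One way to see why aggregating local step-size information matters: for an aggregate of quadratics, a step size that is safe for each local cost alone can overshoot the sum. The sketch below is a loose illustration under stated assumptions, not the paper's scheme: agents run average consensus (iterated convex combinations, the rows of a stochastic matrix) on their local curvatures L_i, and, assuming n is known, derive the common step α = 1/Σ L_i, which guarantees monotone decrease of the aggregate cost.

```python
import numpy as np

# Illustrative local costs f_i(x) = 0.5 * L_i * (x - c_i)^2.
L = np.array([1.0, 4.0, 2.0, 3.0])   # local curvatures (assumed values)
c = np.array([0.0, 1.0, -2.0, 3.0])
n = len(L)
P = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])  # doubly stochastic ring weights

est = L.copy()
for _ in range(50):       # each row of P is a convex combination of neighbours
    est = P @ est         # est -> mean(L) at every agent

alpha = 1.0 / (n * est[0])            # = 1 / sum_i L_i, safe for the aggregate
f = lambda x: 0.5 * np.sum(L * (x - c) ** 2)

x = 10.0
vals = [f(x)]
for _ in range(100):
    x -= alpha * np.sum(L * (x - c))  # gradient of the aggregate cost
    vals.append(f(x))

monotone = all(later <= earlier + 1e-9 for earlier, later in zip(vals, vals[1:]))
print(alpha, monotone)
```

The paper's contribution is the adaptive, per-iteration version of such aggregation; the fixed consensus pre-computation above only conveys the "convex combinations of local quantities" ingredient.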
Finding the Best Panoramas
2011
Google Maps publishes street-level panoramic photographs from around the world in the Street View service. When users request street-level imagery in a given area, we would like to show the best or most representative imagery from the region. In order to select the best panorama for a region of any size, I developed a panorama ranking algorithm. An enhancement to this technique is also described here, leveraging the Alternating Direction Method of Multipliers to create a high-throughput distributed online learning algorithm that should allow for instant classification updating based on real-time user traffic. The ranking algorithm was deployed on maps.google.com on Monday, December 12, 2011. For more in-depth information on the particular difficulties posed by our work on Google Street View, please refer to [1] and [2].
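The distributed-learning ingredient mentioned above, consensus ADMM, can be sketched for a least-squares ranking model whose training data is split across workers. All data, shapes, and parameter values below are synthetic illustrations (not Street View data): each worker solves a small regularized local problem in parallel, a global averaging step forms the consensus variable, and dual updates enforce agreement.

```python
import numpy as np

# Consensus ADMM for ridge regression split across workers:
#   minimize sum_i 0.5 * ||A_i w - b_i||^2 + lam * ||z||^2  s.t.  w_i = z.
rng = np.random.default_rng(0)
d, n_workers = 3, 4
w_true = np.array([1.0, -2.0, 0.5])                 # synthetic ground truth
A = [rng.standard_normal((20, d)) for _ in range(n_workers)]
b = [Ai @ w_true for Ai in A]                       # noiseless labels (sketch)

rho, lam = 1.0, 1e-3
z = np.zeros(d)
w = [np.zeros(d) for _ in range(n_workers)]
u = [np.zeros(d) for _ in range(n_workers)]         # scaled dual variables

for _ in range(200):
    # Local (parallelisable) regularized least-squares updates.
    w = [np.linalg.solve(Ai.T @ Ai + rho * np.eye(d),
                         Ai.T @ bi + rho * (z - ui))
         for Ai, bi, ui in zip(A, b, u)]
    # Global averaging step producing the consensus variable.
    z = rho * sum(wi + ui for wi, ui in zip(w, u)) / (2 * lam + n_workers * rho)
    # Dual updates penalising disagreement.
    u = [ui + wi - z for wi, ui in zip(w, u)]

print(z.round(3))
```

In an online setting, each worker's (A_i, b_i) would be refreshed from its stream of user-traffic examples between ADMM rounds; the consensus variable z is the shared model.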
Asynchronous Newton-Raphson Consensus for Distributed Convex Optimization
Abstract: We consider the distributed unconstrained minimization of separable convex cost functions, where the global cost is given by the sum of several local and private costs, each associated with a specific agent of a given communication network. We specifically address an asynchronous distributed optimization technique called Newton-Raphson Consensus. Besides having low computational complexity and low communication requirements, and being interpretable as a distributed Newton-Raphson algorithm, the technique also has the beneficial properties of requiring very little coordination and naturally supporting time-varying topologies. In this work we analytically prove that under some assumptions it exhibits either local or global convergence, and corroborate this result by means of numerical simulations.
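The asynchronous flavour can be sketched with random pairwise gossip instead of synchronous mixing: at each tick one edge wakes up, the two incident agents average their tracker variables (which preserves their sum), and only those agents take a local Newton-like step. The scalar quadratic costs, ring topology, and parameters below are illustrative assumptions, not the paper's exact protocol.

```python
import numpy as np

# Hypothetical local costs f_i(x) = 0.5 * a_i * (x - b_i)^2 (illustrative).
rng = np.random.default_rng(1)
a = np.array([1.0, 2.0, 0.5, 3.0])   # local curvatures f_i''
b = np.array([4.0, -1.0, 2.0, 0.5])  # local minimisers
n = len(a)
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]  # 4-agent ring

y = a * b                            # tracker of f_i''(x) x - f_i'(x) (constant here)
z = a.copy()                         # tracker of f_i''(x)
x = np.zeros(n)
eps = 0.05                           # slow local step vs. fast gossip mixing

for _ in range(4000):
    i, j = edges[rng.integers(len(edges))]   # a random edge wakes up
    y[i] = y[j] = 0.5 * (y[i] + y[j])        # symmetric gossip: sums preserved
    z[i] = z[j] = 0.5 * (z[i] + z[j])
    for k in (i, j):                         # only the awake agents update
        x[k] = (1 - eps) * x[k] + eps * y[k] / z[k]

x_star = np.sum(a * b) / np.sum(a)           # centralized minimiser of sum_i f_i
print(x.round(3), round(x_star, 3))
```

Because symmetric gossip preserves the trackers' sums, every agent's ratio y/z still converges to the global Newton target, which is the "very little coordination" property the abstract highlights.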