Results 1–10 of 79
Distributed Subgradient Methods for Multi-Agent Optimization
2007
Abstract

Cited by 240 (25 self)
We study a distributed computation model for optimizing a sum of convex objective functions corresponding to multiple agents. For solving this (not necessarily smooth) optimization problem, we consider a subgradient method that is distributed among the agents. The method involves every agent minimizing his/her own objective function while exchanging information locally with other agents in the network over a time-varying topology. We provide convergence results and convergence rate estimates for the subgradient method. Our convergence rate results explicitly characterize the tradeoff between a desired accuracy of the generated approximate optimal solutions and the number of iterations needed to achieve the accuracy.
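The iteration this abstract describes — a local averaging (consensus) step followed by a subgradient step on each agent's own objective — can be sketched as follows. The absolute-value local costs f_i(x) = |x − t_i|, the uniform mixing matrix, and the step size are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def distributed_subgradient(targets, weights, alpha=0.01, iters=2000):
    """Minimize sum_i |x - t_i| over a network with mixing matrix `weights`."""
    x = np.zeros(len(targets))          # one scalar iterate per agent
    for _ in range(iters):
        x = weights @ x                 # local averaging (consensus step)
        g = np.sign(x - targets)        # subgradient of |x - t_i| at x_i
        x = x - alpha * g               # subgradient step
    return x

# Fully connected 3-agent network with uniform (doubly stochastic) weights.
W = np.full((3, 3), 1.0 / 3.0)
targets = np.array([0.0, 1.0, 2.0])
x = distributed_subgradient(targets, W)
# Agents should agree near a minimizer of |x| + |x-1| + |x-2|, i.e. near x = 1,
# to within a tolerance of the order of the (constant) step size alpha.
```

With a constant step size the iterates only reach an O(alpha) neighborhood of the optimum, which is the accuracy/iteration tradeoff the abstract refers to.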
MRF energy minimization and beyond via dual decomposition
In: IEEE PAMI, 2011
Abstract

Cited by 105 (9 self)
This paper introduces a new rigorous theoretical framework to address discrete MRF-based optimization in computer vision. Such a framework exploits the powerful technique of Dual Decomposition. It is based on a projected subgradient scheme that attempts to solve an MRF optimization problem by first decomposing it into a set of appropriately chosen subproblems and then combining their solutions in a principled way. In order to determine the limits of this method, we analyze the conditions that these subproblems have to satisfy and we demonstrate the extreme generality and flexibility of such an approach. We thus show that, by appropriately choosing what subproblems to use, one can design novel and very powerful MRF optimization algorithms. For instance, in this manner we are able to derive algorithms that: 1) generalize and extend state-of-the-art message-passing methods, 2) optimize very tight LP-relaxations to MRF optimization, and 3) take full advantage of the special structure that may exist in particular MRFs, allowing the use of efficient inference techniques such as, e.g., graph-cut based methods. Theoretical analysis of the bounds related to the different algorithms derived from our framework and experimental results/comparisons using synthetic and real data for a variety of tasks in computer vision demonstrate the extreme potential of our approach.
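The core mechanism — decompose, solve the subproblems independently, and drive them to agreement by subgradient ascent on Lagrange multipliers — can be illustrated on a toy single-variable problem. The cost tables below and the decomposition into two "slave" problems are made-up illustrations, not taken from the paper.

```python
import numpy as np

# Minimize theta1[x] + theta2[x] over a discrete label x in {0, 1, 2}
# by giving each slave its own copy of x, coupled through multipliers.
theta1 = np.array([3.0, 1.0, 2.0])
theta2 = np.array([0.0, 2.5, 5.0])

lam = np.zeros(3)                       # one multiplier per label
for k in range(100):
    x1 = int(np.argmin(theta1 + lam))   # slave 1 sees shifted costs
    x2 = int(np.argmin(theta2 - lam))   # slave 2 sees the opposite shift
    if x1 == x2:                        # slaves agree: dual subgradient is 0
        break
    g = np.zeros(3)
    g[x1] += 1.0                        # supergradient of the dual at lam:
    g[x2] -= 1.0                        # indicator(x1) - indicator(x2)
    lam += (1.0 / (k + 1)) * g          # ascent step with diminishing size

# On agreement, x1 (= x2) minimizes theta1[x] + theta2[x].
```

When the slaves agree the dual bound is tight and the common label is optimal; in general (as the paper discusses) the dual optimum may leave a small integrality gap.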
Dual decomposition for parsing with non-projective head automata
In: Proc. of EMNLP, 2010
Abstract

Cited by 101 (16 self)
This paper introduces algorithms for non-projective parsing based on dual decomposition. We focus on parsing algorithms for non-projective head automata, a generalization of head-automata models to non-projective structures. The dual decomposition algorithms are simple and efficient, relying on standard dynamic programming and minimum spanning tree algorithms. They provably solve an LP relaxation of the non-projective parsing problem. Empirically the LP relaxation is very often tight: for many languages, exact solutions are achieved on over 98% of test sentences. The accuracy of our models is higher than previous work on a broad range of datasets.
On Distributed Convex Optimization Under Inequality and Equality Constraints
University of California, San Diego, 2012
Abstract

Cited by 52 (8 self)
We consider a general multi-agent convex optimization problem where the agents are to collectively minimize a global objective function subject to a global inequality constraint, a global equality constraint, and a global constraint set. The objective function is defined by a sum of local objective functions, while the global constraint set is produced by the intersection of local constraint sets. In particular, we study two cases: one where the equality constraint is absent, and the other where the local constraint sets are identical. We devise two distributed primal-dual subgradient algorithms based on the characterization of the primal-dual optimal solutions as the saddle points of the Lagrangian and penalty functions. These algorithms can be implemented over networks with dynamically changing topologies that satisfy a standard connectivity property, and allow the agents to asymptotically agree on optimal solutions and optimal values of the optimization problem under Slater's condition.
Subgradient methods for saddle-point problems
In: Journal of Optimization Theory and Applications, 2009
Abstract

Cited by 37 (0 self)
We study subgradient methods for computing the saddle points of a convex-concave function. Our motivation comes from networking applications where dual and primal-dual subgradient methods have attracted much attention in designing decentralized network protocols. We first present a subgradient algorithm for generating approximate saddle points and provide per-iteration convergence rate estimates on the constructed solutions. We then focus on Lagrangian duality, where we consider a convex primal optimization problem and its Lagrangian dual problem, and generate approximate primal-dual optimal solutions as approximate saddle points of the Lagrangian function. We present a variation of our subgradient method under the Slater constraint qualification and provide stronger estimates on the convergence rate of the generated primal sequences. In particular, we provide bounds on the amount of feasibility violation and on the primal objective function values at the approximate solutions. Our algorithm is particularly well-suited for problems where the subgradient of the dual function cannot be evaluated easily (equivalently, the minimum of the Lagrangian function at a dual solution cannot be computed efficiently), thus impeding the use of dual subgradient methods.
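The scheme this abstract analyzes — simultaneous projected (sub)gradient descent in the minimization variable and ascent in the maximization variable, with iterate averaging — can be sketched on a toy function. The convex-concave choice L(x, y) = x² − y² + xy (unique saddle point at (0, 0)), the box constraints, and the step size are illustrative assumptions, not from the paper.

```python
import numpy as np

def saddle_subgradient(alpha=0.05, iters=2000):
    x, y = 1.0, -1.0
    xs, ys = [], []
    for _ in range(iters):
        gx = 2 * x + y                       # gradient of L in x
        gy = -2 * y + x                      # gradient of L in y
        x = np.clip(x - alpha * gx, -1, 1)   # descent step, projected on [-1,1]
        y = np.clip(y + alpha * gy, -1, 1)   # ascent step, projected on [-1,1]
        xs.append(x)
        ys.append(y)
    return np.mean(xs), np.mean(ys)          # averaged (ergodic) iterates

x_bar, y_bar = saddle_subgradient()
# The averaged iterates approach the saddle point (0, 0).
```

Returning the averaged rather than the last iterates is what makes per-iteration rate estimates of the kind described above possible for general convex-concave functions.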
Approximate Inference in Graphical Models using LP Relaxations
2010
Abstract

Cited by 27 (1 self)
Graphical models such as Markov random fields have been successfully applied to a wide variety of fields, from computer vision and natural language processing to computational biology. Exact probabilistic inference is generally intractable in complex models having many dependencies between the variables. We present new approaches to approximate inference based on linear programming (LP) relaxations. Our algorithms optimize over the cycle relaxation of the marginal polytope, which we show to be closely related to the first lifting of the Sherali-Adams hierarchy, and is significantly tighter than the pairwise LP relaxation. We show how to efficiently optimize over the cycle relaxation using a cutting-plane algorithm that iteratively introduces constraints into the relaxation. We provide a criterion to determine which constraints would be most helpful in tightening the relaxation, and give efficient algorithms for solving the search problem of finding the best cycle constraint to add according to this criterion.
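The cutting-plane loop itself — solve the current relaxation, ask a separation oracle for a violated constraint, add it, and repeat — can be shown on a toy continuous problem rather than the marginal polytope. Everything here is an illustrative stand-in: a brute-force two-variable LP solver, a unit disk as the "hard" feasible set, and tangent cuts as the separation oracle.

```python
import itertools
import numpy as np

def solve_lp_2d(c, A, b):
    """Brute-force 2-variable LP: the optimum of min c.x s.t. Ax <= b
    lies at the intersection of two active constraint lines."""
    A, b = np.asarray(A, dtype=float), np.asarray(b, dtype=float)
    best, best_val = None, np.inf
    for i, j in itertools.combinations(range(len(b)), 2):
        M = A[[i, j]]
        if abs(np.linalg.det(M)) < 1e-12:
            continue                     # parallel constraints: no vertex
        v = np.linalg.solve(M, b[[i, j]])
        if np.all(A @ v <= b + 1e-9) and c @ v < best_val:
            best, best_val = v, c @ v
    return best

c = np.array([1.0, 1.0])                 # minimize x + y ...
A = [np.array([1.0, 0.0]), np.array([-1.0, 0.0]),
     np.array([0.0, 1.0]), np.array([0.0, -1.0])]
b = [2.0, 2.0, 2.0, 2.0]                 # ... starting from the loose box [-2,2]^2
for _ in range(50):
    x = solve_lp_2d(c, A, b)
    if np.linalg.norm(x) <= 1 + 1e-6:    # separation oracle: inside the disk?
        break                            # no violated constraint: done
    A.append(x / np.linalg.norm(x))      # add the tangent cut at the violation
    b.append(1.0)
```

The loop terminates close to the disk minimizer (−1/√2, −1/√2); in the paper's setting the oracle instead searches for the most violated cycle constraint of the marginal polytope.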
A Tutorial on Dual Decomposition and Lagrangian Relaxation for Inference in Natural Language Processing
Abstract

Cited by 25 (4 self)
Dual decomposition, and more generally Lagrangian relaxation, is a classical method for combinatorial optimization; it has recently been applied to several inference problems in natural language processing (NLP). This tutorial gives an overview of the technique. We describe example algorithms, describe formal guarantees for the method, and describe practical issues in implementing the algorithms. While our examples are predominantly drawn from the NLP literature, the material should be of general relevance to inference problems in machine learning. A central theme of this tutorial is that Lagrangian relaxation is naturally applied in conjunction with a broad class of combinatorial algorithms, allowing inference in models that go significantly beyond previous work on Lagrangian relaxation for inference in graphical models.
Newton-Raphson consensus for distributed convex optimization
In: CDC and European Control Conference, 2011
Abstract

Cited by 21 (9 self)
We study the problem of unconstrained distributed optimization in the context of multi-agent systems subject to limited communication connectivity. In particular we focus on the minimization of a sum of convex cost functions, where each component of the global function is available only to a specific agent and can thus be seen as a private local cost. The agents need to cooperate to compute the minimizer of the sum of all costs. We propose a consensus-like strategy to estimate a Newton-Raphson descending update for the local estimates of the global minimizer at each agent. In particular, the algorithm is based on the separation of time scales principle and is proved to converge to the global minimizer if a specific parameter that tunes the rate of convergence is chosen sufficiently small. We also provide numerical simulations and compare them with alternative distributed optimization strategies such as the Alternating Direction Method of Multipliers and the Distributed Subgradient Method.
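The underlying idea — agents use consensus averaging to track the network-wide averages of f_i''(x)x − f_i'(x) and f_i''(x), whose ratio is a Newton-Raphson update for the sum of costs, applied with a small gain to keep the estimates on a slower time scale — can be sketched for quadratic local costs. The costs f_i(x) = a_i(x − c_i)², the uniform mixing matrix, and the single-averaging simplification (exact for quadratics, where both tracked quantities are constant) are illustrative assumptions, not the paper's general algorithm.

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])           # local curvatures
c = np.array([0.0, 1.0, -1.0])          # local minimizers
W = np.full((3, 3), 1.0 / 3.0)          # doubly stochastic mixing matrix

x = np.zeros(3)                         # each agent's estimate of the minimizer
for _ in range(500):
    g = 2 * a * x - 2 * a * (x - c)     # f_i''(x) x - f_i'(x)  (= 2 a_i c_i here)
    h = 2 * a                           # f_i''(x)              (= 2 a_i here)
    y = W @ g                           # consensus on the Newton numerator
    z = W @ h                           # consensus on the Newton denominator
    x = 0.9 * x + 0.1 * (y / z)         # small-gain Newton-Raphson update

# Minimizer of sum_i a_i (x - c_i)^2 is the curvature-weighted mean:
x_star = np.sum(a * c) / np.sum(a)
```

For these quadratics y/z equals the exact global Newton step, so all agents converge geometrically to x_star; the paper's analysis covers general convex costs and time-varying topologies.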
Dice: a Game Theoretic Framework for Wireless Multipath Network Coding
Abstract

Cited by 21 (1 self)
Network coding has emerged as a promising approach that enables reliable and efficient end-to-end transmissions in lossy wireless mesh networks. Existing protocols have demonstrated its resilience to packet losses, as well as the ability to integrate naturally with multipath opportunistic routing. However, these heuristics do not take into account the inherent resource competition in wireless networks, thereby compromising the coding advantages. In this paper, we take a game-theoretic perspective towards optimized resource allocation for network-coding-based unicast protocols. We design decentralized mechanisms that achieve a better efficiency-fairness tradeoff, for both cooperative and selfish users. Our framework features a modularized optimization of two subproblems: the multipath routing of coded information flows for each player, and the broadcast and coding rate allocation among competing players. We have implemented the framework on a wireless emulation testbed and demonstrated its high performance in terms of throughput and fairness.