Results 1-10 of 116,644
Distributed Subgradient Methods and Quantization Effects
"... We consider a convex unconstrained optimization problem that arises in a network of agents whose goal is to cooperatively optimize the sum of the individual agent objective functions through local computations and communications. For this problem, we use averaging algorithms to develop distributed ..."
Cited by 15 (1 self)
distributed subgradient methods that can operate over a time-varying topology. Our focus is on the convergence rate of these methods and the degradation in performance when only quantized information is available. Based on our recent results on the convergence time of distributed averaging algorithms, we
Distributed Subgradient Methods for Multiagent Optimization
, 2007
"... We study a distributed computation model for optimizing a sum of convex objective functions corresponding to multiple agents. For solving this (not necessarily smooth) optimization problem, we consider a subgradient method that is distributed among the agents. The method involves every agent minimiz ..."
Cited by 240 (25 self)
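Several of the entries in this listing describe the same basic iteration: each agent averages its neighbours' iterates using stochastic weights, then takes a subgradient step on its own local objective. A minimal sketch of that update, assuming hypothetical local objectives f_i(x) = |x - c_i| and a fixed Metropolis weight matrix W for a three-agent path graph (neither taken from any of the listed papers):

```python
# Consensus-based distributed subgradient sketch.  The local objectives
# f_i(x) = |x - c_i| and the Metropolis weights W below are illustrative
# assumptions, not taken from any of the papers listed here.

def subgrad(c, x):
    """A subgradient of f_i(x) = |x - c|: sign(x - c), with 0 at the kink."""
    return (x > c) - (x < c)

def distributed_subgradient(cs, W, steps=2000, alpha=0.01):
    """Averaging step with doubly stochastic W, then a local subgradient step."""
    n = len(cs)
    x = [0.0] * n                       # every agent starts from 0
    for _ in range(steps):
        # consensus step: x_i <- sum_j W[i][j] x_j
        mixed = [sum(W[i][j] * x[j] for j in range(n)) for i in range(n)]
        # local step: each agent descends only its own objective f_i
        x = [mixed[i] - alpha * subgrad(cs[i], mixed[i]) for i in range(n)]
    return x

# Three agents on a path graph; the minimizer of sum_i |x - c_i| is the
# median of the c_i, here 1.0.
cs = [0.0, 1.0, 5.0]
W = [[2/3, 1/3, 0.0],
     [1/3, 1/3, 1/3],
     [0.0, 1/3, 2/3]]
x = distributed_subgradient(cs, W)
```

With a constant step size the iterates settle within a step-size-dependent band around the minimizer of the sum (here the median of the c_i), which matches the accuracy/step-size trade-off several of these abstracts analyze.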
Distributed Subgradient Methods over Random Networks
, 2008
"... We consider the problem of cooperatively minimizing the sum of convex functions, where the functions represent local objective functions of the agents. We assume that each agent has information about his local function, and communicate with the other agents over a timevarying network topology. For ..."
Cited by 16 (3 self)
For this problem, we propose a distributed subgradient method that uses averaging algorithms for locally sharing information among the agents. In contrast to previous works that make worst-case assumptions about the connectivity of the agents (such as bounded communication intervals between nodes), we assume
Distributed Subgradient Methods for Delay Tolerant Networks
"... to optimize global performance in Delay Tolerant Networks (DTNs). These methods rely on simple local node operations and consensus algorithms to average neighbours ’ information. Existing results for convergence to optimal solutions can only be applied to DTNs in the case of synchronous operation of ..."
Cited by 5 (2 self)
of the nodes and memoryless random meeting processes. In this paper we address both these issues. First, we prove convergence to the optimal solution for a more general class of mobility models. Second, we show that, under asynchronous operations, a direct application of the original subgradient method would
Distributed Subgradient Methods for Convex Optimization over Random Networks
, 2009
"... We consider the problem of cooperatively minimizing the sum of convex functions, where the functions represent local objective functions of the agents. We assume that each agent has information about his local function, and communicate with the other agents over a timevarying network topology. For ..."
Cited by 27 (3 self)
For this problem, we propose a distributed subgradient method that uses averaging algorithms for locally sharing information among the agents. In contrast to previous works on multiagent optimization that make worst-case assumptions about the connectivity of the agents (such as bounded communication intervals
Convergence Analysis of Distributed Subgradient Methods over Random Networks
"... We consider the problem of cooperatively minimizing the sum of convex functions, where the functions represent local objective functions of the agents. We assume that each agent has information about his local function, and communicate with the other agents over a timevarying network topology. Fo ..."
Cited by 14 (1 self)
For this problem, we propose a distributed subgradient method that uses averaging algorithms for locally sharing information among the agents. In contrast to previous works that make worst-case assumptions about the connectivity of the agents (such as bounded communication intervals between nodes), we assume
On the rate of convergence of distributed subgradient methods for multiagent optimization
 PROCEEDINGS OF IEEE CDC
, 2007
"... We study a distributed computation model for optimizing the sum of convex (nonsmooth) objective functions of multiple agents. We provide convergence results and estimates for convergence rate. Our analysis explicitly characterizes the tradeoff between the accuracy of the approximate optimal soluti ..."
Cited by 15 (4 self)
Performance Evaluation of the Consensus-based Distributed Subgradient Method Under Random Communication Topologies
"... We investigate collaborative optimization of an objective function expressed as a sum of local convex functions, when the agents make decisions in a distributed manner using local information, while the communication topology used to exchange messages and information is modeled by a graphvalued ra ..."
Cited by 10 (0 self)
valued random process, assumed independent and identically distributed. Specifically, we study the performance of the consensus-based multiagent distributed subgradient method and show how it depends on the probability distribution of the random graph. For the case of a constant step-size, we first give
Pegasos: Primal Estimated sub-GrAdient SOlver for SVM
"... We describe and analyze a simple and effective stochastic subgradient descent algorithm for solving the optimization problem cast by Support Vector Machines (SVM). We prove that the number of iterations required to obtain a solution of accuracy ɛ is Õ(1/ɛ), where each iteration operates on a singl ..."
Cited by 542 (20 self)
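The Pegasos abstract describes a stochastic subgradient iteration on the regularized SVM objective (lam/2)*||w||^2 + (1/m)*sum_i max(0, 1 - y_i*&lt;w, x_i&gt;) with step size 1/(lam*t), which is where the O~(1/eps) iteration count comes from. A minimal sketch under that reading; the toy 2-D data set is an illustrative assumption, not from the paper:

```python
# Pegasos-style stochastic subgradient sketch for the SVM objective
#   (lam/2) * ||w||^2 + (1/m) * sum_i max(0, 1 - y_i * <w, x_i>)
# with the 1/(lam * t) step size.  The toy 2-D data set below is an
# illustrative assumption, not from the paper.
import random

def pegasos(data, lam=0.1, T=2000, seed=0):
    rng = random.Random(seed)
    w = [0.0, 0.0]
    for t in range(1, T + 1):
        x, y = rng.choice(data)                  # one random example per step
        eta = 1.0 / (lam * t)                    # decaying step size
        margin = y * (w[0] * x[0] + w[1] * x[1])
        # subgradient step on the regularized hinge loss at w:
        w = [(1.0 - eta * lam) * wi for wi in w] # shrink from the regularizer
        if margin < 1:                           # hinge term is active
            w = [wi + eta * y * xi for wi, xi in zip(w, x)]
    return w

# Linearly separable toy data: the label is the sign of the first coordinate.
data = [([2.0, 1.0], 1), ([1.5, -0.5], 1), ([-2.0, 0.3], -1), ([-1.0, -1.2], -1)]
w = pegasos(data)
```

Each iteration touches a single example, so the per-step cost is independent of the training-set size, which is the property the abstract highlights.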
Randomized Gossip Algorithms
 IEEE TRANSACTIONS ON INFORMATION THEORY
, 2006
"... Motivated by applications to sensor, peertopeer, and ad hoc networks, we study distributed algorithms, also known as gossip algorithms, for exchanging information and for computing in an arbitrarily connected network of nodes. The topology of such networks changes continuously as new nodes join a ..."
Cited by 532 (5 self)
stochastic matrix characterizing the algorithm. Designing the fastest gossip algorithm corresponds to minimizing this eigenvalue, which is a semidefinite program (SDP). In general, SDPs cannot be solved in a distributed fashion; however, exploiting problem structure, we propose a distributed subgradient
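The gossip entry analyzes repeated pairwise averaging over randomly activated edges, whose convergence rate is governed by the second-largest eigenvalue of the stochastic matrix mentioned above. A minimal sketch of that primitive, assuming an illustrative 5-node ring; the topology and initial values are not from the paper:

```python
# Randomized pairwise gossip averaging sketch.  At each tick one random
# edge (i, j) wakes up and both endpoints keep the pairwise average, which
# preserves the network sum.  The 5-node ring and the initial values are
# illustrative assumptions, not from the paper.
import random

def gossip_average(values, edges, ticks=5000, seed=0):
    rng = random.Random(seed)
    x = list(values)
    for _ in range(ticks):
        i, j = rng.choice(edges)              # a random edge activates
        x[i] = x[j] = (x[i] + x[j]) / 2.0     # both endpoints keep the average
    return x

# Every node should approach the global mean, here 3.0.
values = [1.0, 2.0, 3.0, 4.0, 5.0]
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
x = gossip_average(values, edges)
```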