Results 1–10 of 24
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers
, 2010
Giannakis, “Distributed sparse linear regression
 IEEE Trans. Signal Process
, 2010
Cited by 44 (8 self)
Abstract—The Lasso is a popular technique for joint estimation and continuous variable selection, especially well-suited for sparse and possibly underdetermined linear regression problems. This paper develops algorithms to estimate the regression coefficients via Lasso when the training data are distributed across different agents, and their communication to a central processing unit is prohibited for, e.g., communication-cost or privacy reasons. A motivating application is explored in the context of wireless communications, whereby sensing cognitive radios collaborate to estimate the radio-frequency power spectrum density. Attaining different tradeoffs between complexity and convergence speed, three novel algorithms are obtained after reformulating the Lasso into a separable form, which is iteratively minimized using the alternating-direction method of multipliers so as to gain the desired degree of parallelization. Interestingly, the per-agent estimate updates are given by simple soft-thresholding operations, and inter-agent communication overhead remains at an affordable level. Without exchanging elements from the different training sets, the local estimates consent to the global Lasso solution, i.e., the fit that would be obtained if the entire data set were centrally available. Numerical experiments with both simulated and real data demonstrate the merits of the proposed distributed schemes, corroborating their convergence and global optimality. The ideas in this paper can be easily extended for the purpose of fitting related models in a distributed fashion, including the adaptive Lasso, elastic net, fused Lasso, and nonnegative garrote. Index Terms—Distributed linear regression, Lasso, parallel optimization, sparse estimation.
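As a rough illustration of the soft-thresholding updates this abstract refers to, the sketch below runs centralized ADMM for the Lasso. It is not the paper's distributed algorithm; the function names and the parameter choices (`rho`, `iters`) are illustrative only.

```python
import numpy as np

def soft_threshold(v, tau):
    """Elementwise soft-thresholding: the proximal operator of tau * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    """Centralized ADMM for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    A sketch only; the paper splits this computation across agents."""
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    M = A.T @ A + rho * np.eye(n)   # system matrix for the quadratic step
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(M, Atb + rho * (z - u))  # quadratic (data-fit) step
        z = soft_threshold(x + u, lam / rho)         # soft-thresholding step
        u = u + x - z                                # scaled multiplier update
    return z
```

With `A` equal to the identity, the Lasso solution reduces to elementwise shrinkage of `b`, which makes the iteration easy to sanity-check.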
Design of optimal sparse feedback gains via the alternating direction method of multipliers
 IEEE Trans. Automat. Control
Cited by 33 (8 self)
Abstract—We design sparse and block-sparse feedback gains that minimize the variance amplification (i.e., the H2 norm) of distributed systems. Our approach consists of two steps. First, we identify sparsity patterns of feedback gains by incorporating sparsity-promoting penalty functions into the optimal control problem, where the added terms penalize the number of communication links in the distributed controller. Second, we optimize feedback gains subject to structural constraints determined by the identified sparsity patterns. In the first step, the sparsity structure of feedback gains is identified using the alternating direction method of multipliers, which is a powerful algorithm well-suited to large optimization problems. This method alternates between promoting the sparsity of the controller and optimizing the closed-loop performance, which allows us to exploit the structure of the corresponding objective functions. In particular, we take advantage of the separability of the sparsity-promoting penalty functions to decompose the minimization problem into subproblems that can be solved analytically. Several examples are provided to illustrate the effectiveness of the developed approach. Index Terms—Alternating direction method of multipliers (ADMM), communication architectures, continuation methods, minimization, optimization, separable penalty functions, sparsity-promoting optimal control, structured distributed design.
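The alternation this abstract describes (a performance step vs. an analytically solvable, separable sparsity step) can be shown in miniature. In the toy sketch below, the true closed-loop performance objective, which would require Lyapunov solves, is replaced by a simple quadratic distance to a hypothetical dense gain `F_target`; everything here is an illustrative assumption, not the paper's algorithm.

```python
import numpy as np

def soft_threshold(V, tau):
    """Elementwise shrinkage: the analytic solution of the separable sparsity subproblem."""
    return np.sign(V) * np.maximum(np.abs(V) - tau, 0.0)

def sparsity_promoting(F_target, gamma, rho=1.0, iters=100):
    """Toy ADMM alternation in the spirit of step one of the paper:
    trade off closeness to a dense gain F_target (a stand-in for the
    H2 objective) against an l1 penalty that prunes weak links."""
    F = np.zeros_like(F_target)
    G = np.zeros_like(F_target)
    Lam = np.zeros_like(F_target)
    for _ in range(iters):
        F = (F_target + rho * (G - Lam)) / (1.0 + rho)  # "performance" step
        G = soft_threshold(F + Lam, gamma / rho)        # sparsity-promoting step
        Lam = Lam + F - G                               # multiplier update
    return G
```

The nonzero pattern of the returned `G` plays the role of the identified sparsity structure, over which the actual method would then re-optimize.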
Giannakis, “Cooperative spectrum sensing for cognitive radios using Kriged Kalman filtering
 IEEE Journal of Selected Topics in Signal Processing
, 2011
Cited by 28 (8 self)
Abstract—A cooperative cognitive radio (CR) sensing problem is considered, where a number of CRs collaboratively detect the presence of primary users (PUs) by exploiting the novel notion of channel gain (CG) maps. The CG maps capture the propagation medium per frequency from any point in space and time to each CR user. They are updated in real-time using Kriged Kalman filtering (KKF), a tool with well-appreciated merits in geostatistics. In addition, the CG maps enable tracking the transmit-power and location of an unknown number of PUs, via a sparse regression technique. The latter exploits the sparsity inherent to the PU activities in a geographical area, using an ℓ1-norm-regularized, sparsity-promoting weighted least-squares formulation. The resulting sparsity-cognizant tracker is developed in both centralized and distributed formats, to reduce the computational complexity and memory requirements of a batch alternative. Numerical tests demonstrate considerable performance gains achieved by the proposed algorithms. Index Terms—Channel estimation, cognitive radio, compressed sampling, distributed algorithms, Kalman filters.
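The Kriged Kalman filter layers a spatial (kriging) component on top of an ordinary Kalman recursion. As background, the sketch below shows one predict/correct cycle of a standard linear Kalman filter only; it is not the paper's KKF, and all matrices are generic placeholders.

```python
import numpy as np

def kalman_step(x, P, z, F, Q, H, R):
    """One predict/correct cycle of a standard linear Kalman filter.
    x, P: prior state estimate and covariance; z: new measurement;
    F, Q: state transition and process noise; H, R: measurement model and noise."""
    # Predict.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Correct with the measurement z.
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

In the scalar case with unit process/measurement models, a single step simply averages prior and measurement in proportion to their uncertainties.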
D-ADMM: A communication-efficient distributed algorithm for separable optimization
 IEEE Trans. Sig. Proc
, 2013
Distributed consensus-based demodulation: algorithms and error analysis
 IEEE Trans. Wireless Commun
, 2010
Cited by 13 (1 self)
Abstract—This paper deals with distributed demodulation of space-time transmissions of a common message from a multi-antenna access point (AP) to a wireless sensor network. Based on local message exchanges with single-hop neighboring sensors, two algorithms are developed for distributed demodulation. In the first algorithm, sensors consent on the estimated symbols. By relaxing the finite-alphabet constraints on the symbols, the demodulation task is formulated as a distributed convex optimization problem that is solved iteratively using the method of multipliers. Distributed versions of the centralized zero-forcing (ZF) and minimum mean-square error (MMSE) demodulators follow as special cases. In the second algorithm, sensors iteratively reach consensus on the average (cross-)covariances of locally available per-sensor data vectors with the corresponding AP-to-sensor channel matrices, which constitute sufficient statistics for maximum-likelihood demodulation. Distributed versions of the sphere decoding algorithm and the ZF/MMSE demodulators are also developed. These algorithms offer distinct merits in terms of error performance and resilience to non-ideal inter-sensor links. In both cases, the per-iteration error performance is analyzed, and the approximate number of iterations needed to attain a prescribed error rate is quantified. Simulated tests verify the analytical claims. Interestingly, only a few consensus iterations (roughly as many as the number of sensors) suffice for the distributed demodulators to approach the performance of their centralized counterparts. Index Terms—Detection and estimation, sensor networks, cooperative diversity.
Distributed ADMM for Model Predictive Control and Congestion Control
Cited by 9 (2 self)
Abstract — Many problems in control can be modeled as an optimization problem over a network of nodes. Solving them with distributed algorithms provides advantages over centralized solutions, such as privacy and the ability to process data locally. In this paper, we solve optimization problems in networks where each node requires only partial knowledge of the problem's solution. We exploit this feature to design a decentralized algorithm that allows a significant reduction in the total number of communications. Our algorithm is based on the Alternating Direction Method of Multipliers (ADMM), and we apply it to distributed Model Predictive Control (MPC) and TCP/IP congestion control. Simulation results show that the proposed algorithm requires fewer communications than previous work for the same solution accuracy.
Distributed Maximum Likelihood Sensor Network Localization
 IEEE Transactions on Signal Processing
, 2014
Cited by 7 (3 self)
Abstract—We propose a class of convex relaxations to solve the sensor network localization problem, based on a maximum likelihood (ML) formulation. This class, as well as the tightness of the relaxations, depends on the noise probability density function (PDF) of the collected measurements. We derive a computationally efficient edge-based version of this ML convex relaxation class and we design a distributed algorithm that enables the sensor nodes to solve these edge-based convex programs locally by communicating only with their close neighbors. This algorithm relies on the alternating direction method of multipliers (ADMM); it converges to the centralized solution, it can run asynchronously, and it is computation-error-resilient. Finally, we compare our proposed distributed scheme with other available methods, both analytically and numerically, and we argue for the added value of ADMM, especially for large-scale networks. Index Terms—Distributed optimization, convex relaxations, sensor network localization, distributed algorithms, ADMM, distributed localization, sensor networks, maximum likelihood.
Solving systems of monotone inclusions via primal-dual splitting techniques
Cited by 5 (0 self)
Abstract. In this paper we propose an algorithm for solving systems of coupled monotone inclusions in Hilbert spaces. The operators arising in each of the inclusions of the system are processed in each iteration separately: the single-valued ones are evaluated explicitly (forward steps), while the set-valued ones are processed via their resolvents (backward steps). In addition, most of the steps in the iterative scheme can be executed simultaneously, making the method applicable to a variety of convex minimization problems. The numerical performance of the proposed splitting algorithm is demonstrated through applications in average consensus on colored networks and image classification via support vector machines.
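The forward (explicit) and backward (resolvent) steps mentioned in this abstract can be illustrated for the simplest case of a single inclusion 0 ∈ A(x) + B(x) with B single-valued. This is a generic forward-backward sketch under that simplifying assumption, not the paper's primal-dual algorithm for coupled systems.

```python
import numpy as np

def forward_backward(resolvent_A, B, x0, gamma=0.5, iters=200):
    """Forward-backward iteration x+ = J_{gamma A}(x - gamma * B(x)):
    an explicit (forward) step on the single-valued operator B, then a
    resolvent (backward) step on the set-valued operator A."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = resolvent_A(x - gamma * B(x), gamma)
    return x

# Toy instance: B(x) = x - b (gradient of a quadratic) and A the normal
# cone of the box [0, 1]^n, whose resolvent is the projection onto the box.
b = np.array([3.0, 0.5, -2.0])
B = lambda x: x - b
resolvent_A = lambda v, gamma: np.clip(v, 0.0, 1.0)  # projection; gamma-independent
x_star = forward_backward(resolvent_A, B, np.zeros(3))
```

The fixed point of this toy instance is the projection of `b` onto the box, i.e., each coordinate of `b` clipped to [0, 1].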
for consensus on colored networks
 2012 IEEE 51st Annual Conference on Decision and Control (CDC), 2012
Cited by 3 (0 self)
Abstract — We propose a novel distributed algorithm for one of the most fundamental problems in networks: average consensus. We view average consensus as an optimization problem, which allows us to use recent techniques and results from the optimization area. Based on the assumption that a coloring scheme of the network is available, we derive a decentralized, asynchronous, and communication-efficient algorithm based on the Alternating Direction Method of Multipliers (ADMM). Our simulations with other state-of-the-art consensus algorithms show that the proposed algorithm exhibits the most stable performance across several network models.
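To make the ADMM-for-consensus idea concrete, the sketch below runs a generic synchronous edge-constraint ADMM for average consensus, where each node minimizes a local quadratic and exchanges values only with its neighbors. This is a standard decentralized-consensus recursion under illustrative parameter choices, not the paper's color-scheduled asynchronous scheme.

```python
import numpy as np

def admm_consensus(values, edges, rho=1.0, iters=300):
    """Decentralized average consensus via ADMM on edge constraints x_i = x_j.
    Node i holds a private value v_i and minimizes 0.5*(x_i - v_i)^2; the
    consensus constraints drive every local x_i to the network-wide average."""
    v = np.asarray(values, dtype=float)
    n = len(v)
    x = v.copy()
    alpha = np.zeros(n)       # aggregated edge multipliers per node
    deg = np.zeros(n)
    for i, j in edges:
        deg[i] += 1.0
        deg[j] += 1.0
    for _ in range(iters):
        s = np.zeros(n)       # neighbor sums: the only inter-node communication
        for i, j in edges:
            s[i] += x[j]
            s[j] += x[i]
        # Local closed-form minimization at every node.
        x_new = (v - alpha + rho * (deg * x + s)) / (1.0 + 2.0 * rho * deg)
        s_new = np.zeros(n)
        for i, j in edges:
            s_new[i] += x_new[j]
            s_new[j] += x_new[i]
        alpha = alpha + rho * (deg * x_new - s_new)  # dual ascent on x_i = x_j
        x = x_new
    return x
```

On a connected graph, every entry of the returned vector approaches the mean of the initial values.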