Results 1–5 of 5

Distributed Algorithms for Stochastic Source Seeking With Mobile Robot Networks, 2014
Abstract

Cited by 4 (1 self)
Autonomous robot networks are an effective tool for monitoring large-scale environmental fields. This paper proposes distributed control strategies for localizing the source of a noisy signal, which could represent a physical quantity of interest such as magnetic force, heat, radio signal, or chemical concentration. We develop algorithms specific to two scenarios: one in which the sensors have a precise model of the signal formation process and one in which a signal model is not available. In the model-free scenario, a team of sensors is used to follow a stochastic gradient of the signal field. Our approach is distributed, robust to deformations in the group geometry, does not necessitate global localization, and is guaranteed to lead the sensors to a neighborhood of a local maximum of the field. In the model-based scenario, the sensors follow a stochastic gradient of the mutual information (MI) between their expected measurements and the expected source location in a distributed manner. The performance is demonstrated in simulation using a robot sensor network to localize the source of a wireless radio signal. [DOI: 10.1115/1.4027892]
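The model-free scenario in this abstract — a sensor team climbing a stochastic gradient estimated from noisy point measurements — can be illustrated with a minimal sketch. This is not the paper's actual algorithm: the quadratic toy signal field, the cross-shaped formation offsets, and the step-size schedule below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
SOURCE = np.array([3.0, -2.0])          # unknown to the robots; used only to simulate the field

def noisy_measurement(p, sigma=0.1):
    """Noisy signal strength at position p: a concave toy field
    peaking at the source (illustrative assumption)."""
    return 10.0 - np.sum((p - SOURCE) ** 2) + sigma * rng.standard_normal()

def team_gradient_estimate(center, offsets):
    """Least-squares estimate of the field gradient from the team's
    noisy measurements taken at center + offset for each sensor."""
    y0 = noisy_measurement(center)
    y = np.array([noisy_measurement(center + d) for d in offsets])
    A = np.array(offsets)               # stacked displacement vectors
    g, *_ = np.linalg.lstsq(A, y - y0, rcond=None)
    return g

# Cross-shaped formation of four sensors around the formation center.
offsets = [np.array(d) for d in ([0.3, 0.0], [-0.3, 0.0], [0.0, 0.3], [0.0, -0.3])]

p = np.zeros(2)                         # initial formation center
for k in range(400):
    step = 2.0 / (k + 10)               # diminishing step size (stochastic approximation)
    p = p + step * team_gradient_estimate(p, offsets)

print(p)                                # formation center ends near the source
```

The diminishing step size is what makes the noisy ascent converge to a neighborhood of the maximum rather than random-walking around it.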
Distributed Detection: Finite-time Analysis and Impact of Network Topology
Abstract
This paper addresses the problem of distributed detection in multi-agent networks. Agents receive private signals about an unknown state of the world. The underlying state is globally identifiable, yet informative signals may be dispersed throughout the network. Using an optimization-based framework, we develop an iterative local strategy for updating individual beliefs. In contrast to the existing literature, which focuses on asymptotic learning, we provide a finite-time analysis. Furthermore, we introduce a Kullback-Leibler cost to compare the efficiency of the algorithm to its centralized counterpart. Our bounds on the cost are expressed in terms of network size, spectral gap, centrality of each agent, and relative entropy of agents' signal structures. A key observation is that distributing more informative signals to central agents results in a faster learning rate. Furthermore, by optimizing the weights, we can speed up learning by improving the spectral gap. We also quantify the effect of link failures on learning speed in symmetric networks. We finally provide numerical simulations which verify our theoretical results.
Non-asymptotic Convergence Rates for Cooperative Learning Over Time-Varying Directed Graphs
Social Learning and Distributed Hypothesis Testing
Abstract
This paper considers a problem of distributed hypothesis testing and social learning. Individual nodes in a network receive noisy local (private) observations whose distribution is parameterized by a discrete parameter (hypothesis). The conditional distributions are known locally at the nodes, but the true parameter/hypothesis is not known. An update rule is analyzed in which nodes first perform a Bayesian update of their belief (distribution estimate) of the parameter based on their local observation, communicate these updates to their neighbors, and then perform a “non-Bayesian” linear consensus using the log-beliefs of their neighbors. The main result of this paper is that under mild assumptions, the belief of any node in any incorrect parameter converges to zero exponentially fast, and the exponential rate of learning is characterized by the network structure and the divergences between the observations' distributions.
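The update rule this abstract describes — a local Bayesian step followed by linear consensus on log-beliefs — can be sketched in a few lines. The concrete numbers below (three nodes, two hypotheses, Bernoulli likelihoods, ring weights) are illustrative assumptions, not the paper's setup; note that node 0 cannot identify the true hypothesis alone, yet learns it through the network.

```python
import numpy as np

rng = np.random.default_rng(1)

# Probability of observing "heads" at each node under each hypothesis.
# Row i = node i; column k = hypothesis k. Illustrative values.
likelihood = np.array([
    [0.5, 0.5],   # node 0: locally uninformative
    [0.6, 0.4],
    [0.5, 0.7],
])
true_hypothesis = 0

# Doubly stochastic weight matrix over a fully connected 3-node network.
W = np.array([
    [0.50, 0.25, 0.25],
    [0.25, 0.50, 0.25],
    [0.25, 0.25, 0.50],
])

beliefs = np.full((3, 2), 0.5)          # uniform initial beliefs

for t in range(500):
    # 1) Local Bayesian update from a private binary observation.
    obs = rng.random(3) < likelihood[:, true_hypothesis]
    lik = np.where(obs[:, None], likelihood, 1.0 - likelihood)
    beliefs = beliefs * lik
    beliefs /= beliefs.sum(axis=1, keepdims=True)
    # 2) "Non-Bayesian" consensus: linear averaging of log-beliefs,
    #    i.e. a weighted geometric mean of neighbors' beliefs.
    beliefs = np.exp(W @ np.log(beliefs))
    beliefs /= beliefs.sum(axis=1, keepdims=True)

print(beliefs[:, true_hypothesis])      # each node's belief in the true hypothesis
```

The belief placed on the wrong hypothesis decays roughly like exp(-t × average KL divergence), which matches the exponential learning rate claimed in the abstract.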