A Nonparametric Training Algorithm for Decentralized Binary Hypothesis Testing Networks (1993)
Venue: Proceedings of the 1993 American Control Conference
Citations: 1 (0 self)
Citations
960 | A stochastic approximation method - Robbins, Monro - 1951
Citation Context: ...rect updates without doing any function evaluations. We subsequently discovered, in the adaptive pattern classification literature, a modification of the Robbins-Monro stochastic approximation method [8] which was well suited for solving this problem and which we subsequently extended to the decentralized setting. 3.2 Stochastic Approximation In this section, we modify the "window algorithm" presente...
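The Robbins-Monro recursion referenced above can be illustrated with a toy root-finding problem (a generic sketch, not the paper's decentralized extension; the test function, noise model, and step-size schedule here are illustrative choices):

```python
import random

def robbins_monro(noisy_g, x0, steps=20000, a=1.0):
    """Find a root of g(x) = E[noisy_g(x)] via the Robbins-Monro
    recursion x_{n+1} = x_n - a_n * noisy_g(x_n), with a_n = a/(n+1),
    so that sum(a_n) diverges while sum(a_n^2) converges."""
    x = x0
    for n in range(steps):
        x -= (a / (n + 1)) * noisy_g(x)
    return x

# Toy example: g(x) = x - 2 observed through additive Gaussian noise;
# the root being sought is x = 2.
random.seed(0)
root = robbins_monro(lambda x: (x - 2.0) + random.gauss(0.0, 1.0), x0=0.0)
print(round(root, 2))
```

With this step-size schedule the iterate is effectively a running average of the noisy observations, which is why the noise washes out as the iteration proceeds.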
159 | Detection With Distributed Sensors - Tenney, Sandell - 1981
158 | Parallel and distributed computation - Bertsekas, Tsitsiklis - 1989
Citation Context: ...articular, in order to guarantee the existence and uniqueness of a fixed point solution, as well as convergence from arbitrary starting values, the system must be shown to have a contraction property [1]. Unfortunately, showing this property directly is difficult since the Gaussian error functions are very algebraically cumbersome. However, our numerical studies have uncovered no problems whatsoever ...
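Successive approximation under a contraction property can be sketched as follows. A hypothetical pair of coupled equations stands in for the paper's coupled threshold conditions (which are not reproduced here); the map's partial derivatives are bounded by 0.5, so it is a contraction and the Banach fixed-point theorem guarantees a unique fixed point reachable from any start:

```python
import math

def successive_approximation(T, x0, tol=1e-10, max_iter=1000):
    """Iterate x_{k+1} = T(x_k) until successive iterates agree to tol.
    For a contraction T, convergence holds from arbitrary starting values."""
    x = x0
    for _ in range(max_iter):
        x_next = T(x)
        if max(abs(a - b) for a, b in zip(x_next, x)) < tol:
            return x_next
        x = x_next
    return x

# Illustrative coupled system: t1 = 0.5*cos(t2), t2 = 0.5*sin(t1).
T = lambda t: (0.5 * math.cos(t[1]), 0.5 * math.sin(t[0]))
t1, t2 = successive_approximation(T, (0.0, 0.0))
print(abs(t1 - 0.5 * math.cos(t2)) < 1e-9, abs(t2 - 0.5 * math.sin(t1)) < 1e-9)
```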
119 | On the complexity of decentralized decision making and detection problems - Tsitsiklis, Athans - 1985
Citation Context: ... this work is that the observations of the DMs are conditionally independent. Without this assumption, the decision rules not only become messy, but the problem has been shown to be NP-complete [12], [13]. We also assume for simplicity that the observations are scalar valued, although generalization to vector iid observations presents no difficulty. The decision criterion on which we focus, namely the...
12 | Pattern Classifiers and Trainable Machines - Sklansky, Wassel - 1981
Citation Context: ...h was well suited for solving this problem and which we subsequently extended to the decentralized setting. 3.2 Stochastic Approximation In this section, we modify the "window algorithm" presented in [9], [15], [16] to construct a nonparametric distributed training method. The window algorithm is based on the following idea. Recall that the optimal minimum probability of error decision rule for the c...
8 | Optimization of Detection Networks - Tang - 1990
Citation Context: ...rom training are to be judged by how close they come to the thresholds which result from optimizing the error probability Pe(·). This optimization, as can already be surmised, is nontrivial. It is discussed extensively in [10]. 3.1 Successive Approximation As we have seen for the Gaussian case, the optimal thresholds may be expressed as a system of coupled nonlinear algebraic equations which specify the necessary condition...
7 | Optimal Design of Distributed Detection Networks - Ekchian - 1982
Citation Context: ...blems. We note that an excellent and very general overview of the field of decentralized detection theory is presented in [12], while the small teams which concern us have been extensively studied in [4]. A critical assumption in this work is that the observations of the DMs are conditionally independent. Without this assumption, the decision rules not only become messy, but the problem has been show...
5 | Learning Algorithms for Nonparametric Solution to the Minimum Error Classification Problem - Do-Tu, Installe - 1978
5 | Analysis of a Two-Sensor Tandem Distributed Detection Network, M.S. Dissertation, Dept. of Elec - Pothiawala - 1989
5 | Training a One-Dimensional Classifier to Minimize the Probability of Error - Wassel, Sklansky - 1972
Citation Context: ... well suited for solving this problem and which we subsequently extended to the decentralized setting. 3.2 Stochastic Approximation In this section, we modify the "window algorithm" presented in [9], [15], [16] to construct a nonparametric distributed training method. The window algorithm is based on the following idea. Recall that the optimal minimum probability of error decision rule for the central...
3 | Distributed Decisionmaking with Constrained Decisionmakers: A Case Study - Boettcher, Tenney - 1986 |
2 | Detection, Estimation, and Modulation Theory, volume 1 - Van Trees - 1968
1 | A Note on Learning Signal Detection, IRE Trans. on Information Theory - Kac - 1962
Citation Context: ...on the following idea. Recall that the optimal minimum probability of error decision rule for the centralized binary classification problem is the LRT p(y|H1)/p(y|H0) ≷ P0/P1 (3.26). It is noted in [5] that if the functions P0 p(y|H0) and P1 p(y|H1) have a single point of intersection, the minimum probability of error threshold K* is that value of K satisfying P0 p(y = K|H0) = P1 p(y = K|H1) (3.27) F...
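The intersection condition (3.27) can be checked numerically for an equal-variance Gaussian example (a sketch; the bisection routine and the parameter values are illustrative choices, not taken from the paper):

```python
import math

def gauss_pdf(y, mu, sigma):
    """Density of N(mu, sigma^2) evaluated at y."""
    return math.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def intersection_threshold(p0, p1, mu0, mu1, sigma, lo, hi, tol=1e-12):
    """Locate the single intersection of p0*p(y|H0) and p1*p(y|H1),
    i.e. the root of h(K) = p0*p(K|H0) - p1*p(K|H1), by bisection."""
    h = lambda k: p0 * gauss_pdf(k, mu0, sigma) - p1 * gauss_pdf(k, mu1, sigma)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if h(lo) * h(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Equal-variance Gaussians admit the closed form
# K* = (mu0 + mu1)/2 + sigma^2 * ln(p0/p1) / (mu1 - mu0).
p0, p1, mu0, mu1, sigma = 0.6, 0.4, 0.0, 2.0, 1.0
k_star = intersection_threshold(p0, p1, mu0, mu1, sigma, lo=mu0, hi=mu1)
k_closed = (mu0 + mu1) / 2 + sigma**2 * math.log(p0 / p1) / (mu1 - mu0)
print(abs(k_star - k_closed) < 1e-9)
```

The single-intersection assumption holds here because the weighted densities cross exactly once between the two means when the variances are equal.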
1 | Training a Linear Classifier to Optimize the Error Probability - Wassel - 1972
Citation Context: ...suited for solving this problem and which we subsequently extended to the decentralized setting. 3.2 Stochastic Approximation In this section, we modify the "window algorithm" presented in [9], [15], [16] to construct a nonparametric distributed training method. The window algorithm is based on the following idea. Recall that the optimal minimum probability of error decision rule for the centralized b...