
## QUANTIZATION WITH ADAPTATION - ESTIMATION OF GAUSSIAN LINEAR MODELS

### Citations

11694 | Maximum likelihood from incomplete data via the EM algorithm
- Dempster, Laird, et al.
- 1977
Citation Context ...approach to solve the likelihood equation is to use the EM-method. Following the basic steps of the EM-method we replace the log-likelihood function by an auxiliary function (see e.g. [3]) (10) Q(y; θ, θ̄) = E_θ̄[log p(X, θ, σ²) | y] = E[log p(X, θ, σ²) | y; θ̄], where θ̄ is a current best estimate, and the random variable X = θ̄ + e is the unknown assumed state. For N independ...
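The snippet's auxiliary function requires the conditional expectation of the unobserved state given a quantized observation, which for a Gaussian model reduces to a truncated-normal mean. A minimal sketch of one resulting EM step, assuming a scalar model and known quantizer cells; the function names and the cell representation are ours, not from the paper:

```python
import math

def phi(z):
    """Standard normal density."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def Phi(z):
    """Standard normal distribution function, via erf."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def truncated_normal_mean(mu, sigma, a, b):
    """E[X | a < X <= b] for X ~ N(mu, sigma^2)."""
    alpha, beta = (a - mu) / sigma, (b - mu) / sigma
    Z = Phi(beta) - Phi(alpha)
    return mu + sigma * (phi(alpha) - phi(beta)) / Z

def em_step(theta_bar, sigma, cells):
    """One EM update for theta in y_n = q(theta + e_n):
    E-step computes E[X_n | y_n; theta_bar] for each observed
    quantizer cell (a_n, b_n]; the M-step for a Gaussian mean
    is simply the average of these conditional means."""
    xs = [truncated_normal_mean(theta_bar, sigma, a, b) for (a, b) in cells]
    return sum(xs) / len(xs)
```

Iterating `em_step` to a fixed point yields the (off-line) maximum-likelihood estimate in this simplified setting.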

3536 | Equation of state calculations by fast computing machines
- Metropolis, Rosenbluth, et al.
- 1953
Citation Context ...on the right hand side of (14) are expectations with respect to a conditional Gaussian density, and it is therefore natural to approximate them using a Markov Chain Monte Carlo (MCMC) algorithm, see [9, 12]. A combination of the latter with the EM-algorithm leads to a stochastic approximation scheme called a randomized EM-method, first presented in [4, 5]. A similar method has been developed independen...

526 | Adaptive Algorithms and Stochastic Approximation
- Benveniste, Metivier, et al.
- 1990
Citation Context ...t surely, under appropriate technical conditions. The asymptotic covariance of θ̂_t can be expressed as the solution of a Lyapunov-equation, using the algebra of Theorem 13, Chapter 4.5.3, Part II of [1]. It is well-known from the theory of stochastic approximation that, in the case of a weighted stochastic gradient method based on the maximum-likelihood estimation, the best available asymptotic cov...
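The asymptotic covariance mentioned in this snippet solves a continuous-time Lyapunov equation of the form Ā P + P Āᵀ + S = 0 with Ā stable. As an illustration of how such an equation is solved by vectorization (the specific matrices below are invented for the example, not taken from the paper):

```python
import numpy as np

def solve_lyapunov(A_bar, S):
    """Solve A_bar @ P + P @ A_bar.T + S = 0 by vectorization:
    (I kron A_bar + A_bar kron I) vec(P) = -vec(S),
    using column-major (Fortran-order) vec."""
    n = A_bar.shape[0]
    I = np.eye(n)
    K = np.kron(I, A_bar) + np.kron(A_bar, I)
    p = np.linalg.solve(K, -S.reshape(-1, order="F"))
    return p.reshape((n, n), order="F")
```

For a stable Ā (all eigenvalues in the open left half plane) the Kronecker matrix K is invertible, so the solution P is unique.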

284 | Quantized feedback stabilization of linear systems
- Brockett, Liberzon
- 2000

34 | Applications of a Kushner and Clark lemma to general classes of stochastic algorithms
- Metivier, Priouret
- 1984
Citation Context ...nchmark problem can be generalized for the multi-variable case. The theoretical analysis of the method can be carried out with the theory of recursive estimation under Markovian dynamics developed in [1, 11]. Although this theory is not complete, inasmuch as a basic problem, the possibility of the estimator leaving any fixed compact domain, is treated in a practically unsatisfactory manner, this deficiency c...

17 | Monte-Carlo Methods
- Hammersley, Handscomb
- 1964
Citation Context ...on the right hand side of (14) are expectations with respect to a conditional Gaussian density, and it is therefore natural to approximate them using a Markov Chain Monte Carlo (MCMC) algorithm, see [9, 12]. A combination of the latter with the EM-algorithm leads to a stochastic approximation scheme called a randomized EM-method, first presented in [4, 5]. A similar method has been developed independen...

17 | Quantization Noise
- Widrow, Kollar
- 2008
Citation Context ...atter application area is particularly fit for the present issue honoring Roger Brockett, due to his fundamental contribution to the area in his 1998 paper (coauthored by D. Liberzon) [2]. See also [14] for a recent survey on quantization in communication and control. A scalar quantizer is defined as a mapping q from ℝ to a discrete, finite or countable set Y ⊂ ℝ, representing the so-called quanti...
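The scalar quantizer defined in this snippet can be made concrete with the common uniform, saturation-free case; the mid-tread form below is our choice for illustration, not necessarily the one used in the paper:

```python
import math

def quantize(x, h):
    """Uniform mid-tread scalar quantizer with cell size h:
    maps the real x to the nearest point of the discrete set
    Y = {k * h : k integer}."""
    return h * math.floor(x / h + 0.5)
```

Each output value k*h represents the quantizer cell ((k - 1/2)h, (k + 1/2)h], which is the information an estimator can actually recover from a quantized observation.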

5 | A representation theorem for the error of recursive estimators
- Gerencsér
- 2006
Citation Context ...apter 4.5.3, Part II of [1]. (Note that the conditions of this Theorem are not realistic. However, it is likely that a rigorous derivation can be obtained under realistic assumptions by the methods of [7].) To summarize the algebra of this theorem, let A be the Jacobian matrix of the associated ODE at θ = θ*, and let Ā = A + I/2 be stable. Define S(θ*) as the asymptotic covariance of the empirical m...

3 | Estimation of parameters from quantized noisy observations
- Finesso, Gerencsér, et al.
- 1999
Citation Context ...of the form, with known δ: (4) y_n = q(θ* + δ + e_n). An efficient randomized EM-method to solve the off-line maximum-likelihood estimation problem, based on say N observations, has been developed in [4]. In the course of this procedure we generate a sequence of estimators θ_t that converge to the off-line maximum likelihood estimator θ̂_N almost surely, under appropriate technical conditions. A real-...

3 | A randomized EM-algorithm for estimating quantized linear Gaussian regression
- Finesso, Gerencsér, et al.
- 1999
Citation Context ...ff-line maximum likelihood estimator θ̂_N almost surely, under appropriate technical conditions. A real-time version of this method, exhibiting excellent convergence properties, has been developed in [5]. In the real-time scheme we generate a sequence of estimators θ̂_t such that θ̂_t converges to θ* almost surely, under appropriate technical conditions. The asymptotic covariance of θ̂_t can be expr...

1 | A recursive randomized EM-algorithm for estimation under quantization error
- Finesso, Gerencsér, et al.
- 2000
Citation Context ...: a real-time recursive randomized EM-procedure can be defined by (45) θ̂_T = (1/T) ∑_{k∈K} ∑_{t=1}^{N_{k,T}} ξ^k_t. This method has been first presented in [6], but without justification for its convergence. The above derivation lends itself to a direct application of the BMP theory. The associated ODE. The so-called associated ODE, see Chapter 1.5, Part II of [1]...

1 | Almost sure and Lq-convergence of the re-initialized BMP scheme
- Gerencsér, Mátyás
- 2007
Citation Context ...y is not complete, inasmuch as a basic problem, the possibility of the estimator leaving any fixed compact domain, is treated in a practically unsatisfactory manner, this deficiency can be rectified, see [8]. The presentation and verification of all conditions required by the BMP theory would exceed the allotted space. Henceforth we will restrict ourselves to the verification of a key condition of the BM...

1 | Signal identification after noisy nonlinear transformation
- Masry, Cambanis
Citation Context ...Gaussian noise, followed by quantization. That is, the observed values are of the form (3) y_n = q(θ* + e_n), where e_n is an i.i.d. Gaussian sequence with mean 0 and known variance σ² = (σ*)², see e.g. [10]. The assumed knowledge of σ² may be unrealistic in many applications, but it greatly simplifies the presentation. We shall discuss the possibility of handling unknown σ's at the end of the paper. One...
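The observation model y_n = q(θ* + e_n) in this snippet is straightforward to simulate, which also shows why estimation is non-trivial: under coarse quantization the sample mean of the y_n is in general a biased estimate of θ*. A minimal sketch, with a uniform mid-tread quantizer and arbitrary parameter values of our choosing:

```python
import math
import random

def simulate(theta_star, sigma, h, N, seed=0):
    """Generate N quantized observations y_n = q(theta* + e_n),
    with e_n i.i.d. N(0, sigma^2) and a uniform quantizer of
    cell size h (mid-tread, no saturation)."""
    rng = random.Random(seed)
    return [h * math.floor((theta_star + rng.gauss(0.0, sigma)) / h + 0.5)
            for _ in range(N)]
```

With sigma small relative to h, most observations collapse onto one or two quantizer levels, and the likelihood-based methods discussed in the surrounding text are needed to recover θ*.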

1 | Adaptive algorithms and Markov chain Monte Carlo methods
- Solo
- 1999
Citation Context ...thm leads to a stochastic approximation scheme called a randomized EM-method, first presented in [4, 5]. A similar method has been developed independently for the problem of log-linear regression in [13]. The MCMC method. Thus to compute ∫ x φ(x | kh; θ̄, σ²) dx we generate an ergodic Markov chain ξ̄^k_t(θ̄) on the state space I_k, which is an interval of length h (in case of no saturation), s...
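The chain described in this snippet targets a Gaussian density restricted to a quantizer cell I_k. A minimal random-walk Metropolis sketch of that idea, where the proposal scale, the midpoint start, and the function names are our choices rather than the paper's construction:

```python
import math
import random

def mcmc_cell_mean(a, b, mu, sigma, T=20000, seed=0):
    """Estimate E[X | X in (a, b)] for X ~ N(mu, sigma^2) with a
    random-walk Metropolis chain confined to the cell (a, b):
    proposals outside the interval are rejected, so the chain's
    stationary density is the Gaussian truncated to (a, b)."""
    rng = random.Random(seed)
    log_pi = lambda x: -0.5 * ((x - mu) / sigma) ** 2  # unnormalized log target
    x = 0.5 * (a + b)                                  # start at the cell midpoint
    total = 0.0
    for _ in range(T):
        y = x + rng.uniform(-0.5, 0.5) * (b - a)       # symmetric proposal
        if a < y < b and rng.random() < math.exp(min(0.0, log_pi(y) - log_pi(x))):
            x = y                                      # accept; otherwise keep x
        total += x
    return total / T
```

Averaging the chain, as in the recursive schemes discussed above, approximates the conditional-mean integrals appearing on the right hand side of the EM equations.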