Dynamic Bayesian Networks: Representation, Inference and Learning (2002)

by Kevin Patrick Murphy
Citations: 769 (3 self)

BibTeX

@MISC{Murphy02dynamicbayesian,
    author = {Kevin Patrick Murphy},
    title = {Dynamic Bayesian Networks: Representation, Inference and Learning},
    year = {2002}
}

Abstract

Modelling sequential data is important in many areas of science and engineering. Hidden Markov models (HMMs) and Kalman filter models (KFMs) are popular for this because they are simple and flexible. For example, HMMs have been used for speech recognition and bio-sequence analysis, and KFMs have been used for problems ranging from tracking planes and missiles to predicting the economy. However, HMMs and KFMs are limited in their “expressive power”. Dynamic Bayesian Networks (DBNs) generalize HMMs by allowing the state space to be represented in factored form, instead of as a single discrete random variable. DBNs generalize KFMs by allowing arbitrary probability distributions, not just (unimodal) linear-Gaussian. In this thesis, I will discuss how to represent many different kinds of models as DBNs, how to perform exact and approximate inference in DBNs, and how to learn DBN models from sequential data. In particular, the main novel technical contributions of this thesis are as follows:

  • a way of representing Hierarchical HMMs as DBNs, which enables inference to be done in O(T) time instead of O(T³), where T is the length of the sequence;
  • an exact smoothing algorithm that takes O(log T) space instead of O(T);
  • a simple way of using the junction tree algorithm for online inference in DBNs;
  • new complexity bounds on exact online inference in DBNs;
  • a new deterministic approximate inference algorithm called factored frontier;
  • an analysis of the relationship between the BK algorithm and loopy belief propagation;
  • a way of applying Rao-Blackwellised particle filtering to DBNs in general, and the SLAM (simultaneous localization and mapping) problem in particular;
  • a way of extending the structural EM algorithm to DBNs;
  • and a variety of different applications of DBNs.

However, perhaps the main value of the thesis is its catholic presentation of the field of sequential data modelling.
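To make the factored-state idea concrete, the following is a minimal sketch (not taken from the thesis; the model sizes and toy numbers are illustrative assumptions) of the O(T) forward pass for a plain HMM, the flat special case that DBNs generalize:

# Toy HMM forward pass (illustrative only; all numbers below are made up).
import numpy as np

pi = np.array([0.6, 0.4])             # P(X_1): initial state distribution
A  = np.array([[0.7, 0.3],
               [0.2, 0.8]])           # P(X_t | X_{t-1}): transition matrix
B  = np.array([[0.5, 0.4, 0.1],
               [0.1, 0.3, 0.6]])      # P(Y_t | X_t): observation matrix

def forward(obs):
    """Return the filtered beliefs P(X_t | y_1..y_t) for t = 1..T in O(T) time."""
    alpha = pi * B[:, obs[0]]         # condition on the first observation
    alpha /= alpha.sum()
    beliefs = [alpha]
    for y in obs[1:]:
        alpha = (alpha @ A) * B[:, y] # predict one step, then condition on y_t
        alpha /= alpha.sum()          # normalize to keep the recursion stable
        beliefs.append(alpha)
    return np.array(beliefs)

print(forward([0, 2, 1]))             # beliefs after observing symbols 0, 2, 1

A DBN replaces the single discrete variable X_t with a set of variables, so a state space of size 2^20 can be specified by 20 small conditional distributions instead of one 2^20 × 2^20 transition matrix; the inference and learning algorithms surveyed in the thesis exploit exactly that factorization.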

Keyphrases

dynamic bayesian network, sequential data, new deterministic approximate inference algorithm, dbn model, catholic presentation, expressive power, online inference, structural em algorithm, exact smoothing algorithm, speech recognition, sequential data modelling, rao-blackwellised particle, main value, kalman filter model, loopy belief propagation, junction tree algorithm, dbns generalize kfms, simultaneous localization, bio-sequence analysis, arbitrary probability distribution, bk algorithm, exact online inference, many different kind, simple way, new complexity bound, main novel technical contribution, state space, different application, many area, single discrete random, factored form, hidden markov model, hierarchical hmms, approximate inference
