Dynamic Bayesian Networks: Representation, Inference and Learning (2002)

by Kevin P Murphy

Results 1 - 10 of 771

Learning in graphical models

by Michael I. Jordan - STATISTICAL SCIENCE, 2004
"... Statistical applications in fields such as bioinformatics, information retrieval, speech processing, image processing and communications often involve large-scale models in which thousands or millions of random variables are linked in complex ways. Graphical models provide a general methodology for ..."
Abstract - Cited by 806 (10 self) - Add to MetaCart
Statistical applications in fields such as bioinformatics, information retrieval, speech processing, image processing and communications often involve large-scale models in which thousands or millions of random variables are linked in complex ways. Graphical models provide a general methodology for approaching these problems, and indeed many of the models developed by researchers in these applied fields are instances of the general graphical model formalism. We review some of the basic ideas underlying graphical models, including the algorithmic ideas that allow graphical models to be deployed in large-scale data analysis problems. We also present examples of graphical models in bioinformatics, error-control coding and language processing.

Citation Context

... proposed update can be neglected. Finally, a variety of hybrid algorithms can be defined in which exact inference algorithms are used locally within an overall sampling framework (Jensen et al., 1995; Murphy, 2002). 3.3 Variational algorithms: The basic idea of variational inference is to characterize a probability distribution as the solution to an optimization problem, to perturb this optimization problem, and ...
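
The excerpt only gestures at what "characterize a probability distribution as the solution to an optimization problem" means in practice, so a small illustration may help. The sketch below is my own toy example, not code from either paper: it fits a fully factorized Gaussian to a correlated bivariate Gaussian by coordinate-ascent mean-field updates, so that inference becomes iterative optimization rather than summation or integration.

    # Mean-field sketch: approximate a correlated 2-D Gaussian p(x) = N(mu, Sigma)
    # with a factorized q(x) = q1(x1) q2(x2), updating one factor at a time.
    import numpy as np

    mu = np.array([1.0, -2.0])                  # target mean (illustrative values)
    Sigma = np.array([[1.0, 0.8],
                      [0.8, 2.0]])              # target covariance with correlation
    Lam = np.linalg.inv(Sigma)                  # precision matrix

    m = np.zeros(2)                             # variational means, initialized arbitrarily
    for _ in range(50):                         # coordinate ascent on the mean-field objective
        m[0] = mu[0] - Lam[0, 1] / Lam[0, 0] * (m[1] - mu[1])
        m[1] = mu[1] - Lam[1, 0] / Lam[1, 1] * (m[0] - mu[0])

    # Each factor's variance is fixed at 1 / Lam[i, i]; the means converge to mu.
    print("variational means:", m, "target mean:", mu)
    print("variational variances:", 1.0 / np.diag(Lam))

The distribution is never tabulated or sampled; it is recovered as the fixed point of an optimization, which is the view the excerpt attributes to variational algorithms.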

Learning and inferring transportation routines

by Lin Liao, Dieter Fox, Henry Kautz, 2004
"... This paper introduces a hierarchical Markov model that can learn and infer a user’s daily movements through the community. The model uses multiple levels of abstraction in order to bridge the gap between raw GPS sensor measurements and high level information such as a user’s mode of transportation ..."
Abstract - Cited by 312 (22 self) - Add to MetaCart
This paper introduces a hierarchical Markov model that can learn and infer a user’s daily movements through the community. The model uses multiple levels of abstraction in order to bridge the gap between raw GPS sensor measurements and high-level information such as a user’s mode of transportation or her goal. We apply Rao-Blackwellised particle filters for efficient inference both at the low level and at the higher levels of the hierarchy. Significant locations such as goals or locations where the user frequently changes mode of transportation are learned from GPS data logs without requiring any manual labeling. We show how to detect abnormal behaviors (e.g. taking a wrong bus) by concurrently tracking the user’s activities with both a trained and a prior model. Experiments show that our model is able to accurately predict the goals of a person and to recognize situations in which the user performs unknown activities.
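
The abstract relies on Rao-Blackwellised particle filters for inference. The hierarchical model itself is not reproduced here; the sketch below is a deliberately simplified bootstrap particle filter on a one-dimensional state (an assumption of mine, not the authors' algorithm), intended only to show the predict-weight-resample loop that such filters share.

    # Bootstrap particle filter sketch: 1-D state (e.g. position along a route) observed
    # with Gaussian noise. Generic illustration, not the paper's Rao-Blackwellised filter.
    import numpy as np

    rng = np.random.default_rng(0)
    n_particles, n_steps = 500, 20
    true_x = 0.0
    particles = rng.normal(0.0, 1.0, n_particles)

    for t in range(n_steps):
        true_x += 1.0 + rng.normal(0.0, 0.5)                    # simulated motion
        z = true_x + rng.normal(0.0, 1.0)                       # noisy GPS-like reading

        particles += 1.0 + rng.normal(0.0, 0.5, n_particles)    # predict: sample the motion model
        w = np.exp(-0.5 * (z - particles) ** 2)                 # weight: measurement likelihood
        w /= w.sum()
        particles = particles[rng.choice(n_particles, n_particles, p=w)]  # resample

        print(f"t={t:2d}  estimate={particles.mean():6.2f}  truth={true_x:6.2f}")

Rao-Blackwellisation, as used in the paper, additionally integrates out part of the state analytically so that particles are needed only for the remaining variables.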

An Empirical Bayes Approach to Inferring Large-Scale Gene Association Networks

by Juliane Schäfer, Korbinian Strimmer - BIOINFORMATICS, 2004
"... Motivation: Genetic networks are often described statistically by graphical models (e.g. Bayesian networks). However, inferring the network structure offers a serious challenge in microarray analysis where the sample size is small compared to the number of considered genes. This renders many standar ..."
Abstract - Cited by 237 (6 self) - Add to MetaCart
Motivation: Genetic networks are often described statistically by graphical models (e.g. Bayesian networks). However, inferring the network structure offers a serious challenge in microarray analysis where the sample size is small compared to the number of considered genes. This renders many standard algorithms for graphical models inapplicable, and inferring genetic networks an “ill-posed” inverse problem. Methods: We introduce a novel framework for small-sample inference of graphical models from gene expression data. Specifically, we focus on so-called graphical Gaussian models (GGMs) that are now frequently used to describe gene association networks and to detect conditionally dependent genes. Our new approach is based on (i) improved (regularized) small-sample point estimates of partial correlation, (ii) an exact test of edge inclusion with adaptive estimation of the degree of freedom, and (iii) a heuristic network search based on false discovery rate multiple testing. Steps (ii) and (iii) correspond to an empirical Bayes estimate of the network topology. Results: Using computer simulations we investigate the sensitivity (power) and specificity (true negative rate) of the proposed framework to estimate GGMs from microarray data. This shows that it is possible to recover the true network topology with high accuracy even for small-sample data sets. Subsequently, we analyze gene expression data from a breast cancer tumor study and illustrate our approach by inferring a corresponding large-scale gene association network for 3,883 genes. Availability: The authors have implemented the approach in the R package “GeneTS” that is freely available from ...
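
Steps (i)-(iii) are concrete enough to sketch in a few lines. The code below is my own rough rendering, not the GeneTS implementation: it assumes a simple shrinkage of the correlation matrix toward the identity with a fixed intensity, and a hard threshold on partial correlations in place of the paper's exact edge test and false-discovery-rate search.

    # Sketch: small-sample partial correlations via shrinkage, then naive edge selection.
    import numpy as np

    def partial_correlations(X, lam=0.2):
        R = np.corrcoef(X, rowvar=False)                      # columns = genes, rows = arrays
        R_shrunk = (1 - lam) * R + lam * np.eye(R.shape[0])   # regularized, hence well-conditioned
        P = np.linalg.inv(R_shrunk)                           # precision matrix
        d = np.sqrt(np.diag(P))
        pcor = -P / np.outer(d, d)                            # precision-to-partial-correlation map
        np.fill_diagonal(pcor, 1.0)
        return pcor

    rng = np.random.default_rng(1)
    X = rng.normal(size=(30, 8))          # tiny fake expression matrix: 30 arrays, 8 genes
    X[:, 1] += 0.9 * X[:, 0]              # plant one dependence so an edge should appear

    pcor = partial_correlations(X)
    edges = [(i, j) for i in range(8) for j in range(i + 1, 8) if abs(pcor[i, j]) > 0.3]
    print("selected edges:", edges)

In the paper the shrinkage level and the inclusion decision are not fixed by hand: the degree of freedom of the null distribution is estimated adaptively and edges are selected via false discovery rate multiple testing.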

Inferring High-Level Behavior from Low-Level Sensors

by Donald J. Patterson, Lin Liao, Dieter Fox, Henry Kautz, 2003
"... We present a method of learning a Bayesian model of a traveler moving through an urban environment. This technique is novel in that it simultaneously learns a unified model of the traveler's current mode of transportation as well as his most likely route, in an unsupervised manner. The model ..."
Abstract - Cited by 200 (17 self) - Add to MetaCart
We present a method of learning a Bayesian model of a traveler moving through an urban environment. This technique is novel in that it simultaneously learns a unified model of the traveler's current mode of transportation as well as his most likely route, in an unsupervised manner. The model is implemented using particle filters and learned using Expectation-Maximization. The training data is drawn from a GPS sensor stream that was collected by the authors over a period of three months. We demonstrate that by adding more external knowledge about bus routes and bus stops, accuracy is improved.

Citation Context

... Bayes filters can make use of the independences between the different parts of the tracking problem. Such independences are typically displayed in a graphical model like Fig. 1. A dynamic Bayes net [10, 11], such as this one, consists of a set of variables for each time point t, where an arc from one variable to another indicates a causal influence. Although all of the links are equivalent in their caus...
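
The excerpt describes a dynamic Bayes net as one set of variables per time slice, with arcs carrying influence from one slice to the next. A minimal discrete example of the filtering update this structure supports is sketched below; the numbers and the two-mode state are mine, not the model from the paper.

    # One exact Bayes-filter step in a two-slice dynamic Bayes net with a single discrete
    # state variable (e.g. transportation mode: 0 = walking, 1 = bus) and one observation.
    import numpy as np

    T = np.array([[0.9, 0.1],       # P(x_t | x_{t-1}): rows index x_{t-1}, columns index x_t
                  [0.2, 0.8]])
    O = np.array([[0.7, 0.3],       # P(z_t | x_t): rows index x_t, columns index the reading
                  [0.1, 0.9]])

    belief = np.array([0.5, 0.5])   # posterior over the state at time t-1
    z = 1                           # reading observed at time t

    predicted = belief @ T          # predict: push the belief along the inter-slice arcs
    posterior = predicted * O[:, z] # update: weight by the observation likelihood
    posterior /= posterior.sum()
    print("posterior over mode:", posterior)

With many state variables per slice this exact summation becomes expensive, which is why the paper turns to particle filters.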

Dynamic Conditional Random Fields: Factorized Probabilistic Models for Labeling and Segmenting Sequence Data

by Charles Sutton, Khashayar Rohanimanesh, Andrew McCallum - IN ICML, 2004
"... In sequence modeling, we often wish to represent complex interaction between labels, such as when performing multiple, cascaded labeling tasks on the same sequence, or when longrange dependencies exist. We present dynamic conditional random fields (DCRFs), a generalization of linear-chain cond ..."
Abstract - Cited by 171 (13 self) - Add to MetaCart
In sequence modeling, we often wish to represent complex interaction between labels, such as when performing multiple, cascaded labeling tasks on the same sequence, or when long-range dependencies exist. We present dynamic conditional random fields (DCRFs), a generalization of linear-chain conditional random fields (CRFs) in which each time slice contains a set of state variables and edges (a distributed state representation, as in dynamic Bayesian networks) and parameters are tied across slices. Since exact ...

Citation Context

... in many different areas, including bioinformatics, music modeling, computational linguistics, speech recognition, and information extraction. Dynamic Bayesian networks (DBNs) (Dean & Kanazawa, 1989; Murphy, 2002) are a popular method for probabilistic sequence modeling, because they exploit structure in the problem to compactly represent distributions over multiple state variables. Hidden Markov models (HMMs...
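
Because DCRFs are introduced as a generalization of linear-chain CRFs, a compact reminder of the linear-chain case may help. The sketch below uses made-up emission and transition scores (my assumption, not the DCRF model) and computes the log partition function with the standard forward recursion, checking it against brute-force enumeration.

    # Forward algorithm for a linear-chain CRF in log space: log Z over all label sequences.
    import numpy as np
    from itertools import product
    from scipy.special import logsumexp

    n_labels, n_steps = 2, 4
    rng = np.random.default_rng(2)
    emit = rng.normal(size=(n_steps, n_labels))     # score of label y at position t
    trans = rng.normal(size=(n_labels, n_labels))   # score of moving from label y to label y'

    alpha = emit[0].copy()                          # log-domain forward messages
    for t in range(1, n_steps):
        alpha = emit[t] + logsumexp(alpha[:, None] + trans, axis=0)
    log_Z = logsumexp(alpha)

    # Brute-force check over all n_labels ** n_steps label sequences.
    scores = [sum(emit[t, y] for t, y in enumerate(seq)) +
              sum(trans[seq[t - 1], seq[t]] for t in range(1, n_steps))
              for seq in product(range(n_labels), repeat=n_steps)]
    print(log_Z, logsumexp(np.array(scores)))       # the two values agree

In a DCRF each time slice carries several state variables instead of a single label, which makes the corresponding exact recursion much more expensive.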

Social Signal Processing: Survey of an Emerging Domain

by Alessandro Vinciarelli, Maja Pantic, Hervé Bourlard, 2008
"... The ability to understand and manage social signals of a person we are communicating with is the core of social intelligence. Social intelligence is a facet of human intelligence that has been argued to be indispensable and perhaps the most important for success in life. This paper argues that next- ..."
Abstract - Cited by 153 (32 self) - Add to MetaCart
The ability to understand and manage social signals of a person we are communicating with is the core of social intelligence. Social intelligence is a facet of human intelligence that has been argued to be indispensable and perhaps the most important for success in life. This paper argues that next-generation computing needs to include the essence of social intelligence – the ability to recognize human social signals and social behaviours like turn taking, politeness, and disagreement – in order to become more effective and more efficient. Although each one of us understands the importance of social signals in everyday life situations, and in spite of recent advances in machine analysis of relevant behavioural cues like blinks, smiles, crossed arms, laughter, and similar, design and development of automated systems for Social Signal Processing (SSP) are rather difficult. This paper surveys the past efforts in solving these problems by a computer, it summarizes the relevant findings in social psychology, and it proposes a set of recommendations for enabling the development of the next generation of socially-aware computing.

Lifted first-order probabilistic inference

by Rodrigo De Salvo Braz, Eyal Amir, Dan Roth - In Proceedings of IJCAI-05, 19th International Joint Conference on Artificial Intelligence, 2005
"... Most probabilistic inference algorithms are specified and processed on a propositional level. In the last decade, many proposals for algorithms accepting first-order specifications have been presented, but in the inference stage they still operate on a mostly propositional representation level. [Poo ..."
Abstract - Cited by 126 (8 self) - Add to MetaCart
Most probabilistic inference algorithms are specified and processed on a propositional level. In the last decade, many proposals for algorithms accepting first-order specifications have been presented, but in the inference stage they still operate on a mostly propositional representation level. [Poole, 2003] presented a method to perform inference directly on the first-order level, but this method is limited to special cases. In this paper we present the first exact inference algorithm that operates directly on a first-order level, and that can be applied to any first-order model (specified in a language that generalizes undirected graphical models). Our experiments show superior performance in comparison with propositional exact inference.
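
A toy illustration of what operating "directly on a first-order level" can buy: when n ground atoms are interchangeable and share the same factor, the partition function can be computed by counting how many atoms are true, rather than by enumerating all 2^n truth assignments. The sketch below is my own illustration of that counting idea, not the algorithm from the paper.

    # Lifted vs. propositional computation of Z for n exchangeable Boolean atoms that each
    # contribute the same unary factor phi. Propositional: 2**n terms. Lifted: n + 1 terms.
    from math import comb, prod
    from itertools import product

    phi = {True: 2.0, False: 1.0}   # shared factor value for a single ground atom
    n = 12

    # Propositional: enumerate every truth assignment.
    Z_prop = sum(prod(phi[v] for v in assignment)
                 for assignment in product([True, False], repeat=n))

    # Lifted: group assignments by the number k of true atoms; each group has C(n, k) members.
    Z_lift = sum(comb(n, k) * phi[True] ** k * phi[False] ** (n - k)
                 for k in range(n + 1))

    print(Z_prop, Z_lift)   # identical values; the lifted sum has only n + 1 terms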

Extracting places and activities from gps traces using hierarchical conditional random fields

by Lin Liao, Dieter Fox, Henry Kautz - International Journal of Robotics Research, 2007
"... Learning patterns of human behavior from sensor data is extremely important for high-level activity inference. We show how to extract a person’s activities and significant places from traces of GPS data. Our system uses hierarchically structured conditional random fields to generate a consistent mod ..."
Abstract - Cited by 119 (3 self) - Add to MetaCart
Learning patterns of human behavior from sensor data is extremely important for high-level activity inference. We show how to extract a person’s activities and significant places from traces of GPS data. Our system uses hierarchically structured conditional random fields to generate a consistent model of a person’s activities and places. In contrast to existing techniques, our approach takes high-level context into account in order to detect the significant places of a person. Our experiments show significant improvements over existing techniques. Furthermore, they indicate that our system is able to robustly estimate a person’s activities using a model that is trained from data collected by other persons.

Citation Context

... model that can extract high-level activities from sequences of GPS readings. One possible approach is to use generative models such as hidden Markov models (HMM) [34, 7] or dynamic Bayesian networks [14, 25, 20]. However, discriminative models such as conditional random fields (CRF) have recently been shown to outperform generative techniques in areas such as natural language processing [18, 37], informatio...

a CAPpella: Programming by demonstration of context-aware applications

by Anind K. Dey, Raffay Hamid, Chris Beckmann, Ian Li, Daniel Hsu - in Proceedings of CHI 2004, 2004
"... Context-aware applications are applications that implicitly take their context of use into account by adapting to changes in a user's activities and environments. No one has more intimate knowledge about these activities and environments than end-users themselves. Currently there is no support ..."
Abstract - Cited by 81 (4 self) - Add to MetaCart
Context-aware applications are applications that implicitly take their context of use into account by adapting to changes in a user's activities and environments. No one has more intimate knowledge about these activities and environments than end-users themselves. Currently there is no support for end-users to build context-aware applications for these dynamic settings. To address this issue, we present a CAPpella, a programming by demonstration Context-Aware Prototyping environment intended for end-users. Users "program" their desired context-aware behavior (situation and associated action) in situ, without writing any code, by demonstrating it to a CAPpella and by annotating the relevant portions of the demonstration. Using a meeting and medicine-taking scenario, we illustrate how a user can demonstrate different behaviors to a CAPpella. We describe a CAPpella's underlying system to explain how it supports users in building behaviors and present a study of 14 end-users to illustrate its feasibility and usability.

Using the structure of Web sites for automatic segmentation of tables

by Kristina Lerman, Lise Getoor, Steven Minton, Craig Knoblock, 2004
"... Many Web sites, especially those that dynamically generate HTML pages to display the results of a user’s query, present information in the form of list or tables. Current tools that allow applications to programmatically extract this information rely heavily on user input, often in the form of label ..."
Abstract - Cited by 80 (5 self) - Add to MetaCart
Many Web sites, especially those that dynamically generate HTML pages to display the results of a user’s query, present information in the form of lists or tables. Current tools that allow applications to programmatically extract this information rely heavily on user input, often in the form of labeled extracted records. The sheer size and rate of growth of the Web make any solution that relies primarily on user input infeasible in the long term. Fortunately, many Web sites contain much explicit and implicit structure, both in layout and content, that we can exploit for the purpose of information extraction. This paper describes an approach to automatic extraction and segmentation of records from Web tables. Automatic methods do not require any user input, but rely solely on the layout and content of the Web source. Our approach relies on the common structure of many Web sites, which present information as a list or a table, with a link in each entry leading to a detail page containing additional information about that item. We describe two algorithms that use redundancies in the content of table and detail pages to aid in information extraction. The first algorithm encodes additional information provided by detail pages as constraints and finds the segmentation by solving a constraint satisfaction problem. The second algorithm uses probabilistic inference to find the record segmentation. We show how each approach can exploit the web site structure in a general, domain-independent manner, and we demonstrate the effectiveness of each algorithm on a set of twelve Web sites.

Citation Context

... record, false otherwise. Of course, in addition to the variables, our model describes the dependencies between them. Rather than using the standard HMM representation, we use a factored representation [10, 20], which allows us to more economically model (and learn) the state transition probabilities. We have defined the factored structure of the model, as shown in Figure 2. Arrows indicate probabilistic de...
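
The excerpt notes that a factored representation lets the state transition probabilities be modeled more economically than in a flat HMM. A quick way to see the saving (my own toy construction, not the model from the paper): two independently evolving binary state components need two 2x2 tables, whereas the equivalent flat HMM over the four joint states needs the full 4x4 table, which in this independent case is just their Kronecker product.

    # Factored vs. flat transition model for a state built from two binary components.
    import numpy as np

    A = np.array([[0.9, 0.1],       # transition table for component 1
                  [0.3, 0.7]])
    B = np.array([[0.6, 0.4],       # transition table for component 2
                  [0.2, 0.8]])

    joint = np.kron(A, B)           # equivalent flat HMM transition over the 4 joint states
    print(joint)
    print("factored parameters:", A.size + B.size, "flat parameters:", joint.size)

With more components, or components that depend on only a few parents in the previous slice, the gap grows quickly, which is the economy the factored model exploits.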
