Results 1–10 of 21
Discrete chain graph models
Bernoulli, 2009
Cited by 37 (2 self)
The statistical literature discusses different types of Markov properties for chain graphs that lead to four possible classes of chain graph Markov models. The different models are rather well understood when the observations are continuous and multivariate normal, and it is also known that one model class, referred to as models of LWF (Lauritzen-Wermuth-Frydenberg) or block concentration type, yields discrete models for categorical data that are smooth. This paper considers the structural properties of the discrete models based on the three alternative Markov properties. It is shown by example that two of the alternative Markov properties can lead to nonsmooth models. The remaining model class, which can be viewed as a discrete version of multivariate regressions, is proven to comprise only smooth models. The proof employs a simple change of coordinates that also reveals that the model's likelihood function is unimodal if the chain components of the graph are complete sets.
Algebraic methods for evaluating integrals in Bayesian statistics
2011
Cited by 17 (14 self)
The accurate evaluation of marginal likelihood integrals is a difficult fundamental problem in Bayesian inference that has important applications in machine learning and computational biology. Following the recent success of algebraic statistics [16,41,43] in frequentist inference and inspired by Watanabe’s foundational approach to singular learning theory [58], the goal of this dissertation is to study algebraic, geometric and combinatorial methods for computing Bayesian integrals effectively, and to explore the rich mathematical theories that arise in this connection between statistics and algebraic geometry. For these integrals, we investigate their exact evaluation for small samples and their asymptotics for large samples. According to Watanabe, the key to understanding singular models lies in desingularizing the Kullback-Leibler function K(ω) of the model at the true distribution. This step puts the model in a standard form so that various central limit theorems can be applied. While general algorithms exist for desingularizing any analytic function, applying them to non-polynomial functions such as K(ω) can be computationally expensive. Many singular models are however represented as regular models whose parameters are polynomial functions of new parameters. Discrete models and multivariate Gaussian models are all examples. We call them regularly
Global identifiability of linear structural equation models
Ann. Statist., 2011
Marginal Likelihood Integrals for Mixtures of Independence Models
2009
Cited by 10 (7 self)
Inference in Bayesian statistics involves the evaluation of marginal likelihood integrals. We present algebraic algorithms for computing such integrals exactly for discrete data of small sample size. Our methods apply to both uniform priors and Dirichlet priors. The underlying statistical models are mixtures of independent distributions, or, in geometric language, secant varieties of Segre-Veronese varieties.
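As a toy illustration of exact marginal likelihood evaluation for small discrete samples (a far simpler special case than the secant-variety mixture models treated in the paper), the integral for a single categorical variable under a symmetric Dirichlet prior reduces to a ratio of rising factorials and can be evaluated exactly in rational arithmetic. The function name and setup below are illustrative, not taken from the paper:

```python
from fractions import Fraction

def dirichlet_multinomial_ml(counts, alpha=1):
    """Exact marginal likelihood of an ordered categorical sample with the
    given counts, under a symmetric Dirichlet(alpha) prior on the k
    probability parameters. Rational arithmetic keeps the result exact."""
    n = sum(counts)
    k = len(counts)
    # Numerator: product over categories of the rising factorial alpha^(n_i)
    num = Fraction(1)
    for c in counts:
        for j in range(c):
            num *= Fraction(alpha + j)
    # Denominator: rising factorial (k * alpha)^(n)
    den = Fraction(1)
    for j in range(n):
        den *= Fraction(k * alpha + j)
    return num / den

# Counts (2, 1, 0) for a 3-state variable with a uniform Dirichlet(1) prior
print(dirichlet_multinomial_ml((2, 1, 0)))  # → 1/30
```

With the uniform prior (alpha = 1) this is the classical Dirichlet-multinomial formula; the exact-evaluation algorithms in the paper handle the much harder case where the model is a mixture of such independence models.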
Computing Maximum Likelihood Estimates in Recursive ...
2008
Cited by 10 (4 self)
In recursive linear models, the multivariate normal joint distribution of all variables exhibits a dependence structure induced by a recursive (or acyclic) system of linear structural equations. These linear models have a long tradition and appear in seemingly unrelated regressions, structural equation modelling, and approaches to causal inference. They are also related to Gaussian graphical models via a classical representation known as a path diagram. Despite the models’ long history, a number of problems remain open. In this paper, we address the problem of computing maximum likelihood estimates in the subclass of ‘bow-free’ recursive linear models. The term ‘bow-free’ refers to the condition that the errors for variables i and j be uncorrelated if variable i occurs in the structural equation for variable j. We introduce a new algorithm, termed Residual Iterative Conditional Fitting (RICF), that can be implemented using only least squares computations. In contrast to existing algorithms, RICF has clear convergence properties and finds parameter estimates in closed form whenever possible.
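In the degenerate special case of a recursive system with no correlated errors at all (trivially bow-free), the likelihood factorizes equation by equation and the maximum likelihood estimates are exactly the per-equation least squares fits, which is the closed-form situation the abstract alludes to. The sketch below, with made-up coefficients, illustrates only this special case, not the full RICF iteration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Recursive (acyclic) linear system with independent errors:
#   X1 = e1,   X2 = 0.8 * X1 + e2,   X3 = -0.5 * X2 + e3
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + rng.normal(size=n)
x3 = -0.5 * x2 + rng.normal(size=n)

# With uncorrelated errors the Gaussian likelihood factorizes, so each
# structural coefficient is estimated in closed form by ordinary least
# squares on its own equation.
b21 = np.linalg.lstsq(x1[:, None], x2, rcond=None)[0][0]
b32 = np.linalg.lstsq(x2[:, None], x3, rcond=None)[0][0]
print(b21, b32)  # close to 0.8 and -0.5
```

When some error covariances are nonzero (a bow-free but non-trivial path diagram), the equations no longer decouple, and RICF cycles through them, refitting each against residuals from the others.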
Maximum likelihood fitting of acyclic directed mixed graphs to binary data
Proceedings of the 26th International Conference on Uncertainty in Artificial Intelligence, 2010
Identification of discrete concentration graph models with one hidden binary variable
Cited by 3 (0 self)
Conditions are presented for different types of identifiability of discrete variable models generated over an undirected graph in which one node represents a binary hidden variable. These models can be seen as extensions of the latent class model to allow for conditional associations between the observable random variables. Since local identification corresponds to full rank of the parametrization map, we establish a necessary and sufficient condition for the rank to be full everywhere in the parameter space. The condition is based on the topology of the undirected graph associated to the model. For non-full-rank models, the obtained characterization allows us to find the subset of the parameter space where the identifiability breaks down.
The geometry of conditional independence tree models with hidden variables
2009
Cited by 2 (1 self)
In this paper we investigate the geometry of undirected graphical models on trees when all the variables in the system are binary and some of them are hidden. We obtain a full description of these models, given by polynomial equations and inequalities, and give exact formulas for their parameters in terms of the marginal probabilities over the observed variables. We also show how correlations relate to the tree metrics considered in phylogenetics. Finally, a new system of coordinates is given that is intrinsically related to phylogenetic tree models and allows us to classify phylogenetic invariants.
Smoothness of Gaussian Conditional Independence Models
Contemporary Mathematics, 2010
Cited by 2 (0 self)
Conditional independence in a multivariate normal (or Gaussian) distribution is characterized by the vanishing of subdeterminants of the distribution’s covariance matrix. Gaussian conditional independence models thus correspond to algebraic subsets of the cone of positive definite matrices. For statistical inference in such models it is important to know whether or not the model contains singularities. We study this issue in models involving up to four random variables. In particular, we give examples of conditional independence relations which, despite being probabilistically representable, yield models that nontrivially decompose into a finite union of several smooth submodels.
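The vanishing-subdeterminant characterization is easy to check numerically: X_i is conditionally independent of X_j given X_S exactly when the subdeterminant of the covariance matrix Σ with rows {i} ∪ S and columns {j} ∪ S vanishes. A minimal sketch, using a hand-picked covariance matrix assumed for illustration:

```python
import numpy as np

# Covariance of a Gaussian Markov chain X1 -> X2 -> X3, constructed so
# that X1 is independent of X3 given X2 (sigma_13 = sigma_12 * sigma_23).
Sigma = np.array([[1.00, 0.50, 0.25],
                  [0.50, 1.00, 0.50],
                  [0.25, 0.50, 1.00]])

# X1 _||_ X3 | X2 holds iff det Sigma[{1} u {2}, {3} u {2}] = 0
# (0-based indices below: rows {0, 1}, columns {2, 1}).
sub = Sigma[np.ix_([0, 1], [2, 1])]
print(np.linalg.det(sub))  # 0.0: the conditional independence holds

# Marginal independence X1 _||_ X3 would instead require the 1x1
# subdeterminant sigma_13 = 0, which fails here (it is 0.25).
print(Sigma[0, 2])
```

Gaussian conditional independence models are the simultaneous zero sets of such subdeterminants inside the positive definite cone, which is what makes them algebraic subsets amenable to the singularity analysis described above.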