Results 1 – 8 of 8
Learning the structure of linear latent variable models
Journal of Machine Learning Research, 2006
Abstract

Cited by 57 (17 self)
We describe anytime search procedures that (1) find disjoint subsets of recorded variables for which the members of each subset are d-separated by a single common unrecorded cause, if such exists; (2) return information about the causal relations among the latent factors so identified. We prove the procedure is pointwise consistent assuming (a) the causal relations can be represented by a directed acyclic graph (DAG) satisfying the Markov Assumption and the Faithfulness Assumption; (b) unrecorded variables are not caused by recorded variables; and (c) dependencies are linear. We compare the procedure with standard approaches over a variety of simulated structures and sample sizes, and illustrate its practical value with brief studies of social science data sets. Finally, we …
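The single-common-cause clusters this search procedure looks for leave an algebraic fingerprint on the covariance matrix. A minimal sketch of that fingerprint, assuming a linear one-factor model with invented loadings and error variances: four indicators of one latent cause satisfy the vanishing-tetrad constraint exactly.

```python
import numpy as np

# One-factor model: X_i = lam_i * L + eps_i, with a single unrecorded
# cause L. The implied covariance is Sigma = lam lam^T + diag(psi),
# so every off-diagonal entry factors as lam_i * lam_j.
lam = np.array([0.9, 0.8, 0.7, 0.6])   # loadings (illustrative values)
psi = np.array([0.5, 0.4, 0.6, 0.3])   # error variances (illustrative)
Sigma = np.outer(lam, lam) + np.diag(psi)

# Vanishing tetrad: sigma_12 * sigma_34 - sigma_13 * sigma_24 = 0,
# since both products equal lam_1 * lam_2 * lam_3 * lam_4.
tetrad = Sigma[0, 1] * Sigma[2, 3] - Sigma[0, 2] * Sigma[1, 3]
print(abs(tetrad) < 1e-12)  # True: the constraint holds exactly
```

In sample covariance matrices the tetrad difference is only approximately zero, which is why the procedure pairs such constraints with statistical tests.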
The TETRAD Project: Constraint Based Aids to Causal Model Specification
Multivariate Behavioral Research
Glymour: Linearity properties of Bayes nets with binary variables
Uncertainty in Artificial Intelligence: Proceedings of the 17th Conference (UAI 2001), 2001
Abstract

Cited by 13 (9 self)
It is “well known” that in linear models: (1) testable constraints on the marginal distribution of observed variables distinguish certain cases in which an unobserved cause jointly influences several observed variables; (2) the technique of “instrumental variables” sometimes permits an estimation of the influence of one variable on another even when the association between the variables may be confounded by unobserved common causes; (3) the association (or conditional probability distribution of one variable given another) of two variables connected by a path or pair of paths with a single common vertex (a trek) can be computed directly from the parameter values associated with each edge in the trek; (4) the association of two variables produced by multiple treks can be computed from the parameters associated with each trek; and (5) the independence of two variables conditional on a third implies the corresponding independence of the sums of the variables over all units conditional on the sums over all …
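Point (2) of the list can be checked in closed form. A hedged sketch with an invented linear system: representing each variable by its coefficients over independent unit-variance shocks makes covariances dot products, and the instrumental-variables ratio recovers the structural coefficient that naive regression gets wrong under confounding.

```python
import numpy as np

# Shock basis: [Z, U, eps_x, eps_y], independent with unit variance.
# Z instruments X; U is an unobserved confounder of X and Y.
a, b = 0.7, 1.5                              # structural coefficients (invented)
z = np.array([1.0, 0.0, 0.0, 0.0])
x = a * z + np.array([0.0, 1.0, 1.0, 0.0])   # X = a*Z + U + eps_x
y = b * x + np.array([0.0, 1.0, 0.0, 1.0])   # Y = b*X + U + eps_y

cov = lambda u, v: float(u @ v)              # dot product = population covariance
iv_estimate = cov(z, y) / cov(z, x)          # instrumental-variables ratio
naive_slope = cov(x, y) / cov(x, x)          # ordinary regression of Y on X
print(iv_estimate)                           # recovers b despite confounding
```

The naive slope absorbs the U path and overshoots b, while cov(Z, Y)/cov(Z, X) cancels the confounding because Z is independent of U.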
Trek separation for Gaussian graphical models
Submitted to the Annals of Statistics, 2009
Abstract

Cited by 11 (5 self)
Gaussian graphical models are semialgebraic subsets of the cone of positive definite covariance matrices. Submatrices with low rank correspond to generalizations of conditional independence constraints on collections of random variables. We give a precise graph-theoretic characterization of when submatrices of the covariance matrix have small rank for a general class of mixed graphs that includes directed acyclic and undirected graphs as special cases. Our new trek separation criterion generalizes the familiar d-separation criterion. Proofs are based on the trek rule, the resulting matrix factorizations, and classical theorems of algebraic combinatorics on the expansions of determinants of path polynomials.
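The low-rank constraints the paper characterizes can be seen numerically on a toy example. A sketch under invented parameters: in a DAG where every trek between two sets of nodes passes through a single vertex, the corresponding covariance submatrix has rank at most one.

```python
import numpy as np

# DAG on nodes [X1, X2, Z, Y1, Y2]: X1 -> Z, X2 -> Z, Z -> Y1, Z -> Y2.
# Every trek between {X1, X2} and {Y1, Y2} passes through the single
# node Z, so the submatrix Sigma[{X1,X2}, {Y1,Y2}] has rank at most 1.
n = 5
Lam = np.zeros((n, n))                       # Lam[i, j] = coefficient of i -> j
Lam[0, 2], Lam[1, 2] = 0.7, -0.4             # edge weights (invented)
Lam[2, 3], Lam[2, 4] = 0.9, 0.5
Omega = np.diag([1.0, 1.2, 0.8, 0.6, 1.1])   # independent error variances

# Trek rule in matrix form: Sigma = (I - Lam)^{-T} Omega (I - Lam)^{-1}
A = np.linalg.inv(np.eye(n) - Lam)
Sigma = A.T @ Omega @ A

sub = Sigma[np.ix_([0, 1], [3, 4])]
print(np.linalg.matrix_rank(sub, tol=1e-10))  # 1: the low-rank constraint
```

The 2x2 determinant of `sub` vanishes for every choice of edge weights and error variances, which is exactly the kind of polynomial constraint the trek separation criterion reads off the graph.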
Automatic discovery of latent variable models
Machine Learning Dept., CMU, 2005
Cited by 9 (4 self)
Generalized measurement models
, 2004
Cited by 7 (4 self)
Identification and likelihood inference for recursive linear models with correlated errors
, 2007
Abstract

Cited by 3 (1 self)
In recursive linear models, the multivariate normal joint distribution of all variables exhibits a dependence structure induced by recursive systems of linear structural equations. Such models appear in particular in seemingly unrelated regressions, structural equation modelling, simultaneous equation systems, and in Gaussian graphical modelling. We show that recursive linear models that are ‘bow-free’ are well-behaved statistical models, namely, they are everywhere identifiable and form curved exponential families. Here, ‘bow-free’ refers to models satisfying the condition that if a variable x occurs in the structural equation for y, then the errors for x and y are uncorrelated. For the computation of maximum likelihood estimates in ‘bow-free’ recursive linear models we introduce the Residual Iterative Conditional Fitting (RICF) algorithm. Compared to existing algorithms, RICF is easily implemented, requiring only least squares computations, has clear convergence properties, and finds parameter estimates in closed form whenever possible.
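The bow-free condition itself is purely graphical and easy to check. A minimal sketch with a hypothetical helper and invented edge lists: a "bow" is a pair of variables joined by both a directed edge and a bidirected (correlated-errors) edge.

```python
# A mixed graph is "bow-free" when no pair of variables carries both a
# directed edge x -> y and a bidirected edge x <-> y (correlated errors).
# Hypothetical helper; edge lists below are invented for illustration.
def is_bow_free(directed, bidirected):
    bows = {frozenset(e) for e in bidirected}
    return all(frozenset(e) not in bows for e in directed)

# Seemingly-unrelated-regressions shape: two regressions whose error
# terms are correlated, but no directed edge doubles a bidirected one.
directed = [("x1", "y1"), ("x2", "y2")]
bidirected = [("y1", "y2")]
print(is_bow_free(directed, bidirected))                   # True
print(is_bow_free(directed + [("y1", "y2")], bidirected))  # False: a bow
```

The first graph is bow-free and thus, per the abstract, everywhere identifiable; adding the directed edge y1 -> y2 alongside the bidirected one creates a bow and forfeits that guarantee.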
Linearity Properties of Bayes Nets with Binary Variables
, 2001
Abstract
It is “well known” that in linear models: (1) testable constraints on the marginal distribution of observed variables distinguish certain cases in which an unobserved cause jointly influences several observed variables; (2) the technique of “instrumental variables” sometimes permits an estimation of the influence of one variable on another even when the association between the variables may be confounded by unobserved common causes; (3) the association (or conditional probability distribution of one variable given another) of two variables connected by a path or pair of paths with a single common vertex (a trek) can be computed directly from the parameter values associated with each edge in the trek; (4) the association of two variables produced by multiple treks can be computed from the parameters associated with each trek; and (5) the independence of two variables conditional on a third implies the corresponding independence of the sums of the variables over all units conditional on the sums over all …