Results 1–10 of 22
On learning discrete graphical models using greedy methods
In Neural Information Processing Systems (NIPS) (currently under review)
, 2011
Abstract

Cited by 28 (5 self)
In this paper, we address the problem of learning the structure of a pairwise graphical model from samples in a high-dimensional setting. Our first main result studies the sparsistency, or consistency in sparsity pattern recovery, properties of a forward-backward greedy algorithm as applied to general statistical models. As a special case, we then apply this algorithm to learn the structure of a discrete graphical model via neighborhood estimation. As a corollary of our general result, we derive sufficient conditions on the number of samples n, the maximum node degree d, and the problem size p, as well as other conditions on the model parameters, so that the algorithm recovers all the edges with high probability. Our result guarantees graph selection for samples scaling as n = Ω(d² log(p)), in contrast to existing convex-optimization-based algorithms that require a sample complexity of Ω(d³ log(p)). Further, the greedy algorithm only requires a restricted strong convexity condition, which is typically milder than irrepresentability assumptions. We corroborate these results using numerical simulations.
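The forward-backward scheme the abstract describes can be sketched for a least-squares loss roughly as follows. This is an illustrative reconstruction, not the paper's exact algorithm: the function name, the stopping threshold `eps`, and the backward fraction `nu` are my own choices.

```python
import numpy as np

def forward_backward_greedy(X, y, eps=1e-4, nu=0.5, max_steps=50):
    """Illustrative forward-backward greedy support selection for least
    squares: forward steps add the single feature whose inclusion most
    decreases the loss; backward steps drop any feature whose removal
    costs less than a nu-fraction of the last forward gain."""
    n, p = X.shape
    support = []

    def loss(S):
        # squared loss of the least-squares fit restricted to support S
        if not S:
            return 0.5 * float(np.mean(y ** 2))
        w, *_ = np.linalg.lstsq(X[:, S], y, rcond=None)
        r = y - X[:, S] @ w
        return 0.5 * float(np.mean(r ** 2))

    for _ in range(max_steps):
        base = loss(support)
        gains = {j: base - loss(support + [j]) for j in range(p) if j not in support}
        if not gains:
            break
        j_best, gain = max(gains.items(), key=lambda kv: kv[1])
        if gain < eps:          # forward step no longer helps: stop
            break
        support.append(j_best)
        while len(support) > 1:  # backward steps: prune barely-useful features
            cur = loss(support)
            costs = {j: loss([k for k in support if k != j]) - cur for j in support}
            j_drop, cost = min(costs.items(), key=lambda kv: kv[1])
            if cost < nu * gain:
                support.remove(j_drop)
            else:
                break
    return sorted(support)
```

On a well-conditioned sparse linear model, the forward steps pick up the true features and the backward steps undo spurious additions; for graphical model selection, the abstract's neighborhood-estimation step would run such a selector once per node with the appropriate conditional loss.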
Structure estimation for discrete graphical models: Generalized covariance matrices and their inverses
, 2012
Abstract

Cited by 17 (3 self)
We investigate the relationship between the structure of a discrete graphical model and the support of the inverse of a generalized covariance matrix. We show that for certain graph structures, the support of the inverse covariance matrix of indicator variables on the vertices of a graph reflects the conditional independence structure of the graph. Our work extends results that have previously been established only in the context of multivariate Gaussian graphical models, thereby addressing an open question about the significance of the inverse covariance matrix of a non-Gaussian distribution. The proof exploits a combination of ideas from the geometry of exponential families, junction tree theory, and convex analysis. These population-level results have various consequences for graph selection methods, both known and new, including a novel method for structure estimation with missing or corrupted observations. We provide non-asymptotic guarantees for such methods, and illustrate the sharpness of these predictions via simulations.
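The population-level claim can be checked exactly on a toy example. Below is a sketch for a three-node Ising chain 1–2–3 (no 1–3 edge): the covariance is computed by exhaustive enumeration, and the non-edge entry of its inverse vanishes. The coupling values and ±1 coding are my own illustrative choices.

```python
import itertools

import numpy as np

# 3-node Ising chain 1 - 2 - 3 with x_i in {-1, +1}:
#   p(x) ∝ exp(t12 * x1 * x2 + t23 * x2 * x3)
# For tree-structured discrete models, the inverse covariance of the
# variables themselves is graph-structured, so entry (1, 3) of the
# inverse should be zero: nodes 1 and 3 share no edge.
t12, t23 = 0.8, -0.5
states = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=float)
logp = t12 * states[:, 0] * states[:, 1] + t23 * states[:, 1] * states[:, 2]
probs = np.exp(logp)
probs /= probs.sum()                     # exact distribution over the 8 states

mean = probs @ states                    # zero here, by the x -> -x symmetry
centered = states - mean
cov = centered.T @ (centered * probs[:, None])   # exact covariance matrix
theta = np.linalg.inv(cov)
```

Here `theta[0, 2]` is zero up to floating-point error, while the edge entries `theta[0, 1]` and `theta[1, 2]` are not; for non-tree discrete graphs, the abstract's point is that one must pass to indicator variables and generalized covariances to recover this kind of statement.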
Graphical models via generalized linear models.
 In Advances in Neural Information Processing Systems (NIPS),
, 2012
Abstract

Cited by 17 (7 self)
Undirected graphical models, also known as Markov networks, enjoy popularity in a variety of applications. The popular instances of these models, such as Gaussian Markov Random Fields (GMRFs), Ising models, and multinomial discrete models, however, do not capture the characteristics of data in many settings. We introduce a new class of graphical models based on generalized linear models (GLMs) by assuming that node-wise conditional distributions arise from exponential families. Our models allow one to estimate multivariate Markov networks given any univariate exponential-family distribution, such as Poisson, negative binomial, and exponential, by fitting penalized GLMs to select the neighborhood for each node. A major contribution of this paper is the rigorous statistical analysis showing that with high probability, the neighborhood of our graphical models can be recovered exactly. We also provide examples of non-Gaussian high-throughput genomic networks learned via our GLM graphical models.
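The per-node penalized GLM fit can be sketched for the binary (Ising/logistic) case as follows. Everything here is an illustrative stand-in: the function names are hypothetical, the ISTA solver is one generic way to fit an ℓ1-penalized GLM, and the demo's three-node chain and coupling strength are my own choices, not the paper's experiments.

```python
import itertools

import numpy as np

def lasso_logistic(X, y, lam=0.05, lr=0.1, iters=3000):
    """ISTA sketch of an l1-penalized logistic regression, standing in for
    one per-node penalized GLM fit; y has entries in {0, 1}.  An unpenalized
    intercept column is prepended."""
    n, p = X.shape
    Xa = np.hstack([np.ones((n, 1)), X])
    w = np.zeros(p + 1)
    for _ in range(iters):
        grad = Xa.T @ (1.0 / (1.0 + np.exp(-Xa @ w)) - y) / n
        w = w - lr * grad
        # soft-threshold everything except the intercept
        w[1:] = np.sign(w[1:]) * np.maximum(np.abs(w[1:]) - lr * lam, 0.0)
    return w[1:]

def glm_neighborhoods(X, lam=0.05):
    """Estimate each node's neighborhood by regressing it on all the other
    nodes; the nonzero coefficients mark the estimated neighbors."""
    n, p = X.shape
    nbrs = []
    for j in range(p):
        others = [k for k in range(p) if k != j]
        w = lasso_logistic(X[:, others], X[:, j], lam=lam)
        nbrs.append({others[i] for i in range(p - 1) if w[i] != 0.0})
    return nbrs

# Demo: exactly sampled 3-node Ising chain 1 - 2 - 3 (no 1-3 edge).
rng = np.random.default_rng(1)
states = np.array(list(itertools.product([0, 1], repeat=3)), dtype=float)
s = 2.0 * states - 1.0
logp = 0.8 * (s[:, 0] * s[:, 1] + s[:, 1] * s[:, 2])
pr = np.exp(logp)
pr /= pr.sum()
X = states[rng.choice(8, size=4000, p=pr)]
nbrs = glm_neighborhoods(X)   # per-node estimated neighbor sets
```

The same recipe extends to other exponential families by swapping the logistic likelihood for the corresponding GLM likelihood (e.g. Poisson), which is the generalization the abstract describes.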
On model selection consistency of M-estimators with geometrically decomposable penalties
 Advances in Neural Information Processing Systems
, 2013
Abstract

Cited by 6 (1 self)
Penalized M-estimators are used in diverse areas of science and engineering to fit high-dimensional models with some low-dimensional structure. Often, the penalties are geometrically decomposable, i.e. can be expressed as a sum of support functions over convex sets. We generalize the notion of irrepresentable to geometrically decomposable penalties and develop a general framework for establishing consistency and model selection consistency of M-estimators with such penalties. We then use this framework to derive results for some special cases of interest in bioinformatics and statistical learning.
Learning the structure of mixed graphical models
 Journal of Computational and Graphical Statistics
, 2014
Abstract

Cited by 3 (0 self)
We consider the problem of learning the structure of a pairwise graphical model over continuous and discrete variables. We present a new pairwise model for graphical models with both continuous and discrete variables that is amenable to structure learning. Previous work has considered structure learning of Gaussian graphical models and of discrete models; our approach is a natural generalization of these two lines of work to the mixed case. The penalization scheme involves a novel symmetric use of the group-lasso norm and follows naturally from a particular parametrization of the model.
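The computational building block behind any group-lasso penalization scheme is the group soft-thresholding (proximal) operator, sketched below. This is the generic operator, not the paper's specific symmetric grouping of mixed-model parameters.

```python
import numpy as np

def group_soft_threshold(v, groups, lam):
    """Proximal operator of the group-lasso norm lam * sum_g ||v_g||_2:
    each group of coordinates is shrunk toward zero by lam in Euclidean
    norm, and dropped entirely once its norm falls below lam.  This
    all-or-nothing behavior per group is what makes group penalties
    select whole edge-parameter blocks at once."""
    out = np.zeros_like(v, dtype=float)
    for g in groups:
        norm = np.linalg.norm(v[g])
        if norm > lam:
            out[g] = (1.0 - lam / norm) * v[g]
    return out
```

In a mixed continuous/discrete model, each group would collect all the parameters tied to one edge (e.g. a whole vector of discrete-by-continuous interaction terms), so an edge enters or leaves the model as a unit.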
Elementary Estimators for Graphical Models
Abstract

Cited by 1 (1 self)
We propose a class of closed-form estimators for sparsity-structured graphical models, expressed as exponential family distributions, under high-dimensional settings. Our approach builds on observing the precise manner in which the classical graphical model MLE “breaks down” under high-dimensional settings. Our estimator uses a carefully constructed, well-defined, closed-form backward map, and then performs thresholding operations to ensure the desired sparsity structure. We provide a rigorous statistical analysis showing that, surprisingly, our simple class of estimators recovers the same asymptotic convergence rates as those of the ℓ1-regularized MLEs, which are much more difficult to compute. We corroborate this statistical performance, as well as significant computational advantages, via simulations of both discrete and Gaussian graphical models.
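For the Gaussian case, the two-step recipe (closed-form backward map, then thresholding) might look like the sketch below. The ridge-regularized inverse is my stand-in for a well-conditioned backward map; the paper's exact proxy and thresholding rule may differ.

```python
import numpy as np

def elementary_precision_estimate(S, ridge=1e-2, lam=0.1):
    """Hedged sketch of a closed-form precision-matrix estimator:
    (1) a well-conditioned, closed-form backward map from the covariance
        estimate S to a dense precision estimate (here: inv(S + ridge*I),
        an illustrative choice), and
    (2) elementwise soft-thresholding of the off-diagonal entries to
        impose the desired sparsity structure."""
    p = S.shape[0]
    theta = np.linalg.inv(S + ridge * np.eye(p))          # backward map
    est = np.sign(theta) * np.maximum(np.abs(theta) - lam, 0.0)
    est[np.diag_indices(p)] = np.diag(theta)              # keep the diagonal
    return est
```

The appeal the abstract claims is that both steps are non-iterative: one matrix inverse plus one thresholding pass, versus solving an ℓ1-regularized MLE program.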
Finding and Leveraging Structure in Learning Problems
, 2012
PhD dissertation.
Vector-Space Markov Random Fields via Exponential Families
Abstract

Cited by 1 (1 self)
We present Vector-Space Markov Random Fields (VS-MRFs), a novel class of undirected graphical models where each variable can belong to an arbitrary vector space. VS-MRFs generalize a recent line of work on scalar-valued, uni-parameter exponential family and mixed graphical models, thereby greatly broadening the class of exponential families available (e.g., allowing multinomial and Dirichlet distributions). Specifically, VS-MRFs are the joint graphical model distributions where the node-conditional distributions belong to generic exponential families with general vector-space domains. We also present a sparsistent M-estimator for learning our class of MRFs that recovers the correct set of edges with high probability. We validate our approach via a set of synthetic data experiments as well as a real-world case study of over four million foods from the popular diet-tracking app MyFitnessPal. Our results demonstrate that our algorithm performs well empirically and that VS-MRFs are capable of capturing and highlighting interesting structure in complex, real-world data. All code for our algorithm is open source and publicly available.