
## The Bigraphical Lasso

Citations: 2 (1 self)

### Citations

2381 | Nonlinear dimensionality reduction by locally linear embedding - Roweis, Saul - 2000 |

1490 | Spectral graph theory - Chung - 1997 |
Citation Context: ...Product When operating on adjacency matrices, the KS is also known in algebraic graph theory as the Cartesian product of graphs and is arguably the most prominent of graph products (Sabidussi, 1959; Chung, 1996; Imrich et al., 2008). This endows the output of the BiGLasso with a more intuitive and interpretable graph decomposition of the induced GMRF; see figure 1 for an example. Enhanced Sparsity For a mat... |
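The Cartesian-product identity referenced in the snippet above can be illustrated with a small NumPy sketch (my own illustration, not code from the paper): the Kronecker sum of two adjacency matrices, A ⊕ B = A ⊗ I + I ⊗ B, is the adjacency matrix of the Cartesian product of the two graphs.

```python
import numpy as np

def kronecker_sum(A, B):
    # Kronecker sum of square matrices: A (+) B = A (x) I_n + I_m (x) B.
    # On adjacency matrices this is the Cartesian graph product.
    m, n = A.shape[0], B.shape[0]
    return np.kron(A, np.eye(n)) + np.kron(np.eye(m), B)

# Path graph P2 (a single edge) in each factor; the Cartesian product
# P2 x P2 is the 4-cycle, so every node should have degree 2.
P2 = np.array([[0, 1], [1, 0]])
C4 = kronecker_sum(P2, P2)
print(C4.sum(axis=0))  # every node has degree 2
```

The helper name `kronecker_sum` is mine; the BiGLasso paper itself only uses the mathematical operation.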

892 | Variable selection via nonconcave penalized likelihood and its oracle properties - Fan, Li - 2005 |

693 | Probabilistic Principal Component Analysis - Tipping, Bishop - 1999 |
Citation Context: ...across data points and the covariance matrix is fit by penalized likelihood. The number of parameters in the covariance matrix can be reduced by low-rank constraints such as factor analysis (see e.g. Tipping & Bishop, 1999) or by constraining the inverse covariance (... |
(Page footer in source: Proceedings of the 30th International Conference on Machine Learning, Atlanta, Georgia, USA, 2013. JMLR: W&CP volume 28. Copyright 2013 by the author(s).)

505 | An efficient method of estimating seemingly unrelated regressions and tests for aggregation bias - Zellner - 1962 |

220 | Probabilistic Non-linear Principal Component Analysis with Gaussian Process Latent Variable Models - Lawrence - 2005 |

170 | Matrix Variate Distributions - Gupta, Nagar - 2000 |

141 | Multi-task Gaussian process prediction - Bonilla, Chai, et al. - 2008 |

131 | Multivariate Geostatistics - Wackernagel - 1998 |
Citation Context: ...ependently computable. The property does not apply in the presence of additive noise, hence the outputs remain coupled. This result first arose in geostatistics under the name of autokrigeability (Wackernagel, 2003) and is also discussed for covariance functions in (O'Hagan, 1998). On the contrary, due to its additive form, a KS-structured noise-free covariance enables inter-task transfer. 2 One factor for covar... |

44 | The MLE algorithm for the matrix normal distribution - Dutilleul - 1999 |

28 | Graphical Models, volume 17 - Lauritzen - 1996 |
Citation Context: ...r precision) to be sparse (e.g. Banerjee et al., 2008). A sparse precision matrix defines a Gaussian Markov random field relationship, which is conveniently represented by a weighted undirected graph (Lauritzen, 1996). Nodes that are not neighbors in the graph are conditionally independent given all other nodes. Models specified in this way encode conditional independence structures between features. An alternati... |
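The precision-matrix/graph correspondence in the snippet above can be checked numerically; a minimal sketch (my own example, not from the paper), where a tridiagonal precision encodes a chain graph:

```python
import numpy as np

# A sparse precision matrix Theta for a 3-node chain 0 - 1 - 2:
# the zero entry Theta[0, 2] means nodes 0 and 2 are conditionally
# independent given node 1, i.e. they are not neighbors in the graph.
Theta = np.array([[ 2.0, -1.0,  0.0],
                  [-1.0,  2.0, -1.0],
                  [ 0.0, -1.0,  2.0]])

edges = [(i, j) for i in range(3) for j in range(i + 1, 3)
         if Theta[i, j] != 0]
print(edges)  # [(0, 1), (1, 2)]: no edge between 0 and 2

# The covariance Sigma = Theta^{-1} is nevertheless dense: the two
# end nodes are marginally dependent, just not conditionally so.
Sigma = np.linalg.inv(Theta)
print(Sigma[0, 2])  # nonzero
```

This is exactly the distinction the snippet draws: zeros in the precision (not the covariance) are what define the Markov graph.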

28 | Learning multiple tasks with a sparse matrix-normal penalty - Zhang, Schneider - 2010 |

13 | Efficient inference in matrix-variate gaussian models with iid observation noise - Stegle, Lippert, et al. - 2011 |

7 | A unifying probabilistic perspective for spectral dimensionality reduction: Insights and new models - Lawrence - 2012 |

5 | Sparse matrix graphical models - Leng, Tang - 2012 |
Citation Context: ...ation (rightmost column). Kronecker-sum (KS) precision matrices. We run the BiGLasso and SMGM using the ℓ1 penalty. The Θp and Ψn precision matrices in both cases are generated in accordance with (§4, Leng & Tang, 2012); namely, as either of the following d × d blocks (d being either p or n) of increasing density: 1. A1: Inverse AR(1) (auto-regressive process) such that A1 = B^{-1} with B_ij = 0.7^{|i-j|}. 2. A2: AR(4)... |
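The A1 block described in the snippet above is easy to reproduce; a hedged sketch (the helper name `ar1_precision` and the sizes are mine, only the B_ij = 0.7^{|i-j|} recipe comes from the quoted setup). Since B is the covariance of an AR(1) process, its inverse A1 = B^{-1} is tridiagonal, i.e. sparse, which is what makes it a convenient test precision:

```python
import numpy as np

def ar1_precision(d, rho=0.7):
    # B_ij = rho^{|i - j|} is an AR(1) covariance; its inverse is
    # tridiagonal by the Markov property of the AR(1) process.
    B = rho ** np.abs(np.subtract.outer(np.arange(d), np.arange(d)))
    return np.linalg.inv(B)

A1 = ar1_precision(5)
# Entries beyond the first off-diagonal vanish (up to round-off).
print(np.allclose(np.triu(A1, k=2), 0))  # True

# A Kronecker-sum precision over a p x n matrix variate, of the form
# the BiGLasso estimates: Omega = Theta_p (+) Psi_n.
p, n = 3, 4
Omega = (np.kron(ar1_precision(p), np.eye(n))
         + np.kron(np.eye(p), ar1_precision(n)))
print(Omega.shape)  # (12, 12)
```

The AR(4) block (A2) from the same setup is denser and would be built analogously from its process coefficients, which the truncated snippet does not give.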

4 | A note on the efficiency of seemingly unrelated regression - Binkley, Nelson - 1988 |