Results 1–10 of 66
Learning Graphical Models With Hubs
"... We consider the problem of learning a high-dimensional graphical model in which certain hub nodes are highly connected to many other nodes. Many authors have studied the use of an ℓ1 penalty in order to learn a sparse graph in the high-dimensional setting. However, the ℓ1 penalty implicitly assumes ..."
Cited by 1 (0 self)
assumes that each edge is equally likely and independent of all other edges. We propose a general framework to accommodate more realistic networks with hub nodes, using a convex formulation that involves a row-column overlap norm penalty. We apply this general framework to three widely-used probabilistic
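The row-column overlap norm described in this snippet can be illustrated with a minimal sketch. The decomposition Θ = Z + V + Vᵀ and the penalty weights `lam1`–`lam3` are assumptions reconstructed from the abstract, not the paper's exact formulation:

```python
import numpy as np

def hub_penalty(Z, V, lam1=1.0, lam2=1.0, lam3=1.0):
    """Penalty on a decomposition Theta = Z + V + V.T:
    the sparse part Z gets an l1 penalty, and the hub part V gets
    an l1 penalty plus a column-wise group (l1/l2) penalty that
    drives all but a few hub columns of V to zero."""
    off = ~np.eye(Z.shape[0], dtype=bool)  # exclude diagonal entries
    return (lam1 * np.abs(Z[off]).sum()
            + lam2 * np.abs(V[off]).sum()
            + lam3 * np.linalg.norm(V, axis=0).sum())
```

The column-wise ℓ2 norm is what makes a nonzero column of V act as a hub connected to many nodes at once.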
Structured Learning of Gaussian Graphical Models
"... We consider estimation of multiple high-dimensional Gaussian graphical models corresponding to a single set of nodes under several distinct conditions. We assume that most aspects of the networks are shared, but that there are some structured differences between them. Specifically, the network diffe ..."
Cited by 8 (1 self)
to the aberrant activity of a few specific genes. We propose to solve this problem using the perturbed-node joint graphical lasso, a convex optimization problem that is based upon the use of a row-column overlap norm penalty. We then solve the convex problem using an alternating direction method of multipliers
Grouped and hierarchical model selection through composite absolute penalties
 Annals of Statistics
, 2006
"... Extracting useful information from high-dimensional data is an important part of the focus of today’s statistical research and practice. Penalized loss function minimization has been shown to be effective for this task both theoretically and empirically. With the virtues of both regularization and ..."
Cited by 119 (3 self)
and sparsity, the L1-penalized L2 minimization method Lasso has been popular in regression models. In this paper, we combine different norms including L1 to form an intelligent penalty in order to add side information to the fitting of a regression or classification model to obtain reasonable estimates
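As a reminder of the basic building block these composite penalties combine, the Lasso's L1 penalty has a closed-form solution under an orthonormal design: elementwise soft-thresholding. This is a standard textbook fact, not code from the paper:

```python
import numpy as np

def soft_threshold(y, lam):
    # Closed-form lasso solution for an orthonormal design:
    # argmin_b 0.5*(y - b)^2 + lam*|b|, applied elementwise.
    # Shrinks every coefficient toward zero and zeroes out
    # those with magnitude below lam.
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)
```

Composite absolute penalties build grouped and hierarchical structure by combining norms of this kind over overlapping sets of coefficients.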
Low-Rank Optimization with Trace Norm Penalty
"... Abstract. The paper addresses the problem of low-rank trace norm minimization. We propose an algorithm that alternates between fixed-rank optimization and rank-one updates. The fixed-rank optimization is characterized by an efficient factorization that makes the trace norm differentiable in the sear ..."
Cited by 19 (5 self)
optimization scheme converges superlinearly to the global solution while maintaining complexity that is linear in the number of rows and columns of the matrix. To compute a set of solutions efficiently for a grid of regularization parameters we propose a predictor-corrector approach that outperforms the naive
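The trace (nuclear) norm admits a closed-form proximal operator, singular-value soft-thresholding, which is the core step in many trace-norm solvers. A minimal sketch of that standard result (not the paper's fixed-rank algorithm):

```python
import numpy as np

def prox_trace_norm(X, lam):
    # Proximal operator of lam * ||X||_* (trace/nuclear norm):
    # soft-threshold the singular values, which shrinks the
    # spectrum and zeroes out small singular values, lowering rank.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - lam, 0.0)
    return U @ np.diag(s_shrunk) @ Vt
```

A full SVD per iteration is what motivates factorization-based fixed-rank methods like the one in this entry.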
Proximal methods for the latent group lasso penalty
, 2014
"... We consider a regularized least squares problem, with regularization by structured sparsity-inducing norms, which extend the usual ℓ1 and the group lasso penalty, by allowing the subsets to overlap. Such regularizations lead to nonsmooth problems that are difficult to optimize, and we propose in thi ..."
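For a single non-overlapping group, the group lasso penalty's proximal operator is block soft-thresholding; the latent (overlapping) group lasso builds on this per-group step. A minimal sketch of the standard per-group operator (not the paper's duplication-based construction for overlapping groups):

```python
import numpy as np

def prox_group(v, lam):
    # Proximal operator of lam * ||v||_2 for one group of
    # coefficients: block soft-thresholding. Either the whole
    # group is zeroed out, or it is shrunk toward zero as a unit.
    nrm = np.linalg.norm(v)
    if nrm <= lam:
        return np.zeros_like(v)
    return (1.0 - lam / nrm) * v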
Convex optimization techniques for fitting sparse Gaussian graphical models
 In Proceedings of the 23rd International Conference on Machine Learning
, 2006
"... We consider the problem of fitting a large-scale covariance matrix to multivariate Gaussian data in such a way that the inverse is sparse, thus providing model selection. Beginning with a dense empirical covariance matrix, we solve a maximum likelihood problem with an ℓ1-norm penalty term added to e ..."
Cited by 64 (0 self)
, based on Nesterov’s first-order algorithm, yields a rigorous complexity estimate for the problem, with a much better dependence on problem size than interior-point methods. Our second algorithm uses block coordinate descent, updating row/columns of the covariance matrix sequentially. Experiments
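Both algorithms in this entry minimize the same ℓ1-penalized negative Gaussian log-likelihood over the inverse covariance Θ. A sketch of that objective (penalizing only the off-diagonal entries is one common variant, an assumption here, not necessarily the paper's exact choice):

```python
import numpy as np

def penalized_neg_loglik(Theta, S, lam):
    # l1-penalized Gaussian maximum-likelihood objective:
    #   -log det(Theta) + trace(S @ Theta) + lam * sum |Theta_ij|
    # where S is the empirical covariance and the penalty is taken
    # over off-diagonal entries only (one common convention).
    _, logdet = np.linalg.slogdet(Theta)
    off = ~np.eye(Theta.shape[0], dtype=bool)
    return -logdet + np.trace(S @ Theta) + lam * np.abs(Theta[off]).sum()
```

Zeros in the minimizing Θ correspond to conditional independences, which is why the ℓ1 penalty performs model selection.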
Smoothing Proximal Gradient Method for General Structured Sparse Learning
"... We study the problem of learning high-dimensional regression models regularized by a structured-sparsity-inducing penalty that encodes prior structural information on either input or output sides. We consider two widely adopted types of such penalties as our motivating examples: 1) overlapping group ..."
Cited by 55 (7 self)
On the Uniqueness of Nonnegative Sparse Solutions to Underdetermined Systems of Equations
, 2008
"... An underdetermined linear system of equations Ax = b with nonnegativity constraint x ≥ 0 is considered. It is shown that for matrices A with a row span intersecting the positive orthant, if this problem admits a sufficiently sparse solution, it is necessarily unique. The bound on the required sparsity ..."
Cited by 44 (0 self)
, considering a matrix A with arbitrary column norms, and an arbitrary monotone element-wise concave penalty replacing the ℓ1-norm objective function. Finally, from a numerical point of view, a greedy algorithm—a variant of the matching pursuit—is presented, such that it is guaranteed to find this sparse
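The greedy matching-pursuit variant mentioned can be sketched as follows. This is a simplified illustration under nonnegativity; the paper's exact variant and its recovery guarantee are not reproduced here:

```python
import numpy as np

def nonneg_matching_pursuit(A, b, k):
    # Greedy sketch of nonnegative matching pursuit: repeatedly
    # pick the column most positively correlated with the residual
    # and take a nonnegative step along it, for at most k steps.
    x = np.zeros(A.shape[1])
    r = b.astype(float).copy()
    for _ in range(k):
        corr = A.T @ r
        j = int(np.argmax(corr))
        if corr[j] <= 0:          # no column improves the fit
            break
        step = corr[j] / (A[:, j] @ A[:, j])
        x[j] += step              # nonnegative update keeps x >= 0
        r -= step * A[:, j]
    return x
```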
“Scalable Convex Methods for Flexible Low-Rank Matrix Modeling.” arXiv preprint arXiv:1308.4211
, 2013
"... We propose a general framework for reduced-rank modeling of matrix-valued data. By applying a generalized nuclear norm penalty we directly model low-dimensional latent variables associated with rows and columns. Our framework flexibly incorporates row and column features, smoothing kernels, and othe ..."
Cited by 2 (0 self)
An Efficient Proximal Gradient Method for General Structured Sparse Learning
"... We study the problem of learning regression models regularized by the structured sparsity-inducing penalty which encodes the prior structural information. We consider two most widely adopted structures as motivating examples: (1) group structure (might overlap) which is encoded via ℓ1/ℓ2 mixed norm ..."
Cited by 12 (0 self)