
CiteSeerX

Results 1 - 10 of 5,510

Sparsistency of the Edge Lasso over Graphs

by James Sharpnack, Alessandro Rinaldo, Aarti Singh
"... The fused lasso was proposed recently to enable recovery of high-dimensional patterns which are piece-wise constant on a graph, by penalizing the 1-norm of differences of measurements at nodes that share an edge. While there have been some attempts at coming up with efficient algorithms for solving ..."
Abstract - Cited by 5 (3 self) - Add to MetaCart
The fused lasso was proposed recently to enable recovery of high-dimensional patterns which are piece-wise constant on a graph, by penalizing the 1-norm of differences of measurements at nodes that share an edge. While there have been some attempts at coming up with efficient algorithms for solving
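
For reference, the graph fused lasso (edge lasso) objective described in this abstract can be written as follows; the notation (y_i for the measurement at node i, E for the edge set, λ for the penalty level) is ours, not quoted from the paper:

    \hat{\beta} = \arg\min_{\beta \in \mathbb{R}^n} \; \frac{1}{2} \sum_{i=1}^{n} (y_i - \beta_i)^2 \; + \; \lambda \sum_{(i,j) \in E} |\beta_i - \beta_j|

The penalty is exactly the 1-norm of differences across edges, so for large λ the estimate becomes piecewise constant over regions of the graph.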

Least angle regression

by Bradley Efron, Trevor Hastie, Iain Johnstone, Robert Tibshirani - Ann. Statist., 2004
"... The purpose of model selection algorithms such as All Subsets, Forward Selection and Backward Elimination is to choose a linear model on the basis of the same set of data to which the model will be applied. Typically we have available a large collection of possible covariates from which we hope to s ..."
Abstract - Cited by 1308 (43 self) - Add to MetaCart
implements the Lasso, an attractive version of ordinary least squares that constrains the sum of the absolute regression coefficients; the LARS modification calculates all possible Lasso estimates for a given problem, using an order of magnitude less computer time than previous methods. (2) A different LARS
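
The constraint the passage refers to is the standard Lasso formulation (notation ours; t is the budget on the coefficient 1-norm):

    \hat{\beta}^{\mathrm{lasso}} = \arg\min_{\beta} \; \| y - X\beta \|_2^2 \quad \text{subject to} \quad \sum_j |\beta_j| \le t

The LARS modification traces the full path of solutions as t varies, which is why a single run delivers all Lasso estimates for a given problem.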

"GrabCut” -- interactive foreground extraction using iterated graph cuts

by Carsten Rother, Vladimir Kolmogorov, Andrew Blake - ACM TRANS. GRAPH, 2004
"... The problem of efficient, interactive foreground/background segmentation in still images is of great practical importance in image editing. Classical image segmentation tools use either texture (colour) information, e.g. Magic Wand, or edge (contrast) information, e.g. Intelligent Scissors. Recently ..."
Abstract - Cited by 1140 (36 self) - Add to MetaCart
The problem of efficient, interactive foreground/background segmentation in still images is of great practical importance in image editing. Classical image segmentation tools use either texture (colour) information, e.g. Magic Wand, or edge (contrast) information, e.g. Intelligent Scissors
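
As a rough sketch of the graph-cut approach (our paraphrase of the standard formulation, not a quotation from the paper): segmentation is cast as minimizing a Gibbs energy over per-pixel labels α, given image data z and colour-model parameters θ,

    E(\alpha, \theta, z) = U(\alpha, \theta, z) + V(\alpha, z)

where U scores how well each pixel's label fits the foreground/background colour models and V is a contrast-sensitive smoothness term penalizing label changes between neighbouring pixels. GrabCut iterates between re-estimating θ and re-solving the min-cut.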

The Elements of Statistical Learning -- Data Mining, Inference, and Prediction

by Trevor Hastie, Robert Tibshirani, Jerome Friedman
"... ..."
Abstract - Cited by 1320 (13 self) - Add to MetaCart
Abstract not found

An iterative thresholding algorithm for linear inverse problems with a sparsity constraint

by Ingrid Daubechies, Michel Defrise, Christine De Mol, 2008
"... ..."
Abstract - Cited by 752 (9 self) - Add to MetaCart
Abstract not found

Consistency of the group lasso and multiple kernel learning

by Francis R. Bach - Journal of Machine Learning Research, 2007
"... We consider the least-square regression problem with regularization by a block 1-norm, i.e., a sum of Euclidean norms over spaces of dimensions larger than one. This problem, referred to as the group Lasso, extends the usual regularization by the 1-norm where all spaces have dimension one, where it ..."
Abstract - Cited by 281 (34 self) - Add to MetaCart
We consider the least-square regression problem with regularization by a block 1-norm, i.e., a sum of Euclidean norms over spaces of dimensions larger than one. This problem, referred to as the group Lasso, extends the usual regularization by the 1-norm where all spaces have dimension one, where
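
The block 1-norm in question, for coefficients partitioned into groups G_1, ..., G_m (our notation; weighted variants attach a per-group factor to λ):

    \hat{\beta} = \arg\min_{\beta} \; \| y - X\beta \|_2^2 + \lambda \sum_{g=1}^{m} \| \beta_{G_g} \|_2

Because the group norms are not squared, entire groups of coefficients are zeroed out at once, which makes consistency of the selected groups the natural question.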

From Sparse Solutions of Systems of Equations to Sparse Modeling of Signals and Images

by Alfred M. Bruckstein, David L. Donoho, Michael Elad, 2007
"... A full-rank matrix A ∈ IR n×m with n < m generates an underdetermined system of linear equations Ax = b having infinitely many solutions. Suppose we seek the sparsest solution, i.e., the one with the fewest nonzero entries: can it ever be unique? If so, when? As optimization of sparsity is combin ..."
Abstract - Cited by 423 (37 self) - Add to MetaCart
A full-rank matrix A ∈ IR n×m with n < m generates an underdetermined system of linear equations Ax = b having infinitely many solutions. Suppose we seek the sparsest solution, i.e., the one with the fewest nonzero entries: can it ever be unique? If so, when? As optimization of sparsity is combinatorial in nature, are there efficient methods for finding the sparsest solution? These questions have been answered positively and constructively in recent years, exposing a wide variety of surprising phenomena; in particular, the existence of easily-verifiable conditions under which optimally-sparse solutions can be found by concrete, effective computational methods. Such theoretical results inspire a bold perspective on some important practical problems in signal and image processing. Several well-known signal and image processing problems can be cast as demanding solutions of undetermined systems of equations. Such problems have previously seemed, to many, intractable. There is considerable evidence that these problems often have sparse solutions. Hence, advances in finding sparse solutions to underdetermined systems energizes research on such signal and image processing problems – to striking effect. In this paper we review the theoretical results on sparse solutions of linear systems, empirical
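
The two optimization problems implicit in this abstract, in their standard forms (notation ours): the combinatorial sparsest-solution problem and its convex relaxation, basis pursuit,

    (P_0)\colon \; \min_x \|x\|_0 \ \text{subject to } Ax = b \qquad (P_1)\colon \; \min_x \|x\|_1 \ \text{subject to } Ax = b

The "easily verifiable conditions" alluded to are those under which the tractable (P_1) provably recovers the (P_0) solution.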

Model selection through sparse maximum likelihood estimation

by Onureena Banerjee, Laurent El Ghaoui, John Lafferty - Journal of Machine Learning Research, 2008
"... We consider the problem of estimating the parameters of a Gaussian or binary distribution in such a way that the resulting undirected graphical model is sparse. Our approach is to solve a maximum likelihood problem with an added ℓ1-norm penalty term. The problem as formulated is convex but the memor ..."
Abstract - Cited by 337 (2 self) - Add to MetaCart
We consider the problem of estimating the parameters of a Gaussian or binary distribution in such a way that the resulting undirected graphical model is sparse. Our approach is to solve a maximum likelihood problem with an added ℓ1-norm penalty term. The problem as formulated is convex but the memory requirements and complexity of existing interior point methods are prohibitive for problems with more than tens of nodes. We present two new algorithms for solving problems with at least a thousand nodes in the Gaussian case. Our first algorithm uses block coordinate descent, and can be interpreted as recursive ℓ1-norm penalized regression. Our second algorithm, based on Nesterov’s first order method, yields a complexity estimate with a better dependence on problem size than existing interior point methods. Using a log determinant relaxation of the log partition function (Wainwright and Jordan, 2006), we show that these same algorithms can be used to solve an approximate sparse maximum likelihood problem for the binary case. We test our algorithms on synthetic data, as well as on gene expression and senate voting records data.
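
In the Gaussian case, the penalized problem described is the standard ℓ1-penalized log-likelihood over the precision matrix (notation ours; S is the empirical covariance):

    \max_{X \succ 0} \; \log\det X - \operatorname{tr}(SX) - \lambda \sum_{i,j} |X_{ij}|

Zeros in the maximizer X correspond directly to missing edges in the estimated undirected graphical model.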

A review of feature selection techniques in bioinformatics

by Yvan Saeys - Bioinformatics, 2007
"... Feature selection techniques have become an apparent need in many bioinformatics applications. In addition to the large pool of techniques that have already been developed in the machine learning and data mining fields, specific applications in bioinformatics have led to a wealth of newly proposed t ..."
Abstract - Cited by 337 (9 self) - Add to MetaCart
Feature selection techniques have become an apparent need in many bioinformatics applications. In addition to the large pool of techniques that have already been developed in the machine learning and data mining fields, specific applications in bioinformatics have led to a wealth of newly proposed techniques. In this paper, we make the interested reader aware of the possibilities of feature selection, providing a basic taxonomy of feature selection techniques, and discussing their use, variety and potential in a number of both common as well as upcoming bioinformatics applications. Companion website:

The Bigraphical Lasso

by Alfredo Kalaitzis, John Lafferty, Neil D. Lawrence, Shuheng Zhou
"... The i.i.d. assumption in machine learning is endemic, but often flawed. Complex data sets exhibit partial correlations between both instances and features. A model specifying both types of correlation can have a number of parameters that scales quadratically with the number of features and data poin ..."
Abstract - Cited by 2 (1 self) - Add to MetaCart
points. We introduce the bigraphical lasso, an estimator for precision matrices of matrix-normals based on the Cartesian product of graphs. A prominent product in spectral graph theory, this structure has appealing properties for regression, enhanced sparsity and interpretability. To deal
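
Under our reading of the abstract (an assumption on our part, not a quotation), the Cartesian-product structure corresponds at the precision-matrix level to a Kronecker sum: for row and column precision matrices Ψ (n × n) and Θ (p × p),

    \Omega = \Psi \oplus \Theta = \Psi \otimes I_p + I_n \otimes \Theta

so the parameter count grows on the order of n^2 + p^2 rather than quadratically in the full n·p dimension.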