CiteSeerX

Results 1 - 10 of 1,264

From Sparse Solutions of Systems of Equations to Sparse Modeling of Signals and Images

by Alfred M. Bruckstein, David L. Donoho, Michael Elad, 2007
"... A full-rank matrix A ∈ IR n×m with n < m generates an underdetermined system of linear equations Ax = b having infinitely many solutions. Suppose we seek the sparsest solution, i.e., the one with the fewest nonzero entries: can it ever be unique? If so, when? As optimization of sparsity is combin ..."
Abstract - Cited by 427 (36 self) - Add to MetaCart
is combinatorial in nature, are there efficient methods for finding the sparsest solution? These questions have been answered positively and constructively in recent years, exposing a wide variety of surprising phenomena; in particular, the existence of easily-verifiable conditions under which optimally-sparse
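
One family of efficient methods alluded to here is convex relaxation: replace the combinatorial count of nonzeros with the ℓ1 norm (basis pursuit), which turns the search for a sparse solution of Ax = b into a linear program. A minimal sketch, assuming NumPy/SciPy; the split x = u − v with u, v ≥ 0 is a standard LP encoding, not notation from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    """min ||x||_1 subject to Ax = b, via the split x = u - v with u, v >= 0."""
    n, m = A.shape
    c = np.ones(2 * m)                  # objective: sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([A, -A])           # A @ (u - v) = b
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None))
    u, v = res.x[:m], res.x[m:]
    return u - v

# toy run: with enough random measurements, the sparse vector is
# typically recovered exactly (here a 1-sparse x in 20 coordinates)
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 20))
x_true = np.zeros(20)
x_true[3] = 2.0
print(np.round(basis_pursuit(A, A @ x_true), 3))
```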

Metrics and norms used for obtaining sparse solutions to underdetermined Systems of Linear Equations

by Leoni Dalla, George K. Papageorgiou, 2014
"... This paper focuses on defining a measure, appropriate for obtaining optimally sparse solutions to underdetermined systems of linear equations.1 The general idea is the extension of metrics in n-dimensional spaces via the Cartesian product of metric spaces. 1 ..."
Abstract - Add to MetaCart
This paper focuses on defining a measure, appropriate for obtaining optimally sparse solutions to underdetermined systems of linear equations.1 The general idea is the extension of metrics in n-dimensional spaces via the Cartesian product of metric spaces. 1

Quantitative Robust Uncertainty Principles and Optimally Sparse Decompositions

by Emmanuel J. Candès, Justin Romberg, 2004
"... In this paper, we develop a robust uncertainty principle for finite signals in C N which states that for nearly all choices T, Ω ⊂ {0,..., N − 1} such that |T | + |Ω | ≍ (log N) −1/2 · N, there is no signal f supported on T whose discrete Fourier transform ˆ f is supported on Ω. In fact, we can mak ..."
Abstract - Cited by 181 (17 self) - Add to MetaCart
on finding the correct uncertainty relation or the optimally sparse solution for nearly all subsets but not necessarily all of them, and allows to considerably sharpen previously known results [9, 10]. In fact, we show that the fraction of sets (T, Ω) for which the above properties do not hold can be upper

Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties

by Jianqing Fan, Runze Li, 2001
"... Variable selection is fundamental to high-dimensional statistical modeling, including nonparametric regression. Many approaches in use are stepwise selection procedures, which can be computationally expensive and ignore stochastic errors in the variable selection process. In this article, penalized ..."
Abstract - Cited by 948 (62 self)
... functions are symmetric, nonconcave on (0, ∞), and have singularities at the origin to produce sparse solutions. Furthermore, the penalty functions should be bounded by a constant to reduce bias and satisfy certain conditions to yield continuous solutions. A new algorithm is proposed for optimizing ...
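
The penalty family proposed in this paper is SCAD. As a concrete illustration, here is its derivative in the form commonly quoted from the paper (an assumption worth checking against the original; a = 3.7 is the paper's suggested default). The constant slope λ near the origin produces sparsity, while the zero slope beyond aλ bounds the penalty and reduces bias on large coefficients.

```python
import numpy as np

def scad_deriv(theta, lam, a=3.7):
    """Derivative p'_lam(theta) of the SCAD penalty for theta >= 0:
      p'(theta) = lam                          for theta <= lam
      p'(theta) = (a*lam - theta)_+ / (a - 1)  for theta >  lam
    """
    theta = np.asarray(theta, dtype=float)
    return np.where(theta <= lam, lam,
                    np.maximum(a * lam - theta, 0.0) / (a - 1))

# slope lam near 0, tapering, then exactly 0 past a*lam
print(scad_deriv([0.1, 1.0, 5.0], lam=0.5))
```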

A Singular Value Thresholding Algorithm for Matrix Completion

by Jian-Feng Cai, Emmanuel J. Candès, Zuowei Shen, 2008
"... This paper introduces a novel algorithm to approximate the matrix with minimum nuclear norm among all matrices obeying a set of convex constraints. This problem may be understood as the convex relaxation of a rank minimization problem, and arises in many important applications as in the task of reco ..."
Abstract - Cited by 555 (22 self) - Add to MetaCart
-to-implement algorithm that is extremely efficient at addressing problems in which the optimal solution has low rank. The algorithm is iterative and produces a sequence of matrices {X k, Y k} and at each step, mainly performs a soft-thresholding operation on the singular values of the matrix Y k. There are two
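
The step the excerpt describes, soft-thresholding the singular values of Y^k, is short enough to write out. A minimal sketch of an SVT-style loop for matrix completion, assuming NumPy, a boolean mask of observed entries, and caller-chosen threshold tau and step size delta; the published algorithm also specifies parameter choices and a stopping rule that this sketch omits.

```python
import numpy as np

def shrink(Y, tau):
    """D_tau: soft-threshold the singular values of Y at level tau."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def svt_complete(M, mask, tau, delta, n_iters=300):
    """Minimal SVT-style loop: M is observed on the boolean `mask` only."""
    Y = np.zeros_like(M)
    for _ in range(n_iters):
        X = shrink(Y, tau)              # X^k from the singular values of Y^{k-1}
        Y = Y + delta * mask * (M - X)  # gradient step on the observed residual
    return X
```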

Decoding by Linear Programming

by Emmanuel J. Candès, Terence Tao, 2004
"... This paper considers the classical error correcting problem which is frequently discussed in coding theory. We wish to recover an input vector f ∈ Rn from corrupted measurements y = Af + e. Here, A is an m by n (coding) matrix and e is an arbitrary and unknown vector of errors. Is it possible to rec ..."
Abstract - Cited by 1399 (16 self) - Add to MetaCart
fraction of the output is corrupted. This work is related to the problem of finding sparse solutions to vastly underdetermined systems of linear equations. There are also significant connections with the problem of recovering signals from highly incomplete measurements. In fact, the results introduced
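
The decoder studied in this line of work minimizes the ℓ1 norm of the residual, min_g ||y − Ag||_1, which is again a linear program. A hedged sketch with SciPy; the auxiliary variables t bounding |y − Ag| componentwise are a standard LP encoding, not the paper's notation.

```python
import numpy as np
from scipy.optimize import linprog

def l1_decode(A, y):
    """Recover f from y = A f + e by solving min_g ||y - A g||_1 as an LP."""
    m, n = A.shape
    # variables z = [g (free, length n); t (>= 0, length m)]; minimize sum(t)
    c = np.concatenate([np.zeros(n), np.ones(m)])
    # the pair  A g - t <= y  and  -A g - t <= -y  encodes |y - A g| <= t
    A_ub = np.block([[A, -np.eye(m)], [-A, -np.eye(m)]])
    b_ub = np.concatenate([y, -y])
    bounds = [(None, None)] * n + [(0, None)] * m
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[:n]
```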

Laplacian eigenmaps and spectral techniques for embedding and clustering

by Mikhail Belkin, Partha Niyogi - Proceedings of Neural Information Processing Systems, 2001
"... Drawing on the correspondence between the graph Laplacian, the Laplace-Beltrami operator on a manifold, and the connections to the heat equation, we propose a geometrically motivated algorithm for constructing a representation for data sampled from a low dimensional manifold embedded in ..."
Abstract - Cited by 668 (7 self)
... of the manifold on which the data may possibly reside. Recently, there has been some interest (Tenenbaum et al., 2000; ...). The core algorithm is very simple, has a few local computations and one sparse eigenvalue problem. The solution reflects the intrinsic geometric structure of the manifold. The justification ...
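
The "few local computations and one sparse eigenvalue problem" can be sketched directly: build a k-nearest-neighbor graph with heat-kernel weights, form the Laplacian L = D − W, and take the bottom generalized eigenvectors. A dense toy version assuming NumPy/SciPy; the parameter names k, t, dim are mine, and a real implementation would use sparse matrices and eigsh.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import cdist

def laplacian_eigenmaps(X, k=10, t=1.0, dim=2):
    """Embed rows of X using the bottom generalized eigenvectors of L = D - W."""
    D2 = cdist(X, X, "sqeuclidean")
    nn = np.argsort(D2, axis=1)[:, 1:k + 1]            # k nearest neighbors (skip self)
    W = np.zeros_like(D2)
    rows = np.repeat(np.arange(len(X)), k)
    W[rows, nn.ravel()] = np.exp(-D2[rows, nn.ravel()] / t)  # heat-kernel weights
    W = np.maximum(W, W.T)                             # symmetrize the graph
    D = np.diag(W.sum(axis=1))
    L = D - W                                          # unnormalized graph Laplacian
    _, vecs = eigh(L, D)                               # solve L y = lambda D y
    return vecs[:, 1:dim + 1]                          # drop the constant eigenvector
```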

Benchmarking Least Squares Support Vector Machine Classifiers

by Tony Van Gestel, Johan A. K. Suykens, Bart Baesens, Stijn Viaene, Jan Vanthienen, Guido Dedene, Bart De Moor, Joos Vandewalle - Neural Processing Letters, 2001
"... In Support Vector Machines (SVMs), the solution of the classification problem is characterized by a (convex) quadratic programming (QP) problem. In a modified version of SVMs, called Least Squares SVM classifiers (LS-SVMs), a least squares cost function is proposed so as to obtain a linear set of eq ..."
Abstract - Cited by 476 (46 self) - Add to MetaCart
stage by gradually pruning the support value spectrum and optimizing the hyperparameters during the sparse approximation procedure. In this paper, twenty public domain benchmark datasets are used to evaluate the test set performance of LS-SVM classifiers with linear, polynomial and radial basis function
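
The "linear set of equations" that replaces the SVM QP is small enough to show. A sketch assuming the commonly cited LS-SVM dual form (Ω_kl = y_k y_l K(x_k, x_l), regularization constant γ); treat the exact system as an assumption to verify against the paper.

```python
import numpy as np

def lssvm_train(K, y, gamma):
    """Solve the LS-SVM dual: one linear system instead of the SVM QP.
        [ 0   y^T              ] [b]   [0]
        [ y   Omega + I/gamma  ] [a] = [1]
    with Omega = (y y^T) * K elementwise (assumed standard form).
    """
    n = len(y)
    M = np.zeros((n + 1, n + 1))
    M[0, 1:] = y
    M[1:, 0] = y
    M[1:, 1:] = np.outer(y, y) * K + np.eye(n) / gamma
    sol = np.linalg.solve(M, np.concatenate([[0.0], np.ones(n)]))
    return sol[0], sol[1:]   # bias b and support values alpha

# prediction on a new point x: sign(sum_k alpha_k * y_k * K(x_k, x) + b)
```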

Sequential minimal optimization: A fast algorithm for training support vector machines

by John C. Platt - Advances in Kernel Methods - Support Vector Learning, 1999
"... This paper proposes a new algorithm for training support vector machines: Sequential Minimal Optimization, or SMO. Training a support vector machine requires the solution of a very large quadratic programming (QP) optimization problem. SMO breaks this large QP problem into a series of smallest possi ..."
Abstract - Cited by 461 (3 self) - Add to MetaCart
This paper proposes a new algorithm for training support vector machines: Sequential Minimal Optimization, or SMO. Training a support vector machine requires the solution of a very large quadratic programming (QP) optimization problem. SMO breaks this large QP problem into a series of smallest
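
The "smallest possible" QP subproblems involve just two Lagrange multipliers, which SMO solves analytically. A sketch of that two-variable update in the clipped form given in Platt's paper; the heuristics for picking the pair, the threshold update, and the degenerate eta <= 0 case are omitted here.

```python
def smo_pair_step(a1, a2, y1, y2, E1, E2, K11, K22, K12, C):
    """Analytic solution of SMO's two-multiplier subproblem (clipped update).
    E_i are prediction errors f(x_i) - y_i; labels y_i are in {-1, +1}."""
    eta = K11 + K22 - 2.0 * K12             # curvature along the constraint line
    if eta <= 0:                            # degenerate pair: skip in this sketch
        return a1, a2
    if y1 != y2:                            # feasible segment of y1*a1 + y2*a2 = const
        L, H = max(0.0, a2 - a1), min(C, C + a2 - a1)
    else:
        L, H = max(0.0, a1 + a2 - C), min(C, a1 + a2)
    a2_new = min(max(a2 + y2 * (E1 - E2) / eta, L), H)  # unconstrained optimum, clipped
    a1_new = a1 + y1 * y2 * (a2 - a2_new)   # keep the linear constraint satisfied
    return a1_new, a2_new
```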

Greedy sparsity-constrained optimization

by Sohail Bahmani, Petros Boufounos, Bhiksha Raj - in Conference Record of the Forty-Fifth Asilomar Conference on Signals, Systems and Computers (ASILOMAR), IEEE, 2011
"... Abstract—Finding optimal sparse solutions to estimation problems, particularly in underdetermined regimes has recently gained much attention. Most existing literature study linear models in which the squared error is used as the measure of discrepancy to be minimized. However, in many applications d ..."
Abstract - Cited by 20 (4 self) - Add to MetaCart
Abstract—Finding optimal sparse solutions to estimation problems, particularly in underdetermined regimes has recently gained much attention. Most existing literature study linear models in which the squared error is used as the measure of discrepancy to be minimized. However, in many applications
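
For cost functions beyond squared error, the simplest greedy baseline is projected gradient descent onto the sparsity constraint (iterative hard thresholding). A sketch of that baseline for orientation only; the GraSP algorithm this paper proposes uses a richer support-selection step than shown here.

```python
import numpy as np

def iht(grad, x0, s, step=0.1, n_iters=200):
    """Greedy baseline for min f(x) s.t. ||x||_0 <= s, given grad(x) = f'(x):
    take a gradient step on the smooth cost, then hard-threshold to the
    s largest-magnitude entries (projection onto the sparsity constraint)."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(n_iters):
        x = x - step * grad(x)                 # gradient step on the smooth cost
        keep = np.argsort(np.abs(x))[-s:]      # indices of the s largest magnitudes
        pruned = np.zeros_like(x)
        pruned[keep] = x[keep]                 # zero out everything else
        x = pruned
    return x
```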