Results 1 - 10 of 19,890
For Most Large Underdetermined Systems of Linear Equations the Minimal ℓ1-norm Solution is also the Sparsest Solution
- Comm. Pure Appl. Math
, 2004
"... We consider linear equations y = Φα where y is a given vector in R n, Φ is a given n by m matrix with n < m ≤ An, and we wish to solve for α ∈ R m. We suppose that the columns of Φ are normalized to unit ℓ 2 norm 1 and we place uniform measure on such Φ. We prove the existence of ρ = ρ(A) so that ..."
Abstract - Cited by 568 (10 self)
that for large n, and for all Φ's except a negligible fraction, the following property holds: For every y having a representation y = Φα0 by a coefficient vector α0 ∈ Rm with fewer than ρ · n nonzeros, the solution α1 of the ℓ1 minimization problem min ‖α‖1 subject to Φα = y is unique and equal to α0
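The ℓ1 problem in this abstract, min ‖α‖1 subject to Φα = y, can be solved as a linear program by splitting α into nonnegative parts. The following is a minimal illustrative sketch, not code from the paper; the dimensions, sparsity level, and SciPy solver are assumptions chosen so the example runs quickly.

```python
# Minimal sketch (not the paper's code): solve  min ||alpha||_1  s.t.  Phi @ alpha = y
# as a linear program via the split alpha = u - v with u, v >= 0.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 50, 200, 5                      # n < m; k-sparse ground truth (assumed sizes)
Phi = rng.standard_normal((n, m))
Phi /= np.linalg.norm(Phi, axis=0)        # columns normalized to unit l2 norm
alpha0 = np.zeros(m)
alpha0[rng.choice(m, size=k, replace=False)] = rng.standard_normal(k)
y = Phi @ alpha0

c = np.ones(2 * m)                        # objective: sum(u) + sum(v) = ||alpha||_1
A_eq = np.hstack([Phi, -Phi])             # constraint: Phi @ (u - v) = y
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
alpha1 = res.x[:m] - res.x[m:]
print("l1 solution equals the sparse alpha0:", np.allclose(alpha1, alpha0, atol=1e-6))
```

With unit-norm columns and a coefficient vector this sparse relative to n, the ℓ1 solution typically coincides with α0, which is the phenomenon the paper quantifies.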
Sensitivity computation of the ℓ1 minimization problem and its application to dictionary design
- INVERSE PROBLEMS
, 2009
"... The ℓ 1 minimization problem has been studied extensively in the past few years. Recently, there has been a growing interest in its application for inverse problems. Most studies have concentrated in devising ways for sparse representation of a solution using a given prototype dictionary. Very few ..."
Abstract - Cited by 1 (1 self)
Decoding by Linear Programming
, 2004
"... This paper considers the classical error correcting problem which is frequently discussed in coding theory. We wish to recover an input vector f ∈ Rn from corrupted measurements y = Af + e. Here, A is an m by n (coding) matrix and e is an arbitrary and unknown vector of errors. Is it possible to rec ..."
Abstract - Cited by 1399 (16 self)
to recover f exactly from the data y? We prove that under suitable conditions on the coding matrix A, the input f is the unique solution to the ℓ1-minimization problem (‖x‖ℓ1 := Σi |xi|) min g∈Rn ‖y − Ag‖ℓ1, provided that the support of the vector of errors is not too large, ‖e‖ℓ0 := |{i : ei ≠ 0}| ≤ ρ · m
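The decoding step quoted here, min over g of ‖y − Ag‖ℓ1, is itself a linear program. Below is an illustrative sketch of that reduction, using an auxiliary variable t bounding |y − Ag| componentwise; the matrix sizes, error density, and solver are assumptions, not the paper's experiments.

```python
# Minimal sketch (illustrative, not the paper's experiments): decode f from
# y = A f + e by solving  min_g ||y - A g||_1  as a linear program.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
m, n, k = 120, 40, 10                      # m measurements, n unknowns, k gross errors (assumed)
A = rng.standard_normal((m, n))
f = rng.standard_normal(n)
e = np.zeros(m)
e[rng.choice(m, size=k, replace=False)] = 10 * rng.standard_normal(k)
y = A @ f + e

# variables [g, t]; minimize sum(t) subject to -t <= y - A g <= t
c = np.concatenate([np.zeros(n), np.ones(m)])
I = np.eye(m)
A_ub = np.block([[A, -I], [-A, -I]])
b_ub = np.concatenate([y, -y])
bounds = [(None, None)] * n + [(0, None)] * m
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
g_hat = res.x[:n]
print("f recovered exactly:", np.allclose(g_hat, f, atol=1e-6))
```

When the error support stays below the paper's threshold ρ · m, the ℓ1 decoder returns f exactly despite the grossly corrupted entries.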
Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems
- IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING
, 2007
"... Many problems in signal processing and statistical inference involve finding sparse solutions to under-determined, or ill-conditioned, linear systems of equations. A standard approach consists in minimizing an objective function which includes a quadratic (squared ℓ2) error term combined with a spa ..."
Abstract - Cited by 539 (17 self)
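As a companion to this abstract, here is a deliberately simplified sketch of gradient projection applied to the ℓ2-plus-sparsity objective 0.5‖y − Ax‖² + τ‖x‖1, using the standard split x = u − v with u, v ≥ 0. It uses a fixed step size rather than the line-search and Barzilai-Borwein step rules of the paper's GPSR variants, and the problem sizes and τ are assumptions.

```python
# Minimal fixed-step sketch (not the paper's GPSR-Basic/GPSR-BB): minimize
# 0.5*||y - A x||^2 + tau*||x||_1 by projected gradient on the split x = u - v, u, v >= 0.
import numpy as np

def gradient_projection_l1(A, y, tau, iters=500):
    n = A.shape[1]
    step = 0.5 / np.linalg.norm(A, 2) ** 2     # conservative: the split problem's Hessian has norm 2*||A||^2
    u = np.zeros(n)
    v = np.zeros(n)
    for _ in range(iters):
        g = A.T @ (A @ (u - v) - y)            # gradient of the quadratic term w.r.t. x = u - v
        u = np.maximum(0.0, u - step * (g + tau))    # projected gradient step on u
        v = np.maximum(0.0, v - step * (-g + tau))   # projected gradient step on v
    return u - v

# toy usage: recover a 3-sparse vector from 60 noisy random measurements
rng = np.random.default_rng(2)
A = rng.standard_normal((60, 200))
x_true = np.zeros(200)
x_true[[3, 50, 120]] = [1.5, -2.0, 1.0]
y = A @ x_true + 0.01 * rng.standard_normal(60)
x_hat = gradient_projection_l1(A, y, tau=0.1)
print("largest recovered entries at indices:", np.sort(np.argsort(-np.abs(x_hat))[:3]))
```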
The X-tree: An index structure for high-dimensional data
- In Proceedings of the Int’l Conference on Very Large Data Bases
, 1996
"... In this paper, we propose a new method for index-ing large amounts of point and spatial data in high-dimensional space. An analysis shows that index structures such as the R*-tree are not adequate for indexing high-dimensional data sets. The major problem of R-tree-based index structures is the over ..."
Abstract - Cited by 592 (17 self)
is the overlap of the bounding boxes in the directory, which increases with growing dimension. To avoid this problem, we introduce a new organization of the directory which uses a split algorithm minimizing overlap and additionally utilizes the concept of supernodes. The basic idea of overlap-minimizing split
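To make the overlap-minimizing split and supernode ideas concrete, here is a small illustrative sketch with hypothetical helper names, not the authors' algorithm: it tries a median split of a directory node along one axis, measures how much the two resulting bounding boxes would overlap, and keeps a supernode when that overlap exceeds an assumed threshold.

```python
# Illustrative sketch only (hypothetical helpers, not the X-tree implementation):
# try a directory split and keep a supernode when the resulting boxes overlap too much.
import numpy as np

def mbr(boxes):
    """Minimum bounding box of a list of (lo, hi) corner pairs."""
    return (np.min([lo for lo, _ in boxes], axis=0),
            np.max([hi for _, hi in boxes], axis=0))

def overlap_volume(a, b):
    """Volume of the intersection of boxes a = (lo, hi) and b = (lo, hi)."""
    lo = np.maximum(a[0], b[0])
    hi = np.minimum(a[1], b[1])
    return float(np.prod(np.maximum(0.0, hi - lo)))

def split_or_supernode(boxes, axis, max_overlap=0.2):
    """Median split along one axis; keep a supernode if the overlap ratio is too high."""
    order = sorted(range(len(boxes)), key=lambda i: boxes[i][0][axis])
    half = len(order) // 2
    left = [boxes[i] for i in order[:half]]
    right = [boxes[i] for i in order[half:]]
    lo, hi = mbr(boxes)
    ratio = overlap_volume(mbr(left), mbr(right)) / max(float(np.prod(hi - lo)), 1e-12)
    return ("split", left, right) if ratio <= max_overlap else ("supernode", boxes)

# toy usage: 20 small boxes in 8 dimensions
rng = np.random.default_rng(3)
boxes = [(p, p + 0.1) for p in rng.random((20, 8))]
print(split_or_supernode(boxes, axis=0)[0])
```

Because the directory boxes overlap along every unsplit dimension, the intersection volume grows with dimensionality, which is the effect the supernode fallback is meant to absorb.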
Applications Of Circumscription To Formalizing Common Sense Knowledge
- Artificial Intelligence
, 1986
"... We present a new and more symmetric version of the circumscription method of nonmonotonic reasoning first described in (McCarthy 1980) and some applications to formalizing common sense knowledge. The applications in this paper are mostly based on minimizing the abnormality of different aspects o ..."
Abstract - Cited by 532 (12 self)
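To illustrate the abnormality-minimization idea in the most elementary possible setting, here is a toy propositional example (far smaller in scope than the paper's circumscription schemas): circumscribing a single abnormality atom ab over the models of "bird" and "a bird that is not abnormal flies" leaves only models in which the bird flies.

```python
# Toy illustration (not the paper's formalism): minimize the abnormality atom
# over the propositional models of two axioms, then read off the default conclusion.
from itertools import product

ATOMS = ("bird", "ab", "flies")

def satisfies(m):
    # Axioms: bird;  bird & not ab -> flies  (written as: not bird or ab or flies)
    return m["bird"] and (not m["bird"] or m["ab"] or m["flies"])

models = [dict(zip(ATOMS, values))
          for values in product([False, True], repeat=len(ATOMS))
          if satisfies(dict(zip(ATOMS, values)))]

# Circumscribe ab: keep only the models whose abnormality is minimal.
minimal = [m for m in models if not any(other["ab"] < m["ab"] for other in models)]

# Every minimal model makes the bird fly, so the default conclusion "flies" is sanctioned.
print(all(m["flies"] for m in minimal))   # True
```

All atoms other than ab are allowed to vary here, which is what lets the default conclusion go through in this toy.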
BIRCH: an efficient data clustering method for very large databases
- In Proc. of the ACM SIGMOD Intl. Conference on Management of Data (SIGMOD)
, 1996
"... Finding useful patterns in large datasets has attracted considerable interest recently, and one of the most widely st,udied problems in this area is the identification of clusters, or deusel y populated regions, in a multi-dir nensional clataset. Prior work does not adequately address the problem of ..."
Abstract - Cited by 576 (2 self)
of large datasets and minimization of I/O costs. This paper presents a data clustering method named BIRCH (Balanced Iterative Reducing and Clustering using Hierarchies), and demonstrates that it is especially suitable for very large databases. BIRCH incrementally and dynamically clusters incoming
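The single-pass, incremental behaviour rests on the clustering feature CF = (N, LS, SS), the count, linear sum, and sum of squared norms of the points in a subcluster, which is additive under merging. The sketch below is an illustration of that summary structure, not the authors' CF-tree code; the toy blob and its parameters are assumptions.

```python
# Minimal sketch of the BIRCH clustering feature (illustration only, not the CF-tree code):
# CF = (N, LS, SS) is additive, so points can be absorbed incrementally in a single pass.
import numpy as np

class ClusteringFeature:
    def __init__(self, dim):
        self.n = 0                      # number of points
        self.ls = np.zeros(dim)         # linear sum of the points
        self.ss = 0.0                   # sum of squared norms of the points

    def add_point(self, x):
        self.n += 1
        self.ls += x
        self.ss += float(x @ x)

    def merge(self, other):
        self.n += other.n
        self.ls += other.ls
        self.ss += other.ss

    def centroid(self):
        return self.ls / self.n

    def radius(self):
        # root-mean-square distance of the member points from the centroid
        c = self.centroid()
        return float(np.sqrt(max(self.ss / self.n - c @ c, 0.0)))

# usage: absorb a small blob of points and read off its summary
rng = np.random.default_rng(4)
cf = ClusteringFeature(dim=2)
for x in rng.normal(loc=[5.0, -1.0], scale=0.3, size=(100, 2)):
    cf.add_point(x)
print(cf.n, np.round(cf.centroid(), 2), round(cf.radius(), 2))
```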
An iterative image registration technique with an application to stereo vision
- In IJCAI81
, 1981
"... Image registration finds a variety of applications in computer vision. Unfortunately, traditional image registration techniques tend to be costly. We present a new image registration technique that makes use of the spatial intensity gradient of the images to find a good match using a type of Newton- ..."
Abstract - Cited by 2897 (30 self)
The translational image registration problem can be characterized as follows: We are given functions F(x) and G(x) which give the respective pixel values at each location x in two images, where x is a vector. We wish to find the disparity vector h which minimizes some measure
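The Newton-type iteration the paper builds on can be shown in one dimension: linearize F(x + h) around the current estimate, solve a least-squares step for the increment using the intensity gradient, and repeat. The sketch below is a one-dimensional toy of that idea (the paper treats the general vector-valued case); the Gaussian test signals and iteration count are assumptions.

```python
# One-dimensional toy of the gradient-based registration step (illustration only):
# estimate the shift h such that F(x + h) best matches G(x).
import numpy as np

def estimate_shift(F, G, iters=10):
    x = np.arange(len(F), dtype=float)
    h = 0.0
    for _ in range(iters):
        Fx = np.interp(x + h, x, F)            # F sampled at the shifted positions
        dF = np.gradient(Fx)                   # spatial intensity gradient
        # least-squares Newton step:  dh = sum(dF * (G - F(x+h))) / sum(dF^2)
        h += float(np.sum(dF * (G - Fx)) / np.sum(dF * dF))
    return h

# usage: G is the same smooth bump as F, shifted by 3 pixels, so F(x + 3) ~ G(x)
x = np.arange(200, dtype=float)
F = np.exp(-((x - 100.0) ** 2) / 200.0)
G = np.exp(-((x - 97.0) ** 2) / 200.0)
print(round(estimate_shift(F, G), 2))          # close to 3.0
```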
Uncertainty principles and ideal atomic decomposition
- IEEE Transactions on Information Theory
, 2001
"... Suppose a discrete-time signal S(t), 0 t<N, is a superposition of atoms taken from a combined time/frequency dictionary made of spike sequences 1ft = g and sinusoids expf2 iwt=N) = p N. Can one recover, from knowledge of S alone, the precise collection of atoms going to make up S? Because every d ..."
Abstract - Cited by 583 (20 self)
/frequency dictionary, then there is only one such highly sparse representation of S, and it can be obtained by solving the convex optimization problem of minimizing the ℓ1 norm of the coefficients among all decompositions. Here “highly sparse” means that Nt + Nw < √N/2, where Nt is the number of time atoms, Nw
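A quick numeric check of the ingredient behind the Nt + Nw < √N/2 condition: in the combined spike/Fourier dictionary on N samples, every spike and every sinusoid have inner product of magnitude exactly 1/√N. The snippet only illustrates that incoherence; it does not reproduce the paper's argument.

```python
# Small numeric check (illustration only): the combined spike/Fourier dictionary on N
# points has mutual coherence 1/sqrt(N), the quantity behind the sparsity condition above.
import numpy as np

N = 64
spikes = np.eye(N)                                                # time atoms
k = np.arange(N)
sinusoids = np.exp(2j * np.pi * np.outer(k, k) / N) / np.sqrt(N)  # unit-norm frequency atoms

coherence = np.max(np.abs(spikes.conj().T @ sinusoids))
print(coherence, 1 / np.sqrt(N))                                  # both approximately 0.125
```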
On active contour models and balloons
- CVGIP: Image
"... The use.of energy-minimizing curves, known as “snakes, ” to extract features of interest in images has been introduced by Kass, Witkhr & Terzopoulos (Znt. J. Comput. Vision 1, 1987,321-331). We present a model of deformation which solves some of the problems encountered with the original method. ..."
Abstract - Cited by 588 (43 self)
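To make the energy-minimizing deformation and the balloon force concrete, here is a heavily simplified sketch: explicit gradient descent on a closed polygonal snake with elasticity, rigidity, an outward balloon term, and an external force derived from a synthetic potential. The paper's own numerical scheme and image-based forces differ; every parameter and the target potential here are assumptions.

```python
# Heavily simplified sketch (not the paper's scheme): explicit gradient descent on a
# closed snake with elasticity, rigidity, a balloon inflation force, and an external
# force pulling the curve toward the minimum of a synthetic potential.
import numpy as np

def evolve_snake(pts, external_grad, alpha=0.2, beta=0.1, balloon=0.5, step=0.05, iters=400):
    """pts: (n, 2) closed contour; external_grad(pts) -> (n, 2) gradient of the potential."""
    for _ in range(iters):
        d2 = np.roll(pts, -1, axis=0) - 2 * pts + np.roll(pts, 1, axis=0)   # elasticity term
        d4 = np.roll(d2, -1, axis=0) - 2 * d2 + np.roll(d2, 1, axis=0)      # rigidity term
        tangent = np.roll(pts, -1, axis=0) - np.roll(pts, 1, axis=0)
        normal = np.stack([tangent[:, 1], -tangent[:, 0]], axis=1)          # outward for a CCW curve
        normal /= np.linalg.norm(normal, axis=1, keepdims=True) + 1e-12
        pts = pts + step * (alpha * d2 - beta * d4 + balloon * normal - external_grad(pts))
    return pts

def external_grad(pts, radius=3.0):
    """Gradient of 5*(||p|| - radius)^2: a potential whose minimum is a circle."""
    r = np.linalg.norm(pts, axis=1, keepdims=True) + 1e-12
    return 10.0 * (r - radius) * (pts / r)

# usage: a small initial contour inflates until it locks onto the target circle
theta = np.linspace(0.0, 2.0 * np.pi, 60, endpoint=False)
snake = 0.5 * np.stack([np.cos(theta), np.sin(theta)], axis=1)
final = evolve_snake(snake, external_grad)
print(round(float(np.mean(np.linalg.norm(final, axis=1))), 2))   # close to the target radius 3
```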