Results 1–10 of 313
Tensor Decompositions and Applications
SIAM Review, 2009
"... This survey provides an overview of higherorder tensor decompositions, their applications, and available software. A tensor is a multidimensional or N way array. Decompositions of higherorder tensors (i.e., N way arrays with N â¥ 3) have applications in psychometrics, chemometrics, signal proce ..."
Abstract

Cited by 723 (18 self)
This survey provides an overview of higher-order tensor decompositions, their applications, and available software. A tensor is a multidimensional or N-way array. Decompositions of higher-order tensors (i.e., N-way arrays with N ≥ 3) have applications in psychometrics, chemometrics, signal processing, numerical linear algebra, computer vision, numerical analysis, data mining, neuroscience, graph analysis, etc. Two particular tensor decompositions can be considered to be higher-order extensions of the matrix singular value decomposition: CANDECOMP/PARAFAC (CP) decomposes a tensor as a sum of rank-one tensors, and the Tucker decomposition is a higher-order form of principal components analysis. There are many other tensor decompositions, including INDSCAL, PARAFAC2, CANDELINC, DEDICOM, and PARATUCK2, as well as nonnegative variants of all of the above. The N-way Toolbox and Tensor Toolbox, both for MATLAB, and the Multilinear Engine are examples of software packages for working with tensors.
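As a concrete illustration of the CP model described in this abstract, the following minimal NumPy sketch (the names `cp_reconstruct`, `A`, `B`, `C` are hypothetical, not from the toolboxes mentioned) rebuilds a 3-way tensor as a sum of rank-one outer products:

```python
import numpy as np

def cp_reconstruct(A, B, C):
    """Rebuild a 3-way tensor from CP factor matrices.

    Each column triple (A[:, r], B[:, r], C[:, r]) defines one
    rank-one tensor; the CP model is their sum.
    """
    R = A.shape[1]
    T = np.zeros((A.shape[0], B.shape[0], C.shape[0]))
    for r in range(R):
        # Outer product of three vectors -> one rank-one tensor.
        T += np.einsum('i,j,k->ijk', A[:, r], B[:, r], C[:, r])
    return T

rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((n, 2)) for n in (4, 5, 6))
T = cp_reconstruct(A, B, C)   # a 4 x 5 x 6 tensor of CP rank at most 2
```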
Stable recovery of sparse overcomplete representations in the presence of noise
IEEE Trans. Inform. Theory, 2006
"... Overcomplete representations are attracting interest in signal processing theory, particularly due to their potential to generate sparse representations of signals. However, in general, the problem of finding sparse representations must be unstable in the presence of noise. This paper establishes t ..."
Abstract

Cited by 460 (22 self)
Overcomplete representations are attracting interest in signal processing theory, particularly due to their potential to generate sparse representations of signals. However, in general, the problem of finding sparse representations must be unstable in the presence of noise. This paper establishes the possibility of stable recovery under a combination of sufficient sparsity and favorable structure of the overcomplete system. Considering an ideal underlying signal that has a sufficiently sparse representation, it is assumed that only a noisy version of it can be observed. Assuming further that the overcomplete system is incoherent, it is shown that the optimally sparse approximation to the noisy data differs from the optimally sparse decomposition of the ideal noiseless signal by at most a constant multiple of the noise level. As this optimal-sparsity method requires heavy (combinatorial) computational effort, approximation algorithms are considered. It is shown that similar stability is also available using the basis pursuit and matching pursuit algorithms. Furthermore, it is shown that these methods result in sparse approximations of the noisy data that contain only terms also appearing in the unique sparsest representation of the ideal noiseless sparse signal.
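The matching pursuit approach mentioned in this abstract can be sketched in a few lines of NumPy. This is a generic, illustrative orthogonal matching pursuit (function and variable names are hypothetical), not the paper's exact procedure:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedily pick k atoms of the
    dictionary D that best explain y, re-solving a least-squares
    fit over the chosen atoms at every step."""
    residual, support = y.copy(), []
    for _ in range(k):
        # Atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(1)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)                    # unit-norm atoms
x_true = np.zeros(50); x_true[[3, 17]] = [1.0, -2.0]
y = D @ x_true + 0.01 * rng.standard_normal(20)   # noisy observation
x_hat = omp(D, y, k=2)
```

Because the fit is re-solved over the whole selected support at each step, the residual stays orthogonal to every chosen atom, which is what distinguishes OMP from plain matching pursuit.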
From Sparse Solutions of Systems of Equations to Sparse Modeling of Signals and Images
2007
"... A fullrank matrix A ∈ IR n×m with n < m generates an underdetermined system of linear equations Ax = b having infinitely many solutions. Suppose we seek the sparsest solution, i.e., the one with the fewest nonzero entries: can it ever be unique? If so, when? As optimization of sparsity is combin ..."
Abstract

Cited by 427 (36 self)
A full-rank matrix A ∈ ℝ^{n×m} with n < m generates an underdetermined system of linear equations Ax = b having infinitely many solutions. Suppose we seek the sparsest solution, i.e., the one with the fewest nonzero entries: can it ever be unique? If so, when? As optimization of sparsity is combinatorial in nature, are there efficient methods for finding the sparsest solution? These questions have been answered positively and constructively in recent years, exposing a wide variety of surprising phenomena, in particular the existence of easily verifiable conditions under which optimally sparse solutions can be found by concrete, effective computational methods. Such theoretical results inspire a bold perspective on some important practical problems in signal and image processing. Several well-known signal and image processing problems can be cast as demanding solutions of underdetermined systems of equations. Such problems have previously seemed, to many, intractable. There is considerable evidence that these problems often have sparse solutions. Hence, advances in finding sparse solutions to underdetermined systems energize research on such signal and image processing problems, to striking effect. In this paper we review the theoretical results on sparse solutions of linear systems, empirical ...
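One of the "concrete, effective computational methods" this line of work studies is ℓ1 minimization (basis pursuit), which can be posed as a linear program. A minimal sketch assuming NumPy and SciPy (the name `basis_pursuit` and the test instance are illustrative):

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    """Solve min ||x||_1 subject to Ax = b as a linear program
    by splitting x into nonnegative parts, x = u - v."""
    n, m = A.shape
    c = np.ones(2 * m)                 # objective: sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([A, -A])          # equality constraint: A u - A v = b
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * m))
    u, v = res.x[:m], res.x[m:]
    return u - v

rng = np.random.default_rng(2)
A = rng.standard_normal((10, 30))      # underdetermined: 10 equations, 30 unknowns
x_true = np.zeros(30); x_true[[4, 21]] = [3.0, -1.5]
b = A @ x_true                         # b has a 2-sparse explanation
x_hat = basis_pursuit(A, b)
```

Since x_true is itself feasible, the LP solution is guaranteed to have ℓ1 norm no larger than that of x_true; under the incoherence conditions reviewed in the paper it in fact recovers the sparsest solution exactly.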
Theoretical results on sparse representations of multiple-measurement vectors
IEEE Trans. Signal Process., 2006
"... Abstract — Multiple measurement vector (MMV) is a relatively new problem in sparse representations. Efficient methods have been proposed. Considering many theoretical results that are available in a simple case – single measure vector (SMV) – the theoretical analysis regarding MMV is lacking. In th ..."
Abstract

Cited by 147 (2 self)
The multiple measurement vector (MMV) problem is relatively new in sparse representations, and efficient methods for it have been proposed. While many theoretical results are available for the simpler single measurement vector (SMV) case, theoretical analysis of MMV is lacking. In this paper, some known results for SMV are generalized to MMV. Some of these new results take advantage of the additional information present in the MMV formulation. We consider uniqueness under both an ℓ0-norm-like criterion and an ℓ1-norm-like criterion. The consequent equivalence between the ℓ0-norm approach and the ℓ1-norm approach indicates a computationally efficient way of finding the sparsest representation in an overcomplete dictionary. For greedy algorithms, it is proven that under certain conditions, orthogonal matching pursuit (OMP) can find the sparsest representation of an MMV with computational efficiency, just as in SMV. Simulations show that the predictions made by the proved theorems tend to be very conservative; this is consistent with some recent theoretical advances in probability. These connections will be discussed.
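A common MMV extension of OMP (often called simultaneous OMP; this NumPy sketch assumes that variant, with hypothetical names, rather than the paper's exact algorithm) ranks atoms by their total correlation with the residuals of all measurement vectors at once:

```python
import numpy as np

def somp(D, Y, k):
    """Simultaneous OMP: the columns of Y are assumed to share one
    sparse support, so each atom is scored by its correlation with
    the residuals of all measurement vectors jointly."""
    R, support = Y.copy(), []
    for _ in range(k):
        scores = np.linalg.norm(D.T @ R, axis=1)   # one score per atom
        support.append(int(np.argmax(scores)))
        coef, *_ = np.linalg.lstsq(D[:, support], Y, rcond=None)
        R = Y - D[:, support] @ coef               # joint residual
    X = np.zeros((D.shape[1], Y.shape[1]))
    X[support] = coef
    return X

rng = np.random.default_rng(5)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)                     # unit-norm atoms
X_true = np.zeros((50, 4))
X_true[[3, 17]] = rng.standard_normal((2, 4))      # shared 2-sparse support
Y = D @ X_true
X_hat = somp(D, Y, k=2)
```

Pooling the correlations across measurement vectors is exactly the "additional information" the abstract refers to: atoms outside the shared support rarely correlate well with every column at once.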
Blind PARAFAC receivers for DS-CDMA systems
IEEE Trans. Signal Processing, 2000
"... This paper links the directsequence codedivision multiple access (DSCDMA) multiuser separationequalizationdetection problem to the parallel factor (PARAFAC) model, which is an analysis tool rooted in psychometrics and chemometrics. Exploiting this link, it derives a deterministic blind PARAFAC ..."
Abstract

Cited by 126 (20 self)
This paper links the direct-sequence code-division multiple access (DS-CDMA) multiuser separation-equalization-detection problem to the parallel factor (PARAFAC) model, which is an analysis tool rooted in psychometrics and chemometrics. Exploiting this link, it derives a deterministic blind PARAFAC DS-CDMA receiver with performance close to non-blind minimum mean-squared error (MMSE). The proposed PARAFAC receiver capitalizes on code, spatial, and temporal diversity combining, thereby supporting small sample sizes, more users than sensors, and/or less spreading than users. Interestingly, PARAFAC does not require knowledge of spreading codes, the specifics of multipath (interchip interference), DOA calibration information, finite alphabet/constant modulus, or statistical independence/whiteness to recover the information-bearing signals. Instead, PARAFAC relies on a fundamental result regarding the uniqueness of low-rank three-way array decomposition due to Kruskal (and generalized herein to the complex-valued case) that guarantees identifiability of all relevant signals and propagation parameters. These and other issues are also demonstrated in pertinent simulation experiments.
Parallel Factor Analysis in Sensor Array Processing
IEEE Trans. Signal Processing, 2000
"... This paper links multiple invariance sensor array processing (MISAP) to parallel factor (PARAFAC) analysis, which is a tool rooted in psychometrics and chemometrics. PARAFAC is a common name for lowrank decomposition of three and higher way arrays. This link facilitates the derivation of power ..."
Abstract

Cited by 126 (19 self)
This paper links multiple invariance sensor array processing (MISAP) to parallel factor (PARAFAC) analysis, a tool rooted in psychometrics and chemometrics. PARAFAC is a common name for low-rank decomposition of three- and higher-way arrays. This link facilitates the derivation of powerful identifiability results for MISAP, shows that the uniqueness of single- and multiple-invariance ESPRIT stems from the uniqueness of low-rank decomposition of three-way arrays, and allows tapping into the available expertise for fitting the PARAFAC model. The results are applicable to both data-domain and subspace MISAP formulations. The paper also includes a constructive uniqueness proof for a special PARAFAC model.
Orthogonal Tensor Decompositions
SIAM Journal on Matrix Analysis and Applications, 2001
"... We explore the orthogonal decomposition of tensors (also known as multidimensional arrays or nway arrays) using two different definitions of orthogonality. We present numerous examples to illustrate the difficulties in understanding such decompositions. We conclude with a counterexample to a tensor ..."
Abstract

Cited by 124 (9 self)
We explore the orthogonal decomposition of tensors (also known as multidimensional arrays or n-way arrays) using two different definitions of orthogonality. We present numerous examples to illustrate the difficulties in understanding such decompositions. We conclude with a counterexample to a tensor extension of the Eckart–Young SVD approximation theorem by Leibovici and Sabatier [Linear Algebra Appl., 269 (1998), pp. 307–329].
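For contrast with the tensor counterexample this abstract mentions, the matrix version of the Eckart–Young theorem is easy to verify numerically. A NumPy sketch (variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.standard_normal((6, 5))

# Truncated SVD: keep only the k largest singular triplets.
U, s, Vt = np.linalg.svd(M, full_matrices=False)
k = 2
M_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Eckart-Young: no rank-k matrix comes closer in Frobenius norm,
# and the error equals the energy of the discarded singular values.
err = np.linalg.norm(M - M_k)
```

It is precisely this "truncate the SVD" recipe whose naive tensor extension the paper's counterexample defeats.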
Blind Multiband Signal Reconstruction: Compressed Sensing for Analog Signals
"... We address the problem of reconstructing a multiband signal from its subNyquist pointwise samples, when the band locations are unknown. Our approach assumes an existing multicoset sampling. Prior recovery methods for this sampling strategy either require knowledge of band locations or impose stric ..."
Abstract

Cited by 109 (60 self)
We address the problem of reconstructing a multiband signal from its sub-Nyquist pointwise samples when the band locations are unknown. Our approach assumes an existing multicoset sampling. Prior recovery methods for this sampling strategy either require knowledge of band locations or impose strict limitations on the possible spectral supports. In this paper, only the number of bands and their widths are assumed, without any other limitations on the support. We describe how to choose the parameters of the multicoset sampling so that a unique multiband signal matches the given samples. To recover the signal, the continuous reconstruction is replaced by a single finite-dimensional problem without the need for discretization. The resulting problem is studied within the framework of compressed sensing and thus can be solved efficiently using known tractable algorithms from this emerging area. We also develop a theoretical lower bound on the average sampling rate required for blind signal reconstruction, which is twice the minimal rate of known-spectrum recovery. Our method ensures perfect reconstruction for a wide class of signals sampled at the minimal rate. Numerical experiments demonstrate blind sampling and reconstruction at the minimal sampling rate.
On the best rank-1 and rank-(R1, R2, ..., RN) approximation of higher-order tensors
 SIAM Journal on Matrix Analysis and Applications
"... Abstract. In this paper we discuss a multilinear generalization of the best rankR approximation problem for matrices, namely, the approximation of a given higherorder tensor, in an optimal leastsquares sense, by a tensor that has prespecified column rank value, row rank value, etc. For matrices, t ..."
Abstract

Cited by 108 (3 self)
In this paper we discuss a multilinear generalization of the best rank-R approximation problem for matrices, namely, the approximation of a given higher-order tensor, in an optimal least-squares sense, by a tensor that has prespecified column rank value, row rank value, etc. For matrices, the solution is conceptually obtained by truncation of the singular value decomposition (SVD); however, this approach does not have a straightforward multilinear counterpart. We discuss higher-order generalizations of the power method and the orthogonal iteration method.
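The truncation approach this abstract contrasts with can be illustrated with a small NumPy sketch of truncated HOSVD, the higher-order analogue of SVD truncation (function names are hypothetical). Unlike the matrix case, this is generally not the optimal least-squares approximation, only a good starting point for iterative schemes such as the higher-order orthogonal iteration discussed in the paper:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: the mode-n fibers of T become columns."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def truncated_hosvd(T, ranks):
    """Rank-(R1, R2, R3) approximation by truncating the SVD of each
    mode-n unfolding, then projecting T onto the retained subspaces."""
    factors = []
    for mode, r in enumerate(ranks):
        # Leading left singular vectors of the mode-n unfolding.
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        factors.append(U[:, :r])
    core = T
    for mode, U in enumerate(factors):
        # Multiply by U^T along each mode to form the core tensor.
        core = np.moveaxis(np.tensordot(U.T, core, axes=(1, mode)), 0, mode)
    approx = core
    for mode, U in enumerate(factors):
        # Multiply back by U along each mode to reconstruct.
        approx = np.moveaxis(np.tensordot(U, approx, axes=(1, mode)), 0, mode)
    return approx

rng = np.random.default_rng(4)
T = rng.standard_normal((4, 5, 6))
T_hat = truncated_hosvd(T, (2, 2, 2))   # multilinear rank at most (2, 2, 2)
```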