Results 1–10 of 52
From Sparse Solutions of Systems of Equations to Sparse Modeling of Signals and Images, 2007
Cited by 423 (37 self)
Abstract:
A full-rank matrix A ∈ ℝ^(n×m) with n < m generates an underdetermined system of linear equations Ax = b having infinitely many solutions. Suppose we seek the sparsest solution, i.e., the one with the fewest nonzero entries: can it ever be unique? If so, when? As optimization of sparsity is combinatorial in nature, are there efficient methods for finding the sparsest solution? These questions have been answered positively and constructively in recent years, exposing a wide variety of surprising phenomena; in particular, the existence of easily verifiable conditions under which optimally sparse solutions can be found by concrete, effective computational methods. Such theoretical results inspire a bold perspective on some important practical problems in signal and image processing. Several well-known signal and image processing problems can be cast as demanding solutions of underdetermined systems of equations. Such problems have previously seemed, to many, intractable. There is considerable evidence that these problems often have sparse solutions. Hence, advances in finding sparse solutions to underdetermined systems energize research on such signal and image processing problems – to striking effect. In this paper we review the theoretical results on sparse solutions of linear systems…
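An illustrative sketch (an editorial addition, not taken from the paper) of one of the "concrete, effective computational methods" this abstract alludes to: basis pursuit replaces the combinatorial sparsity objective with the convex problem min ‖x‖₁ subject to Ax = b, which becomes a linear program after splitting x = u − v with u, v ≥ 0. All sizes and data below are invented.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m = 15, 30
A = rng.standard_normal((n, m))          # full-rank with probability one
x_true = np.zeros(m)
x_true[[3, 17, 25]] = [1.5, -2.0, 0.7]   # a sparse solution (3 nonzeros)
b = A @ x_true

# minimize sum(u) + sum(v) = ||x||_1 subject to A(u - v) = b, u, v >= 0
c = np.ones(2 * m)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b, bounds=(0, None))
x_hat = res.x[:m] - res.x[m:]
# When x_true is sparse enough relative to n and m, the l1 minimizer
# typically coincides with the sparsest solution.
```

The equality constraints guarantee x_hat is itself a solution of Ax = b; the easily verifiable uniqueness conditions the abstract mentions say when this l1 minimizer is also the sparsest one.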
A fast approach for overcomplete sparse decomposition based on smoothed ℓ0 norm, 2009
Sparse representation for signal classification. In Adv. NIPS, 2006
Cited by 79 (0 self)
Abstract:
In this paper, the application of sparse representation (factorization) of signals over an overcomplete basis (dictionary) to signal classification is discussed. Searching for the sparse representation of a signal over an overcomplete dictionary is achieved by optimizing an objective function that includes two terms: one that measures the signal reconstruction error and another that measures the sparsity. This objective function works well in applications where signals need to be reconstructed, such as coding and denoising. On the other hand, discriminative methods, such as linear discriminant analysis (LDA), are better suited for classification tasks. However, discriminative methods are usually sensitive to corruption in signals because they lack the crucial properties needed for signal reconstruction. In this paper, we present a theoretical framework for signal classification with sparse representation. The approach combines the discrimination power of discriminative methods with the reconstruction property and the sparsity of the sparse representation, which makes it possible to deal with signal corruptions: noise, missing data, and outliers. The proposed approach is therefore capable of robust classification with a sparse representation of signals. The theoretical results are demonstrated with signal classification tasks, showing that the proposed approach outperforms both the standard discriminative methods and the standard sparse representation in the case of corrupted signals.
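A minimal sketch of the two-term objective this abstract describes, J(x) = ½‖s − Dx‖² + λ‖x‖₁ (reconstruction error plus a sparsity measure), minimized here with ISTA (iterative soft-thresholding). ISTA is a standard solver chosen for brevity; the paper's own optimization method may differ, and the dictionary D, signal s, and λ below are invented for illustration.

```python
import numpy as np

def ista(D, s, lam=0.05, n_iter=1000):
    """Minimize 0.5*||s - D x||^2 + lam*||x||_1 by iterative soft-thresholding."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2      # 1/L, L = Lipschitz const. of grad
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = x - step * (D.T @ (D @ x - s))      # gradient step on the quadratic
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # shrinkage
    return x

rng = np.random.default_rng(1)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)                  # unit-norm dictionary atoms
s = 2.0 * D[:, 5] + 0.01 * rng.standard_normal(20)  # signal ~ one atom + noise
x = ista(D, s)
```

The recovered coefficient vector concentrates on the atom that generated the signal, which is the reconstruction-plus-sparsity behavior the classification framework builds on.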
Underdetermined blind source separation based on sparse representation. IEEE Transactions on Signal Processing, 2006
Cited by 40 (11 self)
Abstract:
This paper discusses underdetermined (i.e., with more sources than sensors) blind source separation (BSS) using a two-stage sparse representation approach. The first challenging task of this approach is to estimate the unknown mixing matrix precisely. In this paper, an algorithm for estimating the mixing matrix that can be viewed as an extension of the DUET and TIFROM methods is first developed. Standard clustering algorithms (e.g., the K-means method) can also be used for estimating the mixing matrix if the sources are sufficiently sparse. Compared with the DUET and TIFROM methods and standard clustering algorithms, the authors' proposed method can solve a broader class of problems, because the required key condition on the sparsity of the sources can be considerably relaxed. The second task of the two-stage approach is to estimate the source matrix using a standard linear programming algorithm. Another main contribution of the work described in this paper is the development of a recoverability analysis. After extending the results in [7], a necessary and sufficient condition for recoverability of a source vector is obtained. Based on this condition and various types of source sparsity, several probability inequalities and probability estimates for the recoverability issue are established. Finally, simulation results that illustrate the effectiveness of the theoretical results are presented. Index Terms: blind source separation (BSS), ℓ1-norm, probability, recoverability, sparse representation, wavelet packets.
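A toy sketch of the first stage only, under the *strong* sparsity assumption that at most one source is active per sample — exactly the condition the paper's method relaxes. Then every observed column is a scalar multiple of one column of the mixing matrix A, so clustering the normalized observations on the unit (half-)sphere recovers A up to sign and permutation. The greedy farthest-point clustering below is an editorial stand-in, not the paper's algorithm, and all dimensions and data are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, T = 2, 3, 300                     # 2 sensors, 3 sources: underdetermined
A = rng.standard_normal((n, m))
A /= np.linalg.norm(A, axis=0)
S = np.zeros((m, T))
S[rng.integers(0, m, T), np.arange(T)] = rng.standard_normal(T)  # 1 active source/sample
X = A @ S                               # noiseless mixtures

cols = X / np.linalg.norm(X, axis=0)    # project observations onto the unit sphere
cols *= np.sign(cols[0] + 1e-12)        # fold antipodal directions together
picks = [0]                             # farthest-point init: pick dissimilar columns
for _ in range(m - 1):
    sim = np.max(np.abs(cols[:, picks].T @ cols), axis=0)
    picks.append(int(np.argmin(sim)))
A_hat = cols[:, picks]
labels = np.argmax(np.abs(A_hat.T @ cols), axis=0)
for k in range(m):                      # refine each cluster by a rank-1 (SVD) fit
    U, _, _ = np.linalg.svd(cols[:, labels == k], full_matrices=False)
    A_hat[:, k] = U[:, 0]
```

With relaxed sparsity (several simultaneously active sources) this naive clustering breaks down, which is the gap the paper's extension of DUET/TIFROM addresses; the second stage (linear programming for the sources) is sketched separately under the EEG entry below in this listing.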
Fourth-order cumulant-based blind identification of underdetermined mixtures. IEEE Transactions on Signal Processing, 2007
Cited by 38 (0 self)
Abstract:
In this paper we study two fourth-order cumulant-based techniques for the estimation of the mixing matrix in underdetermined independent component analysis. The first method is based on a simultaneous matrix diagonalization. The second is based on a simultaneous off-diagonalization. The number of sources that can be allowed is roughly quadratic in the number of observations. For both methods, explicit expressions for the maximum number of sources are given. Simulations illustrate the performance of the techniques.
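A small numerical check of the algebraic fact such techniques build on (the identification algorithms in the paper itself are more involved): for x = As with independent zero-mean sources, the fourth-order cumulant tensor of the observations satisfies cum(x_i, x_j, x_k, x_l) = Σ_r κ_r A[i,r]A[j,r]A[k,r]A[l,r], where κ_r is the kurtosis of source r — so it carries information about A even with more sources than observations. Sizes and source distributions below are invented.

```python
import numpy as np

rng = np.random.default_rng(5)
n, m, N = 2, 3, 200_000                 # 2 observations, 3 sources, N samples
A = rng.standard_normal((n, m))
A /= np.linalg.norm(A, axis=0)
S = rng.uniform(-1.0, 1.0, (m, N))      # non-Gaussian, zero-mean sources
X = A @ S

C2 = X @ X.T / N                        # covariance (sources are zero-mean)
M4 = np.einsum('ia,ja,ka,la->ijkl', X, X, X, X) / N
Q = (M4                                 # empirical fourth-order cumulant tensor
     - np.einsum('ij,kl->ijkl', C2, C2)
     - np.einsum('ik,jl->ijkl', C2, C2)
     - np.einsum('il,jk->ijkl', C2, C2))

kappa = 1/5 - 3 * (1/3) ** 2            # kurtosis of Uniform(-1,1): E[s^4]-3E[s^2]^2
T4 = kappa * np.einsum('ir,jr,kr,lr->ijkl', A, A, A, A)  # predicted structure
```

Up to sampling error, Q matches the rank-m symmetric structure T4; the paper's simultaneous (off-)diagonalization methods exploit exactly this structure to identify A.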
Blind identification of underdetermined mixtures by simultaneous matrix diagonalization. IEEE Transactions on Signal Processing, 2008
Cited by 19 (4 self)
Abstract:
In this paper, we study simultaneous matrix diagonalization-based techniques for the estimation of the mixing matrix in underdetermined independent component analysis (ICA). This includes a generalization to underdetermined mixtures of the well-known SOBI algorithm. The problem is reformulated in terms of the parallel factor decomposition (PARAFAC) of a higher-order tensor. We present conditions under which the mixing matrix is unique and discuss several algorithms for its computation.
Sparse linear predictors for speech processing. In Proc. Interspeech, 2008
Cited by 11 (5 self)
Abstract:
This paper presents two new classes of linear prediction schemes. The first is based on the concept of creating a sparse residual rather than a minimum-variance one, which allows more efficient quantization; we will show that this works well in the presence of voiced speech, where the excitation can be represented by an impulse train, and that it creates a sparser residual in the case of unvoiced speech. The second class aims at finding sparse prediction coefficients; interesting results can be seen when applying it to the joint estimation of long-term and short-term predictors. The proposed estimators are all solutions to convex optimization problems, which can be solved efficiently and reliably using, e.g., interior-point methods. Index Terms: linear prediction, all-pole modeling, convex optimization
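A hedged sketch of the first class of predictors: instead of least squares, choose prediction coefficients a minimizing the 1-norm of the residual, min_a Σ_n |x[n] − aᵀx_past[n]|, which favors a residual that is sparse (a few large impulses, many near-zeros). As the abstract notes, such problems are convex; here the equivalent linear program is handed to SciPy's solver. The synthetic signal is invented: an all-pole filter driven by an impulse-train-like excitation, mimicking voiced speech.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
N, p = 200, 4
a_true = np.array([1.7, -1.225, 0.405, -0.05])   # stable AR(4) coefficients
x = np.zeros(N)
for t in range(p, N):
    x[t] = a_true @ x[t - p:t][::-1]             # x[t-1], ..., x[t-p]
    if t % 40 == 0:
        x[t] += 1.0                              # sparse excitation impulses

X = np.column_stack([x[p - 1 - k:N - 1 - k] for k in range(p)])  # lagged samples
y = x[p:]
M = len(y)
# variables: [a (free), t (residual bounds)]; minimize sum(t) s.t. |y - Xa| <= t
c = np.concatenate([np.zeros(p), np.ones(M)])
A_ub = np.block([[X, -np.eye(M)], [-X, -np.eye(M)]])
b_ub = np.concatenate([y, -y])
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * p + [(0, None)] * M)
a_hat = res.x[:p]
```

Because the excitation is nonzero at only a few samples, the 1-norm criterion leaves the impulses in the residual instead of smearing them across all samples as the 2-norm would, which is what makes the residual cheap to quantize.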
Blind Source Separation and Independent Component Analysis: A Review, 2004
Cited by 11 (0 self)
Abstract:
Blind source separation (BSS) and independent component analysis (ICA) are generally based on a wide class of unsupervised learning algorithms, and they have found potential applications in many areas from engineering to neuroscience. A recent trend in BSS is to consider problems in the framework of matrix factorization, or more generally of signal decomposition, with probabilistic generative and tree-structured graphical models, exploiting a priori knowledge about the true nature and structure of latent (hidden) variables or sources, such as spatio-temporal decorrelation, statistical independence, sparseness, smoothness, or lowest complexity in the sense, e.g., of best predictability. The goal of such a decomposition can be considered as the estimation of sources that are not necessarily statistically independent, together with the parameters of a mixing system, or more generally as finding a new, reduced, or hierarchical and structured representation of the observed (sensor) data that can be interpreted as physically meaningful coding or blind source estimation. The key issue is to find such a transformation or coding (linear or nonlinear) that has true physical meaning and interpretation. We present a review of BSS and ICA, including various algorithms for static and dynamic models and their applications. The paper mainly consists of three parts:…
Generalized component analysis and blind source separation methods for analyzing multichannel brain signals. In Statistical and Process Models of Cognitive Aging, Mahwah, NJ, 2006
Cited by 10 (6 self)
Abstract:
Blind source separation (BSS) and related methods, e.g., independent component analysis (ICA), are generally based on a wide class of unsupervised learning algorithms, and they have found potential applications in many areas from engineering to neuroscience. The recent trend in blind source separation and generalized component analysis (GCA) is to consider problems in the framework of matrix factorization, or more generally of signal decomposition, with probabilistic generative models, exploiting a priori knowledge about the true nature, morphology, or structure of latent (hidden) variables or sources, such as sparseness, spatio-temporal decorrelation, statistical independence, non-negativity, smoothness, or lowest possible complexity. The goal of BSS can be considered as the estimation of true physical sources and the parameters of a mixing system, while the objective of GCA is to find a new, reduced, or hierarchical and structured representation of the observed (sensor) data that can be interpreted as physically meaningful coding or blind signal decomposition. The key issue is to find such a transformation or coding that has true physical meaning and interpretation. In this paper we discuss some promising applications of BSS/GCA for analyzing multi-modal, multi-sensory data, especially EEG/MEG data. Moreover, we propose to apply…
Blind estimation of channel parameters and source components for EEG signals: A sparse factorization approach. IEEE Transactions on Neural Networks, 2006
Cited by 9 (5 self)
Abstract:
In this paper, we use a two-stage sparse factorization approach for blindly estimating the channel parameters and then estimating the source components of electroencephalogram (EEG) signals. EEG signals are assumed to be linear mixtures of source components, artifacts, etc. Therefore, a raw EEG data matrix can be factored into the product of two matrices, one of which represents the mixing matrix and the other the source component matrix. Furthermore, the components are sparse in the time-frequency domain, i.e., the factorization is a sparse factorization in the time-frequency domain. It is a challenging task to estimate the mixing matrix. Our extensive analysis and computational results, based on many sets of EEG data, not only provide firm evidence supporting the above assumption, but also prompt us to propose a new algorithm for estimating the mixing matrix. After the mixing matrix is estimated, the source components are estimated in the time-frequency domain using a linear programming method. As an example of the potential applications of our approach, we analyzed EEG data obtained from a modified Sternberg memory experiment. Two almost uncorrelated components obtained by applying the sparse factorization method were selected for phase synchronization analysis. Several interesting findings were obtained, most notably that memory-related synchronization and desynchronization appear in the alpha band, and that the strength of alpha-band synchronization is related to memory performance. Index Terms: electroencephalogram (EEG), linear mixture, linear programming, sparse factorization, synchronization, wavelet packets.
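A compact sketch of the second stage this abstract describes: once the mixing matrix is known, each (time-frequency) column of the sparse source matrix is estimated by ℓ1 minimization — min ‖s‖₁ subject to As = x — which becomes a linear program after splitting s = u − v with u, v ≥ 0. All sizes and data below are invented; the paper applies this to wavelet-packet coefficients of real EEG recordings.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)
n, m, T = 4, 6, 25                       # 4 channels, 6 sources, 25 columns
A = rng.standard_normal((n, m))          # assume the mixing matrix is known
S = rng.standard_normal((m, T)) * (rng.random((m, T)) < 0.25)  # sparse sources
X = A @ S                                # observed (e.g. time-frequency) data

def l1_recover(A, x):
    """min ||s||_1 s.t. A s = x, via the LP on (u, v) with s = u - v."""
    m = A.shape[1]
    res = linprog(np.ones(2 * m), A_eq=np.hstack([A, -A]), b_eq=x,
                  bounds=(0, None))
    return res.x[:m] - res.x[m:]

S_hat = np.column_stack([l1_recover(A, X[:, t]) for t in range(T)])
```

Each recovered column reproduces the observations exactly while having ℓ1-norm no larger than the true sources'; the paper's recoverability analysis (see the BSS entry above) gives conditions under which it equals the true column.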