Results 1–10 of 884
Blind Source Separation by Sparse Decomposition in a Signal Dictionary
, 2000
Cited by 274 (34 self)
Introduction: In blind source separation, an N-channel sensor signal x(t) arises from M unknown scalar source signals s_i(t), linearly mixed together by an unknown N × M matrix A and possibly corrupted by additive noise ξ(t):

x(t) = A s(t) + ξ(t)   (1.1)

We wish to estimate the mixing matrix A and the M-dimensional source signal s(t). Many natural signals can be sparsely represented in a proper signal dictionary:

s_i(t) = Σ_{k=1}^{K} C_{ik} φ_k(t)   (1.2)

The scalar functions φ_k
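The mixing model of Eq. (1.1) can be sketched numerically; this is a minimal illustration with made-up dimensions and Laplacian samples standing in for sparse sources, not the paper's dictionary-based method:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, T = 3, 2, 1000  # sensors, sources, samples (illustrative sizes)

# Sparse source signals s(t); Laplacian samples stand in for sparsity
s = rng.laplace(size=(M, T))
A = rng.normal(size=(N, M))             # the unknown N x M mixing matrix
noise = 0.01 * rng.normal(size=(N, T))  # additive noise xi(t)
x = A @ s + noise                       # observed sensor signals, Eq. (1.1)
```

Recovering A and s(t) from x alone is the blind problem the paper addresses via sparse decomposition.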
Monaural sound source separation by nonnegative matrix factorization with temporal continuity and sparseness criteria
 IEEE Trans. on Audio, Speech and Lang. Processing
, 2007
Cited by 189 (30 self)
Abstract—An unsupervised learning algorithm for the separation of sound sources in one-channel music signals is presented. The algorithm is based on factorizing the magnitude spectrogram of an input signal into a sum of components, each of which has a fixed magnitude spectrum and a time-varying gain. Each sound source, in turn, is modeled as a sum of one or more components. The parameters of the components are estimated by minimizing the reconstruction error between the input spectrogram and the model, while restricting the component spectrograms to be nonnegative and favoring components whose gains are slowly varying and sparse. Temporal continuity is favored by using a cost term which is the sum of squared differences between the gains in adjacent frames, and sparseness is favored by penalizing nonzero gains. The proposed iterative estimation algorithm is initialized with random values, and the gains and the spectra are then alternately updated using multiplicative update rules until the values converge. Simulation experiments were carried out using generated mixtures of pitched musical instrument samples and drum sounds. The performance of the proposed method was compared with independent subspace analysis and basic nonnegative matrix factorization, which are based on the same linear model. According to these simulations, the proposed method achieves better separation quality than the previous algorithms. In particular, the temporal continuity criterion improved the detection of pitched musical sounds; the sparseness criterion did not produce significant improvements. Index Terms—Acoustic signal analysis, audio source separation, blind source separation, music, nonnegative matrix factorization, sparse coding, unsupervised learning.
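The multiplicative-update core of such a factorization can be sketched as follows; this is basic NMF with the Euclidean cost on synthetic data, without the paper's temporal-continuity and sparseness terms:

```python
import numpy as np

rng = np.random.default_rng(1)
F, T, K = 20, 50, 3  # frequency bins, frames, components (illustrative sizes)
V = np.abs(rng.normal(size=(F, T)))  # stand-in nonnegative magnitude spectrogram

W = np.abs(rng.normal(size=(F, K)))  # fixed component spectra
H = np.abs(rng.normal(size=(K, T)))  # time-varying gains
eps = 1e-9

for _ in range(200):
    # Multiplicative updates for the Euclidean cost; nonnegativity is
    # preserved automatically. The paper adds penalty terms on H here.
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)

err = np.linalg.norm(V - W @ H)
```

The temporal-continuity cost would add a term built from squared differences H[:, 1:] − H[:, :-1] to the objective, modifying the update for H accordingly.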
A Survey of Dimension Reduction Techniques
, 2002
Cited by 141 (0 self)
In this paper, we assume that we have n observations, each being a realization of the p-dimensional random variable x = (x_1, …, x_p) with mean E(x) = μ = (μ_1, …, μ_p) and covariance matrix E{(x − μ)(x − μ)^T} = Σ ∈ R^{p×p}. We denote such an observation matrix by X = {x_{i,j} : 1 ≤ i ≤ p, 1 ≤ j ≤ n}. If μ_i and σ_i denote the mean and the standard deviation of the i-th random variable, respectively, then we will often standardize the observations x_{i,j} by (x_{i,j} − μ_i)/σ_i, where μ_i = x̄_i = (1/n) Σ_{j=1}^{n} x_{i,j} and σ_i = ((1/n) Σ_{j=1}^{n} (x_{i,j} − x̄_i)²)^{1/2}.
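The standardization step described above is straightforward to express with arrays; a minimal sketch on made-up data, with rows as variables and columns as observations:

```python
import numpy as np

rng = np.random.default_rng(2)
p, n = 4, 200  # variables, observations (illustrative sizes)
X = rng.normal(loc=5.0, scale=2.0, size=(p, n))

# Standardize each row: (x_ij - mean_i) / std_i
mu = X.mean(axis=1, keepdims=True)
sigma = X.std(axis=1, keepdims=True)  # 1/n normalization, as in the text
Z = (X - mu) / sigma
```

Each standardized row then has zero mean and unit standard deviation, the usual precondition for the dimension reduction techniques the survey covers.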
A Fast Fixed-Point Algorithm for Independent Component Analysis of Complex Valued Signals
, 2000
Cited by 133 (1 self)
Separation of complex-valued signals is a frequently arising problem in signal processing. For example, separation of convolutively mixed source signals involves computations on complex-valued signals. In this article it is assumed that the original, complex-valued source signals are mutually statistically independent, and the problem is solved by the independent component analysis (ICA) model. ICA is a statistical method for transforming an observed multidimensional random vector into components that are mutually as independent as possible. In this article, a fast fixed-point type algorithm that is capable of separating complex-valued, linearly mixed source signals is presented and its computational efficiency is shown by simulations. Also, the local consistency of the estimator given by the algorithm is proved.
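A standard preprocessing step for such algorithms is whitening the observed complex mixtures; this sketch shows only that step on synthetic data (not the paper's fixed-point iteration itself):

```python
import numpy as np

rng = np.random.default_rng(3)
M, T = 2, 5000
# Complex-valued, statistically independent sources (illustrative)
s = rng.laplace(size=(M, T)) + 1j * rng.laplace(size=(M, T))
A = rng.normal(size=(M, M)) + 1j * rng.normal(size=(M, M))  # complex mixing
x = A @ s

# Whitening: decorrelate channels and normalize variances
C = (x @ x.conj().T) / T                 # Hermitian sample covariance
d, E = np.linalg.eigh(C)
V = E @ np.diag(d ** -0.5) @ E.conj().T  # whitening matrix C^{-1/2}
z = V @ x

Cz = (z @ z.conj().T) / T                # should be ~ identity
```

After whitening, the remaining unmixing matrix is unitary, which is what the fixed-point iteration estimates.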
A robust and precise method for solving the permutation problem of frequency-domain blind source separation
 IEEE Trans. on Speech and Audio Processing 12
, 2004
Cited by 116 (31 self)
This paper presents a robust and precise method for solving the permutation problem of frequency-domain blind source separation. It is based on two previous approaches: direction-of-arrival estimation and inter-frequency correlation. We discuss the advantages and disadvantages of the two approaches, and integrate them to exploit their respective advantages. We also present a closed-form formula to estimate the directions of source signals from a separating matrix obtained by ICA. Experimental results show that our method solved permutation problems almost perfectly in a situation where two sources were mixed in a room whose reverberation time was 300 ms.
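The inter-frequency correlation idea can be sketched on toy data: per-bin outputs are scrambled by unknown permutations, and bins are realigned by matching amplitude envelopes against a reference bin. This is an illustrative sketch only, not the paper's combined DOA-plus-correlation method:

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(4)
M, F, T = 2, 8, 300  # sources, frequency bins, time frames (illustrative)

# Each source has one amplitude envelope shared across frequency bins
env = np.abs(rng.normal(size=(M, T)))
Y = np.stack([env * (0.5 + rng.random((M, 1))) for _ in range(F)])  # (F, M, T)

# Scramble each bin with an unknown permutation (ICA's permutation ambiguity)
perms = [rng.permutation(M) for _ in range(F)]
Y_scr = np.stack([Y[f][perms[f]] for f in range(F)])

# Align every bin to bin 0 by maximizing inter-frequency envelope correlation
ref = Y_scr[0]
aligned = [ref]
for f in range(1, F):
    best = max(permutations(range(M)),
               key=lambda p: sum(np.corrcoef(ref[i], Y_scr[f][list(p)][i])[0, 1]
                                 for i in range(M)))
    aligned.append(Y_scr[f][list(best)])
```

In practice the search over permutations is cheap for small M, and the paper's contribution is making such alignment robust by combining it with DOA estimates.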
A linear non-Gaussian acyclic model for causal discovery
 J. Machine Learning Research
, 2006
Cited by 103 (30 self)
In recent years, several methods have been proposed for the discovery of causal structure from non-experimental data. Such methods make various assumptions on the data generating process to facilitate its identification from purely observational data. Continuing this line of research, we show how to discover the complete causal structure of continuous-valued data, under the assumptions that (a) the data generating process is linear, (b) there are no unobserved confounders, and (c) disturbance variables have non-Gaussian distributions of non-zero variances. The solution relies on the use of the statistical method known as independent component analysis, and does not require any pre-specified time-ordering of the variables. We provide a complete Matlab package for performing this LiNGAM analysis (short for Linear Non-Gaussian Acyclic Model), and demonstrate the effectiveness of the method using artificially generated data and real-world data.
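Why non-Gaussianity makes the causal direction identifiable can be seen in a two-variable toy: only in the correct direction is the regression residual independent of the regressor. The sketch below uses a crude dependence score (correlation of squares) in place of the ICA machinery the paper actually uses:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 20000
# Toy linear non-Gaussian model: x1 causes x2, uniform disturbances
x1 = rng.uniform(-1, 1, n)
x2 = 1.0 * x1 + rng.uniform(-1, 1, n)

def dep_score(y, x):
    """OLS-regress y on x; score leftover dependence via corr of squares.

    A crude stand-in for the independence tests behind LiNGAM."""
    b = np.dot(x, y) / np.dot(x, x)
    r = y - b * x
    return abs(np.corrcoef(x ** 2, r ** 2)[0, 1])

forward = dep_score(x2, x1)   # correct direction: residual nearly independent
backward = dep_score(x1, x2)  # reversed direction: residual stays dependent
```

With Gaussian disturbances both directions would score near zero, which is exactly why assumption (c) is needed.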
Validating the independent components of neuroimaging time series via clustering and visualization
 NeuroImage
, 2004
Statistical Shape Analysis: Clustering, Learning, and Testing
 IEEE Trans. Pattern Anal. Mach. Intell
, 2005
Cited by 81 (13 self)
Using a recently proposed geometric representation of planar shapes, we present algorithmic tools for: (i) hierarchical clustering of imaged objects according to the shapes of their boundaries, (ii) learning of probability models for clustered shapes, and (iii) testing of observed shapes under competing probability models. Clustering at any level of hierarchy is performed using a minimum dispersion criterion and a Markov search process. Statistical means of clusters provide shapes to be clustered at the next higher level, thus building a hierarchy of shapes.
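The hierarchy-building idea (cluster objects by a minimum-dispersion criterion, then cluster the cluster means at the next level) can be sketched with plain k-means on made-up feature vectors; the paper's actual setting uses geometric shape representations and a Markov search, neither of which is reproduced here:

```python
import numpy as np

rng = np.random.default_rng(6)

def init_centers(X, k):
    # Farthest-point seeding: deterministic and spreads seeds apart
    centers = [X[0]]
    for _ in range(k - 1):
        d = ((X[:, None] - np.array(centers)[None]) ** 2).sum(-1).min(1)
        centers.append(X[np.argmax(d)])
    return np.array(centers)

def kmeans(X, k, iters=50):
    """Plain k-means: a minimum-dispersion clustering criterion."""
    centers = init_centers(X, k)
    for _ in range(iters):
        labels = ((X[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        centers = np.array([X[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers

# Stand-in "shape features": three well-separated groups (illustrative data)
X = np.concatenate([rng.normal(c, 0.1, size=(30, 2))
                    for c in [(0, 0), (5, 0), (0, 5)]])
labels, means = kmeans(X, 3)
# Next level of the hierarchy: cluster the cluster means themselves
labels2, _ = kmeans(means, 2)
```

Repeating the second step builds successively coarser levels, mirroring the paper's hierarchy of shapes.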