Results 11-20 of 719
Blind Separation of Instantaneous Mixtures of Non-Stationary Sources
IEEE Trans. Signal Processing, 2000
Abstract

Cited by 167 (12 self)
Most ICA algorithms are based on a model of stationary sources. This paper considers exploiting the (possible) nonstationarity of the sources to achieve separation. We introduce two objective functions based on the likelihood and on mutual information in a simple Gaussian non-stationary model, and we show how they can be optimized, offline or online, by simple yet remarkably efficient algorithms (one is based on a novel joint diagonalization procedure, the other on a Newton-like technique). The paper also includes (limited) numerical experiments and a discussion contrasting non-Gaussian and non-stationary models. 1. INTRODUCTION The aim of this paper is to develop a blind source separation procedure adapted to source signals with time-varying intensity (such as speech signals). For simplicity, we shall restrict ourselves to the simplest mixture model: X(t) = AS(t) (1), where X(t) = [X_1(t), ..., X_K(t)]^T is the vector of observations (at time t), A is a fixed unknown K x K inver...
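The joint-diagonalization route can be illustrated in its simplest case: with only two covariance blocks, separating non-stationary sources reduces to a generalized eigenvalue problem. The mixing matrix, block split, and variance profiles below are illustrative choices, not taken from the paper:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)

# Two sources with time-varying variance: source 0 is loud in the first
# half of the record, source 1 in the second half.
n = 4000
s = rng.standard_normal((2, n))
s[0, : n // 2] *= 3.0
s[1, n // 2 :] *= 3.0

A = np.array([[1.0, 0.6], [0.4, 1.0]])  # illustrative mixing matrix
x = A @ s                               # observations X(t) = A S(t)

# The block covariances of the mixtures differ because the source
# variances differ; jointly diagonalizing two symmetric matrices is
# exactly the generalized eigenvalue problem C1 w = lambda * C2 w.
C1 = np.cov(x[:, : n // 2])
C2 = np.cov(x[:, n // 2 :])
_, W = eigh(C1, C2)

y = W.T @ x  # recovered sources, up to scale, sign, and order
corr = np.abs(np.corrcoef(np.vstack([y, s]))[:2, 2:])
```

With exact covariances the separation would be exact; with finite samples each recovered row correlates near 1 with one true source.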
An Analytical Constant Modulus Algorithm
1996
Abstract

Cited by 166 (35 self)
Iterative constant modulus algorithms such as Godard and CMA have been used to blindly separate a superposition of co-channel constant modulus (CM) signals impinging on an antenna array. These algorithms have certain deficiencies in the context of convergence to local minima and the retrieval of all individual CM signals that are present in the channel. In this paper, we show that the underlying constant modulus factorization problem is, in fact, a generalized eigenvalue problem, and may be solved via a simultaneous diagonalization of a set of matrices. With this new, analytical approach, it is possible to detect the number of CM signals present in the channel and to retrieve all of them exactly, rejecting other, non-CM signals. Only a modest number of samples is required. The algorithm is robust in the presence of noise, and is tested on measured data collected from an experimental setup.
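For contrast, here is a minimal sketch of the iterative CMA baseline the paper improves upon: stochastic-gradient minimization of the Godard dispersion E(|y|^2 - 1)^2. The mixing matrix, step size, and source model are illustrative assumptions; this is not the analytical algorithm the paper derives:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two constant-modulus sources: random-phase complex exponentials.
n = 6000
s = np.exp(2j * np.pi * rng.random((2, n)))
A = np.array([[1.0, 0.5], [0.3j, 1.0]])  # illustrative 2x2 mixing
x = A @ s

# CMA stochastic-gradient update: w <- w - mu*(|y|^2 - 1)*conj(y)*x,
# where y = w^H x is the beamformer output.
w = np.array([1.0 + 0j, 0.0 + 0j])
mu = 0.005
for t in range(n):
    y = np.vdot(w, x[:, t])
    w -= mu * (np.abs(y) ** 2 - 1.0) * np.conj(y) * x[:, t]

# After convergence the output modulus is nearly constant, while the
# raw first-sensor signal (a CM mixture) has large modulus dispersion.
out = w.conj() @ x[:, -500:]
init_disp = np.mean((np.abs(x[0, -500:]) ** 2 - 1.0) ** 2)
final_disp = np.mean((np.abs(out) ** 2 - 1.0) ** 2)
```

Note that this iterative scheme extracts only one CM signal per run, and its convergence depends on the initialization; that is exactly the deficiency the analytical approach addresses.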
Mining event-related brain dynamics
Trends in Cognitive Sciences, 2004
Abstract

Cited by 130 (21 self)
This article provides a new, more comprehensive view of event-related brain dynamics founded on an information-based approach to modeling electroencephalographic (EEG) dynamics. Most EEG research focuses either on peaks 'evoked' in average event-related potentials (ERPs) or on changes 'induced' in the EEG power spectrum by experimental events. Although these measures are nearly complementary, they do not fully model the event-related dynamics in the data, and cannot isolate the signals of the contributing cortical areas. We propose that many ERPs and other EEG features are better viewed as time/frequency perturbations of underlying field potential processes. The new approach combines independent component analysis (ICA), time/frequency analysis, and trial-by-trial visualization that measures EEG source dynamics without requiring an explicit head model. Scalp EEG signals are produced by partial synchronization of neuronal-scale field potentials across areas of cortex of centimetre-squared scale. Although once viewed by some as a form of brain 'noise', it appears increasingly probable that this synchronization optimizes relations between spike-mediated 'top-down' and 'bottom-up' communication, both within and between brain areas. This optimization might have particular importance during motivated anticipation of, and attention to, meaningful events and associations, and in response to their anticipated consequences [1-3]. This new view of cortical and scalp-recorded field dynamics requires a new data analysis approach. Here, we suggest how a combination of signal processing and visualization methods can give a more adequate model of the spatially distributed event-related EEG dynamics that support cognitive events. Traditional analysis of event-related EEG data proceeds in one of two directions. In the time-domain approach, researchers average a set of data trials or epochs time-locked to some class of events, yielding an ERP waveform at each data channel.
The frequency-domain approach averages changes in the frequency power spectrum of the whole EEG data time-locked to the same events, producing a two-dimensional image that we call the event-related spectral perturbation (ERSP; see Box 1). Neither ERP nor ERSP measures of event-related data fully model their dynamics. Imagine, by analogy, a snapshot of a seashore view created by averaging together a large number of snapshots taken at different times. This average snapshot would not show the waves! Similarly, ERP averaging filters out most of the EEG data, leaving only a small portion phase-locked to the time-locking events (see Box 1). The ERP and ERSP are nearly complementary. Oscillatory (ERSP) changes 'induced' by experimental events can be poorly represented in, or completely absent from, the time-domain features of the ERP 'evoked' by the same events.
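The ERSP construction described above can be sketched on simulated epochs: average the per-trial power spectrogram, then express it in dB relative to a pre-event baseline. The sampling rate, burst frequency, and window parameters are arbitrary illustrative choices:

```python
import numpy as np
from scipy.signal import spectrogram

rng = np.random.default_rng(2)

# Simulated event-locked epochs: 20 trials, 2 s at fs=100 Hz, with a
# 10 Hz burst 'induced' in the second half of each epoch. The burst has
# a random phase per trial, so it averages out of the ERP but not the ERSP.
fs, n_trials, n_samp = 100, 20, 200
t = np.arange(n_samp) / fs
epochs = 0.5 * rng.standard_normal((n_trials, n_samp))
burst = t >= 1.0
for k in range(n_trials):
    epochs[k, burst] += np.sin(2 * np.pi * 10 * t[burst] + 2 * np.pi * rng.random())

# Per-trial spectrograms, averaged over trials (power, not phase).
f, tt, S = spectrogram(epochs, fs=fs, nperseg=50, noverlap=25, axis=-1)
mean_power = S.mean(axis=0)  # shape: (freq, time)

# ERSP: dB change relative to the pre-event baseline of each frequency.
baseline = mean_power[:, tt < 1.0].mean(axis=1, keepdims=True)
ersp_db = 10 * np.log10(mean_power / baseline)
```

The induced 10 Hz power shows up clearly in `ersp_db` after t = 1 s even though the phase-randomized burst would vanish from the trial-average ERP.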
Blind PARAFAC receivers for DS-CDMA systems
IEEE Trans. Signal Processing, 2000
Abstract

Cited by 126 (20 self)
This paper links the direct-sequence code-division multiple access (DS-CDMA) multiuser separation-equalization-detection problem to the parallel factor (PARAFAC) model, which is an analysis tool rooted in psychometrics and chemometrics. Exploiting this link, it derives a deterministic blind PARAFAC DS-CDMA receiver with performance close to non-blind minimum mean-squared error (MMSE). The proposed PARAFAC receiver capitalizes on code, spatial, and temporal diversity combining, thereby supporting small sample sizes, more users than sensors, and/or less spreading than users. Interestingly, PARAFAC does not require knowledge of spreading codes, the specifics of multipath (interchip interference), DOA calibration information, finite alphabet/constant modulus, or statistical independence/whiteness to recover the information-bearing signals. Instead, PARAFAC relies on a fundamental result regarding the uniqueness of low-rank three-way array decomposition due to Kruskal (and generalized herein to the complex-valued case) that guarantees identifiability of all relevant signals and propagation parameters. These and other issues are also demonstrated in pertinent simulation experiments.
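A minimal alternating-least-squares fit of the PARAFAC (trilinear) model, sketched here on a synthetic exact-rank tensor rather than on CDMA data; the factor sizes, rank, and iteration count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

def khatri_rao(U, V):
    """Column-wise Kronecker product of two factor matrices."""
    return np.einsum('ir,jr->ijr', U, V).reshape(U.shape[0] * V.shape[0], -1)

def cp_als(T, rank, n_iter=200):
    """Fit T[i,j,k] ~= sum_r A[i,r] B[j,r] C[k,r] by alternating least squares."""
    I, J, K = T.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    X0 = T.reshape(I, J * K)                     # mode-0 unfolding
    X1 = T.transpose(1, 0, 2).reshape(J, I * K)  # mode-1 unfolding
    X2 = T.transpose(2, 0, 1).reshape(K, I * J)  # mode-2 unfolding
    for _ in range(n_iter):
        # Each update is a linear least-squares solve for one factor.
        A = X0 @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = X1 @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = X2 @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# Exact rank-2 three-way array, then recover its factors blindly.
A0, B0, C0 = (rng.standard_normal((d, 2)) for d in (5, 6, 7))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(T, rank=2)
T_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
rel_err = np.linalg.norm(T - T_hat) / np.linalg.norm(T)
```

Kruskal's uniqueness result is what guarantees that, under mild rank conditions, the recovered factors match the true ones up to column scaling and permutation.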
A robust and precise method for solving the permutation problem of frequency-domain blind source separation
IEEE Trans. on Speech and Audio Processing, vol. 12, 2004
Abstract

Cited by 116 (31 self)
This paper presents a robust and precise method for solving the permutation problem of frequency-domain blind source separation. It is based on two previous approaches: direction-of-arrival estimation and inter-frequency correlation. We discuss the advantages and disadvantages of the two approaches, and integrate them to exploit their respective advantages. We also present a closed-form formula to estimate the directions of source signals from a separating matrix obtained by ICA. Experimental results show that our method solved permutation problems almost perfectly for a situation in which two sources were mixed in a room whose reverberation time was 300 ms.
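The inter-frequency-correlation half of the method can be illustrated in isolation: amplitude envelopes of the same source correlate strongly across neighboring frequency bins, so a greedy pass can undo bin-wise permutations. The synthetic envelopes below are hypothetical, and this sketch omits the DOA half of the paper's integrated method:

```python
import numpy as np

rng = np.random.default_rng(4)

# Two sources with distinct smooth amplitude envelopes; a source's
# envelope looks similar in every frequency bin (up to noise).
n_freq, n_frames = 30, 400
t = np.arange(n_frames)
base = np.vstack([1.5 + np.sin(2 * np.pi * t / 100),
                  1.5 + np.cos(2 * np.pi * t / 160)])
env = base[None, :, :] * (1 + 0.1 * rng.standard_normal((n_freq, 2, n_frames)))

# Simulate the ICA permutation ambiguity: some bins come out swapped.
true_swap = rng.integers(0, 2, n_freq).astype(bool)
observed = env.copy()
observed[true_swap] = observed[true_swap][:, ::-1]

# Greedy alignment: per bin, keep the ordering whose envelopes correlate
# best with the previously aligned bin.
aligned = observed.copy()
for fb in range(1, n_freq):
    keep = sum(np.corrcoef(aligned[fb - 1, i], aligned[fb, i])[0, 1] for i in range(2))
    swap = sum(np.corrcoef(aligned[fb - 1, i], aligned[fb, 1 - i])[0, 1] for i in range(2))
    if swap > keep:
        aligned[fb] = aligned[fb, ::-1].copy()
```

A purely sequential pass like this can propagate a single mistake across all higher bins, which is one motivation for combining it with the more robust (but less precise) DOA approach.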
A linear non-Gaussian acyclic model for causal discovery
J. Machine Learning Research, 2006
Abstract

Cited by 103 (30 self)
In recent years, several methods have been proposed for the discovery of causal structure from non-experimental data. Such methods make various assumptions on the data-generating process to facilitate its identification from purely observational data. Continuing this line of research, we show how to discover the complete causal structure of continuous-valued data, under the assumptions that (a) the data-generating process is linear, (b) there are no unobserved confounders, and (c) disturbance variables have non-Gaussian distributions of nonzero variances. The solution relies on the use of the statistical method known as independent component analysis, and does not require any pre-specified time ordering of the variables. We provide a complete Matlab package for performing this LiNGAM analysis (short for Linear Non-Gaussian Acyclic Model), and demonstrate the effectiveness of the method using artificially generated data and real-world data.
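A toy two-variable illustration of why non-Gaussian disturbances make the causal direction identifiable. This is a simplified residual-dependence check, not the paper's ICA-based LiNGAM algorithm; the coefficient and uniform noise are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(5)

# Linear non-Gaussian pair: x -> y with uniform (non-Gaussian) disturbances.
n = 20000
x = rng.uniform(-1, 1, n)
y = 0.8 * x + rng.uniform(-1, 1, n)

def dependence_after_regression(cause, effect):
    """Regress effect on cause (OLS) and measure residual dependence.

    In the true causal direction the residual is independent of the
    regressor, so higher-order cross-correlations vanish; in the reversed
    direction they do not. This asymmetry only exists because the
    disturbances are non-Gaussian.
    """
    b = np.cov(cause, effect)[0, 1] / np.var(cause)
    r = effect - b * cause
    return abs(np.corrcoef(cause**2, r**2)[0, 1])

forward = dependence_after_regression(x, y)   # correct direction x -> y
backward = dependence_after_regression(y, x)  # reversed direction y -> x
```

With Gaussian disturbances both statistics would be near zero and the direction would be unidentifiable, which is exactly assumption (c) in the abstract.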
Optimization algorithms exploiting unitary constraints
IEEE Trans. Signal Processing, 2002
Abstract

Cited by 103 (13 self)
This paper presents novel algorithms that iteratively converge to a local minimum of a real-valued cost function subject to the constraint that the columns of its complex-valued matrix argument are mutually orthogonal and have unit norm. The algorithms are derived by reformulating the constrained optimization problem as an unconstrained one on a suitable manifold. This significantly reduces the dimensionality of the optimization problem. Pertinent features of the proposed framework are illustrated by using the framework to derive an algorithm for computing the eigenvector associated with either the largest or the smallest eigenvalue of a Hermitian matrix. Index Terms: constrained optimization, eigenvalue problems, optimization on manifolds, orthogonal constraints.
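The eigenvector example from the abstract can be sketched for the simplest unit-norm constraint, a single column: gradient ascent on the Rayleigh quotient over the unit sphere, with normalization as the retraction back onto the constraint set. The matrix size and step size are illustrative, and this is a generic manifold-ascent sketch rather than the paper's specific algorithms:

```python
import numpy as np

rng = np.random.default_rng(6)

# Random Hermitian matrix whose dominant eigenvector we want.
M = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))
H = (M + M.conj().T) / 2

# Ascent on the unit sphere: move along the Riemannian gradient of the
# Rayleigh quotient w^H H w (the Euclidean gradient projected onto the
# tangent space), then renormalize to stay on the constraint set.
w = rng.standard_normal(6) + 1j * rng.standard_normal(6)
w /= np.linalg.norm(w)
for _ in range(2000):
    rho = np.real(np.vdot(w, H @ w))  # Rayleigh quotient
    grad = H @ w - rho * w            # tangent-space gradient
    w = w + 0.1 * grad
    w /= np.linalg.norm(w)            # retraction onto the sphere

evals, evecs = np.linalg.eigh(H)
overlap = abs(np.vdot(evecs[:, -1], w))  # ~1 when w is the top eigenvector
```

Descending instead of ascending (a minus sign on the step) converges to the eigenvector of the smallest eigenvalue, matching the abstract's two cases.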
Algorithms for numerical analysis in high dimensions
SIAM J. Sci. Comput., 2005
Abstract

Cited by 90 (11 self)
Nearly every numerical analysis algorithm has computational complexity that scales exponentially in the underlying physical dimension. The separated representation, introduced previously, allows many operations to be performed with scaling that is formally linear in the dimension. In this paper we further develop this representation by: (i) discussing the variety of mechanisms that allow it to be surprisingly efficient; (ii) addressing the issue of conditioning; (iii) presenting algorithms for solving linear systems within this framework; and (iv) demonstrating methods for dealing with antisymmetric functions, as arise in the multiparticle Schrödinger equation in quantum mechanics. Numerical examples are given.
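The storage argument can be made concrete with a rank-1 separated representation of a separable Gaussian; this is an assumed toy example of the linear-in-dimension scaling, not anything from the paper:

```python
import numpy as np

# A separable function of d variables, f(x1,...,xd) = prod_i exp(-x_i^2),
# stored in separated (here rank-1) form: d one-dimensional factor arrays
# instead of a full n^d grid of values.
d, n = 6, 20
grid = np.linspace(-1.0, 1.0, n)
factors = [np.exp(-grid**2) for _ in range(d)]  # one 1-D array per dimension

def eval_separated(factors, idx):
    """Evaluate the rank-1 separated representation at a grid multi-index."""
    return np.prod([fac[i] for fac, i in zip(factors, idx)])

# Check against direct evaluation at a random grid point.
rng = np.random.default_rng(7)
idx = tuple(rng.integers(0, n, d))
direct = np.exp(-np.sum(grid[list(idx)] ** 2))
sep = eval_separated(factors, idx)

separated_storage = d * n  # numbers stored: linear in d
full_storage = n**d        # a full tensor-product grid: exponential in d
```

For d = 6 and n = 20 the separated form stores 120 numbers where the full grid would need 64 million; general (rank > 1) separated representations sum several such products.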
Tensor decompositions for learning latent variable models
2014
Abstract

Cited by 83 (7 self)
This work considers a computationally and statistically efficient parameter estimation method for a wide class of latent variable models—including Gaussian mixture models, hidden Markov models, and latent Dirichlet allocation—which exploits a certain tensor structure in their low-order observable moments (typically, of second and third order). Specifically, parameter estimation is reduced to the problem of extracting a certain (orthogonal) decomposition of a symmetric tensor derived from the moments; this decomposition can be viewed as a natural generalization of the singular value decomposition for matrices. Although tensor decompositions are generally intractable to compute, the decomposition of these specially structured tensors can be efficiently obtained by a variety of approaches, including power iterations and maximization approaches (similar to the case of matrices). A detailed analysis of a robust tensor power method is provided, establishing an analogue of Wedin’s perturbation theorem for the singular vectors of matrices. This implies a robust and computationally tractable estimation approach for several popular latent variable models.
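The tensor power method can be sketched on an exactly orthogonally decomposable symmetric tensor; the dimensions, eigenvalues, and restart counts below are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(8)

# Symmetric orthogonally decomposable tensor:
# T = sum_i lam[i] * v_i (x) v_i (x) v_i, with orthonormal v_i.
k, dim = 3, 5
lam = np.array([3.0, 2.0, 1.0])
V, _ = np.linalg.qr(rng.standard_normal((dim, dim)))
V = V[:, :k]
T = np.einsum('r,ir,jr,kr->ijk', lam, V, V, V)

def power_iteration(T, n_iter=100, n_restarts=10):
    """Tensor power map u <- T(I, u, u) / ||T(I, u, u)||, best of several inits."""
    best_u, best_val = None, -np.inf
    for _ in range(n_restarts):
        u = rng.standard_normal(T.shape[0])
        u /= np.linalg.norm(u)
        for _ in range(n_iter):
            u = np.einsum('ijk,j,k->i', T, u, u)
            u /= np.linalg.norm(u)
        val = np.einsum('ijk,i,j,k->', T, u, u, u)  # T(u, u, u)
        if val > best_val:
            best_u, best_val = u, val
    return best_u, best_val

# Deflation: extract the top robust eigenpair, subtract it, repeat.
recovered = []
T_work = T.copy()
for _ in range(k):
    u, val = power_iteration(T_work)
    recovered.append(val)
    T_work = T_work - val * np.einsum('i,j,k->ijk', u, u, u)
```

In the latent-variable setting the same iteration is run on a tensor built from empirical second- and third-order moments, which is where the perturbation analysis becomes essential.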
Joint Approximate Diagonalization Of Positive Definite Hermitian Matrices
Abstract

Cited by 80 (11 self)
This paper provides an iterative algorithm to jointly approximately diagonalize K Hermitian positive definite matrices C_1, ..., C_K. Specifically, it calculates the matrix B which minimizes the criterion sum_{k=1}^K n_k [log det diag(B C_k B*) - log det(B C_k B*)], the n_k being positive numbers; this criterion is a measure of the deviation from diagonality of the matrices B C_k B*. The convergence of the algorithm is discussed, and some numerical experiments are performed showing the good performance of the algorithm.
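The deviation-from-diagonality criterion can be checked numerically: by Hadamard's inequality it is nonnegative, and it vanishes exactly when B diagonalizes every C_k. The matrices below are synthetic (and real-valued for simplicity); this evaluates the criterion only, not the paper's minimization algorithm:

```python
import numpy as np

rng = np.random.default_rng(9)

def deviation_from_diagonality(B, Cs, ns):
    """sum_k n_k [log det diag(B C_k B*) - log det(B C_k B*)], which is >= 0."""
    total = 0.0
    for C, n_k in zip(Cs, ns):
        D = B @ C @ B.conj().T
        logdet_diag = np.sum(np.log(np.real(np.diag(D))))
        _, logdet_full = np.linalg.slogdet(D)
        total += n_k * (logdet_diag - logdet_full)
    return total

# Positive definite matrices sharing an exact diagonalizer: C_k = A diag(d_k) A^T.
K, p = 4, 3
A = rng.standard_normal((p, p))
Cs = [A @ np.diag(rng.uniform(0.5, 2.0, p)) @ A.T for _ in range(K)]
ns = np.ones(K)

B_exact = np.linalg.inv(A)            # makes every B C_k B^T exactly diagonal
B_rand = rng.standard_normal((p, p))  # a generic matrix does not
crit_exact = deviation_from_diagonality(B_exact, Cs, ns)
crit_rand = deviation_from_diagonality(B_rand, Cs, ns)
```

The paper's algorithm iteratively drives this criterion toward its minimum over B; in the exactly jointly diagonalizable case that minimum is zero.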