Results 1–10 of 514
Survey on Independent Component Analysis
 NEURAL COMPUTING SURVEYS
, 1999
"... A common problem encountered in such disciplines as statistics, data analysis, signal processing, and neural network research, is nding a suitable representation of multivariate data. For computational and conceptual simplicity, such a representation is often sought as a linear transformation of the ..."
Abstract

Cited by 2241 (104 self)
A common problem encountered in such disciplines as statistics, data analysis, signal processing, and neural network research is finding a suitable representation of multivariate data. For computational and conceptual simplicity, such a representation is often sought as a linear transformation of the original data. Well-known linear transformation methods include, for example, principal component analysis, factor analysis, and projection pursuit. A recently developed linear transformation method is independent component analysis (ICA), in which the desired representation is the one that minimizes the statistical dependence of the components of the representation. Such a representation seems to capture the essential structure of the data in many applications. In this paper, we survey the existing theory and methods for ICA.
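To make the abstract's "minimize statistical dependence" idea concrete, here is a minimal numpy sketch (not from the survey itself; the sources, mixing matrix, and contrast function are invented for the demo) of one popular ICA estimation scheme, a FastICA-style fixed-point iteration on whitened data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
# Two non-Gaussian sources: a Laplace signal and a uniform signal.
S = np.vstack([rng.laplace(size=n), rng.uniform(-1.0, 1.0, n)])
A = np.array([[1.0, 0.6], [0.4, 1.0]])   # hypothetical mixing matrix
X = A @ S                                 # observed multivariate data

# Whitening: decorrelate and rescale the observations.
Xc = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(Xc))
Z = (E / np.sqrt(d)) @ E.T @ Xc

# Symmetric fixed-point iteration with a tanh contrast (FastICA-style).
W = np.linalg.qr(rng.standard_normal((2, 2)))[0]
for _ in range(200):
    Y = W @ Z
    G, Gp = np.tanh(Y), 1.0 - np.tanh(Y) ** 2
    W = (G @ Z.T) / n - np.diag(Gp.mean(axis=1)) @ W
    U_, _, Vt = np.linalg.svd(W)          # symmetric decorrelation
    W = U_ @ Vt
Y = W @ Z   # estimated independent components, up to sign and order
```

The recovered rows of `Y` match the original sources up to sign and permutation, which is the inherent indeterminacy of the ICA model.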
Embracing wireless interference: Analog network coding
 in ACM SIGCOMM
, 2007
"... Traditionally, interference is considered harmful. Wireless networks strive to avoid scheduling multiple transmissions at the same time in order to prevent interference. This paper adopts the opposite approach; it encourages strategically picked senders to interfere. Instead of forwarding packets, r ..."
Abstract

Cited by 353 (10 self)
Traditionally, interference is considered harmful. Wireless networks strive to avoid scheduling multiple transmissions at the same time in order to prevent interference. This paper adopts the opposite approach; it encourages strategically picked senders to interfere. Instead of forwarding packets, routers forward the interfering signals. The destination leverages network-level information to cancel the interference and recover the signal destined to it. The result is analog network coding because it mixes signals, not bits. So, what if wireless routers forward signals instead of packets? Theoretically, such an approach doubles the capacity of the canonical relay network. Surprisingly, it is also practical. We implement our design using software radios and show that it achieves significantly higher throughput than both traditional wireless routing and prior work on wireless network coding.
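A toy baseband illustration of the cancellation step the abstract describes: here the destination is assumed to already know one of the two interfering packets (its own earlier transmission) and the channel gains, both of which the real system must estimate; synchronization and relay amplification noise are simplified into a single additive term.

```python
import numpy as np

rng = np.random.default_rng(1)
# Two senders transmit BPSK symbols simultaneously; the relay hears the sum.
s1 = rng.choice([-1.0, 1.0], 64)   # packet the destination already knows
s2 = rng.choice([-1.0, 1.0], 64)   # packet destined to it
h1, h2 = 0.9, 1.1                  # per-sender channel gains (assumed known)
noise = 0.05 * rng.standard_normal(64)
relayed = h1 * s1 + h2 * s2 + noise   # the relay forwards this analog mixture

# Interference cancellation: subtract the known signal, rescale, decode.
recovered = np.sign((relayed - h1 * s1) / h2)
```

Because the destination removes its own contribution rather than waiting for a separate time slot, both packets cross the relay in one exchange, which is the source of the capacity gain the abstract mentions.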
Independent Component Analysis Using an Extended Infomax Algorithm for Mixed Subgaussian and Supergaussian Sources
, 1999
"... An extension of the infomax algorithm of Bell and Sejnowski (1995) is presented that is able blindly to separate mixed signals with sub and supergaussian source distributions. This was achieved by using a simple type of learning rule first derived by Girolami (1997) by choosing negentropy as a proj ..."
Abstract

Cited by 307 (22 self)
An extension of the infomax algorithm of Bell and Sejnowski (1995) is presented that is able to blindly separate mixed signals with sub- and super-Gaussian source distributions. This was achieved by using a simple type of learning rule first derived by Girolami (1997) by choosing negentropy as a projection pursuit index. Parameterized probability distributions that have sub- and super-Gaussian regimes were used to derive a general learning rule that preserves the simple architecture proposed by Bell and Sejnowski (1995), is optimized using the natural gradient by Amari (1998), and uses the stability analysis of Cardoso and Laheld (1996) to switch between sub- and super-Gaussian regimes. We demonstrate that the extended infomax algorithm is able to separate 20 sources with a variety of source distributions easily. Applied to high-dimensional data from electroencephalographic recordings, it is effective at separating artifacts such as eye blinks and line noise from weaker electrical signals that arise from sources in the brain.
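A schematic numpy rendering of the switching rule the abstract describes, assuming the published update form delta_W proportional to (I - K tanh(u) u^T - u u^T) W, where K is a diagonal matrix of signs chosen by the stability criterion; the learning rate, iteration count, and toy sources are invented demo choices, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10000
# One super-Gaussian (Laplace) and one sub-Gaussian (uniform) source, unit variance.
S = np.vstack([rng.laplace(scale=1 / np.sqrt(2), size=n),
               rng.uniform(-np.sqrt(3), np.sqrt(3), n)])
A = np.array([[1.0, 0.5], [0.6, 1.0]])
X = A @ S

W = np.eye(2)
lr = 0.05
for _ in range(800):
    U = W @ X
    # Stability-based switching: k_i = +1 (super-Gaussian) or -1 (sub-Gaussian).
    k = np.sign(np.mean(1 / np.cosh(U) ** 2, 1) * np.mean(U ** 2, 1)
                - np.mean(U * np.tanh(U), 1))
    # Natural-gradient extended-infomax update.
    W += lr * (np.eye(2) - (k[:, None] * np.tanh(U)) @ U.T / n
               - U @ U.T / n) @ W
Y = W @ X   # estimated sources, up to scale, sign and permutation
```

After convergence the switching variable settles to -1 for the uniform component and +1 for the Laplace component, which is exactly the sub/super-Gaussian regime selection described in the abstract.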
High-Order Contrasts for Independent Component Analysis
"... This article considers highorder measures of independence for the independent component analysis problem and discusses the class of Jacobi algorithms for their optimization. Several implementations are discussed. We compare the proposed approaches with gradientbased techniques from the algorithmic ..."
Abstract

Cited by 252 (5 self)
This article considers high-order measures of independence for the independent component analysis problem and discusses the class of Jacobi algorithms for their optimization. Several implementations are discussed. We compare the proposed approaches with gradient-based techniques from the algorithmic point of view and also on a set of biomedical data.
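Jacobi algorithms of the kind discussed here compute the optimal Givens rotation angle in closed form from fourth-order cumulants; the sketch below substitutes a brute-force scan over angles (a deliberate simplification, on invented data) to show the same high-order contrast, a sum of squared kurtoses, at work on a whitened two-channel mixture.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
# One sub-Gaussian and one super-Gaussian source, linearly mixed.
S = np.vstack([rng.uniform(-1.0, 1.0, n), rng.laplace(size=n)])
X = np.array([[1.0, 0.7], [0.4, 1.0]]) @ S

# Whitening reduces the problem to finding a single plane rotation.
d, E = np.linalg.eigh(np.cov(X))
Z = (E / np.sqrt(d)) @ E.T @ (X - X.mean(1, keepdims=True))

def kurt(y):
    # Excess kurtosis per row; valid because rows of y have unit variance.
    return np.mean(y ** 4, axis=1) - 3

# Fourth-order contrast: sum of squared kurtoses over candidate rotations.
thetas = np.linspace(0, np.pi / 2, 180, endpoint=False)
scores = []
for th in thetas:
    c, s = np.cos(th), np.sin(th)
    scores.append(np.sum(kurt(np.array([[c, s], [-s, c]]) @ Z) ** 2))
th = thetas[np.argmax(scores)]
R = np.array([[np.cos(th), np.sin(th)], [-np.sin(th), np.cos(th)]])
Y = R @ Z   # estimated sources, up to sign and permutation
```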
Blind Separation of Instantaneous Mixtures of Nonstationary Sources
 IEEE Trans. Signal Processing
, 2000
"... Most ICA algorithms are based on a model of stationary sources. This paper considers exploiting the (possible) nonstationarity of the sources to achieve separation. We introduce two objective functions based on the likelihood and on mutual information in a simple Gaussian non stationary model and w ..."
Abstract

Cited by 172 (12 self)
Most ICA algorithms are based on a model of stationary sources. This paper considers exploiting the (possible) nonstationarity of the sources to achieve separation. We introduce two objective functions based on the likelihood and on mutual information in a simple Gaussian nonstationary model, and we show how they can be optimized, offline or online, by simple yet remarkably efficient algorithms (one is based on a novel joint diagonalization procedure, the other on a Newton-like technique). The paper also includes (limited) numerical experiments and a discussion contrasting non-Gaussian and nonstationary models.
1. INTRODUCTION The aim of this paper is to develop a blind source separation procedure adapted to source signals with time-varying intensity (such as speech signals). For simplicity, we shall restrict ourselves to the simplest mixture model: X(t) = A S(t), (1) where X(t) = [X_1(t), ..., X_K(t)]^T is the vector of observations (at time t), A is a fixed unknown K × K inver...
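With exactly two time blocks, joint diagonalization of the block covariance matrices reduces to a generalized eigenproblem, so a minimal two-block sketch (a deliberate simplification of the paper's multi-block procedure, with invented data) can separate two nonstationary Gaussian sources that no stationary non-Gaussianity-based method could handle:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4000
# Gaussian sources whose variance profiles differ over time (nonstationary).
env1 = np.where(np.arange(n) < n // 2, 1.0, 5.0)
env2 = np.where(np.arange(n) < n // 2, 5.0, 1.0)
S = np.vstack([env1 * rng.standard_normal(n), env2 * rng.standard_normal(n)])
A = np.array([[1.0, 0.5], [0.3, 1.0]])
X = A @ S   # X(t) = A S(t), the instantaneous mixture model of Eq. (1)

# Covariance in each of two time blocks; jointly diagonalizing the pair
# amounts to the eigendecomposition of R1^{-1} R2.
R1 = np.cov(X[:, : n // 2])
R2 = np.cov(X[:, n // 2:])
_, V = np.linalg.eig(np.linalg.inv(R1) @ R2)
W = np.real(V).T    # unmixing matrix, up to scale and permutation
Y = W @ X
```

Each block covariance has the form A D_t A^T with D_t diagonal, so the eigenvectors of R1^{-1} R2 recover the columns of A^{-T}, which is why this works without any non-Gaussianity assumption.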
Measuring statistical dependence with Hilbert-Schmidt norms
 PROCEEDINGS ALGORITHMIC LEARNING THEORY
, 2005
"... We propose an independence criterion based on the eigenspectrum of covariance operators in reproducing kernel Hilbert spaces (RKHSs), consisting of an empirical estimate of the HilbertSchmidt norm of the crosscovariance operator (we term this a HilbertSchmidt Independence Criterion, or HSIC). Th ..."
Abstract

Cited by 157 (44 self)
We propose an independence criterion based on the eigenspectrum of covariance operators in reproducing kernel Hilbert spaces (RKHSs), consisting of an empirical estimate of the Hilbert-Schmidt norm of the cross-covariance operator (we term this a Hilbert-Schmidt Independence Criterion, or HSIC). This approach has several advantages compared with previous kernel-based independence criteria. First, the empirical estimate is simpler than any other kernel dependence test, and requires no user-defined regularisation. Second, there is a clearly defined population quantity which the empirical estimate approaches in the large-sample limit, with exponential convergence guaranteed between the two: this ensures that independence tests based on HSIC do not suffer from slow learning rates. Finally, we show in the context of independent component analysis (ICA) that the performance of HSIC is competitive with that of previously published kernel-based criteria, and of other recently published ICA methods.
Blind source separation of more sources than mixtures using overcomplete representations
 IEEE Sig. Proc. Lett
, 1999
"... Abstract—Empirical results were obtained for the blind source separation of more sources than mixtures using a recently proposed framework for learning overcomplete representations. This technique assumes a linear mixing model with additive noise and involves two steps: 1) learning an overcomplete r ..."
Abstract

Cited by 134 (3 self)
Empirical results were obtained for the blind source separation of more sources than mixtures using a recently proposed framework for learning overcomplete representations. This technique assumes a linear mixing model with additive noise and involves two steps: 1) learning an overcomplete representation for the observed data and 2) inferring sources given a sparse prior on the coefficients. We demonstrate that three speech signals can be separated with good fidelity given only two mixtures of the three signals. Similar results were obtained with mixtures of two speech signals and one music signal. Index Terms—Blind source separation, independent component analysis, overcomplete dictionary, overcomplete representation, speech signal separation.
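Step 2 above, inferring sources under a sparse (Laplacian) prior with the mixing matrix taken as known, reduces in the noiseless limit to a minimum-l1 (basis pursuit) problem; a small sketch with an invented 2-by-3 mixing matrix, cast as a linear program via the standard split s = u - v with u, v >= 0:

```python
import numpy as np
from scipy.optimize import linprog

# Overcomplete mixing: 2 mixtures of 3 sources. A is assumed known here;
# the paper's first step learns it from the observed data.
A = np.array([[1.0, 0.5, 0.2],
              [0.3, 1.0, 0.8]])
s_true = np.array([0.0, 0.0, 2.0])   # sparse source vector at one time point
x = A @ s_true

# Minimize ||s||_1 subject to A s = x, as an LP over (u, v) with s = u - v.
m = A.shape[1]
res = linprog(c=np.ones(2 * m), A_eq=np.hstack([A, -A]), b_eq=x)
s_hat = res.x[:m] - res.x[m:]
```

The sparse prior is what breaks the ambiguity of an underdetermined system: of all vectors satisfying the two mixture equations, the l1-minimal one concentrates the energy on as few sources as possible.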
Blind Separation of Disjoint Orthogonal Signals: Demixing N Sources from 2 Mixtures
 In IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP
, 2000
"... We present a novel method for blind separation of any number of sources using only two mixtures. The method applies when sources are (W)disjoint orthogonal, that is, when the supports of the (windowed) Fourier transform of any two signals in the mixture are disjoint sets. We show that, for anechoi ..."
Abstract

Cited by 124 (13 self)
We present a novel method for blind separation of any number of sources using only two mixtures. The method applies when sources are W-disjoint orthogonal, that is, when the supports of the (windowed) Fourier transforms of any two signals in the mixture are disjoint sets. We show that, for anechoic mixtures of attenuated and delayed sources, the method allows one to estimate the mixing parameters by clustering ratios of the time-frequency representations of the mixtures. The estimates of the mixing parameters are then used to partition the time-frequency representation of one mixture to recover the original sources. The technique is valid even when the number of sources is larger than the number of mixtures. The general results are verified on both speech and wireless signals. Sample sound files can be found at http://www.princeton.edu/~srickard/bss.html
1. INTRODUCTION Demixing noisy mixtures has been a goal of long standing in the field of blind source separation (...
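A deliberately idealized sketch of the clustering-and-masking idea: instantaneous attenuation-only mixing (no delays), perfectly frequency-disjoint sinusoidal sources, and a single FFT standing in for the paper's windowed time-frequency transform. The per-bin mixture ratio clusters around one value per source, and a binary mask built from the clusters partitions one mixture into the sources.

```python
import numpy as np

n = 1024
t = np.arange(n)
# Two sources that are frequency-disjoint (hence trivially W-disjoint orthogonal).
s1 = np.sin(2 * np.pi * 50 * t / n)
s2 = np.sin(2 * np.pi * 200 * t / n)
# Instantaneous mixtures with different attenuations (delays omitted here).
x1 = s1 + s2
x2 = 0.5 * s1 + 2.0 * s2

X1, X2 = np.fft.rfft(x1), np.fft.rfft(x2)
active = np.abs(X1) > 1e-6 * np.abs(X1).max()
ratio = np.real(X2[active] / X1[active])   # clusters near 0.5 (s1) and 2.0 (s2)

# Two-means clustering of the per-bin amplitude ratios.
c = np.array([ratio.min(), ratio.max()])
for _ in range(20):
    lab = np.abs(ratio[:, None] - c).argmin(axis=1)
    c = np.array([ratio[lab == k].mean() for k in (0, 1)])

# Partition the frequency bins of mixture x1 to recover each source.
mask = np.zeros(X1.shape, bool)
mask[np.flatnonzero(active)[lab == 0]] = True
y1 = np.fft.irfft(np.where(mask, X1, 0), n)
y2 = np.fft.irfft(np.where(~mask, X1, 0), n)
```

With real speech the supports only approximately satisfy disjointness, and delays add a second clustering dimension, but the mask-and-invert mechanism is the same.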
Independent component approach to the analysis of EEG and MEG recordings
 IEEE Transactions on Biomedical Engineering
, 2000
"... This material is posted here with permission of the IEEE. Such permission of the IEEE does not in any way imply IEEE endorsement of any of Helsinki University of Technology's products or services. Internal or personal use of this material is permitted. However, permission to reprint/republish t ..."
Abstract

Cited by 98 (9 self)
This material is posted here with permission of the IEEE. Such permission of the IEEE does not in any way imply IEEE endorsement of any of Helsinki University of Technology's products or services. Internal or personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution must be obtained from the IEEE by writing to
Information-Theoretic Analysis of Interscale and Intrascale Dependencies Between Image Wavelet Coefficients
 IEEE Transactions on Image Processing
, 2001
"... This paper presents an informationtheoretic analysis of statistical dependencies between image wavelet coefficients. The dependencies are measured using mutual information, which has a fundamental relationship to data compression, estimation, and classification performance. ..."
Abstract

Cited by 97 (1 self)
This paper presents an information-theoretic analysis of statistical dependencies between image wavelet coefficients. The dependencies are measured using mutual information, which has a fundamental relationship to data compression, estimation, and classification performance.
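Mutual information between coefficient pairs can be estimated with a simple 2-D histogram plug-in (the paper's estimators treat bias more carefully). The demo below uses invented data in which a "child" coefficient's magnitude tracks its "parent", mimicking the interscale dependence reported for wavelet coefficients: the pair is essentially uncorrelated yet strongly dependent, which is exactly what mutual information captures and correlation misses.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Plug-in mutual information estimate (in bits) from a 2-D histogram."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz]))

rng = np.random.default_rng(0)
n = 20000
parent = rng.standard_normal(n)
# Child magnitude tracks parent magnitude: uncorrelated but dependent.
child = np.abs(parent) * rng.standard_normal(n)
unrelated = rng.standard_normal(n)
```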