Results 1–10 of 92
A linear non-Gaussian acyclic model for causal discovery
J. Machine Learning Research, 2006
Cited by 103 (30 self)
Abstract:
In recent years, several methods have been proposed for the discovery of causal structure from non-experimental data. Such methods make various assumptions on the data generating process to facilitate its identification from purely observational data. Continuing this line of research, we show how to discover the complete causal structure of continuous-valued data, under the assumptions that (a) the data generating process is linear, (b) there are no unobserved confounders, and (c) disturbance variables have non-Gaussian distributions of nonzero variances. The solution relies on the use of the statistical method known as independent component analysis, and does not require any pre-specified time-ordering of the variables. We provide a complete Matlab package for performing this LiNGAM analysis (short for Linear Non-Gaussian Acyclic Model), and demonstrate the effectiveness of the method using artificially generated data and real-world data.
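The identifiability idea in this abstract can be illustrated with a toy two-variable version of the model. The paper's own implementation is a Matlab package built on ICA; the sketch below is an illustrative Python stand-in that uses a crude dependence measure (correlation between squared residuals and squared regressors) instead of full ICA, and the coefficient 0.8 is an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Linear non-Gaussian acyclic model with two variables: x1 -> x2.
e1 = rng.uniform(-1, 1, n)          # non-Gaussian disturbance
e2 = rng.uniform(-1, 1, n)
x1 = e1
x2 = 0.8 * x1 + e2

def residual(y, x):
    """OLS residual of y regressed on x."""
    b = np.cov(y, x)[0, 1] / np.var(x)
    return y - b * x

def dep(r, x):
    """Crude dependence measure: correlation of squared values.
    Approaches zero when r and x are independent."""
    return abs(np.corrcoef(r**2, x**2)[0, 1])

d_right = dep(residual(x2, x1), x1)  # correct direction: x1 -> x2
d_wrong = dep(residual(x1, x2), x2)  # reversed direction: x2 -> x1
```

In the correct direction the regression residual is (asymptotically) independent of the predictor, so `d_right` is near zero; in the reversed direction the residual remains dependent on the predictor. That asymmetry only exists because the disturbances are non-Gaussian, which is what makes the causal direction identifiable.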
DirectLiNGAM: A direct method for learning a linear non-Gaussian structural equation model
 J. of Machine Learning Research
Blind Source Separation and Independent Component Analysis: A Review
2004
Cited by 13 (0 self)
Abstract:
Blind source separation (BSS) and independent component analysis (ICA) are generally based on a wide class of unsupervised learning algorithms, and they have found potential applications in many areas from engineering to neuroscience. A recent trend in BSS is to consider problems in the framework of matrix factorization, or more generally of signal decomposition with probabilistic generative and tree-structured graphical models, exploiting a priori knowledge about the true nature and structure of latent (hidden) variables or sources, such as spatiotemporal decorrelation, statistical independence, sparseness, smoothness, or lowest complexity in the sense, e.g., of best predictability. The goal of such a decomposition can be viewed as the estimation of sources, not necessarily statistically independent, and of the parameters of a mixing system, or more generally as finding a new reduced or hierarchical and structured representation of the observed (sensor) data that can be interpreted as physically meaningful coding or blind source estimation. The key issue is to find such a transformation or coding (linear or nonlinear) which has a true physical meaning and interpretation. We present a review of BSS and ICA, including various algorithms for static and dynamic models and their applications. The paper mainly consists of three parts:
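A minimal sketch of the BSS setting surveyed in this abstract, under simplifying assumptions (two sources, square noiseless mixing, kurtosis as the sole non-Gaussianity contrast; the mixing matrix and sample size are arbitrary). Whitening reduces demixing to finding a rotation, which a simple grid search over the angle can recover:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Two independent, non-Gaussian sources.
S = np.vstack([rng.uniform(-1, 1, n),      # sub-Gaussian source
               rng.laplace(0, 1, n)])      # super-Gaussian source
A = np.array([[1.0, 0.6],                  # "unknown" mixing matrix
              [0.5, 1.0]])
X = A @ S                                  # observed mixtures

# Step 1: whiten the mixtures (zero mean, identity covariance).
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
Z = np.diag(d ** -0.5) @ E.T @ X

def kurt(y):
    return np.mean(y**4) - 3.0             # kurtosis of a unit-variance signal

def rotate(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]]) @ Z

# Step 2: after whitening, the sources are an orthogonal transform of Z.
# Grid-search the angle that maximises total squared kurtosis.
angles = np.linspace(0, np.pi / 2, 361)
best = max(angles, key=lambda t: sum(kurt(y)**2 for y in rotate(t)))
Y = rotate(best)                           # estimated sources

# Each recovered component should match one true source (up to sign/order).
match = [max(abs(np.corrcoef(s, y)[0, 1]) for y in Y) for s in S]
```

Grid search is of course a toy device that only works in two dimensions; practical ICA algorithms optimise equivalent non-Gaussianity contrasts by fixed-point or gradient methods.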
Evolving Signal Processing for Brain–Computer Interfaces
2012
Cited by 12 (2 self)
Abstract:
This paper discusses the challenges associated with building robust and useful BCI models from accumulated biological knowledge and data, and the technical problems associated with incorporating multimodal physiological, behavioral, and contextual data that may become ubiquitous in the future.
Causal Modelling Combining Instantaneous and Lagged Effects: an Identifiable Model Based on Non-Gaussianity
Cited by 12 (6 self)
Abstract:
Causal analysis of continuous-valued variables typically uses either autoregressive models or linear Gaussian Bayesian networks with instantaneous effects. Estimation of Gaussian Bayesian networks poses serious identifiability problems, which is why it was recently proposed to use non-Gaussian models. Here, we show how to combine the non-Gaussian instantaneous model with autoregressive models. We show that such a non-Gaussian model is identifiable without prior knowledge of network structure, and we propose an estimation method shown to be consistent. This approach also points out how neglecting instantaneous effects can lead to completely wrong estimates of the autoregressive coefficients.
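The abstract's last point can be checked in a hypothetical simulation: if data follow x(t) = B0 x(t) + B1 x(t-1) + e(t) and one fits an ordinary VAR(1) that ignores the instantaneous matrix B0, least squares converges to the reduced-form matrix (I - B0)^-1 B1 rather than to B1. The matrices below are illustrative choices, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000

# x(t) = B0 x(t) + B1 x(t-1) + e(t): instantaneous (B0) and lagged (B1) effects.
B0 = np.array([[0.0, 0.0],
               [0.8, 0.0]])        # instantaneous effect x1 -> x2
B1 = np.array([[0.4, 0.0],
               [0.0, 0.4]])        # diagonal lagged effects
M = np.linalg.inv(np.eye(2) - B0)  # reduced-form propagator

X = np.zeros((n, 2))
E = rng.uniform(-1, 1, (n, 2))     # non-Gaussian disturbances
for t in range(1, n):
    # Reduced form: x(t) = (I - B0)^-1 (B1 x(t-1) + e(t))
    X[t] = M @ (B1 @ X[t - 1] + E[t])

# Fit an ordinary VAR(1) by least squares, ignoring instantaneous effects.
W, *_ = np.linalg.lstsq(X[:-1], X[1:], rcond=None)
A_hat = W.T                        # so that x(t) ~ A_hat @ x(t-1)
```

Here `A_hat` approaches `M @ B1`, whose (2,1) entry is 0.32 even though the true lagged effect `B1[1,0]` is zero: the neglected instantaneous effect leaks into the estimated autoregressive coefficients.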
Generalized component analysis and blind source separation methods for analyzing multichannel brain signals
Statistical and Process Models of Cognitive Aging, Mahwah, NJ, 2006
Cited by 10 (6 self)
Abstract:
Blind source separation (BSS) and related methods, e.g., independent component analysis (ICA), are generally based on a wide class of unsupervised learning algorithms, and they have found potential applications in many areas from engineering to neuroscience. The recent trend in blind source separation and generalized component analysis (GCA) is to consider problems in the framework of matrix factorization, or more generally of signal decomposition with probabilistic generative models, exploiting a priori knowledge about the true nature, morphology, or structure of latent (hidden) variables or sources, such as sparseness, spatiotemporal decorrelation, statistical independence, non-negativity, smoothness, or lowest possible complexity. The goal of BSS can be considered as the estimation of true physical sources and the parameters of a mixing system, while the objective of GCA is finding a new reduced or hierarchical and structured representation of the observed (sensor) data that can be interpreted as physically meaningful coding or blind signal decomposition. The key issue is to find such a transformation or coding which has a true physical meaning and interpretation. In this paper we discuss some promising applications of BSS/GCA for analyzing multimodal, multisensory data, especially EEG/MEG data. Moreover, we propose to apply
Beyond Brain Blobs: Machine Learning Classifiers as Instruments for Analyzing Functional Magnetic Resonance Imaging Data
1998
Cited by 10 (1 self)
Abstract:
The thesis put forth in this dissertation is that machine learning classifiers can be used as instruments for decoding variables of interest from functional magnetic resonance imaging (fMRI) data. There are two main goals in decoding:
• Showing that the variable of interest can be predicted from the data in a statistically reliable manner (i.e. there is enough information present).
• Shedding light on how the data encode the information needed to predict, taking into account what the classifier used can learn and any criteria by which the data are filtered (e.g. how the voxels and time points used are chosen).
Chapter 2 considers the issues that arise when using traditional linear classifiers and several different voxel selection techniques to strive towards these
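The two decoding goals above can be sketched in a toy version of the setup, with synthetic data standing in for fMRI and a nearest-class-mean rule standing in for the linear classifiers the dissertation studies (all sizes and the signal strength are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
n_per_class, n_vox, informative = 200, 50, 5

# Synthetic "fMRI" data: only the first few voxels carry class information.
X0 = rng.normal(0.0, 1.0, (n_per_class, n_vox))
X1 = rng.normal(0.0, 1.0, (n_per_class, n_vox))
X1[:, :informative] += 1.0          # signal confined to informative voxels
y = np.r_[np.zeros(n_per_class), np.ones(n_per_class)]
X = np.vstack([X0, X1])

# Shuffle and split into train / test halves.
idx = rng.permutation(len(y))
X, y = X[idx], y[idx]
half = len(y) // 2
Xtr, ytr, Xte, yte = X[:half], y[:half], X[half:], y[half:]

# Nearest-class-mean: a simple linear classifier.
m0 = Xtr[ytr == 0].mean(axis=0)
m1 = Xtr[ytr == 1].mean(axis=0)
w = m1 - m0                          # weight map over voxels
b = (m0 + m1) / 2
pred = ((Xte - b) @ w > 0).astype(float)
acc = (pred == yte).mean()

# The weight map also localises the information: the largest weights
# should sit on the informative voxels.
top = np.argsort(-np.abs(w))[:informative]
```

Held-out accuracy well above chance addresses the first goal (reliable prediction), while inspecting the weight map `w` addresses the second (where the information sits), with the caveat the abstract raises: the interpretation depends on what this particular classifier can learn.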
Independent Component Analysis: Recent Advances
Cited by 10 (0 self)
Abstract:
Independent component analysis is a probabilistic method for learning a linear transform of a random vector. The goal is to find components which are maximally independent and non-Gaussian (non-normal). Its fundamental difference from classical multivariate statistical methods is the assumption of non-Gaussianity, which enables the identification of the original, underlying components, in contrast to classical methods. The basic theory of ICA was mainly developed in the 1990s and summarized, for example, in our monograph in 2001. Here, we provide an overview of some recent developments in the theory since the year 2000. The main topics are: analysis of causal relations, testing independent components, analysing multiple data sets (three-way data), modelling dependencies between the components, and improved methods for estimating the basic model. Key words: independent component analysis, blind source separation, non-Gaussianity, causal analysis.
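The role of non-Gaussianity stressed in this abstract can be seen in a small numerical experiment (illustrative only): for independent Gaussian sources, every rotation of the pair has the same distribution, so the original components cannot be identified; for non-Gaussian (here uniform) sources, kurtosis varies with the rotation angle and is extremal exactly at the source directions:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200_000

def kurt(y):
    y = (y - y.mean()) / y.std()
    return np.mean(y**4) - 3.0

def kurtosis_vs_angle(s1, s2, n_angles=91):
    """Kurtosis of cos(t)*s1 + sin(t)*s2 over a quarter turn."""
    ts = np.linspace(0, np.pi / 2, n_angles)
    return np.array([kurt(np.cos(t) * s1 + np.sin(t) * s2) for t in ts])

# Gaussian sources: kurtosis ~ 0 at every angle, so no rotation is
# distinguished and the original axes are unidentifiable.
kg = kurtosis_vs_angle(rng.normal(size=n), rng.normal(size=n))

# Uniform (sub-Gaussian) sources: kurtosis is -1.2 at the source
# directions and shrinks towards -0.6 at 45 degrees, so the angle
# dependence picks out the true components.
ku = kurtosis_vs_angle(rng.uniform(-1, 1, n), rng.uniform(-1, 1, n))
```

This is exactly why classical second-order methods (which see only the flat covariance structure) cannot identify the components while ICA can.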
Large Sample Group Independent Component Analysis of Functional Magnetic Resonance Imaging Using Anatomical Atlas-Based Reduction and Bootstrapped Clustering
Cited by 8 (3 self)
Abstract:
Independent component analysis (ICA) is a popular method for the analysis of functional magnetic resonance imaging (fMRI) signals that is capable of revealing connected brain systems of functional significance. To be computationally tractable, estimating the independent components (ICs) inevitably requires one or more dimension reduction steps. Whereas most algorithms perform such reductions in the time domain, the input data are much more extensive in the spatial domain, and there is broad consensus that the brain obeys rules of localization of function into regions that are smaller in number than the number of voxels in a brain image. These functional units apparently reorganize dynamically into networks under different task conditions. Here we develop a new approach to ICA, producing group results by bagging and clustering over hundreds of pooled single-subject ICA results that have been projected to a lower-dimensional subspace. Averages of anatomically based regions are used to compress the single-subject ICA results prior to clustering and resampling via bagging. The computational advantages of this approach make it
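A stripped-down sketch of the pooling-and-clustering idea (not the paper's bagging pipeline: it replaces bootstrapped clustering with plain k-means using a greedy farthest-point initialisation, and all dimensions, subject counts, and noise levels are invented):

```python
import numpy as np

rng = np.random.default_rng(5)
n_subjects, dim, k = 20, 39, 3

# Three "group" component maps, here disjoint blocks of regions.
true_maps = np.zeros((k, dim))
for i in range(k):
    true_maps[i, i * 13:(i + 1) * 13] = 1.0

# Each subject's ICA yields noisy copies of the group maps; pool them all.
pooled = np.vstack([m + rng.normal(0, 0.1, dim)
                    for _ in range(n_subjects) for m in true_maps])

# Greedy farthest-point initialisation, then Lloyd's k-means iterations.
centers = [pooled[0]]
for _ in range(k - 1):
    d = np.min([np.linalg.norm(pooled - c, axis=1) for c in centers], axis=0)
    centers.append(pooled[np.argmax(d)])
centers = np.array(centers)

for _ in range(10):
    labels = np.argmin(
        [np.linalg.norm(pooled - c, axis=1) for c in centers], axis=0)
    centers = np.array([pooled[labels == j].mean(axis=0) for j in range(k)])

# Cluster centroids should recover the group maps (up to relabelling).
recovered = [max(abs(np.corrcoef(m, c)[0, 1]) for c in centers)
             for m in true_maps]
```

Averaging within clusters of pooled single-subject components is what yields group-level maps here; the paper additionally resamples the pooled set (bagging) to attach stability estimates to each cluster, which this sketch omits.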