Results 1–10 of 52
Efficient Markov Chain Monte Carlo methods for decoding population spike trains
 To appear, Neural Computation, 2010
"... Stimulus reconstruction or decoding methods provide an important tool for understanding how sensory and motor information is represented in neural activity. We discuss Bayesian decoding methods based on an encoding generalized linear model (GLM) that accurately describes how stimuli are transformed ..."
Abstract

Cited by 33 (14 self)
Stimulus reconstruction or decoding methods provide an important tool for understanding how sensory and motor information is represented in neural activity. We discuss Bayesian decoding methods based on an encoding generalized linear model (GLM) that accurately describes how stimuli are transformed into the spike trains of a group of neurons. The form of the GLM likelihood ensures that the posterior distribution over the stimuli that caused an observed set of spike trains is log-concave so long as the prior is. This allows the maximum a posteriori (MAP) stimulus estimate to be obtained using efficient optimization algorithms. Unfortunately, the MAP estimate can have a relatively large average error when the posterior is highly non-Gaussian. Here we compare several Markov chain Monte Carlo (MCMC) algorithms that allow for the calculation of general Bayesian estimators involving posterior expectations (conditional on model parameters). An efficient version of the hybrid Monte Carlo (HMC) algorithm was significantly superior to other MCMC methods for Gaussian priors. When the prior distribution has sharp edges and corners, on the other hand, the "hit-and-run" algorithm performed better than other MCMC methods. Using these ...
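To illustrate the hybrid (Hamiltonian) Monte Carlo algorithm the abstract names, here is a minimal single-chain sketch, not the paper's implementation: the target is a toy 2-D standard Gaussian rather than a GLM posterior, and the step size and trajectory length are arbitrary.

```python
import numpy as np

def logp(x):
    return -0.5 * np.dot(x, x)          # log N(0, I), up to a constant

def grad_logp(x):
    return -x

def hmc_step(x, rng, step=0.1, n_leapfrog=20):
    """One HMC transition: resample momentum, leapfrog, accept/reject."""
    p = rng.standard_normal(x.shape)     # fresh Gaussian momentum
    x_new, p_new = x.copy(), p.copy()
    # leapfrog integration of the Hamiltonian dynamics
    p_new += 0.5 * step * grad_logp(x_new)
    for _ in range(n_leapfrog - 1):
        x_new += step * p_new
        p_new += step * grad_logp(x_new)
    x_new += step * p_new
    p_new += 0.5 * step * grad_logp(x_new)
    # Metropolis correction using the Hamiltonian (potential + kinetic energy)
    h_old = -logp(x) + 0.5 * np.dot(p, p)
    h_new = -logp(x_new) + 0.5 * np.dot(p_new, p_new)
    if rng.random() < np.exp(h_old - h_new):
        return x_new
    return x

rng = np.random.default_rng(0)
x = np.zeros(2)
samples = []
for _ in range(2000):
    x = hmc_step(x, rng)
    samples.append(x)
samples = np.asarray(samples)
```

With a log-concave target like this, the gradient-guided proposals give long, rarely rejected moves, which is why HMC outperforms random-walk samplers in the Gaussian-prior setting the abstract describes.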
Empirical models of spiking in neural populations
"... Neurons in the neocortex code and compute as part of a locally interconnected population. Largescale multielectrode recording makes it possible to access these population processes empirically by fitting statistical models to unaveraged data. What statistical structure best describes the concurren ..."
Abstract

Cited by 15 (13 self)
Neurons in the neocortex code and compute as part of a locally interconnected population. Large-scale multielectrode recording makes it possible to access these population processes empirically by fitting statistical models to unaveraged data. What statistical structure best describes the concurrent spiking of cells within a local network? We argue that in the cortex, where firing exhibits extensive correlations in both time and space and where a typical sample of neurons still reflects only a very small fraction of the local population, the most appropriate model captures shared variability by a low-dimensional latent process evolving with smooth dynamics, rather than by putative direct coupling. We test this claim by comparing a latent dynamical model with realistic spiking observations to coupled generalised linear spike-response models (GLMs) using cortical recordings. We find that the latent dynamical approach outperforms the GLM in terms of goodness-of-fit, and reproduces the temporal correlations in the data more accurately. We also compare observation models derived from either Gaussian or point-process assumptions, finding that the non-Gaussian model provides slightly better goodness-of-fit and more realistic population spike counts.
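As a rough sketch of the coupled-GLM baseline this abstract compares against (illustrative only; the function name, one-bin-back coupling, and exponential link are assumptions, not the paper's model), each cell's Poisson rate depends on the population's recent spike counts:

```python
import numpy as np

def glm_log_likelihood(spikes, weights, bias, dt=0.01):
    """Poisson log-likelihood of binned spikes under one-bin-back coupling.

    spikes  : (T, N) array of spike counts
    weights : (N, N) coupling matrix (row i holds the inputs to cell i)
    bias    : (N,) baseline log-rates
    """
    T, N = spikes.shape
    ll = 0.0
    for t in range(1, T):
        log_rate = bias + weights @ spikes[t - 1]   # coupling from last bin
        rate = np.exp(log_rate) * dt                # expected count this bin
        # Poisson log-probability, dropping the constant log(k!) term
        ll += np.sum(spikes[t] * np.log(rate) - rate)
    return ll

rng = np.random.default_rng(0)
spikes = rng.poisson(0.5, size=(100, 3))
ll = glm_log_likelihood(spikes, np.zeros((3, 3)), np.zeros(3))
```

The latent dynamical alternative the abstract argues for would replace the direct `weights @ spikes` term with a shared low-dimensional state driving all cells.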
Gaussian process regression networks
 In Proceedings of the 29th International Conference on Machine Learning (ICML), 2012
"... We introduce a new regression framework, Gaussian process regression networks (GPRN), which combines the structural properties of Bayesian neural networks with the nonparametric flexibility of Gaussian processes. GPRN accommodates input (predictor) dependent signal and noise correlations between ..."
Abstract

Cited by 10 (4 self)
We introduce a new regression framework, Gaussian process regression networks (GPRN), which combines the structural properties of Bayesian neural networks with the nonparametric flexibility of Gaussian processes. GPRN accommodates input (predictor) dependent signal and noise correlations between multiple output (response) variables, input-dependent length-scales and amplitudes, and heavy-tailed predictive distributions. We derive both elliptical slice sampling and variational Bayes inference procedures for GPRN. We apply GPRN as a multiple-output regression and multivariate volatility model, demonstrating substantially improved performance over eight popular multiple-output (multi-task) Gaussian process models and three multivariate volatility models on real datasets, including a 1000-dimensional gene expression dataset.
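The elliptical slice sampler mentioned in the abstract is a general, tuning-free update for models with a Gaussian prior. A minimal sketch of one update follows (a toy 1-D setup, not the GPRN inference code; `loglik` is a stand-in likelihood):

```python
import numpy as np

def ess_step(f, loglik, chol_sigma, rng):
    """One elliptical slice sampling update for prior N(0, Sigma)."""
    nu = chol_sigma @ rng.standard_normal(f.shape)  # prior draw defines the ellipse
    log_u = loglik(f) + np.log(rng.random())        # slice height
    theta = rng.uniform(0.0, 2.0 * np.pi)
    lo, hi = theta - 2.0 * np.pi, theta
    while True:
        f_new = f * np.cos(theta) + nu * np.sin(theta)
        if loglik(f_new) > log_u:
            return f_new
        # shrink the angle bracket toward theta = 0 and retry
        if theta < 0:
            lo = theta
        else:
            hi = theta
        theta = rng.uniform(lo, hi)

def loglik(f):
    # stand-in Gaussian likelihood centered at 2; posterior is N(1, 1/2)
    return -0.5 * np.sum((f - 2.0) ** 2)

rng = np.random.default_rng(0)
chol = np.array([[1.0]])   # Cholesky factor of the (scalar) prior covariance
f = np.zeros(1)
draws = []
for _ in range(3000):
    f = ess_step(f, loglik, chol, rng)
    draws.append(f[0])
```

Because every proposal stays on an ellipse through the current state and a prior draw, the update always respects the Gaussian prior and needs no step-size tuning, which is what makes it attractive for GP models like GPRN.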
Population-wide distributions of neural activity during perceptual decision-making
 Progress in Neurobiology, 2012
"... ..."
Robust learning of low-dimensional dynamics from large neural ensembles
 In NIPS, 2013
"... Abstract Recordings from large populations of neurons make it possible to search for hypothesized lowdimensional dynamics. Finding these dynamics requires models that take into account biophysical constraints and can be fit efficiently and robustly. Here, we present an approach to dimensionality r ..."
Abstract

Cited by 6 (0 self)
Recordings from large populations of neurons make it possible to search for hypothesized low-dimensional dynamics. Finding these dynamics requires models that take into account biophysical constraints and can be fit efficiently and robustly. Here, we present an approach to dimensionality reduction for neural data that is convex, does not make strong assumptions about dynamics, does not require averaging over many trials and is extensible to more complex statistical models that combine local and global influences. The results can be combined with spectral methods to learn dynamical systems models. The basic method extends PCA to the exponential family using nuclear norm minimization. We evaluate the effectiveness of this method using an exact decomposition of the Bregman divergence that is analogous to variance explained for PCA. We show on model data that the parameters of latent linear dynamical systems can be recovered, and that even if the dynamics are not stationary we can still recover the true latent subspace. We also demonstrate an extension of nuclear norm minimization that can separate sparse local connections from global latent dynamics. Finally, we demonstrate improved prediction on real neural data from monkey motor cortex compared to fitting linear dynamical models without nuclear norm smoothing.
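Nuclear norm minimization of the kind the abstract describes is typically solved with the penalty's proximal operator: soft-thresholding of singular values. A minimal sketch, assuming a simple Gaussian-noise setting rather than the paper's exponential-family extension:

```python
import numpy as np

def svt(Y, tau):
    """Proximal operator of tau * ||X||_* : shrink singular values by tau."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)   # small (noise) singular values vanish
    return U @ np.diag(s_shrunk) @ Vt

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))  # rank-3 truth
Y = A + 0.1 * rng.standard_normal(A.shape)                       # noisy observation
X = svt(Y, tau=2.0)   # one prox step already recovers a low-rank estimate
```

The convexity the abstract emphasizes comes from replacing a hard rank constraint with this nuclear-norm penalty, so the low-rank structure is recovered by convex optimization rather than a non-convex factorization.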
Low-dimensional neural features predict muscle EMG signals
 In Proc. IEEE EMBC, 2010
"... Abstract — Understanding the relationship between neural activity in motor cortex and muscle activity during movements is important both for basic science and for the design of neural prostheses. While there has been significant work in decoding muscle EMG from neural data, decoders often require ma ..."
Abstract

Cited by 5 (2 self)
Understanding the relationship between neural activity in motor cortex and muscle activity during movements is important both for basic science and for the design of neural prostheses. While there has been significant work in decoding muscle EMG from neural data, decoders often require many parameters, which makes the analysis susceptible to overfitting, reduces generalizability, and makes the results difficult to interpret. To address this issue, we recorded simultaneous neural activity from the motor cortices (M1/PMd) of rhesus monkeys performing an arm-reaching task while recording EMG from arm muscles. In this work, we focused on relating the mean neural activity (averaged across reach trials to one target) to the corresponding mean EMG. We reduced the dimensionality of the neural data and found that the curvature of the low-dimensional (low-D) neural activity could be used as a signature of muscle activity. Using this signature, and without directly fitting EMG data to the neural activity, we derived neural axes based on reaches to only one reach target (<5% of the data) that could explain EMG for reaches across multiple targets (average R² = 0.65). Our results suggest that cortical population activity is tightly related to muscle EMG measurements, predicting a lag between cortical activity and movement generation of 47.5 ms. Furthermore, our ability to predict EMG features across different movements suggests that there are fundamental axes or directions in the low-D neural space along which the neural population activity moves to activate particular muscles.
Variational Gaussian-process factor analysis for modeling spatiotemporal data
"... We present a probabilistic factor analysis model which can be used for studying spatiotemporal datasets. The spatial and temporal structure is modeled by using Gaussian process priors both for the loading matrix and the factors. The posterior distributions are approximated using the variational Bay ..."
Abstract

Cited by 5 (4 self)
We present a probabilistic factor analysis model which can be used for studying spatiotemporal datasets. The spatial and temporal structure is modeled by using Gaussian process priors both for the loading matrix and the factors. The posterior distributions are approximated using the variational Bayesian framework. The high computational cost of Gaussian process modeling is reduced by using sparse approximations. The model is used to compute reconstructions of the global sea surface temperatures from a historical dataset. The results suggest that the proposed model can outperform state-of-the-art reconstruction systems.
Modeling Correlated Arrival Events with Latent Semi-Markov Processes
"... The analysis of correlated point process data has wide applications, ranging from biomedical research to network analysis. In this work, we model such data as generated by a latent collection of continuoustime binary semiMarkov processes, corresponding to external events appearing and disappear ..."
Abstract

Cited by 3 (2 self)
The analysis of correlated point process data has wide applications, ranging from biomedical research to network analysis. In this work, we model such data as generated by a latent collection of continuous-time binary semi-Markov processes, corresponding to external events appearing and disappearing. A continuous-time modeling framework is more appropriate for multichannel point process data than a binning approach requiring time discretization, and we show connections between our model and recent ideas from the discrete-time literature. We describe an efficient MCMC algorithm for posterior inference, and apply our ideas to both synthetic data and a real-world biometrics application.
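To make the latent-process idea concrete, here is an illustrative simulator for one binary semi-Markov channel of the kind the model posits; the gamma dwell-time parameters are arbitrary assumptions (an exponential dwell would reduce this to an ordinary Markov process):

```python
import numpy as np

def simulate_semi_markov(T, rng, shape=(2.0, 2.0), scale=(1.0, 0.5)):
    """Return a list of (switch_time, new_state) on [0, T], starting in state 0.

    Dwell time in state s is drawn from Gamma(shape[s], scale[s]); the
    non-exponential dwell distribution is what makes the process semi-Markov.
    """
    t, state, path = 0.0, 0, [(0.0, 0)]
    while True:
        t += rng.gamma(shape[state], scale[state])   # time spent in this state
        if t >= T:
            return path
        state = 1 - state                            # binary: flip on/off
        path.append((t, state))

rng = np.random.default_rng(0)
path = simulate_semi_markov(20.0, rng)
```

Because the switch times are real-valued, no binning of the observed point processes is required, which is the modeling advantage the abstract claims over discrete-time approaches.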
Information-theoretic metric learning: 2-D linear projections of neural data for visualization
 In 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 2013
"... Abstract — Intracortical neural recordings are typically highdimensional due to many electrodes, channels, or units and high sampling rates, making it very difficult to visually inspect differences among responses to various conditions. By representing the neural response in a lowdimensional spac ..."
Abstract

Cited by 3 (3 self)
Intracortical neural recordings are typically high-dimensional due to many electrodes, channels, or units and high sampling rates, making it very difficult to visually inspect differences among responses to various conditions. By representing the neural response in a low-dimensional space, a researcher can visually evaluate the amount of information the response carries about the conditions. We consider a linear projection to 2-D space that also parametrizes a metric between neural responses. The projection, and corresponding metric, should preserve class-relevant information pertaining to different behavior or stimuli. We find the projection as a solution to the information-theoretic optimization problem of maximizing the information between the projected data and the class labels. The method is applied to two datasets using different types of neural responses: motor cortex neuronal firing rates of a macaque during a center-out reaching task, and local field potentials in the somatosensory cortex of a rat during tactile stimulation of the forepaw. In both cases, projected data points preserve the natural topology of targets or peripheral touch sites. Using the learned metric on the neural responses increases the nearest-neighbor classification rate versus the original data; thus, the metric is tuned to distinguish among the conditions.