Statistical models for neural encoding, decoding, and optimal stimulus design
Computational Neuroscience: Progress in Brain Research, 2006
Abstract

Cited by 53 (17 self)
There are two basic problems in the statistical analysis of neural data. The “encoding” problem concerns how information is encoded in neural spike trains: can we predict the spike trains of a neuron (or population of neurons), given an arbitrary stimulus or observed motor response? Conversely, the “decoding” problem concerns how much information is in a spike train: in particular, how well can we estimate the stimulus that gave rise to the spike train? This chapter describes statistical model-based techniques that in some cases provide a unified solution to these two coding problems. These models can capture stimulus dependencies as well as spike-history and interneuronal interaction effects in population spike trains, and are intimately related to biophysically based models of integrate-and-fire type. We describe flexible, powerful likelihood-based methods for fitting these encoding models and then for using the models to perform optimal decoding. Each of these (apparently quite difficult) tasks turns out to be highly computationally tractable, due to a key concavity property of the model likelihood. Finally, we return to the encoding problem to describe how to use these models to adaptively optimize the stimuli presented to the cell on a trial-by-trial basis, in order that we may infer the optimal model parameters as efficiently as possible.
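The concavity property mentioned above can be illustrated with a minimal Poisson GLM encoder fit by maximum likelihood: for an exponential nonlinearity, the negative log-likelihood is convex in the filter, so gradient-based optimization finds the global optimum. This is only a sketch under assumed settings; the dimensions, the filter `k_true`, and the nonlinearity are illustrative, not taken from the chapter.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Simulated Poisson (point-process) encoding model: rate = exp(stimulus . k)
T, D = 2000, 5                     # time bins, stimulus dimensions (assumed)
X = rng.normal(size=(T, D))        # white-noise stimulus design matrix
k_true = np.array([0.8, -0.5, 0.3, 0.0, 0.2])
y = rng.poisson(np.exp(X @ k_true))  # observed spike counts per bin

# Negative log-likelihood; convex in k for the exponential nonlinearity,
# so any optimum found by gradient descent is the global ML estimate.
def negloglik(k):
    eta = X @ k
    return np.sum(np.exp(eta)) - y @ eta

def grad(k):
    return X.T @ (np.exp(X @ k) - y)

k_hat = minimize(negloglik, np.zeros(D), jac=grad, method="L-BFGS-B").x
# k_hat approaches k_true as the number of bins T grows
```

The same structure extends to spike-history and coupling filters by adding the corresponding columns to the design matrix `X`.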
A new look at state-space models for neural data
Journal of Computational Neuroscience, 2010
Abstract

Cited by 53 (25 self)
State-space methods have proven indispensable in neural data analysis. However, common methods for performing inference in state-space models with non-Gaussian observations rely on certain approximations which are not always accurate. Here we review direct optimization methods that avoid these approximations, but that nonetheless retain the computational efficiency of the approximate methods. We discuss a variety of examples, applying these direct optimization techniques to problems in spike train smoothing, stimulus decoding, parameter estimation, and inference of synaptic properties. Along the way, we point out connections to some related standard statistical methods, including spline smoothing and isotonic regression. Finally, we note that the computational methods reviewed here do not in fact depend on the state-space setting at all; instead, the key property we are exploiting involves the bandedness of certain matrices. We close by discussing some applications of this more general point of view, including Markov chain Monte Carlo methods for neural decoding and efficient estimation of spatially varying firing rates.
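The bandedness point can be made concrete with a toy Gaussian case: for a random-walk latent process with independent observations, the posterior precision over the entire latent path is tridiagonal, so the smoothed path comes from a single banded solve in O(T) time. All parameter values below are illustrative assumptions, not from the paper.

```python
import numpy as np
from scipy.linalg import solveh_banded

rng = np.random.default_rng(1)

# Toy model: random-walk latent x_t, noisy observations y_t = x_t + noise.
# The MAP path minimizes ||y - x||^2 / r + ||Dx||^2 / q (D = differencing),
# i.e. solves J x = y / r with TRIDIAGONAL J = I/r + D^T D / q.
T, q, r = 300, 0.05, 0.5           # bins, state noise var, obs noise var
x = np.cumsum(rng.normal(scale=np.sqrt(q), size=T))
y = x + rng.normal(scale=np.sqrt(r), size=T)

diag = np.full(T, 2.0 / q)         # D^T D / q has diagonal [1, 2, ..., 2, 1]/q
diag[0] = diag[-1] = 1.0 / q
off = np.full(T - 1, -1.0 / q)     # and off-diagonal -1/q

ab = np.zeros((2, T))              # upper banded storage for solveh_banded
ab[0, 1:] = off
ab[1] = diag + 1.0 / r

x_map = solveh_banded(ab, y / r)   # smoothed (MAP) path in O(T) time
```

Nothing here used the Kalman recursions; only the banded structure of the posterior precision matters, which is exactly the generalization the abstract emphasizes.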
Gaussian-process factor analysis for low-dimensional single-trial analysis of neural population activity
 J Neurophysiol
Abstract

Cited by 52 (15 self)
A corrigendum for this article has been published.
Model-based decoding, information estimation, and change-point detection in multi-neuron spike trains
Under review, Neural Computation, 2007
Abstract

Cited by 38 (18 self)
Understanding how stimulus information is encoded in spike trains is a central problem in computational neuroscience. Decoding methods provide an important tool for addressing this problem, by allowing us to explicitly read out the information contained in spike responses. Here we introduce several decoding methods based on point-process neural encoding models (i.e. “forward” models that predict spike responses to novel stimuli). These models have concave log-likelihood functions, allowing for efficient fitting via maximum likelihood. Moreover, we may use the likelihood of the observed spike trains under the model to perform optimal decoding. We present: (1) a tractable algorithm for computing the maximum a posteriori (MAP) estimate of the stimulus — the most probable stimulus to have generated the observed single or multiple-spike-train response, given some prior distribution over the stimulus; (2) a Gaussian approximation to the posterior distribution, which allows us to quantify the fidelity with which various stimulus features are encoded; (3) an efficient method for estimating the mutual information between the stimulus and the response; and (4) a framework for the detection of change-point times (e.g. the time at which the stimulus undergoes a change in mean or variance), by marginalizing over the posterior distribution of stimuli. We show several examples illustrating the performance of these estimators with simulated data.
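Items (1) and (2) can be sketched in a few lines for the simplest case: a known Poisson encoder with an independent Gaussian prior per stimulus bin, where the log posterior is concave and its (here diagonal) Hessian at the MAP gives the Gaussian/Laplace approximation. The gain `k` and all dimensions are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# Assumed encoding model: rate_t = exp(k * x_t), prior x_t ~ N(0, 1).
T, k = 100, 1.5                    # time bins, known filter gain (assumed)
x_true = rng.normal(size=T)
y = rng.poisson(np.exp(k * x_true))

# Negative log posterior: Poisson likelihood + Gaussian prior. Concave
# log posterior in x, so the MAP stimulus is found reliably by L-BFGS.
def neg_log_post(x):
    return np.sum(np.exp(k * x) - y * k * x + 0.5 * x ** 2)

def grad(x):
    return k * (np.exp(k * x) - y) + x

x_map = minimize(neg_log_post, np.zeros(T), jac=grad, method="L-BFGS-B").x

# Laplace (Gaussian) approximation: inverse Hessian at the MAP. Here the
# Hessian is diagonal, so each bin gets its own posterior variance,
# quantifying how faithfully that stimulus feature is encoded.
post_var = 1.0 / (k ** 2 * np.exp(k * x_map) + 1.0)
```

With a temporally correlated prior the Hessian becomes banded rather than diagonal, and the same computation stays tractable via banded solvers.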
Empirical models of spiking in neural populations
Abstract

Cited by 15 (13 self)
Neurons in the neocortex code and compute as part of a locally interconnected population. Large-scale multi-electrode recording makes it possible to access these population processes empirically by fitting statistical models to unaveraged data. What statistical structure best describes the concurrent spiking of cells within a local network? We argue that in the cortex, where firing exhibits extensive correlations in both time and space and where a typical sample of neurons still reflects only a very small fraction of the local population, the most appropriate model captures shared variability by a low-dimensional latent process evolving with smooth dynamics, rather than by putative direct coupling. We test this claim by comparing a latent dynamical model with realistic spiking observations to coupled generalised linear spike-response models (GLMs) using cortical recordings. We find that the latent dynamical approach outperforms the GLM in terms of goodness-of-fit, and reproduces the temporal correlations in the data more accurately. We also compare models whose observation processes are either Gaussian or point-process based, finding that the non-Gaussian model provides slightly better goodness-of-fit and more realistic population spike counts.
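The latent-dynamical view can be sketched generatively: a single smooth latent process drives every neuron through per-neuron loadings, so pairwise spike-count correlations emerge without any direct coupling between cells. The dynamics, loadings, and baseline below are invented for illustration and stand in for parameters that would be fit to data.

```python
import numpy as np

rng = np.random.default_rng(3)

# One smooth shared latent (AR(1)) drives an 8-neuron population.
T, N = 2000, 8
a, s = 0.95, 0.4                   # assumed dynamics: slow, smooth latent
z = np.zeros(T)
for t in range(1, T):
    z[t] = a * z[t - 1] + s * rng.normal()

C = rng.uniform(0.4, 1.0, size=N)  # per-neuron loadings onto the latent
b = -1.0                           # baseline log firing rate
rates = np.exp(b + np.outer(z, C)) # (T, N) conditional firing rates
counts = rng.poisson(rates)        # Poisson (point-process-like) counts

# Shared latent variability induces positive pairwise correlations,
# even though no neuron's rate depends directly on another's spikes.
corr = np.corrcoef(counts.T)
```

A coupled GLM would instead explain these correlations with direct spike-to-spike filters between the recorded cells; the abstract's claim is that for small samples of a large population, the latent-process account fits cortical data better.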
Neural Decoding of Hand Motion Using a Linear State-Space Model With Hidden States
Abstract

Cited by 13 (1 self)
Abstract—The Kalman filter has been proposed as a model to decode neural activity measured from the motor cortex in order to obtain real-time estimates of hand motion in behavioral neurophysiological experiments. However, currently used linear state-space models underlying the Kalman filter do not take into account other behavioral states, such as muscular activity or the subject’s level of attention, which are often unobservable during experiments but may play important roles in characterizing neurally controlled hand movement. To address this issue, we represent these unknown states as one multidimensional hidden state in the linear state-space framework. This new model assumes that the observed neural firing rate is directly related to this hidden state. The dynamics of the hand state are also allowed to impact the dynamics of the hidden state, and vice versa. The parameters in the model can be identified by a conventional expectation-maximization algorithm. Since this model still uses the linear Gaussian framework, hand-state decoding can be performed by the efficient Kalman filter algorithm. Experimental results show that this new model provides a more appropriate representation of the neural data and generates more accurate decoding. Furthermore, we have used recently developed computationally efficient methods by incorporating a priori information about the targets of the reaching movement. Our results show that the hidden-state model with target conditioning further improves decoding accuracy. Index Terms—Hidden states, Kalman filter, motor cortex, neural decoding, state-space model.
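The augmentation idea amounts to stacking the hand state and the hidden state into one vector, so a single linear-Gaussian model (and hence one standard Kalman filter) covers both. The sketch below uses made-up parameters for illustration; in the paper they would be identified by EM from neural recordings.

```python
import numpy as np

rng = np.random.default_rng(4)

# Augmented state: [hand (2-D); hidden (1-D)]. Hand and hidden dynamics
# are allowed to influence each other via off-diagonal terms in A, and
# the firing rates read out the FULL augmented state through H.
dim_hand, dim_hidden, dim_obs = 2, 1, 5
d = dim_hand + dim_hidden

A = 0.9 * np.eye(d)                # joint dynamics (assumed, not fit)
A[0, 2] = A[2, 0] = 0.05           # hand <-> hidden interaction terms
Q = 0.1 * np.eye(d)                # process noise covariance
H = rng.normal(size=(dim_obs, d))  # observation (firing-rate) matrix
R = 0.5 * np.eye(dim_obs)          # observation noise covariance

# Simulate the joint model, then decode with the standard Kalman filter
T = 200
x = np.zeros((T, d)); y = np.zeros((T, dim_obs))
for t in range(1, T):
    x[t] = A @ x[t - 1] + rng.multivariate_normal(np.zeros(d), Q)
    y[t] = H @ x[t] + rng.multivariate_normal(np.zeros(dim_obs), R)

m, P = np.zeros(d), np.eye(d)
est = np.zeros((T, d))
for t in range(T):
    m, P = A @ m, A @ P @ A.T + Q               # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)              # Kalman gain
    m = m + K @ (y[t] - H @ m)                  # update with firing rates
    P = (np.eye(d) - K @ H) @ P
    est[t] = m

# Only the hand components are reported; the hidden component simply
# soaks up shared variability the hand state alone cannot explain.
hand_mse = np.mean((est[:, :dim_hand] - x[:, :dim_hand]) ** 2)
```

Because the augmented model stays linear-Gaussian, decoding cost is unchanged; only the state dimension grows.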
Statistical models of spike trains
In C. Laing & G. Lord (Eds.), Stochastic methods in neuroscience, 2009
Abstract

Cited by 6 (4 self)
Spiking neurons make inviting targets for analytical methods based on stochastic processes: spike trains carry information in their temporal patterning, yet they are often highly irregular across time and across experimental replications. The bulk of this volume is devoted to mathematical and biophysical models useful in understanding neurophysiological processes. In this chapter we consider statistical models for analyzing spike train data. Strictly speaking, what we would call a statistical model for spike trains is simply a probabilistic description of the sequence of spikes. But it is somewhat misleading to ignore the data-analytical context of these models. In particular, we want to make use of these probabilistic tools for the purpose of scientific inference. The leap from simple descriptive uses of probability to inferential applications is worth emphasizing for two reasons. First, this leap was one of the great conceptual advances in science, taking roughly two hundred years. It was not until the late 1700s that there emerged any clear notion of inductive (or what we would now call statistical) reasoning; it was not until the first half of the twentieth century that modern methods began to be developed systematically; and it was only in the second half of the twentieth century that these methods …