Results 1–10 of 82
Characterization of Neural Responses with Stochastic Stimuli
To appear in: The New Cognitive Neurosciences, 3rd edition, M. Gazzaniga (ed.), 2004
Abstract

Cited by 72 (27 self)
…ose response properties are not at least partially known in advance. This chapter provides an overview of some recently developed characterization methods. In general, the ingredients of the problem are: (a) the selection of a set of experimental stimuli; (b) selection of a model of response; (c) a procedure for fitting (estimation) of the model. We discuss solutions of this problem that combine stochastic stimuli with models based on an initial linear filtering stage that serves to reduce the dimensionality of the stimulus space. We begin by describing classical reverse correlation in this context, and then discuss several recent generalizations that increase the power and flexibility of this basic method. Thanks to Brian Lau, Dario Ringach, Nicole Rust, and Brian Wandell for helpful comments on the manuscript. This work was funded by the Howard Hughes Medical Institute, and the Sloan-Swartz Center for Theoretical Visual Neuroscience at New York University.
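The classical reverse correlation the abstract refers to can be sketched in a few lines: with Gaussian white-noise stimuli, the spike-triggered average recovers (up to scale) the linear filter of a linear-nonlinear-Poisson neuron. A minimal simulation, with all parameters (filter shape, stimulus length) chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear-nonlinear-Poisson (LNP) neuron driven by Gaussian white noise.
D = 20                           # filter length (stimulus dimensionality)
T = 50_000                       # number of time bins
i = np.arange(D)
true_filter = np.exp(-i / 4.0) * np.sin(i / 2.0)
true_filter /= np.linalg.norm(true_filter)

stimulus = rng.standard_normal((T, D))       # white-noise stimulus segments
rate = np.exp(stimulus @ true_filter - 1.0)  # exponential point nonlinearity
spikes = rng.poisson(rate)                   # spike count in each bin

# Reverse correlation: the spike-triggered average (STA). For Gaussian
# white noise, the STA is proportional to the underlying linear filter.
sta = spikes @ stimulus / spikes.sum()
sta /= np.linalg.norm(sta)
```

With tens of thousands of bins, the normalized STA is nearly parallel to the true filter; the generalizations discussed in the chapter relax the white-noise and single-filter assumptions made here.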
Prediction and Decoding of Retinal Ganglion Cell Responses with a Probabilistic Spiking Model
2005
Abstract

Cited by 65 (20 self)
… generation. We show that the stimulus selectivity, reliability, and timing precision of primate retinal ganglion cell (RGC) light responses can be reproduced accurately with a simple model consisting of a leaky integrate-and-fire spike generator driven by a linearly filtered stimulus, a post-spike current, and a Gaussian noise current. We fit model parameters for individual RGCs by maximizing the likelihood of observed spike responses to a stochastic visual stimulus. Although compact, the fitted model predicts the detailed time structure of responses to novel stimuli, accurately capturing the interaction between the spiking history and sensory stimulus selectivity. The model also accounts for the variability in responses to repeated stimuli, even when fit to data from a single (non-repeating) stimulus sequence. Finally, the model can be used to derive an explicit, maximum-likelihood decoding rule for neural spike trains, thus providing a tool for assessing the limitations that spiking variability imposes on sensory performance.
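The model class described here (a leaky integrate-and-fire spike generator driven by a filtered stimulus, a post-spike current, and noise) can be forward-simulated in a few lines. The sketch below uses made-up parameters and a sinusoidal stand-in for the filtered stimulus; it is not the paper's fitted model or its likelihood-based fitting procedure:

```python
import numpy as np

rng = np.random.default_rng(1)

dt = 0.001                       # 1 ms time step
T = 2000                         # 2 s of simulation
tau = 0.02                       # membrane time constant (s)
v_thresh, v_reset = 1.0, 0.0     # threshold and reset (arbitrary units)
noise_sd = 0.1                   # Gaussian noise current strength

# Stand-in for the linearly filtered stimulus: a slow sinusoidal drive.
drive = 1.2 + 0.4 * np.sin(2 * np.pi * 2.0 * np.arange(T) * dt)

# Post-spike current: a brief hyperpolarizing exponential (refractory-like).
h_len = 50
h_kernel = -3.0 * np.exp(-np.arange(h_len) * dt / 0.01)

v = 0.0
spikes = np.zeros(T, dtype=int)
h = np.zeros(T + h_len)          # summed post-spike currents
for t in range(T):
    v += (dt / tau) * (drive[t] + h[t] - v) \
         + noise_sd * np.sqrt(dt) * rng.standard_normal()
    if v >= v_thresh:            # threshold crossing: emit spike, reset
        spikes[t] = 1
        v = v_reset
        h[t + 1 : t + 1 + h_len] += h_kernel
```

The hyperpolarizing post-spike current is what lets this model capture refractoriness and spike-history effects; fitting such a model by maximum likelihood is the paper's contribution.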
Statistical models for neural encoding, decoding, and optimal stimulus design
Computational Neuroscience: Progress in Brain Research, 2006
Abstract

Cited by 51 (17 self)
There are two basic problems in the statistical analysis of neural data. The “encoding” problem concerns how information is encoded in neural spike trains: can we predict the spike trains of a neuron (or population of neurons), given an arbitrary stimulus or observed motor response? Conversely, the “decoding” problem concerns how much information is in a spike train: in particular, how well can we estimate the stimulus that gave rise to the spike train? This chapter describes statistical model-based techniques that in some cases provide a unified solution to these two coding problems. These models can capture stimulus dependencies as well as spike history and interneuronal interaction effects in population spike trains, and are intimately related to biophysically based models of integrate-and-fire type. We describe flexible, powerful likelihood-based methods for fitting these encoding models and then for using the models to perform optimal decoding. Each of these (apparently quite difficult) tasks turns out to be highly computationally tractable, due to a key concavity property of the model likelihood. Finally, we return to the encoding problem to describe how to use these models to adaptively optimize the stimuli presented to the cell on a trial-by-trial basis, in order that we may infer the optimal model parameters as efficiently as possible.
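The concavity property mentioned at the end is easy to demonstrate for the simplest such encoding model, a Poisson neuron with an exponential nonlinearity: its spike-count log-likelihood is concave in the filter, so maximum-likelihood fitting reduces to a smooth convex optimization. A toy sketch (dimensions, nonlinearity, and the clipping safeguard are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# Toy Poisson encoding model: spike count y_t ~ Poisson(exp(x_t . k)).
T, D = 5000, 8
X = rng.standard_normal((T, D))          # stimulus segment in each time bin
k_true = 0.2 * rng.standard_normal(D)    # "true" filter (known only in simulation)
y = rng.poisson(np.exp(X @ k_true))      # observed spike counts

def neg_log_lik(k):
    # Poisson negative log-likelihood up to the constant sum(log y_t!):
    # sum_t exp(x_t . k) - sum_t y_t (x_t . k); convex in k.
    u = np.clip(X @ k, -30, 30)          # clip only for numerical safety;
    return np.exp(u).sum() - y @ u       # inactive near the optimum

k_hat = minimize(neg_log_lik, np.zeros(D), method="L-BFGS-B").x
```

Because the objective is convex, any local optimizer recovers the global maximum-likelihood filter; the chapter's models add spike-history and coupling terms while preserving this concavity.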
Model-based decoding, information estimation, and change-point detection in multi-neuron spike trains
Under review, Neural Computation, 2007
Abstract

Cited by 37 (17 self)
Understanding how stimulus information is encoded in spike trains is a central problem in computational neuroscience. Decoding methods provide an important tool for addressing this problem, by allowing us to explicitly read out the information contained in spike responses. Here we introduce several decoding methods based on point-process neural encoding models (i.e., “forward” models that predict spike responses to novel stimuli). These models have concave log-likelihood functions, allowing for efficient fitting via maximum likelihood. Moreover, we may use the likelihood of the observed spike trains under the model to perform optimal decoding. We present: (1) a tractable algorithm for computing the maximum a posteriori (MAP) estimate of the stimulus — the most probable stimulus to have generated the observed single- or multiple-spike train response, given some prior distribution over the stimulus; (2) a Gaussian approximation to the posterior distribution, which allows us to quantify the fidelity with which various stimulus features are encoded; (3) an efficient method for estimating the mutual information between the stimulus and the response; and (4) a framework for the detection of change-point times (e.g., the time at which the stimulus undergoes a change in mean or variance), by marginalizing over the posterior distribution of stimuli. We show several examples illustrating the performance of these estimators with simulated data.
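Item (1), MAP decoding under a concave log-posterior, can be sketched with a toy Poisson encoding model and a standard-normal stimulus prior; all dimensions and model choices below are illustrative assumptions, not the paper's setup:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

# Toy setup: N Poisson neurons with known filters K respond to one stimulus x.
N, D = 60, 5
K = 0.5 * rng.standard_normal((N, D))    # known encoding filters
x_true = rng.standard_normal(D)          # stimulus to be decoded
y = rng.poisson(np.exp(K @ x_true))      # observed spike counts

def neg_log_posterior(x):
    # Poisson log-likelihood plus a standard-normal log-prior on x;
    # both pieces are concave in x, so MAP decoding is a convex problem.
    u = np.clip(K @ x, -30, 30)          # clip only for numerical safety
    return np.exp(u).sum() - y @ u + 0.5 * x @ x

x_map = minimize(neg_log_posterior, np.zeros(D), method="L-BFGS-B").x
```

The same log-concavity that makes the MAP estimate tractable also justifies the paper's Gaussian approximation to the posterior (item 2).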
Spike Inference from Calcium Imaging Using Sequential Monte Carlo Methods
2009
Abstract

Cited by 31 (9 self)
As recent advances in calcium sensing technologies facilitate simultaneously imaging action potentials in neuronal populations, complementary analytical tools must also be developed to maximize the utility of this experimental paradigm. Although the observations here are fluorescence movies, the signals of interest—spike trains and/or time-varying intracellular calcium concentrations—are hidden. Inferring these hidden signals is often problematic due to noise, nonlinearities, slow imaging rate, and unknown biophysical parameters. We overcome these difficulties by developing sequential Monte Carlo methods (particle filters) based on biophysical models of spiking, calcium dynamics, and fluorescence. We show that even in simple cases, the particle filters outperform the optimal linear (i.e., Wiener) filter, both by obtaining better estimates and by providing error bars. We then relax a number of our model assumptions to incorporate nonlinear saturation of the fluorescence signal, as well as external stimulus and spike history dependence (e.g., refractoriness) of the spike trains. Using both simulations and in vitro fluorescence observations, we demonstrate temporal superresolution by inferring when within a frame each spike occurs. Furthermore, the model parameters may be estimated using expectation maximization with only a very limited amount of data (e.g., ~5–10 s or 5–40 spikes), without the requirement of any simultaneous electrophysiology or imaging experiments.
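A minimal bootstrap particle filter of the kind described can be written for a deliberately simplified generative model (Bernoulli spikes, geometric calcium decay, additive Gaussian fluorescence noise); the paper's model and its parameter-estimation machinery are much richer:

```python
import numpy as np

rng = np.random.default_rng(4)

# --- Simulate the toy generative model ---
T, gamma, A, p, sigma = 300, 0.9, 1.0, 0.05, 0.2
spikes = rng.random(T) < p               # hidden Bernoulli spike train
C = np.zeros(T)                          # hidden calcium concentration
C[0] = A * spikes[0]
for t in range(1, T):
    C[t] = gamma * C[t - 1] + A * spikes[t]
F = C + sigma * rng.standard_normal(T)   # observed fluorescence trace

# --- Bootstrap particle filter over the hidden calcium/spike state ---
P = 500                                  # number of particles
c = np.zeros(P)                          # per-particle calcium
spike_prob = np.zeros(T)                 # inferred P(spike at t | F up to t)
for t in range(T):
    s = rng.random(P) < p                # propose spikes from the prior
    c = gamma * c + A * s
    logw = -0.5 * ((F[t] - c) / sigma) ** 2   # Gaussian observation likelihood
    w = np.exp(logw - logw.max())        # stabilized, unnormalized weights
    w /= w.sum()
    spike_prob[t] = w @ s                # weighted posterior spike probability
    c = c[rng.choice(P, size=P, p=w)]    # multinomial resampling
```

Unlike a linear deconvolution, the filter returns a full posterior over spikes (here summarized by `spike_prob`), which is what provides the error bars mentioned in the abstract.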
Exact Hamiltonian Monte Carlo for Truncated Multivariate Gaussians
Abstract

Cited by 17 (2 self)
We present a Hamiltonian Monte Carlo algorithm to sample from multivariate Gaussian distributions in which the target space is constrained by linear and quadratic inequalities or products thereof. The Hamiltonian equations of motion can be integrated exactly and there are no parameters to tune. The algorithm mixes faster and is more efficient than Gibbs sampling. The runtime depends on the number and shape of the constraints, but the algorithm is highly parallelizable. In many cases, we can exploit special structure in the covariance matrices of the untruncated Gaussian to further speed up the runtime. A simple extension of the algorithm permits sampling from distributions whose log-density is piecewise quadratic, as in the “Bayesian Lasso” model.
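For the simplest constraint set (a standard Gaussian truncated to the positive orthant, x >= 0), the trajectory-and-bounce idea can be sketched as follows; this reduces the algorithm to axis-aligned walls and is not the general linear/quadratic-constraint implementation:

```python
import numpy as np

rng = np.random.default_rng(5)

def exact_hmc_positive_orthant(x0, n_samples, T=np.pi / 2):
    """Exact HMC for N(0, I) truncated to the positive orthant x >= 0.

    The Hamiltonian trajectory x(t) = x cos t + v sin t is known in closed
    form; wall-hit times are found analytically and the velocity component
    normal to the wall is reflected. No step size needs tuning.
    """
    x = np.asarray(x0, dtype=float).copy()
    d = len(x)
    samples = np.empty((n_samples, d))
    for i in range(n_samples):
        v = rng.standard_normal(d)
        v[x <= 0.0] = np.abs(v[x <= 0.0])   # momentum on a wall must point inward
        t_left = T
        while True:
            # Coordinate j follows r_j cos(t - phi_j); it hits the wall
            # x_j = 0 at t = phi_j + pi/2 (mod pi). Take each first hit.
            phi = np.arctan2(v, x)
            t_hit = np.where(np.hypot(x, v) > 0,
                             np.mod(phi + np.pi / 2, np.pi), np.inf)
            t_hit[t_hit < 1e-10] += np.pi   # leaving this wall, not hitting it
            j = int(np.argmin(t_hit))
            if t_hit[j] >= t_left:          # no wall reached: finish trajectory
                x, v = (x * np.cos(t_left) + v * np.sin(t_left),
                        v * np.cos(t_left) - x * np.sin(t_left))
                break
            t = t_hit[j]
            x, v = (x * np.cos(t) + v * np.sin(t),
                    v * np.cos(t) - x * np.sin(t))
            x[j], v[j] = 0.0, -v[j]         # bounce: reflect normal velocity
            t_left -= t
        samples[i] = x
    return samples

samples = exact_hmc_positive_orthant(np.ones(2), 2000)
```

Each coordinate of the target is an independent half-normal, so the sample mean should approach sqrt(2/pi) ≈ 0.798; general linear constraints replace the per-coordinate hit times with hit times for each constraint hyperplane.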
Efficient computation of the maximum a posteriori path and parameter estimation in integrate-and-fire and more general state-space models
Journal of Computational Neuroscience, 2009
Estimating Information Rates with Confidence Intervals in Neural Spike Trains
2007
Abstract

Cited by 12 (0 self)
Information theory provides a natural set of statistics to quantify the amount of knowledge a neuron conveys about a stimulus. A related work (Kennel, Shlens, Abarbanel, & Chichilnisky, 2005) demonstrated how to reliably estimate, with a Bayesian confidence interval, the entropy rate from a discrete, observed time series. We extend this method to measure the rate of novel information that a neural spike train encodes about a stimulus—the average and specific mutual information rates. Our estimator makes few assumptions about the underlying neural dynamics, shows excellent performance in experimentally relevant regimes, and uniquely provides confidence intervals bounding the range of information rates compatible with the observed spike train. We validate this estimator with simulations of spike trains and highlight how stimulus parameters affect its convergence in bias and variance. Finally, we apply these ideas to a recording from a guinea pig retinal ganglion cell and compare results to a simple linear decoder.
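To make the estimated quantity concrete, here is a plug-in estimate of the mutual information between a memoryless binary stimulus and a binary spike response. This toy calculation is not the paper's estimator, which operates on temporal word distributions and returns Bayesian confidence intervals:

```python
import numpy as np

rng = np.random.default_rng(6)

# Binary stimulus; spiking probability depends on the stimulus value.
T = 100_000
stim = rng.integers(0, 2, T)
p_spike = np.where(stim == 1, 0.3, 0.05)
resp = (rng.random(T) < p_spike).astype(int)

def entropy_bits(p):
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Plug-in mutual information: I(S; R) = H(S) + H(R) - H(S, R).
joint = np.histogram2d(stim, resp, bins=2)[0] / T
mi = (entropy_bits(joint.sum(axis=1)) + entropy_bits(joint.sum(axis=0))
      - entropy_bits(joint.ravel()))
# For these parameters the true value is about 0.085 bits per bin.
```

Plug-in estimates like this are biased for small samples and come with no error bars, which is exactly the gap the paper's confidence-interval estimator addresses.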
Spike train probability models for stimulus-driven leaky integrate-and-fire neurons
Neural Computation, 2008