Results 1–10 of 53
Model-based decoding, information estimation, and change-point detection in multi-neuron spike trains
Under review, Neural Computation, 2007
"... Understanding how stimulus information is encoded in spike trains is a central problem in computational neuroscience. Decoding methods provide an important tool for addressing this problem, by allowing us to explicitly read out the information contained in spike responses. Here we introduce several ..."
Abstract

Cited by 38 (18 self)
 Add to MetaCart
Understanding how stimulus information is encoded in spike trains is a central problem in computational neuroscience. Decoding methods provide an important tool for addressing this problem, by allowing us to explicitly read out the information contained in spike responses. Here we introduce several decoding methods based on point-process neural encoding models (i.e., “forward” models that predict spike responses to novel stimuli). These models have concave log-likelihood functions, allowing for efficient fitting via maximum likelihood. Moreover, we may use the likelihood of the observed spike trains under the model to perform optimal decoding. We present: (1) a tractable algorithm for computing the maximum a posteriori (MAP) estimate of the stimulus, the most probable stimulus to have generated the observed single- or multiple-spike-train response, given some prior distribution over the stimulus; (2) a Gaussian approximation to the posterior distribution, which allows us to quantify the fidelity with which various stimulus features are encoded; (3) an efficient method for estimating the mutual information between the stimulus and the response; and (4) a framework for the detection of change-point times (e.g., the time at which the stimulus undergoes a change in mean or variance), by marginalizing over the posterior distribution of stimuli. We show several examples illustrating the performance of these estimators with simulated data.
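The concave-likelihood MAP computation described above can be illustrated with a minimal sketch. Everything below is a toy stand-in, not the paper's model: the encoding model is an instantaneous Poisson GLM with rate exp(k·x_t) and known gain k, so the posterior over the stimulus factorizes across bins, the log posterior is concave, and the Hessian is diagonal.

```python
import numpy as np

# Toy setup (assumed for illustration): instantaneous encoding model
# with rate exp(k * x_t), Gaussian prior x_t ~ N(0, sigma2).
rng = np.random.default_rng(0)
T, k, sigma2 = 50, 1.0, 1.0
x_true = rng.normal(0.0, 1.0, T)
y = rng.poisson(np.exp(k * x_true))        # observed spike counts

def log_posterior(x):
    # Poisson log-likelihood plus Gaussian log-prior, up to constants
    return np.sum(y * k * x - np.exp(k * x)) - np.sum(x ** 2) / (2 * sigma2)

def grad(x):
    return y * k - k * np.exp(k * x) - x / sigma2

# Concavity guarantees a unique MAP estimate; Newton steps (the Hessian
# is diagonal in this toy model) converge quickly.
x_map = np.zeros(T)
for _ in range(100):
    hess = -k ** 2 * np.exp(k * x_map) - 1.0 / sigma2
    x_map = x_map - grad(x_map) / hess
```

Because the log posterior is strictly concave, the zero-gradient point reached by the Newton iteration is the unique MAP estimate; in the papers' non-factorized models the same property holds, but the Hessian is banded rather than diagonal.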
Efficient Markov chain Monte Carlo methods for decoding population spike trains
To appear, Neural Computation, 2010
"... Stimulus reconstruction or decoding methods provide an important tool for understanding how sensory and motor information is represented in neural activity. We discuss Bayesian decoding methods based on an encoding generalized linear model (GLM) that accurately describes how stimuli are transformed ..."
Abstract

Cited by 33 (14 self)
 Add to MetaCart
(Show Context)
Stimulus reconstruction or decoding methods provide an important tool for understanding how sensory and motor information is represented in neural activity. We discuss Bayesian decoding methods based on an encoding generalized linear model (GLM) that accurately describes how stimuli are transformed into the spike trains of a group of neurons. The form of the GLM likelihood ensures that the posterior distribution over the stimuli that caused an observed set of spike trains is log-concave so long as the prior is. This allows the maximum a posteriori (MAP) stimulus estimate to be obtained using efficient optimization algorithms. Unfortunately, the MAP estimate can have a relatively large average error when the posterior is highly non-Gaussian. Here we compare several Markov chain Monte Carlo (MCMC) algorithms that allow for the calculation of general Bayesian estimators involving posterior expectations (conditional on model parameters). An efficient version of the hybrid Monte Carlo (HMC) algorithm was significantly superior to other MCMC methods for Gaussian priors. When the prior distribution has sharp edges and corners, on the other hand, the “hit-and-run” algorithm performed better than other MCMC methods. Using these …
Smoothing of, and parameter estimation from, noisy biophysical recordings
PLoS Comput. Biol., 2006
"... Smoothing biophysical data 1 ..."
(Show Context)
Empirical models of spiking in neural populations
"... Neurons in the neocortex code and compute as part of a locally interconnected population. Largescale multielectrode recording makes it possible to access these population processes empirically by fitting statistical models to unaveraged data. What statistical structure best describes the concurren ..."
Abstract

Cited by 15 (13 self)
 Add to MetaCart
(Show Context)
Neurons in the neocortex code and compute as part of a locally interconnected population. Large-scale multi-electrode recording makes it possible to access these population processes empirically by fitting statistical models to unaveraged data. What statistical structure best describes the concurrent spiking of cells within a local network? We argue that in the cortex, where firing exhibits extensive correlations in both time and space and where a typical sample of neurons still reflects only a very small fraction of the local population, the most appropriate model captures shared variability by a low-dimensional latent process evolving with smooth dynamics, rather than by putative direct coupling. We test this claim by comparing a latent dynamical model with realistic spiking observations to coupled generalised linear spike-response models (GLMs) using cortical recordings. We find that the latent dynamical approach outperforms the GLM in terms of goodness-of-fit, and reproduces the temporal correlations in the data more accurately. We also compare models whose observation processes are derived from either Gaussian or point-process models, finding that the non-Gaussian model provides slightly better goodness-of-fit and more realistic population spike counts.
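The core claim, that a shared low-dimensional latent process with smooth dynamics induces population correlations without any direct coupling, can be illustrated with a toy simulation (all parameters below are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
T, n_cells = 2000, 8
a, s = 0.95, 0.2                            # smooth AR(1) latent dynamics
z = np.zeros(T)
for t in range(1, T):
    z[t] = a * z[t - 1] + s * rng.normal()

loadings = rng.uniform(0.5, 1.0, n_cells)   # per-cell coupling to the latent
rates = np.exp(-1.0 + z[:, None] * loadings[None, :])
spikes = rng.poisson(rates)                 # (T, n_cells) spike-count matrix

# No cell-to-cell coupling was simulated, yet the shared latent input
# produces correlated spike counts across the whole population:
corr = np.corrcoef(spikes.T)
off_diag = corr[~np.eye(n_cells, dtype=bool)]
```

Fitting a coupled GLM to counts generated this way would attribute the correlations to direct couplings, which is exactly the confound the abstract's model comparison is designed to expose.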
Efficient computation of the maximum a posteriori path and parameter estimation in integrate-and-fire and more general state-space models
Journal of Computational Neuroscience, 2009
"... ..."
Designing optimal stimuli to control neuronal spike timing
2011
"... Recent advances in experimental stimulation methods have raised the following important computational question: how can we choose a stimulus that will drive a neuron to output a target spike train with optimal precision, given physiological constraints? Here we adopt an approach based on models whic ..."
Abstract

Cited by 9 (2 self)
 Add to MetaCart
(Show Context)
Recent advances in experimental stimulation methods have raised the following important computational question: how can we choose a stimulus that will drive a neuron to output a target spike train with optimal precision, given physiological constraints? Here we adopt an approach based on models which describe how a stimulating agent (such as an injected electrical current, or laser light interacting with caged neurotransmitters or photosensitive ion channels) affects the spiking activity of neurons. Based on these models, we solve the reverse problem of finding the best time-dependent modulation of the input, subject to hardware limitations as well as physiologically inspired safety measures, that causes the neuron to emit a spike train which, with highest probability, will be close to a target spike train. We adopt fast convex constrained optimization methods to solve this problem. Our methods can potentially be implemented in real time and may also be generalized to the case of many cells, suitable for neural prosthesis applications. Using biologically sensible parameters and constraints, our method finds stimulation patterns that generate very precise spike trains in simulated experiments. We also tested the intracellular current injection method on pyramidal cells in mouse cortical slices, quantifying the dependence of spiking reliability and timing precision on constraints imposed on the applied currents.
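A heavily simplified sketch of the constrained-optimization step: assume (purely for illustration) an instantaneous Poisson model with rate exp(b + I_t), and find the bounded current I that maximizes the likelihood of a target spike train by projected gradient ascent. This suffices because the objective is concave and the amplitude constraint |I_t| ≤ c is a convex box.

```python
import numpy as np

T, b, c = 40, -1.0, 2.0                         # bins, baseline, current bound
rng = np.random.default_rng(3)
y = (rng.uniform(size=T) < 0.2).astype(float)   # hypothetical target train

def loglik(I):
    u = b + I                                   # instantaneous input effect
    return np.sum(y * u - np.exp(u))            # Poisson log-likelihood

def grad(I):
    return y - np.exp(b + I)

I = np.zeros(T)
for _ in range(300):
    I = I + 0.1 * grad(I)                       # gradient ascent step
    I = np.clip(I, -c, c)                       # projection onto |I| <= c
```

At convergence the unconstrained optimum puts I_t = −b on target-spike bins (so the rate there is 1 per bin), while no-spike bins are pushed down to the constraint boundary −c, the "safety-limited" analogue of suppressing spiking as hard as the hardware allows.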
Fast Kalman filtering on quasilinear dendritic trees
2009
"... Optimal filtering of noisy voltage signals on dendritic trees is a key problem in computational cellular neuroscience. However, the state variable in this problem — the vector of voltages at every compartment — is very highdimensional: realistic multicompartmental models often have on the order of ..."
Abstract

Cited by 8 (3 self)
 Add to MetaCart
(Show Context)
Optimal filtering of noisy voltage signals on dendritic trees is a key problem in computational cellular neuroscience. However, the state variable in this problem, the vector of voltages at every compartment, is very high-dimensional: realistic multicompartmental models often have on the order of N = 10^4 compartments. Standard implementations of the Kalman filter require O(N^3) time and O(N^2) space, and are therefore impractical. Here we take advantage of three special features of the dendritic filtering problem to construct an efficient filter: (1) dendritic dynamics are governed by a cable equation on a tree, which may be solved using sparse matrix methods in O(N) time; and current methods for observing dendritic voltage (2) provide low-SNR observations and (3) only image a relatively small number of compartments at a time. The idea is to approximate the Kalman equations in terms of a low-rank perturbation of the steady-state (zero-SNR) solution, which may be obtained in O(N) time using methods that exploit the sparse tree structure of dendritic dynamics. The resulting methods give a very good approximation to the exact Kalman solution, but only require O(N) time and space. We illustrate the method with applications to real and simulated dendritic branching structures, and describe how to extend the techniques to incorporate spatially subsampled, temporally filtered, and nonlinearly transformed observations.
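The sparse-matrix ingredient can be sketched as follows. As a stand-in for a tree, take an unbranched passive cable discretized into N compartments and advance it with an implicit-Euler step, which requires solving a sparse banded system; for tree-structured (here, tridiagonal) systems the sparse LU factorization and each subsequent solve cost roughly O(N), versus O(N^3)/O(N^2) for dense methods. All parameters are illustrative.

```python
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import splu

# Implicit-Euler step of a passive cable on a chain (toy stand-in for a
# tree): solve (I - dt*L) v_new = v_old, with L the sparse second-
# difference operator with sealed (zero-flux) ends.
N, dt, g = 1000, 0.1, 1.0
main = -2.0 * np.ones(N)
main[0] = main[-1] = -1.0                      # sealed-end boundary rows
L = g * diags([np.ones(N - 1), main, np.ones(N - 1)], [-1, 0, 1], format="csc")
A = (identity(N, format="csc") - dt * L).tocsc()
lu = splu(A)                                   # sparse LU: ~O(N) for banded A

v = np.zeros(N)
v[N // 2] = 1.0                                # localized voltage perturbation
for _ in range(10):
    v = lu.solve(v)                            # each step reuses the factors
```

The factorization is computed once and reused every time step, which is the same pattern the fast dendritic filter exploits when propagating its low-rank correction to the steady-state solution.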
Impact of network structure and cellular response on spike time correlations
PLoS Comput. Biol., 2012
"... correlations ..."
(Show Context)
Fast inference in generalized linear models via expected log-likelihoods
J. Comput. Neurosci., 2013
"... Generalized linear models play an essential role in a wide variety of statistical applications. This paper discusses an approximation of the likelihood in these models that can greatly facilitate computation. The basic idea is to replace a sum that appears in the exact loglikelihood by an expectati ..."
Abstract

Cited by 6 (3 self)
 Add to MetaCart
(Show Context)
Generalized linear models play an essential role in a wide variety of statistical applications. This paper discusses an approximation of the likelihood in these models that can greatly facilitate computation. The basic idea is to replace a sum that appears in the exact log-likelihood by an expectation over the model covariates; the resulting “expected log-likelihood” can in many cases be computed significantly faster than the exact log-likelihood. In many neuroscience experiments the distribution over model covariates is controlled by the experimenter, and the expected log-likelihood approximation becomes particularly useful; for example, estimators based on maximizing this expected log-likelihood (or a penalized version thereof) can often be obtained with orders of magnitude computational savings compared to the exact maximum likelihood estimators. A risk analysis establishes that these maximum expected log-likelihood (EL) estimators often come with little cost in accuracy (and in some cases even improved accuracy) compared to standard maximum likelihood estimates. Finally, we find that these methods can significantly decrease the computation time of marginal likelihood calculations for model selection and of Markov chain Monte Carlo methods for sampling from the posterior parameter distribution. We illustrate our results by applying these methods to a computationally challenging dataset of neural spike trains obtained via large-scale multi-electrode recordings in the primate retina.
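The expected log-likelihood idea has a particularly clean form for a Poisson GLM with exponential nonlinearity and Gaussian covariates, where E[exp(xᵀθ)] = exp(θᵀCθ/2) in closed form. The toy comparison below (made-up parameters, not the paper's retinal data) checks that replacing the sum over covariates with n times this expectation changes the log-likelihood only slightly:

```python
import numpy as np

rng = np.random.default_rng(4)
n, d = 20000, 5
C = np.eye(d)                              # covariate covariance (known here)
X = rng.multivariate_normal(np.zeros(d), C, size=n)
theta_true = np.array([0.3, -0.2, 0.1, 0.0, 0.25])
y = rng.poisson(np.exp(X @ theta_true))    # simulated spike counts

def exact_ll(theta):
    u = X @ theta
    return y @ u - np.sum(np.exp(u))       # O(n*d) sum every evaluation

def expected_ll(theta):
    # Replace sum_i exp(x_i . theta) by n * E[exp(x . theta)]
    # = n * exp(theta' C theta / 2) for Gaussian covariates; the data
    # enter only through the sufficient statistic X' y.
    return y @ (X @ theta) - n * np.exp(0.5 * theta @ C @ theta)

# The two objectives agree closely when n is large:
t = np.array([0.2, -0.1, 0.0, 0.1, 0.2])
rel_err = abs(exact_ll(t) - expected_ll(t)) / abs(exact_ll(t))
```

The speedup comes from the second term: once Xᵀy is precomputed, each evaluation of the expected log-likelihood costs O(d²) instead of O(nd), which is where the orders-of-magnitude savings reported above originate.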