Results 1–10 of 105
On dynamics of integrate-and-fire neural networks with adaptive conductances
Frontiers in Neuroscience, 2008
"... We present a mathematical analysis of a networks with IntegrateandFire neurons with conductance based synapses. Taking into account the realistic fact that the spike time is only known within some finite precision, we propose a model where spikes are effective at times multiple of a characteristic ..."
Abstract

Cited by 34 (17 self)
 Add to MetaCart
(Show Context)
We present a mathematical analysis of a network of Integrate-and-Fire neurons with conductance-based synapses. Taking into account the realistic fact that the spike time is only known within some finite precision, we propose a model where spikes are effective at times that are multiples of a characteristic time scale δ, where δ can be arbitrarily small (in particular, well beyond the numerical precision). We give a complete mathematical characterization of the model dynamics and obtain the following results. The asymptotic dynamics is composed of finitely many stable periodic orbits, whose number and period can be arbitrarily large and can diverge in a region of the synaptic weights space, traditionally called the "edge of chaos", a notion mathematically well defined in the present paper. Furthermore, except at the edge of chaos, there is a one-to-one correspondence between the membrane potential trajectories and the raster plot. This shows that the neural code is entirely "in the spikes" in this case. As a key tool, we introduce an order parameter, easy to compute numerically and closely related to a natural notion of entropy, providing a relevant characterization of the computational capabilities of the network. This allows us to compare the computational capabilities of leaky Integrate-and-Fire models and conductance-based models. The present study considers networks with constant input and without time-dependent plasticity, but the framework has been designed for both extensions.
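As a loose illustration of the discretization idea in this abstract, here is a minimal sketch (not the paper's model) of a network in which spikes take effect only on a grid of multiples of δ. It uses current-based rather than the paper's conductance-based synapses, and all parameter values (N, δ, τ, threshold, weights, input) are hypothetical choices for the example.

```python
import numpy as np

# Minimal sketch of integrate-and-fire dynamics where spikes take effect only
# at multiples of a characteristic time scale delta. Current-based synapses
# are used as a simplification; all parameter values are hypothetical.
rng = np.random.default_rng(0)

N = 50            # number of neurons (hypothetical)
delta = 0.1       # time-scale grid for spike times (ms)
T = 1000          # number of delta-steps to simulate
tau = 20.0        # membrane time constant (ms)
theta = 1.0       # firing threshold
I_ext = 1.05      # constant suprathreshold input (constant-input setting)
W = rng.normal(0.0, 0.2, (N, N))   # assumed Gaussian synaptic weights

leak = np.exp(-delta / tau)        # exact membrane decay over one delta-step
V = np.zeros(N)
raster = np.zeros((T, N), dtype=bool)

for t in range(T):
    spikes = V >= theta            # spikes are registered on the delta-grid
    raster[t] = spikes
    V[spikes] = 0.0                # reset the neurons that fired
    V = leak * V + (1.0 - leak) * (W @ spikes.astype(float) + I_ext)

# Away from the "edge of chaos", the abstract's one-to-one correspondence
# means the raster uniquely determines the membrane trajectories.
print(int(raster.sum()), "spikes in", T, "steps")
```

The raster matrix plays the role of the raster plot discussed above: under the paper's result, it is a complete readout of the dynamics away from the edge of chaos.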
Towards reproducible descriptions of neuronal network models
PLoS Comput Biol, 2009
"... Progress in science depends on the effective exchange of ideas among scientists. New ideas can be assessed and criticized in a meaningful manner only if they are formulated precisely. This applies to simulation studies as well as to experiments and theories. But after more than 50 years of neuronal ..."
Abstract

Cited by 32 (5 self)
 Add to MetaCart
(Show Context)
Progress in science depends on the effective exchange of ideas among scientists. New ideas can be assessed and criticized in a meaningful manner only if they are formulated precisely. This applies to simulation studies as well as to experiments and theories. But after more than 50 years of neuronal network simulations, we still lack a clear and common understanding of the role of computational models in neuroscience as well as established practices for describing network models in publications. This hinders the critical evaluation of network models as well as their reuse. We analyze here 14 research papers proposing neuronal network models of different complexity and find widely varying approaches to model descriptions, with regard to both the means of description and the ordering and placement of material. We further observe great variation in the graphical representation of networks and the notation used in equations. Based on our observations, we propose a good model description practice, composed of guidelines for the organization of publications, a checklist for model descriptions, templates for tables presenting model structure, and guidelines for diagrams of networks. The main purpose of this good practice is to trigger a debate about the communication of neuronal network models in a manner comprehensible to humans, as opposed to machine-readable model description languages. We believe that the good model description practice proposed here, together with a number of other recent initiatives on data, model, and software-sharing, may lead to a deeper and more fruitful exchange of ideas among computational neuroscientists in years to come.
Interoperability of Neuroscience Modeling Software: Current Status and Future Directions
2007

Cited by 27 (12 self)
Neuroscience increasingly uses computational models to assist in the exploration and interpretation of complex phenomena. As a result, considerable effort is in …
NeuroML: a language for describing data-driven models of neurons and networks with a high degree of biological detail. PLoS Comput. Biol. 6:e1000815. doi: 10.1371/journal.pcbi.1000815
Neuron, 2010
"... Biologically detailed single neuron and network models are important for understanding how ion channels, synapses and anatomical connectivity underlie the complex electrical behavior of the brain. While neuronal simulators such as NEURON, GENESIS, MOOSE, NEST, and PSICS facilitate the development of ..."
Abstract

Cited by 20 (7 self)
 Add to MetaCart
(Show Context)
Biologically detailed single neuron and network models are important for understanding how ion channels, synapses and anatomical connectivity underlie the complex electrical behavior of the brain. While neuronal simulators such as NEURON, GENESIS, MOOSE, NEST, and PSICS facilitate the development of these data-driven neuronal models, the specialized languages they employ are generally not interoperable, limiting model accessibility and preventing reuse of model components and cross-simulator validation. To overcome these problems we have used an Open Source software approach to develop NeuroML, a neuronal model description language based on XML (Extensible Markup Language). This enables these detailed models and their components to be defined in a stand-alone form, allowing them to be used across multiple simulators and archived in a standardized format. Here we describe the structure of NeuroML and demonstrate its scope by converting into NeuroML models of a number of different voltage- and ligand-gated conductances, models of electrical coupling, synaptic transmission and short-term plasticity, together with morphologically detailed models of individual neurons. We have also used these NeuroML-based components to develop a highly detailed cortical network model. NeuroML-based model descriptions were validated by demonstrating similar model behavior across five independently …
Review: Why Are Computational Neuroscience and Systems Biology So Separate?
"... Abstract: Despite similar computational approaches, there is surprisingly little interaction between the computational neuroscience and the systems biology research communities. In this review I reconstruct the history of the two disciplines and show that this may explain why they grew up apart. The ..."
Abstract

Cited by 15 (1 self)
 Add to MetaCart
(Show Context)
Abstract: Despite similar computational approaches, there is surprisingly little interaction between the computational neuroscience and the systems biology research communities. In this review I reconstruct the history of the two disciplines and show that this may explain why they grew up apart. The separation is a pity, as both fields can learn quite a bit from each other. Several examples are given, covering sociological, software technical, and methodological aspects. Systems biology is a better-organized community that is very effective at sharing resources, while computational neuroscience has more experience in multiscale modeling and the analysis of information processing by biological systems. Finally, I speculate about how the relationship between the two fields may evolve in the near future.
A Master Equation Formalism for Macroscopic Modeling of Asynchronous Irregular Activity States
2009

Cited by 14 (1 self)
Many efforts have been devoted to modeling asynchronous irregular (AI) activity states, which resemble the complex activity states seen in the cerebral cortex of awake animals. Most such models have considered balanced networks of excitatory and inhibitory spiking neurons in which AI states are sustained through recurrent sparse connectivity, with or without external input. In this letter we propose a mesoscopic description of such AI states. Using the master equation formalism, we derive a second-order mean-field set of ordinary differential equations describing the temporal evolution of randomly connected balanced networks. This formalism takes into account finite-size effects and is applicable to any neuron model as long as its transfer function can be characterized. We compare the predictions of this approach with numerical simulations for different network configurations and parameter spaces. Considering the randomly connected network as a unit, this approach could be used to build large-scale networks of such connected units, with the aim of modeling activity states constrained by macroscopic measurements, such as voltage-sensitive dye imaging.
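The deterministic (first-order) part of such a mean-field description can be sketched in a few lines. This toy version tracks the fractions E and I of active excitatory and inhibitory neurons with dE/dt = −αE + (1−E)f(s_E), using a sigmoidal transfer function and hypothetical weights; it omits the second-order covariance equations that the formalism above adds to capture finite-size effects.

```python
import numpy as np

# First-order mean-field sketch for a balanced E/I network: E and I are the
# fractions of active excitatory/inhibitory neurons. The sigmoidal transfer
# function and all parameter values are illustrative assumptions, not the
# paper's fitted quantities.

def f(s):
    return 1.0 / (1.0 + np.exp(-s))   # assumed sigmoidal transfer function

alpha, h = 1.0, 0.0                   # deactivation rate, external drive
wEE, wEI, wIE, wII = 1.5, 1.0, 1.5, 1.0
E, I = 0.0, 0.0
dt = 0.001

for _ in range(50000):                # forward Euler toward the fixed point
    sE = wEE * E - wEI * I + h        # synaptic drive to excitatory pool
    sI = wIE * E - wII * I + h        # synaptic drive to inhibitory pool
    dE = -alpha * E + (1.0 - E) * f(sE)
    dI = -alpha * I + (1.0 - I) * f(sI)
    E += dt * dE
    I += dt * dI

print(round(E, 3), round(I, 3))       # self-sustained activity fixed point
```

With these (symmetric) toy weights the excitatory and inhibitory activities converge to the same nonzero fixed point, a caricature of a self-sustained AI state; the second-order equations would additionally describe the fluctuations around it.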
Avalanches in a stochastic model of spiking neurons: Supporting Information
"... Here we show how to derive the linear noise approximation in section 2.4 of the main text. We start with the Master equation, which describes the evolution of the probability distribution for the discrete system. Let pk,l(t) = P(k excitatory and l inhibitory neurons are active at time t). We make t ..."
Abstract

Cited by 12 (1 self)
 Add to MetaCart
(Show Context)
Here we show how to derive the linear noise approximation in section 2.4 of the main text. We start with the master equation, which describes the evolution of the probability distribution for the discrete system. Let p_{k,l}(t) = P(k excitatory and l inhibitory neurons are active at time t). We make the simplification that N_E = N_I = N. Starting from the decomposition

k = N\tilde{E} = NE + N^{1/2}\xi_E, \qquad l = N\tilde{I} = NI + N^{1/2}\xi_I,

we derive equations for the evolution of the deterministic part (E, I) and the fluctuating part (\xi_E, \xi_I). The evolution equations take the form of a series in the small parameter N^{-1/2}. The master equation is then

\frac{dp_{k,l}(t)}{dt} = \alpha\left[(k+1)\,p_{k+1,l}(t) - k\,p_{k,l}(t)\right] + \left[(N-k+1)\,f(s_E(k-1,l))\,p_{k-1,l}(t) - (N-k)\,f(s_E(k,l))\,p_{k,l}(t)\right] + \alpha\left[(l+1)\,p_{k,l+1}(t) - l\,p_{k,l}(t)\right] + \left[(N-l+1)\,f(s_I(k,l-1))\,p_{k,l-1}(t) - (N-l)\,f(s_I(k,l))\,p_{k,l}(t)\right] \qquad (1)

where the synaptic inputs are s_E = w_{EE}E - w_{EI}I + h and s_I = w_{IE}E - w_{II}I + h, and f is the response function. Following [1], we introduce the shift operators e^{\partial_k} and e^{-\partial_k}, which formally express

f(k+1) = e^{\partial_k}f(k) = f(k) + \partial_k f(k) + \frac{1}{2}\,\partial_k^2 f(k) + \frac{1}{3!}\,\partial_k^3 f(k) + \dots \qquad (2)

so that the master equation can be rewritten as

\frac{dp_{k,l}(t)}{dt} = \alpha\left(e^{\partial_k} - 1\right)k\,p_{k,l}(t) + \left(e^{-\partial_k} - 1\right)(N-k)\,f(s_E(k,l))\,p_{k,l}(t) + \dots

where we define the drift and diffusion functions …
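The jump rates that define this master equation can be simulated exactly with a Gillespie-style algorithm, which is a useful check on the linear noise approximation. The sketch below assumes a sigmoidal response function f and illustrative weights, and computes the synaptic drives with E = k/N, I = l/N; it only illustrates the rate structure of the two-population master equation, not the paper's analysis.

```python
import math
import random

# Exact stochastic simulation (Gillespie algorithm) of a two-population
# birth-death process with the rate structure of the master equation above:
# each active neuron deactivates at rate alpha, and inactive neurons activate
# at rates (N - k) f(s_E) and (N - l) f(s_I). All parameters are hypothetical.

def f(s):
    """Sigmoidal response function (an assumed choice)."""
    return 1.0 / (1.0 + math.exp(-s))

def simulate(N=100, alpha=1.0, wEE=1.5, wEI=1.0, wIE=1.5, wII=1.0,
             h=0.0, t_end=50.0, seed=1):
    rng = random.Random(seed)
    k, l, t = 0, 0, 0.0
    while t < t_end:
        sE = wEE * k / N - wEI * l / N + h   # drive with E = k/N, I = l/N
        sI = wIE * k / N - wII * l / N + h
        rates = [alpha * k,                  # excitatory deactivation
                 (N - k) * f(sE),            # excitatory activation
                 alpha * l,                  # inhibitory deactivation
                 (N - l) * f(sI)]            # inhibitory activation
        total = sum(rates)                   # always > 0 since f(0) = 0.5
        t += rng.expovariate(total)          # exponential waiting time
        r = rng.random() * total             # pick which event occurred
        if r < rates[0]:
            k -= 1
        elif r < rates[0] + rates[1]:
            k += 1
        elif r < rates[0] + rates[1] + rates[2]:
            l -= 1
        else:
            l += 1
    return k, l

k, l = simulate()
print(k, l)
```

Averaging many such runs (or binning the trajectory) gives the moments that the linear noise approximation predicts from the drift and diffusion functions.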
Learning by reviewing
Journal of Educational Psychology, 2011
"... © 2011 Yamauchi, Kim and Shinomoto. This is an openaccess article subject to a nonexclusive license between the authors and Frontiers Media SA, which permits use, distribution and reproduction in other forums, provided the original authors and source are credited and other Frontiers conditions are ..."
Abstract

Cited by 11 (0 self)
 Add to MetaCart
© 2011 Yamauchi, Kim and Shinomoto. This is an open-access article subject to a non-exclusive license between the authors and Frontiers Media SA, which permits use, distribution and reproduction in other forums, provided the original authors and source are credited and other Frontiers conditions are complied with.
A modified cable formalism for modeling neuronal membranes at high frequencies
Biophys. J, 2008
"... ABSTRACT Intracellular recordings of cortical neurons in vivo display intense subthreshold membrane potential (Vm) activity. The power spectral density of the Vm displays a powerlaw structure at high frequencies (.50 Hz) with a slope of; 2.5. This type of frequency scaling cannot be accounted for b ..."
Abstract

Cited by 10 (8 self)
 Add to MetaCart
(Show Context)
Abstract: Intracellular recordings of cortical neurons in vivo display intense subthreshold membrane potential (Vm) activity. The power spectral density of the Vm displays a power-law structure at high frequencies (>50 Hz) with a slope of approximately −2.5. This type of frequency scaling cannot be accounted for by traditional models, as either single-compartment models or models based on reconstructed cell morphologies display a frequency scaling with a slope close to −4. This slope is due to the fact that the membrane resistance is short-circuited by the capacitance at high frequencies, a situation which may not be realistic. Here, we integrate non-ideal capacitors into the cable equations to reflect the fact that the capacitance cannot be charged instantaneously. We show that the resulting non-ideal cable model can be solved analytically using Fourier transforms. Numerical simulations using a ball-and-stick model yield membrane potential activity with frequency scaling similar to that seen in the experiments. We also discuss the consequences of using non-ideal capacitors for other cellular properties, such as the transmission of high frequencies, which is boosted in non-ideal cables, and voltage attenuation in dendrites. These results suggest that cable equations based on non-ideal capacitors should be used to capture the behavior of neuronal membranes at high frequencies.
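The power-law slope discussed in this abstract is estimated from the high-frequency part of the log-log spectrum. A sketch of that fitting step, applied to a stand-in Lorentzian PSD rather than the paper's non-ideal cable solution (the cutoff frequency and membrane time constant here are hypothetical), might look like:

```python
import numpy as np

# Estimate the high-frequency log-log slope of a power spectral density,
# as one does when comparing model Vm spectra against the ~f^(-2.5) scaling
# reported above. The Lorentzian PSD is a stand-in with a known asymptotic
# slope of -2, used only to illustrate the fitting procedure.

def highfreq_slope(freqs, psd, fmin=50.0):
    """Fit log10(PSD) against log10(f) for f >= fmin Hz; return the slope."""
    mask = freqs >= fmin
    slope, _ = np.polyfit(np.log10(freqs[mask]), np.log10(psd[mask]), 1)
    return slope

f = np.logspace(0, 3, 500)                     # 1 Hz .. 1 kHz, log-spaced
tau = 0.02                                     # assumed 20 ms time constant
lorentzian = 1.0 / (1.0 + (2.0 * np.pi * f * tau) ** 2)

s = highfreq_slope(f, lorentzian)
print(round(s, 2))                             # approaches -2 for this PSD
```

For an ideal single-compartment RC membrane driven by white noise the fitted slope tends to −2 as in this stand-in; the paper's point is that measured Vm spectra and non-ideal cable models fall between such simple asymptotes, near −2.5.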