Results 1–10 of 19
Probabilistic Inference in General Graphical Models through Sampling in Stochastic Networks of Spiking Neurons
, 2011
Abstract

Cited by 8 (3 self)
An important open problem of computational neuroscience is the generic organization of computations in networks of neurons in the brain. We show here through rigorous theoretical analysis that inherent stochastic features of spiking neurons, in combination with simple nonlinear computational operations in specific network motifs and dendritic arbors, enable networks of spiking neurons to carry out probabilistic inference through sampling in general graphical models. In particular, this enables them to carry out probabilistic inference in Bayesian networks with converging arrows (“explaining away”) and with undirected loops, which occur in many real-world tasks. Ubiquitous stochastic features of networks of spiking neurons, such as trial-to-trial variability and spontaneous activity, are necessary ingredients of the underlying computational organization. We demonstrate through computer simulations that this approach can be scaled up to neural emulations of probabilistic inference in fairly large graphical models, yielding some of the most complex computations that have been carried out so far in networks of spiking neurons.
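The core idea of the abstract above, inference by sampling in a graphical model with converging arrows, can be sketched abstractly with ordinary Gibbs sampling on a toy explaining-away network. The network structure and all probabilities below are illustrative choices, not taken from the paper:

```python
import random

# Toy explaining-away network, sampled with Gibbs updates. Two independent
# causes A and B share a common effect C; all probabilities are illustrative.
rng = random.Random(0)
p_a, p_b = 0.1, 0.1                       # prior probabilities of the causes

def p_c_given(a, b):
    """Probability of the effect C = 1 given the two causes."""
    return 0.9 if (a or b) else 0.01

def gibbs_posterior(n=20000):
    """Estimate P(A = 1 | C = 1) by alternately resampling A and B."""
    a = b = 0
    hits = 0
    for _ in range(n):
        w1 = p_a * p_c_given(1, b)        # resample A from P(A | B, C = 1)
        w0 = (1 - p_a) * p_c_given(0, b)
        a = 1 if rng.random() < w1 / (w1 + w0) else 0
        w1 = p_b * p_c_given(a, 1)        # resample B from P(B | A, C = 1)
        w0 = (1 - p_b) * p_c_given(a, 0)
        b = 1 if rng.random() < w1 / (w1 + w0) else 0
        hits += a
    return hits / n

estimate = gibbs_posterior()   # exact posterior is 0.09 / 0.1791 ≈ 0.503
```

Additionally observing B = 1 would lower the sampled posterior on A, which is the "explaining away" effect the abstract refers to.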
Homeostatic plasticity in Bayesian spiking networks as Expectation Maximization with posterior constraints
Abstract

Cited by 8 (2 self)
Recent spiking network models of Bayesian inference and unsupervised learning frequently assume either that inputs arrive in a special format or employ complex computations in neuronal activation functions and synaptic plasticity rules. Here we show in a rigorous mathematical treatment how homeostatic processes, which have previously received little attention in this context, can overcome common theoretical limitations and facilitate the neural implementation and performance of existing models. In particular, we show that homeostatic plasticity can be understood as the enforcement of a ‘balancing’ posterior constraint during probabilistic inference and learning with Expectation Maximization. We link homeostatic dynamics to the theory of variational inference, and show that nontrivial terms, which typically appear during probabilistic inference in a large class of models, drop out. We demonstrate the feasibility of our approach in a spiking Winner-Take-All architecture of Bayesian inference and learning. Finally, we sketch how the mathematical framework can be extended to richer recurrent network architectures. Altogether, our theory provides a novel perspective on the interplay of homeostatic processes and synaptic plasticity in cortical microcircuits, and points to an essential role of homeostasis during inference and learning in spiking networks.
Cognitive computing building block: A versatile and efficient digital neuron model for neurosynaptic cores
 in International Joint Conference on Neural Networks (IJCNN). IEEE
, 2013
Abstract

Cited by 5 (2 self)
Marching along the DARPA SyNAPSE roadmap, IBM unveils a trilogy of innovations towards the TrueNorth cognitive computing system inspired by the brain’s function and efficiency. Judiciously balancing the dual objectives of functional capability and implementation/operational cost, we develop a simple, digital, reconfigurable, versatile spiking neuron model that supports one-to-one equivalence between hardware and simulation and is implementable using only 1272 ASIC gates. Starting with the classic leaky integrate-and-fire neuron, we add: (a) configurable and reproducible stochasticity to the input, the state, and the output; (b) four leak modes that bias the internal state dynamics; (c) deterministic and stochastic thresholds; and (d) six reset modes for rich finite-state behavior. The model supports a wide variety of computational functions and neural codes. We capture 50+ neuron behaviors in a library for hierarchical composition of complex computations and behaviors. Although designed with cognitive algorithms and applications in mind, serendipitously, the neuron model can qualitatively replicate the 20 biologically-relevant behaviors of a dynamical neuron model.
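As a rough sketch of the kind of neuron model the abstract describes (a leaky integrate-and-fire unit extended with a stochastic threshold and a reset mode), the following toy implementation may help. The parameter names, values, and the single leak and reset mode shown are illustrative assumptions and do not reproduce the actual TrueNorth neuron specification:

```python
import random

class StochasticLIF:
    """Toy digital leaky integrate-and-fire neuron with a stochastic threshold.

    A rough sketch in the spirit of the abstract; parameters and the single
    leak/reset mode here are illustrative, not the TrueNorth design."""

    def __init__(self, leak=1, threshold=20, jitter=4, seed=0):
        self.v = 0                    # internal membrane state
        self.leak = leak              # constant amount drained each tick
        self.threshold = threshold    # nominal firing threshold
        self.jitter = jitter          # uniform noise added to the threshold
        self.rng = random.Random(seed)

    def step(self, synaptic_input):
        """Advance one tick; return 1 if the neuron spikes, else 0."""
        self.v += synaptic_input - self.leak
        th = self.threshold + self.rng.randint(-self.jitter, self.jitter)
        if self.v >= th:
            self.v = 0                # "reset to zero" mode; others exist
            return 1
        return 0

neuron = StochasticLIF()
spikes = sum(neuron.step(5) for _ in range(100))   # roughly one spike per ~5 ticks
```

Making the threshold (rather than the input) stochastic is one of several places the abstract says configurable randomness can be injected; the same skeleton could host stochastic input or state updates instead.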
Noise as a Resource for Computation and Learning in Networks of Spiking Neurons
, 2014
Abstract

Cited by 4 (1 self)
We are used to viewing noise as a nuisance in computing systems. This is a pity, since noise will be abundantly available in energy-efficient future nanoscale devices and circuits. I propose here to learn from the way the brain deals with noise, and apparently even benefits from it. Recent theoretical results have provided insight into how this can be achieved: how noise enables networks of spiking neurons to carry out probabilistic inference through sampling, and also enables creative problem solving. In addition, noise supports the self-organization of networks of spiking neurons, and learning from rewards. I will sketch here the main ideas and some consequences of these results. I will also describe why these results are paving the way for a qualitative jump in the computational capability and learning performance of neuromorphic networks of spiking neurons with noise, and for other future computing systems that are able to treat noise as a resource.
Top-Down Feedback in an HMAX-Like Cortical Model of Object Perception Based on Hierarchical Bayesian Networks and Belief Propagation
 PLoS ONE
, 2012
Abstract

Cited by 3 (0 self)
Hierarchical generative models, such as Bayesian networks, and belief propagation have been shown to provide a theoretical framework that can account for perceptual processes, including feedforward recognition and feedback modulation. The framework explains both psychophysical and physiological experimental data and maps well onto the hierarchical distributed cortical anatomy. However, the complexity required to model cortical processes makes inference, even using approximate methods, very computationally expensive. Thus, existing object perception models based on this approach are typically limited to tree-structured networks with no loops, use small toy examples, or fail to account for certain perceptual aspects such as invariance to transformations or feedback reconstruction. In this study we develop a Bayesian network with an architecture similar to that of HMAX, a biologically-inspired hierarchical model of object recognition, and use loopy belief propagation to approximate the model operations (selectivity and invariance). Crucially, the resulting Bayesian network extends the functionality of HMAX by including top-down recursive feedback. Thus, the proposed model not only achieves successful feedforward recognition invariant to noise, occlusions, and changes in position and size, but is also able to reproduce modulatory effects such as illusory contour completion and attention. Our novel and rigorous methodology covers key aspects such as learning using a layer-wise greedy algorithm, combining feedback information from multiple parents and reducing the number of operations required. Overall, this work extends an …
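The loopy belief propagation procedure that this abstract relies on can be illustrated on a minimal example: sum-product message passing on a three-node binary cycle. The potentials below are toy values chosen for illustration, not the HMAX-derived factors of the paper:

```python
# Minimal loopy belief propagation (sum-product) on a three-node binary cycle.
nodes = [0, 1, 2]
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
phi = {i: [1.0, 2.0] for i in nodes}      # unary potentials (slight bias to state 1)

def psi(xi, xj):
    """Attractive pairwise potential: neighboring nodes prefer to agree."""
    return 2.0 if xi == xj else 1.0

# One normalized message per directed edge, initialized uniform.
msgs = {(i, j): [0.5, 0.5] for i in nodes for j in neighbors[i]}

for _ in range(50):   # synchronous updates; convergence is not guaranteed in general
    new = {}
    for (i, j) in msgs:
        m = []
        for xj in (0, 1):
            total = 0.0
            for xi in (0, 1):
                incoming = 1.0
                for k in neighbors[i]:
                    if k != j:
                        incoming *= msgs[(k, i)][xi]
                total += phi[i][xi] * psi(xi, xj) * incoming
            m.append(total)
        z = sum(m)
        new[(i, j)] = [v / z for v in m]
    msgs = new

# Beliefs: unary potential times all incoming messages, normalized.
beliefs = {}
for i in nodes:
    b = []
    for x in (0, 1):
        v = phi[i][x]
        for k in neighbors[i]:
            v *= msgs[(k, i)][x]
        b.append(v)
    z = sum(b)
    beliefs[i] = [v / z for v in b]
# Brute-force gives the exact marginal P(x0 = 1) = 84/108 ≈ 0.778; loopy BP
# slightly overcounts evidence around the loop and lands near 0.789.
```

On tree-structured graphs the same updates are exact, which is why, as the abstract notes, earlier models restricted themselves to networks without loops.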
Spike-Based Probabilistic Inference in Analog Graphical Models Using Interspike-Interval Coding
, 2013
Abstract

Cited by 1 (0 self)
Temporal spike codes play a crucial role in neural information processing. In particular, there is strong experimental evidence that interspike intervals (ISIs) are used for stimulus representation in neural systems. However, there are very few algorithmic principles that exploit the benefits of such temporal codes for probabilistic inference of stimuli or decisions. Here, we describe and rigorously prove the functional properties of a spike-based processor that uses ISI distributions to perform probabilistic inference. The abstract processor architecture serves as a building block for more concrete, neural implementations of the Belief Propagation (BP) algorithm in arbitrary graphical models (e.g. Bayesian networks and factor graphs). The distributed nature of graphical models matches well with the architectural and functional constraints imposed by biology. In our model, ISI distributions represent the BP messages exchanged between factor nodes, leading to the interpretation of a single spike as a random sample that follows such a distribution. We verify the abstract processor model by numerical simulation in full graphs, and demonstrate that it can be applied even in the presence of analog variables. As a particular example, we also show results of a concrete, neural implementation of the processor, although in principle our approach is more flexible and allows for different neurobiological interpretations. Furthermore, electrophysiological data from area LIP during behavioral experiments are assessed in the light of ISI coding, leading to concrete, testable, quantitative predictions and a more accurate description of these data compared to hitherto existing models.
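The basic coding scheme described above, a spike train whose inter-spike intervals are samples from the distribution being communicated, can be sketched in a few lines. The message values, the state-to-interval mapping, and the simple histogram decoder are illustrative assumptions, not the paper's concrete processor:

```python
import random

# Sketch of ISI coding: a discrete distribution (a hypothetical BP message
# over three states) is transmitted as a spike train whose inter-spike
# intervals are i.i.d. samples from it; the receiver recovers the message
# by histogramming the observed intervals.
rng = random.Random(1)
message = [0.2, 0.5, 0.3]           # state k is coded as an ISI of k+1 time steps

# Encoder: draw each inter-spike interval from the message distribution.
isis = rng.choices([1, 2, 3], weights=message, k=5000)
spike_times = []
t = 0
for d in isis:
    t += d
    spike_times.append(t)

# Decoder: read intervals back off the spike train and normalize their counts.
observed = [b - a for a, b in zip(spike_times, spike_times[1:])]
est = [observed.count(k) / len(observed) for k in (1, 2, 3)]
```

Under this reading, each individual spike timing is a random sample from the message, matching the interpretation given in the abstract.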
Modelling object perception in cortex: hierarchical Bayesian networks and belief propagation
Circular inferences in schizophrenia
 A Journal of Neurology, Occasional Paper
Abstract
A considerable number of recent experimental and computational studies suggest that subtle impairments of excitatory-to-inhibitory balance or regulation are involved in many neurological and psychiatric conditions. The current paper aims to relate, specifically and quantitatively, excitatory-to-inhibitory imbalance with psychotic symptoms in schizophrenia. Considering that the brain constructs hierarchical causal models of the external world, we show that the failure to maintain the excitatory-to-inhibitory balance results in hallucinations as well as in the formation and subsequent consolidation of delusional beliefs. Indeed, the consequence of excitatory-to-inhibitory imbalance in a hierarchical neural network is equated to a pathological form of causal inference called ‘circular belief propagation’. In circular belief propagation, bottom-up sensory information and top-down predictions are reverberated, i.e. prior beliefs are misinterpreted as sensory observations and vice versa. As a result, these predictions are counted multiple times. Circular inference explains the emergence of erroneous percepts, the patient’s overconfidence when facing probabilistic choices, the learning of ‘unshakable’ causal relationships between unrelated events, and a paradoxical immunity to perceptual illusions, which are all known to be associated with schizophrenia.
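In log-odds form, the overcounting at the heart of circular belief propagation can be illustrated in a few lines: a normal Bayesian update adds the sensory evidence once, whereas reverberated evidence is added several times, producing overconfidence. The prior, the evidence strength, and the loop count below are hypothetical values:

```python
import math

def sigmoid(x):
    """Convert log-odds back to a probability."""
    return 1.0 / (1.0 + math.exp(-x))

# Toy log-odds illustration of circular belief propagation: a normal update
# counts the sensory evidence once, while reverberation between bottom-up
# and top-down messages counts it several times. All values are hypothetical.
prior_logodds = 0.0        # no prior preference for the hypothesis
evidence_logodds = 1.0     # weak sensory evidence in favor

normal = sigmoid(prior_logodds + evidence_logodds)        # counted once: ~0.73
circular = sigmoid(prior_logodds + 4 * evidence_logodds)  # counted 4 times: ~0.98
```

The same weak evidence, reverberated, yields near-certainty, which mirrors the overconfidence in probabilistic choices described in the abstract.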
, 2012
Abstract
The behavioral phenomena of sensory attention are thought to reflect the allocation of a limited processing resource, but there is little consensus on the nature of the resource or why it should be limited. Here we argue that a fundamental bottleneck emerges naturally within Bayesian models of perception, and use this observation to frame a new computational account of the need for, and action of, attention, unifying diverse attentional phenomena in a way that goes beyond previous inferential, probabilistic and Bayesian models. Attentional effects are most evident in cluttered environments, and include both selective phenomena, where attention is invoked by cues that point to particular stimuli, and integrative phenomena, where attention is invoked dynamically by endogenous processing. However, most previous Bayesian accounts of attention have focused on describing relatively simple experimental settings, where cues shape expectations about a small number of upcoming stimuli and thus convey “prior” information about clearly defined objects. While operationally consistent with the experiments it seeks to describe, this view of attention as prior seems to miss many essential elements of both its selective and integrative roles, and thus cannot be easily extended to complex environments. We suggest that the resource …
A Systematic Method for Configuring VLSI Networks of Spiking Neurons
 Communicated by Ralph Etienne-Cummings
Abstract
An increasing number of research groups are developing custom hybrid analog/digital very large scale integration (VLSI) chips and systems that implement hundreds to thousands of spiking neurons with biophysically realistic dynamics, with the intention of emulating brain-like real-world behavior in hardware and robotic systems rather than simply simulating their performance on general-purpose digital computers. Although the electronic engineering aspects of these emulation systems are proceeding well, progress toward the actual emulation of brain-like tasks is restricted by the lack of suitable high-level configuration methods of the kind that have already been developed over many decades for simulations on general-purpose computers. The key difficulty is that the dynamics of the CMOS electronic analogs are determined by transistor biases that do not map simply to the parameter types and values used in typical abstract mathematical models of neurons and their networks. Here we provide a general method for resolving this difficulty. We describe a parameter …