Results 1–10 of 468
Polychronization: computation with spikes
Neural Computation, 2006
Cited by 115 (0 self)

Abstract:
We present a minimal spiking network that can polychronize, i.e., exhibit persistent time-locked but not synchronous firing patterns with millisecond precision, as in synfire braids. The network consists of cortical spiking neurons with axonal conduction delays and spike-timing-dependent plasticity (STDP); a ready-to-use MATLAB code is included. It exhibits sleep oscillations, gamma (40 Hz) rhythms, and other interesting regimes. Due to the interplay between the delays and STDP, the spiking neurons spontaneously self-organize into groups and generate polychronous persistent activity. The number of coexisting polychronous groups far exceeds the number of neurons in the network, resulting in an unprecedented memory capacity of the system.
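The paper's two key ingredients, axonal conduction delays and STDP, can be illustrated with a minimal pair-based STDP window. This is a sketch only: the parameter values and the single-pair update below are illustrative and are not the ones used in the paper's network.

```python
import numpy as np

# Pair-based STDP window; A_plus, A_minus, tau_plus, tau_minus are
# illustrative values, not Izhikevich's parameters.
A_plus, A_minus = 0.10, 0.12
tau_plus, tau_minus = 20.0, 20.0  # ms

def stdp_dw(delta_t):
    """Weight change for delta_t = t_post - t_pre_arrival (ms)."""
    if delta_t > 0:  # presynaptic spike arrives before the postsynaptic one
        return A_plus * np.exp(-delta_t / tau_plus)
    return -A_minus * np.exp(delta_t / tau_minus)  # reverse order: depression

# The conduction delay shifts the relevant spike-time difference:
# a presynaptic spike at t = 0 ms with a 5 ms axonal delay arrives at
# t = 5 ms, so a postsynaptic spike at t = 8 ms gives delta_t = 3 ms
# and potentiation.
delay_ms = 5.0
dw = stdp_dw(8.0 - (0.0 + delay_ms))
```

It is this delay-shifted timing that lets STDP carve out reproducible, time-locked (polychronous) firing patterns rather than merely synchronous ones.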
An experimental unification of reservoir computing methods
2007
Cited by 70 (10 self)

Abstract:
Three different uses of a recurrent neural network (RNN) as a reservoir that is not trained but instead read out by a simple external classification layer have been described in the literature: Liquid State Machines (LSMs), Echo State Networks (ESNs), and the Backpropagation-Decorrelation (BPDC) learning rule. Individual descriptions of these techniques exist, but an overview is still lacking. Here, we present a series of experimental results that compare all three implementations, and we draw conclusions about the relation between a broad range of reservoir parameters and network dynamics, memory, node complexity, and performance on a variety of benchmark tests with different characteristics. Next, we introduce a new measure for the reservoir dynamics based on Lyapunov exponents. Unlike previous measures in the literature, this measure depends on the dynamics of the reservoir in response to the inputs, and in the cases we tried, it indicates an optimal value for the global scaling of the weight matrix, irrespective of the standard measures. We also describe the Reservoir Computing Toolbox that was used for these experiments, which implements all the types of Reservoir Computing and allows the easy simulation of a wide range of reservoir topologies for a number of benchmarks.
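The shared idea behind all three methods, a fixed random recurrent network with only a trained linear readout, can be sketched as a small echo state network. The delayed-recall task, the spectral-radius value of 0.9, and all other parameters below are illustrative choices, not settings taken from the paper.

```python
import numpy as np

# Sketch of an echo state network: only W_out is trained (ridge regression);
# W_in and W stay fixed and random. All sizes/values are illustrative.
rng = np.random.default_rng(0)
n_res, n_in, T = 200, 1, 2000

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # scale spectral radius to 0.9

u = rng.uniform(-1.0, 1.0, (T, n_in))            # random input stream
x = np.zeros(n_res)
states = np.zeros((T, n_res))
for t in range(T):
    x = np.tanh(W_in @ u[t] + W @ x)             # untrained reservoir update
    states[t] = x

# Task: reproduce the input delayed by 3 steps, using only a linear readout.
delay, washout = 3, 100
y = np.roll(u[:, 0], delay)
X, Y = states[washout:], y[washout:]
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ Y)
pred = X @ W_out
nrmse = np.sqrt(np.mean((pred - Y) ** 2)) / np.std(Y)
```

Because only the readout is trained, the whole procedure reduces to one linear least-squares solve, which is what makes these reservoir methods attractive compared to full RNN training.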
Short-term memory for serial order: A recurrent neural network model
Psychological Review, 2006
Cited by 53 (5 self)

Abstract:
Despite a century of research, the mechanisms underlying short-term or working memory for serial order remain uncertain. Recent theoretical models have converged on a particular account, based on transient associations between independent item and context representations. In the present article, the authors present an alternative model, according to which sequence information is encoded through sustained patterns of activation within a recurrent neural network architecture. As demonstrated through a series of computer simulations, the model provides a parsimonious account of numerous benchmark characteristics of immediate serial recall, including data that have been considered to preclude the application of recurrent neural networks in this domain. Unlike most competing accounts, the model deals naturally with findings concerning the role of background knowledge in serial recall and makes contact with relevant neuroscientific data. Furthermore, the model gives rise to numerous testable predictions that differentiate it from competing theories. Taken together, the results presented indicate that recurrent neural networks may offer a useful framework for understanding short-term memory for serial order.
Dynamical Movement Primitives: Learning Attractor Models for Motor Behaviors
2013
Cited by 49 (3 self)

Abstract:
Nonlinear dynamical systems have been used in many disciplines to model complex behaviors, including biological motor control, robotics, perception, economics, traffic prediction, and neuroscience. While often the unexpected emergent behavior of nonlinear systems is the focus of investigations, it is of equal importance to create goal-directed behavior (e.g., stable locomotion from a system of coupled oscillators under perceptual guidance). Modeling goal-directed behavior with nonlinear systems is, however, rather difficult due to the parameter sensitivity of these systems, their complex phase transitions in response to subtle parameter changes, and the difficulty of analyzing and predicting their long-term behavior; intuition and time-consuming parameter tuning play a major role. This letter presents and reviews dynamical movement primitives, a line of research for modeling attractor behaviors of autonomous nonlinear dynamical systems with the help of statistical learning techniques. The essence of our approach is to start with a simple dynamical system,
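The "simple dynamical system" at the core of common DMP formulations is a damped spring pulled toward a goal g, modulated by a learned forcing term. The sketch below integrates only the point-attractor part (forcing term set to zero); parameter names follow common DMP notation, and all values are illustrative rather than taken from the letter.

```python
# Minimal discrete DMP transformation system (forcing term omitted):
#   tau * dz/dt = alpha_z * (beta_z * (g - x) - z)
#   tau * dx/dt = z
# With f = 0 this is a globally stable point attractor at x = g.
alpha_z, beta_z = 25.0, 25.0 / 4.0  # beta_z = alpha_z / 4: critical damping
tau, dt = 1.0, 0.001
g, x0 = 1.0, 0.0

x, z = x0, 0.0                      # position and scaled velocity
for _ in range(int(2.0 / dt)):      # Euler integration for 2 * tau seconds
    f = 0.0                         # learned forcing term in the paper; zero here
    z += dt * alpha_z * (beta_z * (g - x) - z + f) / tau
    x += dt * z / tau
```

The stability of this base system is what the learned forcing term then shapes into an arbitrary attractor landscape without destroying convergence to the goal.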
Denoising Source Separation
Cited by 49 (7 self)

Abstract:
A new algorithmic framework called denoising source separation (DSS) is introduced. The main benefit of this framework is that it allows for easy development of new source separation algorithms which are optimised for specific problems. In this framework, source separation algorithms are constructed around denoising procedures. The resulting algorithms can range from almost blind to highly specialised source separation algorithms. Both simple linear and more complex nonlinear or adaptive denoising schemes are considered. Some existing independent component analysis algorithms are reinterpreted within the DSS framework and new, robust blind source separation algorithms are suggested. Although DSS algorithms need not be explicitly based on objective functions, there is often an implicit objective function that is optimised. The exact relation between the denoising procedure and the objective function is derived and a useful approximation of the objective function is presented. In the experimental section, various DSS schemes are applied extensively to artificial data, to real magnetoencephalograms and to simulated CDMA mobile network signals. Finally, various extensions to the proposed DSS algorithms are considered. These include nonlinear observation mappings, hierarchical models and overcomplete, non-orthogonal feature spaces. With these extensions, DSS appears to have relevance to many existing models of neural information processing.
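One way to read "source separation algorithms constructed around denoising procedures" is as a power-type iteration on sphered data, with the denoiser choosing which source the iteration favors. The sketch below uses a simple low-pass (moving-average) denoiser, which extracts the slowest source from a two-channel mixture; the data, mixing matrix, and all parameters are illustrative, not from the paper.

```python
import numpy as np

# Illustrative two-channel mixture: a slow sinusoid plus a noise-like source.
rng = np.random.default_rng(1)
T = 5000
slow = np.sin(2 * np.pi * 2 * np.arange(T) / T)   # slow source (2 cycles)
fast = rng.standard_normal(T)                     # noise-like source
A = np.array([[0.7, 0.3], [0.2, -0.8]])           # illustrative mixing matrix
X = A @ np.vstack([slow, fast])

# Sphering (whitening) of the observed mixture.
X = X - X.mean(axis=1, keepdims=True)
C = X @ X.T / T
d, E = np.linalg.eigh(C)
Y = E @ np.diag(d ** -0.5) @ E.T @ X

def denoise(s, width=50):
    """Low-pass denoiser: a moving average favoring slowly varying sources."""
    return np.convolve(s, np.ones(width) / width, mode="same")

# DSS-style iteration: project, denoise, re-estimate, normalize.
w = rng.standard_normal(2)
for _ in range(20):
    s = w @ Y
    w = Y @ denoise(s)
    w /= np.linalg.norm(w)

s_est = w @ Y
corr = abs(np.corrcoef(s_est, slow)[0, 1])
```

Swapping the `denoise` function (e.g., for a nonlinearity or a task-specific filter) yields a different separation algorithm within the same iteration, which is the framework's advertised flexibility.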
Computational aspects of feedback in neural circuits
PLOS Computational Biology, 2007
Cited by 37 (7 self)

Abstract:
It has previously been shown that generic cortical microcircuit models can perform complex real-time computations on continuous input streams, provided that these computations can be carried out with a rapidly fading memory. We investigate the computational capability of such circuits in the more realistic case where not only readout neurons, but in addition a few neurons within the circuit, have been trained for specific tasks. This is essentially equivalent to the case where the output of trained readout neurons is fed back into the circuit. We show that this new model overcomes the limitation of a rapidly fading memory. In fact, we prove that in the idealized case without noise it can carry out any conceivable digital or analog computation on time-varying inputs. But even with noise, the resulting computational model can perform a large class of biologically relevant real-time computations that require a non-fading memory. We demonstrate these computational implications of feedback both theoretically and through computer simulations of detailed cortical microcircuit models that are subject to noise and have complex inherent dynamics. We show that the application of simple learning procedures (such as linear regression or perceptron learning) to a few neurons enables such circuits to represent time over behaviorally relevant long time spans, to integrate evidence from incoming spike trains over longer periods of time, and to process new information contained in such spike trains in diverse ways according to the current internal state of the circuit. In particular, we show that such generic cortical microcircuits with feedback provide a new model for working memory that is consistent with a large set of biological constraints.
An overview of reservoir computing: theory, applications and implementations
Proceedings of the 15th European Symposium on Artificial Neural Networks, 2007
Cited by 34 (10 self)

Abstract:
Training recurrent neural networks is hard. Recently, however, it has been discovered that it is possible to simply construct a random recurrent topology and train only a single linear readout layer. State-of-the-art performance can easily be achieved with this setup, called Reservoir Computing. The idea can even be broadened by stating that any high-dimensional, driven dynamical system, operated in the correct dynamic regime, can be used as a temporal 'kernel', which makes it possible to solve complex tasks using just linear post-processing techniques. This tutorial gives an overview of current research on the theory, applications, and implementations of Reservoir Computing.
Implementing synaptic plasticity in a VLSI spiking neural network model
International Joint Conference on Neural Networks (IJCNN ’06), 2006
Cited by 32 (16 self)

Abstract:
This paper describes an area-efficient mixed-signal implementation of synapse-based long-term plasticity realized in a VLSI model of a spiking neural network. The artificial synapses are based on an implementation of spike-time-dependent plasticity (STDP). In the biological specimen, STDP is a mechanism acting locally in each synapse. The presented electronic implementation succeeds in maintaining this high level of parallelism and simultaneously achieves a synapse density of more than 9k synapses per mm² in a 180 nm technology. This allows the construction of neural microcircuits close to the biological specimen while maintaining a speed several orders of magnitude faster than biological real time. The large acceleration factor enhances the possibilities to investigate key aspects of plasticity, e.g., by performing extensive parameter searches.
Movement Generation with Circuits of Spiking Neurons
2005
Cited by 30 (7 self)

Abstract:
How can complex movements that take hundreds of milliseconds be generated by stereotypical neural microcircuits consisting of spiking neurons with a much faster dynamics? We show that linear readouts from generic neural microcircuit models can be trained to generate basic arm movements. Such movement generation is independent of the arm model used and the type of feedback that the circuit receives. We demonstrate this by considering two different models of a two-jointed arm, a standard model from robotics and a standard model from biology, that each generate different kinds of feedback. Feedback that arrives with biologically realistic delays of 50 to 280 ms turns out to give rise to the best performance. If feedback with such a desirable delay is not available, the neural microcircuit model also achieves good performance if it uses internally generated estimates of such feedback. Existing methods for movement generation in robotics that take the particular dynamics of sensors and actuators into account (the embodiment of motor systems) are taken one step further with this approach, which provides methods for also using the embodiment of the motion-generation circuitry, that is, the inherent dynamics and spatial structure of neural circuits, for the generation of movement.
Computer Models and Analysis Tools for Neural Microcircuits
Cited by 28 (5 self)

Abstract:
This chapter surveys web resources regarding computer models and analysis tools for neural microcircuits. In particular, it describes the features of a new website (www.lsm.tugraz.at) that facilitates the creation of computer models for cortical neural microcircuits of various sizes and levels of detail, as well as tools for evaluating the computational power of these models in a Matlab environment.