Results 1-10 of 44
An experimental unification of reservoir computing methods
, 2007
"... Three different uses of a recurrent neural network (RNN) as a reservoir that is not trained but instead read out by a simple external classification layer have been described in the literature: Liquid State Machines (LSMs), Echo State Networks (ESNs) and the Backpropagation Decorrelation (BPDC) lea ..."
Abstract

Cited by 70 (10 self)
 Add to MetaCart
Three different uses of a recurrent neural network (RNN) as a reservoir that is not trained but instead read out by a simple external classification layer have been described in the literature: Liquid State Machines (LSMs), Echo State Networks (ESNs) and the Backpropagation Decorrelation (BPDC) learning rule. Individual descriptions of these techniques exist, but an overview is still lacking. Here, we present a series of experimental results that compare all three implementations, and we draw conclusions about the relation between a broad range of reservoir parameters and network dynamics, memory, node complexity and performance on a variety of benchmark tests with different characteristics. Next, we introduce a new measure for the reservoir dynamics based on Lyapunov exponents. Unlike previous measures in the literature, this measure depends on the dynamics of the reservoir in response to its inputs, and in the cases we tried, it indicates an optimal value for the global scaling of the weight matrix, irrespective of the standard measures. We also describe the Reservoir Computing Toolbox that was used for these experiments, which implements all these types of Reservoir Computing and allows easy simulation of a wide range of reservoir topologies on a number of benchmarks.
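The "global scaling of the weight matrix" the abstract refers to is commonly implemented by rescaling the random reservoir matrix to a chosen spectral radius. A minimal sketch of that rescaling step (network size, connectivity, and the target value are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random reservoir weight matrix: 100 neurons, roughly 10% connectivity.
n = 100
W = rng.standard_normal((n, n)) * (rng.random((n, n)) < 0.1)

# Globally rescale W so that its spectral radius (largest absolute
# eigenvalue) hits a chosen target value.
target_rho = 0.9
rho = max(abs(np.linalg.eigvals(W)))
W_scaled = W * (target_rho / rho)
print(max(abs(np.linalg.eigvals(W_scaled))))
```

Since eigenvalues scale linearly with the matrix, the rescaled matrix hits the target exactly; the point of the paper's Lyapunov-based measure is that the best target value also depends on the input, which this static rescaling does not capture.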
An overview of reservoir computing: theory, applications and implementations
 Proceedings of the 15th European Symposium on Artificial Neural Networks
, 2007
"... Abstract. Training recurrent neural networks is hard. Recently it has however been discovered that it is possible to just construct a random recurrent topology, and only train a single linear readout layer. Stateoftheart performance can easily be achieved with this setup, called Reservoir Computin ..."
Abstract

Cited by 34 (10 self)
 Add to MetaCart
(Show Context)
Abstract. Training recurrent neural networks is hard. Recently, however, it has been discovered that it is possible to simply construct a random recurrent topology and train only a single linear readout layer. State-of-the-art performance can easily be achieved with this setup, called Reservoir Computing. The idea can even be broadened by stating that any high-dimensional, driven dynamic system, operated in the correct dynamic regime, can be used as a temporal ‘kernel’ which makes it possible to solve complex tasks using just linear postprocessing techniques. This tutorial gives an overview of current research on the theory, applications and implementations of Reservoir Computing.
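The core setup described here, a fixed random recurrent topology driven by an input, can be sketched in a few lines. This is a generic illustration, not code from the tutorial; sizes, weight ranges, and the sine input are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)

# A minimal echo-state-style reservoir: random, untrained recurrent
# weights; in a full RC system only a linear readout on the states
# would be trained.
n_in, n_res = 1, 50
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # scale into a stable regime

def run_reservoir(u):
    """Drive the reservoir with input sequence u, return the states."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ np.atleast_1d(u_t) + W @ x)
        states.append(x.copy())
    return np.array(states)

u = np.sin(np.linspace(0, 8 * np.pi, 200))
X = run_reservoir(u)
print(X.shape)  # one 50-dimensional state per time step
```

The state trajectory X is the high-dimensional temporal ‘kernel’ expansion of the input that the linear postprocessing then operates on.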
Campenhout, Linking Non-binned Spike Train Kernels to Several Existing Spike Train Metrics
 Neurocomputing
"... spike train metrics ..."
(Show Context)
Improving reservoirs using Intrinsic Plasticity
, 2007
"... The benefits of using Intrinsic Plasticity (IP), an unsupervised, local, biologically inspired adaptation rule that tunes the probability density of a neuron’s output towards an exponential distribution – thereby realizing an information maximization – have already been demonstrated. In this work, w ..."
Abstract

Cited by 14 (1 self)
 Add to MetaCart
The benefits of using Intrinsic Plasticity (IP), an unsupervised, local, biologically inspired adaptation rule that tunes the probability density of a neuron’s output towards an exponential distribution – thereby realizing an information maximization – have already been demonstrated. In this work, we extend the ideas of this adaptation method to a more commonly used nonlinearity and a Gaussian output distribution. After deriving the learning rules, we show the effects of the bounded output of the transfer function on the moments of the actual output distribution. This allows us to show that the rule converges to the expected distributions, even in random recurrent networks. The IP rule is evaluated in a Reservoir Computing setting, a temporal processing technique that uses random, untrained recurrent networks as excitable media, where the network’s state is fed to a linear regressor that computes the desired output. We present an experimental comparison of the different IP rules on three benchmark tasks with different characteristics. Furthermore, we show that this unsupervised reservoir adaptation is able to turn networks with very constrained topologies, such as a 1D lattice, which generally show quite unsuitable dynamic behavior, into reservoirs that can be used to solve complex tasks. We clearly demonstrate that IP makes Reservoir Computing more robust: the internal dynamics can autonomously tune themselves – irrespective of initial weights or input scaling – to the dynamic regime that is optimal for a given task.
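To make the kind of rule being extended concrete, here is a sketch of the earlier exponential-target IP rule for a single sigmoid neuron (the variant this paper generalizes to other nonlinearities and a Gaussian target). The learning rate, target mean, and Gaussian input are illustrative choices, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Triesch-style intrinsic plasticity: adapt gain a and bias b of
# y = sigmoid(a*x + b) so the output distribution approaches an
# exponential distribution with mean mu (information maximization).
a, b = 1.0, 0.0
eta, mu = 0.01, 0.2      # learning rate and target mean (illustrative)

xs = rng.standard_normal(20000)   # synthetic pre-activation input
for x in xs:
    y = sigmoid(a * x + b)
    db = eta * (1.0 - (2.0 + 1.0 / mu) * y + y * y / mu)
    da = eta / a + db * x
    a += da
    b += db

# After adaptation, the mean output should sit near the target mu.
mean_out = np.mean(sigmoid(a * xs + b))
print(mean_out)
```

At the rule's fixed point, E[y] = mu and E[y^2] = 2*mu^2, the first two moments of the target exponential distribution, which is why the mean activity converges to mu.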
On Computational Power and the Order-Chaos Phase Transition in Reservoir Computing
"... Randomly connected recurrent neural circuits have proven to be very powerful models for online computations when a trained memoryless readout function is appended. Such Reservoir Computing (RC) systems are commonly used in two flavors: with analog or binary (spiking) neurons in the recurrent circuit ..."
Abstract

Cited by 12 (2 self)
 Add to MetaCart
(Show Context)
Randomly connected recurrent neural circuits have proven to be very powerful models for online computations when a trained memoryless readout function is appended. Such Reservoir Computing (RC) systems are commonly used in two flavors: with analog or binary (spiking) neurons in the recurrent circuit. Previous work showed a fundamental difference between these two incarnations of the RC idea: the performance of an RC system built from binary neurons seems to depend strongly on the network connectivity structure, whereas in networks of analog neurons no such dependency has been observed. In this article we investigate this apparent dichotomy in terms of the in-degree of the circuit nodes. Our analyses, based among others on the Lyapunov exponent, reveal that the phase transition between ordered and chaotic network behavior in binary circuits qualitatively differs from the one in analog circuits. This explains the observed decreased computational performance of binary circuits of high node in-degree. Furthermore, a novel mean-field predictor for computational performance is introduced and shown to accurately predict the numerically obtained results.
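The Lyapunov-exponent analysis mentioned above can be illustrated numerically for an analog (tanh) reservoir: perturb the state, evolve both trajectories under a common input drive, and average the log growth rate of the renormalized perturbation (a Benettin-style estimate). Network size, scalings, and input statistics below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

n = 100
W = rng.standard_normal((n, n)) / np.sqrt(n)   # spectral radius ~ 1
w_in = rng.uniform(-1, 1, n)

def lyapunov(scale, steps=2000, eps=1e-8):
    """Estimate the largest Lyapunov exponent of the driven reservoir
    x <- tanh(scale * W x + w_in * u) via perturbation growth."""
    x = np.zeros(n)
    x_p = x + eps * rng.standard_normal(n)
    u = rng.uniform(-1, 1, steps)              # common input drive
    log_growth = 0.0
    for t in range(steps):
        x = np.tanh(scale * W @ x + w_in * u[t])
        x_p = np.tanh(scale * W @ x_p + w_in * u[t])
        d = np.linalg.norm(x_p - x)
        log_growth += np.log(d / eps)
        x_p = x + (eps / d) * (x_p - x)        # renormalize perturbation
    return log_growth / steps

# Weak global scaling should give an ordered (contracting) regime,
# strong scaling a chaotic (expanding) one.
l_ordered = lyapunov(0.5)
l_chaotic = lyapunov(3.0)
print(l_ordered, l_chaotic)
```

The sign change of this exponent as the scaling grows is the order-chaos phase transition the paper studies; its claim is that this transition looks qualitatively different in binary circuits of high in-degree.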
Compact hardware liquid state machines on FPGA for real-time speech recognition
 NEURAL NETWORKS
, 2008
"... Hardware implementations of Spiking Neural Networks are numerous because they are well suited for implementation in digital and analog hardware, and outperform classic neural networks. This work presents an application driven digital hardware exploration where we implement realtime, isolated digit ..."
Abstract

Cited by 11 (4 self)
 Add to MetaCart
Hardware implementations of Spiking Neural Networks are numerous because they are well suited for implementation in digital and analog hardware, and outperform classic neural networks. This work presents an application-driven digital hardware exploration in which we implement real-time, isolated-digit speech recognition using a Liquid State Machine. The Liquid State Machine is a recurrent neural network of spiking neurons in which only the output layer is trained. First we test two existing hardware architectures, which we improve and extend, but which turn out to be too fast, and thus too area-consuming, for this application. Next, we present a scalable, serialized architecture that allows a very compact implementation of spiking neural networks while still being fast enough for real-time processing. All architectures support leaky integrate-and-fire membranes with exponential synapse models. This work shows that there is a large design space for Spiking Neural Network hardware that can be explored; existing architectures span only part of it.
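As a software reference for the neuron model these architectures support, here is a sketch of a discrete-time leaky integrate-and-fire membrane with an exponentially decaying synaptic current. Time constants, weight, and threshold are illustrative values, not the paper's hardware parameters:

```python
import numpy as np

def simulate_lif(spikes_in, n_steps, tau_m=20.0, tau_s=5.0,
                 w=1.5, v_thresh=1.0):
    """Leaky integrate-and-fire membrane with an exponential synapse.
    spikes_in: set of time steps at which an input spike arrives."""
    v, i_syn = 0.0, 0.0
    out_spikes = []
    for t in range(n_steps):
        i_syn *= np.exp(-1.0 / tau_s)         # synaptic current decay
        if t in spikes_in:
            i_syn += w                        # incoming weighted spike
        v = v * np.exp(-1.0 / tau_m) + i_syn  # leaky membrane integration
        if v >= v_thresh:
            out_spikes.append(t)              # fire and reset
            v = 0.0
    return out_spikes

out = simulate_lif(spikes_in={5, 6, 7}, n_steps=50)
print(out)
```

In the serialized hardware architecture the same two exponential decays and threshold test are evaluated one neuron (or one bit) at a time, trading speed for area.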
Reservoir computing approaches to recurrent neural network training
 COMPUT SCI REV 2009;3(3):127–49
, 2009
"... Echo State Networks and Liquid State Machines introduced a new paradigm in artificial recurrent neural network (RNN) training, where an RNN (the reservoir) is generated randomly and only a readout is trained. The paradigm, becoming known as reservoir computing, greatly facilitated the practical appl ..."
Abstract

Cited by 10 (0 self)
 Add to MetaCart
(Show Context)
Echo State Networks and Liquid State Machines introduced a new paradigm in artificial recurrent neural network (RNN) training, where an RNN (the reservoir) is generated randomly and only a readout is trained. The paradigm, which became known as reservoir computing, greatly facilitated the practical application of RNNs and outperformed classical fully trained RNNs in many tasks. It has lately become a vivid research field with numerous extensions of the basic idea, including reservoir adaptation, thus broadening the initial paradigm to using different methods for training the reservoir and the readout. This review systematically surveys both current ways of generating and adapting reservoirs and ways of training different types of readouts. It offers a natural conceptual classification of the techniques which transcends the boundaries of the current “brand names” of reservoir methods, and thus aims to help unify the field and provide the reader with a detailed “map” of it.
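The readout-training step at the heart of this paradigm is most often a regularized linear regression on harvested reservoir states. A sketch on a toy one-step-memory task (all sizes, scalings, and the regularization strength are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

# Fixed random reservoir, generated once and never trained.
n_res, T = 50, 500
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.8 / max(abs(np.linalg.eigvals(W)))
w_in = rng.uniform(-1, 1, n_res)

u = rng.uniform(-1, 1, T)
y = np.concatenate(([0.0], u[:-1]))   # toy task: recall previous input

# Harvest the reservoir states driven by the input.
x = np.zeros(n_res)
X = np.empty((T, n_res))
for t in range(T):
    x = np.tanh(w_in * u[t] + W @ x)
    X[t] = x

# Train only the readout: ridge (Tikhonov-regularized) regression.
lam = 1e-6
W_out = np.linalg.solve(X.T @ X + lam * np.eye(n_res), X.T @ y)

mse = np.mean((X @ W_out - y) ** 2)
print(mse)
```

Everything before the `solve` call is fixed and random; the single linear solve is the entire training procedure, which is what makes the approach so much cheaper than full RNN training.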
Parallel hardware implementation of a broad class of spiking neurons using serial arithmetic
 IN PROCEEDINGS OF ESANN’06
, 2006
"... Current digital, directly mapped implementations of spiking neural networks use serial processing and parallel arithmetic. On a standard CPU, this might be the good choice, but when using a Field Programmable Gate Array (FPGA), other implementation architectures are possible. This work present a har ..."
Abstract

Cited by 9 (3 self)
 Add to MetaCart
Current digital, directly mapped implementations of spiking neural networks use serial processing and parallel arithmetic. On a standard CPU this may be the right choice, but when using a Field Programmable Gate Array (FPGA), other implementation architectures are possible. This work presents a hardware implementation of a broad class of integrate-and-fire spiking neurons with synapse models, using parallel processing and serial arithmetic. This results in very fast and compact implementations of spiking neurons on FPGA.
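The serial-arithmetic idea can be illustrated in software: process one bit per clock tick, least significant bit first, carrying a single bit between ticks, so the hardware needs only one full-adder cell regardless of word length. A sketch (the bit-level logic is the standard full adder, not code from the paper):

```python
def serial_add(a_bits, b_bits):
    """Add two LSB-first bit lists, one bit per 'clock tick'."""
    carry = 0
    out = []
    for a, b in zip(a_bits, b_bits):
        s = a ^ b ^ carry                    # sum bit of a full adder
        carry = (a & b) | (carry & (a ^ b))  # carry-out bit
        out.append(s)
    out.append(carry)                        # final carry
    return out

def to_bits(x, width):
    return [(x >> i) & 1 for i in range(width)]

def from_bits(bits):
    return sum(b << i for i, b in enumerate(bits))

s = from_bits(serial_add(to_bits(13, 8), to_bits(29, 8)))
print(s)  # 42
```

The trade-off is the one the abstract names: with serial arithmetic each operation takes as many ticks as the word is wide, but many neurons can be processed in parallel with very little area per neuron.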
Reservoir Computing with Stochastic Bitstream Neurons
 In Proceedings of the 16th Annual ProRISC Workshop
, 2005
"... Reservoir Computing (RC) [6], [5], [9] is a computational framework with powerful properties and several interesting advantages compared to conventional techniques for pattern recognition. It consists essentially of two parts: a recurrently connected network of simple interacting nodes (the reservoi ..."
Abstract

Cited by 8 (5 self)
 Add to MetaCart
Reservoir Computing (RC) [6], [5], [9] is a computational framework with powerful properties and several interesting advantages compared to conventional techniques for pattern recognition. It consists essentially of two parts: a recurrently connected network of simple interacting nodes (the reservoir), and a readout function that observes the reservoir and computes the actual output of the system. The choice of the nodes that form the reservoir is very broad: spiking neurons [6], threshold logic gates [7] and sigmoidal neurons [5], [9] have been used. For this article, we use analogue neurons to build an RC system on a Field Programmable Gate Array (FPGA), a chip that can be reconfigured. A traditional neuron calculates a weighted sum of its inputs, which is then fed through a nonlinearity (such as a threshold or sigmoid function). This is not hardware-efficient due to the extensive use of multiplications. In [2], a type of neuron is introduced that communicates using stochastic bitstreams instead of fixed-point values. This drastically simplifies the hardware implementation of arithmetic operations such as addition, multiplication and the nonlinearity. We have built an implementation of RC on FPGA using these stochastic neurons.
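The simplification the abstract describes comes from stochastic computing: a value p in [0, 1] is encoded as a random bitstream with P(bit = 1) = p, and a single AND gate then multiplies two independent streams. A sketch of that encoding (stream length and values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)

# Encode two values as unipolar stochastic bitstreams.
N = 100_000
p, q = 0.6, 0.3
stream_p = rng.random(N) < p
stream_q = rng.random(N) < q

# One AND gate per bit pair multiplies the encoded values:
# P(bit_p & bit_q = 1) = p * q for independent streams.
product_stream = stream_p & stream_q
estimate = product_stream.mean()   # decodes to roughly p * q = 0.18
print(estimate)
```

The hardware cost of a multiplier collapses to a single gate; the price is precision, which grows only with the square root of the stream length.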
Replacing supervised classification learning by Slow Feature Analysis in spiking neural networks
"... It is open how neurons in the brain are able to learn without supervision to discriminate between spatiotemporal firing patterns of presynaptic neurons. We show that a known unsupervised learning algorithm, Slow Feature Analysis (SFA), is able to acquire the classification capability of Fisher’s Li ..."
Abstract

Cited by 7 (0 self)
 Add to MetaCart
(Show Context)
It is an open question how neurons in the brain are able to learn, without supervision, to discriminate between spatiotemporal firing patterns of presynaptic neurons. We show that a known unsupervised learning algorithm, Slow Feature Analysis (SFA), is able to acquire the classification capability of Fisher’s Linear Discriminant (FLD), a powerful algorithm for supervised learning, if temporally adjacent samples are likely to be from the same class. We also demonstrate that it enables linear readout neurons of cortical microcircuits to learn, in an unsupervised manner, the detection of repeating firing patterns within a stream of spike trains with the same firing statistics, as well as the discrimination of spoken digits.
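Linear SFA itself is compact enough to sketch: whiten the input, then take the direction along which the temporal differences have least variance. The toy signal below (a slow sinusoid mixed with fast noise) is an illustration, not the paper's spike-train data:

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy input: a slowly varying latent mixed with a fast noise component.
t = np.linspace(0, 20 * np.pi, 5000)
slow = np.sin(0.05 * t)
fast = rng.standard_normal(t.size)
X = np.column_stack([slow + 0.1 * fast, fast])

# Step 1: center and whiten the input.
X = X - X.mean(axis=0)
d, E = np.linalg.eigh(np.cov(X.T))
S = (X @ E) / np.sqrt(d)

# Step 2: in whitened space, the slowest feature is the eigenvector of
# the covariance of temporal differences with the smallest eigenvalue.
dS = np.diff(S, axis=0)
d2, E2 = np.linalg.eigh(np.cov(dS.T))
y = S @ E2[:, 0]                 # extracted slow feature

# The slow feature should track the slow latent (up to sign and scale).
corr = abs(np.corrcoef(y, slow)[0, 1])
print(corr)
```

The link to classification exploited in the paper is that when temporally adjacent samples usually share a class label, "slow" directions are exactly the class-separating directions FLD would find with supervision.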