Results 1–10 of 31
Determining Lyapunov Exponents from a Time Series
 Physica, 1985
Abstract

Cited by 495 (1 self)
We present the first algorithms that allow the estimation of non-negative Lyapunov exponents from an experimental time series. Lyapunov exponents, which provide a qualitative and quantitative characterization of dynamical behavior, are related to the exponentially fast divergence or convergence of nearby orbits in phase space. A system with one or more positive Lyapunov exponents is defined to be chaotic. Our method is rooted conceptually in a previously developed technique that could only be applied to analytically defined model systems: we monitor the long-term growth rate of small volume elements in an attractor. The method is tested on model systems with known Lyapunov spectra, and applied to data for the Belousov-Zhabotinskii reaction and Couette-Taylor flow.
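The exponential divergence the abstract describes can be illustrated with a minimal sketch. This is not the paper's volume-element algorithm for experimental data; it is the classical two-trajectory (Benettin-style) estimate applied to an analytically defined map, the fully chaotic logistic map, whose largest exponent is known to be ln 2. All names and parameters here are illustrative.

```python
import math

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

def largest_lyapunov(x0=0.3, n_steps=5000, d0=1e-9):
    """Benettin-style estimate: evolve a reference and a perturbed
    trajectory, renormalize their separation to d0 each step, and
    average the logarithmic growth rates."""
    x, y = x0, x0 + d0
    total = 0.0
    for _ in range(n_steps):
        x, y = logistic(x), logistic(y)
        d = abs(y - x)
        if d == 0.0:               # trajectories merged: re-perturb
            d = d0
        total += math.log(d / d0)
        y = x + d0 * (1.0 if y >= x else -1.0)  # renormalize to d0
    return total / n_steps

lam = largest_lyapunov()
```

The estimate settles close to ln 2 ≈ 0.693; a positive value is the signature of chaos mentioned in the abstract.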
Annealed Competition of Experts for a Segmentation and Classification of Switching Dynamics
1996
Abstract

Cited by 75 (22 self)
We present a method for the unsupervised segmentation of data streams originating from different unknown sources which alternate in time. We use an architecture consisting of competing neural networks. Memory is included in order to resolve ambiguities of input-output relations. In order to obtain maximal specialization, the competition is adiabatically increased during training. Our method achieves almost perfect identification and segmentation in the case of switching chaotic dynamics where input manifolds overlap and input-output relations are ambiguous. Only a small dataset is needed for the training procedure. Applications to time series from complex systems demonstrate the potential relevance of our approach for time series analysis and short-term prediction.
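A toy rendition of the competing-experts idea, under stated assumptions: the "experts" here are single scalar weights rather than neural networks, the two alternating sources are simple linear maps, and the annealing schedule is invented for illustration. The mechanism shown, error-based softmax responsibilities whose inverse temperature is slowly increased, is the adiabatic competition the abstract refers to.

```python
import math, random

random.seed(0)

# Two alternating sources: y = 2x and y = -x (stand-ins for the
# switching dynamics; the paper uses competing neural networks).
def sample(mode):
    x = random.uniform(-1.0, 1.0)
    return x, (2.0 * x if mode == 0 else -x)

experts = [0.5, -0.5]   # scalar "experts"; distinct starts break symmetry
lr = 0.1
for epoch in range(200):
    beta = 0.5 * (epoch + 1)           # annealed inverse temperature
    for mode in (0, 1):
        for _ in range(10):
            x, y = sample(mode)
            errs = [(y - w * x) ** 2 for w in experts]
            m = min(errs)              # subtract min for numerical stability
            ps = [math.exp(-beta * (e - m)) for e in errs]
            z = sum(ps)
            # each expert learns in proportion to its responsibility
            for k in range(len(experts)):
                experts[k] += lr * (ps[k] / z) * (y - experts[k] * x) * x
```

With the competition annealed, each expert specializes on one source (the weights approach 2 and -1); at a fixed low beta both would instead average the two regimes.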
A Dynamic HMM for Online Segmentation of Sequential Data
 Advances in Neural Information Processing Systems 14 (NIPS 2001), 2002
Abstract

Cited by 26 (1 self)
We propose a novel method for the analysis of sequential data that exhibits an inherent mode switching. In particular, the data might be a non-stationary time series from a dynamical system that switches between multiple operating modes. Unlike other approaches, our method processes the data incrementally and without any training of internal parameters. We use an HMM with a dynamically changing number of states and an online variant of the Viterbi algorithm that performs an unsupervised segmentation and classification of the data on the fly, i.e. the method is able to process incoming data in real time. The main idea of the approach is to track and segment changes of the probability density of the data in a sliding window on the incoming data stream. The usefulness of the algorithm is demonstrated by an application to a switching dynamical system.
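The abstract's core idea, tracking changes of the data's distribution in a sliding window, can be caricatured in a few lines. The sketch below is not the paper's dynamic HMM or online Viterbi; it merely compares the means of two adjacent windows and opens a segment boundary when they differ by more than a threshold. Function name, window size, and threshold are all invented.

```python
def segment(stream, win=5, thresh=1.0):
    """Flag a boundary where two adjacent sliding windows disagree."""
    cuts = []
    i = win
    while i + win <= len(stream):
        left = sum(stream[i - win:i]) / win
        right = sum(stream[i:i + win]) / win
        if abs(right - left) > thresh:
            cuts.append(i)
            i += 2 * win    # skip past the detected change region
        else:
            i += 1
    return cuts

# Two operating modes glued together; the true switch is at index 10.
data = [0.0] * 10 + [3.0] * 10
cuts = segment(data)
```

The detector reports a single boundary within one window length of the true switch; a real online segmenter would track full densities rather than window means.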
Design of Neural Network Filters
 Electronics Institute, Technical University of Denmark, 1993
Abstract

Cited by 23 (12 self)
The subject of the present licentiate thesis is the design of neural network filters. Filters based on neural networks can be viewed as extensions of the classical linear adaptive filter, aimed at modeling nonlinear relationships. The emphasis is placed on a neural network implementation of the non-recursive, nonlinear adaptive model with additive noise. The aim is to clarify a number of phases involved in the design of neural network architectures for carrying out various "black-box" modeling tasks such as: system identification, inverse modeling, and time series prediction. The principal contributions comprise: the formulation of a neural-network-based canonical filter representation, which forms the basis for the development of an architecture classification scheme. Essentially, this concerns a distinction between global and local models. This allows a number of well-known neural network architectures to be classified and, furthermore, opens the possibility of developing entirely new structures. In this connection, a review of a number of well-known architectures is given, with particular emphasis on the treatment of the multilayer perceptron neural network.
On Langevin Updating in Multilayer Perceptrons
 Neural Computation, 1993
Abstract

Cited by 23 (1 self)
The Langevin updating rule, in which noise is added to the weights during learning, is presented and analyzed. It is well controlled and, being a natural extension to standard backpropagation learning, easily combined with other modifications of backpropagation. If the Hessian matrix is numerically ill-conditioned, Langevin updating converges faster than backpropagation and, probably, also higher-order algorithms. This is particularly important for multilayer perceptrons with many hidden layers, which tend to have ill-conditioned Hessians. In addition, Manhattan updating is shown to have a similar effect as Langevin updating.
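The rule itself is easy to sketch: a gradient step plus weight noise whose amplitude is annealed. The toy below minimizes a one-dimensional quadratic standing in for the training loss; the learning rate and noise schedule are illustrative, not the paper's.

```python
import math, random

random.seed(1)

# Langevin-style update on L(w) = (w - 3)^2: gradient descent with
# annealed Gaussian noise added to the weight update.
w = 0.0
lr = 0.05
for t in range(1, 2001):
    grad = 2.0 * (w - 3.0)
    noise = random.gauss(0.0, 0.5 / math.sqrt(t))  # decaying noise level
    w += -lr * grad + lr * noise
```

The noise helps escape flat or ill-conditioned regions early on, while the decay lets w settle near the minimum at 3.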
Adaptation of Fuzzy Inference System Using Neural Learning
2005
Abstract

Cited by 18 (2 self)
The integration of neural networks and fuzzy inference systems can be formulated into three main categories: cooperative, concurrent, and integrated neuro-fuzzy models. We present three different types of cooperative neuro-fuzzy models, namely fuzzy associative memories, fuzzy rule extraction using self-organizing maps, and systems capable of learning fuzzy set parameters. Different Mamdani and Takagi-Sugeno type integrated neuro-fuzzy systems are further introduced, with a focus on some of the salient features and advantages of the different types of integrated neuro-fuzzy models that have evolved during the last decade. Some discussions and conclusions are also provided towards the end of the chapter.
Evolutionary Neural Networks for Nonlinear Dynamics Modeling
 Lecture Notes in Computer Science, 1998
Abstract

Cited by 14 (0 self)
In this paper the evolutionary design of a neural network model for predicting nonlinear systems behavior is discussed. In particular, Breeder Genetic Algorithms are considered to provide the optimal set of synaptic weights of the network. The feasibility of the proposed neural model is demonstrated by predicting the Mackey-Glass time series. A comparison with Genetic Algorithms and the Back Propagation learning technique is performed. Keywords: Time Series Prediction, Artificial Neural Networks, Genetic Algorithms, Breeder Genetic Algorithms.
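A minimal sketch of the breeder-style idea, evolving a real-valued weight vector with truncation selection, intermediate recombination, and mutation, is shown below against a quadratic toy objective standing in for network prediction error. Population sizes, rates, and the fitness function are all illustrative.

```python
import random

random.seed(2)

def fitness(w):
    """Toy objective standing in for (negative) prediction error;
    the optimum is the all-ones weight vector."""
    return -sum((wi - 1.0) ** 2 for wi in w)

POP, DIM, ELITE = 30, 4, 10
pop = [[random.uniform(-2.0, 2.0) for _ in range(DIM)] for _ in range(POP)]
for gen in range(100):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:ELITE]                  # truncation selection
    children = []
    while len(children) < POP - ELITE:
        a, b = random.sample(parents, 2)
        # intermediate recombination: each gene between its parents
        child = [ai + random.random() * (bi - ai) for ai, bi in zip(a, b)]
        child[random.randrange(DIM)] += random.gauss(0.0, 0.1)  # mutation
        children.append(child)
    pop = parents + children
best = max(pop, key=fitness)
```

Keeping the elite unchanged makes the best fitness non-decreasing, so the population contracts toward the optimum while mutation supplies local search.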
Dynamic, evolving neuro-fuzzy inference systems
 IEEE Transactions on Fuzzy Systems, 2000
Abstract

Cited by 14 (6 self)
Key words: dynamic evolving fuzzy inference systems; online adaptive learning; clustering; time series prediction. Abstract. This paper introduces a new type of fuzzy inference system, denoted DENFIS (dynamic evolving neural-fuzzy system), for adaptive online learning, and its application to dynamic time series prediction. DENFIS evolves through incremental, hybrid (supervised/unsupervised) learning and accommodates new input data, including new features, new classes, etc., through local element tuning. New fuzzy rules are created and updated during the operation of the system. At each time moment the output of DENFIS is calculated through a fuzzy inference system based on the m most activated fuzzy rules, which are dynamically chosen from a fuzzy rule set. An approach is proposed for the dynamic creation of a first-order Takagi-Sugeno type fuzzy rule set for the DENFIS model. The fuzzy rules can be inserted into DENFIS before or during its learning process, and the rules can also be extracted from DENFIS during or after its learning process. An evolving clustering method (ECM), which is employed in the DENFIS model, is also introduced. It is demonstrated that DENFIS can effectively learn complex temporal sequences in an adaptive way and outperform some existing models.
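The one-pass flavor of an evolving clustering method can be sketched under simplifying assumptions: one-dimensional data, a fixed distance threshold `dthr`, and a running-mean center update. Names and the update rule are illustrative, not the published ECM.

```python
def evolving_clusters(stream, dthr=1.0):
    """One-pass clustering: each sample joins (and shifts) the nearest
    cluster if it is within dthr, otherwise it founds a new cluster."""
    centers, counts = [], []
    for x in stream:
        if centers:
            j = min(range(len(centers)), key=lambda k: abs(x - centers[k]))
            if abs(x - centers[j]) <= dthr:
                counts[j] += 1
                centers[j] += (x - centers[j]) / counts[j]  # running mean
                continue
        centers.append(x)   # open a new cluster around this sample
        counts.append(1)
    return centers

stream = [0.1, 0.2, 5.0, 5.2, 0.05, 9.9, 10.1]
centers = evolving_clusters(stream)
```

Because clusters are created on the fly, the number of rules (one per cluster in a DENFIS-like model) grows with the data rather than being fixed in advance.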
A Unifying View of Some Training Algorithms for Multilayer Perceptrons with FIR Filter Synapses
 Neural Networks for Signal Processing 4, 1995
Abstract

Cited by 13 (1 self)
Recent interest has arisen in deriving various neural network architectures for modelling time-dependent signals. A number of algorithms have been published for multilayer perceptrons with synapses described by finite impulse response (FIR) and infinite impulse response (IIR) filters (the latter case is also known as Locally Recurrent Globally Feedforward Networks). The derivations of these algorithms have used different approaches in calculating the gradients, and in this note we present a short but unifying account of how these different algorithms compare for the FIR case, both in derivation and in performance. New algorithms are subsequently presented. Simulations have been performed to benchmark these algorithms; results are compared for the Mackey-Glass chaotic time series against a number of other methods, including a standard multilayer perceptron and a local approximation method.
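A single neuron with an FIR filter synapse can be sketched directly: the synapse is a tap-delay line whose weighted sum feeds a nonlinearity. The class below uses fixed, illustrative coefficients; the algorithms compared in the paper differ in how gradients are propagated through such delay lines, which this sketch does not implement.

```python
import math

class FIRNeuron:
    """Neuron whose synapse is an FIR filter over the last T inputs,
    followed by a tanh activation."""
    def __init__(self, taps):
        self.taps = list(taps)          # FIR coefficients w[0..T-1]
        self.buf = [0.0] * len(taps)    # tap-delay line of past inputs

    def step(self, x):
        self.buf = [x] + self.buf[:-1]  # shift in the newest sample
        s = sum(w * u for w, u in zip(self.taps, self.buf))
        return math.tanh(s)

# Feed a unit impulse: the output traces the (squashed) FIR response.
n = FIRNeuron([0.5, 0.3, 0.2])
ys = [n.step(x) for x in [1.0, 0.0, 0.0, 0.0]]
```

After the impulse clears the three taps, the output returns to zero, which is the "finite" in finite impulse response.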
Segmentation and Identification of Drifting Dynamical Systems
 Proc. Neural Networks for Signal Processing VII IEEE Workshop, Amelia Island, USA, 1997
Abstract

Cited by 10 (3 self)
A method for the analysis of non-stationary time series with multiple operating modes is presented. In particular, it is possible to detect and to model a switching of the dynamics and also a less abrupt, time-consuming drift from one mode to another. This is achieved by an unsupervised algorithm that segments the data according to inherent modes, and a subsequent search through the space of possible drifts. An application to physiological wake/sleep data demonstrates that analysis and modeling of real-world time series can be improved when the drift paradigm is taken into account. In the case of wake/sleep data, we hope to gain more insight into the physiological processes that are involved in the transition from wake to sleep.