CiteSeerX
Results 1 - 10 of 59,837

Stock index forecast based on dynamic recurrent neural network trained with ...

by Fang Yixian, Wang Baowen, Wang Yongmao

Parallel Neural Network Training

by P. Andert, Thomas J. Bartolac
"... email: andert @ orion.oac.uci.edu ..."

Global Optimization for Neural Network Training

by Yi Shang, Benjamin W. Wah - IEEE Computer , 1996
"... In this paper, we study various supervised learning methods for training feed-forward neural networks. In general, such learning can be considered as a nonlinear global optimization problem in which the goal is to minimize a nonlinear error function that spans the space of weights using heuristic st ..."
Cited by 52 (11 self)
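The "nonlinear error function that spans the space of weights" mentioned in the abstract can be made concrete with a small sketch (my own illustration, not code from the paper): the sum-of-squares training error of a one-input, two-hidden-unit tanh network, viewed purely as a function of its flat weight vector. Minimizing such a function over all weights is the global optimization problem the paper studies.

```python
import math

def mse_error(weights, data):
    """Sum-of-squares training error of a tiny 1-input, 2-hidden-unit,
    1-output tanh network, as a function of the flat weight vector.

    weights = [w1, b1, w2, b2, v1, v2, c]  (7 free parameters, my layout)
    data    = iterable of (input, target) pairs
    """
    w1, b1, w2, b2, v1, v2, c = weights
    err = 0.0
    for x, t in data:
        h1 = math.tanh(w1 * x + b1)   # hidden unit 1
        h2 = math.tanh(w2 * x + b2)   # hidden unit 2
        y = v1 * h1 + v2 * h2 + c     # linear output layer
        err += (y - t) ** 2
    return err
```

Because the hidden units are nonlinear, this error surface is non-convex in the weights, which is why global rather than purely local methods are of interest.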

Behaviour in 0 of the Neural Networks Training Cost

by Cyril Goutte , 1997
"... We study the behaviour in zero of the derivatives of the cost function used when training non-linear neural networks. It is shown that a fair number of first, second and higher order derivatives vanish in zero, validating the belief that 0 is a peculiar and potentially harmful location. These calc ..."

Trajectory Methods for Neural Network Training

by Y. G. Petalas, D. K. Tasoulis, M. N. Vrahatis - In: Proceedings of the IASTED International Conference on Artificial Intelligence and Applications (AIA’04) , 2004
"... A new class of methods for training multilayer feedforward neural networks is proposed. The proposed class of methods draws from methods for solving initial value problems of ordinary differential equations, and belongs to the subclass of trajectory methods. The training of a multilayer feedforward n ..."
Cited by 1 (1 self)
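The link to initial value problems can be sketched as follows (a minimal illustration under my own assumptions, not the authors' method): training is viewed as the gradient-flow ODE w'(t) = -∇E(w(t)) with the initial weights as starting condition, and the simplest trajectory method integrates it with explicit Euler.

```python
def gradient_flow_train(grad, w0, dt=0.01, steps=2000):
    """Integrate the initial value problem  w'(t) = -grad E(w(t)),  w(0) = w0,
    with explicit Euler. Each Euler step is a plain gradient-descent update;
    higher-order integrators would give other members of the family."""
    w = list(w0)
    for _ in range(steps):
        g = grad(w)
        w = [wi - dt * gi for wi, gi in zip(w, g)]
    return w
```

Replacing Euler with, say, a Runge-Kutta scheme changes how faithfully the discrete iterates follow the continuous trajectory, which is the design axis this class of methods explores.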

Neighbor Annealing for Neural Network Training

by V Scott Gordon
"... An extremely simple technique for training the weights of a feedforward multilayer neural network is described and tested. The method, dubbed "neighbor annealing", is a simple random walk through weight space with a gradually decreasing step size. The approach is compared against ..."
Cited by 1 (1 self)
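As described, the method is simple enough to sketch directly; the geometric step-size schedule, greedy acceptance rule, and parameter values below are my own illustrative choices, not necessarily the paper's.

```python
import random

def neighbor_annealing(cost, weights, start_step=1.0, end_step=0.001, iters=5000):
    """Random walk through weight space with a gradually decreasing step size.

    Each iteration perturbs every weight by a uniform random amount within the
    current step size and keeps the move only if the cost improves."""
    best = cost(weights)
    decay = (end_step / start_step) ** (1.0 / iters)  # geometric annealing schedule
    step = start_step
    for _ in range(iters):
        candidate = [w + random.uniform(-step, step) for w in weights]
        c = cost(candidate)
        if c < best:           # greedy acceptance of improving neighbors
            weights, best = candidate, c
        step *= decay          # shrink the neighborhood radius
    return weights, best
```

The shrinking step size plays the role of the temperature in simulated annealing: early steps explore broadly, late steps fine-tune.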

Genetic Algorithms for Neural Network Training on Transputers

by Bernhard Ömer, supervisor Dr. Graham M. Megson , 1995
"... The use of both genetic algorithms and artificial neural networks was originally motivated by the astonishing success of these concepts in their biological counterparts. Despite their totally different approaches, both can merely be seen as optimisation methods which are used in a wide range of ap ..."
Cited by 1 (0 self)
of applications, where traditional methods often prove to be unsatisfactory. This project deals with how genetic methods can be used as training strategies for neural networks and how they can be implemented on a distributed memory system. The main point of interest is whether they provide a reasonable
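A genetic training strategy of the kind the abstract describes can be sketched as follows (a toy single-machine illustration under my own assumptions; the project itself targets a distributed transputer implementation, and all parameters here are invented for the example):

```python
import random

def ga_train(cost, dim, pop_size=30, gens=150, sigma=0.3):
    """Toy genetic algorithm over real-valued weight vectors:
    tournament selection, uniform crossover, Gaussian mutation, elitism."""
    pop = [[random.uniform(-1.0, 1.0) for _ in range(dim)] for _ in range(pop_size)]

    def tournament():
        a, b = random.sample(pop, 2)
        return a if cost(a) < cost(b) else b

    for _ in range(gens):
        elite = min(pop, key=cost)      # carry the best vector over unchanged
        nxt = [elite]
        while len(nxt) < pop_size:
            p, q = tournament(), tournament()
            child = [pi if random.random() < 0.5 else qi for pi, qi in zip(p, q)]
            nxt.append([c + random.gauss(0.0, sigma) for c in child])
        pop = nxt
    return min(pop, key=cost)
```

Because each individual's fitness can be evaluated independently, the fitness loop parallelizes naturally, which is what makes distributed-memory hardware attractive for this approach.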

Online Neural Network Training for Automatic Ischemia Episode Detection

by D. K. Tasoulis, L. Vladutu, V. P. Plagianakos, A. Bezerianos, M. N. Vrahatis - In Leszek Rutkowski, Jörg H. Siekmann, Ryszard Tadeusiewicz, and Lotfi , 2003
"... Myocardial ischemia is caused by a lack of oxygen and nutrients to the contractile cells and may lead to myocardial infarction with its severe consequence of heart failure and arrhythmia. An electrocardiogram (ECG) represents a recording of changes occurring in the electrical potentials between ..."
... propose a new classification methodology that draws from the disciplines of clustering and artificial neural networks, and apply it to the problem of myocardial ischemia detection. The results obtained are promising.

A hybrid particle swarm optimization–back-propagation algorithm for feedforward neural network training

by Jing-ru Zhang, Jun Zhang, Tat-ming Lok, Michael R. Lyu

Neural network training with constrained integer weights

by V. P. Plagianakos, M. N. Vrahatis , 1999
"... In this contribution we present neural network training algorithms, which are based on the differential evolution (DE) strategies introduced by Storn and Price [Journal of Global Optimization 11, 341-359, 1997]. These strategies are applied to train neural networks with small integer weigh ..."
Cited by 17 (15 self)
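One plausible reading of "DE with small integer weights" is to run a standard DE/rand/1/bin loop and round each trial vector back into a small integer range; this sketch is my own reconstruction of that idea, not necessarily the authors' exact scheme, and the population size, F, CR, and range are illustrative.

```python
import random

def de_integer_train(cost, dim, low=-8, high=8, pop_size=20, F=0.7, CR=0.9, gens=300):
    """Differential evolution (DE/rand/1/bin) over integer weight vectors.

    Mutant components a + F*(b - c) are rounded and clipped back into the
    small integer range [low, high] before greedy one-to-one selection."""
    clip = lambda v: max(low, min(high, int(round(v))))
    pop = [[random.randint(low, high) for _ in range(dim)] for _ in range(pop_size)]
    scores = [cost(p) for p in pop]
    for _ in range(gens):
        for i in range(pop_size):
            # three distinct partners, all different from the target index i
            a, b, c = random.sample([j for j in range(pop_size) if j != i], 3)
            jrand = random.randrange(dim)   # guarantees at least one mutated gene
            trial = []
            for j in range(dim):
                if random.random() < CR or j == jrand:
                    trial.append(clip(pop[a][j] + F * (pop[b][j] - pop[c][j])))
                else:
                    trial.append(pop[i][j])
            s = cost(trial)
            if s <= scores[i]:              # greedy replacement
                pop[i], scores[i] = trial, s
    best = min(range(pop_size), key=scores.__getitem__)
    return pop[best], scores[best]
```

Keeping every individual integer-valued throughout the run is what distinguishes this from training with real weights followed by quantization.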

Developed at and hosted by The College of Information Sciences and Technology

© 2007-2019 The Pennsylvania State University