Discrete all-positive multilayer perceptrons for optical implementation (1998)

by P Moerland, E Fiesler, I Saxena
Results 1 - 4 of 4

Neural Network Adaptations to Hardware Implementations

by Perry Moerland, Emile Fiesler, 1997
"... In order to take advantage of the massive parallelism offered by artificial neural networks, hardware implementations are essential. However, most standard neural network models are not very suitable for implementation in hardware and adaptations are needed. In this section an overview is given of t ..."
Abstract - Cited by 19 (1 self) - Add to MetaCart
In order to take advantage of the massive parallelism offered by artificial neural networks, hardware implementations are essential. However, most standard neural network models are not very suitable for implementation in hardware, and adaptations are needed. This section gives an overview of the various issues that are encountered when mapping an ideal neural network model onto a compact and reliable neural network hardware implementation, such as quantization, handling non-uniformities and non-ideal responses, and restraining computational complexity. Furthermore, a broad range of hardware-friendly learning rules is presented, which allow for simpler and more reliable hardware implementations. The relevance of these neural network adaptations to hardware is illustrated by their application in existing hardware implementations.
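
The quantization issue mentioned in this abstract is often handled by keeping a high-precision master copy of the weights for training while the forward pass uses a discretized copy, mirroring what limited-precision hardware would compute. The sketch below (hypothetical Python, not from the paper; the function name, the 7-level weight set, and the array sizes are assumptions) illustrates that scheme.

    import numpy as np

    def quantize(w, n_levels=7, w_max=1.0):
        """Round each weight to the nearest of n_levels uniformly
        spaced values in [-w_max, w_max] (a symmetric discrete
        weight set, as a limited-precision chip would store)."""
        step = 2.0 * w_max / (n_levels - 1)
        return np.clip(np.round(w / step) * step, -w_max, w_max)

    rng = np.random.default_rng(0)
    master_w = rng.normal(scale=0.5, size=(3, 4))  # high-precision "master" weights
    x = rng.normal(size=4)
    y = quantize(master_w) @ x                     # forward pass uses the quantized copy

During training, updates would be accumulated in master_w while recall always goes through quantize, so the network learns with the same discretization it will face in hardware.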

On the capabilities of neural networks using limited precision weights

by Sorin Draghici - Neural Netw, 2002
"... This paper analyzes some aspects of the computational power of neural networks using integer weights in a very restricted range. Using limited range integer values opens the road for efficient VLSI implementations because i) a limited range for the weights can be translated into reduced storage req ..."
Abstract - Cited by 14 (0 self) - Add to MetaCart
This paper analyzes some aspects of the computational power of neural networks using integer weights in a very restricted range. Using limited-range integer values opens the road for efficient VLSI implementations because i) a limited range for the weights can be translated into reduced storage requirements and ii) integer computation can be implemented more efficiently than floating-point computation. The paper concentrates on classification problems and shows that, if the weights are restricted in a drastic way (both range and precision), the existence of a solution is no longer to be taken for granted. The paper presents an existence result which relates the difficulty of the problem, as characterized by the minimum distance between patterns of different classes, to the weight range necessary to ensure that a solution exists. This result allows us to calculate a weight range for a given category of problems and be confident that the network has the capability to solve the given problems with integer weights in that range. Worst-case lower bounds are given for the number of entropy bits and weights necessary to solve a given problem. Various practical issues, such as the relationship between information entropy bits and storage bits, are also discussed. The approach presented here uses a worst-case analysis; therefore, it tends to overestimate the values obtained for the weight range, the number of bits, and the number of weights. The paper also presents some statistical considerations that can be used to trade the absolute confidence of successful training for values more appropriate for practical use. The approach is also discussed in the context of the VC-complexity.
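
The existence result described above can be made concrete with a toy sketch (an assumed illustration in Python, not the paper's construction): for a small two-pattern classification problem, brute-force search over all integer weight vectors in a given range shows that a small inter-class distance can rule out any solution until the range is enlarged.

    import itertools
    import numpy as np

    def separable_with_integer_weights(X, labels, w_range):
        """Brute-force search for an integer-weight hyperplane with
        w.x + b > 0 for class +1 and w.x + b < 0 for class -1,
        all coefficients drawn from {-w_range, ..., w_range}."""
        coeffs = range(-w_range, w_range + 1)
        for w_and_b in itertools.product(coeffs, repeat=X.shape[1] + 1):
            w, b = np.array(w_and_b[:-1]), w_and_b[-1]
            s = X @ w + b
            if np.all(np.sign(s) == labels):
                return w, b
        return None

    # Two patterns only eps apart; a smaller eps demands a larger range.
    eps = 0.1
    X = np.array([[1.0, 1.0], [1.0 + eps, 1.0]])
    labels = np.array([-1, 1])
    print(separable_with_integer_weights(X, labels, w_range=3))   # None
    print(separable_with_integer_weights(X, labels, w_range=11))  # a solution

Here w_range=3 finds no separating hyperplane because the two patterns are only 0.1 apart, while w_range=11 does; shrinking eps further pushes the required range up again, which is the qualitative content of the paper's bound.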

Citation Context

... 1994; Hohfeld & Fahlman, 1992; Tang & Kwan, 1993; Vincent & Myers, 1992; Xie & Jabri, 1991) or even powers of two weights (Dundar & Rose, 1995; Hollis & Paulos, 1994; Kwan & Tang, 1992, 1993; Marchesi, Orlandi, Piazza, Pollonara, & Uncini, 1990; Marchesi, Orlandi, Piazza, & Uncini, 1993; Simard & Graf, 1994; Tang & Kwan, 1993). This latter type of weights is important because it eliminates the need for multiplications in binary implementations. Another example of weight restrictions imposed by the type of implementation is the category of neural networks using discrete and positive weights (Moerland, Fiesler, & Saxena, 1998; Fiesler et al., 1990). Such networks are particularly suitable for optical implementations. While the existing algorithms offer a way of training these weights in order to achieve a desired goal weight state, there are relatively few results that address the problem of whether a goal state exists for a given problem and a given architecture in the context of limited range integer weights. The scope of this paper is the class of VLSI-friendly neural networks (i.e. networks using LPIW) and their capabilities in solving classification problems (see Fig. 1). This paper will show that: i) if the r...
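
The point above about powers-of-two weights eliminating multiplications can be illustrated with a short sketch (hypothetical Python, written for clarity rather than as hardware code): a weight of the form sign * 2**k turns every weight-input product into a bit shift plus an optional sign change.

    def po2_product(x_fixed, sign, k):
        """Product of a fixed-point integer input with a weight
        sign * 2**k, computed with a shift and a sign flip only;
        this is why power-of-two weights remove the need for
        hardware multipliers."""
        shifted = x_fixed << k if k >= 0 else x_fixed >> -k
        return -shifted if sign < 0 else shifted

    print(po2_product(13, +1, 3))    # 13 * 8 = 104
    print(po2_product(13, -1, -2))   # 13 * -0.25 -> -3 (right shift truncates)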

Handwritten Digit Recognition with Binary Optical Perceptron

by I. Saxena, P. Moerland, E. Fiesler, A. Pourzand - in Artificial Neural Networks - ICANN'97, Lecture Notes in Computer Science, 1997
"... . Binary weights are favored in electronic and optical hardware implementations of neural networks as they lead to improved system speeds. Optical neural networks based on fast ferroelectric liquid crystal binary level devices can benefit from the many orders of magnitudes improved liquid crystal re ..."
Abstract - Cited by 1 (1 self) - Add to MetaCart
Binary weights are favored in electronic and optical hardware implementations of neural networks as they lead to improved system speeds. Optical neural networks based on fast ferroelectric liquid crystal binary-level devices can benefit from liquid crystal response times that are improved by many orders of magnitude. An optimized learning algorithm for all-positive perceptrons is simulated on a limited data set of handwritten digits and the resultant network is implemented optically. First gray-scale and then binary inputs and weights are used in recall mode. On comparing the results for the example data set, the network with binarized inputs and weights shows almost no loss in performance. (IDIAP-RR 97-15)
1 Introduction: In hardware implementations of neural networks, it is attractive to use binary weights and inputs. In electronic implementations, reduction of chip area, reduced computation, and improved system speed motivate the use of binary or a minimum number of dis...
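
The recall mode described in this abstract admits a compact sketch (an assumed illustration, not the authors' implementation): with binary inputs and all-positive binary weights, a neuron's weighted sum is just the count of coincident ones, which suits incoherent optics where light intensities are inherently non-negative.

    import numpy as np

    def binary_allpositive_recall(x_bits, w_bits, threshold):
        """Recall pass of an all-positive binary perceptron neuron:
        inputs and weights are in {0, 1}, so the activation is a
        count of coincident ones compared against a threshold.
        No quantity ever needs to be negative."""
        activation = np.sum(x_bits & w_bits)   # popcount of the AND
        return int(activation >= threshold)

    x = np.array([1, 0, 1, 1, 0, 1], dtype=np.uint8)
    w = np.array([1, 1, 1, 0, 0, 1], dtype=np.uint8)
    print(binary_allpositive_recall(x, w, threshold=3))   # -> 1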

Citation Context

...ated read beam. 2.2 Training Algorithm Optimization: There are three important aspects of algorithm development that we have addressed which now allow optical multilayer neural network implementations [7]. These concern the questions of the usefulness of the non-linearities offered by LCLVs, of implementing the well-known only-positive-weights limitation of incoherent optical processing systems, and d...

Neural Network Adaptations to Hardware Implementations

by Perry Moerland, Emile Fiesler
"... ..."
Abstract - Add to MetaCart
Abstract not found