unknown title
"... In this thesis we model the mammalian cerebral cortex with attractor neural networks and study the parallel implementations of these models. First, we review the size, structure, and scaling laws of the cerebral cortex of five mammals; mouse, rat, cat, macaque, and human. Characteristics of the cort ..."
Abstract
- Add to MetaCart
(Show Context)
In this thesis we model the mammalian cerebral cortex with attractor neural networks and study parallel implementations of these models. First, we review the size, structure, and scaling laws of the cerebral cortex of five mammals: mouse, rat, cat, macaque, and human. Characteristics of the cortex such as timescales, activity rates, and connectivity are also investigated. Based on how the cortex is vertically structured and modularized, we propose a generic model of cortex. In this model we assume that the cortex, to a first approximation, operates as a fixed-point attractor memory. We review the field of attractor neural networks and focus on a special type called Potts neural networks. Second, we implement the generic model of cortex with a BCPNN (Bayesian Confidence Propagating Neural Network). The cortical BCPNN model is formulated as an attractor neural network and is used mainly as an autoassociative memory. Based on the literature review and simulation experiments, we analyze the model with regard to storage capacity and scaling characteristics. The analysis provides design principles and constraints for cortex-sized attractor neural networks.
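The fixed-point attractor assumption above can be illustrated with a minimal Hopfield-style autoassociative memory — a simpler relative of the BCPNN, not the thesis's actual learning rule. Patterns are stored as Hebbian outer products, and recall iterates the network until it settles into a fixed point. All sizes and patterns here are illustrative.

```python
import numpy as np

def store(patterns):
    """Hebbian outer-product storage of +/-1 patterns (zero diagonal)."""
    n = patterns.shape[1]
    w = patterns.T @ patterns / n
    np.fill_diagonal(w, 0.0)
    return w

def recall(w, probe, max_iters=50):
    """Iterate synchronous threshold updates until a fixed point is reached."""
    state = probe.copy()
    for _ in range(max_iters):
        nxt = np.where(w @ state >= 0, 1, -1)
        if np.array_equal(nxt, state):   # fixed point: the attractor
            break
        state = nxt
    return state

rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(3, 64))   # 3 random patterns, 64 units
w = store(patterns)

# Corrupt 10 of 64 bits of the first pattern and let the dynamics clean it up.
probe = patterns[0].copy()
flip = rng.choice(64, size=10, replace=False)
probe[flip] *= -1
restored = recall(w, probe)
```

Storing three patterns in 64 units is well inside the classic ~0.14 N capacity bound, so recall from a moderately corrupted probe converges back to the stored pattern; the thesis's storage-capacity analysis asks how such bounds scale to cortex-sized networks.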
Adaptive Computation Company
"... This paper presents an overview of current ongoing research and design efforts conducted by Intelligent Optical Systems, Inc. in the area of hardware-based color segmentation. We discuss the specifics of the design of a microchip that combines a hardwired hybrid neural network with on-chip color ima ..."
Abstract
- Add to MetaCart
This paper presents an overview of current research and design efforts conducted by Intelligent Optical Systems, Inc. in the area of hardware-based color segmentation. We discuss the specifics of the design of a microchip that combines a hardwired hybrid neural network with on-chip color imaging. Several preliminary tests show the high approximation ability of our scheme. The single-chip implementation has many advantages. The final product will consist of an RGB pixel array with infinite (analog) color depth and a neural network capable of high-speed image segmentation.
Fast Triggering in High Energy Physics Experiments Using Hardware Neural Networks
"... Abstract—High Energy Physics experiments require high-speed triggering systems capable of performing complex pattern recognition at rates of Megahertz to Gigahertz. Neural networks implemented in hardware have been the solution of choice for certain experiments. The neural triggering problem is pres ..."
Abstract
- Add to MetaCart
(Show Context)
Abstract—High Energy Physics experiments require high-speed triggering systems capable of performing complex pattern recognition at rates of megahertz to gigahertz. Neural networks implemented in hardware have been the solution of choice for certain experiments. The neural triggering problem is presented here via a detailed look at the H1 level-2 trigger at the HERA accelerator in Hamburg, Germany, followed by a section on the importance of hardware preprocessing for such systems, and finally some new architectural ideas for using field-programmable gate arrays in very high speed neural network triggers at upcoming experiments.
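A hardware trigger must deliver its accept/reject bit in fixed, short latency, which is why such designs avoid floating point. The sketch below shows the flavor of a pipelined decision: a tiny fixed-point multilayer perceptron evaluated with only integer multiply-accumulate and shifts, as it might be mapped onto an FPGA. The feature values, weights, and thresholds are hypothetical, not those of the H1 trigger.

```python
FRAC_BITS = 8                      # Q8 fixed-point: value * 256 stored as int

def to_fixed(x):
    return int(round(x * (1 << FRAC_BITS)))

def relu(x):
    return x if x > 0 else 0

def trigger(inputs, w_hidden, b_hidden, w_out, b_out, threshold):
    """Integer-only forward pass: ReLU hidden layer, linear output, compare."""
    hidden = []
    for w_row, b in zip(w_hidden, b_hidden):
        # Q8 * Q8 products accumulate in Q16; bias is shifted up to match.
        acc = sum(wi * xi for wi, xi in zip(w_row, inputs)) + (b << FRAC_BITS)
        hidden.append(relu(acc >> FRAC_BITS))   # rescale back down to Q8
    out = sum(wi * hi for wi, hi in zip(w_out, hidden)) + (b_out << FRAC_BITS)
    return (out >> FRAC_BITS) > threshold       # the 1-bit trigger decision

# Hypothetical digitized detector features and illustrative toy weights.
features = [to_fixed(v) for v in (0.9, 0.1, 0.7, 0.3)]
w_hidden = [[to_fixed(v) for v in row] for row in
            [(0.5, -0.2, 0.4, 0.1), (-0.3, 0.6, 0.2, -0.1)]]
b_hidden = [to_fixed(0.05), to_fixed(-0.1)]
w_out = [to_fixed(0.8), to_fixed(0.5)]
b_out = to_fixed(-0.2)
accept = trigger(features, w_hidden, b_hidden, w_out, b_out, to_fixed(0.1))
```

In actual hardware every multiply-accumulate in a layer runs in parallel, so the latency is a fixed number of clock cycles regardless of input — the property the triggering application depends on.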
A Review of Implementation Techniques For
"... Abstract—This paper presents a review on implementation ..."
BCPNN Implemented with Fixed-Point Arithmetic
"... A fixed-point arithmetic implementation of the BCPNN (Bayesian Confidence Propagating Neural Network) is described, implemented, and evaluated. Moving averages (MAs) that operate on a logarithmic scale are described. Probabilistic fractional bits (PFB) are used to save precision in the implementatio ..."
Abstract
- Add to MetaCart
(Show Context)
A fixed-point arithmetic implementation of the BCPNN (Bayesian Confidence Propagating Neural Network) is described, implemented, and evaluated. Moving averages (MAs) that operate on a logarithmic scale are described. Probabilistic fractional bits (PFB) are used to save precision in the implementation. The results show that, given enough precision (e.g., 8 bits), the BCPNN implemented with integers and fixed-point arithmetic has a storage capacity as good as that of the floating-point implementation. The precision used in the connections can be traded against the number of connections. The scaling properties of the implementation are evaluated.
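One way to read "probabilistic fractional bits" is as stochastic rounding: the fractional bits that integer arithmetic would truncate are instead carried with probability equal to their value, so each update is unbiased in expectation. The sketch below applies that idea to an integer exponential moving average; the Q8 encoding, the smoothing factor, and the function names are assumptions for illustration, not the paper's exact scheme.

```python
import random

def stochastic_round(value, scale):
    """Divide value by scale, keeping the discarded remainder
    probabilistically so the rounding is unbiased in expectation."""
    q, r = divmod(value, scale)
    return q + (1 if random.random() < r / scale else 0)

def fixed_point_ma(samples, init=0, alpha_den=16):
    """Exponential moving average  new = old + (sample - old) / alpha_den,
    computed entirely in integers; the fractional bits that plain
    truncation would discard are retained probabilistically."""
    ma = init
    for s in samples:
        ma += stochastic_round(s - ma, alpha_den)
    return ma

random.seed(1)
q8 = lambda x: int(round(x * 256))        # Q8 fixed-point encoding
stream = [q8(0.75)] * 200                 # constant input signal
result = fixed_point_ma(stream) / 256     # decode back to a real value
```

With plain truncation the average would stall as soon as the residual error dropped below one update step; the probabilistic carry lets it close that last gap, which is how precision is "saved" without widening the integer words.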
Analysis of Analog Neural Network Model with CMOS Multipliers
"... Abstract. The analog neural networks have some very useful advantages in comparison with digital neural net-work, but recent implementation of discrete elements gives not the possibility for realizing completely these advan-tages. The reason of this is the great variations of discrete semiconductors ..."
Abstract
- Add to MetaCart
(Show Context)
Abstract. Analog neural networks have some very useful advantages over digital neural networks, but implementations based on discrete elements do not allow these advantages to be fully realized. The reason is the large variation in the characteristics of discrete semiconductor devices. VLSI implementation of neural network algorithms is a new direction in the development and application of analog neural networks. Analog design can be very difficult because of the need to compensate for variations in manufacturing, temperature, etc., so it is necessary to study the characteristics and effectiveness of such implementations. In this article, the influence of parameter variation on the behavior of an analog neural network is investigated.
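The kind of study the abstract describes can be emulated in software with a Monte Carlo sweep: perturb every weight with a multiplicative gain mismatch, standing in for device-to-device variation in the analog multipliers, and measure how far the outputs drift from the nominal design. The network, mismatch model, and sizes below are illustrative assumptions, not the article's circuit.

```python
import numpy as np

def forward(w, x):
    """Single tanh layer standing in for an analog multiplier array."""
    return np.tanh(w @ x)

rng = np.random.default_rng(42)
w_nominal = rng.normal(0, 1, size=(4, 8))    # nominal (designed) weights
x = rng.normal(0, 1, size=8)                 # a fixed test input
y_nominal = forward(w_nominal, x)

def output_spread(sigma, trials=1000):
    """Mean worst-case output deviation under multiplicative weight
    mismatch with relative standard deviation sigma."""
    devs = []
    for _ in range(trials):
        gain = rng.normal(1.0, sigma, size=w_nominal.shape)
        y = forward(w_nominal * gain, x)
        devs.append(np.max(np.abs(y - y_nominal)))
    return float(np.mean(devs))

spreads = {}
for sigma in (0.01, 0.05, 0.10):
    spreads[sigma] = output_spread(sigma)
    print(f"gain mismatch {sigma:.0%}: "
          f"mean peak output deviation {spreads[sigma]:.4f}")
```

Sweeping the mismatch level this way shows how quickly output error grows with device variation, which is the trade-off an analog VLSI designer must budget for.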