Results 1–10 of 14
FPGA implementations of neural networks: a survey of a decade of progress
in: Proceedings of the 13th International Conference on Field Programmable Logic and Applications (FPL 2003), 2003
Cited by 33 (0 self)
Abstract. The first successful FPGA implementation [1] of artificial neural networks (ANNs) was published a little over a decade ago. It is timely to review the progress that has been made in this research area. This brief survey provides a taxonomy for classifying FPGA implementations of ANNs. Different implementation techniques and design issues are discussed. Future research trends are also presented.
Passive Dendrites Enable Single Neurons to Compute Linearly Nonseparable Functions
Cited by 6 (2 self)
Local supralinear summation of excitatory inputs occurring in pyramidal cell dendrites, the so-called dendritic spikes, results in independent spiking dendritic subunits, which turn pyramidal neurons into two-layer neural networks capable of computing linearly nonseparable functions, such as the exclusive OR. Other neuron classes, such as interneurons, may possess only a few independent dendritic subunits, or only passive dendrites where input summation is purely sublinear, and where dendritic subunits are only saturating. To determine if such neurons can also compute linearly nonseparable functions, we enumerate, for a given parameter range, the Boolean functions implementable by a binary neuron model with a linear subunit and either a single spiking or a saturating dendritic subunit. We then analytically generalize these numerical results to an arbitrary number of nonlinear subunits. First, we show that a single nonlinear dendritic subunit, in addition to the somatic nonlinearity, is sufficient to compute linearly nonseparable functions. Second, we analytically prove that, with a sufficient number of saturating dendritic subunits, a neuron can compute all functions computable with purely excitatory inputs. Third, we show that these linearly nonseparable functions can be implemented with at least two strategies: one where a dendritic subunit is sufficient to trigger a somatic spike; another where somatic spiking requires the cooperation of multiple dendritic subunits. We formally prove that implementing the latter architecture is possible with both types of dendritic subunits whereas the former is only possible with spiking dendrites. Finally, we show how linearly
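The two-subunit architecture described in this abstract can be illustrated with a minimal sketch. The weights and thresholds below are illustrative choices, not taken from the paper, and the negative coupling of the dendritic spike onto the soma is a simplification for illustration: a spiking dendritic subunit that fires only on coincident input lets a single binary neuron compute XOR, a linearly nonseparable function.

```python
def step(v):
    """Heaviside threshold: 1 if v >= 0, else 0."""
    return 1 if v >= 0 else 0

def dendritic_subunit(x1, x2):
    # Spiking dendrite: fires only on coincident input (x1 = x2 = 1).
    return step(x1 + x2 - 2)

def neuron_xor(x1, x2):
    # Soma sums the raw (linear) inputs; the dendritic spike vetoes
    # the somatic response when both inputs are active, yielding XOR.
    return step(x1 + x2 - 2 * dendritic_subunit(x1, x2) - 1)

truth_table = [neuron_xor(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]]
```

A purely linear threshold unit cannot produce this truth table; the single nonlinear subunit is what makes the difference, matching the paper's first result.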
Mathematical Aspects of Neural Networks
in: European Symposium on Artificial Neural Networks, 2003
Cited by 6 (4 self)
In this tutorial paper about mathematical aspects of neural networks, we will focus on two directions: on the one hand, we will motivate standard mathematical questions and well-studied theory of classical neural models used in machine learning. On the other hand, we collect some recent theoretical results (as of the beginning of 2003) in the respective areas. Thereby, we follow the dichotomy offered by the overall network structure and restrict ourselves to feedforward networks, recurrent networks, and self-organizing neural systems, respectively.
Implicit simulations using messaging protocols
in: Computers and Physics
Cited by 4 (0 self)
A novel algorithm for performing parallel, distributed computer simulations on the Internet using IP control messages is introduced. The algorithm employs carefully constructed ICMP packets which enable the required computations to be completed as part of the standard IP communication protocol. After providing a detailed description of the algorithm, experimental applications in the areas of stochastic neural networks and deterministic cellular automata are discussed. As an example of the algorithm's potential power, a simulation of a deterministic
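The abstract does not detail how computation is encoded into ICMP traffic, but its basic building block — a well-formed ICMP echo request carrying an arbitrary payload — can be sketched with the standard library. The packet layout follows RFC 792 and the checksum RFC 1071; the payload contents below are a made-up placeholder, not the paper's encoding:

```python
import struct

def inet_checksum(data: bytes) -> int:
    # One's-complement Internet checksum over 16-bit words (RFC 1071).
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def icmp_echo(ident: int, seq: int, payload: bytes) -> bytes:
    # Type 8 (echo request), code 0; checksum is computed over the
    # whole message with the checksum field zeroed first.
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)
    csum = inet_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload

pkt = icmp_echo(0x1234, 1, b"state:0101")
```

A receiver verifies such a packet by summing it checksum-and-all: for a correctly checksummed message the folded one's-complement sum is 0xFFFF, so `inet_checksum` over the full packet returns 0. Actually sending it would require a raw socket and elevated privileges.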
An Efficient Hardware Architecture for a Neural Network Activation Function Generator
Cited by 3 (1 self)
Abstract. This paper proposes an efficient hardware architecture for a function generator suitable for an artificial neural network (ANN). A spline-based approximation function is designed that provides a good trade-off between accuracy and silicon area, whilst also being inherently scalable and adaptable for numerous activation functions. This has been achieved by using a minimax polynomial and through optimal placement of the approximating polynomials based on the results of a genetic algorithm. The approximation error of the proposed method compares favourably to all related research in this field. Efficient hardware multiplication circuitry is used in the implementation, which reduces the area overhead and increases the throughput.
Implementing neural models in silicon
, 2004
Cited by 1 (0 self)
Neural models are used in both computational neuroscience and in pattern recognition. The aim of the first is understanding of real neural systems, and of the second is gaining better, possibly brain-like performance for systems being built. In both cases, the highly parallel nature of the neural system contrasts with the sequential nature of computer systems, resulting in slow and complex simulation software. More direct implementation in hardware (whether digital or analogue) holds out the promise of faster emulation both because hardware implementation is inherently faster than software, and because the operation is much more parallel. There are costs to this: modifying the system (for example to test out variants of the system) is much harder when a full application specific integrated circuit has been built. Fast emulation can permit direct incorporation of a neural model into a system, permitting real-time input and output. Appropriate selection of implementation technology can help to make interfacing the system to external devices simpler. We review the technologies involved, and discuss some example systems. Why implement neural models in silicon? There are two primary reasons for implementing neural models: one is to attempt to gain better, and possibly
Analysis on complexity of neural networks using integer weights
in: Applied Mathematics & Information Sciences, 2012
Cited by 1 (0 self)
Abstract: Integer neural networks have already been applied extensively, but the results of such applications have depended on their operators, owing to the lack of theoretical guidance. In this paper, we introduce a theoretical result showing that an integer network can perform well at approximating continuous functions. First, we make a quantitative analysis of an integer network's capability of approximating a function from function spaces or . Then two new formulas are given to calculate the approximation error. Correspondingly, we give two new formulas to calculate the number of neurons in a three-layer network, which is of significance for constructing a reasonable network in engineering practice.
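The abstract's error formulas are not given here, but the underlying idea — representing weights as integers at a fixed power-of-two scale, with the approximation error shrinking as the scale grows — can be sketched as follows. The network weights below are arbitrary made-up values, not taken from the paper:

```python
def relu(v):
    return max(0.0, v)

# A tiny fixed one-hidden-layer ReLU network with real-valued weights
# (hypothetical values chosen only for illustration).
HIDDEN = [(1.37, -0.52), (-0.81, 0.33), (0.94, 0.10)]  # (weight, bias) pairs
OUT = [0.62, -1.05, 0.47]                              # output weights

def net(x, hidden, out):
    return sum(v * relu(w * x + b) for (w, b), v in zip(hidden, out))

def quantize(hidden, out, bits):
    # Replace each weight w by round(w * 2**bits) / 2**bits, i.e. an
    # integer numerator at a fixed power-of-two denominator.
    s = 2 ** bits
    qh = [(round(w * s) / s, round(b * s) / s) for w, b in hidden]
    qo = [round(v * s) / s for v in out]
    return qh, qo

def max_error(bits, samples=200):
    # Worst-case output deviation on [-1, 1] between the real-weight
    # network and its integer-quantized version.
    qh, qo = quantize(HIDDEN, OUT, bits)
    xs = [-1 + 2 * i / samples for i in range(samples + 1)]
    return max(abs(net(x, HIDDEN, OUT) - net(x, qh, qo)) for x in xs)

coarse, fine = max_error(2), max_error(8)
```

The paper's contribution is the converse, analytic direction: given a target error, bounding how many integer-weight neurons a three-layer network needs.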
A Framework for Developing Neural Networks Based Mobile Appliances
Abstract — The aim of our project is to develop a mobile real-time reader device for blind people. It uses Artificial Neural Networks (ANNs) for the core character recognition engine. The hardware constraints led us to develop a cross-platform framework to design and evaluate this subsystem, in order to find a good trade-off between runtime performance and accuracy of results. The goal of this work is to provide a tool for prototyping and evaluating ANNs without requiring any in-depth knowledge of the top-level design tool. At the same time, it relies on the same code base that will be part of the target application, thus shortening and simplifying the overall development process. In this paper we present both the proposed software solution and its usefulness in our research project; it enabled us to reduce the required runtime resources while improving the performance of the ANN subsystem.
Control Flow Obfuscation using Neural Network to Fight Concolic Testing?
Abstract. Concolic testing is widely regarded as the state-of-the-art technique for dynamically discovering and analyzing trigger-based behavior in software programs. It uses symbolic execution and an automatic theorem prover to generate new concrete test cases to maximize code coverage for scenarios like software verification and malware analysis. While malicious developers usually try their best to hide malicious executions,
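The abstract does not give the paper's construction, but the flavor of the idea — replacing a transparent branch condition with a small neural network that a symbolic executor cannot easily invert — can be sketched with a hand-built ReLU predicate. The trigger constant is a made-up placeholder, not from the paper:

```python
def relu(v):
    return max(0, v)

TRIGGER = 0xCAFE  # hypothetical trigger value guarding hidden behavior

def nn_predicate(x):
    # Hidden layer: two ReLU units jointly encode |x - TRIGGER|.
    h1 = relu(x - TRIGGER)
    h2 = relu(TRIGGER - x)
    # Output unit: for integer x, returns 1 exactly when x == TRIGGER,
    # and 0 otherwise.
    return relu(1 - h1 - h2)

# In the obfuscated program, `if x == TRIGGER:` becomes
# `if nn_predicate(x):` — same behavior, but the branch condition is
# now buried in network arithmetic rather than a plain comparison.
```

To a concolic engine the original `x == TRIGGER` is a trivially solvable constraint; pushed through trained network weights (rather than this hand-built toy), the equivalent condition becomes far harder to recover symbolically.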