Models of Computation: Exploring the Power of Computing (1998)

by J E Savage
Results 1 - 10 of 83 documents citing this book

Real-Time Computing Without Stable States: A New Framework for Neural Computation Based on Perturbations

by Wolfgang Maass, Thomas Natschläger, Henry Markram
"... A key challenge for neural modeling is to explain how a continuous stream of multi-modal input from a rapidly changing environment can be processed by stereotypical recurrent circuits of integrate-and-fire neurons in real-time. We propose a new computational model for real-time computing on time-var ..."
Abstract - Cited by 469 (38 self)
A key challenge for neural modeling is to explain how a continuous stream of multi-modal input from a rapidly changing environment can be processed by stereotypical recurrent circuits of integrate-and-fire neurons in real time. We propose a new computational model for real-time computing on time-varying input that provides an alternative to paradigms based on Turing machines or attractor neural networks. It does not require a task-dependent construction of neural circuits. Instead, it is based on principles of high-dimensional dynamical systems in combination with statistical learning theory, and can be implemented on generic evolved or found recurrent circuitry. It is shown that the inherent transient dynamics of the high-dimensional dynamical system formed by a sufficiently large and heterogeneous neural circuit may serve as a universal analog fading memory. Readout neurons can learn to extract in real time from the current state of such a recurrent neural circuit information about current and past inputs that may be needed for diverse tasks. Stable internal states are not required for giving a stable output, since transient internal states can be transformed by readout neurons into stable target outputs due to the high dimensionality of the dynamical system. Our approach is based on a rigorous …
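
To make the proposed model concrete, here is a minimal Python sketch of the same idea under simplifying assumptions: a fixed random leaky-tanh network stands in for the integrate-and-fire circuit, and a linear readout is trained by ridge regression to recover a past input from the circuit's transient state. All sizes, gains, and the rate-based dynamics are illustrative assumptions, not the authors' construction.

    # A fixed random recurrent circuit acts as a high-dimensional fading
    # memory; a trained linear readout extracts information about past inputs.
    import numpy as np

    rng = np.random.default_rng(0)
    N, T, delay = 200, 2000, 5           # circuit size, time steps, recall delay

    W_in = rng.uniform(-1, 1, size=N)              # input weights
    W = rng.normal(0, 1, size=(N, N)) / np.sqrt(N)
    W *= 0.9 / max(abs(np.linalg.eigvals(W)))      # scale for fading memory

    u = rng.uniform(-1, 1, size=T)                 # time-varying input stream
    x = np.zeros(N)
    states = np.empty((T, N))
    for t in range(T):
        x = 0.7 * x + 0.3 * np.tanh(W @ x + W_in * u[t])  # leaky update
        states[t] = x

    # Train a linear readout (ridge regression) to report u(t - delay)
    # from the circuit's current transient state alone.
    X, y = states[delay:], u[:-delay]
    w_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)
    print("recall correlation:", np.corrcoef(X @ w_out, y)[0, 1])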

Two computational primitives for algorithmic self-assembly: Copying and counting

by Robert D. Barish, Paul W. K. Rothemund, Erik Winfree - Nano Letters, 2005
"... Copying and counting are useful primitive operations for computation and construction. We have made DNA crystals that copy and crystals that count as they grow. For counting, 16 oligonucleotides assemble into four DNA Wang tiles that subsequently crystallize on a polymeric nucleating scaffold strand ..."
Abstract - Cited by 68 (5 self)
Copying and counting are useful primitive operations for computation and construction. We have made DNA crystals that copy and crystals that count as they grow. For counting, 16 oligonucleotides assemble into four DNA Wang tiles that subsequently crystallize on a polymeric nucleating scaffold strand, arranging themselves in a binary counting pattern that could serve as a template for a molecular electronic demultiplexing circuit. Although the yield of counting crystals is low, and per-tile error rates in such crystals are roughly 10%, this work demonstrates the potential of algorithmic self-assembly to create complex nanoscale patterns of technological interest. A subset of the tiles for counting forms information-bearing DNA tubes that copy bit strings from layer to layer along their length. The challenge of engineering complex devices at the nanometer scale has been approached from two radically different directions. In top-down synthesis, information about the desired structure is imposed by an external apparatus, as in photolithography. In bottom-up synthesis, structure arises spontaneously due to chemical and physical forces intrinsic to the molecular components themselves. A significant challenge for bottom-up techniques is how to design …
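
The binary counting pattern is easy to reproduce in the abstract tile-assembly setting. The Python sketch below encodes four counting tiles as a rule table mapping (bit below, carry from the right) to (new bit, carry out), so each crystallized row is the increment of the row beneath it; the rule table and scaffold are illustrative assumptions rather than the paper's actual strand designs.

    # Four counting "tiles": (bit_below, carry_in) -> (bit_out, carry_out).
    # Growing a row on top of the previous one is a ripple-carry increment.
    TILES = {
        (0, 0): (0, 0),   # copy a 0, no carry
        (1, 0): (1, 0),   # copy a 1, no carry
        (0, 1): (1, 0),   # 0 + carry -> 1, carry absorbed
        (1, 1): (0, 1),   # 1 + carry -> 0, carry propagates
    }

    def grow(width=4, rows=16):
        """Grow `rows` layers on a scaffold of zeros; each layer increments."""
        row = [0] * width                  # nucleating scaffold: value 0
        for _ in range(rows):
            print("".join(map(str, row)))
            carry, new = 1, []             # each new row starts with carry 1
            for bit in reversed(row):      # tiles attach right to left
                out, carry = TILES[(bit, carry)]
                new.append(out)
            row = new[::-1]

    grow()   # prints 0000, 0001, 0010, ... 1111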

On the Computational Power of Winner-Take-All

by Wolfgang Maass, 2000
"... This article initiates a rigorous theoretical analysis of the computational power of circuits that employ modules for computing winner-take-all. Computational models that involve competitive stages have so far been neglected in computational complexity theory, although they are widely used in com ..."
Abstract - Cited by 50 (9 self)
This article initiates a rigorous theoretical analysis of the computational power of circuits that employ modules for computing winner-take-all. Computational models that involve competitive stages have so far been neglected in computational complexity theory, although they are widely used in computational brain models, artificial neural networks, and analog VLSI. Our theoretical analysis shows that winner-take-all is a surprisingly powerful computational module in comparison with threshold gates (= McCulloch-Pitts neurons) and sigmoidal gates. We prove an optimal quadratic lower bound for computing winner-take-all in any feedforward circuit consisting of threshold gates. In addition, we show that arbitrary continuous functions can be approximated by circuits employing a single soft winner-take-all gate as their only nonlinear operation. Our …
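
For readers unfamiliar with the module, here is a small Python sketch of hard and soft winner-take-all gates. The softmax form of soft WTA is one common idealization and an assumption here, not necessarily the exact gate analyzed in the article.

    import numpy as np

    def hard_wta(x):
        """k=1 winner-take-all: 1 at the argmax, 0 elsewhere."""
        out = np.zeros_like(x)
        out[np.argmax(x)] = 1.0
        return out

    def soft_wta(x, beta=5.0):
        """Soft winner-take-all: competition sharpened by gain beta."""
        e = np.exp(beta * (x - np.max(x)))   # subtract max for stability
        return e / e.sum()

    x = np.array([0.2, 1.4, 0.9])
    print(hard_wta(x))   # [0. 1. 0.]
    print(soft_wta(x))   # the winner gets most of the mass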

Computational aspects of feedback in neural circuits

by Wolfgang Maass, Prashant Joshi, Eduardo D. Sontag - PLOS Computational Biology, 2007
"... It has previously been shown that generic cortical microcircuit models can perform complex real-time computations on continuous input streams, provided that these computations can be carried out with a rapidly fading memory. We investigate the computational capability of such circuits in the more re ..."
Abstract - Cited by 37 (7 self)
It has previously been shown that generic cortical microcircuit models can perform complex real-time computations on continuous input streams, provided that these computations can be carried out with a rapidly fading memory. We investigate the computational capability of such circuits in the more realistic case where not only readout neurons, but in addition a few neurons within the circuit, have been trained for specific tasks. This is essentially equivalent to the case where the output of trained readout neurons is fed back into the circuit. We show that this new model overcomes the limitation of a rapidly fading memory. In fact, we prove that in the idealized case without noise it can carry out any conceivable digital or analog computation on time-varying inputs. But even with noise, the resulting computational model can perform a large class of biologically relevant real-time computations that require a nonfading memory. We demonstrate these computational implications of feedback both theoretically and through computer simulations of detailed cortical microcircuit models that are subject to noise and have complex inherent dynamics. We show that the application of simple learning procedures (such as linear regression or perceptron learning) to a few neurons enables such circuits to represent time over behaviorally relevant long time spans, to integrate evidence from incoming spike trains over longer periods of time, and to process new information contained in such spike trains in diverse ways according to the current internal state of the circuit. In particular, we show that such generic cortical microcircuits with feedback provide a new model for working memory that is consistent with a large set of biological constraints.

Citation Context

… simulates. Hence Theorem 1 implies in particular that any system (3) that belongs to the class Sn has in conjunction with several feedbacks the computational power of a universal Turing machine (see [27] or [28] for relevant concepts from computation theory). This follows from the fact that every Turing machine (hence any conceivable digital computation, most of which require a persistent memory) can …
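
A rough flavor of the feedback result can be demonstrated numerically. In the Python sketch below, a rate-based reservoir (an assumption standing in for the cortical microcircuit models) is trained with teacher forcing so that a linear readout tracks a non-fading quantity, the sign of the accumulated input evidence, and the readout's output is then fed back in closed loop. All parameters are illustrative; how well the loop tracks the target depends on the training run.

    import numpy as np

    rng = np.random.default_rng(1)
    N, T = 300, 3000
    W = rng.normal(0, 1, (N, N)) / np.sqrt(N)
    W *= 0.8 / max(abs(np.linalg.eigvals(W)))      # fading-memory regime
    W_in = rng.uniform(-1, 1, N)
    W_fb = rng.uniform(-1, 1, N)                   # feedback weights

    u = rng.choice([-1.0, 0.0, 1.0], size=T, p=[0.05, 0.9, 0.05])  # sparse evidence
    target = np.where(np.cumsum(u) >= 0, 1.0, -1.0)   # non-fading quantity

    # Phase 1, teacher forcing: drive the circuit with the correct feedback.
    x, states = np.zeros(N), np.empty((T, N))
    for t in range(T):
        fb = target[t - 1] if t > 0 else 0.0
        x = 0.7 * x + 0.3 * np.tanh(W @ x + W_in * u[t] + W_fb * fb)
        states[t] = x
    w = np.linalg.solve(states.T @ states + 1e-4 * np.eye(N), states.T @ target)

    # Phase 2, closed loop: the readout's own output becomes the feedback.
    x, z, agree = np.zeros(N), 0.0, 0
    for t in range(T):
        x = 0.7 * x + 0.3 * np.tanh(W @ x + W_in * u[t] + W_fb * z)
        z = x @ w
        agree += int(np.sign(z) == target[t])
    print("closed-loop agreement with target:", agree / T)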

Models of computation and languages for embedded system design

by A Jantsch, I Sander - IEE Proceedings on Computers and Digital Techniques
"... ..."
Abstract - Cited by 33 (2 self)
Abstract not found

General-Purpose Computation with Neural Networks: A Survey of Complexity Theoretic Results

by Jiri Sima, Pekka Orponen, 2003
"... We survey and summarize the literature on the computational aspects of neural network models by presenting a detailed taxonomy of the various models according to their complexity theoretic characteristics. The criteria of classification include the architecture of the network (feedforward versus rec ..."
Abstract - Cited by 20 (0 self)
We survey and summarize the literature on the computational aspects of neural network models by presenting a detailed taxonomy of the various models according to their complexity-theoretic characteristics. The criteria of classification include the architecture of the network (feedforward versus recurrent), time model (discrete versus continuous), state type (binary versus analog), weight constraints (symmetric versus asymmetric), network size (finite nets versus infinite families), and computation type (deterministic versus probabilistic), among others. The underlying results concerning the computational power and complexity issues of perceptron, radial basis function, winner-take-all, and spiking neural networks are briefly surveyed, with pointers to the relevant literature. In our survey, we focus mainly on digital computation, whose inputs and outputs are binary in nature, although their values are quite often encoded as analog neuron states. We omit the important learning issues.
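
The classification axes listed in this abstract can be transcribed directly into a small data structure, which may help when navigating the survey. The enums and the example model below are purely illustrative.

    from dataclasses import dataclass
    from enum import Enum

    Architecture = Enum("Architecture", "FEEDFORWARD RECURRENT")
    TimeModel    = Enum("TimeModel", "DISCRETE CONTINUOUS")
    StateType    = Enum("StateType", "BINARY ANALOG")
    Weights      = Enum("Weights", "SYMMETRIC ASYMMETRIC")
    Size         = Enum("Size", "FINITE INFINITE_FAMILY")
    Computation  = Enum("Computation", "DETERMINISTIC PROBABILISTIC")

    @dataclass(frozen=True)
    class NetworkModel:
        """One point in the survey's taxonomy of neural network models."""
        architecture: Architecture
        time: TimeModel
        state: StateType
        weights: Weights
        size: Size
        computation: Computation

    # Example: a classical finite feedforward threshold circuit.
    threshold_circuit = NetworkModel(
        Architecture.FEEDFORWARD, TimeModel.DISCRETE, StateType.BINARY,
        Weights.ASYMMETRIC, Size.FINITE, Computation.DETERMINISTIC)
    print(threshold_circuit)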

Pebble Games, Proof Complexity and Time-Space Trade-offs

by Jakob Nordström, 2010
"... Pebble games were extensively studied in the 1970s and 1980s in a number of different contexts. The last decade has seen a revival of interest in pebble games coming from the field of proof complexity. Pebbling has proven to be a useful tool for studying resolution-based proof systems when compari ..."
Abstract - Cited by 18 (6 self)
Pebble games were extensively studied in the 1970s and 1980s in a number of different contexts. The last decade has seen a revival of interest in pebble games coming from the field of proof complexity. Pebbling has proven to be a useful tool for studying resolution-based proof systems when comparing the strength of different subsystems, showing bounds on proof space, and establishing size-space trade-offs. This is a survey of research in proof complexity drawing on results and tools from pebbling, with a focus on proof space lower bounds and trade-offs between proof size and proof space.
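
As a reminder of the rules, here is a minimal Python checker for the standard black pebble game: a pebble may be placed on a vertex only when all its predecessors are pebbled (sources need nothing), and any pebble may be removed; the space of a strategy is its peak pebble count. The example DAG and strategy are illustrative assumptions.

    def check_pebbling(preds, moves, goal):
        """preds: vertex -> predecessors; moves: list of ('place'|'remove', v)."""
        pebbled, peak = set(), 0
        for op, v in moves:
            if op == "place":
                assert all(p in pebbled for p in preds.get(v, ())), f"illegal: {v}"
                pebbled.add(v)
                peak = max(peak, len(pebbled))
            else:
                pebbled.discard(v)
        assert goal in pebbled, "goal not pebbled at the end"
        return peak

    # A height-2 pyramid: z depends on x and y, which depend on sources a, b, c.
    preds = {"x": ("a", "b"), "y": ("b", "c"), "z": ("x", "y")}
    moves = [("place", "a"), ("place", "b"), ("place", "x"), ("remove", "a"),
             ("place", "c"), ("place", "y"), ("remove", "b"), ("remove", "c"),
             ("place", "z")]
    print("pebbling space used:", check_pebbling(preds, moves, "z"))   # 4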

Evaluation of design strategies for stochastically assembled nanoarray memories

by Benjamin Gojman, Eric Rachlin, John E. Savage - J. Emerg. Technol. Comput. Syst., 2005
"... A key challenge facing nanotechnologies is learning to control uncertainty introduced by stochastic self-assembly. In this article, we explore architectural and manufacturing strategies to cope with this uncertainty when assembling nanoarrays, crossbars composed of two orthogonal sets of parallel na ..."
Abstract - Cited by 15 (7 self)
A key challenge facing nanotechnologies is learning to control uncertainty introduced by stochastic self-assembly. In this article, we explore architectural and manufacturing strategies to cope with this uncertainty when assembling nanoarrays, crossbars composed of two orthogonal sets of parallel nanowires (NWs) that are differentiated at their time of manufacture. NW deposition is a stochastic process and the NW encodings present in an array cannot be known in advance. We explore the reliable construction of memories from stochastically assembled arrays. This is accomplished by describing several families of NW encodings and developing strategies to map external binary addresses onto internal NW encodings using programmable circuitry. We explore a variety of different mapping strategies and develop probabilistic methods of analysis. This is the first article that makes clear the wide range of choices that are available.
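
The mapping problem invites exactly the kind of probabilistic estimate the authors describe. The Monte Carlo sketch below asks how many stochastically deposited nanowires are needed before at least d distinct codewords, and hence d addressable rows, appear; the pool size and targets are illustrative assumptions, not values from the article.

    import random

    def prob_at_least_d_distinct(codewords, wires, d, trials=20000, seed=0):
        """Estimate P(>= d distinct codewords among `wires` i.i.d. draws)."""
        rng = random.Random(seed)
        hits = sum(
            len({rng.randrange(codewords) for _ in range(wires)}) >= d
            for _ in range(trials))
        return hits / trials

    # With 16 codeword types, how many deposited wires give >= 12 distinct rows?
    for wires in (16, 24, 32, 48):
        p = prob_at_least_d_distinct(codewords=16, wires=wires, d=12)
        print(f"{wires} wires -> P(>=12 distinct addresses) ~ {p:.3f}")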

Network-Oblivious Algorithms

by Gianfranco Bilardi, Andrea Pietracaprina, Geppino Pucci, Francesco Silvestri - In Proc. of 21st International Parallel and Distributed Processing Symposium, 2007
"... The design of algorithms that can run unchanged yet efficiently on a variety of machines characterized by different degrees of parallelism and communication capabilities is a highly desirable goal. We propose a framework for network-obliviousness based on a model of computation where the only parame ..."
Abstract - Cited by 14 (5 self)
The design of algorithms that can run unchanged yet efficiently on a variety of machines characterized by different degrees of parallelism and communication capabilities is a highly desirable goal. We propose a framework for network-obliviousness based on a model of computation where the only parameter is the problem's input size. Algorithms are then evaluated on a model with two parameters, capturing parallelism and granularity of communication. We show that, for a wide class of network-oblivious algorithms, optimality in the latter model implies optimality in a block-variant of the Decomposable BSP model, which effectively describes a wide and significant class of parallel platforms. We illustrate our framework by providing optimal network-oblivious algorithms for a few key problems, and also establish some negative results.
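
A toy example may clarify what "the only parameter is the problem's input size" means in practice. The Python reduction below is written for n virtual processors and never mentions the machine's processor count p or block size B; mapping virtual to physical processors is left to an imagined runtime. It illustrates the style only and is not one of the paper's algorithms.

    def oblivious_reduce(values):
        """Tree reduction over n virtual processors, parameterized only by n."""
        n = len(values)
        if n == 1:
            return values[0]
        # One "superstep": virtual processor i < n/2 receives from i + n/2.
        half = n // 2
        merged = [values[i] + values[i + half] for i in range(half)]
        if n % 2:                  # an odd leftover rides along
            merged[0] += values[-1]
        return oblivious_reduce(merged)

    print(oblivious_reduce(list(range(16))))   # 120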

A unified model for multicore architectures

by John E. Savage, Mohammad Zubair - In Proc. 1st International Forum on Next-Generation Multicore/Manycore Technologies, 2008
"... With the advent of multicore and many core architectures, we are facing a problem that is new to parallel computing, namely, the management of hierarchical parallel caches. One major limitation of all earlier models is their inability to model multicore processors with varying degrees of sharing of ..."
Abstract - Cited by 14 (1 self)
With the advent of multicore and manycore architectures, we are facing a problem that is new to parallel computing, namely, the management of hierarchical parallel caches. One major limitation of all earlier models is their inability to model multicore processors with varying degrees of sharing of caches at different levels. We propose a unified memory hierarchy model that addresses these limitations and is an extension of the MHG model developed for a single processor with a multi-memory hierarchy. We demonstrate that our unified framework can be applied to a number of multicore architectures for a variety of applications. In particular, we derive lower bounds on memory traffic between different levels in the hierarchy for financial and scientific computations. We also give multicore algorithms for a financial …
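
For intuition about such traffic bounds, the sketch below applies the classical Hong-Kung lower bound for n x n matrix multiplication, Omega(n^3 / sqrt(M)) words moved across a cache of capacity M, at two levels of an assumed multicore hierarchy. The constants, cache sizes, and the per-core application of the bound are illustrative assumptions, not the MHG extension itself.

    import math

    def matmul_traffic_lower_bound(n, M):
        """Omega-style bound (constant dropped): words moved across one level."""
        return n**3 / math.sqrt(M)

    # A toy two-level multicore: 8 cores, private 32K-word L1 caches, one
    # shared 4M-word L2. Per-level traffic if the n^3 work is split evenly.
    n, cores = 4096, 8
    l1_words, l2_words = 32 * 1024, 4 * 1024 * 1024
    per_core_share = n**3 / cores
    l1_traffic = cores * (per_core_share / math.sqrt(l1_words))
    l2_traffic = matmul_traffic_lower_bound(n, l2_words)
    print(f"L1<->L2 traffic   >= ~{l1_traffic:.3e} words (summed over cores)")
    print(f"L2<->DRAM traffic >= ~{l2_traffic:.3e} words")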