Results 1–10 of 382
The method of creative telescoping
 J. Symbolic Computation
, 1991
Abstract

Cited by 201 (11 self)
An algorithm for definite hypergeometric summation is given. It is based, in a nonobvious way, on Gosper's algorithm for indefinite hypergeometric summation, and its theoretical justification relies on Bernstein's theory of holonomic systems.
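The telescoping certificate at the heart of the method can be checked numerically. A hedged sketch (the worked identity sum_k C(n,k) = 2^n and all function names are my own choice, not from the paper): Zeilberger's algorithm would find the certificate g(n,k) = -C(n,k-1) automatically; here we only verify that it telescopes.

```python
from math import comb

def f(n, k):
    # Summand: binomial coefficient C(n, k); zero outside 0 <= k <= n.
    return comb(n, k) if 0 <= k <= n else 0

def g(n, k):
    # Telescoping certificate for the identity sum_k C(n, k) = 2^n.
    return -f(n, k - 1)

# Verify the telescoped recurrence f(n+1,k) - 2 f(n,k) = g(n,k+1) - g(n,k).
for n in range(10):
    for k in range(-1, n + 3):
        assert f(n + 1, k) - 2 * f(n, k) == g(n, k + 1) - g(n, k)

# Summing over all k makes the right side vanish, so S(n+1) = 2 S(n),
# and with S(0) = 1 this gives S(n) = 2**n.
S = lambda n: sum(f(n, k) for k in range(n + 1))
assert all(S(n) == 2 ** n for n in range(10))
print("certificate verified")
```

Summing the verified relation over k collapses the right side (a telescoping sum of a finitely supported function), which is exactly the "creative telescoping" step that turns a summand identity into a recurrence for the definite sum.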
The thermodynamics of computation – a review
 International Journal of Theoretical Physics
, 1982
Abstract

Cited by 111 (2 self)
Computers may be thought of as engines for transforming free energy into waste heat and mathematical work. Existing electronic computers dissipate energy vastly in excess of the mean thermal energy kT, for purposes such as maintaining volatile storage devices in a bistable condition, synchronizing and standardizing signals, and maximizing switching speed. On the other hand, recent models due to Fredkin and Toffoli show that in principle a computer could compute at finite speed with zero energy dissipation and zero error. In these models, a simple assemblage of simple but idealized mechanical parts (e.g., hard spheres and flat plates) determines a ballistic trajectory isomorphic with the desired computation, a trajectory therefore not foreseen in detail by the builder of the computer. In a classical or semiclassical setting, ballistic models are unrealistic because they require the parts to be assembled with perfect precision and isolated from thermal noise, which would eventually randomize the trajectory and lead to errors. Possibly quantum effects could be exploited to prevent this undesired equipartition of the kinetic energy. Another family of models may be called ...
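The Fredkin–Toffoli models mentioned above rest on logically reversible primitives; the Toffoli (controlled-controlled-NOT) gate is the standard example. A minimal sketch (my own illustration, not from the review) showing that the gate is its own inverse and that, with its target bit preset to 1, it computes NAND, so reversible logic loses no universality:

```python
from itertools import product

def toffoli(a, b, c):
    # Controlled-controlled-NOT: flip c iff a and b are both 1.
    return a, b, c ^ (a & b)

# Reversibility: applying the gate twice restores every input,
# so no information (and, in principle, no kT ln 2 of heat) is lost.
for bits in product((0, 1), repeat=3):
    assert toffoli(*toffoli(*bits)) == bits

# Universality: with the target preset to 1, the third output is NAND(a, b).
for a, b in product((0, 1), repeat=2):
    assert toffoli(a, b, 1)[2] == 1 - (a & b)

print("Toffoli gate: reversible and computes NAND")
```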
Discovering Neural Nets With Low Kolmogorov Complexity And High Generalization Capability
 Neural Networks
, 1997
Abstract

Cited by 54 (31 self)
Many neural net learning algorithms aim at finding "simple" nets to explain training data. The expectation is: the "simpler" the networks, the better the generalization on test data (Occam's razor). Previous implementations, however, use measures for "simplicity" that lack the power, universality and elegance of those based on Kolmogorov complexity and Solomonoff's algorithmic probability. Likewise, most previous approaches (especially those of the "Bayesian" kind) suffer from the problem of choosing appropriate priors. This paper addresses both issues. It first reviews some basic concepts of algorithmic complexity theory relevant to machine learning, and how the Solomonoff-Levin distribution (or universal prior) deals with the prior problem. The universal prior leads to a probabilistic method for finding "algorithmically simple" problem solutions with high generalization capability. The method is based on Levin complexity (a time-bounded generalization of Kolmogorov complexity) ...
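The universal prior's preference for "algorithmically simple" hypotheses can be illustrated on a toy machine (entirely my own construction, not the paper's): programs are bit strings, the leading 1s give a repeat count in unary, and the bits after the first 0 give a pattern to repeat. Approximating m(x) = sum of 2^-|p| over programs p that output x then assigns more prior mass to a periodic string than to an equally long irregular one.

```python
from itertools import product

def run(program):
    # Toy machine: leading 1s (unary) encode extra repeats, the first 0
    # is a separator, the remaining bits are the pattern to repeat.
    if "0" not in program:
        return None  # no separator: no output
    r = program.index("0")            # number of leading 1s
    pattern = program[r + 1:]
    return pattern * (r + 1) if pattern else None

def universal_prior(x, max_len=14):
    # Approximate m(x) = sum over programs p with run(p) == x of 2**-len(p),
    # enumerating all programs up to max_len bits.
    m = 0.0
    for L in range(1, max_len + 1):
        for bits in product("01", repeat=L):
            if run("".join(bits)) == x:
                m += 2.0 ** -L
    return m

regular = "01" * 5          # compressible: a 7-bit program produces it
arbitrary = "0110100110"    # same length, no repetition structure
assert universal_prior(regular) > universal_prior(arbitrary)
print(universal_prior(regular), universal_prior(arbitrary))
```

The gap mirrors the paper's point: under any universal prior, hypotheses with short descriptions dominate, which is what biases the search toward low-Kolmogorov-complexity nets.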
Degrees of random sets
, 1991
Abstract

Cited by 52 (4 self)
An explicit recursion-theoretic definition of a random sequence or random set of natural numbers was given by Martin-Löf in 1966. Other approaches leading to the notions of n-randomness and weak n-randomness have been presented by Solovay, Chaitin, and Kurtz. We investigate the properties of n-random and weakly n-random sequences with an emphasis on the structure of their Turing degrees. After an introduction and summary, in Chapter II we present several equivalent definitions of n-randomness and weak n-randomness, including a new definition in terms of a forcing relation analogous to the characterization of n-generic sequences in terms of Cohen forcing. We also prove that, as conjectured by Kurtz, weak n-randomness is indeed strictly weaker than n-randomness. Chapter III is concerned with intrinsic properties of n-random sequences. The main results are that an (n+1)-random sequence A satisfies the condition A^(n) ≡_T A ⊕ 0^(n) (strengthening a result due originally to Sacks) and that n-random sequences satisfy a number of strong independence properties, e.g., if A ⊕ B is n-random then A is n-random relative to B. It follows that any countable distributive lattice can be embedded ...
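The join A ⊕ B appearing in the independence properties interleaves the two sets: A occupies the even positions and B the odd positions. A quick sketch over finite prefixes (illustration of the coding only; the theorems concern infinite sequences):

```python
def join(A, B):
    # A ⊕ B: place A's bits on even positions, B's bits on odd positions.
    out = []
    for a, b in zip(A, B):
        out.extend([a, b])
    return out

def split(C):
    # Inverse coding: recover A and B from the even and odd positions.
    return C[0::2], C[1::2]

A = [1, 0, 0, 1]
B = [0, 0, 1, 1]
C = join(A, B)
assert split(C) == (A, B)
print(C)  # [1, 0, 0, 0, 0, 1, 1, 1]
```

Because each of A and B is computable from A ⊕ B by this splitting, randomness of the join is a strong joint property, which is why it yields relative randomness of each half.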
Fifty Years of Shannon Theory
, 1998
Abstract

Cited by 50 (1 self)
A brief chronicle is given of the historical development of the central problems in the theory of fundamental limits of data compression and reliable communication.
A computer scientist’s view of life, the universe, and everything
 Foundations of Computer Science: Potential – Theory – Cognition
, 1997
Abstract

Cited by 50 (16 self)
Is the universe computable? If so, it may be much cheaper in terms of information requirements to compute all computable universes instead of just ours. I apply basic concepts of Kolmogorov complexity theory to the set of possible universes, and chat about perceived and true randomness, life, generalization, and learning in a given universe.
Preliminaries. Assumptions: A long time ago, the Great Programmer wrote a program that runs all possible universes on His Big Computer. “Possible” means “computable”: (1) Each universe evolves on a discrete time scale. (2) Any universe’s state at a given time is describable by a finite number of bits. One of the many universes is ours, despite some who evolved in it and claim it is incomputable.
Computable universes. Let TM denote an arbitrary universal Turing machine with unidirectional output tape. TM’s input and output symbols are “0”, “1”, and “,” (comma). TM’s possible input programs can be ordered ...
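One natural ordering of TM input programs over the alphabet “0”, “1”, “,” is shortest-first, lexicographic within each length (my choice for illustration; the snippet above only says the programs "can be ordered"). A sketch of the enumeration alone; actually running all universes would additionally require dovetailing the programs' executions:

```python
from itertools import count, islice, product

ALPHABET = ("0", "1", ",")

def programs():
    # Enumerate every finite string over the alphabet, shortest first,
    # lexicographically within each length.
    for length in count(1):
        for symbols in product(ALPHABET, repeat=length):
            yield "".join(symbols)

first_twelve = list(islice(programs(), 12))
print(first_twelve)
# ['0', '1', ',', '00', '01', '0,', '10', '11', '1,', ',0', ',1', ',,']
```

Dovetailing would then run program i for j steps in a fair schedule over all (i, j), so every computable universe gets unbounded computation time despite the enumeration being infinite.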
Using random sets as oracles
Abstract

Cited by 46 (21 self)
Let R be a notion of algorithmic randomness for individual subsets of N. We say B is a base for R randomness if there is a Z ≥_T B such that Z is R random relative to B. We show that the bases for 1-randomness are exactly the K-trivial sets and discuss several consequences of this result. We also show that the bases for computable randomness include every ∆^0_2 set that is not diagonally noncomputable, but no set of PA degree. As a consequence, we conclude that an n-c.e. set is a base for computable randomness iff it is Turing incomplete.