Structure Learning in Conditional Probability Models via an Entropic Prior and Parameter Extinction
, 1998
Cited by 79 (0 self)
We introduce an entropic prior for multinomial parameter estimation problems and solve for its maximum...
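The entropic prior in question penalizes high-entropy parameter vectors, P(theta) proportional to exp(-H(theta)) = prod_i theta_i^theta_i, which is what drives the "parameter extinction" of the title. A minimal numeric sketch of the resulting MAP problem (maximize sum_i w_i*log(theta_i) + sum_i theta_i*log(theta_i) over the probability simplex), solved here by plain gradient ascent in softmax coordinates rather than by the paper's own solution; the counts, learning rate, and iteration budget are illustrative:

```python
import math

def entropic_map(counts, lr=0.01, steps=50000):
    """MAP estimate of multinomial parameters under the entropic prior
    P(theta) ~ exp(-H(theta)) = prod_i theta_i**theta_i.
    Maximizes sum_i w_i*log(th_i) + sum_i th_i*log(th_i) on the simplex
    by gradient ascent in softmax (logit) coordinates."""
    n = len(counts)
    z = [0.0] * n                      # logits; theta = softmax(z)
    for _ in range(steps):
        m = max(z)
        ex = [math.exp(v - m) for v in z]
        s = sum(ex)
        th = [e / s for e in ex]
        # d/d theta_i of the log-posterior: w_i/th_i + log(th_i) + 1
        g = [counts[i] / th[i] + math.log(th[i]) + 1.0 for i in range(n)]
        gbar = sum(th[j] * g[j] for j in range(n))
        # chain rule through softmax: dL/dz_i = th_i * (g_i - gbar)
        z = [z[i] + lr * th[i] * (g[i] - gbar) for i in range(n)]
    return th

counts = [10.0, 5.0, 0.2]              # illustrative event counts
theta = entropic_map(counts)
ml = [c / sum(counts) for c in counts]  # maximum-likelihood shares
# The prior sharpens the estimate: the weakly supported third parameter
# falls below its ML share, the extinction effect the title refers to.
```

Relative to the maximum-likelihood estimate, weakly supported components are driven toward zero, which is how structure learning by trimming parameters becomes possible.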
A Sequence of Series for the Lambert W Function
, 1997
Cited by 46 (6 self)
We give a uniform treatment of several series expansions for the Lambert W function, leading to an infinite family of new series. We also discuss standardization, complex branches, a family of arbitrary-order iterative methods for computation of W, and give a theorem showing how to correctly solve another simple and frequently occurring nonlinear equation in terms of W and the unwinding number.
1 Introduction
Investigations of the properties of the Lambert W function are good examples of nontrivial interactions between computer algebra, mathematics, and applications. To begin with, the standardization of the name W by computer algebra (see section 1.2 below) has had several effects. First, this standardization has exposed a great variety of applications; second, it has uncovered a significant history, hitherto unnoticed because the lack of a standard name meant that most researchers were unaware of previous work; and, third, it has now stimulated current interest in this remarkable ...
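The family of arbitrary-order iterative methods mentioned above contains, at third order, Halley's method. A sketch of the principal branch W0 computed this way; the starting guess and tolerance here are illustrative choices, not the paper's, and no care is taken near the branch point x = -1/e:

```python
import math

def lambert_w0(x, tol=1e-12, max_iter=50):
    """Principal branch W0(x), defined by W*exp(W) = x, for x >= -1/e.
    Uses Halley's method, a third-order member of the iterative family.
    Not robust very close to the branch point x = -1/e (where w -> -1)."""
    if x < -1.0 / math.e:
        raise ValueError("W0 is real only for x >= -1/e")
    w = math.log1p(x) if x >= 0.0 else x   # crude starting guess
    for _ in range(max_iter):
        ew = math.exp(w)
        f = w * ew - x                     # residual of w*e^w = x
        step = f / (ew * (w + 1.0) - (w + 2.0) * f / (2.0 * w + 2.0))
        w -= step
        if abs(step) < tol * (1.0 + abs(w)):
            return w
    return w

# W0(1) is the omega constant 0.567143...; W0(e) = 1 since 1*e^1 = e.
```

Each Halley step roughly triples the number of correct digits, so a handful of iterations reaches machine precision for moderate arguments.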
Optimal Search on a Technology Landscape
, 1998
Cited by 31 (0 self)
Technological change at the firm level has commonly been modeled as random sampling from a fixed distribution of possibilities. Such models, however, typically ignore empirically important aspects of the firm's search process, notably the observation that the present state of the firm guides future innovation. In this paper we explicitly treat this aspect of the firm's search for technological improvements by introducing a "technology landscape" into an otherwise standard dynamic programming setting where the optimal strategy is to assign a reservation price to each possible technology. Search is modeled as movement, constrained by the cost of innovation, over the technology landscape. Simulations are presented on a stylized technology landscape while analytic results are derived using landscapes that are similar to Markov random fields. We find that early in the search for technological improvements, if the initial position is poor or average, it is optimal to search far away on the technology l...
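A toy version of correlated-landscape search can be sketched as follows. The landscape construction (a moving average of i.i.d. noise, so nearby technologies have similar payoffs) and the distance-proportional innovation cost are illustrative stand-ins, not the paper's actual model or its reservation-price policy:

```python
import random

def make_landscape(n=2000, window=20, seed=1):
    """Correlated 1-D 'technology landscape': a moving average of iid
    noise, so technologies close together have similar payoffs."""
    rng = random.Random(seed)
    noise = [rng.random() for _ in range(n + window)]
    return [sum(noise[i:i + window]) / window for i in range(n)]

def search(landscape, start, horizon=200, reach=400,
           cost_per_step=0.0005, seed=2):
    """Greedy sampling search: each period, draw a candidate technology
    within `reach` of the current one, charge a distance-proportional
    innovation cost, and adopt the candidate only if its net payoff
    beats the incumbent technology."""
    rng = random.Random(seed)
    pos, n = start, len(landscape)
    for _ in range(horizon):
        cand = max(0, min(n - 1, pos + rng.randint(-reach, reach)))
        net = landscape[cand] - cost_per_step * abs(cand - pos)
        if net > landscape[pos]:
            pos = cand
    return pos
```

Starting from a poor position, wide-reach sampling of this kind tends to find better technologies than purely local moves, echoing the abstract's claim that early search should range far away.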
A Fast, Compact Approximation of the Exponential Function
 NEURAL COMPUTATION
, 1999
Cited by 25 (6 self)
Neural network simulations often spend a large proportion of their time computing exponential functions. Since the exponentiation routines of typical math libraries are rather slow, their replacement with a fast approximation can greatly reduce the overall computation time. This paper describes how exponentiation can be approximated by manipulating the components of a standard (IEEE-754) floating-point representation. This models the exponential function as well as a lookup table with linear interpolation, but is significantly faster and more compact.
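The manipulation in question writes a scaled-and-shifted copy of the argument directly into the exponent field of an IEEE-754 double. A sketch in Python, using `struct` in place of the C union such implementations typically use; the scale 2^20/ln 2 and the correction constant 60801 are the values commonly quoted for this method, and the byte order below assumes a little-endian machine:

```python
import math
import struct

EXP_A = 1048576 / math.log(2.0)   # 2^20 / ln 2: scales x into the exponent field
EXP_C = 60801                     # commonly quoted RMS-error correction constant
EXP_B = 1023 * 1048576 - EXP_C    # exponent bias (1023) shifted into the high word

def fast_exp(x):
    """Approximate e**x by storing int(EXP_A*x + EXP_B) into the upper
    32 bits of a double. The bits that spill into the mantissa act as a
    linear interpolation between neighboring powers of two.
    Valid only for moderate |x| (the high word must fit in 32 bits)."""
    hi = int(EXP_A * x + EXP_B)
    return struct.unpack("<d", struct.pack("<ii", 0, hi))[0]

# Accurate to a few percent over moderate arguments.
```

The whole approximation is one multiply, one add, and one integer store, which is why it beats library `exp` calls so decisively in inner loops.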
Enzyme kinetics at high enzyme concentration
 Bull. Math. Biol
, 2000
Cited by 22 (4 self)
We revisit previous analyses of the classical Michaelis–Menten substrate–enzyme reaction and, with the aid of the reverse quasi-steady-state assumption, we challenge the approximation d[C]/dt ≈ 0 for the basic enzyme reaction at high enzyme concentration. For the first time, an approximate solution for the concentrations of the reactants uniformly valid in time is reported. Numerical simulations are presented to verify this solution. We show that an analytical approximation can be found for the reactants for each initial condition using the appropriate quasi-steady-state assumption. An advantage of the present formalism is that it provides a new procedure for fitting experimental data to determine reaction constants. Finally, a new necessary criterion is found that ensures the validity of the reverse quasi-steady-state assumption. This is verified numerically.
SumCracker: A package for manipulating symbolic sums and related objects
 J. SYMB. COMPUT
, 2006
Cited by 20 (5 self)
We describe a new software package, named SumCracker, for proving and finding identities involving symbolic sums and related objects. SumCracker is applicable to a wide range of expressions, for many of which no software has been available until now. The purpose of this paper is to illustrate how to solve problems using this package.
Relating Two Hopf Algebras Built from an Operad
 to appear
Cited by 20 (3 self)
Starting from an operad, one can build a family of posets. From this family of posets, one can define an incidence Hopf algebra. By another construction, one can also build a group directly from the operad. We then consider its Hopf algebra of functions. We prove that there exists a surjective morphism from the latter Hopf algebra to the former one. This is illustrated by the case of an operad built on rooted trees, the NAP operad, where the incidence Hopf algebra is identified with the Connes–Kreimer Hopf algebra of rooted trees.
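For reference, the coproduct of the Connes–Kreimer Hopf algebra on a rooted tree t can be written via admissible cuts; this formula is standard background, not taken from the paper itself:

```latex
\Delta(t) = t \otimes 1 + 1 \otimes t
          + \sum_{c \in \mathrm{Adm}(t)} P^{c}(t) \otimes R^{c}(t),
```

where the sum runs over admissible cuts c of t, P^c(t) is the forest of subtrees severed by the cut, and R^c(t) is the trunk still containing the root.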
Understanding expression simplification
 Proceedings of the 2004 International Symposium on Symbolic and Algebraic Computation (ISSAC 2004)
, 2004
Cited by 17 (3 self)
We give the first formal definition of the concept of simplification for general expressions in the context of Computer Algebra Systems. The main mathematical tool is an adaptation of the theory of Minimum Description Length, which is closely related to various theories of complexity, such as Kolmogorov Complexity and Algorithmic Information Theory. In particular, we show how this theory can justify the use of various “magic constants” for deciding between some equivalent representations of an expression, as found in implementations of simplification routines.
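The MDL viewpoint can be illustrated with a deliberately crude sketch: score each mathematically equivalent representation of an expression by a description length and keep the shortest. Here the codelength is just a token count of the string form, a hypothetical stand-in for the paper's far more careful measure:

```python
import re

def description_length(expr):
    """Crude MDL proxy: number of lexical tokens in the expression string
    (identifiers, integers, '**', and single-character operators)."""
    return len(re.findall(r"[A-Za-z_]\w*|\d+|\*\*|\S", expr))

def simplify(candidates):
    """Among mathematically equivalent forms, pick the one of minimum
    description length (ties broken by list order)."""
    return min(candidates, key=description_length)

# Two equivalent representations of the same polynomial:
forms = ["x**4 - 1", "(x - 1)*(x + 1)*(x**2 + 1)"]
best = simplify(forms)   # the expanded form uses far fewer tokens
```

A real MDL-based simplifier would weight tokens by their code cost rather than counting them uniformly, which is where the "magic constants" that decide between borderline representations come from.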