Results 1–10 of 806,608
(Expanded Version)
, 2002
Abstract
We present a type theory for higher-order modules that accounts for many central issues in module system design, including translucency, applicativity, generativity, and modules as first-class values. Our type system harmonizes design elements from previous work, resulting in a simple, economical account of modular programming. The main unifying principle is the treatment of abstraction mechanisms as computational effects. Our language is the first to provide a complete and practical formalization of all of these critical issues in module system design.
Flow of Ideas: Expanded Version
, 2013
Abstract
Advocates of interdisciplinary collaboration suggest that scholarship be organized around topics rather than disciplines (Geiger and Sa, 2008; for a review, see Jacobs and ...
This is an expanded version of a
Abstract
Although it is well known that cross correlation can be efficiently implemented in the transform domain, the normalized form of cross correlation preferred for feature matching applications does not have a simple frequency domain expression. For this reason, normalized cross correlation has been computed in the spatial domain. This short paper shows that unnormalized cross correlation can be efficiently normalized using precomputed integrals of the image and image² over the search window.
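Assuming `image` and `template` are 2-D NumPy float arrays, the normalization idea can be sketched as follows. For brevity the correlation numerator is computed directly in the spatial domain rather than via the FFT; the point illustrated is the denominator, obtained from precomputed integral tables of the image and of its square. Function and variable names are illustrative, not the paper's.

```python
import numpy as np

def normalized_cross_correlation(image, template):
    """Normalized cross correlation at every valid offset, with the
    denominator obtained from summed-area tables of the image and of
    its square, so each window's statistics cost O(1) lookups."""
    th, tw = template.shape
    ih, iw = image.shape
    t = template - template.mean()          # zero-mean template
    t_norm = np.sqrt((t ** 2).sum())        # sqrt of template energy

    # Summed-area tables with a zero border: any window sum then
    # costs four lookups instead of a fresh O(th*tw) summation.
    s = np.zeros((ih + 1, iw + 1))
    s2 = np.zeros((ih + 1, iw + 1))
    s[1:, 1:] = image.cumsum(0).cumsum(1)
    s2[1:, 1:] = (image ** 2).cumsum(0).cumsum(1)

    n = th * tw
    out = np.zeros((ih - th + 1, iw - tw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            win_sum = s[y + th, x + tw] - s[y, x + tw] - s[y + th, x] + s[y, x]
            win_sum2 = s2[y + th, x + tw] - s2[y, x + tw] - s2[y + th, x] + s2[y, x]
            win_var = max(win_sum2 - win_sum ** 2 / n, 0.0)  # sum of squared deviations
            num = (image[y:y + th, x:x + tw] * t).sum()      # correlate with zero-mean template
            den = np.sqrt(win_var) * t_norm
            out[y, x] = num / den if den > 0 else 0.0
    return out
```

An exact match scores 1.0, so the template's true location is the argmax of the result.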
REVISED AND EXPANDED VERSION
, 2003
"... Abstract. We examine the question of when the ∗–homomorphism λ: A ∗D B → ..."
Multiprecision division: Expanded version
, 1998
Abstract
This paper presents a study of multiprecision division on processors containing word-by-word multipliers. It compares several algorithms by first optimizing each for the software environment, and then comparing their performances on simple machine models. While the study was originally motivated by floating-point division in the small-word environment, the results are extended to multiprecision floating-point and integer division in general, to the extent possible without extensive architecture-specific analysis. Two algorithms are found to be best for multiprecision division. For many floating-point division problems, and especially for any division by a small divisor, a hybrid of the Newton-Raphson and Byte Division algorithms is optimal, where significant reciprocal refinement is performed before beginning very high radix Byte Division iterations. Low-precision arithmetic and a method of inexpensively boosting accuracy during Newton-Raphson reciprocal refinement improve algorithm efficiency. For other division problems, Restoring Division is best, and is easy to implement. The asymptotic cost of floating-point division for each of these algorithms is the same as that of multiprecision multiplication.
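The Newton-Raphson reciprocal refinement at the heart of the hybrid can be sketched as below. This is a minimal illustration, not the paper's algorithm: exact `Fraction` arithmetic stands in for the paper's multiprecision words, the divisor is assumed normalized to [1/2, 1), and the 48/17 − 32/17·d seed is the standard minimax linear initial approximation rather than anything taken from the paper.

```python
from fractions import Fraction

def reciprocal_newton(d, iterations=6):
    """Refine an approximation x of 1/d by x <- x * (2 - d*x).
    The relative error squares on each step, so the number of correct
    digits roughly doubles per iteration (quadratic convergence)."""
    assert Fraction(1, 2) <= d < 1, "assume d normalized to [1/2, 1)"
    x = Fraction(48, 17) - Fraction(32, 17) * d   # standard minimax linear seed
    for _ in range(iterations):
        x = x * (2 - d * x)
    return x

def divide(n, d, iterations=6):
    """Division realized as multiplication by the refined reciprocal."""
    return Fraction(n) * reciprocal_newton(d, iterations)
```

After six iterations the seed's at-most-1/17 relative error has collapsed to 17^-64; the paper's hybrid would instead stop refining once the reciprocal is accurate to a few words and finish with very high radix Byte Division iterations.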
It is raining (somewhere). Expanded version
, 2006
Abstract
HAL is a multidisciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
Learning with Temporary Memory (Expanded Version)
, 2008
Abstract

Cited by 1 (1 self)
In the inductive inference framework of learning in the limit, a variation of the bounded example memory (Bem) language learning model is considered. Intuitively, the new model constrains the learner’s memory not only in how much data may be retained, but also in how long that data may be retained. More specifically, the model requires that, if a learner commits an example x to memory in some stage of the learning process, then there is some subsequent stage for which x no longer appears in the learner’s memory. This model is called temporary example memory (Tem) learning. In some sense, it captures the idea that memories fade. Many interesting results concerning the Tem-learning model are presented. For example, there exists a class of languages that can be identified by memorizing k + 1 examples in the Tem sense, but that cannot be identified by memorizing k examples in the Bem sense. On the other hand, there exists a class of languages that can be identified by memorizing just 1 example in the Bem sense, but that cannot be identified by memorizing any number of examples in the Tem sense. (The proof of this latter result involves an infinitary self-reference argument.) Results are also presented concerning the special cases of: learning indexable classes of languages, and learning (arbitrary) classes of infinite languages.
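As a toy illustration only (not the paper's construction), a learner whose memory is a bounded FIFO queue shows the fading-memory behavior whenever new examples keep being committed: each stored example is gone once k newer ones arrive. The conjecture rule below is an arbitrary stand-in.

```python
from collections import deque

def tem_learner_step(memory, conjecture, datum, k=2):
    """One step of a toy bounded-memory learner. Memory holds at most
    k examples; FIFO eviction means an example committed at some stage
    no longer appears at a later stage, once k newer examples have been
    committed -- echoing the temporary example memory (Tem) constraint.
    A Bem learner, by contrast, may keep a memorized example forever."""
    memory = deque(memory, maxlen=k)   # bounded memory, oldest example out
    memory.append(datum)               # commit the new example
    conjecture = max(memory)           # toy conjecture from current memory
    return memory, conjecture
```

Feeding the stream 5, 2, 7, 1 with k = 2, the example 5 is committed at stage 1 but has left memory by stage 3, exactly the "memories fade" behavior.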
Optimal Language Learning (Expanded Version)
, 2008
Abstract
Gold’s original paper on inductive inference introduced a notion of an optimal learner. Intuitively, a learner identifies a class of objects optimally iff there is no other learner that: requires as little of each presentation of each object in the class in order to identify that object, and, for some presentation of some object in the class, requires less of that presentation in order to identify that object. Wiehagen considered this notion in the context of function learning, and characterized an optimal function learner as one that is class-preserving, consistent, and (in a very strong sense) non-U-shaped, with respect to the class of functions learned. Herein, Gold’s notion is considered in the context of language learning. Intuitively, a language learner identifies a class of languages optimally iff there is no other learner that: requires as little of each text for each language in the class in order to identify that language, and, for some text for some language in the class, requires less of that text in order to identify that language. Many interesting results concerning optimal language learners are presented. First, it is shown that a characterization analogous to Wiehagen’s does not hold in this setting. Specifically, optimality is not sufficient to guarantee Wiehagen’s conditions; though, those conditions are sufficient to guarantee optimality. Second, it is shown that the failure of this analog is not due to a restriction on algorithmic learning power imposed by non-U-shapedness (in the strong form employed by Wiehagen). That is, non-U-shapedness, even in this strong form, does not restrict algorithmic learning power. Finally, for an arbitrary optimal learner F of a class of languages L, it is shown that F optimally identifies a subclass K of L iff F is class-preserving with respect to K.
Domain Theory  Corrected and expanded version
Abstract
Abstract bases were introduced in [Smy77], where they are called "R-structures". Examples of abstract bases are concrete bases of continuous domains, of course, where the relation is the restriction of the order of approximation. Axiom (INT) is satisfied because of Lemma 2.2.15 and because we have required bases in domains to have directed sets of approximants for each element.
Multivariable Feedback Control: Analysis
, 2005
Abstract

Cited by 529 (24 self)
multi-input, multi-output feedback control design for linear systems using the paradigms, theory, and tools of robust control that have arisen during the past two decades. The book is aimed at graduate students and practicing engineers who have a basic knowledge of classical control design and state-space control theory for linear systems. A basic knowledge of matrix theory and linear algebra is required to appreciate and digest the material offered. This edition is a revised and expanded version of the first edition, which was published in 1996.