Results 1-10 of 52
On perfect supercompilation
Journal of Functional Programming, 1996
Cited by 92 (3 self)
Abstract
We extend positive supercompilation to handle negative as well as positive information. This is done by instrumenting the underlying unfold rules with a small rewrite system that handles constraints on terms, thereby ensuring perfect information propagation. We illustrate this by transforming a naively specialised string matcher into an optimal one. The presented algorithm is guaranteed to terminate by means of generalisation steps.
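To make the starting point concrete, here is a hedged Python sketch (the paper itself works in a functional language) of the kind of naive generic matcher such a specialiser transforms; `naive_match` is an illustrative name, not from the paper:

```python
def naive_match(pattern, text):
    # Naive generic matcher: try every start position and compare the
    # pattern afresh, discarding everything learned at a mismatch.
    # Specialising this w.r.t. a static pattern, while propagating both
    # positive and negative information about already-compared
    # characters, is what yields an optimal KMP-style matcher.
    for i in range(len(text) - len(pattern) + 1):
        if text[i:i + len(pattern)] == pattern:
            return i  # index of first occurrence
    return -1  # pattern does not occur


naive_match("aab", "aaaab")  # -> 2
```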
Homeomorphic Embedding for Online Termination
Static Analysis. Proceedings of SAS'98, LNCS 1503, 1998
Cited by 66 (9 self)
Abstract
Recently, well-quasi orders in general, and homeomorphic embedding in particular, have gained popularity to ensure the termination of program analysis, specialisation and transformation techniques. In this paper,
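As a rough illustration of the embedding relation itself (not this paper's contribution), here is a Python sketch of the usual homeomorphic embedding test on first-order terms; the term encoding (variables as plain strings, compound terms as functor/argument pairs) is an assumption made for this example:

```python
def embeds(s, t):
    # Homeomorphic embedding test s <| t on first-order terms.
    # Encoding (assumed for this sketch): a variable is a plain string,
    # a compound term is a (functor, [arguments]) pair.
    def is_var(x):
        return isinstance(x, str)

    if is_var(s) and is_var(t):
        return True  # variable rule: any variable embeds in any variable
    if not is_var(t):
        functor_t, args_t = t
        # diving rule: s is embedded in one of t's arguments
        if any(embeds(s, arg) for arg in args_t):
            return True
        if not is_var(s):
            functor_s, args_s = s
            # coupling rule: same functor and arity, argument-wise embedding
            if functor_s == functor_t and len(args_s) == len(args_t):
                return all(embeds(a, b) for a, b in zip(args_s, args_t))
    return False
```

During online specialisation, a new term `t` that embeds a previously encountered term `s` (`embeds(s, t)` is true) signals potentially non-terminating growth and triggers generalisation; well-quasi-orderedness guarantees this whistle blows on every infinite sequence.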
Controlling generalisation and polyvariance in partial deduction of normal logic programs
1996
Cited by 60 (40 self)
Abstract
In this paper, we further elaborate global control for partial deduction: for which atoms, among possibly infinitely many, should partial deductions be produced, meanwhile guaranteeing correctness as well as termination, and providing ample opportunities for fine-grained polyvariance? Our solution is based on two ingredients. First, we use the well-known concept of a characteristic tree to guide abstraction (or generalisation) and polyvariance, and aim for producing one specialised procedure per characteristic tree generated. Previous work along this line failed to provide abstraction correctly dealing with characteristic trees. We show how this can be rectified in an elegant way. Secondly, we structure combinations of atoms and associated characteristic trees in global trees registering "causal" relationships among such pairs. This will allow us to spot looming non-termination and consequently perform proper generalisation in order to avert the danger, without having to impose a depth bound on characteristic trees. Leaving unspecified the specific local control one may wish to plug in, the resulting global control strategy enables partial deduction that always terminates in an elegant, non ad hoc way, while providing excellent specialisation as well as fine-grained (but reasonable) polyvariance.
Offline specialisation in Prolog using a handwritten compiler generator
2004
Cited by 47 (23 self)
Abstract
The so-called "cogen approach" to program specialisation, writing a compiler generator instead of a specialiser, has been used with considerable success in partial evaluation of both functional and imperative languages. This paper demonstrates that this approach is also applicable to partial evaluation of logic programming languages, also called partial deduction. Self-application has not been as much in focus in logic programming as for functional and imperative languages, and the attempts to self-apply partial deduction systems have, as of yet, not been altogether successful. So, especially for partial deduction, the cogen approach should prove to be of considerable importance for practical applications. This paper first develops a generic offline partial deduction technique for pure logic programs, notably supporting partially instantiated data structures via binding types. From this a very efficient cogen is derived, which generates very efficient generating extensions (executing up to several orders of magnitude faster than current online systems) which in turn perform very good, non-trivial specialisation, even rivalling existing online systems. All this is supported by extensive benchmarks. Finally, it is shown how the cogen can be extended to directly support a large part of Prolog's declarative and non-declarative features and how semi-online specialisation can be efficiently integrated.
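The idea of a generating extension can be sketched in miniature. Below is a hedged Python illustration (not from the paper, which targets Prolog): a hand-written generating extension, here called `power_gen`, for a toy power function. Given the static exponent, it emits and compiles a residual program specialised to that exponent:

```python
def power_gen(n):
    # Hypothetical generating extension for power(x, n): the exponent n
    # is static, so the loop over n runs now, at specialisation time,
    # and only the multiplications remain in the residual program.
    body = "1"
    for _ in range(n):
        body = "x * (%s)" % body
    source = "def power_%d(x):\n    return %s\n" % (n, body)
    namespace = {}
    exec(source, namespace)  # compile the emitted residual program
    return namespace["power_%d" % n]


# Residual program: def power_3(x): return x * (x * (x * (1)))
cube = power_gen(3)
cube(2)  # -> 8
```

A cogen produces such generating extensions automatically from the subject program, avoiding self-application of the specialiser; that is why the generating extensions it emits can be so fast.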
Redundant Argument Filtering of Logic Programs
Logic Program Synthesis and Transformation. Proceedings of LOPSTR'96, LNCS 1207, 1996
Cited by 45 (18 self)
Abstract
This paper is concerned with the problem of removing redundant arguments from a given logic program. These are arguments which can be removed without affecting correctness. Most program specialisation techniques, even though they perform argument filtering and redundant clause removal, fail to remove a substantial number of redundant arguments, yielding in some cases rather inefficient residual programs. We formalise the notion of a redundant argument and show that one cannot decide effectively whether a given argument is redundant. We then give a safe, effective approximation of the notion of a redundant argument and describe several simple and efficient algorithms based on this approximative notion. We conduct extensive experiments with our algorithms on mechanically generated programs, illustrating the practical benefits of our approach.
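The flavour of such an approximation can be sketched as follows. This is a hedged Python toy implementing only a crude under-approximation (the paper's actual analyses are fixpoint computations that also track occurrences through body positions), with an assumed clause encoding:

```python
def redundant_positions(clauses):
    # Toy under-approximation of redundant-argument detection.
    # Encoding (assumed): a clause is ((pred, head_args), body), where
    # body is a list of (pred, args) atoms; an argument is a string and
    # counts as a variable when it starts with an uppercase letter.
    # Position i of predicate p is flagged when, in every clause for p,
    # the i-th head argument is a variable occurring nowhere else in
    # that clause, so dropping the argument cannot affect any answer.
    def occurrences(term, clause):
        (_, head_args), body = clause
        count = head_args.count(term)
        for _, body_args in body:
            count += body_args.count(term)
        return count

    by_pred = {}
    for clause in clauses:
        (pred, head_args), _ = clause
        by_pred.setdefault((pred, len(head_args)), []).append(clause)

    redundant = []
    for (pred, arity), defs in by_pred.items():
        for i in range(arity):
            if all(c[0][1][i][:1].isupper() and occurrences(c[0][1][i], c) == 1
                   for c in defs):
                redundant.append((pred, i))
    return redundant


# q(X, Y) :- r(X).   Y is dead in q's only clause, so position 1 of q/2.
program = [(("q", ["X", "Y"]), [("r", ["X"])])]
redundant_positions(program)  # -> [("q", 1)]
```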
Conjunctive Partial Deduction: Foundations, Control, Algorithms, and Experiments
J. Logic Programming, 1999
Logic Program Specialisation: How To Be More Specific
Proceedings of the International Symposium on Programming Languages, Implementations, Logics and Programs (PLILP'96), LNCS 1140, 1996
Cited by 38 (24 self)
Abstract
Standard partial deduction suffers from several drawbacks when compared to top-down abstract interpretation schemes. Conjunctive partial deduction, an extension of standard partial deduction, remedies one of those, namely the lack of sideways information passing. But two other problems remain: the lack of success-propagation as well as the lack of inference of global success-information. We illustrate these drawbacks and show how they can be remedied by combining conjunctive partial deduction with an abstract interpretation technique known as more specific program construction. We present a simple, as well as a more refined, integration of these methods. Finally we illustrate the practical relevance of this approach for some advanced applications, like proving functionality or specialising certain meta-programs written in the ground representation, where it surpasses the precision of current abstract interpretation techniques.
Specialization of Lazy Functional Logic Programs
In Proc. of the ACM SIGPLAN Conf. on Partial Evaluation and Semantics-Based Program Manipulation (PEPM'97), volume 32(12) of SIGPLAN Notices, 1997
Cited by 36 (21 self)
Abstract
Partial evaluation is a method for program specialization based on fold/unfold transformations [8, 25]. Partial evaluation of pure functional programs mainly uses static values of given data to specialize the program [15, 44]. In logic programming, the so-called static/dynamic distinction is hardly present, whereas considerations of determinacy and choice points are far more important for control [12]. We discuss these issues in the context of a (lazy) functional logic language. We formalize a two-phase specialization method for a non-strict, first-order, integrated language which makes use of lazy narrowing to specialize the program w.r.t. a goal. The basic algorithm (first phase) is formalized as an instance of the framework for the partial evaluation of functional logic programs of [2, 3], using lazy narrowing. However, the results inherited from [2, 3] mainly regard the termination of the PE method, while the (strong) soundness and completeness results must be restated for the lazy strategy. A post-processing renaming scheme (second phase) is necessary, which we describe and illustrate on the well-known matching example. This phase is also essential for other non-lazy narrowing strategies, like innermost narrowing, and our method can easily be extended to these strategies. We show that our method preserves the lazy narrowing semantics and that the inclusion of simplification steps in narrowing derivations can improve control during specialization.
A Roadmap to Metacomputation by Supercompilation
1996
Cited by 35 (4 self)
Abstract
This paper gives a gentle introduction to Turchin's supercompilation and its applications in metacomputation with an emphasis on recent developments. First, a complete supercompiler, including positive driving and generalization, is defined for a functional language and illustrated with examples. Then a taxonomy of related transformers is given and compared to the supercompiler. Finally, we put supercompilation into the larger perspective of metacomputation and consider three metacomputation tasks: specialization, composition, and inversion.
Controlling Conjunctive Partial Deduction of Definite Logic Programs
1996
Cited by 35 (11 self)
Abstract
"Classical" partial deduction, within the framework by Lloyd and Shepherdson, computes partial deduction for separate atoms independently. As a consequence, a number of program optimisations, known from unfold/fold transformations and supercompilation, cannot be achieved. In this paper, we show that this restriction can be lifted through (polygenetic) specialisation of entire atom conjunctions. We present a generic algorithm for such partial deduction and discuss its correctness in an extended formal framework. We concentrate on novel control challenges specific to this "conjunctive" partial deduction. We refine the generic algorithm into a fully automatic concrete one that registers partially deduced conjunctions in a global tree, and prove its termination and correctness. We discuss some further control refinements and illustrate the operation of the concrete algorithm and/or some of its possible variants on interesting transformation examples.