Results 1 - 9 of 9
Traceable Data Types for Self-Adjusting Computation
Abstract (Cited by 9, 1 self):
Self-adjusting computation provides an evaluation model where computations can respond automatically to modifications to their data by using a mechanism for propagating modifications through the computation. Current approaches to self-adjusting computation guarantee correctness by recording dependencies in a trace at the granularity of individual memory operations. Tracing at the granularity of memory operations, however, has some limitations: it can be asymptotically inefficient (e.g., compared to optimal solutions) because it cannot take advantage of problem-specific structure, it requires keeping a large computation trace (often proportional to the runtime of the program on the current input), and it introduces moderately large constant factors in practice. In this paper, we extend dependence-tracing to work at the granularity of the query and update operations of arbitrary (abstract)
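The change-propagation mechanism this abstract describes can be illustrated with a toy sketch. The Python below is a minimal, hypothetical model (the names `Mod` and `Computation` are illustrative, not the paper's API): each modifiable reference records which computations read it, and writing a changed value re-runs exactly those readers.

```python
# Toy sketch of self-adjusting computation via change propagation.
# Illustrative only -- not the paper's system or granularity.

class Mod:
    """A modifiable reference that tracks its readers."""
    def __init__(self, value):
        self.value = value
        self.readers = []          # computations that read this Mod

    def read(self, computation):
        if computation not in self.readers:
            self.readers.append(computation)
        return self.value

    def write(self, value):
        if value != self.value:    # propagate only on a real change
            self.value = value
            for comp in list(self.readers):
                comp.rerun()

class Computation:
    """Wraps a function of Mods; re-runs when an input Mod changes."""
    def __init__(self, fn, *inputs):
        self.fn, self.inputs = fn, inputs
        self.output = Mod(None)
        self.rerun()

    def rerun(self):
        args = [m.read(self) for m in self.inputs]
        self.output.write(self.fn(*args))

a, b = Mod(1), Mod(2)
total = Computation(lambda x, y: x + y, a, b)
print(total.output.value)   # 3
a.write(10)                 # change propagates automatically
print(total.output.value)   # 12
```

Tracing at this whole-computation granularity is coarse; the paper's point is precisely that the granularity of dependence tracking (individual memory operations vs. abstract data type operations) drives both the trace size and the update cost.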
Type-Directed Automatic Incrementalization
Abstract (Cited by 6, 3 self):
Application data often changes slowly or incrementally over time. Since incremental changes to input often result in only small changes in output, it is often feasible to respond to such changes asymptotically more efficiently than by re-running the whole computation. Traditionally, realizing such asymptotic efficiency improvements requires designing problem-specific algorithms known as dynamic or incremental algorithms, which are often significantly more complicated than conventional algorithms to design, analyze, implement, and use. A long-standing open problem is to develop techniques that automatically transform conventional programs so that they correctly and efficiently respond to incremental changes. In this paper, we describe a significant step towards solving the problem of automatic incrementalization: a programming language and a compiler that can, given a few type annotations describing
Self-adjusting stack machines
, 2011
Abstract (Cited by 4, 1 self):
Self-adjusting computation offers a language-based approach to writing programs that automatically respond to dynamically changing data. Recent work made significant progress in developing sound semantics and associated implementations of self-adjusting computation for high-level, functional languages. These techniques, however, do not address issues that arise for low-level languages, i.e., stack-based imperative languages that lack strong type systems and automatic memory management. In this paper, we describe techniques for self-adjusting computation which are suitable for low-level languages. Necessarily, we take a different approach than previous work: instead of starting with a high-level language with additional primitives to support self-adjusting computation, we start with a low-level intermediate language, whose semantics is given by a stack-based abstract machine. We prove that this semantics is sound: it always updates computations in a way that is consistent with full reevaluation. We give a compiler and runtime system for the intermediate language used by our abstract machine. We present an empirical evaluation that shows that our approach is efficient in practice, and performs favorably compared to prior proposals.
Adapton: composable, demand-driven incremental computation
In Proceedings of the Conference on Programming Language Design and Implementation (PLDI), 2014
Abstract (Cited by 3, 0 self):
Many researchers have proposed programming languages that support incremental computation (IC), which allows programs to be efficiently re-executed after a small change to the input. However, existing implementations of such languages have two important drawbacks. First, recomputation is oblivious to specific demands on the program output; that is, if a program input changes, all dependencies will be recomputed, even if an observer no longer requires certain outputs. Second, programs are made incremental as a unit, with little or no support for reusing results outside of their original context, e.g., when reordered. To address these problems, we present λ^cdd_ic, a core calculus that applies a demand-driven semantics to incremental computation, tracking changes in a hierarchical fashion in a novel demanded computation graph. λ^cdd_ic also formalizes an explicit separation between inner, incremental computations and outer observers. This combination ensures λ^cdd_ic programs only recompute computations as demanded by observers, and allows inner computations to be composed more freely. We describe an algorithm for implementing λ^cdd_ic efficiently, and we present ADAPTON, a library for writing λ^cdd_ic-style programs in OCaml. We evaluated ADAPTON on a range of benchmarks, and found that it provides reliable speedups, and in many cases dramatically outperforms prior state-of-the-art IC approaches.
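The demand-driven behavior that distinguishes this approach from eager change propagation can be illustrated with a toy sketch. ADAPTON itself is an OCaml library; the Python below is illustrative only, and the names `Cell` and `Thunk` are hypothetical. The key point it demonstrates: an input change merely marks dependents dirty, and recomputation happens only when a result is next demanded.

```python
# Toy sketch of demand-driven (Adapton-style) incremental computation.
# Illustrative only -- not ADAPTON's actual API or algorithm.

class Cell:
    """A mutable input; setting it dirties dependents, nothing more."""
    def __init__(self, value):
        self.value = value
        self.dependents = []       # thunks that read this cell

    def get(self, reader=None):
        if reader is not None and reader not in self.dependents:
            self.dependents.append(reader)
        return self.value

    def set(self, value):
        self.value = value
        for t in self.dependents:
            t.mark_dirty()         # no recomputation here

class Thunk:
    """A memoized computation, recomputed only on demand."""
    def __init__(self, fn):
        self.fn = fn
        self.cached = None
        self.dirty = True
        self.dependents = []
        self.runs = 0              # counts actual recomputations

    def mark_dirty(self):
        if not self.dirty:
            self.dirty = True
            for t in self.dependents:
                t.mark_dirty()

    def force(self, reader=None):
        if reader is not None and reader not in self.dependents:
            self.dependents.append(reader)
        if self.dirty:             # recompute only when demanded
            self.cached = self.fn(self)
            self.dirty = False
            self.runs += 1
        return self.cached

x = Cell(2)
square = Thunk(lambda me: x.get(me) ** 2)
print(square.force())    # 4 (computed on first demand)
x.set(5)                 # marks `square` dirty; no work done yet
x.set(7)                 # still no work done
print(square.force())    # 49 (one recomputation, when demanded)
print(square.runs)       # 2
```

Note that the two undemanded writes cost nothing but a dirty flag: an observer that never forces `square` would never trigger recomputation at all, which is the oblivious-recomputation drawback the abstract describes.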
Composable, Demand-Driven Incremental Computation
Abstract (Cited by 1, 0 self):
Many researchers have proposed programming languages that support incremental computation (IC), which allows programs to be efficiently re-executed after a small change to the input. However, existing implementations of such languages have two important drawbacks. First, recomputation is oblivious to specific demands on the program output; that is, if a program input changes, all dependencies will be recomputed, even if an observer no longer requires certain outputs. Second, programs are made incremental as a unit, with little or no support for reusing results outside of their original context, e.g., when reordered. To address these problems, we present λ^cdd_ic, a core calculus that applies a demand-driven semantics to incremental computation, tracking changes in a hierarchical fashion in a novel demanded computation graph. λ^cdd_ic also formalizes an explicit separation between inner, incremental computations and outer observers. This combination ensures λ^cdd_ic programs only recompute computations as demanded by observers, and allows inner computations to be composed more freely. We describe an algorithm for implementing λ^cdd_ic efficiently, and we present ADAPTON, a library for writing λ^cdd_ic-style programs in OCaml. We evaluated ADAPTON on a range of benchmarks, and found that it provides reliable speedups, and in many cases dramatically outperforms prior state-of-the-art IC approaches.
Functional Programming for Dynamic and Large Data with Self-Adjusting Computation
Abstract (Cited by 1, 0 self):
Combining type theory, language design, and empirical work, we present techniques for computing with large and dynamically changing datasets. Based on lambda calculus, our techniques are suitable for expressing a diverse set of algorithms on large datasets and, via self-adjusting computation, enable computations to respond automatically to changes in their data. Compared to prior work, this work overcomes the main challenge of reducing the space usage of self-adjusting computation without disproportionately decreasing performance. To this end, we present a type system for precise dependency tracking that minimizes the time and space for storing dependency metadata. The type system eliminates an important assumption of prior work that can lead to recording of spurious dependencies. We give a new type-directed translation algorithm that generates correct self-adjusting programs without relying on this assumption. We then show a probabilistic chunking technique to further decrease space usage by controlling the fundamental space-time tradeoff in self-adjusting computation. We implement and evaluate these techniques, showing very promising results on challenging benchmarks and large graphs.
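The probabilistic chunking idea mentioned in this abstract can be illustrated with a small sketch. This is hypothetical, not the paper's exact scheme (the function names and threshold are made up): chunk boundaries are chosen by hashing each element, so boundaries stay stable under local edits, and the expected chunk size 1/p lets one trade per-chunk dependency metadata (space) against recomputation granularity (time).

```python
# Toy sketch of probabilistic chunking (illustrative, not the paper's
# exact scheme). A boundary falls after an element whenever a hash of
# that element lands below a threshold, with probability roughly p.

import hashlib

def is_boundary(item, p=0.25):
    # Deterministic per-element coin flip derived from a hash.
    h = int(hashlib.sha256(repr(item).encode()).hexdigest(), 16)
    return (h % 1000) < 1000 * p

def chunk(seq, p=0.25):
    chunks, cur = [], []
    for item in seq:
        cur.append(item)
        if is_boundary(item, p):   # boundary depends only on the item
            chunks.append(cur)
            cur = []
    if cur:
        chunks.append(cur)
    return chunks

data = list(range(20))
print(chunk(data))   # boundaries are deterministic for a given p
```

Because each boundary decision depends only on the element at that position, inserting or deleting an element perturbs only nearby chunks, so dependency metadata can be kept per chunk rather than per element.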
Adapton: Composable Demand-Driven Incremental Computation
, 2013
Abstract:
Many researchers have proposed programming languages that support incremental computation (IC), which allows programs to be efficiently re-executed after a small change to the input. However, existing implementations of such languages have two important drawbacks. First, recomputation is oblivious to specific demands on the program output; that is, if a program input changes, all dependencies will be recomputed, even if an observer no longer requires certain outputs. Second, programs are made incremental as a unit, with little or no support for reusing results outside of their original context, e.g., when reordered. To address these problems, we present λ^cdd_ic, a core calculus that applies a demand-driven semantics to incremental computation, tracking changes in a hierarchical fashion in a novel demanded computation graph. λ^cdd_ic also formalizes an explicit separation between inner, incremental computations and outer observers. This combination ensures λ^cdd_ic programs only recompute computations as demanded by observers, and allows inner computations to be reused more liberally. We present ADAPTON, an OCaml library implementing λ^cdd_ic. We evaluated ADAPTON on a range of benchmarks, and found that it provides reliable speedups, and in many cases dramatically outperforms state-of-the-art IC approaches.
iThreads: A Threading Library for Parallel Incremental Computation
Abstract:
Incremental computation strives for efficient successive runs of applications by re-executing only those parts of the computation that are affected by a given input change instead of recomputing everything from scratch. To realize these benefits automatically, we describe iThreads, a threading library for parallel incremental computation. iThreads supports unmodified shared-memory multithreaded programs: it can be used as a replacement for pthreads by a simple exchange of dynamically linked libraries, without even recompiling the application code. To enable such an interface, we designed algorithms and an implementation to operate at the compiled binary code level by leveraging MMU-assisted memory access tracking and process-based thread isolation. Our evaluation on a multicore platform using applications from the PARSEC and Phoenix benchmarks and two case studies shows significant performance gains.
ADAPTIVE INFERENCE FOR GRAPHICAL MODELS
, 2012
Abstract:
Many algorithms and applications involve repeatedly solving a variation of the same statistical inference problem. Adaptive inference is a technique where the previous computations are leveraged to speed up the computations after modifying the model parameters. This approach is useful in situations where a slow-to-compute statistical model needs to be re-run after some minor manual changes or in situations where the model is changing over time in minor ways; for example, while studying the effects of mutations on proteins, one often constructs models that change slowly as mutations are introduced. Another important application of adaptive inference is in situations where the model is being used iteratively; for example, in approximate inference we may want to decompose the problem into simpler inference subproblems that are solved repeatedly and iteratively using adaptive updates. In this thesis we explore both exact inference and iterative approximate inference approaches using adaptive updates. We first present algorithms for adaptive exact inference on general graphs that can be used to efficiently compute marginals and update MAP configurations under arbitrary changes to the input factor graph and its associated elimination tree. We then apply them to approximate inference using a framework called dual decomposition. The key