Approximate storage in solid-state memories
In Proc. Int. Symp. Microarchitecture, 2013
Cited by 20 (4 self)
Memories today expose an all-or-nothing correctness model that incurs significant costs in performance, energy, area, and design complexity. But not all applications need high-precision storage for all of their data structures all of the time. This paper proposes mechanisms that enable applications to store data approximately and shows that doing so can improve the performance, lifetime, or density of solid-state memories. We propose two mechanisms. The first allows errors in multi-level cells by reducing the number of programming pulses used to write them. The second mechanism mitigates wear-out failures and extends memory endurance by mapping approximate data onto blocks that have exhausted their hardware error correction resources. Simulations show that reduced-precision writes in multi-level phase-change memory cells can be 1.7× faster on average and using failed blocks can improve array lifetime by 23% on average with quality loss under 10%.
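The first mechanism can be illustrated with a toy Python model (all parameters here are hypothetical and chosen for illustration; the paper simulates real phase-change cells): each programming pulse nudges an analog cell value toward its target with some noise, and an approximate write simply stops after fewer pulses, trading residual error for speed.

```python
import random

def write_cell(target, max_pulses, tolerance=0.01, noise=0.15):
    """Iteratively program an analog cell value toward `target`.
    Each pulse moves the value most of the way and adds noise; a
    precise write keeps pulsing until within `tolerance`, while an
    approximate write gives up after `max_pulses`."""
    value, pulses = 0.0, 0
    while abs(value - target) > tolerance and pulses < max_pulses:
        gap = target - value
        value += gap * 0.7 + random.gauss(0, noise * abs(gap))
        pulses += 1
    return value, pulses

random.seed(0)
targets = [random.uniform(0, 3) for _ in range(1000)]  # 2-bit MLC: 4 levels in [0, 3]
precise = [write_cell(t, max_pulses=10) for t in targets]
approx  = [write_cell(t, max_pulses=2)  for t in targets]

avg = lambda xs: sum(xs) / len(xs)
print("precise pulses (mean):", avg([p for _, p in precise]))
print("approx  pulses (mean):", avg([p for _, p in approx]))
print("approx  |error| (mean):", avg([abs(v - t) for (v, _), t in zip(approx, targets)]))
```

The cap on pulses is the whole trick: fewer pulses per write means faster writes, at the cost of a bounded analog error in the stored level.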
Uncertain<T>: A first-order type for uncertain data
In ASPLOS, 2014
Cited by 13 (4 self)
Sampled data from sensors, the web, and people is inherently probabilistic. Because programming languages use discrete types (floats, integers, and booleans), applications, ranging from GPS navigation to web search to polling, express and reason about uncertainty in idiosyncratic ways. This mismatch causes three problems. (1) Using an estimate as a fact introduces errors (walking through walls). (2) Computation on estimates compounds errors (walking at 59 mph). (3) Inference asks questions incorrectly when the data can only answer probabilistic questions (e.g., "are you speeding?" versus "are you speeding with high probability?"). This paper introduces the uncertain type (Uncertain<T>), an abstraction that expresses, propagates, and exposes uncertainty to solve these problems. We present its semantics and a recipe for (a) identifying distributions, (b) computing, (c) inferring, and (d) leveraging domain knowledge in uncertain data. Because Uncertain<T> computations express an algebra over probabilities, Bayesian statistics ease inference over disparate information (physics, calendars, and maps). Uncertain<T> leverages statistics, learning algorithms, and domain expertise for experts and abstracts them for non-expert developers. We demonstrate Uncertain<T> on two applications. The result is improved correctness, productivity, and expressiveness for probabilistic data.
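The core idea can be sketched in a few lines of Python (a minimal sampling-based sketch, not the paper's actual implementation; the speed value and thresholds are made up): a distribution is represented by a sampling function, arithmetic composes samplers, and comparison returns a probability instead of a boolean, which is exactly the "are you speeding with high probability?" question.

```python
import random

class Uncertain:
    """Sampling-based sketch of an Uncertain<T> value: a distribution
    represented by a sampling function. Arithmetic propagates
    uncertainty; comparisons yield probabilities, not booleans."""
    def __init__(self, sample):
        self.sample = sample  # () -> float

    def __add__(self, other):
        return Uncertain(lambda: self.sample() + other.sample())

    def __sub__(self, other):
        return Uncertain(lambda: self.sample() - other.sample())

    def pr_greater(self, threshold, n=10000):
        """Estimate Pr[value > threshold] by Monte Carlo sampling."""
        return sum(self.sample() > threshold for _ in range(n)) / n

random.seed(1)
# Hypothetical GPS-derived speed estimate: 57 mph with Gaussian error (sd = 3).
speed = Uncertain(lambda: random.gauss(57.0, 3.0))
p = speed.pr_greater(60.0)
print(f"Pr[speeding] ~= {p:.2f}")
```

Asking for a probability rather than a fact avoids problem (1) above; composing samplers rather than point estimates avoids problem (2).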
Expressing and Verifying Probabilistic Assertions
Cited by 7 (3 self)
Traditional assertions express correctness properties that must hold on every program execution. However, many applications have probabilistic outcomes and consequently their correctness properties are also probabilistic (e.g., they identify faces in images, consume sensor data, or run on unreliable hardware). Traditional assertions do not capture these correctness properties. This paper proposes that programmers express probabilistic correctness properties with probabilistic assertions and describes a new probabilistic evaluation approach to efficiently verify these assertions. Probabilistic assertions are Boolean expressions that express the probability that a property will be true in a given execution rather than asserting that the property must always be true. Given either specific inputs or distributions on the input space, probabilistic evaluation verifies probabilistic assertions by first performing distribution extraction to represent the program as a Bayesian network. Probabilistic evaluation then uses statistical properties to simplify this representation to efficiently compute assertion probabilities directly or with sampling. Our approach is a mix of both static and dynamic analysis: distribution extraction statically builds and optimizes the Bayesian network representation and sampling dynamically interprets this representation. We implement our approach in a tool called MAYHAP for C and C++ programs. We evaluate expressiveness, correctness, and performance of MAYHAP on programs that use sensors, perform approximate computation, and obfuscate data for privacy. Our case studies demonstrate that probabilistic assertions describe useful correctness properties and that MAYHAP efficiently verifies them.
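A probabilistic assertion's sampling fallback is easy to sketch in Python (a hedged sketch only: MAYHAP additionally performs distribution extraction and Bayesian-network simplification, which this toy omits; the sensor model below is invented):

```python
import random

def passert(condition, p, n=10000):
    """Sketch of a probabilistic assertion checked by sampling:
    estimate Pr[condition()] by Monte Carlo and require it to be at
    least p, rather than requiring condition() on every execution."""
    estimate = sum(bool(condition()) for _ in range(n)) / n
    assert estimate >= p, f"Pr ~= {estimate:.3f} < required {p}"
    return estimate

random.seed(2)
# Hypothetical property: a noisy sensor reading (sd = 1) should land
# within 2 units of the true value at least 90% of the time.
truth = 10.0
est = passert(lambda: abs(random.gauss(truth, 1.0) - truth) < 2.0, p=0.9)
print(f"assertion held with Pr ~= {est:.3f}")
```

The assertion holds probabilistically even though individual executions violate the 2-unit bound, which is precisely what a traditional `assert` cannot express.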
Using Crash Hoare Logic for Certifying the FSCQ File System
Cited by 3 (0 self)
FSCQ is the first file system with a machine-checkable proof (using the Coq proof assistant) that its implementation meets its specification and whose specification includes crashes. FSCQ provably avoids bugs that have plagued previous file systems, such as performing disk writes without sufficient barriers or forgetting to zero out directory blocks. If a crash happens at an inopportune time, these bugs can lead to data loss. FSCQ’s theorems prove that, under any sequence of crashes followed by reboots, FSCQ will recover the file system correctly without losing data. To state FSCQ’s theorems, this paper introduces the Crash Hoare logic (CHL), which extends traditional Hoare logic with a crash condition, a recovery procedure, and logical address spaces for specifying disk states at different abstraction levels. CHL also reduces the proof effort for developers through proof automation. Using CHL, we developed, specified, and proved the correctness of the FSCQ file system. Although FSCQ’s design is relatively simple, experiments with FSCQ running as a user-level file system show that it is sufficient to run Unix applications with usable performance. FSCQ’s specifications and proofs required significantly more work than the implementation, but the work was manageable even for a small team of a few researchers.
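The kind of property a crash condition captures can be demonstrated with a toy write-ahead-logging model in Python (purely illustrative; FSCQ's proofs are machine-checked in Coq, and the step structure below is invented): a two-block update must appear atomic no matter where a crash lands, because recovery replays the log only after the commit point.

```python
def logged_write(disk, log, a, b, crash_at=None):
    """Write-ahead-logged update of blocks a and b; `crash_at=i`
    simulates a crash before step i. Returns the disk state after
    the (possibly interrupted) write plus recovery."""
    steps = [
        lambda: log.update(a=a[1], b=b[1], committed=False),  # log intent
        lambda: log.update(committed=True),                   # commit point
        lambda: disk.update({a[0]: a[1]}),                    # install block a
        lambda: disk.update({b[0]: b[1]}),                    # install block b
        lambda: log.update(committed=False),                  # truncate log
    ]
    for i, step in enumerate(steps):
        if crash_at == i:
            break
        step()
    # Recovery procedure: replay the log iff the commit record is set.
    if log.get("committed"):
        disk[a[0]] = log["a"]
        disk[b[0]] = log["b"]
    return disk

# Crash at every possible point (5 = no crash): state is never torn.
for crash_at in range(6):
    after = logged_write({"x": 0, "y": 0}, {"committed": False},
                         ("x", 1), ("y", 1), crash_at)
    assert (after["x"], after["y"]) in {(0, 0), (1, 1)}, crash_at
print("all crash points recover atomically")
```

A CHL specification states exactly this: a crash condition describing the legal mid-operation disk states, and a recovery procedure whose postcondition is "either the old or the new contents, never a mix".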
SNNAP: approximate computing on programmable SoCs via neural acceleration
In International Symposium on High-Performance Computer Architecture (HPCA), 2015
Cited by 3 (3 self)
Many applications that can take advantage of accelerators are amenable to approximate execution. Past work has shown that neural acceleration is a viable way to accelerate approximate code. In light of the growing availability of on-chip field-programmable gate arrays (FPGAs), this paper explores neural acceleration on off-the-shelf programmable SoCs. We describe the design and implementation of SNNAP, a flexible FPGA-based neural accelerator for approximate programs. SNNAP is designed to work with a compiler workflow that configures the neural network’s topology and weights instead of the programmable logic of the FPGA itself. This approach enables effective use of neural acceleration in commercially available devices and accelerates different applications without costly FPGA reconfigurations. No hardware expertise is required to accelerate software with SNNAP, so the effort required can be substantially lower than custom hardware design for an FPGA fabric and possibly even lower than current “C-to-gates” high-level synthesis (HLS) tools. Our measurements on a Xilinx Zynq FPGA show that SNNAP yields a geometric mean of 3.8× speedup (as high as 38.1×) and 2.8× energy savings (as high as 28×) with less than 10% quality loss across all applications but one. We also compare SNNAP with designs generated by commercial HLS tools and show that SNNAP has similar performance overall, with better resource-normalized throughput on 4 out of 7 benchmarks.
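The configure-by-weights idea can be mimicked in Python (a loose software analogy, not SNNAP's hardware design; the topology and weight values are invented): the "accelerator" is a fixed-topology MLP forward pass that never changes, and retargeting it to a different approximate function means loading different weights, not resynthesizing logic.

```python
def neural_accelerator(inputs, weights):
    """Fixed-topology two-layer MLP (ReLU hidden layer, linear output).
    The function itself stands in for unchanging FPGA logic; only the
    `weights` configuration varies between target programs."""
    (w1, b1), (w2, b2) = weights
    hidden = [max(0.0, sum(w * x for w, x in zip(row, inputs)) + b)
              for row, b in zip(w1, b1)]
    return [sum(w * h for w, h in zip(row, hidden)) + b
            for row, b in zip(w2, b2)]

# Two "programs" on the same accelerator, differing only in weights.
double_cfg = (([[1.0]], [0.0]), ([[2.0]], [0.0]))  # f(x) = 2x   (for x >= 0)
shift_cfg  = (([[1.0]], [1.0]), ([[1.0]], [0.0]))  # f(x) = x+1  (for x >= -1)

print(neural_accelerator([3.0], double_cfg))  # [6.0]
print(neural_accelerator([3.0], shift_cfg))   # [4.0]
```

In SNNAP the weights come from a compiler that trains a network to mimic a hot code region; here they are set by hand just to show that one fixed datapath serves multiple functions.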
Expectation-Oriented Framework for Automating Approximate Programming
Cited by 3 (0 self)
We describe ExpAX, a framework for automating approximate programming based on programmer-specified error expectations. Three components constitute ExpAX: (1) a programming model based on a new kind of program specification, which we refer to as expectations. Our programming model enables programmers to implicitly relax the accuracy constraints without explicitly marking operations approximate; (2) a novel approximation safety analysis that automatically identifies a safe-to-approximate subset of the program operations; and (3) an optimization that automatically marks a subset of the safe-to-approximate operations as approximate while considering the error expectation. Further, we formulate the process of automatically marking operations as approximate as an optimization problem and provide a genetic algorithm to solve it. We evaluate ExpAX using a diverse set of applications and show that it can provide significant energy savings while limiting quality-of-result degradation.
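Component (3) can be sketched as a tiny genetic algorithm in Python (a minimal sketch under invented per-operation energy/error profiles, not ExpAX's actual encoding): each individual is a bitmask over safe-to-approximate operations, and fitness is energy saved, with candidates that violate the error expectation ruled out.

```python
import random

# Hypothetical per-operation profile: (energy saved, error contributed)
# if that operation is marked approximate.
OPS = [(3.0, 0.01), (5.0, 0.04), (2.0, 0.002), (4.0, 0.03),
       (1.0, 0.001), (6.0, 0.06)]
ERROR_EXPECTATION = 0.05  # programmer-specified error budget

def fitness(mask):
    saved = sum(s for bit, (s, _) in zip(mask, OPS) if bit)
    error = sum(e for bit, (_, e) in zip(mask, OPS) if bit)
    return saved if error <= ERROR_EXPECTATION else float("-inf")

def evolve(generations=60, pop_size=20, seed=3):
    rng = random.Random(seed)
    # Seed with the all-precise marking so a feasible individual exists.
    pop = [[0] * len(OPS)] + \
          [[rng.randint(0, 1) for _ in OPS] for _ in range(pop_size - 1)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, len(OPS))
            child = a[:cut] + b[cut:]            # one-point crossover
            i = rng.randrange(len(OPS))
            child[i] ^= rng.random() < 0.2       # occasional bit-flip mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print("approximate ops:", best, "energy saved:", fitness(best))
```

Elitism guarantees the search never regresses below the all-precise baseline, so the returned marking always respects the expectation.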
Expectation-Oriented Framework for Automating Approximate Programming
Cited by 3 (1 self)
This paper describes ExpAX, a framework for automating approximate programming based on programmer-specified error expectations. Three components constitute ExpAX: (1) a programming model based on a new kind of program specification, which we refer to as expectations. Our programming model enables programmers to implicitly relax the accuracy constraints without explicitly marking operations approximate; (2) a novel approximation safety analysis that automatically identifies a safe-to-approximate subset of the program operations; and (3) an optimization that automatically marks a subset of the safe-to-approximate operations as approximate while considering the error expectation. Further, we formulate the process of automatically marking operations as approximate as an optimization problem and provide a genetic algorithm to solve it. We evaluate ExpAX using a diverse set of applications and show that it can provide significant energy savings while limiting quality-of-result degradation. ExpAX automatically excludes the safe-to-approximate operations that, if approximated, lead to significant quality degradation.
Compositional certified resource bounds, 2015
Cited by 2 (0 self)
This paper presents a new approach for automatically deriving worst-case resource bounds for C programs. The described technique combines ideas from amortized analysis and abstract interpretation in a unified framework to address four challenges for state-of-the-art techniques: compositionality, user interaction, generation of proof certificates, and scalability. Compositionality is achieved by incorporating the potential method of amortized analysis. It enables the derivation of global whole-program bounds with local derivation rules by naturally tracking size changes of variables in sequenced loops and function calls. The resource consumption of functions is described abstractly and a function call can be analyzed without access to the function body. User interaction is supported with a new mechanism that clearly separates qualitative and quantitative verification. A user can guide the analysis to derive complex non-linear bounds by using auxiliary variables and assertions. The assertions are separately proved using established qualitative techniques such as abstract interpretation or Hoare logic. Proof certificates are automatically generated from the local derivation rules. A soundness proof of the derivation system with respect to a formal cost semantics guarantees the validity of the certificates. Scalability is attained by an efficient reduction of bound inference to a linear optimization problem that can be solved by off-the-shelf LP solvers. The analysis framework is implemented in the publicly available tool C4B. An experimental evaluation demonstrates the advantages of the new technique with a comparison of C4B with existing tools on challenging micro benchmarks and the analysis of more than 2900 lines of C code from the cBench benchmark suite.
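The potential method's compositionality is easy to see on the classic sequenced-loops example (a standard textbook program of the kind such analyses target; the tick-counting harness below is illustrative, not C4B's machinery): the first loop transfers potential into `y`, and a single potential function pays for both loops without any global reasoning.

```python
def run(x, y):
    """Sequenced loops whose total cost the potential method bounds
    compositionally: phi(x, y) = 2*x + y pays one tick per iteration,
    even though the first loop grows y (transferring potential)."""
    ticks = 0
    while x > 0:       # runs x0 times, and increases y by x0
        x -= 1
        y += 1
        ticks += 1
    while y > 0:       # runs y0 + x0 times
        y -= 1
        ticks += 1
    return ticks

x0, y0 = 7, 3
cost = run(x0, y0)
print(cost)                    # total ticks for x0 = 7, y0 = 3
assert cost == 2 * x0 + y0     # the bound phi = 2x + y is tight here
```

Local rules suffice: each loop body consumes one unit of potential per iteration, and the first loop's decrement of `x` (worth 2) funds both its own tick and the later tick that the incremented `y` will cost.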
JouleGuard: Energy Guarantees for Approximate Applications
Cited by 2 (2 self)
Energy consumption has become a major constraint in computing systems as it limits battery life in mobile devices and increases costs for servers and data centers. Recently, researchers have proposed creating approximate applications that can trade accuracy for decreased energy consumption. These approaches can guarantee accuracy or performance and generally try to minimize energy; however, they provide limited guarantees of energy consumption. In this paper, we build on prior work in approximate computing to create JouleGuard: a runtime control system that coordinates application and system to provide control-theoretic formal guarantees of energy consumption, while maximizing accuracy. These energy guarantees will aid any user who has an energy budget (e.g., battery lifetime or operating cost) and must achieve the most accurate result on that budget. We implement JouleGuard and test it on a Linux/x86 server with eight different approximate applications created from two different frameworks. We find that JouleGuard respects energy budgets, provides near optimal accuracy, and adapts to phases in application workload. JouleGuard is designed to be general with respect to the applications and systems it controls, making it a suitable runtime for a number of approximate computing frameworks and languages.
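The budget-respecting control loop can be sketched in a few lines of Python (a greatly simplified sketch with an invented energy model; JouleGuard's actual controller uses control theory and machine learning over measured system states): before each iteration, pick the most accurate approximation level whose projected total energy still fits the remaining budget.

```python
def jouleguard_sketch(budget, iterations):
    """Feedback sketch: level 0 is most accurate and most expensive;
    level k costs base/(1+k) energy per iteration. Each step selects
    the lowest (most accurate) level whose projected total stays
    within the energy budget."""
    base, max_level = 4.0, 8
    spent, trace = 0.0, []
    for i in range(iterations):
        remaining = iterations - i
        for level in range(max_level + 1):       # prefer accuracy
            if spent + (base / (1 + level)) * remaining <= budget:
                break
        spent += base / (1 + level)
        trace.append(level)
    return spent, trace

spent, trace = jouleguard_sketch(budget=30.0, iterations=20)
print(f"energy used {spent:.2f} of 30.0, levels: {trace}")
```

Because the chosen level always satisfies the projection, the budget is respected by construction, and the controller relaxes toward more accurate levels as slack accumulates, which is the accuracy-maximizing side of the guarantee.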
Chisel: Reliability- and Accuracy-Aware Optimization of Approximate Computational Kernels
Cited by 2 (0 self)
The accuracy of an approximate computation is the distance between the result that the computation produces and the corresponding fully accurate result. The reliability of the computation is the probability that it will produce an acceptably accurate result. Emerging approximate hardware platforms provide approximate operations that, in return for reduced energy consumption and/or increased performance, exhibit reduced reliability and/or accuracy. We present Chisel, a system for reliability- and accuracy-aware optimization of approximate computational kernels that run on approximate hardware platforms. Given a combined reliability and/or accuracy specification, Chisel automatically selects approximate kernel operations to synthesize an approximate computation that minimizes energy consumption while satisfying its reliability and accuracy specification. We evaluate Chisel on five applications from the image processing, scientific computing, and financial analysis domains. The experimental results show that our implemented optimization algorithm enables Chisel to optimize our set of benchmark kernels to obtain energy savings from 8.7% to 19.8% compared to the original (exact) kernel implementations while preserving important reliability guarantees.
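The selection problem can be sketched in Python (a brute-force stand-in with invented per-operation numbers; Chisel itself generates and solves an integer linear program): choose, per operation, the exact or approximate variant so that total energy is minimized subject to the kernel's overall reliability meeting the specification.

```python
from itertools import product

# Hypothetical kernel operations: (exact energy, approx energy,
# approx reliability). Exact operations have reliability 1.
OPS = [(10, 6, 0.999), (8, 3, 0.995), (12, 7, 0.998),
       (6, 2, 0.99), (9, 5, 0.997)]
RELIABILITY_SPEC = 0.98  # required probability of an acceptable result

def config_cost(mask):
    """Energy and end-to-end reliability of a variant selection,
    treating operation failures as independent."""
    energy = sum(a if bit else e for bit, (e, a, _) in zip(mask, OPS))
    reliability = 1.0
    for bit, (_, _, r) in zip(mask, OPS):
        reliability *= r if bit else 1.0
    return energy, reliability

# Exhaustive search over variant selections stands in for the ILP.
best = min((m for m in product([0, 1], repeat=len(OPS))
            if config_cost(m)[1] >= RELIABILITY_SPEC),
           key=lambda m: config_cost(m)[0])
energy, reliability = config_cost(best)
print(best, energy, reliability)
```

In this instance making every operation approximate would violate the 0.98 specification, so the optimizer keeps the least reliable operation exact, recovering most of the energy savings while staying within spec.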