Results 1 - 10 of 15
Probabilistic relational verification for cryptographic implementations, Unpublished manuscript, 2013
"... Relational program logics have been used for mechanizing for-mal proofs of various cryptographic constructions. With an eye to-wards scaling these successes towards end-to-end security proofs for implementations of distributed systems, we present RF⋆, a rela-tional extension of F⋆, a general-purpose ..."
Abstract
-
Cited by 8 (1 self)
- Add to MetaCart
(Show Context)
Relational program logics have been used for mechanizing formal proofs of various cryptographic constructions. With an eye towards scaling these successes towards end-to-end security proofs for implementations of distributed systems, we present RF⋆, a relational extension of F⋆, a general-purpose higher-order stateful programming language with a verification system based on refinement types. The distinguishing feature of RF⋆ is a relational Hoare logic for a higher-order, stateful, probabilistic language. Through careful language design, we adapt the F⋆ typechecker to generate both classic and relational verification conditions, and to automatically discharge their proofs using an SMT solver. Thus, we are able to benefit from the existing features of F⋆, including its abstraction facilities for modular reasoning about program fragments. We evaluate RF⋆ experimentally by programming a series of cryptographic constructions and protocols, and by verifying their security properties, ranging from information flow to unlinkability, integrity, and privacy. Moreover, we validate the design of RF⋆ by formalizing in Coq a core probabilistic λ-calculus and a relational refinement type system and proving the soundness of the latter against a denotational semantics of the probabilistic λ-calculus.
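For readers unfamiliar with relational program logics: a judgment in such a logic relates two program runs under a pre- and post-relation on pairs of states. Schematically (this is the standard shape of a relational Hoare judgment, not RF⋆'s concrete syntax, which the abstract does not show):

\[
\vdash \{\Phi\}\; c_1 \sim c_2\; \{\Psi\}
\quad\text{iff}\quad
\forall s_1, s_2.\;\; \Phi(s_1, s_2) \implies \Psi\big(\llbracket c_1 \rrbracket s_1,\; \llbracket c_2 \rrbracket s_2\big),
\]

with the probabilistic case interpreted over the two output distributions rather than single states (for instance via a coupling of the distributions).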
Stochastic Optimization of Floating-Point Programs with Tunable Precision
"... The aggressive optimization of floating-point computations is an important problem in high-performance computing. Unfortunately, floating-point instruction sets have complicated semantics that of-ten force compilers to preserve programs as written. We present a method that treats floating-point opti ..."
Abstract
-
Cited by 5 (1 self)
- Add to MetaCart
(Show Context)
The aggressive optimization of floating-point computations is an important problem in high-performance computing. Unfortunately, floating-point instruction sets have complicated semantics that often force compilers to preserve programs as written. We present a method that treats floating-point optimization as a stochastic search problem. We demonstrate the ability to generate reduced-precision implementations of Intel’s handwritten C numeric library which are up to 6 times faster than the original code, and achieve end-to-end speedups of over 30% on a direct numeric simulation and a ray tracer by optimizing kernels that can tolerate a loss of precision while still remaining correct. Because these optimizations are mostly not amenable to formal verification using the current state of the art, we present a stochastic search technique for characterizing maximum error. The technique comes with an asymptotic guarantee and provides strong evidence of correctness.
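The core loop of such a stochastic search can be sketched as a Markov-chain Monte Carlo walk over candidate programs, accepting proposals according to a cost that trades numeric error against speed. The sketch below is an illustration under assumed names and a toy program representation (a truncated series for exp), not the authors' tool:

# A minimal sketch of stochastic (MCMC-style) search over program rewrites.
# The cost model and the toy "program" representation are illustrative
# assumptions, not the paper's implementation.
import math
import random

def make_poly(n_terms):
    """Truncated Taylor series for exp(x); n_terms trades speed for precision."""
    def p(x):
        return sum(x**k / math.factorial(k) for k in range(n_terms))
    p.n_terms = n_terms
    return p

def cost(program, tests, beta_err=1.0, beta_perf=1.0):
    """Cost = numeric error on the test inputs + a crude performance proxy."""
    err = sum(abs(program(x) - math.exp(x)) for x in tests)   # target: exp(x)
    perf = program.n_terms                                     # fewer terms = faster
    return beta_err * err + beta_perf * perf

def mcmc_search(steps=5000, temperature=0.5):
    tests = [random.uniform(-1.0, 1.0) for _ in range(20)]
    current = make_poly(12)
    cur_cost = cost(current, tests)
    best, best_cost = current, cur_cost
    for _ in range(steps):
        # Propose a random mutation: here, just grow or shrink the kernel.
        proposal = make_poly(max(1, current.n_terms + random.choice([-1, 1])))
        prop_cost = cost(proposal, tests)
        # Metropolis acceptance: take improvements, sometimes accept regressions.
        if prop_cost <= cur_cost or random.random() < math.exp((cur_cost - prop_cost) / temperature):
            current, cur_cost = proposal, prop_cost
        if cur_cost < best_cost:
            best, best_cost = current, cur_cost
    return best

if __name__ == "__main__":
    winner = mcmc_search()
    print("selected", winner.n_terms, "terms for exp(x) on [-1, 1]")

The same accept/reject structure applies when proposals mutate machine instructions instead of a term count; only the proposal and cost functions change.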
Robustness Analysis of Finite Precision Implementations
"... Abstract. A desirable property of control systems is robustness to inputs, when small perturbations of the inputs of a system will cause only small perturbations on outputs. This property should be maintained at the implementation level, where close inputs can lead to different execution paths. The ..."
Abstract
-
Cited by 3 (0 self)
- Add to MetaCart
(Show Context)
A desirable property of control systems is robustness to inputs: small perturbations of the inputs of a system should cause only small perturbations of its outputs. This property should be maintained at the implementation level, where close inputs can lead to different execution paths. The problem becomes crucial for finite precision implementations, where every elementary computation is affected by an error. In this context, almost every test is potentially unstable, that is, for a given input, the finite precision and real-number paths may differ. Still, state-of-the-art error analyses rely on the stable test hypothesis, yielding unsound error bounds when a conditional block is not robust to uncertainties. We propose a new abstract-interpretation-based error analysis of finite precision implementations, which is sound in the presence of unstable tests, by bounding the discontinuity error for path divergences. This gives a tractable analysis implemented in the FLUCTUAT analyzer.
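As a concrete illustration of an unstable test (a hypothetical example, not taken from the paper), the branch below is taken differently under double precision and under exact real arithmetic for the same input, so an error analysis assuming stable tests would miss the resulting discontinuity:

# Hypothetical example of an unstable test: the finite-precision and
# real-number executions take different branches for the same input.
from fractions import Fraction

def step_float(x, y):
    s = x + y                      # rounded, double precision
    return 0.0 if s <= 0.3 else 1.0

def step_real(x, y):
    s = x + y                      # exact rationals stand in for the reals
    return Fraction(0) if s <= Fraction(3, 10) else Fraction(1)

# 0.1 + 0.2 rounds up to 0.30000000000000004 > 0.3, but 1/10 + 2/10 = 3/10 exactly.
print(step_float(0.1, 0.2))                         # 1.0  (else branch)
print(step_real(Fraction(1, 10), Fraction(2, 10)))  # 0    (then branch)
# The two outputs differ by 1 even though the rounding error on s is ~1e-17:
# that gap is the "discontinuity error" the analysis must bound.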
Robustness Analysis of Networked Systems
- In: Verification, Model Checking, and Abstract Interpretation (VMCAI), 2013
"... Abstract. Many software systems are naturally modeled as networks of interacting elements such as computing nodes, input devices, and output devices. In this paper, we present a notion of robustness for a networked system when the underlying network is prone to errors. We model such a system N as a ..."
Abstract
-
Cited by 2 (2 self)
- Add to MetaCart
Many software systems are naturally modeled as networks of interacting elements such as computing nodes, input devices, and output devices. In this paper, we present a notion of robustness for a networked system when the underlying network is prone to errors. We model such a system N as a set of processes that communicate with each other over a set of internal channels, and interact with the outside world through a fixed set of input and output channels. We focus on network errors that arise from channel perturbations, and assume that we are given a worst-case bound δ on the number of errors that can occur in the internal channels of N. We say that the system N is (δ, ε)-robust if the deviation of the output of the perturbed system from the output of the unperturbed system is bounded by ε. We study a specific instance of this problem when each process is a Mealy machine, and the distance metric used to quantify the deviation from the desired output is either the L1-norm or the Levenshtein distance (also known as the edit distance). We present efficient decision procedures for (δ, ε)-robustness for both distance metrics. Our solution draws upon techniques from automata theory, essentially reducing the problem of checking (δ, ε)-robustness to the problem of checking emptiness for a certain class of reversal-bounded counter automata.
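Read schematically (one plausible rendering of the prose definition; the paper's exact formalization may differ), the robustness notion in the abstract is:

\[
N \text{ is } (\delta, \varepsilon)\text{-robust}
\;\iff\;
\forall\, \text{input } w,\ \forall\, \text{perturbation } \pi \text{ with at most } \delta \text{ channel errors:}\quad
d\big(N(w),\, N_\pi(w)\big) \le \varepsilon,
\]

where \(d\) is either the \(L_1\)-norm or the Levenshtein distance on output sequences.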
Regular Real Analysis
"... Abstract—We initiate the study of regular real analysis, or the analysis of real functions that can be encoded by automata on infinite words. It is known that ω-automata can be used to represent relations between real vectors, reals being represented in exact precision as infinite streams. The regul ..."
Abstract
-
Cited by 1 (0 self)
- Add to MetaCart
(Show Context)
We initiate the study of regular real analysis, or the analysis of real functions that can be encoded by automata on infinite words. It is known that ω-automata can be used to represent relations between real vectors, reals being represented in exact precision as infinite streams. The regular functions studied here constitute the functional subset of such relations. We show that some classic questions in function analysis can become elegantly computable in the context of regular real analysis. Specifically, we present an automata-theoretic technique for reasoning about limit behaviors of regular functions, and obtain, using this method, a decision procedure to verify the continuity of a regular function. Several other decision procedures for regular functions—for finding roots, fixpoints, minima, etc.—are also presented. At the same time, we show that the class of regular functions is quite rich, and includes functions that are highly challenging to encode using traditional symbolic notation.
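To make the stream encoding concrete (an illustrative example, not one of the paper's constructions): a real in [0, 1] can be viewed as an infinite binary stream of fractional digits, and a letter-to-letter transducer over such streams computes a function on reals. The map x ↦ 1 - x, for instance, is realized digit by digit by complementing each bit.

# Illustrative only: reals in [0, 1] as infinite binary digit streams,
# and x |-> 1 - x as a one-state, letter-to-letter stream transducer.
from itertools import islice

def digits_of(x, base=2):
    """Infinite stream of fractional digits of x in [0, 1)."""
    while True:
        x *= base
        d = int(x)
        x -= d
        yield d

def one_minus(stream):
    """Bitwise complement: a valid binary expansion of 1 - x
    (dyadic rationals have two expansions; either one is acceptable)."""
    for bit in stream:
        yield 1 - bit

if __name__ == "__main__":
    x = 0.375                     # 0.011000... in binary
    out = list(islice(one_minus(digits_of(x)), 8))
    print(out)                    # [1, 0, 0, 1, 1, 1, 1, 1], i.e. 0.625 = 1 - 0.375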
Robustness Analysis of String Transducers
"... Abstract. Many important functions over strings can be represented as finite-state string transducers. In this paper, we present an automata-theoretic technique for algorithmically verifying that such a function is robust to uncertainty. A function encoded as a transducer is defined to be robust if ..."
Abstract
-
Cited by 1 (1 self)
- Add to MetaCart
Many important functions over strings can be represented as finite-state string transducers. In this paper, we present an automata-theoretic technique for algorithmically verifying that such a function is robust to uncertainty. A function encoded as a transducer is defined to be robust if for each small (i.e., bounded) change to any input string, the change in the transducer’s output is proportional to the change in the input. Changes to input and output strings are quantified using weighted generalizations of the Levenshtein and Manhattan distances over strings. Our main technical contribution is a set of decision procedures based on reducing the problem of robustness verification of a transducer to the problem of checking the emptiness of a reversal-bounded counter machine. The decision procedures under the generalized Manhattan and Levenshtein distance metrics are in PSPACE and EXPSPACE, respectively. For transducers that are Mealy machines, the decision procedures under these metrics are in NLOGSPACE and PSPACE, respectively.
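Reading the prose definition schematically (a hedged rendering; the bound B and constant K below are assumed names for the quantities the abstract describes), a transducer T is robust if bounded input changes produce proportionally bounded output changes:

\[
\forall\, s, s'.\quad d_{\mathrm{in}}(s, s') \le B
\;\implies\;
d_{\mathrm{out}}\big(T(s),\, T(s')\big) \le K \cdot d_{\mathrm{in}}(s, s'),
\]

where \(d_{\mathrm{in}}\) and \(d_{\mathrm{out}}\) are weighted generalizations of the Levenshtein or Manhattan distance over strings.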
Quantitative timed simulation functions and refinement metrics for real-time systems
- In: HSCC, ACM, 2013
"... ar ..."
(Show Context)
Consistency Analysis of Decision-Making Programs
"... Applications in many areas of computing make discrete decisions under uncertainty, for reasons such as limited numerical precision in calculations and errors in sensor-derived inputs. As a result, indi-vidual decisions made by such programs may be nondeterministic, and lead to contradictory decision ..."
Abstract
-
Cited by 1 (1 self)
- Add to MetaCart
(Show Context)
Applications in many areas of computing make discrete decisions under uncertainty, for reasons such as limited numerical precision in calculations and errors in sensor-derived inputs. As a result, individual decisions made by such programs may be nondeterministic, and lead to contradictory decisions at different points of an execution. This means that an otherwise correct program may execute along paths that it would not follow under its ideal semantics, violating essential program invariants on the way. A program is said to be consistent if it does not suffer from this problem despite uncertainty in decisions. In this paper, we present a sound, automatic program analysis for verifying that a program is consistent in this sense. Our analysis proves that each decision made along a program execution is consistent with the decisions made earlier in the execution. The proof
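A hypothetical illustration of the kind of inconsistency at issue (not an example from the paper): two tests that are mutually exclusive under ideal semantics can disagree when each independently reads a noisy or imprecise value, letting one execution enter branches that contradict each other.

# Hypothetical example of an inconsistent decision-making program:
# under ideal semantics the two tests below are mutually exclusive,
# but with fresh sensor noise both branches can be entered in one run.
import random

def read_distance(true_distance, noise=0.5):
    """Each read of the sensor returns the true value plus fresh noise."""
    return true_distance + random.uniform(-noise, noise)

def controller(true_distance, threshold=10.0):
    decisions = []
    if read_distance(true_distance) < threshold:
        decisions.append("brake")          # decided: we are too close
    # ... later in the same execution, the "same" condition is re-evaluated ...
    if read_distance(true_distance) >= threshold:
        decisions.append("accelerate")     # decided: we are far enough away
    return decisions

if __name__ == "__main__":
    random.seed(7)
    runs = [controller(10.0) for _ in range(1000)]
    # Some runs both brake and accelerate: contradictory decisions in one execution.
    print(sum(1 for r in runs if len(r) == 2), "of 1000 runs were inconsistent")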
A Taste of Sound Reasoning in Faust
"... We address the question of what software verifica-tion can do for the audio community by showcasing some preliminary design ideas and tools for a new framework dedicated to the formal reasoning about Faust programs. We use as a foundation one of the strongest current proof assistants, namely Coq com ..."
Abstract
- Add to MetaCart
(Show Context)
We address the question of what software verification can do for the audio community by showcasing some preliminary design ideas and tools for a new framework dedicated to formal reasoning about Faust programs. We use as a foundation one of the strongest current proof assistants, namely Coq combined with SSReflect. We illustrate the practical impact of our approach via a use case, namely the proof that the implementation of a simple low-pass filter written in the Faust audio programming language indeed meets one of its specification properties. The paper thus serves three purposes: (1) to provide a gentle introduction to the use of formal tools to the audio community, (2) to put forward programming and formal reasoning paradigms we think are well suited to the audio domain, and (3) to illustrate this approach on a simple yet practical audio signal processing example, a low-pass filter.
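The abstract does not show the filter or the verified property; as an assumed illustration of the kind of specification involved, a one-pole low-pass filter and its unit DC gain can be stated as

\[
y[n] = (1 - c)\,x[n] + c\,y[n-1], \qquad 0 \le c < 1,
\]

and for a constant input \(x[n] = x_0\) the output converges to \(x_0\) (DC gain 1); this is the sort of property one might state in Coq and prove against the Faust implementation.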
Introduction to Automatic Software Repair, 2015
"... This document presents an introduction to automatic software repair. It has been first ..."
Abstract
- Add to MetaCart
(Show Context)
This document presents an introduction to automatic software repair. It has been first