Results 1–10 of 14
Static Analysis for Probabilistic Programs: Inferring Whole Program Properties from Finitely Many Paths.
Abstract

Cited by 16 (1 self)
We propose an approach for the static analysis of probabilistic programs that sense, manipulate, and control based on uncertain data. Examples include programs used in risk analysis, medical decision making, and cyber-physical systems. Correctness properties of such programs take the form of queries that seek the probabilities of assertions over program variables. We present a static analysis approach that provides guaranteed interval bounds on the values (assertion probabilities) of such queries. First, we observe that for probabilistic programs, it is possible to conclude facts about the behavior of the entire program by choosing a finite, adequate set of its paths. We provide strategies for choosing such a set of paths and verifying its adequacy. The queries are evaluated over each path by a combination of symbolic execution and probabilistic volume-bound computations. Each path yields interval bounds that can be summed up with a “coverage” bound to yield an interval that encloses the probability of assertion for the program as a whole. We demonstrate promising results on a suite of benchmarks from many different sources, including robotic manipulators and medical decision making programs.
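The interval-combination step described in this abstract can be made concrete. The following is a minimal sketch under my own assumptions, not the paper's implementation: each analysed path contributes a probability interval for the assertion, and the unanalysed remainder (whose mass is bounded via the coverage estimate) may contribute anywhere from none to all of its mass, widening only the upper bound.

```python
# Minimal sketch (not the paper's implementation) of combining per-path
# interval bounds with a coverage bound into a whole-program interval.

def combine_path_bounds(path_bounds, coverage_lower):
    """path_bounds: (lo, hi) probability intervals, one per analysed path.
    coverage_lower: lower bound on the total probability mass of those paths.
    Returns an interval enclosing the whole-program assertion probability."""
    lo = sum(b[0] for b in path_bounds)
    # Unanalysed paths carry at most (1 - coverage_lower) probability mass,
    # and in the worst case all of it satisfies the assertion.
    hi = sum(b[1] for b in path_bounds) + (1.0 - coverage_lower)
    return lo, min(hi, 1.0)

bounds = combine_path_bounds([(0.10, 0.12), (0.05, 0.06)], coverage_lower=0.95)
```

With 95% coverage, two paths contributing [0.10, 0.12] and [0.05, 0.06] yield an enclosing interval of roughly [0.15, 0.23].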
Min-Entropy as a Resource
Abstract

Cited by 7 (0 self)
Secrecy is fundamental to computer security, but real systems often cannot avoid leaking some secret information. For this reason, it is useful to model secrecy quantitatively, thinking of it as a “resource” that may be gradually “consumed” by a system. In this paper, we explore this intuition through several dynamic and static models of secrecy consumption, ultimately focusing on (average) vulnerability and min-entropy leakage as especially useful models of secrecy consumption. We also consider several composition operators that allow smaller systems to be combined into a larger system, and explore the extent to which the secrecy consumption of a combined system is constrained by the secrecy consumption of its constituents.
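The vulnerability and min-entropy leakage measures mentioned here have short closed forms. The sketch below uses the standard definitions for a discrete channel (it is not code from the paper): prior vulnerability is the probability of the adversary's best single guess, posterior vulnerability is the expected best guess after observing the output, and min-entropy leakage is the log-ratio of the two.

```python
import math

# Standard definitions, sketched for a discrete channel:
# channel[x][y] = P(output y | secret x); prior[x] = P(secret x).

def vulnerability(prior):
    return max(prior)  # adversary's best single-guess probability

def posterior_vulnerability(prior, channel):
    n_out = len(channel[0])
    return sum(max(prior[x] * channel[x][y] for x in range(len(prior)))
               for y in range(n_out))

def min_entropy_leakage(prior, channel):
    return math.log2(posterior_vulnerability(prior, channel) / vulnerability(prior))

# A channel that reveals a 4-valued secret completely leaks log2(4) = 2 bits.
identity = [[1.0 if y == x else 0.0 for y in range(4)] for x in range(4)]
leak = min_entropy_leakage([0.25] * 4, identity)
```

A non-interfering channel (every row identical) gives zero leakage under the same formulas, which matches the "no secrecy consumed" intuition.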
Quantifying information flow for dynamic secrets
Abstract

Cited by 6 (2 self)
A metric is proposed for quantifying leakage of information about secrets and about how secrets change over time. The metric is used with a model of information flow for probabilistic, interactive systems with adaptive adversaries. The model and metric are implemented in a probabilistic programming language and used to analyze several examples. The analysis demonstrates that adaptivity increases information flow. Keywords: dynamic secret, quantitative information flow, probabilistic programming, gain function, vulnerability.
Faster Two-Bit Pattern Analysis of Leakage
Abstract

Cited by 4 (0 self)
In the context of quantitative information flow analysis, two-bit patterns are a recent approach to computing upper bounds on leakage in deterministic programs. This paper shows that two-bit pattern analysis can be done more efficiently through the use of four new techniques: implication graphs, random execution, STP counterexamples, and deductive closure. We find that these techniques reduce the analysis time for a set of case studies by an average of 72%; in close to half the cases, the reduction is greater than 90%.
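Of the four techniques named, the implication-graph and deductive-closure pairing is the easiest to sketch. The reading below is my own illustration, not the paper's code: record already-proved implications between bit literals as directed edges, then take the transitive closure (Warshall-style), so that derived implications come for free instead of requiring further solver queries.

```python
# Illustrative sketch: "deductive closure" of an implication graph, realised
# here as plain transitive closure over bit-literal nodes (Warshall-style).

def deductive_closure(n, edges):
    """n: number of bit literals; edges: proved implications (a -> b).
    Returns reach, where reach[a][b] is True iff literal a implies literal b."""
    reach = [[i == j for j in range(n)] for i in range(n)]
    for a, b in edges:
        reach[a][b] = True
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if reach[i][k] and reach[k][j]:
                    reach[i][j] = True
    return reach

# From proved edges a->b and b->c we derive a->c without another query.
closure = deductive_closure(3, [(0, 1), (1, 2)])
```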
Hybrid Information Flow Monitoring Against Web Tracking
Abstract

Cited by 2 (1 self)
Motivated by the problem of stateless web tracking (fingerprinting), we propose a novel approach to hybrid information flow monitoring that tracks the knowledge about secret variables using logical formulae. This knowledge representation helps to compare and improve the precision of hybrid information flow monitors. We define a generic hybrid monitor parametrised by a static analysis and derive sufficient conditions on the static analysis for soundness and relative precision of hybrid monitors. We instantiate the generic monitor with a combined static constant and dependency analysis. Several other hybrid monitors, including those based on well-known hybrid techniques for information flow control, are formalised as instances of our generic hybrid monitor. These monitors are organised into a hierarchy that establishes their relative precision. The whole framework is accompanied by a formalisation of the theory in the Coq proof assistant.
Slicing probabilistic programs. In PLDI, 2014.
Abstract

Cited by 2 (1 self)
Probabilistic programs use the familiar notation of programming languages to specify probabilistic models. Suppose we are interested in estimating the distribution of the return expression r of a probabilistic program P. We are interested in slicing the probabilistic program P and obtaining a simpler program SLI(P) which retains only those parts of P that are relevant to estimating r, and elides those parts of P that are not relevant to estimating r. We desire that the SLI transformation be both correct and efficient. By correct, we mean that P and SLI(P) have identical estimates on r. By efficient, we mean that estimation over SLI(P) be as fast as possible. We show that the usual notion of program slicing, which traverses control and data dependencies backward from the return expression r, is unsatisfactory for probabilistic programs, since it produces incorrect slices on some programs and suboptimal ones on others. Our key insight is that in addition to the usual notions of control dependence and data dependence that are used to slice non-probabilistic programs, a new kind of dependence called observe dependence arises naturally due to observe statements in probabilistic programs. We propose a new definition of SLI(P) which is both correct and efficient for probabilistic programs, by including observe dependence in addition to control and data dependences for computing slices. We prove correctness mathematically, and we demonstrate efficiency empirically. We show that by applying the SLI transformation as a pre-pass, we can improve the efficiency of probabilistic inference, not only in our own inference tool R2, but also in other systems for performing inference such as Church and Infer.NET.
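The observe-dependence point can be made concrete with a classic two-coin example (my own illustration, not taken from the abstract): the returned variable b1 has no data or control dependence on the observe statement, yet slicing the observe away changes b1's distribution.

```python
from itertools import product

# Program: b1 ~ Bernoulli(0.5); b2 ~ Bernoulli(0.5); observe(b1 or b2); return b1
# A classical backward slice from "return b1" would drop b2 and the observe,
# which changes P(b1 = 1) from 2/3 to 1/2.

def prob_b1_true(keep_observe):
    weight_true = weight_total = 0.0
    for b1, b2 in product([0, 1], repeat=2):  # enumerate all four worlds
        w = 0.25                               # each world has probability 1/4
        if keep_observe and not (b1 or b2):
            continue                           # observe rejects this world
        weight_total += w
        weight_true += w * b1
    return weight_true / weight_total
```

Exact enumeration gives 2/3 with the observe kept and 1/2 with it sliced away, which is why observe dependence must be added alongside control and data dependence.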
Browser Randomisation against Fingerprinting: a Quantitative Information Flow Approach
Abstract

Cited by 2 (0 self)
Web tracking companies use device fingerprinting to distinguish the users of websites by checking the numerous properties of their machines and web browsers. One way to protect the users' privacy is to make them switch between different machine and browser configurations. We propose a formalisation of this privacy enforcement mechanism. We use information-theoretic channels to model the knowledge of the tracker and the fingerprinting program, and show how to synthesise a randomisation mechanism that defines the distribution of configurations for each user. This mechanism provides a strong guarantee of privacy (the probability of identifying the user is bounded by a given threshold) while maximising usability (the user switches to other configurations rarely). To find an optimal solution, we express the enforcement problem of randomisation as a linear program. We investigate and compare several approaches to randomisation and find that more efficient privacy enforcement would often provide lower usability. Finally, we relax the requirement of knowing the fingerprinting program in advance, by proposing a randomisation mechanism that guarantees privacy for an arbitrary program.
Bayesian inference using data flow analysis. In ESEC/SIGSOFT FSE, 2013.
Abstract

Cited by 1 (0 self)
We present a new algorithm for Bayesian inference over probabilistic programs, based on data flow analysis techniques from the program analysis community. Unlike existing techniques for Bayesian inference on probabilistic programs, our data flow analysis algorithm is able to perform inference directly on probabilistic programs with loops. Even for loop-free programs, we show that data flow analysis offers better precision and better performance than existing techniques. We also describe heuristics that are crucial for our inference to scale, and present an empirical evaluation of our algorithm over a range of benchmarks.
Expectation Invariants for Probabilistic Program Loops as Fixed Points. In SAS, 2014.
Abstract

Cited by 1 (0 self)
We present static analyses for probabilistic loops using expectation invariants. Probabilistic loops are imperative while-loops augmented with calls to random variable generators. Whereas traditional program analysis uses Floyd-Hoare style invariants to over-approximate the set of reachable states, our approach synthesizes invariant inequalities involving the expected values of program expressions at the loop head. We first define the notion of expectation invariants, and demonstrate their usefulness in analyzing probabilistic program loops. Next, we present the set of expectation invariants for a loop as a fixed point of the pre-expectation operator over sets of program expressions. Finally, we use existing concepts from abstract interpretation theory to present an iterative analysis that synthesizes expectation invariants for probabilistic program loops. We show how the standard polyhedral abstract domain can be used to synthesize expectation invariants for probabilistic programs, and demonstrate the usefulness of our approach on some examples of probabilistic program loops.
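An expectation invariant can be checked mechanically on a small example. This is an illustration under my own choice of loop, not the paper's synthesis procedure: for the loop `while i < n: x = x + flip(0.5); i = i + 1`, the expression E[2*x - i] is invariant at the loop head (it always equals 2*x0), and exact propagation of the distribution of x confirms this at every iteration.

```python
# Exact distribution propagation for:  while i < n: x += flip(0.5); i += 1
# Checks that the expectation invariant E[2*x - i] equals 2*x0 at every
# loop-head visit (my example loop, not the paper's).

def invariant_values(n, x0=0):
    dist = {x0: 1.0}                  # exact distribution of x at the loop head
    values = []
    for i in range(n + 1):
        e_x = sum(x * p for x, p in dist.items())
        values.append(2 * e_x - i)    # E[2*x - i] at head visit i
        nxt = {}                      # one iteration: x += flip(0.5)
        for x, p in dist.items():
            for dx in (0, 1):
                nxt[x + dx] = nxt.get(x + dx, 0.0) + 0.5 * p
        dist = nxt
    return values

vals = invariant_values(10)           # every entry stays at 2*x0 = 0
```

This mirrors the fixed-point view in the abstract: the candidate expression's pre-expectation across one iteration equals its value at the head, so it survives the iteration-by-iteration check.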
Modelbased Risk Analysis for Data Stream Queries
Abstract
In the context of coalition decision support systems, an automated data analyst might be tasked with answering queries on a data stream. Asking the same query at different times can yield different answers, since the underlying data changes.