Results 1–10 of 81
On the Foundations of Quantitative Information Flow
Cited by 116 (10 self)
Abstract. There is growing interest in quantitative theories of information flow in a variety of contexts, such as secure information flow, anonymity protocols, and side-channel analysis. Such theories offer an attractive way to relax the standard noninterference properties, letting us tolerate “small” leaks that are necessary in practice. The emerging consensus is that quantitative information flow should be founded on the concepts of Shannon entropy and mutual information. But a useful theory of quantitative information flow must provide appropriate security guarantees: if the theory says that an attack leaks x bits of secret information, then x should be useful in calculating bounds on the resulting threat. In this paper, we focus on the threat that an attack will allow the secret to be guessed correctly in one try. With respect to this threat model, we argue that the consensus definitions actually fail to give good security guarantees; the problem is that a random variable can have arbitrarily large Shannon entropy even if it is highly vulnerable to being guessed. We then explore an alternative foundation based on a concept of vulnerability (closely related to Bayes risk), which measures uncertainty using Rényi’s min-entropy rather than Shannon entropy.
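The abstract's central point — that Shannon entropy can be arbitrarily large while the secret remains easy to guess in one try — can be illustrated with a short Python sketch (the distribution and function names are illustrative, not from the paper):

```python
import math

def shannon_entropy(p):
    """H(X) = -sum_x p(x) log2 p(x), in bits."""
    return -sum(q * math.log2(q) for q in p if q > 0)

def vulnerability(p):
    """Probability of guessing the secret correctly in one try."""
    return max(p)

def min_entropy(p):
    """Renyi min-entropy: H_inf(X) = -log2(max_x p(x))."""
    return -math.log2(max(p))

# One value of the secret carries probability 1/2; the remaining
# 2**20 values share the other half uniformly.
n = 2 ** 20
p = [0.5] + [0.5 / n] * n

h = shannon_entropy(p)   # 11.0 bits: the secret "looks" hard to guess
v = vulnerability(p)     # 0.5: guessed in one try half the time
h_inf = min_entropy(p)   # 1.0 bit: reports the one-guess threat directly
```

Here the secret looks like 11 bits of uncertainty by Shannon entropy, yet an adversary guesses it in one try with probability 1/2; min-entropy (1 bit) reflects that threat directly.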
Quantifying Location Privacy
IEEE Symposium on Security and Privacy, 2011
Cited by 71 (17 self)
It is a well-known fact that the progress of personal communication devices leads to serious concerns about privacy in general, and location privacy in particular. In response to these issues, a number of Location-Privacy Protection Mechanisms (LPPMs) have been proposed during the last decade. However, their assessment and comparison remain problematic because of the absence of a systematic method to quantify them. In particular, the assumptions about the attacker’s model tend to be incomplete, with the risk of a possibly wrong estimation of the users’ location privacy. In this paper, we address these issues by providing a formal framework for the analysis of LPPMs; it captures, in particular, the prior information that might be available to the attacker and the various attacks that he can perform. The privacy of users and the success of the adversary in his location-inference attacks are two sides of the same coin. We revise location privacy by giving a simple, yet comprehensive, model to formulate all types of location-information disclosure attacks. Thus, by formalizing the adversary’s performance, we propose and justify the right metric to quantify location privacy. We clarify the difference between three aspects of the adversary’s inference attacks, namely their accuracy, certainty, and correctness. We show that correctness determines the privacy of users: in other words, the adversary’s expected estimation error is the metric of users’ location privacy. We rely on well-established statistical methods to formalize and implement the attacks in a tool, the Location-Privacy Meter, which measures the location privacy of mobile users under various LPPMs. In addition to evaluating some example LPPMs with our tool, we assess the appropriateness of two popular metrics for location privacy: entropy and k-anonymity. The results show a lack of satisfactory correlation between these metrics and the success of the adversary in inferring the users’ actual locations.
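A minimal sketch of the expected-estimation-error metric the abstract argues for (the grid, distance function, and posteriors below are invented for illustration and are not the paper's model):

```python
def expected_error(posterior, true_loc, dist):
    """Adversary's expected estimation error: mean distance between the
    user's true location and the adversary's probabilistic estimate."""
    return sum(p * dist(loc, true_loc) for loc, p in posterior.items())

dist = lambda a, b: abs(a - b)   # distance on a 1-D grid of 5 cells

# Invented adversary posteriors over cells 0..4; the user is in cell 3.
sharp = {2: 0.05, 3: 0.9, 4: 0.05}   # adversary is almost certain
flat = {c: 0.2 for c in range(5)}    # adversary learned little

low_privacy = expected_error(sharp, 3, dist)    # 0.1
high_privacy = expected_error(flat, 3, dist)    # 1.4
```

A sharp (correct) posterior yields a small expected error, i.e. low location privacy, while a flat posterior yields a large one; entropy alone would not distinguish a confident-but-wrong adversary from a correct one.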
Vulnerability bounds and leakage resilience of blinded cryptography under timing attacks
In IEEE Computer Security Foundations, 2010
Cited by 32 (7 self)
Abstract—We establish formal bounds for the number of min-entropy bits that can be extracted in a timing attack against a cryptosystem that is protected by blinding, the state-of-the-art countermeasure against timing attacks. Compared with existing bounds, our bounds are both tighter and of greater operational significance, in that they directly address the key’s one-guess vulnerability. Moreover, we show that any semantically secure public-key cryptosystem remains semantically secure in the presence of timing attacks if the implementation is protected by blinding and bucketing. This result shows that, by considering (and justifying) more optimistic models of leakage than recent proposals for leakage-resilient cryptosystems, one can achieve provable resistance against side-channel attacks for standard cryptographic primitives.
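Bucketing, the second countermeasure the abstract mentions, can be sketched as follows. The bucket boundaries are invented, and the per-observation log2 bound given here is only the single-observation intuition, not the paper's actual bounds (which also account for blinding and many observations):

```python
import math

def bucket(t, boundaries):
    """Release a response only at the next bucket boundary at or after t,
    so the attacker observes the boundary, not the raw timing."""
    for b in boundaries:
        if t <= b:
            return b
    return boundaries[-1]   # clamp anything slower than the last bucket

# Invented bucket boundaries (ms) and raw decryption times.
boundaries = [10, 20, 40, 80]
times = [3.2, 11.7, 12.1, 55.0, 9.9]
observed = [bucket(t, boundaries) for t in times]   # [10, 20, 20, 80, 10]

# With b buckets, a single timing observation can reveal at most
# log2(b) bits about the key.
per_obs_bound = math.log2(len(boundaries))   # 2.0
```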
On the Bayes Risk in Information-Hiding Protocols
Cited by 30 (16 self)
Randomized protocols for hiding private information can be regarded as noisy channels in the information-theoretic sense, and the inference of the concealed information can be regarded as a hypothesis-testing problem. We consider the Bayesian approach to the problem and investigate the probability of error associated with the MAP (Maximum A Posteriori) inference rule. Our main result is a constructive characterization of a convex base of the probability of error, which allows us to compute its maximum value (over all possible input distributions) and to identify upper bounds for it in terms of simple functions. As a side result, we are able to improve the Hellman–Raviv and Santhi–Vardy bounds expressed in terms of conditional entropy. We then discuss an application of our methodology to the Crowds protocol; in particular, we show how to compute bounds on the probability that an adversary breaks anonymity.
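The MAP probability of error that the abstract studies can be computed directly for a small channel; the prior and channel matrix below are invented examples, not from the paper:

```python
def map_error(prior, channel):
    """Probability of error of the MAP (Maximum A Posteriori) rule:
    P_e = 1 - sum_y max_x p(x) p(y|x), for prior p(x) and channel p(y|x)."""
    outputs = {y for row in channel.values() for y in row}
    return 1.0 - sum(max(prior[x] * channel[x].get(y, 0.0) for x in prior)
                     for y in outputs)

# An invented binary hypothesis-testing instance: two secrets, two observables.
prior = {"a": 0.5, "b": 0.5}
channel = {"a": {0: 0.8, 1: 0.2}, "b": {0: 0.3, 1: 0.7}}
pe = map_error(prior, channel)  # 1 - (0.40 + 0.35) = 0.25
```

The paper's contribution is characterizing how this quantity behaves over all input distributions; the sketch only evaluates it at one prior.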
Information-theoretic bounds for differentially private mechanisms
In 24th IEEE Computer Security Foundations Symposium (CSF 2011). IEEE Computer Society, Los Alamitos
Cited by 26 (2 self)
Abstract—There are two active and independent lines of research that aim at quantifying the amount of information that is disclosed by computing on confidential data. Each line of research has developed its own notion of confidentiality: on the one hand, differential privacy is the emerging consensus guarantee used for privacy-preserving data analysis; on the other hand, information-theoretic notions of leakage are used for characterizing the confidentiality properties of programs in language-based settings. The purpose of this article is to establish formal connections between both notions of confidentiality and to compare them in terms of the security guarantees they deliver. We obtain the following results. First, we establish upper bounds for the leakage of every ε-differentially private mechanism in terms of ε and the size of the mechanism’s input domain. We achieve this by identifying and leveraging a connection to coding theory. Second, we construct a class of ε-differentially private channels whose leakage grows with the size of their input domains. Using these channels, we show that there cannot be domain-size-independent bounds for the leakage of all ε-differentially private mechanisms. Moreover, we perform an empirical evaluation showing that the leakage of these channels almost matches our theoretical upper bounds, demonstrating the accuracy of the bounds. Finally, we show that the question of providing optimal upper bounds for the leakage of ε-differentially private mechanisms in terms of rational functions of ε is in fact decidable.
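As an illustration of leakage growing with domain size for ε-differentially private channels, here is a sketch using n-ary randomized response, a standard ε-DP mechanism (not necessarily the channel family the paper constructs), together with the standard min-entropy leakage formula for a uniform prior:

```python
import math

def randomized_response(eps, n):
    """n-ary randomized response: keep the true value with probability
    e^eps / (e^eps + n - 1), else report a uniformly chosen other value.
    This channel satisfies eps-differential privacy."""
    keep = math.exp(eps) / (math.exp(eps) + n - 1)
    swap = (1 - keep) / (n - 1)
    return [[keep if x == y else swap for y in range(n)] for x in range(n)]

def min_entropy_leakage(channel):
    """Min-entropy leakage under a uniform prior: log2(sum_y max_x p(y|x))."""
    return math.log2(sum(max(col) for col in zip(*channel)))

eps = math.log(3)   # fix epsilon so that e^eps = 3
small = min_entropy_leakage(randomized_response(eps, 4))    # 1.0 bit
large = min_entropy_leakage(randomized_response(eps, 16))   # ~1.42 bits
```

With ε held fixed, enlarging the input domain from 4 to 16 values increases the leakage, matching the abstract's claim that no domain-size-independent bound exists.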
Measuring anonymity with relative entropy
In Proceedings of the 4th International Workshop on Formal Aspects in Security and Trust, volume 4691 of LNCS, 2007
Cited by 20 (2 self)
Abstract. Anonymity is the property of keeping secret the identity of users performing a certain action. Anonymity protocols often use random mechanisms which can be described probabilistically. In this paper, we propose a probabilistic process calculus to describe protocols for ensuring anonymity, and we use the notion of relative entropy from information theory to measure the degree of anonymity these protocols can guarantee. Furthermore, we prove that the operators in the probabilistic process calculus are non-expansive with respect to this measure. We illustrate our approach with the example of the Dining Cryptographers Problem.
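One way to picture relative entropy as an anonymity measure is the divergence of the observer's posterior from the uniform distribution that perfect anonymity would yield. This is a simplified illustration with invented numbers; the paper's calculus-based definition is more refined:

```python
import math

def kl_divergence(p, q):
    """Relative entropy D(p || q) in bits."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Observer's posterior over which of four users sent a message, compared
# with the uniform distribution that perfect anonymity would give.
uniform = [0.25] * 4
perfect = [0.25, 0.25, 0.25, 0.25]   # protocol reveals nothing
leaky = [0.7, 0.1, 0.1, 0.1]         # protocol points at user 0

d_perfect = kl_divergence(perfect, uniform)   # 0.0
d_leaky = kl_divergence(leaky, uniform)       # ~0.64 bits of degradation
```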
Probability of Error in Information-Hiding Protocols
In Proceedings of the 20th IEEE Computer Security Foundations Symposium (CSF20), IEEE Computer Society
Cited by 17 (5 self)
There are many bounds known in the literature for the Bayes risk. One of these is the equivocation bound, due to Rényi [22], which states that the probability of error is bounded by the conditional entropy of the channel’s input given the output. Later, Hellman and Raviv improved this bound by half [13]. Recently, Santhi and Vardy have proposed a new bound that depends exponentially on the (opposite of the) conditional entropy, and which considerably improves the Hellman–Raviv bound in the case of multi ...
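The two bounds the abstract compares can be checked numerically on a toy channel: P_e ≤ H(X|Y)/2 is the Hellman–Raviv bound and P_e ≤ 1 − 2^(−H(X|Y)) is the Santhi–Vardy bound. The prior and channel below are invented examples:

```python
import math

def bayes_error(prior, channel):
    """Probability of error of the MAP rule: 1 - sum_y max_x p(x) p(y|x)."""
    ys = {y for row in channel.values() for y in row}
    return 1.0 - sum(max(prior[x] * channel[x].get(y, 0.0) for x in prior)
                     for y in ys)

def cond_entropy(prior, channel):
    """Conditional entropy H(X|Y) in bits."""
    ys = {y for row in channel.values() for y in row}
    h = 0.0
    for y in ys:
        p_y = sum(prior[x] * channel[x].get(y, 0.0) for x in prior)
        for x in prior:
            p_xy = prior[x] * channel[x].get(y, 0.0)
            if p_xy > 0:
                h -= p_xy * math.log2(p_xy / p_y)
    return h

# Invented two-secret, two-observation channel.
prior = {"a": 0.5, "b": 0.5}
channel = {"a": {0: 0.8, 1: 0.2}, "b": {0: 0.3, 1: 0.7}}

pe = bayes_error(prior, channel)                          # 0.25
hellman_raviv = cond_entropy(prior, channel) / 2          # ~0.40
santhi_vardy = 1 - 2 ** (-cond_entropy(prior, channel))   # ~0.43
```

Both bounds hold for this channel; which one is tighter depends on the instance, and the abstract's claim concerns the multi-hypothesis regime.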
Statistical Measurement of Information Leakage
Cited by 16 (4 self)
Abstract. Information theory provides a range of useful methods to analyse probability distributions, and these techniques have been successfully applied to measure information flow and the loss of anonymity in secure systems. However, previous work has tended to assume that the exact probabilities of every action are known, or that the system is nondeterministic. In this paper, we show that measures of information leakage based on mutual information and capacity can be calculated automatically from trial runs of a system alone. We derive a confidence interval for this estimate based on the number of possible inputs, observations, and samples. We have developed a tool to perform this analysis automatically, and we demonstrate our method by analysing a Mixminion anonymous remailer node.
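The core of estimating leakage from trial runs alone is the plug-in (empirical) estimate of mutual information; here is a sketch on an invented toy system (the paper additionally derives confidence intervals, which this sketch omits):

```python
import math
from collections import Counter

def plug_in_mi(samples):
    """Plug-in estimate of I(X;Y) in bits from a list of (x, y) trial runs:
    empirical joint and marginal frequencies substituted into the MI formula."""
    n = len(samples)
    joint = Counter(samples)
    px = Counter(x for x, _ in samples)
    py = Counter(y for _, y in samples)
    # p(x,y)/(p(x)p(y)) = (c/n) / ((px/n)(py/n)) = c*n / (px*py)
    return sum((c / n) * math.log2(c * n / (px[x] * py[y]))
               for (x, y), c in joint.items())

# Trial runs of a toy system whose output copies a uniform 1-bit input,
# so the true mutual information is 1 bit.
samples = [(0, 0), (1, 1)] * 500
mi = plug_in_mi(samples)   # 1.0
```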