Results 1–10 of 63
Universally composable security: A new paradigm for cryptographic protocols
, 2013
Cited by 842 (43 self)
Abstract:
We present a general framework for representing cryptographic protocols and analyzing their security. The framework allows specifying the security requirements of practically any cryptographic task in a unified and systematic way. Furthermore, in this framework the security of protocols is preserved under a general protocol composition operation, called universal composition. The proposed framework with its security-preserving composition operation allows for modular design and analysis of complex cryptographic protocols from relatively simple building blocks. Moreover, within this framework, protocols are guaranteed to maintain their security in any context, even in the presence of an unbounded number of arbitrary protocol instances that run concurrently in an adversarially controlled manner. This is a useful guarantee that allows arguing about the security of cryptographic protocols in complex and unpredictable environments such as modern communication networks.
Efficient unconditional oblivious transfer from almost any noisy channel
 Proceedings of the Fourth Conference on Security in Communication Networks (SCN) ’04, LNCS
, 2004
Cited by 35 (6 self)
Abstract:
Abstract. Oblivious transfer (OT) is a cryptographic primitive of central importance, in particular in two- and multi-party computation. There exist various protocols for different variants of OT, but any such realization from scratch can be broken in principle by at least one of the two involved parties if she has sufficient computing power—and the same even holds when the parties are connected by a quantum channel. We show that, on the other hand, if noise—which is inherently present in any physical communication channel—is taken into account, then OT can be realized in an unconditionally secure way for both parties, i.e., even against dishonest players with unlimited computing power. We give the exact condition under which a general noisy channel allows for realizing OT and show that only “trivial” channels, for which OT is obviously impossible to achieve, have to be excluded. Moreover, our realization of OT is efficient: for a security parameter α > 0—an upper bound on the probability that the protocol fails in any way—the required number of uses of the noisy channel is of order O(log(1/α)^{2+ε}) for any ε > 0.
Alpaca: extensible authorization for distributed services
 In 14th ACM Conference on Computer and Communications Security
, 2007
Cited by 30 (3 self)
Abstract:
Traditional Public Key Infrastructures (PKI) have not lived up to their promise because there are too many ways to define PKIs, too many cryptographic primitives to build them with, and too many administrative domains with incompatible roots of trust. Alpaca is an authentication and authorization framework that embraces PKI diversity by enabling one PKI to “plug in” another PKI’s credentials and cryptographic algorithms, allowing users of the latter to authenticate to services that rely on the former using their existing, unmodified certificates. Alpaca builds on Proof-Carrying Authorization (PCA) [8], expressing a credential as an explicit proof of a logical claim. Alpaca generalizes PCA to express not only delegation policies but also the cryptographic primitives, credential formats, and namespace structure needed to use foreign credentials directly. To achieve this goal, Alpaca introduces a method of creating and naming new principals that behave according to arbitrary rules, a modular approach to logical axioms, and a domain-specific language specialized for reasoning about authentication. We have implemented Alpaca as a Python module that assists applications in generating proofs (e.g., in a client requesting access to a resource) and in verifying those proofs via a compact 800-line TCB (e.g., in a server providing that resource). We present examples demonstrating Alpaca’s extensibility in scenarios involving inter-organization PKI interoperability and secure remote PKI upgrade.
Universal Codes as a Basis for Time Series Testing
 Statistical Methodology
, 2006
Cited by 16 (7 self)
Abstract:
We suggest a new approach to hypothesis testing for ergodic and stationary processes. In contrast to standard methods, the suggested approach makes it possible to build tests based on any lossless data compression method, even when the distribution law of the codeword lengths is unknown. We apply this approach to the following four problems: goodness-of-fit testing (or identity testing), testing for independence, testing of serial independence, and homogeneity testing, and suggest nonparametric statistical tests for these problems. It is worth noting that so-called archivers used in practice can serve as the compressors in the suggested tests.
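As a rough, hypothetical illustration of the compressor-based idea (zlib stands in here for any practical lossless "archiver"; the sample size and comparison are arbitrary choices of this sketch, not the paper's test statistics), data from a biased source compresses to a shorter codeword than near-uniform data, which is what such tests exploit:

```python
import random
import zlib

def compressed_len(bits: str) -> int:
    """Length in bytes of the zlib-compressed bit string."""
    return len(zlib.compress(bits.encode("ascii")))

random.seed(0)
n = 4000
# Null-hypothesis sample: i.i.d. uniform bits.
uniform = "".join(random.choice("01") for _ in range(n))
# Alternative: heavily biased bits (entropy well below 1 bit/symbol).
biased = "".join("1" if random.random() < 0.9 else "0" for _ in range(n))

# A compressor-based test rejects "uniform" when the codeword is
# noticeably shorter than what a uniform source would allow.
print(compressed_len(uniform) > compressed_len(biased))  # expect True
```

Note the test statistic only needs the codeword length, not any distributional knowledge about the compressor, which matches the abstract's point.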
New monotones and lower bounds in unconditional two-party computation
 In Advances in Cryptology — CRYPTO ’05
, 2005
Cited by 15 (4 self)
Abstract:
Abstract. Since bit and string oblivious transfer and commitment, two primitives of paramount importance in secure two- and multi-party computation, cannot be realized in an unconditionally secure way for both parties from scratch, reductions to weak information-theoretic primitives, as well as between different variants of the functionalities, are of great interest. In this context, we introduce three independent monotones—quantities that cannot be increased by any protocol—and use them to derive lower bounds on the possibility and efficiency of such reductions. An example is the transition between different versions of oblivious transfer, for which we also propose a new protocol that increases the number of messages the receiver can choose from at the price of reducing their length. Our scheme matches the new lower bound and is, therefore, optimal.
On the cost-ineffectiveness of redundancy in commercial P2P computing
 In Proc. 12th CCS
, 2005
Cited by 15 (0 self)
Abstract:
We present a game-theoretic model of the interactions between server and clients in a constrained family of commercial P2P computations (where clients are financially compensated for work). We study the cost of implementing redundant task allocation (redundancy, for short) as a means of preventing cheating. Under the assumption that clients are motivated solely by the desire to maximize expected profit, we prove that, within this framework, redundancy is cost-effective only when collusion among clients, including the Sybil attack, can be prevented. We show that in situations where this condition cannot be met, non-redundant task allocation is much less costly than redundancy.
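A back-of-the-envelope sketch of why collusion undermines redundancy (this toy model is my own illustration, not the paper's formal game): the server pays each of k clients per task and accepts only unanimous answers, so replication multiplies cost by k, while Sybil collusion drives its deterrence value to zero.

```python
def redundancy_outcome(p: float, k: int, colluding: bool) -> tuple:
    """Return (server cost per task, probability a single cheater is caught).

    Toy assumptions: the server pays p to each of k clients per task and
    accepts a result only when all k answers agree. An honest replica always
    disagrees with a fabricated answer, so one cheater among honest replicas
    is caught whenever k >= 2. Colluding clients (e.g. k Sybil identities of
    one operator) all submit the same wrong answer and are never caught by
    disagreement.
    """
    detection = 0.0 if colluding else (1.0 if k >= 2 else 0.0)
    return (k * p, detection)

# Without collusion, triple redundancy costs 3x but always catches a cheater.
print(redundancy_outcome(1.0, 3, colluding=False))  # (3.0, 1.0)
# With collusion, the server pays 3x for zero deterrence.
print(redundancy_outcome(1.0, 3, colluding=True))   # (3.0, 0.0)
```

Under these assumptions the server rationally prefers non-redundant allocation (cost p) whenever collusion cannot be excluded, mirroring the abstract's conclusion.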
A quantitative approach to reductions in secure computation
, 2003
Cited by 13 (0 self)
Abstract:
Secure computation is one of the most fundamental cryptographic tasks. It is known that all functions can be computed securely in the information-theoretic setting, given access to a black box for some complete function such as AND. However, without such a black box, not all functions can be securely computed. This gives rise to two types of functions, those that can be computed without a black box (“easy”) and those that cannot (“hard”). However, no further distinction among the hard functions is made. In this paper, we take a quantitative approach, associating with each function f the minimal number of calls to the black box that are required for securely computing f. Such an approach was taken before, mostly in an ad hoc manner, for specific functions f of interest. We propose a systematic study, towards a general characterization of the hierarchy according to the number of black-box calls. This approach leads to a better understanding of the inherent complexity for securely computing a given function f. Furthermore, minimizing the number of calls to the black box can lead to more efficient protocols when the calls to the black box are replaced by a secure protocol. We take a first step in this study by considering the two-party, honest-but-curious, information-theoretic case. For this setting, we provide a complete characterization for deterministic protocols. We explore the hierarchy for randomized protocols as well, giving upper and lower bounds, and comparing it to the deterministic hierarchy. We show that for every Boolean function the largest gap between randomized and deterministic protocols is at most exponential, and there are functions which exhibit such a gap.
The exact price for unconditionally secure asymmetric cryptography
 In Advances in Cryptology — EUROCRYPT ’04, Lecture Notes in Computer Science
, 2004
Cited by 12 (1 self)
Abstract:
Abstract. A completely insecure communication channel can only be transformed into an unconditionally secure channel if some information-theoretic primitive is given to start from. All previous approaches to realizing such authenticity and privacy from weak primitives were symmetric in the sense that security for both parties was achieved. We show that asymmetric information-theoretic security can, however, be obtained at a substantially lower price than two-way security—as in the computational-security setting, as the example of public-key cryptography demonstrates. In addition, we show that an unconditionally secure bidirectional channel can also be obtained under weaker conditions than previously known. One consequence of these results is that the assumption usually made in the context of quantum key distribution—that the two parties share a short key initially—is unnecessarily strong. Keywords. Information-theoretic security, authentication, information reconciliation, privacy amplification, quantum key agreement, reductions
From single-bit to multi-bit public-key encryption via non-malleable codes
 IACR Cryptology ePrint Archive
, 2014
Cited by 9 (5 self)
Abstract:
One approach towards basing public-key encryption schemes on weak and credible assumptions is to build “stronger” or more general schemes generically from “weaker” or more restricted schemes. One particular line of work in this context, which was initiated by Myers and Shelat (FOCS ’09) and continued by Hohenberger, Lewko, and Waters (Eurocrypt ’12), is to build a multi-bit chosen-ciphertext (CCA) secure public-key encryption scheme from a single-bit CCA-secure one. While their approaches achieve the desired goal, it is fair to say that the employed techniques are complicated and that the resulting ciphertext lengths are impractical. We propose a completely different and surprisingly simple approach to solving this problem. While it is well-known that encrypting each bit of a plaintext string independently is insecure—the resulting scheme is malleable—we show that applying a suitable non-malleable code (Dziembowski et al., ICS ’10) to the plaintext and subsequently encrypting the resulting codeword bit-by-bit results in a secure scheme. Our result is one of the first applications of non-malleable codes in a context other than memory tampering. The original notion of non-malleability is, however, not sufficient. We therefore prove that ...
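A minimal sketch of how the pieces compose: encode the plaintext, then encrypt the codeword one bit at a time with the single-bit scheme, and reverse both steps on decryption. Every component below is a placeholder of my own (the toy encode is NOT a non-malleable code, and the XOR "encryption" is not CCA-secure); only the encode-then-encrypt-bitwise wiring reflects the construction:

```python
import secrets

def encode(msg_bits, r):
    # Placeholder for a non-malleable code: mask the message with r, append r.
    # (This toy encoding is NOT non-malleable; it only shows the interface.)
    return [b ^ ri for b, ri in zip(msg_bits, r)] + r

def decode(codeword):
    half = len(codeword) // 2
    masked, r = codeword[:half], codeword[half:]
    return [b ^ ri for b, ri in zip(masked, r)]

def enc_bit(b):
    # Stand-in for a single-bit CCA-secure encryption: a one-time XOR pad.
    # The pad rides along only so this toy demo can decrypt.
    pad = secrets.randbits(1)
    return (pad ^ b, pad)

def dec_bit(ct):
    c, pad = ct
    return c ^ pad

def enc(msg_bits):
    """Encode the plaintext, then encrypt the codeword bit-by-bit."""
    r = [secrets.randbits(1) for _ in msg_bits]
    return [enc_bit(b) for b in encode(msg_bits, r)]

def dec(cts):
    """Decrypt each bit, then decode the recovered codeword."""
    return decode([dec_bit(ct) for ct in cts])

msg = [1, 0, 1, 1]
assert dec(enc(msg)) == msg  # round-trip through encode + bitwise encryption
```

In the actual scheme, the security argument rests entirely on the non-malleability of the code neutralizing bit-level mauling of the ciphertext, which the placeholder encode here does not provide.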
Decorrelation over Infinite Domains: the Encrypted CBC-MAC Case
, 2000
Cited by 7 (1 self)
Abstract:
Decorrelation theory has recently been proposed in order to address the security of block ciphers and other cryptographic primitives over a finite domain. We show here how to extend it to infinite domains, which can be used in the Message Authentication Code (MAC) case. In 1994, Bellare, Kilian and Rogaway proved that CBC-MAC is secure when the input length is fixed. This was extended by Petrank and Rackoff in 1997 to variable lengths. In this paper, we prove a result similar to that of Petrank and Rackoff by using decorrelation theory. This leads to a slightly improved result and a more compact proof. This result is meant to be a general proving technique for security, which can be compared to the approach announced by Maurer at CRYPTO ’99. Decorrelation theory has recently been introduced. (See references [17] to [22].) Its first aim was to address provable security in the area of block ciphers in order to prove their security against differential [7] and linear cryptanalysis...
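For reference, plain CBC-MAC chains one block-cipher call per message block: c_0 = 0^n, c_i = E_k(c_{i-1} XOR m_i), and the tag is the final c_t. A minimal sketch, with a hash-based stand-in for the block cipher (a real implementation would use, e.g., AES-128):

```python
import hashlib

BLOCK = 16  # block size in bytes

def E(key: bytes, block: bytes) -> bytes:
    # Stand-in PRF for a real block cipher; NOT a permutation and not
    # secure -- it only makes the chaining structure runnable.
    return hashlib.sha256(key + block).digest()[:BLOCK]

def cbc_mac(key: bytes, msg: bytes) -> bytes:
    """Plain CBC-MAC: c_0 = 0^n, c_i = E_k(c_{i-1} XOR m_i), tag = c_t.
    As the Bellare-Kilian-Rogaway result states, this is secure only
    for fixed-length messages."""
    assert len(msg) % BLOCK == 0, "message must be block-aligned"
    c = bytes(BLOCK)  # c_0 = all-zero block
    for i in range(0, len(msg), BLOCK):
        m_i = msg[i:i + BLOCK]
        c = E(key, bytes(a ^ b for a, b in zip(c, m_i)))
    return c

key = b"k" * BLOCK
tag = cbc_mac(key, b"A" * 32)
assert tag == cbc_mac(key, b"A" * 32)              # deterministic
assert tag != cbc_mac(key, b"A" * 16 + b"B" * 16)  # message change flips the tag
```

The variable-length weakness the paper addresses arises because tags of short messages can be replayed as chaining values inside longer ones; encrypting the final block (the "encrypted CBC-MAC" of the title) is one standard countermeasure.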