Results 1 – 10 of 53
Non-Interactive Verifiable Computing: Outsourcing Computation to Untrusted Workers
, 2009
Abstract

Cited by 221 (13 self)
Verifiable Computation enables a computationally weak client to “outsource” the computation of a function F on various inputs x1,...,xk to one or more workers. The workers return the result of the function evaluation, e.g., yi = F(xi), as well as a proof that the computation of F was carried out correctly on the given value xi. The verification of the proof should require substantially less computational effort than computing F(xi) from scratch. We present a protocol that allows the worker to return a computationally sound, non-interactive proof that can be verified in O(m) time, where m is the bit-length of the output of F. The protocol requires a one-time preprocessing stage by the client which takes O(C) time, where C is the size of the smallest Boolean circuit computing F. Our scheme also provides input and output privacy for the client, meaning that the workers do not learn any information about the xi or yi values.
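The core efficiency requirement stated here, that verifying a result must be much cheaper than recomputing it, can be illustrated by a classical example that is not the paper's scheme: Freivalds' probabilistic check for matrix multiplication, which verifies an n×n product in O(n²) time per round instead of the O(n³) cost of recomputing it. The function name and toy setting below are illustrative assumptions, not taken from the paper.

```python
import random

def freivalds_verify(A, B, C, rounds=20):
    """Probabilistically check that A @ B == C in O(n^2) time per round,
    versus O(n^3) to recompute the product from scratch."""
    n = len(A)
    for _ in range(rounds):
        # Random 0/1 challenge vector r.
        r = [random.randint(0, 1) for _ in range(n)]
        # Compute A(Br) and Cr as cheap matrix-vector products.
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False  # caught a wrong answer
    # Correct with probability >= 1 - 2^(-rounds) if C is wrong.
    return True
```

A wrong product survives each round with probability at most 1/2, so 20 rounds leave roughly a one-in-a-million chance of accepting a bad answer.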
Delegatable Pseudorandom Functions and Applications
Abstract

Cited by 55 (0 self)
We put forth the problem of delegating the evaluation of a pseudorandom function (PRF) to an untrusted proxy. A delegatable PRF, or DPRF for short, is a new primitive that enables a proxy to evaluate a PRF on a strict subset of its domain using a trapdoor derived from the DPRF secret key. PRF delegation is policy-based: the trapdoor is constructed with respect to a certain policy that determines the subset of input values which the proxy is allowed to compute. Interesting DPRFs should achieve low-bandwidth delegation: enabling the proxy to compute the PRF values that conform to the policy should be more efficient than simply providing the proxy with the sequence of all such values precomputed. The main challenge in constructing DPRFs is in maintaining the pseudorandomness of unknown values in the face of an attacker that adaptively controls proxy servers. A DPRF may optionally be equipped with an additional property we call policy privacy, where any two delegation predicates remain indistinguishable in the view of a DPRF-querying proxy: achieving this raises new design challenges, as policy privacy and efficiency are seemingly conflicting goals. For the important class of policies described as (1-dimensional) ranges, we devise two DPRF constructions and rigorously prove their security. Built upon the well-known tree-based GGM PRF family [15], our constructions are generic and feature delegation size only logarithmic in the number of values conforming to the policy predicate. At only a constant-factor efficiency reduction, we show that our second construction is also policy private. As we finally describe, their new security and efficiency properties render our delegated PRF schemes particularly useful in numerous security applications, including RFID, symmetric searchable encryption, and broadcast encryption.
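A minimal sketch of the tree-based GGM idea these constructions build on: each input bit selects the left or right output of a length-doubling PRG applied along a root-to-leaf path, and handing out a single internal node's value delegates exactly the leaves below it, which is why aligned ranges delegate with logarithmically many nodes. SHA-256 stands in for the PRG purely for illustration, the function names are hypothetical, and this omits the paper's range-decomposition and policy-privacy machinery.

```python
import hashlib

def prg(seed: bytes):
    """Length-doubling PRG, modeled here with SHA-256 (illustrative only)."""
    return (hashlib.sha256(seed + b'\x00').digest(),
            hashlib.sha256(seed + b'\x01').digest())

def ggm_eval(key: bytes, bits: str) -> bytes:
    """GGM PRF: walk the binary tree, branching left/right per input bit."""
    node = key
    for b in bits:
        left, right = prg(node)
        node = left if b == '0' else right
    return node

def delegate(key: bytes, prefix: str) -> bytes:
    """Trapdoor for the subtree of inputs starting with `prefix`:
    one inner-node value lets the proxy evaluate every leaf below it."""
    return ggm_eval(key, prefix)

def proxy_eval(trapdoor: bytes, prefix: str, bits: str) -> bytes:
    """Proxy-side evaluation using only the trapdoor, never the root key."""
    assert bits.startswith(prefix)
    return ggm_eval(trapdoor, bits[len(prefix):])
```

The proxy reproduces exactly the PRF values under the delegated prefix; values outside the subtree remain pseudorandom to it, which is the property the paper must preserve against adaptive proxies.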
Verifiable delegation of computation over large datasets
 In Proceedings of the 31st annual conference on Advances in cryptology, CRYPTO’11
, 2011
Abstract

Cited by 46 (4 self)
We study the problem of computing on large datasets that are stored on an untrusted server. We follow the approach of amortized verifiable computation introduced by Gennaro, Gentry, and Parno at CRYPTO 2010. We present the first practical verifiable computation scheme for high-degree polynomial functions. Such functions can be used, for example, to make predictions based on polynomials fitted to a large number of sample points in an experiment. In addition to the many non-cryptographic applications of delegating high-degree polynomials, we use our verifiable computation scheme to obtain new solutions for verifiable keyword search and proofs of retrievability. Our constructions are based on the DDH assumption and its variants, and achieve adaptive security, which was left as an open problem by Gennaro et al. (albeit for general functionalities). Our second result is a primitive which we call a verifiable database (VDB). Here, a weak client outsources a large table to an untrusted server, and makes retrieval and update queries. For each query, the server provides a response and a proof that the response was computed correctly. The goal is to minimize the resources required by the client. This is made particularly challenging if the number of update queries is unbounded. We present a VDB scheme based on the hardness of the subgroup
From secrecy to soundness: efficient verification via secure computation
 In Proceedings of the 37th international colloquium conference on Automata, languages and programming
, 2010
Abstract

Cited by 46 (4 self)
Abstract. We study the problem of verifiable computation (VC), in which a computationally weak client wishes to delegate the computation of a function f on an input x to a computationally strong but untrusted server. We present new general approaches for constructing VC protocols, as well as for solving the related problems of program checking and self-correcting. The new approaches reduce the task of verifiable computation to suitable variants of secure multiparty computation (MPC) protocols. In particular, we show how to efficiently convert the secrecy property of MPC protocols into soundness of a VC protocol via the use of a message authentication code (MAC). The new connections allow us to apply results from the area of MPC towards simplifying, unifying, and improving over previous results on VC and related problems. In particular, we obtain the following concrete applications: (1) the first VC protocols for arithmetic computations which make only black-box use of the underlying field or ring; (2) a non-interactive VC protocol for Boolean circuits in the preprocessing model, conceptually simplifying and improving the online complexity of a recent protocol of Gennaro et al. (Cryptology ePrint Archive: Report 2009/547); (3) NC0 self-correctors for complete languages in the complexity class NC1 and various logspace classes, strengthening previous AC0 correctors of Goldwasser et al. (STOC 2008).
On robust combiners for oblivious transfer and other primitives
 In Proc. Eurocrypt ’05
, 2005
Abstract

Cited by 37 (1 self)
At the mouth of two witnesses... shall the matter be established. (Deuteronomy, Chapter 19)
On the impossibility of efficiently combining collision resistant hash functions
 In Proc. Crypto ’06
, 2006
Abstract

Cited by 21 (0 self)
Abstract. Let H1, H2 be two hash functions. We wish to construct a new hash function H that is collision resistant if at least one of H1 or H2 is collision resistant. Concatenating the outputs of H1 and H2 clearly works, but at the cost of doubling the hash output size. We ask whether a better construction exists, namely, can we hedge our bets without doubling the size of the output? We take a step towards answering this question in the negative: we show that any secure construction that evaluates each hash function once cannot output fewer bits than simply concatenating the given functions.
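The concatenation combiner the abstract refers to is simple to state in code: a collision for the combined function is simultaneously a collision for both underlying hashes, so the combiner is collision resistant if either one is, at the cost of a doubled (here 64-byte) output. Pairing SHA-256 with SHA3-256 is an illustrative choice of H1 and H2, not one made by the paper.

```python
import hashlib

def combined_hash(msg: bytes) -> bytes:
    """Concatenation combiner C(x) = H1(x) || H2(x): collision resistant
    as long as either SHA-256 or SHA3-256 is, but twice as long."""
    return hashlib.sha256(msg).digest() + hashlib.sha3_256(msg).digest()
```

The paper's negative result says this doubling is essentially unavoidable for any black-box construction that evaluates each hash once.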
Privacy-Preserving Computation and Verification of Aggregate Queries on Outsourced Databases
Abstract

Cited by 19 (0 self)
Outsourced databases provide a solution for data owners who want to delegate the task of answering database queries to third-party service providers. However, distrustful users may desire a means of verifying the integrity of responses to their database queries. Simultaneously, for privacy or security reasons, the data owner may want to keep the database hidden from service providers. This security property is particularly relevant for aggregate databases, where data is sensitive, and results should only be revealed for queries that are aggregate in nature. In such a scenario, using simple signature schemes for verification does not suffice. We present a solution in which service providers can collaboratively compute aggregate queries without gaining knowledge of intermediate results, and users can verify the results of their queries, relying only on their trust of the data owner. Our protocols are secure under reasonable cryptographic assumptions, and are robust to collusion between k dishonest service providers.
On robust combiners for private information retrieval and other primitives
 CRYPTO
, 2006
Abstract

Cited by 15 (2 self)
Abstract. Let A and B denote cryptographic primitives. A (k, m)-robust A-to-B combiner is a construction which takes m implementations of primitive A as input and yields an implementation of primitive B that is guaranteed to be secure as long as at least k input implementations are secure. The main motivation for such constructions is tolerance against wrong assumptions on which the security of implementations is based. For example, a (1,2)-robust A-to-B combiner yields a secure implementation of B even if an assumption underlying one of the input implementations of A turns out to be wrong. In this work we study robust combiners for private information retrieval (PIR), oblivious transfer (OT), and bit commitment (BC). We propose a (1,2)-robust PIR-to-PIR combiner, and describe various optimizations based on properties of existing PIR protocols. The existence of simple PIR-to-PIR combiners is somewhat surprising, since OT, a very closely related primitive, seems difficult to combine (Harnik et al., Eurocrypt ’05). Furthermore, we present (1,2)-robust PIR-to-OT and PIR-to-BC combiners. To the best of our knowledge these are the first constructions of A-to-B combiners with A ≠ B. Such combiners, in addition to being interesting in their own right, offer insights into relationships between cryptographic primitives. In particular, our PIR-to-OT combiner together with the impossibility result for OT-combiners of Harnik et al. rules out certain types of reductions of PIR to OT. Finally, we suggest a more fine-grained approach to the construction of robust combiners, which may lead to more efficient and practical combiners in many scenarios.
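One combiner that is easy to sketch, though it is not among the paper's PIR-based constructions, is the classical cascade (1,2)-robust combiner for symmetric encryption: encrypting under scheme 1 and then under scheme 2 with independent keys keeps the message secret as long as either scheme is secure. The toy hash-based stream cipher below is a hypothetical stand-in for the two candidate schemes and is for illustration only.

```python
import hashlib

def xor_stream(key: bytes, label: bytes, msg: bytes) -> bytes:
    """Toy stream cipher (stand-in for a real scheme; not for production):
    XOR the message with a SHA-256-derived keystream."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(msg):
        stream += hashlib.sha256(key + label + counter.to_bytes(8, 'big')).digest()
        counter += 1
    return bytes(m ^ s for m, s in zip(msg, stream))

def cascade_encrypt(k1: bytes, k2: bytes, msg: bytes) -> bytes:
    # (1,2)-robust combiner: the ciphertext hides msg if either layer does.
    return xor_stream(k2, b'outer', xor_stream(k1, b'inner', msg))

def cascade_decrypt(k1: bytes, k2: bytes, ct: bytes) -> bytes:
    # XOR stream encryption is its own inverse, so peel layers in reverse.
    return xor_stream(k1, b'inner', xor_stream(k2, b'outer', ct))
```

As the paper's context makes clear, not every primitive combines this easily: the analogous question for OT runs into the impossibility results of Harnik et al.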
Optimal authenticated data structures with multilinear forms
 PAIRING 2010. LNCS
, 2010
Abstract

Cited by 12 (3 self)
Cloud computing and cloud storage are becoming increasingly prevalent. In this paradigm, clients outsource their data and computations to third-party service providers. Data integrity in the cloud therefore becomes an important factor for the functionality of these web services. Authenticated data structures, implemented with various cryptographic primitives, have been widely studied as a means of providing efficient solutions to data integrity problems (e.g., Merkle trees). In this paper, we introduce a new authenticated dictionary data structure that employs multilinear forms, a cryptographic primitive proposed by Silverberg and Boneh in 2003 [10], the construction of which, however, remains an open problem to date. Our authenticated dictionary is optimal: it does not add any extra asymptotic cost to the plain dictionary data structure, yielding proofs of constant size, i.e., asymptotically equal to the size of the answer, while keeping the other relevant complexities logarithmic. In contrast, solutions based on cryptographic hashing (e.g., Merkle trees) require proofs of logarithmic size [40]. Because multilinear forms are not yet known to exist, our result can be viewed from a different angle: if one could prove that optimal authenticated dictionaries cannot exist in the computational model, irrespective of cryptographic primitives, then our solution would imply that cryptographically interesting multilinear form generators cannot exist either (i.e., it can be viewed as a reduction). Thus, we provide an alternative avenue towards proving the non-existence of multilinear form generators, in the context of general lower bounds for authenticated data structures [40] and for memory checking [18], a model similar to the authenticated data structures model.
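The logarithmic-proof baseline the abstract contrasts against can be made concrete with a small Merkle-tree sketch: a membership proof consists of one sibling hash per level, so proofs grow as O(log n), versus the constant-size proofs of the paper's multilinear-form dictionary. Function names here are illustrative, not from the paper.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Build a Merkle tree; returns the list of levels, leaves first."""
    level = [h(x) for x in leaves]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:
            level = level + [level[-1]]  # duplicate last node if odd
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def prove(levels, index):
    """Membership proof: one sibling per level -> O(log n) hashes."""
    proof = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        # Record whether our node is the right child, plus its sibling.
        proof.append((index % 2, level[index ^ 1]))
        index //= 2
    return proof

def verify(root, leaf, proof):
    """Recompute the root from the leaf and the sibling path."""
    node = h(leaf)
    for is_right, sibling in proof:
        node = h(sibling + node) if is_right else h(node + sibling)
    return node == root
```

Each query answer ships with log n sibling hashes; the paper's point is that, given multilinear forms, this per-answer overhead could be reduced to a constant.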
Twin clouds: Secure cloud computing with low latency
 In Proceedings of the IFIP International Conference on Communications and Multimedia Security (CMS)
, 2011
Abstract

Cited by 11 (1 self)
Abstract. Cloud computing promises a cost-effective enabling technology for outsourcing storage and massively parallel computations. However, existing approaches for provably secure outsourcing of data and arbitrary computations are based either on tamper-proof hardware or on fully homomorphic encryption. The former approaches are not scalable, while the latter are currently not efficient enough to be used in practice. We propose an architecture and protocols that accumulate slow secure computations over time and provide the possibility to query them in parallel on demand, by leveraging the benefits of cloud computing. In our approach, the user communicates with a resource-constrained Trusted Cloud (either a private cloud or one built from multiple secure hardware modules), which encrypts algorithms and data to be stored and later queried in the powerful but untrusted Commodity Cloud. We split our protocols so that the Trusted Cloud performs security-critical precomputations in the setup phase, while the Commodity Cloud computes the time-critical query in parallel under encryption in the query phase.