Results 1–10 of 29
Magic Functions
, 1999
Abstract

Cited by 77 (1 self)
We consider three apparently unrelated fundamental problems in distributed computing, cryptography and complexity theory and prove that they are essentially the same problem.
On the Existence of 3-Round Zero-Knowledge Protocols
 In Crypto '98, Springer LNCS 1462
, 1999
Abstract

Cited by 66 (2 self)
In this paper, we construct a 3-round zero-knowledge protocol for any NP language. Our protocol achieves weaker notions of zero-knowledge than black-box simulation zero-knowledge. Therefore, our result does not contradict the triviality result of Goldreich and Krawczyk [GoKr96], which shows that 3-round black-box simulation zero-knowledge exists only for BPP languages. Our main contribution is to provide a non-black-box simulation technique. Whether there exists such a simulation technique was a major open problem in the theory of zero-knowledge. Our simulation technique is based on a non-standard computational assumption related to the Diffie-Hellman problem, which was originally proposed by Damgård [Da91]. This assumption, which we call DA1, says that, given a randomly chosen instance of the discrete logarithm problem (p, q, g, g^a), it is infeasible to compute (B, X) such that X = B^a mod p without knowing the value b satisfying B = g^b mod p. Our protocol achieves different no...
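The relation in the DA1 assumption can be made concrete with a small numeric sketch. The parameters below (p, q, g, a, b) are toy, insecure values chosen purely for illustration; the assumption itself concerns cryptographically large groups:

```python
# Toy illustration of the objects in the DA1 assumption (insecure parameters).
p = 23                       # prime modulus
q = 11                       # prime order of the subgroup generated by g
g = 2                        # 2^11 = 2048 = 89*23 + 1, so g has order 11 mod 23
a = 7                        # secret exponent; the instance publishes only g^a
ga = pow(g, a, p)            # the public value g^a mod p

# The "easy" direction: anyone who knows some b can output a valid pair
# (B, X) with X = B^a mod p, by setting B = g^b and X = (g^a)^b.
b = 5
B = pow(g, b, p)             # B = g^b mod p
X = pow(ga, b, p)            # X = g^(ab) mod p, computed without knowing a

assert X == pow(B, a, p)     # the pair satisfies the relation X = B^a mod p
```

DA1 asserts the converse: any algorithm that outputs such a pair (B, X) given only (p, q, g, g^a) must, in effect, know the discrete logarithm b of B.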
The shortest vector in a lattice is hard to approximate to within some constant
 in Proc. 39th Symposium on Foundations of Computer Science
, 1998
Abstract

Cited by 61 (4 self)
Abstract. We show that approximating the shortest vector problem (in any ℓp norm) to within any constant factor less than the p-th root of 2 is hard for NP under reverse unfaithful random reductions with inverse polynomial error probability. In particular, approximating the shortest vector problem is not in RP (random polynomial time), unless NP equals RP. We also prove a proper NP-hardness result (i.e., hardness under deterministic many-one reductions) under a reasonable number-theoretic conjecture on the distribution of square-free smooth numbers. As part of our proof, we give an alternative construction of Ajtai's constructive variant of Sauer's lemma that greatly simplifies Ajtai's original proof. Key words. NP-hardness, shortest vector problem, point lattices, geometry of numbers, sphere packing
The (True) Complexity of Statistical Zero Knowledge (Extended Abstract)
 Proceedings of the 22nd Annual ACM Symposium on the Theory of Computing, ACM
, 1990
Abstract

Cited by 45 (18 self)
Mihir Bellare, Silvio Micali, Rafail Ostrovsky. MIT Laboratory for Computer Science, 545 Technology Square, Cambridge, MA 02139. Abstract: Statistical zero-knowledge is a very strong privacy constraint which does not depend on computational limitations. In this paper we show that, given a complexity assumption, a much weaker condition suffices to attain statistical zero-knowledge. As a result we are able to simplify statistical zero-knowledge and to better characterize, on many counts, the class of languages that possess statistical zero-knowledge proofs. 1 Introduction. An interactive proof involves two parties, a prover and a verifier, who talk back and forth. The prover, who is computationally unbounded, tries to convince the probabilistic polynomial-time verifier that a given theorem is true. A zero-knowledge proof is an interactive proof with an additional privacy constraint: the verifier does not learn why the theorem is true [11]. That is, whatever the polynomial-time verif...
Everything in NP can be argued in perfect zero-knowledge in a bounded number of rounds
, 1989
Abstract

Cited by 35 (6 self)
A perfect zero-knowledge interactive protocol allows a prover to convince a verifier of the validity of a statement in a way that does not give the verifier any additional information [GMR, GMW]. Such protocols take place by the exchange of messages back and forth between the prover and the verifier. An important measure of efficiency for these protocols is the number of rounds in the interaction. In previously known perfect zero-knowledge protocols for statements concerning NP-complete problems [BCC], at least k rounds were necessary in order to prevent one party from having a probability of undetected cheating greater than 2^{-k}. In this paper, we give the first perfect zero-knowledge protocol that offers arbitrarily high security for any statement in NP with a constant number of rounds (under the assumption that it is possible to find a prime p with known factorization of p-1 such that it is infeasible to compute discrete logarithms modulo p, even for someone who knows the factors o...
Luby-Rackoff backwards: Increasing security by making block ciphers non-invertible
 ADVANCES IN CRYPTOLOGY – EUROCRYPT '98 PROCEEDINGS
, 1998
Abstract

Cited by 27 (3 self)
We argue that the invertibility of a block cipher can reduce the security of schemes that use it, and a better starting point for scheme design is the non-invertible analog of a block cipher, that is, a pseudorandom function (PRF). Since a block cipher may be viewed as a pseudorandom permutation (PRP), we are led to investigate the reverse of the problem studied by Luby and Rackoff, and ask: "how can one transform a PRP into a PRF in as security-preserving a way as possible?" The solution we propose is data-dependent re-keying. As an illustrative special case, let E: {0,1}^n × {0,1}^n → {0,1}^n be the block cipher. Then we can construct the PRF F from the PRP E by setting F(k, x) = E(E(k, x), x). We generalize this to allow for arbitrary block and key lengths, and to improve efficiency. We prove strong quantitative bounds on the value of data-dependent re-keying in the Shannon model of an ideal cipher, and take some initial steps towards an analysis in the standard model.
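The special case F(k, x) = E(E(k, x), x) is easy to sketch in code. Below, a toy 3-round Feistel network stands in for the block cipher E; the round function and the 32-bit sizes are hypothetical stand-ins for illustration, not the paper's construction:

```python
import hashlib

N = 32  # toy block/key size in bits; real ciphers use much larger blocks

def _round(k: int, half: int, r: int) -> int:
    # Keyed round function: a hash of (key, round index, half-block),
    # truncated to N/2 bits. Purely illustrative.
    h = hashlib.sha256(f"{k}:{r}:{half}".encode()).digest()
    return int.from_bytes(h[:4], "big") & ((1 << (N // 2)) - 1)

def E(k: int, x: int) -> int:
    # A toy N-bit keyed permutation (3-round Feistel) standing in for the PRP.
    L, R = x >> (N // 2), x & ((1 << (N // 2)) - 1)
    for r in range(3):
        L, R = R, L ^ _round(k, R, r)
    return (L << (N // 2)) | R

def F(k: int, x: int) -> int:
    # Data-dependent re-keying: the outer encryption uses E(k, x) as its key,
    # so the effective key varies with the data, and F need not be invertible
    # even though E is a permutation for every fixed key.
    return E(E(k, x), x)
```

For every fixed k, E(k, ·) is a permutation (each Feistel round is invertible), while F composes two applications of E under a data-dependent key.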
A. M. Saleh, Optical Networking: Past, Present, and Future
 IEEE J. Lightwave Tech.
Abstract

Cited by 23 (0 self)
Abstract—Over the past 25 years, networks have evolved from being relatively static with fairly homogeneous traffic to being more configurable and carrying a heterogeneous array of services. As the applications are ultimately the driver of network evolution, the paper begins with a brief history of circuit, packet, and wave services, along with the development of the corresponding transport layers. The discussion then moves to the evolution of network-node architecture, with an emphasis on the optical-electrical-optical and optical-bypass paradigms. Scalability and cost-effectiveness in meeting network demands are two key factors in the discussion. The evolution of networking equipment, along with the development of the optical control plane, has facilitated a configurable optical layer. The enabling technologies, along with their ramifications, are discussed. Finally, the paper speculates on how capacity might evolve in the future, to handle the undoubtedly new services that are on the horizon. Index Terms—Configurability, dynamic networks, network evolution, network services, network transport layers, optical bypass, optical network elements, optical networking.
Interactive Hashing Simplifies Zero-Knowledge Protocol Design (Extended Abstract)
 Proc. of Eurocrypt '93
, 1998
Abstract

Cited by 18 (6 self)
Often the core difficulty in designing zero-knowledge protocols arises from having to consider every possible cheating verifier trying to extract additional information.
Monomial Representations for Gröbner Bases Computations
 PROCEEDINGS OF ISSAC 1998, ACM PRESS
, 1998
Abstract

Cited by 16 (1 self)
Monomial representations and operations for Gröbner bases computations are investigated from an implementation point of view. The technique of vectorized monomial operations is introduced and it is shown how it expedites computations of Gröbner bases. Furthermore, a rank-based monomial representation and comparison technique is examined and it is concluded that this technique does not yield an additional speedup over vectorized comparisons. Extensive benchmark tests with the Computer Algebra System Singular are used to evaluate these concepts.
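The vectorized-operation idea can be sketched as follows: pack the exponent vector of a monomial into fixed-width fields of a single integer, so that monomial multiplication becomes one word addition and a lexicographic comparison becomes one integer comparison. The layout below (8-bit fields, 4 variables) is a hypothetical packing for illustration; Singular's actual representation differs in detail:

```python
BITS = 8      # bits per exponent field; field overflow is assumed not to occur
NVARS = 4     # number of variables x0 > x1 > x2 > x3

def pack(exps):
    # Pack [e0, ..., e_{NVARS-1}] into one word, e0 in the highest field.
    assert len(exps) == NVARS
    w = 0
    for e in exps:
        w = (w << BITS) | e
    return w

def mul(w1, w2):
    # Monomial multiplication: all exponents are added by ONE word addition,
    # valid as long as every per-field sum stays below 2**BITS.
    return w1 + w2

def cmp_lex(w1, w2):
    # Lexicographic comparison: one integer comparison suffices, because
    # earlier (larger) variables occupy the higher-order fields.
    return (w1 > w2) - (w1 < w2)

# x0^2 * x1  times  x0 * x1^3  equals  x0^3 * x1^4
m1 = pack([2, 1, 0, 0])
m2 = pack([1, 3, 0, 0])
assert mul(m1, m2) == pack([3, 4, 0, 0])
assert cmp_lex(m1, m2) == 1   # x0^2*x1 > x0*x1^3 in lex order
```

With a machine-word representation, one addition or comparison touches all exponent fields at once instead of looping over them one by one.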
Design and performance of highspeed communication systems over timevarying radio channels
 ELEC. ENGIN. COMPUT. SCIENCE
, 1994