Results 1–10 of 62
Simple Extractors for All Min-Entropies and a New Pseudo-Random Generator
Abstract

Cited by 111 (27 self)
We present a simple, self-contained extractor construction that produces good extractors for all min-entropies (min-entropy measures the amount of randomness contained in a weak random source). Our construction is algebraic and builds on a new polynomial-based approach introduced by Ta-Shma, Zuckerman, and Safra [37]. Using our improvements, we obtain, for example, an extractor with output length m = k^{1−δ} and seed length O(log n). This matches the parameters of Trevisan's breakthrough result [38] and additionally achieves those parameters for small min-entropies k. Extending [38] to small k has been the focus of a sequence of recent works [15, 26, 35]. Our construction gives a much simpler and more direct solution to this problem. Applying similar ideas to the problem of building pseudorandom generators, we obtain a new pseudorandom generator construction that is not based on the NW generator [21], and turns worst-case hardness directly into pseudorandomness. The parameters of this generator match those in [16, 33] and in particular are strong enough to obtain a new proof that P = BPP if E requires exponential size circuits. Essentially the same construction yields a hitting set generator with optimal seed length that outputs s^{Ω(1)} bits when given a function that requires circuits of size s (for any s). This implies a hardness versus randomness tradeoff for RP and BPP that is optimal (up to polynomial factors), solving an open problem raised by [14]. Our generators can also be used to derandomize AM in a way that improves and extends the results of [4, 18, 20].
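The two central objects in this abstract, min-entropy and a seeded extractor, can be illustrated with a minimal sketch (Python assumed; the linear-hash extractor below is a generic illustration of the interface, not the paper's algebraic construction, and the function names are hypothetical):

```python
import math

def min_entropy(dist):
    """H_inf(X) = -log2(max_x Pr[X = x]); dist maps outcome -> probability."""
    return -math.log2(max(dist.values()))

def extract(x_bits, seed_rows):
    """Toy seeded extractor: output bit i is the GF(2) inner product of
    seed row i (chosen by the seed) with the n-bit source sample x_bits."""
    return [sum(r * b for r, b in zip(row, x_bits)) % 2 for row in seed_rows]

# A source uniform over two of its possible outcomes has min-entropy 1 bit.
flat = {"00": 0.5, "11": 0.5}
print(min_entropy(flat))                            # 1.0
print(extract([1, 0, 1], [[1, 1, 0], [0, 0, 1]]))   # [1, 1]
```

The seed here is the random matrix of parity rows; real extractors achieve the same output quality with a seed of only O(log n) bits, which is the parameter the abstract highlights.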
Graph Nonisomorphism Has Subexponential Size Proofs Unless The Polynomial-Time Hierarchy Collapses
 SIAM Journal on Computing
, 1998
Abstract

Cited by 110 (4 self)
We establish hardness versus randomness tradeoffs for a broad class of randomized procedures. In particular, we create efficient nondeterministic simulations of bounded round Arthur-Merlin games using a language in exponential time that cannot be decided by polynomial size oracle circuits with access to satisfiability. We show that every language with a bounded round Arthur-Merlin game has subexponential size membership proofs for infinitely many input lengths unless exponential time coincides with the third level of the polynomial-time hierarchy (and hence the polynomial-time hierarchy collapses). This provides the first strong evidence that graph nonisomorphism has subexponential size proofs. We set up a general framework for derandomization which encompasses more than the traditional model of randomized computation. For a randomized procedure to fit within this framework, we only require that for any fixed input the complexity of checking whether the procedure succeeds on a given ...
Pseudorandomness and average-case complexity via uniform reductions
 IN PROCEEDINGS OF THE 17TH ANNUAL IEEE CONFERENCE ON COMPUTATIONAL COMPLEXITY
, 2002
Abstract

Cited by 51 (7 self)
Impagliazzo and Wigderson (FOCS 1998) gave the first construction of pseudorandom generators from a uniform complexity assumption on EXP (namely EXP ≠ BPP). Unlike results in the non-uniform setting, their result does not provide a continuous tradeoff between worst-case hardness and pseudorandomness, nor does it explicitly establish an average-case hardness result. In this paper: ◦ We obtain an optimal worst-case to average-case connection for EXP: if EXP ⊈ BPTIME(t(n)), then EXP has problems that cannot be solved on a fraction 1/2 + 1/t′(n) of the inputs by BPTIME(t′(n)) algorithms, for t′ = t^{Ω(1)}. ◦ We exhibit a PSPACE-complete self-correctible and downward self-reducible problem. This slightly simplifies and strengthens the proof of Impagliazzo and Wigderson, which used a #P-complete problem with these properties. ◦ We argue that the results of Impagliazzo and Wigderson, and the ones in this paper, cannot be proved via “black-box” uniform reductions.
In Search of an Easy Witness: Exponential Time vs. Probabilistic Polynomial Time
, 2002
Abstract

Cited by 51 (7 self)
Restricting the search space {0,1}^n to the set of truth tables of “easy” Boolean functions on log n variables, as well as using some known hardness-randomness tradeoffs, we establish a number of results relating the complexity of exponential-time and probabilistic polynomial-time complexity classes. In particular, we show that NEXP ⊂ P/poly ⇔ NEXP = MA; this can be interpreted as saying that no derandomization of MA (and, hence, of promise-BPP) is possible unless NEXP contains a hard Boolean function. We also prove several downward closure results for ZPP, RP, BPP, and MA; e.g., we show EXP = BPP ⇔ EE = BPE, where EE is the double-exponential time class and BPE is the exponential-time analogue of BPP.
Statistical zero-knowledge proofs with efficient provers: Lattice problems and more
 In CRYPTO
, 2003
Abstract

Cited by 50 (10 self)
Abstract. We construct several new statistical zero-knowledge proofs with efficient provers, i.e. ones where the prover strategy runs in probabilistic polynomial time given an NP witness for the input string. Our first proof systems are for approximate versions of the Shortest Vector Problem (SVP) and Closest Vector Problem (CVP), where the witness is simply a short vector in the lattice or a lattice vector close to the target, respectively. Our proof systems are in fact proofs of knowledge, and as a result, we immediately obtain efficient lattice-based identification schemes which can be implemented with arbitrary families of lattices in which the approximate SVP or CVP are hard. We then turn to the general question of whether all problems in SZK ∩ NP admit statistical zero-knowledge proofs with efficient provers. Towards this end, we give a statistical zero-knowledge proof system with an efficient prover for a natural restriction of Statistical Difference, a complete problem for SZK. We also suggest a plausible approach to resolving the general question in the positive.
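As a concrete reading of “the witness is simply a short vector in the lattice”: what makes the efficient-prover setting natural is that such a witness is trivial to verify. A minimal sketch (Python assumed; the basis, bound, and function names are illustrative, not taken from the paper):

```python
import math

def lattice_vector(basis, coeffs):
    """v = sum_j coeffs[j] * basis[j], where basis is a list of integer
    basis vectors and coeffs are the integer coefficients of the witness."""
    dim = len(basis[0])
    return [sum(c * vec[i] for c, vec in zip(coeffs, basis)) for i in range(dim)]

def is_svp_witness(basis, coeffs, bound):
    """Accept iff the coefficients give a nonzero lattice vector whose
    Euclidean norm is at most the claimed bound."""
    v = lattice_vector(basis, coeffs)
    return any(c != 0 for c in coeffs) and math.hypot(*v) <= bound

# In the lattice spanned by (3, 0) and (0, 4), the vector (3, 0) is a
# valid witness for bound 3, while (0, 4) is too long.
print(is_svp_witness([[3, 0], [0, 4]], [1, 0], 3.0))  # True
print(is_svp_witness([[3, 0], [0, 4]], [0, 1], 3.0))  # False
```

Finding such a short vector is the hard direction; the proof systems in the abstract let a prover who already holds one convince a verifier without revealing it.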
Another proof that BPP ⊆ PH (and more)
, 1997
Abstract

Cited by 27 (2 self)
Abstract. We provide another proof of the Sipser–Lautemann Theorem by which BPP ⊆ MA (⊆ PH). The current proof is based on strong results regarding the amplification of BPP, due to Zuckerman (1996). Given these results, the current proof is even simpler than previous ones. Furthermore, extending the proof leads to two results regarding MA: MA ⊆ ZPP^NP (which seems to be new), and that two-sided error MA equals MA. Finally, we survey the known facts regarding the fragment of the polynomial-time hierarchy that contains MA.
Derandomization in cryptography
 SIAM J. COMPUTING
Abstract

Cited by 23 (4 self)
We give two applications of Nisan–Wigderson-type (“non-cryptographic”) pseudorandom generators in cryptography. Specifically, assuming the existence of an appropriate NW-type generator, we construct: 1. A one-message witness-indistinguishable proof system for every language in NP, based on any trapdoor permutation. This proof system does not assume a shared random string or any setup assumption, so it is actually an “NP proof system.” 2. A non-interactive bit commitment scheme based on any one-way function. The specific NW-type generator we need is a hitting set generator fooling nondeterministic circuits. It is known how to construct such a generator if E = DTIME(2^{O(n)}) has a function of nondeterministic circuit complexity 2^{Ω(n)} (Miltersen and Vinodchandran, FOCS ’99). Our witness-indistinguishable proofs are obtained by using the NW-type generator to derandomize the ZAPs of Dwork and Naor (FOCS ’00). To our knowledge, this is the first construction of an NP proof system achieving a secrecy property. Our commitment scheme is obtained by derandomizing the interactive commitment scheme of Naor (J. Cryptology, 1991). Previous constructions of non-interactive commitment schemes were only known under incomparable assumptions.
Computational analogues of entropy
 In 11th International Conference on Random Structures and Algorithms
, 2003
Abstract

Cited by 23 (2 self)
Abstract. Min-entropy is a statistical measure of the amount of randomness that a particular distribution contains. In this paper we investigate the notion of computational min-entropy, which is the computational analog of statistical min-entropy. We consider three possible definitions for this notion, and show equivalence and separation results for these definitions in various computational models. We also study whether or not certain properties of statistical min-entropy have a computational analog. In particular, we consider the following questions: 1. Let X be a distribution with high computational min-entropy. Does one get a pseudorandom distribution when applying a “randomness extractor” on X? 2. Let X and Y be (possibly dependent) random variables. Is the computational min-entropy of (X, Y) at least as large as the computational min-entropy of X? 3. Let X be a distribution over {0,1}^n that is “weakly unpredictable” in the sense that it is hard to predict a constant fraction of the coordinates of X with a constant bias. Does X have computational min-entropy Ω(n)? We show that the answers to these questions depend on the computational model considered. In some natural models the answer is false and in others the answer is true. Our positive results for the third question exhibit models in which the “hybrid argument bottleneck” in “moving from a distinguisher to a predictor” can be avoided.
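Question 2 asks whether a basic identity of statistical min-entropy survives in the computational setting. The statistical fact itself, H∞(X, Y) ≥ H∞(X), is easy to verify: no pair (x, y) can be more likely than its x-component alone. A small sketch checking it on a made-up joint distribution (Python assumed):

```python
import math
from collections import Counter

def min_entropy(dist):
    """H_inf = -log2 of the largest point probability."""
    return -math.log2(max(dist.values()))

# A made-up joint distribution over pairs (X, Y).
joint = {("a", 0): 0.4, ("a", 1): 0.1, ("b", 0): 0.25, ("b", 1): 0.25}

# Marginal of X: sum the probability of each pair into its x-component.
marginal_x = Counter()
for (x, _), p in joint.items():
    marginal_x[x] += p

# Statistically, the pair is at least as unpredictable as X alone.
assert min_entropy(joint) >= min_entropy(marginal_x)
```

The abstract's point is that the computational analogue of this inequality can fail in some models, because a pair may be hard to predict even when a distinguisher exists for one coordinate.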
Pseudorandomness for approximate counting and sampling
 In Proceedings of the 20th IEEE Conference on Computational Complexity
, 2005
Abstract

Cited by 22 (5 self)
We study computational procedures that use both randomness and nondeterminism. Examples are Arthur-Merlin games and approximate counting and sampling of NP-witnesses. The goal of this paper is to derandomize such procedures under the weakest possible assumptions. Our main technical contribution allows one to “boost” a given hardness assumption. One special case is a proof that EXP ⊈ NP/poly ⇒ EXP ⊈ P^{NP}_{||}/poly. In words, if there is a problem in EXP that cannot be computed by poly-size nondeterministic circuits then there is one which cannot be computed by poly-size circuits that make non-adaptive NP oracle queries. This in particular shows that the various assumptions used over the last few years by several authors to derandomize Arthur-Merlin games (i.e., show AM = NP) are in fact all equivalent. In addition to simplifying the framework of AM derandomization, we show that this “unified assumption” suffices to derandomize several other probabilistic procedures. For these results we define two new primitives that we regard as the natural pseudorandom objects associated with approximate counting and sampling of NP-witnesses. We use the “boosting” theorem and hashing techniques to construct these primitives using an assumption that is no stronger than that used to derandomize AM. As a consequence, under this assumption, there are deterministic polynomial time algorithms that use non-adaptive NP-queries and perform the following tasks: • approximate counting of NP-witnesses: given a Boolean circuit A, output r such that (1 − ε)|A^{−1}(1)| ≤ r ≤ |A^{−1}(1)|.
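The guarantee quoted above is a relative-error count of a circuit's satisfying assignments. A naive randomized sampler already gives a (two-sided) relative-error estimate and makes the target concrete; the paper's contribution is achieving this deterministically with non-adaptive NP queries, which this sketch does not attempt (Python assumed; names are illustrative):

```python
import random

def approx_count(circuit, n, samples=20000, seed=0):
    """Monte Carlo estimate of |circuit^{-1}(1)| over {0,1}^n.
    'circuit' is any predicate on n-bit tuples; this is the naive sampling
    estimator, not the paper's deterministic NP-oracle procedure."""
    rng = random.Random(seed)
    hits = sum(
        1 for _ in range(samples)
        if circuit(tuple(rng.randint(0, 1) for _ in range(n)))
    )
    return (2 ** n) * hits / samples

# Majority on 3 bits has exactly 4 satisfying assignments out of 8.
maj3 = lambda bits: sum(bits) >= 2
print(approx_count(maj3, 3))  # close to 4.0
```

Note that sampling fails when the acceptance probability is exponentially small, which is one reason the hashing techniques mentioned in the abstract are needed for general circuits.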