Results 1–10 of 12
Graph Nonisomorphism Has Subexponential Size Proofs Unless The Polynomial-Time Hierarchy Collapses
SIAM Journal on Computing, 1998
Abstract

Cited by 110 (4 self)
We establish hardness versus randomness tradeoffs for a broad class of randomized procedures. In particular, we create efficient nondeterministic simulations of bounded round Arthur-Merlin games using a language in exponential time that cannot be decided by polynomial size oracle circuits with access to satisfiability. We show that every language with a bounded round Arthur-Merlin game has subexponential size membership proofs for infinitely many input lengths unless exponential time coincides with the third level of the polynomial-time hierarchy (and hence the polynomial-time hierarchy collapses). This provides the first strong evidence that graph nonisomorphism has subexponential size proofs. We set up a general framework for derandomization which encompasses more than the traditional model of randomized computation. For a randomized procedure to fit within this framework, we only require that for any fixed input the complexity of checking whether the procedure succeeds on a given ...
Pseudorandomness and average-case complexity via uniform reductions
In Proceedings of the 17th Annual IEEE Conference on Computational Complexity, 2002
Abstract

Cited by 51 (7 self)
Impagliazzo and Wigderson (39th FOCS, 1998) gave the first construction of pseudorandom generators from a uniform complexity assumption on EXP (namely EXP ≠ BPP). Unlike results in the non-uniform setting, their result does not provide a continuous tradeoff between worst-case hardness and pseudorandomness, nor does it explicitly establish an average-case hardness result. In this paper:
◦ We obtain an optimal worst-case to average-case connection for EXP: if EXP ⊄ BPTIME(t(n)), then EXP has problems that cannot be solved on a fraction 1/2 + 1/t'(n) of the inputs by BPTIME(t'(n)) algorithms, for t' = t^{Ω(1)}.
◦ We exhibit a PSPACE-complete self-correctible and downward self-reducible problem. This slightly simplifies and strengthens the proof of Impagliazzo and Wigderson, which used a #P-complete problem with these properties.
◦ We argue that the results of Impagliazzo and Wigderson, and the ones in this paper, cannot be proved via “black-box” uniform reductions.
In Search of an Easy Witness: Exponential Time vs. Probabilistic Polynomial Time
, 2002
Abstract

Cited by 51 (7 self)
Restricting the search space {0, 1}^n to the set of truth tables of “easy” Boolean functions on log n variables, as well as using some known hardness-randomness tradeoffs, we establish a number of results relating the complexity of exponential-time and probabilistic polynomial-time complexity classes. In particular, we show that NEXP ⊂ P/poly ⇔ NEXP = MA; this can be interpreted as saying that no derandomization of MA (and, hence, of promise-BPP) is possible unless NEXP contains a hard Boolean function. We also prove several downward closure results for ZPP, RP, BPP, and MA; e.g., we show EXP = BPP ⇔ EE = BPE, where EE is the double-exponential time class and BPE is the exponential-time analogue of BPP.
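The search-space restriction in the abstract above can be illustrated with a toy sketch (not the paper's actual construction; the `truth_table` encoding and the example verifier are hypothetical stand-ins): instead of searching all of {0, 1}^n for a witness, one enumerates only strings that arise as truth tables of simple functions on log n variables, shrinking the search space from 2^n candidates to polynomially many.

```python
import itertools

N_VARS = 3                 # log n variables; witness length is 2^N_VARS
WITNESS_LEN = 2 ** N_VARS  # = 8

def truth_table(program, n_vars):
    """Truth table of a tiny 'easy' function: here, parity of a masked
    subset of the input bits, where the short description is the mask."""
    mask = program
    return tuple(bin(x & mask).count("1") % 2 for x in range(2 ** n_vars))

def easy_witness_search(verifier):
    """Search only truth tables of easy functions instead of all of
    {0,1}^WITNESS_LEN: 2^N_VARS descriptions instead of 2^WITNESS_LEN."""
    for program in range(2 ** N_VARS):
        w = truth_table(program, N_VARS)
        if verifier(w):
            return w
    return None  # no witness with a small description exists

# Hypothetical example verifier: accept witnesses with exactly four ones.
print(easy_witness_search(lambda w: sum(w) == 4))
# → (0, 1, 0, 1, 0, 1, 0, 1)
```

The point of the sketch is the counting: 2^N_VARS candidate descriptions are enumerated rather than 2^(2^N_VARS) raw witness strings.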
Easiness Assumptions and Hardness Tests: Trading Time for Zero Error
Journal of Computer and System Sciences, 2000
Abstract

Cited by 36 (3 self)
We propose a new approach towards derandomization in the uniform setting, where it is computationally hard to find possible mistakes in the simulation of a given probabilistic algorithm. The approach consists in combining both easiness and hardness complexity assumptions: if a derandomization method based on an easiness assumption fails, then we obtain a certain hardness test that can be used to remove error in BPP algorithms. As an application, we prove that every RP algorithm can be simulated by a zero-error probabilistic algorithm, running in expected subexponential time, that appears correct infinitely often (i.o.) to every efficient adversary. A similar result by Impagliazzo and Wigderson (FOCS'98) states that BPP allows deterministic subexponential-time simulations that appear correct with respect to any efficiently samplable distribution i.o., under the assumption that EXP ≠ BPP; in contrast, our result does not rely on any unproven assumptions. As another application of our...
If NP languages are hard on the worst-case then it is easy to find their hard instances
Abstract

Cited by 19 (6 self)
We prove that if NP ⊄ BPP, i.e., if some NP-complete language is worst-case hard, then for every probabilistic algorithm trying to decide the language, there exists some polynomially samplable distribution that is hard for it. That is, the algorithm often errs on inputs from this distribution. This is the first worst-case to average-case reduction for NP of any kind. We stress, however, that this does not mean that there exists one fixed samplable distribution that is hard for all probabilistic polynomial time algorithms, which is a prerequisite assumption needed for OWF and cryptography (even if not a sufficient assumption). Nevertheless, we do show that there is a fixed distribution on instances of NP-complete languages, that is samplable in quasi-polynomial time and is hard for all probabilistic polynomial time algorithms (unless NP is easy in the worst-case). Our results are based on the following lemma that may be of independent interest: Given the description of an efficient (probabilistic) algorithm that fails to solve SAT in the worst-case, we can efficiently generate at most three Boolean formulas (of increasing
Derandomization: a brief overview
 Bulletin of the EATCS
Abstract

Cited by 18 (0 self)
This survey focuses on the recent (1998–2003) developments in the area of derandomization, with the emphasis on the derandomization of time-bounded randomized complexity classes.
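The core paradigm the survey covers can be sketched in miniature (a toy illustration only: `toy_generator` below is a hypothetical stand-in, not a real hardness-based pseudorandom generator): if a generator stretches an O(log n)-bit seed into the m random bits a BPP algorithm consumes, then running the algorithm on the output of every seed and taking a majority vote gives a deterministic simulation with only polynomial overhead.

```python
import itertools

SEED_LEN = 4   # O(log n) in the real setting, so 2^SEED_LEN seeds is feasible
OUT_LEN = 16   # number of random bits the simulated algorithm consumes

def toy_generator(seed_bits):
    """Hypothetical PRG stub: deterministically stretch SEED_LEN bits
    to OUT_LEN bits via a linear congruential step (illustration only)."""
    state = int("".join(map(str, seed_bits)), 2) or 1
    out = []
    for _ in range(OUT_LEN):
        state = (1103515245 * state + 12345) % (2 ** 31)
        out.append((state >> 16) & 1)
    return out

def randomized_alg(x, coins):
    """Stand-in 'BPP' procedure: its answer depends on input x and coins."""
    return (sum(coins) * 2 > len(coins)) == (sum(x) % 2 == 0)

def derandomized(x):
    """Enumerate all 2^SEED_LEN seeds and output the majority answer."""
    votes = total = 0
    for seed in itertools.product([0, 1], repeat=SEED_LEN):
        votes += randomized_alg(x, toy_generator(list(seed)))
        total += 1
    return votes * 2 > total

print(derandomized([1, 0, 1, 1]))
```

If the generator's outputs fooled `randomized_alg` (which this toy stub is not guaranteed to do), the majority vote over seeds would agree with the majority vote over truly random coins, which is exactly how a hardness-based PRG with logarithmic seed length would place BPP in P.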
Hardness hypotheses, derandomization, and circuit complexity
Abstract

Cited by 14 (5 self)
We consider hypotheses about nondeterministic computation that have been studied in different contexts and shown to have interesting consequences:
• The measure hypothesis: NP does not have p-measure 0.
• The pseudo-NP hypothesis: there is an NP language that can be distinguished from any DTIME(2^{n^ε}) language by an NP refuter.
• The NP-machine hypothesis: there is an NP machine accepting 0* for which no 2^{n^ε}-time machine can find infinitely many accepting computations.
We show that the NP-machine hypothesis is implied by each of the first two. Previously, no relationships were known among these three hypotheses. Moreover, we unify previous work by showing that several derandomizations and circuit-size lower bounds that are known to follow from the first two hypotheses also follow from the NP-machine hypothesis. In particular, the NP-machine hypothesis becomes the weakest known uniform hardness hypothesis that derandomizes AM. We also consider UP versions of the above hypotheses as well as related immunity and scaled dimension hypotheses.
Uniform hardness vs. randomness tradeoffs for Arthur-Merlin games
Abstract

Cited by 6 (0 self)
Impagliazzo and Wigderson proved a uniform hardness vs. randomness "gap result" for BPP. We show an analogous result for AM: Either Arthur-Merlin protocols are very strong and everything in E = DTIME(2^{O(n)}) can be proved to a subexponential time verifier, or else Arthur-Merlin protocols are weak and every language in AM has a polynomial time nondeterministic algorithm in the uniform average-case setting (i.e., it is infeasible to come up with inputs on which the algorithm fails). For the class AM ∩ coAM we can remove the average-case clause and show under the same assumption that AM ∩ coAM ⊆ NP ∩ coNP.
Uniform Hardness Versus Randomness Tradeoffs For Arthur-Merlin Games
Abstract

Cited by 5 (3 self)
... A new ingredient in our proof is identifying a novel resiliency property of hardness vs. randomness tradeoffs. We observe that the Miltersen-Vinodchandran generator has this property.
Department of Computer Science,
"... We consider uniform assumptions for derandomization. We provide intuitive evidence that BPP can be simulated nontrivially in deterministic time by showing that (1) There is a simulation of P in P OLY LOGSP ACE that is successful against all polynomialtime adversaries infinitely often, or BP P ⊆ SU ..."
Abstract
We consider uniform assumptions for derandomization. We provide intuitive evidence that BPP can be simulated nontrivially in deterministic time by showing that
(1) There is a simulation of P in POLYLOGSPACE that is successful against all polynomial-time adversaries infinitely often, or BPP ⊆ SUBEXP.
(2) There is a simulation of P in SUBPSPACE that is successful against all polynomial-time adversaries infinitely often, or BPP = P.
These results complement and extend earlier work of Sipser, Nisan-Wigderson and Lu. We show similar tradeoffs between simulation of nondeterministic time by nondeterministic space and simulation of randomized algorithms by nondeterministic time. Finally, we give uniform assumptions under which there is a strict hierarchy for randomized polynomial time and randomized time can be simulated nontrivially by randomized space.