Results 1-8 of 8
Proof verification and hardness of approximation problems
In Proc. 33rd Ann. IEEE Symp. on Foundations of Computer Science, 1992
Abstract

Cited by 797 (39 self)
We show that every language in NP has a probabilistic verifier that checks membership proofs for it using a logarithmic number of random bits and by examining a constant number of bits in the proof. If a string is in the language, then there exists a proof such that the verifier accepts with probability 1 (i.e., for every choice of its random string). For strings not in the language, the verifier rejects every provided “proof” with probability at least 1/2. Our result builds upon and improves a recent result of Arora and Safra [6], whose verifiers examine a nonconstant number of bits in the proof (though this number is a very slowly growing function of the input length). As a consequence we prove that no MAX SNP-hard problem has a polynomial time approximation scheme, unless NP = P. The class MAX SNP was defined by Papadimitriou and Yannakakis [82], and hard problems for this class include vertex cover, maximum satisfiability, maximum cut, metric TSP, Steiner trees and shortest superstring. We also improve upon the clique hardness results of Feige, Goldwasser, Lovász, Safra and Szegedy [42] and Arora and Safra [6], and show that there exists a positive ε such that approximating the maximum clique size in an N-vertex graph to within a factor of N^ε is NP-hard.
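The query pattern the abstract describes (logarithmic randomness, constantly many probes, perfect completeness) can be sketched in toy form. This is purely illustrative: the local test below is a made-up predicate for a trivial parity "language," not the paper's construction.

```python
import random

def pcp_style_verifier(x, proof, num_queries=3, rng=None):
    """Toy sketch of a PCP-style verifier's interface: a few random coins
    (O(log n) bits name each position) pick a constant number of proof
    positions; only those bits are read before deciding.

    The local predicate is a stand-in for illustration: an honest proof
    is a repetition encoding of the parity of x.  The verifier in the
    paper is far more sophisticated.
    """
    rng = rng or random.Random()
    positions = [rng.randrange(len(proof)) for _ in range(num_queries)]
    claimed = sum(x) % 2                      # what an honest proof encodes
    return all(proof[i] == claimed for i in positions)

x = [1, 0, 1, 1, 0]
honest = [sum(x) % 2] * 64                    # accepted for every random choice
assert pcp_style_verifier(x, honest)
wrong = [1 - sum(x) % 2] * 64                 # rejected for every random choice
assert not pcp_style_verifier(x, wrong)
```

A proof disagreeing with the encoding in half its positions would slip past this toy test only with probability (1/2)^3 per run, mirroring the completeness/soundness gap in the abstract.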
Transparent Proofs and Limits to Approximation
1994
Abstract

Cited by 16 (0 self)
We survey a major collective accomplishment of the theoretical computer science community on efficiently verifiable proofs. Informally, a formal proof is transparent (or holographic) if it can be verified with high confidence by a small number of spot-checks. Recent work by a large group of researchers has shown that this seemingly paradoxical concept can be formalized and is feasible in a remarkably strong sense: every formal proof in ZF, say, can be rewritten in transparent format (proving the same theorem in a different proof system) without increasing the length of the proof by too much. This result in turn has surprising implications for the intractability of approximate solutions of a wide range of discrete optimization problems, extending the pessimistic predictions of the P≠NP theory to approximate solvability. We discuss the main results on transparent proofs and their implications for discrete optimization. We give an account of several links between the two subjects as well ...
Multilinearity self-testing with relative error
In Proc. 17th STACS, LNCS 1770, 2000
Abstract

Cited by 3 (1 self)
We investigate self-testing programs with relative error by allowing error terms proportional to the function value to be computed. Until now, in numerical computation, error terms were assumed to be either constant or proportional to the p-th power of the magnitude of the input, for p ∈ [0, 1). We construct new self-testers with relative error for real-valued multilinear functions defined over finite rational domains. The existence of such self-testers positively solves an open question in [KMS99]. Moreover, our self-testers are very efficient: they use few queries and simple operations.
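As a deliberately simplified illustration of the relative-error idea, here is a self-test sketch for a program claiming to compute a univariate linear function, where the acceptance tolerance scales with the value being computed rather than with the input magnitude. The tester, the threshold `theta`, and the domain below are all invented for illustration; the paper's multilinear testers are more involved.

```python
import random

def linearity_self_test(prog, domain, trials=200, theta=1e-9, rng=None):
    """Illustrative self-tester in the spirit of the abstract (not the
    paper's construction): probe a program that claims to compute a
    linear function f(x) = c*x and check f(x+y) ~ f(x) + f(y), allowing
    an error *relative to the function value* instead of an error
    proportional to a power of the input magnitude."""
    rng = rng or random.Random(0)
    for _ in range(trials):
        x, y = rng.choice(domain), rng.choice(domain)
        lhs = prog(x + y)
        rhs = prog(x) + prog(y)
        # Relative-error acceptance: tolerance scales with |f(x + y)|.
        if abs(lhs - rhs) > theta * max(abs(lhs), 1.0):
            return False
    return True

domain = [i / 7 for i in range(-50, 51)]     # a finite rational domain
assert linearity_self_test(lambda x: 3.5 * x, domain)            # correct program passes
assert not linearity_self_test(lambda x: 3.5 * x + 1.0, domain)  # additive bug caught
```

Note that the buggy program's constant offset is tiny relative to large inputs, yet the additivity probe exposes it on the very first pair of queries.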
Efficient Reductions from NP to Parity using Error-Correcting Codes
1993
Abstract

Cited by 1 (0 self)
This paper proves that every language in NP is recognized by an RP[⊕P] machine whose time complexity is quasilinear, apart from the time to verify witnesses. The results significantly improve the number of random bits, success probability, and running time of Valiant and Vazirani's original construction [VV86], and beat both the 2n random bits and the time/success tradeoff in subsequent methods based on universal hashing. Questions of further improvements are connected to open problems in the theory of error-correcting codes.

1. Introduction

Valiant and Vazirani [VV86] proved that every language L ∈ NP is accepted by a polynomial-time bounded probabilistic oracle Turing machine M^o which, on any input x, computes a Boolean formula φ_x as an oracle query, and accepts if the answer is '1'. The oracle of M^o has the "promise property" that whenever φ_x has a unique satisfying assignment, it always answers '1', and whenever φ_x has no satisfying assignments, it answers '0'. An ...
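The heart of the Valiant-Vazirani construction is an isolation step: random parity (XOR) constraints whittle the solution set down to, with noticeable probability, exactly one element. The demo below is a simplification under stated assumptions: it filters an explicit solution set rather than conjoining parity clauses onto a formula, and the success-count threshold is empirical, not the paper's bound.

```python
import random

def isolate(solutions, n, rng):
    """One round of Valiant-Vazirani-style isolation (simplified demo:
    we filter an explicit solution set instead of adding parity clauses
    to a formula).  Guess k, then impose k random XOR constraints; if
    the solution count is near 2^k, exactly one assignment survives
    with constant probability."""
    k = rng.randrange(1, n + 2)                   # guess log2(#solutions)
    survivors = solutions
    for _ in range(k):
        subset = [i for i in range(n) if rng.random() < 0.5]
        bit = rng.randrange(2)
        # keep assignments a whose XOR of a[i] over the subset equals bit
        survivors = [a for a in survivors if sum(a[i] for i in subset) % 2 == bit]
    return survivors

rng = random.Random(1)
n = 10
solutions = set()
while len(solutions) < 37:                        # arbitrary demo solution set
    solutions.add(tuple(rng.randrange(2) for _ in range(n)))
solutions = list(solutions)

rounds = 2000
isolated = sum(1 for _ in range(rounds) if len(isolate(solutions, n, rng)) == 1)
assert isolated > 50      # a constant fraction of rounds isolates a unique solution
```

Valiant and Vazirani show the per-round isolation probability is at least roughly inverse-linear in n when k is guessed uniformly; the entry above is about improving exactly these randomness and success-probability parameters.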
Genetic Algorithm Hardness and Approximation Complexity: A Research Agenda
Abstract
Optimization problems, which seek to minimize or maximize some value, are ubiquitous in large-scale computing. They are also among the most computationally difficult problems to solve. Consequently, many practical applications rely on approximations, settling for "good enough" when the "absolutely best" answer cannot be feasibly attained. Genetic algorithms, which simulate the evolution of potential solutions toward better and better alternatives, are a very powerful stochastic technique for finding approximate solutions to optimization problems. However, their limitations are currently not well understood. This leaves the programmer little guidance as to when this versatile technique should be used and when it should not. Recently, surprising theoretical results have led to a classification of optimization problems according to how well they can be approximated by reasonably fast algorithms. These limitations are inherent in the problem, and are therefore implementation independent. T...
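For readers unfamiliar with the technique, a minimal genetic algorithm fits in a few lines. Everything below (tournament selection, one-point crossover, the parameter values, the OneMax fitness) is a generic textbook setup for illustration, not something drawn from this entry.

```python
import random

def genetic_algorithm(fitness, n_bits, pop_size=40, generations=60,
                      mutation_rate=0.02, rng=None):
    """Minimal genetic-algorithm sketch: evolve a population of bit
    strings toward higher fitness via tournament selection, one-point
    crossover, and bit-flip mutation.  Parameter choices are
    illustrative defaults, not recommendations."""
    rng = rng or random.Random(0)
    pop = [[rng.randrange(2) for _ in range(n_bits)] for _ in range(pop_size)]

    def tournament():
        a, b = rng.choice(pop), rng.choice(pop)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, n_bits)            # one-point crossover
            child = p1[:cut] + p2[cut:]
            # bit-flip mutation: each bit flips independently
            child = [bit ^ (rng.random() < mutation_rate) for bit in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# OneMax fitness: count the 1-bits; the optimum is the all-ones string.
best = genetic_algorithm(sum, 30)
assert sum(best) >= 20        # near-optimal on this easy landscape
```

On an easy landscape like OneMax this converges quickly; the classification results mentioned above concern problems where no reasonably fast algorithm, evolutionary or otherwise, can guarantee a good approximation.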
Characterizing Small Depth and Small Space Classes by Operators of Higher Types
1998
Abstract
Motivated by the question of how to define an analog of interactive proofs in the setting of logarithmic time- and space-bounded computation, we study complexity classes defined in terms of operators quantifying over oracles. We obtain new characterizations of NC^1, L, NL, NP, and NSC (the nondeterministic version of SC). In some cases, we prove that our simulations are optimal (for instance, in bounding the number of queries to the oracle).

1 Introduction

Interactive proofs motivate complexity theorists to study new modes of computation. These modes have been studied to great effect in the setting of polynomial time (e.g. [Sha92, LFKN92, BFL90]) and small space-bounded classes (e.g. [FL93, CL95]). Is it possible to study interactive proofs in the context of even smaller complexity classes? Would such a study be useful or interesting? It has often proved very useful to study modes of computation on very small complexity classes, although it has not always been clear at firs...
Improved Resource-Bounded Borel-Cantelli and Stochasticity Theorems
1995
Abstract
This note strengthens and simplifies Lutz's resource-bounded version of the Borel-Cantelli lemma for density systems and martingales. We observe that the technique can be used to construct martingales that are "additively honest," and also martingales that are "multiplicatively honest." We use this to improve the "Weak Stochasticity Theorem" of Lutz and Mayordomo: their result does not address the issue of how rapidly the bias away from 1/2 converges toward zero in a "stochastic" language, while we show that the bias must vanish exponentially.

1. Introduction

Lutz [15] developed a resource-bounded version of the classical first Borel-Cantelli lemma. Lutz's formulation, and its subsequent use in [16, 17, 19, 21], is in terms of the "density systems" that he originally used to define his resource-bounded measure theory in [15]. The above-cited papers all note the equivalent formulation of the measure theory in terms of martingales, along lines pioneered for complexity theory by Sc...
The Weizmann Workshop on Probabilistic Proof Systems
1994
Abstract
The Weizmann Workshop on Probabilistic Proofs and Applications to Program Checking, Cryptography, and Hardness of Approximation was held at the Weizmann Institute of Science on January 10-13, 1994. The following report provides the abstracts of the talks given at the workshop, the list of participants, and relevant references.