Results 1–10 of 64
Proof verification and hardness of approximation problems
 In Proc. 33rd Ann. IEEE Symp. on Found. of Comp. Sci.
, 1992
Abstract

Cited by 797 (39 self)
We show that every language in NP has a probabilistic verifier that checks membership proofs for it using a logarithmic number of random bits and by examining a constant number of bits in the proof. If a string is in the language, then there exists a proof such that the verifier accepts with probability 1 (i.e., for every choice of its random string). For strings not in the language, the verifier rejects every provided "proof" with probability at least 1/2. Our result builds upon and improves a recent result of Arora and Safra [6], whose verifiers examine a nonconstant number of bits in the proof (though this number is a very slowly growing function of the input length). As a consequence we prove that no MAX SNP-hard problem has a polynomial time approximation scheme, unless NP = P. The class MAX SNP was defined by Papadimitriou and Yannakakis [82], and hard problems for this class include vertex cover, maximum satisfiability, maximum cut, metric TSP, Steiner trees and shortest superstring. We also improve upon the clique hardness results of Feige, Goldwasser, Lovász, Safra and Szegedy [42], and Arora and Safra [6], and show that there exists a positive ε such that approximating the maximum clique size in an N-vertex graph to within a factor of N^ε is NP-hard.
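The soundness gap stated in this abstract (a false proof is rejected with probability at least 1/2) can be driven down by independent repetition, a standard amplification argument. The sketch below illustrates only that generic argument; `verify_once` is a hypothetical single-check interface, not code or an API from the paper.

```python
import random

def amplified_verifier(verify_once, proof, k=10):
    """Run a randomized verifier k times with fresh randomness and accept
    only if every run accepts. If one run accepts a false proof with
    probability at most 1/2, then k runs accept it with probability at
    most 2**-k, while a correct proof (accepted with probability 1 on
    every random string) is still always accepted.
    `verify_once(proof, rng)` is an assumed interface for one check.
    """
    return all(verify_once(proof, random.Random()) for _ in range(k))
```

Note that the repetitions multiply the verifier's randomness and query count by k; the point of the paper is achieving the constant-query, logarithmic-randomness guarantee in a single run.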
A Parallel Repetition Theorem
 SIAM Journal on Computing
, 1998
Abstract

Cited by 362 (9 self)
We show that a parallel repetition of any two-prover one-round proof system (MIP(2, 1)) decreases the probability of error at an exponential rate. No constructive bound was previously known. The constant in the exponent (in our analysis) depends only on the original probability of error and on the total number of possible answers of the two provers. The dependency on the total number of possible answers is logarithmic, which was recently proved to be almost the best possible [U. Feige and O. Verbitsky, Proc. 11th Annual IEEE Conference on Computational Complexity, IEEE Computer Society Press, Los Alamitos, CA, 1996, pp. 70–76].
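The qualitative shape of the bound described above can be illustrated numerically. Only the form err ≤ w^(n / log s), with w < 1 depending on the original error and s the number of possible answer pairs, is taken from the abstract; the particular decay base `w` below is a made-up placeholder, not the constant from Raz's analysis.

```python
import math

def parallel_repetition_bound(error, answer_size, n):
    """Illustrative bound of the shape proved for MIP(2, 1) games:
    after n parallel repetitions the error is at most w ** (n / log2(s)),
    where s is the total number of possible answer pairs and w < 1
    depends only on the original error. The expression for w here is a
    hypothetical placeholder chosen only to be strictly below 1.
    """
    assert 0 < error < 1 and answer_size >= 2
    w = 1 - (1 - error) ** 3  # placeholder base, strictly below 1
    return w ** (n / math.log2(answer_size))
```

The division of n by log2(s) reflects the abstract's point that the dependency on the total number of answers is logarithmic: larger answer sets slow, but do not stop, the exponential decay.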
On the Red-Blue Set Cover Problem
 In Proceedings of the 11th Annual ACM-SIAM Symposium on Discrete Algorithms
, 2000
Abstract

Cited by 49 (0 self)
Given a finite set of "red" elements R, a finite set of "blue" elements B and a family S ⊆ 2^(R∪B), the red-blue set cover problem is to find a subfamily C ⊆ S which covers all blue elements, but which covers the minimum possible number of red elements. We note that Red-Blue Set Cover is closely related to several combinatorial optimization problems studied earlier. These include the group Steiner problem, the directed Steiner problem, minimum label path, minimum monotone satisfying assignment and symmetric label cover. From the equivalence of Red-Blue Set Cover and MMSA_3 it follows that, unless P = NP, even the restriction of Red-Blue Set Cover where every set contains only one blue and two red elements cannot be approximated to within O(2^(log^(1-δ) n)), where δ = 1/log log^c n, for any constant c < 1/2 (where n = |S|). We give integer programming formulations of the problem and use them to obtain a 2√n approximation algorithm for the restricted case of Red-Blue Set Cover...
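The problem statement above is concrete enough for a small executable sketch. The function below is an illustrative brute-force solver (exponential time, elements assumed hashable), written here for clarity only; it is not an algorithm from the paper.

```python
from itertools import combinations

def red_blue_set_cover(red, blue, family):
    """Exhaustive solver for Red-Blue Set Cover as defined above: choose
    a subfamily of `family` that covers every blue element while
    touching as few red elements as possible. Tries every subfamily, so
    the running time is exponential in len(family); illustration only.
    """
    red, blue = set(red), set(blue)
    best_cover, best_reds = None, None
    for k in range(1, len(family) + 1):
        for sub in combinations(family, k):
            covered = set().union(*sub)
            if blue <= covered:  # all blue elements are covered
                reds_hit = covered & red
                if best_reds is None or len(reds_hit) < len(best_reds):
                    best_cover, best_reds = sub, reds_hit
    return best_cover, best_reds
```

For example, with red = {1, 2}, blue = {"a", "b"} and family = [{"a", 1}, {"b", 2}, {"a", "b", 1}], the single set {"a", "b", 1} covers both blue elements at the cost of one red element, beating the two-set cover that touches both reds.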
Topology Inference in the Presence of Anonymous Routers
 In IEEE INFOCOM
, 2003
Abstract

Cited by 47 (1 self)
Many topology discovery systems rely on traceroute to discover path information in public networks. However, for some routers, traceroute detects their existence but not their address; we term such routers anonymous routers. This paper considers the problem of inferring the network topology in the presence of anonymous routers. We illustrate how obvious approaches to handle anonymous routers lead to incomplete, inflated, or inaccurate topologies. We formalize the topology inference problem and show that producing both exact and approximate solutions is intractable. Two heuristics are proposed and evaluated through simulation. These heuristics have been used to infer the topology of the 6Bone, and could be incorporated into existing tools to infer more comprehensive and accurate topologies.
Does Parallel Repetition Lower the Error in Computationally Sound Protocols?
 In Proceedings of 38th Annual Symposium on Foundations of Computer Science, IEEE
, 1997
Abstract

Cited by 39 (6 self)
Whether or not parallel repetition lowers the error has been a fundamental question in the theory of protocols, with applications in many different areas. It is well known that parallel repetition reduces the error at an exponential rate in interactive proofs and Arthur-Merlin games. It seems to have been taken for granted that the same is true in arguments, or other proofs where the soundness only holds with respect to computationally bounded parties. We show that this is not the case. Surprisingly, parallel repetition can actually fail in this setting. We present four-round protocols whose error does not decrease under parallel repetition. This holds for any (polynomial) number of repetitions. These protocols exploit non-malleable encryption and can be based on any trapdoor permutation. On the other hand, we show that for three-round protocols the error does go down exponentially fast. The question of parallel error reduction is particularly important when the protocol is used in cryptographic settings like identification, and the error represents the probability that an intruder succeeds.
Inapproximability Results for Guarding Polygons and Terrains
, 2001
Abstract

Cited by 36 (0 self)
Past research on art gallery problems has concentrated almost exclusively on bounds on the number of guards needed in the worst case in various settings. On the complexity side, fewer results are available. For instance, it has long been known that placing a smallest number of guards for a given input polygon is NP-hard. In this paper we initiate the study of the approximability of several types of art gallery problems.
Minimum Propositional Proof Length is NP-Hard to Linearly Approximate
, 1999
Abstract

Cited by 32 (6 self)
We prove that the problem of determining the minimum propositional proof length is NP-hard to approximate within a factor of 2^(log^(1-o(1)) n). These results are very robust in that they hold for almost all natural proof systems, including Frege systems, extended Frege systems, resolution, Horn resolution, the polynomial calculus, the sequent calculus, and the cut-free sequent calculus. Our hardness of approximation results usually apply to proof length measured either by number of symbols or by number of inferences, for tree-like or dag-like proofs. We introduce the Monotone Minimum (Circuit) Satisfying Assignment problem and reduce it to the problem of approximating the length of proofs.
Polynomial Time Approximation Schemes for Some Dense Instances of NP-Hard Optimization Problems
 Proc. RANDOM 97, LNCS 1269
, 1997
Abstract

Cited by 23 (11 self)
We survey recent results on the existence of polynomial time approximation schemes for some dense instances of NP-hard combinatorial optimization problems. We further indicate some inherent limits on the existence of such schemes for some other dense instances of optimization problems.
Inapproximability Results for Guarding Polygons without Holes
 LECTURE NOTES IN COMPUTER SCIENCE
, 1998
Abstract

Cited by 22 (6 self)
The three art gallery problems Vertex Guard, Edge Guard and Point Guard are known to be NP-hard [8]. Approximation algorithms for Vertex Guard and Edge Guard with a logarithmic ratio were proposed in [7]. We prove that for each of these problems, there exists a constant ε > 0 such that no polynomial time algorithm can guarantee an approximation ratio of 1 + ε unless P = NP. We obtain our results by proposing gap-preserving reductions, based on reductions from [8]. Our results are the first inapproximability results for these problems.
Opportunity Cost Algorithms for Reduction of I/O and Interprocess Communication Overhead in a Computing Cluster
 IEEE Transactions on Parallel and Distributed Systems
, 2003
Abstract

Cited by 20 (2 self)
Computing Clusters (CC), consisting of several connected machines, can provide a high-performance, multiuser, timesharing environment for executing parallel and sequential jobs. In order to achieve good performance in such an environment, it is necessary to assign processes to machines in a manner that ensures efficient allocation of resources among the jobs. This paper presents opportunity cost algorithms for online assignment of jobs to machines in a CC. These algorithms are designed to improve the overall CPU utilization of the cluster and to reduce the I/O and Interprocess Communication (IPC) overhead. Our approach is based on known theoretical results on competitive algorithms. The main contribution of the paper is the adaptation of this theory into working algorithms that can assign jobs to machines in a manner that guarantees near-optimal utilization of the CPU resource for jobs that perform I/O and IPC operations. The developed algorithms are easy to implement. We tested the algorithms by means of simulations and executions in a real system and show that they outperform existing methods for process allocation that are based on ad hoc heuristics. Index Terms: load balancing, competitive algorithms, cluster computing, I/O overhead, IPC overhead.
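The flavor of a competitive opportunity-cost assignment rule can be sketched as follows: a machine's cost grows exponentially with its utilization, and each arriving job goes to the machine with the smallest marginal cost increase. The cost function, the parameter `a`, and the data layout below are illustrative placeholders in the spirit of that approach, not the paper's actual algorithm.

```python
def assign_job(machines, job_load, a=2.0):
    """Opportunity-cost style online assignment sketch: each machine is
    charged a ** (load / capacity), so an already-busy machine pays a
    steeply rising price for extra load, and the job is routed to the
    machine whose cost increases the least. `a` is a hypothetical tuning
    parameter. machines: dict name -> (current_load, capacity).
    """
    def marginal(load, cap):
        return a ** ((load + job_load) / cap) - a ** (load / cap)
    return min(machines, key=lambda m: marginal(*machines[m]))
```

With this rule a lightly loaded machine wins even when its absolute cost is nonzero, because only the increase matters; that is the sense in which each job pays its "opportunity cost" rather than a greedy least-loaded heuristic.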