Results 1–10 of 12
The Computational Complexity of Linear Optics
In Proceedings of STOC 2011
Abstract

Cited by 34 (8 self)
We give new evidence that quantum computers—moreover, rudimentary quantum computers built entirely out of linear-optical elements—cannot be efficiently simulated by classical computers. In particular, we define a model of computation in which identical photons are generated, sent through a linear-optical network, then nonadaptively measured to count the number of photons in each mode. This model is not known or believed to be universal for quantum computation, and indeed, we discuss the prospects for realizing the model using current technology. On the other hand, we prove that the model is able to solve sampling problems and search problems that are classically intractable under plausible assumptions. Our first result says that, if there exists a polynomial-time classical algorithm that samples from the same probability distribution as a linear-optical network, then P^#P = BPP^NP, and hence the polynomial hierarchy collapses to the third level. Unfortunately, this result assumes an extremely accurate simulation. Our main result suggests that even an approximate or noisy classical simulation would already imply a collapse of the polynomial hierarchy. For this, we need two unproven conjectures: the Permanent-of-Gaussians Conjecture, which says that it is #P-hard to approximate the permanent of a matrix A of independent N(0, 1) Gaussian entries, with high probability over A; and the Permanent Anti-Concentration Conjecture, which says that |Per(A)| ≥ √(n!)/poly(n) with high probability over A. We present evidence for these conjectures, both of which seem interesting even apart from our application. For the 96-page full version, see www.scottaaronson.com/papers/optics.pdf
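The two conjectures above concern the permanent of a Gaussian random matrix. As an illustrative sketch (not from the paper), the permanent can be computed exactly for small n via Ryser's inclusion–exclusion formula, and the anti-concentration bound |Per(A)| ≥ √(n!)/poly(n) can be probed empirically; the choice of n² as the poly(n) factor below is arbitrary:

```python
import itertools, math, random

def permanent(A):
    """Permanent via Ryser's formula: O(2^n * n^2) time."""
    n = len(A)
    total = 0.0
    for subset in itertools.product([0, 1], repeat=n):
        if not any(subset):
            continue
        prod = 1.0
        for row in A:
            prod *= sum(a for a, s in zip(row, subset) if s)
        total += (-1) ** sum(subset) * prod
    return (-1) ** n * total

# sanity check: Per(I) = 1
assert abs(permanent([[1, 0, 0], [0, 1, 0], [0, 0, 1]]) - 1.0) < 1e-9

# empirical look at anti-concentration for iid N(0,1) entries,
# using n^2 as a stand-in for the poly(n) factor
n, trials = 6, 200
random.seed(0)
threshold = math.sqrt(math.factorial(n)) / n ** 2
hits = sum(
    1 for _ in range(trials)
    if abs(permanent([[random.gauss(0, 1) for _ in range(n)]
                      for _ in range(n)])) >= threshold
)
print(f"fraction with |Per(A)| >= sqrt(n!)/n^2: {hits / trials:.2f}")
```

Since E[Per(A)²] = n! for iid standard Gaussian entries, a typical |Per(A)| is on the order of √(n!), so the fraction printed should be close to 1, consistent with the conjecture.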
Fixed-Parameter and Approximation Algorithms: A New Look
Abstract

Cited by 4 (1 self)
A Fixed-Parameter Tractable (FPT) ρ-approximation algorithm for a minimization (resp. maximization) parameterized problem P is an FPT algorithm that, given an instance (x, k) ∈ P, computes a solution of cost at most k · ρ(k) (resp. at least k/ρ(k)) if a solution of cost at most (resp. at least) k exists; otherwise the output can be arbitrary. For well-known intractable problems such as the W[1]-hard Clique and W[2]-hard Set Cover problems, the natural question is whether we can get any FPT-approximation. It is widely believed that both Clique and Set Cover admit no FPT ρ-approximation algorithm, for any increasing function ρ. However, to the best of our knowledge, there has been no progress towards proving this conjecture. Assuming standard conjectures such as the Exponential Time Hypothesis (ETH) [18] and the Projection Games Conjecture (PGC) [27], we make the first progress towards proving this conjecture by showing that: – Under the ETH and PGC, there exist constants F1, F2 > 0 such that the Set Cover problem does not admit an FPT approximation algorithm with ratio k^F1 in 2^(k^F2) · poly(N, M) time, where N is the size of the universe and M is the number of sets. – Unless NP ⊆ SUBEXP, for every 1 > δ > 0 there exists a constant F(δ) > 0 such that Clique has no FPT cost approximation with ratio k^(1−δ) in 2^(k^F) · poly(n) time, where n is the number of vertices in the graph. In the second part of the paper we consider various W[1]-hard problems ...
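For readers unfamiliar with the FPT running-time regime f(k) · poly(n) that the definition above relies on, here is a minimal illustrative sketch (not from the paper, which concerns Clique and Set Cover): the classic bounded-search-tree algorithm for Vertex Cover parameterized by k, an exact (ρ = 1) FPT algorithm running in O(2^k · |E|) time:

```python
def vc_fpt(edges, k):
    """Bounded search tree for Vertex Cover parameterized by k.
    Returns a cover of size <= k if one exists, else None.
    Branching on an uncovered edge gives O(2^k * |E|) time: FPT in k."""
    if not edges:
        return set()          # every edge covered
    if k == 0:
        return None           # edges remain but budget exhausted
    u, v = edges[0]           # some endpoint of this edge must be in the cover
    for w in (u, v):
        rest = [(a, b) for (a, b) in edges if w not in (a, b)]
        sub = vc_fpt(rest, k - 1)
        if sub is not None:
            return sub | {w}
    return None

# a path a-b-c-d has a vertex cover of size 2 but none of size 1
path = [("a", "b"), ("b", "c"), ("c", "d")]
print(vc_fpt(path, 2), vc_fpt(path, 1))
```

An FPT ρ-approximation in the abstract's sense relaxes this contract: instead of an exact answer, the algorithm may return a solution of cost up to k · ρ(k) whenever a cost-k solution exists, while keeping an f(k) · poly(n) running time.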
Label Cover Instances with Large Girth and the Hardness of Approximating Basic k-Spanner
Cryptographic Assumptions: A Position Paper
Abstract

Cited by 1 (0 self)
The mission of theoretical cryptography is to define and construct provably secure cryptographic protocols and schemes. Without proofs of security, cryptographic constructs offer no guarantees whatsoever and no basis for evaluation and comparison. As most security proofs necessarily come in the form of a reduction between the security claim and an intractability assumption, such proofs are ultimately only as good as the assumptions they are based on. Thus, the complexity implications of every assumption we utilize should be of significant substance, and serve as the yardstick for the value of our proposals. Lately, the field of cryptography has seen a sharp increase in the number of new assumptions that are often complex to define and difficult to interpret. At times, these assumptions are hard to untangle from the constructions which utilize them. We believe that the lack of standards of what is accepted as a reasonable cryptographic assumption can be harmful to the credibility of our field. Therefore, there is a great need for measures according to which we classify and compare assumptions, as to which are safe and which are not. In this paper, we propose such a classification and review recently suggested assumptions in this light. This follows the footsteps of Naor (Crypto 2003). Our governing principle is relying on hardness assumptions that are independent of the cryptographic constructions.
The Checkpoint Problem
Abstract

Cited by 1 (0 self)
In this paper we consider the checkpoint problem. The input consists of an undirected graph G, a set of source-destination pairs {(s1, t1), ..., (sk, tk)}, and a collection P of paths connecting the (si, ti) pairs. A feasible solution is a multicut E′; namely, a set of edges whose removal disconnects every source-destination pair. For each p ∈ P we define cp_E′(p) = |p ∩ E′|. In the sum checkpoint (SCP) problem the goal is to minimize ∑_{p∈P} cp_E′(p), while in the maximum checkpoint (MCP) problem the goal is to minimize max_{p∈P} cp_E′(p). These problems have several natural applications, e.g., in urban transportation and network security. In a sense, they combine the multicut problem and the minimum membership set cover problem. For the sum objective we show that weighted SCP is equivalent, with respect to approximability, to undirected multicut. Thus there exists an O(log n) approximation for SCP in general graphs. Our current approximability results for the max objective have a wide gap: we provide an approximation factor of O(√n · log n / opt) for MCP and a hardness of 2 under the assumption P ≠ NP. The hardness holds for trees, in which case we can obtain an asymptotic approximation factor of 2. Finally we show strong hardness for the well-known problem of finding a path with minimum forbidden pairs, which in a sense can be considered the dual to the checkpoint problem. Despite various works on this problem, hardness of approximation was not known prior to this work. We show that the problem cannot be approximated within c · n for some constant c > 0, unless P = NP. This is the strongest type of hardness possible. It carries over to directed acyclic graphs and is a huge improvement over the plain NP-hardness of Gabow (SIAM J. Comp 2007, pages 1648–1671).
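The objectives above are simple to state in code. A minimal sketch (the toy instance is hypothetical, chosen only to show the definitions): given the paths and a candidate multicut E′, compute cp_E′(p) = |p ∩ E′| per path, then the SCP (sum) and MCP (max) objective values:

```python
def checkpoint_costs(paths, multicut):
    """For paths given as edge lists and a multicut E' (edge list),
    return ([cp_E'(p) for each p], SCP objective, MCP objective),
    where cp_E'(p) = |p ∩ E'| counts checkpoint edges on path p."""
    cuts = {frozenset(e) for e in multicut}   # undirected edges
    cp = [sum(1 for e in p if frozenset(e) in cuts) for p in paths]
    return cp, sum(cp), max(cp)

# toy instance: two source-destination paths sharing the edge (b, c);
# cutting that single shared edge disconnects both pairs
paths = [
    [("s1", "b"), ("b", "c"), ("c", "t1")],
    [("s2", "b"), ("b", "c"), ("c", "t2")],
]
E_prime = [("b", "c")]
print(checkpoint_costs(paths, E_prime))   # each path crosses one checkpoint
```

Note how the two objectives diverge: a multicut that cuts each path once minimizes MCP, while SCP additionally rewards reusing a single cut edge across many paths.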
On set expansion problems and the Small Set Expansion Conjecture
Abstract
Abstract. We consider problems related to the Small Set Expansion Conjecture [14]. In the MWEC problem, we are given an undirected simple graph G = (V, E) with integral vertex weights. The goal is to select a set U ⊆ V of maximum weight so that the number of edges with at least one endpoint in U is at most m′. Goldschmidt and Hochbaum [8] show that the problem is NP-hard, and they give a 3-approximation algorithm for the problem. We present a polynomial-time approximation algorithm with ratio 2 for MWEC, improving the bound of 3 of [8]. Interestingly, we show that a 2 − ε ratio for MWEC for any constant ε > 0 implies that the Small Set Expansion Conjecture [14] fails. Thus, under the Small Set Expansion Conjecture, the ratio of 2 for MWEC is tight. To the best of our knowledge, this is the first time that the Small Set Expansion Conjecture is shown to be related to breaking the approximation threshold of some problem. The 2 − ε inapproximability considerably improves the NP-completeness result of [8]. In the FCEC problem, we are given a vertex-weighted graph and a bound k, and our goal is to find a subset of vertices U of total weight at least k such that the number of edges with at least one endpoint in U is minimized. The NP-completeness result in [8] carries over to this problem as well. The best known ratio for the problem is 2(1 + ε) by Carnes and Shmoys [3]. We give a polynomial-time ratio-2 approximation algorithm for FCEC, improving [3] ...
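To make the MWEC definition concrete, here is an exhaustive solver for tiny instances, an illustrative sketch only (the paper's ratio-2 algorithm is polynomial-time; this brute force is exponential, and the toy graph is hypothetical):

```python
from itertools import combinations

def mwec_brute_force(weights, edges, m_prime):
    """Exhaustive MWEC on a tiny instance: pick U ⊆ V of maximum
    total weight so that at most m_prime edges have an endpoint in U."""
    V = list(weights)
    best, best_w = set(), 0
    for r in range(len(V) + 1):
        for U in combinations(V, r):
            Us = set(U)
            touched = sum(1 for (a, b) in edges if a in Us or b in Us)
            w = sum(weights[v] for v in Us)
            if touched <= m_prime and w > best_w:
                best, best_w = Us, w
    return best, best_w

# toy instance: a triangle a-b-c plus a pendant vertex d of weight 5;
# with m' = 1 the only profitable choice is the low-degree vertex d
weights = {"a": 2, "b": 2, "c": 2, "d": 5}
edges = [("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")]
print(mwec_brute_force(weights, edges, 1))
```

The example also hints at the connection to small-set expansion: the edge budget m′ penalizes exactly the edges leaving (or inside) U, so low-expansion sets are the profitable ones.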
Hardness of Approximation
Abstract
Abstract. This article accompanies the talk given by the author at the International Congress of Mathematicians, 2014. The article sketches some connections between approximability of NP-complete problems, analysis and geometry, and the role played by the Unique Games Conjecture in facilitating these connections. For a more extensive introduction to the topic, the reader is referred to the survey articles [39, 40, 64].
Computational Complexity and Information Asymmetry in Election Audits with Low-Entropy Randomness
Abstract
We investigate the security of an election audit using a table of random numbers prepared in advance. We show how this scenario can be modeled using tools from combinatorial graph theory and computational complexity theory, and obtain the following results: (1) A randomly generated table can be used to produce a statistically good election audit that requires less randomness to be generated in real time by the auditors. (2) It is likely to be computationally infeasible for an adversary to compute, given a pre-prepared table of random numbers, how to minimize their chances of detection in an audit. (3) It is computationally infeasible to distinguish a truly random table from a malicious table that has been modified to decrease the probability of detecting cheating in certain precincts.