Results 1–10 of 24
Adiabatic quantum state generation and statistical zero-knowledge
 in Proc. 35th STOC
, 2003
Abstract

Cited by 61 (3 self)
The design of new quantum algorithms has proven to be an extremely difficult task. This paper considers a different approach to the problem. We systematically study "quantum state generation", namely, which superpositions can be efficiently generated. We first show that all problems in Statistical Zero Knowledge (SZK), a class which contains many languages that are natural candidates for BQP, can be reduced to an instance of quantum state generation. This was known before for graph isomorphism, but we give a general recipe for all problems in SZK. We demonstrate the reduction from the problem to its quantum state generation version for three examples: discrete log, quadratic residuosity and a gap version of closest vector in a lattice. We then develop tools for quantum state generation. For this task, we define the framework of "adiabatic quantum state generation", which uses the language of ground states, spectral gaps and Hamiltonians instead of the standard unitary gate language. This language stems from the recently suggested adiabatic computation model [20] and seems to be especially tailored for the task of quantum state generation. After defining the paradigm, we provide two basic lemmas for adiabatic quantum state generation:
• The Sparse Hamiltonian lemma, which gives a general technique for implementing sparse Hamiltonians efficiently, and
• The jagged adiabatic path lemma, which gives conditions for a sequence of Hamiltonians to allow efficient adiabatic state generation.
We use our tools to prove that any quantum state which can be generated efficiently in the standard model can also be generated efficiently adiabatically, and vice versa. Finally we show how to apply our techniques to generate superpositions corresponding to limiting distributions of a large class of Markov chains, including the uniform distribution over all perfect matchings in a bipartite graph.
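The adiabatic language of the abstract can be illustrated with a minimal numerical sketch (not code from the paper; the two-level example and all names here are illustrative): the cost of adiabatic state generation is governed by the minimum spectral gap along an interpolated Hamiltonian path H(s) = (1 − s)·H0 + s·H1.

```python
import numpy as np

def min_spectral_gap(H0, H1, steps=101):
    """Scan s in [0, 1] and return the smallest gap E1(s) - E0(s)
    between the two lowest eigenvalues of H(s) = (1-s)*H0 + s*H1."""
    gaps = []
    for s in np.linspace(0.0, 1.0, steps):
        evals = np.linalg.eigvalsh((1 - s) * H0 + s * H1)  # sorted ascending
        gaps.append(evals[1] - evals[0])
    return min(gaps)

# Toy example: drive a single qubit from the |+> ground state of -X
# to the |0> ground state of -Z.
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
H0 = -X
H1 = -Z
print(min_spectral_gap(H0, H1))  # gap stays bounded away from 0 on this path
```

Here the gap is 2·sqrt((1 − s)² + s²), minimized at s = 1/2, so the evolution remains efficient; the paper's lemmas address when such gap bounds hold for much larger, sparse Hamiltonians.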
On the Primer Selection Problem in Polymerase Chain Reaction Experiments
, 1996
Abstract

Cited by 13 (0 self)
In this paper we address the problem of primer selection in polymerase chain reaction (PCR) experiments. We prove that the problem of minimizing the number of primers required to amplify a set of DNA sequences is NP-complete. Moreover, we show that it is also intractable to approximate solutions to this problem to within a constant times optimal. We develop a branch-and-bound algorithm that solves the primer minimization problem within reasonable time for typical instances. Moreover, we present an efficient approximation scheme for this problem, and prove that our heuristic always produces solutions with cost no worse than a logarithmic factor times optimal. Finally, we analyze a weighted variant, where both the number of primers and the sum of their costs are optimized simultaneously. We conclude by addressing the empirical performance of our methods on biological data.
1 Introduction
The polymerase chain reaction (PCR) has revolutionized the practice of molecular biology, mak...
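Primer minimization can be viewed as a set-cover problem, where each candidate primer "covers" the sequences it amplifies. The sketch below (a standard greedy heuristic, not the paper's exact algorithm; the instance is hypothetical) achieves the kind of logarithmic-factor guarantee the abstract refers to.

```python
def greedy_primer_cover(sequences, coverage):
    """coverage: dict mapping a candidate primer to the set of sequences
    it can amplify. Returns a list of primers covering all `sequences`."""
    uncovered = set(sequences)
    chosen = []
    while uncovered:
        # Greedy rule: pick the primer covering the most uncovered sequences.
        best = max(coverage, key=lambda p: len(coverage[p] & uncovered))
        gained = coverage[best] & uncovered
        if not gained:
            raise ValueError("remaining sequences cannot be covered")
        chosen.append(best)
        uncovered -= gained
    return chosen

# Hypothetical toy instance: 4 sequences, 3 candidate primers.
cov = {"p1": {"s1", "s2", "s3"}, "p2": {"s3", "s4"}, "p3": {"s4"}}
print(greedy_primer_cover({"s1", "s2", "s3", "s4"}, cov))  # ['p1', 'p2']
```

The classical analysis shows this greedy choice is within a ln(n) factor of the optimal number of primers, matching the paper's hardness-of-approximation result up to constants.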
Haplofreq – estimating haplotype frequencies efficiently
 In RECOMB
, 2005
Abstract

Cited by 12 (5 self)
A commonly used tool in disease association studies is the search for discrepancies between the haplotype distribution in the case and control populations. In order to find this discrepancy, the haplotype frequencies in each of the populations are estimated from the genotypes. We present a new method, HAPLOFREQ, to estimate haplotype frequencies over a short genomic region given the genotypes or haplotypes with missing data or sequencing errors. Our approach incorporates a maximum likelihood model based on a simple random generative model which assumes that the genotypes are independently sampled from the population. We first show that if the phased haplotypes are given, possibly with missing data, we can estimate the frequency of the haplotypes in the population by finding the global optimum of the likelihood function in polynomial time. If the haplotypes are not phased, finding the maximum value of the likelihood function is NP-hard. In this case we define an alternative likelihood function which can be thought of as a relaxed likelihood function. We show that the maximum relaxed likelihood can be found in polynomial time, and that the optimal solution of the relaxed likelihood converges asymptotically to the haplotype frequencies in the population. In contrast to previous approaches, our algorithms are guaranteed to converge in polynomial time to a global maximum of the different likelihood functions. We compared the performance of our algorithm to the widely used program PHASE, and we found that our estimates are at least 10% more accurate, and about ten times faster, than PHASE. Our techniques involve new algorithms in convex optimization. These algorithms may be of independent interest. Particularly, they may be helpful in other maximum likelihood problems arising from survey sampling.
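To make the likelihood setting concrete, here is a small EM-style iteration for the phased-haplotypes-with-missing-data case (an illustrative sketch only, not HAPLOFREQ's convex-optimization algorithm; the data and function names are ours): each observed haplotype with '?' entries is fractionally assigned to the complete haplotypes compatible with it.

```python
from itertools import product

def compatible(obs, hap):
    """True if observed string (with '?' for missing alleles) matches hap."""
    return all(o == "?" or o == h for o, h in zip(obs, hap))

def em_haplotype_freqs(observations, n_iters=200):
    L = len(observations[0])
    haps = ["".join(bits) for bits in product("01", repeat=L)]
    freqs = {h: 1.0 / len(haps) for h in haps}
    for _ in range(n_iters):
        counts = {h: 0.0 for h in haps}
        for obs in observations:
            comp = [h for h in haps if compatible(obs, h)]
            total = sum(freqs[h] for h in comp)
            for h in comp:
                # E-step: split each observation over compatible haplotypes.
                counts[h] += freqs[h] / total
        # M-step: re-estimate frequencies from the fractional counts.
        freqs = {h: c / len(observations) for h, c in counts.items()}
    return freqs

# Toy data: two-SNP haplotypes, one observation with a missing allele.
f = em_haplotype_freqs(["00", "00", "11", "0?"])
print(max(f, key=f.get))  # '00' dominates
```

On this toy input the iteration converges to frequencies 0.75 for "00" and 0.25 for "11"; the paper's contribution is replacing such heuristic iterations with algorithms guaranteed to reach a global optimum in polynomial time.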
The symmetric traveling salesman polytope: New facets from the graphical relaxation
 MATHEMATICS OF OPERATIONS RESEARCH
, 2007
Adiabatic quantum state generation
 SIAM Journal on Computing
Abstract

Cited by 9 (1 self)
Abstract. The design of new quantum algorithms has proven to be an extremely difficult task. This paper considers a different approach to this task by studying the problem of quantum state generation. We motivate this problem by showing that the entire class of statistical zero knowledge, which contains natural candidates for efficient quantum algorithms such as graph isomorphism and lattice problems, can be reduced to the problem of quantum state generation. To study quantum state generation, we define a paradigm which we call adiabatic state generation (ASG) and which is based on adiabatic quantum computation. The ASG paradigm is not meant to replace the standard quantum circuit model or to improve on it in terms of computational complexity. Rather, our goal is to provide a natural theoretical framework, in which quantum state generation algorithms could be designed. The new paradigm seems interesting due to its intriguing links to a variety of different areas: the analysis of spectral gaps and ground states of Hamiltonians in physics, rapidly mixing Markov chains, adiabatic computation, and approximate counting. To initiate the study of ASG, we prove several general lemmas that can serve as tools when using this paradigm. We demonstrate the application of the paradigm by using it to turn a variety of (classical) approximate counting algorithms into efficient quantum state generators of nontrivial quantum states, including, for example, the uniform superposition over all perfect matchings in a bipartite graph.
Task Graph Performance Bounds Through Comparison Methods
, 2001
Abstract

Cited by 6 (1 self)
When a parallel computation is represented in a formalism that imposes series-parallel structure on its task graph, it becomes amenable to automated analysis and scheduling. Unfortunately, its execution time will usually also increase as precedence constraints are added to ensure series-parallel structure. Bounding the slowdown ratio would allow an informed tradeoff between the benefits of a restrictive formalism and its cost in loss of performance. This dissertation deals with series-parallelising task graphs: adding precedence constraints to a task graph to make the resulting task graph series-parallel. The weak bounded slowdown conjecture for series-parallelising task graphs is introduced. This states that the slowdown is bounded if information about the workload can be used to guide the selection of which precedence constraints to add. A theory of best series-parallelisations is developed to investigate this conjecture. Partial evidence is presented that the weak slowdown bound is likely to be 4/3, and this bound is shown to be tight.
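The slowdown the abstract discusses can be made concrete with a toy critical-path computation (our own illustration, not from the dissertation): with unlimited processors, the makespan of a task graph is its longest weighted path, and adding precedence constraints to force series-parallel structure can only lengthen it.

```python
from functools import lru_cache

def critical_path(tasks, edges):
    """tasks: dict task -> duration; edges: set of (u, v) precedence pairs.
    Returns the longest weighted path length (makespan with unlimited CPUs)."""
    succs = {t: [] for t in tasks}
    for u, v in edges:
        succs[u].append(v)

    @lru_cache(maxsize=None)
    def finish(t):
        # Longest path starting at t: own duration plus the heaviest successor chain.
        return tasks[t] + max((finish(s) for s in succs[t]), default=0)

    return max(finish(t) for t in tasks)

# The "N-graph" a->c, a->d, b->d is the classic non-series-parallel pattern;
# adding b->c is one way to make it series-parallel.
tasks = {"a": 1, "b": 5, "c": 3, "d": 1}
n_graph = {("a", "c"), ("a", "d"), ("b", "d")}
sp_graph = n_graph | {("b", "c")}
print(critical_path(tasks, n_graph), critical_path(tasks, sp_graph))  # 6 8
```

Here this particular added constraint raises the makespan from 6 to 8, a slowdown of 4/3 on this instance; the dissertation's point is that a workload-aware choice of constraints keeps such ratios bounded.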
POLYHEDRAL COMBINATORICS
Abstract

Cited by 5 (0 self)
Polyhedral combinatorics is a rich mathematical subject motivated by integer and linear programming. While not exhaustive, this survey covers a variety of interesting topics, so let’s get right to it!
The complexity of generic primal algorithms for solving general integer programs
 MATH. OPER. RES
, 2001
Abstract

Cited by 4 (1 self)
Primal methods constitute a common approach to solving (combinatorial) optimization problems. Starting from a given feasible solution, they successively produce new feasible solutions with increasingly better objective function value until an optimal solution is reached. From an abstract point of view, an augmentation problem is solved in each iteration. That is, given a feasible point, these methods find an augmenting vector, if one exists. Usually, augmenting vectors with certain properties are sought to guarantee the polynomial running time of the overall algorithm. In this paper, we show that one can solve every integer programming problem in polynomial time provided one can efficiently solve the directed augmentation problem. The directed augmentation problem arises from the ordinary augmentation problem by splitting each direction into its positive and its negative part and by considering linear objectives on these parts. Our main result states that in order to get a polynomial-time algorithm for optimization it is sufficient to efficiently find, for any linear objective function in the positive and negative part, an arbitrary augmenting vector. This result also provides a general framework for the design of polynomial-time algorithms for specific combinatorial optimization problems. We demonstrate its applicability by considering the min-cost
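The primal scheme described above can be sketched generically (our own illustration; the oracle and the toy integer program are hypothetical, and a real augmentation oracle would exploit problem structure rather than brute force):

```python
def primal_method(x0, objective, augmenting_oracle):
    """Generic primal loop: from a feasible point, repeatedly apply an
    augmenting vector returned by the oracle until none exists.
    augmenting_oracle(x) returns z with x + z feasible and a strictly
    better objective, or None when x is optimal."""
    x = x0
    while True:
        z = augmenting_oracle(x)
        if z is None:
            return x
        x = tuple(a + b for a, b in zip(x, z))

# Toy integer program: minimize x + 2y over integers 0 <= x, y <= 3, x + y >= 2.
def feasible(p):
    return all(0 <= v <= 3 for v in p) and p[0] + p[1] >= 2

def obj(p):
    return p[0] + 2 * p[1]

def oracle(p):
    # Brute-force search over a small set of candidate augmenting steps.
    for z in [(-1, 0), (1, 0), (0, -1), (0, 1), (1, -1), (-1, 1)]:
        q = (p[0] + z[0], p[1] + z[1])
        if feasible(q) and obj(q) < obj(p):
            return z
    return None

print(primal_method((3, 3), obj, oracle))  # (2, 0)
```

Starting from the feasible point (3, 3), the loop walks down to the optimum (2, 0); the paper's result concerns when such oracles can be implemented efficiently for general integer programs.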
Offloading Floating Car Data
 IEEE WoWMoM
, 2013
Abstract

Cited by 3 (3 self)
Abstract—Floating Car Data (FCD) is currently collected by moving vehicles and uploaded to Internet-based processing centers through the cellular access infrastructure. As FCD is foreseen to rapidly become a pervasive technology, the present network paradigm risks not scaling well in the future, when the vast majority of automobiles will constantly sense their operation as well as the external environment and transmit such information towards the Internet. In order to relieve the cellular network from the additional load that widespread FCD can induce, we study a local gathering and fusion paradigm based on vehicle-to-vehicle (V2V) communication. We show how this approach can lead to significant gain, especially when and where the cellular network is stressed the most. Moreover, we propose several distributed FCD offloading schemes based on the principle above that, despite their simplicity, are extremely efficient and can reduce the FCD capacity demand at the access network by up to 95%.
Convex Discrete Optimization
, 2007
Abstract

Cited by 3 (0 self)
We develop an algorithmic theory of convex optimization over discrete sets. Using a combination of algebraic and geometric tools we are able to provide polynomial time algorithms for solving broad classes of convex combinatorial optimization problems and convex integer programming problems in variable dimension. We discuss some of the many applications of this theory, including quadratic programming, matroids, bin packing and cutting-stock problems, vector partitioning and clustering, multiway transportation problems, and privacy and confidential statistical data disclosure. Highlights of our work include a strongly polynomial time algorithm for convex and linear combinatorial optimization over any family presented by a membership oracle when the underlying polytope has few edge-directions; a new theory of so-termed n-fold integer programming, yielding polynomial time solution of important and natural classes of convex and linear integer programming problems in variable dimension; and a complete complexity classification of high dimensional transportation problems, with practical applications to fundamental