Results 1–10 of 39
Buy-at-Bulk Network Design
Abstract

Cited by 103 (0 self)
The essence of the simplest buy-at-bulk network design problem is buying network capacity "wholesale" to guarantee connectivity from all network nodes to a certain central network switch. Capacity is sold with "volume discount": the more capacity is bought, the cheaper is the price per unit of bandwidth. We provide an O(log^2 n) randomized approximation algorithm for the problem. This solves the open problem in [15]. The only previously known solutions were restricted to special cases (Euclidean graphs) [15]. We solve additional natural variations of the problem, such as multi-sink network design, as well as selective network design. These problems can be viewed as generalizations of the Generalized Steiner Connectivity and Prize-collecting salesman (k-MST) problems. In the selective network design problem, some subset of k wells must be connected to the (single) refinery, so that the total cost is minimized.
Approximating the Single-Sink Link Installation Problem in Network Design
, 1998
Abstract

Cited by 49 (2 self)
We initiate the algorithmic study of an important but NP-hard problem that arises commonly in network design. The input consists of (1) An undirected graph with one sink node and multiple source nodes, a specified length for each edge, and a specified demand, dem_v, for each source node v. (2) A small set of cable types, where each cable type is specified by its capacity and its cost per unit length. The cost per unit capacity per unit length of a high-capacity cable may be significantly less than that of a low-capacity cable, reflecting an economy of scale, i.e., the payoff for buying at bulk may be very high. The goal is to design a minimum-cost network that can (simultaneously) route all the demands at the sources to the sink, by installing zero or more copies of each cable type on each edge of the graph. An additional restriction is that the demand of each source must follow a single path. The problem is to find a route from each source node to the sink and to assign ca...
Sequencing from compomers: Using mass spectrometry for DNA de novo sequencing of 200+ nt
 J. Comput. Biol
Abstract

Cited by 22 (6 self)
Abstract. One of the main endeavors in today’s Life Science remains the efficient sequencing of long DNA molecules. Today, most de novo sequencing of DNA is still performed using electrophoresis-based Sanger sequencing, based on the Sanger concept of 1977. Methods using mass spectrometry to acquire the Sanger sequencing data are limited by short sequencing lengths of 15–25 nt. We propose a new method for DNA sequencing using base-specific cleavage and mass spectrometry, which appears to be a promising alternative to classical DNA sequencing approaches. A single-stranded DNA or RNA molecule is cleaved by a base-specific (bio)chemical reaction using, for example, RNases. The cleavage reaction is modified such that not all, but only a certain percentage of those bases are cleaved. The resulting mixture of fragments is then analyzed using MALDI-TOF mass spectrometry, whereby we acquire the molecular masses of fragments. For every peak in the mass spectrum, we calculate those base compositions that will potentially create a peak of the observed mass and, repeating the cleavage reaction for all four bases, finally try to uniquely reconstruct the underlying sequence from these observed spectra. This leads us to the combinatorial problem of Sequencing From Compomers and, finally, to the graph-theoretical problem of finding a walk in a subgraph of the de Bruijn graph. Application of this method to simulated data indicates that it might be capable of sequencing DNA molecules with 200+ nt.
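The per-peak computation described in the abstract is, at heart, an integer decomposition problem. A minimal Python sketch follows; the function name and the rounded integer nucleotide masses are illustrative assumptions of mine, and real spectra require floating-point masses with error tolerances:

```python
from typing import Dict, List

def compomers(masses: Dict[str, int], peak: int) -> List[Dict[str, int]]:
    """Enumerate all base compositions (compomers) whose total mass equals
    an observed peak mass. Integer masses are a simplification; real
    MALDI-TOF data needs floating-point masses and a mass tolerance."""
    bases = list(masses)
    results: List[Dict[str, int]] = []

    def extend(i: int, remaining: int, counts: List[int]) -> None:
        if i == len(bases):
            if remaining == 0:
                results.append(dict(zip(bases, counts)))
            return
        m = masses[bases[i]]
        # Try every feasible count of base i, then recurse on the rest.
        for c in range(remaining // m + 1):
            extend(i + 1, remaining - c * m, counts + [c])

    extend(0, peak, [])
    return results

# Rounded average nucleotide masses -- an assumption for illustration only.
NT = {"A": 313, "C": 289, "G": 329, "T": 304}
```

With these illustrative masses, a peak at 602 is explained by exactly one compomer, {"A": 1, "C": 1, "G": 0, "T": 0}; ambiguous peaks would return several candidate compomers, which is what makes the subsequent de Bruijn graph reconstruction necessary.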
A Fast and Simple Algorithm for the Money Changing Problem
 ALGORITHMICA
, 2007
Abstract

Cited by 15 (5 self)
The Money Changing Problem (MCP) can be stated as follows: Given k positive integers a_1 < ··· < a_k and a query integer M, is there a linear combination ∑_i c_i a_i = M with nonnegative integers c_i, a decomposition of M? If so, produce one or all such decompositions. The largest integer without such a decomposition is called the Frobenius number g(a_1, ..., a_k). A data structure called the residue table of a_1 words can be used to compute the Frobenius number in time O(a_1). We present an intriguingly simple algorithm for computing the residue table which runs in time O(k a_1), with no additional memory requirements, outperforming the best previously known algorithm. Simulations show that it performs well even on "hard" instances from the literature. In addition, we can employ the residue table to answer MCP decision instances in constant time, and a slight modification of size O(a_1) to compute one decomposition for a query M. Note that since both computing the Frobenius number and MCP (decision) are NP-hard, one cannot expect to find an algorithm that is polynomial in the size of the input, i.e., in k, log a_k, and log M. We then give an algorithm which, using a modification of the residue table, also constructible in O(k a_1) time, computes all decompositions of a query integer M. Its worst-case running time is O(k a_1) for each ...
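The residue-table idea can be sketched as follows; this is a reconstruction in the spirit of the round-robin scheme the abstract describes, not the authors' code, and the variable names are mine. Entry n[r] is the smallest decomposable integer congruent to r modulo a_1, so a query M is decomposable iff n[M mod a_1] ≤ M, and the Frobenius number is max_r n[r] − a_1:

```python
import math

def residue_table(coins):
    """Residue table for sorted positive integers coins, with a1 = coins[0].
    n[r] = smallest decomposable integer congruent to r (mod a1),
    or math.inf if that residue class contains no decomposable number."""
    a1 = coins[0]
    n = [math.inf] * a1
    n[0] = 0
    for a in coins[1:]:
        d = math.gcd(a1, a)
        for p in range(d):
            # Walk the residue class {r : r ≡ p (mod d)} as one cycle,
            # starting from its current minimum entry so one pass suffices.
            q = min(range(p, a1, d), key=lambda r: n[r])
            if n[q] == math.inf:
                continue
            val = n[q]
            for _ in range(a1 // d - 1):
                val += a
                q = val % a1
                if val < n[q]:
                    n[q] = val
                else:
                    val = n[q]
    return n

def frobenius(coins):
    """Largest non-decomposable integer (math.inf if gcd(coins) > 1)."""
    coins = sorted(coins)
    top = max(residue_table(coins))
    return top - coins[0] if top < math.inf else math.inf

def is_decomposable(n, a1, M):
    """Answer an MCP decision query in O(1) from a precomputed table."""
    return n[M % a1] <= M
```

For example, for the coin set {6, 9, 20} the table works out to a Frobenius number of 43, so 43 is not decomposable while every larger integer is. The table costs O(a_1) words and O(k a_1) time, matching the bounds quoted above.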
The Quantum and Classical Complexity of Translationally Invariant Tiling and Hamiltonian Problems
Abstract

Cited by 15 (0 self)
Abstract — We study the complexity of a class of problems involving satisfying constraints which remain the same under translations in one or more spatial directions. In this paper, we show hardness of a classical tiling problem on an N × N 2-dimensional grid and a quantum problem involving finding the ground state energy of a 1-dimensional quantum system of N particles. In both cases, the only input is N, provided in binary. We show that the classical problem is NEXP-complete and the quantum problem is QMA_EXP-complete. Thus, an algorithm for these problems which runs in time polynomial in N (exponential in the input size) would imply that EXP = NEXP or BQEXP = QMA_EXP, respectively. Although tiling in general is already known to be NEXP-complete, to our knowledge, all previous reductions require that either the set of tiles and their constraints or some varying boundary conditions be given as part of the input. In the problem considered here, these are fixed, constant-sized parameters of the problem. Instead, the problem instance is encoded solely in the size of the system.
The Complexity of Design Automation Problems
 Advanced Semiconductor Technology and Computer Systems
, 1980
Abstract

Cited by 13 (0 self)
This paper reviews several problems that arise in the area of design automation. Most of these problems are shown to be NP-hard. Further, it is unlikely that any of these problems can be solved by fast approximation algorithms that guarantee solutions that are always within some fixed relative error of the optimal solution value. This points out the importance of heuristics and other tools to obtain algorithms that perform well on the problem instances of interest.
A Polynomial-time Algorithm for the Change-Making Problem
 Department of Computer Science, Cornell University
, 1994
Abstract

Cited by 10 (0 self)
The change-making problem is the problem of representing a given value with the fewest coins possible from a given set of coin denominations. To solve this problem for arbitrary coin systems is NP-hard [L]. We investigate the problem of determining whether the greedy algorithm always produces the optimal result for a given coin system. Chang and Gill [CG] show that this can be solved in time polynomial in the size of the largest coin and in the number of coins. Kozen and Zaks [KZ] give a more efficient algorithm, and pose as an open problem whether there is an algorithm to solve this problem which is polynomial in the size of the input. In this paper, we will derive such an algorithm. We first obtain a characterization of the smallest counterexample (if there is one) for which the greedy algorithm is not optimal. We then derive a set of O(n^2) possible values (where n is the number of coins) which must contain the smallest counterexample. Each can be tested with O(n) arithmetic ope...
Approximability of sparse integer programs
 In Proc. 17th ESA
, 2009
Abstract

Cited by 9 (1 self)
The main focus of this paper is a pair of new approximation algorithms for sparse integer programs. First, for covering integer programs {min cx : Ax ≥ b, 0 ≤ x ≤ d} where A has at most k nonzeroes per row, we give a k-approximation algorithm. (We assume A, b, c, d are nonnegative.) For any k ≥ 2 and ε > 0, if P ≠ NP this ratio cannot be improved to k − 1 − ε, and under the unique games conjecture this ratio cannot be improved to k − ε. One key idea is to replace individual constraints by others that have better rounding properties but the same nonnegative integral solutions; another critical ingredient is knapsack-cover inequalities. Second, for packing integer programs {max cx : Ax ≤ b, 0 ≤ x ≤ d} where A has at most k nonzeroes per column, we give a 2^k · k^2-approximation algorithm. This is the first polynomial-time approximation algorithm for this problem with approximation ratio depending only on k, for any k > 1. Our approach starts from iterated LP relaxation, and then uses probabilistic and greedy methods to recover a feasible solution. Note added after publication: This version includes subsequent developments: an O(k^2)-approximation for the latter problem using the iterated rounding framework, and several literature reference updates, including an O(k)-approximation for the same problem by Bansal et al.
Optimal bounds for the change-making problem
 Theoretical Comput. Sci
, 1994
Abstract

Cited by 7 (0 self)
Abstract. The change-making problem is the problem of representing a given value with the fewest coins possible. We investigate the problem of determining whether the greedy algorithm produces an optimal representation of all amounts for a given set of coin denominations 1 = c_1 < c_2 < ··· < c_m. Chang and Gill [1] show that if the greedy algorithm is not always optimal, then there exists a counterexample x in the range c_3 ≤ x < c_m(c_m c_{m−1} + c_m − 3c_{m−1}) / (c_m − c_{m−1}). To test for the existence of such a counterexample, Chang and Gill propose computing and comparing the greedy and optimal representations of all x in this range. In this paper we show that if a counterexample exists, then the smallest one lies in the range c_3 + 1 < x < c_m + c_{m−1}, and these bounds are tight. Moreover, we give a simple test for the existence of a counterexample that does not require the calculation of optimal representations. In addition, we give a complete characterization of three-coin systems and an efficient algorithm for all systems with a fixed number of coins. Finally, we show that a related problem is coNP-complete.
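Under the assumption that c_1 = 1 (so every amount is representable), the narrowed range c_3 + 1 < x < c_m + c_{m−1} yields a straightforward brute-force test. The sketch below compares greedy against a textbook dynamic program over just that range; note this is not the paper's own test, which avoids computing optimal representations altogether:

```python
def greedy(coins, x):
    """Number of coins the greedy algorithm uses for amount x."""
    count = 0
    for c in sorted(coins, reverse=True):
        count += x // c
        x %= c
    return count

def opt(coins, x):
    """Minimum number of coins for amount x (classic DP; assumes 1 in coins)."""
    dp = [0] + [float("inf")] * x
    for v in range(1, x + 1):
        dp[v] = 1 + min(dp[v - c] for c in coins if c <= v)
    return dp[x]

def greedy_always_optimal(coins):
    """Check only c3 + 1 < x < c_m + c_{m-1}: by the bound above, the
    smallest counterexample (if any) lies in this range."""
    coins = sorted(coins)
    if len(coins) < 3:
        return True  # assumed: one- and two-coin systems with c1 = 1
    lo, hi = coins[2] + 2, coins[-1] + coins[-2] - 1
    return all(greedy(coins, x) == opt(coins, x) for x in range(lo, hi + 1))
```

For instance, US coinage {1, 5, 10, 25} passes, while {1, 3, 4} fails at x = 6, where greedy spends three coins (4 + 1 + 1) against the optimal two (3 + 3); 6 indeed lies strictly between c_3 + 1 = 5 and c_m + c_{m−1} = 7.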