Results 1 - 10 of 314
Factoring polynomials with rational coefficients
- Math. Ann., 1982
"... In this paper we present a polynomial-time algorithm to solve the following problem: given a non-zero polynomial fe Q[X] in one variable with rational coefficients, find the decomposition of f into irreducible factors in Q[X]. It is well known that this is equivalent to factoring primitive polynomia ..."
Abstract
-
Cited by 982 (11 self)
- Add to MetaCart
(Show Context)
Abstract: In this paper we present a polynomial-time algorithm to solve the following problem: given a non-zero polynomial f ∈ Q[X] in one variable with rational coefficients, find the decomposition of f into irreducible factors in Q[X]. It is well known that this is equivalent to factoring primitive polynomials f ∈ Z[X] into irreducible factors in Z[X]. Here we call f ∈ Z[X] primitive if the greatest common divisor of its coefficients (the content of f) is 1. Our algorithm performs well in practice, cf. [8]. Its running time, measured in bit operations, is O(n^12 + n^9 (log |f|)^3), where f ∈ Z[X] is the polynomial to be factored, n = deg(f) is the degree of f, and |f| = (Σ_i a_i^2)^(1/2) for a polynomial f = Σ_i a_i X^i with real coefficients a_i. An outline of the algorithm is as follows. First we find, for a suitable small prime number p, a p-adic irreducible factor h of f, to a certain precision. This is done with Berlekamp's algorithm for factoring polynomials over small finite fields, combined with Hensel's lemma. Next we look for the irreducible factor h_0 of f in ...
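As a rough illustration of the first stage of this outline, the sketch below (ours, not the authors') factors a small example polynomial modulo a prime with sympy's factor_list; the Hensel-lifting and lattice-reduction stages that recover the true rational factors are omitted.

```python
# Toy illustration of the first stage of the outline: factor f modulo a
# suitable small prime p, the starting point for Hensel lifting. The
# lattice-reduction stage that recovers the rational factors is omitted.
from sympy import symbols, factor_list

x = symbols('x')
f = x**4 - 1                      # example polynomial in Z[x]

p = 5                             # a small prime not dividing disc(f)
const, mod_factors = factor_list(f, x, modulus=p)
print(f"factors of f mod {p}:")
for g, mult in mod_factors:
    print(f"  ({g})^{mult}")

# For comparison, the true factorization over Q:
print("factors over Q:", factor_list(f, x)[1])
```

Note how x^4 - 1 splits completely into linear factors mod 5 but only into (x - 1)(x + 1)(x^2 + 1) over Q; bridging that mismatch is exactly what the lattice step is for.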
Worst-case equilibria
- In Proceedings of the 16th Annual Symposium on Theoretical Aspects of Computer Science, 1999
"... In a system in which noncooperative agents share a common resource, we propose the ratio between the worst possible Nash equilibrium and the social optimum as a measure of the effectiveness of the system. Deriving upper and lower bounds for this ratio in a model in which several agents share a ver ..."
Abstract
-
Cited by 851 (17 self)
- Add to MetaCart
Abstract: In a system in which noncooperative agents share a common resource, we propose the ratio between the worst possible Nash equilibrium and the social optimum as a measure of the effectiveness of the system. Deriving upper and lower bounds for this ratio in a model in which several agents share a very simple network leads to some interesting mathematics, results, and open problems.
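The proposed ratio was later named the price of anarchy. A brute-force toy of our own (pure equilibria of weighted tasks on identical machines, simpler than the paper's mixed-equilibrium network model) computes it on tiny instances:

```python
# Brute-force toy for the proposed ratio (later called the price of
# anarchy): enumerate all pure assignments of weighted tasks to m
# identical machines, keep the pure Nash equilibria, and compare the
# worst equilibrium's makespan with the social optimum. Tiny instances
# only -- the enumeration is exponential.
from itertools import product

def loads(assign, weights, m):
    load = [0.0] * m
    for task, machine in enumerate(assign):
        load[machine] += weights[task]
    return load

def is_pure_nash(assign, weights, m):
    load = loads(assign, weights, m)
    for task, machine in enumerate(assign):
        for other in range(m):
            # a task compares its current machine's load with the load
            # it would experience after moving there
            if other != machine and load[other] + weights[task] < load[machine]:
                return False
    return True

def worst_case_ratio(weights, m):
    opt = float('inf')
    worst_nash = 0.0
    for assign in product(range(m), repeat=len(weights)):
        cost = max(loads(assign, weights, m))
        opt = min(opt, cost)
        if is_pure_nash(assign, weights, m):
            worst_nash = max(worst_nash, cost)
    return worst_nash / opt

# Weights 2,2,1,1 on two machines: the optimum (makespan 3) coexists
# with a pure Nash equilibrium of makespan 4, giving ratio 4/3.
print(worst_case_ratio([2.0, 2.0, 1.0, 1.0], 2))
```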
Proportionate progress: A notion of fairness in resource allocation
- Algorithmica, 1996
"... Given a set of n tasks and m resources, where each task x has a rational weight x:w = x:e=x:p; 0 < x:w < 1, a periodic schedule is one that allocates a resource to a task x for exactly x:e time units in each interval [x:p k; x:p (k + 1)) for all k 2 N. We de ne a notion of proportionate progre ..."
Abstract
-
Cited by 322 (26 self)
- Add to MetaCart
(Show Context)
Abstract: Given a set of n tasks and m resources, where each task x has a rational weight x.w = x.e/x.p with 0 < x.w < 1, a periodic schedule is one that allocates a resource to a task x for exactly x.e time units in each interval [x.p·k, x.p·(k+1)) for all k ∈ N. We define a notion of proportionate progress, called P-fairness, and use it to design an efficient algorithm which solves the periodic scheduling problem. Keywords: Euclid's algorithm, fairness, network flow, periodic scheduling, resource allocation.
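The definition lends itself to a direct check. Below is a minimal P-fairness verifier (our sketch, with hypothetical task names 'a' and 'b'): at every integer time t, each task's allocation must stay strictly within 1 of the fluid ideal x.w·t.

```python
# Minimal checker for the P-fairness condition under the abstract's
# definitions: task x with weight x.w = x.e / x.p is P-fairly scheduled
# if at every integer time t the number of slots it has received stays
# strictly within 1 of the "fluid" ideal x.w * t.
from fractions import Fraction

def is_pfair(schedule, weights):
    """schedule: list of time slots; schedule[t] is the set of task ids
    holding a resource during slot [t, t+1). weights: task id -> Fraction."""
    allocated = {x: 0 for x in weights}
    for t, active in enumerate(schedule, start=1):
        for x in active:
            allocated[x] += 1
        for x, w in weights.items():
            if abs(w * t - allocated[x]) >= 1:   # lag must stay in (-1, 1)
                return False
    return True

# Two tasks sharing one resource, each with weight 1/2: alternating
# slots are P-fair, while giving one task two slots in a row is not.
w = {'a': Fraction(1, 2), 'b': Fraction(1, 2)}
print(is_pfair([{'a'}, {'b'}, {'a'}, {'b'}], w))   # True
print(is_pfair([{'a'}, {'a'}, {'b'}, {'b'}], w))   # False
```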
Lattice Basis Reduction: Improved Practical Algorithms and Solving Subset Sum Problems
- Math. Programming, 1993
"... We report on improved practical algorithms for lattice basis reduction. We propose a practical floating point version of the L3-algorithm of Lenstra, Lenstra, Lov'asz (1982). We present a variant of the L3- algorithm with "deep insertions" and a practical algorithm for block Korkin--Z ..."
Abstract
-
Cited by 319 (7 self)
- Add to MetaCart
Abstract: We report on improved practical algorithms for lattice basis reduction. We propose a practical floating-point version of the L³ algorithm of Lenstra, Lenstra, and Lovász (1982). We present a variant of the L³ algorithm with "deep insertions" and a practical algorithm for block Korkin-Zolotarev reduction, a concept introduced by Schnorr (1987). Empirical tests show that the strongest of these algorithms solves almost all subset sum problems with up to 66 random weights of arbitrary bit length within at most a few hours on a UNISYS 6000/70, or within a couple of minutes on a SPARC 1+ computer.
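For experimentation, here is a compact exact-arithmetic L³ reduction in pure Python (a textbook rendering with δ = 3/4, ours, not the paper's floating-point or deep-insertion variants); exact Fractions trade away the stability issues the paper addresses at the cost of a large slowdown.

```python
# Compact exact-arithmetic L3 reduction (delta = 3/4) over Fractions.
# Gram-Schmidt data are naively recomputed after every change, which is
# slow but keeps the sketch short and obviously correct.
from fractions import Fraction

def gram_schmidt(B):
    """Return orthogonalized rows B* and the coefficients mu."""
    n = len(B)
    Bs = [row[:] for row in B]
    mu = [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        for j in range(i):
            mu[i][j] = (sum(a * b for a, b in zip(B[i], Bs[j]))
                        / sum(x * x for x in Bs[j]))
            Bs[i] = [a - mu[i][j] * b for a, b in zip(Bs[i], Bs[j])]
    return Bs, mu

def norm2(v):
    return sum(x * x for x in v)

def lll(basis, delta=Fraction(3, 4)):
    B = [[Fraction(x) for x in row] for row in basis]
    n, k = len(B), 1
    while k < n:
        Bs, mu = gram_schmidt(B)
        for j in range(k - 1, -1, -1):           # size-reduce row k
            q = round(mu[k][j])
            if q:
                B[k] = [a - q * b for a, b in zip(B[k], B[j])]
                Bs, mu = gram_schmidt(B)
        if norm2(Bs[k]) >= (delta - mu[k][k - 1] ** 2) * norm2(Bs[k - 1]):
            k += 1                               # Lovasz condition holds
        else:
            B[k - 1], B[k] = B[k], B[k - 1]      # swap and back up
            k = max(k - 1, 1)
    return [[int(x) for x in row] for row in B]

print(lll([[1, 1, 1], [-1, 0, 2], [3, 5, 6]]))
```

The deep-insertion and block Korkin-Zolotarev variants studied in the paper strengthen the swap step of this same skeleton.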
The NP-completeness column: an ongoing guide
- Journal of Algorithms, 1987
"... This is the nineteenth edition of a (usually) quarterly column that covers new developments in the theory of NP-completeness. The presentation is modeled on that used by M. R. Garey and myself in our book "Computers and Intractability: A Guide to the Theory of NP-Completeness," W. H. Freem ..."
Abstract
-
Cited by 242 (0 self)
- Add to MetaCart
(Show Context)
Abstract: This is the nineteenth edition of a (usually) quarterly column that covers new developments in the theory of NP-completeness. The presentation is modeled on that used by M. R. Garey and myself in our book "Computers and Intractability: A Guide to the Theory of NP-Completeness," W. H. Freeman & Co., New York, 1979 (hereinafter referred to as "[G&J]"; previous columns will be referred to by their dates). A background equivalent to that provided by [G&J] is assumed, and, when appropriate, cross-references will be given to that book and the list of problems (NP-complete and harder) presented there. Readers who have results they would like mentioned (NP-hardness, PSPACE-hardness, polynomial-time solvability, etc.) or open problems they would like publicized should ...
Efficient and exact data dependence analysis
- Proceedings of the ACM SIGPLAN '91 Conference on Programming Language Design and Implementation, 1991
"... Data dependence testing is the basic step in detecting loop level parallelism in numerical programs. The problem is equivalent to integer linear programming and thus in general cannot be solved efficiently. Current methods in use employ inexact methods that sacrifice potential parallelism in order t ..."
Abstract
-
Cited by 125 (8 self)
- Add to MetaCart
Abstract: Data dependence testing is the basic step in detecting loop-level parallelism in numerical programs. The problem is equivalent to integer linear programming and thus in general cannot be solved efficiently. Current methods in use employ inexact tests that sacrifice potential parallelism in order to improve compiler efficiency. This paper shows that in practice, data dependence can be computed exactly and efficiently. There are three major ideas that lead to this result. First, we have developed and assembled a small set of efficient algorithms, each one exact for special-case inputs. Combined with a moderately expensive backup test, they are exact for all the cases we have seen in practice. Second, we introduce a memoization technique to save the results of previous tests, thus avoiding calling the data dependence routines multiple times on the same input. Third, we show that this approach can be extended both to compute distance and direction vectors and to use unknown symbolic terms without any loss of accuracy or efficiency. We have implemented our algorithm in the SUIF system, a general-purpose compiler system developed at Stanford. We ran the algorithm on the PERFECT Club Benchmarks and our data dependence analyzer gave an exact solution in all cases efficiently.
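One classic member of the kind of special-case test suite described here is the GCD test for a pair of affine array subscripts. A minimal sketch (ours; it ignores loop bounds and assumes nonzero coefficients, cases the paper's exact framework does handle):

```python
# Sketch of a cheap special-case dependence test: the GCD test. A flow
# from A[a1*i + c1] (write) to A[a2*j + c2] (read) requires an integer
# solution of a1*i - a2*j = c2 - c1, which exists iff gcd(a1, a2)
# divides c2 - c1. Loop bounds are ignored here; exactness in general
# needs the further tests and backup method discussed in the paper.
from math import gcd

def gcd_test(a1, c1, a2, c2):
    """True if a dependence is possible, False if definitely independent.
    Assumes a1 and a2 are nonzero."""
    return (c2 - c1) % gcd(a1, a2) == 0

# for i: A[2*i] = ...; ... = A[2*i + 1]  -> accesses never overlap
print(gcd_test(2, 0, 2, 1))   # False: independent
# for i: A[2*i] = ...; ... = A[4*i + 2]  -> overlap not ruled out
print(gcd_test(2, 0, 4, 2))   # True: dependence possible
```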
Las Vegas algorithms for linear and integer programming when the dimension is small
- J. ACM, 1995
"... Abstract. This paper gives an algcmthm for solving linear programming problems. For a problem with tz constraints and d variables, the algorithm requires an expected O(d’n) + (log n)o(d)d’’+(’(’) + o(dJA log n) arithmetic operations, as rz ~ ~. The constant factors do not depend on d. Also, an algor ..."
Abstract
-
Cited by 115 (3 self)
- Add to MetaCart
(Show Context)
Abstract: This paper gives an algorithm for solving linear programming problems. For a problem with n constraints and d variables, the algorithm requires an expected O(d^2 n) + (log n) O(d)^(d/2 + O(1)) + O(d^4 √n log n) arithmetic operations, as n → ∞. The constant factors do not depend on d. Also, an algorithm is given for integer linear programming. Let φ bound the number of bits required to specify the rational numbers defining an input constraint or the objective function vector. Let n and d be as before. Then the algorithm requires expected O(2^d dn + 8^d d √(n ln n) ln n) + d^(O(d)) φ ln n operations on numbers with O(dφ) bits, as n → ∞, where the constant factors do not depend on d or φ. The expectations are with respect to the random choices made by the algorithms, and the bounds hold for any given input. The technique can be extended to other convex programming problems. For example, an algorithm for finding the smallest sphere enclosing a set of n points in E^d has the same time bound.
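The closing remark about smallest enclosing spheres invites a small illustration: Welzl's randomized incremental algorithm for d = 2, a Las Vegas method in the same spirit (our sketch, not Clarkson's algorithm itself).

```python
# Welzl-style Las Vegas computation of the smallest enclosing circle
# (d = 2). Expected linear time after the random shuffle; degenerate
# (collinear) boundary triples are ignored for brevity.
import random

def circle_from(points):
    """Exact circle determined by 0, 1, 2, or 3 boundary points."""
    if not points:
        return ((0.0, 0.0), 0.0)
    if len(points) == 1:
        return (points[0], 0.0)
    if len(points) == 2:
        (x1, y1), (x2, y2) = points
        return (((x1 + x2) / 2, (y1 + y2) / 2),
                ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5 / 2)
    (ax, ay), (bx, by), (cx, cy) = points        # circumcircle
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax * ax + ay * ay) * (by - cy) + (bx * bx + by * by) * (cy - ay)
          + (cx * cx + cy * cy) * (ay - by)) / d
    uy = ((ax * ax + ay * ay) * (cx - bx) + (bx * bx + by * by) * (ax - cx)
          + (cx * cx + cy * cy) * (bx - ax)) / d
    return ((ux, uy), ((ax - ux) ** 2 + (ay - uy) ** 2) ** 0.5)

def inside(circle, p, eps=1e-9):
    (cx, cy), r = circle
    return (p[0] - cx) ** 2 + (p[1] - cy) ** 2 <= (r + eps) ** 2

def welzl(pts, boundary=()):
    if not pts or len(boundary) == 3:
        return circle_from(list(boundary))
    p, rest = pts[0], pts[1:]
    c = welzl(rest, boundary)
    return c if inside(c, p) else welzl(rest, boundary + (p,))

pts = [(random.random(), random.random()) for _ in range(50)]
random.shuffle(pts)            # the randomness is what makes it Las Vegas
print(welzl(pts))
```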
On the Limits of Non-Approximability of Lattice Problems
1998
Cited by 102 (3 self)
Abstract: We show simple constant-round interactive proof systems for problems capturing the approximability, to within a factor of √n, of optimization problems in integer lattices; specifically, the closest vector problem (CVP) and the shortest vector problem (SVP). These interactive proofs are for the "coNP direction"; that is, we give an interactive protocol showing that a vector is "far" from the lattice (for CVP), and an interactive protocol showing that the shortest lattice vector is "long" (for SVP). Furthermore, these interactive proof systems are Honest-Verifier Perfect Zero-Knowledge. We conclude that approximating CVP (resp., SVP) within a factor of √n is in NP ∩ coAM. Thus, it seems unlikely that approximating these problems to within a √n factor is NP-hard. Previously, for the CVP (resp., SVP) problem, Lagarias et al., Håstad, and Banaszczyk showed that the gap problem corresponding to approximating CVP (resp., SVP) within n is in NP ∩ coNP. On the other hand, Ar...
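To convey the ball-overlap intuition behind such coNP-direction protocols, here is a toy simulation in the lattice Z², where the closest lattice vector is just coordinatewise rounding (our own illustration; the paper's actual protocol works in high dimension with carefully chosen parameters).

```python
# Toy simulation of the ball-overlap idea: the verifier secretly
# perturbs a random point of either the lattice Z^2 or the coset
# v + Z^2; an all-powerful prover answers which, by comparing distances.
# If v is far from Z^2 the two noise balls are disjoint and the prover
# is always right; if v is close they overlap and the prover must err
# noticeably often.
import math
import random

def dist_to_coset(p, shift):
    """Distance from p to shift + Z^2 (reduce and round coordinatewise)."""
    dx = p[0] - shift[0]
    dy = p[1] - shift[1]
    return math.hypot(dx - round(dx), dy - round(dy))

def run_protocol(v, radius, trials=20000):
    correct = 0
    for _ in range(trials):
        bit = random.randrange(2)                 # verifier's secret coin
        shift = (0.0, 0.0) if bit == 0 else v
        theta = random.uniform(0, 2 * math.pi)    # uniform noise in a disk
        r = radius * math.sqrt(random.random())
        p = (shift[0] + r * math.cos(theta), shift[1] + r * math.sin(theta))
        guess = 0 if dist_to_coset(p, (0.0, 0.0)) <= dist_to_coset(p, v) else 1
        correct += (guess == bit)
    return correct / trials

print(run_protocol((0.5, 0.5), 0.25))    # v far from Z^2: near-perfect
print(run_protocol((0.05, 0.05), 0.25))  # v close: noticeable error rate
```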
Effective lattice point counting in rational convex polytopes
- Journal of Symbolic Computation, 2003
"... This paper discusses algorithms and software for the enumeration of all lattice points inside a rational convex polytope: we describe LattE, a computer package for lattice point enumeration which contains the first implementation of A. Barvinok's algorithm [8]. We report on computational experi ..."
Abstract
-
Cited by 98 (14 self)
- Add to MetaCart
(Show Context)
Abstract: This paper discusses algorithms and software for the enumeration of all lattice points inside a rational convex polytope: we describe LattE, a computer package for lattice point enumeration which contains the first implementation of A. Barvinok's algorithm [8]. We report on computational experiments with multiway contingency tables, knapsack-type problems, rational polygons, and flow polytopes. We show that these symbolic-algebraic ideas surpass traditional branch-and-bound enumeration and that in some instances LattE is the only software capable of counting. Using LattE, we have also computed new formulas for Ehrhart (quasi-)polynomials of interesting families of polytopes (hypersimplices, truncated cubes, etc.). We end with a survey of other "algebraic-analytic" algorithms, including a "polar" variation of Barvinok's algorithm which is very fast when the number of facet-defining inequalities is much smaller than the number of vertices.
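For contrast, the traditional baseline amounts to enumerating a bounding box and testing Ax ≤ b directly; below is a minimal brute-force counter (our sketch, not LattE), checked against a known Ehrhart count.

```python
# Brute-force baseline that LattE-style algebraic counting is measured
# against: enumerate integer points in a bounding box and test A x <= b
# directly. Exponential in the box size, which is why Barvinok's
# short-rational-function approach wins on hard instances.
from itertools import product

def count_lattice_points(A, b, box):
    """Count x in Z^d with A x <= b, scanning the given per-coordinate
    inclusive ranges. A: list of rows, b: list of bounds."""
    count = 0
    for x in product(*(range(lo, hi + 1) for lo, hi in box)):
        if all(sum(a * xi for a, xi in zip(row, x)) <= bi
               for row, bi in zip(A, b)):
            count += 1
    return count

# The dilated standard triangle 0 <= x, 0 <= y, x + y <= n contains
# (n+1)(n+2)/2 lattice points -- its Ehrhart polynomial.
n = 10
A = [[-1, 0], [0, -1], [1, 1]]
b = [0, 0, n]
print(count_lattice_points(A, b, [(0, n), (0, n)]))   # 66 = 11*12/2
```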