Results 1–10 of 40
A.: Cuts from proofs: A complete and practical technique for solving linear inequalities over integers
 In: CAV
, 2009
Abstract

Cited by 23 (10 self)
Abstract. We propose a novel, sound, and complete Simplex-based algorithm for solving linear inequalities over integers. Our algorithm, which can be viewed as a semantic generalization of the branch-and-bound technique, systematically discovers and excludes entire subspaces of the solution space containing no integer points. Our main insight is that by focusing on the defining constraints of a vertex, we can compute a proof of unsatisfiability for the intersection of the defining constraints and use this proof to systematically exclude subspaces of the feasible region with no integer points. We show experimentally that our technique significantly outperforms the top four competitors in the QF_LIA category of SMT-COMP '08 when solving linear inequalities over integers.
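To make the problem this abstract addresses concrete: the task is to decide whether a system of linear inequalities A x ≤ b has an integer solution. The sketch below is a deliberately naive brute-force baseline over a bounded box, not the paper's proof-based subspace-exclusion algorithm; the function name and box bounds are illustrative assumptions.

```python
from itertools import product

def integer_feasible(A, b, lo, hi):
    """Brute-force search for an integer point x with A x <= b (componentwise),
    with each x[i] in [lo[i], hi[i]]. Exponential in the dimension; this only
    states the problem, it is not the paper's algorithm."""
    for x in product(*(range(l, h + 1) for l, h in zip(lo, hi))):
        if all(sum(aij * xj for aij, xj in zip(row, x)) <= bi
               for row, bi in zip(A, b)):
            return x
    return None
```

For example, the system 2x >= 1 and 2x <= 1 has the rational solution x = 1/2 but no integer solution, which is exactly the kind of feasible-but-integer-free region the paper's technique excludes wholesale.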
A Family of Sparse Polynomial Systems Arising in Chemical Reaction Systems
, 1999
Abstract

Cited by 23 (2 self)
A class of sparse polynomial systems is investigated which is defined by a weighted directed graph and a weighted bipartite graph. They arise in the model of mass-action kinetics for chemical reaction systems. In this application the number of real positive solutions within a certain affine subspace is of particular interest. We show that the simplest cases are equivalent to binomial systems, while in general the solution structure is largely determined by the properties of the two graphs. First we recall results by Feinberg and give rigorous proofs. Secondly, we explain how the graphs determine the Newton polytopes of the system of sparse polynomials and thus determine the solution structure. Results on positive solutions from real algebraic geometry are applied to this particular situation. Examples illustrate the theoretical results.
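The simplest case mentioned in the abstract, a binomial system, can be illustrated with the reversible reaction A ⇌ B under mass-action kinetics: at steady state k1·a = k2·b, together with the conservation law a + b = total. This toy example and the function name are assumptions for illustration, not taken from the paper.

```python
from fractions import Fraction

def ab_steady_state(k1, k2, total):
    """Positive steady state of A <=> B with mass-action rates k1 (forward)
    and k2 (backward): solve the binomial system k1*a = k2*b subject to the
    conservation relation a + b = total, using exact rational arithmetic."""
    k1, k2, total = map(Fraction, (k1, k2, total))
    a = k2 * total / (k1 + k2)
    b = k1 * total / (k1 + k2)
    return a, b
```

For positive rates and total mass there is exactly one positive solution on the conservation line, matching the abstract's interest in positive solutions within an affine subspace.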
Computing Hermite and Smith Normal Forms of Triangular Integer Matrices
 Linear Algebra Appl
, 1996
Abstract

Cited by 22 (4 self)
This paper considers the problem of transforming a triangular integer input matrix to canonical Hermite and Smith normal form. We provide algorithms and prove deterministic running times for both transformation problems that are linear (hence optimal) in the matrix dimension. The algorithms are easily implemented, assume standard integer multiplication, and admit excellent performance in practice. The results presented here lead to faster practical algorithms for computing the Hermite and Smith normal form of an arbitrary (non-triangular) integer input matrix. 1 Introduction It follows from Hermite [Her51] that any m × n rank-n integer matrix A can be transformed, using a sequence of integer row operations, to an upper triangular matrix H that has j-th diagonal entry h_j positive for 1 ≤ j ≤ n and off-diagonal entries h̄_ij satisfying 0 ≤ h̄_ij < h_j for 1 ≤ i < j ≤ n. The matrix H, called the Hermite normal form of A, always exists and is unique. In this paper we consider the...
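For an upper-triangular matrix with nonzero diagonal, the normalization the abstract describes (positive diagonal, off-diagonal entries reduced modulo the diagonal below them) is a short sequence of row operations. This is the textbook reduction, shown here as a sketch; the paper's contribution is doing it in time linear in the dimension, which this naive version does not achieve.

```python
def hnf_upper_triangular(H):
    """Hermite-normalize a square upper-triangular integer matrix with
    nonzero diagonal, via integer row operations (naive textbook version)."""
    H = [row[:] for row in H]
    n = len(H)
    for j in range(n):
        if H[j][j] < 0:                  # make the j-th diagonal entry positive
            H[j] = [-x for x in H[j]]
        for i in range(j):               # reduce entries above the diagonal
            q = H[i][j] // H[j][j]       # floor division gives 0 <= remainder < H[j][j]
            H[i] = [a - q * b for a, b in zip(H[i], H[j])]
    return H
```

Since row j has zeros left of column j, subtracting multiples of it never disturbs already-reduced columns to its left.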
Computing Popov and Hermite forms of polynomial matrices
 In International Symposium on Symbolic and Algebraic Computation, Zurich, Suisse
, 1996
Abstract

Cited by 18 (10 self)
For a polynomial matrix P(z) of degree d in M_{n,n}(K[z]), where K is a commutative field, a reduction to the Hermite normal form can be computed in O(nd M(n) + M(nd)) arithmetic operations, where M(n) is the time required to multiply two n × n matrices over K. Further, a reduction can be computed using O(log^{κ+1}(nd)) parallel arithmetic steps and O(L(nd)) processors if the same processor bound holds with time O(log^κ(nd)) for determining the lexicographically first maximal linearly independent subset of the set of columns of an nd × nd matrix over K. These results are obtained by applying, in the matrix case, the techniques used in the scalar case of the gcd of polynomials.
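The "scalar case" the abstract lifts to matrices is the Euclidean algorithm for polynomial gcd. A minimal exact-arithmetic sketch (coefficient lists with the leading coefficient first; all names are illustrative):

```python
from fractions import Fraction

def poly_divmod(a, b):
    """Quotient and remainder of a by b over Q; coefficients highest-degree first."""
    a = [Fraction(c) for c in a]
    b = [Fraction(c) for c in b]
    q = [Fraction(0)] * max(len(a) - len(b) + 1, 1)
    while len(a) >= len(b) and any(a):
        d = len(a) - len(b)
        c = a[0] / b[0]                      # cancel the leading term
        q[len(q) - 1 - d] = c
        a = [x - c * y for x, y in zip(a, b + [Fraction(0)] * d)][1:]
    return q, a

def strip(p):                                # drop leading zero coefficients
    i = 0
    while i < len(p) - 1 and p[i] == 0:
        i += 1
    return p[i:]

def poly_gcd(a, b):
    """Monic gcd of two polynomials over Q via Euclid's algorithm."""
    a, b = strip([Fraction(c) for c in a]), strip([Fraction(c) for c in b])
    while any(b):
        _, r = poly_divmod(a, b)
        a, b = b, strip(r)
    return [c / a[0] for c in a]             # normalize to a monic polynomial
```

For instance, gcd(x² − 1, x² − 2x + 1) = x − 1.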
Shifted Normal Forms of Polynomial Matrices
 Proceedings of the International Symposium on Symbolic and Algebraic Computation, ISSAC'99
, 1999
Abstract

Cited by 18 (10 self)
In this paper we study the problem of transforming, via invertible column operations, a matrix polynomial into a variety of shifted forms. Examples of forms covered in our framework include a column reduced form, a triangular form, a Hermite normal form, and a Popov normal form, along with their shifted counterparts. By obtaining degree bounds for unimodular multipliers of shifted Popov forms, we are able to embed the problem of computing a normal form into one of determining a shifted form of a minimal polynomial basis for an associated matrix polynomial. Shifted minimal polynomial bases can be computed via sigma bases [1, 2] and in Popov form via Mahler systems [5]. The latter method gives a fraction-free algorithm for computing matrix normal forms. Key words: Popov form, Hermite normal form. 1 Introduction Matrix polynomial arithmetic is fundamental to many applications in science and engineering. It is encountered in linear systems theory [12], determining minimal partial realization...
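One of the forms listed, the column reduced form, has a direct characterization: the matrix of leading coefficients (the coefficient of z^{d_j} in column j, where d_j is that column's degree) must be nonsingular. A sketch of that check, assuming a dense coefficient-list representation (constant term first) and a square matrix with no zero column; all function names are illustrative:

```python
def degree(p):                     # p: coefficient list, constant term first
    d = len(p) - 1
    while d >= 0 and p[d] == 0:
        d -= 1
    return d                       # -1 for the zero polynomial

def det(M):                        # Laplace expansion; fine for tiny matrices
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def is_column_reduced(M):
    """M[i][j] is the coefficient list of the (i, j) entry of a square
    polynomial matrix with no zero column."""
    n = len(M)
    cdeg = [max(degree(M[i][j]) for i in range(n)) for j in range(n)]
    lead = [[M[i][j][cdeg[j]] if degree(M[i][j]) == cdeg[j] else 0
             for j in range(n)] for i in range(n)]
    return det(lead) != 0
```

For example, [[z², z], [0, 1]] is not column reduced (its leading-coefficient matrix is singular), but after the unimodular column operation c1 := c1 − z·c2 the result [[0, z], [−z, 1]] is.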
An LLL-reduction algorithm with quasi-linear time complexity
, 2010
Abstract

Cited by 17 (6 self)
Abstract. We devise an algorithm, L̃¹, with the following specifications: it takes as input an arbitrary basis B = (b_i)_i ∈ Z^{d×d} of a Euclidean lattice L; it computes a basis of L which is reduced for a mild modification of the Lenstra–Lenstra–Lovász reduction; it terminates in time O(d^{5+ε} β + d^{ω+1+ε} β^{1+ε}), where β = log max ‖b_i‖ (for any ε > 0, and ω is a valid exponent for matrix multiplication). This is the first LLL-reducing algorithm with a time complexity that is quasi-linear in β and polynomial in d. The backbone structure of L̃¹ is able to mimic the Knuth–Schönhage fast gcd algorithm thanks to a combination of cutting-edge ingredients. First, the bit-size of our lattice bases can be decreased via truncations whose validity is backed by recent numerical stability results on the QR matrix factorization. Also, we establish a new framework for analyzing unimodular transformation matrices which reduce shifts of reduced bases; this includes bit-size control and new perturbation tools. We illustrate the power of this framework by generating a family of reduction algorithms.
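For reference, the classical reduction this paper speeds up is textbook LLL: size-reduce each vector against its predecessors, and swap when the Lovász condition fails. The sketch below is the naive exact-rational version with δ = 3/4 (recomputing Gram–Schmidt from scratch after every update, so it is slow), assuming a linearly independent integer input basis; it is emphatically not the quasi-linear algorithm of the abstract.

```python
from fractions import Fraction

def lll(basis, delta=Fraction(3, 4)):
    """Textbook LLL reduction over exact rationals (naive, for illustration)."""
    b = [[Fraction(x) for x in v] for v in basis]
    n = len(b)

    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))

    def gram_schmidt():
        bstar = []
        mu = [[Fraction(0)] * n for _ in range(n)]
        for i in range(n):
            v = b[i][:]
            for j in range(i):
                mu[i][j] = dot(b[i], bstar[j]) / dot(bstar[j], bstar[j])
                v = [x - mu[i][j] * y for x, y in zip(v, bstar[j])]
            bstar.append(v)
        return bstar, mu

    bstar, mu = gram_schmidt()
    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):          # size reduction
            q = round(mu[k][j])
            if q:
                b[k] = [x - q * y for x, y in zip(b[k], b[j])]
                bstar, mu = gram_schmidt()
        if dot(bstar[k], bstar[k]) >= (delta - mu[k][k - 1] ** 2) * dot(bstar[k - 1], bstar[k - 1]):
            k += 1                              # Lovász condition holds
        else:
            b[k], b[k - 1] = b[k - 1], b[k]     # swap and step back
            bstar, mu = gram_schmidt()
            k = max(k - 1, 1)
    return [[int(x) for x in v] for v in b]
```

The output basis generates the same lattice (only unimodular operations are used) but with shorter, nearly orthogonal vectors.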
Normal Forms for General Polynomial Matrices
, 2001
Abstract

Cited by 17 (10 self)
We present an algorithm for the computation of a shifted Popov normal form of a rectangular polynomial matrix. For specific input shifts, we obtain methods for computing the matrix greatest common divisor of two matrix polynomials (in normal form) or such polynomial normal form computations as the classical Popov form and the Hermite normal form.
A Linear Space Algorithm for Computing the Hermite Normal Form
 Proceedings ISSAC 2001, Lecture Notes in Computer Sci., 2146
, 2001
Abstract

Cited by 14 (2 self)
Computing the Hermite normal form of an n × n integer matrix using the best current algorithms typically requires O(n^3 log M) space, where M is a bound on the entries of the input matrix. Although polynomial in the input size (which is O(n^2 log M)), this space blowup can easily become a serious issue in practice when working on big integer matrices. In this paper we present a new algorithm for computing the Hermite normal form which uses only O(n^2 log M) space (i.e., essentially the same as the input size). When implemented using standard algorithms for integer and matrix multiplication, our algorithm has the same time complexity as the asymptotically fastest (but space-inefficient) algorithms. We also present a heuristic algorithm for HNF that achieves a substantial speedup when run on randomly generated input matrices.
Fast Algorithms for Linear Algebra Modulo N
 Proc. of Sixth Ann. Europ. Symp. on Algorithms: ESA'98
, 1998
Abstract

Cited by 13 (5 self)
Many linear algebra problems over the ring Z_N of integers modulo N can be solved by transforming, via elementary row operations, an n × m input matrix A to Howell form H. The nonzero rows of H give a canonical set of generators for the submodule of (Z_N)^m generated by the rows of A. In this paper we present an algorithm to recover H together with an invertible transformation matrix P which satisfies PA = H. The cost of the algorithm is O(nm^{ω−1}) operations with integers bounded in magnitude by N. This leads directly to fast algorithms for tasks involving Z_N-modules, including an O(nm^{ω−1}) algorithm for computing the general solution over Z_N of the system of linear equations xA = b, where b ∈ (Z_N)^m. 1 Introduction The reduction of a matrix A over a field to reduced row echelon form H is a central topic in elementary linear algebra. The nonzero rows of H give a canonical basis for the row span S(A) of A, that is, the set of all linear combinations of rows...
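Why the Howell form is needed, rather than a field-style echelon form, can be seen on a tiny example: over Z_6, the row span of (2, 1) contains (0, 3) = 3·(2, 1), a vector with a leading zero that an echelon form with a single pivot row does not expose; the Howell form adds such rows as extra generators. The brute-force enumeration below (illustrative only, tiny cases) demonstrates this.

```python
from itertools import product

def row_span_mod(rows, N):
    """All Z_N-linear combinations of the given rows (brute force)."""
    m = len(rows[0])
    span = set()
    for coeffs in product(range(N), repeat=len(rows)):
        v = tuple(sum(c * r[k] for c, r in zip(coeffs, rows)) % N
                  for k in range(m))
        span.add(v)
    return span
```

Here the Howell form of [[2, 1]] over Z_6 would be [[2, 1], [0, 3]], making the generator with leading zeros explicit.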
Frobenius numbers by lattice point enumeration
 INTEGERS
Abstract

Cited by 13 (1 self)
The Frobenius number g(A) of a set A = (a_1, a_2,..., a_n) of positive integers is the largest integer not representable as a nonnegative linear combination of the a_i. We interpret the Frobenius number in terms of a discrete tiling of the integer lattice of dimension n−1 and obtain a fast algorithm for computing it. The algorithm appears to run in average time that is softly quadratic, and we prove that this is the case for almost all of the steps. In practice, the algorithm is very fast: examples with n = 4 and the numbers in A having 100 digits take under one second. The running time increases with dimension and we can succeed up to n = 11. We use the geometric structure of a fundamental domain D, having a_1 points, related to a lattice constructed from A. The domain encodes information needed to find the Frobenius number. One cannot generally store all of D, but it is possible to encode its shape by a small set of vectors and that is sufficient to get g(A). The ideas of our algorithm connect the Frobenius problem to methods in integer linear programming and computational algebra. A variation of these ideas works when n = 3, where D has much more structure. An integer programming method of Eisenbrand and Rote can be used to design an algorithm
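For small inputs the Frobenius number can be computed by straightforward dynamic programming, which is useful for checking results: mark each integer as representable if subtracting some a_i lands on a representable integer, and stop once min(A) consecutive integers are representable (everything beyond is then representable too). This naive sketch assumes gcd(A) = 1 and is not the lattice/fundamental-domain method of the paper.

```python
def frobenius(A):
    """Largest integer not representable as a nonnegative integer combination
    of A (assumes gcd of A is 1). Naive dynamic programming, for tiny inputs."""
    a0 = min(A)
    reachable = [True]          # 0 is representable (the empty combination)
    n, run, last_gap = 0, 0, -1
    while run < a0:             # a0 consecutive hits: all larger n are representable
        n += 1
        ok = any(n >= a and reachable[n - a] for a in A)
        reachable.append(ok)
        if ok:
            run += 1
        else:
            run, last_gap = 0, n
    return last_gap
```

For example, g(3, 5) = 7, and g(6, 9, 20) = 43 (the classical "McNugget number").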