Results 1–10 of 10
On The Complexity Of Computing Determinants
 COMPUTATIONAL COMPLEXITY
, 2001
Abstract

Cited by 61 (18 self)
We present new baby steps/giant steps algorithms of asymptotically fast running time for dense matrix problems. Our algorithms compute the determinant, characteristic polynomial, Frobenius normal form and Smith normal form of a dense n × n matrix A with integer entries in (n^3.5 log ‖A‖)^(1+o(1)) and (n^2.697263 log ‖A‖)^(1+o(1)) bit operations; here ‖A‖ denotes the largest entry in absolute value and the exponent adjustment by "+o(1)" captures additional factors C1 (log n)^C2 (loglog ‖A‖)^C3 for positive real constants C1, C2, C3. The bit complexity (n^3.5 log ‖A‖)^(1+o(1)) results from using the classical cubic matrix multiplication algorithm. Our algorithms are randomized, and we can certify that the output is the determinant of A in a Las Vegas fashion. The second category of problems deals with the setting where the matrix A has elements from an abstract commutative ring, that is, when no divisions in the domain of entries are possible. We present algorithms that deterministically compute the determinant, characteristic polynomial and adjoint of A with n^(3.5+o(1)) and O(n^2.697263) ring additions, subtractions and multiplications.
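The division-free setting above can be contrasted with a much simpler classical exact method: Bareiss fraction-free Gaussian elimination, which computes an integer determinant in O(n^3) ring operations using only divisions that are guaranteed exact. The sketch below is for illustration only; it is not the paper's baby steps/giant steps algorithm, and the function name is ours.

```python
def det_bareiss(M):
    """Exact determinant of an integer matrix by Bareiss fraction-free
    elimination: O(n^3) ring operations, every division exact."""
    A = [row[:] for row in M]        # work on a copy
    n = len(A)
    sign, prev = 1, 1
    for k in range(n - 1):
        if A[k][k] == 0:             # find a nonzero pivot below
            for i in range(k + 1, n):
                if A[i][k] != 0:
                    A[k], A[i] = A[i], A[k]
                    sign = -sign
                    break
            else:
                return 0             # column is zero: singular
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                # Sylvester's identity guarantees `prev` divides exactly
                A[i][j] = (A[i][j] * A[k][k] - A[i][k] * A[k][j]) // prev
            A[i][k] = 0
        prev = A[k][k]
    return sign * A[n - 1][n - 1]
```

Because all intermediate values are minors of the input, entries stay single-determinant sized instead of growing exponentially as in naive fraction-free elimination.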
Efficient computation of the characteristic polynomial
 Proceedings of the 2005 International Symposium on Symbolic and Algebraic Computation
, 2005
Abstract

Cited by 18 (13 self)
We deal with the computation of the characteristic polynomial of dense matrices over word-size finite fields and over the integers. We first present two algorithms for finite fields: one is based on Krylov iterates and Gaussian elimination. We compare it to an improvement of the second algorithm of Keller-Gehrig. Then we show that a generalization of Keller-Gehrig's third algorithm could improve both complexity and computational time. We use these results as a basis for the computation of the characteristic polynomial of integer matrices. We first use early termination and Chinese remaindering for dense matrices. Then a probabilistic approach, based on the integer minimal polynomial and Hensel factorization, is particularly well suited to sparse and/or structured matrices.
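The Chinese remaindering with early termination mentioned above can be sketched as follows: compute the characteristic polynomial modulo several word-size primes, combine the images by CRT with a symmetric lift, and stop as soon as one more prime no longer changes the result. For the modular images this sketch uses the simple Faddeev–LeVerrier recurrence rather than the Krylov/Keller-Gehrig algorithms of the paper; all function names are ours.

```python
def matmul_mod(A, B, p):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) % p
             for j in range(n)] for i in range(n)]

def charpoly_mod(A, p):
    """Coefficients [1, c1, ..., cn] of det(xI - A) mod p via
    Faddeev-LeVerrier; needs a prime p > n so 1..n are invertible."""
    n = len(A)
    Ap = [[a % p for a in row] for row in A]
    M = [[0] * n for _ in range(n)]
    c = [1] + [0] * n
    for k in range(1, n + 1):
        T = [row[:] for row in M]
        for i in range(n):
            T[i][i] = (T[i][i] + c[k - 1]) % p   # M + c_{k-1} * I
        M = matmul_mod(Ap, T, p)
        tr = sum(M[i][i] for i in range(n)) % p
        c[k] = (-tr * pow(k, p - 2, p)) % p      # c_k = -tr(M_k)/k
    return c

def charpoly_crt(A, primes):
    """Chinese-remainder the modular images with early termination:
    stop once one more prime leaves the symmetric lift unchanged."""
    m, res = 1, None
    for p in primes:
        cp = charpoly_mod(A, p)
        if res is None:
            m, res = p, cp
            continue
        inv = pow(m % p, p - 2, p)               # 1/m mod p
        new = [(r + m * (((c - r) * inv) % p)) % (m * p)
               for r, c in zip(res, cp)]
        m *= p
        lifted = [x - m if x > m // 2 else x for x in new]
        prev = [x - (m // p) if x > (m // p) // 2 else x for x in res]
        if lifted == prev:
            return lifted                        # image has stabilized
        res = new
    return [x - m if x > m // 2 else x for x in res]
```

With early termination the number of primes adapts to the actual size of the coefficients instead of a worst-case a-priori bound.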
Interpolation of Shifted-Lacunary Polynomials (Extended Abstract)
Abstract

Cited by 11 (1 self)
Given a “black box” function to evaluate an unknown rational polynomial f ∈ Q[x] at points modulo a prime p, we exhibit algorithms to compute the representation of the polynomial in the sparsest shifted power basis. That is, we determine the sparsity t ∈ Z>0, the shift α ∈ Q, the exponents 0 ≤ e1 < e2 < ··· < et, and the coefficients c1, ..., ct ∈ Q\{0} such that f(x) = c1(x−α)^e1 + c2(x−α)^e2 + ··· + ct(x−α)^et. The computed sparsity t is absolutely minimal over any shifted power basis. The novelty of our algorithm is that the complexity is polynomial in the (sparse) representation size and in particular is logarithmic in deg f. Our method combines previous celebrated results on sparse interpolation and computing sparsest shifts, and provides a way to handle polynomials with extremely high degree which are, in some sense, sparse in information. We give both an unconditional deterministic algorithm which is polynomial-time but has a rather high complexity, and a more practical probabilistic algorithm which relies on some unknown constants.
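The "sparse in information" point — that a shifted power-basis representation stays cheap to store and evaluate even at astronomically high degree — can be seen in a few lines. This is a toy illustration of the representation only, not the interpolation algorithm itself.

```python
from fractions import Fraction

def eval_shifted(terms, alpha, x):
    """Evaluate f(x) = sum_i c_i * (x - alpha)^e_i exactly.  Fast
    exponentiation makes the cost logarithmic in deg f, so degrees
    like 10**6 (or far larger) stay cheap: the representation is
    'sparse in information'."""
    s = Fraction(x) - Fraction(alpha)
    return sum(Fraction(c) * s ** e for c, e in terms)

# t = 2 terms, shift alpha = 1/2, degree 10**6:
# f(x) = 3*(x - 1/2)^(10**6) + 7
f = [(3, 10**6), (7, 0)]
print(eval_shifted(f, Fraction(1, 2), Fraction(3, 2)))   # 3*1 + 7 = 10
```

A dense representation of the same f would need a million coefficients; the shifted-sparse one needs two terms and one shift.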
Matrix Rank Certification
, 2001
Abstract

Cited by 10 (2 self)
Randomized algorithms are given for computing the rank of a matrix over a field of...
Bounds on the coefficients of the characteristic and minimal polynomials
 Journal of Inequalities in Pure and Applied Mathematics
, 2007
Abstract

Cited by 5 (4 self)
This note presents absolute bounds on the size of the coefficients of the characteristic and minimal polynomials depending on the size of the coefficients of the associated matrix. Moreover, we present algorithms to compute more precise input-dependent bounds on these coefficients. Such bounds are useful, e.g., to perform deterministic Chinese remaindering of the characteristic or minimal polynomial of an integer matrix.
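A concrete instance of such a bound, used e.g. to decide in advance how many primes deterministic Chinese remaindering needs, is the classical Hadamard-type estimate |c_{n−k}| ≤ C(n,k) · k^(k/2) · ‖A‖^k: up to sign, c_{n−k} is a sum of the C(n,k) principal k × k minors, each bounded by Hadamard's inequality. This is a standard bound given for illustration, not necessarily the sharper input-dependent bounds of the paper.

```python
from math import comb, isqrt

def charpoly_coeff_bound(n, norm):
    """Integer upper bound on max_k |c_k| for the characteristic
    polynomial of an n x n matrix with entries bounded by `norm`,
    via |c_{n-k}| <= C(n,k) * k^(k/2) * norm^k (Hadamard)."""
    best = 1
    for k in range(1, n + 1):
        h = isqrt(k ** k)            # floor(k^(k/2)) = floor(sqrt(k^k))
        if h * h < k ** k:
            h += 1                   # round up
        best = max(best, comb(n, k) * h * norm ** k)
    return best

# how many 61-bit primes suffice for deterministic CRT reconstruction
n, norm = 50, 1000
B = charpoly_coeff_bound(n, norm)
bits = B.bit_length() + 1            # +1 bit for the sign
print(bits, "bits =>", -(-bits // 61), "primes of 61 bits")
```
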
Quadratic-Time Certificates in Linear Algebra
, 2011
Abstract

Cited by 4 (2 self)
We present certificates for the positive semidefiniteness of an n × n matrix A, whose entries are integers of binary length log ‖A‖, that can be verified in O(n^(2+ε) (log ‖A‖)^(1+ε)) binary operations for any ε > 0. The question arises in Hilbert/Artin-based rational sum-of-squares certificates, i.e., proofs, for polynomial inequalities with rational coefficients. We allow certificates that are validated by Monte Carlo randomized algorithms, as in Rusins M. Freivalds's famous 1979 quadratic-time certification for the matrix product. Our certificates occupy O(n^(3+ε) (log ‖A‖)^(1+ε)) bits, from which the verification algorithm randomly samples a quadratic amount. In addition, we give certificates of the same space and randomized validation time complexity for the Frobenius form and the characteristic and minimal polynomials. For determinant and rank we have certificates of essentially quadratic binary space and time complexity via Storjohann's algorithms.
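Freivalds's 1979 matrix-product certification referenced above is easy to state concretely: to check a claimed product A·B = C, test A(Bx) = Cx for random 0/1 vectors x, at O(n^2) cost per trial instead of the O(n^3) cost of recomputing the product. A textbook sketch (not the paper's certificates):

```python
import random

def matvec(M, x):
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

def freivalds_check(A, B, C, trials=30):
    """Freivalds' certification of a matrix product: each trial costs
    three matrix-vector products, O(n^2).  A wrong C passes a single
    trial with probability <= 1/2 over 0/1 vectors, hence passes all
    `trials` with probability <= 2**-trials."""
    n = len(B[0])
    for _ in range(trials):
        x = [random.randint(0, 1) for _ in range(n)]
        if matvec(A, matvec(B, x)) != matvec(C, x):
            return False        # a failed trial is a proof of inequality
    return True                 # "equal" with high probability

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = [[19, 22], [43, 50]]        # the true product
```

Note the asymmetry typical of Monte Carlo certification: a "False" answer is always correct, while "True" is correct with overwhelming probability.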
unknown title
, 2008
"... Finding the growth rate of a regular language in polynomial time ..."
Matrix Rank Certification (ELA)
Abstract
Randomized algorithms are given for computing the rank of a matrix over a field of characteristic zero with conjugation operator. The matrix is treated as a black box: only the capability to compute matrix × column-vector and row-vector × matrix products is used. The methods are exact, sometimes called seminumeric. They are appropriate, for example, for matrices with integer or rational entries. The rank algorithms are probabilistic of the Las Vegas type; the correctness of the result is guaranteed.
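A minimal sketch of the black-box model: probe the matrix only through its matrix × vector oracle and count how many images remain linearly independent under exact arithmetic. Note this toy version is Monte Carlo rather than Las Vegas, unlike the algorithms of the paper; all names are ours.

```python
import random
from fractions import Fraction

def blackbox_rank(apply_A, n, extra=2):
    """Monte Carlo black-box rank sketch: feed random vectors to the
    matvec oracle `apply_A` and keep the images that stay linearly
    independent (exact Fraction Gaussian elimination).  Stop after
    `extra` consecutive dependent images; the count equals rank(A)
    with high probability."""
    basis = []                   # kept images, mutually reduced
    r, misses = 0, 0
    while misses < extra:
        x = [Fraction(random.randint(-100, 100)) for _ in range(n)]
        v = [Fraction(t) for t in apply_A(x)]
        for row in basis:        # eliminate v at each pivot column
            piv = next(i for i, c in enumerate(row) if c != 0)
            if v[piv] != 0:
                f = v[piv] / row[piv]
                v = [a - f * b for a, b in zip(v, row)]
        if any(c != 0 for c in v):
            basis.append(v)      # independent image found
            r, misses = r + 1, 0
        else:
            misses += 1          # dependent image: evidence rank = r
    return r

# rank-2 example: third row is the sum of the first two
A = [[1, 0, 2], [0, 1, 1], [1, 1, 3]]
apply_A = lambda x: [sum(a * xi for a, xi in zip(row, x)) for row in A]
print(blackbox_rank(apply_A, 3))     # 2 with high probability
```

Only matvec products are used, exactly as in the black-box setting above; a Las Vegas algorithm would additionally certify the count before returning it.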