Results 1–10 of 23
Nearly Optimal Algorithms For Canonical Matrix Forms
, 1993
Cited by 62 (13 self)
Abstract
A Las Vegas-type probabilistic algorithm is presented for finding the Frobenius canonical form of an n × n matrix T over any field K. The algorithm requires O~(MM(n)) = MM(n)(log n)^O(1) operations in K, where O(MM(n)) operations in K are sufficient to multiply two n × n matrices over K. This nearly matches the lower bound of Ω(MM(n)) operations in K for this problem, and improves on the O(n^4) operations in K required by the previously best known algorithms. We also demonstrate a fast parallel implementation of our algorithm for the Frobenius form, which is processor-efficient on a PRAM. As an application we give an algorithm to evaluate a polynomial g(x) in K[x] at T which requires only O~(MM(n)) operations in K when deg g < n^2. Other applications include sequential and parallel algorithms for computing the minimal and characteristic polynomials of a matrix and the rational Jordan form of a matrix, for testing whether two matrices are similar, and for matrix powering, all substantially faster than those previously known.
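The Frobenius form is block diagonal with one companion block per invariant factor. As a minimal illustration of the objects involved (not the paper's O~(MM(n)) algorithm), the sketch below builds the companion matrix of a monic polynomial and verifies, via the Faddeev–LeVerrier recurrence, that its characteristic polynomial is that same polynomial; function names are ours.

```python
from fractions import Fraction

def companion(c):
    """Companion matrix of x^n + c[n-1]*x^(n-1) + ... + c[1]*x + c[0]."""
    n = len(c)
    C = [[Fraction(0)] * n for _ in range(n)]
    for i in range(1, n):
        C[i][i - 1] = Fraction(1)        # ones on the subdiagonal
    for i in range(n):
        C[i][n - 1] = Fraction(-c[i])    # last column carries -c[i]
    return C

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def charpoly(A):
    """Coefficients [c0, ..., c_{n-1}] of det(xI - A) = x^n + sum c_k x^k,
    computed exactly over Q by the Faddeev-LeVerrier recurrence."""
    n = len(A)
    I = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    M = [row[:] for row in I]
    c = [Fraction(0)] * n
    for k in range(1, n + 1):
        AM = matmul(A, M)
        c[n - k] = -sum(AM[i][i] for i in range(n)) / k   # -trace(A*M)/k
        M = [[AM[i][j] + (c[n - k] if i == j else 0) for j in range(n)]
             for i in range(n)]
    return c

# Companion matrix of (x-1)(x-2)(x-3) = x^3 - 6x^2 + 11x - 6.
print(charpoly(companion([-6, 11, -6])))   # -> [-6, 11, -6] (as Fractions)
```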
On efficient sparse integer matrix Smith normal form computations
, 2001
Cited by 42 (20 self)
Abstract
We present a new algorithm to compute the integer Smith normal form of large sparse matrices. We reduce the computation of the Smith form to independent, and therefore parallel, computations modulo powers of word-size primes. Consequently, the algorithm does not suffer from coefficient growth. We have implemented several variants of this algorithm (elimination and/or black-box techniques), since practical performance depends strongly on the memory available. Our method has proven useful in algebraic topology for the computation of the homology of some large simplicial complexes.
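For contrast with the modular approach described above, a minimal dense sketch of Smith normal form computation by elementary row/column operations over Z — the textbook baseline whose coefficient growth the paper's modular technique avoids; all names are illustrative:

```python
def smith_normal_form(A):
    """Invariant factors d1 | d2 | ... of an integer matrix, by textbook
    elementary row/column elimination. Intermediate entries can grow badly
    here; the paper avoids that by working modulo prime powers."""
    A = [row[:] for row in A]
    m, n = len(A), len(A[0])
    t = 0
    while t < min(m, n):
        # pick a nonzero entry of smallest magnitude as the pivot
        pivot = None
        for i in range(t, m):
            for j in range(t, n):
                if A[i][j] and (pivot is None or abs(A[i][j]) < abs(A[pivot[0]][pivot[1]])):
                    pivot = (i, j)
        if pivot is None:
            break                          # trailing block is all zero
        pi, pj = pivot
        A[t], A[pi] = A[pi], A[t]          # move the pivot to position (t, t)
        for r in range(m):
            A[r][t], A[r][pj] = A[r][pj], A[r][t]
        dirty = False
        for i in range(t + 1, m):          # clear column t by row operations
            q = A[i][t] // A[t][t]
            if q:
                for j in range(n):
                    A[i][j] -= q * A[t][j]
            dirty |= A[i][t] != 0
        for j in range(t + 1, n):          # clear row t by column operations
            q = A[t][j] // A[t][t]
            if q:
                for i in range(m):
                    A[i][j] -= q * A[i][t]
            dirty |= A[t][j] != 0
        if dirty:
            continue                       # a smaller remainder appeared; re-pivot
        # enforce divisibility: d_t must divide every remaining entry
        offender = next(((i, j) for i in range(t + 1, m) for j in range(t + 1, n)
                         if A[i][j] % A[t][t]), None)
        if offender is not None:
            for j in range(n):             # fold the offending row into row t
                A[t][j] += A[offender[0]][j]
            continue
        if A[t][t] < 0:
            for j in range(n):
                A[t][j] = -A[t][j]
        t += 1
    return [A[i][i] for i in range(min(m, n))]

# Known worked example with invariant factors 2 | 6 | 12.
print(smith_normal_form([[2, 4, 4], [-6, 6, 12], [10, -4, -16]]))  # [2, 6, 12]
```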
A study of Coppersmith's block Wiedemann algorithm using matrix polynomials
 LMC-IMAG, Report # 975 IM
, 1997
Cited by 27 (7 self)
Abstract
We analyse a randomized block algorithm proposed by Coppersmith for solving large sparse systems of linear equations, Aw = 0, over a finite field K = GF(q). It is a modification of an algorithm of Wiedemann. Coppersmith has given heuristic arguments to explain why the algorithm works, but it was an open question to prove that it may produce a solution, with positive probability, for small finite fields, e.g. for K = GF(2). We answer this question nearly completely. The algorithm uses two random matrices X and Y of dimensions m × N and N × n. Over any finite field, we show how the parameters m and n of the algorithm may be tuned so that, for any input system, a solution is computed with high probability. Conversely, for certain particular input systems, we show that the conditions on the input parameters may be relaxed to ensure success. We also improve the probability bound of Kaltofen in the case of large cardinality fields. Lastly, for the sake of completeness of the...
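The scalar Wiedemann ingredient underlying the block algorithm: project a Krylov sequence u^T A^i v down to scalars, then recover its minimal linear recurrence with Berlekamp–Massey over GF(p). This sketch covers only the scalar case; Coppersmith's block version replaces the projections with the blocks X and Y and works with matrix-polynomial recurrences. The demo sequence (Fibonacci mod 101) is our choice for illustration.

```python
def berlekamp_massey(s, p):
    """Shortest linear recurrence C (with C[0] = 1) over GF(p) such that
    s[n] + C[1]*s[n-1] + ... + C[L]*s[n-L] = 0 for all valid n."""
    C, B = [1], [1]      # current and previous connection polynomials
    L, m, b = 0, 1, 1    # L = recurrence length, b = last nonzero discrepancy
    for n in range(len(s)):
        d = s[n] % p     # discrepancy of s[n] against the current recurrence
        for i in range(1, L + 1):
            d = (d + C[i] * s[n - i]) % p
        if d == 0:
            m += 1
            continue
        T = C[:] if 2 * L <= n else None
        coef = d * pow(b, p - 2, p) % p        # d / b in GF(p)
        C += [0] * (len(B) + m - len(C))
        for i in range(len(B)):
            C[i + m] = (C[i + m] - coef * B[i]) % p
        if T is not None:
            L, B, b, m = n + 1 - L, T, d, 1
        else:
            m += 1
    return C[:L + 1]

# Fibonacci mod 101 satisfies s[n] - s[n-1] - s[n-2] = 0.
fib = [0, 1]
for _ in range(10):
    fib.append((fib[-1] + fib[-2]) % 101)
print(berlekamp_massey(fib, 101))   # [1, 100, 100], i.e. -1 = 100 mod 101
```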
Computing Popov and Hermite forms of polynomial matrices
 In International Symposium on Symbolic and Algebraic Computation, Zurich, Suisse
, 1996
Cited by 21 (10 self)
Abstract
For a polynomial matrix P(z) of degree d in M_{n,n}(K[z]), where K is a commutative field, a reduction to the Hermite normal form can be computed in O(ndM(n) + M(nd)) arithmetic operations, where M(n) is the time required to multiply two n × n matrices over K. Further, a reduction can be computed using O(log^{λ+1}(nd)) parallel arithmetic steps and O(L(nd)) processors if the same processor bound holds with time O(log^λ(nd)) for determining the lexicographically first maximal linearly independent subset of the set of the columns of an nd × nd matrix over K. These results are obtained by applying, in the matrix case, the techniques used in the scalar case of the gcd of polynomials.
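The Euclidean skeleton of Hermite reduction is easiest to see over Z, where integer division with remainder plays the role that polynomial division plays over K[z]. A minimal sketch, not the paper's algorithm; the row-style convention (positive pivots, entries above each pivot reduced) is an assumption:

```python
def hermite_normal_form(A):
    """Row-style Hermite normal form of an integer matrix by unimodular
    row operations: Euclidean elimination in each column, positive pivots,
    entries above each pivot reduced into [0, pivot)."""
    A = [row[:] for row in A]
    m, n = len(A), len(A[0])
    r = 0
    for c in range(n):
        if r == m:
            break
        for i in range(r + 1, m):        # Euclid between row r and row i
            while A[i][c] != 0:
                q = A[r][c] // A[i][c]
                for j in range(n):
                    A[r][j] -= q * A[i][j]
                A[r], A[i] = A[i], A[r]  # remainder moves to row i
        if A[r][c] == 0:
            continue                     # no pivot in this column
        if A[r][c] < 0:
            A[r] = [-x for x in A[r]]
        for i in range(r):               # reduce the entries above the pivot
            q = A[i][c] // A[r][c]
            if q:
                for j in range(n):
                    A[i][j] -= q * A[r][j]
        r += 1
    return A

print(hermite_normal_form([[2, 4], [3, 5]]))   # [[1, 1], [0, 2]]
```

Over K[z] the same loop works with quotient/remainder taken by polynomial division, with "pivot positive" replaced by "pivot monic" and the degree of the remainder strictly decreasing.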
An O(n³) Algorithm for Frobenius Normal Form
 IN PROCEEDINGS OF THE 1998 INTERNATIONAL SYMPOSIUM ON SYMBOLIC AND ALGEBRAIC COMPUTATION
, 1997
Cited by 15 (2 self)
Abstract
We describe an algorithm for computing the Frobenius normal form of an n × n matrix in O(n³) field operations. As applications we get O(n³) algorithms for two other classical problems: computing the minimal polynomial of a matrix and testing two matrices for similarity. Assuming standard matrix multiplication, the previously best known deterministic complexity bound for all three problems is O(n^4).
An algorithm for computing a new normal form for dynamical systems
 J. Symbolic Comput
, 2000
Cited by 15 (4 self)
Abstract
We propose in this paper a new normal form for dynamical systems or vector fields which improves the classical normal forms in the sense that it is a further reduction of the classical normal forms. We give an algorithm for an effective computation of these normal forms. Our approach is rational in the sense that if the coefficients of the system are in a field K (which, in practice, is Q or R), so is the normal form, and all computations are done in K. As a particular case, if the matrix of the linear part is a companion matrix, then we reduce the dynamical system to a single differential equation. Our method is applicable in both the nilpotent and the non-nilpotent cases. We have implemented our algorithm in Maple V and obtained many examples of the further reduced normal forms up to some finite order. © 2000 Academic Press
Computing Rational Forms of Integer Matrices
 J. SYMBOLIC COMPUT
, 2000
Cited by 10 (3 self)
Abstract
A new algorithm is presented for finding the Frobenius rational form F ∈ Z^{n×n} of any A ∈ Z^{n×n} which requires an expected O(n⁴(log n + log‖A‖) + n³(log n + log‖A‖)²) word operations using standard integer and matrix arithmetic. This improves substantially on the fastest previously known algorithms. The algorithm is probabilistic of the Las Vegas type: it assumes a source of random bits but always produces the correct answer. Las Vegas algorithms are also presented for computing a transformation matrix to the Frobenius form, and for computing the rational Jordan form of an integer matrix.
On the computation of minimal polynomials, cyclic vectors, and Frobenius forms
 LINEAR ALGEBRA APPL
, 1997
Cited by 9 (0 self)
Abstract
Various algorithms connected with the computation of the minimal polynomial of a square n × n matrix over a field k are presented here. The complexity of the first algorithm, where the complete factorization of the characteristic polynomial is needed, is O(√n·n³). It produces the minimal polynomial and all characteristic subspaces of a matrix of size n. Furthermore, an iterative algorithm for the minimal polynomial is presented with complexity O(n³ + n²m²), where m is a parameter of the Shift-Hessenberg matrix used. It does not require knowledge of the characteristic polynomial. Important here is the fact that the average value of m (or m_A) is ≈ log n. Next we are concerned with finding a cyclic vector, first for a matrix whose characteristic polynomial is squarefree. Using the Shift-Hessenberg form leads to an algorithm with cost O(n³ + m²n²). A more sophisticated recurrent procedure gives the result in O(n³) steps. In particular, a normal basis for an extended finite field will be obtained with complexity O(n³ + n² log q). Finally, the Frobenius form is obtained with asymptotic average complexity O(n³ log n). All algorithms are deterministic. In all four cases, the complexity obtained is better than for the heretofore best known deterministic algorithm. The results are summarized in Tables 1, 2, 3 and 4.
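A generic (and slower) alternative to the Shift-Hessenberg approach: the minimal polynomial is given by the first linear dependence among I, A, A², ... viewed as vectors of length n². A minimal sketch over Q with exact rational arithmetic; names and conventions are ours:

```python
from fractions import Fraction

def minimal_polynomial(A):
    """Monic minimal polynomial of A over Q, coefficients low degree first:
    the first linear dependence among I, A, A^2, ... (exact Gaussian
    elimination; dense and slow, unlike the Shift-Hessenberg methods)."""
    n = len(A)
    def matmul(X, Y):
        return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]
    P = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]  # A^0
    basis = []          # triples (pivot index, reduced vector, combination)
    k = 0
    while True:
        v = [Fraction(P[i][j]) for i in range(n) for j in range(n)]  # vec(A^k)
        combo = [Fraction(0)] * k + [Fraction(1)]   # v as a sum of powers
        for piv, bv, bc in basis:                   # reduce against the basis
            if v[piv]:
                f = v[piv] / bv[piv]
                v = [x - f * y for x, y in zip(v, bv)]
                for idx, c in enumerate(bc):
                    combo[idx] -= f * c
        if not any(v):        # dependence found: sum combo[j] * A^j = 0
            return combo
        piv = next(i for i, x in enumerate(v) if x)
        basis.append((piv, v, combo))
        P = matmul(P, A)
        k += 1

# Companion matrix of x^2 - 3x + 2: minimal = characteristic polynomial.
print(minimal_polynomial([[0, -2], [1, 3]]))   # -> [2, -3, 1] (as Fractions)
# Scalar matrix 2I: minimal polynomial x - 2, smaller than characteristic (x-2)^2.
print(minimal_polynomial([[2, 0], [0, 2]]))    # -> [-2, 1]
```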
Black Box Frobenius Decompositions over Small Fields (Extended Abstract)
, 2000
Cited by 7 (1 self)
Abstract
A new randomized algorithm is presented for computation of the Frobenius form and transition matrix for an n × n matrix over a field. Using standard matrix and polynomial arithmetic, the algorithm has an asymptotic expected complexity that matches the worst-case complexity of the best known deterministic algorithm for this problem, recently given by Storjohann and Villard [16]. The new algorithm is based on the evaluation of Krylov spaces, rather than an elimination technique, and may therefore be superior when applied to sparse or structured matrices with a small number of invariant factors.
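The black-box model referred to above treats the matrix purely as a function v ↦ Av, so Krylov sequences can be built from matrix-vector products alone, without ever forming or factoring the matrix. A minimal sketch; the sparse storage format is a hypothetical choice for illustration:

```python
def sparse_matvec(entries, n):
    """Wrap a sparse matrix, given as a dict {(i, j): value} (illustrative
    format), as a black box v -> A v; callers never see the entries."""
    def mv(v):
        out = [0] * n
        for (i, j), a in entries.items():
            out[i] += a * v[j]
        return out
    return mv

def krylov_sequence(matvec, v, steps):
    """[v, Av, A^2 v, ...] using only black-box products."""
    seq = [v]
    for _ in range(steps - 1):
        seq.append(matvec(seq[-1]))
    return seq

# Nilpotent shift matrix: each product moves the 1 up one coordinate.
shift = sparse_matvec({(0, 1): 1, (1, 2): 1}, 3)
print(krylov_sequence(shift, [0, 0, 1], 4))
# [[0, 0, 1], [0, 1, 0], [1, 0, 0], [0, 0, 0]]
```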
Fast parallel computation of the Smith normal form of polynomial matrices
 In International Symposium on Symbolic and Algebraic Computation
, 1994
Cited by 7 (2 self)
Abstract
We establish that the Smith normal form of a polynomial matrix in F[z]^{n×n}, where F is an arbitrary commutative field, can be computed in NC_F.