Results 1–10 of 21
On The Complexity Of Computing Determinants
Computational Complexity, 2001
Abstract

Cited by 63 (21 self)
We present new baby steps/giant steps algorithms of asymptotically fast running time for dense matrix problems. Our algorithms compute the determinant, characteristic polynomial, Frobenius normal form and Smith normal form of a dense n × n matrix A with integer entries in (n^{3.2} log ‖A‖)^{1+o(1)} and (n^{2.697263} log ‖A‖)^{1+o(1)} bit operations; here ‖A‖ denotes the largest entry in absolute value and the exponent adjustment by "+o(1)" captures additional factors C1 (log n)^{C2} (log log ‖A‖)^{C3} for positive real constants C1, C2, C3. The bit complexity (n^{3.5} log ‖A‖)^{1+o(1)} results from using the classical cubic matrix multiplication algorithm. Our algorithms are randomized, and we can certify that the output is the determinant of A in a Las Vegas fashion. The second category of problems deals with the setting where the matrix A has elements from an abstract commutative ring, that is, when no divisions in the domain of entries are possible. We present algorithms that deterministically compute the determinant, characteristic polynomial and adjoint of A with n^{3.2+o(1)} and O(n^{2.697263}) ring additions, subtractions and multiplications.
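The integer-determinant setting of this abstract can be illustrated with the textbook modular scheme that such algorithms refine: compute det A modulo several primes, bound |det A| a priori by Hadamard's inequality, and recover the integer by Chinese remaindering. The sketch below shows that general scheme, not the paper's baby steps/giant steps algorithm; the prime list is an arbitrary choice suitable only for small inputs.

```python
import math

def det_mod_p(A, p):
    """Determinant of an integer matrix modulo a prime p (elimination over GF(p))."""
    n = len(A)
    M = [[x % p for x in row] for row in A]
    det = 1
    for c in range(n):
        piv = next((r for r in range(c, n) if M[r][c]), None)
        if piv is None:
            return 0                      # singular modulo p
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            det = -det                    # row swap flips the sign
        det = det * M[c][c] % p
        inv = pow(M[c][c], p - 2, p)      # Fermat inverse
        for r in range(c + 1, n):
            f = M[r][c] * inv % p
            M[r] = [(a - f * b) % p for a, b in zip(M[r], M[c])]
    return det % p

def det_crt(A, primes=(10007, 10009, 10037, 10039)):
    """Recover det(A) over Z, assuming prod(primes) exceeds twice Hadamard's bound."""
    hadamard = math.prod(math.isqrt(sum(x * x for x in row)) + 1 for row in A)
    m, x = 1, 0
    for p in primes:
        r = det_mod_p(A, p)
        # CRT merge: keep x ≡ previous residues (mod m) and x ≡ r (mod p)
        x = (x + m * ((r - x) * pow(m, -1, p) % p)) % (m * p)
        m *= p
    assert m > 2 * hadamard, "not enough primes for this matrix"
    return x if x <= m // 2 else x - m    # lift to the symmetric range
```

The Las Vegas flavor mentioned above corresponds, in this toy version, to the a priori check that the prime product exceeds twice the Hadamard bound, so the symmetric lift is provably correct.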
An Output-Sensitive Variant of the Baby Steps/Giant Steps Determinant Algorithm
, 2001
Abstract

Cited by 14 (2 self)
This paper provides an adaptive version of the unblocked baby steps/giant steps algorithm [20, Section 2]. The result is most easily stated for an integer matrix with entries of bit length b whose determinant Δ has bit length (1 − ξ)·n(b + log₂(n)/2), where the shrinkage factor ξ with 0 ≤ ξ ≤ 1 is not known in advance. Note that by Hadamard's bound log₂|Δ| ≤ n(b + log₂(n)/2), so ξ = 0 covers the worst case. We describe a Monte Carlo algorithm that produces Δ in a number of bit operations that decreases as ξ grows, again with standard matrix arithmetic. The corresponding bit complexity of the early-termination Gaussian elimination method is always higher, as is that of the algorithm of [10].
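Hadamard's bound is easy to evaluate, and comparing it with the bit length a determinant actually attains shows the slack an output-sensitive method can exploit. A minimal sketch under the assumption of exact integer arithmetic (Bareiss fraction-free elimination, a standard exact method, not the paper's algorithm; the 4 × 4 matrix is an arbitrary example):

```python
import math

def hadamard_bit_bound(n, b):
    """A priori bound n(b + log2(n)/2) on log2|det A| when entries have bit length b."""
    return n * (b + math.log2(n) / 2)

def det_bareiss(A):
    """Exact integer determinant by Bareiss fraction-free elimination."""
    M = [row[:] for row in A]
    n, sign, prev = len(M), 1, 1
    for c in range(n):
        if M[c][c] == 0:
            piv = next((r for r in range(c + 1, n) if M[r][c]), None)
            if piv is None:
                return 0
            M[c], M[piv] = M[piv], M[c]
            sign = -sign
        for r in range(c + 1, n):
            for k in range(c + 1, n):
                # the division by the previous pivot is exact (Bareiss identity)
                M[r][k] = (M[r][k] * M[c][c] - M[r][c] * M[c][k]) // prev
            M[r][c] = 0
        prev = M[c][c]
    return sign * M[-1][-1]

# bit length actually attained vs. the worst-case bound for one 4x4 example
A = [[3, -1, 2, 0], [1, 4, -2, 1], [0, 2, 5, -3], [2, 1, 1, 6]]
slack = hadamard_bit_bound(4, 3) - abs(det_bareiss(A)).bit_length()
```

A positive `slack` corresponds to ξ > 0 in the statement above, the regime where early termination pays off.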
Toeplitz and Hankel Meet Hensel and Newton: Nearly Optimal Algorithms and Their Practical Acceleration with Saturated Initialization
 Program in Computer Science, The Graduate
, 2004
Abstract

Cited by 6 (5 self)
We extend Hensel lifting for solving general and structured linear systems of equations to the rings of integers modulo non-primes, e.g. modulo a power of two. This enables significant savings in word operations. We elaborate upon this approach in the case of Toeplitz linear systems. In this case, we initialize lifting with the MBA superfast algorithm, estimate that the overall bit-operation (Boolean) cost of the solution is optimal up to roughly a logarithmic factor, and prove that degeneration is unlikely even when the basic prime is fixed but the input matrix is random. We also comment on the extension of our algorithm to some other fundamental computations with (possibly singular) general and structured matrices and univariate polynomials, as well as to the computation of the sign and the value of the determinant of an integer matrix.
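The classical lifting being extended here is simple to sketch: from a single inverse of A modulo a prime p, lift a solution of A x ≡ b one p-adic digit at a time. The toy version below works modulo a prime (the paper's contribution, lifting modulo non-primes such as powers of two and exploiting Toeplitz structure, is not attempted); the choices p = 7 and k = 8 are arbitrary.

```python
def mat_inv_mod(A, p):
    """Inverse of A modulo a prime p by Gauss-Jordan elimination."""
    n = len(A)
    M = [[A[r][c] % p for c in range(n)] + [int(r == c) for c in range(n)]
         for r in range(n)]
    for c in range(n):
        piv = next(r for r in range(c, n) if M[r][c])
        M[c], M[piv] = M[piv], M[c]
        inv = pow(M[c][c], p - 2, p)
        M[c] = [x * inv % p for x in M[c]]
        for r in range(n):
            if r != c and M[r][c]:
                f = M[r][c]
                M[r] = [(x - f * y) % p for x, y in zip(M[r], M[c])]
    return [row[n:] for row in M]

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def hensel_solve(A, b, p=7, k=8):
    """Solve A x ≡ b (mod p^k), inverting A only once, modulo p."""
    C = mat_inv_mod(A, p)
    x, r = [0] * len(b), b[:]
    for i in range(k):
        e = [t % p for t in matvec(C, r)]                        # next p-adic digit
        x = [xi + ei * p ** i for xi, ei in zip(x, e)]
        r = [(ri - si) // p for ri, si in zip(r, matvec(A, e))]  # exact by construction
    return [xi % p ** k for xi in x]
```

Each step costs only one cheap residue computation and two matrix-vector products, which is why lifting combines so well with fast structured matvecs.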
Certification of the QR Factor R, and of Lattice Basis Reducedness
Abstract

Cited by 5 (1 self)
Given a lattice basis of n vectors in Z^n, we propose an algorithm using 12n³ + O(n²) floating-point operations for checking whether the basis is LLL-reduced. If the basis is reduced then the algorithm will hopefully answer "yes". If the basis is not reduced, or if the precision used is not sufficient with respect to n and to the numerical properties of the basis, the algorithm will answer "failed". Hence a positive answer is a rigorous certificate. For implementing the certificate itself, we propose a floating-point algorithm for computing (certified) error bounds for the R factor of the QR factorization. This algorithm takes into account all possible approximation and rounding errors. The certificate may be implemented using matrix library routines only. We report experiments showing that for a reduced basis of adequate dimension and quality the certificate succeeds, and we establish the effectiveness of the certificate. We apply it to certifying the output of the fastest existing floating-point LLL reduction heuristics without slowing down the whole process.
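For contrast with the floating-point certificate, LLL reducedness can be checked exactly (and slowly) in rational arithmetic: verify the size condition |μᵢⱼ| ≤ 1/2 and the Lovász condition via exact Gram-Schmidt. A minimal sketch assuming a full-rank basis given as rows and the common parameter δ = 3/4; this is the reference computation the paper's fast certificate approximates, not the paper's method.

```python
from fractions import Fraction

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def is_lll_reduced(B, delta=Fraction(3, 4)):
    """Exact LLL-reducedness check: size-reduction and Lovász conditions."""
    n = len(B)
    B = [[Fraction(x) for x in row] for row in B]
    Bstar, mu, norms = [], [[Fraction(0)] * n for _ in range(n)], []
    for i in range(n):                      # rational Gram-Schmidt
        v = B[i][:]
        for j in range(i):
            mu[i][j] = dot(B[i], Bstar[j]) / norms[j]
            v = [vi - mu[i][j] * wj for vi, wj in zip(v, Bstar[j])]
        Bstar.append(v)
        norms.append(dot(v, v))             # squared norm of b_i^*
    for i in range(n):
        if any(abs(mu[i][j]) > Fraction(1, 2) for j in range(i)):
            return False                    # size condition fails
        if i > 0 and norms[i] < (delta - mu[i][i - 1] ** 2) * norms[i - 1]:
            return False                    # Lovász condition fails
    return True
```

The rational version is unconditionally correct but its intermediate numerators and denominators grow quickly, which is exactly why a 12n³-flop floating-point certificate is attractive.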
Algebraic algorithms
Abstract

Cited by 4 (0 self)
This article, along with [Elkadi and Mourrain 1996], explains the correlation between residue theory and the Dixon matrix, which yields an alternative method for studying and approximating all common solutions. In 1916, Macaulay [1916] constructed a matrix whose determinant is a multiple of the classical resultant for n homogeneous polynomials in n variables. The Macaulay matrix simultaneously generalizes the Sylvester matrix and the coefficient matrix of a system of linear equations [Kapur and Lakshman Y. N. 1992]. As with the Dixon formulation, the Macaulay determinant is a multiple of the resultant. Macaulay, however, proved that a certain minor of his matrix divides the matrix determinant so as to yield the exact resultant in the case of generic homogeneous polynomials. Canny [1990] invented a general method that perturbs any polynomial system and extracts a nontrivial projection operator. Using recent results pertaining to sparse polynomial systems [Gelfand et al. 1994, Sturmfels 1991], a matrix formula for computing the sparse resultant of n + 1 polynomials in n variables was given by Canny and Emiris [1993] and subsequently improved in [Canny and Pedersen 1993, Emiris and Canny 1995]. The determinant of the sparse resultant matrix, like those of the Macaulay and Dixon matrices, only yields a projection operator, not the exact resultant. Here, sparsity means that only certain monomials in each of the n + 1 polynomials have nonzero coefficients. Sparsity is measured in geometric terms, namely, by the Newton polytope.
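The Sylvester matrix that the Macaulay construction generalizes is easy to write down in the univariate case: for f of degree m and g of degree n it is the (m + n) × (m + n) matrix of shifted coefficient rows, and its determinant is the resultant Res(f, g). A minimal sketch, with coefficients listed highest degree first:

```python
from fractions import Fraction

def sylvester(f, g):
    """Sylvester matrix of f, g given as coefficient lists, highest degree first."""
    m, n = len(f) - 1, len(g) - 1           # degrees of f and g
    rows = []
    for i in range(n):                      # n shifted copies of f's coefficients
        rows.append([0] * i + f + [0] * (n - 1 - i))
    for i in range(m):                      # m shifted copies of g's coefficients
        rows.append([0] * i + g + [0] * (m - 1 - i))
    return rows

def det(M):
    """Exact determinant by rational Gaussian elimination."""
    M = [[Fraction(x) for x in row] for row in M]
    n, d = len(M), Fraction(1)
    for c in range(n):
        piv = next((r for r in range(c, n) if M[r][c]), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            d = -d
        d *= M[c][c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            M[r] = [a - f * b for a, b in zip(M[r], M[c])]
    return d

def resultant(f, g):
    """Res(f, g) vanishes exactly when f and g share a common root."""
    return det(sylvester(f, g))
```

For example, x² − 1 and x − 1 share the root x = 1, so their resultant vanishes, while x² + 1 and x² − 1 share no root.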
Unified nearly optimal algorithms for structured integer matrices
 Operator Theory: Advances and Applications, 199, 359–375, 2010
Abstract

Cited by 3 (3 self)
We seek the solution of banded, Toeplitz, Hankel, Vandermonde, Cauchy and other structured linear systems of equations with integer coefficients. By combining Hensel's symbolic lifting with either divide-and-conquer algorithms or numerical iterative refinement, we unify the solution for all these structures. We obtain the solution in nearly optimal randomized Boolean time, which covers both the solution and the verification of its correctness. Our algorithms and nearly optimal time bounds are extended to the computation of the determinant of a structured integer matrix, its rank and a basis for its null space, as well as to some fundamental computations with univariate polynomials that have integer coefficients. Furthermore, lifting can be performed modulo a properly bounded power of two, so that our algorithms can be implemented in binary within a fixed computer precision.
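The structure these algorithms exploit is that, e.g., an n × n Toeplitz matrix is determined by its 2n − 1 diagonal values, so a matrix-vector product never needs the dense matrix. A minimal sketch (the loop below is still O(n²); the usual acceleration embeds T in a circulant and applies an FFT for O(n log n), which is omitted here):

```python
def toeplitz(col, row):
    """Dense Toeplitz matrix with first column `col` and first row `row`
    (col[0] and row[0] must agree): T[i][j] depends only on i - j."""
    n = len(col)
    return [[col[i - j] if i >= j else row[j - i] for j in range(n)]
            for i in range(n)]

def toeplitz_matvec(col, row, x):
    """T @ x straight from the 2n - 1 defining parameters, without forming T."""
    n = len(x)
    d = {k: col[k] for k in range(n)}            # entries on and below the diagonal
    d.update({-k: row[k] for k in range(1, n)})  # entries above the diagonal
    return [sum(d[i - j] * x[j] for j in range(n)) for i in range(n)]
```

Hankel, Vandermonde and Cauchy matrices admit analogous O(n)-parameter representations, which is what makes a unified nearly linear-time treatment possible.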
Bayesian Out-Trees
, 2008
Abstract

Cited by 2 (1 self)
A Bayesian treatment of latent directed graph structure for non-iid data is provided, where each child datum is sampled with a directed conditional dependence on a single unknown parent datum. The latent graph structure is assumed to lie in the family of directed out-tree graphs, which leads to efficient Bayesian inference. The latent likelihood of the data and its gradients are computable in closed form via Tutte's directed matrix-tree theorem using determinants and inverses of the out-Laplacian. This novel likelihood subsumes the iid likelihood, is exchangeable, and yields efficient unsupervised and semi-supervised learning algorithms. In addition to handling taxonomy and phylogenetic datasets, the out-tree assumption performs surprisingly well as a semi-parametric density estimator on standard iid datasets. Experiments with unsupervised and semi-supervised learning are shown on various UCI and taxonomy datasets.
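The determinant identity behind the closed-form likelihood is Tutte's directed matrix-tree theorem: the number of spanning out-trees rooted at r equals the determinant of the in-degree Laplacian with row and column r deleted. A small unweighted sketch (the paper works with edge weights and gradients, which this toy count does not attempt):

```python
from fractions import Fraction

def det(M):
    """Exact determinant by rational Gaussian elimination."""
    M = [[Fraction(x) for x in row] for row in M]
    n, d = len(M), Fraction(1)
    for c in range(n):
        piv = next((r for r in range(c, n) if M[r][c]), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            d = -d
        d *= M[c][c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            M[r] = [a - f * b for a, b in zip(M[r], M[c])]
    return d

def out_tree_count(n, edges, root):
    """Spanning out-trees rooted at `root` via Tutte's theorem:
    delete the root's row and column from L = D_in - A and take the determinant."""
    L = [[0] * n for _ in range(n)]
    for u, v in edges:        # directed edge u -> v
        L[v][v] += 1          # in-degree on the diagonal
        L[u][v] -= 1
    M = [[L[i][j] for j in range(n) if j != root] for i in range(n) if i != root]
    return int(det(M))
```

On the complete digraph with 3 vertices the count rooted at any vertex is 3 (one per underlying labeled tree), matching Cayley's formula n^{n−2}.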
Additive Preconditioning and Aggregation in Matrix Computations
Abstract

Cited by 1 (1 self)
Multiplicative preconditioning is a popular SVD-based technique for the solution of linear systems of equations, but our SVD-free additive preconditioners are more readily available and better preserve matrix structure. We combine additive preconditioning with aggregation and other relevant techniques to facilitate the solution of linear systems of equations and some other fundamental matrix computations. Our analysis and experiments show the power of our approach, guide us in selecting the most effective policies of preconditioning and aggregation, and provide some new insights into these and related subjects of matrix computations.
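A minimal sketch of the rank-one flavor of this idea, under the assumption that solving with the additively shifted matrix C = A + u vᵀ is easier than with A itself: recover A⁻¹b from two solves with C via the Sherman-Morrison formula. This illustrates additive shift plus recovery in general, not the paper's specific preconditioning and aggregation policies; the example matrices are arbitrary, and exact rationals stand in for the numerical setting.

```python
from fractions import Fraction

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def solve(M, b):
    """Exact linear solve by rational Gauss-Jordan elimination with pivoting."""
    n = len(M)
    M = [[Fraction(x) for x in row] + [Fraction(bi)] for row, bi in zip(M, b)]
    for c in range(n):
        piv = next(r for r in range(c, n) if M[r][c])
        M[c], M[piv] = M[piv], M[c]
        M[c] = [x / M[c][c] for x in M[c]]
        for r in range(n):
            if r != c and M[r][c]:
                f = M[r][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [row[n] for row in M]

def solve_with_rank_one_shift(A, b, u, v):
    """Solve A x = b through C = A + u v^T using Sherman-Morrison:
    (C - u v^T)^{-1} b = y + z (v . y) / (1 - v . z), y = C^{-1} b, z = C^{-1} u."""
    n = len(b)
    C = [[A[i][j] + u[i] * v[j] for j in range(n)] for i in range(n)]
    y, z = solve(C, b), solve(C, u)
    s = dot(v, y) / (1 - dot(v, z))   # requires 1 - v.z != 0
    return [yi + zi * s for yi, zi in zip(y, z)]
```

Higher-rank shifts A + U Vᵀ are handled the same way through the Sherman-Morrison-Woodbury identity, where the recovery step is the aggregation into a small dense system.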
Some Inequalities Related to the Seysen Measure of a Lattice
, 2009
Abstract
Given a lattice L and a basis B of L together with its dual basis B*, the orthogonality measure S(B) = Σᵢ ‖bᵢ‖²·‖bᵢ*‖² of B was introduced by M. Seysen [9] in 1993. This measure (the Seysen measure in the sequel, also known as the Seysen metric [11]) is at the heart of the Seysen lattice reduction algorithm and is linked with different geometrical properties of the basis [8, 7, 10, 11]. In this paper, we give explicit expressions for this measure as well as new inequalities.
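The measure can be computed directly from the definition once the dual basis is formed: with the basis vectors as rows of B, the dual vectors bᵢ* are the rows of (B Bᵀ)⁻¹ B. A minimal exact sketch; note that by Cauchy-Schwarz each term satisfies ‖bᵢ‖²‖bᵢ*‖² ≥ ⟨bᵢ, bᵢ*⟩² = 1, so S(B) ≥ n, with equality for an orthogonal basis.

```python
from fractions import Fraction

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def mat_inv(G):
    """Exact inverse by rational Gauss-Jordan elimination."""
    n = len(G)
    M = [[Fraction(G[r][c]) for c in range(n)] + [Fraction(int(r == c)) for c in range(n)]
         for r in range(n)]
    for c in range(n):
        piv = next(r for r in range(c, n) if M[r][c])
        M[c], M[piv] = M[piv], M[c]
        M[c] = [x / M[c][c] for x in M[c]]
        for r in range(n):
            if r != c and M[r][c]:
                f = M[r][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [row[n:] for row in M]

def dual_basis(B):
    """Rows b_i* of (B B^T)^{-1} B, so that <b_i, b_j*> = delta_ij."""
    G = [[dot(bi, bj) for bj in B] for bi in B]   # Gram matrix
    Ginv = mat_inv(G)
    return [[dot(row, col) for col in zip(*B)] for row in Ginv]

def seysen_measure(B):
    """S(B) = sum_i ||b_i||^2 * ||b_i*||^2."""
    return sum(dot(b, b) * dot(bs, bs) for b, bs in zip(B, dual_basis(B)))
```

For the identity basis S(B) = n, and any departure from orthogonality strictly increases the measure, which is what Seysen reduction tries to drive down.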