Results 1–10 of 58
On The Complexity Of Computing Determinants
 COMPUTATIONAL COMPLEXITY, 2001
Abstract

Cited by 59 (18 self)
We present new baby steps/giant steps algorithms of asymptotically fast running time for dense matrix problems. Our algorithms compute the determinant, characteristic polynomial, Frobenius normal form and Smith normal form of a dense n × n matrix A with integer entries in (n^3.2 log ||A||)^(1+o(1)) and (n^2.697263 log ||A||)^(1+o(1)) bit operations; here ||A|| denotes the largest entry in absolute value and the exponent adjustment by "+o(1)" captures additional factors C1 (log n)^C2 (loglog ||A||)^C3 for positive real constants C1, C2, C3. The bit complexity (n^3.5 log ||A||)^(1+o(1)) results from using the classical cubic matrix multiplication algorithm. Our algorithms are randomized, and we can certify that the output is the determinant of A in a Las Vegas fashion. The second category of problems deals with the setting where the matrix A has elements from an abstract commutative ring, that is, when no divisions in the domain of entries are possible. We present algorithms that deterministically compute the determinant, characteristic polynomial and adjoint of A with n^(3.2+o(1)) and O(n^2.697263) ring additions, subtractions and multiplications.
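As a concrete baseline for the integer-determinant setting, the classical Chinese-remainder approach (which the algorithms above improve upon asymptotically) computes det A modulo several word-size primes and then reconstructs the integer. The sketch below is ours, not the paper's algorithm, and the primes are chosen by hand rather than from a Hadamard bound:

```python
from math import prod

def det_mod_p(A, p):
    """Determinant of an integer matrix modulo prime p, by Gaussian elimination."""
    A = [[x % p for x in row] for row in A]
    n, det = len(A), 1
    for c in range(n):
        piv = next((r for r in range(c, n) if A[r][c]), None)
        if piv is None:
            return 0
        if piv != c:
            A[c], A[piv] = A[piv], A[c]
            det = -det % p                      # row swap flips the sign
        det = det * A[c][c] % p
        inv = pow(A[c][c], p - 2, p)            # Fermat inverse of the pivot
        for r in range(c + 1, n):
            f = A[r][c] * inv % p
            for k in range(c, n):
                A[r][k] = (A[r][k] - f * A[c][k]) % p
    return det

def det_crt(A, primes):
    """Reconstruct det A from its residues modulo distinct primes.
    The prime product must exceed 2 * |det A| (e.g. twice the Hadamard
    bound) so that the symmetric residue below is the true determinant."""
    M = prod(primes)
    x = 0
    for p in primes:
        Mp = M // p
        x = (x + det_mod_p(A, p) * Mp * pow(Mp, -1, p)) % M
    return x if x <= M // 2 else x - M          # symmetric lift to signed value

A = [[2, 3, 1], [4, 1, -2], [0, 5, 7]]
print(det_crt(A, [101, 103, 107]))  # → -30
```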
Improving the Robustness of Private Information Retrieval
 In Proceedings of IEEE Security and Privacy Symposium, 2007
Abstract

Cited by 44 (16 self)
Since 1995, much work has been done creating protocols for private information retrieval (PIR). Many variants of the basic PIR model have been proposed, including such modifications as computational vs. information-theoretic privacy protection, correctness in the face of servers that fail to respond or that respond incorrectly, and protection of sensitive data against the database servers themselves. In this paper, we improve on the robustness of PIR in a number of ways. First, we present a Byzantine-robust PIR protocol which provides information-theoretic privacy protection against coalitions of up to all but one of the responding servers, improving the previous result by a factor of 3. In addition, our protocol allows for more of the responding servers to return incorrect information while still enabling the user to compute the correct result. We then extend our protocol so that queries have information-theoretic protection if a limited number of servers collude, as before, but still retain computational protection if they all collude. We also extend the protocol to provide information-theoretic protection to the contents of the database against collusions of limited numbers of the database servers, at no additional communication cost or increase in the number of servers. All of our protocols retrieve a block of data with communication cost only O(ℓ) times the size of the block, where ℓ is the number of servers. Finally, we discuss our implementation of these protocols, and measure their performance in order to determine their practicality.
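The basic information-theoretic PIR model being hardened here can be sketched with the classic two-server XOR scheme of Chor, Goldreich, Kushilevitz and Sudan. This toy version (ours, not the paper's Byzantine-robust protocol) retrieves a single bit while each server on its own sees only a uniformly random query:

```python
import secrets

def make_queries(n, i):
    """Client side: two queries, each uniformly random on its own,
    differing exactly at the wanted index i."""
    q1 = [secrets.randbelow(2) for _ in range(n)]
    q2 = q1[:]
    q2[i] ^= 1
    return q1, q2

def server_answer(db, q):
    """Server side: XOR of the database bits selected by the query."""
    acc = 0
    for bit, sel in zip(db, q):
        acc ^= bit & sel
    return acc

def retrieve(db, i):
    """Every selected bit cancels in the XOR of the answers except bit i."""
    q1, q2 = make_queries(len(db), i)
    return server_answer(db, q1) ^ server_answer(db, q2)

db = [1, 0, 0, 1, 1, 0, 1, 0]
print([retrieve(db, i) for i in range(len(db))])  # → [1, 0, 0, 1, 1, 0, 1, 0]
```

Privacy is information-theoretic because each query in isolation is a uniform bit vector; correctness fails as soon as one server answers incorrectly, which is exactly the robustness gap the paper addresses.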
Computing the Rank and a Small Nullspace Basis of a Polynomial Matrix
2005
Abstract

Cited by 19 (3 self)
We reduce the problem of computing the rank and a nullspace basis of a univariate polynomial matrix to polynomial matrix multiplication. For an input n × n matrix of degree d over a field K we give a rank and nullspace algorithm using about the same number of operations as for multiplying two matrices of dimension n and degree d. If the latter multiplication is done in MM(n, d) = O˜(n^ω d) operations, with ω the exponent of matrix multiplication over K, then the algorithm uses O˜(MM(n, d)) operations in K. For m × n matrices of rank r and degree d, the cost expression is O˜(nm r^(ω−2) d). The soft-O notation O˜ indicates some missing logarithmic factors. The method is randomized with Las Vegas certification. We achieve our results in part through a combination of matrix Hensel high-order lifting and matrix minimal fraction reconstruction, and through the computation of minimal or small degree vectors in the nullspace seen as a K[x]-module.
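A common Monte Carlo baseline for the rank problem, far slower than the paper's reduction to polynomial matrix multiplication but easy to state, evaluates A(x) at a random point: rank A(α) ≤ rank A(x) for every α, with equality except at the finitely many roots of some nonzero minor. A sketch of this baseline (ours, not the paper's algorithm) over Q:

```python
from fractions import Fraction
import random

def rank_over_Q(M):
    """Rank of a rational matrix by exact Gaussian elimination."""
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols, r = len(M), len(M[0]), 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if M[i][c]), None)
        if piv is None:
            continue                       # no pivot in this column
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, rows):
            f = M[i][c] / M[r][c]
            for k in range(c, cols):
                M[i][k] -= f * M[r][k]
        r += 1
    return r

def rank_poly_matrix(A, trials=2):
    """A[i][j] is a coefficient list (low degree first) over Q.
    Each random evaluation gives a lower bound on the rank of A(x),
    so the maximum over a few trials is correct with high probability."""
    ev = lambda poly, a: sum(Fraction(c) * a**j for j, c in enumerate(poly))
    best = 0
    for _ in range(trials):
        alpha = Fraction(random.randint(1, 10**6))   # one point per trial
        best = max(best, rank_over_Q([[ev(p, alpha) for p in row] for row in A]))
    return best

# [[x, x^2], [1, x]]: the first row is x times the second, so rank 1
A = [[[0, 1], [0, 0, 1]],
     [[1],    [0, 1]]]
print(rank_poly_matrix(A))  # → 1
```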
Normal Forms for General Polynomial Matrices
2001
Abstract

Cited by 17 (10 self)
We present an algorithm for the computation of a shifted Popov Normal Form of a rectangular polynomial matrix. For specific input shifts, we obtain methods for computing the matrix greatest common divisor of two matrix polynomials (in normal form) and for such polynomial normal form computations as the classical Popov form and the Hermite Normal Form.
Dense linear algebra over word-size prime fields: the FFLAS and FFPACK packages
2009
Abstract

Cited by 15 (5 self)
In the past two decades, some major efforts have been made to reduce exact (e.g. integer, rational, polynomial) linear algebra problems to matrix multiplication in order to provide algorithms with optimal asymptotic complexity. To provide efficient implementations of such algorithms, one needs to be careful with the underlying arithmetic. It is well known that modular techniques such as the Chinese remainder algorithm or p-adic lifting allow very good practical performance, especially when word-size arithmetic is used. Therefore, finite field arithmetic becomes an important core for efficient exact linear algebra libraries. In this paper, we study high performance implementations of basic linear algebra routines over word-size prime fields, especially matrix multiplication; our goal is to provide an exact alternative to the numerical BLAS library. We show that this is made possible by a careful combination of numerical computations and asymptotically faster algorithms. Our kernel has …
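The central trick, running the multiplication in floating point so a numerical BLAS does the work and reducing modulo p only afterwards, is easy to illustrate. In this sketch (ours, not FFLAS code) numpy stands in for the BLAS, and the bound k(p−1)² < 2⁵³ is the standard exactness condition for delayed reduction in double precision:

```python
import numpy as np

def fgemm(A, B, p):
    """C = A*B mod p via float64 matrix multiplication (the BLAS path).

    Exactness condition: every inner product stays below 2**53, i.e.
    k * (p - 1)**2 < 2**53 for inner dimension k, so each dot product
    is an exactly representable integer in double precision."""
    k = A.shape[1]
    assert k * (p - 1) ** 2 < 2 ** 53, "delayed reduction would overflow"
    C = np.dot(A.astype(np.float64), B.astype(np.float64))
    return np.mod(C, p).astype(np.int64)

rng = np.random.default_rng(42)
p = 65521                                   # a word-size prime
A = rng.integers(0, p, (64, 64))
B = rng.integers(0, p, (64, 64))
assert np.array_equal(fgemm(A, B, p), (A @ B) % p)   # check against exact int64
```

For p = 65521 this allows inner dimensions up to about 2⁵³ / p² ≈ 2000 before reductions must be interleaved, which is where the blocking strategies of a real kernel come in.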
Essentially optimal computation of the inverse of generic polynomial matrices
 J. Complexity, 2004
Abstract

Cited by 15 (2 self)
We present an inversion algorithm for nonsingular n × n matrices whose entries are degree d polynomials over a field. The algorithm is deterministic and, when n is a power of two, requires O˜(n^3 d) field operations for a generic input; the soft-O notation O˜ indicates some missing log(nd) factors. Up to such logarithmic factors, this asymptotic complexity is of the same order as the number of distinct field elements necessary to represent the inverse matrix.
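One way to read "essentially optimal" (our gloss, not part of the abstract): writing A^(-1) = adj(A)/det A, each entry of adj(A) is an (n−1)×(n−1) minor, hence a polynomial of degree at most (n−1)d, and det A has degree at most nd. Merely writing down the n² entries of the inverse therefore takes on the order of n² · nd = n³d field elements, so the O˜(n^3 d) operation count cannot be improved by more than logarithmic factors.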
Faster Inversion and Other Black Box Matrix Computations Using Efficient Block Projections
Abstract

Cited by 12 (1 self)
Efficient block projections of nonsingular matrices have recently been used by the authors in [10] to obtain an efficient algorithm to find rational solutions for sparse systems of linear equations. In particular, a bound of O˜(n^2.5) machine operations is presented for this computation, assuming that the input matrix can be multiplied by a vector with constant-sized entries using O˜(n) machine operations. Somewhat more general bounds for black-box matrix computations are also derived. Unfortunately, the correctness of this algorithm depends on the existence of efficient block projections of nonsingular matrices, and this was only conjectured. In this paper we establish the correctness of the algorithm from [10] by proving the existence of efficient block projections for arbitrary nonsingular matrices over sufficiently large fields. We further demonstrate the usefulness of these projections by incorporating them into existing black-box matrix algorithms to derive improved bounds for the cost of several matrix problems. We consider, in particular, matrices that can be multiplied by a vector using O˜(n) field operations: we show how to compute the inverse of any such nonsingular matrix over any field using an expected number of O˜(n^2.27) operations in that field. A basis for the null space of such a matrix, and a certification of its …
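The black-box model used here can be illustrated with the scalar Wiedemann method on which such algorithms build: the matrix is accessed only through v ↦ Av products, and a Berlekamp-Massey step recovers a minimal polynomial from the projected Krylov sequence. A sketch of this foundation (ours, not the paper's block-projection algorithm):

```python
import random

def berlekamp_massey(s, p):
    """Shortest linear recurrence for s over GF(p): returns (C, L) with
    C[0] = 1 and sum(C[j] * s[n-j] for j in range(L + 1)) == 0 (mod p)
    for every n >= L."""
    C, B = [1], [1]
    L, m, b = 0, 1, 1
    for n in range(len(s)):
        d = s[n] % p
        for i in range(1, L + 1):
            d = (d + C[i] * s[n - i]) % p      # discrepancy of term n
        if d == 0:
            m += 1
            continue
        coef = d * pow(b, p - 2, p) % p
        T = C[:]
        if len(B) + m > len(C):
            C += [0] * (len(B) + m - len(C))
        for i, Bi in enumerate(B):
            C[i + m] = (C[i + m] - coef * Bi) % p
        if 2 * L <= n:
            L, B, b, m = n + 1 - L, T, d, 1    # recurrence length grows
        else:
            m += 1
    return C, L

def wiedemann_minpoly(apply_A, n, p, rng=None):
    """Candidate minimal polynomial of a black-box matrix: project the
    Krylov sequence A^k v onto a random u and find its recurrence; with
    high probability over u, v this is the minimal polynomial of A.
    Returned monic, low degree first."""
    rng = rng or random.Random(1)
    u = [rng.randrange(1, p) for _ in range(n)]
    w = [rng.randrange(1, p) for _ in range(n)]
    seq = []
    for _ in range(2 * n):                     # 2n terms suffice
        seq.append(sum(ui * wi for ui, wi in zip(u, w)) % p)
        w = apply_A(w)
    C, L = berlekamp_massey(seq, p)
    return [C[L - i] for i in range(L + 1)]    # reverse: connection -> minpoly

# Black box: a diagonal matrix, so one A*v product costs O(n) operations
p, diag = 10007, [2, 3, 5, 3]
apply_A = lambda w: [d * x % p for d, x in zip(diag, w)]
mp = wiedemann_minpoly(apply_A, len(diag), p)
# expected (w.h.p.): (x-2)(x-3)(x-5) = x^3 - 10x^2 + 31x - 30 mod p
```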
Ideal forms of Coppersmith’s theorem and Guruswami-Sudan list decoding
2011
Abstract

Cited by 11 (0 self)
We develop a framework for solving polynomial equations with size constraints on solutions. We obtain our results by showing how to apply a technique of Coppersmith for finding small solutions of polynomial equations modulo integers to analogous problems over polynomial rings, number fields, and function fields. This gives us a unified view of several problems arising naturally in cryptography, coding theory, and the study of lattices. We give (1) a polynomial-time algorithm for finding small solutions of polynomial equations modulo ideals over algebraic number fields, (2) a faster variant of the Guruswami-Sudan algorithm for list decoding of Reed-Solomon codes, and (3) an algorithm for list decoding of algebraic-geometric codes that handles both single-point and multi-point codes. Coppersmith’s algorithm uses lattice basis reduction to find a short vector in a carefully constructed lattice; powerful analogies from algebraic number theory allow us to identify the appropriate analogue of a lattice in each case and provide efficient algorithms to find a suitably short vector, thus allowing us to give completely parallel proofs of the above theorems.
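The "short vector in a carefully constructed lattice" step is easiest to see in dimension two, where lattice basis reduction is classical Lagrange-Gauss reduction. This toy sketch (ours, not the paper's construction) finds a small solution of a linear congruence, a baby version of Coppersmith's setup:

```python
def gauss_reduce(u, v):
    """Lagrange-Gauss reduction: a shortest basis of the planar lattice
    Z*u + Z*v (the two-dimensional special case of LLL)."""
    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1]
    if dot(u, u) > dot(v, v):
        u, v = v, u
    while True:
        m = round(dot(u, v) / dot(u, u))   # nearest-integer size reduction
        v = (v[0] - m * u[0], v[1] - m * u[1])
        if dot(v, v) >= dot(u, u):
            return u, v                    # u is a shortest nonzero vector
        u, v = v, u

# The lattice spanned by (101, 0) and (37, 1) contains (a, b) exactly
# when a = 37*b (mod 101); its shortest vector reveals the small pair
# (10, 3) with 10 = 37*3 mod 101.
print(gauss_reduce((101, 0), (37, 1)))  # → ((-10, -3), (7, -8))
```

In Coppersmith's method the same idea is applied to a higher-dimensional lattice built from shifts and powers of the input polynomial, with LLL in place of the two-dimensional reduction.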
Simplified high-speed high-distance list decoding for alternant codes
Abstract

Cited by 9 (4 self)
This paper presents a simplified list-decoding algorithm to correct any number w of errors in any alternant code of any length n with any designed distance t + 1 over any finite field Fq; in particular, in the classical Goppa codes used in the McEliece and Niederreiter public-key cryptosystems. The algorithm is efficient for w close to, and in many cases slightly beyond, the Fq Johnson bound J′ = n′ − √(n′(n′ − t − 1)) where n′ = n(q − 1)/q, assuming t + 1 ≤ n′. In the typical case that qn/t ∈ (lg n)^O(1) and that the parent field has (lg n)^O(1) bits, the algorithm uses n (lg n)^O(1) bit operations for w ≤ J′ − n/(lg n)^O(1); O(n^4.5) bit operations for w ≤ J′ + o((lg n)/lg lg n); and n^O(1) bit operations for w ≤ J′ + O((lg n)/lg lg n).
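For contrast with list decoding near the Johnson bound above, classical unique decoding of Reed-Solomon codes (the simplest alternant codes) up to (n − k)/2 errors is already short to implement via the Berlekamp-Welch linear system. This sketch (ours, not the paper's algorithm) works over a small prime field:

```python
def solve_mod(rows, rhs, p):
    """Gauss-Jordan solver for a consistent linear system over GF(p);
    free variables, if any, are set to zero."""
    n, m = len(rows), len(rows[0])
    A = [row[:] + [b % p] for row, b in zip(rows, rhs)]
    r, piv_cols = 0, []
    for c in range(m):
        piv = next((i for i in range(r, n) if A[i][c] % p), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        inv = pow(A[r][c], p - 2, p)
        A[r] = [x * inv % p for x in A[r]]
        for i in range(n):
            if i != r and A[i][c]:
                f = A[i][c]
                A[i] = [(x - f * y) % p for x, y in zip(A[i], A[r])]
        piv_cols.append(c)
        r += 1
    sol = [0] * m
    for i, c in enumerate(piv_cols):
        sol[c] = A[i][m]
    return sol

def poly_div(num, den, p):
    """Exact quotient of polynomials over GF(p), coefficients low-first."""
    num, q = num[:], [0] * (len(num) - len(den) + 1)
    inv = pow(den[-1], p - 2, p)
    for i in range(len(q) - 1, -1, -1):
        q[i] = num[i + len(den) - 1] * inv % p
        for j, dj in enumerate(den):
            num[i + j] = (num[i + j] - q[i] * dj) % p
    return q

def berlekamp_welch(xs, ys, k, e, p):
    """Recover f with deg f < k from ys[i] = f(xs[i]) with at most e errors.
    Solve for Q (deg < k+e) and monic E (deg e) with Q(x) = y*E(x) at every
    point; then f = Q / E. Needs len(xs) >= k + 2*e."""
    rows, rhs = [], []
    for x, y in zip(xs, ys):
        xp = [pow(x, j, p) for j in range(k + e + 1)]
        rows.append(xp[:k + e] + [(-y * xp[j]) % p for j in range(e)])
        rhs.append(y * xp[e] % p)
    sol = solve_mod(rows, rhs, p)
    Q, E = sol[:k + e], sol[k + e:] + [1]      # appended 1 makes E monic
    return poly_div(Q, E, p)

# f(x) = 7 + 3x over GF(101), evaluated at 1..6, with 2 positions corrupted
ys = [50, 13, 16, 0, 22, 25]                   # true values 10,13,16,19,22,25
print(berlekamp_welch([1, 2, 3, 4, 5, 6], ys, k=2, e=2, p=101))  # → [7, 3]
```

The Guruswami-Sudan-style decoders the paper simplifies replace this linear system with a bivariate interpolation and root-finding step, which is what pushes the error count past (n − k)/2 toward the Johnson bound.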