Results 1–10 of 20
Recursive calculation of dominant singular subspaces
SIAM J. Matrix Anal. Appl., 1999
Cited by 8 (2 self)
In this paper we show how to compute recursively an approximation of the left and right dominant singular subspaces of a given matrix. In order to perform as few operations as possible on each column of the matrix, we use a variant of the classical Gram–Schmidt algorithm to estimate this subspace. The method is shown to be particularly suited for matrices with many more rows than columns. Bounds for the accuracy of the computed subspace are provided. Moreover, the analysis of error propagation in this algorithm provides new insights into the loss of orthogonality typically observed in the classical Gram–Schmidt method.
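The column-by-column Gram–Schmidt orthogonalization the abstract builds on can be sketched as plain modified Gram–Schmidt QR in pure Python. This is a minimal illustration only, not the paper's recursive subspace-tracking variant; the function name `mgs_qr` is ours.

```python
import math

def mgs_qr(cols):
    """Modified Gram-Schmidt QR of a tall matrix given as a list of
    columns (each a list of floats). Returns (Q, R) with Q a list of
    orthonormal columns and R upper triangular, stored as rows."""
    n = len(cols)
    q = []                          # orthonormal columns built so far
    r = [[0.0] * n for _ in range(n)]
    for j, v in enumerate(cols):
        v = list(v)                 # work on a copy of column j
        for i in range(j):          # subtract projections one at a time (MGS)
            r[i][j] = sum(qi * vi for qi, vi in zip(q[i], v))
            v = [vi - r[i][j] * qi for qi, vi in zip(q[i], v)]
        r[j][j] = math.sqrt(sum(vi * vi for vi in v))
        q.append([vi / r[j][j] for vi in v])
    return q, r
```

Because each column is touched only a constant number of times after it arrives, this per-column processing pattern is what makes recursive variants attractive for matrices with many more rows than columns.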
Componentwise Perturbation Analyses for the QR Factorization
Cited by 8 (4 self)
This paper gives componentwise perturbation analyses for Q and R in the QR factorization A = QR, QᵀQ = I, R upper triangular, for a given real m × n matrix A of rank n. Such specific analyses are important, for example, when the columns of A are badly scaled. First-order perturbation bounds are given for both Q and R.
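The badly-scaled-columns situation can be illustrated numerically: compute the R-factor of a 2×2 matrix via one Givens rotation, then apply a componentwise-small relative perturbation and observe that the componentwise relative change in R stays of the same order. A toy sketch only (the helper `qr_2x2` and the data are ours, not the paper's):

```python
import math

def qr_2x2(a):
    """QR of a 2x2 matrix via a single Givens rotation.
    Returns R as a list of rows, with r11 >= 0."""
    (a11, a12), (a21, a22) = a
    h = math.hypot(a11, a21)
    c, s = a11 / h, a21 / h          # rotation zeroing the (2,1) entry
    return [[c * a11 + s * a21, c * a12 + s * a22],
            [0.0, -s * a12 + c * a22]]

A = [[1.0, 1e-8], [1.0, 3e-8]]       # second column badly scaled
eps = 1e-10
# componentwise-small perturbation with mixed signs (|dA_ij| = eps*|A_ij|)
dA = [[ eps * A[0][0], -eps * A[0][1]],
      [-eps * A[1][0],  eps * A[1][1]]]
Ap = [[A[i][j] + dA[i][j] for j in range(2)] for i in range(2)]
R, Rp = qr_2x2(A), qr_2x2(Ap)
# maximum componentwise relative change over the nonzero entries of R
rel = max(abs(Rp[i][j] - R[i][j]) / abs(R[i][j])
          for i in range(2) for j in range(2) if R[i][j] != 0.0)
```

Despite the 10⁸ disparity in column scales, `rel` here remains a small multiple of `eps`, the kind of behavior a componentwise (rather than normwise) analysis is designed to capture.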
Perturbation Analysis of the QR Factor R in the Context of LLL Lattice Basis Reduction
2009
Cited by 8 (7 self)
... an efficiently computable notion of reduction of a basis of a Euclidean lattice that is now commonly referred to as LLL-reduction. The precise definition involves the R-factor of the QR factorisation of the basis matrix. A natural means of speeding up the LLL reduction algorithm is to use a (floating-point) approximation to the R-factor. In the present article, we investigate the accuracy of the factor R of the QR factorisation of an LLL-reduced basis. The results we obtain should be very useful for devising LLL-type algorithms relying on floating-point approximations.
Rigorous perturbation bounds for some matrix factorizations
SIAM J. Matrix Anal. Appl.
Cited by 6 (3 self)
This article presents rigorous normwise perturbation bounds for the Cholesky, LU, and QR factorizations with normwise or componentwise perturbations in the given matrix. The considered componentwise perturbations have the form of backward rounding errors for the standard factorization algorithms. The approach used is a combination of the classic and refined matrix-equation approaches. Each of the new rigorous perturbation bounds is a small constant multiple of the corresponding first-order perturbation bound obtained by the refined matrix-equation approach in the literature and can be estimated efficiently. These new bounds can be much tighter than the existing rigorous bounds obtained by the classic matrix-equation approach, while the conditions for the former to hold are almost as moderate as the conditions for the latter to hold.
AMS subject classifications. 15A23, 65F35
Key words. Perturbation analysis, normwise perturbation, componentwise perturbation,
Certification of the QR Factor R, and of Lattice Basis Reducedness
Cited by 5 (1 self)
Given a lattice basis of n vectors in Zⁿ, we propose an algorithm using 12n³ + O(n²) floating-point operations for checking whether the basis is LLL-reduced. If the basis is reduced, then the algorithm will hopefully answer "yes". If the basis is not reduced, or if the precision used is not sufficient with respect to n and to the numerical properties of the basis, the algorithm will answer "failed". Hence a positive answer is a rigorous certificate. For implementing the certificate itself, we propose a floating-point algorithm for computing (certified) error bounds for the R-factor of the QR factorization. This algorithm takes into account all possible approximation and rounding errors. The certificate may be implemented using matrix library routines only. We report experiments showing that, for a reduced basis of adequate dimension and quality, the certificate succeeds, and we establish the effectiveness of the certificate. This effectiveness is applied to certifying the output of the fastest existing floating-point heuristics for LLL reduction, without slowing down the whole process.
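The conditions being certified are inequalities on the entries of the R-factor. Below is a plain, uncertified check of those inequalities given an exact R, so one can see what the paper's floating-point certificate must bound; the function name `is_lll_reduced` and the default Lovász parameter are our choices, and none of the error-bound machinery of the paper appears here.

```python
def is_lll_reduced(R, delta=0.99):
    """Check the LLL conditions on an upper-triangular R-factor
    (list of rows) of a lattice basis, with Lovasz parameter delta.
    Exact arithmetic on R is assumed -- no certified error bounds."""
    n = len(R)
    # size-reduction condition: |r_ij| <= |r_ii| / 2 for i < j
    for i in range(n):
        for j in range(i + 1, n):
            if abs(R[i][j]) > 0.5 * abs(R[i][i]):
                return False
    # Lovasz condition on consecutive diagonal entries
    for k in range(1, n):
        if delta * R[k - 1][k - 1] ** 2 > R[k][k] ** 2 + R[k - 1][k] ** 2:
            return False
    return True
```

For example, the basis {(2, 0), (1, 2)} has R-factor [[2, 1], [0, 2]] and passes both conditions, whereas a basis whose R-factor contains an off-diagonal entry much larger than the corresponding diagonal entry fails size reduction.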
Sensitivity Analyses for Factorizations of Sparse or Structured Matrices
, 1998
Cited by 5 (2 self)
For a unique factorization of a matrix B, the effect of sparsity or other structure on measuring the sensitivity of the factors of B to some change G in B is considered. In particular, norm-based analyses of the QR and Cholesky factorizations are examined. If B is structured but G is not, it is shown that the expressions for the condition numbers are identical to those when B is not structured, but because of the structure the condition numbers may be easier to estimate. If G is structured, whether B is or not, then the expressions for the condition numbers can change, and it is shown how to derive the new expressions. Cases where B and G have the same sparsity structure occur often: here, for the QR factorization, an example shows that the value of the new expression can be arbitrarily smaller, but for the Cholesky factorization of a tridiagonal matrix with a tridiagonal perturbation, the value of the new expression cannot be significantly different from the value of the old one. Thus taking account of sparsity can show the condition is much better than would be suggested by ignoring it, but only for some classes of problems, and perhaps only for some types of factorization. The generalization of these ideas to other factorizations is discussed.
Perturbation Analyses for the Cholesky Downdating Problem
, 1996
Cited by 4 (2 self)
New perturbation analyses are presented for the block Cholesky downdating problem UᵀU = RᵀR − XᵀX. These show how changes in R and X alter the Cholesky factor U. There are two main cases for the perturbation matrix ΔR in R: (1) ΔR is a general matrix; (2) ΔR is an upper triangular matrix. For both cases, first-order perturbation bounds for the downdated Cholesky factor U are given using two approaches: a detailed "matrix-vector equation" analysis, which provides tight bounds and resulting true condition numbers, which unfortunately are costly to compute, and a simpler "matrix-equation" analysis, which provides results that are weaker but easier to compute or estimate. The analyses more accurately reflect the sensitivity of the problem than previous results. As X → 0, the asymptotic values of the new condition numbers for case (1) have bounds that are independent of κ₂(R) if R was found using the standard pivoting strategy in the Cholesky factorization, and the asymptotic values of the new condition numbers for case (2) are unity. Simple reasoning shows this last result must be true for the sensitivity of the problem, but previous condition numbers did not exhibit this.
Key words. perturbation analysis, sensitivity, condition, asymptotic condition, Cholesky factorization, downdating
AMS subject classifications. 15A23, 65F35
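To make the downdating problem UᵀU = RᵀR − XᵀX concrete, here is a deliberately naive sketch: form the matrix RᵀR − XᵀX explicitly and Cholesky-factor it. This is not a stable downdating algorithm (forming the product squares the conditioning, which is precisely why the sensitivity of proper downdating methods is studied); the helper names are ours.

```python
import math

def cholesky_upper(M):
    """Upper-triangular U with U^T U = M, for a small SPD matrix M
    given as a list of rows."""
    n = len(M)
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        s = M[i][i] - sum(U[k][i] ** 2 for k in range(i))
        U[i][i] = math.sqrt(s)       # fails (math domain) if M is not SPD
        for j in range(i + 1, n):
            t = M[i][j] - sum(U[k][i] * U[k][j] for k in range(i))
            U[i][j] = t / U[i][i]
    return U

def downdate(R, X):
    """Naive U with U^T U = R^T R - X^T X; assumes the result is SPD.
    R and X are lists of rows with the same number of columns."""
    n = len(R[0])
    M = [[sum(R[k][i] * R[k][j] for k in range(len(R)))
          - sum(X[k][i] * X[k][j] for k in range(len(X)))
          for j in range(n)] for i in range(n)]
    return cholesky_upper(M)
```

For R = [[2, 1], [0, 2]] and a single downdated row X = [[1, 0]], the formed matrix is [[3, 2], [2, 5]], and the computed U satisfies UᵀU = RᵀR − XᵀX to roundoff.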
On The Sensitivity Of The LU Factorization
, 1997
Cited by 2 (1 self)
This paper gives sensitivity analyses, by two approaches, for L and U in the factorization A = LU for general perturbations in A that are sufficiently small in norm. By the matrix-vector equation approach, we derive the condition numbers for the L and U factors. By the matrix-equation approach, we derive corresponding condition estimates. We show how partial pivoting and complete pivoting affect the sensitivity of the LU factorization.
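The partial pivoting whose effect on sensitivity is analyzed can be sketched as a minimal Doolittle-style LU factorization with row interchanges, PA = LU. An illustrative pure-Python sketch (the function name is ours; it is not the paper's code):

```python
def lu_partial_pivot(A):
    """LU factorization with partial pivoting of a square matrix A
    (list of rows). Returns (perm, L, U) such that the row-permuted
    matrix [A[perm[0]], A[perm[1]], ...] equals L @ U, with L unit
    lower triangular and U upper triangular."""
    n = len(A)
    U = [row[:] for row in A]        # U is overwritten in place
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    perm = list(range(n))
    for k in range(n):
        # choose the pivot row with the largest |entry| in column k
        p = max(range(k, n), key=lambda i: abs(U[i][k]))
        if p != k:
            U[k], U[p] = U[p], U[k]
            perm[k], perm[p] = perm[p], perm[k]
            for j in range(k):       # swap the already-computed part of L
                L[k][j], L[p][j] = L[p][j], L[k][j]
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]    # multiplier, |m| <= 1 by pivoting
            L[i][k] = m
            for j in range(k, n):
                U[i][j] -= m * U[k][j]
    return perm, L, U
```

Partial pivoting keeps every multiplier bounded by one in magnitude, which constrains the growth of L and is one reason the pivoted factorization can be far less sensitive than the unpivoted one.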