Results 1–10 of 20
LSQR: An Algorithm for Sparse Linear Equations and Sparse Least Squares
ACM Trans. Math. Software, 1982
Cited by 653 (21 self)
"An iterative method is given for solving Ax ≈ b and min ‖Ax − b‖₂, where the matrix A is large and sparse. The method is based on the bidiagonalization procedure of Golub and Kahan. It is analytically equivalent to the standard method of conjugate gradients, but possesses more favorable numerical ..."
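LSQR as described in this abstract is available in SciPy as `scipy.sparse.linalg.lsqr`. A minimal sketch (assuming NumPy and SciPy are installed; the matrix and right-hand side below are arbitrary illustrative values):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import lsqr

# Small overdetermined system; lsqr touches A only through the
# products A @ v and A.T @ u, so A can stay sparse.
A = csr_matrix(np.array([[1.0, 0.0],
                         [1.0, 1.0],
                         [0.0, 2.0]]))
b = np.array([1.0, 2.0, 3.0])

x = lsqr(A, b)[0]  # minimizer of ||A x - b||_2
```

Because A is only applied as an operator, the same call accepts any `LinearOperator`, matching the abstract's emphasis on large sparse problems.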
Lanczos Bidiagonalization With Partial Reorthogonalization
1998
Cited by 84 (0 self)
"A partial reorthogonalization procedure (BPRO) for maintaining semiorthogonality among the left and right Lanczos vectors in the Lanczos bidiagonalization (LBD) is presented. The resulting algorithm is mathematically equivalent to the symmetric Lanczos algorithm with partial reorthogonalization ..."
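The Golub–Kahan Lanczos bidiagonalization underlying this paper can be sketched in a few lines of NumPy. This is an illustrative sketch, not the paper's implementation: it uses full reorthogonalization for simplicity, whereas the paper's contribution is the cheaper partial variant that reorthogonalizes only when semiorthogonality is about to be lost.

```python
import numpy as np

def golub_kahan_bidiag(A, k, seed=0):
    """Run k steps of Golub-Kahan Lanczos bidiagonalization.

    Returns U (m x k), V (n x k) with orthonormal columns and a
    lower-bidiagonal B (k x k) such that U.T @ A @ V ~= B.
    """
    m, n = A.shape
    U = np.zeros((m, k + 1))
    V = np.zeros((n, k))
    alpha = np.zeros(k)
    beta = np.zeros(k + 1)
    u = np.random.default_rng(seed).standard_normal(m)
    U[:, 0] = u / np.linalg.norm(u)
    for j in range(k):
        v = A.T @ U[:, j]
        if j > 0:
            v -= beta[j] * V[:, j - 1]
        v -= V[:, :j] @ (V[:, :j].T @ v)          # full reorthogonalization
        alpha[j] = np.linalg.norm(v)
        V[:, j] = v / alpha[j]
        u = A @ V[:, j] - alpha[j] * U[:, j]
        u -= U[:, :j + 1] @ (U[:, :j + 1].T @ u)  # full reorthogonalization
        beta[j + 1] = np.linalg.norm(u)
        U[:, j + 1] = u / beta[j + 1]
    # alpha sits on the diagonal of B, beta on the subdiagonal.
    B = np.diag(alpha) + np.diag(beta[1:k], -1)
    return U[:, :k], V, B
```

In exact arithmetic the reorthogonalization terms vanish; in floating point they are what keeps the computed Lanczos vectors (semi)orthogonal, which is the problem both papers above address.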
A ROBUST AND EFFICIENT PARALLEL SVD SOLVER BASED ON RESTARTED LANCZOS BIDIAGONALIZATION
Cited by 5 (0 self)
"Lanczos bidiagonalization is a competitive method for computing a partial singular value decomposition of a large sparse matrix, that is, when only a subset of the singular values and corresponding singular vectors are required. However, a straightforward implementation of the algorithm ... is to be implemented on a distributed-memory parallel computer, then additional precautions are required so that parallel efficiency is maintained as the number of processors increases. In this paper, we present a Lanczos bidiagonalization procedure implemented in SLEPc, a software library for the solution of large ..."
NONUNIT BIDIAGONAL DECOMPOSITIONS OF TOTALLY NONNEGATIVE MATRICES
"A procedure requiring only n³/3 flops for factorizing a given totally nonnegative (TN) n × n matrix A = [a_ij] is proposed, as against existing procedures requiring n³/2 flops. In this procedure, at the ith step, the ith nonunit bidiagonal factor of A1 is generated. Product of such previous ..."
A BIDIAGONALIZATION-REGULARIZATION PROCEDURE FOR LARGE SCALE DISCRETIZATIONS OF ILL-POSED PROBLEMS
"In this paper, we consider ill-posed problems which discretize to linear least squares problems with matrices K of high dimensions. The algorithm proposed uses K only as an operator and does not need to explicitly store or modify it. A method related to one of Lanczos is used to project the problem onto a subspace for which K is bidiagonal. It is then an easy matter to solve the projected problem by standard regularization techniques. These ideas are illustrated with some integral equations of the first kind with convolution kernels, and sample numerical results are given. ..."
Parallel Computation Of Spectral Portraits Of Matrices By Bidiagonalization
1995
"We describe parallel programs for computation of spectral portraits of matrices on the Paragon and Connection Machine 5. The method used consists of bidiagonal reduction of a complex square matrix by unitary Householder transformations and computation of the minimal singular value of the resulting real bidiagonal matrix by the bisection procedure employing Sturm sequences. The computation of the bidiagonal reduction uses the block-cyclic distribution of matrices on a rectangular processor grid in order to get good load balancing. Since the computation of spectral portraits needs to calculate ..."
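The Sturm-sequence bisection step mentioned in this abstract (counting eigenvalues of a symmetric tridiagonal matrix below a shift, then bisecting on that count) can be sketched as follows. This is a simplified illustration, not the paper's parallel code; the function names are hypothetical and the zero-pivot guard is a crude placeholder for a proper scaled threshold.

```python
import numpy as np

def count_eigs_below(d, e, x):
    # Sturm-style count: number of eigenvalues below x for the
    # symmetric tridiagonal matrix with diagonal d and off-diagonal e,
    # via the pivot recurrence q_i = d_i - x - e_{i-1}^2 / q_{i-1}.
    # The count equals the number of negative pivots q_i.
    count, q = 0, 1.0
    for i in range(len(d)):
        q = d[i] - x - (e[i - 1] ** 2 / q if i > 0 else 0.0)
        if q == 0.0:
            q = 1e-300  # crude guard against a zero pivot
        if q < 0:
            count += 1
    return count

def smallest_eig(d, e, tol=1e-12):
    # Bisection on the count: shrink [lo, hi] until it brackets the
    # leftmost eigenvalue.  Initial bounds follow from Gershgorin discs.
    r = 2.0 * np.abs(np.asarray(e)).sum() + 1.0
    lo, hi = min(d) - r, max(d) + r
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if count_eigs_below(d, e, mid) >= 1:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

For the minimal singular value of a bidiagonal matrix B, the same machinery is applied to the associated symmetric tridiagonal matrix (e.g. BᵀB or the Golub–Kahan form), since its eigenvalues are the squared singular values of B.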
A Comprehensive Study of Task Coalescing for Selecting Parallelism Granularity in a Two-Stage Bidiagonal Reduction
Cited by 6 (4 self)
"We present new high performance numerical kernels combined with advanced optimization techniques that significantly increase the performance of parallel bidiagonal reduction. Our approach is based on developing efficient fine-grained computational tasks as well as reducing overheads associated with their high-level scheduling during the so-called bulge chasing procedure that is an essential phase of a scalable bidiagonalization procedure. In essence, we coalesce multiple tasks in a way that reduces the time needed to switch execution context between the scheduler and useful computational tasks ..."
More Accurate Bidiagonal Reduction For Computing The Singular Value Decomposition
SIAM J. Matrix Anal. Appl., 1998
Cited by 6 (0 self)
"Bidiagonal reduction is the preliminary stage for the fastest stable algorithms for computing the singular value decomposition. However, the best error bounds on bidiagonal reduction methods are of the form A + δA = U B Vᵀ, ‖δA‖₂ ≤ ε_M f(n) ‖A‖₂, where B is bidiagonal, U and V are orthogonal, ε_M is machine precision, and f(n) is a modestly growing function of the dimensions of A. A Givens-based bidiagonal reduction procedure is proposed that satisfies A + δA = U(B + δB)Vᵀ, where δB is bounded componentwise and δA satisfies a tighter columnwise bound. Thus ..."
Efficient Algorithms for Reducing Banded Matrices to Bidiagonal and Tridiagonal Form
1997
Cited by 1 (0 self)
"This paper presents efficient techniques for the orthogonal reduction of banded matrices to bidiagonal and symmetric tridiagonal form. The algorithms are numerically stable and well suited to parallel execution. Experiments on the Intel Paragon show that even on a single processor these methods usually ... steps. First, a finite algorithm reduces the matrix to bidiagonal form, A → B = U₁ᵀ A V₁, and then an iterative method (e.g., the Golub/Kahan procedure [8]) is used to compute the SVD of the bidiagonal matrix, B = U₂ Σ V₂ᵀ. Thus, the SVD of A is given by A = (U₁U₂) Σ (V₁V₂)ᵀ ..."
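The two-phase factorization A = (U₁U₂) Σ (V₁V₂)ᵀ described in this snippet is easy to check numerically. The NumPy sketch below manufactures A = U₁ B V₁ᵀ from a known bidiagonal B and random orthogonal factors (standing in for the output of phase one), then composes the SVD of B with them; all values are arbitrary illustrative data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# Manufacture A = U1 @ B @ V1.T with B upper bidiagonal and U1, V1
# orthogonal, mimicking the result of a bidiagonal reduction.
B = np.diag(rng.uniform(1.0, 2.0, n)) + np.diag(rng.uniform(0.0, 1.0, n - 1), 1)
U1, _ = np.linalg.qr(rng.standard_normal((n, n)))
V1, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U1 @ B @ V1.T

# Phase 2: SVD of the bidiagonal factor, B = U2 @ diag(s) @ V2t.
U2, s, V2t = np.linalg.svd(B)

# Composing the factors recovers the SVD of A itself:
# A = (U1 U2) diag(s) (V1 V2)^T, with (V1 V2)^T = V2t @ V1.T.
assert np.allclose(A, (U1 @ U2) @ np.diag(s) @ (V2t @ V1.T))
```

In particular, the singular values of A and of its bidiagonal reduction B coincide, which is why phase two can work on B alone.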
A REFINED HARMONIC LANCZOS BIDIAGONALIZATION METHOD AND AN IMPLICITLY RESTARTED ALGORITHM FOR COMPUTING THE SMALLEST SINGULAR TRIPLETS OF LARGE MATRICES
, 906
"... Abstract. The harmonic Lanczos bidiagonalization method can be used to compute the smallest singular triplets of a large matrix A. We prove that for good enough projection subspaces harmonic Ritz values converge if the columns of A are strongly linearly independent. On the other hand, harmonic Ritz ..."
Abstract

Cited by 7 (4 self)
 Add to MetaCart
, and suggest refined harmonic shifts that are theoretically better than the harmonic shifts used within the implicitly restarted Lanczos bidiagonalization algorithm (IRHLB). We propose a novel procedure that can numerically compute the refined harmonic shifts efficiently and accurately. Numerical experiments