Results 1–10 of 85
ITERATIVE METHODS FOR THE FORCE-BASED QUASICONTINUUM APPROXIMATION
, 2009
Abstract

Cited by 25 (18 self)
Force-based atomistic-continuum hybrid methods are the only known pointwise consistent methods for coupling a general atomistic model to a finite element continuum model. For this reason, and due to their algorithmic simplicity, force-based coupling methods have become a popular approach for atomistic-continuum hybrid methods as well as other types of multiphysics model coupling. However, the recently discovered unusual stability properties of the linearized force-based quasicontinuum approximation, especially its indefiniteness, present a challenge to the development of efficient and reliable iterative methods. Using a combination of rigorous analysis and computational experiments, we present a systematic study of the stability and rate of convergence of a variety of linear stationary iterative methods and generalized minimal residual methods (GMRES) for the solution of the linearized force-based quasicontinuum equations.
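The stability discussion above centers on solving an indefinite, nonsymmetric linear system with GMRES. As a minimal sketch of that setting (the matrix below is a generic nonsymmetric, indefinite tridiagonal stand-in, not the paper's force-based quasicontinuum operator), restarted GMRES from SciPy can be applied directly:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres

# Generic stand-in: a 1D Laplacian-like tridiagonal matrix, perturbed near
# one row to make it nonsymmetric and indefinite (loosely mimicking an
# atomistic/continuum interface; NOT the paper's operator).
n = 200
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)).tolil()
A[n // 2, n // 2] = -0.5       # breaks positive definiteness
A[n // 2, n // 2 + 1] = -1.3   # breaks symmetry
A = A.tocsr()

b = np.ones(n)
x, info = gmres(A, b, restart=50, maxiter=1000)  # info == 0 signals convergence
residual = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
```

Simple stationary iterations may fail to converge outright on indefinite systems like this, which is part of what motivates the paper's systematic comparison with GMRES.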
Computing and deflating eigenvalues while solving multiple right hand side linear systems with an application to quantum chromodynamics
, 2008
Abstract

Cited by 20 (2 self)
Abstract. We present a new algorithm that computes eigenvalues and eigenvectors of a Hermitian positive definite matrix while solving a linear system of equations with Conjugate Gradient (CG). Traditionally, all the CG iteration vectors could be saved and recombined through the eigenvectors of the tridiagonal projection matrix, which is equivalent theoretically to unrestarted Lanczos. Our algorithm capitalizes on the iteration vectors produced by CG to update only a small window of about ten vectors that approximate the eigenvectors. While this window is restarted in a locally optimal way, the CG algorithm for the linear system is unaffected. Yet, in all our experiments, this small window converges to the required eigenvectors at a rate identical to unrestarted Lanczos. After the solution of the linear system, eigenvectors that have not accurately converged can be improved in an incremental fashion by solving additional linear systems. In this case, eigenvectors identified in earlier systems can be used to deflate, and thus accelerate, the convergence of subsequent systems. We have used this algorithm with excellent results in lattice QCD applications, where hundreds of right-hand sides may be needed. Specifically, about 70 eigenvectors are obtained to full accuracy after solving 24 right-hand sides. Deflating these from the large number of subsequent right-hand sides removes the dreaded critical slowdown, where the conditioning of the matrix increases as the quark mass reaches a critical value. Our experiments show an almost constant number of iterations for our method, regardless of quark mass, and speedups of 8 over the original CG for light quark masses.
THE EXPONENTIALLY CONVERGENT TRAPEZOIDAL RULE
Abstract

Cited by 17 (3 self)
Abstract. It is well known that the trapezoidal rule converges geometrically when applied to analytic functions on periodic intervals or the real line. The mathematics and history of this phenomenon are reviewed and it is shown that far from being a curiosity, it is linked with computational methods all across scientific computing, including algorithms related to inverse Laplace transforms, special functions, complex analysis, rational approximation, integral equations, and the computation of functions and eigenvalues of matrices and operators.
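The geometric convergence is easy to observe numerically. A minimal sketch (the integrand exp(cos t) and the standard identity ∫_0^{2π} exp(cos t) dt = 2π I_0(1) are textbook facts, not taken from the paper):

```python
import numpy as np
from scipy.special import i0

# Standard identity: the integral of exp(cos t) over one period is 2*pi*I_0(1).
exact = 2 * np.pi * i0(1.0)

def trap_periodic(f, n):
    # On a periodic interval the trapezoidal rule reduces to an equispaced
    # sum with equal weights (the two endpoint half-weights merge).
    t = 2 * np.pi * np.arange(n) / n
    return (2 * np.pi / n) * np.sum(f(t))

# Doubling the number of points roughly squares the error: geometric convergence.
errs = [abs(trap_periodic(lambda t: np.exp(np.cos(t)), n) - exact)
        for n in (4, 8, 16)]
```

With only 16 points the error is already near machine precision, whereas on a non-periodic interval the trapezoidal rule converges only at the algebraic rate O(n^-2).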
Block preconditioning of real-valued iterative algorithms for complex linear systems
, 2008
INTERPRETING IDR AS A PETROV–GALERKIN METHOD
, 2009
Abstract

Cited by 13 (2 self)
The IDR method of Sonneveld and van Gijzen [SIAM J. Sci. Comput., 31:1035–1062, 2008] is shown to be a Petrov–Galerkin (projection) method with a particular choice of left Krylov subspaces; these left subspaces are rational Krylov spaces. Consequently, other methods, such as BiCGStab and ML(s)BiCG, which are mathematically equivalent to some versions of IDR, can also be interpreted as Petrov–Galerkin methods. The connection with rational Krylov spaces inspired a new version of IDR, called Ritz-IDR, where the poles of the rational function are chosen as certain Ritz values. Experiments are presented illustrating the effectiveness of this new version.
Deflated and augmented Krylov subspace methods: A framework for deflated . . .
, 2013
Abstract

Cited by 12 (2 self)
We present an extension of the framework of Gaul et al. (SIAM J. Matrix Anal. Appl. 34, 495–518 (2013)) for deflated and augmented Krylov subspace methods satisfying a Galerkin condition to more general Petrov–Galerkin conditions. The main goal is to apply the framework also to the biconjugate gradient method (BiCG) and some of its generalizations, including BiCGStab. The approach does not depend on particular recurrences and thus simplifies the derivation of theoretical results. It easily leads to a variety of realizations by specific algorithms. We do not go into algorithmic details, but we show that for every method there are two different approaches for extending it by augmentation and deflation: one that explicitly takes care of the augmentation space in every step, and one that applies the unchanged basic algorithm to a projected problem but requires a correction step at the end. Both typically generate a Krylov space for a singular operator that is associated with the projected problem. The deflated biconjugate gradient method requires two such Krylov spaces, but it also allows us to solve two dual linear systems at once. Deflated Lanczos-type product methods fit in our new framework too. The question of how to extract the augmentation and deflation subspace is not addressed here.
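The second of the two approaches described above (apply the unchanged basic algorithm to a projected problem, then correct at the end) can be sketched for the SPD/CG case, the simplest instance of such a framework. The matrix, the deflation space Z, and the projector P = I − A Z (Zᵀ A Z)⁻¹ Zᵀ below are illustrative choices, not taken from the paper:

```python
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

rng = np.random.default_rng(1)
n = 200
M = rng.standard_normal((n, n))
A = M @ M.T + np.eye(n)          # generic SPD model problem
b = rng.standard_normal(n)

Z = rng.standard_normal((n, 3))  # deflation space (illustrative directions)
E = Z.T @ A @ Z                  # small coarse-space matrix Z^T A Z
coarse = lambda v: Z @ np.linalg.solve(E, Z.T @ v)   # Z E^{-1} Z^T v

# Projected operator P A = A - A Z E^{-1} Z^T A: symmetric positive
# semidefinite and singular, as described in the abstract.
PA = LinearOperator((n, n), matvec=lambda v: A @ v - A @ coarse(A @ v),
                    dtype=float)

# The unchanged basic algorithm (plain CG) is applied to the projected problem...
xt, info = cg(PA, b - A @ coarse(b), maxiter=2000)

# ...followed by the correction step at the end, which restores the
# coarse-space component of the solution.
x = coarse(b) + xt - coarse(A @ xt)
```

Even though CG only ever sees the singular projected operator, the corrected vector x solves the original system A x = b.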
The noisy power method: A meta algorithm with applications
 In NIPS
, 2014
Abstract

Cited by 11 (0 self)
We provide a new robust convergence analysis of the well-known power method for computing the dominant singular vectors of a matrix that we call the noisy power method. Our result characterizes the convergence behavior of the algorithm when a significant amount of noise is introduced after each matrix-vector multiplication. The noisy power method can be seen as a meta-algorithm that has recently found a number of important applications in a broad range of machine learning problems including alternating minimization for matrix completion, streaming principal component analysis (PCA), and privacy-preserving spectral analysis. Our general analysis subsumes several existing ad hoc convergence bounds and resolves a number of open problems in multiple applications:

Streaming PCA. A recent work of Mitliagkas et al. (NIPS 2013) gives a space-efficient algorithm for PCA in a streaming model where samples are drawn from a Gaussian spiked covariance model. We give a simpler and more general analysis that applies to arbitrary distributions, confirming experimental evidence of Mitliagkas et al. Moreover, even in the spiked covariance model our result gives quantitative improvements in a natural parameter regime. It is also notably simpler and follows easily from our general convergence analysis of the noisy power method together with a matrix Chernoff bound.

Private PCA. We provide the first nearly-linear-time algorithm for the problem of differentially private principal component analysis that achieves nearly tight worst-case error bounds. Complementing our worst-case bounds, we show that the error dependence of our algorithm on the matrix dimension can be replaced by an essentially tight dependence on the coherence of the matrix. This result resolves the main problem left open by Hardt and Roth (STOC 2013). The coherence is always bounded by the matrix dimension but often substantially smaller, thus leading to strong average-case improvements over the optimal worst-case bound.
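The core iteration is compact enough to sketch: multiply by the matrix, add the per-step noise, and re-orthonormalize. The test matrix, noise level, and spectral gap below are illustrative choices, not the paper's parameter regime:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 100, 3
# Symmetric test matrix with a planted gap after the k-th eigenvalue
# (illustrative; the analysis also covers general singular vectors).
U, _ = np.linalg.qr(rng.standard_normal((d, d)))
s = np.full(d, 0.1)
s[:k] = [3.0, 2.5, 2.0]
A = (U * s) @ U.T

X = np.linalg.qr(rng.standard_normal((d, k)))[0]  # random orthonormal start
for _ in range(100):
    G = 1e-6 * rng.standard_normal((d, k))        # noise injected each step
    X = np.linalg.qr(A @ X + G)[0]                # noisy power step + orthonormalize

# Alignment with the true top-k eigenspace: the smallest singular value of
# the overlap matrix is ~1 when the subspaces coincide.
align = np.linalg.svd(U[:, :k].T @ X, compute_uv=False)[-1]
```

The paper's analysis bounds how large the per-step noise G may be, relative to the spectral gap, while still guaranteeing this kind of subspace convergence.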