LOW-RANK TENSOR METHODS WITH SUBSPACE CORRECTION FOR SYMMETRIC EIGENVALUE PROBLEMS
Abstract
Cited by 9 (3 self)
Abstract. We consider the solution of large-scale symmetric eigenvalue problems for which it is known that the eigenvectors admit a low-rank tensor approximation. Such problems arise, for example, from the discretization of high-dimensional elliptic PDE eigenvalue problems or in strongly correlated spin systems. Our methods are built on imposing low-rank (block) tensor train (TT) structure on the trace minimization characterization of the eigenvalues. The common approach of alternating optimization is combined with an enrichment of the TT cores by (preconditioned) gradients, as recently proposed by Dolgov and Savostyanov for linear systems. This can equivalently be viewed as a subspace correction technique. Several numerical experiments demonstrate the performance gains from using this technique.
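The enrichment-by-gradients idea in this abstract can be illustrated outside the TT setting with a plain dense analogue: Rayleigh-Ritz on a subspace that is repeatedly enriched with the gradients (residuals) of the trace functional. The sketch below is a minimal numpy illustration under assumptions of my own (dense matrix, identity preconditioner, hypothetical function name `eig_subspace_correction`), not the authors' TT algorithm.

```python
import numpy as np

def eig_subspace_correction(A, k=2, iters=200, tol=1e-10):
    """Smallest-k eigenpairs of a symmetric matrix A via Rayleigh-Ritz
    on a subspace enriched with (unpreconditioned) gradient directions.
    Illustrative dense analogue of subspace correction, not the TT method."""
    n = A.shape[0]
    rng = np.random.default_rng(0)
    V = np.linalg.qr(rng.standard_normal((n, k)))[0]   # random orthonormal start
    for _ in range(iters):
        # Rayleigh-Ritz: project A onto span(V), solve the small problem.
        lam, W = np.linalg.eigh(V.T @ A @ V)
        X = V @ W[:, :k]                 # current Ritz vectors
        R = A @ X - X * lam[:k]          # gradients of the trace functional
        if np.linalg.norm(R) < tol:
            break
        # Subspace correction: enrich with the gradients, re-orthogonalize.
        V = np.linalg.qr(np.hstack([X, R]))[0]
    return lam[:k], X

# Usage: diagonal test matrix with known spectrum 1, 2, ..., 10.
A = np.diag(np.arange(1.0, 11.0))
lam, X = eig_subspace_correction(A, k=2)   # lam ≈ [1.0, 2.0]
```

In the paper this enrichment is applied core-by-core inside the alternating TT optimization; here the same mechanism appears in its simplest dense form, closely related to LOBPCG-type block iterations.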
Tensor numerical methods for high-dimensional PDEs: Basic theory and initial applications
, 2014
Superfast Wavelet Transform Using QTT
, 2013
Abstract
Cited by 1 (1 self)
We propose a superfast discrete Haar wavelet transform (SFHWT) as well as its inverse, using the QTT representation for the Haar transform matrices and input-output vectors. Though the Haar matrix itself does not have a low QTT-rank approximation, we show that the factor matrices used at each step of the traditional multilevel Haar wavelet transform algorithm have explicit QTT representations of low rank. The SFHWT applies to a vector representing a signal sampled on a uniform grid of size N = 2^d. We develop two algorithms which roughly require square-logarithmic time complexity with respect to the grid size, O(log^2 N), hence outperforming the traditional fast Haar wavelet transform (FHWT) of linear complexity, O(N). Our approach also applies to the FHWT inverse as well as to the multidimensional wavelet transform. Numerical experiments demonstrate that the SFHWT algorithm is robust in keeping the rank of the resulting output vector low, and it outperforms the traditional FHWT for grid sizes larger than a certain value depending on the spatial dimension.
AMS Subject Classification: 65F30, 65F50, 65N35, 65F10
Key words: Tensor-structured methods, fast wavelet transform, multilevel methods, canon ...
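For contrast with the superfast QTT variant, the traditional O(N) FHWT that the paper benchmarks against can be sketched in a few lines. This is an illustrative orthonormal Haar implementation of my own (function names `fhwt`/`ifhwt` are assumptions, not the authors' code): each level replaces a segment by pairwise averages followed by pairwise details.

```python
import numpy as np

def fhwt(x):
    """Traditional fast Haar wavelet transform (orthonormal), O(N), N = 2^d."""
    out = np.asarray(x, dtype=float).copy()
    n = out.size
    assert n & (n - 1) == 0, "length must be a power of two"
    while n > 1:
        half = n // 2
        a = (out[0:n:2] + out[1:n:2]) / np.sqrt(2)   # scaling (average) coeffs
        d = (out[0:n:2] - out[1:n:2]) / np.sqrt(2)   # wavelet (detail) coeffs
        out[:half], out[half:n] = a, d
        n = half
    return out

def ifhwt(c):
    """Inverse of fhwt: rebuild the signal level by level, O(N)."""
    c = np.asarray(c, dtype=float).copy()
    N, n = c.size, 2
    while n <= N:
        half = n // 2
        a, d = c[:half].copy(), c[half:n].copy()
        c[0:n:2] = (a + d) / np.sqrt(2)
        c[1:n:2] = (a - d) / np.sqrt(2)
        n *= 2
    return c

# Usage: a constant signal concentrates all energy in one coefficient.
c = fhwt([1.0, 1.0, 1.0, 1.0])   # → [2., 0., 0., 0.]
```

The QTT-based SFHWT of the paper replaces the explicit O(N) sweeps above by low-rank operations on the tensorized vector, which is how the O(log^2 N) complexity arises.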
Max-Planck-Institut für Mathematik in den Naturwissenschaften, Leipzig
Grid-based lattice summation of electrostatic potentials by low-rank tensor approximation