Results 1–8 of 8
Krylov-type methods for tensor computations I
Linear Algebra Appl., 2013
Abstract

Cited by 4 (0 self)
Several Krylov-type procedures are introduced that generalize matrix Krylov methods to tensor computations. They are denoted the minimal Krylov recursion, the maximal Krylov recursion, and the contracted tensor product Krylov recursion. It is proved that, for a given tensor A with multilinear rank (p, q, r), the minimal Krylov recursion extracts the correct subspaces associated with the tensor in p+q+r tensor-vector-vector multiplications. An optimized minimal Krylov procedure is described that, for a given multilinear rank of an approximation, produces a better approximation than the standard minimal recursion. We further generalize the matrix Krylov decomposition to a tensor Krylov decomposition. The tensor Krylov methods are intended for the computation of low multilinear rank approximations of large and sparse tensors, but they are also useful for certain dense and structured tensors, for computing their higher-order singular value decompositions, or for obtaining starting points for best low-rank computations of tensors. A set of numerical experiments, using real-world and synthetic data sets, illustrates some of the properties of the tensor Krylov methods.
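A minimal sketch of the idea behind a minimal Krylov recursion for a third-order tensor: each new basis vector for one mode subspace is obtained from a single tensor-vector-vector multiplication with the latest vectors of the other two modes. The function names, the cycling order, and the plain Gram-Schmidt step are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def orthonormalize(x, basis):
    # Gram-Schmidt against the vectors collected so far
    # (no breakdown handling; a real implementation would detect norm ~ 0)
    for b in basis:
        x = x - (b @ x) * b
    return x / np.linalg.norm(x)

def tvv(A, y, z, mode):
    """Contract a 3-tensor with two vectors, leaving `mode` free
    (one tensor-vector-vector multiplication)."""
    if mode == 0:
        return np.einsum('ijk,j,k->i', A, y, z)
    if mode == 1:
        return np.einsum('ijk,i,k->j', A, y, z)
    return np.einsum('ijk,i,j->k', A, y, z)

def minimal_krylov(A, u0, v0, steps):
    """Grow orthonormal bases U, V, W of the three mode subspaces,
    one tensor-vector-vector product per new basis vector."""
    U = [u0 / np.linalg.norm(u0)]
    V = [v0 / np.linalg.norm(v0)]
    W = []
    for _ in range(steps):
        W.append(orthonormalize(tvv(A, U[-1], V[-1], 2), W))  # new mode-3 direction
        U.append(orthonormalize(tvv(A, V[-1], W[-1], 0), U))  # new mode-1 direction
        V.append(orthonormalize(tvv(A, U[-1], W[-1], 1), V))  # new mode-2 direction
    return np.array(U).T, np.array(V).T, np.array(W).T
```

By construction each returned matrix has orthonormal columns, and the column counts grow by one per mode and step, matching the p+q+r multiplication count quoted in the abstract.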
Fast truncation of mode ranks for bilinear tensor operations
Numerical Linear Algebra with Applications, 2012
Abstract

Cited by 3 (1 self)
Abstract. We propose a fast algorithm for mode rank truncation of the result of a bilinear operation on 3-tensors given in the Tucker or canonical form. If the arguments and the result have mode sizes n and mode ranks r, the computation costs O(nr^3 + r^4). The algorithm is based on the cross approximation of Gram matrices, and the accuracy of the resulting Tucker approximation is limited by the square root of machine precision.
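For orientation, here is a plain SVD-based variant of mode-rank truncation of a Tucker tensor, assuming factor matrices with orthonormal columns. The paper's contribution is to replace this dense linear algebra on the Gram matrices with cross approximation; this sketch shows only the truncation being performed, not the fast algorithm.

```python
import numpy as np

def truncate_tucker(G, factors, new_ranks):
    """Truncate the mode ranks of a Tucker tensor (core G, factor list),
    assuming the factors have orthonormal columns.  Each step unfolds the
    core along one mode; Gm @ Gm.T is the Gram matrix whose dominant
    eigenvectors span the truncated mode subspace."""
    factors = [F.copy() for F in factors]
    for mode, r_new in enumerate(new_ranks):
        Gm = np.moveaxis(G, mode, 0).reshape(G.shape[mode], -1)  # mode unfolding
        Q = np.linalg.svd(Gm, full_matrices=False)[0][:, :r_new]
        # project the core onto the dominant subspace and absorb Q into the factor
        G = np.moveaxis(np.tensordot(Q.T, np.moveaxis(G, mode, 0), axes=1), 0, mode)
        factors[mode] = factors[mode] @ Q
    return G, factors
```

Truncating to the original mode ranks is exact (Q is then a square orthogonal matrix absorbed into both the core and the factor), which makes the routine easy to sanity-check.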
Fast multidimensional convolution in low-rank formats via cross approximation. arXiv:1402.5649, 2013
Abstract

Cited by 2 (0 self)
Abstract. We propose a new cross-conv algorithm for the approximate computation of convolution in different low-rank tensor formats (tensor train, Tucker, Hierarchical Tucker). It has better complexity with respect to the tensor rank than previous approaches. The new algorithm has a high potential impact in different applications. The key idea is to apply cross approximation in the “frequency domain”, where convolution becomes a simple elementwise product. We illustrate the efficiency of our algorithm by computing the three-dimensional Newton potential and by presenting preliminary results for the solution of the Hartree-Fock equation on tensor-product grids.
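The "convolution becomes an elementwise product in the frequency domain" identity that the algorithm exploits can be seen on a plain dense 3-tensor. This is just the classical FFT convolution theorem, with no low-rank format or cross approximation involved, so it is an illustration of the key idea rather than the cross-conv algorithm itself.

```python
import numpy as np

def conv3d_fft(a, b):
    """Full 3-D discrete convolution via the convolution theorem:
    transform both arrays, multiply elementwise, transform back."""
    shape = [sa + sb - 1 for sa, sb in zip(a.shape, b.shape)]
    fa = np.fft.fftn(a, shape)
    fb = np.fft.fftn(b, shape)
    return np.real(np.fft.ifftn(fa * fb))
```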
Two-level Tucker-TT-QTT format for optimized tensor calculus
AN INITIAL STUDY, 2013
Abstract
The product of a dense tensor with a vector in every mode except one, called a tensor-vector product, is a key operation in several algorithms for computing the canonical tensor decomposition. In these applications, it is even more common to compute a tensor-vector product with the same tensor and r concurrently available sets of vectors, an operation we refer to as a multiple-vector tensor-vector product (MTVP). Current techniques for implementing these operations rely on explicitly reordering the elements of the tensor in order to leverage available matrix libraries. This approach has two significant disadvantages: reordering the data can be expensive if only a small number of concurrent sets of vectors is available in the MTVP, and it requires excessive amounts of additional memory. In this work, we consider two techniques for resolving these issues. Successive contractions are proposed to eliminate explicit data reordering, while blocking tackles the excessive memory consumption. Numerical experiments on a wide variety of tensor shapes indicate the effectiveness of these optimizations, clearly illustrating that the additional memory consumption can be limited to tolerable amounts, generally without sacrificing expeditious execution. For several fourth-order tensors, the additional memory requirements were three orders of magnitude smaller than those of competing implementations, while throughputs upward of 75% of the peak performance of the computer system can be attained for large values of r.
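A sketch of an MTVP for a third-order tensor, assuming the last two modes are contracted with the r column pairs and the first mode is left free. With an optimized contraction path, np.einsum contracts one mode at a time (the successive-contraction idea), so the tensor is never explicitly reordered into a matrix; this is an illustration under those assumptions, not the paper's implementation.

```python
import numpy as np

def mtvp(T, V, W):
    """Multiple-vector tensor-vector product for T of shape (I, J, K):
    result[:, c] = sum_{j,k} T[i, j, k] * V[j, c] * W[k, c] for each of
    the r column pairs.  With optimize=True, einsum contracts mode by
    mode (successive contractions) instead of flattening T."""
    return np.einsum('ijk,jc,kc->ic', T, V, W, optimize=True)
```

Equivalently, column c of the result is the ordinary tensor-vector product of T with V[:, c] and W[:, c]; computing all r columns in one einsum call is what lets the successive contractions be batched.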
Exact NMR simulation of protein-size spin systems using tensor train formalism, 2014
Abstract
We introduce a new method, based on alternating optimization, for the compact representation of spin Hamiltonians and the solution of linear systems in the tensor train format. We demonstrate its utility by simulating, without significant approximations, a 15N NMR spectrum of ubiquitin, a protein containing several hundred interacting nuclear spins. Existing simulation algorithms for the spin system and the NMR experiment in question either require significant approximations or scale exponentially with the system size. We compare the proposed method to the Spinach package, which uses heuristic restricted state space (RSS) techniques to achieve polynomial complexity scaling. When the spin system topology is close to a linear chain (e.g. for the backbone of a protein), the tensor train representation of a Hamiltonian is more compact and can be computed faster than the sparse representation using the RSS.
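The tensor train format itself can be illustrated with the standard TT-SVD construction for a dense array: successive reshapes and truncated SVDs produce a chain of three-way cores. This is the generic decomposition scheme, not the paper's alternating-optimization method, and eps is an illustrative truncation threshold.

```python
import numpy as np

def tt_svd(T, eps=1e-10):
    """Decompose a dense d-way array into tensor-train cores of shape
    (r_left, n, r_right) by successive reshape + truncated SVD."""
    cores, r = [], 1
    M = T.reshape(T.shape[0], -1)
    for n in T.shape[:-1]:
        U, s, Vt = np.linalg.svd(M.reshape(r * n, -1), full_matrices=False)
        keep = int(np.sum(s > eps * s[0])) or 1   # truncate tiny singular values
        cores.append(U[:, :keep].reshape(r, n, keep))
        M = s[:keep, None] * Vt[:keep]            # carry the remainder forward
        r = keep
    cores.append(M.reshape(r, T.shape[-1], 1))
    return cores
```

Contracting the cores back together along their rank indices reconstructs the original array up to the truncation tolerance; for a Hamiltonian with chain-like interaction topology, the TT ranks stay small, which is the compactness the abstract refers to.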