Results 1–7 of 7
Wedderburn rank reduction and Krylov subspace method for tensor approximation. Part 1: Tucker case
, 2010
Abstract

Cited by 8 (4 self)
New algorithms are proposed for the Tucker approximation of a 3-tensor that access it only through a tensor-by-vector-by-vector multiplication subroutine. In the matrix case, Krylov methods are the methods of choice for approximating the dominant column and row subspaces of a sparse or structured matrix given through a matrix-by-vector multiplication subroutine. Using the Wedderburn rank reduction formula, we propose a matrix approximation algorithm that computes Krylov subspaces and generalizes to the tensor case. Several variants of the proposed tensor algorithms differ in pivoting strategy, overall cost, and quality of approximation. Convincing numerical experiments show that the proposed methods are faster and more accurate than the minimal Krylov recursion recently proposed by Eldén and Savas.
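The Wedderburn rank reduction formula this abstract builds on can be illustrated directly: subtracting a suitably scaled rank-one term from A lowers its rank by exactly one whenever the pivot yᵀAx is nonzero. A minimal NumPy sketch (all names illustrative, not from the paper's implementation):

```python
import numpy as np

def wedderburn_step(A, x, y):
    """One Wedderburn rank-one reduction: A - (A x)(y^T A) / (y^T A x).

    If y^T A x != 0, the result has rank exactly rank(A) - 1.
    """
    Ax = A @ x          # column direction extracted from A
    yA = y @ A          # row direction extracted from A
    pivot = y @ Ax      # must be nonzero for the rank to drop
    return A - np.outer(Ax, yA) / pivot

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 3)) @ rng.standard_normal((3, 6))  # rank 3
x = rng.standard_normal(6)
y = rng.standard_normal(6)
A1 = wedderburn_step(A, x, y)
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(A1))  # 3 2
```

The choice of x and y (the "pivoting strategy" the abstract mentions) determines which directions are removed at each step; repeating the reduction accumulates the Krylov-like subspaces used for the approximation.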
Non-polynomial Galerkin projection on deforming meshes
 ACM Trans. Graph
, 2013
Abstract

Cited by 7 (2 self)
Figure 1: Our method enables reduced simulation of fluid flow around this flying bird over 2000 times faster than the corresponding full simulation, and reduced radiosity computation in this architectural scene over 113 times faster than the corresponding full radiosity. This paper extends Galerkin projection to a large class of non-polynomial functions typically encountered in graphics. We demonstrate the broad applicability of our approach by applying it to two strikingly different problems: fluid simulation and radiosity rendering, both on deforming meshes. Standard Galerkin projection cannot efficiently approximate these phenomena. Our approach, by contrast, enables the compact representation and approximation of these complex non-polynomial systems, including quotients and roots of polynomials. We rely on representing each function to be model-reduced as a composition of tensor products, matrix inversions, and matrix roots. Once a function has been represented in this form, it can easily be model-reduced, and its reduced form can be evaluated with time and memory costs that depend only on the dimension of the reduced space.
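The standard (polynomial) Galerkin projection that this paper extends reduces a full system du/dt = Au to dq/dt = (UᵀAU)q in an orthonormal basis U, so each step costs O(r²) instead of O(n²). A minimal NumPy sketch of that baseline, under the simplifying assumption that span(U) is invariant under A so the reduction is exact (in practice it is an approximation):

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 200, 5
U, _ = np.linalg.qr(rng.standard_normal((n, r)))  # reduced basis, n x r
B = rng.standard_normal((r, r))
A = U @ B @ U.T        # dynamics that keep states inside span(U)
A_red = U.T @ A @ U    # precomputed r x r reduced operator

u = U @ rng.standard_normal(r)   # full state, starts in span(U)
q = U.T @ u                      # reduced coordinates
dt = 0.01
for _ in range(100):             # explicit Euler in both spaces
    u = u + dt * (A @ u)         # O(n^2) per step
    q = q + dt * (A_red @ q)     # O(r^2) per step
print(np.linalg.norm(u - U @ q))  # ~0: reduction is exact here
```

The paper's contribution is making this kind of precomputation possible when the dynamics involve quotients and roots rather than polynomials.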
Jacobi algorithm for the best low multilinear rank approximation of symmetric tensors
 SIAM JOURNAL ON MATRIX ANALYSIS AND APPLICATIONS
, 2013
Abstract

Cited by 5 (1 self)
The problem discussed in this paper is the symmetric best low multilinear rank approximation of third-order symmetric tensors. We propose an algorithm based on Jacobi rotations, for which symmetry is preserved at each iteration. Two numerical examples are provided, indicating the need for such algorithms. An important part of the paper consists of proving that our algorithm converges to stationary points of the objective function. This can be considered an advantage of the proposed algorithm over existing symmetry-preserving algorithms in the literature.
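The symmetry-preserving update behind Jacobi-type algorithms of this kind applies the same rotation G to every mode of the symmetric tensor, T′ = T ×₁ G ×₂ G ×₃ G, so T′ remains invariant under index permutations. A minimal NumPy sketch with an arbitrary rotation plane and angle (a real algorithm would choose them to optimize the objective):

```python
import numpy as np

def rotate_all_modes(T, G):
    """Apply the same matrix G to all three modes of a 3-tensor."""
    return np.einsum('ia,jb,kc,abc->ijk', G, G, G, T)

n = 4
rng = np.random.default_rng(2)
X = rng.standard_normal((n, n, n))
T = np.zeros((n, n, n))
for p in [(0,1,2), (0,2,1), (1,0,2), (1,2,0), (2,0,1), (2,1,0)]:
    T += np.transpose(X, p)          # symmetrize over all permutations

theta = 0.3                           # arbitrary Jacobi angle
G = np.eye(n)                         # Givens rotation in the (0, 1) plane
G[0, 0] = G[1, 1] = np.cos(theta)
G[0, 1], G[1, 0] = np.sin(theta), -np.sin(theta)

T2 = rotate_all_modes(T, G)
print(np.allclose(T2, np.transpose(T2, (1, 0, 2))),   # True
      np.allclose(T2, np.transpose(T2, (0, 2, 1))))   # True
```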
THIRD ORDER TENSORS AS OPERATORS ON MATRICES: A THEORETICAL AND COMPUTATIONAL FRAMEWORK WITH APPLICATIONS IN IMAGING
Abstract

Cited by 5 (1 self)
Recent work by Kilmer and Martin [10] and Braman [2] provides a setting in which the familiar tools of linear algebra can be extended to better understand third-order tensors. Continuing along this vein, this paper investigates further implications, including: 1) a bilinear operator on matrices which is nearly an inner product and which leads to definitions of the length of a matrix, the angle between two matrices, and orthogonality of matrices; and 2) the use of t-linear combinations to characterize the range and kernel of a mapping defined by a third-order tensor and the t-product, and the quantification of the dimensions of those sets. These theoretical results lead to the study of orthogonal projections, as well as an effective Gram-Schmidt process for producing an orthogonal basis of matrices. The theoretical framework also leads us to consider the notion of tensor polynomials and their relation to the tensor eigentuples defined in [2]. Implications for extending basic algorithms such as the power method, QR iteration, and Krylov subspace methods are discussed. We conclude with two examples in image processing: using the orthogonal elements generated via a Golub-Kahan iterative bidiagonalization scheme for facial recognition, and solving a regularized image deblurring problem.
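The t-product underlying this framework can be computed by taking an FFT along the third dimension, multiplying the frontal slices pairwise, and transforming back, since it is equivalent to block-circulant matrix multiplication. A minimal NumPy sketch (for A of size n₁ × p × n₃ and B of size p × n₂ × n₃):

```python
import numpy as np

def t_product(A, B):
    """t-product of third-order tensors via FFT along the third dimension."""
    Ah = np.fft.fft(A, axis=2)
    Bh = np.fft.fft(B, axis=2)
    Ch = np.einsum('ipk,pjk->ijk', Ah, Bh)  # slice-wise matrix products
    return np.real(np.fft.ifft(Ch, axis=2))

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 4, 5))
B = rng.standard_normal((4, 2, 5))
C = t_product(A, B)
print(C.shape)  # (3, 2, 5)
```

The identity tensor in this algebra has the identity matrix as its first frontal slice and zeros elsewhere, which makes the notions of range, kernel, and orthogonality in the abstract well defined.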
BLOCK TENSORS AND SYMMETRIC EMBEDDINGS
, 2010
Abstract

Cited by 4 (0 self)
Well-known connections exist between the singular value decomposition of a matrix A and the Schur decomposition of its symmetric embedding sym(A) = [0, A; A^T, 0]. In particular, if σ is a singular value of A, then +σ and −σ are eigenvalues of the symmetric embedding. The top and bottom halves of sym(A)'s eigenvectors are singular vectors of A. Power methods applied to A can be related to power methods applied to sym(A). The rank of sym(A) is twice the rank of A. In this paper we develop similar connections for tensors by building on L.-H. Lim's variational approach to tensor singular values and vectors. We show how to embed a general order-d tensor A into an order-d symmetric tensor sym(A). Through the embedding we relate power methods for A's singular values to power methods for sym(A)'s eigenvalues. Finally, we connect the multilinear and outer product rank of A to the multilinear and outer product rank of sym(A).
Key words. tensor, block tensor, symmetric tensor, tensor rank
AMS subject classifications. 15A18, 15A69, 65F15
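The matrix fact the paper generalizes is easy to verify numerically: the eigenvalues of sym(A) = [0, A; Aᵀ, 0] are ±σᵢ for each singular value σᵢ of A, padded with zeros when A is rectangular. A short NumPy check:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((4, 3))
sym_A = np.block([[np.zeros((4, 4)), A],
                  [A.T, np.zeros((3, 3))]])   # symmetric embedding of A

eigs = np.sort(np.linalg.eigvalsh(sym_A))
sigma = np.linalg.svd(A, compute_uv=False)
# 3 singular values give +/-sigma pairs; the 7th eigenvalue is 0.
expected = np.sort(np.concatenate([sigma, -sigma, np.zeros(1)]))
print(np.allclose(eigs, expected))  # True
```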
The power and Arnoldi methods in an algebra of circulants
 Numer. Linear Algebra Appl. (2012). Published online in Wiley Online Library (wileyonlinelibrary.com). DOI: 10.1002/nla.1845
Abstract
Circulant matrices play a central role in a recently proposed formulation of three-way data computations. In this setting, a three-way table corresponds to a matrix where each 'scalar' is a vector of parameters defining a circulant. This interpretation provides many generalizations of results from matrix or vector-space algebra. These results and algorithms are closely related to standard decoupling techniques on block-circulant matrices using the fast Fourier transform. We derive the power and Arnoldi methods in this algebra. In the course of our derivation, we define inner products, norms, and other notions. These extensions are straightforward in an algebraic sense, but the implications are dramatically different from the standard matrix case. For example, the number of eigenpairs may exceed the dimension of the matrix, although it is still polynomial in it. It is thus necessary to take an extra step and carefully select a smaller, canonical set, of size equal to the dimension of the matrix, from which all possible eigenpairs can be formed.
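The reason the number of eigenpairs can exceed the matrix dimension follows from the FFT decoupling the abstract mentions: an n × n matrix of length-k circulants splits into k ordinary n × n matrices, one per frequency, each contributing n eigenvalues. A minimal NumPy sketch:

```python
import numpy as np

n, k = 3, 4
rng = np.random.default_rng(5)
C = rng.standard_normal((n, n, k))   # each (i, j) entry parameterizes a circulant
Ch = np.fft.fft(C, axis=2)           # k decoupled n x n frequency slices
eigs = [np.linalg.eigvals(Ch[:, :, f]) for f in range(k)]
print(sum(len(e) for e in eigs))     # 12 = n * k candidate eigenvalues
```

The paper's canonical-set construction selects n of these n·k frequency-wise eigenvalues from which all eigenpairs in the circulant algebra can be recombined.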
Data-Driven Methods for Interactive Simulation of Complex Phenomena
, 2014
Abstract
duction, crowdsourcing, player models
Creating realistic virtual worlds requires fast, detailed physical simulations. Traditional simulation techniques based on discretization in time and space must trade speed for detail. Frequently, this tradeoff results in either coarse, unrealistic simulation or slower-than-real-time response. Data-driven simulation techniques avoid this tradeoff by operating on compact representations of simulation state, which can be updated quickly due to their small size. These representations are learned from training simulations that resemble the runtime output we want the simulation to produce. In this thesis, we greatly expand the scope of data-driven simulation in practical applications by answering three important questions. First, how can we reconfigure simulation domains at runtime? While simple forms of data-driven simulation operate in a monolithic fashion, we show how one important data-driven simulation technique can be extended to create modular simulation tiles that can be rearranged at runtime. Second, how can we simulate a wide variety of phenomena? One popular data-driven simulation method, Galerkin projection, only works for simulations with polynomial dynamics ...