Results 1–10 of 89
A multilinear singular value decomposition
 SIAM J. Matrix Anal. Appl., 2000
Cited by 467 (20 self)
Abstract. We discuss a multilinear generalization of the singular value decomposition. There is a strong analogy between several properties of the matrix and the higher-order tensor decomposition; uniqueness, link with the matrix eigenvalue decomposition, first-order perturbation effects, etc., are analyzed. We investigate how tensor symmetries affect the decomposition and propose a multilinear generalization of the symmetric eigenvalue decomposition for pairwise symmetric tensors.
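The construction described in this abstract (one factor matrix per mode from the SVD of that mode's unfolding, plus an all-orthogonal core) can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation; function names and the random test tensor are my own choices:

```python
import numpy as np

def unfold(T, mode):
    # Mode-n unfolding: move axis `mode` to the front and flatten the rest.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T):
    # Factor matrices: left singular vectors of each mode unfolding.
    U = [np.linalg.svd(unfold(T, n), full_matrices=False)[0] for n in range(T.ndim)]
    # Core tensor: contract T with U[n]^H along every mode.
    S = T
    for n, Un in enumerate(U):
        S = np.moveaxis(np.tensordot(Un.conj().T, np.moveaxis(S, n, 0), axes=1), 0, n)
    return S, U

rng = np.random.default_rng(0)
T = rng.standard_normal((3, 4, 5))
S, U = hosvd(T)

# Reconstruct by contracting the core with U[n] along each mode.
R = S
for n, Un in enumerate(U):
    R = np.moveaxis(np.tensordot(Un, np.moveaxis(R, n, 0), axes=1), 0, n)
assert np.allclose(R, T)
```

As in the matrix case, the decomposition is exact; the "singular values" of each mode appear as the norms of the slices of the core `S`, whose mode-n slices are mutually orthogonal.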
TENSOR RANK AND THE ILL-POSEDNESS OF THE BEST LOW-RANK APPROXIMATION PROBLEM
Cited by 193 (13 self)
There has been continued interest in seeking a theorem describing optimal low-rank approximations to tensors of order 3 or higher that parallels the Eckart–Young theorem for matrices. In this paper, we argue that the naive approach to this problem is doomed to failure because, unlike matrices, tensors of order 3 or higher can fail to have best rank-r approximations. The phenomenon is much more widespread than one might suspect: examples of this failure can be constructed over a wide range of dimensions, orders, and ranks, regardless of the choice of norm (or even Brègman divergence). Moreover, we show that in many instances these counterexamples have positive volume: they cannot be regarded as isolated phenomena. In one extreme case, we exhibit a tensor space in which no rank-3 tensor has an optimal rank-2 approximation. The notable exceptions to this misbehavior are rank-1 tensors and order-2 tensors (i.e., matrices). In a more positive spirit, we propose a natural way of overcoming the ill-posedness of the low-rank approximation problem, by using weak solutions when true solutions do not exist. For this to work, it is necessary to characterize the set of weak solutions, and we do this in the case of rank 2, order 3 (in arbitrary dimensions). In our work we emphasize the importance of closely studying concrete low-dimensional examples as a first step towards more general results. To this end, we present a detailed analysis of equivalence classes of 2 × 2 × 2 tensors, and we develop methods for extending results upwards to higher orders and dimensions. Finally, we link our work to existing studies of tensors from an algebraic geometric point of view. The rank of a tensor can in theory be given a semialgebraic description; in other words, it can be determined by a system of polynomial inequalities. We study some of these polynomials in cases of interest to us; in particular we make extensive use of the hyperdeterminant ∆ on R^{2×2×2}.
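The hyperdeterminant ∆ on R^{2×2×2} mentioned at the end has a closed form due to Cayley, and its sign separates the generic real orbits: ∆ > 0 for rank-2 tensors, ∆ < 0 for rank-3 tensors, and ∆ = 0 on the degenerate boundary. A minimal sketch (variable names are mine; the zero-determinant example is the standard tensor of rank 3 and border rank 2):

```python
def hyperdet(a):
    # Cayley's hyperdeterminant of a 2x2x2 array indexed as a[i][j][k].
    s = lambda i, j, k: a[i][j][k]
    sq = (s(0,0,0)**2 * s(1,1,1)**2 + s(0,0,1)**2 * s(1,1,0)**2
          + s(0,1,0)**2 * s(1,0,1)**2 + s(1,0,0)**2 * s(0,1,1)**2)
    pairs = 2 * (s(0,0,0)*s(0,0,1)*s(1,1,0)*s(1,1,1)
               + s(0,0,0)*s(0,1,0)*s(1,0,1)*s(1,1,1)
               + s(0,0,0)*s(1,0,0)*s(0,1,1)*s(1,1,1)
               + s(0,0,1)*s(0,1,0)*s(1,0,1)*s(1,1,0)
               + s(0,0,1)*s(1,0,0)*s(0,1,1)*s(1,1,0)
               + s(0,1,0)*s(1,0,0)*s(0,1,1)*s(1,0,1))
    quad = 4 * (s(0,0,0)*s(0,1,1)*s(1,0,1)*s(1,1,0)
              + s(0,0,1)*s(0,1,0)*s(1,0,0)*s(1,1,1))
    return sq - pairs + quad

def tensor222(entries):
    # Build a 2x2x2 nested list from a sparse dict {(i,j,k): value}.
    return [[[entries.get((i, j, k), 0.0) for k in range(2)]
             for j in range(2)] for i in range(2)]

# Rank 2: Delta > 0.
ghz = tensor222({(0,0,0): 1.0, (1,1,1): 1.0})
# Rank 3 with border rank 2 (a classic tensor lacking a best rank-2
# approximation): Delta = 0.
w = tensor222({(0,0,1): 1.0, (0,1,0): 1.0, (1,0,0): 1.0})
# Rank 3 over R (slices I and a rotation): Delta < 0.
rot = tensor222({(0,0,0): 1.0, (1,1,0): 1.0, (0,1,1): 1.0, (1,0,1): -1.0})

assert hyperdet(ghz) == 1.0
assert hyperdet(w) == 0.0
assert hyperdet(rot) == -4.0
```

For the third example, ∆ agrees with the discriminant of det(A + λB) for the two slices A = I and B a rotation, which is −4.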
Symmetric tensors and symmetric tensor rank
 Scientific Computing and Computational Mathematics (SCCM), 2006
Cited by 101 (22 self)
Abstract. A symmetric tensor is a higher-order generalization of a symmetric matrix. In this paper, we study various properties of symmetric tensors in relation to a decomposition into a symmetric sum of outer products of vectors. A rank-1 order-k tensor is the outer product of k nonzero vectors. Any symmetric tensor can be decomposed into a linear combination of rank-1 tensors, each of them being symmetric or not. The rank of a symmetric tensor is the minimal number of rank-1 tensors that is necessary to reconstruct it. The symmetric rank is obtained when the constituting rank-1 tensors are imposed to be themselves symmetric. It is shown that rank and symmetric rank are equal in a number of cases, and that they always exist in an algebraically closed field. We will discuss the notion of the generic symmetric rank, which, due to the work of Alexander and Hirschowitz, is now known for any values of dimension and order. We will also show that the set of symmetric tensors of symmetric rank at most r is not closed, unless r = 1. Key words. Tensors, multiway arrays, outer product decomposition, symmetric outer product decomposition, candecomp, parafac, tensor rank, symmetric rank, symmetric tensor rank, generic symmetric rank, maximal symmetric rank, quantics. AMS subject classifications. 15A03, 15A21, 15A72, 15A69, 15A18
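The symmetric outer product decomposition discussed here is easy to demonstrate numerically: a sum of cubes v⊗v⊗v is automatically invariant under every permutation of its three indices. A small NumPy sketch (the vectors, sizes, and function name are arbitrary illustrative choices, not taken from the paper):

```python
import numpy as np
from itertools import permutations

def sym_outer_cube(v):
    # Rank-1 order-3 symmetric tensor: the outer product v (x) v (x) v.
    return np.einsum('i,j,k->ijk', v, v, v)

rng = np.random.default_rng(1)
vs = rng.standard_normal((2, 3))  # two vectors in R^3

# A symmetric tensor of symmetric rank at most 2.
T = sym_outer_cube(vs[0]) + sym_outer_cube(vs[1])

# Symmetry: T is unchanged by every permutation of its three indices.
for p in permutations(range(3)):
    assert np.allclose(T, np.transpose(T, p))
```

The decomposition here uses symmetric rank-1 terms; the abstract's point is that allowing non-symmetric rank-1 terms can, in principle, be compared against this symmetric decomposition, and the two minimal counts agree in many cases.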
Stability and instability of solitary waves of the fifth-order KdV equation: a numerical framework
 Physica D, 2002
Cited by 63 (6 self)
The spectral problem associated with the linearization about solitary waves of the generalized fifth-order KdV equation is formulated in terms of the Evans function, a complex analytic function whose zeros correspond to eigenvalues. A numerical framework, based on a fast, robust shooting algorithm on exterior algebra spaces, is introduced. The complete algorithm has several new features, including a rigorous numerical algorithm for choosing starting values, a new method for numerical analytic continuation of starting vectors, the role of the Grassmannian G_2(C^5) in choosing the numerical integrator, and the role of the Hodge star operator for relating Λ^2(C^5) and Λ^3(C^5) and deducing a range of numerically computable forms for the Evans function. The algorithm is illustrated by computing the stability and instability of solitary waves of the fifth-order KdV equation with polynomial nonlinearity.
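The Hodge star used here to pass between Λ^2(C^5) and Λ^3(C^5) sends a basis 2-form to the complementary 3-form, signed by the parity of the combined index permutation; applying it twice on Λ^2 of a 5-dimensional space gives the identity, since (−1)^{k(n−k)} = (−1)^6 = 1. A sketch over real coefficients (the dict-of-index-tuples representation is my own choice, not the paper's data structure):

```python
def perm_sign(p):
    # Sign of a permutation given as a tuple, by counting inversions.
    sign = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                sign = -sign
    return sign

def hodge_star(w, n=5):
    # w maps sorted index tuples (a k-form on R^n) to coefficients.
    # Each basis form e_I goes to sign(I + I^c) * e_{I^c}.
    out = {}
    for idx, c in w.items():
        comp = tuple(k for k in range(n) if k not in idx)
        out[comp] = out.get(comp, 0) + perm_sign(idx + comp) * c
    return out

# A 2-form on R^5: 2 e0^e1 - 1.5 e1^e3.
w = {(0, 1): 2.0, (1, 3): -1.5}
ww = hodge_star(hodge_star(w))  # star twice: identity on 2-forms in R^5
assert ww == w
```

The same complementary-index map is what lets the Evans function be assembled from either a Λ^2 or a Λ^3 integration, whichever is better conditioned.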
Numerical Exterior Algebra and the Compound Matrix Method
 Numer. Math., 2000
Cited by 34 (2 self)
The compound matrix method, which was proposed by Ng & Reid for numerically integrating systems of differential equations in hydrodynamic stability on k = 2, 3 dimensional subspaces, is reformulated in terms of exterior algebra. This formulation leads to a general framework for deriving the induced systems and to several new results, including: the role of Hodge duality in constructing systems, adjoints and boundary conditions; the role of analyticity for systems on unbounded domains; a general formulation of induced boundary conditions; and the role of geometric integrators for preserving the manifold of k-dimensional subspaces. The formulation is presented for k-dimensional subspaces of systems on C^n with k and n arbitrary, and detailed examples are given for the case k = 2 and n = 4, with an indication of implementation details for systems of larger dimension. The theory is then applied to two examples: 2D boundary-layer flow past a compliant surface and the instability of jet-like profiles.
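The induced system for the k = 2, n = 4 case can be formed mechanically: if y' = A(t)y, the six 2×2 minors w of a pair of solutions satisfy w' = A^(2) w, where A^(2)(e_i∧e_j) = (Ae_i)∧e_j + e_i∧(Ae_j). A sketch of the construction and a check of the product rule (function names are mine, not the paper's):

```python
import numpy as np
from itertools import combinations

def minors2(Y):
    # Wedge coordinates of a 4x2 solution pair: its six 2x2 minors.
    return np.array([np.linalg.det(Y[[i, j], :])
                     for i, j in combinations(range(4), 2)])

def induced_matrix(A):
    # Induced action on wedge^2 C^n in the lexicographic minor basis:
    # A2 (e_i ^ e_j) = (A e_i) ^ e_j + e_i ^ (A e_j).
    n = A.shape[0]
    pairs = list(combinations(range(n), 2))
    idx = {p: r for r, p in enumerate(pairs)}
    A2 = np.zeros((len(pairs), len(pairs)), dtype=A.dtype)
    for c, (i, j) in enumerate(pairs):
        for k in range(n):
            if k != j:  # (A e_i) ^ e_j contributes A[k, i] on e_k ^ e_j
                row, s = (idx[(k, j)], 1.0) if k < j else (idx[(j, k)], -1.0)
                A2[row, c] += s * A[k, i]
            if k != i:  # e_i ^ (A e_j) contributes A[k, j] on e_i ^ e_k
                row, s = (idx[(i, k)], 1.0) if i < k else (idx[(k, i)], -1.0)
                A2[row, c] += s * A[k, j]
    return A2

rng = np.random.default_rng(4)
A = rng.standard_normal((4, 4))
Y = rng.standard_normal((4, 2))

# Product rule: d/dt (y1 ^ y2) = (A y1) ^ y2 + y1 ^ (A y2).
lhs = induced_matrix(A) @ minors2(Y)
rhs = (minors2(np.column_stack([A @ Y[:, 0], Y[:, 1]]))
       + minors2(np.column_stack([Y[:, 0], A @ Y[:, 1]])))
assert np.allclose(lhs, rhs)
```

A useful consistency check is the trace identity tr A^(2) = (n − 1) tr A, which follows because the diagonal entry at pair (i, j) is A_ii + A_jj.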
Numerical Taylor expansions of invariant manifolds in large dynamical systems
 1996
Cited by 16 (1 self)
In this paper we develop a numerical method for computing higher-order local approximations of invariant manifolds, such as stable, unstable, or center manifolds near steady states of a dynamical system. The underlying system is assumed to be large, in the sense that the Jacobian at the equilibrium is large and sparse, so that only a linear (black box) solver and a low-dimensional invariant subspace are available, while methods like the QR algorithm are considered too expensive. Our method is based on an analysis of the multilinear Sylvester equations for the higher derivatives, which can be solved under certain nonresonance conditions. These conditions are weaker than the standard gap conditions on the spectrum which guarantee the existence of the invariant manifold. The final algorithm requires the solution of several large linear systems with a bordered Jacobian. To these systems we apply a block elimination method recently developed by Govaerts and Pryce (1991, 1993).
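The classical two-sided special case of the Sylvester equations mentioned here, AX + XB = C, already exhibits the nonresonance condition: a unique solution exists exactly when A and −B share no eigenvalue. A dense small-scale sketch via vectorization (the paper's algorithm instead works with large sparse bordered systems; this is only the textbook analogue, and the function name is mine):

```python
import numpy as np

def solve_sylvester_kron(A, B, C):
    # Solve A X + X B = C via (I (x) A + B^T (x) I) vec(X) = vec(C),
    # using column-major vec. Uniquely solvable iff no eigenvalue of A
    # equals an eigenvalue of -B (the nonresonance condition).
    n, m = C.shape
    K = np.kron(np.eye(m), A) + np.kron(B.T, np.eye(n))
    x = np.linalg.solve(K, C.flatten(order='F'))
    return x.reshape((n, m), order='F')

# Spectra {1, 2, 3} and {-5, -7} are disjoint, so a unique X exists.
A = np.diag([1.0, 2.0, 3.0])
B = np.diag([5.0, 7.0])
rng = np.random.default_rng(2)
C = rng.standard_normal((3, 2))

X = solve_sylvester_kron(A, B, C)
assert np.allclose(A @ X + X @ B, C)
```

For the multilinear equations governing higher derivatives, the analogous condition involves sums of eigenvalues, which is why it can be weaker than the spectral gap condition guaranteeing the manifold itself.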
Grassmann-Cayley Algebra and
 VIII. Handbook of Geometric Computing, 2005
"... ..."
(Show Context)
The Construction of r-Regular Wavelets for Arbitrary Dilations
 J. Fourier Anal. Appl.
Cited by 11 (2 self)
Given any dilation matrix with integer entries and a natural number r, we construct an associated r-regular multiresolution analysis with an r-regular wavelet basis. We also prove that r-regular wavelets have vanishing moments.
Universal projective embeddings of the Grassmannian, half spinor and dual orthogonal geometries
 Quart. J. Math. Oxford, 1983
Cited by 11 (0 self)
Abstract. The Grassmannian, half-spinor and dual orthogonal geometries all have embeddings in projective spaces such that their lines embed as entire projective lines. For example, any Grassmannian geometry arising from a vector space over a (commutative) field has an embedding in the projective geometry of an exterior power of that vector space. The latter two types of geometry possess embeddings in the related space of spinors, as studied by Chevalley. This paper describes these embeddings and shows that all other projective embeddings are semilinear images of them (with the exception of the dual orthogonal geometries when the underlying field has characteristic two.) 1. Projective Embeddings. Let G = (P, L) be a geometry with point set P and line set L; we regard each line as a subset of P. If G1 = (P1, L1) is a classical (i.e., Desarguesian) projective geometry and φ: P → P1 is an injection, we say that φ is a projective embedding (or simply an embedding) when the map which φ induces on lines maps L into L1. It is essentially this notion of embedding which has been used by several authors, including Bueken
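The Grassmannian embedding into an exterior power mentioned in this abstract is the Plücker embedding: a 2-plane in a 4-dimensional space maps to the projective point given by its six 2×2 minors, and those coordinates satisfy the quadratic Grassmann–Plücker relation cutting out the image. A numerical sketch (the basis and random plane are my own illustrative choices):

```python
import numpy as np
from itertools import combinations

def plucker(Y):
    # Plucker coordinates of the column span of an n x 2 matrix:
    # its 2x2 minors, in lexicographic row-pair order.
    n = Y.shape[0]
    return np.array([np.linalg.det(Y[[i, j], :])
                     for i, j in combinations(range(n), 2)])

rng = np.random.default_rng(3)
Y = rng.standard_normal((4, 2))          # a random 2-plane in R^4
p = plucker(Y)                           # (p01, p02, p03, p12, p13, p23)

# Grassmann-Plucker relation: p01*p23 - p02*p13 + p03*p12 = 0.
assert abs(p[0] * p[5] - p[1] * p[4] + p[2] * p[3]) < 1e-10

# Changing the basis of the plane rescales all coordinates by det(G),
# so the point in projective space depends only on the plane itself.
G = rng.standard_normal((2, 2))
assert np.allclose(plucker(Y @ G), np.linalg.det(G) * p)
```

The second assertion is exactly why the map is well defined on the geometry's points (planes) rather than on the matrices representing them, which is the sense in which lines of the Grassmannian geometry embed as projective lines.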