Results 1–9 of 9
TENSOR RANK AND THE ILL-POSEDNESS OF THE BEST LOW-RANK APPROXIMATION PROBLEM
Abstract

Cited by 193 (13 self)
There has been continued interest in finding a theorem for optimal low-rank approximations of tensors of order 3 or higher that parallels the Eckart–Young theorem for matrices. In this paper, we argue that the naive approach to this problem is doomed to failure because, unlike matrices, tensors of order 3 or higher can fail to have best rank-r approximations. The phenomenon is much more widespread than one might suspect: examples of this failure can be constructed over a wide range of dimensions, orders, and ranks, regardless of the choice of norm (or even Brègman divergence). Moreover, we show that in many instances these counterexamples have positive volume: they cannot be regarded as isolated phenomena. In one extreme case, we exhibit a tensor space in which no rank-3 tensor has an optimal rank-2 approximation. The notable exceptions to this misbehavior are rank-1 tensors and order-2 tensors (i.e., matrices). In a more positive spirit, we propose a natural way of overcoming the ill-posedness of the low-rank approximation problem: using weak solutions when true solutions do not exist. For this to work, it is necessary to characterize the set of weak solutions, and we do this in the case of rank 2, order 3 (in arbitrary dimensions). In our work we emphasize the importance of closely studying concrete low-dimensional examples as a first step towards more general results. To this end, we present a detailed analysis of equivalence classes of 2 × 2 × 2 tensors, and we develop methods for extending results upwards to higher orders and dimensions. Finally, we link our work to existing studies of tensors from an algebraic-geometric point of view. The rank of a tensor can in principle be given a semialgebraic description; in other words, it can be determined by a system of polynomial inequalities. We study some of these polynomials in cases of interest to us; in particular, we make extensive use of the hyperdeterminant Δ on R^{2×2×2}.
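The nonexistence phenomenon this abstract describes is easy to see numerically. The sketch below builds the classic rank-3 tensor T = a⊗a⊗b + a⊗b⊗a + b⊗a⊗a together with a sequence of rank-2 tensors converging to it, so no best rank-2 approximation of T can exist; the specific vectors a, b and the step values are illustrative choices, not taken from the paper.

```python
import numpy as np

a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])

def outer3(x, y, z):
    """Rank-1 order-3 tensor: the outer product x ⊗ y ⊗ z."""
    return np.einsum('i,j,k->ijk', x, y, z)

# A rank-3 tensor whose border rank is 2.
T = outer3(a, a, b) + outer3(a, b, a) + outer3(b, a, a)

# Each T_n is a sum of two rank-1 terms, hence rank at most 2,
# yet T_n -> T as n grows: T has no best rank-2 approximation.
for n in [1, 10, 100, 1000]:
    u = a + b / n
    Tn = n * outer3(u, u, u) - n * outer3(a, a, a)
    print(n, np.linalg.norm(Tn - T))  # error shrinks like O(1/n)
```

Because the approximation error can be made arbitrarily small but never zero, the infimum over rank-2 tensors is not attained — exactly the ill-posedness the paper analyzes.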
Symmetric tensors and symmetric tensor rank
 Scientific Computing and Computational Mathematics (SCCM), 2006
Abstract

Cited by 101 (22 self)
Abstract. A symmetric tensor is a higher-order generalization of a symmetric matrix. In this paper, we study various properties of symmetric tensors in relation to a decomposition into a symmetric sum of outer products of vectors. A rank-1 order-k tensor is the outer product of k nonzero vectors. Any symmetric tensor can be decomposed into a linear combination of rank-1 tensors, each of them symmetric or not. The rank of a symmetric tensor is the minimal number of rank-1 tensors needed to reconstruct it. The symmetric rank is obtained when the constituent rank-1 tensors are required to be themselves symmetric. It is shown that rank and symmetric rank are equal in a number of cases, and that they always exist over an algebraically closed field. We discuss the notion of the generic symmetric rank, which, due to the work of Alexander and Hirschowitz, is now known for any values of dimension and order. We also show that the set of symmetric tensors of symmetric rank at most r is not closed, unless r = 1. Key words. Tensors, multi-way arrays, outer product decomposition, symmetric outer product decomposition, CANDECOMP, PARAFAC, tensor rank, symmetric rank, symmetric tensor rank, generic symmetric rank, maximal symmetric rank, quantics. AMS subject classifications. 15A03, 15A21, 15A72, 15A69, 15A18
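As a quick illustration of the decomposition discussed above, the following sketch (my example, not from the paper) builds a symmetric order-3 tensor as a sum of symmetric rank-1 terms v⊗v⊗v and checks that it is invariant under all index permutations — the defining property of a symmetric tensor.

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(0)
vs = rng.standard_normal((2, 4))  # two vectors in R^4, chosen arbitrarily

# Symmetric tensor built from symmetric rank-1 terms v ⊗ v ⊗ v.
S = sum(np.einsum('i,j,k->ijk', v, v, v) for v in vs)

# A tensor is symmetric iff it is unchanged by every permutation of its indices.
for p in permutations(range(3)):
    assert np.allclose(S, np.transpose(S, p))
print("symmetric under all 6 index permutations")
```

By construction this S has symmetric rank at most 2; deciding the exact (symmetric) rank in general is the hard question the paper studies.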
The Lyapunov Characteristic Exponents and their Computation
 Lect. Notes Phys., 2010
Abstract

Cited by 27 (1 self)
For want of a nail the shoe was lost. For want of a shoe the horse was lost. For want of a horse the rider was lost. For want of a rider the battle was lost. For want of a battle the kingdom was lost. And all for the want of a horseshoe nail. For Want of a Nail (proverbial rhyme)

Summary. We present a survey of the theory of the Lyapunov Characteristic Exponents (LCEs) for dynamical systems, as well as of the numerical techniques developed for the computation of the maximal LCE, of a few of them, and of all of them. After some historical notes on the first attempts at the numerical evaluation of LCEs, we discuss in detail the multiplicative ergodic theorem of Oseledec [99], which provides the theoretical basis for the computation of the LCEs. Then we analyze the algorithm for the computation of the maximal LCE, whose value has been extensively used as an indicator of chaos, and the so-called 'standard method', developed by Benettin et al. [14], for the computation of many LCEs. We also consider different discrete and continuous methods for computing the LCEs based on QR or singular value decomposition techniques. Although we are mainly interested in finite-dimensional conservative systems, i.e. autonomous Hamiltonian systems and symplectic maps, we also briefly refer to the evaluation of LCEs of dissipative systems and time series. The relation of two chaos detection techniques, namely the fast Lyapunov indicator (FLI) and the generalized alignment index (GALI), to the computation of the LCEs is also discussed.
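The core idea behind the maximal-LCE algorithm surveyed here can be sketched in one dimension, where no renormalization is needed: the exponent is the orbit average of log|f′(x)|. The example below (mine, not from the survey) uses the logistic map x → 4x(1−x), whose maximal LCE is known analytically to be ln 2; the standard method of Benettin et al. generalizes this averaging to higher dimensions via repeated re-orthonormalization of tangent vectors.

```python
import math

def max_lce(x0=0.3, n=200_000, burn=1_000):
    """Average log|f'(x)| along an orbit of the logistic map x -> 4x(1-x)."""
    x = x0
    for _ in range(burn):           # discard the transient
        x = 4 * x * (1 - x)
    s = 0.0
    for _ in range(n):
        s += math.log(abs(4 * (1 - 2 * x)))  # |f'(x)| = |4(1 - 2x)|
        x = 4 * x * (1 - x)
    return s / n

print(max_lce())  # ≈ ln 2 ≈ 0.693 — positive, indicating chaos
```

A positive value of the maximal LCE is precisely the chaos indicator the survey discusses; for a regular orbit the same average would tend to a non-positive value.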
Quasi-Newton methods on Grassmannians and multilinear approximations of tensors
, 2009
Abstract

Cited by 17 (3 self)
Abstract. In this paper we propose quasi-Newton and limited-memory quasi-Newton methods for objective functions defined on Grassmann manifolds or a product of Grassmann manifolds. Specifically, we define BFGS and L-BFGS updates in local and global coordinates on Grassmann manifolds or a product of these. We prove that, when local coordinates are used, our BFGS updates on Grassmann manifolds share the same optimality property as the usual BFGS updates on Euclidean spaces. When applied to the best multilinear rank approximation problem for general and symmetric tensors, our approach yields fast, robust, and accurate algorithms that exploit the special Grassmannian structure of the respective problems, and that work on tensors of large dimensions and arbitrarily high order. Extensive numerical experiments are included to substantiate our claims. Key words. Grassmann manifold, Grassmannian, product of Grassmannians, Grassmann quasi-Newton, Grassmann BFGS, Grassmann L-BFGS, multilinear rank, symmetric multilinear rank, tensor, symmetric tensor, approximations
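For context on the target problem, the sketch below computes a multilinear rank-(r1, r2, r3) approximation by truncated HOSVD — the standard quasi-optimal baseline that iterative methods such as the paper's Grassmannian quasi-Newton algorithms would refine. This is not the paper's algorithm, only an illustration of the problem it solves.

```python
import numpy as np

def hosvd_truncate(T, ranks):
    """Project T onto the leading r_k-dimensional singular subspace of each
    mode-k unfolding (truncated higher-order SVD)."""
    factors = []
    for mode, r in enumerate(ranks):
        unfolding = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        factors.append(U[:, :r])          # orthonormal basis, one per mode
    core = np.einsum('ijk,ia,jb,kc->abc', T, *factors)
    return np.einsum('abc,ia,jb,kc->ijk', core, *factors)

rng = np.random.default_rng(1)
T = rng.standard_normal((5, 5, 5))
A = hosvd_truncate(T, (2, 2, 2))          # multilinear rank-(2,2,2) candidate
print(np.linalg.norm(T - A))              # residual of the truncated HOSVD
```

The factor matrices live on Grassmannians (only their column spaces matter), which is exactly the geometric structure the paper's BFGS and L-BFGS updates exploit.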
Induced Operators on Symmetry Classes of Tensors
Abstract

Cited by 8 (5 self)
Let V be an n-dimensional Hilbert space. Suppose H is a subgroup of the symmetric group of degree m, and χ: H → C is a character of degree 1 on H. Consider the symmetrizer on the tensor space ⊗^m V, S(v_1 ⊗ ··· ⊗ v_m) = (1/|H|) Σ_{σ∈H} χ(σ) v_{σ(1)} ⊗ ··· ⊗ v_{σ(m)}, defined by H and χ. The vector space V_χ(H) = S(⊗^m V) is a subspace of ⊗^m V, called the symmetry class of tensors over V associated with H and χ. The elements of V_χ(H) of the form S(v_1 ⊗ ··· ⊗ v_m) are called decomposable tensors and are denoted by v_1 ∗ ··· ∗ v_m. For any linear operator T acting on V, there is a (unique) induced operator K(T) acting on V_χ(H) satisfying K(T)(v_1 ∗ ··· ∗ v_m) = Tv_1 ∗ ··· ∗ Tv_m.
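The defining identity of the induced operator can be checked numerically in the simplest case m = 2, H = S_2, χ ≡ 1, where the symmetrizer S is the ordinary symmetrization on V ⊗ V. The sketch below (my construction, not from the paper) verifies K(T)(v_1 ∗ v_2) = Tv_1 ∗ Tv_2 for a random operator T.

```python
import numpy as np

n = 3
rng = np.random.default_rng(2)
T = rng.standard_normal((n, n))
v1, v2 = rng.standard_normal(n), rng.standard_normal(n)

def symmetrize(X):
    """S = (1/|H|) Σ_σ χ(σ) P_σ for H = S_2, χ ≡ 1: ordinary symmetrization."""
    return 0.5 * (X + X.T)

# v1 ∗ v2 = S(v1 ⊗ v2), represented as a symmetric n×n matrix.
star = symmetrize(np.outer(v1, v2))

# K(T) acts as T ⊗ T restricted to the symmetry class:
# (T ⊗ T)(v1 ⊗ v2) corresponds to T · outer(v1, v2) · T^T.
KT_star = symmetrize(T @ np.outer(v1, v2) @ T.T)

assert np.allclose(KT_star, symmetrize(np.outer(T @ v1, T @ v2)))
print("K(T)(v1 * v2) == Tv1 * Tv2 verified")
```

For characters other than the trivial one (e.g. the sign character, giving the Grassmann/antisymmetric class), only the weights χ(σ) in the symmetrizer change.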
Calculus of smooth functions between convenient vector spaces
 Aarhus Preprint Series 1984/85
Abstract

Cited by 1 (1 self)
We give an exposition of some basic results concerning remainders in Taylor expansions for smooth functions between convenient vector spaces, in the sense of Frölicher and Kriegl, cf. [2], [11], [3], [13]. We needed such results in [9], but could not find them in the works quoted.¹ The method we employ is very puristic: we never have to consider limits, or any other analytic tools, except for finite-dimensional vector spaces R^n. In this sense, we carry Frölicher's program of considering mutually balancing sets of curves R → X and functions X → R to the extreme. (Also, the puristic aspect makes it easy to transfer the theory to "synthetic" contexts, like [7].) Besides the introduction, where we recall some of the existing theory, the paper contains two sections: 1) the general theory of Taylor remainders for smooth maps X → Y, where X and Y are convenient vector spaces; and 2) a more refined theory for the case where X = R^n. First, we recall the notion of convenient vector space in the formulation of [3]: it is a vector space X over R, equipped with a linear subspace X′ ...

* This is a retyping of a preprint [8] with the same title, Aarhus Preprint Series 1984/85 No. 18. The bibliography has been updated, since [9] and [10] have in the meantime been published. Also, [4] has been published (1988). The numbering of the equations has changed, but the numbering of Propositions, Theorems, etc. is unchanged compared to the preprint version.

¹ [4] does have some of these results; [8] is quoted there (Section 4.4) in connection with
Future Directions in Tensor-Based Computation and Modeling
, 2009
Abstract
High-dimensional modeling is becoming ubiquitous across the sciences and engineering because of advances in sensor technology and storage technology. Computationally oriented researchers no longer have to avoid what were once intractably large, tensor-structured data sets. The current NSF promotion of “computational thinking” is timely: we need a focused international effort to oversee the transition from matrix-based to tensor-based computational thinking. The successful problem-solving tools provided by the numerical linear algebra community need to be broadened and generalized. However, tensor-based research is not just matrix-based research with additional subscripts. Tensors are data objects in their own right, and there is much to learn about their geometry and their connections to statistics and operator theory. This requires full participation of researchers from engineering, the natural sciences, and the information sciences, together with statisticians, mathematicians, numerical analysts, and software/language designers. Representatives from these disciplines participated in the Workshop. We believe that the NSF can help ensure the vitality of “big N” engineering and science by systematically supporting research in tensor-based computation and modeling.
unknown title
Abstract
Most chapters in this handbook are concerned with various aspects and implications of linearity; Chapter 14 and this chapter are unusual in that they are about multilinearity. Just as linear operators and their coordinate representations, i.e., matrices, are the main objects of interest in other chapters, tensors and their coordinate representations, i.e., hypermatrices, are the main objects of interest in this chapter. The parallel is summarized in the following schematic:

linearity → linear operators, bilinear forms, dyads → matrices
multilinearity → tensors → hypermatrices

Chapter 14, or indeed the monographs on multilinear algebra such as [Gre78, Mar23, Nor84, Yok92], are about properties of a whole space of tensors. This chapter is about properties of a single tensor and its coordinate representation, a hypermatrix. The first two sections introduce (1) a hypermatrix, (2) a tensor as an element of a tensor product of vector spaces, its coordinate representation as a hypermatrix, and a tensor as a multilinear functional. The subsequent sections discuss the various generalizations of well-known linear algebraic and matrix-theoretic notions, such as rank, norm, and determinant, to
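The tensor-as-multilinear-functional view mentioned above is easy to make concrete. In the sketch below (my illustration, not from the chapter), an order-3 hypermatrix A = (a_ijk) acts as the functional T(x, y, z) = Σ_ijk a_ijk x_i y_j z_k, and linearity in one argument is checked.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((2, 3, 4))  # an order-3 hypermatrix (a_ijk)
x, y, z = rng.standard_normal(2), rng.standard_normal(3), rng.standard_normal(4)

# T(x, y, z) = Σ_ijk a_ijk x_i y_j z_k — full contraction over all three modes.
value = np.einsum('ijk,i,j,k->', A, x, y, z)

# Multilinearity in the first argument:
# T(2x + 3x', y, z) = 2 T(x, y, z) + 3 T(x', y, z).
x2 = rng.standard_normal(2)
lhs = np.einsum('ijk,i,j,k->', A, 2 * x + 3 * x2, y, z)
rhs = 2 * value + 3 * np.einsum('ijk,i,j,k->', A, x2, y, z)
assert np.isclose(lhs, rhs)
print("multilinear in the first argument")
```

The same contraction pattern with one argument omitted (e.g. `'ijk,j,k->i'`) shows the companion view of a hypermatrix as a multilinear operator rather than a functional.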