Tensor Decompositions and Applications
 SIAM Review, 2009
Abstract

Cited by 705 (17 self)
This survey provides an overview of higher-order tensor decompositions, their applications, and available software. A tensor is a multidimensional or N-way array. Decompositions of higher-order tensors (i.e., N-way arrays with N ≥ 3) have applications in psychometrics, chemometrics, signal processing, numerical linear algebra, computer vision, numerical analysis, data mining, neuroscience, graph analysis, etc. Two particular tensor decompositions can be considered to be higher-order extensions of the matrix singular value decomposition: CANDECOMP/PARAFAC (CP) decomposes a tensor as a sum of rank-one tensors, and the Tucker decomposition is a higher-order form of principal components analysis. There are many other tensor decompositions, including INDSCAL, PARAFAC2, CANDELINC, DEDICOM, and PARATUCK2, as well as nonnegative variants of all of the above. The N-way Toolbox and Tensor Toolbox, both for MATLAB, and the Multilinear Engine are examples of software packages for working with tensors.
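The CP model described in this abstract can be made concrete with a short sketch (NumPy assumed; this illustrates the definition only, not the N-way Toolbox or Tensor Toolbox APIs): a rank-R CP tensor is a sum of R rank-one outer-product terms built from factor matrices.

```python
import numpy as np

# Sketch: a rank-R CP tensor built from hypothetical factor matrices A, B, C.
# T[i,j,k] = sum_r A[i,r] * B[j,r] * C[k,r]  (a sum of R rank-one tensors)
I, J, K, R = 4, 5, 6, 3
rng = np.random.default_rng(0)
A = rng.standard_normal((I, R))
B = rng.standard_normal((J, R))
C = rng.standard_normal((K, R))

# One-shot contraction over the shared rank index r ...
T = np.einsum('ir,jr,kr->ijk', A, B, C)

# ... equals the explicit sum of rank-one outer products a_r ⊗ b_r ⊗ c_r.
T_sum = sum(np.einsum('i,j,k->ijk', A[:, r], B[:, r], C[:, r]) for r in range(R))
assert np.allclose(T, T_sum)
```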
TENSOR RANK AND THE ILL-POSEDNESS OF THE BEST LOW-RANK APPROXIMATION PROBLEM
Abstract

Cited by 193 (13 self)
There has been continued interest in seeking a theorem describing optimal low-rank approximations to tensors of order 3 or higher that parallels the Eckart–Young theorem for matrices. In this paper, we argue that the naive approach to this problem is doomed to failure because, unlike matrices, tensors of order 3 or higher can fail to have best rank-r approximations. The phenomenon is much more widespread than one might suspect: examples of this failure can be constructed over a wide range of dimensions, orders, and ranks, regardless of the choice of norm (or even Brègman divergence). Moreover, we show that in many instances these counterexamples have positive volume: they cannot be regarded as isolated phenomena. In one extreme case, we exhibit a tensor space in which no rank-3 tensor has an optimal rank-2 approximation. The notable exceptions to this misbehavior are rank-1 tensors and order-2 tensors (i.e., matrices). In a more positive spirit, we propose a natural way of overcoming the ill-posedness of the low-rank approximation problem, by using weak solutions when true solutions do not exist. For this to work, it is necessary to characterize the set of weak solutions, and we do this in the case of rank 2, order 3 (in arbitrary dimensions). In our work we emphasize the importance of closely studying concrete low-dimensional examples as a first step towards more general results. To this end, we present a detailed analysis of equivalence classes of 2 × 2 × 2 tensors, and we develop methods for extending results upwards to higher orders and dimensions. Finally, we link our work to existing studies of tensors from an algebraic geometric point of view. The rank of a tensor can in theory be given a semialgebraic description; in other words, it can be determined by a system of polynomial inequalities. We study some of these polynomials in cases of interest to us; in particular we make extensive use of the hyperdeterminant ∆ on R^{2×2×2}.
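The failure of best low-rank approximations described above can be observed numerically. A minimal sketch (NumPy assumed; the tensor W below is the standard example of a rank-3 tensor with border rank 2): rank-2 tensors approach W arbitrarily closely, so no best rank-2 approximation of W can exist.

```python
import numpy as np

def outer3(a, b, c):
    # order-3 outer product a ⊗ b ⊗ c
    return np.einsum('i,j,k->ijk', a, b, c)

x = np.array([1.0, 0.0])
y = np.array([0.0, 1.0])

# W = x⊗x⊗y + x⊗y⊗x + y⊗x⊗x has rank 3 ...
W = outer3(x, x, y) + outer3(x, y, x) + outer3(y, x, x)

# ... yet it is the limit of the rank-2 tensors
#     T_eps = ((x + eps*y)^{⊗3} - x^{⊗3}) / eps   as eps -> 0.
errs = []
for eps in [1e-1, 1e-2, 1e-3]:
    z = x + eps * y
    T_eps = (outer3(z, z, z) - outer3(x, x, x)) / eps  # difference of two rank-1 terms
    errs.append(np.linalg.norm(T_eps - W))

# the approximation error shrinks like eps * sqrt(3), never reaching a minimum
print(errs)
```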
Multiplying matrices faster than Coppersmith-Winograd
 In Proc. 44th ACM Symposium on Theory of Computing, 2012
Abstract

Cited by 148 (8 self)
We develop new tools for analyzing matrix multiplication constructions similar to the Coppersmith-Winograd construction, and obtain a new improved bound of ω < 2.3727.
Symmetric tensors and symmetric tensor rank
 Scientific Computing and Computational Mathematics (SCCM), 2006
Abstract

Cited by 101 (22 self)
Abstract. A symmetric tensor is a higher-order generalization of a symmetric matrix. In this paper, we study various properties of symmetric tensors in relation to a decomposition into a symmetric sum of outer products of vectors. A rank-1 order-k tensor is the outer product of k nonzero vectors. Any symmetric tensor can be decomposed into a linear combination of rank-1 tensors, each of them being symmetric or not. The rank of a symmetric tensor is the minimal number of rank-1 tensors needed to reconstruct it. The symmetric rank is obtained when the constituting rank-1 tensors are required to be themselves symmetric. It is shown that rank and symmetric rank are equal in a number of cases, and that they always exist in an algebraically closed field. We will discuss the notion of the generic symmetric rank, which, due to the work of Alexander and Hirschowitz, is now known for any values of dimension and order. We will also show that the set of symmetric tensors of symmetric rank at most r is not closed, unless r = 1. Key words. Tensors, multiway arrays, outer product decomposition, symmetric outer product decomposition, CANDECOMP, PARAFAC, tensor rank, symmetric rank, symmetric tensor rank, generic symmetric rank, maximal symmetric rank, quantics. AMS subject classifications. 15A03, 15A21, 15A72, 15A69, 15A18
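The objects in this abstract are easy to exhibit in a small sketch (NumPy assumed; dimensions and coefficients below are arbitrary illustrations): a symmetric rank-1 order-k tensor is v ⊗ … ⊗ v, and any linear combination of such terms is invariant under every permutation of its indices.

```python
import numpy as np
from itertools import permutations

def sym_rank1(v, k=3):
    # symmetric rank-1 order-k tensor v ⊗ v ⊗ ... ⊗ v
    T = v
    for _ in range(k - 1):
        T = np.multiply.outer(T, v)
    return T

rng = np.random.default_rng(1)
vs = rng.standard_normal((3, 4))        # three vectors in R^4
lam = np.array([2.0, -1.0, 0.5])        # arbitrary coefficients

# a symmetric order-3 tensor as a linear combination of symmetric rank-1 terms
S = sum(l * sym_rank1(v) for l, v in zip(lam, vs))

# symmetry check: S is unchanged by every permutation of its 3 indices
for p in permutations(range(3)):
    assert np.allclose(S, np.transpose(S, p))
```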
Nonnegative approximations of nonnegative tensors
 Jour. Chemometrics, 2009
Abstract

Cited by 40 (15 self)
Abstract. We study the decomposition of a nonnegative tensor into a minimal sum of outer products of nonnegative vectors and the associated parsimonious naïve Bayes probabilistic model. We show that the corresponding approximation problem, which is central to nonnegative PARAFAC, will always have optimal solutions. The result holds for any choice of norms and, under a mild assumption, even Brègman divergences. 1. Dedication. This article is dedicated to the memory of our late colleague Richard Allan Harshman. It is loosely organized around two of Harshman's best-known works, PARAFAC [19] and LSI [13], and answers two questions that he posed. We target this article to a technometrics readership. In Section 4, we discuss a few aspects of nonnegative tensor factorization and Hofmann's PLSI, a variant of the LSI model co-proposed by Harshman [13]. In Section 5, we answer a question of Harshman on why the apparently unrelated construction of Bini, Capovani, Lotti, and Romani in [1] should be regarded as the first example of what he called 'parafac degeneracy' [27]. Finally, in Section 6, we show that such parafac degeneracy will not happen for nonnegative approximations of nonnegative tensors, answering another question of his.
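The nonnegative PARAFAC approximation problem that the paper shows to be well posed can be attacked with standard multiplicative updates. This is a Lee–Seung-style sketch, not the authors' method; NumPy is assumed, with mode-n unfoldings in the usual Kolda–Bader convention.

```python
import numpy as np

def unfold(T, mode):
    # mode-n unfolding X_(n): rows indexed by the chosen mode
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1, order='F')

def khatri_rao(A, B):
    # column-wise Kronecker product of A (K×R) and B (J×R) -> (K*J)×R
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

def nonneg_cp(T, R, iters=300, eps=1e-9, seed=0):
    # rank-R nonnegative CP via multiplicative updates; nonnegativity of the
    # factors is preserved because T and the initial factors are nonnegative
    rng = np.random.default_rng(seed)
    F = [rng.random((s, R)) + 0.1 for s in T.shape]
    for _ in range(iters):
        for n in range(3):
            a, b = [m for m in range(3) if m != n]
            Z = khatri_rao(F[b], F[a])        # X_(n) ≈ F[n] @ Z.T
            Xn = unfold(T, n)
            F[n] *= (Xn @ Z) / (F[n] @ (Z.T @ Z) + eps)
    return F
```

Because the target tensor and all iterates stay elementwise nonnegative, the degeneracy discussed in Section 6 of the abstract (factors diverging while the approximation error decreases) cannot arise here.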
Geometry and the complexity of matrix multiplication, 2007
Abstract

Cited by 35 (5 self)
Abstract. We survey results in algebraic complexity theory, focusing on matrix multiplication. Our goals are (i) to show how open questions in algebraic complexity theory are naturally posed as questions in geometry and representation theory, (ii) to motivate researchers to work on these questions, and (iii) to point out relations with more general problems in geometry. The key geometric objects for our study are the secant varieties of Segre varieties. We explain how these varieties are also useful for algebraic statistics, the study of phylogenetic invariants, and quantum computing.
Determinantal equations for secant varieties and the Eisenbud-Koh-Stillman conjecture, arXiv:1007.0192v3, 2010
Abstract

Cited by 27 (6 self)
We sketch how to construct an example of a smoothable scheme R ⊂ PV and a smooth variety X ⊂ PV such that R ∩ X is locally Gorenstein, but not smoothable. Such an example illustrates that in the course of the proof of Theorem 1.1.1 in [BGL10] one really needs to treat this special case. We wrote down this example upon the request of an anonymous referee of this paper, and also motivated by questions from audiences during the author's presentations in Grenoble and Berlin. To begin with, note that unless R ∩ X = R, or R ∩ X is "small enough" (so that all schemes of given degree and embedding dimension are smoothable), there is no obvious reason why R ∩ X should be smoothable. In general, smoothability issues are very delicate and often rely on a case-by-case study rather than general statements; see for instance the proofs in [CEVV09] or [CN09]. Thus even if X ∩ R were always smoothable, for some weird reason, the proof would be much more complicated than the proof of Theorem 1.1.1 in [BGL10]. Below we present a series of steps by which one can construct R and X for which R ∩ X is non-smoothable, but without giving all the details. By the very nature of the smoothability issue, both R and X will be quite large. We keep in mind that it is desirable to construct R and X such that R ∩ X is locally
The border rank of the multiplication of 2 × 2 matrices is seven
 J. Amer. Math. Soc
Abstract

Cited by 16 (5 self)
One of the leading problems of algebraic complexity theory is matrix multiplication. The naïve multiplication of two n × n matrices uses n^3 multiplications. In 1969, Strassen [20] presented an explicit algorithm for multiplying 2 × 2 matrices using seven multiplications. In the opposite direction, Hopcroft and Kerr [12] and
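Strassen's seven-multiplication scheme mentioned above can be written down directly. A sketch in NumPy on a 2 × 2 block partition (one level of the recursion; applied recursively it yields the O(n^2.81) algorithm):

```python
import numpy as np

def strassen_2x2_block(A, B):
    # Strassen's scheme: 7 block multiplications instead of the naive 8
    n = A.shape[0] // 2
    A11, A12, A21, A22 = A[:n, :n], A[:n, n:], A[n:, :n], A[n:, n:]
    B11, B12, B21, B22 = B[:n, :n], B[:n, n:], B[n:, :n], B[n:, n:]
    M1 = (A11 + A22) @ (B11 + B22)
    M2 = (A21 + A22) @ B11
    M3 = A11 @ (B12 - B22)
    M4 = A22 @ (B21 - B11)
    M5 = (A11 + A12) @ B22
    M6 = (A21 - A11) @ (B11 + B12)
    M7 = (A12 - A22) @ (B21 + B22)
    return np.block([[M1 + M4 - M5 + M7, M3 + M5],
                     [M2 + M4,           M1 - M2 + M3 + M6]])
```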
On the Arithmetic Complexity of Strassen-Like Matrix Multiplications
Abstract
The Strassen algorithm for multiplying 2 × 2 matrices requires seven multiplications and 18 additions. The recursive use of this algorithm for matrices of dimension n yields a total arithmetic complexity of (7n^2.81 − 6n^2) for n = 2^k. Winograd showed that using seven multiplications for this kind of multiplication is optimal, so any algorithm for multiplying 2 × 2 matrices with seven multiplications is therefore called a Strassen-like algorithm. Winograd also discovered an additively optimal Strassen-like algorithm with 15 additions. This algorithm is called Winograd's variant, whose arithmetic complexity is (6n^2.81 − 5n^2) for n = 2^k and (3.73n^2.81 − 5n^2) for n = 8 · 2^k, which is the best-known bound for Strassen-like multiplications. This paper proposes a method that reduces the complexity of Winograd's variant to (5n^2.81 + 0.5n^2.59 + 2n^2.32 − 6.5n^2) for n = 2^k. It is also shown that the total arithmetic complexity can be improved to (3.55n^2.81 + 0.148n^2.59 + 1.02n^2.32 − 6.5n^2) for n = 8 · 2^k, which, to the best of our knowledge, improves the best-known bound for a Strassen-like matrix multiplication algorithm.
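For reference, the 15-addition Winograd variant discussed above looks as follows (a sketch of the standard variant, not this paper's improved method; block form, one recursion level, NumPy assumed):

```python
import numpy as np

def winograd_2x2_block(A, B):
    # Winograd's variant: 7 block multiplications, 15 block additions
    n = A.shape[0] // 2
    A11, A12, A21, A22 = A[:n, :n], A[:n, n:], A[n:, :n], A[n:, n:]
    B11, B12, B21, B22 = B[:n, :n], B[:n, n:], B[n:, :n], B[n:, n:]
    S1 = A21 + A22; S2 = S1 - A11; S3 = A11 - A21; S4 = A12 - S2  # 4 additions
    S5 = B12 - B11; S6 = B22 - S5; S7 = B22 - B12; S8 = S6 - B21  # 4 additions
    P1 = S2 @ S6;  P2 = A11 @ B11; P3 = A12 @ B21                 # 7 multiplications
    P4 = S3 @ S7;  P5 = S1 @ S5;   P6 = S4 @ B22; P7 = A22 @ S8
    T1 = P1 + P2;  T2 = T1 + P4                                   # 7 more additions below
    return np.block([[P2 + P3,      T1 + P5 + P6],
                     [T2 - P7,      T2 + P5]])
```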
Exact and Approximation Algorithms for the Maximum Constraint Satisfaction Problem over the Point Algebra
Abstract
We study the constraint satisfaction problem over the point algebra. In this problem, an instance consists of a set of variables and a set of binary constraints of the forms (x &lt; y), (x ≤ y), (x = y), or (x ≠ y). The objective is to assign integers to variables so as to satisfy as many constraints as possible. This problem contains many important problems such as Correlation Clustering, Maximum Acyclic Subgraph, and Feedback Arc Set. We first give an exact algorithm that runs in O*(3^((log 5 / log 6) n)) time, which improves the previous best O*(3^n) obtained by a standard dynamic programming. Our algorithm combines dynamic programming with the split-and-list technique. The split-and-list technique involves matrix products, and we make use of the sparsity of matrices to speed up the computation. As for approximation, we give a 0.4586-approximation algorithm when the objective is maximizing the number of satisfied constraints, and an O(log n log log n)-approximation algorithm when the objective is minimizing the number of unsatisfied constraints.
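The problem statement above can be made concrete with a tiny brute-force solver (an illustrative O*(n^n) enumeration over rank assignments, far slower than the paper's algorithm; the function name is hypothetical):

```python
from itertools import product
import operator

OPS = {'<': operator.lt, '<=': operator.le, '==': operator.eq, '!=': operator.ne}

def max_satisfied(variables, constraints):
    """Max number of point-algebra constraints satisfiable by an integer assignment.

    Only the relative order of values matters, so ranks 0..n-1 suffice; we
    enumerate all n^n rank assignments (illustration only).
    """
    n = len(variables)
    best = 0
    for ranks in product(range(n), repeat=n):
        val = dict(zip(variables, ranks))
        best = max(best, sum(OPS[op](val[x], val[y]) for x, op, y in constraints))
    return best

# a 3-cycle of strict inequalities: at most two can hold simultaneously
print(max_satisfied(['a', 'b', 'c'],
                    [('a', '<', 'b'), ('b', '<', 'c'), ('c', '<', 'a')]))  # → 2
```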