Results

**1 - 5** of **5**

### Semidefinite relaxations for best rank-1 tensor approximations

- SIAM JOURNAL ON MATRIX ANALYSIS AND APPLICATIONS, 2014


### Tensor Low Multilinear Rank Approximation by Structured Matrix Low-Rank Approximation

Abstract — We present a new connection between higher-order tensors and affinely structured matrices, in the context of low-rank approximation. In particular, we show that the tensor low multilinear rank approximation problem can be reformulated as a structured matrix low-rank approximation, the latter being an extensively studied and well understood problem. We first consider symmetric tensors. Although the symmetric tensor problem is at least as difficult as the general unstructured tensor problem, the symmetry allows us to simplify and clearly show the relation to the matrix structured low-rank approximation problem. By imposing linear equality constraints in the optimization problem, the proposed approach is applicable to unstructured tensors, as well as to affinely structured tensors. Therefore, it can be used to find (locally) optimal low multilinear rank approximation with a predefined structure. An advantage of the proposed approach is that it can deal with more difficult variations of the main problem, including having missing and fixed elements in the given tensor or approximating with respect to a weighted norm. The drawback is its higher computational cost, compared to existing algorithms, partially due to the generality of the approach.
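The low multilinear rank approximation problem the abstract refers to can be illustrated with the standard truncated HOSVD baseline (not the paper's structured-matrix reformulation); the following NumPy sketch, with hypothetical helper names `unfold` and `truncated_hosvd`, shows what "approximating a tensor with prescribed mode ranks" means:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: mode axis becomes rows, the rest are flattened."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def truncated_hosvd(T, ranks):
    """Truncated HOSVD: a quasi-optimal (not locally optimal) low
    multilinear rank approximation, shown here only to illustrate
    the problem setting, not the paper's method."""
    # Leading left singular vectors of each unfolding give the factors.
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    # Project onto the factor subspaces: core, then expand back.
    core = T
    for m, U in enumerate(factors):
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, m, 0), axes=1), 0, m)
    approx = core
    for m, U in enumerate(factors):
        approx = np.moveaxis(np.tensordot(U, np.moveaxis(approx, m, 0), axes=1), 0, m)
    return approx

rng = np.random.default_rng(1)
T = rng.standard_normal((5, 6, 7))
That = truncated_hosvd(T, (2, 2, 2))
print(That.shape)  # (5, 6, 7): same shape, multilinear rank at most (2, 2, 2)
```

Since the approximation is an orthogonal projection of `T`, the residual norm never exceeds the norm of `T` itself.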

### Coordinate-descent for learning orthogonal matrices through Givens rotations


Optimizing over the set of orthogonal matrices is a central component in problems like sparse-PCA or tensor decomposition. Unfortunately, such optimization is hard since simple operations on orthogonal matrices easily break orthogonality, and correcting orthogonality usually costs a large amount of computation. Here we propose a framework for optimizing orthogonal matrices, that is the parallel of coordinate-descent in Euclidean spaces. It is based on Givens-rotations, a fast-to-compute operation that affects a small number of entries in the learned matrix, and preserves orthogonality. We show two applications of this approach: an algorithm for tensor decompositions used in learning mixture models, and an algorithm for sparse-PCA. We study the parameter regime where a Givens rotation approach converges faster and achieves a superior model on a genome-wide brain-wide mRNA expression dataset.
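The key property the abstract relies on, that a Givens rotation touches only two columns and preserves orthogonality exactly, can be sketched as follows (a minimal illustration in NumPy, with the hypothetical helper `givens_apply`; the paper's actual update rule for choosing the angle is not shown):

```python
import numpy as np

def givens_apply(U, i, j, theta):
    """Apply a Givens rotation in the (i, j) coordinate plane to the
    columns of U. Only columns i and j change; orthogonality of U is
    preserved up to floating point, which is what makes this a valid
    coordinate-descent step on the orthogonal group."""
    c, s = np.cos(theta), np.sin(theta)
    V = U.copy()
    V[:, i] = c * U[:, i] - s * U[:, j]
    V[:, j] = s * U[:, i] + c * U[:, j]
    return V

# Start from a random orthogonal matrix and take one "coordinate" step.
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((4, 4)))
V = givens_apply(U, 0, 2, 0.3)
print(np.allclose(V.T @ V, np.eye(4)))  # True: still orthogonal
```

A full coordinate-descent sweep would loop over planes (i, j) and pick each angle to decrease the objective, with no orthogonality correction ever needed.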