Results 1–10 of 15
Tensor Decompositions for Signal Processing Applications: From Two-Way to Multiway Component Analysis
ESAT-STADIUS Internal Report, 2014
Cited by 10 (1 self)
Abstract:
The widespread use of multisensor technology and the emergence of big data sets have highlighted the limitations of standard flat-view matrix models and the necessity to move towards more versatile data analysis tools. We show that higher-order tensors (i.e., multiway arrays) enable such a fundamental paradigm shift towards models that are essentially polynomial and whose uniqueness, unlike the matrix methods, is guaranteed under very mild and natural conditions. Benefiting from the power of multilinear algebra as their mathematical backbone, data analysis techniques using tensor decompositions are shown to have great flexibility in the choice of constraints that match data properties, and to find more general latent components in the data than matrix-based methods. A comprehensive introduction to tensor decompositions is provided from a signal processing perspective, starting from the algebraic foundations, via basic Canonical Polyadic and Tucker models, through to advanced cause-effect and multiview data analysis schemes. We show that tensor decompositions enable natural generalizations of some commonly used signal processing paradigms, such as canonical correlation and subspace techniques, signal separation, linear regression, feature extraction and classification. We also cover computational aspects, and point out how ideas from compressed sensing and scientific computing may be used for addressing the otherwise unmanageable storage and manipulation problems associated with big data sets. The concepts are supported by illustrative real-world case studies illuminating the benefits of the tensor framework, as efficient and promising tools for modern signal processing, data analysis and machine learning applications; these benefits also extend to vector/matrix data through tensorization.
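As a concrete illustration of the basic Canonical Polyadic model mentioned in the abstract, the sketch below fits a rank-one CP decomposition by alternating least squares. The ALS variant, sizes, and variable names are illustrative and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
# Build an exactly rank-one 3rd-order tensor T = a ∘ b ∘ c.
a, b, c = rng.standard_normal(4), rng.standard_normal(5), rng.standard_normal(6)
T = np.einsum('i,j,k->ijk', a, b, c)

# Alternating least squares: each factor update has a closed form
# when the other two factors are held fixed.
u, v, w = rng.standard_normal(4), rng.standard_normal(5), rng.standard_normal(6)
for _ in range(50):
    u = np.einsum('ijk,j,k->i', T, v, w) / ((v @ v) * (w @ w))
    v = np.einsum('ijk,i,k->j', T, u, w) / ((u @ u) * (w @ w))
    w = np.einsum('ijk,i,j->k', T, u, v) / ((u @ u) * (v @ v))

err = np.linalg.norm(T - np.einsum('i,j,k->ijk', u, v, w)) / np.linalg.norm(T)
```

For an exactly rank-one tensor and generic initialization, one ALS sweep already aligns each factor with the truth, so `err` is at machine precision.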
Nonnegative matrix factorization revisited: Uniqueness and algorithm for symmetric decomposition
IEEE Trans. Signal Processing, 2014
Cited by 8 (2 self)
Abstract:
Nonnegative matrix factorization (NMF) has found numerous applications, due to its ability to provide interpretable decompositions. Perhaps surprisingly, existing results regarding its uniqueness properties are rather limited, and there is much room for improvement in terms of algorithms as well. Uniqueness aspects of NMF are revisited here from a geometrical point of view. Both symmetric and asymmetric NMF are considered, the former being tantamount to elementwise nonnegative square-root factorization of positive semidefinite matrices. New uniqueness results are derived, e.g., it is shown that a sufficient condition for uniqueness is that the conic hull of the latent factors is a superset of a particular second-order cone. Checking this condition is shown to be NP-complete; yet this and other results offer insights on the role of latent sparsity in this context. On the computational side, a new algorithm for symmetric NMF is proposed, which is very different from existing ones. It alternates between Procrustes rotation and projection onto the nonnegative orthant to find a nonnegative matrix close to the span of the dominant subspace. Simulation results show promising performance with respect to the state of the art. Finally, the new algorithm is applied to a clustering problem for co-authorship data, yielding meaningful and interpretable results.
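The alternation described above can be sketched as follows. This is an illustrative reconstruction, not the authors' exact algorithm: a square root of the dominant subspace is rotated by the orthogonal Procrustes solution toward the current nonnegative iterate, which is then re-projected onto the nonnegative orthant.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.uniform(size=(8, 3))          # hypothetical nonnegative ground truth
M = W @ W.T                           # PSD matrix to factor as H @ H.T with H >= 0

# Any B with B @ B.T == M spans the dominant subspace; rotations of B are free.
vals, vecs = np.linalg.eigh(M)
B = vecs[:, -3:] * np.sqrt(vals[-3:])

H = np.maximum(B, 0.0)
for _ in range(300):
    # Procrustes step: orthogonal Q minimizing ||B Q - H||_F is U @ Vt
    # from the SVD of B.T @ H.
    U, _, Vt = np.linalg.svd(B.T @ H)
    # Projection step: clip the rotated factor to the nonnegative orthant.
    H = np.maximum(B @ (U @ Vt), 0.0)

err = np.linalg.norm(M - H @ H.T) / np.linalg.norm(M)
```

Since both steps exactly minimize `||B Q - H||_F` over their own variable, the objective is monotonically non-increasing; for strictly positive ground truth the iterate typically recovers an exact nonnegative square root.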
On Convergence of the Maximum Block Improvement Method
Cited by 5 (3 self)
Abstract:
The MBI (maximum block improvement) method is a greedy approach to solving optimization problems where the decision variables can be grouped into a finite number of blocks. Assuming that optimizing over one block of variables while fixing all others is relatively easy, the MBI method updates the maximally improving block of variables at each iteration, which is arguably the most natural and simple process for tackling block-structured problems, with great potential for engineering applications. In this paper we establish global and local linear convergence results for this method. The global convergence is established under the Łojasiewicz inequality assumption, while the local analysis invokes second-order assumptions. We study in particular the tensor optimization model with spherical constraints. Conditions for linear convergence of the famous power method for computing the maximum eigenvalue of a matrix follow in this framework as a special case. The condition is interpreted in various other forms for the rank-one tensor optimization model under spherical constraints. Numerical experiments are presented to support the convergence properties of the MBI method.
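A minimal sketch of the MBI update rule on the matrix special case the abstract alludes to: two unit-sphere blocks with a bilinear objective, whose maximizer is the top singular pair. The problem sizes and iteration count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((6, 4))

# Two blocks x, y on unit spheres; objective f(x, y) = x @ M @ y.
# Each block subproblem has a closed-form best response.
x = rng.standard_normal(6); x /= np.linalg.norm(x)
y = rng.standard_normal(4); y /= np.linalg.norm(4 * [0.5]) if False else np.linalg.norm(y)

for _ in range(200):
    fx = np.linalg.norm(M @ y)       # objective value if only x is re-optimized
    fy = np.linalg.norm(M.T @ x)     # objective value if only y is re-optimized
    if fx >= fy:
        x = M @ y / fx               # update only the maximally improving block
    else:
        y = M.T @ x / fy

sigma1 = np.linalg.svd(M, compute_uv=False)[0]
```

From a generic initialization this greedy block ascent converges, as in the power-method special case, to the largest singular value `sigma1`.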
Tensor Principal Component Analysis via Convex Optimization
2012
Cited by 4 (2 self)
Abstract:
This paper is concerned with the computation of the principal components for a general tensor, known as the tensor principal component analysis (PCA) problem. We show that the general tensor PCA problem is reducible to its special case where the tensor in question is supersymmetric with an even degree. In that case, the tensor can be embedded into a symmetric matrix. We prove that if the tensor is rank-one, then the embedded matrix must be rank-one too, and vice versa. The tensor PCA problem can thus be solved by means of matrix optimization under a rank-one constraint, for which we propose two solution methods: (1) imposing a nuclear norm penalty in the objective to enforce a low-rank solution; (2) relaxing the rank-one constraint by Semidefinite Programming. Interestingly, our experiments show that both methods yield a rank-one solution with high probability, thereby solving the original tensor PCA problem to optimality with high probability. To further cope with the size of the resulting convex optimization models, we propose to use the alternating direction method of multipliers, which significantly reduces the computational effort. Various extensions of the model are considered as well.
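The rank-one direction of the embedding argument can be checked numerically. The sketch below uses the square unfolding of a 4th-order supersymmetric tensor as the matrix embedding; the construction is a hypothetical illustration, not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.standard_normal(5)
# Supersymmetric rank-one 4th-order tensor T = x ⊗ x ⊗ x ⊗ x.
T = np.einsum('i,j,k,l->ijkl', x, x, x, x)

# Square unfolding: group the first two and last two modes.
M = T.reshape(25, 25)
# M equals vec(x x^T) vec(x x^T)^T, so the embedding of a rank-one
# tensor is a rank-one symmetric matrix whose only nonzero singular
# value is ||x||^4.
rank = np.linalg.matrix_rank(M)
sigma1 = np.linalg.svd(M, compute_uv=False)[0]
```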
Blind Separation of Quasi-Stationary Sources: Exploiting Convex Geometry in Covariance Domain
2015
Cited by 3 (3 self)
Abstract:
This paper revisits blind source separation of instantaneously mixed quasi-stationary sources (BSS-QSS), motivated by the observation that in certain applications (e.g., speech) there exist time frames during which only one source is active, or locally dominant. Combined with nonnegativity of source powers, this endows the problem with a nice convex geometry that enables elegant and efficient BSS solutions. Local dominance is tantamount to the so-called pure pixel/separability assumption in hyperspectral unmixing/nonnegative matrix factorization, respectively. Building on this link, a very simple algorithm called successive projection algorithm (SPA) is considered for estimating the mixing system in closed form. To complement SPA in the specific BSS-QSS context, an algebraic preprocessing procedure is proposed to suppress short-term source cross-correlation interference. The proposed procedure is simple, effective, and supported by theoretical analysis. Solutions based on volume minimization (VolMin) are also considered. By theoretical analysis, it is shown that VolMin guarantees perfect mixing system identifiability under an assumption more relaxed than (exact) local dominance, which means wider applicability in practice. Exploiting the specific structure of BSS-QSS, a fast VolMin algorithm is proposed for the overdetermined case. Careful simulations using real speech sources showcase the simplicity, efficiency, and accuracy of the proposed algorithms.
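A minimal sketch of SPA under the pure-pixel/local-dominance assumption, on synthetic data: the mixed columns lie on the simplex spanned by the mixing matrix, and SPA repeatedly picks the column with the largest residual norm, then projects that direction out. The data model and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.uniform(1, 2, size=(6, 3))       # hypothetical mixing matrix
S = rng.uniform(size=(3, 40))
S /= S.sum(axis=0)                       # source-weight columns on the simplex
S[:, :3] = np.eye(3)                     # three "locally dominant" frames: pure columns
X = A @ S                                # observed data

R = X.copy()
picked = []
for _ in range(3):
    # The max-norm column of the residual is an extreme point (a pure column).
    j = int(np.argmax(np.linalg.norm(R, axis=0)))
    picked.append(j)
    u = R[:, j] / np.linalg.norm(R[:, j])
    R = R - np.outer(u, u @ R)           # project out the selected direction

print(sorted(picked))  # → [0, 1, 2]
```

With separability holding exactly, the selected indices are the pure columns, and the corresponding columns of X recover the mixing matrix up to scaling.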
On New Classes of Nonnegative Symmetric Tensors
2014
Cited by 1 (1 self)
Abstract:
In this paper we introduce three new classes of nonnegative forms (or, equivalently, symmetric tensors) and their extensions. The newly identified nonnegative symmetric tensors constitute distinctive convex cones in the space of general symmetric tensors (order 6 or above). For the special case of quartic forms, they collapse into the set of convex quartic homogeneous polynomial functions. We discuss the properties and applications of the new classes of nonnegative symmetric tensors in the context of polynomial and tensor optimization.
Key words: symmetric tensors, nonnegative forms, polynomial and tensor optimization
AMS subject classifications: 15A69, 12Y05, 90C26
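The form/tensor correspondence the abstract relies on can be illustrated with a toy sketch: the quartic form of a symmetric 4th-order tensor T is f(x) = ⟨T, x⊗x⊗x⊗x⟩. Here T is the symmetrization of δ_ij δ_kl, whose form is the convex (hence nonnegative) quartic (xᵀx)²; the example is hypothetical, not from the paper.

```python
import numpy as np
from itertools import permutations

n = 3
# Unsymmetrized tensor G_ijkl = δ_ij δ_kl; averaging over all 24
# index permutations yields a symmetric tensor with the same form.
G = np.einsum('ij,kl->ijkl', np.eye(n), np.eye(n))
T = sum(np.transpose(G, p) for p in permutations(range(4))) / 24.0

x = np.random.default_rng(6).standard_normal(n)
# Evaluate the quartic form f(x) = <T, x ⊗ x ⊗ x ⊗ x>.
f = np.einsum('ijkl,i,j,k,l->', T, x, x, x, x)
```

Since x⊗x⊗x⊗x is itself symmetric, symmetrizing G does not change the form, and f equals (x @ x)**2 for every x.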
Ambiguity Function Shaping for Cognitive Radar via Complex Quartic Optimization
Cited by 1 (1 self)
Abstract:
In this paper, we propose a cognitive approach to design phase-only modulated waveforms sharing a desired range-Doppler response. The idea is to minimize the average value of the ambiguity function of the transmitted signal over some range-Doppler bins, which are identified exploiting a plurality of knowledge sources. From a technical point of view, this is tantamount to optimizing a real and homogeneous complex quartic-order polynomial with a constant-modulus constraint on each optimization variable. After proving some interesting properties of the considered problem, we devise a polynomial-time waveform optimization procedure based on the Maximum Block Improvement (MBI) method and the theory of conjugate-partial-symmetric/conjugate-super-symmetric fourth-order tensors. At the analysis stage, we assess the performance of the proposed technique, showing its capability to properly shape the range-Doppler response of the transmitted waveform.
Index Terms: Cognitive radar, radar waveform optimization, maximum block improvement method, complex tensor optimization.
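A toy sketch of the quartic structure described above, using one common discrete ambiguity-function definition; the code length, bins, and function names are hypothetical, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 16
s = np.exp(1j * rng.uniform(0, 2 * np.pi, N))   # phase-only (constant-modulus) code

def af(s, k, nu):
    """Squared magnitude of a discrete ambiguity function at delay k
    (samples) and Doppler nu (cycles/sample)."""
    n = np.arange(len(s) - k)
    return np.abs(np.sum(s[n + k] * np.conj(s[n]) * np.exp(2j * np.pi * nu * n))) ** 2

# The shaping objective: average AF over undesired range-Doppler bins.
# Each term is |quadratic in s|^2, i.e. quartic in the code, minimized
# subject to |s[n]| = 1 for all n.
bins = [(1, 0.10), (2, 0.20), (3, 0.00)]        # hypothetical bins to suppress
objective = np.mean([af(s, k, nu) for k, nu in bins])
```

At the origin bin, `af(s, 0, 0.0)` reduces to |Σ|s[n]|²|² = N² for any constant-modulus code, which is why shaping can only redistribute, not remove, ambiguity volume.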
Approximation Algorithms for Discrete Polynomial Optimization
AN INITIAL STUDY
2013
Abstract:
The product of a dense tensor with a vector in every mode except one, called a tensor-vector product, is a key operation in several algorithms for computing the canonical tensor decomposition. In these applications, it is even more common to compute a tensor-vector product with the same tensor and r concurrently available sets of vectors, an operation we refer to as a multiple-vector tensor-vector product (MTVP). Current techniques for implementing these operations rely on explicitly reordering the elements of the tensor in order to leverage available matrix libraries. This approach has two significant disadvantages: reordering the data can be expensive if only a small number of concurrent sets of vectors is available in the MTVP, and it requires excessive amounts of additional memory. In this work, we consider two techniques for resolving these issues. Successive contractions are proposed to eliminate explicit data reordering, while blocking tackles the excessive memory consumption. Numerical experiments on a wide variety of tensor shapes indicate the effectiveness of these optimizations, clearly illustrating that the additional memory consumption can be limited to tolerable amounts, generally without sacrificing expeditious execution. For several fourth-order tensors, the additional memory requirements were three orders of magnitude smaller than those of competing implementations, while throughputs upwards of 75% of the peak performance of the computer system can be attained for large values of r.
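A minimal sketch of successive contractions for an MTVP on a 3rd-order tensor, assuming mode 0 is the free mode. The einsum realization and all names are illustrative, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(5)
T = rng.standard_normal((10, 11, 12))   # dense 3rd-order tensor, mode 0 left free
r = 4
V = rng.standard_normal((11, r))        # r concurrent vector sets for mode 1
W = rng.standard_normal((12, r))        # r concurrent vector sets for mode 2

# Baseline: r independent tensor-vector products, one per vector set.
naive = np.stack([np.einsum('ijk,j,k->i', T, V[:, t], W[:, t])
                  for t in range(r)], axis=1)

# Successive contractions: contract one mode at a time for all r sets at
# once, without materializing a permuted copy of T in user code.
tmp = np.einsum('ijk,kt->ijt', T, W)    # contract mode 2 against all r vectors
out = np.einsum('ijt,jt->it', tmp, V)   # then mode 1, matching set index t
```

Both computations produce the same 10 × r result; the successive-contraction form reuses each intermediate across all r sets, which is the source of the throughput gains the abstract reports.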