Results

**1–3** of **3**

### Joint Tensor Factorization and Outlying Slab Suppression With Applications


Abstract—We consider factoring low-rank tensors in the presence of outlying slabs. This problem is important in practice, because data collected in many real-world applications, such as speech, fluorescence, and some social network data, fit this paradigm. Prior work tackles this problem by iteratively selecting a fixed number of slabs and fitting, a procedure which may not converge. We formulate this problem from a group-sparsity-promoting point of view, and propose an alternating optimization framework to handle the corresponding minimization-based low-rank tensor factorization problem. The proposed algorithm features a per-iteration complexity similar to that of the plain trilinear alternating least squares (TALS) algorithm. Convergence of the proposed algorithm is also easy to analyze under the framework of alternating optimization and its variants. In addition, regularization and constraints can be easily incorporated to make use of a priori information on the latent loading factors. Simulations and real-data experiments on blind speech separation, fluorescence data analysis, and social network mining are used to showcase the effectiveness of the proposed algorithm.

Index Terms—Canonical polyadic decomposition, group sparsity, iteratively reweighted, outliers, PARAFAC, robustness, tensor decomposition.
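To make the group-sparsity reweighting idea concrete, here is a minimal NumPy sketch of iteratively reweighted trilinear ALS: slabs along the third mode with large residuals get small weights via an ℓ_p surrogate w_k = (‖E_k‖² + ε)^(p/2 − 1), so outlying slabs barely influence the factor updates. The function names (`robust_tals`, `kr`) and the specific choices of `p` and `eps` are our own illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def kr(U, V):
    """Column-wise Khatri-Rao product: column r equals kron(U[:, r], V[:, r])."""
    return np.einsum('ir,jr->ijr', U, V).reshape(-1, U.shape[1])

def robust_tals(X, R, p=0.5, eps=1e-6, n_iter=100, seed=0):
    """Sketch of iteratively reweighted trilinear ALS for a rank-R CPD
    of X (I x J x K), down-weighting outlying third-mode slabs."""
    I, J, K = X.shape
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((I, R))
    B = rng.standard_normal((J, R))
    C = rng.standard_normal((K, R))
    X1 = X.reshape(I, J * K)                     # mode-1 unfolding
    X2 = X.transpose(1, 0, 2).reshape(J, I * K)  # mode-2 unfolding
    X3 = X.reshape(I * J, K)                     # slab k is column k
    w = np.ones(K)
    for _ in range(n_iter):
        sw = np.sqrt(w)
        Cw = C * sw[:, None]
        # weighted least-squares updates (each slab scaled by sqrt(w_k))
        A = np.linalg.lstsq(kr(B, Cw), (X1 * np.tile(sw, J)).T, rcond=None)[0].T
        B = np.linalg.lstsq(kr(A, Cw), (X2 * np.tile(sw, I)).T, rcond=None)[0].T
        M = kr(A, B)
        C = np.linalg.lstsq(M, X3, rcond=None)[0].T
        # slab-wise residuals drive the group-sparsity reweighting
        res = np.sum((X3 - M @ C.T) ** 2, axis=0)
        w = (res + eps) ** (p / 2 - 1)
    return A, B, C, w
```

Each sweep costs about the same as a plain TALS sweep: the only extra work is the slab-residual computation and the diagonal reweighting folded into the least-squares subproblems.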

### Robustness Analysis of Structured Matrix Factorization via Self-Dictionary Mixed-Norm Optimization

Abstract—We are interested in a low-rank matrix factorization problem where one of the matrix factors has a special structure; specifically, its columns live in the unit simplex. This problem finds applications in diverse areas such as hyperspectral unmixing, video summarization, spectrum sensing, and blind speech separation. Prior works showed that such a factorization problem can be formulated as a self-dictionary sparse optimization problem under some assumptions that are considered realistic in many applications, and convex mixed norms were employed as optimization surrogates to realize the factorization in practice. Numerical results have shown that the mixed-norm approach demonstrates promising performance. In this letter, we conduct a performance analysis of the mixed-norm approach under noise perturbations. Our result shows that using a convex mixed norm can indeed yield provably good solutions. More importantly, we also show that using nonconvex mixed (quasi-)norms is more advantageous in terms of robustness against noise.

Index Terms—Matrix factorization, performance analysis, self-dictionary sparse optimization.
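The self-dictionary idea can be sketched with a row-wise group lasso as the mixed-norm surrogate: minimize ½‖X − XC‖²_F + λ Σᵢ‖C_{i,:}‖₂ subject to C ≥ 0, so that rows of C with large norm flag the columns of X acting as dictionary atoms. This is a simplified stand-in (the letter's exact mixed (quasi-)norms and constraints differ), and `self_dict_group_lasso`, `lam`, and the prox details are illustrative assumptions.

```python
import numpy as np

def self_dict_group_lasso(X, lam=0.1, step=None, n_iter=300):
    """Proximal-gradient sketch of self-dictionary sparse optimization:
    minimize 0.5*||X - X C||_F^2 + lam * sum_i ||C[i, :]||_2,  C >= 0."""
    n = X.shape[1]
    G = X.T @ X
    if step is None:
        step = 1.0 / np.linalg.norm(G, 2)   # 1/L for the smooth part
    C = np.zeros((n, n))
    for _ in range(n_iter):
        grad = G @ C - G                    # gradient of 0.5*||X - XC||^2
        C = np.maximum(C - step * grad, 0.0)       # nonnegativity
        # row-wise group soft-thresholding (prox of the mixed norm)
        norms = np.linalg.norm(C, axis=1, keepdims=True)
        C *= np.maximum(1.0 - step * lam / np.maximum(norms, 1e-12), 0.0)
    return C
```

Replacing the row ℓ₂ norm with an ℓ_q quasi-norm (0 < q < 1) changes only the thresholding step; per the abstract, that nonconvex variant is the more noise-robust choice.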

### Principled Neuro-Functional Connectivity Discovery

How can we reverse-engineer the brain connectivity, given the input stimulus and the corresponding brain-activity measurements, for several experiments? We show how to solve the problem in a principled way, modeling the brain as a linear dynamical system (LDS), and solving the resulting “system identification” problem after imposing sparsity and non-negativity constraints on the appropriate matrices. These are reasonable assumptions in some applications, including magnetoencephalography (MEG). There are three contributions: (a) Proof: we prove that this simple condition resolves the ambiguity of similarity transformation in the LDS identification problem; (b) Algorithm: we propose an effective algorithm which further induces sparse connectivity in a principled way; and (c) Validation: our experiments on semi-synthetic (C. elegans) as well as real MEG data show that our method recovers the neural connectivity and leads to interpretable results.
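A toy version of the constrained identification step can be sketched as follows, assuming (for simplicity) that the state sequence is directly observed: fit x_{t+1} = A x_t + B u_t with A ≥ 0 and an ℓ₁ sparsity penalty on A via projected proximal gradient. The paper's setting (latent states, the similarity-transformation ambiguity) is richer; this sketch only illustrates how nonnegativity plus sparsity constrain the connectivity matrix, and `fit_sparse_nonneg_lds` with its parameters is a hypothetical helper.

```python
import numpy as np

def fit_sparse_nonneg_lds(Xs, U, lam=0.01, n_iter=500, step=None):
    """Sketch: estimate A (connectivity, n x n) and B (input gain, n x m)
    in x_{t+1} = A x_t + B u_t from an observed state sequence Xs (n x T)
    and inputs U (m x T), with A >= 0 and an l1 penalty on A."""
    Xp, Xn = Xs[:, :-1], Xs[:, 1:]       # x_t and x_{t+1}
    Z = np.vstack([Xp, U[:, :-1]])       # stacked regressors [x_t; u_t]
    n, m = Xs.shape[0], U.shape[0]
    if step is None:
        step = 1.0 / np.linalg.norm(Z @ Z.T, 2)   # 1/L
    W = np.zeros((n, n + m))             # W = [A, B]
    for _ in range(n_iter):
        grad = (W @ Z - Xn) @ Z.T        # gradient of 0.5*||W Z - Xn||_F^2
        W -= step * grad
        # prox step: soft-threshold the A block and project it to >= 0
        A = np.maximum(W[:, :n] - step * lam, 0.0)
        W = np.hstack([A, W[:, n:]])
    return W[:, :n], W[:, n:]
```

On noise-free simulated data the nonnegativity and sparsity constraints pin down A directly; resolving the similarity ambiguity when the states are latent is exactly what the paper's proof addresses.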