Results 1 - 9 of 9
Era of Big Data Processing: A New Approach via Tensor Networks and Tensor Decompositions, 2014
Cited by 6 (3 self)
Abstract: Modern applications such as computational neuroscience, neuroinformatics and pattern/image recognition generate massive amounts of multidimensional data with multiple aspects and high dimensionality. Big data require novel technologies to efficiently process massive datasets within tolerable elapsed times. One such emerging technology for multidimensional big data is multiway analysis via tensor networks (TNs) and tensor decompositions (TDs), which decompose tensors into sets of factor (component) matrices and low-order (core) tensors. Tensors (i.e., multiway arrays) provide a natural and compact representation for such massive multidimensional data via suitable low-rank approximations. Dynamic tensor analysis allows us to discover meaningful hidden structures of complex data and perform generalizations by capturing multilinear and multiaspect relationships. We discuss some emerging TN models, their mathematical and graphical descriptions, and associated learning algorithms for large-scale TDs and TNs, with many potential applications including anomaly detection, feature extraction, classification, cluster analysis, data fusion and integration, pattern recognition, predictive modeling, regression, time series analysis, and multiway component analysis.
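The factor-matrix-plus-core structure described in this abstract can be illustrated, for the Tucker family of decompositions, by a minimal truncated higher-order SVD in NumPy. This is a generic textbook sketch, not the paper's algorithm; the function name and the fixed multilinear ranks are illustrative choices.

```python
import numpy as np

def hosvd(T, ranks):
    """Truncated higher-order SVD: one factor matrix per mode plus a core tensor.

    T     : ndarray of order N
    ranks : multilinear ranks (r_1, ..., r_N), one per mode
    """
    factors = []
    for n, r in enumerate(ranks):
        # Mode-n unfolding: bring mode n to the front, flatten the rest.
        unfold = np.moveaxis(T, n, 0).reshape(T.shape[n], -1)
        U, _, _ = np.linalg.svd(unfold, full_matrices=False)
        factors.append(U[:, :r])  # leading left singular vectors of the unfolding
    core = T
    for n, U in enumerate(factors):
        # Contract U^T with the core along mode n.
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, n, 0), axes=1), 0, n)
    return core, factors
```

Reconstructing the tensor from the small core and the factor matrices is what makes the representation compact: storage drops from the full array to the core plus one thin matrix per mode.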
Tensor numerical methods for high-dimensional PDEs: Basic theory and initial applications, 2014
Line-search methods and rank increase on low-rank matrix varieties
Cited by 3 (1 self)
Abstract: Based on an explicit characterization of tangent cones, one can devise line-search methods to minimize functions on the variety of matrices with rank bounded by some fixed value, thereby extending Riemannian optimization techniques from the smooth manifold of fixed rank to its closure. This allows for a rank-adaptive optimization strategy in which locally optimal solutions of some smaller rank are used as a starting point for an improved approximation with a larger rank. Contrary to optimization on the smooth manifold of fixed-rank matrices, no special treatment is needed for rank-deficient matrices when optimizing on the variety. Hence, this gives a sound theoretical framework for the analysis of rank-increasing greedy algorithms, which can be more efficient than starting the calculations with a large but fixed rank.
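The rank-increasing greedy idea can be mimicked, in its simplest non-Riemannian form, by projected gradient descent on the rank-bounded variety, where the projection is a truncated SVD and the rank is grown whenever progress stalls. All names, the step size, and the stall heuristic below are illustrative assumptions, not the authors' method:

```python
import numpy as np

def truncate(X, r):
    # Metric projection onto the variety of matrices with rank <= r.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :r] * s[:r] @ Vt[:r]

def rank_adaptive_descent(grad, X0, r, r_max, step=0.5, iters=500, tol=1e-8):
    """Projected gradient on the rank-<=r variety; grow r when progress stalls."""
    X = truncate(X0, r)
    prev = np.inf
    for _ in range(iters):
        X = truncate(X - step * grad(X), r)
        g = np.linalg.norm(grad(X))
        if g < tol:
            break
        if prev - g < 1e-12 and r < r_max:  # stalled at this rank: increase it
            r += 1
        prev = g
    return X, r
```

The low-rank solution found at rank r serves as the warm start at rank r + 1, which is exactly the rank-increasing strategy the abstract motivates.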
Tensor Networks for Big Data Analytics and Large-Scale Optimization Problems, 2014
Cited by 2 (0 self)
Abstract: Tensor decompositions and tensor networks are emerging and promising tools for data analysis and data mining. In this paper we review basic and emerging models and associated algorithms for large-scale tensor networks, especially Tensor Train (TT) decompositions, using novel mathematical and graphical representations. We discuss the concept of tensorization (i.e., creating very high-order tensors from lower-order original data) and the super-compression of data achieved via quantized tensor train (QTT) networks. The main objective of this paper is to show how tensor networks can be used to solve a wide class of big data optimization problems (that are far from tractable by classical numerical methods) by applying tensorization and performing all operations on relatively small matrices and tensors, using iteratively optimized, approximate tensor contractions.
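A TT decomposition of the kind reviewed above can be computed from a full tensor by a sequence of reshapes and truncated SVDs (the classical TT-SVD scheme). The minimal NumPy sketch below assumes the tensor fits in memory, which defeats the purpose at scale but shows the core-by-core structure; the tolerance-based rank choice is an illustrative convention:

```python
import numpy as np

def tt_svd(T, eps=1e-10):
    """Decompose a full tensor into a tensor train via sequential truncated SVDs.

    Returns a list of order-3 cores G_k of shape (r_{k-1}, n_k, r_k),
    with boundary ranks r_0 = r_d = 1.
    """
    shape = T.shape
    d = len(shape)
    cores, r = [], 1
    C = T.reshape(r * shape[0], -1)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        # Keep singular values above a relative tolerance (rank truncation).
        rank = max(1, int(np.sum(s > eps * s[0])))
        cores.append(U[:, :rank].reshape(r, shape[k], rank))
        r = rank
        # Fold the remainder and absorb the next mode for the following split.
        C = (s[:rank, None] * Vt[:rank]).reshape(r * shape[k + 1], -1)
    cores.append(C.reshape(r, shape[-1], 1))
    return cores
```

Each core couples one mode of the original tensor to its neighbors through the TT ranks, which is what keeps every intermediate object small when the ranks are low.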
On low-rank approximability of solutions to high-dimensional operator equations and . . ., 2014
Very Large-Scale Singular Value Decomposition Using Tensor Train Networks, 2014
Cited by 1 (1 self)
Abstract: We propose new algorithms for singular value decomposition (SVD) of very large-scale matrices based on a low-rank tensor approximation technique called the tensor train (TT) format. The proposed algorithms can compute several dominant singular values and corresponding singular vectors for large-scale structured matrices given in a TT format. The computational complexity of the proposed methods scales logarithmically with the matrix size under the assumption that both the matrix and the singular vectors admit low-rank TT decompositions. The proposed methods, called the alternating least squares for SVD (ALS-SVD) and modified alternating least squares for SVD (MALS-SVD), compute the left and right singular vectors approximately through block TT decompositions. The very large-scale optimization problem is reduced to sequential small-scale optimization problems, and each core tensor of the block TT decompositions can be updated by applying any standard optimization method. The optimal ranks of the block TT decompositions are determined adaptively during the iteration process, so that high approximation accuracy can be achieved. Extensive numerical simulations are conducted for several types of TT-structured matrices such as Hilbert matrices, Toeplitz matrices, random matrices with prescribed singular values, and tridiagonal matrices. The simulation results demonstrate the effectiveness of the proposed methods compared with standard SVD algorithms and TT-based algorithms developed for symmetric eigenvalue decomposition.
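The alternating-least-squares idea behind ALS-SVD can be seen, stripped of all TT machinery, in the classical alternating update for a single dominant singular triplet: fixing v, the best rank-1 fit min ||A - s u v^T||_F gives u ∝ Av, and vice versa. The plain NumPy sketch below is this small-scale analogue of updating one block at a time, not the paper's TT algorithm:

```python
import numpy as np

def dominant_svd_als(A, iters=200, tol=1e-12):
    """Alternating least squares for the dominant singular triplet (u, s, v) of A."""
    rng = np.random.default_rng(0)
    v = rng.standard_normal(A.shape[1])
    v /= np.linalg.norm(v)
    s = 0.0
    for _ in range(iters):
        u = A @ v                 # optimal left vector for fixed v
        u /= np.linalg.norm(u)
        v = A.T @ u               # optimal right vector for fixed u
        s_new = np.linalg.norm(v)
        v /= s_new
        if abs(s_new - s) < tol:  # singular value estimate has converged
            break
        s = s_new
    return u, s_new, v
```

In the TT setting, each "solve for one factor with the others fixed" step becomes a small local least-squares problem on a single core tensor, but the alternation pattern is the same.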
Real Time Big Data Analytics Dependence on Network Monitoring Solutions using Tensor Networks and its Decomposition
Abstract: Organizations dealing with huge volumes of data must have a big data infrastructure in place that can accommodate the load of storing, analysing and transporting the data. Suboptimal network performance represents a potential point of failure. Therefore, it is essential to implement redundancy and/or a failover strategy in order to minimize downtime. With network monitoring, we know the status of everything on the network without having to watch it personally, and can take timely action to correct problems. But to the extent that companies increase their reliance on real-time streams of marketing and performance big data, the network will become a central part of big data application performance. This is why incorporating network monitoring should be on a company's big data road map if it anticipates using live streaming and analytics of big data in business applications.
unknown title, 2015
Abstract: Low-rank solvers for the unsteady Stokes-Brinkman optimal control problem with random data
Preconditioned low-rank Riemannian optimization for linear systems with tensor product structure, 2015
Abstract: The numerical solution of partial differential equations on high-dimensional domains gives rise to computationally challenging linear systems. When using standard discretization techniques, the size of the linear system grows exponentially with the number of dimensions, making the use of classical iterative solvers infeasible. During the last few years, low-rank tensor approaches have been developed that allow one to mitigate this curse of dimensionality by exploiting the underlying structure of the linear operator. In this work, we focus on tensors represented in the Tucker and tensor train formats. We propose two preconditioned gradient methods on the corresponding low-rank tensor manifolds: a Riemannian version of the preconditioned Richardson method as well as an approximate Newton scheme based on the Riemannian Hessian. For the latter, considerable attention is given to the efficient solution of the resulting Newton equation. In numerical experiments, we compare the efficiency of our Riemannian algorithms with other established tensor-based approaches such as a truncated preconditioned Richardson method and the alternating linear scheme. The results show that our approximate Riemannian Newton scheme is significantly faster in cases where the application of the linear operator is expensive.
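The truncated Richardson baseline mentioned in the abstract can be illustrated in the simplest two-dimensional case, where a Kronecker-structured system (I ⊗ A1 + A2 ⊗ I)x = b becomes A1 X + X A2^T = B for the matricized unknown, and each Richardson step is followed by an SVD truncation back to low rank. The preconditioner is taken as the identity for brevity, and the matrices, step size, and tolerances below are illustrative assumptions:

```python
import numpy as np

def truncate(X, tol):
    # Truncated SVD: drop singular values below a relative tolerance.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    r = max(1, int(np.sum(s > tol * s[0])))
    return U[:, :r] * s[:r] @ Vt[:r]

def truncated_richardson(A1, A2, B, omega, iters=800, rank_tol=1e-8):
    """Richardson iteration X <- X + omega*(B - A1 X - X A2^T),
    truncated back to low rank after every step."""
    X = np.zeros_like(B)
    for _ in range(iters):
        R = B - A1 @ X - X @ A2.T   # residual of the matricized system
        X = truncate(X + omega * R, rank_tol)
    return X
```

The truncation keeps the iterate's storage proportional to its rank rather than the full grid size; in d dimensions the same idea is applied with Tucker or TT truncation instead of a matrix SVD.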