Results 1 - 10 of 163
Linear Algebra Operators for GPU Implementation of Numerical Algorithms
ACM Transactions on Graphics, 2003
Cited by 324 (9 self)
Abstract: "In this work, the emphasis is on the development of strategies to realize techniques of numerical computing on the graphics chip. In particular, the focus is on the acceleration of techniques for solving sets of algebraic equations as they occur in numerical simulation. We introduce a framework for ..."
Tensor-Matrix Products with a Compressed Sparse Tensor
2015
Abstract: "The Canonical Polyadic Decomposition (CPD) of tensors is a powerful tool for analyzing multiway data and is used extensively to analyze very large and extremely sparse datasets. The bottleneck of computing the CPD is multiplying a sparse tensor by several dense matrices. Algorithms for tensor-matr ..."
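The bottleneck kernel this abstract names, multiplying a sparse tensor by several dense factor matrices inside CPD algorithms, can be sketched in a few lines. This is a minimal illustration using a plain coordinate format, not the paper's compressed layout; all names and the tiny example data are hypothetical.

```python
# Mode-1 sparse tensor-matrix product (the CPD bottleneck kernel),
# sketched over a coordinate-format sparse 3-way tensor:
# for each nonzero X[i,j,k], accumulate val * (B[j,:] * C[k,:]) into M[i,:].

def mttkrp_mode1(coords, vals, B, C, dims, rank):
    """Return M with M[i][r] = sum over nonzeros (i,j,k) of v * B[j][r] * C[k][r]."""
    M = [[0.0] * rank for _ in range(dims[0])]
    for (i, j, k), v in zip(coords, vals):
        Bj, Ck, Mi = B[j], C[k], M[i]
        for r in range(rank):
            Mi[r] += v * Bj[r] * Ck[r]
    return M

# Tiny example: a 2x2x2 tensor with two nonzeros, rank-2 factors.
coords = [(0, 0, 0), (1, 1, 1)]
vals = [2.0, 3.0]
B = [[1.0, 2.0], [3.0, 4.0]]
C = [[5.0, 6.0], [7.0, 8.0]]
M = mttkrp_mode1(coords, vals, B, C, (2, 2, 2), 2)
# M[0] = [2*1*5, 2*2*6] = [10.0, 24.0]
# M[1] = [3*3*7, 3*4*8] = [63.0, 96.0]
```

Only the nonzeros are visited, which is why compressed sparse tensor formats (the paper's subject) matter: the walk over `coords` dominates the cost of each CPD iteration.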
Linear algebra for tensor problems
Computing
Cited by 9 (3 self)
Abstract: "By a tensor problem in general, we mean one where all the data on input and output are given (exactly or approximately) in tensor formats, the number of data representation parameters being much smaller than the total amount of data. For such problems, it is natural to seek for algorithms ..."
Keywords: ..., Tucker decomposition, tensor approximations, low-rank approximations, skeleton decompositions, dimensionality reduction, data compression, large-scale matrices, data-sparse methods.
TurboSMT: Accelerating Coupled Sparse Matrix-Tensor Factorizations by 200x
Cited by 1 (0 self)
Abstract: "How can we correlate the neural activity in the human brain as it responds to typed words, with properties of these terms (like 'edible', 'fits in hand')? In short, we want to find latent variables, that jointly explain both the brain activity, as well as the behavioral responses. This is one of many settings of the Coupled Matrix-Tensor Factorization (CMTF) problem. Can we accelerate any CMTF solver, so that it runs within a few minutes instead of tens of hours to a day, while maintaining good accuracy? We introduce TurboSMT, a meta-method capable of doing exactly that: it boosts ..."
On Accelerated Hard Thresholding Methods for Sparse Approximation
2011
Cited by 11 (3 self)
Abstract: "We propose and analyze acceleration schemes for hard thresholding methods with applications to sparse approximation in linear inverse systems. Our acceleration schemes fuse combinatorial, sparse projection algorithms with convex optimization algebra to provide computationally efficient and robust sp ..."
Accelerated Online Low-Rank Tensor Learning for Multivariate Spatio-Temporal Streams
Cited by 1 (1 self)
Abstract: "Low-rank tensor learning has many applications in machine learning. A series of batch learning algorithms have achieved great successes. However, in many emerging applications, such as climate data analysis, we are confronted with large-scale tensor streams, which pose significant challenges to existing solutions. In this paper, we propose an accelerated online low-rank tensor learning algorithm (ALTO) to solve the problem. At each iteration, we project the current tensor to a low-dimensional tensor, using the information of the previous low-rank tensor, in order to perform efficient tensor ..."
Higher-Order Web Link Analysis Using Multilinear Algebra
IEEE International Conference on Data Mining, 2005
Cited by 69 (18 self)
Abstract: "Linear algebra is a powerful and proven tool in web search. Techniques, such as the PageRank algorithm of Brin and Page and the HITS algorithm of Kleinberg, score web pages based on the principal eigenvector (or singular vector) of a particular nonnegative matrix that captures the hyperlink structure ... representation is a sparse, three-way tensor. The first two dimensions of the tensor represent the web pages while the third dimension adds the anchor text. We then use the rank-1 factors of a multilinear PARAFAC tensor decomposition, which are akin to singular vectors of the SVD, to automatically identify ..."
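The principal-eigenvector scoring this snippet refers to (the PageRank/HITS starting point, before the paper's tensor extension) reduces to power iteration on a small nonnegative link matrix. A minimal sketch with a hypothetical 3-page matrix; the function name and data are illustrative:

```python
# Power iteration: repeatedly apply the link matrix and renormalize;
# the iterate converges to the principal eigenvector, whose entries
# serve as page scores.

def power_iteration(A, iters=100):
    n = len(A)
    v = [1.0 / n] * n          # uniform starting vector
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        s = sum(w)             # L1 renormalization keeps scores a distribution
        v = [x / s for x in w]
    return v

# Toy 3-page nonnegative link matrix (hypothetical example).
A = [[0.0, 0.0, 1.0],
     [1.0, 0.0, 1.0],
     [1.0, 1.0, 0.0]]
scores = power_iteration(A)
```

By the symmetry of this particular matrix, pages 1 and 2 end up with equal scores, both above page 0; convergence is guaranteed here because the matrix is nonnegative and irreducible with aperiodic cycle structure.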
Dataflow acceleration of Krylov subspace sparse banded problems
Abstract: "Most of the efforts in the FPGA community related to sparse linear algebra focus on increasing the degree of internal parallelism in matrix-vector multiply kernels. We propose a parametrisable dataflow architecture presenting an alternative and complementary approach to support acceleration of banded sparse linear algebra problems which benefit from building a Krylov subspace. We use the banded structure of a matrix A to overlap the computations Ax, A^2 x, ..., A^k x by building a pipeline of matrix-vector multiplication processing elements (PEs), each performing A^i x. Due to on ..."
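What each PE in the pipeline computes is one step of the Krylov sequence x, Ax, A^2 x, ..., A^k x, with A held in banded (diagonal) storage. The pipeline itself is FPGA hardware and is not modelled here; this is only a software sketch of the sequence, with a hypothetical diagonal-storage convention (`diags[d][i]` multiplies `x[i + offsets[d]]`):

```python
# Banded matrix-vector product: A is given as a list of diagonals,
# each with an offset (0 = main, +1 = super-, -1 = sub-diagonal).

def banded_matvec(diags, offsets, x):
    n = len(x)
    y = [0.0] * n
    for diag, off in zip(diags, offsets):
        for i in range(n):
            j = i + off
            if 0 <= j < n:          # skip entries that fall off the band
                y[i] += diag[i] * x[j]
    return y

def krylov_sequence(diags, offsets, x, k):
    """Return [x, Ax, A^2 x, ..., A^k x]."""
    seq = [x]
    for _ in range(k):
        seq.append(banded_matvec(diags, offsets, seq[-1]))
    return seq

# Tridiagonal example: A = [[2,-1,0],[-1,2,-1],[0,-1,2]], x = ones.
diags = [[2.0, 2.0, 2.0], [-1.0, -1.0, -1.0], [-1.0, -1.0, -1.0]]
offsets = [0, 1, -1]
seq = krylov_sequence(diags, offsets, [1.0, 1.0, 1.0], k=2)
# seq[1] = Ax = [1.0, 0.0, 1.0]; seq[2] = A^2 x = [2.0, -2.0, 2.0]
```

The data dependence (each A^i x feeds A^(i+1) x) is exactly what the paper's PE pipeline overlaps in hardware: PE i can begin consuming the leading entries of A^(i-1) x before that vector is complete, because the band limits how far ahead each output element looks.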
ALGEBRAIC ALGORITHMS
2012
Abstract: "This is a preliminary version of a Chapter on Algebraic Algorithms in the up ..."
A New Approach for Accelerating the Sparse Matrix-vector Multiplication
Cited by 1 (0 self)
Abstract: "Sparse matrix-vector multiplication (shortly SpMV) is one of the most common subroutines in numerical linear algebra. The problem is that the memory access patterns during the SpMV are irregular and the utilization of cache can suffer from low spatial or temporal locality. Approaches to improve the ..."
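The baseline kernel under discussion, SpMV with the matrix in standard CSR storage (row pointers, column indices, values), fits in a few lines; the indirect `x[cols[t]]` reads are the irregular, data-dependent accesses the abstract blames for poor cache locality. A minimal sketch with hypothetical example data:

```python
# CSR SpMV: y = A @ x with A stored as (row_ptr, cols, vals).
# Row i's nonzeros occupy positions row_ptr[i] .. row_ptr[i+1]-1.

def spmv_csr(row_ptr, cols, vals, x):
    n = len(row_ptr) - 1
    y = [0.0] * n
    for i in range(n):
        acc = 0.0
        for t in range(row_ptr[i], row_ptr[i + 1]):
            acc += vals[t] * x[cols[t]]  # irregular, data-dependent read of x
        y[i] = acc
    return y

# A = [[4, 0, 1],
#      [0, 2, 0],
#      [3, 0, 5]]
row_ptr = [0, 2, 3, 5]
cols = [0, 2, 1, 0, 2]
vals = [4.0, 1.0, 2.0, 3.0, 5.0]
y = spmv_csr(row_ptr, cols, vals, [1.0, 1.0, 1.0])
# y = [5.0, 2.0, 8.0]
```

The vals/cols streams are read sequentially (good locality); only the gathers from x jump around, which is why reordering and blocking schemes like the one this paper proposes target those accesses.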