Results 1–10 of 6,580
Computing A Sparse Jacobian Matrix By Rows And Columns
1995
"... In this paper we show that it is possible to exploit sparsity both in columns and rows by employing the forward and the reverse mode of automatic differentiation. A graph-theoretic characterization of the problem is given. KEY WORDS: AD, Forward and Reverse mode, Nonlinear Optimization, Numerical Differences, Sparsity. 1 INTRODUCTION In many nonlinear optimization problems one often needs to estimate the Jacobian matrix of a nonlinear function F : R ..."
Cited by 25 (4 self)
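Where this abstract speaks of exploiting sparsity through column (or row) partitions, the underlying idea is to group structurally orthogonal columns, meaning no two columns in a group share a nonzero row, so that a single forward-mode pass (or one extra function evaluation) recovers every column in the group at once. A minimal Python sketch of the column side of that scheme, using finite differences in place of AD; the greedy grouping and the test function are illustrative, not taken from the paper:

```python
import numpy as np

def greedy_column_groups(pattern):
    """Greedily group structurally orthogonal columns of a 0/1 pattern.

    Two columns may share a group only if they have no nonzero in a
    common row, so each compressed entry maps back to one Jacobian entry.
    """
    m, n = pattern.shape
    groups = []  # list of (column indices, occupied-row mask)
    for j in range(n):
        rows = pattern[:, j].astype(bool)
        for cols, occupied in groups:
            if not (rows & occupied).any():   # structurally orthogonal
                cols.append(j)
                occupied |= rows
                break
        else:
            groups.append(([j], rows.copy()))
    return [cols for cols, _ in groups]

def estimate_jacobian(F, x, pattern, h=1e-7):
    """Estimate J with one extra F-evaluation per column group."""
    m, n = pattern.shape
    J = np.zeros((m, n))
    F0 = F(x)
    for cols in greedy_column_groups(pattern):
        d = np.zeros(n)
        d[cols] = 1.0
        Fd = (F(x + h * d) - F0) / h          # compressed column J @ d
        for j in cols:
            rows = pattern[:, j].astype(bool)
            J[rows, j] = Fd[rows]             # un-compress via sparsity
    return J

# Tiny tridiagonal example: three groups suffice regardless of n.
n = 6
pattern = (np.abs(np.subtract.outer(np.arange(n), np.arange(n))) <= 1)
F = lambda x: np.convolve(x**2, [1.0, -2.0, 1.0], mode="same")
J = estimate_jacobian(F, np.linspace(1, 2, n), pattern.astype(int))
```

For a tridiagonal Jacobian this needs only three evaluations however large n grows, which is the saving the row-and-column variants of this idea push further.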
Reducing the Number of AD Passes for Computing a Sparse Jacobian Matrix
2000
"... A reduction in the computational work is possible if we do not require that the nonzeros of a Jacobian matrix be determined directly. If a column or row partition is available, the proposed substitution technique can be used to reduce the number of groups in the partition further. In this chapter ..."
Cited by 11 (3 self)
The University of Florida sparse matrix collection
NA DIGEST
1997
"... The University of Florida Sparse Matrix Collection is a large, widely available, and actively growing set of sparse matrices that arise in real applications. Its matrices cover a wide spectrum of problem domains, both those arising from problems with underlying 2D or 3D geometry (structural engineering ..."
Cited by 536 (17 self)
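The matrices in the collection are distributed in standard exchange formats such as MatrixMarket, so they drop straight into SciPy's sparse tooling. A minimal usage sketch, assuming a matrix file (here bcsstk01, a small structural-engineering matrix from the collection) has already been downloaded locally:

```python
import scipy.io
import scipy.sparse.linalg as spla

# Load a MatrixMarket file downloaded from the collection; the local
# path is a placeholder for whichever matrix you fetched.
A = scipy.io.mmread("bcsstk01.mtx").tocsr()

print(f"{A.shape[0]} x {A.shape[1]} matrix, {A.nnz} nonzeros")

# Typical benchmarking use: e.g., estimate the largest singular value.
sigma_max = spla.svds(A.asfptype(), k=1, return_singular_vectors=False)[0]
print(f"largest singular value ~ {sigma_max:.4g}")
```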
Nonnegative matrix factorization with sparseness constraints
Journal of Machine Learning Research
2004
"... Nonnegative matrix factorization (NMF) is a recently developed technique for finding parts-based, linear representations of nonnegative data. Although it has successfully been applied in several applications, it does not always result in parts-based representations. In this paper, we show ..."
Cited by 498 (0 self)
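The sparseness constraint in this line of work is expressed through a measure based on the ratio of the L1 and L2 norms, which equals 1 for a vector with a single nonzero entry and 0 for a vector whose entries all have the same magnitude. A short sketch of that measure in Python/NumPy (the projection operator used to enforce a target sparseness inside the NMF updates is not reproduced here):

```python
import numpy as np

def sparseness(x):
    """Hoyer's sparseness measure: 1 for a single-nonzero vector,
    0 for a vector whose entries all have equal magnitude."""
    x = np.asarray(x, dtype=float)
    n = x.size
    l1 = np.abs(x).sum()
    l2 = np.sqrt((x ** 2).sum())
    return (np.sqrt(n) - l1 / l2) / (np.sqrt(n) - 1)

print(sparseness([0, 0, 3, 0]))    # 1.0  (maximally sparse)
print(sparseness([1, 1, 1, 1]))    # 0.0  (maximally dense)
print(sparseness([1, 0.1, 0, 0]))  # somewhere in between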
LSQR: An Algorithm for Sparse Linear Equations and Sparse Least Squares
ACM Trans. Math. Software
1982
"... An iterative method is given for solving Ax = b and min ‖Ax − b‖₂, where the matrix A is large and sparse. The method is based on the bidiagonalization procedure of Golub and Kahan. It is analytically equivalent to the standard method of conjugate gradients, but possesses more favorable numerical ..."
Cited by 653 (21 self)
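LSQR is available directly in SciPy, where it handles both square systems and least-squares problems using only matrix-vector products with A and its transpose. A quick usage sketch on a random sparse problem:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

# A sparse rectangular system: find x minimizing ||Ax - b||_2.
rng = np.random.default_rng(0)
A = sp.random(2000, 500, density=0.01, random_state=rng, format="csr")
b = rng.standard_normal(2000)

result = lsqr(A, b, atol=1e-10, btol=1e-10)
x, istop, itn, r1norm = result[0], result[1], result[2], result[3]
print(f"stop reason {istop}, {itn} iterations, residual norm {r1norm:.4g}")
```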
A Singular Value Thresholding Algorithm for Matrix Completion
2008
"... This paper introduces a novel algorithm to approximate the matrix with minimum nuclear norm among all matrices obeying a set of convex constraints. This problem may be understood as the convex relaxation of a rank minimization problem, and arises in many important applications as in the task of recovering ..."
"... remarkable features making this attractive for low-rank matrix completion problems. The first is that the soft-thresholding operation is applied to a sparse matrix; the second is that the rank of the iterates {X^k} is empirically nondecreasing. Both these facts allow the algorithm to make use of very minimal ..."
Cited by 555 (22 self)
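The core SVT iteration alternates singular-value soft-thresholding with a gradient step on the observed entries. A dense NumPy sketch of that iteration; the threshold τ = 5n follows a choice suggested in the paper, the step size is illustrative, and the production algorithm exploits sparse structure and partial SVDs that this sketch does not:

```python
import numpy as np

def shrink(Y, tau):
    """Singular value soft-thresholding operator D_tau(Y)."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def svt_complete(M, mask, tau, delta, iters=500):
    """Recover a low-rank matrix from the entries M[mask] via SVT."""
    Y = np.zeros_like(M)
    for _ in range(iters):
        X = shrink(Y, tau)              # X^k = D_tau(Y^{k-1})
        Y += delta * mask * (M - X)     # gradient step on observed set
    return X

# Rank-2 test matrix, 40% of entries observed.
rng = np.random.default_rng(1)
n = 60
M = rng.standard_normal((n, 2)) @ rng.standard_normal((2, n))
mask = rng.random((n, n)) < 0.4
X = svt_complete(M, mask, tau=5 * n, delta=1.2)
err = np.linalg.norm(X - M) / np.linalg.norm(M)
print(f"relative error {err:.3g}")  # small when enough entries are seen
```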
From Sparse Solutions of Systems of Equations to Sparse Modeling of Signals and Images
2007
"... A full-rank matrix A ∈ ℝ^{n×m} with n < m generates an underdetermined system of linear equations Ax = b having infinitely many solutions. Suppose we seek the sparsest solution, i.e., the one with the fewest nonzero entries: can it ever be unique? If so, when? As optimization of sparsity is combinatorial ..."
Cited by 427 (36 self)
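A standard convex surrogate for the combinatorial sparsity objective, discussed at length in this literature, is basis pursuit: minimize ‖x‖₁ subject to Ax = b, which becomes a linear program after splitting x into nonnegative parts. A small sketch using SciPy's LP solver; the problem sizes and data are illustrative:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    """min ||x||_1 s.t. Ax = b, via the LP split x = u - v, u, v >= 0."""
    n, m = A.shape
    c = np.ones(2 * m)                 # objective: sum(u) + sum(v)
    A_eq = np.hstack([A, -A])          # A(u - v) = b
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs")
    u, v = res.x[:m], res.x[m:]
    return u - v

# Underdetermined system with a planted 3-sparse solution.
rng = np.random.default_rng(2)
n, m = 20, 50
A = rng.standard_normal((n, m))
x_true = np.zeros(m)
x_true[[3, 17, 41]] = [1.5, -2.0, 0.7]
b = A @ x_true
x_hat = basis_pursuit(A, b)
print(np.allclose(x_hat, x_true, atol=1e-6))  # typically recovers x_true
```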
High dimensional graphs and variable selection with the Lasso
ANNALS OF STATISTICS
2006
"... The pattern of zero entries in the inverse covariance matrix of a multivariate normal distribution corresponds to conditional independence restrictions between variables. Covariance selection aims at estimating those structural zeros from data. We show that neighborhood selection with the Lasso is a ..."
Cited by 736 (22 self)
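Neighborhood selection regresses each variable on all the others with the Lasso and reads the graph off the nonzero coefficients, combining the two directed selections per pair with an "or" or an "and" rule. A sketch with scikit-learn; the penalty level alpha here is purely illustrative (the paper analyzes how it should scale):

```python
import numpy as np
from sklearn.linear_model import Lasso

def neighborhood_selection(X, alpha=0.1, rule="or"):
    """Estimate a sparse undirected graph by Lasso-regressing each
    variable on the rest (neighborhood selection)."""
    n, p = X.shape
    selected = np.zeros((p, p), dtype=bool)
    for j in range(p):
        others = np.delete(np.arange(p), j)
        coef = Lasso(alpha=alpha).fit(X[:, others], X[:, j]).coef_
        selected[j, others] = coef != 0
    if rule == "or":                   # edge if either side selects it
        return selected | selected.T
    return selected & selected.T       # "and" rule: both must select it

# Chain dependence x1 -> x2 -> x3: x1 and x3 are conditionally
# independent given x2, so the (1, 3) edge should be absent.
rng = np.random.default_rng(3)
x1 = rng.standard_normal(500)
x2 = x1 + 0.5 * rng.standard_normal(500)
x3 = x2 + 0.5 * rng.standard_normal(500)
X = np.column_stack([x1, x2, x3])
X = (X - X.mean(0)) / X.std(0)         # standardize before the Lasso
print(neighborhood_selection(X, alpha=0.1))
```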
Robust principal component analysis?
Journal of the ACM
2011
"... This paper is about a curious phenomenon. Suppose we have a data matrix, which is the superposition of a low-rank component and a sparse component. Can we recover each component individually? We prove that under some suitable assumptions, it is possible to recover both the low-rank and the ..."
Cited by 569 (26 self)
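The convex program behind this result, principal component pursuit (minimize ‖L‖* + λ‖S‖₁ subject to L + S = M), is commonly solved with an augmented Lagrangian scheme that alternates singular-value thresholding on the low-rank part and entrywise soft-thresholding on the sparse part. A compact NumPy sketch of that scheme; λ and μ follow choices suggested in this literature but are illustrative here:

```python
import numpy as np

def rpca(M, iters=200):
    """Split M into low-rank L plus sparse S via an inexact augmented
    Lagrangian method for principal component pursuit."""
    m, n = M.shape
    lam = 1.0 / np.sqrt(max(m, n))        # lambda suggested by the theory
    mu = 0.25 * m * n / np.abs(M).sum()   # common heuristic penalty weight
    Y = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(iters):
        # L-step: singular value soft-thresholding.
        U, s, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(s - 1.0 / mu, 0.0)) @ Vt
        # S-step: entrywise soft-thresholding.
        R = M - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        # Dual update.
        Y += mu * (M - L - S)
    return L, S

# Rank-1 matrix corrupted by a few large spikes.
rng = np.random.default_rng(4)
L0 = np.outer(rng.standard_normal(50), rng.standard_normal(50))
S0 = np.zeros((50, 50))
S0[rng.random((50, 50)) < 0.05] = 10.0
L, S = rpca(L0 + S0)                      # L should approximate L0
```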
Parallel Numerical Linear Algebra
1993
"... We survey general techniques and open problems in numerical linear algebra on parallel architectures. We first discuss basic principles of parallel processing, describing the costs of basic operations on parallel machines, including general principles for constructing efficient algorithms. We illustrate these principles using current architectures and software systems, and by showing how one would implement matrix multiplication. Then, we present direct and iterative algorithms for solving linear systems of equations, linear least squares problems, the symmetric eigenvalue problem ..."
Cited by 773 (23 self)
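The survey's running example is matrix multiplication, where the central principle is blocking: partitioning the operands into tiles that fit in fast local memory so each word moved is reused many times before being evicted. A serial NumPy sketch of the blocked schedule; the block size is illustrative, and a parallel code would additionally assign tiles to processors:

```python
import numpy as np

def blocked_matmul(A, B, bs=64):
    """C = A @ B computed tile by tile. Each (i, k, j) step multiplies
    two bs-by-bs tiles, so every element moved into fast memory is
    reused on the order of bs times."""
    n, m = A.shape
    m2, p = B.shape
    assert m == m2
    C = np.zeros((n, p))
    for i in range(0, n, bs):
        for k in range(0, m, bs):
            for j in range(0, p, bs):
                C[i:i+bs, j:j+bs] += A[i:i+bs, k:k+bs] @ B[k:k+bs, j:j+bs]
    return C

rng = np.random.default_rng(5)
A = rng.standard_normal((300, 200))
B = rng.standard_normal((200, 250))
assert np.allclose(blocked_matmul(A, B), A @ B)
```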