Results 1–10 of 31
Algorithm 887: CHOLMOD, supernodal sparse Cholesky factorization and update/downdate
ACM Transactions on Mathematical Software, 2008
Abstract

Cited by 109 (8 self)
CHOLMOD is a set of routines for factorizing sparse symmetric positive definite matrices of the form A or AAᵀ, updating/downdating a sparse Cholesky factorization, solving linear systems, updating/downdating the solution to the triangular system Lx = b, and many other sparse matrix functions for both symmetric and unsymmetric matrices. Its supernodal Cholesky factorization relies on LAPACK and the Level-3 BLAS, and obtains a substantial fraction of the peak performance of the BLAS. Both real and complex matrices are supported. CHOLMOD is written in ANSI/ISO C, with both C and MATLAB interfaces. It appears in MATLAB 7.2 as x=A\b when A is sparse symmetric positive definite, as well as in several other sparse matrix functions.
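The update operation mentioned in the abstract can be sketched densely in a few lines: given a Cholesky factor L of A, a rank-1 update produces the factor of A + wwᵀ without refactorizing. This is a minimal numpy sketch using the classical Givens-style recurrence, not CHOLMOD's sparse supernodal implementation, and `chol_update` is an illustrative name, not CHOLMOD's API:

```python
import numpy as np

def chol_update(L, w):
    """Rank-1 Cholesky update: given lower-triangular L with
    A = L @ L.T, return L2 with A + np.outer(w, w) = L2 @ L2.T."""
    L = L.copy()
    w = w.astype(float).copy()
    n = len(w)
    for k in range(n):
        r = np.hypot(L[k, k], w[k])      # new diagonal entry
        c = r / L[k, k]
        s = w[k] / L[k, k]
        L[k, k] = r
        if k + 1 < n:
            # Rotate the rest of column k and the workspace vector.
            L[k+1:, k] = (L[k+1:, k] + s * w[k+1:]) / c
            w[k+1:] = c * w[k+1:] - s * L[k+1:, k]
    return L
```

The downdate A − wwᵀ follows the same pattern with the signs flipped, but can fail if the result is not positive definite.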
Dynamic supernodes in sparse Cholesky update/downdate and triangular solves
ACM Trans. Math. Software, 2006
Abstract

Cited by 30 (10 self)
The supernodal method for sparse Cholesky factorization represents the factor L as a set of supernodes, each consisting of a contiguous set of columns of L with identical nonzero pattern. A conventional supernode is stored as a dense submatrix. While this is suitable for sparse Cholesky factorization where the nonzero pattern of L does not change, it is not suitable for methods that modify a sparse Cholesky factorization after a low-rank change to A (an update/downdate, A = A ± WWᵀ). Supernodes merge and split apart during an update/downdate. Dynamic supernodes are introduced, which allow a sparse Cholesky update/downdate to obtain performance competitive with conventional supernodal methods. A dynamic supernodal solver is shown to exceed the performance of the conventional (BLAS-based) supernodal method for solving triangular systems. These methods are incorporated into CHOLMOD, a sparse Cholesky factorization and update/downdate package, which forms the basis of x=A\b in MATLAB when A is sparse and symmetric positive definite.
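The triangular solves this paper accelerates can be sketched as plain forward and back substitution with the Cholesky factor; this dense, element-by-element version is only an illustration of the arithmetic, not the supernodal, BLAS-blocked kernel the paper describes:

```python
import numpy as np

def solve_with_cholesky(L, b):
    """Solve A x = b given A = L @ L.T: forward substitution for
    L y = b, then back substitution for L.T x = y."""
    n = len(b)
    y = np.zeros(n)
    for i in range(n):                  # forward: L y = b
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):      # backward: L.T x = y
        x[i] = (y[i] - L[i + 1:, i] @ x[i + 1:]) / L[i, i]
    return x
```

A supernodal solver replaces these scalar inner products with dense matrix-vector operations over blocks of columns sharing one nonzero pattern, which is where the BLAS-level performance comes from.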
Algorithm 8xx: CHOLMOD, supernodal sparse Cholesky factorization and update/downdate
2006
Reducing the average complexity of ML detection using semidefinite relaxation
in Proc. IEEE ICASSP’05, 2005
Cited by 8 (4 self)
Internet Traffic Matrices: A Primer
Abstract

Cited by 5 (3 self)
The increasing demand for various services from the Internet has led to an exponential growth of Internet traffic in the last decade, and that growth is likely to continue. With this demand comes the increasing importance of network operations management, planning, provisioning and traffic engineering. A key input into these processes is the traffic matrix, and this is the focus of this chapter. The traffic matrix represents the volumes of traffic from sources to destinations in a network. Here, we first explore the various issues involved in measuring and characterising these matrices. The insights obtained are used to develop models of the traffic, depending on the properties of traffic to be captured: temporal, spatial or spatio-temporal properties. The models are then used in various applications, such as the recovery of traffic matrices, network optimisation and engineering activities, anomaly detection and the synthesis of artificial traffic matrices for testing routing protocols. We conclude the chapter by summarising open questions in Internet traffic matrix research and providing a list of resources useful for the researcher and practitioner.
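As a toy illustration of the central object here, a traffic matrix can be built by aggregating per-flow volumes over source-destination pairs; the node count and flow records below are made up for the sketch:

```python
import numpy as np

# Hypothetical flow records: (source node, destination node, bytes).
flows = [(0, 1, 300), (0, 2, 120), (1, 2, 80), (0, 1, 50), (2, 0, 10)]

n = 3                            # number of network nodes (assumed)
T = np.zeros((n, n))             # traffic matrix: T[s, d] = total volume s -> d
for s, d, volume in flows:
    T[s, d] += volume
```

In practice the hard part, and the subject of the chapter, is that T is usually not directly observable and must be inferred from link-level measurements.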
Formal Methods for High-Performance Linear Algebra
2000
Abstract

Cited by 4 (1 self)
The core curriculum of any first-rate undergraduate Computer Science department includes at least one course that focuses on the formal derivation and verification of algorithms [6]. Many of us in scientific...
Solving rank-deficient and ill-posed problems using UTV and QR factorizations
SIAM J. Matrix Anal. Appl.
Abstract

Cited by 4 (3 self)
The algorithm of Mathias and Stewart [A block QR algorithm and the singular value decomposition, Linear Algebra and Its Applications, 182:91–100, 1993] is examined as a tool for constructing regularized solutions to rank-deficient and ill-posed linear equations. The algorithm is based on a sequence of QR factorizations. If it is stopped after the first step it produces the same solution as the complete orthogonal decomposition used in LAPACK’s xGELSY. However, we show that for low-rank problems a careful implementation can lead to an order of magnitude improvement in speed over xGELSY as implemented in LAPACK. We prove, under assumptions similar to those used by others, that if the numerical rank is chosen at a gap in the singular value spectrum and if the initial factorization is rank-revealing then, even if the algorithm is stopped after the first step, approximately half the time its solutions are closer to the desired solution than are the singular value decomposition (SVD) solutions. Conversely, the SVD will be closer approximately half the time, and in this case overall the two algorithms are very similar in accuracy. We confirm this with numerical experiments. Although the algorithm works best for problems with a gap in the singular value spectrum, numerical experiments suggest that it may work well for problems with no gap.
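The SVD solutions used as the comparison baseline above can be sketched as a truncated-SVD solve, keeping only the k singular triplets above the gap; this is illustrative numpy, not the xGELSY or Mathias-Stewart code path:

```python
import numpy as np

def tsvd_solve(A, b, k):
    """Regularized least-squares solution built from the k largest
    singular triplets of A, where k is the numerical rank chosen
    at the gap in the singular value spectrum."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # Invert only the retained singular values; the rest are
    # treated as noise and discarded.
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])
```

When k equals the full rank this reduces to the ordinary pseudoinverse solution; choosing k below the rank is what regularizes an ill-posed problem.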
Subspace-Based Noise Reduction for Speech Signals via Diagonal and Triangular Matrix Decompositions: Survey and Analysis
2007
Abstract

Cited by 4 (1 self)
We survey the definitions and use of rank-revealing matrix decompositions in single-channel noise reduction algorithms for speech signals. Our algorithms are based on the rank-reduction paradigm and, in particular, signal subspace techniques. The focus is on practical working algorithms, using both diagonal (eigenvalue and singular value) decompositions and rank-revealing triangular decompositions (ULV, URV, VSV, ULLV, and ULLIV). In addition, we show how the subspace-based algorithms can be analyzed and compared by means of simple FIR filter interpretations. The algorithms are illustrated with working MATLAB code and applications in speech processing.
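A minimal numpy sketch of the rank-reduction paradigm, using the SVD rather than the paper's triangular ULV/URV variants (which trade some accuracy for cheaper updates):

```python
import numpy as np

def subspace_denoise(X, k):
    """Project the data matrix X (e.g., a Hankel matrix built from
    noisy speech samples) onto its k dominant singular directions,
    discarding the noise-dominated complement of the signal subspace."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k]
```

This is the plain least-squares estimator; the survey's algorithms additionally apply gain functions to the retained singular values to trade signal distortion against residual noise.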
Stewart's Pivoted QLP Decomposition for Low-Rank Matrices
2002
Abstract

Cited by 3 (0 self)
The pivoted QLP decomposition, introduced by G. W. Stewart [19], represents the first two steps in an algorithm which approximates the SVD. If A is an m-by-n matrix, the matrix AΠ₀ is first factored as AΠ₀ = QR, and then the matrix RᵀΠ₁ is factored as RᵀΠ₁ = PLᵀ, resulting in A = QΠ₁LPᵀΠ₀ᵀ, with Q and P orthogonal, L lower triangular, and Π₀ and Π₁ permutation matrices. The Q and P matrices provide approximations of the left and right singular subspaces, and the diagonal elements of L are excellent approximations of the singular values of A. Stewart observed that pivoting is not necessary in the second step, allowing one to efficiently truncate the decomposition, computing only the first few columns of R and L and choosing the stopping point dynamically. In this paper, we demonstrate that this truncation actually works by extending our theory for the complete pivoted QLP decomposition [11]. In particular, say there is a gap between σₖ and σₖ₊₁, and partition the matrix L into diagonal blocks L₁₁ and L₂₂ and off-diagonal block L₂₁, where L₁₁ is k-by-k. If we compute only the block L₁₁, the singular values σⱼ(L₁₁) for j = 1, …, k all converge quadratically in the gap ratio σₖ₊₁/σₖ. Hence, if the gap ratio is small, as it usually is when A has numerical rank k, then all of the singular values are likely to be well approximated. This truncated pivoted QLP decomposition can be computed in O(mnk) time, making it ideal for accurate SVD approximations for low-rank problems.
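The two-step shape of the computation is easy to sketch with numpy. This unpivoted variant (QR of A, then QR of Rᵀ) omits the column pivoting in the first step that the pivoted QLP requires, so it only illustrates the structure, not the rank-revealing guarantees:

```python
import numpy as np

def qlp_diag(A):
    """Unpivoted QLP sketch: A = QR, then R^T = P L^T, which gives
    A = Q L P^T with Q, P orthogonal and L lower triangular.
    Returns |diag(L)|, whose entries track the singular values of A."""
    Q, R = np.linalg.qr(A)
    P, Lt = np.linalg.qr(R.T)   # R^T = P L^T, i.e. R = L P^T
    L = Lt.T
    return np.abs(np.diag(L))
```

Since Q and P are orthogonal, L has exactly the singular values of A; in particular the product of |diag(L)| equals |det A| for square A, and each diagonal entry is bounded by the largest singular value.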