Results 1-10 of 32
The University of Florida sparse matrix collection
 NA Digest, 1997
Cited by 538 (19 self)
The University of Florida Sparse Matrix Collection is a large, widely available, and actively growing set of sparse matrices that arise in real applications. Its matrices cover a wide spectrum of problem domains, both those arising from problems with underlying 2D or 3D geometry (structural engineering, computational fluid dynamics, model reduction, electromagnetics, semiconductor devices, thermodynamics, materials, acoustics, computer graphics/vision, robotics/kinematics, and other discretizations) and those that typically do not have such geometry (optimization, circuit simulation, networks and graphs, economic and financial modeling, theoretical and quantum chemistry, chemical process simulation, mathematics and statistics, and power networks). The collection meets a vital need that artificially generated matrices cannot meet, and is widely used by the sparse matrix algorithms community for the development and performance evaluation of sparse matrix algorithms. It includes software for accessing and managing the collection from MATLAB, Fortran, and C.
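The collection distributes its matrices in Matrix Market (*.mtx) text format, among others. As a minimal sketch, here is how such a file can be read into SciPy; the file name and the tiny 3x3 matrix are made up for illustration:

```python
import os
import tempfile
import scipy.io

# A tiny matrix in Matrix Market coordinate format, the plain-text
# format offered for download from the collection.
mtx_text = """%%MatrixMarket matrix coordinate real general
3 3 4
1 1 2.0
2 2 3.0
3 3 4.0
1 3 1.0
"""

path = os.path.join(tempfile.mkdtemp(), "tiny.mtx")
with open(path, "w") as f:
    f.write(mtx_text)

A = scipy.io.mmread(path).tocsr()  # read into a SciPy sparse matrix
print(A.shape, A.nnz)              # (3, 3) 4
```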
SBA: a software package for generic sparse bundle adjustment
 ACM Transactions on Mathematical Software
, 2009
Algorithm 887: CHOLMOD, supernodal sparse Cholesky factorization and update/downdate
 ACM Transactions on Mathematical Software
, 2008
Cited by 109 (8 self)
CHOLMOD is a set of routines for factorizing sparse symmetric positive definite matrices of the form A or AA^T, updating/downdating a sparse Cholesky factorization, solving linear systems, updating/downdating the solution to the triangular system Lx = b, and many other sparse matrix functions for both symmetric and unsymmetric matrices. Its supernodal Cholesky factorization relies on LAPACK and the Level-3 BLAS, and obtains a substantial fraction of the peak performance of the BLAS. Both real and complex matrices are supported. CHOLMOD is written in ANSI/ISO C, with both C and MATLAB interfaces. It appears in MATLAB 7.2 as x=A\b when A is sparse symmetric positive definite, as well as in several other sparse matrix functions.
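The x = A\b solve the abstract describes can be sketched in SciPy on a small sparse SPD system (a 1-D Laplacian, chosen here for illustration). SciPy has no built-in sparse Cholesky, so its sparse LU solver stands in; the CHOLMOD routines themselves are wrapped for Python by the scikit-sparse package:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Small sparse symmetric positive definite system: tridiag(-1, 2, -1).
n = 5
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

x = spla.spsolve(A, b)   # the MATLAB x = A\b step, via sparse LU
print(x)
```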
Covariance recovery from a square root information matrix for data association
 Journal of Robotics and Autonomous Systems
Cited by 30 (11 self)
Data association is one of the core problems of simultaneous localization and mapping (SLAM), and it requires knowledge about the uncertainties of the estimation problem in the form of marginal covariances. However, it is often difficult to access these quantities without calculating the full and dense covariance matrix, which is prohibitively expensive. We present a dynamic programming algorithm for efficient recovery of the marginal covariances needed for data association. As input we use a square root information matrix as maintained by our incremental smoothing and mapping (iSAM) algorithm. The contributions beyond our previous work are an improved algorithm for recovering the marginal covariances and a more thorough treatment of data association, now including the joint compatibility branch and bound (JCBB) algorithm. We further show how to make information theoretic decisions about measurements before actually taking the measurement, thereby allowing a reduction in estimation complexity by omitting uninformative measurements. We evaluate our work on simulated and real-world data. Key words: data association, smoothing, simultaneous localization and mapping
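The quantity being recovered can be illustrated on a toy problem: given a square root information matrix R (so the information matrix is R^T R), the marginal covariance of variable i is the i-th diagonal block of (R^T R)^-1. Here we form that inverse densely for a tiny made-up Jacobian, which is exactly what the paper's algorithm avoids doing at scale:

```python
import numpy as np

rng = np.random.default_rng(0)
J = rng.standard_normal((8, 4))     # a toy measurement Jacobian
R = np.linalg.qr(J, mode="r")       # square root information matrix
Sigma = np.linalg.inv(R.T @ R)      # full covariance (dense; tiny here)
marginals = np.diag(Sigma)          # per-variable marginal variances
print(marginals)
```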
Algorithm 8xx: CHOLMOD, supernodal sparse Cholesky factorization and update/downdate
, 2006
A Fast Parallel Algorithm for Selected Inversion of Structured Sparse Matrices with Application to 2D Electronic Structure Calculations
, 2009
Cited by 16 (6 self)
An efficient parallel algorithm is presented and tested for computing selected components of H^−1, where H has the structure of a Hamiltonian matrix of two-dimensional lattice models with local interaction. Calculations of this type are useful for several applications, including electronic structure analysis of materials in which the diagonal elements of the Green’s functions are needed. The algorithm proposed here is a direct method based on an LDL^T factorization. The elimination tree is used to organize the parallel algorithm. Synchronization overhead is reduced by passing the data level by level along this tree using the technique of local buffers and relative indices. The performance of the proposed parallel algorithm is analyzed by examining its load balance and communication overhead, and is shown to exhibit an excellent weak scaling on a large-scale high-performance parallel machine with distributed memory.
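The target quantity can be shown concretely: selected entries (here the diagonal) of H^-1 for a 2-D lattice operator with local interaction. For this made-up 4x4 lattice the dense inverse is affordable; the paper's contribution is obtaining these entries at scale via an LDL^T factorization and its elimination tree, never forming H^-1 densely:

```python
import numpy as np
import scipy.sparse as sp

# 2-D lattice operator on a 4x4 grid, built as a Kronecker sum.
n = 4
T = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n))
S = sp.diags([-1.0, -1.0], [-1, 1], shape=(n, n))
I = sp.identity(n)
H = (sp.kron(I, T) + sp.kron(S, I)).tocsc()     # 16x16, sparse, SPD

# Selected inversion target: the diagonal of H^-1 (dense here, tiny n).
diag_Hinv = np.diag(np.linalg.inv(H.toarray()))
print(diag_Hinv[:4])
```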
Solving rational eigenvalue problems via linearization
, 2008
Cited by 11 (0 self)
Abstract. The rational eigenvalue problem is an emerging class of nonlinear eigenvalue problems arising from a variety of physical applications. In this paper, we propose a linearization-based method to solve the rational eigenvalue problem. The proposed method converts the rational eigenvalue problem into a well-studied linear eigenvalue problem while exploiting and preserving the structure and properties of the original rational eigenvalue problem. For example, the low-rank property leads to a trimmed linearization. We show that solving a class of rational eigenvalue problems is just as convenient and efficient as solving linear eigenvalue problems. Key words. Rational eigenvalue problem, linearization, nonlinear eigenvalue problem AMS subject classifications. 65F15, 65F50, 15A18
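This is not the paper's trimmed linearization for rational problems, but the same underlying idea can be sketched on a simpler relative: the quadratic eigenvalue problem (l^2 M + l C + K) x = 0 rewritten as a linear pencil A v = l B v of twice the size, with v = [x; l x]. The matrices below are made up for illustration; the rational case adds low-rank terms that the paper exploits to keep the pencil small:

```python
import numpy as np
import scipy.linalg

n = 3
M = np.eye(n)
C = np.diag([1.0, 2.0, 3.0])
K = np.diag([4.0, 5.0, 6.0])

Z, I = np.zeros((n, n)), np.eye(n)
A = np.block([[Z, I], [-K, -C]])   # first block row encodes y = l x
B = np.block([[I, Z], [Z, M]])     # second encodes -K x - C y = l M y
eigvals = scipy.linalg.eigvals(A, B)
print(len(eigvals))                # 2n = 6 eigenvalues
```

Each eigenvalue of the pencil is a root of det(l^2 M + l C + K) = 0, i.e. an eigenvalue of the original quadratic problem.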
Bayesian Modeling with Gaussian Processes using the GPstuff Toolbox, arXiv:1206.5754 [cs, stat]
, 2012
Cited by 10 (1 self)
Gaussian processes (GPs) are powerful tools for probabilistic modeling purposes. They can be used to define prior distributions over latent functions in hierarchical Bayesian models. The prior over functions is defined implicitly by the mean and covariance function, which determine the smoothness and variability of the function. The inference can then be conducted directly in the function space by evaluating or approximating the posterior process. Despite their attractive theoretical properties, GPs pose practical challenges in their implementation. GPstuff is a versatile collection of computational tools for GP models, compatible with MATLAB and Octave on Linux and Windows. It includes, among others, various inference methods, sparse approximations, and tools for model assessment. In this work, we review these tools and demonstrate the use of GPstuff in several models. Last updated 2014-04-11.
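The basic modeling step GPstuff automates can be sketched in a few lines: a GP prior defined by a squared-exponential covariance function, conditioned on noisy observations to give the posterior mean at a test point. The kernel parameters and the sin-curve data are made-up illustrations; the toolbox adds many inference methods and sparse approximations on top of this core computation:

```python
import numpy as np

def sq_exp(x1, x2, ell=1.0, sigma_f=1.0):
    # Squared-exponential covariance between two sets of 1-D inputs.
    d = x1[:, None] - x2[None, :]
    return sigma_f**2 * np.exp(-0.5 * (d / ell) ** 2)

x_train = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
y_train = np.sin(x_train)          # noisy-observation stand-in
x_test = np.array([0.5])
noise = 1e-2

K = sq_exp(x_train, x_train) + noise * np.eye(len(x_train))
k_star = sq_exp(x_test, x_train)
mean = k_star @ np.linalg.solve(K, y_train)   # GP posterior mean
print(mean)                                   # close to sin(0.5)
```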
ECOS: An SOCP solver for embedded systems
 in European Control Conference
, 2013
Cited by 9 (3 self)
Abstract — In this paper, we describe the embedded conic solver (ECOS), an interior-point solver for second-order cone programming (SOCP) designed specifically for embedded applications. ECOS is written in low-footprint, single-threaded, library-free ANSI C and so runs on most embedded platforms. The main interior-point algorithm is a standard primal-dual Mehrotra predictor-corrector method with Nesterov-Todd scaling and self-dual embedding, with search directions found via a symmetric indefinite KKT system, chosen to allow stable factorization with a fixed pivoting order. The indefinite system is solved using Davis' SparseLDL package, which we modify by adding dynamic regularization and iterative refinement for stability and reliability, as is done in the CVXGEN code generation system, allowing us to avoid all numerical pivoting; the elimination ordering is found entirely symbolically. This keeps the solver simple, only 750 lines of code, with virtually no variation in run time. For small problems, ECOS is faster than most existing SOCP solvers; it is still competitive for medium-sized problems up to tens of thousands of variables.
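The stabilization trick the abstract describes can be sketched on a dense toy problem: factorize a slightly regularized matrix with a fixed pivot order, then run iterative refinement against the original matrix to remove the error the regularization introduced. Plain LU on K + delta*I stands in here for ECOS's modified sparse LDL factorization, and the matrix is randomly generated for illustration:

```python
import numpy as np
import scipy.linalg

rng = np.random.default_rng(2)
n = 6
K = rng.standard_normal((n, n))
K = K + K.T                          # symmetric indefinite, KKT-like
b = rng.standard_normal(n)
delta = 1e-7                         # static regularization stand-in

# Factor the regularized matrix once, with no numerical pivoting needed.
lu = scipy.linalg.lu_factor(K + delta * np.eye(n))
x = scipy.linalg.lu_solve(lu, b)

# Iterative refinement against the *original* K removes the
# perturbation error introduced by the regularization.
for _ in range(3):
    r = b - K @ x
    x += scipy.linalg.lu_solve(lu, r)

res = np.linalg.norm(K @ x - b)
print(res)
```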
BALANCED INCOMPLETE FACTORIZATION
Cited by 8 (0 self)
In this paper we present a new incomplete factorization of a square matrix into triangular factors, in which we obtain standard LU/LDL^T factors (direct factors) and their inverses (inverse factors) at the same time. Algorithmically, we derive this method from the approach based on the Sherman-Morrison formula [16]. In contrast to the RIF algorithm [9], the direct and inverse factors here directly influence each other throughout the computation. Consequently, the algorithm to compute the approximate factors may mutually balance dropping in the factors and in this way control their conditioning. Although we describe the theory behind the factorization for general nonsymmetric matrices, in the implementation and experiments we restrict ourselves, for clarity and conciseness, to the case in which the system matrix is symmetric and positive definite. In this case, we call the new approximate LDL^T factorization the Balanced Incomplete Factorization (BIF). Our experimental results confirm that this factorization is very robust and may be useful in solving difficult ill-conditioned problems by preconditioned iterative methods. Moreover, the internal coupling of the computation of direct and inverse factors results in much shorter setup times (times to compute the approximate decomposition) than RIF, a method of a similar and very high level of robustness.
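The building block the factorization is derived from is the Sherman-Morrison formula, which updates the inverse of A after a rank-one change A + u v^T without refactorizing. A minimal sketch on a made-up well-conditioned matrix:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
A = rng.standard_normal((n, n)) + n * np.eye(n)  # diagonally dominant
u = rng.standard_normal(n)
v = rng.standard_normal(n)

# Sherman-Morrison: (A + u v^T)^-1 = A^-1 - (A^-1 u)(v^T A^-1) / (1 + v^T A^-1 u)
Ainv = np.linalg.inv(A)
update = np.outer(Ainv @ u, v @ Ainv) / (1.0 + v @ Ainv @ u)
Ainv_updated = Ainv - update
```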