Results 1–10 of 20
Solving Large-Scale Sparse Semidefinite Programs for Combinatorial Optimization
SIAM JOURNAL ON OPTIMIZATION, 1998
Cited by 119 (11 self)
Abstract: We present a dual-scaling interior-point algorithm and show how it exploits the structure and sparsity of some large-scale problems. We solve the positive semidefinite relaxation of combinatorial and quadratic optimization problems subject to Boolean constraints. We report the first computational results of interior-point algorithms for approximating the maximum-cut semidefinite programs with dimension up to 3000.
Sparse Gaussian Elimination on High Performance Computers
1996
Cited by 40 (7 self)
Abstract: This dissertation presents new techniques for solving large sparse unsymmetric linear systems on high-performance computers, using Gaussian elimination with partial pivoting. The efficiency of the new algorithms is demonstrated for matrices from various fields and on a variety of high-performance machines. In the first part we discuss optimizations of a sequential algorithm to exploit the memory hierarchies that exist in most RISC-based superscalar computers. We begin with the left-looking supernode-column algorithm of Eisenstat, Gilbert and Liu, which includes Eisenstat and Liu's symmetric structural reduction for fast symbolic factorization. Our key contribution is to develop both numeric and symbolic schemes to perform supernode-panel updates to achieve better data reuse in cache and floating-point registers...
A FRISCH-NEWTON ALGORITHM FOR SPARSE QUANTILE REGRESSION
Cited by 15 (5 self)
Abstract: Recent experience has shown that interior-point methods using a log-barrier approach are far superior to classical simplex methods for computing solutions to large parametric quantile regression problems. In many large empirical applications, the design matrix has a very sparse structure. A typical example is the classical fixed-effect model for panel data, where the parametric dimension of the model can be quite large but the number of nonzero elements is quite small. Adopting recent developments in sparse linear algebra, we introduce a modified version of the Frisch-Newton algorithm for quantile regression described in Portnoy and Koenker (1997). The new algorithm substantially reduces the storage (memory) requirements and increases computational speed. The modified algorithm also facilitates the development of nonparametric quantile regression methods. The pseudo design matrices employed in nonparametric quantile regression smoothing are inherently sparse in both the fidelity and roughness penalty components. Exploiting the sparse structure of these problems opens up a whole range of new possibilities for multivariate smoothing on large data sets via ANOVA-type decomposition and partial linear models.
All-at-once solution of time-dependent PDE-constrained optimization problems
2010
Cited by 8 (6 self)
Abstract: Time-dependent partial differential equations (PDEs) play an important role in applied mathematics and many other areas of science. One-shot methods try to compute the solution to these problems in a single iteration that solves for all time steps at the same time. In this paper, we look at one-shot approaches for the optimal control of time-dependent PDEs and focus on the fast solution of these problems. The use of Krylov subspace solvers together with an efficient preconditioner allows for minimal storage requirements. We solve only approximate time evolutions for both the forward and adjoint problems, and compute accurate solutions of a given control problem only at convergence of the overall Krylov subspace iteration. We show that our approach can give competitive results for a variety of problem formulations.
Two-dimensional Block Partitionings for the Parallel Sparse Cholesky Factorization: the Fan-in Method
1997
Cited by 7 (1 self)
Abstract: This paper presents a discussion of 2D block mappings for the sparse Cholesky factorization on parallel MIMD architectures with distributed memory. It introduces the fan-in algorithm in a general manner and proposes several mapping strategies. The grid mapping with row balancing, inspired by Rothberg's work [21, 22], proved to be more robust than the original fan-out algorithm. Even more efficient is the proportional mapping, as shown by experiments on a 32-processor IBM SP1 and on a Cray T3D. Subforest-to-subcube mappings are also considered and give good results on the T3D.
One-shot solution of a time-dependent time-periodic PDE-constrained optimization problem
2011
Matrix Methods
1998
Cited by 4 (0 self)
Abstract: We consider techniques for the solution of linear systems and eigenvalue problems. We are concerned with large-scale applications where the matrix will be large and sparse. We discuss both direct and iterative techniques for the solution of sparse equations, contrasting their strengths and weaknesses and emphasizing that combinations of both are necessary in the arsenal of the applications scientist. We briefly review matrix diagonalization techniques for large-scale problems.
SOLVING A SEQUENCE OF SPARSE COMPATIBLE SYSTEMS
Cited by 3 (2 self)
Abstract: We describe how to use an upper trapezoidal sparse orthogonal factorization to solve the sequence of sparse compatible systems needed to implement sparse reduced-gradient versions of certain non-simplex active-set LP methods. For reasons of familiarity, we focus on the reduced-gradient non-simplex active-set method of Gill and Murray, but other algorithms of its class, such as the second stage of Megiddo’s crossover, can be implemented with the proposed technique, which clearly shows its suitability for use in a combined interior-point simplex methodology. Besides two examples illustrating all the given formulae, we report the results obtained with our implementation on top of the sparse toolbox of Matlab 5 when solving the 15 smallest Netlib problems with a highly degenerate Phase I, and several parallelizability issues are noted.
Use of Computational Kernels in Full and Sparse Linear Solvers, Efficient Code Design on High-Performance RISC Processors
in Vector and Parallel Processing (VECPAR'96), 1997
Cited by 3 (0 self)
Abstract: We believe that the availability of portable and efficient serial and parallel numerical libraries that can be used as building blocks is extremely important both for simplifying application software development and for improving reliability. This is illustrated by considering the solution of full and sparse linear systems. We describe successive layers of computational kernels such as the BLAS, the sparse BLAS, blocked algorithms for factorizing full systems, and direct and iterative methods for sparse linear systems. We also show how the architecture of today's powerful RISC processors may influence efficient code design.