Results 1 – 8 of 8
Parallel Numerical Linear Algebra
, 1993
Abstract
Cited by 773 (23 self)
We survey general techniques and open problems in numerical linear algebra on parallel architectures. We first discuss basic principles of parallel processing, describing the costs of basic operations on parallel machines, including general principles for constructing efficient algorithms. We illustrate these principles using current architectures and software systems, and by showing how one would implement matrix multiplication. Then, we present direct and iterative algorithms for solving linear systems of equations, linear least squares problems, the symmetric eigenvalue problem, the nonsymmetric eigenvalue problem, and the singular value decomposition. We consider dense, band and sparse matrices.
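The abstract's running example is how one would implement matrix multiplication efficiently. Not taken from the survey itself, but a minimal sketch of the standard blocking idea it alludes to: updating each block of the result from small panels that fit in fast (or processor-local) memory, which is the basic principle for reducing data movement on parallel machines. Square matrices are assumed for brevity.

```python
import numpy as np

def blocked_matmul(A, B, bs=64):
    """Compute A @ B by square blocks of size bs (assumes square inputs).

    Each block of C is accumulated from small panels of A and B; on a
    parallel machine the same decomposition lets each processor work on
    blocks held in its own local memory, minimizing communication.
    """
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(0, n, bs):
        for j in range(0, n, bs):
            for k in range(0, n, bs):
                # ragged edges are handled automatically by slicing
                C[i:i+bs, j:j+bs] += A[i:i+bs, k:k+bs] @ B[k:k+bs, j:j+bs]
    return C
```

The triple loop over block indices mirrors the scalar algorithm; only the granularity changes, which is why blocking preserves the numerical result while changing the memory-access pattern.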
Design of a Parallel Nonsymmetric Eigenroutine Toolbox, Part I
, 1993
Abstract
Cited by 65 (12 self)
The dense nonsymmetric eigenproblem is one of the hardest linear algebra problems to solve effectively on massively parallel machines. Rather than trying to design a "black box" eigenroutine in the spirit of EISPACK or LAPACK, we propose building a toolbox for this problem. The tools are meant to be used in different combinations on different problems and architectures. In this paper, we describe these tools, which include basic block matrix computations, the matrix sign function, two-dimensional bisection, and spectral divide and conquer using the matrix sign function to find selected eigenvalues. We also outline how we deal with ill-conditioning and potential instability. Numerical examples are included. A future paper will discuss error analysis in detail and extensions to the generalized eigenproblem.
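One tool the abstract names is the matrix sign function. Not the paper's implementation, but a minimal sketch of the classical Newton iteration for it, and of why it supports spectral divide and conquer: sign(A) is +1 on eigenvalues in the right half-plane and -1 on those in the left, so (I - sign(A))/2 projects onto the corresponding invariant subspace.

```python
import numpy as np

def matrix_sign(A, tol=1e-12, maxit=100):
    """Newton iteration X <- (X + X^{-1}) / 2 for the matrix sign function.

    Converges quadratically when A has no eigenvalues on the imaginary
    axis.  The limit agrees with A's eigenvectors but replaces each
    eigenvalue by the sign of its real part, which is what spectral
    divide and conquer exploits to split the spectrum.
    """
    X = np.asarray(A, dtype=float).copy()
    for _ in range(maxit):
        Xn = 0.5 * (X + np.linalg.inv(X))
        if np.linalg.norm(Xn - X, 1) <= tol * np.linalg.norm(Xn, 1):
            return Xn
        X = Xn
    return X
```

In practice the iteration is scaled for faster convergence and the inverse is replaced by more parallel-friendly kernels; this sketch shows only the bare recurrence.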
A list of matrix flows with applications
 in Hamiltonian and Gradient Flows, Algorithms and Control
, 1994
Abstract
Cited by 16 (1 self)
Many mathematical problems, such as existence questions, are studied by using an appropriate realization process, either iteratively or continuously. This article is a collection of differential equations that have been proposed as special continuous realization processes. In some cases, there are remarkable connections between smooth flows and discrete numerical algorithms. In other cases, the flow approach seems advantageous in tackling very difficult problems. The flow approach has potential applications ranging from new development of numerical algorithms to the theoretical solution of open problems. Various aspects of the recent development and applications of the flow approach are reviewed in this article.
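The best-known flow/algorithm connection in this literature is between the Toda flow and the QR eigenvalue iteration. Not from the article itself, but a small sketch of that connection using the classical factorization form of the solution: for symmetric X0, the flow through X0 evaluated at time t is Q(t)^T X0 Q(t), where exp(t X0) = Q(t) R(t) is a QR factorization; sampling at integer times reproduces the unshifted QR iteration applied to exp(X0).

```python
import numpy as np

def toda_flow(X0, t):
    """Evaluate the Toda flow dX/dt = [X, pi_skew(X)] through a
    symmetric X0 at time t, via the factorization formula
    X(t) = Q(t)^T X0 Q(t) with exp(t*X0) = Q(t) R(t).

    X(t) is orthogonally similar to X0, so the flow is isospectral
    by construction -- the defining property of the flows surveyed.
    """
    w, V = np.linalg.eigh(X0)          # X0 = V diag(w) V^T
    E = (V * np.exp(t * w)) @ V.T      # exp(t X0) via the eigendecomposition
    Q, _ = np.linalg.qr(E)
    return Q.T @ X0 @ Q
```

One Euler or Runge–Kutta step on the differential equation would only preserve the spectrum approximately; the factorization form preserves it exactly, which is why it is the preferred way to relate the continuous flow to the discrete algorithm.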
Trading off Parallelism and Numerical Stability
, 1992
Abstract
Cited by 14 (5 self)
The fastest parallel algorithm for a problem may be significantly less stable numerically than the fastest serial algorithm. We illustrate this phenomenon by a series of examples drawn from numerical linear algebra. We also show how some of these instabilities may be mitigated by better floating point arithmetic.
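A generic illustration of the underlying mechanism, not an example from the paper: floating-point addition is not associative, so a parallel reduction, which regroups the operands, can return a different (and sometimes less accurate) result than the serial left-to-right loop.

```python
# Summands chosen so that the grouping matters: 1.0 is smaller than
# one unit in the last place of 1e16, so it is absorbed when added first.
xs = [1.0, 1e16, -1e16]

serial = 0.0
for x in xs:          # left-to-right order of a serial loop:
    serial += x       # (1.0 + 1e16) rounds to 1e16, then cancels to 0.0

# One grouping a parallel tree-sum could use:
grouped = xs[0] + (xs[1] + xs[2])   # 1e16 - 1e16 = 0.0, then + 1.0 = 1.0
```

Here the regrouped (parallel-style) order happens to be the accurate one; in general either order can lose accuracy, which is exactly the trade-off between speed of reassociation and control of rounding that the paper examines.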
Matrix Differential Equations: A Continuous Realization Process for Linear Algebra Problems
, 1992
Abstract
Cited by 5 (0 self)
Many mathematical problems, such as existence questions, are studied by using an appropriate realization process, either iteratively or continuously. In this article differential equation techniques are used as a special continuous realization process for linear algebra problems. The matrix differential equations are cast in fairly general frameworks of which special cases have been found to be closely related to important numerical algorithms. The main thrust is to study the dynamics of various isospectral flows. This approach has potential applications ranging from new development of numerical algorithms to theoretical solution of open problems. Various aspects of the recent development and application in this direction are reviewed in this article.
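A standard member of the isospectral family studied in this literature is Brockett's double-bracket flow dH/dt = [H, [H, N]], which for symmetric H and diagonal N flows toward a diagonal matrix with eigenvalues ordered to match N, acting as a continuous analogue of a diagonalizing (sorting) algorithm. Not from the article itself, a crude forward-Euler sketch:

```python
import numpy as np

def double_bracket_flow(H0, N, h=1e-3, steps=2000):
    """Forward-Euler integration of dH/dt = [H, [H, N]],
    with [A, B] = A @ B - B @ A.

    The exact flow is isospectral and monotonically decreases the
    distance ||H - N||_F^2; Euler only approximates this, so the
    spectrum drifts by O(h) -- adequate for illustration only.
    """
    H = H0.copy()
    for _ in range(steps):
        K = H @ N - N @ H            # inner bracket [H, N] (skew-symmetric)
        H = H + h * (H @ K - K @ H)  # outer bracket [H, [H, N]] (symmetric)
    return H
```

Because the inner bracket is skew and the outer bracket symmetric, each Euler step keeps H exactly symmetric; only isospectrality is approximate under discretization.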
Homotopies for solving polynomial systems within a bounded Domain
 Theor. Comp. Sci
, 1994
Abstract
Cited by 2 (0 self)
The problem considered in this paper is the computation of all solutions of a given polynomial system in a bounded domain. Proving Rouché's Theorem by homotopy continuation concepts yields a new class of homotopy methods, the so-called regional homotopy methods. These methods rely on isolating a part of the system to be solved, which dominates the rest of the system on the border of the domain. As the dominant part has a sparser structure, it is easier to solve. It will be used as start system in the regional homotopy. The paper further describes practical homotopy construction methods by presenting estimators to obtain bounds for polynomials over a bounded domain. Applications illustrate the usefulness of the approach.
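For readers unfamiliar with homotopy continuation in general (not the paper's regional variant): one deforms an easy start system g into the target system f via H(x, t) = (1 - t) g(x) + t f(x) and follows each root path from t = 0 to t = 1 with a predictor-corrector scheme. A minimal univariate sketch, with an Euler predictor along dx/dt = -H_t/H_x and Newton corrector steps:

```python
def track(f, df, g, dg, x, steps=100, newton_iters=3):
    """Follow one root path of H(x,t) = (1-t)*g(x) + t*f(x) from t=0 to t=1.

    f, g are the target and start polynomials; df, dg their derivatives;
    x a root of g.  Assumes the path stays nonsingular (H_x != 0),
    which real trackers enforce with randomization and step control.
    """
    t, h = 0.0, 1.0 / steps
    for _ in range(steps):
        Hx = (1 - t) * dg(x) + t * df(x)
        Ht = f(x) - g(x)
        x = x - h * Ht / Hx              # Euler predictor: dx/dt = -Ht/Hx
        t += h
        for _ in range(newton_iters):    # Newton corrector on H(., t) = 0
            H = (1 - t) * g(x) + t * f(x)
            Hx = (1 - t) * dg(x) + t * df(x)
            x = x - H / Hx
    return x
```

The regional methods of the paper differ in how the start system is chosen (a dominant part of f on the domain boundary, justified by Rouché's Theorem), but the path-tracking machinery is of this predictor-corrector kind.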
On the Homotopy Method for Symmetric Modified Generalized Eigenvalue Problems
, 1996
Abstract
Cited by 1 (1 self)
The large sparse generalized eigenvalue problem plays a significant role in many application areas. Usually only a few of the smallest eigenpairs (i.e., eigenvalues and their corresponding eigenvectors) are desired in these applications. A frequently encountered problem is to solve a system slightly modified from the original system. If the modification is small, the new system can be solved using Rayleigh Quotient Iteration (RQI), with the initial Ritz vectors provided by the eigenvectors of the original system. However, if the modification is relatively large, direct use of RQI is not sufficient and, in many cases, gives inaccurate results, such as missing some of the eigenvalues. The homotopy method can be used to remedy this problem. In this paper, we review the homotopy method and its theoretical background. The approach employed here is based on perturbation theory, which is particularly suitable for practical analysis of the method. Based on this approach, we give some criteria for vari...
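For reference, a minimal sketch of the RQI building block the abstract discusses, written for the standard symmetric problem (B = I) rather than the paper's generalized one. RQI converges cubically near an eigenpair, but only to the eigenpair its starting vector is attracted to, which is the failure mode for large modifications described above.

```python
import numpy as np

def rqi(A, x, iters=10):
    """Rayleigh Quotient Iteration for symmetric A from a starting
    vector x (e.g. a Ritz vector of the unmodified problem).

    Each step uses the current Rayleigh quotient as shift and solves
    one shifted linear system (inverse iteration with an adaptive shift).
    """
    x = x / np.linalg.norm(x)
    rho = x @ A @ x                       # Rayleigh quotient
    for _ in range(iters):
        try:
            y = np.linalg.solve(A - rho * np.eye(A.shape[0]), x)
        except np.linalg.LinAlgError:
            break                         # shift hit an exact eigenvalue
        x = y / np.linalg.norm(y)
        rho = x @ A @ x
    return rho, x
```

In the generalized setting the shifted solve becomes (A - rho*B) y = B x with rho = (x^T A x)/(x^T B x); the structure of the iteration is otherwise the same.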