Results 1–10 of 1,684,239
LSQR: An Algorithm for Sparse Linear Equations and Sparse Least Squares
ACM Trans. Math. Software, 1982
"... An iterative method is given for solving Ax ≈ b and min ‖Ax − b‖2, where the matrix A is large and sparse. The method is based on the bidiagonalization procedure of Golub and Kahan. It is analytically equivalent to the standard method of conjugate gradients, but possesses more favorable numerical ..."
Cited by 649 (21 self)
gradient algorithms, indicating that LSQR is the most reliable algorithm when A is ill-conditioned. Categories and Subject Descriptors: G.1.2 [Numerical Analysis]: Approximation – least squares approximation; G.1.3 [Numerical Analysis]: Numerical Linear Algebra – linear systems (direct and iterative methods)
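As a quick illustration of the algorithm this entry describes, a minimal sketch using SciPy's implementation of LSQR; the matrix, sizes, and seed below are made up for illustration:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import lsqr

# Illustrative sparse overdetermined system (sizes and data are made up).
rng = np.random.default_rng(0)
dense = rng.standard_normal((50, 10))
dense[np.abs(dense) < 1.0] = 0.0        # sparsify by zeroing small entries
A = csr_matrix(dense)
x_true = rng.standard_normal(10)
b = A @ x_true                          # consistent system, so the residual should vanish

# lsqr returns (x, istop, itn, r1norm, ...); only the solution is needed here.
x = lsqr(A, b)[0]
```

Since b lies in the range of A, the least-squares residual converges to zero up to the solver's default tolerance.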
Performance of various computers using standard linear equations software, 2009
"... This report compares the performance of different computer systems in solving dense systems of linear equations. The comparison involves approximately a hundred computers, ranging from the Earth Simulator to personal computers. ..."
Cited by 409 (22 self)
For Most Large Underdetermined Systems of Linear Equations the Minimal ℓ1-norm Solution is also the Sparsest Solution
Comm. Pure Appl. Math., 2004
"... We consider linear equations y = Φα where y is a given vector in R^n, Φ is a given n by m matrix with n < m ≤ An, and we wish to solve for α ∈ R^m. We suppose that the columns of Φ are normalized to unit ℓ2 norm and we place uniform measure on such Φ. We prove the existence of ρ = ρ(A) so that ..."
Cited by 560 (10 self)
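The minimal-ℓ1 program of this entry, min ‖α‖1 subject to Φα = y, can be posed as a linear program by splitting α = u − v with u, v ≥ 0. A hedged sketch using scipy.optimize.linprog; Φ, the sparse α0, and all sizes below are illustrative:

```python
import numpy as np
from scipy.optimize import linprog

def min_l1_solution(Phi, y):
    """Solve min ||alpha||_1 s.t. Phi @ alpha = y as an LP via alpha = u - v, u, v >= 0."""
    n, m = Phi.shape
    c = np.ones(2 * m)                      # objective: sum(u) + sum(v) = ||alpha||_1
    A_eq = np.hstack([Phi, -Phi])           # equality constraint: Phi @ (u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    return res.x[:m] - res.x[m:]

# Illustrative instance: a sparse alpha0 and measurements y = Phi @ alpha0.
rng = np.random.default_rng(1)
Phi = rng.standard_normal((8, 20))
Phi /= np.linalg.norm(Phi, axis=0)          # unit-l2-norm columns, as in the paper
alpha0 = np.zeros(20)
alpha0[[3, 11]] = [2.0, -1.5]
alpha = min_l1_solution(Phi, Phi @ alpha0)
```

By construction α0 is feasible, so the LP solution is feasible and has ℓ1 norm no larger than ‖α0‖1; whether it equals α0 is exactly the question the paper studies.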
Variable iterative methods for nonsymmetric systems of linear equations
SIAM J. Numer. Anal., 1983
"... Abstract. We consider a class of iterative algorithms for solving systems of linear equations where the coefficient matrix is nonsymmetric with positive-definite symmetric part. The algorithms are modelled after the conjugate gradient method, and are well suited for large sparse systems. They do not ..."
Cited by 240 (5 self)
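The paper's variable-step methods have no off-the-shelf SciPy implementation, but GMRES, a standard Krylov solver for nonsymmetric systems, illustrates the same setting; the matrix below, whose symmetric part is the identity and hence positive definite, is purely illustrative:

```python
import numpy as np
from scipy.sparse.linalg import gmres

# Illustrative nonsymmetric A with positive-definite symmetric part:
# A = I + S with S skew-symmetric, so (A + A.T) / 2 = I.
rng = np.random.default_rng(2)
R = rng.standard_normal((30, 30))
S = 0.1 * (R - R.T)
A = np.eye(30) + S
b = rng.standard_normal(30)

x, info = gmres(A, b)   # info == 0 signals convergence to the default tolerance
```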
Linear Equations, 2008
"... now appears in Section V, Subsection A, of the Task Group’s report on Conceptual ..."
Sparse solution of underdetermined linear equations by stagewise orthogonal matching pursuit, 2006
"... Finding the sparsest solution to underdetermined systems of linear equations y = Φx is NP-hard in general. We show here that for systems with ‘typical’/‘random’ Φ, a good approximation to the sparsest solution is obtained by applying a fixed number of standard operations from linear algebra. Our pr ..."
Cited by 278 (23 self)
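A minimal sketch of plain orthogonal matching pursuit, the simpler relative of the stagewise variant (StOMP) this entry describes; Φ, y, and the sparsity level in the example are illustrative:

```python
import numpy as np

def omp(Phi, y, k):
    """Plain orthogonal matching pursuit: greedily pick k columns of Phi,
    re-fitting y on the chosen support by least squares each round."""
    residual = y.astype(float).copy()
    support = []
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.T @ residual)))   # atom most correlated with residual
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x

# Trivial sanity check: with Phi = I, OMP just picks the k largest entries of y.
y = np.array([0.0, 3.0, 0.0, -2.0, 0.0, 0.0])
x_hat = omp(np.eye(6), y, 2)
```

StOMP differs in selecting many atoms per stage by thresholding the correlations, which is what makes its operation count fixed.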
Decoding by Linear Programming, 2004
"... This paper considers the classical error correcting problem which is frequently discussed in coding theory. We wish to recover an input vector f ∈ Rn from corrupted measurements y = Af + e. Here, A is an m by n (coding) matrix and e is an arbitrary and unknown vector of errors. Is it possible to rec ..."
Cited by 1400 (17 self)
fraction of the output is corrupted. This work is related to the problem of finding sparse solutions to vastly underdetermined systems of linear equations. There are also significant connections with the problem of recovering signals from highly incomplete measurements. In fact, the results introduced
Solution of systems of linear equations by minimized iterations
J. Res. Natl. Bur. Stand., 1952
"... A simple algorithm is described which is well adapted to the effective solution of large systems of linear algebraic equations by a succession of well-convergent approximations. ..."
Cited by 214 (0 self)
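The minimized-iterations method of this 1952 paper is closely related to the conjugate gradient family; a hedged sketch using SciPy's modern cg solver on an illustrative symmetric positive-definite system (the matrix and seed are made up):

```python
import numpy as np
from scipy.sparse.linalg import cg

# Illustrative SPD system: A = M.T @ M + I is symmetric positive definite.
rng = np.random.default_rng(3)
M = rng.standard_normal((25, 25))
A = M.T @ M + np.eye(25)
b = rng.standard_normal(25)

x, info = cg(A, b)   # info == 0 signals convergence to the default tolerance
```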
Linear equations in primes
 Annals of Mathematics
"... Abstract. Consider a system Ψ of non-constant affine-linear forms ψ1, ..., ψt : Z^d → Z, no two of which are linearly dependent. Let N be a large integer, and let K ⊆ [−N, N]^d be convex. A generalisation of a famous and difficult open conjecture of Hardy and Littlewood predicts an asymptotic, as N → ..."
Cited by 83 (5 self)
simultaneous linear system of equations, in which all unknowns are required to be prime. In this paper we (conditionally) verify this asymptotic under the assumption that no two of the affine-linear forms ψ1, ..., ψt are affinely related; this excludes the important “binary” cases such as the twin prime
Parallel Numerical Linear Algebra, 1993
"... We survey general techniques and open problems in numerical linear algebra on parallel architectures. We first discuss basic principles of parallel processing, describing the costs of basic operations on parallel machines, including general principles for constructing efficient algorithms. We illust ..."
Cited by 766 (23 self)
illustrate these principles using current architectures and software systems, and by showing how one would implement matrix multiplication. Then, we present direct and iterative algorithms for solving linear systems of equations, linear least squares problems, the symmetric eigenvalue problem