Results 1–10 of 13
Zhang, ‘On a problem of D. H. Lehmer’, Acta Math. Sinica
Backward perturbation analysis for scaled total least squares
Cited by 31 (3 self)
Core problems in linear algebraic systems
SIAM J. Matrix Anal. Appl., 2006
Cited by 20 (2 self)
For any linear system Ax ≈ b we define a set of core problems and show that the orthogonal upper bidiagonalization of [b, A] gives such a core problem. In particular we show that these core problems have desirable properties such as minimal dimensions. When a total least squares problem is solved by first finding a core problem, we show the resulting theory is consistent with earlier generalizations, but much simpler and clearer. The approach is important for other related solutions and leads, for example, to an elegant solution to the data least squares problem. The ideas could be useful for solving ill-posed problems.
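The construction in this abstract can be sketched numerically. Below is a minimal, illustrative Golub-Kahan bidiagonalization started from b, with full reorthogonalization and assuming the generic case with no breakdown (this is a sketch of the standard recurrence, not code from the paper): the computed bases give U^T [b, A V] = [β₁e₁, B], an upper bidiagonal matrix, which is the core-problem data.

```python
import numpy as np

def golub_kahan_core(A, b, k):
    """Golub-Kahan bidiagonalization started from b, with full
    reorthogonalization.  Returns orthonormal U (m x k), V (n x k);
    U.T @ [b, A V] is then upper bidiagonal (the core-problem data).
    Assumes the generic case: no breakdown within k steps."""
    m, n = A.shape
    U = np.zeros((m, k))
    V = np.zeros((n, k))
    U[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        v = A.T @ U[:, j]
        v -= V[:, :j] @ (V[:, :j].T @ v)          # orthogonalize against V
        V[:, j] = v / np.linalg.norm(v)
        u = A @ V[:, j]
        u -= U[:, :j + 1] @ (U[:, :j + 1].T @ u)  # orthogonalize against U
        if j + 1 < k:
            U[:, j + 1] = u / np.linalg.norm(u)
    return U, V

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))
b = rng.standard_normal(6)
U, V = golub_kahan_core(A, b, 3)
C = U.T @ np.column_stack([b, A @ V])   # 3 x 4, upper bidiagonal
```

The only nonzeros of C are its diagonal (the β's) and first superdiagonal (the α's), exactly the bidiagonal data the core-problem theory works with.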
Lanczos tridiagonalization and core problems, 2007
Cited by 8 (1 self)
The Lanczos tridiagonalization orthogonally transforms a real symmetric matrix A to symmetric tridiagonal form. The Golub-Kahan bidiagonalization orthogonally reduces a nonsymmetric rectangular matrix to upper or lower bidiagonal form. Both algorithms are very closely related. It is further shown how the core problem can be used in a simple and efficient way for solving different formulations of the original approximation problem. Our contribution relates the core problem formulation to the Lanczos tridiagonalization and derives its characteristics from the relationship between the Golub-Kahan bidiagonalization, the Lanczos tridiagonalization, and the well-known properties of Jacobi matrices.
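The Lanczos side of this relationship can be sketched as follows (an illustrative implementation with full reorthogonalization, not code from the paper; applied to AᵀA with starting vector Aᵀb it produces, in exact arithmetic, the Jacobi matrix Tₖ = BₖᵀBₖ associated with the Golub-Kahan bidiagonal Bₖ):

```python
import numpy as np

def lanczos(S, q, k):
    """Lanczos tridiagonalization of a symmetric matrix S: returns an
    orthonormal V (n x k) with T = V.T @ S @ V symmetric tridiagonal
    (a Jacobi matrix when its off-diagonal entries are nonzero).
    Full reorthogonalization; assumes no breakdown in k steps."""
    n = S.shape[0]
    V = np.zeros((n, k))
    V[:, 0] = q / np.linalg.norm(q)
    for j in range(k - 1):
        w = S @ V[:, j]
        w -= V[:, :j + 1] @ (V[:, :j + 1].T @ w)   # reorthogonalize
        V[:, j + 1] = w / np.linalg.norm(w)
    return V, V.T @ S @ V

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 4))
b = rng.standard_normal(6)
V, T = lanczos(A.T @ A, A.T @ b, 3)   # T is 3 x 3 symmetric tridiagonal
```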
Characterizing matrices that are consistent with given solutions
Cited by 5 (4 self)
For given vectors b ∈ C^m and y ∈ C^n we describe a unitary transformation approach to deriving the set F of all matrices F ∈ C^{m×n} such that y is an exact solution to the compatible system Fy = b. This is used for deriving minimal backward errors E and f such that (A+E)y = b+f when possibly noisy data A ∈ C^{m×n} and b ∈ C^m are given, and the aim is to decide if y is a satisfactory approximate solution to Ax = b. The approach might be different, but the above results are not new. However we also prove the apparently new result that two well-known approaches to making this decision are theoretically equivalent, and discuss how such knowledge can be used in designing effective stopping criteria for iterative solution techniques. All these ideas generalize to the following formulations. We extend our constructive approach to derive a superset F_{STLS+} of the set F_{STLS} of all matrices F ∈ C^{m×n} such that y is a scaled total least squares solution to Fy ≈ b. This is a new general result that specializes in two important ways. The ordinary least squares problem is an extreme case of the scaled total least squares problem, and we use our result to obtain the set F_{LS} of all matrices F ∈ C^{m×n} such that y is an exact least squares solution to Fy ≈ b. This complements the original less-constructive derivation of Waldén, Karlson and Sun [Numerical Linear Algebra with Applications, 2:271–286 (1995)]. We do the equivalent for the data least squares problem—the other extreme case of the scaled total least squares problem. Not only can the results be used as indicated above for the compatible case, but the constructive technique we use could also be applicable to other backward problems—such as those for underdetermined systems, the singular value decomposition, and the eigenproblem.
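For the compatible case with f = 0, the minimal backward error has a classical closed form: with residual r = b - Ay, the smallest Frobenius-norm E satisfying (A+E)y = b is the rank-one matrix E = r yᵀ / (yᵀy), so ‖E‖_F = ‖r‖/‖y‖. A minimal numerical check of this specialization (illustrative random data, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 3))
b = rng.standard_normal(5)
y = rng.standard_normal(3)           # candidate approximate solution

r = b - A @ y                        # residual of y for Ax = b
E = np.outer(r, y) / (y @ y)         # rank-one minimal perturbation
# (A + E) y = A y + r = b, and ||E||_F = ||r|| / ||y||
```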
Towards a Backward Perturbation Analysis for Data Least Squares Problems, 2009
Cited by 2 (2 self)
Given an approximate solution to a data least squares (DLS) problem, we would like to know its minimal backward error. Here we derive formulas for what we call an “extended” minimal backward error, which is at worst a lower bound on the minimal backward error. When the given approximate solution is a good enough approximation to the exact solution of the DLS problem (which is the aim in practice), the extended minimal backward error is the actual minimal backward error, and this is also true in other easily assessed and common cases. Since it is computationally expensive to compute the extended minimal backward error directly, we derive a lower bound on it and an asymptotic estimate for it, both of which can be evaluated less expensively. Simulation results show that for reasonable approximate solutions the lower bound has the same order as the extended minimal backward error, and the asymptotic estimate is an excellent approximation to the extended minimal backward error.
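Data least squares and ordinary least squares are the two extremes of the scaled total least squares problem discussed in these abstracts; the classical (unscaled) TLS problem in between has a well-known closed-form solution via the SVD of [A, b]. A minimal sketch of that background fact, assuming the generic case where the smallest singular value of [A, b] is simple and the TLS solution exists:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((6, 3))
b = rng.standard_normal(6)

# TLS: find the smallest [E, f] (Frobenius norm) with (A+E)x = b+f.
U, s, Vt = np.linalg.svd(np.column_stack([A, b]), full_matrices=False)
v = Vt[-1]                            # right singular vector of sigma_min
x_tls = -v[:-1] / v[-1]               # TLS solution (generic case)
D = -s[-1] * np.outer(U[:, -1], v)    # minimal correction [E, f]
E, f = D[:, :-1], D[:, -1]            # (A+E) x_tls = b + f exactly
```

Subtracting the smallest singular triplet makes [A+E, b+f] rank deficient with null vector v, so the corrected system is compatible and ‖D‖_F equals σ_min.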
Bidiagonalization as a fundamental decomposition of data in linear approximation problems, lecture at the 4th Workshop on TLS and Errors-in-Variables Modeling, 2006
A Structured Data Least Squares Algorithm and its Application in Digital Filtering, 2007
Numerical methods for the solution of the structured data least squares problem with special application in digital filtering are investigated. While the minimum mean-square error (i.e. ordinary least squares) formulation solves the linear system of equations for the case of noise in the right-hand side, data least squares is formulated for the problem with noise in the coefficient matrix. For the solution of the channel equalization problem of a linear time-invariant channel, the coefficient matrix, and hence the error in the coefficient matrix, possesses Hankel structure. Experimental verification demonstrates that the imposition of the structure of the error in the formulation, structured data least squares, generates more accurate solutions than are achieved by either standard ordinary least squares or data least squares, for signals with high signal-to-noise ratios.
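The Hankel structure referred to above can be illustrated directly: a sliding-window data matrix built from a signal is constant along its anti-diagonals, and multiplying it by a tap vector applies the filter. A minimal sketch (`hankel_data_matrix` is an illustrative helper, not the paper's algorithm):

```python
import numpy as np

def hankel_data_matrix(s, p):
    """Sliding-window data matrix H with H[i, j] = s[i + j]: Hankel,
    i.e. constant along anti-diagonals.  H @ w applies the p-tap
    filter w to the signal s, which is the kind of structured
    coefficient matrix arising in channel equalization."""
    n = len(s) - p + 1
    return np.array([[s[i + j] for j in range(p)] for i in range(n)])

rng = np.random.default_rng(4)
s = rng.standard_normal(10)           # received signal samples
H = hankel_data_matrix(s, 4)          # 7 x 4 Hankel matrix
w = rng.standard_normal(4)            # equalizer taps
```

In a structured DLS formulation, the perturbation of H is constrained to this same Hankel pattern rather than being an arbitrary matrix.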
Matrix Analysis and Applications, in honor of Gérard Meurant for his 60th birthday, 1519