Results 1 - 5 of 5
The Dynamical Systems Method for solving nonlinear equations with monotone operators
Abstract

Cited by 15 (12 self)
A review of the authors' results is given. Several methods are discussed for solving nonlinear equations F(u) = f, where F is a monotone operator in a Hilbert space, and noisy data are given in place of the exact data. A discrepancy principle for solving the equation is formulated and justified. Various versions of the Dynamical Systems Method (DSM) for solving the equation are formulated. These methods consist of a regularized Newton-type method, a gradient-type method, and a simple iteration method. A priori and a posteriori choices of stopping rules for these methods are proposed and justified. Convergence of the solutions obtained by these methods to the minimal-norm solution of the equation F(u) = f is proved. Iterative schemes with a posteriori choices of stopping rule, corresponding to the proposed DSM, are formulated. Convergence of these iterative schemes to a solution of the equation F(u) = f is justified. New nonlinear differential inequalities are derived and applied to a study of the large-time behavior of solutions to evolution equations. Discrete versions of these inequalities are established.
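As an illustration of the kind of iteration the abstract describes, the regularized Newton-type scheme with a discrepancy-principle stopping rule can be sketched for a toy finite-dimensional monotone operator. Everything concrete below (the operator F, the decay schedule a_n, and the constants) is an illustrative assumption, not taken from the paper:

```python
import numpy as np

# Toy monotone operator F(u) = u + u**3 (componentwise) on R^2;
# its Jacobian diag(1 + 3u^2) is positive definite, so F is monotone.
def F(u):
    return u + u**3

def Fprime(u):
    return np.diag(1.0 + 3.0 * u**2)

u_star = np.array([0.5, -0.3])          # exact solution (illustrative)
f = F(u_star)                           # exact data
delta = 1e-3                            # noise level
f_delta = f + delta * np.array([1.0, -1.0]) / np.sqrt(2.0)  # ||f_delta - f|| = delta

u = np.zeros(2)                         # initial approximation
C = 1.05                                # discrepancy constant, C > 1
for n in range(500):
    # A posteriori stopping rule (discrepancy principle):
    if np.linalg.norm(F(u) - f_delta) <= C * delta:
        break
    a_n = 0.1 / (n + 1)                 # regularization parameter a_n -> 0
    # Regularized Newton-type step:
    # u_{n+1} = u_n - (F'(u_n) + a_n I)^{-1} (F(u_n) + a_n u_n - f_delta)
    u = u - np.linalg.solve(Fprime(u) + a_n * np.eye(2),
                            F(u) + a_n * u - f_delta)
```

At the stopping index the iterate satisfies the discrepancy bound and, because this toy operator is well conditioned, lies close to the minimal-norm solution u_star.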
Dynamical Systems Method (DSM) for solving equations with monotone operators without smoothness assumptions on F'(u)
Journal of Mathematical Analysis and Applications, 2010
On a class of frozen regularized Gauss-Newton methods for nonlinear inverse problems
Abstract

Cited by 2 (0 self)
Abstract. In this paper we consider a class of regularized Gauss-Newton methods for solving nonlinear inverse problems, for which an a posteriori stopping rule is proposed to terminate the iteration. These methods have the frozen feature that they require the computation of the Fréchet derivative only at the initial approximation, so the computational work is considerably reduced. Under certain mild conditions, we give a convergence analysis and derive various estimates, including the order optimality, for these methods.
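The "frozen" feature described above can be sketched on a toy problem: the derivative A0 is computed once at the initial guess and reused in every regularized Gauss-Newton step. The forward map, the geometric decay of the regularization parameter alpha_k, and the constants are illustrative assumptions, not the paper's setting:

```python
import numpy as np

# Mildly nonlinear forward map F: R^2 -> R^2 (a stand-in inverse problem).
def F(x):
    return np.array([x[0] + 0.1 * x[1]**2,
                     x[1] + 0.1 * x[0]**2])

def jacobian(x):
    return np.array([[1.0,        0.2 * x[1]],
                     [0.2 * x[0], 1.0       ]])

x_true = np.array([1.0, -0.5])
delta = 1e-3
y_delta = F(x_true) + delta * np.array([0.6, 0.8])  # ||noise|| = delta

x0 = np.zeros(2)
A0 = jacobian(x0)          # Frechet derivative computed ONCE, at x0 ("frozen")
x = x0.copy()
tau = 1.1                  # discrepancy constant
alpha = 1.0                # initial regularization parameter
for k in range(100):
    # A posteriori stopping rule (discrepancy principle):
    if np.linalg.norm(F(x) - y_delta) <= tau * delta:
        break
    # Frozen regularized Gauss-Newton step:
    # x_{k+1} = x_k - (A0^T A0 + alpha_k I)^{-1} (A0^T (F(x_k) - y_delta) + alpha_k (x_k - x0))
    rhs = A0.T @ (F(x) - y_delta) + alpha * (x - x0)
    x = x - np.linalg.solve(A0.T @ A0 + alpha * np.eye(2), rhs)
    alpha *= 0.5           # geometric decay alpha_k = alpha_0 * q^k
```

Only one Jacobian evaluation is ever performed; in a realistic inverse problem this is where the computational saving comes from.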
Newton-type regularization methods for nonlinear inverse problems
Abstract
Abstract: Inverse problems arise whenever one searches for unknown causes based on observations of their effects. Such problems are usually ill-posed in the sense that their solutions do not depend continuously on the data. In practical applications one never has the exact data; only noisy data are available, due to errors in the measurements. Thus, the development of stable methods for solving inverse problems is an important topic. In the last two decades, many methods have been developed for solving nonlinear inverse problems. Due to their straightforward implementation and fast convergence, more and more attention has been paid to Newton-type regularization methods, including the general iteratively regularized Gauss-Newton methods and the inexact Newton regularization methods.

The iteratively regularized Gauss-Newton method was proposed by Bakushinski for solving nonlinear inverse problems in Hilbert spaces, and was quickly generalized to its general form. These methods produce all iterates in a trust region centered around the initial guess. Their regularization property has been explored under both a priori and a posteriori stopping rules. We will present our recent convergence results when the discrepancy principle is used to terminate the iteration.

The inexact Newton regularization methods were initiated by Hanke and then generalized by Rieder to solve nonlinear inverse problems in Hilbert spaces. In contrast to the iteratively regularized Gauss-Newton methods, these methods produce the next iterate in a trust region centered around the current iterate by regularizing local linearized equations; an approximate solution is output by a discrepancy principle. Although numerical simulations indicate that they are quite efficient, it was long an open problem whether the inexact Newton methods are order optimal. We will report our recent work confirming that these methods are indeed order optimal.
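The inexact Newton scheme contrasted above can be sketched on a toy problem: at each outer iterate the linearized equation is regularized by an inner iteration (plain Landweber here, one common choice) stopped by a relative discrepancy, and the outer loop stops by the usual discrepancy principle. The forward map, the inner method, and all constants are illustrative assumptions, not the specific schemes of Hanke or Rieder:

```python
import numpy as np

# Mildly nonlinear forward map F: R^2 -> R^2 (a stand-in inverse problem).
def F(x):
    return np.array([x[0] + 0.1 * x[1]**2,
                     x[1] + 0.1 * x[0]**2])

def jacobian(x):
    return np.array([[1.0,        0.2 * x[1]],
                     [0.2 * x[0], 1.0       ]])

x_true = np.array([0.8, 0.4])
delta = 1e-3
y_delta = F(x_true) + delta * np.array([0.0, 1.0])  # ||noise|| = delta

x = np.zeros(2)
tau, mu = 1.1, 0.7         # outer discrepancy constant, inner tolerance
for k in range(100):
    b = y_delta - F(x)     # residual at the CURRENT iterate
    if np.linalg.norm(b) <= tau * delta:    # outer discrepancy principle
        break
    A = jacobian(x)        # derivative recomputed at the current iterate
    # Inner regularization: Landweber iteration on the linearized equation
    # A h = b, stopped when the inner residual drops below mu * ||b||.
    h = np.zeros(2)
    omega = 0.5            # step size, omega < 1 / ||A||^2 for this toy A
    while np.linalg.norm(A @ h - b) > mu * np.linalg.norm(b):
        h = h + omega * A.T @ (b - A @ h)
    x = x + h              # next iterate stays near the current one
```

Note the structural contrast with the iteratively regularized Gauss-Newton family: here the linearization and the trust region are centered at the current iterate, not at the initial guess.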
In some situations, regularization methods formulated in a Hilbert space setting may not produce good results, since they tend to over-smooth the solutions and thus destroy special features of the exact solution. On the other hand, many inverse problems can be formulated more naturally in Banach spaces than in Hilbert spaces. Therefore, it is necessary to develop regularization methods in the framework of Banach spaces. By making use of duality mappings and the Bregman distance, we will indicate how to formulate some Newton-type methods in a Banach space setting and present the corresponding convergence results.