Results 1–10 of 18
Trust Region SQP Methods With Inexact Linear System Solves For Large-Scale Optimization
, 2006
"... by ..."
(Show Context)
Parallel multilevel restricted Schwarz preconditioners with pollution removing for PDE-constrained optimization
 SIAM J. Sci. Comput
"... Abstract. We develop a class of Vcycle type multilevel restricted additive Schwarz (RAS) methods and study the numerical and parallel performance of the new fully coupled methods for solving large sparse Jacobian systems arising from the discretization of some optimization problems constrained by n ..."
Abstract

Cited by 4 (3 self)
 Add to MetaCart
(Show Context)
Abstract. We develop a class of V-cycle type multilevel restricted additive Schwarz (RAS) methods and study the numerical and parallel performance of the new fully coupled methods for solving large sparse Jacobian systems arising from the discretization of some optimization problems constrained by nonlinear partial differential equations. Straightforward extensions of the one-level RAS to multilevel do not work due to the pollution effects of the coarse interpolation. We then introduce, in this paper, a pollution removing coarse-to-fine interpolation scheme for one of the components of the multi-component linear system, and show numerically that the combination of the new interpolation scheme with the RAS smoothed multigrid method provides an effective family of techniques for solving rather difficult PDE-constrained optimization problems. Numerical examples involving the boundary control of incompressible Navier-Stokes flows are presented in detail.

Key words. Schwarz preconditioners, domain decomposition, multilevel methods, parallel computing, partial differential equations constrained optimization, inexact Newton, flow control.

1. Introduction. There are two major families of Newton techniques for solving nonlinear optimization problems: reduced space methods, characterized by the partition of the problem into smaller ones at each Newton step, and full space ones. As …
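The restricted variant of additive Schwarz used here differs from classical additive Schwarz only in the prolongation step: each subdomain solve uses the overlapping index set, but the local solution is added back only on that subdomain's non-overlapping core. A minimal dense-matrix sketch of one RAS application follows; it is illustrative only and does not attempt the paper's multilevel, pollution-removing scheme:

```python
import numpy as np

def ras_apply(A, r, overlap_sets, core_sets):
    """One application z = M^{-1} r of a one-level restricted additive Schwarz
    preconditioner: solve on each overlapping subdomain, but add the local
    solution back only on that subdomain's non-overlapping core."""
    z = np.zeros_like(r, dtype=float)
    for idx, core in zip(overlap_sets, core_sets):
        idx = np.asarray(idx)
        local = np.linalg.solve(A[np.ix_(idx, idx)], r[idx])  # exact local solve
        keep = np.isin(idx, core)   # restriction to the non-overlapping core
        z[idx[keep]] += local[keep]
    return z
```

With zero overlap this reduces to block Jacobi (and is exact for a block-diagonal matrix); with genuine overlap it is a preconditioner applied inside a Krylov method, e.g. on each inexact Newton step.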
Fast iterative solution of elliptic control problems in wavelet discretization
 J. COMP. APPL. MATH
, 2005
"... We investigate wavelet methods for the efficient numerical solution of a class of control problems constrained by a linear elliptic boundary value problem where the cost functional may contain fractional Sobolev norms of the control and the state. Starting point is the formulation of the infinitedi ..."
Abstract

Cited by 4 (3 self)
 Add to MetaCart
We investigate wavelet methods for the efficient numerical solution of a class of control problems constrained by a linear elliptic boundary value problem where the cost functional may contain fractional Sobolev norms of the control and the state. The starting point is the formulation of the infinite-dimensional control problem in terms of (boundary-adapted biorthogonal spline) wavelets, involving only ℓ2 norms of wavelet expansion coefficients (where different norms are realized by a diagonal scaling together with a Riesz map) and constraints in the form of an ℓ2 isomorphism. The coupled system of equations resulting from optimization is solved by an inexact conjugate gradient (CG) method for the control, which involves the approximate inversion of the primal and the adjoint operator using again CG iterations. Starting from a coarse discretization level, we use nested iteration to solve the coupled system on successively finer uniform discretizations up to discretization error accuracy on each level. The resulting inexact CG scheme is a ‘fast solver’: it is of asymptotic optimal complexity in the sense that the overall computational effort to compute the solution up to discretization error on the finest grid is proportional to the number of unknowns on that grid, a consequence of grid-independent condition numbers of the linear operators in wavelet coordinates. In the numerical examples we study the choice of different norms and the regularization parameter in the cost functional and their effect on the solution. Moreover, for different situations the performance of the fully iterative inexact CG scheme is investigated, confirming the theoretical results.
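The nested-iteration idea described here — solve exactly on a coarse grid, interpolate as the initial guess on the next finer grid, and stop the inner CG at discretization-error accuracy — can be sketched on a 1D Poisson model problem. This is an illustrative stand-in only; the paper works in wavelet coordinates with biorthogonal spline bases:

```python
import numpy as np

def laplacian(n):
    """2nd-order finite differences for -u'' on n interior points of (0, 1)."""
    h = 1.0 / (n + 1)
    return (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h**2

def cg(A, b, x0, tol):
    """Plain conjugate gradients, stopped at relative residual tol."""
    x = x0.copy()
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    bnorm = np.linalg.norm(b)
    for _ in range(10 * len(b)):
        if np.sqrt(rs) <= tol * bnorm:
            break
        Ap = A @ p
        a = rs / (p @ Ap)
        x += a * p
        r -= a * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def prolong(xc):
    """Linear interpolation from n coarse to 2n+1 fine interior points."""
    n = len(xc)
    xf = np.zeros(2 * n + 1)
    xf[1::2] = xc
    xf[0::2] = (np.concatenate(([0.0], xc)) + np.concatenate((xc, [0.0]))) / 2
    return xf

def nested_iteration(rhs, n0, levels):
    """Exact solve on the coarsest grid; on each finer grid, run CG from the
    interpolated coarse solution only down to O(h^2) accuracy."""
    n = n0
    x = np.linalg.solve(laplacian(n), rhs(n))
    for _ in range(levels):
        x = prolong(x)
        n = 2 * n + 1
        h = 1.0 / (n + 1)
        x = cg(laplacian(n), rhs(n), x, tol=h**2)
    return x
```

Because each level starts from an already-accurate initial guess and stops at discretization-error accuracy, the work per level stays proportional to the number of unknowns on that level — the optimal-complexity argument sketched in the abstract, which additionally relies on grid-independent condition numbers in wavelet coordinates.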
EFFICIENT PRECONDITIONERS FOR OPTIMALITY SYSTEMS ARISING IN CONNECTION WITH INVERSE PROBLEMS
, 2010
"... This paper is devoted to the numerical treatment of linear optimality systems (OS) arising in connection with inverse problems for partial differential equations. If such inverse problems are regularized by Tikhonov regularization, then it follows from standard theory that the associated OS is wel ..."
Abstract

Cited by 4 (3 self)
 Add to MetaCart
This paper is devoted to the numerical treatment of linear optimality systems (OS) arising in connection with inverse problems for partial differential equations. If such inverse problems are regularized by Tikhonov regularization, then it follows from standard theory that the associated OS is well-posed, provided that the regularization parameter α is positive and that the involved state equation satisfies suitable assumptions. We explain and analyze how certain mapping properties of the operators appearing in the OS can be employed to define efficient preconditioners for finite element (FE) approximations of such systems. The key feature of the scheme is that the number of iterations needed to solve the preconditioned problem by the minimal residual method is bounded independently of the mesh parameter h, used in the FE discretization, and only increases moderately as α → 0. More specifically, if the stopping criterion for the iteration process is defined in terms of the associated energy norm, then the number of iterations required (in the severely ill-posed case) cannot grow faster than O((ln α)²). Our analysis is based on a careful study of the involved operators which yields the distribution of the eigenvalues of the preconditioned OS. Finally, the theoretical results are illuminated by a number of numerical experiments addressing both a model problem studied by Borzi, Kunisch and Kwak [14] and an inverse problem arising in connection with electrocardiography [41].
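The shape of such a preconditioned iteration can be illustrated on a toy Tikhonov optimality system: write the regularized least-squares problem as a symmetric saddle point system and hand MINRES a symmetric positive definite block-diagonal preconditioner. The operator K, data d, and preconditioner blocks below are assumptions for illustration, not the mapping-property-based preconditioners analyzed in the paper:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, minres

rng = np.random.default_rng(0)
n, m, alpha = 40, 20, 1e-3
K = rng.standard_normal((m, n))      # assumed toy forward operator
d = rng.standard_normal(m)           # synthetic data

# Optimality system of min_u 0.5||Ku - d||^2 + 0.5*alpha*||u||^2, written as a
# symmetric saddle point system in (u, p) with the auxiliary variable p = Ku - d:
#   [ alpha*I   K^T ] [u]   [0]
#   [   K       -I  ] [p] = [d]
S = np.block([[alpha * np.eye(n), K.T],
              [K, -np.eye(m)]])
rhs = np.concatenate([np.zeros(n), d])

# SPD block-diagonal preconditioner diag((alpha*I + K^T K)^{-1}, I).
Pinv = np.linalg.inv(alpha * np.eye(n) + K.T @ K)
M = LinearOperator((n + m, n + m),
                   matvec=lambda v: np.concatenate([Pinv @ v[:n], v[n:]]))

x, info = minres(S, rhs, M=M)
u = x[:n]                            # recovered control/parameter
```

MINRES requires a symmetric system matrix and an SPD preconditioner, both satisfied here; the eliminated variable u solves the usual Tikhonov normal equations (αI + KᵀK)u = Kᵀd.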
Domain decomposition methods for advection-dominated linear-quadratic elliptic optimal control problems
 Comp. Methods in Applied Mech. Eng
"... We present an optimizationlevel domain decomposition (DD) preconditioner for the solution of advection dominated elliptic linearquadratic optimal control problems, which arise in many science and engineering applications. The DD preconditioner is based on a decomposition of the optimality cond ..."
Abstract

Cited by 2 (0 self)
 Add to MetaCart
(Show Context)
We present an optimization-level domain decomposition (DD) preconditioner for the solution of advection-dominated elliptic linear-quadratic optimal control problems, which arise in many science and engineering applications. The DD preconditioner is based on a decomposition of the optimality conditions for the elliptic linear-quadratic optimal control problem into smaller subdomain optimality conditions with Dirichlet boundary conditions for the states and the adjoints on the subdomain interfaces.
Domain Decomposition Methods for Linear-Quadratic Elliptic Optimal Control Problems
, 2004
"... This thesis is concerned with the development of domain decomposition (DD) based preconditioners for linearquadratic elliptic optimal control problems (LQEOCPs), their analysis, and numerical studies of their performance on model problems. The solution of LQEOCPs arises in many applications, ei ..."
Abstract

Cited by 2 (1 self)
 Add to MetaCart
This thesis is concerned with the development of domain decomposition (DD) based preconditioners for linear-quadratic elliptic optimal control problems (LQ-EOCPs), their analysis, and numerical studies of their performance on model problems. The solution of LQ-EOCPs arises in many applications, either directly or as subproblems in Newton or Sequential Quadratic Programming methods for the solution of nonlinear elliptic optimal control problems. After a finite element discretization, convex LQ-EOCPs lead to large-scale symmetric indefinite linear systems. The solution of these large systems is a very time-consuming step and must be done iteratively, typically with a preconditioned Krylov subspace method. Developing good preconditioners for these linear systems is an important part of improving the overall performance of the solution method. The DD …
Integration of Sequential Quadratic Programming and Domain Decomposition Methods for Nonlinear Optimal Control Problems
"... Summary. We discuss the integration of a sequential quadratic programming (SQP) method with an optimizationlevel domain decomposition (DD) preconditioner for the solution of the quadratic optimization subproblems. The DD method is an extension of the wellknown NeumannNeumann method to the optimiz ..."
Abstract

Cited by 1 (1 self)
 Add to MetaCart
(Show Context)
Summary. We discuss the integration of a sequential quadratic programming (SQP) method with an optimization-level domain decomposition (DD) preconditioner for the solution of the quadratic optimization subproblems. The DD method is an extension of the well-known Neumann-Neumann method to the optimization context and is based on a decomposition of the first order system of optimality conditions. The SQP method uses a trust-region globalization and requires the solution of quadratic subproblems that are known to be convex, hence solving the first order system of optimality conditions associated with these subproblems is equivalent to solving these subproblems. In addition, our SQP method allows the inexact solution of these subproblems and adjusts the level of exactness with which these subproblems are solved based on the progress of the SQP method. The overall method is applied to a boundary control problem governed by a semilinear elliptic equation.
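The inexactness control described in the summary — solving each convex QP subproblem's optimality system only as accurately as the outer progress warrants — can be sketched on a tiny equality-constrained problem. The inner solver below is plain CG on the normal equations, a hypothetical stand-in for the Neumann-Neumann preconditioned solve, and the trust-region globalization is omitted:

```python
import numpy as np

def cg_normal(K, b, eta):
    """Inexact inner solver: CG on the normal equations K^T K x = K^T b,
    stopped once ||Kx - b|| <= eta * ||b||. A stand-in for any inexact
    Krylov solve of the QP optimality (KKT) system."""
    x = np.zeros_like(b)
    r = K.T @ b
    p = r.copy()
    rs = r @ r
    bnorm = np.linalg.norm(b)
    for _ in range(20 * len(b)):
        if np.linalg.norm(K @ x - b) <= eta * bnorm:
            break
        Kp = K.T @ (K @ p)
        a = rs / (p @ Kp)
        x += a * p
        r -= a * Kp
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def inexact_sqp(grad_f, hess_f, c, jac_c, x0, lam0, tol=1e-10):
    """Equality-constrained SQP core: each step solves the KKT system of the
    (convex) QP subproblem only to a tolerance tied to the current optimality
    residual; trust-region globalization is omitted for brevity."""
    x, lam = np.asarray(x0, float).copy(), np.asarray(lam0, float).copy()
    for _ in range(100):
        J = jac_c(x)
        r = np.concatenate([grad_f(x) + J.T @ lam, c(x)])
        if np.linalg.norm(r) < tol:
            break
        K = np.block([[hess_f(x), J.T],
                      [J, np.zeros((len(lam), len(lam)))]])
        eta = min(0.1, np.linalg.norm(r))   # tighter inner solves as we progress
        step = cg_normal(K, -r, eta)
        x += step[:len(x)]
        lam += step[len(x):]
    return x, lam
```

On a quadratic model problem the KKT residual contracts by roughly the forcing term η at each outer step, so the level of exactness tightens automatically as the iterates converge.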
FETI-DP methods for Optimal Control Problems
"... We consider FETIDP domain decomposition methods for optimal control problems of the form min ..."
Abstract

Cited by 1 (1 self)
 Add to MetaCart
(Show Context)
We consider FETI-DP domain decomposition methods for optimal control problems of the form min …
ANALYSIS OF THE MINIMAL RESIDUAL METHOD APPLIED TO ILL-POSED OPTIMALITY SYSTEMS
, 2012
"... We analyze the performance of the Minimal Residual Method applied to linear KarushKuhnTucker systems arising in connection with inverse problems. Such optimality systems typically have a saddle point structure and have unique solutions for all α> 0, where α is the parameter employed in the Tik ..."
Abstract

Cited by 1 (0 self)
 Add to MetaCart
We analyze the performance of the Minimal Residual Method applied to linear Karush-Kuhn-Tucker systems arising in connection with inverse problems. Such optimality systems typically have a saddle point structure and have unique solutions for all α > 0, where α is the parameter employed in the Tikhonov regularization. Unfortunately, the associated spectral condition number is very large for small values of α, which strongly indicates that their numerical treatment is difficult. Our main result shows that a broad range of linear ill-posed optimality systems can be solved with a number of iterations of order O(ln(α⁻¹)). More precisely, in the severely ill-posed case the number of iterations needed by the Minimal Residual Method cannot grow faster than O(ln(α⁻¹)) as α → 0. This result is obtained by carefully analyzing the spectrum of the associated saddle point operator: except for a few isolated eigenvalues, the spectrum consists of bounded intervals. Krylov subspace methods handle such problems very well. We illuminate our theoretical findings with some numerical results for inverse problems involving partial differential equations. Our investigation is inspired by Prof. H. Egger’s discussion of similar results valid for the conjugate gradient algorithm applied to the normal equations.
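The claim that the spectral condition number of such KKT systems blows up as α → 0, while the negative part of the spectrum stays inside a fixed interval, is easy to reproduce on a toy Tikhonov saddle point system. The operator K below is an assumed random stand-in for an ill-posed forward operator, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 15, 30
K = rng.standard_normal((m, n)) * 0.5    # assumed toy forward operator

def kkt(alpha):
    """Tikhonov optimality system for min_u 0.5||Ku - d||^2 + 0.5*alpha*||u||^2,
    in symmetric saddle point form with the auxiliary variable p = Ku - d."""
    return np.block([[alpha * np.eye(n), K.T],
                     [K, -np.eye(m)]])

for alpha in [1e-1, 1e-3, 1e-5]:
    ev = np.linalg.eigvalsh(kkt(alpha))
    cond = np.abs(ev).max() / np.abs(ev).min()
    # A 2x2 computation per singular value of K shows every negative eigenvalue
    # is <= -1, while the smallest positive eigenvalue equals alpha (attained on
    # null(K)); hence the condition number grows like 1/alpha as alpha -> 0.
    print(f"alpha={alpha:.0e}  cond={cond:.1e}")
```

Despite the unbounded condition number, the eigenvalues fall into two intervals (negative ones bounded away from zero, positive ones with a cluster near α), which is the structure that lets MINRES converge in O(ln(α⁻¹)) iterations.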