Results 1–10 of 12
A framework for the adaptive finite element solution of large inverse problems. I. Basic techniques
, 2004
Cited by 23 (7 self)
Abstract. Since problems involving the estimation of distributed coefficients in partial differential equations are numerically very challenging, efficient methods are indispensable. In this paper, we will introduce a framework for the efficient solution of such problems. This comprises the use of adaptive finite element schemes, solvers for the large linear systems arising from discretization, and methods to treat additional information in the form of inequality constraints on the parameter to be recovered. The methods to be developed will be based on an all-at-once approach, in which the inverse problem is solved through a Lagrangian formulation. The main feature of the paper is the use of a continuous (function space) setting to formulate algorithms, in order to allow for discretizations that are adaptively refined as nonlinear iterations proceed. This entails that steps such as the description of a Newton step or a line search are first formulated for continuous functions and only then evaluated for discrete functions. In turn, this approach avoids the dependence of finite-dimensional norms on the mesh size, making individual steps of the algorithm comparable even if they use differently refined meshes. Numerical examples will demonstrate the applicability and efficiency of the method for problems with several million unknowns and more than 10,000 parameters. Key words: adaptive finite elements, inverse problems, Newton method on function spaces. AMS subject classifications: 65N21, 65K10, 35R30, 49M15, 65N50.
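A small numerical illustration (ours, not the paper's code) of why the function-space setting matters: the plain Euclidean norm of a function's nodal values grows with mesh refinement, while the L2 norm obtained by weighting the nodal values with the mesh spacing converges, so quantities measured in it remain comparable across differently refined meshes.

```python
import numpy as np

# A fixed function sampled on ever-finer meshes. The Euclidean norm of its
# nodal values depends on the mesh size; the (lumped-mass) L2 norm does not.
for n in [10, 100, 1000, 10000]:
    x = np.linspace(0.0, 1.0, n + 1)
    u = np.sin(np.pi * x)
    h = 1.0 / n
    euclidean = np.linalg.norm(u)          # mesh dependent: grows like sqrt(n)
    l2 = np.sqrt(h) * np.linalg.norm(u)    # mesh independent: -> sqrt(1/2)
    print(f"n={n:6d}  ||u||_2 = {euclidean:8.3f}  ||u||_L2 = {l2:.5f}")
```

Comparing Newton steps or line-search decrements in the L2 norm therefore gives mesh-independent numbers even when successive iterates live on different meshes.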
Stochastic algorithms for inverse problems involving PDEs and many measurements. Submitted
, 2012
Cited by 5 (5 self)
Inverse problems involving systems of partial differential equations (PDEs) can be very expensive to solve numerically. This is especially so when many experiments, involving different combinations of sources and receivers, are employed in order to obtain reconstructions of acceptable quality. The mere evaluation of a misfit function (the distance between predicted and observed data) often requires hundreds or thousands of PDE solves. This article develops and assesses dimensionality reduction methods, both stochastic and deterministic, to reduce this computational burden. We present in detail our methods for solving such inverse problems for the famous DC resistivity and EIT problems. These methods involve the incorporation of a priori information such as piecewise smoothness, bounds on the sought conductivity surface, or even a piecewise constant solution. We then assume that all experiments share the same set of receivers and concentrate on methods for reducing the number of combinations of experiments, called simultaneous sources, that are used at each stabilized Gauss-Newton iteration. Algorithms for controlling the number of such combined sources are proposed and justified. Evaluating the misfit approximately, except for the final verification for terminating the process, always involves random sampling. Methods for selecting the combined simultaneous sources, involving either random sampling or truncated SVD, are proposed and compared. Highly efficient variants of the resulting algorithms are identified.
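The simultaneous-sources reduction can be sketched in a few lines of linear algebra (a toy stand-in of our own: a random matrix plays the forward map, so no PDE is actually solved). With Gaussian weights w, the expectation of ||(FQ - D)w||^2 equals the full misfit over all experiments, so averaging over a few random source combinations gives an unbiased misfit estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
n, s = 50, 200                                  # model size, number of sources
F = rng.standard_normal((n, n)) / np.sqrt(n)    # stand-in linear forward map
Q = rng.standard_normal((n, s))                 # one source vector per column
D = F @ Q + 0.01 * rng.standard_normal((n, s))  # "observed" noisy data
R = F @ Q - D                                   # residual for all experiments
exact = np.sum(R ** 2)                          # full misfit: s PDE solves

k = 20                                          # combined simultaneous sources
W = rng.standard_normal((s, k))                 # Gaussian mixing weights
approx = np.sum((R @ W) ** 2) / k               # only k "solves" needed
print(f"exact misfit {exact:.4f}, estimate from {k} combinations {approx:.4f}")
```

In the PDE setting each column of R@W costs one forward solve with a combined source, so k << s combinations replace s individual solves per Gauss-Newton iteration.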
ASSESSING STOCHASTIC ALGORITHMS FOR LARGE SCALE NONLINEAR LEAST SQUARES PROBLEMS USING EXTREMAL PROBABILITIES OF LINEAR COMBINATIONS OF GAMMA RANDOM VARIABLES
Cited by 1 (1 self)
Abstract. This article considers stochastic algorithms for efficiently solving a class of large-scale nonlinear least squares (NLS) problems which frequently arise in applications. We propose eight variants of a practical randomized algorithm where the uncertainties in the major stochastic steps are quantified. Such stochastic steps involve approximating the NLS objective function using Monte Carlo methods, and this is equivalent to the estimation of the trace of corresponding symmetric positive semidefinite (SPSD) matrices. For the latter, we prove tight necessary and sufficient conditions on the sample size (which translates to cost) to satisfy the prescribed probabilistic accuracy. We show that these conditions are practically computable and yield small sample sizes. They are then incorporated in our stochastic algorithm to quantify the uncertainty in each randomized step. The bounds we use are applications of more general results regarding extremal tail probabilities of linear combinations of gamma-distributed random variables. We derive and prove new results concerning the maximal and minimal tail probabilities of such linear combinations, which can be considered independently of the rest of this paper.
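The Monte Carlo trace estimation at the core of this abstract can be illustrated with a Hutchinson-type estimator (a minimal sketch under our own toy data, not the article's algorithm): tr(A) is approximated by averaging quadratic forms z^T A z over Gaussian probe vectors z.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
B = rng.standard_normal((n, n))
A = B @ B.T                              # symmetric positive semidefinite
exact = np.trace(A)

k = 500                                  # Monte Carlo sample size ("cost")
Z = rng.standard_normal((n, k))          # Gaussian probe vectors as columns
# z_j^T A z_j for each column j, then average: an unbiased trace estimate
estimate = np.mean(np.sum(Z * (A @ Z), axis=0))
print(f"tr(A) = {exact:.1f}, Monte Carlo estimate = {estimate:.1f}")
```

For Gaussian z and SPSD A, each quadratic form z^T A z is a linear combination of gamma (scaled chi-square) random variables with A's eigenvalues as weights, which is exactly where the article's extremal tail-probability results determine the required k.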
Data completion and stochastic algorithms for PDE inversion problems with many measurements
, 2013
Cited by 1 (1 self)
Inverse problems involving systems of partial differential equations (PDEs) with many measurements or experiments can be very expensive to solve numerically. In a recent paper we examined dimensionality reduction methods, both stochastic and deterministic, to reduce this computational burden, assuming that all experiments share the same set of receivers. In the present article we consider the more general and practically important case where receivers are not shared across experiments. We propose a data completion approach to alleviate this problem. This is done by means of an approximation using gradient or Laplacian regularization, extending the existing data for each experiment to the union of all receiver locations. Results using the method of simultaneous sources with the completed data are then compared to those obtained by a more general but slower random subset method which requires no modifications.
The lost honour of ℓ2-based regularization
, 2012
Cited by 1 (1 self)
In the past two decades, regularization methods based on the ℓ1 norm, including sparse wavelet representations and total variation, have become immensely popular. So much so that we were led to consider the question whether ℓ1-based techniques ought to altogether replace the simpler, faster and better known ℓ2-based alternatives as the default approach to regularization techniques. The occasionally tremendous advances of ℓ1-based techniques are not in doubt. However, such techniques also have their limitations. This article explores advantages and disadvantages compared to ℓ2-based techniques using several practical case studies. Taking into account the considerable added hardship in calculating solutions of the resulting computational problems, ℓ1-based techniques must offer substantial advantages to be worthwhile. In this light our results suggest that in many applications, though not all, ℓ2-based recovery may still be preferred.
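The ℓ1-versus-ℓ2 tradeoff has a particularly clean form for denoising, where both regularized problems have elementwise closed-form solutions (a toy illustration of ours, not one of the article's case studies): the ℓ2 penalty shrinks everything uniformly, while the ℓ1 penalty soft-thresholds, zeroing small entries and preserving sparsity.

```python
import numpy as np

def l2_denoise(d, lam):
    # minimizer of ||u - d||^2 + lam ||u||^2: uniform shrinkage
    return d / (1.0 + lam)

def l1_denoise(d, lam):
    # minimizer of ||u - d||^2 + lam ||u||_1: soft thresholding
    return np.sign(d) * np.maximum(np.abs(d) - lam / 2.0, 0.0)

rng = np.random.default_rng(2)
truth = np.zeros(200)
truth[[20, 90, 150]] = [3.0, -2.0, 4.0]          # sparse spike signal
d = truth + 0.1 * rng.standard_normal(200)       # noisy observation

u2, u1 = l2_denoise(d, 0.5), l1_denoise(d, 0.5)
print("l2 error:", np.linalg.norm(u2 - truth))
print("l1 error:", np.linalg.norm(u1 - truth))
print("nonzeros: l2 =", np.count_nonzero(u2), " l1 =", np.count_nonzero(u1))
```

On a sparse signal like this the ℓ1 result wins clearly; on a smooth signal the cheap ℓ2 shrinkage is competitive, which is the article's central point.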
List of Tables
, 2012
In this work, we implement an inversion algorithm for airborne electromagnetic (AEM) data in the frequency domain using 2D conductivity models. First, we discretize the 2.5D Maxwell's equations on a staggered grid and test the numerical accuracy of the forward solution. The inverse problem is then solved by a regularized minimization approach using the limited-memory BFGS variant of the quasi-Newton method. Next, EM responses from a synthetic 2D conductivity model are inverted to validate the algorithm. Finally, we use the algorithm on an AEM field dataset from a RESOLVE survey and compare the inversion results to those obtained from a well-established 1D implementation.
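The regularized-minimization step can be sketched without the Maxwell solver (our stand-in: a random linear forward map F, a plain Tikhonov penalty, and an exact line search that the quadratic objective permits; the thesis itself couples limited-memory BFGS to a 2.5D forward problem):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20
F = rng.standard_normal((40, n))       # stand-in linear forward map
d = F @ rng.standard_normal(n)         # synthetic data
beta = 1e-2                            # regularization weight

def grad(m):                           # gradient of ||F m - d||^2 + beta ||m||^2
    return 2.0 * F.T @ (F @ m - d) + 2.0 * beta * m

def hess_vec(p):                       # Hessian-vector product (phi is quadratic)
    return 2.0 * F.T @ (F @ p) + 2.0 * beta * p

def lbfgs_direction(g, S, Y):
    # standard L-BFGS two-loop recursion over stored (s, y) pairs
    q, alphas = g.copy(), []
    for s, y in zip(reversed(S), reversed(Y)):
        a = (s @ q) / (y @ s)
        alphas.append(a)
        q -= a * y
    if S:
        q *= (S[-1] @ Y[-1]) / (Y[-1] @ Y[-1])   # initial Hessian scaling
    for (s, y), a in zip(zip(S, Y), reversed(alphas)):
        q += (a - (y @ q) / (y @ s)) * s
    return -q

m, S, Y = np.zeros(n), [], []
phi0 = np.sum(d ** 2)                  # objective value at m = 0
for _ in range(100):
    g = grad(m)
    if np.linalg.norm(g) < 1e-10:
        break
    p = lbfgs_direction(g, S, Y)
    t = -(g @ p) / (p @ hess_vec(p))   # exact line search for quadratic phi
    S.append(t * p)
    Y.append(grad(m + t * p) - g)
    S, Y = S[-5:], Y[-5:]              # keep a memory of 5 pairs
    m = m + t * p
phi = np.sum((F @ m - d) ** 2) + beta * np.sum(m ** 2)
print(f"phi(0) = {phi0:.1f}, phi(m) = {phi:.4f}")
```

The limited memory (here 5 pairs) is what makes the method attractive for inversion, since the full Hessian of a PDE-constrained objective is never formed.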
ALGORITHMS THAT SATISFY A STOPPING CRITERION, PROBABLY
Iterative numerical algorithms are typically equipped with a stopping criterion, where the iteration process is terminated when some error or misfit measure is deemed to be below a given tolerance. This is a useful setting for comparing algorithm performance, among other purposes. However, in practical applications a precise value for such a tolerance is rarely known; rather, only some possibly vague idea of the desired quality of the numerical approximation is at hand. This leads us to think of ways to relax the notion of exactly satisfying a tolerance value, in a hopefully profitable way. We give well-known examples where a deterministic relaxation of this notion is applied. Another possibility, on which we concentrate, is a probabilistic relaxation of the given tolerance. This allows, for instance, derivation of proven bounds on the sample size of certain Monte Carlo methods. We describe this in the context of particular applications.
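One concrete shape a probabilistic relaxation can take (our own sketch under simplifying assumptions, not a result from the paper): require "error <= tol" to hold only with probability at least 1 - delta, estimate the error by Monte Carlo, and use a Hoeffding bound, valid for i.i.d. samples in [0, 1], to fix the sample size in advance.

```python
import math
import random

def sample_size(eps, delta):
    # Hoeffding: for i.i.d. samples in [0, 1], n >= ln(1/delta) / (2 eps^2)
    # makes the sample mean eps-accurate with probability >= 1 - delta.
    return math.ceil(math.log(1.0 / delta) / (2.0 * eps ** 2))

tol, eps, delta = 0.3, 0.05, 1e-3
n = sample_size(eps, delta)
rng = random.Random(4)
errors = [0.4 * rng.random() for _ in range(n)]  # stand-in error measure
mean = sum(errors) / n
# Conservative check: if the estimate clears tol - eps, then the true mean
# error is below tol with probability at least 1 - delta.
accept = mean <= tol - eps
print(n, round(mean, 3), accept)
```

The point is that the tolerance test is never evaluated exactly; it is certified to the prescribed failure probability at a known, pre-computable sampling cost.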
RICAM Linz, Austrian Academy of Sciences
Abstract. Parameter identification problems for partial differential equations (PDEs) often lead to large-scale inverse problems. To reduce the computational effort for the repeated solution of the forward and even of the inverse problem, as is required for determining the regularization parameter (e.g., according to the discrepancy principle in Tikhonov regularization), we use adaptive finite element discretizations based on goal-oriented error estimators. This concept provides an estimate of the error in a so-called quantity of interest, a functional of the searched-for parameter q and the PDE solution u, based on which the discretizations of q and u are locally refined. The crucial question for parameter identification problems is the choice of an appropriate quantity of interest. A convergence analysis of Tikhonov regularization with the discrepancy principle on discretized spaces for q and u shows that, in order to determine the correct regularization parameter, one has to guarantee sufficiently high accuracy in the squared residual norm, which is therefore our quantity of interest, whereas q and u themselves need not be computed precisely everywhere. This fact allows for relatively low-dimensional adaptive meshes and hence for a considerable reduction of the computational effort. In this paper we study an efficient inexact Newton algorithm for determining an optimal regularization parameter in Tikhonov regularization according to the discrepancy principle. With the help of error estimators we guide this algorithm and control the accuracy requirements for its convergence. This leads to a highly efficient method for determining the regularization parameter.
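The discrepancy principle itself is easy to demonstrate on a small dense problem (a toy of ours: the paper uses an inexact Newton method on adaptive finite element meshes, while here plain bisection on a log scale suffices): pick the Tikhonov parameter alpha so that the residual of the regularized solution matches the noise level delta.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 30
A = rng.standard_normal((n, n)) / np.sqrt(n)   # stand-in forward operator
u_true = rng.standard_normal(n)
noise = 0.05 * rng.standard_normal(n)
d = A @ u_true + noise
delta = np.linalg.norm(noise)                  # noise level, assumed known

def residual(alpha):
    # Tikhonov solution u_alpha = (A^T A + alpha I)^{-1} A^T d
    u = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ d)
    return np.linalg.norm(A @ u - d)

# The residual norm increases monotonically in alpha, so bisect until
# residual(alpha) = delta (the discrepancy principle).
lo, hi = 1e-12, 1e3
for _ in range(100):
    mid = np.sqrt(lo * hi)                     # bisect on a log scale
    if residual(mid) < delta:
        lo = mid
    else:
        hi = mid
alpha = np.sqrt(lo * hi)
print(f"alpha = {alpha:.3e}, residual = {residual(alpha):.4f}, delta = {delta:.4f}")
```

Each trial alpha costs a full solve here; the paper's contribution is precisely to make those repeated solves cheap by controlling their accuracy with goal-oriented error estimates for the squared residual norm.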