Results 1–10 of 56
Electrostatics of Nanosystems: Application to microtubules and the ribosome
Proc. Natl. Acad. Sci. USA, 2001
Cited by 468 (23 self)
Evaluation of the electrostatic properties of biomolecules has become a standard practice in molecular biophysics. Foremost among the models used to elucidate the electrostatic potential is the Poisson–Boltzmann equation; however, existing methods for solving this equation have limited the scope of accurate electrostatic calculations to relatively small biomolecular systems.
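The screening behavior the Poisson–Boltzmann model captures is easiest to see in its linearized (Debye–Hückel) form, -φ'' + κ²φ = 0, whose 1D solution decays like exp(-κx). A minimal finite-difference sketch (the grid size, κ, and boundary data below are illustrative choices, not taken from the paper):

```python
import numpy as np

# Linearized Poisson-Boltzmann (Debye-Hueckel) model in 1D:
#   -phi'' + kappa^2 phi = 0  on [0, L],  phi(0) = 1,  phi(L) = exp(-kappa L),
# whose solution phi(x) = exp(-kappa x) shows the exponential ionic screening.
kappa, L, n = 2.0, 5.0, 200        # illustrative parameters, not from the paper
x = np.linspace(0.0, L, n)
h = x[1] - x[0]

A = np.zeros((n, n))
b = np.zeros(n)
for i in range(1, n - 1):          # second-order central differences
    A[i, i - 1] = -1.0 / h**2
    A[i, i] = 2.0 / h**2 + kappa**2
    A[i, i + 1] = -1.0 / h**2
A[0, 0] = A[-1, -1] = 1.0          # Dirichlet boundary rows
b[0], b[-1] = 1.0, np.exp(-kappa * L)

phi = np.linalg.solve(A, b)
err = np.max(np.abs(phi - np.exp(-kappa * x)))
print(err)                         # small discretization error
```

The full nonlinear equation replaces κ²φ by a sinh(φ) term and is what solvers like APBS treat on 3D biomolecular geometries; this 1D sketch only illustrates the screened-potential structure.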
An adaptive finite element method for the Laplace–Beltrami operator on implicitly defined surfaces
SIAM J. Numer. Anal.
Cited by 49 (4 self)
Abstract. We present an adaptive finite element method for approximating solutions to the Laplace–Beltrami equation on surfaces in R^3 which may be implicitly represented as level sets of smooth functions. Residual-type a posteriori error bounds which show that the error may be split into a “residual part” and a “geometric part” are established. In addition, implementation issues are discussed and several computational examples are given. Key words. Laplace–Beltrami operator, adaptive finite element methods, a posteriori error estimation, boundary value problems on surfaces. AMS subject classifications. 58J32, 65N15, 65N30.
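Adaptive methods of this kind iterate solve → estimate → mark → refine. The marking step is often Dörfler (bulk) marking: refine a smallest set of elements carrying a fixed fraction of the total estimated error. A generic sketch with made-up error indicators (the paper's own estimator is residual-based and surface-specific):

```python
def dorfler_mark(indicators, theta=0.5):
    """Return indices of a (near-)minimal set of elements whose squared
    indicators sum to at least theta times the total (bulk marking)."""
    order = sorted(range(len(indicators)), key=lambda i: -indicators[i])
    total = sum(eta**2 for eta in indicators)
    marked, acc = [], 0.0
    for i in order:                 # take largest indicators first
        marked.append(i)
        acc += indicators[i]**2
        if acc >= theta * total:
            break
    return marked

# Illustrative indicators: two elements carry most of the error.
etas = [0.9, 0.1, 0.05, 0.8, 0.02, 0.03]
marked = dorfler_mark(etas, theta=0.6)
print(marked)  # → [0, 3]
```

Larger theta marks more elements per cycle (fewer, coarser iterations); smaller theta refines more conservatively.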
A New Paradigm for Parallel Adaptive Meshing Algorithms
SIAM J. Sci. Comput., 2003
Cited by 46 (9 self)
We present a new approach to the use of parallel computers with adaptive finite element methods. This approach addresses the load balancing problem in a new way, requiring far less communication than current approaches. It also allows existing sequential adaptive PDE codes such as PLTMG and MC to run in a parallel environment without a large investment in recoding. In this new approach, the load balancing problem is reduced to the numerical solution of a small elliptic problem on a single processor, using a sequential adaptive solver, without requiring any modifications to the sequential solver. The small elliptic problem is used to produce a posteriori error estimates to predict future element densities in the mesh, which are then used in a weighted recursive spectral bisection of the initial mesh. The bulk of the calculation then takes place independently on each processor, with no communication, using possibly the same sequential adaptive solver. Each processor adapts its region of the mesh independently, and a nearly load-balanced mesh distribution is usually obtained as a result of the initial weighted spectral bisection. Only the initial fan-out of the mesh decomposition to the processors requires communication. Two additional steps requiring boundary exchange communication may be employed after the individual processors reach an adapted solution, namely, the construction of a global conforming mesh from the independent subproblems, followed by a final smoothing phase using the subdomain solutions as an initial guess. We present a series of convincing numerical experiments that illustrate the effectiveness of this approach. The justification of the initial refinement prediction step, as well as the justification of skipping the two communication-intensive steps, ...
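The weighted spectral bisection step can be sketched at the graph level: compute the Fiedler vector (second eigenvector) of the mesh graph's Laplacian, order vertices along it, and cut at the weighted median so each part carries roughly half the predicted work. A generic illustration on a toy graph, not the PLTMG/MC implementation; the graph and weights are made up:

```python
import numpy as np

def weighted_spectral_bisect(adj, weights):
    """Split vertices into two parts of roughly equal total weight by
    ordering them along the Fiedler vector of the graph Laplacian."""
    lap = np.diag(adj.sum(axis=1)) - adj
    _, vecs = np.linalg.eigh(lap)
    order = np.argsort(vecs[:, 1])       # Fiedler (second) eigenvector
    half, acc, part = 0.5 * sum(weights), 0.0, set()
    for i in order:                      # sweep until half the weight is covered
        part.add(int(i))
        acc += weights[i]
        if acc >= half:
            break
    return part

# Toy path-graph 'mesh'; weights mimic predicted element densities.
adj = np.zeros((6, 6))
for i in range(5):
    adj[i, i + 1] = adj[i + 1, i] = 1.0
w = [3.0, 3.0, 1.0, 1.0, 2.0, 2.0]
part = weighted_spectral_bisect(adj, w)
print(sorted(part))                      # one half of a weight-balanced split
```

Recursive application of the same split to each part yields the 2^k-way decomposition used to fan the mesh out to the processors.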
Multilevel Solvers For Unstructured Surface Meshes
SIAM J. Sci. Comput.
Cited by 39 (3 self)
Parameterization of unstructured surface meshes is of fundamental importance in many applications of Digital Geometry Processing. Such parameterization approaches give rise to large and exceedingly ill-conditioned systems which are difficult or impossible to solve without the use of sophisticated multilevel preconditioning strategies. Since the underlying meshes are very fine to begin with, such multilevel preconditioners require mesh coarsening to build an appropriate hierarchy. In this paper we consider several strategies for the construction of hierarchies using ideas from mesh simplification algorithms used in the computer graphics literature. We introduce two novel hierarchy construction schemes and demonstrate their superior performance when used in conjunction with a multigrid preconditioner.
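One common coarsening primitive for building such hierarchies is a maximal independent set of vertices: keep a subset no two of which are adjacent, absorb the rest, and repeat per level. A minimal sketch on a toy adjacency structure (this is a generic scheme, not one of the paper's two proposed constructions):

```python
def mis_coarsen(adj):
    """Greedy maximal independent set: kept vertices survive to the
    coarser level; their neighbors are absorbed."""
    kept, removed = [], set()
    for v in sorted(adj):
        if v not in removed:
            kept.append(v)
            removed.update(adj[v])   # neighbors may not also be kept
    return kept

# Toy 'mesh' as an adjacency dict; repeated application shrinks levels.
adj = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 4},
       3: {1, 4, 5}, 4: {2, 3, 5}, 5: {3, 4}}
coarse = mis_coarsen(adj)
print(coarse)  # → [0, 3]
```

Because the set is maximal, every removed vertex has a kept neighbor to interpolate from, which is what the multilevel transfer operators rely on.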
Rough solutions of the Einstein constraints on closed manifolds without near-CMC conditions. Submitted for publication. Available as arXiv:0712.0798v1 [gr-qc]
Cited by 30 (14 self)
Abstract. We consider the conformal decomposition of Einstein’s constraint equations introduced by Lichnerowicz and York, on a closed manifold. We establish existence of non-CMC weak solutions using a combination of a priori estimates for the individual Hamiltonian and momentum constraints, barrier constructions and fixed-point techniques for the Hamiltonian constraint, Riesz–Schauder theory for the momentum constraint, together with a topological fixed-point argument for the coupled system. Although we present general existence results for non-CMC weak solutions when the rescaled background metric is in any of the three Yamabe classes, an important new feature of the results we present for the positive Yamabe class is the absence of the near-CMC assumption, if the freely specifiable part of the data given by the traceless-transverse part of the rescaled extrinsic curvature and the matter fields are sufficiently small, and if the energy density of matter is not identically zero. In this case, the mean extrinsic curvature can be taken to be an arbitrary smooth function without restrictions on the size of its spatial derivatives, so that it can be arbitrarily far from constant, giving what are apparently the first existence results for non-CMC solutions without the ...
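For reference, in vacuum the conformal (Lichnerowicz–York) decomposition couples a semilinear elliptic equation for the conformal factor φ to a linear elliptic system for a vector potential w. One standard form (sign and scaling conventions vary across the literature; this is not copied from the paper, and the matter terms discussed in the abstract are omitted):

```latex
% Hamiltonian (Lichnerowicz) constraint for the conformal factor phi:
-8\,\Delta\phi + R\,\phi + \tfrac{2}{3}\,\tau^{2}\phi^{5}
  - \lvert \sigma + \mathcal{L}w \rvert^{2}\,\phi^{-7} = 0 ,
% Momentum constraint for the vector potential w:
\nabla_{a}(\mathcal{L}w)^{ab} = \tfrac{2}{3}\,\phi^{6}\,\nabla^{b}\tau .
```

Here τ is the mean extrinsic curvature (CMC means τ is constant), σ is the traceless-transverse part of the rescaled extrinsic curvature, and 𝓛 is the conformal Killing operator. Near-CMC conditions bound ∇τ, which decouples the system; removing that bound is what the abstract highlights.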
Higher-order finite element methods and pointwise error estimates for elliptic problems on surfaces
SIAM J. Numer. Anal.
Cited by 29 (2 self)
Abstract. We define higher-order analogs to the piecewise linear surface finite element method studied in [Dz88] and prove error estimates in both pointwise and L^2-based norms. Using the Laplace–Beltrami problem on an implicitly defined surface Γ as a model PDE, we define Lagrange finite element methods of arbitrary degree on polynomial approximations to Γ which likewise are of arbitrary degree. Then we prove a priori error estimates in the L^2, H^1, and corresponding pointwise norms that demonstrate the interaction between the “PDE error” that arises from employing a finite-dimensional finite element space and the “geometric error” that results from approximating Γ. We also consider parametric finite element approximations that are defined on Γ and thus induce no geometric error. Computational examples confirm the sharpness of our error estimates. Key words. Laplace–Beltrami operator, surface finite element methods, a priori error estimates, boundary value problems on surfaces, pointwise and maximum norm error estimates. AMS subject classifications. 58J32, 65N15, 65N30.
An Odyssey Into Local Refinement And Multilevel Preconditioning I: Optimality Of . . .
SIAM J. Numer. Anal., 2002
Cited by 27 (14 self)
In this article, we examine the Bramble–Pasciak–Xu (BPX) preconditioner in the setting of local 2D and 3D mesh refinement. While the available optimality results for the BPX preconditioner have been constructed primarily in the setting of uniformly refined meshes, a notable exception is the 2D result due to Dahmen and Kunoth, which established BPX optimality on meshes produced by a restricted class of local 2D red-green refinement. The purpose of this article is to extend the original 2D Dahmen–Kunoth result to several additional types of local 2D and 3D red-green (conforming) and red (nonconforming) refinement procedures. The extensions are accomplished through a 3D extension of the 2D framework in the original Dahmen–Kunoth work, by which the question of optimality is reduced to establishing that locally enriched finite element subspaces allow for the construction of a scaled basis which is formally Riesz stable. This construction in turn rests entirely on establishing a number of geometrical properties between neighboring simplices produced by the local refinement algorithms. These properties are then used to build Riesz-stable scaled bases for use in the BPX optimality framework. Since the theoretical framework supports arbitrary spatial dimension d ≥ 1, we indicate clearly which geometrical properties, established here for several 2D and 3D local refinement procedures, must be reestablished to show BPX optimality for spatial dimension d ≥ 4. Finally, we also present a simple alternative optimality proof of the BPX preconditioner on quasi-uniform meshes in two and three spatial dimensions, through the use of K-functionals and H^s stability of the L^2 projection for s ≤ 1. The proof techniques we use are quite general; in particular, the results require no smoothness...
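The BPX preconditioner itself is an additive sum of diagonally scaled contributions from every level of the mesh hierarchy, applied with only transfer operators and no coarse direct solves. A 1D sketch on a uniformly refined model problem (uniform refinement and the Jacobi-style per-level scaling are illustrative choices; the paper's subject is the locally refined case):

```python
import numpy as np

def interp(nc):
    """Linear interpolation from nc = 2^l - 1 coarse to 2*nc + 1 fine points."""
    P = np.zeros((2 * nc + 1, nc))
    for j in range(nc):
        P[2 * j + 1, j] = 1.0
        P[2 * j, j] += 0.5
        P[2 * j + 2, j] += 0.5
    return P

num_levels = 6
n = 2**num_levels - 1                                    # 63 fine-grid points
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D model Laplacian

# For each level, store the prolongation to the finest grid and the diagonal
# of the Galerkin coarse operator (the per-level Jacobi scaling).
levels = []
P = np.eye(n)
for l in range(num_levels, 0, -1):
    levels.append((np.diag(P.T @ A @ P).copy(), P.copy()))
    if l > 1:
        P = P @ interp(2**(l - 1) - 1)

def bpx(r):
    # Additive multilevel (BPX-type) action: diagonally scaled contributions
    # summed over all levels; only matrix-vector transfers, no coarse solves.
    z = np.zeros_like(r)
    for d, Pl in levels:
        z += Pl @ ((Pl.T @ r) / d)
    return z

# Preconditioned conjugate gradients with B = bpx.
b = np.ones(n)
x = np.zeros(n)
r = b - A @ x
z = bpx(r)
p = z.copy()
rz = r @ z
for _ in range(300):
    Ap = A @ p
    alpha = rz / (p @ Ap)
    x += alpha * p
    r -= alpha * Ap
    if np.linalg.norm(r) < 1e-12:
        break
    z = bpx(r)
    rz_new = r @ z
    p = z + (rz_new / rz) * p
    rz = rz_new

res = np.linalg.norm(A @ x - b)
print(res)   # small final residual
```

Optimality of the preconditioner means the number of PCG iterations stays bounded as the hierarchy deepens; establishing that bound under local refinement is exactly what the article addresses.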
Optimality of multilevel preconditioners for local mesh refinement in three dimensions
SIAM J. Numer. Anal.
Cited by 20 (10 self)
Abstract. In this article, we establish optimality of the Bramble–Pasciak–Xu (BPX) norm equivalence and optimality of the wavelet modified (or stabilized) hierarchical basis (WHB) preconditioner in the setting of local 3D mesh refinement. In the analysis of WHB methods, a critical first step is to establish the optimality of the BPX norm equivalence for the refinement procedures under consideration. While the available optimality results for the BPX norm have been constructed primarily in the setting of uniformly refined meshes, a notable exception is the local 2D red-green result due to Dahmen and Kunoth. The purpose of this article is to extend this original 2D optimality result to the local 3D red-green refinement procedure introduced by Bornemann, Erdmann, and Kornhuber, and then to use this result to extend the WHB optimality results from the quasi-uniform setting to local 2D and 3D red-green refinement scenarios. The BPX extension is reduced to establishing that locally enriched finite element subspaces allow for the construction of a scaled basis which is formally Riesz stable. This construction turns out to rest not only on the shape regularity of the refined elements, but also critically on a number of geometrical properties we establish between neighboring simplices produced by the Bornemann–Erdmann–Kornhuber (BEK) refinement procedure. It is possible to show that the number of degrees of freedom used for smoothing is bounded by a constant times the number of degrees of freedom introduced at that level of refinement, indicating that a practical, implementable version of the resulting BPX preconditioner for the BEK refinement setting has provably optimal (linear) computational complexity per iteration. An interesting implication of the optimality of the WHB preconditioner is the a priori H^1-stability of the L^2-projection. The existing a posteriori approaches in the literature dictate a reconstruction of the mesh if such conditions cannot be satisfied.
The theoretical framework employed supports arbitrary spatial dimension d ≥ 1 and requires no coefficient smoothness assumptions beyond those required for well-posedness in H^1.
FeaturePreserving Adaptive Mesh Generation for Molecular Shape Modeling and Simulation
2007
Cited by 19 (8 self)
We describe a chain of algorithms for molecular surface and volumetric mesh generation. We take as inputs the centers and radii of all atoms of a molecule, and the toolchain outputs both triangular and tetrahedral meshes that can be used for molecular shape modeling and simulation. Experiments on a number of molecules are demonstrated, showing that our methods possess several desirable properties: feature preservation, local adaptivity, high quality, and smoothness (for surface meshes). We also demonstrate an example of molecular simulation using the finite element method and the meshes generated by our method. The approaches presented and their implementations are also applicable to other types of inputs, such as 3D scalar volumes and low-quality triangular surface meshes, and hence can be used for the generation and improvement of meshes in a broad range of applications.
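A common starting point for toolchains of this kind is an implicit description of the molecular surface, e.g. as an iso-surface of a sum of atom-centered Gaussians built from the input centers and radii. A minimal sketch (the decay rate, iso-value of 1, and toy atoms below are illustrative assumptions, not the paper's formulation):

```python
import numpy as np

def gaussian_density(points, centers, radii, decay=2.0):
    """Summed atom-centered Gaussian 'density'; the molecular surface is
    modeled as the iso-surface where the density equals 1."""
    d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
    return np.exp(decay * (1.0 - (d / radii[None, :])**2)).sum(axis=1)

# Two overlapping toy 'atoms' on the x-axis.
centers = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])
radii = np.array([1.0, 1.2])
probes = np.array([[0.0, 0.0, 0.0],    # at an atom center: inside
                   [0.75, 0.0, 0.0],   # between the atoms
                   [5.0, 0.0, 0.0]])   # far away: outside
rho = gaussian_density(probes, centers, radii)
print(rho[0] > 1.0, rho[2] < 1.0)  # → True True
```

Evaluating such a density on a regular grid and extracting the iso-surface (e.g. by marching cubes) yields a triangular surface mesh that downstream quality-improvement and tetrahedralization stages can work from.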
Generalized Green's Functions and the Effective Domain of Influence
SIAM J. Sci. Comput., 2002
Cited by 18 (7 self)
One well-known approach to a posteriori analysis of finite element solutions of elliptic problems estimates the error in a quantity of interest in terms of residuals and a generalized Green's function. The generalized Green's function solves the adjoint problem with data related to a quantity of interest and measures the effects of stability, including any decay of influence characteristic of elliptic problems. We show that consideration of the generalized Green's function can be used to improve the efficiency of the solution process when the goal is to compute multiple quantities of interest and/or to compute quantities of interest that involve globally supported information such as average values and norms. In the latter case, we introduce a solution decomposition in which we solve a set of problems involving localized information and then recover the desired information by combining the local solutions. By treating each computation of a quantity of interest independently, the maximum number of elements required to achieve the desired accuracy can be decreased significantly.
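The duality identity behind this approach is simple to state at the linear-algebra level: for Au = b, an approximation u_h, and a quantity of interest ψᵀu, solving the adjoint problem Aᵀφ = ψ gives the exact error representation ψᵀ(u − u_h) = φᵀ(b − Au_h). A small sketch (the matrix, data, and perturbation are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))  # made-up well-conditioned system
b = rng.standard_normal(n)
psi = np.zeros(n)
psi[3] = 1.0                        # quantity of interest: the component u[3]

u = np.linalg.solve(A, b)           # reference solution
u_h = u + 1e-3 * rng.standard_normal(n)   # a perturbed 'approximate' solution

phi = np.linalg.solve(A.T, psi)     # generalized Green's function (adjoint solve)
estimate = phi @ (b - A @ u_h)      # residual weighted by the adjoint solution
true_err = psi @ (u - u_h)
print(abs(estimate - true_err))     # agrees up to round-off
```

In the finite element setting the same identity holds with Galerkin orthogonality corrections, and the decay of φ away from the support of ψ is the "effective domain of influence" the title refers to.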