Results 1–10 of 33
Convergence of a Balancing Domain Decomposition by Constraints and Energy Minimization
, 2002
Cited by 73 (12 self)
Abstract
A convergence theory is presented for a substructuring preconditioner based on constrained energy minimization concepts. The preconditioner is formulated as an Additive Schwarz method and analyzed by building on existing results for Balancing Domain Decomposition. The main result is a bound on the condition number based on inequalities involving the matrices of the preconditioner. Estimates of the usual form C(1 + log²(H/h)) are obtained under the standard assumptions of substructuring theory. Computational results demonstrating the performance of the method are included.
Domain Decomposition Algorithms for the Partial Differential Equations of Linear Elasticity
, 1990
Cited by 44 (1 self)
Abstract
The use of the finite element method for elasticity problems results in extremely large, sparse linear systems. Historically these have been solved using direct solvers such as Cholesky's method. These linear systems are often ill-conditioned and hence require good preconditioners if they are to be solved iteratively. We propose and analyze three new, parallel iterative domain decomposition algorithms for the solution of these linear systems. The algorithms are also useful for other elliptic partial differential equations. Domain decomposition algorithms are designed to take advantage of a new generation of parallel computers. The domain is decomposed into overlapping or nonoverlapping subdomains. The discrete approximation to a partial differential equation is then obtained iteratively by solving problems associated with each subdomain. The algorithms are often accelerated using the conjugate gradient method. The first new algorithm presented here borrows heavily from multilevel type a...
Algebraic theory of multiplicative Schwarz methods
 NUMER. MATH.
, 2001
Cited by 30 (20 self)
Abstract
The convergence of multiplicative Schwarz-type methods for solving linear systems when the coefficient matrix is either a nonsingular M-matrix or a symmetric positive definite matrix is studied using classical and new results from the theory of splittings. The effect on convergence of algorithmic parameters such as the number of subdomains, the amount of overlap, the use of inexact local solves, and of “coarse grid” corrections (global coarse solves) is analyzed in an algebraic setting. Results on algebraic additive Schwarz are also included.
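The multiplicative Schwarz iteration discussed above can be sketched in a few lines. This is a minimal illustration (not code from the cited paper): a 1D Laplacian, which is a nonsingular M-matrix, solved by sweeping over two overlapping subdomains with exact local solves; the problem size, subdomain index sets, and amount of overlap are arbitrary choices for the example.

```python
# Minimal multiplicative Schwarz sketch on a 1D Laplacian (an M-matrix).
# Subdomain sizes and overlap are illustrative, not from the cited paper.
import numpy as np

n = 20
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # tridiagonal M-matrix
b = np.ones(n)
x = np.zeros(n)

# Two overlapping subdomains (index sets), overlapping in 4 unknowns.
subdomains = [np.arange(0, 12), np.arange(8, 20)]

for sweep in range(50):
    for idx in subdomains:
        r = b - A @ x  # global residual before each local solve
        # exact local solve on the subdomain, correction applied in place
        x[idx] += np.linalg.solve(A[np.ix_(idx, idx)], r[idx])

print(np.linalg.norm(b - A @ x))  # residual shrinks toward machine precision
```

Increasing the overlap between the two index sets speeds up convergence, which is one of the algorithmic parameters whose effect the paper analyzes.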
Robust dimension reduction, fusion frames, and Grassmannian packings,
 Appl. Comput. Harmon. Anal.
, 2009
Cited by 24 (5 self)
Abstract
We consider estimating a random vector from its measurements in a fusion frame, in the presence of noise and subspace erasures. A fusion frame is a collection of subspaces for which the sum of the projection operators onto the subspaces is bounded below and above by constant multiples of the identity operator. We first consider the linear minimum mean-squared error (LMMSE) estimation of the random vector of interest from its fusion frame measurements in the presence of additive white noise. Each fusion frame measurement is a vector whose elements are inner products of an orthogonal basis for a fusion frame subspace and the random vector of interest. We derive bounds on the mean-squared error (MSE) and show that the MSE will achieve its lower bound if the fusion frame is tight. We then analyze the robustness of the constructed LMMSE estimator to erasures of the fusion frame subspaces. We limit our erasure analysis to the class of tight fusion frames and assume that all erasures are equally important. Under these assumptions, we prove that tight fusion frames consisting of equidimensional subspaces have maximum robustness (in the MSE sense) with respect to erasures of one subspace among all tight fusion frames, and that the optimal subspace dimension depends on the signal-to-noise ratio (SNR). We also prove that tight fusion frames consisting of equidimensional subspaces with equal pairwise chordal distances are most robust with respect to two and more subspace erasures, among the class of equidimensional tight fusion frames. We call such fusion frames equidistance tight fusion frames. We prove that the squared chordal distance between the subspaces in such fusion frames meets the so-called simplex bound, and thereby establish connections between equidistance tight fusion frames and optimal Grassmannian packings. Finally, we present several examples for the construction of equidistance tight fusion frames.
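The tightness condition in the abstract (sum of subspace projections equals a constant multiple of the identity) is easy to check numerically. A small illustration, not taken from the paper: three equiangular lines in ℝ², a standard example of a tight fusion frame of one-dimensional subspaces.

```python
# Numerical check of fusion frame tightness: sum of orthogonal projections
# onto the subspaces equals c * I. Example subspaces are an illustrative
# choice (three equiangular lines in R^2), not from the cited paper.
import numpy as np

angles = np.deg2rad([0, 60, 120])  # three 1-D subspaces in R^2
S = sum(np.outer(v, v)             # v v^T projects onto span{v} for unit v
        for v in (np.array([np.cos(a), np.sin(a)]) for a in angles))

# For a tight fusion frame, c = (sum of subspace dimensions) / (ambient dim).
c = 3 * 1 / 2
print(np.allclose(S, c * np.eye(2)))  # True: this fusion frame is tight
```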
Sparse fusion frames: Existence and construction,
 Adv. Comput. Math.
, 2011
Cited by 23 (12 self)
Abstract
Fusion frame theory is an emerging mathematical theory that provides a natural framework for performing hierarchical data processing. A fusion frame can be regarded as a frame-like collection of subspaces in a Hilbert space, and thereby generalizes the concept of a frame for signal representation. However, when the signal and/or subspace dimensions are large, the decomposition of the signal into its fusion frame measurements through subspace projections typically requires a large number of additions and multiplications, and this makes the decomposition intractable in applications with a limited computing budget. To address this problem, in this paper we introduce the notion of a sparse fusion frame, that is, a fusion frame whose subspaces are generated by orthonormal basis vectors that are sparse in a 'uniform basis' over all subspaces, thereby enabling low-complexity fusion frame decompositions. We study the existence and construction of sparse fusion frames, but our focus is on developing simple algorithmic constructions that can easily be adopted in practice to produce sparse fusion frames with desired (given) operators. By a desired (or given) operator we simply mean one that has a desired (or given) set of eigenvalues for the fusion frame operator. We start by presenting a complete characterization of Parseval fusion frames in terms of the existence of special isometries defined on an encompassing Hilbert space. We then introduce two general methodologies to generate new fusion frames from existing ones, namely the Spatial Complement Method and the Naimark Complement Method, and analyze the relationship between the parameters of the original and the new fusion frame. We proceed by establishing existence conditions for 2-sparse fusion frames for any given fusion frame operator whose eigenvalues are greater than or equal to two. We then provide an easily implementable algorithm for computing such 2-sparse fusion frames.
An algebraic convergence theory for restricted additive Schwarz methods using weighted max norms
 SIAM J. NUMER. ANAL
, 2001
Cited by 20 (9 self)
Abstract
Convergence results for the restricted additive Schwarz (RAS) method of Cai and Sarkis [SIAM J. Sci. Comput., 21 (1999), pp. 792–797] for the solution of linear systems of the form Ax = b are provided using an algebraic view of additive Schwarz methods and the theory of multisplittings. The linear systems studied are usually discretizations of partial differential equations in two or three dimensions. It is shown that in the case of A symmetric positive definite, the projections defined by the methods are not orthogonal with respect to the inner product defined by A, and therefore the standard analysis cannot be used here. The convergence results presented are for the class of M-matrices (and more generally for H-matrices) using weighted max norms. Comparisons between different versions of the RAS method are given in terms of these norms. A comparison theorem with respect to the classical additive Schwarz method makes it possible to indirectly obtain quantitative results on rates of convergence which otherwise cannot be obtained by the theory. Several RAS variants are considered, including new ones and two-level schemes.
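The defining feature of RAS, as opposed to classical additive Schwarz, is that local solves are performed on overlapping subdomains while each correction is added back only on a nonoverlapping piece of the partition, so the overlap region is never double-counted. A minimal sketch under illustrative assumptions (1D Laplacian M-matrix, two subdomains; not the paper's code):

```python
# Restricted additive Schwarz (RAS) as a stationary iteration, sketched on a
# 1D Laplacian (an M-matrix). Subdomains and overlap are illustrative.
import numpy as np

n = 20
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = np.zeros(n)

overlapping = [np.arange(0, 12), np.arange(8, 20)]   # overlap: 4 unknowns
restricted  = [np.arange(0, 10), np.arange(10, 20)]  # nonoverlapping partition

for it in range(300):
    r = b - A @ x                                     # one shared residual
    x_new = x.copy()
    for ov, rs in zip(overlapping, restricted):
        corr = np.linalg.solve(A[np.ix_(ov, ov)], r[ov])  # solve on overlap
        mask = np.isin(ov, rs)          # restrict: keep only "owned" entries
        x_new[rs] += corr[mask]
    x = x_new

print(np.linalg.norm(b - A @ x))  # converges for this M-matrix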
Studies in Domain Decomposition: Multilevel Methods and the Biharmonic Dirichlet Problem
, 1991
Cited by 19 (0 self)
Abstract
A class of multilevel methods for second order problems is considered in the additive Schwarz framework. It is established that, in the general case, the condition number of the iterative operator grows at most linearly with the number of levels. The bound is independent of the mesh sizes and the number of levels under a regularity assumption. This is an improvement of a result by Dryja and Widlund on a multilevel additive Schwarz algorithm, and the theory given by Bramble, Pasciak and Xu for the BPX algorithm. Additive Schwarz and iterative substructuring algorithms for the biharmonic equation are also considered. These are domain decomposition methods which have previously been developed extensively for second order elliptic problems by Bramble, Pasciak and Schatz, Dryja and Widlund and others. Optimal convergence properties are established for additive Schwarz algorithms for the biharmonic equation discretized by certain conforming finite elements. The number of iterations for the i...
Domain decomposition: a bridge between nature and parallel computers
 PARALLEL COMPUTERS, ADAPTIVE, MULTILEVEL AND HIERARCHICAL COMPUTATIONAL STRATEGIES
, 1992
Convergence theory of restricted multiplicative Schwarz methods
 IN PREPARATION
, 2003
Cited by 10 (6 self)
Abstract
Convergence results for the restricted multiplicative Schwarz (RMS) method, the multiplicative version of the restricted additive Schwarz (RAS) method, for the solution of linear systems of the form Ax = b are provided. An algebraic approach is used to prove convergence results for nonsymmetric M-matrices. Several comparison theorems are also established. These theorems compare the asymptotic rate of convergence with respect to the amount of overlap, the exactness of the subdomain solver, and the number of domains. Moreover, comparison theorems are given between the RMS and RAS methods, as well as between the RMS and the classical multiplicative Schwarz method.
Nearly Sharp Sufficient Conditions on Exact Sparsity Pattern Recovery
Cited by 10 (0 self)
Abstract
Consider the n-dimensional vector y = Xβ + w, where β ∈ ℝ^p has only k nonzero entries and w ∈ ℝ^n is Gaussian noise. This can be viewed as a linear system with sparsity constraints corrupted by noise, where the objective is to estimate the sparsity pattern of β given the observation vector y and the measurement matrix X. First, we derive a non-asymptotic upper bound on the probability that a specific wrong sparsity pattern is identified by the maximum-likelihood estimator. We find that this probability depends (inversely) exponentially on the difference between ‖Xβ‖₂ and the ℓ₂-norm of Xβ projected onto the range of the columns of X indexed by the wrong sparsity pattern. Second, when X is randomly drawn from a Gaussian ensemble, we calculate a non-asymptotic upper bound on the probability of the maximum-likelihood decoder not declaring (partially) the true sparsity pattern. Consequently, we obtain sufficient conditions on the sample size n that guarantee almost surely the recovery of the true sparsity pattern. We find that the required growth rate of the sample size n matches the growth rate of previously established necessary conditions. Index Terms—Hypothesis testing, random projections, sparsity pattern recovery, subset selection, underdetermined systems of equations.
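The maximum-likelihood sparsity pattern estimator described above can be illustrated on a toy problem: among all supports of size k, pick the one whose columns of X best explain y, i.e. maximize the norm of the projection of y onto their span. All dimensions, the support, and the noise level below are illustrative choices, and the exhaustive search is feasible only because the example is tiny.

```python
# Toy maximum-likelihood sparsity pattern recovery by exhaustive search.
# Dimensions, true support, and noise level are illustrative assumptions.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n, p, k = 30, 8, 2
X = rng.standard_normal((n, p))          # Gaussian measurement matrix
beta = np.zeros(p)
beta[[1, 5]] = [3.0, -2.5]               # true support {1, 5}, high SNR
y = X @ beta + 0.1 * rng.standard_normal(n)

def proj_norm_sq(y, Xs):
    # squared norm of the projection of y onto range(Xs), via least squares
    coef, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    return np.linalg.norm(Xs @ coef) ** 2

# ML decoding: the support maximizing the projected energy of y
best = max(combinations(range(p), k),
           key=lambda s: proj_norm_sq(y, X[:, list(s)]))
print(best)  # (1, 5): the true sparsity pattern is recovered
```

At this SNR the true support wins by a wide margin; the paper's bounds quantify how the failure probability decays as the projected-energy gap between the true and a wrong support grows.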