Results 1–10 of 20
FINDING STRUCTURE WITH RANDOMNESS: PROBABILISTIC ALGORITHMS FOR CONSTRUCTING APPROXIMATE MATRIX DECOMPOSITIONS
"... Lowrank matrix approximations, such as the truncated singular value decomposition and the rankrevealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for ..."
Abstract

Cited by 253 (6 self)
Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed—either explicitly or implicitly—to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, speed, and robustness. These claims are supported by extensive numerical experiments and a detailed error analysis. The specific benefits of randomized techniques depend on the computational environment. Consider the model problem of finding the k dominant components of the singular value decomposition ...
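The two-stage scheme this abstract describes — randomly sample a subspace capturing the action of the matrix, then factor the compressed matrix deterministically — can be sketched in a few lines of NumPy. The function name and the oversampling parameter p are our own illustrative choices, not taken from the paper:

```python
import numpy as np

def randomized_svd(A, k, p=5, rng=None):
    """Sketch of a two-stage randomized SVD; p is an oversampling
    parameter (our illustrative default)."""
    rng = np.random.default_rng(rng)
    m, n = A.shape
    # Stage A: sample the range of A with a Gaussian test matrix.
    Omega = rng.standard_normal((n, k + p))
    Q, _ = np.linalg.qr(A @ Omega)   # orthonormal basis capturing A's action
    # Stage B: compress A to that subspace and factor the small matrix.
    B = Q.T @ A                      # (k+p) x n, cheap to decompose
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k, :]
```

On a matrix of exact rank k the sampled subspace captures the range almost surely, so the factorization is exact up to rounding; for approximately low-rank matrices the oversampling p controls the failure probability.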
Finding structure with randomness: Stochastic algorithms for constructing approximate matrix decompositions
, 2009
"... Lowrank matrix approximations, such as the truncated singular value decomposition and the rankrevealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys recent research which demonstrates that randomization offers a powerful tool for performing l ..."
Abstract

Cited by 62 (4 self)
Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. In particular, these techniques offer a route toward principal component analysis (PCA) for petascale data. This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed—either explicitly or implicitly—to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, speed, and robustness. These claims are supported by extensive numerical experiments and a detailed error analysis. The specific benefits of randomized techniques depend on the computational environment. Consider ...
Sparse grids and related approximation schemes for higher dimensional problems
"... The efficient numerical treatment of highdimensional problems is hampered by the curse of dimensionality. We review approximation techniques which overcome this problem to some extent. Here, we focus on methods stemming from Kolmogorov’s theorem, the ANOVA decomposition and the sparse grid approach ..."
Abstract

Cited by 46 (12 self)
The efficient numerical treatment of high-dimensional problems is hampered by the curse of dimensionality. We review approximation techniques which overcome this problem to some extent. Here, we focus on methods stemming from Kolmogorov’s theorem, the ANOVA decomposition and the sparse grid approach and discuss their prerequisites and properties. Moreover, we present energy-norm-based sparse grids and demonstrate that, for functions with bounded mixed derivatives on the unit hypercube, the associated approximation rate in terms of the involved degrees of freedom shows no dependence on the dimension at all, neither in the approximation order nor in the order constant.
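The dimension-independence claim rests on how few points a sparse grid needs compared with a full tensor grid. A small counting script (our own illustration, using the standard interior-point sparse grid of level n, not code from the paper) makes the gap concrete:

```python
from itertools import product

def sparse_grid_points(d, n):
    """Interior points of a regular level-n sparse grid in d dimensions:
    keep 1-D levels l_i >= 1 with |l|_1 <= n + d - 1; a 1-D level l_i
    contributes 2**(l_i - 1) hierarchical points."""
    count = 0
    for levels in product(range(1, n + 1), repeat=d):
        s = sum(levels)
        if s <= n + d - 1:
            count += 2 ** (s - d)    # product of 2**(l_i - 1)
    return count

def full_grid_points(d, n):
    """Interior points of the full tensor grid with mesh width 2**-n."""
    return (2 ** n - 1) ** d
```

For example, in two dimensions at level 3 the sparse grid keeps 17 of the full grid's 49 interior points; by six dimensions at level 5 the ratio is already a few thousand against nearly a billion.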
CROSS-GRAMIAN BASED MODEL REDUCTION FOR DATA-SPARSE SYSTEMS
"... Abstract. Model order reduction (MOR) is common in simulation, control and optimization of complex dynamical systems arising in modeling of physical processes and in the spatial discretization of parabolic partial differential equations (PDEs) in two or more dimensions. Typically, after a semidiscre ..."
Abstract

Cited by 11 (3 self)
Model order reduction (MOR) is common in simulation, control and optimization of complex dynamical systems arising in modeling of physical processes and in the spatial discretization of parabolic partial differential equations (PDEs) in two or more dimensions. Typically, after a semidiscretization of the differential operator by the finite element method (FEM) or by the boundary element method, we have a large state-space dimension n = O(10^4). It is assumed that the number of inputs and outputs is equal and much smaller than n. We show how to compute an approximate reduced-order system of order r ≪ n with a balancing-related model reduction method. The method is based on the computation of the cross-Gramian (CG) X, which is the solution of one Sylvester equation. As standard algorithms for the solution of Sylvester equations are of limited use for large-scale systems, we investigate approaches based on the sign function method. To make this iterative method applicable in the large-scale setting, we use a modified iteration scheme for computing low-rank factors of the solution X and we incorporate structural information from the underlying PDE model into the approach. By using data-sparse matrix approximations, hierarchical matrix formats, and the corresponding formatted arithmetic we obtain an efficient solver having linear-polylogarithmic complexity. We show that the reduced-order model can then be computed from the low-rank factors directly. Numerical experiments demonstrate the efficiency of our approach.
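A dense toy version of the sign-function iteration for the cross-Gramian equation A X + X A + B C = 0 can be written directly. This deliberately ignores the low-rank factors and H-matrix arithmetic that are the paper's actual contribution, and assumes A is Hurwitz (all eigenvalues in the open left half-plane):

```python
import numpy as np

def cross_gramian_sign(A, B, C, tol=1e-12, maxit=100):
    """Sign-function iteration for A X + X A = -B C; the solution X is
    the cross-Gramian. Dense sketch only; assumes A is Hurwitz."""
    Ak = np.array(A, dtype=float)
    W = B @ C
    for _ in range(maxit):
        Ainv = np.linalg.inv(Ak)
        A_next = 0.5 * (Ak + Ainv)
        W = 0.5 * (W + Ainv @ W @ Ainv)   # (1,2) block of the sign iteration
        done = np.linalg.norm(A_next - Ak, 1) < tol * np.linalg.norm(Ak, 1)
        Ak = A_next
        if done:
            break
    return 0.5 * W                        # sign(Z) carries 2X in its (1,2) block
```

The iteration is the Newton recursion Z ← (Z + Z⁻¹)/2 applied blockwise to Z = [[A, BC], [0, −A]], whose sign has 2X in the off-diagonal block; convergence is quadratic for stable A.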
Gramian-based model reduction for data-sparse systems
, 2007
"... Model reduction is a common theme within the simulation, control and optimization of complex dynamical systems. For instance, in control problems for partial differential equations, the associated largescale systems have to be solved very often. To attack these problems in reasonable time it is abs ..."
Abstract

Cited by 5 (4 self)
Model reduction is a common theme within the simulation, control and optimization of complex dynamical systems. For instance, in control problems for partial differential equations, the associated large-scale systems have to be solved very often. To attack these problems in reasonable time it is absolutely necessary to reduce the dimension of the underlying system. We focus on model reduction by balanced truncation, where a system-theoretic background provides some desirable properties of the reduced-order system. The major computational task in balanced truncation is the solution of large-scale Lyapunov equations, thus the method is of limited use for really large-scale applications. We develop an effective implementation of balancing-related model reduction methods by exploiting the structure of the underlying problem. This is done by a data-sparse approximation of the large-scale state matrix A using the hierarchical matrix format. Furthermore, we integrate ...
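The square-root balanced-truncation procedure this abstract refers to can be sketched densely with SciPy; the paper's point is precisely to replace the dense Lyapunov solves below with data-sparse H-matrix arithmetic, and the variable names here are ours:

```python
import numpy as np
from scipy.linalg import cholesky, solve_continuous_lyapunov, svd

def balanced_truncation(A, B, C, r):
    """Square-root balanced truncation, dense sketch (A must be Hurwitz,
    with (A, B) controllable and (A, C) observable).
    Returns the reduced-order system (Ar, Br, Cr) of order r."""
    # Gramians: A P + P A^T + B B^T = 0 and A^T Q + Q A + C^T C = 0.
    P = solve_continuous_lyapunov(A, -B @ B.T)
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)
    S = cholesky(P, lower=True)           # P = S S^T
    R = cholesky(Q, lower=True)           # Q = R R^T
    U, hsv, Vt = svd(R.T @ S)             # Hankel singular values
    T = S @ Vt[:r].T / np.sqrt(hsv[:r])   # right projection
    W = R @ U[:, :r] / np.sqrt(hsv[:r])   # left projection, W^T T = I
    return W.T @ A @ T, W.T @ B, C @ T
```

Truncating at order r discards the smallest Hankel singular values, which is what gives balanced truncation its computable error bound.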
A PARALLEL SWEEPING PRECONDITIONER FOR HETEROGENEOUS 3D HELMHOLTZ EQUATIONS
"... Abstract. A parallelization of a sweeping preconditioner for 3D Helmholtz equations without internal resonance is introduced and benchmarked for several challenging velocity models. The setup and application costs of the sequential preconditioner are shown to be O(γ2N4/3) and O(γN logN), where γ(ω) ..."
Abstract

Cited by 4 (3 self)
A parallelization of a sweeping preconditioner for 3D Helmholtz equations without internal resonance is introduced and benchmarked for several challenging velocity models. The setup and application costs of the sequential preconditioner are shown to be O(γ^2 N^{4/3}) and O(γ N log N), where γ(ω) denotes the modestly frequency-dependent number of grid points per Perfectly Matched Layer. Several computational and memory improvements are introduced relative to using black-box sparse-direct solvers for the auxiliary problems, and competitive runtimes and iteration counts are reported for high-frequency problems distributed over thousands of cores. Two open-source packages are released along with this paper: Parallel Sweeping Preconditioner (PSP) and the underlying distributed multifrontal solver, Clique.
PARALLEL HIERARCHICAL MATRIX PRECONDITIONERS FOR THE CURL-CURL OPERATOR
, 2009
"... This paper deals with the preconditioning of the curlcurl operator. We use H(curl)conforming finite elements for the discretization of our corresponding magnetostatic model problem. Jumps in the material parameters influence the condition of the problem. We will demonstrate by theoretical estimates ..."
Abstract

Cited by 2 (1 self)
This paper deals with the preconditioning of the curl-curl operator. We use H(curl)-conforming finite elements for the discretization of our corresponding magnetostatic model problem. Jumps in the material parameters influence the condition of the problem. We will demonstrate by theoretical estimates and numerical experiments that hierarchical matrices are well suited to construct efficient parallel preconditioners for the fast and robust iterative solution of such problems.
H-matrix approximability of the inverse of FEM matrices
 Institut für Analysis und Scientific Computing
, 2013
"... ar ..."
H-MATRIX PRECONDITIONERS IN CONVECTION-DOMINATED PROBLEMS
"... Abstract. Hierarchical matrices provide a datasparse way to approximate fully populated matrices. In this paper we exploit Hmatrix techniques to approximate the LUdecompositions of stiffness matrices as they appear in (finite element or finite difference) discretizations of convectiondominated el ..."
Abstract

Cited by 1 (1 self)
Hierarchical matrices provide a data-sparse way to approximate fully populated matrices. In this paper we exploit H-matrix techniques to approximate the LU-decompositions of stiffness matrices as they appear in (finite element or finite difference) discretizations of convection-dominated elliptic partial differential equations. These sparse H-matrix approximations may then be used as preconditioners in iterative methods. Whereas the approximation of the matrix inverse by an H-matrix requires some modification in the underlying index clustering when applied to convection-dominated problems, the H-LU decomposition works well in the standard H-matrix setting even in the convection-dominated case. We will complement our theoretical analysis with some numerical examples.
Key words. Hierarchical matrices, data-sparse approximation, preconditioning, convection-dominated problems
AMS subject classifications. 65F05, 65F30, 65F50
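An H-LU preconditioner requires a hierarchical-matrix library, but the underlying idea — use an approximate LU factorization of a convection-dominated stiffness matrix as a preconditioner for an iterative solver — can be illustrated with SciPy's incomplete LU, a deliberately simpler stand-in for the H-LU factorization discussed here (the 1-D model matrix and all names are our own):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def convection_diffusion_1d(n, eps=1e-2):
    """Upwind FD matrix for -eps u'' + u' on n interior points of (0, 1);
    small eps makes the problem convection-dominated."""
    h = 1.0 / (n + 1)
    diff, conv = eps / h**2, 1.0 / h
    return sp.diags([-diff - conv, 2 * diff + conv, -diff], [-1, 0, 1],
                    shape=(n, n), format="csc")

A = convection_diffusion_1d(200)
b = np.ones(200)
ilu = spla.spilu(A, drop_tol=1e-4)             # approximate LU factorization
M = spla.LinearOperator(A.shape, ilu.solve)    # preconditioner M ~ A^{-1}
x, info = spla.gmres(A, b, M=M, atol=1e-12)    # info == 0 on convergence
```

With a good approximate factorization the preconditioned GMRES iteration converges in very few steps, which is the effect the H-LU construction achieves at near-linear cost for the 2-D and 3-D matrices considered in the paper.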
An Efficient Algorithm for Graph Bisection of Triangularizations
"... Graph bisection is an elementary problem in graph theory. We consider the best known experimental algorithms and introduce a new algorithm called LongestPathAlgorithm. Applying this algorithm to the cluster tree generation of hierarchical matrices, arising for example in discretizations of partial ..."
Abstract

Cited by 1 (0 self)
Graph bisection is an elementary problem in graph theory. We consider the best known experimental algorithms and introduce a new algorithm called the Longest-Path-Algorithm. Applying this algorithm to the cluster tree generation of hierarchical matrices, arising for example in discretizations of partial differential equations, we show that this algorithm outperforms previous algorithms.
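The Longest-Path-Algorithm itself is not spelled out in this snippet; as a rough illustration of bisecting a mesh graph along a long path, here is a standard double-sweep BFS level-set bisection (all names and details are ours, not the paper's method):

```python
from collections import deque

def bfs_levels(adj, src):
    """Breadth-first distances from src; adj maps vertex -> neighbor list."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def level_set_bisection(adj):
    """Bisect a connected graph along an approximately longest shortest
    path: a double BFS sweep finds two far-apart vertices, then the BFS
    level sets are split at the median level."""
    start = next(iter(adj))
    d0 = bfs_levels(adj, start)
    u = max(d0, key=d0.get)          # sweep 1: a peripheral vertex
    d1 = bfs_levels(adj, u)          # sweep 2: levels along the long path
    cut = sorted(d1.values())[len(d1) // 2]
    part_a = {v for v in d1 if d1[v] < cut}
    return part_a, set(d1) - part_a
```

Recursive application of such a bisection to a finite-element mesh graph yields exactly the kind of cluster tree that hierarchical-matrix constructions consume.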