Results 1 to 10 of 143
The University of Florida sparse matrix collection
 NA DIGEST
, 1997
Abstract

Cited by 536 (17 self)
The University of Florida Sparse Matrix Collection is a large, widely available, and actively growing set of sparse matrices that arise in real applications. Its matrices cover a wide spectrum of problem domains, both those arising from problems with underlying 2D or 3D geometry (structural engineering, computational fluid dynamics, model reduction, electromagnetics, semiconductor devices, thermodynamics, materials, acoustics, computer graphics/vision, robotics/kinematics, and other discretizations) and those that typically do not have such geometry (optimization, circuit simulation, networks and graphs, economic and financial modeling, theoretical and quantum chemistry, chemical process simulation, mathematics and statistics, and power networks). The collection meets a vital need that artificially generated matrices cannot meet, and is widely used by the sparse matrix algorithms community for the development and performance evaluation of sparse matrix algorithms. The collection includes software for accessing and managing the collection from MATLAB, Fortran, and C.
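The collection's matrices are commonly distributed in the Matrix Market exchange format. The abstract mentions the collection's own MATLAB, Fortran, and C interfaces; purely as an illustration of what such a file contains, here is a minimal pure-Python sketch of a coordinate-format reader (not the collection's software, and it handles only the simplest `real general` case):

```python
# Minimal sketch: parse a Matrix Market "coordinate real general" file
# into COO-style (row, col, value) triplets. In practice one would use
# scipy.io.mmread or the collection's own interfaces.
from io import StringIO

def read_mm_coo(f):
    """Return ((nrows, ncols), rows, cols, vals) from a coordinate file."""
    header = f.readline()
    assert header.startswith("%%MatrixMarket"), "not a Matrix Market file"
    line = f.readline()
    while line.startswith("%"):          # skip comment lines
        line = f.readline()
    nrows, ncols, nnz = map(int, line.split())
    rows, cols, vals = [], [], []
    for _ in range(nnz):
        i, j, v = f.readline().split()
        rows.append(int(i) - 1)          # 1-based -> 0-based indices
        cols.append(int(j) - 1)
        vals.append(float(v))
    return (nrows, ncols), rows, cols, vals

sample = """%%MatrixMarket matrix coordinate real general
% 3x3 sparse example (made-up data)
3 3 4
1 1 2.0
2 2 3.0
3 1 -1.0
3 3 5.0
"""
shape, r, c, v = read_mm_coo(StringIO(sample))
print(shape, list(zip(r, c, v)))
```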
Computing communities in large networks using random walks
 J. of Graph Alg. and App.
, 2004
Abstract

Cited by 226 (3 self)
Dense subgraphs of sparse graphs (communities), which appear in most real-world complex networks, play an important role in many contexts. Computing them, however, is generally expensive. We propose here a measure of similarity between vertices based on random walks which has several important advantages: it captures the community structure of a network well, it can be computed efficiently, and it can be used in an agglomerative algorithm to compute the community structure of a network efficiently. We propose such an algorithm, called Walktrap, which runs in time O(mn²) and space O(n²) in the worst case, and in time O(n² log n) and space O(n²) in most real-world cases (n and m are respectively the number of vertices and edges in the input graph). Extensive comparison tests show that our algorithm surpasses previously proposed ones in the quality of the obtained community structures and stands among the best in running time.
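The random-walk vertex distance at the heart of Walktrap can be sketched in a few lines: compare the t-step walk distributions started at two vertices, weighting each coordinate by inverse degree. This toy pure-Python version omits the agglomerative merging phase that the full algorithm builds on top of this distance:

```python
# Sketch of the Pons-Latapy vertex distance used by Walktrap:
# r_ij = sqrt( sum_k (P^t_ik - P^t_jk)^2 / deg(k) ).
import math

def walk_distribution(adj, src, t):
    """Probability distribution after t steps of a random walk from src."""
    n = len(adj)
    p = [0.0] * n
    p[src] = 1.0
    for _ in range(t):
        q = [0.0] * n
        for u, mass in enumerate(p):
            if mass:
                share = mass / len(adj[u])   # uniform step to a neighbor
                for v in adj[u]:
                    q[v] += share
        p = q
    return p

def walktrap_distance(adj, i, j, t=3):
    pi = walk_distribution(adj, i, t)
    pj = walk_distribution(adj, j, t)
    return math.sqrt(sum((a - b) ** 2 / len(adj[k])
                         for k, (a, b) in enumerate(zip(pi, pj))))

# Two triangles {0,1,2} and {3,4,5} joined by the single edge 1-3.
adj = [[1, 2], [0, 2, 3], [0, 1], [1, 4, 5], [3, 5], [3, 4]]
same = walktrap_distance(adj, 0, 2)   # vertices in the same community
cross = walktrap_distance(adj, 0, 5)  # vertices in different communities
print(same < cross)
```

As expected, vertices inside one triangle are closer under this distance than vertices in different triangles.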
Directed scale-free graphs.
 In SODA’03,
, 2003
Abstract

Cited by 76 (5 self)
We introduce a model for directed scale-free graphs that grow with preferential attachment depending in a natural way on the in- and out-degrees. We show that the resulting in- and out-degree distributions are power laws with different exponents, reproducing observed properties of the world-wide web. We also derive exponents for the distribution of in-degrees (out-degrees) among vertices with fixed out-degree (in-degree). We conclude by suggesting a corresponding model with hidden variables.
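The growth rule can be sketched as follows: at each step, with probability α a new vertex attaches an out-edge to an existing vertex chosen proportionally to in-degree + d_in; with probability β an edge is added between two existing vertices; otherwise a new vertex receives an in-edge. This is a simplified toy version, and the parameter values and single self-loop start are illustrative, not taken from the paper:

```python
# Toy sketch of directed preferential attachment in the spirit of the
# Bollobas-Borgs-Chayes-Riordan model. alpha + beta + gamma = 1, with
# gamma = 1 - alpha - beta implied (here 0.05). All values illustrative.
import random

def directed_scale_free(steps, alpha=0.41, beta=0.54,
                        d_in=0.2, d_out=0.0, seed=0):
    rng = random.Random(seed)
    out_deg, in_deg = [1], [1]          # start from a single self-loop
    edges = [(0, 0)]

    def pick(degrees, delta):
        # roulette-wheel choice proportional to degree + delta
        total = sum(degrees) + delta * len(degrees)
        x = rng.uniform(0, total)
        for v, d in enumerate(degrees):
            x -= d + delta
            if x <= 0:
                return v
        return len(degrees) - 1

    for _ in range(steps):
        r = rng.random()
        if r < alpha:                   # new vertex v, edge v -> old w
            w = pick(in_deg, d_in)
            out_deg.append(0); in_deg.append(0)
            v = len(out_deg) - 1
        elif r < alpha + beta:          # edge between two old vertices
            v = pick(out_deg, d_out)
            w = pick(in_deg, d_in)
        else:                           # new vertex w, edge old v -> w
            v = pick(out_deg, d_out)
            out_deg.append(0); in_deg.append(0)
            w = len(out_deg) - 1
        out_deg[v] += 1; in_deg[w] += 1
        edges.append((v, w))
    return edges, out_deg, in_deg

edges, out_deg, in_deg = directed_scale_free(2000)
print(len(edges), max(in_deg))  # heavy-tailed in-degrees emerge
```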
Automatic Data Layout for High-Performance Fortran
 IN PROCEEDINGS OF SUPERCOMPUTING '95
, 1994
Abstract

Cited by 73 (3 self)
High Performance Fortran (HPF) is rapidly gaining acceptance as a language for parallel programming. The goal of HPF is to provide a simple yet efficient machine-independent parallel programming model. Besides the algorithm selection, the data layout choice is the key intellectual step in writing an efficient HPF program. The developers of HPF did not believe that data layouts could be determined automatically in all cases. Therefore, HPF requires the user to specify the data layout, and it is the task of the HPF compiler to generate efficient code for the user-supplied data layout. The choice ...
Automatic Data Layout Using 0-1 Integer Programming
 In Proceedings of the International Conference on Parallel Architectures and Compilation Techniques (PACT '94)
, 1994
Abstract

Cited by 65 (5 self)
The goal of languages like Fortran D or High Performance Fortran (HPF) is to provide a simple yet efficient machine-independent parallel programming model. By shifting much of the burden of machine-dependent optimization to the compiler, the programmer is able to write data-parallel programs that can be compiled and executed with good performance on many different architectures. However, the choice of a good data layout is still left to the programmer. Even the most sophisticated compiler may not be able to compensate for a poorly chosen data layout, since many compiler decisions are driven by the data layout specified in the program. The choice of a good data layout depends on many factors, including the target machine architecture, the compilation system, the problem size, and the number of processors available. The option of remapping arrays at specific points in the program makes the choice even harder. Current programming tools provide little or no support for this difficult selection ...
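The flavor of such a 0-1 formulation can be sketched on a toy instance: a binary variable selects one candidate layout per program phase, and the objective sums per-phase execution costs plus remapping costs between consecutive phases. Here a brute-force search stands in for the integer-programming solver, and all cost numbers are made up for illustration:

```python
# Toy 0-1 layout selection: choose one layout per phase minimizing
# execution cost plus remapping cost between phases. A real system
# would hand this objective to an integer-programming solver.
from itertools import product

# cost[p][l]: hypothetical cost of running phase p with layout l
cost = [[10, 4], [3, 8], [9, 2]]
REMAP = 5                         # hypothetical redistribution cost

def best_layouts(cost, remap):
    nphases, nlayouts = len(cost), len(cost[0])
    best = None
    for choice in product(range(nlayouts), repeat=nphases):
        total = sum(cost[p][l] for p, l in enumerate(choice))
        # pay the remap cost whenever consecutive phases differ
        total += sum(remap for a, b in zip(choice, choice[1:]) if a != b)
        if best is None or total < best[0]:
            best = (total, choice)
    return best

total, choice = best_layouts(cost, REMAP)
print(total, choice)
```

On this instance the remapping penalty makes it cheaper to keep layout 1 throughout, even though layout 0 is locally best for the middle phase.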
Centrality estimation in large networks
 INTL. JOURNAL OF BIFURCATION AND CHAOS, SPECIAL ISSUE ON COMPLEX NETWORKS’ STRUCTURE AND DYNAMICS
, 2007
Abstract

Cited by 55 (0 self)
Centrality indices are an essential concept in network analysis. For those based on shortest-path distances, the computation is at least quadratic in the number of nodes, since it usually involves solving the single-source shortest-paths (SSSP) problem from every node. Therefore, exact computation is infeasible for many large networks of interest today. Centrality scores can be estimated, however, from a limited number of SSSP computations. We present results from an experimental study of the quality of such estimates under various selection strategies for the source vertices.
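The sampling idea can be sketched for closeness centrality on an unweighted, connected graph: run BFS (the SSSP step for unit weights) from k sampled pivots instead of all n vertices, and estimate each vertex's closeness from the sampled distances alone. This is a toy uniform-sampling version; the paper compares several pivot-selection strategies:

```python
# Pivot-based closeness estimation: k BFS runs instead of n.
import random
from collections import deque

def bfs_dist(adj, src):
    """Hop distances from src in an unweighted graph (assumed connected)."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def estimated_closeness(adj, k, seed=0):
    n = len(adj)
    pivots = random.Random(seed).sample(range(n), k)
    total = [0] * n
    for s in pivots:                    # k SSSP computations
        d = bfs_dist(adj, s)
        for v in range(n):
            total[v] += d[v]
    # closeness estimate: inverse average distance to the sampled pivots
    return [k / t if t else 0.0 for t in total]

# Path graph 0-1-2-3-4: the middle vertex is the most central.
adj = [[1], [0, 2], [1, 3], [2, 4], [3]]
scores = estimated_closeness(adj, k=5)  # k = n recovers exact closeness
print(max(range(5), key=lambda v: scores[v]))
```

With k = n the estimate is exact; the interesting regime is k much smaller than n, where the quality depends on how the pivots are chosen.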
Optimal Evaluation of Array Expressions on Massively Parallel Machines
 ACM TRANS. PROG. LANG. SYST
, 1992