Results 1–10 of 17
Experimental Study on Speed-Up Techniques for Timetable Information Systems
 In Proceedings of the 7th Workshop on Algorithmic Approaches for Transportation Modeling, Optimization, and Systems (ATMOS 2007)
, 2007
Abstract

Cited by 18 (10 self)
In recent years, impressive speed-up techniques for Dijkstra's algorithm have been developed. Unfortunately, recent research has focused mainly on road networks. However, fast algorithms are also needed for other applications such as timetable information systems. Worse, adapting recently developed techniques to timetable information is more complicated than expected. In this work, we check whether results from road networks transfer to timetable information. To this end, we present an extensive experimental study of the most prominent speed-up techniques on different types of inputs. It turns out that recently developed techniques are much slower on graphs derived from timetable information than on road networks. In addition, we gain surprising insights into the behavior of speed-up techniques in general.
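For reference, the plain Dijkstra baseline that these speed-up techniques accelerate can be sketched as follows (a minimal binary-heap version; the adjacency-list encoding and node names are our own illustration, not the paper's):

```python
import heapq

def dijkstra(graph, source):
    """Plain Dijkstra with a binary heap.
    graph: dict mapping node -> list of (neighbor, weight) pairs."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry; u was already settled via a shorter path
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# tiny illustrative network
g = {"a": [("b", 2), ("c", 5)], "b": [("c", 1)], "c": []}
print(dijkstra(g, "a"))  # {'a': 0, 'b': 2, 'c': 3}
```

Speed-up techniques such as goal direction or hierarchy preprocessing prune the search space of exactly this loop, which is why their effectiveness depends so strongly on the input graph's structure.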
A space-efficient parallel algorithm for computing betweenness centrality in distributed memory
 In Proc. Int'l Conf. on High Performance Computing (HiPC 2010)
, 2010
Abstract

Cited by 14 (0 self)
Abstract—Betweenness centrality is a measure based on shortest paths that attempts to quantify the relative importance of nodes in a network. As computation of betweenness centrality becomes increasingly important in areas such as social network analysis, networks of interest are becoming too large to fit in the memory of a single processing unit, making parallel execution a necessity. Parallelization over the vertex set of the standard algorithm, with a final reduction of the centrality for each vertex, is straightforward but requires Ω(V²) storage. In this paper we present a new parallelizable algorithm with low spatial complexity that is based on the best known sequential algorithm. Our algorithm requires O(V + E) storage and enables efficient parallel execution. It is especially well suited to distributed-memory processing because it can be implemented using coarse-grained parallelism. The presented time bounds for parallel execution of our algorithm on a CRCW PRAM and on distributed-memory systems both show good asymptotic performance. Experimental results on a distributed-memory computer show the practical applicability of our algorithm.
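The best known sequential algorithm referred to here is Brandes' dependency-accumulation scheme, whose per-source state is O(V + E). A minimal sequential sketch for unweighted graphs (our own illustration of Brandes' method, not the paper's parallel code):

```python
from collections import deque

def betweenness(adj):
    """Brandes' sequential algorithm for unweighted graphs.
    adj: dict node -> list of neighbors. Returns dict node -> centrality
    (unnormalized, counting both directions of each pair)."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        # forward BFS from s, counting shortest paths (sigma)
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        dist = {v: -1 for v in adj}; dist[s] = 0
        preds = {v: [] for v in adj}
        order, q = [], deque([s])
        while q:
            u = q.popleft(); order.append(u)
            for v in adj[u]:
                if dist[v] < 0:
                    dist[v] = dist[u] + 1
                    q.append(v)
                if dist[v] == dist[u] + 1:
                    sigma[v] += sigma[u]
                    preds[v].append(u)
        # back-propagate dependencies in reverse BFS order
        delta = {v: 0.0 for v in adj}
        for v in reversed(order):
            for u in preds[v]:
                delta[u] += sigma[u] / sigma[v] * (1 + delta[v])
            if v != s:
                bc[v] += delta[v]
    return bc

path = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
print(betweenness(path))  # b lies on a->c and c->a: {'a': 0.0, 'b': 2.0, 'c': 0.0}
```

The per-source structures (`sigma`, `dist`, `preds`, `delta`) are what keep storage linear in V + E, in contrast to the Ω(V²) of naive vertex-set parallelization.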
Scalable In-Memory RDFS Closure on Billions of Triples
 In Proc. SSWS, volume 669 of CEUR WS Proceedings
Abstract

Cited by 6 (2 self)
Abstract. We present an RDFS closure algorithm, specifically designed for and implemented on the Cray XMT supercomputer, that obtains inference rates of 13 million inferences per second on the largest system configuration we used. The Cray XMT, with its large global memory (4 TB for our experiments), permits the construction of a conceptually straightforward algorithm, fundamentally a series of operations on a shared hash table. Each thread is given a partition of triple data to process, a dedicated copy of the ontology to apply to the data, and a reference to the hash table into which it inserts inferred triples. The global nature of the hash table allows the algorithm to avoid a common obstacle for distributed-memory machines: the creation of duplicate triples. On LUBM data sets ranging between 1.3 billion and 5.3 billion triples, we obtain nearly linear speedup except for two portions: file I/O, which can be ameliorated with additional service nodes, and data structure initialization, which requires nearly constant time for runs involving 32 processors or more.
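The core mechanism, inserting inferred triples into one shared hash table so that duplicates are suppressed automatically, can be illustrated with a sequential toy analogue. Only the RDFS subclass rule (rdfs9) is applied here, and all names are illustrative:

```python
def rdfs_subclass_closure(triples, subclass_of):
    """Toy fixpoint applying rdfs9: (x type C) and (C subClassOf D) => (x type D).
    A single set stands in for the shared hash table that suppresses
    duplicate inferences on the Cray XMT."""
    closed = set(triples)
    frontier = list(closed)
    while frontier:
        s, p, o = frontier.pop()
        if p == "type" and o in subclass_of:
            for sup in subclass_of[o]:
                inferred = (s, "type", sup)
                if inferred not in closed:  # duplicate check via the shared set
                    closed.add(inferred)
                    frontier.append(inferred)
    return closed

facts = {("alice", "type", "Student")}
hierarchy = {"Student": ["Person"], "Person": ["Agent"]}
print(len(rdfs_subclass_closure(facts, hierarchy)))  # 3 triples after closure
```

In the parallel setting, the membership test and insertion must be a single atomic operation on the shared table; the global memory of the XMT makes that possible without the duplicate-triple exchanges a distributed-memory machine would need.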
Distributed SociaLite: A Datalog-Based Language for Large-Scale Graph Analysis
Abstract

Cited by 5 (0 self)
Large-scale graph analysis is becoming important with the rise of worldwide social network services. Recently, in SociaLite, we proposed extensions to Datalog to efficiently and succinctly implement graph analysis programs on sequential machines. This paper describes novel extensions and optimizations of SociaLite for parallel and distributed execution to support large-scale graph analysis. With distributed SociaLite, programmers simply annotate how data are to be distributed; the necessary communication is then automatically inferred to generate parallel code for a cluster of multi-core machines. It optimizes the evaluation of recursive monotone aggregate functions using a delta-stepping technique. In addition, approximate computation is supported in SociaLite, allowing programmers to trade accuracy for less time and space. We evaluated SociaLite with six core graph algorithms used in many social network analyses. Our experiment with 64 Amazon EC2 8-core instances shows that SociaLite programs performed within a factor of two of ideal weak scaling. Compared to optimized Giraph, an open-source alternative to Pregel, SociaLite programs are 4 to 12 times faster across benchmark algorithms, and 22 times more succinct on average. As a declarative query language, SociaLite, with the help of a compiler that generates efficient parallel and approximate code, can be used easily to create many social apps that operate on large-scale distributed graphs.
Advanced Shortest Paths Algorithms on a Massively-Multithreaded Architecture
Abstract

Cited by 4 (0 self)
We present a study of multithreaded implementations of Thorup's algorithm for solving the Single Source Shortest Path (SSSP) problem for undirected graphs. Our implementations leverage the fledgling MultiThreaded Graph Library (MTGL) to perform operations such as finding connected components and extracting induced subgraphs. To achieve good parallel performance from this algorithm, we deviate from several theoretically optimal algorithmic steps. In this paper, we present simplifications that perform better in practice, and we describe details of the multithreaded implementation that were necessary for scalability. We study synthetic graphs that model unstructured networks, such as social networks and economic transaction networks. Most of the recent progress in shortest path algorithms relies on structure that these networks do not have. In this work, we take a step back and explore the synergy between an elegant theoretical algorithm and an elegant computer architecture. Finally, we conclude with a prediction that this work will become relevant to shortest path computation on structured networks.
Parallel Computation of Best Connections in Public Transportation Networks
 Journal version, submitted for publication. Available online at i11www.iti.uni-karlsruhe
, 2011
Abstract

Cited by 4 (3 self)
Abstract—Exploiting parallelism in route-planning algorithms is a challenging algorithmic problem with obvious applications in mobile navigation and timetable information systems. In this work, we present a novel algorithm for the so-called one-to-all profile-search problem in public transportation networks. It answers, in a single query, the question of all fastest connections between a given station S and any other station at any time of the day. This algorithm allows for a very natural parallelization, yielding excellent speedups on standard multi-core servers. Our approach exploits two facts: first, time-dependent travel-time functions in such networks can be represented as a special class of piecewise linear functions, and second, only few connections from S are useful for traveling far away. Introducing the connection-setting property, we are able to extend Dijkstra's algorithm in a sound manner. Furthermore, we also accelerate station-to-station queries by preprocessing important connections within the public transportation network. As a result, we are able to compute all relevant connections between two random stations in the complete public transportation network of a big city (Los Angeles) on a standard multi-core server in less than 55 ms on average.
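The first fact mentioned, that time-dependent travel-time functions are piecewise linear and that best connections arise from their pointwise minimum, can be illustrated with a toy sketch. The breakpoint representation below is our assumption for illustration, not the paper's actual data structure:

```python
def eval_profile(breakpoints, t):
    """Evaluate a piecewise-linear travel-time function given as sorted
    (departure_time, travel_time) breakpoints, interpolating linearly and
    clamping outside the sampled range."""
    pts = sorted(breakpoints)
    if t <= pts[0][0]:
        return pts[0][1]
    if t >= pts[-1][0]:
        return pts[-1][1]
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= t <= x1:
            return y0 + (y1 - y0) * (t - x0) / (x1 - x0)

def min_profile(p1, p2, times):
    """Pointwise minimum of two profiles sampled at the given departure
    times -- the fastest available connection at each time."""
    return [min(eval_profile(p1, t), eval_profile(p2, t)) for t in times]

route_a = [(0, 30), (60, 10)]   # travel time falls over the hour
route_b = [(0, 15), (60, 25)]   # travel time rises
print(min_profile(route_a, route_b, [0, 30, 60]))  # [15, 20.0, 10]
```

A one-to-all profile search propagates and merges such functions instead of scalar distances, which is what makes the connection-setting extension of Dijkstra's algorithm necessary.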
Compact Graph Representations and Parallel Connectivity Algorithms for Massive Dynamic Network Analysis
Abstract

Cited by 2 (1 self)
Graph-theoretic abstractions are extensively used to analyze massive data sets. Temporal data streams from socio-economic interactions, social networking web sites, communication traffic, and scientific computing can be intuitively modeled as graphs. We present the first study of novel high-performance combinatorial techniques for analyzing large-scale information networks, encapsulating dynamic interaction data on the order of billions of entities. We present new data structures to represent dynamic interaction networks, and discuss algorithms for processing parallel insertions and deletions of edges in small-world networks. With these new approaches, we achieve an average performance rate of 25 million structural updates per second and a parallel speedup of nearly 28 on a 64-way Sun UltraSPARC T2 multi-core processor, for insertions and deletions to a small-world network of 33.5 million vertices and 268 million edges. We also design parallel implementations of fundamental dynamic graph kernels related to connectivity and centrality queries. Our implementations are freely distributed as part of the open-source SNAP (Small-world Network Analysis and Partitioning) complex network analysis framework.
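The kind of structural-update stream benchmarked here, batched edge insertions and deletions against an adjacency structure, can be sketched with a toy sequential analogue (SNAP's actual concurrent data structures are far more elaborate; the update encoding below is our own):

```python
class DynamicGraph:
    """Toy dynamic undirected graph supporting batched edge updates,
    mirroring the structural-update stream described in the abstract."""

    def __init__(self):
        self.adj = {}

    def _touch(self, v):
        self.adj.setdefault(v, set())

    def apply_batch(self, updates):
        """updates: iterable of (op, u, v) where op is '+' (insert)
        or '-' (delete) for the undirected edge {u, v}."""
        for op, u, v in updates:
            self._touch(u); self._touch(v)
            if op == '+':
                self.adj[u].add(v); self.adj[v].add(u)
            else:
                self.adj[u].discard(v); self.adj[v].discard(u)

    def degree(self, v):
        return len(self.adj.get(v, ()))

g = DynamicGraph()
g.apply_batch([('+', 1, 2), ('+', 1, 3), ('-', 1, 2)])
print(g.degree(1))  # 1
```

Batching matters because it lets a parallel implementation process many independent updates concurrently; the hard part, which the paper addresses, is doing so without contention on high-degree vertices typical of small-world networks.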
Social Media and Social Reality: Theory, Evidence and Validation
 In IEEE Intelligence and Security Informatics Conference, Workshop on Current Issues in Predictive Approaches to Intelligence and Security Analytics (PAISA-10)
, 2010
Abstract

Cited by 1 (1 self)
Abstract—Social media provide an exciting and novel view into social phenomena. The vast amounts of data that can be gathered from the Internet, coupled with massively parallel supercomputers such as the Cray XMT, open new vistas for research. Conclusions drawn from such analysis must recognize that social media are distinct from the underlying social reality. Rigorous validation is essential. This paper briefly presents results obtained from computational analysis of social media, utilizing both blog and Twitter data. Validation of these results is discussed in the context of a framework of established methodologies from the social sciences. Finally, an outline for a set of supporting studies is proposed.
A Study of Different Parallel Implementations of Single Source Shortest Path Algorithms
Abstract

Cited by 1 (0 self)
We present a study of parallel implementations of single source shortest path (SSSP) algorithms. Over the last three decades, a number of parallel SSSP algorithms have been developed and implemented on different types of machines. We divide these implementations into two groups: first, those where parallelization is achieved within the internal operations of a sequential SSSP algorithm; and second, those where the graph is divided into subgraphs and a serial SSSP algorithm executes in parallel on a separate processing unit for each subgraph. These parallel implementations have used the PRAM model, Cray supercomputers, dynamically reconfigurable processors, and graphics processing units as their platforms.
A Multilevel Simplification Algorithm for Computing the Average Shortest-Path Length of Scale-Free Complex Network
Abstract
Computing the average shortest-path length (ASPL) of a large scale-free network needs much memory space and computation time. Based on the features of scale-free networks, we present a simplification algorithm that cuts the suspension points and their connected edges; the ASPL of the original network can then be computed from that of the simplified network. We also present a multilevel simplification algorithm to obtain the ASPL of the original network directly from that of the multiply simplified network. Our experiments show that these algorithms require less memory space and time when computing the ASPL of a scale-free network, which makes it possible to analyze large networks that were previously intractable due to memory limitations.
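Assuming "suspension points" denotes degree-one (pendant) vertices, which is our reading and not stated explicitly in the abstract, one simplification pass can be sketched as follows (the bookkeeping of attachment points, needed to recover the original ASPL, is only hinted at here):

```python
def prune_pendants(adj):
    """One simplification pass: remove degree-one ('suspension') vertices
    together with their incident edges. adj: dict node -> set of neighbors."""
    adj = {v: set(ns) for v, ns in adj.items()}  # work on a copy
    removed = []
    for v in [u for u, ns in adj.items() if len(ns) == 1]:
        if v not in adj or len(adj[v]) != 1:
            continue  # degree changed while pruning earlier pendants
        (u,) = adj[v]           # v's only neighbor
        adj[u].discard(v)
        del adj[v]
        removed.append((v, u))  # remember attachment point for ASPL recovery
    return adj, removed

# hub h connected to a, b, c, plus an a-b edge; only c is a pendant
g = {"h": {"a", "b", "c"}, "a": {"h", "b"}, "b": {"h", "a"}, "c": {"h"}}
core, removed = prune_pendants(g)
print(sorted(core))  # ['a', 'b', 'h'] -- pendant c was cut
```

Repeating this pass yields the multilevel simplification: each level shrinks the graph further, and the recorded attachments allow the shortest-path sums of the pruned vertices to be added back without ever holding the full network in memory.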