Results 1-10 of 10
Online estimating the k central nodes of a network
In Proc. of the IEEE Network Science Workshop (NSW), 2011
Abstract

Cited by 6 (4 self)
Estimating the most influential nodes in a network is a fundamental problem in network analysis. Influential nodes may be important spreaders of diseases in biological networks, key actors in terrorist networks, or marketing targets in social networks.
STREAMER: a distributed framework for incremental closeness centrality computation
In Proc. of IEEE Cluster, 2013
Abstract

Cited by 4 (2 self)
Networks are commonly used to model traffic patterns, social interactions, or web pages. The nodes in a network do not possess the same characteristics: some nodes are naturally more connected and some nodes can be more important. Closeness centrality (CC) is a global metric that quantifies how important a given node is in the network. When the network is dynamic and keeps changing, the relative importance of the nodes also changes. The cost of the best known algorithm to compute the CC scores makes it impractical to recompute them from scratch after each modification. In this paper, we propose STREAMER, a distributed-memory framework for incrementally maintaining the closeness centrality scores of a network upon changes. It leverages pipelined and replicated parallelism and takes NUMA effects into account. It speeds up the maintenance of the CC scores of a real graph with 916K vertices and 4.3M edges by a factor of 497 on a 64-node cluster.
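The closeness definition underlying this abstract can be stated in a few lines. Below is a minimal from-scratch baseline in Python (one BFS per vertex — exactly the recomputation cost STREAMER avoids repeating after each change); the function and variable names are illustrative, not the framework's API:

```python
from collections import deque

def closeness_centrality(adj):
    """Closeness centrality for every vertex of an unweighted graph.

    adj: dict mapping each vertex to an iterable of its neighbours.
    Returns cc[v] = (n - 1) / sum of shortest-path distances from v
    (0.0 when v reaches no other vertex).
    """
    n = len(adj)
    cc = {}
    for source in adj:
        # One BFS per source: O(n + m) each, O(n*(n + m)) overall --
        # the cost that makes from-scratch recomputation impractical.
        dist = {source: 0}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total = sum(dist.values())
        cc[source] = (n - 1) / total if total > 0 else 0.0
    return cc

# Small example: a path a - b - c (the middle vertex is "closest" to the rest)
graph = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
scores = closeness_centrality(graph)
```

On the path graph, the middle vertex b gets the maximum score 1.0 while the endpoints get 2/3, matching the intuition that central vertices minimize total distance.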
k-Centralities: Local Approximations of Global Measures Based on Shortest Paths
Abstract

Cited by 2 (0 self)
Many centrality measures have been developed to analyze different aspects of importance. Some of the most popular centrality measures (e.g., betweenness centrality, closeness centrality) are based on the calculation of shortest paths. This characteristic limits the applicability of these measures for larger networks. In this article we elaborate on the idea of bounded-distance shortest-path calculations. We formulate criteria for k-centrality measures and introduce one algorithm for calculating both betweenness- and closeness-based centralities. We also present normalizations for these measures. We show that k-centrality measures are good approximations for the corresponding centrality measures, achieving a tremendous gain in calculation time and linear calculation complexity Θ(n) for networks with constant average degree. This allows researchers to approximate centrality measures based on shortest paths for networks with millions of nodes, or with high frequency in dynamically changing networks.
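The bounded-distance idea can be sketched as a BFS truncated at depth k, so only the k-hop ball around a vertex contributes to its score. This is a hedged illustration of the general technique, not the paper's algorithm or normalization; all names are invented:

```python
from collections import deque

def k_closeness(adj, source, k):
    """Bounded-distance ("k-centrality") closeness approximation.

    A BFS from `source` truncated at depth k: only vertices within
    k hops contribute distances. Returns (reached - 1) / sum(dist),
    one plausible unnormalized variant of the idea.
    """
    dist = {source: 0}
    queue = deque([source])
    total = 0
    while queue:
        u = queue.popleft()
        if dist[u] == k:          # stop expanding beyond the k-hop ball
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                total += dist[v]
                queue.append(v)
    return (len(dist) - 1) / total if total else 0.0

# Path a - b - c - d: with k = 2 only b and c are seen from a.
path = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
approx = k_closeness(path, "a", 2)
```

Each truncated BFS touches at most the k-hop neighbourhood, which is of constant size for constant average degree — the source of the Θ(n) total complexity claimed above. When k reaches the diameter, the value coincides with the exact (unnormalized) closeness.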
Incremental Algorithms for Network Management and Analysis based on Closeness Centrality
, 2013
Abstract

Cited by 2 (1 self)
Analyzing networks requires complex algorithms to extract meaningful information. Centrality metrics have been shown to correlate with the importance and loads of the nodes in network traffic. Here, we are interested in the problem of centrality-based network management. The problem has many applications, such as verifying the robustness of the networks and controlling or improving entity dissemination. It can be defined as finding a small set of topological network modifications which yield a desired closeness centrality configuration. As a fundamental building block to tackle that problem, we propose incremental algorithms which efficiently update the closeness centrality values upon changes in network topology, i.e., edge insertions and deletions. Our algorithms prove to be efficient on many real-life networks, especially on small-world networks, which have a small diameter and a spike-shaped shortest-distance distribution. In addition to closeness centrality, they can also serve as a powerful tool for shortest-path-based management and analysis of networks. We experimentally validate the efficiency of our algorithms on large networks and show that they update the closeness centrality values of the temporal DBLP co-authorship network of 1.2 million users 460 times faster than it would take to compute them from scratch. To the best of our knowledge, this is the first work which can yield practical large-scale network management based on closeness centrality values.
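One common filtering idea behind such incremental updates (sketched here as an assumption about the building block, with illustrative names — the paper applies further filters and optimizations): after inserting edge (u, v), a vertex w can only see a shortest-path change if its old distances to u and to v differ by at least 2.

```python
from collections import deque

def bfs_dist(adj, source):
    """Hop distances from `source` in an unweighted graph (BFS)."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        x = queue.popleft()
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                queue.append(y)
    return dist

def affected_by_insertion(adj, u, v):
    """Vertices whose closeness may change when edge (u, v) is inserted.

    Level-difference filter: in the graph *before* the insertion, if
    |d(w,u) - d(w,v)| <= 1 then no shortest path from w can shorten,
    so w's closeness value is untouched. Only the remaining vertices
    need a fresh BFS instead of recomputing everything from scratch.
    """
    du = bfs_dist(adj, u)
    dv = bfs_dist(adj, v)
    inf = float("inf")
    return {w for w in adj
            if abs(du.get(w, inf) - dv.get(w, inf)) >= 2}

# Path a - b - c - d; consider inserting the shortcut edge (a, d).
path = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
touched = affected_by_insertion(path, "a", "d")
```

On the path graph only the endpoints a and d need updating: b and c sit one hop apart from u and v, so their distances (and hence closeness values) cannot shrink. On small-world networks with spike-shaped distance distributions, most vertices pass this filter, which is what makes such updates fast.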
Incremental Algorithms for Closeness Centrality, IEEE BigData’13
Abstract

Cited by 1 (0 self)
... citation graphs
• Facebook has a billion users and a trillion connections
• Twitter has more than 200 million users
• Who is more important in a network? Who controls the flow between nodes?
• Centrality metrics answer these questions
• Closeness Centrality (CC) is an intriguing metric
• How to handle changes? Incremental algorithms are essential
Fast exact and approximate computation of betweenness centrality in social networks
Abstract
Social networks have proven in the last few years to be a powerful and flexible concept for representing and analyzing data emerging from social interactions and social activities. The study of these networks can thus provide a deeper understanding of many emergent global phenomena. The amount of data available in the form of social networks is growing by the day. This poses many computationally challenging problems for their analysis. In fact, many analysis tools suitable for small- to medium-sized networks are inefficient for large social networks. The computation of the betweenness centrality index (BC) is a well-established method for network data analysis, and it is also important as a subroutine in more advanced algorithms, such as the Girvan-Newman method for graph partitioning. In this paper we present a novel approach for the computation of betweenness centrality which considerably speeds up Brandes' algorithm (the current state of the art) in the context of social networks.
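Brandes' algorithm, the baseline this paper accelerates, fits in a short sketch. This is the textbook unweighted variant (one BFS per source, then dependency accumulation in reverse BFS order), not the authors' optimized version:

```python
from collections import deque

def brandes_betweenness(adj):
    """Exact betweenness centrality of an unweighted, undirected graph.

    Brandes' method: one BFS per source counts shortest-path
    multiplicities (sigma), then dependencies (delta) are accumulated
    from the farthest vertices back, giving O(nm) time instead of
    the naive O(n^3). adj: dict vertex -> iterable of neighbours.
    """
    bc = {v: 0.0 for v in adj}
    for s in adj:
        pred = {v: [] for v in adj}     # predecessors on shortest paths
        sigma = {v: 0 for v in adj}     # number of shortest s-v paths
        dist = {v: -1 for v in adj}
        sigma[s], dist[s] = 1, 0
        order, queue = [], deque([s])
        while queue:
            u = queue.popleft()
            order.append(u)
            for w in adj[u]:
                if dist[w] < 0:
                    dist[w] = dist[u] + 1
                    queue.append(w)
                if dist[w] == dist[u] + 1:
                    sigma[w] += sigma[u]
                    pred[w].append(u)
        # Dependency accumulation, farthest vertices first.
        delta = {v: 0.0 for v in adj}
        for w in reversed(order):
            for u in pred[w]:
                delta[u] += sigma[u] / sigma[w] * (1.0 + delta[w])
            if w != s:
                bc[w] += delta[w]
    # Each undirected pair was counted from both endpoints.
    return {v: b / 2.0 for v, b in bc.items()}

# Path a - b - c: every a-c shortest path crosses b.
path = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
bc = brandes_betweenness(path)
```

The per-source BFS is embarrassingly parallel across sources, which is what the GPU and sampling-based speedups in this line of work exploit.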
Algorithms, Performance
Abstract
The betweenness centrality metric has always been intriguing for graph analyses and is used in various applications. Yet, it is one of the most computationally expensive kernels in graph mining. In this work, we investigate a set of techniques to make betweenness centrality computations faster on GPUs as well as on heterogeneous CPU/GPU architectures. Our techniques are based on virtualization of the vertices with high degree, strided access to adjacency lists, removal of the vertices with degree 1, and graph ordering. By combining these techniques within a fine-grain parallelism, we reduced the computation time on GPUs significantly for a set of social networks. On CPUs, which usually have access to a large amount of memory, we used a coarse-grain parallelism. We showed that heterogeneous computing, i.e., using both architectures at the same time, is a promising solution for betweenness centrality. Experimental results show that the proposed techniques can greatly reduce the centrality computation time for networks. In particular, they reduce the computation time on a graph with 234 million edges from more than 4 months to less than 12 days.
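The vertex-virtualization technique named above can be illustrated on a CSR graph: a high-degree vertex's adjacency range is split into fixed-size chunks so each GPU thread gets a similarly sized unit of work. A minimal sketch, assuming a CSR `row_ptr` array; names and the exact output layout are illustrative, not the paper's data structures:

```python
def virtualize(row_ptr, max_deg):
    """Vertex virtualization for GPU load balancing (sketch).

    Splits each vertex's adjacency range in a CSR graph into chunks
    of at most `max_deg` edges, so skewed degree distributions do not
    leave one thread with millions of edges while others idle.
    Returns (original vertex id, edge range start, edge range end)
    for each virtual vertex.
    """
    virtual = []
    for v in range(len(row_ptr) - 1):
        start, end = row_ptr[v], row_ptr[v + 1]
        for chunk in range(start, end, max_deg):
            virtual.append((v, chunk, min(chunk + max_deg, end)))
    return virtual

# Three vertices with degrees 5, 1, and 2, chunked to at most 2 edges:
vv = virtualize([0, 5, 6, 8], 2)
```

The degree-5 vertex becomes three virtual vertices and the others map one-to-one, so the per-thread edge count never exceeds the chunk size.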
A Survey of Sybil Attacks in Networks
Abstract
Most peer-to-peer systems are vulnerable to Sybil attacks. The Sybil attack is an attack wherein a reputation system is subverted by a considerable number of forged identities in peer-to-peer networks. By illegitimately infusing false or biased information via the pseudonymous identities, an adversary can mislead a system into making decisions benefiting herself. For example, in a distributed voting system, an adversary can easily change the overall popularity of an option by providing plenty of false praise, or by badmouthing the option through these fake identities. In this paper, we summarize the existing Sybil defense techniques and further suggest some new research areas. Unlike traditional surveys about Sybil defense, we first categorize the Sybil defense methods, mainly according to when they were designed, and then classify the methods by their approaches. We believe that by understanding the evolution of the solutions, readers can gain more insight into the problem. In a nutshell, research on Sybil defense techniques has experienced four phases: (1) traditional security-key-based approaches, (2) solutions based on specific peer-to-peer system features, (3) social-network-based methods, and (4) social-community-based techniques. Besides all of these anti-Sybil methods, readers will also find some Sybil-attack-related topics, such as sockpuppets in online discussion forums. At the end of the paper, we provide some predictions about directions for future research.
Large(r) Networks and Centrality
Abstract
... citation graphs
• Facebook has a billion users and a trillion connections
• Twitter has more than 200 million users
Incremental Closeness Centrality in Distributed Memory
, 2015
Abstract
Networks are commonly used to model traffic patterns, social interactions, or web pages. The vertices in a network do not possess the same characteristics: some vertices are naturally more connected and some vertices can be more important. Closeness centrality (CC) is a global metric that quantifies how important a given vertex is in the network. When the network is dynamic and keeps changing, the relative importance of the vertices also changes. The cost of the best known algorithm to compute the CC scores makes it impractical to recompute them from scratch after each modification. In this paper, we propose Streamer, a distributed-memory framework for incrementally maintaining the closeness centrality scores of a network upon changes. It leverages pipelined and replicated parallelism and SpMM-based BFSs, and it takes NUMA effects into account. It makes maintaining the closeness centrality values of real-life networks with millions of interactions significantly faster and obtains almost linear speedups on a 64-node cluster with 8 threads per node.
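The "SpMM-based BFS" phrase refers to batching many BFS sources into one sweep, as in multiplying the adjacency matrix by a (vertex x source) frontier matrix over the boolean semiring. A minimal stdlib stand-in uses one bitmask per vertex with a bit per source; this illustrates the batching idea only and is not Streamer's implementation:

```python
def multi_source_bfs(adj, sources):
    """Multi-source BFS in the SpMM style (sketch).

    Every vertex carries a bitmask of the sources that have reached
    it; each level performs one sweep over the edges that ORs
    neighbour masks -- the boolean-semiring analogue of one
    adjacency-matrix x frontier-matrix product.
    Returns dist[v][s] = hop distance from sources[s] to vertex v
    (-1 when unreachable). Vertices are integers 0..n-1.
    """
    n = len(adj)
    reached = [0] * n                 # bitmask of sources that reached v
    dist = [[-1] * len(sources) for _ in range(n)]
    frontier = [0] * n
    for s, v in enumerate(sources):
        frontier[v] |= 1 << s
        dist[v][s] = 0
    level = 0
    while any(frontier):
        for v in range(n):
            reached[v] |= frontier[v]
        level += 1
        nxt = [0] * n
        for u in range(n):            # the "SpMM" step: OR masks along edges
            if frontier[u]:
                for v in adj[u]:
                    nxt[v] |= frontier[u]
        for v in range(n):
            new = nxt[v] & ~reached[v]   # keep only newly arrived sources
            nxt[v] = new
            s, mask = 0, new
            while mask:
                if mask & 1:
                    dist[v][s] = level
                mask >>= 1
                s += 1
        frontier = nxt
    return dist

# Path 0 - 1 - 2 - 3, BFS simultaneously from both endpoints.
path = [[1], [0, 2], [1, 3], [2]]
dist = multi_source_bfs(path, [0, 3])
```

Packing sources into machine words turns many scalar BFS traversals into a few wide bitwise operations per edge, which is what makes the matrix formulation attractive on clusters and NUMA machines.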