Small-World Phenomena and the Dynamics of Information
In Advances in Neural Information Processing Systems (NIPS) 14, 2001
Abstract

Cited by 175 (5 self)
Introduction: The problem of searching for information in networks like the World Wide Web can be approached in a variety of ways, ranging from centralized indexing schemes to decentralized mechanisms that navigate the underlying network without knowledge of its global structure. The decentralized approach appears in a variety of settings: in the behavior of users browsing the Web by following hyperlinks; in the design of focused crawlers [4, 5, 8] and other agents that explore the Web's links to gather information; and in the search protocols underlying decentralized peer-to-peer systems such as Gnutella [10], Freenet [7], and recent research prototypes [21, 22, 23], through which users can share resources without a central server. In recent work, we have been investigating the problem of decentralized search in large information networks [14, 15]. Our initial motivation was an experiment that dealt directly with the search problem in a decidedly pre-Internet context: Stanley Milgram ...
Know thy Neighbor's Neighbor: the Power of Lookahead in Randomized P2P Networks
In Proceedings of the 36th ACM Symposium on Theory of Computing (STOC), 2004
Abstract

Cited by 103 (5 self)
Several peer-to-peer networks are based upon randomized graph topologies that permit efficient greedy routing, e.g., randomized hypercubes, randomized Chord, skip-graphs, and constructions based upon small-world percolation networks. In each of these networks, a node has out-degree Θ(log n), where n denotes the total number of nodes, and greedy routing is known to take O(log n) hops on average. We establish lower bounds for greedy routing in these networks, and analyze Neighbor-of-Neighbor (NoN) greedy routing. The idea behind NoN, as the name suggests, is to take a neighbor's neighbors into account when making routing decisions.
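The one-step lookahead idea can be sketched on a toy ring overlay. This is an illustrative sketch, not the paper's construction: the ring metric, adjacency encoding, and function names are our own, and real NoN routing operates on randomized long-range links with Θ(log n) degree.

```python
# Sketch of Neighbor-of-Neighbor (NoN) greedy routing on a toy ring overlay.
# The ring metric and graph encoding are illustrative assumptions.

def ring_dist(a, b, n):
    """Shortest distance between node ids a and b on a ring of n nodes."""
    d = abs(a - b)
    return min(d, n - d)

def non_greedy_route(adj, src, dst, n, max_hops=100):
    """Route from src to dst. Instead of moving to the neighbor closest to
    dst (plain greedy), move to the neighbor whose own neighborhood gets
    closest to dst (one-step lookahead). Returns the path taken."""
    cur, path = src, [src]
    for _ in range(max_hops):
        if cur == dst:
            return path
        # Score each neighbor w by the best distance reachable via w's
        # neighbors (including w itself, so w == dst scores 0).
        nxt = min(adj[cur],
                  key=lambda w: min(ring_dist(x, dst, n)
                                    for x in set(adj[w]) | {w}))
        path.append(nxt)
        cur = nxt
    return path  # possibly incomplete if max_hops was exhausted
```

On a plain ring the lookahead cannot beat greedy; the benefit NoN exploits appears once nodes also hold random long-range links, since a neighbor's long link may land far closer to the target than any of one's own.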
Small-World File-Sharing Communities, 2003
Abstract

Cited by 90 (10 self)
Web caches, content distribution networks, peer-to-peer file-sharing networks, distributed file systems, and data grids all have in common that they involve a community of users who generate requests for shared data. In each case, overall system performance can be improved significantly if we can first identify and then exploit interesting structure within a community's access patterns. To this end, we propose a novel perspective on file sharing based on the study of the relationships that form among users based on the files in which they are interested. We propose a new structure that captures common user interests in data, the data-sharing graph, and justify its utility with studies on three data-distribution systems: a high-energy physics collaboration, the Web, and the Kazaa peer-to-peer network. We find small-world patterns in the data-sharing graphs of all three communities. We analyze these graphs and propose some probable causes for these emergent small-world patterns. The significance of small-world patterns is twofold: they provide rigorous support for intuition and, perhaps more importantly, they suggest ways to design mechanisms that exploit these naturally emerging patterns.
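The construction behind the data-sharing graph can be sketched from a request log. This is a minimal sketch under our own assumptions: the log format, threshold parameter, and function name are illustrative, and the paper additionally restricts shared interests to a time window.

```python
# Sketch: build a data-sharing graph from (user, file) request pairs.
# Two users are linked when they requested at least `threshold` files
# in common. Log format and names are illustrative assumptions.
from itertools import combinations

def data_sharing_graph(requests, threshold=1):
    # Collect each user's set of requested files.
    interests = {}
    for user, item in requests:
        interests.setdefault(user, set()).add(item)
    # Link every pair of users with enough overlapping interests.
    edges = set()
    for u, v in combinations(sorted(interests), 2):
        if len(interests[u] & interests[v]) >= threshold:
            edges.add((u, v))
    return edges
```

Small-world analysis (clustering, path lengths) would then be run on the resulting edge set rather than on the raw request log.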
Anonymizing Social Networks
In VLDB, 2008
Abstract

Cited by 71 (3 self)
Advances in technology have made it possible to collect data about individuals and the connections between them, such as email correspondence and friendships. Agencies and researchers who have collected such social network data often have a compelling interest in allowing others to analyze the data. However, in many cases the data describes relationships that are private (e.g., email correspondence) and sharing the data in full can result in unacceptable disclosures. In this paper, we present a framework for assessing the privacy risk of sharing anonymized network data. This includes a model of adversary knowledge, for which we consider several variants and make connections to known graph-theoretical results. On several real-world social networks, we show that simple anonymization techniques are inadequate, resulting in substantial breaches of privacy for even modestly informed adversaries. We propose a novel anonymization technique based on perturbing the network and demonstrate empirically that it leads to substantial reduction of the privacy threat. We also analyze the effect that anonymizing the network has on the utility of the data for social network analysis.
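The kind of attack that defeats "simple anonymization" can be sketched in a few lines: replacing names with random ids preserves the graph structure, so an adversary who knows one fact about a target, e.g. its degree, can often re-identify it. The function name and toy data below are illustrative, not from the paper.

```python
# Sketch of a degree-based re-identification attack on a naively anonymized
# network: node labels are random ids, but degrees are unchanged, so a node
# with a unique degree is immediately pinpointed. Data is made up.

def reidentify_by_degree(anon_adj, known_degree):
    """Return the anonymized ids whose degree matches the adversary's
    background knowledge; a singleton result is a full re-identification."""
    return [v for v, nbrs in anon_adj.items() if len(nbrs) == known_degree]
```

The paper's structural perturbation counters exactly this: by adding and deleting edges, it blurs degrees and neighborhoods so that such candidate sets stay large.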
The darknet and the future of content distribution
In Proceedings of the 2002 ACM Workshop on Digital Rights Management, 2002
Parallel algorithms for evaluating centrality indices in real-world networks
In Proceedings of the International Conference on Parallel Processing (ICPP), 2006
Abstract

Cited by 51 (11 self)
This paper discusses fast parallel algorithms for evaluating several centrality indices frequently used in complex network analysis. These algorithms have been optimized to exploit properties typically observed in real-world large-scale networks, such as low average distance, high local density, and heavy-tailed power-law degree distributions. We test our implementations on real datasets such as the web graph, protein-interaction networks, movie-actor and citation networks, and report impressive parallel performance for evaluation of the computationally intensive centrality metrics (betweenness and closeness centrality) on high-end shared-memory symmetric multiprocessor and multithreaded architectures. To our knowledge, these are the first parallel implementations of these widely used social network analysis metrics. We demonstrate that it is possible to rigorously analyze networks three orders of magnitude larger than instances that can be handled by existing social network analysis (SNA) software packages. For instance, we compute the exact betweenness centrality value for each vertex in a large US patent citation network (3 million patents, 16 million citations) in 42 minutes on 16 processors, utilizing 20 GB of RAM on an IBM p5 570. Current SNA packages, on the other hand, cannot handle graphs with more than a hundred thousand edges.
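The sequential core that such parallel implementations typically distribute over source vertices is Brandes' exact betweenness algorithm: one BFS per source counts shortest paths, then pair dependencies are accumulated in reverse BFS order. The sketch below is the standard unweighted algorithm, not the paper's parallel code; the graph encoding is our own.

```python
# Sketch of Brandes' algorithm for exact betweenness centrality on an
# unweighted graph given as {vertex: [neighbors]}. Each source's BFS is
# independent, which is what makes per-source parallelization natural.
from collections import deque

def betweenness(adj):
    bc = {v: 0.0 for v in adj}
    for s in adj:
        # Forward phase: BFS from s, counting shortest paths (sigma)
        # and recording shortest-path predecessors.
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        dist = {v: -1 for v in adj}; dist[s] = 0
        preds = {v: [] for v in adj}
        order, q = [], deque([s])
        while q:
            v = q.popleft()
            order.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        # Backward phase: accumulate pair dependencies in reverse BFS order.
        delta = {v: 0.0 for v in adj}
        for w in reversed(order):
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc
```

For an undirected graph each unordered pair is counted from both endpoints, so scores are often halved afterwards; either convention only rescales the ranking.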
Approximating clustering coefficient and transitivity
Journal of Graph Algorithms and Applications, 2005
Abstract

Cited by 49 (1 self)
Since its introduction in 1998 by Watts and Strogatz, the clustering coefficient has become a frequently used tool for analyzing graphs. In 2002, the transitivity was proposed by Newman, Watts, and Strogatz as an alternative to the clustering coefficient. As many networks considered in complex systems are huge, the efficient computation of such network parameters is crucial. Several algorithms with polynomial running time can be derived from results known in graph theory. The main contribution of this work is a new fast approximation algorithm for the weighted clustering coefficient, which also yields very efficient approximation algorithms for the clustering coefficient and the transitivity. Specifically, we present an algorithm with running time in O(1) for the clustering coefficient and in O(n) for the transitivity. In an experimental study we demonstrate the performance of the proposed algorithms on real-world data as well as on generated graphs. Moreover, we give a simple graph generator that follows the preferential-attachment rule but also produces graphs with an adjustable clustering coefficient.
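The sampling idea behind such approximation algorithms can be sketched directly: pick random wedges (a vertex together with two of its neighbors) and check how often the two neighbors are adjacent; the cost depends only on the number of samples, not the graph size. This sketch is our own illustration of wedge sampling, with illustrative names, not the paper's exact estimator.

```python
# Sketch of wedge sampling for approximating the (average) clustering
# coefficient: sample a vertex of degree >= 2, then two of its neighbors,
# and count how often the wedge is closed by an edge. Names are illustrative.
import random

def approx_clustering(adj, samples=1000, seed=0):
    """adj: {vertex: [neighbors]}. Returns the fraction of sampled
    wedges that are closed, an estimate of the clustering coefficient."""
    rng = random.Random(seed)
    nodes = [v for v in adj if len(adj[v]) >= 2]  # vertices that form wedges
    closed = 0
    for _ in range(samples):
        v = rng.choice(nodes)
        a, b = rng.sample(adj[v], 2)
        if b in adj[a]:  # is the wedge a-v-b closed by edge a-b?
            closed += 1
    return closed / samples
```

Standard concentration bounds then give an additive error guarantee from the sample count alone, which is where the graph-size-independent running time comes from.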
Bipartite Graphs as Models of Complex Networks
Aspects of Networking, 2004
Abstract

Cited by 49 (6 self)
It appeared recently that the classical random graph model used to represent real-world complex networks does not capture their main properties. Since then, various attempts have been made to provide accurate models. We study here the first model that meets the following challenges: it produces graphs which have the three main desired properties (clustering, degree distribution, average distance), it is based on some real-world observations, and it is sufficiently simple to make it possible to prove its main properties. This model consists of sampling a random bipartite graph with prescribed degree distribution. Indeed, we show that any complex network can be viewed as a bipartite graph with some specific characteristics, and that its main properties can be viewed as consequences of this underlying structure. We also propose a growing model based on this observation.
Bipartite structure of all complex networks
 Information Processing Letters
Abstract

Cited by 48 (11 self)
The analysis and modelling of various complex networks has received much attention in the last few years. Some such networks display a natural bipartite structure: two kinds of nodes coexist with links only between nodes of different kinds. This bipartite structure has not been deeply studied until now, mainly because it appeared to be specific to only a few complex networks. However, we show here that all complex networks can be viewed as bipartite structures sharing some important statistics, like degree distributions. The basic properties of complex networks can be viewed as consequences of this underlying bipartite structure. This leads us to propose the first simple and intuitive model for complex networks which captures the main properties met in practice.
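The mechanism linking the bipartite view to the observed one-mode network is projection: bottom nodes become connected whenever they share a top node. A minimal sketch, with an illustrative encoding of our own:

```python
# Sketch of the one-mode projection of a bipartite graph: each top node
# links all pairs of its bottom nodes. The dict encoding is illustrative.
from itertools import combinations

def project(bipartite):
    """bipartite: {top_node: set of bottom nodes}. Returns the edge set
    of the projection onto the bottom nodes."""
    edges = set()
    for bottoms in bipartite.values():
        for u, v in combinations(sorted(bottoms), 2):
            edges.add((u, v))
    return edges
```

Note that every top node induces a clique among its bottom nodes, which is one reason projections of bipartite structures naturally exhibit the high clustering these models are meant to reproduce.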
Relevance of Massively Distributed Explorations of the Internet Topology: Simulation Results, 2005
Abstract

Cited by 40 (13 self)
Internet maps are generally constructed using the traceroute tool from a few sources to many destinations. It appeared recently that this exploration process gives a partial and biased view of the real topology, which leads to the idea of increasing the number of sources to improve the quality of the maps. In this paper, we present a set of experiments we have conducted to evaluate the relevance of this approach. It appears that the statistical properties of the underlying network have a strong influence on the quality of the obtained maps, which can be improved using massively distributed explorations. Conversely, we show that the exploration process itself induces some properties on the maps. We validate our analysis using real-world data and experiments, and we discuss its implications.
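The partial-view effect can be simulated in miniature by modeling each source's traceroute view as a shortest-path (BFS) tree: a single tree sees only a fraction of the true edges, and adding sources recovers more. This is a toy sketch under our own assumptions (BFS trees as an idealized traceroute, illustrative names), not the paper's simulation setup.

```python
# Sketch: traceroute-like exploration as BFS trees. Each source observes
# one shortest-path tree; the map is the union of the trees, so the
# observed fraction of true edges grows with the number of sources.
from collections import deque

def bfs_tree_edges(adj, src):
    """Edges of one BFS tree rooted at src (an idealized traceroute view)."""
    seen, edges, q = {src}, set(), deque([src])
    while q:
        v = q.popleft()
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                edges.add(frozenset((v, w)))
                q.append(w)
    return edges

def observed_fraction(adj, sources):
    """Fraction of the true edge set covered by the union of the trees."""
    true_edges = {frozenset((v, w)) for v in adj for w in adj[v]}
    seen = set().union(*(bfs_tree_edges(adj, s) for s in sources))
    return len(seen) / len(true_edges)
```

On dense or highly clustered graphs a single tree misses most edges, which is the bias the paper quantifies; the marginal gain of each extra source then depends on the statistical properties of the underlying network.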