Results 1 - 10 of 216
Epidemic Thresholds in Real Networks
Cited by 101 (10 self)
How will a virus propagate in a real network? How long does it take to disinfect a network given particular values of infection rate and virus death rate? What is the single best node to immunize? Answering these questions is essential for devising network-wide strategies to counter viruses. In addition, viral propagation is very similar in principle to the spread of rumors, information, and “fads,” implying that the solutions for viral propagation would also offer insights into these other problem settings. We answer these questions by developing a nonlinear dynamical system (NLDS) that accurately models viral propagation in any arbitrary network, including real and synthesized network graphs. We propose a general epidemic threshold condition for the NLDS system: we prove that the epidemic threshold for a network is exactly the inverse of the largest eigenvalue of its adjacency matrix. Finally, we show that below the epidemic threshold, infections die out at an exponential rate. Our epidemic threshold model subsumes many known thresholds for special-case graphs (e.g., Erdős–Rényi, BA power-law, homogeneous). We demonstrate the predictive power of our model with extensive experiments on real and synthesized graphs, and show that our threshold condition holds for arbitrary graphs. Finally, we show how to utilize our threshold condition for practical uses: It can dictate which nodes to immunize; it can assess the effects of a throttling …
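The abstract's central result (epidemic threshold = 1/λ_max of the adjacency matrix) is easy to check numerically. A minimal sketch, assuming a small example graph and a plain power-iteration routine, neither of which comes from the paper:

```python
# Sketch: the abstract states that an infection dies out when the effective
# infection rate is below 1 / lambda_max(A), the inverse of the largest
# eigenvalue of the adjacency matrix A. The graph and the power-iteration
# routine below are illustrative, not taken from the paper.

def largest_eigenvalue(adj, iters=200):
    """Estimate the largest eigenvalue of a nonnegative symmetric
    adjacency matrix by power iteration (infinity-norm normalization)."""
    n = len(adj)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(adj[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)
        v = [x / lam for x in w]
    return lam

# The "diamond" graph: K4 with one edge removed (nodes 2 and 3 not adjacent).
# Its largest eigenvalue is (1 + sqrt(17)) / 2.
diamond = [
    [0, 1, 1, 1],
    [1, 0, 1, 1],
    [1, 1, 0, 0],
    [1, 1, 0, 0],
]
lam = largest_eigenvalue(diamond)
threshold = 1.0 / lam   # the epidemic threshold condition for this graph
print(round(lam, 4), round(threshold, 4))
```

Per the abstract, a virus with infection rate β and death rate δ dies out on this graph whenever β/δ falls below `threshold`.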
On the Approximability of Influence in Social Networks
2008
Cited by 82 (1 self)
In this paper, we study the spread of influence through a social network, in a model initially studied by Kempe, Kleinberg and Tardos [14, 15]: We are given a graph modeling a social network, where each node v has a (fixed) threshold t_v, such that the node will adopt a new product if t_v of its neighbors adopt it. Our goal is to find a small set S of nodes such that targeting the product to S would lead to adoption of the product by a large number of nodes in the graph. We show strong inapproximability results for several variants of this problem. Our main result says that the problem of minimizing the size of S, while ensuring that targeting S would influence the whole network into adopting the product, is hard to approximate within a polylogarithmic factor. This implies similar results if only a fixed fraction of the network is ensured to adopt the product. Further, the hardness of approximation result continues to hold when all nodes have majority thresholds, or have constant degree and threshold two. The latter answers a complexity question proposed in [10, 29]. We also give some positive results for more restricted cases, such as when the underlying graph is a tree.
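The fixed-threshold adoption process described in the abstract can be simulated directly: starting from a seed set, repeatedly activate any node whose number of adopting neighbors reaches its threshold. A minimal sketch, with a made-up graph, thresholds, and seed set not taken from the paper:

```python
# Sketch of the fixed-threshold model from the abstract: node v adopts once
# at least t_v of its neighbors have adopted. The graph, thresholds, and
# seed set are illustrative examples, not from the paper.

def spread(neighbors, thresholds, seeds):
    """Iterate until no new node adopts; return the final adopter set."""
    adopted = set(seeds)
    changed = True
    while changed:
        changed = False
        for v in neighbors:
            if v not in adopted:
                if sum(1 for u in neighbors[v] if u in adopted) >= thresholds[v]:
                    adopted.add(v)
                    changed = True
    return adopted

# A small example: path 0-1-2-3 plus the extra edge 1-3.
neighbors = {0: [1], 1: [0, 2, 3], 2: [1, 3], 3: [1, 2]}
thresholds = {0: 1, 1: 1, 2: 2, 3: 1}
print(sorted(spread(neighbors, thresholds, {0})))  # → [0, 1, 2, 3]
```

The hard problem the paper studies is the inverse one: finding the smallest seed set S for which this process reaches the whole network (or a fixed fraction of it).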
Optimal and scalable distribution of content updates over a mobile social network
In Proc. IEEE INFOCOM, 2009
Number: CR-PRL-2008-08-0001
Computing Separable Functions via Gossip
2006
Cited by 75 (6 self)
Motivated by applications to sensor, peer-to-peer, and ad hoc networks, we study the problem of computing functions of values at the nodes in a network in a totally distributed manner. In particular, we consider separable functions, which can be written as linear combinations of functions of individual variables. Known iterative algorithms for averaging can be used to compute the normalized values of such functions, but these algorithms do not extend in general to the computation of the actual values of separable functions. The main contribution of this paper is the design of a distributed randomized algorithm for computing separable functions based on properties of exponential random variables. We bound the running time of our algorithm in terms of the running time of an information spreading algorithm used as a subroutine by the algorithm. Since we are interested in totally distributed algorithms, we consider a randomized gossip mechanism for information spreading as the subroutine. Combining these algorithms yields a complete and simple distributed algorithm for computing separable functions. The second contribution of this paper is an analysis of the information spreading time of the gossip algorithm. This analysis yields an upper bound on the information spreading time, and therefore a corresponding upper bound on the running time of the algorithm for computing separable functions, in terms of the conductance of an appropriate stochastic matrix. These bounds imply that, for a class of graphs with small spectral gap (such as grid graphs), the time used by our algorithm to compute averages is of a smaller order than the time required for the computation of averages by a known iterative gossip scheme [5].
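The "properties of exponential random variables" the abstract relies on can be illustrated in a few lines: if each node i draws an exponential sample with rate w_i, the network-wide minimum is exponential with rate Σ w_i, so repeated minima (which a network can compute by information spreading) let every node estimate the sum. A centralized sketch of that estimator, not the paper's distributed algorithm:

```python
import random

# Sketch of the key probabilistic idea: min_i Exp(rate w_i) ~ Exp(rate S)
# with S = sum_i w_i, so r independent network-wide minima X_1..X_r give
# the unbiased estimate S_hat = (r - 1) / (X_1 + ... + X_r).
# The minima are computed centrally here; the paper computes them via gossip.

def estimate_sum(values, rounds, rng):
    """Estimate sum(values) from `rounds` independent minima of
    per-node exponential samples (rate = node value)."""
    minima = [min(rng.expovariate(w) for w in values) for _ in range(rounds)]
    return (rounds - 1) / sum(minima)

rng = random.Random(0)
est = estimate_sum([1.0, 2.0, 3.0], rounds=20000, rng=rng)
print(round(est, 2))  # close to the true sum, 6.0
```

The relative error shrinks like 1/sqrt(rounds), which is why the paper's running time is driven by how fast the gossip subroutine can disseminate each minimum.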
A systematic framework for unearthing the missing links: measurements and impact
In Proc. NSDI, 2007
Cited by 57 (5 self)
The lack of an accurate representation of the Internet topology at the Autonomous System (AS) level is a limiting factor in the design, simulation, and modeling efforts in inter-domain routing protocols. In this paper, we design and implement a framework for identifying AS links that are missing from the commonly-used Internet topology snapshots. We apply our framework and show that the new links that we find change the current Internet topology model in a non-trivial way. First, our framework provides a large-scale comprehensive synthesis of the available sources of information. We cross-validate and compare BGP routing tables, Internet Routing Registries, and traceroute data, while we extract significant new information from the less-studied Internet Exchange Points (IXPs). We identify 40% more edges and approximately 300% more peer-to-peer edges compared to commonly used data sets. Second, we identify properties of the new edges and quantify their effects on important topological properties. Given the new peer-to-peer edges, we find that for some ASes more than 50% of their paths stop going through their ISP providers assuming policy-aware routing. A surprising observation is that the degree of a node may be a poor indicator of which ASes it will peer with: the two degrees differ by a factor of four or more in 50% of the peer-to-peer links. Finally, we attempt to estimate the number of edges we may still be missing.
Fast distributed algorithms for computing separable functions
IEEE Trans. Inform. Theory
Cited by 57 (5 self)
The problem of computing functions of values at the nodes in a network in a fully distributed manner, where nodes do not have unique identities and make decisions based only on local information, has applications in sensor, peer-to-peer, and ad hoc networks. The task of computing separable functions, which can be written as linear combinations of functions of individual variables, is studied in this context. Known iterative algorithms for averaging can be used to compute the normalized values of such functions, but these algorithms do not extend in general to the computation of the actual values of separable functions. The main contribution of this paper is the design of a distributed randomized algorithm for computing separable functions. The running time of the algorithm is shown to depend on the running time of a minimum computation algorithm used as a subroutine. Using a randomized gossip mechanism for minimum computation as the subroutine yields a complete fully distributed algorithm for computing separable functions. For a class of graphs with small spectral gap, such as grid graphs, the time used by the algorithm to compute averages is of a smaller order than the time required by a known iterative averaging scheme. Index Terms—Data aggregation, distributed algorithms, gossip algorithms, randomized algorithms.
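The "known iterative averaging scheme" this abstract compares against works by repeatedly averaging across random edges. A minimal sketch of one standard pairwise version (not this paper's algorithm; the graph and step count are illustrative):

```python
import random

# Sketch of pairwise gossip averaging: at each step the two endpoints of a
# randomly chosen edge replace their values with their average. The sum of
# all values is preserved, so every value converges to the global average.

def gossip_average(values, edges, steps, rng):
    x = list(values)
    for _ in range(steps):
        i, j = rng.choice(edges)
        x[i] = x[j] = (x[i] + x[j]) / 2.0
    return x

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]  # a 4-node ring
vals = gossip_average([4.0, 0.0, 8.0, 0.0], edges, steps=2000, rng=random.Random(1))
print([round(v, 3) for v in vals])  # each entry near the average, 3.0
```

On graphs with small spectral gap (grids, rings), such averaging mixes slowly, which is exactly the regime where the paper's minimum-based approach wins.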
Protecting against network infections: A game theoretic perspective
In INFOCOM 2009, IEEE, 2009
Cited by 44 (2 self)
Security breaches and attacks are critical problems in today’s networking. A key point is that the security of each host depends not only on the protection strategies it chooses to adopt but also on those chosen by other hosts in the network. The spread of Internet worms and viruses is only one example. This class of problems has two aspects. First, it deals with epidemic processes, and as such calls for the employment of epidemic theory. Second, the distributed and autonomous nature of decision-making in major classes of networks (e.g., P2P, ad-hoc, and most notably the Internet) calls for the employment of game-theoretic approaches. Accordingly, we propose a unified framework that combines the N-intertwined SIS epidemic model with a noncooperative game model. We determine the existence of a Nash equilibrium of the respective game and characterize its properties. We show that its quality, in terms of overall network security, largely depends on the underlying topology. We then provide a bound on the level of system inefficiency due to the noncooperative behavior, namely, the “price of anarchy” of the game. We observe that the price of anarchy may be prohibitively high, hence we propose a scheme for steering users towards socially efficient behavior.
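The SIS (susceptible-infected-susceptible) dynamics underlying the game can be illustrated with a simple discrete-time simulation; this is a sketch of generic SIS dynamics, not the paper's N-intertwined mean-field model, and the graph and rates are made up:

```python
import random

# Generic discrete-time SIS sketch: each step, an infected node recovers
# with probability delta, and a susceptible node is infected by each
# infected neighbor independently with probability beta.

def sis_step(neighbors, infected, beta, delta, rng):
    nxt = set()
    for v in neighbors:
        if v in infected:
            if rng.random() > delta:  # fails to recover this step
                nxt.add(v)
        else:
            for u in neighbors[v]:
                if u in infected and rng.random() < beta:
                    nxt.add(v)
                    break
    return nxt

neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
infected = {0}
rng = random.Random(42)
for _ in range(50):
    infected = sis_step(neighbors, infected, beta=0.3, delta=0.4, rng=rng)
print(len(infected))  # number of infected nodes after 50 steps
```

In the paper's game, each host effectively chooses its own protection level (shifting its β or δ), and the resulting steady state depends on everyone else's choices, hence the Nash-equilibrium analysis.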
Graph theory and networks in biology
IET Systems Biology, 1:89–119, 2007
Cited by 43 (0 self)
In this paper, we present a survey of the use of graph theoretical techniques in biology. In particular, we discuss recent work on identifying and modelling the structure of bio-molecular networks, as well as the application of centrality measures to interaction networks and research on the hierarchical structure of such networks and network motifs. Work on the link between structural network properties and dynamics is also described, with emphasis on synchronization and disease propagation.
Modeling Cyber-Insurance: Towards A Unifying Framework
2010
Cited by 43 (3 self)
We propose a comprehensive formal framework to classify all market models of cyber-insurance we are aware of. The framework features a common terminology and deals with the specific properties of cyber-risk in a unified way: interdependent security, correlated risk, and information asymmetries. A survey of existing models, tabulated according to our framework, reveals a discrepancy between informal arguments in favor of cyber-insurance as a tool to align incentives for better network security, and analytical results questioning the viability of a market for cyber-insurance. Using our framework, we show which parameters should be considered and endogenized in future models to close this gap.