Results 1 - 10 of 154
The structure and function of complex networks
- SIAM Review, 2003
"... Inspired by empirical studies of networked systems such as the Internet, social networks, and biological networks, researchers have in recent years developed a variety of techniques and models to help us understand or predict the behavior of these systems. Here we review developments in this field, ..."
Cited by 2600 (7 self)
Inspired by empirical studies of networked systems such as the Internet, social networks, and biological networks, researchers have in recent years developed a variety of techniques and models to help us understand or predict the behavior of these systems. Here we review developments in this field, including such concepts as the small-world effect, degree distributions, clustering, network correlations, random graph models, models of network growth and preferential attachment, and dynamical processes taking place on networks.
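The concepts listed here are easy to reproduce numerically; the following minimal sketch (mine, not the review's, assuming the third-party networkx package and arbitrary parameter values) generates a preferential-attachment network and measures its small-world statistics and degree distribution:

```python
import networkx as nx

# Barabasi-Albert growth: each new node attaches to m existing nodes with
# probability proportional to their degree (preferential attachment).
G = nx.barabasi_albert_graph(n=1000, m=3, seed=42)

# A heavy-tailed degree distribution is the hallmark of preferential attachment.
degrees = [d for _, d in G.degree()]
print("mean degree:", sum(degrees) / len(degrees), "max degree:", max(degrees))

# Small-world effect: short average paths despite nontrivial clustering.
print("average clustering:", nx.average_clustering(G))
print("average shortest path:", nx.average_shortest_path_length(G))
```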
Gossip-based aggregation in large dynamic networks
- ACM Trans. Comput. Syst., 2005
"... As computer networks increase in size, become more heterogeneous and span greater geographic distances, applications must be designed to cope with the very large scale, poor reliability, and often, with the extreme dynamism of the underlying network. Aggregation is a key functional building block fo ..."
Cited by 271 (43 self)
As computer networks increase in size, become more heterogeneous, and span greater geographic distances, applications must be designed to cope with the very large scale, the poor reliability, and, often, the extreme dynamism of the underlying network. Aggregation is a key functional building block for such applications: it refers to a set of functions that provide components of a distributed system access to global information including network size, average load, average uptime, location and description of hotspots, and so on. Local access to global information is often very useful, if not indispensable, for building applications that are robust and adaptive. For example, in an industrial control application, some aggregate value reaching a threshold may trigger the execution of certain actions; a distributed storage system will want to know the total available free space; load-balancing protocols may benefit from knowing the target average load so as to minimize the load they transfer. We propose a gossip-based protocol for computing aggregate values over network components in a fully decentralized fashion. The class of aggregate functions we can compute is very broad and includes many useful special cases such as counting, averages, sums, products, and extremal values. The protocol is suitable for extremely large and highly dynamic systems due to its proactive structure—all nodes receive the aggregate value continuously, thus being able to track changes in the aggregate as the system evolves.
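As a rough illustration of the core mechanism, and not the authors' code, the following plain-Python simulation sketches push-pull averaging: in every round each node exchanges its value with a randomly chosen peer and both adopt the mean. Peer selection is idealized here as a global uniform choice, which the real protocol approximates with an underlying peer-sampling layer.

```python
import random

# Minimal push-pull averaging sketch (assumptions mine): every node holds one
# numeric value; a pairwise exchange replaces both values with their mean, so
# the global sum is invariant and all values converge to the global average.
random.seed(1)
values = [random.uniform(0.0, 100.0) for _ in range(1000)]  # one value per node
true_mean = sum(values) / len(values)

for _ in range(30):                              # gossip rounds
    for i in range(len(values)):
        j = random.randrange(len(values))        # idealized peer sampling
        values[i] = values[j] = (values[i] + values[j]) / 2.0

print("true mean:", true_mean, "node 0 estimate:", values[0])
```

Because each exchange preserves the sum of the two values involved, the global average never changes, which is why every local estimate converges to it.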
The Peer Sampling Service: Experimental Evaluation of Unstructured Gossip-Based Implementations
- In Middleware ’04: Proceedings of the 5th ACM/IFIP/USENIX International Conference on Middleware, 2004
"... Abstract. In recent years, the gossip-based communication model in large-scale distributed systems has become a general paradigm with important applications which include information dissemination, aggregation, overlay topology management and synchronization. At the heart of all of these protocols l ..."
Cited by 187 (41 self)
In recent years, the gossip-based communication model in large-scale distributed systems has become a general paradigm with important applications which include information dissemination, aggregation, overlay topology management, and synchronization. At the heart of all of these protocols lies a fundamental distributed abstraction: the peer sampling service. In short, the aim of this service is to provide every node with peers to exchange information with. Analytical studies reveal a high reliability and efficiency of gossip-based protocols, under the (often implicit) assumption that the peers to send gossip messages to are selected uniformly at random from the set of all nodes. In practice—instead of requiring all nodes to know all the peer nodes so that a random sample could be drawn—a scalable and efficient way to implement the peer sampling service is by constructing and maintaining dynamic unstructured overlays through gossiping membership information itself. This paper presents a generic framework to implement reliable and efficient peer sampling services. The framework generalizes existing approaches and makes it easy to introduce new ones. We use this framework to explore and compare several implementations of our abstract scheme. Through extensive experimental analysis, we show that all of them lead to different peer sampling services, none of which is uniformly random. This clearly renders traditional theoretical approaches invalid when the underlying peer sampling service is based on a gossip-based scheme. Our observations also help explain important differences between design choices of peer sampling algorithms, and how these impact the reliability of the corresponding service.
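At its simplest, gossiping membership information means each node maintains a small partial view of peer addresses and periodically swaps entries with a neighbor. The sketch below is a toy rendering under my own simplifying assumptions (fixed-size views, plain random swaps, no age fields or failure handling; all parameter names are hypothetical), not one of the paper's actual implementations:

```python
import random

random.seed(2)
N, VIEW_SIZE, SWAP = 100, 8, 4  # hypothetical sizes

# Bootstrap: every node starts with a random partial view (never including itself).
views = {n: random.sample([p for p in range(N) if p != n], VIEW_SIZE)
         for n in range(N)}

def gossip_step(a: int) -> None:
    """Node a contacts a peer drawn from its own view; they swap view entries."""
    b = random.choice(views[a])
    sent = random.sample(views[a], SWAP)
    recv = random.sample(views[b], SWAP)
    # Merge, drop self-pointers and duplicates, truncate back to VIEW_SIZE.
    views[a] = list(dict.fromkeys(x for x in recv + views[a] if x != a))[:VIEW_SIZE]
    views[b] = list(dict.fromkeys(x for x in sent + views[b] if x != b))[:VIEW_SIZE]

for _ in range(50):
    for node in range(N):
        gossip_step(node)

# A peer-sampling request at node 0 is answered from the local view.
print("sample for node 0:", random.choice(views[0]))
```

A node answers a sampling request with a random element of its current view; the paper's finding is that the streams such schemes produce are not globally uniform, whatever the design details.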
Gossip-based Peer Sampling
2007
"... Gossip-based communication protocols are appealing in large-scale distributed applications such as information dissemination, aggregation, and overlay topology management. This paper factors out a fundamental mechanism at the heart of all these protocols: the peer-sampling service. In short, this se ..."
Cited by 161 (43 self)
Gossip-based communication protocols are appealing in large-scale distributed applications such as information dissemination, aggregation, and overlay topology management. This paper factors out a fundamental mechanism at the heart of all these protocols: the peer-sampling service. In short, this service provides every node with peers to gossip with. We promote this service to the level of a first-class abstraction of a large-scale distributed system, similar to a name service being a first-class abstraction of a local-area system. We present a generic framework to implement a peer-sampling service in a decentralized manner by constructing and maintaining dynamic unstructured overlays through gossiping membership information itself. Our framework generalizes existing approaches and makes it easy to discover new ones. We use this framework to empirically explore and compare several implementations of the peer-sampling service. Through extensive simulation experiments we show that—although all protocols provide a good quality uniform random stream of peers to each node locally—traditional theoretical assumptions about the randomness of the unstructured overlays as a whole do not hold in any of the instances. We also show that different design decisions result in severe differences from the point of view of two crucial aspects: load balancing and fault tolerance. Our simulations are validated by means of a wide-area implementation.
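One concrete way to probe the load-balancing aspect highlighted above is to measure the in-degree spread of the overlay, i.e. how many views each node appears in. The sketch below (my construction; it uses idealized uniform views rather than views produced by an actual gossip protocol) shows the shape of such a measurement:

```python
import random
from collections import Counter

random.seed(3)
N, VIEW_SIZE = 1000, 20  # hypothetical sizes

# Idealized overlay: each node's view is a uniform sample of the other nodes.
views = {n: random.sample([p for p in range(N) if p != n], VIEW_SIZE)
         for n in range(N)}

# In-degree of a node = number of views it appears in; a tight spread means
# gossip load is evenly balanced, a skewed spread means hotspots.
indegree = Counter(peer for view in views.values() for peer in view)
counts = [indegree[n] for n in range(N)]
print("min/mean/max in-degree:", min(counts), sum(counts) / N, max(counts))
```

Real peer-sampling protocols deviate from this uniform baseline; comparing their in-degree spread against it is one way the design differences mentioned above become visible.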
Classification of Random Boolean Networks
2002
"... We provide the first classification of different types of RandomBoolean Networks (RBNs). We study the differences of RBNs depending on the degree of synchronicity and determinism of their updating scheme. For doing so, we first define three new types of RBNs. We note some similarities and difference ..."
Cited by 70 (14 self)
We provide the first classification of different types of Random Boolean Networks (RBNs). We study how RBNs differ depending on the degree of synchronicity and determinism of their updating scheme. To do so, we first define three new types of RBNs. We note similarities and differences between the different types of RBNs with the aid of a public software laboratory we developed. In particular, we find that point attractors are independent of the updating scheme, and that RBNs differ more according to their determinism or non-determinism than according to their synchronicity or asynchronicity. We also show a way of mapping non-synchronous deterministic RBNs onto synchronous RBNs. Our results are important for justifying the use of specific types of RBNs for modelling natural phenomena.
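To make the objects concrete: a classical RBN has N nodes, each reading K randomly chosen inputs through a randomly generated Boolean function. The sketch below (my own construction, not the authors' software laboratory) builds such a network, updates it synchronously, and enumerates its point attractors, the fixed points that the abstract observes are independent of the updating scheme:

```python
import random
from itertools import product

random.seed(4)
N, K = 8, 2  # hypothetical sizes: 8 nodes, 2 inputs each

inputs = [random.sample(range(N), K) for _ in range(N)]            # wiring
tables = [{bits: random.randint(0, 1) for bits in product((0, 1), repeat=K)}
          for _ in range(N)]                                       # Boolean functions

def step(state: tuple) -> tuple:
    """Synchronous update: every node reads its inputs from the old state."""
    return tuple(tables[n][tuple(state[i] for i in inputs[n])] for n in range(N))

# A point attractor is a state that maps to itself; since no node changes,
# it is fixed under any updating scheme, synchronous or not.
fixed_points = [s for s in product((0, 1), repeat=N) if step(s) == s]
print("point attractors:", fixed_points)
```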
The re-emergence of “emergence”: A venerable concept in search of a theory
- Complexity, 2002
"... Despite its current popularity, “emergence” is a concept with a venerable history and an elusive, ambiguous standing in contemporary evolutionary theory. This paper briefly recounts the history of the term and details some of its current usages. Not only are there radically varying interpretations a ..."
Cited by 43 (0 self)
Despite its current popularity, “emergence” is a concept with a venerable history and an elusive, ambiguous standing in contemporary evolutionary theory. This paper briefly recounts the history of the term and details some of its current usages. Not only are there radically varying interpretations about what emergence means, but “reductionist” and “holistic” theorists have very different views about the issue of causation. However, these two seemingly polar positions are not irreconcilable. Reductionism, or detailed analysis of the parts and their interactions, is essential for answering the “how” question in evolution: how does a complex living system work? But holism is equally necessary for answering the “why” question: why did a particular arrangement of parts evolve? In order to answer the “why” question, a broader, multi-leveled paradigm is required. The reductionist approach to explaining emergent complexity has entailed a search for underlying “laws of emergence.” Another alternative is the “Synergism Hypothesis,” which focuses on the “economics”: the functional effects produced by emergent wholes and their selective consequences. This theory, in a nutshell, proposes that the synergistic (co-operative) effects produced by various combinations of parts have played a major causal role in the evolution of biological complexity. It will also be argued that emergent phenomena represent, in effect, a subset of a much larger universe of combined effects in the natural world; there are many different kinds of synergy, but not all synergies represent emergent phenomena.
Robust aggregation protocols for large-scale overlay networks
- In Proceedings of the 2004 International Conference on Dependable Systems and Networks (DSN’04), 2004
"... Aggregation refers to a set of functions that provide global information about a distributed system. These functions operate on numeric values distributed over the system and can be used to count network size, determine extremal values and compute averages, products or sums. Aggregation allows impor ..."
Cited by 40 (12 self)
Aggregation refers to a set of functions that provide global information about a distributed system. These functions operate on numeric values distributed over the system and can be used to count network size, determine extremal values, and compute averages, products, or sums. Aggregation allows important basic functionality to be achieved in fully distributed and peer-to-peer networks. For example, in a monitoring application, some aggregate reaching a specific value may trigger the execution of certain operations; distributed storage systems may need to know the total free space available; load-balancing protocols may benefit from knowing the target average load so as to minimize the transferred load. Building on the simple but efficient idea of anti-entropy aggregation (a scheme based on the anti-entropy epidemic communication model), in this paper we introduce practically applicable, robust, and adaptive protocols for proactive aggregation, including the calculation of average, product, and extremal values. We show how the averaging protocol can be applied to compute further aggregates such as the sum, the variance, and the network size. We present theoretical and empirical evidence supporting the robustness of the averaging protocol under different scenarios.
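The network-size case rests on a known initialization trick: if exactly one node starts with the value 1 and all others with 0, the converged average is 1/N, so each node can estimate the size as the reciprocal of its local value. A minimal simulation of this (my sketch, with peer selection idealized as a global uniform choice):

```python
import random

random.seed(5)
N = 500
values = [0.0] * N
values[0] = 1.0  # a single initiator holds 1; the global average is thus 1/N

for _ in range(40):                       # push-pull averaging rounds
    for i in range(N):
        j = random.randrange(N)
        values[i] = values[j] = (values[i] + values[j]) / 2.0

print("estimated size at node 0:", 1.0 / values[0], "actual:", N)
```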
Network Robustness and Graph Topology
2004
"... Two important recent trends in military and civilian communications have been the increasing tendency to base operations around an internal network, and the increasing threats to communications infrastructure. This combination of factors makes it important to study the robustness of network topologi ..."
Cited by 38 (5 self)
Two important recent trends in military and civilian communications have been the increasing tendency to base operations around an internal network, and the increasing threats to communications infrastructure. This combination of factors makes it important to study the robustness of network topologies. We use graph-theoretic concepts of connectivity to do this, and argue that node connectivity is the most useful such measure. We examine the relationship between node connectivity and network symmetry, and describe two conditions which robust networks should satisfy. To assist with the process of designing robust networks, we have developed a powerful network design and analysis tool called CAVALIER, which we briefly describe.
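Node connectivity, the measure the paper argues for, is the minimum number of nodes whose removal disconnects the graph; Menger's theorem relates it to counts of internally disjoint paths. A brief sketch of computing it, assuming the third-party networkx package (the paper's CAVALIER tool is not publicly available here):

```python
import networkx as nx

# The Petersen graph: a classic 3-regular graph, used purely as an example.
G = nx.petersen_graph()

# Minimum number of node (resp. edge) removals that disconnect the graph.
print("node connectivity:", nx.node_connectivity(G))  # expected: 3
print("edge connectivity:", nx.edge_connectivity(G))  # expected: 3
```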
Modular Stability Tools for Distributed Computation and Control
- To appear in Int. J. Adaptive Control and Signal Processing, 17(6), 2002
"... Much recent functional modelling of the central nervous system, beyond traditional “neural net” approaches, focuses on its distributed computational architecture. This paper discusses extensions of our recent work aimed at understanding this architecture from an overall nonlinear stability and conve ..."
Cited by 35 (24 self)
Much recent functional modelling of the central nervous system, beyond traditional “neural net” approaches, focuses on its distributed computational architecture. This paper discusses extensions of our recent work aimed at understanding this architecture from an overall nonlinear stability and convergence point of view, and at constructing artificial devices exploiting similar modularity. Applications to synchronisation and to schooling are also described. The development makes extensive use of nonlinear contraction theory.
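For readers new to the tool, the core contraction condition from the general literature (standard notation after Lohmiller and Slotine; not quoted from this paper) states that a system whose generalized Jacobian is uniformly negative definite in some metric has all trajectories converging exponentially to one another:

```latex
% Contraction condition, standard form: the system \dot{x} = f(x,t) is
% contracting if there exist a uniformly positive definite metric M(x,t)
% and a constant \beta > 0 such that
\frac{\partial f}{\partial x}^{\top} M
  + M \frac{\partial f}{\partial x}
  + \dot{M} \preceq -2\beta M ,
% in which case all trajectories converge exponentially at rate \beta.
```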
Reliability Engineering: Old Problems and New Challenges
"... The first recorded usage of the word reliability dates back to the 1800s, albeit referred to a person and not a technical system. Since then, the concept of reliability has become a pervasive attribute worth of both qualitative and quantitative connotations. In particular, the revolutionary social, ..."
Cited by 35 (3 self)
The first recorded usage of the word reliability dates back to the 1800s, albeit in reference to a person rather than a technical system. Since then, the concept of reliability has become a pervasive attribute worthy of both qualitative and quantitative connotations. In particular, the revolutionary social, cultural, and technological changes that have occurred from the 1800s to the 2000s have contributed to the need for a rational framework and a quantitative treatment of the reliability of engineered systems and plants. This has led to the rise of reliability engineering as a scientific discipline. In this paper, some considerations are shared with respect to a number of problems and challenges that researchers and practitioners in reliability engineering face when analyzing today’s complex systems. The focus is on the contribution of reliability to system safety and on its role within system risk analysis.