Wide-Area Traffic: The Failure of Poisson Modeling
- IEEE/ACM TRANSACTIONS ON NETWORKING
, 1995
Abstract - Cited by 1775 (24 self)
Network arrivals are often modeled as Poisson processes for analytic simplicity, even though a number of traffic studies have shown that packet interarrivals are not exponentially distributed. We evaluate 24 wide-area traces, investigating a number of wide-area TCP arrival processes (session and connection arrivals, FTP data connection arrivals within FTP sessions, and TELNET packet arrivals) to determine the error introduced by modeling them using Poisson processes. We find that user-initiated TCP session arrivals, such as remote-login and file-transfer, are well-modeled as Poisson processes with fixed hourly rates, but that other connection arrivals deviate considerably from Poisson; that modeling TELNET packet interarrivals as exponential grievously underestimates the burstiness of TELNET traffic, but using the empirical Tcplib [Danzig et al., 1992] interarrivals preserves burstiness over many time scales; and that FTP data connection arrivals within FTP sessions come bunched into “connection bursts,” the largest of which are so large that they completely dominate FTP data traffic. Finally, we offer some results regarding how our findings relate to the possible self-similarity of wide-area traffic.
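The paper's central contrast, exponential versus heavy-tailed interarrivals, can be illustrated with synthetic data (this is a sketch on made-up samples, not the paper's traces): the coefficient of variation of exponential gaps sits near 1, while heavy-tailed gaps are far burstier than any Poisson model predicts.

```python
import random
import math

def coefficient_of_variation(xs):
    """CV = std / mean; equals 1.0 for an exponential distribution."""
    m = sum(xs) / len(xs)
    var = sum((x - m) ** 2 for x in xs) / len(xs)
    return math.sqrt(var) / m

random.seed(0)
n = 100_000

# Exponential interarrivals (what a Poisson arrival model assumes): CV near 1.
exp_gaps = [random.expovariate(1.0) for _ in range(n)]

# Pareto interarrivals (heavy-tailed, as wide-area packet traffic tends to be):
# sample CV well above 1, i.e. burstier than the Poisson model allows.
pareto_gaps = [random.paretovariate(1.5) for _ in range(n)]

print(coefficient_of_variation(exp_gaps))     # close to 1
print(coefficient_of_variation(pareto_gaps))  # substantially larger
```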
Modeling and simulation of genetic regulatory systems: A literature review
- JOURNAL OF COMPUTATIONAL BIOLOGY
, 2002
Abstract - Cited by 738 (14 self)
In order to understand the functioning of organisms on the molecular level, we need to know which genes are expressed, when and where in the organism, and to which extent. The regulation of gene expression is achieved through genetic regulatory systems structured by networks of interactions between DNA, RNA, proteins, and small molecules. As most genetic regulatory networks of interest involve many components connected through interlocking positive and negative feedback loops, an intuitive understanding of their dynamics is hard to obtain. As a consequence, formal methods and computer tools for the modeling and simulation of genetic regulatory networks will be indispensable. This paper reviews formalisms that have been employed in mathematical biology and bioinformatics to describe genetic regulatory systems, in particular directed graphs, Bayesian networks, Boolean networks and their generalizations, ordinary and partial differential equations, qualitative differential equations, stochastic equations, and rule-based formalisms. In addition, the paper discusses how these formalisms have been used in the simulation of the behavior of actual regulatory systems.
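Among the formalisms the review covers, Boolean networks are the simplest to state concretely. The sketch below simulates a synchronous Boolean network; the three-gene wiring is an invented toy example, not a model of any real regulatory system.

```python
# A minimal synchronous Boolean network: each gene's next state is a Boolean
# function of the current state, and all genes update at once.
def step(state):
    a, b, c = state
    return (
        not c,        # gene A is repressed by C
        a,            # gene B is activated by A
        a and not b,  # gene C needs A present and B absent
    )

def trajectory(state, n):
    """Synchronously update the network n times, recording each state."""
    states = [state]
    for _ in range(n):
        state = step(state)
        states.append(state)
    return states

# Starting from (A=on, B=off, C=off), the dynamics settle into a length-3 cycle,
# the kind of attractor the review's qualitative analyses look for.
for s in trajectory((True, False, False), 8):
    print(tuple(int(x) for x in s))
```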
Probabilistic Boolean networks: a rule-based uncertainty model for gene regulatory networks
, 2002
Abstract - Cited by 391 (59 self)
Motivation: Our goal is to construct a model for genetic regulatory networks such that the model class: (i) incorporates rule-based dependencies between genes; (ii) allows the systematic study of global network dynamics; (iii) is able to cope with uncertainty, both in the data and the model selection; and (iv) permits the quantification of the relative influence and sensitivity of genes in their interactions with other genes.
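The combination of rule-based dependencies (i) and uncertainty (iii) can be sketched directly: give each gene several candidate Boolean predictors with selection probabilities, and sample one per gene at each update. The two-gene wiring and probabilities below are illustrative assumptions only, not taken from the paper.

```python
import random

# Each gene maps to a list of (predictor, probability) pairs.
predictors = {
    0: [(lambda s: s[1],         0.7),   # gene 0 follows gene 1 ...
        (lambda s: not s[1],     0.3)],  # ... or is repressed by it
    1: [(lambda s: s[0] or s[1], 1.0)],  # gene 1 has a single rule
}

def pbn_step(state, rng):
    """One synchronous update; each gene independently samples a predictor."""
    new = []
    for gene, rules in sorted(predictors.items()):
        funcs, probs = zip(*rules)
        f = rng.choices(funcs, weights=probs, k=1)[0]
        new.append(f(state))
    return tuple(new)

rng = random.Random(1)
state = (True, False)
for _ in range(5):
    state = pbn_step(state, rng)
    print(state)
```

Averaging over many such runs gives the network's dynamics as a Markov chain over states, which is what makes the systematic study of global behavior (ii) tractable.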
Reveal, A General Reverse Engineering Algorithm For Inference Of Genetic Network Architectures
, 1998
Abstract - Cited by 344 (5 self)
Given the immanent gene expression mapping covering whole genomes during development, health and disease, we seek computational methods to maximize functional inference from such large data sets. Is it possible, in principle, to completely infer a complex regulatory network architecture from input/output patterns of its variables? We investigated this possibility using binary models of genetic networks. Trajectories, or state transition tables of Boolean nets, resemble time series of gene expression. By systematically analyzing the mutual information between input states and output states, one is able to infer the sets of input elements controlling each element or gene in the network. This process is unequivocal and exact for complete state transition tables. We implemented this REVerse Engineering ALgorithm (REVEAL) in a C program, and found the problem to be tractable within the conditions tested so far. For n=50 (elements) and k=3 (inputs per element), the analysis of incomplete state transition tables (100 state transition pairs out of a possible 10^15) reliably produced the original rule and wiring sets. While this study is limited to synchronous Boolean
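REVEAL's core step can be sketched in a few lines (a toy under assumed simplifications, not the authors' C program): a candidate input set is accepted for a gene exactly when the inputs' joint state uniquely fixes the gene's next value, which is the mutual-information criterion H(output | inputs) = 0 stated over the transition table.

```python
from itertools import combinations, product

def determines(pairs, inputs, gene):
    """True if the input genes' values uniquely fix the gene's next value."""
    seen = {}
    for state, nxt in pairs:
        key = tuple(state[i] for i in inputs)
        if seen.setdefault(key, nxt[gene]) != nxt[gene]:
            return False
    return True

def reveal(pairs, n_genes, k_max=2):
    """For each gene, return the smallest determining input set found."""
    wiring = {}
    for gene in range(n_genes):
        for k in range(1, k_max + 1):
            found = [ins for ins in combinations(range(n_genes), k)
                     if determines(pairs, ins, gene)]
            if found:
                wiring[gene] = found[0]
                break
    return wiring

# A hidden 3-gene toy network generates the full state transition table;
# reveal() recovers its wiring from the input/output pairs alone.
def truth(state):
    a, b, c = state
    return (b, a & c, 1 - a)

pairs = [(s, truth(s)) for s in product([0, 1], repeat=3)]
print(reveal(pairs, 3))   # gene 0 <- {1}, gene 1 <- {0, 2}, gene 2 <- {0}
```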
Genetic Network Inference: From Co-Expression Clustering To Reverse Engineering
, 2000
Abstract - Cited by 336 (0 self)
Motivation: Advances in molecular biological, analytical and computational technologies are enabling us to systematically investigate the complex molecular processes underlying biological systems. In particular, using high-throughput gene expression assays, we are able to measure the output of the gene regulatory network. We aim here to review data-mining and modeling approaches for conceptualizing and unraveling the functional relationships implicit in these datasets. Clustering of co-expression profiles allows us to infer shared regulatory inputs and functional pathways. We discuss various aspects of clustering, ranging from distance measures to clustering algorithms and multiple-cluster memberships. More advanced analysis aims to infer causal connections between genes directly, i.e. who is regulating whom and how. We discuss several approaches to the problem of reverse engineering of genetic networks, from discrete Boolean networks, to continuous linear and non-linear models. We conclude that the combination of predictive modeling with systematic experimental verification will be required to gain a deeper insight into living organisms, therapeutic targeting and bioengineering.
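The review's first step, co-expression clustering, reduces to a distance measure plus a grouping rule. The sketch below uses Pearson correlation and a simple threshold grouping; the expression profiles and the 0.9 cutoff are made-up illustrations, not the review's method or data.

```python
import math

def pearson(x, y):
    """Pearson correlation between two expression profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

profiles = {
    "geneA": [1.0, 2.0, 3.0, 4.0],
    "geneB": [2.1, 3.9, 6.2, 8.0],   # tracks geneA
    "geneC": [4.0, 3.0, 2.0, 1.0],   # anti-correlated with geneA
}

# Greedy single-linkage-style grouping: join a cluster if correlated > 0.9
# with any member, else start a new cluster.
clusters = []
for name in sorted(profiles):
    for cluster in clusters:
        if any(pearson(profiles[name], profiles[m]) > 0.9 for m in cluster):
            cluster.append(name)
            break
    else:
        clusters.append([name])
print(clusters)
```

Genes grouped this way become candidates for shared regulatory inputs; the causal direction, who regulates whom, is what the reverse-engineering methods discussed later must supply.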
Metaheuristics in combinatorial optimization: Overview and conceptual comparison
- ACM COMPUTING SURVEYS
, 2003
Abstract - Cited by 314 (17 self)
The field of metaheuristics for the application to combinatorial optimization problems is a rapidly growing field of research. This is due to the importance of combinatorial optimization problems for the scientific as well as the industrial world. We give a survey of the nowadays most important metaheuristics from a conceptual point of view. We outline the different components and concepts that are used in the different metaheuristics in order to analyze their similarities and differences. Two very important concepts in metaheuristics are intensification and diversification. These are the two forces that largely determine the behaviour of a metaheuristic. They are in some way contrary but also complementary to each other. We introduce a framework that we call the I&D frame in order to put different intensification and diversification components into relation with each other. Outlining the advantages and disadvantages of different metaheuristic approaches, we conclude by pointing out the importance of hybridization of metaheuristics as well as the integration of metaheuristics and other methods for optimization.
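The intensification/diversification tension the survey centers on is easiest to see in simulated annealing, where the temperature parameter interpolates between the two: high temperature accepts worsening moves (diversification), and cooling concentrates the search on improvements (intensification). The objective and schedule below are arbitrary choices for illustration.

```python
import math
import random

def anneal(f, x0, steps=5000, t0=2.0, cooling=0.999, rng=None):
    """Minimize f from x0 with a geometric cooling schedule."""
    rng = rng or random.Random(0)
    x, t = x0, t0
    best = x
    for _ in range(steps):
        cand = x + rng.uniform(-0.5, 0.5)
        delta = f(cand) - f(x)
        # Always accept improvements; accept worsenings with probability
        # exp(-delta / t), which shrinks as t cools.
        if delta < 0 or rng.random() < math.exp(-delta / t):
            x = cand
        if f(x) < f(best):
            best = x
        t *= cooling
    return best

# A multimodal objective with its global minimum near x = 0.
f = lambda x: x * x + 2 * math.sin(5 * x)
best = anneal(f, x0=4.0)
print(best)
```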
Asset pricing under endogenous expectations in an artificial stock market
, 1996
Abstract - Cited by 303 (20 self)
We propose a theory of asset pricing based on heterogeneous agents who continually adapt their expectations to the market that these expectations aggregatively create. And we explore the implications of this theory computationally using our Santa Fe artificial stock market. Asset markets, we argue, have a recursive nature in that agents’ expectations are formed on the basis of their anticipations of other agents’ expectations, which precludes expectations being formed by deductive means. Instead traders continually hypothesize—continually explore—expectational models, buy or sell on the basis of those that perform best, and confirm or discard these according to their performance. Thus individual beliefs or expectations become endogenous to the market, and constantly compete within an ecology of others’ beliefs or expectations. The ecology of beliefs co-evolves over time. Computer experiments with this endogenous-expectations market explain one of the more striking puzzles in finance: that market traders often believe in such concepts as technical trading, “market psychology,” and bandwagon effects, while academic theorists believe in market efficiency and a lack of speculative opportunities. Both views, we show, are correct, but within different regimes. Within a regime where investors explore alternative expectational models at a low rate, the market settles into the rational-
Evolutionary computation: Comments on the history and current state
- IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION
, 1997
Abstract - Cited by 280 (0 self)
Evolutionary computation has started to receive significant attention during the last decade, although the origins can be traced back to the late 1950s. This article surveys the history as well as the current state of this rapidly growing field. We describe the purpose, the general structure, and the working principles of different approaches, including genetic algorithms (GA) [with links to genetic programming (GP) and classifier systems (CS)], evolution strategies (ES), and evolutionary programming (EP) by analysis and comparison of their most important constituents (i.e., representations, variation operators, reproduction, and selection mechanism). Finally, we give a brief overview of the manifold of application domains, although this necessarily must remain incomplete.
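The constituents the survey compares (representation, variation operator, selection mechanism) appear in their smallest form in a (1+1) evolution strategy. This is a generic sketch with arbitrary parameters, not any specific algorithm from the article.

```python
import random

def one_plus_one_es(f, x0, sigma=0.3, steps=2000, rng=None):
    """Minimize f: real-vector representation, Gaussian mutation,
    elitist parent-vs-child selection."""
    rng = rng or random.Random(3)
    parent = x0
    for _ in range(steps):
        child = [x + rng.gauss(0, sigma) for x in parent]  # variation
        if f(child) <= f(parent):                          # selection
            parent = child
    return parent

sphere = lambda xs: sum(x * x for x in xs)   # a standard test objective
best = one_plus_one_es(sphere, [5.0, -3.0, 4.0])
print(sphere(best))
```

Swapping in bit-string representations with crossover gives a GA, and finite-state-machine mutation gives EP; the loop structure is the shared skeleton.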
Fitness Distance Correlation as a Measure of Problem Difficulty for Genetic Algorithms
- Proceedings of the Sixth International Conference on Genetic Algorithms
, 1995
Abstract - Cited by 258 (5 self)
A measure of search difficulty, fitness distance correlation (FDC), is introduced and examined in relation to genetic algorithm (GA) performance. In many cases, this correlation can be used to predict the performance of a GA on problems with known global maxima. It correctly classifies easy deceptive problems as easy and difficult non-deceptive problems as difficult, indicates when Gray coding will prove better than binary coding, and is consistent with the surprises encountered when GAs were used on the Tanese and royal road functions. The FDC measure is a consequence of an investigation into the connection between GAs and heuristic search.
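FDC is just the sample correlation between fitness and distance to the nearest known optimum. The sketch below computes it for OneMax (an assumed illustrative problem, not one from the paper): the optimum is the all-ones string, Hamming distance to it is the string length minus the fitness, so the FDC comes out at exactly -1, the "easy" extreme of the measure.

```python
import math
import random

def fdc(fitnesses, distances):
    """Sample correlation between fitness and distance-to-optimum."""
    n = len(fitnesses)
    mf = sum(fitnesses) / n
    md = sum(distances) / n
    cov = sum((f - mf) * (d - md) for f, d in zip(fitnesses, distances)) / n
    sf = math.sqrt(sum((f - mf) ** 2 for f in fitnesses) / n)
    sd = math.sqrt(sum((d - md) ** 2 for d in distances) / n)
    return cov / (sf * sd)

random.seed(0)
bits = 20
samples = [[random.randint(0, 1) for _ in range(bits)] for _ in range(500)]
fit = [sum(s) for s in samples]     # OneMax fitness: count of ones
dist = [bits - f for f in fit]      # Hamming distance to the all-ones optimum
print(fdc(fit, dist))               # -1.0: fitness rises exactly as distance falls
```

For a maximization problem, values near -1 predict an easy landscape for a GA and values near +1 a deceptive one, which is how the paper classifies its test problems.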