Results 1–10 of 179
Tutorial on agent-based modeling and simulation
Proceedings of the 2005 Winter Simulation Conference, 2005
Cited by 147 (8 self)
Agent-based modelling and simulation (ABMS) is a relatively new approach to modelling systems composed of autonomous, interacting agents. Agent-based modelling is a way to model the dynamics of complex systems and complex adaptive systems. Such systems often self-organize and create emergent order. Agent-based models also include models of behaviour (human or otherwise) and are used to observe the collective effects of agent behaviours and interactions. The development of agent modelling tools, the availability of micro-data, and advances in computation have made possible a growing number of agent-based applications across a variety of domains and disciplines. This article provides a brief introduction to ABMS, illustrates the main concepts and foundations, discusses some recent applications across a variety of disciplines, and identifies methods and toolkits for developing agent models.
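The core idea in this abstract, global order emerging from purely local agent interactions, can be shown with a toy model. The sketch below is not from the article; it is a minimal hypothetical ABM in which agents on a ring repeatedly adopt the majority state of their neighborhood, so isolated agents get absorbed and stable clusters self-organize.

```python
import random

# Minimal hypothetical agent-based model (illustrative, not from the article):
# agents on a ring repeatedly adopt the majority state of their local
# neighborhood; purely local rules produce emergent, self-organized clusters.

def step(states):
    """One synchronous update: each agent takes the majority of itself and
    its two neighbors on the ring."""
    n = len(states)
    return [1 if states[(i - 1) % n] + states[i] + states[(i + 1) % n] >= 2 else 0
            for i in range(n)]

random.seed(0)
states = [random.randint(0, 1) for _ in range(40)]
for _ in range(20):
    states = step(states)
# isolated agents are absorbed into neighboring blocks, leaving stable runs
```

Running the update a few times is enough to see the emergent clustering the abstract alludes to: lone dissenters disappear first, then contiguous blocks persist.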
A comparative survey of automated parameter-search methods for compartmental neural models
J. Computational Neuroscience, 1999
Cited by 35 (1 self)
Abstract. One of the most difficult and time-consuming aspects of building compartmental models of single neurons is assigning values to free parameters to make models match experimental data. Automated parameter-search methods potentially represent a more rapid and less labor-intensive alternative to choosing parameters manually. Here we compare the performance of four different parameter-search methods on several single-neuron models. The methods compared are conjugate-gradient descent, genetic algorithms, simulated annealing, and stochastic search. Each method has been tested on five different neuronal models ranging from simple models with between 3 and 15 parameters to a realistic pyramidal cell model with 23 parameters. The results demonstrate that genetic algorithms and simulated annealing are generally the most effective methods. Simulated annealing was overwhelmingly the most effective method for simple models with small numbers of parameters, but the genetic algorithm method was equally effective for more complex models with larger numbers of parameters. The discussion considers possible explanations for these results and makes several specific recommendations for the use of parameter searches on neuronal models.
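As a hedged illustration of one of the surveyed methods (not the authors' implementation), the sketch below applies simulated annealing to a toy parameter-fitting problem: minimizing squared error between a candidate parameter vector and a target. The cooling schedule, step size, and cost function are all assumptions for the example.

```python
import math
import random

# Illustrative simulated-annealing parameter search (toy problem, not the
# paper's code): minimize squared error between candidate parameters and a
# hypothetical "experimental" target vector.

def anneal(cost, x0, steps=5000, t0=1.0, seed=1):
    rng = random.Random(seed)
    x, c = list(x0), cost(x0)
    best_x, best_c = list(x), c
    for k in range(steps):
        t = t0 * (1 - k / steps) + 1e-9              # linear cooling schedule
        cand = [xi + rng.gauss(0, 0.1) for xi in x]  # perturb all parameters
        cc = cost(cand)
        # always accept downhill moves; accept uphill with Boltzmann probability
        if cc < c or rng.random() < math.exp((c - cc) / t):
            x, c = cand, cc
            if c < best_c:
                best_x, best_c = list(x), c
    return best_x, best_c

target = [0.3, -1.2, 2.0]                  # hypothetical "true" parameters
cost = lambda p: sum((a - b) ** 2 for a, b in zip(p, target))
params, err = anneal(cost, [0.0, 0.0, 0.0])
```

In a real compartmental-model search, `cost` would instead compare simulated and recorded voltage traces, which is far more expensive per evaluation.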
Evolutionary computation in medicine: an overview
Artificial Intelligence in Medicine, 2000
Cited by 30 (3 self)
The term evolutionary computation encompasses a host of methodologies inspired by natural evolution that are used to solve hard problems. This paper provides an overview of evolutionary computation as applied to problems in the medical domains. We begin by outlining the basic workings of six types of evolutionary algorithms: genetic algorithms, genetic programming, evolution strategies, evolutionary programming, classifier systems, and hybrid systems. We then describe how evolutionary algorithms are applied to solve medical problems, including diagnosis, prognosis, imaging, signal processing, planning, and scheduling. Finally, we provide an extensive bibliography, classified both according to the medical task addressed and according to the evolutionary technique used.
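Of the six algorithm families the overview lists, the genetic algorithm is the simplest to sketch. The following minimal example (illustrative only, not from the paper) evolves bit-strings toward the all-ones string ("OneMax") using tournament selection, one-point crossover, and bit-flip mutation, the three operators a basic GA combines.

```python
import random

# Minimal genetic-algorithm sketch (illustrative, not from the paper):
# evolve bit-strings toward the all-ones target using tournament
# selection, one-point crossover, and bit-flip mutation.

def one_max(bits):
    return sum(bits)  # fitness: number of 1s

def evolve(n_bits=20, pop_size=30, generations=60, seed=2):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        for _ in range(pop_size):
            # tournament selection of two parents (tournament size 3)
            p1 = max(rng.sample(pop, 3), key=one_max)
            p2 = max(rng.sample(pop, 3), key=one_max)
            cut = rng.randrange(1, n_bits)          # one-point crossover
            child = p1[:cut] + p2[cut:]
            for i in range(n_bits):                 # bit-flip mutation
                if rng.random() < 1.0 / n_bits:
                    child[i] ^= 1
            nxt.append(child)
        pop = nxt
    return max(pop, key=one_max)

best = evolve()
```

In a medical application the fitness function would instead score, say, a candidate diagnosis rule against labelled patient data; the evolutionary loop itself is unchanged.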
Swarm Intelligence Algorithms for Data Clustering
In Soft Computing for Knowledge Discovery and Data Mining, Part IV
Cited by 26 (1 self)
Clustering aims at representing large datasets by a smaller number of prototypes or clusters. It brings simplicity to modelling data and thus plays a central role in the process of knowledge discovery and data mining. Data mining tasks these days require fast and accurate partitioning of huge datasets, which may come with a variety of attributes or features. This, in turn, imposes severe computational requirements on the relevant clustering techniques. A family of bio-inspired algorithms, well known as Swarm Intelligence (SI), has recently emerged that meets these requirements and has successfully been applied to a number of real-world clustering problems. This chapter explores the role of SI in clustering different kinds of datasets. It finally describes a new SI technique for partitioning any dataset into an optimal number of groups through one run of optimization. Computer simulations undertaken in this research have also been provided to demonstrate the effectiveness of the proposed algorithm.
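A common SI approach to clustering encodes the cluster centers in each swarm member and lets the swarm minimize within-cluster distance. The sketch below is a hedged toy version for 1-D data (it is not the chapter's algorithm, and the swarm parameters are assumptions): each particle holds k candidate centers and a basic particle swarm search refines them.

```python
import random

# Toy swarm-based clustering sketch (not the chapter's algorithm): each
# particle encodes k cluster centers for 1-D data; a basic particle swarm
# minimizes the summed squared distance of points to their nearest center.

def cost(centers, data):
    return sum(min((x - c) ** 2 for c in centers) for x in data)

def pso_cluster(data, k=2, n_particles=15, iters=80, seed=3):
    rng = random.Random(seed)
    lo, hi = min(data), max(data)
    pos = [[rng.uniform(lo, hi) for _ in range(k)] for _ in range(n_particles)]
    vel = [[0.0] * k for _ in range(n_particles)]
    pbest = [list(p) for p in pos]
    gbest = min(pbest, key=lambda c: cost(c, data))
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(k):
                r1, r2 = rng.random(), rng.random()
                # inertia + cognitive + social terms (assumed coefficients)
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.4 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.4 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if cost(pos[i], data) < cost(pbest[i], data):
                pbest[i] = list(pos[i])
                if cost(pbest[i], data) < cost(gbest, data):
                    gbest = list(pbest[i])
    return sorted(gbest)

data = [0.1, 0.2, 0.15, 5.0, 5.2, 4.9]   # two obvious 1-D clusters
centers = pso_cluster(data)
```

Note that k is fixed here; the chapter's stated contribution is finding the number of groups automatically, which this sketch does not attempt.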
Single-unit activity in cortical area MST associated with disparity-vergence eye movements: evidence for population coding
J. Neurophysiol, 2001
Cited by 24 (1 self)
Single-unit activity in cortical area MST associated with disparity-vergence eye movements: evidence for population coding. J Neurophysiol 85: 2245–2266, 2001. Single-unit discharges were recorded in the medial superior temporal area (MST) of five behaving monkeys. Brief (230-ms) horizontal disparity steps were applied to large correlated or anticorrelated random-dot patterns (in which the dots had the same or opposite contrast, respectively, at the two eyes), eliciting vergence eye movements at short latencies [65.8 ± 4.5 (SD) ms]. Disparity tuning curves, describing the dependence of the initial vergence responses (measured over the period 50–110 ms after the step) on the magnitude of the steps, resembled the derivative of a Gaussian, the curves obtained with correlated and anticorrelated patterns having opposite sign. Cells with disparity-related activity were isolated using correlated stimuli, and disparity tuning curves describing the dependence of these initial neuronal responses (measured over
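The "derivative of a Gaussian" shape mentioned for the tuning curves can be written down explicitly. The sketch below is an assumed functional form for illustration only, not the authors' fitted model: r(d) = -A * d * exp(-d^2 / (2 * s^2)), which is zero at zero disparity and peaks at d = -s, with the anticorrelated curve as its sign-reversed mirror.

```python
import math

# Assumed derivative-of-Gaussian tuning curve (illustration, not the
# authors' fit): r(d) = -A * d * exp(-d**2 / (2 * s**2)). It crosses zero
# at d = 0 and has extrema at d = -s and d = +s.

def dog(d, amp=1.0, sigma=1.0):
    return -amp * d * math.exp(-d * d / (2 * sigma * sigma))

def dog_anti(d, amp=1.0, sigma=1.0):
    # anticorrelated patterns yield a tuning curve of opposite sign
    return -dog(d, amp, sigma)
```

With sigma = 1, the correlated curve is positive for crossed (negative) disparities and negative for uncrossed ones, matching the qualitative shape the abstract describes.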
Efficient and accurate construction of genetic linkage maps: Supplementary Text S1
Cited by 22 (5 self)
Theorem 1. Let l_i and l_j be two markers that belong to two different LGs, and let d_{i,j} be the Hamming distance between A[i,] and A[j,]. Then E(d_{i,j}) = n/2 and P(d_{i,j} < δ) ≤ e^(−2(n/2−δ)²/n), where δ < n/2.
Proof. Let c_k ∈ N and let X^k_{i,j} be a random indicator variable which is equal to 1 if c_k is a recombinant with respect to l_i and l_j, and to 0 otherwise. Clearly E(X^k_{i,j}) = 1/2, and d_{i,j} = Σ_k X^k_{i,j}. The family of random variables {X^k_{i,j} : 1 ≤ k ≤ n} is i.i.d. By linearity of expectation, E(d_{i,j}) = n/2. The bound P(d_{i,j} < δ) ≤ e^(−2(n/2−δ)²/n) derives directly from Hoeffding's inequality [1]. In the rest of this Section, let us assume that all the markers in M belong to the same linkage
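The theorem's two claims are easy to check empirically: for unlinked markers each individual is a recombinant independently with probability 1/2, so d is Binomial(n, 1/2) with mean n/2, and the tail below δ is bounded by Hoeffding's exponential. This simulation sketch (illustrative, with assumed n and δ) verifies both.

```python
import math
import random

# Empirical check of Theorem 1 (illustrative): for two unlinked markers,
# d ~ Binomial(n, 1/2), so E(d) = n/2, and Hoeffding gives
# P(d < delta) <= exp(-2 * (n/2 - delta)**2 / n).

rng = random.Random(4)
n, trials, delta = 100, 20000, 35
below, total = 0, 0
for _ in range(trials):
    d = sum(rng.randint(0, 1) for _ in range(n))  # simulated Hamming distance
    total += d
    if d < delta:
        below += 1

mean_d = total / trials                            # should be close to n/2 = 50
bound = math.exp(-2 * (n / 2 - delta) ** 2 / n)    # Hoeffding upper bound
```

With n = 100 and δ = 35 the bound is about exp(-4.5) ≈ 0.011, while the empirical tail frequency is roughly an order of magnitude smaller, as a one-sided Hoeffding bound typically is.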
Particle Swarm Optimization and Differential Evolution Algorithms: Technical Analysis, Applications and Hybridization Perspectives
Cited by 21 (2 self)
Summary. Since the beginning of the nineteenth century, a significant evolution in optimization theory has been noticed. Classical linear programming and traditional nonlinear optimization techniques such as Lagrange's multiplier, Bellman's principle and Pontryagin's principle were prevalent until this century. Unfortunately, these derivative-based optimization techniques can no longer be used to determine the optima on rough nonlinear surfaces. One solution to this problem has already been put forward by the evolutionary algorithms research community. The genetic algorithm (GA), enunciated by Holland, is one such popular algorithm. This chapter provides two recent algorithms for evolutionary optimization, well known as particle swarm optimization (PSO) and differential evolution (DE). The algorithms are inspired by biological and sociological motivations and can take care of optimality on rough, discontinuous and multimodal surfaces. The chapter explores several schemes for controlling the convergence behaviors of PSO and DE by a judicious selection of their parameters. Special emphasis is given to the hybridizations of PSO and DE algorithms with other soft computing tools. The article finally discusses the mutual synergy of PSO with DE, leading to a more powerful global search algorithm, and its practical applications.
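Of the two algorithms the chapter analyzes, differential evolution is the more compact to sketch. The example below is an illustrative DE/rand/1/bin variant (not the chapter's code; F, CR, and the sphere test function are assumptions): mutate with F*(b - c) added to a, recombine by binomial crossover, and keep the trial only if it beats its parent.

```python
import random

# Minimal DE/rand/1/bin sketch (illustrative): mutation a + F*(b - c),
# binomial crossover, and greedy one-to-one replacement, minimizing the
# sphere function as an assumed test problem.

def de(cost, dim=5, pop_size=20, iters=150, f=0.8, cr=0.9, seed=5):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    fit = [cost(x) for x in pop]
    for _ in range(iters):
        for i in range(pop_size):
            # pick three distinct individuals other than the target
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            j_rand = rng.randrange(dim)       # guarantees one mutated gene
            trial = [a[d] + f * (b[d] - c[d])
                     if (rng.random() < cr or d == j_rand) else pop[i][d]
                     for d in range(dim)]
            tf = cost(trial)
            if tf <= fit[i]:                  # greedy one-to-one selection
                pop[i], fit[i] = trial, tf
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]

sphere = lambda x: sum(v * v for v in x)
best_x, best_f = de(sphere)
```

The parameters F (differential weight) and CR (crossover rate) are exactly the kind of convergence-controlling knobs the chapter's analysis is about.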
Genetic Optimization Using Derivatives: The rgenoud package for R
Journal of Statistical Software, 2010
Cited by 20 (3 self)
This introduction to the R package rgenoud is a modified version of Mebane and Sekhon (2009), published in the Journal of Statistical Software. That version of the introduction contains higher-resolution figures. Genoud is an R function that combines evolutionary search algorithms with derivative-based (Newton or quasi-Newton) methods to solve difficult optimization problems. Genoud may also be used for optimization problems for which derivatives do not exist. Genoud solves problems that are nonlinear or perhaps even discontinuous in the parameters of the function to be optimized. When the function to be optimized (for example, a log-likelihood) is nonlinear in the model's parameters, the function will generally not be globally concave and may have irregularities such as saddle points or discontinuities. Optimization methods that rely on derivatives of the objective function may be unable to find any optimum at all. Multiple local optima may exist, so that there is no guarantee that a derivative-based method will converge to the global optimum. On the other hand, algorithms that do not use derivative information (such as pure genetic algorithms) are for many problems needlessly poor at local hill climbing. Most statistical problems are regular in a neighborhood of the solution. Therefore, for some portion of the search space, derivative information is useful. The function supports parallel processing on multiple CPUs on a single machine or a cluster of computers. Keywords: genetic algorithm, evolutionary program, optimization, parallel computing, R.
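The hybrid strategy the abstract describes, global evolutionary search followed by derivative-based local polishing, can be sketched generically. The example below is a hypothetical Python illustration of that idea only (rgenoud itself is an R package with a different, far richer implementation); the population scheme, step sizes, and quadratic test function are all assumptions.

```python
import random

# Hypothetical sketch of the evolutionary + derivative-based hybrid idea
# (NOT rgenoud's implementation): a simple evolutionary phase explores
# globally, then finite-difference gradient descent polishes the best point.

def grad(f, x, h=1e-6):
    """Forward finite-difference gradient of f at x."""
    fx = f(x)
    return [(f(x[:i] + [x[i] + h] + x[i + 1:]) - fx) / h for i in range(len(x))]

def hybrid_optimize(f, dim=2, pop=30, gens=40, seed=6):
    rng = random.Random(seed)
    xs = [[rng.uniform(-3, 3) for _ in range(dim)] for _ in range(pop)]
    for _ in range(gens):                    # global evolutionary phase
        xs.sort(key=f)
        elite = xs[:pop // 3]                # keep the best third
        xs = elite + [[g + rng.gauss(0, 0.3) for g in rng.choice(elite)]
                      for _ in range(pop - len(elite))]
    best = min(xs, key=f)
    for _ in range(200):                     # local derivative-based phase
        best = [b - 0.05 * gi for b, gi in zip(best, grad(f, best))]
    return best

cost = lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2   # assumed smooth objective
x_star = hybrid_optimize(cost)
```

This mirrors the abstract's argument: the evolutionary phase does not need derivatives and survives irregularities, while the local phase exploits the regularity most objectives have near the solution.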