Results 1–10 of 20
Making Models Match: Replicating an Agent-Based Model
 Journal of Artificial Societies and Social Simulation, 2007
Abstract
Cited by 22 (1 self)
Scientists have increasingly employed computer models in their work. Recent years have seen a proliferation of agent-based models in the natural and social sciences. But with the exception of a few "classic" models, most of these models have never been replicated by anyone but the original developer. As replication is a critical component of the scientific method and a core practice of scientists, we argue herein for an increased practice of replication in the agent-based modeling community, and for widespread discussion of the issues surrounding replication. We begin by clarifying the concept of replication as it applies to ABM. Furthermore, we argue that replication may have even greater benefits when applied to computational models than when applied to physical experiments. Replication of computational models affects model verification and validation and fosters shared understanding about modeling decisions. To facilitate replication, we must create standards both for how to replicate models and for how to evaluate the replication. In this paper, we present a case study of our own attempt to replicate a classic agent-based model. We begin by describing an agent-based model from political science that was developed by Axelrod and Hammond. We then detail our effort to replicate that model and the challenges that arose in recreating the model and in determining whether the replication was successful. We conclude by discussing issues for (1) researchers attempting to replicate models and (2) researchers developing models in order to facilitate the replication of their results.
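One practical question the paper raises is how to decide whether a replication "matches" the original. A minimal sketch of a distributional-equivalence check, using a toy stand-in model rather than the Axelrod–Hammond model itself (the model, seed scheme, and tolerance are all illustrative assumptions):

```python
import random
import statistics

def run_model(cooperation_bias, n_agents=100, seed=0):
    # Toy stand-in for an agent-based model: each agent independently
    # "cooperates" with probability cooperation_bias; the output measure
    # is the resulting cooperation rate.
    rng = random.Random(seed)
    cooperators = sum(rng.random() < cooperation_bias for _ in range(n_agents))
    return cooperators / n_agents

# "Original" and "replicated" implementations: here the same toy model run
# with disjoint seed sets, standing in for two independent codebases.
original = [run_model(0.6, seed=s) for s in range(30)]
replica = [run_model(0.6, seed=1000 + s) for s in range(30)]

# Distributional-equivalence check on the output measure: are the two mean
# cooperation rates within a chosen tolerance? (A real study might instead
# use a two-sample t-test or Kolmogorov-Smirnov test across runs.)
diff = abs(statistics.mean(original) - statistics.mean(replica))
print(f"mean difference between implementations: {diff:.3f}")
```

Even in this toy, the paper's point survives: the success criterion (here, a tolerance on means) must be chosen and reported explicitly, since bitwise-identical output is rarely attainable across independent implementations.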
Editorial: Errors in the Variables, Unobserved Heterogeneity, and Other Ways of Hiding Statistical Error
 Marketing Science, 2006
Abstract
Cited by 7 (4 self)
One research function is proposing new scientific theories; another is testing the falsifiable predictions of those theories. Eventually, sufficient observations reveal valid predictions. For the impatient, behold statistical methods, which attribute inconsistent predictions to either faulty data (e.g., measurement error) or faulty theories. Testing theories, however, differs from estimating unknown parameters in known relationships. When testing theories, it is dangerous enough to cure inconsistencies by adding observed explanatory variables (i.e., beyond the theory), let alone unobserved explanatory variables. Adding ad hoc explanatory variables mimics experimental controls when experiments are impractical. Assuming unobservable variables is different, partly because realizations of unobserved variables are unavailable for validating estimates. When different statistical assumptions about error produce dramatically different conclusions, we should doubt the theory, the data, or both. Theory tests should be insensitive to assumptions about error, particularly adjustments for error from unobserved variables. These adjustments can fallaciously inflate support for wrong theories, partly by implicitly underweighting observations inconsistent with the theory. Inconsistent estimates often convey an important message: the data are inconsistent with the theory! Although adjustments for unobserved variables and ex post information are extraordinarily useful when estimating known relationships, when ...
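The editorial's warning about measurement error can be made concrete with the classic attenuation-bias effect, a standard statistical fact: noise in an explanatory variable biases the least-squares slope toward zero. A minimal simulation (all numbers illustrative):

```python
import random

def ols_slope(xs, ys):
    # Ordinary least-squares slope of y on x.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

rng = random.Random(42)
true_slope = 2.0
x = [rng.gauss(0, 1) for _ in range(5000)]
y = [true_slope * xi + rng.gauss(0, 0.5) for xi in x]

# With clean data the estimated slope is close to the true value of 2.0.
clean = ols_slope(x, y)

# Measurement error in x attenuates the estimate toward zero; with error
# variance equal to the signal variance, the expected slope is halved.
x_noisy = [xi + rng.gauss(0, 1) for xi in x]
noisy = ols_slope(x_noisy, y)
print(f"clean slope: {clean:.2f}, error-in-x slope: {noisy:.2f}")
```

A test of the theory "slope = 2" would wrongly reject on the noisy data; conversely, "correcting" for an assumed unobserved variable can manufacture agreement just as easily, which is exactly the editorial's caution.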
Reliability of Computational Science
, 2006
Abstract
Cited by 6 (0 self)
Today’s computers allow us to simulate large, complex physical problems. Often, the mathematical models describing such problems are based on a relatively small amount of available information, such as experimental measurements. The question arises whether the computed data can be used as the basis for decisions in critical engineering, economic, and medical applications. A representative list of engineering accidents from past years, together with their causes, illustrates the question. The paper describes a general framework for Verification and Validation (V&V) which addresses this question. The framework is then applied to an illustrative engineering problem, in which the basis for decision is a specific quantity of interest, namely the probability that the quantity does not exceed a given value. The V&V framework is applied and explained in detail. The result of the analysis is the computation of the failure probability as well as a quantification of the confidence in the computation, depending on the amount of available experimental data.
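The paper's deliverable, a failure probability together with a data-dependent confidence statement, can be sketched with a Monte Carlo estimate plus a distribution-free Hoeffding confidence half-width. This toy version (a standard-normal response standing in for the real simulation; threshold and sample sizes assumed) only illustrates how confidence tightens with more data:

```python
import math
import random

def estimate_failure_probability(n_samples, threshold=3.0, seed=1):
    # Monte Carlo estimate of P(Q > threshold) for a quantity of interest Q.
    # A standard normal response stands in for the physical simulation here.
    rng = random.Random(seed)
    failures = sum(rng.gauss(0, 1) > threshold for _ in range(n_samples))
    return failures / n_samples

def hoeffding_halfwidth(n_samples, alpha=0.05):
    # Distribution-free 95% confidence half-width for a probability estimated
    # from n samples: more data yields a tighter confidence statement.
    return math.sqrt(math.log(2 / alpha) / (2 * n_samples))

for n in (100, 10_000, 1_000_000):
    p = estimate_failure_probability(n)
    eps = hoeffding_halfwidth(n)
    print(f"n={n:>9}: p_fail ~ {p:.4f} +/- {eps:.4f}")
```

With 100 samples the half-width (about 0.14) dwarfs a rare-event probability, mirroring the paper's point that the confidence in the computed failure probability hinges on the amount of available data.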
Validation and forecasting accuracy in models of climate change
 International Journal of Forecasting, 2011
Hypothesis testing for validation and certification
 J. Complexity
Abstract
Cited by 4 (1 self)
We develop a hypothesis testing framework for the formulation of the problems of (1) the validation of a simulation model and (2) using modeling to certify the performance of a physical system. These results are used to solve the extrapolative validation and certification problems, namely problems where the regime of interest is different from the regime for which we have experimental data. We use concentration-of-measure theory to develop the tests and analyze their errors. This work was stimulated by the work of Lucas, Owhadi, and Ortiz [1], where a rigorous method of validation and certification is described and tested. In Remark 2.5 we describe the connection between the two approaches. Moreover, as mentioned in that work, these results have important implications for the Quantification of Margins and Uncertainties (QMU) framework. In particular, in Remark 2.6 we describe how it provides a rigorous interpretation of the notion of confidence and new notions of margins and uncertainties which allow this interpretation. Since certain concentration parameters used in the above tests may be unknown, we furthermore show, in the second half of the paper, how to derive equally powerful tests which estimate them from sample data, thus replacing the assumption of the values of the concentration parameters with weaker assumptions.
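The flavor of a concentration-of-measure certification test can be sketched with a one-sided Hoeffding bound: certify that mean performance exceeds a threshold only if the sample mean clears it by a margin that shrinks with sample size. This is a simplified illustration, not the paper's actual test; the diameter bound and all numbers are assumptions:

```python
import math
import random

def certify_mean(samples, threshold, diameter, alpha=0.05):
    # One-sided Hoeffding certification: with confidence 1 - alpha, the true
    # mean performance exceeds `threshold` if the sample mean clears it by
    # the concentration margin. `diameter` is an a-priori bound on the range
    # of the performance variable (the concentration parameter).
    n = len(samples)
    margin = diameter * math.sqrt(math.log(1 / alpha) / (2 * n))
    mean = sum(samples) / n
    return mean - margin > threshold, mean, margin

rng = random.Random(7)
# Toy performance data confined to [0, 1]; a real study would use
# experimental measurements of the physical system.
data = [min(1.0, max(0.0, rng.gauss(0.85, 0.1))) for _ in range(200)]

certified, mean, margin = certify_mean(data, threshold=0.7, diameter=1.0)
print(f"mean={mean:.3f}, margin={margin:.3f}, certified={certified}")
```

When the concentration parameter (here, the diameter) is itself unknown, the paper's second half replaces the assumed value with an estimate from sample data under weaker assumptions; the sketch above simply assumes it.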
A perspective and framework for the conceptual modelling of knowledge
 Murdoch University, 2013
Abstract
Cited by 1 (1 self)
I declare that this thesis is my own account of my research and contains as its main content work which has not previously been submitted for a degree at any tertiary education institution.
Validating a Computational Model of Decision-Making using Empirical Data
 NAACSOS Conference, 2003
Abstract
Cited by 1 (0 self)
Let us be frank with ourselves and pose the question: why a computational model in the first place? To answer this, we must demonstrate that analytic and statistical methods neither adequately explain our process nor properly predict the outcomes for anticipated scenarios, while computation achieves both. Implicit in this endeavor is the requirement for us to define the criteria by which we convince ourselves, and others, that the model is both correct and useful. That is, at minimum, the model must both fit the given data and generalize to new data using parameters that are sensible, meaning that their derivation from the empirical data is clear. These tasks constitute the ever-present challenge of justifying and validating a computational model. The ideal model is sufficiently predictive, so as to obviate the need for collecting new data, and yet maximally parsimonious, so as to be easily understood and usable. While many developed models of all flavors (mathematical, statistical, and computational) can describe a host of social phenomena, these phenomena are relatively simple compared to the ones that still require quantitative explication accurate enough to inform consequential decisions such as policy and organizational strategy. In short, the state of the art in modeling still imposes no standards on many of the aforementioned validation issues. At what point does, say, a statistical model become inadequate to capture the complexity of the problem and need to be upgraded to a computational model? Sometimes we are attracted to the dynamic nature of computation, which gives us a means to observe the evolution of the system. Other times, the system can only be accurately described as a dynamic process. As more and more models arrive on the scene, we need to ask whether an extant model is sufficient or whether we need to develop yet another, specifically tailored to our specific problem.
In this paper, we validate the predictive ability of a generalized computational model of decisionmaking using empirical data from two distinct studies. The first study comprises data obtained
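The fit-versus-generalize criterion described above can be illustrated with the simplest possible holdout check: calibrate on one part of the empirical data and score predictions on the rest. The "model" here is deliberately trivial (a fitted mean), standing in for any parameterized decision model; data and split are assumptions:

```python
import random

def fit_mean(train):
    # Deliberately trivial "model": predict the training mean. It stands in
    # for any parameterized decision model calibrated to empirical data.
    return sum(train) / len(train)

def mse(prediction, data):
    # Mean squared error of a constant prediction against observed data.
    return sum((x - prediction) ** 2 for x in data) / len(data)

rng = random.Random(3)
observations = [rng.gauss(5.0, 1.0) for _ in range(200)]

# Echoing the paper's two-study design: calibrate on one data set, then
# check that predictive accuracy holds up on data the model never saw.
train, holdout = observations[:100], observations[100:]
model = fit_mean(train)
print(f"train MSE: {mse(model, train):.2f}, holdout MSE: {mse(model, holdout):.2f}")
```

A model that scores well on `train` but badly on `holdout` fits without generalizing; comparable scores on both are the minimal evidence of predictive ability the paper asks for.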
Towards Policy Relevant Environmental Modeling: Contextual Validity and Pragmatic Models
 U.S. Department of the Interior, U.S. Geological Survey, 2013
Abstract
Cited by 1 (0 self)
This report is preliminary and has not been reviewed for conformity with U.S. Geological Survey editorial standards or with the North American Stratigraphic Code. Any use of trade, firm, or product names is for descriptive purposes only and does not imply endorsement by the U.S. Government.
Uncertainty from model calibration: Applying a new method to calibrate energy demand for transport. Utrecht/Bilthoven
 Utrecht University, Dept., 2009
Abstract
Cited by 1 (0 self)
© The Author(s) 2009. This article is published with open access at Springerlink.com. Uncertainties in energy demand modelling originate from both limited understanding of the real-world system and a lack of data for model development, calibration and validation. These uncertainties allow for the development of different models, but also leave room for different calibrations of a single model. Here, an automated model calibration procedure was developed and tested for transport-sector energy use modelling in the TIMER 2.0 global energy model. This model describes energy use on the basis of activity levels, structural change, and autonomous and price-induced energy efficiency improvements. We found that the model could reasonably reproduce historic data under different sets of parameter values, leading to different projections of future energy demand levels. Projected energy use for 2030 shows a range of 44–95% around the best-fit projection. Two different model interpretations of the past can generally be distinguished: (1) high useful-energy intensity and major energy efficiency improvements, or (2) low useful-energy intensity and little efficiency improvement. Generally, the first leads to higher future energy demand levels than the second, but model and insights do not provide decisive arguments to attribute a higher likelihood to one of the alternatives.
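The paper's central finding, that several calibrations reproduce history about equally well yet diverge in projection, can be sketched with a toy demand model. The functional form and all parameter values below are illustrative assumptions, not TIMER 2.0:

```python
def demand(intensity, eff, t):
    # Toy energy-demand model: activity grows linearly with time, scaled by a
    # useful-energy intensity and a compounding annual efficiency gain `eff`.
    return intensity * (100 + 5 * t) * (1 - eff) ** t

historic = range(10)  # "observed" calibration period
# Two calibrations mirroring the paper's two interpretations: (1) high
# useful-energy intensity with strong efficiency improvement, (2) low
# intensity with little improvement. Parameter values are illustrative only.
high = [demand(1.052, 0.012, t) for t in historic]
low = [demand(1.000, 0.002, t) for t in historic]

# Both reproduce the historic period to within a few percent of each other...
fit_gap = max(abs(a - b) / b for a, b in zip(high, low))

# ...yet the two projections have diverged substantially by t = 30.
proj_gap = abs(demand(1.052, 0.012, 30) - demand(1.000, 0.002, 30)) / demand(1.000, 0.002, 30)
print(f"max historic gap: {fit_gap:.1%}, projection gap at t=30: {proj_gap:.1%}")
```

Because the historic data cannot discriminate between the two parameter sets, the honest output is a projection range rather than a single best-fit trajectory, which is how the paper reports its 2030 results.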
FOR THE DOCTORATE IN ADMINISTRATIVE SCIENCES BY
, 2009
Abstract
Notice: This thesis is distributed in respect of the rights of its author, who has signed the form Autorisation de reproduire et de diffuser un travail de recherche de cycles supérieurs (SDU-522 Rév.01-2006). This authorization stipulates that, "in accordance with Article 11 of Règlement no 8 des études de cycles supérieurs, [the author] grants the Université du Québec à Montréal a non-exclusive license to use and publish all or a substantial part of [his or her] research work for pedagogical and non-commercial purposes. More specifically, [the author] authorizes the Université du Québec à Montréal to reproduce, distribute, lend, or sell copies of [his or her] research work for non-commercial purposes on any medium whatsoever, including the Internet. This license and authorization do not entail a waiver on the part of [the author] of [his or her] moral rights or intellectual property rights. Unless otherwise agreed, [the author] retains the freedom to distribute and to commercialize this work, or not, a copy of which [he or she] possesses." To her who always knew how to be by my side