Results 1–10 of 89
A Methodology for Fitting and Validating Metamodels in Simulation
European Journal of Operational Research, 1997
Cited by 93 (5 self)
Abstract: This expository paper discusses the relationships among metamodels, simulation models, and problem entities. A metamodel or response surface is an approximation of the input/output function implied by the underlying simulation model. There are several types of metamodel: linear regression, splines, neural networks, etc. This paper distinguishes between fitting and validating a metamodel. Metamodels may have different goals: (i) understanding, (ii) prediction, (iii) optimization, and (iv) verification and validation. For this metamodeling, a thirteen-step process is proposed. Classic design of experiments (DOE) is summarized, including standard measures of fit such as the R-square coefficient and cross-validation measures. This DOE is extended to sequential or stagewise DOE. Several validation criteria, measures, and estimators are discussed. Metamodels in general are covered, along with a procedure for developing linear regression (including polynomial) metamodels.
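As a minimal illustration of the kind of linear regression (polynomial) metamodel the paper surveys, the sketch below fits a second-order polynomial to noisy input/output data from a toy "simulation" and computes the standard R-square measure of fit. The toy response function, coefficients, and design are invented for illustration; they are not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(x, noise=0.1):
    """Toy stochastic simulation: a noisy quadratic response (illustrative only)."""
    return 2.0 + 1.5 * x - 0.8 * x**2 + rng.normal(0.0, noise, size=np.shape(x))

# Design points (a simple one-factor design) and simulated responses.
x = np.linspace(-1.0, 1.0, 21)
y = simulate(x)

# Fit the metamodel y ~ b0 + b1*x + b2*x^2 by ordinary least squares.
X = np.column_stack([np.ones_like(x), x, x**2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# R-square: fraction of output variance explained by the metamodel.
resid = y - X @ beta
r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
print(beta, r2)
```

With low simulation noise the fitted coefficients land near the true (2.0, 1.5, -0.8) and R-square is close to 1; cross-validation, as the paper stresses, would re-fit on subsets to guard against overfitting.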
Simple Procedures for Selecting the Best Simulated System when the Number of Alternatives Is Large
Operations Research, 1999
Cited by 65 (15 self)
Abstract: In this paper we address the problem of finding the simulated system with the best (maximum or minimum) expected performance when the number of alternatives is finite, but large enough that ranking-and-selection (R&S) procedures may require too much computation to be practical. Our approach is to use the data provided by the first stage of sampling in an R&S procedure to screen out alternatives that are not competitive, and thereby avoid the (typically much larger) second-stage sample for these systems. Our procedures represent a compromise between standard R&S procedures, which are easy to implement but can be computationally inefficient, and fully sequential procedures, which can be statistically efficient but are more difficult to implement and depend on more restrictive assumptions. We present a general theory for constructing combined screening and indifference-zone selection procedures, several specific procedures, and a portion of an extensive empirical evaluation.
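The first-stage screening idea can be sketched as follows. This is an illustrative rule, not one of the paper's actual procedures: after a first stage of replications for every system, keep only the systems whose sample mean is within a variance-based slack of the best observed mean, and spend the second-stage budget only on the survivors.

```python
import math
import statistics

def screen(first_stage, t_mult=2.0):
    """Illustrative first-stage screen (maximization).
    first_stage: dict mapping system name -> list of first-stage observations.
    A system survives if its mean plus t_mult standard errors reaches the
    best observed mean; clearly inferior systems are eliminated before the
    expensive second stage."""
    stats = {k: (statistics.mean(v), statistics.stdev(v) / math.sqrt(len(v)))
             for k, v in first_stage.items()}
    best = max(m for m, _ in stats.values())
    return {k for k, (m, se) in stats.items() if m + t_mult * se >= best}

data = {
    "A": [10.1, 9.8, 10.3, 10.0],
    "B": [9.9, 10.2, 10.1, 9.7],   # close to A: kept for stage two
    "C": [5.0, 5.2, 4.9, 5.1],     # clearly inferior: screened out
}
print(screen(data))
```

A real indifference-zone procedure would choose the slack constant to guarantee a probability of correct selection; here `t_mult` is just an arbitrary illustrative multiplier.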
Simulation Optimization: A Review, New Developments, and Applications
In Proceedings of the 37th Winter Simulation Conference, 2005
Cited by 54 (5 self)
Abstract: We provide a descriptive review of the main approaches for carrying out simulation optimization, and sample some recent algorithmic and theoretical developments in simulation optimization research. We then survey some of the software available for simulation languages and spreadsheets, and present several illustrative applications.
Stochastic Gradient Estimation
2006
Cited by 39 (6 self)
Abstract: We consider the problem of efficiently estimating gradients from stochastic simulation. Although the primary motivation is their use in simulation optimization, the resulting estimators can also be useful in other ways, e.g., sensitivity analysis. The main approaches described are finite differences (including simultaneous perturbations), perturbation analysis, the likelihood ratio/score function method, and the use of weak derivatives.
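Two of the approaches listed, finite differences and simultaneous perturbations, can be contrasted in a short sketch (an illustrative toy objective, not code from the survey): central finite differences need 2d simulation runs for a d-dimensional gradient, while a simultaneous perturbation estimate needs only two runs regardless of d, at the cost of extra variance.

```python
import numpy as np

rng = np.random.default_rng(1)

def f_noisy(x):
    """Toy stochastic simulation output: f(x) = sum(x^2) plus observation noise."""
    return float(np.sum(np.asarray(x) ** 2) + rng.normal(0.0, 0.01))

def fd_gradient(f, x, c=0.1):
    """Central finite differences: 2*d simulation runs for a d-vector gradient."""
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = c
        g[i] = (f(x + e) - f(x - e)) / (2 * c)
    return g

def sp_gradient(f, x, c=0.1):
    """Simultaneous perturbation: 2 simulation runs for the whole gradient."""
    x = np.asarray(x, dtype=float)
    delta = rng.choice([-1.0, 1.0], size=len(x))   # Rademacher perturbation
    diff = f(x + c * delta) - f(x - c * delta)
    return diff / (2 * c) * (1.0 / delta)

x0 = np.array([1.0, -2.0, 0.5])
print(fd_gradient(f_noisy, x0))   # close to the true gradient 2*x0 = [2, -4, 1]
print(sp_gradient(f_noisy, x0))   # much noisier per estimate, but only 2 runs
```

Averaging many simultaneous-perturbation estimates recovers the true gradient, which is what stochastic approximation algorithms implicitly exploit.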
Pricing American options: A comparison of Monte Carlo simulation approaches
Journal of Computational Finance, 1999
Cited by 38 (7 self)
Abstract: A number of Monte Carlo simulation-based approaches have been proposed within the past decade to address the problem of pricing American-style derivatives. The purpose of this paper is to empirically test some of these algorithms on a common set of problems in order to assess the strengths and weaknesses of each approach as a function of the problem characteristics. In addition, we introduce another simulation-based approach that parameterizes the early exercise curve and casts the valuation problem as one of maximizing the expected payoff (under the martingale measure) with respect to the associated parameters; the optimization is carried out using a simultaneous perturbation stochastic approximation (SPSA) algorithm.
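The parametric idea can be sketched on a deliberately tiny example (entirely illustrative; the market parameters, the two-date Bermudan put, the single threshold parameter, and the gain sequences below are all invented, not the paper's setup): parameterize the exercise rule by a threshold b, estimate the expected discounted payoff by Monte Carlo, and push b uphill with an SPSA-style stochastic-approximation iteration.

```python
import numpy as np

rng = np.random.default_rng(2)
S0, K, r, sigma, dt = 100.0, 100.0, 0.05, 0.2, 0.5  # toy market parameters

def payoff(b, n=20_000, seed=0):
    """Monte Carlo estimate of the expected discounted payoff of a
    two-exercise-date Bermudan put under the rule: exercise at date 1
    iff S1 < b.  A fixed seed gives common random numbers across b."""
    g = np.random.default_rng(seed)
    z1, z2 = g.standard_normal(n), g.standard_normal(n)
    drift = (r - 0.5 * sigma**2) * dt
    s1 = S0 * np.exp(drift + sigma * np.sqrt(dt) * z1)
    s2 = s1 * np.exp(drift + sigma * np.sqrt(dt) * z2)
    v = np.where(s1 < b,
                 np.exp(-r * dt) * np.maximum(K - s1, 0.0),        # early exercise
                 np.exp(-r * 2 * dt) * np.maximum(K - s2, 0.0))    # continue
    return float(v.mean())

# SPSA-style ascent on the scalar threshold b (in one dimension this reduces
# to a randomized finite difference with decaying gain sequences).
b = 90.0
for k in range(1, 101):
    a_k, c_k = 50.0 / (k + 10), 2.0 / k**0.25
    delta = rng.choice([-1.0, 1.0])
    ghat = (payoff(b + c_k * delta, seed=k)
            - payoff(b - c_k * delta, seed=k)) / (2 * c_k * delta)
    b = float(np.clip(b + a_k * ghat, 60.0, 100.0))  # keep b in a sane range

print(b, payoff(b))
```

Exercising only when the option is sufficiently in the money beats exercising whenever it has any intrinsic value, which is why an intermediate threshold outperforms b = K here.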
Simulation Optimization
In Proceedings of the 2001 Winter Simulation Conference, edited by
On choosing parameters in retrospective-approximation algorithms for simulation-optimization
Proceedings of the 2006 Winter Simulation Conference. Institute of Electrical and Electronics Engineers: Piscataway
Cited by 20 (8 self)
Abstract: The Stochastic Root-Finding Problem is that of finding a zero of a vector-valued function known only through a stochastic simulation. The Simulation-Optimization Problem is that of locating a real-valued function's minimum, again with only a stochastic simulation that generates function estimates. Retrospective Approximation (RA) is a sample-path technique for solving such problems, in which the solution to the underlying problem is approached via solutions to a sequence of approximate deterministic problems, each generated using a specified sample size and solved to a specified error tolerance. Our primary focus in this paper is providing guidance on choosing the sequence of sample sizes and error tolerances in RA algorithms. We first present an overview of the conditions that guarantee the correct convergence of RA's iterates. We then characterize a class of error-tolerance and sample-size sequences that are superior to others in a certain precisely defined sense. We also identify and recommend members of this class, and provide a numerical example illustrating the key results.
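The RA recipe, solving a sequence of sample-average problems with growing sample sizes and shrinking error tolerances while warm-starting each stage from the previous solution, can be sketched as below. The toy objective, the geometric sequences, and the bisection inner solver are all illustrative choices, not the sequences the paper recommends.

```python
import numpy as np

def retrospective_approximation(x0, iters=6, m0=50, growth=2.0,
                                tol0=0.5, shrink=0.5, seed=0):
    """Illustrative RA loop for a 1-d problem: minimize E[(x - 1 - xi)^2],
    xi ~ N(0, 1), whose true minimizer is x = 1.  Each stage fixes a sample
    (making the averaged objective deterministic), solves it to tolerance
    tol_k by bisection on its slope, then grows m_k and tightens tol_k."""
    x, m, tol = x0, m0, tol0
    rng = np.random.default_rng(seed)
    for _ in range(iters):
        xi = rng.standard_normal(int(m))              # fixed stage-k sample
        f = lambda t: np.mean((t - 1.0 - xi) ** 2)    # deterministic sample average
        lo, hi = x - 10.0, x + 10.0                   # warm-started bracket
        while hi - lo > tol:                          # solve to tolerance tol_k
            mid = (lo + hi) / 2.0
            slope = (f(mid + 1e-6) - f(mid - 1e-6)) / 2e-6
            lo, hi = (lo, mid) if slope > 0 else (mid, hi)
        x = (lo + hi) / 2.0
        m *= growth                                   # grow the sample size
        tol *= shrink                                 # tighten the tolerance
    return x

print(retrospective_approximation(x0=5.0))
```

The point of the paper is precisely that the `growth` and `shrink` schedules are not free parameters: badly balanced sequences waste either simulation effort or solver effort.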
Optimal Structured Feedback Policies for ABR Flow Control Using Two Timescale SPSA
Proceedings of the Summer Computer Simulation Conference, Society for Computer Simulation, 1994
Cited by 14 (8 self)
Abstract: Optimal structured feedback control policies for rate-based flow control of available bit rate (ABR) service in asynchronous transfer mode networks are obtained in the presence of information and propagation delays, using a numerically efficient two-timescale simultaneous perturbation stochastic approximation algorithm. Models comprising both a single bottleneck node and a network with multiple bottleneck nodes are considered. A convergence analysis of the algorithm is presented. Numerical experiments demonstrate fast convergence even in the presence of significant delays. We also illustrate performance comparisons with the well-known Explicit Rate Indication for Congestion Avoidance (ERICA) algorithm and describe another algorithm (based on ERICA) that does not require estimating available bandwidth (as ERICA does).
Index Terms: Network of nodes, optimal structured feedback policies, rate-based ABR flow control, single bottleneck node, two-timescale SPSA.
Variable-number sample-path optimization
Cited by 13 (1 self)
Abstract: The sample-path method is one of the most important tools in simulation-based optimization. The basic idea of the method is to approximate the expected simulation output by the average of sample observations with a common random number sequence. In this paper, we describe a new variant of Powell's UOBYQA (Unconstrained Optimization BY Quadratic Approximation) method, which integrates a Bayesian Variable-Number Sample-Path (VNSP) scheme to choose an appropriate number of samples at each iteration. The statistically accurate scheme determines the number of simulation runs and guarantees the global convergence of the algorithm. The VNSP scheme saves a significant amount of simulation effort compared to general-purpose 'fixed-number' sample-path methods. We present numerical results based on the new algorithm.
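The common-random-numbers device underlying the sample-path method can be shown in a few lines (a toy example, not the paper's UOBYQA-based algorithm): fixing the random number sequence turns the sample average into a deterministic function of x, which an ordinary deterministic optimizer, here golden-section search, can then minimize.

```python
import numpy as np

def make_sample_path(n=1000, seed=42):
    """Fix one random-number sequence; the returned objective is then a
    deterministic function of x (the sample-path approximation of
    E[(x - 3 + xi)^2], whose true minimizer is x = 3)."""
    xi = np.random.default_rng(seed).standard_normal(n)
    def f_bar(x):
        return float(np.mean((x - 3.0 + xi) ** 2))
    return f_bar

f_bar = make_sample_path()
# Calling f_bar twice at the same x gives the identical value: it is a
# deterministic function, unlike a fresh simulation run each time.

def golden(f, lo, hi, tol=1e-6):
    """Golden-section search for the minimum of a unimodal function."""
    g = (np.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    c, d = b - g * (b - a), a + g * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - g * (b - a)
        else:
            a, c = c, d
            d = a + g * (b - a)
    return (a + b) / 2.0

x_star = golden(f_bar, -10.0, 10.0)
print(x_star)   # near 3, up to the sampling error of the fixed sample
```

The variable-number twist the paper adds is to choose n adaptively per iteration rather than fixing it once, so effort is spent only where statistical accuracy is actually needed.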
Convergence analysis of gradient descent stochastic algorithms
Journal of Optimization Theory and Applications, 1996
Cited by 12 (0 self)
Abstract: This paper proves convergence of a sample-path based stochastic gradient-descent algorithm for optimizing expected-value performance measures in discrete event systems. The algorithm uses increasing precision at successive iterations, and it moves against the direction of a generalized gradient of the computed sample performance function. Two convergence results are established: one for the case where the expected-value function is continuously differentiable, and the other for when that function is nondifferentiable but the sample performance functions are convex. The proofs are based on a version of the uniform law of large numbers which is provable for many discrete event systems where infinitesimal perturbation analysis is known to be strongly consistent.
Key Words: Gradient descent, subdifferentials, uniform laws of large numbers, infinitesimal perturbation analysis, discrete event dynamic systems.