Results 1–10 of 127
Theory and applications of Robust Optimization
2007
Cited by 110 (16 self)
Abstract
In this paper we survey the primary research, both theoretical and applied, in the field of Robust Optimization (RO). Our focus will be on the computational attractiveness of RO approaches, as well as the modeling power and broad applicability of the methodology. In addition to surveying the most prominent theoretical results of RO over the past decade, we will also present some recent results linking RO to adaptable models for multistage decision-making problems. Finally, we will highlight successful applications of RO across a wide spectrum of domains, including, but not limited to, finance, statistics, learning, and engineering.
A sample approximation approach for optimization with probabilistic constraints
 IPCO 2007, Lecture Notes in Comput. Sci
2007
Cited by 50 (12 self)
Abstract
We study approximations of optimization problems with probabilistic constraints in which the original distribution of the underlying random vector is replaced with an empirical distribution obtained from a random sample. We show that such a sample approximation problem with risk level larger than the required risk level will yield a lower bound to the true optimal value with probability approaching one exponentially fast. This leads to an a priori estimate of the sample size required to have high confidence that the sample approximation will yield a lower bound. We then provide conditions under which solving a sample approximation problem with a risk level smaller than the required risk level will yield feasible solutions to the original problem with high probability. Once again, we obtain a priori estimates on the sample size required to obtain high confidence that the sample approximation problem will yield a feasible solution to the original problem. Finally, we present numerical illustrations of how these results can be used to obtain feasible solutions and optimality bounds for optimization problems with probabilistic constraints.
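The a priori sample-size estimates described in this abstract can be illustrated with a crude Hoeffding-style bound (a simplification for a single fixed candidate solution, not the paper's exact constants): to separate a sample risk level γ from a required risk level ε > γ with confidence 1 − δ, N ≥ log(1/δ) / (2(ε − γ)²) samples suffice.

```python
import math

def hoeffding_sample_size(eps, gamma, delta):
    """Samples needed so that, with probability >= 1 - delta, the empirical
    violation frequency of one fixed solution is within eps - gamma of its
    true violation probability (one-sided Hoeffding bound; illustrative,
    not the paper's sharper constants)."""
    assert 0 <= gamma < eps < 1 and 0 < delta < 1
    return math.ceil(math.log(1.0 / delta) / (2.0 * (eps - gamma) ** 2))

# required risk level 5%, sample risk level 3%, 99% confidence
print(hoeffding_sample_size(0.05, 0.03, 0.01))  # → 5757
```

Note how the bound degrades quadratically as the gap ε − γ shrinks, which is why a sample risk level well below the required one pays off.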
Stochastic programming approach to optimization under uncertainty
 Mathematical Programming
2006
Cited by 40 (0 self)
Abstract
In this paper we discuss computational complexity and risk-averse approaches to two- and multistage stochastic programming problems. We argue that two-stage (say, linear) stochastic programming problems can be solved with reasonable accuracy by Monte Carlo sampling techniques, while there are indications that the complexity of multistage programs grows quickly with the number of stages. We discuss an extension of coherent risk measures to a multistage setting and, in particular, dynamic programming equations for such problems.
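The Monte Carlo sampling approach advocated for two-stage problems can be sketched on a newsvendor instance (all numbers illustrative, not from the paper): replace the expectation by a sample average and optimize the resulting deterministic problem, whose optimum lies at one of the sampled demand values because the sample-average profit is piecewise linear and concave in the order quantity.

```python
import random

def saa_profit(q, demands, price, cost):
    # sample-average profit of ordering q units against sampled demands
    return sum(price * min(q, d) - cost * q for d in demands) / len(demands)

def saa_newsvendor(demands, price, cost):
    # the SAA optimum is attained at a breakpoint of the piecewise-linear
    # objective, i.e. at one of the sampled demands, so brute force suffices
    return max(demands, key=lambda q: saa_profit(q, demands, price, cost))

random.seed(0)
demands = [random.uniform(50.0, 150.0) for _ in range(1000)]
q_star = saa_newsvendor(demands, price=10.0, cost=3.0)
# critical ratio (price - cost)/price = 0.7, so q_star approximates
# the 70% quantile of demand, here about 120
```

With more samples the SAA optimum converges to the true critical-quantile solution, which is the kind of "reasonable accuracy" argument the abstract refers to.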
Selected topics in robust convex optimization
 Math. Prog. B, this issue
2007
Cited by 35 (2 self)
Abstract
Robust Optimization is a rapidly developing methodology for handling optimization problems affected by non-stochastic “uncertain-but-bounded” data perturbations. In this paper, we overview several selected topics in this popular area, specifically, (1) recent extensions of the basic concept of robust counterpart of an optimization problem with uncertain data, (2) tractability of robust counterparts, (3) links between RO and traditional chance constrained settings of problems with stochastic data, and (4) a novel generic application of the RO methodology in Robust Linear Control.
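The tractability of robust counterparts can be seen in the simplest case: for a linear constraint aᵀx ≤ b with each coefficient varying in a box [āᵢ − δᵢ, āᵢ + δᵢ], the robust counterpart is the explicit convex constraint āᵀx + Σᵢ δᵢ|xᵢ| ≤ b. A minimal check with toy numbers (not an instance from the paper):

```python
from itertools import product

def robust_feasible(x, a_bar, delta, b):
    """Robust counterpart of a^T x <= b under box uncertainty
    a_i in [a_bar_i - delta_i, a_bar_i + delta_i]: the worst-case
    left-hand side is a_bar^T x + sum_i delta_i * |x_i|."""
    lhs = sum(ab * xi + d * abs(xi) for ab, xi, d in zip(a_bar, x, delta))
    return lhs <= b

# sanity check: the closed form matches brute force over the 2^n box corners
x, a_bar, delta = [1.0, -2.0], [1.0, 2.0], [0.5, 0.5]
corners = product(*[(ab - d, ab + d) for ab, d in zip(a_bar, delta)])
worst = max(sum(a * xi for a, xi in zip(corner, x)) for corner in corners)
closed_form = sum(ab * xi + d * abs(xi) for ab, xi, d in zip(a_bar, x, delta))
```

The point of the surveyed theory is that this pattern persists for richer uncertainty sets (ellipsoids, budgets), with the robust counterpart remaining a tractable convex program.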
Constructing uncertainty sets for robust linear optimization
2006
doi: 10.1287/opre.1080.0646
Sample Average Approximation Method for Chance Constrained Programming: Theory and Applications
 J. Optim. Theory Appl. (2009) 142: 399–416
2009
Cited by 31 (1 self)
Abstract
We study sample approximations of chance constrained problems. In particular, we consider the sample average approximation (SAA) approach and discuss the convergence properties of the resulting problem. We discuss how one can use the SAA method to obtain good candidate solutions for chance constrained problems. Numerical experiments are performed to correctly tune the parameters involved in the SAA. In addition, we present a method for constructing statistical lower bounds for the optimal value of the considered problem and discuss how one should tune the underlying parameters. We apply the SAA to two chance constrained problems. The first is a linear portfolio selection problem with returns following a multivariate lognormal distribution. The second is a joint chance constrained version of a simple blending problem.
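The SAA feasibility check for a portfolio chance constraint P(wᵀR ≥ r) ≥ 1 − ε can be sketched as follows. For simplicity the sketch samples lognormal returns independently across assets, whereas the paper uses a multivariate lognormal; all parameter values are illustrative.

```python
import math
import random

def saa_chance_feasible(w, r_target, eps, n_samples, seed=1):
    """SAA check of the chance constraint P(w^T R >= r_target) >= 1 - eps,
    with returns R_i = exp(Z_i), Z_i ~ Normal(mu_i, sigma_i), sampled
    independently across assets (a simplification of the paper's
    multivariate lognormal model; parameters below are hypothetical)."""
    rng = random.Random(seed)
    mu = [0.05, 0.02]     # hypothetical log-return means
    sigma = [0.20, 0.05]  # hypothetical log-return volatilities
    violations = sum(
        1 for _ in range(n_samples)
        if sum(wi * math.exp(rng.gauss(m, s))
               for wi, m, s in zip(w, mu, sigma)) < r_target
    )
    return violations / n_samples <= eps

# the low-volatility asset alone comfortably meets a 0.9 return target
# at the 5% risk level; the volatile asset fails a 1.2 target
print(saa_chance_feasible([0.0, 1.0], r_target=0.9, eps=0.05, n_samples=5000))
```

As the abstract notes, in practice one tunes the sample size and the risk level used inside the SAA, typically testing with a level below ε to gain confidence in feasibility.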
Percentile Optimization for Markov Decision Processes with Parameter Uncertainty
Cited by 28 (7 self)
Abstract
Markov decision processes are an effective tool for modeling decision-making in uncertain dynamic environments. Since the parameters of these models are typically estimated from data or learned from experience, it is not surprising that the actual performance of a chosen strategy often differs significantly from the designer’s initial expectations due to unavoidable modeling ambiguity. In this paper, we present a set of percentile criteria that are conceptually natural and representative of the trade-off between optimistic and pessimistic points of view. We study the use of these criteria under different forms of uncertainty for both the rewards and the transitions. Some forms will be shown to be efficiently solvable and others highly intractable. In each case, we outline solution concepts that take parametric uncertainty into account in the decision-making process.
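For reward uncertainty, a percentile criterion can be evaluated for a fixed policy by sampling reward parameters, computing the policy's value for each sample, and taking a lower percentile of the resulting values. The sketch below uses a hypothetical two-state chain and Gaussian reward uncertainty, not an example from the paper:

```python
import random

def policy_value(P, r, gamma, tol=1e-8):
    """Value of the start state (index 0) of the Markov chain induced by
    a fixed policy, via value iteration on v = r + gamma * P v."""
    n = len(r)
    v = [0.0] * n
    while True:
        v_new = [r[i] + gamma * sum(P[i][j] * v[j] for j in range(n))
                 for i in range(n)]
        if max(abs(a - b) for a, b in zip(v, v_new)) < tol:
            return v_new[0]
        v = v_new

def percentile_value(P, gamma, alpha, n_samples=1000, seed=7):
    """Percentile criterion under reward uncertainty: sample reward vectors
    from a hypothetical posterior, evaluate the fixed policy on each, and
    report the alpha-percentile of the values -- a performance level met
    with probability 1 - alpha under that posterior."""
    rng = random.Random(seed)
    vals = sorted(policy_value(P, [rng.gauss(1.0, 0.3), rng.gauss(0.5, 0.1)],
                               gamma)
                  for _ in range(n_samples))
    return vals[int(alpha * n_samples)]

P = [[0.9, 0.1], [0.2, 0.8]]  # toy fixed-policy transition matrix
v10 = percentile_value(P, gamma=0.9, alpha=0.10)
# v10 sits below the nominal value (about 8.78 for mean rewards),
# quantifying the pessimistic end of the trade-off
```

Reward-uncertain percentile problems like this are among the tractable cases; the transition-uncertain versions are where the abstract's intractability results bite.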
Constructing risk measures from uncertainty sets
2005
Cited by 21 (1 self)
Abstract
We propose a unified theory that links uncertainty sets in robust optimization to risk measures in portfolio optimization. We illustrate the correspondence between uncertainty sets and some popular risk measures in finance, and show how robust optimization can be used to generalize the concepts of these measures. We also show that by using properly defined uncertainty sets in robust optimization models, one can in fact construct coherent risk measures. Our approach to creating coherent risk measures is easy to apply in practice, and computational experiments suggest that it may lead to superior portfolio performance. Our results have implications for efficient portfolio optimization under different measures of risk.
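The correspondence between uncertainty sets and risk measures can be made concrete for CVaR: over an empirical distribution with N equally weighted scenarios, CVaR_α of a loss equals the worst-case expected loss over the uncertainty set of distributions q with 0 ≤ qᵢ ≤ 1/(αN) and Σqᵢ = 1. The maximizer piles the allowed mass onto the largest losses, so when αN is an integer CVaR is simply the mean of the αN worst losses. A minimal sketch (toy loss data, not the authors' portfolio experiments):

```python
def cvar_via_uncertainty_set(losses, alpha):
    """CVaR_alpha over an empirical distribution, computed through its
    uncertainty-set (dual) representation:
        CVaR_alpha(L) = max { sum_i q_i L_i : 0 <= q_i <= 1/(alpha*N),
                              sum_i q_i = 1 }.
    With alpha*N integral, the optimum puts weight 1/(alpha*N) on the
    alpha*N largest losses, i.e. CVaR is their average."""
    n = len(losses)
    k = alpha * n
    assert abs(k - round(k)) < 1e-9, "keep alpha * N integral for the exact identity"
    k = round(k)
    worst = sorted(losses, reverse=True)[:k]
    return sum(worst) / k

losses = [3.0, -1.0, 7.0, 2.0, 5.0, 0.0, 4.0, 1.0, 6.0, -2.0]
print(cvar_via_uncertainty_set(losses, alpha=0.2))  # mean of the 2 worst: 6.5
```

This is the pattern the abstract generalizes: a properly chosen uncertainty set in a robust optimization model induces a coherent risk measure.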
On choosing parameters in retrospective-approximation algorithms for simulation-optimization
 Proceedings of the 2006 Winter Simulation Conference. Institute of Electrical and Electronics Engineers: Piscataway
Cited by 20 (8 self)
Abstract
The Stochastic Root-Finding Problem is that of finding a zero of a vector-valued function known only through a stochastic simulation. The Simulation-Optimization Problem is that of locating a real-valued function’s minimum, again with only a stochastic simulation that generates function estimates. Retrospective Approximation (RA) is a sample-path technique for solving such problems, where the solution to the underlying problem is approached via solutions to a sequence of approximate deterministic problems, each of which is generated using a specified sample size and solved to a specified error tolerance. Our primary focus in this paper is providing guidance on choosing the sequence of sample sizes and error tolerances in RA algorithms. We first present an overview of the conditions that guarantee the correct convergence of RA’s iterates. Then we characterize a class of error-tolerance and sample-size sequences that are superior to others in a certain precisely defined sense. We also identify and recommend members of this class, and provide a numerical example illustrating the key results.
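The RA scheme can be sketched on a toy stochastic root-finding problem, E[ξ] − x = 0 (so the root is x* = E[ξ]): geometrically growing sample sizes, shrinking error tolerances, and warm starts between subproblems. The pairing εₖ ∝ 1/√mₖ used below is one commonly recommended choice, not necessarily the constants this paper recommends.

```python
import random

def retrospective_approximation(xi_sampler, x0=0.0, n_iters=8,
                                m0=32, growth=2.0, c=1.0, seed=3):
    """Retrospective Approximation sketch for the stochastic root-finding
    problem E[xi] - x = 0. Iteration k draws m_k samples, solves the
    deterministic sample problem mean(xi) - x = 0 to tolerance eps_k,
    and warm-starts the next iteration from the current iterate."""
    rng = random.Random(seed)
    x, m = x0, m0
    for _ in range(n_iters):
        eps = c / m ** 0.5                  # error tolerance ~ 1/sqrt(m_k)
        target = sum(xi_sampler(rng) for _ in range(m)) / m
        while abs(target - x) > eps:        # "solve" the sample problem to eps
            x += 0.5 * (target - x)         # damped fixed-point step, warm-started
        m = int(m * growth)                 # geometric sample-size growth
    return x

# xi ~ Normal(10, 2), so the root is x* = 10
x_star = retrospective_approximation(lambda rng: rng.gauss(10.0, 2.0))
```

The point of the paper's analysis is exactly how to balance these two sequences: solving early subproblems too accurately wastes effort, while growing the sample size too slowly stalls convergence.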
Nested Latin Hypercube Design
 Biometrika
2009
Cited by 17 (8 self)
Abstract
We propose an approach to constructing nested Latin hypercube designs. Such designs are useful for conducting multiple computer experiments with different levels of accuracy. A nested Latin hypercube design with two layers is defined to be a special Latin hypercube design that contains a smaller Latin hypercube design as a subset. Our method is easy to implement and can accommodate any number of factors. We also extend this method to construct nested Latin hypercube designs with more than two layers. Illustrative examples are given. Some statistical properties of the constructed designs are derived.
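The two-layer construction can be sketched as follows (a minimal implementation of the idea described in the abstract, not the authors' exact algorithm): with n = m·k runs, for each factor give each of the m small-design rows a level from a distinct block of k consecutive fine levels, then fill the remaining n − m rows with the leftover levels. The full design is then an n-run Latin hypercube, and the first m rows collapse (level // k) to an m-run Latin hypercube.

```python
import random

def nested_lhd(m, k, d, seed=11):
    """Two-layer nested Latin hypercube design: n = m * k runs in d factors
    with levels 0..n-1, whose first m rows form an m-run LHD after collapsing
    each level to its coarse block (level // k)."""
    rng = random.Random(seed)
    n = m * k
    cols = []
    for _ in range(d):
        blocks = list(range(m))
        rng.shuffle(blocks)                 # coarse block for each small-design row
        small = [b * k + rng.randrange(k) for b in blocks]  # one fine level per block
        used = set(small)
        rest = [lv for lv in range(n) if lv not in used]
        rng.shuffle(rest)                   # remaining rows fill the leftover levels
        cols.append(small + rest)
    return [list(row) for row in zip(*cols)]  # n rows x d factors

design = nested_lhd(m=4, k=3, d=2)  # 12-run design whose first 4 rows nest
```

The same idea extends recursively to more than two layers, which is the multi-layer construction the abstract mentions.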