Results 1–10 of 214
Stochastic Approximation Approach to Stochastic Programming
"... In this paper we consider optimization problems where the objective function is given in a form of the expectation. A basic difficulty of solving such stochastic optimization problems is that the involved multidimensional integrals (expectations) cannot be computed with high accuracy. The aim of th ..."
Abstract

Cited by 267 (20 self)
 Add to MetaCart
(Show Context)
In this paper we consider optimization problems where the objective function is given in a form of the expectation. A basic difficulty of solving such stochastic optimization problems is that the involved multidimensional integrals (expectations) cannot be computed with high accuracy. The aim of this paper is to compare two computational approaches based on Monte Carlo sampling techniques, namely, the Stochastic Approximation (SA) and the Sample Average Approximation (SAA) methods. Both approaches, the SA and SAA methods, have a long history. Current opinion is that the SAA method can efficiently use a specific (say linear) structure of the considered problem, while the SA approach is a crude subgradient method which often performs poorly in practice. We intend to demonstrate that a properly modified SA approach can be competitive and even significantly outperform the SAA method for a certain class of convex stochastic problems. We extend the analysis to the case of convex-concave stochastic saddle point problems, and present (in our opinion highly encouraging) results of numerical experiments.
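The contrast the abstract draws can be illustrated on a toy problem. The sketch below (illustrative problem and parameters, not taken from the paper) minimizes f(x) = E[(x − ξ)²] with ξ ~ N(θ, 1) both by a robust SA scheme with iterate averaging and by SAA, which for this problem reduces to the sample mean:

```python
import random

random.seed(0)
THETA = 2.0          # true optimum of f(x) = E[(x - xi)^2], xi ~ N(THETA, 1)
N = 5000             # iteration budget / sample size

# --- Stochastic Approximation: averaged stochastic subgradient steps ---
x, x_sum = 0.0, 0.0
for k in range(1, N + 1):
    xi = random.gauss(THETA, 1.0)
    g = 2.0 * (x - xi)          # stochastic gradient of (x - xi)^2
    x -= (1.0 / k ** 0.5) * g   # O(1/sqrt(k)) step size, robust-SA style
    x_sum += x
x_sa = x_sum / N                # averaged iterate

# --- Sample Average Approximation: solve the empirical problem exactly ---
sample = [random.gauss(THETA, 1.0) for _ in range(N)]
x_saa = sum(sample) / N         # minimizer of (1/N) * sum (x - xi)^2

print(x_sa, x_saa)              # both should land near THETA = 2.0
```

On this strongly convex one-dimensional problem both methods converge to the same point; the paper's comparison concerns their relative efficiency on much harder convex stochastic programs.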
Convex Approximations of Chance Constrained Programs
"... We consider a chance constrained problem, where one seeks to minimize a convex objective over solutions satisfying, with a given (close to one) probability, a system of randomly perturbed convex constraints. Our goal is to build a computationally tractable approximation of this (typically intractabl ..."
Abstract

Cited by 127 (6 self)
 Add to MetaCart
We consider a chance constrained problem, where one seeks to minimize a convex objective over solutions satisfying, with a given (close to one) probability, a system of randomly perturbed convex constraints. Our goal is to build a computationally tractable approximation of this (typically intractable) problem, i.e., an explicitly given convex optimization program with the feasible set contained in the one of the chance constrained problem. We construct a general class of such convex conservative approximations of the corresponding chance constrained problem. Moreover, under the assumptions that the constraints are affine in the perturbations and the entries in the perturbation vector are independent of each other random variables, we build a large deviations type approximation, referred to as ‘Bernstein approximation’, of the chance constrained problem. This approximation is convex, and thus efficiently solvable. We propose a simulation-based scheme for bounding the optimal value in the chance constrained problem and report numerical experiments aimed at comparing the Bernstein and well-known scenario approximation approaches. Finally, we extend our construction to the case of ambiguously chance constrained problems, where the random perturbations are independent with the collection of distributions known to belong to a given convex compact set rather than to be known exactly, while the chance constraint should be satisfied for every distribution given by this set.
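The scenario approximation the experiments compare against can be sketched on a one-dimensional toy problem (all names and parameters here are illustrative): enforce the constraint for every sampled scenario, then check the resulting violation probability out of sample.

```python
import random

random.seed(1)
EPS = 0.05                 # allowed violation probability
N_SCEN = 200               # number of sampled scenarios

# Toy chance constrained problem: minimize x subject to P(xi <= x) >= 1 - EPS,
# with xi ~ N(0, 1); the exact optimum is the (1 - EPS)-quantile of N(0, 1).
scenarios = [random.gauss(0.0, 1.0) for _ in range(N_SCEN)]

# Scenario approximation: require x >= xi for every sampled scenario,
# so the minimal feasible x is simply the sample maximum.
x_scen = max(scenarios)

# Out-of-sample estimate of the true violation probability.
holdout = [random.gauss(0.0, 1.0) for _ in range(100000)]
violation = sum(1 for xi in holdout if xi > x_scen) / len(holdout)
print(x_scen, violation)   # violation should fall well below EPS
```

As the abstract's comparison suggests, the scenario approach is conservative: the sample maximum typically sits well beyond the exact (1 − ε)-quantile.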
The Empirical Behavior of Sampling Methods for Stochastic Programming
 Annals of Operations Research, 2002
"... We investigate the quality of solutions obtained from sampleaverage approximations to twostage stochastic linear programs with recourse. We use a recently developed software tool executing on a computational grid to solve many large instances of these problems, allowing us to obtain highquality s ..."
Abstract

Cited by 115 (17 self)
 Add to MetaCart
(Show Context)
We investigate the quality of solutions obtained from sample-average approximations to two-stage stochastic linear programs with recourse. We use a recently developed software tool executing on a computational grid to solve many large instances of these problems, allowing us to obtain high-quality solutions and to verify optimality and near-optimality of the computed solutions in various ways.
A stochastic programming approach for supply chain network design under uncertainty
, 2003
"... ..."
The sample average approximation method applied to stochastic routing problems: a computational study
 Computational Optimization and Applications
"... Abstract. The sample average approximation (SAA) method is an approach for solving stochastic optimization problems by using Monte Carlo simulation. In this technique the expected objective function of the stochastic problem is approximated by a sample average estimate derived from a random sample. ..."
Abstract

Cited by 65 (7 self)
 Add to MetaCart
(Show Context)
The sample average approximation (SAA) method is an approach for solving stochastic optimization problems by using Monte Carlo simulation. In this technique the expected objective function of the stochastic problem is approximated by a sample average estimate derived from a random sample. The resulting sample average approximating problem is then solved by deterministic optimization techniques. The process is repeated with different samples to obtain candidate solutions along with statistical estimates of their optimality gaps. We present a detailed computational study of the application of the SAA method to solve three classes of stochastic routing problems. These stochastic problems involve an extremely large number of scenarios and first-stage integer variables. For each of the three problem classes, we use decomposition and branch-and-cut to solve the approximating problem within the SAA scheme. Our computational results indicate that the proposed method is successful in solving problems with up to 2^1694 scenarios to within an estimated 1.0% of optimality. Furthermore, a surprising observation is that the number of optimality cuts required to solve the approximating problem to optimality does not significantly increase with the size of the sample. Therefore, the observed computation times needed to find optimal solutions to the approximating problems grow only linearly with the sample size. As a result, we are able to find provably near-optimal solutions to these difficult stochastic programs using only a moderate amount of computation time. Keywords: stochastic optimization, stochastic programming, stochastic routing, shortest path, traveling salesman
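The replication scheme described in this abstract (solve several independent SAA problems, then bound the optimality gap statistically) can be sketched on a toy quadratic; the problem and all parameters below are illustrative, not the routing models of the paper:

```python
import random
import statistics

random.seed(2)
THETA, N, M = 1.5, 200, 10    # true optimum, sample size per replication, replications

def sample_avg_obj(x, xs):
    """Sample-average objective for f(x) = E[(x - xi)^2]."""
    return sum((x - xi) ** 2 for xi in xs) / len(xs)

# Solve M independent SAA replications; here each minimizer is the sample mean.
candidates, opt_vals = [], []
for _ in range(M):
    xs = [random.gauss(THETA, 1.0) for _ in range(N)]
    x_hat = sum(xs) / N                    # exact SAA minimizer for this toy f
    candidates.append(x_hat)
    opt_vals.append(sample_avg_obj(x_hat, xs))

# Statistical lower bound on the true optimal value: mean of the SAA optima.
lower = statistics.mean(opt_vals)

# Upper bound: evaluate one candidate on a large independent sample.
big = [random.gauss(THETA, 1.0) for _ in range(50000)]
upper = sample_avg_obj(candidates[0], big)

print(lower, upper, upper - lower)   # true optimal value is Var(xi) = 1.0
```

The gap between the two bounds is the statistical optimality-gap estimate the abstract refers to; in the paper the inner minimization is a decomposition/branch-and-cut solve rather than a closed form.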
Stochastic optimization is (almost) as easy as deterministic optimization
 in Proceedings of the 45th Annual IEEE Symposium on Foundations of Computer Science, 2004
"... Stochastic optimization problems attempt to model uncertainty in the data by assuming that (part of) the input is specified in terms of a probability distribution. We consider the wellstudied paradigm of 2stage models with recourse: first, given only distributional information about (some of) the ..."
Abstract

Cited by 62 (7 self)
 Add to MetaCart
(Show Context)
Stochastic optimization problems attempt to model uncertainty in the data by assuming that (part of) the input is specified in terms of a probability distribution. We consider the well-studied paradigm of 2-stage models with recourse: first, given only distributional information about (some of) the data one commits on initial actions, and then, once the actual data is realized (according to the distribution), further (recourse) actions can be taken. We give the first approximation algorithms for 2-stage discrete stochastic optimization problems with recourse for which the underlying random data is given by a “black box” and no restrictions are placed on the costs in the two stages, based on an FPRAS for the LP relaxation of the stochastic problem (which has exponentially many variables and constraints). Among the range of applications we consider are stochastic versions of the set cover, vertex cover, facility location, multicut (on trees), and multicommodity flow problems.
A sample approximation approach for optimization with probabilistic constraints
 IPCO 2007, Lecture Notes in Comput. Sci., 2007
"... Abstract. We study approximations of optimization problems with probabilistic constraints in which the original distribution of the underlying random vector is replaced with an empirical distribution obtained from a random sample. We show that such a sample approximation problem with risk level larg ..."
Abstract

Cited by 50 (12 self)
 Add to MetaCart
(Show Context)
We study approximations of optimization problems with probabilistic constraints in which the original distribution of the underlying random vector is replaced with an empirical distribution obtained from a random sample. We show that such a sample approximation problem with risk level larger than the required risk level will yield a lower bound to the true optimal value with probability approaching one exponentially fast. This leads to an a priori estimate of the sample size required to have high confidence that the sample approximation will yield a lower bound. We then provide conditions under which solving a sample approximation problem with a risk level smaller than the required risk level will yield feasible solutions to the original problem with high probability. Once again, we obtain a priori estimates on the sample size required to obtain high confidence that the sample approximation problem will yield a feasible solution to the original problem. Finally, we present numerical illustrations of how these results can be used to obtain feasible solutions and optimality bounds for optimization problems with probabilistic constraints.
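On a toy one-dimensional chance constraint (illustrative, not from the paper), the effect of choosing the sample risk level α above or below the required level ε reduces to picking different order statistics of the sample:

```python
import random

random.seed(3)
EPS = 0.10            # required risk level of the true chance constraint
N = 1000              # sample size

# Toy chance constraint: x is feasible iff P(xi > x) <= EPS, xi ~ N(0, 1),
# so the minimal truly feasible x is the 0.90-quantile of N(0, 1) (~1.28).
xs = sorted(random.gauss(0.0, 1.0) for _ in range(N))

def min_feasible_x(alpha):
    """Smallest x whose sample violation fraction is at most alpha."""
    k = round(alpha * N)            # allowed number of violated samples
    return xs[N - k - 1] if k < N else xs[0]

x_loose = min_feasible_x(0.15)   # alpha > EPS: optimistic, bounds the optimum from below
x_tight = min_feasible_x(0.05)   # alpha < EPS: conservative, feasible with high probability
print(x_loose, x_tight)
```

The loose approximation (α = 0.15 > ε) under-covers and thus yields a lower bound on the optimal value, while the tight one (α = 0.05 < ε) over-covers and yields a feasible solution with high probability, mirroring the two directions of the abstract's results.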
Optimal stochastic approximation algorithms for strongly convex stochastic composite optimization, I: a generic algorithmic framework.
, 2010
"... Abstract In this paper we study new stochastic approximation (SA) type algorithms, namely, the accelerated SA (ACSA), for solving strongly convex stochastic composite optimization (SCO) problems. Specifically, by introducing a domain shrinking procedure, we significantly improve the largedeviatio ..."
Abstract

Cited by 48 (9 self)
 Add to MetaCart
(Show Context)
In this paper we study new stochastic approximation (SA) type algorithms, namely, the accelerated SA (AC-SA), for solving strongly convex stochastic composite optimization (SCO) problems. Specifically, by introducing a domain shrinking procedure, we significantly improve the large-deviation results associated with the convergence rate of a nearly optimal AC-SA algorithm presented in
An Integer Programming Approach for Linear Programs with Probabilistic Constraints
, 2008
"... Linear programs with joint probabilistic constraints (PCLP) are difficult to solve because the feasible region is not convex. We consider a special case of PCLP in which only the righthand side is random and this random vector has a finite distribution. We give a mixedinteger programming formulati ..."
Abstract

Cited by 43 (8 self)
 Add to MetaCart
(Show Context)
Linear programs with joint probabilistic constraints (PCLP) are difficult to solve because the feasible region is not convex. We consider a special case of PCLP in which only the right-hand side is random and this random vector has a finite distribution. We give a mixed-integer programming formulation for this special case and study the relaxation corresponding to a single row of the probabilistic constraint. We obtain two strengthened formulations. As a by-product of this analysis, we obtain new results for the previously studied mixing set, subject to an additional knapsack inequality. We present computational results which indicate that by using our strengthened formulations, instances that are considerably larger than have been considered before can be solved to optimality.
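The binary-variable view behind such formulations can be sketched for a tiny single-row instance (illustrative data; a brute-force search over the binaries stands in for a MIP solver): z_i = 1 marks a scenario that may be violated, subject to the knapsack constraint Σ p_i z_i ≤ ε.

```python
import itertools

# Toy single-row PCLP: minimize x subject to P(x >= xi) >= 1 - EPS,
# where xi has a finite distribution given as (probability, right-hand side).
EPS = 0.10
scen = [(0.3, 1.0), (0.3, 2.0), (0.2, 3.0), (0.15, 4.0), (0.05, 5.0)]

# MIP view: binary z_i = 1 if scenario i may be violated, sum p_i z_i <= EPS,
# and x >= h_i for every scenario with z_i = 0.  Brute-force over all z.
best = None
for z in itertools.product([0, 1], repeat=len(scen)):
    if sum(p for (p, _), zi in zip(scen, z) if zi) <= EPS + 1e-12:
        x = max((h for (_, h), zi in zip(scen, z) if zi == 0), default=0.0)
        best = x if best is None else min(best, x)

print(best)  # only the 0.05-probability scenario can be dropped, so x = 4.0
```

The paper's contribution is precisely to replace this exponential enumeration with strengthened mixing-set formulations that a branch-and-cut solver can handle at scale.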