Results 1–10 of 132
Efficient Solution Algorithms for Factored MDPs
, 2003
Abstract

Cited by 174 (4 self)
This paper addresses the problem of planning under uncertainty in large Markov Decision Processes (MDPs). Factored MDPs represent a complex state space using state variables and the transition model using a dynamic Bayesian network. This representation often allows an exponential reduction in the representation size of structured MDPs, but the complexity of exact solution algorithms for such MDPs can grow exponentially in the representation size. In this paper, we present two approximate solution algorithms that exploit structure in factored MDPs. Both use an approximate value function represented as a linear combination of basis functions, where each basis function involves only a small subset of the domain variables. A key contribution of this paper is that it shows how the basic operations of both algorithms can be performed efficiently in closed form, by exploiting both additive and context-specific structure in a factored MDP. A central element of our algorithms is a novel linear program decomposition technique, analogous to variable elimination in Bayesian networks, which reduces an exponentially large LP to a provably equivalent, polynomially sized one. One algorithm uses approximate linear programming, and the second uses approximate dynamic programming. Our dynamic programming algorithm is novel in that it uses an approximation based on max-norm, a technique that more directly minimizes the terms that appear in error bounds for approximate MDP algorithms. We provide experimental results on problems with over 10^40 states, demonstrating a promising indication of the scalability of our approach, and compare our algorithm to an existing state-of-the-art approach, showing, in some problems, exponential gains in computation time.
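A minimal sketch of the factored representation the abstract describes: a DBN transition model stores one small conditional table per state variable instead of one joint table over all states, which is the source of the exponential reduction in representation size. The 6-variable domain, parent sets, and probabilities below are hypothetical illustration, not from the paper.

```python
# Factored (DBN) transition model for n binary state variables: each
# variable's next value depends only on a small parent set, so the model
# is a product of small conditional tables rather than one joint table.
from itertools import product

n = 6                                               # binary variables X1..X6 (hypothetical)
parents = {i: [i, (i + 1) % n] for i in range(n)}   # each variable has 2 parents

# One conditional probability table per variable: P(X_i' = 1 | parent values).
# (Arbitrary illustrative numbers.)
cpt = {i: {vals: 0.2 + 0.6 * (sum(vals) / len(vals))
           for vals in product((0, 1), repeat=len(parents[i]))}
       for i in range(n)}

def transition_prob(s, s_next):
    """P(s' | s) as a product of per-variable conditionals (DBN factorization)."""
    p = 1.0
    for i in range(n):
        pa = tuple(s[j] for j in parents[i])
        p_one = cpt[i][pa]
        p *= p_one if s_next[i] == 1 else 1.0 - p_one
    return p

flat_params = (2 ** n) * (2 ** n)                    # full joint transition table
factored_params = sum(2 ** len(parents[i]) for i in range(n))
print(flat_params, factored_params)                  # 4096 vs 24 parameters
```

The gap widens exponentially with n, while the factored count grows only with the number of variables times the size of their parent-set tables.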
Convex approximations of chance constrained programs
 SIAM Journal on Optimization
, 2006
Abstract

Cited by 120 (9 self)
Abstract. We consider a chance constrained problem, where one seeks to minimize a convex objective over solutions satisfying, with a given probability close to one, a system of randomly perturbed convex constraints. This problem may happen to be computationally intractable; our goal is to build a computationally tractable approximation of it, i.e., an efficiently solvable deterministic optimization program whose feasible set is contained in that of the chance constrained problem. We construct a general class of such convex conservative approximations of the corresponding chance constrained problem. Moreover, under the assumptions that the constraints are affine in the perturbations and the entries in the perturbation vector are mutually independent random variables, we build a large-deviation-type approximation, referred to as “Bernstein approximation,” of the chance constrained problem. This approximation is convex and efficiently solvable. We propose a simulation-based scheme for bounding the optimal value in the chance constrained problem and report numerical experiments aimed at comparing the Bernstein and well-known scenario approximation approaches. Finally, we extend our construction to the case of ambiguous chance constrained problems, where the random perturbations are independent, with the collection of their distributions known only to belong to a given convex compact set rather than to be known exactly, while the chance constraint should be satisfied for every distribution from this set.
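A minimal sketch of the large-deviation idea behind such approximations: for an affine constraint with independent sign perturbations, a Chernoff/Bernstein-style bound, min over l of exp(-l*t) * prod_i E[exp(l*xi_i*a_i)], conservatively upper-bounds the violation probability and is cheap to evaluate. The coefficients and threshold below are toy numbers, not from the paper.

```python
# Compare an analytic large-deviation bound on P(sum_i xi_i*a_i >= t),
# with xi_i independent and uniform on {-1, +1}, against a Monte Carlo
# estimate of the same probability.
import math
import random

a = [0.5, 0.8, 0.3, 0.9, 0.4]    # constraint coefficients (hypothetical)
t = 2.0                          # right-hand side

def chernoff_bound(a, t):
    """min_l exp(-l*t) * prod_i E[exp(l*xi_i*a_i)]; E[exp(l*xi*a)] = cosh(l*a)."""
    best = 1.0
    for k in range(1, 500):                  # coarse grid over the multiplier l
        l = 0.01 * k
        log_mgf = sum(math.log(math.cosh(l * ai)) for ai in a)
        best = min(best, math.exp(log_mgf - l * t))
    return best

def violation_freq(a, t, n=200_000, seed=0):
    rng = random.Random(seed)
    hits = sum(sum(rng.choice((-1, 1)) * ai for ai in a) >= t for _ in range(n))
    return hits / n

bound = chernoff_bound(a, t)
freq = violation_freq(a, t)
print(bound, freq)    # the analytic bound conservatively dominates the frequency
```

The bound is conservative (here roughly 0.3 against a true probability near 0.09), which is exactly the trade made to obtain a deterministic, efficiently checkable condition.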
Generalizing plans to new environments in relational MDPs
 In International Joint Conference on Artificial Intelligence (IJCAI-03)
, 2003
Abstract

Cited by 111 (2 self)
A long-standing goal in planning research is the ability to generalize plans developed for some set of environments to a new but similar environment, with minimal or no replanning. Such generalization can both reduce planning time and allow us to tackle larger domains than the ones tractable for direct planning. In this paper, we present an approach to the generalization problem based on a new framework of relational Markov Decision Processes (RMDPs). An RMDP can model a set of similar environments by representing objects as instances of different classes. In order to generalize plans to multiple environments, we define an approximate value function specified in terms of classes of objects and, in a multiagent setting, by classes of agents. This class-based approximate value function is optimized relative to a sampled subset of environments, and computed using an efficient linear programming method. We prove that a polynomial number of sampled environments suffices to achieve performance close to the performance achievable when optimizing over the entire space. Our experimental results show that our method generalizes plans successfully to new, significantly larger, environments, with minimal loss of performance relative to environment-specific planning. We demonstrate our approach on a real strategic computer war game.
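A minimal sketch of the class-based value function idea: every object contributes to the value through a shared per-class function of its local state, so one set of class-level values applies unchanged to worlds of any size. The classes, states, and numbers below are hypothetical, not taken from the paper's domain.

```python
# Class-based value function: V(world) = sum over objects of a per-class
# value of that object's local state.  The same class-level table
# generalizes to environments with any number of objects.
class_values = {                              # illustrative numbers
    "footman": {"healthy": 2.0, "wounded": 0.5},
    "archer":  {"healthy": 3.0, "wounded": 1.0},
}

def class_based_value(world):
    """world: list of (class_name, local_state) pairs."""
    return sum(class_values[cls][st] for cls, st in world)

small_world = [("footman", "healthy"), ("archer", "wounded")]
large_world = small_world * 10                # a 10x larger world, same classes
print(class_based_value(small_world), class_based_value(large_world))
```

Because the parameters live at the class level, nothing about the table needs replanning when objects are added; only the summation over object instances changes.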
Uncertain convex programs: Randomized solutions and confidence levels
 MATH. PROGRAM., SER. A (2004)
, 2004
Abstract

Cited by 110 (12 self)
Many engineering problems can be cast as optimization problems subject to convex constraints that are parameterized by an uncertainty or ‘instance’ parameter. Two main approaches are generally available to tackle constrained optimization problems in the presence of uncertainty: robust optimization and chance-constrained optimization. Robust optimization is a deterministic paradigm where one seeks a solution which simultaneously satisfies all possible constraint instances. In chance-constrained optimization a probability distribution is instead assumed on the uncertain parameters, and the constraints are enforced up to a pre-specified level of probability. Unfortunately, both approaches lead to computationally intractable problem formulations. In this paper, we consider an alternative ‘randomized’ or ‘scenario’ approach for dealing with uncertainty in optimization, based on constraint sampling. In particular, we study the constrained optimization problem that results from taking into account only a finite set of N constraints, chosen at random among the possible constraint instances of the uncertain problem. We show that the resulting randomized solution fails to satisfy only a small portion of the original constraints, provided that a sufficient number of samples is drawn. Our key result is an efficient and explicit bound on the measure (probability or volume) of the original constraints that are possibly violated by the randomized solution. This volume rapidly decreases to zero as N is increased.
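A minimal one-dimensional sketch of the scenario approach (a toy, not from the paper): minimize x subject to the random constraints x >= xi, but keep only N sampled constraints. The sampled optimum is simply the maximum of the N samples, and its violation probability P(xi > x_N) on fresh instances shrinks as N grows.

```python
# Scenario (sampled-constraint) approximation of a 1-D uncertain program:
# minimize x subject to x >= xi for N random draws xi ~ N(0, 1).
import random

rng = random.Random(1)

def scenario_solution(n_samples):
    """Optimum of the sampled program: the largest sampled constraint."""
    return max(rng.gauss(0.0, 1.0) for _ in range(n_samples))

def violation_prob(x, n_test=100_000):
    """Empirical probability that a fresh constraint instance is violated."""
    test = random.Random(2)
    return sum(test.gauss(0.0, 1.0) > x for _ in range(n_test)) / n_test

few = violation_prob(scenario_solution(10))
many = violation_prob(scenario_solution(1000))
print(few, many)    # violation typically shrinks as more constraints are sampled
```

This mirrors the paper's message in miniature: the randomized solution violates only a small measure of the original constraint set, and that measure decays as N increases.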
Second Order Cone Programming Approaches for Handling Missing and Uncertain Data
 JOURNAL OF MACHINE LEARNING RESEARCH
, 2006
Abstract

Cited by 54 (9 self)
We propose a novel second order cone programming formulation for designing robust classifiers which can handle uncertainty in observations. Similar formulations are also derived for designing regression functions which are robust to uncertainties in the regression setting. The proposed formulations are independent of the underlying distribution, requiring only the existence of second order moments. These formulations are then specialized to the case of missing values in observations for both classification and regression problems. Experiments show that the proposed formulations outperform imputation.
Distributionally Robust Optimization under Moment Uncertainty with Application to Data-Driven Problems
Abstract

Cited by 53 (4 self)
Stochastic programs can effectively describe the decision-making problem in an uncertain environment. Unfortunately, such programs are often computationally demanding to solve. In addition, their solutions can be misleading when there is ambiguity in the choice of a distribution for the random parameters. In this paper, we propose a model describing one’s uncertainty in both the distribution’s form (discrete, Gaussian, exponential, etc.) and moments (mean and covariance). We demonstrate that for a wide range of cost functions the associated distributionally robust stochastic program can be solved efficiently. Furthermore, by deriving new confidence regions for the mean and covariance of a random vector, we provide probabilistic arguments for using our model in problems that rely heavily on historical data. This is confirmed in a practical example of portfolio selection, where our framework leads to better performing policies on the “true” distribution underlying the daily return of assets.
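A minimal one-dimensional sketch of a moment-based distributionally robust bound (the Cantelli inequality, which is tight over all distributions sharing the given mean and variance): sup over {E X = mu, Var X = s2} of P(X >= t) equals s2 / (s2 + (t - mu)^2) for t > mu. The "historical" data and threshold below are simulated illustration, not from the paper.

```python
# Estimate first and second moments from historical samples, then compute
# the worst-case tail probability over every distribution with those
# moments (1-D Cantelli bound), alongside the empirical tail frequency.
import random
import statistics

rng = random.Random(3)
data = [rng.gauss(1.0, 2.0) for _ in range(5000)]   # simulated "historical" data

mu = statistics.fmean(data)
s2 = statistics.variance(data)

def worst_case_tail(t):
    """sup_{E X = mu, Var X = s2} P(X >= t), tight for t > mu."""
    return s2 / (s2 + (t - mu) ** 2) if t > mu else 1.0

t = 5.0
robust = worst_case_tail(t)
empirical = sum(x >= t for x in data) / len(data)
print(robust, empirical)    # the robust bound holds for every such distribution
```

The gap between the robust value and the empirical frequency is the price of guarding against every distribution in the moment-based ambiguity set, the same trade the paper manages with richer (form plus moment) ambiguity sets.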
Linear program approximations for factored continuous-state Markov decision processes
, 2004
Abstract

Cited by 41 (12 self)
Approximate linear programming (ALP) has recently emerged as one of the most promising methods for solving complex factored MDPs with finite state spaces. In this work we show that ALP solutions are not limited to MDPs with finite state spaces, but that they can also be applied successfully to factored continuous-state MDPs (CMDPs). We show how one can build an ALP-based approximation for such a model and contrast it to existing solution methods. We argue that this approach offers a robust alternative for solving high-dimensional continuous-state space problems. The point is supported by experiments on three CMDP problems with 24–25 continuous state factors.
Ambiguous Chance Constrained Problems And Robust Optimization
 Mathematical Programming
, 2004
Abstract

Cited by 41 (1 self)
In this paper we study ambiguous chance constrained problems where the distributions of the random parameters in the problem are themselves uncertain. We primarily focus on the special case where the uncertainty set Q of the distributions is of the form Q = {Q : ρ_p(Q, Q_0) ≤ β}, where ρ_p denotes the Prohorov metric. The ambiguous chance constrained problem is approximated by a robust sampled problem where each constraint is a robust constraint centered at a sample drawn according to the central measure Q_0. The main contribution of this paper is to show that the robust sampled problem is a good approximation for the ambiguous chance constrained problem with high probability. This result is established using the Strassen–Dudley Representation Theorem, which states that when the distributions of two random variables are close in the Prohorov metric one can construct a coupling of the random variables such that the samples are close with high probability. We also show that the robust sampled problem can be solved efficiently both in theory and in practice.
Solving Factored MDPs with Continuous and Discrete Variables
 IN PROCEEDINGS OF THE 20TH CONFERENCE ON UNCERTAINTY IN ARTIFICIAL INTELLIGENCE
, 2004
Abstract

Cited by 37 (8 self)
Although many real-world stochastic planning problems are more naturally formulated by hybrid models with both discrete and continuous variables, current state-of-the-art methods cannot adequately address these problems. We present the first framework that can exploit problem structure for modeling and solving hybrid problems efficiently. We formulate these problems as hybrid Markov decision processes (MDPs with continuous and discrete state and action variables), which we assume can be represented in a factored way using a hybrid dynamic Bayesian network (hybrid DBN). This formulation also allows us to apply our methods to collaborative multiagent settings. We present a new linear program approximation method that exploits the structure of the hybrid MDP and lets us compute approximate value functions more efficiently. In particular, we describe a new factored discretization of continuous variables that avoids the exponential blow-up of traditional approaches. We provide theoretical bounds on the quality of such an approximation and on its scale-up potential. We support our theoretical arguments with experiments on a set of control problems with up to 28-dimensional continuous state space and 22-dimensional action space.