Comparison between the mean-variance optimal and the mean-quadratic-variation optimal trading strategies
, 2011
Abstract

Cited by 7 (0 self)
Abstract We compare optimal liquidation policies in continuous time in the presence of trading impact using numerical solutions of Hamilton-Jacobi-Bellman (HJB) partial differential equations (PDEs). In particular, we compare the path-dependent, time-consistent mean-quadratic-variation strategy with the path-independent, time-inconsistent (pre-commitment) mean-variance strategy. We show that the two different risk measures lead to very different strategies and liquidation profiles. In terms of the optimal trading velocities, the mean-quadratic-variation strategy is much less sensitive to changes in asset price and varies more smoothly. In terms of the liquidation profiles, the mean-variance strategy is much more variable, although the mean liquidation profiles for the two strategies are surprisingly similar. On a numerical note, we show that using an interpolation scheme along a parametric curve in conjunction with the semi-Lagrangian method results in significantly better accuracy than standard axis-aligned linear interpolation. We also demonstrate how a scaled computational grid can improve solution accuracy.
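The pre-commitment mean-variance objective can be illustrated even without a PDE solver. As a hedged sketch (this is the classical closed-form Almgren-Chriss liquidation trajectory, not the paper's HJB-PDE method, and the parameter values below are made up), the optimal schedule is x(t) = X sinh(κ(T − t))/sinh(κT), where κ aggregates risk aversion, volatility, and temporary impact; κ → 0 recovers a linear (TWAP) schedule:

```python
import numpy as np

def mv_trajectory(X, T, kappa, n=100):
    """Shares remaining over time for the mean-variance optimal schedule.

    Closed-form Almgren-Chriss solution; kappa = 0 is the risk-neutral
    (linear, TWAP) limit, larger kappa front-loads the liquidation.
    """
    t = np.linspace(0.0, T, n)
    if kappa == 0.0:
        return t, X * (1.0 - t / T)
    return t, X * np.sinh(kappa * (T - t)) / np.sinh(kappa * T)

# Illustrative parameters (hypothetical): sell 1M shares over one period
t, x = mv_trajectory(X=1e6, T=1.0, kappa=2.0)
```

Comparing the curves for several values of κ shows the trade-off the abstract describes: a more risk-averse (larger κ) schedule sells faster early to reduce price risk, at the cost of higher impact.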
Continuous time mean-variance asset allocation: a time-consistent strategy. Working paper
, 2009
Abstract

Cited by 5 (2 self)
We develop a numerical scheme for determining the optimal asset allocation strategy for time-consistent, continuous-time, mean-variance optimization. Any type of constraint can be applied to the investment policy. The optimal policies for time-consistent and pre-commitment strategies are compared. When realistic constraints are applied, the efficient frontiers for the pre-commitment and time-consistent strategies are similar, but the optimal investment strategies are quite different.
Reachability probabilities in Markovian timed automata
, 2011
Abstract

Cited by 1 (0 self)
Abstract — We propose a novel stochastic extension of timed automata, i.e., Markovian Timed Automata. We study the problem of optimizing time-bounded reachability probabilities in this model, i.e., the maximum likelihood to hit a set of goal locations within a given deadline. We propose Bellman equations to characterize the probability, and provide two approaches to solve the Bellman equations, namely, a discretization and a reduction to Hamilton-Jacobi-Bellman equations.
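The discretization route the abstract mentions can be illustrated on a toy continuous-time MDP (a hedged sketch with made-up states and rates, not the paper's Markovian-timed-automata construction): split the deadline into steps of length h, approximate an exponential transition of rate r as firing with probability ≈ rh per step, and run a backward Bellman recursion that maximizes over actions at each step:

```python
# Hypothetical 3-state model: state 2 is the goal; from state 0 we can
# either jump directly to the goal at rate 0.5 or detour via state 1.
GOAL = {2}
ACTIONS = {
    0: [(1.0, 1), (0.5, 2)],  # (rate, successor) per available action
    1: [(2.0, 2)],
}

def max_reach_prob(T, h=1e-3):
    """Max probability of hitting GOAL within deadline T (discretized)."""
    n = int(round(T / h))
    p = {s: (1.0 if s in GOAL else 0.0) for s in (0, 1, 2)}
    for _ in range(n):
        new = dict(p)
        for s, choices in ACTIONS.items():
            # Bellman backup: in one step of length h, either the chosen
            # transition fires (prob ~ r*h) or we stay put.
            new[s] = max(r * h * p[t] + (1.0 - r * h) * p[s]
                         for r, t in choices)
        p = new
    return p[0]

p0 = max_reach_prob(T=2.0)
```

Note the backup naturally yields a time-dependent policy: near the deadline the slow direct jump beats the two-hop detour, and far from it the detour wins.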
Finite difference approximation for stochastic optimal stopping problems with delays
 Journal of Industrial and Management Optimization
, 2006
Abstract

Cited by 1 (1 self)
This paper considers the computational issue of the optimal stopping problem for the stochastic functional differential equation treated in [4]. The finite difference method developed by Barles and Souganidis [2] is used to obtain a numerical approximation for the viscosity solution of the infinite-dimensional Hamilton-Jacobi-Bellman variational inequality (HJBVI) associated with the optimal stopping problem. The convergence results are then established.
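A finite-dimensional analogue of such a scheme is easy to sketch (hedged: the paper's problem is infinite-dimensional because of the delay; this example drops the delay and prices a standard American put under Black-Scholes with illustrative parameters). An explicit monotone finite-difference step followed by projection onto the payoff enforces the variational inequality V ≥ payoff at every time level:

```python
import numpy as np

def american_put(K=1.0, r=0.05, sigma=0.2, T=1.0, s_max=3.0, M=300, N=20000):
    """Explicit monotone FD scheme for the American-put HJB variational
    inequality: step the discrete Black-Scholes operator backward in time,
    then take the max with the exercise payoff (obstacle constraint)."""
    ds = s_max / M
    dt = T / N          # small enough for the explicit CFL condition here
    s = np.linspace(0.0, s_max, M + 1)
    payoff = np.maximum(K - s, 0.0)
    v = payoff.copy()
    for _ in range(N):
        vs = (v[2:] - v[:-2]) / (2 * ds)                   # first derivative
        vss = (v[2:] - 2 * v[1:-1] + v[:-2]) / ds**2       # second derivative
        cont = v[1:-1] + dt * (0.5 * sigma**2 * s[1:-1]**2 * vss
                               + r * s[1:-1] * vs - r * v[1:-1])
        v[1:-1] = np.maximum(cont, payoff[1:-1])           # V >= payoff
        v[0] = K                                           # S = 0 boundary
        v[-1] = 0.0                                        # deep OTM boundary
    return s, v

s, v = american_put()
```

The projection step is the discrete counterpart of max(LV, payoff − V) = 0; convergence of such monotone, stable, consistent schemes to the viscosity solution is exactly the Barles-Souganidis result cited in the abstract.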
Stochastic Optimal Control
, 2014
Comparison of Mean-Variance-Like Strategies for Optimal Asset Allocation Problems
, 2010
Abstract
We determine the optimal dynamic investment policy for a mean-quadratic-variation objective function by numerical solution of a nonlinear Hamilton-Jacobi-Bellman (HJB) partial differential equation (PDE). We compare the efficient frontiers and optimal investment policies for three mean-variance-like strategies: pre-commitment mean-variance, time-consistent mean-variance, and mean-quadratic-variation, assuming realistic investment constraints (e.g. no bankruptcy, finite shorting, borrowing). When the investment policy is constrained, the efficient frontiers for all three objective functions are similar, but the optimal policies are quite different.
Numerical Solutions to Optimal Power-Flow-Constrained Vibratory Energy Harvesting Problems
Abstract
Abstract — This study addresses the formulation of optimal numerical controllers for stochastically excited vibratory energy harvesters in which a single-directional power electronic converter is used to regulate power flow. Single-directional converters have implementation advantages for small-scale applications, but restrict the domain of feasible controllers. Optimizing the average power generated in such systems can be accomplished by formulating the constrained control problem in terms of stochastic Hamilton-Jacobi theory. However, solving the stochastic Hamilton-Jacobi equation (HJE) is challenging because it is a nonlinear partial differential equation. As such, we investigate the capability of the pseudospectral (PS) method to solve the HJE with mixed state-control constraints. The performance of the PS controller is computed for a single-degree-of-freedom resonant oscillator with electromagnetic coupling. We compare the PS performance to the performance of the optimal static admittance controller as well as the optimal unconstrained linear-quadratic-Gaussian controller. Index Terms — Energy harvesting, optimal control, constrained control systems, stochastic systems.
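The core ingredient of a pseudospectral method like the one the abstract investigates is a spectral differentiation matrix (a hedged sketch of the standard Chebyshev construction from Trefethen's "Spectral Methods in MATLAB", not the paper's specific HJE discretization): derivatives at the collocation points are obtained by a single matrix-vector product, with spectral accuracy for smooth functions:

```python
import numpy as np

def cheb(n):
    """Chebyshev collocation points x and differentiation matrix D
    of size (n+1, n+1), so that D @ f(x) approximates f'(x) on [-1, 1]."""
    if n == 0:
        return np.array([1.0]), np.zeros((1, 1))
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))   # "negative-sum trick" for the diagonal
    return x, D

x, D = cheb(16)
err = np.max(np.abs(D @ np.exp(x) - np.exp(x)))  # d/dx exp = exp
```

With only 17 points the derivative of a smooth function is accurate to near machine precision, which is why PS collocation is attractive for nonlinear PDEs such as the HJE.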
Invariantly Admissible Policy Iteration for a Class of Nonlinear Optimal Control Problems
, 2013
A Martingale Approach and Time-Consistent Sampling-Based Algorithms for Risk Management in Stochastic Optimal Control
Abstract
Abstract—In this paper, we consider a class of stochastic optimal control problems with risk constraints that are expressed as bounded probabilities of failure for particular initial states. We present here a martingale approach that diffuses a risk constraint into a martingale to construct time-consistent control policies. The martingale stands for the level of risk tolerance over time. By augmenting the system dynamics with the controlled martingale, the original risk-constrained problem is transformed into a stochastic target problem. We extend the incremental Markov Decision Process (iMDP) algorithm to approximate arbitrarily well an optimal feedback policy of the original problem by sampling in the augmented state space and computing proper boundary conditions for the reformulated problem. We show that the algorithm is both probabilistically sound and asymptotically optimal. The performance of the proposed algorithm is demonstrated on motion planning and control problems subject to bounded probability of collision in uncertain cluttered environments.