Results 11–20 of 70
Nondeterministic labeled Markov processes: Bisimulations and logical characterizations
In Proc. of the 6th Int. Conf. on the Quantitative Evaluation of Systems (QEST 2009), 2009
Abstract

Cited by 6 (3 self)
We extend the theory of labeled Markov processes with internal nondeterminism, a fundamental concept for the further development of a process theory with abstraction on nondeterministic continuous probabilistic systems. We define nondeterministic labeled Markov processes (NLMP) and provide both a state-based bisimulation and an event-based bisimulation. We show the relation between them, including that the largest state bisimulation is also an event bisimulation. We also introduce a variation of the Hennessy-Milner logic that characterizes event bisimulation and that is sound w.r.t. the state-based bisimulation for arbitrary NLMP. This logic, however, is infinitary, as it contains a denumerable ∨. We then introduce a finitary sublogic that characterizes both state and event bisimulation for image-finite NLMP whose underlying measure space is also analytic. Hence, in this setting, all the notions of bisimulation we deal with turn out to be equal.
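For the much simpler finite discrete case, the state-based bisimulation described above can be illustrated by partition refinement. The toy model below (states, labels, and successor distributions are all invented for illustration, not taken from the paper) has sets of successor distributions per label to reflect internal nondeterminism, and groups states that assign the same probabilities to the same equivalence classes:

```python
# Hypothetical finite NLMP: each state maps a label to a *set* of
# distributions (internal nondeterminism); a distribution is a dict
# state -> probability. States "s" and "v" should be bisimilar here.
nlmp = {
    "s": {"a": [{"t": 0.5, "u": 0.5}]},
    "v": {"a": [{"t": 0.5, "u": 0.5}]},
    "t": {},
    "u": {},
}

def mass_on_blocks(dist, partition):
    # Signature of a distribution: total mass it assigns to each block.
    return tuple(sum(p for s, p in dist.items() if s in block)
                 for block in partition)

def bisimulation_blocks(nlmp):
    # Partition refinement: split blocks until all members of a block offer,
    # per label, the same set of block-level distributions.
    partition = [frozenset(nlmp)]
    while True:
        def signature(s):
            return tuple(sorted(
                (lab, frozenset(mass_on_blocks(d, partition) for d in dists))
                for lab, dists in nlmp[s].items()))
        new = []
        for block in partition:
            groups = {}
            for s in block:
                groups.setdefault(signature(s), set()).add(s)
            new.extend(frozenset(g) for g in groups.values())
        if len(new) == len(partition):
            return new
        partition = new
```

On the toy model, the refinement stabilizes after one split, separating the states with an `a`-transition from the two deadlocked ones.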
Capturing Continuous Data and Answering Aggregate Queries in Probabilistic XML
Abstract

Cited by 5 (1 self)
Sources of data uncertainty and imprecision are numerous. A way to handle this uncertainty is to associate probabilistic annotations to data. Many such probabilistic database models have been proposed, both in the relational and in the semistructured setting. The latter is particularly well adapted to the management of uncertain data coming from a variety of automatic processes. An important problem, in the context of probabilistic XML databases, is that of answering aggregate queries (count, sum, avg, etc.), which has received limited attention so far. In a model unifying the various (discrete) semistructured probabilistic models studied up to now, we present algorithms to compute the distribution of the aggregation values (exploiting some regularity properties of the aggregate functions) and probabilistic moments (especially, expectation and variance) of this distribution. We also prove the intractability of some of these problems and investigate approximation techniques. We finally extend the discrete model to a continuous one, in order to take into account continuous data values, such as measurements from sensor networks, and extend our algorithms and complexity results to the continuous case.
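A minimal sketch of computing the distribution of one aggregate, count, under a drastically simplified model (independent optional elements with known presence probabilities — not the paper's unified model): the count is a sum of independent Bernoulli variables, so its distribution is a convolution.

```python
def count_distribution(presence_probs):
    # dist[k] = P(count = k); start with the empty document, then fold in
    # each optional element as a Bernoulli convolution step.
    dist = [1.0]
    for p in presence_probs:
        new = [0.0] * (len(dist) + 1)
        for k, q in enumerate(dist):
            new[k] += q * (1 - p)    # element absent
            new[k + 1] += q * p      # element present
        dist = new
    return dist

def expectation(dist):
    # First probabilistic moment of the aggregate's distribution.
    return sum(k * p for k, p in enumerate(dist))
```

By linearity, the expectation always equals the sum of the presence probabilities, even when the elements are correlated; the full distribution, by contrast, relies on the independence assumption above.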
Generalized Decision Rule Approximations for Stochastic Programming via Liftings
, 2013
Abstract

Cited by 5 (3 self)
Stochastic programming provides a versatile framework for decision-making under uncertainty, but the resulting optimization problems can be computationally demanding. It has recently been shown that primal and dual linear decision rule approximations can yield tractable upper and lower bounds on the optimal value of a stochastic program. Unfortunately, linear decision rules often provide crude approximations that result in loose bounds. To address this problem, we propose a lifting technique that maps a given stochastic program to an equivalent problem on a higher-dimensional probability space. We prove that solving the lifted problem in primal and dual linear decision rules provides tighter bounds than those obtained from applying linear decision rules to the original problem. We also show that there is a one-to-one correspondence between linear decision rules in the lifted problem and families of nonlinear decision rules in the original problem. Finally, we identify structured liftings that give rise to highly flexible piecewise linear and nonlinear decision rules, and we assess their performance in the context of a dynamic production planning problem.
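A sketch of the piecewise linear case for a single scalar uncertainty (breakpoints and function names invented for illustration): the lifting maps ξ to per-segment coordinates, so a linear rule in the lifted coordinates is piecewise linear in ξ.

```python
def lift(xi, breakpoints):
    # Lifting operator for a scalar uncertainty: component i records how much
    # of segment [breakpoints[i], breakpoints[i+1]] lies below xi, so the
    # components always sum back to xi - breakpoints[0].
    coords, lo = [], breakpoints[0]
    for hi in breakpoints[1:]:
        coords.append(min(max(xi - lo, 0.0), hi - lo))
        lo = hi
    return coords

def lifted_rule(xi, weights, intercept, breakpoints):
    # A *linear* decision rule in the lifted coordinates is a *piecewise
    # linear* rule in the original uncertainty, with kinks at the breakpoints.
    return intercept + sum(w * z for w, z in zip(weights, lift(xi, breakpoints)))
```

With equal weights the rule collapses back to a linear one; distinct weights create a kink at each interior breakpoint, which is the extra flexibility the abstract refers to.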
Robust Markov Decision Processes
, 2012
Abstract

Cited by 5 (2 self)
Markov decision processes (MDPs) are powerful tools for decision making in uncertain dynamic environments. However, the solutions of MDPs are of limited practical use due to their sensitivity to distributional model parameters, which are typically unknown and have to be estimated by the decision maker. To counter the detrimental effects of estimation errors, we consider robust MDPs that offer probabilistic guarantees in view of the unknown parameters. To this end, we assume that an observation history of the MDP is available. Based on this history, we derive a confidence region that contains the unknown parameters with a prespecified probability 1 − β. Afterwards, we determine a policy that attains the highest worst-case performance over this confidence region. By construction, this policy achieves or exceeds its worst-case performance with a confidence of at least 1 − β. Our method involves the solution of tractable conic programs of moderate size.
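The worst-case evaluation step can be sketched for one state-action pair under an interval (box) confidence region — a simplification of the paper's conic formulation, with all numbers invented: the adversary shifts transition mass, within per-state bounds, toward the worst successor values.

```python
def worst_case_expectation(values, p_low, p_high):
    # Inner problem of a robust Bellman backup under interval uncertainty:
    # choose transition probabilities in [p_low, p_high] (summing to 1) that
    # minimize the expected successor value. Greedy solution: start at the
    # lower bounds, then pour the leftover mass into the worst-valued
    # successor states first.
    p = list(p_low)
    budget = 1.0 - sum(p)
    for i in sorted(range(len(values)), key=lambda i: values[i]):
        add = min(p_high[i] - p[i], budget)
        p[i] += add
        budget -= add
    return sum(pi * v for pi, v in zip(p, values))
```

For successor values (0, 10) with each probability confined to [0.2, 0.7], the adversary puts as much mass as allowed (0.7) on the zero-valued state, giving a worst-case expectation of 3.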
Stability of Random Attractors Under Perturbation and Approximation
, 2001
Abstract

Cited by 5 (0 self)
The comparison of the long-time behaviour of dynamical systems and their numerical approximations is not straightforward since in general such methods only converge on bounded time intervals. However, one can still compare their asymptotic behaviour using the global attractor, and this is now standard in the deterministic case. In random dynamical systems there is an additional problem, since the convergence of numerical methods for such systems is usually given only on average. In this paper the deterministic approach is extended to cover stochastic differential equations, giving necessary and sufficient conditions for the random attractor arising from a random dynamical system to be upper semicontinuous with respect to a given family of perturbations or approximations.
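An invented toy example (not from the paper) of the kind of object involved: an Euler–Maruyama discretization of the linear SDE dx = −x dt + σ dW. Two trajectories driven by the same noise path contract toward the same random point, a one-point random attractor of the approximating system.

```python
import random

def euler_maruyama(x0, drift, sigma, dt, noise):
    # Euler-Maruyama approximation of dx = drift(x) dt + sigma dW, driven by
    # a pre-drawn noise path so several trajectories can share one noise.
    x = x0
    for dw in noise:
        x = x + drift(x) * dt + sigma * dw
    return x

random.seed(0)
dt = 0.01
noise = [random.gauss(0.0, dt ** 0.5) for _ in range(2000)]
# Same noise, different initial conditions: the gap shrinks by (1 - dt)
# per step, so the trajectories synchronize onto one random point.
a = euler_maruyama(5.0, lambda x: -x, 0.3, dt, noise)
b = euler_maruyama(-5.0, lambda x: -x, 0.3, dt, noise)
```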
The OS* algorithm: a joint approach to exact optimization and sampling
, 2012
Abstract

Cited by 4 (1 self)
Most current sampling algorithms for high-dimensional distributions are based on MCMC techniques and are approximate in the sense that they are valid only asymptotically. Rejection sampling, on the other hand, produces valid samples, but is unrealistically slow in high-dimensional spaces. The OS* algorithm that we propose is a unified approach to exact optimization and sampling, based on incremental refinements of a functional upper bound, which combines ideas of adaptive rejection sampling and of A* optimization search. We show that the choice of the refinement can be done in a way that ensures tractability in high-dimensional spaces, and we present first experiments in two different settings: inference in high-order HMMs and in large discrete graphical models. (Work conducted during an internship at XRCE.)
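The refinement loop can be sketched on a finite domain (the target, bound, and refinement rule below are invented toy choices, not the paper's functional bounds): sample under an upper bound q ≥ p, and on every rejection tighten q where it was loose, so the acceptance rate climbs toward 1 while every accepted draw remains an exact sample.

```python
import random

def osstar_sample(p, q, refine, rng):
    # Rejection sampling under an upper bound q (one bound value per element
    # of a finite domain, with q[x] >= p(x)). A rejection triggers a
    # refinement that tightens the bound, in the spirit of OS*.
    while True:
        total = sum(q.values())
        r = rng.random() * total
        for x, qx in q.items():       # draw x with probability q[x] / total
            r -= qx
            if r <= 0:
                break
        if rng.random() * q[x] <= p(x):
            return x                  # accepted: an exact sample from p
        refine(q, x, p)               # rejected: tighten the bound at x

def refine(q, x, p):
    q[x] = p(x)                       # simplest refinement: make q tight at x

rng = random.Random(1)
p = lambda x: {"a": 3.0, "b": 1.0}[x]   # unnormalized target
q = {"a": 5.0, "b": 5.0}                # initial loose upper bound
samples = [osstar_sample(p, q, refine, rng) for _ in range(4000)]
```

Because rejection sampling is exact at every stage, samples drawn before and after refinement all follow p; refinement only changes how often proposals are wasted.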
Efficient multistart strategies for local search algorithms
, 2009
Abstract

Cited by 4 (0 self)
Local search algorithms applied to optimization problems often suffer from getting trapped in a local optimum. The common solution for this deficiency is to restart the algorithm when no progress is observed. Alternatively, one can start multiple instances of a local search algorithm, and allocate computational resources (in particular, processing time) to the instances depending on their behavior. Hence, a multistart strategy has to decide (dynamically) when to allocate additional resources to a particular instance and when to start new instances. In this paper we propose multistart strategies motivated by works on multi-armed bandit problems and Lipschitz optimization with an unknown constant. The strategies continuously estimate the potential performance of each algorithm instance by supposing a convergence rate of the local search algorithm up to an unknown constant, and in every phase allocate resources to those instances that could converge to the optimum for a particular range of the constant. Asymptotic bounds are given on the performance of the strategies. In particular, we prove that at most a quadratic increase in the number of times the target function is evaluated is needed to achieve the performance of a local search algorithm started from the attraction region of the optimum. Experiments are provided using SPSA (Simultaneous Perturbation Stochastic Approximation) and k-means as local search algorithms, and the results indicate that the proposed strategies work well in practice, and, in all cases studied, need only logarithmically more evaluations of the target function as opposed to the theoretically suggested quadratic increase.
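A bandit-flavoured allocation loop can be sketched as follows (the local search, the optimistic scoring rule, and the test function are all invented simplifications, not the paper's strategies): each round, one step of budget goes to the instance whose current value minus an optimism bonus is best, so promising instances absorb most of the computation.

```python
import random

def local_step(x, f, rng, scale=0.1):
    # One step of a naive local search: move to a nearby point if it improves.
    y = x + rng.gauss(0.0, scale)
    return y if f(y) < f(x) else x

def multistart(f, n_instances, budget, rng):
    # Allocate `budget` local-search steps across several instances. The
    # score f(x) - bonus is optimistic: rarely-used instances keep a large
    # bonus, so they still get occasional steps (exploration).
    xs = [rng.uniform(-10.0, 10.0) for _ in range(n_instances)]
    used = [0] * n_instances
    for _ in range(budget):
        i = min(range(n_instances),
                key=lambda j: f(xs[j]) - 1.0 / (1 + used[j]))
        xs[i] = local_step(xs[i], f, rng)
        used[i] += 1
    return min(xs, key=f)

rng = random.Random(0)
best = multistart(lambda x: (x - 3.0) ** 2, 5, 3000, rng)
```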
In Search of the Root of Fuzziness: The Measure Theoretic Meaning of Partial Presence.
Annals of Fuzzy Mathematics and Informatics, 2011
Low discrepancy constructions in the triangle
, 2014
Abstract

Cited by 3 (2 self)
Most quasi-Monte Carlo research focuses on sampling from the unit cube. Many problems, especially in computer graphics, are defined via quadrature over the unit triangle. Quasi-Monte Carlo methods for the triangle have been developed by Pillards and Cools (2005) and by Brandolini et al. (2013). This paper presents two QMC constructions in the triangle with a vanishing discrepancy. The first is a version of the van der Corput sequence customized to the unit triangle. It is an extensible digital construction that attains a discrepancy below 12/√N. The second construction rotates an integer lattice through an angle whose tangent is a quadratic irrational number. It attains a discrepancy of O(log(N)/N), which is the best possible rate. Previous work strongly indicated that such a discrepancy was possible, but no constructions were available. Scrambling the digits of the first construction improves its accuracy for integration of smooth functions. Both constructions also yield convergent estimates for integrands that are Riemann integrable on the triangle without requiring bounded variation.
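The digital construction can be sketched via midpoint subdivision (the digit-to-subtriangle rule and digit order here are assumptions for illustration; the paper's exact map may differ): each base-4 digit of the index selects one of the four similar subtriangles obtained by connecting edge midpoints, and the point is the centroid of the final subtriangle.

```python
def triangle_vdc(n, A=(0.0, 0.0), B=(1.0, 0.0), C=(0.0, 1.0)):
    # Van der Corput-style map into a triangle: base-4 digits of n pick a
    # nested subtriangle at each level (three corner copies plus the middle
    # inverted copy); return the centroid of the limiting subtriangle.
    a, b, c = A, B, C
    m = n
    while m > 0:
        d, m = m % 4, m // 4
        ab = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
        bc = ((b[0] + c[0]) / 2, (b[1] + c[1]) / 2)
        ca = ((c[0] + a[0]) / 2, (c[1] + a[1]) / 2)
        if d == 0:
            a, b, c = a, ab, ca          # corner subtriangle at a
        elif d == 1:
            a, b, c = ab, b, bc          # corner subtriangle at b
        elif d == 2:
            a, b, c = ca, bc, c          # corner subtriangle at c
        else:
            a, b, c = ab, bc, ca         # middle (inverted) subtriangle
    return ((a[0] + b[0] + c[0]) / 3, (a[1] + b[1] + c[1]) / 3)
```

Every returned point is a convex combination of the original vertices, so the construction never leaves the triangle, and index 0 maps to the centroid.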