Results 1–10 of 47
Quadrature-based methods for obtaining approximate solutions to nonlinear asset pricing models
ECONOMETRICA, 1991
Parallel Computation of Multivariate Normal Probabilities
Abstract

Cited by 207 (9 self)
We present methods for the computation of multivariate normal probabilities on parallel/distributed systems. After a transformation of the initial integral, an approximation can be obtained using Monte Carlo or quasi-random methods. We propose a meta-algorithm for asynchronous sampling methods and derive efficient parallel algorithms for the computation of MVN distribution functions, including a method based on randomized Korobov and Richtmyer sequences. Timing results of the implementations using the MPI parallel environment are given.

1 Introduction

The computation of the multivariate normal distribution function

F(a, b) = |Σ|^(−1/2) (2π)^(−n/2) ∫_a^b exp(−½ xᵀ Σ⁻¹ x) dx    (1)

often leads to computationally intensive integration problems. Here Σ is an n × n symmetric positive definite covariance matrix; furthermore, one of the limits in each integration variable may be infinite. Genz [5] performs a sequence of transformations resu...
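The sequential transformation the abstract alludes to can be sketched in a few lines: after a Cholesky factorization of Σ, each coordinate is sampled conditionally through the univariate normal CDF, and the product of interval widths is averaged. This is a minimal plain Monte Carlo version of the idea (function name and details are ours; the paper's randomized Korobov/Richtmyer sequences and parallel machinery are omitted):

```python
import numpy as np
from scipy.stats import norm

def mvn_prob_mc(a, b, sigma, n_samples=40_000, seed=0):
    """Monte Carlo estimate of P(a <= X <= b), X ~ N(0, sigma), using the
    sequential transformation: each coordinate is sampled conditionally
    through the univariate normal CDF after a Cholesky factorization."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(sigma)
    n = len(a)
    d = np.full(n_samples, norm.cdf(a[0] / L[0, 0]))   # lower conditional CDF
    e = np.full(n_samples, norm.cdf(b[0] / L[0, 0]))   # upper conditional CDF
    f = e - d                                          # running product of widths
    y = np.zeros((n_samples, n))
    for i in range(1, n):
        w = rng.random(n_samples)                      # could be quasi-random
        u = np.clip(d + w * (e - d), 1e-15, 1 - 1e-15)
        y[:, i - 1] = norm.ppf(u)
        shift = y[:, :i] @ L[i, :i]                    # conditional mean term
        d = norm.cdf((a[i] - shift) / L[i, i])
        e = norm.cdf((b[i] - shift) / L[i, i])
        f = f * (e - d)
    return f.mean()
```

Infinite limits are handled naturally, since `norm.cdf(±inf)` returns 0 or 1; replacing `rng.random` by a randomized lattice sequence gives the quasi-random variants the paper studies.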
Numerical Computation of Rectangular Bivariate And Trivariate normal and t probabilities
STATISTICS AND COMPUTING, 2004
Abstract

Cited by 54 (1 self)
Algorithms for the computation of bivariate and trivariate normal and t probabilities for rectangles are reviewed. The algorithms use numerical integration to approximate transformed probability distribution integrals. A generalization of Plackett's formula is derived for bivariate and trivariate t probabilities. New methods are described for the numerical computation of bivariate and trivariate t probabilities. Test results are provided, along with recommendations for the most efficient algorithms for single and double precision computations.
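In the bivariate case, any rectangle probability reduces to four joint-CDF evaluations by inclusion-exclusion. A small sketch using SciPy's numerical bivariate CDF (an illustration of the reduction, not the specialized algorithms reviewed in the paper):

```python
from scipy.stats import multivariate_normal, norm

def bvn_rect(a, b, rho):
    """P(a[0] <= X <= b[0], a[1] <= Y <= b[1]) for a standard bivariate
    normal with correlation rho, via inclusion-exclusion on the joint CDF."""
    mvn = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])
    F = lambda x, y: mvn.cdf([x, y])
    # rectangle mass = F(b1,b2) - F(a1,b2) - F(b1,a2) + F(a1,a2)
    return F(b[0], b[1]) - F(a[0], b[1]) - F(b[0], a[1]) + F(a[0], a[1])
```

The specialized single- and double-precision routines compared in the paper serve exactly this computation, but with controlled accuracy and far fewer integrand evaluations.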
Pricing and Hedging American Options: A Recursive Integration Method
Review of Financial Studies, 1996
Abstract

Cited by 53 (3 self)
In this paper, we present a new method for pricing and hedging American options along with an efficient implementation procedure. The proposed method is efficient and accurate in computing both option values and various option hedge parameters. We demonstrate the computational accuracy and efficiency of this numerical procedure in relation to other competing approaches. We also suggest how the method can be applied to the case of any American option for which a closed-form solution exists for the corresponding European option. A variety of financial products such as fixed-income derivatives, mortgage-backed securities, and corporate securities have early-exercise or American-style features that significantly influence their valuation and hedging. Considerable interest exists, therefore, in both academic and practitioner circles, in methods of valuing and hedging American-style options that are conceptually sound as well as efficient in their implementation. It has been recognized early in the ...
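One classical route to such a method is the early-exercise-premium integral representation, with the exercise boundary solved recursively on a time grid. The sketch below is a plain midpoint-rule, fixed-point version of that idea for an American put without dividends; it is our simplification of the general approach, not the paper's accelerated implementation:

```python
import numpy as np
from scipy.stats import norm

def euro_put(S, K, r, sig, t):
    """Black-Scholes European put (no dividends)."""
    d1 = (np.log(S / K) + (r + 0.5 * sig**2) * t) / (sig * np.sqrt(t))
    d2 = d1 - sig * np.sqrt(t)
    return K * np.exp(-r * t) * norm.cdf(-d2) - S * norm.cdf(-d1)

def american_put(S0, K, r, sig, T, m=100):
    """American put = European put + early-exercise premium, with the
    boundary B(tau) found recursively by value matching on a time grid."""
    dt = T / m
    B = np.empty(m + 1)
    B[0] = K                                  # boundary at expiry (no dividends)

    def premium(S, j):
        # midpoint rule for int_0^{j*dt} r K e^{-r xi} Phi(-d2(S, B, xi)) d xi
        k = np.arange(j)
        xi = (k + 0.5) * dt
        Bmid = 0.5 * (B[j - k - 1] + B[j - k])
        d2 = (np.log(S / Bmid) + (r - 0.5 * sig**2) * xi) / (sig * np.sqrt(xi))
        return dt * np.sum(r * K * np.exp(-r * xi) * norm.cdf(-d2))

    for j in range(1, m + 1):                 # march outward from expiry
        B[j] = B[j - 1]
        for _ in range(50):                   # fixed-point value matching
            b_new = K - euro_put(B[j], K, r, sig, j * dt) - premium(B[j], j)
            if abs(b_new - B[j]) < 1e-8 * K:
                B[j] = b_new
                break
            B[j] = b_new
    return euro_put(S0, K, r, sig, T) + premium(S0, m)
```

The recursion solves `K - B = p_eu(B) + premium(B)` at each grid time; the premium integral is exactly what benefits from the recursive-integration and extrapolation accelerations the paper develops.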
Towards Sharp Inapproximability For Any 2-CSP
Abstract

Cited by 32 (1 self)
We continue the recent line of work on the connection between semidefinite-programming-based approximation algorithms and the Unique Games Conjecture. Given any boolean 2-CSP (or more generally, any nonnegative objective function on two boolean variables), we show how to reduce the search for a good inapproximability result to a certain numeric minimization problem. The key objects in our analysis are the vector triples arising when doing clause-by-clause analysis of algorithms based on semidefinite programming. Given a weighted set of such triples of a certain restricted type, which are "hard" to round in a certain sense, we obtain a Unique Games-based inapproximability matching this "hardness" of rounding the set of vector triples. Conversely, any instance together with an SDP solution can be viewed as a set of vector triples, and we show that we can always find an assignment to the instance which is at least as good as the "hardness" of rounding the corresponding set of vector triples. We conjecture that the restricted type required for the hardness result is in fact no restriction, which would imply that these upper and lower bounds match exactly. This conjecture is supported by all existing results for specific 2-CSPs. As an application, we show that MAX 2-AND is hard to approximate within 0.87435. This improves upon the best previous hardness of α_GW + ε ≈ 0.87856, and comes very close to matching the approximation ratio of the best algorithm known, 0.87401. It also establishes that balanced instances of MAX 2-AND, i.e., instances in which each variable occurs positively and negatively equally often, are not the hardest to approximate, as these can be approximated within a factor α_GW.
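To make the MAX 2-AND objective concrete: a weighted 2-AND clause contributes its weight exactly when both of its (possibly negated) literals are true. A brute-force optimum for tiny instances (illustrative only; the instance encoding is ours, and the paper's contribution concerns SDP rounding and hardness, not exhaustive search):

```python
from itertools import product

def max_2and(n_vars, clauses):
    """Exhaustive optimum of a tiny weighted MAX 2-AND instance. A clause
    ((i, si), (j, sj), w) earns weight w iff literal i (negated when si is
    True) and literal j are both satisfied by the assignment."""
    best = 0.0
    for assign in product([False, True], repeat=n_vars):
        val = sum(w for (i, si), (j, sj), w in clauses
                  if (assign[i] ^ si) and (assign[j] ^ sj))
        best = max(best, val)
    return best
```

A "balanced" instance in the abstract's sense is one where each variable appears negated and unnegated with equal total weight, as in the two-clause example below.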
Visualizing the positional and geometrical variability of isosurfaces in uncertain scalar fields
in Computer Graphics Forum, 2011
Abstract

Cited by 22 (5 self)
We present a novel approach for visualizing the positional and geometrical variability of isosurfaces in uncertain 3D scalar fields. Our approach extends recent work by Pöthkow and Hege [PH10] in that it accounts for correlations in the data to determine more reliable isosurface crossing probabilities. We introduce an incremental update scheme that allows the probability computation to be integrated efficiently into front-to-back volume ray casting. Our method accounts for homogeneous and anisotropic correlations, and it determines for each sampling interval along a ray the probability of crossing an isosurface for the first time. To visualize the positional and geometrical uncertainty even under viewing directions parallel to the surface normal, we propose a new color mapping scheme based on the approximate spatial deviation of possible surface points from the mean surface. The additional use of saturation makes it possible to distinguish between areas of high and low statistical dependence. Experimental results confirm the effectiveness of our approach for the visualization of uncertainty related to the position and shape of convex and concave isosurface structures. Categories and Subject Descriptors (ACM CCS): Display algorithms; Viewing algorithms
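The per-interval crossing probability such methods build on can be illustrated for a single pair of adjacent samples: model the two values as correlated Gaussians and ask for the probability that the isovalue lies between them. A sketch (notation and function name are ours; the paper evaluates this incrementally along rays and conditions on no earlier crossing):

```python
from scipy.stats import multivariate_normal, norm

def crossing_probability(mu1, mu2, s1, s2, rho, v):
    """Probability that isovalue v is crossed between two adjacent samples
    modeled as correlated Gaussians X1 ~ N(mu1, s1^2), X2 ~ N(mu2, s2^2):
    P(X1 <= v < X2) + P(X2 <= v < X1), via the bivariate normal CDF."""
    cov = [[s1 * s1, rho * s1 * s2], [rho * s1 * s2, s2 * s2]]
    p_both_below = multivariate_normal(mean=[mu1, mu2], cov=cov).cdf([v, v])
    p1 = norm.cdf(v, loc=mu1, scale=s1)
    p2 = norm.cdf(v, loc=mu2, scale=s2)
    # crossing <=> exactly one of the two samples lies below v
    return p1 + p2 - 2.0 * p_both_below
```

As the correlation approaches 1 the crossing probability collapses toward zero, which is exactly why ignoring correlations (the independent model) overestimates the spatial spread of the isosurface.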
EM algorithms for multivariate Gaussian mixture models with truncated and censored data
2010
Abstract

Cited by 8 (0 self)
We present expectation-maximization (EM) algorithms for fitting multivariate Gaussian mixture models to data that is truncated, censored, or both truncated and censored. These two types of incomplete measurements are naturally handled together through their relation to the multivariate truncated Gaussian distribution. We illustrate our algorithms on synthetic and flow cytometry data.
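A basic ingredient of such E-steps is the first two moments of a truncated Gaussian, which replace the ordinary sufficient statistics for observations known only to lie in a window. A univariate sketch of these standard corrections (the paper works with the multivariate analogues):

```python
import numpy as np
from scipy.stats import norm

def truncated_normal_moments(mu, sigma, a, b):
    """Mean and variance of N(mu, sigma^2) conditioned on [a, b] -- the
    standard correction terms a truncated-data E-step needs."""
    alpha, beta = (a - mu) / sigma, (b - mu) / sigma
    Z = norm.cdf(beta) - norm.cdf(alpha)          # mass kept by truncation
    phi_a, phi_b = norm.pdf(alpha), norm.pdf(beta)
    mean = mu + sigma * (phi_a - phi_b) / Z
    var = sigma**2 * (1.0 + (alpha * phi_a - beta * phi_b) / Z
                      - ((phi_a - phi_b) / Z) ** 2)
    return mean, var
```

In the mixture setting these moments are computed per component and weighted by the component responsibilities; censored coordinates additionally require conditional (rather than marginal) truncated moments.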
Valuation of PerformanceDependent Options
2006
Abstract

Cited by 4 (4 self)
Performance-dependent options are financial derivatives whose payoff depends on the performance of one asset in comparison to a set of benchmark assets. In this paper, we present a novel approach for the valuation of general performance-dependent options. To this end, we use a multidimensional Black-Scholes model to describe the temporal development of the asset prices. The martingale approach then yields the fair price of such options as a multidimensional integral whose dimension is the number of stochastic processes used in the model. The integrand is typically discontinuous, however, which makes accurate solutions difficult to achieve by numerical approaches. Using tools from computational geometry, we are able to derive a pricing formula which only involves the evaluation of several smooth multivariate normal distributions. In this way, performance-dependent options can be priced efficiently even for high-dimensional problems, as is shown by numerical results.
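A brute-force Monte Carlo valuation is a useful cross-check for such pricing formulas, and it shows where the discontinuity comes from. The sketch below simulates correlated geometric Brownian motions and prices one hypothetical performance-dependent payoff, a call on asset 0 that pays only when asset 0 outperforms every benchmark (the payoff, names, and parameters are ours, not the paper's):

```python
import numpy as np

def performance_option_mc(S0, r, vols, corr, T, K, n_paths=100_000, seed=0):
    """Monte Carlo price of a hypothetical performance-dependent call under
    a multidimensional Black-Scholes model: pays (S_0(T) - K)^+ only if
    asset 0's return beats every benchmark asset's return."""
    S0 = np.asarray(S0, dtype=float)
    vols = np.asarray(vols, dtype=float)
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(np.asarray(corr, dtype=float))
    z = rng.standard_normal((n_paths, len(S0))) @ L.T   # correlated normals
    ST = S0 * np.exp((r - 0.5 * vols**2) * T + vols * np.sqrt(T) * z)
    perf = ST / S0                                      # gross returns
    wins = np.all(perf[:, :1] >= perf[:, 1:], axis=1)   # outperforms all benchmarks
    payoff = np.where(wins, np.maximum(ST[:, 0] - K, 0.0), 0.0)
    return np.exp(-r * T) * payoff.mean()
```

The indicator `wins` is exactly the kind of discontinuous factor the paper removes: decomposing the winning region geometrically turns this expectation into a sum of smooth multivariate normal probabilities.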
Better Approximations to Cumulative Normal Functions
Wilmott Magazine, 2005
Abstract

Cited by 4 (0 self)
1. The need for high-precision cumulative normal functions

Espen Haug relates a story to me of how his book (Haug 1998) received a rather scathing review at the Amazon website by one reader; the underlying reason for the problem is in actual fact the inaccuracy of the cumulative normal approximation in his book, an inaccuracy that is in turn inherited by the bivariate cumulative approximation. As a consequence, option prices where the bivariate cumulative is used can be negative, under inputs that are not at all absurd! It is important to remember that in most if not all approximations, the n-variate cumulative function will use the (n−1)-variate. It makes sense a priori to have a high-precision univariate cumulative normal, but it makes even more sense if we are going to use the bivariate cumulative normal, since, besides needing to be satisfactory in its own right, the bivariate will rely on the univariate that we have chosen. And if we use a trivariate cumulative, then one will require a high-precision bivariate cumulative function.

2. Univariate cumulative normal

The cumulative standard normal integral is the function:
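In double precision the cumulative normal is best computed from the complementary error function: short polynomial fits are only good to a few parts in 10^8 absolutely, and computing the upper tail as 1 − Φ(x) loses everything to cancellation. A small illustration (the polynomial shown is the standard five-term Zelen-Severo fit from Abramowitz & Stegun 26.2.17, included for comparison; it is not one of the specific approximations the article discusses):

```python
import math

def norm_cdf(x):
    """Cumulative standard normal via erfc: accurate to double precision,
    including the deep lower tail (erfc avoids cancellation there)."""
    return 0.5 * math.erfc(-x / math.sqrt(2.0))

def norm_cdf_as(x):
    """Five-term Zelen-Severo polynomial fit (A&S 26.2.17), absolute error
    about 7.5e-8 -- fine pointwise, poor *relative* accuracy in the tails."""
    if x < 0.0:
        return 1.0 - norm_cdf_as(-x)       # the fit is stated for x >= 0
    t = 1.0 / (1.0 + 0.2316419 * x)
    poly = t * (0.319381530 + t * (-0.356563782 + t * (1.781477937
             + t * (-1.821255978 + t * 1.330274429))))
    return 1.0 - math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi) * poly
```

For tail probabilities one should evaluate `0.5 * math.erfc(x / sqrt(2))` directly rather than `1 - norm_cdf(x)`: the latter is exactly 0.0 in floating point already for moderate x, which is the kind of precision loss that then propagates into the bivariate and trivariate functions.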
Real Investment Opportunity Valuation and Timing Using a FiniteLived American Exchange Option Methodology
2002
Abstract

Cited by 1 (1 self)
FCAR (Fonds pour la formation de chercheurs et l'Aide à la Recherche) and SSHRC (Social Sciences and Humanities Research Council of Canada) are gratefully acknowledged. Any remaining errors belong solely to the authors. Real Investment Opportunity Valuation and Timing Using a Finite-Lived American Exchange Option Methodology: In practice, the investment opportunities that can be delayed are more like exchange options than simple call options, because there are uncertainties both in the gross project value (the underlying asset) and in the investment cost (the exercise price). Companies that have the option to invest at any time until a certain date (the maturity) also often bear some opportunity costs (the lost cash flows) in holding the option instead of the project. Incorporating these aspects leads to a more realistic evaluation process. In this research, we value three real investment projects as finite-lived American exchange options, correcting and applying the Carr (1988) model. We conclude that, as expected, the traditional Net Present Value method substantially undervalues projects with this kind of flexibility (excluding those that are deep in-the-money). This leads to wrong decisions about the timing of these investments. We also conclude that the results from using the corrected Carr (1988) model differ substantially from those obtained using the uncorrected version. As expected, the corrected model gives results that
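The European building block behind such models is Margrabe's closed-form exchange-option formula, which the finite-lived American variant extends with early exercise. A sketch of that building block (no dividend yields; note the formula degenerates when the two assets' log-returns are perfectly correlated with equal volatilities, since the effective volatility vanishes):

```python
import math

def margrabe(S1, S2, sig1, sig2, rho, T):
    """Margrabe's European option to exchange asset 2 for asset 1,
    i.e. the value of max(S1(T) - S2(T), 0), with no dividend yields.
    The interest rate drops out because both legs are traded assets."""
    sig = math.sqrt(sig1**2 + sig2**2 - 2.0 * rho * sig1 * sig2)
    d1 = (math.log(S1 / S2) + 0.5 * sig**2 * T) / (sig * math.sqrt(T))
    d2 = d1 - sig * math.sqrt(T)
    Phi = lambda x: 0.5 * math.erfc(-x / math.sqrt(2.0))  # standard normal CDF
    return S1 * Phi(d1) - S2 * Phi(d2)
```

With `sig2 = 0` and `rho = 0` the second asset becomes a deterministic exercise price and the formula reduces to a Black-Scholes call, which gives a convenient sanity check; adding the early-exercise boundary on top of this formula is what the Carr (1988) model, and its correction here, are about.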