Results 1 - 10 of 42
Consistent Model Specification Tests
Journal of Econometrics, 1982
"... This paper reviews the literature on tests for the correct specification of the functional form of parametric conditional expectation and conditional distribution models. In particular I will discuss various versions of the Integrated Conditional Moment (ICM) test and the ideas behind them. 1 ..."
Abstract
-
Cited by 92 (13 self)
- Add to MetaCart
This paper reviews the literature on tests for the correct specification of the functional form of parametric conditional expectation and conditional distribution models. In particular I will discuss various versions of the Integrated Conditional Moment (ICM) test and the ideas behind them.
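As a concrete companion to this entry, the sketch below computes an ICM-type statistic for a linear regression specification by Monte Carlo integration of a weighted sum of residuals over a family of weight-function parameters. It is a minimal illustration only: the exp weight family, the uniform integrating measure, and the arctan transform of the regressors are stand-in assumptions, and no critical values are computed.

```python
import numpy as np

def icm_statistic(x, residuals, n_draws=500, seed=0):
    """Monte Carlo approximation of an ICM-type statistic (illustrative choices only)."""
    rng = np.random.default_rng(seed)
    n, d = x.shape
    phi = np.arctan(x)                      # bounded one-to-one transform of the regressors
    xi = rng.uniform(-1.0, 1.0, size=(n_draws, d))
    z = (np.exp(phi @ xi.T) * residuals[:, None]).sum(axis=0) / np.sqrt(n)
    return np.mean(z ** 2)                  # approximates the integral of z(xi)^2 over xi

# Toy usage: residuals from a correctly specified linear model
rng = np.random.default_rng(1)
x = rng.normal(size=(200, 2))
y = 1.0 + x @ np.array([0.5, -0.3]) + rng.normal(scale=0.5, size=200)
X = np.column_stack([np.ones(200), x])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
u_hat = y - X @ beta_hat
print("ICM-type statistic:", icm_statistic(x, u_hat))
```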
Inferential Performance Assessment of Stochastic Optimisers and the Attainment Function
"... The performance of stochastic optimisers can be assessed experimentally on given problems by performing multiple optimisation runs, and analysing the results. Since an optimiser may be viewed as an estimator for the (Pareto) minimum of a (vector) function, stochastic optimiser performance is dis ..."
Abstract
-
Cited by 46 (3 self)
- Add to MetaCart
The performance of stochastic optimisers can be assessed experimentally on given problems by performing multiple optimisation runs, and analysing the results. Since an optimiser may be viewed as an estimator for the (Pareto) minimum of a (vector) function, stochastic optimiser performance is discussed in the light of the criteria applicable to more usual statistical estimators. Multiobjective optimisers are shown to deviate considerably from standard point estimators, and to require special statistical methodology. The attainment function is formulated, and related results from random closed-set theory are presented, which cast the attainment function as a mean-like measure for the outcomes of multiobjective optimisers. Finally, a covariance-measure is defined, which should bring additional insight into the stochastic behaviour of multiobjective optimisers. Computational issues and directions for further work are discussed at the end of the paper.
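To make the attainment function concrete, the sketch below computes its standard empirical estimate for a set of optimisation runs of a minimising multiobjective optimiser: for each goal point z, the fraction of runs whose outcome set contains a point weakly dominating z. The random outcome sets and the goal points are illustrative.

```python
import numpy as np

def empirical_attainment(run_outcomes, goals):
    """Empirical attainment function for minimisation.

    run_outcomes: list of (n_i, m) arrays, one outcome set per run.
    goals: (k, m) array of points in objective space.
    Returns, for each goal z, the fraction of runs with some outcome <= z componentwise.
    """
    attained = np.zeros((len(run_outcomes), len(goals)), dtype=bool)
    for i, outcomes in enumerate(run_outcomes):
        attained[i] = np.any(np.all(outcomes[:, None, :] <= goals[None, :, :], axis=2), axis=0)
    return attained.mean(axis=0)

# Toy usage: 20 runs of 5 points each in a 2-objective space
rng = np.random.default_rng(0)
runs = [rng.uniform(0, 1, size=(5, 2)) for _ in range(20)]
goals = np.array([[0.3, 0.9], [0.5, 0.5], [0.9, 0.3]])
print(empirical_attainment(runs, goals))
```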
Some statistical pitfalls in copula modeling for financial applications.
Capital Formation, Governance and Banking, 2005
"... March 2004 Abstract In this paper we discuss some statistical pitfalls that may occur in modeling cross-dependences with copulas in financial applications. In particular we focus on issues arising in the estimation and the empirical choice of copulas as well as in the design of time-dependent copul ..."
Abstract
-
Cited by 28 (3 self)
- Add to MetaCart
(Show Context)
In this paper we discuss some statistical pitfalls that may occur in modeling cross-dependences with copulas in financial applications. In particular we focus on issues arising in the estimation and the empirical choice of copulas as well as in the design of time-dependent copulas.
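Copula estimation in practice is commonly done on rank-based pseudo-observations so that mis-specified margins do not contaminate the dependence fit. The sketch below shows that standard construction under a Gaussian-copula assumption; it illustrates the estimation step the abstract refers to in general terms, not the specific pitfalls analysed in the paper.

```python
import numpy as np
from scipy.stats import norm, rankdata

def fit_gaussian_copula_corr(data):
    """Estimate a Gaussian-copula correlation matrix from rank-based pseudo-observations."""
    n, d = data.shape
    # Ranks scaled by (n + 1) keep pseudo-observations strictly inside (0, 1).
    u = np.column_stack([rankdata(data[:, j]) / (n + 1) for j in range(d)])
    z = norm.ppf(u)                     # normal scores of the pseudo-observations
    return np.corrcoef(z, rowvar=False)

# Toy usage: dependent data with distorted, non-normal margins
rng = np.random.default_rng(0)
x = rng.multivariate_normal([0, 0], [[1, 0.7], [0.7, 1]], size=1000)
data = np.column_stack([np.exp(x[:, 0]), x[:, 1] ** 3])
print(fit_gaussian_copula_corr(data))   # close to the underlying 0.7 dependence
```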
Exploring the performance of stochastic multiobjective optimisers with the second-order attainment function
Evolutionary Multi-criterion Optimization (EMO 2005), LNCS 3410, 2005
"... Abstract. The attainment function has been proposed as a measure of the statistical performance of stochastic multiobjective optimisers which encompasses both the quality of individual non-dominated solutions in objective space and their spread along the trade-off surface. It has also been related t ..."
Abstract
-
Cited by 12 (7 self)
- Add to MetaCart
(Show Context)
The attainment function has been proposed as a measure of the statistical performance of stochastic multiobjective optimisers which encompasses both the quality of individual non-dominated solutions in objective space and their spread along the trade-off surface. It has also been related to results from random closed-set theory, and cast as a mean-like, first-order moment measure of the outcomes of multiobjective optimisers. In this work, the use of more informative, second-order moment measures for the evaluation and comparison of multiobjective optimiser performance is explored experimentally, with emphasis on the interpretability of the results.
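The simplest empirical form of such a second-order quantity is sketched below: for a pair of goal points, the fraction of runs attaining both, from which a covariance of the two attainment indicators follows. This is only the plug-in estimate at a single pair of points under illustrative data, not the experimental methodology of the paper.

```python
import numpy as np

def attainment_indicators(run_outcomes, goal):
    """1 if a run's outcome set weakly dominates the goal point, else 0 (minimisation)."""
    return np.array([np.any(np.all(o <= goal, axis=1)) for o in run_outcomes], dtype=float)

def second_order_attainment(run_outcomes, z1, z2):
    """Empirical joint attainment probability and indicator covariance for two goals."""
    a1 = attainment_indicators(run_outcomes, z1)
    a2 = attainment_indicators(run_outcomes, z2)
    joint = np.mean(a1 * a2)                       # estimate of P(both z1 and z2 attained)
    cov = joint - a1.mean() * a2.mean()            # covariance of the two indicators
    return joint, cov

# Toy usage: 50 runs of 5 points each
rng = np.random.default_rng(0)
runs = [rng.uniform(0, 1, size=(5, 2)) for _ in range(50)]
print(second_order_attainment(runs, np.array([0.4, 0.6]), np.array([0.6, 0.4])))
```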
Testing multivariate uniformity and its applications
Math. Comp., 2001
"... Abstract. Some new statistics are proposed to test the uniformity of random samples in the multidimensional unit cube [0, 1] d (d ≥ 2). These statistics are derived from number-theoretic or quasi-Monte Carlo methods for measuring the discrepancy of points in [0, 1] d. Under the null hypothesis that ..."
Abstract
-
Cited by 11 (0 self)
- Add to MetaCart
(Show Context)
Some new statistics are proposed to test the uniformity of random samples in the multidimensional unit cube [0, 1]^d (d ≥ 2). These statistics are derived from number-theoretic or quasi-Monte Carlo methods for measuring the discrepancy of points in [0, 1]^d. Under the null hypothesis that the samples are independent and identically distributed with a uniform distribution in [0, 1]^d, we obtain some asymptotic properties of the new statistics. By Monte Carlo simulation, it is found that the finite-sample distributions of the new statistics are well approximated by the standard normal distribution, N(0, 1), or the chi-squared distribution, χ²(2). A power study is performed, and possible applications of the new statistics to testing general multivariate goodness-of-fit problems are discussed.
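The sketch below computes one discrepancy measure of the quasi-Monte Carlo kind such statistics build on, the squared centred L2-discrepancy of a point set in [0, 1]^d. It illustrates the ingredient only; the paper's actual test statistics and their null distributions are not reproduced here.

```python
import numpy as np

def centered_l2_discrepancy(x):
    """Squared centred L2-discrepancy of points x in [0, 1]^d (closed-form expression)."""
    n, d = x.shape
    c = np.abs(x - 0.5)
    term1 = (13.0 / 12.0) ** d
    term2 = (2.0 / n) * np.prod(1.0 + 0.5 * c - 0.5 * c ** 2, axis=1).sum()
    diff = np.abs(x[:, None, :] - x[None, :, :])
    prod = np.prod(1.0 + 0.5 * c[:, None, :] + 0.5 * c[None, :, :] - 0.5 * diff, axis=2)
    term3 = prod.sum() / n ** 2
    return term1 - term2 + term3

# Toy usage: a uniform sample versus a sample pushed toward the origin
rng = np.random.default_rng(0)
u = rng.uniform(size=(200, 3))
print("uniform:     ", centered_l2_discrepancy(u))
print("non-uniform: ", centered_l2_discrepancy(u ** 2))   # concentrates mass near 0
```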
Modeling dwell time to predict click-level satisfaction
In WSDM'14, 2014
"... Clicks on search results are the most widely used behavioral sig-nals for predicting search satisfaction. Even though clicks are correlated with satisfaction, they can also be noisy. Previous work has shown that clicks are affected by position bias, caption bias, and other factors. A popular heurist ..."
Abstract
-
Cited by 11 (5 self)
- Add to MetaCart
(Show Context)
Clicks on search results are the most widely used behavioral signals for predicting search satisfaction. Even though clicks are correlated with satisfaction, they can also be noisy. Previous work has shown that clicks are affected by position bias, caption bias, and other factors. A popular heuristic for reducing this noise is to only consider clicks with long dwell time, usually equaling or exceeding 30 seconds. The rationale is that the more time a searcher spends on a page, the more likely they are to be satisfied with its contents. However, having a single threshold value assumes that users need a fixed amount of time to be satisfied with any result click, irrespective of the page chosen. In reality, clicked pages can differ significantly. Pages have different topics, readability levels, content lengths, etc. All of these factors may affect the amount of time spent by the user on the page. In this paper, we study the effect of different page characteristics on the time needed to achieve search satisfaction. We show that the topic of the page, its length and its readability level are critical in determining the amount of dwell time needed to predict whether any click is associated with satisfaction. We propose a method to model and provide a better understanding of click dwell time. We estimate click dwell time distributions for SAT (satisfied) or DSAT (dissatisfied) clicks for different click segments and use them to derive features to train a click-level satisfaction model. We compare the proposed model to baseline methods that use dwell time and other search performance predictors as features, and demonstrate that the proposed model achieves significant improvements.
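A hedged sketch of the kind of feature this approach suggests is given below: fit separate dwell-time distributions to SAT and DSAT clicks within a click segment and score a new click by the log-likelihood ratio of the two fits. The gamma family, the synthetic dwell times, and the 30-second comparison point are illustrative assumptions, not the paper's exact model.

```python
import numpy as np
from scipy.stats import gamma

def fit_dwell_model(sat_dwell, dsat_dwell):
    """Fit gamma distributions to SAT and DSAT dwell times (seconds) for one click segment."""
    sat_params = gamma.fit(sat_dwell, floc=0)     # (shape, loc=0, scale)
    dsat_params = gamma.fit(dsat_dwell, floc=0)
    return sat_params, dsat_params

def llr_feature(dwell, sat_params, dsat_params):
    """Log-likelihood ratio of SAT vs. DSAT for a click's dwell time; higher favours SAT."""
    return gamma.logpdf(dwell, *sat_params) - gamma.logpdf(dwell, *dsat_params)

# Toy usage with synthetic dwell times (seconds)
rng = np.random.default_rng(0)
sat = rng.gamma(shape=4.0, scale=20.0, size=500)    # satisfied clicks dwell longer on average
dsat = rng.gamma(shape=2.0, scale=8.0, size=500)
sat_p, dsat_p = fit_dwell_model(sat, dsat)
for dwell in (10.0, 30.0, 90.0):
    print(dwell, "s ->", round(llr_feature(dwell, sat_p, dsat_p), 3))
```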
Universal Residuals: A Multivariate Transformation
2006
"... Rosenblatt’s transformation has been used extensively for evaluation of model goodness-of-fit, but it only applies to models whose joint distribution is continuous. In this paper we generalize the transformation so that it applies to arbitrary probability models. The transformation is simple, but ha ..."
Abstract
-
Cited by 10 (0 self)
- Add to MetaCart
Rosenblatt’s transformation has been used extensively for evaluation of model goodness-of-fit, but it only applies to models whose joint distribution is continuous. In this paper we generalize the transformation so that it applies to arbitrary probability models. The transformation is simple, but has a wide range of possible applications, providing a tool for exploratory data analysis and formal goodness-of-fit testing for a very general class of probability models. The method is demonstrated with specific examples.
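For the continuous case that this paper generalises, the Rosenblatt transformation has a closed form once the conditional distributions are known: U1 = F1(X1), U2 = F(X2 | X1), and so on. The sketch below applies it to a standard bivariate normal with correlation rho, where X2 | X1 = x1 is N(rho*x1, 1 - rho^2); the bivariate-normal example is an illustration, not the paper's extension to arbitrary probability models.

```python
import numpy as np
from scipy.stats import norm

def rosenblatt_bivariate_normal(x, rho):
    """Rosenblatt transform of bivariate standard-normal data with correlation rho.

    Under a correctly specified model the transformed pairs are independent Uniform(0, 1).
    """
    u1 = norm.cdf(x[:, 0])
    u2 = norm.cdf((x[:, 1] - rho * x[:, 0]) / np.sqrt(1.0 - rho ** 2))
    return np.column_stack([u1, u2])

# Toy usage: transform, then check the result looks uniform and independent
rng = np.random.default_rng(0)
rho = 0.8
x = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=1000)
u = rosenblatt_bivariate_normal(x, rho)
print("means ~0.5:", u.mean(axis=0))
print("corr ~0:   ", np.corrcoef(u, rowvar=False)[0, 1])
```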
Integrated Conditional Moment Tests for Parametric Conditional Distributions
Working paper (http://econ.la.psu.edu/~hbierens/ICM_IID.PDF), 2008
"... In this paper we propose a weighted integrated conditional moment (ICM) test of the validity of parametric specifications of conditional distribution models for stationary time series, by extending the weighted ICM test of Bierens (1984) for time series regression models to complete parametric condi ..."
Abstract
-
Cited by 6 (4 self)
- Add to MetaCart
In this paper we propose a weighted integrated conditional moment (ICM) test of the validity of parametric specifications of conditional distribution models for stationary time series, by extending the weighted ICM test of Bierens (1984) for time series regression models to complete parametric conditional distribution specifications.
High-Dimensional Density Estimation via SCA: An Example in the Modelling of Hurricane Tracks
, 907
"... We present nonparametric techniques for constructing and verifying density estimates from high-dimensional data whose irregular dependence structure cannot be modelled by parametric multivariate distributions. A low-dimensional representation of the data is critical in such situations because of the ..."
Abstract
-
Cited by 5 (0 self)
- Add to MetaCart
(Show Context)
We present nonparametric techniques for constructing and verifying density estimates from high-dimensional data whose irregular dependence structure cannot be modelled by parametric multivariate distributions. A low-dimensional representation of the data is critical in such situations because of the curse of dimensionality. Our proposed methodology consists of three main parts: (1) data reparameterization via dimensionality reduction, wherein the data are mapped into a space where standard techniques can be used for density estimation and simulation; (2) inverse mapping, in which simulated points are mapped back to the high-dimensional input space; and (3) verification, in which the quality of the estimate is assessed by comparing simulated samples with the observed data. These approaches are illustrated via an exploration of the spatial variability of tropical cyclones in the North Atlantic; each datum in this case is an entire hurricane trajectory. We conclude the paper with a discussion of extending the methods to model the relationship between TC variability and climatic variables.
Key words: dimension reduction, nonparametric density estimation, application to physical sciences
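The three-part methodology reads directly as a pipeline, sketched minimally below with PCA standing in for the dimensionality-reduction step, a Gaussian kernel density estimate in the reduced space, and the PCA inverse map returning simulated points to the input space. The paper's actual reparameterisation (SCA) and its verification step are not reproduced; scikit-learn and SciPy availability is assumed.

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy.stats import gaussian_kde

def fit_and_simulate(data, n_components=2, n_samples=500):
    """(1) reduce dimension, (2) estimate a density and simulate in the low-d space,
    (3) map the simulated points back to the original input space."""
    pca = PCA(n_components=n_components)
    low = pca.fit_transform(data)                      # (n, n_components)
    kde = gaussian_kde(low.T)                          # gaussian_kde expects (d, n)
    simulated_low = kde.resample(n_samples).T
    return pca.inverse_transform(simulated_low)        # back to the input space

# Toy usage: 50-dimensional "trajectories" that really live near a 2-d subspace
rng = np.random.default_rng(0)
latent = rng.normal(size=(300, 2))
basis = rng.normal(size=(2, 50))
data = latent @ basis + 0.05 * rng.normal(size=(300, 50))
sims = fit_and_simulate(data)
print(sims.shape)    # (500, 50)
```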
Statistical Diagnosis of the Best Weibull Methods for Wind Power Assessment for Agricultural Applications
www.mdpi.com/journal/energies, 2014