Replication and p intervals: p values predict the future only vaguely, but confidence intervals do much better. Perspect Psychol Sci (2008)
Abstract

Cited by 26 (1 self)
ABSTRACT—Replication is fundamental to science, so statistical analysis should give information about replication. Because p values dominate statistical analysis in psychology, it is important to ask what p says about replication. The answer to this question is "Surprisingly little." In one simulation of 25 repetitions of a typical experiment, p varied from < .001 to .76, thus illustrating that p is a very unreliable measure. This article shows that, if an initial experiment results in two-tailed p = .05, there is an 80% chance the one-tailed p value from a replication will fall in the interval (.00008, .44), a 10% chance that p < .00008, and fully a 10% chance that p > .44. Remarkably, the interval—termed a p interval—is this wide however large the sample size. p is so unreliable and gives such dramatically vague information that it is a poor basis for inference. Confidence intervals, however, give much better information about replication. Researchers should minimize the role of p by using confidence intervals and model-fitting techniques and by adopting meta-analytic thinking. "[p values] can be highly misleading measures of the evidence ... against the null hypothesis."
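The kind of simulation the abstract describes can be sketched in a few lines. This is an illustrative Monte Carlo, not the article's own code: the assumed setup is a two-group z test with known sd = 1, a true effect of d = 0.5, and n = 32 per group, which gives roughly 50% power, typical of such experiments.

```python
# Sketch: how widely p varies across 25 replications of one fixed design.
# Assumptions (not from the article): two-group z test, known sd = 1,
# true effect d = 0.5, n = 32 per group (~50% power at alpha = .05).
import math
import random

def p_value(n=32, d=0.5, rng=random):
    """Two-tailed p from one simulated two-group experiment (z test)."""
    g1 = [rng.gauss(d, 1) for _ in range(n)]
    g2 = [rng.gauss(0, 1) for _ in range(n)]
    z = (sum(g1) / n - sum(g2) / n) / math.sqrt(2 / n)
    return math.erfc(abs(z) / math.sqrt(2))  # equals 2 * (1 - Phi(|z|))

random.seed(1)
ps = sorted(p_value() for _ in range(25))
print(f"25 replications: p ranges from {ps[0]:.4f} to {ps[-1]:.2f}")
```

Running this with different seeds shows the same qualitative result the abstract reports: across 25 replications of the identical experiment, the smallest and largest p values are typically orders of magnitude apart.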
On the post hoc power in testing mean differences. Journal of Educational and Behavioral Statistics (2005)
Abstract

Cited by 6 (0 self)
Retrospective or post hoc power analysis is recommended by reviewers and editors of many journals, yet little of the literature has given the post hoc power a serious study. When the sample size is large, the observed effect size is a good estimator of the true effect size. One would hope that the post hoc power is also a good estimator of the true power. This article studies whether such a power estimator provides valuable information about the true power. Using analytical, numerical, and Monte Carlo approaches, our results show that the estimated power does not provide useful information when the true power is small. It is almost always a biased estimator of the true power. The bias can be negative or positive. Large sample size alone does not guarantee the post hoc power to be a good estimator of the true power. Actually, when the population variance is known, the cumulative distribution function of the post hoc power is solely a function of the population power. This distribution is uniform when the true power equals 0.5 and highly skewed when the true power is near 0 or 1. When the population variance is unknown, the post hoc power behaves essentially the same as when the variance is known.
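The known-variance result quoted above — that the post hoc power is uniformly distributed when the true power is 0.5 — can be checked with a small simulation. The setup is an assumption, not the article's code: the observed test statistic is z ~ N(delta, 1); with delta = 1.96 the true power at alpha = .05 is exactly 0.5, and the (one-sided) post hoc power estimate is Phi(z - 1.96).

```python
# Sketch: distribution of the post hoc power estimate when true power = 0.5.
# Assumed model (known variance): observed z ~ N(delta, 1), with
# delta = 1.96 so that true power = Phi(delta - 1.96) = 0.5; the
# estimated power is Phi(z - 1.96), which should then be Uniform(0, 1).
import math
import random

def phi(x):
    """Standard normal CDF."""
    return 0.5 * math.erfc(-x / math.sqrt(2))

random.seed(0)
delta = 1.96  # true noncentrality: true power = phi(delta - 1.96) = 0.5
est = sorted(phi(random.gauss(delta, 1) - 1.96) for _ in range(100_000))

# Uniformity check: empirical quartiles should sit near 0.25, 0.5, 0.75.
q = [est[25_000], est[50_000], est[75_000]]
print("quartiles of estimated power:", [round(v, 2) for v in q])
```

Replacing delta with a larger value (say 3.0, true power about 0.85) and re-running shows the skew the abstract describes: the estimated power then piles up near 1 rather than spreading evenly.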