Results 1 – 4 of 4
Some Practical Guidelines for Effective Sample Size Determination
, 2001
Abstract

Cited by 85 (1 self)
Sample size determination is often an important step in planning a statistical study, and it is usually a difficult one. Among the important hurdles to be surpassed, one must obtain an estimate of one or more error variances and specify an effect size of importance. There is the temptation to take some shortcuts. This paper offers some suggestions for successful and meaningful sample size determination. Also discussed is the possibility that sample size may not be the main issue, that the real goal is to design a high-quality study. Finally, criticism is made of some ill-advised shortcuts relating to power and sample size. Key words: Power; Sample size; Observed power; Retrospective power; Study design; Cohen's effect measures; Equivalence testing; # I wish to thank Kate Cowles, John Castelloe, Steve Simon, two referees, an editor, and an associate editor for their helpful comments on earlier drafts of this paper. Much of this work was done with the support of the Obermann ...
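The error variance estimate and the effect size of importance that this abstract names are exactly the inputs to the standard sample-size formula. As an illustration (not code from the paper), here is a minimal sketch of the usual normal-approximation formula for comparing two means; the function name and default settings are ours:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate per-group sample size for a two-sample comparison of means,
    given a standardized effect size d = delta / sigma.
    Normal approximation: n = 2 * (z_{1-alpha/2} + z_{1-beta})^2 / d^2."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)   # two-sided critical value
    z_beta = z(power)            # quantile matching the target power
    return ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

# Detecting a medium standardized effect (d = 0.5) with 80% power at alpha = 0.05:
print(n_per_group(0.5))  # -> 63 per group
```

Note how sensitive the answer is to the effect size: halving d to 0.25 roughly quadruples the required n, which is why the shortcuts the paper criticizes (guessing d, or recycling a published estimate uncritically) matter so much.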
On the post hoc power in testing mean differences
 Journal of Educational and Behavioral Statistics
, 2005
Abstract

Cited by 6 (0 self)
Retrospective or post hoc power analysis is recommended by reviewers and editors of many journals. Little literature has been found that gives a serious study of the post hoc power. When the sample size is large, the observed effect size is a good estimator of the true effect size. One would hope that the post hoc power is also a good estimator of the true power. This article studies whether such a power estimator provides valuable information about the true power. Using analytical, numerical, and Monte Carlo approaches, our results show that the estimated power does not provide useful information when the true power is small. It is almost always a biased estimator of the true power. The bias can be negative or positive. Large sample size alone does not guarantee the post hoc power to be a good estimator of the true power. In fact, when the population variance is known, the cumulative distribution function of the post hoc power is solely a function of the population power. This distribution is uniform when the true power equals 0.5 and highly skewed when the true power is near 0 or 1. When the population variance is unknown, the post hoc power behaves essentially the same as when the variance is known.
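The abstract's claim that post hoc power is uniformly distributed when the true power equals 0.5 can be checked with a short simulation. The sketch below is our illustration, not the article's code: it uses a one-sided z-test with known variance and picks the true effect so that the true power is exactly 0.5, then records the "post hoc" power of many simulated studies:

```python
import random
from statistics import NormalDist

rng = random.Random(0)
nd = NormalDist()
alpha, n = 0.05, 25
z_crit = nd.inv_cdf(1 - alpha)    # one-sided critical value
delta = z_crit / n ** 0.5         # chosen so the true power is exactly 0.5

def post_hoc_power(observed_effect: float) -> float:
    """Plug the observed effect into the one-sided z-test power formula."""
    return nd.cdf(observed_effect * n ** 0.5 - z_crit)

# Simulate many studies; each yields an observed mean effect and its post hoc power.
powers = sorted(
    post_hoc_power(rng.gauss(delta, 1 / n ** 0.5)) for _ in range(20000)
)
m = len(powers)
quartiles = [powers[m // 4], powers[m // 2], powers[3 * m // 4]]
print([round(q, 2) for q in quartiles])  # near [0.25, 0.50, 0.75]: uniform
```

The quartiles land near 0.25, 0.50, and 0.75, as a uniform distribution requires: even with the true power known to be 0.5, any post hoc power value in (0, 1) is equally likely, which is the sense in which the estimate carries no useful information.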
Many statis...
, 2003
Abstract
Summary. Scientists often need to test hypotheses and construct corresponding confidence intervals. In designing a study to test a particular null hypothesis, traditional methods lead to a sample size large enough to provide sufficient statistical power. In contrast, traditional methods based on constructing a confidence interval lead to a sample size likely to control the width of the interval. With either approach, a sample size so large as to waste resources or introduce ethical concerns is undesirable. This work was motivated by the concern that existing sample size methods often make it difficult for scientists to achieve their actual goals. We focus on situations which involve a fixed, unknown scalar parameter representing the true state of nature. The width of the confidence interval is defined as the difference between the (random) upper and lower bounds. An event width is said to occur if the observed confidence interval width is less than a fixed constant chosen a priori. An event validity is said to occur if the parameter of interest is contained between the observed upper and lower confidence interval bounds. An event rejection is said to occur if the confidence interval excludes the null value of the parameter. In our opinion, scientists often implicitly seek to have all three occur: width, validity, and rejection. New results illustrate that neglecting rejection or width (and less so validity) often provides a sample size with a low probability of the simultaneous occurrence of all three events. We recommend considering all three.
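The three events can be made concrete with a small simulation. This is our illustration, not the paper's method: the interval (a large-sample normal interval for a mean), the parameter values, and the width target below are all assumptions chosen for the sketch:

```python
import random
from statistics import NormalDist, mean, stdev

rng = random.Random(1)
z = NormalDist().inv_cdf(0.975)   # 95% large-sample interval
mu, sigma, n = 0.6, 1.0, 30       # true state of nature and sample size (assumed)
max_width = 0.8                   # width target chosen a priori (assumed)

width = validity = rejection = all_three = 0
reps = 10000
for _ in range(reps):
    x = [rng.gauss(mu, sigma) for _ in range(n)]
    half = z * stdev(x) / n ** 0.5
    lo, hi = mean(x) - half, mean(x) + half
    w = (hi - lo) < max_width     # event "width"
    v = lo < mu < hi              # event "validity": interval covers the truth
    r = not (lo < 0 < hi)         # event "rejection": interval excludes the null 0
    width += w; validity += v; rejection += r
    all_three += (w and v and r)

print(round(width / reps, 2), round(validity / reps, 2),
      round(rejection / reps, 2), round(all_three / reps, 2))
```

Each event individually occurs with fairly high probability at this n, yet the joint probability of all three is noticeably lower, which is the paper's point: sizing a study for any one event alone can leave the simultaneous occurrence underpowered.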
Clinical Trials 2010; 7: 219–226
"... Sample size reestimation in a breast cancer trial ..."