Results 1 - 10 of 11,615
A direct approach to false discovery rates
2002
Cited by 775 (14 self)
"... Summary. Multiple-hypothesis testing involves guarding against much more complicated errors than single-hypothesis testing. Whereas we typically control the type I error rate for a single-hypothesis test, a compound error rate is controlled for multiple-hypothesis tests. For example, controlling the false discovery rate (FDR) traditionally involves intricate sequential p-value rejection methods based on the observed data. Whereas a sequential p-value method fixes the error rate and estimates its corresponding rejection region, we propose the opposite approach: we fix the rejection region ..."
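As a reading aid, the snippet describes the fixed-rejection-region idea: choose a p-value threshold t first, then estimate the FDR of that region. Below is a minimal Python sketch of that estimation step, assuming a vector of p-values; the function name, the lam tuning parameter, the plug-in estimate of the proportion of true nulls, and the simulated data are illustrative choices, not the paper's exact notation.

import numpy as np

def estimate_fdr(pvals, t, lam=0.5):
    """Estimate the FDR of the fixed rejection region [0, t].

    pi0 (the proportion of true nulls) is estimated from the p-values
    above lam, which are assumed to be mostly nulls.
    """
    p = np.asarray(pvals, dtype=float)
    m = p.size
    pi0_hat = np.sum(p > lam) / ((1.0 - lam) * m)   # plug-in estimate of pi0
    n_reject = max(int(np.sum(p <= t)), 1)          # rejections in [0, t]; avoid /0
    return min(pi0_hat * m * t / n_reject, 1.0)     # expected false / observed rejections

# Example: estimate the FDR of rejecting every p-value below 0.01
rng = np.random.default_rng(1)
pvals = np.concatenate([rng.uniform(size=900), rng.beta(0.1, 10.0, size=100)])
print(round(estimate_fdr(pvals, t=0.01), 3))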
Generalized estimators for multiple testing: proportion of true nulls and false discovery rate
"... Generalized estimators for multiple testing: proportion of true nulls and false discovery rate ..."
Local False Discovery Rates
2005
Cited by 24 (1 self)
"... Modern scientific technology is providing a new class of large-scale simultaneous inference problems, with hundreds or thousands of hypothesis tests to consider at the same time. Microarrays epitomize this type of technology but similar problems arise in proteomics, time of flight spectroscopy, flow cytometry, FMRI imaging, and massive social science surveys. This paper uses local false discovery rate methods to carry out size and power calculations on large-scale data sets. An empirical Bayes approach allows the fdr analysis to proceed from a minimum of frequentist or Bayesian modeling ..."
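For readers new to the local fdr, a minimal sketch of the quantity behind this line of work is fdr(z) = pi0 * f0(z) / f(z): a theoretical N(0, 1) null density f0 divided by an estimate of the mixture density f. The kernel density estimate, the default pi0 = 1, the function name, and the simulated data below are illustrative assumptions, not the paper's implementation.

import numpy as np
from scipy.stats import norm, gaussian_kde

def local_fdr(z, pi0=1.0):
    """Empirical Bayes local fdr: fdr(z) = pi0 * f0(z) / f(z)."""
    z = np.asarray(z, dtype=float)
    f = gaussian_kde(z)(z)      # kernel estimate of the mixture density f(z)
    f0 = norm.pdf(z)            # theoretical N(0, 1) null density
    return np.clip(pi0 * f0 / f, 0.0, 1.0)

# Example: 950 null z-values and 50 shifted ones; small fdr flags likely non-nulls
rng = np.random.default_rng(2)
z = np.concatenate([rng.normal(0, 1, 950), rng.normal(3, 1, 50)])
print(int(np.sum(local_fdr(z) < 0.2)), "z-values with fdr < 0.2")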
The control of the false discovery rate in multiple testing under dependency
Annals of Statistics, 2001
Cited by 1093 (16 self)
"... Benjamini and Hochberg suggest that the false discovery rate may be the appropriate error rate to control in many applied multiple testing problems. A simple procedure was given there as an FDR controlling procedure for independent test statistics and was shown to be much more powerful than comparable ..."
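The simple procedure referred to here is the Benjamini-Hochberg step-up rule: sort the m p-values, find the largest rank k with p_(k) <= (k/m) * alpha, and reject the k smallest. A minimal Python sketch follows; the function name and the simulated example are illustrative.

import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Step-up FDR procedure: reject the k smallest p-values, where k is
    the largest rank with p_(k) <= (k / m) * alpha."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    passes = p[order] <= (np.arange(1, m + 1) / m) * alpha
    reject = np.zeros(m, dtype=bool)
    if passes.any():
        k = int(np.max(np.nonzero(passes)[0]))   # largest passing rank (0-based)
        reject[order[:k + 1]] = True
    return reject

# Example: 5 strong signals among 95 uniform nulls
rng = np.random.default_rng(0)
pvals = np.concatenate([rng.uniform(size=95), rng.uniform(0.0, 1e-4, size=5)])
print(int(benjamini_hochberg(pvals).sum()), "hypotheses rejected")

For arbitrarily dependent test statistics, the paper's correction runs the same rule at level alpha / (1 + 1/2 + ... + 1/m).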
Thresholding of statistical maps in functional neuroimaging using the false discovery rate
NeuroImage, 2002
Cited by 521 (9 self)
"... Finding objective and effective thresholds for voxelwise statistics derived from neuroimaging data has been a long-standing problem. With at least one test performed for every voxel in an image, some correction of the thresholds is needed to control the error rates, but standard procedures for multiple hypothesis testing (e.g., Bonferroni) tend to not be sensitive enough to be useful in this context. This paper introduces to the neuroscience literature statistical procedures for controlling the false discovery rate (FDR). Recent theoretical work in statistics suggests that FDR ..."
THE FAULTY FALSE DISCOVERY RATE
"... The false discovery procedure introduced by Benjamini and Hochberg in 1995 has become a mainstream method for large scale simultaneous inference in a variety of bioinformatics problems. The procedure controls the false discovery rate (FDR) at a specified level α assuming that the distribution function ..."
Size, power and false discovery rates
2007
Cited by 53 (4 self)
"... Modern scientific technology has provided a new class of large-scale simultaneous inference problems, with thousands of hypothesis tests to consider at the same time. Microarrays epitomize this type of technology, but similar situations arise in proteomics, spectroscopy, imaging, and social science surveys. This paper uses false discovery rate methods to carry out both size and power calculations on large-scale problems. A simple empirical Bayes approach allows the fdr analysis to proceed with a minimum of frequentist or Bayesian modeling assumptions. Closed-form accuracy formulas are derived ..."
• FDR: False Discovery Rate
"... identification by database search of mixture tandem mass spectra ..."
Microarrays, Empirical Bayes Methods, and False Discovery Rates
Genet. Epidemiol, 2001
Cited by 217 (14 self)
"... In a classic two-sample problem one might use Wilcoxon's statistic to test for a difference between Treatment and Control subjects. The analogous microarray experiment yields thousands of Wilcoxon statistics, one for each gene on the array, and confronts the statistician with a difficult simultaneous inference situation. We will discuss two inferential approaches to this problem: an empirical Bayes method that requires very little a priori Bayesian modeling, and the frequentist method of 'False Discovery Rates' proposed by Benjamini and Hochberg in 1995. It turns out that the two ..."