Results **1 – 5** of **5**

### UNDERSTANDING, TEACHING AND USING p VALUES


Abstract

There are many problems with the p value. Is it an indicator of strength of evidence (Fisher), or only to be compared with α (Neyman-Pearson)? Many researchers and even statistics teachers have misconceptions about p, although p has been little studied, and we know little about how textbooks present it, and how researchers think about it, react to it, and use it in practice. The p value varies dramatically because of sampling variability, but textbooks do not mention this and researchers do not appreciate how widely it varies. I discuss the problems of p and advantages of confidence intervals, and identify research needed to guide the design of improved statistics education about p. I suggest the most promising teaching approach may be to focus throughout on estimation, use confidence intervals wherever possible, give p only a minor role, and explain p mainly as indicating where the confidence interval falls in relation to the null hypothesised value. Many disciplines rely on the p value to draw conclusions, yet p is often misunderstood and poorly used. It is at the heart of research, so it is surprising and disappointing how little it has been studied. We know little about how researchers think and feel about p, and little about how textbooks explain p and how that relates to what researchers do. The very large variation in p over replication is not widely appreciated, or mentioned in textbooks. I discuss these problems of p, and …
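The abstract's central empirical claim, that p varies dramatically under replication of the same experiment, is easy to illustrate by simulation. The sketch below (plain Python, standard library only; the sample size, effect size, and replication count are illustrative assumptions, not taken from the paper) repeats an identical z-test experiment many times and reports the spread of the resulting p values.

```python
import math
import random
import statistics

random.seed(42)

def one_replication(n=32, effect=0.5):
    """Draw one sample of size n from N(effect, 1) and return the
    two-sided p value of a z test of H0: mean = 0."""
    xbar = statistics.mean(random.gauss(effect, 1) for _ in range(n))
    z = xbar * math.sqrt(n)                    # standard error of the mean is 1/sqrt(n)
    return math.erfc(abs(z) / math.sqrt(2))    # two-sided normal tail probability

# Replicate the *same* experiment 1000 times and look at the p values.
ps = sorted(one_replication() for _ in range(1000))
print(f"median p = {ps[500]:.4f}")
print(f"80% of replications gave p between {ps[100]:.5f} and {ps[900]:.4f}")
```

Even with a fixed true effect and fixed sample size, the central 80% of replications typically span several orders of magnitude of p, which is the "very large variation in p over replication" the abstract refers to.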

### Risk as an Explanatory Factor for Researchers' Inferential Interpretations

, 2015 © The Author(s) & Dept. of Mathematical Sciences, The University of Montana


Abstract

Abstract: Logical reasoning is crucial in science, but we know that this is not something that humans are innately good at. It becomes even harder to reason logically about data when there is uncertainty, because there is always a chance of being wrong. Dealing with uncertainty is inevitable, for example, in situations in which the evaluation of sample outcomes with respect to some population is required. Inferential statistics is a structured way of reasoning rationally about such data. One could therefore expect that using well-known statistical techniques protects its users against misinterpretations regarding uncertainty. Unfortunately, this does not seem to be the case. Researchers often pretend to be too certain about the presence or absence of an effect, and data are analysed in a selective way, which impacts the validity of conclusions that can be drawn from the techniques that are used. In this paper, the concept of risk is used to explain why unwanted behaviour may not be as unreasonable as it seems, once the risks that researchers face are taken into account.

### QUALITATIVE RESEARCH: AN ESSENTIAL PART OF


Abstract

Our research in statistical cognition uses both qualitative and quantitative methods. A mixed-method approach makes our research more comprehensive, and provides us with new directions, unexpected insights, and alternative explanations for previously established concepts. In this paper, we review four statistical cognition studies that used mixed methods and explain the contributions of both the quantitative and qualitative components. The four studies concern statistical reporting practices in medical journals, an intervention aimed at improving psychologists' interpretations of statistical tests, the extent to which interpretations improve when results are presented with confidence intervals (CIs) rather than p values, and graduate students' misconceptions about CIs. Finally, we discuss the concept of scientific rigour and outline guidelines for maintaining rigour that should apply equally to qualitative and quantitative research.

### The Fallacy of Placing Confidence in Confidence Intervals


Abstract

Interval estimates – estimates of parameters that include an allowance for sampling uncertainty – have long been touted as a key component of statistical analyses. There are several kinds of interval estimates, but the most popular are confidence intervals (CIs): intervals that contain the true parameter value in some known proportion of repeated samples, on average. The width of confidence intervals is thought to index the precision of an estimate; the parameter values contained within a CI are thought to be more plausible than those outside the interval; and the confidence coefficient of the interval (typically 95%) is thought to index the plausibility that the true parameter is included in the interval. We show in a number of examples that CIs do not necessarily have any of these properties, and generally lead to incoherent inferences. For this reason, we recommend against the use of the method of CIs for inference.

“You keep using that word. I do not think it means what you think it means.” – Inigo Montoya, The Princess Bride (1987)

The development of statistics over the past century has seen the proliferation of methods designed to make inferences from data. Methods vary widely in their philosophical foundations, the questions they are supposed to address, and their frequency of use in practice. One popular and widely-promoted class of methods are interval estimates, which include frequentist confidence intervals, Bayesian credible intervals and highest posterior density (HPD) intervals, fiducial intervals, and likelihood intervals. These procedures differ in their philosophical foundation and computation, but informally are all designed to be estimates of a parameter that account for measurement or sampling uncertainty by yielding a range of values for the parameter instead of a single value. Of the many kinds of interval estimates, the most popular is the confidence interval (CI). Confidence intervals are introduced in almost all introductory statistics texts; they are recommended or required by the methodological guidelines of many prominent journals (e.g., Psychonomics Society, 2012;
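The coverage property this abstract starts from, namely that a 95% CI procedure captures the true parameter in roughly 95% of repeated samples, is a long-run frequentist claim about the procedure, not about any single computed interval. A minimal simulation makes that distinction concrete; this sketch uses an assumed known-sigma z interval with arbitrary illustrative parameters, and is not a reproduction of the paper's own examples.

```python
import math
import random
import statistics

random.seed(7)

TRUE_MEAN, SIGMA, N, Z95 = 10.0, 2.0, 25, 1.959964

def ci95(sample):
    """95% z interval for the mean, assuming sigma is known."""
    m = statistics.mean(sample)
    half = Z95 * SIGMA / math.sqrt(len(sample))
    return m - half, m + half

# Long-run coverage: the fraction of repeated samples whose interval
# contains the fixed true mean should settle near 0.95.
reps = 10_000
hits = 0
for _ in range(reps):
    lo, hi = ci95([random.gauss(TRUE_MEAN, SIGMA) for _ in range(N)])
    hits += lo <= TRUE_MEAN <= hi
print(f"coverage over {reps} replications ≈ {hits / reps:.3f}")
```

Note that nothing in this simulation licenses statements about one particular interval being 95% likely to contain the parameter; that post-data reading is exactly the interpretation the paper argues against.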

### Communication Research: Implications for Hypothesis Development and Testing

, 2011


Abstract

Publication details, including instructions for authors and subscription information: