### Additive effects of stimulus quality and word frequency on eye movements during Chinese reading
### Bayesian models of cognition revisited: Setting optimality aside and letting data drive psychological theory

Recent debates in the psychological literature have raised questions about what assumptions underpin Bayesian models of cognition, and what inferences they license about human cognition. In this paper we revisit this topic, arguing that there are two qualitatively different ways in which a Bayesian model could be constructed. If a Bayesian model is intended to license a claim about optimality then the priors and likelihoods in the model must be constrained by reference to some external criterion. A descriptive Bayesian model need not correspond to any claim that the underlying cognition is optimal or rational, and is used solely as a tool for instantiating a substantive psychological theory. We present three case studies in which these two perspectives lead to different computational models and license different conclusions about human cognition. We argue that the descriptive Bayesian approach is more useful overall, especially when combined with principled tools for model evaluation and model selection. More generally we argue for the importance of making a clear distinction between the two perspectives. Considerable confusion results when descriptive models and optimal models are conflated, and if Bayesians are to avoid contributing to this confusion it is important to avoid making normative claims when none are intended.

### BEESTS: Bayesian estimation of ex-Gaussian stop-signal reaction time distributions

The stop-signal paradigm is frequently used to study response inhibition. In this paradigm, participants perform a two-choice response time task where the primary task is occasionally interrupted by a stop-signal that prompts participants to withhold their response. The primary goal is to estimate the latency of the unobservable stop response (stop-signal reaction time or SSRT). Recently, Matzke, Dolan, Logan, Brown, and Wagenmakers (in press) have developed a Bayesian parametric approach that allows for the estimation of the entire distribution of SSRTs. The Bayesian parametric approach assumes that SSRTs are ex-Gaussian distributed and uses Markov chain Monte Carlo sampling to estimate the parameters of the SSRT distribution. Here we present an efficient and user-friendly software implementation of the Bayesian parametric approach, BEESTS, that can be applied to individual as well as hierarchical stop-signal data. BEESTS comes with an easy-to-use graphical user interface and provides users with summary statistics of the posterior distribution of the parameters as well as various diagnostic tools to assess the quality of the parameter estimates. The software is open source and runs on Windows and OS X operating systems. In sum, BEESTS allows experimental and clinical psychologists to estimate entire distributions of SSRTs and hence facilitates the more rigorous analysis of stop-signal data.
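The ex-Gaussian assumption at the heart of this approach is easy to illustrate by simulation: an ex-Gaussian variate is the sum of a Gaussian and an independent exponential component. The sketch below is illustrative only; the parameter values are hypothetical, not taken from the paper.

```python
import numpy as np

def sample_ex_gaussian(mu, sigma, tau, size, rng):
    """Ex-Gaussian draws: a Gaussian component (mu, sigma) plus an
    independent exponential component with mean tau."""
    return rng.normal(mu, sigma, size) + rng.exponential(tau, size)

rng = np.random.default_rng(0)
# Hypothetical SSRT parameters in ms; the ex-Gaussian mean is mu + tau.
ssrt = sample_ex_gaussian(mu=200.0, sigma=30.0, tau=50.0, size=100_000, rng=rng)
print(round(float(ssrt.mean())))  # close to mu + tau = 250
```

The exponential component gives the distribution its characteristic right skew, which is why the ex-Gaussian is a popular descriptive model for response time data.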

### Bayesian Hierarchical Models

Introduction: The need for hierarchical models

Those of us who study human cognition have no easy task. We try to understand how people functionally represent and process information in performing cognitive activities such as vision, perception, memory, language, and decision making. Fortunately, experimental psychology has a rich theoretical tradition, and there is no shortage of ...

### Working Memory’s Workload Capacity

We examined the role of dual-task interference in working memory using a novel dual 2-back task that requires a redundant-target response (i.e., that neither the auditory nor visual stimulus occurred two back vs. one or both occurred two back) on every trial. Comparisons with performance on single 2-back trials (i.e., with only auditory or only visual stimuli) showed dual-task demands reduced both speed and accuracy. Our task design enabled a novel application of Townsend and Nozawa’s (1995) workload-capacity measure, which revealed that the decrement in dual 2-back performance was mediated by sharing of a limited amount of processing capacity. Relative to most other single and dual n-back tasks, performance measures for our task were more reliable, due to the use of a small stimulus set that induced a high and constant level of proactive interference. For a version of our dual 2-back task that minimized response bias, accuracy was also more strongly correlated with a complex span than has been found for most other single and dual n-back tasks.
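Townsend and Nozawa's (1995) workload-capacity coefficient for redundant-target ("OR") designs compares cumulative hazard functions: C(t) = H_dual(t) / (H_A(t) + H_B(t)), with C(t) = 1 indicating unlimited capacity, C(t) < 1 limited capacity, and C(t) > 1 super capacity. The sketch below estimates it from simulated response times; the toy generator (an independent exponential race) is an assumption chosen for illustration because it predicts C(t) = 1, and is not a model of the authors' 2-back task.

```python
import numpy as np

def cumulative_hazard(rts, t):
    """Estimate the cumulative hazard H(t) = -log S(t) from RT samples."""
    survival = np.mean(rts > t)
    return -np.log(survival)

def capacity_or(rt_dual, rt_a, rt_b, t):
    """Townsend & Nozawa's (1995) OR capacity coefficient:
    C(t) = H_dual(t) / (H_A(t) + H_B(t))."""
    return cumulative_hazard(rt_dual, t) / (
        cumulative_hazard(rt_a, t) + cumulative_hazard(rt_b, t))

# Toy data: an independent parallel race of two exponential channels.
rng = np.random.default_rng(1)
a = rng.exponential(0.5, 200_000)      # single-channel A response times (s)
b = rng.exponential(0.5, 200_000)      # single-channel B response times (s)
dual = np.minimum(a, b)                # redundant target: first channel wins
print(round(float(capacity_or(dual, a, b, t=0.4)), 2))  # near 1.0
```

Capacity sharing of the kind the abstract reports would instead show up as C(t) falling reliably below 1 in the dual-task condition.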

### Bayesian alternatives to null-hypothesis significance testing for ...

Journal of Mathematical Psychology

### Quantifying the time course of similarity

Does the similarity between two items change over time? Previous studies (Goldstone & Medin, 1994; Gentner & Brem, 1999) have found suggestive results but have relied on interpreting complex interaction effects from “deadline” decision tasks in which the decision making process is not well understood (Luce, 1986). Using a self-paced simple decision task in which the similarity between two items can be isolated from strategic decision processes using computational modeling techniques (Ratcliff, 1978), we show strong evidence that the similarity between two items changes over time and shifts in systematic ways. The change in similarity from early to late processing in Experiment 1 is consistent with the theory of structural alignment (Gentner, 1983; Goldstone & Medin, 1994), and Experiment 2 demonstrates evidence for a stronger influence of thematic knowledge than taxonomic knowledge in early processing of word associations (Lin & Murphy, 2001).
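The modeling technique cited (Ratcliff, 1978) is a two-boundary diffusion model, in which noisy evidence accumulates until it reaches an upper or lower response boundary. A minimal simulation sketch, with illustrative parameter values rather than estimates from these experiments:

```python
import numpy as np

def simulate_diffusion(v, a, z, s=0.1, dt=0.001, rng=None):
    """Simulate one trial of a two-boundary diffusion process
    (Ratcliff, 1978): evidence x starts at z and drifts at rate v
    with Gaussian noise (scale s) until it crosses boundary a
    (upper) or 0 (lower). Returns (choice, decision_time)."""
    if rng is None:
        rng = np.random.default_rng()
    x, t = z, 0.0
    while 0.0 < x < a:
        x += v * dt + s * np.sqrt(dt) * rng.normal()
        t += dt
    return (1 if x >= a else 0), t

rng = np.random.default_rng(2)
trials = [simulate_diffusion(v=0.3, a=0.1, z=0.05, rng=rng) for _ in range(500)]
p_upper = float(np.mean([c for c, _ in trials]))
print(round(p_upper, 2))  # positive drift: mostly upper-boundary responses
```

Fitting such a model lets drift rate (here a proxy for momentary similarity) be separated from boundary settings that reflect strategic decision processes, which is the isolation the abstract describes.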

### The Fallacy of Placing Confidence in Confidence Intervals

Interval estimates – estimates of parameters that include an allowance for sampling uncertainty – have long been touted as a key component of statistical analyses. There are several kinds of interval estimates, but the most popular are confidence intervals (CIs): intervals that contain the true parameter value in some known proportion of repeated samples, on average. The width of confidence intervals is thought to index the precision of an estimate; the parameter values contained within a CI are thought to be more plausible than those outside the interval; and the confidence coefficient of the interval (typically 95%) is thought to index the plausibility that the true parameter is included in the interval. We show in a number of examples that CIs do not necessarily have any of these properties, and generally lead to incoherent inferences. For this reason, we recommend against the use of the method of CIs for inference.

"You keep using that word. I do not think it means what you think it means." Inigo Montoya, The Princess Bride (1987)

The development of statistics over the past century has seen the proliferation of methods designed to make inferences from data. Methods vary widely in their philosophical foundations, the questions they are supposed to address, and their frequency of use in practice. One popular and widely-promoted class of methods are interval estimates, which include frequentist confidence intervals, Bayesian credible intervals and highest posterior density (HPD) intervals, fiducial intervals, and likelihood intervals. These procedures differ in their philosophical foundation and computation, but informally are all designed to be estimates of a parameter that account for measurement or sampling uncertainty by yielding a range of values for the parameter instead of a single value. Of the many kinds of interval estimates, the most popular is the confidence interval (CI). Confidence intervals are introduced in almost all introductory statistics texts; they are recommended or required by the methodological guidelines of many prominent journals (e.g., Psychonomic Society, 2012; ...)
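The one property the authors do grant CIs, long-run coverage across repeated samples, is easy to verify by simulation. A minimal sketch using a 95% z-interval for a normal mean with known sigma (all values illustrative):

```python
import numpy as np

# Coverage of a 95% z-interval for a normal mean with known sigma:
# across repeated samples, about 95% of intervals contain mu.
rng = np.random.default_rng(3)
mu, sigma, n, z = 10.0, 2.0, 25, 1.96
reps = 10_000
hits = 0
for _ in range(reps):
    sample = rng.normal(mu, sigma, n)
    xbar = sample.mean()
    half = z * sigma / np.sqrt(n)          # half-width of the interval
    hits += bool(xbar - half <= mu <= xbar + half)
coverage = hits / reps
print(round(coverage, 2))  # about 0.95
```

The paper's point is that this is a statement about the procedure over repeated samples, not about any single computed interval, which is exactly the distinction the fallacies it catalogs ignore.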

### Anticipation in conversational turn-taking (2015)

During conversations participants alternate smoothly between speaker and hearer roles with only brief pauses and overlaps. There are two competing types of accounts about how conversationalists accomplish this: (a) the signaling approach and (b) the anticipatory (‘projection’) approach. We wanted to investigate, first, the relative merits of these two accounts, and second, the relative contribution of semantic and syntactic information to the timing of next turn initiation. We performed three button-press experiments using turn fragments taken from natural conversations to address the following questions: (a) Is turn-taking predominantly based on anticipation or on reaction, and (b) what is the relative contribution of semantic and syntactic information to accurate turn-taking. In our first experiment we gradually manipulated the information available for anticipation of the turn end (from providing information about the turn end in advance to completely removing linguistic information). The results of our first experiment show that the distribution of the participants’ estimation of turn-endings for natural turns is very similar to the distribution for pure anticipation. We conclude that listeners are indeed able to anticipate a turn-end and that this strategy is predominantly used in turn-taking. In Experiment 2 we collected ...