Abstract: Publication date: June 2017 Source:Cognitive Psychology, Volume 95 Author(s): Maxim A. Bushmakin, Ami Eidels, Andrew Heathcote We develop a broad theoretical framework for modelling difficult perceptual information integration tasks under different decision rules. The framework allows us to compare coactive architectures, which combine information before it enters the decision process, with parallel architectures, where logical rules combine independent decisions made about each perceptual source. For both architectures we test the novel hypothesis that participants break the decision rules on some trials, making a response based on only one stimulus even though task instructions require them to consider both. Our models take account of not only the decisions made but also the distribution of the time that it takes to make them, providing an account of speed-accuracy tradeoffs and response biases occurring when one response is required more often than another. We also test a second novel hypothesis, that the nature of the decision rule changes the evidence on which choices are based. We apply the models to data from a perceptual integration task with near threshold stimuli under two different decision rules. The coactive architecture was clearly rejected in favor of logical rules. The logical-rule models were shown to provide an accurate account of all aspects of the data, but only when they allow for response bias and the possibility for subjects to break those rules. We discuss how our framework can be applied more broadly, and its relationship to Townsend and Nozawa’s (1995) Systems-Factorial Technology.
Abstract: Publication date: May 2017 Source:Cognitive Psychology, Volume 94 Author(s): Anna Samara, Kenny Smith, Helen Brown, Elizabeth Wonnacott Languages exhibit sociolinguistic variation, such that adult native speakers condition the usage of linguistic variants on social context, gender, and ethnicity, among other cues. While the existence of this kind of socially conditioned variation is well-established, less is known about how it is acquired. Studies of naturalistic language use by children provide various examples where children’s production of sociolinguistic variants appears to be conditioned on similar factors to adults’ production, but it is difficult to determine whether this reflects knowledge of sociolinguistic conditioning or systematic differences in the input to children from different social groups. Furthermore, artificial language learning experiments have shown that children have a tendency to eliminate variation, a process which could potentially work against their acquisition of sociolinguistic variation. The current study used a semi-artificial language learning paradigm to investigate learning of the sociolinguistic cue of speaker identity in 6-year-olds and adults. Participants were trained and tested on an artificial language where nouns were obligatorily followed by one of two meaningless particles and were produced by one of two speakers (one male, one female). Particle usage was conditioned deterministically on speaker identity (Experiment 1), probabilistically (Experiment 2), or not at all (Experiment 3). Participants were given tests of production and comprehension. In Experiments 1 and 2, both children and adults successfully acquired the speaker identity cue, although the effect was stronger for adults and in Experiment 1. 
In addition, in all three experiments, there was evidence of regularization in participants’ productions, although the type of regularization differed with age: children showed regularization by boosting the frequency of one particle at the expense of the other, while adults regularized by conditioning particle usage on lexical items. Overall, results demonstrate that children and adults are sensitive to speaker identity cues, an ability which is fundamental to tracking sociolinguistic variation, and that children’s well-established tendency to regularize does not prevent them from learning sociolinguistically conditioned variation.
Abstract: Publication date: May 2017 Source:Cognitive Psychology, Volume 94 Author(s): Kyle MacDonald, Daniel Yurovsky, Michael C. Frank Because children hear language in environments that contain many things to talk about, learning the meaning of even the simplest word requires making inferences under uncertainty. A cross-situational statistical learner can aggregate across naming events to form stable word-referent mappings, but this approach neglects an important source of information that can reduce referential uncertainty: social cues from speakers (e.g., eye gaze). In four large-scale experiments with adults, we tested the effects of varying referential uncertainty in cross-situational word learning using social cues. Social cues shifted learners away from tracking multiple hypotheses and towards storing only a single hypothesis (Experiments 1 and 2). In addition, learners were sensitive to graded changes in the strength of a social cue, and when it became less reliable, they were more likely to store multiple hypotheses (Experiment 3). Finally, learners stored fewer word-referent mappings in the presence of a social cue even when given the opportunity to visually inspect the objects for the same amount of time (Experiment 4). Taken together, our data suggest that the representations underlying cross-situational word learning of concrete object labels are quite flexible: In conditions of greater uncertainty, learners store a broader range of information.
Abstract: Publication date: May 2017 Source:Cognitive Psychology, Volume 94 Author(s): Aline Bompas, Craig Hedge, Petroc Sumner Action decisions are considered an emergent property of competitive response activations. As such, decision mechanisms are embedded in, and therefore may differ between, different response modalities. Despite this, the saccadic eye movement system is often promoted as a model for all decisions, especially in the fields of electrophysiology and modelling. Other research traditions predominantly use manual button presses, which have different response distribution profiles and are initiated by different brain areas. Here we tested whether core concepts of action selection models (decision and non-decision times, integration of automatic and selective inputs to threshold, interference across response options, noise, etc.) generalise from saccadic to manual domains. Using two diagnostic phenomena, the remote distractor effect (RDE) and ‘saccadic inhibition’, we find that manual responses are also sensitive to the interference of visual distractors but to a lesser extent than saccades and during a shorter time window. A biologically-inspired model (DINASAUR, based on non-linear input dynamics) can account for both saccadic and manual response distributions and accuracy by simply adjusting the balance and relative timings of transient and sustained inputs, and increasing the mean and variance of non-decisional delays for manual responses. This is consistent with known neurophysiological and anatomical differences between saccadic and manual networks. Thus core decision principles appear to generalise across effectors, consistent with previous work, but we also conclude that key quantitative differences underlie apparent qualitative differences in the literature, such as effects being robustly reported in one modality and unreliable in another.
Abstract: Publication date: May 2017 Source:Cognitive Psychology, Volume 94 Author(s): Youjeong Park, Marianella Casasola We tested young children’s spatial reasoning in a match-to-sample task, manipulating the objects in the task (abstract geometric shapes, line drawings of realistic objects, or both). Korean 4- and 5-year-old children (N = 161) generalized the target spatial configuration (i.e., on, in, above) more easily when the sample used geometric shapes and the choices used realistic objects than the reverse (i.e., realistic-object sample to geometric-shape choices). With within-type stimuli (i.e., sample and choices were both geometric shapes or both realistic objects), 5-year-old, but not 4-year-old, children generalized the spatial relations more easily with geometric shapes than realistic objects. In addition, children who knew more locative terms (e.g., “in”, “on”) performed better on the task, suggesting a link to children’s spatial vocabulary. The results demonstrate an advantage of geometric shapes over realistic objects in facilitating young children’s performance on a match-to-sample spatial reasoning task.
Abstract: Publication date: May 2017 Source:Cognitive Psychology, Volume 94 Author(s): Nicholas J. Sexton, Richard P. Cooper Task inhibition (also known as backward inhibition) is an hypothesised form of cognitive inhibition evident in multi-task situations, with the role of facilitating switching between multiple, competing tasks. This article presents a novel cognitive computational model of a backward inhibition mechanism. By combining aspects of previous cognitive models in task switching and conflict monitoring, the model instantiates the theoretical proposal that backward inhibition is the direct result of conflict between multiple task representations. In a first simulation, we demonstrate that the model produces two effects widely observed in the empirical literature, specifically, reaction time costs for both (n-1) task switches and n-2 task repeats. Through a systematic search of parameter space, we demonstrate that these effects are a general property of the model’s theoretical content, and not specific parameter settings. We further demonstrate that the model captures previously reported empirical effects of inter-trial interval on n-2 switch costs. A final simulation extends the paradigm of switching between tasks of asymmetric difficulty to three tasks, and generates novel predictions for n-2 repetition costs. Specifically, the model predicts that n-2 repetition costs associated with hard-easy-hard alternations are greater than for easy-hard-easy alternations. Finally, we report two behavioural experiments testing this hypothesis, with results consistent with the model predictions.
Abstract: Publication date: March 2017 Source:Cognitive Psychology, Volume 93 Author(s): Thorsten Pachur, Renata S. Suter, Ralph Hertwig Two influential approaches to modeling choice between risky options are algebraic models (which focus on predicting the overt decisions) and models of heuristics (which are also concerned with capturing the underlying cognitive process). Because they rest on fundamentally different assumptions and algorithms, the two approaches are usually treated as antithetical, or even incommensurable. Drawing on cumulative prospect theory (CPT; Tversky & Kahneman, 1992) as the currently most influential instance of a descriptive algebraic model, we demonstrate how the two modeling traditions can be linked. CPT’s algebraic functions characterize choices in terms of psychophysical (diminishing sensitivity to probabilities and outcomes) as well as psychological (risk aversion and loss aversion) constructs. Models of heuristics characterize choices as rooted in simple information-processing principles such as lexicographic and limited search. In computer simulations, we estimated CPT’s parameters for choices produced by various heuristics. The resulting CPT parameter profiles portray each of the choice-generating heuristics in psychologically meaningful ways—capturing, for instance, differences in how the heuristics process probability information. Furthermore, CPT parameters can reflect a key property of many heuristics, lexicographic search, and track the environment-dependent behavior of heuristics. Finally, we show, both in an empirical and a model recovery study, how CPT parameter profiles can be used to detect the operation of heuristics. We also address the limits of CPT’s ability to capture choices produced by heuristics. Our results highlight an untapped potential of CPT as a measurement tool to characterize the information processing underlying risky choice.
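The CPT constructs named in this abstract have standard functional forms (Tversky & Kahneman, 1992): a power value function with a loss-aversion coefficient and an inverse-S probability weighting function. The sketch below is a minimal Python illustration using the 1992 median parameter estimates; its separable (non-cumulative) weighting is a simplification of full rank-dependent CPT, not the authors' simulation code:

```python
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    # Power value function with loss aversion: gains are raised to alpha,
    # losses to beta and scaled by the loss-aversion coefficient lam.
    return x ** alpha if x >= 0 else -lam * ((-x) ** beta)

def weight(p, gamma=0.61):
    # Inverse-S weighting: small probabilities are overweighted,
    # moderate-to-large ones underweighted.
    return p ** gamma / ((p ** gamma + (1 - p) ** gamma) ** (1 / gamma))

def cpt_value(prospect):
    # Subjective value of a list of (outcome, probability) branches,
    # using separable weighting for simplicity.
    return sum(weight(p) * value(x) for x, p in prospect)
```

For example, `cpt_value([(100, 0.05), (0, 0.95)])` exceeds the prospect's expected value because the rare gain's probability is overweighted.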
Abstract: Publication date: March 2017 Source:Cognitive Psychology, Volume 93 Author(s): Ori Plonsky, Ido Erev Analyses of human learning reveal a discrepancy between the long- and the short-term effects of outcomes on subsequent choice. The long-term effect is simple: favorable outcomes increase the choice rate of an alternative whereas unfavorable outcomes decrease it. The short-term effects are more complex. Favorable outcomes can decrease the choice rate of the best option. This pattern violates the positive recency assumption that underlies the popular models of learning. The current research tries to clarify the implications of these results. Analysis of wide sets of learning experiments shows that rare positive outcomes have a wavy recency effect. The probability of risky choice after a successful outcome from risk-taking at trial t is initially (at t+1) relatively high, falls to a minimum at t+2, then increases for about 15 trials, and then decreases again. Rare negative outcomes trigger a wavy reaction when the feedback is complete, but not under partial feedback. The difference between the effects of rare positive and rare negative outcomes and between full and partial feedback settings can be described as a reflection of an interaction of an effort to discover patterns with two other features of human learning: surprise-triggers-change and the hot stove effect. A similarity-based descriptive model is shown to capture all of these interacting phenomena well. In addition, the model outperforms the leading models in capturing the outcomes of data used in the 2010 Technion Prediction Tournament.
Abstract: Publication date: March 2017 Source:Cognitive Psychology, Volume 93 Author(s): Sarah Molouki, Daniel M. Bartels Five studies explore how anticipating different types of personal change affects people’s perceptions of their own self-continuity. The studies find that improvements are seen as less disruptive to personal continuity than worsening or unspecified change, although this difference varies in magnitude based on the type of feature being considered. Also, people’s expectations and desires matter. For example, a negative change is highly disruptive to perceived continuity when people expect improvement and less disruptive when people expect to worsen. The finding that some types of change are consistent with perceptions of self-continuity suggests that the self-concept may include beliefs about personal development.
Pages: 1 - 23 Abstract: Publication date: December 2016 Source:Cognitive Psychology, Volume 91 Author(s): Ya-Ning Chang, Stephen Welbourne, Chia-Ying Lee Orthographic neighborhood (N) size effects have been extensively studied in English, consistently producing a facilitatory effect in word naming tasks. In contrast, several recent studies on Chinese character naming have demonstrated an inhibitory effect of neighborhood size. Responses tend to be slower for inconsistent characters with large neighborhoods relative to small neighborhoods. These differences in neighborhood effects between languages may depend on the characteristics (depth) of the mapping between orthography and phonology. To explore this, we first conducted a behavioral experiment to investigate the relationship between neighborhood size, consistency and reading response. The results showed an inhibitory effect of neighborhood size for inconsistent characters but a facilitatory effect for consistent characters. We then developed two computational models based on parallel distributed processing principles to try and capture the nature of the processing that leads to these results in Chinese character naming. Simulations using models based on the triangle model of reading indicated that consistency and neighborhood size interact with the division of labor between semantics and phonology to produce these effects.
Pages: 24 - 62 Abstract: Publication date: December 2016 Source:Cognitive Psychology, Volume 91 Author(s): Wei (Sophia) Deng, Vladimir M. Sloutsky How do people learn categories and what changes with development? The current study attempts to address these questions by focusing on the role of attention in the development of categorization. In Experiment 1, participants (adults, 7-year-olds, and 4-year-olds) were trained with novel categories consisting of deterministic and probabilistic features, and their categorization and memory for features were tested. In Experiment 2, participants’ attention was directed to the deterministic feature, and in Experiment 3 it was directed to the probabilistic features. Attentional cueing affected categorization and memory in adults and 7-year-olds: these participants relied on the cued features in their categorization and exhibited better memory of cued than of non-cued features. In contrast, in 4-year-olds attentional cueing affected only categorization, but not memory: these participants exhibited equally good memory for both cued and non-cued features. Furthermore, across the experiments, 4-year-olds remembered non-cued features better than adults. These results coupled with computational simulations provide novel evidence (1) pointing to differences in category representation and mechanisms of categorization across development, (2) elucidating the role of attention in the development of categorization, and (3) suggesting an important distinction between representation and decision factors in categorization early in development. These issues are discussed with respect to theories of categorization and its development.
Pages: 1 - 47 Abstract: Publication date: November 2016 Source:Cognitive Psychology, Volume 90 Author(s): Brandon M. Turner, Dan R. Schley Few experimental effects in the psychology of judgment and decision making have been studied as meticulously as the anchoring effect. Although the existing literature provides considerable insight into the psychological processes underlying anchoring effects, extant theories up to this point have only generated qualitative predictions. While these theories have been productive in advancing our understanding of the underlying anchoring process, they leave much to be desired in the interpretation of specific anchoring effects. In this article, we introduce the Anchor Integration Model (AIM) as a descriptive tool for the measurement and quantification of anchoring effects. We develop two versions of the model: one suitable for assessing between-participant anchoring effects, and another for assessing individual differences in anchoring effects. We then fit each model to data from two experiments, and demonstrate the model’s utility in describing anchoring effects.
Pages: 48 - 70 Abstract: Publication date: November 2016 Source:Cognitive Psychology, Volume 90 Author(s): Filipp Schmidt, Roland W. Fleming Morphogenesis—or the origin of complex natural form—has long fascinated researchers from practically every branch of science. However, we know practically nothing about how we perceive and understand such processes. Here, we measured how observers visually infer shape-transforming processes. Participants viewed pairs of objects (‘before’ and ‘after’ a transformation) and identified points that corresponded across the transformation. This allowed us to map out in spatial detail how perceived shape and space were affected by the transformations. Participants’ responses were strikingly accurate and mutually consistent for a wide range of non-rigid transformations including complex growth-like processes. A zero-free-parameter model based on matching and interpolating/extrapolating the positions of high-salience contour features predicts the data surprisingly well, suggesting observers infer spatial correspondences relative to key landmarks. Together, our findings reveal the operation of specific perceptual organization processes that make us remarkably adept at identifying correspondences across complex shape-transforming processes by using salient object features. We suggest that these abilities, which allow us to parse and interpret the causally significant features of shapes, are invaluable for many tasks that involve ‘making sense’ of shape.
Pages: 1 - 38 Abstract: Publication date: September 2016 Source:Cognitive Psychology, Volume 89 Author(s): Daniel R. Little, Tony Wang, Robert M. Nosofsky Among the most fundamental results in the area of perceptual classification are the “correlated facilitation” and “filtering interference” effects observed in Garner’s (1974) speeded categorization tasks: In the case of integral-dimension stimuli, relative to a control task, single-dimension classification is faster when there is correlated variation along a second dimension, but slower when there is orthogonal variation that cannot be filtered out (e.g., by attention). These fundamental effects may result from participants’ use of a trial-by-trial bypass strategy in the control and correlated tasks: The observer changes the previous category response whenever the stimulus changes, and maintains responses if the stimulus repeats. Here we conduct modified versions of the Garner tasks that eliminate the availability of a pure bypass strategy. The fundamental facilitation and interference effects remain, but are still largely explainable in terms of pronounced sequential effects in all tasks. We develop sequence-sensitive versions of exemplar-retrieval and decision-bound models aimed at capturing the detailed, trial-by-trial response-time distribution data. The models combine assumptions involving: (i) strengthened perceptual/memory representations of stimuli that repeat across consecutive trials, and (ii) a bias to change category responses on trials in which the stimulus changes. These models can predict our observed effects and provide a more complete account of the underlying bases of performance in our modified Garner tasks.
Pages: 39 - 70 Abstract: Publication date: September 2016 Source:Cognitive Psychology, Volume 89 Author(s): Samuel G.B. Johnson, Greeshma Rajeev-Kumar, Frank C. Keil Much of cognition allows us to make sense of things by explaining observable evidence in terms of unobservable explanations, such as category memberships and hidden causes. Yet we must often make such explanatory inferences with incomplete evidence, where we are ignorant about some relevant facts or diagnostic features. In seven experiments, we studied how people make explanatory inferences under these uncertain conditions, testing the possibility that people attempt to infer the presence or absence of diagnostic evidence on the basis of other cues such as evidence base rates (even when these cues are normatively irrelevant) and then proceed to make explanatory inferences on the basis of the inferred evidence. Participants followed this strategy in both diagnostic causal reasoning (Experiments 1–4, 7) and in categorization (Experiments 5–6), leading to illusory inferences. Two processing predictions of this account were also confirmed, concerning participants’ evidence-seeking behavior (Experiment 4) and their beliefs about the likely presence or absence of the evidence (Experiment 5). These findings reveal deep commonalities between superficially distinct forms of diagnostic reasoning—causal reasoning and classification—and point toward common inferential machinery across explanatory tasks.
Pages: 71 - 105 Abstract: Publication date: September 2016 Source:Cognitive Psychology, Volume 89 Author(s): Philip L. Smith, Simon D. Lilburn, Elaine A. Corbett, David K. Sewell, Søren Kyllingsbæk We investigated the capacity of visual short-term memory (VSTM) in a phase discrimination task that required judgments about the configural relations between pairs of black and white features. Sewell et al. (2014) previously showed that VSTM capacity in an orientation discrimination task was well described by a sample-size model, which views VSTM as a resource comprised of a finite number of noisy stimulus samples. The model predicts the invariance of ∑ᵢ(dᵢ′)², the sum of squared sensitivities across items, for displays of different sizes. For phase discrimination, the set-size effect significantly exceeded that predicted by the sample-size model for both simultaneously and sequentially presented stimuli. Instead, the set-size effect and the serial position curves with sequential presentation were predicted by an attention-weighted version of the sample-size model, which assumes that one of the items in the display captures attention and receives a disproportionate share of resources. The choice probabilities and response time distributions from the task were well described by a diffusion decision model in which the drift rates embodied the assumptions of the attention-weighted sample-size model.
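The sample-size model's capacity constraint can be illustrated with a short sketch: if the total resource ∑ᵢ(dᵢ′)² is fixed at d₁², equal allocation gives each of n items a sensitivity of d₁/√n, while an attention-weighted version hands one item a disproportionate share w. The specific allocation rule below is an illustrative assumption, not the authors' implementation:

```python
import math

def equal_allocation(d1, n):
    # Equal split of a fixed resource: each of n items gets d' = d1 / sqrt(n),
    # so the sum of squared sensitivities equals d1**2 at every display size.
    return [d1 / math.sqrt(n)] * n

def attention_weighted(d1, n, w=0.5):
    # One attended item captures share w of the resource; the other
    # n - 1 items split the remainder evenly (hypothetical rule).
    if n == 1:
        return [d1]
    unattended = math.sqrt((1 - w) / (n - 1)) * d1
    return [math.sqrt(w) * d1] + [unattended] * (n - 1)
```

Under either scheme the squared sensitivities sum to the same constant; the attention-weighted scheme simply redistributes it, which is what produces the steeper set-size effect for the unattended items.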
Pages: 106 - 133 Abstract: Publication date: September 2016 Source:Cognitive Psychology, Volume 89 Author(s): Fintan Costello, Paul Watts A common view in current psychology is that people estimate probabilities using various ‘heuristics’ or rules of thumb that do not follow the normative rules of probability theory. We present a model where people estimate conditional probabilities such as P(A|B) (the probability of A given that B has occurred) by a process that follows standard frequentist probability theory but is subject to random noise. This model accounts for various results from previous studies of conditional probability judgment. This model predicts that people’s conditional probability judgments will agree with a series of fundamental identities in probability theory whose form cancels the effect of noise, while deviating from probability theory in other expressions whose form does not allow such cancellation. Two experiments strongly confirm these predictions, with people’s estimates on average agreeing with probability theory for the noise-cancelling identities, but deviating from probability theory (in just the way predicted by the model) for other identities. This new model subsumes an earlier model of unconditional or ‘direct’ probability judgment which explains a number of systematic biases seen in direct probability judgment (Costello & Watts, 2014). This model may thus provide a fully general account of the mechanisms by which people estimate probabilities.
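The noise mechanism behind this account (following Costello & Watts, 2014) can be sketched as reading each stored event with some probability d of it randomly flipping, so the expected estimate of a true probability p is (1 − 2d)p + d, regressed toward 0.5. The simulation below is an illustrative sketch under that assumption, not the authors' code:

```python
import random

def noisy_probability_estimate(events, d, rng):
    # Each stored binary event is read correctly with probability 1 - d
    # and flipped with probability d; the estimate is the mean of the reads.
    reads = [e if rng.random() >= d else 1 - e for e in events]
    return sum(reads) / len(reads)

# With true p = 0.7 and noise d = 0.2, the expected estimate is
# (1 - 2*0.2) * 0.7 + 0.2 = 0.62: read noise pulls estimates toward 0.5.
```

Identities whose algebraic form cancels the (1 − 2d) and d terms are then predicted to hold in people's judgments even though each individual estimate is biased.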
Pages: 1 - 21 Abstract: Publication date: August 2016 Source:Cognitive Psychology, Volume 88 Author(s): Kimele Persaud, Pernille Hemmer Bayesian models of cognition assume that prior knowledge about the world influences judgments. Recent approaches have suggested that the loss of fidelity from working to long-term (LT) memory is simply due to an increased rate of guessing (e.g. Brady, Konkle, Gill, Oliva, & Alvarez, 2013). That is, recall is the result of either remembering (with some noise) or guessing. This stands in contrast to Bayesian models of cognition, which assume that prior knowledge about the world influences judgments, and that recall is a combination of expectations learned from the environment and noisy memory representations. Here, we evaluate the time course of fidelity in LT episodic memory, and the relative contribution of prior category knowledge and guessing, using a continuous recall paradigm. At an aggregate level, performance reflects a high rate of guessing. However, when aggregate data is partitioned by lag (i.e., the number of presentations from study to test), or is un-aggregated, performance appears to be more complex than just remembering with some noise and guessing. We implemented three models: the standard remember-guess model, a three-component remember-guess model, and a Bayesian mixture model and evaluated these models against the data. The results emphasize the importance of taking into account the influence of prior category knowledge on memory.
Pages: 22 - 60 Abstract: Publication date: August 2016 Source:Cognitive Psychology, Volume 88 Author(s): Steven G. Luke, Kiel Christianson Efficient language processing may involve generating expectations about upcoming input. To investigate the extent to which prediction might facilitate reading, a large-scale survey provided cloze scores for all 2689 words in 55 different text passages. Highly predictable words were quite rare (5% of content words), and most words had a more-expected competitor. An eye-tracking study showed sensitivity to cloze probability but no mis-prediction cost. Instead, the presence of a more-expected competitor was found to be facilitative in several measures. Further, semantic and morphosyntactic information was highly predictable even when word identity was not, and this information facilitated reading above and beyond the predictability of the full word form. The results are consistent with graded prediction but inconsistent with full lexical prediction. Implications for theories of prediction in language comprehension are discussed.
Pages: 61 - 87 Abstract: Publication date: August 2016 Source:Cognitive Psychology, Volume 88 Author(s): Henrik Singmann, Karl Christoph Klauer, Sieghard Beller The present research examines descriptive models of probabilistic conditional reasoning, that is, of reasoning from uncertain conditionals with contents about which reasoners have rich background knowledge. According to our dual-source model, two types of information shape such reasoning: knowledge-based information elicited by the contents of the material and content-independent information derived from the form of inferences. Two experiments implemented manipulations that selectively influenced the model parameters for the knowledge-based information, the relative weight given to form-based versus knowledge-based information, and the parameters for the form-based information, validating the psychological interpretation of these parameters. We apply the model to classical suppression effects, dissecting them into effects on background knowledge and effects on form-based processes (Exp. 3), and we use it to reanalyse previous studies manipulating reasoning instructions. In a model-comparison exercise, based on data of seven studies, the dual-source model outperformed three Bayesian competitor models. Overall, our results support the view that people make use of background knowledge in line with current Bayesian models, but they also suggest that the form of the conditional argument, irrespective of its content, plays a substantive, yet smaller, role.
Pages: 88 - 114 Abstract: Publication date: August 2016 Source:Cognitive Psychology, Volume 88 Author(s): Arthur G. Samuel In listening to speech, people have been shown to apply several types of adjustment to their phonemic categories that take into account variations in the prevailing linguistic environment. These adjustments include selective adaptation, lexically driven recalibration, and audiovisually determined recalibration. Prior studies have used dual task procedures to test whether these adjustments are automatic or if they require attention, and all of these tests have supported automaticity. The current study instead uses a method of targeted distraction to demonstrate that lexical recalibration does in fact require attention. Building on this finding, the targeted distraction method is used to measure the period of time during which the lexical percept remains malleable. The results support a processing window of approximately one second, consistent with the results of a small number of prior studies that bear on this question. The results also demonstrate that recalibration is closely linked to the completion of lexical access.
Authors:Anna Shusterman; Peggy Li Pages: 115 - 161 Abstract: Publication date: August 2016 Source:Cognitive Psychology, Volume 88 Author(s): Anna Shusterman, Peggy Li Languages differ in how they encode spatial frames of reference. It is unknown how children acquire the particular frame-of-reference terms in their language (e.g., left/right, north/south). The present paper uses a word-learning paradigm to investigate 4-year-old English-speaking children’s acquisition of such terms. In Part I, with five experiments, we contrasted children’s acquisition of novel word pairs meaning left-right and north-south to examine their initial hypotheses and the relative ease of learning the meanings of these terms. Children interpreted ambiguous spatial terms as having environment-based meanings akin to north and south, and they readily learned and generalized north-south meanings. These studies provide the first direct evidence that children invoke geocentric representations in spatial language acquisition. However, the studies leave unanswered how children ultimately acquire “left” and “right.” In Part II, with three more experiments, we investigated why children struggle to master body-based frame-of-reference words. Children successfully learned “left” and “right” when the novel words were systematically introduced on their own bodies and extended these words to novel (intrinsic and relative) uses; however, they had difficulty learning to talk about the left and right sides of a doll. This difficulty was paralleled in identifying the left and right sides of the doll in a non-linguistic memory task. In contrast, children had no difficulties learning to label the front and back sides of a doll. These studies begin to paint a detailed account of the acquisition of spatial terms in English, and provide insights into the origins of diverse spatial reference frames in the world’s languages.
Authors:Mathieu Le Corre; Peggy Li; Becky H. Huang; Gisela Jia; Susan Carey Pages: 162 - 186 Abstract: Publication date: August 2016 Source:Cognitive Psychology, Volume 88 Author(s): Mathieu Le Corre, Peggy Li, Becky H. Huang, Gisela Jia, Susan Carey Previous studies showed that children learning a language with an obligatory singular/plural distinction (Russian and English) learn the meaning of the number word for one earlier than children learning Japanese, a language without obligatory number morphology (Barner, Libenson, Cheung, & Takasaki, 2009; Sarnecka, Kamenskaya, Yamana, Ogura, & Yudovina, 2007). This can be explained by differences in number morphology, but it can also be explained by many other differences between the languages and the environments of the children who were compared. The present study tests the hypothesis that the morphological singular/plural distinction supports the early acquisition of the meaning of the number word for one by comparing young English learners to age- and SES-matched young Mandarin Chinese learners. Mandarin does not have obligatory number morphology but is more similar to English than Japanese in many crucial respects. Corpus analyses show that, compared to English learners, Mandarin learners hear number words more frequently, are more likely to hear number words followed by a noun, and are more likely to hear number words in contexts where they denote a cardinal value. Two tasks show that, despite these advantages, Mandarin learners learn the meaning of the number word for one three to six months later than do English learners. These results provide the strongest evidence to date that prior knowledge of the numerical meaning of the distinction between singular and plural supports the acquisition of the meaning of the number word for one.
Authors:Caitlin Tenison; Jon M. Fincham; John R. Anderson Pages: 1 - 28 Abstract: Publication date: June 2016 Source:Cognitive Psychology, Volume 87 Author(s): Caitlin Tenison, Jon M. Fincham, John R. Anderson This fMRI study examines the changes in participants’ information processing as they repeatedly solve the same mathematical problem. We show that the majority of practice-related speedup is produced by discrete changes in cognitive processing. Because the points at which these changes take place vary from problem to problem, and the underlying information processing steps vary in duration, the existence of such discrete changes can be hard to detect. Using two converging approaches, we establish the existence of three learning phases. When solving a problem in one of these learning phases, participants can go through three cognitive stages: Encoding, Solving, and Responding. Each cognitive stage is associated with a unique brain signature. Using a bottom-up approach combining multi-voxel pattern analysis and hidden semi-Markov modeling, we identify the duration of that stage on any particular trial from participants’ brain activation patterns. For our top-down approach we developed an ACT-R model of these cognitive stages and simulated how they change over the course of learning. The Solving stage of the first learning phase is long and involves a sequence of arithmetic computations. Participants transition to the second learning phase when they can retrieve the answer, thereby drastically reducing the duration of the Solving stage. With continued practice, participants then transition to the third learning phase when they recognize the problem as a single unit and produce the answer as an automatic response. The duration of this third learning phase is dominated by the Responding stage.
Authors:Roman Feiman; Jesse Snedeker Pages: 29 - 52 Abstract: Publication date: June 2016 Source:Cognitive Psychology, Volume 87 Author(s): Roman Feiman, Jesse Snedeker Quantifier words like each, every, all and three are among the most abstract words in language. Unlike nouns, verbs and adjectives, the meanings of quantifiers are not related to a referent out in the world. Rather, quantifiers specify what relationships hold between the sets of entities, events and properties denoted by other words. When two quantifiers are in the same clause, they create a systematic ambiguity. “Every kid climbed a tree” could mean that there was only one tree, climbed by all, or many different trees, one per climbing kid. In the present study, participants chose a picture to indicate their preferred reading of different ambiguous sentences – those containing every, as well as the other three quantifiers. In Experiment 1, we found large systematic differences in preference, depending on the quantifier word. In Experiment 2, we then manipulated the choice of a particular reading of one sentence, and tested how this affected participants’ reading preference on a subsequent target sentence. We found a priming effect for all quantifiers, but only when the prime and target sentences contained the same quantifier. For example, all-a sentences prime other all-a sentences, while each-a primes each-a, but sentences with each do not prime sentences with all or vice versa. In Experiment 3, we ask whether the lack of priming across quantifiers could be due to the two sentences sharing one fewer word. We find that changing the verb between the prime and target sentence does not reduce the priming effect. In Experiment 4, we discover one case where there is priming across quantifiers – when one number (e.g. three) is in the prime, and a different one (e.g. four) is in the target. 
We discuss how these findings relate to linguistic theories of quantifier meaning and what they tell us about the division of labor between conceptual content and combinatorial semantics, as well as the mental representations of quantification and of the abstract logical structure of language.
Authors:Matthew G. Buckley; Alastair D. Smith; Mark Haselgrove Pages: 53 - 87 Abstract: Publication date: June 2016 Source:Cognitive Psychology, Volume 87 Author(s): Matthew G. Buckley, Alastair D. Smith, Mark Haselgrove The way in which human and non-human animals represent the shape of their environments remains a contentious issue. According to local theories of shape learning, organisms encode the local geometric features of the environment that signal a goal location. In contrast, global theories of shape learning suggest that organisms encode the overall shape of the environment. There is, however, a surprising lack of evidence to support this latter claim, despite the fact that common behaviours seem to require encoding of the global shape of an environment. We tested one such behaviour in 5 experiments, in which human participants were trained to navigate to a hidden goal on one side of a virtual arena (e.g. the inside) before being required to find the same point on the alternative side (e.g. the outside). Participants navigated to the appropriate goal location, both when inside and outside the virtual arena, but only when the shape of the arena remained the same between training and test (Experiments 1a and 1b). When the arena shape was transformed between these stages, participants were lost (Experiments 2a and 2b). When training and testing were conducted on the outside of two different-shaped arenas that shared local geometric cues, participants once again explored the appropriate goal location (Experiment 3). These results provide core evidence that humans encode a global representation of the overall shape of the environments in, or around, which they navigate.
Authors:Benjamin M. Rottman; Reid Hastie Pages: 88 - 134 Abstract: Publication date: June 2016 Source:Cognitive Psychology, Volume 87 Author(s): Benjamin M. Rottman, Reid Hastie Making judgments by relying on beliefs about the causal relationships between events is a fundamental capacity of everyday cognition. In the last decade, Causal Bayesian Networks have been proposed as a framework for modeling causal reasoning. Two experiments were conducted to provide comprehensive data sets with which to evaluate a variety of different types of judgments in comparison to the standard Bayesian network calculations. Participants were introduced to a fictional system of three events and observed a set of learning trials that instantiated the multivariate distribution relating the three variables. We tested inferences on chains X1 → Y → X2, common cause structures X1 ← Y → X2, and common effect structures X1 → Y ← X2, on binary and numerical variables, and with high and intermediate causal strengths. We tested transitive inferences, inferences when one variable is irrelevant because it is blocked by an intervening variable (Markov Assumption), inferences from two variables to a middle variable, and inferences about the presence of one cause when the alternative cause was known to have occurred (the normative “explaining away” pattern). Compared to the normative account, in general, when the judgments should change, they change in the normative direction. However, we also discuss a few persistent violations of the standard normative model. In addition, we evaluate the relative success of 12 theoretical explanations for these deviations.
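The normative transitive inference the abstract refers to can be sketched numerically. This is an illustrative computation of the Causal Bayesian Network prediction on a chain X1 → Y → X2 with binary variables; all probabilities below are hypothetical, not the parameters used in the experiments:

```python
# Normative prediction for a transitive inference on a chain X1 -> Y -> X2:
# P(X2=1 | X1=1) is obtained by marginalizing over the middle variable Y:
#   P(X2=1 | X1=1) = sum_y P(X2=1 | Y=y) * P(Y=y | X1=1)

def chain_inference(p_y_given_x1, p_x2_given_y):
    """P(X2=1 | X1=1) on a chain X1 -> Y -> X2 with binary variables.

    p_y_given_x1 : dict mapping X1 value (0/1) to P(Y=1 | X1)
    p_x2_given_y : dict mapping Y value (0/1) to P(X2=1 | Y)
    """
    p_y1 = p_y_given_x1[1]                   # P(Y=1 | X1=1)
    return (p_x2_given_y[1] * p_y1           # path through Y=1
            + p_x2_given_y[0] * (1 - p_y1))  # path through Y=0

# Hypothetical "high causal strength" parameters:
p = chain_inference({0: 0.2, 1: 0.9}, {0: 0.1, 1: 0.8})  # = 0.8*0.9 + 0.1*0.1
```

The Markov Assumption tested in the study is visible in this factorization: on the chain, P(X2 | X1, Y) = P(X2 | Y), so once Y is known, X1 is normatively irrelevant to X2.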
Authors:Tom Hartley; Mark J. Hurlstone; Graham J. Hitch Pages: 135 - 178 Abstract: Publication date: June 2016 Source:Cognitive Psychology, Volume 87 Author(s): Tom Hartley, Mark J. Hurlstone, Graham J. Hitch Immediate memory for spoken sequences depends on their rhythm – different levels of accuracy and patterns of error are seen according to the way in which items are spaced in time. Current models address these phenomena only partially or not at all. We investigate the idea that temporal grouping effects are an emergent property of a general serial ordering mechanism based on a population of oscillators locally-sensitive to amplitude modulations on different temporal scales. Two experiments show that the effects of temporal grouping are independent of the predictability of the grouping pattern, consistent with this model’s stimulus-driven mechanism and inconsistent with alternative accounts in terms of top-down processes. The second experiment reports detailed and systematic differences in the recall of irregularly grouped sequences that are broadly consistent with predictions of the new model. We suggest that the bottom-up multi-scale population oscillator (or BUMP) mechanism is a useful starting point for a general account of serial order in language processing more widely.
Abstract: Publication date: February 2017 Source:Cognitive Psychology, Volume 92 Author(s): Oliver Herbort, Hanna Mathew, Wilfried Kunde Object-directed grasping movements are adapted to intended interactions with an object. We address whether adjusting the grasp for object manipulation is controlled habitually, based on past experiences, or by goal-directed planning, based on an evaluation of the expected action outcomes. Therefore, we asked participants to grasp and rotate a dial. In such tasks, participants typically grasp the dial with an excursed, uncomfortable arm posture, which then allows them to complete the dial rotation in a comfortable end-state. We extended this task by manipulating the contingency between the orientation of the grasp and the resulting end-state of the arm. A one-step (control) group rotated the dial to a single target. A two-step group rotated the dial to an initial target and then in the opposite direction. A three-step group rotated the dial to the initial target, then in the opposite direction, and then back to the initial target. During practice, the two-step and three-step groups reduced the excursion of their grasps, thus avoiding overly excursed arm postures after the second rotation. When the two-step and three-step groups were asked to execute one-step rotations, their grasps resembled those that were acquired during the two-step and three-step rotations, respectively. However, the carry-over was not complete. This suggests that adjusting grasps for forthcoming object manipulations is controlled by a mixture of habitual and goal-directed processes. In the present experiment, the former contributed approximately twice as much to grasp selection as the latter.
Abstract: Publication date: February 2017 Source:Cognitive Psychology, Volume 92 Author(s): Ansgar D. Endress, Alan Langus Learners often need to extract recurring items from continuous sequences, in both vision and audition. The best-known example is probably found in word-learning, where listeners have to determine where words start and end in fluent speech. This could be achieved through universal and experience-independent statistical mechanisms, for example by relying on Transitional Probabilities (TPs). Further, these mechanisms might allow learners to store items in memory. However, previous investigations have yielded conflicting evidence as to whether a sensitivity to TPs is diagnostic of the memorization of recurring items. Here, we address this issue in the visual modality. Participants were familiarized with a continuous sequence of visual items (i.e., arbitrary or everyday symbols), and then had to choose between (i) high-TP items that appeared in the sequence, (ii) high-TP items that did not appear in the sequence, and (iii) low-TP items that appeared in the sequence. Items matched in TPs but differing in (chunk) frequency were much harder to discriminate than items differing in TPs (with no significant sensitivity to chunk frequency), and learners preferred unattested high-TP items over attested low-TP items. Contrary to previous claims, these results cannot be explained on the basis of the similarity of the test items. Learners thus weigh within-item TPs higher than the frequency of the chunks, even when the TP differences are relatively subtle. We argue that these results are problematic for distributional clustering mechanisms that analyze continuous sequences, and provide supporting computational results. We suggest that the role of TPs might not be to memorize items per se, but rather to prepare learners to memorize recurring items once they are presented in subsequent learning situations with richer cues.
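The within-item versus between-item statistic at issue here can be made concrete. Below is a minimal sketch of forward transitional probabilities, using a made-up letter sequence in place of the visual symbols used in the study:

```python
# Forward Transitional Probabilities (TPs), the statistic the abstract
# refers to: TP(b | a) = count(a immediately followed by b) / count(a).
from collections import Counter

def transitional_probabilities(sequence):
    pair_counts = Counter(zip(sequence, sequence[1:]))   # adjacent pairs
    first_counts = Counter(sequence[:-1])                # occurrences as a "first" item
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# In this toy stream, "A" is always followed by "B" (within-item TP = 1.0),
# while "B" is followed by "C" or "D" (between-item TPs < 1.0).
tps = transitional_probabilities(list("ABCABDABC"))
```

High within-item TPs and lower TPs at item boundaries are what would let a learner segment the continuous sequence into recurring chunks.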
Abstract: Publication date: February 2017 Source:Cognitive Psychology, Volume 92 Author(s): Aslı Kılıç, Amy H. Criss, Kenneth J. Malmberg, Richard M. Shiffrin Differentiation is a theory that originally emerged from the perception literature and proposes that with experience, the representation of stimuli becomes more distinct from, or less similar to, the representation of other stimuli. In recent years, differentiation has come to play a critical role in models of memory. Differentiation mechanisms have been implemented in episodic memory models by assuming that information about new experiences with a stimulus in a particular context accumulates in a single memory trace, and that these updated memory traces become more distinct from the representations of other stimuli. A key implication of such models is that well-encoded events are less confusable with other events. This prediction is particularly relevant for two important phenomena. One is the role of encoding strength in memory. The strength-based mirror effect is the finding of higher hit rates and lower false alarm rates for a list composed of all strongly encoded items compared to a list composed of all weakly encoded items. The other is output interference, the finding that accuracy decreases across a series of test trials. Results from four experiments show a tight coupling between these two empirical phenomena, such that strongly encoded target items are less prone to interference. By proposing a process model and evaluating its predictions, we show how a single theoretical principle, differentiation, provides a unified explanation for these effects.
Abstract: Publication date: February 2017 Source:Cognitive Psychology, Volume 92 Author(s): Katharine A. Tillman, Tyler Marghetis, David Barner, Mahesh Srinivasan Deictic time words like “yesterday” and “tomorrow” pose a challenge to children not only because they are abstract, and label periods in time, but also because their denotations vary according to the time at which they are uttered: Monday’s “tomorrow” is different than Thursday’s. Although children produce these words as early as age 2 or 3, they do not use them in adult-like ways for several subsequent years. Here, we explored whether children have partial but systematic meanings for these words during the long delay before adult-like usage. We asked 3- to 8-year-olds to represent these words on a bidirectional, left-to-right timeline that extended from the past (infancy) to the future (adulthood). This method allowed us to independently probe knowledge of these words’ deictic status (e.g., “yesterday” is in the past), relative ordering (e.g., “last week” was before “yesterday”), and remoteness from the present (e.g., “last week” was about 7 times longer ago than “yesterday”). We found that adult-like knowledge of deictic status and order emerge in synchrony, between ages 4 and 6, but that knowledge of remoteness emerges later, after age 7. Our findings suggest that children’s early use of deictic time words is not random, but instead reflects the gradual construction of a structured lexical domain.
Abstract: Publication date: February 2017 Source:Cognitive Psychology, Volume 92 Author(s): Adam F. Osth, Simon Dennis, Andrew Heathcote The mirror effect – a phenomenon whereby a manipulation produces opposite effects on hit and false alarm rates – is a benchmark regularity of recognition memory. A likelihood ratio decision process, basing recognition on the relative likelihood that a stimulus is a target or a lure, naturally predicts the mirror effect, and so has been widely adopted in quantitative models of recognition memory. Glanzer, Hilford, and Maloney (2009) demonstrated that likelihood ratio models, assuming Gaussian memory strength, are also capable of explaining regularities observed in receiver-operating characteristics (ROCs), such as greater target than lure variance. Despite its central place in theorising about recognition memory, however, this class of models has not been tested using response time (RT) distributions. In this article, we develop a linear approximation to the likelihood ratio transformation, which we show predicts the same regularities as the exact transformation. This enabled us to develop a tractable model of recognition-memory RT based on the diffusion decision model (DDM), with inputs (drift rates) provided by an approximate likelihood ratio transformation. We compared this “LR-DDM” to a standard DDM where all targets and lures receive their own drift rate parameters. Both were implemented as hierarchical Bayesian models and applied to four datasets. Model selection taking into account parsimony favored the LR-DDM, which requires fewer parameters than the standard DDM but still fits the data well. These results support log-likelihood based models as providing an elegant explanation of the regularities of recognition memory, not only in terms of the choices made but also in terms of the times it takes to make them.
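The exact likelihood ratio transformation that the LR-DDM approximates can be sketched for the Gaussian case. The parameter values below are hypothetical; setting the target standard deviation above 1 mirrors the greater-target-than-lure variance mentioned in the abstract:

```python
# Exact log-likelihood ratio transformation of a memory strength x under
# Gaussian target and lure distributions (hypothetical parameters):
#   LLR(x) = log N(x; mu_t, sd_t) - log N(x; 0, 1)
import math

def log_likelihood_ratio(x, mu_t=1.0, sd_t=1.25):
    def log_norm(v, mu, sd):
        # log density of a Normal(mu, sd) evaluated at v
        return -math.log(sd * math.sqrt(2 * math.pi)) - (v - mu) ** 2 / (2 * sd ** 2)
    return log_norm(x, mu_t, sd_t) - log_norm(x, 0.0, 1.0)

# The mirror effect falls out of the transformation: strengths typical of
# targets map to positive log odds, strengths typical of lures to negative.
```

With unequal variances the exact LLR is quadratic in x, which is why a tractable RT model needs the linear approximation the authors develop.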
Abstract: Publication date: February 2017 Source:Cognitive Psychology, Volume 92 Author(s): Pierina Cheung, Miriam Rubenson, David Barner Recent accounts of number word learning posit that when children learn to accurately count sets (i.e., become “cardinal principle” or “CP” knowers), they have a conceptual insight about how the count list implements the successor function – i.e., that every natural number n has a successor defined as n + 1 (Carey, 2004, 2009; Sarnecka & Carey, 2008). However, recent studies suggest that knowledge of the successor function emerges sometime after children learn to accurately count, though it remains unknown when this occurs, and what causes this developmental transition. We tested knowledge of the successor function in 100 children aged 4 through 7 and asked how age and counting ability are related to: (1) children’s ability to infer the successors of all numbers in their count list and (2) knowledge that all numbers have a successor. We found that children do not acquire these two facets of the successor function until they are about 5½ or 6 years of age – roughly 2 years after they learn to accurately count sets and become CP-knowers. These findings show that acquisition of the successor function is highly protracted, providing the strongest evidence yet that it cannot drive the cardinal principle induction. We suggest that counting experience, as well as knowledge of recursive counting structures, may instead drive the learning of the successor function.
Abstract: Publication date: February 2017 Source:Cognitive Psychology, Volume 92 Author(s): John D. Coley, Melanie Arenson, Yian Xu, Kimberly D. Tanner A large body of cognitive research has shown that people intuitively and effortlessly reason about the biological world in complex and systematic ways. We addressed two questions about the nature of intuitive biological reasoning: How does intuitive biological thinking change during adolescence and early adulthood? How does increasing biology education influence intuitive biological thinking? To do so, we developed a battery of measures to systematically test three components of intuitive biological thought: anthropocentric thinking, teleological thinking, and essentialist thinking, and tested 8th graders and university students (both biology majors and non-biology majors). Results reveal clear evidence of persistent intuitive reasoning among all populations studied, consistent but surprisingly small differences between 8th graders and college students on measures of intuitive biological thought, and consistent but again surprisingly small influence of increasing biology education on intuitive biological reasoning. Results speak to the persistence of intuitive reasoning, the importance of taking intuitive knowledge into account in science classrooms, and the necessity of interdisciplinary research to advance biology education. Further studies are necessary to investigate how cultural context and continued acquisition of expertise impact intuitive biological thinking.
Abstract: Publication date: December 2016 Source:Cognitive Psychology, Volume 91 Author(s): Dale J. Cohen, Philip T. Quinlan How do people derive meaning from numbers? Here, we instantiate the primary theories of numerical representation in computational models and compare simulated performance to human data. Specifically, we fit simulated data to the distributions for correct and incorrect responses, as well as the pattern of errors made, in a traditional “relative quantity” task. The results reveal that no current theory of numerical representation can adequately account for the data without additional assumptions. However, when we introduce repeated, error-prone sampling of the stimulus (e.g., Cohen, 2009), superior fits are achieved when the underlying representation of integers reflects linear spacing with constant variance. These results provide new insights into (i) the detailed nature of mental numerical representation, and (ii) general perceptual processes implemented by the human visual system.
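The winning scheme described above can be sketched in simulation. This is a hypothetical illustration of the general idea, not the authors' fitted model: integers sit at linearly spaced means with constant noise, and the stimulus is sampled repeatedly until accumulated evidence favors one response:

```python
# Error-prone repeated sampling over a linear, constant-variance integer
# representation: evidence for "a > b" accumulates across noisy samples
# until it crosses a decision threshold (all parameters hypothetical).
import random

def relative_quantity_trial(a, b, sd=1.5, threshold=5.0, rng=random):
    """Decide which of two integers is larger from noisy linear samples."""
    evidence = 0.0
    while abs(evidence) < threshold:
        # one noisy read of each stimulus; means are linearly spaced,
        # noise has the same sd regardless of magnitude
        evidence += rng.gauss(a, sd) - rng.gauss(b, sd)
    return a if evidence > 0 else b
```

Because the noise is constant rather than scaling with magnitude, error rates and response times in such a scheme depend on the raw numerical distance a − b, which is the kind of signature the model fitting in the paper discriminates.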
Abstract: Publication date: December 2016 Source:Cognitive Psychology, Volume 91 Author(s): James R. Schmidt, Jan De Houwer, Klaus Rothermund The current paper presents an extension of the Parallel Episodic Processing model. The model is developed for simulating behaviour in performance (i.e., speeded response time) tasks and learns to anticipate both how and when to respond based on retrieval of memories of previous trials. With one fixed parameter set, the model is shown to successfully simulate a wide range of different findings. These include: practice curves in the Stroop paradigm, contingency learning effects, learning acquisition curves, stimulus-response binding effects, mixing costs, and various findings from the attentional control domain. The results demonstrate several important points. First, the same retrieval mechanism parsimoniously explains stimulus-response binding, contingency learning, and practice effects. Second, as performance improves with practice, any effects will shrink with it. Third, a model of simple learning processes is sufficient to explain phenomena that are typically (but perhaps incorrectly) interpreted in terms of higher-order control processes. More generally, we argue that computational models with a fixed parameter set and wider breadth should be preferred over those that are restricted to a narrow set of phenomena.
Abstract: Publication date: December 2016 Source:Cognitive Psychology, Volume 91 Author(s): Jason S. Tsukahara, Tyler L. Harrison, Randall W. Engle Pupil dilations of the eye are known to correspond to central cognitive processes. However, the relationship between pupil size and individual differences in cognitive ability is not as well studied. A peculiar finding that has cropped up in this research is that those high in cognitive ability have a larger pupil size, even during a passive baseline condition. Yet these findings were incidental and lacked a clear explanation. Therefore, in the present series of studies we systematically investigated whether pupil size during a passive baseline is associated with individual differences in working memory capacity and fluid intelligence. Across three studies we consistently found that baseline pupil size is, in fact, related to cognitive ability. We showed that this relationship could not be explained by differences in mental effort, and that the effect of working memory capacity and fluid intelligence on pupil size persisted even after 23 sessions and after taking into account the effect of novelty or familiarity with the environment. We also accounted for potential confounding variables such as age, ethnicity, and drug substances. Lastly, we found that it is fluid intelligence, more so than working memory capacity, that is related to baseline pupil size. In order to provide an explanation and suggestions for future research, we also consider our findings in the context of the underlying neural mechanisms involved.
Abstract: Publication date: Available online 20 October 2016 Source:Cognitive Psychology Author(s): Sara T. Baker, Alan M. Leslie, C.R. Gallistel, Bruce M. Hood Although learning and development reflect changes situated in an individual brain, most discussions of behavioral change are based on the evidence of group averages. Our reliance on group-averaged data creates a dilemma. On the one hand, we need to use traditional inferential statistics. On the other hand, group averages are highly ambiguous when we need to understand change in the individual; the average pattern of change may characterize all, some, or none of the individuals in the group. Here we present a new method for statistically characterizing developmental change in each individual child we study. Using false-belief tasks, fifty-two children in two cohorts were repeatedly tested for varying lengths of time between 3 and 5 years of age. Using a novel Bayesian change point analysis, we determined both the presence and—just as importantly—the absence of change in individual longitudinal cumulative records. Whenever the analysis supports a change conclusion, it identifies in that child’s record the most likely point at which change occurred. Results show striking variability in patterns of change and stability across individual children. We then group the individuals by their various patterns of change or no change. The resulting patterns provide scarce support for sudden changes in competence and shed new light on the concepts of “passing” and “failing” in developmental studies.
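The core of a change-point analysis on an individual child's record can be illustrated simply. The sketch below is a maximum-likelihood single-split version, not the authors' exact Bayesian analysis: it compares the fit of one constant pass rate against two rates split at each candidate trial:

```python
# Simplified change-point detection on a child's record of pass/fail
# false-belief trials (1 = pass). All data below are made up.
import math

def bernoulli_loglik(xs):
    """Log-likelihood of binary outcomes at their maximum-likelihood rate."""
    n, k = len(xs), sum(xs)
    if k == 0 or k == n:
        return 0.0  # log-likelihood is 0 when the ML rate is 0 or 1
    p = k / n
    return k * math.log(p) + (n - k) * math.log(1 - p)

def best_change_point(record):
    """Return (index, log-likelihood gain) of the best single split."""
    best = max(range(1, len(record)),
               key=lambda i: bernoulli_loglik(record[:i]) + bernoulli_loglik(record[i:]))
    gain = (bernoulli_loglik(record[:best]) + bernoulli_loglik(record[best:])
            - bernoulli_loglik(record))
    return best, gain

# A record that shifts from mostly failing to consistently passing:
idx, gain = best_change_point([0, 0, 0, 1, 0, 1, 1, 1, 1, 1])
```

A near-zero gain supports a "no change" conclusion for that child, which is how a method like this can establish the absence as well as the presence of change in individual records.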
Authors:Punit Shah; Adam J.L. Harris; Geoffrey Bird; Caroline Catmur; Ulrike Hahn Abstract: Publication date: Available online 16 August 2016 Source:Cognitive Psychology Author(s): Punit Shah, Adam J.L. Harris, Geoffrey Bird, Caroline Catmur, Ulrike Hahn Received academic wisdom holds that human judgment is characterized by unrealistic optimism, the tendency to underestimate the likelihood of negative events and overestimate the likelihood of positive events. With recent questions being raised over the degree to which the majority of this research genuinely demonstrates optimism, attention to possible mechanisms generating such a bias becomes ever more important. New studies have now claimed that unrealistic optimism emerges as a result of biased belief updating with distinctive neural correlates in the brain. On a behavioral level, these studies suggest that, for negative events, desirable information is incorporated into personal risk estimates to a greater degree than undesirable information (resulting in a more optimistic outlook). However, using task analyses, simulations, and experiments we demonstrate that this pattern of results is a statistical artifact. In contrast with previous work, we examined participants’ use of new information with reference to the normative, Bayesian standard. Simulations reveal the fundamental difficulties that would need to be overcome by any robust test of optimistic updating. No such test presently exists, so that the best one can presently do is perform analyses with a number of techniques, all of which have important weaknesses. Applying these analyses to five experiments shows no evidence of optimistic updating. These results clarify the difficulties involved in studying human ‘bias’ and cast additional doubt over the status of optimism as a fundamental characteristic of healthy cognition.