Cognition
Journal Prestige (SJR): 2.455
Citation Impact (CiteScore): 4
Number of Followers: 185

Hybrid journal (may contain Open Access articles)
ISSN (Print): 0010-0277
Published by Elsevier
  • Reconstructing the origins of the space-number association: spatial and
           number-magnitude codes must be used jointly to elicit spatially organised
           mental number lines
    • Abstract: Publication date: September 2019. Source: Cognition, Volume 190. Author(s): Mario Pinto, Michele Pellegrino, Fabio Marson, Stefano Lasaponara, Fabrizio Doricchi. In a series of five experiments with unimanual Go/No-Go tasks and intermixed Arabic numerical targets (i.e., numbers lower or higher than 5) and directional targets (i.e., arrows pointing to the left or to the right), we explored whether spatial codes used in isolation inherently evoke the left-to-right representation of number magnitudes (a Space-to-Number congruency effect) and, vice versa, whether number-magnitude codes used in isolation inherently evoke the conceptual activation of left/right spatial codes (a Number-to-Space congruency effect). In Experiment 1, Go responses were provided based on instructions that activated only magnitude codes (e.g., “push only if the number is lower than 5 and whenever an arrow appears”) or only spatial codes (e.g., “push only when an arrow points to the left and whenever a number appears”). In Experiments 2–4, the same instructions were combined with the request to respond only to arrows in a specific colour. No fixed association was present between a specific arrow colour and a specific arrow direction. In Experiment 3, the direction of arrow-targets was kept fixed to favour the processing of arrow direction. In Experiment 4, an additional class of No-Go visual stimuli was included to heighten the focus of attention on numerical and directional arrow-targets and, in addition, only the numbers 1, 2, 8 and 9 were used to force the contrast between small and large magnitude codes. The results of Experiments 1–4 highlighted no significant or reliable Space-to-Number congruency effect (e.g., faster RTs to numbers lower than 5 when participants attend to arrows pointing to the left) or Number-to-Space congruency effect (e.g., faster RTs to arrows pointing to the left when participants attend to numbers lower than 5).
In Experiment 5 we confirmed that when spatial and number-magnitude codes are used in conjunction (e.g., “push only when an arrow points left and when a number is lower than 5”), a significant Space-to-Number congruency effect is found, and we expanded this evidence to Number-to-Space congruency. Split-half testing showed that in Experiment 5 the congruency effects were reliable, and null-hypothesis significance testing showed that they differed from those found in all other experiments. We conclude that neither can spatial codes used in isolation elicit a spatial representation of number magnitudes, nor can number-magnitude codes used in isolation trigger the activation of spatial codes. Thus, spatial and numerical codes must be used jointly to evoke reliable spatially organised mental number lines.
       
  • A counterfactual explanation for the action effect in causal judgment
    • Abstract: Publication date: September 2019. Source: Cognition, Volume 190. Author(s): Paul Henne, Laura Niemi, Ángel Pinillos, Felipe De Brigard, Joshua Knobe. People’s causal judgments are susceptible to the action effect, whereby they judge actions to be more causal than inactions. We offer a new explanation for this effect, the counterfactual explanation: people judge actions to be more causal than inactions because they are more inclined to consider the counterfactual alternatives to actions than to consider counterfactual alternatives to inactions. Experiment 1a conceptually replicates the original action effect for causal judgments. Experiment 1b confirms a novel prediction of the new explanation, the reverse action effect, in which people judge inactions to be more causal than actions in overdetermination cases. Experiment 2 directly compares the two effects in joint-causation and overdetermination scenarios and conceptually replicates them with new scenarios. Taken together, these studies provide support for the new counterfactual explanation for the action effect in causal judgment.
       
  • Evidence of analogical re-representation from a change detection task
    • Abstract: Publication date: September 2019. Source: Cognition, Volume 190. Author(s): Daniel C. Silliman, Kenneth J. Kurtz. The flexibility to map similar but non-identical relations is a key characteristic of human analogical reasoning. Understanding how this flexibility is implemented is necessary for a complete account of analogical processes. The structure mapping theory of analogy addresses this issue by invoking re-representation—an online transformation of conceptually similar relational content that reveals potential partial identity matches between predicates. Despite the critical importance of re-representation to structure mapping, very little empirical work has validated the psychological reality of this mechanism, with the existing evidence being no more than suggestive. The present work investigates the likelihood of re-representation across two experiments using a novel change detection task. The resultant findings demonstrate precise evidence of representational change in the relational content of analogs. Experiment 2 further explores the relationship between lower-order relational similarity and the likelihood of re-representation.
       
  • The progressive 6-year-old conserver: Numerical saliency and sensitivity
           as core mechanisms of numerical abstraction in a Piaget-like estimation
           task
    • Abstract: Publication date: September 2019. Source: Cognition, Volume 190. Author(s): Arnaud Viarouge, Olivier Houdé, Grégoire Borst. In Piaget’s theory of number development, children do not possess a true concept of number until they are able to reason on numerical quantity regardless of changes in other nonnumerical magnitudes, such as length. Recent studies have echoed this result by arguing that abstracting number from nonnumerical dimensions of magnitude is a developmental milestone and a strong predictor of mathematics achievement. However, the mechanisms supporting such abstraction remain largely underspecified. We aimed to study how identification of the numerical equivalence in a Piaget-like estimation task by 6-year-old children is affected by (a) the degree of interference between number and nonnumerical magnitudes and (b) children’s spontaneous orientation to numerosity. Six-year-old children first performed a card sorting task assessing their spontaneous orientation towards numerosity, spacing, or item size in a set of dots. Then, they completed a Piaget-like same/different numerical estimation task using two rows of dots in which the length ratio between the two rows varied systematically. Children were less likely to accept the numerical equivalence in the Piaget-like estimation task (a) as the difference in spacing between the dots increased and (b) as the children were more spontaneously oriented towards spacing over number in the card sorting task. Our results suggest that abstracting number depends on its saliency, which varies both as a function of the context (i.e., length ratio between the two rows) and of individual differences in children’s sensitivity to the numerical aspects of their environment. These factors could be at the root of the observed development of performance in the seminal number-conservation task, which appears as a progressive abstraction of number rather than a conceptual shift, as Piaget hypothesized.
       
  • A meta-analysis of the object-based compatibility effect
    • Abstract: Publication date: September 2019. Source: Cognition, Volume 190. Author(s): Shaheed Azaad, Simon M. Laham, Phebe Shields. The object-based compatibility effect (CE) describes, in the context of two-choice keypress tasks, the facilitation of response times (RTs) by the correspondence between participants’ response hand and the task-irrelevant orientation of a viewed object’s handle. Object-based CEs are often attributed to affordance perception. Although the object-based CE paradigm is the major RT task used to study affordances, failures to replicate the effect have raised questions about its robustness. Moreover, it remains unclear whether the object-based CE is indeed indicative of affordances, or whether it is merely an example of spatial CEs brought about by the object’s protruding handle. We present a meta-analysis of object-based CEs to (1) obtain a point estimate of the overall effect and (2) test for moderation consistent with either affordance or spatial compatibility accounts. From 88 independent effects (computed on 2359 participants), we estimated a small but significant compatibility effect (ES = 0.106, z = 5.44, p
       
  • Collections, collectives, and individuals: Preschoolers’
           attributions of intentionality
    • Abstract: Publication date: September 2019. Source: Cognition, Volume 190. Author(s): Hammad Sheikh, Lawrence A. Hirschfeld. Given the complexity of our social worlds, humans must develop the ability to make nuanced interpretations of behavior, including the ability to infer an actor’s intentions from perceptual properties of an actor’s movements. Consistent with the common perception of a group as a single collective entity and the use of singular nouns to refer to groups, such as a clan, family, team, army, herd, hive, or a gaggle, Bloom and Veres (1999) found that adults attribute intentionality to groups to the same extent that they do to single entities. This study examines the developmental course of both these phenomena by examining the performance of adults and preschoolers on an adaptation of Bloom and Veres’ task. Our results show that preschoolers, like adults, readily attribute intentions to a group and that the more they do so, the more they perceive the group as a single collective entity. This effect is largely mediated by increased attributions of goal-directed action and, to a lesser extent, by attributions of mental states, consistent with the claim that purposeful, coordinated action makes a collection of individuals conceptually coalesce into an entity.
       
  • Spontaneous memory retrieval varies based on familiarity with a spatial
           context
    • Abstract: Publication date: September 2019. Source: Cognition, Volume 190. Author(s): Jessica Robin, Luisa Garzon, Morris Moscovitch. Spatial context can serve as a powerful cue for episodic memory. In daily life, we encounter locations varying in familiarity that may trigger different forms of memory retrieval. While previous research on autobiographical memory suggests that more familiar landmarks cue more detailed memories, theories such as cue overload predict that less familiar cues will more reliably trigger specific memory retrieval. It is therefore possible that more and less familiar cues will differentially elicit more generalized and specific memories, respectively. In this series of studies, we develop a novel paradigm for eliciting spontaneous memory retrieval based on real-world spatial contexts varying in familiarity. We found evidence that more familiar contexts generally lead to higher rates of spontaneous memory retrieval for semantic and generalized memories, but that episodic memories are more frequently retrieved for less familiar cues. These patterns demonstrate how related memories lead to the formation of more generalized representations over time, while memories with fewer associates remain episodic. We discuss these findings in relation to those obtained in a version of the study in which participants were instructed to retrieve thoughts. Together these findings provide novel insight into the dynamics of context familiarity and memory retrieval in a naturalistic autobiographical memory paradigm.
       
  • Pupils say more than a thousand words: Pupil size reflects how observed
           actions are interpreted
    • Abstract: Publication date: September 2019. Source: Cognition, Volume 190. Author(s): François Quesque, Friederike Behrens, Mariska E. Kret. Humans attend to others’ facial expressions and body language to better understand their emotions and predict goals and intentions. The eyes and their pupils reveal important social information. Because pupil size is beyond voluntary control yet reflective of a range of cognitive and affective processes, pupils in principle have the potential to convey whether others’ actions are interpreted correctly or not. Here, we measured pupil size while participants observed video-clips showing reach-to-grasp arm movements. Expressors in the video-clips were playing a board game and moved a dowel to a new position. Participants’ task was to decide whether the dowel was repositioned with the intention to be followed up by another move of the same expressor (personal intention) or whether the arm movement carried the implicit message that the expressor’s turn was over (social intention). Replicating earlier findings, results showed that participants recognized expressors’ intentions on the basis of their arm kinematics. Results further showed that participants’ pupil size was larger when observing actions reflecting personal compared to social intentions. Most interestingly, before participants indicated how they interpreted the observed actions by choosing to press one of two keys (corresponding to the personal or social intention), their pupils, within a split second, had already given away how they interpreted the expressor’s movement. In sum, this study underscores the importance of nonverbal behavior in helping social messages get across quickly. Revealing how actions are interpreted, pupils may provide additional feedback for effective social interactions.
       
  • Do non-human primates really represent others’ ignorance? A test of
           the awareness relations hypothesis
    • Abstract: Publication date: September 2019. Source: Cognition, Volume 190. Author(s): Daniel J. Horschler, Laurie R. Santos, Evan L. MacLean. Non-human primates can often predict how another agent will behave based on that agent’s knowledge about the world. But how do non-human primates represent others’ knowledge states? Researchers have recently proposed that non-human primates form “awareness relations” to attribute objectively true information to other minds, as opposed to human-like representations that track others’ ignorance or false belief states. We present the first explicit test of the awareness relations hypothesis by examining when rhesus macaques’ understanding of other agents’ knowledge falters. In Experiment 1, monkeys watched an agent observe a piece of fruit (the target object) being hidden in one of two boxes. While the agent’s view was occluded, either the fruit moved out of its box and directly back into it, or the box containing the fruit opened and immediately closed. We found that monkeys looked significantly longer when the agent reached incorrectly rather than correctly after the box’s movement, but not after the fruit’s movement. This result suggests that monkeys did not expect the agent to know the fruit’s location when it briefly and arbitrarily moved while the agent could not see it, but did expect the agent to know the fruit’s location when only the box moved while the agent could not see it. In Experiment 2, we replicated and extended both findings with a larger sample, a different target object, and opposite directions of motion in the test trials. These findings suggest that monkeys reason about others’ knowledge of objects by forming awareness relations which are disrupted by arbitrary spatial manipulation of the target object while an agent has no perceptual access to it.
       
  • Similarity of referents influences the learning of phonological word
           forms: Evidence from concurrent word learning
    • Abstract: Publication date: September 2019. Source: Cognition, Volume 190. Author(s): Libo Zhao, Stephanie Packard, Bob McMurray, Prahlad Gupta. Although it is well accepted that various components of words (e.g., phonological word forms, and semantics) interact with each other during processing, it is less clear whether semantics and phonology interact with each other during word learning in the sense of affecting each other’s learning. This study employed three behavioral experiments as well as computational simulations to shed light on this issue by examining whether semantic similarity, operationalized as visual referent similarity, influences the learning of phonological word forms. Experiment 1 demonstrated an effect that would typically be interpreted as an effect of similarity on phonological learning, employing the commonly used naming task. We discuss how such results are confounded by at-test competition effects that can arise directly from the explicit presentation of similar referents, or indirectly from their re-activation even without explicit presentation. Experiments 2 and 3 used a stem completion test and a recognition from mispronunciation test to control for direct effects of competition, and still found effects of similarity on performance. A neural network model of word learning simulated the behavioral results, with analysis of the model confirming a genuine effect of referent similarity on phonological learning even when controlling for both direct and indirect at-test similarity effects. Together these results provide converging evidence that phonological word form learning is affected by referent similarity, suggesting that semantics and phonology interact with each other during learning.
       
  • Emotional Semantic Congruency based on stimulus driven comparative
           judgements
    • Abstract: Publication date: September 2019. Source: Cognition, Volume 190. Author(s): Carlo Fantoni, Giulio Baldassi, Sara Rigutti, Valter Prpic, Mauro Murgia, Tiziano Agostini. A common cognitive process in everyday life consists in the comparative judgement of emotions given a pair of facial expressions and the choice of the most positive/negative among them. Results from three experiments on complete-facial expression (happy/angry) and mixed-facial expression (neutral/happy-or-angry) pairs viewed with (Experiments 1 and 3) or without (Experiment 2) foveation, and performed in conditions in which valence was either task relevant (Experiments 1 and 2) or task irrelevant (Experiment 3), show that comparative judgements of emotions are stimulus driven. Judgement speed increased as the target absolute emotion intensity grew larger together with the average emotion of the pair, irrespective of the compatibility between the valence and the side of motor response: a semantic congruency effect in the domain of emotion. This result undermines previous interpretations of results in the context of comparative judgements based on the lateralization of emotions (e.g., SNARC-like instructional flexibility), and is fully consistent with our formalization of emotional semantic congruency: the direct Speed-Intensity Association model.
       
  • Graded similarity in free categorization
    • Abstract: Publication date: September 2019. Source: Cognition, Volume 190. Author(s): John P. Clapper. Similarity has long been regarded as a major determinant of human categorization. Surprisingly, much research has shown that when people are asked to construct their own categories they rarely do so on the basis of overall similarity, instead categorizing on the basis of a single feature or dimension of the objects. This article reports five experiments that manipulate the proportion of parts shared by two structurally alignable objects to determine whether similarity would have a graded effect on free categorization. Increasing the proportion of shared features increased both the rated similarity of a given pair of objects and the probability of assigning them to the same category. Interestingly, the shape of the two similarity functions differed, with rated similarity increasing linearly with the proportion of shared features while the probability of assigning the objects to the same category increased superlinearly (exponentially). This difference is discussed in terms of Shepard's (1987) model of generalization, which predicts that any monotonic increase in perceived similarity will result in an exponential increase in the probability of generalization. Overall, these results provide a strong demonstration of similarity-based free categorization, and the particular form of that relationship provides useful information regarding the underlying cognitive processes involved.
       
  • Sensitivity to emotion information in children’s lexical processing
    • Abstract: Publication date: September 2019. Source: Cognition, Volume 190. Author(s): Tatiana C. Lund, David M. Sidhu, Penny M. Pexman. We tested predictions of multiple representation accounts of conceptual processing, including the proposal that emotion information may provide a bootstrapping mechanism for vocabulary acquisition. We investigated the influence of word valence on children’s lexical processing, presenting 40 positive words, 40 neutral words, and 40 negative words in an auditory lexical decision task (ALDT), along with 120 nonwords. We tested 99 children across three age groups: 5, 6, or 7 years. There were no significant effects of valence on the ALDT responses of 5-year-old children. The 6-year-old children, however, were faster to respond to negative words than to neutral words and, for more abstract words, faster to respond to positive words than to neutral words. The 7-year-old children were faster for positive words than for neutral words, regardless of concreteness. As such, children showed sensitivity to word valence in lexical processing, at a younger age than had been established in previous research. In addition, children’s language skills were related to their improved processing of more abstract neutral words between 6 and 7 years of age. These results are consistent with multimodal accounts of word meaning and lexical development.
       
  • A new look at joint attention and common knowledge
    • Abstract: Publication date: August 2019. Source: Cognition, Volume 189. Author(s): Barbora Siposova, Malinda Carpenter. Everyone agrees that joint attention is a key feature of human social cognition. Yet, despite over 40 years of work and hundreds of publications on this topic, there is still surprisingly little agreement on what exactly joint attention is, and how the jointness in it is achieved. Part of the problem, we propose, is that joint attention is not a single process, but rather it includes a cluster of different cognitive skills and processes, and different researchers focus on different aspects of it. A similar problem applies to common knowledge. Here we present a new approach: We outline a typology of social attention levels which are currently all referred to in the literature as joint attention (from monitoring to common, mutual, and shared attention), along with corresponding levels of common knowledge. We consider cognitive, behavioral, and phenomenological aspects of the different levels as well as their different functions, and a key distinction we make in all of this is second-personal vs. third-personal relations. While we focus mainly on joint attention and common knowledge, we also briefly discuss how these levels might apply to other ‘joint’ mental states such as joint goals.
       
  • Intuitions about mathematical beauty: A case study in the aesthetic
           experience of ideas
    • Abstract: Publication date: August 2019. Source: Cognition, Volume 189. Author(s): Samuel G.B. Johnson, Stefan Steinerberger. Can an idea be beautiful? Mathematicians often describe arguments as “beautiful” or “dull,” and famous scientists have claimed that mathematical beauty is a guide toward the truth. Do laypeople, like mathematicians and scientists, experience mathematics aesthetically? Three studies suggest that they do. When people rated the similarity of simple mathematical arguments to landscape paintings (Study 1) or pieces of classical piano music (Study 2), their similarity rankings were internally consistent across participants. Moreover, when participants rated beauty and various other potentially aesthetic dimensions for artworks and mathematical arguments, they relied mainly on the same three dimensions for judging beauty—elegance, profundity, and clarity (Study 3). These aesthetic judgments, made separately for artworks and arguments, could be used to predict similarity judgments out-of-sample. These studies also suggest a role for expertise in sharpening aesthetic intuitions about mathematics. We argue that these results shed light on broader issues in how and why humans have aesthetic experiences of abstract ideas.
       
  • Parallel, cascaded, interactive processing of words during sentence
           reading
    • Abstract: Publication date: August 2019. Source: Cognition, Volume 189. Author(s): Yun Wen, Joshua Snell, Jonathan Grainger. Single words are easier to identify in a briefly presented syntactically correct word sequence compared with a scrambled version of the same set of words: a sentence superiority effect. Interactive-activation models of sentence comprehension can account for this phenomenon by implementing parallel processing of word identities. The cascaded and interactive nature of such processing allows sentence-level structures to influence on-going word processing. Alternatively, prior observations of a sentence superiority effect in post-cued word-in-phrase identification might be due to the sophisticated guessing of word identities on the basis of partial information about the target word and the surrounding context. Here, for the first time, we used electrophysiological recordings to plot the time-course of the sentence superiority effect. According to an interactive-activation account of this phenomenon, the effect should be visible in the N400 component, thought to reflect the mapping of word identities onto higher-level semantic and syntactic representations. Such evidence for changes in highly automatized linguistic processing is not predicted by a sophisticated guessing account. Our results revealed a robust and widespread sentence-superiority effect on the N400 component that onsets around 270 ms post-sentence onset, thus lending support to the interactive-activation account.
       
  • A delay in processing for repeated letters: Evidence from megastudies
    • Abstract: Publication date: August 2019. Source: Cognition, Volume 189. Author(s): Iliyana V. Trifonova, James S. Adelman. Repetitions of letters in words are frequent in many languages. Here we explore whether these repetitions affect word recognition. Previous studies of word processing have not provided conclusive evidence of differential processing between repeated and unique letter identities. In the present study, to achieve greater power, we used regression analyses on existing megastudies of visual word recognition latencies. In both lexical decision (in English, Dutch, and French) and word naming (in English), there was strong evidence that repeated letters delay visual word recognition after major covariates are partialed out. This delay was most robust when the repeated letters occurred in close proximity but not in immediate adjacency to each other. Simulations indicated that the observed inhibitory pattern of repeated letters was not predicted by three leading visual word recognition models. Future theorizing in visual word recognition will need to take account of this inhibitory pattern. It remains to be seen whether the appropriate adjustment should occur in the representation of letter position and identity, or in a more precise description of earlier visual processes.
       
  • Instance-of-object-kind representations
    • Abstract: Publication date: August 2019. Source: Cognition, Volume 189. Author(s): Sandeep Prasada, D. Geoffrey Hall. We naturally think and talk about the objects we encounter as instances of one or another kind of object (e.g. as a dog). This paper makes a proposal concerning the instance-of-object-kind representations that allow us to think of things as instances of object kinds and investigates two predictions of the proposed representations. Instance-of-object-kind representations represent an object as one of indefinitely many instances of a given object kind. In so doing, they implicate a uniqueness-in-kind constraint which represents an object as unique within a kind, by distinguishing the instance-of-object-kind representation of an object from instance-of-object-kind representations of other actually as well as potentially existing instances of that object kind. The uniqueness-in-kind constraint predicts that a given instance of an object kind cannot be transformed into a different potentially existing instance of the same kind, but allows the object to be transformed into a potentially existing instance of a different object kind. Instance-of-object-kind representations also implicate a mode-of-existence constraint whereby actually existing instances of an object kind are represented as being distinct from all actually existing instances of that kind and other object kinds. This constraint predicts that participants should reject the possibility of transforming an actually existing instance of an object kind into an actually existing instance of the same or different object kind. Five experiments provide evidence for these constraints. In so doing, the experiments provide evidence for the formal characteristics of the proposed instance-of-object-kind representations that guide our thinking about things as instances of object kinds.
       
  • Linguistic conventionality and the role of epistemic reasoning in
           children’s mutual exclusivity inferences
    • Abstract: Publication date: August 2019. Source: Cognition, Volume 189. Author(s): Mahesh Srinivasan, Ruthe Foushee, Andrew Bartnof, David Barner. To interpret an interlocutor’s use of a novel word (e.g., “give me the papaya”), children typically exclude referents that they already have labels for (like an “apple”), and expect the word to refer to something they do not have a label for (like the papaya). The goal of the present studies was to test whether such mutual exclusivity inferences require children to reason about the words their interlocutors know and could have chosen to say: e.g., If she had wanted the “apple” she would have asked for it (since she knows the word “apple”), so she must want the papaya. Across four studies, we document that both children and adults will make mutual exclusivity inferences even when they believe that their interlocutor does not share their knowledge of relevant, alternative words, suggesting that such inferences do not require reasoning about an interlocutor’s epistemic states. Instead, our findings suggest that children’s own knowledge of an object’s label, together with their belief that this is the conventional label for the object in their language, and that this convention applies to their interlocutor, is sufficient to support their mutual exclusivity inferences. Additionally, and contrary to the claims of previous studies that have used mutual exclusivity as a proxy for children’s beliefs that others share their knowledge, we found that children — especially those with stronger theory of mind ability — are quite conservative about attributing their knowledge of object labels to others. Together, our findings hold implications for theories of word learning, and for how children learn about the scope of shared conventional knowledge.
       
  • Better than native: Tone language experience enhances English lexical
           stress discrimination in Cantonese-English bilingual listeners
    • Abstract: Publication date: August 2019. Source: Cognition, Volume 189. Author(s): William Choi, Xiuli Tong, Arthur G. Samuel. While many second language (L2) listeners are known to struggle when discriminating non-native features absent in their first language (L1), no study has reported that L2 listeners perform better than native listeners in this regard. The present study tested whether Cantonese-English bilinguals were better than native English listeners at discriminating English lexical stress in individual words or pseudowords, even though lexical stress is absent in Cantonese. In experiments manipulating acoustic, phonotactic, and lexical cues, Cantonese-English bilingual adults outperformed native English listeners in discriminating English lexical stress across all phonotactic/lexical conditions when the fundamental frequency (f0) cue to lexical stress was present. The findings underscore the facilitative effect of Cantonese tone language experience on English lexical stress discrimination.
       
  • Do implicit outcome expectancies shape learning and memory
           processes?
    • Abstract: Publication date: August 2019. Source: Cognition, Volume 189. Author(s): Isabel Carmona, Paloma Marí-Beffa, Angeles F. Estévez. Does explicit or implicit knowledge about the consequences of our choices shape learning and memory processes? This seems to be the case according to previous studies demonstrating improvements in learning and retention of symbolic relations and in visuospatial recognition memory when each correct choice is reinforced with its own unique and explicit outcome (the differential outcomes procedure, DOP). In the present study, we aim to extend these findings by exploring the impact of the DOP under conditions of non-conscious processing. To test this, both the outcomes (Experiment 1A) and the sample stimuli (Experiment 1B) were presented under subliminal (non-conscious) and supraliminal conditions in a delayed visual recognition memory task. Results from both experiments showed better visual recognition memory when participants were trained with the DOP, regardless of their awareness of the outcomes or even of the stimuli used for training. To our knowledge, this is the first demonstration that the DOP can be effective under unconscious conditions. This finding is discussed in the light of the two-memory systems model developed by Savage and colleagues to explain the beneficial effects observed on learning and memory when differential outcomes are applied.
       
  • Visual perception of shape-transforming processes: ‘Shape
           Scission’
    • Abstract: Publication date: August 2019. Source: Cognition, Volume 189. Author(s): Filipp Schmidt, Flip Phillips, Roland W. Fleming. Shape-deforming processes (e.g., squashing, bending, twisting) can radically alter objects’ shapes. After such a transformation, some features are due to the object’s original form, while others are due to the transformation, yet it is challenging to separate the two. We tested whether observers can distinguish the causal origin of different features, teasing apart the characteristics of the original shape from those imposed by transformations, a process we call ‘shape scission’. Using computer graphics, we created 8 unfamiliar objects and subjected each to 8 transformations (e.g., “twisted”, “inflated”, “melted”). One group of participants named transformations consistently. A second group arranged cards depicting the objects into classes according to either (i) the original shape or (ii) the type of transformation. They could do this almost perfectly, suggesting that they readily distinguish the causal origin of shape features. Another group used a digital painting interface to indicate which locations on the objects appeared transformed, with responses suggesting they can localise features caused by transformations. Finally, we parametrically varied the magnitude of the transformations, and asked another group to rate the degree of transformation. Ratings correlated strongly with transformation magnitude with a tendency to overestimate small magnitudes. Responses were predicted by both the magnitude and area affected by the transformation. Together, the findings suggest that observers can scission object shapes into original shape and transformation features and access the resulting representational layers at will.
       
  • Hierarchical structure priming from mathematics to two- and three-site
           relative clause attachment
    • Abstract: Publication date: August 2019. Source: Cognition, Volume 189. Author(s): Christoph Scheepers, Anastasia Galkina, Yury Shtyrov, Andriy Myachykov. A number of recent studies found evidence for shared structural representations across different cognitive domains such as mathematics, music, and language. For instance, Scheepers et al. (2011) showed that English speakers’ choices of relative clause (RC) attachments in partial sentences like The tourist guide mentioned the bells of the church that … can be influenced by the structure of previously solved prime equations such as 80 − (9 + 1) × 5 (making high RC-attachments more likely) versus 80 − 9 + 1 × 5 (making low RC-attachments more likely). Using the same sentence completion task, Experiment 1 of the present paper fully replicated this cross-domain structural priming effect in Russian, a morphologically rich language. More interestingly, Experiment 2 extended this finding to more complex three-site attachment configurations and showed that, relative to a structurally neutral baseline prime condition, N1-, N2-, and N3-attachments of RCs in Russian were equally susceptible to structural priming from mathematical equations such as 18 + (7 + (3 + 11)) × 2, 18 + 7 + (3 + 11) × 2, and 18 + 7 + 3 + 11 × 2, respectively. The latter suggests that cross-domain structural priming from mathematics to language must rely on detailed, domain-general representations of hierarchical structure.
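For readers who want to check the bracketing manipulation, the prime equations evaluate to different values under standard operator precedence. The sketch below (ours, not from the paper; ASCII `-` and `*` stand in for the printed minus and multiplication signs) makes the high- versus low-attachment contrast concrete:

```python
# Two-site primes from Scheepers et al. (2011):
two_site = {
    "80 - (9 + 1) * 5": 80 - (9 + 1) * 5,   # high-attachment prime -> 30
    "80 - 9 + 1 * 5":   80 - 9 + 1 * 5,     # low-attachment prime  -> 76
}
# Three-site primes from Experiment 2:
three_site = {
    "18 + (7 + (3 + 11)) * 2": 18 + (7 + (3 + 11)) * 2,  # N1 analogue -> 60
    "18 + 7 + (3 + 11) * 2":   18 + 7 + (3 + 11) * 2,    # N2 analogue -> 53
    "18 + 7 + 3 + 11 * 2":     18 + 7 + 3 + 11 * 2,      # N3 analogue -> 50
}
for expr, value in {**two_site, **three_site}.items():
    print(f"{expr} = {value}")
```

The point of the manipulation is that identical symbol strings, differing only in bracketing depth, impose different hierarchical structures, which is what is hypothesised to transfer to RC attachment.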
       
  • Visual form perception is fundamental for both reading comprehension and
           arithmetic computation
    • Abstract: Publication date: August 2019. Source: Cognition, Volume 189. Author(s): Jiaxin Cui, Yiyun Zhang, Sirui Wan, Chuansheng Chen, Jieying Zeng, Xinlin Zhou. Visual perception has been found to be a critical factor for reading comprehension and arithmetic computation in separate lines of research with different measures of visual form perception. The current study of 1099 Chinese elementary school students investigated whether the same visual form perception (assessed by a geometric figure matching task) underlies both reading comprehension and arithmetic computation. The results showed that visual form perception had close relations with both reading comprehension and arithmetic computation, even after controlling for age, gender, and cognitive factors such as processing speed, attention, working memory, visuo-spatial processing, and general intelligence. Results also showed that numerosity comparison's relations with reading comprehension and arithmetic computation were fully accounted for by visual form perception. These results suggest that reading comprehension and arithmetic computation might share a similar visual form processing mechanism.
       
  • The development of symmetry concept in preschool children
    • Abstract: Publication date: August 2019. Source: Cognition, Volume 189. Author(s): Qingfen Hu, Meng Zhang. Young children are exposed to symmetrical figures frequently before they are taught the concept of symmetry, which is a valuable experience for the development of geometry; however, limited research has explored how this concept develops. This study investigated the developmental sequence of the “general symmetry” concept and “specific symmetry” concepts (i.e., bilateral, rotational, and translational symmetry) with 106 4–6-year-old children using a symmetry deviant detection task. The test examined children’s conception of general symmetry against asymmetry, specific symmetry against asymmetry, and discrimination of specific symmetries. The results suggested that the concept of symmetry develops as a differentiation process. The concept of general symmetry was acquired first, followed by the specific symmetries, which were acquired sequentially.
       
  • Collective benefit in joint perceptual judgments: Partial roles of shared
           environments, meta-cognition, and feedback
    • Abstract: Publication date: August 2019. Source: Cognition, Volume 189. Author(s): Pavel V. Voinov, Natalie Sebanz, Günther Knoblich. Recent research has demonstrated that two persons can optimally combine their observations to make better decisions when they can verbally communicate their confidence to each other. The present study investigated whether non-verbal interaction in a shared environment can be sufficient to achieve similar collective benefit. Pairs of individuals performed a localization task in a shared virtual 3D environment. In Experiments 1 and 2, partners had access to orthogonal viewpoints providing complementary information. The results showed robust collective benefit compared to individual performance from a single viewpoint, which could be obtained without any opportunity for verbal communication and even when no feedback about joint accuracy was provided (Experiment 2). When partners shared the same viewpoint (Experiment 3), collective benefit was achieved only when feedback on accuracy was provided (Experiment 3a). The findings indicate that sharing an environment can be sufficient for achieving integration of complementary perceptual information. Communicating confidence might not be necessary when an environment is shared. Another possibility is that processes for integrating interpersonally continuous information generally differ from the processes invoked when making a joint dichotomous choice.
       
  • There’s more to “sparkle” than meets the eye: Knowledge of vision
           and light verbs among congenitally blind and sighted individuals
    • Abstract: Publication date: August 2019. Source: Cognition, Volume 189. Author(s): Marina Bedny, Jorie Koster-Hale, Giulia Elli, Lindsay Yazzolino, Rebecca Saxe. We examined the contribution of first-person sensory experience to concepts by comparing the meanings of perception (visual/tactile) and emission (light/sound) verbs among congenitally blind (N = 25) and sighted speakers (N = 22). Participants judged semantic similarity for pairs of verbs referring to events of visual (e.g. to peek), tactile (e.g. to feel) and amodal perception (e.g. to perceive) as well as light (e.g. to shimmer) and sound (e.g. to boom) emission and manner of motion (to roll) (total word pairs, N = 2041). Relative to the sighted, blind speakers had higher agreement among themselves on touch perception and sound emission verbs. However, for visual verbs, the judgments of blind and sighted participants were indistinguishable, both in the semantic criteria used and in subject-wise variability. Blind and sighted individuals alike differentiate visual perception verbs from verbs of touch and amodal perception, and differentiate among acts of visual perception, e.g. intense/continuous acts from brief acts of looking (e.g. peek vs. stare). Light emission verbs are differentiated according to intensity (blaze vs. glow) and stability (blaze vs. flash). Thus, detailed knowledge of visual word meanings is acquired without first-person sensory access.
       
  • Preschoolers and multi-digit numbers: A path to mathematics through the
           symbols themselves
    • Abstract: Publication date: August 2019. Source: Cognition, Volume 189. Author(s): Lei Yuan, Richard W. Prather, Kelly S. Mix, Linda B. Smith. Numerous studies from developmental psychology have suggested that human symbolic representation of numbers is built upon the evolutionarily old capacity for representing quantities that is shared with other species. Substantial research from mathematics education also supports the idea that mathematical concepts are best learned through their corresponding physical representations. We argue for an independent pathway to learning “big” multi-digit symbolic numbers that focuses on the symbol system itself. Across five experiments using both between- and within-subject designs, we asked preschoolers to match written multi-digit numbers to their spoken names in a two-alternative choice test or to indicate the larger of two written numbers. Results showed that preschoolers could reliably map spoken number names to written forms and compare the magnitudes of two written multi-digit numbers. Importantly, these abilities were not related to their non-symbolic representation of quantities. These findings have important implications for numerical cognition, symbolic development, teaching, and education.
       
  • Choking under experimenter’s presence: Impact on proactive control and
           practical consequences for psychological science
    • Abstract: Publication date: August 2019. Source: Cognition, Volume 189. Author(s): Clément Belletier, Alice Normand, Valérie Camos, Pierre Barrouillet, Pascal Huguet. Evidence accumulated for more than a century on audience effects shows that being watched by others typically impairs performance on difficult tasks. However, recent research under the label of “choking under pressure” suggests that this performance impairment is, ironically, specific to the individuals who are the most qualified to succeed—those with a high working memory capacity (WMC). Here, we predicted and found that being watched by evaluative others such as the experimenter undermines the proactive control on which high-WMC individuals rely the most. These results refine our understanding of both audience and choking effects, and lead to innovative, practical recommendations for psychological science.
       
  • Learning mechanisms in cue reweighting
    • Abstract: Publication date: August 2019. Source: Cognition, Volume 189. Author(s): Zara Harmon, Kaori Idemaru, Vsevolod Kapatsinski. Feedback has been shown to be effective in shifting attention across perceptual cues to a phonological contrast in speech perception (Francis, Baldwin & Nusbaum, 2000). However, the learning mechanisms behind this process remain obscure. We compare the predictions of supervised error-driven learning (Rescorla & Wagner, 1972) and reinforcement learning (Sutton & Barto, 1998) using computational simulations. Supervised learning predicts downweighting of an informative cue when the learner receives evidence that it is no longer informative. In contrast, reinforcement learning suggests that a reduction in cue weight requires positive evidence for the informativeness of an alternative cue. Experimental evidence supports the latter prediction, implicating reinforcement learning as the mechanism behind the effect of feedback on cue weighting in speech perception. Native English listeners were exposed to either bimodal or unimodal VOT distributions spanning the unaspirated/aspirated boundary (bear/pear). VOT is the primary cue to initial stop voicing in English. However, lexical feedback in training indicated that VOT was no longer predictive of voicing. Reduction in the weight of VOT was observed only when participants could use an alternative cue, F0, to predict voicing. Frequency distributions had no effect on learning. Overall, the results suggest that attention shifting in learning the phonetic cues to phonological categories is accomplished using simple reinforcement learning principles that also guide the choice of actions in other domains.
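As background for the supervised error-driven account being contrasted here, the Rescorla-Wagner rule adjusts each present cue's associative weight in proportion to a shared prediction error. The following is a minimal illustrative sketch (ours, not the authors' simulation code; cue names and the learning rate are assumptions):

```python
def rescorla_wagner(weights, present_cues, outcome, alpha=0.1):
    """One Rescorla-Wagner update: every present cue's weight moves
    toward the outcome in proportion to the shared prediction error."""
    prediction = sum(weights[c] for c in present_cues)
    error = outcome - prediction
    for c in present_cues:
        weights[c] += alpha * error
    return weights

# Toy run: VOT first predicts voicing, then stops being predictive.
w = {"VOT": 0.0, "F0": 0.0}
for _ in range(50):
    rescorla_wagner(w, ["VOT"], outcome=1.0)  # VOT informative: weight rises
for _ in range(50):
    rescorla_wagner(w, ["VOT"], outcome=0.0)  # VOT uninformative: weight decays
```

Under this rule the VOT weight decays toward zero whenever the cue stops predicting the outcome, even with no alternative cue available, which is exactly the supervised-learning prediction the abstract reports was not borne out experimentally.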
       
  • Passively learned spatial navigation cues evoke reinforcement learning
           reward signals
    • Abstract: Publication date: August 2019. Source: Cognition, Volume 189. Author(s): Thomas D. Ferguson, Chad C. Williams, Ronald W. Skelton, Olave E. Krigolson. Since the suggestion by Tolman (1948) that both rodents and humans create cognitive maps during navigation, the specifics of how navigators learn about their environment have been mired in debate. One facet of this debate is whether or not the creation of cognitive maps – also known as allocentric navigation – involves reinforcement learning. Here, we demonstrate a role for reinforcement learning during allocentric navigation using event-related brain potentials (ERPs). In the present experiment, participants navigated in a virtual environment that allowed the use of three different navigation strategies (allocentric, egocentric-response, & egocentric-cue), in which their goal was to locate and remember a hidden platform. Following the navigation phase of the experiment, participants were shown “cue images” representative of the three navigation strategies. Specifically, we examined whether or not these passively learned strategy images elicited a reward positivity – an ERP component associated with reinforcement learning and the anterior cingulate cortex. We found that when allocentric navigators were shown previously learned cues predicting the goal location, a reward positivity was elicited. The present findings demonstrate that allocentric navigational cues carry long-term value after navigation and lend support to the claim that reinforcement learning plays a role in the acquisition of allocentric navigation and thus the generation of cognitive maps.
       
  • Time pressure disrupts level-2, but not level-1, visual perspective
           calculation: A process-dissociation analysis
    • Abstract: Publication date: August 2019. Source: Cognition, Volume 189. Author(s): Andrew R. Todd, Austin J. Simpson, C. Daryl Cameron. Reasoning about other people’s mental states has long been assumed to require active deliberation. Yet, evidence from indirect measures suggests that adults and children commonly display behavior indicative of having incidentally calculated both what other agents see (level-1 perspective taking) and how they see it (level-2 perspective taking). Here, we investigated the efficiency of such perspective calculation in adults. In four experiments using indirect measures of visual perspective taking, we imposed time pressure to constrain processing opportunity, and we used process-dissociation analyses to isolate perspective calculation as the process of focal interest. Results revealed that time pressure weakened level-2, but not level-1, perspective calculation—a pattern that was not evident in error-rate analyses. These findings suggest that perspective calculation may operate more efficiently in level-1 than in level-2 perspective taking. They also highlight the utility of the process-dissociation framework for unmasking processes that otherwise may go under-detected in behavior-level analyses.
       
  • Braille readers break mirror invariance for both visual Braille and Latin
           letters
    • Abstract: Publication date: August 2019. Source: Cognition, Volume 189. Author(s): Adélaïde de Heering, Régine Kolinsky. For this study, we started from the observation that the poor adequacy of a script to the requirements of the human visual system strongly impacts some aspects of reading expertise (e.g., fluent reading). Here we investigated another of these aspects, namely the ability to break mirror invariance, which makes it hard for readers to ignore the mirrored contrasts of letters even when this hinders performance. In particular, we hypothesized that this ability would be preserved for visually presented letters of the Braille alphabet despite their poor fit to the constraints of the human visual system, as it was for congenital Braille readers who explored the same letters through the tactile modality (de Heering, Collignon, & Kolinsky, 2018). To test this, we measured visual Braille readers’ mirror costs (indexing their difficulty in treating mirrored items as identical relative to strictly identical items) for three materials: Braille letters, geometrical shapes, and Latin letters, whose invariant properties are typically considered to have been selected through cultural evolution because they match the requirements of the visual system. Unlike people who have never experienced Braille, Braille readers showed a mirror cost of the same magnitude for Latin letters and Braille letters, and this cost steadily increased the more experience they had with the latter material. Both of these costs were also stronger than the cost observed for geometrical shapes. Overall, these results suggest that the poor adequacy of the Braille alphabet to the visual system does not prevent Braille readers from breaking mirror invariance for the Braille material.
       
  • Mental attribution is not sufficient or necessary to trigger attentional
           orienting to gaze
    • Abstract: Publication date: August 2019. Source: Cognition, Volume 189. Author(s): Alan Kingstone, George Kachkovski, Daniil Vasilyev, Michael Kuk, Timothy N. Welsh. Attention can be shifted in the direction that another person is looking, but the role played by an observer's mental attribution to the looker is controversial, and whether mental attribution to the looker is sufficient to trigger an attention shift is unknown. The current study introduces a novel paradigm to investigate this latter issue. An actor is presented on video turning his head to the left or right before a target appears, randomly, at the gazed-at or non-gazed-at location. Time to detect the target is measured. The standard finding is that target detection is more efficient at the gazed-at than the non-gazed-at location, indicating that attention is shifted to the gazed-at location. Critically, in the current study, an actor is wearing two identical masks – one covering his face and the other the back of his head. Thus, after the head turn, participants are presented with the profile of two faces, one looking left and one looking right. For a gaze cuing effect to emerge, participants must attribute a mental state to the actor – as looking through one mask and not the other. Over the course of four experiments we report that when mental attribution is necessary, a shift in social attention does not occur (i.e., mental attribution is not sufficient to produce a social attention effect); and when mental attribution is not necessary, a shift in social attention does occur. Thus, mental attribution is neither sufficient nor necessary for the occurrence of an involuntary shift in social attention. The present findings constrain future models of social attention that wish to link gaze cuing to mental attribution.
       
  • Statistical learning and Gestalt-like principles predict melodic
           expectations
    • Abstract: Publication date: August 2019. Source: Cognition, Volume 189. Author(s): Emily Morgan, Allison Fogel, Anjali Nair, Aniruddh D. Patel. Expectation, or prediction, has become a major theme in cognitive science. Music offers a powerful system for studying how expectations are formed and deployed in the processing of richly structured sequences that unfold rapidly in time. We ask to what extent expectations about an upcoming note in a melody are driven by two distinct factors: Gestalt-like principles grounded in the auditory system (e.g. a preference for subsequent notes to move in small intervals), and statistical learning of melodic structure. We use multinomial regression modeling to evaluate the predictions of computationally implemented models of melodic expectation against behavioral data from a musical cloze task, in which participants hear a novel melodic opening and are asked to sing the note they expect to come next. We demonstrate that both Gestalt-like principles and statistical learning contribute to listeners’ online expectations. In conjunction with results in the domain of language, our results point to a larger-than-previously-assumed role for statistical learning in predictive processing across cognitive domains, even in cases that seem potentially governed by a smaller set of theoretically motivated rules. However, we also find that both of the models tested here leave much variance in the human data unexplained, pointing to a need for models of melodic expectation that incorporate underlying hierarchical and/or harmonic structure. We propose that our combined behavioral (melodic cloze) and modeling (multinomial regression) approach provides a powerful method for further testing and development of models of melodic expectation.
       
  • Cross-situational learning in a Zipfian environment
    • Abstract: Publication date: August 2019. Source: Cognition, Volume 189. Author(s): Andrew T. Hendrickson, Amy Perfors. Both adults and children have shown impressive cross-situational word learning in which they leverage the statistics of word usage across many different scenes in order to isolate specific word meanings (e.g., Yu & Smith, 2007). However, relatively little is known about how this learning scales to real language. Some theoretical analyses suggest that when words follow a Zipfian distribution, as they do in natural language, it should be more difficult to learn a lexicon because of the many low-frequency words that are only observed a few times (Blythe, Smith, & Smith, 2010; Vogt, 2012). Although this effect can be mitigated somewhat by assuming mutual exclusivity (Reisenauer, Smith, & Blythe, 2013), no mathematical analyses suggest that learning in a Zipfian environment should be easier. In this work, we show the opposite of the predicted effect using cross-situational learning experiments with adults: when the distribution of words and meanings is Zipfian, learning is not impaired and is usually improved. Over a series of experiments, we provide evidence that this is because Zipfian distributions help people to disambiguate the meanings of the other words in the situation.
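For concreteness, a Zipfian lexicon assigns the r-th most frequent word a probability proportional to 1/r^s. A minimal sketch of sampling a learning trial from such a distribution (ours, not the authors' materials; the exponent s = 1 and lexicon size are assumptions):

```python
import random

def zipf_weights(n_words, s=1.0):
    """Probability of the r-th ranked word is proportional to 1/r**s."""
    raw = [1.0 / (rank ** s) for rank in range((1), n_words + 1)]
    total = sum(raw)
    return [w / total for w in raw]

words = [f"word{r}" for r in range(1, 9)]
probs = zipf_weights(len(words))
# Under a Zipfian distribution a few high-frequency words dominate each
# scene, which (per the abstract) can help disambiguate the rare words
# that co-occur with them.
trial = random.choices(words, weights=probs, k=4)
```

With s = 1 and 8 words, the top-ranked word alone carries roughly 37% of the probability mass, so most trials contain at least one highly familiar anchor word.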
       
  • Monkeys are curious about counterfactual outcomes
    • Abstract: Publication date: August 2019. Source: Cognition, Volume 189. Author(s): Maya Zhe Wang, Benjamin Y. Hayden. Many non-human animals show exploratory behaviors. It remains unclear whether any possess human-like curiosity. We previously proposed three criteria for applying the term curiosity to animal behavior: (1) the subject is willing to sacrifice reward to obtain information, (2) the information provides no immediate instrumental or strategic benefit, and (3) the amount the subject is willing to pay depends systematically on the amount of information available. In previous work on information-seeking in animals, information generally predicts upcoming rewards, and animals’ decisions may therefore be a byproduct of reinforcement processes. Here we get around this potential confound by taking advantage of macaques’ ability to reason counterfactually (that is, about outcomes that could have occurred had the subject chosen differently). Specifically, macaques sacrificed fluid reward to obtain information about counterfactual outcomes. Moreover, their willingness to pay scaled with the information (Shannon entropy) offered by the counterfactual option. These results demonstrate the existence of human-like curiosity in non-human primates according to our criteria, which circumvent several confounds associated with less stringent criteria.
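The information measure named in the abstract, Shannon entropy, quantifies how much a counterfactual option could reveal. A short sketch with hypothetical outcome probabilities (ours, for illustration only):

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p)) over nonzero p."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical counterfactual gambles: more outcomes and more even odds
# carry more information, so willingness to pay should scale upward.
certain  = shannon_entropy([1.0])         # a sure outcome: 0 bits
coin     = shannon_entropy([0.5, 0.5])    # two equal outcomes: 1 bit
four_way = shannon_entropy([0.25] * 4)    # four equal outcomes: 2 bits
```

On the abstract's criterion (3), a curious animal should pay more to reveal the four-way gamble's counterfactual outcome than the coin's, and nothing for the sure outcome.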
       
  • Skilled readers’ sensitivity to meaningful regularities in English
           writing
    • Abstract: Publication date: Available online 1 December 2018. Source: Cognition. Author(s): Anastasia Ulicheva, Hannah Harvey, Mark Aronoff, Kathleen Rastle. Substantial research has been undertaken to understand the relationship between spelling and sound, but we know little about the relationship between spelling and meaning in alphabetic writing systems. We present a computational analysis of English writing in which we develop new constructs to describe this relationship. Diagnosticity captures the amount of meaningful information in a given spelling, whereas specificity estimates the degree of dispersion of this meaning across different spellings for a particular sound sequence. Using these two constructs, we demonstrate that particular suffix spellings tend to be reserved for particular meaningful functions. We then show across three paradigms (nonword classification, spelling, and eye tracking during sentence reading) that this form of regularity between spelling and meaning influences the behaviour of skilled readers, and that the degree of this behavioural sensitivity mirrors the strength of spelling-to-meaning regularities in the writing system. We close by arguing that English spelling may have become fractionated such that the high degree of spelling-sound inconsistency maximises the transmission of meaningful information.
       
  • Re-thinking Cognition’s Open Data Policy: Responding to Hardwicke and
           colleagues’ evaluation of its impact
    • Abstract: Publication date: Available online 23 October 2018. Source: Cognition. Author(s): Manos Tsakiris, Randi Martin, Johan Wagemans
       
 
 
JournalTOCs
School of Mathematical and Computer Sciences
Heriot-Watt University
Edinburgh, EH14 4AS, UK
Email: journaltocs@hw.ac.uk
Tel: +44 (0)131 4513762
Fax: +44 (0)131 4513327
 