Cognition
Journal Prestige (SJR): 2.455
Citation Impact (CiteScore): 4
  Hybrid journal (it can contain Open Access articles)
ISSN (Print) 0010-0277
Published by Elsevier
  • Proactive control of affective distraction: Experience-based but not
           expectancy-based
    • Abstract: Publication date: January 2020Source: Cognition, Volume 194Author(s): Constantin Schmidts, Anna Foerster, Thomas Kleinsorge, Wilfried Kunde Unpleasant stimuli disrupt ongoing information processing, even when they are entirely task-irrelevant. We examined whether such affective disturbances can be controlled explicitly and proactively. Specifically, we studied two different mechanisms to induce proactive control: the experience of frequent affective distraction and cueing of upcoming affective distraction. We predicted that both mechanisms would shield the attentional system from affective disturbance. Participants solved a letter classification task while being exposed to neutral or negative distractor pictures. We varied whether the proportion of negative distractors was low or high and whether cues for the upcoming type of distractor valence were informative or uninformative. In three experiments (N = 114), we found support for the notion that experience-based control shields information processing from affective disturbances, whereas distractor valence expectations were not helpful. These data suggest that there is no explicit top-down influence on attentional control settings of affective distraction, just adjustments to the context.
       
  • Attentional prioritization reconfigures novel instructions into
           action-oriented task sets
    • Abstract: Publication date: January 2020Source: Cognition, Volume 194Author(s): Carlos González-García, Silvia Formica, Baptist Liefooghe, Marcel Brass An astonishing aspect of human cognitive flexibility concerns the ability to efficiently convert complex symbolic instructions into novel behaviors. In such ability, the rapid transformation of relevant content into action plans is particularly crucial as it allows for reflexive, automatic-like execution of merely instructed task sets. However, little is known about the mechanisms that give rise to this transformation. In the current study, we test the hypothesis that novel instructions held in working memory are reformatted into action-oriented representations when selective attention prioritizes their content. To do so, we devised a paradigm in which participants first encoded 4 S-R mappings and later, a retro-cue selected two of them. We first found that participants can benefit from retro-cues during the implementation of novel task-sets. Then, across two preregistered experiments, we observed that cued mappings (but not uncued ones) induced intention-based reflexivity, suggesting that only these entered an action-oriented state. Altogether, our results reveal that selective attention prioritizes relevant novel instructed content, playing an important role in its prospective reformatting into an action-bound task set.
       
  • What comes to mind?
    • Abstract: Publication date: January 2020. Source: Cognition, Volume 194. Author(s): Adam Bear, Samantha Bensinger, Julian Jara-Ettinger, Joshua Knobe, Fiery Cushman. When solving problems, like making predictions or choices, people often “sample” possibilities into mind. Here, we consider whether there is structure to the kinds of thoughts people sample by default—that is, without an explicit goal. Across three experiments we found that what comes to mind by default are samples from a probability distribution that combines what people think is likely and what they think is good. Experiment 1 found that the first quantities that come to mind for everyday behaviors and events are quantities that combine what is average and ideal. Experiment 2 found, in a manipulated context, that the distribution of numbers that come to mind resembles the mathematical product of the presented statistical distribution and a (softmax-transformed) prescriptive distribution. Experiment 3 replicated these findings in a visual domain. These results provide insight into the process generating people’s conscious thoughts and invite new questions about the value of thinking about things that are both likely and good.
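A minimal sketch of the sampling scheme this abstract describes, in which thoughts are drawn in proportion to the product of a statistical (likelihood) distribution and a softmax-transformed value distribution. The candidate quantities, probabilities, goodness scores, and temperature below are hypothetical placeholders, not the authors' stimuli or fitted parameters.

```python
import numpy as np

# Hypothetical candidate quantities (e.g., hours of TV watched per day) with a
# made-up descriptive (likelihood) distribution and prescriptive (goodness) scores.
candidates = np.array([1, 2, 3, 4, 5, 6, 7, 8])
likelihood = np.array([.05, .15, .25, .25, .15, .08, .05, .02])
value      = np.array([8.0, 7.0, 6.0, 5.0, 4.0, 3.0, 2.0, 1.0])

def sampling_distribution(likelihood, value, temperature=1.0):
    """Combine a probability distribution with a softmax over value."""
    prescriptive = np.exp(value / temperature)
    prescriptive /= prescriptive.sum()        # softmax over "goodness"
    combined = likelihood * prescriptive      # product of the two distributions
    return combined / combined.sum()          # renormalise

p = sampling_distribution(likelihood, value)
samples = np.random.choice(candidates, size=10, p=p)  # quantities that "come to mind"
print(p.round(3), samples)
```

Under this scheme, low quantities that are both common and valued dominate what is sampled, which is the qualitative pattern the abstract reports.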
       
  • Increased overt attention to objects in early deaf adults: An eye-tracking
           study of complex naturalistic scenes
    • Abstract: Publication date: January 2020. Source: Cognition, Volume 194. Author(s): Silvia Zeni, Irene Laudanna, Francesca Baruffaldi, Benedetta Heimler, David Melcher, Francesco Pavani. The study of selective attention in people with profound deafness has repeatedly documented enhanced attention to the peripheral regions of the visual field compared to hearing controls. This finding emerged from covert attention studies (i.e., without eye-movements) involving extremely simplified visual scenes comprising few visual items. In this study, we aimed to test whether this key finding extends also to overt attention, using a more ecologically valid experimental context in which complex naturalistic images were presented for 3 s. In Experiment 1 (N = 35), all images contained a single central object superimposed on a congruent naturalistic background (e.g., a tiger in the woods). At the end of the visual exploration phase, an incidental memory task probed the participants' recollection of the seen central objects and image backgrounds. Results showed that hearing controls explored and remembered the image backgrounds more than deaf participants, who lingered on the central object to a greater extent. In Experiment 2 we aimed to disentangle whether this behaviour of deaf participants reflected a bias in overt space-based attention towards the centre of the image or, instead, enhanced object-centred attention. We tested new participants (N = 42) in the visual exploration task, adding images with lateralized objects, as well as images with multiple objects or images without any object. Results confirmed increased exploration of objects in deaf participants. Taken together, our novel findings show limitations of the well-known peripheral attention bias of deaf people and suggest that visual object-centred attention may also change after prolonged auditory deprivation.
       
  • Wearing your heart on your screen: Investigating congruency-effects in
           autonomic responses and their role in interoceptive processing during
           biofeedback
    • Abstract: Publication date: January 2020. Source: Cognition, Volume 194. Author(s): Lilla Hodossy, Manos Tsakiris. The experience of one’s embodied sense of self depends on the integration of signals originating both from within and outwith one’s body. During the processing and integration of these signals, the bodily self must maintain a fine balance between stability and malleability. Here we investigate the potential role of autonomic responses in interoceptive processing and their contribution to the stability of the bodily self. Using a biofeedback paradigm, we manipulated the congruency of cardiac signals across two hierarchical levels: (i) the low-level congruency between the visual feedback and the participant’s own cardiac signal and (ii) the high-level congruency between the participants’ beliefs about the identity of the cardiac feedback and its true identity. We measured the effects of these manipulations on high-frequency heart rate variability (HF-HRV), a selective index of phasic vagal cardiac control. In Experiment 1, HF-HRV was sensitive to low-level congruency, independently of whether participants attempted to regulate or simply attend to the biofeedback. Experiment 2 revealed a higher-level congruency effect: veridical prior beliefs increased HF-HRV, whereas false beliefs decreased it. Our results demonstrate that autonomic changes in HF-HRV are sensitive to congruencies across multiple hierarchical levels. Our findings have important theoretical implications for predictive coding models of the self, as they pave the way for a more direct way to track the subtle changes in the co-processing of the internal and external milieus.
       
  • Gamble evaluation and evoked reference sets: Why adding a small loss to a
           gamble increases its attractiveness
    • Abstract: Publication date: January 2020. Source: Cognition, Volume 194. Author(s): Craig R.M. McKenzie, Shlomi Sher. When presented with a gamble involving a chance of winning $9, participants rate it as only moderately attractive. However, when other participants are presented with a gamble that adds a chance of losing 5 cents – resulting in a gamble that is strictly worse – they rate it as much more attractive. This surprising effect has previously been explained in terms of the small loss increasing the affective evaluability of $9. This paper argues for an alternative model, in which the baseline and small-loss gambles evoke different reference sets for comparison. In inferring a relevant reference set, people are sensitive to both the objective content and the framing of a gamble. The model distinguishes between two effects of evoked reference sets on behavior – an obligatory (and rational) effect on scale interpretation, and an optional (but not rational) effect on the internal representation of value. Five experiments provide strong evidence for the evoked reference set model. Data from attractiveness ratings suggest large and consistent reference set effects on scale interpretation, while data from willingness-to-pay and choice tasks suggest that effects on the internal representation of value are less robust.
       
  • Causal actions enhance perception of continuous body movements
    • Abstract: Publication date: January 2020Source: Cognition, Volume 194Author(s): Yujia Peng, Nicholas Ichien, Hongjing Lu Our experience of motion depends not only on spatiotemporal features of stimuli, but also on our recognition of seemingly higher-level properties, as when we see an actor's body movements as goal-directed. Here, we examined how the perception of social causation in human actions guides the perceptual interpolation of motion in the observation of body movements. Natural human-object interactions were recorded for videos in which a person prepared to catch a ball thrown by another person. We manipulated the number of image frames between key postures to yield a short clip with different frame rates, and asked participants to judge whether the catcher's action showed smooth movements or sudden changes. In the causal condition, the catcher faced toward the ball and the thrower to preserve an intention-based causal relation between the ball's movement and the catcher's action in which the former causes the catcher's intention to act. In the non-causal condition, the catcher performed the same movements to raise their hands to catch a ball, except that they faced away from the ball, creating the impression of either a psychic reaction or coincidental non-goal-directed behavior, which makes movements of the ball appear to be an implausible cause of the catcher's intention to act. Across four experiments, we found that humans were more likely to judge the catcher's body movements to be continuous in the causal condition than in the non-causal condition. The effect was maintained as long as the intention-based causal relation was present, even when only part of the chain of causal events was observed. These findings indicate that intention-based cause-effect relations in human actions guide perceptual interpolation of body movements.
       
  • Underspecification in toddlers’ and adults’ lexical
           representations
    • Abstract: Publication date: December 2019Source: Cognition, Volume 193Author(s): Jie Ren, Uriel Cohen Priva, James L. Morgan Recent research has shown that toddlers’ lexical representations are phonologically detailed, quantitatively much like those of adults. Studies in this article explore whether toddlers’ and adults’ lexical representations are qualitatively similar. Psycholinguistic claims (Lahiri & Marslen-Wilson, 1991; Lahiri & Reetz, 2002, 2010) based on underspecification (Kiparsky, 1982 et seq.) predict asymmetrical judgments in lexical processing tasks; these have been supported in some psycholinguistic research showing that participants are more sensitive to noncoronal-to-coronal (pop → top) than to coronal-to-noncoronal (top → pop) changes or mispronunciations. Three experiments using on-line visual world procedures showed that 19-month-olds and adults displayed sensitivities to both noncoronal-to-coronal and coronal-to-noncoronal mispronunciations of familiar words. No hints of any asymmetries were observed for either age group. There thus appears to be considerable developmental continuity in the nature of early and mature lexical representations. Discrepancies between the current findings and those of previous studies appear to be due to methodological differences that cast doubt on the validity of claims of psycholinguistic support for lexical underspecification.
       
  • Dynamics of auditory spatial attention gradients
    • Abstract: Publication date: January 2020Source: Cognition, Volume 194Author(s): Edward J. Golob, Jeffrey R. Mock Auditory spatial attention faces the conflicting demands of focusing on the current task while also rapidly shifting location to unexpected stimuli. We examined the interplay of sustained focus and intermittent shifts in an auditory spatial attention task. Most trials required a choice response from a standard location in virtual space (L-R: −90°, 0°, +90°), but occasionally the location shifted between 45°-180°. Reaction time curves for angular shifts had a quadratic shape, with slowing for small shifts but faster reaction times for larger shifts. The reaction time curves were maintained at faster stimulus rates and usually scaled to fit the range of stimulus locations. However, focus on the right had an attenuated curve, and did not scale to the range of locations. The findings suggest two mechanisms: a top-down bias centered on standard locations that decreases with distance, and a bottom-up bias that under these conditions increases with distance from the standard location.
       
  • Facial responses of adult humans during the anticipation and consumption
           of touch and food rewards
    • Abstract: Publication date: January 2020. Source: Cognition, Volume 194. Author(s): Sebastian Korb, Claudia Massaccesi, Andreas Gartus, Johan N. Lundström, Raffaella Rumiati, Christoph Eisenegger, Giorgia Silani. Whether cognitive, motivational and hedonic aspects of reward anticipation and consumption can be reliably assessed with explicit and implicit measures, and whether different motivational (decision utility) and hedonic (experienced utility) processes are recruited by distinct reward types, remain partly unsolved questions that are relevant for theories of social and non-social decision-making. We investigated these topics using a novel experimental paradigm, including carefully matched social and nonsocial rewards, and by focusing on facial responses. Facial expressions are indeed an often-cited implicit measure of rewards’ hedonic impact. For example, food rewards elicit powerful facial responses – characterized by lip smacking, tongue protrusion, and relaxation of the middle face – in human newborns, juvenile monkeys, and adult rats. The same stimuli elicit more nuanced facial reactions in adult humans, which are best captured with facial electromyography (fEMG). However, little is known about facial expressions preceding reward consumption, reflecting the motivation to obtain and possibly the expected pleasantness of a reward, and whether similar facial expressions are elicited by different types of rewards. To investigate these questions, a novel within-subject experimental paradigm was developed. During the anticipation and consumption of social (affective touch) and nonsocial (food) rewards, explicit (ratings of wanting and liking, physical effort) and implicit (fEMG) measures of wanting and liking were taken in 43 healthy adult participants. Reduced activation of the Corrugator Supercilii (CS) muscle (reflecting less frowning and indicating a greater positive response) was found in trials with higher wanting and effort during the anticipation of food rewards, as well as in trials with higher liking and effort during the consumption of food rewards. The CS muscle is thus a sensitive measure of wanting and liking of food rewards both during their anticipation and consumption. Crucially, thanks to careful reward matching, these results cannot be explained by differences in subjective wanting, liking, or effort produced to obtain the two types of rewards. No significant modulation of the Zygomaticus Major (ZM) muscle was found for social or food rewards. Exploratory analyses, however, indicated that the ZM may activate during the delivery of the most wanted touch, but not for the most wanted food. The absence of significant effects of social rewards on the activation of the CS and ZM muscles is discussed in relation to the specifics of this innovative task comparing two types of matched rewards in the same participants. The present findings contribute to the understanding of the processes underlying motivational and hedonic aspects of rewards, and may therefore inform models of social and non-social decision-making.
       
  • An initial accuracy focus prevents illusory truth
    • Abstract: Publication date: January 2020Source: Cognition, Volume 194Author(s): Nadia M. Brashier, Emmaline Drew Eliseev, Elizabeth J. Marsh News stories, advertising campaigns, and political propaganda often repeat misleading claims, increasing their persuasive power. Repeated statements feel easier to process, and thus truer, than new ones. Surprisingly, this illusory truth effect occurs even when claims contradict young adults’ stored knowledge (e.g., repeating The fastest land animal is the leopard makes it more believable). In four experiments, we tackled this problem by prompting people to behave like “fact checkers.” Focusing on accuracy at exposure (giving initial truth ratings) wiped out the illusion later, but only when participants held relevant knowledge. This selective benefit persisted over a delay. Our findings inform theories of how people evaluate truth and suggest practical strategies for coping in a “post-truth world.”
       
  • The role of language in novel task learning
    • Abstract: Publication date: January 2020Source: Cognition, Volume 194Author(s): Felice van 't Wout, Christopher Jarrold The ability to rapidly acquire novel cognitive skills is a hallmark of human cognition. Theories of skill acquisition assume that this process is reliant on language, but to date this assertion has not been conclusively supported by empirical evidence. In two experiments participants (total N = 68) were required to learn, by trial-and-error, the correct response to sets of five object stimuli. To investigate the contribution of language to this process, participants performed a verbal (articulatory suppression), a non-verbal (foot tapping), or no distractor task during the first or second half of each task. In both experiments, articulatory suppression resulted in increased error rates (compared to foot tapping), but only during the first (and not the second) half of each task. These results constitute the first convincing evidence for the diminishing role of language in novel task learning and are discussed in relation to theories of skill acquisition.
       
  • Dynamics of visual attention revealed in foraging tasks
    • Abstract: Publication date: January 2020Source: Cognition, Volume 194Author(s): Tómas Kristjánsson, Ian M. Thornton, Andrey Chetverikov, Árni Kristjánsson Visual search tasks play a key role in theories of visual attention. But single-target search tasks may provide only a snapshot of attentional orienting. Foraging tasks with multiple targets of different types arguably provide a closer analogy to everyday attentional processing. Set-size effects have in the literature formed the basis for inferring how attention operates during visual search. We therefore measured the effects of absolute set-size (constant target-distractor ratio) and relative set-size (constant set-size but target-distractor ratio varies) on foraging patterns during “feature” foraging (targets differed from distractors on a single feature) and “conjunction” foraging (targets differed from distractors on a combination of two features). Patterns of runs of same target-type selection were similar regardless of whether absolute or relative set-size varied: long sequential runs during conjunction foraging but rapid switching between target types during feature foraging. But although foraging strategies differed between feature and conjunction foraging, surprisingly, intertarget times throughout foraging trials did not differ much between the conditions. Typical response time by set-size patterns for single-target visual search tasks were only observed for the last target during foraging. Furthermore, the foraging patterns within trials involved several distinct phases, that may serve as markers of particular attentional operations. Foraging tasks provide a remarkably intricate picture of attentional selection, far more detailed than traditional single-target visual search tasks, and well-known theories of visual attention have difficulty accounting for key aspects of the observed foraging patterns. Finally, we discuss how theoretical conceptions of attention could be modified to account for these effects.
       
  • Evidence for metacognitive bias in perception of voluntary action
    • Abstract: Publication date: January 2020. Source: Cognition, Volume 194. Author(s): Lucie Charles, Camille Chardin, Patrick Haggard. Studies of metacognition often measure confidence in perceptual decisions. Much less is known about metacognition of action, and specifically about how people estimate the success of their own actions. In the present study, we compare metacognitive abilities between voluntary actions, passive movements matched to those actions, and purely visual signals. Participants reported their confidence in judging whether a brief visual probe appeared ahead of or behind their finger during a simple flexion/extension movement. The finger could be moved voluntarily, or could be moved passively by a robot replaying their own previous movements. In a third condition, participants did not move, but a visual cursor replayed their previous voluntary movements. Metacognitive sensitivity was comparable across active movements, passive finger displacement, and visual cursor replay. However, a progressive metacognitive bias was found, with active movements leading to overconfidence in first-level judgement relative to passive movements, at equal levels of actual evidence. Further, both active and passive movements produced overconfidence relative to visual signals. Taken together, our results may partly explain some of the peculiarities that arise when one judges one’s own actions.
       
  • Left is “good”: Observed action affects the association between
           horizontal space and affective valence
    • Abstract: Publication date: December 2019Source: Cognition, Volume 193Author(s): Xiaolei Song, Feng Yi, Junting Zhang, Robert W. Proctor According to the body-specificity hypothesis, people will associate positive and negative emotional valence with the relative fluency of the left or right responding hand. Prior studies have shown that temporary changes in the fluency of the responding hand can influence the association of emotional valence with left or right, even under circumstances of action observation. But the reason why this change occurs is still controversial. The purpose of the present study was to replicate this finding and to identify the underlying mechanism. Experiment 1 duplicated a modified paradigm “Bob goes to the zoo” to verify the existence of space-valence association for Chinese right–handers. The results indicated that they had the same pattern of right-good/left-bad. However, after action training and observation in Experiment 2 that reduced the fluency of the right hand temporarily, both actors’ and observers’ space-valence associations were reversed as well. However, when observers’ potential motor capacities were constrained by binding their responding hands behind them (Experiment 3) or in front of them (Experiment 4), the observers associated the positive affect with their dominant right hand instead of the left hand in Experiment 3, whereas the observers in Experiment 4 still showed the same association pattern as the actors and the observers in Experiment 2. This study provides further evidence that the effect of alternative motor fluency on space-valence association in the observer is mainly modulated by the connection between the outcomes and space, with body posture also influencing the association.
       
  • Differential benefits of mental training types for attention, compassion,
           and theory of mind
    • Abstract: Publication date: January 2020Source: Cognition, Volume 194Author(s): Fynn-Mathis Trautwein, Philipp Kanske, Anne Böckler, Tania Singer Mindfulness- and, more generally, meditation-based interventions increasingly gain popularity, effectively promoting cognitive, affective, and social capacities. It is unclear, however, if different types of practice have the same or specific effects on mental functioning. Here we tested three consecutive three-month training modules aimed at cultivating either attention, socio-affective qualities (such as compassion), or socio-cognitive skills (such as theory of mind), in three training cohorts and a retest control cohort (N = 332). While attentional performance improved most consistently after attention training, compassion increased most after socio-affective training and theory of mind partially improved after socio-cognitive training. These results show that specific mental training practices are needed to induce plasticity in different domains of mental functioning, providing a foundation for evidence-based development of more targeted interventions adapted to the needs of different education, labor, and health settings.
       
  • The role of proactive control on subcomponents of language control:
           Evidence from trilinguals
    • Abstract: Publication date: January 2020Source: Cognition, Volume 194Author(s): Huanhuan Liu, Yingying Zhang, Esti Blanco-Elorrieta, Yuying He, Baoguo Chen Language control in bilingual individuals has been the source of thorough study in the last decade. However, the characterization of the subcomponents of this cognitive process remains shallow. In this experiment we tested Chinese, English, and Japanese trilinguals who completed a modified language-switching task. Participants named pictures in one of three conditions: Repeat language, Switch-away from a language (target language undetermined) and Switch-to a particular language (target language determined). Brain activity was recorded by electroencephalogram (EEG) and general proactive control ability was measured independently by the AX-Continuous Performance Test (AX-CPT). Switch-to and Switch-away processes elicited distinct neural signatures. Both at the cue and stimulus stage, Switching away elicited more negative activity at an early time window (N2); and less positive activity at the later time window (LPC). Further, at the cue stage this amplitude was negatively correlated with the proactive control index. These results show that the different subcomponents of cued-switching are dissociable and that there is a direct relation between the online signatures elicited by some of these processes and the general proactive control abilities of individuals.
       
  • Whereof one cannot speak: How language and capture of visual attention
           interact
    • Abstract: Publication date: January 2020Source: Cognition, Volume 194Author(s): Florian Goller, Soonja Choi, Upyong Hong, Ulrich Ansorge Our research addresses the important question whether language influences cognition by studying crosslinguistic differences in nonlinguistic visual search tasks. We investigated whether capture of visual attention is mediated by characteristics corresponding to concepts that are differently expressed across different languages. Korean grammatically distinguishes between tight- (kkita) and loose-fit (nehta) containment whereas German collapses them into a single semantic category (in). Although linguistic processing was neither instructed nor necessary to perform the visual search task, we found that Korean speakers showed attention capture by non-instructed but target-coincident (Experiment 1) or distractor-coincident (Experiments 4 and 5) spatial fitness of the stimuli, whereas German speakers were not sensitive to it. As the tight- versus loose-fit distinction is grammaticalized only in the Korean but not the German language, our results demonstrate that language influences which visual features capture attention even in non-linguistic tasks that do not require paying attention to these features. In separate control experiments (Experiments 2 and 3), we ruled out cultural or general cognitive group differences between Korean and German speaking participants as alternative explanations. We outline the mechanisms underlying these crosslinguistic differences in nonlinguistic visual search behaviors. This is the first study showing that linguistic spatial relational concepts held in long-term memory can affect attention capture in visual search tasks.
       
  • How abstract is syntax? Evidence from structural priming
    • Abstract: Publication date: December 2019Source: Cognition, Volume 193Author(s): Jayden Ziegler, Giulia Bencini, Adele Goldberg, Jesse Snedeker In 1990, Bock and Loebell found that passives (e.g., The 747 was radioed by the airport’s control tower) can be primed by intransitive locatives (e.g., The 747 was landing by the airport’s control tower). This finding is often taken as strong evidence that structural priming occurs on the basis of a syntactic phrase structure that abstracts across lexical content, including prepositions, and is uninfluenced by the semantic roles of the arguments. However, all of the intransitive locative primes in Bock and Loebell contained the preposition by (by-locatives), just like the passive targets. Therefore, the locative-to-passive priming may have been due to the adjunct headed by by, rather than being a result of purely abstract syntax. The present experiment investigates this possibility. We find that passives and intransitive by-locatives are equivalent primes, but intransitive locatives with other prepositions (e.g., The 747 has landed near the airport control tower) do not prime passives. We conclude that a shared abstract, content-less tree structure is not sufficient for passive priming to occur. We then review the prior results that have been offered in favor of abstract tree priming, and note the range of evidence can be considerably narrowed—and possibly eliminated—once effects of animacy, semantic event structure, shared morphology, information structure, and rhythm are taken into account.
       
  • Learning reward frequency over reward probability: A tale of two learning
           rules
    • Abstract: Publication date: December 2019. Source: Cognition, Volume 193. Author(s): Hilary J. Don, A. Ross Otto, Astin C. Cornwall, Tyler Davis, Darrell A. Worthy. Learning about the expected value of choice alternatives associated with reward is critical for adaptive behavior. Although human choice preferences are affected by the presentation frequency of reward-related alternatives, this may not be captured by some dominant models of value learning, such as the delta rule. In this study, we examined whether reward learning is driven more by learning the probability of reward provided by each option, or by how frequently each option has been rewarded, and assessed how well models based on average reward (e.g. the delta model) and models based on cumulative reward (e.g. the decay model) can account for choice preferences. In a binary-outcome choice task, participants selected between pairs of options that had reward probabilities of 0.65 (A) versus 0.35 (B) or 0.75 (C) versus 0.25 (D). Crucially, during training there were twice as many AB trials as CD trials, such that option A was associated with higher cumulative reward, while option C gave higher average reward. Participants then decided between novel combinations of options (e.g., AC). Most participants preferred option A over C, a result predicted by the Decay model, but not the Delta model. We also compared the Delta and Decay models to both simpler and more complex models that assumed additional mechanisms, such as representation of uncertainty. Overall, models that assume learning about cumulative reward provided the best account of the data.
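To make the contrast between the two learning rules concrete, here is a rough sketch of a standard delta (average-reward) update and a decay (cumulative-reward) update applied to the AB/CD design described above. The learning rate, decay parameter, trial counts, and random-exploration choice policy are illustrative assumptions, not the authors' fitted model.

```python
import numpy as np

rng = np.random.default_rng(0)

def delta_update(v, reward, alpha=0.1):
    """Delta rule: nudge the value estimate toward the obtained reward (tracks average reward)."""
    return v + alpha * (reward - v)

def decay_update(values, chosen, reward, decay=0.9):
    """Decay rule: every value decays each trial, and the chosen option also accrues the
    obtained reward, so values track recency-weighted cumulative reward."""
    values = values * decay
    values[chosen] += reward
    return values

# Hypothetical training schedule mimicking the design: twice as many AB trials
# (reward p = .65 vs .35) as CD trials (p = .75 vs .25), randomly interleaved.
p_reward = {"A": 0.65, "B": 0.35, "C": 0.75, "D": 0.25}
names = list(p_reward)
delta_v = dict.fromkeys(p_reward, 0.0)
decay_v = np.zeros(4)

trials = ["AB"] * 200 + ["CD"] * 100
rng.shuffle(trials)
for pair in trials:
    choice = rng.choice(list(pair))          # random exploration, for illustration only
    r = float(rng.random() < p_reward[choice])
    delta_v[choice] = delta_update(delta_v[choice], r)
    decay_v = decay_update(decay_v, names.index(choice), r)

# Delta values approximate each option's reward probability (typically C > A), whereas
# decay values track cumulative reward, so the more frequently trained A ends up above C.
print({k: round(v, 2) for k, v in delta_v.items()})
print(dict(zip(names, decay_v.round(2))))
```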
       
  • Pragmatic processing: An investigation of the (anti-)presuppositions of
           determiners using mouse-tracking
    • Abstract: Publication date: December 2019Source: Cognition, Volume 193Author(s): Cosima Schneider, Carolin Schonard, Michael Franke, Gerhard Jäger, Markus Janczyk A presupposition is a condition that has to be met in order for a linguistic expression to be appropriate. The definite determiner (as in the banana) triggers the uniqueness-presupposition that there is a uniquely identifiable banana in the relevant discourse context. The indefinite determiner (as in a banana) is similarly associated with anti-uniqueness (that there are several bananas). Application of the Maximize Presupposition principle to the indefinite determiner suggests that this latter effect results indirectly as an anti-presupposition from considering the uniqueness-presupposition of the definite determiner, which is then negated. This results in increased processing difficulty. We utilized mouse-tracking to compare processing of definite and indefinite determiners when used felicitously and infelicitously in a particular context. First, processing of the indefinite determiner was associated with more processing difficulties compared with the definite determiner. Second, we also observed evidence for an initial temporary activation and evaluation of the uniqueness-presupposition, just as derived from anti-presupposition theory and the Maximize Presupposition principle.
       
  • A chink in the armor: The influence of training on generalization learning
           impairments after viewing traumatic stimuli
    • Abstract: Publication date: December 2019Source: Cognition, Volume 193Author(s): Shilat Haim-Nachum, Einat Levy-Gigi Studies have demonstrated that similarly to individuals with PTSD, non-PTSD individuals with repeated traumatic-exposure display selective impairments in hippocampal-related functions. A central example is their impaired generalization learning. Interestingly, previous findings revealed that the nature of this impairment varied as a function of occupation; while firefighters display impaired generalization of negative context, police crime scene investigators (CSI) display impaired generalization of negative cue. One possible explanation for these discrepancies may relate to the different job requirements and unique training. Specifically, firefighters are primed to regard the context during traumatic events whereas CSI police are primed to regard specific objects (cues) in the environment. The aim of the present study was to examine the interactive effect of exposure and training on generalization learning. Eighty-two healthy volunteers were exposed to either neutral or traumatic images while receiving instructions to refer either to the images’ general contexts or to their specific cues. It was found that while both groups equally acquired and retained stimulus-outcome associations, only participants who were exposed to traumatic images showed impaired generalization learning. This impairment demonstrated a particular difficulty to generalize negative but not positive outcomes. Most importantly, as expected, there was a significant interaction between type of training and the observed impairments. Specifically, individuals who were previously trained to refer to general contexts showed a selective overgeneralization of negative contexts, while individuals who were trained to refer to specific cues displayed a selective overgeneralization of negative cues. The results suggest that trauma exposed individuals show the most vulnerability in precisely the areas in which they were most trained. We discuss the ways in which improving generalization learning may impact individuals' process of trauma recovery and might set the ground for developing treatment and prevention methods.
       
  • When do we punish people who don’t?
    • Abstract: Publication date: December 2019. Source: Cognition, Volume 193. Author(s): Justin W. Martin, Jillian J. Jordan, David G. Rand, Fiery Cushman. People often punish norm violations. In what cases is such punishment viewed as normative—a behavior that we “should” or even “must” engage in? We approach this question by asking when people who fail to punish a norm violator are, themselves, punished. (For instance, a boss who fails to punish transgressive employees might, herself, be fired.) We conducted experiments exploring the contexts in which higher-order punishment occurs, using both incentivized economic games and hypothetical vignettes describing everyday situations. We presented participants with cases in which an individual fails to punish a transgressor, either as a victim (second party) or as an observer (third party). Across studies, we consistently observed higher-order punishment of non-punishing observers. Higher-order punishment of non-punishing victims, however, was consistently weaker, and sometimes non-existent. These results demonstrate the selective application of higher-order punishment, provide a new perspective on the psychological mechanisms that support it, and provide some clues regarding its function.
       
  • Crying helps, but being sad doesn’t: Infants constrain nominal reference
           online using known verbs, but not known adjectives
    • Abstract: Publication date: December 2019Source: Cognition, Volume 193Author(s): Kristen Syrett, Alexander LaTourrette, Brock Ferguson, Sandra R. Waxman Speakers can make inferences about the meaning of new words appearing in an utterance based on the lexical semantics of other words that co-occur with them. Previous work has revealed that infants at 19 and 24 months of age can recruit the semantic selectional restrictions of known verbs (e.g., eating) to deduce that a noun appearing in the subject position maps onto an animate referent. We asked whether this ability to capitalize on the semantics of familiar words to identify the referent of a novel noun in subject position extends to adjectives, which also denote properties, and which also have animacy constraints (e.g., hungry). We found that unlike in the previous studies with verbs, neither 24- nor 36-month-olds could successfully recruit known adjectival semantics in an online task to home in on an animate nominal referent. However, 36-month-olds were successful in a more interactive, forced-choice version of the task without such strict time limitations. We discuss multiple non-mutually-exclusive hypotheses for this pattern of results, focusing on the role of the morphosyntactic cues, the (lack of) perceptual cues for the target property in context of the utterance, truth conditions, and cross-linguistic implications. These possibilities raise fundamental questions about the infant’s developing lexicon and the linguistic and conceptual mechanisms at play in the process of word learning.
       
  • Possibilities as the foundation of reasoning
    • Abstract: Publication date: December 2019Source: Cognition, Volume 193Author(s): P.N. Johnson-Laird, Marco Ragni Reasoning about possibilities is fundamental in daily life. Yet, it has been little studied in psychology. We present a psychological theory in which it is the foundation of human reasoning. The theory explains how possibilities have distinct interpretations (deontic, epistemic, and alethic), how people represent them in models, and how these models yield inferences. Key principles are that the semantics of possibilities are the same finitary alternatives underlying probabilities, that speech acts can create obligations inexpressible as probabilities, that compound assertions – conditionals and disjunctions – refer to conjunctions of possibilities holding in default of knowledge to the contrary, and that mental models condense multiple consistent possibilities into one. The theory is incompatible with all normal modal logics and with probabilistic logic. Yet, experiments have corroborated its predictions. The article discusses its precursors, rivals, and potentials.
       
  • The Dialogical Entailment Task
    • Abstract: Publication date: December 2019Source: Cognition, Volume 193Author(s): Niels Skovgaard-Olsen In this paper, a critical discussion is made of the role of entailments in the so-called New Paradigm of psychology of reasoning based on Bayesian models of rationality (Elqayam & Over, 2013). It is argued that assessments of probabilistic coherence cannot stand on their own, but that they need to be integrated with empirical studies of intuitive entailment judgments. This need is motivated not just by the requirements of probability theory itself, but also by a need to enhance the interdisciplinary integration of the psychology of reasoning with formal semantics in linguistics. The constructive goal of the paper is to introduce a new experimental paradigm, called the Dialogical Entailment task, to supplement current trends in the psychology of reasoning towards investigating knowledge-rich, social reasoning under uncertainty (Oaksford & Chater, 2019). As a case study, this experimental paradigm is applied to reasoning with conditionals and negation operators (e.g. CEM and wide and narrow-scope negation). As part of the investigation, participants’ entailment judgments are evaluated against their probability evaluations to assess participants’ cross-task consistency over two experimental sessions.
       
  • Integration to boundary in decisions between numerical sequences
    • Abstract: Publication date: December 2019. Source: Cognition, Volume 193. Author(s): Moshe Glickman, Marius Usher. Integration-to-boundary is a prominent normative principle used in evidence-based decisions to explain the speed-accuracy trade-off and determine the decision time. Despite its prominence, however, the decision boundary is not directly observed, but rather is theoretically assumed, and there is still an ongoing debate regarding its form: fixed vs. collapsing. The aim of this study is to show that the integration-to-boundary process extends to decisions between rapid pairs of numerical sequences (2 Hz rate), and to determine the boundary type by directly monitoring the noisy accumulated evidence. In a set of two experiments (supplemented by computational modelling), we demonstrate that integration to a collapsing boundary takes place in such tasks, ruling out non-integration heuristic strategies. Moreover, we show that participants can adaptively adjust their boundaries in response to reward contingencies. Finally, we discuss the implications for decision optimality and the nature of processes and representations in numerical cognition.
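A schematic simulation of integration-to-boundary with a fixed versus a collapsing boundary, in the spirit of the task described above (a noisy evidence sample accrues from each pair of numbers). The drift, noise, boundary height, and collapse rate are arbitrary illustrative values, not the authors' model fits.

```python
import numpy as np

rng = np.random.default_rng(1)

def integrate_to_boundary(mean_diff=0.3, noise=1.0, b0=3.0, collapse=0.0, max_steps=40):
    """Accumulate the noisy evidence difference between two sequences until it crosses a
    boundary. `collapse` shrinks the boundary by that amount per step (0 = fixed boundary)."""
    evidence = 0.0
    for t in range(1, max_steps + 1):
        evidence += mean_diff + noise * rng.standard_normal()   # one sample pair (~2 Hz)
        boundary = max(b0 - collapse * t, 0.5)                  # collapsing floor at 0.5
        if abs(evidence) >= boundary:
            return ("left" if evidence > 0 else "right"), t
    return ("left" if evidence > 0 else "right"), max_steps     # forced response

# Compare mean decision times under fixed vs collapsing boundaries: the collapsing
# boundary forces earlier commitments on slow (low-evidence) trials.
fixed = [integrate_to_boundary(collapse=0.0)[1] for _ in range(2000)]
collapsing = [integrate_to_boundary(collapse=0.1)[1] for _ in range(2000)]
print(np.mean(fixed), np.mean(collapsing))
```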
       
  • Human body motion captures visual attention and elicits pupillary dilation
    • Abstract: Publication date: December 2019Source: Cognition, Volume 193Author(s): Elin H. Williams, Fil Cristino, Emily S. Cross The social motivation theory proposes that individuals naturally orient their attention to the social world. Research has documented the rewarding value of social stimuli, such as biological motion, to typically developed individuals. Here, we used complementary eye tracking measures to investigate how social motion cues affect attention and arousal. Specifically, we examined whether viewing the human body moving naturally versus mechanically leads to greater attentional engagement and changes in autonomic arousal (as assessed by pupil size measures). Participants completed an attentional disengagement task in two independent experiments, while pupillary responses were recorded. We found that natural, human-like motion produced greater increases in attention and arousal than mechanical motion, whether the moving agent was human or not. These findings contribute an important piece to our understanding of social motivation by demonstrating that human motion is a key social stimulus that engages visual attention and induces autonomic arousal in the viewer.
       
  • Self-control is linked to interoceptive inference: Craving regulation and
           the prediction of aversive interoceptive states induced with inspiratory
           breathing load
    • Abstract: Publication date: December 2019Source: Cognition, Volume 193Author(s): Johann D. Kruschwitz, Anne Kausch, Anastasia Brovkin, Anita Keshmirian, Martin P. Paulus, Thomas Goschke, Henrik Walter The interoceptive inference framework suggests that our brain continuously anticipates future states of our body and aims to avoid events that might counteract homeostasis by minimizing prediction errors through active inference; e.g. appropriate actions. If predicted interoceptive models are inaccurate, behavior inconsistent with our long-term homeostatic goals may result; e.g. in failures in self-control. Using a within-subject design including an inspiratory breathing-load task to examine the prediction of aversive interoceptive perturbation and a craving-regulation for palatable foods task, we examined the relationship between self-control and aversive interoceptive predictive models. Those individuals (n = 51 healthy individuals from the general population) who were more accurate in predicting their interoceptive state with respect to anticipated versus experienced dyspnea were significantly more effective in the down-regulation of craving using negative future-thinking strategies. These individuals also scored higher on a measure of trait self-control, i.e. self-regulation to achieve long-term goals. Thus, individuals with more accurate predictive interoceptive models are better able to modulate cravings and thus exert better self-control.
       
  • Bilinguals apply language-specific grain sizes during sentence reading
    • Abstract: Publication date: December 2019Source: Cognition, Volume 193Author(s): Ciara Egan, Gary M. Oppenheim, Christopher Saville, Kristina Moll, Manon Wyn Jones Languages differ in the consistency with which they map orthography to phonology, and a large body of work now shows that orthographic consistency determines the style of word decoding in monolinguals. Here, we characterise word decoding in bilinguals whose two languages differ in orthographic consistency, assessing whether they maintain two distinct reading styles or settle on a single ‘compromise’ reading style. In Experiment 1, Welsh-English bilinguals read cognates and pseudowords embedded in Welsh and English sentences. Eye-movements revealed that bilinguals dynamically alter their decoding strategy according to the language context, including more fixations during lexical access for cognates in the more consistent orthography (Welsh) than in the less consistent orthography (English), and these effects were specific to word (as opposed to pseudoword) processing. In Experiment 2, we compared the same bilinguals’ eye movements in the English sentence reading context to those of monolinguals’. Bilinguals’ eye-movement behaviour was very similar to monolinguals’ when reading English, suggesting that their knowledge of the more consistent orthography (Welsh) did not alter their decoding style when reading in English. This study presents the first characterisation of bilingual decoding style in sentence reading. We discuss our findings in relation to connectionist reading models and models of bilingual visual word recognition.
       
  • Keep trying!: Parental language predicts infants’ persistence
    • Abstract: Publication date: December 2019Source: Cognition, Volume 193Author(s): Kelsey Lucca, Rachel Horton, Jessica A. Sommerville Infants’ persistence in the face of challenges predicts their learning across domains. In older children, linguistic input is an important predictor of persistence: when children are praised for their efforts, as opposed to fixed traits, they try harder on future endeavors. Yet, little is known about the impact of linguistic input as individual differences in persistence are first emerging, during infancy. Based on a preliminary investigation of the CHILDES database, which revealed that language surrounding persistence is an early-emerging feature of children’s language environment, we conducted an observational study to test how linguistic input in the form of praise and persistence-focused language more broadly impacts infants’ persistence. In Study 1, 18-month-olds and their caregivers participated in two tasks: a free-play task (a gear stacker) and a joint-book reading task. We measured parental language and infants’ persistent gear stacking. Findings revealed that infants whose parents spent more time praising their efforts and hard work (process praise), and used more persistence-focused language in general, were more persistent than infants whose parents used this language less often. Study 2 extended these findings by examining whether the effects of parental language on persistence carry over to contexts in which parents are uninvolved. The findings revealed that parental use of process praise predicted infants’ persistence even in the absence of parental support. Critically, these findings could not be explained by caregivers’ reporting on their own persistence. Together, these findings suggest that as early as 18 months, linguistic input is a key predictor of persistence.
       
  • Sensory cue combination in children under 10 years of age
    • Abstract: Publication date: December 2019. Source: Cognition, Volume 193. Author(s): James Negen, Brittney Chere, Laura-Ashleigh Bird, Ellen Taylor, Hannah E. Roome, Samantha Keenaghan, Lore Thaler, Marko Nardini. Cue combination occurs when two independent noisy perceptual estimates are merged together as a weighted average, creating a unified estimate that is more precise than either single estimate alone. Surprisingly, this effect has not been demonstrated compellingly in children under the age of 10 years, in contrast with the array of other multisensory skills that children show even in infancy. Instead, across a wide variety of studies, precision with both cues is no better than the best single cue – and sometimes worse. Here we provide the first consistent evidence of cue combination in children from 7 to 10 years old. Across three experiments, participants showed evidence of a bimodal precision advantage (Experiments 1a and 1b) and the majority were best-fit by a combining model (Experiment 2). The task was to localize a target horizontally with a binaural audio cue and a noisy visual cue in immersive virtual reality. Feedback was given as well, which could both (a) help participants judge how reliable each cue is and (b) help correct between-cue biases that might prevent cue combination. Crucially, our results show cue combination when feedback is only given on single cues – therefore, combination itself was not a strategy learned via feedback. We suggest that children at 7–10 years old are capable of cue combination in principle, but must have sufficient representations of reliabilities and biases in their own perceptual estimates as relevant to the task, which can be facilitated through task-specific feedback.
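For reference, the cue-combination benchmark against which performance in such tasks is typically assessed is the reliability-weighted (inverse-variance) average of the single-cue estimates. Below is a minimal sketch with made-up audio and visual estimates and noise levels, not the study's data.

```python
import numpy as np

def combine_cues(x_audio, sd_audio, x_visual, sd_visual):
    """Optimal (maximum-likelihood) combination of two independent, unbiased estimates:
    weight each cue by its reliability (inverse variance). The combined estimate is
    more precise than either single cue alone."""
    w_a = (1 / sd_audio**2) / (1 / sd_audio**2 + 1 / sd_visual**2)
    w_v = 1 - w_a
    combined = w_a * x_audio + w_v * x_visual
    combined_sd = np.sqrt(1 / (1 / sd_audio**2 + 1 / sd_visual**2))
    return combined, combined_sd

# Hypothetical horizontal localisation: audio says 10 deg (sd 4), vision says 14 deg (sd 2).
est, sd = combine_cues(10, 4, 14, 2)
print(round(est, 1), round(sd, 2))   # 13.2 deg, sd ~1.79: more precise than the best single cue (sd 2)
```

The "bimodal precision advantage" reported in the abstract corresponds to the combined standard deviation falling below the better single-cue standard deviation, as in this example.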
       
  • Parafoveal processing of phonology and semantics during the reading of
           Korean sentences
    • Abstract: Publication date: December 2019. Source: Cognition, Volume 193. Author(s): Ming Yan, Aiping Wang, Hosu Song, Reinhold Kliegl. The present study sets out to address two fundamental questions in the reading of continuous text: whether semantic and whether phonological information from upcoming words can be accessed during natural reading. To address them, we investigated parafoveal processing during the reading of Korean sentences, manipulating semantic and phonological information from parafoveal preview words. In addition to the first evidence for a semantic preview effect in Korean, we found that Korean readers have stronger and more long-lasting phonological than semantic activation from parafoveal words in second-pass reading. The present study provides an example of how the human mind can flexibly adjust processing priority across different types of information based on the linguistic environment.
       
  • Infants use knowledge of emotions to augment face perception: Evidence of
           top-down modulation of perception early in life
    • Abstract: Publication date: December 2019Source: Cognition, Volume 193Author(s): Naiqi G. Xiao, Lauren L. Emberson While top-down modulation is believed to be central to adult perception, the developmental origins of this ability are unclear. Here, we present a direct, behavioral investigation of top-down modulation of perception in infancy using emotional face perception as a test case. We investigated whether 9-month-olds can modulate their face perception based on predictive, auditory emotional cues without any training or familiarization procedure. Infants first heard a 3-second emotional vocal sound (happy/angry) while their gaze was held in the center of the screen. Then, they were presented with a pair of emotional and neutral faces images without any audio sound. The faces were small (4.70° × 5.80°) and presented in randomized locations outside their focus of attention. We measured the initial latency to shift gaze to look at a congruent emotional face as an index of infants’ pre-attentive perception of these faces. We found that infants’ face perception was augmented by preceding emotional cues: They were faster to look at the emotional face after hearing an emotionally congruent sound than an incongruent one. Moreover, the emotional sounds boosted perception of congruent faces 200 ms after the onset of the faces. These top-down effects were robust for both happy and angry emotions, indicating a flexible and active control of perception based on different top-down cues. A control study further supported the view that the Congruency effect is due to a top-down influence on face perception rather than a rapid matching of cross-modal emotional signals. Together, these findings demonstrate that top-down modulation of perception is already quite sophisticated early in development. Raw data is available on Github (https://github.com/naiqixiao/CuedEmotion.git).
       
  • The impact of psychostimulants on sustained attention over a 24-h period
    • Abstract: Publication date: December 2019Source: Cognition, Volume 193Author(s): Lauren N. Whitehurst, Sara Agosta, Roberto Castaños, Lorella Battelli, Sara C. Mednick The off-label use of psychostimulants is a growing trend in healthy adults with many turning to these medications to increase alertness, attentional focus, and to help them study. However, the empirical literature on the efficacy of these medications for cognitive enhancement is controversial and the longer-term impact of these drugs on health and cognitive processing has not been thoroughly examined. Specifically, sleep supports daytime alertness, vigilance, and sustained attention, yet stimulants significantly disrupt sleep. Here, using a double-blind, placebo-controlled, crossover design, we tested the impact morning administration of psychostimulants (dextroamphetamine; DEX) had on: (1) tests of attention 75-min and 12-h after drug ingestion, (2) nighttime sleep and (3) post-sleep attention in healthy, young adults. First, we found that repeated testing led to significant decreases in performance from baseline in the placebo condition, and that DEX, compared to placebo, prevented deterioration at the 75-min test, and selectively for visual field at the 12 h and 24 h tests. We also found that stimulants, compared to placebo, benefitted attentional processing 75-min post-drug but this did not persist to the delayed test 12-h after drug administration. Additionally, morning stimulant administration resulted in robust nighttime sleep disruptions, yet post-sleep sustained attention was equivalent in the stimulant and placebo conditions, indicating that the initial boost to performance dissipated at 24 h, but the decrease was not significantly worse than placebo. Together, these results suggest that stimulant medications, commonly used off-label for cognitive enhancement may prevent deterioration of sustained attention brought on by repeated within-day testing. Additionally, these medications substantially disrupt nighttime sleep; which while coming at little cost to next-day attentional processing, may have steeper consequences for other cognitive domains.
       
  • Shake it baby, but only when needed: Preschoolers adapt their exploratory
           strategies to the information structure of the task
    • Abstract: Publication date: December 2019. Source: Cognition, Volume 193. Author(s): Azzurra Ruggeri, Nora Swaboda, Zi Lin Sim, Alison Gopnik. Previous research has suggested that active engagement with the world drives children’s remarkable learning capabilities. We investigated whether preschoolers are “ecological learners,” that is, whether they are able to select those active learning strategies that are most informative in a given task. Children had to choose which of two exploratory actions (open vs. shake) to perform to find an egg shaker hidden in one of four small boxes contained within two larger boxes. Prior to this game, children either learnt that the egg was equally likely to be found in any of the four small boxes (Uniform condition) or that it was most likely to be found in one particular small box (Skewed condition). Results of Study 1 show that 3- and 4-year-olds successfully tailored their exploratory actions to the different likelihood distributions: they were more likely to shake first in the Uniform than in the Skewed condition. Five-year-olds were equally likely to shake first irrespective of condition, even when incentivized to shake only when needed (Study 2a). However, when the relevance of the frequency training for the hiding game was highlighted (Study 2b and Study 2c), the 5-year-olds showed the same behavioural pattern as the younger preschoolers in Study 1. We suggest that ecological learning may be a key mechanism underlying children’s effectiveness in active learning.
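      A rough way to see why shaking should pay off only under the uniform prior is to compare the expected information gain of the two actions. The Python sketch below does this; the action model (shaking a large box reveals which half holds the egg, opening the likeliest small box reveals a hit or a miss) and the particular skewed probabilities are illustrative assumptions, not values taken from the study.

        from math import log2

        def entropy(p):
            # Shannon entropy (in bits) of a discrete distribution.
            return -sum(q * log2(q) for q in p if q > 0)

        def expected_gain(prior, partition):
            # Expected entropy reduction from an action that reveals which cell
            # of `partition` (a list of index sets over boxes) contains the egg.
            h_after = 0.0
            for cell in partition:
                p_cell = sum(prior[i] for i in cell)
                if p_cell > 0:
                    posterior = [prior[i] / p_cell for i in cell]
                    h_after += p_cell * entropy(posterior)
            return entropy(prior) - h_after

        # Four small boxes; boxes 0-1 sit in one large box, boxes 2-3 in the other.
        shake_large_box = [[0, 1], [2, 3]]   # shaking reveals which half
        open_likeliest = [[0], [1, 2, 3]]    # opening box 0 reveals hit or miss

        uniform = [0.25, 0.25, 0.25, 0.25]
        skewed = [0.85, 0.05, 0.05, 0.05]    # hypothetical skew

        for name, prior in (("uniform", uniform), ("skewed", skewed)):
            print(f"{name}: shake = {expected_gain(prior, shake_large_box):.2f} bits, "
                  f"open likeliest = {expected_gain(prior, open_likeliest):.2f} bits")

      Under these assumptions, shaking is the more informative first action given a uniform prior (1.00 vs. 0.81 bits), whereas opening the likeliest box wins once the prior is sufficiently skewed (0.47 vs. 0.61 bits), mirroring the pattern shown by the 3- and 4-year-olds.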
       
  • Face perception in autism spectrum disorder: Modulation of holistic
           processing by facial emotion
    • Abstract: Publication date: December 2019. Source: Cognition, Volume 193. Author(s): Rebecca Brewer, Geoffrey Bird, Katie L.H. Gray, Richard Cook. Individuals with Autism Spectrum Disorder (ASD; autistic individuals) may exhibit atypical face perception because they fail to process faces holistically. In the context of this hypothesis, it is critical to determine whether autistic individuals exhibit diminished susceptibility to the composite face illusion, widely regarded as a key marker of holistic face processing. To date, however, studies have yielded inconsistent findings. In light of recent evidence suggesting that facial emotion cues increase the strength of the composite face illusion in typical individuals, the present study sought to determine whether the presence of facial emotion also modulates the strength of the composite face illusion in autistic individuals, many of whom experience difficulties recognizing facial expressions. We therefore measured composite face effects in a sample of autistic individuals (N = 20) and matched typical controls (N = 29) using an incidental emotion procedure in which distractor regions varied systematically in their emotion strength. As expected, the presence of facial emotion in the distractor regions of composite face arrangements increased the strength of the illusory distortion induced. The extent of this modulation by facial emotion was similar in the two groups. The composite effects seen in the ASD group were qualitatively and quantitatively similar to those seen in the typical group, suggestive of intact holistic processing in this population.
       
  • Level 2 perspective-taking distinguishes automatic and non-automatic
           belief-tracking
    • Abstract: Publication date: December 2019. Source: Cognition, Volume 193. Author(s): Katheryn Edwards, Jason Low. Little is known about whether human beings’ automatic mindreading is computationally restricted to processing a limited kind of content, and what exactly the nature of that signature limit might be. We developed a novel object-detection paradigm to test adults’ automatic processing in a Level 1 perspective-taking (L1PT) context (where an agent’s belief, but not his visuospatial perspective, is relevantly different) and in a Level 2 perspective-taking (L2PT) context (where both the agent’s belief and visuospatial perspective are relevantly different). Experiment 1 revealed that adults’ reaction times in the L1PT task were speeded by a bystander’s irrelevant belief when tracking two homogeneous objects, but not in the L2PT task when tracking a single heterogeneous object. The limitation is especially striking given that the heterogeneous nature of the single object was fully revealed to participants as well as to the bystander. The results were replicated in two further experiments, which confirmed that the selective modulation of adults’ reaction times was maintained when tracking the location of a single object (Experiment 2) and when attention checks were removed (Experiment 3). Our findings suggest that automatic mindreading draws upon a distinctively minimalist model of the mental that underspecifies the representation of differences in perspective relative to an agent’s position in space.
       
  • A role for metamemory in cognitive offloading
    • Abstract: Publication date: December 2019. Source: Cognition, Volume 193. Author(s): Xiao Hu, Liang Luo, Stephen M. Fleming. Cognitive offloading refers to our reliance on the external environment in order to reduce cognitive demand. For instance, people write notes on paper or smartphones in order not to forget shopping lists or upcoming appointments. A plausible hypothesis is that such offloading relies on metamemory – our confidence in our future memory performance. However, this hypothesis has not been directly tested, and it remains unclear when and how people use external sources to aid their encoding and retrieval of information. In four experiments, we asked participants to learn word pairs and decide whether to offload some of the pairs by “saving” them on a computer. In the memory test, they had the opportunity to use this saved information on half of the trials. Participants adaptively saved the most difficult items and used this offloaded information to boost their memory performance. Crucially, participants' confidence judgments about their memory predicted their decisions to use the saved information, indicating that cognitive offloading is associated with metacognitive evaluation of memory performance. These findings were accommodated by a Bayesian computational model in which beliefs about the performance boost gained from using offloaded information are negatively coupled to an evaluation of memory ability. Together, our findings highlight a close link between metamemory and cognitive offloading.
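      The qualitative link reported here (lower memory confidence predicting use of the saved information) can be captured in a toy decision rule. The sketch below is not the authors' computational model; the logistic form, the parameter values, and the assumed negative coupling between judged memory ability and the expected boost are illustrative assumptions.

        import math

        def p_use_saved(confidence, expected_boost, bias=-1.0, weight=4.0):
            # Probability of consulting the saved word pair. `confidence` is the
            # agent's judged chance of remembering unaided; `expected_boost` is
            # the believed gain from the external store. Lower confidence and a
            # larger believed boost both push the agent toward offloading.
            drive = weight * (1.0 - confidence) * expected_boost + bias
            return 1.0 / (1.0 + math.exp(-drive))

        # Assume the believed boost falls as judged memory ability rises
        # (the negative coupling described in the abstract).
        for confidence in (0.2, 0.5, 0.8):
            expected_boost = 1.0 - confidence
            print(f"confidence = {confidence:.1f} -> "
                  f"P(use saved) = {p_use_saved(confidence, expected_boost):.2f}")

      With these assumed parameters, the probability of consulting the external store falls from about 0.83 at low confidence to about 0.30 at high confidence, reproducing the direction of the reported effect, though not its quantitative form.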
       
  • The role of time perception in temporal binding: Impaired temporal
           resolution in causal sequences
    • Abstract: Publication date: December 2019. Source: Cognition, Volume 193. Author(s): Richard Fereday, Marc J. Buehner, Simon K. Rushton. Causality affects our perception of time; events that appear causally related are perceived as closer together in time than unrelated events. This effect is known as temporal binding. One potential explanation of this effect is that causality slows an “internal clock” that is used in interval estimation. To explore this hypothesis, we first examined participants’ perceived duration of a range of intervals between a causal action and an effect, or between two unrelated events. If (apparent) causality slows the internal clock, then plotting perceived duration against actual duration should reveal a shallower slope in the causality condition (a relative compression of perceived time). This pattern was found. We then examined an interesting corollary: that a slower clock rate during causal sequences would result in reduced temporal acuity. This is what we found: duration discrimination thresholds were higher for causal compared to non-causal sequences. These results are compatible with a clock-slowing account of temporal binding. Implications for sensory recalibration accounts of binding are discussed.
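      Stated schematically, the clock-slowing account amounts to a linear read-out of accumulated clock ticks. With symbols introduced here for illustration (not the authors' notation), let d be the physical interval, r the pacemaker rate, and \hat{d} the perceived duration:

          \hat{d} = r\,d + c, \qquad r_{\mathrm{causal}} < r_{\mathrm{neutral}}

      A lower rate for causal sequences yields the shallower perceived-versus-actual slope reported above (relative compression). If discriminating two intervals further requires a roughly fixed difference \Delta k in accumulated ticks, the duration threshold scales inversely with the rate,

          \Delta d_{\mathrm{threshold}} \approx \Delta k / r,

      so the same slowed clock also predicts the higher discrimination thresholds observed for causal sequences.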
       
  • How do you know that? Automatic belief inferences in passing
           conversation
    • Abstract: Publication date: December 2019. Source: Cognition, Volume 193. Author(s): Paula Rubio-Fernández, Francis Mollica, Michelle Oraa Ali, Edward Gibson. There is an ongoing debate, both in philosophy and psychology, as to whether people are able to automatically infer what others may know, or whether they can only derive belief inferences by deploying cognitive resources. Evidence from laboratory tasks, often involving false beliefs or visual-perspective taking, has suggested that belief inferences are cognitively costly, controlled processes. Here we suggest that in everyday conversation, belief reasoning is pervasive and therefore potentially automatic in some cases. To test this hypothesis, we conducted two pre-registered self-paced reading experiments (N1 = 91, N2 = 89). The results of these experiments showed that participants slowed down when a stranger commented ‘That greasy food is bad for your ulcer’ relative to conditions where a stranger commented on their own ulcer or a friend made either comment – none of which violated participants’ common-ground expectations. We conclude that Theory of Mind models need to account for belief reasoning in conversation as it is at the center of everyday social interaction.
       
  • Putting prototypes in place
    • Abstract: Publication date: December 2019. Source: Cognition, Volume 193. Author(s): Igor Douven. It has recently been proposed that natural concepts are those represented by the cells of an optimally partitioned similarity space. In this proposal, optimal partitioning has been defined in terms of rational design criteria, criteria that a good engineer would adopt if asked to develop a conceptual system. It has been argued, for instance, that convexity should rank high among such criteria. Other criteria concern the possibility of placing prototypes such that they are both similar to the items they represent—each prototype ought to be representative—and dissimilar to each other: the prototypes ought to be contrastive. Parts of this design proposal are already supported by evidence. This paper reports results of a new study meant to address parts still lacking in empirical support. In particular, it presents data concerning color similarity space which indicate that color prototypes are indeed located such that they trade off optimally between being representative and being contrastive.
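      The representative-versus-contrastive trade-off can be made concrete with a toy scoring function over candidate prototype placements. The Python sketch below is a schematic illustration, not Douven's formalization: the two-dimensional point clouds standing in for a similarity space, the Euclidean metric, and the equal weighting of the two criteria are all assumptions.

        import numpy as np

        def tradeoff_score(prototypes, categories, w_rep=1.0, w_con=1.0):
            # Higher is better: reward prototypes that sit close to the items of
            # their own category (representative) and far from one another
            # (contrastive).
            rep = -np.mean([np.linalg.norm(items - p, axis=1).mean()
                            for p, items in zip(prototypes, categories)])
            con = np.mean([np.linalg.norm(p - q)
                           for i, p in enumerate(prototypes)
                           for q in prototypes[i + 1:]])
            return w_rep * rep + w_con * con

        # Two toy categories as point clouds in a 2-D stand-in for similarity space.
        rng = np.random.default_rng(0)
        cat_a = rng.normal([0.0, 0.0], 0.3, size=(50, 2))
        cat_b = rng.normal([2.0, 0.0], 0.3, size=(50, 2))

        centroids = [cat_a.mean(axis=0), cat_b.mean(axis=0)]          # maximally representative
        pushed_apart = [np.array([-0.3, 0.0]), np.array([2.3, 0.0])]  # more contrastive

        for label, placement in (("centroids", centroids), ("pushed apart", pushed_apart)):
            print(f"{label}: score = {tradeoff_score(placement, [cat_a, cat_b]):.3f}")

      Which placement scores higher depends on the weights w_rep and w_con; the paper's empirical claim is that observed color prototypes sit near the optimum of such a trade-off, not that any particular weighting is privileged.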
       
 
 