Journal Cover
Journal Prestige (SJR): 2.455
Citation Impact (CiteScore): 4
Number of Followers: 172  
  Hybrid journal (it can contain Open Access articles)
ISSN (Print) 0010-0277
Published by Elsevier [3159 journals]
  • People are averse to machines making moral decisions
    • Abstract: Publication date: December 2018. Source: Cognition, Volume 181. Author(s): Yochanan E. Bigman, Kurt Gray. Do people want autonomous machines making moral decisions? Nine studies suggest that the answer is ‘no’—in part because machines lack a complete mind. Studies 1–6 find that people are averse to machines making morally-relevant driving, legal, medical, and military decisions, and that this aversion is mediated by the perception that machines can neither fully think nor feel. Studies 5–6 find that this aversion exists even when moral decisions have positive outcomes. Studies 7–9 briefly investigate three potential routes to increasing the acceptability of machine moral decision-making: limiting the machine to an advisory role (Study 7), increasing machines’ perceived experience (Study 8), and increasing machines’ perceived expertise (Study 9). Although some of these routes show promise, the aversion to machine moral decision-making is difficult to eliminate. This aversion may prove challenging for the integration of autonomous technology in moral domains including medicine, the law, the military, and self-driving vehicles.
  • Beyond unpleasantness. Social exclusion affects the experience of pain,
           but not of equally-unpleasant disgust
    • Abstract: Publication date: December 2018. Source: Cognition, Volume 181. Author(s): Lia Antico, Amelie Guyon, Zainab K. Mohamed, Corrado Corradi-Dell'Acqua. Seminal theories posit that social and physical suffering rely on a partly-common representational code. It is unclear, however, if this shared information reflects a modality-specific component of pain, or alternatively a supramodal code for properties common to many aversive experiences (unpleasantness, salience, etc.). To address this issue, we engaged participants in a gaming experience in which they were excluded or included by virtual players. After each game session, participants were subjected to comparably-unpleasant painful or disgusting stimuli. Subjective reports and cardiac responses revealed a reduced sensitivity to pain following exclusion relative to inclusion, an effect which was more pronounced in those participants who reported feeling more affected by the gaming manipulation. Such modulation was not observed for disgust. These findings indicate that the relationship between social and physical suffering does not generalize to disgust, thus suggesting a shared representational code at the level of modality-specific components of pain.
  • Greater reliance on the eye region predicts better face recognition
    • Abstract: Publication date: December 2018. Source: Cognition, Volume 181. Author(s): Jessica Royer, Caroline Blais, Isabelle Charbonneau, Karine Déry, Jessica Tardif, Brad Duchaine, Frédéric Gosselin, Daniel Fiset. Interest in using individual differences in face recognition ability to better understand the perceptual and cognitive mechanisms supporting face processing has grown substantially in recent years. The goal of this study was to determine how varying levels of face recognition ability are linked to changes in visual information extraction strategies in an identity recognition task. To address this question, fifty participants completed six tasks measuring face and object processing abilities. Using the Bubbles method (Gosselin & Schyns, 2001), we also measured each individual’s use of visual information in face recognition. At the group level, our results replicate previous findings demonstrating the importance of the eye region for face identification. More importantly, we show that face processing ability is related to a systematic increase in the use of the eye area, especially the left eye from the observer’s perspective. Indeed, our results suggest that the use of this region accounts for approximately 20% of the variance in face processing ability. These results support the idea that individual differences in face processing are at least partially related to the perceptual extraction strategy used during face identification.
  • ANCHORING is amodal: Evidence from a signed language
    • Abstract: Publication date: November 2018. Source: Cognition, Volume 180. Author(s): Qatherine Andan, Outi Bat-El, Diane Brentari, Iris Berent. Across languages, certain linguistic forms are systematically preferred to others (e.g. bla > lba). But whether these preferences concern abstract constraints on language structure, generally, or whether these restrictions only apply to speech is unknown. To address this question, here we ask whether linguistic constraints previously identified in spoken languages apply to signs. One such constraint, ANCHORING, restricts the structure of reduplicated forms (AB → ABB, not ABA). In two experiments, native ASL signers rated the acceptability of novel reduplicated forms that either violated ANCHORING (ABA) or obeyed it (ABB). In Experiment 1, signers made a forced choice between ABB and ABA forms; in Experiment 2, signers rated signs individually. Results showed that signers prefer signs that obey ANCHORING over ANCHORING violations (ABB > ABA). These findings show for the first time that ANCHORING is operative in ASL signers. These results suggest that some linguistic constraints are amodal, applying to both speech and signs.
  • Mnemonic accessibility affects statement believability: The effect of
           listening to others selectively practicing beliefs
    • Abstract: Publication date: November 2018. Source: Cognition, Volume 180. Author(s): Madalina Vlasceanu, Alin Coman. Belief endorsement is rarely a fully deliberative process. Oftentimes, one’s beliefs are influenced by superficial characteristics of the belief evaluation experience. Here, we show that by manipulating the mnemonic accessibility of particular beliefs we can alter their believability. We use a well-established socio-cognitive paradigm (i.e., the social version of the selective practice paradigm) to increase the mnemonic accessibility of some beliefs and induce forgetting in others. We find that listening to a speaker selectively practicing beliefs results in changes in believability. Beliefs that are mentioned become mnemonically accessible and exhibit an increase in believability, while beliefs that are related to those mentioned experience mnemonic suppression, which results in decreased believability. Importantly, the latter effect occurs regardless of whether the belief is scientifically accurate or inaccurate. Furthermore, beliefs that are endorsed with moderate strength are particularly susceptible to mnemonically-induced believability changes. These findings, we argue, have the potential to guide interventions aimed at correcting misinformation in vulnerable communities.
  • Subconscious processing reveals dissociable contextual modulations of
           visual size perception
    • Abstract: Publication date: November 2018. Source: Cognition, Volume 180. Author(s): Lihong Chen, Congying Qiao, Ying Wang, Yi Jiang. Visual size perception is highly context-dependent. In a series of experiments reported here, we demonstrated that the contextual modulation of visual size processing could occur independent of conscious awareness. Specifically, the Ebbinghaus illusion, which is mediated by lateral connections within the early visual processing stream, persisted even when the surrounding inducers were rendered invisible. Moreover, when the central target was initially interocularly suppressed, the identical target emerged from suppression faster when surrounded by small relative to large inducers, with the suppression time difference well predicted by the strength of the illusion. By contrast, there were no such subconscious contextual modulation effects associated with the Ponzo illusion, which largely relies on feedback projections to the early visual cortices. These results indicate that contextual information can modulate visual size perception without conscious awareness, and the dissociated modulation effects further suggest that subconscious contextual modulation takes place in the early visual processing stream and is largely independent of high-level feedback influences.
  • Event segmentation: Cross-linguistic differences in verbal and non-verbal
    • Abstract: Publication date: November 2018. Source: Cognition, Volume 180. Author(s): Johannes Gerwien, Christiane von Stutterheim. Events, as fundamental units in human perception and cognition, are limited by quality changes of objects over time. In the present study, we investigate the role of language in shaping event units. Given fundamental cross-linguistic differences in the concepts encoded in the verb, as in French compared to German, event unit formation was tested for motion events in a verbal (online event description, experiment 1), as well as a non-verbal task (Newtson-test, experiment 2). In German, motion and direction are described by a single assertion, i.e. one verb encoding manner (to walk …), in conjunction with adpositional phrases for path and direction (… over x across y toward z). In contrast, when information on path and direction is encoded in the verb, as typically in French, each path segment requires a separate assertion (head for x, cross y, approach z). Both experiments were based on short naturalistic video clips showing a figure moving through space along a path either without changing orientation/direction (control), or with changes in orientation/direction (critical). Analysis of the verbal task concerned the probability of producing more than one assertion to refer to the motion events presented in the clips; in the non-verbal event segmentation task, the analysis concerned the probability of marking an event boundary, as indicated by pressing a button. Results show that in French, the probability of producing more than one assertion was significantly higher in the critical condition (experiment 1) and the probability to identify an event boundary was also significantly higher (experiment 2), compared to the German participants but only in the critical condition. The findings indicate language-driven effects in event unit formation. The results are discussed in the context of theories of event cognition, thereby focusing on the role of language in the formation of cognitive structures.
  • The latent structure of spatial skill: A test of the
           2 × 2 typology
    • Abstract: Publication date: November 2018. Source: Cognition, Volume 180. Author(s): Kelly S. Mix, David Z. Hambrick, V. Rani Satyam, Alexander P. Burgoyne, Susan C. Levine. Multiple frameworks for categorizing spatial abilities exist but it has been difficult to verify them using exploratory factor analysis. The present study tested one of these frameworks—a 2 × 2 classification scheme that crossed the dimensions of static/dynamic and intrinsic/extrinsic (Uttal et al., 2013)—using confirmatory factor analysis with data on spatial performance from kindergarten (N = 251), third grade (N = 246) and sixth grade students (N = 241). For kindergarten and third grade students, four models were tested at each grade level: A 1-factor model, two 2-factor models (one static vs. dynamic, the other intrinsic vs. extrinsic), and a 4-factor model. In sixth grade, only the 2- and 1-factor models could be tested given the available data. Evidence that the 4-factor model was the best fit would have validated the 2 × 2 model. However, the 4-factor models failed to converge in kindergarten and third grade. Both the 1- and 2-factor models converged in these age groups, and chi-square tests demonstrated that the 2-factor intrinsic-extrinsic model was the best fit at both grade levels. In sixth grade, only one of the 2-factor models converged and it did not fit significantly better than the 1-factor model. Thus, there was limited validation of the model in these grades, as well as a trend toward less dimensionality in spatial skill over development.
  • Attention capture is temporally stable: Evidence from mixed-model
    • Abstract: Publication date: November 2018. Source: Cognition, Volume 180. Author(s): Hanna Weichselbaum, Christoph Huber-Huber, Ulrich Ansorge. Studies on domain-specific expertise in visual attention, on its cognitive enhancement, or its pathology require individually reliable measurement of visual attention. Yet, the reliability of the most widely used reaction time (RT) differences measuring visual attention is in doubt or unknown. Therefore, we used novel methods of analyses based on linear mixed models (LMMs) and tested the temporal stability, as one index of reliability, of three attentional RT effects in the popular additional-singleton research protocol: (1) bottom-up, (2) top-down, and (3) memory-driven (intertrial priming) influences on attention capture effects. Participants searched for a target having one specific color in most (Exp. 1) or all (Exp. 2) trials. Together with the target, in half (Exp. 1) or two thirds (Exp. 2) of the trials, a distractor was presented that stood out by the target’s (Exp. 1) or a target-similar (Exp. 2) color, therefore matching a top-down search set, or by a different color, capturing attention in a bottom-up way. Also, matching distractors were primed or unprimed by the target color of the preceding trial. We analyzed all three attention capture effects in manual and target fixation RTs at two different times, separated by one (Exp. 1 and 2) or four weeks (only in Exp. 1). Random slope correlations of LMMs and standard correlation coefficients computed on individual participants’ effect scores showed that RT capture effects were in general temporally stable for both time intervals and dependent variables. These results demonstrate the test-retest reliability necessary for looking at individual differences of attentional RT effects.
  • Unconscious memory suppression
    • Abstract: Publication date: November 2018. Source: Cognition, Volume 180. Author(s): Alexandre Salvador, Lucie Berkovitch, Fabien Vinckier, Laurent Cohen, Lionel Naccache, Stanislas Dehaene, Raphaël Gaillard. Recent evidence suggests that high-level executive control can occur unconsciously. In this study, we tested whether unconscious executive control extends to memory retrieval and forgetting. In a first experiment, participants learned word-word associations and were trained to either actively recall or forget these associations in response to conscious visual cues (Think/No-Think paradigm). Then, the very same cues were subliminally presented while participants were performing a grammatical gender categorization task on distinct word pairs. Memory retrieval tested a few minutes later was significantly influenced by conscious and masked cues, suggesting that memory recall could be manipulated unbeknownst to the participants. In a second experiment, we replicated these findings and added a baseline condition in which some words were not preceded by masked cues. Memory recall was significantly reduced both when words were preceded by an unconscious instruction to forget compared to the baseline condition (i.e. no cue), and to the unconscious instructions to recall. Overall, our results suggest that executive control can occur unconsciously and suppress a specific memory outside of one's awareness.
  • Motor simulation of multiple observed actions
    • Abstract: Publication date: November 2018. Source: Cognition, Volume 180. Author(s): Emiel Cracco, Marcel Brass. Research has shown that observed actions are represented in the motor system, leading to automatic imitative responses. However, in social life, we often see multiple persons acting together. Here, we use an automatic imitation paradigm with four stimulus hands to investigate the hypothesis that multiple observed actions can be represented at the same time in the motor system. Experiments 1 and 2 revealed weaker automatic imitation when one hand performed a different action than the other three hands, compared with when three or four hands all performed the same action. Experiment 3 replicated this effect with mutually exclusive actions. These results show that multiple observed actions can be represented simultaneously in the motor system, even when they cannot be executed together. This has important implications for theories of interaction representation.
  • Context mitigates crowding: Peripheral object recognition in real-world
    • Abstract: Publication date: November 2018. Source: Cognition, Volume 180. Author(s): Maarten W.A. Wijntjes, Ruth Rosenholtz. Object recognition is often conceived of as proceeding by segmenting an object from its surround, then integrating its features. In turn, peripheral vision’s sensitivity to clutter, known as visual crowding, has been framed as due to a failure to restrict that integration to features belonging to the object. We hand-segment objects from their background, and find that rather than helping peripheral recognition, this impairs it when compared to viewing the object in its real-world context. Context is in fact so important that it alone (no visible target object) is just as informative, in our experiments, as seeing the object alone. Finally, we find no advantage to separately viewing the context and segmented object. These results, taken together, suggest that we should not think of recognition as ideally operating on pre-segmented objects, nor of crowding as the failure to do so.
  • When do circumstances excuse? Moral prejudices and beliefs about the
           true self drive preferences for agency-minimizing explanations
    • Abstract: Publication date: November 2018. Source: Cognition, Volume 180. Author(s): Simon Cullen. When explaining human actions, people usually focus on a small subset of potential causes. What leads us to prefer certain explanations for valenced actions over others? The present studies indicate that our moral attitudes often predict our explanatory preferences far better than our beliefs about how causally sensitive actions are to features of the actor’s environment. Study 1 found that high-prejudice participants were much more likely to endorse non-agential explanations of an erotic same-sex encounter, such as that one of the men endured a stressful event earlier that day. Study 2 manipulated participants’ beliefs about how the agent’s behavior depended on features of his environment, finding that such beliefs played no clear role in modeling participants’ explanatory preferences. This result emerged both with low- and high-prejudice, US and Indian participants, suggesting that these findings probably reflect a species-typical feature of human psychology. Study 3 found that moral attitudes also predicted explanations for a woman’s decision to abort her pregnancy (3a) and a person’s decision to convert to Islam (3b). Study 4 found that luck in an action’s etiology tends to undermine perceptions of blame more readily than perceptions of praise. Finally, Study 5 found that when explaining support for a rival ideology, both Liberals and Conservatives downplay agential causes while emphasizing environmental ones. Taken together, these studies indicate that our explanatory preferences often reflect a powerful tendency to represent agents as possessing virtuous true selves. Consequently, situation-focused explanations often appear salient because people resist attributing negatively valenced actions to the true self. There is a person/situation distinction, but it is normative.
  • Learning to measure through action and gesture: Children’s prior
           knowledge matters
    • Abstract: Publication date: November 2018. Source: Cognition, Volume 180. Author(s): Eliza L. Congdon, Mee-Kyoung Kwon, Susan C. Levine. Learning through physical action with mathematical manipulatives is an effective way to help children acquire new ideas and concepts. Gesture is a type of physical action, but it differs from other kinds of actions in that it does not involve interacting directly with external objects. As such, gesture provides an interesting comparison to action-on-objects and allows us to identify the circumstances under which gesture versus interaction with objects (and the associated effects on the external world) may be differentially beneficial to learning. In the current study, we ask whether individual differences in first grade children’s prior knowledge about a foundational mathematical concept – their understanding of linear units of measure – might interact with their ability to glean insight from action- and gesture-based instruction. We find that the children using a more rudimentary pretest strategy did not benefit from producing gestures at all, but did benefit from producing actions. In contrast, children using a more conceptually advanced, though still incorrect, strategy at pretest learned from both actions and gestures. This interaction between conceptual knowledge and movement type (action or gesture) emphasizes the importance of considering individual differences in children’s prior knowledge when assessing the efficacy of movement-based instruction.
  • Expectation affects learning and modulates memory experience at retrieval
    • Abstract: Publication date: November 2018. Source: Cognition, Volume 180. Author(s): Alex Kafkas, Daniela Montaldi. Our ability to make predictions and monitor regularities has a profound impact on the way we perceive the environment, but the effect this mechanism has on memory is not well understood. In four experiments, we explored the effects on memory of the expectation status of information at encoding or at retrieval. In a rule-learning task participants learned a contingency relationship between 6 different symbols and the type of stimulus that followed each one. Either at encoding (Experiments 1a and 1b) or at retrieval (Experiments 2a and 2b), the established relationship was violated for a subset of stimuli resulting in the presentation of both expected and unexpected stimuli. The expectation status of the stimuli was found to have opposite effects on familiarity and recollection performance, the two kinds of memory that support recognition memory. At encoding (Experiments 1a and 1b), the presentation of expected stimuli selectively enhanced subsequent familiarity performance, while unexpected stimuli selectively enhanced subsequent recollection. Similarly, at retrieval (Experiments 2a and 2b), expected stimuli were more likely to be deemed familiar than unexpected stimuli, whereas unexpected stimuli were more likely to be recollected than were expected stimuli. These findings suggest that two separate memory enhancement mechanisms exist; one sensitive to and modulating the accuracy of memory for the contextually distinctive or unexpected, and the other sensitive to and modulating the accuracy of memory for the expected. Therefore, the degree to which information fits with expectation has critical implications for the type of computational mechanism that will be engaged to support memory.
  • Grounding the neurobiology of language in first principles: The necessity
           of non-language-centric explanations for language comprehension
    • Abstract: Publication date: November 2018. Source: Cognition, Volume 180. Author(s): Uri Hasson, Giovanna Egidi, Marco Marelli, Roel M. Willems. Recent decades have ushered in tremendous progress in understanding the neural basis of language. Most of our current knowledge on language and the brain, however, is derived from lab-based experiments that are far removed from everyday language use, and that are inspired by questions originating in linguistic and psycholinguistic contexts. In this paper we argue that in order to make progress, the field needs to shift its focus to understanding the neurobiology of naturalistic language comprehension. We present here a new conceptual framework for understanding the neurobiological organization of language comprehension. This framework is non-language-centered in the computational/neurobiological constructs it identifies, and focuses strongly on context. Our core arguments address three general issues: (i) the difficulty in extending language-centric explanations to discourse; (ii) the necessity of taking context as a serious topic of study, modeling it formally and acknowledging the limitations on external validity when studying language comprehension outside context; and (iii) the tenuous status of the language network as an explanatory construct. We argue that adopting this framework means that neurobiological studies of language will be less focused on identifying correlations between brain activity patterns and mechanisms postulated by psycholinguistic theories. Instead, they will be less self-referential and increasingly more inclined towards integration of language with other cognitive systems, ultimately doing more justice to the neurobiological organization of language and how it supports language as it is used in everyday life.
  • Typical visual-field locations facilitate access to awareness for everyday
    • Abstract: Publication date: November 2018. Source: Cognition, Volume 180. Author(s): Daniel Kaiser, Radoslaw M. Cichy. In real-world vision, humans are constantly confronted with complex environments that contain a multitude of objects. These environments are spatially structured, so that objects have different likelihoods of appearing in specific parts of the visual space. Our massive experience with such positional regularities prompts the hypothesis that the processing of individual objects varies in efficiency across the visual field: when objects are encountered in their typical locations (e.g., we are used to seeing lamps in the upper visual field and carpets in the lower visual field), they should be more efficiently perceived than when they are encountered in atypical locations (e.g., a lamp in the lower visual field and a carpet in the upper visual field). Here, we provide evidence for this hypothesis by showing that typical positioning facilitates an object’s access to awareness. In two continuous flash suppression experiments, objects more efficiently overcame inter-ocular suppression when they were presented in visual-field locations that matched their typical locations in the environment, as compared to non-typical locations. This finding suggests that through extensive experience the visual system has adapted to the statistics of the environment. This adaptation may be particularly useful for rapid object individuation in natural scenes.
  • Not all those who wander are lost: Spatial exploration patterns and their
           relationship to gender and spatial memory
    • Abstract: Publication date: November 2018. Source: Cognition, Volume 180. Author(s): Kyle T. Gagnon, Brandon J. Thomas, Ascher Munion, Sarah H. Creem-Regehr, Elizabeth A. Cashdan, Jeanine K. Stefanucci. When humans and animals navigate through environments, they form spatial memories important for supporting subsequent recall of locations relative to their own position and orientation, as well as to other object locations in the environment. The goal of the current study was to examine whether individual differences in initial exploration of a large-scale novel environment relate to subsequent spatial memories. A majority of studies examining spatial memory formed in large-scale spaces have constrained encoding of the environment by leading participants on pre-determined paths, thereby limiting their free exploration. We allowed participants to freely explore a large-scale, virtual environment to locate a set of objects within. We then tested their ability to navigate back to those objects as well as their ability to point to them from one another. Based on previous work suggesting gender differences in navigation strategies and spatial anxiety, we predicted that males and females would show different patterns of initial exploration and that these exploration patterns would account for gender differences in measures of spatial memory. We found that females revisited previous locations more often and showed lower rates of spreading through an area. These measures of exploration partially accounted for gender differences in efficiency in navigation and pointing accuracy to remembered locations. The results demonstrate the importance of exploration in spatial memory and provide a new perspective on gender differences in spatial cognition.
  • Intuitive statistical inferences in chimpanzees and humans follow
           Weber’s law
    • Abstract: Publication date: November 2018. Source: Cognition, Volume 180. Author(s): Johanna Eckert, Josep Call, Jonas Hermes, Esther Herrmann, Hannes Rakoczy. Humans and nonhuman great apes share a sense for intuitive statistical reasoning, making intuitive probability judgments based on proportional information. This ability is of fundamental importance, in particular for inferring general regularities from finite numbers of observations and, vice versa, for predicting the outcome of single events using prior information. To date it remains unclear which cognitive mechanism underlies and enables this capacity. The aim of the present study was to gain deeper insights into the cognitive structure of intuitive statistics by probing its signatures in chimpanzees and humans. We tested 24 sanctuary-living chimpanzees in a previously established paradigm which required them to reason from populations of food items with different ratios of preferred (peanuts) and non-preferred items (carrot pieces) to randomly drawn samples. In a series of eight test conditions, the ratio between the two ratios to be discriminated (ROR) was systematically varied ranging from 1 (same proportions in both populations) to 16 (high magnitude of difference between populations). One hundred and forty-four human adults were tested in a computerized version of the same task. The main result was that both chimpanzee and human performance varied as a function of the log(ROR) and thus followed Weber’s law. This suggests that intuitive statistical reasoning relies on the same cognitive mechanism that is used for comparing absolute quantities, namely the analogue magnitude system.
  • Cognitive development attenuates audiovisual distraction and promotes the
           selection of task-relevant perceptual saliency during visual search on
           complex scenes
    • Abstract: Publication date: November 2018. Source: Cognition, Volume 180. Author(s): Clarissa Cavallina, Giovanna Puccio, Michele Capurso, Andrew J. Bremner, Valerio Santangelo. Searching for a target while avoiding distraction is a core function of selective attention involving both voluntary and reflexive mechanisms. Here, for the first time, we investigated the development of the interplay between voluntary and reflexive mechanisms of selective attention from childhood to early adulthood. We asked 6-, 10-, and 20-year-old participants to search for a target presented in one hemifield of a complex scene, preceded by a task-irrelevant auditory cue on either the target side (valid), the opposite side (invalid), or both sides (neutral). For each scene we computed the number of salient locations (NSL) and the target saliency (TgS). All age groups showed comparable orienting effects (“valid minus neutral” trials), indicating a similar capture of spatial attention by valid cues which was independent of age. However, only adults demonstrated a suppression of the reorienting effect (“invalid minus neutral” trials), indicating late developments in the reallocation of spatial attention toward a target following auditory distraction. The searching performance of the children (both 6- and 10-year-olds), but not of the adults, was predicted by the NSL, indicating an attraction of processing resources to salient but task-irrelevant locations in childhood; conversely, only adults showed greater performance with increased TgS in valid trials, indicating late development in the use of task-related saliency. These findings highlight qualitatively different mechanisms of selective attention operating at different ages, demonstrating important developmental changes in the interplay between voluntary and reflexive mechanisms of selective attention during visual search in complex scenes.
  • Meaning before order: Cardinal principle knowledge predicts improvement in
           understanding the successor principle and exact ordering
    • Abstract: Publication date: November 2018. Source: Cognition, Volume 180. Author(s): Elizabet Spaepen, Elizabeth A. Gunderson, Dominic Gibson, Susan Goldin-Meadow, Susan C. Levine. Abstract: Learning the cardinal principle (the last word reached when counting a set represents the size of the whole set) is a major milestone in early mathematics. But researchers disagree about the relationship between cardinal principle knowledge and other concepts, including how counting implements the successor function (for each number word N representing a cardinal value, the next word in the count list represents the cardinal value N + 1) and exact ordering (cardinal values can be ordered such that each is one more than the value before it and one less than the value after it). No studies have investigated acquisition of the successor principle and exact ordering over time, and in relation to cardinal principle knowledge. An open question thus remains: Is the cardinal principle a “gatekeeper” concept children must acquire before learning about succession and exact ordering, or can these concepts develop separately? Preschoolers (N = 127) who knew the cardinal principle (CP-knowers) or who knew the cardinal meanings of number words up to “three” or “four” (3–4-knowers) completed succession and exact ordering tasks at pretest and posttest. In between, children completed one of two trainings: counting only versus counting, cardinal labeling, and comparison. CP-knowers started out better than 3–4-knowers on succession and exact ordering. Controlling for this disparity, we found that CP-knowers improved over time on succession and exact ordering; 3–4-knowers did not. Improvement did not differ between the two training conditions. We conclude that children can learn the cardinal principle without understanding succession or exact ordering and hypothesize that children must understand the cardinal principle before learning these concepts.
  • I remember emotional content better, but I’m struggling to remember
           who said it!
    • Abstract: Publication date: November 2018. Source: Cognition, Volume 180. Author(s): Ludovic Le Bigot, Dominique Knutsen, Sandrine Gil. Abstract: The joint impact of emotion and production on conversational memory was examined in two experiments where pairs of participants took turns producing verbal information. They were instructed to produce out loud sentences based on either neutral or emotional (Experiment 1: negative; Experiment 2: positive) words. Each participant was then asked to recall as many words as possible (content memory) and to indicate who had produced each word (reality monitoring). The analyses showed that both self-production and emotion boost content memory, although emotion also impairs reality monitoring. This study sheds light on how both factors (emotion and production) may constrain language interaction memory through information saliency.
  • Communicative intent modulates production and comprehension of actions and
           gestures: A Kinect study
    • Abstract: Publication date: November 2018. Source: Cognition, Volume 180. Author(s): James P. Trujillo, Irina Simanova, Harold Bekkering, Asli Özyürek. Abstract: Actions may be used to directly act on the world around us, or as a means of communication. Effective communication requires the addressee to recognize the act as being communicative. Humans are sensitive to ostensive communicative cues, such as direct eye gaze (Csibra & Gergely, 2009). However, there may be additional cues present in the action or gesture itself. Here we investigate features that characterize the initiation of a communicative interaction in both production and comprehension. We asked 40 participants to perform 31 pairs of object-directed actions and representational gestures in more- or less-communicative contexts. Data were collected using motion capture technology for kinematics and video recording for eye-gaze. With these data, we focused on two issues: first, if and how actions and gestures are systematically modulated when performed in a communicative context; second, if observers exploit such kinematic information to classify an act as communicative. Our study showed that during production the communicative context modulates space–time dimensions of kinematics and elicits an increase in addressee-directed eye-gaze. Naïve participants detected communicative intent in actions and gestures preferentially using eye-gaze information, only utilizing kinematic information when eye-gaze was unavailable. Our study highlights the general communicative modulation of action and gesture kinematics during production but also shows that addressees only exploit this modulation to recognize communicative intention in the absence of eye-gaze. We discuss these findings in terms of distinctive but potentially overlapping functions of addressee-directed eye-gaze and kinematic modulations within the wider context of human communication and learning.
  • Intention, attention and long-term memory for visual scenes: It all
           depends on the scenes
    • Abstract: Publication date: November 2018. Source: Cognition, Volume 180. Author(s): Karla K. Evans, Alan Baddeley. Abstract: Humans have an ability to remember up to 10,000 previously viewed scenes with apparently robust memory for visual detail, a phenomenon that has been interpreted as suggesting a visual memory system of massive capacity. Attempts at explanation have largely focused on the nature of the stimuli and been influenced by theoretical accounts of object recognition. Our own study aims to supplement this by considering two observer-based aspects of visual long-term memory: one strategic, whether or not observers are aware that their memory will subsequently be tested, and the other executive, based on the amount of attentional capacity available during encoding. We describe six studies involving visual scenes ranging in difficulty from complex manmade scenes (d′ = 2.54) to door scenes with prominent features removed (d′ = 0.79). To ensure processing of the stimuli, all participants had to make a judgement of pleasantness (Experiments 1 and 2) or of the presence or absence of a dot (Experiment 3). Intention to learn influenced performance only in the most impoverished condition, comprising doors with prominent features removed. Experiments 4–6 investigated the attentional demands of visual long-term memory using a concurrent task procedure. While the demanding task of counting back in threes clearly impaired performance across the range of materials, a lighter load, counting back in ones, influenced only the most difficult door scenes. Detailed analysis of error patterns indicated that the clear differences in performance level between manmade and natural scenes, and between unmodified and modified door scenes, were reflected in false-alarm scores rather than detections, while concurrent task load affected both. We suggest an interpretation in terms of a two-level process of encoding at the visual feature rather than the whole-scene level, with natural images containing many features encoded richly, rapidly and without explicit intent. Only when scenes are selected from a single category and with distinctive detail minimised does memory depend on intention to remember and on the availability of substantial executive capacity.
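The d′ values quoted in this entry are signal-detection sensitivity indices. As a brief illustrative sketch (not code from the paper, and with invented hit and false-alarm rates), d′ is the difference between the z-transformed hit rate and false-alarm rate:

```python
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Sensitivity index d': z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical rates: 90% hits with 10% false alarms gives d' ≈ 2.56,
# in the range reported above for complex manmade scenes.
print(round(d_prime(0.90, 0.10), 2))  # prints 2.56
```

Equal hit and false-alarm rates give d′ = 0, i.e. no ability to discriminate old from new scenes.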
  • The relationship between parental mental-state language and
           2.5-year-olds’ performance on a nontraditional false-belief task
    • Abstract: Publication date: November 2018. Source: Cognition, Volume 180. Author(s): Erin Roby, Rose M. Scott. Abstract: A growing body of evidence suggests that children succeed in nontraditional false-belief tasks in the first years of life. However, few studies have examined individual differences in infants’ and toddlers’ performance on these tasks. Here we investigated whether parental use of mental-state language (i.e. think, understand), which predicts children’s performance on elicited-response false-belief tasks at older ages, also predicts toddlers’ performance on a nontraditional task. We tested 2.5-year-old children in a verbal nontraditional false-belief task that included two looking time measures, anticipatory looking and preferential looking, and measured parents’ use of mental-state language during a picture-book task. Parents’ use of mental-state language positively predicted children’s performance on the anticipatory-looking measure of the nontraditional task. These results provide the first evidence that social factors relate to children’s false-belief understanding prior to age 3 and that this association extends to performance on nontraditional tasks. These findings add to a growing number of studies suggesting that mental-state language supports mental-state understanding across the lifespan.
  • Mental states modulate gaze following, but not automatically
    • Abstract: Publication date: November 2018. Source: Cognition, Volume 180. Author(s): Gustav Kuhn, Ieva Vacaityte, Antonia D.C. D'Souza, Abbie C. Millett, Geoff G. Cole. Abstract: A number of authors have suggested that the computation of another person’s visual perspective occurs automatically. In the current work we examined whether perspective-taking is indeed automatic or more likely to be due to mechanisms associated with conscious control. Participants viewed everyday scenes in which a single human model looked towards a target object. Importantly, the model’s view of the object was either visible or occluded by a physical barrier (e.g., a briefcase). Results showed that when observers were given five seconds to freely view the scenes, eye movements were faster to fixate the object when the model could see it compared to when it was occluded. By contrast, when observers were required to rapidly discriminate a target superimposed upon the same object, no such visibility effect occurred. We also employed the barrier procedure together with the ambiguous number paradigm, the most recent method to have been employed in assessing the perspective-taking theory. Results showed that the model’s gaze facilitated responses even when this agent could not see the critical stimuli. We argue that although humans do take into account the perspective of other people, this does not occur automatically.
  • Adaptation to other people’s eye gaze reflects habituation of high-level
           perceptual representations
    • Abstract: Publication date: November 2018. Source: Cognition, Volume 180. Author(s): Colin J. Palmer, Colin W.G. Clifford. Abstract: Our sense of where another person is looking depends upon multiple features of their face, relating to both the deviation of their eyes and the angle of their head. In this way, gaze direction is a higher-level perceptual property that is dependent on holistic processing of lower-level visual cues. A key paradigm in social perception research is sensory adaptation, which has been used to probe how properties like gaze direction are encoded in the visual system. Here we test whether sensory adaptation acts on higher-level, perceptual representations of gaze direction, or occurs to lower-level visual features of the face alone. To this end, participants were adapted on faces that evoke the Wollaston illusion, in which the direction that the face appears to look differs from its veridical eye direction. We compared across sets of images that were exactly matched in the lower-level features of the face image, but perceptually distinct due to differences in the conjunction of head and eye direction. The changes in participants’ perception of gaze direction following adaptation were consistent with habituation having occurred to the perceived gaze direction of the Wollaston faces, where this is dependent on integration of eye direction and head direction, rather than to lower-level sensory features of the face alone. This constitutes strong evidence for adaptable representations of other people’s gaze direction in the visual system that are abstracted from lower-level facial cues.
  • Lazy, not biased: Susceptibility to partisan fake news is better explained
           by lack of reasoning than by motivated reasoning
    • Abstract: Publication date: Available online 20 June 2018. Source: Cognition. Author(s): Gordon Pennycook, David G. Rand. Abstract: Why do people believe blatantly inaccurate news headlines (“fake news”)? Do we use our reasoning abilities to convince ourselves that statements that align with our ideology are true, or does reasoning allow us to effectively differentiate fake from real regardless of political ideology? Here we test these competing accounts in two studies (total N = 3446 Mechanical Turk workers) by using the Cognitive Reflection Test (CRT) as a measure of the propensity to engage in analytical reasoning. We find that CRT performance is negatively correlated with the perceived accuracy of fake news, and positively correlated with the ability to discern fake news from real news – even for headlines that align with individuals’ political ideology. Moreover, overall discernment was actually better for ideologically aligned headlines than for misaligned headlines. Finally, a headline-level analysis finds that CRT is negatively correlated with perceived accuracy of relatively implausible (primarily fake) headlines, and positively correlated with perceived accuracy of relatively plausible (primarily real) headlines. In contrast, the correlation between CRT and perceived accuracy is unrelated to how closely the headline aligns with the participant’s ideology. Thus, we conclude that analytic thinking is used to assess the plausibility of headlines, regardless of whether the stories are consistent or inconsistent with one’s political ideology. Our findings therefore suggest that susceptibility to fake news is driven more by lazy thinking than it is by partisan bias per se – a finding that opens potential avenues for fighting fake news.
  • I know why you voted for Trump: (Over)inferring motives based on choice
    • Abstract: Publication date: Available online 10 May 2018. Source: Cognition. Author(s): Kate Barasz, Tami Kim, Ioannis Evangelidis. Abstract: People often speculate about why others make the choices they do. This paper investigates how such inferences are formed as a function of what is chosen. Specifically, when observers encounter someone else’s choice (e.g., of political candidate), they use the chosen option’s attribute values (e.g., a candidate’s specific stance on a policy issue) to infer the importance of that attribute (e.g., the policy issue) to the decision-maker. Consequently, when a chosen option has an attribute whose value is extreme (e.g., an extreme policy stance), observers infer—sometimes incorrectly—that this attribute disproportionately motivated the decision-maker’s choice. Seven studies demonstrate how observers use an attribute’s value to infer its weight—the value-weight heuristic—and identify the role of perceived diagnosticity: more extreme attribute values give observers the subjective sense that they know more about a decision-maker’s preferences, and in turn, increase the attribute’s perceived importance. The paper explores how this heuristic can produce erroneous inferences and influence broader beliefs about decision-makers.