Journal Prestige (SJR): 2.455
Citation Impact (CiteScore): 4
Number of Followers: 205
Hybrid Journal (may contain Open Access articles)
ISSN (Print): 0010-0277
Published by Elsevier [3177 journals]
  • Relational labeling unlocks inert knowledge
    • Abstract: Publication date: March 2020. Source: Cognition, Volume 196. Author(s): Anja Jamrozik, Dedre Gentner. Insightful solutions often come about by recalling a relevant prior situation—one that shares the same essential relational pattern as the current problem. Unfortunately, our memory retrievals often depend primarily on surface matches, rather than relational matches. For example, a person who is familiar with the idea of positive feedback in sound systems may fail to think of it in the context of global warming. We suggest that one reason for the failure of cross-domain relational retrieval is that relational information is typically encoded variably, in a context-dependent way. In contrast, the surface features of that context—such as objects, animals and characters—are encoded in a relatively stable way, and are therefore easier to retrieve across contexts. We propose that the use of relational language can serve to make situations' relational representations more uniform, thereby facilitating relational retrieval. In two studies, we find that providing relational labels for situations at encoding or at retrieval increased the likelihood of relational retrieval. In contrast, domain labels—labels that highlight situations' contextual features—did not reliably improve domain retrieval. We suggest that relational language allows people to retrieve knowledge that would otherwise remain inert and contributes to domain experts' insight.
  • The sociality of social inhibition of return
    • Abstract: Publication date: February 2020. Source: Cognition, Volume 195. Author(s): O. Nafcha, S. Shamay-Tsoory, S. Gabay. Cognitive processes are traditionally studied in individual settings, while the possible effect of the social context is ignored. The present study focuses on the social inhibition of return effect (SIOR; Welsh et al., 2005), according to which observation of another person's action at a specific location initiates an inhibitory process in the observer at that location. The aim of the present study was to investigate which processes are influenced by the social context (e.g. action representation, attention, etc.) and whether this effect is elicited only in a social context. In a series of four experiments we examined the SIOR effect by developing a dyadic computerized task in which each participant, in turn, responded to a peripherally presented target in two successive trials. The first trial was performed after the other participant had responded and was designed to examine SIOR. The second trial was aimed at studying self-induced IOR. The first two experiments replicated and extended previous findings by demonstrating that information regarding the counterpart's response location was sufficient to produce SIOR. In the third experiment the participants performed the same task but without a counterpart, so that SIOR was eliminated. The fourth experiment demonstrated that believing there is a co-actor is enough to elicit the SIOR effect. These findings suggest that knowing that a location was acted upon before by another person (by observation or by prior knowledge) is the minimal condition for the SIOR effect to be evoked.
  • Production efficiency can cause grammatical change: Learners deviate from
           the input to better balance efficiency against robust message transmission
    • Abstract: Publication date: March 2020. Source: Cognition, Volume 196. Author(s): Masha Fedzechkina, T. Florian Jaeger. The idea that human languages have properties suitable for efficient communication has permeated linguistic theorizing. Indirect correlational support for this idea has come from cross-linguistic synchronic and diachronic data. However, direct causal tests have been lacking. We directly test whether biases operating during language learning can cause learners to deviate from the input they receive towards output languages that better balance production efficiency against robust message transmission. We employ miniature language learning experiments to address this question for a well-documented cross-linguistic correlation between constituent order flexibility and the presence of case marking in a language. Participants were exposed to novel miniature languages that had optional case marking and either fixed or flexible constituent order. Between participants, we manipulated the amount of time and effort associated with the production of case marking. We find that learners introduced the cross-linguistically observed trade-off between case marking and constituent order flexibility into their output languages. Critically, learners only did so when case-marked nouns required additional effort compared to non-case-marked nouns. Thus, the present study suggests that even abstract grammatical properties of languages can be shaped by a balance between production efficiency and robust message transmission.
  • Active sampling in visual search is coupled to the cardiac cycle
    • Abstract: Publication date: March 2020. Source: Cognition, Volume 196. Author(s): Alejandro Galvez-Pol, Ruth McConnell, James M. Kilner. Recent research has demonstrated that perception and reasoning vary according to the phase of internal bodily signals such as heartbeat. This has been shown by locking the presentation of sensory events to distinct phases of the cardiac cycle. However, task-relevant information is not usually encountered in such a phase-locked manner nor passively accessed, but rather actively sampled at one's own pace. Moreover, if the phase of the cardiac cycle is an important modulator of perception and cognition, as previously proposed, then the way in which we actively sample the world should be similarly modulated by the phase of the cardiac cycle. Here we tested this by coregistration of eye movements and heartbeat signals while participants freely compared differences between two visual arrays. Across three different analyses, we found a significant coupling of saccades, subsequent fixations, and blinks with the cardiac cycle. More eye movements were generated during the systolic phase of the cardiac cycle, which has been reported as the period of maximal effect of the baroreceptors' activity upon cognition. Conversely, more fixations were found during the diastole phase (quiescent baroreceptors). Lastly, more blinks were generated in the later period of the cardiac cycle. These results suggest that interoceptive and exteroceptive processing do adjust to each other; in our case, by sampling the outer environment during quiescent periods of the inner organism.
  • Zooming in on zooming out: Partial selectivity and dynamic tuning of
           bilingual language control during reading
    • Abstract: Publication date: February 2020. Source: Cognition, Volume 195. Author(s): Liv J. Hoversten, Matthew J. Traxler. Prominent models of bilingual visual word recognition posit a bottom-up nonselective view of lexical processing with parallel access to lexical candidates of both languages. However, these accounts do not accommodate recent findings of top-down effects on the relative global activation level of each language during bilingual reading. We conducted two eye-tracking experiments to systematically assess the degree of accessibility of each language in different global language contexts. When critical words were presented overtly in Experiment 1, code switches disrupted reading early during lexical processing, but not as much as pseudowords did. Participants zoomed out of the target language with increasing exposure to language switches. In Experiment 2, a monolingual language context was created by presenting critical words covertly as parafoveal previews. Here, code-switched words were treated like pseudowords, and participants remained zoomed in to the target language throughout the experiment. Switch direction analyses confirmed and extended these interpretations to provide further support for the role of global language control on lexical access, above and beyond effects due to proficiency differences across languages. Together, these data provide strong evidence for dynamic top-down adjustment of the degree of language selectivity during bilingual reading.
  • It's (not) all Greek to me: Boundaries of the foreign language effect
    • Abstract: Publication date: March 2020. Source: Cognition, Volume 196. Author(s): Alexandra S. Dylman, Marie-France Champoux-Larsson. We report three experiments investigating the boundaries of the Foreign Language effect in decision making (examining both risk aversion and moral dilemmas), when the foreign language is culturally influential, or when there is high linguistic similarity between the native language and the foreign language. Specifically, we found no Foreign Language effect in the Asian disease problem (Experiment 1a) or the footbridge moral dilemma (Experiment 2a) in Swedish-English bilinguals, but did find a Foreign Language effect for both these tasks in Swedish-French bilinguals (Experiments 1b and 2b). Additionally, we found no Foreign Language effect for moral dilemmas when the language pair was linguistically similar, by testing Swedish-Norwegian and Norwegian-Swedish bilinguals (Experiment 3). These results indicate possible boundaries to the Foreign Language effect in decision making and suggest that factors such as cultural influence and linguistic similarity diminish the effect.
  • Reward learning biases the direction of saccades
    • Abstract: Publication date: March 2020. Source: Cognition, Volume 196. Author(s): Ming-Ray Liao, Brian A. Anderson. The role of associative reward learning in guiding feature-based attention and spatial attention is well established. However, no studies have looked at the extent to which reward learning can modulate the direction of saccades during visual search. Here, we introduced a novel reward learning paradigm to examine whether reward-associated directions of eye movements can modulate performance in different visual search tasks. Participants had to fixate a peripheral target before fixating one of four disks that subsequently appeared in each cardinal position. This was followed by reward feedback contingent upon the direction chosen, where one direction consistently yielded a high reward. Thus, reward was tied to the direction of saccades rather than the absolute location of the stimulus fixated. Participants selected the target in the high-value direction on the majority of trials, demonstrating robust learning of the task contingencies. In an untimed visual foraging task that followed, which was performed in extinction, initial saccades were reliably biased in the previously reward-associated direction. In a second experiment, following the same training procedure, eye movements in the previously high-value direction were facilitated in a saccade-to-target task. Our findings suggest that rewarding directional eye movements biases oculomotor search patterns in a manner that is robust to extinction and generalizes across stimuli and task.
  • Opacity, obscurity, and the geometry of question-asking
    • Abstract: Publication date: March 2020. Source: Cognition, Volume 196. Author(s): Christina Boyce-Jacino, Simon DeDeo. Asking questions is a pervasive human activity, but little is understood about what makes them difficult to answer. An analysis of a pair of large databases, New York Times crosswords and questions from the quiz-show Jeopardy, establishes two orthogonal dimensions of question difficulty: obscurity (the rarity of the answer) and opacity (the indirectness of question cues, operationalized with word2vec). The importance of opacity, and the role of synergistic information in resolving it, suggests that accounts of difficulty in terms of prior expectations capture only a part of the question-asking process. A further regression analysis shows the presence of additional dimensions to question-asking: question complexity, the answer's local network density, cue intersection, and the presence of signal words. Our work shows how question-askers can help their interlocutors by using contextual cues, or, conversely, how a particular kind of unfamiliarity with the domain in question can make it harder for individuals to learn from others. Taken together, these results suggest how Bayesian models of question difficulty can be supplemented by process models and accounts of the heuristics individuals use to navigate conceptual spaces.
  • Distractor context manipulation in visual search: How expectations
           modulate proactive control
    • Abstract: Publication date: March 2020. Source: Cognition, Volume 196. Author(s): Marco A. Petilli, Francesco Marini, Roberta Daini. Visual search can be guided by top-down and bottom-up processes, with either one dominating the other depending on the task (e.g., feature versus conjunction). Moreover, different search tasks bring about different expectations about the type, or frequency, of distractor stimuli. These expectations could promote top-down “task-sets” that may impact performance even when distractors are temporarily absent. Here, we characterized the role and extent of recruitment of proactive top-down processes for distractor expectation in feature and conjunction search. Participants conducted feature and conjunction search tasks for a visual target among distractors, which were either frequently presented or completely absent. Recruiting proactive top-down processes for distractor expectation entailed slower, yet more accurate, responses on distractor-absent trials in the frequent-distractor (versus no-distractor) context of both tasks. These effects were larger in the conjunction versus feature task and were not impacted by stimulus duration and time pressure (short/present in Experiment 1, unlimited/absent in Experiment 2, respectively). Results were replicated when the presence/absence of distractors at each trial was fully predictable (Experiment 3), and when several parameters of visual search were changed (Experiment 4). Our findings indicate that top-down task-sets related to distractor expectation entail performance costs and benefits in visual search. These effects occur throughout task blocks rather than trial-to-trial, are modulated by search type, and confirm that proactive top-down processes intervene in feature search.
  • Helping, fast and slow: Exploring intuitive cooperation in early ontogeny
    • Abstract: Publication date: March 2020. Source: Cognition, Volume 196. Author(s): Tobias Grossmann, Manuela Missana, Amrisha Vaish. Cooperative behavior is central to human societies. Human adults who reach their cooperative decisions more rapidly and independently of cognitive control display greater levels of prosocial behavior. This is taken to show that cooperation is guided by intuitive processes rather than by active control of selfish impulses. The current study investigated the emergence of intuitive cooperation in early human ontogeny. We measured helping behavior (latency and frequency) in a longitudinal sample of infants at ages 14 and 18 months. Between 14 and 18 months, the frequency of helping significantly increased and latency to help significantly decreased, suggesting advances in helping behavior during this period of development. Moreover, at 18 months and to some extent, even at 14 months, infants who helped more rapidly (as indexed by a shorter latency) acted more prosocially (as indexed by a greater frequency of helping) than infants who were slower to help. This link between latency and frequency of prosocial behavior was independent of infants' ability for inhibitory control and general sociability levels. Prosocial behavior thus begins to be governed by intuitive processes that operate independently of cognitive control early in human ontogeny. This informs our understanding of the nature and emergence of cooperative behavior by supporting accounts that assign a central role to intuition in the evolution of human cooperation.
  • Something from nothing: Agency for deliberate nonactions
    • Abstract: Publication date: March 2020. Source: Cognition, Volume 196. Author(s): Lisa Weller, Katharina A. Schwarz, Wilfried Kunde, Roland Pfister. Several law systems punish nonactions such as failures to render assistance, although it is unknown if people spontaneously experience a sense of authorship for the consequences of their not acting. Here we provide evidence that events caused by deliberate choices not to act can indeed give rise to a vivid sense of agency. In three experiments, participants reported a sense of agency for events following nonactions and, crucially, temporal binding between nonactions and subsequent consequences suggested a sense of agency for nonactions even at an implicit level. These findings indicate that a sense of agency is not confined to overt body movements. At the same time, agency was more pronounced when the same event resulted from an action rather than being the consequence of a nonaction, highlighting the importance of ascribing different degrees of responsibility for the consequences of acting and not acting.
  • The timing of gaze-contingent decision prompts influences risky choice
    • Abstract: Publication date: February 2020. Source: Cognition, Volume 195. Author(s): Xiao-Yang Sui, Hong-Zhi Liu, Li-Lin Rao. Risky decisions are ubiquitous in daily life and are central to human behavior, but little attention has been devoted to exploring whether risky choice can be influenced by gaze direction. In the current study, we used a gaze-contingent procedure to manipulate an individual’s gaze while he/she decided between two risky options, and we examined whether risky decisions could be biased toward a randomly determined target. We found that participants’ risky choices were biased toward a randomly determined target when they were manipulated to gaze longer at the target option (Study 1, N = 37; Study 3, N = 40) or at the target outcome dimension (Study 2, N = 37). We also found that both the relative time advantage and the location of the last fixation mediated the effect of the gaze-contingent manipulation on risky choice in the valid trials. However, the mediation effects of the relative time advantage and the location of the last fixation were not significant when timed-out trials were included in Studies 2 and 3, indicating that the gaze-contingent manipulation did not effectively enforce a bias toward attending to a particular stimulus through eye gaze in all trials. Future work is needed to improve the effectiveness of the gaze-contingent prompt procedure.
  • Four-year-old Mandarin-speaking children’s online comprehension of
           relative clauses
    • Abstract: Publication date: March 2020. Source: Cognition, Volume 196. Author(s): Wenchun Yang, Angel Chan, Franklin Chang, Evan Kidd. A core question in language acquisition is whether children’s syntactic processing is experience-dependent and language-specific, or whether it is governed by abstract, universal syntactic machinery. We address this question by presenting corpus and on-line processing data from children learning Mandarin Chinese, a language that has been important in debates about the universality of parsing processes. The corpus data revealed that two different relative clause constructions in Mandarin are differentially used to modify syntactic subjects and objects. In the experiment, 4-year-old children’s eye-movements were recorded as they listened to the two RC construction types (e.g., Can you pick up the pig that pushed the sheep?). A permutation analysis showed that children’s ease of comprehension was closely aligned with the distributional frequencies, suggesting syntactic processing preferences are shaped by the input experience of these constructions.
  • When knowing only one word for “car” leads to weak application
           of mutual exclusivity
    • Abstract: Publication date: March 2020. Source: Cognition, Volume 196. Author(s): Elena Nicoladis, Angélique Laurent. From a very young age, monolingual children assume their language has no synonyms, or use the principle of mutual exclusivity (only one label per object). In contrast, bilingual children often accept more novel synonyms than monolinguals. One possible explanation for this difference is the lexicon structure hypothesis: having synonyms (across languages) in the lexicon reduces adherence to mutual exclusivity. The purpose of this study is to test the lexicon structure hypothesis by comparing three- to five-year-old children who speak either Canadian French or English. Canadian French allows more synonyms than English. French-speaking children should therefore accept more novel synonyms than English-speaking children. The children did a disambiguation task, choosing whether a familiar or an unfamiliar object was the referent of a novel word (e.g., moli). Surprisingly, the French-speaking children accepted significantly fewer novel synonyms than English-speaking children. However, they accepted the most synonyms for objects that had synonyms in French but for which they did not know both synonyms. These results support a modified version of the lexicon structure hypothesis, one that accounts for children’s weak access to synonyms.
  • The Role of Selective Attention in Cross-modal Interactions between
           Auditory and Visual Features
    • Abstract: Publication date: March 2020. Source: Cognition, Volume 196. Author(s): Karla K. Evans. Evans and Treisman (2010) showed systematic interactions between audition and vision when participants made speeded classifications in one modality while supposedly ignoring another. We found perceptual facilitation between high pitch and high visual position, high spatial frequency and small size, and interference between high pitch and low position, low spatial frequency and large size, while the converse was the case between low pitch and the same visual features. The present study examined the role of selective attention in these cross-modal interactions. Participants performed speeded classification or search tasks of low or high load while attempting to ignore irrelevant stimuli in a different modality. In both paradigms, congruency between the visual and the irrelevant auditory stimulus had an equal effect in the low and in the high perceptual load conditions. A third experiment tested divided attention, requiring participants to compare stimuli across modalities and respond to the visual-auditory compound. The congruency effect was as large with attention focused on one modality as when it was divided across both. These findings offer converging evidence that cross-modal interactions between corresponding basic features are independent of selective attention.
  • The dark side of Eureka: Artificially induced Aha moments make facts feel
           true
    • Abstract: Publication date: March 2020. Source: Cognition, Volume 196. Author(s): Ruben E. Laukkonen, Benjamin T. Kaveladze, Jason M. Tangen, Jonathan W. Schooler. Some ideas that we have feel mundane, but others are imbued with a sense of profundity. We propose that Aha! moments make an idea feel more true or valuable in order to aid quick and efficient decision-making, akin to a heuristic. To demonstrate where the heuristic may incur errors, we hypothesized that facts would appear more true if they were artificially accompanied by an Aha! moment elicited using an anagram task. In a preregistered experiment, we found that participants (n = 300) provided higher truth ratings for statements accompanied by solved anagrams even if the facts were false, and the effect was particularly pronounced when participants reported an Aha! experience (d = .629). Recent work suggests that feelings of insight usually accompany correct ideas. However, here we show that feelings of insight can be overgeneralized and bias how true an idea or fact appears, simply if it occurs in the temporal ‘neighbourhood’ of an Aha! moment. We raise the possibility that feelings of insight, epiphanies, and Aha! moments have a dark side, and discuss some circumstances where they may even inspire false beliefs and delusions, with potential clinical importance.
  • Specifying the domain-general resources that contribute to conceptual
           construction
    • Abstract: Publication date: February 2020. Source: Cognition, Volume 195. Author(s): Nathan Tardiff, Igor Bascandziev, Susan Carey, Deborah Zaitchik. There are two dissociable processes that underlie knowledge acquisition: knowledge enrichment, which involves learning information that can be represented with one’s current conceptual repertoire; and conceptual construction, which involves acquiring knowledge that can only be represented in terms of concepts one does not yet possess. Theory changes involving conceptual change require conceptual construction. The cognitive mechanisms underlying conceptual change are still poorly understood, though executive function capacities have been implicated. The present study concerns the domain-general resources drawn upon in one well-studied case of the construction of a new framework theory in early childhood: the framework theory of vitalist biology, the ontogenetically earliest theory in which the concepts life and death come to have biological content shared with adults. Eighty-three five- and six-year-old children were tested on a battery of tasks that probe central concepts of the vitalist theory, as well as on a battery of tests of domain-general capacities that may be implicated in development in this domain, including measures of knowledge enrichment, executive function, and fluid IQ. With variance in accumulated knowledge and in knowledge enrichment capacity controlled, two specific executive functions, shifting and inhibition, predicted children’s progress in constructing the vitalist theory. In contrast, working memory and fluid IQ were not associated with the acquisition of vitalist biology. These results provide further evidence for the distinction between knowledge enrichment and conceptual construction and impose new constraints on accounts of the mechanisms underlying conceptual construction in this domain.
  • Delayed disengagement of attention from distractors signalling reward
    • Abstract: Publication date: February 2020. Source: Cognition, Volume 195. Author(s): Poppy Watson, Daniel Pearson, Jan Theeuwes, Steven B. Most, Mike E. Le Pelley. Attention refers to the set of cognitive mechanisms that facilitate the prioritization of incoming sensory information. Existing research suggests that motivationally salient stimuli, such as those associated with reward, are prioritized by the attention system and that this prioritization occurs independently of an observer’s goals. Specifically, studies of visual search have shown that stimuli signalling the availability of monetary reward are more likely to capture eye movements, even when participants are motivated to ignore such stimuli. In the current study we ask whether reward magnitude influences only the likelihood that stimuli will capture spatial attention, or whether reward also influences the ease with which people can disengage attention from a location when they are motivated to move their attention elsewhere. Four experiments examined the time taken to disengage from a centrally presented distractor that signalled the availability of high or low reward. We found that participants took longer to move their eyes away from a high-reward distractor, even though this came at financial cost (Experiment 1), that participants were unable to suppress a high-reward distractor consistently presented at the central location (Experiment 2), that slower responding was not due to behavioural freezing in the presence of a signal of high reward (Experiment 3), and that slower responding persisted even when rewards were no longer available (Experiment 4). These results indicate that reward modulates attentional disengagement: signals of high reward hold attention for longer, even when this is counterproductive for performance of ongoing tasks. Our findings further highlight the role of reward in the conflict between automatic and goal-directed attentional processing.
  • Agents' pivotality and reward fairness modulate sense of agency in
           cooperative joint action
    • Abstract: Publication date: February 2020. Source: Cognition, Volume 195. Author(s): Solène Le Bars, Alexandre Devaux, Tena Nevidal, Valérian Chambon, Elisabeth Pacherie. The sense of agency (SoA) experienced in joint action is an essential subjective dimension of human cooperativeness, but we still know little about the specific factors that contribute to its emergence or alteration. In the present study, dyads of participants were instructed to coordinate their key presses to move a cursor up to a specific target (i.e., to achieve a common goal). We applied random deviations on the cursor’s trajectory to manipulate the motor fluency of the joint action, while the agents’ motor roles were either balanced (i.e., equivalent) or unbalanced (i.e., one agent contributed more than the other), making the agents more or less pivotal to the joint action. Then, the final outcomes were shared equally, fairly (i.e., reflecting individual motor contributions) or arbitrarily in an all-or-none fashion, between the co-agents. Self and joint SoA were measured through self-reports about feeling of control, that is, using judgment of (felt) control (JoC), and electrodermal activity was recorded during the whole motor task. We observed that self and joint JoC were reduced in the case of low motor fluency, pointing out the importance of sensorimotor cues for both I- and we-modes. Moreover, while self JoC was reduced in the low pivotality condition (i.e., low motor role), joint JoC was significantly enhanced when agents’ roles and rewards were symmetrical (i.e. equal). Skin conductance responses to rewards were impacted by the way outcomes were shared between partners (i.e., fairly, equally or arbitrarily) but not by the individual gains, which demonstrates the sensitivity of low-level physiological reactions to external signs of fairness. Skin conductance level was also reduced in the fair context, where rewards were shared according to individual motor contributions, relative to the all-or-none context, which could mirror the feeling of effective responsibility and control over actions’ outcomes.
  • Ease of learning explains semantic universals
    • Abstract: Publication date: February 2020. Source: Cognition, Volume 195. Author(s): Shane Steinert-Threlkeld, Jakub Szymanik. Semantic universals are properties of meaning shared by the languages of the world. We offer an explanation of the presence of such universals by measuring simplicity in terms of ease of learning, showing that expressions satisfying universals are simpler than those that do not according to this criterion. We measure ease of learning using tools from machine learning and analyze universals in a domain of function words (quantifiers) and content words (color terms). Our results provide strong evidence that semantic universals across both function and content words reflect simplicity as measured by ease of learning.
  • How influenceable is our metamemory for pictorial material? The impact
           of framing and emotionality on metamemory judgments
    • Abstract: Publication date: February 2020. Source: Cognition, Volume 195. Author(s): Michaela Schmoeger, Matthias Deckert, Eva Loos, Ulrike Willinger. Metamemory refers to the ability to monitor and control one's own memory processes, which plays an important role in everyday life when accuracy of memory is required. The present study intends to give new insights into the complex relationship between confidence in memory and accuracy of metamemory judgments for negative emotional and neutral pictorial stimuli. Judgments of learning (JOLs) were investigated in order to provide important theoretical information for practical applications in everyday life. A balanced 2 × 2 factorial experimental design was used to explore the impact of framing in terms of remembering or forgetting as well as emotionality on JOLs. With respect to the already known complex relationship of confidence in memory and accuracy of confidence statements, the present results emphasize this complexity by showing that there is no interaction between the factors framing and emotionality but significant main effects of these two factors with respect to JOLs. Furthermore, accuracy of JOLs is not influenced by framing in terms of remembering and forgetting. Both framing conditions lead to overconfident judgments, regardless of whether confidence in memory is influenced by framing. Emotionality, on the other hand, enhances memory accuracy regardless of whether the subjective feeling of remembering influences confidence in memory or not. The present findings highlight the need to strengthen the collective consciousness about the influenceability of confidence in memory and the fact that a high confidence in memory is not inevitably accompanied by accurate memories.
  • Prediction error and regularity detection underlie two dissociable
           mechanisms for computing the sense of agency
    • Abstract: Publication date: February 2020Source: Cognition, Volume 195Author(s): Wen Wen, Patrick HaggardAbstractThe sense of agency refers to the subjective feeling of controlling one’s own actions, and through them, events in the outside world. According to computational motor control models, the prediction errors from comparison between the predicted sensory feedback and actual sensory feedback determine whether people feel agency over the corresponding outcome event, or not. This mechanism requires a model of the relation between action and outcome. However, in a novel environment, where this model has not yet been learned, the sense of agency must emerge during exploratory behaviours. In the present study, we designed a novel control detection task, in which participants explored the extent to which they could control the movement of three dots with a computer mouse, and then identified the dot that they felt they could control. Pre-recorded motions were applied for two dots, and the participants’ real-time motion only influenced one dot’s motion (i.e. the target dot). We disturbed participants’ control over the motion of the target dot in one of two ways. In one case, we applied a fixed angular bias transformation between participant’s movements and dot movements. In another condition, we mixed the participant’s current movement with replay of another movement, and used the resulting hybrid signal to drive visual dot position. The former intervention changes the match between motor action and visual outcome, but maintains a regular relation between the two. In contrast, the latter alters both matching and motor-visual correlation. Crucially, we carefully selected the strength of these two perturbations so that they caused the same magnitude of impairment of motor performance in a simple reaching task, suggesting that both interventions produced comparable prediction errors. 
However, we found the visuomotor transformation had much less effect on the ability to detect which dot was under one’s own control than did the nonlinear disturbance. This suggests a specific role of a correlation-like mechanism that detects ongoing visual-motor regularity in the human sense of agency. These regularity-detection mechanisms would remain intact under the linear, but not the nonlinear transformation. Human sense of agency may depend on monitoring ongoing motor-visual regularities, as well as on detecting prediction errors.
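The two perturbations described above lend themselves to a compact sketch. The fixed angular bias is a standard 2-D rotation of the movement vector, and the replay condition is a linear blend of the participant's movement with a pre-recorded one. The specific bias angle and mixing weight below are illustrative assumptions; the abstract does not report the values used.

```python
import math

def rotate_velocity(dx, dy, bias_deg):
    """Apply a fixed angular bias to a 2-D movement vector, as in the
    visuomotor-rotation condition: the motor-visual mapping changes but
    stays a regular (linear) transformation."""
    theta = math.radians(bias_deg)
    rx = dx * math.cos(theta) - dy * math.sin(theta)
    ry = dx * math.sin(theta) + dy * math.cos(theta)
    return rx, ry

def mix_with_replay(own, replay, w):
    """Blend the participant's current movement with a replayed movement,
    as in the hybrid condition; w = 0 gives full control, w = 1 pure
    replay. The weighting scheme here is an illustrative assumption."""
    return (1.0 - w) * own + w * replay

# A purely rightward mouse movement rotated by a 90-degree bias now
# drives the dot along the orthogonal axis:
rx, ry = rotate_velocity(1.0, 0.0, 90.0)
```

The rotation preserves the moment-to-moment correlation between hand and dot, while the blend degrades it, which is exactly the contrast the study exploits.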
  • Discrimination of ordinal relationships in temporal sequences by
           4-month-old infants
    • Abstract: Publication date: February 2020Source: Cognition, Volume 195Author(s): Maria Dolores de Hevia, Viola Macchi Cassia, Ludovica Veggiotti, Maria Eirini NetskouAbstractThe ability to discriminate the ordinal information embedded in magnitude-based sequences has been shown in 4-month-old infants, both for numerical and size-based sequences. At this early age, however, this ability is confined to increasing sequences, with infants failing to extract and represent decreasing order. Here we investigate whether the ability to represent order extends to duration-based sequences in 4-month-old infants, and whether it also shows the asymmetry signature previously observed for number and size. Infants were tested in an order discrimination task in which they were habituated to either increasing or decreasing variations in temporal duration, and were then tested with novel sequences composed of new temporal items whose durations varied following the familiar and the novel orders in alternation. Across three experiments, we manipulated the duration of the single temporal items and therefore of the whole sequences, which resulted in imposing more or less constraints on infants’ working memory, or general processing capacities. Results showed that infants failed at discriminating the ordinal direction in temporal sequences when the sequences had an overall long duration (Experiment 1), but succeeded when the duration of the sequences was shortened (Experiments 2 and 3). Moreover, there was no sign of the asymmetry signature previously reported for number and size, as successful discrimination was present for infants habituated to both increasing and decreasing sequences. These results suggest that sensitivity to temporal order is present very early in development, and that its functional properties are not shared with other magnitude dimensions, such as size and number.
  • Numbers around Descartes: A preregistered study on the three-dimensional
           SNARC effect
    • Abstract: Publication date: February 2020Source: Cognition, Volume 195Author(s): Sara Aleotti, Francesco Di Girolamo, Stefano Massaccesi, Konstantinos PriftisAbstractThe Spatial-Numerical Association of Response Codes (SNARC) effect suggests that numbers are represented along a horizontal left-to-right oriented, mental number line, with small numbers on the left and large numbers on the right. Much less evidence exists for vertical (down-to-up) and sagittal (near-to-far) SNARC effects. This might be due to the employment of different experimental paradigms among studies and to the, sometimes, inexact definition of the vertical and sagittal axes. We investigated for the first time the SNARC effect along the horizontal, vertical, and sagittal axes, by means of a classic SNARC task. Our results suggest the presence of three equally-strong SNARC effects. Our findings can be considered as evidence in favor of a three-dimensional, mental representation of numbers, in the form of a mental number space, defined by Cartesian coordinates.
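The SNARC effect in a classic task of this kind is typically quantified with a regression method: for each number, the difference in response time between the right-side and left-side response (dRT) is regressed on number magnitude, and a negative slope indicates the left-small/right-large association. The abstract does not spell out its analysis, so the following is a hedged sketch of that standard approach with hypothetical data.

```python
def snarc_slope(digits, drt):
    """Ordinary least-squares slope of dRT (right-response RT minus
    left-response RT, in ms) on digit magnitude; a negative slope is
    the classic SNARC signature."""
    n = len(digits)
    mx = sum(digits) / n
    my = sum(drt) / n
    num = sum((x - mx) * (y - my) for x, y in zip(digits, drt))
    den = sum((x - mx) ** 2 for x in digits)
    return num / den

# Hypothetical dRTs that decrease with magnitude yield a negative slope:
slope = snarc_slope([1, 2, 8, 9], [20.0, 12.0, -15.0, -22.0])
```

On the three-dimensional account proposed here, the same regression could be run separately for the horizontal, vertical, and sagittal response axes.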
  • Putting the world in mind: The case of mental representation of quantity
    • Abstract: Publication date: February 2020Source: Cognition, Volume 195Author(s): Naama Katzin, David Katzin, Adi Rosén, Avishai Henik, Moti SaltiAbstractA reoccurring question in cognitive science concerns the way the world is represented. Cognitive scientists quantify the contribution of a physical attribute to a sensation and try to characterize the underlying mechanism. In numerical cognition, the contribution of physical properties to quantity perception in comparison tasks was widely demonstrated albeit leaving the underlying mechanism unclear. Furthermore, it is unclear whether this contribution is related solely to comparison tasks or to a core, general ability. Here we demonstrate that the shape of the convex hull, the smallest convex polygon containing all objects in an array, plays a role in the transfer function between quantity and its mental representation. We used geometric probability to demonstrate that the shape of the convex hull is correlated with quantity in a way that resembles the behavioral enumeration curve of subitizing and estimation. Then, in two behavioral experiments we manipulated the shape of the convex hull and demonstrated its effect on enumeration. Accordingly, we suggest that humans learn the correlation between convex hull shape and numerosity and use it to enumerate.
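The convex hull invoked above is a well-defined geometric object and can be computed directly. The sketch below uses Andrew's monotone-chain algorithm (a standard method, not necessarily the one the authors used) to recover the smallest convex polygon containing a point array, so that hull-shape statistics could then be related to numerosity.

```python
def convex_hull(points):
    """Andrew's monotone-chain convex hull of 2-D points: the smallest
    convex polygon containing all objects in an array. Returns hull
    vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

# An interior point does not appear on the hull, so the hull's shape
# depends only on the outermost items in the array:
hull = convex_hull([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)])
```

This makes concrete why hull shape and numerosity are only statistically, not deterministically, linked: adding items inside the hull changes quantity without changing the hull at all.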
  • On the limits of evidence accumulation of the preconscious percept
    • Abstract: Publication date: February 2020Source: Cognition, Volume 195Author(s): Alberto Avilés, Howard Bowman, Brad WybleAbstractWhen a stimulus is repeated, the memory representation for that stimulus is strengthened and performance in memory tests increases. To what extent does this effect require that each exposure elicit a fully-fledged conscious percept? In two Rapid Serial Visual Presentation (RSVP) experiments we explored whether the memory representations for words would accumulate evidence through repeated exposure when none of those presentations induced a conscious percept. Participants were instructed to identify repeated items inserted in different RSVP streams, and we isolated the first instance at which an item was consciously perceived as repeating. The results showed that the probability of detecting a repetition for the first time was constant across repetitions. This effect signals a limit to the evidence accumulation process through repeated exposure. We discuss whether conscious perception modulates the decay of memory representations, with below-threshold items resulting in extremely fleeting memory representations.
  • Temporal and spatial ensemble statistics are formed by distinct mechanisms
    • Abstract: Publication date: February 2020Source: Cognition, Volume 195Author(s): Haojiang Ying, Edwin J. Burns, Amanda M. Choo, Hong Xu
  • Bilingual switching between languages and listeners: Insights from
           immersive virtual reality
    • Abstract: Publication date: February 2020Source: Cognition, Volume 195Author(s): David PeetersAbstractPerhaps the main advantage of being bilingual is the capacity to communicate with interlocutors that have different language backgrounds. In the life of a bilingual, switching interlocutors hence sometimes involves switching languages. We know that the capacity to switch from one language to another is supported by control mechanisms, such as task-set reconfiguration. This study investigates whether similar neurophysiological mechanisms support bilingual switching between different listeners, within and across languages. A group of 48 unbalanced Dutch-English bilinguals named pictures for two monolingual Dutch and two monolingual English life-size virtual listeners in an immersive virtual reality environment. In terms of reaction times, switching languages came at a cost over and above the significant cost of switching from one listener to another. Analysis of event-related potentials showed similar electrophysiological correlates for switching listeners and switching languages. However, it was found that having to switch listeners and languages at the same time delays the onset of lexical processes more than a switch between listeners within the same language. Findings are interpreted in light of the interplay between proactive (sustained inhibition) and reactive (task-set reconfiguration) control in bilingual speech production. It is argued that a possible bilingual advantage in executive control may not be due to the process of switching per se. This study paves the way for the study of bilingual language switching in ecologically valid, naturalistic, experimental settings.
  • Two-day-old newborns learn to discriminate accelerated-decelerated
           biological kinematics from constant velocity motion
    • Abstract: Publication date: February 2020Source: Cognition, Volume 195Author(s): Laila Craighero, Valentina Ghirardi, Marco Lunghi, Fiorenza Panin, Francesca SimionAbstractAlready in utero the hand moves with the typical accelerated-decelerated kinematics of goal-directed actions and, from the twenty-second week of pregnancy, the unborn shows the ability to modulate the velocity of the movement depending on the nature of the target. According to the direct matching hypothesis, this motor knowledge may be sufficient to attune neonates' motion perception—like adults'—to biological kinematics. Using dot-configuration motions which varied with respect to the kinematics of goal-directed actions, we observed that two-day-old human newborns did not show any spontaneous preference for either biological accelerated-decelerated motion or non-biological constant velocity motion when these were simultaneously presented in a standard preferential looking paradigm. In contrast, newborns preferred the biological kinematics after the repeated visual presentation of the different motions in a standard infant-control visual habituation paradigm. We propose that the present results indicate that the relationship between perception and action requires not only action development but also the accumulation of sufficient perceptual experience. They also suggest a fast plasticity of the sensorimotor system in linking already acquired motor knowledge with newly experienced congruent visual stimulation.
  • Detecting falsehood relies on mismatch detection between sentence components
    • Abstract: Publication date: February 2020Source: Cognition, Volume 195Author(s): Rebecca Weil, Liad MudrikAbstractHow do people process and evaluate falsehood of sentences? Do people need to compare presented information with the correct answer to determine that a sentence is false, or do they rely on a mismatch between presented sentence components? To illustrate, when confronted with the false sentence ‘trains run on highways’, does one need to know that trains do not run on highways or does one need to know that trains run on tracks to reject the sentence as false? To investigate these questions, participants were asked to validate sentences that were preceded by images (Experiments 1–3) conveying a truth-congruent or a falsehood-congruent component of the sentence (e.g., an image of tracks/highway preceding the sentence ‘trains run on tracks/highways’) or by words (Experiment 4) that were either sentence-congruent, truth-congruent, or both (e.g., the word ‘train/tracks’ preceding the sentence ‘trains run on tracks/highways’). Results from four experiments showed that activating sentence-congruent concepts facilitates validation for both false and true sentences but that activating truth-congruent concepts did not aid the validation of false sentences. The present findings suggest that the detection of falsehood relies on a mismatch detection between a sentence’s components, rather than on the activation of true content in the context of a particular sentence.
  • Peripersonal space is diversely sensitive to a temporary vs permanent
           state of anxiety
    • Abstract: Publication date: February 2020Source: Cognition, Volume 195Author(s): C. Spaccasassi, A. MaravitaAbstractPeripersonal Space (PPS) is the multisensory space immediately surrounding our body. Visual and tactile stimuli here are promptly processed, since their interaction gradually strengthens as the distance between visual stimulus and the body decreases. Recently, a modified version of the Temporal Order Judgment (TOJ) task was proposed to assess PPS based on the spatial congruence between somatosensory and visual stimuli. Here, we used this paradigm to explore how a temporary vs a permanent state of anxiety can alter PPS. Indeed, previous research showed that PPS boundaries are not fixed, but they can be enlarged by contingent factors (i.e. emotional features). Participants performed the TOJ paradigm twice, just before and after completing an anxiety-inducing task (experimental breathing condition) or a neutral one (control breathing condition), while their trait and state anxiety levels were repeatedly measured. We found that the pattern of visuo-tactile integration in PPS changes in the very opposite way following the two breathing tasks for participants with high levels of temporary anxiety, by strengthening and weakening its power after the experimental and control conditions, respectively. On the contrary, both the breathing tasks are capable of reducing the cross-modal interplay as compared to baseline for high trait-anxious participants, who show an overall stronger visuo-tactile integration inside the PPS than low trait anxious individuals. These results are discussed in the light of the double dissociation between orienting and alerting attentional network over-functioning, reported in state anxiety participants, and impoverished prefrontal attentional control shown by trait anxiety individuals.
  • Bouba and Kiki inside objects: Sound-shape correspondence for objects with
           a hole
    • Abstract: Publication date: February 2020Source: Cognition, Volume 195Author(s): Sung-Ho KimAbstractVisual holes (cutouts in a surface) have recently intrigued vision scientists as interesting and useful stimuli in the studies of shape perception and as a perceptual conundrum regarding figure/ground organization. Adopting the Bouba/Kiki paradigm, this study addressed a controversial issue of whether the perceived shape of a closed region alters when the region changes from a solid object to an empty hole, in a more direct manner than previous studies did. Observers were presented with two doughnut-like cardboard cutouts, one with a flower-shaped hole and the other with a star-shaped hole, and then matched them with two nonsense words. The curvature profile of the hole boundary was manipulated so that the shape of the interior region (i.e., a hole) and that of the exterior region (i.e., material edges) give rise to opposite shape impressions (i.e., one rounded and the other spiky). The results of Experiment 1 revealed that shape-name matching for holed objects is based on the interior shapes of holes, but not those of materially defined inner edges. The following three experiments replicated the same pattern of results even when holes appeared like oral apertures in animal character faces (Experiments 2–3) and when they were irregular, non-symmetric, and low in semantic association with familiar real-world objects (Experiment 4). Lastly, Experiment 5 showed that shape-name matching for “C”-shaped, negative-part stimuli is also interior-shape-based if the opening of the interior region is relatively small. These findings suggest that the interior shapes of holes are automatically accessible. I conclude with a discussion of my hypothesis that the only global-level, unitary shape representation of a bounded region of a single connected surface is that of the interior region for both objects and holes, imposing an important constraint in visual shape processing.
  • Communication efficiency of color naming across languages provides a new
           framework for the evolution of color terms
    • Abstract: Publication date: February 2020Source: Cognition, Volume 195Author(s): Bevil R. Conway, Sivalogeswaran Ratnasingam, Julian Jara-Ettinger, Richard Futrell, Edward GibsonAbstractLanguages vary in their number of color terms. A widely accepted theory proposes that languages evolve, acquiring color terms in a stereotyped sequence. This theory, by Berlin and Kay (BK), is supported by analyzing best exemplars (“focal colors”) of basic color terms in the World Color Survey (WCS) of 110 languages. But the instructions of the WCS were complex and the color chips confounded hue and saturation, which likely impacted focal-color selection. In addition, it is now known that even so-called early-stage languages nonetheless have a complete representation of color distributed across the population. These facts undermine the BK theory. Here we revisit the evolution of color terms using original color-naming data obtained with simple instructions in Tsimane’, an Amazonian culture that has limited contact with industrialized society. We also collected data in Bolivian-Spanish speakers and English speakers. We discovered that information theory analysis of color-naming data was not influenced by color-chip saturation, which motivated a new analysis of the WCS data. Embedded within a universal pattern in which warm colors (reds, oranges) are always communicated more efficiently than cool colors (blues, greens), as languages increase in overall communicative efficiency about color, some colors undergo greater increases in communication efficiency compared to others. Communication efficiency increases first for yellow, then brown, then purple. The present analyses and results provide a new framework for understanding the evolution of color terms: what varies among cultures is not whether colors are seen differently, but the extent to which color is useful.
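The communication-efficiency measure referenced above can be sketched in information-theoretic terms: a color chip is communicated efficiently when the word speakers produce for it leaves a listener with little uncertainty about which chip was meant. The listener model, the uniform prior over chips, and the toy naming data below are all illustrative assumptions, not the authors' exact pipeline.

```python
import math
from collections import Counter

def avg_surprisal(naming):
    """Expected surprisal of each color chip under a simple listener
    model: the listener hears word w and infers chip c with P(c|w)
    estimated from the naming counts (uniform prior over chips).
    Lower surprisal = more efficiently communicated chip.
    naming: list of (chip, word) responses."""
    count_cw = Counter(naming)                 # counts of (chip, word) pairs
    count_w = Counter(w for _, w in naming)    # counts of each word overall
    surprisal = {}
    for c in sorted({chip for chip, _ in naming}):
        words = [w for chip, w in naming if chip == c]
        s = 0.0
        for w, n in Counter(words).items():
            p_w_given_c = n / len(words)
            p_c_given_w = count_cw[(c, w)] / count_w[w]
            s += p_w_given_c * -math.log2(p_c_given_w)
        surprisal[c] = s
    return surprisal

# A chip with its own consistent name costs the listener nothing, while
# two chips sharing one word each cost a full bit:
s = avg_surprisal([('chip1', 'red'), ('chip1', 'red'),
                   ('chip2', 'grue'), ('chip3', 'grue')])
```

On this kind of measure, the finding that warm colors are always communicated more efficiently than cool colors amounts to warm chips having systematically lower expected surprisal across languages.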
  • Who's got the global advantage? Visual field differences in processing
           of global and local shape
    • Abstract: Publication date: February 2020Source: Cognition, Volume 195Author(s): Christian Gerlach, Nicolas PoirelAbstractMuch evidence suggests that we first perceive the overall layout of a scene or object followed later by the details. This coarse-to-fine temporal dynamic in visual processing is also found in Navon's classical paradigm where information at the global level of compound stimuli is processed faster than information at the local level (global precedence effect), and where information at the global level has larger effects on local level responses than local level information has on global level responses (asymmetric interference effects). Traditionally, global shape primacy in Navon's paradigm has been linked with a right hemisphere preference (left visual field advantage) for global shape processing, and a left hemisphere preference (right visual field advantage) for local shape processing. This link, however, has been based on measures which confound global precedence and interference effects. Indeed, when these measures are de-confounded, we find no evidence for larger global precedence effects in the left compared with the right visual field in a large sample of participants (N = 337). In comparison, global-to-local interference effects are found to be stronger in the left than in the right visual field. We argue that these findings can be accounted for by assuming that the right hemisphere plays a special role in integrating shape information across spatial scales, that is, without assuming the existence of a right hemisphere preference for global shape processing per se.
  • Direct gaze enhances interoceptive accuracy
    • Abstract: Publication date: February 2020Source: Cognition, Volume 195Author(s): Tomoko Isomura, Katsumi WatanabeAbstractDirect-gaze signals are known to modulate human cognition, including self-awareness. In the present study, we specifically focused on ‘bodily’ self-awareness and examined whether direct gaze would modulate one’s interoceptive accuracy (IAcc)—the ability to accurately monitor internal bodily sensations. While viewing a photograph of a frontal face with a direct gaze, an averted face or a mere white cross as a baseline, participants were required to count their heartbeats without taking their pulse. The results showed higher IAcc in the direct-gaze condition than in the averted-face or baseline condition. This was particularly the case in participants with low IAcc at baseline, indicating that direct gaze enhanced the participants’ IAcc. Importantly, their heart rate was not different while viewing the direct gaze and averted face, suggesting that sensitivity to interoceptive signals, rather than physiological arousal, is heightened by direct gaze. These findings demonstrate the role of social signals in our bodily interoceptive processing and support the notion of the social nature of self-awareness.
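Interoceptive accuracy in heartbeat-counting tasks of this kind is commonly scored with the Schandry-style formula: one minus the normalized discrepancy between recorded and counted beats, averaged over trials. The abstract does not state its exact scoring rule, so the following is a hedged sketch of that conventional measure.

```python
def interoceptive_accuracy(recorded, counted):
    """Heartbeat-counting accuracy (IAcc) in the style of Schandry (1981):
    mean over trials of 1 - |recorded - counted| / recorded.
    Scores approach 1.0 when counted beats match the recorded beats.
    (Illustrative; the abstract does not spell out the formula used.)"""
    scores = [1.0 - abs(r - c) / r for r, c in zip(recorded, counted)]
    return sum(scores) / len(scores)

# Counting 45 of 50 beats on one trial and 60 of 62 on another:
iacc = interoceptive_accuracy([50, 62], [45, 60])
```

A direct-gaze benefit of the kind reported here would show up as higher scores in the direct-gaze condition than in the averted-face or baseline conditions, with heart rate itself unchanged.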
  • Corrigendum to “Not all who ponder count costs: Arithmetic reflection
           predicts utilitarian tendencies, but logical reflection predicts both
    • Abstract: Publication date: Available online 26 October 2019Source: CognitionAuthor(s): Nick Byrd, Paul Conway