Cognition
   [SJR: 2.77]   [H-I: 142]
   Hybrid journal (it can contain Open Access articles)
   ISSN (Print) 0010-0277
   Published by Elsevier
  • Mine is better than yours: Investigating the ownership effect in children
           with autism spectrum disorder and typically developing children
    • Abstract: Publication date: March 2018
      Source:Cognition, Volume 172
      Author(s): Calum Hartley, Sophie Fisher
      Ownership has a unique and privileged influence on human psychology. Typically developing (TD) children judge their objects to be more desirable and valuable than similar objects belonging to others. This ‘ownership effect’ is due to processing one’s property in relation to ‘the self’. Here we explore whether children with autism spectrum disorder (ASD) – a population with impaired self-understanding – prefer and over-value property due to ownership. In Experiment 1, we discovered that children with ASD did not favour a randomly endowed toy and frequently traded for a different object. By contrast, TD children showed a clear preference for their randomly endowed toy and traded infrequently. Both populations also demonstrated highly-accurate tracking of owner-object relationships. Experiment 2 showed that both TD children and children with ASD over-value their toys if they are self-selected and different from other-owned toys. Unlike TD children, children with ASD did not over-value their toys in comparison to non-owned identical copies. This finding was replicated in Experiment 3, which also established that mere ownership elicited over-valuation of randomly endowed property in TD children. However, children with ASD did not consistently regard their randomly endowed toys as the most valuable, and evaluated property irrespective of ownership. Our findings show that mere ownership increases preferences and valuations for self-owned property in TD children, but not children with ASD. We propose that deficits in self-understanding may diminish ownership effects in ASD, eliciting a more economically-rational strategy that prioritises material qualities (e.g. what a toy is) rather than whom it belongs to.

      PubDate: 2017-12-13T06:20:48Z
       
  • Involuntary top-down control by search-irrelevant features: Visual working
           memory biases attention in an object-based manner
    • Abstract: Publication date: March 2018
      Source:Cognition, Volume 172
      Author(s): Rebecca M. Foerster, Werner X. Schneider
      Many everyday tasks involve successive visual-search episodes with changing targets. Converging evidence suggests that these targets are retained in visual working memory (VWM) and bias attention from there. It is unknown whether all or only search-relevant features of a VWM template bias attention during search. Bias signals might be configured exclusively to task-relevant features so that only search-relevant features bias attention. Alternatively, VWM might maintain objects in the form of bound features. Then, all template features will bias attention in an object-based manner, so that biasing effects are ranked by feature relevance. Here, we investigated whether search-irrelevant VWM template features bias attention. Participants had to saccade to a target opposite a distractor. A colored cue depicted the target prior to each search trial. The target was predefined only by its identity, while its color was irrelevant. When target and cue matched not only in identity (search-relevant) but also in color (search-irrelevant), saccades went more often and faster directly to the target than without any color match (Experiment 1). When introducing a cue-distractor color match (Experiment 2), direct target saccades were most likely when target and cue matched in the search-irrelevant color and least likely in case of a cue-distractor color match. When cue and target were never colored the same (Experiment 3), cue-colored distractors still captured the eyes more often than different-colored distractors despite color being search-irrelevant. As participants were informed about the misleading color, the result argues against a strategical and voluntary usage of color. Instead, search-irrelevant features biased attention obligatorily arguing for involuntary top-down control by object-based VWM templates.

      PubDate: 2017-12-13T06:20:48Z
       
  • Understanding face familiarity
    • Abstract: Publication date: March 2018
      Source:Cognition, Volume 172
      Author(s): Robin S.S. Kramer, Andrew W. Young, A. Mike Burton
      It has been known for many years that identifying familiar faces is much easier than identifying unfamiliar faces, and that this familiar face advantage persists across a range of tasks. However, attempts to understand face familiarity have mostly used a binary contrast between ‘familiar’ and ‘unfamiliar’ faces, with no attempt to incorporate the vast range of familiarity we all experience. From family members to casual acquaintances and from personal to media exposure, familiarity is a more complex categorisation than is usually acknowledged. Here we model levels of familiarity using a generic statistical analysis (PCA combined with LDA) computed over some four thousand naturally occurring images that include a large variation in the numbers of images for each known person. Using a strong test of performance with entirely novel, untrained everyday images, we show that such a model can simulate widely documented effects of familiarity in face recognition and face matching, and offers a natural account of the internal feature advantage for familiar faces. Furthermore, as with human viewers, the benefits of familiarity seem to accrue from being able to extract consistent information across different photos of the same face. We argue that face familiarity is best understood as reflecting increasingly robust statistical descriptions of idiosyncratic within-person variability. Understanding how faces become familiar appears to rely on both bottom-up statistical image descriptions (modelled here with PCA), and top-down processes that cohere superficially different images of the same person (modelled here with LDA).

      PubDate: 2017-12-13T06:20:48Z
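
The "generic statistical analysis (PCA combined with LDA)" described above can be sketched in a few lines. The following Python example only illustrates that general pipeline; it is not the authors' code, and the image matrix, identity labels, and dimensionalities are made-up placeholders.

```python
# Illustrative sketch of a PCA + LDA pipeline over vectorised face images.
# Not the authors' implementation; all data and sizes below are placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_images, n_pixels, n_identities = 400, 900, 20          # toy sizes
images = rng.normal(size=(n_images, n_pixels))           # stand-in for face images
identities = rng.integers(0, n_identities, n_images)     # who is in each image

# PCA models the bottom-up image statistics; LDA then finds directions that
# cohere different images of the same person (the top-down component).
model = make_pipeline(PCA(n_components=50), LinearDiscriminantAnalysis())
model.fit(images, identities)

# An entirely novel, untrained image is matched to the most similar identity.
novel_image = rng.normal(size=(1, n_pixels))
print(model.predict(novel_image))
```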
       
  • A unified computational account of cumulative semantic, semantic blocking,
           and semantic distractor effects in picture naming
    • Abstract: Publication date: March 2018
      Source:Cognition, Volume 172
      Author(s): Ardi Roelofs
      Computational models of lexical selection in spoken word production have been applied to semantic interference effects in picture naming response times obtained with continuous naming, blocked-cyclic naming, and picture-word interference paradigms. However, a unified computational account of the effects in the three paradigms is lacking. Here, I show that the inclusion of conceptual bias in the WEAVER++ model (Levelt, Roelofs, & Meyer, 1999) explains cumulative semantic and semantic blocking effects while preserving the model's account of semantic distractor effects. The key assumptions of the account are (1) lexical selection by competition, and (2) a conceptual origin and lexical locus of the semantic effects. I provide a proof of concept of the account by reporting computer simulation results, addressing behavioral and neuroimaging evidence. The assumptions are sufficient for a unified account of semantic effects in the three paradigms, contrary to pessimistic views of this area.

      PubDate: 2017-12-13T06:20:48Z
       
  • Distinguishing languages from dialects: A litmus test using the
           picture-word interference task
    • Abstract: Publication date: March 2018
      Source:Cognition, Volume 172
      Author(s): Alissa Melinger
      Linguists have been working to develop objective criteria for distinguishing languages from dialects for well over half a century. The prevailing view amongst sociolinguists is that no objective criteria can be formulated. The aim of this study is to examine whether language processing can provide insights into this problem by comparing bidialectal behavioural effects to bilingual effects reported in the literature. Previous research has demonstrated that when bilinguals name an object in Lx while simultaneously processing a translation equivalent distractor word in Ly, naming times are sped up relative to an unrelated condition (Costa, Miozzo, & Caramazza, 1999). Using the same methodology, we evaluated whether a comparable facilitation effect arises when the distractor word is a dialectal or register variant of the picture name. Across 5 experiments we found no trace of translation equivalent facilitation. Instead, we repeatedly observed between-dialect and between-register interference, in contrast to the between-language facilitation effect. This behavioural divergence between bilingual vs. bidialectal processing suggests that this paradigm could provide an objective litmus test for identifying the boundary between dialects and languages.

      PubDate: 2017-12-13T06:20:48Z
       
  • Learning and transfer of working memory gating policies
    • Abstract: Publication date: March 2018
      Source:Cognition, Volume 172
      Author(s): Apoorva Bhandari, David Badre
      Knowledge about the tasks we encounter enables us to rapidly and flexibly adapt to novel task contexts. Previous research has focused primarily on abstract rules that leverage shared structure in stimulus-response (S-R) mappings as the basis of such task knowledge. Here we provide evidence that working memory (WM) gating policies – a type of control policy required for internal control of WM during a task – constitute a form of abstract task knowledge that can be transferred across contexts. In two experiments, we report specific evidence for the transfer of selective WM gating policies across changes of task context. We show that this transfer is not tied to shared structure in S-R mappings, but instead to the dynamic structure of the task. Collectively, our results highlight the importance of WM gating policies in particular, and control policies in general, as a key component of the task knowledge that supports flexible behavior and task generalization.

      PubDate: 2017-12-13T06:20:48Z
       
  • Sensorimotor training alters action understanding
    • Abstract: Publication date: February 2018
      Source:Cognition, Volume 171
      Author(s): Caroline Catmur, Emma L. Thompson, Orianna Bairaktari, Frida Lind, Geoffrey Bird
      The discovery of ‘mirror’ neurons stimulated intense interest in the role of motor processes in social interaction. A popular assumption is that observation-related motor activation, exemplified by mirror neurons’ matching properties, evolved to subserve the ‘understanding’ of others’ actions. Alternatively, such motor activation may result from sensorimotor learning. Sensorimotor training alters observation-related motor activation, but studies demonstrating training-dependent changes in motor activation have not addressed the functional role of such activation. We therefore tested whether sensorimotor learning alters action understanding. Participants completed an action understanding task, judging the weight of boxes lifted by another person, before and after ‘counter-mirror’ sensorimotor training. During this training they lifted heavy boxes while observing light boxes being lifted, and vice-versa. Compared to a control group, this training significantly reduced participants’ action understanding ability. Performance on a duration judgement task was unaffected by training. These data suggest the ability to understand others’ actions results from sensorimotor learning.

      PubDate: 2017-12-13T06:20:48Z
       
  • The influence of prior reputation and reciprocity on dynamic
           trust-building in adults with and without autism spectrum disorder
    • Abstract: Publication date: March 2018
      Source:Cognition, Volume 172
      Author(s): Cornelius Maurer, Valerian Chambon, Sacha Bourgeois-Gironde, Marion Leboyer, Tiziana Zalla
      The present study was designed to investigate the effects of reputational priors and direct reciprocity on the dynamics of trust building in adults with (N = 17) and without (N = 25) autism spectrum disorder (ASD) using a multi-round Trust Game (MTG). On each round, participants, who played as investors, were required to maximize their benefits by updating their prior expectations (the partner’s positive or negative reputation), based on the partner’s directed reciprocity, and adjusting their own investment decisions accordingly. Results showed that reputational priors strongly oriented the initial decision to trust, operationalized as the amount of investment the investor shares with the counterpart. However, while typically developed participants were mainly affected by the direct reciprocity, and rapidly adopted the optimal Tit-for-Tat strategy, participants with ASD continued to rely on reputational priors throughout the game, even when experience of the counterpart’s actual behavior contradicted their prior-based expectations. In participants with ASD, the effect of the reputational prior never disappeared, and affected judgments of trustworthiness and reciprocity of the partner even after completion of the game. Moreover, the weight of prior reputation positively correlated with the severity of the ASD participant’s social impairments while the reciprocity score negatively correlated with the severity of repetitive and stereotyped behaviors, as measured by the Autism Diagnostic Interview–Revised (ADI-R). In line with Bayesian theoretical accounts, the present findings indicate that individuals with ASD have difficulties encoding incoming social information and using it to revise and flexibly update prior social expectations, and that this deficit might severely hinder social learning and everyday life interactions.

      PubDate: 2017-12-02T21:02:39Z
       
  • Perceptions of randomness in binary sequences: Normative, heuristic, or
           both?
    • Abstract: Publication date: March 2018
      Source:Cognition, Volume 172
      Author(s): Stian Reimers, Chris Donkin, Mike E. Le Pelley
      When people consider a series of random binary events, such as tossing an unbiased coin and recording the sequence of heads (H) and tails (T), they tend to erroneously rate sequences with less internal structure or order (such as HTTHT) as more probable than sequences containing more structure or order (such as HHHHH). This is traditionally explained as a local representativeness effect: Participants assume that the properties of long sequences of random outcomes—such as an equal proportion of heads and tails, and little internal structure—should also apply to short sequences. However, recent theoretical work has noted that the probability of a particular sequence of say, heads and tails of length n, occurring within a larger (>n) sequence of coin flips actually differs by sequence, so P(HHHHH) < P(HTTHT). In this alternative account, people apply rational norms based on limited experience. We test these accounts. Participants in Experiment 1 rated the likelihood of occurrence for all possible strings of 4, 5, and 6 observations in a sequence of coin flips. Judgments were better explained by representativeness in alternation rate, relative proportion of heads and tails, and sequence complexity, than by objective probabilities. Experiments 2 and 3 gave similar results using incentivized binary choice procedures. Overall the evidence suggests that participants are not sensitive to variation in objective probabilities of a sub-sequence occurring; they appear to use heuristics based on several distinct forms of representativeness.

      PubDate: 2017-12-02T21:02:39Z
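
The claim above that P(HHHHH) < P(HTTHT) concerns the probability of a pattern occurring somewhere within a longer run of flips, not the probability of the length-5 string itself. A small brute-force check (my own illustration, not the authors' materials; the run length of 10 is arbitrary) makes the point.

```python
# Enumerate all length-10 sequences of fair-coin flips and compute the fraction
# that contain a given length-5 pattern as a contiguous sub-sequence.
from itertools import product

def p_occurs(pattern, run_length=10):
    sequences = [''.join(s) for s in product('HT', repeat=run_length)]
    return sum(pattern in s for s in sequences) / len(sequences)

# Self-overlapping patterns such as HHHHH "reuse" flips across occurrences,
# so they appear in fewer sequences overall than patterns like HTTHT.
print(p_occurs('HHHHH'))   # smaller
print(p_occurs('HTTHT'))   # larger
```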
       
  • When having two names facilitates lexical selection: Similar results in
           the picture-word task from translation distractors in bilinguals and
           synonym distractors in monolinguals
    • Abstract: Publication date: February 2018
      Source:Cognition, Volume 171
      Author(s): Alexandra S. Dylman, Christopher Barry
      We report five experiments using the picture-word task to examine lexical selection by comparing the effects of translation distractors in bilinguals and synonym distractors in monolinguals. Three groups of bilinguals named objects in their L1 or L2, and English monolinguals named objects using common names (e.g., DOG =“dog”) or, in a novel manipulation, using synonymous alternative names (e.g., DOG =“hound”, GLASSES =“spectacles”). All studies produced strikingly similar results. When bilinguals named in L1, there was a small facilitation effect from translation distractors, but larger facilitation when they named in L2. When monolinguals produced common names, there was no reliable effect from synonym distractors, but facilitation when they produced alternative names. (There were also strong identity facilitation effects in all naming conditions.) We discuss the relevance of these results for the debate concerning the role of competition in lexical selection and propose that for speech production there are direct facilitatory connections between the lexical representations of translations in bilinguals (and between synonyms in monolinguals). The effects of synonyms in monolinguals appear to “simulate” the effects found for translations in bilinguals, which suggest that there are commonalities in monolingual and bilingual lexical selection.

      PubDate: 2017-12-02T21:02:39Z
       
  • Verbal labels facilitate tactile perception
    • Abstract: Publication date: February 2018
      Source:Cognition, Volume 171
      Author(s): Tally McCormick Miller, Timo Torsten Schmidt, Felix Blankenburg, Friedemann Pulvermüller
      One of the key statements of linguistic relativity is that language has a causal effect on perception. Although much previous research has addressed such putative language perception causality, no firm proof is available thus far which demonstrates that verbal labels help or otherwise influence perceptual processes. Here, we tested the hypothesis of language perception causality by using novel, minimally-different tactile-patterned stimuli applied to the finger, which initially could not be discriminated by our participants. By combining novel verbal pseudoword- and novel tactile-patterned stimuli in an implicit learning experiment, we show a language-induced facilitation in tactile-patterned stimulus discrimination. After one week of intensive yet implicit learning of tactile stimuli in the presence of irrelevant consistent verbal labels, participants demonstrated significant discrimination improvement. In contrast, the same participants showed no improvement in discriminating tactile-patterned stimuli that had been learnt in the context of variable linguistic stimuli. These results show that specific mental links between verbal labels and perceptual information brought about by their correlated presentation enable one to better discriminate said sensory information (and build percepts).

      PubDate: 2017-12-02T21:02:39Z
       
  • Moral imagination: Facilitating prosocial decision-making through scene
           imagery and theory of mind
    • Abstract: Publication date: February 2018
      Source:Cognition, Volume 171
      Author(s): Brendan Gaesser, Kerri Keeler, Liane Young
      How we imagine and subjectively experience the future can inform how we make decisions in the present. Here, we examined a prosocial effect of imagining future episodes in motivating moral decisions about helping others in need, as well as the underlying cognitive mechanisms. Across three experiments we found that people are more willing to help others in specific situations after imagining helping them in those situations. Manipulating the spatial representation of imagined future episodes in particular was effective at increasing intentions to help others, suggesting that scene imagery plays an important role in the prosocial effect of episodic simulation. Path modeling analyses revealed that episodic simulation interacts with theory of mind in facilitating prosocial responses but can also operate independently. Moreover, we found that our manipulations of the imagined helping episode increased actual prosocial behavior, which also correlated with changes in reported willingness to help. Based on these findings, we propose a new model that begins to capture the multifaceted mechanisms by which episodic simulation contributes to prosocial decision-making, highlighting boundaries and promising future directions to explore. Implications for research in moral cognition, imagination, and patients with impairments in episodic simulation are discussed.

      PubDate: 2017-12-02T21:02:39Z
       
  • Iconicity affects children’s comprehension of complex sentences: The
           role of semantics, clause order, input and individual differences
    • Abstract: Publication date: February 2018
      Source:Cognition, Volume 171
      Author(s): Laura E. de Ruiter, Anna L. Theakston, Silke Brandt, Elena V.M. Lieven
      Complex sentences involving adverbial clauses appear in children’s speech at about three years of age yet children have difficulty comprehending these sentences well into the school years. To date, the reasons for these difficulties are unclear, largely because previous studies have tended to focus on only sub-types of adverbial clauses, or have tested only limited theoretical models. In this paper, we provide the most comprehensive experimental study to date. We tested four-year-olds, five-year-olds and adults on four different adverbial clauses (before, after, because, if) to evaluate four different theoretical models (semantic, syntactic, frequency-based and capacity-constrained). 71 children and 10 adults (as controls) completed a forced-choice, picture-selection comprehension test, providing accuracy and response time data. Children also completed a battery of tests to assess their linguistic and general cognitive abilities. We found that children’s comprehension was strongly influenced by semantic factors – the iconicity of the event-to-language mappings – and that their response times were influenced by the type of relation expressed by the connective (temporal vs. causal). Neither input frequency (frequency-based account), nor clause order (syntax account) or working memory (capacity-constrained account) provided a good fit to the data. Our findings thus contribute to the development of more sophisticated models of sentence processing. We conclude that such models must also take into account how children’s emerging linguistic understanding interacts with developments in other cognitive domains such as their ability to construct mental models and reason flexibly about them.

      PubDate: 2017-12-02T21:02:39Z
       
  • Automatic imitation of pro- and antisocial gestures: Is implicit social
           behavior censored?
    • Abstract: Publication date: January 2018
      Source:Cognition, Volume 170
      Author(s): Emiel Cracco, Oliver Genschow, Ina Radkova, Marcel Brass
      According to social reward theories, automatic imitation can be understood as a means to obtain positive social consequences. In line with this view, it has been shown that automatic imitation is modulated by contextual variables that constrain the positive outcomes of imitation. However, this work has largely neglected that many gestures have an inherent pro- or antisocial meaning. As a result of their meaning, antisocial gestures are considered taboo and should not be used in public. In three experiments, we show that automatic imitation of symbolic gestures is modulated by the social intent of these gestures. Experiment 1 (N=37) revealed reduced automatic imitation of antisocial compared with prosocial gestures. Experiment 2 (N=118) and Experiment 3 (N=118) used a social priming procedure to show that this effect was stronger in a prosocial context than in an antisocial context. These findings were supported in a within-study meta-analysis using both frequentist and Bayesian statistics. Together, our results indicate that automatic imitation is regulated by internalized social norms that act as a stop signal when inappropriate actions are triggered.

      PubDate: 2017-12-02T21:02:39Z
       
  • Pre-linguistic segmentation of speech into syllable-like units
    • Abstract: Publication date: February 2018
      Source:Cognition, Volume 171
      Author(s): Okko Räsänen, Gabriel Doyle, Michael C. Frank
      Syllables are often considered to be central to infant and adult speech perception. Many theories and behavioral studies on early language acquisition are also based on syllable-level representations of spoken language. There is little clarity, however, on what sort of pre-linguistic “syllable” would actually be accessible to an infant with no phonological or lexical knowledge. Anchored by the notion that syllables are organized around particularly sonorous (audible) speech sounds, the present study investigates the feasibility of speech segmentation into syllable-like chunks without any a priori linguistic knowledge. We first operationalize sonority as a measurable property of the acoustic input, and then use sonority variation across time, or speech rhythm, as the basis for segmentation. The entire process from acoustic input to chunks of syllable-like acoustic segments is implemented as a computational model inspired by the oscillatory entrainment of the brain to speech rhythm. We analyze the output of the segmentation process in three different languages, showing that the sonority fluctuation in speech is highly informative of syllable and word boundaries in all three cases without any language-specific tuning of the model. These findings support the widely held assumption that syllable-like structure is accessible to infants even when they are only beginning to learn the properties of their native language.

      PubDate: 2017-11-21T14:20:54Z
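
As a very rough sketch of the idea of segmenting speech at dips in sonority, the snippet below treats a smoothed amplitude envelope as a crude stand-in for sonority and places candidate boundaries at its local minima. The actual model described above (oscillatory entrainment, language-independent sonority estimation) is considerably more sophisticated, and the signal here is synthetic.

```python
# Hedged illustration: candidate syllable-like boundaries at minima of a
# smoothed amplitude envelope (a crude proxy for sonority).
import numpy as np
from scipy.signal import hilbert, find_peaks

def candidate_boundaries(signal, sr, smooth_ms=50):
    envelope = np.abs(hilbert(signal))                 # amplitude envelope
    win = max(1, int(sr * smooth_ms / 1000))
    smoothed = np.convolve(envelope, np.ones(win) / win, mode='same')
    troughs, _ = find_peaks(-smoothed)                 # local minima
    return troughs / sr                                # boundary times (s)

# Toy input: amplitude-modulated noise, giving a few "syllable-like" bursts.
sr = 16000
t = np.arange(0, 0.6, 1 / sr)
signal = (np.sin(2 * np.pi * 3 * t) ** 2) * np.random.default_rng(0).normal(size=t.size)
print(candidate_boundaries(signal, sr)[:5])
```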
       
  • Social attention directs working memory maintenance
    • Abstract: Publication date: February 2018
      Source:Cognition, Volume 171
      Author(s): Qi-Yang Nie, Xiaowei Ding, Jianyong Chen, Markus Conci
      Visual working memory (vWM) performance is enhanced when a memorized object is cued after encoding. This so-called retro-cue effect is typically observed with a predictive (80% valid), retrospective cue. The current study examined whether a nonpredictive (50% valid) retro-cue can similarly enhance internal memory representations in cases where the cue conveys social signals. To this end, gaze cues were presented during the retention interval of a change-detection task, which are capable of engendering a mutual attentional focus of two individuals towards one location. In line with our prediction, Experiment 1 demonstrated that a polygon presented at the gazed-at location was remembered better than those at non-gazed and gazed-away locations. Experiments 2 and 3 showed that low-level motion cues did not elicit attentional orienting in a comparable manner to the gaze cue, and these differences in cuing were found to be reliable and independent of memory load. Furthermore, the gaze retro-cue effect disappeared when the face was inverted (Experiment 4). In sum, these results clearly show that sharing the focus of another individual establishes a point of reference from which visual information is restored with priority, suggesting that a gaze retro-cue leads to social attention, thus modulating vWM maintenance in a reflexive, automatic manner.

      PubDate: 2017-11-13T20:58:20Z
       
  • Infants' sensitivity to vowel harmony and its role in segmenting speech
    • Abstract: Publication date: February 2018
      Source:Cognition, Volume 171
      Author(s): Toben H. Mintz, Rachel L. Walker, Ashlee Welday, Celeste Kidd
      A critical part of infants’ ability to acquire any language involves segmenting continuous speech input into discrete word forms. Certain properties of words could provide infants with reliable cues to word boundaries. Here we investigate the potential utility of vowel harmony (VH), a phonological property whereby vowels within a word systematically exhibit similarity (“harmony”) for some aspect of the way they are pronounced. We present evidence that infants with no experience of VH in their native language nevertheless actively use these patterns to generate hypotheses about where words begin and end in the speech stream. In two sets of experiments, we exposed infants learning English, a language without VH, to a continuous speech stream in which the only systematic patterns available to be used as cues to word boundaries came from syllable sequences that showed VH or those that showed vowel disharmony (dissimilarity). After hearing less than one minute of the streams, infants showed evidence of sensitivity to VH cues. These results suggest that infants have an experience-independent sensitivity to VH, and are predisposed to segment speech according to harmony patterns. We also found that when the VH patterns were more subtle (Experiment 2), infants required more exposure to the speech stream before they segmented based on VH, consistent with previous work on infants’ preferences relating to processing load. Our findings evidence a previously unknown mechanism by which infants could discover the words of their language, and they shed light on the perceptual mechanisms that might be responsible for the emergence of vowel harmony as an organizing principle for the sound structure of words in many languages.

      PubDate: 2017-11-13T20:58:20Z
       
  • On-line confidence monitoring during decision making
    • Abstract: Publication date: February 2018
      Source:Cognition, Volume 171
      Author(s): Dror Dotan, Florent Meyniel, Stanislas Dehaene
      Humans can readily assess their degree of confidence in their decisions. Two models of confidence computation have been proposed: post hoc computation using post-decision variables and heuristics, versus online computation using continuous assessment of evidence throughout the decision-making process. Here, we arbitrate between these theories by continuously monitoring finger movements during a manual sequential decision-making task. Analysis of finger kinematics indicated that subjects kept separate online records of evidence and confidence: finger deviation continuously reflected the ongoing accumulation of evidence, whereas finger speed continuously reflected the momentary degree of confidence. Furthermore, end-of-trial finger speed predicted the post-decisional subjective confidence rating. These data indicate that confidence is computed on-line, throughout the decision process. Speed-confidence correlations were previously interpreted as a post-decision heuristic, whereby slow decisions decrease subjective confidence, but our results suggest an adaptive mechanism that involves the opposite causality: by slowing down when unconfident, participants gain time to improve their decisions.

      PubDate: 2017-11-13T20:58:20Z
       
  • What is in a name?
    • Abstract: Publication date: February 2018
      Source:Cognition, Volume 171
      Author(s): Jincai Li, Longgen Liu, Elizabeth Chalmers, Jesse Snedeker
      Past work has shown systematic differences between Easterners' and Westerners' intuitions about the reference of proper names. Understanding when these differences emerge in development will help us understand their origins. In the present study, we investigate the referential intuitions of English- and Chinese-speaking children and adults in the U.S. and China. Using a truth-value judgment task modeled on Kripke's classic Gödel case, we find that the cross-cultural differences are already in place at age seven. Thus, these differences cannot be attributed to later education or enculturation. Instead, they must stem from differences that are present in early childhood. We consider alternate theories of reference that are compatible with these findings and discuss the possibility that the cross-cultural differences reflect differences in perspective-taking strategies.

      PubDate: 2017-11-13T20:58:20Z
       
  • Decreased reward value of biological motion among individuals with
           autistic traits
    • Abstract: Publication date: February 2018
      Source:Cognition, Volume 171
      Author(s): Elin H. Williams, Emily S. Cross
      The Social Motivation Theory posits that a reduced sensitivity to the value of social stimuli, specifically faces, can account for social impairments in Autism Spectrum Disorders (ASD). Research has demonstrated that typically developing (TD) individuals preferentially orient towards another type of salient social stimulus, namely biological motion. Individuals with ASD, however, do not show this preference. While the reward value of faces to both TD and ASD individuals has been well-established, the extent to which individuals from these populations also find human motion to be rewarding remains poorly understood. The present study investigated the value assigned to biological motion by TD participants in an effort task, and further examined whether these values differed among individuals with more autistic traits. The results suggest that TD participants value natural human motion more than rigid, machine-like motion or non-human control motion, but this preference is attenuated among individuals reporting more autistic traits. This study provides the first evidence to suggest that individuals with more autistic traits find a broader conceptualisation of social stimuli less rewarding compared to individuals with fewer autistic traits. By quantifying the social reward value of human motion, the present findings contribute an important piece to our understanding of social motivation in individuals with and without social impairments.

      PubDate: 2017-11-07T02:45:03Z
       
  • From altered synaptic plasticity to atypical learning: A computational
           model of Down syndrome
    • Abstract: Publication date: February 2018
      Source:Cognition, Volume 171
      Author(s): Ángel Eugenio Tovar, Gert Westermann, Alvaro Torres
      Learning and memory rely on the adaptation of synaptic connections. Research on the neurophysiology of Down syndrome has characterized an atypical pattern of synaptic plasticity with limited long-term potentiation (LTP) and increased long-term depression (LTD). Here we present a neurocomputational model that instantiates this LTP/LTD imbalance to explore its impact on tasks of associative learning. In Study 1, we ran a series of computational simulations to analyze the learning of simple and overlapping stimulus associations in a model of Down syndrome compared with a model of typical development. Learning in the Down syndrome model was slower and more susceptible to interference effects. We found that interference effects could be overcome with dedicated stimulation schedules. In Study 2, we ran a second set of simulations and an empirical study with participants with Down syndrome and typically developing children to test the predictions of our model. The model adequately predicted the performance of the human participants in a serial reaction time task, an implicit learning task that relies on associative learning mechanisms. Critically, typical and atypical behavior was explained by the interactions between neural plasticity constraints and the stimulation schedule. Our model provides a mechanistic account of learning impairments based on these interactions, and a causal link between atypical synaptic plasticity and associative learning.

      PubDate: 2017-11-07T02:45:03Z
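
A minimal sketch of the kind of LTP/LTD imbalance described above: a delta-rule (Hebbian-style) association whose weight increases are limited relative to its decreases. This is my own toy illustration under that single assumption, not the authors' neurocomputational model, and the learning rates and pairing probability are arbitrary.

```python
# Toy association learning with separate learning rates for potentiation (LTP)
# and depression (LTD). The imbalanced setting limits LTP and boosts LTD.
import numpy as np

def learned_association(lr_ltp, lr_ltd, n_trials=100, p_pairing=0.8, seed=1):
    rng = np.random.default_rng(seed)
    w = 0.0                                      # strength of the A->B association
    for _ in range(n_trials):
        outcome = 1.0 if rng.random() < p_pairing else 0.0   # B follows A on 80% of trials
        error = outcome - w
        w += (lr_ltp if error > 0 else lr_ltd) * error       # LTP vs LTD update
    return w

print("balanced plasticity :", round(learned_association(lr_ltp=0.10, lr_ltd=0.10), 2))
print("LTP/LTD imbalance   :", round(learned_association(lr_ltp=0.02, lr_ltd=0.20), 2))
```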
       
  • Compound risk judgment in tasks with both idiosyncratic and systematic
           risk: The “Robust Beauty” of additive probability integration
    • Abstract: Publication date: February 2018
      Source:Cognition, Volume 171
      Author(s): Joakim Sundh, Peter Juslin
      In this study, we explore how people integrate risks of assets in a simulated financial market into a judgment of the conjunctive risk that all assets decrease in value, both when assets are independent and when there is a systematic risk present affecting all assets. Simulations indicate that while mental calculation according to naïve application of probability theory is best when the assets are independent, additive or exemplar-based algorithms perform better when systematic risk is high. Considering that people tend to intuitively approach compound probability tasks using additive heuristics, we expected the participants to find it easiest to master tasks with high systematic risk – the most complex tasks from the standpoint of probability theory – while they should shift to probability theory or exemplar memory with independence between the assets. The results from 3 experiments confirm that participants shift between strategies depending on the task, starting off with the default of additive integration. In contrast to results in similar multiple cue judgment tasks, there is little evidence for use of exemplar memory. The additive heuristics also appear to be surprisingly context-sensitive, with limited generalization across formally very similar tasks.

      PubDate: 2017-11-07T02:45:03Z
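
The contrast drawn above between normative multiplication and additive integration can be illustrated with a small simulation (entirely my own, with made-up numbers): when a systematic factor correlates the assets, the true conjunctive probability moves away from the product of the marginals and toward the marginals themselves, so an additive/averaging rule errs less.

```python
# Simulate three assets whose "fall" events share a common systematic factor,
# and compare the true conjunctive probability with two judgment rules.
import numpy as np

rng = np.random.default_rng(0)
n_samples, p_marginal = 100_000, 0.4

def p_all_fall(systematic_share):
    common = rng.normal(size=n_samples)               # shared (systematic) factor
    falls = []
    for _ in range(3):
        idio = rng.normal(size=n_samples)             # idiosyncratic factor
        x = np.sqrt(systematic_share) * common + np.sqrt(1 - systematic_share) * idio
        falls.append(x < np.quantile(x, p_marginal))  # each asset falls with p = 0.4
    return np.logical_and.reduce(falls).mean()

for share in (0.0, 0.9):
    print(f"systematic share {share}: true {p_all_fall(share):.3f}, "
          f"multiplicative {p_marginal ** 3:.3f}, additive/average {p_marginal:.3f}")
```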
       
  • The early social significance of shared ritual actions
    • Abstract: Publication date: February 2018
      Source:Cognition, Volume 171
      Author(s): Zoe Liberman, Katherine D. Kinzler, Amanda L. Woodward
      Many rituals are socially stipulated such that engaging in a group’s rituals can fundamentally signal membership in that group. Here, we asked whether infants infer information about people’s social affiliation based on whether those people perform the same ritualistic action versus different actions. We presented 16-month-old infants with two people who used the same object to achieve the same goal: turning on a light. In a first study, the actions that the actors used to turn on the light had key properties of ritual: they were not causally necessary to reach the overall goal, and there were no features of the situation that required doing the particular actions. We varied whether the two actors performed the same action or performed different actions to turn on the light. Infants expected people who used the same ritualistic action to be more likely to affiliate than people who used different actions. A second study indicated that these results were not due to perceptual similarity: when the differences in the actors’ actions were not marked by properties of ritual, but were instead due to situational constraints, infants expected the actors to affiliate. Thus, infants understand the social significance of people engaging in common, potentially ritualistic actions, and expect these actions to provide information about third-party social relationships.

      PubDate: 2017-11-07T02:45:03Z
       
  • Expected but omitted stimuli affect crossmodal interaction
    • Abstract: Publication date: February 2018
      Source:Cognition, Volume 171
      Author(s): Marcello Costantini, Daniele Migliorati, Brunella Donno, Miroslav Sirota, Francesca Ferri
      One of the most important abilities of our brain is to integrate input from different sensory modalities to create a coherent representation of the environment. Does expectation affect such multisensory integration? In this paper, we tackled this issue by taking advantage of the crossmodal congruency effect (CCE). Participants made elevation judgments to visual targets while ignoring tactile distractors. We manipulated the expectation of the tactile distractor by pairing the tactile stimulus to the index finger with a high-frequency tone and the tactile stimulus to the thumb with a low-frequency tone in 80% of the trials. In the remaining trials we delivered the tone and the visual target, but the tactile distractor was omitted (Study 1). Results fully replicated the basic crossmodal congruency effect. Strikingly, the CCE was observed, though to a lesser degree, also when the tactile distractor was not presented but merely expected. The contingencies between tones and tactile distractors were reversed in a follow-up study (Study 2), and the effect was further tested in two conceptual replications using different combinations of stimuli (Studies 5 and 6). Two control studies ruled out alternative explanations of the observed effect that would not involve a role for tactile distractors (Studies 3 and 4). Two additional control studies unequivocally proved the dependency of the CCE on the spatial and temporal expectation of the distractors (Studies 7 and 8). An internal small-scale meta-analysis showed that the crossmodal congruency effect with predicted distractors is a robust medium-sized effect. Our findings reveal that multisensory integration, one of the most basic and ubiquitous mechanisms to encode external events, benefits from expectation of sensory input.

      PubDate: 2017-11-07T02:45:03Z
       
  • Full interpretation of minimal images
    • Abstract: Publication date: February 2018
      Source:Cognition, Volume 171
      Author(s): Guy Ben-Yosef, Liav Assif, Shimon Ullman
      The goal in this work is to model the process of ‘full interpretation’ of object images, which is the ability to identify and localize all semantic features and parts that are recognized by human observers. The task is approached by dividing the interpretation of the complete object into the interpretation of multiple reduced but interpretable local regions. In such reduced regions, interpretation is simpler, since the number of semantic components is small, and the variability of possible configurations is low. We model the interpretation process by identifying primitive components and relations that play a useful role in local interpretation by humans. To identify useful components and relations used in the interpretation process, we consider the interpretation of ‘minimal configurations’: these are reduced local regions, which are minimal in the sense that further reduction renders them unrecognizable and uninterpretable. We show that such minimal interpretable images have useful properties, which we use to identify informative features and relations used for full interpretation. We describe our interpretation model, and show results of detailed interpretations of minimal configurations, produced automatically by the model. Finally, we discuss possible extensions and implications of full interpretation to difficult visual tasks, such as recognizing social interactions, which are beyond the scope of current models of visual recognition.

      PubDate: 2017-11-07T02:45:03Z
       
  • Task complexity moderates the influence of descriptions in decisions from
           experience
    • Abstract: Publication date: January 2018
      Source:Cognition, Volume 170
      Author(s): Leonardo Weiss-Cohen, Emmanouil Konstantinidis, Maarten Speekenbrink, Nigel Harvey
      Decision-makers often have access to a combination of descriptive and experiential information, but limited research so far has explored decisions made using both. Three experiments explore the relationship between task complexity and the influence of descriptions. We show that in simple experience-based decision-making tasks, providing congruent descriptions has little influence on task performance in comparison to experience alone without descriptions, since learning via experience is relatively easy. In more complex tasks, which are slower and more demanding to learn experientially, descriptions have a stronger influence and help participants identify their preferred choices. However, when the task gets too complex to be concisely described, the influence of descriptions is reduced, hence showing a non-monotonic pattern of influence of descriptions according to task complexity. We also propose a cognitive model that incorporates descriptive information into the traditional reinforcement learning framework, with the impact of descriptions moderated by task complexity. This model fits the observed behavior better than previous models and replicates the observed non-monotonic relationship between impact of descriptions and task complexity. This research has implications for the development of effective warning labels that rely on simple descriptive information to trigger safer behavior in complex environments.

      PubDate: 2017-11-07T02:45:03Z
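
Purely as a schematic of how descriptive information might be folded into an experiential (reinforcement-learning) estimate with a complexity-dependent weight, consider the toy rule below. It is not the authors' cognitive model; the delta rule, the blending scheme, and the non-monotonic weighting function are all illustrative assumptions.

```python
# Blend an experience-based value estimate with a described value; the weight
# on the description peaks at intermediate task complexity (illustrative only).
def blended_value(outcomes, described_value, complexity, alpha=0.2, w0=0.8):
    q = 0.0
    for r in outcomes:                            # experiential delta-rule learning
        q += alpha * (r - q)
    w = w0 * complexity / (1 + complexity ** 2)   # low for simple and very complex tasks
    return (1 - w) * q + w * described_value

outcomes = [1, 0, 1, 1]                           # toy sequence of experienced payoffs
for complexity in (0, 1, 5):
    print(complexity, round(blended_value(outcomes, described_value=0.75, complexity=complexity), 3))
```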
       
  • Attention to distinguishing features in object recognition: An
           interactive-iterative framework
    • Abstract: Publication date: January 2018
      Source:Cognition, Volume 170
      Author(s): Orit Baruch, Ruth Kimchi, Morris Goldsmith
      This article advances a framework that casts object recognition as a process of discrimination between alternative object identities, in which top-down and bottom-up processes interact—iteratively when necessary—with attention to distinguishing features playing a critical role. In two experiments, observers discriminated between different types of artificial fish. In parallel, a secondary, variable-SOA visual-probe detection task was used to examine the dynamics of visual attention. In Experiment 1, the fish varied in three distinguishing features: one indicating the general category (saltwater, freshwater), and one of the two other features indicating the specific type of fish within each category. As predicted, in the course of recognizing each fish, attention was allocated iteratively to the distinguishing features in an optimal manner: first to the general category feature, and then, based on its value, to the second feature that identified the specific fish. In Experiment 2, two types of fish could be discriminated on the basis of either of two distinguishing features, one more visually discriminable than the other. On some of the trials, one of the two alternative distinguishing features was occluded. As predicted, in the course of recognizing each fish, attention was directed initially to the more discriminable distinguishing feature, but when this feature was occluded, it was then redirected to the less discriminable feature. The implications of these findings, and the interactive-iterative framework they support, are discussed with regard to several fundamental issues having a long history in the literatures on object recognition, object categorization, and visual perception in general.

      PubDate: 2017-11-07T02:45:03Z
       
  • When and how does labour lead to love? The ontogeny and mechanisms of
           the IKEA effect
    • Abstract: Publication date: January 2018
      Source:Cognition, Volume 170
      Author(s): Lauren E. Marsh, Patricia Kanngiesser, Bruce Hood
      We elevate our constructions to a special status in our minds. This ‘IKEA’ effect leads us to believe that our creations are more valuable than items that are identical, but constructed by another. This series of studies utilises a developmental perspective to explore why this bias exists. Study 1 elucidates the ontogeny of the IKEA effect, demonstrating an emerging bias at age 5, corresponding with key developmental milestones in self-concept formation. Study 2 assesses the role of effort, revealing that the IKEA effect is not moderated by the amount of effort invested in the task in 5-to-6-year olds. Finally, Study 3 examines whether feelings of ownership moderate the IKEA effect, finding that ownership alone cannot explain why children value their creations more. Altogether, results from this study series are incompatible with existing theories of the IKEA bias. Instead, we propose a new framework to examine biases in decision making. Perhaps the IKEA effect reflects a link between our creations and our self-concept, emerging at age 5, leading us to value them more positively than others’ creations.

      PubDate: 2017-11-07T02:45:03Z
       
  • Extremely long-term memory and familiarity after 12 years
    • Abstract: Publication date: January 2018
      Source:Cognition, Volume 170
      Author(s): Christelle Larzabal, Eve Tramoni, Sophie Muratot, Simon J. Thorpe, Emmanuel J. Barbeau
      In 2006 Mitchell demonstrated that implicit memory was robust to decay. He showed that the ability to identify fragments of pictures seen 17 years before was significantly higher than for new stimuli. Is this true only for implicit memory? In this study, we tested whether explicit memory was still possible for drawings (n = 144) that had been presented once or three times, two seconds each time on average, approximately 12 years earlier. Surprisingly, our data reveal that our participants were able to recognize pictures above chance level. Preserved memory was mainly observed in the youngest subjects, for stimuli seen three times. Despite the fact that confidence judgments were low, reports suggest that recognition could be based on a strong sense of familiarity. These data extend Mitchell’s findings and show that familiarity can also be robust to decay.

      PubDate: 2017-11-07T02:45:03Z
       
  • Early visual deprivation prompts the use of body-centered frames of
           reference for auditory localization
    • Abstract: Publication date: January 2018
      Source:Cognition, Volume 170
      Author(s): Tiziana Vercillo, Alessia Tonelli, Monica Gori
      The effects of early visual deprivation on auditory spatial processing are controversial. Results from recent psychophysical studies show that people who were born blind have a spatial impairment in localizing sound sources within specific auditory settings, while previous psychophysical studies revealed enhanced auditory spatial abilities in early blind compared to sighted individuals. An explanation of why an auditory spatial deficit is sometimes observed within blind populations, and of its task-dependency, remains to be clarified. We investigated auditory spatial perception in early blind adults and demonstrated that the deficit derives from blind individuals’ reduced ability to remap sound locations using an external frame of reference. We found that performance in the blind population was severely impaired when they were required to localize brief auditory stimuli with respect to external acoustic landmarks (external reference frame) or when they had to reproduce the spatial distance between two sounds. However, they performed similarly to sighted controls when they had to localize sounds with respect to their own hand (body-centered reference frame), or to judge the distances of sounds from their finger. These results suggest that early visual deprivation and the lack of visual contextual cues during the critical period induce a preference for body-centered over external spatial auditory representations.

      PubDate: 2017-11-07T02:45:03Z
       
  • Young children can overcome their weak inhibitory control, if they
           conceptualize a task in the right way
    • Abstract: Publication date: January 2018
      Source:Cognition, Volume 170
      Author(s): Andrew Simpson, Daniel J. Carroll
      This article investigates the process of task conceptualization, through which participants turn the instructions on a task into a mental representation of that task. We provide the first empirical evidence that this process of conceptualization can directly influence the inhibitory demands of a task. Data from Experiments 1 and 2 (both n = 24) suggested that robust difficulties on inhibitory tasks can be overcome if preschoolers conceptualize the tasks in a way that avoids the need for inhibitory control. Experiment 3 (n = 60) demonstrated that even when all other aspects of a task are identical, simply changing how the rules are introduced can influence whether such a conceptualization is adopted – thereby influencing children’s performance on the task. An appreciation of the process of conceptualization is essential for our understanding of how inhibitory control and knowledge interact in early development.

      PubDate: 2017-11-07T02:45:03Z
       
  • What do monkeys know about others’ knowledge?
    • Abstract: Publication date: January 2018
      Source:Cognition, Volume 170
      Author(s): Lindsey A. Drayton, Laurie R. Santos
      Recently, comparative psychologists have suggested that primates represent others’ knowledge states. Evidence for this claim comes from studies demonstrating that primates expect others to maintain representations of objects when those objects are not currently visible. However, little work has explored whether nonhuman primates expect others to share the more sophisticated kinds of object knowledge that they themselves possess. We therefore investigated whether primates attribute to others knowledge that is acquired through the mental transformation of a static object representation. Specifically, we tested whether rhesus macaques (Macaca mulatta) expected a human demonstrator to solve a difficult rotational displacement task. In Experiment 1, monkeys watched a demonstrator hide a piece of fruit in one of two boxes. The monkey and the demonstrator then watched the boxes rotate 180°. We found that monkeys looked longer when the demonstrator reached into the box that did not contain the fruit, indicating that they expected her to be able to track the fruit to its current location. In Experiment 2, we ruled out the possibility that monkeys simply expected the demonstrator to search for the food in its true location. When the demonstrator did not witness the rotation event, monkeys looked equally long at the two reaching outcomes. These results are consistent with the interpretation that rhesus macaques expect others to dynamically update their representations of unseen objects.

      PubDate: 2017-10-18T16:54:18Z
       
  • Auditory-induced bouncing is a perceptual (rather than a cognitive)
           phenomenon: Evidence from illusory crescents
    • Abstract: Publication date: January 2018
      Source:Cognition, Volume 170
      Author(s): Hauke S. Meyerhoff, Brian J. Scholl
      A central task for vision is to identify objects as the same persisting individuals over time and motion. The need for such processing is made especially clear in ambiguous situations such as the bouncing/streaming display: two discs move toward each other, superimpose, and then continue along their trajectories. Did the discs stream past each other, or bounce off each other? When people are likely to perceive streaming, playing a brief tone at the moment of overlap can readily cause them to see bouncing instead. Recent research has attributed this effect to decisional (rather than perceptual) processes by showing that auditory tones alter response biases but not the underlying sensitivity for detecting objective bounces. Here we explore the nature of this phenomenon using ‘illusory causal crescents’: when people perceive bouncing (or causal ‘launching’), they also perceive the second disc to begin moving before being fully overlapped with the first disc (i.e. leaving an uncovered crescent). Here we demonstrate that merely playing a sound coincident with the moment of overlap can also reliably induce the perception of such illusory crescents. Moreover, this effect is due to the coincidence of the tone, per se, since the effect disappears when the tone is embedded in a larger regular tone sequence. Because observers never have to explicitly categorize their percept (e.g. as streaming)—and because the effect involves a subtle quantitative influence on another clearly visual property (i.e. the crescent’s width)—we conclude that this audiovisual influence on the perception of identity over time reflects perceptual processing rather than higher-level decisions.

      PubDate: 2017-10-12T08:00:59Z
       
  • Cognitive costs of decision-making strategies: A resource demand
           decomposition analysis with a cognitive architecture
    • Abstract: Publication date: January 2018
      Source:Cognition, Volume 170
      Author(s): Hanna B. Fechner, Lael J. Schooler, Thorsten Pachur
      Several theories of cognition distinguish between strategies that differ in the mental effort that their use requires. But how can the effort—or cognitive costs—associated with a strategy be conceptualized and measured? We propose an approach that decomposes the effort a strategy requires into the time costs associated with the demands for using specific cognitive resources. We refer to this approach as resource demand decomposition analysis (RDDA) and instantiate it in the cognitive architecture Adaptive Control of Thought–Rational (ACT-R). ACT-R provides the means to develop computer simulations of the strategies. These simulations take into account how strategies interact with quantitative implementations of cognitive resources and incorporate the possibility of parallel processing. Using this approach, we quantified, decomposed, and compared the time costs of two prominent strategies for decision making, take-the-best and tallying. Because take-the-best often ignores information and foregoes information integration, it has been considered simpler than strategies like tallying. However, in both ACT-R simulations and an empirical study we found that under increasing cognitive demands the response times (i.e., time costs) of take-the-best sometimes exceeded those of tallying. The RDDA suggested that this pattern is driven by greater requirements for working memory updates, memory retrievals, and the coordination of mental actions when using take-the-best compared to tallying. The results illustrate that assessing the relative simplicity of strategies requires consideration of the overall cognitive system in which the strategies are embedded.

      PubDate: 2017-10-12T08:00:59Z
       
  • Does a lack of auditory experience affect sequential learning?
    • Abstract: Publication date: January 2018
      Source:Cognition, Volume 170
      Author(s): Janne von Koss Torkildsen, Joanne Arciuli, Christiane Lingås Haukedal, Ona Bø Wie
      To understand the interaction between sensory experiences and cognition, it is critical to investigate the possibility that deprivation in one sensory modality might affect cognition in other modalities. Here we are concerned with the hypothesis that early experience with sound is vital to the development of domain-general sequential processing skills. In line with this hypothesis, a seminal empirical study found that prelingually deaf children had impaired sequence learning in the visual modality. In order to assess the limits of this hypothesis, the current study employed a different visual sequence learning task in an investigation of prelingually deaf children with cochlear implants and normal hearing children. Results showed statistically significant learning in each of the two groups, and no significant difference in the amount of learning between groups. Moreover, there was no association between the age at which the child received their implant (and thus access to electric hearing) and their performance on the sequential learning task. We discuss key differences between our study and the previous study, and argue that the field must reconsider claims about domain-general cognitive impairment resulting from early auditory deprivation.

      PubDate: 2017-10-12T08:00:59Z
       
  • The role of multisensory interplay in enabling temporal expectations
    • Abstract: Publication date: January 2018
      Source:Cognition, Volume 170
      Author(s): Felix Ball, Lara E. Michels, Carsten Thiele, Toemme Noesselt
Temporal regularities can guide our attention to focus on a particular moment in time and to be especially vigilant just then. Previous research provided evidence for the influence of temporal expectation on perceptual processing in unisensory auditory, visual, and tactile contexts. However, in real life we are often exposed to a complex and continuous stream of multisensory events. Here we tested – in a series of experiments – whether temporal expectations can enhance perception in multisensory contexts and whether this enhancement differs from enhancements in unisensory contexts. Our discrimination paradigm contained near-threshold targets (subject-specific 75% discrimination accuracy) embedded in a sequence of distractors. The likelihood of target occurrence (early or late) was manipulated block-wise. Furthermore, we tested whether spatial and modality-specific target uncertainty (i.e. predictable vs. unpredictable target position or modality) would affect temporal expectation (TE) measured with perceptual sensitivity (d′) and response times (RT). In all our experiments, hidden temporal regularities improved performance for expected multisensory targets. Moreover, multisensory performance was unaffected by spatial and modality-specific uncertainty, whereas unisensory TE effects on d′ but not RT were modulated by spatial and modality-specific uncertainty. Additionally, the size of the temporal expectation effect, i.e. the increase in perceptual sensitivity and decrease of RT, scaled linearly with the likelihood of expected targets. Finally, temporal expectation effects were unaffected by varying target position within the stream. Together, our results strongly suggest that participants quickly adapt to novel temporal contexts, that they benefit from multisensory (relative to unisensory) stimulation and that multisensory benefits are maximal if the stimulus-driven uncertainty is highest. We propose that enhanced informational content (i.e. multisensory stimulation) enables the robust extraction of temporal regularities which in turn boost (uni-)sensory representations.

      PubDate: 2017-10-12T08:00:59Z
       
  • Information density converges in dialogue: Towards an
           information-theoretic model
    • Abstract: Publication date: January 2018
      Source:Cognition, Volume 170
      Author(s): Yang Xu, David Reitter
The principle of entropy rate constancy (ERC) states that language users distribute information such that words tend to be equally predictable given previous contexts. We examine the applicability of this principle to spoken dialogue, as previous findings primarily rest on written text. The study takes into account the joint-activity nature of dialogue and the topic shift mechanisms that differ from monologue. It examines how the information contributions from the two dialogue partners interactively evolve as the discourse develops. The increase of local sentence-level information density (predicted by ERC) is shown to apply to dialogue overall. However, when the different roles of interlocutors in introducing new topics are identified, their contribution in information content displays a new converging pattern. We draw explanations for this pattern from multiple perspectives: First, casting dialogue as an information exchange system would mean that the pattern is the result of two interlocutors maintaining their own context rather than sharing one. Second, we present some empirical evidence that a model of Interactive Alignment may include information density to explain the effect. Third, we argue that building common ground is a process analogous to information convergence. Thus, we put forward an information-theoretic view of dialogue, under which some existing theories of human dialogue may eventually be unified.
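As a rough illustration of how sentence-level information density can be operationalized, the sketch below computes mean per-word surprisal under a toy bigram model with add-one smoothing. The miniature "dialogue" and the model are invented stand-ins; the study's corpora and language models are substantially more sophisticated.

```python
import math
from collections import Counter

# Hypothetical toy corpus standing in for dialogue transcripts.
sentences = [
    "i think we should meet tomorrow".split(),
    "tomorrow works for me".split(),
    "great let us meet at noon then".split(),
]

# Train a bigram model with add-one (Laplace) smoothing.
unigram_counts = Counter(w for s in sentences for w in s)
bigram_counts = Counter((s[i], s[i + 1]) for s in sentences for i in range(len(s) - 1))
vocab_size = len(unigram_counts)

def word_surprisal(prev, word):
    """Surprisal (in bits) of `word` given the previous word under the bigram model."""
    numer = bigram_counts[(prev, word)] + 1
    denom = unigram_counts[prev] + vocab_size
    return -math.log2(numer / denom)

def sentence_information_density(sentence):
    """Mean per-word surprisal; the first word is skipped for simplicity
    (no sentence-boundary token is used)."""
    surprisals = [word_surprisal(p, w) for p, w in zip(sentence[:-1], sentence[1:])]
    return sum(surprisals) / len(surprisals)

for s in sentences:
    print(" ".join(s), "->", round(sentence_information_density(s), 2), "bits/word")
```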

      PubDate: 2017-10-12T08:00:59Z
       
  • Can monolinguals be like bilinguals? Evidence from dialect switching
    • Abstract: Publication date: January 2018
      Source:Cognition, Volume 170
      Author(s): Neil W. Kirk, Vera Kempe, Kenneth C. Scott-Brown, Andrea Philipp, Mathieu Declerck
Bilinguals rely on cognitive control mechanisms like selective activation and inhibition of lexical entries to prevent intrusions from the non-target language. We present cross-linguistic evidence that these mechanisms also operate in bidialectals. Thirty-two native German speakers who sometimes use the Öcher Platt dialect and thirty-two native English speakers who sometimes use the Dundonian Scots dialect completed a dialect-switching task. Naming latencies were longer for switch than for non-switch trials, and shorter for cognate compared to non-cognate nouns. Switch costs were symmetrical, regardless of whether participants actively used the dialect or not. In contrast, sixteen monodialectal English speakers, who performed the dialect-switching task after being trained on the Dundonian words, showed asymmetrical switch costs with longer latencies when switching back into Standard English. These results are reminiscent of findings for balanced vs. unbalanced bilinguals, and suggest that monolingual dialect speakers can recruit control mechanisms in similar ways as bilinguals.

      PubDate: 2017-10-12T08:00:59Z
       
  • Where do spontaneous first impressions of faces come from?
    • Abstract: Publication date: January 2018
      Source:Cognition, Volume 170
      Author(s): Harriet Over, Richard Cook
Humans spontaneously attribute a wide range of traits to strangers based solely on their facial features. These first impressions are known to exert striking effects on our choices and behaviours. In this paper, we provide a theoretical account of the origins of these spontaneous trait inferences. We describe a novel framework, ‘Trait Inference Mapping’ (TIM), in which trait inferences are products of mappings between locations in ‘face space’ and ‘trait space’. These mappings are acquired during ontogeny and allow excitation of face representations to propagate automatically to associated trait representations. This conceptualization provides a framework within which the relative contributions of ontogenetic experience and genetic inheritance can be considered. Contrary to many existing ideas about the origins of trait inferences, we propose only a limited role for innate mechanisms and natural selection. Instead, our model explains inter-observer consistency by appealing to cultural learning and physiological responses that facilitate or ‘canalise’ particular face-trait mappings. Our TIM framework has both theoretical and substantive implications, and can be extended to trait inferences from non-facial cues to provide a unified account of first impressions.
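As an illustration of the kind of learned face-to-trait mapping the TIM framework describes, the sketch below fits a simple linear map from a toy "face space" to a toy "trait space" and then propagates a novel face through it. All dimensions, data, and trait labels are hypothetical, and the framework itself is not committed to a linear mapping.

```python
import numpy as np

# Hypothetical toy data: rows are faces, columns are face-space dimensions
# (e.g. outputs of some face encoder); trait space has two dimensions
# (e.g. trustworthiness, dominance). All values are invented.
rng = np.random.default_rng(0)
face_vectors = rng.normal(size=(50, 8))           # 50 faces in an 8-D face space
true_map = rng.normal(size=(8, 2))                # "cultural" face-trait regularities
trait_ratings = face_vectors @ true_map + rng.normal(scale=0.1, size=(50, 2))

# Learn the face-to-trait mapping from observed pairings (ontogenetic experience).
learned_map, *_ = np.linalg.lstsq(face_vectors, trait_ratings, rcond=None)

# A novel face automatically "propagates" to trait space via the learned mapping.
novel_face = rng.normal(size=(1, 8))
inferred_traits = novel_face @ learned_map
print("Inferred (trustworthiness, dominance):", inferred_traits.round(2))
```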

      PubDate: 2017-10-12T08:00:59Z
       
  • Is utilitarian sacrifice becoming more morally permissible?
    • Abstract: Publication date: January 2018
      Source:Cognition, Volume 170
      Author(s): Ivar R. Hannikainen, Edouard Machery, Fiery A. Cushman
      A central tenet of contemporary moral psychology is that people typically reject active forms of utilitarian sacrifice. Yet, evidence for secularization and declining empathic concern in recent decades suggests the possibility of systematic change in this attitude. In the present study, we employ hypothetical dilemmas to investigate whether judgments of utilitarian sacrifice are becoming more permissive over time. In a cross-sectional design, age negatively predicted utilitarian moral judgment (Study 1). To examine whether this pattern reflected processes of maturation, we asked a panel to re-evaluate several moral dilemmas after an eight-year interval but observed no overall change (Study 2). In contrast, a more recent age-matched sample revealed greater endorsement of utilitarian sacrifice in a time-lag design (Study 3). Taken together, these results suggest that today’s younger cohorts increasingly endorse a utilitarian resolution of sacrificial moral dilemmas.

      PubDate: 2017-10-05T15:44:43Z
       
  • The priming of basic combinatory responses in MEG
    • Abstract: Publication date: January 2018
      Source:Cognition, Volume 170
      Author(s): Esti Blanco-Elorrieta, Victor S. Ferreira, Paul Del Prato, Liina Pylkkänen
      Priming has been a powerful tool for the study of human memory and especially the memory representations relevant for language. However, although it is well established that lexical access can be primed, we do not know exactly what types of computations can be primed above the word level. This work took a neurobiological approach and assessed the ways in which the complex representation of a minimal combinatory phrase, such as red boat, can be primed, as evidenced by the spatiotemporal profiles of magnetoencephalography (MEG) signals. Specifically, we built upon recent progress on the neural signatures of phrasal composition and tested whether the brain activities implicated for the basic combination of two words could be primed. In two experiments, MEG was recorded during a picture naming task where the prime trials were designed to replicate previously reported combinatory effects and the target trials to test whether those combinatory effects could be primed. The manipulation of the primes was successful in eliciting larger activity for adjective-noun combinations than single nouns in left anterior temporal and ventromedial prefrontal cortices, replicating prior MEG studies on parallel contrasts. Priming of similarly timed activity was observed during target trials in anterior temporal cortex, but only when the prime and target shared an adjective. No priming in temporal cortex was observed for single word repetition and two control tasks showed that the priming effect was not elicited if the prime pictures were simply viewed but not named. In sum, this work provides evidence that very basic combinatory operations can be primed, with the necessity for some lexical overlap between prime and target suggesting combinatory conceptual, as opposed to syntactic processing. Both our combinatory and priming effects were early, onsetting between 100 and 150ms after picture onset and thus are likely to reflect the very earliest planning stages of a combinatory message. Thus our findings suggest that at the earliest stages of combinatory planning in production, a combinatory memory representation is formed that affects the planning of a relevantly similar combination on a subsequent trial.

      PubDate: 2017-09-28T03:23:20Z
       
  • The role of reference in cross-situational word learning
    • Abstract: Publication date: January 2018
      Source:Cognition, Volume 170
      Author(s): Felix Hao Wang, Toben H. Mintz
      Word learning involves massive ambiguity, since in a particular encounter with a novel word, there are an unlimited number of potential referents. One proposal for how learners surmount the problem of ambiguity is that learners use cross-situational statistics to constrain the ambiguity: When a word and its referent co-occur across multiple situations, learners will associate the word with the correct referent. Yu and Smith (2007) propose that these co-occurrence statistics are sufficient for word-to-referent mapping. Alternative accounts hold that co-occurrence statistics alone are insufficient to support learning, and that learners are further guided by knowledge that words are referential (e.g., Waxman & Gelman, 2009). However, no behavioral word learning studies we are aware of explicitly manipulate subjects’ prior assumptions about the role of the words in the experiments in order to test the influence of these assumptions. In this study, we directly test whether, when faced with referential ambiguity, co-occurrence statistics are sufficient for word-to-referent mappings in adult word-learners. Across a series of cross-situational learning experiments, we varied the degree to which there was support for the notion that the words were referential. At the same time, the statistical information about the words’ meanings was held constant. When we overrode support for the notion that words were referential, subjects failed to learn the word-to-referent mappings, but otherwise they succeeded. Thus, cross-situational statistics were useful only when learners had the goal of discovering mappings between words and referents. We discuss the implications of these results for theories of word learning in children’s language acquisition.
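The co-occurrence mechanism whose sufficiency is tested here can be made concrete in a few lines of code: the sketch below tallies word-referent co-occurrences across invented learning situations and maps each word to its most frequent companion. Words, objects, and situations are hypothetical and chosen only to show that the statistics alone single out the correct mappings.

```python
from collections import defaultdict

# Hypothetical learning situations: the words heard paired with the referents
# in view. No single situation reveals which word goes with which object.
situations = [
    ({"word_a", "word_b"}, {"obj_1", "obj_2"}),
    ({"word_a", "word_c"}, {"obj_1", "obj_3"}),
    ({"word_b", "word_c"}, {"obj_2", "obj_3"}),
]

# Accumulate word-referent co-occurrence counts across situations.
cooccurrence = defaultdict(int)
all_words, all_referents = set(), set()
for words, referents in situations:
    all_words |= words
    all_referents |= referents
    for w in words:
        for r in referents:
            cooccurrence[(w, r)] += 1

# Map each word to the referent it co-occurred with most often.
for w in sorted(all_words):
    best = max(all_referents, key=lambda r: cooccurrence[(w, r)])
    print(w, "->", best)   # word_a -> obj_1, word_b -> obj_2, word_c -> obj_3
```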

      PubDate: 2017-09-28T03:23:20Z
       
  • Reasoning strategies modulate gender differences in emotion processing
    • Abstract: Publication date: January 2018
      Source:Cognition, Volume 170
      Author(s): Henry Markovits, Bastien Trémolière, Isabelle Blanchette
The dual strategy model of reasoning has proposed that people’s reasoning can be understood as a combination of two different ways of processing information related to problem premises: a counterexample strategy that examines information for explicit potential counterexamples and a statistical strategy that uses associative access to generate a likelihood estimate of putative conclusions. Previous studies have examined this model in the context of basic conditional reasoning tasks. However, the information processing distinction that underlies the dual strategy model can be seen as a basic description of differences in reasoning (similar to that described by many general dual process models of reasoning). In two studies, we examine how these differences in reasoning strategy may relate to processing very different information; specifically, we focus on previously observed gender differences in processing negative emotions. Study 1 examined the intensity of emotional reactions to a film clip inducing primarily negative emotions. Study 2 examined the speed at which participants determine the emotional valence of sequences of negative images. In both studies, no gender differences were observed among participants using a counterexample strategy. Among participants using a statistical strategy, females produced significantly stronger emotional reactions than males (in Study 1) and were faster to recognize the valence of negative images than were males (in Study 2). Results show that the processing distinction underlying the dual strategy model of reasoning generalizes to the processing of emotions.

      PubDate: 2017-09-28T03:23:20Z
       
  • Competing perspectives on frames of reference in language and thought
    • Abstract: Publication date: January 2018
      Source:Cognition, Volume 170
      Author(s): Peggy Li, Linda Abarbanell
A study found that Dutch-speaking children who prefer an egocentric (left/right) reference frame when describing spatial relationships, and Haiǁom-speaking children who use a geocentric (north/south) frame, had difficulty recreating small-scale spatial arrays using their language-incongruent system (Haun, Rapold, Janzen, & Levinson, 2011). In five experiments, we reconciled these results with another study showing that English (egocentric) and Tseltal Mayan (geocentric) speakers can flexibly use both systems (Abarbanell, 2010; Li, Abarbanell, Gleitman, & Papafragou, 2011). In replicating and extending Haun et al. (Experiment 1), English- but not Tseltal-speaking children could use their language-incongruent system when the instructions used their non-preferred frame of reference. Perseveration due to task order may explain the discrepancies between present English- and previous Dutch-speaking children, while not understanding task instructions using left/right language may explain why present Tseltal- and previous Haiǁom-speaking children had difficulty with their language-incongruent systems. In support, Tseltal-speaking children could use an egocentric system when the instructions were conveyed without left/right language (Experiments 2–4), and many did not know left/right language (Experiment 5). These findings help reconcile seemingly conflicting sets of results and suggest that task constraints, rather than language, determine which system is easier to use (Experiment 2 vs. 3).

      PubDate: 2017-09-21T21:41:41Z
       
  • Human infants’ understanding of social imitation: Inferences of
           affiliation from third party observations
    • Abstract: Publication date: January 2018
      Source:Cognition, Volume 170
      Author(s): Lindsey J. Powell, Elizabeth S. Spelke
Imitation is ubiquitous in positive social interactions. For adult and child observers, it also supports inferences about the participants in such interactions and their social relationships, but the origins of these inferences are obscure. Do infants attach social significance to this form of interaction? Here we test 4- to 5.5-month-old infants’ interpretation of imitation, asking if the imitative interactions they observe support inferences of social affiliation, across 10 experimental conditions that varied the modality of the imitation (movement vs. sound), the roles of specific characters (imitators vs. targets), the number of characters in the displays (3 vs. 5), and the number of parties initiating affiliative test events (1 vs. 2). These experiments, together with one experiment conducted with 12-month-old infants, yielded three main findings. First, infants expect that characters who engaged in imitation will approach and affiliate with the characters whom they imitated. Second, infants show no evidence of expecting that characters who were targets of imitation will approach and affiliate with their imitators. Third, analyzing imitative interactions is difficult for young infants, whose expectations vary in strength depending on the number of characters to be tracked and the number of affiliative actors to be compared. These findings have implications for our understanding of social imitation, and they provide methods for advancing understanding of other aspects of early social cognitive development.

      PubDate: 2017-09-21T21:41:41Z
       
  • What makes a face photo a ‘good likeness’?
    • Abstract: Publication date: January 2018
      Source:Cognition, Volume 170
      Author(s): Kay L. Ritchie, Robin S.S. Kramer, A. Mike Burton
Photographs of people are commonly said to be ‘good likenesses’ or ‘poor likenesses’, and this is a concept that we readily understand. Despite this, there has been no systematic investigation of what makes an image a good likeness, or of which cognitive processes are involved in making such a judgement. In three experiments, we investigate likeness judgements for different types of images: natural images of film stars (Experiment 1), images of film stars from specific films (Experiment 2), and iconic images and face averages (Experiment 3). In all three experiments, participants rated images for likeness and completed speeded name verification tasks. We consistently show that participants are faster to identify images which they have previously rated as a good likeness compared to a poor likeness. We also consistently show that the more familiar we are with someone, the higher the likeness ratings we give to all images of them. A key finding is that our perception of likeness is idiosyncratic (Experiments 1 and 2), and can be tied to our specific experience of each individual (Experiment 2). We argue that likeness judgements require a comparison between the stimulus and our own representation of the person, and that this representation differs according to our prior experience with that individual. This has theoretical implications for our understanding of how we represent familiar people, and practical implications for how we go about selecting images for identity purposes such as photo-ID.

      PubDate: 2017-09-17T00:39:18Z
       
 
 