Cognition
  [SJR: 2.77]   [H-I: 142]
   Hybrid journal (may contain Open Access articles)
   ISSN (Print): 0010-0277
   Published by Elsevier
  • Dynamic competition account of men’s perceptions of women’s
           sexual interest
    • Abstract: Publication date: May 2018
      Source:Cognition, Volume 174
      Author(s): Jodi R. Smith, Teresa A. Treat, Thomas A. Farmer, Bob McMurray
      This work applies a dynamic competition framework of decision making to the domain of sexual perception, which is linked theoretically and empirically to college men’s risk for exhibiting sexual coercion and aggression toward female acquaintances. Within a mouse-tracking paradigm, 152 undergraduate men viewed full-body photographs of women who varied in affect (sexual interest or rejection), clothing style (provocative or conservative), and attractiveness, and decided whether each woman currently felt sexually interested or rejecting. Participants’ mouse movements were recorded to capture competition dynamics during online processing (throughout the decisional process), and as an index of the final categorical decision (endpoint of the decisional process). Participants completed a measure of Rape-Supportive Attitudes (RSA), a well-established correlate of male-initiated sexual aggression toward female acquaintances. Mixed-effects analyses revealed greater curvature toward the incorrect response on conceptually incongruent trials (e.g., rejecting and dressed provocatively) than on congruent trials (e.g., rejecting and dressed conservatively). This suggests that the two decision alternatives are simultaneously active and compete continuously over time, consistent with a dynamic competition account. Congruence effects also emerged at the decisional endpoint; accuracy was typically lower when stimulus features were incongruent, rather than congruent. RSA potentiated online congruence effects (intermediate states of behavior) but not offline congruence effects (endpoint states of behavior). In a hierarchical regression analysis, online processing indices accounted for unique variability in RSA above and beyond offline accuracy rates. The process-based account of men’s sexual-interest judgments ultimately may point to novel targets for prevention strategies designed to reduce acquaintance-initiated sexual aggression on college campuses.

      PubDate: 2018-02-14T20:15:21Z
       
  • Inferring causes during speech perception
    • Abstract: Publication date: May 2018
      Source:Cognition, Volume 174
      Author(s): Linda Liu, T. Florian Jaeger
      One of the central challenges in speech perception is the lack of invariance: talkers differ in how they map words onto the speech signal. Previous work has shown that one mechanism by which listeners overcome this variability is adaptation. However, talkers differ in how they pronounce words for a number of reasons, ranging from more permanent, characteristic factors such as having a foreign accent, to more temporary, incidental factors, such as speaking with a pen in the mouth. One challenge for listeners is that the true cause underlying atypical pronunciations is never directly known, and instead must be inferred from (often causally ambiguous) evidence. In three experiments, we investigate whether these inferences underlie speech perception, and how the speech perception system deals with uncertainty about competing causes for atypical pronunciations. We find that adaptation to atypical pronunciations is affected by whether the atypical pronunciations are seen as characteristic or incidental. Furthermore, we find that listeners are able to maintain information about previous causally ambiguous pronunciations that they experience, and use this previously experienced evidence to drive their adaptation after additional evidence has disambiguated the cause. Our findings revise previous proposals that causally ambiguous evidence is ignored during speech adaptation.

      PubDate: 2018-02-14T20:15:21Z
       
  • Effects of metric hierarchy and rhyme predictability on word duration in
           The Cat in the Hat
    • Abstract: Publication date: May 2018
      Source:Cognition, Volume 174
      Author(s): Mara Breen
      Word durations convey many types of linguistic information, including intrinsic lexical features like length and frequency and contextual features like syntactic and semantic structure. The current study was designed to investigate whether hierarchical metric structure and rhyme predictability account for durational variation over and above other features in productions of a rhyming, metrically-regular children's book: The Cat in the Hat (Dr. Seuss, 1957). One-syllable word durations and inter-onset intervals were modeled as functions of segment number, lexical frequency, word class, syntactic structure, repetition, and font emphasis. Consistent with prior work, factors predicting longer word durations and inter-onset intervals included more phonemes, lower frequency, first mention, alignment with a syntactic boundary, and capitalization. A model parameter corresponding to metric grid height improved model fit of word durations and inter-onset intervals. Specifically, speakers realized five levels of metric hierarchy with inter-onset intervals such that interval duration increased linearly with increased height in the metric hierarchy. Conversely, speakers realized only three levels of metric hierarchy with word duration, demonstrating that they shortened the highly predictable rhyme resolutions. These results further understanding of the factors that affect spoken word duration, and demonstrate the myriad cues that children receive about linguistic structure from nursery rhymes.

      PubDate: 2018-02-14T20:15:21Z
       
  • A thought in the park: The influence of naturalness and low-level visual
           features on expressed thoughts
    • Abstract: Publication date: May 2018
      Source:Cognition, Volume 174
      Author(s): Kathryn E. Schertz, Sonya Sachdeva, Omid Kardan, Hiroki P. Kotabe, Kathleen L. Wolf, Marc G. Berman
      Prior research has shown that the physical characteristics of one’s environment have wide-ranging effects on affect and cognition. Other research has demonstrated that one’s thoughts have impacts on mood and behavior, and in this three-part research program we investigated how physical features of the environment can alter thought content. In one study, we analyzed thousands of journal entries written by park visitors to examine how low-level and semantic visual features of the parks correlate with different thought topics. In a second study, we validated our ecological results by conducting an online study where participants were asked to write journal entries while imagining they were visiting a park, to ensure that results from Study 1 were not due to selection bias of park visitors. In the third study, we experimentally manipulated exposure to specific visual features to determine if they induced thinking about the same thought topics under more generalized conditions. Results from Study 3 demonstrated a potential causal role for perceived naturalness and high non-straight edges on thinking about “Nature”, with a significant positive interaction. Results also showed a potential causal effect of naturalness and non-straight edges on thinking about topics related to “Spiritual & Life Journey”, with perceived naturalness having a negative relationship and non-straight edges having a positive relationship. We also observed a significant positive interaction between non-straight edge density and naturalness in relation to “Spiritual & Life Journey”. These results have implications for the design of the built environment to influence human reflection and well-being.

      PubDate: 2018-02-14T20:15:21Z
       
  • Sympathetic arousal, but not disturbed executive functioning, mediates the
           impairment of cognitive flexibility under stress
    • Abstract: Publication date: May 2018
      Source:Cognition, Volume 174
      Author(s): Martin Marko, Igor Riečanský
      Cognitive flexibility emerges from an interplay of multiple cognitive systems, of which lexical-semantic and executive are thought to be the most important. Yet this has not been addressed by previous studies demonstrating that such forms of flexible thought deteriorate under stress. Motivated by these shortcomings, the present study evaluated several candidate mechanisms implicated in mediating the impairing effects of stress on flexible thinking. Fifty-seven healthy adults were randomly assigned to a psychosocial stress or control condition while assessed for performance on cognitive flexibility, working memory capacity, semantic fluency, and self-reported cognitive interference. Stress response was indicated by changes in skin conductance, heart rate, and state anxiety. Our analyses showed that acute stress impaired cognitive flexibility via a concomitant increase in sympathetic arousal, while this mediator was positively associated with semantic fluency. Stress also decreased working memory capacity, which was partially mediated by elevated cognitive interference, but neither of these two measures was associated with cognitive flexibility or sympathetic arousal. Following these findings, we conclude that acute stress impairs cognitive flexibility via sympathetic arousal that modulates lexical-semantic and associative processes. In particular, the results indicate that stress levels of sympathetic activation may restrict the accessibility and integration of remote associates and bias response competition towards prepotent and dominant ideas. Importantly, our results indicate that stress-induced impairments of cognitive flexibility and executive functions are mediated by distinct neurocognitive mechanisms.

      PubDate: 2018-02-14T20:15:21Z
       
  • The determinants of response time in a repeated constant-sum game: A
           robust Bayesian hierarchical dual-process model
    • Abstract: Publication date: March 2018
      Source:Cognition, Volume 172
      Author(s): Leonidas Spiliopoulos
      The investigation of response time (RT) and behavior has a long tradition in cognitive psychology, particularly for non-strategic decision-making. Recently, experimental economists have also studied response time in strategic interactions, but with an emphasis on either one-shot games or repeated social dilemmas. I investigate the determinants of response time in a repeated (pure-conflict) game, admitting a unique mixed strategy Nash equilibrium, with fixed partner matching. Response times depend upon the interaction of two decision models embedded in a dual-process framework (Achtziger and Alós-Ferrer, 2014; Alós-Ferrer, 2016). The first decision model is the commonly used win-stay/lose-shift heuristic and the second is the pattern-detecting reinforcement learning model in Spiliopoulos (2013b). The former is less complex and can be executed more quickly than the latter. As predicted, conflict between these two models (i.e., each one recommending a different course of action) led to longer response times than cases without conflict. The dual-process framework makes other qualitative response time predictions arising from the interaction between the existence (or not) of conflict and which one of the two decision models the chosen action is consistent with—these were broadly verified by the data. Other determinants of RT were hypothesized on the basis of existing theory and tested empirically. Response times were strongly dependent on the actions chosen by both players in the previous rounds and the resulting outcomes. Specifically, response time was shortest after a win in the previous round where the maximum possible payoff was obtained; response time after losses was significantly longer. Strongly auto-correlated behavior (regardless of its sign) was also associated with longer response times.
I conclude that, similar to other tasks, there is a strong coupling in repeated games between behavior and RT, which can be exploited to further our understanding of decision making.
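The win-stay/lose-shift heuristic named in the abstract is simple enough to sketch. Below is a minimal Python illustration for exposition only (the function and action labels are hypothetical, not the paper's implementation):

```python
import random

def win_stay_lose_shift(prev_action, won, actions=("left", "right")):
    """Win-stay/lose-shift: repeat the previous action after a win;
    after a loss, shift to a randomly chosen different action."""
    if won:
        return prev_action
    # Shift: pick uniformly among the remaining actions.
    return random.choice([a for a in actions if a != prev_action])
```

In a two-action game the post-loss shift is deterministic (there is only one alternative); with more actions, shifting uniformly among the alternatives is one common variant of the heuristic.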

      PubDate: 2018-02-14T20:15:21Z
       
  • Mental files theory of mind: When do children consider agents acquainted
           with different object identities?
    • Abstract: Publication date: February 2018
      Source:Cognition, Volume 171
      Author(s): Michael Huemer, Josef Perner, Brian Leahy
      Mental files theory explains why children pass many perspective-taking tasks like the false belief test around age 4 (Perner & Leahy, 2016). It also explains why older children struggle to understand that beliefs about an object depend on how one is acquainted with it (intensionality or aspectuality). If Heinz looks at an object that is both a die and an eraser, but cannot tell by looking that it is an eraser, he will not reach for it if he needs an eraser. Four- to 6-year-olds find this difficult (Apperly & Robinson, 1998). We tested 129 35- to 86-month-olds with a modified version of Apperly and Robinson’s task. Each child faced four tasks resulting from two experimental factors, timing and mode of information. Timing: Children saw Heinz learn the die’s location either before or after they learned that the die is an eraser. Mode of information: Heinz learns where the die is either perceptually or verbally. When Heinz’ learning is verbal, he never perceives the die at all. We found that Apperly and Robinson’s problem occurs only in the seen-after condition, where Heinz sees the die after children had learnt that it was also an eraser. It vanishes when Heinz learns where the die is before children learn that it is also an eraser. The problem also vanishes when Heinz learns where the die is purely verbally (e.g., “The die is in the red box”) and never sees it. This evidence lets us refine existing mental files theory, and eliminate several alternatives from the literature.

      PubDate: 2018-02-14T20:15:21Z
       
  • Contour interpolation: A case study in Modularity of Mind
    • Abstract: Publication date: May 2018
      Source:Cognition, Volume 174
      Author(s): Brian P. Keane
      In his monograph Modularity of Mind (1983), philosopher Jerry Fodor argued that mental architecture can be partly decomposed into computational organs termed modules, which are characterized as having nine co-occurring features such as automaticity, domain specificity, and informational encapsulation. Do modules exist? Debates thus far have been framed very generally with few, if any, detailed case studies. The topic is important because it has direct implications for current debates in cognitive science and because it potentially provides a viable framework from which to further understand and make hypotheses about the mind’s structure and function. Here, the case is made for the modularity of contour interpolation, which is a perceptual process that represents non-visible edges on the basis of how surrounding visible edges are spatiotemporally configured. There is substantial evidence that interpolation is domain specific, mandatory, fast, and developmentally well-sequenced; that it produces representationally impoverished outputs; that it relies upon a relatively fixed neural architecture that can be selectively impaired; that it is encapsulated from belief and expectation; and that its inner workings cannot be fathomed through conscious introspection. Upon differentiating contour interpolation from a higher-order contour representational ability (“contour abstraction”) and upon accommodating seemingly inconsistent experimental results, it is argued that interpolation is modular to the extent that the initiating conditions for interpolation are strong. As interpolated contours become more salient, the modularity features emerge. The empirical data, taken as a whole, show that at least certain parts of the mind are modularly organized.

      PubDate: 2018-02-04T22:01:20Z
       
  • Investing in commitment: Persistence in a joint action is enhanced by the
           perception of a partner’s effort
    • Abstract: Publication date: May 2018
      Source:Cognition, Volume 174
      Author(s): Marcell Székely, John Michael
      Can the perception that one’s partner is investing effort generate a sense of commitment to a joint action? To test this, we developed a 2-player version of the classic snake game which became increasingly boring over the course of each round. This enabled us to operationalize commitment in terms of how long participants persisted before pressing a ‘finish’ button to conclude each round. Our results from three experiments reveal that participants persisted longer when they perceived what they believed to be cues of their partner’s effortful contribution (Experiment 1). Crucially, this effect was not observed when they knew their partner to be an algorithm (Experiment 2), nor when it was their own effort that had been invested (Experiment 3). These results support the hypothesis that the perception of a partner’s effort elicits a sense of commitment, leading to increased persistence in the face of a temptation to disengage.

      PubDate: 2018-02-04T22:01:20Z
       
  • Integrating multisensory information across external and motor-based
           frames of reference
    • Abstract: Publication date: April 2018
      Source:Cognition, Volume 173
      Author(s): Yuqi Liu, Jared Medina
      In the mirror box illusion, participants often report that their hand is located where they see it, even when the position of the reflected hand differs from the actual position of their hand. This illusory shift (an index of multisensory integration) is stronger when the two hands engage in synchronous bimanual movement, in which visual and proprioceptive information is congruent in both motor-based (i.e., coordinates centered on the effector) and external (i.e., coordinates centered on elements external to the effector) frames of reference. To investigate the separate contributions of external and motor-based congruence in multisensory integration, we instructed participants to make synchronous or asynchronous tapping movements in either the same (i.e., both hands palms up) or opposing (palm up, palm down) postures. When in opposing postures, externally congruent movements were incongruent in a motor-based frame of reference, and vice versa. Across three experiments, participants reported more illusory shift and stronger ownership of the viewed hand in the mirror for external versus motor-based congruence trials regardless of motor outflow or motor effort, indicating that information from an externally-based representation is more strongly weighted in multisensory integration. These findings provide evidence that not only information across sensory modalities, but also information regarding crossmodal congruence represented in different spatial frames of reference, is differentially weighted in multisensory integration. We discuss how our findings can be incorporated into current computational models of multisensory integration.

      PubDate: 2018-02-04T22:01:20Z
       
  • Do enhanced states exist? Boosting cognitive capacities through an
           action video-game
    • Abstract: Publication date: April 2018
      Source:Cognition, Volume 173
      Author(s): Maria Kozhevnikov, Yahui Li, Sabrina Wong, Takashi Obana, Ido Amihai
      This research reports the existence of enhanced cognitive states in which dramatic temporary improvements in temporal and spatial aspects of attention were exhibited by participants who played (but not by those who merely observed) action video-games meeting certain criteria. Specifically, Experiments 1 and 2 demonstrate that the attentional improvements were exhibited only by participants whose skills matched the difficulty level of the video game. Experiment 2 showed that arousal (as reflected by the reduction in parasympathetic activity and increase in sympathetic activity) is a critical physiological condition for enhanced cognitive states and corresponding attentional enhancements. Experiment 3 showed that the cognitive enhancements were transient, and were no longer observed after 30 min of rest following video-gaming. Moreover, the results suggest that the enhancements were specific to tasks requiring visual-spatial focused attention, but not distribution of spatial attention as has been reported to improve significantly and durably as a result of long-term video-game playing. Overall, the results suggest that the observed enhancements cannot be simply due to the activity of video-gaming per se, but might rather represent an enhanced cognitive state resulting from specific conditions (heightened arousal in combination with active engagement and optimal challenge), resonant with what has been described in previous phenomenological literature as “flow” (Csikszentmihalyi, 1975) or “peak experiences” (Maslow, 1962). The findings provide empirical evidence for the existence of the enhanced cognitive states and suggest possibilities for consciously accessing latent resources of our brain to temporarily boost our cognitive capacities upon demand.

      PubDate: 2018-02-04T22:01:20Z
       
  • Visually-naïve chicks prefer agents that move as if constrained by a
           bilateral body-plan
    • Abstract: Publication date: April 2018
      Source:Cognition, Volume 173
      Author(s): O. Rosa-Salva, M. Hernik, A. Broseghini, G. Vallortigara
      From the first hours of life, the prompt detection of animate agents allows identification of biologically relevant entities. The motion of most animate agents is constrained by their bilaterally-symmetrical body-plan, and consequently tends to be aligned with the main body-axis. Thus parallelism between the main axis of a moving object and its motion trajectory can signal the presence of animate agents. Here we demonstrated that visually-naïve newborn chicks (Gallus gallus domesticus) are attracted to objects displaying such parallelism, and thus show preference for the same type of motion patterns that elicit perception of animacy in humans. This is the first demonstration of a newborn non-human animal’s social preference for a visual cue related to the constraints imposed on behaviour by bilaterian morphology. Chicks also showed preference for rotational movements – a potential manifestation of self-propulsion. Results are discussed in relation to the mechanisms of animacy and agency detection in newborn organisms.

      PubDate: 2018-02-04T22:01:20Z
       
  • Inhibition accumulates over time at multiple processing levels in
           bilingual language control
    • Abstract: Publication date: April 2018
      Source:Cognition, Volume 173
      Author(s): Daniel Kleinman, Tamar H. Gollan
      It is commonly assumed that bilinguals enable production in their nondominant language by inhibiting their dominant language temporarily, fully lifting inhibition to switch back. In a re-analysis of data from 416 Spanish-English bilinguals who repeatedly named a small set of pictures while switching languages in response to cues, we separated trials into different types that revealed three cumulative effects. Bilinguals named each picture (a) faster for every time they had previously named that same picture in the same language, an asymmetric repetition priming effect that was greater in their nondominant language, and (b) more slowly for every time they had previously named that same picture in the other language, an effect that was equivalent across languages and implies symmetric lateral inhibition between translation equivalents. Additionally, (c) bilinguals named pictures in the dominant language more slowly for every time they had previously named unrelated pictures in the nondominant language, exhibiting asymmetric language-wide global inhibition. These mechanisms dynamically alter the balances of activation between languages and between lemmas, providing evidence for an oft-assumed but seldom demonstrated key mechanism of bilingual control (competition between translations), resolving the mystery of why reversed language dominance sometimes emerges (the combined forces of asymmetrical effects emerge over time in mixed-language blocks), and also explaining other longer-lasting effects (block order). Key signatures of bilingual control can depend on seemingly trivial methodological details (e.g., the number of trials in a block) because inhibition is applied cumulatively at both local and global levels, persisting long after each individual act of selection.

      PubDate: 2018-02-04T22:01:20Z
       
  • Driven to distraction: A lack of change gives rise to mind wandering
    • Abstract: Publication date: April 2018
      Source:Cognition, Volume 173
      Author(s): Myrthe Faber, Gabriel A. Radvansky, Sidney K. D'Mello
      How does the dynamic structure of the external world direct attention? We examined the relationship between event structure and attention to test the hypothesis that narrative shifts (both theoretical and perceived) negatively predict attentional lapses. Self-caught instances of mind wandering were collected while 108 participants watched a 32.5 min film called The Red Balloon. We used theoretical codings of situational change and human perceptions of event boundaries to predict mind wandering in 5-s intervals. Our findings suggest a temporal alignment between the structural dynamics of the film and mind wandering reports. Specifically, the number of situational changes and likelihood of perceiving event boundaries in the prior 0–15 s interval negatively predicted mind wandering net of low-level audiovisual features. Thus, mind wandering is less likely to occur when there is more event change, suggesting that narrative shifts keep attention from drifting inwards.

      PubDate: 2018-02-04T22:01:20Z
       
  • Switching between lift and use grasp actions
    • Abstract: Publication date: May 2018
      Source:Cognition, Volume 174
      Author(s): Daniel N. Bub, Michael E.J. Masson, Hannah van Mook
      Switching between competing grasp postures incurs costs on speeded performance. We examined switch costs between lift versus use actions under task conditions that required subjects to identify familiar objects. There were no asymmetrical interference effects, though reliable costs occurred when the same object required a different action on consecutive trials. In addition, lift actions were faster to objects targeted for a prospective use action than objects irrelevant to this intended goal. The benefit of a lift-then-use action sequence was not merely due to the production of two different actions in short order on the same object; use actions to an object marked for the distal goal of a lift action were not faster than use actions applied to another object. We propose that the intention to use an object facilitates the prior action of lifting it because the motor sequence lift-then-use is habitually conscripted to enact the proper function of an object.

      PubDate: 2018-02-02T21:58:54Z
       
  • Children exhibit different performance patterns in explicit and implicit
           theory of mind tasks
    • Abstract: Publication date: April 2018
      Source:Cognition, Volume 173
      Author(s): Nese Oktay-Gür, Alexandra Schulz, Hannes Rakoczy
      Three studies tested scope and limits of children’s implicit and explicit theory of mind. In Studies 1 and 2, three- to six-year-olds (N = 84) were presented with closely matched explicit false belief tasks that differed in whether or not they required an understanding of aspectuality. Results revealed that children performed equally well in the different tasks, and performance was strongly correlated. Study 3 tested two-year-olds (N = 81) in implicit interactive versions of these tasks and found evidence for dis-unity: children performed competently only in those tasks that did not require an understanding of aspectuality. Taken together, the present findings suggest that early implicit and later explicit theory of mind tasks may tap different forms of cognitive capacities.

      PubDate: 2018-01-23T04:54:29Z
       
  • Voulez-vous jouer avec moi? Twelve-month-olds understand that foreign
           languages can communicate
    • Abstract: Publication date: April 2018
      Source:Cognition, Volume 173
      Author(s): Athena Vouloumanos
      Infants understand that speech in their native language allows speakers to communicate. Is this understanding limited to their native language or does it extend to non-native languages with which infants have no experience? Twelve-month-old infants saw an actor, the Communicator, repeatedly select one of two objects. When the Communicator could no longer reach the target but a Recipient could, the Communicator vocalized a nonsense phrase either in English (infants’ native language), Spanish (rhythmically different), or Russian (phonotactically different), or hummed (a non-speech vocalization). Across all three languages, native and non-native, but not humming, infants looked longer when the Recipient gave the Communicator the non-target object. Although, by 12 months, infants do not readily map non-native words to objects or discriminate most non-native speech contrasts, they understand that non-native languages can transfer information to others. Understanding language as a tool for communication extends beyond infants’ native language: By 12 months, infants view language as a universal mechanism for transferring and acquiring new information.

      PubDate: 2018-01-23T04:54:29Z
       
  • Cognitive science in the era of artificial intelligence: A roadmap for
           reverse-engineering the infant language-learner
    • Abstract: Publication date: April 2018
      Source:Cognition, Volume 173
      Author(s): Emmanuel Dupoux
      Spectacular progress in the information processing sciences (machine learning, wearable sensors) promises to revolutionize the study of cognitive development. Here, we analyse the conditions under which ‘reverse engineering’ language development, i.e., building an effective system that mimics infants’ achievements, can contribute to our scientific understanding of early language development. We argue that, on the computational side, it is important to move from toy problems to the full complexity of the learning situation, and to take as input reconstructions of the sensory signals available to infants that are as faithful as possible. On the data side, accessible but privacy-preserving repositories of home data have to be set up. On the psycholinguistic side, specific tests have to be constructed to benchmark humans and machines at different linguistic levels. We discuss the feasibility of this approach and present an overview of current results.

      PubDate: 2018-01-09T21:18:28Z
       
  • Corrigendum to “Implicit moral evaluations: A multinomial modeling
           approach” [Cognition 158 (2017) 224–241]
    • Abstract: Publication date: Available online 4 January 2018
      Source:Cognition
      Author(s): C. Daryl Cameron, B. Keith Payne, Walter Sinnott-Armstrong, Julian A. Scheffer, Michael Inzlicht


      PubDate: 2018-01-09T21:18:28Z
       
  • Toddlers’ comprehension of adult and child talkers: Adult targets versus
           vocal tract similarity
    • Abstract: Publication date: April 2018
      Source:Cognition, Volume 173
      Author(s): Angela Cooper, Natalie Fecher, Elizabeth K. Johnson
      How do children represent words? If lexical representations are based on encoding the indexical characteristics of frequently-heard speakers, this predicts that speakers like a child’s own mother should be best understood. Alternatively, if they are based on the child’s own motor productions, this predicts an own-voice advantage in word recognition. Here, we address this question by presenting 2.5-year-olds with recordings of their own voice, another child’s voice, their own mother’s voice, and another mother’s voice in a child-friendly eye-tracking procedure. No own-voice or own-mother advantage was observed. Rather, children uniformly performed better on adult voices than child voices, even performing better for unfamiliar adult voices than own voices. We conclude that children represent words not in the form of own-voice motor codes or frequently heard speakers, but on the basis of adult speech targets.

      PubDate: 2018-01-02T21:14:34Z
       
  • Sensory predictions during action support perception of imitative
           reactions across suprasecond delays
    • Abstract: Publication date: April 2018
      Source:Cognition, Volume 173
      Author(s): Daniel Yon, Clare Press
      Perception during action is optimized by sensory predictions about the likely consequences of our movements. Influential theories in social cognition propose that we use the same predictions during interaction, supporting perception of similar reactions in our social partners. However, while our own action outcomes typically occur at short, predictable delays after movement execution, the reactions of others occur at longer, variable delays in the order of seconds. To examine whether we use sensorimotor predictions to support perception of imitative reactions, we therefore investigated the temporal profile of sensory prediction during action in two psychophysical experiments. We took advantage of an influence of prediction on apparent intensity, whereby predicted visual stimuli appear brighter (more intense). Participants performed actions (e.g., index finger lift) and rated the brightness of observed outcomes congruent (index finger lift) or incongruent (middle finger lift) with their movements. Observed action outcomes could occur immediately after execution, or at longer delays likely reflective of those in natural social interaction (1800 or 3600 ms). Consistent with the previous literature, Experiment 1 revealed that congruent action outcomes were rated as brighter than incongruent outcomes. Importantly, this facilitatory perceptual effect was found irrespective of whether outcomes occurred immediately or at delay. Experiment 2 replicated this finding and demonstrated that it was not the result of response bias. These findings therefore suggest that visual predictions generated during action are sufficiently general across time to support our perception of imitative reactions in others, likely generating a range of benefits during social interaction.

      PubDate: 2018-01-02T21:14:34Z
       
  • Deconstructing the human algorithms for exploration
    • Abstract: Publication date: April 2018
      Source:Cognition, Volume 173
      Author(s): Samuel J. Gershman
      The dilemma between information gathering (exploration) and reward seeking (exploitation) is a fundamental problem for reinforcement learning agents. How humans resolve this dilemma is still an open question, because experiments have provided equivocal evidence about the underlying algorithms used by humans. We show that two families of algorithms can be distinguished in terms of how uncertainty affects exploration. Algorithms based on uncertainty bonuses predict a change in response bias as a function of uncertainty, whereas algorithms based on sampling predict a change in response slope. Two experiments provide evidence for both bias and slope changes, and computational modeling confirms that a hybrid model is the best quantitative account of the data.
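The bias-versus-slope distinction can be illustrated with a toy two-option choice rule. This is a hedged sketch under Gaussian assumptions, not the paper's actual model; the function names and parameter values are illustrative. An uncertainty bonus adds the *relative* uncertainty to the value difference, shifting the intercept of the choice function (a bias change), whereas Thompson sampling divides the value difference by the *total* uncertainty, changing the slope of the choice function.

```python
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def p_choose_ucb(mu1, mu2, s1, s2, gamma=1.0, lam=1.0):
    # Uncertainty bonus: relative uncertainty (s1 - s2) shifts the
    # intercept of the choice function -> a response-bias change.
    return phi((mu1 - mu2 + gamma * (s1 - s2)) / lam)

def p_choose_thompson(mu1, mu2, s1, s2):
    # Thompson sampling: P(sample1 > sample2) for independent normals.
    # Total uncertainty divides the value difference -> a slope change.
    return phi((mu1 - mu2) / math.sqrt(s1**2 + s2**2))

# With equal values, an uncertainty bonus biases choice toward the
# more uncertain option, while Thompson sampling stays indifferent.
print(p_choose_ucb(0.0, 0.0, 2.0, 1.0))       # > 0.5
print(p_choose_thompson(0.0, 0.0, 2.0, 1.0))  # = 0.5

# With unequal values, raising total uncertainty flattens the
# Thompson choice curve without shifting its midpoint.
print(p_choose_thompson(1.0, 0.0, 1.0, 1.0))  # steeper: higher
print(p_choose_thompson(1.0, 0.0, 3.0, 3.0))  # shallower: lower
```

A hybrid model of the kind the abstract favors would combine both terms, so that uncertainty shifts both the intercept and the slope of the empirical choice function.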

      PubDate: 2018-01-02T21:14:34Z
       
  • Unsupervised learning of complex associations in an animal model
    • Abstract: Publication date: April 2018
      Source:Cognition, Volume 173
      Author(s): Leyre Castro, Edward A. Wasserman, Marisol Lauffer
      Supervised learning results from explicit corrective feedback, whereas unsupervised learning results from statistical co-occurrence. In an initial training phase, we gave pigeons an unsupervised learning task to see if mere pairing could establish associations between multiple pairs of visual images. To assess learning, we administered occasional testing trials in which pigeons were shown an object and had to choose between previously paired and unpaired tokens. Learning was evidenced by preferential choice of the previously unpaired token. In a subsequent supervised training phase, learning was facilitated if the object and token had previously been paired. These results document unsupervised learning in pigeons and resemble statistical learning in infants, suggesting an important parallel between human and animal cognition.

      PubDate: 2018-01-02T21:14:34Z
       
  • Using a foreign language reduces mental imagery
    • Abstract: Publication date: April 2018
      Source:Cognition, Volume 173
      Author(s): Sayuri Hayakawa, Boaz Keysar
      Mental imagery plays a significant role in guiding how we feel, think, and even behave. These mental simulations are often guided by language, making it important to understand what aspects of language contribute to imagery vividness and consequently to the way we think. Here, we focus on the native-ness of language and present evidence that using a foreign language leads to less vivid mental imagery than using a native tongue. In Experiment 1, participants using a foreign language reported less vivid imagery of sensory experiences such as sight and touch than those using their native tongue. Experiment 2 provided an objective behavioral measure, showing that muted imagery reduced accuracy when judging the similarity of shapes of imagined objects. Lastly, Experiment 3 demonstrated that this reduction in mental imagery partly accounted for the previously observed foreign language effects in moral choice. Together, the findings suggest that our mental images change when using a foreign tongue, leading to downstream consequences for how we make decisions.

      PubDate: 2017-12-26T15:22:18Z
       
  • Small numbers are sensed directly, high numbers constructed from size and
           density
    • Abstract: Publication date: April 2018
      Source:Cognition, Volume 173
      Author(s): Eckart Zimmermann
      Two theories compete to explain how we estimate the numerosity of visual object sets. The first suggests that the apparent numerosity is derived from an analysis of more low-level features like size and density of the set. The second theory suggests that numbers are sensed directly. Consistent with the latter claim is the existence of neurons in parietal cortex which are specialized for processing the numerosity of elements in the visual scene. However, recent evidence suggests that only low numbers can be sensed directly whereas the perception of high numbers is supported by the analysis of low-level features. Processing of low and high numbers, being located at different levels of the neural hierarchy, should involve different receptive field sizes. Here, I tested this idea with visual adaptation. I measured the spatial spread of number adaptation for low and high numerosities. A focused adaptation spread for high numerosities suggested the involvement of early neural levels where receptive fields are comparably small, and the broad spread for low numerosities was consistent with processing by number neurons, which have larger receptive fields. These results provide evidence for the claim that different mechanisms exist that generate the perception of visual numerosity. Whereas low numbers are sensed directly as a primary visual attribute, the estimation of high numbers likely depends on the area size over which the objects are spread.

      PubDate: 2017-12-23T17:23:20Z
       
  • Eyes that bind us: Gaze leading induces an implicit sense of agency
    • Abstract: Publication date: March 2018
      Source:Cognition, Volume 172
      Author(s): Lisa J. Stephenson, S. Gareth Edwards, Emma E. Howard, Andrew P. Bayliss
      Humans feel a sense of agency over the effects their motor system causes. This is the case for manual actions such as pushing buttons, kicking footballs, and all acts that affect the physical environment. We ask whether initiating joint attention – causing another person to follow our eye movement – can elicit an implicit sense of agency over this congruent gaze response. Eye movements themselves cannot directly affect the physical environment, but joint attention is an example of how eye movements can indirectly cause social outcomes. Here we show that leading the gaze of an on-screen face induces an underestimation of the temporal gap between action and consequence (Experiments 1 and 2). This underestimation effect, named ‘temporal binding,’ is thought to be a measure of an implicit sense of agency. Experiment 3 asked whether merely making an eye movement in a non-agentic, non-social context might also affect temporal estimation, and no reliable effects were detected, implying that inconsequential oculomotor acts do not reliably affect temporal estimations under these conditions. Together, these findings suggest that an implicit sense of agency is generated when initiating joint attention interactions. This is important for understanding how humans can efficiently detect and understand the social consequences of their actions.

      PubDate: 2017-12-23T17:23:20Z
       
  • Cognitive capacity limitations and Need for Cognition differentially
           predict reward-induced cognitive effort expenditure
    • Abstract: Publication date: March 2018
      Source:Cognition, Volume 172
      Author(s): Dasha A. Sandra, A. Ross Otto
      While psychological, economic, and neuroscientific accounts of behavior broadly maintain that people minimize expenditure of cognitive effort, empirical work reveals how reward incentives can mobilize increased cognitive effort expenditure. Recent theories posit that the decision to expend effort is governed, in part, by a cost-benefit tradeoff whereby the potential benefits of mental effort can offset the perceived costs of effort exertion. Taking an individual differences approach, the present study examined whether one’s executive function capacity, as measured by Stroop interference, predicts the extent to which reward incentives reduce switch costs in a task-switching paradigm, which indexes additional expenditure of cognitive effort. In accordance with the predictions of a cost-benefit account of effort, we found that a low executive function capacity—and, relatedly, a low intrinsic motivation to expend effort (measured by Need for Cognition)—predicted a larger increase in cognitive effort expenditure in response to monetary reward incentives, while individuals with greater executive function capacity—and greater intrinsic motivation to expend effort—were less responsive to reward incentives. These findings suggest that an individual’s cost-benefit tradeoff is constrained by the perceived costs of exerting cognitive effort.

      PubDate: 2017-12-23T17:23:20Z
       
  • Understanding environmental sounds in sentence context
    • Abstract: Publication date: March 2018
      Source:Cognition, Volume 172
      Author(s): Sophia Uddin, Shannon L.M. Heald, Stephen C. Van Hedger, Serena Klos, Howard C. Nusbaum
      There is debate about how individuals use context to successfully predict and recognize words. One view argues that context supports neural predictions that make use of the speech motor system, whereas other views argue for a sensory or conceptual level of prediction. While environmental sounds can convey clear referential meaning, they are not linguistic signals, and are thus neither produced with the vocal tract nor typically encountered in sentence context. We compared the effect of spoken sentence context on recognition and comprehension of spoken words versus nonspeech, environmental sounds. In Experiment 1, sentence context decreased the amount of signal needed for recognition of spoken words and environmental sounds in similar fashion. In Experiment 2, listeners judged sentence meaning in both high and low contextually constraining sentence frames, when the final word was present or replaced with a matching environmental sound. Results showed that sentence constraint affected decision time similarly for speech and nonspeech, such that high constraint sentences (i.e., frame plus completion) were processed faster than low constraint sentences for speech and nonspeech. Linguistic context facilitates the recognition and understanding of nonspeech sounds in much the same way as for spoken words. This argues against a simple form of a speech-motor explanation of predictive coding in spoken language understanding, and suggests support for conceptual-level predictions.

      PubDate: 2017-12-23T17:23:20Z
       
  • Mine is better than yours: Investigating the ownership effect in children
           with autism spectrum disorder and typically developing children
    • Abstract: Publication date: March 2018
      Source:Cognition, Volume 172
      Author(s): Calum Hartley, Sophie Fisher
      Ownership has a unique and privileged influence on human psychology. Typically developing (TD) children judge their objects to be more desirable and valuable than similar objects belonging to others. This ‘ownership effect’ is due to processing one’s property in relation to ‘the self’. Here we explore whether children with autism spectrum disorder (ASD) – a population with impaired self-understanding – prefer and over-value property due to ownership. In Experiment 1, we discovered that children with ASD did not favour a randomly endowed toy and frequently traded for a different object. By contrast, TD children showed a clear preference for their randomly endowed toy and traded infrequently. Both populations also demonstrated highly-accurate tracking of owner-object relationships. Experiment 2 showed that both TD children and children with ASD over-value their toys if they are self-selected and different from other-owned toys. Unlike TD children, children with ASD did not over-value their toys in comparison to non-owned identical copies. This finding was replicated in Experiment 3, which also established that mere ownership elicited over-valuation of randomly endowed property in TD children. However, children with ASD did not consistently regard their randomly endowed toys as the most valuable, and evaluated property irrespective of ownership. Our findings show that mere ownership increases preferences and valuations for self-owned property in TD children, but not children with ASD. We propose that deficits in self-understanding may diminish ownership effects in ASD, eliciting a more economically-rational strategy that prioritises material qualities (e.g. what a toy is) rather than whom it belongs to.

      PubDate: 2017-12-13T06:20:48Z
       
  • Involuntary top-down control by search-irrelevant features: Visual working
           memory biases attention in an object-based manner
    • Abstract: Publication date: March 2018
      Source:Cognition, Volume 172
      Author(s): Rebecca M. Foerster, Werner X. Schneider
      Many everyday tasks involve successive visual-search episodes with changing targets. Converging evidence suggests that these targets are retained in visual working memory (VWM) and bias attention from there. It is unknown whether all or only search-relevant features of a VWM template bias attention during search. Bias signals might be configured exclusively to task-relevant features so that only search-relevant features bias attention. Alternatively, VWM might maintain objects in the form of bound features. Then, all template features will bias attention in an object-based manner, so that biasing effects are ranked by feature relevance. Here, we investigated whether search-irrelevant VWM template features bias attention. Participants had to saccade to a target opposite a distractor. A colored cue depicted the target prior to each search trial. The target was predefined only by its identity, while its color was irrelevant. When target and cue matched not only in identity (search-relevant) but also in color (search-irrelevant), saccades went more often and faster directly to the target than without any color match (Experiment 1). When introducing a cue-distractor color match (Experiment 2), direct target saccades were most likely when target and cue matched in the search-irrelevant color and least likely in case of a cue-distractor color match. When cue and target were never colored the same (Experiment 3), cue-colored distractors still captured the eyes more often than different-colored distractors despite color being search-irrelevant. As participants were informed about the misleading color, the result argues against a strategic and voluntary use of color. Instead, search-irrelevant features biased attention obligatorily, arguing for involuntary top-down control by object-based VWM templates.

      PubDate: 2017-12-13T06:20:48Z
       
  • Understanding face familiarity
    • Abstract: Publication date: March 2018
      Source:Cognition, Volume 172
      Author(s): Robin S.S. Kramer, Andrew W. Young, A. Mike Burton
      It has been known for many years that identifying familiar faces is much easier than identifying unfamiliar faces, and that this familiar face advantage persists across a range of tasks. However, attempts to understand face familiarity have mostly used a binary contrast between ‘familiar’ and ‘unfamiliar’ faces, with no attempt to incorporate the vast range of familiarity we all experience. From family members to casual acquaintances and from personal to media exposure, familiarity is a more complex categorisation than is usually acknowledged. Here we model levels of familiarity using a generic statistical analysis (PCA combined with LDA) computed over some four thousand naturally occurring images that include a large variation in the numbers of images for each known person. Using a strong test of performance with entirely novel, untrained everyday images, we show that such a model can simulate widely documented effects of familiarity in face recognition and face matching, and offers a natural account of the internal feature advantage for familiar faces. Furthermore, as with human viewers, the benefits of familiarity seem to accrue from being able to extract consistent information across different photos of the same face. We argue that face familiarity is best understood as reflecting increasingly robust statistical descriptions of idiosyncratic within-person variability. Understanding how faces become familiar appears to rely on both bottom-up statistical image descriptions (modelled here with PCA), and top-down processes that cohere superficially different images of the same person (modelled here with LDA).

      PubDate: 2017-12-13T06:20:48Z
       
  • A unified computational account of cumulative semantic, semantic blocking,
           and semantic distractor effects in picture naming
    • Abstract: Publication date: March 2018
      Source:Cognition, Volume 172
      Author(s): Ardi Roelofs
      Computational models of lexical selection in spoken word production have been applied to semantic interference effects in picture naming response times obtained with continuous naming, blocked-cyclic naming, and picture-word interference paradigms. However, a unified computational account of the effects in the three paradigms is lacking. Here, I show that the inclusion of conceptual bias in the WEAVER++ model (Levelt, Roelofs, & Meyer, 1999) explains cumulative semantic and semantic blocking effects while preserving the model's account of semantic distractor effects. The key assumptions of the account are (1) lexical selection by competition, and (2) a conceptual origin and lexical locus of the semantic effects. I provide a proof of concept of the account by reporting computer simulation results, addressing behavioral and neuroimaging evidence. The assumptions are sufficient for a unified account of semantic effects in the three paradigms, contrary to pessimistic views of this area.
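The core mechanism, lexical selection by competition, can be sketched with a minimal toy simulation. This is a hedged illustration under our own simplifying assumptions (a Luce-ratio selection hazard, a geometric waiting-time approximation, and an arbitrary 0.2 bias value), not the actual WEAVER++ implementation: a lingering conceptual bias on already-named category members feeds competitor activation, so expected selection time grows across trials, mimicking cumulative semantic interference in continuous naming.

```python
def expected_latency(target_act, competitor_acts):
    """Luce-ratio selection: the per-step probability of selecting the
    target is its share of total activation; the expected number of
    steps is the reciprocal (geometric waiting time)."""
    hazard = target_act / (target_act + sum(competitor_acts))
    return 1.0 / hazard

# Continuous naming: each named category member leaves a small
# residual conceptual bias (0.2 here, an arbitrary value) that
# activates same-category competitors on later trials.
base = 1.0
latencies = []
competitors = []          # activations of already-named neighbours
for trial in range(5):    # five members of one semantic category
    latencies.append(expected_latency(base, competitors))
    competitors.append(0.2)  # hypothetical residual conceptual bias

print(latencies)  # monotonically increasing: cumulative interference
```

The same selection-by-competition rule, with a distractor word contributing extra competitor activation, would also slow naming in the picture-word interference paradigm, which is the sense in which a single mechanism can unify the three paradigms.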

      PubDate: 2017-12-13T06:20:48Z
       
  • Distinguishing languages from dialects: A litmus test using the
           picture-word interference task
    • Abstract: Publication date: March 2018
      Source:Cognition, Volume 172
      Author(s): Alissa Melinger
      Linguists have been working to develop objective criteria for distinguishing languages from dialects for well over half a century. The prevailing view amongst sociolinguists is that no objective criteria can be formulated. The aim of this study is to examine whether language processing can provide insights into this problem by comparing bidialectal behavioural effects to bilingual effects reported in the literature. Previous research has demonstrated that when bilinguals name an object in Lx while simultaneously processing a translation equivalent distractor word in Ly, naming times are sped up relative to an unrelated condition (Costa, Miozzo, & Caramazza, 1999). Using the same methodology, we evaluated whether a comparable facilitation effect arises when the distractor word is a dialectal or register variant of the picture name. Across 5 experiments we found no trace of translation equivalent facilitation. Instead, we repeatedly observed between-dialect and between-register interference, in contrast to the between-language facilitation effect. This behavioural divergence between bilingual vs. bidialectal processing suggests that this paradigm could provide an objective litmus test for identifying the boundary between dialects and languages.

      PubDate: 2017-12-13T06:20:48Z
       
  • Learning and transfer of working memory gating policies
    • Abstract: Publication date: March 2018
      Source:Cognition, Volume 172
      Author(s): Apoorva Bhandari, David Badre
      Knowledge about the tasks we encounter enables us to rapidly and flexibly adapt to novel task contexts. Previous research has focused primarily on abstract rules that leverage shared structure in stimulus-response (S-R) mappings as the basis of such task knowledge. Here we provide evidence that working memory (WM) gating policies – a type of control policy required for internal control of WM during a task – constitute a form of abstract task knowledge that can be transferred across contexts. In two experiments, we report specific evidence for the transfer of selective WM gating policies across changes of task context. We show that this transfer is not tied to shared structure in S-R mappings, but instead in the dynamic structure of the task. Collectively, our results highlight the importance of WM gating policies in particular, and control policies in general, as a key component of the task knowledge that supports flexible behavior and task generalization.

      PubDate: 2017-12-13T06:20:48Z
       
  • The influence of prior reputation and reciprocity on dynamic
           trust-building in adults with and without autism spectrum disorder
    • Abstract: Publication date: March 2018
      Source:Cognition, Volume 172
      Author(s): Cornelius Maurer, Valerian Chambon, Sacha Bourgeois-Gironde, Marion Leboyer, Tiziana Zalla
      The present study was designed to investigate the effects of reputational priors and direct reciprocity on the dynamics of trust building in adults with (N = 17) and without (N = 25) autism spectrum disorder (ASD) using a multi-round Trust Game (MTG). On each round, participants, who played as investors, were required to maximize their benefits by updating their prior expectations (the partner’s positive or negative reputation), based on the partner’s direct reciprocity, and adjusting their own investment decisions accordingly. Results showed that reputational priors strongly oriented the initial decision to trust, operationalized as the amount of investment the investor shares with the counterpart. However, while typically developed participants were mainly affected by the direct reciprocity, and rapidly adopted the optimal Tit-for-Tat strategy, participants with ASD continued to rely on reputational priors throughout the game, even when experience of the counterpart’s actual behavior contradicted their prior-based expectations. In participants with ASD, the effect of the reputational prior never disappeared, and affected judgments of trustworthiness and reciprocity of the partner even after completion of the game. Moreover, the weight of prior reputation positively correlated with the severity of the ASD participant’s social impairments while the reciprocity score negatively correlated with the severity of repetitive and stereotyped behaviors, as measured by the Autism Diagnostic Interview–Revised (ADI-R). In line with Bayesian theoretical accounts, the present findings indicate that individuals with ASD have difficulties encoding incoming social information and using it to revise and flexibly update prior social expectations, and that this deficit might severely hinder social learning and everyday life interactions.

      PubDate: 2017-12-02T21:02:39Z
       
  • Perceptions of randomness in binary sequences: Normative, heuristic, or
           both?
    • Abstract: Publication date: March 2018
      Source:Cognition, Volume 172
      Author(s): Stian Reimers, Chris Donkin, Mike E. Le Pelley
      When people consider a series of random binary events, such as tossing an unbiased coin and recording the sequence of heads (H) and tails (T), they tend to erroneously rate sequences with less internal structure or order (such as HTTHT) as more probable than sequences containing more structure or order (such as HHHHH). This is traditionally explained as a local representativeness effect: Participants assume that the properties of long sequences of random outcomes—such as an equal proportion of heads and tails, and little internal structure—should also apply to short sequences. However, recent theoretical work has noted that the probability of a particular sequence of, say, heads and tails of length n, occurring within a larger (>n) sequence of coin flips actually differs by sequence, so P(HHHHH) < P(HTTHT). In this alternative account, people apply rational norms based on limited experience. We test these accounts. Participants in Experiment 1 rated the likelihood of occurrence for all possible strings of 4, 5, and 6 observations in a sequence of coin flips. Judgments were better explained by representativeness in alternation rate, relative proportion of heads and tails, and sequence complexity, than by objective probabilities. Experiments 2 and 3 gave similar results using incentivized binary choice procedures. Overall the evidence suggests that participants are not sensitive to variation in objective probabilities of a sub-sequence occurring; they appear to use heuristics based on several distinct forms of representativeness.
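The claim that P(HHHHH) < P(HTTHT) within a longer run of flips can be checked by brute-force enumeration (a sketch of the calculation; the function name is ours):

```python
from itertools import product

def p_contains(pattern, n):
    """Probability that `pattern` occurs as a contiguous substring
    of a fair coin sequence of length n (brute-force enumeration)."""
    hits = sum(pattern in ''.join(seq) for seq in product('HT', repeat=n))
    return hits / 2**n

# A self-overlapping run like HHHHH 'wastes' its overlaps, so it
# fits into fewer length-n sequences than an irregular pattern.
print(p_contains('HHHHH', 10))  # lower
print(p_contains('HTTHT', 10))  # higher
```

For instance, among the 64 sequences of six flips, HHHHH occurs in only 3 (HHHHHH, HHHHHT, THHHHH) while HTTHT occurs in 4, because occurrences of the self-overlapping run cluster into fewer distinct sequences.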

      PubDate: 2017-12-02T21:02:39Z
       
  • When having two names facilitates lexical selection: Similar results in
           the picture-word task from translation distractors in bilinguals and
           synonym distractors in monolinguals
    • Abstract: Publication date: February 2018
      Source:Cognition, Volume 171
      Author(s): Alexandra S. Dylman, Christopher Barry
      We report five experiments using the picture-word task to examine lexical selection by comparing the effects of translation distractors in bilinguals and synonym distractors in monolinguals. Three groups of bilinguals named objects in their L1 or L2, and English monolinguals named objects using common names (e.g., DOG = “dog”) or, in a novel manipulation, using synonymous alternative names (e.g., DOG = “hound”, GLASSES = “spectacles”). All studies produced strikingly similar results. When bilinguals named in L1, there was a small facilitation effect from translation distractors, but larger facilitation when they named in L2. When monolinguals produced common names, there was no reliable effect from synonym distractors, but facilitation when they produced alternative names. (There were also strong identity facilitation effects in all naming conditions.) We discuss the relevance of these results for the debate concerning the role of competition in lexical selection and propose that for speech production there are direct facilitatory connections between the lexical representations of translations in bilinguals (and between synonyms in monolinguals). The effects of synonyms in monolinguals appear to “simulate” the effects found for translations in bilinguals, which suggests that there are commonalities in monolingual and bilingual lexical selection.

      PubDate: 2017-12-02T21:02:39Z
       
  • Verbal labels facilitate tactile perception
    • Abstract: Publication date: February 2018
      Source:Cognition, Volume 171
      Author(s): Tally McCormick Miller, Timo Torsten Schmidt, Felix Blankenburg, Friedemann Pulvermüller
      One of the key statements of linguistic relativity is that language has a causal effect on perception. Although much previous research has addressed such putative language-perception causality, no firm proof is available thus far which demonstrates that verbal labels help or otherwise influence perceptual processes. Here, we tested the hypothesis of language-perception causality by using novel, minimally-different tactile-patterned stimuli applied to the finger, which initially could not be discriminated by our participants. By combining novel verbal pseudoword- and novel tactile-patterned stimuli in an implicit learning experiment, we show a language-induced facilitation in tactile-patterned stimulus discrimination. After one week of intensive yet implicit learning of tactile stimuli in the presence of irrelevant consistent verbal labels, participants demonstrated significant discrimination improvement. In contrast, the same participants showed no improvement in discriminating tactile-patterned stimuli that had been learnt in the context of variable linguistic stimuli. These results show that specific mental links between verbal labels and perceptual information brought about by their correlated presentation enable one to better discriminate said sensory information (and build percepts).

      PubDate: 2017-12-02T21:02:39Z
       
  • Pre-linguistic segmentation of speech into syllable-like units
    • Abstract: Publication date: February 2018
      Source:Cognition, Volume 171
      Author(s): Okko Räsänen, Gabriel Doyle, Michael C. Frank
      Syllables are often considered to be central to infant and adult speech perception. Many theories and behavioral studies on early language acquisition are also based on syllable-level representations of spoken language. There is little clarity, however, on what sort of pre-linguistic “syllable” would actually be accessible to an infant with no phonological or lexical knowledge. Anchored by the notion that syllables are organized around particularly sonorous (audible) speech sounds, the present study investigates the feasibility of speech segmentation into syllable-like chunks without any a priori linguistic knowledge. We first operationalize sonority as a measurable property of the acoustic input, and then use sonority variation across time, or speech rhythm, as the basis for segmentation. The entire process from acoustic input to chunks of syllable-like acoustic segments is implemented as a computational model inspired by the oscillatory entrainment of the brain to speech rhythm. We analyze the output of the segmentation process in three different languages, showing that the sonority fluctuation in speech is highly informative of syllable and word boundaries in all three cases without any language-specific tuning of the model. These findings support the widely held assumption that syllable-like structure is accessible to infants even when they are only beginning to learn the properties of their native language.
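A drastically simplified sketch of the segmentation idea (illustrative only; the actual model uses oscillatory entrainment to speech rhythm, and the function names, window size, and toy contour below are our assumptions): treat a smoothed energy contour as a sonority proxy and place syllable-like boundaries at its local minima between sonority peaks.

```python
def smooth(xs, w=3):
    """Moving-average smoothing of an energy contour."""
    half = w // 2
    return [sum(xs[max(0, i - half):i + half + 1]) /
            len(xs[max(0, i - half):i + half + 1]) for i in range(len(xs))]

def boundaries(env):
    """Indices of local minima in the envelope: candidate
    syllable-like boundaries between sonority peaks."""
    return [i for i in range(1, len(env) - 1)
            if env[i] < env[i - 1] and env[i] <= env[i + 1]]

# Toy 'sonority' contour with three peaks (three syllable nuclei):
contour = [0.1, 0.6, 0.9, 0.5, 0.2, 0.7, 1.0, 0.6, 0.1, 0.5, 0.8, 0.3]
print(boundaries(smooth(contour)))  # -> [4, 8]: two internal dips
```

The appeal of this family of models for the abstract's argument is that nothing in the procedure is language-specific: only the rise and fall of acoustic sonority is used, so the same segmenter can apply before any phonological or lexical knowledge is in place.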

      PubDate: 2017-11-21T14:20:54Z
       
  • Social attention directs working memory maintenance
    • Abstract: Publication date: February 2018
      Source:Cognition, Volume 171
      Author(s): Qi-Yang Nie, Xiaowei Ding, Jianyong Chen, Markus Conci
      Visual working memory (vWM) performance is enhanced when a memorized object is cued after encoding. This so-called retro-cue effect is typically observed with a predictive (80% valid), retrospective cue. The current study examined whether a nonpredictive (50% valid) retro-cue can similarly enhance internal memory representations in cases where the cue conveys social signals. To this end, gaze cues were presented during the retention interval of a change-detection task, which can engender a mutual attentional focus of two individuals on one location. In line with our prediction, Experiment 1 demonstrated that a polygon presented at the gazed-at location was remembered better than one at either non-gazed or gazed-away locations. Experiments 2 and 3 showed that low-level motion cues did not elicit attentional orienting in a comparable manner as the gaze cue, and these differences in cuing were found to be reliable and independent of memory load. Furthermore, the gaze retro-cue effect disappeared when the face was inverted (Experiment 4). In sum, these results clearly show that sharing the focus of another individual establishes a point of reference from which visual information is restored with priority, suggesting that a gaze retro-cue leads to social attention, thus modulating vWM maintenance in a reflexive, automatic manner.

      PubDate: 2017-11-13T20:58:20Z
       
  • Infants' sensitivity to vowel harmony and its role in segmenting speech
    • Abstract: Publication date: February 2018
      Source:Cognition, Volume 171
      Author(s): Toben H. Mintz, Rachel L. Walker, Ashlee Welday, Celeste Kidd
      A critical part of infants’ ability to acquire any language involves segmenting continuous speech input into discrete word forms. Certain properties of words could provide infants with reliable cues to word boundaries. Here we investigate the potential utility of vowel harmony (VH), a phonological property whereby vowels within a word systematically exhibit similarity (“harmony”) for some aspect of the way they are pronounced. We present evidence that infants with no experience of VH in their native language nevertheless actively use these patterns to generate hypotheses about where words begin and end in the speech stream. In two sets of experiments, we exposed infants learning English, a language without VH, to a continuous speech stream in which the only systematic patterns available to be used as cues to word boundaries came from syllable sequences that showed VH or those that showed vowel disharmony (dissimilarity). After hearing less than one minute of the streams, infants showed evidence of sensitivity to VH cues. These results suggest that infants have an experience-independent sensitivity to VH, and are predisposed to segment speech according to harmony patterns. We also found that when the VH patterns were more subtle (Experiment 2), infants required more exposure to the speech stream before they segmented based on VH, consistent with previous work on infants’ preferences relating to processing load. Our findings evidence a previously unknown mechanism by which infants could discover the words of their language, and they shed light on the perceptual mechanisms that might be responsible for the emergence of vowel harmony as an organizing principle for the sound structure of words in many languages.

      PubDate: 2017-11-13T20:58:20Z
       
  • On-line confidence monitoring during decision making
    • Abstract: Publication date: February 2018
      Source:Cognition, Volume 171
      Author(s): Dror Dotan, Florent Meyniel, Stanislas Dehaene
      Humans can readily assess their degree of confidence in their decisions. Two models of confidence computation have been proposed: post hoc computation using post-decision variables and heuristics, versus online computation using continuous assessment of evidence throughout the decision-making process. Here, we arbitrate between these theories by continuously monitoring finger movements during a manual sequential decision-making task. Analysis of finger kinematics indicated that subjects kept separate online records of evidence and confidence: finger deviation continuously reflected the ongoing accumulation of evidence, whereas finger speed continuously reflected the momentary degree of confidence. Furthermore, end-of-trial finger speed predicted the post-decisional subjective confidence rating. These data indicate that confidence is computed on-line, throughout the decision process. Speed-confidence correlations were previously interpreted as a post-decision heuristic, whereby slow decisions decrease subjective confidence, but our results suggest an adaptive mechanism that involves the opposite causality: by slowing down when unconfident, participants gain time to improve their decisions.
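A minimal toy formalization of on-line confidence (my sketch, not the authors' model) is a noisy evidence accumulator whose momentary confidence is a logistic readout of the absolute accumulated evidence, available at every moment rather than only after the decision:

```python
import math
import random

def simulate_trial(drift=0.2, noise=1.0, threshold=3.0, seed=0):
    """Random-walk evidence accumulation with a momentary confidence readout.

    Hypothetical illustration: evidence accumulates noisily toward a decision
    bound; at each step, momentary confidence is approximated by a logistic
    function of |evidence|, so confidence exists throughout the trial.
    """
    rng = random.Random(seed)
    evidence, trace = 0.0, []
    while abs(evidence) < threshold:
        evidence += drift + rng.gauss(0, noise)
        confidence = 1 / (1 + math.exp(-abs(evidence)))  # in (0.5, 1)
        trace.append((evidence, confidence))
    return trace

trace = simulate_trial()
# "finger speed" analogue: fast when confident, slow when unsure at each step
print(f"decision after {len(trace)} samples, final confidence {trace[-1][1]:.2f}")
```

In this sketch the per-step confidence values play the role the paper assigns to momentary finger speed, while the evidence values play the role of finger deviation.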

      PubDate: 2017-11-13T20:58:20Z
       
  • Decreased reward value of biological motion among individuals with
           autistic traits
    • Abstract: Publication date: February 2018
      Source:Cognition, Volume 171
      Author(s): Elin H. Williams, Emily S. Cross
      The Social Motivation Theory posits that a reduced sensitivity to the value of social stimuli, specifically faces, can account for social impairments in Autism Spectrum Disorders (ASD). Research has demonstrated that typically developing (TD) individuals preferentially orient towards another type of salient social stimulus, namely biological motion. Individuals with ASD, however, do not show this preference. While the reward value of faces to both TD and ASD individuals has been well-established, the extent to which individuals from these populations also find human motion to be rewarding remains poorly understood. The present study investigated the value assigned to biological motion by TD participants in an effort task, and further examined whether these values differed among individuals with more autistic traits. The results suggest that TD participants value natural human motion more than rigid, machine-like motion or non-human control motion, but this preference is attenuated among individuals reporting more autistic traits. This study provides the first evidence to suggest that individuals with more autistic traits find a broader conceptualisation of social stimuli less rewarding compared to individuals with fewer autistic traits. By quantifying the social reward value of human motion, the present findings contribute an important piece to our understanding of social motivation in individuals with and without social impairments.

      PubDate: 2017-11-07T02:45:03Z
       
  • From altered synaptic plasticity to atypical learning: A computational
           model of Down syndrome
    • Abstract: Publication date: February 2018
      Source:Cognition, Volume 171
      Author(s): Ángel Eugenio Tovar, Gert Westermann, Alvaro Torres
      Learning and memory rely on the adaptation of synaptic connections. Research on the neurophysiology of Down syndrome has characterized an atypical pattern of synaptic plasticity with limited long-term potentiation (LTP) and increased long-term depression (LTD). Here we present a neurocomputational model that instantiates this LTP/LTD imbalance to explore its impact on tasks of associative learning. In Study 1, we ran a series of computational simulations to analyze the learning of simple and overlapping stimulus associations in a model of Down syndrome compared with a model of typical development. Learning in the Down syndrome model was slower and more susceptible to interference effects. We found that interference effects could be overcome with dedicated stimulation schedules. In Study 2, we ran a second set of simulations and an empirical study with participants with Down syndrome and typically developing children to test the predictions of our model. The model adequately predicted the performance of the human participants in a serial reaction time task, an implicit learning task that relies on associative learning mechanisms. Critically, typical and atypical behavior was explained by the interactions between neural plasticity constraints and the stimulation schedule. Our model provides a mechanistic account of learning impairments based on these interactions, and a causal link between atypical synaptic plasticity and associative learning.
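The LTP/LTD imbalance can be caricatured in a few lines (an assumed simplification, not the authors' network): a Hebbian rule with separate potentiation and depression rates, applied to two overlapping associations so that the second interferes with the first:

```python
import numpy as np

def hebbian_update(w, pre, post, ltp, ltd):
    """One Hebbian step with separate potentiation/depression rates.

    Co-active pre/post units are strengthened at rate ltp; pre-active,
    post-inactive pairs are weakened at rate ltd. Weights are bounded in [0, 1].
    """
    dw = ltp * np.outer(pre, post) - ltd * np.outer(pre, 1.0 - post)
    return np.clip(w + dw, 0.0, 1.0)

def run_profile(ltp, ltd, epochs=15):
    """Learn A->B, then the overlapping A->C, and report both final strengths."""
    A = np.array([1.0])
    B, C = np.array([1.0, 0.0]), np.array([0.0, 1.0])
    w = np.zeros((1, 2))
    for _ in range(epochs):
        w = hebbian_update(w, A, B, ltp, ltd)   # phase 1: acquire A->B
    for _ in range(epochs):
        w = hebbian_update(w, A, C, ltp, ltd)   # phase 2: interfering A->C
    return w[0]  # [retained A->B strength, acquired A->C strength]

typical = run_profile(ltp=0.10, ltd=0.02)   # typical-development profile
atypical = run_profile(ltp=0.05, ltd=0.06)  # limited LTP, increased LTD
print("typical :", typical)
print("atypical:", atypical)
```

With these illustrative rates, the low-LTP/high-LTD profile both acquires the first association more weakly and loses it entirely under interference from the overlapping second association, mirroring the slower, interference-prone learning the abstract describes.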

      PubDate: 2017-11-07T02:45:03Z
       
  • Compound risk judgment in tasks with both idiosyncratic and systematic
           risk: The “Robust Beauty” of additive probability integration
    • Abstract: Publication date: February 2018
      Source:Cognition, Volume 171
      Author(s): Joakim Sundh, Peter Juslin
      In this study, we explore how people integrate risks of assets in a simulated financial market into a judgment of the conjunctive risk that all assets decrease in value, both when assets are independent and when there is a systematic risk present affecting all assets. Simulations indicate that while mental calculation according to naïve application of probability theory is best when the assets are independent, additive or exemplar-based algorithms perform better when systematic risk is high. Considering that people tend to intuitively approach compound probability tasks using additive heuristics, we expected the participants to find it easiest to master tasks with high systematic risk – the most complex tasks from the standpoint of probability theory – while they should shift to probability theory or exemplar memory with independence between the assets. The results from 3 experiments confirm that participants shift between strategies depending on the task, starting off with the default of additive integration. In contrast to results in similar multiple cue judgment tasks, there is little evidence for use of exemplar memory. The additive heuristics also appear to be surprisingly context-sensitive, with limited generalization across formally very similar tasks.
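The arithmetic behind the key contrast can be made concrete with a toy market (my assumptions, not the authors' simulation): each of three assets decreases with marginal probability p, and a systematic-risk parameter mixes independent draws with a common shock that drives all assets together:

```python
import random

def simulate_joint(p=0.3, rho_systematic=0.0, n=100_000, seed=1):
    """Monte-Carlo estimate of P(all three assets decrease).

    With probability rho_systematic a common shock makes the three assets
    move together (comonotonic); otherwise each decreases independently
    with probability p. Marginal decrease probability stays p either way.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        if rng.random() < rho_systematic:
            decreases = [rng.random() < p] * 3       # common shock: all identical
        else:
            decreases = [rng.random() < p for _ in range(3)]
        hits += all(decreases)
    return hits / n

p = 0.3
multiplicative = p ** 3   # naive probability theory for independent assets: 0.027
additive = p              # additive heuristic: (weighted) average of equal marginals
print("independent market:", simulate_joint(p, 0.0))   # close to multiplicative
print("systematic market: ", simulate_joint(p, 1.0))   # close to additive
```

Under independence the multiplicative answer (0.027) is correct and the additive heuristic (0.3) badly overshoots; with fully systematic risk the conjunctive probability rises toward the marginal, so the additive heuristic is the better approximation, which is the pattern the simulations in the paper exploit.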

      PubDate: 2017-11-07T02:45:03Z
       
  • The early social significance of shared ritual actions
    • Abstract: Publication date: February 2018
      Source:Cognition, Volume 171
      Author(s): Zoe Liberman, Katherine D. Kinzler, Amanda L. Woodward
      Many rituals are socially stipulated such that engaging in a group’s rituals can fundamentally signal membership in that group. Here, we asked whether infants infer information about people’s social affiliation based on whether those people perform the same ritualistic action versus different actions. We presented 16-month-old infants with two people who used the same object to achieve the same goal: turning on a light. In a first study, the actions that the actors used to turn on the light had key properties of ritual: they were not causally necessary to reach the overall goal, and there were no features of the situation that required doing the particular actions. We varied whether the two actors performed the same action or performed different actions to turn on the light. Infants expected people who used the same ritualistic action to be more likely to affiliate than people who used different actions. A second study indicated that these results were not due to perceptual similarity: when the differences in the actors’ actions were not marked by properties of ritual, but were instead due to situational constraints, infants expected the actors to affiliate. Thus, infants understand the social significance of people engaging in common, potentially ritualistic actions, and expect these actions to provide information about third-party social relationships.

      PubDate: 2017-11-07T02:45:03Z
       
  • Expected but omitted stimuli affect crossmodal interaction
    • Abstract: Publication date: February 2018
      Source:Cognition, Volume 171
      Author(s): Marcello Costantini, Daniele Migliorati, Brunella Donno, Miroslav Sirota, Francesca Ferri
      One of the most important abilities of our brain is to integrate input from different sensory modalities to create a coherent representation of the environment. Does expectation affect such multisensory integration? In this paper, we tackled this issue by taking advantage of the crossmodal congruency effect (CCE). Participants made elevation judgments to visual targets while ignoring tactile distractors. We manipulated the expectation of the tactile distractor by pairing the tactile stimulus to the index finger with a high-frequency tone and the tactile stimulus to the thumb with a low-frequency tone in 80% of the trials. In the remaining trials we delivered the tone and the visual target, but the tactile distractor was omitted (Study 1). Results fully replicated the basic crossmodal congruency effect. Strikingly, the CCE was also observed, though to a lesser degree, when the tactile distractor was not presented but merely expected. The contingencies between tones and tactile distractors were reversed in a follow-up study (Study 2), and the effect was further tested in two conceptual replications using different combinations of stimuli (Studies 5 and 6). Two control studies ruled out alternative explanations of the observed effect that would not involve a role for tactile distractors (Studies 3 and 4). Two additional control studies unequivocally proved the dependency of the CCE on the spatial and temporal expectation of the distractors (Studies 7 and 8). An internal small-scale meta-analysis showed that the crossmodal congruency effect with predicted distractors is a robust medium-sized effect. Our findings reveal that multisensory integration, one of the most basic and ubiquitous mechanisms to encode external events, benefits from expectation of sensory input.

      PubDate: 2017-11-07T02:45:03Z
       
  • Full interpretation of minimal images
    • Abstract: Publication date: February 2018
      Source:Cognition, Volume 171
      Author(s): Guy Ben-Yosef, Liav Assif, Shimon Ullman
      The goal in this work is to model the process of ‘full interpretation’ of object images, which is the ability to identify and localize all semantic features and parts that are recognized by human observers. The task is approached by dividing the interpretation of the complete object into the interpretation of multiple reduced but interpretable local regions. In such reduced regions, interpretation is simpler, since the number of semantic components is small, and the variability of possible configurations is low. We model the interpretation process by identifying primitive components and relations that play a useful role in local interpretation by humans. To identify useful components and relations used in the interpretation process, we consider the interpretation of ‘minimal configurations’: these are reduced local regions, which are minimal in the sense that further reduction renders them unrecognizable and uninterpretable. We show that such minimal interpretable images have useful properties, which we use to identify informative features and relations used for full interpretation. We describe our interpretation model, and show results of detailed interpretations of minimal configurations, produced automatically by the model. Finally, we discuss possible extensions and implications of full interpretation to difficult visual tasks, such as recognizing social interactions, which are beyond the scope of current models of visual recognition.

      PubDate: 2017-11-07T02:45:03Z
       
 
 
JournalTOCs
School of Mathematical and Computer Sciences
Heriot-Watt University
Edinburgh, EH14 4AS, UK
Email: journaltocs@hw.ac.uk
Tel: +00 44 (0)131 4513762
Fax: +00 44 (0)131 4513327