Cognition
   [SJR: 2.77]   [H-I: 142]
   Hybrid journal (may contain Open Access articles)
   ISSN (Print): 0010-0277
   Published by Elsevier  [3043 journals]
  • Auditory-induced bouncing is a perceptual (rather than a cognitive)
           phenomenon: Evidence from illusory crescents
    • Abstract: Publication date: January 2018
      Source:Cognition, Volume 170
      Author(s): Hauke S. Meyerhoff, Brian J. Scholl
      A central task for vision is to identify objects as the same persisting individuals over time and motion. The need for such processing is made especially clear in ambiguous situations such as the bouncing/streaming display: two discs move toward each other, superimpose, and then continue along their trajectories. Did the discs stream past each other, or bounce off each other? When people are likely to perceive streaming, playing a brief tone at the moment of overlap can readily cause them to see bouncing instead. Recent research has attributed this effect to decisional (rather than perceptual) processes by showing that auditory tones alter response biases but not the underlying sensitivity for detecting objective bounces. Here we explore the nature of this phenomenon using ‘illusory causal crescents’: when people perceive bouncing (or causal ‘launching’), they also perceive the second disc to begin moving before being fully overlapped with the first disc (i.e. leaving an uncovered crescent). Here we demonstrate that merely playing a sound coincident with the moment of overlap can also reliably induce the perception of such illusory crescents. Moreover, this effect is due to the coincidence of the tone, per se, since the effect disappears when the tone is embedded in a larger regular tone sequence. Because observers never have to explicitly categorize their percept (e.g. as streaming)—and because the effect involves a subtle quantitative influence on another clearly visual property (i.e. the crescent’s width)—we conclude that this audiovisual influence on the perception of identity over time reflects perceptual processing rather than higher-level decisions.

      PubDate: 2017-10-12T08:00:59Z
       
  • Cognitive costs of decision-making strategies: A resource demand
           decomposition analysis with a cognitive architecture
    • Abstract: Publication date: January 2018
      Source:Cognition, Volume 170
      Author(s): Hanna B. Fechner, Lael J. Schooler, Thorsten Pachur
      Several theories of cognition distinguish between strategies that differ in the mental effort that their use requires. But how can the effort—or cognitive costs—associated with a strategy be conceptualized and measured? We propose an approach that decomposes the effort a strategy requires into the time costs associated with the demands for using specific cognitive resources. We refer to this approach as resource demand decomposition analysis (RDDA) and instantiate it in the cognitive architecture Adaptive Control of Thought–Rational (ACT-R). ACT-R provides the means to develop computer simulations of the strategies. These simulations take into account how strategies interact with quantitative implementations of cognitive resources and incorporate the possibility of parallel processing. Using this approach, we quantified, decomposed, and compared the time costs of two prominent strategies for decision making, take-the-best and tallying. Because take-the-best often ignores information and foregoes information integration, it has been considered simpler than strategies like tallying. However, in both ACT-R simulations and an empirical study we found that under increasing cognitive demands the response times (i.e., time costs) of take-the-best sometimes exceeded those of tallying. The RDDA suggested that this pattern is driven by greater requirements for working memory updates, memory retrievals, and the coordination of mental actions when using take-the-best compared to tallying. The results illustrate that assessing the relative simplicity of strategies requires consideration of the overall cognitive system in which the strategies are embedded.
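      The two strategies compared in this study follow simple, well-documented decision rules. As a rough illustration only — the cue names and values below are invented, not materials from the study — they can be sketched as:

```python
# Hypothetical binary cues, ordered by (assumed) validity.
CUES = ["recognition", "capital", "airport"]

def take_the_best(a, b, cues=CUES):
    """Inspect cues in validity order and decide on the first cue
    that discriminates between the options, ignoring all the rest."""
    for cue in cues:
        if a[cue] != b[cue]:
            return "a" if a[cue] > b[cue] else "b"
    return "tie"

def tallying(a, b, cues=CUES):
    """Sum unit-weighted cue values for each option and compare."""
    score_a = sum(a[cue] for cue in cues)
    score_b = sum(b[cue] for cue in cues)
    if score_a == score_b:
        return "tie"
    return "a" if score_a > score_b else "b"

city_a = {"recognition": 1, "capital": 0, "airport": 1}
city_b = {"recognition": 0, "capital": 1, "airport": 1}

print(take_the_best(city_a, city_b))  # decided by the first discriminating cue: "a"
print(tallying(city_a, city_b))       # decided by the totals, 2 vs. 2: "tie"
```

      The sketch shows why take-the-best is usually called "simpler" — it can stop after one cue — while the study's point is that, once memory retrievals and working-memory updates are costed inside a full cognitive architecture, this frugality is not free.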

      PubDate: 2017-10-12T08:00:59Z
       
  • Does a lack of auditory experience affect sequential learning?
    • Abstract: Publication date: January 2018
      Source:Cognition, Volume 170
      Author(s): Janne von Koss Torkildsen, Joanne Arciuli, Christiane Lingås Haukedal, Ona Bø Wie
      To understand the interaction between sensory experiences and cognition, it is critical to investigate the possibility that deprivation in one sensory modality might affect cognition in other modalities. Here we are concerned with the hypothesis that early experience with sound is vital to the development of domain-general sequential processing skills. In line with this hypothesis, a seminal empirical study found that prelingually deaf children had impaired sequence learning in the visual modality. In order to assess the limits of this hypothesis, the current study employed a different visual sequence learning task in an investigation of prelingually deaf children with cochlear implants and normal hearing children. Results showed statistically significant learning in each of the two groups, and no significant difference in the amount of learning between groups. Moreover, there was no association between the age at which the child received their implant (and thus access to electric hearing) and their performance on the sequential learning task. We discuss key differences between our study and the previous study, and argue that the field must reconsider claims about domain-general cognitive impairment resulting from early auditory deprivation.

      PubDate: 2017-10-12T08:00:59Z
       
  • The role of multisensory interplay in enabling temporal expectations
    • Abstract: Publication date: January 2018
      Source:Cognition, Volume 170
      Author(s): Felix Ball, Lara E. Michels, Carsten Thiele, Toemme Noesselt
      Temporal regularities can guide our attention to focus on a particular moment in time and to be especially vigilant just then. Previous research provided evidence for the influence of temporal expectation on perceptual processing in unisensory auditory, visual, and tactile contexts. However, in real life we are often exposed to a complex and continuous stream of multisensory events. Here we tested – in a series of experiments – whether temporal expectations can enhance perception in multisensory contexts and whether this enhancement differs from enhancements in unisensory contexts. Our discrimination paradigm contained near-threshold targets (subject-specific 75% discrimination accuracy) embedded in a sequence of distractors. The likelihood of target occurrence (early or late) was manipulated block-wise. Furthermore, we tested whether spatial and modality-specific target uncertainty (i.e. predictable vs. unpredictable target position or modality) would affect temporal expectation (TE) measured with perceptual sensitivity (d′) and response times (RT). In all our experiments, hidden temporal regularities improved performance for expected multisensory targets. Moreover, multisensory performance was unaffected by spatial and modality-specific uncertainty, whereas unisensory TE effects on d′ but not RT were modulated by spatial and modality-specific uncertainty. Additionally, the size of the temporal expectation effect, i.e. the increase in perceptual sensitivity and decrease of RT, scaled linearly with the likelihood of expected targets. Finally, temporal expectation effects were unaffected by varying target position within the stream. Together, our results strongly suggest that participants quickly adapt to novel temporal contexts, that they benefit from multisensory (relative to unisensory) stimulation and that multisensory benefits are maximal if the stimulus-driven uncertainty is highest. We propose that enhanced informational content (i.e. multisensory stimulation) enables the robust extraction of temporal regularities which in turn boost (uni-)sensory representations.
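      The sensitivity measure d′ used here is the standard signal-detection index. A minimal sketch of its computation — the hit and false-alarm rates below are illustrative, not figures from the study:

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """d' = z(hit rate) - z(false-alarm rate), where z is the
    inverse of the standard normal CDF."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Illustrative values only: 75% hits, 25% false alarms.
print(round(d_prime(0.75, 0.25), 2))  # → 1.35
```

      Because d′ separates sensitivity from response bias, an increase in d′ under temporal expectation (as reported above) reflects genuinely better discrimination, not merely a shift in willingness to respond.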

      PubDate: 2017-10-12T08:00:59Z
       
  • Information density converges in dialogue: Towards an
           information-theoretic model
    • Abstract: Publication date: January 2018
      Source:Cognition, Volume 170
      Author(s): Yang Xu, David Reitter
      The principle of entropy rate constancy (ERC) states that language users distribute information such that words tend to be equally predictable given previous contexts. We examine the applicability of this principle to spoken dialogue, as previous findings primarily rest on written text. The study takes into account the joint-activity nature of dialogue and the topic shift mechanisms that are different from monologue. It examines how the information contributions from the two dialogue partners interactively evolve as the discourse develops. The increase of local sentence-level information density (predicted by ERC) is shown to apply to dialogue overall. However, when the different roles of interlocutors in introducing new topics are identified, their contribution in information content displays a new converging pattern. We draw explanations to this pattern from multiple perspectives: Casting dialogue as an information exchange system would mean that the pattern is the result of two interlocutors maintaining their own context rather than sharing one. Second, we present some empirical evidence that a model of Interactive Alignment may include information density to explain the effect. Third, we argue that building common ground is a process analogous to information convergence. Thus, we put forward an information-theoretic view of dialogue, under which some existing theories of human dialogue may eventually be unified.
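      Sentence-level information density in this line of work is typically the mean per-word surprisal (−log₂ probability) under a language model. A toy sketch using a unigram model estimated from the dialogue itself — real studies use n-gram models trained on large corpora, so this is a stand-in for the method, not the authors' pipeline:

```python
import math
from collections import Counter

def sentence_information(sentences):
    """Mean per-word information, -log2 p(w), for each sentence,
    with word probabilities estimated from the whole corpus
    (a toy unigram language model)."""
    words = [w for s in sentences for w in s.split()]
    counts = Counter(words)
    total = len(words)
    info = lambda w: -math.log2(counts[w] / total)
    return [sum(info(w) for w in s.split()) / len(s.split())
            for s in sentences]

dialogue = ["the cat sat", "the cat sat on the mat", "a dog ran"]
densities = sentence_information(dialogue)
# Sentences made of rarer words carry higher per-word information.
```

      Plotting such densities over the course of a dialogue, separately for each interlocutor, is the kind of analysis that reveals the converging pattern described above.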

      PubDate: 2017-10-12T08:00:59Z
       
  • Can monolinguals be like bilinguals? Evidence from dialect switching
    • Abstract: Publication date: January 2018
      Source:Cognition, Volume 170
      Author(s): Neil W. Kirk, Vera Kempe, Kenneth C. Scott-Brown, Andrea Philipp, Mathieu Declerck
      Bilinguals rely on cognitive control mechanisms like selective activation and inhibition of lexical entries to prevent intrusions from the non-target language. We present cross-linguistic evidence that these mechanisms also operate in bidialectals. Thirty-two native German speakers who sometimes use the Öcher Platt dialect, and thirty-two native English speakers who sometimes use the Dundonian Scots dialect completed a dialect-switching task. Naming latencies were higher for switch than for non-switch trials, and lower for cognate compared to non-cognate nouns. Switch costs were symmetrical, regardless of whether participants actively used the dialect or not. In contrast, sixteen monodialectal English speakers, who performed the dialect-switching task after being trained on the Dundonian words, showed asymmetrical switch costs with longer latencies when switching back into Standard English. These results are reminiscent of findings for balanced vs. unbalanced bilinguals, and suggest that monolingual dialect speakers can recruit control mechanisms in similar ways as bilinguals.

      PubDate: 2017-10-12T08:00:59Z
       
  • Where do spontaneous first impressions of faces come from?
    • Abstract: Publication date: January 2018
      Source:Cognition, Volume 170
      Author(s): Harriet Over, Richard Cook
      Humans spontaneously attribute a wide range of traits to strangers based solely on their facial features. These first impressions are known to exert striking effects on our choices and behaviours. In this paper, we provide a theoretical account of the origins of these spontaneous trait inferences. We describe a novel framework (‘Trait Inference Mapping’) in which trait inferences are products of mappings between locations in ‘face space’ and ‘trait space’. These mappings are acquired during ontogeny and allow excitation of face representations to propagate automatically to associated trait representations. This conceptualization provides a framework within which the relative contribution of ontogenetic experience and genetic inheritance can be considered. Contrary to many existing ideas about the origins of trait inferences, we propose only a limited role for innate mechanisms and natural selection. Instead, our model explains inter-observer consistency by appealing to cultural learning and physiological responses that facilitate or ‘canalise’ particular face-trait mappings. Our TIM framework has both theoretical and substantive implications, and can be extended to trait inferences from non-facial cues to provide a unified account of first impressions.

      PubDate: 2017-10-12T08:00:59Z
       
  • Is utilitarian sacrifice becoming more morally permissible?
    • Abstract: Publication date: January 2018
      Source:Cognition, Volume 170
      Author(s): Ivar R. Hannikainen, Edouard Machery, Fiery A. Cushman
      A central tenet of contemporary moral psychology is that people typically reject active forms of utilitarian sacrifice. Yet, evidence for secularization and declining empathic concern in recent decades suggests the possibility of systematic change in this attitude. In the present study, we employ hypothetical dilemmas to investigate whether judgments of utilitarian sacrifice are becoming more permissive over time. In a cross-sectional design, age negatively predicted utilitarian moral judgment (Study 1). To examine whether this pattern reflected processes of maturation, we asked a panel to re-evaluate several moral dilemmas after an eight-year interval but observed no overall change (Study 2). In contrast, a more recent age-matched sample revealed greater endorsement of utilitarian sacrifice in a time-lag design (Study 3). Taken together, these results suggest that today’s younger cohorts increasingly endorse a utilitarian resolution of sacrificial moral dilemmas.

      PubDate: 2017-10-05T15:44:43Z
       
  • The emerging causal understanding of institutional objects
    • Abstract: Publication date: January 2018
      Source:Cognition, Volume 170
      Author(s): Alexander Noyes, Frank C. Keil, Yarrow Dunham
      Institutional objects, such as money, drivers’ licenses, and borders, have functions because of their social roles rather than their immediate physical properties. These objects are causally different than standard artifacts (e.g. hammers, chairs, and cars), sharing more commonality with other social roles. Thus, they inform psychological theories of human-made objects as well as children’s emerging understanding of social reality. We examined whether children (N=180, ages 4–9) differentiate institutional objects from standard artifacts. Specifically, we examine whether children understand that mutual intentions (i.e., the intentions of a social collective) underlie the functional affordances of institutional objects in ways that they do not for standard artifacts. We find that young children assimilate institutional objects into their intuitive theories of standard artifacts; children begin to differentiate between the domains in the elementary school years.

      PubDate: 2017-10-05T15:44:43Z
       
  • The priming of basic combinatory responses in MEG
    • Abstract: Publication date: January 2018
      Source:Cognition, Volume 170
      Author(s): Esti Blanco-Elorrieta, Victor S. Ferreira, Paul Del Prato, Liina Pylkkänen
      Priming has been a powerful tool for the study of human memory and especially the memory representations relevant for language. However, although it is well established that lexical access can be primed, we do not know exactly what types of computations can be primed above the word level. This work took a neurobiological approach and assessed the ways in which the complex representation of a minimal combinatory phrase, such as red boat, can be primed, as evidenced by the spatiotemporal profiles of magnetoencephalography (MEG) signals. Specifically, we built upon recent progress on the neural signatures of phrasal composition and tested whether the brain activities implicated for the basic combination of two words could be primed. In two experiments, MEG was recorded during a picture naming task where the prime trials were designed to replicate previously reported combinatory effects and the target trials to test whether those combinatory effects could be primed. The manipulation of the primes was successful in eliciting larger activity for adjective-noun combinations than single nouns in left anterior temporal and ventromedial prefrontal cortices, replicating prior MEG studies on parallel contrasts. Priming of similarly timed activity was observed during target trials in anterior temporal cortex, but only when the prime and target shared an adjective. No priming in temporal cortex was observed for single word repetition and two control tasks showed that the priming effect was not elicited if the prime pictures were simply viewed but not named. In sum, this work provides evidence that very basic combinatory operations can be primed, with the necessity for some lexical overlap between prime and target suggesting combinatory conceptual, as opposed to syntactic processing. 
Both our combinatory and priming effects were early, onsetting between 100 and 150 ms after picture onset, and thus are likely to reflect the very earliest planning stages of a combinatory message. Thus our findings suggest that at the earliest stages of combinatory planning in production, a combinatory memory representation is formed that affects the planning of a relevantly similar combination on a subsequent trial.

      PubDate: 2017-09-28T03:23:20Z
       
  • The role of reference in cross-situational word learning
    • Abstract: Publication date: January 2018
      Source:Cognition, Volume 170
      Author(s): Felix Hao Wang, Toben H. Mintz
      Word learning involves massive ambiguity, since in a particular encounter with a novel word, there are an unlimited number of potential referents. One proposal for how learners surmount the problem of ambiguity is that learners use cross-situational statistics to constrain the ambiguity: When a word and its referent co-occur across multiple situations, learners will associate the word with the correct referent. Yu and Smith (2007) propose that these co-occurrence statistics are sufficient for word-to-referent mapping. Alternative accounts hold that co-occurrence statistics alone are insufficient to support learning, and that learners are further guided by knowledge that words are referential (e.g., Waxman & Gelman, 2009). However, no behavioral word learning studies we are aware of explicitly manipulate subjects’ prior assumptions about the role of the words in the experiments in order to test the influence of these assumptions. In this study, we directly test whether, when faced with referential ambiguity, co-occurrence statistics are sufficient for word-to-referent mappings in adult word-learners. Across a series of cross-situational learning experiments, we varied the degree to which there was support for the notion that the words were referential. At the same time, the statistical information about the words’ meanings was held constant. When we overrode support for the notion that words were referential, subjects failed to learn the word-to-referent mappings, but otherwise they succeeded. Thus, cross-situational statistics were useful only when learners had the goal of discovering mappings between words and referents. We discuss the implications of these results for theories of word learning in children’s language acquisition.
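      The co-occurrence mechanism at issue can be sketched with a toy associative learner — the nonsense words and referents below are invented for illustration, not stimuli from the study:

```python
from collections import defaultdict

def cross_situational_counts(scenes):
    """Accumulate word-referent co-occurrence counts across scenes.
    Each scene pairs a set of words with a set of candidate referents;
    within a scene, every word co-occurs with every referent."""
    counts = defaultdict(lambda: defaultdict(int))
    for words, referents in scenes:
        for w in words:
            for r in referents:
                counts[w][r] += 1
    return counts

def best_referent(counts, word):
    """Pick the referent most often co-present with the word."""
    return max(counts[word], key=counts[word].get)

# Each scene is ambiguous on its own; the ambiguity resolves
# only across scenes.
scenes = [({"dax", "blick"}, {"DOG", "BALL"}),
          ({"dax", "wug"},   {"DOG", "CUP"}),
          ({"blick", "wug"}, {"BALL", "CUP"})]
counts = cross_situational_counts(scenes)
print(best_referent(counts, "dax"))  # → DOG
```

      The study's point is that this statistical machinery, although sufficient in principle, only produced learning in participants when the task framing supported treating the words as referential.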

      PubDate: 2017-09-28T03:23:20Z
       
  • Reasoning strategies modulate gender differences in emotion processing
    • Abstract: Publication date: January 2018
      Source:Cognition, Volume 170
      Author(s): Henry Markovits, Bastien Trémolière, Isabelle Blanchette
      The dual strategy model of reasoning has proposed that people’s reasoning can be understood as a combination of two different ways of processing information related to problem premises: a counterexample strategy that examines information for explicit potential counterexamples and a statistical strategy that uses associative access to generate a likelihood estimate of putative conclusions. Previous studies have examined this model in the context of basic conditional reasoning tasks. However, the information processing distinction that underlies the dual strategy model can be seen as a basic description of differences in reasoning (similar to that described by many general dual process models of reasoning). In two studies, we examine how these differences in reasoning strategy may relate to processing very different information; specifically, we focus on previously observed gender differences in processing negative emotions. Study 1 examined the intensity of emotional reactions to a film clip inducing primarily negative emotions. Study 2 examined the speed at which participants determine the emotional valence of sequences of negative images. In both studies, no gender differences were observed among participants using a counterexample strategy. Among participants using a statistical strategy, females produced significantly stronger emotional reactions than males (in Study 1) and were faster to recognize the valence of negative images than were males (in Study 2). Results show that the processing distinction underlying the dual strategy model of reasoning generalizes to the processing of emotions.

      PubDate: 2017-09-28T03:23:20Z
       
  • Retraction notice to "Fueling Doubt and Openness: Experiencing the
           Unconscious, Constructed Nature of Perception Induces Uncertainty and
           Openness to Change" Cognition, Volume 137, April 2015, Pages 1-8
    • Abstract: Publication date: December 2017
      Source:Cognition, Volume 169
      Author(s): William Hart, Alexa M. Tullett, Wyley B. Shreves, Zachary Fetterman


      PubDate: 2017-09-28T03:23:20Z
       
  • Competing perspectives on frames of reference in language and thought
    • Abstract: Publication date: January 2018
      Source:Cognition, Volume 170
      Author(s): Peggy Li, Linda Abarbanell
      A study found that Dutch-speaking children, who prefer an egocentric (left/right) reference frame when describing spatial relationships, and Haiǁom-speaking children, who use a geocentric (north/south) frame, had difficulty recreating small-scale spatial arrays using their language-incongruent system (Haun, Rapold, Janzen, & Levinson, 2011). In five experiments, we reconciled these results with another study showing that English (egocentric) and Tseltal Mayan (geocentric) speakers can flexibly use both systems (Abarbanell, 2010; Li, Abarbanell, Gleitman, & Papafragou, 2011). In replicating and extending Haun et al. (Experiment 1), English- but not Tseltal-speaking children could use their language-incongruent system when the instructions used their non-preferred frame of reference. Perseveration due to task order may explain the discrepancies between present English- and previous Dutch-speaking children, while not understanding task instructions using left/right language may explain why present Tseltal- and previous Haiǁom-speaking children had difficulty with their language-incongruent systems. In support, Tseltal-speaking children could use an egocentric system when the instructions were conveyed without left/right language (Experiments 2–4), and many did not know left/right language (Experiment 5). These findings help reconcile seemingly conflicting sets of results and suggest that task constraints, rather than language, determine which system is easier to use (Experiment 2 vs. 3).

      PubDate: 2017-09-21T21:41:41Z
       
  • Individual differences in switching and inhibition predict
           perspective-taking across the lifespan
    • Abstract: Publication date: January 2018
      Source:Cognition, Volume 170
      Author(s): Madeleine R. Long, William S. Horton, Hannah Rohde, Antonella Sorace
      Studies exploring the influence of executive functions (EF) on perspective-taking have focused on inhibition and working memory in young adults or clinical populations. Less consideration has been given to more complex capacities that also involve switching attention between perspectives, or to changes in EF and concomitant effects on perspective-taking across the lifespan. To address this, we assessed whether individual differences in inhibition and attentional switching in healthy adults (ages 17–84) predict performance on a task in which speakers identified targets for a listener with size-contrasting competitors in common or privileged ground. Modification differences across conditions decreased with age. Further, perspective taking interacted with EF measures: youngest adults’ sensitivity to perspective was best captured by their inhibitory performance; oldest adults’ sensitivity was best captured by switching performance. Perspective-taking likely involves multiple aspects of EF, as revealed by considering a wider range of EF tasks and individual capacities across the lifespan.

      PubDate: 2017-09-21T21:41:41Z
       
  • Human infants’ understanding of social imitation: Inferences of
           affiliation from third party observations
    • Abstract: Publication date: January 2018
      Source:Cognition, Volume 170
      Author(s): Lindsey J. Powell, Elizabeth S. Spelke
      Imitation is ubiquitous in positive social interactions. For adult and child observers, it also supports inferences about the participants in such interactions and their social relationships, but the origins of these inferences are obscure. Do infants attach social significance to this form of interaction? Here we test 4- to 5.5-month-old infants’ interpretation of imitation, asking if the imitative interactions they observe support inferences of social affiliation, across 10 experimental conditions that varied the modality of the imitation (movement vs. sound), the roles of specific characters (imitators vs. targets), the number of characters in the displays (3 vs. 5), and the number of parties initiating affiliative test events (1 vs. 2). These experiments, together with one experiment conducted with 12-month-old infants, yielded three main findings. First, infants expect that characters who engaged in imitation will approach and affiliate with the characters whom they imitated. Second, infants show no evidence of expecting that characters who were targets of imitation will approach and affiliate with their imitators. Third, analyzing imitative interactions is difficult for young infants, whose expectations vary in strength depending on the number of characters to be tracked and the number of affiliative actors to be compared. These findings have implications for our understanding of social imitation, and they provide methods for advancing understanding of other aspects of early social cognitive development.

      PubDate: 2017-09-21T21:41:41Z
       
  • Towards an integrative model of visual short-term memory maintenance:
           Evidence from the effects of attentional control, load, decay, and their
           interactions in childhood
    • Abstract: Publication date: December 2017
      Source:Cognition, Volume 169
      Author(s): Andria Shimi, Gaia Scerif
      Over the past decades there has been a surge of research aiming to shed light on the nature of capacity limits to visual short-term memory (VSTM). However, an integrative account of this evidence is currently missing. We argue that investigating parameters constraining VSTM in childhood suggests a novel integrative model of VSTM maintenance, and that this in turn informs mechanisms of VSTM maintenance in adulthood. Over 3 experiments with 7-year-olds and young adults (total N=206), we provide evidence for multiple cognitive processes interacting to constrain VSTM performance. While age-related increases in storage capacity are indisputable, we replicate the finding that attentional processes control what information will be encoded and maintained in VSTM in the face of increased competition. Therefore, a central process to the current model is attentional refreshment, a mechanism that is thought to reactivate and strengthen the signal of the visual representations. Critically, here we also show that attentional influences on VSTM are further constrained by additional factors, traditionally studied to the exclusion of each other, such as memory load and temporal decay. We propose that these processes work synergistically in an elegant manner to capture the adult end state, whereas their less refined efficiency and modulations in childhood account for the smaller VSTM capacity that 7-year-olds demonstrate compared to older individuals. We conclude that going beyond the investigation of single cognitive mechanisms, to their interactions, holds the promise to understand both developing and fully developed maintenance in VSTM.

      PubDate: 2017-09-21T21:41:41Z
       
  • What makes a face photo a ‘good likeness’?
    • Abstract: Publication date: January 2018
      Source:Cognition, Volume 170
      Author(s): Kay L. Ritchie, Robin S.S. Kramer, A. Mike Burton
      Photographs of people are commonly said to be ‘good likenesses’ or ‘poor likenesses’, and this is a concept that we readily understand. Despite this, there has been no systematic investigation of what makes an image a good likeness, or of which cognitive processes are involved in making such a judgement. In three experiments, we investigate likeness judgements for different types of images: natural images of film stars (Experiment 1), images of film stars from specific films (Experiment 2), and iconic images and face averages (Experiment 3). In all three experiments, participants rated images for likeness and completed speeded name verification tasks. We consistently show that participants are faster to identify images which they have previously rated as a good likeness compared to a poor likeness. We also consistently show that the more familiar we are with someone, the higher likeness rating we give to all images of them. A key finding is that our perception of likeness is idiosyncratic (Experiments 1 and 2), and can be tied to our specific experience of each individual (Experiment 2). We argue that likeness judgements require a comparison between the stimulus and our own representation of the person, and that this representation differs according to our prior experience with that individual. This has theoretical implications for our understanding of how we represent familiar people, and practical implications for how we go about selecting images for identity purposes such as photo-ID.

      PubDate: 2017-09-17T00:39:18Z
       
  • Waiting for lexical access: Cochlear implants or severely degraded input
           lead listeners to process speech less incrementally
    • Abstract: Publication date: December 2017
      Source:Cognition, Volume 169
      Author(s): Bob McMurray, Ashley Farris-Trimble, Hannah Rigler
      Spoken language unfolds over time. Consequently, there are brief periods of ambiguity, when incomplete input can match many possible words. Typical listeners solve this problem by immediately activating multiple candidates which compete for recognition. In two experiments using the visual world paradigm, we examined real-time lexical competition in prelingually deaf cochlear implant (CI) users, and normal hearing (NH) adults listening to severely degraded speech. In Experiment 1, adolescent CI users and NH controls matched spoken words to arrays of pictures including pictures of the target word and phonological competitors. Eye-movements to each referent were monitored as a measure of how strongly that candidate was considered over time. Relative to NH controls, CI users showed a large delay in fixating any object, less competition from onset competitors (e.g., sandwich after hearing sandal), and increased competition from rhyme competitors (e.g., candle after hearing sandal). Experiment 2 observed the same pattern with NH listeners hearing highly degraded speech. These studies suggest that in contrast to all prior studies of word recognition in typical listeners, listeners recognizing words in severely degraded conditions can exhibit a substantively different pattern of dynamics, waiting to begin lexical access until substantial information has accumulated.

      PubDate: 2017-09-17T00:39:18Z
       
  • The fate of distractors in working memory: No evidence for their active
           removal
    • Abstract: Publication date: December 2017
      Source:Cognition, Volume 169
      Author(s): Isabelle Dagry, Pierre Barrouillet
      Not all the information processed in working memory (WM) must be retained. Due to the strict limitations of its capacity, the mechanisms that prevent WM from being cluttered and choked by no longer relevant information are of paramount importance. The present study tested the hypothesis, put forward by the SOB-CS model, of an active and attention-demanding mechanism that removes no-longer-relevant items from WM. Such a mechanism has been advocated to account for the well-known fact that, in complex span tasks, processing distractors at a slower pace results in better recall of memory items. According to the SOB-CS model, a slow pace would free more time for removing distractors, thus alleviating the interference they create on target items. In direct contradiction with this hypothesis, a first experiment demonstrated that distractors are not less, but more accessible at the end of complex span task trials in which they have been processed at a slow rather than a fast pace. Using the repetition priming effect occurring in a lexical decision task inserted as the processing component within a complex span task, a second experiment established that distractors processed at a slower pace do not elicit weaker, but stronger repetition priming effects, indicating that they have not been removed. Along with previous findings, the present study not only shows that there is no trace of distractor removal in the long term, in the short term, or immediately after processing, but demonstrates that memory traces of distractors are stronger in situations assumed by the SOB-CS model to involve more complete removal. This evidence suggests that distractors are not actively removed from working memory after having been processed.

      PubDate: 2017-09-11T10:03:31Z
       
  • Mens rea ascription, expertise and outcome effects: Professional judges
           surveyed
    • Abstract: Publication date: December 2017
      Source:Cognition, Volume 169
      Author(s): Markus Kneer, Sacha Bourgeois-Gironde
      A coherent practice of mens rea (‘guilty mind’) ascription in criminal law presupposes a concept of mens rea which is insensitive to the moral valence of an action’s outcome. For instance, an assessment of whether an agent harmed another person intentionally should be unaffected by the severity of harm done. Ascriptions of intentionality made by laypeople, however, are subject to a strong outcome bias. As demonstrated by the Knobe effect, a knowingly incurred negative side effect is standardly judged intentional, whereas a positive side effect is not. We report the first empirical investigation into intentionality ascriptions made by professional judges, which finds (i) that professionals are sensitive to the moral valence of outcome type, and (ii) that the worse the outcome, the higher the propensity to ascribe intentionality. The data shows the intentionality ascriptions of professional judges to be inconsistent with the concept of mens rea supposedly at the foundation of criminal law.

      PubDate: 2017-09-11T10:03:31Z
       
  • The influence of time on task on mind wandering and visual working memory
    • Abstract: Publication date: December 2017
      Source:Cognition, Volume 169
      Author(s): Marissa Krimsky, Daniel E. Forster, Maria M. Llabre, Amishi P. Jha
      Working memory relies on executive resources for successful task performance, with higher demands necessitating greater resource engagement. In addition to mnemonic demands, prior studies suggest that internal sources of distraction, such as mind wandering (i.e., having off-task thoughts) and greater time on task, may tax executive resources. Herein, the consequences of mnemonic demand, mind wandering, and time on task were investigated during a visual working memory task. Participants (N=143) completed a delayed-recognition visual working memory task, with mnemonic load for visual objects manipulated across trials (1 item=low load; 2 items=high load) and subjective mind wandering assessed intermittently throughout the experiment using a self-report Likert-type scale (1=on-task, 6=off-task). Task performance (correct/incorrect response) and self-reported mind wandering data were evaluated by hierarchical linear modeling to track trial-by-trial fluctuations. Performance declined with greater time on task, and the rate of decline was steeper for high vs low load trials. Self-reported mind wandering increased over time, and significantly varied as a function of both load and time on task. Participants reported greater mind wandering at the beginning of the experiment for low vs. high load trials; however, with greater time on task, more mind wandering was reported during high vs. low load trials. These results suggest that the availability of executive resources in support of working memory maintenance processes fluctuates in a demand-sensitive manner with time on task, and may be commandeered by mind wandering.

      PubDate: 2017-09-05T15:13:58Z
       
  • Hidden Markov model analysis reveals the advantage of analytic eye
           movement patterns in face recognition across cultures
    • Abstract: Publication date: December 2017
      Source:Cognition, Volume 169
      Author(s): Tim Chuk, Kate Crookes, William G. Hayward, Antoni B. Chan, Janet H. Hsiao
      It remains controversial whether culture modulates eye movement behavior in face recognition. Inconsistent results have been reported regarding whether cultural differences in eye movement patterns exist, whether these differences affect recognition performance, and whether participants use similar eye movement patterns when viewing faces from different ethnicities. These inconsistencies may be due to substantial individual differences in eye movement patterns within a cultural group. Here we addressed this issue by conducting individual-level eye movement data analysis using hidden Markov models (HMMs). Each individual’s eye movements were modeled with an HMM. We clustered the individual HMMs according to their similarities and discovered three common patterns in both Asian and Caucasian participants: holistic (looking mostly at the face center), left-eye-biased analytic (looking mostly at the two individual eyes in addition to the face center with a slight bias to the left eye), and right-eye-biased analytic (looking mostly at the right eye in addition to the face center). The frequency of participants adopting the three patterns did not differ significantly between Asians and Caucasians, suggesting little modulation from culture. Significantly more participants (75%) showed similar eye movement patterns when viewing own- and other-race faces than different patterns. Most importantly, participants with left-eye-biased analytic patterns performed significantly better than those using either holistic or right-eye-biased analytic patterns. These results suggest that active retrieval of facial feature information through an analytic eye movement pattern may be optimal for face recognition regardless of culture.
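
      The per-individual HMM approach can be illustrated with a toy sketch (the regions, probabilities, and fixation sequence below are hypothetical, not from the study): an observer's fixation sequence is scored under candidate HMMs with the forward algorithm, and the best-fitting model summarizes that observer's viewing pattern.

```python
import numpy as np

# Toy HMM over two hidden regions of interest, emitting discrete fixation
# labels 0 ('eyes') and 1 ('face center'). All numbers are hypothetical.
trans = np.array([[0.8, 0.2],
                  [0.2, 0.8]])        # state-transition probabilities
init = np.array([0.5, 0.5])           # initial state distribution

emit_analytic = np.array([[0.9, 0.1],   # 'analytic' model: eyes dominate
                          [0.7, 0.3]])
emit_holistic = np.array([[0.3, 0.7],   # 'holistic' model: center dominates
                          [0.1, 0.9]])

def sequence_loglik(obs, emit):
    """Log-likelihood of a fixation sequence under an HMM
    (scaled forward algorithm)."""
    alpha = init * emit[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
        loglik += np.log(alpha.sum())
        alpha = alpha / alpha.sum()
    return loglik

fixations = [0, 0, 1, 0]  # a viewer who mostly fixates the eyes
print(sequence_loglik(fixations, emit_analytic) >
      sequence_loglik(fixations, emit_holistic))  # True: analytic fits better
```

      In the study itself an HMM was fitted per individual and the fitted models were then clustered by similarity; the sketch only shows the likelihood computation that such similarity comparisons rest on.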

      PubDate: 2017-09-05T15:13:58Z
       
  • Preparing for what might happen: An episodic specificity induction impacts
           the generation of alternative future events
    • Abstract: Publication date: December 2017
      Source:Cognition, Volume 169
      Author(s): Helen G. Jing, Kevin P. Madore, Daniel L. Schacter
      A critical adaptive feature of future thinking involves the ability to generate alternative versions of possible future events. However, little is known about the nature of the processes that support this ability. Here we examined whether an episodic specificity induction – brief training in recollecting details of a recent experience that selectively impacts tasks that draw on episodic retrieval – (1) boosts alternative event generation and (2) changes one’s initial perceptions of negative future events. In Experiment 1, an episodic specificity induction significantly increased the number of alternative positive outcomes that participants generated to a series of standardized negative events, compared with a control induction not focused on episodic specificity. We also observed larger decreases in the perceived plausibility and negativity of the original events in the specificity condition, where participants generated more alternative outcomes, relative to the control condition. In Experiment 2, we replicated and extended these findings using a series of personalized negative events. Our findings support the idea that episodic memory processes are involved in generating alternative outcomes to anticipated future events, and that boosting the number of alternative outcomes is related to subsequent changes in the perceived plausibility and valence of the original events, which may have implications for psychological well-being.

      PubDate: 2017-09-05T15:13:58Z
       
  • Where you are affects what you can easily imagine: Environmental geometry
           elicits sensorimotor interference in remote perspective taking
    • Abstract: Publication date: December 2017
      Source:Cognition, Volume 169
      Author(s): Bernhard E. Riecke, Timothy P. McNamara
      Imagined perspective switches are notoriously difficult, a fact often ascribed to sensorimotor interference between one’s to-be-imagined and actual orientation. Here, we demonstrate similar interference effects even when participants know they are in a remote environment with an unknown spatial relation to the learning environment. Participants learned 15 target objects irregularly arranged in an office from one orientation (0°, 120°, or 240°). Participants were blindfolded and disoriented before being wheeled to a test room of similar geometry (Experiment 1) or different geometry (Experiment 2). Participants were seated facing 0°, 120°, or 240°, and asked to perform judgments of relative direction (JRD; e.g., imagine facing “pen”, point to “phone”). JRD performance was improved when participants’ to-be-imagined orientation in the learning room was aligned with their physical orientation in the current (test) room. Conversely, misalignment led to sensorimotor interference. These concurrent reference frame facilitation/interference effects were further enhanced when the current and to-be-imagined environments were more similar. Whereas sensorimotor alignment improved absolute and relative pointing accuracy, sensorimotor misalignment predominantly increased response times, presumably due to increased cognitive demands. These sensorimotor facilitation/interference effects were sustained and could not be sufficiently explained by initial retrieval and transformation costs. We propose that facilitation/interference effects occurred between concurrent egocentric representations of the learning and test environment in working memory. Results suggest that merely being in a rectangular room might be sufficient to automatically re-anchor one’s representation and thus produce orientation-specific interference. This should be considered when designing perspective-taking experiments to avoid unintended biases and concurrent reference frame alignment effects.

      PubDate: 2017-08-17T22:19:07Z
       
  • Do you hear ‘feather’ when listening to ‘rain’? Lexical tone
           activation during unconscious translation: Evidence from Mandarin-English
           bilinguals
    • Abstract: Publication date: December 2017
      Source:Cognition, Volume 169
      Author(s): Xin Wang, Juan Wang, Jeffrey G. Malins
      Although lexical tone is a highly prevalent phonetic cue in human languages, its role in bilingual spoken word recognition is not well understood. The present study investigates whether and how adult bilinguals, who use pitch contours to disambiguate lexical items in one language but not the other, access a tonal L1 when exclusively processing a non-tonal L2. Using the visual world paradigm, we show that Mandarin-English listeners automatically activated Mandarin translation equivalents of English target words such as ‘rain’ (Mandarin ‘yu3’), and consequently were distracted by competitors whose segments and tones overlapped with the translations of English target words (‘feather’, also ‘yu3’ in Mandarin). Importantly, listeners were not distracted by competitors that overlapped with the translations of target words in all segments but not tone (‘fish’; Mandarin ‘yu2’), nor were they distracted by competitors that overlapped with the translations of target words in rime and tone (‘wheat’, Mandarin ‘gu3’). These novel results demonstrate implicit access to L1 lexical representations through automatic/unconscious translation, as a result of cross-language top-down and/or lateral influence, and highlight the critical role of lexical tone activation in bilingual lexical access.

      PubDate: 2017-08-17T22:19:07Z
       
  • Revealing the mechanisms of human face perception using dynamic apertures
    • Abstract: Publication date: December 2017
      Source:Cognition, Volume 169
      Author(s): Jennifer Murphy, Richard Cook
      Faces are notoriously hard to perceive when turned upside-down. It is often claimed that perceptual decrements reflect a qualitative switch from parallel whole-face processing, to a serial analysis of individual features. To test this view, we examined observers’ ability to categorize faces presented in their entirety, or viewed through a dynamic aperture that moved incrementally across the facial image. Exposing faces region-by-region disrupts holistic processing, but permits serial analysis of local features. In line with the holistic accounts, we predicted that aperture viewing would greatly impair judgements of upright, but not inverted faces. As expected, identity, gender, age, and expression were categorized more precisely when faces were presented upright and in their entirety. Contrary to holistic accounts, however, the detrimental effects of inversion seen in the whole-face condition were no greater than in the aperture condition. Moreover, we found comparable aperture effects for upright and inverted faces; observers exhibited less decision noise when faces were viewed in their entirety, than when viewed through the aperture, irrespective of orientation. We replicate these findings in control experiments and show that the same pattern is seen irrespective of the direction of aperture transition or the nature of the fill used to replace the occluded regions of the to-be-judged image. These results challenge holistic accounts of the face inversion effect and support an alternative interpretation. First, in line with previous findings, they indicate that perceptual decrements when viewing upside-down faces result from impoverished descriptions of local regions, not the loss of whole-face processing. Second, when interpreting inverted faces, access to the wider face context appears to be far more important than currently believed.

      PubDate: 2017-08-17T22:19:07Z
       
  • Spontaneous, modality-general abstraction of a ratio scale
    • Abstract: Publication date: December 2017
      Source:Cognition, Volume 169
      Author(s): Cory D. Bonn, Jessica F. Cantlon
      The existence of a generalized magnitude system in the human mind and brain has been studied extensively but remains elusive because it has not been clearly defined. Here we show that one possibility is the representation of relative magnitudes via ratio calculations: ratios are a naturally dimensionless or abstract quantity that could qualify as a common currency for magnitudes measured on vastly different psychophysical scales and in different sensory modalities, such as size, number, duration, and loudness. In a series of demonstrations based on comparisons of item sequences, we show that subjects spontaneously use knowledge of inter-item ratios within and across sensory modalities and across magnitude domains to rate sequences as more or less similar on a sliding scale. Moreover, they rate ratio-preserved sequences as more similar to each other than sequences in which only ordinal relations are preserved, indicating that subjects are aware of differences in levels of relative-magnitude information preservation. The ubiquity of this ability across many different magnitude pairs, even those sharing no sensory information, suggests a highly general code that could qualify as a candidate for a generalized magnitude representation.
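
      The key property of a ratio code can be made concrete with a toy sketch (hypothetical numbers, not the study's stimuli): inter-item ratios are dimensionless, so sequences measured on entirely different scales can share them exactly, whereas a merely order-preserving sequence need not.

```python
# Inter-item ratios are dimensionless, so they can be compared across
# scales and modalities. A sequence rescaled by a constant factor keeps
# all of its inter-item ratios; an order-preserving sequence need not.

def ratios(seq):
    """Successive inter-item ratios of a magnitude sequence."""
    return [b / a for a, b in zip(seq, seq[1:])]

sizes = [2, 4, 8]            # e.g. visual sizes (arbitrary units)
durations = [50, 100, 200]   # e.g. durations in ms: same 1:2:4 structure
ordinal_only = [50, 60, 200] # same order, different ratios

print(ratios(sizes) == ratios(durations))     # True: ratio-preserved
print(ratios(sizes) == ratios(ordinal_only))  # False: only order preserved
```

      This is the sense in which a ratio-preserved cross-modal pair can look "more similar" than an ordinal-only pair despite sharing no sensory information.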

      PubDate: 2017-08-17T22:19:07Z
       
  • Effects of semantic neighborhood density in abstract and concrete words
    • Abstract: Publication date: December 2017
      Source:Cognition, Volume 169
      Author(s): Megan Reilly, Rutvik H. Desai
      Concrete and abstract words are thought to differ along several psycholinguistic variables, such as frequency and emotional content. Here, we consider another variable, semantic neighborhood density, which has received much less attention, likely because semantic neighborhoods of abstract words are difficult to measure. Using a corpus-based method that creates representations of words that emphasize featural information, the current investigation explores the relationship between neighborhood density and concreteness in a large set of English nouns. Two important observations emerge. First, semantic neighborhood density is higher for concrete than for abstract words, even when other variables are accounted for, especially for smaller neighborhood sizes. Second, the effects of semantic neighborhood density on behavior are different for concrete and abstract words. Lexical decision reaction times are fastest for words with sparse neighborhoods; however, this effect is stronger for concrete words than for abstract words. These results suggest that semantic neighborhood density plays a role in the cognitive and psycholinguistic differences between concrete and abstract words, and should be taken into account in studies involving lexical semantics. Furthermore, the pattern of results with the current feature-based neighborhood measure is very different from that with associatively defined neighborhoods, suggesting that these two methods should be treated as separate measures rather than two interchangeable measures of semantic neighborhoods.
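
      A feature-based neighborhood-density measure of the general kind described can be sketched as follows (the vectors and the specific density definition are hypothetical illustrations, not the corpus representations or measure used in the study): density is taken here as the mean cosine similarity between a word's feature vector and its k nearest neighbors.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def neighborhood_density(word_vec, other_vecs, k=2):
    """Mean cosine similarity to the k nearest semantic neighbors."""
    sims = sorted((cosine(word_vec, v) for v in other_vecs), reverse=True)
    return sum(sims[:k]) / k

# Hypothetical 2-d feature vectors for illustration only.
word = np.array([1.0, 0.0])
close_neighbors = [np.array([0.9, 0.1]), np.array([1.0, 0.2])]
far_neighbors = [np.array([0.0, 1.0]), np.array([-1.0, 0.5])]

# A word surrounded by featurally similar words has a denser neighborhood.
print(neighborhood_density(word, close_neighbors) >
      neighborhood_density(word, far_neighbors))  # True
```

      Under a measure of this shape, "sparse neighborhood" simply means low mean similarity to nearest neighbors, which is the variable the reaction-time effects are reported against.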

      PubDate: 2017-08-17T22:19:07Z
       
  • Developmental surface and phonological dyslexia in both Greek and English
    • Abstract: Publication date: November 2017
      Source:Cognition, Volume 168
      Author(s): Andreas Sotiropoulos, J. Richard Hanley
      The hallmark of developmental surface dyslexia in English and French is inaccurate reading of words with atypical spelling-sound correspondences. According to Douklias, Masterson and Hanley (2009), surface dyslexia can also be observed in Greek (a transparent orthography for reading that does not contain words of this kind). Their findings suggested that surface dyslexia in Greek can be characterized by slow reading of familiar words, and by inaccurate spelling of words with atypical sound-spelling correspondences (Greek is less transparent for spelling than for reading). In this study, we report seven adult cases whose slow reading and impaired spelling accuracy satisfied these criteria for Greek surface dyslexia. When asked to read words with atypical grapheme-phoneme correspondences in English (their second language), their accuracy was severely impaired. A co-occurrence was also observed between impaired spelling of words with atypical phoneme-grapheme correspondences in English and Greek. These co-occurrences provide strong evidence that surface dyslexia genuinely exists in Greek and that slow reading of real words in Greek reflects the same underlying impairment as that which produces inaccurate reading of atypical words in English. Two further individuals were observed with impaired reading and spelling of nonwords in both languages, consistent with developmental phonological dyslexia. Neither of the phonological dyslexics read words slowly. In terms of computational models of reading aloud, these findings suggest that slow reading by dyslexics in transparent orthographies is the consequence of a developmental impairment of the lexical (Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001; Perry, Ziegler, & Zorzi, 2010) or semantic reading route (Plaut, McClelland, Seidenberg, & Patterson, 1996). This outcome provides evidence that the neurophysiological substrate(s) supporting the lexical/semantic and phonological pathways involved in reading and spelling are the same in both Greek and English.

      PubDate: 2017-07-23T18:51:35Z
       
  • Dissociating intuitive physics from intuitive psychology: Evidence from
           Williams syndrome
    • Abstract: Publication date: November 2017
      Source:Cognition, Volume 168
      Author(s): Frederik S. Kamps, Joshua B. Julian, Peter Battaglia, Barbara Landau, Nancy Kanwisher, Daniel D. Dilks
      Prior work suggests that our understanding of how things work (“intuitive physics”) and how people work (“intuitive psychology”) are distinct domains of human cognition. Here we directly test the dissociability of these two domains by investigating knowledge of intuitive physics and intuitive psychology in adults with Williams syndrome (WS) – a genetic developmental disorder characterized by severely impaired spatial cognition, but relatively spared social cognition. WS adults and mental-age matched (MA) controls completed an intuitive physics task and an intuitive psychology task. If intuitive physics is a distinct domain (from intuitive psychology), then we should observe differential impairment on the physics task for individuals with WS compared to MA controls. Indeed, adults with WS performed significantly worse on the intuitive physics than the intuitive psychology task, relative to controls. These results support the hypothesis that knowledge of the physical world can be disrupted independently from knowledge of the social world.

      PubDate: 2017-07-12T08:02:52Z
       
  • Social evaluation of intentional, truly accidental, and negligently
           accidental helpers and harmers by 10-month-old infants
    • Abstract: Publication date: November 2017
      Source:Cognition, Volume 168
      Author(s): Brandon M. Woo, Conor M. Steckler, Doan T. Le, J. Kiley Hamlin
      Whereas adults largely base their evaluations of others’ actions on others’ intentions, a host of research in developmental psychology suggests that younger children privilege outcome over intention, leading them to condemn accidental harm. To date, this question has been examined only with children capable of language production. In the current studies, we utilized a non-linguistic puppet show paradigm to examine the evaluation of intentional and accidental acts of helping or harming in 10-month-old infants. In Experiment 1 (n = 64), infants preferred intentional over accidental helpers but accidental over intentional harmers, suggesting that by this age infants incorporate information about others’ intentions into their social evaluations. In Experiment 2 (n = 64), infants did not distinguish “negligently” accidental from intentional helpers or harmers, suggesting that infants may find negligent accidents somewhat intentional. In Experiment 3 (n = 64), we found that infants preferred truly accidental over negligently accidental harmers, but did not reliably distinguish negligently accidental from truly accidental helpers, consistent with past work with adults and children suggesting that humans are particularly sensitive to negligently accidental harm. Together, these results imply that infants engage in intention-based social evaluation of those who help and harm accidentally, so long as those accidents do not stem from negligence.

      PubDate: 2017-07-12T08:02:52Z
       
  • Children’s quantitative Bayesian inferences from natural frequencies
           and number of chances
    • Abstract: Publication date: November 2017
      Source:Cognition, Volume 168
      Author(s): Stefania Pighin, Vittorio Girotto, Katya Tentori
      Zhu and Gigerenzer (2006) showed that an appreciable number of Chinese children aged between 9 and 12 years made correct quantitative Bayesian inferences requiring the integration of priors and likelihoods as long as they were presented with numerical information phrased in terms of natural frequencies. In this study, we sought to replicate this finding and extend the investigation of children’s Bayesian reasoning to a different numerical format (chances) and other probability questions (distributive and relative). In Experiment 1, a sample of Italian children was presented with the natural frequency version of five Bayesian inference problems employed by Zhu and Gigerenzer (2006), but only a tiny minority of them were able to produce correct responses. In Experiment 2, we found that the children’s accuracy, as well as the coherence between their probability judgments, depended on the type of question but not on the format (natural frequency vs. chance) in which information was presented. We conclude that children’s competence in drawing quantitative Bayesian inferences is lower than suggested by Zhu and Gigerenzer (2006) and, similarly to what happens with adults, it relies more on a problem representation that fosters an extensional evaluation of possibilities than on a specific numerical format.
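
      What the natural-frequency format buys computationally can be shown with a small worked example (the cover story and numbers are generic illustrations, not one of the study's problems): in frequency format the posterior reduces to a ratio of counts, whereas the probability format requires explicit application of Bayes' rule.

```python
# Hypothetical problem in natural-frequency format:
# Of 100 children, 30 wear red hats. 20 of the 30 red-hat wearers carry a
# whistle; 14 of the 70 remaining children also carry a whistle.
# Q: of the whistle-carriers, what fraction wear red hats?

red_whistle, other_whistle = 20, 14
posterior_freq = red_whistle / (red_whistle + other_whistle)

# The same inference in probability format requires explicit Bayes' rule:
p_red = 30 / 100
p_whistle_given_red = 20 / 30
p_whistle_given_other = 14 / 70
posterior_bayes = (p_red * p_whistle_given_red) / (
    p_red * p_whistle_given_red + (1 - p_red) * p_whistle_given_other)

print(round(posterior_freq, 3), round(posterior_bayes, 3))  # 0.588 0.588
```

      The two routes give identical answers; the claim under test is whether the count-ratio route is easier for children, not whether it is mathematically different.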

      PubDate: 2017-07-12T08:02:52Z
       
  • Successful voluntary recruitment of cognitive control under acute stress
    • Abstract: Publication date: November 2017
      Source:Cognition, Volume 168
      Author(s): Franziska Plessow, Susann Schade, Clemens Kirschbaum, Rico Fischer
      In highly complex task situations (dual tasking), stressed individuals have been shown to adopt the most resource-efficient task processing strategy, accepting performance costs. We argue that interpreting this behavior in terms of a stress-induced impairment of cognitive control might be too simplistic. In the present study, we therefore tested whether stressed individuals are still capable of up-regulating cognitive control when instructed to adopt more resource-intensive strategies, enabling preservation of task performance. Fifty-six participants underwent either an established psychosocial stress induction protocol (Trier Social Stress Test) or a standardized control intervention. Afterwards, all participants performed a dual task in which task prioritization was systematically varied, calling for the adoption of different task processing strategies that require more or less voluntary top-down control. Although individuals in the stress group showed a pronounced stress response, as indicated by salivary cortisol, they were able to recruit more resources and to engage more voluntary top-down control when instructed to do so. This finding suggests an adaptive nature of control regulation under acute stress, reflecting a compensatory capacity with the potential to account for some of the observed ostensible impairments of cognitive control. It calls for careful empirical examination of whether stressed individuals cannot, or simply do not, recruit certain cognitive control functions in a given situation; such examination will inform novel interventions to optimize performance in high-stress environments.

      PubDate: 2017-07-12T08:02:52Z
       
  • No arousal-biased competition in focused visuospatial attention
    • Abstract: Publication date: November 2017
      Source:Cognition, Volume 168
      Author(s): Árni Gunnar Ásgeirsson, Sander Nieuwenhuis
      Arousal sometimes enhances and sometimes impairs perception and memory. A recent theory attempts to reconcile these findings by proposing that arousal amplifies the competition between stimulus representations, strengthening already strong representations and weakening already weak representations. Here, we report a stringent test of this arousal-biased competition theory in the context of focused visuospatial attention. Participants were required to identify a briefly presented target in the context of multiple distractors, which varied in the degree to which they competed for representation with the target, as revealed by psychophysics. We manipulated arousal using emotionally arousing pictures (Experiment 1), alerting tones (Experiment 2) and white-noise stimulation (Experiment 3), and validated these manipulations with electroencephalography and pupillometry. In none of the experiments did we find evidence that arousal modulated the effect of distractor competition on the accuracy of target identification. Bayesian statistics revealed moderate to strong evidence against arousal-biased competition. Modeling of the psychophysical data based on Bundesen’s (1990) theory of visual attention corroborated the conclusion that arousal does not bias competition in focused visuospatial attention.

      PubDate: 2017-07-12T08:02:52Z
       
  • People learn other people’s preferences through inverse
           decision-making
    • Abstract: Publication date: November 2017
      Source:Cognition, Volume 168
      Author(s): Alan Jern, Christopher G. Lucas, Charles Kemp
      People are capable of learning other people’s preferences by observing the choices they make. We propose that this learning relies on inverse decision-making—inverting a decision-making model to infer the preferences that led to an observed choice. In Experiment 1, participants observed 47 choices made by others and ranked them by how strongly each choice suggested that the decision maker had a preference for a specific item. An inverse decision-making model generated predictions that were in accordance with participants’ inferences. Experiment 2 replicated and extended a previous study by Newtson (1974) in which participants observed pairs of choices and made judgments about which choice provided stronger evidence for a preference. Inverse decision-making again predicted the results, including a result that previous accounts could not explain. Experiment 3 used the same method as Experiment 2 and found that participants did not expect decision makers to be perfect utility-maximizers.
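
      The inverse decision-making idea can be sketched minimally (a softmax choice model with hypothetical utilities, not the authors' actual model): given a forward model of how preferences produce choices, invert it by asking which preference hypothesis makes the observed choice most likely.

```python
import math

def softmax_choice_prob(utilities, chosen):
    """Probability that a softmax decision maker picks item `chosen`."""
    exps = [math.exp(u) for u in utilities]
    return exps[chosen] / sum(exps)

# Two candidate preference hypotheses over items A, B, C
# (hypothetical utility values, for illustration only).
prefers_A = [2.0, 0.5, 0.5]
indifferent = [1.0, 1.0, 1.0]

# Observed choice: the decision maker picked item A (index 0).
# Inverse decision-making: score each hypothesis by the likelihood
# it assigns to the observed choice.
lik_prefers_A = softmax_choice_prob(prefers_A, 0)
lik_indifferent = softmax_choice_prob(indifferent, 0)
print(lik_prefers_A > lik_indifferent)  # True: 'prefers A' fits better
```

      The softmax rather than strict utility maximization is deliberate here: a noisy forward model is one way to accommodate the Experiment 3 finding that participants did not expect decision makers to be perfect utility-maximizers.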

      PubDate: 2017-07-04T05:30:56Z
       
  • Help me if I can't: Social interaction effects in adult contextual word
           learning
    • Abstract: Publication date: November 2017
      Source:Cognition, Volume 168
      Author(s): Laura Verga, Sonja A. Kotz
      A major challenge in second language acquisition is to build up new vocabulary. How is it possible to identify the meaning of a new word among several possible referents? Adult learners typically use contextual information, which reduces the number of possible referents a new word can have. Alternatively, a social partner may facilitate word learning by directing the learner’s attention toward the correct new word meaning. While much is known about the role of this form of ‘joint attention’ in first language acquisition, little is known about its efficacy in second language acquisition. Consequently, we introduce and validate a novel visual word learning game to evaluate how joint attention affects the contextual learning of new words in a second language. Adult learners acquired new words in either a constant or a variable sentence context, either by playing the game with a knowledgeable partner or by playing the game alone on a computer. Results clearly show that participants who learned new words in social interaction (i) are faster in identifying a correct new word referent in variable sentence contexts, and (ii) temporally coordinate their behavior with a social partner. Testing the learned words in a post-learning recall or recognition task showed that participants who learned interactively better recognized words originally learned in a variable context. While this result may suggest that interactive learning facilitates the allocation of attention to a target referent, the differences in performance during recognition and recall call for further studies investigating the effect of social interaction on learning performance. In summary, we provide first evidence on the role of joint attention in second language learning. Furthermore, the new interactive learning game lends itself to further testing in complex neuroimaging research, where the lack of appropriate experimental set-ups has so far limited the investigation of the neural basis of adult word learning in social interaction.

      PubDate: 2017-07-04T05:30:56Z
       
  • Efficient belief tracking in adults: The role of task instruction,
           low-level associative processes and dispositional social functioning
    • Abstract: Publication date: November 2017
      Source:Cognition, Volume 168
      Author(s): Gaëlle Meert, Jessica Wang, Dana Samson
      A growing body of evidence suggests that adults can monitor other people’s beliefs in an efficient way. However, the nature and the limits of efficient belief tracking are still being debated. The present study addressed these issues by testing (a) whether adults spontaneously process other people’s beliefs when overt task instructions assign priority to participants’ own belief, (b) whether this processing relies on low-level associative processes and (c) whether the propensity to track other people’s beliefs is linked to empathic disposition. Adult participants were asked to alternately judge an agent’s belief and their own belief. These beliefs were either consistent or inconsistent with each other. Furthermore, visual association between the agent and the object at which he was looking was either possible or impeded. Results showed interference from the agent’s belief when participants judged their own belief, even when low-level associations were impeded. This indicates that adults still process other people’s beliefs when priority is given to their own belief at the time of computation, and that this processing does not depend on low-level associative processes. Finally, performance on the belief task was associated with the Empathy Quotient and the Perspective Taking scale of the Interpersonal Reactivity Index, indicating that efficient belief processing is linked to a dispositional dimension of social functioning.

      PubDate: 2017-07-04T05:30:56Z
       
  • Just look away: Gaze aversions as an overt attentional disengagement
           mechanism
    • Abstract: Publication date: November 2017
      Source:Cognition, Volume 168
      Author(s): Dekel Abeles, Shlomit Yuval-Greenberg
      During visual exploration of a scene, the eye gaze tends to be directed toward more salient image locations, which contain more information. However, while performing non-visual tasks, such information-seeking behavior could be detrimental to performance, as the perception of irrelevant but salient visual input may unnecessarily increase cognitive load. It would therefore be beneficial if, during non-visual tasks, eye gaze were governed by a drive to reduce saliency rather than maximize it. The current study examined the phenomenon of gaze aversion during non-visual tasks, which is hypothesized to act as an active avoidance mechanism. In two experiments, gaze position was monitored with an eye-tracker while participants performed an auditory mental arithmetic task; in a third experiment they performed an undemanding naming task. Task-irrelevant simple motion stimuli (a drifting grating and a random-dot kinematogram) were centrally presented, moving at varying speeds. Participants averted their gaze away from the moving stimuli more frequently, and for longer proportions of the time, when the motion was faster than when it was slower. Additionally, a positive correlation was found between the task’s difficulty and this aversion behavior. When the task was highly undemanding, no gaze-aversion behavior was observed. We conclude that gaze aversion is an active avoidance strategy, sensitive to both the physical features of the visual distractions and the cognitive load imposed by the non-visual task.

      PubDate: 2017-07-04T05:30:56Z
       
  • The grammar of anger: Mapping the computational architecture of a
           recalibrational emotion
    • Abstract: Publication date: November 2017
      Source:Cognition, Volume 168
      Author(s): Aaron Sell, Daniel Sznycer, Laith Al-Shawaf, Julian Lim, Andre Krauss, Aneta Feldman, Ruxandra Rascanu, Lawrence Sugiyama, Leda Cosmides, John Tooby
      According to the recalibrational theory of anger, anger is a computationally complex cognitive system that evolved to bargain for better treatment. Anger coordinates facial expressions, vocal changes, verbal arguments, the withholding of benefits, the deployment of aggression, and a suite of other cognitive and physiological variables in the service of leveraging bargaining position into better outcomes. The prototypical trigger of anger is an indication that the offender places too little weight on the angry individual’s welfare when making decisions, i.e. the offender has too low a welfare tradeoff ratio (WTR) toward the angry individual. Twenty-three experiments in six cultures, including a group of foragers in the Ecuadorian Amazon, tested six predictions about the computational structure of anger derived from the recalibrational theory. Subjects judged that anger would intensify when: (i) the cost was large, (ii) the benefit the offender received from imposing the cost was small, or (iii) the offender imposed the cost despite knowing that the angered individual was the person to be harmed. Additionally, anger-based arguments conformed to a conceptual grammar of anger, such that offenders were inclined to argue that they held a high WTR toward the victim, e.g., “the cost I imposed on you was small”, “the benefit I gained was large”, or “I didn’t know it was you I was harming.” These results replicated across all six tested cultures: the US, Australia, Turkey, Romania, India, and Shuar hunter-horticulturalists in Ecuador. Results contradict key predictions about anger based on equity theory and social constructivism.
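The core inference in the recalibrational account — that an offender's act reveals an upper bound on his welfare tradeoff ratio — can be given a minimal, purely illustrative formalization. The rule below is my own sketch under the stated assumption (an offender with WTR w imposes cost c to gain benefit b only if b > w·c), not the authors' actual model:

```python
def implied_wtr_upper_bound(offender_benefit, victim_cost):
    """Infer an upper bound on an offender's welfare tradeoff ratio (WTR).

    Assumption (illustrative): a decision-maker with WTR w toward the
    victim imposes a cost c to gain a benefit b only if b > w * c.
    Witnessing the act therefore implies w < b / c.
    """
    if victim_cost <= 0:
        raise ValueError("victim_cost must be positive")
    return offender_benefit / victim_cost

# Predictions (i) and (ii) above fall out directly: a large cost imposed
# for a small benefit implies a low WTR, and hence more intense anger.
low_wtr = implied_wtr_upper_bound(offender_benefit=1.0, victim_cost=10.0)
high_wtr = implied_wtr_upper_bound(offender_benefit=10.0, victim_cost=1.0)
```

On this sketch, the anger-defusing arguments quoted in the abstract ("the cost I imposed on you was small", "the benefit I gained was large") both work by raising the implied bound b/c.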

      PubDate: 2017-07-04T05:30:56Z
       
  • Biphasic attentional orienting triggered by invisible social signals
    • Abstract: Publication date: November 2017
      Source:Cognition, Volume 168
      Author(s): Yanliang Sun, Timo Stein, Wenjie Liu, Xiaowei Ding, Qi-Yang Nie
      Biological motion (BM) is one of the most important social cues for detecting conspecifics, prey, and predators. We show that unconscious BM processing can reflexively direct spatial attention, and that this effect has a biphasic temporal profile. Participants responded to probes that were preceded by intact or scrambled BM cues rendered invisible through continuous flash suppression. With a short inter-stimulus interval (ISI; 100 ms) between the invisible BM cues and the probe, responses to probes at the same location as the invisible, nonpredictive BM cue were faster than to probes at the location of the scrambled BM cue. With a longer ISI (800 ms) this effect reversed, with slower responses to probes at the location of the invisible, nonpredictive BM. These effects were absent when BM and its scrambled control were made visible, for both short and long cue durations across ISIs of variable length, indicating that the saliency of BM itself cannot account for the dynamic orienting effects from invisible social cues. Moreover, these dynamic attentional shifts were specific to upright BM cues and were not obtained for inverted stimuli. Thus, this reflexive and dynamic attentional modulation triggered by invisible BM, with initial facilitation followed by inhibition, demonstrates that in the complete absence of conscious awareness, cue predictiveness, and saliency differences, attentional systems promote exploration of our visual environment for social signals.

      PubDate: 2017-07-04T05:30:56Z
       
  • Signal dimensionality and the emergence of combinatorial structure
    • Abstract: Publication date: November 2017
      Source:Cognition, Volume 168
      Author(s): Hannah Little, Kerem Eryılmaz, Bart de Boer
      In language, a small number of meaningless building blocks can be combined into an unlimited set of meaningful utterances. This is known as combinatorial structure. One hypothesis for the initial emergence of combinatorial structure in language is that recombining elements of signals solves the problem of overcrowding in a signal space. Another hypothesis is that iconicity may impede the emergence of combinatorial structure. However, how these two hypotheses relate to each other is not often discussed. In this paper, we explore how signal space dimensionality relates to both overcrowding in the signal space and iconicity. We use an artificial signalling experiment to test whether a signal space and a meaning space with similar topologies will generate an iconic system and whether, when the topologies differ, the emergence of combinatorially structured signals is facilitated. In our experiments, signals are created from participants’ hand movements, which are measured using an infrared sensor. We found that participants take advantage of iconic signal-meaning mappings where possible. Further, we used trajectory predictability, measures of variance, and Hidden Markov Models to measure the use of structure within the signals produced, and found that when topologies do not match, there is more evidence of combinatorial structure. The results from these experiments are interpreted in the context of the differences between the emergence of combinatorial structure in different linguistic modalities (speech and sign).

      PubDate: 2017-06-24T10:01:05Z
       
  • When attention wanders: Pupillometric signatures of fluctuations in
           external attention
    • Abstract: Publication date: November 2017
      Source:Cognition, Volume 168
      Author(s): Mahiko Konishi, Kevin Brown, Luca Battaglini, Jonathan Smallwood
      Attention is not always directed to events in the external environment. On occasion our thoughts wander to people and places distant from the here and now. Sometimes, this lack of external attention can compromise ongoing task performance. In the current study we set out to understand the extent to which states of internal and external attention can be determined using pupillometry as an index of ongoing cognition. In two experiments we found that periods of slow responding were associated with elevations in the baseline pupil signal over the three and a half seconds prior to a behavioural response. In the second experiment we found that, unlike behavioural lapses, states of off-task thought, particularly those associated with a focus on the past and with an intrusive quality, were associated with reductions in pupil size over the same window prior to the probe. These data show that both large and small baseline pupil sizes are linked to states in which attention is not effectively focused on the external environment, although these states have different qualities. More generally, these findings illustrate that subjective and objective markers of task performance may not be equivalent, and underscore the importance of developing objective indicators that allow these different states to be understood.

      PubDate: 2017-06-24T10:01:05Z
       
  • Phonological and orthographic coding in deaf skilled readers
    • Abstract: Publication date: November 2017
      Source:Cognition, Volume 168
      Author(s): Noemi Fariña, Jon Andoni Duñabeitia, Manuel Carreiras
      Written language is very important in daily life. However, most deaf people do not achieve good reading levels compared to their hearing peers. Previous research has mainly focused on their difficulties when reading in a language with an opaque orthography such as English. In the present study, we investigated visual word recognition of deaf adult skilled readers while reading in Spanish, a language with a transparent orthography, for which obligatory phonological mediation has been claimed. Experiment 1 showed a pseudohomophone inhibitory effect in hearing but not in deaf people. Experiment 2 showed similar orthographic sensitivity, as measured by the transposed-letter effect, for both groups. These results suggest that deaf skilled readers do not rely on phonological mediation, while maintaining the same level of orthographic sensitivity as hearing readers, thus suggesting that the use of phonological coding is not required to access the lexicon and meaning in a language with a transparent orthography.

      PubDate: 2017-06-24T10:01:05Z
       
  • An attentional drift diffusion model over binary-attribute choice
    • Abstract: Publication date: November 2017
      Source:Cognition, Volume 168
      Author(s): Geoffrey Fisher
      In order to make good decisions, individuals need to identify and properly integrate information about various attributes associated with a choice. Since choices are often complex and made rapidly, they are typically affected by contextual variables that are thought to influence how much attention is paid to different attributes. I propose a modification of the attentional drift-diffusion model, the binary-attribute attentional drift-diffusion model (baDDM), which describes the choice process over simple binary-attribute choices and how it is affected by fluctuations in visual attention. Using an eye-tracking experiment, I find that the baDDM makes accurate quantitative predictions about several key variables, including choices, reaction times, and how these variables are correlated with attention to two attributes in an accept-reject decision. Furthermore, I estimate an attribute-based fixation bias which suggests that attention to an attribute increases its subjective weight by 5%, while the unattended attribute’s weight decreases by 10%.
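The mechanics of an attentional drift-diffusion process of this kind can be illustrated with a toy simulation. The sketch below is my own minimal reading of the abstract, not the author's estimated model: evidence drifts toward an accept or reject boundary, and at each moment the fixated attribute's value is up-weighted while the unattended attribute's value is discounted (the weights, noise level, and fixation process here are all illustrative assumptions):

```python
import random

def simulate_baddm_trial(v_attr1, v_attr2,
                         attended_weight=1.05, unattended_weight=0.90,
                         drift_scale=0.01, noise_sd=0.02,
                         threshold=1.0, max_steps=100000, seed=0):
    """Toy binary-attribute attentional drift-diffusion trial.

    Evidence accumulates toward +threshold (accept) or -threshold
    (reject). At each step one attribute is stochastically fixated;
    the fixated attribute is up-weighted, the other discounted.
    Returns (choice, number_of_steps).
    """
    rng = random.Random(seed)
    evidence = 0.0
    for step in range(1, max_steps + 1):
        fixate_first = rng.random() < 0.5  # which attribute is looked at
        w1 = attended_weight if fixate_first else unattended_weight
        w2 = unattended_weight if fixate_first else attended_weight
        # Drift reflects the attention-weighted sum of attribute values,
        # plus Gaussian accumulation noise.
        evidence += drift_scale * (w1 * v_attr1 + w2 * v_attr2)
        evidence += rng.gauss(0.0, noise_sd)
        if evidence >= threshold:
            return "accept", step
        if evidence <= -threshold:
            return "reject", step
    return "undecided", max_steps
```

With both attribute values strongly positive the process reaches the accept boundary quickly; shifting fixation time toward one attribute biases choice in the direction of that attribute's value, which is the qualitative pattern the baDDM is built to capture.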

      PubDate: 2017-06-24T10:01:05Z
       
  • Predicting actions from subtle preparatory movements
    • Abstract: Publication date: November 2017
      Source:Cognition, Volume 168
      Author(s): Maryam Vaziri-Pashkam, Sarah Cormiea, Ken Nakayama
      To study how people anticipate others’ actions, we designed a competitive reaching task. Subjects faced each other separated by a Plexiglas screen, and their finger movements in 3D space were recorded with sensors. The first subject (Attacker) was instructed to touch one of two horizontally arranged targets on the screen. The other subject (Blocker) touched the same target as quickly as possible. Average finger reaction times (fRTs) were fast, much faster than reactions to a dot moving on the screen in the same manner as the Attacker’s finger. This suggests the presence of subtle preparatory cues in other parts of the Attacker’s body. We also recorded videos of Attackers’ movements and had Blockers play against unedited videos as well as videos that had all preparatory cues removed by editing out frames before Attacker finger movements started. Blockers’ fRTs in response to the edited videos were significantly slower (∼90 ms). Also, reversing the preparatory movements in the videos tricked the Blockers into choosing the incorrect target at the beginning of their movement. Next, we occluded various body parts of the Attacker and showed that fRTs slow down only when most of the Attacker’s body is occluded. These results indicate that informative cues are widely distributed over the body, and that Blockers can use any piece from a set of redundant cues for action prediction. Reaction times in each condition remained constant over the duration of the testing sessions, indicating a lack of learning during the experiment. These results suggest that during a dynamic two-person interaction, human subjects possess a remarkable, built-in action-reading capacity that allows them to predict others’ goals and respond efficiently in this competitive setting.

      PubDate: 2017-06-24T10:01:05Z
       
 
 
JournalTOCs
School of Mathematical and Computer Sciences
Heriot-Watt University
Edinburgh, EH14 4AS, UK
Email: journaltocs@hw.ac.uk
Tel: +00 44 (0)131 4513762
Fax: +00 44 (0)131 4513327
 

JournalTOCs © 2009-2016