Cognition
  [SJR: 2.77]   [H-I: 142]
    
   Hybrid journal (it can contain Open Access articles)
   ISSN (Print) 0010-0277
   Published by Elsevier
  • The prosodic domain of phonological encoding: Evidence from speech errors
    • Authors: Mary-Beth Beirne; Karen Croot
      Pages: 1 - 7
      Abstract: Publication date: August 2018
      Source:Cognition, Volume 177
      Author(s): Mary-Beth Beirne, Karen Croot
      Phonological encoding of segments is thought to occur within a prosodically-defined frame, but it is not clear which of the constituent/s within the prosodic hierarchy (syllables, phonological words, intonational phrases and utterances) serve/s as the domain of phonological encoding. This experiment investigated whether segmental speech errors elicited in tongue-twisters were influenced by position within prosodic constituents above the level of the phonological word. Forty-four participants produced six repetitions each of 40 two-intonational phrase tongue-twisters with error-prone word-initial “target” segments in phrase-initial and phrase-final words. If the domain of phonological encoding is the intonational phrase, we hypothesised that segments within a current intonational phrase would interact in more errors than would segments across intonational phrase boundaries. Participants made more anticipatory than perseveratory errors on target segments in phrase-initial words as predicted. They also made more perseveratory than anticipatory errors on targets in phrase-final words, but only in utterance-final phrases. These results suggest that the intonational phrase is one domain of phonological encoding, and that segments for upcoming phrases are activated while current phrases are being articulated.

      PubDate: 2018-04-15T23:44:10Z
      DOI: 10.1016/j.cognition.2018.03.004
      Issue No: Vol. 177 (2018)
       
  • Revisiting norms of assertion
    • Authors: John Turri
      Pages: 8 - 11
      Abstract: Publication date: August 2018
      Source:Cognition, Volume 177
      Author(s): John Turri
      A principal conclusion supported by convergent evidence from cognitive science, life science, and philosophy is that knowledge is a central norm of assertion—that is, according to the rules of the practice, assertions should express knowledge. That view has recently been challenged with new experiments. This paper identifies a critical confound in the experiments. In the process, a new study is reported that provides additional support for the view that knowledge is a central norm of assertion.

      PubDate: 2018-04-15T23:44:10Z
      DOI: 10.1016/j.cognition.2018.03.023
      Issue No: Vol. 177 (2018)
       
  • Infants’ prosocial behavior is governed by cost-benefit analyses
    • Authors: Jessica A. Sommerville; Elizabeth A. Enright; Rachel O. Horton; Kelsey Lucca; Miranda J. Sitch; Susanne Kirchner-Adelhart
      Pages: 12 - 20
      Abstract: Publication date: August 2018
      Source:Cognition, Volume 177
      Author(s): Jessica A. Sommerville, Elizabeth A. Enright, Rachel O. Horton, Kelsey Lucca, Miranda J. Sitch, Susanne Kirchner-Adelhart
      Cost-benefit analyses are central to mature decision-making and behavior across a range of contexts. Given debates regarding the nature of infants’ prosociality, we investigated whether 18-month-old infants’ (N = 160) prosocial behavior is impacted by anticipated costs and benefits. Infants participated in a helping task in which they could carry either a heavy or light block across a room to help an experimenter. Infants’ helping behavior was attenuated when the anticipated physical costs were high versus low (Experiment 1), and high-cost helping was enhanced under conditions of increased intrinsic motivational benefits (Experiments 2 and 3). High-cost helping was further predicted by infants’ months of walking experience, presumably because carrying a heavy block across a room is more effortful for less experienced walkers than for more experienced walkers, demonstrating that infants subjectively calibrate costs. Thus, infants’ prosocial responding may be guided by a rational decision-making process that weighs and integrates costs and benefits.

      PubDate: 2018-04-15T23:44:10Z
      DOI: 10.1016/j.cognition.2018.03.021
      Issue No: Vol. 177 (2018)
       
  • The paca that roared: Immediate cumulative semantic interference among
           newly acquired words
    • Authors: Gary M. Oppenheim
      Pages: 21 - 29
      Abstract: Publication date: August 2018
      Source:Cognition, Volume 177
      Author(s): Gary M. Oppenheim
      With 40,000 words in the average vocabulary, how can speakers find the specific words that they want so quickly and easily? Cumulative semantic interference in language production provides a clue: when naming a large series of pictures, with a few mammals sprinkled about, naming each subsequent mammal becomes slower and more error-prone. Such interference mirrors predictions from an incremental learning algorithm applied to meaning-driven retrieval from an established vocabulary, suggesting retrieval benefits from a constant, implicit, re-optimization process (Oppenheim et al., 2010). But how quickly would a new mammal (e.g. paca) engage in this re-optimization? In this experiment, 18 participants studied 3 novel and 3 familiar exemplars from each of six semantic categories, and immediately performed a timed picture-naming task. Consistent with the learning model’s predictions, naming latencies revealed immediate cumulative semantic interference in all directions: from new words to new words, from new words to old words, from old words to new words, and from old words to old words. Repeating the procedure several days later produced similar-magnitude effects, demonstrating that newly acquired words can be immediately semantically integrated, at least to the extent necessary to produce typical cumulative semantic interference. These findings extend the Dark Side model’s scope to include novel word production, and are considered in terms of mechanisms for lexical selection.

      PubDate: 2018-04-15T23:44:10Z
      DOI: 10.1016/j.cognition.2018.02.014
      Issue No: Vol. 177 (2018)
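      A minimal, illustrative sketch of the kind of error-driven incremental learning account this abstract invokes (in the spirit of Oppenheim et al., 2010, but not the published model; the weights, learning rate, and latency linking assumption are toy values chosen for illustration): naming one category member strengthens its connection from a shared semantic feature while weakening its neighbours', so each later within-category naming is slower.

      CATEGORY = ["horse", "cow", "paca", "sheep"]   # items sharing one "mammal" feature (toy example)
      weights = {word: 1.0 for word in CATEGORY}     # feature-to-word connection strengths (arbitrary start values)
      RATE = 0.3                                     # hypothetical learning rate

      def name_picture(target):
          """Retrieve `target` via the shared feature, then apply a delta-rule update."""
          latency = 1.0 / weights[target]            # weaker connection -> slower retrieval (toy linking assumption)
          weights[target] += RATE * (1.0 - weights[target])        # strengthen the word just named
          for other in CATEGORY:
              if other != target:
                  weights[other] -= RATE * weights[other]          # weaken its category neighbours
          return latency

      for position, word in enumerate(CATEGORY, start=1):
          print(f"mammal #{position} ({word}): relative latency {name_picture(word):.2f}")

      Latencies rise with each successive within-category naming, the cumulative semantic interference pattern described above; a newly learned word such as "paca" inherits the effect as soon as its feature-to-word connection joins the same network, which is the behaviour the experiment reports.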
       
  • Cognitive flexibility and memory in pigeons, human children, and adults
    • Authors: Kevin P. Darby; Leyre Castro; Edward A. Wasserman; Vladimir M. Sloutsky
      Pages: 30 - 40
      Abstract: Publication date: August 2018
      Source:Cognition, Volume 177
      Author(s): Kevin P. Darby, Leyre Castro, Edward A. Wasserman, Vladimir M. Sloutsky
      This work examines cognitive flexibility using a comparative approach. Pigeons (Experiment 1), human children (Experiment 2a), and human adults (Experiment 2b) performed a task that required changing responses to the same stimuli twice across the experiment. The results indicate that all three groups demonstrated robust memory for learned information. In addition, pigeons showed comparable and substantial perseveration following both response shifts. In contrast, both children and adults exhibited some perseveration following a first response shift, while exhibiting no cost following the second response shift. These findings are discussed in relation to memory-based theories of cognitive flexibility, according to which perseveration occurs as a result of competition between long-term and working memory, revealing important differences in memory and cognitive flexibility between species.

      PubDate: 2018-04-15T23:44:10Z
      DOI: 10.1016/j.cognition.2018.03.015
      Issue No: Vol. 177 (2018)
       
  • The social-cognitive basis of infants’ reference to absent entities
    • Authors: Manuel Bohn; Luise Zimmermann; Josep Call; Michael Tomasello
      Pages: 41 - 48
      Abstract: Publication date: August 2018
      Source:Cognition, Volume 177
      Author(s): Manuel Bohn, Luise Zimmermann, Josep Call, Michael Tomasello
      Recent evidence suggests that infants as young as 12 months of age use pointing to communicate about absent entities. The tacit assumption underlying these studies is that infants do so based on tracking what their interlocutor experienced in a previous shared interaction. The present study addresses this assumption empirically. In three experiments, 12-month-old infants could request additional desired objects by pointing to the location in which these objects were previously located. We systematically varied whether the adult from whom infants were requesting had previously experienced the former content of the location with the infant. Infants systematically adjusted their pointing to the now empty location based on what they had experienced with the adult previously. These results suggest that infants’ ability to communicate about absent referents is based on an incipient form of common ground.

      PubDate: 2018-04-15T23:44:10Z
      DOI: 10.1016/j.cognition.2018.03.024
      Issue No: Vol. 177 (2018)
       
  • Infants learn a rule predicated on the relation same but fail to
           simultaneously learn a rule predicated on the relation different
    • Authors: Jean-Rémy Hochmann; Susan Carey; Jacques Mehler
      Pages: 49 - 57
      Abstract: Publication date: August 2018
      Source:Cognition, Volume 177
      Author(s): Jean-Rémy Hochmann, Susan Carey, Jacques Mehler
      In two experiments, we assessed whether infants are able to learn rules predicated on two abstract relations linked by negation: same and different (not same). In an anticipatory looking paradigm, the relation between successive colored geometrical shapes predicted the location where a puppet would appear next. In Experiment 1, 7-month-olds learned and generalized a rule predicated on the relation same, but not a rule predicated on the relation different. Similarly, in Experiment 2, 12-month-olds learned a rule predicated on the relation same-shape, but not a rule predicated on the relation different-shape. Comparing our data with that from previous experiments in the speech domain, we found no effect of age, modality or rule complexity. We conclude that, in the first year of life, infants already possess a representation of the abstract relation same, which serves as input to a rule. In contrast, we find no evidence that they represent the relation different.

      PubDate: 2018-04-15T23:44:10Z
      DOI: 10.1016/j.cognition.2018.04.005
      Issue No: Vol. 177 (2018)
       
  • When some triggers a scalar inference out of the blue. An
           electrophysiological study of a Stroop-like conflict elicited by single
           words
    • Authors: Cécile Barbet; Guillaume Thierry
      Pages: 58 - 68
      Abstract: Publication date: August 2018
      Source:Cognition, Volume 177
      Author(s): Cécile Barbet, Guillaume Thierry
      Some studies in experimental pragmatics have concluded that scalar inferences (e.g., ‘some X are Y’ implicates ‘not all X are Y’) are context-dependent pragmatic computations delayed relative to semantic computations. However, it remains unclear whether strong contextual support is necessary to trigger such inferences. Here we tested if the scalar inference ‘not all’ triggered by some can be evoked in a maximally neutral context. We investigated event-related potential (ERP) amplitude modulations elicited by Stroop-like conflicts in participants instructed to indicate whether strings of letters were printed with all their letters in upper case or otherwise. In a randomized stream of non-words and distractor words, the words all, some and case were either presented in capitals or they featured at least one lower case letter. As expected, we found a significant conflict-related N450 modulation when comparing e.g., ‘aLl’ with ‘ALL’. Surprisingly, despite the fact that most responses from the same participants in a sentence-picture verification task were literal, we also found a similar modulation when comparing ‘SOME’ with e.g., ‘SoMe’, even though SOME could only elicit such a Stroop conflict when construed pragmatically. No such modulation was found for e.g., ‘CasE’ vs. ‘CASE’ (neutral contrast). These results suggest that some can appear incongruent with the concept of ‘all’ even when contextual support is minimal. Furthermore, there was no significant correlation between N450 effect magnitude (‘SOME’ minus e.g., ‘sOMe’) and pragmatic response rate recorded in the sentence-picture verification task. Overall, this study shows for the first time that the pragmatic meaning of some can be accessed in a maximally neutral context, and thus, that the scalar inference ‘not all’ triggered by some should be construed as context-sensitive rather than context-dependent, that is, more or less salient and relevant depending on the context rather than entirely contingent upon it.

      PubDate: 2018-04-15T23:44:10Z
      DOI: 10.1016/j.cognition.2018.03.013
      Issue No: Vol. 177 (2018)
       
  • Growth of symbolic number knowledge accelerates after children understand
           cardinality
    • Authors: David C. Geary; Kristy vanMarle
      Pages: 69 - 78
      Abstract: Publication date: August 2018
      Source:Cognition, Volume 177
      Author(s): David C. Geary, Kristy vanMarle
      Children who achieve an early understanding of the cardinal value of number words (cardinal knowledge) have a superior understanding of the relations among numerals at school entry, controlling other factors (e.g., intelligence). We tested the hypothesis that this pattern emerges because an understanding of cardinal value jump starts children’s learning of the relations among numerals. Across two years of preschool, the cardinal knowledge of 179 children (85 boys) was assessed four times, as was their understanding of the relative quantity of Arabic numerals and competence at discriminating nonsymbolic quantities. Children were more accurate on nonsymbolic than numeral comparisons before they understood cardinality, but showed more rapid growth for numeral than nonsymbolic comparisons once they understood cardinality. Moreover, and with the possible exception of very small numerals (<5), before they understood cardinality children were no better than chance in their numeral comparisons, but greatly exceeded chance once they understood cardinality. These patterns were independent of the age at which children became cardinal principle knowers and independent of intelligence, executive function, and preliteracy skills. More broadly, the results provide a developmental bridge between cardinal knowledge and school-entry number knowledge.

      PubDate: 2018-04-15T23:44:10Z
      DOI: 10.1016/j.cognition.2018.04.002
      Issue No: Vol. 177 (2018)
       
  • Interoceptive influences on peripersonal space boundary
    • Authors: Martina Ardizzi; Francesca Ferri
      Pages: 79 - 86
      Abstract: Publication date: August 2018
      Source:Cognition, Volume 177
      Author(s): Martina Ardizzi, Francesca Ferri
      Integration of body-related signals within the peripersonal space (PPS) contributes to bodily self-awareness. Whereas several studies have shown how individual PPS extension is shaped by external factors, e.g. during interactions with people and objects, no studies have looked at interoceptive influences on PPS extension. We exposed participants to an audio-tactile interaction task, to measure their PPS boundary (Session 1), and to a heartbeat counting task and a time estimation task, to specifically assess their interoceptive accuracy (Session 2). Participants’ traits of private self-consciousness and social anxiety were also evaluated, to account for their possible effect on the relation between interoception and PPS representation. We found that higher interoceptive accuracy specifically predicts narrower PPS boundary. Moreover, this relation is moderated by individual traits of private self-consciousness, but not social anxiety. Extending the concept of interoceptive influences on exteroceptive body representations to PPS, our results, first, support the idea that a dynamic balance between intero-exteroceptive processing might represent a general principle underlying bodily self-awareness; second, they shed light on how interoception may affect also the way we interface with the external world. Finally, showing that, in order for interoceptive accuracy to be effective on the intero-exteroceptive balance, it is important that individuals tend to focus on inner sensations and feelings, our results suggest that a comprehensive intero-exteroceptive model of bodily self-awareness should be (at least) a three-dimensional model that includes individual self-consciousness traits.

      PubDate: 2018-04-15T23:44:10Z
      DOI: 10.1016/j.cognition.2018.04.001
      Issue No: Vol. 177 (2018)
       
  • And then I saw her race: Race-based expectations affect infants’
           word processing
    • Authors: Drew Weatherhead; Katherine S. White
      Pages: 87 - 97
      Abstract: Publication date: August 2018
      Source:Cognition, Volume 177
      Author(s): Drew Weatherhead, Katherine S. White
      How do our expectations about speakers shape speech perception? Adults’ speech perception is influenced by social properties of the speaker (e.g., race). When in development do these influences begin? In the current study, 16-month-olds heard familiar words produced in their native accent (e.g., “dog”) and in an unfamiliar accent involving a vowel shift (e.g., “dag”), in the context of an image of either a same-race speaker or an other-race speaker. Infants’ interpretation of the words depended on the speaker’s race. For the same-race speaker, infants only recognized words produced in the familiar accent; for the other-race speaker, infants recognized both versions of the words. Two additional experiments showed that infants only recognized an other-race speaker’s atypical pronunciations when they differed systematically from the native accent. These results provide the first evidence that expectations driven by unspoken properties of speakers, such as race, influence infants’ speech processing.

      PubDate: 2018-04-15T23:44:10Z
      DOI: 10.1016/j.cognition.2018.04.004
      Issue No: Vol. 177 (2018)
       
  • Confidence judgments during ratio comparisons reveal a Bayesian bias
    • Authors: Santiago Alonso-Diaz; Jessica F. Cantlon
      Pages: 98 - 106
      Abstract: Publication date: August 2018
      Source:Cognition, Volume 177
      Author(s): Santiago Alonso-Diaz, Jessica F. Cantlon
      Rational numbers are essential in mathematics and decision-making but humans often and erroneously rely on the magnitude of the numerator or denominator to determine the relative size of a quotient. The source of this flawed whole number strategy is poorly understood. Here we test the Bayesian hypothesis that the human bias toward large values in the numerator or denominator of a ratio estimate is the result of higher confidence in large samples. Larger values are considered a better (more certain) instance of that ratio than the same ratio composed of smaller values. We collected confidence measures explicitly (Experiment 1) and implicitly (Experiment 2) during subjects’ comparisons of non-symbolic proportions (images with arrays of orange and blue dots). We manipulated the discernibility of the fractions to control difficulty and varied the cardinality and congruency of the numerators, denominators, and ratio values (e.g. 8/20 vs. 5/10 and 16/40 vs. 10/20). The results revealed that subjects’ confidence during ratio comparisons was modulated by the numerical magnitude of the fraction’s components, consistent with a Bayesian perception of relative ratios. The results suggest that the large number bias could arise from greater confidence in large samples.

      PubDate: 2018-04-15T23:44:10Z
      DOI: 10.1016/j.cognition.2018.04.006
      Issue No: Vol. 177 (2018)
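      A back-of-the-envelope illustration of the Bayesian intuition tested here (a hypothetical worked example with a flat Beta(1,1) prior, not the authors' analysis): the same proportion built from larger counts yields a narrower posterior over the underlying ratio, i.e. a more confident estimate.

      import math

      def beta_posterior(successes, total, prior_a=1.0, prior_b=1.0):
          """Posterior Beta(a, b) over a proportion after observing the counts."""
          a = prior_a + successes
          b = prior_b + (total - successes)
          mean = a / (a + b)
          sd = math.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))
          return mean, sd

      for successes, total in [(8, 20), (16, 40), (40, 100)]:
          mean, sd = beta_posterior(successes, total)
          print(f"{successes}/{total}: posterior mean {mean:.2f}, sd {sd:.3f}")

      All three fractions centre near 0.40, but the posterior spread shrinks as the counts grow, so comparisons involving larger numerators and denominators should feel more certain, which is the proposed source of the large-number bias.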
       
  • Exploring the functional nature of synaesthetic colour: Dissociations from
           colour perception and imagery
    • Authors: Rocco Chiou; Anina N. Rich; Sebastian Rogers; Joel Pearson
      Pages: 107 - 121
      Abstract: Publication date: August 2018
      Source:Cognition, Volume 177
      Author(s): Rocco Chiou, Anina N. Rich, Sebastian Rogers, Joel Pearson
      Individuals with grapheme-colour synaesthesia experience anomalous colours when reading achromatic text. These unusual experiences have been said to resemble ‘normal’ colour perception or colour imagery, but studying the nature of synaesthesia remains difficult. In the present study, we report novel evidence that synaesthetic colour impacts conscious vision in a way that is different from both colour perception and imagery. Presenting ‘normal’ colour prior to binocular rivalry induces a location-dependent suppressive bias reflecting local habituation. By contrast, a grapheme that evokes synaesthetic colour induces a facilitatory bias reflecting priming that is not constrained to the inducing grapheme’s location. This priming does not occur in non-synaesthetes and does not result from response bias. It is sensitive to diversion of visual attention away from the grapheme, but resistant to sensory perturbation, reflecting a reliance on cognitive rather than sensory mechanisms. Whereas colour imagery in non-synaesthetes causes local priming that relies on the locus of imagined colour, imagery in synaesthetes caused global priming not dependent on the locus of imagery. These data suggest a unique psychophysical profile of high-level colour processing in synaesthetes. Our novel findings and method will be critical to testing theories of synaesthesia and visual awareness.

      PubDate: 2018-04-15T23:44:10Z
      DOI: 10.1016/j.cognition.2018.03.022
      Issue No: Vol. 177 (2018)
       
  • The role of alternative salience in the derivation of scalar implicatures
    • Authors: Alice Rees; Lewis Bott
      Pages: 1 - 14
      Abstract: Publication date: July 2018
      Source:Cognition, Volume 176
      Author(s): Alice Rees, Lewis Bott
      Comprehension can be enriched by considering what a speaker could have said but did not; namely, the alternative. For example, “Betty passed some of her exams” can be interpreted as “Betty passed some but not all of her exams”. This enriched interpretation is an example of a scalar implicature. We consider whether the salience and use of the alternative are independent processes in the derivation of scalar implicatures or whether use is dependent on salience. Participants completed three sentence interpretation experiments in which the sentences invited scalar implicatures. The experiments used a structural priming paradigm with alternatives and implicatures as primes. We found that (1) adults could be primed to derive scalar implicatures when the alternative was the prime, and (2) they did so at a rate equal to that observed when the scalar implicature itself was the prime. In the absence of evidence that the use of the alternative can be primed independently of its salience, we conclude that salience and use are not independent processes. Instead, we suggest that when the alternative is sufficiently salient, the implicature will automatically be derived.

      PubDate: 2018-04-15T23:44:10Z
      DOI: 10.1016/j.cognition.2018.02.024
      Issue No: Vol. 176 (2018)
       
  • Contextual predictability shapes signal autonomy
    • Authors: James Winters; Simon Kirby; Kenny Smith
      Pages: 15 - 30
      Abstract: Publication date: July 2018
      Source:Cognition, Volume 176
      Author(s): James Winters, Simon Kirby, Kenny Smith
      Aligning on a shared system of communication requires senders and receivers reach a balance between simplicity, where there is a pressure for compressed representations, and informativeness, where there is a pressure to be communicatively functional. We investigate the extent to which these two pressures are governed by contextual predictability: the amount of contextual information that a sender can estimate, and therefore exploit, in conveying their intended meaning. In particular, we test the claim that contextual predictability is causally related to signal autonomy: the degree to which a signal can be interpreted in isolation, without recourse to contextual information. Using an asymmetric communication game, where senders and receivers are assigned fixed roles, we manipulate two aspects of the referential context: (i) whether or not a sender shares access to the immediate contextual information used by the receiver in interpreting their utterance; (ii) the extent to which the relevant solution in the immediate referential context is generalisable to the aggregate set of contexts. Our results demonstrate that contextual predictability shapes the degree of signal autonomy: when the context is highly predictable (i.e., the sender has access to the context in which their utterances will be interpreted, and the semantic dimension which discriminates between meanings in context is consistent across communicative episodes), languages develop which rely heavily on the context to reduce uncertainty about the intended meaning. When the context is less predictable, senders favour systems composed of autonomous signals, where all potentially relevant semantic dimensions are explicitly encoded. Taken together, these results suggest that our pragmatic faculty, and how it integrates information from the context in reducing uncertainty, plays a central role in shaping language structure.

      PubDate: 2018-03-17T19:23:31Z
      DOI: 10.1016/j.cognition.2018.03.002
      Issue No: Vol. 176 (2018)
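      A toy sketch of the trade-off described above (a constructed example, not the authors' communication game; the meaning space, signal forms, and trial count are arbitrary): context-dependent signals encode only one semantic dimension and succeed only when the context reliably discriminates along that dimension, whereas autonomous signals encode every dimension and succeed regardless of the context.

      import itertools
      import random

      random.seed(0)

      SHAPES = ["square", "circle"]
      COLOURS = ["red", "blue"]
      MEANINGS = list(itertools.product(SHAPES, COLOURS))

      def context_dependent_signal(meaning):
          return meaning[1]                      # encode colour only (short, relies on context)

      def autonomous_signal(meaning):
          return meaning[0] + "-" + meaning[1]   # encode both dimensions (longer, self-sufficient)

      def success_rate(signal_fn, colour_discriminates, trials=2000):
          """Chance the receiver picks the target from a two-item context given only the signal."""
          hits = 0
          for _ in range(trials):
              target = random.choice(MEANINGS)
              if colour_discriminates:
                  # distractor differs in colour, so colour alone disambiguates
                  distractor = (random.choice(SHAPES), [c for c in COLOURS if c != target[1]][0])
              else:
                  # distractor shares the colour, so colour alone is ambiguous
                  distractor = ([s for s in SHAPES if s != target[0]][0], target[1])
              candidates = [m for m in (target, distractor) if signal_fn(m) == signal_fn(target)]
              hits += 1 if random.choice(candidates) == target else 0
          return hits / trials

      for label, predictable in [("predictable context", True), ("unpredictable context", False)]:
          print(label,
                "context-dependent:", round(success_rate(context_dependent_signal, predictable), 2),
                "autonomous:", round(success_rate(autonomous_signal, predictable), 2))

      When the context is predictable, the short context-dependent code is as accurate as the fully specified one; when it is not, only the autonomous code remains reliable, mirroring the senders' shift toward autonomous signals under low contextual predictability.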
       
  • Social network size can influence linguistic malleability and the
           propagation of linguistic change
    • Authors: Shiri Lev-Ari
      Pages: 31 - 39
      Abstract: Publication date: July 2018
      Source:Cognition, Volume 176
      Author(s): Shiri Lev-Ari
      We learn language from our social environment, but the more sources we have, the less informative each source is, and therefore, the less weight we ascribe its input. According to this principle, people with larger social networks should give less weight to new incoming information, and should therefore be less susceptible to the influence of new speakers. This paper tests this prediction, and shows that speakers with smaller social networks indeed have more malleable linguistic representations. In particular, they are more likely to adjust their lexical boundary following exposure to a new speaker. Experiment 2 uses computational simulations to test whether this greater malleability could lead people with smaller social networks to be important for the propagation of linguistic change despite the fact that they interact with fewer people. The results indicate that when innovators were connected with people with smaller rather than larger social networks, the population exhibited greater and faster diffusion. Together these experiments show that the properties of people’s social networks can influence individuals’ learning and use as well as linguistic phenomena at the community level.

      PubDate: 2018-03-17T19:23:31Z
      DOI: 10.1016/j.cognition.2018.03.003
      Issue No: Vol. 176 (2018)
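      A schematic agent-based sketch of the mechanism tested in Experiment 2 (an illustration only, not the paper's simulation code; the population size, number of rounds, and weighting rule are arbitrary choices): each listener weights new input by one over the number of sources in their network, so the same innovation spreads further when its hearers have small networks.

      import random

      random.seed(2)

      def simulate(network_size, agents=50, rounds=200, innovators=5):
          """Return mean adoption of a new variant (0-1) after repeated random exposure."""
          values = [1.0] * innovators + [0.0] * (agents - innovators)       # innovators already use the new variant
          weights = [0.0] * innovators + [1.0 / network_size] * (agents - innovators)
          for _ in range(rounds):
              listener = random.randrange(innovators, agents)               # innovators keep producing 1.0
              speaker = random.randrange(agents)
              values[listener] += weights[listener] * (values[speaker] - values[listener])
          return sum(values[innovators:]) / (agents - innovators)

      for size in (5, 50):
          print(f"network size {size}: mean adoption {simulate(size):.2f}")

      Because agents with fewer sources give each exposure more weight, the same number of interactions produces greater and faster diffusion of the innovation, the pattern reported in the abstract's simulations.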
       
  • Abstraction promotes creative problem-solving in rhesus monkeys
    • Authors: William W.L. Sampson; Sara A. Khan; Eric J. Nisenbaum; Jerald D. Kralik
      Pages: 53 - 64
      Abstract: Publication date: July 2018
      Source:Cognition, Volume 176
      Author(s): William W.L. Sampson, Sara A. Khan, Eric J. Nisenbaum, Jerald D. Kralik
      Abstraction allows us to discern regularities beyond the specific instances we encounter. It also promotes creative problem-solving by enabling us to consider unconventional problem solutions. However, the mechanisms by which this occurs are not well understood. Because it is often difficult to isolate human high-level cognitive processes, we utilized a nonhuman primate model, in which rhesus monkeys appear to use similar processes to consider an unconventional solution to the difficult reverse-reward problem: i.e., given the choice between a better and worse food option they must select the worse one to receive the better one. After solving this problem with only one specific example—one vs. four half-peanuts—three of four monkeys immediately transferred to novel cases: novel quantities, food items, non-food items, and to the choice between a larger, but inferior vegetable and a smaller, but superior food item (either grape or marshmallow), in which they selected the inferior vegetable to receive the superior option. Thus, we show that nonhuman animals have the capacity to comprehend abstract non-perceptual features, to infer them from one specific case, and to use them to override the natural preference to select the superior option. Critically, we also found that three monkeys had a large learning and performance advantage over the fourth monkey who showed less generalization from the original one and four half-peanuts. This difference suggests that abstraction promoted problem-solving via cascading activation from the two food item options to the relation between them, thus providing access to an initially nonapparent problem solution.

      PubDate: 2018-04-15T23:44:10Z
      DOI: 10.1016/j.cognition.2018.02.021
      Issue No: Vol. 176 (2018)
       
  • Mental space maps into the future
    • Authors: Anna Belardinelli; Johannes Lohmann; Alessandro Farnè; Martin V. Butz
      Pages: 65 - 73
      Abstract: Publication date: July 2018
      Source:Cognition, Volume 176
      Author(s): Anna Belardinelli, Johannes Lohmann, Alessandro Farnè, Martin V. Butz
      It has been suggested that our mind anticipates the future to act in a goal-directed, event-oriented manner. Here we asked whether peripersonal hand space, that is, the space surrounding one’s hands, is dynamically and adaptively mapped into the future while planning and executing a goal-directed object manipulation. We thus combined the crossmodal congruency paradigm (CCP), which has been used to study selective interactions between vision and touch within peripersonal space, with an object manipulation task. We expected crossmodal interactions in anticipation of the upcoming, currently planned object grasp, which varied trial-by-trial depending on the object’s orientation. Our results confirm that visual distractors close to the future finger positions selectively influence vibrotactile perceptions. Moreover, vibrotactile stimulation influences gaze behavior in the light of the anticipated grasp. Both influences become apparent partly even before the hand starts to move, soon after visual target object onset. These results thus support theories of event encodings and anticipatory behavior, showing that peripersonal hand space is flexibly remapped onto a future, currently actively inferred hand position.

      PubDate: 2018-04-15T23:44:10Z
      DOI: 10.1016/j.cognition.2018.03.007
      Issue No: Vol. 176 (2018)
       
  • Comparison within pairs promotes analogical abstraction in
           three-month-olds
    • Authors: Erin M. Anderson; Yin-Juei Chang; Susan Hespos; Dedre Gentner
      Pages: 74 - 86
      Abstract: Publication date: July 2018
      Source:Cognition, Volume 176
      Author(s): Erin M. Anderson, Yin-Juei Chang, Susan Hespos, Dedre Gentner
      This research tests whether analogical learning is present before language comprehension. Three-month-old infants were habituated to a series of analogous pairs, instantiating either the same relation (e.g., AA, BB, etc.) or the different relation (e.g., AB, CD, etc.), and then tested with further exemplars of the relations. If they can distinguish the familiar relation from the novel relation, even with new objects, this is evidence for analogical abstraction across the study pairs. In Experiment 1, we did not find evidence of analogical abstraction when 3-month-olds were habituated to six pairs instantiating the relation. However, in Experiment 2, infants showed evidence of analogical abstraction after habituation to two alternating pairs (e.g., AA, BB, AA, BB…). Further, as with older groups, rendering individual objects salient disrupted learning the relation. These results demonstrate that 3-month-old infants are capable of comparison and abstraction of the same/different relation. Our findings also place limits on the conditions under which these processes are likely to occur. We discuss implications for theories of relational learning.

      PubDate: 2018-04-15T23:44:10Z
      DOI: 10.1016/j.cognition.2018.03.008
      Issue No: Vol. 176 (2018)
       
  • Cognitive control ability mediates prediction costs in monolinguals and
           bilinguals
    • Authors: Megan Zirnstein; Janet G. van Hell; Judith F. Kroll
      Pages: 87 - 106
      Abstract: Publication date: July 2018
      Source:Cognition, Volume 176
      Author(s): Megan Zirnstein, Janet G. van Hell, Judith F. Kroll
      In this study, we examined the role that cognitive control and language regulation ability play in mediating readers’ susceptibility to prediction error costs when reading in the native language (L1) or a second language (L2). Twenty-four English monolinguals (Experiment 1) and 28 Chinese-English bilinguals (Experiment 2) read sentences in English while their EEG was recorded. The sentences varied in the predictability of an upcoming expected word and in whether that prediction was confirmed. Monolinguals showed sensitivity to sentence contexts in which expectations were not met (i.e., when unexpected words were encountered) in the form of a late, frontally-distributed positivity, but for bilinguals this effect was more complex. For both groups, performance on the prediction task was modulated by individual differences on the AX-CPT, a measure of inhibitory control. However, the bilinguals' reading performance in the L2 was affected not only by inhibitory control, but also by their performance on an L1 verbal fluency task that indexed language regulation and production capability, related to their language dominance and immersion context. Bilinguals with better regulation of the L1 generated a larger frontal positivity in response to unexpected words in the L2, an effect that was attenuated by inhibitory control ability. In contrast, bilinguals with lower regulatory ability generated a larger, late negativity, which was also mediated by control. These findings suggest that the ability to regulate the native language when immersed in a second language environment can influence mechanisms underlying the prediction process when reading in the L2. In addition, cognitive control ability, specifically inhibitory control, appears to mediate the difficulty readers incur when predictions are disconfirmed, not only in the native language, but also for proficient bilinguals reading in the L2. We argue that the mechanisms engaged during prediction in the L1 and L2 are fundamentally the same, and that what differs for bilinguals are the additional demands imposed by their language experience and language use.

      PubDate: 2018-04-15T23:44:10Z
      DOI: 10.1016/j.cognition.2018.03.001
      Issue No: Vol. 176 (2018)
       
  • Is adaptive control in language production mediated by learning?
    • Authors: Michael Freund; Nazbanou Nozari
      Pages: 107 - 130
      Abstract: Publication date: July 2018
      Source:Cognition, Volume 176
      Author(s): Michael Freund, Nazbanou Nozari
      Recent work using the Picture Word Interference (PWI) paradigm has revealed that language production, similar to non-verbal tasks, shows a robust Congruency Sequence Effect (CSE), defined as a decreased congruency effect following incongruent trials. Although CSE is considered an index of adaptive control, its mechanism is debated. In two experiments, we tested the predictions of a learning model of adaptive control in production, using a task-switching paradigm fully balanced to evaluate CSE on a PWI trial as a function of the congruency of a 2-back PWI trial (within-task CSE), as well as a 1-back trial belonging to a different task (cross-task CSE). The second task was a visuospatial task with congruent and incongruent trials in Experiment 1, and a self-paced reading task with ambiguous and unambiguous sentences in Experiment 2 that imposed a gap between the two PWI trials twice as long as that in Experiment 1. A learning model posits that CSE is the result of changes to the connection weights between task-specific representations and a control center, which leads to two predictions in our paradigm: (a) a robust within-task CSE unaffected by the intervening trial and the gap duration, and (b) an absent or reversed cross-task CSE. These predictions were contrasted with two versions of an activation model of CSE. In accord with the predictions of the learning model, we found robust within-task CSE in PWI in both Experiments with a comparable effect size. Similarly, evidence of within-task CSE was also found in the visuospatial and sentence reading tasks. On the other hand, examination of cross-task CSE from PWI to the other tasks and vice versa revealed either absent or reversed CSE. Collectively, these results support a learning model of adaptive control in language production.

      PubDate: 2018-04-15T23:44:10Z
      DOI: 10.1016/j.cognition.2018.03.009
      Issue No: Vol. 176 (2018)
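      A toy sketch of the contrast at stake (a simplification of a generic learning account, not the authors' implementation; the gain and baseline conflict cost are arbitrary values): if control adjustments are stored as task-specific connection weights, an incongruent trial reduces the congruency cost for that task the next time it occurs, regardless of intervening trials or elapsed time, and nothing transfers to the other task.

      GAIN = 0.5           # hypothetical learning rate for the control weight
      BASE_CONFLICT = 100  # hypothetical congruency cost in ms at zero control

      class TaskControl:
          def __init__(self):
              self.weight = 0.0                       # task-specific control strength

          def congruency_cost(self):
              return BASE_CONFLICT * (1.0 - self.weight)

          def experience(self, congruent):
              # Strengthen control after conflict, relax it after a congruent trial.
              target = 0.0 if congruent else 1.0
              self.weight += GAIN * (target - self.weight)

      pwi, visuospatial = TaskControl(), TaskControl()

      # An incongruent PWI trial, then an intervening trial of the other task, then PWI again.
      pwi.experience(congruent=False)
      visuospatial.experience(congruent=True)
      print("PWI congruency cost after its incongruent 2-back trial:", pwi.congruency_cost())
      print("Visuospatial congruency cost after its congruent trial:", visuospatial.congruency_cost())

      The PWI weight is untouched by the intervening trial or by the gap, so the within-task CSE persists, while the other task keeps its own weight and shows no transfer; these correspond to predictions (a) and (b) in the abstract.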
       
  • Unconscious decisional learning improves unconscious information
           processing
    • Authors: Alexandra Vlassova; Joel Pearson
      Pages: 131 - 139
      Abstract: Publication date: July 2018
      Source:Cognition, Volume 176
      Author(s): Alexandra Vlassova, Joel Pearson
      The idea that unconscious input can result in long-term learning or task improvement has been debated for decades, yet there is still little evidence to suggest that learning outside of awareness can produce meaningful changes to decision-making. Here we trained participants using noisy motion stimuli, which require the gradual accumulation of information until a decision can be reached. These stimuli were suppressed from conscious awareness by simultaneously presenting a dynamic dichoptic mask. We show that a short period of training on either a partially or fully suppressed motion stimulus resulted in improved accuracy when tested on a partially suppressed motion stimulus traveling in the orthogonal direction. We found this improvement occurred even when performance on the training task was at chance. Performance gains generalized across motion directions, suggesting that the improvement was the result of changes to the decisional mechanisms rather than perceptual. Interestingly, unconscious learning had a stronger effect on unconscious, compared to conscious decisional accumulation. We further show that a conscious coherent percept is necessary to reap the benefits of unconscious learning. Together, these data suggest that unconscious decisional processing can be improved via training.

      PubDate: 2018-04-15T23:44:10Z
      DOI: 10.1016/j.cognition.2018.02.018
      Issue No: Vol. 176 (2018)
       
  • Inter-process relations in spatial language: Feedback and graded
           compatibility
    • Authors: Holger Schultheis; Laura A. Carlson
      Pages: 140 - 158
      Abstract: Publication date: July 2018
      Source:Cognition, Volume 176
      Author(s): Holger Schultheis, Laura A. Carlson
      Mapping spatial expressions such as “behind the cup” to a spatial region requires two processes that have been largely explored independently: reference frame selection and spatial term assignment (Logan & Sadler, 1996). Reference frame selection carves a space into regions. Spatial term assignment evaluates these regions by determining the acceptability of the term for the given configuration. Here we present a systematic investigation of the relation and interplay of these two processes by asking whether (a) information from spatial term assignment feeds back to selection and (b) whether competition during selection is graded. In a series of simulation studies, we assess the performance of four computational models, each of which instantiates a unique combination of feedback (no feedback vs. feedback) and gradedness (all-or-none compatibility vs. graded compatibility). The results support two key observations about human spatial term use: First, reference frame selection and spatial term assignment proceed concurrently and in mutual interaction, with assignment information feeding back and influencing the selection process. Second, competition in reference frame selection is graded such that the strength of competition between different available reference frames increases continuously with decreasing similarity of the frames. As such, our work provides a new view on the components involved in spatial term use and their interplay, and suggests more broadly that the gradedness of competition may also be an important aspect of conflict and selection in other cognitive domains.

      PubDate: 2018-04-15T23:44:10Z
      DOI: 10.1016/j.cognition.2018.02.020
      Issue No: Vol. 176 (2018)
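      A schematic sketch of the 2x2 model space examined above (illustrative only, not the authors' simulations; the bias, acceptability, and similarity values are invented): crossing whether spatial-term assignment feeds back into reference-frame selection with whether competition between frames is all-or-none or graded by frame similarity yields four candidate architectures.

      from itertools import product

      FRAMES = {"relative": 0.6, "absolute": 0.4}         # hypothetical baseline selection biases
      ACCEPTABILITY = {"relative": 0.9, "absolute": 0.3}  # how well "behind" fits under each frame (toy values)
      SIMILARITY = 0.5                                    # hypothetical alignment of the two frames (0-1)

      def selection_strength(frame, feedback, graded):
          strength = FRAMES[frame]
          if feedback:                                    # assignment information feeds back into selection
              strength += ACCEPTABILITY[frame]
          other = [f for f in FRAMES if f != frame][0]
          # graded: dissimilar frames compete harder; all-or-none: full competition whenever frames differ
          competition = (1.0 - SIMILARITY) if graded else 1.0
          return strength - competition * FRAMES[other]

      for feedback, graded in product([False, True], repeat=2):
          scores = {f: round(selection_strength(f, feedback, graded), 2) for f in FRAMES}
          print(f"feedback={feedback}, graded={graded}: {scores}")

      The combination the reported simulations support is feedback together with graded competition; the sketch only makes the four architectural options concrete.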
       
  • Music reading expertise modulates hemispheric lateralization in English
           word processing but not in Chinese character processing
    • Authors: Sara Tze Kwan Li; Janet Hui-wen Hsiao
      Pages: 159 - 173
      Abstract: Publication date: July 2018
      Source:Cognition, Volume 176
      Author(s): Sara Tze Kwan Li, Janet Hui-wen Hsiao
      Music notation and English word reading both involve mapping horizontally arranged visual components to components in sound, in contrast to reading in logographic languages such as Chinese. Accordingly, music-reading expertise may influence English word processing more than Chinese character processing. Here we showed that musicians named English words significantly faster than non-musicians when words were presented in the left visual field/right hemisphere (RH) or the center position, suggesting an advantage of RH processing due to music reading experience. This effect was not observed in Chinese character naming. A follow-up ERP study showed that in a sequential matching task, musicians had reduced RH N170 responses to English non-words under the processing of musical segments as compared with non-musicians, suggesting a shared visual processing mechanism in the RH between music notation and English non-word reading. This shared mechanism may be related to the letter-by-letter, serial visual processing that characterizes RH English word recognition (e.g., Lavidor & Ellis, 2001), which may consequently facilitate English word processing in the RH in musicians. Thus, music reading experience may have differential influences on the processing of different languages, depending on their similarities in the cognitive processes involved.

      PubDate: 2018-04-15T23:44:10Z
      DOI: 10.1016/j.cognition.2018.03.010
      Issue No: Vol. 176 (2018)
       
  • The basic reproductive ratio as a link between acquisition and change in
           phonotactics
    • Authors: Andreas Baumann; Nikolaus Ritt
      Pages: 174 - 183
      Abstract: Publication date: July 2018
      Source:Cognition, Volume 176
      Author(s): Andreas Baumann, Nikolaus Ritt
      Language acquisition and change are thought to be causally connected. We demonstrate a method for quantifying the strength of this connection in terms of the ‘basic reproductive ratio’ of linguistic constituents. It represents a standardized measure of reproductive success, which can be derived both from diachronic and from acquisition data. By analyzing phonotactic English data, we show that the results of both types of derivation correlate, so that phonotactic acquisition indeed predicts phonotactic change, and vice versa. After drawing that general conclusion, we discuss the role of utterance frequency and show that the latter exhibits destabilizing effects only on late acquired items, which belong to the phonotactic periphery. We conclude that – at least in the evolution of English phonotactics – acquisition serves conservation, while innovation is more likely to occur in adult speech and affects items that are less entrenched but comparably frequent.

      PubDate: 2018-04-15T23:44:10Z
      DOI: 10.1016/j.cognition.2018.03.005
      Issue No: Vol. 176 (2018)
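      A hedged, illustrative calculation of the quantity named in the title (the rates below are placeholders, not the authors' estimates or estimation procedure): in the population-dynamic sense, a constituent's basic reproductive ratio compares how fast it is transmitted (e.g. acquired by learners) with how fast it is lost, and R0 > 1 marks a self-sustaining constituent.

      def basic_reproductive_ratio(acquisition_rate, loss_rate):
          """R0 = rate of successful transmission per unit rate of loss."""
          return acquisition_rate / loss_rate

      # Hypothetical per-generation rates for two word-final consonant clusters (placeholder values).
      examples = {
          "early acquired, stable cluster": (0.95, 0.40),
          "late acquired, peripheral cluster": (0.30, 0.45),
      }
      for item, (acq, loss) in examples.items():
          r0 = basic_reproductive_ratio(acq, loss)
          verdict = "expected to persist" if r0 > 1 else "expected to decline"
          print(f"{item}: R0 = {r0:.2f} -> {verdict}")

      The paper's claim is that this ratio can be estimated independently from acquisition data and from diachronic data, and that the two estimates correlate; the numbers above only illustrate how R0 separates persisting from declining items.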
       
  • Phonetic radicals, not phonological coding systems, support orthographic
           learning via self-teaching in Chinese
    • Authors: Luan Li; Hua-Chen Wang; Anne Castles; Miao-Ling Hsieh; Eva Marinus
      Pages: 184 - 194
      Abstract: Publication date: July 2018
      Source:Cognition, Volume 176
      Author(s): Luan Li, Hua-Chen Wang, Anne Castles, Miao-Ling Hsieh, Eva Marinus
      According to the self-teaching hypothesis (Share, 1995), phonological decoding is fundamental to acquiring orthographic representations of novel written words. However, phonological decoding is not straightforward in non-alphabetic scripts such as Chinese, where words are presented as characters. Here, we present the first study investigating the role of phonological decoding in orthographic learning in Chinese. We examined two possible types of phonological decoding: the use of phonetic radicals, an internal phonological aid, and the use of Zhuyin, an external phonological coding system. Seventy-three Grade 2 children were taught the pronunciations and meanings of twelve novel compound characters over four days. They were then exposed to the written characters in short stories, and were assessed on their reading accuracy and on their subsequent orthographic learning via orthographic choice and spelling tasks. The novel characters were assigned three different types of pronunciation in relation to their phonetic radical – (1) a pronunciation that is identical to the phonetic radical in isolation; (2) a common alternative pronunciation associated with the phonetic radical when it appears in other characters; and (3) a pronunciation that is unrelated to the phonetic radical. The presence of Zhuyin was also manipulated. The children read the novel characters more accurately when phonological cues from the phonetic radicals were available and in the presence of Zhuyin. However, only the phonetic radicals facilitated orthographic learning. The findings provide the first empirical evidence of orthographic learning via self-teaching in Chinese, and reveal how phonological decoding functions to support learning in non-alphabetic writing systems.

      PubDate: 2018-04-15T23:44:10Z
      DOI: 10.1016/j.cognition.2018.02.025
      Issue No: Vol. 176 (2018)
       
  • Learning to recognize unfamiliar talkers: Listeners rapidly form
           representations of facial dynamic signatures
    • Authors: Alexandra Jesse; Michael Bartoli
      Pages: 195 - 208
      Abstract: Publication date: July 2018
      Source:Cognition, Volume 176
      Author(s): Alexandra Jesse, Michael Bartoli
      Seeing the motion of a talking face can be sufficient to recognize personally highly familiar speakers, suggesting that dynamic facial information is stored in long-term representations for familiar speakers. In the present study, we tested whether talking-related facial dynamic information can guide the learning of unfamiliar speakers. Participants were asked to identify speakers from configuration-normalized point-light displays showing only the biological motion that speakers produced while saying short sentences. During an initial learning phase, feedback was given. During test, listeners identified speakers from point-light displays of the training sentences and of new sentences. Listeners learned to identify two speakers, and four speakers in another experiment, from visual dynamic information alone. Learning was evident already after very little exposure. Furthermore, listeners formed abstract representations of visual dynamic signatures that allowed them to recognize speakers at test even from new linguistic materials. Control experiments showed that any potentially remaining static information in the point-light displays was not sufficient to guide learning and that listeners learned to recognize the identity, rather than the sex, of the speakers, as learning was also found when speakers were of the same sex. Overall, these results demonstrate that listeners can learn to identify unfamiliar speakers from the motion they produce during talking. Listeners thus establish abstract representations of the talking-related dynamic facial motion signatures of unfamiliar speakers already from limited exposure.

      PubDate: 2018-04-15T23:44:10Z
      DOI: 10.1016/j.cognition.2018.03.018
      Issue No: Vol. 176 (2018)
       
  • Looking into the future: An inward bias in aesthetic experience driven
           only by gaze cues
    • Authors: Yi-Chia Chen; Clara Colombatto; Brian J. Scholl
      Pages: 209 - 214
      Abstract: Publication date: July 2018
      Source:Cognition, Volume 176
      Author(s): Yi-Chia Chen, Clara Colombatto, Brian J. Scholl
      The inward bias is an especially powerful principle of aesthetic experience: In framed images (e.g. photographs), we prefer peripheral figures that face inward (vs. outward). Why does this bias exist? Since agents tend to act in the direction in which they are facing, one intriguing possibility is that the inward bias reflects a preference to view scenes from a perspective that will allow us to witness those predicted future actions. This account has been difficult to test with previous displays, in which facing direction is often confounded with either global shape profiles or the relative locations of salient features (since e.g. someone’s face is generally more visually interesting than the back of their head). But here we demonstrate a robust inward bias in aesthetic judgment driven by a cue that is socially powerful but visually subtle: averted gaze. Subjects adjusted the positions of people in images to maximize the images’ aesthetic appeal. People with direct gaze were not placed preferentially in particular regions, but people with averted gaze were reliably placed so that they appeared to be looking inward. This demonstrates that the inward bias can arise from visually subtle features, when those features signal how future events may unfold.

      PubDate: 2018-04-15T23:44:10Z
      DOI: 10.1016/j.cognition.2018.02.010
      Issue No: Vol. 176 (2018)
       
  • Grammatical gender affects gender perception: Evidence for the
           structural-feedback hypothesis
    • Authors: Sayaka Sato; Panos Athanasopoulos
      Pages: 220 - 231
      Abstract: Publication date: July 2018
      Source:Cognition, Volume 176
      Author(s): Sayaka Sato, Panos Athanasopoulos
      Two experiments assessed the extent to which grammatical gender provides a predictive basis for bilinguals’ judgments about perceptual gender. In both experiments, French-English bilinguals and native English monolinguals were consecutively presented with images of objects manipulated for their (i) conceptual gender association and (ii) grammatical gender category and were instructed to make a decision on a subsequent target face. The experiments differed in the implicitness of the association between the object primes and target faces. Results revealed that when prior knowledge sources such as conceptual gender can be strategically used to resolve the immediate task (Experiment 1), this information was readily extracted and employed. However, grammatical gender demonstrated a more robust and persisting effect on the bilinguals’ judgments, indicating that the retrieval of obligatory grammatical information is automatic and modulates perceptual judgments (Experiment 2). These results suggest that grammar enables an effective and robust means to access prior knowledge which may be independent of task requirements.

      PubDate: 2018-04-15T23:44:10Z
      DOI: 10.1016/j.cognition.2018.03.014
      Issue No: Vol. 176 (2018)
       
  • The sentence wrap-up dogma
    • Authors: Laurie A. Stowe; Edith Kaan; Laura Sabourin; Ryan C. Taylor
      Pages: 232 - 247
      Abstract: Publication date: July 2018
      Source:Cognition, Volume 176
      Author(s): Laurie A. Stowe, Edith Kaan, Laura Sabourin, Ryan C. Taylor
      Current sentence processing research has focused on early effects of the on-line incremental processes that are performed at each word or constituent during processing. However, less attention has been devoted to what happens at the end of the clause or sentence. More specifically, over the last decade and a half, a lot of effort has been put into avoiding measuring event-related brain potentials (ERPs) at the final word of a sentence, because of the possible effects of sentence wrap-up. This article reviews the evidence on how and when sentence wrap-up impacts behavioral and ERP results. Even though the end of the sentence is associated with a positive-going ERP wave, thus far this effect has not been associated with any factors hypothesized to affect wrap-up. In addition, ERP responses to violations have not been affected by this positivity. “Sentence-final” negativities reported in the literature are not unique to sentence final positions, nor do they obscure or distort ERP effects associated with linguistic manipulations. Finally, the empirical evidence used to argue that sentence-final ERPs are different from those recorded at sentence-medial positions is weak at most. Measuring ERPs at sentence-final positions is therefore certainly not to be avoided at all costs, especially not in cases where the structure of the language under investigation requires it. More importantly, researchers should follow rigorous method in their experimental design, avoid decision tasks which may induce ERP confounds, and ensure all other possible explanations for results are considered. Although this article is directed at a particular dogma from a particular literature, this review shows that it is important to reassess what is regarded as “general knowledge” from time to time.

      PubDate: 2018-04-15T23:44:10Z
      DOI: 10.1016/j.cognition.2018.03.011
      Issue No: Vol. 176 (2018)
       
  • Direct speech quotations promote low relative-clause attachment in silent
           reading of English
    • Authors: Bo Yao; Christoph Scheepers
      Pages: 248 - 254
      Abstract: Publication date: July 2018
      Source:Cognition, Volume 176
      Author(s): Bo Yao, Christoph Scheepers
      The implicit prosody hypothesis (Fodor, 1998, 2002) proposes that silent reading coincides with a default, implicit form of prosody to facilitate sentence processing. Recent research demonstrated that a more vivid form of implicit prosody is mentally simulated during silent reading of direct speech quotations (e.g., Mary said, “This dress is beautiful”), with neural and behavioural consequences (e.g., Yao, Belin, & Scheepers, 2011; Yao & Scheepers, 2011). Here, we explored the relation between ‘default’ and ‘simulated’ implicit prosody in the context of relative-clause (RC) attachment in English. Apart from confirming a general low RC-attachment preference in both production (Experiment 1) and comprehension (Experiments 2 and 3), we found that during written sentence completion (Experiment 1) or when reading silently (Experiment 2), the low RC-attachment preference was reliably enhanced when the critical sentences were embedded in direct speech quotations as compared to indirect speech or narrative sentences. However, when reading aloud (Experiment 3), direct speech did not enhance the general low RC-attachment preference. The results from Experiments 1 and 2 suggest a quantitative boost to implicit prosody (via auditory perceptual simulation) during silent production/comprehension of direct speech. By contrast, when reading aloud (Experiment 3), prosody becomes equally salient across conditions due to its explicit nature; indirect speech and narrative sentences thus become as susceptible to prosody-induced syntactic biases as direct speech. The present findings suggest a shared cognitive basis between default implicit prosody and simulated implicit prosody, providing a new platform for studying the effects of implicit prosody on sentence processing.

      PubDate: 2018-04-15T23:44:10Z
      DOI: 10.1016/j.cognition.2018.03.017
      Issue No: Vol. 176 (2018)
       
  • The development of principled connections and kind representations
    • Authors: Paul Haward; Laura Wagner; Susan Carey; Sandeep Prasada
      Pages: 255 - 268
      Abstract: Publication date: July 2018
      Source:Cognition, Volume 176
      Author(s): Paul Haward, Laura Wagner, Susan Carey, Sandeep Prasada
      Kind representations draw an important distinction between properties that are understood as existing in instances of a kind by virtue of their being the kind of thing they are and properties that are not understood in this manner. For example, the property of barking for the kind dog is understood as being had by dogs by virtue of the fact that they are dogs. These properties are said to have a principled connection to the kind. In contrast, the property of wearing a collar is not understood as existing in instances by virtue of their being dogs, despite the fact that a large percentage of dogs wear collars. Such properties are said to have a statistical connection to the kind. Two experiments tested two signatures of principled connections in 4–7 year olds and adults: (i) that principled connections license normative expectations (e.g., we judge there to be something wrong with a dog that does not bark), and (ii) that principled connections license formal explanations which explain the existence of a property by reference to the kind (e.g., that barks because it is a dog). Experiment 1 showed that both the children and adults have normative expectations for properties that have a principled connection to a kind, but not those that have a mere statistical connection to a kind. Experiment 2 showed that both children and adults are more likely to provide a formal explanation when explaining the existence of properties with a principled connection to a kind than properties with statistical connections to their kinds. Both experiments showed no effect of age (over ages 4, 7, and adulthood) on the extent to which participants differentiated principled and statistical connections. We discuss the implications of the results for theories of conceptual representation and for the structure of explanation.

      PubDate: 2018-04-15T23:44:10Z
      DOI: 10.1016/j.cognition.2018.02.001
      Issue No: Vol. 176 (2018)
       
  • Inattentional numbness and the influence of task difficulty
    • Abstract: Publication date: September 2018
      Source:Cognition, Volume 178
      Author(s): Sandra Murphy, Polly Dalton
      Research suggests that clearly detectable stimuli can be missed when attention is focused elsewhere, particularly when the observer is engaged in a complex task. Although this phenomenon has been demonstrated in vision and audition, much less is known about the possibility of a similar phenomenon within touch. Across two experiments, we investigated reported awareness of an unexpected tactile event as a function of the difficulty of a concurrent tactile task. Participants were presented with sequences of tactile stimuli to one hand and performed either an easy or a difficult counting task. On the final trial, an additional tactile stimulus was concurrently presented to the unattended hand. Retrospective reports revealed that more participants in the difficult (vs. easy) condition remained unaware of this unexpected stimulus, even though it was clearly detectable under full attention conditions. These experiments are the first to demonstrate inattentional numbness modulated by concurrent tactile task difficulty.

      PubDate: 2018-05-16T21:55:24Z
       
  • Move to learn: Integrating spatial information from multiple viewpoints
    • Abstract: Publication date: September 2018
      Source:Cognition, Volume 178
      Author(s): Corinne A. Holmes, Nora S. Newcombe, Thomas F. Shipley
      Recalling a spatial layout from multiple orientations – spatial flexibility – is challenging, even when the global configuration can be viewed from a single vantage point, but more so when it must be viewed piecemeal. In the current study, we examined whether experiencing the transition between multiple viewpoints enhances spatial memory and flexible recall for a spatial configuration viewed simultaneously (Exp. 1) and sequentially (Exp. 2), whether the type of transition matters, and whether action provides an additional advantage over passive experience. In Experiment 1, participants viewed an array of dollhouse furniture from four viewpoints, but with all furniture simultaneously visible. In Experiment 2, participants viewed the same array piecemeal, from four partitioned viewpoints that allowed for viewing only a segment at a time. The transition between viewpoints involved rotation of the array or participant movement around it. Rotation and participant movement were passively experienced or actively generated. The control condition presented the dollhouse as a series of static views. Across both experiments, participant movement significantly enhanced spatial memory relative to array rotation or static views. However, in Exp. 2, there was a further advantage for actively walking around the array compared to being passively pushed. These findings suggest that movement around a stable environment is key to spatial memory and flexible recall, with action providing an additional boost to the integration of temporally segmented spatial events. Thus, spatial memory may be more flexible than prior data indicate, when studied under more natural acquisition conditions.

      PubDate: 2018-05-16T21:55:24Z
       
  • Mechanisms of value-learning in the guidance of spatial attention
    • Abstract: Publication date: September 2018
      Source:Cognition, Volume 178
      Author(s): Brian A. Anderson, Haena Kim
      The role of associative reward learning in the guidance of feature-based attention is well established. The extent to which reward learning can modulate spatial attention has been much more controversial. At least one demonstration of a persistent spatial attention bias following space-based associative reward learning has been reported. At the same time, multiple other experiments have been published failing to demonstrate enduring attentional biases towards locations at which a target, if found, yields high reward. This is in spite of evidence that participants use reward structures to inform their decisions where to search, leading some to suggest that, unlike feature-based attention, spatial attention may be impervious to the influence of learning from reward structures. Here, we demonstrate a robust bias towards regions of a scene that participants were previously rewarded for selecting. This spatial bias relies on representations that are anchored to the configuration of objects within a scene. The observed bias appears to be driven specifically by reinforcement learning, and can be observed with equal strength following non-reward corrective feedback. The time course of the bias is consistent with a transient shift of attention, rather than a strategic search pattern, and is evident in eye movement patterns during free viewing. Taken together, our findings reconcile previously conflicting reports and offer an integrative account of how learning from feedback shapes the spatial attention system.

      PubDate: 2018-05-16T21:55:24Z
       
  • The language of smell: Connecting linguistic and psychophysical properties
           of odor descriptors
    • Abstract: Publication date: September 2018
      Source:Cognition, Volume 178
      Author(s): Georgios Iatropoulos, Pawel Herman, Anders Lansner, Jussi Karlgren, Maria Larsson, Jonas K. Olofsson
      The olfactory sense is a particularly challenging domain for cognitive science investigations of perception, memory, and language. Although many studies show that odors often are difficult to describe verbally, little is known about the associations between olfactory percepts and the words that describe them. Quantitative models of how odor experiences are described in natural language are therefore needed to understand how odors are perceived and communicated. In this study, we develop a computational method to characterize the olfaction-related semantic content of words in a large text corpus of internet sites in English. We introduce two new metrics: olfactory association index (OAI, how strongly a word is associated with olfaction) and olfactory specificity index (OSI, how specific a word is in its description of odors). We validate the OAI and OSI metrics using psychophysical datasets by showing that terms with high OAI have high ratings of perceived olfactory association and are used to describe highly familiar odors. In contrast, terms with high OSI have high inter-individual consistency in how they are applied to odors. Finally, we analyze Dravnieks’s (1985) dataset of odor ratings in terms of OAI and OSI. This analysis reveals that terms that are used broadly (applied often but with moderate ratings) tend to be olfaction-unrelated and abstract (e.g., “heavy” or “light”; low OAI and low OSI) while descriptors that are used selectively (applied seldom but with high ratings) tend to be olfaction-related (e.g., “vanilla” or “licorice”; high OAI). Thus, OAI and OSI provide behaviorally meaningful information about olfactory language. These statistical tools are useful for future studies of olfactory perception and cognition, and might help integrate research on odor perception, neuroimaging, and corpus-based linguistic models of semantic organization.
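      The abstract defines OAI and OSI only at a conceptual level. The sketch below illustrates one plausible way an olfactory association index could be computed from word vectors; the seed terms, the cosine-similarity formulation, and the function names are assumptions made here for illustration, not the authors' published method.

      import numpy as np

      def cosine(u, v):
          # Cosine similarity between two word vectors.
          return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

      def olfactory_association_index(word_vec, seed_vecs):
          # Hypothetical OAI: mean similarity of a word to a seed set of
          # unambiguous odor terms (e.g., "smell", "odor", "scent").
          # Higher values = more strongly olfaction-related. This is an
          # assumed formulation, not the paper's exact metric.
          return float(np.mean([cosine(word_vec, s) for s in seed_vecs]))

      An olfactory specificity index could analogously quantify how narrowly a word's corpus contexts cluster around particular odor sources, but the exact operationalization is the paper's own.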

      PubDate: 2018-05-16T21:55:24Z
       
  • The norm of assertion: Empirical data
    • Abstract: Publication date: August 2018
      Source:Cognition, Volume 177
      Author(s): Markus Kneer
      Assertions are speech acts by means of which we express beliefs. As such they are at the heart of our linguistic and social practices. Recent research has focused extensively on the question whether the speech act of assertion is governed by norms, and if so, under what conditions it is acceptable to make an assertion. Standard theories propose, for instance, that one should only assert that p if one knows that p (the knowledge account), or that one should only assert that p if p is true (the truth account). In a series of four experiments, this question is addressed empirically. Contrary to previous findings, knowledge turns out to be a poor predictor of assertability, and the norm of assertion is not factive either. The studies here presented provide empirical evidence in favour of the view that a speaker is warranted to assert that p only if her belief that p is justified.

      PubDate: 2018-05-16T21:55:24Z
       
  • Fast mapping word meanings across trials: Young children forget all but
           their first guess
    • Abstract: Publication date: August 2018
      Source:Cognition, Volume 177
      Author(s): Athulya Aravind, Jill de Villiers, Amy Pace, Hannah Valentine, Roberta Golinkoff, Kathy Hirsh-Pasek, Aquiles Iglesias, Mary Sweig Wilson
      Do children learn a new word by tracking co-occurrences between words and referents across multiple instances (“cross-situational learning” models), or is word learning a “one-track” process, where learners maintain a single hypothesis about the possible referent, which may be verified or falsified in future occurrences (“propose-but-verify” models)? Using a novel word-learning task, we ask which learning procedure is utilized by preschool-aged children. We report on findings from three studies comparing the word-learning strategies across different populations of child learners: monolingual English learners, Spanish-English dual language learners, and learners at risk for language delay. In all three studies, we ask what, if anything, is retained from prior exposures and whether the amount of information retained changes as children get older. The ability to make a good initial hypothesis was a function of various factors, including language ability and experience, but across the board, children were no better than chance after a wrong initial hypothesis. This suggests that children do not retain multiple meaning hypotheses across learning instances, lending support to the propose-but-verify models.
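      To make the contrast between the two model classes concrete, here is a minimal sketch of both learning procedures, assuming each trial pairs a word with a set of candidate referents. The data format and function names are illustrative and not taken from the study.

      import random
      from collections import defaultdict

      def cross_situational(trials):
          # Accumulate word-referent co-occurrence counts across all trials,
          # then pick the most frequent referent for each word.
          counts = defaultdict(lambda: defaultdict(int))
          for word, referents in trials:
              for r in referents:
                  counts[word][r] += 1
          return {w: max(rs, key=rs.get) for w, rs in counts.items()}

      def propose_but_verify(trials):
          # Keep a single hypothesis per word; retain it only if the current
          # trial verifies it, otherwise guess anew from the present referents.
          hypothesis = {}
          for word, referents in trials:
              if hypothesis.get(word) not in referents:
                  hypothesis[word] = random.choice(list(referents))
          return hypothesis

      On toy data such as [("dax", {"ball", "cup"}), ("dax", {"ball", "shoe"})], the cross-situational learner settles on "ball", whereas the propose-but-verify learner does so only if its single current guess happens to be "ball".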

      PubDate: 2018-05-16T21:55:24Z
       
  • What’s fair? How children assign reward to members of teams with
           differing causal structures
    • Abstract: Publication date: August 2018
      Source:Cognition, Volume 177
      Author(s): Karla Koskuba, Tobias Gerstenberg, Hannah Gordon, David Lagnado, Anne Schlottmann
      How do children reward individual members of a team that has just won or lost a game? We know that from pre-school age, children consider agents’ performance when allocating reward. Here we assess whether children can go further and appreciate performance in context: The same pattern of performance can contribute to a team outcome in different ways, depending on the underlying rule framework. Two experiments, with three age groups (4/5-year-olds, 6/7-year-olds, and adults), varied performance of team members, with the same performance patterns considered under three different game rules for winning or losing. These three rules created distinct underlying causal structures (additive, conjunctive, disjunctive) for how individual performance affected the overall team outcome. Even the youngest children differentiated between different game rules in their reward allocations. Rather than only rewarding individual performance, or whether the team won/lost, children were sensitive to the team structure and how players’ performance contributed to the win/loss under each of the three game rules. Not only do young children consider it fair to allocate resources based on merit, but they are also sensitive to the causal structure of the situation which dictates how individual contributions combine to determine the team outcome.
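      As a concrete illustration of the three causal structures named in the abstract, the toy function below scores a team outcome from individual successes under each rule; the threshold value and the parameterisation are assumptions for illustration only, not the games used in the experiments.

      def team_wins(successes, rule, threshold=2):
          # successes: list of booleans, one per team member.
          if rule == "additive":
              # Enough individual successes, summed, win the game.
              return sum(successes) >= threshold
          if rule == "conjunctive":
              # Every member must succeed for the team to win.
              return all(successes)
          if rule == "disjunctive":
              # A single successful member is enough.
              return any(successes)
          raise ValueError(f"unknown rule: {rule}")

      Under these rules the same performance pattern, e.g. [True, False, True], wins additively (with a threshold of 2) and disjunctively but loses conjunctively, which is why reward allocations that track contribution must be sensitive to the rule in force.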

      PubDate: 2018-05-16T21:55:24Z
       
  • Pointing perception is precise
    • Abstract: Publication date: August 2018
      Source:Cognition, Volume 177
      Author(s): S.M. Cooney, N. Brady, A. McKinney
      The spontaneity and ease with which we point understates the gesture’s significance to understanding cognition. Onset of pointing in infancy predicts early word acquisition and signals a capacity for shared intentionality. Yet, notwithstanding its importance, there is little research on the perception of pointing and its referents. Here we show that perceptual acuity for discerning where another person is pointing is remarkably accurate. Thresholds, as low as 0.5° of visual angle across an interpersonal distance of ∼2 m, are modulated by the referent’s location in space and the hand used to point and remain constant when the pointer’s eyes are occluded from view and when ‘embodiment’ cues are enhanced or minimized. Pointing with the index finger not only directs attention toward a general region of space but the morphology of arm, hand and finger can be used to discern the location of the pointer’s attention with precision.
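      For scale, the reported threshold can be converted into a lateral offset at the pointer's location with elementary trigonometry; the two-metre viewing distance is the approximate value given in the abstract.

      import math

      viewing_distance_m = 2.0           # interpersonal distance from the abstract (~2 m)
      threshold_deg = 0.5                # reported discrimination threshold in visual angle
      offset_m = viewing_distance_m * math.tan(math.radians(threshold_deg))
      print(f"{offset_m * 100:.1f} cm")  # ≈ 1.7 cm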

      PubDate: 2018-05-16T21:55:24Z
       
  • The development of territory-based inferences of ownership
    • Abstract: Publication date: August 2018
      Source:Cognition, Volume 177
      Author(s): Brandon W. Goulding, Ori Friedman
      Legal systems often rule that people own objects in their territory. We propose that an early-developing ability to make territory-based inferences of ownership helps children address informational demands presented by ownership. Across 6 experiments (N = 504), we show that these inferences develop between ages 3 and 5 and stem from two aspects of the psychology of ownership. First, we find that a basic ability to infer that people own objects in their territory is already present at age 3 (Experiment 1). Children even make these inferences when the territory owner unintentionally acquired the objects and was unaware of them (Experiments 2 and 3). Second, we find that between ages 3 and 5, children come to consider past events in these judgments. They move from solely considering the current location of an object in territory-based inferences, to also considering and possibly inferring where it originated (Experiments 4 to 6). Together, these findings suggest that territory-based inferences of ownership are unlikely to be constructions of the law. Instead, they may reflect basic intuitions about ownership that operate from early in development.

      PubDate: 2018-05-16T21:55:24Z
       
  • The role of perspective in event segmentation
    • Abstract: Publication date: August 2018
      Source:Cognition, Volume 177
      Author(s): Khena M. Swallow, Jovan T. Kemp, Ayse Candan Simsek
      People divide their ongoing experience into meaningful events. This process, event segmentation, is strongly associated with visual input: when visual features change, people are more likely to segment. However, the nature of this relationship is unclear. Segmentation could be bound to specific visual features, such as actor posture. Or, it could be based on changes in the activity that are correlated with visual features. This study distinguished between these two possibilities by examining whether segmentation varies across first- and third-person perspectives. In two experiments, observers identified meaningful events in videos of actors performing everyday activities, such as eating breakfast or doing laundry. Each activity was simultaneously recorded from a first-person perspective and a third-person perspective. These videos presented identical activities but differed in their visual features. If segmentation is tightly bound to visual features then observers should identify different events in first- and third-person videos. In addition, the relationship between segmentation and visual features should remain unchanged. Neither prediction was supported. Though participants sometimes identified more events in first-person videos, the events they identified were mostly indistinguishable from those identified for third-person videos. In addition, the relationship between the video’s visual features and segmentation changed across perspectives, further demonstrating a partial dissociation between segmentation and visual input. Event segmentation appears to be robust to large variations in sensory information as long as the content remains the same. Segmentation mechanisms appear to flexibly use sensory information to identify the structure of the underlying activity.

      PubDate: 2018-05-16T21:55:24Z
       
  • Episodic traces and statistical regularities: Paired associate learning in
           typical and dyslexic readers
    • Abstract: Publication date: August 2018
      Source:Cognition, Volume 177
      Author(s): Manon Wyn Jones, Jan-Rouke Kuipers, Sinead Nugent, Angelina Miley, Gary Oppenheim
      Learning visual-phonological associations is a key skill underlying successful reading acquisition. However, we are yet to understand the cognitive mechanisms that enable efficient learning in good readers, and those which are aberrant in individuals with developmental dyslexia. Here, we use a repeated cued-recall task to examine how typical and reading-impaired adults acquire novel associations between visual and phonological stimuli, incorporating a looking-at-nothing paradigm to probe implicit memory for target locations. Cued recall accuracy revealed that typical readers’ recall of novel phonological associates was better than dyslexic readers’ recall, and it also improved more with repetition. Eye fixation-contingent error analyses suggest that typical readers’ greater improvement from repetition reflects their more robust encoding and/or retrieval of each instance in which a given pair was presented: whereas dyslexic readers tended to recall a phonological target better when fixating its most recent location, typical readers showed this pattern more strongly when the target location was consistent across multiple trials. Thus, typical readers’ greater success in reading acquisition may derive from their better use of statistical contingencies to identify consistent stimulus features across multiple exposures. We discuss these findings in relation to the role of implicit memory in forming new visual-phonological associations as a foundational skill in reading, and areas of weakness in developmental dyslexia.

      PubDate: 2018-05-16T21:55:24Z
       
  • Linguistic entrenchment: Prior knowledge impacts statistical learning
           performance
    • Abstract: Publication date: August 2018
      Source:Cognition, Volume 177
      Author(s): Noam Siegelman, Louisa Bogaerts, Amit Elazar, Joanne Arciuli, Ram Frost
      Statistical Learning (SL) is typically considered to be a domain-general mechanism by which cognitive systems discover the underlying statistical regularities in the input. Recent findings, however, show clear differences in processing regularities across modalities and stimuli as well as low correlations between performance on visual and auditory tasks. Why does a presumably domain-general mechanism show distinct patterns of modality and stimulus specificity? Here we claim that the key to this puzzle lies in the prior knowledge that learners bring to the learning task. Specifically, we argue that learners’ already entrenched expectations about speech co-occurrences from their native language impact what they learn from novel auditory verbal input. In contrast, learners are free of such entrenchment when processing sequences of visual material such as abstract shapes. We present evidence from three experiments supporting this hypothesis by showing that auditory-verbal tasks display distinct item-specific effects resulting in low correlations between test items. In contrast, non-verbal tasks – visual and auditory – show high correlations between items. Importantly, we also show that individual performance in visual and auditory SL tasks that do not implicate prior knowledge regarding co-occurrence of elements is highly correlated. In a fourth experiment, we present further support for the entrenchment hypothesis by showing that the variance in performance between different stimuli in auditory-verbal statistical learning tasks can be traced back to their resemblance to participants' native language. We discuss the methodological and theoretical implications of these findings, focusing on models of domain generality/specificity of SL.

      PubDate: 2018-05-16T21:55:24Z
       
  • The influence of religious concepts on the effects of blame appraisals on
           negative emotions
    • Abstract: Publication date: August 2018
      Source:Cognition, Volume 177
      Author(s): Eddie M.W. Tong, Alan Q.H. Teo
      This research examined the regulatory effects of religious concepts on appraisal-emotion processes, focusing on concepts related to God and the relationships between blame appraisals and anger and guilt. In two experimental studies (Studies 1 and 2), blame appraisals were manipulated while participants were exposed to a God or neutral prime, in the context of a failed laboratory task. In an event-sampling study (Study 3), daily blame appraisals and emotions were measured repeatedly in naturalistic environments, and their relationships under high perceived moral unacceptability were examined in relation to participants’ tendency to focus on God (God-focus). All three studies consistently found evidence that higher activation of God concepts was associated with a weaker relationship between other-blame and anger. In contrast, God concepts did not moderate the relationship between blame and guilt. The results also indicate that both self- and other-blame can contribute to guilt, and that God concepts exert no consistent effects on blame appraisals themselves. These findings support the God-prosociality link, imply that supernatural monitoring effects influence anger but not guilt, and suggest that thoughts of God can lower anger but neither mitigate nor magnify guilt.

      PubDate: 2018-05-16T21:55:24Z
       
  • Lucky or clever? From expectations to responsibility judgments
    • Abstract: Publication date: August 2018
      Source:Cognition, Volume 177
      Author(s): Tobias Gerstenberg, Tomer D. Ullman, Jonas Nagel, Max Kleiman-Weiner, David A. Lagnado, Joshua B. Tenenbaum
      How do people hold others responsible for the consequences of their actions? We propose a computational model that attributes responsibility as a function of what the observed action reveals about the person, and the causal role that the person’s action played in bringing about the outcome. The model first infers what type of person someone is from having observed their action. It then compares a prior expectation of how a person would behave with a posterior expectation after having observed the person’s action. The model predicts that a person is blamed for negative outcomes to the extent that the posterior expectation is lower than the prior, and credited for positive outcomes if the posterior is greater than the prior. We model the causal role of a person’s action by using a counterfactual model that considers how close the action was to having been pivotal for the outcome. The model captures participants’ responsibility judgments to a high degree of quantitative accuracy across three experiments that cover a range of different situations. It also solves an existing puzzle in the literature on the relationship between action expectations and responsibility judgments. Whether an unexpected action yields more or less credit depends on whether the action was diagnostic for good or bad future performance.
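      A minimal sketch of the expectation-based component described in the abstract, assuming a simple additive combination with counterfactual pivotality; the paper's actual model is more elaborate, and the weighting and function names here are purely illustrative.

      def responsibility(prior, posterior, pivotality, outcome_positive, w=0.5):
          # prior / posterior: expected performance before vs. after observing the action.
          # pivotality: how close the action was to being pivotal for the outcome (0-1).
          diff = posterior - prior
          # Blame for negative outcomes when the posterior falls below the prior;
          # credit for positive outcomes when it rises above the prior.
          expectation_part = max(diff, 0.0) if outcome_positive else max(-diff, 0.0)
          return w * expectation_part + (1 - w) * pivotality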

      PubDate: 2018-05-16T21:55:24Z
       
  • Intersentential coreference expectations reflect mental models of events
    • Abstract: Publication date: August 2018
      Source:Cognition, Volume 177
      Author(s): Theres Grüter, Aya Takeda, Hannah Rohde, Amy J. Schafer
      Comprehenders’ perception of the world is mediated by the mental models they construct. During discourse processing, incoming information allows comprehenders to update their model of the events being described. At the same time, comprehenders use these models to generate expectations about who or what will be mentioned next. The temporal dynamics of this interdependence between language processing and mental event representation has been difficult to disentangle. The present visual world eye-tracking experiment measures listeners’ coreference expectations during an intersentential pause between a sentence about a transfer-of-possession event and a continuation mentioning either its Source or Goal. We found a temporally dispersed but sustained preference for fixating the Goal that was significantly greater when the event was described as completed rather than incomplete (passed versus was passing). This aligns with reported offline sensitivity to event structure, as conveyed via verb aspect, and provides new evidence that our mental model of an event leads to early and, crucially, proactive expectations about subsequent mention in the upcoming discourse.

      PubDate: 2018-05-16T21:55:24Z
       
  • Post-decision wagering after perceptual judgments reveals bi-directional
           certainty readouts
    • Authors: Caio Moreira; Max Rollwage; Kristin Kaduk; Melanie Wilke; Igor Kagan
      Abstract: Publication date: July 2018
      Source:Cognition, Volume 176
      Author(s): Caio M. Moreira, Max Rollwage, Kristin Kaduk, Melanie Wilke, Igor Kagan
      Humans and other animals constantly evaluate their decisions in order to learn and behave adaptively. Experimentally, such evaluation processes are accessed using metacognitive reports made after decisions, typically using verbally formulated confidence scales. When subjects report high confidence, it reflects a high certainty of being correct, but a low confidence might signify either low certainty about the outcome, or a high certainty of being incorrect. Hence, metacognitive reports might reflect not only different levels of decision certainty, but also two certainty directions (certainty of being correct and certainty of being incorrect). It is important to test if such bi-directional processing can be measured because, for decision-making under uncertainty, information about being incorrect is as important as information about being correct for guidance of subsequent behavior. We were able to capture implicit bi-directional certainty readouts by asking subjects to bet money on their perceptual decision accuracy using a six-grade wager scale (post-decision wagering, PDW). To isolate trial-specific aspects of metacognitive judgments, we used pre-decision wagering (wagering before the perceptual decision) to subtract, from PDW trials, influences resulting from non-trial-specific assessment of expected difficulty and psychological biases. This novel design allowed independent quantification of certainty of being correct and certainty of being incorrect, showing that subjects were able to read out certainty in a bi-directional manner. Certainty readouts about being incorrect were particularly associated with metacognitive sensitivity exceeding perceptual sensitivity (i.e. meta-d′ > d′), suggesting that such enhanced metacognitive efficiency is driven by information about incorrect decisions. Readouts of certainty in both directions increased on easier trials, and both certainty directions were also associated with faster metacognitive reaction times, indicating that certainty of being incorrect was not confounded with low certainty. Finally, both readouts influenced the amount of money subjects earned through PDW, suggesting that bi-directional readouts are important for planning future actions when feedback about previous decisions is unavailable.
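      The abstract's comparison of metacognitive and perceptual sensitivity (meta-d′ > d′) rests on standard signal detection measures. The snippet below computes type-1 d′ from hit and false-alarm rates; estimating meta-d′ requires fitting the full type-2 signal detection model (e.g., Maniscalco and Lau's procedure), which goes beyond this sketch.

      from scipy.stats import norm

      def d_prime(hit_rate, false_alarm_rate):
          # Type-1 sensitivity: z(hit rate) - z(false-alarm rate).
          # Rates of exactly 0 or 1 should be corrected (e.g., by 1/(2N)) before use.
          return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)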
      Graphical abstract image

      PubDate: 2018-04-15T23:44:10Z
       
 
 