Abstract: Many models of choice assume that people retrieve memories of past experiences and use them to guide evaluation and choice. In this paper, we examine whether samples of recalled past experiences do indeed underpin our evaluations of options. We showed participants sequences of numerical values and asked them to recall as many of those values as possible and also to state how much they would be willing to pay for another draw from the sequence. Using Bayesian mixed effects modeling, we predicted participants’ evaluation of the sequences at the group level from either the average of the values they recalled or the average of the values they saw. Contrary to the predictions of recall-based models, people’s evaluations appear to be sensitive to information beyond what was actually recalled. Moreover, we did not find consistent evidence that memory for specific items is sufficient to predict evaluation of sequences. We discuss the implications for sampling models of memory and decision-making, as well as alternative explanations. PubDate: 2023-11-29
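The analysis described above is, in spirit, a hierarchical regression of each participant's willingness to pay on the mean of the values they recalled (or, in the comparison model, the mean of the values they saw). The sketch below, using PyMC, is purely illustrative and is not the authors' actual model: the column names (participant, recalled_mean, wtp), the Gaussian likelihood, and the priors are all assumptions.

```python
# Minimal sketch of a Bayesian mixed-effects regression predicting willingness
# to pay (wtp) from the mean of the values a participant recalled, with
# per-participant random intercepts. Illustrative assumptions throughout.
import pandas as pd
import pymc as pm

def fit_recall_model(df: pd.DataFrame):
    pid, levels = pd.factorize(df["participant"])
    with pm.Model():
        # Population-level intercept and slope for the recalled-average predictor
        alpha = pm.Normal("alpha", 0.0, 10.0)
        beta = pm.Normal("beta", 0.0, 5.0)
        # Per-participant intercept deviations (the "mixed" part)
        sigma_p = pm.HalfNormal("sigma_p", 5.0)
        u = pm.Normal("u", 0.0, sigma_p, shape=len(levels))
        sigma = pm.HalfNormal("sigma", 5.0)
        mu = alpha + u[pid] + beta * df["recalled_mean"].values
        pm.Normal("wtp", mu=mu, sigma=sigma, observed=df["wtp"].values)
        idata = pm.sample(1000, tune=1000, target_accept=0.9)
    return idata
```

Swapping recalled_mean for a hypothetical seen_mean column and comparing the two fits (e.g., via approximate leave-one-out cross-validation) is one way to ask which predictor better accounts for evaluations.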
Abstract: Most theoretical accounts of imitation assume that covert and overt measures of automatic imitation tap into the same underlying construct. Despite this widespread assumption, it is not well supported by empirical evidence. In fact, the only study investigating the relation between covert and overt automatic imitation failed to find a correlation between them (Genschow et al., 2017, PLOS ONE, 12[9], Article e0183784). However, because overt and covert imitation were measured using two very different tasks, and because the measure of overt imitation was found to be unreliable, it is still not clear whether a correlation between both measures exists. Here, we address this question by reanalyzing the results of a previous virtual reality study in which automatic imitation was indexed with an overt and covert measure of gaze following, both obtained within one and the same task (Cracco et al., 2022, iScience, Article 104891). The results show that, in this situation, both types of imitation do correlate. As such, our results provide support for the idea that overt and covert measures of automatic imitation measure the same underlying construct. PubDate: 2023-11-27
Abstract: Studies using retrospective memory tasks have revealed that animates/living beings are better remembered than are inanimates/nonliving things (the animacy effect). However, considering that memory is foremost future-oriented, we hypothesized that the animacy effect would also occur in prospective memory (i.e., memory for future intentions). Using standard prospective memory (PM) procedures, we explored this hypothesis by manipulating the animacy status of the PM targets. Study 1a reports data collected from an American sample; these results were then replicated with a Portuguese sample (Study 1b). Study 2 employed a new procedure, and data were collected from a broader English-speaking sample. In these three studies, animate (vs. inanimate) targets consistently led to better PM performance, revealing, for the first time, that the animacy advantage extends to PM. These results strengthen the adaptive approach to memory and stress the need to consider animacy as an important variable in memory studies. PubDate: 2023-11-27
Abstract: As reliance on digital communication grows, so does the importance of communicating effectively with text. Yet when communicating with text, benefits from other channels, such as hand gesture, are diminished. Hand gestures support comprehension and disambiguate characteristics of the spoken message by providing information in a visual channel supporting speech. Can emoji (pictures used to supplement text communication) perform similar functions? Here, we ask whether emoji improve comprehension of indirect speech. Indirect speech is ambiguous, and appropriate comprehension depends on the receiver decoding context cues, such as hand gesture. We adapted gesture conditions from prior research (Kelly et al., 1999, Experiment 2) to a digital, text-based format, using emoji rather than gestures. Participants interpreted 12 hypothetical text-message exchanges that ended with indirect speech, communicated via text only, text+emoji, or emoji only, in a between-subjects design. As previously seen for hand gesture, emoji improved comprehension. Participants were more likely to correctly interpret indirect speech in the emoji-only condition compared with the text+emoji and the text-only conditions, and more likely in the text+emoji condition compared with the text-only condition. Thus, emoji are not mere decoration, but rather are integrated with text to communicate and disambiguate complex messages. Similar to gesture in face-to-face communication, emoji improve comprehension during text-based communication. PubDate: 2023-11-27
Abstract: During difficult tasks, conflict can benefit performance on a subsequent trial. One theory for such performance adjustments is that people monitor for conflict and reactively engage cognitive control. This hypothesis has been challenged because tasks that control for associative learning do not show such “cognitive control” effects. The current study experimentally controlled associative learning by presenting a novel stimulus on every trial of a picture–speech conflict task and found that performance adjustments still occur. Thirty-one healthy young adults listened to and repeated words presented in background noise while viewing pictures that were congruent or incongruent (i.e., phonological neighbors) with the word. Following conflict, participants had higher word recognition (+17 percentage points) on incongruent but not congruent trials. This result was not attributable to post-error effects or a speed–accuracy trade-off. An analysis of erroneous responses showed that participants made more phonologically related errors than nonrelated errors only on incongruent trials, demonstrating elevated phonological conflict when the picture was a neighbor of the target word. Additionally, postconflict improvements appear to be due to better resolution of phonological conflict in the mental lexicon rather than to decreased attention to the picture or increased attention to the speech signal. Our findings provide new evidence for conflict monitoring and suggest that cognitive control helps resolve phonological conflict during speech recognition in noise. PubDate: 2023-11-21
Abstract: Many real-world decisions involving rare events also involve extreme outcomes. Despite this confluence, decisions-from-experience research has only examined the impact of rarity and extremity in isolation. With rare events, people typically choose as if they underestimate the probability of a rare outcome happening. Separately, people typically overestimate the probability of an extreme outcome happening. Here, for the first time, we examine the confluence of these two biases in decisions-from-experience. In a between-groups behavioural experiment, we examine people’s risk preferences for rare extreme outcomes and for rare non-extreme outcomes. When outcomes are both rare and extreme, people’s risk preferences shift away from traditional risk patterns for rare events: they show reduced underweighting for events that are both rare and extreme. We simulate these results using a small-sample model of decision-making that accounts for both the underweighting of rare events and the overweighting of extreme events. These separable influences on risk preferences suggest that to understand real-world risk for rare events we must also consider the extremity of the outcomes. PubDate: 2023-11-16
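The "small-sample" mechanism for underweighting rare events can be made concrete with a toy simulation: if each option is judged from only a handful of experienced outcomes, a rare outcome often never appears in the sample, so choices look as though its probability were underestimated. This is an illustrative sketch only, not the authors' model, and the payoffs, probability, and sample size are arbitrary assumptions.

```python
# Toy simulation of the small-samples account of decisions from experience.
import numpy as np

rng = np.random.default_rng(0)

def choose(safe_value, risky_outcome, p_rare, k=5, n_sims=10_000):
    """Proportion of simulated agents preferring the risky option,
    given k experienced draws per option."""
    risky_samples = rng.binomial(1, p_rare, size=(n_sims, k)) * risky_outcome
    prefer_risky = risky_samples.mean(axis=1) > safe_value
    return prefer_risky.mean()

# Rare (p = .05) gain worth 100 vs. a sure 5 (equal expected value):
# most small samples never contain the rare gain, so few agents choose risky.
print(choose(safe_value=5, risky_outcome=100, p_rare=0.05))
```

With five draws per option, only about a quarter of simulated agents ever experience the rare gain, so the majority prefer the sure option even though the expected values are equal.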
Abstract: In recognition memory, retrieval is thought to occur by computing the global similarity of the probe to each of the studied items. However, to date, very few global similarity models have employed perceptual representations of words, despite the fact that false recognition errors for perceptually similar words have consistently been observed. In this work, we integrate representations of letter strings from the reading literature with global similarity models. Specifically, we employed models of absolute letter position (slot codes and overlap models) and relative letter position (closed and open bigrams). Each of the representations was used to construct a global similarity model that made contact with responses and RTs at the individual word level using the linear ballistic accumulator (LBA) model (Brown & Heathcote, Cognitive Psychology, 57, 153–178, 2008). Relative position models were favored in three of the four datasets, and parameter estimates suggested additional influence of the initial letters in the words. When semantic representations from the word2vec model were incorporated into the models, results indicated that orthographic representations were nearly as consequential as semantic representations in determining inter-item similarity and false recognition errors, which undermines previous suggestions that long-term memory is primarily driven by semantic representations. The model was able to modestly capture individual word variability in the false alarm rates, but there were limitations in capturing variability in the hit rates that suggest that the underlying representations require extension. PubDate: 2023-11-16
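As an illustration of the kind of relative-position (open-bigram) code mentioned above, the sketch below represents a word as the set of its ordered letter pairs and scores orthographic similarity as bigram overlap. The specific scheme (unconstrained open bigrams, Jaccard overlap) is an assumption for illustration, not the representation or similarity rule used in the paper's models.

```python
# Open-bigram orthographic code and a simple pairwise similarity measure.
from itertools import combinations

def open_bigrams(word: str) -> set[str]:
    """All ordered letter pairs, preserving relative (not absolute) position."""
    return {a + b for a, b in combinations(word.lower(), 2)}

def bigram_similarity(w1: str, w2: str) -> float:
    """Jaccard overlap between the two words' open-bigram sets."""
    b1, b2 = open_bigrams(w1), open_bigrams(w2)
    return len(b1 & b2) / len(b1 | b2)

# Orthographic neighbours share many relative-position bigrams.
print(bigram_similarity("trail", "trial"))   # high overlap
print(bigram_similarity("trail", "house"))   # no overlap
```

In a global-similarity model, such pairwise scores would be summed over all studied items to yield the familiarity of a probe, which is why perceptually similar lures can attract false recognitions.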
Abstract: In task switching, response repetitions (RRs) usually yield performance benefits as compared to response switches, but only when the task also repeats. When the task switches, RR benefits vanish or even turn into costs, yielding an interaction between repeating versus switching the task and the response (the RR effect). Different theoretical accounts for this RR effect exist, but, in the present study, we specifically tested a prediction derived from binding and retrieval accounts. These maintain that repeating the task retrieves the previous-trial response, thus causing RR benefits. Retrieval is possible due to the task-response binding formed in the previous trial. We employed a task-switching paradigm with three response options that allowed us to differentiate error types. Across two experiments (N = 46 and N = 107), we showed that response-repetition errors in response-switch trials were more likely in task repetitions than in task switches, supporting the notion that the previous response is retrieved by the repeating task, despite being wrong. Such a finding is in line with binding and retrieval accounts but cannot be easily accommodated by the competing theoretical accounts. Thus, the present study indicates task-response binding as an important mechanism underlying RR benefits in task repetitions. PubDate: 2023-11-13
Abstract: In dynamic environments, a central task of the attentional system is to keep track of objects changing their spatial location over time. In some instances, it is sufficient to track only the spatial locations of moving objects (i.e., multiple object tracking; MOT). In other instances, however, it is also important to maintain distinct identities of moving objects (i.e., multiple identity tracking; MIT). Despite previous research, it is not clear whether MOT and MIT performance emerge from the same tracking mechanism. In the present report, we study gaze coherence (i.e., the extent to which participants repeat their gaze behaviour when tracking the same object locations twice) across repeated MOT and MIT trials. We observed more substantial gaze coherence in repeated MOT trials compared to the repeated MIT trials or mixed MOT–MIT trial pairs. A subsequent simulation study suggests that MOT is based more on a grouping mechanism than MIT, whereas MIT is based more on a target-jumping mechanism than MOT. It thus appears unlikely that MOT and MIT emerge from the same basic tracking mechanism. PubDate: 2023-11-08
Abstract: Curiosity – the desire to seek information – is fundamental for learning and performance. Studies on curiosity have shown that people are intrinsically motivated to seek information even if it does not bring an immediate tangible benefit (i.e., non-instrumental information), but little is known about whether people have the metacognitive capability to accurately monitor their motivation for seeking information. We examined whether people can accurately predict their own non-instrumental information-seeking behavior. Across six experiments (Experiments 1A–1E and 2, total N = 579), participants predicted that they would engage in information-seeking behavior less frequently than they actually did, suggesting that people tend to underestimate the motivational lure of curiosity. Overall, there was no consistent statistical evidence that this underestimation was altered by contextual factors (e.g., the cost to seek information). These results were consistent with the theoretical account that it is difficult for people to make sense of the internally rewarding value of information in advance. PubDate: 2023-11-06
Abstract: When looking at faces, humans invariably move their eyes to a consistent preferred first fixation location on the face. While most people have the preferred fixation location just below the eyes, a minority have it between the nose-tip and mouth. Not much is known about whether these long-term differences in the preferred fixation location are associated with distinct neural representations of faces. To study this, we used a gaze-contingent face adaptation aftereffect (FAE) paradigm to test two groups of observers, one with their mean preferred fixation location closer to the eyes (upper lookers) and the other closer to the mouth (lower lookers). In this task, participants were required to maintain their gaze at either their own group’s mean preferred fixation location or that of the other group during adaptation and testing. The two possible fixation locations were 3.6° apart on the face. We measured the face adaptation aftereffects when the adaptation and testing happened while participants maintained fixation at either the same or different locations on the face. Both groups showed equally strong adaptation effects when the adaptation and testing happened at the same fixation location. Crucially, only the upper lookers showed a partial transfer of the FAE across the two fixation locations, when adaptation occurred at the eyes. Lower lookers showed no spatial transfer of the FAE irrespective of the adaptation position. Given the classic finding that neural tuning is increasingly position invariant as one moves higher in the visual hierarchy, this result suggests that differences in the preferred fixation location are associated with distinct neural representations of faces. PubDate: 2023-11-06
Abstract: The present study explored bilingual coactivation during natural monolingual sentence-reading comprehension. Native Chinese readers who had learned Japanese as a second language and those who had not learned it at all were tested. The results showed that unrelated Chinese word pairs that shared a common Japanese translation could parafoveally prime each other. Critically, this translation-related preview effect was modulated by the readers’ language-learning experiences: it was found only among the late Chinese–Japanese bilinguals, but not among the monolingual Chinese readers. By taking the novel step of testing bilingual coactivation of semantic knowledge in a natural reading scenario without explicit presentation of L2 words, our results suggest that bilingual word processing can be automatic, unconscious, and nonselective. The study reveals an L2-to-L1 influence on readers’ lexical activation during natural sentence reading in an exclusively native context. PubDate: 2023-11-06
Abstract: Visual shape discrimination is faster for objects close to the body, in the peripersonal space (PPS), compared with objects far from the body. Visual processing enhancement in PPS also occurs when perceived depth is based on 2D pictorial cues. This advantage has been observed from relatively low-level (detection, size, orientation) to high-level visual features (face processing). While multisensory association also displays proximal advantages, whether PPS influences visual perceptual learning remains unclear. Here, we investigated whether perceptual learning effects vary according to the distance of visual stimuli (near or far) from the observer, illusorily induced by leveraging the Ponzo illusion. Participants performed a visual search task in which they reported whether a specific target object orientation (e.g., triangle pointing downward) was present among distractors. Performance was assessed before and after practicing the visual search task (30 minutes/day for 5 days) at either the close (near group) or far (far group) distance. Results showed that participants who performed the training in the near space did not improve. By contrast, participants who performed the training in the far space showed an improvement in the visual search task in both the far and near spaces. We suggest that such improvement following the far training is due to a greater deployment of attention in the far space, which could make the learning more effective and generalize across spaces. PubDate: 2023-11-06
Abstract: Previous studies on the impact of cognitive load on time preference have led to conflicting results. However, these studies either failed to consider response randomness while analyzing behavioral data or used an inadequate control condition, precluding a clear conclusion regarding the impact of cognitive load. With two experiments using an adaptive approach to generate appropriate individual-specific stimuli and an additional, preregistered partial replication study, the current research provided consistent evidence that imposing cognitive load with a 2-back auditory memory task produces simultaneously more patient and more random intertemporal choices. The finding of changed time preference was at odds with the dual-process account but provided clear support for the role of reduced time sensitivity in intertemporal choices under cognitive load. Model comparisons also did not support an account built upon greater use of heuristic strategies under cognitive load. Future research could examine boundary conditions for the dual-process and time-sensitivity accounts of time preference, such as task features and individual characteristics that moderate the impact of cognitive load on time preference. PubDate: 2023-11-06
Abstract: Knowledge stored in long-term memory (LTM) impacts working memory (WM) overall, but it is unclear whether LTM facilitates focusing or switching attention in WM. We addressed this question using the retro-cue paradigm: Briefly presented arrays of individually calibrated numbers of shapes (concrete or abstract) were followed by a blank retention interval (no-cue) or a retro-cue to focus participants’ attention on the to-be-probed shape. Experiment 3 included double retro-cue trials that required participants to switch their attention to a different shape. Participants recalled the color (Experiment 1) or location (Experiment 2) of the probed shape, or recognized the target shape among two other options (Experiment 3). Confirming the overall LTM effect on WM, fewer abstract shapes were needed to match the performance of concrete shapes during the calibration phase. Most importantly, retro-cues benefitted performance regardless of the nature of the shape, suggesting that LTM impacts WM overall without moderating attention. PubDate: 2023-11-06
Abstract: Researchers in cognitive and forensic psychology have long been interested in the impact of individual differences on eyewitness memory. The sex of the eyewitness is one such factor, with a body of research spanning over 50 years that has sought to determine if and how eyewitness memory differs between males and females. This research has significant implications across the criminal justice system, particularly in the context of gendered issues such as sexual assault. However, the findings have been inconsistent, and there is still a lack of consensus across the literature. A scoping review and analysis of the literature was performed to examine the available evidence regarding whether sex differences in eyewitness memory exist, what explanations have been proposed for any differences found, and how this research has been conducted. Through a strategic search of seven databases, 22 relevant articles were found and reviewed. Results demonstrated that, despite the mixed nature of the methodologies and findings, the research suggests that neither males nor females have superior performance in the total amount of accurate information reported, but rather that females may have better memory for person-related details while males may perform better for details related to the surrounding environment. There was also consistent evidence for the own-gender bias. There was some consensus that differences in selective attention between males and females may underlie these sex differences in eyewitness memory. However, none of the studies directly tested this suggested attentional factor, and thus future research is needed to investigate this using a more systematic and empirical approach. PubDate: 2023-10-30
Abstract: Communicating with a speaker with a different accent can affect one’s own speech. Despite the strength of evidence for perception-production transfer in speech, the nature of transfer has remained elusive, with variable results regarding the acoustic properties that transfer between speakers and the characteristics of the speakers who exhibit transfer. The current study investigates perception-production transfer through the lens of statistical learning across passive exposure to speech. Participants experienced a short sequence of acoustically variable minimal pair (beer/pier) utterances conveying either an accent or typical American English acoustics, categorized a perceptually ambiguous test stimulus, and then repeated the test stimulus aloud. In the canonical condition, /b/–/p/ fundamental frequency (F0) and voice onset time (VOT) covaried according to typical English patterns. In the reverse condition, the F0 × VOT relationship reversed to create an “accent” with speech input regularities atypical of American English. Replicating prior studies, F0 played less of a role in perceptual speech categorization in reverse compared with canonical statistical contexts. Critically, this down-weighting transferred to production, with systematic down-weighting of F0 in listeners’ own speech productions in reverse compared with canonical contexts that was robust across male and female participants. Thus, the mapping of acoustics to speech categories is rapidly adjusted by short-term statistical learning across passive listening, and these adjustments transfer to influence listeners’ own speech productions. PubDate: 2023-10-26
Abstract: The current study explored cross-language morphological transfer mechanisms using a similar-script morphological translation priming paradigm in highly proficient unbalanced Turkish (first language; L1)–English (second language; L2) bilinguals. Using noncognate English and Turkish stimuli that shared a similar meaning with no form overlap (e.g., ice [Eng.] – buz [Tur.]), in Experiment 1, L2 English stem targets (e.g., ICE) were primed by affixed L1 nonwords (e.g., buzca [iceish]), nonaffixed L1 nonwords (e.g., buznak [iceald]), and unrelated L1 nonwords (e.g., tuşku [keyment]). The results revealed priming effects in both the affixed and nonaffixed nonword conditions relative to the unrelated control, and significantly larger priming in the affixed than the nonaffixed condition. In addition, enhanced cross-language morphological transfer effects were evidenced in bilinguals with an earlier age of L2 acquisition. In Experiment 2, English stem targets (e.g., ICE) were primed by nonaffixed L1 nonwords including translated stems (e.g., buznak [iceald]), semantically related stems (e.g., suzur [waterew]), and unrelated L1 nonwords (e.g., tuşzur [keyew]). The results showed significantly larger priming effects in the translated condition compared with the semantic and unrelated control conditions, with no priming in the semantic condition relative to the unrelated condition, suggesting that cross-language morphological priming effects were specifically due to the lexico-semantic relationship between the embedded word and its translation equivalent. PubDate: 2023-10-26
Abstract: Evidence accumulation models (EAMs) have become the dominant theoretical framework for rapid decision-making, and while many theoretically distinct variants exist, comparisons have proved challenging due to strong mimicry in their predictions about choice response time data. One solution to reduce mimicry is constraining these models with double responses – second responses made after the initial response. However, instructing participants that they are allowed to change their mind could influence their strategy for initial responding, meaning that explicit double-responding paradigms may not generalise to standard paradigms. Here, we provide a validation of explicit double-responding paradigms by assessing whether participants’ initial decisions – as measured by diffusion model parameters – differ based on whether or not they were instructed that they could change their response after their initial response. Across three experiments, our results consistently indicate that allowing for changes of mind does not influence initial responses, with Bayesian analyses providing at least moderate evidence in favour of the null in all cases. Our findings suggest that explicit double-responding paradigms should generalise to standard paradigms, validating the use of explicit double responding in future rapid decision-making studies. PubDate: 2023-10-26
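For readers unfamiliar with evidence accumulation models, the sketch below simulates the simplest diffusion-style decision: noisy evidence drifts toward one of two response boundaries, and the first crossing determines the choice and response time. The parameter values and the Euler discretisation are illustrative assumptions, not the fitting procedure or estimates reported in the study.

```python
# Bare-bones simulation of a drift-diffusion (evidence-accumulation) decision.
import numpy as np

def simulate_ddm(drift=0.3, boundary=1.0, noise=1.0, dt=0.001, ndt=0.3, rng=None):
    """Accumulate noisy evidence until it crosses +boundary or -boundary."""
    rng = rng or np.random.default_rng()
    x, t = 0.0, 0.0
    while abs(x) < boundary:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    choice = "upper" if x > 0 else "lower"
    return choice, t + ndt  # add non-decision time

rng = np.random.default_rng(0)
trials = [simulate_ddm(rng=rng) for _ in range(1000)]
p_upper = np.mean([c == "upper" for c, _ in trials])
mean_rt = np.mean([rt for _, rt in trials])
print(f"P(upper) = {p_upper:.2f}, mean RT = {mean_rt:.2f} s")
```

Double responses are often modelled as accumulation continuing after the first boundary crossing; it is this extra data that helps constrain otherwise mimicking model variants.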
Abstract: One of the first tasks in language acquisition is word segmentation, the process of extracting word forms from continuous speech streams. Statistical approaches to word segmentation have been shown to be a powerful mechanism, in which word boundaries are inferred from sequence statistics. This approach requires the learner to represent the frequency of units from syllable sequences, though accounts differ on how much statistical exposure is required. In this study, we examined the computational lower limit at which words can be extracted from continuous sequences. First, we discussed why two occurrences of a word in a continuous sequence constitute the computational lower limit for this word to be statistically defined. Next, we created short syllable sequences that contained certain words either two or four times. Learners were presented with these syllable sequences one at a time, immediately followed by a test of the novel words from these sequences. We found that, with the computationally minimal amount of two exposures, words were successfully segmented from continuous sequences. Moreover, longer syllable sequences providing four exposures to words generated more robust learning results. The implications of these results are discussed in terms of how learners segment and store word candidates from continuous sequences. PubDate: 2023-10-26
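The statistical-learning idea described above is often formalised with transitional probabilities between adjacent syllables: within-word transitions tend to be high, between-word transitions low, so boundaries can be posited at the dips. The snippet below is a minimal illustration of that general mechanism; the threshold, the artificial lexicon, and the stream length are arbitrary assumptions and not the materials or model of this study.

```python
# Segmentation by transitional probability (TP) over a continuous syllable stream.
from collections import Counter
import random

def transitional_probs(syllables):
    """P(next syllable | current syllable), estimated from the sequence."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(x, y): c / first_counts[x] for (x, y), c in pair_counts.items()}

def segment(syllables, threshold=0.5):
    """Insert a word boundary wherever the transitional probability dips."""
    tp = transitional_probs(syllables)
    words, current = [], [syllables[0]]
    for x, y in zip(syllables, syllables[1:]):
        if tp[(x, y)] < threshold:   # low TP -> likely word boundary
            words.append("".join(current))
            current = []
        current.append(y)
    words.append("".join(current))
    return words

# A continuous stream built from four artificial trisyllabic "words".
random.seed(1)
lexicon = [["tu", "pi", "ro"], ["go", "la", "bu"], ["bi", "da", "ku"], ["pa", "do", "ti"]]
stream = [syl for word in random.choices(lexicon, k=100) for syl in word]
print(sorted(set(segment(stream))))   # recovers the four artificial words
```

Because within-word transitions here are fully predictive (TP = 1.0) while between-word transitions hover around .25, even a crude threshold recovers the lexicon, which conveys why very little exposure can in principle suffice for statistically defined words.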