Authors:Madeleine Oakley Abstract: Language and Speech, Ahead of Print. This study examines how L1 English-L2 French learners use L1 articulatory and acoustic categories to produce L2 vowels that are both similar to and different from their L1 vowels. Previous studies examining the relationship between L1 and L2 sound inventories have found that learners reuse L1 phone categories to produce L2 phones that are perceived as similar, but importantly, there is a lack of articulatory data included in these types of studies, which has reinforced the assumption that vowel categories can be solely represented by their acoustic properties. The present study uses ultrasound tongue imaging data and videos of lip rounding in addition to acoustic data to examine how L1 English-L2 French learners produce the French vowels /i y u e ø o/ compared with their English vowels /i u e o/. The results focus on individual paths to category formation to show how learners articulate L2 vowels, and reveal that they tend to reuse L1 tongue body gestures to produce the French vowels /i u e o/, and lip rounding gestures to produce the round vowels /y u o/. This study demonstrates that transfer of articulatory gestures depends on vowel quality and emphasizes the importance of using articulatory data to inform theories of L2 category formation. Citation: Language and Speech PubDate: 2024-08-01T05:27:11Z DOI: 10.1177/00238309241259748
Authors:Murray J. Munro, Tracey M. Derwing, Kazuya Saito Abstract: Language and Speech, Ahead of Print. This short report describes a longitudinal examination of the acquisition of English aspirated stops by an initial cohort of 24 adult Slavic-language (Russian, Ukrainian, and Croatian) speakers. All had arrived in Canada with low oral English proficiency, and all were enrolled in the same language instruction program at the outset. Initial bilabial stops in CVCs were recorded at eight testing times: six during the first year of the study, again at year 7, and finally at year 10. Intelligibility was evaluated through a blind listening assessment of the stop productions from the first seven testing times. Voice onset times (VOT) were measured for /p/ from all eight times. Mean /p/ intelligibility improved, mainly during a proposed Window of Maximal Opportunity for L2 speech acquisition, but remained below 100%, even after 7 years. For some speakers, early /p/ productions were minimally aspirated, with VOT increasing over time but remaining intermediate between L1 English and L1 Slavic-language values at 10 years. However, inter-speaker variability was dramatic, with some speakers showing full intelligibility throughout the study and others showing many unintelligible productions at all times. Individual learning trajectories tended to be non-linear and often non-cumulative. Overall, these findings point to a developmental process that varies considerably from one learner to another. They also demonstrate the serious drawbacks of relying on group means to characterize the process of L2 segmental learning. Citation: Language and Speech PubDate: 2024-07-31T10:43:36Z DOI: 10.1177/00238309241264296
Authors:Xin Yuan, Jing Tang Abstract: Language and Speech, Ahead of Print. This study investigated how input modes (reading vs. listening) and learners’ perceptual learning style (visual vs. auditory) affected the incidental learning of collocations. A total of 182 college students were first assigned to either a visual or auditory group based on their performance on a perceptual learning style questionnaire. Each style group was subsequently subdivided into three groups who were exposed to a series of texts containing unfamiliar collocation items under one of the input conditions: written input, aural input, or no input. Results of the study indicated that both written and aural input led to gains in collocational knowledge, and aural input was more effective than written input. Furthermore, the study provided empirical evidence that there was a moderating role of perceptual learning style on incidental collocation learning. The auditory learners under aural input showed the highest rate of collocation learning among all treatment subgroups. Citation: Language and Speech PubDate: 2024-07-30T05:15:30Z DOI: 10.1177/00238309241266864
Authors:Ricky K. W. Chan, Bruce Xiao Wang Abstract: Language and Speech, Ahead of Print. Fundamental frequency (F0) has been widely studied and used in the context of speaker discrimination and forensic voice comparison casework, but most previous studies focused on long-term F0 statistics. Lexical tone, the linguistically structured and dynamic aspects of F0, has received much less research attention. A central methodological issue lies in how tonal F0 should be parameterized for the best speaker discrimination performance. This paper compares the speaker discriminatory performance of three approaches to lexical tone modeling: discrete cosine transform (DCT), polynomial curve fitting, and quantitative target approximation (qTA). Results show that using parameters based on DCT and polynomials led to similarly promising performance, whereas those based on qTA generally yielded relatively poor performance. Implications of modeling surface tonal F0 and the underlying articulatory processes for speaker discrimination are discussed. Citation: Language and Speech PubDate: 2024-07-27T09:34:21Z DOI: 10.1177/00238309241261702
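As a rough illustration of the first two parameterizations named in the abstract (not the authors' implementation), the sketch below extracts the leading DCT-II coefficients and quadratic polynomial coefficients from a hypothetical F0 contour; the contour values and the cut-off at three coefficients are invented for illustration.

```python
import numpy as np

def dct2_ortho(x):
    """Orthonormal DCT-II coefficients of a 1-D sequence.

    Equivalent to scipy.fft.dct(x, type=2, norm="ortho"), written out
    with numpy only so the sketch is self-contained.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    coefs = 2.0 * (np.cos(np.pi * k * (2 * m + 1) / (2 * n)) @ x)
    coefs[0] *= np.sqrt(1.0 / (4.0 * n))   # orthonormal scaling, k = 0
    coefs[1:] *= np.sqrt(1.0 / (2.0 * n))  # orthonormal scaling, k > 0
    return coefs

# Hypothetical F0 contour (Hz) for one tone, sampled at 10 equidistant points
f0 = np.array([220.0, 218.0, 214.0, 209.0, 205.0,
               203.0, 204.0, 208.0, 215.0, 223.0])

# DCT features: c0 tracks mean level, c1 slope, c2 curvature of the contour
dct_features = dct2_ortho(f0)[:3]

# Polynomial features: quadratic fit over normalized time
t = np.linspace(0.0, 1.0, len(f0))
poly_features = np.polyfit(t, f0, deg=2)

print(dct_features, poly_features)
```

In a speaker-comparison setting, such low-dimensional coefficient vectors (rather than raw contours) would feed the statistical comparison stage.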
Authors:Feier Gao, Chien-Jer Charles Lin Abstract: Language and Speech, Ahead of Print. Mandarin tone 3 sandhi refers to the phenomenon whereby a tone 3 syllable changes to a tone 2 when followed by another tone 3. This phonological process creates a deviation between the tonal forms realized at morphemic (/tone3—tone3/) and word ([tone2—tone3]) levels, posing questions in terms of how disyllabic tone 3 sandhi words are represented and accessed. The current study conducted three cross-modal lexical decision priming experiments to investigate this issue. Experiment 1 manipulated the frequencies of the initial morpheme and the whole word, showing that a higher initial-character frequency relative to the whole-word frequency gives stronger activation to the underlying representation, whereas a lower initial-character frequency leads to stronger activation of the surface tone. Experiments 2 and 3 operationalized the relative frequency of the initial tone 3 morpheme’s realization as a sandhi tone, finding that the competition between the two tonal realizations also influences how T3 sandhi words are accessed. Specifically, the more frequently the T3 morpheme surfaces as a T2 allomorph, the less activated the underlying representation becomes in the mental lexicon. Our results indicate a complex interplay between morpheme, word, and the associated tonal representations in the mental lexicon and that these factors co-determine the lexical access of tone 3 sandhi. Citation: Language and Speech PubDate: 2024-07-25T07:33:29Z DOI: 10.1177/00238309241260062
Authors:Ronny Bujok, Antje S. Meyer, Hans Rutger Bosker Abstract: Language and Speech, Ahead of Print. Human communication is inherently multimodal. Not only auditory speech but also visual cues can be used to understand another talker. Most studies of audiovisual speech perception have focused on the perception of speech segments (i.e., speech sounds). However, less is known about the influence of visual information on the perception of suprasegmental aspects of speech like lexical stress. In two experiments, we investigated the influence of different visual cues (e.g., facial articulatory cues and beat gestures) on the audiovisual perception of lexical stress. We presented auditory lexical stress continua of disyllabic Dutch stress pairs together with videos of a speaker producing stress on the first or second syllable (e.g., articulating VOORnaam or voorNAAM). Moreover, we combined and fully crossed the face of the speaker producing lexical stress on either syllable with a gesturing body producing a beat gesture on either the first or second syllable. Results showed that people successfully used visual articulatory cues to stress in muted videos. However, in audiovisual conditions, we were not able to find an effect of visual articulatory cues. In contrast, we found that the temporal alignment of beat gestures with speech robustly influenced participants’ perception of lexical stress. These results highlight the importance of considering suprasegmental aspects of language in multimodal contexts. Citation: Language and Speech PubDate: 2024-06-15T05:53:57Z DOI: 10.1177/00238309241258162
Authors:Stephanie Kaucke, Marcel Schlechtweg Abstract: Language and Speech, Ahead of Print. Previous research has shown that it is difficult for English speakers to distinguish the front rounded vowels /y/ and /ø/ from the back rounded vowels /u/ and /o/. In this study, we examine the effect of noise on this perceptual difficulty. In an Oddity Discrimination Task, English speakers without any knowledge of German were asked to discriminate between German-sounding pseudowords varying in the vowel both in quiet and in white noise at two signal-to-noise ratios (8 and 0 dB). In test trials, vowels of the same height were contrasted with each other, whereas a contrast with /a/ served as a control trial. Results revealed that a contrast with /a/ remained stable in every listening condition for both high and mid vowels. When contrasting vowels of the same height, however, there was a perceptual shift along the F2 dimension as the noise level increased. Although the /ø/-/o/ and particularly /y/-/u/ contrasts were the most difficult in quiet, accuracy on /i/-/y/ and /e/-/ø/ trials decreased immensely when the speech signal was masked. The German control group showed the same pattern, albeit less severely than the non-native group, suggesting that even in low-level tasks with pseudowords, there is a native advantage in speech perception in noise. Citation: Language and Speech PubDate: 2024-06-10T07:35:39Z DOI: 10.1177/00238309241254350
Authors:Emily W. Wang, Maria I. Grigos Abstract: Language and Speech, Ahead of Print. The relationship between speaking rate and speech motor variability was examined in three groups of neurotypical adults (n = 40): 15 young adults (18–30 years), 13 adults (31–40 years), and 12 middle-aged adults (41–50 years). Participants completed a connected speech task at three speaking rates (habitual, fast, and slow) where kinematic (lower lip movement) and acoustic data were obtained. Duration and variability were measured at each speaking rate. Findings revealed a complex relationship between speaking rate and variability. Adults from the middle age range (31–40 years) demonstrated shorter acoustic and kinematic durations compared with the oldest age group (41–50 years) during the habitual speaking rate condition. All adults demonstrated the greatest variability in the slow speaking rate condition, with no significant differences in variability between habitual and fast speaking rates. Interestingly, lip aperture variability was significantly lower in the youngest age group (18–30 years) compared with the two older groups during the fast speaking rate condition. Differences in measures of acoustic variability were not observed across the age levels. Strong negative correlations between kinematic/acoustic duration and lip aperture/acoustic variability in the youngest age group were revealed. Therefore, while a slow speaking rate does result in greater variability compared with habitual and fast speaking rates, longer durations of productions by the different age groups were not linked to higher spatiotemporal index (STI) values, suggesting that timing influences speech motor variability, but is not the sole contributor. Citation: Language and Speech PubDate: 2024-05-24T05:33:16Z DOI: 10.1177/00238309241252983
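The spatiotemporal index (STI) referred to above is conventionally computed by amplitude- and time-normalizing repeated movement records and summing the across-trial standard deviations at each normalized time point. A minimal numpy sketch on invented trajectories (not the study's data or analysis pipeline):

```python
import numpy as np

def spatiotemporal_index(trials, n_points=50):
    """Spatiotemporal index (STI) for a set of repeated trajectories.

    Each trial is amplitude-normalized (z-scored) and time-normalized
    (linearly resampled to n_points); the STI is the sum of the standard
    deviations computed across trials at each normalized time point.
    """
    normalized = []
    for trial in trials:
        trial = np.asarray(trial, dtype=float)
        z = (trial - trial.mean()) / trial.std()       # amplitude normalization
        t_old = np.linspace(0.0, 1.0, len(trial))
        t_new = np.linspace(0.0, 1.0, n_points)
        normalized.append(np.interp(t_new, t_old, z))  # time normalization
    return np.vstack(normalized).std(axis=0).sum()

# Invented lower-lip trajectories: similar shape, varying duration/amplitude
base = np.sin(np.linspace(0.0, np.pi, 80))
trials = [a * base[:n] for a, n in [(1.0, 80), (1.2, 70), (0.9, 75)]]
print(spatiotemporal_index(trials))
```

Because the normalization removes linear differences in amplitude and duration, a higher STI reflects differences in trajectory shape across repetitions, which is why longer (slower) productions need not yield higher STI values.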
Authors:Barbara Gili Fivela, Cinzia Avesani Abstract: Language and Speech, Ahead of Print. The paper introduces the Special Issue on Language Contact and Speaker Accommodation, which originates from the conference Phonetics and Phonology in Europe (PaPE) held at the University of Lecce, Italy, in 2019. It discusses the topics of language contact and speaker accommodation, summarizing the contributions included in the Special Issue, and arguing explicitly in favour of a unitary view of how both temporary and stable changes happen in (part of) the linguistic systems. Accommodation is seen as the same gradual and non-homogeneous process at play in different contact settings. In the introductory sections, a discussion is offered on various situations in which linguistic systems are in contact and on the main factors that may be at play; the following sections offer an overview of the papers included in the Special Issue, which focus on accommodation in L2 and heritage speakers as well as on the time dimension of dialect or language societal contact. Finally, accommodation is discussed as the same process that is at work in any interaction: it may modify the system of L2 learners and bilinguals (e.g., immigrants) temporarily or in the long term, it usually affects the heritage speakers’ system in the long term, and only in the long term can it lead to language changes involving entire communities. Citation: Language and Speech PubDate: 2024-05-17T06:45:15Z DOI: 10.1177/00238309241246200
Authors:Saya Kawase, Chris Davis, Jeesun Kim Abstract: Language and Speech, Ahead of Print. The study aimed to examine whether L1 speech rhythm affects L2 speech by assessing how the speech rhythm of Japanese L2 English speakers differed from native speakers. We chose Japanese and English because they differ markedly in the phonological properties that likely contribute to speech rhythm. Speech rhythm was measured by the variability of vowel and consonant intervals using rate-normalized rhythm metrics (VarcoV and VarcoC; nPVI-V and nPVI-C) and %V. The study utilized recordings of spoken sentences in English by 10 native Australian English speakers, and in both English and Japanese by 10 native Japanese speakers (who had limited experience in speaking English). Experiment 1 compared the rhythm of L1 English (by measuring 1,750 vowels and 3,093 consonants from 20 sentences) and L1 Japanese (1,923 vowels and 2,097 consonants from 10 sentences). The results showed that for all measures, Japanese had reduced durational variability in both consonant and vowel intervals compared with English. In Experiment 2, we examined the rhythm characteristics of L1 and L2 English using 40 sentences (including the 20 in Experiment 1). The results showed that vowel and consonant intervals were less variable in L2 (Japanese English) than in L1 (Australian English) speech, mirroring the results of Experiment 1. Overall, the results are consistent with the proposal that L1 (Japanese) speech rhythm influenced L2 (English) speech. Citation: Language and Speech PubDate: 2024-05-02T07:02:09Z DOI: 10.1177/00238309241247210
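For reference, the rate-normalized rhythm metrics named in the abstract have simple closed forms. The sketch below computes them over invented interval durations; in the actual study these intervals would be segmented from the recordings:

```python
import numpy as np

def varco(intervals):
    """VarcoV/VarcoC: rate-normalized variability, 100 * SD / mean."""
    d = np.asarray(intervals, dtype=float)
    return 100.0 * d.std() / d.mean()

def npvi(intervals):
    """Normalized Pairwise Variability Index over successive intervals."""
    d = np.asarray(intervals, dtype=float)
    pairs = np.abs(np.diff(d)) / ((d[:-1] + d[1:]) / 2.0)
    return 100.0 * pairs.mean()

def percent_v(vowel_intervals, consonant_intervals):
    """%V: proportion of utterance duration made up of vocalic intervals."""
    v = float(np.sum(vowel_intervals))
    c = float(np.sum(consonant_intervals))
    return 100.0 * v / (v + c)

# Invented vowel/consonant interval durations (ms) for one sentence
vowels = [80, 120, 60, 150, 70]
consonants = [90, 60, 110, 75, 95, 85]
print(varco(vowels), npvi(vowels), percent_v(vowels, consonants))
```

Both Varco and nPVI are normalized for speaking rate (by the mean and by local pairwise means, respectively), which is what makes cross-language comparisons of durational variability meaningful.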
Authors:Tanja Kocjančič, Tomáš Bořil, Susanna Hofmann Abstract: Language and Speech, Ahead of Print. This paper presents L2 vowel remediation in a classroom setting via two real-time visual feedback methods: articulatory ultrasound tongue imaging, which shows tongue shape and position, and a newly developed acoustic formant analyzer, which visualizes a point correlating with the combined effect of tongue position and lip rounding in a vowel quadrilateral. Ten Czech students of the Swedish language participated in the study. Swedish vowel production is difficult for Czech speakers since the languages differ significantly in their vowel systems. The students selected the vowel targets on their own and practiced in two classroom groups, with six students receiving two ultrasound training lessons, followed by one acoustic, and four students receiving two acoustic lessons, followed by one ultrasound. Audio data were collected pre-training, after the two sessions employing the first visual feedback method, and at post-training, allowing measurement of Euclidean distances among selected groups of vowels and observation of the direction of change within the vowel quadrilateral as a result of practice. Perception tests were performed before and after training, revealing that most learners perceived selected vowels correctly already before the practice. The study showed that both feedback methods can be successfully applied to L2 classroom learning, and both led to improvement in the pronunciation of the selected vowels, as well as the Swedish vowel set as a whole. However, ultrasound tongue imaging seems to have an advantage as it resulted in a greater number of improved targets. Citation: Language and Speech PubDate: 2024-05-02T06:56:37Z DOI: 10.1177/00238309231223736
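The Euclidean-distance measure mentioned above is simply distance in formant space. A small sketch with invented (F1, F2) values, not the study's measurements: a learner token moving closer to a reference native token corresponds to a shrinking distance.

```python
import math

def vowel_distance(v1, v2):
    """Euclidean distance between two vowels in (F1, F2) formant space (Hz)."""
    return math.dist(v1, v2)

# Invented Swedish /y/ tokens: learner before vs. after training,
# compared against a reference native token
learner_pre, learner_post, native = (310, 1900), (295, 2050), (290, 2100)
print(vowel_distance(learner_pre, native))   # larger distance before training
print(vowel_distance(learner_post, native))  # smaller distance after training
```

Distances computed in raw Hz weight F2 more heavily than F1; analyses often use Bark- or Lobanov-normalized formant values instead, though the abstract does not say which scale was used here.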
Authors:Pierre Badin, Thomas R. Sawallis, Marija Tabain, Laurent Lamalle Abstract: Language and Speech, Ahead of Print. The goal of this article is to illustrate the use of MRI for exploring bi- and multi-lingual articulatory strategies. One male and one female speaker recorded sets of static midsagittal MRIs of the whole vocal tract, producing vowels as well as consonants in various vowel contexts in either the male’s two or the female’s three languages. Both speakers were native speakers of English (American and Australian English, respectively), and both were fluent L2 speakers of French. In addition, the female speaker was a heritage speaker of Croatian. Articulatory contours extracted from the MRIs were subsequently used at three progressively more compact and abstract levels of analysis. (1) Direct comparison of overlaid contours was used to assess whether phones analogous across L1 and L2 are similar or dissimilar, both overall and in specific vocal tract regions. (2) Consonant contour variability along the vocal tract due to vowel context was determined using dispersion ellipses and used to explore the variable resistance to coarticulation for non-analogous rhotics and analogous laterals in Australian, French, and Croatian. (3) Articulatory modeling was used to focus on specific articulatory gestures (tongue position and shape, lip protrusion, laryngeal height, etc.) and then to explore the articulatory strategies in the speakers’ interlanguages for production of the French front rounded vowel series. This revealed that the Australian and American speakers used different strategies to produce the non-analogous French vowel series. We conclude that MRI-based articulatory data constitute a very rich and underused source of information that amply deserves applications to the study of L2 articulation and bilingual and multi-lingual speech. Citation: Language and Speech PubDate: 2024-04-29T06:52:36Z DOI: 10.1177/00238309231224790
Authors:Si Berrebi, Sharon Peperkamp Abstract: Language and Speech, Ahead of Print. In Modern Hebrew, only three segmental markers are typically acknowledged as ethnically conditioned, and usage of these markers has significantly decreased in second and third generation speakers. Yet the sociolinguistic situation of diverging language backgrounds of first generation speakers, compounded with ethnic segregation in housing and the workforce, seems like fertile ground for social identification from speech. We report two studies on prosodic variation in Modern Hebrew: a perception study and a “matched-pairs” corpus study. The results of the first illustrate that even in the absence of the known segmental markers, ethnicity perception of young native speakers may still diverge between two major ethnic identities, Mizrahi (Middle Eastern) and Ashkenazi (European). The main acoustic correlate was rhythm, measured as the proportional duration of vowels in the utterance. In the second study, actors’ speech rhythm was found to be modulated by their portrayed ethnic identity in the same direction, suggesting that this variable is socially salient—and for some speakers, controllable—enough to be involved in style shifting. This study joins a growing body of work illustrating that relatively mild rhythmic variation can contribute to social identification and, in the current case, to ethnicity portrayal. Citation: Language and Speech PubDate: 2024-04-22T11:56:09Z DOI: 10.1177/00238309241243025
Authors:Meike M. de Boer, Willemijn F. L. Heeren Abstract: Language and Speech, Ahead of Print. As forensic recordings are increasingly collected in multiple languages, this study investigates the language dependency of the voiceless alveolar fricative /s/ in speakers of native (L1) Dutch and non-native (L2) English. Due to phonetic similarity between the languages, Dutch learners of English may exhibit language-independent /s/ acoustics, making it an interesting feature for multilingual forensic speaker comparisons (FSCs). However, the findings show that out of the four spectral moments, center of gravity, standard deviation (SD), skewness, and kurtosis, only SD remained stable across the languages; the other measurements were language-dependent. The results were largely independent of the /s/ tokens’ contexts, although an interaction between language and context was found for skewness and kurtosis: With a labial right phonetic neighbor, language dependency was largely reduced. The findings have implications for FSCs: as /s/ is language-dependent in speakers of L1 Dutch and L2 English, it shows limited potential for cross-linguistic speaker comparisons in forensic casework. Citation: Language and Speech PubDate: 2024-04-20T08:46:05Z DOI: 10.1177/00238309241242114
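The four spectral moments named above can be computed by treating the normalized power spectrum as a probability distribution over frequency. A simplified sketch on a synthetic narrowband "token" (real fricative analyses would typically window, pre-emphasize, and band-limit the signal; all values here are invented):

```python
import numpy as np

def spectral_moments(signal, sr):
    """First four spectral moments of a signal: center of gravity,
    standard deviation, skewness, and (excess) kurtosis, treating the
    power spectrum as a probability distribution over frequency."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    p = power / power.sum()                          # normalize to a distribution
    cog = np.sum(freqs * p)                          # center of gravity
    sd = np.sqrt(np.sum((freqs - cog) ** 2 * p))     # standard deviation
    skew = np.sum((freqs - cog) ** 3 * p) / sd ** 3  # skewness
    kurt = np.sum((freqs - cog) ** 4 * p) / sd ** 4 - 3.0  # kurtosis (excess)
    return cog, sd, skew, kurt

# Synthetic /s/-like token: a narrowband 7 kHz component in faint noise
sr, n = 44100, 4410
rng = np.random.default_rng(1)
token = np.sin(2 * np.pi * 7000 * np.arange(n) / sr) + 0.01 * rng.standard_normal(n)
cog, sd, skew, kurt = spectral_moments(token, sr)
print(cog, sd, skew, kurt)
```

Intuitively, the center of gravity tracks where the frication energy sits, SD tracks how diffusely it is spread, and skewness/kurtosis describe the shape of that spread, which is why the latter two proved sensitive to the phonetic context of the /s/ token.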
Authors:Margaret Kehoe Abstract: Language and Speech, Ahead of Print. Word complexity indices, such as the Index of Phonetic Complexity (IPC) and the Word Complexity Measure (WCM), code a word in terms of featural and structural properties that pose difficulty in phonological development. Studies have investigated the influence of complexity indices on vocabulary development; however, few have examined their influence on consonant accuracy. Furthermore, these indices were developed for English-speaking children and have not been widely applied to other languages. This study investigates whether a word’s phonological complexity influences how accurately it is produced in French-speaking children. Four databases consisting of the productions of children (n = 74), aged 1;11 to 4;9, were analyzed. Words were coded in terms of the IPC, WCM, and parameters that add complexity during phonological development. Using mixed-effects logistic regression, we examined whether phonological complexity as determined by the IPC, WCM, or by alternative indices better accounts for the influence of complexity on production. We also investigated whether the accuracy of a target sound/structure was influenced by a word’s complexity. Results indicated that complexity based on the IPC or WCM significantly influenced consonant accuracy; however, indices tapping fewer features provided superior model fit. At younger ages, the presence of fricatives/liquids and, at all ages, the presence of alveopalatal fricatives, codas, and clusters significantly influenced accuracy. Findings were inconclusive as to whether whole word complexity influenced the accuracy of a target sound/structure. Results suggest that current complexity indices provide only approximate indications of how featural and structural properties of words influence production. Citation: Language and Speech PubDate: 2024-04-02T11:34:34Z DOI: 10.1177/00238309241237473
Authors:Xiaoyi Tian, Amanda E. Griffith, Zane Price, Kristy Elizabeth Boyer, Kevin Tang Abstract: Language and Speech, Ahead of Print. Linguistic alignment, the tendency of speakers to share common linguistic features during conversations, has emerged as a key area of research in computer-supported collaborative learning. While previous studies have shown that linguistic alignment can have a significant impact on collaborative outcomes, there is limited research exploring its role in K–12 learning contexts. This study investigates syntactic and lexical linguistic alignments in a collaborative computer science–learning corpus from 24 pairs (48 individuals) of middle school students (aged 11–13). The results show stronger effects of self-alignment than partner alignment on both syntactic and lexical levels, with students often diverging from their partners on task-relevant words. Furthermore, student self-alignment on the syntactic level is negatively correlated with partner satisfaction ratings, while self-alignment on the lexical level is positively correlated with their partner’s satisfaction. Citation: Language and Speech PubDate: 2024-03-28T10:38:39Z DOI: 10.1177/00238309241234565
Authors:Shuwen Chen, D. H. Whalen, Peggy Pik Ki Mok Abstract: Language and Speech, Ahead of Print. Rhotic sounds are some of the most challenging sounds for L2 learners to acquire. This study investigates the production of English rhotic sounds by Mandarin–English bilinguals with two English proficiency levels. The production of the English /ɹ/ by 17 Mandarin–English bilinguals was examined with ultrasound imaging and compared with the production of native English speakers. The ultrasound data show that bilinguals can produce native-like bunched and retroflex gestures, but the distributional pattern of tongue shapes in various contexts differs from that of native speakers. Acoustically, the English /ɹ/ produced by bilinguals had a higher F3 and F3–F2, as well as some frication noise in prevocalic /ɹ/, features similar to the Mandarin /ɹ/. Mandarin–English bilinguals did produce language-specific phonetic realizations for the English and Mandarin /ɹ/s. There was a positive correlation between language proficiency and English-specific characteristics of /ɹ/ by Mandarin–English bilinguals in both articulation and acoustics. Phonetic similarities facilitated rather than hindered L2 speech learning in production: Mandarin–English bilinguals showed better performance in producing the English /ɹ/ allophones that were more similar to the Mandarin /ɹ/ (syllabic and postvocalic /ɹ/s) than producing the English /ɹ/ allophone that was less similar to the Mandarin /ɹ/ (prevocalic /ɹ/). This study contributes to our understanding of the mechanism of speech production in late bilinguals. Citation: Language and Speech PubDate: 2024-03-11T04:57:44Z DOI: 10.1177/00238309241230895
Authors:Lucia Mareková, Štefan Beňuš Abstract: Language and Speech, Ahead of Print. Research on fluency in native (L1) and non-native (L2) speech production and perception helps us understand how individual L1 speaking style might affect perceived L2 fluency and how this relationship might be reflected in L1 versus L2 oral assessment. While the relationship between production and perception of fluency in spontaneous speech has been studied, the information provided by reading has been overlooked. We argue that reading provides a direct and controlled way to assess language proficiency that might complement information gained from spontaneous speaking. This work analyzes the relationship between speech fluency production and perception in passages of L1 (Slovak) and L2 (English) read by 57 undergraduate Slovak students of English and rated for fluency by 15 English teachers who are Slovak natives. We compare acoustic production measures between L1 and L2 and analyze how their effect on perceived fluency differs for the two languages. Our main finding is that the articulation rate, the overall number of pauses, and the number of between-clause and mid-clause pauses predict ratings differently in L1 Slovak versus L2 English. The speech rate and durations of pauses predict ratings similarly in both languages. The contributions of our results to understanding fluency aspects of spontaneous and read speech, the relationship between L1 and L2, the relationship between production and perception, and the teaching of L2 English are discussed. Citation: Language and Speech PubDate: 2024-03-06T06:20:28Z DOI: 10.1177/00238309241230899
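As a concrete gloss on the production measures involved, here is a small sketch with invented numbers; the study's own operational definitions (e.g., the minimum silence duration counted as a pause) are not given in the abstract.

```python
def fluency_measures(n_syllables, total_dur, pause_durs):
    """Basic fluency measures from a read passage.

    speech rate       = syllables / total duration (pauses included)
    articulation rate = syllables / phonation time (pauses excluded)
    """
    phonation = total_dur - sum(pause_durs)
    return {
        "speech_rate": n_syllables / total_dur,
        "articulation_rate": n_syllables / phonation,
        "n_pauses": len(pause_durs),
        "mean_pause_dur": sum(pause_durs) / len(pause_durs) if pause_durs else 0.0,
    }

# Invented read passage: 120 syllables in 40 s with five silent pauses (s)
m = fluency_measures(120, 40.0, [0.6, 0.4, 0.8, 0.5, 0.7])
print(m)
```

The distinction matters for the finding above: speech rate folds pausing behavior into a single number, whereas articulation rate and the pause counts separate "how fast one speaks while speaking" from "how often and where one pauses".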
Authors:Tsung-Ying Chen Abstract: Language and Speech, Ahead of Print. The starting-small effect is a cognitive advantage in language acquisition when learners begin by generalizing on regularities from structurally simple and shorter tokens in a skewed input distribution. Our study explored this effect as a potential explanation for the biased learning of opaque and transparent vowel harmony. In opaque vowel harmony, feature agreement occurs strictly between adjacent vowels, and an intervening “neutral vowel” blocks long-distance vowel harmony. Thus, opaque vowel harmony could be acquired even if learners start with structurally simpler and more frequent disyllabic tokens. In contrast, transparent vowel harmony can only be observed in longer tokens demonstrating long-distance agreement by skipping a neutral vowel. Opaque vowel harmony is predicted to be learned more efficiently due to its compatibility with local dependency acquired via starting-small learning. In two artificial grammar learning experiments, learners were exposed to both vowel harmony patterns embedded in an equal number of disyllabic and trisyllabic tokens or a skewed distribution with twice as many disyllabic tokens. In Exp I, learners’ test performance suggests the consistently biased learning of local and opaque vowel harmony with starting-small learning. Furthermore, in Exp II, the acquired vowel harmony patterns varied significantly by working memory capacity with a balanced but not skewed input distribution, presumably because of the ease of cognitive demand with starting-small learning. Citation: Language and Speech PubDate: 2024-03-04T11:43:19Z DOI: 10.1177/00238309241230625
Authors:Antonia Götz, Anna Krasotkina, Gudrun Schwarzer, Barbara Höhle Abstract: Language and Speech, Ahead of Print. Infants’ speech perception is characterized by substantial changes during the first year of life that attune the processing mechanisms to the specific properties of the ambient language. This paper focuses on these developmental changes in vowel perception. More specifically, the emergence and potential cause of perceptual asymmetries in vowel perception are investigated by an experimental study on German 6- and 9-month-olds’ discrimination of a vowel contrast that is not phonemic in German. Results show discrimination without any asymmetry in the 6-month-olds, but an asymmetrical pattern in the 9-month-olds, with better performance when the vowel changes from the less focal to the more focal vowel than vice versa. The results concerning the asymmetries are compatible with the Natural Referent Framework as well as with the Native Language Magnet model. Our results foster two main conclusions. First, bi-directional testing is essential in studies of vowel perception. Second, when testing non-native vowel perception, the relation of the stimuli to the native vowel system has to be considered very carefully, as this system impacts the perception of non-native vowels. Citation: Language and Speech PubDate: 2024-02-20T04:24:21Z DOI: 10.1177/00238309241228237
Authors:Meredith A. Shafto, Lise Abrams, Lori E. James, Pengbo Hu, Genevieve Gray Abstract: Language and Speech, Ahead of Print. Emotion can have a profound effect on language processing, and taboo words have been increasingly used in research as highly emotional, negatively valenced stimuli. However, because taboo words as a lexical category are socially constructed and semantically idiosyncratic, they may also have complex emotional characteristics. This complexity may not be fully considered by researchers using taboo words as research stimuli. This study gathered tabooness, humor, and arousal ratings to provide a resource for researchers to better understand the sources and characteristics of the strong emotions generated by taboo words. A total of 411 participants aged 18–83 were recruited via online platforms, and all participants rated the same 264 words on tabooness, humor, and arousal. Analyses indicated that tabooness and humor ratings were positively related to each other, and both were predicted by arousal ratings. The set of ratings included here provides a tool for researchers using taboo stimuli, and our findings highlight methodological considerations while broadening our understanding of the cognitive and linguistic nature of highly emotional language. Citation: Language and Speech PubDate: 2024-02-15T12:50:37Z DOI: 10.1177/00238309241228863
Authors:Johanna Basnak, Mitsuhiko Ota Abstract: Language and Speech, Ahead of Print. To date, research on wordform learning biases has mostly focused on language-dependent factors, such as the phonotactics and neighborhood density of the language(s) known by the learner. Domain-general biases, by contrast, have received little attention. In this study, we focus on one such bias—an advantage for string-internal repetitions—and examine its effects on wordform learning. Importantly, we consider whether any type of segmental repetition is equally beneficial for word recall, or whether learning is favored more or only by repeated consonants, in line with previous research indicating that consonants play a larger role than vowels in lexical processing. In Experiment 1, adult English speakers learned artificial consonant-vowel-consonant-vowel words containing either a repeated consonant (e.g., /sesu/, “c-rep”), a repeated vowel (e.g., /sepe/, “v-rep”), or dissimilar consonants and vowels (e.g., /sepu/, “no-rep”). Recall results showed no advantage for v-reps but higher accuracy for c-reps compared with no-reps. In Experiment 2, participants performed a label preference task with the same stimuli. The results showed dispreference for both c-reps and v-reps relative to no-reps, indicating that the results of Experiment 1 are independent of wordlikeness effects. These outcomes reveal that there is a form-learning bias for words with identical consonants but not for words with identical vowels, suggesting that a domain-general advantage for repetitions within strings is modulated by a language-specific processing bias for consonants. Citation: Language and Speech PubDate: 2024-02-05T08:53:15Z DOI: 10.1177/00238309231223909
Authors:Marita K. Everhardt, Anastasios Sarampalis, Matt Coler, Deniz Başkent, Wander Lowie Abstract: Language and Speech, Ahead of Print. This study investigates whether a presumed difference in the perceptibility of cues to lexical stress in spectro-temporally degraded simulated cochlear implant (CI) speech affects how listeners weight these cues during a lexical stress identification task, specifically in their non-native language. Previous research suggests that in English, listeners predominantly rely on a reduction in vowel quality as a cue to lexical stress, whereas in Dutch, changes in the fundamental frequency (F0) contour seem to carry greater functional weight than the vowel quality contrast. Generally, non-native listeners transfer the cue-weighting strategies of their native language to the non-native language. Moreover, a few studies have suggested that these cues to lexical stress are differently perceptible in spectro-temporally degraded electric hearing, as CI users appear to make more effective use of changes in vowel quality than of changes in the F0 contour as cues to linguistic phenomena. In this study, native Dutch learners of English identified stressed syllables in CI-simulated and non-CI-simulated Dutch and English words that contained changes in the F0 contour and vowel quality as cues to lexical stress. The results indicate that the cue-weighting strategies in neither the native language nor the non-native language are influenced by the perceptibility of cues in the spectro-temporally degraded speech signal. These results contrast with our expectations based on previous research and support the idea that cue weighting is a flexible and transferable process. Citation: Language and Speech PubDate: 2024-01-29T08:24:27Z DOI: 10.1177/00238309231222207