- Vocal Speed and Processing of Persuasive Messages: Curvilinear Processing Effects
Abstract: Most work on indicators of vocal confidence (and social influence work more broadly) examines linear relationships between variables. However, in some domains curvilinear (i.e., accelerating or decelerating) relationships may provide greater clarity in understanding human speech patterns. We review mixed past work on vocal speed as a case study, wherein faster vocal speed has been shown both to bolster and to inhibit persuasion (e.g., by impairing processing). Across six studies (total N = 3,958), we show that faster speed initially increases perceived source confidence and message processing, but eventually the increase attenuates or reverses. Correspondingly, vocal speed has a decelerating relationship to participants’ processing of persuasive messages, as revealed by two main processes: argument quality effects on attitudes, and the correspondence between thought valence and attitudes. The present work highlights the potential value of high-powered examinations of curvilinear relationships in nonverbal phenomena for which speed is likely to play a role.
PubDate: 2024-12-03 DOI: 10.1007/s10919-024-00477-6
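
The core analytic move here, testing whether a quadratic term captures a decelerating effect better than a straight line, can be sketched briefly. Below is a minimal Python illustration with simulated data; the variable names (speed, attitude) and effect sizes are invented for demonstration and are not the study's materials.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
speed = rng.uniform(100, 300, 500)               # simulated words per minute
attitude = 0.04 * speed - 0.0001 * speed**2 + rng.normal(0, 0.5, 500)
df = pd.DataFrame({"speed": speed, "attitude": attitude})

# A reliable negative coefficient on the squared term indicates a
# decelerating (eventually reversing) relationship rather than a linear one.
model = smf.ols("attitude ~ speed + I(speed ** 2)", data=df).fit()
print(model.params)
```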
- The Effect of Face Masks and Sunglasses on Emotion Perception over Two Years of the COVID-19 Pandemic
Abstract: Since the beginning of the COVID-19 pandemic in early 2020, face masks have become a common experience for many people as a means of reducing the spread of the disease. Although recent research has shown that face masks impair emotion recognition, it is unclear how this impairment differs from that of other familiar types of face covering, such as sunglasses. In the present study, participants identified six affective expressions (anger, disgust, fear, surprise, sadness, and happiness) on faces wearing masks or sunglasses, and rated their confidence in each assessment, at four time points during the pandemic (June 2020, March 2021, September 2021, June 2022). They also judged emotion intensity and genuineness. Overall, emotion identification of faces with masks was less accurate, and received lower ratings of confidence and emotion intensity, than that of faces with sunglasses. Faces with sunglasses, conversely, were rated as less genuine than faces with masks. This pattern for both masks and sunglasses remained stable across two years of the pandemic. The study provides new insight into the differential effects of face masks and sunglasses on emotion perception and highlights the importance of face coverings for emotion communication and social interaction.
PubDate: 2024-12-01 DOI: 10.1007/s10919-024-00471-y
- Young Adults with Autism May Possess Typical-Level Skills in Identifying Emotions from Person-in-Context Images
Abstract: This quantitative study examined whether the visual information provided by extra-facial elements (i.e., human bodies and contextual surroundings) affects emotion-identification accuracy among young adults with autism spectrum disorder. Participants were 15 young adults with autism spectrum disorder (mean age = 19.47) and 15 neurotypical young adults (mean age = 19.73), who identified emotions (fear, anger, disgust, happiness, sadness, surprise) from photographs. Author-developed stimuli were used, comprising head-only, whole-body-only, and whole-body-in-a-context (e.g., a person in a store) visuals. Participants viewed the images and verbally identified the emotions. Emotion-recognition accuracy among young adults with autism increased when identifying emotions from whole-body-in-a-context images compared to their performance on head-only or whole-body-only visuals. Young adults with autism were also less accurate than neurotypical young adults on head-only and whole-body-only visuals, but no significant group differences were found when identifying emotions from whole-body-in-a-context visuals. Neurotypical participants showed no significant changes across visual types. These findings indicate that young adults with autism may rely on visual information, such as bodily and contextual cues, to accurately identify emotions, which has ramifications for this group’s ability to recognize emotions in real-world situations and may inform future evaluations and interventions.
PubDate: 2024-11-12 DOI: 10.1007/s10919-024-00476-7
- Accuracy, Bias, and Overgeneralization: Perceived Aggression Guides Threat Detection and Punishment of Female Criminal Offenders
Abstract: Despite centuries of scientific (and sometimes pseudoscientific) interest in identifying criminals, research has not examined the perception and punishment of female criminals. Building on research describing how men’s facial appearance relates to juridic outcomes, we therefore investigated criminality inferences, their underlying cues, and hypothetical sentencing judgments from photos of female criminals. Participants categorized violent and nonviolent criminals significantly better than chance (Study 1), and aggression perceptions explained that detection (Study 2). Moreover, women who looked more aggressive received more severe hypothetical (Study 3a) and actual (Study 3b) sentences. Women’s facial appearance may therefore play an unintended and incidental role in perceptions of their criminality, illustrating how kernels of truth in social perception promote biases resulting from overgeneralization effects.
PubDate: 2024-09-19 DOI: 10.1007/s10919-024-00475-8
- When to Laugh, When to Cry: Display Rules of Nonverbal Vocalisations Across Four Cultures
Abstract: Nonverbal vocalisations like laughter, sighs, and groans are a fundamental part of everyday communication. Yet surprisingly little is known about the social norms concerning which vocalisations are considered appropriate to express in which context (i.e., display rules). Here, in two pre-registered studies, we investigate how people evaluate the appropriateness of different nonverbal vocalisations across locations and relationships with listeners. Study 1, with a U.S. sample (n = 250), showed that certain vocalisations (e.g., laughter, sighs, cries) are consistently viewed as more socially acceptable than others (e.g., roars, groans, moans). Additionally, location (private vs. public) and interpersonal closeness (close vs. not close) significantly influenced these perceptions, with private locations and close relationships fostering greater expressive freedom. Study 2 extended this investigation across four societies with divergent cultural norms (n = 1,120 in total): the U.S. (for direct replication), Türkiye, China, and the Netherlands. Findings largely replicated those from Study 1 and supported the existence of cross-culturally consistent patterns in display rules for nonverbal vocalisations, though with some variation across cultures. This research expands our understanding of how social norms affect auditory communication, extending beyond the visual modality of facial expressions to encompass the rich world of nonverbal vocalisations.
PubDate: 2024-09-09 DOI: 10.1007/s10919-024-00473-w
- Machine Learning Predicts Accuracy in Eyewitnesses’ Voices
Abstract: An important task in criminal justice is to evaluate the accuracy of eyewitness testimony. In this study, we examined whether machine learning could be used to detect accuracy. Specifically, we examined whether support vector machines (SVMs) could classify testimony statements as correct or incorrect based purely on the nonverbal aspects of the voice. We analyzed 3,337 statements (76.61% accurate) from 51 eyewitness testimonies along 94 acoustic variables, and examined the relative importance of each acoustic variable using Lasso regression. Results showed that the machine learning algorithms predicted accuracy 20–40% above the chance level (AUC = 0.50). The most important predictors included acoustic variables related to the amplitude (loudness) of speech and the duration of pauses, with higher amplitude predicting correct recall and longer pauses predicting incorrect recall. Taken together, we find that machine learning methods can predict whether eyewitness testimonies are correct or incorrect with above-chance accuracy comparable to human performance, but without detrimental human biases. This offers a proof of concept for machine learning in evaluations of eyewitness accuracy and opens up new avenues of research that we hope might improve social justice.
PubDate: 2024-09-09 DOI: 10.1007/s10919-024-00474-9
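
A rough outline of the pipeline this abstract describes, an SVM evaluated by cross-validated AUC plus Lasso regression for feature importance, might look like the following sketch. The data are simulated to match the reported dimensions (3,337 statements, 94 acoustic variables, ~76.6% correct base rate); all names and settings are illustrative assumptions, not the authors' actual configuration.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.linear_model import LassoCV
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(3337, 94))              # 94 acoustic variables (simulated)
y = (rng.random(3337) < 0.766).astype(int)   # ~76.6% "correct" base rate

# Cross-validated AUC for an RBF-kernel SVM (chance AUC = 0.50).
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
auc = cross_val_score(svm, X, y, cv=5, scoring="roc_auc")
print("Mean AUC:", auc.mean())

# Lasso shrinks uninformative coefficients to exactly zero, so the
# surviving nonzero coefficients flag the most important predictors.
lasso = make_pipeline(StandardScaler(), LassoCV(cv=5)).fit(X, y)
coefs = lasso.named_steps["lassocv"].coef_
print("Nonzero predictors:", np.flatnonzero(coefs))
```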
- The Expression of Vocal Emotions in Cognitively Healthy Adult Speakers: Impact of Emotion Category, Gender, and Age
Abstract: This study examines the ability to express distinct emotions of negative and positive valence through voice modulations (i.e., affective prosody production) and how the speaker’s gender and age influence this ability. A sample of 31 neurologically healthy adults (17 women and 14 men, aged 41–76) were asked to say “papa” with six emotional tones—sadness, anger, fear, pleasant surprise, joy, and awe—in response to affect-evoking scenarios. The speakers’ vocal expressions were recorded and then assessed by five expert raters and 30 naive listeners using an emotion recognition task. Results showed that negative emotions were expressed more accurately than positive ones, highlighting a valence effect. In addition, female speakers’ vocal expressions were recognized at higher rates than male speakers’. Furthermore, aging was associated with a moderate decline in the accuracy of prosodic emotional expression. Despite generally lower recognition rates from naive listeners compared to expert raters, recognition rates for all emotions except awe did not differ statistically between listener groups. In conclusion, cognitively healthy adults can convey discrete emotions through prosody, including distinct positive emotions, but there are significant differences depending on the emotion expressed and individual speaker characteristics. These results highlight the complexity of affective prosody production and contribute to the understanding of individual differences in nonverbal emotional expression.
PubDate: 2024-09-04 DOI: 10.1007/s10919-024-00472-x
- Touch as a Stress Buffer? Gender Differences in Subjective and Physiological Responses to Partner and Stranger Touch
Abstract: Interpersonal touch buffers against stress under challenging conditions, but this effect depends on familiarity. People benefit from receiving touch from their romantic partners, but the results are less consistent in the context of receiving touch from an opposite-gender stranger. We propose that there may be important gender differences in how people respond to touch from opposite-gender strangers; specifically, that touch from an opposite-gender stranger may have stress-buffering effects only for men, not women. Stress was induced as participants took part in an emotion recognition task in which they received false failure feedback while being touched by a romantic partner or stranger. We measured subjective and physiological markers of stress (i.e., reduced heart rate variability) throughout the experiment. Neither stranger’s nor partner’s touch had any effect on subjective or physiological markers of stress for men. Women, however, subjectively experienced a stress-buffering effect of partner and stranger touch, but showed increased physiological markers of stress when receiving touch from an opposite-gender stranger. These results highlight the importance of considering gender when investigating touch as a stress buffer.
PubDate: 2024-09-01 DOI: 10.1007/s10919-024-00455-y
- Decoding Angry and Disgusted Faces Across Cultures: Facial Prototypes and Software Matter
Abstract: Although the same emotion can be related to different facial prototypes, little is known about the impact of differences in these prototypes on emotion perception. The present research investigated the impact of different facial prototypes on emotion perception, focusing specifically on anger and disgust. Four major prototypes of anger and six major prototypes of disgust, on Asian and White targets, were generated with the FaceGen and MetaHuman programs. These prototypes were presented to participants from Canada and China, who were asked to identify the corresponding emotions. The results revealed that emotional signals conveyed by different facial prototypes were not uniform: while some prototypes cued distinctive emotions, others cued mixed emotions. Moreover, the signal value of specific prototypes varied with perceiver culture, with prototypes generally perceived as more mixed by Chinese than by Canadian participants. Notably, emotional signals were also influenced by the face-generation software. Together these findings highlight the importance of considering the impact of specific facial prototypes on emotion perception, especially in cross-cultural contexts, and caution researchers to be mindful of the face-generation software when examining potential variability in emotional decoding associated with different facial prototypes.
PubDate: 2024-09-01 DOI: 10.1007/s10919-024-00453-0
- The Digital Witness: Exploring Gestural Misinformation in Tele-Forensic Interviews with 5-8-Year-Old Children
Abstract: Child abuse is a major concern worldwide. While live-link interviews have been successful in legal and medical contexts, their potential for eyewitness interviews remains insufficiently studied, particularly in terms of non-verbal misinformation. This study explored tele-forensic interviewing (tele-FI), in which video-conferencing software such as Zoom or Skype is used to conduct forensic interviews, as an alternative to face-to-face interviews. Focus was given to the susceptibility of eyewitness memory to the gestural misinformation effect (GME), whereby post-event information in the form of gesture can distort recall of a witnessed incident. Forty-seven children aged 5 to 8 years (M = 6 years 11 months) were recruited. Comparisons were made to face-to-face conditions from prior published work by the authors (N = 63, M = 7 years 2 months) using the same methodology, video, and question sets. Results support the GME during tele-FI, with 1.23 misinformation details recorded on average and tele-FI showing a response pattern similar to face-to-face interviews. Accuracy in tele-FI (M = 16.21) was comparable to that in face-to-face interviews (M = 14.02), with a notable increase in the amount of relevant information provided in the tele-FI condition. The quality and quantity of recalled information increased significantly with age. This study provides evidence for tele-FI as a viable alternative to face-to-face interviews and represents, to the best of our knowledge, the first exploration of the GME in tele-FI. Discussion focuses on the benefits of tele-FI and the implications for police interview guidelines.
PubDate: 2024-08-06 DOI: 10.1007/s10919-024-00470-z
- Perceptions of mate poaching predict jealousy towards higher-pitched women’s voices
Abstract: Previous research has found that higher-pitched female voices elicit jealousy among women. However, it is unknown whether jealousy towards higher-pitched female voices is driven by perceptions of the rival’s mating strategy or by beliefs about the speaker’s attractiveness to one’s romantic partner. In addition, the degree to which higher-pitched female voices elicit jealousy could be associated with variation in trait jealousy among women listeners. Here, I manipulated women’s voices to be higher or lower in pitch, and tested whether variation in jealousy towards female voices was more strongly associated with perceptions of mate poaching, beliefs about the speaker’s attractiveness to listeners’ romantic partners, or individual differences in trait jealousy. I replicated findings that higher voice pitch elicits more jealousy from women, which was positively associated with perceptions of mate poaching. I found no evidence of an association between trait jealousy and any voice-based perception. The findings suggest that perceptions of a target’s proclivity to mate poach better explain the jealousy-inducing nature of higher-pitched female voices than do beliefs about the speaker’s attractiveness to one’s romantic partner.
PubDate: 2024-07-10 DOI: 10.1007/s10919-024-00469-6
- Atheists and Christians can be Discerned from their Faces
Abstract: Whereas research has documented how atheists are perceived, none has considered their perceptibility. Atheists must first be identified as atheists in order to experience the stigma associated with them (i.e., being distrusted, disliked, and widely maligned). Although atheism is considered a concealable aspect of one’s identity, substantial research has found that a variety of ostensibly concealable attributes about a person are indeed legible from small and subtle cues. We merged these lines of inquiry here by considering the perceptibility of religious and spiritual (dis)belief. Studies 1A-1B showed that atheists could be reliably discerned from Christians based on brief glimpses of 100 standardized male faces. Experiment 2 replicated these results using female faces. Experiments 3A-E then interrogated the facial features that support perceivers’ detection of atheism, showing that various parts of the face independently suffice for conveying atheism. Experiment 4 investigated and supported a potential mechanism for atheism detection: expressive suppression. Thus, across nine studies (N = 677), these data provide robust evidence that atheists can be categorized from facial cues.
PubDate: 2024-06-26 DOI: 10.1007/s10919-024-00467-8
- Impact of Hearing Loss and Auditory Rehabilitation on Dyads: A Microsocial Perspective
Abstract: Hearing loss severely hampers verbal exchange and thus social interaction, which places a high burden on hearing-impaired individuals and their close partners. Until now, nonverbal interaction in hearing-impaired dyads has not been addressed as a relevant factor for well-being or the quality of social relationships. Nonverbal synchrony of head and body movement was analysed in N = 30 dyads of persons with hearing impairment (PHI) and their significant others (SO). In a 10-minute conversation before (T1) and 6 months after cochlear implantation (T2), Motion Energy Analysis (MEA) automatically quantified head and body movement. Self-report measures from both dyad members were used to assess aspects of quality of life and closeness in the partnership. After cochlear implantation, nonverbal synchrony showed a downward trend and was less distinct from pseudosynchrony. Higher synchrony was associated with worse hearing-related quality of life, shorter duration of hearing impairment, and less closeness in the relationship. We interpret this negative association as an indication of the effort required to cope with difficulties in a dyad’s relationship. Endorsing a holistic approach to auditory rehabilitation, we propose the assessment of nonverbal synchrony as a suitable tool to detect subtle imbalances in the interpersonal relation between PHI and SO outside conscious control and to provide cues for possible therapeutic strategies.
PubDate: 2024-06-22 DOI: 10.1007/s10919-024-00468-7
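
Motion Energy Analysis rests on a simple idea: frame-by-frame pixel change within a region of interest serves as a proxy for a person's movement. A stripped-down Python/OpenCV sketch of that idea follows; the file name and ROI coordinates are placeholders, and the published MEA software includes refinements not shown here.

```python
import cv2

cap = cv2.VideoCapture("dyad_session.mp4")   # hypothetical recording
x, y, w, h = 100, 50, 200, 300               # placeholder ROI around one person
prev = None
energy = []                                   # one motion value per frame

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)[y:y + h, x:x + w]
    if prev is not None:
        diff = cv2.absdiff(gray, prev)
        # Count pixels whose change exceeds a noise threshold.
        energy.append(int((diff > 10).sum()))
    prev = gray
cap.release()
# `energy` is the motion-energy time series fed into synchrony analyses.
```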
- People Attribute a Range of Highly-Varied and Socially-Bound Meanings to Naturalistic Sad Facial Expressions
Abstract: Depictions of sadness are commonplace, and here we aimed to discover and catalogue the complex and nuanced ways that people interpret sad facial expressions. We used a rigorous qualitative methodology to build a thematic framework from 3,243 open-ended responses from 41 people who participated in 2020 and described what they thought sad expressors in 80 images were thinking, feeling, and/or intending to do. Face images were sourced from a novel set of naturalistic expressions (ANU Real Facial Expression Database), as well as a traditional posed expression database (Radboud Faces Database). The resultant framework revealed clear themes around the expressors’ thoughts (e.g., acceptance, contemplation, disbelief), social needs (e.g., social support or withdrawal), social behaviours/intentions (e.g., mock or manipulate), and the precipitating events (e.g., social or romantic conflict). Expressions that were perceived as genuine were more frequently described as thinking deeply, reflecting, or feeling regretful, whereas those perceived as posed were more frequently described as exaggerated, overamplified, or dramatised. Overall, findings highlight that facial expressions, even with high levels of consensus about the emotion category they belong to, are interpreted in nuanced and complex ways that emphasise their role as other-oriented social tools, and convey semantically related emotion categories that share smooth gradients with one another. Our novel thematic framework also provides an important foundation for future work aimed at understanding variation in the social functions of sadness, including exploring potential differences in interpretations across cultural settings.
PubDate: 2024-06-12 DOI: 10.1007/s10919-024-00463-y
- Assessment of Movement Synchrony and Alliance in Problem-Focused and Solution-Focused Counseling
Abstract: The present study investigated differences in movement synchrony and therapeutic alliance between solution-focused and problem-focused counseling. Thirty-four participants each attended two counseling sessions with different counselors, one solution-focused and one problem-focused, in randomized order. The sessions consisted of three consecutive parts: problem description, standardized intervention, and free intervention. Movement synchrony, including leading and pacing synchrony, was measured using Motion Energy Analysis (MEA) and windowed cross-lagged correlation (WCLC) based on video recordings of the sessions. The Helping Alliance Questionnaire (HAQ) was used to assess therapeutic alliance. Results showed that movement synchrony was significantly higher in solution-focused than in problem-focused counseling, driven by differences in the problem description part. This difference may be explained by the counselors’ allegiance to the solution-focused approach, as we observed more leading synchrony during the problem description part in solution-focused sessions. There was no significant difference in therapeutic alliance between the two conditions. This study expands the understanding of counseling approaches in the field of movement synchrony and contributes valuable insights for practitioners and researchers alike.
PubDate: 2024-06-01 DOI: 10.1007/s10919-024-00466-9
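
Windowed cross-lagged correlation, the synchrony measure named here, correlates two movement time series within sliding windows across a range of temporal lags; the sign of the lag at which correlation peaks distinguishes leading from pacing synchrony. Below is a compact illustrative sketch with simulated data; window, lag, and step sizes are arbitrary choices, not the study's parameters.

```python
import numpy as np

def wclc(a, b, window=125, max_lag=50, step=25):
    """Correlation of a and b within sliding windows, across lags of b."""
    lags = range(-max_lag, max_lag + 1)
    rows = []
    # Start at max_lag so negative lags never index before the series begins.
    for start in range(max_lag, len(a) - window - max_lag, step):
        seg_a = a[start:start + window]
        rows.append([np.corrcoef(seg_a, b[start + lag:start + lag + window])[0, 1]
                     for lag in lags])
    return np.array(rows)                     # shape: (n_windows, n_lags)

rng = np.random.default_rng(0)
a = rng.normal(size=3000)                     # e.g., counselor motion energy
b = np.roll(a, 12) + rng.normal(0, 0.5, 3000) # partner trails by 12 frames
corr = wclc(a, b)
best = np.abs(corr).mean(axis=0).argmax() - 50  # column index -> lag
print("Lag with strongest average correlation:", best)
```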
- The Functions of Human Touch: An Integrative Review
Abstract: There appears to have been no attempt to categorize the specific classes of behavior that the tactile system underpins. Awareness of how an organism uses touch in its environment informs understanding of the sense’s versatility in non-verbal communication and tactile perception. This review categorizes the behavioral functions underpinned by the tactile sense using three sources of data: (1) Animal data, to assess whether an identified function is conserved across species; (2) Human capacity data, indicating whether the tactile sense can support a proposed function; and (3) Human impairment data, documenting the impacts of impaired tactile functioning (e.g., reduced tactile sensitivity) in humans. From these data, three main functions of the tactile sense were identified: Ingestive Behavior; Environmental Hazard Detection and Management; and Social Communication. These functions are reviewed in detail, and future directions are discussed with a focus on social psychology, non-verbal behavior, and multisensory perception.
PubDate: 2024-05-28 DOI: 10.1007/s10919-024-00464-x
- Correction: Ad-Hoc Reviewers from 2022
PubDate: 2024-05-27 DOI: 10.1007/s10919-024-00465-w
- Vocal Emotional Expressions in Mothers with and without a History of Major Depressive Disorder
Abstract: Depression is associated with alterations in prosody when speaking (e.g., less variation in pitch, slowed speech rate), but less is known about its impact on emotional prosody. This is particularly important to investigate in parent–child contexts, as parental expression of emotion may contribute to the intergenerational transmission of depression risk. The current study asked mothers of preschool-aged children (with and without a history of major depressive disorder during their child’s lifetime) to produce child-relevant sentences in neutral, angry, and happy tones of voice. We examined whether the groups’ portrayals were acoustically or perceptually different, using speech analyses and listener ratings. Mothers with a history of depression expressed happiness with a narrower pitch range and a slower speech rate (a slower, more monotonous voice) than mothers with no history of depression. Across groups, happy exemplars with a narrower pitch range were rated by listeners as less emotionally intense, less recognizable, and less authentically happy; slower speech rate was associated with the opposite pattern of ratings. However, listeners’ ratings did not differ by depression group as a whole. Results suggest that a history of depression may influence maternal vocal expression of happiness, but that its impact on listeners’ perceptions may depend on mothers’ idiosyncratic use of acoustic cues.
PubDate: 2024-04-25 DOI: 10.1007/s10919-024-00462-z
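
The two acoustic cues central to this abstract, pitch range and speech rate, can be approximated from a recording with standard audio tooling. The sketch below uses librosa as one plausible option; the file name is a placeholder, the onset-based rate measure is only a crude proxy, and the authors' actual analysis pipeline is not specified here.

```python
import numpy as np
import librosa

y, sr = librosa.load("mother_happy_sentence.wav", sr=None)  # hypothetical file

# Fundamental frequency (F0) track; pitch range = spread of voiced F0 values.
f0, voiced_flag, voiced_probs = librosa.pyin(y, fmin=75, fmax=400, sr=sr)
f0 = f0[~np.isnan(f0)]                       # keep voiced frames only
pitch_range = f0.max() - f0.min()

# A crude speech-rate proxy: detected onset events per second of audio.
onsets = librosa.onset.onset_detect(y=y, sr=sr)
rate = len(onsets) / (len(y) / sr)
print(f"Pitch range: {pitch_range:.1f} Hz, onset rate: {rate:.2f}/s")
```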
- An Experimental Investigation of Supportive Tactile Communication During Esteem Support Conversations
Abstract: The present study examines how supportive touch affects evaluations of esteem support content containing high emotion-focused (HEF) or high problem-focused (HPF) messages during observed esteem support interactions. A 2 (verbal content: HEF or HPF) by 2 (nonverbal content: presence or absence of supportive tactile communication) experiment was conducted to test for main and interaction effects. Results revealed that observers perceived HEF conditions to be more effective than HPF conditions at enhancing the recipient’s state self-esteem and state self-efficacy and at alleviating distress. Conditions with supportive tactile communication were likewise perceived as better at enhancing state self-esteem and alleviating distress than conditions without it. However, these main effects were qualified by significant two-way interactions between message content and nonverbal behavior on ratings of state self-esteem and distress alleviation, such that the addition of supportive tactile communication enhanced the effectiveness of HPF message content but not HEF content.
PubDate: 2024-04-09 DOI: 10.1007/s10919-024-00461-0
- Facial and Body Posture Emotion Identification in Deaf and Hard-of-Hearing Young Adults
Abstract: The aim of the current study was to examine facial and body posture emotion recognition among deaf and hard-of-hearing (DHH) and hearing young adults. Participants (N = 126) were DHH (n = 48) and hearing (n = 78) college students who completed two emotion recognition tasks in which they were shown photographs of faces and body postures displaying different emotions at high and low intensities and had to infer the emotion being displayed. Compared to hearing participants, DHH participants performed worse on the body posture task at both intensities. They also performed more poorly on the facial emotion task, but only for low-intensity emotional facial expressions. On both tasks, DHH participants whose primary mode of communication was Signed English performed significantly more poorly than those whose primary mode was American Sign Language (ASL) or spoken English; moreover, DHH participants who communicated using ASL performed similarly to hearing participants. This suggests that difficulties in affect recognition among DHH individuals occur when processing both facial and body postures that are more subtle and reflective of real-life displays of emotion. Importantly, it also suggests that ASL as a primary form of communication in this population may serve as a protective factor against emotion recognition difficulties, which could be due in part to the complex nature of this language and its requirement to perceive meaning through facial and postural expressions with a wide visual lens.
PubDate: 2024-03-12 DOI: 10.1007/s10919-024-00458-9