Authors: Abraham Wandersman, Lawrence M. Scheier
Pages: 143 - 153
Abstract: Evaluation & the Health Professions, Volume 47, Issue 2, Page 143-153, June 2024. Hundreds of millions of dollars are spent each year by U.S. federal agencies for training and technical assistance (TTA) to be delivered by training and technical assistance centers (TTACs) to “delivery system organizations” (e.g., federally qualified health centers, state departments of health, substance abuse treatment centers, schools, and healthcare organizations). TTACs are often requested to help delivery system organizations implement evidence-based interventions. Yet, counterintuitively, TTACs are rarely required to use evidence-based approaches when supporting delivery systems in their use of evidence-based programs. In fact, evaluations of TTAC activities tend to be minimal; evaluation of technical assistance (if conducted at all) often emphasizes outputs (number of encounters), satisfaction, and self-reports of knowledge gained, while more substantive outcomes are not evaluated. The gap between (a) the volume of TTA services being funded and provided and (b) the evaluation of those services is immense and has the potential to be costly. The basic question to be answered is: how effective are TTA services? This article introduces the special issue on Strengthening the Science and Practice of Implementation Support: Evaluating the Effectiveness of Training and Technical Assistance Centers. The special issue promotes (1) knowledge of the state of the art of evaluation of TTACs and (2) advances in what to evaluate in TTA. A major goal of the issue is to improve the science and practice of implementation support, particularly in the areas of TTA.
Citation: Evaluation & the Health Professions
PubDate: 2024-05-25T04:30:24Z
DOI: 10.1177/01632787241248768
Issue No: Vol. 47, No. 2 (2024)
Authors: Jon Agley, Ruth Gassman, Kaitlyn Reho, Jeffrey Roberts, Susan K. R. Heil, Graciela Castillo, Lilian Golzarri-Arroyo
Pages: 154 - 166
Abstract: Evaluation & the Health Professions, Volume 47, Issue 2, Page 154-166, June 2024. In healthcare and related fields, there is often a gap between research and practice. Scholars have developed frameworks to support dissemination and implementation of best practices, such as the Interactive Systems Framework for Dissemination and Implementation, which shows how scientific innovations are conveyed to practitioners through tools, training, and technical assistance (TA). Underpinning those aspects of the model are evaluation and continuous quality improvement (CQI). However, a recent meta-analysis suggests that the approaches to and outcomes from CQI in healthcare vary considerably, and that more evaluative work is needed. Therefore, this paper describes an assessment of CQI processes within the Substance Abuse and Mental Health Services Administration’s (SAMHSA) Technology Transfer Center (TTC) Network, a large TA/TTC system in the United States comprising 39 distinct centers. We conducted key informant interviews (n = 71, representing 28 centers in the Network) and three surveys (100% center response rates) focused on CQI, time/effort allocation, and Government Performance and Results Act (GPRA) measures. We used data from each of these study components to provide a robust picture of CQI within a TA/TTC system, identifying Network-specific concepts, concerns about conflation of the GPRA data with CQI, and principles that might be studied more generally.
Citation: Evaluation & the Health Professions
PubDate: 2024-05-25T04:30:19Z
DOI: 10.1177/01632787241234882
Issue No: Vol. 47, No. 2 (2024)
Authors: Kaitlyn Reho, Jon Agley, Ruth Gassman, Jeffrey Roberts, Susan K. R. Heil, Jharna Katara
Pages: 167 - 177
Abstract: Evaluation & the Health Professions, Volume 47, Issue 2, Page 167-177, June 2024. It is important to use evidence-based programs and practices (EBPs) to address major public health issues. However, those who use EBPs in real-world settings often require support in bridging the research-to-practice gap. In the US, one of the largest systems that provides such support is the Substance Abuse and Mental Health Services Administration’s (SAMHSA’s) Technology Transfer Center (TTC) Network. As part of a large external evaluation of the Network, this study examined how TTCs determine which EBPs to promote and how to promote them. Using semi-structured interviews and pre-testing, we developed a “Determinants of Technology Transfer” survey that was completed by 100% of TTCs in the Network. Because the study period overlapped with the onset of the COVID-19 pandemic, we also conducted a retrospective pre/post-pandemic comparison of determinants. TTCs reported relying on a broad group of factors when selecting EBPs to disseminate and the methods to do so. Stakeholder and target audience input and needs were consistently the most important determinant (both before and during COVID-19), while some other determinants fluctuated around the pandemic (e.g., public health mandates, instructions in the funding opportunity announcements). We discuss implications of the findings for technology transfer and frame the analyses in terms of the Interactive Systems Framework for Dissemination and Implementation.
Citation: Evaluation & the Health Professions
PubDate: 2024-05-25T04:30:16Z
DOI: 10.1177/01632787231225653
Issue No: Vol. 47, No. 2 (2024)
Authors: Jonathan R. Olson, Elizabeth Reisinger Walker, Lydia Chwastiak, Benjamin G. Druss, Todd Molfenter, Felicia Benson, Alfredo Cerrato, Heather J. Gotham
Pages: 178 - 191
Abstract: Evaluation & the Health Professions, Volume 47, Issue 2, Page 178-191, June 2024. Recent implementation science frameworks highlight the role of training and technical assistance (TTA) in building workforce capacity to implement evidence-based practices (EBPs). However, evaluation of TTA is limited. We describe three case examples that highlight TTA by three regional centers in the national Mental Health Technology Transfer Center (MHTTC) network. Each MHTTC formed Learning Communities (LCs) to facilitate connections among behavioral health professionals with the goals of sharing implementation strategies, discussing best practices, and developing problem-solving techniques. Data on outcomes were collected through a combination of self-report surveys and qualitative interviews. LC participants reported strong connectedness, gains in knowledge and skills, improvements in implementation capacity, and intentions to advocate for organizational and systems-level change. Furthermore, across the case examples, we identified LC characteristics that are associated with participant perceptions of outcomes, including tailoring LC content to workforce needs, providing culturally relevant information, engaging leaders, forming connections among participants and trainers, and challenging participants’ current workplace practices. These findings are interpreted through the lens of the Interactive Systems Framework, which focuses on how TTA, such as LCs, can facilitate connections between the theoretical and empirical foundations of interventions and the practices of implementing interventions in real-world settings to advance workforce capacity.
Citation: Evaluation & the Health Professions
PubDate: 2024-05-25T04:30:16Z
DOI: 10.1177/01632787241237246
Issue No: Vol. 47, No. 2 (2024)
Authors: Elizabeth Weybright, Sandi Phibbs, Cassandra Watters, Allison Myers, Michelle Peavy, Abbey Martin
Pages: 192 - 203
Abstract: Evaluation & the Health Professions, Volume 47, Issue 2, Page 192-203, June 2024. The opioid epidemic in the United States continues to disproportionately affect those in rural, compared to urban, areas due to a variety of treatment and recovery barriers. One mechanism to increase capacity of rural-serving providers is through delivery of training and technical assistance (TTA) for evidence-based programs by leveraging the Cooperative Extension System. Guided by the Interactive Systems Framework, the current study evaluates TTA delivered by the Northwest Rural Opioid Technical Assistance Collaborative to opioid prevention, treatment, and recovery providers on short- (satisfaction, anticipated benefit), medium- (behavioral intention to change current practice), and long-term goals (changes toward adoption of evidence-based practices). We also evaluated differences in short- and medium-term goals by intensity of TTA event and rurality of provider. Surveys of 351 providers who received TTA indicated high levels of satisfaction with the TTA events attended, strong agreement that providers would benefit from the event, intention to make a professional practice change, and preparation toward implementing changes. Compared to urban-based providers, rural providers reported higher intention to use TTA information to change current practice. We conclude with a review of remaining gaps in the research-to-practice pipeline and recommendations for moving forward.
Citation: Evaluation & the Health Professions
PubDate: 2024-05-25T04:30:27Z
DOI: 10.1177/01632787241237515
Issue No: Vol. 47, No. 2 (2024)
Authors: Jochebed G. Gayles, Sarah M. Chilenski, Nataly Barragán, Brittany Rhoades Cooper, Janet Agnes Welsh, Megan Galinsky
Pages: 204 - 218
Abstract: Evaluation & the Health Professions, Volume 47, Issue 2, Page 204-218, June 2024. The research-practice gap between evidence-based intervention efficacy and its uptake in real-world contexts remains a central challenge for prevention and implementation science. Providing technical assistance (TA) is considered a crucial support mechanism that can help narrow the gap. However, empirical measurement of TA strategies and their variation is often lacking. The current study unpacks the black box of TA, highlighting different TA strategies, amounts, and their relation to intervention characteristics. First, we qualitatively categorized interactions between TA providers and implementers. Second, we explored how characteristics of implementing organizations and the intervention related to variations in the amount of TA delivered. Using data spanning six years, we analyzed over 10,000 encounters between TA providers and implementers. Content analysis yielded four distinct strategies: Consultation (27.2%), Coordination Logistics (24.5%), Monitoring (16.5%), and Resource Delivery (28.2%). Organizations with prior experience required less monitoring and resource delivery. Additionally, characteristics of the intervention were significantly associated with the amount of consultation, monitoring, coordination logistics, and resource delivery provided. The specific features of the intervention showed significant variation in their relation to TA strategies. These findings provide initial insights into the implications of intervention characteristics in determining how much of which TA strategies are needed to support implementations in real-world settings.
Citation: Evaluation & the Health Professions
PubDate: 2024-05-25T04:30:21Z
DOI: 10.1177/01632787241248769
Issue No: Vol. 47, No. 2 (2024)
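A minimal sketch of the kind of content-analysis summary reported above: tallying the share of coded TA encounters falling into each strategy. This is not the authors' analysis code; the file name and the "strategy" column are hypothetical.

import pandas as pd

# Hypothetical coded dataset: one row per TA encounter, with the assigned strategy label
encounters = pd.read_csv("ta_encounters_coded.csv")
shares = (encounters["strategy"]          # e.g., "Consultation", "Monitoring", ...
          .value_counts(normalize=True)   # proportion of encounters per strategy
          .mul(100)
          .round(1))
print(shares)                             # percentage of encounters per TA strategy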
Authors: Caryn S. Ward, Sophia Farmer, Melanie Livet
Pages: 219 - 229
Abstract: Evaluation & the Health Professions, Volume 47, Issue 2, Page 219-229, June 2024. Despite the millions of dollars awarded annually by the United States Department of Education to build implementation capacity through technical assistance (TA), data on TA effectiveness are severely lacking. Foundational to the operationalization and consistent research on TA effectiveness is the development and use of standardized TA core competencies, practices, and structures. Despite advances toward a consistent definition of TA, a gap still exists in understanding how these competencies are used within an operationalized set of TA practices to produce targeted outcomes at both individual and organizational levels to facilitate implementation of evidence-based practices. The current article describes key insights derived from the evaluation of an operationalized set of TA practices used by a nationally funded TA center, the State Implementation & Scaling Up of Evidence Based Practices (SISEP) Center. The TA provided by the Center supports the uptake of evidence-based practices in K-12 education for students with disabilities. Lessons learned include: (1) the need to understand the complexities and dependencies of operationalizing TA both longitudinally and at multiple levels of the system (state, regional, local); (2) the relative importance of building general and innovation-specific capacity for implementation success; (3) the value of using a co-design and participatory approach for effective TA delivery; (4) the need to develop TA providers’ educational and implementation fluency across areas and levels of the system receiving TA; and (5) the need to ensure coordination and alignment of TA providers from different centers. Gaining insight into optimal TA practices will not only provide clarity of definition fundamental to TA research, but it will also inform the conceptual framing and practice of TA.
Citation: Evaluation & the Health Professions
PubDate: 2024-05-25T04:30:27Z
DOI: 10.1177/01632787241247853
Issue No: Vol. 47, No. 2 (2024)
Pages: 230 - 231
Abstract: Evaluation & the Health Professions, Volume 47, Issue 2, Page 230-231, June 2024.
Citation: Evaluation & the Health Professions
PubDate: 2024-05-25T04:30:26Z
DOI: 10.1177/01632787241246885
Issue No: Vol. 47, No. 2 (2024)
Authors: Furkan Cakir, Hasan Gercek, Sergen Ozturk, Tugba Kuru Colak, Zubeyir Sari, Mine Gulden Polat
Abstract: Evaluation & the Health Professions, Ahead of Print. Patients’ general treatment expectations are an important indicator of the outcomes of the various treatments they will receive. There is a need for valid and reliable assessment tools that measure the expectations of patients receiving rehabilitation services. This study aimed to translate and validate the Treatment Expectations Questionnaire (TR.TEX-Q) in Turkish patients to assess their treatment-specific expectations. A total of 150 physiotherapy patients were enrolled in the study. The original version of the Treatment Expectation Questionnaire was translated into Turkish. Cronbach’s α was used to investigate internal consistency. Intraclass correlation coefficients were used to assess test–retest reliability. Pearson’s correlation was used to calculate convergent and divergent validity. Principal component analysis produced a 15-item scale with a six-factor structure. Cronbach’s α values ranged from .649 to .879. Test–retest reliability was high for the total score and for all subscales; the ICC was between .622 and .852, p < .001. The TR.TEX-Q showed good convergent validity; a moderate correlation was found with the Positivity Scale (rho = .45, p < .001). For divergent validity, low to moderate correlations were found between the TR.TEX-Q and the HADS scores. The Turkish version of the Treatment Expectation Questionnaire showed good reliability and validity for evaluating the treatment expectations of patients who will receive physiotherapy.
Citation: Evaluation & the Health Professions
PubDate: 2024-07-26T11:36:46Z
DOI: 10.1177/01632787241268211
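A minimal sketch of the internal consistency and test–retest analyses named in this abstract (Cronbach’s α and intraclass correlation), using the pingouin library. It is not the authors' code; the file and column names are hypothetical.

import pandas as pd
import pingouin as pg

# Wide data: one row per patient, one column per questionnaire item (hypothetical file)
items = pd.read_csv("tex_q_items.csv")
alpha, ci = pg.cronbach_alpha(data=items)
print(f"Cronbach's alpha = {alpha:.3f}, 95% CI = {ci}")

# Long data: total score per patient at the test and retest sessions (hypothetical columns)
retest = pd.read_csv("tex_q_retest_long.csv")   # columns: subject, session, total_score
icc = pg.intraclass_corr(data=retest, targets="subject",
                         raters="session", ratings="total_score")
print(icc[["Type", "ICC", "CI95%"]])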
Authors: Kate Furness, Catherine E. Huggins, Lauren Hanna, Daniel Croagh, Mitchell Sarkies, Terry P. Haines
Abstract: Evaluation & the Health Professions, Ahead of Print. Individuals diagnosed with upper gastrointestinal cancers experience a myriad of nutrition impact symptoms (NIS) that compromise a person’s ability to adequately meet their nutritional requirements, leading to malnutrition, reduced quality of life and poorer survival. Electronic health (eHealth) is a potential strategy for improving the delivery of nutrition interventions by improving early and sustained access to dietitians to address both NIS and malnutrition. This study aimed to explore whether the mode of delivery affected participant disclosure of NIS during a nutrition intervention. Participants in the intervention groups received a nutrition intervention for 18 weeks from a dietitian via telephone or mobile application (app) using behaviour change techniques to assist in goal achievement. Poisson regression was used to compare the proportion of individuals who reported NIS between groups. Univariate and multiple regression analyses explored the relationship between demographic variables and reporting of NIS. The incidence of reporting of NIS was more than 1.8 times higher in the telephone group (n = 38) compared to the mobile app group (n = 36). Telephone delivery predicted a higher likelihood of disclosure of self-reported symptoms of fatigue, nausea, and anorexia throughout the intervention period. A trusting therapeutic relationship built on human connection is fundamental and may not be achieved with current models of mobile health technologies.
Citation: Evaluation & the Health Professions
PubDate: 2024-07-24T10:53:04Z
DOI: 10.1177/01632787241267051
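A minimal sketch of a Poisson regression comparing symptom reporting between delivery groups, the type of analysis described in this abstract, using statsmodels. It is not the study's analysis code; the file and variable names are hypothetical.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per participant, with the count of NIS reported and the group label
df = pd.read_csv("nis_reports.csv")          # columns: n_symptoms, group ("telephone" or "app")
model = smf.poisson("n_symptoms ~ group", data=df).fit()
print(model.summary())
# Exponentiated coefficients give incidence rate ratios (e.g., telephone vs. app)
print(np.exp(model.params))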
Authors: Gabriela Zuelli Martins Silva, Mariana Romano de Lira, Luiz Ricardo Garcêz, Steven Z. George, Randy Neblett, Adriano Pezolato, Thamiris Costa Lima, Thais Cristina Chaves
Abstract: Evaluation & the Health Professions, Ahead of Print. The Fear-Avoidance Components Scale (FACS) and the Fear of Daily Activities Questionnaire (FDAQ) assess fear-avoidance model components. However, the questionnaires are not available in Brazilian Portuguese. This study aimed to translate the original English FACS and FDAQ into Brazilian (Br) Portuguese and assess their measurement properties in patients with Chronic Low Back Pain (CLBP). One hundred thirty volunteers with CLBP participated in this study. Structural validity, internal consistency, test-retest reliability, and hypothesis testing for construct validity were analyzed. Results indicated a two-factor solution for the FACS-Br, while the FDAQ-Br had a one-factor solution. Internal consistency was acceptable (Cronbach’s alpha > .8). Suitable reliability was found for the FDAQ-Br (Intraclass Correlation Coefficient [ICC] = .98) and for both FACS-Br factors (ICC = .95 and .94). Hypothesis testing for construct validity confirmed more than 75% of the hypotheses proposed a priori for the FACS maladaptive pain/movement-related beliefs domain and the FDAQ-Br. In conclusion, the FACS-Br and FDAQ-Br demonstrated acceptable reliability, internal consistency, and structural validity, and their correlation (r < .50) suggests that the tools are not interchangeable measures.
Citation: Evaluation & the Health Professions
PubDate: 2024-07-22T02:50:37Z
DOI: 10.1177/01632787241264588
Authors: Karlie M. Mirabelli, Brandon K. Schultz, Alexander M. Schoemann, Sequoyah R. Bell, Suzanne Lazorick
Abstract: Evaluation & the Health Professions, Ahead of Print. We examined the psychometric properties of the Physical Activity, Nutrition, and Technology (PANT) survey, developed by researchers to track weight management behaviors among youth. Data from 2,039 middle school students (M age = 12.4, SD = .5; 51.4% girls) were analyzed to explore and then confirm the factor structure of the PANT survey. We also examined the bivariate associations between the PANT survey, body mass index (BMI), and the Progressive Aerobic Cardiorespiratory Endurance Run (PACER). Results suggest that the PANT survey comprises two factors—Physical Activity and Healthy Choices—each with adequate internal consistency (α = .79 and .86, respectively). The Physical Activity subscale appears to be significantly associated with both z-BMI (r = −.10, p < .001) and the PACER (r = .33, p < .001) in the anticipated directions, but the criterion validity of the Healthy Choices subscale is less clear. We discuss these findings and explore future directions for developing meaningful self-report wellness behavior scales for youth.
Citation: Evaluation & the Health Professions
PubDate: 2024-07-20T02:19:26Z
DOI: 10.1177/01632787241263372
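A minimal sketch of the exploratory step of a factor analysis like the one described for the PANT survey, using the factor_analyzer package. It is not the study's code; the item file, column names, and two-factor target are assumptions taken from the abstract.

import pandas as pd
from factor_analyzer import FactorAnalyzer

# Hypothetical item-level data: one row per student, one column per PANT item
items = pd.read_csv("pant_items.csv")
fa = FactorAnalyzer(n_factors=2, rotation="oblimin")
fa.fit(items)
loadings = pd.DataFrame(fa.loadings_, index=items.columns,
                        columns=["Physical Activity", "Healthy Choices"])
print(loadings.round(2))                       # rotated factor loadings
print("Eigenvalues:", fa.get_eigenvalues()[0][:4])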
Authors: Homayoun Pasha Safavi, Mona Bouzari
Abstract: Evaluation & the Health Professions, Ahead of Print. The primary goal of the present study is to examine the plausible job-related (i.e., challenge stressors and role blurring) and individual factors (i.e., fatigue and insomnia) that potentially lead to work-related cognitive failures among healthcare staff. Through the judgmental sampling technique, data were collected from healthcare personnel in Iran. The results revealed that challenge stressors in the form of time pressure, job responsibility, and work overload are significantly related to role blurring. Moreover, role blurring increases fatigue and insomnia among medical staff, and both insomnia and fatigue cause workplace cognitive failure. The results also confirm the mediating effect of role blurring in the associations between challenge stressors and both insomnia and fatigue. According to the results, insomnia and fatigue similarly mediate the association between role blurring and workplace cognitive failure. Theoretical implications, useful suggestions for practitioners, and prospective research avenues are discussed in the study.
Citation: Evaluation & the Health Professions
PubDate: 2024-07-19T03:00:11Z
DOI: 10.1177/01632787241264597
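A minimal sketch of one mediation path of the kind tested in this study (role blurring mediating the link between challenge stressors and fatigue), using pingouin's bootstrap mediation analysis. It is not the authors' model or code; the file and variable names are hypothetical.

import pandas as pd
import pingouin as pg

# Hypothetical survey scores: one row per healthcare worker
df = pd.read_csv("healthcare_staff.csv")
res = pg.mediation_analysis(data=df, x="challenge_stressors",
                            m="role_blurring", y="fatigue",
                            n_boot=2000, seed=42)
print(res)   # direct, indirect, and total effects with bootstrap confidence intervals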
Authors: Aynslie Hinds, Beda Suárez Aguilar, Yercine Duarte Berrio, Dorian Ospina Galeano, John Harold Gómez Vargas, Valentina Espinosa Ruiz, Javier Mignone
Abstract: Evaluation & the Health Professions, Ahead of Print. The objective of the study was to assess the consistency between self-reported demographic characteristics, health conditions, and healthcare use, and administrative healthcare records, in a sample of enrollees of an Indigenous health organization in Colombia. We conducted a phone survey of a random sample of 2113 enrollees between September 2020 and February 2021. Administrative health records were obtained for the sample. Using ICD-10 diagnostic codes, we identified individuals who had healthcare visits for diabetes, hypertension, and/or pregnancy. Using unique identifiers, we linked their survey data to the administrative dataset. Agreement percentages and Cohen’s Kappa coefficients were calculated. Logistic regressions were performed for each health condition/state. Results showed a high degree of agreement between data sources for sex and age, similar rates for diabetes and hypertension, and a 10% variation for pregnancy. Kappa statistics were in the moderate range. Age was significantly associated with agreement between data sources. Sex, language, and self-rated health were significant for diabetes. This is the first study with data from an Indigenous population assessing the consistency between self-reported data and administrative health records. Survey and administrative data produced similar results, suggesting that Anas Wauu can be confident in using their data for planning and research purposes, as part of the movement toward data sovereignty.
Citation: Evaluation & the Health Professions
PubDate: 2024-06-17T02:16:32Z
DOI: 10.1177/01632787241263370
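A minimal sketch of the agreement analysis described here: percent agreement and Cohen's kappa between a self-reported indicator and the corresponding administrative record, using scikit-learn. It is not the study's code; the linked file and column names are hypothetical.

import pandas as pd
from sklearn.metrics import cohen_kappa_score

# Hypothetical linked dataset: one row per enrollee, 0/1 indicators from each source
linked = pd.read_csv("linked_records.csv")
self_report = linked["diabetes_self_report"]   # from the phone survey
admin = linked["diabetes_admin"]               # from ICD-10-coded healthcare visits

agreement = (self_report == admin).mean() * 100
kappa = cohen_kappa_score(self_report, admin)
print(f"Agreement = {agreement:.1f}%, Cohen's kappa = {kappa:.2f}")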
Authors: Ellen Funkhouser, Rahma Mungia, Reesa Laws, Denis B. Nyongesa, Suzanne Gillespie, Michael C. Leo, Mary Ann McBurnie, Gregg H. Gilbert
Abstract: Evaluation & the Health Professions, Ahead of Print. Surveys of health professionals typically have low response rates, which have decreased in recent years. We report on the methods used, participation rates, and study time for 11 national questionnaire studies of dentists conducted from 2014 to 2022. Participation rates decreased from 87% to 25%. Concurrent with this decrease was a decrease in the intensity with which the practitioners were recruited. Participation rates were higher when postal mail invitations and paper options were used (84% vs. 58%, p < .001). Completion rates were nearly twice as high in studies that recruited in waves than in those that did not (61% vs. 35%, p = .003). Study time varied from 2.6 to 28.4 weeks. Study time was longest when postal mail and completion on paper were used (26.0 vs. 11.3 weeks, p = .01). Among studies using only online methods, study time was longer when invitations were staggered than when all invitations went out in one bolus (means 12.0 vs. 5.2 weeks, p = .04). Study time was positively correlated with participation rates (Spearman r = .80, p = .005). General dentists participated at rates an average of 12% higher than specialists. Recruitment methodology, such as recruiting in waves or stages, should be considered when designing surveys.
Citation: Evaluation & the Health Professions
PubDate: 2024-06-06T08:54:54Z
DOI: 10.1177/01632787241259186
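A minimal sketch of the study-level Spearman correlation reported above (study time vs. participation rate), using SciPy. It is not the authors' analysis; the file and column names are hypothetical.

import pandas as pd
from scipy.stats import spearmanr

# Hypothetical study-level data: one row per questionnaire study
studies = pd.read_csv("dentist_studies.csv")   # columns: study_weeks, participation_rate
rho, p = spearmanr(studies["study_weeks"], studies["participation_rate"])
print(f"Spearman r = {rho:.2f}, p = {p:.3f}")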
Authors: Leon T. De Beer, Wilmar B. Schaufeli
Abstract: Evaluation & the Health Professions, Ahead of Print. Some consider the burnout label to be controversial, even calling for the abandonment of the term in its entirety. In this communication, we argue for the pragmatic utility of the burnout paradigm from a utilitarian perspective, which advocates the greatest good for the greatest number of employees in organisations. We first distinguish between mild work-related burnout complaints and more severe burnout that can be identified in some contexts. We address the classification of burnout as an ‘occupational phenomenon’ by the World Health Organization and its ambiguous status in the ICD-11, highlighting the challenge of universally diagnosing burnout as a condition. We argue that a purely clinical approach might be too reactive, as it normally only identifies employees with a diagnosable condition. We posit that early detection of burnout through valid assessment can identify struggling employees who do not yet have a diagnosable condition. This proactive approach can help prevent escalation into mental health crises and is more sensible for organisations in terms of effectiveness and employee retention.
Citation: Evaluation & the Health Professions
PubDate: 2024-06-01T01:14:09Z
DOI: 10.1177/01632787241259032
Authors: Elsa Tirado-Durán, Laura Ivonne Jiménez-Rodríguez, Marisol Castañeda-Franco, Mariana Jiménez-Tirado, Elizabeth W. Twamley, Ana Fresán-Orellana, María Yoldi-Negrete
Abstract: Evaluation & the Health Professions, Ahead of Print. Cognitive deficits play an important role in Bipolar Disorder (BPD). The Cognitive Problems and Strategies Assessment (CPSA) is a measure that evaluates the patient’s perception of cognitive difficulties and the spontaneous use of compensatory strategies, and could thus have potential utility for clinical practice in patients with BPD. Our aim was to determine the validity and reliability of the CPSA in BPD. Ninety-three BPD outpatients and 90 controls completed the Assessment of Problems with Thinking and Memory (APTM) questionnaire and the Assessment of Memory and Thinking Strategies (AMTS) questionnaire, which constitute the CPSA; the Cognitive Complaints in Bipolar Disorder Rating Assessment (COBRA), as a measure of convergent validity; and general sociodemographic data. Cronbach’s alpha coefficient, Spearman’s correlation coefficient, and independent-samples t tests were used for internal consistency, convergent validity, and discriminant validity. The APTM had a Cronbach’s alpha coefficient of .93 and the AMTS of .90. The COBRA score and the APTM were significantly correlated. BPD patients exhibited higher scores on the APTM and lower scores on the AMTS than controls. The present instrument enriches the clinician’s repertoire for rapid and inexpensive cognitive evaluation in BPD.
Citation: Evaluation & the Health Professions
PubDate: 2024-05-10T07:45:04Z
DOI: 10.1177/01632787241253021
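A minimal sketch of the discriminant validity comparison named in this abstract: an independent-samples t test of APTM scores between patient and control groups, using SciPy. It is not the study's code; the file, group labels, and column names are hypothetical.

import pandas as pd
from scipy.stats import ttest_ind

# Hypothetical scores: one row per participant, with a group label and APTM total
df = pd.read_csv("cpsa_scores.csv")            # columns: group ("BPD"/"control"), aptm_total
bpd = df.loc[df["group"] == "BPD", "aptm_total"]
ctrl = df.loc[df["group"] == "control", "aptm_total"]
t, p = ttest_ind(bpd, ctrl)
print(f"t = {t:.2f}, p = {p:.3f}")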
Authors: Tyler B. Mason, Jeremy C. Morales, Alex Smith, Kathryn E. Smith
Abstract: Evaluation & the Health Professions, Ahead of Print. Ecological momentary assessment (EMA) of binge-eating symptoms has deepened our understanding of eating disorders. However, there has been a lack of attention to the psychometrics of EMA binge-eating symptom measures. This paper evaluated the psychometric properties of a four-item binge-eating symptom measure, including its multilevel factor structure, reliability, and convergent validity. Forty-nine adults with binge-eating disorder and/or food addiction completed baseline questionnaires and a 10-day EMA protocol. During EMA, participants completed assessments of eating episodes, including four binge-eating symptom items. Analyses included multilevel exploratory factor analysis, computation of omega and intraclass correlation coefficients, and multilevel structural equation models of associations between contextual factors and binge-eating symptoms. A single within-subject factor solution fit the data and showed good multilevel reliability and adequate within-subjects variability. EMA binge-eating symptoms were associated with baseline binge-eating measures as well as relevant EMA eating characteristics, including greater unhealthful food and drink intake, higher perceived taste of food, lower likelihood that eating was planned, lower likelihood of eating at work/school and other locations, and greater likelihood of eating at restaurants compared to home. In conclusion, the study findings support the psychometrics of a four-item, one-factor EMA measure of binge-eating symptoms.
Citation: Evaluation & the Health Professions
PubDate: 2024-04-27T02:44:55Z
DOI: 10.1177/01632787241249500
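A minimal sketch of one of the reliability indices mentioned here: an intraclass correlation for repeated EMA ratings, estimated from a random-intercept mixed model with statsmodels. It is not the study's multilevel model or data; the file and variable names are hypothetical.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical EMA data: many rows per participant, one momentary symptom score per row
ema = pd.read_csv("ema_ratings.csv")            # columns: participant_id, symptom_score
model = smf.mixedlm("symptom_score ~ 1", data=ema, groups="participant_id").fit()

between_var = float(model.cov_re.iloc[0, 0])    # between-person (random intercept) variance
within_var = model.scale                        # residual (within-person) variance
icc = between_var / (between_var + within_var)  # share of variance between persons
print(f"ICC = {icc:.2f}")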
Authors: Wenwen Kong, Minmin Ren, Hui Wang, Xiangjie Sun, Danjun Feng
Abstract: Evaluation & the Health Professions, Ahead of Print. This study aimed to develop and validate a new scale to measure the health problem prevention and control strategies employed by medical rescuers fighting epidemics. In Study I, a qualitative study, a focus group discussion, and an expert panel review were conducted to generate items that capture components of prevention and control strategies. In Study II, exploratory factor analysis was used to examine the scale’s structure. In Study III, the scale’s validity and reliability were assessed via confirmatory factor analysis, average variance extracted, composite reliability, and Cronbach’s α. Data analysis was performed using NVivo 12.0, SPSS 25.0, and Amos 23.0. The final scale was divided into three subscales (comprising 5 factors and 18 items on the Before Medical Rescue subscale, 6 factors and 28 items on the During Medical Rescue subscale, and 4 factors and 14 items on the After Medical Rescue subscale). The scale has excellent validity and reliability and can be used to measure the health problem prevention and control strategies of medical rescuers fighting epidemics.
Citation: Evaluation & the Health Professions
PubDate: 2024-04-10T03:59:10Z
DOI: 10.1177/01632787241246130
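A minimal sketch of two of the indices named in Study III, average variance extracted (AVE) and composite reliability (CR), computed from standardized factor loadings with the usual formulas. The loading values below are placeholders, not results from this scale.

import numpy as np

# Hypothetical standardized loadings for the items of one factor
loadings = np.array([0.72, 0.68, 0.81, 0.75, 0.70])

ave = np.mean(loadings ** 2)                    # AVE: mean of squared standardized loadings
error_var = 1 - loadings ** 2                   # item error variances
cr = loadings.sum() ** 2 / (loadings.sum() ** 2 + error_var.sum())   # composite reliability
print(f"AVE = {ave:.2f}, CR = {cr:.2f}")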
Authors: Olga Riklikienė, Gabija Jarašiūnaitė-Fedosejeva, Ernesta Sakalauskienė, Žydrūnė Luneckaitė, Susan Ayers
Abstract: Evaluation & the Health Professions, Ahead of Print. The childbirth experience and birth-related trauma are influenced by various factors, including country, healthcare system, a woman’s history of traumatic experiences, and the study’s design and instruments. This study aimed to validate the City Birth Trauma Scale for Lithuanian women post-childbirth. Using a descriptive, cross-sectional survey with a nonprobability sample of 794 women who gave birth in 2020–2021, the study found good validity and reliability and reported the prevalence of birth-related stress symptoms. A bifactor model, consisting of a general birth trauma factor and two specific factors for birth-related symptoms and general symptoms of PTSD, showed the best model fit. The Lithuanian version of the City Birth Trauma Scale can be effectively used in research and clinical practice to identify birth-related trauma symptoms in women after giving birth.
Citation: Evaluation & the Health Professions
PubDate: 2024-03-13T06:44:27Z
DOI: 10.1177/01632787241239339
Authors: Samia Amin, Kylie Uyeda, Ian Pagano, Kayzel R. Tabangcura, Rachel Taketa, Crissy Terawaki Kawamoto, Pallav Pokhrel
Abstract: Evaluation & the Health Professions, Ahead of Print. This study investigated the potential of Artificial Intelligence-powered Virtual Assistants (VAs) such as Amazon Alexa, Apple Siri, and Google Assistant as tools to help individuals seeking information about Nicotine Replacement Treatment (NRT) for smoking cessation. The researchers asked 40 NRT-related questions to each of the 3 VAs and evaluated the responses for voice recognition. The study used a cross-sectional mixed-method design with a total sample size of 360 responses. Inter-rater reliability and differences between VAs’ responses were examined using SAS software, and qualitative assessments were conducted using NVivo software. Google Assistant achieved 100% voice recognition for NRT-related questions, followed by Apple Siri at 97.5% and Amazon Alexa at 83.3%. Statistically significant differences were found between the responses of Amazon Alexa relative to both Google Assistant and Apple Siri. Researcher 1’s ratings significantly differed from Researcher 2’s (p = .001), but not from Researcher 3’s (p = .11). Virtual Assistants occasionally struggled to understand the context or nuances of questions, lacked in-depth information in their responses, and provided generic or unrelated responses. Virtual Assistants have the potential to be incorporated into smoking cessation interventions and tobacco control initiatives, contingent upon improving their competencies.
Citation: Evaluation & the Health Professions
PubDate: 2024-02-26T11:27:31Z
DOI: 10.1177/01632787241235689
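A minimal sketch of an inter-rater agreement check across three raters of the kind described in this abstract, using Fleiss' kappa from statsmodels rather than the study's SAS analysis. The file and column names, and the use of ordinal integer scores, are assumptions.

import pandas as pd
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Hypothetical ratings: one row per VA response, one column of ordinal scores per researcher
ratings = pd.read_csv("va_response_ratings.csv")        # columns: rater1, rater2, rater3
table, _ = aggregate_raters(ratings[["rater1", "rater2", "rater3"]].to_numpy())
kappa = fleiss_kappa(table, method="fleiss")
print(f"Fleiss' kappa = {kappa:.2f}")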