Authors: Roberts, Laura Weiss
Abstract: No abstract available
PubDate: Sun, 01 May 2022 00:00:00 GMT
Authors: Yang, Alan Z.
Abstract: No abstract available
PubDate: Sun, 01 May 2022 00:00:00 GMT
Authors: Cheloff, Abraham Z.; Bharadwa, Sonya
Abstract: No abstract available
PubDate: Sun, 01 May 2022 00:00:00 GMT
Authors: Kumar, Apoorva; Umasankar, Deekshitha; Shiatis, Vishal; Sidiku, Febisayo
Abstract: No abstract available
PubDate: Sun, 01 May 2022 00:00:00 GMT
Authors: Abou-Hanna, Jacob J.; Kolars, Joseph C.
Abstract: No abstract available
PubDate: Sun, 01 May 2022 00:00:00 GMT
Authors: Levy, Kenneth H.; Ahmed, Adham
Abstract: No abstract available
PubDate: Sun, 01 May 2022 00:00:00 GMT
Authors: Gonnella, Joseph S.; Callahan, Clara A.; Erdmann, James B.; Veloski, J. Jon; Markle, Ronald A.; Hojat, Mohammadreza
Abstract: No abstract available
PubDate: Sun, 01 May 2022 00:00:00 GMT
Authors: Khan, Huda; Smith, Eleanor E.A.; Reusch, Ryan T.
Abstract: No abstract available
PubDate: Sun, 01 May 2022 00:00:00 GMT
Authors: Kinder, Florence; Byrne, Matthew H.V.
Abstract: No abstract available
PubDate: Sun, 01 May 2022 00:00:00 GMT
Authors: Hassan, Shahzeb; Shlobin, Nathan A.; Mahmoud, Ali
Abstract: No abstract available
PubDate: Sun, 01 May 2022 00:00:00 GMT
Authors: Caretta-Weyer, Holly A.
Abstract: Residency application numbers have skyrocketed in the last decade, and stakeholders have scrambled to identify and deploy methods of reducing the number of applications submitted to each program. These interventions have traditionally focused on the logistics of application submission and review, neglecting many of the drivers of overapplication. Implementing application caps, preference signaling as described by Pletcher and colleagues in this issue, or an early Match does not address applicants' fear of not matching, the lack of transparent data available for applicants to assess their alignment with a specific program, or issues of inequity in the residency selection process. Now is the time to reconsider the residency selection process itself.
PubDate: Sun, 01 May 2022 00:00:00 GMT
Authors: Harris, Toi Blakley; Jacobs, Negar N.; Fuqua, Chantel F.; Lyness, Jeffrey M.; Smith, Patrick O.; Poll-Hunter, Norma I.; Piggott, Cleveland; Monroe, Alicia D.
Abstract: The Association of American Medical Colleges (AAMC) in 2007 developed the Holistic Review Framework for medical school admissions to increase mission-aligned student diversity. This approach balances an applicant's experiences, attributes, and metrics during the screening, interview, and selection processes. Faculty recruitment provides its own set of challenges, and there is persistent underrepresentation of certain racial and ethnic minority groups and women in faculty and leadership positions in U.S. academic health centers (AHCs). In 2019, the AAMC initiated a pilot program to adapt and implement the framework for use in faculty recruitment at AHCs. In this Invited Commentary, the authors describe the pilot implementation of the Holistic Review Framework for Faculty Recruitment and Retention and share lessons learned to date.
PubDate: Sun, 01 May 2022 00:00:00 GMT
Authors: Park, You Jeong
Abstract: No abstract available
PubDate: Sun, 01 May 2022 00:00:00 GMT
Authors: Wilson, L. Tamara; Milliken, Lindsay; Cagande, Consuelo; Stewart, Colin
Abstract: In May 2020, the Coalition for Physician Accountability's Work Group on Medical Students in the Class of 2021 Moving Across Institutions for Post Graduate Training (WG) released its final report and recommendations. These recommendations pertain to away rotations, virtual interviews, the Electronic Residency Application Service opening for programs and the overall residency timeline, and general communications; they attempt to provide clarity and level the playing field during the 2020–2021 residency application cycle. The WG's aims include promoting professional accountability by improving the quality, efficiency, and continuity of the education, training, and assessment of physicians. The authors argue that the first 3 WG recommendations may disproportionately impact candidates from historically excluded and underrepresented groups in medicine (HEURGMs), may affect an institution's ability to ensure equity in the selection of residency applicants, and thus warrant further consideration.
PubDate: Sun, 01 May 2022 00:00:00 GMT
Authors: Phillips, Robert L. Jr; George, Brian C.; Holmboe, Eric S.; Bazemore, Andrew W.; Westfall, John M.; Bitton, Asaf
Abstract: The graduate medical education (GME) system is heavily subsidized by the public in return for producing physicians who meet society's needs. Under the terms of this implicit social contract, decisions about how this funding is allocated are deferred to the individual training sites. Institutions receiving public funding face potential conflicts of interest, which have at times prioritized institutional purposes and needs over societal needs, highlighting that there is little public accountability for how such funding is used. The cost and institutional burden of assessing many fundamental GME outcomes, such as specialty, geographic physician distribution, training-imprinted cost behaviors, and populations served, could be mitigated because data sources and methods for assessing GME outcomes and guiding training improvement already exist. This new capacity to assess system-level outcomes could help institutions and policymakers strategically address the greatest public needs. Measurement of educational outcomes can also be used to guide training improvement at every level of the educational system (i.e., the individual trainee, individual teaching institution, and collective GME system levels). There are good examples of institutions, states, and training consortia that are already assessing and using GME outcomes in these ways. The ultimate outcome could be a GME system that better meets the needs of society and better honors what is now only an implicit social contract.
PubDate: Sun, 01 May 2022 00:00:00 GMT
Authors: Kumagai, Arno K.
Abstract: Discomfort is a constant presence in the practice of medicine and an oft-ignored feature of medical education. Nonetheless, if approached with thoughtfulness, patience, and understanding, discomfort may play a critical role in the education of physicians who practice with excellence, compassion, and justice. Taking Plato's notion of aporia (a moment of discomfort, perplexity, or impasse) as a starting point, the author follows the meandering path of aporia through Western philosophy and educational theory to argue for the importance of discomfort in opening up and orienting perspectives toward just and humanistic practice. Practical applications of this approach include problem-posing questions (from the work of Brazilian education theorist Paulo Freire), exercises to "make strange" beliefs and assumptions that are taken for granted, and the use of stories, especially stories without endings, all of which may prompt reflection and dialogical exchange. Framing this type of teaching and learning in Russian psychologist L.S. Vygotsky's theories of development, the author proposes that mentorship and dialogical interactions may help learners to navigate through moments of discomfort and uncertainty and extend the edge of learning. This approach may give birth to a zone of proximal development that is enriched with explorations of self, others, and the world.
PubDate: Sun, 01 May 2022 00:00:00 GMT
Authors: Gonzalo, Jed D.; Wolpaw, Daniel R.; Cooney, Robert; Mazotti, Lindsay; Reilly, James B.; Wolpaw, Terry
Abstract: Medical education is increasingly recognizing the importance of the systems-based practice (SBP) competency in the emerging 21st-century U.S. health care landscape. In the wake of data documenting insufficiencies in care delivery, notably in patient safety and health care disparities, the Accreditation Council for Graduate Medical Education created the SBP competency to address gaps in health outcomes and facilitate the education of trainees to better meet the needs of patients. Despite the introduction of SBP over 20 years ago, efforts to realize its potential have been incomplete and fragmented. Several challenges exist, including difficulty in operationalizing and evaluating SBP in current clinical learning environments. This inconsistent evolution of SBP has compromised the professional development of physicians who are increasingly expected to advance systems of care and actively contribute to improving patient outcomes, patient and care team experience, and costs of care. The authors prioritize 5 areas of focus necessary to further evolve SBP: comprehensive systems-based learning content, a professional development continuum, teaching and assessment methods, clinical learning environments in which SBP is learned and practiced, and professional identity as systems citizens. Accelerating the evolution of SBP in these 5 focus areas will require health system leaders and educators to embrace complexity with a systems thinking mindset, use coproduction between sponsoring health systems and education programs, create new roles to drive alignment of system and educational goals, and use design thinking to propel improvement efforts. The evolution of SBP is essential to cultivate the next generation of collaboratively effective, systems-minded professionals and improve patient outcomes.
PubDate: Sun, 01 May 2022 00:00:00 GMT
Authors: Almeida, Marcela
Abstract: No abstract available
PubDate: Sun, 01 May 2022 00:00:00 GMT
Authors: Pletcher, Steven D.; Chang, C.W. David; Thorne, Marc C.; Malekzadeh, Sonya
Abstract:
Problem: In the 2021 residency application cycle, the average otolaryngology applicant applied to more than half of programs. Increasing application numbers make it difficult for applicants to stand out to programs of interest and for programs to identify applicants with sincere interest.
Approach: As part of the 2021 Match, otolaryngology applicants could participate in a preference signaling process, signaling up to 5 programs of particular interest at the time of application submission. Programs received a list of applicants who submitted signals to consider during interview offer deliberations. Applicants and program directors completed surveys to evaluate the signaling process and assess the impact of signals on interview offers.
Outcomes: All otolaryngology residency programs participated in the signaling process. In total, 611 students submitted applications for otolaryngology residency programs, 559 applicants submitted a Match list including an otolaryngology program, and 558 applicants participated in the signaling process. The survey response rate was 42% for applicants (n = 233) and 52% for program directors (n = 62). The rate of receiving an interview offer was significantly higher from signaled programs (58%) than from both nonsignaled programs (14%; P < .001) and the comparative nonsignal program (23%; P < .001), i.e., the program an applicant would have signaled given a sixth signal. This impact was consistent across the spectrum of applicant competitiveness. Applicants (178, 77%) and program directors (53, 91%) strongly favored continuing the program.
Next Steps: Many specialties face high residency application numbers. Programs have difficulty identifying applicants with sincere interest, and applicants have limited opportunities to identify programs of particular interest. Applicants to these specialties may benefit from a preference signaling process like that in otolaryngology. Additional evaluation is needed to determine the impact of signals across racial and demographic lines and to validate these early outcomes.
PubDate: Sun, 01 May 2022 00:00:00 GMT
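The comparison reported above (a 58% interview-offer rate from signaled programs versus 14% from nonsignaled programs) is, at its core, a two-proportion test. A minimal sketch, using hypothetical offer counts chosen to echo those rates rather than the study's raw data:

```python
# Two-proportion z-test on interview-offer rates, signaled vs nonsignaled.
# All counts below are hypothetical illustrations, not the study's data.
from statsmodels.stats.proportion import proportions_ztest

offers = [290, 1120]          # interview offers: signaled, nonsignaled (hypothetical)
applications = [500, 8000]    # applications in each group (hypothetical)

z_stat, p_value = proportions_ztest(count=offers, nobs=applications)
print(f"signaled rate = {offers[0] / applications[0]:.0%}, "
      f"nonsignaled rate = {offers[1] / applications[1]:.0%}, "
      f"z = {z_stat:.2f}, p = {p_value:.2g}")
```

The published analysis may have used a different test (e.g., chi-square or a model accounting for applicant clustering); this sketch only illustrates the basic rate comparison.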
Authors: Schultz, Karen; McGregor, Tara; Pincock, Rob; Nichols, Kathleen; Jain, Seema; Pariag, Joel
Abstract:
Problem: Accurate self-assessment is a critical skill residents must develop to become safe, adaptive clinicians upon graduation. Physicians need to be able to identify and fill knowledge and skill gaps to keep pace with the rapid expansion of medical knowledge and with novel, unanticipated medical issues. Residency training to date has not consistently focused on building these overarching skills, nor have the burgeoning assessment data that competency-based medical education (CBME) affords been used beyond their initial intent of informing summative assessment decisions. Both are important missed opportunities.
Approach: The Queen's University Family Medicine Program adopted CBME in 2010. In 2011, it added the capacity for residents to electronically self-assess their daily performance, with preceptors reviewing and modifying as needed before submitting. In 2018, it designed software to report discordance between residents' self-assessments and preceptors' assessments of performance.
Outcomes: From 2011 to 2019, 56,585 field notes were submitted, 11,429 of them by residents; 28% of those (3,200/11,429) showed discordance between residents' and preceptors' performance assessments. When discordant, residents assessed their performance as less competent than their preceptor did (undercalled) 73% of the time (2,336/3,200 field notes). Among the 864 field notes (27% of the 3,200 discordant notes) in which residents rated their performance higher than their preceptor did (overcalled, involving 162/1,120 [14%] of residents), 6 residents overcalled performance to a dangerous extent (2 or 3 levels of supervision higher than what their supervisors assessed) and 26 repeatedly (more than 5 times) overcalled their level of performance by 1 supervisory level.
Next Steps: Inaccurate self-assessment (both overcalling and undercalling performance) has negative consequences, and awareness is a first step in addressing it. Discrepancy reports will be used during regular academic reviews with residents to discuss the nature, degree, and frequency of discrepancies, with the intent of fostering improved self-assessment of performance.
PubDate: Sun, 01 May 2022 00:00:00 GMT
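The discordance reporting described above reduces to comparing two ordinal ratings per field note: the resident's self-assessed supervision level and the preceptor's. A minimal sketch of that classification logic, with hypothetical field-note data and a hypothetical level scale (this is not the Queen's University software):

```python
# Classify field notes as concordant, undercalled, or overcalled by
# comparing resident vs preceptor supervision-level ratings.
# Data and the ordinal scale are hypothetical.
from collections import Counter

# (resident_level, preceptor_level) per field note, higher = more independent
field_notes = [(3, 3), (2, 3), (4, 3), (2, 4), (5, 3), (3, 3), (1, 3)]

counts = Counter()
for resident, preceptor in field_notes:
    gap = resident - preceptor
    if gap == 0:
        counts["concordant"] += 1
    elif gap < 0:
        counts["undercalled"] += 1             # resident rated self lower
    elif gap == 1:
        counts["overcalled by 1 level"] += 1   # modest overcall
    else:
        counts["overcalled by 2+ levels"] += 1 # the dangerous pattern flagged above

total = len(field_notes)
for label, n in counts.items():
    print(f"{label}: {n} ({n / total:.0%})")
```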
Authors: Spach, Natalie C.
Abstract: No abstract available
PubDate: Sun, 01 May 2022 00:00:00 GMT
Authors: Rich, Jessica V.; Luhanga, Ulemu; Fostaty Young, Sue; Wagner, Natalie; Dagnone, J. Damon; Chamberlain, Sue; McEwen, Laura A.
Abstract:
Problem: Assessing the development and achievement of competence requires multiple formative and summative assessment strategies and the coordinated efforts of trainees and faculty (who often serve in multiple roles, such as academic advisors, program directors, and competency committee members). Operationalizing programmatic assessment (PA) in competency-based medical education (CBME) requires comprehensive practice guidelines, written in accessible language with descriptions of stakeholder activities, to move assessment theory into practice and to help guide the trainees and faculty who enact PA.
Approach: Informed by the Appraisal of Guidelines for Research and Evaluation II (AGREE II) framework, the authors used a multiphase, multimethod approach to develop the CBME Programmatic Assessment Practice Guidelines (PA Guidelines). The 9 guidelines are organized by phases of assessment and include descriptions of stakeholder activities. A user guide provides a glossary of key terms and summarizes how the guidelines can be used by different stakeholder groups across postgraduate medical education (PGME) contexts. The 4 phases of guideline development, including internal stakeholder consultations and external expert review, occurred between August 2016 and March 2020.
Outcomes: Local stakeholders and external experts agreed that the PA Guidelines hold potential for guiding initial operationalization and ongoing refinement of PA in CBME by individual stakeholders, residency programs, and PGME institutions. Since July 2020, the PA Guidelines have been used at Queen's University to inform faculty and resident development initiatives, including online CBME modules for faculty, workshops for academic advisors/competence committee members, and a guide that supports incoming residents' transition to CBME.
Next Steps: Research exploring the use of the PA Guidelines and user guide in multiple programs and institutions will gather further evidence of their acceptability and utility for guiding operationalization of PA in different contexts.
PubDate: Sun, 01 May 2022 00:00:00 GMT
Authors: Garson, Arthur Jr
Abstract: No abstract available
PubDate: Sun, 01 May 2022 00:00:00 GMT
Authors: Foohey, Sarah; Nagji, Alim; Yilmaz, Yusuf; Sibbald, Matthew; Monteiro, Sandra; Chan, Teresa M.
Abstract:
Problem: Physical distancing restrictions during the COVID-19 pandemic led many medical educators to move from in-person to online teaching. This report describes the Virtual Resus Room (VRR), a free, novel, open-access resource for running collaborative online simulations.
Approach: The lead author created the VRR in May 2020 to give learners the opportunity to rehearse their crisis resource management skills by working as a team to complete virtual tasks. The VRR uses Google Slides to link participants to the virtual environment and Zoom to link participants to each other. Students and facilitators in the emergency medicine clerkship at McMaster University used the VRR to run 2 cases between June and August 2020. Students and facilitators completed a postsession survey to assess usability and acceptability, applicability for learning or teaching, and fidelity. In addition, students took a knowledge test pre- and postsession.
Outcomes: Forty-six students and 11 facilitators completed the postsession surveys. Facilitators and students rated the VRR's usability and acceptability, applicability for learning and teaching, and fidelity highly. Students showed a significant improvement in their postsession knowledge scores (mean = 89.06, standard deviation [SD] = 9.56) compared with their presession scores (mean = 71.17, SD = 15.77; t(34) = 7.28, P < .001), with a large effect size (Cohen's d = 1.23). Two perceived learning outcomes were identified: content learning and communication skills development. The total time spent (in minutes) facilitating VRR simulations (mean = 119, SD = 36) was significantly lower than the time spent leading in-person simulations (mean = 181, SD = 58; U = 20.50, P < .008).
Next Steps: Next steps will include expanding the evaluation of the VRR to include participants from additional learner levels, from varying sites, and from other health professions.
PubDate: Sun, 01 May 2022 00:00:00 GMT
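The knowledge-score analysis above is a paired pre/post t-test with a standardized effect size. A minimal sketch with hypothetical scores; note the authors may have computed Cohen's d differently (e.g., from pooled pre/post SDs rather than the SD of the differences):

```python
# Paired t-test on pre/post knowledge scores with Cohen's d for the
# effect size. Scores below are hypothetical, not the study's data.
import numpy as np
from scipy import stats

pre = np.array([70.0, 65, 80, 55, 75, 68, 72, 60, 85, 66])   # presession scores
post = np.array([88.0, 84, 95, 78, 92, 86, 90, 80, 97, 85])  # postsession scores

t_stat, p_value = stats.ttest_rel(post, pre)  # paired-samples t-test

# Cohen's d for paired data: mean difference / SD of the differences
diff = post - pre
cohens_d = diff.mean() / diff.std(ddof=1)

print(f"t({len(diff) - 1}) = {t_stat:.2f}, p = {p_value:.4f}, d = {cohens_d:.2f}")
```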
Authors: Maxwell, Steve A.; Fuchs-Young, Robin; Wells, Gregg B.; Kapler, Geoffrey M.; Conover, Gloria M.; Green, Sheila; Pepper, Catherine; Gastel, Barbara; Huston, David P.
Abstract:
Problem: Medical students find it challenging to understand and communicate medical advances driven by basic research, and to acquire foundational skills in critically appraising and communicating the translational research literature that affects patient care.
Approach: The authors developed a mandatory course, Medical Student Grand Rounds (MSGR), from 2012 to 2018 at Texas A&M University College of Medicine to address this problem. MSGR trains first-year students to find, critically assess, and present primary research literature on self-selected, medically relevant topics. With basic science faculty mentoring, students completed milestones culminating in oral presentations. Students learned to search literature databases and then used these skills to choose a clinical subject. They outlined the clinical subject area's background and, based on deeper evaluation of the primary research literature, framed a mechanistic research topic within a clinical problem. "Mechanistic" was defined in this context as providing experimental evidence that explained the "how" and "why" underlying clinical manifestations of a disease. Students received evaluations and feedback from mentors on discerning the quality of information and synthesizing information on their topics. Finally, students prepared and gave oral presentations emphasizing the primary literature on their topics.
Outcomes: In the early stages of course development, students had difficulty critically assessing and evaluating research literature. Mentored training by research-oriented faculty, however, dramatically improved student perceptions of the MSGR experience. Mentoring helped students develop skills to synthesize ideas from the basic research literature. According to grades and self-evaluations, students increased their proficiency in finding and interpreting research articles, preparing and delivering presentations, and understanding links among basic research, translational research, and clinical applications.
Next Steps: The authors plan to survey fourth-year students who have completed MSGR about their perceptions of the course in light of their clinical experiences in medical school, to guide future refinements.
PubDate: Sun, 01 May 2022 00:00:00 GMT
Authors: Blanco, Maria; Prunuske, Jacob; DiCorcia, Mark; Learman, Lee A.; Mutcheson, Brock; Huang, Grace C.
Abstract:
Purpose: Reporting guidelines assist authors in conducting and describing their research in alignment with evidence-based and expert-determined standards. However, published research-oriented guidelines do not capture all of the components that must be present in descriptions of educational innovations in health professions education. The authors aimed to create guidelines for educational innovations in curriculum development that would be easy for early-career educators to use, support reporting necessary details, and promote educational scholarship.
Method: Beginning in 2017, the authors systematically developed a reporting checklist for educational innovations in curriculum development, called Defined Criteria To Report INnovations in Education (DoCTRINE), and collected validity evidence for its use according to the 4 inferences of Kane's framework. They derived the items using a modified Delphi method, followed by pilot testing, cognitive interviewing, and interrater reliability testing. In May–November 2019, they implemented DoCTRINE for authors submitting to MedEdPORTAL, half of whom were randomized to receive the checklist (intervention group). The authors scored manuscripts using DoCTRINE while blinded to group assignment, and they collected data on final editorial decisions.
Results: The final DoCTRINE checklist consists of 19 items, categorized into 5 components: introduction, curriculum development, curriculum implementation, results, and discussion. The overall interrater agreement was 0.91. Among the 108 manuscripts submitted to MedEdPORTAL during the study period, the mean (SD) total score was higher for accepted than rejected submissions (16.9 [1.73] vs 15.7 [2.24], P = .006). There were no significant differences in DoCTRINE scores between the intervention group, who received the checklist, and the control group, who did not.
Conclusions: The authors developed DoCTRINE, using systematic approaches, for the scholarly reporting of educational innovations in curriculum development. This checklist may be a useful tool for supporting the publishing efforts of early-career faculty.
PubDate: Sun, 01 May 2022 00:00:00 GMT
Authors: Hansen, Matt; Harrod, Tabria; Bahr, Nathan; Schoonover, Amanda; Adams, Karen; Kornegay, Josh; Stenson, Amy; Ng, Vivienne; Plitt, Jennifer; Cooper, Dylan; Scott, Nicole; Chinai, Sneha; Johnson, Julia; Conlon, Lauren Weinberger; Salva, Catherine; Caretta-Weyer, Holly; Huynh, Trang; Jones, David; Jorda, Katherine; Lo, Jamie; Mayersak, Ryanne; Paré, Emmanuelle; Hughes, Kate; Ahmed, Rami; Patel, Soha; Tsao, Suzana; Wang, Eileen; Ogburn, Tony; Guise, Jeanne-Marie
Abstract:
Purpose: To determine whether a brief leadership curriculum including high-fidelity simulation can improve leadership skills among resident physicians.
Method: This was a double-blind, randomized controlled trial among obstetrics–gynecology and emergency medicine (EM) residents across 5 academic medical centers in different geographic areas of the United States, 2015–2017. Participants were assigned to 1 of 3 study arms: the Leadership Education Advanced During Simulation (LEADS) curriculum, a shortened Team Strategies and Tools to Enhance Performance and Patient Safety (TeamSTEPPS) curriculum, or active controls (no leadership curriculum). Active controls were recruited from a separate site and not randomized, to limit any unintentional introduction of materials from the leadership curricula. The LEADS curriculum was developed in partnership with the Council on Resident Education in Obstetrics and Gynecology and the Council of Residency Directors in Emergency Medicine as a novel way to provide a leadership toolkit. Both LEADS and the abbreviated TeamSTEPPS were designed as six 10-minute interactive web-based modules. The primary outcome of interest was the leadership performance score from the validated Clinical Teamwork Scale instrument, measured during standardized high-fidelity simulation scenarios. Secondary outcomes were 9 key components of leadership from the detailed leadership evaluation, measured on 5-point Likert scales. Both outcomes were rated by a blinded clinical video reviewer.
Results: One hundred ten obstetrics–gynecology and EM residents participated in this 2-year trial. Participants in both LEADS and TeamSTEPPS had statistically significant improvement in leadership scores, from the "average" to the "good" range, both immediately and at the 6-month follow-up, while controls remained unchanged in the "average" category throughout the study. There were no differences between the LEADS and TeamSTEPPS curricula with respect to the primary outcome.
Conclusions: Residents who participated in a brief structured leadership training intervention had improved leadership skills that were maintained at 6-month follow-up.
PubDate: Sun, 01 May 2022 00:00:00 GMT
Authors: Castanelli, Damian J.; Weller, Jennifer M.; Molloy, Elizabeth; Bearman, Margaret
Abstract:
Purpose: In competency-based medical education, workplace-based assessment provides trainees with an opportunity for guidance and supervisors the opportunity to judge the trainees' clinical practice. Learning from assessment is enhanced when trainees reveal their thinking and are open to critique, which requires trust in the assessor. If supervisors knew more about how trainees come to trust them in workplace-based assessment, they could better engender trainee trust and improve trainees' learning experience.
Method: From August 2018 to September 2019, semistructured interviews were conducted with 17 postgraduate anesthesia trainees across Australia and New Zealand. The transcripts were analyzed using constructivist grounded theory methods, sensitized by a sociocultural view of learning informed by Wenger's communities of practice theory.
Results: Participants described a continuum from a necessary initial trust to an experience-informed dynamic trust. Trainees assumed initial trust in supervisors based on accreditation, reputation, and a perceived obligation of trustworthiness inherent in the supervisor's role. With experience and time, trainees' trust evolved based on supervisor actions. Deeper levels of trainee trust arose in response to perceived supervisor investment and allowed trainees to devote more emotional and cognitive resources to patient care and learning rather than impression management. Across the continuum from initial trust to experience-informed trust, trainees made rapid trust judgments that were not preceded by conscious deliberation; instead, they represented a learned "feel for the game."
Conclusions: While other factors are involved, our results indicate that the trainee behavior observed in workplace-based assessment is a product of supervisor invitation. Supervisor trustworthiness and investment in trainee development invite trainees to work and present in authentic ways in workplace-based assessment. This authentic engagement, where learners "show themselves" to supervisors and take risks, creates assessment for learning.
PubDate: Sun, 01 May 2022 00:00:00 GMT
Authors: Rassos, James; Ginsburg, Shiphra; Stalmeijer, Renée E.; Melvin, Lindsay J.
Abstract:
Purpose: With the introduction of competency-based medical education, senior residents have taken on a new, formalized role in completing assessments of their junior colleagues. However, no prior studies have explored the role of near-peer assessment within the context of entrustable professional activities (EPAs) and competency-based medical education. This study explored internal medicine residents' perceptions of near-peer feedback and assessment in the context of EPAs.
Method: Semistructured interviews were conducted from September 2019 to March 2020 with 16 internal medicine residents (8 first-year residents and 8 second- and third-year residents) at the University of Toronto, Toronto, Ontario, Canada. Interviews were conducted and coded iteratively within a constructivist grounded theory approach until sufficiency was reached.
Results: Senior residents noted a tension between their dual roles of coach and assessor when completing EPAs. Senior residents managed the relationship with junior residents so as not to upset the learner and potentially harm the team dynamic, leading to the documentation of often inflated EPA ratings. Junior residents found senior residents to be credible providers of feedback; however, they were reluctant to view senior residents as credible assessors.
Conclusions: Although EPAs have formalized moments of feedback, senior residents struggled to include constructive feedback comments, knowing that their assessment decisions may inform the overall summative decisions about their peers. As a result, EPA ratings were often inflated. The utility of having senior residents serve as assessors needs to be reexamined, because there is concern that this new role has taken away the benefits of having a senior resident act solely as a coach.
PubDate: Sun, 01 May 2022 00:00:00 GMT
Authors: Rubright, Jonathan D.; Jodoin, Michael; Woodward, Stephanie; Barone, Michael A.
Abstract:
Purpose: Previous studies have examined and identified demographic group score differences on United States Medical Licensing Examination (USMLE) Step examinations. It is necessary to explore potential etiologies of such differences to ensure fairness of examination use. Although score differences are largely explained by preceding academic variables, one potential concern is that item-level bias may be associated with remaining group score differences. The purpose of this 2019–2020 study was to statistically identify and qualitatively review USMLE Step 1 exam questions (items) using differential item functioning (DIF) methodology.
Method: Logistic regression DIF was used to identify and classify the effect size of DIF on Step 1 items meeting minimum sample size criteria. After using DIF to flag items statistically, subject matter expert (SME) review was used to identify potential reasons why items may have performed differently between racial and gender groups, including characteristics such as content, format, wording, context, or stimulus materials. USMLE SMEs reviewed items to identify the group difference they believed was present, if any; articulate a rationale behind the group difference; and determine whether that rationale would be considered construct relevant or construct irrelevant.
Results: All identified DIF rationales were relevant to the constructs being assessed and therefore did not reflect item bias. Where SME-generated rationales aligned with statistical differences (flags), they favored self-identified women on items tagged to women's health content categories and were judged to be construct relevant.
Conclusions: This study did not find evidence to support the hypothesis that group-level performance differences beyond those explained by prior academic performance variables are driven by item-level bias. Health professions examination programs have an obligation to assess for group differences and, when present, investigate to what extent, if any, measurement bias plays a role.
PubDate: Sun, 01 May 2022 00:00:00 GMT
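Logistic regression DIF, as used in this study, tests whether group membership predicts success on an item after conditioning on overall ability. A minimal sketch of the standard nested-model comparison for a single simulated item; this illustrates the general procedure, not USMLE's implementation or thresholds:

```python
# Logistic-regression DIF for one item: compare a base model (ability only)
# against a full model adding group and ability-x-group terms. A significant
# likelihood-ratio test plus a meaningful change in pseudo-R^2 flags DIF.
# All data are simulated; the item below has no true DIF.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
ability = rng.normal(size=n)                 # matching variable, e.g., total score
group = rng.integers(0, 2, size=n)           # simulated 0/1 demographic indicator
p_correct = 1 / (1 + np.exp(-0.8 * ability)) # item response probability
correct = rng.binomial(1, p_correct)

X_base = sm.add_constant(ability)
X_full = sm.add_constant(np.column_stack([ability, group, ability * group]))

base = sm.Logit(correct, X_base).fit(disp=0)
full = sm.Logit(correct, X_full).fit(disp=0)

lr_chi2 = 2 * (full.llf - base.llf)                # 2-df test: uniform + nonuniform DIF
delta_pseudo_r2 = full.prsquared - base.prsquared  # common DIF effect-size measure
print(f"LR chi2 = {lr_chi2:.2f}, delta pseudo-R2 = {delta_pseudo_r2:.4f}")
```

In practice the group coefficient alone captures uniform DIF and the interaction captures nonuniform DIF; operational programs pair the statistical flag with an effect-size cutoff before sending items to SME review.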
Authors: Russell, Frances M.; Zakeri, Bita; Herbert, Audrey; Ferre, Robinson M.; Leiser, Abraham; Wallach, Paul M.
Abstract:
Purpose: The primary aim of this study was to evaluate the current state of point-of-care ultrasound (POCUS) integration in undergraduate medical education (UME) at MD-granting medical schools in the United States.
Method: In 2020, 154 clinical ultrasound directors and curricular deans at MD-granting medical schools were surveyed. The 25-question survey collected data about school characteristics, barriers to POCUS training implementation, and POCUS curriculum details. Descriptive analysis was conducted using frequency and percentage distributions.
Results: One hundred twenty-two (79%) of 154 schools responded to the survey, of which 36 were multicampus. Sixty-nine (57%) schools had an approved POCUS curriculum, with 10 (8%) offering a longitudinal 4-year curriculum. For a majority of schools, POCUS instruction was required during the first year (86%) and second year (68%). Forty-two (61%) schools were teaching fundamentals, diagnostic ultrasound, and procedural ultrasound. One hundred fifteen (94%) schools identified barriers to implementing POCUS training in UME, including lack of trained faculty (63%), lack of time in current curricula (54%), and lack of equipment (44%). Seven (6%) schools identified no barriers.
Conclusions: Over half of the responding medical schools in the United States had integrated POCUS instruction into their UME curricula. Despite this, a very small portion had a longitudinal curriculum, and multiple barriers to implementation existed, the most common being a lack of trained faculty. The data from this study can be used by schools planning to add or expand POCUS instruction within their current curricula.
PubDate: Sun, 01 May 2022 00:00:00 GMT
Authors: Cook, David A.; Stephenson, Christopher R.; Pankratz, V. Shane; Wilkinson, John M.; Maloney, Stephen; Prokop, Larry J.; Foo, Jonathan
Abstract:
Purpose: Both overuse and underuse of clinician referrals can compromise high-value health care. The authors sought to systematically identify and synthesize published research examining associations between physician continuous professional development (CPD) and referral patterns.
Method: The authors searched MEDLINE, Embase, PsycInfo, and the Cochrane Database on April 23, 2020, for comparative studies evaluating CPD for practicing physicians and reporting physician referral outcomes. Two reviewers, working independently, screened all articles for inclusion. Two reviewers reviewed all included articles to extract information, including data on participants, educational interventions, study design, and outcomes (referral rate, intended direction of change, appropriateness of referral). Quantitative results were pooled using meta-analysis.
Results: Of 3,338 articles screened, 31 were included. These studies enrolled at least 14,458 physicians and reported 381,165 referral events. Among studies comparing CPD with no intervention, 17 studies with intent to increase referrals had a pooled risk ratio of 1.91 (95% confidence interval: 1.50, 2.44; P < .001), and 7 studies with intent to decrease referrals had a pooled risk ratio of 0.68 (95% confidence interval: 0.55, 0.83; P < .001). Five studies did not indicate the intended direction of change. Subgroup analyses revealed similarly favorable effects for specific instructional approaches (including lectures, small groups, Internet-based instruction, and audit/feedback) and for activities of varying duration. Four studies reported head-to-head comparisons of alternate CPD approaches, revealing no clear superiority for any approach. Seven studies adjudicated the appropriateness of referral, and 9 studies counted referrals that were actually completed (versus merely requested).
Conclusions: Although between-study differences are large, CPD is associated with statistically significant changes in patient referral rates in the intended direction of impact. There are few head-to-head comparisons of alternate CPD interventions using referrals as outcomes.
PubDate: Sun, 01 May 2022 00:00:00 GMT
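Pooled risk ratios like those reported above are conventionally computed on the log scale with inverse-variance weights. A minimal sketch of a DerSimonian-Laird random-effects pool over hypothetical per-study estimates (not the review's actual data, which may also have used a different estimator):

```python
# Random-effects meta-analysis of risk ratios on the log scale.
# Study estimates and standard errors below are hypothetical.
import numpy as np

rr = np.array([1.8, 2.4, 1.5, 2.0])      # per-study risk ratios (hypothetical)
se = np.array([0.20, 0.25, 0.15, 0.30])  # standard errors of log(RR) (hypothetical)
log_rr = np.log(rr)

# Fixed-effect inverse-variance weights and Cochran's Q heterogeneity statistic
w = 1 / se**2
fixed_mean = np.average(log_rr, weights=w)
q = np.sum(w * (log_rr - fixed_mean) ** 2)
df = len(rr) - 1

# DerSimonian-Laird between-study variance (floored at zero)
tau2 = max(0.0, (q - df) / (w.sum() - (w**2).sum() / w.sum()))

# Random-effects pooled estimate and 95% CI, back-transformed to the RR scale
w_re = 1 / (se**2 + tau2)
pooled = np.average(log_rr, weights=w_re)
se_pooled = np.sqrt(1 / w_re.sum())
low, high = np.exp(pooled - 1.96 * se_pooled), np.exp(pooled + 1.96 * se_pooled)
print(f"pooled RR = {np.exp(pooled):.2f} (95% CI {low:.2f}, {high:.2f})")
```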
Authors: Lavoie, Patrick; Lapierre, Alexandra; Maheu-Cadotte, Marc-André; Fontaine, Guillaume; Khetir, Imène; Bélisle, Marilou
Abstract:
Purpose: Simulation is often depicted as an effective tool for clinical decision-making education. Yet there is a paucity of data regarding transfer of learning related to clinical decision making following simulation-based education. The authors conducted a scoping review to map the literature regarding transfer of clinical decision-making learning outcomes following simulation-based education in nursing or medicine.
Method: Based on the Joanna Briggs Institute methodology, the authors searched 5 databases (CINAHL, ERIC, MEDLINE, PsycINFO, and Web of Science) in May 2020 for quantitative studies in which the clinical decision-making performance of nursing and medical students or professionals was assessed following simulation-based education. Data items were extracted and coded. Codes were organized and hierarchized into patterns to describe conceptualizations and conditions of transfer, as well as learning outcomes related to clinical decision making and assessment methods.
Results: From 5,969 unique records, 61 articles were included. Only 7 studies (11%) assessed transfer to clinical practice. In the remaining 54 studies (89%), transfer was exclusively assessed in simulations that often included one or more variations in simulation features (e.g., scenarios, modalities, duration, and learner roles; 50, 82%). Learners' clinical decision making, including data gathering, cue recognition, diagnoses, and/or management of clinical issues, was assessed using checklists, rubrics, and/or nontechnical skills ratings.
Conclusions: Research on simulation-based education has focused disproportionately on the transfer of learning from one simulation to another, and little evidence exists regarding transfer to clinical practice. The heterogeneity in conditions of transfer observed represents a substantial challenge in evaluating the effect of simulation-based education. The findings suggest that 3 dimensions of clinical decision-making performance are amenable to assessment (execution, accuracy, and speed) and that simulation-based learning related to clinical decision making is predominantly understood as a gain in generalizable skills that can be easily applied from one context to another.
PubDate: Sun, 01 May 2022 00:00:00 GMT
Authors: Dion, Vincent; St-Onge, Christina; Bartman, Ilona; Touchie, Claire; Pugh, Debra
Abstract:
Purpose: Progress testing is an increasingly popular form of assessment in which a comprehensive test is administered to learners repeatedly over time. To inform potential users, this scoping review aimed to document barriers, facilitators, and potential outcomes of the use of written progress tests in higher education.
Method: The authors followed Arksey and O'Malley's scoping review methodology to identify and summarize the literature on progress testing. They searched 6 databases (Academic Search Complete, CINAHL, ERIC, Education Source, MEDLINE, and PsycINFO) on 2 occasions (May 22, 2018, and April 21, 2020) and included articles written in English or French and pertaining to written progress tests in higher education. Two authors screened articles against the inclusion criteria (90% agreement); data extraction was then performed by pairs of authors. Using a snowball approach, the authors also screened additional articles identified from the included articles' reference lists. They completed a thematic analysis through an iterative process.
Results: A total of 104 articles were included. The majority of progress tests used a multiple-choice and/or true-or-false question format (95, 91.3%) and were administered 4 times a year (38, 36.5%). The most documented source of validity evidence was internal consistency (38, 36.5%). Four major themes were identified: (1) barriers and challenges to the implementation of progress testing (e.g., the need for additional resources); (2) established collaboration as a facilitator of progress testing implementation; (3) factors that increase the acceptance of progress testing (e.g., formative use); and (4) outcomes and consequences of progress test use (e.g., progress testing contributes to an increase in knowledge).
Conclusions: Progress testing appears to have a positive impact on learning, and there is significant validity evidence to support its use. Although progress testing is resource- and time-intensive, strategies such as collaboration with other institutions may facilitate its use.
PubDate: Sun, 01 May 2022 00:00:00 GMT
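Internal consistency, the most-reported source of validity evidence in the included studies, is typically quantified with Cronbach's alpha. A minimal sketch of the computation on simulated dichotomous progress-test responses (the data-generating model here is an assumption for illustration only):

```python
# Cronbach's alpha on a simulated examinee-by-item matrix of 0/1 responses.
# Responses are made internally consistent via a single simulated ability factor.
import numpy as np

rng = np.random.default_rng(1)
n_examinees, n_items = 200, 50

ability = rng.normal(size=(n_examinees, 1))          # one latent trait per examinee
difficulty = rng.normal(size=(1, n_items))           # one difficulty per item
p = 1 / (1 + np.exp(-(ability - difficulty)))        # logistic response probabilities
responses = rng.binomial(1, p).astype(float)         # examinees x items

# alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
k = n_items
item_variances = responses.var(axis=0, ddof=1).sum()
total_variance = responses.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_variances / total_variance)
print(f"Cronbach's alpha = {alpha:.2f}")
```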
Authors: Santen, Sally A.; Foster, Kenneth W.; Hemphill, Robin R.; Christner, Jenny; Mejicano, George
Abstract: No abstract available
PubDate: Sun, 01 May 2022 00:00:00 GMT