Authors: Jori N. Hall, Laura R. Peck
Pages: 156-157
Abstract: American Journal of Evaluation, Volume 43, Issue 2, Page 156-157, June 2022.
Citation: American Journal of Evaluation
PubDate: 2022-06-01T01:19:05Z
DOI: 10.1177/10982140221098626
Issue No: Vol. 43, No. 2 (2022)
Authors: Debra J. Rog
Pages: 301-303
Abstract: American Journal of Evaluation, Volume 43, Issue 2, Page 301-303, June 2022.
Citation: American Journal of Evaluation
PubDate: 2022-06-01T01:22:37Z
DOI: 10.1177/10982140221078747
Issue No: Vol. 43, No. 2 (2022)
Authors: Rebecca J. Macy, Amanda Eckhardt, Christopher J. Wretman, Ran Hu, Jeongsuk Kim, Xinyi Wang, Cindy Bombeeck
Abstract: American Journal of Evaluation, Ahead of Print. The increasing number of anti-trafficking organizations and funding for anti-trafficking services have greatly outpaced evaluative efforts, resulting in critical knowledge gaps, which have been underscored by recent recommendations for the development of greater evaluation capacity in the anti-trafficking field. In response to these calls, this paper reports on the development and feasibility testing of an evaluation protocol to generate practice-based evidence for an anti-trafficking transitional housing program. Guided by formative evaluation and evaluability frameworks, our practitioner-researcher team had two aims: (1) develop an evaluation protocol, and (2) test the protocol with a feasibility trial. To the best of our knowledge, this is one of only a few reports concerning anti-trafficking housing program evaluations, particularly one with many foreign-national survivors as evaluation participants. In addition to presenting evaluation findings, the team herein documents decisions and strategies related to conceptualizing, designing, and conducting the evaluation to offer approaches for future evaluations.
Citation: American Journal of Evaluation
PubDate: 2022-06-17T05:55:17Z
DOI: 10.1177/10982140211056913
Authors: Gregory Phillips, Dylan Felt, Esrea Perez-Bill, Megan M. Ruprecht, Erik Elías Glenn, Peter Lindeman, Robin Lin Miller
Abstract: American Journal of Evaluation, Ahead of Print. Lesbian, gay, bisexual, transgender, queer, intersex, Two-Spirit, and other sexual and gender minority (LGBTQ+) individuals encounter numerous obstacles to equity across health and healthcare, education, housing, employment, and other domains. Such barriers are even greater for LGBTQ+ individuals who are also Black, Indigenous, and People of Color (BIPOC), as well as those who are disabled, and those who are working-class, poor, and otherwise economically disadvantaged, among other intersecting forms of oppression. Given this, an evaluation cannot be equitable for LGBTQ+ people without meaningfully including our experiences and voices. Unfortunately, all evidence indicates that evaluation has systematically failed to recognize the presence and value of LGBTQ+ populations. Thus, we propose critical action steps and the articulation of a new paradigm of LGBTQ+ Evaluation. Our recommendations are grounded in transformative, equitable, culturally responsive, and decolonial frameworks, as well as our own experiences as LGBTQ+ evaluators and accomplices. We conclude by inviting others to participate in the articulation and enactment of this new paradigm.
Citation: American Journal of Evaluation
PubDate: 2022-06-10T06:35:14Z
DOI: 10.1177/10982140211067206
Authors: Joanne Xiaolei Qian-Khoo, Kiros Hiruy, Rebecca Willow-Anne Hutton, Jo Barraket
Abstract: American Journal of Evaluation, Ahead of Print. Impact evaluation and measurement are highly complex and can pose challenges for both social impact providers and funders. Measuring the impact of social interventions requires the continuous exploration and improvement of evaluation approaches and tools. This article explores the available evidence on meta-evaluation—the “evaluation of evaluations”—as an analytical tool for improving impact evaluation and analysis in practice. It presents a systematic review of 15 meta-evaluations with an impact evaluation/analysis component. These studies, taken from both the scholarly and gray literature, were analyzed thematically, yielding insights about the potential contribution of meta-evaluation in improving the methodological rigor of impact evaluation and organizational learning among practitioners. To conclude, we suggest that meta-evaluation is a viable way of examining impact evaluations used in the broader social sector, particularly market-based social interventions.
Citation: American Journal of Evaluation
PubDate: 2022-06-10T06:20:51Z
DOI: 10.1177/10982140211018276
Authors: Corrie B. Whitmore
Abstract: American Journal of Evaluation, Ahead of Print. This paper describes a framework for educating future evaluators and users of evaluation through community-engaged, experiential learning courses and offers practical guidance about how such a class can be structured. This approach is illustrated via a reflective case narrative describing how an introductory, undergraduate class at a mid-size, public university in the northwest partnered with a community agency. In the class, students learned and practiced evaluation principles in the context of a Parents as Teachers home visiting program, actively engaged in course assignments designed to support the program's evaluation needs, and presented meta-evaluative findings and recommendations for future evaluation work to the community partner to conclude the semester. This community-engaged approach to teaching evaluation anchors student learning in an applied context, promotes social engagement, and enables students to contribute to knowledge about effective human action, as outlined in the American Evaluation Association's Mission.
Citation: American Journal of Evaluation
PubDate: 2022-05-10T06:10:33Z
DOI: 10.1177/10982140221100448
Authors: Rachael R. Kenney, Leah M. Haverhals, Krysttel C. Stryczek, Kelty B. Fehling, Sherry L. Ball
Abstract: American Journal of Evaluation, Ahead of Print. Site visits are common in evaluation plans, but there is a dearth of guidance about how to conduct them. This paper revisits site visit standards published by Michael Patton in 2017 and proposes a framework for evaluative site visits. We retrospectively examined documents from a series of site visits for examples of Patton's standards. Through this process, we identified additional standards and organized them into four categories and fourteen standards that can guide evaluation site visits: team competencies and knowledge (interpersonal competence, cultural humility, evaluation competence, methodological competence, subject matter knowledge, site-specific knowledge), planning and coordination (project design, resources, data management), engagement (team engagement, sponsor engagement, site engagement), and confounding factors (neutrality, credibility). In the paper, we provide definitions and examples from the case of meeting, and missing, the standards. We encourage others to apply the framework in their contexts and continue the discussion around evaluative site visits.
Citation: American Journal of Evaluation
PubDate: 2022-02-24T05:06:00Z
DOI: 10.1177/10982140221079266
Authors: Jennifer A.H. Billman
Abstract: American Journal of Evaluation, Ahead of Print. For over 30 years, calls have been issued for the western evaluation field to address implicit bias in its theory and practice. Although many in the field encourage evaluators to be culturally competent, ontological competence remains unaddressed. Grounded in an institutionalized distrust of non-western perspectives of reality and knowledge frameworks, this neglect threatens the validity, reliability, and usefulness of western-designed evaluations conducted in non-western settings. To address this, I introduce Ontologically Integrative Evaluation (OIE), a new framework built upon ontological competence and six foundational ontological concepts: ontological fluidity, authenticity, validity, synthesis, justice, and vocation. Grounding evaluation in three ontological considerations—what there is, what is real, and what is fundamental—OIE systematically guides evaluators through a deep exploration of their own and others’ ontological assumptions. By demonstrating the futility of evaluations grounded in a limited ontological worldview, OIE bridges the current divide between western and non-western evaluative thinking.
Citation: American Journal of Evaluation
PubDate: 2022-01-31T10:48:29Z
DOI: 10.1177/10982140221075244
Authors: John M. LaVelle, Natalie D. Jones, Scott I. Donaldson
Abstract: American Journal of Evaluation, Ahead of Print. The impostor phenomenon is a psychological construct referring to a range of negative emotions associated with a person's perception of their own "fraudulent competence" in a field or of their lack of skills necessary to be successful in that field. Anecdotal evidence suggests that many practicing evaluators have experienced impostor feelings, but lack a framework in which to understand their experiences and the forums in which to discuss them. This paper summarizes the literature on the impostor phenomenon, applies it to the field of evaluation, and describes the results of an empirical, quantitatively focused study which included open-ended qualitative questions exploring impostorism in 323 practicing evaluators. The results suggest that the impostor phenomenon in evaluators consists of three constructs: Discount, Luck, and Fake. Qualitative data analysis suggests differential coping strategies for men and women. Thematic analysis guided the development of a set of proposed solutions to help lessen the phenomenon's detrimental effects for evaluators.
Citation: American Journal of Evaluation
PubDate: 2022-01-31T10:47:49Z
DOI: 10.1177/10982140221075243
Authors: John M. LaVelle, Clayton L. Stephenson, Scott I. Donaldson, Justin D. Hackett
Abstract: American Journal of Evaluation, Ahead of Print. Psychological theory suggests that evaluators’ individual values and traits play a fundamental role in evaluation practice, though few empirical studies have explored those constructs in evaluators. This paper describes an empirical study on evaluators’ individual, work, and political values, as well as their personality traits, to predict evaluation practice and methodological orientation. The results suggest evaluators value benevolence, achievement, and universalism; they lean socially liberal but are slightly more conservative on fiscal issues; and they tend to be conscientious, agreeable, and open to new experiences. In the workplace, evaluators value competence and opportunities for growth, as well as status and independence. These constructs did not statistically predict evaluation practice, though some workplace values and individual values predicted quantitative methodological orientation. We conclude by discussing strengths, limitations, and next steps for this line of research.
Citation: American Journal of Evaluation
PubDate: 2022-01-31T10:47:12Z
DOI: 10.1177/10982140211046537
Authors: Pirmin Bundi, Valérie Pattyn
Abstract: American Journal of Evaluation, Ahead of Print. Evaluations are considered of key importance for a well-functioning democracy. Against this background, it is vital to assess whether and how evaluation models approach the role of citizens. This paper is the first to present a review of citizen involvement in the main evaluation models commonly distinguished in the field. We present the results of both a document analysis and an international survey of experts who had a prominent role in developing the models. This overview is not only theoretically relevant but can also be helpful for evaluation practitioners or scholars looking for opportunities for citizen involvement. The paper contributes primarily to the evaluation literature, but also aims to fine-tune available insights on the relationship between evidence-informed policy making and citizens.
Citation: American Journal of Evaluation
PubDate: 2022-01-28T03:51:27Z
DOI: 10.1177/10982140211047219
Authors: Stephen H. Bell, David C. Stapleton, Michelle Wood, Daniel Gubits
Abstract: American Journal of Evaluation, Ahead of Print. A randomized experiment that measures the impact of a social policy in a sample of the population reveals whether the policy will work on average with universal application. An experiment that includes only the subset of the population that volunteers for the intervention generates narrower “proof-of-concept” evidence of whether the policy can work for motivated individuals. Both forms of learning carry value, yet evaluations rarely combine the two designs. The U.S. Social Security Administration conducted an exception, the Benefit Offset National Demonstration (BOND). This article uses BOND to examine the statistical power implications and potential gains in policy learning—relative to costs—from combining volunteer and population-representative experiments. It finds that minimum detectable effects of volunteer experiments rise little when one adds a population-representative experiment, but those of a population-representative experiment double or quadruple with the addition of a volunteer experiment.
Citation: American Journal of Evaluation
PubDate: 2022-01-27T09:12:21Z
DOI: 10.1177/10982140211006786
Authors: Jennifer J. Esala, Liz Sweitzer, Craig Higson-Smith, Kirsten L. Anderson
Abstract: American Journal of Evaluation, Ahead of Print. Advocacy evaluation has emerged in the past 20 years as a specialized area of evaluation practice. We offer a review of existing peer-reviewed literature and draw attention to the scarcity of scholarly work on human rights advocacy evaluation in the Global South. The lack of published material in this area is concerning, given the urgent need for human rights advocacy in the Global South and the difficulties of conducting advocacy in contexts in which fundamental human rights are often poorly protected. Based on the review of the literature and our professional experiences in human rights advocacy evaluation in the Global South, we identify themes in the literature that are especially salient in the Global South and warrant more attention. We also offer critical reflections on content areas not addressed in the existing literature and conclude with suggestions as to how activists, evaluators, and other stakeholders can contribute to the development of a field of practice that is responsive to the global challenge of advocacy evaluation.
Citation: American Journal of Evaluation
PubDate: 2022-01-12T08:07:32Z
DOI: 10.1177/10982140211007937
Authors: Roni Ellington, Clara B. Barajas, Amy Drahota, Cristian Meghea, Heatherlun Uphold, Jamil B. Scott, E. Yvonne Lewis, C. Debra Furr-Holden
Abstract: American Journal of Evaluation, Ahead of Print. Over the last few decades, there has been an increase in the number of large federally funded transdisciplinary programs and initiatives. Scholars have identified a need to develop frameworks, methodologies, and tools to evaluate the effectiveness of these large collaborative initiatives, providing precise ways to understand and assess the operations, community and academic partner collaboration, scientific and community research dissemination, and cost-effectiveness. Unfortunately, there has been limited research on methodologies and frameworks that can be used to evaluate large initiatives. This study presents a framework for evaluating the Flint Center for Health Equity Solutions (FCHES), a National Institute on Minority Health and Health Disparities (NIMHD)-funded Transdisciplinary Collaborative Center (TCC) for health disparities research. This report presents a summary of the FCHES evaluation framework and evaluation questions as well as findings from the Year-2 evaluation of the Center and lessons learned.
Citation: American Journal of Evaluation
PubDate: 2022-01-11T09:05:56Z
DOI: 10.1177/1098214021991923
Authors: Caitlin Howley, Johnavae Campbell, Kimberly Cowley, Kimberly Cook
Abstract: American Journal of Evaluation, Ahead of Print. In this article, we reflect on our experience applying a framework for evaluating systems change to an evaluation of a statewide West Virginia alliance funded by the National Science Foundation (NSF) to improve the early persistence of rural, first-generation, and other underrepresented minority science, technology, engineering, and mathematics (STEM) students in their programs of study. We begin with a description of the project and then discuss the two pillars around which we have built our evaluation of this project. Next, we present the challenge we confronted (despite the utility of our two pillars) in identifying and analyzing systems change, as well as the literature we consulted as we considered how to address this difficulty. Finally, we describe the framework we applied and examine how it helped us and where we still faced quandaries. Ultimately, this reflection serves two key purposes: (1) to consider a few of the challenges of measuring changes in systems and (2) to discuss our experience applying one framework to address these issues.
Citation: American Journal of Evaluation
PubDate: 2022-01-05T02:14:41Z
DOI: 10.1177/10982140211041606
Authors: Charles S. Reichardt
First page: 158
Abstract: American Journal of Evaluation, Ahead of Print. Evaluators are often called upon to assess the effects of programs. To assess a program effect, evaluators need a clear understanding of how a program effect is defined. Arguably, the most widely used definition of a program effect is the counterfactual one. According to the counterfactual definition, a program effect is the difference between what happened after the program was implemented and what would have happened if the program had not been implemented, but everything else had been the same. Such a definition is often said to be linked to the use of quantitative methods. But the definition can be used just as effectively with qualitative methods. To demonstrate its broad applicability in both qualitative and quantitative research, I show how the counterfactual definition undergirds seven common approaches to assessing effects. It is not clear that any alternative to the counterfactual definition is as generally applicable.
Citation: American Journal of Evaluation
PubDate: 2022-01-06T11:04:42Z
DOI: 10.1177/1098214020975485
Authors: Melvin M. Mark
First page: 293
Abstract: American Journal of Evaluation, Ahead of Print.
Citation: American Journal of Evaluation
PubDate: 2022-04-20T06:18:12Z
DOI: 10.1177/10982140221078753
Authors: Sharon F. Rallis
First page: 295
Abstract: American Journal of Evaluation, Ahead of Print.
Citation: American Journal of Evaluation
PubDate: 2022-04-27T07:43:13Z
DOI: 10.1177/10982140221078750
Authors: Stewart I. Donaldson
First page: 298
Abstract: American Journal of Evaluation, Ahead of Print.
Citation: American Journal of Evaluation
PubDate: 2022-02-02T05:05:32Z
DOI: 10.1177/10982140221077938
Authors: Justus Randolph
First page: 304
Abstract: American Journal of Evaluation, Ahead of Print. In this tribute, I describe my wonderful experience having George Julnes as a long-time evaluation mentor, and I pass on some of the sage wisdom that he passed on to me.
Citation: American Journal of Evaluation
PubDate: 2022-04-20T06:18:24Z
DOI: 10.1177/10982140221079190
Authors: Guili Zhang
First page: 306
Abstract: American Journal of Evaluation, Ahead of Print. The passing of George Julnes, Editor of the American Journal of Evaluation (AJE), brought deep sorrow to the evaluation community. We lost a dedicated colleague and an even better friend. George was a welcoming face of the American Evaluation Association (AEA) and an exemplary leader. He supported AEA membership and leadership, contributed to an internationally inclusive AEA, maintained a strong AJE editorial team, and adapted AJE to meet the new reporting standards of the American Psychological Association. Through his dedication and efforts, George helped shape AEA's professional future. His torch is being picked up by the many others who have been inspired by his vision and dedication to building a warm, welcoming, professional evaluation community.
Citation: American Journal of Evaluation
PubDate: 2022-02-14T02:58:14Z
DOI: 10.1177/10982140221079189