Authors: Marijn Faling, Greetje Schouten, Sietze Vellema
Abstract: Evaluation, Ahead of Print. Evaluation in complex programs assembling multiple actors and combining various interventions faces contradictory requirements. In this article, we take a management perspective to show how to recognize and accommodate these contradictory elements as paradoxes. Through reflective practice we identify five paradoxes, each consisting of two contradicting logics: the paradox of purpose (between accountability and learning); the paradox of position (between autonomy and involvement); the paradox of permeability (between openness and closedness); the paradox of method (between rigor and flexibility); and the paradox of acceptance (between credibility and feasibility). We infer the paradoxes from our work in monitoring and evaluation and action research embedded in 2SCALE, a program working on inclusive agribusiness and food security in a complex environment. The intractable nature of paradoxes means they cannot be permanently resolved. Making productive use of paradoxes is likely to raise new contradictions, which must be continually acknowledged and accommodated for monitoring and evaluation systems to function well.
Citation: Evaluation
PubDate: 2023-11-29T10:02:03Z
DOI: 10.1177/13563890231215075
Authors: Marko Nousiainen, Lars Leemann
Abstract: Evaluation, Ahead of Print. This study introduces a mixed-method model for the realistic evaluation of programmes promoting the experience of social inclusion of people in disadvantaged positions. It combines qualitative and quantitative methods to explore the context-mechanism-outcome configurations of four cases consisting of development projects. Qualitative analyses depict the context-mechanism-outcome configurations using participants’ interviews and small success stories as data. Quantitative analyses of a longitudinal survey including the Experiences of Social Inclusion Scale examine the context-mechanism-outcome configurations in a larger group of participants and re-test the qualitative findings, helping to overcome the positive selection bias of the small success stories. The mixed-method approach is especially fruitful because the qualitative and quantitative analyses compensate for each other’s shortcomings. In the promotion of social inclusion, it is important to help people see themselves as active agents and allow them to connect to larger social domains.
Citation: Evaluation
PubDate: 2023-11-29T09:58:24Z
DOI: 10.1177/13563890231210328
Authors: Peter Dahler-Larsen, Estelle Raimondo
Abstract: Evaluation, Ahead of Print.
Authors: Daniel E. Esser, Heiner Janus
Abstract: Evaluation, Ahead of Print. We analyse qualitative data collected from employees at Germany’s two main international development organisations, Deutsche Gesellschaft für Internationale Zusammenarbeit (GIZ) and Kreditanstalt für Wiederaufbau (KfW) Development Bank, to study how upward accountability and organisational learning interact in the world’s second largest foreign aid system. Goffman’s ‘staging’ heuristic is applied to unpack social practices in these two organisations. We find that employees navigate two separate domains, a frontstage and a backstage. They consider the federal bureaucracy an audience expecting a coherent storyline despite the messy realities of foreign aid. In response, they engage in impression management on a frontstage while shielding their backstages from scrutiny to maximise autonomy. As a result, organisational learning at GIZ and KfW in Goffman’s terms focuses on collective efficacy at satisfying accountability expectations through staged performances. We relate these insights to the hierarchical structure of Germany’s foreign aid system, the role of organisational interests and prevailing professional norms.
Citation: Evaluation
PubDate: 2023-10-09T06:17:58Z
DOI: 10.1177/13563890231204661
Authors: Tim Strasser, Joop de Kraker
Abstract: Evaluation, Ahead of Print. Conventional evaluation and strategy approaches insufficiently address the needs of social innovation to adapt to non-linear and emergent change processes. This study addresses this shortcoming by testing a recently developed conceptual framework (3D) for the purpose of adaptive strategy and evaluation. We translated the 3D framework into a practice tool (SCALE 3D: Strategic Capacity development, Leadership and Evaluation in 3 Dimensions) and applied it in two projects and four workshop settings through an action-research approach, involving networks of community-led sustainability initiatives. We describe practical benefits and suggest process steps for implementing SCALE 3D, as well as overall lessons learnt. We discuss how SCALE 3D can support transformation-oriented networks in alignment with adaptive strategy and evaluation approaches, to support strategic learning as well as reporting, and thereby help practitioners adapt to emerging changes and be accountable to funders. Our findings are relevant for evaluators, action researchers, strategy consultants, funders and social innovation practitioners supporting transformative networks.
Citation: Evaluation
PubDate: 2023-10-06T12:29:17Z
DOI: 10.1177/13563890231204664
Authors: Seweryn Krupnik, Anna Szczucka, Monika Woźniak, Valérie Pattyn
Abstract: Evaluation, Ahead of Print. Qualitative comparative analysis is gradually becoming more established in the evaluation field. The purpose of this article is to highlight the potential for evaluation research of engaging in consecutive rounds of this analysis. This is possible when approaching qualitative comparative analysis as a systematic strategy for configurational theorizing. To substantiate this potential, we present two evaluation studies on Research and Development subsidies for companies in Poland. Compared with the results of the first study, the findings of the subsequent, consecutive qualitative comparative analysis study were much more nuanced and helped in developing a full-fledged configurational program theory. In addition to elaborating on the strengths of a consecutive qualitative comparative analysis approach and the relevance of configurational program theories for evaluators, this article shares the main lessons learned in overcoming challenges common to such designs. Thus, concrete guidance is offered to researchers and evaluators who are willing to take configurational theorizing seriously.
Citation: Evaluation
PubDate: 2023-09-29T08:26:10Z
DOI: 10.1177/13563890231200292
Authors: Gabriela Camacho Garland, Derek Beach
Abstract: Evaluation, Ahead of Print. This article argues for the importance of theory and theorizing for evaluation in the form of a process theory of change. A process theory of change centers its theoretical attention on key episodes that explain how things worked, in which the causal linkages are unpacked. The key lies in answering why actors do what they do (and thus whether these actions can be traced back to the intervention). This theorization has three steps: (1) definition of the intervention and its potential contribution; (2) theorization of potential contribution pathways; and (3) unpacking the process. This procedure is illustrated with a hypothetical example.
Citation: Evaluation
PubDate: 2023-09-28T06:23:54Z
DOI: 10.1177/13563890231201876
Authors: Steffen Bohni Nielsen, Sofie Østergaard Jaspers, Sebastian Lemire
Abstract: Evaluation, Ahead of Print. Realist evaluation and experimental designs are both well-established approaches to evaluation. Over the past 10 years, realist trials (evaluations purposefully combining realist evaluation and experimental designs) have emerged. Informed by a comprehensive review of published realist trials, this article examines to what extent and how realist trials align with quality standards for realist evaluations and randomized controlled trials, and to what extent and how the realist and trial aspects of realist trials are integrated. We identified only a few examples that met high-quality standards for both experimental and realist studies and that merged the two designs.
Citation: Evaluation
PubDate: 2023-09-23T09:13:37Z
DOI: 10.1177/13563890231200291
Authors: Bente van Oort, Hilda van ‘t Riet, Adriana Parejo Pagador, Rosana Lescrauwaet Noboa, Carolien Aantjes
Abstract: Evaluation, Ahead of Print. While evaluations are critical for non-governmental organizations to strengthen their advocacy strategies, evaluators and advocates encounter many difficulties evaluating such efforts. This article discusses the contribution of the participatory process evaluation methodology to advocacy evaluation, using a Dutch global health advocacy program as a case study. As participatory process evaluation is a novel methodology in the field of advocacy, the article’s primary focus concerns the application and utility of the methodology. Findings suggest that participatory process evaluation in an advocacy context can provide insights into the implementation of advocacy tools and activities, encouraging reflection and leading to ideas and practical tools to strengthen advocacy efforts. While participatory process evaluation can help overcome some of the often-experienced barriers in advocacy evaluation, further research is needed to consolidate advocacy evaluation theory and practice.
Citation: Evaluation
PubDate: 2023-09-23T07:49:00Z
DOI: 10.1177/13563890231200057
Authors: John Guenther, Ian Falk, Michael J. Cole
Abstract: Evaluation, Ahead of Print. This article argues not only that theory building from qualitative evaluation is possible, but that it ought to be considered a desirable product and utilisation of evaluators’ work. Based on three case studies, the authors show how theory building can work, and why it can be important and useful. Theory building is seldom considered an impact-producing product of evaluation. Theory in evaluation is typically limited to ‘evaluation theory’ as a way of explaining why and how different approaches to evaluation work. Theory is also used to inform programme design and ‘theory of change’. The literature seldom suggests that evaluations can be used to build theory in the social sciences. The argument presented in this article builds on the literature of ‘theories of evaluation use’ to suggest that theorising is a form of knowledge utilisation arising from well-constructed, open-ended evaluation questions and conceptual use of findings and recommendations.
Citation: Evaluation
PubDate: 2023-09-14T10:06:31Z
DOI: 10.1177/13563890231196603
Authors: Daria-Maria Gerke, Katrin Uude, Thorsten Kliewe
Abstract: Evaluation, Ahead of Print. Academia focuses on the interplay of Higher Education Institutions and external stakeholders. In this context, academia is concerned with societal impact and the impact created in interactions with external stakeholders; the latter is often referred to as impact co-creation. There is agreement that the processes leading to impact are complex and multi-dimensional. However, academics disagree on how the ultimate, wider impact of research should be measured. This study seeks to conceptualize societal impact through the lens of value co-creation, arguing that societal impact is best conceptualized as the uptake of research. On this basis, we developed a generic research impact assessment framework to facilitate evaluations and enable cross-sector learning. This study contributes to academia by providing an overarching understanding of impact creation, including wider research impact, and offers the perspective that any research project involving stakeholders also, to a certain extent, entails co-production.
Citation: Evaluation
PubDate: 2023-09-14T10:02:10Z
DOI: 10.1177/13563890231195906
Authors: Steve Powell, James Copestake, Fiona Remnant
Abstract: Evaluation, Ahead of Print. Evaluators are interested in capturing how things causally influence one another. They are also interested in capturing how stakeholders think things causally influence one another. Causal mapping – the collection, coding and visualisation of interconnected causal claims – has been used widely for several decades across many disciplines for this purpose. It makes the provenance or source of such claims explicit and provides tools for gathering and dealing with this kind of data and for managing its Janus-like double life: on the one hand, providing information about what people believe causes what, and on the other hand, preparing this information for possible evaluative judgements about what causes what. Specific reference to causal mapping in the evaluation literature is sparse, which we aim to redress here. In particular, the authors address the Janus dilemma by suggesting that causal maps can be understood neither as models of beliefs about causal pathways nor as models of causal pathways per se, but as repositories of evidence for those pathways.
Citation: Evaluation
PubDate: 2023-09-14T10:01:31Z
DOI: 10.1177/13563890231196601
Authors: Rick Davies, Tom Hobson, Lara Mani, Simon Beard
Abstract: Evaluation, Ahead of Print. Evaluators’ main encounter with views of the future is in the form of theories of change, about how a programme will work to achieve a desired end in a given context. These are typically focussed on specific, relatively short-term futures which are both desired and expected. But even in the short term, reality often involves unpredictable events that must be responded to. Other ways of thinking about the future may be helpful and complementary, notably those developed by foresight practitioners working in the field of futures studies. These pay more attention to a range of possible futures, rather than a single perspective. One way of exploring such futures is ParEvo.org, an online process that enables the participatory exploration of alternative futures. This article explains how the ParEvo process works, the theory informing its design, and its usage to date. Attention is given to three evaluation challenges and methods to address them: (a) optimising exercise design, (b) analysing immediate results and (c) identifying longer-term impacts. Two exercises undertaken by the Cambridge-based Centre for the Study of Existential Risk (CSER) in 2021–2022 are used as illustrative examples.
Citation: Evaluation
PubDate: 2023-09-11T06:20:56Z
DOI: 10.1177/13563890231188743
First page: 528
Abstract: Evaluation, Ahead of Print.