Authors: Pete Barbrook-Johnson, Brian Castellani, Dione Hills, Alexandra Penn Pages: 4 - 17 Abstract: Evaluation, Volume 27, Issue 1, Page 4-17, January 2021. The value of complexity science and related approaches in policy evaluation has been widely discussed over the last 20 years, not least in this journal. We are now at a crossroads; this Special Issue argues that the use of complexity science in evaluation could deepen and broaden, rendering evaluations more practical and rigorous. The risk is that the drive to better evaluate policies from a complexity perspective could falter. This special issue is the culmination of 4 years’ work at this crossroads in the UK Centre for the Evaluation of Complexity Across the Nexus. It includes two papers that consider the cultural and organisational operating context for the use of complexity in evaluation, and four methodological papers on developments and applications. Together, with strong input from practitioners, these papers aim to make complexity actionable and expand the use of complexity ideas in evaluation and policy practice. Citation: Evaluation PubDate: 2021-01-15T06:12:24Z DOI: 10.1177/1356389020976491 Issue No: Vol. 27, No. 1 (2021)
Authors: Martha Bicket, Dione Hills, Helen Wilkinson, Alexandra Penn Pages: 18 - 31 Abstract: Evaluation, Volume 27, Issue 1, Page 18-31, January 2021. Central government guidance seeks to ensure and enhance the quality of practice and decision-making across – and sometimes beyond – government. The Magenta Book, published by HM Treasury, is the key UK Government resource on policy evaluation, setting out central government guidance on how to evaluate policies, projects and programmes. The UK Centre for the Evaluation of Complexity Across the Nexus was invited to contribute its expertise to the UK Government’s 2020 update of the Magenta Book by developing an accompanying guide on policy evaluation and ‘complexity’. A small multidisciplinary team worked together to produce the guidance, going through multiple stages of work and drawing on a variety of sources, including academic and practitioner literature and experts and stakeholders in the fields of evaluation, policy and complexity. It also drew on the Centre for the Evaluation of Complexity Across the Nexus’ own work developing and testing evaluation methods for dealing with complexity in evaluation. The resulting Magenta Book 2020 Supplementary Guide: Handling Complexity in Policy Evaluation explores the implications of complexity for policy and evaluation and how evaluation can help to navigate complexity. This article, designed primarily for practitioners who might be interested in this guidance and how it was developed, describes the processes involved, particularly the interdisciplinary dialogue and consultation with other key stakeholders. It also briefly outlines the content and key messages of the guidance, with reflections on the authors’ experiences in developing the guide, including the challenges and insights that arose during the process, particularly around communicating complexity to a broad audience of readers. Citation: Evaluation PubDate: 2021-01-15T06:12:22Z DOI: 10.1177/1356389020980479 Issue No: Vol. 27, No. 1 (2021)
Authors: Pete Barbrook-Johnson, Alexandra Penn Pages: 57 - 79 Abstract: Evaluation, Volume 27, Issue 1, Page 57-79, January 2021. The use of complexity science in evaluation has received growing attention over the last 20 years. We present the use of a novel complexity-appropriate method – Participatory Systems Mapping – in two real-world evaluation contexts and consider how this method can be applied more widely in evaluation. Participatory Systems Mapping involves the production of a causal map of a system by a diverse set of stakeholders. The map, once refined and validated, can be analysed and used in a variety of ways in an evaluation or in evaluation planning. The analysis approach combines network analysis with subjective information from stakeholders. We suggest Participatory Systems Mapping has great potential to add value for evaluators, given the unique insights it offers, its relative ease of use, and its complementarity with existing evaluation approaches and methods. Citation: Evaluation PubDate: 2021-01-15T06:12:30Z DOI: 10.1177/1356389020976153 Issue No: Vol. 27, No. 1 (2021)
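As a purely illustrative aside (not drawn from the paper), the sketch below shows one way the general idea of combining network analysis with subjective stakeholder information could look in code; the causal map, the stakeholder ratings and the 50/50 scoring rule are all invented assumptions.

```python
# Illustrative sketch only: combine network centrality on a causal map
# with subjective stakeholder ratings. Node names and weights are invented.
import networkx as nx

# Causal map: a directed edge reads "A causally influences B".
causal_map = nx.DiGraph()
causal_map.add_edges_from([
    ("funding", "staff capacity"),
    ("staff capacity", "service quality"),
    ("service quality", "user trust"),
    ("user trust", "service uptake"),
    ("service uptake", "funding"),   # feedback loop
    ("policy change", "funding"),
])

# Subjective stakeholder input: how important each factor feels (0-1, invented).
stakeholder_importance = {
    "funding": 0.9, "staff capacity": 0.6, "service quality": 0.8,
    "user trust": 0.7, "service uptake": 0.8, "policy change": 0.4,
}

# Structural influence: betweenness centrality flags "broker" factors.
centrality = nx.betweenness_centrality(causal_map)

# Combine the two views: factors that are both structurally central and rated
# important by stakeholders are candidate foci for evaluation questions.
combined = {
    n: 0.5 * centrality[n] + 0.5 * stakeholder_importance.get(n, 0.0)
    for n in causal_map.nodes
}
for node, score in sorted(combined.items(), key=lambda kv: -kv[1]):
    print(f"{node:16s} centrality={centrality[node]:.2f} combined={score:.2f}")
```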
Authors: Helen Wilkinson, Dione Hills, Alexandra Penn, Pete Barbrook-Johnson Pages: 80 - 101 Abstract: Evaluation, Volume 27, Issue 1, Page 80-101, January 2021. Theory of Change diagrams are commonly used within evaluation. Due to their popularity and flexibility, Theories of Change can vary greatly, from the nuanced and nested through to the simplified and linear. We present a methodology for building genuinely holistic, complexity-appropriate, system-based Theory of Change diagrams, using Participatory Systems Mapping as a starting point. Participatory Systems Maps provide a general-purpose resource that can be used in many ways; however, knowing how to turn their complex view of a system into something actionable for evaluation purposes is difficult. The methodology outlined in this article provides this starting point and plots a path from systems mapping to a Theory of Change that evaluators can use. It allows evaluators to develop practical Theories of Change that take into account feedbacks, wider context and potential negative or unexpected outcomes. We use the example of the energy trilemma map, presented elsewhere in this special issue, to demonstrate the methodology. Citation: Evaluation PubDate: 2021-01-15T06:12:42Z DOI: 10.1177/1356389020980493 Issue No: Vol. 27, No. 1 (2021)
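Again as an illustrative aside rather than the authors' published procedure, the sketch below assumes a toy causal map and shows one plausible way to pull a Theory of Change skeleton out of it: keep the factors lying on causal paths from the intervention to the outcome of interest, and flag feedback loops and omitted context separately. All node names are invented.

```python
# Illustrative only: derive a Theory-of-Change skeleton from a causal map by
# keeping nodes on causal paths from the intervention to the outcome, then
# reporting feedback loops and wider-context factors left out of that skeleton.
import networkx as nx

systems_map = nx.DiGraph([
    ("intervention", "funding"),
    ("funding", "staff capacity"),
    ("staff capacity", "service quality"),
    ("service quality", "service uptake"),
    ("service uptake", "funding"),   # feedback: uptake helps sustain funding
    ("weather", "service uptake"),   # wider-context factor
])
intervention, outcome = "intervention", "service uptake"

# Every factor on a simple causal path from intervention to outcome.
toc_nodes = {n for path in nx.all_simple_paths(systems_map, intervention, outcome)
             for n in path}
toc = systems_map.subgraph(toc_nodes).copy()

if nx.is_directed_acyclic_graph(toc):
    print("Theory of Change steps  :", list(nx.topological_sort(toc)))
else:
    print("Theory of Change factors:", sorted(toc.nodes))
print("Feedback loops to flag  :", list(nx.simple_cycles(systems_map)))
print("Context factors omitted :", sorted(set(systems_map) - toc_nodes))
```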
Authors: Barbara Befani, Corinna Elsenbroich, Jen Badham Pages: 102 - 115 Abstract: Evaluation, Volume 27, Issue 1, Page 102-115, January 2021. As policy makers require more rigorous assessments of the strength of evidence in theory-based evaluations, Bayesian logic is attracting increasing interest; however, the estimation of probabilities that this logic (almost) inevitably requires presents challenges. Probabilities can be estimated on the basis of empirical frequencies, but such data are often unavailable for the mechanisms that are objects of evaluation. Subjective probability elicitation techniques are well established in other fields and potentially applicable, but they present their own challenges and might not always be feasible. We introduce the community to a third way: simulated probabilities. We provide proof of concept that simulation can be used to estimate probabilities in diagnostic evaluation and illustrate our case with an application to health policy. Citation: Evaluation PubDate: 2021-01-15T06:12:44Z DOI: 10.1177/1356389020980476 Issue No: Vol. 27, No. 1 (2021)
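To make the idea of simulated probabilities concrete, here is a minimal sketch in which a toy simulation stands in for empirical frequencies when estimating the conditional probabilities a Bayesian update over a hypothesised mechanism requires. The generative model and all parameter values are invented, not taken from the paper.

```python
# Illustrative only: estimate P(evidence | mechanism worked) and
# P(evidence | mechanism did not work) by simulation, then update a prior.
import random

random.seed(1)

def simulate_evidence(mechanism_works: bool) -> bool:
    """Toy generative model of whether tell-tale evidence is observed.
    If the mechanism operates, evidence surfaces often but not always;
    if it does not, evidence can still appear via other routes."""
    p_observe = 0.75 if mechanism_works else 0.20   # assumed values
    return random.random() < p_observe

def estimated_probability(mechanism_works: bool, runs: int = 100_000) -> float:
    hits = sum(simulate_evidence(mechanism_works) for _ in range(runs))
    return hits / runs

p_e_given_h = estimated_probability(True)        # sensitivity of the evidence
p_e_given_not_h = estimated_probability(False)   # false-positive rate

prior = 0.5                                      # prior belief the mechanism worked
posterior = (p_e_given_h * prior) / (
    p_e_given_h * prior + p_e_given_not_h * (1 - prior))

print(f"P(E|H)     ~ {p_e_given_h:.3f}")
print(f"P(E|not H) ~ {p_e_given_not_h:.3f}")
print(f"Posterior P(H|E) ~ {posterior:.3f}")
```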
Authors: Corey Schimpf, Pete Barbrook-Johnson, Brian Castellani Pages: 116 - 137 Abstract: Evaluation, Volume 27, Issue 1, Page 116-137, January 2021. Despite 20 years of increasing acceptance, implementing complexity-appropriate methods for ex-post evaluation remains a challenge: rather than focusing on complex interventions, methods need to help evaluators better explore how policies (no matter how simple) take place in real-world, open, dynamic systems, where many intertwined factors about the cases being targeted affect outcomes in numerous ways. To assist in this advance, we developed case-based scenario simulation, a new, visually intuitive evaluation tool grounded in a data-driven, case-based, computational modelling approach, which evaluators can use to explore counterfactuals, status-quo trends and what-if scenarios for a set of real or imagined interventions. To demonstrate the value and versatility of case-based scenario simulation, we explore four published evaluations that differ in design (cross-sectional, longitudinal and experimental) and purpose (learning or accountability), and present a prospective view of how case-based scenario simulation could support and enhance evaluators’ efforts in these complex contexts. Citation: Evaluation PubDate: 2021-01-15T06:12:34Z DOI: 10.1177/1356389020978490 Issue No: Vol. 27, No. 1 (2021)
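As a hedged illustration of the general flavour of a data-driven, case-based what-if exploration (not the authors' case-based scenario simulation tool itself; the cases, features and nearest-neighbour outcome rule are invented), consider the following sketch, which compares predicted outcomes under the status quo with a scenario in which an intervention shifts one feature across all cases.

```python
# Illustrative only: a toy case-based what-if exploration.
import numpy as np

# Each case: [deprivation, service access, engagement], plus an outcome score (all invented).
cases = np.array([
    [0.8, 0.2, 0.3], [0.6, 0.4, 0.5], [0.3, 0.7, 0.6],
    [0.2, 0.9, 0.8], [0.7, 0.3, 0.4], [0.4, 0.6, 0.7],
])
outcomes = np.array([0.25, 0.45, 0.60, 0.85, 0.35, 0.65])

def predicted_outcome(profile: np.ndarray, k: int = 3) -> float:
    """Predict a case's outcome as the mean outcome of its k most similar cases."""
    distances = np.linalg.norm(cases - profile, axis=1)
    nearest = np.argsort(distances)[:k]
    return float(outcomes[nearest].mean())

# What-if scenario: an intervention raises every case's "service access" by 0.2.
scenario_cases = cases.copy()
scenario_cases[:, 1] = np.clip(scenario_cases[:, 1] + 0.2, 0, 1)

status_quo = np.array([predicted_outcome(c) for c in cases])
scenario = np.array([predicted_outcome(c) for c in scenario_cases])

print("Mean outcome, status quo:", round(float(status_quo.mean()), 3))
print("Mean outcome, scenario  :", round(float(scenario.mean()), 3))
print("Cases that improve      :", int((scenario > status_quo).sum()))
```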
Authors: Daniel Silver Abstract: Evaluation, Ahead of Print. The article aims to repurpose evaluation to learn about social justice by anchoring evaluation in normative dimensions. It demonstrates the ways in which evaluation with an establishment orientation can limit the scope for dialogue and neglect narratives that contest the status quo. It explains how a more participatory approach that engages with the standpoints of marginalised participants can enhance the potential to learn about social justice. An ethical commitment to social justice does not mean a rejection of rigour in evidence-based evaluation. Relating Fraser’s critical theory of participatory parity to the regulative ideal of evaluation creates a foundation to systematically foreground explanations about how an intervention has delivered social justice. Citation: Evaluation PubDate: 2020-12-24T09:18:57Z DOI: 10.1177/1356389020948535
Authors: Kettil Nordesjö Abstract: Evaluation, Ahead of Print. Evaluation has different uses and impacts. This article aims to describe and analyse the constitutive effects – how evaluation forms and shapes different domains in an evaluation context – of evaluation related to evidence-based policy and practice, by investigating how the evaluation of social investment fund initiatives in three Swedish municipalities is organized and implemented. Through interviews and evaluation reports, the findings show how this way of evaluating may contribute to constitutive effects by defining worldviews, content, timeframes and evaluation roles. The article discusses how social investment fund evaluation contributes to a linear knowledge-transfer model, promotes a relation between costs and evidence, and concentrates the power over evaluation at the top of organizations. Citation: Evaluation PubDate: 2020-12-10T09:43:14Z DOI: 10.1177/1356389020969712
Authors: Lisa Verwoerd, Pim Klaassen, Barbara J. Regeer Abstract: Evaluation, Ahead of Print. While hybrid evaluation practices are increasingly common, many Western countries continue to favor modernist evaluation logics focused on performance management, hampering the normalization of reflexive logics revolving around system change. We use Normalization Process Theory to analyze the work that evaluators from a policy assessment agency undertook to align the prevailing and proposed logics guiding evaluation practice while implementing a reflexive evaluation approach. Ad hoc alignment strategies and insufficient investment in mutual sense-making regarding reflexive evaluation hindered normalization. We conclude that alignment requires developing the legitimacy of reflexive evaluation in the context of application and guarding its integrity, while contextual structures and cultures and reflexive evaluation components are being negotiated. Elasticity (of contextual structures and cultures) and plasticity (of reflexive evaluation components) are introduced as helpful concepts for further understanding how reflexive evaluation practices can become normalized. We reflect on the use of Normalization Process Theory for studying the normalization of reflexive evaluation. Citation: Evaluation PubDate: 2020-12-03T05:26:26Z DOI: 10.1177/1356389020969721
Authors: Jason R. Goertzen, Ashley D. Fraser, Marysia E. Stasiewicz, Michelle N. Grinman Abstract: Evaluation, Ahead of Print. Evaluators learning about developmental evaluation may struggle to find concrete guidance on how to “do developmental evaluation”, because the method must be tailored to the context of the program being evaluated. Case examples in the literature illustrate how others have applied this approach. This article adds to this growing body of knowledge by detailing the first year of a multiyear developmental evaluation of Complex Care Hub. This healthcare program is a hospital-at-home model in which patients receive hospital-level services but sleep in their own homes and receive case management as needed. The article ends with a discussion of challenges and lessons learned. Citation: Evaluation PubDate: 2020-11-17T04:16:57Z DOI: 10.1177/1356389020969714
Authors: Astrid Brousselle, Jim McDavid Abstract: Evaluation, Ahead of Print. We are currently being challenged to urgently address the environmental crisis. Intervening in this complex ecology creates the need to adopt approaches that reconcile natural and human systems: approaches for Planetary Health. In this article, we present a Planetary Health Framework as a conceptual, dialogic approach for designing and evaluating interventions. Natural and human systems dimensions have been conceptualized in an integrated way, based on existing scientific knowledge. This framework is intended to be applied using a dialogic approach. We also show, schematically, how the use of this approach can be overlaid on each of the 17 Sustainable Development Goals. The overall aim of this article is to contribute to a transformation in our field, expanding our role from existing narrowly focused evaluation practices to taking into account in our work how interventions do or do not contribute to building a better future for all. Citation: Evaluation PubDate: 2020-11-06T07:57:11Z DOI: 10.1177/1356389020952462
Authors: Ray Pawson Abstract: Evaluation, Ahead of Print. Science has a mixed record when it comes to predicting the future. Engineers build bridges based on foreknowledge of the forces that they are likely to encounter – and their constructions tend to withstand the test of time. Predicting the future course of epidemics and building interventions to contain them is much more precarious. And yet simulation models produced in prestigious centres for mathematical biology have played a significant role in informing coronavirus policy in the United Kingdom and elsewhere. The predictive uncertainties include the inherent variability of the pathogen and considerable variation in host population immunity, as well as the concern of this article, namely, the constantly adapting human judgements of those designing, implementing and experiencing the national response to an outbreak. Assumptions about how interventions are implemented and how people will react are, of course, built into modelling scenarios – but these estimates depict behavioural change in fixed, stimulus-response terms. Real reactions to the complex restrictions introduced to combat the virus unfold along scores of different pathways – people comply, they resist, they learn, they grow weary, they change their minds, they seek exceptions and so on. Model building is intrinsically speculative, and it is important that crisis management is not boxed in by its latent simplifications. A more pluralistic evidence base needs to be drawn on to understand how complex interventions operate within complex societies. Citation: Evaluation PubDate: 2020-11-06T07:56:31Z DOI: 10.1177/1356389020968579
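To make the critique concrete, the toy model below (not any of the models the article discusses; all parameters are invented) shows how behavioural response is often reduced to a single fixed "compliance" parameter, precisely the kind of frozen stimulus-response assumption the article warns against.

```python
# Toy SIR model, purely illustrative: "compliance" is frozen as one number,
# whereas real populations adapt, resist, tire and change their minds.
def sir_final_attack_rate(beta, gamma, compliance, days=500, s0=0.99, i0=0.01):
    s, i, r = s0, i0, 0.0
    effective_beta = beta * (1 - compliance)   # fixed behavioural assumption
    for _ in range(days):
        new_infections = effective_beta * s * i
        recoveries = gamma * i
        s, i, r = s - new_infections, i + new_infections - recoveries, r + recoveries
    return r

for compliance in (0.0, 0.3, 0.6):
    rate = sir_final_attack_rate(beta=0.3, gamma=0.1, compliance=compliance)
    print(f"compliance={compliance:.1f} -> final attack rate {rate:.2f}")
```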
Authors: Jayne Cox, Pete Barbrook-Johnson First page: 32 Abstract: Evaluation, Ahead of Print. This paper investigates the role of evaluation commissioning in hindering the take-up of complexity-appropriate evaluation methods, using findings from interviews with 19 UK evaluation commissioners and contractors. We find that, against a backdrop of a need to ‘do more with less’ and frustration with some traditional approaches, the commissioning process is perceived to hinder the adoption of complexity-appropriate methods because of its inherent lack of time and flexibility and its assessment processes, which struggle to compare methods fairly. Participants suggested a range of ways forward, including more scoping and dialogue in commissioning processes, more accommodation of uncertainty, fostering of demand from policy users, more robust business cases, and more radical overhauls of the commissioning process. The findings also emphasise the need to understand how the commissioning process interacts with the wider policy-making environment and evidence culture, and how this manifests itself in different actors’ attitudes to risk in commissioning. Citation: Evaluation PubDate: 2020-11-28T06:29:41Z DOI: 10.1177/1356389020976157