Examining The Blueprint Of Realist Evaluation — What Are The Building Blocks Of Context-Mechanism-Outcome Configurations?
Chair(s): Sebastian Lemire (Abt Associates, United States of America)
Realist evaluation is an established approach in evaluation. Its driving question is how, for whom, under what conditions, and why an intervention works or does not. This is answered by empirically examining the inner mechanisms by which an intervention generates intended and unintended outcomes within a particular context—explicating the underlying Context–Mechanism–Outcome (CMO) configurations. The overarching aim of this panel is to critically examine the CMO configurations that comprise the heuristic blueprint of realist evaluation. What are the underlying ontological and epistemological foundations of CMOs? What is this thing called a mechanism? What is “context,” and what types of contextual factors are included in CMOs? What are the methodological benefits and limitations of CMOs? How do you elicit CMOs? What is a good—or decent—CMO? These are some of the questions the panelists will discuss during the session.
The purpose of the session is to critically examine Context-Mechanism-Outcome (CMO) configurations—the heuristic blueprint of realist evaluation. Our objective is to advance our collective thinking about what CMOs are and can be, and how to develop and potentially transform the application of CMOs in future realist evaluations. The session should interest evaluation practitioners and commissioners engaged in theory-based evaluation in general and realist evaluation in particular.
Biographies
Ana Manzano is Associate Professor in Public Policy at the University of Leeds (UK). She is interested in the relationship between research methods, evidence, and policy-making. She is an expert in realist evaluation and was part of the pioneering research group (RAMESES II) that developed methodological standards for realist evaluation methods.
Sebastian Lemire brings over 15 years of experience designing and managing evaluations in education, social welfare, and international development. His areas of interest revolve around the purpose and role of program theories in evaluation, alternative approaches for impact evaluation, and how these topics merge in theory-based evaluations and evidence reviews.
Steffen Bohni-Nielsen is Director General at the Danish National Research Centre for the Working Environment. He has broad research interest in the interplay between evaluative knowledge, public management and decision-making. He has published on numerous topics such as realist evaluation, theory-based evaluation, evidence-based policy and practice, and result-based management.
Frans Leeuw is professor emeritus at Maastricht University, working in the field of evaluation, law, and public policy. Earlier he was director of the National Audit Office of the Netherlands, director of the National Institute for Justice and Security Research, Dean of the Netherlands Open University, and Professor of Evaluation Studies at Utrecht University.
Presentations of the Panel
The Intrinsic Wickedness Of “Context”: Commodification, Configuration And Explanatory Power
Ana Manzano University of Leeds
Realist evaluators acknowledge the importance of context, but many of their debates concern its ‘intrinsic wickedness’ and how context interferes with the holy-grail search for mechanisms. The CMO configuration was meant to treat context not as a descriptive backdrop but as an analytical force—but has it succeeded? In our review of recently published realist studies, we identified two distinct ‘context narratives’. One conceptualises context as observable ‘features that trigger’ mechanisms, suggesting that these contextual features can be identified and then reproduced in order to optimise the implementation of interventions as intended. The other considers context as a dynamic interaction between contexts and mechanisms, implying that contexts are infinite, embedded, and uncontrollable, operating in an emergent way over time at multiple levels of the social system. Knowledge gained about how contexts and mechanisms interact to generate intended, unintended, and observed outcomes can be used to understand how interventions might be targeted at broadly similar contextual conditions or adapted to fit different ones.
What Is In A Mechanism?
Sebastian Lemire Abt Associates
Following Pawson and Tilley’s (1997) seminal introduction of realist evaluation, sustained attention has been paid to the use of mechanisms in evaluation. Yet, despite this interest, conceptual ambiguity about the meaning and role of mechanisms in evaluation persists. “Like so many words that are bandied about,” Astbury and Leeuw (2010) observe, “‘mechanism’ can mean many different things depending on the particular field of knowledge and context within which it is used.” Semantic pluralism pervades. Informed by a comprehensive review of realist evaluations, this presentation examines how mechanisms are defined and applied in realist evaluations. Building on this, further conceptual and practical developments for future applications of mechanisms in realist evaluation are discussed.
How Big Is The Realist Evaluation Tent?
Steffen Bohni-Nielsen the Danish National Research Centre for the Working Environment
Realist evaluation is an increasingly popular approach in evaluation circles. The steady flow of published case applications, articles and books, and even conferences dedicated entirely to realist evaluation speaks to this point. Alongside this growth, a diverse range of realist approaches and methods has emerged—ranging from realist syntheses and realist interviews to realist trials.
This broadening of the realist tent raises several questions: What is the epistemological and ontological bandwidth of realist evaluation? Are emerging variants of realist evaluation—such as realist trials—really realist(ic)? These are some of the questions that will be addressed in this presentation.
Realist Evaluation And Contribution Analysis: Commonalities, Similarities And Challenges For Knowledge Growth
Frans Leeuw Maastricht University
Realist evaluation and Contribution Analysis (CA) both pay close attention to the role of theories of change and program theories, and both aim to deliver statements about the effects of policies and programmes. These are only two of their commonalities. There are, however, also significant differences, such as the importance attached to detecting (hidden) mechanisms (central for realists, less so for CA) and the importance attached to the methodological rules (aka rules of thumb) used to perform the analyses and reconstruct program theories. This contribution discusses these and other points, and then considers how knowledge about the functioning and impact of policy can best be increased in complex and sometimes hostile times.