Conference Agenda

Overview and details of this conference's sessions. Select a date or location to show only the sessions on that day or at that location. Select a single session for a detailed view (with abstracts and downloads, where available).

Please note that all times are shown in the time zone of the conference (CEST).

 
 
Session Overview
Session
ID 879: Theory-Based Impact Evaluation—Combining Experimental Design with Theory-Based Evaluation
Time:
Thursday, 9 June 2022:
3:45pm - 5:15pm

Location: Room - Galoppen 5

Session Themes:
Theme 4: Methodological shift: transforming methodologies

Presentations
ID: 879
Panels (Abstract Submissions 2022)
Themes: Theme 4: Methodological shift: transforming methodologies
Keywords: Theory-based evaluation, realist evaluation, realist RCT, experimental design, theory of change, mechanism, CMO configuration, validity, rigor

Theory-Based Impact Evaluation—Combining Experimental Design with Theory-Based Evaluation

Chair(s): Sebastian Lemire (Abt Associates)

The theory-based evaluation tradition originally gained momentum in evaluation circles as an alternative to experimental designs—as a counter to “black box evaluations” (Chen & Rossi, 1989). Over the years, a steady stream of theory-based impact evaluations, purposefully combining theory-based evaluation with experimental designs, has emerged. The purpose of this session is to explore the methodological benefits and limitations of combining theory-based evaluation with experimental designs. More specifically, the panel session identifies different types of theory-based impact evaluations, examines real-world applications of these, and concludes with guidance on how to move toward a stronger integration of theory-based evaluation and experimental designs.

Biographies
Mel Mark is Professor of Psychology. He has edited a dozen books and is the author of more than 130 articles and book chapters. For much of his career, Mark has applied his background in social psychology and his interest in research methods to the theory and practice of evaluation.

Stewart Donaldson is Distinguished University Professor and Executive Director of the Claremont Evaluation Center (CEC) and The Evaluators’ Institute (TEI) at Claremont Graduate University, USA. He is past president of the American Evaluation Association (2015) and has been honored with a plethora of prestigious national and regional career achievement awards.

Laura R. Peck is a Principal Scientist at Abt Associates, with deep evaluation experience in both research and academic settings. Dr. Peck specializes in estimating program impacts in experimental and quasi-experimental evaluations. As Director of Abt’s Research, Monitoring & Evaluation Capability Center, she considers methods across Abt’s diverse global portfolio.

Debra J. Rog, Ph.D., is a Vice President for Social Policy and Economics Research at Westat. She enjoys developing new evaluation strategies and techniques. Her unit conducts a range of evaluation studies that examine the implementation, outcomes, and costs of programs, policies, and systems initiatives for vulnerable and disadvantaged populations.

Sebastian Lemire brings over 15 years of experience designing and managing evaluations in education, social welfare, and international development. His areas of interest revolve around the purpose and role of program theories in evaluation, alternative approaches for impact evaluation, and how these topics merge in theory-based evaluations and evidence reviews.
 

Presentations of the Panel

 

The Value of Using Experimental Designs in Theory-Driven Evaluation Science

Stewart Donaldson
Claremont Graduate University

Theory-driven evaluation science is the systematic use of substantive knowledge about the phenomena under investigation and scientific methods to improve, to produce knowledge and feedback about, and to determine the merit, worth and significance of a wide range of evaluands (Donaldson, 2007; Donaldson & Lipsey, 2006; Leeuw & Donaldson, 2015). Rooted in critical multiplism (Shadish, 1993), evaluation designs using multiple methods whenever possible are tailored to answer the most important evaluation questions within practical constraints. Examples of theory-driven evaluations using experimental designs in combination with other methods will be presented to illustrate their importance in answering certain types of impact evaluation questions. In addition, the importance of experimental designs for supporting two of the most rigorous methods for gathering credible and actionable evidence about impact, theory-driven systematic reviews and meta-analyses, will be discussed in some detail (Donaldson, Christie, & Mark, 2015).

 

The Value of Using Experimental Design Features to Test Program Components’ Impacts

Laura Peck
Abt Associates

Experimental evaluations—especially when grounded in theory-based impact evaluation—can provide insights into the mechanisms through which program impacts arise, thereby countering the criticism that experiments are a blunt tool. This presentation will detail some variants of experimental evaluation designs that lend themselves to opening the “black box.” The presentation will describe and examine one theory-based impact evaluation in which the program theory informed which program components should be carved out for an experimental test. The special value of this theory-based experimental strategy is that the results are rigorous and potentially highly relevant to policy and practice.

 

Evaluating “Complex” Systems and Other Evaluands: Using Theory of Change to Weave in Multiple Methods in Impact Evaluation

Debra Rog
Westat

Increasingly, evaluators are asked to evaluate large, complex systems of services, policies, and portfolios of projects. These interventions typically involve multiple intertwined components, multiple organizations, causal chains extending over long timeframes with multiple intermediate outcomes, and a blurring of the interventions within a broader context. Questions of impact are often paramount, yet experimental control can be difficult to achieve. Robust theories of change, developed from prior research and practice, can guide designs that weave in multiple methods, building in experimental or quasi-experimental control where relevant and feasible alongside other methods for explanation. Examples will be presented of theories of change providing the blueprint for evaluations of complex interventions, highlighting how the theories can guide the selection of quasi-experimental designs and strengthen their internal validity, and how quasi-experimental designs can in turn strengthen tests of the theories of change.

 

The Realist Trial Trade-off: Realist, Randomized, Rigorous. Pick Two.

Sebastian Lemire
Abt Associates

Realist evaluation and experimental designs are both well established in evaluation. Over the past ten years, realist trials (evaluations combining realist evaluation and experimental designs) have emerged. Informed by a comprehensive review of realist trials, this presentation examines two questions:

(1) To what extent and how do realist trials align with established quality standards for realist evaluation and experimental designs?

(2) What methodological challenges and trade-offs do realist evaluators face when designing and implementing realist trials?

Reflections on the future integration of realist evaluation and experimental designs conclude the presentation.



 