ID: 789 | Panels (Abstract Submissions 2022)
Themes: Theme 4: Methodological shift: transforming methodologies
Keywords: program theory, theory of change, complexity, systems thinking, theory-based evaluation, realist evaluation
How We Model Matters — Capturing Complexity In Program Theories
Chair(s): Laura Peck (Abt Associates)
The broad and still broadening array of techniques and methods for developing program theories substantively expands our evaluation toolbox and practice. For novice—and perhaps even experienced—evaluators, this expanding range of approaches and methods, not to mention the dizzying array of corresponding terminology, may evoke a mixed sense of methodological excitement and confusion. The motivation for this panel is to promote awareness of different strategies and techniques for capturing complexity in program theories. The modest hope of the panelists is that fellow evaluation practitioners will be inspired by and learn from these techniques and approaches, perhaps even expanding their future design and use of program theories to capture complexity.
Biographies Sebastian Lemire brings over 15 years of experience designing and managing evaluations in education, social welfare, and international development. His areas of interest revolve around the purpose and role of program theories in evaluation, alternative approaches for impact evaluation, and how these topics merge in theory-based evaluations and evidence reviews.
Fiona Mactaggart is a monitoring and evaluation specialist at Abt Britain, with 10+ years of international development experience. Fiona designs and implements MEL systems for diverse and often complex programmes. She contributes to critical thinking on adaptive management and MEL practices, non-traditional MEL methods, and challenges of contribution vs. attribution.
Stewart Donaldson is Distinguished University Professor and Executive Director of the Claremont Evaluation Center (CEC) and The Evaluators’ Institute (TEI) at Claremont Graduate University, USA. He is past president of the American Evaluation Association (2015) and has been honored with a plethora of prestigious national and regional career achievement awards.
Gordon Freer is based in Johannesburg, South Africa, and has led numerous theory-based evaluations of international development programmes. He has published several articles and chapters on various aspects of using a theory of change as an evaluative tool. He appreciates complexity but believes in unravelling and simplifying it where possible.
Laura R. Peck is a Principal Scientist at Abt Associates, with deep evaluation experience in both research and academic settings. Dr. Peck specializes in estimating program impacts in experimental and quasi-experimental evaluations. As Director of Abt’s Research, Monitoring & Evaluation Capability Center, she considers methods across Abt’s diverse global portfolio.
Presentations of the Panel
What Have We Learned About Capturing The Complexity Of Program Theories From Theory-driven Evaluation Practice?
Stewart Donaldson Claremont Graduate University
Professor Donaldson will summarize some of the main findings related to program theory development and application presented in his new book, “Introduction to Theory-Driven Program Evaluation: Culturally Responsive and Strengths-focused Applications” (2021). A special emphasis will be placed on exploring the unique knowledge base that has been developed from both research on evaluation (ROE) studies and evaluation practice. He will illustrate ways of capturing complexity using a variety of modeling and visual techniques from actual evaluations that have been guided by stakeholder program theories and/or social science theory and research.
Visual Strategies And Modeling Techniques For Capturing Complexity In Program Theories
Sebastian Lemire Abt Associates
Program theories are widely used in evaluation. Over the years, evaluators have developed a wide range of techniques and methods for developing program theories, substantively expanding our evaluation toolbox and practice. In theory-based impact evaluation especially, innovative analytical approaches and techniques continue to emerge and gain prominence. Informed by a recent review of 400 theory-based and realist evaluations, this presentation describes and visually illustrates different modeling techniques and visual strategies for capturing complexity in program theories. A brief note on reflective program theorizing concludes the presentation.
Does A Theory Of Change Leave You Feeling Short-changed? How Can You Get More Out Of It?
Mbuso Jama Abt Associates
For designers, evaluators, implementers and funders, a Theory of Change is a staple of our work; it is far more likely to feature in our practice than not. And yet we know from our own experiences that a Theory of Change is often poorly done, poorly used and can at times simply be a waste of time. So why do we persevere? In this presentation, we share examples of when a Theory of Change has delivered for and benefitted our work and the work of others. We take you through our top practical, hard-learned lessons about when and how a Theory of Change works best, with a focus on how it can be used more effectively as an evaluative tool. In our session, expect to see examples of when we have failed to get the most out of a Theory of Change and when we have used one successfully. We will also include practical methods, tools and tricks that have helped us get the most out of the Theory of Change tool.
Simplifying Complexity? Disentangling A Complex Theory-based Evaluation
Gordon Freer Insight Strategies
Over a period of five years, an adaptive, innovative business accelerator – SPRING – worked with 75 businesses across nine countries to change the lives of over 2 million girls. Gordon led the seven-year evaluation of this programme, almost from inception. With the evaluation drawing to a close in mid-2022, this is a prime opportunity to reflect on what worked well in evaluating this fluid, complex intervention. Contemplating the evaluation strategy, process and results, Gordon will draw real-life lessons from conducting a three-pillared, theory-based, longitudinal and adaptive evaluation. Based on frank discussions with the multiple donors, implementing team members and the evaluation team, Gordon will share what worked well, where the evaluation strategy and process might have been improved, and, in hindsight, which components might have been best left on the drawing board. The presentation will shed light on the implementation and the evaluative process, from conceptualisation, through design and inception, to delivery and conclusion.