Practical Skills in Realist Evaluation


Realist evaluation asks different questions than traditional evaluations, and realist program theory is different from other types of program theory. Realist evaluation is therefore different from other theory-based evaluation approaches and requires particular skills of evaluators. This workshop will focus on applied skills in realist evaluation. Using presentation material and practical exercises, the workshop will summarize the key concepts underpinning realist evaluation and provide experience in:

  • Writing key evaluation questions for realist evaluations
  • Developing a hypothesized CMO configuration to test in a realist evaluation
  • Collecting evidence for program mechanisms (which by definition cannot be observed)
  • Analyzing realist data

The workshop will also provide information about the (emerging) international standards for realist evaluation, and ongoing sources of support for realist evaluators.

You will learn:

  • how the terms ‘context’, ‘mechanism’ and ‘outcome’ are used in realist evaluation, and how those uses differ from everyday ones
  • how to develop realist evaluation designs, including writing key evaluation questions and developing realist program theory
  • how to obtain evidence for program mechanisms (which by definition cannot be observed)
  • how to approach data analysis for realist evaluations


Emma Williams, Gill Westhorp and Kim Grey, Charles Darwin University, Australia

Emma Williams is Principal Scientist in the evaluation unit of the Northern Institute at Charles Darwin University. Although she continues to conduct many evaluations, increasingly her work in Australia and overseas involves supporting community organizations, government departments, academic researchers and even evaluators in building and using evaluative thinking capacity. In addition to her practical experience, she is teaching on this topic in graduate courses in 2016.

Gill Westhorp is a specialist methodologist in realist approaches. She holds a PhD in realist methodology and is a co-author and core-team member for the RAMESES I (international standards for realist synthesis) and RAMESES II (international standards for realist evaluation) projects. She has extensive experience in realist evaluation across multiple sectors (health, education, early years programs, international development, energy and climate change), from small community-based evaluations to large international projects.

Kim Grey manages an internal evaluation unit for the Australian Government. She is currently studying a Masters in Evaluation by research at Melbourne University, following prior graduate studies in research methods. She has managed complex evaluation projects involving inter-cultural evaluation of sensitive cross-government/community-sector programs addressing safety and wellbeing, labour market assistance, reform to training schemes, and Indigenous Employment Policy.

Intermediate knowledge and skills in evaluation are required. A beginner-level understanding of realist evaluation is an advantage.


Sunday, June 5 from 9:00 am to 12:00 pm

Link to CE competencies for evaluators

  • Specifies program theory
  • Frames evaluation questions
  • Develops evaluation designs
  • Develops reliable and valid measures/tools
  • Analyzes and interprets data