Top Five Takeaways from AEA’s Evaluation 2018

  • Kimberly Ratcliff

With the growing emphasis on monitoring, evaluation and learning (MEL), IRI is committed to constantly improving and adapting our MEL efforts. Evaluation 2018 offered hundreds of workshops and presentations on the latest trends, technology, tools and practices that can enhance learning.

IRI’s Office of Monitoring, Evaluation and Learning (OMEL) showcased its expertise on eight panels at the American Evaluation Association’s annual conference in Cleveland, OH, last month. Beyond that, we gained new insights to improve our MEL efforts from a wide variety of evaluators and implementers.

These were our five main takeaways from the conference:

1. Inclusive MEL is critical, but we must expand how we think about inclusivity.

Inclusive evaluation methods need to go beyond gathering diverse stakeholders in the room. We should consider power dynamics and participants’ capacities, and we should build inclusivity into every stage of the evaluation process, including evaluation design, data collection, analysis, data validation and data use. Including multiple stakeholders and empowering them to contribute to or co-lead these processes through participatory methods broadens the perspectives represented in the evaluation and increases buy-in to use the findings and recommendations.

2. The utility of participatory methods in democracy and governance work continues to grow.

Participatory methods, including Most Significant Change and Ripple Effect Mapping, can be a great fit for capturing outcome-level results. These approaches are well suited to complex evaluations, where it is hard to predetermine outcomes and impact. Although we have already incorporated these methods into a number of programs, hearing about colleagues’ successful use of them in the field provided additional evidence of their effectiveness and practical insight into how best to apply them.

3. Data visualization makes evaluation data easier to understand and more useful.

Long, complex evaluation reports are hard to digest. The challenge is to strike a balance between deliverables that include all the necessary data and one-pagers or “tweetable impact” statements that are easier to absorb. The key to making MEL products more useful is to tailor their format to the needs of different audiences, whether that is a snappy infographic, a concise slide deck or an easy-to-navigate report. Even quarterly reports can be made more engaging with simple data visualization.

4. Contextual understanding greatly enhances evaluation findings.

Using tools to monitor and understand the context of a program can increase the usefulness of evaluation findings and help explain results that might otherwise be confusing. Beyond the established methods of political economy analysis and stakeholder mapping, conducting site visits – not only visiting a field office, but witnessing programmatic activities – can help explain evaluation findings and inform recommendations. These methods help evaluators and program staff understand the factors that can hinder or catalyze their activities, outputs, outcomes and impact.

5. Programs that operate in dynamic and complex settings require flexible MEL approaches.

Programs in dynamic and complex environments may not operate under static theories of change or logic models, but still need MEL systems and structures to help navigate uncertainties, demonstrate results and hold implementers accountable. For IRI and other DRG implementers, there is a need for continued focus on developing and using MEL tools and approaches that are responsive to dynamic environments.

As IRI and the larger DRG community embrace learning, this conference provided OMEL staff with an opportunity to learn from others and share our insights. We look forward to implementing these takeaways and continuing to improve IRI programming.
