Making Policy-Relevance Relevant: How Researchers and Practitioners Can Speak the Same Language

Last month, we attended a workshop hosted by Evidence in Governance and Politics (EGAP), a cross-disciplinary network of researchers and practitioners dedicated to generating and disseminating experimental research on governance, politics and institutions. As an institutional member of EGAP, IRI saw the workshop as an important opportunity to engage academics who do policy-relevant research. Since its founding in 2009, EGAP has played a critical role in bringing together researchers and practitioners to advance evidence-based policymaking.

The work of EGAP and others has challenged practitioners to think critically about the kinds of evidence necessary to prove impact, especially through advanced quantitative methods like randomized controlled trials (RCTs). By now, most agree that RCTs are just one tool in the research methods toolbox that can be used to build a credible, accessible evidence base. However, many practitioners still find it difficult to understand or use the data that RCTs yield, and some are skeptical of their value. This seems especially true for practitioners in the democracy, human rights and governance (DRG) sector, who often believe the impact of their work is particularly difficult to quantify.

In short, experimental researchers use highly technical methods many practitioners don’t understand; confidently quantify concepts like norms and culture that most practitioners feel cannot be accurately measured; and often produce “null” findings (i.e., findings that the program had no detectable impact) for interventions that practitioners strongly believe are effective based on their frequent and direct interaction with beneficiaries.

At the same time, we know that practitioners are hungry for rigorous evidence to inform their programs. How can researchers and practitioners come together more effectively? We suggest the following tips.

For experimental researchers:

  1. Emphasize policy implications. The premise of EGAP is that experimental methods are the best way to evaluate impact, and that policymakers and practitioners should therefore take experimental findings seriously. In many experimental studies, however, the policy implications are only briefly outlined, poorly developed, or buried in the conclusion. EGAP and JPAL policy briefs have helped translate experimental findings into actionable guidance, but experimental researchers could try to feature policy relevance more prominently in their full articles.
  2. Focus on more realistic interventions. Many experimental researchers test treatments designed to refine theory rather than treatments that resemble real-world interventions. Tests of theory often don’t reflect the content, timing, or intent of actual DRG interventions. As important as a null finding might be, it is easy for practitioners to dismiss null or negative results if the intervention doesn’t look like their programs.
  3. Focus on “how” as well as “whether.” Experimental studies tend to conclude that an intervention was either effective or not. Yet practitioners are often less concerned with whether a program had a statistically significant impact than with why and how programs, even those with little quantifiable impact, worked for some participants. Practitioners are unlikely to stop doing their work because of a single negative experimental result. To change practice, experimental research needs to be diverse enough to offer positive ways forward for addressing the problems the failed intervention was designed to affect. Some academics advocate combining qualitative methods with experiments to generate such nuanced insights, a promising approach to producing research that is meaningful to practitioners.
  4. Demonstrate your knowledge of context. The impressive amount of time and energy experimental researchers put into learning a country context is often hidden in their field notes, or even in their heads. Their published reports and articles may barely refer to this “pre-research,” relying on experimental rigor alone to establish the authors’ credibility. In contrast, practitioners pride themselves on contextual knowledge and tend to dismiss “generalists” analyzing their countries or regions. We recognize that academic journals have specific expectations and requirements, but papers that do not demonstrate deep country expertise will likely fail to win over practitioners. Furthermore, this “pre-research,” and the conclusions drawn from it to inform the experimental design, may often be just as valuable to practitioners as the experimental findings themselves.

For practitioners:

  1. Don’t fear what you don’t understand. For many in the policy world, including us, RCTs lie beyond our methodological expertise. Nevertheless, practitioners should focus on the assumptions made, the intervention design, and the results when reading experimental studies. At minimum, these studies can validate or cast doubt on the logic that undergirds many interventions.
  2. Embrace null findings. As we discussed above, there’s good reason for practitioners to want more than simply null, or even validating, findings on impact. Too often, however, practitioners misinterpret null findings as saying that an intervention does not work under any conditions, and consequently disregard the study. Instead, null findings should prompt practitioners to reexamine the nuances of their theories of change. Null findings might also indicate a real problem not with the intervention itself, but with its intensity, frequency, or targeting. When a small amount of money is spread across a large population in a one-time intervention, for example, it is no surprise that the effects are not statistically detectable (see the sketch after this list).
  3. Be willing to engage with researchers and the research. Seek out academic research and literature that is relevant to your specific technical focus and continually strive to update your own understanding based on new data. Rather than simply dismiss data that do not match your specific lived experiences, be ready to dig deeper and try to understand what the research might suggest for improving your project design or refining your project’s theories of change.
  4. Acknowledge that theories of change are sometimes wrong. Theories should be tested over time and, if needed, refined or adapted to reflect new learning and a changing world. 
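To make the intensity point in tip 2 concrete, here is a minimal back-of-the-envelope power calculation in Python. The numbers are hypothetical, chosen purely for illustration: we assume a light-touch, one-time intervention shifts the outcome by only 0.05 standard deviations, a more intensive version shifts it by 0.30, and each arm has 1,000 respondents. The sketch uses the statsmodels power calculator for a simple two-group comparison.

    # Hypothetical power calculation for a two-arm experiment.
    # Effect sizes and sample size are illustrative assumptions,
    # not drawn from any real study.
    from statsmodels.stats.power import TTestIndPower

    power_calc = TTestIndPower()
    n_per_arm = 1000   # respondents in each arm
    alpha = 0.05       # conventional significance threshold

    for label, effect in [("diluted, one-time", 0.05), ("intensive", 0.30)]:
        power = power_calc.solve_power(effect_size=effect,
                                       nobs1=n_per_arm, alpha=alpha)
        print(f"{label} intervention (d = {effect}): power = {power:.2f}")

    # Smallest effect this design detects 80% of the time:
    mde = power_calc.solve_power(nobs1=n_per_arm, alpha=alpha, power=0.8)
    print(f"minimum detectable effect: d = {mde:.2f}")

Under these assumptions, the diluted intervention has only about a 20 percent chance of producing a statistically significant result even if its effect is real, and the smallest reliably detectable effect is more than twice its assumed effect size. A null finding in that scenario says more about dosage and sample size than about the underlying theory of change.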

Advice on “bridging the gap” between policy and academia is not new, but these points are worth reiterating because the divide between RCT proponents and many practitioners persists. For us, the EGAP workshop confirmed the value of RCTs and other novel approaches to both gathering evidence and informing project design. But it also showcased the need for better communication between the two sides: practitioners need to be more explicit about what they need to know, and experimental researchers need to better articulate how their findings should be used to reshape policy decisions. Initiatives like EGAP provide an important venue for this exchange and are vital to cultivating it.
