Maria Patrocollo, MPH

Senior Writer

We aspire to use evaluation data to continually improve.

All too often, though, conducting evaluations and delivering their results is a time-bound, one-consultant-and-done process that briefs the relevant USAID Mission on a program's successes and challenges but falls short of its potential impact on future programming. Evaluations are crucial tools for redefining priorities, tweaking interventions, and shaping future programming. Yet most evaluation processes fail to foster local investment in, or buy-in to, the results, and fail to workshop and disseminate the findings in ways that enable improvement. And rarely is the process used to guide how in-country stakeholders can work together to build systems for ongoing learning.

A Purposeful Process

From its inception, USAID's HEARD Project has been intentional in ensuring that the entire evaluation process, from design through data collection, analysis, and results reporting, grows local investment and capacity and spearheads the development of relevant policies and programs to address identified priorities and gaps. We've ensured that USAID, the relevant government(s), and other stakeholders make full use of the evaluations conducted. The potential is profound when evaluation results carry the buy-in of key in-country actors who sit in the decision-making spaces that shape future policies and programs.

Locally Driven, Locally Relevant

Local partners and the subregional anchor partners of the Implementation Science Collaborative (ISC) have taken the lead in conducting the HEARD Project's evaluations.

In soliciting a local evaluator, the HEARD Project has been mindful of the heavy bureaucratic burden and proposal-development know-how typically required to compete for an award, and has tried to keep the process as simple as possible. By streamlining proposal requirements and placing emphasis on strong capacity statements and concept notes, the project has been able to identify the local partner with the greatest strategic fit and capacity potential. Together with the selected evaluation partner, the HEARD Project has then shaped the evaluation's scope after award.

Facilitating Stakeholder-Driven Evaluations

The HEARD Project's implementation science approach to evaluation scoping, design, and implementation has ensured that local decision-makers and implementers are engaged in a live evaluation process. By bringing the right people together at the right time to identify questions that reflect everyone's priorities for the evaluation to answer, we have fostered local buy-in and relevance from the very beginning of the evaluation process: the design phase.

Together, the relevant USAID Mission, government, and all implementing partners brainstorm which additional stakeholders should be involved in the evaluation process. This includes who should be part of the design, data collection, and results gathering; who should be considered as an interviewee or as a member of the strategic constellation of partners reviewing the results; and how to create more demand for the evaluation findings.

Local Capacity Development Through the Process

The HEARD Project's partnership approach, enacted through the ISC, has involved a broad range of local, regional, and global partners. The decision to be participatory has permeated the project: local partners have truly taken the lead. And we've made efforts to engage them in more than data collection; they have also analyzed the data and carried the evaluations through to results and recommendations. Where needed, URC and other global partners have been available for technical assistance and capacity development support. Through this approach, the HEARD Project has built local evaluation capacity through the evaluation process itself.

Partnering and Process – Using an Inclusive Implementation Science Methodology

By engaging a broader base of invested stakeholders in the evaluation process, the HEARD Project’s evaluation approach has been much more participatory and flexible than the norm.

Traditionally, the evaluator and the implementer being evaluated figuratively sit on different sides of the table. At times, the process has been treated as if these parties don't share the same end goal: to improve implementation. By bringing these stakeholders together early, the evaluation scoping process becomes more consultative, helping to unpack the "black box of implementation" and steer the evaluation team toward priority focus areas.

Sustaining the Improvement Process for Locally Owned Change

The ultimate purpose of evaluation is to guide better implementation, in both policy and practice. This can only be achieved if evaluation findings are locally relevant and useful to stakeholders who both buy into the findings and want to use them.

Evaluations carried out through the HEARD Project have informed USAID Mission Country Development Cooperation Strategies (CDCSs), the five-year strategic plans for U.S. government development assistance to a country, as well as USAID's future procurements. They've informed government policies and national action plans. And they've helped USAID and governments determine what impact their investments have had on health outcomes, informing future investment decisions.

Country Highlight: Guinea

Design Phase: The Guinea Mission appreciated the government's involvement in this phase, considering it critical for evaluators to better understand the context and how interventions were being implemented in practice.

Results Phase: Early engagement with government stakeholders increased acceptance and appreciation of the findings and recommendations, as evidenced by the request to disseminate the report broadly to subnational government stakeholders to inform their practice.

Applying Learning: The results informed the detailed discussion of USAID's new concept note. The new Activity built on recommendations to bring critical interventions together in one integrated health service activity. The evaluation also underscored the importance of concentrating implementation in fewer locations to demonstrate the strategy's potential, which in turn highlighted the need for greater donor collaboration to ensure complementary implementation coverage in other districts and regions.

Locally driven evaluations set in place a process for more sustainable future impact by shifting evaluation activities toward ongoing, embedded learning systems. They have thus served as a vehicle for using and strengthening a country's collaborating, learning, and adapting (CLA) system. The HEARD Project has strengthened these learning systems to ensure that learning gleaned from evaluations is institutionalized and inclusive of all relevant stakeholders, and that any subsequent change is locally owned and developed.

Samantha Ski, DrPH, who has led the project’s evaluation practices since their inception, shared, “Working with stakeholders and partners, the HEARD Project has used evaluations as vehicles to maximize program learning and drive co-creation of strategic and operational recommendations from evaluation findings.”

By intentionally embedding evaluations locally, the HEARD Project has ensured that future improvements will be driven by local capacity, including the local capacity to self-evaluate in order to improve.