Learning From Failure 2022
Publication Date: 10/11/2022
In 2019 and 2020, CARE published Learning from Failures reports to better understand common problems that projects faced during implementation. Deliberately looking for themes in failure has helped CARE as an organization and provides insight into what is improving and what still needs troubleshooting. This report builds on that previous work to show what we most need to address in our programming now.
As always, it is important to note that while each evaluation in this analysis cited specific failures and areas for improvement in the project it reviewed, that does not mean that the projects themselves were failures. Of the 72 evaluations in this analysis, only 2 showed projects that failed to deliver on more than 15% of the project goals. The rest delivered on at least 85% of their commitments. Rather, failures are issues within CARE’s control whose resolution will improve impact for the people we serve.
To fully improve impact, we must continue to include failures in the conversation. We face a complex future full of barriers and uncertainties. Allowing an open space to discuss challenges and issues across the organization strengthens CARE’s efforts to fight for change. Qualitative analysis provides critical insights that quantitative data cannot: the stories behind these challenges help us understand how to develop solutions.
CARE reviewed a total of 72 evaluations from 65 projects: 44 final reports published between February 2020 and September 2021 and 28 midterm reports published between March 2018 and October 2020. Seven projects had both midterm and final evaluations at the time of this analysis. For ease of analysis, as in previous years, failures were grouped into 11 categories (see Annex A, the Failures Codebook, for details).
The most common failures in this year’s report are:
• Understanding context—both in the design phase of a project and refining the understanding of context and changing circumstances throughout the whole life of a project, rather than in a concentrated analysis phase separate from project implementation. For example, an agriculture project that built its activities assuming that all farmers would have regular internet access, only to find that fewer than 10% of project participants had smartphones and that the network in the area was unreliable, had to significantly redesign both activities and budgets.
• Sustainability—projects often faced challenges with sustainability, particularly in planning exit strategies. Importantly, one of the core issues with sustainability is involving the right partners at the right time: 47% of projects that struggled with sustainability also had failures in partnership. For example, a project that assumed governments would take over training for project participants once the project closed, but failed to include handover activities with the government at the local level, found that activities and impacts were not set up to be sustainable.
• Partnerships—strengthening partnerships at all levels, from government stakeholders to community members, and building appropriate feedback and consultation mechanisms, is the third most common weakness across projects. For example, a project that did not include local private sector actors in its gender equality trainings and assumed that the private sector would automatically serve women farmers found that women were not receiving services or impact at the right level.
Another core finding is that failures at the design phase can be very hard to correct. While projects improve significantly between midterm and endline, some kinds of failure are difficult to overcome over time. Major budget shortfalls, a MEAL plan that does not provide quality baseline data, and insufficient investment in understanding context over the entire life of a project are less likely to improve over time than partnerships and overall MEAL processes.
Some areas also showed marked improvements after significant investments. Monitoring, Evaluation, Accountability, and Learning (MEAL), Gender, Human Resources, and Budget Management are all categories that show improvements over the three rounds of learning from failures analysis. This reflects CARE’s core investments in those areas over the last 4 years, partly based on the findings and recommendations from previous Learning From Failure reports. Specifically, this round of data demonstrates that the organization is addressing gender-related issues. Not only are there fewer failures related to gender overall, but the difference between midterm and final evaluations also shows how effective these investments are in decreasing the incidence of “failures” related to engaging women and girls and addressing the structural factors that limit participation in activities.
Another key finding from this year’s analysis is that projects are improving over time. For the first time, this analysis reviewed mid-term reports in an effort to identify failures early enough in the process to adjust projects. Projects report much higher rates of failure at midterm than they do at final evaluation. In the projects where we compared midline to endline results within the same project, a significant number of failures that appeared in the mid-term evaluation were resolved by the end of the project. On average, mid-term evaluations reflect failures in 50% of possible categories, while final evaluations show failures in 38%. Partnerships (especially around engaging communities themselves), key inputs, scale planning, and MEAL are all areas that show marked improvement over the life of a project.