
Learning From Failure 2022

In 2019 and 2020, CARE published Learning from Failures reports to better understand common problems that projects faced during implementation. Deliberately looking for themes in failure has helped CARE as an organization and provides insight on what is improving and what still needs troubleshooting. This report builds on the previous work to show what we most need to address in our programming now.
As always, it is important to note that while each evaluation in this analysis cited specific failures and areas for improvement, that does not mean the projects themselves were failures. Of the 72 evaluations in this analysis, only 2 covered projects that failed to deliver on more than 15% of their goals; the rest delivered on at least 85% of their commitments. Rather, failures are issues within CARE’s control whose correction will improve impact for the people we serve.
To fully improve impact, we must continue to include failures in the conversation. We face a complex future full of barriers and uncertainties, and an open space to discuss challenges across the organization strengthens CARE’s efforts to fight for change. Qualitative analysis provides critical insights that quantitative data cannot: the stories behind these challenges help us understand how to develop solutions.
CARE reviewed a total of 72 evaluations from 65 projects, with 44 final reports published between February 2020 and September 2021 and 28 midterm reports published between March 2018 and October 2020. Seven projects had both midterm and final evaluations at the time of this analysis. For ease of analysis, as in previous years, failures were grouped into 11 categories (see Annex A, the Failures Codebook for details).

Results
The most common failures in this year’s report are:
• Understanding context—both in the design phase of a project and in refining that understanding as circumstances change throughout the life of the project, rather than treating context analysis as a concentrated phase separate from implementation. For example, an agriculture project built its activities assuming that all farmers would have regular internet access, only to find that fewer than 10% of project participants had smartphones and that the local network was unreliable; it had to significantly redesign both activities and budgets.
• Sustainability—projects often faced challenges with sustainability, particularly in planning exit strategies. Importantly, one of the core issues with sustainability is involving the right partners at the right time: 47% of projects that struggled with sustainability also had failures in partnership. For example, a project assumed that governments would take over training for project participants once the project closed, but failed to include handover activities with government at the local level; its activities and impacts were not set up to be sustainable.
• Partnerships—strengthening partnerships at all levels, from government stakeholders to community members, and building appropriate feedback and consultation mechanisms is the third most common weakness across projects. For example, a project that did not include local private-sector actors in its gender equality trainings, assuming the private sector would automatically serve women farmers, found that women were not getting services or impact at the right level.
Another core finding is that failures at the design phase can be very hard to correct. While projects improve significantly between midterm and endline, certain kinds of failure are difficult to overcome once implementation is underway. Major budget shortfalls, a MEAL plan that does not provide quality baseline data, and insufficient investment in understanding context over the entire life of a project are less likely to improve over time than partnerships and overall MEAL processes.
Some areas also showed marked improvements after significant investments. Monitoring, Evaluation, Accountability, and Learning (MEAL), Gender, Human Resources, and Budget Management all show improvements over the three rounds of learning from failures analysis. This reflects CARE’s core investments in those areas over the last 4 years, partly based on the findings and recommendations of previous Learning From Failure reports. Specifically, this round of data demonstrates that the organization is addressing gender-related issues: not only are there fewer gender-related failures overall, but the difference between midterm and final evaluations shows how effective these investments are in reducing failures related to engaging women and girls and to addressing the structural factors that limit their participation in activities.
Another key finding from this year’s analysis is that projects improve over time. For the first time, this analysis reviewed mid-term reports in an effort to catch failures early enough in the process to adjust projects. Projects report much higher rates of failure at midterm than at final evaluation. Where we could compare midline and endline results within the same project, a significant number of failures that appeared at midterm were resolved by the end of the project. On average, mid-term evaluations reflect failures in 50% of possible categories, while final evaluations show failures in 38%. Partnerships (especially around engaging communities themselves), key inputs, scale planning, and MEAL are all areas that show marked improvement over the life of a project.

Learning from Failure 2020

Part of striving for the deepest and most sustainable impact at the biggest scale possible is understanding what doesn’t work. CARE’s commitment not only to the highest quality programming, but also to continual improvement, drives us to celebrate our successes and to examine our failures. In 2019, CARE published our first Learning From Failure report, which looked at what project evaluations told us was going wrong and at areas where we could strengthen our programming to improve impact. By analyzing trends across many projects, CARE can identify the systemic weaknesses that lead to failures in specific cases. We pair this analysis with our podcast, which presents individual case studies of specific failures and how teams addressed them, so that broad trends are grounded in concrete examples. That gives us the space to make bigger strategic changes to address underlying causes of failure and to support teams at all levels. One example is targeting CARE’s investments in Monitoring, Evaluation, Accountability, and Learning (MEAL) systems and capacity building to address common failures we found. In 2020, we repeated the analysis to see where we are improving, and where we still need work.

Learning From Failure 2019

Driven by a wish to learn more from what goes wrong in our programming, and to examine where changes to the broader organization and system can improve our programming and impact globally, CARE undertook its first evaluation-based failure meta-analysis in 2019. The analysis draws learning and evidence from 114 evaluations of CARE’s work from 2015 to 2018 to understand patterns and trends in what goes wrong. This helps us take a data-driven approach to strategic investments and action plans, living out CARE’s commitment to high program quality and continuous improvement across the board.
The review draws from project-specific data, but deliberately anonymizes it and focuses on overarching trends, so that no specific project team or set of individuals is blamed. The exercise is designed to help us learn how to change our processes and our patterns of support and engagement around weak areas. CARE is using this data to build action plans and next steps to continuously improve our programming.
