A major focus of Giving What We Can’s research team is evaluating charity evaluators and grantmakers — this page explains what this is, why this is our focus, and how we do it.
We want to make the best possible recommendations to donors looking to maximise their impact. To be able to serve donors with a variety of values and starting assumptions, and to cover as many causes and charities as possible, we rely on third-party expert evaluators for our recommendations and grantmaking, rather than evaluating individual charities ourselves. We evaluate evaluators and their methodologies so that we can always rely on the highest-quality and most up-to-date recommendations available across a range of causes.
We currently rely on the following evaluators/grantmakers for our charity recommendations:
We evaluate evaluators to decide which ones:
At a high level, we do this by:
In late 2023, we conducted the following six evaluations:
We think it’s the best way we can help donors maximise their impact.
We don’t have the capacity to evaluate individual charities ourselves — there are far too many for just one research team to cover, more than a million in the US alone! — so we need to turn to other expert evaluators and grantmakers focused on impact. By our count, there are now over a dozen impact-focused grantmaking and charity evaluation organisations, some of which provide different charity recommendations in the same cause area. This leaves us, other effective giving organisations, and donors with an important choice on whose recommendations to follow.
Before 2023, we made this choice based on factors like the public reputation of an evaluator in the effective giving ecosystem, and whether its stated approach seemed to broadly align with our donors’ goals. But we wanted to do better, and thought it would be valuable to provide donors with more information about evaluators.
Beyond making our recommendations to donors, we think there are several extra benefits to evaluating evaluators:
There are substantial limitations to our first iteration of this project, which we did in 2023, but we nevertheless think that this is a significant improvement on the status quo, in which there were no independent evaluations of evaluators’ work. We discussed some of our concerns with this status quo when we first announced our research direction at the end of 2022.
In this section, we highlight:
As we do with our impact evaluation, we aim for usefulness, transparency, and justifiability, rather than comprehensiveness and procedural uniformity. Put another way, we aim to communicate transparently how we use our judgement to identify the areas that seem most useful to investigate, so that we can reach a justifiable decision on whether and how to defer to an evaluator. One implication of this approach is that we are flexible in what we choose to investigate (making each evaluation different); another is that we are open to stopping an evaluation once we feel able to make a justifiable decision.
We also aim to avoid surprises for evaluators by sharing our thinking with them throughout the process. This is partly because we want to work with evaluators to understand and improve their approach, rather than simply judging them, and partly because we value their expertise.
In our first iteration of this project in 2023, we looked into six different evaluators across three high-impact cause areas:
We think of a “high-impact cause area” as a collection of causes that, for donors with a variety of values and starting assumptions (“worldviews”), provide the most promising philanthropic funding opportunities. Donors with different worldviews might choose to support the same cause area for different reasons. For example, some may donate to global catastrophic risk reduction because they believe this is the best way to reduce the risk of human extinction and thereby safeguard future generations, while others may do so because they believe the risk of catastrophes in the next few decades is sufficiently large and tractable that it is the best way to help people alive today.
Because of our worldview-diverse approach, we chose to evaluate evaluators in the three cause areas we think contain some of the most cost-effective funding opportunities across a broad range of plausible worldviews (rather than taking a view on how impact varies across these cause areas). As a research team, we think we can add most value within a cause area, whereas donors can decide for themselves which cause areas best align with their worldview.
Our choice of these three cause areas (global health and wellbeing, animal welfare, and reducing global catastrophic risk) has been informed by global priorities research from organisations like Open Philanthropy, the Global Priorities Institute, and (in the past) the Centre for Effective Altruism.
There are some promising philanthropic cause areas that we did not (yet) include (such as climate change). We intend to keep evaluating new cause areas and evaluators to add further recommendations, provided we find a strong enough case exists that, from a sufficiently plausible worldview, a donor would choose to support those cause areas over other options.
We explain the reasons for choosing each evaluator within each evaluation report. Among other reasons, in 2023, these choices were informed by:
The choice of which evaluators to prioritise affects our overall recommendations for 2023. For example, because we have not yet evaluated Founders Pledge, we have not used its research to inform our recommendations so far. This lack of comprehensiveness is one of the key limitations of our project's first iteration. We try to partially account for it by highlighting promising alternatives to our recommendations on our donation platform, and by providing resources for donors to investigate these further.
As discussed above, a key goal for our evaluations project was to decide which evaluators to rely on for our recommendations and grantmaking. We were additionally interested in providing guidance to other effective giving organisations, providing feedback to evaluators, and improving incentives in the effective giving ecosystem.
For each evaluator, our evaluation aimed to transparently and justifiably come to tailored decisions on whether and how to use its research to inform our recommendations and grantmaking. Though each evaluation is different — because we tried to focus on the most decision-relevant questions per evaluator — the general process was fairly consistent in structure:
Our 2023 evaluations had various limitations, which are detailed in each evaluation report.
Several limitations apply to the project as a whole, some of which we’ve discussed above as well:
There are also a few limitations that were present across all or most evaluations we conducted this year:
Given these limitations, we aimed to:
Even with our efforts to take an approach that prioritises transparency, justifiability, and usefulness, we recognise that there are still significant limitations to our evaluations, and we see the first iteration of this project as a minimum viable product that we look forward to improving on in future iterations. However, as mentioned above, we think doing these evaluations represents a significant improvement on the previous situation, in which there were no independent evaluations of evaluators' work that we (or donors and other effective giving organisations) could rely on.
Rather than evaluating individual charities, since 2023 Giving What We Can has evaluated which third-party expert evaluators donors can best rely on to maximise their impact. This allows us to make even higher-quality fund and charity recommendations to donors with a wide variety of values and starting assumptions. We think this represents a big improvement over how we previously chose which evaluators to work with (based on rough heuristics), even if it still has limitations. It has also facilitated the launch of our cause area funds, which offer a reliable default option for donors who want their money allocated according to our latest research.
Over time, we want to expand to more cause areas and evaluators, go more in-depth where it's useful, and keep refining our process based on feedback. Most importantly, we'll keep focusing on empowering donors and collaborating with evaluators to help donors have the biggest impact. We're grateful to all the evaluators who worked with us on this project so far, and look forward to continuing to improve together.
If you would like to see our current giving recommendations, check out our best charities page. For the full selection of programs Giving What We Can supports, see our donation platform.