Philanthropic diversification

September 2016

This was written over a short period while interning at Giving What We Can.

An open question in philanthropy is whether all of an individual or organisation’s donations should be directed to a single charity or giving opportunity, or whether one should consider splitting between multiple targets (Greaves 2019). This problem is particularly relevant to the effective altruism (EA) movement, which aims to bring about the greatest positive impact, since the impact-per-dollar difference between charities might be a factor of ten, a hundred, or even a thousand.

The Disease Control Priorities in Developing Countries 2nd Edition (DCP2), a comprehensive collection of articles on setting priorities in global health, estimates the cost-effectiveness of 108 interventions on different types of illness in terms of disability-adjusted life years (DALYs) per $1,000, shown in the graph below (Jamison et al. 2006). DALYs are commonly used in public health to compare health outcomes, and are calculated by summing the years of life lost to premature death and the years lived with the condition, weighted by its severity.

[Chart: cost-effectiveness of the DCP2 interventions, in DALYs averted per $1,000]

These cost-effectiveness estimates range from 0.02 to 300 DALYs per $1,000, with a median of 5, meaning that the best program might produce 15,000 times the benefit of the worst, and 60 times the benefit of the median. The distribution is also heavily skewed, suggesting that the most cost-effective interventions are rare, so donors should research carefully before deciding how to split their giving between interventions.
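To make the unit concrete, here is a minimal sketch of the DALY calculation described above; the function and its illustrative numbers are our own assumptions, not figures from the DCP2.

```python
# A minimal sketch of the DALY calculation described above. The function
# and the illustrative numbers are assumptions, not DCP2 figures.
def dalys(years_of_life_lost, years_with_condition, disability_weight):
    """DALYs = years of life lost to premature death (YLL), plus years
    lived with the condition weighted by its severity (YLD)."""
    yll = years_of_life_lost
    yld = years_with_condition * disability_weight
    return yll + yld

# A condition causing 5 years of premature death plus 10 years lived at
# disability weight 0.2 contributes 5 + 10 * 0.2 = 7 DALYs.
print(dalys(5, 10, 0.2))  # 7.0
```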

An important measure in cost-effectiveness analysis is the expected value, an average of the value (utility, or “goodness”) of each possible outcome, weighted by its probability. For example, suppose we are giving away £100, and have a choice between charity A, which distributes deworming pills to children for £1 each with 90% effectiveness, and charity B, which does the same except the pills are £0.50 with 70% effectiveness. The expected number of children dewormed by giving to charity A is 100/1 × 0.9 = 90, whereas it is 100/0.5 × 0.7 = 140 for charity B, so we should donate to B, all else equal, as it has the greater expected value.
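The same comparison in a few lines of Python, a sketch of the worked example above with the two hypothetical charities as inputs:

```python
# Expected number of children dewormed, for the hypothetical charities in
# the example above: the budget divided by the price per pill, weighted by
# the probability that a pill works.
def expected_children_dewormed(budget, price_per_pill, effectiveness):
    return (budget / price_per_pill) * effectiveness

print(expected_children_dewormed(100, 1.00, 0.9))  # charity A: 90.0
print(expected_children_dewormed(100, 0.50, 0.7))  # charity B: 140.0
```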

This is the typical argument against donation splitting: in principle, there must exist a single most cost-effective charity with the highest expected value, so a purely rational agent might say that if a donor is looking to achieve the most good with their donation, it should be given here.

While somewhat convincing, this reasoning does not take into account a range of important considerations, whether psychological, social, economic, philosophical, or otherwise. This includes various quirks of human psychology, the social context of giving, a lack of information about the best opportunities, the value of information gained by exploring new ones, the difficulties estimating cost-effectiveness, the donation size, coordination problems, and other concerns.

In this literature review, we will investigate reasons for and against splitting, and explore whether these are justified under the assumption that the donor is trying to do the most good. We will also look at how the size of the donation itself changes some of the questions we might ask.

What are the main reasons that donors split?

Heuristics and Biases

Loss Aversion. There is a great deal of evidence showing that actual human behaviour frequently departs from the expected value model in systematic ways. Kahneman & Tversky (1979) demonstrated this in their paper on prospect theory, by showing that people are loss averse - the utility gained from acquiring an object is smaller than the utility lost by giving it up.

In one experiment (Kahneman et al. 1990), students were seated in a classroom setting, and a decorated mug, worth about $5 at retail, was placed in front of some of them (the “sellers”), along with a questionnaire stating that they now owned the mug, and could indicate whether they would like to sell it, and at what price, between $0.50 and $9.50. Other students (the “choosers”) who had not received a mug were also given a questionnaire, with the option of either receiving a mug or an amount of money, also between $0.50 and $9.50, indicating their preferences between the two.

The sellers seemed to frame the experiment as a loss, as they could either keep the mug, or exchange it for a sum of money. The choosers, however, thought of it as a gain: they began with nothing, but would receive either a mug or some money, so were strictly better off. In two tests, the median values of the mug were $7.00 and $7.12 for the sellers, and $3.12 and $3.50 for the choosers - while the choices the two groups made were equivalent, simply giving the sellers ownership of the mug made them value it twice as highly as the choosers, even though it is unlikely they would have paid that much to purchase one if they were not granted ownership.

In Choices, Values and Frames, Kahneman and Tversky (2000) describe another experiment in which the majority of subjects were shown to be risk averse. They were offered either a sure gain of $800, or an 85% chance of $1,000 (and a 15% chance of nothing), which has the higher expected value of $850. The majority chose the former, which runs counter to expected utility theory.

This might imply that people are more likely to donate to a cause with a high probability of a small effect, over one with a lower probability of having a huge effect, which may not make sense from an impact perspective. Kuhn (2014) writes:

The feeling of higher stakes is basically because of loss aversion bias—it feels disproportionately worse to make one really bad choice (like donating everything to your top charity when the second-place one later turns out to be better) than to make a lot of slightly bad choices (like splitting your donation among your top five charities when your top charity was actually the best one). There’s just as much potential to make a bad decision when donating to multiple charities, but there’s less potential to feel bad about it.

People often donate periodically, such as every few months or once a year, rather than in a single lump sum, and their expectation or ranking of a charity or intervention’s cost-effectiveness can change over time as they learn new information, which people in effective altruism seem particularly likely to do. If donors feel especially bad when estimates for a charity they have given a large amount of money to decrease, this may dampen their excitement and willingness to donate in future, so it may make sense to split if such news could take a toll in the long run. Snowden (2015) argues that all that matters is the possibility of hearing bad news:

In reality, [a donor] might never find out the actual impact of his donation. But the argument still holds if there is some risk that new evidence may come to light, suggesting that the charity to which he donated had less of a chance of doing good than previously thought. For example, a recent re-evaluation of a landmark study on mass deworming found that errors in the calculations meant that the positive spillover effects of mass deworming treatment on school attendance were no longer significant (Aiken, Davey, Hargreaves, & Hayes, 2015; Miguel & Kremer, 2004).

Snowden goes on to say that, for this reason, if a set of charities are sufficiently close in their estimated impact, it may be rational to deviate from expected utility theory and split:

Nevertheless, as [the donor’s] past and future selves can be considered as holding separate sets of preferences, it is accurate to describe [the donor’s] past self as purely altruistic while being aware that, under certain conditions, his future self might fall short of this lofty ideal. His past self therefore adopts a sophisticated choice procedure to realise the outcomes which are in fact the object of his past self’s purely altruistic preferences.

Scope Neglect. One less valid reason why people might donate to multiple charities is scope neglect, where the valuation of a problem is not proportional to its size. Perhaps the best known example of this is Desvousges et al. (1992), where three groups of subjects were told that varying numbers of migrating birds die each year by drowning in uncovered oil ponds, which could be prevented by covering the ponds with nets, and were asked how much they would be willing to pay to provide the nets. For 2,000, 20,000, and 200,000 birds, the mean responses were $80, $78, and $88, so the number of birds saved did not seem to matter.

Carson and Mitchell (1995) showed that the effect also occurs when valuing human lives. When the supposed risk associated with chlorinated drinking water was increased by a factor of 600, from 0.004 to 2.43 annual deaths per 1,000, people’s willingness to pay rose only by a factor of 4, from $3.78 to $15.23. Baron & Greene (1996) found no effect when varying the number of human lives saved by a factor of 10.

There are a few different explanations for this phenomenon. One is that people tend to imagine just a single bird suffering, and have difficulty visualising large numbers - “judgment by prototype” (Kahneman et al. 1999). This might affect how much somebody is willing to give to any particular cause, and how important it seems, so they might instead spread their charitable giving over multiple causes. Another possibility is that a person’s willingness to pay is mostly unrelated to the birds, and they are simply spending enough to create a “warm glow” feeling (Kahneman & Knetsch 1992). If donating £100 to one charity feels as rewarding as donating £1,000, people may spread their giving out to create multiple warm glows.

Finally, Harrison (1992) describes a “good cause dump” explanation, that people have a certain amount they are willing to give to a cause such as “the environment”, and any intervention in the area elicits this amount.

Overall, donors should be aware of scope neglect, and aim for a more accurate sense of the scale of each problem. However, it also seems right to be sensitive to other human desires besides maximising impact, like warm glows, as Sittler (2017) says:

To some degree, we may be using splitting to satisfy our urge to purchase “fuzzies”. I say this without negative judgement, I agree with Claire Zabel that we should “de-stigmatize talking about emotional attachment to causes”. I think we should satisfy our various desires, like emotional satisfaction or positive impact, in the most efficient way possible. It may not be psychologically realistic to plan to stop splitting altogether. Instead, one could give as much as possible to the recipient with the highest expected value, while satisfying the desire to split with the small remaining part.

Diversification Bias. Variety can be good when deciding where to travel next, or which book to read, and diversification is a useful strategy in investing in order to reduce risk. However, people will often seek it even when they no longer have reason to do so, especially when making simultaneous rather than sequential choices (Read & Loewenstein 1995).

In one study by Fox et al. (2005), participants were given small amounts of money and asked where it should be donated: One group of participants was asked how to split between a local and an international charity, and the other between five different charities (one international, four local). The median responses in each group were to split 50/50 and to give each of the five charities 20%, which seems unlikely to be optimal for impact.

In Heuristics and Biases in Charity, Baron and Szymanska (2011) break down the bias as applied to charity into three possible explanations: one is the desire to feel multiple warm glows, the second is the failure to distinguish between maximisation and allocation (or discomfort in doing so), and the third is a tendency to think of donations to charities as investments.

Regarding the second of these, Ubel et al. (1996) told their subjects:

All else equal, kidney transplants are more likely to succeed if they are transplanted into patients who ‘match’ the kidney donor. The number of successful transplants one year after transplant is eight out of ten for patients with a complete match and seven out of ten for patients with a partial match.

They were then asked how to maximize the number of successful transplants when allocating 100 kidneys between two groups: 100 patients with a complete match, and 100 patients with a partial match. Fewer than 20% of the participants said to give all the kidneys to the completely matched group, and most said to allocate the kidneys equally, giving 50 to each group.

Another study by Ubel et al. (1996) highlighted the discomfort many people - here a set of prospective jurors, medical ethicists, and experts in medical decision-making - show when making trade-offs based on cost-effectiveness. The groups were asked to choose between two screening techniques for a population at low risk of colon cancer. One was cheaper and would save 10% more lives for the same cost (1,100 versus 1,000), but the screening would only work for half the population, whereas the other was slightly more expensive but applicable to the whole population. Of the three groups, 56% of prospective jurors, 53% of medical ethicists, and 41% of experts in medical decision making recommended the less cost-effective method, mostly justified by concepts of equality and fairness, with some saying that it would be politically infeasible to offer the screening to only half the population, or that the benefit in lives saved for doing so was too small to justify it.

The final explanation given is an analogy with investment - if it is reasonable to diversify here, why not when donating to charity? Splitting makes sense when investing because there are diminishing returns to wealth - the first $10,000 we earn is more valuable than the second, since we buy the things we need most, and if we have $10,000, losing all of it is much worse than gaining another $10,000 is good. This means investing in multiple things is worthwhile to protect against the risk of losing everything, even if we gain less money in expectation. However, in charity, an individual’s donation is unlikely to hit the point of diminishing returns in utility (we consider large donors in the section on diminishing returns), and we can expect each dollar given to do roughly an equal amount of good (Baron & Szymanska 2011).
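A minimal simulation sketch of this asymmetry, using a toy double-or-halve asset and logarithmic utility of wealth (both assumptions of ours, purely for illustration): splitting leaves expected wealth unchanged but raises the investor’s expected utility, whereas a donor whose impact is linear in dollars gains nothing by splitting.

```python
import math
import random

random.seed(0)

# Toy model: each asset independently doubles or halves our money.
# Utility of wealth is logarithmic (diminishing returns), so the investor
# prefers the diversified portfolio even though expected wealth is equal.
def simulate(split, trials=100_000, wealth=10_000, n_assets=10):
    total_wealth = total_utility = 0.0
    for _ in range(trials):
        if split:
            final = sum((wealth / n_assets) * random.choice((2.0, 0.5))
                        for _ in range(n_assets))
        else:
            final = wealth * random.choice((2.0, 0.5))
        total_wealth += final
        total_utility += math.log(final)
    return total_wealth / trials, total_utility / trials

print(simulate(split=False))  # expected wealth ~12,500, lower average utility
print(simulate(split=True))   # expected wealth ~12,500, higher average utility
```

For a small donor, by contrast, each dollar does roughly constant good, so expected impact is just expected dollars, and concentrating on the highest expected-value charity does best.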

There is also evidence that people diversify just for the sake of it. Baron and Szymanska (2011) ran three studies asking questions along the lines of: “A can save one life for $10,000. B can save one life for $12,500. The people helped are from the same groups, with the same problems.” The mean allocations to B in the three similarly phrased questions were between 12% and 18%, and between 38% and 43% gave B at least some of the funding. In other studies they conducted, many people allocated some funds to a less effective charity that ran multiple projects over a more effective charity which ran only one project.

It seems plausible that committing to donate to one charity at a time, so that the decisions become more sequential, could help counteract the bias for variety.

Epistemic Uncertainty

A response to effective altruism is that it is too difficult, or impossible, to compare causes. Karnofsky (2011) called this “denying the choice”:

One of the things that has surprised us about the world of charity is how many people insist on answering, “Both” or “You can’t/shouldn’t be asking that question.” To them, all that matters is whether a charity does some good, not how much good it does or how it compares to other options.

Holly Elmore (2016) writes:

I understand that it’s hard, that we will always instinctively care more for the people we see than those we don’t. There’s no shame in Fink’s deep feelings for Nathalie. They are a key component of compassion. But there should be great shame in letting more people suffer and die than needed to because you can’t look past your own feelings. This is the kind of narrow empathy that Paul Bloom is against.

There are millions of people around the world dying of entirely preventable causes. Why should it make any difference that they aren’t in front of us? You know they are there. They know the suffering they feel. Poverty is a major culprit, as are neglected tropical diseases that could be cured for pennies per person per year. Money that you won’t even miss could be saving lives right now if you put it to that purpose instead of, say, home improvement or collecting action figures. Every decision we make bears on the lives of the myriad others we might be able to help.

We are always in triage. I fervently hope that one day we will be able to save everyone. In the meantime, it is irresponsible to pretend that we aren’t making life and death decisions with the allocation of our resources. Pretending there is no choice only makes our decisions worse.

In a world with limited resources, improving or saving more lives seems strictly better, all else equal. However, we cannot always run a randomized controlled trial to work out the best intervention, especially when the value of an action is thought to lie in long-term effects many years in the future. Calculating expected values explicitly is hard, if not impossible, in most situations. There will be unknowns, such as long-term flow-through effects; intuitions and heuristics which are difficult to quantify or even communicate; and deep questions around comparing improvements to the lives of people alive today with those in the far future, or weighing the suffering of animals in factory farms and the wild against human suffering (Tomasik 2013).

Karnofsky (2011) lays out other misgivings with an explicit expected value approach. One is the possibility of Pascal’s mugging - an intervention with a tiny probability of success may be seen as worthwhile because the expected payoff is so large, like positively affecting, or enabling, trillions of lives in the far future. The framework itself also does not reward finding out new information to improve estimates, or penalize ignorance. It does not properly allow for priors, such as being skeptical of most interventions in general, putting less weight on anti-common-sense conclusions, or giving more weight to those supported by multiple worldviews. However, these are not arguments against maximising expected value or for giving up on effective giving, merely for adopting a different, less explicit approach. It still seems worth putting significant effort into discovering the most effective charities.

In Cluelessness, Greaves (2016) says that if one considers all the effects of one’s actions to matter, the long-term, unforeseen (“flow-through”) effects may dominate, rather than the short-term impacts, like saving an immediate life by distributing insecticide-treated bednets. Depending on one’s estimates of the probability distribution (sign and magnitude) of these long-run effects, this may have some implications for the diversification question:

These charity evaluators, just like everyone else, cannot possibly include estimates of all the consequences of distributing bednets, from now until the end of time. In practice, their calculations are restricted to what are intuitively the ‘direct’ (‘foreseeable’?) consequences of bednet-distribution: estimates of the number and severity of cases of malaria that are averted by bednet-distribution, for which there is reasonably robust empirical data. In fact, the standard calculation focusses exclusively at the effectiveness of bednet-distribution in averting deaths from malaria of children under the age of 5, and (using standard techniques for evaluating death-aversions) concludes that those benefits alone suffice for ranking AMF’s cost-effectiveness above that of most other charities.

Averting the death of a child, however, has knock-on effects that have not been included in this calculation. What the calculation counts is the estimated value to the child of getting to live for an additional (say) 60 years. But the intervention in question also has systematic effects on others, which latter (1) have not been counted, (2) in aggregate may well be far larger than the effect of prolonging the child’s life on the child himself, and (3) are of unknown net valence. The most obvious such effects proceed via considerations of population size. In the first instance, averting a child death directly increases the size of the population, for the following (say) 60 years, by one. Secondly, averting child deaths has longer-run effects on population size: both because the children in question will (statistically) themselves go on to have children, and because a reduction in the child mortality rate has systematic, although difficult to estimate, effects on the near-future fertility rate. Assuming for the sake of argument that the net effect of averting child deaths is to increase population size, the arguments concerning whether this is a positive, neutral or a negative thing are complex. But, callous as it may sound, the hypothesis that (overpopulation is a sufficiently real and serious problem that) the knock-on effects of averting child deaths are negative and larger in magnitude than the direct (positive) effects cannot be entirely discounted.

It is possible that committing to only donate to one’s top charity could reduce the effect of uncertainty, by incentivising donors to be more rigorous when making their decision (Kuhn 2014):

Deciding on only one charity involves projecting extremely dissimilar things (e.g. global health interventions and catastrophic risk reductions) onto a single axis (good done per dollar) and comparing them. If you give yourself the option, it’s very tempting to throw up your hands and declare that they’re incommensurable so you have to donate some money to both. Deciding only to donate to whichever cause comes out on the very top in your analysis makes the stakes feel higher, which is a strong motivator to do your work thoroughly and reduce your uncertainty as much as possible.

He also writes that it may be worthwhile donating to a single charity because of the overhead in giving to many:

Being a large donor for an organization takes overhead: you need to follow it more closely, update your opinion of it more frequently, figure out how your money was used, schedule and itemize your donations, and so on. If you only donate to one organization, you minimize these costs, freeing you up to do other effective things.

This is possible, but organisations like GiveWell and Animal Charity Evaluators, advisors at places like the Future of Humanity Institute and the Centre for Effective Altruism, and the effective altruism community in general exist to offload much of this thinking.

Signalling

People might also give to charity for less altruistic motives, like for social approval.

For example, when charities publicise donations, they often assign donors to categories based on the amount donated rather than showing the exact amount, and studies have shown that donations tend to be concentrated around the lower ends of each bracket (Glazer et al. 1996, Harbaugh 1998). In an fMRI study (Izuma et al. 2010), participants were asked to choose between giving money to charity or keeping it for themselves, both with and without an observer. There was high activation in the striatum, which plays a large role in the brain’s reward system, both when donating the money while being observed (gaining a social reward) and when keeping the money in the absence of observers (a monetary reward without social cost). Giving can also be a means to signal income or wealth (Glazer et al. 1996) or trustworthiness (Fehrler 2010). Peer pressure from donating in pairs causes people to give larger amounts, but to be less happy with the donation (Reyniers et al. 2013).

In a talk (recounted in Alexander 2013), Robin Hanson of the blog Overcoming Bias argued that signalling makes donating to a single charity feel difficult:

Then he started talking about how you should only ever donate to one charity – the most effective. I’d heard this one before and even written essays speaking in favor of it, but it’s always been very hard for me and I’ve always chickened out. What Robin added was, once again, a psychological argument – that the reason this is so hard is that if charity is showing that you care, you want to show that you care about a lot of different things. Only donating to one charity robs you of opportunities to feel good when the many targets of your largesse come up and burdens you with scope insensitivity (my guess is that most people would feel more positive affect about someone who saved a thousand dogs and one cat than someone who saved two thousand dogs. The first person saved two things, the second person only saved one.)

If people are more likely to donate, and to donate more, when given the opportunity to gain these social rewards, this could be one reason why people give to multiple charities - there are more opportunities for the causes to come up in conversation, or for the donations to be seen in other public places. On the other hand, the opportunity to talk about donating may help to spread the idea of giving, and particularly of effective giving.

Kaufman (2013) also writes that in certain cases giving to only one charity might put others off effective altruism:

It can also be worth it to give to multiple organizations because of what it indicates to other people. I help fund 80,000 Hours because I think spreading the idea of effective altruism is the most important thing I can do. But it looks kind of sketchy to only give to metacharities, so I divide my giving between them and GiveWell’s top pick.

Are there other valid reasons for splitting?

Moral Uncertainty

It is possible that moral uncertainty is a reason to diversify, depending on how plausible one finds competing moral frameworks, as Bostrom (2009) writes on Overcoming Bias:

It seems people are overconfident about their moral beliefs.  But how should one reason and act if one acknowledges that one is uncertain about morality – not just applied ethics but fundamental moral issues? if you don’t know which moral theory is correct?

It doesn’t seem you can simply plug your uncertainty into expected utility decision theory and crank the wheel; because many moral theories state that you should not always maximize expected utility.

MacAskill (2014) discussed similar ideas in his doctoral thesis, arguing for maximising “expected choice-worthiness”.

In Worldview Diversification, Karnofsky (2016) writes that it may make sense for relatively large funders to take seriously many plausible and competing worldviews, and devote significant resources to each.

Donor Coordination

Coordinating many small, individual donors (as opposed to the different coordination problems which arise from being a large donor, discussed later) can be an issue when lots of relatively small donations add up to a large total: the total may fill a charity’s funding gap and start to hit diminishing returns, while other top charities worth funding, which may become more cost-effective than the first past this point, end up with less than they needed.

Hoskin (2013) argues that this should not be a problem, since most people are unlikely to agree on the best charity to give to; but if they do (for example, if everyone gives to GiveWell’s top-ranked charity), then when the charity ceases to be the most cost-effective, hopefully this will be communicated, and the next people to donate will base their decision on the updated rankings.

However, Kuhn (2014) gives an example, and explains why coordinating donations could be difficult in practice:

For instance, suppose that SCI and GiveDirectly are both soliciting donations from 100 donors with $1,000 each. Suppose that SCI is more cost-effective initially, but eventually diminishing returns kick in, so the globally optimal allocation is $70,000 to SCI and $30,000 to GiveDirectly. (And furthermore, suppose that everyone has perfect knowledge of cost-effectiveness.) If the individual donors only donate to the charity that looks more cost-effective when they make their donation, then everyone will donate to SCI, which is suboptimal. If, instead, everyone donates in proportion to the optimal outcome, then everyone will give $700 to SCI and $300 to GiveDirectly, which achieves the optimum.

This problem is mitigated if the donors donate at different times—for instance, if the first 70 donors give everything to GiveDirectly, then the last 30 could notice that GiveDirectly now looks under-funded and make up the gap. But this requires donors to donate frequently, and organizations to give frequent updates on their current funding level; it seems to me that current levels of transparency and donation frequency aren’t high enough to make this work very well.
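A small sketch of Kuhn’s example under assumed marginal-impact curves (chosen by us so that the global optimum is $70,000 to SCI and $30,000 to GiveDirectly): when donors can see current funding levels, the “give to whichever looks best right now” rule finds the optimum, but when everyone decides on the same initial information, all the money piles into SCI.

```python
# Toy model of Kuhn's coordination example. The marginal-impact curves are
# our own assumptions, calibrated so the optimal split is $70k / $30k.
def sci_marginal(funded):
    return 2.0 - funded / 70_000  # declines as SCI's funding gap fills

def gd_marginal(funded):
    return 1.0                    # flat marginal impact for GiveDirectly

def allocate(donors=100, gift=1_000, live_updates=True):
    """Each donor gives their whole gift to whichever charity has the higher
    marginal impact. With live_updates they see current funding levels;
    without, everyone decides using the initial (zero-funding) figures."""
    sci = gd = 0
    for _ in range(donors):
        s = sci if live_updates else 0
        g = gd if live_updates else 0
        if sci_marginal(s) > gd_marginal(g):
            sci += gift
        else:
            gd += gift
    return sci, gd

print(allocate(live_updates=False))  # (100000, 0): everyone piles into SCI
print(allocate(live_updates=True))   # (70000, 30000): the global optimum
```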

GiveWell has also recommended diversifying among its top charities according to the proportions allocated on its website:

For donors who think of themselves as giving not only to help the charity in question but to help GiveWell, we encourage allocating your dollars in the same way that you would ideally like to see the broader GiveWell community allocate its dollars. If every GiveWell follower follows this principle, we’ll end up with an overall allocation that reflects a weighted average of followers’ opinions of the appropriate allocation. (By contrast, if every GiveWell follower reasons “My personal donation won’t hit diminishing returns, so I’ll just give exclusively to my top choice,” the overall allocation is more likely to end up “distorted.”)

Another complication with continuously updating donors on the funding needs of organisations is the following game-theoretic problem: if two or more donors are thinking of giving to the same charity, but are worried about room for more funding, each is incentivised to wait and see what the others do. If the charity fills its funding gap, a donor can instead give to their next favourite cause, which seems unfair to the others, or not donate at all, which seems worse. In response to this problem, Karnofsky (2014) said:

Posting too many updates would risk a similar problem to the one described above. We don’t want a situation in which each donor’s gift to charity X causes another donor to give less to the charity; this would create an incentive for donors to try to wait each other out.

We do intend to post a notice when a given charity appears sure to hit its “maximum” – the most we think it could absorb before hitting a point of seriously diminishing returns. This will help donors avoid supporting a charity to go well over what we think is a reasonable amount of funding, without allowing donors to “cancel each other out” at a more granular level. In other words, if you give to a charity, you can expect that your donation will raise the total amount the charity takes in, by the amount of your donation – unless the charity ends up easily hitting its maximum, in which case your donation will be partly offset by the behavior of other donors who agree with us about where the maximum ought to be. We think this is a reasonable practical position.

Showing Support

In addition to the overall amount of funding charities receive, they also care about how predictable their funding sources are (Seaman et al. 2010). An unstable situation causes more uncertainty and worry in decision making, whereas stable revenue allows them to plan ahead, grow, and take up promising new opportunities (GiveWell 2011), which could help make them even more cost-effective. Kuhn (2014) and Tomasik (2013) argue that this is an argument against effective altruists giving to single, especially small, charities. The former writes:

Organizations value some other things besides having money in the bank, like having predictable sources of funding in the future as well. And that’s a red mark for the single-donation strategy, because it makes your donations far less predictable. A single swap in your charity rankings (between first and second place) could redirect your entire donation stream. If you’re giving to multiple organizations, on the other hand, your donation streams are much less sensitive to small changes in ranking.

This applies more strongly to smaller organizations: as mentioned above, UNICEF would barely blink at $200,000 more or less of funding, but it might be the difference between life and death for a smaller organization. Some organizations, like GiveWell, have gone so far as to pass up large funding from a single source because of this worry:

At the same time, both we and Good Ventures agree that it would be a bad idea for GiveWell to draw all – or too great a proportion – of its support from Good Ventures.

One reason for this is that it would put GiveWell in an overly precarious position. While our interests are currently aligned, it is important to both parties that we would be able to go our separate ways in the case of a strong enough disagreement. If Good Ventures provided too high a proportion of support to GiveWell, the consequences of a split could become enormous for us, because we wouldn’t have a realistic way of dealing with losing Good Ventures’s support without significant disruption and downsizing. That would, in turn, put us in a position such that it would be very difficult to maintain our independence.

Another reason is to show support for organisations which, while not currently the most cost-effective in expectation, still operate according to effective altruist values, such as being transparent, working on interventions with good evidence of effectiveness, or working in neglected areas. This could be an extra consideration for people with a greater ability to inspire and encourage others to give, as noted by Neil Bowerman in a forum post on MacAskill’s (2013) decision on where to donate. Some GiveWell staff also split their donations to show support (Karnofsky et al. 2015).

How does the size of a grant affect the decision to split?

Value of Information

GiveWell (Karnofsky 2012) has said that directing donations to multiple top charities brings greater learning opportunities:

A more subtle version of this idea pertains to learning opportunities. In a sense GiveWell is like a “large donor” with a few million dollars of anticipated money moved. If we direct major funding to more than one charity, we will have improved access to each such charity and will have improved opportunities to track its progress and learn from it. In addition, though we don’t anticipate moving enough money to overwhelm any of the three charities’ room for more funding, there is an argument that each marginal dollar means less to the charity in terms of improving its prominence, ability to experiment and plan, probability of turning out not to be able to scale further, etc.

Since it is hard to put probabilities on the success and magnitude of different causes and interventions, the Open Philanthropy Project has also argued for a “hits-based giving” approach (Karnofsky 2016), funding a large number of high-risk, high-reward organisations:

One of our core values is our tolerance for philanthropic “risk.” Our overarching goal is to do as much good as we can, and as part of that, we’re open to supporting work that has a high risk of failing to accomplish its goals. We’re even open to supporting work that is more than 90% likely to fail, as long as the overall expected value is high enough.

And we suspect that, in fact, much of the best philanthropy is likely to fail. We suspect that high-risk, high-reward philanthropy could be described as a “hits business,” where a small number of enormous successes account for a large share of the total impact — and compensate for a large number of failed projects.

This is a similar perspective to that of startup investors such as YCombinator, which each year funds a large batch of startups, expecting most of the returns and impact to come from a very small portion of them, such as Airbnb, Dropbox, or Stripe, and learning from many companies at once. Paul Graham often notes how counterintuitive startups are: since the best-looking ideas are already funded (akin to the low-hanging fruit in charity, and the reason effective altruists like to focus on neglected areas), the best investments might be in ideas that initially seem bad (Graham 2012) - something similar might be true for charities. The YCombinator acceptance rate is under 3% (85 of around 3,000 applicants in 2014), which might suggest that it is difficult to make judgments about investments past a certain point, and that it is worth taking a gamble on many for the highest returns.
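A rough sketch of the “hits business” arithmetic, with made-up numbers (the 10% hit rate and payoff sizes are our assumptions, not Open Philanthropy’s estimates): a portfolio of mostly-failing grants can still beat a safe alternative in expectation.

```python
import random

random.seed(1)

# Hits-based giving with made-up numbers: each risky grant fails 90% of
# the time but produces 100 units of impact when it succeeds, versus a
# safe grant that reliably produces 5 units. Same budget for each option.
def portfolio_value(grants=100, p_hit=0.1, hit_value=100, safe_value=5):
    hits = sum(random.random() < p_hit for _ in range(grants))
    risky_total = hits * hit_value    # a few big wins dominate the total
    safe_total = grants * safe_value  # the 'sure thing' alternative
    return hits, risky_total, safe_total

# Most grants fail, yet the risky portfolio's expected value (100 x 0.1 x
# 100 = 1000) is double the safe portfolio's (100 x 5 = 500).
print(portfolio_value())
```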

More recently, in a talk on The Moral Value of Information, Askell (2017) argued that the information gains from exploring new and under-studied opportunities should be factored in to decisions on where to invest resources:

I’m going to argue that, if your credences are low resilience, then the value of information in this domain is generally higher than it would be in a domain where your credences are high resilience. And, I’m going to argue that this means that actually in many cases, we should prefer interventions with less evidential support, all else being equal.

The extreme view that you could have is “We should just start investing time and money in interventions with high expected value, but little or no evidential support.” A more modest proposal, which is the one that I’m going to kind of endorse, is “We should probably start explicitly including the value of information, and assessments of causes and interventions, rather than treating it as an afterthought to concrete value.” With some of things that I’ve looked at, I really think information value can swamp concrete value. If that’s the case, it really shouldn’t be an afterthought. It should be one of the primary drivers of values, not an afterthought in your calculation summary.

In summary, evidence does make a difference to expected value calculations via the value of information. If the expected concrete value to interventions is the same, this will favor testing out the intervention with less evidential support, rather than one with more. And taking value of information seriously would change what effective altruists invest their resources, i.e., their time and money in.
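One way to make “resilience” concrete is with a toy Bayesian model (our construction, not Askell’s): treat an intervention’s success probability as a Beta posterior, and ask how far its mean moves after one more observation. The priors below are assumptions for illustration.

```python
# How much one new success shifts our estimate of an intervention's
# success probability, under a Beta(successes, failures) posterior whose
# mean is successes / (successes + failures).
def posterior_mean_shift(successes, failures):
    before = successes / (successes + failures)
    after = (successes + 1) / (successes + failures + 1)
    return after - before

# A well-studied intervention (50 successes, 50 failures): resilient.
print(posterior_mean_shift(50, 50))  # ~0.005: one result barely moves it
# An under-studied one (1 success, 1 failure): low resilience.
print(posterior_mean_shift(1, 1))    # ~0.167: one result moves it a lot
```

Since a single study moves the under-studied estimate far more, the information gained by testing it is correspondingly larger, which is the sense in which low-resilience credences raise the value of information.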

Diminishing Returns

A key idea in effective altruism is “room for more funding” - an intervention may have been cost-effective, but the charity might not currently be in a position to expand it, and so may have no use for additional donations. We want to know the marginal impact our donation will achieve. The idea of diminishing returns is that, at some point, each additional dollar does less and less good. This is generally not an issue for small donors, since they are unlikely to hit this point, but it is a problem for larger donors, and a strong reason for them to diversify.
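To illustrate (with a made-up functional form, not a claim about any real charity), suppose a charity’s total impact grows like the square root of its funding. The average cost-effectiveness then increasingly overstates the impact of the marginal dollar:

```python
import math

# Made-up diminishing returns: total impact grows like sqrt(funding).
def total_impact(funding):
    return math.sqrt(funding)

def marginal_impact(funding, dollar=1.0):
    """Extra impact produced by one more dollar at this funding level."""
    return total_impact(funding + dollar) - total_impact(funding)

for funded in (10_000, 1_000_000, 10_000_000):
    average = total_impact(funded) / funded
    print(f"${funded:>10,}: average {average:.6f}, "
          f"marginal {marginal_impact(funded):.6f}")
```

A large donor’s last dollar therefore buys far less than the charity’s average figures suggest, which is why room-for-more-funding analysis focuses on the margin.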

The Against Malaria Foundation was temporarily removed from GiveWell’s top charities for this reason (Karnofsky 2013):

Since naming the Against Malaria Foundation (AMF) as our #1 charity in late 2011, we have tracked $10.6 million in donations made to it as a result of our recommendation. In that time, AMF has held the funds while attempting to negotiate a net distribution to spend them on. It has not yet finalized a distribution large enough to spend the bulk of these funds (though it has funded a smaller-scale (~$1 million) distribution in Malawi).

We have been following its negotiations, and this post discusses why AMF has been unable to finalize a sufficiently large distribution. At this time, we plan not to recommend more donations to AMF until and unless it commits the bulk of its current funds to net distributions. Note that this decision does not reflect a negative view of AMF, but rather reflects room for more funding related issues.

The flipside of diminishing returns is economies of scale - for example, if a charity is successful at running a programme in one region, it can bring its learning and connections to another.

Coordination and Incentives

The earlier coordination problem is especially relevant when there is a large donor in play who has the ability to fill any funding gaps. If Alice is a small individual donor, and Bob is making decisions on behalf of a large foundation giving out grants to charities, Alice is incentivised to wait to see how much Bob donates to her choice of charity before giving herself, since Bob may fill the funding gap, or push the charity into diminishing returns such that it is much less cost-effective. Bob, on the other hand, will want to wait for Alice and other small donors to give first, so that he can make a smaller grant to the charity and use the remaining money for his second choice of organisation. In this case, however, the smaller donors are essentially transferring their money to Bob so that he can grant it out elsewhere; since Alice and the others may not share Bob’s choice of next top charity, they may altogether lose the desire to give to charities that Bob is also interested in funding. If major donors are always willing to fill all the funding gaps in effective altruism, this might cause people to lose interest in the movement, so this is a reason that big donors may want to diversify.

GiveWell (Karnofsky 2015) has written about this problem when recommending grants to the foundation Good Ventures:

As we’ve written before, trying to anticipate and adjust to other givers’ behavior can lead to thorny-seeming dilemmas. We do not want to be in the habit of – or gain a reputation for – recommending that Good Ventures fill the entire funding gap of every strong giving opportunity we see. In the long run, we feel this would create incentives for other donors to avoid the causes and grants we’re interested in; this, in turn, could lead to a much lower-than-optimal amount of total donor interest in the things we find most promising.

Encouraging other donors to help support the causes and organizations we’re interested in – and ensuring that they have genuine incentives to do so – will sometimes directly contradict the goal of fully funding the best giving opportunities we see. Thinking about GiveWell’s top charities provides a vivid example. If we recommended that Good Ventures fully fund each of our top charities, GiveWell would no longer recommend these charities to individual donors. In the short run, this could mean forgoing tens of millions of dollars of potential support for these charities from individuals (this is how much we project individuals will give to our top charities this year). In the long run, the costs could be much greater: we believe that individual-donor-based support of GiveWell’s top charities has the ability to grow greatly. A major donor who simply funded top charities to capacity would be – in our view – acting very suboptimally, putting in a much greater share of the funding than ought to be necessary over the long run.

This worry may be less important if Alice and Bob each know the other is an effective altruist (especially if they are most interested in the same areas), and so have similar second-choice charities (Todd 2016):

Being the donor of last resort is also not as valuable as it first looks once you’re part of a community. If you give to an organisation and use up its room for more funding, then you free up another donor to go donate somewhere else. If that person shares your values and has reasonable judgement, then they’ll donate somewhere else that’s pretty good. This is especially true if you are modest about your own judgement. If someone else in your community who is smart thinks something is a good donation opportunity, then you should assign a reasonable probability to them being right.


Conclusion

For large donors such as Good Ventures, diminishing returns mean that diversifying across charities and causes makes sense, and there are huge value-of-information gains from more exploratory approaches.

The case for smaller donors donating to a single charity seems somewhat more convincing. The EA community is itself diverse, and despite using the same framework, people have a range of comparative advantages that make them better suited to working in and looking at different areas, and they have differing opinions on the most important causes. This makes some reasons for diversification, such as coordination, moral uncertainty, and simplicity (since people can communicate their ideas on cost-effectiveness) less of an issue.

There is an argument that much of an EA’s impact can come from talking about their giving with others, so that it becomes more socially accepted and well known, and diversifying donations may be useful for this. That said, as long as EAs can point to people within the community who donate to different charities and work on different causes, this may not be a problem. There may also be a risk of diluting EA if people donate to charities which are much less cost-effective than the best, since the ideal is to maximise impact.

There are some biases which pull people towards diversifying, and these mostly seem like things for donors to try to minimise - if an intervention is clearly more effective than another, EAs should usually not be swayed by warm glows. But it also does not seem like we should totally ignore natural human desires like these, especially if we want the movement to be accessible, kind, and to grow.


Bibliography

Alexander (2013). Investment and Inefficient Charity. Slate Star Codex.

Askell (2017). The Moral Value of Information.

Baron & Greene (1996). Determinants of insensitivity to quantity in valuation of public goods: Contribution, warm glow, budget constraints, availability, and prominence.

Baron & Szymanska (2011). Heuristics and biases in charity.

Bostrom (2009). Moral uncertainty - towards a solution?

Carson & Mitchell (1995). Sequencing and nesting in contingent valuation surveys.

Desvousges et al. (1992). Measuring Nonuse Damages Using Contingent Valuation: An Experimental Evaluation of Accuracy.

Elmore (2016). We are in triage every second of every day.

Fehrler (2010). Charity as a Signal of Trustworthiness.

Fox et al. (2005). How subjective grouping of options influences choice and allocation: diversification bias and the phenomenon of partition dependence.

GiveWell (2011). GiveWell’s “excess assets” policy.

Glazer et al. (1996). A signaling explanation for charity.

Graham (2012). Black Swan Farming.

Greaves (2016). Cluelessness.

Greaves (2019). A Research Agenda for the Global Priorities Institute. Global Priorities Institute.

Harbaugh (1998). What do donations buy?: A model of philanthropy based on prestige and warm glow.

Harrison (1992). Valuing public goods with the contingent valuation method: a critique of Kahneman and Knetsch.

Hoskin (2013). Should you only donate to one charity? Giving What We Can.

Izuma et al. (2010). Processing of the incentive for social approval in the ventral striatum during charitable donation.

Jamison et al. (2006), eds. Disease control priorities in developing countries.

Kahneman & Knetsch (1992). Valuing public goods: the purchase of moral satisfaction.

Kahneman & Tversky (1979). Prospect theory: An analysis of decision under risk.

Kahneman & Tversky (2000). Choices, values, and frames.

Kahneman et al. (1990). Experimental tests of the endowment effect and the Coase theorem.

Kahneman et al. (1991). Anomalies: The endowment effect, loss aversion, and status quo bias.

Kahneman et al. (1999). Economic preferences or attitude expressions?: An analysis of dollar responses to public issues.

Karnofsky (2011). Denying the choice. GiveWell.

Karnofsky (2012). Our top charities for the 2012 giving season. GiveWell.

Karnofsky (2013). Change in Against Malaria Foundation Recommendation Status (room-for-more-funding-related). GiveWell.

Karnofsky (2014). Donor coordination and the “giver’s dilemma”. GiveWell.

Karnofsky (2015). Giving Now vs. Later. Good Ventures.

Karnofsky (2016). Hits-based giving. Open Philanthropy Project.
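
Karnofsky (2016). Worldview Diversification. Open Philanthropy Project.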

Karnofsky et al. (2015). Staff members’ personal donations for giving season 2015. GiveWell.

Kaufman (2013). Give to multiple charities.

Kuhn (2014). How many causes should you give to?

MacAskill (2013). Where I’m giving and why: Will MacAskill.

MacAskill (2014). Normative Uncertainty.

Read & Loewenstein (1995). Diversification bias: Explaining the discrepancy in variety seeking between combined and separated choices.

Read et al. (2001). Which is better: simultaneous or sequential choice?

Reyniers et al. (2013). Reluctant altruism and peer pressure in charitable giving.

Seaman et al. (2010). Chapter 1: Income Diversification. Handbook of research on nonprofit economics and management.

Sittler (2017). How much donation splitting is there, and should you split?

Snowden (2015). Does risk aversion give an agent with purely altruistic preferences a good reason to donate to multiple charities?

Todd (2016). The Value of Coordination. 80,000 Hours.

Tomasik (2013). Charity Cost-Effectiveness in an Uncertain World. Foundational Research Institute.

Tomasik (2013). When Should Altruists Be Financially Risk-Averse?

Tversky & Kahneman (1991). Loss aversion in riskless choice: A reference-dependent model.

Ubel et al. (1996). Cost-effectiveness analysis in a setting of budget constraints—is it equitable?

Ubel et al. (1996). Public preferences for efficiency and racial equity in kidney transplant allocation decisions.