This post is partly a response to arguments for only donating to one charity and to an 80,000 Hours post arguing against diminishing returns. But I’ll focus mostly on AGI-risk charities.
The rule that I should only donate to one charity is a good presumption to start with. Most objections to it stem from motivations that diverge from pure utilitarian altruism. I don’t pretend that altruism is my only motive for donating, so I’m not too concerned that I only roughly approximate following that rule.
Still, I want to follow the rule more closely than most people do. So when I direct less than 90% of my donations to tax-deductible nonprofits, I feel a need to point to diminishing returns[1] to donations to justify that.
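To make that logic concrete, here’s a toy sketch (my own illustration; the square-root impact curves and the numbers are made-up assumptions, not anything from the posts I’m responding to). With diminishing returns, a utilitarian’s best allocation splits the budget so that marginal returns are equalized across charities; with linear returns, everything goes to the single best charity:

```python
import numpy as np

# Toy model: two charities whose impact grows with the square root
# of funding, i.e. with diminishing returns. A pure utilitarian
# maximizes total impact over a fixed budget.

def impact_a(x):
    return np.sqrt(x)        # charity A: strongly diminishing returns

def impact_b(x):
    return 0.8 * np.sqrt(x)  # charity B: similar, but 20% less effective

budget = 1000.0
to_a = np.linspace(0.0, budget, 10001)           # candidate splits
total = impact_a(to_a) + impact_b(budget - to_a)
best = to_a[np.argmax(total)]
print(f"optimal donation to A: {best:.0f} of {budget:.0f}")  # ~610, not 1000

# With concave impact, the optimum is interior: splitting beats
# concentrating, even though A is better at every funding level.
# Make the impact functions linear and the optimum jumps to a
# corner: donate 100% to whichever charity is better.
```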
With AGI-risk organizations, I expect the value of diversity to sometimes override the normal presumption even for purely altruistic utilitarians. This comes with caveats: it requires having the time needed to evaluate multiple organizations, and having more than a few thousand dollars to donate. Those caveats exclude many people from this advice, so this post is mainly oriented toward EAs who are earning to give and wealthier people.
Before explaining that, I’ll reply to the 80,000 Hours post about diminishing returns.
The 80,000 Hours post focuses on charities that mostly market causes to a wide audience. The economies of scale associated with brand recognition and social proof seem more plausible than any economies of scale available to research organizations.
The shortage of existential-risk research seems more dangerous than any shortage of charities devoted to marketing causes, so I’m focusing on the most important existential risk.
I expect diminishing returns to be common after an organization grows beyond two or three people. One reason is that the founders of most organizations exert more influence than subsequent employees over important policy decisions[2], so at productive organizations the founders are more valuable than later hires.
For research organizations that need the smartest people, the limited number of such people implies that only small organizations can have a large fraction of employees be highly qualified. If, say, only a few dozen people worldwide meet the bar, a five-person organization can be staffed entirely with them, but a hundred-person organization cannot.
I expect donations to very young organizations to be more valuable than other donations (which implies diminishing returns to size on average):
- It takes time to produce evidence that the organization is accomplishing something valuable, and donors quite sensibly prefer organizations that have provided such evidence.
- Even when donors try to compensate for that by evaluating the charity’s mission statement or leader’s competence, it takes some time to adequately communicate those features (e.g. it’s rare for a charity to set up an impressive web site on day one).
- It’s common for a charity to have suboptimal competence at fundraising until it grows large enough to hire someone with fundraising expertise.
- Some charities are mainly funded by a few grants in the millions of dollars, and I’ve heard reports that those grants often take many months between being awarded and reaching the charity’s bank account (not to mention delays in awarding the grants). This sometimes means months during which a charity has trouble hiring anyone who demands an immediate salary.
- Donors could in principle overcome these causes of bias, but as far as I can tell, few care about doing so. EAs come a little closer to doing this than others, but my observations suggest that EAs are almost as lazy about analyzing new charities as non-EAs.
- Therefore, I expect young charities to be underfunded.
Why AGI risk research needs diversity
I see more danger of researchers pursuing useless approaches when working on existential risks in general, and AGI risk in particular (due partly to the inherent lack of feedback), than when working on other causes.
The most obvious way to reduce that danger is to encourage a wide variety of people and organizations to independently research risk mitigation strategies.
I worry about AGI-risk researchers focusing all their effort on a class of scenarios that relies on a false assumption.
The AI foom debate seems superficially like the main area where a false assumption might cause AGI-risk research to end up mostly wasted. But there are enough influential people on both sides of this issue that I don’t expect research to ignore either side of that debate for long.
I worry more about assumptions that no prominent people question.
I’ll describe how such an assumption might look in hindsight via an analogy to some leading developers of software intended to accomplish what the web ended up accomplishing[3].
Xanadu stood out as the leading developer of global hypertext software in the 1980s to about the same extent that MIRI stands out as the leading AGI-risk research organization. One reason[4] that Xanadu accomplished little was the assumption that they needed to make money. Part of why that seemed obvious in the 1980s was that there were no ISPs delivering an internet-like platform to ordinary people, and hardware costs were a big obstacle to anyone who wanted to provide that functionality. The hardware costs declined at a predictable enough rate that Drexler was able to predict in Engines of Creation (published in 1986) that ordinary people would get web-like functionality within a decade.
A more disturbing reason for assuming that web functionality needed to make a profit was the ideology surrounding private property. People who opposed private ownership of homes, farms, factories, etc. were causing major problems. Most of us automatically treated ownership of software as working the same way as ownership of physical property.
People who are too young to remember attitudes toward free / open source software before about 1997 will have some trouble believing how reluctant people were to imagine valuable software being free.[5] Attitudes changed unusually fast due to the demise of communism and the availability of affordable internet access.
A few people (such as RMS) overcame the focus on Cold War issues, but were too eccentric to convert many followers. We should pay attention to people with similarly eccentric AGI-risk views.
If I had to guess what faulty assumption AGI-risk researchers are making, I’d point to something like mistaken guesses about the nature of intelligence or the architecture of feasible AGIs. But the assumptions that look suspicious to me are ones that some moderately prominent people have questioned.
Vague intuitions along these lines have led me to delay some of my potential existential-risk donations, in hopes that I’ll discover (or help create?) some new existential-risk projects that produce more value per dollar.
How does this affect my current giving pattern?
My favorite charity is CFAR (around 75 or 80% of my donations), which improves the effectiveness of people who might start new AGI-risk organizations or AGI-development organizations. I’ve had varied impressions about whether additional donations to CFAR face diminishing returns. They seem to have been getting just barely enough money to hire the employees they consider important.
FLI is a decent example of a possibly valuable organization that CFAR played some hard-to-quantify role in starting. It bears a superficial resemblance to an optimal incubator for additional AGI-risk research groups. But FLI seems too focused on mainstream researchers to have much hope of finding the eccentric ideas that I’m most concerned about AGI-risk researchers overlooking.
Ideally I’d be donating to one or two new AGI-risk startups per year. Conditions seem almost right for this. New AGI-risk organizations are being created at a good rate, mostly getting a few large grants that are probably encouraging them to focus on relatively mainstream views[6].
CSER and FLI sort of fit this category briefly last year before getting large grants, and I donated moderate amounts to them. I presume I didn’t give enough to them for diminishing returns to be important, but their windows of unusual need were short enough that I might well have come close to that.
I’m a little surprised that the increasing interest in this area doesn’t seem to be catalyzing the formation of more low-budget groups pursuing more unusual strategies. Please let me know of any that I’m overlooking.
See my favorite charities web page (recently updated) for more thoughts about specific charities.
[1] – Diminishing returns are the main way that donating to multiple charities at one time can be reconciled with utilitarian altruism.
[2] – I don’t know whether it ought to work this way, but I expect this pattern to continue.
[3] – They intended to accomplish a much more ambitious set of goals.
[4] – Probably not the main reason.
[5] – Presumably the people who were sympathetic to communism weren’t attracted to small software projects (too busy with politics?) or rejected working on software due to the expectation that it required working for evil capitalists.
[6] – The short-term effects are probably good, increasing the diversity of approaches compared to what would be the case if MIRI were the only AGI-risk organization, and reducing the risk that AGI researchers would become polarized into tribes that disagree about whether AGI is dangerous. But a field dominated by a few funders tends to focus on fewer ideas than one with many funders.