Back in November, I argued (in Inhuman Altruism) that rationalists should try to reduce their meat consumption. Here, I’ll update that argument a bit and lay out some of my background assumptions.
Unfortunately for cows, I think there is an approximately 0% chance that hurting cows is (according to my values) just as bad as hurting humans. It’s still bad – but its badness is some considerably smaller quantity, a function of my upbringing, cows’ cognitive differences from me, and the lack (as far as I can tell) of overriding game-theoretic concerns.
I’m actually pretty much OK with animal suffering. I generally don’t empathize all that much, but there are a lot of even completely selfish reasons to be nice to humans, whereas that’s not really the case for animals.
My primary audience was rationalists who terminally care about reducing suffering across the board — but I’ll admit I thought most LessWrong users would fit that description. I didn’t expect to see a lot of people appealing to their self-interest or their upbringing. Since it’s possible to pursue altruistic projects for selfish reasons (e.g., attempting to reduce existential risk to get a chance at living longer), I’ll clarify that my arguments are directed at people who do care about how much joy and suffering there is in the world — care rather a lot.
The most detailed defense of meat-eating was Katja Grace’s When should an effective altruist be vegetarian? Katja’s argument is that effective altruists should eat frugally and give as much money as they can to high-impact charities, rather than concerning themselves with the much smaller amounts of direct harm their dietary choices cause.
Paul Christiano made similar points in his blog comments: if you would spend more money sustaining a vegan diet than sustaining a carnivorous diet, the best utilitarian option would be for you to remain a meat-eater and donate the difference.
Most people aren’t living maximally frugally and giving exactly the optimal amount to charity (yet). But the point generalizes: If you personally find that you can psychologically use the plight of animals either (a) to motivate yourself to become a vegan for an extra year or (b) to motivate yourself to give hundreds of extra dollars to a worthy cause, but not both, then you should almost certainly choose (b).
My argument did assume that veganism is a special “bonus” giving opportunity, a way to do a startling amount of good without drawing resources from (or adding resources to) your other altruistic endeavors. The above considerations made me shift from feeling maybe 80% confident that most rationalists should forsake meat, to feeling maybe 70% confident.
To give more weight than that to Katja’s argument, there are two questions I’d need answered:
1. How many people are choosing between philanthropy and veganism?
Some found the term “veg*nism” (short for “veganism and/or vegetarianism”) confusing in my previous post, so I’ll switch here to speaking of meat-abstainers as “plant people” and meat-eaters as “meat people.” I’m pretty confident that the discourse would be improved by more B-movie horror dialogue.
Plant people have proven that their mindset can prevent a lot of suffering. And I don’t see any obvious signs that EAs’ plantpersonhood diminishes their EAness. To compete, Katja’s meat-person argument needs to actually motivate people to do more good. “P > Q > R” isn’t a good argument against Q if rejecting Q just causes people to regress to R (rather than advance to P).
What I want to see here are anecdotes from EAs who have had actual success with the “pay the cost of veganism in money” strategy (or something similar), to show that it’s a psychologically realistic alternative and not just a way of rationalizing the status quo.
(I’m similarly curious to see if people can have real success with my idea of donating $1 to the Humane League after every meal where you eat an animal. Patrick LaVictoire has tried out this ritual, which he calls “beefminding”. (Edit 9/11: Patrick clarifies, “I did coin ‘beefminding’, but I use it to refer to tracking my meat + egg* consumption on Beeminder, and trying to slowly bend the curve by changing my default eating habits. I don’t make offsetting donations. What I’m doing is just a combination of quantified self and Reducetarianism.”))
If I “keep fixed how much of my budget I spend on myself and how much I spend on altruism,” Katja writes, plant-people-ism looks like a very ineffective form of philanthropy. But I don’t think most people spend an optimal amount on altruistic causes, and I don’t think most people who spend a suboptimal amount altruistically ought to set a hard upper limit on how much they’re willing to give. Instead, I suspect most people should set a lower limit and then ratchet that limit upward over time, or supplement it opportunistically. (This is the idea behind Chaos Altruism.)
If you’re already giving everything to efficient charities except what you need to survive, or if you can’t help but conceptualize your altruistic sentiment as a fixed resource that veganism would deplete, then I think Katja’s reasoning is relevant to your decision. Otherwise, I think veganism is a good choice, and you should even consider combining it with Katja’s method, giving up meat and doubling the cost of your switch to veganism (with the extra money going to an effective charity). We suboptimal givers should take whatever excuse we can find to do better.
Katja warns that if you become a plant person even though it’s not the perfectly optimal choice, “you risk spending your life doing suboptimal things every time a suboptimal altruistic opportunity has a chance to steal resources from what would be your personal purse.” But if the choice really is between a suboptimal altruistic act and an even less optimal personal purchase, I say: mission accomplished! Relatively minor improvements in global utility aren’t bad ideas just because they’re minor.
I could see this being a bad idea if getting into the habit of giving ineffectively depletes your will to give effectively. Perhaps most rationalists would find it exhausting or dispiriting to give in a completely ad-hoc way, without maintaining some close link to the ideal of effective altruism. (I find it psychologically easier to redirect my “triggered giving” to highly effective causes, which is the better option in any case; perhaps some people will likewise find it easier to adopt Katja’s approach than to transform into a plant person.)
It would be nice if there were some rule of thumb we could use to decide when a suboptimal giving activity is so minor as to lack moral force (even for opportunistic Chaos Altruists). If you notice a bug in your psychology that makes it easier for you to become a plant person than to become an optimally frugal eater (and optimal giver), why is that any different from volunteering at a soup kitchen to acquire warm fuzzies? Why is it EA-compatible to encourage rationalists to replace the time they spend eating meat with time spent eating plants, but not EA-compatible to encourage rationalists to replace the time they spend on Reddit with time spent at soup kitchens?
Part of the answer is simply that becoming a plant person is much more effective than regularly volunteering at soup kitchens (even though it’s still not comparable to highly efficient charities). But I don’t think that’s the whole story.
2. Should we try to do more “ordinary” nice things?
Suppose some altruistic rationalists are in a position to do more good for the world by optimizing for frugality, or by ethically offsetting especially harmful actions. I’d still worry that there’s something important we’re giving up, especially in the latter case — “mundane decency,” “ordinary niceness,” or something along those lines.
I think of this ordinary niceness thing as important for virtue cultivation, for community-building, and for general signaling purposes. By “ordinary niceness” I don’t mean deferring to conventional/mainstream morality in the absence of supporting arguments. I do mean privileging useful deontological heuristics like “don’t use violence or coercion on others, even if it feels in the moment like a utilitarian net positive.”
If we aren’t relying on cultural conventions, then I’m not sure what basis we should use for agreeing on community standards of ordinary niceness. One thought experiment I sometimes use for this purpose is: “How easy is it for me to imagine that a society twice as virtuous as present-day society would find [action] cartoonishly evil?”
I can imagine a more enlightened society responding to many of our mistakes with exasperation and disappointment, but I have a hard time imagining that they’d react with abject horror and disbelief to the discovery that consumers contributed in indirect ways to global warming — or failed to volunteer at soup kitchens. I have a much easier time imagining the “did human beings really do that?!” response to the enslavement and torture of legions of non-human minds for the sake of modestly improving the quality of sandwiches.
I don’t want to be Thomas Jefferson. I don’t want to be “that guy who was totally kind and smart enough to do the right thing, but lacked the will to part ways with the norms of his time even when plenty of friends and arguments were successfully showing him the way.”
I’m not even sure I want to be the utilitarian Thomas Jefferson, the counterfactual Jefferson who gives his money to the very best causes and believes that freeing his slaves would diminish his wealth in a way that actually reduces the world’s expected utilitarian value.
I am something like a utilitarian, so I have to accept the arguments of the hypothetical utilitarian slaveholder (and of Katja) in principle. But in practice I’m skeptical that an actual human being will achieve more utilitarian outcomes by reasoning in that fashion.
I’m especially skeptical that an 18th-century community of effective altruists would have been spiritually undamaged by shrugging its shoulders at slaveholding members. Plausibly you don’t kick out all the slaveholders; but you do apply some social pressure to try to get them to change their ways. Because ditching ordinary niceness corrodes something important about individuals and about groups — even, perhaps, in contexts where “ordinary niceness” is extraordinary.
… I think. I don’t have a good general theory for when we should and shouldn’t adopt universal prohibitions against corrosive “utilitarian” acts. And in our case, there may be countervailing “ordinary niceness” heuristics: the norm of being inclusive to people with eating disorders and other medical conditions, the norm of letting altruists have private lives, etc.
Whatever the right theory looks like, I don’t think it will depend on our stereotypes of rationalist excellence. If it seems high-value to be a community of bizarrely kind people, even though “bizarre kindness” clashes with a lot of people’s assumptions about rationalists or about the life of the mind, even though the kindness in question is more culturally associated with Hindus and hippies than with futurists and analytic philosophers, then… just be bizarrely kind. Clash happens.
I might be talked out of this view. Paul raises the point that there are advantages to doubling down on our public image (and self-image) as unconventional altruists:
I would rather EA be associated with an unusual and cost-effective thing than a common and ineffective thing. The two are attractive to different audiences, but one audience seems more worth attracting.
On the other hand, I’d expect conventional kindness and non-specialization to improve a community’s ability to resist internal strife and external attacks. And plant people are common and unexceptional enough that eating fewer animals probably wouldn’t make vegetarianism or veganism one of our more salient characteristics in anyone’s eyes.
At the same time, plantpersonhood could help us do a nontrivial amount of extra object-level good for the world, if it doesn’t trade off against our other altruistic activities. And I think it could help us develop a stronger identity (both individually and communally) as people who are trying to become exemplars of morality and kindness in many different aspects of our lives, not just in our careers or philanthropic decisions.
My biggest hesitation, returning to Katja’s calculations… is that there really is something odd about putting so much time and effort into getting effective altruists to do something suboptimal.
It’s an unresolved empirical question whether Chaos Altruism is actually a useful mindset, even for people to whom it comes naturally. Perhaps Order Altruism, the “just do the optimal thing, dammit” mindset, is strictly better for everyone. Perhaps it yields larger successes, or fails more gracefully. Or perhaps rationalists naturally find systematicity and consistency more motivating; and perhaps the impact of meat-eating is too small to warrant a deontological prohibition.
More anecdotes and survey data would be very useful here!
[Epistemic status: I’m no longer confident of this post’s conclusion. I’ll say why in a follow-up post.]