Revenge of the Meat People!

Back in November, I argued (in Inhuman Altruism) that rationalists should try to reduce their meat consumption. Here, I’ll update that argument a bit and lay out some of my background assumptions.

I was surprised at the time by the popularity of responses on LessWrong like Manfred’s

Unfortunately for cows, I think there is an approximately 0% chance that hurting cows is (according to my values) just as bad as hurting humans. It’s still bad – but its badness is some quite smaller number that is a function of my upbringing, cows’ cognitive differences from me, and the lack of overriding game theoretic concerns as far as I can tell.

and maxikov’s

I’m actually pretty much OK with animal suffering. I generally don’t empathize all that much, but there [are] a lot of even completely selfish reasons to be nice to humans, whereas it’s not really the case for animals.

My primary audience was rationalists who terminally care about reducing suffering across the board — but I’ll admit I thought most LessWrong users would fit that description. I didn’t expect to see a lot of people appealing to their self-interest or their upbringing. Since it’s possible to pursue altruistic projects for selfish reasons (e.g., attempting to reduce existential risk to get a chance at living longer), I’ll clarify that my arguments are directed at people who do care about how much joy and suffering there is in the world — care rather a lot.

The most detailed defense of meat-eating was Katja Grace’s When should an effective altruist be vegetarian? Katja’s argument is that effective altruists should eat frugally and give as much money as they can to high-impact charities, rather than concerning themselves with the much smaller amounts of direct harm their dietary choices cause.

Paul Christiano made similar points in his blog comments: if you would spend more money sustaining a vegan diet than sustaining a carnivorous diet, the best utilitarian option would be for you to remain a meat-eater and donate the difference.

Most people aren’t living maximally frugally and giving exactly the optimal amount to charity (yet). But the point generalizes: If you personally find that you can psychologically use the plight of animals to either (a) motivate yourself to become a vegan for an extra year or (b) motivate yourself to give hundreds of extra dollars to a worthy cause, but not both, then you should almost certainly choose (b).

My argument did assume that veganism is a special “bonus” giving opportunity, a way to do a startling amount of good without drawing resources from (or adding resources to) your other altruistic endeavors. The above considerations made me shift from feeling maybe 80% confident that most rationalists should forsake meat, to feeling maybe 70% confident.

To give more weight than that to Katja’s argument, there are two questions I’d need answered:

 

1. How many people are choosing between philanthropy and veganism?

Some found the term “veg*nism” (short for “veganism and/or vegetarianism”) confusing in my previous post, so I’ll switch here to speaking of meat-abstainers as “plant people” and meat-eaters as “meat people.” I’m pretty confident that the discourse would be improved by more B-movie horror dialogue.

Plant people have proven that their mindset can prevent a lot of suffering. And I don’t see any obvious signs that EAs’ plantpersonhood diminishes their EAness. To compete, Katja’s meat-person argument needs to actually motivate people to do more good. “P > Q > R” isn’t a good argument against Q if rejecting Q just causes people to regress to R (rather than advance to P).

What I want to see here are anecdotes from EAs who have had actual success with “paying the cost of veganism in money” (or something similar), to show that this is a psychologically realistic alternative and not just a way of rationalizing the status quo.

(I’m similarly curious to see if people can have real success with my idea of donating $1 to the Humane League after every meal where you eat an animal. Patrick LaVictoire has tried out this ritual, which he calls “beefminding”. (Edit 9/11: Patrick clarifies, “I did coin ‘beefminding’, but I use it to refer to tracking my meat + egg* consumption on Beeminder, and trying to slowly bend the curve by changing my default eating habits. I don’t make offsetting donations. What I’m doing is just a combination of quantified self and Reducetarianism.”))

If I “keep fixed how much of my budget I spend on myself and how much I spend on altruism,” Katja writes, plant-people-ism looks like a very ineffective form of philanthropy. But I don’t think most people spend an optimal amount on altruistic causes, and I don’t think most people who spend a suboptimal amount altruistically ought to set a hard upper limit on how much they’re willing to give. Instead, I suspect most people should set a lower limit and then ratchet that limit upward over time, or supplement it opportunistically. (This is the idea behind Chaos Altruism.)

If you’re already giving everything to efficient charities except what you need to survive, or if you can’t help but conceptualize your altruistic sentiment as a fixed resource that veganism would deplete, then I think Katja’s reasoning is relevant to your decision. Otherwise, I think veganism is a good choice, and you should even consider combining it with Katja’s method, giving up meat and doubling the cost of your switch to veganism (with the extra money going to an effective charity). We suboptimal givers should take whatever excuse we can find to do better.

Katja warns that if you become a plant person even though it’s not the perfectly optimal choice, “you risk spending your life doing suboptimal things every time a suboptimal altruistic opportunity has a chance to steal resources from what would be your personal purse.” But if the choice really is between a suboptimal altruistic act and an even less optimal personal purchase, I say: mission accomplished! Relatively minor improvements in global utility aren’t bad ideas just because they’re minor.

I could see this being a bad idea if getting into the habit of giving ineffectively depletes your will to give effectively. Perhaps most rationalists would find it exhausting or dispiriting to give in a completely ad-hoc way, without maintaining some close link to the ideal of effective altruism. (I find it psychologically easier to redirect my “triggered giving” to highly effective causes, which is the better option in any case; perhaps some people will likewise find it easier to adopt Katja’s approach than to transform into a plant person.)

It would be nice if there were some rule of thumb we could use to decide when a suboptimal giving activity is so minor as to lack moral force (even for opportunistic Chaos Altruists). If you notice a bug in your psychology that makes it easier for you to become a plant person than to become an optimally frugal eater (and optimal giver), why is that any different from volunteering at a soup kitchen to acquire warm fuzzies? Why is it EA-compatible to encourage rationalists to replace the time they spend eating meat with time spent eating plants, but not EA-compatible to encourage rationalists to replace the time they spend on Reddit with time spent at soup kitchens?

Part of the answer is simply that becoming a plant person is much more effective than regularly volunteering at soup kitchens (even though it’s still not comparable to highly efficient charities). But I don’t think that’s the whole story.

 

2. Should we try to do more “ordinary” nice things?

Suppose some altruistic rationalists are in a position to do more good for the world by optimizing for frugality, or by ethically offsetting especially harmful actions. I’d still worry that there’s something important we’re giving up, especially in the latter case — “mundane decency,” “ordinary niceness,” or something along those lines.

I think of this ordinary niceness thing as important for virtue cultivation, for community-building, and for general signaling purposes. By “ordinary niceness” I don’t mean deferring to conventional/mainstream morality in the absence of supporting arguments. I do mean privileging useful deontological heuristics like “don’t use violence or coercion on others, even if it feels in the moment like a utilitarian net positive.”

If we aren’t relying on cultural conventions, then I’m not sure what basis we should use for agreeing on community standards of ordinary niceness. One thought experiment I sometimes use for this purpose is: “How easy is it for me to imagine that a society twice as virtuous as present-day society would find [action] cartoonishly evil?”

I can imagine a more enlightened society responding to many of our mistakes with exasperation and disappointment, but I have a hard time imagining that they’d react with abject horror and disbelief to the discovery that consumers contributed in indirect ways to global warming — or failed to volunteer at soup kitchens. I have a much easier time imagining the “did human beings really do that?!” response to the enslavement and torture of legions of non-human minds for the sake of modestly improving the quality of sandwiches.

I don’t want to be Thomas Jefferson. I don’t want to be “that guy who was totally kind and smart enough to do the right thing, but lacked the will to part ways with the norms of his time even when plenty of friends and arguments were successfully showing him the way.”

I’m not even sure I want to be the utilitarian Thomas Jefferson, the counterfactual Jefferson who gives his money to the very best causes and believes that giving up his slaves would impact his wealth in a way that actually reduces the world’s expected utilitarian value.

I am something like a utilitarian, so I have to accept the arguments of the hypothetical utilitarian slaveholder (and of Katja) in principle. But in practice I’m skeptical that an actual human being will achieve more utilitarian outcomes by reasoning in that fashion.

I’m especially skeptical that an 18th-century community of effective altruists would have been spiritually undamaged by shrugging its shoulders at slaveholding members. Plausibly you don’t kick out all the slaveholders; but you do apply some social pressure to try to get them to change their ways. Because ditching ordinary niceness corrodes something important about individuals and about groups — even, perhaps, in contexts where “ordinary niceness” is extraordinary.

… I think. I don’t have a good general theory for when we should and shouldn’t adopt universal prohibitions against corrosive “utilitarian” acts. And in our case, there may be countervailing “ordinary niceness” heuristics: the norm of being inclusive to people with eating disorders and other medical conditions, the norm of letting altruists have private lives, etc.

 

Whatever the right theory looks like, I don’t think it will depend on our stereotypes of rationalist excellence. If it seems high-value to be a community of bizarrely kind people, even though “bizarre kindness” clashes with a lot of people’s assumptions about rationalists or about the life of the mind, even though the kindness in question is more culturally associated with Hindus and hippies than with futurists and analytic philosophers, then… just be bizarrely kind. Clash happens.

I might be talked out of this view. Paul raises the point that there are advantages to doubling down on our public image (and self-image) as unconventional altruists:

I would rather EA be associated with an unusual and cost-effective thing than a common and ineffective thing. The two are attractive to different audiences, but one audience seems more worth attracting.

On the other hand, I’d expect conventional kindness and non-specialization to improve a community’s ability to resist internal strife and external attacks. And plant people are common and unexceptional enough that eating fewer animals probably wouldn’t make vegetarianism or veganism one of our more salient characteristics in anyone’s eyes.

At the same time, plantpersonhood could help us do a nontrivial amount of extra object-level good for the world, if it doesn’t trade off against our other altruistic activities. And I think it could help us develop a stronger identity (both individually and communally) as people who are trying to become exemplars of morality and kindness in many different aspects of their lives, not just in their careers or philanthropic decisions.

My biggest hesitation, returning to Katja’s calculations… is that there really is something odd about putting so much time and effort into getting effective altruists to do something suboptimal.

It’s an unresolved empirical question whether Chaos Altruism is actually a useful mindset, even for people to whom it comes naturally. Perhaps Order Altruism and the “just do the optimal thing, dammit” mindset is strictly better for everyone. Perhaps it yields larger successes, or fails more gracefully. Or perhaps rationalists naturally find systematicity and consistency more motivating; and perhaps the impact of meat-eating is too small to warrant a deontological prohibition.

More anecdotes and survey data would be very useful here!


[Epistemic status: I’m no longer confident of this post’s conclusion. I’ll say why in a follow-up post.]


14 thoughts on “Revenge of the Meat People!”

  1. So you’re “especially skeptical that an [21st]-century community of effective altruists would [be] spiritually undamaged by shrugging its shoulders at [non-vegans]”? And the proper reaction to such people is “abject horror and disbelief”?

  2. I think the main reasons I have for believing that establishing a social norm of vegetarianism in EA is a bad idea are that, by default, it seems to encourage the wrong set of algorithms, and that it runs against a really heavily limited resource of “things we can make default in the EA community”.

    To elaborate on the “encourages the wrong set of mental algorithms” point:

    The first thing I would like EAs to do, when they are faced with a potentially promising action, is to sit down and make an expected value estimate. If the answer turns out strongly positive, then go forth and take the action; if the answer turns out negative, don’t do it; if it turns out weakly positive or weakly negative, mostly don’t bother with it, and decrease the degree to which you expect that looking into similar questions will provide value in the future.

    The whole debate about vegetarianism runs completely counter to this. I have only seen one single individual make any attempt at calculating an expected value, and the answer turned out weakly negative. There have been no follow-up EV estimates, only large amounts of unstructured and informal arguments, and in general the discussion seems mostly to resemble politics rather than rational discourse. The correct move seems to be to disengage from the topic, and just mostly not bother with what people’s dietary decisions are, given that they appear to only very vaguely matter (this is in an ideal world, where all EAs have full control over their disgust reactions).

    To elaborate on the “runs against a heavily limited resource” point:

    The memetic space of EA is very heavily limited. There is only a limited number of concepts that can become “EA canon” and only a limited number of ideas that can be seen as “default” in EA. We want to choose these ideas very carefully, and as I said above, dietary choice just doesn’t seem to be a topic where EA principles give us a very clear answer. The very fact that we are discussing this topic on this blog, and that I spent literally hundreds of hours thinking about this problem in the buildup to EAG shows that we are doing something very wrong right now, and that disengaging from the topic seems like generally the best idea.

    1. I agree with you that the choice to eat meat v. not eat meat is not very important, compared to dozens of other choices typical EAs could be spending the same time and attention thinking about.

      I wouldn’t be talking about this issue myself, except I find the meta-level principles involved interesting and important, so I see this as a useful arena for clearing up when we should promote “ordinary niceness” heuristics v. strict utilitarianism; Chaos Altruism and opportunism vs. Order Altruism and ‘just do the optimal thing’; etc. E.g., I wouldn’t have written the Chaos Altruism and Private v. Public Virtue posts at all except that the meat-eating debate forced me to think more carefully about what kinds of personal habits and community norms I think are most valuable. I expect those kinds of questions to turn out to be super important, and precisely because meat-eating is relatively low-stakes I see it as a relatively safe place to work out precedents.

      1. Hmm, I can see that point. I do think that though meat-eating has low consequential stakes, it does have quite high political stakes. There are few topics that are as mindkillerish as this one, and I am seriously worried that people will convince themselves of the wrong meta principles so that they are in line with their political position. (E.g. I would strongly discourage first introducing the trolley problem as “on one side you have 5 Republicans and on the other side one Libertarian”. I expect that it will create muddled thinking).

  3. I *really* like the Thomas Jefferson example, although the actual argument seems to be “I don’t want to be this kinda person”, which is not really a strong utilitarian argument. Point is, my moral intuition is that factory farming is something that should induce abject horror. Now, eating animal products from “humane” sources… that is something much more debatable (that I’d like to see you cover in a future post 🙂 ).

    So as someone who is already a vegetarian and eating “ethical eggs”, the strongest arguments I have for changing (which I am always seriously considering, but am not too close to actually doing) are:

    1. Nutrition: Creatine: I am very curious what your take on this is. Also, fish/fish oil seems pretty good for you. And I’m pretty confident I would eat a more protein-rich diet if I ate meat, which I also believe is good for you (although I’m not confident about that).

    2. Eating out with people and seeing total crap vegetarian options (eating out by myself is easy enough). E.g. “the same dish, for the same price (or more!?!?), minus the meat”, or “salad/fries”.

    Another thing I’ve just recently started considering is that maybe reducetarianism (and/or only eating “ethical” meat) is a better thing to be promoting: I think it has wider appeal, and it might actually be more powerful in terms of signalling, since a lot of people seem to view vegetarianism as a “personal choice”, in a way that removes any sense of moral compulsion. I’m somewhat disturbed by this attitude, but given arguments like Katja’s, I am finding it more compelling.

  4. Sort of a tangent: The way you’re using the concept “ordinary niceness” strikes me as confusing and anti-useful. Deontological injunctions of the sort that Eliezer talks about don’t have much to do with ‘niceness’ (their purpose is more like game-theoretic cooperation, trustworthiness, debiasing, or avoiding unintended consequences, and their implementation is likely to look more ‘principled’ or ‘honorable’ than ‘nice’) and aren’t necessarily ‘ordinary’ (e.g. ‘be much more honest than people usually are’ or ‘be totally honest with epistemic peers, but not necessarily anyone else’ are plausible injunctions; ‘ordinary’ seems to build in the concept, which you don’t endorse, of deferring to conventional morality). They also don’t seem to have much to do with the thing you’re talking about as a possible reason to not eat meat, which you describe in ways that suggest virtue ethics and not deontology (you want to be one kind of thing and not another, you don’t want to corrode something important that affects your ability to do good). And these both seem different from (though plausibly related to) ordinary kindness-to-those-around-you.

    1. Ethical injunctions are deontological even when they’re about trying to become a certain kind of person, to the extent they prescribe specific actions rather than just endorsing character traits. A lot of academic virtue ethics is a reaction against the very idea that you can prescribe general rules of good conduct.

      I considered other terms in place of “ordinary niceness,” like “basic decency” or “run-of-the-mill goodness.” But all of these seem to have problems too. I think part of the problem is that I’m trying to argue against two ideas at once:

      1. “Excellence, for rationalists, should involve standing out, heroically endorsing some weird/unusual thing, championing neglected in-group-associated topics like cryonics.”

      “Ordinary” or “run-of-the-mill” is meant to contrast with that idea. Sometimes excellence isn’t special; there are obvious reasons why neglected areas tend to be high-value, but there are also obvious reasons why high-value things will tend not to be neglected, and veganism isn’t the kind of intervention that needs to be extremely neglected in order to be valuable.

      2. “Excellence, for rationalists, isn’t about trying to look good, trying to be polite or accommodating, trying to get along with others, or engaging in ordinary small-scale acts of kindness. Excellence looks more like being willing to make social sacrifices and take status hits for the sake of deep truths and large-scale goods.”

      Again, there are obvious benefits to this as a rule of thumb, but I also see it overused in cases where non-niceness isn’t a forced move and doesn’t actually serve the goal.

      I think you’re right that this is packing too much into one concept. Also into one blog post; this probably deserves a separate discussion. For instance, I haven’t seen the argument made that discursive charity, civility, etc. are classic examples of ethical injunctions; movement atheism failed because it didn’t have a rule against being mean to people even when they deserved it, and individuals in heated Internet arguments are poor judges of who deserves it and of the long-term consequences of cruelty.

      I think the relationship between this kind of niceness and veganism is more tenuous, though. It might be that I’m not sufficiently distinguishing “basic decency” from the kind of “heroic, saintly decency” Ozy likes, since I do want EAs to ratchet up their sainthood and become a community that impresses me on more axes.

  5. I think you’re confusing me with someone else. I did coin “beefminding”, but I use it to refer to tracking my meat + egg* consumption on Beeminder, and trying to slowly bend the curve by changing my default eating habits. I don’t make offsetting donations. What I’m doing is just a combination of quantified self and Reducetarianism.

    *Note that milk products represent so much less suffering per calorie than meat + eggs [http://reducing-suffering.org/how-much-direct-suffering-is-caused-by-various-animal-foods/] that I might as well not count them so long as I haven’t cut meat + egg consumption to zero.

  6. Paying others to harm sentient beings isn’t altruism. Factory-farming and slaughterhouses cause some of the worst forms of severe and readily avoidable suffering in the world today. Turning whether EAs should actively pay to harm sentient beings into a challenging moral issue takes intellectual ingenuity worthy of a better cause. Exploring ways systematically to help sentient beings is more likely to have a positive impact.

  7. “I am something like a utilitarian, so I have to accept the arguments of the hypothetical utilitarian slaveholder (and of Katja) in principle. But in practice I’m skeptical that an actual human being will achieve more utilitarian outcomes by reasoning in that fashion.”

    At the bottom of this there is also a glaring moral hazard. After all, living the good life with blackjack and hookers (and, well, meat) makes one so much more productive, does it not? Yes, since it’s the interwebs of 2015 we’re talking in, here is the addendum: the previous sentence *is* meant to be sarcastic.

    I can see the merit in some tough utilitarian calcs regarding the tools one wields to have an effect on this world. But regarding the details of one’s private life, the potential for self-deception becomes virtually limitless. ‘Self-deception’ of course being the charitable interpretation.

    And then there is the not-to-be-dismissed effect of being the change one wants to see in the world, or of leading by example if you are in a position to do so, which is bound to have a considerable effect on one’s peers or subordinates.

  8. “but I have a hard time imagining that they’d react with abject horror and disbelief to the discovery that consumers contributed in indirect ways to global warming — or failed to volunteer at soup kitchens.”

    I find it really easy to imagine the first, and the second not really hard. I actually find the first easier to imagine than a vegan future.

    This is the problem with argument-from-imagination: it’s revealing more about you than about the world.
