nothing is mere

Inhuman altruism: Inferential gap, or motivational gap?

Brienne recently wrote that most LessWrongers and effective altruists eat meat because they haven’t yet been persuaded that non-human animals can experience suffering:

Vegans: If the meat eaters believed what you did about animal sentience, most of them would be vegans, and they would be horrified by their many previous murders. Your heart-wrenching videos aren’t convincing to them because they aren’t already convinced that animals can feel.

Meat-eaters: Vegans think there are billions of times more people on this planet than you do, they believe you’re eating a lot of those people, and they care about every one of them the way you care about every human. […]

Finally, let me tell you about what happens when you post a heart-wrenching video of apparent animal suffering: It works, if the thing you’re trying to do is make me feel terrible. My brain anthropomorphizes everything at the slightest provocation. Pigs, cows, chickens, mollusks, worms, bacteria, frozen vegetables, and even rocks. And since I know that it’s quite easy to get me to deeply empathize with a pet rock, I know better than to take those feelings as evidence that the apparently suffering thing is in fact suffering. If you posted videos of carrots in factory farms and used the same phrases to describe their miserable lives and how it’s all my fault for making the world this terrible place where oodles of carrots are murdered constantly, I’d feel the same way. So these arguments do not tend to be revelatory of truth.

I’ve argued before that non-human animals’ abilities to self-monitor, learn, collaborate, play, etc. aren’t clear evidence that they have a subjective, valenced point of view on the world. Until we’re confident we know what specific physical behaviors ‘having a subjective point of view’ evolved to produce — what cognitive problem phenomenal consciousness solves — we can’t confidently infer consciousness from the overt behaviors of infants, non-human animals, advanced AI, anesthetized humans, etc.

[I]f you work on AI, and have an intuition that a huge variety of systems can act ‘intelligently’, you may doubt that the linkage between human-style consciousness and intelligence is all that strong. If you think it’s easy to build a robot that passes various Turing tests without having full-fledged first-person experience, you’ll also probably (for much the same reason) expect a lot of non-human species to arrive at strategies for intelligently planning, generalizing, exploring, etc. without invoking consciousness. (Especially if [you think consciousness is very complex]. Evolution won’t put in the effort to make a brain conscious unless it’s extremely necessary for some reproductive advantage.)


That said, I don’t think any of this is even superficially an adequate justification for torturing, killing, and eating human infants, intelligent aliens, or cattle.



The intellectual case against meat-eating is pretty air-tight

To argue from ‘we don’t understand the cognitive basis for consciousness’ to ‘it’s OK to eat non-humans’ is acting as though our ignorance were positive knowledge we could confidently set down our weight on. Even if you have a specific cognitive model that predicts ‘there’s an 80% chance cattle can’t suffer,’ you have to be just as cautious as you’d be about torturing a 20%-likely-to-be-conscious person in a non-vegetative coma, or a 20%-likely-to-be-conscious alien. And that’s before factoring in your uncertainty about the arguments for your model.

The argument for not eating cattle, chickens, etc. is very simple:

1. An uncertainty-about-animals premise, e.g.: We don’t know enough about how cattle cognize, and about what kinds of cognition make things moral patients, to assign a less-than-1-in-20 subjective probability to ‘factory-farmed cattle undergo large quantities of something-morally-equivalent-to-suffering’.

2. An altruism-in-the-face-of-uncertainty premise, e.g.: You shouldn’t do things that have a 1-in-20 (or greater) chance of contributing to large amounts of suffering, unless the corresponding gain is huge. E.g., you shouldn’t accept $100 to flip a switch that 95% of the time does nothing and 5% of the time nonconsensually tortures an adult human for 20 minutes.

3. An eating-animals-doesn’t-have-enormous-benefits premise.

4. An eating-animals-is-causally-linked-to-factory-farming premise.

5. So don’t eat the animals in question.
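Premises 1 and 2 amount to a simple expected-value comparison. Here's a minimal sketch, where the 1-in-20 probability comes from premise 1 and every other number is an illustrative assumption, not a claim from the argument:

```python
from fractions import Fraction

def expected_moral_cost(p_sentient, suffering_if_sentient):
    """Expected suffering caused, given uncertainty about sentience."""
    return p_sentient * suffering_if_sentient

# Premise 1: we can't justify a subjective probability below 1 in 20.
p_cattle_suffer = Fraction(1, 20)

# Assumed stakes (illustrative): a lifetime of factory-farmed beef
# contributes 1000 suffering-units if cattle turn out to be moral patients.
cost = expected_moral_cost(p_cattle_suffer, 1000)

# Assumed benefit of the meat, in the same units (illustrative).
benefit = 10

print(cost, cost > benefit)  # 50 True
```

The exact numbers don't matter; the point is that a 5% probability dominates the calculation once the stakes are large and the corresponding gain isn't huge.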

This doesn’t require us to indulge in anthropomorphism or philosophical speculation. And Brienne’s updates to her post suggest she now agrees that a lot of meat-eaters we know assign a non-negligible probability to ‘cattle can suffer’. (Also, kudos to Brienne for not only changing her mind about an emotionally fraught issue extremely rapidly, but also changing the original post. A lot of rationalists who are surprisingly excellent at updating their beliefs don’t seem to fully appreciate the value of updating the easy-to-Google public record of their beliefs to cut off the spread of falsehoods.)

This places intellectually honest meat-eating effective altruists in a position similar to Richard Dawkins’:


[I’m] in a very difficult moral position. I think you have a very, very strong point when you say that anybody who eats meat has a very, very strong obligation to think seriously about it. And I don’t find any very good defense. I find myself in exactly the same position as you or I would have been — well, probably you wouldn’t have been, but I might have been — 200 years ago, talking about slavery. […T]here was a time when it was simply the norm. Everybody did it. Some people did it with gusto and relish; other people, like Jefferson, did it reluctantly. I would have probably done it reluctantly. I would have sort of just gone along with what society does. It was hard to defend then, yet everybody did it. And that’s the sort of position I find myself in now. […] I live in a society which is still massively speciesist. Intellectually I recognize that, but I go along with it the same way I go along with celebrating Christmas and singing Christmas carols.

Until I see solid counter-arguments — not just counter-arguments to ‘animals are very likely conscious,’ but to the much weaker formulation needed to justify veg(etari)anism — I’ll assume people are mostly eating meat because it’s tasty and convenient and accepted-in-polite-society, not because they’re morally indifferent to torturing puppies behind closed doors.



Why isn’t LessWrong extremely veg(etari)an?

On the face of it, LessWrong ought to be leading the pack in veg(etari)anism. A lot of LessWrong’s interests and values look like they should directly cash out in a concern for animal welfare:

transhumanism and science fiction: If you think aliens and robots and heavily modified posthumans can be moral patients, you should be more open to including other nonhumans in your circle of concern.

superrationality: Veg(etari)anism benefits from an ability to bind my future self to my commitments, and from a Kantian desire to act as I’d want other philosophically inclined people in my community to act.

probabilism: If you can reason with uncertainty and resist the need for cognitive closure, you’ll be more open to the uncertainty argument.

utilitarianism: Animal causes are admirably egalitarian and scope-sensitive.

taking ideas seriously: If you’re willing to accept inconvenient conclusions even when they’re based in abstract philosophy, that gives more power to theoretical arguments for worrying about animal cognition even if you can’t detect or imagine that cognition yourself.

distrusting the status quo: Veg(etari)anism remains fairly unpopular, and societal inertia is an obvious reason why.

distrusting ad-hoc intuitions: It may not feel desperately urgent to stop buying hot dogs, but you shouldn’t trust that intuition, because it’s self-serving and vulnerable to e.g. status quo bias. This is a lot of how LessWrong goes about ‘taking ideas seriously’; one should ‘shut up and multiply’ even when a conclusion is counter-intuitive.

Yet only about 15% of LessWrong is vegetarian (compared to 4-13% of the Anglophone world, depending on the survey). By comparison, the average ‘effective altruist’ LessWronger donated $2503 to charity in 2013; 9% of LessWrongers have been to a CFAR class; and 4% of LessWrongers are signed up for cryonics (and another 24% would like to be signed up). These are much larger changes relative to the general population, where maybe 1 in 150,000 people are signed up for cryonics.

I can think of a few reasons for the discrepancy:

(a) Cryonics, existential risk, and other LessWrong-associated ideas have techy, high-IQ associations, in terms of their content and in terms of the communities that primarily endorse them. They’re tribal markers, not just attempts to maximize expected utility; and veg(etari)ans are seen as belonging to other tribes, like progressive political activists and people who just want to hug every cat.

(b) Those popular topics have been strongly endorsed and argued for by multiple community leaders appealing to emotional language and vivid prose. It’s one thing to accept cryonics and vegetarianism as abstract arguments, and another thing to actually change your lifestyle based on the argument; the latter took a lot of active pushing and promotion. (The abstract argument is important; but it’s a necessary condition for action, not a sufficient one. You can’t just say ‘I’m someone who takes ideas seriously’ and magically stop reasoning motivatedly in all contexts.)

(c) Veg(etari)anism isn’t weird and obscure enough. If you successfully sign up for cryonics, LessWrong will treat you like an intellectual and rational elite, a rare person who actually thinks clearly and acts accordingly. If you successfully donate 10% of your income to GiveWell, ditto; even though distributing deworming pills isn’t sexy and futuristic, it’s obscure enough (and supported by enough community leaders, per (b)) that it allows you to successfully signal that you’re special. If 10% of the English-speaking world donated to GiveWell or were signed up for cryonics, my guess is that LessWrongers would be too bored by those topics to rush to sign up even if the cryonics and deworming organizations had scaled up in ways that made marginal dollars more effective. Maybe you’d get 20% to sign up for cryonics, but you wouldn’t get 50% or 90%.

(d) Changing your diet is harder than spending lots of money. Where LessWrongers excel, it’s generally via one-off or sporadic spending decisions that don’t have a big impact on daily life. (‘Successfully employing CFAR techniques’ may be an exception to this rule, if it involves reinvesting effort every single day or permanently skipping out on things you enjoy; but I don’t know how many LessWrongers do that.)

If those hypotheses are right, it might be possible to shift LessWrong types more toward veganism by improving its status in the community and making the transition to veganism easier and less daunting.



What would make a transhumanist excited about this?

I’ll conclude with various ideas for bridging the motivation gap. Note that it doesn’t follow from ‘the gap is motivational’ that posting a bunch of videos of animal torture to LessWrong or the Effective Altruism Forum is the best way to stir people’s hearts. When intellectual achievement is what you trust and prize, you’re more likely to be moved to action by things that jibe with that part of your identity.
Write stunningly beautiful, rigorous, philosophically sophisticated things that are amazing and great

I’m not primarily thinking of writing really good arguments for veg(etari)anism; as I noted above, the argument is almost too clear-cut. It leaves very little to talk about in any detail, especially if we want something that hasn’t been discussed to death on LessWrong before. However, there are still topics in the vicinity to address, such as ‘What is the current state of the evidence about the nutrition of veg(etari)an diets?’ Use Slate Star Codex as a model, and do your very best to actually portray the state of the evidence, including devoting plenty of attention to any ways veg(etari)an diets might turn out to be unhealthy. (EDIT: Soylent is popular with this demographic and is switching to a vegan recipe, so it might be especially useful to evaluate its nutritional completeness and promote a supplemented Soylent diet.)

In the long run you’ll score more points by demonstrating how epistemically rational and even-handed you are than by making any object-level argument for veg(etari)anism. Not only will you thereby find out more about whether you’re wrong, but you’ll convince rationalists to take these ideas more seriously than if you gave a more one-sided argument in favor of a policy.

Fiction, done right, can serve a similar function. I could imagine someone writing a sci-fi story set in a future where humans have evolved into wildly different species with different perceived rights, thus translating animal welfare questions into a transhumanist idiom.

Just as the biggest risk with a blog post is of being too one-sided, the biggest risk with a story is of being too didactic and persuasion-focused. The goal is not to construct heavy-handed allegories; the goal is to make an actually good story, with moral conflicts you’re genuinely unsure about. Make things that would be worth reading even if you were completely wrong about animal ethics, and as a side-effect you’ll get people interested in the science, the philosophy, and the pragmatics of related causes.
Be positive and concrete

Frame animal welfare activism as an astonishingly promising, efficient, and uncrowded opportunity to do good. Scale back moral condemnation and guilt. LessWrong types can be powerful allies, but the way to get them on board is to give them opportunities to feel like munchkins with rare secret insights, not like latecomers to a not-particularly-fun party who have to play catch-up to avoid getting yelled at. It’s fine to frame helping animals as challenging, but the challenge should be to excel and do something astonishing, not to meet a bare standard for decency.

This doesn’t necessarily mean lowering your standards; if you actually demand more of LessWrongers and effective altruists than you do of ordinary people, you’ll probably do better than if you shoot for parity. If you want to change minds in a big way, think like Berwick in this anecdote from Switch:

In 2004, Donald Berwick, a doctor and the CEO of the Institute for Healthcare Improvement (IHI), had some ideas about how to save lives—massive numbers of lives. Researchers at the IHI had analyzed patient care with the kinds of analytical tools used to assess the quality of cars coming off a production line. They discovered that the ‘defect’ rate in health care was as high as 1 in 10—meaning, for example, that 10 percent of patients did not receive their antibiotics in the specified time. This was a shockingly high defect rate—many other industries had managed to achieve performance at levels of 1 error in 1,000 cases (and often far better). Berwick knew that the high medical defect rate meant that tens of thousands of patients were dying every year, unnecessarily.

Berwick’s insight was that hospitals could benefit from the same kinds of rigorous process improvements that had worked in other industries. Couldn’t a transplant operation be ‘produced’ as consistently and flawlessly as a Toyota Camry?

Berwick’s ideas were so well supported by research that they were essentially indisputable, yet little was happening. He certainly had no ability to force any changes on the industry. IHI had only seventy-five employees. But Berwick wasn’t deterred.

On December 14, 2004, he gave a speech to a room full of hospital administrators at a large industry convention. He said, ‘Here is what I think we should do. I think we should save 100,000 lives. And I think we should do that by June 14, 2006—18 months from today. Some is not a number; soon is not a time. Here’s the number: 100,000. Here’s the time: June 14, 2006—9 a.m.’

The crowd was astonished. The goal was daunting. But Berwick was quite serious about his intentions. He and his tiny team set out to do the impossible.

IHI proposed six very specific interventions to save lives. For instance, one asked hospitals to adopt a set of proven procedures for managing patients on ventilators, to prevent them from getting pneumonia, a common cause of unnecessary death. (One of the procedures called for a patient’s head to be elevated between 30 and 45 degrees, so that oral secretions couldn’t get into the windpipe.)

Of course, all hospital administrators agreed with the goal to save lives, but the road to that goal was filled with obstacles. For one thing, for a hospital to reduce its ‘defect rate,’ it had to acknowledge having a defect rate. In other words, it had to admit that some patients were dying needless deaths. Hospital lawyers were not keen to put this admission on record.

Berwick knew he had to address the hospitals’ squeamishness about admitting error. At his December 14 speech, he was joined by the mother of a girl who’d been killed by a medical error. She said, ‘I’m a little speechless, and I’m a little sad, because I know that if this campaign had been in place four or five years ago, that Josie would be fine…. But, I’m happy, I’m thrilled to be part of this, because I know you can do it, because you have to do it.’ Another guest on stage, the chair of the North Carolina State Hospital Association, said: ‘An awful lot of people for a long time have had their heads in the sand on this issue, and it’s time to do the right thing. It’s as simple as that.’

IHI made joining the campaign easy: It required only a one-page form signed by a hospital CEO. By two months after Berwick’s speech, over a thousand hospitals had enrolled. Once a hospital enrolled, the IHI team helped the hospital embrace the new interventions. Team members provided research, step-by-step instruction guides, and training. They arranged conference calls for hospital leaders to share their victories and struggles with one another. They encouraged hospitals with early successes to become ‘mentors’ to hospitals just joining the campaign.

The friction in the system was substantial. Adopting the IHI interventions required hospitals to overcome decades’ worth of habits and routines. Many doctors were irritated by the new procedures, which they perceived as constricting. But the adopting hospitals were seeing dramatic results, and their visible successes attracted more hospitals to join the campaign.

Eighteen months later, at the exact moment he’d promised to return—June 14, 2006, at 9 a.m.—Berwick took the stage again to announce the results: ‘Hospitals enrolled in the 100,000 Lives Campaign have collectively prevented an estimated 122,300 avoidable deaths and, as importantly, have begun to institutionalize new standards of care that will continue to save lives and improve health outcomes into the future.’

The crowd was euphoric. Don Berwick, with his 75-person team at IHI, had convinced thousands of hospitals to change their behavior, and collectively, they’d saved 122,300 lives—the equivalent of throwing a life preserver to every man, woman, and child in Ann Arbor, Michigan.

This outcome was the fulfillment of the vision Berwick had articulated as he closed his speech eighteen months earlier, about how the world would look when hospitals achieved the 100,000 lives goal:

‘And, we will celebrate. Starting with pizza, and ending with champagne. We will celebrate the importance of what we have undertaken to do, the courage of honesty, the joy of companionship, the cleverness of a field operation, and the results we will achieve. We will celebrate ourselves, because the patients whose lives we save cannot join us, because their names can never be known. Our contribution will be what did not happen to them. And, though they are unknown, we will know that mothers and fathers are at graduations and weddings they would have missed, and that grandchildren will know grandparents they might never have known, and holidays will be taken, and work completed, and books read, and symphonies heard, and gardens tended that, without our work, would have been only beds of weeds.’

As an added bonus, emphasizing excellence and achievement over guilt and wickedness can decrease the odds that you’ll make people feel hounded or ostracized for not immediately going vegan. I expressed this worry in Virtue, Public and Private, e.g., for people with eating disorders that restrict their dietary choices. This is also an area where ‘just be nice to people’ is surprisingly effective.

If you want to propagate a modest benchmark, consider: “After every meal where you eat an animal, donate $1 to the Humane League.” Seems like a useful way to bootstrap toward veg(etari)anism, and it fits the mix of economic mindfulness and virtue cultivation that a lot of rationalists find appealing. This sort of benchmark is forgiving without being shapeless or toothless. If you want to propagate an audacious vision for the future, consider: “There were 1200 meat-eaters on LessWrong in the 2013 survey; if we could get them to consume 30% less meat from land animals over the next 10 years, we could prevent 100,000 deaths (mostly chickens). Let’s shoot for that.” Combining an audacious vision with a simple, actionable policy should get the best results.
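As a sanity check, the 100,000 figure is consistent with a plausible baseline consumption rate. The ~28 land animals per person-year used below is my assumption (roughly in line with common estimates for American meat-eaters), not something from the survey:

```python
# Sanity-check the '100,000 deaths over 10 years' target.
meat_eaters = 1200      # meat-eating LessWrongers, per the 2013 survey
reduction = 0.30        # proposed cut in land-animal meat consumption
years = 10
animals_per_year = 28   # assumed baseline per person (mostly chickens)

deaths_prevented = meat_eaters * reduction * years * animals_per_year
print(round(deaths_prevented))  # 100800, close to the 100,000 goal
```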
Embrace weird philosophies

Here’s an example of the special flavor LessWrong-style animal activism could develop:

Are there any animal welfare groups that emphasize the abyssal otherness of the nonhuman mind? That talk about the impossible dance, the catastrophe of shapeless silence that lies behind a cute puppy dog’s eyes? As opposed to talking about how ‘sad’ or ‘loving’ the puppies are?

I think I’d have a much, much easier time talking about the moral urgency of animal suffering without my Anthropomorphism Alarms going off if I were part of a community like ‘Lovecraftians for the Ethical Treatment of Animals’.

This is philosophically sound and very relevant, since our uncertainty about animal cognition is our best reason to worry about their welfare. (This is especially true when we consider the possibility that non-humans might suffer more than any human can.) And, contrary to popular misconceptions, the Lovecraftian perspective is more about profound otherness than about nightmarish evil. Rejecting anthropomorphism makes the case for veg(etari)anism stronger; and adopting that sort of emotional distance, paradoxically, is the only way to get LessWrong types interested and the only way to build trust.


Yet when I expressed an interest in this nonstandard perspective on animal well-being, I got responses from effective animal altruists like (paraphrasing):

  • ‘Your endorsement of Lovecraftian animal rights sounds like an attack on animal rights; so here’s my defense of the importance of animal rights…’
  • ‘No, viewing animal psychology as alien and unknown is scientifically absurd. We know for a fact that dogs and chickens experience human-style suffering. (David Pearce adds: Also lampreys!)’
  • ‘That’s speciesist!’

Confidence about animal psychology (in the direction of ‘it’s relevantly human-like’) and extreme uncertainty about animal psychology can both justify prioritizing animal welfare; but when you’re primarily accustomed to seeing uncertainty about animal psychology used as a rationalization for neglecting animals, it will take increasing amounts of effort to keep the policy proposal and the question-of-fact mentally distinct. Encourage more conceptual diversity and pursue more lines of questioning for their own sake, and you end up with a community that’s able to benefit more from cross-pollination with transhumanists and mainline effective altruists and, further, one that’s epistemically healthier.

[Update 9/10/15: I’ve updated and clarified my views on this topic at Revenge of the Meat People!]




  1. I’m not even sure you need premise #5:

  2. orthonormal

    This post is giving me a bigger System 1 push toward veganism than anything else I’ve ever seen. Vegan rationalists might want to take note.

  3. blacktrance

    I reject Premises 3 and 5. Perhaps more world utility would be produced if I abstained from eating meat, but that’s not enough reason for me to not eat it. For me to stop eating meat, it would have to be the case that abstaining from eating it would produce more utility *for me*, which isn’t the case. As for Premise 5, regardless of what I’d want rational agents to choose, I can’t affect that by my choice, so the effects of what I do on net animal suffering are negligible.

    • Re 3: Do you strongly prefer that intelligent aliens not undergo large amounts of nonconsensual suffering? (This should help clarify the point of disagreement. E.g., if you just don’t care about other people, then my arguments won’t be relevant to you because they’re mostly directed at aspiring effective altruists.)

      Re 5: Would you defect in a Prisoner’s Dilemma against a perfect atom-by-atom replica of yourself?

      • blacktrance

        Re 3: I care about it, but that has to be weighed against my preference for the benefits of the activity that causes suffering as a side effect. When comparing my utility for me eating meat and me not eating meat, I get higher utility when I eat meat (even if I had some tiny effect on reducing animal suffering), even if there’d be a net world increase in utility if I didn’t eat meat.

        Re 5: If it were an atom-for-atom replica of me, there’d be a kind-of causal connection between myself and it, because, being my replica, it would reach the same conclusions that I would reach. But I don’t have this kind of influence on rational agents in general.

        • m

          “I get higher utility when I eat meat”
          How certain are you of that? And how much surplus utility do you gain per month from eating meat versus eating non-meat foods? (Could you give an estimate in comparative terms to some other things that you value?) Have you ever tried not eating meat for some longer duration of time? Say two months. If not, you could try it and introspect if something changed during the experiment. Even if you start out the experiment with somewhat lower experiential wellbeing at mealtimes you may see compensating utility gains from the intellectual challenge of the experience if you are a person who can take joy in such challenges. After two months you could always switch back.

          • m

            “of the experience” –> “of the experiment”

          • blacktrance

            I’m a picky eater and it’s difficult enough for me to find food I’m willing to eat, and most of it contains meat. If I cut out meat from my diet, I’d starve (not literally, but it would be a significant inconvenience).

  4. Taking premise 2 seriously—for these interpretations of “large amounts” and “huge”—seems to rest on the act-omission distinction (which many EAs may reject).

    (Premise 5 seems to be irrelevant.)

    • I originally added 5 (“Even if my eating a single hamburger has negligible direct causal impact on factory farming, I should select the action I’d want a large number of similar agents to select.”) to explain why veganism might be preferable to e.g. ‘only consuming animal products when they’d otherwise go to waste’ and to make it clear you’re deciding for all your future selves (so the amount of suffering at stake isn’t negligible). But since the rest of the argument is sketched so quickly and informally, I agree 5 is given too much weight; I’ll remove it.

      Re 2, I’m assuming that $100 is too low to be worth the cost to human welfare (and, similarly, that the taste and nutritional value of a lifetime of hamburgers isn’t worth the suffering of that many cattle). I’m not assuming it would be any better to passively allow humans or cattle to be tortured, or that actively torturing an agent has infinite disutility.

      • You can probably agree there is some price P you have to pay to reduce cattle suffering by one-steak’s-worth. My best guess is that this amount is a relatively small fraction (< 20%) of the price of the meat, even if you make very conservative assumptions about available options to reduce cattle suffering. That extra cost often doesn't tip the scales for me. (Note: I'm excluding make-people-vegetarian interventions here, since those have other complications and don't stand up under conservative assumptions.)

        I'm fine with vegetarianism for signaling reasons, for psychological reasons, as a way of eating frugally, or out of respect for non-utilitarian views. But I do think you should have "they are effectiveness-minded utilitarians, who don't have a psychological need for bright lines" on the list of possible explanations.

        This keeps coming up, so I will probably write a post about the price P of animal suffering on personal time.

        Pragmatically, I would really prefer a focus on shifting to low-suffering meat than on shifting to veganism. I think this is much more cost-effective, but the biggest reason is actually signaling—I would rather EA be associated with an unusual and cost-effective thing than a common and ineffective thing. The two are attractive to different audiences, but one audience seems more worth attracting.

        I also don't understand the analogy to cryonics. Cryonics is a (primarily) selfish expenditure, which most people don't do because it is weird and hard to defend in conversation. Vegetarianism is a relatively common altruistic sacrifice, which most people don't make because they don't care, it's easy to rationalize away, and they aren't under social pressure to change.

          • Supposing the numbers are accurate, and I expect to live 50 more years, I can prevent 1500+ land animal deaths by abstaining from chicken, beef, etc. I’m not super confident the nutrition, taste, and convenience of eating meat over my lifetime improves my wellbeing enough to outweigh the expected suffering of even a single factory-farmed chicken; and it’s not obvious to me that veganism would sap my productivity or reduce how much I give to other causes. So 1500 seems like a utilitarian steal to me.
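The 1500 figure follows from an assumed rate of roughly 30 land animals eaten per person-year, mostly chickens; the rate is an assumption, not a figure stated in the thread:

```python
# Reproduce the '1500+ land animal deaths over 50 years' estimate.
years_remaining = 50
land_animals_per_year = 30   # assumed baseline, mostly chickens

lifetime_animals = years_remaining * land_animals_per_year
print(lifetime_animals)  # 1500
```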

          Why do you think popularizing low-suffering meat is more cost-effective? (Aside from the fact that some people will just naturally find that more appealing, and those people aren’t being targeted enough.)

          The main defenses of meat-eating I’ve seen recently come from people who strongly advocate cryonics and effective altruism but say we can’t be confident non-human animals suffer. I’m pointing to the tension between suggesting it’s OK to eat animals if the odds they’re conscious are <50%, and endorsing cryonics even when success is orders of magnitude less certain. I agree that if you don’t care about others’ suffering, this should be less convincing.

          • When you abstain from eating meat, you suffer a cost. Similarly, if you save $5 by eating a cheaper meal, and donate that $5 to do something good, you suffer a cost. Cost-effectiveness means having as large an effect as possible for a given cost.

            My claims were:

            1) Low-suffering meat is significantly more cost-effective than avoiding meat altogether (you avoid the vast majority of the animal welfare harms, for a fraction of the cost), and in general reducing meat is more cost-effective than being vegetarian. This seems intuitive: if some meat involves more suffering per benefit to the eater, then cutting out the least favorable meat is obviously the most cost-effective change. (It might still be a good idea to cut out the rest of your meat.)

            If you actually care about the “1500 land animals” figure, then you can reduce it by 95% (if I’m reading your source correctly) by cutting out poultry (which represents a third of American meat consumption). Citing “number of animals” seems to be a marketing thing, but the same point holds for more meaningful notions of suffering. For example, I think that cutting out almost all of beef feedlotting increases the price by something like 25%, but as far as I can tell the resulting quality of life is basically the same as in nature.

            I think the people who will find this more appealing are the people who care about efficiently making a difference. That seems like a good demographic to appeal to.
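The asymmetry behind the “cut poultry, cut 95%” claim above is that poultry dominates by headcount while being only about a third of meat by weight. A sketch with assumed illustrative per-capita counts (my numbers, chosen to make the shares come out as the comment describes, not figures from the thread’s source):

```python
# Assumed illustrative per-capita land animals eaten per year.
# Chickens dominate the headcount because each bird yields so little meat,
# even though poultry is only ~1/3 of meat consumption by weight.
counts = {"chickens": 28.5, "pigs": 0.8, "cattle": 0.4, "other": 0.3}

total = sum(counts.values())                   # 30.0 animals/year
poultry_share = counts["chickens"] / total     # fraction of headcount
print(round(poultry_share, 3))  # 0.95
```

So under these assumptions, dropping one category of meat removes 95% of the animal-count figure, which is why “reduce” can be far more cost-effective per unit of sacrifice than “eliminate.”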

            2) In absolute terms, vegetarianism does not seem cost-effective in terms of “altruistic benefit / personal cost,” unless you don’t much like meat and aren’t much concerned about the nutrition issues. Again, if you reject the act-omission distinction, it is obvious that you can donate some amount of money to offset the damages caused by eating meat. My claim was that this is a relatively small fraction of the price of a meaty meal. If so, and if you would still sometimes buy meaty meals if they were marked up, then you would be better off eating your meaty meal and donating the “markup.”

            I think this discussion should be about how expensive it is to reduce animal suffering in other ways. You might be skeptical of my earlier 20% claim, for example.

            Personally, I don’t think animal welfare is a high-priority cause (because I think that stuff happening today mostly matters via making the world better overall, rather than via the immediate moral significance, and that animal welfare has a modest effect on overall world-bettering). So I can do even better by supporting higher-priority causes. But I grant that others might feel differently, and for them donating to reduce animal suffering might be the end of the line.

            Now you might be arguing “veganism has no cost.” But (1) that’s a different argument, which you aren’t making; (2) I disagree; (3) given that vegetarian food is cheaper, this would amount to claiming either that the price difference is precisely balanced for all people at once (implausible) or that people are consistently making errors when they invest in more expensive food.
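The “donate the markup” argument above can be made concrete with a toy decision rule. All three dollar figures below are my assumptions for illustration, not numbers from the thread:

```python
# Hypothetical figures sketching the "eat the meal and donate the markup"
# argument. If an offset donation fully compensates the harms of one meaty
# meal, and you'd still buy the meal at the marked-up price, then (for
# someone who rejects the act/omission distinction) eating it plus donating
# dominates abstaining.
meal_price = 5.00          # assumed price of the meaty meal ($)
offset_donation = 0.30     # assumed donation that offsets the meal's harms ($)
willingness_to_pay = 6.00  # assumed value of the meal to you ($)

if willingness_to_pay >= meal_price + offset_donation:
    print("eat the meal and donate the markup")
```

The whole argument, of course, hinges on the empirical claim that such a cheap, reliable offset donation actually exists, which is exactly what the rest of the thread disputes.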

            • Ah, I understand your argument now. I think that’s a good rebuttal, assuming there’s a good chance that the food cost difference actually goes to an EA cause. I’ll note it above. If you write a blog post on the topic, or have a good reference for the claim that there are no realistic cheap vegan/vegetarian diets, I’ll add a link too.

              If a vegan diet inevitably costs more money than a meat-eating one, or reduces your productivity, I agree there’s a good EA case for eating meat. (The feasibility of soylent may be relevant here; soylent is cheap, and the new recipe is vegan.) I’ve heard arguments in both directions on how humane allegedly humane meat is, and how realistic it is to promote that option.

              Initially I thought you were arguing that eating meat is justifiable (for utilitarians) if veganism costs them too much happiness, whether or not it affects their ability to improve global utility. We shouldn’t collapse all of those costs into a single “personal cost” number that gets weighed against “altruistic benefit”. Utilitarian EAs should be trying to maximize altruistic benefit; we can ignore the question of whether we’ve found an altruistic-benefit-to-personal-cost ratio that feels fair, or only think about it when it’s an instrumentally useful way of motivating yourself. What’s attractive about veganism is that, if it can be done low-cost, it may allow you to have a substantial humanitarian impact without diverting resources from any of the other good things you do; it’s like getting to give to an extra, not-terrible charity for free.

            • To clarify my position a bit: I’m very confident it would be a utilitarian net-loss to make meat-eaters feel hounded and unwelcome on LW; I’m moderately confident it would be a utilitarian net-gain for most LWers to switch to more humane diets; and I tentatively believe there exist some LWers such that it would be a utilitarian net-loss for them to become vegans. These views may change if I decide ‘humane meat is a viable alternative’ (e.g., it’s cost-effective and has a chance of catching on) and/or ‘donating the extra-money-veganism-would-cost to an EA cause is a viable alternative’ (e.g., it’s not just a rationalization to do less good-in-general).

              I also agree with you that, social signaling benefits aside, ‘should I stop eating mammals?’ is not in the top 50 questions an effective altruist should spend time worrying about. It’s low-impact stuff (even compared to other animal interventions), and you’re unlikely to learn about other high-impact stuff in the process. I wouldn’t have brought up this issue if no one were talking about it; but given that a lot of people are talking about it, I think there’s value in giving the right utilitarian answer here, to prove we can take ideas seriously and be humane even when we don’t get Special Snowflake Contrarian Fuzzies out of it.

              I also agree that socially signaling non-(act)-utilitarian moral sentiment is part of why this matters. Moral character matters; e.g., I don’t think any community can survive if its members expect other members to happily harm them in serious ways the moment it’s a small utilitarian gain to do so. The slavery analogy seems right to me: If you’re an 18th-century member of a nascent American effective altruist community , there’s something corrosive-of-the-movement and corrosive-of-the-soul about relying on human slavery in your earning to give. That’s not to say that I wouldn’t contribute to slavery even to save the world; but the soul of the movement is a real thing that we should also factor into our calculations, and there’s a lot to be said for what Holden calls ‘standard ethics’ (which I take to mean injunctions like ‘try extra hard not to torture and enslave people’, not necessarily traditional/historical moral norms):

              “In general, I try to behave as I would like others to behave: I try to perform very well on “standard” generosity and ethics, and overlay my more personal, debatable, potentially-biased agenda on top of that rather than in replacement of it. I wouldn’t steal money to give it to our top charities; I wouldn’t skip an important family event (even one that had little meaning for me) in order to save time for GiveWell work; and along the same lines, I want to give some substantial portion of my money each year in a way that (a) aims at helping the less fortunate (b) can’t be confused with aiming to benefit myself or those in my circle. In addition, cutting a check for the sole purpose of helping others is something I want to stay practiced at.”

              The signaling value isn’t purely non-epistemic, either. For every slave-owner whose behaviors you defend who really is acting completely rationally and completely altruistically, there are many slave-owners interested in EA who are relying on motivated reasoning and dubious hypotheses to maintain their habits. (Though I grant your point that many of the abolitionists have ridiculous epistemologies too, and turning the 18th-century EA community into just another abolitionist organization isn’t necessarily the best way to maximize utility.)

  5. Avi

    I’m a selfish agent. So I disagree with premise 3.

    I don’t see anything wrong with eating possibly sentient animals. I wouldn’t see anything wrong with eating definitely sentient animals. If I lived in a society that ate people and viewed that as natural, I wouldn’t have any moral objections to that either. If my current society decided as a whole that eating animals was unnatural, I would probably stop eating animals due to social pressure, but I don’t feel enough pressure currently. (Although that should NOT be read as saying that a large amount of pressure put on me now would get me to change. The likely reaction would be rebellion. My whole society would have to change.)

  6. I think the “animals don’t have feelings” objection is usually rationalisation. The implication is that non-human animals should get literally zero moral weight — so if someone wants to torture puppies to death for sheer amusement, then it’s misplaced anthropomorphism to condemn them. It would be like condemning someone for running down NPCs in GTA. I seldom find people willing to take that horn of the dilemma.

    • Honnibal: I’ll bite on that one. I think that “liking to cause suffering” is unvirtuous and not a habit that should be built, but I do think that animals get zero moral weight and I am indifferent to puppies being tortured.

      • If you agree that it’s actually “suffering”, then you must disagree that it’s suffering which is morally relevant — i.e. you’re not a utilitarian?

        • Personally, I tend to identify as more of a virtue ethics person–“enjoying the suffering of others” is unvirtuous so I don’t want to be a person who does it.

          Politically, I’m consequentialist but I’m not sure I really qualify as “utilitarian.” Either way, I don’t really count animals in my moral universe.

  7. yboris

    Reblogged this on YBoris.

  8. Robert Dogg

    You need more scientific racism for LessWrong to upvote this.

  9. Dan

    15% vegetarian strikes me as a relatively high percentage. The base rate seems to be around 5% (attempting to combine various surveys into a single estimate). That means that the difference between LW and the general population for vegetarianism is probably larger than it is for signing up for cryonics or giving a lot to EA charities (although the ratio is necessarily smaller, because the latter two activities are so much rarer in the general population).

  10. 27chaos

    If you don’t even know what problems consciousness exists to solve, why do you think you understand it well enough to say that it is important or relevant to the ethics of animal suffering?


  1. Revenge of the Meat People! | nothing is mere
  2. Why Go Vegan? – A Diary for the End
