Brienne recently wrote that most LessWrongers and effective altruists eat meat because they haven’t yet been persuaded that non-human animals can experience suffering:
Vegans: If the meat eaters believed what you did about animal sentience, most of them would be vegans, and they would be horrified by their many previous murders. Your heart-wrenching videos aren’t convincing to them because they aren’t already convinced that animals can feel.
Meat-eaters: Vegans think there are billions of times more people on this planet than you do, they believe you’re eating a lot of those people, and they care about every one of them the way you care about every human. […]
Finally, let me tell you about what happens when you post a heart-wrenching video of apparent animal suffering: It works, if the thing you’re trying to do is make me feel terrible. My brain anthropomorphizes everything at the slightest provocation. Pigs, cows, chickens, mollusks, worms, bacteria, frozen vegetables, and even rocks. And since I know that it’s quite easy to get me to deeply empathize with a pet rock, I know better than to take those feelings as evidence that the apparently suffering thing is in fact suffering. If you posted videos of carrots in factory farms and used the same phrases to describe their miserable lives and how it’s all my fault for making the world this terrible place where oodles of carrots are murdered constantly, I’d feel the same way. So these arguments do not tend to be revelatory of truth.
I’ve argued before that non-human animals’ abilities to self-monitor, learn, collaborate, play, etc. aren’t clear evidence that they have a subjective, valenced point of view on the world. Until we’re confident we know what specific physical behaviors ‘having a subjective point of view’ evolved to produce — what cognitive problem phenomenal consciousness solves — we can’t confidently infer consciousness from the overt behaviors of infants, non-human animals, advanced AI, anesthetized humans, etc.
[I]f you work on AI, and have an intuition that a huge variety of systems can act ‘intelligently’, you may doubt that the linkage between human-style consciousness and intelligence is all that strong. If you think it’s easy to build a robot that passes various Turing tests without having full-fledged first-person experience, you’ll also probably (for much the same reason) expect a lot of non-human species to arrive at strategies for intelligently planning, generalizing, exploring, etc. without invoking consciousness. (Especially if [you think consciousness is very complex]. Evolution won’t put in the effort to make a brain conscious unless it’s extremely necessary for some reproductive advantage.)
That said, I don’t think any of this is even superficially an adequate justification for torturing, killing, and eating human infants, intelligent aliens, or cattle.
The intellectual case against meat-eating is pretty air-tight
To argue from ‘we don’t understand the cognitive basis for consciousness’ to ‘it’s OK to eat non-humans’ is acting as though our ignorance were positive knowledge we could confidently set down our weight on. Even if you have a specific cognitive model that predicts ‘there’s an 80% chance cattle can’t suffer,’ you have to be just as cautious as you’d be about torturing a 20%-likely-to-be-conscious person in a non-vegetative coma, or a 20%-likely-to-be-conscious alien. And that’s before factoring in your uncertainty about the arguments for your model.
The argument for not eating cattle, chickens, etc. is very simple:
1. An uncertainty-about-animals premise, e.g.: We don’t know enough about how cattle cognize, and about what kinds of cognition make things moral patients, to assign a less-than-1-in-20 subjective probability to ‘factory-farmed cattle undergo large quantities of something-morally-equivalent-to-suffering’.
2. An altruism-in-the-face-of-uncertainty premise, e.g.: You shouldn’t do things that have a 1-in-20 (or greater) chance of contributing to large amounts of suffering, unless the corresponding gain is huge. E.g., you shouldn’t accept $100 to flip a switch that 95% of the time does nothing and 5% of the time nonconsensually tortures an adult human for 20 minutes.
3. An eating-animals-doesn’t-have-enormous-benefits premise.
4. An eating-animals-is-causally-linked-to-factory-farming premise.
5. So don’t eat the animals in question.
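Premise 2 is essentially an expected-value claim. As a toy illustration (a sketch in Python, using only the illustrative numbers already given in the premises, not empirical estimates):

```python
# Toy expected-harm calculation behind premise 2's switch example.
# The 5% probability and 20-minute duration are the illustrative
# figures from the premise above, not empirical estimates.

def expected_torture_minutes(p_conscious, minutes_if_conscious):
    """Expected minutes of something-morally-equivalent-to-torture per action."""
    return p_conscious * minutes_if_conscious

# Flipping the switch: 5% chance of 20 minutes of nonconsensual torture.
per_flip = expected_torture_minutes(0.05, 20)
print(f"Expected torture per switch-flip: {per_flip} minute(s)")
```

Each $100 flip carries a full expected minute of torture, which is the sense in which the gain on offer has to be 'huge' before premise 2 permits the gamble.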
This doesn’t require us to indulge in anthropomorphism or philosophical speculation. And Brienne’s updates to her post suggest she now agrees a lot of meat-eaters we know assign a non-negligible probability to ‘cattle can suffer’. (Also, kudos to Brienne on not only changing her mind about an emotionally fraught issue extremely rapidly, but also changing the original post. A lot of rationalists who are surprisingly excellent at updating their beliefs don’t seem to fully appreciate the value of updating the easy-to-Google public record of their beliefs to cut off the spread of falsehoods.)
This places intellectually honest meat-eating effective altruists in a position similar to Richard Dawkins’:
[I’m] in a very difficult moral position. I think you have a very, very strong point when you say that anybody who eats meat has a very, very strong obligation to think seriously about it. And I don’t find any very good defense. I find myself in exactly the same position as you or I would have been — well, probably you wouldn’t have been, but I might have been — 200 years ago, talking about slavery. […T]here was a time when it was simply the norm. Everybody did it. Some people did it with gusto and relish; other people, like Jefferson, did it reluctantly. I would have probably done it reluctantly. I would have sort of just gone along with what society does. It was hard to defend then, yet everybody did it. And that’s the sort of position I find myself in now. […] I live in a society which is still massively speciesist. Intellectually I recognize that, but I go along with it the same way I go along with celebrating Christmas and singing Christmas carols.
Until I see solid counter-arguments — not just counter-arguments to ‘animals are very likely conscious,’ but to the much weaker formulation needed to justify veg(etari)anism — I’ll assume people are mostly eating meat because it’s tasty and convenient and accepted-in-polite-society, not because they’re morally indifferent to torturing puppies behind closed doors.
Why isn’t LessWrong extremely veg(etari)an?
On the face of it, LessWrong ought to be leading the pack in veg(etari)anism. A lot of LessWrong’s interests and values look like they should directly cash out in a concern for animal welfare:
transhumanism and science fiction: If you think aliens and robots and heavily modified posthumans can be moral patients, you should be more open to including other nonhumans in your circle of concern.
superrationality: Veg(etari)anism benefits from an ability to bind my future self to my commitments, and from a Kantian desire to act as I’d want other philosophically inclined people in my community to act.
probabilism: If you can reason with uncertainty and resist the need for cognitive closure, you’ll be more open to the uncertainty argument.
utilitarianism: Animal causes are admirably egalitarian and scope-sensitive.
taking ideas seriously: If you’re willing to accept inconvenient conclusions even when they’re based in abstract philosophy, that gives more power to theoretical arguments for worrying about animal cognition even if you can’t detect or imagine that cognition yourself.
distrusting the status quo: Veg(etari)anism remains fairly unpopular, and societal inertia is an obvious reason why.
distrusting ad-hoc intuitions: It may not feel desperately urgent to stop buying hot dogs, but you shouldn’t trust that intuition, because it’s self-serving and vulnerable to e.g. status quo bias. This is a lot of how LessWrong goes about ‘taking ideas seriously’; one should ‘shut up and multiply’ even when a conclusion is counter-intuitive.
Yet only about 15% of LessWrong is vegetarian (compared to 4-13% of the Anglophone world, depending on the survey). By comparison, the average ‘effective altruist’ LessWronger donated $2503 to charity in 2013; 9% of LessWrongers have been to a CFAR class; and 4% of LessWrongers are signed up for cryonics (and another 24% would like to be signed up). These are much larger changes relative to the general population, where maybe 1 in 150,000 people are signed up for cryonics.
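To make the size of these discrepancies concrete, here's a rough sketch (in Python) of how much each practice is over-represented on LessWrong, using the survey figures and general-population baselines quoted above:

```python
# Rough relative-prevalence sketch using the survey figures quoted above.
# General-population baselines are the post's own rough estimates.

def uplift(lw_rate, baseline_rate):
    """How many times more common a practice is on LessWrong than baseline."""
    return lw_rate / baseline_rate

# Vegetarianism: 15% of LessWrong vs. 4-13% of the Anglophone world.
veg_uplift_low = uplift(0.15, 0.13)   # against the high-end baseline
veg_uplift_high = uplift(0.15, 0.04)  # against the low-end baseline

# Cryonics: 4% of LessWrong vs. roughly 1 in 150,000 in the general population.
cryo_uplift = uplift(0.04, 1 / 150_000)

print(f"Vegetarianism: {veg_uplift_low:.1f}x to {veg_uplift_high:.1f}x more common")
print(f"Cryonics: {cryo_uplift:,.0f}x more common")
```

Vegetarianism is at most a few times more common on LessWrong than in the Anglophone world at large, while cryonics enrollment is on the order of thousands of times more common. That is the size of the gap the hypotheses below try to explain.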
I can think of a few reasons for the discrepancy:
(a) Cryonics, existential risk, and other LessWrong-associated ideas have techy, high-IQ associations, in terms of their content and in terms of the communities that primarily endorse them. They’re tribal markers, not just attempts to maximize expected utility; and veg(etari)ans are seen as belonging to other tribes, like progressive political activists and people who just want to hug every cat.
(b) Those popular topics have been strongly endorsed and argued for by multiple community leaders appealing to emotional language and vivid prose. It’s one thing to accept cryonics and vegetarianism as abstract arguments, and another thing to actually change your lifestyle based on the argument; the latter took a lot of active pushing and promotion. (The abstract argument is important; but it’s a necessary condition for action, not a sufficient one. You can’t just say ‘I’m someone who takes ideas seriously’ and magically stop reasoning motivatedly in all contexts.)
(c) Veg(etari)anism isn’t weird and obscure enough. If you successfully sign up for cryonics, LessWrong will treat you like an intellectual and rational elite, a rare person who actually thinks clearly and acts accordingly. If you successfully donate 10% of your income to GiveWell, ditto; even though distributing deworming pills isn’t sexy and futuristic, it’s obscure enough (and supported by enough community leaders, per (b)) that it allows you to successfully signal that you’re special. If 10% of the English-speaking world donated to GiveWell or were signed up for cryonics, my guess is that LessWrongers would be too bored by those topics to rush to sign up even if the cryonics and deworming organizations had scaled up in ways that made marginal dollars more effective. Maybe you’d get 20% to sign up for cryonics, but you wouldn’t get 50% or 90%.
(d) Changing your diet is harder than spending lots of money. Where LessWrongers excel, it’s generally via one-off or sporadic spending decisions that don’t have a big impact on your daily life. (‘Successfully employing CFAR techniques’ may be an exception to this rule, if it involves reinvesting effort every single day or permanently skipping out on things you enjoy; but I don’t know how many LessWrongers do that.)
If those hypotheses are right, it might be possible to shift LessWrong types more toward veganism by improving its status in the community and making the transition to veganism easier and less daunting.
What would make a transhumanist excited about this?
I’ll conclude with various ideas for bridging the motivation gap. Note that it doesn’t follow from ‘the gap is motivational’ that posting a bunch of videos of animal torture to LessWrong or the Effective Altruism Forum is the best way to stir people’s hearts. When intellectual achievement is what you trust and prize, you’re more likely to be moved to action by things that jibe with that part of your identity.
Write stunningly beautiful, rigorous, philosophically sophisticated things that are amazing and great
I’m not primarily thinking of writing really good arguments for veg(etari)anism; as I noted above, the argument is almost too clear-cut. It leaves very little to talk about in any detail, especially if we want something that hasn’t been discussed to death on LessWrong before. However, there are still topics in the vicinity to address, such as ‘What is the current state of the evidence about the nutrition of veg(etari)an diets?’ Use Slate Star Codex as a model, and do your very best to actually portray the state of the evidence, including devoting plenty of attention to any ways veg(etari)an diets might turn out to be unhealthy. (EDIT: Soylent is popular with this demographic and is switching to a vegan recipe, so it might be especially useful to evaluate its nutritional completeness and promote a supplemented Soylent diet.)
In the long run you’ll score more points by demonstrating how epistemically rational and even-handed you are than by making any object-level argument for veg(etari)anism. Not only will you thereby find out more about whether you’re wrong, but you’ll convince rationalists to take these ideas more seriously than if you gave a more one-sided argument in favor of a policy.
Fiction, done right, can serve a similar function. I could imagine someone writing a sci-fi story set in a future where humans have evolved into wildly different species with different perceived rights, thus translating animal welfare questions into a transhumanist idiom.
Just as the biggest risk with a blog post is of being too one-sided, the biggest risk with a story is of being too didactic and persuasion-focused. The goal is not to construct heavy-handed allegories; the goal is to make an actually good story, with moral conflicts you’re genuinely unsure about. Make things that would be worth reading even if you were completely wrong about animal ethics, and as a side-effect you’ll get people interested in the science, the philosophy, and the pragmatics of related causes.
Be positive and concrete
Frame animal welfare activism as an astonishingly promising, efficient, and uncrowded opportunity to do good. Scale back moral condemnation and guilt. LessWrong types can be powerful allies, but the way to get them on board is to give them opportunities to feel like munchkins with rare secret insights, not like latecomers to a not-particularly-fun party who have to play catch-up to avoid getting yelled at. It’s fine to frame helping animals as challenging, but the challenge should be to excel and do something astonishing, not to meet a bare standard for decency.
This doesn’t necessarily mean lowering your standards; if you actually demand more of LessWrongers and effective altruists than you do of ordinary people, you’ll probably do better than if you shoot for parity. If you want to change minds in a big way, think like Berwick in this anecdote from Switch:
In 2004, Donald Berwick, a doctor and the CEO of the Institute for Healthcare Improvement (IHI), had some ideas about how to save lives—massive numbers of lives. Researchers at the IHI had analyzed patient care with the kinds of analytical tools used to assess the quality of cars coming off a production line. They discovered that the ‘defect’ rate in health care was as high as 1 in 10—meaning, for example, that 10 percent of patients did not receive their antibiotics in the specified time. This was a shockingly high defect rate—many other industries had managed to achieve performance at levels of 1 error in 1,000 cases (and often far better). Berwick knew that the high medical defect rate meant that tens of thousands of patients were dying every year, unnecessarily.
Berwick’s insight was that hospitals could benefit from the same kinds of rigorous process improvements that had worked in other industries. Couldn’t a transplant operation be ‘produced’ as consistently and flawlessly as a Toyota Camry?
Berwick’s ideas were so well supported by research that they were essentially indisputable, yet little was happening. He certainly had no ability to force any changes on the industry. IHI had only seventy-five employees. But Berwick wasn’t deterred.
On December 14, 2004, he gave a speech to a room full of hospital administrators at a large industry convention. He said, ‘Here is what I think we should do. I think we should save 100,000 lives. And I think we should do that by June 14, 2006—18 months from today. Some is not a number; soon is not a time. Here’s the number: 100,000. Here’s the time: June 14, 2006—9 a.m.’
The crowd was astonished. The goal was daunting. But Berwick was quite serious about his intentions. He and his tiny team set out to do the impossible.
IHI proposed six very specific interventions to save lives. For instance, one asked hospitals to adopt a set of proven procedures for managing patients on ventilators, to prevent them from getting pneumonia, a common cause of unnecessary death. (One of the procedures called for a patient’s head to be elevated between 30 and 45 degrees, so that oral secretions couldn’t get into the windpipe.)
Of course, all hospital administrators agreed with the goal to save lives, but the road to that goal was filled with obstacles. For one thing, for a hospital to reduce its ‘defect rate,’ it had to acknowledge having a defect rate. In other words, it had to admit that some patients were dying needless deaths. Hospital lawyers were not keen to put this admission on record.
Berwick knew he had to address the hospitals’ squeamishness about admitting error. At his December 14 speech, he was joined by the mother of a girl who’d been killed by a medical error. She said, ‘I’m a little speechless, and I’m a little sad, because I know that if this campaign had been in place four or five years ago, that Josie would be fine…. But, I’m happy, I’m thrilled to be part of this, because I know you can do it, because you have to do it.’ Another guest on stage, the chair of the North Carolina State Hospital Association, said: ‘An awful lot of people for a long time have had their heads in the sand on this issue, and it’s time to do the right thing. It’s as simple as that.’
IHI made joining the campaign easy: It required only a one-page form signed by a hospital CEO. By two months after Berwick’s speech, over a thousand hospitals had enrolled. Once a hospital enrolled, the IHI team helped the hospital embrace the new interventions. Team members provided research, step-by-step instruction guides, and training. They arranged conference calls for hospital leaders to share their victories and struggles with one another. They encouraged hospitals with early successes to become ‘mentors’ to hospitals just joining the campaign.
The friction in the system was substantial. Adopting the IHI interventions required hospitals to overcome decades’ worth of habits and routines. Many doctors were irritated by the new procedures, which they perceived as constricting. But the adopting hospitals were seeing dramatic results, and their visible successes attracted more hospitals to join the campaign.
Eighteen months later, at the exact moment he’d promised to return—June 14, 2006, at 9 a.m.—Berwick took the stage again to announce the results: ‘Hospitals enrolled in the 100,000 Lives Campaign have collectively prevented an estimated 122,300 avoidable deaths and, as importantly, have begun to institutionalize new standards of care that will continue to save lives and improve health outcomes into the future.’
The crowd was euphoric. Don Berwick, with his 75-person team at IHI, had convinced thousands of hospitals to change their behavior, and collectively, they’d saved 122,300 lives—the equivalent of throwing a life preserver to every man, woman, and child in Ann Arbor, Michigan.
This outcome was the fulfillment of the vision Berwick had articulated as he closed his speech eighteen months earlier, about how the world would look when hospitals achieved the 100,000 lives goal:
‘And, we will celebrate. Starting with pizza, and ending with champagne. We will celebrate the importance of what we have undertaken to do, the courage of honesty, the joy of companionship, the cleverness of a field operation, and the results we will achieve. We will celebrate ourselves, because the patients whose lives we save cannot join us, because their names can never be known. Our contribution will be what did not happen to them. And, though they are unknown, we will know that mothers and fathers are at graduations and weddings they would have missed, and that grandchildren will know grandparents they might never have known, and holidays will be taken, and work completed, and books read, and symphonies heard, and gardens tended that, without our work, would have been only beds of weeds.’
As an added bonus, emphasizing excellence and achievement over guilt and wickedness can decrease the odds that you’ll make people feel hounded or ostracized for not immediately going vegan. I expressed this worry in Virtue, Public and Private, e.g., for people with eating disorders that restrict their dietary choices. This is also an area where ‘just be nice to people’ is surprisingly effective.
If you want to propagate a modest benchmark, consider: “After every meal where you eat an animal, donate $1 to the Humane League.” Seems like a useful way to bootstrap toward veg(etari)anism, and it fits the mix of economic mindfulness and virtue cultivation that a lot of rationalists find appealing. This sort of benchmark is forgiving without being shapeless or toothless. If you want to propagate an audacious vision for the future, consider: “There were 1200 meat-eaters on LessWrong in the 2013 survey; if we could get them to consume 30% less meat from land animals over the next 10 years, we could prevent 100,000 deaths (mostly chickens). Let’s shoot for that.” Combining an audacious vision with a simple, actionable policy should get the best results.
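The audacious vision’s arithmetic holds up under a plausible consumption estimate. Here’s a quick sanity check (in Python; the per-person figure of 28 land animals per year is my assumption, roughly in line with common estimates of per-capita US consumption, and is not from the survey):

```python
# Sanity-checking the '100,000 deaths prevented' arithmetic.
# Survey figure: ~1200 meat-eaters on LessWrong in 2013.
# ASSUMPTION: each eats roughly 28 land animals per year (mostly chickens);
# this number is an illustrative estimate, not a figure from the post.

meat_eaters = 1200
animals_per_person_per_year = 28  # assumed consumption rate
reduction = 0.30                  # the proposed 30% cut
years = 10

deaths_prevented = meat_eaters * animals_per_person_per_year * reduction * years
print(f"Animals spared over {years} years: {deaths_prevented:,.0f}")
```

That works out to about 100,800 animals, matching the round 100,000 target, and it implies each meat-eater forgoes only about 84 animals over the whole decade, which helps gauge how achievable the audacious-sounding goal actually is.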
Embrace weird philosophies
Here’s an example of the special flavor LessWrong-style animal activism could develop:
Are there any animal welfare groups that emphasize the abyssal otherness of the nonhuman mind? That talk about the impossible dance, the catastrophe of shapeless silence that lies behind a cute puppy dog’s eyes? As opposed to talking about how ‘sad’ or ‘loving’ the puppies are?
I think I’d have a much, much easier time talking about the moral urgency of animal suffering without my Anthropomorphism Alarms going off if I were part of a community like ‘Lovecraftians for the Ethical Treatment of Animals’.
This is philosophically sound and very relevant, since our uncertainty about animal cognition is our best reason to worry about their welfare. (This is especially true when we consider the possibility that non-humans might suffer more than any human can.) And, contrary to popular misconceptions, the Lovecraftian perspective is more about profound otherness than about nightmarish evil. Rejecting anthropomorphism makes the case for veg(etari)anism stronger; and adopting that sort of emotional distance, paradoxically, is the only way to get LessWrong types interested and the only way to build trust.
Yet when I expressed an interest in this nonstandard perspective on animal well-being, I got responses from effective animal altruists like (paraphrasing):
- ‘Your endorsement of Lovecraftian animal rights sounds like an attack on animal rights; so here’s my defense of the importance of animal rights…’
- ‘No, viewing animal psychology as alien and unknown is scientifically absurd. We know for a fact that dogs and chickens experience human-style suffering. (David Pearce adds: Also lampreys!)’
- ‘That’s speciesist!’
Confidence about animal psychology (in the direction of ‘it’s relevantly human-like’) and extreme uncertainty about animal psychology can both justify prioritizing animal welfare; but when you’re mostly accustomed to seeing uncertainty about animal psychology used as a rationalization for neglecting animals, it takes real effort to keep the policy proposal and the question of fact mentally distinct. Encourage more conceptual diversity, and pursue more lines of questioning for their own sake, and you’ll end up with a community that benefits more from cross-pollination with transhumanists and mainline effective altruists, and one that’s epistemically healthier.
[Update 9/10/15: I’ve updated and clarified my views on this topic at Revenge of the Meat People!]