My guess is that The Sandman is the best comic yet created. I’ve been super excited to see it adapted to film or TV for a long time, and now a Netflix adaptation (of the beginning of the story) exists.
The adaptation is… OK?
There are a lot of fun things about it. It really nails a few characters, and reinvents a few others in ways that make them more awesome and interesting than they were in the original comics. (Though overall I’d say there are more misses than hits on characterization, if we weight by importance.)
I don’t want to discourage people from watching the show (especially if this causes them to read the comics 😛), and I don’t want to say “your enjoyment was wrong!” to people who love the adaptation. I’ve been super delighted to hear people’s positive reviews of the show, and it always makes me happy to hear about which things people found fascinating or moving. I plan to continue watching it myself, and I’m excited to see what comes next. 🙂
But I do think there are several extremely core things about the comics that the Netflix adaptation misses on. So here’s my review of the Sandman TV series, and things I’d change:
(WARNING: The rest of this post will spoil important things about the comics that haven’t happened in the Netflix series yet. Don’t read this post unless you’ve read the Sandman comic series. You might also want to see the TV series before reading on?)
.
.
.
.
.
SPOILERS BELOW
.
.
.
.
.
I’d say there are three core problems with the Netflix series.

___________________________________________
This is a quick set of excerpts I’m putting together for easy reference.
Elizabeth van Nostrand wrote an Aug. 30 blog post, Long Covid Is Not Necessarily Your Biggest Problem, concluding that “for vaccinated people under 40 with <=1 [comorbidity], the cognitive risks of long covid are lost in the noise of other risks they commonly take”.
She also concludes that
[…] your overall risk of long covid is strongly correlated with the strength of the initial infection. […]
Van Nostrand estimates that the risk of hospitalization for a vaccinated person who catches Delta is:
0.38% for a healthy 30yo man;
0.24% for a healthy 30yo woman;
0.58% for an asthmatic 25yo man;
0.92% for a 40yo obese woman.
And:
[…] My tentative conclusion is that the risks to me of cognitive, mood, or fatigue side effects lasting >12 weeks from long covid are small relative to risks I was already taking, including the risk of similar long term issues from other common infectious diseases. Being hospitalized would create a risk of noticeable side effects, but is very unlikely post-vaccine (although immunity persistence is a major unresolved concern).
I want to emphasize again that ‘small relative to risks you were already taking’ doesn’t necessarily mean ‘too small to worry about’. For comparison, Josh Jacobson did a quick survey of the risks of driving and came to roughly the same conclusion: the risks are very small compared to the overall riskiness of life for people in their 30s. Josh isn’t stupid, so he obviously doesn’t mean ‘car accidents don’t happen’ or ‘car accidents aren’t dangerous when they happen’. What he means is that if you’re 35 with 15 years driving experience and not currently impaired, the marginal returns to improvements are minor.
[…] What this means is not that covid is safe, but that you should think about covid in the context of your overall risk portfolio. Depending on who you are that could include other contagious diseases, driving, drugs-n-alcohol, skydiving, camping, poor diet, insufficient exercise, too much exercise, and breathing outside [during wildfire season]. If you decide your current risk level is too high, or are suddenly realizing you were too risk-tolerant in the past, reducing covid risk in particular might not be the best bang for your buck. Paying for a personal trainer, higher quality food, or a safer car should be on your radar as much as reducing social contact, although for all I know that will end up being the best choice for you personally.
In Long COVID: Much More Than You Wanted To Know, Scott Alexander expresses stronger worries about long COVID (albeit with a broader definition of ‘long COVID’ that includes very mild symptoms like ‘reduced sense of smell’):
The prevalence of Long COVID after a mild non-hospital-level case is probably somewhere around 20%, but some of this is pretty mild.
[…]
Vaccination probably doesn’t change the per-symptomatic-case risk of Long COVID much
Alexander’s Fermi estimate:
About 25% of people who get COVID report long COVID symptoms. About half of those go away after a few months, so 12.5% get persistent symptoms. Suppose that half of those cases (totally made-up number) are very mild and not worth worrying about. Then 6.25% of people who get COVID would have serious long-lasting Long COVID symptoms.
[…] I’m going to round all of this off to about 1% – 10% per year of getting a breakthrough COVID case (though obviously this could change if the national picture got better or worse). Combined with the 0.4% to 6.25% risk of getting terrible long COVID conditional on getting COVID, that’s between a 1/150 – 1/25,000 chance of terrible long COVID per year.
[…] I find the 1/150 risk pretty scary and the 1/25,000 risk not scary at all, so, darn, I guess there’s not yet enough data to have a strong sense of how concerned I should be.
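For readers who want to check the multiplication, here’s a minimal sketch of the arithmetic in Scott’s estimate above (every input is a number he quotes; note that 10% × 6.25% is exactly 1/160, which he rounds to 1/150):

```python
# Reproducing Scott Alexander's Fermi estimate from the numbers quoted above.
p_report = 0.25     # fraction of COVID cases reporting long COVID symptoms
p_persist = 0.5     # fraction of those whose symptoms outlast a few months
p_serious = 0.5     # Scott's "totally made-up" fraction that is non-trivial

p_bad_given_covid_hi = p_report * p_persist * p_serious  # 6.25%
p_bad_given_covid_lo = 0.004                             # his 0.4% low end

p_breakthrough_lo, p_breakthrough_hi = 0.01, 0.10        # annual breakthrough risk

worst = p_breakthrough_hi * p_bad_given_covid_hi         # 0.00625, about 1/160
best = p_breakthrough_lo * p_bad_given_covid_lo          # 0.00004, i.e. 1/25,000
print(f"1 in {1 / worst:,.0f} to 1 in {1 / best:,.0f} per year")
```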
[…] What I’m confused by is how he uses the data he reports in this section to end up at 20%, since he quotes studies where (Long Covid percent in Covid group minus Long Covid percent in control group) is respectively at most 28%, 12%, 17%, 13% and 13%, two of which lack a control group. If we naively average that we get 17%, minus a few percent for the missing control groups, so maybe 15%. Scott seems to be buying that ‘any symptom at all’ is a reasonable standard here, and that asking ‘did you have Long Covid?’ is rife with false negatives.
[…] I think we can safely throw out the upper part of [Scott’s] range, as I think a 10% chance of breakthrough symptomatic Covid within a year isn’t reasonable if you do a little math, and it’s starting at 25% which seems higher than the studies referenced above would suggest, so I think the range here would be more like 1 in 1,000 to 1 in 25,000.
[…] Long Covid seems legitimate, and worth a nonzero amount of effort to minimize, but my model says it is mixing a lot of things together, is largely typical of what happens after being sick, is protected against by vaccines similarly to how they protect against symptomatic disease, and in many studies they go on a fishing expedition for symptoms then attribute everything that happens chronologically after Covid to Covid.
– […] I think his studies are too small and sample-biased to be meaningful.
– He thinks my studies (especially Taquet) didn’t look at the right sequelae.
– I was only looking at cognition (including mood disorders), whereas he looked at everything.
Scott also didn’t do age-specific estimates, although I’m guessing that’s not a crux, because I expect other post-infection syndromes to worsen with age as well.
I intended to include fatigue in my analysis of cognitive symptoms, but in practice the studies I weighted most highly didn’t include it. Scott’s studies, which he admits are less rigorous (although we differ on how much), did include it. Why the hell aren’t the large, EHR-based studies with control groups looking at fatigue? […]

___________________________________________
Chinese officials suppressed early information about the virus. The WHO and the US CDC consistently spread misinformation and shoddy science throughout the course of the pandemic, and showed a shocking inability to understand and communicate basic distinctions like ‘we don’t know whether X’ versus ‘we know that not-X’.
World governments banned challenge trials for a full year based on imagined fears that they might prove unpopular, only to learn that they were very popular with the public once we bothered to check.
The US FDA banned COVID-19 testing and research during the critical early days of the pandemic in the US, and caused tens of thousands of deaths by refusing to approve well-tested vaccines in wide usage in the rest of the world. The developed world (and especially the European Union) massively under-invested in vaccines, spending thousands of dollars in human life and welfare to save pennies.
Most remarkably, many of these errors recur across many different countries, suggesting deep dysfunction in the way global elites generate, evaluate, and propagate ideas.
• Paranoid passivity.
A common theme in many of the above dysfunctions is that decision-makers would rather kill hundreds of thousands of people through inaction than risk taking any unpopular action.
The Copenhagen Interpretation of Ethics points to one possible explanation, but it shouldn’t be forgotten that leaders are to a large extent giving the public what they want in all of this — it’s just that the public has pathologically low standards and a bizarre level of change aversion.
• Nationalism.
… But all of that may turn out to be a footnote in light of recent events in India. History may instead remember COVID-19 as a pandemic whose death toll largely occurred after vaccines were widely available, and one that mostly afflicted the poorest parts of the world.
The story of the pandemic may be: ‘The developed world made the strategic decision to prioritize themselves over the developing world. An effective genocide ensued. Crematoria spit their smoke into the sky while tens of millions of unused vaccines sat where they had been for months, gathering dust in storage in the US, useless even to Americans because the FDA refused to approve them for domestic use. They just sat there.’
• Biotech revolution.
… Or even that may turn out to be a footnote. History may remember COVID-19 like this:
‘By spurring the world to experiment with new vaccine tech, the COVID-19 pandemic ended up saving vastly more lives than it cost.’
This is even more uncertain, but if true, it raises major questions about why we couldn’t act sooner. Illnesses that kill millions of people don’t become less deadly just because we’re used to them. Yet somehow, it took a pandemic for human civilization to start taking human death and disease seriously to this degree.

___________________________________________
Bryan Caplan’s The Case Against Education argues that education mostly serves a signaling function—it’s an easy way of proving to prospective employers that you’re a relatively smart, hard-working, mainstream member of society—and only a small part of education (maybe 20%) exists to help people learn anything or build any skills.
Bryan Caplan: [… T]here’s a standard story that almost everyone tells about why education pays in the labor market, and it just says: you go to school, they pour some skills into you, you’re better at your job, and so you get paid more. What’s the problem?
And I’m happy to say, sure, that’s part of the story.
But I say there’s also a much bigger part of the story that rarely gets discussed, and that is that when you do well in school, you impress others. You get certification. You get stamped with a sign of approval saying “Grade-A Worker”. And my story is that the majority—in fact, a large majority—of the payoff from education actually comes from this.
Selfishly speaking, that doesn’t matter so much. But from a social point of view, it matters tremendously. Because if the reason why people get paid more for school is because they learn more skills, then basically it’s a way that taxpayers invest in our productive capacities and then we produce the very wealth that we are being paid for.
But on the other hand, in the signaling story, the main thing that’s going on is that you’re getting paid because you’ve impressed employers. And if everyone has a bunch of stickers on their head, this doesn’t mean everyone gets good jobs or gets paid a lot. It just means that you need a lot of stickers in order to get a job.
So the biggest sign of this, I would say, is what’s called credential inflation: you now need more education to get a job that your dad or grandfather could’ve gotten with one or two fewer degrees.
Julia Galef: And what kind of signal are you mostly pointing at? Is it the signal that someone was good enough to be accepted into a college, or the signal that someone was good enough to graduate with the grades that they did?
Bryan Caplan: Yeah, so the graduation seems like it’s a lot more important. Because if it were the first story, if it were just you get a great signal by being accepted, then people would take their admission letters and shop them around employers saying, “I got into Harvard and Stanford, so what are you going to offer me, Goldman Sachs?” And in practice, that doesn’t seem to work very well.
So I think if you’re wondering why, I would say that there’s something very odd about a person who tries to do that. They seem like they’re trying to skip out on this sacred institution of our society. So, yeah, employers are understandably nervous about someone so weird that they would get into Harvard and then try to weasel out of it.
So in terms of what is it people are signaling, I’d say it’s really a big package of different traits. Intelligence, obviously, but it’s not just that. That’s too easy to measure by itself. It’s also work ethic. And then finally, sheer conformity, which again, is very important on the job. Someone could be really smart and really hard-working, but if they’re defiant, if they don’t play as part of the team, then they’re almost useless to you. And I say really to understand a lot of what’s going on with education, we have to focus on this conformity signaling.
[… T]he whole idea of signaling is that if you come up with a really cheap way of signaling, the result isn’t that you get your signal across at a low cost, but rather that you just have to do more of it. A key idea in the signaling model is if you found a way of cutting the cost of signaling in half, the result wouldn’t be that we do half as much signaling. The result would be that we signal for twice as long. […]
My favorite example of this is suppose that someone comes up with a new way of making synthetic diamonds at 10% of the current cost. And my question is, how long would it take before people either stopped giving diamond engagement rings, or they started giving rings that were enormous?
And the key point is that since what you’re signaling with that ring is that you’re willing to go and put in a lot of money into something to indicate your devotion, if the cost per carat of diamond were to fall, it’s not that we would just keep giving the same diamonds that we’re currently giving. Instead, people would say, “Well that doesn’t really convince people very much anymore. It doesn’t say much anymore. I’d better go and either get an even bigger diamond or give something that can’t be synthesized.”
And a lot of it is really the same for education. If you were to go and have, say, free college for all, the result wouldn’t be that everybody with a college degree can get the kind of jobs that people get with it now. Instead there’d be an army of extra people going, and then you might need a Master’s degree or another advanced degree to be considered worthy of an interview.
[…M]ost specialists in both education and labor economics, they’re only looking at income. So they’re looking at the effects of education, and then there is this really circular effort to say, “Well, since there’s a big effect on the income of the person, they must’ve learned something useful,” and you say, “Yes, but the signaling model predicts the very same thing.” So that’s a big issue.
And there is an idea of, “Well of course we all know that the people are learning tons of useful stuff.” And then when you say, “Well, actually, they’re learning a ton of stuff they’re never gonna use.” And this is then where economists will often retreat to, “Oh, well, they’re learning how to learn, learning critical thinking, it doesn’t really matter what the subject is.”
And then I’ll say there’s something they really don’t know about, which is: in educational psychology, they’ve been studying this very issue for a hundred years. They want to find evidence of learning how to learn. They want to find evidence that critical thinking is being successfully taught. And yet, after a hundred years, they’re really pretty shell-shocked and say, “Look, we’re just not finding much sign of this broad, general inculcation of thinking skills that educators love to believe is actually happening.” So that’s the stuff I’d say most economists are just totally unaware of.
[… S]tudents seem so focused on getting easy As. If you were in school to acquire skills, this is pretty perverse. But if you’re in school to impress employers, then it’s pretty easy to understand why you want an easy A, because the employer doesn’t know that it was an easy A. If you find the easiest teacher of real analysis in the country, get an A+ in exchange for doing some arithmetic, people look at that and say, “Wow, he’s got an A plus in real analysis. Wow, look at that guy.” So that makes sense.
The practical implication: if, e.g., college as it exists in the real world is largely a zero-sum arms race to signal pre-existing traits (like intelligence and disposition to conform / accept instructions), rather than a positive-sum opportunity to actually learn useful or enriching material, then causing more people to go to college doesn’t improve people’s lives in aggregate.
Just the opposite, since college is expensive in time and money. If you get another 10% of people to go to college, then everyone else has to burn that many more resources to keep up in the signaling competition, but there’s still the same pool of new jobs, and people are still roughly as good at those jobs as they would have been without the education. Which means that everyone is now burning more resources just to not fall behind relative to everyone else in the ‘signal you’re a good worker’ game. Like forcing everyone in a race to run twice as fast, without doing anything to increase the reward for absolute performance at the end.
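To make the zero-sum logic concrete, here’s a toy sketch (my own illustration, not Caplan’s; the job counts, schooling years, and $50,000/year cost are all arbitrary). Employers have a fixed pool of good jobs and hand them to the most-educated half of applicants, so a universal extra year of schooling changes no one’s outcome and only adds cost:

```python
# Toy model of education as a purely positional signal (illustrative numbers only).
def job_allocation(schooling_years, cost_per_year=50_000):
    # Rank workers by years of schooling; the top half get the good jobs.
    ranked = sorted(range(len(schooling_years)),
                    key=lambda i: schooling_years[i], reverse=True)
    winners = frozenset(ranked[: len(ranked) // 2])  # fixed pool of good jobs
    total_cost = sum(schooling_years) * cost_per_year
    return winners, total_cost

before = [12, 14, 16, 16, 18, 12, 14, 16]  # years of schooling per worker
after = [y + 1 for y in before]            # everyone adds one more year

winners_before, cost_before = job_allocation(before)
winners_after, cost_after = job_allocation(after)

assert winners_before == winners_after     # identical allocation of jobs
print(f"extra cost of the arms race: ${cost_after - cost_before:,}")
```

The assert passes: the same four people get the good jobs either way, and the only difference is $400,000 of additional schooling cost.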
So, for example, subsidizing college education is a terrible idea that actively hurts people. A better case could be made, if anything, for taxing it as a source of net harm to society, to try to reduce how much time people spend at college etc. and thereby put more time, money, and other resources in people’s hands. Forcing poor people to get more years of education doesn’t appear to materially benefit them at all in aggregate, but handing them back money and free years of their life certainly does.
If you want to enrich people with cool ideas as an end in itself, because cool ideas are cool, then give people Internet access and free time and let them decide how to use that time. Don’t force them into camps where they have to learn classics and jump through hoops in order to be able to pay medical bills, start a family, etc. later in life.
From Caplan’s book:
Higher education is the only product where the consumer tries to get as little out of it as possible. […]
Some big blatant facts are inexplicable without the signaling model.
[1.] The best education in the world is already free. All complaints about elite colleges’ impossible admissions and insane tuition are flatly mistaken. Fact: anyone can study at Princeton for free. While tuition is over $45,000 a year, anyone can show up and start attending classes. No one will stop you. No one will challenge you. No one will make you feel unwelcome. Gorge yourself at Princeton’s all-you-can-eat buffet of the mind. Colleges do not card. I have seen this with my own eyes at schools around the country.
If you keep your learn-for-free scheme to yourself, professors will assume you’re missing from their roster owing to a bureaucratic snafu. If you ask permission to sit in, most professors will be flattered. What a rare pleasure to teach someone who wants to learn! After four years of ‘guerrilla education,’ there’s only one thing you’ll lack: a diploma. Since you’re not in the system, your performance will be invisible to employers.
[… 2.] Failing versus forgetting. You’ve studied many subjects you barely remember. You might have motivated yourself with, ‘After the final exam, I’ll never have to think about this stupid subject again.’
[… 3.] Easy As. Students struggle to win admission to elite schools. Once they arrive, however, they hunt for professors with low expectations. A professor who wants to fill a lecture hall hands out lots of As and little homework.
[… 4.] Cheating. According to human capital purists, the labor market rewards only job skills, not academic credentials. Taken literally, this implies academic cheating is futile. Sure, a failing student can raise their grade by copying an A+ exam or plagiarizing a term paper from the Internet. Unless copying and plagiarizing make people more productive for their employer, however, the human capital model implies zero financial payoff for the worker. […]
The human capital model doesn’t just imply all cheaters are wasting their time. It also implies all educators who try to prevent cheating are wasting their time. All exams might as well be take-home. No one needs to proctor tests or call time. No one needs to punish plagiarism—or Google random sentences to detect it. Learners get job skills and financial rewards. Fakers get poetic justice.
[… 5.] Teachers have a foolproof way to make their students cheer: cancel class. If human capital purists are right, such jubilation is bizarre. Since you go to school to acquire job skills, a teacher who cancels class rips you off. You learn less, you’re less employable, yet your school doesn’t refund a dime of tuition. In construction, contractors don’t jump for joy if their roofers skip shingling to go gambling. In school, however, students jump for joy if their teachers cancel class to attend a conference in Vegas.
When students celebrate the absence of education, it’s tempting to blame their myopia on immaturity. Tempting, but wrongheaded. Once they’re in college, myopic, immature students can unilaterally skip class whenever they like. Why wait for the teacher’s green light? For most students, there’s an obvious answer: When you skip class, your relative performance suffers. When your teacher cancels class, everyone learns less, leaving your relative performance unimpaired.
Human capital purists must reject this ‘obvious answer.’ Employers reward you for your skills, not your skills compared to your classmates’. Signaling, in contrast, takes the ‘obvious answer’ over the finish line. Why do students cheer when a teacher cancels class? Because they’ve escaped an hour of drudgery without hurting their GPA.
And another excerpt—the following is a relatively minor argument in a big 400-page book, but I’ve come back to it a few times, so I’ll put it here too. As a philosophy major, I get to do this without looking like I’m lording my major over others…
We can ballpark the practicality of higher education by looking at the distribution of majors. Table 2.1 breaks down all bachelor’s degrees conferred in 2008-9 by field of study—and rates their usefulness.
High usefulness: Defenders of the real-world relevance of education love to invoke engineering. Engineering students learn how to make stuff work; employers hire them to make stuff work. Engineering has well-defined subbranches, each with straightforward applications: electrical, mechanical, civil, nuclear. Before we get carried away, we should accept a key fact: Engineering is a challenging, hence unpopular, major. Psychologists outnumber engineers. Artists outnumber engineers. Social scientists plus historians outnumber engineers almost two to one. […]
Medium usefulness: Majors like business, education, and public administration sound vaguely vocational and funnel students toward predictable occupations after graduation. At the same time, they teach few technical skills, and nonmajors readily compete for the same jobs. While you could dismiss these majors as Low in usefulness, let’s give them the benefit of doubt. You don’t need a business degree to work in business, but perhaps your coursework gives you an edge. You don’t need an education degree to land a teaching job, but explicitly studying education could enhance your teaching down the road. […] By this standard, about 35% of majors end up in the Medium category. […]
Low usefulness: The status of most of the majors in this bin [which contains 40% of all bachelor’s degrees]—fine arts, philosophy, women’s studies, theology, and such—should be uncontroversial. Liberal arts programs uphold the ideal of ‘knowledge for knowledge’s sake.’ Few even pretend to prepare students for the job market. You could argue I underrate the usefulness of communications and psychology. Don’t they prepare students to work in journalism and psychology? Yet this objection is almost as naive as, ‘Don’t history programs prepare students to work as historians?’ Psychology, communications, and history’s usefulness is Low because they prepare their students for fields where paying jobs are almost impossible to get. In 2008-9, over 94,000 students earned their bachelor’s in psychology, but there are only 174,000 practicing psychologists in the country. In the same year, over 83,000 students earned their bachelor’s degree in communications. Total jobs for reporters, correspondents, and broadcast news analysts number 54,000. Historians, unsurprisingly, have the bleakest prospects of all. There were over 34,000 newly minted history graduates—and only 3,500 working historians in the entire country. […]
The staunchest defenders of education reject the idea of sorting subjects and majors by ‘usefulness.’ How do you know Latin, trigonometry, or Emily Dickinson won’t serve you on the job? A man told me his French once helped him understand an airport announcement in Paris. Without high school French, he would have missed his flight. Invest years now and one day you might save hours at an airport. See, studying French pays!
These claims remind me of Hoarders, a reality show about people whose mad acquisitiveness has ruined their lives. Some hoarders collect herds of cats, others old refrigerators, others their own garbage. Why not throw away some of their useless possessions? Stock answer: ‘I might need it one day.’ They ‘might need’ a hundred empty milk cartons.
Taken literally, the hoarders are right: there is a chance they’ll need their trash. The commonsense reply is that packing your house with trash is almost always a bad idea. You must weigh the storage cost against the likely benefits. […] ‘No one knows if this trash will come in handy’ is a crazy argument for hoarding trash. ‘No one knows if this knowledge will come in handy’ is a crazy argument for hoarding knowledge.
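Before returning to the interview, a quick back-of-envelope check of the majors-versus-jobs figures Caplan cites above (numbers straight from the excerpt; retirements and turnover ignored):

```python
# One year's graduating class vs. the entire working profession (2008-9 figures).
fields = {
    # field: (bachelor's degrees conferred that year, total practitioners)
    "psychology": (94_000, 174_000),
    "communications": (83_000, 54_000),  # reporters, correspondents, news analysts
    "history": (34_000, 3_500),
}

for field, (grads, jobs) in fields.items():
    print(f"{field}: one class = {grads / jobs:.1f}x all jobs in the field")
```

Even ignoring turnover, a single year of history graduates is nearly ten times the size of the entire profession, so the overwhelming majority of majors in these fields can’t possibly end up in the matching occupation.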
Julia Galef: You mentioned the case in which a teacher cancels class and the students are all happy about that and say like, “Jeez, if it was really about gaining skills that they expect to increase their productivity and value to future employers, then why would they be happy?”
You know, they already paid for tuition, and now they’re just getting less for their money. Which I do think is a suggestive and striking fact about the world.
But I felt like you didn’t quite give enough space to the alternate explanation of that—which is just, you know, people buy gym memberships because they want to lose weight or get fit, and then they find excuses not to go to the gym, or they’re happy when there’s a holiday and the gym is closed, so they don’t have to go to the gym.
It just feels like there’s this common phenomenon of a tension, of struggle between your present self’s interests and your future self’s interests, and this leads to a lot of behavior that otherwise looks irrational.
Bryan Caplan: Yeah, so I think I did have a couple sentences on that point, but you’re right, I could’ve talked more about it. […] But the main thing I say is that this myopia can explain why students don’t show up on a regular day. And yeah, typical college class in the middle of the semester, barely half the students are showing up. And that, I think you might say, “Well, it’s just myopia,” because they’re going and putting this money in, and they’re gonna get worse grades, and their life is going to be worse as a result.
But of course, there’s all the students who do show up, and why is it that those students are also happy when you cancel class? And that one seems to be that well, then I get to have this holiday without having to worry about the material that I failed to learn and that is going to lead me to get lower grades.
So yeah, I think in terms of just low attendance, you can explain it with myopia. But why people see a big difference between skipping class when everyone else is doing it, and skipping class when only half of the people are doing it, or when only you’re doing it—that’s where I think that you can detect the signaling element. It’s like, “I don’t mind missing it if everyone else misses it, but if I’m the only one missing it, then I’m dead, so no. I’ll go.” […]
Julia Galef: I just want to zoom out for a moment to note that I’ve been homing in on the parts of your argument that I find relatively less convincing—but I actually do find your argument overall pretty convincing. And if you’re correct about the standard [view] being closer to 10% signaling, I’m closer to 80% than 10%.
But yeah, I don’t know, I’ve just been thinking during our conversation about cruxes of disagreement between you and me. And I think probably one of them is I just expect that companies are less rational than you expect they are. And so I would just be less surprised if they were leaving large amounts of money on the table. Or less surprised if societal inertia or irrational biases were doing a lot of the work here. Which just changes the whole way you make sense of what’s happening.
Bryan Caplan: I mean, what’s funny is for an economist, I’d say I’m very open-minded about this stuff, and there are a bunch of cases where I’ll say, “Yeah, it looks like firms are actually not maximizing profits,” or, “They’re leaving money on the table.” But again, I think the cases that are well-documented are ones where it’s more marginal.
And there is actually a big body of literature on how firms that don’t maximize profits and have low productivity per worker have much higher attrition rates than other firms. And on the other end, the firms that have unusually high productivity are more likely not only to survive, but also to expand. It’s one thing to say firms are leaving money on the table for five years; but to say that it’s gone on for decades, again, this seems to go against most of what we know about selective attrition and growth of firms.
Julia Galef: Okay. Well, that’s a way bigger crux of disagreement than we can resolve in two minutes, so I’ll leave it at that. I just thought it was interesting to point out.
Bryan Caplan: Yeah, totally.
Julia Galef: And I want to make sure that I don’t forget to tell you about an ironic thing that I’ve noticed, that’s very relevant to your case, which is: Philosophy departments, in their “Why you should be a philosophy major” page on their departmental website, they always cite statistics about how there’s a high return to a philosophy major, in terms of the starting salaries you get offered. And they say, “See, this is proof that philosophy majors teach you critical thinking skills!”
Bryan Caplan: Yeah, that’s terrible.
Julia Galef: Which is especially ironic, because they’re confusing correlation and causation, which is an example of poor thinking skills in their very argument! That just struck me as a very Bryan-flavored observation.
Bryan Caplan: Yeah. Plus it’s not even true that a philosophy major is well-paid.
Julia Galef: Oh?
Bryan Caplan: It’s not at the bottom of the distribution by any means, but… actually, now that I think about it, normally the numbers that I look at actually correct for test scores. So it might be that, yeah, philosophers do come in with very high test scores. So it might be that if you just look at raw means, what you’re saying is true.
Julia Galef: Right, yeah.
Bryan Caplan: But if you go and look at how people who had the same test scores but who majored in something else do, then I think philosophy does pretty poorly. Especially if you’re not looking at people who go on to get a law degree or something like that. Those people are probably pulling up the average a lot.
Students who do unschooling seem to do totally fine, and differences between education approaches mostly don’t seem to change outcomes much. So on the face of it, mandating decades of universal formal education seems to just be burning value.
There may be subtle society-wide effects of forcing people to spend a large chunk of their life in something like a well-intentioned labor camp; but the balance of these effects might be on things like “how conformist society is in general” that I don’t think are good things to optimize for.

___________________________________________
If this is a good idea now, when the benefits are vastly lower and the risks are only slightly lower, then it was probably also a good idea a year ago. Opponents of human challenge trials should think hard about the background heuristics that caused them to get this one wrong, so that we don’t have to repeat this tragic error.

___________________________________________
Last week I went over how we know the Johnson & Johnson vaccine is safe and effective, and there are millions of doses waiting to be distributed, and there’s no good reason we can’t start that process yesterday.
I do realize that there is a difference between, as Scott Alexander discusses, the FDA’s need to be legible and reliable, and follow proper procedures, versus my ability to apply Bayesian reasoning.
Mostly, it’s a call and response. You say ‘why are we letting people die for no reason?’ and they say ‘Thalidomide!’ and ‘people won’t trust it.’
So basically, one time someone had a drug that wasn’t safe. We didn’t approve that drug because our existing review process made it look unsafe, so in response to that we created a more involved and more onerous process, as opposed to noticing that the previous process actually worked in this case exactly as designed. Then we use this as a fully general excuse to freak everyone out about everything that hasn’t gone through this process, and then use that freak out (that, to the extent it exists which it mostly doesn’t, is directly the result of such warnings) as our reason to force everything through the process. Neat trick.
Oh, and did I mention that the ‘safety data’ that requires three weeks to review is, and I quote it in its entirety, ‘nothing serious happened to anyone at all, and no one was struck by lightning.’ Either J&J has created a safe vaccine, or J&J is committing a fraud that will be caught and get everyone involved arrested within three weeks, or they’re committing a fraud so effectively that the review won’t catch the fraud and won’t help. Those are the only possibilities. If the data isn’t fraudulent then the drug is safe, period. […]
On the actual J&J vaccine, I don’t know what more there is to say. As with Moderna and Pfizer, they’ve already done the actual approval process and confirmed that it’s going to get approved before they applied, and now we’re delaying in order to make it clear we are Very Serious People who Follow Proper Procedure and are not In Bed With Industry and Putting People At Risk or Destroying Trust in Vaccines by going ‘too fast.’ Or something like that.
We have now done this three times. It’s one thing to have the first vaccine application point out that there’s weeks of lost time. It’s another thing to not have fixed the problem months later.
Meanwhile, now that we were provided a sufficiently urgent excuse to show that mRNA vaccines work, we’ve adapted them to create a vaccine for malaria. Still very early, but I consider this a favorite to end up working in some form within (regulatory burden) number of years. It’s plausible that the Covid-19 pandemic could end up net massively saving lives, and a lot of Effective Altruists (and anyone looking to actually help people) have some updating to do. It’s also worth saying that 409k people died of malaria in 2020 around the world, despite a lot of mitigation efforts, so can we please please please do some challenge trials and ramp up production in advance and otherwise give this the urgency it deserves? And speed up the approval process at least as much as we did for Covid? And fund the hell out of both testing this and doing research to create more mRNA vaccines? There are also mRNA vaccines in the works for HIV, influenza, and certain types of heart disease and cancer. These things having been around for a long time doesn’t make them not a crisis when we have the chance to fix them.
[…] Why do bioethicists habitually invoke the Tuskegee experiment? To justify current Human Subjects Review. Which is bizarre, because Human Subjects Review applies to a vast range of obviously innocuous activities. Under current rules, you need approval from Human Subjects merely to conduct a survey – i.e., to talk to a bunch of people and record their answers.
The rationale, presumably, is: “You should only conduct research on human beings if they give you informed consent. And we shouldn’t let researchers decide for themselves if informed consent has been given. Only bioethicists (and their well-trained minions) can make that call.”
On reflection, this just pushes the issue back a step. Researchers aren’t allowed to decide if their human experiment requires informed consent. However, they are allowed to decide if what they’re doing counts as an experiment. No one submits a formal request to their Human Subjects Review Board before emailing other researchers questions about their work. No professor submits a formal request to their Human Subjects Review Board before polling his students. Why not? Because they don’t classify such activities as “experiments.” How is a formal survey any more “experimental” than emailing researchers or polling students?
[…] The safest answer for bioethicists, of course, is simply: “They should get our approval for those activities, too.” The more territory bioethicists claim for themselves, however, the more you have to wonder, “How good is bioethicists’ moral judgment in the first place?”
To answer this question, let me bring up a bioethical incident thousands of times deadlier than the Tuskegee experiment. You see, there was a deadly plague called COVID-19. Researchers quickly came up with promising vaccines. They could have tested the safety and efficacy of these vaccines in about one month using voluntary paid human experimentation.
[…] In the real world, researchers only did Step 1, then waited about six months to compare naturally-occurring infection rates. During this period, ignorance of the various vaccines’ efficacy continued, almost no one received any COVID vaccine, and over a million people died. In the end, researchers discovered that the vaccines were highly effective, so this delay really did cause mass death.
How come no country on Earth tried voluntary paid human experimentation?* As far as I can tell, the most important factor was the formal and informal opposition of bioethicists. In particular, bioethicists converged on absurdly (or impossibly) high standards for “truly informed consent” to deliberate infection. Here’s a prime example:
“An important principle in human challenge studies is that subjects must give their informed consent in order to take part. That means they should be provided with all the relevant information about the risk they are considering. But that is impossible for such a new disease.”
Why can’t you bluntly tell would-be subjects, “This is a very new disease, so there could be all sorts of unforeseen complications. Do you still consent?” Because the real point of bioethics isn’t to ensure informed consent, but to veto informed consent to whatever gives bioethicists the willies.
[…] I’ve said it before and I’ll say it again: Bioethics is to ethics as astrology is to astronomy. If bioethicists had previously prevented a hundred Tuskegees from happening, COVID would still have turned the existence of their entire profession into a net negative for humanity. Verily, we would be better off if their field had never existed.
If you find this hard to believe, remember: What the Tuskegee researchers did was already illegal in 1932. Instead of creating a pile of new rules enforced by a cult of sanctimonious busybodies, the obvious response was to apply the familiar laws of contract and fiduciary duty. These rules alone would have sent people like the Tuskegee researchers to jail where they belong. And they would have left forthright practitioners of voluntary paid human experimentation free to do their vital life-saving work.
In a just world, future generations would hear stories of the monstrous effort to impede COVID-19 vaccine research. Textbooks and documentaries would icily describe bioethicists’ lame rationalizations for allowing over a million people to die. If the Tuskegee experiments laid the groundwork for modern Human Subjects Review, the COVID non-experiments would lay the groundwork for the abolition of these deadly shackles on medical progress. […]
___________________________________________
General COVID-19 thoughts, from me (someone with no relevant medical background):
1. I’ve heard reports of people getting seriously ill or dying from preventable illnesses because they’re too scared to go to the hospital for non-COVID-related ailments. In general, I think people are unduly scared of hospitals: going to the hospital is risky, but not catastrophically so. I’d advise people to stop going into grocery stores (if they can avoid it) long before I’d advise avoiding hospitals (in cases where they’re worried something might be seriously wrong).
Obviously, now is not the time to go in for routine check-ups, and video calls with doctors are a good first step in most cases, etc.
2. I’ve updated toward thinking it won’t be that hard to avoid catching COVID-19 in March/April in spite of the new strain, if you’re the kind of person who’s in a social network of very cautious people who have ~all avoided catching COVID-19 thus far. A large number of people are taking few or no precautions, and the bulk of COVID-19 exposures has been (and will continue to be) drawn from that group.
If you’re young and your whole social network has almost completely avoided anyone catching COVID-19 thus far, it’s more likely your social network is being over-cautious.
3. A lot of sources have been exaggerating the risk that you’ll be infected, or infect others, even if you’ve previously caught COVID-19 or been vaccinated. I think most people who’ve recovered from COVID-19 should mostly act as though COVID-19 doesn’t exist at all, at least for the next few months (in areas where the Brazil and South Africa strains aren’t widespread yet).
I’d say the same for people who have had two shots of the Pfizer or Moderna vaccine, as long as it’s been ~2 weeks since you had your second shot. For more detailed risk assessments than that, I recommend using the microCOVID website.
4. This is the home stretch. Universal vaccine availability is on the horizon, and our vaccines seem amazingly effective (especially for preventing deaths and hospitalizations). It goes without saying that it’s extra-unfortunate to catch COVID-19 shortly before you would have gotten vaccinated.
CTRL+F “BlackRock” in this Matt Levine column for a discussion of how we accidentally stumbled into true communism for the good of all. The short version: an investing company called BlackRock owns so much of the economy that it’s in their self-interest to have all companies cooperate for the good of the economy as a whole. While they don’t usually push this too hard, the coronavirus pandemic was a big enough threat that “BlackRock is actually calling drug companies and telling them to cooperate to find a cure without worrying about credit or patents or profits”.
___________________________________________
Also from Scott’s link post:
The class-first left’s case for why the Sanders campaign failed: he tried too hard to reinvent himself as a typical liberal to fit in, but people who wanted typical liberals had better choices, and it lost him his outsider energy (see especially the description of his “astoundingly dysfunctional” South Carolina campaign – “not only did basic tasks go unfulfilled, phone-banking and canvassing data were outright fabricated” – the article claims nobody was able to fix it because it was run by social justice activists who interpreted any criticism of them as racist/sexist. Interested to hear if anyone knows of other perspectives on this). Counterpoint: South Carolina was always going to be hostile territory for him, and maybe he didn’t reinvent himself as a typical liberal enough. I cannot find any other source confirming the South Carolina campaign allegations; interested in hearing what people think.
___________________________________________
Marginal Revolution (also linked in Scott’s roundup) quotes Tanaya Devi and Roland Fryer’s “Policing the Police: The Impact of ‘Pattern-or-Practice’ Investigations on Crime”:
This paper provides the first empirical examination of the impact of federal and state “Pattern-or-Practice” investigations on crime and policing. For investigations that were not preceded by “viral” incidents of deadly force, investigations, on average, led to a statistically significant reduction in homicides and total crime. In stark contrast, all investigations that were preceded by “viral” incidents of deadly force have led to a large and statistically significant increase in homicides and total crime. We estimate that these investigations caused almost 900 excess homicides and almost 34,000 excess felonies. The leading hypothesis for why these investigations increase homicides and total crime is an abrupt change in the quantity of policing activity. In Chicago, the number of police-civilian interactions decreased by almost 90% in the month after the investigation was announced. In Riverside CA, interactions decreased 54%. In St. Louis, self-initiated police activities declined by 46%. Other theories we test, such as changes in community trust or the aggressiveness of consent decrees associated with investigations, all contradict the data in important ways.
Neal Zupancic comments:
The authors seem to suggest it is mostly the investigations themselves causing the increase in crime, rather than any particular policy changes. The mechanism they propose is that police officers greatly reduce their quantity of policing when under federal investigation after a “viral” incident, but there is little indication that this comes about as the result of any particular policy reform – the suggestion is that police are either reducing public contact in an effort to avoid having their own actions scrutinized, or are trying to make a point (in the case of deliberate strikes and slowdowns/sickouts). There’s also a section (page 27) where the authors talk about the possible impact of increased paperwork, and estimate it might account for about 20% of the reduction in police activity in one city. I’m not sure if we’re calling this “reform”, but even if we do, it’s a small proposed effect.

___________________________________________
Suppose that American politics decomposed into ~four ‘houses’, representing different perspectives and different sets of virtues (and vices). What might they be?
My first attempt:
[epistemic status: playing around with narratives]
Coeptis – The Nietzschean house. Some combination of ‘everyone benefits when we stay out of the way and let the cream rise to the top’ and ‘if you don’t pull yourself up by your bootstraps, well, that’s on you’. Believes in advancing, gaining power, and letting power concentrate in the hands of a few super-competent elites.
Believes in cowboys, superheroes, lone vigilantes. Stubbornly refuses to take orders or conform, where it disagrees with their conscience or taste. Persnicketiness.
Believes the world is intelligible; so hand the world to the best and brightest, and let them figure it out.
Pluribus – The wisdom-of-crowds house. Believes in the elegance of democratic and (competitive, monopoly-free) market-based solutions, and believes in the nobility of respecting others’ autonomy and agency. Live, and let live. Distributed decision-making, and aesthetic appreciation for the dizzying variety of different individuals’ life-projects.
Pluribus is skeptical of Coeptis’ belief that any one individual can model and optimize the world. The world is too messy for that; it requires diverse and distributed optimization. But Pluribus agrees with Coeptis that individuals’ free action is the special sauce of civilization (albeit en masse, not through an elite).
Let a hundred flowers bloom. Don’t just leave me be; leave people be. See America for what it is, not just what you wish it were. Respect what it is. Respect who we are.
Novus – The compassionate utopian consequentialist house. Do whatever it takes to protect people and save lives. Break the rules and encroach on apparent ‘rights’ when doing so actually works and improves welfare.
Radicalism; idealism; willingness to work hard for fundamental change. Paternalism. Progress. Deliberately moving toward a brighter future.
Coeptis breaks the rules out of stubbornness and frustration with the idiots who designed things wrong. Novus breaks the rules because people are starving and in need.
Novus is confident that somehow this can be fixed, even if it’s less certain of methodology (more pragmatic, willing to experiment, break eggs, be inelegant) than the other three houses.
The fourth house:

1. Deferring to authority. Forming tight-knit high-trust alliances. Achieving great things via teamwork and the superpower of acting in lock-step.
2. Preserving and absorbing established scholarship. Learning the lessons of history. Minding Chesterton’s fences and the wisdom of old, evolved systems. Approaching risky new ideas with caution.
1 and 2 are related: strong coordination requires everyone to know with confidence what everyone else in the group believes and wants, which requires relatively stable, uniform, uncertainty-minimizing culture. A marching band can’t be confused about what their orders are, or be perpetually uncertain about whether the left side will suddenly decide to go off and do its own thing.
3. Rule of law, since law is much of what makes society legible and enables cooperation. Applying the law consistently. Resisting corruption. Order.

___________________________________________
The following is a long excerpt from an unpublished paper I wrote in 2012-2013, mostly before I was enmeshed in rationality-community ideas. The paper was a response to David Chalmers’ “hard problem of consciousness,” described well in “Facing up to the Problem of Consciousness” and “Consciousness and its Place in Nature.”
Chalmers gives various arguments for thinking that phenomenal consciousness isn’t reducible to merely physical facts. A complete reductive explanation must make it logically impossible for the reduced entity to differ in any way unless the thing you’re reducing it to also differs in some way. Chalmers argues that reductions of consciousness to physical facts can never be complete in this sense, because there is some aspect of consciousness that could in principle vary without varying any physical fact. This aspect is the first-person, subjective, phenomenal character of consciousness; what it actually feels like “from the inside” to instantiate conscious states.
I accept Chalmers’ arguments, for reasons I detail in an earlier section of the paper but won’t go into here. That is, I agree with him that phenomenal reductionism is probably false; but Chalmers’ own position, which I call phenomenal fundamentalism (the idea that there are irreducible phenomenal states), also commits us to absurdities.
Eliezer Yudkowsky’s “Zombies! Zombies?” does a good job of articulating the core problem with non-interactionist fundamentalism, though I didn’t really understand this argument’s force at the time. Sean Carroll’s “Telekinesis and Quantum Field Theory” dispenses with interactionist fundamentalism.
By process of elimination, I conclude that phenomenal anti-realism, or eliminativism, is probably true: phenomenal consciousness is neither reducible nor irreducible (in our universe), because it doesn’t exist.
This idea seems absurd, so I endorse it only grudgingly: it’s absurd, but less absurd than the two alternatives.
There are a number of obvious objections to the idea. While some of these objections are partly successful, on the whole I don’t think they succeed well enough to make eliminativism a worse option than reductionism and fundamentalism. Here, I’ll try to systematically address a large number of possible objections. In the process I’ll hopefully clarify for some people what I mean by “eliminativism.”
Be warned that the following is not my standard fare. It’s very much written for an audience of professional analytic philosophers, and is pretty relentless about pursuing fine distinctions and subtle counter-arguments. I think this is warranted by the fact that eliminativism is such a strange view. Philosophers to date have reasonably complained that anti-fundamentalists like Dennett and Yudkowsky have been needlessly sloppy and imprecise. My view is that the arguments of anti-fundamentalists have exhibited less rigor than those of fundamentalists for contingent historical reasons, and this shouldn’t be taken to indicate that the underlying idea is fragile and liable to collapse under close scrutiny.

___________________________________________
I try to watch out for inconsistencies in my beliefs (and between my actions and my stated beliefs and goals). Yet I’m not a fan of criticizing people for things like “hypocrisy.”
It’s obviously a personal attack, and personal attacks obviously make people defensive, and defensiveness is obviously boring and terrible. But I have four other concerns with attacking people for their inconsistencies:
1. It’s too meta. Proving that someone said “p” and “not-p” is a great way to conclusively defeat them in a debate. No matter what your audience believes about p, they’ll agree with you about the laws of logic; and by not entering the fray, you get to appear impartial and objective.
But the fray matters — or if it doesn’t matter, why are we talking about p in the first place?
“You said ⊥!” is an amazing argument that works no matter what the facts are. For that reason, it’s an amazing argument that tells us nothing about the world, aside from ad-hominem facts about the claimant’s character.
If someone is saying both “p” and “not p,” then at least one of those views is false. If you know which of those views is false, why not just attack the false view? If you don’t know which of those views is false, why not talk that over and try to figure it out? If figuring it out matters less than scoring points against Ms. Placeholder, then it’s possible that neither is worth your time.
2. Charges of hypocrisy discourage updating and nuance. The easiest way to look consistent over time is to assert simple blanket statements and then refuse to change your mind about them. Better yet, say nothing substantive at all.
It’s sometimes important to publicly evaluate others’ character. In a presidential debate, for example, “ad hominem” is not always a fallacy. We’re trying to assess which person is more trustworthy and competent, not just which one is more correct; the personal virtues and vices of the candidates matter.
Yet even in this context, “Senator Placeholder is wrong on taxes” is much more useful than “Senator Placeholder is inconsistent on taxes.” Debate the latter, and the candidates and their audience only learn new things about a particular senator’s record, not about taxes; and Placeholder’s immediate incentive is to obfuscate her views or make them as simple and unchanging as possible, rather than to improve or defend them.
3. In the case of groups, charges of hypocrisy discourage intellectual diversity. This is one of the problems I have with the “motte and bailey” idea: by attacking groups for “strategically equivocating” between a more defensible view (the “motte”) and a less defensible one (the “bailey”), we neglect the more common case where some people honestly have less defensible versions of their friends’ views.
By attacking the hypocrisy rather than attacking the false view, we again focus the debate on people’s faults and vices. In this way, the motte/bailey accusation increases the number of debates that are about how generally “good” or “bad” a group is, to the exclusion of mundane empirical questions.
The motte/bailey charge can be useful when a particular individual explicitly states both the motte and the bailey, though even then it’s a charge best reserved for friends and not enemies. But when two different individuals can be accused of Emergent Hypocrisy merely for associating with each other, it becomes a lot harder to associate with anyone who doesn’t share all your views.
4. Ambitious goal-setting and self-improvement can look like behavioral hypocrisy. Accusing someone of hypocrisy because their deeds don’t live up to the moral principles they endorse encourages people to have low, easily-met standards.
We’re already risk-averse, and the charge of hypocrisy makes risk-taking even riskier, especially for groups. Trying to build a community that exemplifies certain virtues often requires that you talk quite a bit about those virtues. But then you risk looking like you already think you have those virtues.
Even if your community is a standard deviation above most groups in the virtue of Temperance, the mere fact that you’ve endorsed Temperance means that any small misstep by anyone in your group can be used to charge you with hypocrisy or hubris. And hypocrisy and hubris are approximately people’s favorite things to accuse each other of. Easier, then, to steer clear of endorsing good ideas too loudly.
We’re not very good at love yet. Some of the most textured and complex pleasures people experience happen in physically and emotionally intimate relationships — i.e., in the kinds of relationships that occasion some of our most spectacular tragedies and failures.
One reason we’re bad at love is that we lack a language, a culture, and an ingrained habit of honesty. The more we hide from others, the more we hide from ourselves. The more we hide from ourselves, the more confused and conflicted we become about our own wishes. And that in turn makes it harder to communicate in the future; and the cycle cycles on.
Why, then, are we so closed off to one another in the first place?
Lots of reasons. But one that strikes me as especially easy to fix is that we lack the vocabulary to express a lot of our desires and experiences. No words often means no awareness: no awareness of our current state, and no awareness of the alternative possibilities out there.
Selecting better terminology isn’t a hair-splitting exercise in intellectual masturbation, much as I adore intellectual masturbation. Done right, it’s a technology for enriching our emotional lives. Clear thinking can be an aid to deep feeling, and vice versa. If we want to be happier, want to make wiser decisions, we have to be able to talk about this stuff.
Below, I’ll list a few distinctions that I’ve found useful to explicitly mark in my own words and thoughts. I encourage you to play with them, test them, see which ones you like, and expand and improve on them all.
1. Love vs. amory
“Amory” is the name I use for being in a romantic or sexual relationship, and for all the little thoughts and deeds that make up those relationships. This idea is more specific than “love”, in some useful ways. It’s possible to love a platonic (or Platonic) friend, or a good sandwich, or oneself; but that’s not amory.
It’s also more inclusive than “love” in some useful ways. “But do you love your partner?” is a question I’ve seen people struggle with, because it mixes together questions about your present levels and varieties of affection, the social roles you see your relationship as fitting into, and your long-term feelings and relationship goals. Those might be important questions to answer, but it’s also nice to be able to just say that you interact with someone in physically or romantically intimate ways, without wading into those larger questions. And I find “it’s amory” less awkward and stilted than “it’s a romantic / sexual relationship.”
What do we call the people in an amorous relationship? “Partner” isn’t ideal, because it usually suggests a fairly serious relationship. And other terms (“boyfriend,” “lover,” “fuckpuppet,” “relata”…) are too gendered or otherwise specific.
My suggestion is to adopt the new term “amor”, borrowed from Latin for this targeted use. An amor is anyone you’re in a sexual or romantic relationship with. Where a “relationship” is a significant pattern of affinity and cooperation between some specific set of people. And a “romantic” relationship is one that’s characterized by communal acts, a presumption of very warm mutual feelings of caring, and behavior intended to produce mutual desire, pleasure, or intimacy associated with or analogous to sexual desire, pleasure, or intimacy. And a “sexual” relationship is one involving mutual arousal and willful stimulation of erogenous zones, especially…
… OK, that’s probably enough definition-mongering. But note that these are still vague definitions. Calling someone your “amor” (which sounds enough like the French amour that they’ll probably get the basic gist) doesn’t specify whether the relationship is sexual, romantic, physical, intellectual, serious, short-term, exclusive, primary, same-sex, with a boy, with a girl, with someone nonbinary, etc. It’s just… someone you have an existing non-platonic connection with. Self-labeling can be essentialist and restricting, but it doesn’t have to be.
Is this love? Are they my girlfriend? Am I straight? The rush to always have ready answers to questions about your identity, your desires, and the nature of your relationships is damaging because it assumes there’s always a clear answer to such questions; it assumes the answer can’t change on a regular basis; it punishes amors for disagreeing slightly on how to classify their relationship; and it discourages people from patiently waiting until they’ve gathered enough information about themselves to really know where they’re at. This is why the terms I recommend here are still pretty nebulous — but nebulous in specific, carefully chosen ways. Rather than giving up on the project of language and communication, or settling for what we have, we should try to make our language vague in the ways that mirror real human uncertainty and ambiguity, while getting rid of sources of obscurity that serve no good purpose.
2. Preference vs. behavior
Our language is terrible at distinguishing the things we want from the things we actually do. How many people are presently in their ideal relationship type? Most people’s amorous inner lives are greater than the sum of their relationships to date. And this is particularly important to recognize if we want to improve the fit between people’s preferences and their circumstances.
A useful example: Polyamory is a generic identity term, a giant tent-umbrella for people who prefer to have many concurrent romantic and sexual relationships, and for people actually engaged in such relationships. But we lack an easy way to distinguish those two subcategories, which is especially confusing when people’s preferences and relationship types change in different ways. I’ll call the first group of polyamors “polyphiles”, and the second group “multamors”. So:
Multamory is the act of being in a romantic and/or sexual relationship with more than one person over the same period of time. Multamory is opposed to unamory (a relationship with only one person) and anamory (being in no romantic and/or sexual relationships). Romantic anamory is being single. Sexual anamory is not having sex. Voluntary short-term sexual anamory is sexual abstinence (or continence); voluntary long-term sexual anamory is celibacy.
Polyphilia is a preference for having multiple simultaneous mid-to-long-term romantic and/or sexual partners. Polyphilia is opposed to monophilia (a preference for one partner) and aphilia (a preference for having no partners). We can distinguish romantic polyphilia from sexual polyphilia, and do the same for monophilia.
(… And I promise I’m not just promoting these terms because they avoid mixing Latin and Greek roots. I PROMISE.)
3. Preference vs. orientation
One’s orientation is the set of sex- and gender-related properties that one is romantically or sexually attracted to. “Attraction” here might mean sexual arousal, or intensely involving aesthetic appreciation, or a deep-seated desire to interact with persons who tend to belong to the category in question.
Such attraction comes in different levels and kinds of intensity (how attracted one is to a given range of individuals), scope (how large a range of individuals the attraction applies to), context-dependency (how much the attraction varies with independent factors; how predictable it is given only the variables under consideration), and consistency (how much the attraction naturally or inevitably oscillates, including its natural duration and how soon and how rapidly it diminishes after onset).
Preference is not orientation. My orientation is the universe of sensations (and interpretations of sensations) that viscerally entice and delight me, while my preference is what I actually want to have happen. I can be oriented toward (i.e., sensuously enjoy) chocolate ice cream, but choose not to indulge; or I can be oriented away from (i.e., dislike) chocolate ice cream, but choose to have some anyway — say, to win an ice-cream-eating contest.
Sexual orientation is what sex or gender one is sexually attracted to. Sexual attraction involves the kind of arousal we associate with sex, but it doesn’t need to involve a preference to actually have sex with the person one is attracted to. One can desire to fantasize about sex without wishing to go out and have the sex in question in the real world, for instance.
Romantic orientation is what sex or gender one is romantically attracted to. This is a much vaguer concept, encompassing the sorts of people one ‘crushes’ on, the sorts of people one enjoys dating and flirting with, the sorts of people one has especially emotionally intimate or intense friendships with, etc.
Orientation may be directed toward a primary sexual characteristic, or a secondary sexual characteristic, or any gendered physical or psychological characteristic. Gendering is partly culturally (and subculturally and individually) relative, and historically contingent, so there is no fixed set of universal characteristics that exhaust sexual or romantic orientation. What distinguishes ‘genders’ from other ways of categorizing people is just that they tend to be related in some (perhaps roundabout) fashion to the biological distinction between male and female.
Thus what will qualify as an ‘orientation’ from the perspective of one culture (e.g., a preference for people who wear long hair, dresses, and make-up) may instead qualify as a general kink in another. For some people, this will be a reason to collapse the whole idea of orientations, kinks, etc. into some larger categories, like ‘sexual turn-ons’ and ‘romantic turn-ons’.
4. Quantity
All the other confusions are amplified by the fact that our language is insensitive to quantitative difference. The Kinsey scale translates the heterosexual / homosexual dichotomy into a spectrum, which many people find useful. But it’s not clear what the scale is quantifying, which sucks a lot of the value out of it. For instance, it doesn’t distinguish weak but constant desire from intense but intermittent desire; nor does it clearly distinguish behavior, preference, and orientation.
I mentioned above that vagueness can be more useful than precision when you’re uncertain, or when there are risks associated with communicating too much too fast. Equally, we should have the ability to be precise when it is useful to clearly and concisely define ourselves to others. Language should be vague, and non-vague, in exactly the ways that people are most likely to need.
Returning to the example of polyamory, a scale that acknowledges degrees of personal preference might look like:
Strong Polyphile: Only willing to be in relationships that involve, or seek to involve, three or more people.
Moderate Polyphile: Significantly prefers multamorous relationships, but open to unamorous relationships too, possibly even ‘closed’ ones.
Weak Polyphile: Open to multamory or unamory, but slightly prefers multamory.
Ambiphile: Equally open to multamory or unamory, with no preference for either.
Weak Monophile: Open to either, but slightly prefers unamory.
Moderate Monophile: Significantly prefers unamory, but open to ‘open’ or polyamorous relationships.
Strong Monophile: Only willing to be in two-person relationships.
There are lots of other variables of human experience and behavior that would be quite easy to sum up in a few words: your relationship status at different times (e.g., ‘I’m a past-multamor’ or ‘I’m a recent-multamor’ vs. ‘I’m a present-multamor’), exactly how many people you’re in a relationship with (biamory, triamory…) or would like to be in a relationship with (diphilia, triphilia…), where you fall on various spectra from sexual to asexual or romantic to aromantic, how curious you are about a certain behavior or relationship type, how much masculinity or femininity (of various kinds) you prefer in your partners, etc.
We could carve up these concepts more finely, but I find that these distinctions are the ones I end up needing the most often. If we were categorizing food tastes rather than relationship tastes, we’d say that an ice cream orientation amounts to craving and/or enjoying the taste of ice cream, an ice cream preference amounts to an all-things-considered desire to eat ice cream when given a chance, and ice cream amory is a diet of routinely eating ice cream.
But since ice cream isn’t the psychosocial clusterfuck that interpersonal affection is, and since there’s less at stake if you fail to clearly communicate or understand your mental states about ice cream, I’d expect that there’s more discursive low-hanging love fruit than low-hanging ice cream fruit out there.
In a December 14 comment on his blog, Scott Aaronson confessed that the idea that he gains privilege from being a man feels ‘alien to his lived experience’. Generalizing from his own story, Aaronson suggested that it makes more sense to think of shy nerdy males as a disprivileged group than as a privileged one, because such men are unusually likely to be socially isolated and stigmatized, and to suffer from mental health problems.
Here’s the thing: I spent my formative years—basically, from the age of 12 until my mid-20s—feeling not “entitled,” not “privileged,” but terrified. I was terrified that one of my female classmates would somehow find out that I sexually desired her, and that the instant she did, I would be scorned, laughed at, called a creep and a weirdo, maybe even expelled from school or sent to prison. You can call that my personal psychological problem if you want, but it was strongly reinforced by everything I picked up from my environment: to take one example, the sexual-assault prevention workshops we had to attend regularly as undergrads, with their endless lists of all the forms of human interaction that “might be” sexual harassment or assault, and their refusal, ever, to specify anything that definitely wouldn’t be sexual harassment or assault. I left each of those workshops with enough fresh paranoia and self-hatred to last me through another year. […]
Of course, I was smart enough to realize that maybe this was silly, maybe I was overanalyzing things. So I scoured the feminist literature for any statement to the effect that my fears were as silly as I hoped they were. But I didn’t find any. On the contrary: I found reams of text about how even the most ordinary male/female interactions are filled with “microaggressions,” and how even the most “enlightened” males—especially the most “enlightened” males, in fact—are filled with hidden entitlement and privilege and a propensity to sexual violence that could burst forth at any moment.
Because of my fears—my fears of being “outed” as a nerdy heterosexual male, and therefore as a potential creep or sex criminal—I had constant suicidal thoughts. As Bertrand Russell wrote of his own adolescence: “I was put off from suicide only by the desire to learn more mathematics.” At one point, I actually begged a psychiatrist to prescribe drugs that would chemically castrate me (I had researched which ones), because a life of mathematical asceticism was the only future that I could imagine for myself.
The two main responses have been Laurie Penny’s “On nerd entitlement” and Amanda Marcotte’s “MIT professor explains: The real oppression is having to learn to talk to people.” These led to a rejoinder from Scott Alexander (“Untitled”) and a follow-up by Aaronson (“What I believe”). My impression is that each response in this chain has at least partly misunderstood the preceding arguments, but I’ll do my best to summarize the state of the debate without making the same mistake, borrowing liberally from others’ comments.
1. Does feminist rhetoric bear some of the blame?
Nick Tarleton responds to Scott Aaronson’s anecdote:
Scott attributes his problems entirely(?) to feminism. I’ve had similar (milder) bad experiences, but it’s really not clear to me in retrospect how much to attribute them to gender/sex-specific cultural stuff rather than general social anxiety and fear of imposing. Within gender/sex-specific cultural stuff, it’s really not clear how much to attribute to feminism rather than not-really-feminist (patriarchal, or Victorian reversed-stupidity-patriarchal) background ideas about male sexuality being aggressive, women not wanting sex, women needing protection, and the like. (Which feminism has a complicated relationship with — most feminists would disavow those ideas, but in my experience a lot of feminist rhetoric still trades on them, out of convenience or just because they’re embedded in the ways we have of thinking and talking about gender issues and better ways haven’t propagated.)
And Alexander writes:
Laurie Penny has an easy answer to any claims that any of this is feminists’ fault: “Feminism, however, is not to blame for making life hell for ‘shy, nerdy men’. Patriarchy is to blame for that.”
I say: why can’t it be both? […]
Pick any attempt to shame people into conforming with gender roles, and you’ll find self-identified feminists leading the way. Transgender people? Feminists led the effort to stigmatize them and often still do. Discrimination against sex workers? Led by feminists. Against kinky people? Feminists again. People who have too much sex, or the wrong kind of sex? Feminists are among the jeering crowd, telling them they’re self-objectifying or reinforcing the patriarchy or whatever else they want to say. Male victims of domestic violence? It’s feminists fighting against acknowledging and helping them.
Yes, many feminists have been on both sides of these issues, and there have been good feminists tirelessly working against the bad feminists. Indeed, right now there are feminists who are telling the other feminists to lay off the nerd-shaming. My girlfriend is one of them. But that’s kind of my point. There are feminists on both sides of a lot of issues, including the important ones.
Alexander is right that “Whether or not a form of cruelty is decreed to be patriarchy doesn’t tell us how many feminists are among the people twisting the knife,” and he’s right that people who accuse nerds of misogyny often appeal in the same breath to ableist, classist, lookist, fat-shaming, and heteronormative (!) language. Being a feminist doesn’t mean you can never be cruel to people, or never misrepresent them. Consider the way Marcotte elects to summarize Aaronson’s disclosure of his many-year struggle with mental illness:
Translation: Unwilling to actually do the work required to address my social anxiety—much less actually improve my game—I decided that it would be easier to indulge a conspiracy theory where all the women in the world, led by evil feminists, are teaching each other not to fuck me. Because bitches, yo.
Marcotte adds, “I’m not a doctor, but I can imagine that it’s nearly impossible to help someone who is more interested in blaming his testicles, feminism, women generally, or the world for his mental health problems than to actually settle down and get to work at getting better.” Or, as Ozy Frantz of Thing of Things puts it: “how dare those mentally ill people go about having distorted and inaccurate thoughts”.
Penny’s piece too ignores the possibility that feminist discourse norms are causing any harm. Sarah Constantin of Otium responds in a Facebook comment:
So, there are women nerds who make feminism their identity. The author [Penny] is one of them. And I think you do that if nerd culture treats you badly and feminist culture treats you well. But feminist culture doesn’t treat everyone well. Sometimes it’s *full* of anti-nerd contempt.
I’m unusual in this respect, but I’m much more offended and bothered by people who don’t like how my brain works than by people who don’t like what’s between my legs. I’m more wary of feminists who I suspect of wanting to mock my personal quirks and hobble my professional success than I am of sexism in STEM. I see comments on anti-SV articles like “this is what happens when you give autistic people money and power” and I get mad. I take it personally. A lot more personally than I take insults to women. Maybe it’s not fair of me, but that’s how my emotional calculus stacks up.
Scott Aaronson is right that there is a particular kind of damage that is inflicted ONLY on men and boys [eta: and queer women/girls] who want to do right by women and do not want to be “creeps”.
In general, there is a kind of damage that is inflicted ONLY upon the morally scrupulous. If you really want to be good, the demands of altruistic or self-sacrificing goodness can be paralyzing. The extreme case of this is scrupulosity as a symptom of OCD. This is a kind of pain that simply does not affect people whose personal standards are more relaxed. […]
What actually happens is that a highly scrupulous person reads a bunch of things that seem to put moral obligations on him, with the implication that the correct amount of moral obligation is always “more,” and *never* finds any piece of feminist writing that explicitly says “this is enough, you can stop here” because there aren’t that many people period who understand that obsessive moral paralysis is a problem. And so you get Scott Aaronson and many others like him (including some women!)
What we need is people talking about the problem of obsessive moral paralysis. “Yes, you *do* have some moral obligations, but they are finite and attainable. Here are realistic examples of people acting acceptably. Here are real-world examples of good men. You can be good without being a martyr.”
There is a lot to like about this piece. Penny correctly points out that women have an extra layer of marginalization on top of what Aaronson went through, and that Aaronson didn’t account for that in his comment.
However, I think the thing that rubbed me wrong about Penny’s piece is that she didn’t offer any account of the role that feminism played in Aaronson’s tortured adolescence, which is an experience unique to the privileged, and which Penny didn’t acknowledge at all. […]
Penny claims the mantle of feminism, yet she refuses to acknowledge the role that her movement played in Aaronson’s tragic story. She demands that Aaronson, as a nerdy white man, be “held to account” for the lack of women in STEM, yet refuses his call that feminism be held to account for its at-worst abusive and at-best unkind rhetoric toward people deemed “privileged.”
The thesis of Penny’s piece is that as a nerdy woman, she went through all of the hell that Aaronson did, plus extra because she’s a woman. I think if she wanted to make that claim, she should have some kind of argument that Aaronson’s unique pain somehow doesn’t count or is somehow lesser than the pain of being a woman. I don’t find that obvious, and I don’t think she even attempted to make a case for it.
I think, as feminist advocates, we are obligated to recognize the darker side of our community and its potential to cause real-world harm. Aaronson’s piece was a real, raw testimonial documenting some of that harm. Penny’s piece just seemed like she was trying to handwave it away. She was compassionate, but she ultimately didn’t seem like she was listening.
I tend to recognize this because it’s a problem I have often — when someone tells me about an issue they have, I try to relate it to my own experience. On the one hand, a measure of that is how empathy/sympathy works. But on the other hand, I have a tendency to ignore the differences that make the other person’s pain and loss unique. I feel like that may be what’s going on here.
Chana Messinger raises the possibility that the harm inflicted on some scrupulous people could be “an unfortunate but necessary side effect of spreading the right messages to everyone else”. To know whether that’s so, we’ll need to investigate how common a problem this is, and whether there are easy ways to avoid it. At this stage, however, relatively few people have acknowledged that this is a concern. I certainly wasn’t aware of it until recently, and I’m now having to rethink how I talk about moral issues.
2. Are nerds oppressed?
Alexander, for his part, argues that shy nerds plausibly do qualify:
I know there are a couple different definitions of what exactly structural oppression is, but however you define it, I feel like people who are at much higher risk of being bullied throughout school, are portrayed by the media as disgusting and ridiculous, have a much higher risk of mental disorders, and are constantly told by mainstream society that they’re ugly and defective kind of counts. If nerdiness is defined as intelligence plus poor social skills, then it is at least as heritable as other things people are willing to count as structural oppression like homosexuality (heritability of social skills, heritability of IQ, heritability of homosexuality)[.]
The three main objections I’ve heard to this line of reasoning are that (1) the shaming and bullying nerds experience is relatively minor, (2) nerds are privileged, and (3) anti-nerd sentiment is really some combination of lookism, ableism, etc.
3 strikes me as a reasonable (though not conclusively demonstrated) position, and is still consistent with points like Frantz’s:
it is amazing how laurie penny can write this entire article without mentioning that neurodiversity is a form of oppression????
“Privilege doesn’t mean you don’t suffer, which, I know, totally blows.” except that a lot of shy nerdy men are suffering because… they lack privilege… on at least one axis
Intersectionality also suggests that anti-nerd sentiment won’t perfectly reduce to its constituent parts. ‘Nerd’ could be a composite like ‘Chinese-American lesbian’ or ‘poor transgender Muslim’, but third-wave feminist theory denies that the social significance of ‘poor transgender Muslim’ is just a conjunction of the significance of ‘poor person’, ‘transgender person’, and ‘Muslim’.
Alexander gives a good response to 2, pointing out that being Jewish (for example) can simultaneously result in being privileged and oppressed. 1 seems like an open empirical question, provided we can agree on a threshold level of harm that is required for something to qualify as ‘oppression’, ‘discrimination’, etc.
Alternatively, one might object that the ‘structures’ Alexander points to are cognitive and cultural, but not institutional. Perhaps there isn’t enough economic, legal, and political restriction on nerds for them to qualify as ‘oppressed’ in the relevant sense. (And perhaps the same is true of Jews in 21st-century America, and we should think of Jews in that context as ‘historically oppressed’ but not actively oppressed? One man’s modus ponens is another’s modus tollens.)
Of course, it could turn out that ‘shy nerds’ suffer as a group from a distinct flavor of oppression even if ‘shy male nerds’ don’t. And Messinger adds in correspondence: “However strong or weak the case for nerd oppression, the case for nerd oppression by feminists is an order of magnitude or two weaker.”
But ‘oppressed’ is in the end just a word. What’s the substantive question under debate?
If some categories of suffering are unusually intense, widespread, and preventable, it makes sense to adopt the heuristic ‘allocate more attention and sympathy to those categories’. This is the schematic reasoning behind treating triggers as qualitatively more important than aversions, or treating racism as qualitatively more important than run-of-the-mill bullying. (At least, it’s the good reasoning. There may be worse reasons on hand, such as medical essentialism and outgroup antipathy.)
However, these heuristics require some policing, or they’ll degrade in effectiveness. Once everyone agrees that ‘triggers’ demand respect, people without PTSD symptoms have an incentive to expand the ‘trigger’ concept to fit their most intense preferences. Once everyone agrees that ‘oppressed groups’ get special consideration, disadvantaged people outside conventional axes of oppression have an incentive to expand the idea of ‘oppression’. This is inevitable, even if no one is being evil. Thus we need to take into account the upkeep cost of preserving these categories’ meanings when we decide whether they’re useful.
Many people intuit that we should have different norms in Europe and the Anglophone world about when it’s OK to belittle white people as a group, versus when it’s OK to belittle black people. The former is “punching up,” the latter “punching down.” Without a clear sense of whether geeks are ‘above’ or ‘below’ us, this heuristic short-circuits here; so the practical import of this debate is how strongly we should endorse a norm ‘don’t pick on shy geeky men as a group’.
Even if geeks aren’t oppressed and their problems are much smaller than those of women, black people, LGBT people, etc., their suffering is still real, and there are probably good ways to reduce it. I don’t know what the best solution here is, but trigger warnings and carefully-labeled safe spaces may be useful for people who want to avoid discussing various forms of feminism. For public spaces, perhaps we need a new concept of ‘punching straight ahead’, and new norms for when that’s OK. I generally prefer to err on the side of niceness, but I understand the arguments for being a loud gadfly, and I don’t know of a practical way to keep memes of wrath from outcompeting pacific memes.
Alexander, however, worries that even raising the issue of punching up vs. down is a red herring. He accuses feminists of misrepresenting Scott Aaronson’s ‘my suffering is real and matters’ as ‘my suffering is the most real and most important kind of suffering’:
If you look through Marcotte’s work, you find this same phrasing quite often. “Some antifeminist guy is ranting at me about how men are the ones who are really oppressed because of the draft” (source). […] But Aaronson is admitting about a hundred times that he recognizes the importance of the ways women are oppressed. The “is really oppressed” isn’t taken from him, it’s assumed by Marcotte. Her obvious worldview is – since privilege and oppression are a completely one dimensional axis, for Aaronson to claim that there is anything whatsoever that has ever been bad for men must be interpreted as a claim that they are the ones who are really oppressed and therefore women are not the ones who are really oppressed and therefore nothing whatsoever has ever been bad for women.
Alexander blames this on “Insane Moon Logic”. I find it likelier that different people, Alexander included, are just focusing on different aspects of Aaronson’s comment, to fit them into different narratives. Aaronson doesn’t deny that women are disadvantaged in various ways, but he, not Marcotte or Penny, is the person who raised the issue of whether geeks are more disprivileged than women. It shouldn’t surprise us that some eyebrows would be raised at lines like:
[1] Alas, as much as I try to understand other people’s perspectives, the first reference to my ‘male privilege’—my privilege!—is approximately where I get off the train, because it’s so alien to my actual lived experience.
[2] But I suspect the thought that being a nerdy male might not make me “privileged”—that it might even have put me into one of society’s least privileged classes—is completely alien to your way of seeing things.
[3] My recurring fantasy, through this period, was to have been born a woman, or a gay man, or best of all, completely asexual, so that I could simply devote my life to math, like my hero Paul Erdös did. Anything, really, other than the curse of having been born a heterosexual male, which for me, meant being consumed by desires that one couldn’t act on or even admit without running the risk of becoming an objectifier or a stalker or a harasser or some other creature of the darkness.
[4] As I see it, whenever these nerdy males pull themselves out of the ditch the world has tossed them into, while still maintaining enlightened liberal beliefs, including in the inviolable rights of every woman and man, they don’t deserve blame for whatever feminist shortcomings they might still have. They deserve medals at the White House.
1 appears to deny the existence of male privilege; 2 suggests that nerdy men may be “one of society’s least privileged classes”; 3 calls being a heterosexual man a “curse”; and 4 can easily be read as demanding cookies (“medals”, even) for insecure men who don’t actively reject women’s rights, no matter how glaring their “feminist shortcomings”.
Aaronson has since explained that he does believe in male privilege, and he has walked back claim 2 to just “the problem of the nerdy ‘heterosexual male’ is surely one of the worst social problems today that you can’t even acknowledge as being a problem” (emphasis added). Still, a feminist could reasonably worry that Aaronson is vacillating between a motte (‘nerds suffer too!’ or ‘there exists at least one person who was harmed by feminist rhetoric!’) and a bailey (‘nerds have it worse than all or most other groups’, or ‘pointing out problems with nerd culture is immoral’).
I hate the ‘motte’/’bailey’ framing — it encourages people to assume malice, even when we should be looking into the possibility that our conversation partner has made a mistake, or has updated their beliefs, or consists of multiple dissenting factions. But if you’re going to use the motte/bailey idea to accuse your enemies of deceit (or Moon Logic), be sure you spend at least as much time testing how readily it applies to your own side.
I don’t know whether Aaronson stands by his younger self’s belief that he would have been better off as a non-white non-heterosexual non-male. As Tarn Somervell Fletcher notes:
I’ve seen plenty of responses that seemed to have completely taken on board everything he’s [Aaronson’s] said, and just think that he’s misjudged how bad it is for some people. When you’re comparing two people’s oppression, or suffering etc. (which is a terrible, terribly unproductive idea, but everyone seems determined to do it anyway), the default is that both people are going to discount (or, fail to count?) the others’ experience.
I agree with Aaronson’s statement, “This whole affair makes me despair of the power of language to convey human reality” (only I came in pre-despairing). Since people are extremely bad at simulating others’ life experiences, Aaronson is likely to misunderstand how bad women, black people, trans people, etc. have it. (This is of course consistent with acknowledging the psychological importance of Aaronson’s feeling that he had it worse than everyone else.) For the same reason, a black lesbian social butterfly would be likely to misunderstand how bad Aaronson has it. If we only rely on who has the most eloquent anecdotes, rather than on reliable population-wide quality-of-life measures, we aren’t going to get very far with these discussions.
And perhaps it isn’t worth the effort, if it’s possible for us to come up with norms of discourse that work OK even when we don’t all start with perfectly accurate beliefs about people’s demographics and relative levels of privilege. Even if punching up is justifiable in principle, we may not want to come in swinging when there’s a chance we’re misappraising the situation.