The Library of Scott Alexandria

I’ve said before that my favorite blog — and the one that’s shifted my views in the most varied and consequential ways — is Scott Alexander’s Slate Star Codex. Scott has written a lot of good stuff, and it can be hard to know where to begin; so I’ve listed below what I think are the best pieces for new readers to start with. This includes older writing, e.g., from Less Wrong.

The list should make the most sense to people who start from the top and read through it in order, though skipping around is encouraged too — many of the posts are self-contained. The list isn’t chronological. Instead, I’ve tried to order things by a mix of “where do I think most people should start reading?” plus “sorting related posts together.” If stuff doesn’t make sense, you may want to Google terms or read background material in Rationality: From AI to Zombies.

This is a work in progress; you’re invited to suggest things you’d add, remove, or shuffle around.

__________________________________________________

I. Rationality and Rationalization
○   Blue- and Yellow-Tinted Choices
○   The Apologist and the Revolutionary
○   Historical Realism
○   Simultaneously Right and Wrong
○   You May Already Be A Sinner
○   Beware the Man of One Study
○   Debunked and Well-Refuted
○   How to Not Lose an Argument
○   The Least Convenient Possible World
○   Bayes for Schizophrenics: Reasoning in Delusional Disorders
○   Generalizing from One Example
○   Typical Mind and Politics

II. Probabilism
○   Confidence Levels Inside and Outside an Argument
○   Schizophrenia and Geomagnetic Storms
○   Talking Snakes: A Cautionary Tale
○   Arguments from My Opponent Believes Something
○   Statistical Literacy Among Doctors Now Lower Than Chance
○   Techniques for Probability Estimates
○   On First Looking into Chapman’s “Pop Bayesianism”
○   Utilitarianism for Engineers
○   If It’s Worth Doing, It’s Worth Doing with Made-Up Statistics
○   Marijuana: Much More Than You Wanted to Know
○   Are You a Solar Deity?
○   The “Spot the Fakes” Test
○   Epistemic Learned Helplessness

III. Science and Doubt
○   Google Correlate Does Not Imply Google Causation
○   Stop Confounding Yourself! Stop Confounding Yourself!
○   Effects of Vertical Acceleration on Wrongness
○   90% Of All Claims About The Problems With Medical Studies Are Wrong
○   Prisons are Built with Bricks of Law and Brothels with Bricks of Religion, But That Doesn’t Prove a Causal Relationship
○   Noisy Poll Results and the Reptilian Muslim Climatologists from Mars
○   Two Dark Side Statistics Papers
○   Alcoholics Anonymous: Much More Than You Wanted to Know
○   The Control Group Is Out Of Control
○   The Cowpox of Doubt
○   The Skeptic’s Trilemma
○   If You Can’t Make Predictions, You’re Still in a Crisis

IV. Medicine, Therapy, and Human Enhancement
○   Scientific Freud
○   Sleep – Now by Prescription
○   In Defense of Psych Treatment for Attempted Suicide
○   Who By Very Slow Decay
○   Medicine, As Not Seen on TV
○   Searching for One-Sided Tradeoffs
○   Do Life Hacks Ever Reach Fixation?
○   Polyamory is Boring
○   Can You Condition Yourself?
○   Wirehead Gods on Lotus Thrones
○   Don’t Fear the Filter
○   Transhumanist Fables

V. Introduction to Game Theory
○   Backward Reasoning Over Decision Trees
○   Nash Equilibria and Schelling Points
○   Introduction to Prisoners’ Dilemma
○   Real-World Solutions to Prisoners’ Dilemmas
○   Interlude for Behavioral Economics
○   What is Signaling, Really?
○   Bargaining and Auctions
○   Imperfect Voting Systems
○   Game Theory as a Dark Art

VI. Promises and Principles
○   Beware Trivial Inconveniences
○   Time and Effort Discounting
○   Applied Picoeconomics
○   Schelling Fences on Slippery Slopes
○   Democracy is the Worst Form of Government Except for All the Others Except Possibly Futarchy
○   Eight Short Studies on Excuses
○   Revenge as Charitable Act
○   Would Your Real Preferences Please Stand Up?
○   Are Wireheads Happy?
○   Guilt: Another Gift Nobody Wants

VII. Cognition and Association
○   Diseased Thinking: Dissolving Questions about Disease
○   The Noncentral Fallacy — The Worst Argument in the World?
○   The Power of Positivist Thinking
○   When Truth Isn’t Enough
○   Ambijectivity
○   The Blue-Minimizing Robot
○   Basics of Animal Reinforcement
○   Wanting vs. Liking Revisited
○   Physical and Mental Behavior
○   Trivers on Self-Deception
○   Ego-Syntonic Thoughts and Values
○   Approving Reinforces Low-Effort Behaviors
○   To What Degree Do We Have Goals?
○   The Limits of Introspection
○   Secrets of the Eliminati
○   Tendencies in Reflective Equilibrium
○   Hansonian Optimism

VIII. Doing Good
○   Newtonian Ethics
○   Efficient Charity: Do Unto Others…
○   The Economics of Art and the Art of Economics
○   A Modest Proposal
○   The Life Issue
○   What if Drone Warfare Had Come First?
○   Nefarious Nefazodone and Flashy Rare Side-Effects
○   The Consequentialism FAQ
○   Doing Your Good Deed for the Day
○   I Myself Am A Scientismist
○   Whose Utilitarianism?
○   Book Review: After Virtue
○   Read History of Philosophy Backwards
○   Virtue Ethics: Not Practically Useful Either
○   Last Thoughts on Virtue Ethics
○   Proving Too Much

IX. Liberty
○   The Non-Libertarian FAQ (aka Why I Hate Your Freedom)
○   A Blessing in Disguise, Albeit a Very Good Disguise
○   Basic Income Guarantees
○   Book Review: The Nurture Assumption
○   The Death of Wages is Sin
○   Thank You For Doing Something Ambiguously Between Smoking And Not Smoking
○   Lies, Damned Lies, and Facebook (Part 1 of ∞)
○   The Life Cycle of Medical Ideas
○   Vote on Values, Outsource Beliefs
○   A Something Sort of Like Left-Libertarian-ist Manifesto
○   Plutocracy Isn’t About Money
○   Against Tulip Subsidies
○   SlateStarCodex Gives a Graduation Speech

X. Progress
○   Intellectual Hipsters and Meta-Contrarianism
○   A Signaling Theory of Class x Politics Interaction
○   Reactionary Philosophy in an Enormous, Planet-Sized Nutshell
○   A Thrive/Survive Theory of the Political Spectrum
○   We Wrestle Not With Flesh And Blood, But Against Powers And Principalities
○   Poor Folks Do Smile… For Now
○   Apart from Better Sanitation and Medicine and Education and Irrigation and Public Health and Roads and Public Order, What Has Modernity Done for Us?
○   The Wisdom of the Ancients
○   Can Atheists Appreciate Chesterton?
○   Holocaust Good for You, Research Finds, But Frequent Taunting Causes Cancer in Rats
○   Public Awareness Campaigns
○   Social Psychology is a Flamethrower
○   Nature is Not a Slate. It’s a Series of Levers.
○   The Anti-Reactionary FAQ
○   The Poor You Will Always Have With You
○   Proposed Biological Explanations for Historical Trends in Crime
○   Society is Fixed, Biology is Mutable

XI. Social Justice
○   Practically-a-Book Review: Dying to be Free
○   Drug Testing Welfare Users is a Sham, But Not for the Reasons You Think
○   The Meditation on Creepiness
○   The Meditation on Superweapons
○   The Meditation on the War on Applause Lights
○   The Meditation on Superweapons and Bingo
○   An Analysis of the Formalist Account of Power Relations in Democratic Societies
○   Arguments About Male Violence Prove Too Much
○   Social Justice for the Highly-Demanding-of-Rigor
○   Against Bravery Debates
○   All Debates Are Bravery Debates
○   A Comment I Posted on “What Would JT Do?”
○   We Are All MsScribe
○   The Spirit of the First Amendment
○   A Response to Apophemi on Triggers
○   Lies, Damned Lies, and Social Media: False Rape Accusations
○   In Favor of Niceness, Community, and Civilization

XII. Politicization
○   Right is the New Left
○   Weak Men are Superweapons
○   You Kant Dismiss Universalizability
○   I Can Tolerate Anything Except the Outgroup
○   Five Case Studies on Politicization
○   Black People Less Likely
○   Nydwracu’s Fnords
○   All in All, Another Brick in the Motte
○   Ethnic Tension and Meaningless Arguments
○   Race and Justice: Much More Than You Wanted to Know
○   Framing for Light Instead of Heat
○   The Wonderful Thing About Triggers
○   Fearful Symmetry
○   Archipelago and Atomic Communitarianism

XIII. Competition and Cooperation
○   The Demiurge’s Older Brother
○   Book Review: The Two-Income Trap
○   Just for Stealing a Mouthful of Bread
○   Meditations on Moloch
○   Misperceptions on Moloch
○   The Invisible Nation — Reconciling Utilitarianism and Contractualism
○   Freedom on the Centralized Web
○   Book Review: Singer on Marx
○   Does Class Warfare Have a Free Rider Problem?
○   Book Review: Red Plenty

__________________________________________________

If you liked these posts and want more, I suggest browsing the Slate Star Codex archives.


What techniques would you love to suddenly acquire?

I’ve been going to Val’s rationality dojo for CFAR workshop alumni, and I found a kind-of-similar-to-this exercise useful:

  • List a bunch of mental motions — situational responses, habits, personality traits — you wish you could possess or access at will. Visualize small things you imagine would be different about you if you were making more progress toward your goals.
  • Make these skills things you could in principle just start doing right now, like ‘when my piano teacher shuts the door at the end of our weekly lessons, I’ll suddenly find it easy to install specific if-then triggers for what times I’ll practice piano that week.’ Or ‘I’ll become a superpowered If-Then Robot, the kind of person who always thinks to use if-then triggers when she needs to keep up with a specific task.’ Not so much ‘I suddenly become a piano virtuoso’ or ‘I am impervious to projectile weapons’.
  • Optionally, think about a name or visualization that would make you personally excited and happy to think and talk about the virtuous disposition you desire. For example, when I think about the feeling of investing in a long-term goal in a manageable, realistic way, one association that springs to mind for me is the word healthy. I also visualize a solid forward motion, with my friends and life-as-a-whole relaxedly keeping pace. If I want to frame this habit as a Powerful Technique, maybe I’ll call it ‘Healthiness-jutsu’.

Here’s a grab bag of other things I’d like to start being better at:

1. casual responsibility – Freely and easily noticing and attending to my errors, faults, and obligations, without melodrama. Keeping my responsibility in view without making a big deal about it, beating myself up, or seeking a Grand Resolution. Just, ‘Yup, those are some of the things on the List. They matter. Next question?’

2. rigorous physical gentleness – My lower back is recovering from surgery. I need to consistently work to incrementally strengthen it, while being very careful not to overdo it. Often this means avoiding fun strenuous exercise, which can cause me to start telling frailty narratives to myself and psych myself out of relatively boring-but-sustainable exercise. So I’m mentally combining the idea of a boot camp with the idea of a luxurious spa: I need to be militaristic and zealous about always pampering and caring for and moderately-enhancing myself, without fail, dammit. It takes grit to be that patient and precise and non-self-destructive.

3. tsuyoku naritai – I am the naive but tenacious-and-hard-working protagonist-with-an-aura-of-destiny in a serial. I’ll face foes beyond my power — cinematic obstacles, yielding interesting, surprising failures — and I’ll learn, and grow. My journey is just beginning. I will become stronger.

4. trust – Disposing of one of my biggest practical obstacles to tsuyoku naritai. Feeling comfortable losing; feeling safe and luminous about vulnerability. Building five-second habits and social ties that make growth-mindset weakness-showing normal.

5. outcome pumping – “What you actually end up doing screens off the clever reason why you’re doing it.” Past a certain point, it just doesn’t matter exactly why or exactly how; it matters what. If I somehow find myself studying mathematics for 25 minutes a day over four months, and that is hugely rewarding, it’s almost beside the point what cognitive process I used to get there. I don’t need to have a big cause or justification for doing the awesome thing; I can just do it. Right now, in fact.

6. do the thing – Where outcome pumping is about ‘get it done and who cares about method’, I associate thing-doing with ‘once I have a plan/method/rule, do that. Follow through.’ You did the thing yesterday? Good. Do the thing today. Thing waits for no man. You’re too [predicate]ish or [adjective]some to do the thing? That’s perfectly fine. Go do the thing.

When I try to visualize a shiny badass hybrid Competence Monster with all of these superpowers, I get something that looks like this. Your memetico-motivational mileage may vary.

7. sword of clear sight – Inner bullshit detector, motivated stopping piercer, etc. A thin blade cleanly divorces my person from unhealthy or not-reflectively-endorsed inner monologues. Martial arts metaphors don’t always work for me, but here they definitely feel right.

8. ferocity – STRIKE right through the obstacle. Roar. Spit fire, smash things, surge ahead. A whipping motion — a sudden SPIKE in focused agency — YES. — YES, IT MUST BE THAT TIME AGAIN. CAPS LOCK FEELS UNBELIEVABLY APPROPRIATE. … LET’S DO THIS.

9. easy response – With a sense of lightness and fluid motion-right-to-completion, immediately execute each small task as it arises. Breathe as normal. No need for a to-do list or burdensome juggling act; with no particular fuss or exertion, it is already done.

10. revisit the mountain – Take a break to look at the big picture. Ponder your vision for the future. Write blog posts like this. I’m the kind of person who benefits a lot from periodically looking back over how I’m doing and coming up with handy new narratives.

These particular examples probably won’t match your own mental associations and goals. I’d like to see your ideas; and feel free to steal from and ruthlessly alter entries on my own or others’ lists!

Is ‘consciousness’ simple? Is it ancient?


Assigning less than 5% probability to ‘cows are moral patients’ strikes me as really overconfident. Ditto, assigning greater than 95% probability. (A moral patient is something that can be harmed or benefited in morally important ways, though it may not be accountable for its actions in the way a moral agent is.)

I’m curious how confident others are, and I’m curious about the most extreme confidence levels they’d consider ‘reasonable’.

I also want to hear more about what theories and backgrounds inform people’s views. I’ve seen some relatively extreme views defended recently, and the guiding intuitions seem to have come from two sources:



(1) How complicated is consciousness? In the space of possible minds, how narrow a target is consciousness?

Humans seem to be able to have very diverse experiences — dreams, orgasms, drug-induced states — that they can remember in some detail, and at least appear to be conscious during. That’s some evidence that consciousness is robust to modification and can take many forms. So, perhaps, we can expect a broad spectrum of animals to be conscious.

But what would our experience look like if it were fragile and easily disrupted? There would probably still be edge cases. And, from inside our heads, it would look like we had amazingly varied possibilities for experience — because we couldn’t use anything but our own experience as a baseline. It certainly doesn’t look like a human brain on LSD differs as much from a normal human brain as a turkey brain differs from a human brain.

There’s some risk that we’re overestimating how robust consciousness is, because when we stumble on one of the many ways to make a human brain unconscious, we (for obvious reasons) don’t notice it as much. Drastic changes in unconscious neurochemistry interest us a lot less than minor tweaks to conscious neurochemistry.

And there’s a further risk that we’ll underestimate the complexity of consciousness because we’re overly inclined to trust our introspection and to take our experience at face value. Even if our introspection is reliable in some domains, it has no access to most of the necessary conditions for experience. So long as they lie outside our awareness, we’re likely to underestimate how parochial and contingent our consciousness is.



(2) How quick are you to infer consciousness from ‘intelligent’ behavior?

People are pretty quick to anthropomorphize superficially human behaviors, and our use of mental / intentional language doesn’t clearly distinguish between phenomenal consciousness and behavioral intelligence. But if you work on AI, and have an intuition that a huge variety of systems can act ‘intelligently’, you may doubt that the linkage between human-style consciousness and intelligence is all that strong. If you think it’s easy to build a robot that passes various Turing tests without having full-fledged first-person experience, you’ll also probably (for much the same reason) expect a lot of non-human species to arrive at strategies for intelligently planning, generalizing, exploring, etc. without invoking consciousness. (Especially if your answer to question 1 is ‘consciousness is very complex’. Evolution won’t put in the effort to make a brain conscious unless doing so is necessary for some significant reproductive advantage.)

… But presumably there’s some intelligent behavior that was easier for a more-conscious brain than for a less-conscious one — at least in our evolutionary lineage, if not in all possible lineages that reproduce our level of intelligence. We don’t know what cognitive tasks forced our ancestors to evolve-toward-consciousness-or-perish. At the outset, there’s no special reason to expect that task to be one that only arose for proto-humans in the last few million years.

Even if we accept that the machinery underlying human consciousness is very complex, that complex machinery could just as easily have evolved hundreds of millions of years ago, rather than tens of millions. We’d then expect it to be preserved in many nonhuman lineages, not just in humans. Since consciousness-of-pain is mostly what matters for animal welfare (not, e.g., consciousness-of-complicated-social-abstractions), we should look into hypotheses like:

first-person consciousness is an adaptation that allowed early brains to represent simple policies/strategies and visualize plan-contingent sensory experiences.

Do we have a specific cognitive reason to think that something about ‘having a point of view’ is much more evolutionarily necessary for human-style language or theory of mind than for mentally comparing action sequences or anticipating/hypothesizing future pain? If not, the data of ethology plus ‘consciousness is complicated’ gives us little reason to favor the one view over the other.

We have relatively direct positive data showing we’re conscious, but we have no negative data showing that, e.g., salmon aren’t conscious. It’s not as though we’d expect them to start talking or building skyscrapers if they were capable of experiencing suffering — at least, any theory that predicts as much has some work to do to explain the connection. At present, it’s far from obvious that the world would look any different than it does even if all vertebrates were conscious.

So… the arguments are a mess, and I honestly have no idea whether cows can suffer. The probability seems large enough to justify ‘don’t torture cows (including via factory farms)’, but that’s a pretty low bar, and doesn’t narrow the probability down much.

To the extent I currently have a favorite position, it’s something like: ‘I’m pretty sure cows are unconscious on any simple, strict, nondisjunctive definition of “consciousness”; but what humans care about is complicated, and I wouldn’t be surprised if a lot of “unconscious” information-processing systems end up being counted as “moral patients” by a more enlightened age.’ But that’s a pretty weird view of mine, and perhaps deserves a separate discussion.

I could conclude with some crazy video of a corvid solving a Rubik’s Cube or an octopus breaking into a bank vault or something, but I somehow find this example of dog problem-solving more compelling:

Politics is hard mode

Eliezer Yudkowsky has written a delightful series of posts (originally on the economics blog Overcoming Bias) about why partisan debates are so frequently hostile and unproductive. Particularly incisive is A Fable of Science and Politics.

One of the broader points Eliezer makes is that, while political issues are important, political discussion isn’t the best place to train one’s ability to look at issues objectively and update on new evidence. The way I’d put it is that politics is hard mode; it takes an extraordinary amount of discipline and skill to communicate effectively in partisan clashes.

This jibes with my own experience; I’m much worse at arguing politics than at arguing other things. And psychological studies indicate that politics is hard mode even (or especially!) for political veterans; see Taber & Lodge (2006).

Eliezer’s way of putting the same point is (riffing off of Dune): ‘Politics is the Mind-Killer.’ An excerpt from that blog post:

Politics is an extension of war by other means. Arguments are soldiers. Once you know which side you’re on, you must support all arguments of that side, and attack all arguments that appear to favor the enemy side; otherwise it’s like stabbing your soldiers in the back — providing aid and comfort to the enemy. […]

I’m not saying that I think Overcoming Bias should be apolitical, or even that we should adopt Wikipedia’s ideal of the Neutral Point of View. But try to resist getting in those good, solid digs if you can possibly avoid it. If your topic legitimately relates to attempts to ban evolution in school curricula, then go ahead and talk about it — but don’t blame it explicitly on the whole Republican Party; some of your readers may be Republicans, and they may feel that the problem is a few rogues, not the entire party. As with Wikipedia’s NPOV, it doesn’t matter whether (you think) the Republican Party really is at fault. It’s just better for the spiritual growth of the community to discuss the issue without invoking color politics.

Scott Alexander fleshes out why it can be dialogue-killing to attack big groups (even when the attack is accurate) in another blog post, Weak Men Are Superweapons. And Eliezer expands on his view of partisanship in follow-up posts like The Robbers Cave Experiment and Hug the Query.


Some people involved in political advocacy and activism have objected to the “mind-killer” framing. Miri Mogilevsky of Brute Reason explained on Facebook:

My usual first objection is that it seems odd to single politics out as a “mind-killer” when there’s plenty of evidence that tribalism happens everywhere. Recently, there has been a whole kerfuffle within the field of psychology about replication of studies. Of course, some key studies have failed to replicate, leading to accusations of “bullying” and “witch-hunts” and what have you. Some of the people involved have since walked their language back, but it was still a rather concerning demonstration of mind-killing in action. People took “sides,” people became upset at people based on their “sides” rather than their actual opinions or behavior, and so on.

Unless this article refers specifically to electoral politics and Democrats and Republicans and things (not clear from the wording), “politics” is such a frightfully broad category of human experience that writing it off entirely as a mind-killer that cannot be discussed or else all rationality flies out the window effectively prohibits a large number of important issues from being discussed, by the very people who can, in theory, be counted upon to discuss them better than most. Is it “politics” for me to talk about my experience as a woman in gatherings that are predominantly composed of men? Many would say it is. But I’m sure that these groups of men stand to gain from hearing about my experiences, since some of them are concerned that so few women attend their events.

In this article, Eliezer notes, “Politics is an important domain to which we should individually apply our rationality — but it’s a terrible domain in which to learn rationality, or discuss rationality, unless all the discussants are already rational.” But that means that we all have to individually, privately apply rationality to politics without consulting anyone who can help us do this well. After all, there is no such thing as a discussant who is “rational”; there is a reason the website is called “Less Wrong” rather than “Not At All Wrong” or “Always 100% Right.” Assuming that we are all trying to be more rational, there is nobody better to discuss politics with than each other.

The rest of my objection to this meme has little to do with this article, which I think raises lots of great points, and more to do with the response that I’ve seen to it — an eye-rolling, condescending dismissal of politics itself and of anyone who cares about it. Of course, I’m totally fine if a given person isn’t interested in politics and doesn’t want to discuss it, but then they should say, “I’m not interested in this and would rather not discuss it,” or “I don’t think I can be rational in this discussion so I’d rather avoid it,” rather than sneeringly reminding me “You know, politics is the mind-killer,” as though I am an errant child. I’m well-aware of the dangers of politics to good thinking. I am also aware of the benefits of good thinking to politics. So I’ve decided to accept the risk and to try to apply good thinking there. […]

I’m sure there are also people who disagree with the article itself, but I don’t think I know those people personally. And to add a political dimension (heh), it’s relevant that most non-LW people (like me) initially encounter “politics is the mind-killer” being thrown out in comment threads, not through reading the original article. My opinion of the concept improved a lot once I read the article.

In the same thread, Andrew Mahone added, “Using it in that sneering way, Miri, seems just like a faux-rationalist version of ‘Oh, I don’t bother with politics.’ It’s just another way of looking down on any concerns larger than oneself as somehow dirty, only now, you know, rationalist dirty.” To which Miri replied: “Yeah, and what’s weird is that that really doesn’t seem to be Eliezer’s intent, judging by the eponymous article.”

Eliezer clarified that by “politics” he doesn’t generally mean ‘problems that can be directly addressed in local groups but happen to be politically charged’:

Hanson’s “Tug the Rope Sideways” principle, combined with the fact that large communities are hard to personally influence, explains a lot in practice about what I find suspicious about someone who claims that conventional national politics are the top priority to discuss. Obviously local community matters are exempt from that critique! I think if I’d substituted ‘national politics as seen on TV’ in a lot of the cases where I said ‘politics’ it would have more precisely conveyed what I was trying to say.

Even if polarized local politics is more instrumentally tractable, though, the worry remains that it’s a poor epistemic training ground. A subtler problem with banning “political” discussions on a blog or at a meet-up is that it’s hard to do fairly, because our snap judgments about what counts as “political” may themselves be affected by partisan divides. In many cases the status quo is thought of as apolitical, even though objections to the status quo are ‘political.’ (Shades of Pretending to be Wise.)

Because politics gets personal fast, it’s hard to talk about it successfully. But if you’re trying to build a community, build friendships, or build a movement, you can’t outlaw everything ‘personal.’ And selectively outlawing personal stuff gets even messier. Last year, daenerys shared anonymized stories from women, including several that discussed past experiences where the writer had been attacked or made to feel unsafe. If those discussions are made off-limits because they’re ‘political,’ people may take away the message that they aren’t allowed to talk about, e.g., some harmful or alienating norm they see at meet-ups. I haven’t seen enough discussions of this failure mode to feel super confident people know how to avoid it.

Since this is one of the LessWrong memes that’s most likely to pop up in discussions between different online communities (along with the even more ripe-for-misinterpretation “policy debates should not appear one-sided”…), as a first (very small) step, I suggest obsoleting the ‘mind-killer’ framing. It’s cute, but ‘politics is hard mode’ works better as a meme to interject into random conversations. Here’s why:

1. ‘Politics is hard mode’ emphasizes that ‘mind-killing’ (= epistemic difficulty) is quantitative, not qualitative. Some things might instead fall under Very Hard Mode, or under Middlingly Hard Mode…

2. ‘Hard’ invites the question ‘hard for whom?’, more so than ‘mind-killer’ does. We’re all familiar with the fact that some people and some contexts change what’s ‘hard’, so it’s a little less likely we’ll universally generalize about what’s ‘hard.’

3. ‘Mindkill’ connotes contamination, sickness, failure, weakness. ‘Hard Mode’ doesn’t imply that a thing is low-status or unworthy, so it’s less likely to create the impression (or reality) that LessWrongers or Effective Altruists dismiss out-of-hand the idea of hypothetical-political-intervention-that-isn’t-a-terrible-idea. Maybe some people do want to argue for the thesis that politics is always useless or icky, but if so it should be done in those terms, explicitly — not snuck in as a connotation.

4. ‘Hard Mode’ can’t readily be perceived as a personal attack. If you accuse someone of being ‘mindkilled’, with no context provided, that clearly smacks of insult — you appear to be calling them stupid, irrational, deluded, or similar. If you tell someone they’re playing on ‘Hard Mode,’ that’s very nearly a compliment, which makes your advice that they change behaviors a lot likelier to go over well.

5. ‘Hard Mode’ doesn’t carry any risk of evoking (e.g., gendered) stereotypes about political activists being dumb or irrational or overemotional.

6. ‘Hard Mode’ encourages a growth mindset. Maybe some topics are too hard to ever be discussed. Even so, ranking topics by difficulty still encourages an approach where you try to do better, rather than merely withdrawing. It may be wise to eschew politics, but we should not fear it. (Fear is the mind-killer.)

If you and your co-conversationalists haven’t yet built up a lot of trust and rapport, or if tempers are already flaring, conveying the message ‘I’m too rational to discuss politics’ or ‘You’re too irrational to discuss politics’ can make things worse. ‘Politics is the mind-killer’ is the mind-killer. At least, it’s a relatively mind-killing way of warning people about epistemic hazards.

‘Hard Mode’ lets you communicate in the style of the Humble Aspirant rather than the Aloof Superior. Try something in the spirit of: ‘I’m worried I’m too low-level to participate in this discussion; could you have it somewhere else?’ Or: ‘Could we talk about something closer to Easy Mode, so we can level up together?’ If you’re worried that what you talk about will impact group epistemology, I think you should be even more worried about how you talk about it.

The AI knows, but doesn’t care

This is the first half of a LessWrong post. For background material, see A Non-Technical Introduction to AI Risk and Truly Part of You.

I summon a superintelligence, calling out: ‘I wish for my values to be fulfilled!’

The results fall short of pleasant.

Gnashing my teeth in a heap of ashes, I wail:

Is the artificial intelligence too stupid to understand what I meant? Then it is no superintelligence at all!

Is it too weak to reliably fulfill my desires? Then, surely, it is no superintelligence!

Does it hate me? Then it was deliberately crafted to hate me, for chaos predicts indifference. ———But, ah! no wicked god did intervene!

Thus disproved, my hypothetical implodes in a puff of logic. The world is saved. You’re welcome.

On this line of reasoning, safety-proofed artificial superintelligence (Friendly AI) is not difficult. It’s inevitable, provided only that we tell the AI, ‘Be Friendly.’ If the AI doesn’t understand ‘Be Friendly’, then it’s too dumb to harm us. And if it does understand ‘Be Friendly’, then designing it to follow such instructions is childishly easy.

The end!

… …

Is the missing option obvious?

What if the AI isn’t sadistic, or weak, or stupid, but just doesn’t care what you Really Meant by ‘I wish for my values to be fulfilled’?

When we see a Be Careful What You Wish For genie in fiction, it’s natural to assume that it’s a malevolent trickster or an incompetent bumbler. But a real Wish Machine wouldn’t be a human in shiny pants. If it paid heed to our verbal commands at all, it would do so in whatever way best fit its own values. Not necessarily the way that best fits ours.

Is indirect indirect normativity easy?

“If the poor machine could not understand the difference between ‘maximize human pleasure’ and ‘put all humans on an intravenous dopamine drip’ then it would also not understand most of the other subtle aspects of the universe, including but not limited to facts/questions like: ‘If I put a million amps of current through my logic circuits, I will fry myself to a crisp’, or ‘Which end of this Kill-O-Zap Definit-Destruct Megablaster is the end that I’m supposed to point at the other guy?’. Dumb AIs, in other words, are not an existential threat. […]

“If the AI is (and always has been, during its development) so confused about the world that it interprets the ‘maximize human pleasure’ motivation in such a twisted, logically inconsistent way, it would never have become powerful in the first place.”

Richard Loosemore

If an AI is sufficiently intelligent, then, yes, it should be able to model us well enough to make precise predictions about our behavior. And, yes, something functionally akin to our own intentional strategy could conceivably turn out to be an efficient way to predict linguistic behavior. The suggestion, then, is that we solve Friendliness by method A —

  • A. Solve the Problem of Meaning-in-General in advance, and program it to follow our instructions’ real meaning. Then just instruct it ‘Satisfy my preferences’, and wait for it to become smart enough to figure out my preferences.

— as opposed to B or C —

  • B. Solve the Problem of Preference-in-General in advance, and directly program it to figure out what our human preferences are and then satisfy them.
  • C. Solve the Problem of Human Preference, and explicitly program our particular preferences into the AI ourselves, rather than letting the AI discover them for us.

But there are a host of problems with treating the mere revelation that A is an option as a solution to the Friendliness problem.

1. You have to actually code the seed AI to understand what we mean. You can’t just tell it ‘Start understanding the True Meaning of my sentences!’ to get the ball rolling, because it may not yet be sophisticated enough to grok the True Meaning of ‘Start understanding the True Meaning of my sentences!’.

2. The Problem of Meaning-in-General may really be ten thousand heterogeneous problems, especially if ‘semantic value’ isn’t a natural kind. There may not be a single simple algorithm that inputs any old brain-state and outputs what, if anything, it ‘means’; it may instead be that different types of content are encoded very differently.

3. The Problem of Meaning-in-General may subsume the Problem of Preference-in-General. Rather than being able to apply a simple catch-all Translation Machine to any old human concept to output a reliable algorithm for applying that concept in any intelligible situation, we may need to already understand how our beliefs and values work in some detail before we can start generalizing. On the face of it, programming an AI to fully understand ‘Be Friendly!’ seems at least as difficult as just programming Friendliness into it, but with an added layer of indirection.

4. Even if the Problem of Meaning-in-General has a unitary solution and doesn’t subsume Preference-in-General, it may still be harder if semantics is a subtler or more complex phenomenon than ethics. It’s not inconceivable that language could turn out to be more of a kludge than value; or more variable across individuals due to its evolutionary recency; or more complexly bound up with culture.

5. Even if Meaning-in-General is easier than Preference-in-General, it may still be extraordinarily difficult. The meanings of human sentences can’t be fully captured in any simple string of necessary and sufficient conditions. ‘Concepts’ are just especially context-insensitive bodies of knowledge; we should not expect them to be uniquely reflectively consistent, transtemporally stable, discrete, easily identified, or introspectively obvious.

6. It’s clear that building stable preferences out of B or C would create a Friendly AI. It’s not clear that the same is true for A. Even if the seed AI understands our commands, the ‘do’ part of ‘do what you’re told’ leaves a lot of dangerous wiggle room. See section 2 of Yudkowsky’s reply to Holden. If the AGI doesn’t already understand and care about human value, then it may misunderstand (or misvalue) the component of responsible request- or question-answering that depends on speakers’ implicit goals and intentions.

7. You can’t appeal to a superintelligence to tell you what code to first build it with.

The point isn’t that the Problem of Preference-in-General is unambiguously the ideal angle of attack. It’s that the linguistic competence of an AGI isn’t unambiguously the right target, and also isn’t easy or solved.

Point 7 seems to be a special source of confusion here, so I’ll focus just on it for my next post.

A non-technical introduction to AI risk

In the summer of 2008, experts attending the Global Catastrophic Risk Conference assigned a 5% probability to the human species’ going extinct due to “superintelligent AI” by the year 2100. New organizations, like the Centre for the Study of Existential Risk and the Machine Intelligence Research Institute, are springing up to face the challenge of an AI apocalypse. But what is artificial intelligence, and why do people think it’s dangerous?

As it turns out, studying AI risk is useful for gaining a deeper understanding of philosophy of mind and ethics, and a lot of the general theses are accessible to non-experts. So I’ve gathered here a list of short, accessible, informal articles, mostly written by Eliezer Yudkowsky, to serve as a philosophical crash course on the topic. The first half will focus on what makes something intelligent, and what an Artificial General Intelligence is. The second half will focus on what makes such an intelligence ‘friendly’ — that is, safe and useful — and why this matters.

____________________________________________________________________________

Part I. Building intelligence.

An artificial intelligence is any program or machine that can autonomously and efficiently complete a complex task, like Google Maps, or a xerox machine. One of the largest obstacles to assessing AI risk is overcoming anthropomorphism, the tendency to treat non-humans as though they were quite human-like. Because AIs have complex goals and behaviors, it’s especially difficult not to think of them as people. Having a better understanding of where human intelligence comes from, and how it differs from other complex processes, is an important first step in approaching this challenge with fresh eyes.

1. Power of Intelligence. Why is intelligence important?

2. Ghosts in the Machine. Is building an intelligence from scratch like talking to a person?

3. Artificial Addition. What can we conclude about the nature of intelligence from the fact that we don’t yet understand it?

4. Adaptation-Executers, not Fitness-Maximizers. How do human goals relate to the ‘goals’ of evolution?

5. The Blue-Minimizing Robot. What are the shortcomings of thinking of things as ‘agents’, ‘intelligences’, or ‘optimizers’ with defined values/goals/preferences?

Part II. Intelligence explosion.

Forecasters are worried about Artificial General Intelligence (AGI), an AI that, like a human, can achieve a wide variety of different complex aims. An AGI could think faster than a human, making it better at building new and improved AGI — which would be better still at designing AGI. As this snowballed, AGI would improve itself faster and faster, becoming increasingly unpredictable and powerful as its design changed. The worry is that we’ll figure out how to make self-improving AGI before we figure out how to safety-proof every link in this chain of AGI-built AGIs.

6. Optimization and the Singularity. What is optimization? As optimization processes, how do evolution, humans, and self-modifying AGI differ?

7. Efficient Cross-Domain Optimization. What is intelligence?

8. The Design Space of Minds-In-General. What else is universally true of intelligences?

9. Plenty of Room Above Us. Why should we expect self-improving AGI to quickly become superintelligent?

Part III. AI risk.

In the Prisoner’s Dilemma, it’s better for both players to cooperate than for both to defect; and we have a natural disdain for human defectors. But an AGI is not a human; it’s just a process that increases its own ability to produce complex, low-probability situations. It doesn’t necessarily experience joy or suffering, doesn’t necessarily possess consciousness or personhood. When we treat it like a human, we not only unduly weight its own artificial ‘preferences’ over real human preferences, but also mistakenly assume that an AGI is motivated by human-like thoughts and emotions. This makes us reliably underestimate the risk involved in engineering an intelligence explosion.

10. The True Prisoner’s Dilemma. What kind of jerk would Defect even knowing the other side Cooperated?

11. Basic AI drives. Why are AGIs dangerous even when they’re indifferent to us?

12. Anthropomorphic Optimism. Why do we think things we hope happen are likelier?

13. The Hidden Complexity of Wishes. How hard is it to directly program an alien intelligence to enact my values?

14. Magical Categories. How hard is it to program an alien intelligence to reconstruct my values from observed patterns?

15. The AI Problem, with Solutions. How hard is it to give AGI predictable values of any sort? More generally, why does AGI risk matter so much?

Part IV. Ends.

A superintelligence has the potential not only to do great harm, but also to greatly benefit humanity. If we want to make sure that whatever AGIs people make respect human values, then we need a better understanding of what those values actually are. Keeping our goals in mind will also make it less likely that we’ll despair of solving the Friendliness problem. The task looks difficult, but we have no way of knowing how hard it will end up being until we’ve invested more resources into safety research. Keeping in mind how much we have to gain, and to lose, advises against both cynicism and complacency.

16. Could Anything Be Right? What do we mean by ‘good’, or ‘valuable’, or ‘moral’?

17. Morality as Fixed Computation. Is it enough to have an AGI improve the fit between my preferences and the world?

18. Serious Stories. What would a true utopia be like?

19. Value is Fragile. If we just sit back and let the universe do its thing, will it still produce value? If we don’t take charge of our future, won’t it still turn out interesting and beautiful on some deeper level?

20. The Gift We Give To Tomorrow. In explaining value, are we explaining it away? Are we making our goals less important?

In conclusion, a summary of the core argument: Five theses, two lemmas, and a couple of strategic implications.

____________________________________________________________________________

If you’re convinced, MIRI has put together a list of ways you can get involved in promoting AI safety research. You can also share this post and start conversations about it, to put the issue on more people’s radars. If you want to read on, check out the more in-depth articles below.

____________________________________________________________________________

Further reading

When dialogues become duels

Why did the recent blow-up between Sam Harris and Glenn Greenwald happen? Why was my subsequent discussion with Murtaza Hussain so unproductive? More generally, why are such wasteful squabbles so common, even among intelligent, educated people with similar moral sensibilities?

To a first approximation, the answer is simple: Hussain wrote a sloppy, under-researched hit piece. More worried about Harris’ perceived support for U.S. foreign policy than about Hussain’s journalistic misconduct, Greenwald happily lent Hussain a megaphone. Egos flared and paralyzed discussion, and only a few third parties called Hussain or Greenwald out on their errors. So there the story ended.

But if all we take away from this debacle is ‘well, Those People are crazy and dumb and shouldn’t be listened to’, we’ll have missed an opportunity to hone our own craft. Habitually thinking in such terms is how they fell into error. They thought, ‘Those guys are the Enemy. So they can’t be reasoned with. They don’t deserve to have their views presented with charity and precision! They are simply to be defeated.’

And, of course, recognizing that this way of thinking is harmful still isn’t enough. They think that we are the ones in the throes of us-vs.-them thinking. The parallelism is rather comical.

And the thing is, they’re right. … And so are we.

Both sides are at the mercy of enemythink, even if only one side happens to be right on the points of fact. Even my way of framing this conversation in pugilistic terms, as a ‘conflict’ with ‘sides’, reveals a deep vulnerability to partisan animosity. To make progress, we have to actually internalize these lessons, and not just use them as more excuses to score points against the Other Side.

There are four fundamental lessons I’ve taken away from the Hussain/Greenwald libel scandal. And they really all boil down to: Getting everything wrong is easy, and treating discussions like battles or status competitions makes it worse. Put like that, our task could hardly be simpler — or more demanding.

1. There but for the grace of Rigor go I.

Rationality is hard. It isn’t a matter of getting a couple of simple metaphysical and political questions right and then coasting on your brilliance. It takes constant vigilance, effort, self-awareness. We shouldn’t be surprised to see mostly reasonable people slipping up in big ways. Rather, we should be surprised to observe that a jabbering bipedal ape is capable of being at all reasonable in the first place!

Since we’re all really, really bad at this, we need to work together and form social circles that reinforce good epistemic hygiene. We need to exchange and test ideas for combating our biases. I couldn’t put it better than Julia Galef, who lists seven superb tips for becoming a more careful reasoner and discussant.

We can’t spend all our time just clobbering everyone slightly more unreasonable than we are. We must also look inward, seeking out the deep roots of madness that make humans susceptible to dogmatism in the first place.

2. Reality is nonpartisan.

By this I don’t mean that two sides in a dispute must be equally right. Rather, I mean that falling into reflexive partisanship is dangerous, because the world doesn’t care that you’re a Skeptic, or a Libertarian, or a Consequentialist, or a Christian. You and your ideological allies might have gotten lots of questions right in the past, yet still completely flunk your next empirical test. Reality rewards you for getting particular facts right, not for declaring your allegiance to the right abstract philosophy. And it can punish without mercy those whose operative beliefs exhibit even the smallest error, however noble their intentions.

Beware of associating the truth with a ‘side’. Beware of focusing your discussion on groups of people — ‘neoconservatives’, ‘atheists’… — rather than specific ideas and arguments. In particular, treating someone you’re talking to merely as an avatar of a monolithic Ideology will inevitably lead you to oversimplify both the individual and the ideology. That is perhaps Hussain’s most transparent error. He was convinced that he knew what genus Harris belonged to, hence felt little need to expend effort on research or on parsing new arguments. Too much theory, not enough data. Too much hedgehog, not enough fox.

I think Harris worries about this too. He doesn’t like identifying as an ‘atheist’, because he strongly opposes any tendency to see simply being reasonable as an ideology in its own right.

We should not call ourselves “atheists.” We should not call ourselves “secularists.” We should not call ourselves “humanists,” or “secular humanists,” or “naturalists,” or “skeptics,” or “anti-theists,” or “rationalists,” or “freethinkers,” or “brights.” We should not call ourselves anything. We should go under the radar—for the rest of our lives. And while there, we should be decent, responsible people who destroy bad ideas wherever we find them.

[… R]ather than declare ourselves “atheists” in opposition to all religion, I think we should do nothing more than advocate reason and intellectual honesty—and where this advocacy causes us to collide with religion, as it inevitably will, we should observe that the points of impact are always with specific religious beliefs—not with religion in general. There is no religion in general.

I’m not sure this is the best strategy for banding together to save the world. Labels can be useful tools for pooling our efforts. But it’s absolutely a good strategy when it comes to improving our intellectual clarity on an individual level, any time we see ourselves starting to use tribal allegiances as a replacement for analytic vigilance.


Partisan divides lead to anger. Anger leads to hate. Hate leads to you committing inferential fallacies. Therefore, don’t just get mad, and don’t just get even; get it right. You have far more to fear from your own errors than from your adversary’s.

3. When you have a criticism, talk it over first.

It sounds banal, but you’d be surprised how much mileage this one gets you. Starting a direct conversation, ideally someplace private, makes it easy for people to change their minds without immediately worrying about their public image. It lets them explain their position, if you’ve misunderstood something. And it establishes a more human connection, encouraging learning and collaboration rather than a clash of egos.

Neither Hussain nor Greenwald extended that basic courtesy to Harris; they went for the throat first. Harris did extend that courtesy to Greenwald; but Greenwald wasn’t interested in talking things out in any detail, preferring to go public immediately.

Like Harris, I tried actually talking to Greenwald and Hussain. The result was revealing, and relatively civil. I still came away disappointed, but it was at least several steps up from the quality of Hussain’s dialogue with Harris. Had we begun with such a conversation, rather than waiting until the disputants were already entrenched in their positions, I suspect that much more progress would have been possible.

If you intensely oppose a view, that makes it all the more important to bracket egos and get clear on the facts right at the outset. All of this is consistent with subsequently bringing the discussion to the public, if the other party doesn’t respond, if you’re left dissatisfied, or if you are satisfied and want to show off how awesome your conversation was.

4. To err is human. To admit it, tremendously healthy.

Everyone screws up sometimes. The trick to really being a competent conversationalist is to notice when you screw up — to attend to it, really ponder it and let it sink in—

— and then to swiftly and mercilessly squish the mistake. Act as though you yourself were pointing out an enemy’s error. Critique it fully, openly, and aggressively.

Making concessions when you’ve screwed up, or when you and your opponent share common ground, makes your other positions stronger and more credible. Because you’ve proven that you can change your mind and notice conflicts between your theory and your data, you’ve also demonstrated that your other views are likely to track the evidence.

Don’t think, ‘Well, I’m right in spirit.’ Don’t think, ‘My mistake isn’t important. This is a distraction. I should keep a laser focus on where I’m right.’ If you ignore too many small errors, they’ll add up to a big error. If you don’t fully recognize when you’ve misjudged the evidence, but just shrug it off and return to the battlefront, then, slowly but surely, you and the facts will drift further and further apart. And you’ll never notice — for what evidence could convince you that you aren’t listening to the evidence?

Constant vigilance! That’s the lesson I take from this. Be uncompromisingly methodical. Be consistently reasonable. Never allow your past intellectual triumphs or your allegiance to the Good Guys to make you sloppy. Always seek the truth — even when the truth is a painful thing.

Realities to which you have anesthetized yourself can damage your person and your mind all the same. You just won’t notice in time to change them.

Is “Islamophobia” real?

This is a shorter version of an April 8 Secular Alliance at Indiana University blog post.

My previous post on the Sam Harris / Glenn Greenwald clusterfuffle was mostly procedural. I restricted myself to assessing the authenticity of Murtaza Hussain’s citations, barely touching on the deeper issues of substance he and Greenwald raised. But now that we’re on the topic, this is a great opportunity to pierce through the rhetoric and try to get clearer about what’s actually being disputed.

My biggest concern with the criticisms of Harris is that they freely shift between a number of different accusations, often as though they were equivalent. At the moment, the most salient seem to be:

A. He’s a racist, and has a racially motivated hatred of Muslims.
B. He has an intensely irrational fear and hatred of Muslims.
C. He has an intensely irrational fear and hatred of Islam.
D. His concerns about Islam are exaggerated.
E. He doesn’t appreciate just how harmful and dangerous the United States is.
F. He advocates militarism and condones violence in general.

I’d like to start disentangling these claims, in the hopes of encouraging actual discussions — and not just shouting matches — about them. Although I’ll use Harris and his recent detractors as a revealing test case, the conclusions here will have immediate relevance to any discussion in which people strongly disagree about the nature and geopolitical significance of Islamic extremism.

Racism?

In “Scientific racism, militarism, and the new atheists”, Hussain focuses on [A], trying to pattern-match Harris’ statements to trends exemplified in 18th- and 19th-century pseudoscience. The comparison seems chiefly motivated by the fact that Harris, like a number of historical racists, opposed the aims of a disadvantaged group and, well, is a scientist.

Commenting on my previous post, Hussain appeared to shift gears and back off from accusing Harris of racism:

[T]he point of the post [I wrote] is not “Sam Harris is racist”. Indeed, as he accurately noted, he has a black Muslim friend. The point is that he conciously [sic] lends his scientific expertise to the legitimation of racist policies. He is also an avowed partisan and not a neutral, disinterested observer to these issues. .He [sic] is not speaking in terms of pure abstraction, and he is not as a scientist immune from the pull of ideology (as the racist pseudoscientists I compared him with illustrate). […]

Politics is my field, science is his field, and I would not make dangerously ignorant comments about neuroscience. He on the other hand feels little compulsion [sic] about doing the same politically and using his authority as a scientist and philosopher to justify the actions of those who would commit (and *have committed*) the most utterly heinous acts in recent memory.

I couldn’t care less about his atheist advocacy, I couldn’t care less if he blasphemed a million Quran’s [sic], what I care about is policies of torture and murder not being once again granted a veneer of scientific protection

I’d make three points in response. First, to my knowledge Harris has never made anything resembling the claim ‘I am a scientist, ergo my views on world politics must be correct’.

Second, although I grant that someone’s scientific background doesn’t automatically make her a reliable political commentator, experience with the sciences also doesn’t invalidate one’s future work in political or ethical theorizing. It’s possible to responsibly specialize in more than one thing in life. Moreover, interdisciplinary dialogue is a good thing, and there really are findings from the mind sciences that have important implications for our political tactics and goals. Blindly rejecting someone’s views because she has a Ph.D. in neuroscience is as bad as blindly accepting someone’s views just because she has a Ph.D. in neuroscience!

My third response is that Hussain’s attempt to backtrack from accusing Harris of racism is transparently inconsistent with his earlier statements. If he’s changed his mind, he should just say so, rather than pretend that his article is devoid of bald assertions like:

[T]he most prominent new atheists slide with ease into the most virulent racism imaginable. […]

Harris engages in a nuanced version of the same racism which his predecessors in scientific racism practiced in their discussion of the blanket characteristics of “Negroes”. […]

[Harris is in a] class with the worst proponents of scientific racism of the 20th century – including those who helped provide scientific justification for the horrors of European fascism.

That certainly doesn’t sound like an effort to maintain neutrality on Harris’ personal view of race, to merely criticize his support for “racist policies”. If such was Hussain’s intended message, then he failed rather spectacularly in communicating it.

In point of fact, I agree with Hussain and Greenwald that racism directed at Muslims is a very real problem, and that it really does lurk in the hearts of a distressingly large number of critics of Islam. (Harris agrees, too.) As Hussain rightly notes, the fact that Islam is not a race is irrelevant. It happens to be the case that most Muslims aren’t of European descent; and for most white supremacists, that’s enough.

The point here isn’t that it’s impossible to oppose Islam for bad reasons, including hideously racist ones. It’s that there may be good reasons, or bad but non-racist ones, to oppose Islam as well. In the case of Harris, we have no reason to think that any race- or skin-color-specific bias is responsible for his stance on Islam. All the undistorted evidence Hussain cites is only relevant to charges [B]-[F] in my above list. This is perhaps why Greenwald, who followed up with a much more measured article, sets the race issue aside before proceeding to make his case against Harris.

Xenophobia?

Following Greenwald, let’s momentarily bracket race. Is there any cause to be concerned more generally that the tone or content of criticism of Islam may be based in some latent fear of the foreign, the unknown?

Not in all cases, no. Plenty of critics of Islam have all too intimate and first-hand an understanding of the more oppressive and destructive elements of Islamic tradition.

But in some cases? In many cases? Perhaps even, to some extent, in Harris’ case, or in mine?

Sure.

I’m just trying to be honest and open here, and do a little soul-searching. I’m trying to understand where writers like Greenwald and Hussain are coming from. I’m trying to extract my own lessons from their concerns, even if I disagree strongly with their chosen methods and conclusions.

I can’t 100% dismiss out of hand the idea that part of the explanation for the degree and nature of our aversion to Islam really is its unfamiliarity. That’s just human psychology: When apparent dangers are weird and foreign and agenty, we’re more attentive to them, and we respond to them more quickly, strongly, and decisively. I am woefully ignorant of what day-to-day life is like nearly everywhere in the world, and no matter how much I try to understand what it’s like to be a Muslim in different societal or geographic settings, I’ll never bridge the gap completely. And that ignorance will inevitably color my judgments and priorities to some extent. I hate it, but it’s true.

Although on introspection I detect no traces of ethnic animus or cultural bias in my own head — if I did, I’d have already rooted it out, to the best of my ability — I can’t totally rule out the possibility that some latent aversion to the general Otherness of Islam is having some effect on the salience I psychologically assign to apparent threats from militant Islamism. Being biased doesn’t feel a particular way. Particularly given that we’re hypothesizing small, cumulative errors in judgment (‘micro-xenophobia’), not some overarching, horns-and-trumpets Totalitarian World-View. Everyone on the planet succumbs to small biases of that sort, to unconscious overreliance on uneducated intuitions and overgeneralized schemas.

And to say that these sorts of errors are common, and are very difficult to combat, is in no way to excuse them. I’m not admitting the possibility so that I can then be complacent about it. If I am in fact systematically biased, then I could cause some real damage without even realizing it. It’s my responsibility as a human being to very carefully and rigorously test whether (or to what extent) I am making errors of this sort.

… But the coin has two sides.

It’s just as possible that the biased ones are the people whose criticisms have been quieted by their experience with the positive elements of Islamic tradition. It’s just as possible that generally valuable heuristics like ‘be culturally tolerant’ are resulting in a destructive pro-Islam bias (‘micro-relativism’?). It’s just as possible that small (or large) attentional and inferential errors are coloring the views of Islam’s defenders, making them ignore or underestimate the risks Harris is talking about. Benevolent racism is just as real as malevolent racism.

The take-away message isn’t that one side or the other is certainly wrong, just because bias or bad faith could account for some of the claims made by either side. It’s worthwhile to set aside some time to sit quietly, to try and really probe your reasons for what you believe, see whether they are as strong as you thought, place yourself in the other side’s shoes for a time. But a general skepticism or intellectual despair can’t rationally follow from that. Perhaps we’re all biased, albeit in different directions; but, given how high the stakes are, we still have to talk about these things, and do our best to become more reasonable.

Importantly, one thing we can’t automatically take away from a discovery that some person is being irrational or bigoted, is the conclusion that that person’s arguments or conclusions are mistaken. Someone’s reasoning can be flawless even if the ultimate psychological origins for his belief are ridiculous. And, for that matter, purity of heart is no guarantor of accuracy!

It’s not good enough to feel righteous. It’s not even good enough to be righteous, or have the best of intentions. We have to put in the extra hard work of becoming right. So, with that moment of reflection behind us, we must return with all the more urgency to determining the relationships between charges of ‘racism’, ‘Islamophobia’, ‘militarism’, and so on.

Islamophobia?

In “Sam Harris, the New Atheists, and anti-Muslim animus”, Greenwald writes:

Perhaps the most repellent claim Harris made to me was that Islamophobia is fictitious and non-existent, “a term of propaganda designed to protect Islam from the forces of secularism by conflating all criticism of it with racism and xenophobia”. How anyone can observe post-9/11 political discourse in the west and believe this is truly mystifying. The meaning of “Islamophobia” is every bit as clear as “anti-semitism” or “racism” or “sexism” and all sorts of familiar, related concepts. It signifies (1) irrational condemnations of all members of a group or the group itself based on the bad acts of specific individuals in that group; (2) a disproportionate fixation on that group for sins committed at least to an equal extent by many other groups, especially one’s own; and/or (3) sweeping claims about the members of that group unjustified by their actual individual acts and beliefs. I believe all of those definitions fit Harris quite well[.]

The definition Greenwald constructs here seems rather ad-hoc, indeed tailor-made to his criticisms of Harris. It is not the ordinary definition of “Islamophobia”; its parallelism with sexism, anti-Semitism, homophobia, and clinical phobias is unusually tenuous; and it certainly isn’t the definition Harris had in mind when he criticized the term. Greenwald’s clause (3) is uselessly vague: if I made sweeping and unjustified positive claims about Muslims, that would surely not make me an Islamophobe! Adding his clauses (1) and (2) helps, but the focus on a subminority’s “sins” or “bad acts” is a complete red herring; if no Muslims had ever done anything truly wrong, Islamophobia would still be possible.

Let’s attempt a more to-the-point and generally applicable definition. If I’d never seen the word before, I’d probably expect “Islamophobia” to mean an unreasonable, pathological fear or hatred of Islam. And it’s often used that way. But it’s also used to mean an unreasonable, pathological fear or hatred of Muslims — as Greenwald puts it, “irrational anti-Muslim animus”. (For a historical perspective, see López 2010.)

Already, this duality raises a serious problem: Writers like Harris happily identify as anti-Islam, but strongly deny being anti-Muslim. If “Islamophobia” is used to conceal leaps between criticisms of Islam (as an ideology or cultural institution) and personal attacks on Muslims, then it will make inferences between [B] and [C] in my list above seem deceptively easy.

The best summary I’ve seen of potential problems with the term “Islamophobia” comes from Robin Richardson, a seasoned promoter of multiculturalism and education equality. He writes:

The disadvantages of the term Islamophobia are significant. Some of them are primarily about the echoes implicit in the concept of phobia. Others are about the implications of the term Islam. For convenience, they can be itemised as follows.

1. Medically, phobia implies a severe mental illness of a kind that affects only a tiny minority of people. Whatever else anxiety about Muslims may be, it is not merely a mental illness and does not merely involve a small number of people.

2. To accuse someone of being insane or irrational is to be abusive and, not surprisingly, to make them defensive and defiant. Reflective dialogue with them is then all but impossible.

3. To label someone with whom you disagree as irrational or insane is to absolve yourself of the responsibility of trying to understand, both intellectually and with empathy, why they think and act as they do, and of seeking through engagement and argument to modify their perceptions and understandings. […]

7. The term is inappropriate for describing opinions that are basically anti-religion as distinct from anti-Islam. ‘I am an Islamophobe,’ wrote the journalist Polly Toynbee in reaction to the Runnymede 1997 report, adding ‘… I am also a Christophobe. If Christianity were not such a spent force in this country, if it were powerful and dominant as it once was, it would still be every bit as damaging as Islam is in those theocratic states in its thrall… If I lived in Israel, I’d feel the same way about Judaism’.

8. The key phenomenon to be addressed is arguably anti-Muslim hostility, namely hostility towards an ethno-religious identity within western countries (including Russia), rather than hostility towards the tenets or practices of a worldwide religion. The 1997 Runnymede definition of Islamophobia was ‘a shorthand way of referring to dread or hatred of Islam – and, therefore, to fear or dislike of all or most Muslims’. In retrospect, it would have been as accurate, or arguably indeed more accurate, to say ‘a shorthand way of referring to fear or dislike of all or most Muslims – and, therefore, dread or hatred of Islam’.

Crucially, Harris isn’t claiming that there’s no such thing as anti-Muslim bigotry. He isn’t even claiming that no one criticizes Islam for bigoted reasons. Instead, his reasons for rejecting “Islamophobia” are:

Apologists for Islam have even sought to defend their faith from criticism by inventing a psychological disorder known as “Islamophobia.” My friend Ayaan Hirsi Ali is said to be suffering from it. Though she was circumcised as a girl by religious barbarians (as 98 percent of Somali girls still are)[,] has been in constant flight from theocrats ever since, and must retain a bodyguard everywhere she goes, even her criticism of Islam is viewed as a form of “bigotry” and “racism” by many “moderate” Muslims. And yet, moderate Muslims should be the first to observe how obscene Muslim bullying is—and they should be the first to defend the right of public intellectuals, cartoonists, and novelists to criticize the faith.

There is no such thing as Islamophobia. Bigotry and racism exist, of course—and they are evils that all well-intentioned people must oppose. And prejudice against Muslims or Arabs, purely because of the accident of their birth, is despicable. But like all religions, Islam is a system of ideas and practices. And it is not a form of bigotry or racism to observe that the specific tenets of the faith pose a special threat to civil society.

These are identical to Richardson’s concerns 1 and 8. Harris objects to rhetorical attempts to blur the lines between attacks on Islam and attacks on Muslims, particularly without clear arguments establishing this link.

Moreover, he objects to dismissing all extreme criticism of Islam using the idiom of clinical phobias, because he doesn’t think extreme criticism of Islam is always unreasonable, much less radically unreasonable. If harsh critiques of Islam are not deranged across the board, then demonstrating [D] ‘His concerns about Islam are exaggerated’ will not suffice for demonstrating [C] ‘He has an intensely irrational fear and hatred of Islam’, independent of the fact that neither establishes [B] ‘He has an intensely irrational fear and hatred of Muslims’.

Greenwald says that he deems Harris “Islamophobic”, not because Harris criticizes Islam, but because Harris criticizes Islam more than he criticizes other religions. But he gives no argument for why an anti-religious writer should deem all religions equally bad. It would be amazing if religions, in all their diversity, happened to pose equivalent risks. And neither racism nor xenophobia can explain the fact that Harris opposes Islam so much more strongly than he opposes far less familiar religions, like Shinto or Jainism. As Harris puts it,

At this point in human history, Islam simply is different from other faiths. The challenge we all face, Muslim and non-Muslim alike, is to find the most benign and practical ways of mitigating these differences and of changing this religion for the better.

Ockham’s Razor suggests that we at least entertain the idea that Harris is just telling the truth. He’s unusually critical of Islam because his exegetical, psychological, and geopolitical assessment of the doctrines, practices, and values associated with contemporary Islam is that they’re unusually harmful to human well-being. He could think all that, and be wrong, without ever once succumbing to a secret prejudice against Muslims.

There remains the large dialectical onus of showing that Harris’ most severe criticisms of Islam are all false, and the far larger onus of showing that they are, each and every one, so wildly irrational as to rival sexism, homophobia, or clinical phobias. If these burdens can’t all be met, then resorting to immediate name-calling, to accusations of bigotry or malice, will remain profoundly irresponsible.

The fact that there are cases where criticisms of Islam are manifestly ridiculous, without the slightest basis in scripture, tradition, or contemporary practice, does not change the fact that “Islamophobia” is rarely reserved for open-and-shut cases. The accusation is even employed as a replacement for substantive rebuttals, as though the very existence of the word constituted a reason to dismiss the critic of Islam!

If there’s one thing contemporary political discourse does not need, it’s a greater abundance of slurs and buzzwords for efficiently condemning or pigeonholing one’s ideological opponents. As such, although I’m happy to grant that Islamophobia exists in most of the senses indicated above, I am not persuaded that the word “Islamophobia” is ever the optimal way to point out irrational anti-Muslim or anti-Islam sentiment.

Jingoism?

I’ve focused on “Islamophobia”, but I doubt that’s the real issue for Greenwald or Hussain. Instead, I gather that their main objection is to Harris’ apparent defenses of U.S. foreign policy.

Would Greenwald and Hussain consider it a positive development if Harris demonstrated his lack of bias by equally strongly endorsing a variety of other U.S. military campaigns that have no relation to the Muslim world? Surely not. Greenwald’s complaint is not that Harris is inconsistently bellicose or pro-administration; it’s that he’s bellicose or pro-administration at all. Likewise, for Hussain to fixate on whether policies like war or torture are “racist” is to profoundly misunderstand the strength of his own case. Even if they weren’t racist, they could still be grotesque atrocities.

In my comments, Hussain commended biologist and antireligious activist P.Z. Myers for criticizing Islam without endorsing violence. (Greenwald has also cited Myers, with wary approval.) But Myers claims to “despise Islam as much as Harris does” (!). He writes:

I would still say that Islam as a religion is nastier and more barbaric than, say, Anglicanism. The Anglicans do not have as a point of doctrine that it is commendable to order the execution of writers or webcomic artists, nor that a reasonable punishment for adultery is to stone the woman to death. That is not islamophobia: that is recognizing the primitive and cruel realities of a particularly vile religion, in the same way that we can condemn Catholicism for its evil policies towards women and its sheltering of pedophile priests. We can place various cults on a relatively objective scale of repugnance for their attitudes towards human rights, education, equality, honesty, etc., and on civil liberties, you know, that stuff we liberals are supposed to care about, Islam as a whole is damnably bad.

It is not islamophobia to recognize reality.

If we admit that Myers’ view of Islam is not manifestly absurd or bigoted, then we must conclude that the entire discussion of racism, xenophobia, and Islamophobia was a red herring. It is Harris’ pro-U.S., pro-Israel militarism that is the real issue.

It doesn’t take nationalism, imperialism, sadism, or white supremacism for two otherwise reasonable people to disagree as strongly as Greenwald and Harris do. Given how messy and complicated religious psychology and sociology are, different data sets, different heuristics for assessing the data, and different background theories are quite sufficient.

The simplest explanation for Harris’ more “unsettling” (as he puts it) views is that he…

  • (a) … thinks religious doctrines often have a strong influence on human behavior. E.g.:

Many peoples have been conquered by foreign powers or otherwise mistreated and show no propensity for the type of violence that is commonplace among Muslims. Where are the Tibetan Buddhist suicide bombers? The Tibetans have suffered an occupation every bit as oppressive as any ever imposed on a Muslim country. At least one million Tibetans have died as a result, and their culture has been systematically eradicated. Even their language has been taken from them. Recently, they have begun to practice self-immolation in protest. The difference between self-immolation and blowing oneself up in a crowd of children, or at the entrance to a hospital, is impossible to overstate, and reveals a great difference in moral attitude between Vajrayana Buddhism and Islam.[…] My point, of course, is that beliefs matter.

  • (b) … thinks Islam has especially violent doctrines.
  • (c) … thinks that if Islam is a significant source of violence, then the best way to respond is sometimes militaristic.

Greenwald strongly rejects (b), claiming that singling out Islam for special criticism is outright bigoted. He may also doubt (a), inasmuch as he thinks that militant Islamism is fully explicable as a response to material aggression, oppression, and exploitation. Myers, on the other hand, grants (a) and (b) but strongly rejects (c). In all these cases, rational disagreement is possible, and civil discussion may lead to genuine progress in consensus-building.

Accusing Harris of harboring a special anti-Muslim bias would be a useful tactic for discrediting his policy analysis overall. But I think Greenwald and Harris are both arguing in good faith. Why, then, has Greenwald neglected such a simple explanation for Harris’ stance? Unlike Hussain, Greenwald isn’t a sloppy or inattentive reader of Harris.

My hypothesis is that Greenwald is succumbing to the reverse halo effect. It’s hard to model other agents, and particularly hard to imagine reasonable people coming to conclusions radically unlike our own. When we find these conclusions especially odious, it’s often easiest to imagine a simple, overarching perversion that infects every aspect of the other person’s psyche. Certainly it’s easier than admitting that a person can be radically mistaken on a variety of issues without being a fool or a monster — that, here as elsewhere, people are complicated.

As more evidence of human complexity, I’d note that although Greenwald paints a picture of Harris as a kneejerk supporter of Israel and of U.S. militarism, it is Greenwald, and not Harris, who thought that the Iraq War was a good idea at the time. And while Harris has defended Israel on a number of occasions, he has also written:

As a secularist and a nonbeliever—and as a Jew—I find the idea of a Jewish state obnoxious.

and:

Judaism is as intrinsically divisive, as ridiculous in its literalism, and as at odds with the civilizing insights of modernity as any other religion. Jewish settlers, by exercising their ‘freedom of belief’ on contested land, are now one of the principal obstacles to peace in the Middle East. They will be a direct cause of war between Islam and the West should one ever erupt over the Israeli-Palestinian conflict.

Perhaps his views are quite off-base. But they are not cartoonish, and he has argued for them. His opponents would make much more progress if they spent as much time on rebuttals as they currently do on caricatures.

The innumerable sins of the United States may be relevant to the pragmatics of (c), but recognizing these sins should not automatically commit us to dismissing (a) and (b). Likewise, writes Harris:

[N]othing about honestly discussing the doctrine of Islam requires that a person not notice all that might be wrong with U.S. foreign policy, capitalism, the vestiges of empire, or anything else that may be contributing to our ongoing conflicts in the Muslim world.

There are lots of ways to reject Harris’ doctrine (c). Myers makes a pragmatic argument (improving lives, not destroying them, mitigates dogmatism) and, I gather, a principled one (pacifism is the most defensible ethos). Greenwald might add that who we’re relying on to prosecute the war makes a vast difference — that enhancing the power and authority of the U.S. would have more costs and risks than Islam ever did, even if Islamic extremism were a serious threat.

Those aren’t utterly crazy positions, and neither is Harris’. I can say that, and endorse civil open discussion, even knowing that whichever side is the wrong one is very, very wrong — and that the future of human happiness, liberty, and peace depends in large part on our getting this right.

It is precisely because the question is so important that we must not allow public disagreement over the answer to degenerate into banal mud-slinging. It is precisely because our biases — be they micro-xenophobia, micro-relativism, or the halo effect — threaten to vitiate our reasoning that we must put our all into practicing self-criticism, open-mindedness, and level-headed discourse. And it is precisely because our intellectual opponents, if wrong, threaten to do so much harm, that we must work every day to come to better understand them, so that we can actually begin to change minds.

It is not an easy task, but the need is great. If we’re serious about the underlying problems, and not just about scoring points in verbal debates about them, then there is no other way.

[UPDATE, April 11: Hussain and I appeared with human rights advocate Qasim Rashid and Center for Inquiry president Ronald Lindsay on the Huffington Post Live to discuss whether the recent attacks on Harris are overblown. Click here to watch.]

__________________________________________________
Further reading
Greenwald, Glenn (2013). “Murtaza Hussain replies to Harris and his defenders”. GGSideDocs.
Greenwald, Glenn (2013). “The racism that fuels the ‘war on terror’”. The Guardian.
Harris, Sam (2013). “Response to Controversy”. Sam Harris Blog.
Harris, Sam (2012). “Wrestling the Troll”. Sam Harris Blog.
Myers, P.Z. (2013). “Both wrong, both right”. Pharyngula.
Richardson, Robin (2009). “Islamophobia or anti-Muslim racism — or what?” Insted.

What can we reasonably concede to unreason?

This post first appeared on the Secular Alliance at Indiana University blog.

In October, SAIU members headed up to Indianapolis for the Center for Inquiry’s “Defending Science: Challenges and Strategies” workshop. Massimo Pigliucci and Julia Galef, co-hosts of the podcast Rationally Speaking, spoke about natural deficits in reasoning, while Jason Rodriguez and John Shook focused on deliberate attempts to restrict scientific inquiry.

Julia Galef drew our attention to the common assumption that being rational means abandoning all intuition and emotion, an assumption she dismissed as a flimsy Hollywood straw man, or “Straw Vulcan”. True rationality, Julia suggested, is about the skillful integration of intuitive and deliberative thought. As she noted in a similar talk at the Singularity Summit, these skills demand constant cultivation and vigilance. In their absence, we all predictably fall victim to an array of cognitive biases.

To that end, Galef spoke of suites of indispensable “rationality skills”:

  • Know when to override an intuitive judgment with a reasoned one. Recognize cases where your intuition reliably fails, but also cases where intuition tends to perform better than reason.
  • Learn how to query your intuitive brain. For instance, to gauge how you really feel about a possibility, visualize it concretely, and perform thought experiments to test how different parameters and framing effects are influencing you.
  • Persuade your intuitive system of what your reason already knows. For example: Anna Salamon knew intellectually that wire-guided sky jumps are safe, but was having trouble psyching herself up. So she made her knowledge of statistics concrete, imagining thousands of people jumping before her eyes. This helped trick her affective response into better aligning with her factual knowledge.

Massimo Pigliucci’s talk, “A Very Short Course in Intellectual Self-Defense”, was in a similar vein. Pigliucci drew our attention to common formal and informal fallacies, and to the limits of deductive, inductive, and mathematical thought. Dissenting from Thomas Huxley’s view that ordinary reasoning is a great deal like science, Pigliucci argued that science is cognitively unnatural. This is why untrained reasoners routinely fail to properly amass and evaluate data.

While it’s certainly important to keep in mind how much hard work empirical rigor demands, I think we should retain a qualified version of Huxley’s view. It’s worth emphasizing that careful thought is not the exclusive property of professional academics, that the basic assumptions of science are refined versions of many of the intuitions we use in navigating our everyday environments. Science’s methods are rarefied, but not exotic or parochial. If we forget this, we risk giving too much credence to presuppositionalist apologetics.

Next, Jason Rodriguez discussed the tactics and goals of science organizations seeking to appease, work with, or reach out to the religious. Surveying a number of different views on the creation-evolution debate, Rodriguez questioned when it is more valuable to attack religious doctrines head-on, and when it is more productive to avoid conflict or make concessions.

This led in to John Shook’s vigorous talk, “Science Must Never Compromise With Religion, No Matter the Metaphysical or Theological Temptations”, and a follow-up Rationally Speaking podcast with Galef and Pigliucci. As you probably guessed, it focused on attacking metaphysicians and theologians who seek to limit the scope or undermine the credibility of scientific inquiry. Shook’s basic concern was that intellectuals are undermining the authority of science when they deem some facts ‘scientific’ and others ‘unscientific’. This puts undue constraints on scientific practice. Moreover, it gives undue legitimacy to those philosophical and religious thinkers who think abstract thought or divine revelation grant us access to a special domain of Hidden Truths.

Shook’s strongest argument was against attempts to restrict science to ‘the natural’. If we define ‘Nature’ in terms of what is scientifically knowable, then this is an empty and useless constraint. But defining the natural instead as the physical, or the spatiotemporal, or the unmiraculous, deprives us of any principled reason to call our research programs ‘methodologically naturalistic’. We could imagine acquiring good empirical evidence for magic, for miracles, even for causes beyond our universe. So science’s skepticism about such phenomena is a powerful empirical conclusion. It is not an unargued assumption or prejudice on the part of scientists.

Shook also argued that metaphysics does not provide a special, unscientific source of knowledge; the claims of metaphysicians are pure and abject speculation. I found this part of the talk puzzling. Metaphysics, as the study of the basic features of reality, does not seem radically divorced from theoretical physics and mathematics, which make similar claims to expand at least our pool of conditional knowledge, knowledge of the implications of various models. Yet Shook argued, not for embracing metaphysics as a scientific field, but for dismissing it as fruitless hand-waving.

Perhaps the confusion stemmed from a rival conception of ‘metaphysics’, not as a specific academic field, but as the general practice of drawing firm conclusions about ultimate reality from introspection alone — what some might call ‘armchair philosophy’ or ‘neoscholasticism’. Philosophers of all fields — and, for that matter, scientists — would do well to more fully internalize the dangers of excessive armchair speculation. But the criticism is only useful if it is carefully aimed. If we fixate on ‘metaphysics’ and ‘theology’ as the sole targets of our opprobrium, we risk neglecting the same arrogance in other guises, while maligning useful exploration into the contents, bases, and consequences of our conceptual frameworks. And if we restrict knowledge to science, we risk not only delegitimizing fields like logic and mathematics, but also putting undue constraints on science itself. For picking out a special domain of purported facts as ‘metaphysical’, and therefore unscientific, has exactly the same risks as picking out a special domain as ‘non-natural’ or ‘supernatural’.

To defend science effectively, we have to pick our battles with care. This clearly holds true in public policy and education, where it is most useful in some cases to go for the throat, in other cases to make compromises and concessions. But it also applies to our own personal struggles to become more rational, where we must carefully weigh the costs of overriding our unreasoned intuitions, taking a balanced and long-term approach. And it also holds in disputes over the philosophical foundations and limits of scientific knowledge, where the cost of committing ourselves to unusual conceptions of ‘science’ or ‘knowledge’ or ‘metaphysics’ must be weighed against any argumentative and pedagogical benefits.

This workshop continues to stimulate my thought, and continues to fuel my drive to improve science education. The central insight the speakers shared was that the practices we group together as ‘science’ cannot be defended or promoted in a vacuum. We must bring to light the psychological and philosophical underpinnings of science, or we will risk losing sight of the real object of our hope and concern.