nothing is mere

Category Archives: value

Library of Scott Alexandria

I’ve said before that my favorite blog — and the one that’s shifted my views in the most varied and consequential ways — is Scott Alexander’s Slate Star Codex. Scott has written a lot of good stuff, and it can be hard to know where to begin; so I’ve listed below what I think are …

Revenge of the Meat People!

Back in November, I argued (in Inhuman Altruism) that rationalists should try to reduce their meat consumption. Here, I’ll update that argument a bit and lay out some of my background assumptions. I was surprised at the time by the popularity of responses on LessWrong like Manfred’s: “Unfortunately for cows, I think there is an approximately 0% chance …

Bostrom on AI deception

Oxford philosopher Nick Bostrom has argued, in “The Superintelligent Will,” that advanced AIs are likely to diverge in their terminal goals (i.e., their ultimate decision-making criteria), but converge in some of their instrumental goals (i.e., the policies and plans they expect to indirectly further their terminal goals). An arbitrary superintelligent AI would be mostly unpredictable, except to the extent …

Loving the merely physical

This is my submission to Sam Harris’ Moral Landscape challenge: “Anyone who believes that my case for a scientific understanding of morality is mistaken is invited to prove it in under 1,000 words. (You must address the central argument of the book—not peripheral issues.)” Though I’ve mentioned before that I’m sympathetic to Harris’ argument, I’m not …

In defense of actually doing stuff

Most good people are kind in an ordinary way, when the intensity of human suffering in the world today calls for heroic kindness. I’ve seen ordinary kindness criticized as “pretending to try”. We go through the motions of humanism, but without significantly inconveniencing ourselves, without straying from our established habits, without violating societal expectations. It’s not that …

The seed is not the superintelligence

This is the conclusion of a LessWrong post, following The AI Knows, But Doesn’t Care. If an artificial intelligence is smart enough to be dangerous to people, we’d intuitively expect it to be smart enough to know how to make itself safe for people. But that doesn’t mean all smart AIs are safe. To turn that capacity into …

The AI knows, but doesn’t care

This is the first half of a LessWrong post. For background material, see A Non-Technical Introduction to AI Risk and Truly Part of You. I summon a superintelligence, calling out: ‘I wish for my values to be fulfilled!’ The results fall short of pleasant. Gnashing my teeth in a heap of ashes, I wail: Is …

A non-technical introduction to AI risk

In the summer of 2008, experts attending the Global Catastrophic Risk Conference assigned a 5% probability to the human species’ going extinct due to “superintelligent AI” by the year 2100. New organizations, like the Centre for the Study of Existential Risk and the Machine Intelligence Research Institute, are springing up to face the challenge of …

God is no libertarian

So the world was made by a perfectly benevolent, compassionate, loving God. Yet suffering exists. Why would a nice guy like God make a world filled with so much nastiness? All these wars, diseases, ichneumon wasps—what possible good purpose could they all serve? We want God to make our lives meaningful, purpose-driven. Yet we don’t …

Harry Potter and the Fuzzies of Altruism

This is a shorter version of a post for Miri Mogilevsky’s blog, Brute Reason. Effective Altruists are do-gooders with a special interest in researching the very best ways to do good, such as high-impact poverty reduction and existential risk reduction. A surprising number of them are also Harry Potter fans, probably owing to the success …
