nothing is mere

Tag Archives: Eliezer Yudkowsky

The seed is not the superintelligence

This is the conclusion of a LessWrong post, following The AI Knows, But Doesn’t Care. If an artificial intelligence is smart enough to be dangerous to people, we’d intuitively expect it to be smart enough to know how to make itself safe for people. But that doesn’t mean all smart AIs are safe. To turn that capability into …

A non-technical introduction to AI risk

In the summer of 2008, experts attending the Global Catastrophic Risk Conference assigned a 5% probability to the human species’ going extinct due to “superintelligent AI” by the year 2100. New organizations, like the Centre for the Study of Existential Risk and the Machine Intelligence Research Institute, are springing up to face the challenge of …

Harry Potter and the Fuzzies of Altruism

This is a shorter version of a post for Miri Mogilevsky’s blog, Brute Reason. Effective Altruists are do-gooders with a special interest in researching the very best ways to do good, such as high-impact poverty reduction and existential risk reduction. A surprising number of them are also Harry Potter fans, probably owing to the success …