At long last, serious arguments that the risk of superintelligent AI has been overblown are gaining some traction. One particularly good article, “The Myth of Superhuman AI” by Kevin Kelly, has been making the rounds. Just a few days ago, Kelly was also on Sam Harris’ podcast, speaking about the same issues.
I think this potential shift is important because a great deal of intellectual resources has been devoted to what is, in the end, a hypothetical risk. Of course, getting people to contemplate existential threats orients the intellectual moral compass toward the correct pole, and thinkers like Nick Bostrom deserve serious credit for this, as well as for crafting very interesting and evocative thought experiments. But I agree with Kevin Kelly that the risk of this particular existential threat has been radically overblown, and, more importantly, I think the reasons why are themselves interesting and evocative.
First, some definitions. Superintelligence refers to an entity compared to which humans would seem like children, at minimum, or ants, at maximum. The runaway threat people worry about is that something just a bit smarter than us might build something smarter than itself, and so on, so that the intelligence ratchet from viewing humans as children to viewing humans as ants would turn exponentially quickly. While such an entity could conceivably adopt a beneficent attitude toward humanity, since it would also be incomprehensible and uncontrollable, its long-term goals would probably be orthogonal to ours in a way that “eliminates our maps,” as it were. Regardless, merely having our fates fall so far outside our control constitutes an existential threat, especially if we can’t predict what the AI will do when we ask it for something (this is called the alignment problem).