At long last, serious arguments that the risk of superintelligent AI has been overblown are gaining some traction. One particularly good article, “The Myth of a Superhuman AI” by Kevin Kelly, has been making the rounds. Just a few days ago Kelly was also on Sam Harris’ podcast, speaking about the same issues.
I think this potential shift is important because a great deal of intellectual resources has been devoted to what is, in the end, a hypothetical risk. Of course, getting people to contemplate existential threats orients the intellectual moral compass toward the correct pole, and thinkers like Nick Bostrom deserve serious credit for this, as well as for crafting very interesting and evocative thought experiments. But I agree with Kevin Kelly that the risk of this particular existential threat has been radically overblown, and, more importantly, I think the reasons why are themselves interesting and evocative.
First, some definitions. Superintelligence refers to an entity compared to which humans would seem like children at best, or ants at worst. The runaway threat people worry about is that something just a bit smarter than us might be able to build something smarter than itself, and so on, so the intelligence ratchet from viewing humans as children to viewing humans as ants would turn exponentially quickly. While such an entity could conceivably adopt a beneficent attitude toward humanity, it would also be incomprehensible and uncontrollable, and its long-term goals would probably be orthogonal to ours in such a way that “eliminates our maps,” as it were. Regardless, merely having our fates so far outside our control constitutes an existential threat, especially if we can’t predict what the AI will do when we ask it for something (this is called the alignment problem).
My first post is inspired by the theoretical computer scientist and blogger Scott Aaronson, who recently blogged his criticisms of a theory I’ve been working on, called causal emergence. To see the simple nature of his error, skip down to “Isn’t causal emergence just an issue of normalization?”, although that section does assume you are familiar with some of the theory’s terminology. Since Scott’s criticisms reflected a misunderstanding of the theory, they prompted me to write this generalized explainer. Please note that this explainer is purposely non-technical, informal, and far from comprehensive. Its goal is to give interested parties a conceptual grasp of the theory, using relatively basic notions of causation and information.
What’s causal emergence?
It’s when the higher scale of a system has more information associated with its causal structure than the underlying lower scale does. Causal structure just refers to a set of causal relationships between some variables, such as states or mechanisms. Measuring causal emergence is like looking at the causal structure of a system through a camera (the theory): as you adjust the focus (examine different scales), the causal structure snaps into focus. Notably, it doesn’t have to be “in focus” at the lowest possible scale, the microscale. Why is this? In something approaching plain English: macrostates can be strongly coupled even while their underlying microstates are only weakly coupled. The goal of the theory is to search across scales until the scale at which variables (like elements or states) are most strongly causally coupled pops out.
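To make this concrete, here is a minimal sketch of comparing just two scales of a Markov chain using effective information (EI): the mutual information between a uniform (maximum-entropy) set of interventions on a system’s states and the resulting effects. The particular 4-state transition matrix and the grouping of microstates into macrostates below are my own illustrative assumptions, in the spirit of the toy examples used in the causal emergence literature, not examples taken from this post:

```python
import numpy as np

def effective_information(tpm):
    """EI of a Markov chain, in bits: mutual information between a uniform
    (maximum-entropy) intervention distribution over states and the
    resulting effect distribution."""
    effect = tpm.mean(axis=0)  # effect distribution under uniform interventions
    kl_rows = [
        sum(p * np.log2(p / effect[j]) for j, p in enumerate(row) if p > 0)
        for row in tpm
    ]
    return float(np.mean(kl_rows))

def coarse_grain(tpm, groups):
    """Build a macro TPM from a micro TPM: intervene uniformly over the
    microstates within each macrostate and sum the outgoing probability
    into each target macrostate."""
    m = len(groups)
    macro = np.zeros((m, m))
    for a, src in enumerate(groups):
        for b, dst in enumerate(groups):
            macro[a, b] = tpm[np.ix_(src, dst)].sum(axis=1).mean()
    return macro

# Micro scale: states {0, 1, 2} hop among themselves at random (weakly
# coupled microstates), while state {3} maps deterministically to itself.
micro = np.array([
    [1/3, 1/3, 1/3, 0],
    [1/3, 1/3, 1/3, 0],
    [1/3, 1/3, 1/3, 0],
    [0,   0,   0,   1],
])
# Macro scale: group the three noisy states into one macrostate.
macro = coarse_grain(micro, groups=[[0, 1, 2], [3]])

print(f"EI(micro) = {effective_information(micro):.3f} bits")  # ~0.811
print(f"EI(macro) = {effective_information(macro):.3f} bits")  # 1.000
```

Here the three noisy microstates collapse into a single deterministic macrostate, so the macro scale carries more effective information (1 bit) than the micro scale (about 0.81 bits): the camera comes into focus above the microscale, which is causal emergence in miniature.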