At long last, serious arguments that the risk of superintelligent AI has been overblown are gaining some traction. One particularly good article, “The Myth of Superhuman AI” by Kevin Kelly, has been making the rounds. Just a few days ago he was also on Sam Harris's podcast, speaking about the same issues. I think this potential shift is important because a great deal of intellectual resources has been devoted to what is, in the end, a hypothetical risk. Of course, getting people to contemplate existential threats orients the intellectual moral compass toward the correct pole, and thinkers like Nick Bostrom deserve serious credit for this, as well as credit for crafting very interesting and evocative thought experiments. But I agree with Kevin Kelly that the risk of this particular existential threat has been radically overblown, and, more importantly, I think the reasons why are themselves interesting and evocative.

First, some definitions. Superintelligence refers to some entity compared to which humans would seem like children, at minimum, or ants, at maximum. The runaway threat that people worry about is that something just a bit smarter than us might be able to build something smarter than itself, and so on, so that the intelligence ratchet from viewing humans as children to viewing humans as ants would occur exponentially quickly. While such an entity could possibly adopt a beneficent attitude toward humanity, since it's also incomprehensible and uncontrollable, its long-term goals would probably be orthogonal to ours in such a way that “eliminates our maps,” as it were. Regardless, even just having our fates so far outside of our control constitutes an existential threat, especially if we can't predict what the AI will do when we ask it for something (this is called the alignment problem).

Certainly it's an interesting hypothetical scenario. But not all existential threats are equal. Some are so vanishingly small in their probabilities (rogue planets, like in the film Melancholia) that it would be madness to devote a lot of time to worrying about them. And at this point my impression, from the tremors in the world wide web, is that the threat of superintelligent AI is being ranked by a significant number of thinkers in the same category as the threat of nuclear war (between superpowers). Here's Elon Musk, speaking at MIT back in 2014: “I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it's probably that.”

Consider that statement. The obvious answer to the question of the biggest existential threat, which people have been giving for decades, is the threat of nuclear annihilation. After all, the risk of a civilizational reset brought about by a global nuclear war is totally concrete and has almost happened on multiple occasions. The risk of AI, by contrast, is totally hypothetical, rests on a host of underlying and unproven assumptions, and has never even gotten close to happening. To deserve its title as the leading, or even a top-tier, existential threat, the arguments for the risk of superintelligent AI have to be incredibly strong.

Kevin Kelly in his article does a good job listing five pretty sensible objections. The first (wording is his):

1. Intelligence is not a single dimension, so “smarter than humans” is a meaningless concept.

While he lists five separate objections, I think most of them actually spring naturally from using a specific framework to think about intelligence.
Roughly speaking there are two frameworks: an evolutionary one and a deistic one. Note that I don't mean to imply that the deistic view is wrong merely because I've labeled it deistic; plenty of really wonderful thinkers have held this framework, and they aren't adopting it for trivial psychological reasons. Rather, it's deistic in that it assumes that intelligence is effectively a single variable (or dimension), with rocks at one end of the spectrum and god-like AI at the other. It assumes that omniscience, or something indistinguishable from it from the human perspective, is possible. I think a lot of the points Kevin Kelly makes stem from him eschewing the deistic understanding of intelligence in favor of the evolutionary one.

Which way is correct? The demarcation between the two frameworks begins with an image: a scale laid out in Nick Bostrom's book Superintelligence. This is the deistic frame in a nutshell: there's some continuum running from mice up to Einstein up to near-omniscient recursively self-improved AI. While Kelly says that there is no agreed-upon definition of general intelligence, I think there are actually two broad options. One is to define intelligence based on human intelligence, which can be captured (at least statistically) with IQ. The Intelligence Quotient holds up well across life, is heritable, and is predictive of all the things we associate intelligence with. Essentially, we can use the difference between low-intelligence people and high-intelligence people as a yardstick. But clearly rating a superintelligence using an IQ test is useless. We wouldn't use Einstein as a metric for a real superintelligence of the variety Bostrom is worried about any more than we'd use a mouse as a metric for Einstein. So clearly the danger isn't just from an AI with an exceptionally high IQ (it's not as if high-IQ people run the world anyway). Rather, the danger comes from the possibility of a runaway process of a learning algorithm that creates a god-like AI. To examine that we need a more abstract and highly generalizable notion of intelligence.

Such a definition is actually given by Legg and Hutter in their paper “Universal intelligence: a definition of machine intelligence.” Taking their point broadly, the intelligence of an agent is the sum of that agent's performance on all possible problems, weighted by the simplicity of those problems (simple problems are worth more). A superintelligent entity would then be something that scores extremely high on this scale. So universal intelligence is at least somewhat describable. Interestingly, Einstein scores pretty low on this metric. In fact, every human scores pretty low on this metric, because the space of all problems includes things that human beings are really bad at, like picking out the two pixels of identical color on a TV screen (and triads of three pixels, and so on). We can also define intelligence more loosely: given a particular goal, an intelligent agent learns whatever is necessary to achieve that goal with high probability (which might mean achieving other goals en route to the main one, and so on). A superintelligence should score far above humans on one or both of these metrics, effectively operating as omniscient and omnicompetent. So this scale, or metric, specifies the deistic framework for intelligence, with the assumption that intelligence is like a dial that can be turned up indefinitely.
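For those who want the formal flavor, the Legg–Hutter measure can be written compactly. This is my paraphrase of their definition, so take the notation as a sketch rather than gospel:

\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}

Here E is a space of computable environments (problems), K(\mu) is the Kolmogorov complexity of environment \mu (so simpler problems get exponentially more weight), and V_{\mu}^{\pi} is the expected performance of agent \pi in that environment. Turn \Upsilon up far enough and you have the god-like end of Bostrom's scale.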
In contrast, let's now introduce the evolutionary framework, which is what Kevin Kelly is using. In this framework, intelligence is really about adapting to some niche portion of the massive state space of all possible problems. I call this the “endless forms” framework, a phrase which comes from On the Origin of Species, and which I'll quote in full because it's such a wonderful closing paragraph:

It is interesting to contemplate a tangled bank, clothed with many plants of many kinds, with birds singing on the bushes, with various insects flitting about, and with worms crawling through the damp earth, and to reflect that these elaborately constructed forms, so different from each other, and dependent upon each other in so complex a manner, have all been produced by laws acting around us. These laws, taken in the largest sense, being Growth with reproduction; Inheritance which is almost implied by reproduction; Variability from the indirect and direct action of the conditions of life, and from use and disuse; a Ratio of Increase so high as to lead to a Struggle for Life, and as a consequence to Natural Selection, entailing Divergence of Character and the Extinction of less improved forms. Thus, from the war of nature, from famine and death, the most exalted object which we are capable of conceiving, namely, the production of the higher animals, directly follows. There is grandeur in this view of life, with its several powers, having been originally breathed by the Creator into a few forms or into one; and that, whilst this planet has gone circling on according to the fixed law of gravity, from so simple a beginning endless forms most beautiful and most wonderful have been, and are being evolved.

In the endless forms framework the different intelligences live together on a tangled bank. This is pretty much the way our artificial intelligences live together right now: one AI to fly the plane, another AI to filter spam from email, and so on.

While the endless forms framework describes the technology as it exists now, superintelligence, as we saw, seems at least definable or imaginable. But I think there's a good reason to believe its possibility is an illusion. The reason is instantiated in something called the “No Free Lunch” theorem. From the abstract of Wolpert and Macready's paper: “A number of “no free lunch” (NFL) theorems are presented which establish that for any algorithm, any elevated performance over one class of problems is offset by performance over another class.” There are multiple versions of the proof in machine learning and in search and optimization, and it's pretty relevant to intelligence research. No Free Lunch means that no single model, or optimization algorithm, works best in every circumstance. However, I don't want to rely directly on the math or the specific proof itself, as these kinds of things are always limited in their universality by their assumptions. The “No Free Lunch” theorems are true, but they hold in the context of the entirety of possibility space (without weighting by likelihood). I think the theorem is instead useful as an example of a more generalizable idea: I would argue there is a conceptual version, a broader “No Free Lunch” principle, that applies nearly universally in evolution, intelligence, learning, engineering, and function. For instance, consider the absurdity of designing a “superorganism” that has higher fitness in all possible environments than the highest fitness of any extant organism in any of those environments.
While such an entity is definable, it is not constructible. It would need to be more temperature resistant than deep-vent bacteria, hardier than a tardigrade, able to double faster than bacteria, able to hunt better than a pack of orcas, and, well, you get the idea. While it might be definable in terms of a fitness optimization problem (just wave your hands about precisely how it is optimizing for fitness), there isn't any actual such thing as a universal organism that has high fitness in all possible, or even just likely-here-on-Earth, environments.

I think it's precisely this generalized No Free Lunch principle that is the deep root of what Kelly is talking about. He mentions that “AIs will follow the same engineering maxim that all things made or born must follow: You cannot optimize every dimension. You can only have tradeoffs.” On Harris's podcast Kelly goes further, saying that this engineer's maxim stems from a lack of resources or time. That's all certainly true. But it's also definitional: there's No Free Lunch because whenever you're adapting a neural network or an organism to perform in some way, you're implicitly handicapping it in some other way. No Free Lunch in biology implies there's no way to increase an organism's fitness with regard to all niches or contexts. In fact, it's probably the case that increasing fitness with regard to the current environment necessarily decreases fitness with regard to other environments. While environments on Earth aren't uniform in their likelihood, they are incredibly variable and they do change quickly. We can make a general claim: the more changeable an environment, and the more the likelihood is spread out over diverse environments, the more a generalized form of No Free Lunch is likely to apply.

I think it's this No Free Lunch principle that actually generates the endless forms of evolution. A mutation may increase the fitness of the organism with regard to the current environment, but it will often, perhaps always, decrease fitness with regard to other environments. Since the environment is always changing, this will forever warp and shift where organisms are located in the landscape of possible phenotypes. It's why there's no end state to evolution. Since there are never any free lunches, evolution is the one game nobody can stop playing.

Applying the same broad idea of No Free Lunch to intelligence indicates there will forever be endless forms of intelligence. Changes to the architecture of a neural network or a brain may help with the problems currently at hand, but they will necessarily decrease its capability of solving other types of problems. Overall, a tangled bank will emerge.

I recently saw a practical example of the No Free Lunch principle as it specifically concerns AI, in a talk given at Yhouse in New York City by Julian Togelius, who studies intelligence by designing AIs that play computer games. In one of his papers he and his coauthors looked at how different controllers (AIs) performed across a range of different computer games. Basically, no controller does well across the full selection of games (and the games aren't actually even that different). It's not as if they are trying to build superintelligence directly, but it's a telling example of how the No Free Lunch principle crops up even in what seem like small domains of expertise.
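To make the formal version of No Free Lunch a little more concrete, here is a deliberately tiny sketch in Python. It's a toy of my own, not anything from Wolpert and Macready's paper or from Togelius's work: it enumerates every possible problem on a four-point search space and shows that any fixed, non-repeating search order needs exactly the same number of queries, on average, to find the best point.

from itertools import product

# Every candidate "solution" the searcher can query.
DOMAIN = [0, 1, 2, 3]

def queries_to_find_best(order, f):
    """Queries a fixed search order needs before it first hits the maximum of f."""
    best = max(f.values())
    for i, x in enumerate(order, start=1):
        if f[x] == best:
            return i
    return len(DOMAIN)  # unreachable for a full permutation, kept for clarity

# Enumerate the entire problem space: all functions f: DOMAIN -> {0, 1}.
all_problems = [dict(zip(DOMAIN, values)) for values in product([0, 1], repeat=len(DOMAIN))]

# Three different fixed search strategies (each a permutation of the domain).
strategies = {
    "left-to-right": [0, 1, 2, 3],
    "right-to-left": [3, 2, 1, 0],
    "middle-out":    [1, 2, 0, 3],
}

for name, order in strategies.items():
    average = sum(queries_to_find_best(order, f) for f in all_problems) / len(all_problems)
    print(f"{name:>13}: average queries = {average:.4f}")

# All three print the same number (1.6875): averaged over the whole space of
# problems, no search strategy outperforms any other.

Whatever advantage an order has on one subset of problems is exactly cancelled on the complementary subset, which is the theorem in miniature.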
Even if there were a general intelligence that did okay across a very broad domain of problems, it would be outcompeted by specialists willing to sacrifice their abilities in some domains to maximize their abilities in others. In fact, this is precisely what's happening with artificial neural networks and human beings right now: it's the generalists who are being replaced.

So what about limiting the analysis to just the things that are relevant to what we normally consider intelligent in humans? Roughly, this is what Kevin's fourth point is trying to head off, by pointing out that the actual upper bounds on recognizable human-type intelligence are unknown. Perhaps it would be statistically surprising for humans to have the highest general intelligence over the slim domain of the state space of problems that matters to us. Then, for instance, an AI could in theory be more intelligent than humans along precisely the same dimensions of intelligence. However, it may be that the No Free Lunch principle itself specifies an upper bound on even human-range intelligence (say, up to 200 IQ). Consider the constellation of variables that underlie something like IQ: roughly speaking, variables like attention, memory, creativity, and so on. We can imagine trying to maximize all these variables on sliding scales. How surprising it would be to find they were totally uncoupled! In fact, nearly everything we know empirically about the psychologically and neurologically atypical implies that savant-like abilities in one area lead to severe detriments in others. Being a cognitive generalist in the human domain of problems (which humans are) almost certainly has costs we don't even know about, e.g., all the things computers can do that we can't. Greater cognitive generalists would then have greater costs, and so on.

Even if humans don't represent the apex of our intelligence niche, worrying about that isn't nearly the same thing as worrying about runaway superintelligences turning the world into a paper clip factory with little to no warning. A runaway self-improving superintelligent AI would be a massive case of getting a free lunch. It's like a perpetual motion machine or the ultimate organism: at first it seems conceivable, but there are good reasons to think the universe might not allow for it. A huge gap always remains between the things that are definable and the things that are, even with near-infinite resources and abilities, actualizable.

Recently an article popped up on arXiv which seems to be trying to preempt skepticism concerning the danger of superintelligence. It's called “On the Impossibility of Supersized Machines,” coauthored by the physicist Max Tegmark. Tegmark now runs a nonprofit devoted to studying and preventing existential risk, which, by the way, is not focused solely on the risk from AI. In the article, however, Tegmark and his coauthors parody those who protest against the idea of a superintelligence, giving seven glib reasons why supersized machines should be impossible. Obviously the article is having a bit of fun, but I think it's also somewhat revealing. Consider their response to Kelly's point that “Intelligence is not a single dimension, so 'smarter than humans' is a meaningless concept.” Their parody of this point is called “The Meaninglessness of Human-Level Largeness”:
...one quickly concludes that there are an infinite number of metrics that could be used to measure largeness, and that people who speak of “supersized machines” do not have a particular metric in mind. Surely, then, any future machine will be larger than humans on some metrics and smaller than humans on others, just as they are today.

Sorry for the convolutions here, but I think the protestation behind the parody is itself an error. Think of some space with a finite but astronomically large number of dimensions. If those dimensions don't constrain one another (as, presumably, with metrics of size), then it would truly be a fallacy to argue that machines couldn't be greater along any or all of them. But in the evolutionary framework the dimensions of intelligence do constrain each other, because there are no free lunches and specialization always has a cost.
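To put a cartoon on what “dimensions that constrain each other” means, here is a toy sketch. The budget, the task names, and the linear scoring rule are all made up for illustration, drawn neither from Kelly nor from the parody paper: if cognitive abilities draw on one shared budget, then for any generalist allocation there is always a specialist that beats it on each individual task.

# Toy model: abilities draw on a single fixed budget, so boosting one
# dimension necessarily drains the others.
BUDGET = 10.0
TASKS = ["attention", "memory", "creativity", "spatial"]

def score(allocation, task):
    """Performance on a task is just the capacity allocated to it (linear by assumption)."""
    return allocation[task]

# A generalist spreads the budget evenly; each specialist dumps it all into one task.
generalist = {t: BUDGET / len(TASKS) for t in TASKS}
specialists = {t: {u: (BUDGET if u == t else 0.0) for u in TASKS} for t in TASKS}

for task in TASKS:
    print(f"{task:>10}: generalist = {score(generalist, task):4.1f}, "
          f"best specialist = {score(specialists[task], task):4.1f}")

# On every single task some specialist wins; no allocation is best at
# everything, because the dimensions constrain one another.

The generalist is never the best at any particular thing; its advantage only shows up when the environment forces it to face every task at once, which is exactly the tangled-bank picture.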
Overall, thinking in evolutionary terms gives the idea of superintelligence a very different framing, and that matters: arguments for or against something don't really get anywhere if the frame is wrong to begin with. And at least to me, the evolutionary framework seems more likely to be true, more defensible given what we know, and just as predictive and interesting. There is grandeur in this view of life: that endless forms of intelligence most beautiful and most wonderful have been, and are being, evolved.

So next time Elon Musk gets asked what the greatest existential threat to humanity is, I hope he gives the obvious answer. Strangelets.
15 Comments
Ahh, very well argued. I agree with you more than I disagree, but I also think it’s useful to play Devil’s Advocate here.
Erik Hoel
7/5/2017 07:52:33 pm
Thanks for the detailed read Mike, and you've got some great comments.
Paul Chapman
7/6/2017 09:06:53 am
I'm new to the debate about super-AI, so I will almost certainly retread old ground. Thinking about your post over coffee, I figured I could write 10,000 words. I'll try instead to write the thousand most salient.
Paul Chapman
7/6/2017 09:19:58 am
(continued)
Sorry that your post got cut off Paul. I’ve contacted weebly to see if they can extend the comment length.
7/7/2017 03:19:51 am
A couple of random thoughts.
Erik Hoel
7/7/2017 09:56:37 am
Hey James!
7/9/2017 02:20:38 pm
I generally agree with most of what you have written here. However, a few minor things I want to discuss.
Thanks James.
Oliver
7/9/2017 08:21:58 pm
Great post Erik!
Thanks for pointing me to the Dennett paper Oliver, I hadn't read that one.
Oliver
7/10/2017 06:13:20 pm
Yes - and it seems a particularly common mistake is to think that mere improvements in speed or memory capacity (e.g. Moore's law) automatically translate into improved competence at solving real problems.

8/16/2017 11:30:37 pm

Enjoyed your article. Agree with dimensionality of intelligence. This dimensionality need not be just task based; it can also be time based. So, for example, one program can be the 5-minute chess champ, another the 40-minute champ. To human players the first might seem the better tactical player, the other the more strategic player.