
Superintelligence is a free lunch, and there are no free lunches

7/5/2017

At long last, serious arguments that the risk of superintelligent AI has been overblown are gaining some traction. One particularly good article, “The Myth of Superhuman AI” by Kevin Kelly, has been making the rounds. Just a few days ago he was also on Sam Harris’ podcast, speaking about the same issues.

I think this potential shift is important because there’s been a great deal of intellectual resources devoted to what is, in the end, a hypothetical risk. Of course, getting people to contemplate existential threats is orientating the intellectual moral compass toward the correct pole, and thinkers like Nick Bostrom deserve serious credit for this, as well as credit for crafting very interesting and evocative thought experiments. But I agree with Kevin Kelly that the risk of this particular existential threat has been radically overblown, and, more importantly, I think the reasons why are themselves interesting and evocative.

First, some definitions. Superintelligence refers to some entity compared to which humans would seem like children, at minimum, or ants, at maximum. The runaway threat that people are worried about is that something just a bit smarter than us might be able to build something smarter than it, and so on, so the intelligence ratchet from viewing humans as children to viewing humans as ants would occur exponentially quickly. While such an entity could possibly adopt a beneficent attitude toward humanity, since it’s also incomprehensible and uncontrollable, its long-term goals would probably be orthogonal to ours in such a way that “eliminates our maps,” as it were. Regardless, even just having our fates so far outside of our control constitutes an existential threat, especially if we can’t predict what the AI will do when we ask it for something (this is called the alignment problem).

Certainly it's an interesting hypothetical scenario. But not all existential threats are equal. Some are so vanishingly small in their probabilities (rogue planets, like in the film Melancholia) that it would be madness to devote a lot of time to worrying about them. And at this point my impression from the tremors in the world wide web is that the threat of superintelligent AI is being ranked by a significant number of thinkers in the same category as the threat of nuclear war (between superpowers). Here’s Elon Musk, speaking at MIT back in 2014:
“I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that.”
Consider that statement. Because the obvious answer to the question of the biggest existential threat, which people have been giving for decades, is the threat of nuclear annihilation. After all, the risk of civilizational reset brought about by a global nuclear war is totally concrete and has almost happened on multiple occasions. Whereas the risk of AI is totally hypothetical, has a host of underlying and unproven assumptions, and has never even gotten close to happening.

To deserve its title as leading or even top-tier existential threat, the arguments for the risk of superintelligent AI have to be incredibly strong. Kevin Kelly in his article does a good job listing five pretty sensible objections (wording is his):
1. Intelligence is not a single dimension, so "smarter than humans" is a meaningless concept.
2. Humans do not have general purpose minds, and neither will AIs.
3. Emulation of human thinking in other media will be constrained by cost.
4. Dimensions of intelligence are not infinite.
5. Intelligences are only one factor in progress.
While he lists five separate objections, I think most actually spring naturally from using a specific framework to think about intelligence. Roughly speaking there are two frameworks: either an evolutionary one or a deistic one.

Note that I don’t mean to imply that the deistic view is wrong merely because I’ve labeled it deistic; plenty of really wonderful thinkers have had this framework, and they aren’t adopting it for trivial psychological reasons. Rather, it’s deistic in that it assumes that intelligence is effectively a single variable (or dimension) in which rocks lie at one end of the spectrum and god-like AI lies at the other end. It assumes that omniscience, or something indistinguishable from it from the human perspective, is possible.

I think a lot of the points Kevin Kelly makes stem from him eschewing the deistic understanding of intelligence in favor of the evolutionary one. Which is correct? The demarcation between the two frameworks begins with an image, a scale laid out in Nick Bostrom’s book Superintelligence:
[Figure: Bostrom’s scale of intelligence, running from a mouse up to Einstein and onward toward recursively self-improved AI]

This is the deistic frame in a nutshell: there's some continuum from mice up to Einstein up to near-omniscient recursively self-improved AI.

While Kelly says that there is no agreed-upon definition of general intelligence, I think there are actually two broad options. One is to define intelligence based on human intelligence, which can be captured (at least statistically) with IQ. The Intelligence Quotient holds up well across the lifespan, is heritable, and is predictive of all the things we associate with intelligence. Essentially, we can use the difference between low-intelligence people and high-intelligence people as a yardstick.

But clearly rating a superintelligence using an IQ test is useless. We wouldn’t use Einstein as a metric for a real superintelligence of the variety Bostrom is worried about any more than we’d use a mouse as a metric for Einstein. So clearly the danger isn’t just from an AI with an exceptionally high IQ (it’s not as if high-IQ people run the world anyway). Rather, the danger comes from the possibility of a runaway process, a learning algorithm that creates a god-like AI. To examine that we need a more abstract and highly generalizable notion of intelligence.

A definition of intelligence is actually given by Legg and Hutter in their paper “Universal intelligence: a definition of machine intelligence.” Taking their point broadly, the intelligence of an agent is the sum of the performance of that agent on all possible problems, weighted by the simplicity of those problems (simple problems are worth more). A superintelligent entity would then be something that scores extremely high on this scale.
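In symbols (paraphrasing Legg and Hutter's measure, and eliding most of their formalism), the universal intelligence of an agent $\pi$ is roughly:

$$\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}$$

where $E$ is the set of computable environments (problems), $K(\mu)$ is the Kolmogorov complexity of environment $\mu$ (so simpler problems carry exponentially more weight), and $V_{\mu}^{\pi}$ is the expected performance of agent $\pi$ in environment $\mu$.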

So universal intelligence is at least somewhat describable. Interestingly, Einstein scores pretty low on this metric. In fact, every human would score pretty low on this metric. This is because the space of all problems includes things that human beings are really bad at, like picking out the two pixels of the same color on a TV screen (and triads of three pixels, and so on).

We can also define intelligence broadly as: given a particular goal, the agent learns everything that’s necessary to achieve that goal with a high probability (which might mean achieving other goals en route to the main goal, and so on).
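One loose way to write that down (my own sketch of the informal definition above, not anything standard): fix a distribution $G$ over goals and score an agent $a$ by

$$I(a) \;=\; \mathbb{E}_{g \sim G}\!\left[\Pr\big(a \text{ achieves } g \text{ within its resource budget}\big)\right]$$

where the choice of $G$, that is, which problems count and how much, is doing most of the work, which is exactly the issue the rest of this post turns on.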


A superintelligence should score far above humans on one or both of these metrics, effectively operating as omniscient and omnicompetent. So this scale or metric specifies the deistic framework for intelligence, with the assumption that it is like a dial that can be turned up indefinitely.

In contrast, let’s now introduce the evolutionary framework, which is what Kevin Kelly is using. In this framework, intelligence is really about adapting to some niche portion of the massive state space of all possible problems. I call this the “endless forms” framework, a phrase that comes from On the Origin of Species, which I'll quote in full because it’s such a wonderful closing paragraph:
It is interesting to contemplate a tangled bank, clothed with many plants of many kinds, with birds singing on the bushes, with various insects flitting about, and with worms crawling through the damp earth, and to reflect that these elaborately constructed forms, so different from each other, and dependent upon each other in so complex a manner, have all been produced by laws acting around us. These laws, taken in the largest sense, being Growth with reproduction; Inheritance which is almost implied by reproduction; Variability from the indirect and direct action of the conditions of life, and from use and disuse; a Ratio of Increase so high as to lead to a Struggle for Life, and as a consequence to Natural Selection, entailing Divergence of Character and the Extinction of less improved forms. Thus, from the war of nature, from famine and death, the most exalted object which we are capable of conceiving, namely, the production of the higher animals, directly follows. There is grandeur in this view of life, with its several powers, having been originally breathed by the Creator into a few forms or into one; and that, whilst this planet has gone circling on according to the fixed law of gravity, from so simple a beginning endless forms most beautiful and most wonderful have been, and are being evolved.
In the endless forms framework the different intelligences live together on a tangled bank. This is pretty much the way our artificial intelligences live together right now: one AI to fly the plane, another AI to filter spam from email, and so on. But while the endless forms framework describes the technology as it exists now, as we saw, superintelligence seems at least definable or imaginable.

But I think there’s a good reason to believe its possibility is an illusion. The reason is instantiated in something called the “No Free Lunch” theorem. Taken from the abstract: “A number of “no free lunch” (NFL) theorems are presented which establish that for any algorithm, any elevated performance over one class of problems is offset by performance over another class.”
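The formal statement (this is roughly Theorem 1 of Wolpert and Macready's paper, in their notation): for any two search algorithms $a_1$ and $a_2$,

$$\sum_{f} P(d_m^y \mid f, m, a_1) \;=\; \sum_{f} P(d_m^y \mid f, m, a_2),$$

i.e., summed over all objective functions $f$, the probability of seeing any particular sequence of $m$ sampled values $d_m^y$ is identical no matter which algorithm you run.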


The proof, of which there are multiple versions across machine learning, search, and optimization, is pretty relevant to intelligence research. No Free Lunch means that no single model, or optimization algorithm, works best for every circumstance.
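Here is a toy way to see it (a sketch I'm adding for concreteness; the domain size, the two search orders, and the scoring rule are all arbitrary choices): average any two fixed, non-repeating search strategies over every possible objective function on a tiny domain, and their performance comes out exactly equal.

from itertools import product

X = range(4)              # a search space of 4 points
Y = range(3)              # possible objective values 0, 1, 2
BUDGET = 2                # each searcher may evaluate 2 points

order_a = [0, 1, 2, 3]    # searcher A probes points in this fixed order
order_b = [3, 1, 0, 2]    # searcher B uses a different fixed order

def best_found(order, f):
    # Best objective value seen after BUDGET evaluations of f.
    return max(f[x] for x in order[:BUDGET])

# Enumerate all 3^4 = 81 possible functions f: X -> Y.
all_functions = [dict(zip(X, values)) for values in product(Y, repeat=len(X))]

avg_a = sum(best_found(order_a, f) for f in all_functions) / len(all_functions)
avg_b = sum(best_found(order_b, f) for f in all_functions) / len(all_functions)

print(avg_a, avg_b)       # identical: averaged over all problems, neither searcher wins

Bias the average toward some subset of functions, though, and the tie can break, which is the "entirety of possibility space" caveat discussed next.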

However, I don’t want to rely directly on the math or the specific proof itself, as these kinds of things are always limited in their universality by their assumptions. The “No Free Lunch” theorems are true, but they hold in the context of the entirety of possibility space (without weighting by likelihood). I think the theorem is instead useful as an example of a more generalizable idea. I would argue there is a conceptual version, a broader “No Free Lunch” principle, that applies nearly universally in evolution, intelligence, learning, engineering, and function.


For instance, consider the absurdity of designing a “superorganism” that has higher fitness in all possible environments than the highest fitness of any extant organism in any of those environments. While such an entity is definable, it is not constructible. It would need to be more temperature resistant than deep-vent bacteria, hardier than a tardigrade, able to double faster than bacteria, able to hunt better than a pack of orcas, and, well, you get the idea. While it might be definable in terms of a fitness optimization problem (just wave your hands about precisely how it is optimizing for fitness), there isn’t any actual such thing as a universal organism with high fitness in all possible, or even just likely-here-on-Earth, environments.

I think it’s precisely this generalized No Free Lunch principle that is the deep root of what Kelly is talking about. He mentions that “AIs will follow the same engineering maxim that all things made or born must follow: You cannot optimize every dimension. You can only have tradeoffs.” On Harris’s podcast Kelly goes on to say that this engineering maxim stems from a lack of resources or time.

That’s all certainly true. However, just definitionally, there’s No Free Lunch because whenever you’re adapting a neural network or an organism to perform in some way, you’re implicitly handicapping it in some other way. No Free Lunch in biology implies there’s no way to increase an organism’s fitness with regard to all niches or contexts. In fact, it’s probably the case that increasing fitness with regard to the current environment necessarily decreases fitness with regard to other environments. While environments on Earth aren’t uniform in their likelihood, they are incredibly variable and they do change quickly. We can make a general claim: the more changeable an environment, and the more the likelihood is spread out over diverse environments, the more a generalized form of No Free Lunch is likely to apply.
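A minimal sketch of that fixed-budget logic (a toy model of my own, with made-up trait names and numbers, not anything from biology): give an organism one unit of "budget" to allocate across traits, let each environment reward a different trait, and any allocation that raises fitness in one environment lowers it in another.

# Toy model: a fixed trait budget forces fitness tradeoffs across environments.
ENV_WEIGHTS = {
    "deep_sea_vent": {"heat_tolerance": 1.0},   # this niche rewards heat tolerance
    "open_plains":   {"speed": 1.0},            # this niche rewards speed
}

def fitness(traits, env):
    weights = ENV_WEIGHTS[env]
    return sum(traits.get(t, 0.0) * w for t, w in weights.items())

BUDGET = 1.0
generalist = {"heat_tolerance": 0.5, "speed": 0.5}   # spreads the budget evenly
specialist = {"heat_tolerance": 1.0, "speed": 0.0}   # spends it all on heat

for name, organism in [("generalist", generalist), ("heat specialist", specialist)]:
    assert abs(sum(organism.values()) - BUDGET) < 1e-9   # same total budget for both
    print(name, fitness(organism, "deep_sea_vent"), fitness(organism, "open_plains"))

# The specialist wins at the vent (1.0 vs 0.5) but loses on the plains (0.0 vs 0.5):
# under a fixed budget, no single allocation is best everywhere.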


I think it’s this No Free Lunch principle that actually generates the endless forms of evolution. A mutation may increase the fitness of the organism with regard to the current environment, but it will often, perhaps always, decrease fitness with regard to other environments. Since the environment is always changing, this will forever cause a warping and shifting of where organisms are located in the landscape of possible phenotypes. It’s why there’s no end state to evolution. Since there are never any free lunches, evolution is the one game nobody can stop playing.

Applying the same broad idea of No Free Lunch to intelligence indicates there will forever be endless forms of intelligence. Changes to the neural architecture of a network or a brain may help with the current problems at hand, but this will necessarily decrease its capability of solving other types of problems. Overall, a tangled bank will emerge.

I recently saw a practical example of the No Free Lunch principle as specifically concerns AI, in a talk given at Yhouse in New York City by Julian Togelius, who studies intelligence by designing AIs that play computer games. Here’s a figure from a paper of his where they researched how different controllers (AIs) performed across a range of different computer games.
[Figure: how different AI controllers perform across a range of computer games, from a paper by Togelius and colleagues]
Basically, no controller does well across the full selection of games (and the games aren’t actually even that different). It’s not like they are trying directly to build superintelligence, but it’s a telling example of how the No Free Lunch principle crops up even in what seem like small domains of expertise.
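To spell out what the figure is claiming, here's the shape of the check (the controller names and scores below are placeholders I've made up, not the numbers from the paper): ask whether any single controller does at least as well as every other controller on every game.

# Hypothetical score matrix (controllers x games); purely illustrative.
scores = {
    "controller_A": [0.9, 0.2, 0.6],
    "controller_B": [0.4, 0.8, 0.3],
    "controller_C": [0.5, 0.5, 0.7],
}

def dominates(a, b):
    # True if controller a scores at least as well as b on every game.
    return all(x >= y for x, y in zip(scores[a], scores[b]))

universal = [c for c in scores if all(dominates(c, other) for other in scores)]
print(universal or "no controller dominates across every game")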

Even if there were a broad general intelligence that did okay across a very broad domain of problems, it would be outcompeted by specialists willing to sacrifice their abilities in some domains to maximize abilities in others. In fact, this is precisely what’s happening with artificial neural networks and human beings right now. It’s the generalists who are being replaced.

So then what about limiting the analysis to just the things that are relevant to what we normally consider intelligent in humans? Roughly, this is what Kelly’s fourth point is trying to head off, by pointing out that the actual upper bounds on recognizable human-type intelligence are unknown.


Perhaps it would be statistically surprising for humans to have the highest general intelligence over the slim domain of the state space of problems that matters to us. Then, for instance, an AI could in theory be more intelligent than humans along precisely the same dimensions of intelligence. However, it may be that the No Free Lunch principle itself specifies an upper bound on even human-range intelligence (say, up to 200 IQ). Consider the constellation of variables that underlie something like IQ. Roughly speaking, these are variables like attention, memory, creativity, etc. We can imagine trying to maximize all these variables on sliding scales. How surprising it would be to find they were totally uncoupled! In fact, nearly everything we know empirically about the psychologically and neurologically atypical implies that savant-like abilities in one area lead to severe detriments in others. Being a cognitive generalist in the human domain of problems (which humans are) almost certainly has costs we don't even know about, e.g., all the things computers can do that we can't. Greater cognitive generalists would then have greater costs, etc.

Even if humans don’t represent the apex of our intelligence niche, worrying about that isn’t nearly the same thing as worrying about runaway superintelligences turning the world into a paper clip factory with little to no warning. A runaway self-improving superintelligent AI would be a massive case of getting a free lunch. It’s like a perpetual motion machine or the ultimate organism: at first it seems conceivable, but there are good reasons to think the universe might not allow for it. A huge gap always remains between the things that are definable and things that are, even with near-infinite resources and abilities, actualizable.

Recently an article popped up on arXiv which seems to be trying to preempt skepticism concerning the danger of superintelligence. It’s called “On the Impossibility of Supersized Machines,” by the physicist Max Tegmark and his coauthors. Tegmark now runs a nonprofit devoted to studying and preventing existential risk, which, by the way, is not focused solely on the risk from AI. However, in the article, Tegmark and his coauthors parody those who protest against the idea of a superintelligence, giving seven glib reasons why supersized machines should be impossible.

Obviously the article is having a bit of fun, but I think it’s also somewhat revealing. Consider their response to Kelly’s point that “Intelligence is not a single dimension, so ‘smarter than humans’ is a meaningless concept.” Their parody of this point is called “The Meaninglessness of Human-Level Largeness”:
...one quickly concludes that there are an infinite number of metrics that could be used to measure largeness, and that people who speak of “supersized machines” do not have a particular metric in mind. Surely, then, any future machine will be larger than humans on some metrics and smaller than humans on others, just as they are today.
Sorry for the convolutions here, but I think the protestation behind the parody is itself an error. Think of some space with a finite but astronomically large number of dimensions. If those dimensions don’t constrain one another (as, presumably, with metrics of size), then it would truly be a fallacy to argue that machines couldn’t be greater along any or all of them. But in the evolutionary framework the dimensions of intelligence do constrain each other, because there are no free lunches and specialization always has a cost.

Overall, thinking in evolutionary terms gives the idea of superintelligence a very different framing, which is important when considering options. Arguments for or against something don’t really get anywhere if the frame is wrong to begin with. And at least to me, the evolutionary framework seems more likely to be true, more defensible given what we know, and equally predictive and interesting. There is grandeur in this view of life: that endless forms of intelligence most beautiful and most wonderful have been, and are being, evolved.


So next time Elon Musk gets asked what the greatest existential threat to humanity is, I hope he gives the obvious answer. Strangelets.
15 Comments
Mike
7/5/2017 06:45:48 pm

Ahh, very well argued. I agree with you more than I disagree, but I also think it’s useful to play Devil’s Advocate here.

First, I really like the evolutionary vs deistic distinction (although I might suggest ‘platonic’ instead of ‘deistic’). It does seem like an open question which framework ‘intelligence’ belongs to, not a foregone conclusion as Superintelligence implies.

That said, I would argue against your “Strong No Free Lunch Theorem” (SNFLT), as described here:

>However, just definitionally, there’s No Free Lunch because whenever you’re adapting a neural network or an organism to perform in some way you’re implicitly handicapping it in some other way. No Free Lunch in biology implies there’s no way to increase an organism’s fitness in regard to all niches or contexts. In fact, it’s probably the case that increasing fitness in regards to the current environment necessarily decreases fitness in regards to other environments. While environments on Earth aren’t uniform in their likelihood, they are incredibly variable and they do change quickly. We can make a general claim that the more changeable an environment, and the more the likelihood is spread out over diverse environments, the more a generalized form of No Free Lunch is likely to apply.
>I think it’s this No Free Lunch principle that actually generates the endless forms of evolution. A mutation may increase in the fitness of the organism with regard to the current environment, but it will often, perhaps always, decrease fitness in regard to other environments. Since the environment is always changing, this will forever cause a warping and shifting of where organisms are located in the landscape of possible phenotypes. It’s why there’s no end state to evolution. Since there are never any free lunches, evolution is the one game nobody can stop playing.

Empirically, sometimes there *is* a free lunch under certain conditions— selecting bacteria for antibiotic resistance, for instance, can sometimes select for intrinsically more efficient metabolic pathways. Another, more topically relevant, example here is genetic load. See e.g. https://www.edge.org/response-detail/26714 — in short, the average human genome has lots (~hundreds to ~thousands, depending on how you count them) of broken & damaged genes, variants that are highly maladaptive, that crop up based on the math of the DNA mutation base rates & strength of purifying selection. I would argue that many of these variants have literally no redeeming qualities, no nontrivial environmental context in which they would be more adaptive than the original, unmutated version. Some people, like Stephen Hsu, think we could get a surprisingly large Free Lunch (huge boost to human health & intelligence) by “spellchecking” these errors in our genome. http://infoproc.blogspot.com/2015/08/explain-it-to-me-like-im-five-years-old.html

>Applying the same argument to intelligence, the No Free Lunch principle indicates there will forever be endless forms of intelligence. Changes to the entity may help with the current problems at hand, but this will necessarily decrease its capability of solving other types of problems. A tangled bank will emerge.
…
>However, it may be that the No Free Lunch principle itself specifies an upper bound on even human-range intelligence (say, up to 200 IQ). Consider the constellation of variables that underlie something like IQ. Roughly speaking, these are variables like attention, memory, creativity, etc. We can imagine trying to maximize all these variables on sliding scales. How surprising would it be to find they were totally uncoupled! In fact, nearly everything we know empirically about the psychologically and neurologically atypical implies that savant-like abilities in one area lead to severe detriments in others.

This seems like an important piece of the argument. Some thoughts:

(1) I can’t deny that, empirically, there seem to be some sorts of tradeoffs between savant-like abilities in one area and deficits in others. But we also have many existence proofs of well-adjusted genius-level minds, people who could think circles around us mortals but were also fun at parties. John Von Neumann comes to mind: happy & well-adjusted, socially adept, and no disturbing foibles, but also perhaps the smartest person of the 20th century. Probably Feynman too.

If JVN had been software, and had access to his source code, I'd expect he would have turned into some form of superintelligence.

(2) You note that it would be unreasonable for desirable mental capacities to be uncoupled, and that there will probably be tradeoffs: boost one, hurt others. This seems plausible in some cases— but so does the opposite in some cases, that they’re *coupled and synergistic* (i.e. if we can improve working memory, we a

Erik Hoel
7/5/2017 07:52:33 pm

Thanks for the detailed read Mike, and you've got some great comments.

I would definitely agree there's a viable demarcation between a strong NFL and a weak NFL principle. I think a very strong version of NFL principle is conceptually interesting, in that it leads to conclusions like: “learning makes you stupid.” This is something similar to what Wolpert argues, which I’m using as a motivating example while also abstracting away from it to gain some universality. That said, I think such a strong NFL is both too strong and, more importantly, unnecessary for making the point against superintelligence.

So I’d actually agree that there are empirical cases, possibly those that you list, where it’s either difficult to find a tradeoff or even where there don't seem to be any tradeoffs at all. But these are exceptions rather than rules; maybe individual step-up examples (either in learning or in evolution) might not be bounded by a NFL principle, but dialing up a trait like intelligence or fitness ad infinitum is.

To your (1): there are probably tradeoffs that we don't notice for our "general" intelligence, and these tradeoffs would be summarily exaggerated if our general intelligence was exaggerated, which may lead to staggering failures. I certainly would agree there’s some range of viable improvement, given that human beings themselves vary within a range. But I think if you gave Von Neumann his source code he’d have made himself schizophrenic by day’s end.

(2) It's a good point that coupling can be positive, but there are usually far more ways to break things. Happy families are all alike; every unhappy family is unhappy in its own way. Similarly, the etiology of neurological diseases is so difficult to untangle precisely because there are so many more ways to break human intelligence than to increase it. Secondly, I think that if coupling is positive it is actually more likely to then be negative when taken to extremes (i.e., superintelligence).

Reposting your third point from Facebook since it got cut off here:
"(3) Ultimately, parts of intelligence will fall into both buckets: in some senses it’ll be “evolutionary” (bounded, environmentally contextual, tradeoffs- No Free Lunch) and in other senses it will be “deistic/platonic” ("a dial that can be turned up indefinitely" and Free Lunch Friendly). I guess I’d have to think more about the nature of intelligence, and different kinds of Free Lunches, to venture a guess how to parametrize this."

I think this is all about how general intelligence itself can get, and whether by increasing generality you necessarily sacrifice specialization (another form of intelligence). Given the availability of the evolutionary framework, it seems clear to me what our a priori hypothesis should be. But even if intelligence is partly deistic/platonic as you suggest, I think the bounds implied are enough to dethrone unbounded superintelligence as a top-tier existential risk.

All the best!
Erik

Paul Chapman
7/6/2017 09:06:53 am

I'm new to the debate about super-AI, so I will almost certainly retread old ground. Thinking about your post over coffee, I figured I could write 10,000 words. I'll try instead to write the thousand most salient.

(1) If there is a 'central dogma' of anthropology, it's that humans extend their problem-solving power by inventing tools: physical tools from water jugs to container ships; and abstract tools from writing, through arithmetic, geometry, printing, to electromagnetic theory and computation (so far).

As you point out, the laws of our universe somehow require us to build *specialized* tools, which involve trade-offs according to what we want to maximize. We can build a fast car, a cheap car, an energy-efficient car, or a 10-ton truck. But we don't build just one, universal car (unless we're the former GDR): we build ALL of the variants and use each for the job it's designed for.

But we have just one kind of brain, which appears to be just flexible enough to invent and make tools.

(2) IQ tests measure the *mental* ability of *individual* humans. Specifically: perception, comprehension, modelling, pattern-finding, etc. But not all problems are mental. There's a world of difference between, "Design a way to move a 20-tonne block of granite 50km using ancient-Egyptian technology," and, "Now move this 20-tonne block." The second requires money (or slaves), obtaining the raw materials to build your sledge, finding master carpenters, acquiring permission to traverse land belonging to others, etc.

IQ is a poor measure of how well a person can solve a *practical* problem, particularly large problems which require politics, charisma, management, and often a certain amount of fraud.

And again, IQ measures *individual* intelligence, which evolutionary measures would suggest cannot have changed much, on average, over the past 7,000 years. But an ancient-Egyptian Wright brothers could not have built an aeroplane.

What matters, and what I think we should be talking about in this debate, is not individual intelligence, but the intelligence of a civilisation. Even Leonardo, given any amount of time, could not in isolation have achieved everything our species has up until now. We can solve more problems than our ancestors, not because we are individually more intelligent, but because our civilisation is.

(3) And so to Super-AI.

The deistic approach seems to do nothing more than suggest there might be a (much) better brain. But civilisation's problem-solving ability has increased measurably over 7,000 years without, arguably, any evolutionary change in individual intelligence. By *this* precedent, it's safe to assume that our problem-solving ability will continue to grow without the need for a bigger brain. And cast in this way, it's not at all clear how even a much better brain by itself can compete with collective intelligence.

The evolutionary approach seems to miss the wood for the trees. It seems to be saying that a super-AI, like a particular design of car, can only maximize its fitness to one environmental niche. But nature evolves thousands of specialized creatures, so that virtually all environments are exploited. Super-AI doesn't have to be a single brain, or a single species of brain. It can be all species at once, all specializations, inhabiting all problem-spaces.

Super-AI would do exactly what we do: build tools. Build tools specialised to solving particular problems. The human brain has evolved separate groups of neurons wired together in different ways to specialise in autonomic functions, reflex actions, vision, hearing, pattern-recognition, and higher cognitive functions. But the brain isn't (quickly!) inventing and building new ways of wiring neurons together; it's externalising and delegating problem-solving. It's said that the human brain is the most complex thing in the universe that we know of. Nonsense. Human civilisation, advanced almost entirely through the invention and use of tools, is far more complex.

Babbage's difference engine was highly specialized. So were the computers at Bletchley Park. But the Church-Turing thesis and Von Neumann ushered in an era of 'universal' computers, which are still predominant today. But already we're beginning to specialize again. 3D graphic card architecture is utterly different from CPU-based architecture. We're building low-weight, low-power computers to put in drones or on space probes.

Benz started by building small numbers of expensive, specialized vehicles. Then Ford came along with his cheap 'universal' car. But now, again, we have thousands of different specialized vehicle models. Engineering always repeats this pattern: (1) expensive and specialized; (2) cheap and all-purpose; (3) cheap and specialized. We are at the very start of the cheap-and-specialized computation era.

What would super-AI do? Not design better and better versions of itself, but build hundreds, thousands, tens of thousands, millions - why not? - of different kinds of

Paul Chapman
7/6/2017 09:19:58 am

(continued)

... digital circuit (out of silicon, out of lasers, out of whatever comes along next) - and develop any other kind of useful technology - to solve different kinds of problems.

... TOOLS.

(4) Conclusion.

The no-free-lunch theorems do not apply to tool-makers. And super-AI would surely be a tool-maker. The 'cognitive centre' of the AI can trade everything off for one ability: the design and manufacture of tools (including, of course, different kinds of toolmaking tools). Each tool individually makes other trade-offs according to primary purpose, but all those tools collectively produce, not a machine intelligence, but a machine civilisation whose problem-solving ability increases inexorably.

Cheers, Paul

Erik Hoel
7/6/2017 10:55:18 am

Sorry that your post got cut off Paul. I’ve contacted weebly to see if they can extend the comment length.

Thanks so much for the detailed response and the close read. Interesting ideas in here and you lay out your progression of points really well. I definitely agree with distinguishing (1) from (2) as you do. So on to (3) then:

First, I think the deistic approach is more than just a better brain. It’s like a superduper infinitely (or astronomically) improvable brain. Otherwise the existential risk is lessened or nonexistent. Perhaps one day we might build a positronic brain that’s an improvement on dealing with the human niche, but unless it’s easier for them to build a super-positronic brain than it was for us to build a positronic brain in the first place, then it’s not the kind of runaway scenario Bostrom and others rank as an existential risk. I think it would actually be harder for them to build a super-positronic brain, because of the NFLP. Perfection is both a diminishing return and involves heavy tradeoffs.

Second, in regards to your argument about toolmakers, I think it's a category error of the “where’s the university?” type. I say this because Nick Bostrom and others (as far as I know) use superintelligence to refer to a specific thing or entity. For example, some kind of oracle you can lock in a box deep in the earth. Now, one can give more hazy definitions and we can discuss those. But first, if we do indeed think of a superintelligence as an entity, then the category error is like admitting that one cannot have a superorganism and then saying “but evolution… is itself the superorganism!” It’s like, yes, as a gigantic un-integrated process, evolution is pretty perfect in the sense that it adapts to many or most environments. How? Well, evolution gets around the NFLP by sharding itself. It’s not an integrated process.

Now consider evolution’s analog: learning or problem-solving in the form of intelligence. If we grouped all humans and their tools into one big set, considered that set as an entity, and then watched how that “entity” learns and solves problems, we’d be pretty amazed. It might not be a superintelligence along every dimension but it’s pretty darn super. Maybe we already have superintelligence! But I think it’s again a category error to call that set a superintelligence or to treat it like a single entity, just like it’s a category error to call all of evolution a single organism.

In fact, I think because of the NFLP sharded intelligence at a civilizational level will always outcompete any individual integrated intelligence *no matter the integrated entity.* Similarly, if I had to pick a fight between all of evolution versus any individual organism, I always pick evolution.

So I think evolution (and, analogously, intelligence) actually has three tiers: (a) individual organisms, (b) individual integrated intelligences, (c) societies of integrated intelligences. I doubt really that anything in the set of (b) that we ever build could truly compete with (c), and despite some Minskian metaphors about consciousness being a society of mind it would be a category error to confuse (b) and (c).

Basically, what I’m saying is that we already invented the next-big-step, which is groups of sharded intelligences working together in a process that mimics the basic structure of evolution (and thus its universal learning and problem-solving capabilities). Science and technology would be an example of that. Artificial intelligences just add to the tangled bank of such a process, they don’t invert the hierarchy and take over.

James Cross
7/7/2017 03:19:51 am

A couple of random thoughts.

I read Bostrom's book Superintelligence: Paths, Dangers, Strategies a while back and I'm not sure I ever actually found a definition of intelligence, so it is nice that you seem to be trying to develop one.

For my part, I like Stenhouse's definition of intelligence as "adaptively variable behavior within the lifetime of the individual," which works well for living organisms. I'm not sure if it can be applied to machines.

My own attempt at it is: "Intelligence is a physical process that attempts to maximize the diversity and/or utility of future outcomes to achieve optimal solutions."

Any thoughts on this:

https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.110.168702

Erik Hoel
7/7/2017 09:56:37 am

Hey James!

I think Bostrom refers to Legg and Hutter at some point, but I may be wrong about that.

I think your definition is definitely a good one, and the paper you linked is also interesting. There's been a number of papers exploring this entropy/intelligence connection (also the notion of free energy).

But one issue I have with these kinds of definitions of intelligence is that they often presume the borders of the process are sensible. Because they are often totally agnostic, they can be applied to things that aren't really things. All the biosphere is a process, but I think it's a significant category error to apply the word intelligence to it. What outcomes is it trying to achieve? Maybe if one wanted to stretch language to make a point, and was honest about that, but I think it's clear that we should distinguish between intelligent agents (the near-universal meaning of the word intelligence) and really broad systems-level thinking, like taking humankind as one big superintelligence. Like maybe the universe is one big superintelligence! Think about that!

Otherwise, because of this linguistic ambiguity people will say something like "an AI is a process" and "science is a process" and therefore if science is unbounded/all-knowing then an AI can be. I just think this is a massive false equivalency - and maybe because of that intelligence, at least in the sense of what the word is normally meant to mean, should be defined psychologically.

James Cross
7/9/2017 02:20:38 pm

I generally agree with most of what you have written here. However, a few minor things I want to discuss.

"A runaway self-improving super intelligent AI" - that does seem to be the main thing you are arguing against as a near-term possibility. However, we could still have an order of magnitude (or more) increase above a humans in AI - an almost super AI (ASAI) - and what could be the impact of that?

Also this statement: "However, it may be that the No Free Lunch principle itself specifies an upper bound on even human-range intelligence (say, up to 200 IQ)."

Maybe I am reading this incorrectly but it seems to be suggesting the possibility that there may be an upper limit to intelligence itself whether it is biological or artificial.

That would be an interesting idea. Do you have anything to suggest this could be true? Certainly existing intelligence tests may not be able to accurately measure anything above 200. An intelligence test for humans might have a number sequence like 1, 3, 6, 10, 15 and ask for the next number, but an intelligence test for AI might have a sequence involving hundreds of numbers that an ASAI could solve but no human could.

Complex biological intelligence evolved from simple one-celled organisms with very limited energy to power it and in organisms that require most of the energy they consume to be used for the core functions of metabolize and multiply. So it isn't surprising that we can't maximize all of the capabilities of intelligence at once.

I am not sure those limitations, however, would apply to an ASAI which could summon much more energy to the effort and could have multiple variations (processes, threads) of software that might be required to perform the different tasks.

All in all, however, I am not seeing what would provide motivation for ASAI. Biological entities are driven by the imperatives of metabolize and multiply as I mentioned. There are a host of neurotransmitters and hormones that perform roles in motivating intelligence.

What would motivate an ASAI? It would seem a programmed motivation would be a weak inducement for an ASAI.

Erik Hoel
7/10/2017 07:55:11 am

Thanks James.

I'd actually couple your two questions, which are whether there can be an almost-super AI and whether there are upper bounds on intelligence itself. I think the answer to the former depends on the latter.

Roughly, I think what we think of as general intelligence is actually a much smaller domain, and more specific to humans, than we imagine. So if I'm an agent embedded in a big social network that does 99.99% of my specialized thinking for me, I can be "general" in this sense.

Because of this, I do think there are upper limits on intelligence, especially for what we think of as general intelligence. For example, there's a dissociation between theory of mind and systematizing. It's actually somewhat rare for people to be extremely good at both. If I remember correctly, there's a study showing that people with great theory of mind talk to inanimate objects like cars all the time, and think of mechanical problems ("what made the car unhappy?") in terms of mind.

As usual there are things we can imagine (an AI being really good at continuing number sequences), but the only people capable of that kind of intense pattern recognition pay for it with heavy tradeoffs. So I'm not sure what would motivate something like this, but I doubt it would be motivated by much besides finding more patterns.

Furthermore, I don't think these NFL tradeoffs are just metabolic. Partly this is true, and it will still apply in the future (I wonder what the electricity bill for deep mind is). But there's another, deeper form of NFL. And that has to do with integration. Minds are integrated entities. And when you want to integrate functions there are massive constraints. For example, if I put one systematizer and one person with excellent ToM in a room together, I haven't created a superintelligent agent. This is because the integration within each agent vastly outstrips the integration between them. The advantage of giving separate intelligences to separate agents is that they can coordinate and act together while maintaining their specialization. I think that's a different category of things (societies) than individual agents. If you were to start coupling their brains together, I doubt their individual capabilities would be preserved - instead I think they may start interfering with one another. This gets into your "multiple threads" question.

Oliver
7/9/2017 08:21:58 pm

Great post Erik!

Is there not an important distinction to be made between what we might call 'instrumental' intelligence (ability to generate solutions to given tasks) and a more general 'explanatory' form of intelligence? Another way to phrase this might be 'competence' vs 'comprehension' (as Dennett has recently elaborated upon).

While instrumental intelligence seems to be able to improve along a continuous spectrum, wouldn't 'comprehension abilities' involve discrete leaps from one stage to another? And might there not be certain universal cognitive classes that a being can join via evolution? E.g. once human beings developed language, this enabled apparently inexhaustible mathematical competence to model the physical world.

As David Deutsch argues in his book the Beginning of Infinity - 'superintelligent' agents might belong to the same fundamental cognitive class as us, despite having superior speed or memory due to their specific neural differences. That is, they would not be superior to us with regard to the class of phenomena they can comprehend in principle, merely with regard to speed and memory.

Would love to hear your thoughts!

Erik Hoel
7/10/2017 07:50:08 am

Thanks for pointing me to the Dennett paper Oliver, I hadn't read that one.

I agree that general intelligence is a universal leap forward in some respects (at least when you get it in groups), I just also think there's good evidence it is a delicate balance of tradeoffs and can't be infinitely improved. Obviously there is a range of human intelligence and maybe that range is greater than we know, but it's just not the same as a runaway superintelligent AI that wants to turn the world into a paper clip factory.

The David Deutsch idea is an interesting one. Most people who have eidetic memory have pretty severe difficulties in life. But maybe one could speed up some intelligent agent. How? Upload a brain onto a computer and then run it at a faster clock rate? In general, I think there's a mistake in the way these futurists or singularitarians operate. The thinking often goes: if science fiction idea A supports science fiction idea B, then B is further supported. But they all feed off one another, so it isn't really true. One could equally argue backwards: if we have reason to believe A is false, we also have reason to believe B is false. In other words, I think the entire conceptual structure stands or falls pretty rapidly once the tenets of free lunches and exponentially increasing progress are questioned.

Oliver
7/10/2017 06:13:20 pm

Yes - and it seems a particularly common mistake is to think that mere improvements in speed or memory capacity (e.g. Moore's law) automatically translate into improved competence at solving real problems.

In other words, the 'program / software' really matters. The dichotomy Deutsch is pointing at is between non-creative programs - which must eventually reach their inherent limits - and creative ones - which can question and revise their assumptions to solve ill-defined novel problems.

That's why the 'paperclip maximiser' doomsday scenario is so implausible. If such a program was uncreative, its progress would eventually be stymied, but if it is creative, it would eventually begin to question the reasons behind its rather silly fundamental goal.

Babak Farhang
8/16/2017 11:30:37 pm

Enjoyed your article. Agree with dimensionality of intelligence. This dimensionality need not be just task based; it can also be time based. So, for example, one program can be the 5-minute chess champ, another the 40-minute champ. To human players the first might seem the better tactical player, the other the more strategic player.

Modeling a GAI in real life might be like asking it to play chess, only now it doesn't know if it's playing 5-minute or 40-minute chess. So every GAI will be beaten by some other GAI at some other game, and perhaps this is an argument against the possibility of the emergence of a hegemonic post-singularity superintelligence singleton.



