
July 31, 2008

Comments

We can agree that it does not suffice at all to treat an AI program as if it were a human child. But can we also agree that the state of a "grown" AI program will depend on the environment in which it was "raised"?

[nitpick]

The apple-recognition machinery in your brain does not suddenly switch off, and then switch on again later - if it did, we would be more likely to recognize it as a factor, as a requirement.

Actually, the apple-recognition machinery in the human brain really does turn off on a regular basis. You have to be awake in order to recognize an apple; you can't do it while sleeping.

[/nitpick]

Here is a link to the HTML text (instead of the scanned PDF) of Tooby & Cosmides, The Psychological Foundations of Culture.

Robin: But can we also agree that the state of a "grown" AI program will depend on the environment in which it was "raised"?

It will depend on the environment in a way that it depends on its initial conditions. It will depend on the environment if it was designed to depend on the environment. The reason, presumably, why the AI is not inert in the face of the environment, like a heap of sand, is that someone went to the work of turning that silicon into an AI. Each bit of internal state change will happen because of a program that the programmer wrote, or that the AI programmed by the programmer wrote, and the chain of causality will stretch back, lawfully.

With all those provisos, yes, the grown AI will depend on the environment. Though to avoid the Detached Lever fallacy, it might be helpful to say: "The grown AI will depend on how you programmed the child AI to depend on the environment."
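A minimal sketch of that last sentence, assuming a hypothetical toy class (no one's actual design): the environment only reaches the grown AI through whatever update rule the programmer wrote into the child AI.

# Toy illustration with hypothetical names: environmental influence is
# mediated entirely by the update rule the programmer chose to write.

class ChildAI:
    def __init__(self, update_rule):
        self.state = 0.0
        self.update_rule = update_rule  # supplied by the programmer

    def experience(self, environment_signal):
        # The environment matters only in the way the update rule says it does.
        self.state = self.update_rule(self.state, environment_signal)

# Two "children" raised in the same environment, with different update rules:
responsive = ChildAI(lambda s, e: s + e)  # accumulates the signal
inert = ChildAI(lambda s, e: s)           # ignores the environment entirely

for signal in [1.0, 2.0, 3.0]:
    responsive.experience(signal)
    inert.experience(signal)

print(responsive.state, inert.state)  # 6.0 0.0 - same upbringing, different outcomes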

Doug: You have to be awake in order to recognize an apple

Dream on.

"Actually, the apple-recognition machinery in the human brain really does turn off on a regular basis. You have to be awake in order to recognize an apple; you can't do it while sleeping."

I don't remember ever dreaming about fruits, but I'm pretty sure I could recognize an apple if it happened. Did I just set myself up to have a weird dream tonight? Oh boy...

The fact that the pattern that makes the apple module light up comes from different places while dreaming than while awake doesn't matter; you don't stop recognizing the apple, so the module probably isn't 'off'.

Re: All this goes to explain why you can't create a kindly Artificial Intelligence by giving it nice parents and a kindly (yet occasionally strict) upbringing, the way it works with a human baby. As I've often heard proposed.

Sure you can. It's just that you would need some other stuff as well.

When you dream about an apple, though, can you be said to recognize anything? No external stimulus triggers the apple-recognition program; it just happens to be triggered by unpredictable, tired firings of the brain, and your starting to dream about an apple is the result of its being triggered in the first place, not the other way around.

What has always bothered me about a lot of this AI stuff is that it's simply not grounded in biology. I think you're addressing this a little bit here.

"Eventually, the good guys capture an evil alien ship, and go exploring inside it. The captain of the good guys finds the alien bridge, and on the bridge is a lever. "Ah," says the captain, "this must be the lever that makes the ship dematerialize!" So he pries up the control lever and carries it back to his ship, after which his ship can also dematerialize."

This type of thing is known to happen in real life, when technology gaps are so large that people have no idea what generates the magic. See http://en.wikipedia.org/wiki/Cargo_cult.

Someone who thinks you make an AI nice by raising it in a family, probably also thinks that you make a fork-lift strong by instructing it to pump iron. The analogy is apt.

Ouch! I've been out-nitpicked! ;)

Okay, you need to be awake or in REM sleep in order to recognize an apple!

I certainly agree with the general point and conclusions here, but I think that you are overstating it.

"It is a truism in evolutionary biology that conditional responses require more genetic complexity than unconditional responses. "

is true *except* where general intelligence is at work. It probably takes more complexity to encode an organism that can multiply 7 by 8 and can multiply 432 by 8902 but cannot multiply 6 by 13 than to encode an organism that can do all three, and presumably it takes more complexity to encode a chimp with the full suite of chimp abilities except that it cannot learn sign language than one that can learn to sign with proper education.

To what extent do you think:

1.) Culture itself evolves and follows the same principles of evolution as humans and honeybees?

2.) Culture defines worldview and horizon of knowledge/decision/ideation?

3.) Culture's means of communicating information to infants (e.g. "My First Big Book of A B C's") are evolving/changing to encode "more correct" ideas of the human organism (i.e. teach better)?

You seem to be avoiding theorizing on how society/culture *does* affect our maturation. Can we bound this? Can we say anything effective about it?

Re: Culture itself evolves and follows the same principles of evolution as humans and honeybees

Culture exhibits directed variation in a way that was extremely rare in evolution until recently. Obviously culture evolves - but whether the "principles" are the same depends on what list of principles you use.

Growing human brains are wired to learn syntactic language - even when syntax doesn't exist in the original language, the conditional response to the words in the environment is a syntactic language with those words.

This, under the name "universal grammar", is the insight that Noam Chomsky is famous for.

At the risk of revealing my identity, I recall getting into an argument about this with Michael Vassar at the NYC meetup back in March (I think it was). If memory serves, we were talking at cross-purposes: I was trying to make the case that the discipline of theoretical ("Chomskian") linguistics, whose aim is to describe the cognitive input-response system that goes by the name of the "human language faculty", teaches us not to regard individual languages such as English or French as Platonic entities, but rather merely as ad-hoc labels for certain classes of utterances. Vassar, it seemed (and he's of course welcome to correct me if I'm misremembering), took me to be arguing for the Platonicity of some more abstract notion of "human language".

"is true *except* where general intelligence is at work. It probably takes more complexity to encode an organism that can multiply 7 by 8 and can multiply 432 by 8902 but cannot multiply 6 by 13 than to encode an organism that can do all three,"

This is just a property of algorithms in general, not of general intelligence specifically. Writing a Python/C/assembler program to multiply A and B is simpler than writing a program to multiply A and B unless A % B = 340. It depends on whether you're thinking of multiplication as an algorithm or a giant lookup table (http://www.overcomingbias.com/2007/11/artificial-addi.html).
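A minimal Python sketch of that point (illustrative names only, not anyone's actual code): the general rule is a one-liner, while the rule with an arbitrary carve-out needs the general rule plus extra machinery for the exception.

# Sketch only: a general algorithm vs. the same algorithm with an arbitrary
# exception bolted on; the second takes strictly more code to specify.

def multiply_general(a, b):
    # Handles 7*8, 432*8902, and 6*13 alike.
    return a * b

def multiply_with_exception(a, b):
    # Same rule, plus the arbitrary carve-out mentioned above.
    if b != 0 and a % b == 340:
        raise ValueError("this input is arbitrarily excluded")
    return a * b

print(multiply_general(7, 8), multiply_general(432, 8902), multiply_general(6, 13))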

Good call, Tom.
Let's clarify that we are including systems that only approximately correspond to the content of algorithms, though (systems that 'implement' algorithms rather than 'being' said algorithms), like evolution approximating the math of evolutionary dynamics, or hand calculators only approximately following classical physics.

Spaceship Dematerializer Levers (SDLs) work like magic wands, and are fully detachable. They are also known as barsom. Modern ships of the Enterprise class have electromagnetic shields that protect them when passing through asteroid belts. Asteroids have iron cores, so they are easily deflected. Asteroids that are ice melt down in the field, causing short circuits and sparking.

Eliezer, since you've mentioned this several times now, I must object: you unfairly slander a generation of AI researchers (which included me). It was and remains perfectly reasonable for programmers to give programs and data structures suggestive names, and this habit was not at all akin to thinking a machine lever could do everything the machine does. As a whole that generation certainly did not think that merely naming a data structure "fruit" gave the program all the knowledge about fruit we have.

Robin, this criticism is hardly original with myself, though I've fleshed it out after my own fashion. (And I cited Drew McDermott, in particular.) Of course not all past AI researchers made this mistake, but a very substantial fraction did so, including leaders of the field. Do you assert that this mistake was not made, or that it was made by only a very small fraction of researchers?

Eliezer, yes, McDermott had useful and witty critiques of then-current practice, but that was far from suggesting this entire generation of researchers were mystic idiots; McDermott said:

Most AI workers are responsible people who are aware of the pitfalls of a difficult field and produce good work in spite of them.

You come across sometimes as suggesting that the old-timer approach to AI was a hopeless waste, so that their modest rate of progress has little to say about expected future progress. And the fact that people used suggestive names when programming seems prime evidence to you. To answer your direct question as precisely as possible, I assert that while many researchers did at times suffer the biases McDermott mentioned, this did not reduce the rate of progress by more than a factor of two.

Whether anthropomorphism in general, or the Detached Lever fallacy in particular, reduced progress in AI by so much as a whole factor of two, is an interesting question; progress is driven by the fastest people. Removing anthropomorphism might not have sped things up much - AI is hard.

However, I would certainly bet that the size of the most exaggerated claims was driven primarily by anthropomorphism; if the culprit researchers involved had never seen a human, it would not have occurred to them to make claims within two orders of magnitude of what they claimed. Note that the size of the most exaggerated claims is driven by those most overconfident and most subject to anthropomorphism.

As you know(?) I feel that if one ignores all exaggerated claims and looks only at what actually did get accomplished, then AI has not progressed any more slowly than would be expected for a scientific field tackling a hard problem. I don't think AI is moving any more slowly on intelligence than biologists did on biology, back when elan vital was still a going hypothesis. There are specific AI researchers that I revere, like Judea Pearl and Edwin Jaynes, and others who I respect for their wisdom even when I disagree with them, like Douglas Hofstadter.

But on the whole, AGI is not now and never has been a healthy field. It seems to me - bearing in mind that we disagree about modesty in theory, though not, I've always argued, in practice - it seems to me that the amount of respect you want me to give the field as a whole, would not be wise even if this were a healthy field, given that this is my chosen area of specialization and I am trying to go beyond the past. For an unhealthy field, it should be entirely plausible even for an outsider to say, "They're Doing It Wrong". It is akin to the principle of looking to Einstein and Buffett to find out what intelligence is, rather than Jeff Skilling. A paradigm has to earn its respect, and there's no credit for trying. The harder and more diligently you try, and yet fail, the more probable it is that the methodology involved is flawed.

I accept that the most exaggerated claims were most driven by overconfidence and anthropomorphism; I accept your correction of my misstatement - you are not disappointed with old-timer AI progress; and I accept that you can reasonably think AGI is "doing it wrong." But whatever thoughtful reason you have to think you can do better, surely it isn't that others don't realize that naming a data structure "apple" doesn't tell the computer everything we know about apples.

I find myself unsure of your own stance here...? Naming a data structure "apple" doesn't tell the computer anything we know about apples.


Re: dreaming apples

When you dream of an apple, you are perhaps not aware of a real physical apple, and the cognitive machinery of apple-identification is not activated by retinal stimulus.

Nevertheless, the cognitive machinery of apple-identification is still the same. If I see a picture of an apple, or if a futuristic mind-control device convinces me that an apple is in front of me, the apple-identification program in my brain functions the same in every case.

Of course, most of the time we sleep, we're not dreaming. When you're fully unconscious, your cognitive machinery can do nothing, because there's nothing for it to work with. In other words, if you can't recognize apples, it's probably because your entire mind is switched off for some reason (hopefully temporarily, but eventually, permanently).
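A toy way to picture this (hypothetical names; my sketch, not anyone's model of the brain): the same recognition routine runs whether its input comes from the retina or from a dream; only the input pipeline differs.

# Toy sketch: one shared recognition routine, two different sources of input.

def recognize_apple(features):
    # The same 'apple module' runs regardless of where the features came from.
    return features.get("round") and features.get("red_or_green")

def features_from_retina():
    # Waking perception: features driven by an external stimulus.
    return {"round": True, "red_or_green": True}

def features_from_dream():
    # Dreaming: features generated internally, no external apple required.
    return {"round": True, "red_or_green": True}

for source in (features_from_retina, features_from_dream):
    print(source.__name__, "->", recognize_apple(source()))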

But on the whole, AGI is not now and never has been a healthy field.

That's because there's not much money in it. Computers today are too slow and feeble. Today, even if you could access an algorithm that beat the best humans at go, it would cost a small fortune, operate slowly, and require a huge heat sink. Performance is of critical importance to many applications of intelligence.

Also, software lags behind hardware - e.g. check out the history of the games for the PS3.

So, narrow AI projects can succeed today - but broad AI probably won't be well funded until it has a chance of being cost/performance competitive with humans - and that's still maybe 10-20 years away.

You could say the 'lever' approach is equivalent to impatiently trying to go only as 'deep' as need be, to get results. There are subfields though, eg 'Adaptive Behavior' and 'Developmental Robotics' that, while not calling themselves AGI, have it as their ultimate goal and have buckled down for the long haul, working from the bottom up.

The comments to this entry are closed.
