
August 26, 2008

Comments

"Planes would fly just as well, given a fixed design, if birds had never existed; they are not kept aloft by analogies."

Good analogy.

Simply mimicking the human brain in an attempt to produce intelligence is akin to scavenging code off the web to write a program. Will you understand well how the program works? No. Will it work? If you can hack it, very possibly.

It seems that we're only just beginning to learn how to hack nature. Personally, I'd say it's a much more likely way to AI than deliberate design. But that may be just because I don't think humans are collectively that bright.

Written any code lately? How's Flare coming along?

Eliezer, do you work on coding AI? What is the ideal project that intersects practical value and progress towards AGI? How constrained is the pursuit of AGI by a lack of hardware optimized for its general requirements? I'd love to hear more nuts and bolts stuff.

JB, ditched Flare years ago.

Aron, if I knew what code to write, I would be writing it right now. So I'm working on the "knowing" part. I don't think AGI is hardware-constrained at all - it would be a tremendous challenge just to properly use one billion operations per second, rather than throwing most of it away into inefficient algorithms.

If you meet someone who says that their AI will do XYZ just like humans ... Say to them rather: "I'm sorry, I've never seen a human brain, or any other intelligence, and I have no reason as yet to believe that any such thing can exist. Now please explain to me what your AI does, and why you believe it will do it, without pointing to humans as an example."

This seems the wrong attitude toward someone who proposes to pursue AI via whole brain emulation. You might say that approach is too hard, or the time is not right, or that another approach will do better or earlier. But whole brain emulation hardly relies on vague analogies to human brains - it would be directly making use of their abilities.

What the Article module does

The Article module is the first architectural add-on to the
primitive, proof-of-concept AI Mind as described in the AI4U
textbook of artificial intelligence. Prior to 2008, when the
Article module was introduced, there were only enough modules
in the AI Mind to demonstrate thinking, and the AI software
did not function properly until the last major bugs were
eliminated from Mind.Forth in January of 2008.

The proof-of-concept AI Mind could think only in terms of
plural nouns without the articles "a" or "the". It is
difficult for a human user to talk only about plural nouns
with the AI. The user feels a natural desire to discuss
a single instance of an otherwise plural topic. Therefore
the first step in expanding a primitive AI Mind is to add
a group of features that include the use of singular forms
for nouns and verbs, and the use of intransitive verbs of
being and becoming for the discussion of both singular
and plural topics.

[rest of comment deleted, for more epic gibberish see Mentifex's home site]

Aron, I don't think anyone really knows the general requirements for AGI, and therefore nobody knows what (if any) kind of specialized hardware is necessary. But if you're a hardware guy and you want something to work on, you could read Pearl's book (mentioned above) and find ways to implement some of the more computationally intensive inference algorithms in hardware. You might also want to look up the work by Geoff Hinton et al. on restricted Boltzmann machines and try to implement the associated algorithms in hardware.
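For readers who haven't met restricted Boltzmann machines: the core training update is short, and nearly all of the work is dense matrix multiplication, which is exactly why it is a natural target for specialized hardware. Here is a minimal NumPy sketch of one contrastive-divergence (CD-1) step; the layer sizes, learning rate, and random data are illustrative placeholders only, not anything from Hinton's papers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes -- purely illustrative, not tied to any real dataset.
n_visible, n_hidden, batch = 64, 32, 16
W = 0.01 * rng.standard_normal((n_visible, n_hidden))
b_v = np.zeros(n_visible)   # visible-unit biases
b_h = np.zeros(n_hidden)    # hidden-unit biases
lr = 0.1                    # learning rate (arbitrary)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0):
    """One contrastive-divergence (CD-1) weight update on a batch of binary visible vectors."""
    # Positive phase: sample hidden units given the data.
    p_h0 = sigmoid(v0 @ W + b_h)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
    # Negative phase: reconstruct visibles, then recompute hidden probabilities.
    p_v1 = sigmoid(h0 @ W.T + b_v)
    p_h1 = sigmoid(p_v1 @ W + b_h)
    # Gradient approximation: <v h>_data - <v h>_model, averaged over the batch.
    return (v0.T @ p_h0 - p_v1.T @ p_h1) / batch

v0 = (rng.random((batch, n_visible)) < 0.5).astype(float)  # fake binary data
W += lr * cd1_step(v0)
```

Nearly every line here is a batched matrix product, so the same algorithm maps almost directly onto GPUs, FPGAs, or other dense-linear-algebra hardware.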

Eliezer, of course in order to construct AI we need to know what intelligence really is, what induction is, etc. But consider an analogy to economics. Economists understand the broad principles of the economy, but not the nuts and bolts details. The inability of the participants to fully comprehend the market system hardly inhibits its ability to function. A similar situation may hold for intelligence: we might be able to construct intelligent systems with only an understanding of the broad principles, but not the precise details, of thought.

The following is a public service announcement for all Overcoming Bias readers who may be thinking of trying to construct a real AI.

AI IS HARD. IT'S REALLY FRICKING HARD. IF YOU ARE NOT WILLING TO TRY TO DO THINGS THAT ARE REALLY FRICKING HARD THEN YOU SHOULD NOT BE WORKING ON AI. You know how hard it is to build a successful Internet startup? You know how hard it is to become a published author? You know how hard it is to make a billion dollars? Now compare the number of successful startups, successful authors, and billionaires, to the number of successful designs for a strong AI. IT'S REALLY FRICKING HARD. So if you want to even take a shot at it, accept that you're going to have to do things that DON'T SOUND EASY, like UNDERSTAND FRICKING INTELLIGENCE, and hold yourself to standards that are UNCOMFORTABLY high. You have got to LEVEL UP to take on this dragon.

Thank you. This concludes the public service announcement.

Robin, whole brain emulation might be physically possible, but I wouldn't advise putting venture capital into a project to build a flying machine by emulating a bird. Also there's the destroy-the-world issue if you don't know what you're doing.

But I'm a level 12 halfling wizard! Isn't that enough?

If you haven't seen a brain, "Nothing is easier than to familiarize one's self with the mammalian brain. Get a sheep's head, a small saw, chisel, scalpel and forceps..."

Re: Simply mimicking the human brain in an attempt to produce intelligence is akin to scavenging code off the web to write a program. Will you understand well how the program works? No. Will it work? If you can hack it, very possibly.

Except that this is undocumented spaghetti code which comes with no manual, is written in a language for which you have no interpreter, was built by a genetic algorithm, and is constructed so that it disintegrates.

The prospective hacker needs to be more than brave, they need to have no idea that other approaches are possible.

Re: I don't think anyone really knows the general requirements for AGI, and therefore nobody knows what (if any) kind of specialized hardware is necessary.

One thing which we might need - and don't yet really have - is parallelism.

True, there are FPGAs, but these are still a nightmare to use. Elsewhere parallelism is absurdly coarse-grained.

We probably won't need anything very fancy on the hardware front - just more speed and less cost, to make the results performance- and cost-competitive with humans.

Eliezer, if designing planes had turned out to be "really fricking hard" enough, requiring "uncomfortably high standards" that mere mortals shouldn't bother attempting, humans might well have flown first by emulating birds. Whole brain emulation should be doable within about a half century, so another approach to AI will succeed first only if it is not really, really fricking hard.

Dan, I've implemented RBMs and assorted statistical machine learning algorithms in the context of the Netflix Prize. I've also recently adapted some of these to work on Nvidia cards via their CUDA platform. Performance improvements have been 20-100x, and this is hardware that has only taken a few steps away from pure graphics specialization. Fine-grained parallelization, improved memory bandwidth, less chip logic devoted to branch prediction, user-controlled shared memory, etc. all help.

I'm seeing a lot of interesting applications in multimedia processing, many of which have statistical learning elements. One project at Siggraph allowed users to modify a single frame of video and have that modification automatically adapt across the entire video. Magic stuff. If we are heading towards hardware that is closer to what we'd expect as the proper substrate for AI, and we are finding commercial applications that promote this development, then I think we are building towards this fricking hard problem the only way possible: in small steps. It's not the conjugate gradient, but we'll get there.
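To give a concrete sense of where speedups like Aron's 20-100x come from: the inner loops of these algorithms are dominated by large dense matrix products, and moving just that operation onto the GPU captures most of the win. The sketch below uses CuPy as a modern stand-in for hand-written CUDA (Aron's actual code presumably used CUDA directly); the matrix sizes are arbitrary, and a CUDA-capable GPU with CuPy installed is an assumption.

```python
import time
import numpy as np
import cupy as cp   # assumption: CUDA-capable GPU with CuPy installed

# Arbitrary toy sizes; real workloads are shaped differently.
m, k, n = 4096, 4096, 4096
a_cpu = np.random.random((m, k)).astype(np.float32)
b_cpu = np.random.random((k, n)).astype(np.float32)

t0 = time.perf_counter()
c_cpu = a_cpu @ b_cpu                      # CPU dense matrix multiply
t_cpu = time.perf_counter() - t0

a_gpu, b_gpu = cp.asarray(a_cpu), cp.asarray(b_cpu)
cp.cuda.Device(0).synchronize()
t0 = time.perf_counter()
c_gpu = a_gpu @ b_gpu                      # same multiply, dispatched to the GPU
cp.cuda.Device(0).synchronize()            # wait for the kernel before stopping the clock
t_gpu = time.perf_counter() - t0

print(f"CPU: {t_cpu:.3f}s  GPU: {t_gpu:.3f}s  speedup: {t_cpu / t_gpu:.1f}x")
print(np.allclose(c_cpu, cp.asnumpy(c_gpu), atol=1e-2))   # agree within float32 noise
```

The point is not the particular numbers but the pattern: if an algorithm can be phrased as big dense linear algebra, commodity graphics hardware already accelerates it dramatically.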

Aron: What did those performance improvements of 20-100x buy you in terms of reduced squared error on the Netflix Prize?

@Robin

But unless you use an actual human brain for your AI, you're still just creating a model that works in some way "like" a human brain. To know that it will work, you'll need to know which behaviors of the brain are important to your model and which are not (voltages? chemical transfers? tiny quantum events?). You'll also need to know what capabilities the initial brain model you construct will need vs. those it can learn along the way. I don't see how you get the answers to those questions without figuring out what intelligence really is unless generating your models is extraordinarily cheap.

For the planes/birds analogy, it's the same as the idea that feathers are really not all that useful for flight as such. But without some understanding of aerodynamics, there's no reason not to waste a lot of time on them for your bird flight emulator, while possibly never getting your wing shape really right.

Eliezer: AI IS HARD. ... You have got to LEVEL UP to take on this dragon.

- How long have you personally spent working on the AGI problem? I heard that at some point about 10 years ago, you and Ben Goertzel thought you could wrap up the AI problem in a few years. I also heard that both Robin and Nick Bostrom have worked on AI and given up. Given this data, it seems that the problem is probably beyond anyone; though this doesn't mean that it won't get solved bit-by-bit.

Roko has a point there.

I like "AI IS HARD. IT'S REALLY FRICKING HARD." But that is an argument that could cut you in several ways. Anything that has never been done is really hard. Can you tell those degrees of really hard beforehand? 105 years ago, airplanes were really hard; today, most of us could get the fundamentals of designing one with a bit of effort. The problem of human flight has not changed, but its perceived difficulty has. Is AI that kind of problem, the one that is really hard until suddenly it is not, and everyone will have a half-dozen AIs around the house in fifty years? Is AI hard like time travel? Like unaided human flight? Like proving Fermat's Last Theorem?

It seems like those CAPS will turn on you at some point in the discussion.

Eliezer, I suspect that was rhetorical. However, top algorithms that avoid overtraining can benefit from adding model parameters (though with massively diminishing returns to scale). There are top-tier Monte Carlo algorithms that take weeks to converge, and if you gave them years and more parameters they'd do better (if only slightly). It may ultimately prove to be a non-zero advantage for those that have the algorithmic expertise and the hardware advantage, particularly in a contest where people are fighting for very small quantitative differences. I mentioned this for Dan's benefit and didn't intend to connect it directly to strong AI.

I'm not imagining a scenario where someone in a lab is handed a computer that runs at 1 exaflop and this person throws a stacked RBM on there and then finally has a friend. However, I am encouraged by the steps that Nvidia and AMD have taken towards scientific computing, and Intel (though behind) is simultaneously headed in the same direction. Suddenly we may have a situation where, for commodity prices, applications can be built that do phenomenally interesting things in video and audio processing (and other areas I'm unaware of). These applications aren't semantic powerhouses of abstraction, but they are undeniably more AI-like than what came before, utilizing statistical inference and deep parallelization. Along the way we learn the nuts-and-bolts engineering basics of how to distribute work among different hardware architectures, code in parallel, develop reusable libraries and frameworks, etc.

If we take for granted that strong AI is so fricking hard we can't get there in one step, we have to start looking at what steps we can take today that are productive. That's what I'd really love to see your brain examine: the logical path to take. If we find a killer application today along the lines above, then we'll have a lot more people talking about activation functions and log probabilities. In contrast, the progress of hardware from 2001-2006 was pretty disappointing (to me at least) outside of the graphics domain.

AGI may be hard, but narrow AI isn't necessarily. How many OB readers care about real AI vs. just improving their rationality? It's not that straightforward to demonstrate to the common reader how these two are related.

Realizing the points you make in this post about AI is just like lv 10 out of 200 or something levels. It's somewhat disappointing that you actually have to even bother talking about it, because this should have been realized by everyone back in 1956, or at least in 1970, after the first round of failures. (Marvin Minsky, why weren't you pushing that point back then?) But is it bad that I sort of like how most people are confused nowadays? Conflicting emotions on this one.

Whole brain emulation -- hm, sounds like some single human being gets to be godlike first, then. Who do we pick for this? The Dalai Lama? Barack Obama? Is worrying about this a perennial topic of intellectual masturbation? Maybe.

Eliezer, what destroy-the-world issues do you see resulting from whole brain emulation? I see risks that the world will be dominated by intelligences that I don't like, but nothing that resembles tiling the universe with smiley faces.

Roko, Ben thought he could do it in a few years, and still thinks so now. I was not working with Ben on AI, then or now, and I didn't think I could do it in a few years, then or now. I made mistakes in my wild and reckless youth but that was not one of them.

[Correction: Moshe Looks points out that in 1996, "Staring into the Singularity", I claimed that it ought to be possible to get to the Singularity by 2005, which I thought I would have a reasonable chance of doing given a hundred million dollars per year. This claim was for brute-forcing AI via Manhattan Project, before I had any concept of Friendly AI. And I do think that Ben Goertzel generally sounds a bit more optimistic and reassuring about his AI project getting to general intelligence in on the order of five years given decent funding. Nonetheless, the statement above is wrong. Apparently this statement was so out of character for my modern self that I simply have no memory of ever making it, an interesting but not surprising observation - there's a reason I talk about Eliezer_1996 like he was a different person. It should also be mentioned that I do assess a thought-worthy chance of AI showing up in five years, though probably not Friendly. But this doesn't reflect the problem being easy, it reflects me trying to widen my confidence intervals.]

Zubon, the thought has tormented me for quite a while that if scientific progress continued at exactly the current rate, then it probably wouldn't be more than 100 years before Friendly AI was a six-month project for one grad student. But you see, those six months are not the hard part of the work. That's never the really hard part of the work. Scientific progress is the really fricking hard part of the work. But this is rarely appreciated, because most people don't work on that, and only apply existing techniques - that's their only referent for "hard" or "easy", and scientific progress isn't a thought that occurs to them, really. Which also goes for the majority of AGI wannabes - they think in terms of hard or easy techniques to apply, just like they think in terms of cheap or expensive hardware; the notion of hard or easy scientific problems-of-understanding to solve, does not appear anywhere on their gameboard. Scientific problems are either already solved, or clearly much too difficult for anyone to solve; so we'll have to deal with the problem using a technique we already understand, or an understandable technology that seems to be progressing, like whole brain emulation or parallel programming.

These are not the important things, and they are not the gap that separates you from the imaginary grad student of 100 years hence. That gap is made out of mysteries, and you cross it by dissolving them.

Peter, human brains are somewhat unstable even operating in ancestral parameters. Yes, you run into a different class of problems with uploading. And unlike FAI, there is a nonzero chance of full success even if you don't use exact math for everything. But there are still problems.

Richard, the whole brain emulation approach starts with and then emulates a particular human brain.

Michael, we have lots of experience picking humans to give power to.

AI IS HARD. IT'S REALLY FRICKING HARD.
So start with artificial stupidity. Stupidity is plentiful and ubiquitous - it follows that it should be easy for us to reproduce.

As it happens, we've made far more progress making computer programs that can 'think' as well as insects than can think like humans. So start with insects first, and work your way up from there.

Aron: If we take for granted that strong AI is so fricking hard we can't get there in one step, we have to start looking at what steps we can take today that are productive.

Well we probably want to work on friendly goal systems rather than how to get AGI to work. Ideally, you want to know exactly what kind of motivational system your AGI should have before you (or anyone else) knows how to build an AGI.

My personal estimate is veering towards "AGI won't come first", because a number of clever people like Robin Hanson and the guys at the Future of Humanity Institute think whole brain emulation will come first, and have good arguments for that conclusion. However, we should be ready for the contingency that AGI really gets going, which is why I think that Eliezer & SingInst are doing such a valuable job.

So, I would modify Aron's request to: If we take for granted that Friendly AI is so hard we can't get there in one step, we have to start looking at what steps we can take today that are productive. What productive steps towards FAI can we take today?

Disclaimer: perhaps the long-standing members of this blog understand the following question and may consider it impertinent. Sincerely, I am just confused (as I think anyone going to the Singularity site would be).

When I visit this page describing the "team" at the Singularity Institute, it states that Ben Goertzel is the "Director of Research", and Eliezer Yudkowsky is the "Research Fellow". EY states (above); "I was not working with Ben on AI, then or now." What actually goes on at SIAI?

Eliezer, if the US government announced a new Manhattan Project-grade attempt to be the first to build AGI, and put you in charge, would you be able to confidently say how such money should be spent in order to make genuine progress on such a goal?

Ben does outside research projects like OpenCog, since he knows the field and has the connections, and is titled "Research Director". I bear responsibility for SIAI in-house research, and am titled "Research Fellow" because I helped found SIAI and I consider it nobler not to give myself grand titles like Grand Poobah.

Silas, I would confidently say, "Oh hell no, the last thing we need right now is a Manhattan Project. Give me $5 million/year to spend on 10 promising researchers and 10 promising students, and maybe $5 million/year to spend on outside projects that might help, and then go away. If you're lucky we'll be ready to start coding in less than a decade."

I think what goes on at SIAI is that Eliezer writes blog posts. ;)

Re: a number of clever people like Robin Hanson and the guys at the Future of Humanity Institute think whole brain emulation will come first, and have good arguments for that conclusion.

What? Where are these supposedly good arguments, then? Or do you mean the "Crack of a Future Dawn" material?

EY: "Give me $5 million/year to spend on 10 promising researchers and 10 promising students, and maybe $5 million/year to spend on outside projects that might help, and then go away. If you're lucky we'll be ready to start coding in less than a decade."

I am contacting the SIAI today to see whether they have some role I can play. If my math is correct, you need $100 million, and 20 selected individuals. If the money became available, do you have the individuals in mind? Would they do it?

I'll be 72 in 10 years when the coding starts; how long will that take? Altruism be damned, remember my favorite quote: "I don't want to achieve immortality through my work. I want to achieve it through not dying." (W. Allen)

Retired, are you signed up for cryonics?

No, I don't have 20 people in mind. And I don't need that full amount, it's just the most I can presently imagine myself managing to use.

EY:
email me. I have a donor in mind.

I will, but it looks from your blog like you're already talking to Michael Vassar. I broadcast to the world, Vassar handles personal networking.

Tim: "What? Where are these supposedly good arguments, then? Or do you mean the crack of a future dawn material?"

- Anders Sandberg argues that brain scanning techniques using a straightforward technology (slicing and electron microscopy) combined with Moore's law will allow us to do WBE on a fairly predictable timescale.
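The shape of that argument is easy to reproduce as a back-of-the-envelope calculation. All of the numbers below are placeholder assumptions for illustration, not Sandberg's actual estimates; the point is only that, given a compute requirement and a doubling time, an arrival date falls out fairly predictably.

```python
import math

# Placeholder assumptions -- not Sandberg's figures, just the shape of the argument.
flops_needed   = 1e18   # rough guess at compute for real-time whole brain emulation
flops_today    = 1e13   # rough guess at an affordable machine at the time of writing
doubling_years = 1.5    # Moore's-law-style doubling time

doublings = math.log2(flops_needed / flops_today)
years = doublings * doubling_years
print(f"{doublings:.1f} doublings -> roughly {years:.0f} years until the hardware suffices")
# ~17 doublings -> roughly 25 years under these assumptions; the scanning
# technology, not the compute, may well be the binding constraint.
```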

Someone should write a "Creating Friendly Uploads" paper, but a first improvement over uploading and then enhancing a single human would be uploading that human ten times and enhancing all ten copies in different ways, so as to mitigate some possible insanity scenarios.

Vassar handles personal networking? Dang, then I probably shouldn't have mouthed off at Robin right after he praised my work.

Steven, I think both Toby Ord and separately Anna Salamon are working along those lines.

Interesting.

I guess that under the plausible (?) assumption that at least one enhancement strategy in a not too huge search space reliably produces friendly superintelligences, the problem reduces from *creating* to *recognizing* friendliness? Even so I'm not sure that helps.

I would start by assuming away the "initially nice people" problem and ask if the "stability under enhancement" problem was solvable. If it was, then I'd add the "initially nice person" problem back in.

If like me you don't expect the first upload to all by itself rapidly become all powerful, you don't need to worry as much about upload friendliness.

[I] am titled "Research Fellow" because I helped found SIAI and I consider it nobler not to give myself grand titles like Grand Poobah.

It seems to me that the titles "Director of Research" and "Executive Director" give the holders power over you, and it is not noble to give other people power over you in exchange for dubious compensation, and the fact that Ben's track record (and doctorate?) lend credibility to the organization strikes me as dubious compensation.

Example: the holders of the titles might have the power to disrupt your scientific plans by bringing suit claiming that a technique or a work you created and need is the intellectual property of the SI.

> AI IS HARD. IT'S REALLY FRICKING HARD.

Hundreds of blog posts and still no closer!

Re: Anders Sandberg argues that brain scanning techniques using a straightforward technology (slicing and electron microscopy) combined with Moore's law will allow us to do WBE on a fairly predictable timescale.

Well, that is not unreasonable - though it is not yet exactly crystal clear which brain features we would need to copy in order to produce something that would boot up. However, that is not a good argument for uploads coming first. Any such argument would necessarily compare upload and non-upload paths. Straightforward synthetic intelligence based on engineering principles seems likely to require much less demanding hardware, much less in the way of brain scanning technology - and much less in the way of understanding what's going on.

The history of technology does not seem to favour the idea of AI via brain scanning to me. A car is not a synthetic horse. Calculators are not electronic abacuses. Solar panels are not technological trees. Deep Blue was not made of simulated neurons.

It's not clear that we will ever bother with uploads - once we have AI. It will probably seem like a large and expensive engineering project with dubious benefits.

@ Tim Tyler:

I'm not sure how we can come to rational agreement on the relative likelihoods of WBE vs AGI being developed first. I am not emotionally committed to either view, but I'd very much like to get my hands on what factual information we have.

I suspect that WBE should be considered the favorite at the moment, because human economic systems (the military, private companies, universities) like to work on projects where one can show incremental progress, and WBE/BCI has this property. There have recently been news stories about the US military engaging in brain simulation of a cat, and of them considering BCI technology to keep up with other military powers.

I think it's likely that we will understand AGI well enough on the WBE track, even if AGI is not developed independently before that, and as a result this understanding will be implemented before WBE sorts out all the technical details and reaches its goal. So, even if it's hard to compare the independent development of these paths, the dependent scenario leads to the conclusion that AGI will likely come before WBE.

"AI IS HARD."

While it is apparent when something is flying, it is by no means clear what constitutes the "I" of "AI". The comparison with flight should be banned from all further AI discussions.

I anticipate definition of "I" shortly after "I" is created. Perhaps, as is so often done in IT projects, managers will declare victory, force the system upon unwilling users and pass out T-shirts bearing: "AI PER MANDATUM" (AI by mandate).

Or perhaps you have a definition of "I"?

@Aron, wow, from your initial post I thought I was giving advice to an aspiring undergraduate, glad to realize I'm talking to an expert :-)

Personally, I continually bump up against performance limitations. This is often due to bad coding on my part and the overuse of Matlab for-loops, but I still have the strong feeling that we need faster machines. In particular, I think full intelligence will require processing VAST amounts of raw unlabeled data (video, audio, etc.) and that will require fast machines. The application of statistical learning techniques to vast unlabeled data streams is about to open new doors. My take on this idea is spelled out better here.

Anyone have any problems with defining intelligence as simply "mental ability"? People are intelligent in different ways, in accordance with their mental abilities, and IQ tests measure different aspects of intelligence by measuring different mental abilities.

