
September 28, 2008

Comments

" he turned out to be...
...a creationist."

On the one hand ... you find yourself impressed by the aura of '1000-year-old vampire' academics...

And on the other ... this guy worships a 2000-year-old bloke who asked people to drink his blood, and was famous for rising from his coffin.

Are your worldviews really so far apart? ;-)

I am a totally average student. Is it worth it for me to understand Bayesian reasoning, and might this investment help me in my life (as a venture capitalist, as a truth-seeker)?

Lithuania.

Oldreader, you can go quite a distance before you need Bayesian math, but if you can understand it without incredible difficulty, then it is worthwhile to learn the arithmetical basics even before you begin studying the less technical and more practical advice.
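The arithmetical core is just one repeated application of Bayes' theorem. Here is a minimal worked example in Python, with the numbers invented purely for illustration:

    # Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)
    # Invented numbers: a test that is 90% sensitive with a 5%
    # false-positive rate, applied to a hypothesis with a 1% prior.
    prior = 0.01
    p_e_given_h = 0.90      # P(evidence | hypothesis true)
    p_e_given_not_h = 0.05  # P(evidence | hypothesis false)

    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    posterior = p_e_given_h * prior / p_e
    print(posterior)        # ~0.154: strong evidence, yet still probably false

That one calculation is the arithmetic I mean.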

My faith in Omohundro was shaken a bit by the "weird psi experiments" reference - here, at 1:17:45.

Omohundro gently corrected a mathematical misapprehension I had about Gödel's Theorem, long after I thought I was done with it. I don't forget that sort of thing. (I plan to write it up here eventually.)

Frankly, I felt a bit like I did when Klaatu explained that the power of resurrection was "reserved to the Almighty Spirit" in "The Day the Earth Stood Still". Except that, in that case, it turned out there was a good explanation.

I find the following passage spine-tingling and goose-bump-inducing, and it's not the first time:

In the universe where everything works the way it common-sensically ought to, everything about the study of Artificial General Intelligence is driven by the one overwhelming fact of the indescribably huge effects: initial conditions and unfolding patterns whose consequences will resound for as long as causal chains continue out of Earth, until all the stars and galaxies in the night sky have burned down to cold iron, and maybe long afterward, or forever into infinity if the true laws of physics should happen to permit that. To deliberately thrust your mortal brain onto that stage, as it plays out on ancient Earth the first root of life, is an act so far beyond "audacity" as to set the word on fire, an act which can only be excused by the terrifying knowledge that the empty skies offer no higher authority.


Are the psychosomatic effects of your writing intentional? Do you consider, or even aim for, the possibility that, as a result, somewhere, someone is having a brief episode of being involuntarily pulled outside of themselves and realizing the terrifying immensity of it all?

Keep it up, because I don't think you can be reminded often enough of the realities of reality.

The benefits humanity has received from innovations have mostly come about through gradual improvements in existing products rather than through huge breakthroughs. For these kinds of innovations, 50 people with the minimal IQ needed to get a master's degree in chemistry (even if each of them believes that the Bible is the literal word of God) are far more valuable than one atheist with an Eliezer-level IQ.

Based on my limited understanding of AI, I suspect that AGI will come about through small continuous improvements in services such as Google search. Google search, for example, might get better and better at understanding human requests and slowly acquire the ability to pass a Turing test. And Google doesn't need a "precise theory to permit stable self-improvement" to continually improve its search engine.

*shrug* So he's a creationist; who cares? As long as he isn't a creationist who teaches evolutionary biology, I'm less than concerned with which role-playing games or religions one engages in during off hours. Is it silly? Entirely so. But I think we have too much of a hang-up over the intersection of religion and science. For a few it is a troublesome mixture indeed, but for many it is harmless cultural "play". Stating that you believe in "Creationism because the Bible says so" is only marginally more insane than saying "I believe free market capitalism exists". If you analyzed the belief system of any living human, you'd run up against many discontinuities.

You mixed the whole "AGI researcher is a creationist - is that OK?" question with the thread about relative ability in a field such as AGI. Science is firmly grounded in the idea of the individual generating science. Ideas, awards, and foundations are all geared toward recognizing the individual, at the expense of recognizing that science is very much a collective effort. Yes, there may be many "ordinary programmers" who show up on some AGI list with naive notions about the field, but give them a few years and a small percentage of them may become the technologists who help others like yourself toward an elegant solution.

Don't annoy the drooling masses, you may need them down the road. Science is not an individual art.

Eliezer,
How do you envision the realistic consequences of mob-created AGI? Do you see it creeping up piece by piece with successive improvements until it reaches a level beyond our control?

Or do you see it as something that will explosively take over once one essential algorithm has been put into place - something that could happen any day?

If a recursively self-improving AGI were created today, using technology with the current memory storage and speed, and it had access to the internet, how much damage do you suppose it could do?

I suspect that AGI will come about through small continuous improvements in services such as Google search

Google seem to be making a show of not trying.

Another possibility is stockmarket superintelligence - see my essay "The Awakening Marketplace".

They didn't skip it.

This is the most interesting and intriguing blog post on any subject I've read in several months.

James wrote:

"The benefits humanity has received from innovations have mostly come about through gradual improvements in existing products rather then through huge breakthroughs. For these kinds of innovations 50 people with the minimal IQ needed to get a masters degree in chemistry (even if each of them believes that the Bible is the literal word of God) are far more valuable than one atheist with an Eliezer level IQ."

Would you really be surprised by a 50-fold productivity difference between low-end (those just barely able to even attempt a task) and high-end mathematicians or computer programmers in developing new techniques and algorithms? Even on ordinary corporate software development projects there are order of magnitude differences in productivity on many tasks, differences which are masked by allocation of people to the tasks where they have the greatest marginal productivity.

There is a big difference between:

1. 4 geniuses with 200 passable assistants for grunt work will do better than 6 geniuses.

2. 2000 passable programmers will do better than 4 geniuses and 200 passable assistants.

Basic research. Fundamental research. Frontier research: stuff you don't see turning into applied research until relatively late, perhaps a decade or three later.

Eliezer: If you are a level below Jaynes, Evolution is at least a hundred levels below the average engineer. What happened to the small gap between Village Idiot and Einstein?

Lara Foster: I'm pretty sure that a recursively self-improving AGI with capabilities that were surprisingly above those of an IQ 130 human as frequently as they were below those of an IQ 130 human would have been able to develop into something irresistibly powerful if created a decade ago. I'd expect that this was possible two decades ago. Three decades is pushing it a bit, but just a bit.

Carl Shulman,

Under either your (1) or your (2), passable programmers contribute to advancement, so Eliezer's master's-in-chemistry guy can (if he learns enough programming to become a programming grunt) help advance the AGI field.

The best way to judge productivity differences is to look at salaries. Would Google be willing to pay Eliezer 50 times more than what it pays its average engineer? I know that managers are often paid more than 50 times what average employees are, but do pure engineers ever get 50 times more? I really don't know.

I'm pretty confident that 6 geniuses will do better than 2000 passable programmers in the long term and in most fields, though worse than 4 geniuses and 200 passable programmers.

I can't recall ever claiming that the chance is negligible that religionists will enter the AGI field. In fact, not long ago I began to anticipate that they would be among the first encountered expressing that they act on the possibility that they are confined and sedated, even given a toy universe that is matryoshka dolls indefinitely all the way in and all the way out for them.

James Miller: Temperamentally, managers who get 50 times more at effective companies have the skills of very good engineers plus a whole separate, also highly developed, skill set as managers. Also, managers paid 50 times more may be motivated not to leave for another company, but engineers paid 50 times more may, by temperament, be motivated instead to quit and dabble in programming for open-source projects. The market does pay excellent managers with excellent engineering skills 50 times a typical engineer's salary - as start-up founders, once they have saved a quarter to a half million from their salaries to get a company started.

Oh yeah, also, actual geniuses are, almost by definition, VERY rare. Einstein's market value was high, but there was no reason for his salary to be. The sort of thing he worked on wasn't very valuable in the short term.

Considering the wads of cash religion$ control, I wouldn't be surprised to find myself in a future where some sort of an Artificial General Irrationality project exists recursively improving its Worship Module.

"If you are a level below Jaynes, Evolution is at least a hundred levels below the average engineer."

I think there's a case to be made that evolution, sped up, say, a million times over, or ten, might be only several levels below the average human. (Especially if we're only considering the evolution of multicellular organisms with sexual recombination, which I suppose might be analogous to considering only software development using high-level languages.) And I'm willing to grant that million or ten just as a matter of conversational convenience.

Ah, here we go again with the "I'm so smart because I believe in a meaningless universe". And as usual, a creationist is brought out as a straw man. Non-reductionists always have to be judged according to the worst that can be dredged up from their ranks... of course, bring out the fact that Marx, Lenin, and Stalin were all staunch reductionists and *that's* just going off topic.

There is nothing "rational" about the particular brand of religious beliefs espoused by Eliezer. So-called "rationalists" love to point to Occam's Razor, which can actually support anything one wants just by choosing an appropriate definition of the word "simple". Or if they're more mathematically sophisticated, they'll put lipstick on that pig and use Solomonoff Induction instead, which once again can give anything one wants a high prior just by choosing the language used. Since there exists a Universal Turing Machine where the bitstring "0" emulates a universe that's like our own except the Earth is actually flat and people only think it's round because of a massive conspiracy, a "rationalist" would have the right to assign at least a 50% prior to that hypothesis if he wanted to (and that probability is not going to decrease, since P(people say Earth is round|Earth is round) = P(people say Earth is round|massive conspiracy)). And to claim that such a language is too "complex", would just be begging the question.

I agree there should be a strong prior belief that anyone pursuing AGI at our current level of overall human knowledge is likely quite ordinary, or at least failing to draw reasonably obvious conclusions.

"The benefits humanity has received from innovations have mostly come about through gradual improvements in existing products rather then through huge breakthroughs."

Except that the gradual improvements cannot occur without the breakthroughs.

"Eliezer: If you are a level below Jaynes, Evolution is at least a hundred levels below the average engineer. What happened to the small gap between Village Idiot and Einstein?"

Small differences can have very big effects.

"Under either your (1) or (2) passable programmers contribute to advancement, so Eliezer's Masters in chemistry guy can (if he learns enough programming to become a programming grunt) help advance the AGI field."

Without geniuses to guide their work, less intelligent persons are not going to make progress where new thinking is required.

"The best way to judge productivity differences is to look at salaries."

In modern industrial societies, the most highly creative and productive people (and investors) are grossly underpaid relative to the majority of people.

"the most highly creative and productive people (and investors) are grossly underpaid relative to the majority of people."

Do you mean to say that investors are underpaid, that investors aren't creative and productive people, or that investors aren't people? Hehe.

michael vassar,
You've quietly slid from engineers to programmers. Other kinds of engineers need a lot more money to make it a hobby. Maybe they make up for it with less variation in ability, but I doubt it. Even if you didn't mean to talk about other engineers, their situation needs explaining.

Speaking of creationism and AI, I always liked the dedication of Gerry Sussman's dissertation:

"To the Maharal of Prague, who was the first to realize that the statement 'God created man in His own image' is recursive"

Some context here. Sussman is definitely an above-average AI scientist.

Is it possible that humans might create a Blight-level Power AI? Sure. Is it possible that a monkey banging away on a keyboard might create the complete works of Shakespeare? Sure. I'm not going to hold my breath, though.

If groups of humans do manage to cobble together an AGI out of half-baked theories and random trial and error, it is likely to have about as much hope of recursively self-improving easily as a lone human performing neurosurgery on himself. Even given the tools to alter neural connections and weightings without damage, I don't see much hope of quick improvement.

Power-level intelligence requires Power-level optimisation power to create from nothing. If you can create a Power-level intelligence that optimises in the same way you would wish to, then by the definition you have given for optimisation power, the creator of that intelligence must have it.

Developing something that can become a Power-level AI first would be like accidentally creating a spaceship when trying to fly for the first time: trying to hit an infinitesimal target in optimisation space when you don't even know if you are in the right ballpark.

One of the main benefits I see from real AI is the intellectual shockwave that will hit humanity when we can demonstrate that intellect is naturalistic. A deep understanding of what we are is necessary for the further growth of humanity.

When experienced, celebrated AI researchers consistently say human-level AI looks a long way off, you say that means little - how could they know? And then you feel you have the sorting-hat vision to just chat with someone for a few minutes and know they couldn't possibly contribute to such progress.

Non-reductionists always have to be judged according to the worst that can be dredged up from their ranks...
I notice that you're using Reductionist language to express your thoughts, splitting up reality into various smaller concepts that then interact.

Perhaps you would care to express the best of Non-reductionism in non-reductive language, as a means of demonstration?

Take your time.

Eliezer: If you are a level below Jaynes, Evolution is at least a hundred levels below the average engineer. What happened to the small gap between Village Idiot and Einstein?

I was thinking, "Can one human engineer put forth an effort equivalent to a billion years of optimization by an evolution in one year? Doesn't seem like it. Million years? Sounds about right." So I said, "six levels". This isn't the same sort of level I use to compare myself to Jaynes, but then you couldn't expect that of a comparison between humans and evolutions.
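Spelled out, taking a "level" here as one order of magnitude of speedup, the arithmetic behind "six levels" is just

    \[
    \log_{10}\!\left(\frac{10^6\ \text{years of evolution}}{1\ \text{year of engineering}}\right) = 6.
    \]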

When experienced, celebrated AI researchers consistently say human-level AI looks a long way off, you say that means little - how could they know? And then you feel you have the sorting-hat vision to just chat with someone for a few minutes and know they couldn't possibly contribute to such progress.

One of these judgment problems is vastly easier than the other, and the easier one isn't timing the arrival of AI.

And I didn't say "can't contribute", I said they couldn't have cracked it.

Eliezer: One comment is that I don't particularly trust your capability to assess the insights or mental capabilities of people who think very differently from yourself. It may be that the people whose intelligence you most value (who you rate as residing on "high levels", to quasi-borrow your terminology) are those who are extremely talented at the kind of thinking you personally most value. Yet, there may be many different sorts of intelligent human thinking, some of which you may not excel at, may understand relatively little of, and may not be particularly good at assessing in others. And, it's not yet clear whether the style of intelligence that you favor (or the slightly different one that I tend to intuitively, and by personality-bias, favor) is the one that is most likely to lead to powerful, beneficial AGI ... or whether some other style of intelligence may be more effective in this regard....

I note again that objective definitions of general intelligence don't really exist except in the limit of massive computational processing power (and even there, they're controversial). So, assessing intelligence or capability in practice is a subtle matter ... and I don't particularly trust your analysis of intelligence in terms of a hierarchy of levels. I guess human intelligence is more messy, heterarchical and multifaceted than that. Of course, you can meaningfully construct hierarchies of intelligence in various areas, such as "mathematical theorem proving" or "theorem proving in continuous-variable analysis and related branches of math" ... or, say, "biology experimental design" or "software design", etc. But, when dealing with something like AGI that is poorly understood and may be amenable to a variety of different approaches, it's hard to say which of these domain-specific intelligences are going to be most critical to the effective solution of the AGI problem.

Maybe some of these scientists whom you dismiss as "mediocre level" according to the particular aspects of intelligence that you value most are actually "high level" according to other aspects of intelligence that you aren't able to recognize and evaluate so accurately ... and maybe some of these other aspects will turn out to be MORE valuable for the creation of AGI.

I'm not saying I have a strong feeling this is the case ... I'm just saying "maybe"....

Compared to you, I think I have a bit more humility about my capability to recognize what another person's capabilities really are. Yes, I can see how well they do on a test, or how clever they are in a conversation ... or what papers they publish. But how do I know what is in their mind that is not revealed to me explicitly, due to the strictures of their personality or culture? How do I know what is in their statements or works that I'm not well-suited to recognize, due to my own particular biases and limitations?

When I have to choose which scientist or engineer to hire or collaborate with, then I just make my best judgments ... and if I miss out on someone great due to my own limitations of vision, so be it ... but I personally tend to be more hesitant to consider either my own gut-level assessment of another's abilities, or performance on narrowly-specified test instruments, or success in social rituals like paper-publishing or university, as fundamentally indicative of someone's general intelligence or intellectual capability...

-- Ben G

Caledonian: I don't claim that reductions aren't often possible/useful; that claim would be just stupid :) Rather, what I oppose is reduction*ism*, the dogmatic belief that the Standard Model can explain everything. (Never mind that it can't even explain all of *known physics*...) Besides, I'm not nearly conceited enough to think of myself as the best.

Besides, if you aren't a p-zombie, then it's patently obvious why reductionism is flat-out wrong: you can't build a consciousness out of quarks and such (as they are described by current physics, anyway) any more than you could build a house out of water (one that would stay up at standard temperature and pressure; sorry, I suck at analogies). Consciousness isn't an "emergent" property.

To all claiming that the judgment is too subtle to carry out, agree or disagree: "Someone could have the knowledge and intelligence to synthesize a mind from scratch on current hardware, reliably as an individual rather than by luck as one member of a mob, and yet be a creationist."

Obviously I don't think my judgment is perfect; but I'm not trying to use it to make subtle distinctions between 20 almost-equally-qualified candidates during a job interview. So the question is, is such judgment good enough that it can make gross distinctions correctly, most of the time?

Robin Hanson correctly pointed out yesterday that if I find that people generally rated as top names seem visibly more intelligent to me, this doesn't necessarily verify either my own judgment, or the intelligence of these people; it may just mean that I tend to intuitively judge "intelligence" using the same heuristics that others do, which explains why the people were accepted into hedge funds, why various researchers are accepted as big-names, etc.

But I don't know how plausible that really is. For one thing, talking with Steve Omohundro or Sebastian Thrun about math, and judging them by that, the math itself isn't something that they could fake. Steve Jurvetson can't just fake being able to construct a good counterargument using good biology. I know I'm judging from more than the core things that can't be faked, but I don't see so much of a conflict between the fakeable and unfakeable parts. I've met people who struck me as socially awkward but mathematically intelligent, and they're not in hedge funds, but I don't judge their level to be low.

It's an interesting question, and I acknowledge the force of Hanson's argument yesterday...

...but I'm not willing to flush the judgment down the toilet unless there's some other gold standard I should be using instead.

I mean, really, a creationist? Am I supposed to ignore that, and assume that the universe works the way it should, and that my imperfect observations are just noise? To weaken evidence is to strengthen priors - what prior should I be using here? In interviews you just use the GPA, or something like that, and the failure of interviewer judgment is the failure to do better than the GPA. What do I use here, if not the should-universe that is clearly wrong? If I just assume that everyone involved is a literally average scientist, that actually downgrades them.

if you aren't a p-zombie
I just happen to be a p-zombie.


Did you read Eliezer's Generalized Anti-Zombie Principle?

Rather, what I oppose is reduction*ism*, the dogmatic belief that the Standard Model can explain everything. (Never mind that it can't even explain all of *known physics*...)
Most (all?) self-described reductionists believe the Standard Model is incomplete and needs something more to reconcile relativity with quantum mechanics. They just think the complete Unified Theory of Everything will have reductionist explanations for everything.

TGGP: I'm not an epiphenomenalist. My guess is if a non-conscious version of me was created, if such a thing is possible, it would claim to have been wrong all along and become an Eliezer follower.

A reductionist explanation for *everything* would, to avoid having anything irreducible, have to be infinitely deep; either the theory itself would be never-ending, or it would be self-referential. Is that what you believe?

A sensible reductionist theory doesn't claim that everything is reducible to something more basic. It claims that everything is reducible to a set of fundamental entities, (which are not in turn reducible to anything else,) governed by consistent laws.

Scenario:

A potentially hostile foreign country is making tremendous progress in AGI; they've already appointed it to several governmental and research positions and are making a huge sucking noise on the money market thanks to their baby/juvenile-AGI that is about to turn mature any month/week/day/hour now.

This calls for an AGI Manhattan Project!

What problems does the project director face? What is the optimum number of geniuses working on AGI? Can there be too many? Where do we get them from? How do we choose them?

How was the real Manhattan Project structured? How wide was the top of the pyramid? How many individuals contributed to the key insights and breakthroughs?

"baby/juvenile-AGI that is about to turn mature any month/week/day/hour now.

This calls for an AGI Manhattan Project!"

Probably too late for a Manhattan Project to be the appropriate response at that point. Negotiation or military action seem more feasible.

Eliezer said:

***
To all claiming that the judgment is too subtle to carry out, agree or disagree: "Someone could have the knowledge and intelligence to synthesize a mind from scratch on current hardware, reliably as an individual rather than by luck as one member of a mob, and yet be a creationist."
***

Strongly agree.

I'm not making any specific judgments about the particular Creationist you have in mind here (and I'm pretty sure I know who you mean)... but I see no reason to believe that Creationism renders an individual unable to solve the science and engineering problems involved in creating AGI. Understanding *mind* is one thing ... beliefs about cosmogony are another...

I note that there are many different belief systems lumped under the label of "Creationism" ... not all of them are stupid or anti-intellectual.... (Though, I do not accept any of them myself, being a lifelong atheist...)

And, there may be a statistical anticorrelation between Creationism and IQ ... but it's not so strong a relationship as to let you draw useful conclusions about individual cases in the face of more particular information about the people in question...

-- Ben G

People with apparently irrational religious views have had major insights into technical areas of philosophy and into the theory of rationality:

Thomas Bayes
Robert Aumann
Saul Kripke
Hilary Putnam

I'm sure there are others, but these are the best known examples. Putnam was also a Maoist for a while. A number of top German scientists worked for the Nazis, having seen their Jewish colleagues chased out of their university positions.

Some people with scientific accomplishments have been positively crazy, in fact - e.g. Kary Mullis, who developed the polymerase chain reaction, winning a Nobel Prize. In 1992, Mullis founded a business with the intent to sell pieces of jewelry containing the amplified DNA of deceased famous people like Elvis Presley and Marilyn Monroe. He's also an AIDS denier and a global warming skeptic.

Eli:

I don't know what it would take to synthesize a mind from scratch on current hardware, but I do think there are creationists who are at least significantly above my level. I don't know of any personally, but I do have a creationist friend who is a good enough thinker that, while I don't think he's better than me, the fact that I just happened to meet him (our parents were friends) suggests that there are other creationists who are.

I'm not sure where this sequence of posts is going, but I feel I should use the opportunity to advertise my own status as somewhere way above average and yet extremely badly positioned to use my abilities. I consider that what I should be working on is something like the Singularity Institute's agenda, but with the understanding that today's scientific ontology is radically incomplete on at least two fronts, and that fundamental new ontological ideas are therefore required. Eliezer has repeatedly made the point that getting AGI and FAI right is far more difficult and far more important than is appreciated by most people attracted to the subject. Something similar may be said regarding people's ideas about ontology, the basic nature of reality. There is a terrible complacency among people who have assimilated the ontological perspectives of mathematical physics and computer science, and the people who do object to the adequacy of naturalism are generally pressing in a retrograde direction.

Experience suggests that it's extremely unlikely that this message will improve anything, and so I'll just have to save myself, but it is all nonetheless true.

Peter: I disagree. I met that friend, and while he's not even the smartest creationist I have met, he isn't close to your level. Not remotely. I think it somewhat unlikely that there are creationists at your level (Richard Smalley included) and would be astounded if there were any at mine. Well... I mean avowed and sincere biblical literalists; there might be all sorts of doctrines that could be called creationist.

what I oppose is reduction*ism*, the dogmatic belief that the Standard Model can explain everything.

That's not what "reductionism" means - emphasis or no emphasis.

Eliezer,

Could you elaborate a little more on the danger of AGI being invented by a large crowd of mediocre researchers?

Why would it be more dangerous than an AGI breakthrough made in a single lab?

From my perspective -- the more people are involved in the invention -- the safer it is for the whole society.

Rather, what I oppose is reduction*ism*, the dogmatic belief that the Standard Model can explain everything.

No one who believes the current Standard Model can explain everything is a scientist... or rational... or well-educated. Or mediocrely-educated. Or even poorly-educated. Even a schoolchild should know better.

In short, I rather doubt that anyone with any credibility at all holds the belief you're talking about. You oppose a ludicrous position that is highly unlikely to exist as a vital, influential entity. It is almost certainly a strawman.

I think of the quest for AGI as more like a chess game.

The greatest chess master in the world doesn't know "how to win a chess game". He doesn't sit down at the board knowing how he's going to win.

He knows, mostly, how to make one move at a time that puts him in a better position than he was before.

I played lousy chess when I tried to plan ahead. I played much better when I played it one move at a time, using lookahead only to avoid traps. In real life, lookahead is less important than in chess, because of the less-discrete, smoother nature of the search space.
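Here is a minimal sketch of that playing style in Python; the game object and evaluate function are hypothetical stand-ins for illustration, not any real chess library:

    # Greedy play with one ply of lookahead used only to avoid traps.
    # 'game' is any object with legal_moves() and apply(move) -> new state;
    # 'evaluate' scores a position from the mover's point of view.
    def choose_move(game, evaluate):
        best_move, best_score = None, float("-inf")
        for move in game.legal_moves():
            nxt = game.apply(move)
            # Trap check: assume the opponent picks their best reply, so a
            # move is scored by the worst position it can lead to.
            score = min((evaluate(nxt.apply(reply))
                         for reply in nxt.legal_moves()),
                        default=evaluate(nxt))
            if score > best_score:
                best_move, best_score = move, score
        return best_move

No plan, no deep search: just take the move that most improves the position, vetoing anything that hands the opponent an immediate win.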

So I think that a person of high intelligence, who has the modesty to know that he can look only a short distance ahead, has a better chance of getting to AGI than a genius who is overconfident. Slow and steady will win this race.

(Not that I think "a person" will create AGI. That is like expecting "a person" to cure cancer. It takes tens of thousands of people, and tens of billions of dollars, just to create a new version of Windows. Most programmers believe that this is because Microsoft is so inefficient at coding. But most programmers form their notions of how difficult large software projects are by seeing how difficult small projects are and scaling linearly - and difficulty does not scale linearly with complexity.)
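To put a number on that last point (Brooks's old observation, not mine): with n programmers there are n(n-1)/2 possible pairwise communication channels, so coordination overhead grows quadratically while the workforce grows only linearly:

    \[
    \binom{n}{2} = \frac{n(n-1)}{2}, \qquad \binom{10}{2} = 45, \qquad \binom{10^4}{2} \approx 5\times 10^7.
    \]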

