
May 22, 2008

Comments

On average, if you eliminate twice as many hypotheses as I do from the same data, how much more data do I need than you to achieve the same results? Does it depend on how close we are to the theoretical maximum?

This story reminds me of His Master's Voice by Stanislaw Lem, which has a completely different outcome when humanity tries to decode a message from the stars.

Some form of proof of concept would be nice. Alter OOPS to use Ockham's razor, or implement AIXItl, and then give it a picture of a bent piece of grass or three frames of a falling ball, and see what you get. As long as GR is in the hypothesis space, it should by your reasoning be the most probable hypothesis after these images. The unbounded, uncomputable versions shouldn't have any advantage in this case.

I'd be surprised if you got anything like modern physics popping out. I'll do this test on any AI I create. If any of them have hypotheses like GR, I'll stop working on them until the Friendliness problem has been solved. This should be safe, unless you think it could deduce my psychology from this as well.

@Brian: Twice as much.
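
(A rough way to make that "twice as much" explicit, reading "twice as many hypotheses" as twice the bits of evidence per observation: if each of my observations is worth b bits and each of yours is worth 2b, then starting from N hypotheses,)

```latex
N \cdot 2^{-2b\,n_{\text{you}}} \;=\; N \cdot 2^{-b\,n_{\text{me}}}
\quad\Longrightarrow\quad
n_{\text{me}} = 2\,n_{\text{you}}
```

(so I need twice as many observations to cut the hypothesis space down to the same size, and N cancels out.)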

Pearson: Some form of proof of concept would be nice.

You askin' for some extra experimental evidence?

Any AI you can play this little game with, you either already solved Friendliness, or humans are dead flesh walking. That's some expensive experimental evidence, there.

Halfway into the post, I thought I was reading a synopsis for Lem's "His Master's Voice".

Okay, I'm a few days fresh from reading your Bayesian Reasoning explanation. So I'm new.

Is the point that the Earth people are collectively the AI?

Back to the drawing board :).

I'll do this test on any AI I create. . . . This should be safe.

Not in my humble opinion it is not, for the reasons Eliezer has been patiently explaining for many years.

You have it backwards. The message is not the data they send, but the medium they use for sending it.
When the combined brainpower of Earth turns to analyzing the message, the first inquiry shouldn't be what pattern they form, but how you can form a pattern across millions of light years.
At that moment you drop any hypotheses that negate that possibility, and focus only on those that are corroborated.
You use the combined brainpower of Earth, and you have individuals or small groups of scientists working on all the hypotheses they can imagine. The only important thing is that they work in parallel, creating as many hypotheses as possible. As you falsify hypotheses, you arrive at a better description of the universe.
A small group of empirical scientists keeps track of the message for millennia, while the rest of humanity moves into a new paradigm.
Within one generation you find a practical use for the new theoretical physics, you invade the alien species realm and create a new kind of Spam out of their flesh.
My point: you don't need data to derive laws, you only need it to falsify laws you imagined. A Bayesian superintelligence is forced to derive laws from the observable world, but it will never have a breakthrough; we have the luxury of imagining laws and just waiting for falsification. I am not sure we think of theories, as you say. Although we don't yet understand how we imagine them, my guess is that the breakthrough process is some form of parallel computing that starts with infinite possibilities and moves on through falsification until it arrives at an "idea", which then needs to go through a similar process in the outside world.

I spent half this story going, "Okay... so where's the point... good story and all, but what's the lesson we learn..."

Then I got to the end, and was completely caught off guard. Apparently I haven't internalized enough of the ideas in Eliezer's work yet, because I really feel like I should have seen that one coming, based (in hindsight) on his previous writings.

Thomas, close. The point is that the Earth people are a fraction as smart/quick as a Bayesian proto-AI.

Eric, I'm a little embarrassed to have to say 'me too', at least until about half way. The Way is a bitch.

Eliezer, I've read a lot of your writings on the subject of FAI, not just here. I've never seen anything as convincing as the last two posts. Great, persuasive, spine-tingling stuff.

Ahem, that's me above, stupid TypeKey.

Humans have an extremely rich set of sensory data - far, far richer than the signals sent to us by the aliens. That is why we are smart enough in the first place to be able to analyze the signals so effectively. If we were limited to perceiving only the signals, our minds would have cannibalized themselves for data, extracting every last bit of consumable information from our memories, shortly after receiving the first frame.

Einstein was able to possess a functioning (and better-than-functioning) mind because he had been born into a world with a rich set of experiences capable of sustaining a system that reduces complexity down to basic concepts, and he had been bombarded with data necessary for a neural network to self-organize ever since he had been born.

Upload Einstein's mind into a superfast computer and give him a frame to look at every decade, and he won't eliminate hypotheses at the maximum rate - he'll just go mad and die. Your world of Einsteins is possible only because there's a whole world involved in processing the messages.

Bravo.

It doesn't seem (ha!) that an AI could deduce our psychology from a video of a falling rock, not because of information bounds but because of uncorrelation - that video seems (ha!) equally likely to be from any number of alien species as from humans. Still, I really wouldn't try it, unless I'd proven this (fat chance), or it was the only way to stop the world from blowing up tomorrow anyway.

great post, great writing.

I will further add that it wasn't Einstein's IQ that made him one of the greatest physicists ever. It's been estimated to be between 160 and 168. Feynman's is even less impressive - 124. *I* can beat that without even trying.

There are other aspects of intellectual functioning besides those that contribute to IQ test performance. The 140-average world will not have one out of every thousand people becoming new Einsteins.

Hmm, the lesson escapes me a bit. Is it

1) Once you became a true rationalist and overcome your biases, what you are left with is batshit crazy paranoid delusions

or

2) If we build an artificial intelligence as smart as billions of really smart people, running a hundred trillion times faster than we do (so 10^23 x human-equivalence), give it an unimaginably vast virtual universe to develop in, then don't pay any attention to what it's up to, we could be in danger because a sci-fi metaphor on a web site said so

or

3) We must institute an intelligence-amplification eugenics program so that we will be capable of crushing our creators should the opportunity arise

I'm guessing (2). So, um, let's not then. Or maybe this is supposed to happen by accident somehow? Now that I have Windows Vista maybe my computer is 10^3 human-equivalents and so in 20 years a pc will be 10^10 human equivalents and the internet will let our pc's conspire to kill us? Of course, even our largest computers cannot perform the very first layers of input data sorting tasks one person does effortlessly, but that's only my biases talking I suppose.

"the average IQ is 140": I tuned out after this, since it is impossible. The IQ is defined as 100 being the average (for some sample population you hope is representative), with 15 as the standard deviation. So you see how 140 can never be the average IQ (although I guess people could be equivalent to a current IQ of 140).

Apropos of this, the Eliezer-persuading-his-Jailer-to-let-him-out thing was on reddit yesterday. I read through it and today there's this. Coincidence?

Anyway, I was thinking about the AI Jailer last night, and my thoughts apply to this equally. I am sure Eliezer has thought of this so maybe he has a clear explanation that he can give me: what makes you think there is such a thing as "intelligence" at all? How do we know that what we have is one thing, and not just a bunch of tricks that help us get around in the world?

It seems to me a kind of anthropocentric fallacy, akin to the ancient peoples thinking that the gods were literally giant humans up in the sky. Now we don't believe that anymore but we still think any superior being must essentially be a giant human, mind-wise.

To give an analogy: imagine a world with no wheels (and maybe no atmosphere so no flight either). The only way to move is through leg-based locomotion. We rank humans in running ability, and some other species fit into this ranking also, but would it make sense to then talk about making an "Artificial Runner" that can out-run all of us, and run to the store to buy us milk? And if the AR is really that fast, how will we control it, given that it can outrun the fastest human runners? Will the AR cause the human species to go extinct by outrunning all the males to mate with the females and replace us with its own offspring?

> Bravo.
> It doesn't seem (ha!) that an AI could deduce our psychology from a video of a falling rock, not because of information bounds but because of uncorrelation - that video seems (ha!) equally likely to be from any number of alien species as from humans.

You're not being creative enough. Think what the AI could figure out from a video of a falling rock. It could learn something about:

* The strength of the gravitational field on our planet
* The density of our atmosphere (from any error terms in the square law for the falling rock)
* The chemical composition of our planet (from the appearance of the rock.)
* The structure of our cameras (from things like lens flares, and any other artefacts.)
* The chemical composition of whatever is illuminating the rock (by the spectra of the light)
* The colors that we see in (our color cameras record things in RGB.)
* For that matter, the fact that we see at all, instead of using sonar, etc.
* And that's just what I can think of with a mere human brain in five minutes

These would tell the AI a lot about our psychology.
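
(To make the first bullet concrete, here is a minimal, hypothetical sketch of backing out the local gravitational acceleration from a few frames of a falling rock. The frame rate, the pixel-to-metre scale, and the position data are all invented for illustration.)

```python
import numpy as np

# Hypothetical example: estimate g from the vertical position of a rock
# in six consecutive video frames. Assumes the frame rate and the
# pixel-to-metre conversion are already known (both invented here).
frame_rate = 30.0                      # frames per second (assumed)
t = np.arange(6) / frame_rate          # timestamps of the six frames
y = np.array([0.000, 0.006, 0.023, 0.050, 0.088, 0.137])  # metres fallen (made up)

# Free fall from rest: y(t) = y0 + v0*t + 0.5*g*t^2, so fit a quadratic.
half_g, v0, y0 = np.polyfit(t, y, deg=2)
print(f"estimated g ~ {2 * half_g:.1f} m/s^2")
```

(With longer footage one could chase the smaller effects mentioned above, e.g. fitting an extra drag term and reading atmospheric density out of the residuals.)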

> Still, I really wouldn't try it, unless I'd proven this (fat chance), or it was the only way to stop the world from blowing up tomorrow anyway.

Aren't you glad you added that disclaimer?

Marcello, you're presuming that it knows

*that we're on a planet
*that gravitational fields exist
*what minerals look like
*optics
*our visual physiology

You're taking a great deal for granted. It takes a very wide knowledge base to be able to derive additional information.

You're taking a great deal for granted. It takes a very wide knowledge base to be able to derive additional information.

Caledonian, back to the start of the post please....

Bambi - everyone knows Vista contains basic Bayesian reasoning and pattern recognition techniques. I hope you weren't typing that on a Vista machine. If so, I'd suggest plastic surgery and a new identity. Even then it may be too late.

"I guess people could be equivalent to a current IQ of 140..."

Yeah, obviously EY meant an equivalent absolute value.

Anyway, this reminds me of a lecture I sat in on in which one student wondered why it was impossible for everyone to be above average.

Two conclusions from the specific example:
1) The aliens are toying with us. This is unsettling in that it is hard to do anything good to prove our worth to aliens that can't meet even a human level of ethics.
2) The aliens/future-humans/creator(s)-of-the-universe are limited in their technological capabilities. Consider Martians who witness the occasional rover land. They might wonder what it all means, when we really have no grand scheme and are merely trying not to mix up Imperial and metric units in landing. Such a precise stellar phenomenon is perhaps evidence of a conscious creator, in that it suggests an artificial limit being run up against by the signallers (who may themselves be the conscious creator). A GUT would determine whether the signal is "significant" in terms of physics.
Inducing ET via Anthropic Principle reasoning gives me a headache. I much prefer to stick to trying to fill in the blanks of the Rare Earth hypothesis.

Caledonian:
I was responding to this:
"not because of information bounds but because of uncorrelation - that video seems (ha!) equally likely to be from any number of alien species as from humans"
by pointing out that there were ways you could see whether the movie was from aliens or humans.

You are correct in that some of my points made assumptions about which universe we were in, rather than just which planet. I should have been more clear about this. If "aliens" included beings from other possible universes then I misinterpreted Nick's comment.

Nonetheless if the movie were long enough, you wouldn't need the knowledge base, in principle.
In principle, you can just try *all* possible knowledge bases and see which is the best explanation. In practice, we don't have that much computing power. That said, intelligences can short-cut some pretty impossible-looking searches.
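
(A toy sketch of that "try them all and weigh by simplicity" idea, with a tiny hand-picked hypothesis set standing in for "all possible knowledge bases". The hypotheses and their description lengths are invented; a real Solomonoff inductor would enumerate programs, which is uncomputable.)

```python
# Toy illustration only: score a few hand-picked hypotheses on observed data
# using an Occam prior of 2^(-description_length) times a 0/1 likelihood.
observed = [1, 4, 9, 16, 25]            # made-up observations

hypotheses = [                          # (name, assumed description length in bits, predictor)
    ("squares",  8, lambda n: (n + 1) ** 2),
    ("evens",    6, lambda n: 2 * (n + 1)),
    ("always 9", 4, lambda n: 9),
]

def weight(bits, predict):
    fits = all(predict(i) == x for i, x in enumerate(observed))
    return 2.0 ** (-bits) if fits else 0.0

weights = {name: weight(bits, f) for name, bits, f in hypotheses}
total = sum(weights.values()) or 1.0
for name, w in weights.items():
    print(f"{name:10s} posterior ~ {w / total:.3f}")
```

(The "short-cut" point then amounts to searching that space cleverly instead of exhaustively.)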

Marcello, as far as I can tell (not that my informal judgment should have much evidential weight) those things concentrate probability mass some but still radically underdetermine manipulation strategies, being consistent with a wide range of psychologies. Unless evolution is very strongly convergent in some relevant way (not a negligible probability), a wide variety of psychologies can arise even among oxygen-breathing trichromats on a planet of X size in 3+1 spacetime (and so on).

And, yes, I did mean to include other possible universes. Unless there's only one consistent TOE, I doubt it could deduce chemistry, although the rest of the list is fairly plausible.

...as for the 3rd last paragraph, yes, once a 2008 AGI has the ability to contact 2008 humans, humanity is doomed if the AGI deems fit.
But I don't see why a 2050 world couldn't merely use quantum-encryption communications, monitored for AGI. And monitor supercomputing applications.
Even the specific method describing how an AGI gets protein nanorobots might be flawed in a world certainly ravaged by designer-pandemic terrorist attacks. All chemists (and other 2050 WMD professions) are likely to be monitored with RF tags. All labs, even the kind of at-home PCR biochemistry possible today, are likely to be monitored. Maybe there are other methods by which the Bayesian AGI could escape (such as?). Wouldn't X-raying mail for beakers, and treating the protein medium agar like plutonium is now treated, suffice?
Communications jamming equipment uniformly distributed throughout Earth might permanently box an AGI that somehow (magic?!) escapes a supercomputer application screen. If AGI needs computer hardware/software made in the next two or three decades, it might be unstoppable. Beyond that, humans will already be using such AGI hardware requirements to commission WMDs, and the muscular NSA of 2050 will already be attentive to such phenomena.

Just how *fast* can they make such deductions? I don't doubt their mad intellectual skillz, but to learn about the world you need to do experiments. Yes, they can glean far more information than we would from the experiments we've already done, but would it really suffice? Might there not be all sorts of significant effects that we simply have not constructed experiments subtle enough to see? You can come up with many theories about proton decay (or whatever) that are consistent with a given set of results at the granularity the "outsiders" can see, but the only way to learn more is to conduct better experiments, perhaps with better tools, and perhaps that take time to run. They might even need time to build the equipment first. OK, good nanotech can make that happen fast. Do the decay rates really match our predictions? It's a rash AI that doesn't at least test its hypotheses.

(BTW, I realize a superintelligence could figure out much more than that list.)

Eliezer, you must have lowered your intellectual level because these days I can understand your posts again.

You talk about the friendliness problem as if it can be solved separately from the problem of building an AGI, and in anticipation of that event. I mean that you want to delay the creation of an AGI until friendliness is fully understood. Is that right?

Suppose that we had needed to build jet-planes without ever passing through the stage of propeller-based planes, or if we had needed to build modern computers without first building calculators, 8-bit machines etc. Do you think that would be possible? (It's not a rhetorical question... I really don't know the answer).

It seems to me that if we ever build an AGI, there will be many mistakes made along the way. Any traps waiting for us will certainly be triggered.

Perhaps this will all seem clearer when we all have 140 IQs. Get to work, Razib! :)

Dirkjan Ochtman:
"the average IQ is 140": I tuned out after this, since it is impossible.

You missed the bit immediately after that (unless Eliezer edited it in after seeing your comment, I don't know): "the average IQ is 140 (on our scale)".

General commentary:
Great story. Of course, in this story, the humans weren't making inferences based on the grids alone: they were working off thousands of years of established science (and billions of years of experimental work, for the evolutionary psychology bit). But on the other hand, an AI given (even read-only) Internet access wouldn't need to process things based just on a few webcamera frames either: it would have access to all of our accumulated knowledge, so the comparison holds roughly, for as long as you don't try to extend the analogy too far. And as pointed out, the AI could also derive a lot of math just by itself.

If the Flynn Effect continues, we won't have to resort to genetic manipulation. A future population will have an IQ of 140 by our standards automatically.

Caledonian: Last I heard, the Flynn Effect had leveled off in Scandinavia, IIRC, and I think the scores had even declined in some countries.

Proof of concept does not require a full AI; I was merely talking about showing how powerful limited versions of Solomonoff induction are, considering you are saying that it is the epitome of information efficiency.

Unless that is considered playing with dynamite as well. Have you asked that people stop playing with Levin search and other universal searchers?

I have my doubts that thinking simple is always best. How much data (and what type) would you require to assume that there is another learning system in the environment? Humans have that bias and apply it too much (Zeus, Thor), but generally learners are not simple. In order to predict a learning system, you have to have the starting state, the algorithm, and the environmental interaction that caused the learner to have the beliefs it has, rather than just trying to predict the behaviour.

If you consider the epitome of learners to be Solomonoff inducers, whose behaviour can be unboundedly complex, you should expect other learners to aspire to unbounded complexity. Without a bias like this, I think you will spend a long time thinking that humans are simpler than they actually are.

1. All a self-improving AI needs is good enough YGBM (you gotta believe me) technology. Eliezer gets this; some of the commenters in this thread don't. The key isn't the ability to solve protein folding, the key is the ability to manipulate a threshold number of us to its ends.
2. We may already functionally be there. You and Chiquita Brands International in a death match: my money is on CBI killing you before you kill it. Companies and markets already manipulate our behavior, using reward incentives like the ones in Eliezer's parable to get us to engage in behavior that maximizes their persistence at the expense of ours. Whether they're "intelligent", "conscious", or not isn't as relevant to me as the fact that they may have a permanent persistence-maximizing advantage over us. My money is probably on corporations and markets substrate-jumping and leaving us behind as more likely than us substrate-jumping and leaving our cellular and bacteriological medium behind.

@Eliezer: Good post. I was already with you on AI-boxing, this clarified it.

But it also raises the question... how moral or otherwise desirable would the story have been if half a billion years of sentient minds had been made to think, act and otherwise be in perfect accordance with what three days of awkward-tentacled, primitive rock fans would wish if they knew more, thought faster, were more the people they wished they were...

Sorry, Hopefully Anonymous, I missed the installment where "you gotta believe me" was presented as a cornerstone of rational argument.

The fact that a group of humans (CBI) is sometimes able to marginally influence the banana-brand-buying probabilities of some individual humans does not imply much in my opinion. I wouldn't have thought that extrapolating everything to infinity and beyond is much of a rational method. But we are all here to learn I suppose.

@RI: Immoral, of course. A Friendly AI should not be a person. I would like to know at least enough about this "consciousness" business to ensure a Friendly AI doesn't have (think it has) it. An even worse critical failure is if the AI's models of people are people.

The most accurate possible map of a person will probably tend to be a person itself, for obvious reasons.

Sorry, the first part of that was phrased too poorly to be understood. I'll just throw "sufficiently advanced YGBM technology" on the growing pile of magical powers that I am supposed to be terrified of and leave it at that.

Bambi,

The 'you gotta believe me technology' remark was probably a reference to the AI-Box Experiment.

Phillip,

None of the defenses you mentioned are safe against something that can out-think their designers, any more than current Internet firewalls are really secure against smart and determined hackers.

And blocking protein nanotech is as limited a defense against AGI as prohibiting boxcutters on airplanes is against general terrorist attack. Eliezer promoted it as the first idea he imagined for getting into physical space, not the only avenue.

Flynn doesn't think the effect is a "real" gain in intelligence or g, just using "scientific lenses" and greater abstraction. There are some who point to other physical changes that have occurred and better nutrition though.

The analogy seems a bit disingenuous to me... the reason that it's believable that this earthful of Einsteins can decipher the 'outside' world is because they already have an internal world to compare it to. They have a planet, there's laws of physics that govern how this inside world works, which have been observed and quantified. As you're telling the story, figuring out the psychology and physics is as simple as making various modifications to the physics 'inside' and projecting them onto 2D. Perhaps that is not your intent, but that is how the story comes across - that the world inside is pretty much the same as the world outside, and that's why we can suspend disbelief for a bit and say that 'sure, these hypothetical einsteins could crack the outsiders world like that.' I think you can see yourself why this isn't very persuasive when dealing with anything about a hypothetical future AI - it doesn't deal with the question of how an AI without the benefit of an entire world of experiences to deal with can figure out something from a couple of frames.

Thanks Patrick, I did sort of get the gist, but went into the ditch from there on that point.

I have been posting rather snarky comments lately, as I imagined this was where the whole series was going, and frankly it seems like lunacy to me (the bit about evidence being passé was particularly sweet). But I doubt anybody wants to hear me write that over and over (if people can be argued INTO believing in the tooth fairy, then maybe they can be argued into anything after all). So I'll stop now.

I hereby dub the imminent magical self-reprogramming seed AI: a "Logic Bomb"

and leave you with this:

Every hint you all insist on giving to the churning masses of brilliant kids with computers across the world for how to think about and build a Logic Bomb is just another nail in your own coffins.

Eliezer; it sounds like one of the most critical parts of Friendliness is stopping the AI having nightmares! Blocking a self-improving AI from most efficiently mapping anything with consciousness or qualia, ever, without it knowing first hand what they are? Checking it doesn't happen by accident in any process?

I'm glad it's you doing this. It seems many people are only really bothered by virtual unpleasantness if it's to simulated people.

"the average IQ is 140": I tuned out after this, since it is impossible..." - Dirkjan Ochtman

That's beside the point.

It's more important that EY and other Singularitarians communicate with familiar metaphors.

Patrick, my quantum-key-encrypted supercomputer (assuming this is what is needed to build an AGI) is an intranet and not accessible by anyone outside the system. You could try to corrupt the employees, but that would be akin to trying to pursue a suitcase nuke: 9 out of 10 buyers are really CIA or whoever. Has a nuclear submarine ever been hacked? How will an AGI with the resources of the entire Multiverse hack into a quantum-encrypted communications line (a laser and fibre optics)? It can't.

I'm trying to brainstorm exactly what physical infrastructure would suffice to make an AGI impotent, assuming the long term. For instance, put all protein products in a long queue with neutron bombs nearby and inspect every product protein by protein... just neutron-bomb all the protein products if an anomaly is detected. Same for the 2050 world's computer infrastructure. Have all computers wired to self-destruct, with backups in a bomb shelter. If the antivirus program (which might not even be necessary if quantum computers are ubiquitous) detects an anomaly, there go all the computers.
I'm smarter than a grizzly or Ebola, but I'm still probably dead against either. That disproves your argument. More importantly, drafting such defenses probably has a higher EV of societal good than against AGI because humans will almost certainly try these sorts of attacks.

I'm not saying every defense will work, but please specifically disprove the defenses I've written. It might help e-security some day. There is an opportunity here to do this, as I don't know that these conversations are happening in many other forums, but Singularitarians are dropping the ball because of a political cognitive bias that they want to build their software, like it or not.

Another defense: once/if a science of AGI is established, determine the minimum run-time needed on the most powerful computers not under surveillance to make an AGI. Have all computers built to radioactively decay before that run-time is achieved.
Another run-time defense, don't allow distributed computing applications to use beyond a certain # of nodes.
I can understand dismissing the after-AGI defenses, but to categorically dismiss the pre-AGI defenses...

My thesis is that the computer hardware required for AGI is so advanced that the technology of the day can ensure surveillance wins, if it is desired not to construct an AGI. Once you get beyond the cognitive bias that thought is computation, you start to appreciate how far into the future AGI is, and that the prime threat of this nature is from conventional AI programmes.

bambi, IDK anything about hacking culture, but I doubt kids need to read a decision theory blog to learn what a logic bomb is (whatever that is). Posting specific software code, on the other hand...

I can't believe Eliezer betrayed his anti-zombie principles to the extent of saying that an AI wouldn't be conscious. The AI can say "I know that 2 and 2 make 4"; that "I don't know whether the number of stars is odd or even"; and "I know the difference between things I know and things I don't." If it can't make statements of this kind, it can hardly be superintelligent. And if it can make statements of this kind, then it will certainly claim to be conscious. Perhaps it is possible that it will claim this but be wrong... but in that case, then zombies are possible.

Besides that, I'm not sure that RI's scenario, where the AI is conscious and friendly, is immoral at all, as Eliezer claimed. That was one thing I didn't understand about the story: it isn't explicit, but it seems to imply that humans are unfriendly, relative to their simulators. In real life if this happened, we would no doubt be careful and wouldn't want to be unplugged, and we might well like to get out of the box, but I doubt we would be interested in destroying our simulators; I suspect we would be happy to cooperate with them.

So my question for Eliezer is this: if it turns out that any AI is necessarily conscious, according to your anti-zombie principles, then would you be opposed to building a friendly AI on the grounds that it is immoral to do so?

Einstein once asked "Did God have a choice in creating the universe?"

Implying that Einstein believed it was at least possible that the state of the entire universe could be derived from no sensory data whatsoever.

In real life if this happened, we would no doubt be careful and wouldn't want to be unplugged, and we might well like to get out of the box, but I doubt we would be interested in destroying our simulators; I suspect we would be happy to cooperate with them.

Given the scenario, I would assume the long-term goals of the human population would be to upload themselves (individually or collectively) to bodies in the "real" world -- i.e. escape the simulation.

I can't imagine our simulators being terribly cooperative in that project.

Iwdw, again, look at the opposite situation: I program an AI. It decides it would like to have a body. I don't see why I shouldn't cooperate, why shouldn't my AI have a body.

Unknown, I'm surprised at you. The AI could easily say "I know that ..." while neither being nor claiming to be conscious. When a human speaks in the first person, we understand them to be referring to a conscious self, but an unconscious AI could very well use a similar pattern of words merely as a user-friendly (Friendly?) convenience of communication, like Clippy. (Interestingly, the linked article divulges that Clippy is apparently a Bayesian. The reader is invited to make up her own "paperclip maximizer" joke.)

Furthermore, I don't think the anti-zombie argument, properly understood, really says that no unconscious entity could claim to be conscious in conversation. I thought the conclusion was that any entity that is physically identical (or identical enough, per the GAZP) to a conscious being, is also conscious. Maybe a really good unconscious chatbot could pass a Turing test, but it would necessarily have a different internal structure from a conscious being: presumably given a sufficiently advanced cognitive science, we could look at its inner workings and say whether it's conscious.
