
October 05, 2008

Comments

If you tried to approximate The Rules because they were too computationally expensive to use directly, then, no matter how necessary that compromise might be, you would still end up doing less than optimal.

If optimal is not synonymous with winning (i.e. doing what is necessary), what is the point of being optimal? If you die of starvation before you manage to pick the most nutritious thing to eat using Bayesian methods, I'm gonna ditch Bayesian methods.

@Will: The point is not that you should necessarily run the algorithm that would be optimal if you had unlimited computational resources. The point is that by understanding what that algorithm does, you have a better chance of coming up with a good approximation which you can run in a reasonable amount of time. If you are trying to build a locomotive it helps to understand Carnot Engines.

What's your justification for having P(she says "at least one is a boy" | 1B,1G) = P(she says "at least one is a girl" | 1B,1G)? Maybe the hypothetical mathematician is from a culture that considers it important to have at least one boy. (China was like that, IIRC)

In recent years I've become more appreciative of classical statistics. I still consider the Bayesian solution to be the correct one; however, a full Bayesian treatment often turns into a total mess. Sometimes, by using a few of the tricks from classical statistics, you can achieve nearly as good performance with a fraction of the complexity.

Thank you for a correct statement of the problem which indeed gives the 1/3 answer.
Here's the problem I have with the malformed version:
I agree that it's reasonable to assume that if the children were a boy and a girl it is equally likely that the parent would say "at least one is a boy" as "at least one is a girl". But I guess you're assuming the parent would say "at least one boy" if both were boys, "at least one girl" if both were girls, and either "at least one boy" or "at least one girl" with equal probability in the one of each case.

That's the simplest set of assumptions consistent with the problem. But the quote itself is inconsistent with the normal rules of social interaction. Saying "at least one is a boy" takes more words to convey less information than saying "both boys" or "one of each". I think it's perfectly reasonable to draw some inference from this violation of normal social rules, although it is not clear to me what inference should be drawn.

If the mathematician has one boy and one girl, then my prior probability for her saying 'at least one of them is a boy' is 1/2 and my prior probability for her saying 'at least one of them is a girl' is 1/2

Why isn't it 3/4 for both? Why are these scenarios mutually exclusive?

Never mind -- missed the "If" clause. (Sorry!)

No, wait -- my question stands!

Do we really want to assign a prior of 0 to the mathematician saying "I have two children, one boy and one girl"?

Sadly, I had not read Judgment under Uncertainty, and still haven't. I don't recall ever saying I did, and can't find any email in which I claimed I'd read it.

However, I do recall being annoyed in 2002-2003 at Eliezer for joking that there was nothing worth reading that wasn't online and searchable through Google (or worse, that if it wasn't on the Net then it didn't exist). He did mention Judgment under Uncertainty on a mailing list (or on IRC) as something he would like to read, so I decided my donation to SIAI would be this book.

Eliezer doesn't make that particular annoying joke anymore. :)


Someone had just asked a malformed version of an old probability puzzle [...] someone said to me, "Well, what you just gave is the Bayesian answer, but in orthodox statistics the answer is 1/3." [...] That was when I discovered that I was of the type called 'Bayesian'.

I think a more reasonable conclusion is: yes, it is indeed malformed, and the person I am speaking to is evidently not competent enough to notice how this necessarily affects the answer and invalidates the familiar one, so they may not be a reliable guide to probability, and in particular to what is or is not "orthodox" or "Bayesian." What I think you ought to have discovered was not that you were Bayesian, but that you had not blundered, whereas the person you were speaking to had.

"There's no reason to believe, a priori, that the mathematician will only mention a girl if there is no possible alternative."

Erp, I don't understand what this sentence is referring to. Can someone do me a favor and explain what is the "no possible alternative" here?

@Will: The point is not that you should necessarily run the algorithm that would be optimal if you had unlimited computational resources. The point is that by understanding what that algorithm does, you have a better chance of coming up with a good approximation which you can run in a reasonable amount of time. If you are trying to build a locomotive it helps to understand Carnot Engines.

There are other scenarios when running the "optimal" algorithm is considered harmful. Consider a nascent sysop vaporising the oceans purely by trying to learn how to deal with humanity (if that amount of compute power is needed of course).

Probability theory was not designed to tell you how to win; it was designed as a way to get accurate statements about the world, assuming an observer whose computations have no impact on the world. This is a reasonable formalism for science, but only a fraction of how to win in the real world, and sometimes antithetical to winning. So if you want your system to win, don't necessarily approximate it to the best of your ability.

Ideally we want a theory of how to turn energy into winning, not a theory of how to turn information and a prior into accurate hypotheses about the world, which is what probability theory gives us and is very good at.

@ komponisto:

Also do we really want to assign a prior probability of 0 that the mathematician is a liar! :)

Eliezer: That scream of horror and embarrassment is the sound that rationalists make when they level up. Sometimes I worry that I'm not leveling up as fast as I used to, and I don't know if it's because I'm finally getting the hang of things, or because the neurons in my brain are slowly dying.
Or both. But getting the hang of things might just mean having core structures that are more and more durable and harder and harder to break, which could make it feel like you're not leveling up as fast as you used to. Whether not leveling up as fast as before also means not arriving at "new theorems" as fast may have more to do with the other possibility. If it costs nothing and would slow the neural degeneration process, be as physiologically healthy as you can by current standards.

Cat Dancer,

The frequentist answer of 1/3 is effectively making the implicit assumption that the parent would have said "at least one boy" either if both were boys or if there were one of each, and "at least one girl" if both were girls. Eliezer2008's 1/2 answer effectively assumes that the parent would have said "at least one boy" if both were boys, "at least one girl" if both were girls, and either with equal probability if there were one of each. "No alternative" assumes the parent is constrained to (truthfully) say either "at least one boy" or "at least one girl", an assumption that strikes me as being bizarre.
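
A quick simulation makes those two implicit parent models concrete; this is just a sketch of the assumptions described above, not anything from the original post:

```python
import random

def simulate(trials=1_000_000):
    # Model 1 (the 1/3 answer): the parent says "at least one boy" whenever
    # there is at least one boy, and "at least one girl" only with two girls.
    # Model 2 (the 1/2 answer): with one of each, the parent picks which sex
    # to mention by a fair coin flip.
    said_boy_1 = both_boys_1 = 0
    said_boy_2 = both_boys_2 = 0
    for _ in range(trials):
        boys = [random.choice("BG"), random.choice("BG")].count("B")

        if boys >= 1:                                             # Model 1
            said_boy_1 += 1
            both_boys_1 += (boys == 2)

        if boys == 2 or (boys == 1 and random.random() < 0.5):    # Model 2
            said_boy_2 += 1
            both_boys_2 += (boys == 2)

    print("Model 1:", both_boys_1 / said_boy_1)   # ~1/3
    print("Model 2:", both_boys_2 / said_boy_2)   # ~1/2

simulate()
```

The only difference is what the parent says in the one-of-each case, and that alone moves the conditional probability of two boys from 1/3 to 1/2.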

Will Pearson, you could not be more wrong. Winning money at games of chance is precisely what probability theory was designed for.

So the clear Bayesian version is: Mathematician says "I have two children", and you say, "Please tell me the sex of one of them", and she says "male". What's the chance both are boys?

One step back, though. The prior probability of being asked: "One's a girl. What's the chance both are boys?" is probably close to 0.

So the correct question to avoid that prior is: "What's the distribution of probabilities over 2 girls, one of each, and 2 boys?", not "What's the chance both are boys?"
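
Working that version out exactly, assuming the mathematician answers about a uniformly chosen child (an assumption, not something stated in the puzzle), gives the full distribution asked for:

```python
from fractions import Fraction

# Exact posterior for the "please tell me the sex of one of them" version,
# assuming she answers about a uniformly chosen child.
prior = {"2 girls": Fraction(1, 4), "one of each": Fraction(1, 2), "2 boys": Fraction(1, 4)}
p_reports_male = {"2 girls": Fraction(0), "one of each": Fraction(1, 2), "2 boys": Fraction(1)}

evidence = sum(prior[h] * p_reports_male[h] for h in prior)             # 1/2
posterior = {h: prior[h] * p_reports_male[h] / evidence for h in prior}
for h in posterior:
    print(h, posterior[h])   # 2 girls: 0, one of each: 1/2, 2 boys: 1/2
```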

Also do we really want to assign a prior probability of 0 that the mathematician is a liar! :)

That's not the point I was making.

I'm not attacking unrealistic idealization. I'm willing to stipulate that the mathematician tells the truth. What I'm questioning is the "naturalness" of Eliezer's interpretation. The interpretation that I find "common-sensical" would be the following:

Let A = both boys, B = at least one boy. The prior P(B) is 3/4, while P(A) = 1/4. The mathematician's statement instructs us to find P(A|B), which by Bayes is equal to 1/3.

Under Eliezer's interpretation, however, the question is to find P(A|C), where C = *the mathematician says* at least one boy (*as opposed to saying* at least one girl).

So if anyone is attacking the premises of the question, it is Eliezer, by introducing the quantity P(C) (which strikes me as contrived) and assigning it a value less than 1.
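
For concreteness, here is how the two quantities come out, as a sketch that takes the equal-probability announcement model for C as given:

```python
from fractions import Fraction

# A = both boys, B = at least one boy, C = the mathematician *says* "at least
# one is a boy" (saying boy or girl with probability 1/2 each given one of each).
P_A = Fraction(1, 4)
P_B = Fraction(3, 4)
P_C = Fraction(1, 4) + Fraction(1, 2) * Fraction(1, 2)   # both boys, plus half the one-of-each cases

print(P_A / P_B)   # P(A|B) = 1/3
print(P_A / P_C)   # P(A|C) = 1/2, since P(C|A) = 1
```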

"But it was PT:TLOS that did the trick. Here was probability theory, laid out not as a clever tool, but as The Rules, inviolable on pain of paradox"

I am unaware of a statement of Cox's theorem where the full *technical* statement of the theorem comes even close to this informal characterization. I'm not saying it doesn't exist, but PT:TLOS certainly doesn't do it.

I found the first two chapters of PT:TLOS to be absolutely, wretchedly awful. It's full of technical mistakes, crazy mischaracterizations of other people's opinions, hidden assumptions and skipped steps (that he tries to justify with handwaving nonsense), and even a discussion of Gödel's theorems that mixes meta levels and completely misses the point.

Cat Dancer, I think by "no alternative," he means the case of two girls.

Of course the mathematician could say something like "none are boys," but the point is whether or not the two-girls case gets special treatment. If you ask "is at least one a boy?" then "no" means two girls and "yes" means anything else.

If the mathematician is just volunteering information, it's not divided up that way. When she says "at least one is a boy," she might be turning down a chance to say "at least one is a girl," and that changes things.

At least, I think that's what he's saying. Most of probability seems as awkward to me as frequentism seems to Eliezer.

Larry D'Anna,

Could you be more specific (citations, etc), so that we can have an exchange between you and Eliezer on this?

George, Brian: thank you for the elaborations. Perhaps the point is that if I have a mental model of when the mathematician will say what, and that model is reasonably accurate, I can use that information to make more accurate deductions?

Which seems fairly obvious... but perhaps that's also the point, that Bayesian statistics allows you to use what information you have.

Eliezer:

How do you decide which books to read? In particular, why did you decide to read PT:LOS? Did Amazon recommend it?

For those who are interested, a fellow named Kevin Van Horn has compiled a nice unofficial errata page for PT:LOS here. (Check the acknowledgments for a familiar name.)

I agree that the nature of that question requires having a mental model of the mathematician, or at least a mental model of mathematicians in general, which for this question we probably don't have.

However, a similar question can more unambiguously be answered with Eliezer's answer of 1/2.

You're at a dinner at a mathematician's house, and he says that he has two kids. A boy walks through the room, and you ask if the boy is his son. He says yes. What is the probability that the other child is a girl?
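
A quick check of that variant, assuming the child who walks through the room is equally likely to be either of the two children:

```python
import random

seen_boy = other_is_girl = 0
for _ in range(1_000_000):
    kids = [random.choice("BG"), random.choice("BG")]
    walker = random.randrange(2)          # either child is equally likely to walk through
    if kids[walker] == "B":
        seen_boy += 1
        other_is_girl += (kids[1 - walker] == "G")

print(other_is_girl / seen_boy)   # ~0.5
```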

Larry D'Anna on Jaynes:

I found the first two chapters of PT:TLOS to be absolutely, wretchedly awful. It's full of technical mistakes, crazy mischaracterizations of other people's opinions, hidden assumptions and skipped steps (that he tries to justify with handwaving nonsense), and even a discussion of Gödel's theorems that mixes meta levels and completely misses the point.

Not to mention the totally unnecessary and irrelevant screeds against mainstream pure mathematics in general, which can only serve to alienate potential converts in that discipline (they sure alienated the hell out of me).

Eliezer,

Have you considered in detail the idea of AGI throttling? That is, given a metric of intelligence, and assuming a correlation between existential risk and that intelligence, AGI throttling would be the explicit control of the AGI's intelligence level (or optimization power, if you like), which indirectly also bounds existential risk.

In other words, what, if any, are the methods of bounding AGI intelligence level? Is it possible to build an AGI and explicitly set it at human level?

Agreed re: the bashing of mainstream math in PT:TLOS. AFAIK, his claims that mainstream math leads to paradoxes are all false; of course, trying to act as though various items of mainstream math meant what an uneducated first glance says they mean can make them look bad. (E.g., the Banach-Tarski paradox means either "omg, mathematicians think they can violate conservation of mass!" or "OK, so I guess non-measurable things are crazy and should be avoided.") It's not only unnecessary and annoying; I also think that using the usual measure theory would sometimes clarify things. For instance, MaxEnt depends on what kind of distribution you start with, because a probability distribution doesn't actually have an entropy, only a relative entropy with respect to a reference measure, which is of course not necessarily uniform, even for a discrete variable. Jaynes seems to strongly deemphasize this, which is unfortunate: from PT:TLOS it seems as though MaxEnt gives you a prior given only some constraints, when really you also need a "prior prior".
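
A minimal numerical sketch of that last point (assuming NumPy and SciPy; the function name is made up): with only a normalization constraint, "maximum entropy" relative to a reference measure m just gives back m, so the uniform answer is special to a uniform reference.

```python
import numpy as np
from scipy.optimize import minimize

def max_ent(m):
    """Maximize entropy relative to reference measure m, subject only to
    normalization. Equivalently: minimize KL(p || m)."""
    m = np.asarray(m, dtype=float)
    m = m / m.sum()
    n = len(m)
    def kl(p):
        p = np.clip(p, 1e-12, None)
        return float(np.sum(p * np.log(p / m)))
    res = minimize(kl, np.full(n, 1.0 / n),
                   bounds=[(1e-12, 1.0)] * n,
                   constraints={"type": "eq", "fun": lambda p: p.sum() - 1.0})
    return np.round(res.x, 3)

print(max_ent([1, 1, 1, 1]))   # [0.25 0.25 0.25 0.25] -- the usual "uniform" answer
print(max_ent([8, 4, 2, 1]))   # [0.533 0.267 0.133 0.067] -- it just returns m
```

Adding moment constraints changes the numbers, but the reference measure m -- the "prior prior" -- remains part of the input.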

Precision in seventeen syllables or less is very diffic.

Eliezer,


If you tried to approximate The Rules because they were too computationally expensive to use directly, then, no matter how necessary that compromise might be, you would still end up doing less than optimal.

You say that like it's a bad thing. Your statement implies that something that is "necessary" is not necessary.

Just this morning I gave a presentation on the use of Bayesian methods for automatically predicting the functions of newly sequenced genes. The authors of the method I presented used the approximation

P(A, B, C) ≈ P(A) × P(B|A) × P(C|A)

because it would have been difficult to compute P(C | B, A), and they didn't think B and C were correlated. Your statement condemns them as "less than optimal". But a sub-optimal answer you can compute is better than an optimal answer that you can't.
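
As a minimal sketch with made-up numbers (not the authors' actual model), here is what that factorization amounts to:

```python
from itertools import product

# The factorization treats B and C as conditionally independent given A;
# it is exact precisely when P(C | B, A) = P(C | A).
p_a = {True: 0.3, False: 0.7}
p_b_given_a = {True: 0.8, False: 0.1}   # P(B = True | A)
p_c_given_a = {True: 0.6, False: 0.2}   # P(C = True | A)

def approx_joint(a, b, c):
    pb = p_b_given_a[a] if b else 1 - p_b_given_a[a]
    pc = p_c_given_a[a] if c else 1 - p_c_given_a[a]
    return p_a[a] * pb * pc

# The factorized numbers still form a proper joint distribution.
print(sum(approx_joint(*abc) for abc in product([True, False], repeat=3)))  # 1.0 (up to rounding)
print(approx_joint(True, True, True))   # 0.3 * 0.8 * 0.6 = 0.144
```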

Do only that which you must do, and which you cannot do in any other way.

I am willing to entertain the notion that this is not utter foolishness, if you can provide us with some examples - say, ten or twenty - of scientists who had success using this approach. I would be surprised if the ratio of important non-mathematical discoveries made by following this maxim, to those made by violating it, was greater than .05. Even mathematicians often have many possible ways of approaching their problems.

David,

Building an AGI and setting it at "human level" would be of limited value. Setting it at "human level" plus epsilon could be dangerous. Humans on their own are intelligent enough to develop dangerous technologies with existential risk. (Which prompts the question: Are we safer with AI, or without AI?)

Phil,

There are really two things I'm considering. One, whether the general idea of AI throttling is meaningful and what the technical specifics could be (crude example: let's give it only X compute power, yielding an intelligence level Y). Two, if we could reliably build a human level AI, it could be of great use, not in itself, but as a tool for investigation, since we could finally "look inside" at concrete realizations of mental concepts, which is not possible with our own minds. As an example, if we could teach a human level AI morality (presumably possible since we ourselves learn it), we would have a concrete realization of that morality as computation that could be looked at outright and even debugged. Could this not be of great value for insights into FAI?

@Phil G:

if you can provide us with some examples - say, ten or twenty - of scientists who had success using this approach.

Phil, the low prevalence of breakthroughs made using this approach is evidence of science's historical link with serendipity. What it is not is evidence that 'Bayesian precision' as Eliezer describes it is not a necessary approach when the nature of the problem calls for it.

Recall the sequence around 'Faster than Einstein'. From a top-down capital-S Science point of view, there's nothing wrong with pootling around waiting for that 'hmmm, that's odd' moment. As you say, science has been ratcheting forward like that for a long while.

However, when you're just one guy with limited resources who wishes to take a mind-boggling step forward in a difficult domain in its infancy, the answer space is small enough that pootling won't get you far at all. (Doubly so when a single misstep kills you dead, as Eliezer's fond of saying.) No-one will start coding a browser and stumble across a sentient piece of code (à la Fleming / Penicillin), let alone a seed FAI. That kind of advance requires a large number of steps, each one technically precise and reliant on its predecessors. Or so I'm told. ;)

People are very fond of saying that General Intelligence may be outside the human sphere of ability - by definition too difficult for us. Well unless someone tries as hard as it's possible to try, how will we ever know?

David, the concept behind the term Singularity refers to our inability to predict what happens on the other side.

However, you don't even have to hold with the theory of a technological Singularity to appreciate the idea that an intelligence even slightly higher than our own (not to mention orders of magnitude faster, and certainly not to mention self-optimizing) would probably be able to do things we can't imagine. Is it worth taking the risk?

David - Yes, a human-level AI could be very useful. Politics and economics alone would benefit greatly from the simulations you could run.

(Of course, all of us but manual laborers would soon be out of a job.)

Ben,

The reason why I was considering the idea of "throttling" is precisely in order to reliably set the AI at human level (i.e. equivalent to an average human) and no higher. This scenario would therefore not entail the greater-than-human-intelligence risk that you are referring to, nor would it (presumably) entail the Singularity as usually defined. However, the benefits of a human level AI could be huge in terms of the ability to introspect concepts that are shrouded in the mystery associated with the "mental" (vs. non-mental, in Eliezer's terminology). If the AI is at human level, then the AI can learn morality, and then we can introspect and debug moral thinking that currently comes to us as a given. So, could it not be that the fastest path to FAI passes through human level AI (which is not powerful enough to require FAI in the first place)?

Phil,

Yes, I'm sure it would be of great use in many things, but my main question is whether the best route to FAI is through human level (but not higher) AI.

David,

Throttling an AI to human intelligence is like aiming your brand new superweapon at the world with the safety catch on. Potentially interesting, but really not worth the risk.

Besides, Eliezer would probably say that the F in FAI is the point of the code, not a module bolted into the code. There's no 'building the AI and tweaking the morality'. Either it's spot on when it's switched on, or it's unsafe.

Ben,

Using your analogy, I was thinking more along the lines of reliably building a non-super weapon in the first place. Also, I wasn't suggesting that F would be a module, but rather that FAI (the theory) could be easier to figure out via a non-"superlative" AI, after which point you'd _then_ attempt to build the superweapon according to FAI, having had key insights into what morality is.

Imagine OpenCogPrime has reached human level AI. Presumably you could teach it morality/moral judgements like humans. At this point, you could actually look inside at the AtomTable and have a concrete mathematical representation of morality. You could even trace what's going on during judgements. Try doing the same by introspecting into your own thoughts.

Human level AI is still dangerous. Look how dangerous we are.

Consider that a human level AI which is not friendly is likely to be far more unfriendly or difficult to bargain with than any human. (The total space of possible value systems is far, far greater than the space of value systems inhabited by functioning humans.) If there are enough of them, then they can cause the same kind of problem that a hostile society could.

But it's worse than that. A sufficiently unfriendly AI would be like a sociopath or psychopath by human standards. But unlike individual sociopaths among humans (who can become very powerful and do extraordinary damage; consider Stalin), they would not need to fake [human] sanity to work with others if there were a large community of like-minded unfriendly AIs. Indeed, if they were unfriendly enough and more comfortable with violence than, say, your typical European/American, the result could look a lot like the colonialism of the 15th-19th centuries or earlier migrations of more warlike populations, with all humans on the short end of the stick. And that's just looking at the human potential for collective violence. Surely the space of all human level intelligences contains some that are more brutally violent than the worst of us.

Could we conceivably hold this off? Possibly, but it would be a big gamble, and unfriendliness would ensure that such a conflict would be inevitable. If the AI were significantly more efficient than we are (cost of upkeep and reproduction), that would be a *huge* advantage in any potential conflict. And it's hard to imagine an AI of strictly human level being commercially useful to build unless its efficiency is superior to ours.

Those are good points, although you did add the assumption of a community of uncontrolled, widespread AIs, whereas my idea was related to building one for research as part of a specific venture (e.g. singinst).

In any case, I have the feeling that the problem of engineering a safe controlled environment for a specific human level AI is much smaller than the problem of attaining Friendliness for AIs _in general_ (including those that are 10x, 100x, 1000x etc more intelligent). Consider also that deciding not to build an AI does not stop everybody else from doing so, so if a human level AI were valuable in achieving FAI as I suggest, then it would be wise for the very reasons you suggest to take that route before the bad scenario plays out.
