
December 06, 2008

Comments

There need not be just one "true objection"; there can be many factors that together lead to an estimate. Whether you have a Ph.D., whether folks with Ph.D.s have reviewed your claims, and what they say can certainly be relevant. Also remember that you should care lots more about the opinions of experts who could build on and endorse your work than about average Joe opinions. Very few things ever convince average folks of anything unusual; target a narrower audience.

Immediate association: pick-up artists know well that when a girl rejects you, she often doesn't know the true reason and has to deceive herself. You could recruit some rationalists among PUAs. They wholeheartedly share your sentiment that "rational agents must WIN", and have accumulated many cynical but useful insights about human mating behaviour.

Most transhumanist ideas fall under the category of "not even wrong." Drexler's Nanosystems is ignored because it's a work of "speculative engineering" that doesn't address any of the questions a chemist would pose (i.e., regarding synthesis). It's a non-event. It shows that you can make fancy molecular structures under certain computational models. SI is similar. What do you expect a scientist to say about SI? Sure, they can't disprove the notion, but there's nothing for them to discuss either. The transhumanist community has a tendency to argue for its positions along the lines of "you can't prove this isn't possible" which is completely uninteresting from a practical viewpoint.

If I were going to unpack "you should get a PhD," I'd say the intention is along the lines of: you should attempt to tackle something tractable before you start speculating on Big Ideas. If you had a PhD, maybe you'd be more cautious. If you had a PhD, maybe you'd be able to step outside the incestuous milieu of pop-sci musings you find yourself trapped in. There are two things you get from a formal education: one is broad, you're exposed to a variety of subject matter that you're unlikely to encounter as an autodidact; the other is specific, you're forced to focus on problems you'd likely dismiss as trivial as an autodidact. Both offer strong correctives to preconceptions.

As for why people are less likely to express the same concern when the topic is rationality: there's a long tradition of disrespect for formal education when it comes to dispensing advice. Your discussions of rationality usually take the format of sage advice rather than scientific analysis. Nobody cares whether Dr. Phil is a real doctor.

Vladimir, I don't quite think that's the "narrower audience" Robin is talking about...

Robin, see the Post Scriptum. I would be willing to write a PhD thesis if it went by the old rules and the old meaning of "Prove you can make an original, significant contribution to human knowledge and that you've mastered an existing field", rather than, "This credential shows you have spent X number of years in a building." (This particular theory would be hard enough to write up that I may not get around to it if a PhD credential isn't at stake.)

Robin: Of course a PhD in "The Voodoo Sciences" isn't going to help convince anybody competent of much. I am actually more impressed with some of the fiction I vaguely remember you writing for Pournelle's "Endless Frontier" collections than a lot of what I've read recently here.

Poke: "formal education: one is broad, you're exposed to a variety of subject matter that you're unlikely to encounter as an autodidact"

I used to spend a lot of time around the Engineering Library at the University of Maryland, College Park before I moved away. In more than ten years I never met anyone there as widely read as myself. This also brings to mind a quote from G. Harry Stine's "The Hopeful Future" about how the self-taught are usually deficient in some areas of their learning - maybe so, but the self-taught will simply extend their knowledge when a lack appears to them. My lacks come from a lack of focus, not breadth. Everybody I ever met at the University was the other way around: narrow, and all too often not even aware of how narrow they were.

Perhaps you are marginally ahead of your time, Eliezer, and the young individuals who will flesh out the theory are still traipsing about in diapers. In which case, being either a billionaire or a PhD makes it more likely you can become their mentor. I'd do the former if you have a choice.

If you started going to college and actually worked at it a bit, you could have skipped to Ph.D. work if you wanted to; I did. I skipped all of the B.S. and M.S. work and went straight to Ph.D. work. But if the math that you've posted is any sign of the state of your knowledge, I don't hold much hope of that happening, since you can't seem to do basic derivatives correctly. When I started skipping classes - for example, skipping all of calculus and linear algebra to go straight to differential equations - I had a partially finished manuscript on solving differential equations that I had been working on for a while. Now the question that logically pops up is: do I have a Ph.D. now? No, I am taking a break from that to start a company or three if I can.

Can't do basic derivatives?
Seriously?!?
I'm for kicking the troll out.
His bragging about mediocre mathematical accomplishments isn't informative or entertaining to us readers.

billswift wrote:

…but the self-taught will simply extend their knowledge when a lack appears to them.

Yes, this point is key to the topic at hand, as well as to the problem of meaningful growth of any intelligent agent, regardless of its substrate and facility for (recursive) improvement. But in this particular forum, due to the particular biases which tend to predominate among those whose very nature tends to enforce relatively narrow (albeit deep) scope of interaction, the emphasis should be not on "will simply extend" but on "when a lack appears."

In this forum, and others like it, we characteristically fail to distinguish between the relative ease of learning from the already abstracted explicit and latent regularities in our environment and the fundamentally hard (and increasingly harder) problem of extracting novelty of pragmatic value from an exponentially expanding space of possibilities.

Therein lies the problem—and the opportunity—of increasingly effective agency within an environment of even more rapidly increasing uncertainty. There never was or will be safety or certainty in any ultimate sense, from the point of view of any (necessarily subjective) agent. So let us each embrace this aspect of reality and strive, not for safety but for meaningful growth.


Eliezer, I'm sure if you complete your friendly AI design, there will be multiple honorary PhDs to follow.

Sorry about the length of the post, there was just a lot to say.

I believe disagreements are easier to unpack if we stop presuming they are about differences in belief. Posts like this seem to confirm my own experience that the strongest factor in convincing people of something is not any notion of truth or plausibility but whether there are common allegiances with the other side. This seems to explain a number of puzzles of disagreement, including the following (list incomplete to save space):

  • Why do people who aren't sure about Eliezer's posts about physics/comp science/biology etc. wonder what famous names have to say? (hypothesis: you could create names by creating a person who reliably agreed with some subset of public speakers. you could stretch allegiances by stretching the subset after the followers are established)
  • Why is Ray Kurzweil more convincing than code and credentials? (hypothesis: PZ Myers's support would be really convincing to Scienceblog readers)
  • Why do good grammar and spelling help convince? (hypothesis: with younger crowds and faster communication, poor spelling/grammar would help convince)
  • Why do some lies work better than others? (hypothesis: psychics are more likely to be believed when they agree with the person they are working on)
  • Why do uninformed supporters of Barack Obama rationalize their reasons for supporting him? (hypothesis: X's supporters would do it too, where X is a mainstream public figure, but would not do it if questioned by someone perceived to be on the Other Side)

When people say "Why don't you have a PhD?" they have executed a search for a piece of evidence that, if someone had it, would help convince them that he was on Their Side. However (like someone who objects to Objectivism and reads Ayn Rand in response to the reaction he gets), even when he returns with a PhD, they still don't wear his colors. In a conversation with a person on the side of science who has not yet heard of it, a creationist mentioning that there is a $250,000 prize for anyone who can give convincing proof of evolution meets an automatic (and in my case, curiously confused) skepticism that comes not from knowing the specifics of the prize but rather from a gut reaction: "Kent Hovind is a creationist, he must be doing something wrong."

The same thing applies with SIAI. It meets automatic skepticism, and people want evidence that the organization's ideas are on their side. Famous names work where code and credentials don't because the famous names share strong common allegiances. Code and credentials appeal to the "does it work?" mentality, which would convince a lot of people allied to engineering and science if not for the fact that these are also signals used by impostors. The support of PZ Myers, on the other hand, would be an incredibly difficult signal to fake, making it a strong signal that would communicate a great deal of common allegiance with people who are scientifically minded and internet savvy. Same with a positive mention in the New York Times for liberally minded people.

I have spent years in the Amazon Basin perfecting the art of run-on sentences and hubris it helps remind others of my shining intellect it also helps me find attractive women who love the smell of rich leather furnishings and old books.

Between bedding supermodels a new one each night, I have developed a scientific thesis that supersedes your talk of Solomonoff and Kolmogorov and any other Russian name you can throw at me. Here are a random snippet of conclusions a supposedly intelligent person will arrive having been graced by my mathematical superpowers:

1. Everything you thought you knew about Probability is wrong.
2. Existence is MADE of Existence.
3. Einstien didn't know this, but slowly struggled toward my genius insight.
4. They mocked me when I called myself a modern day Galileo, but like Bean I will come back after they have gone soft.

I can off the tip of my rather distinguished salt-and-pepper beard name at least 108 other conclusions that would startle lesser minds such as the John BAEZ the very devil himself or Adolf Hitler I have really lost my patience with you ElIzer.

They called me mad when I reinvented calculus! They will call me mad no longer oh I have to go make the Sweaty Wildebeest with a delicately frowning Victoria's Secret model.

Crap. Will the moderator delete posts like that one, which appear to be so off the mark?

Eliezer - 'I would be willing to get a PhD thesis if it went by the old rules and the old meaning of "Prove you can make an original, significant contribution to human knowledge and that you've mastered an existing field", rather than, "This credential shows you have spent X number of years in a building."'

British and Australasian universities don't require any coursework for their PhDs, just the thesis. If you think your work is good enough, write to Alan Hajek at ANU and see if he'd be willing to give it a look.

Ignoring the highly unlikely slurs about your calculus ability:

However, if any professor out there wants to let me come in and just do a PhD in analytic philosophy - just write the thesis and defend it - then I have, for my own use, worked out a general and mathematically elegant theory of Newcomblike decision problems. I think it would make a fine PhD thesis, and it is ready to be written - if anyone has the power to let me do things the old-fashioned way.

British universities? That's the traditional place to do that sort of thing. Oxbridge.

Specifically with regard to the apparent persistent disagreement between you and Robin, none of those things explain it. You guys could just take turns doing nothing but calling out your estimates on the issue in question (for example, the probability of a hard takeoff AI this century), and you should reach agreement within a few rounds. The actual reasoning behind your opinions has no bearing whatsoever on your ability to reach agreement (or more precisely, on your inability to maintain disagreement).

Now, this is assuming that you both are honest and rational, and view each other that way. Exposing your reasoning may give one or the other of you grounds to question these attributes, and that may explain a persistent disagreement.

It is also useful to discuss your reasoning if your goal is not simply to reach agreement, but to get the right answer. It is possible that this is the real explanation behind your apparent disagreement. You might be able to reach agreement relatively quickly by fiat, but one or both of you would still be left puzzled about how things could be so different from what your seemingly very valid reasoning led you to expect. You would still want to hash over the issues and talk things out.

Robin earlier posted, "since this topic is important, my focus here is on gaining a better understanding of it". I read this as suggesting that his goal is not merely to resolve the disagreement, and perhaps not to particularly pursue the disagreement aspects at all. He also pointed out, "you do not know that I have not changed my opinion since learning of Eliezer's opinion, and I do not assume that he has not changed his opinion." This is consistent with the possibility that there is no disagreement at all, and that Robin and possibly Eliezer have changed their views enough that they substantially agree.

Robin has also argued that there is no reason for agreement to limit vigorous dissension and debate about competing views. In effect, people would act as devil's advocates, advancing ideas and positions that they thought were probably wrong, but which still deserved a hearing. It's possible that he has come to share Eliezer's position yet continues to challenge it along the lines proposed in that posting.

One thing that bothers me, as an observer who is more interested in the nature of disagreement than in the future course of humanity and its descendants, is that Robin and Eliezer have not taken more opportunity to clarify these matters and to lay out the time course of their disagreement more clearly. It would help, too, for them to periodically estimate how likely they think the other is to be behaving as a rational, honest, "Bayesian wannabe". Since they are two of the most notable wannabes around, both very familiar with the disagreement theorems, both highly intelligent and rational, this is a terrible missed opportunity. I understand that Robin's goal may be as stated, to air the issues, but I don't see why they can't simultaneously serve the community by shedding light on the nature of this disagreement.

Mike
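
(A toy illustration of the agreement machinery Mike refers to, under strong simplifying assumptions: a common prior, a fully shared model, and honest announcements. The coin setup, numbers, and function names are all invented for this sketch; the actual Aumann-style results are far more general, but the toy shows why exchanging posteriors can converge after very few rounds.)

    import random

    random.seed(0)

    PRIOR = 0.5          # common prior that the coin's bias is 0.75 (vs. 0.25)
    TRUE_THETA = 0.75    # the actual bias, unknown to both agents

    def likelihood(heads, flips, theta):
        return theta ** heads * (1 - theta) ** (flips - heads)

    def posterior(heads, flips):
        # P(bias = 0.75 | private data), starting from the common prior.
        num = likelihood(heads, flips, 0.75) * PRIOR
        den = num + likelihood(heads, flips, 0.25) * (1 - PRIOR)
        return num / den

    # Each agent privately observes some flips of the same coin.
    a_flips = [random.random() < TRUE_THETA for _ in range(10)]
    b_flips = [random.random() < TRUE_THETA for _ in range(30)]

    p_a = posterior(sum(a_flips), len(a_flips))
    p_b = posterior(sum(b_flips), len(b_flips))
    print("before exchange:", round(p_a, 3), round(p_b, 3))

    def combine(p1, p2):
        # Announcing a posterior reveals the announcer's likelihood ratio
        # (given the common prior and shared model), so each agent can fold
        # in the other's evidence, and both land on the same number.
        prior_odds = PRIOR / (1 - PRIOR)
        odds = (p1 / (1 - p1)) * (p2 / (1 - p2)) / prior_odds
        return odds / (1 + odds)

    print("after one exchange:", round(combine(p_a, p_b), 3))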

"Can't do basic derivatives?
Seriously?!?
I'm for kicking the troll out.
His bragging about mediocre mathematical accomplishments isn't informative or entertaining to us readers."

Did you look at his derivatives? "dy/dt = F(y) = A*y whose solution is y = e^(A*t)" How is e^(A*t) = dy/dt = A*y?
Basic derivatives 101: d/dx e^x = e^x.

"Solving

dy/dt = e^y

yields

y = -ln(C - t)"

Again, dy/dt = e^y does not equal -ln(C - t) unless e is not the irrational constant that it normally is; and even if that is the case, the solution is still wrong... again, refer to a basic derivative table...

So I am a troll because I point out errors? OK, fine, then I am a troll and will never come back. That's interesting; so you must be a saint for thinking these errors are the truth.

I apologize that I am not amusing you, but I am not a court jester like yourself.

Mediocre accomplishments, hmm... well, did you skip all of your bachelor's work and go straight to grad school in mathematics? I would bet not. Don't talk of mediocrity unless you can prove yourself above that standard. So I believe your credentials would be needed to prove that, or some of your own superior accomplishments, if you have any. I await eagerly.

And with that lovely exhibition of math talent, combined with the assertion that he skipped straight to grad school in mathematics, I do hereby request GenericThinker to cease and desist from further commenting on Overcoming Bias.

Generic,

The y appears on both sides of the equation, so these are differential equations. To avoid confusion, re-write as:

(1) (d/dt) F(t) = A*F(t)
(2) (d/dt) F(t) = e^F(t)

Now plug e^(A*t) into (1) and -ln(C - t) into (2), and verify that each satisfies its equation.
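
(For anyone who would rather check this mechanically than by hand, here is a short verification sketch using sympy, assuming it is installed; both differences should reduce to zero, confirming that e^(A*t) and -ln(C - t) satisfy (1) and (2) respectively.)

    import sympy as sp

    t = sp.symbols('t')
    A, C = sp.symbols('A C', positive=True)

    # (1) dy/dt = A*y, candidate solution y = e^(A*t)
    y1 = sp.exp(A * t)
    print(sp.simplify(sp.diff(y1, t) - A * y1))        # should print 0

    # (2) dy/dt = e^y, candidate solution y = -ln(C - t)
    y2 = -sp.log(C - t)
    print(sp.simplify(sp.diff(y2, t) - sp.exp(y2)))    # should print 0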

You could recruit some rationalists among PUAs. They wholeheartedly share your sentiment that "rational agents must WIN"

Interesting. As a reasonable approximation, approaching women with confidence==one-boxing on Newcomb's problem. Eliezer's posts have increased my credence that the latter is correct, although it hasn't helped me with the former.

@Brian

I think Alec Greven may be your man. Or perhaps like Lucy van Pelt I should set up office hours offering Love Advice, 5 cents?

You could recruit some rationalists among PUAs. They wholeheartedly share your sentiment that "rational agents must WIN"

You have. We do. And yes, they must.

"Drexler's Nanosystems is ignored because it's a work of "speculative engineering" that doesn't address any of the questions a chemist would pose (i.e., regarding synthesis)."

It doesn't address any of the questions a chemist would pose after reading Nanosystems.

"As a reasonable approximation, approaching women with confidence==one-boxing on Newcomb's problem."

Interesting. Although I would say "approaching women with confidence is an instance of a class of problems that Newcomb's problem is supposed to represent but does not." Newcomb's problem presents you with a situation in which the laws of causality are broken, and then asks you to reason out a solution assuming the laws of causality are not broken.

Daniel, I knew it :-)

Phil, you can look at it another way: the commonality is that to win you have to make yourself believe a demonstrably false statement.

"However, if any professor out there wants to let me come in and just do a PhD in analytic philosophy - just write the thesis and defend it - then I have, for my own use, worked out a general and mathematically elegant theory of Newcomblike decision problems. I think it would make a fine PhD thesis, and it is ready to be written - if anyone has the power to let me do things the old-fashioned way."

I think this is a good idea for you. But don't be surprised if finding the right one takes more work than an occasional bleg. And I do recommend getting it at Harvard or the equivalent. And if I'm not mistaken, you may still have to do a bachelor's and a master's?

If I have to do a bachelor's degree, I expect that I can pick up an accredited degree quickly at that university that lets you test out of everything (I think it's called University of Phoenix these days?). No master's, though, unless there's an org that will let me test out of that.

The rule of thumb here is pretty simple: I'm happy to take tests, I'm not willing to sit in a building for two years solely in order to get a piece of paper which indicates primarily that I sat in a building for two years.

Phil, you can look at it another way: the commonality is that to win you have to make yourself believe a demonstrably false statement.
But I don't. The problem, phrased in a real world situation that could possibly occur, is that a superintelligence is somehow figuring out what people are likely to do, or else is very lucky. The real-world solution is either

1. If you know ahead of time that you're going to be given this decision, either pre-commit to one-boxing or try to game the superintelligence. Neither option is irrational; it doesn't take any fancy math; one-boxing amounts to positing that your committing to one-boxing has a direct causal effect on what will be in the boxes.

2. If you didn't know ahead of time that you'd be given this decision, choose both boxes.

You can't, once the boxes are on the ground, decide to one-box and think that's going to change the past. That's not the real world, and describing the problem in a way that makes it seem convincing that choosing to one-box actually CAN change the past is just spinning fantasies.

This is one of a class of apparent paradoxes that arise only because people posit situations that can't actually happen in our universe. Like the ultraviolet catastrophe, or being able to pick any point from a continuum.
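
(Whatever one makes of the causal story, it may help to see the arithmetic both camps appeal to. The sketch below assumes the standard payoffs of $1,000 in the transparent box and $1,000,000 possibly in the opaque one, and a predictor that has simply been observed to be right with frequency p; nothing about backwards causation is assumed, only that p is a fair estimate of its track record against people like you.)

    BIG, SMALL = 1_000_000, 1_000

    def one_box_ev(p):
        # Predictor right with probability p => it foresaw one-boxing
        # and filled the opaque box.
        return p * BIG

    def two_box_ev(p):
        # Predictor right with probability p => it foresaw two-boxing
        # and left the opaque box empty.
        return SMALL + (1 - p) * BIG

    for p in (0.5, 0.51, 0.9, 0.99):
        print(f"p={p}: one-box {one_box_ev(p):>11,.0f}  two-box {two_box_ev(p):>11,.0f}")

    # One-boxing pulls ahead once p > 0.5005; the philosophical dispute is over
    # whether this conditional expectation is the right thing to maximize.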

Phil, your commitment ahead of time is your own private business, your own cognitive ritual. What you need in order to determine the past in the right way is that you are known to perform a certain action in the end. Whether you are arranging it so that you'll perform that action by making a prior commitment and then having to choose the actions because of the penalty, or simply following a timeless decision theory, so that you don't need to bother with prior commitments outside of your cognitive algorithm, is irrelevant. If you are known to follow timeless decision theory, it's just as good as if you are known to have made a commitment. You could say that embracing timeless decision theory is a global meta-commitment, that makes you act as if you made commitment in all the situations where you benefit from having made the commitment.

For example, a one-off prisoner's dilemma can be resolved to mutual cooperation by both players making a commitment to incur a huge negative utility in case the other player cooperates and you defect. This commitment never fires in the real world, since its presence makes both players cooperate. The presence of this commitment leads to a better outcome for both players, so it's always rational to make it. Exactly the same effect can be achieved by both players following the timeless decision theory, only without the need to bother arranging that commitment in the environment (which could prove ruinous in the case of advanced intelligence, since a commitment is essentially intelligence playing an adversarial game in the environment against itself).
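
(A small sketch of the commitment argument with made-up payoff numbers; the specific values and the penalty size are illustrative assumptions, not anything stated above. The point is only that once a large enough penalty is attached to defecting against a cooperator, cooperation becomes each player's best reply to cooperation, so the penalty never actually has to fire.)

    # Illustrative one-shot Prisoner's Dilemma payoffs (row player, column player);
    # higher is better. C = cooperate, D = defect.
    BASE = {
        ('C', 'C'): (3, 3),
        ('C', 'D'): (0, 5),
        ('D', 'C'): (5, 0),
        ('D', 'D'): (1, 1),
    }

    PENALTY = 100  # self-imposed cost for defecting while the other player cooperates

    def with_commitment(payoffs, penalty):
        # Both players commit to paying `penalty` if they defect against a cooperator.
        adjusted = {}
        for (a, b), (pa, pb) in payoffs.items():
            if a == 'D' and b == 'C':
                pa -= penalty
            if b == 'D' and a == 'C':
                pb -= penalty
            adjusted[(a, b)] = (pa, pb)
        return adjusted

    def best_reply(payoffs, opponent_move):
        # Row player's best move given the column player's move.
        return max(('C', 'D'), key=lambda m: payoffs[(m, opponent_move)][0])

    committed = with_commitment(BASE, PENALTY)
    print(best_reply(BASE, 'C'), best_reply(BASE, 'D'))            # D D: defection dominates
    print(best_reply(committed, 'C'), best_reply(committed, 'D'))  # C D: cooperating with a cooperator now wins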

Vladimir, I understand the PD and similar cases. I'm just saying that the Newcomb paradox is not actually a member of that class. Any agent faced with either version - being told ahead of time that they will face the Predictor, or being told only once the boxes are on the ground - has a simple choice to make; there's no paradox and no PD-like situation. It's a puzzle only if you believe that there really is backwards causality.

Phil, you said "if you didn't know ahead of time that you'd be given this decision, choose both boxes", which is a wrong answer. You didn't know, but the predictor knew what you would do; if you one-box, that is a property of you that the predictor knew, and you'll have your reward as a result.

The important part is what the predictor knows about your action, not even what you yourself know about your action, and it doesn't matter how you convince the predictor. If the predictor just calculates your final action by physical simulation or whatnot, you don't need anything else to convince it; you just need to make the right action. Commitment is a way of convincing, either yourself to make the necessary choice, or your opponent of the fact that you'll make that choice. In our current real world, a person usually can't just say "I promise", without any expected penalty for lying, however implicit, and expect to be trusted, which makes Newcomb's paradox counterintuitive, and which makes cooperating in a one-off prisoner's dilemma without pre-commitment unrealistic. But it's a technical problem of communication, or of rationality, nothing more. If the predictor can verify that you'll one-box (after you understand the rules of the game, yadda yadda), your property of one-boxing is communicated, and that's all it takes.
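
(One way to see this point without any backwards causation: let the predictor simply run a copy of the agent's decision procedure before filling the boxes. The function names and payoffs below are invented for the sketch; it is not the formal setup, just an illustration that prediction-by-simulation plus the usual payoffs already yields the usual result.)

    def agent(policy):
        # The agent's actual decision procedure.
        return 'one-box' if policy == 'one-boxer' else 'two-box'

    def fill_boxes(agent_fn, policy):
        # The predictor never looks into the future: it runs a copy of the
        # agent's decision procedure and fills the opaque box accordingly.
        predicted = agent_fn(policy)
        opaque = 1_000_000 if predicted == 'one-box' else 0
        return opaque, 1_000  # (opaque box, transparent box)

    def play(policy):
        opaque, transparent = fill_boxes(agent, policy)  # fixed *before* the choice
        choice = agent(policy)                           # same procedure, run "for real"
        return opaque if choice == 'one-box' else opaque + transparent

    print(play('one-boxer'))  # 1000000
    print(play('two-boxer'))  # 1000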

"You didn't know, but the predictor knew what you'll do, and if you one-box, that is your property that predictor knew, and you'll have your reward as a result."

No. That makes sense only if you believe that causality can work backwards. It can't.

"If predictor can verify that you'll one-box (after you understand the rules of the game, yadda yadda), your property of one-boxing is communicated, and it's all it takes."

Your property of one-boxing can't be communicated backwards in time.

We could get bogged down in discussions of free will; I am assuming free will exists, since arguing about the choice to make doesn't make sense unless free will exists. Maybe the Predictor is always right. Maybe, in this imaginary universe, rationalists are screwed. I don't care; I don't claim that rationality is always the best policy in alternate universes where causality doesn't hold and 2+2=5.

What if I've decided I'm going to choose based on a coin flip? Is the Predictor still going to be right? (If you say "yes", then I'm not going to argue with you anymore on this topic; because that would be arguing about how to apply rules that work in this universe in a different universe.)

Compare: communicating the property of the timer that it will ring one hour in the future (that is, the timer works according to certain principles that result in it ringing in the future) vs. communicating from the future the fact that the timer rang. If you can run a precise physical simulation of a coin, you can predict how it'll land. Usually, you can't do that. Not every difficult-seeming prediction requires things like simulation of physical laws; abstractions can be very powerful as well.

Vladimir, I don't mean to diss you; but I am running out of weekend, and think it's better for me to not reply than to reply carelessly. I don't think I can do much more than repeat myself anyway.

Declining to one-box because of a lack of precommitment is a mistake. Backwards causality is irrelevant. Prediction based on psychological or physical simulation is sufficient.

Gaming a superintelligence with dice achieves little. You're here to make money, not to prove him wrong. Expect him to either give you a probabilistic payoff or count a probabilistic decision as two-boxing. Giving pedantic answers requires a more formal description; it doesn't change anything.

If I'm ever stuck in a prison with a rational, competitive fellow prisoner, it'd be really damn handy to be omniscient and have my buddy know it.

I may be wrong about Newcomb's paradox.

You could say that embracing timeless decision theory is a global meta-commitment, that makes you act as if you made commitment in all the situations where you benefit from having made the commitment.
I think this is correct.

It's perplexing: This seems like a logic problem, and I expect to make progress on logic problems using logic. I would expect reading an explanation to be more helpful than having my subconscious mull over a logic problem. But instead, the first time I read it, I couldn't understand it properly because I was not framing the problem properly. Only after I suddenly understood the problem better, without consciously thinking about it, was I able to go back, re-read this, and understand it.

I'm glad that helped.

I don't think it did help, though. I think I failed to comprehend it. I didn't file it away and think about it; I completely missed the point. Later, my subconscious somehow changed gears so that I was able to go back and comprehend it. But communication failed.

Buddhists say that great truths can't be communicated; they have to be experienced, only after which you can understand the communication. This was something like that. Discouraging.

From my experience, the most productive way to solve a problem on which I'm stuck (that is, hours of looking at it produce no new insight or promising directions of future investigation) is to keep it in the background for a long time, while avoiding forgetting it by recalling what it's about and visualizing its different aspects and related conjectures from time to time. And sure enough, in a few days or weeks, triggered by some essentially unrelated cue, a little insight comes that allows me to develop a new line of thought. When there are several such problems in the background, it's more or less efficient.

Inferential distance can make communication a problem worthy of this kind of reflectively intractable insight.

Phil - Changing your mind on previous public commitments is hard work. Respect!

It's a fascinating problem. I'm hoping Eliezer gets a chance to write that thesis of his. It's even more interesting once you see people applying Newcomblike reasoning behaviorally. A whole lot more of human behavior started making sense after I grasped the Newcomb problem.

Phil,
I think that's how logic (or math) normally works. You make progress on logic problems by using logic, but understanding another's solution usually feels completely different to me, completely binary.

Also, it's hard to say that your unconscious wasn't working on it. In particular, I don't know if communicating logic to me is as binary as it feels, whether I go through a search of complete dead ends, or whether intermediate progress is made but not reported.

