
February 07, 2009

Comments

I'm both excited for and somewhat disappointed with the return to rationality. I've enjoyed many of the posts on other topics, but the rationality posts are also immensely useful to me in everyday life. Maybe you can toss in some more fiction now and then? (Of course, I'll probably get speared by other commenters for saying that...)

Very interesting concept... would it mean that a five-celled organism, whose behavior could all be black-box analyzed to suggest that it wishes to protect its own life, would thus be considered rational?

I'm curious if anyone knows of any of EY's other writings that address the phenomenon of rationality as not requiring consciousness.


"I'm curious if anyone knows of any of EY's other writings that address the phenomenon of rationality as not requiring consciousness."

Cf. Eliezer-sub-2002 on evolution and rationality.

This post needs more explosions.

"Value is fragile" - isn't that what conservatives/Republicans believe? And the liberal/Democrat side believes they can undermine little bits here and there of their society's value system and not have the whole thing collapse. Who is right?

Ian C.: Neither group is changing human values as they are referred to here: everyone is still human; no one is suggesting neurosurgery to change how brains compute value. See the post "Value is Fragile."

Are not all/most organisms built to protect reproductive organs?
Not just "male organisms are built to be protective of their testicles"?

Please explain what you mean by: "that one, centralized vulnerability is why a kick in the testicles hurts more than being hit on the head."
A kick in the head can leave you as unable to reproduce as a kick in the balls. I think it is more likely to kill you, too.

Nick Hay: "[N]either group is changing human values as it is referred to here: everyone is still human, no one is suggesting neurosurgery to change how brains compute value."

Once again I fail to see how culturally-derived values can be brushed away as irrelevant under CEV. When you convince someone with a political argument, you are changing how their brain computes value. Just because the effect is many orders of magnitude subtler than major neurosurgery doesn't mean it's trivial.

Z. M. Davis: Good point; I was brushing that distinction under the rug. From this perspective, all people arguing about values are trying to change someone's value computation to a greater or lesser degree, i.e., this is not the place to look if you want to discriminate between "liberal" and "conservative".

With the obvious way to implement a CEV, you start by modeling a population of actual humans (e.g. Earth's), then consider extrapolations of these models (knew more, thought faster, etc.). There is no "wipe culturally-defined values" step, however that would be defined.

Where was it suggested otherwise?

Nick: "Where was it suggested otherwise?"

Oh, no one's explicitly proposed a "wipe culturally-defined values" step; I'm just saying that we shouldn't assume that extrapolated human values converge. Cf. the thread following "Moral Error and Moral Disagreement."

I'm happy to hear that Eliezer will go back to posting on rationality.

CFAI 3.4.4: "The renormalizing shaper network should ultimately ground itself in the panhuman and gaussian layers..."

Nick, ZM, this is CFAI rather than CEV, and in context it's about programmer independence, but doesn't this count as "wiping culturally-defined values"?

TGGP,

Why, precisely?

CFAI is obsolete - nothing in there is my current thought unless I explicitly declare otherwise. I don't think there's anything left in CFAI now that isn't obsoleted by (a) "Coherent Extrapolated Volition", (b) some Overcoming Bias post, or (c) a good AI textbook such as "Artificial Intelligence: A Modern Approach".

With that said, ceteris paribus in terms of reasonable construal, ways of construing someone's 'reflective equilibrium' that tend to depend more heavily on current beliefs and values will make it less likely for different reflective equilibria to overlap. Similarly with a fixed way of construing a reflective equilibrium, and arguments or observations which suggest that this fixed construal depends more heavily on mental content with more unconstrained degrees of freedom.

"Thus the freer the judgement of a man is in regard to a definite issue, with so much greater necessity will the substance of this judgement be determined." -- Friedrich Engels, Anti-Dühring

Wasn't there some material in CFAI about solving the wirehead problem?

"I plan to go back to posting about plain old rationality on Monday."

You praise Bayes highly and frequently. Yet you haven't posted a commensurate amount of material on Bayesian theory. I've read the Intuitive and Technical Explanation essays, and they made me think that you could write a really superb series on Bayesian theory.

Philosophers have written lots on a priori arguments for Bayesianism (e.g. Cox's Theorem, Dutch Book Arguments, etc.). I'm more curious about the fruitfulness of Bayesianism: e.g. what issues it clarifies and what interesting questions it brings to light. Here are some more specific questions:

1. What are some of the insights you've gained from Pearl's work on causal graphs and counterfactuals? How did reading Pearl change your views about certain topics? What are the insights from Pearl that have been most productive for you in your own thinking? What do you disagree with Pearl about?

2. What are some more practical examples of powerful applications of Bayesianism in AI? That Bayesianism is the correct normative theory of rationality doesn't imply that adopting a Bayesian framework will immediately yield big practical advantages in AI design. It might take people time to develop practical methods. How good are those methods? (I'm thinking, for example, about tractability, as well as the fact that many AI people over 40 won't have had so much early training on Bayes).

3. What areas of the Bayesian picture need development? What problems do you think cannot currently be given a very satisfying treatment in the Bayesian framework?


Given your ability, demonstrated in "Intuitive" and elsewhere, to not just tell people how to think about a topic but to *get* them thinking in the right way, a series on Bayesianism that started elementary and built up could be very worthwhile.
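(For readers who haven't seen the "Intuitive Explanation" essay mentioned above: the update it builds toward can be sketched in a few lines. The mammography numbers below are the ones that essay uses; the function name is just mine.)

```python
# Bayes' theorem on the mammography example from "An Intuitive
# Explanation of Bayesian Reasoning": 1% of women have breast cancer,
# the test detects 80% of cancers, and it gives false positives
# on 9.6% of healthy patients.

def posterior(prior, likelihood, false_positive_rate):
    """P(hypothesis | positive test) via Bayes' theorem."""
    # Total probability of a positive test, with and without the condition.
    evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / evidence

p = posterior(prior=0.01, likelihood=0.8, false_positive_rate=0.096)
print(round(p, 3))  # 0.078 - a positive test raises 1% to only ~7.8%
```

The counterintuitive smallness of that 7.8% is exactly the point the essay hammers on: the posterior is dominated by the low prior, not by the test's 80% sensitivity.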


Eliezer: Thanks for the clarification.

Wondering, I like rationality posts.

EJ: It takes a much harder kick to the head to hurt as much as a kick to the balls.

As a martial artist (tae kwon do, specifically), I have been kicked in the head and balls many, many times - and I would much rather be kicked in the head than the balls. The strongest kick to the head I've taken hurt a fair bit and left me groggy for an hour; but the strongest kick to my balls (which wasn't even very hard) ruined my entire *day*.

