August 25, 2008

Comments

Re: how likely it is that we live in an only partially reductionist universe?

Reductionism is a term which has been debased. It ought to still mean what it did in Hofstadter's time - in which case such a question would make no sense.

AFAICS, the modern corruption is due to the spiritual physicist John Polkinghorne, who deserves ignoring.

If I know your exact state of mind, I will be able to predict your car's trajectory by modeling your current state of mind

This is tangential to the direction of the post, but in fact you will not be able to predict the car's trajectory from the driver's current state of mind, since it depends not only on that state of mind, but also on everything that might happen on the way. Maybe a road is blocked and the driver goes another way. Maybe the driver has a crash and must abandon the journey. You will certainly not be able to predict the detailed movements the driver makes with the car's controls, since those will depend even on transient gusts of wind. Purposes are not achieved merely by making a plan, and then executing it.

Rational choice theory is probably the closest analogue to teleological thinking in modern academic research. Regarding all such reasoning as fallacious seems to be an extreme position; to what extent do they regard the "three fallacies" of teleology as genuine fallacies of reasoning as opposed to useful heuristics?

You may be making this more complex than needed.
Aristotle had a basic premise that the universe has purpose. His reasoning is good given that premise ("future to past cause", e.g.).
You seem to have the basic premise that the universe is purposeless.
The disagreement is with the premise.
Which premise one subscribes to is based in belief and is more basic than the logic we place on top of it (since all reasoning and observation flow from the premise).

The third fallacy of teleology is to commit the Mind Projection Fallacy with respect to telos, supposing it to be an inherent property of an object or system. Indeed, one does this every time one speaks of the purpose of an event, rather than speaking of some particular agent desiring the consequences of that event.


I'm vaguely reminded of The Camel Has Two Humps. Perhaps it's the case that some people naturally have a knack for systemisation, while others are doomed to repeat the mind projection fallacy forever.

The teeth example at the beginning is strong because it implies the rest. I skimmed because the conclusions seemed obvious from that example and the following paragraph: intent is the cause, not that which is intended, and teeth are not the kind of things that intend.

Your section on backward causality seems to subsume the argument on mind projection. If the point on backwards causality is that the intent for x is not x itself, that covers most of what you want to say about projecting telos on x. Anthropomorphism would seem to cover the rest, that x is not that kind of thing.

"Similarly with those who hear of evolutionary psychology and conclude that the meaning of life is to increase reproductive fitness - hasn't science demonstrated that this is the purpose of all biological organisms, after all?"

"I call this a "teleological capture" - where someone comes to believe that the telos of X is Y, relative to some agent, or optimization process, or maybe just statistical tendency, from which it follows that any human or other agent who does X must have a purpose of Y in mind."

I think the second paragraph, and specifically the phrase "in mind," probably paints the wrong picture of the people who hold the first paragraph to be true. They most likely think that increasing reproductive fitness is entirely implicit within the organism's behaviour. Not "in mind," which would imply a conscious goal.

Anyway, yes, I can see that teleological capture is problematic. However, I can't do away with it completely. It seems the only way to try to fix things. Let us say that I became so besotted with a reborn doll that I didn't interact with other people. Should I try to stop myself loving the doll? If I ask what love is for, then it seems I should. This seems useful in my book; similarly, asking what hunger and my desire for sweet things are for (from an evolutionary point of view) enables me to see that curbing them would be a good idea.

Now I am not consistent in my application of the view (I'm generally nice to people because it seems right to be nice to people), but in corner cases, such as whether I should spend lots of money and attention on a cat (which I find adorable), it gives me something to steer by.

I haven't yet seen how your platonic morality can fill the void left by excising the ability to correct emotions and desires toward the purpose for which they evolved.

Should I try to stop myself loving the doll? If I ask what love is for,

Asking what purpose love evolved for...

then it seems I should.

What is doing the seeming here? Is your built-in morality, which makes no direct reference to evolutionary arguments, but which might be swayed by them, evaluating this argument and coming to a conclusion?

If the change in morality suggested by an evolutionary line of reasoning were repugnant to you, would you reject it? Then you're not putting the cart before the horse, and good for you. Eliezer's talking about different people.

This seems useful in my book,

Only to the extent that it gives answers you're happy with by other, more primary criteria. And to that extent, it's just one more kind of moral argument.

"This is just some random text to see if I can get my comment to go through."

Will, forgive this personal question, but have you ever had sex with your partner while using birth control?

I don't think it's asking "But what happened to the evolutionary purpose?" that tells you not to love a reborn doll. That's just an argument that happens to give the correct answer in this case, but gives the wrong answer in many others. And the proof is that you used "What about the evolutionary purpose?" giving a seemingly good answer to support "What about the evolutionary purpose?" as a good question to ask. Why do you expect your audience to already know that it's a bad idea to love a reborn doll, even before they accept your evolutionary argument? This is probably how you know as well.

I expected that my audience would already know that it was a bad idea to love a reborn doll because I expected them to be mostly male. Reborn dolls are marketed to females, and everything I have seen about them (not a lot) suggests that males find them wrong. It is possible that we don't have the same sort of machinery for attaching to babies and baby-shaped things as females.

But what moral argument could I present to someone who did love their reborn doll? Let me present a brief discussion:

Doller: I love my doll; I want to spend all my money on repainting a room in pink and buying a new cot for it.
Moral Functionalist: Can't you see that is wrong? Or at least can't you see that lots of other people think it is wrong, which should give you evidence on the morality function?
Doller: Previously, people agreed that crushing your neighbour's tribe was the correct thing to do. At some point someone had to be the one to say no, a raid now would not be a good idea (perhaps not saying that it was morally wrong, but thinking it). Should he have been convinced by everyone else saying that neighbour-slaughtering and daughter-taking was the right and proper thing to do? How do you become the first one to strike out on your own to progress morally? Can you definitively say that I am not a pioneer in moral progress?
MF: But a reborn doll does not promote the properties of love and growth, etc.
Doller: Oh, but it does. Without it I am listless and feel I have no purpose, with an aching hole in my life needing to be filled. With it I have something to protect and work for. A reborn doll is also a lot less time and effort for me than a real baby, allowing me to spend more time working and sleeping well, so I am more productive in my job. Much like someone in a relationship without a child gets the benefits of being in a relationship without the family obligations. People should be encouraged to have and love real dolls so that they can focus on immortality research without having to find ways to afford college for the next generation.
MF: ....

Feel free to try to end the argument, or to correct it to how you would actually take it. I feel I have been weak in arguing the non-doller point of view, but I don't really get how it is supposed to work.

In answer to your first question, I would use protection, but I have never claimed my meta-morality is consistent, and I don't drive it to extremes. I just go with what seems to work for me, and don't spend too much time and energy on the whole thing. Wasting time and energy is a pretty bad thing to do according to my morality.

This article seems to be a redescription of phenomena, so that you don't need to draw a causal arrow going from the future to the past.
An alternative methodology would be to draw the arrow anyway, and discuss what it means; and have your imagined interlocutors explain why their methodology forbids drawing the arrow. (Perhaps Aristotle was wiser than Bayes.)

It seems to me you've redescribed Aristotle to make him consistent with C20th folk psychology, where intelligence is inside people's heads (and not anywhere else), rather than considering alternative ways of modelling a distributed intelligence implemented as a network of multiple agents.


It's worth being more explicit about why you describe Mary differently from Mary's teeth. Mary is modelled as an autonomous agent, whose actions are directed by intelligence inside her. Her teeth are modelled as responding to intelligence located outside them: by Mary's choosing to bite and chew, and by an evolutionary process selecting who survives.
