Followup to: Morality as Fixed Computation
I keep trying to describe morality as a "computation", but people don't stand up and say "Aha!"
Pondering the surprising inferential distances that seem to be at work here, it occurs to me that when I say "computation", some of my listeners may not hear the Word of Power that I thought I was emitting; but, rather, may think of some complicated boring unimportant thing like Microsoft Word.
Maybe I should have said that morality is an abstracted idealized dynamic. This might not have meant anything to start with, but at least it wouldn't sound like I was describing Microsoft Word.
How, oh how, am I to describe the awesome import of this concept, "computation"?
Perhaps I can display the inner nature of computation, in its most general form, by showing how that inner nature manifests in something that seems very unlike Microsoft Word - namely, morality.
Consider certain features we might wish to ascribe to that-which-we-call "morality", or "should" or "right" or "good":
• It seems that we sometimes think about morality in our armchairs, without further peeking at the state of the outside world, and arrive at some previously unknown conclusion.
Someone sees a slave being whipped, and it doesn't occur to them right away that slavery is wrong. But they go home and think about it, and imagine themselves in the slave's place, and finally think, "No."
Can you think of anywhere else that something like this happens?
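To make the parallel concrete, here is a minimal sketch (in Python, and purely as an illustration of the general idea, not anything from the original argument): a question whose answer is fixed entirely by its premises, so that you can settle it from the armchair, without any further peeking at the state of the outside world, and still end up knowing something you didn't know before.

```python
# A minimal sketch: settling a question by pure deduction.
# No new observation of the world is needed; the answer is already
# determined by the premises, yet it may be previously unknown to you.

def is_prime(n: int) -> bool:
    """Trial division: deducing primality straight from the definition."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Before running this, you may not know whether 2**31 - 1 is prime;
# after running it, you do - and you never had to look outside.
print(is_prime(2**31 - 1))  # True
```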