
July 29, 2008

Comments

Re: Anything out there besides The Evolutionary Origins of Morality?

Well, you read "The Origins of Virtue", IIRC. That has 18 pages of references and notes.

...and how about "The Moral Animal"?

Mr. Yudkowsky says: But does that question really make much difference in day-to-day moral reasoning, if you're not trying to build a Friendly AI?

Here is a quote that I think speaks to that:

The Singularity is the technological creation of smarter-than-human intelligence. Ask what "smarter-than-human" really means. And as the basic definition of the Singularity points out, this is exactly the point at which our ability to extrapolate breaks down. We don't know because we're not that smart. -- Eliezer Yudkowsky

As I understand it, it is not possible for a human to design a machine that is "smarter-than-human", by definition. It is possible, however, to design a machine that can design a machine that is smarter than human. According to one of my correspondents, it has already occurred (quoted exactly, grammar and all):

"My current opinion is that the singularity is behind us. The deep discovery is the discovery of the Universal Machine, alias the computer, but we have our nose so close on it that we don't really realize what is happening. From this, by adding more and more competence to the universal machine, we put it away from his initial "natural" intelligence. I even believe that the greek theologians were in advance, conceptually, on what intelligence his. Intelligence is confused with competence today. It is correct that competence needs intelligence to develop, but competence cannot be universal and it makes the intelligence fading away: it has a negative feedback on intelligence.
So my opinion is that the singularity has already occurred, and, since a longer time, we have abandon the conceptual tools to really appreciate that recent revolution. We are somehow already less smarter than the universal machine, when it is not yet programmed."

Best regards,

Bruno Marchal
IRIDIA-ULB
http://iridia.ulb.ac.be/~marchal/

What is "morality" for? (The word, that is, not morality itself.)
The "morality" concept seems so slippery at this point that it might be better to use other words to more clearly communicate meaning.

In any case, be it personal or interpersonal morality, it is more efficient to agree on a single goal and then optimize towards it. The power of optimization is in driving the environment in the same direction, multiplying the effect with each interaction. For personal morality, it means figuring out morality more accurately than you needed to before, perhaps more accurately than is possible from mere intuition and argument, and in particular balancing the influences of conflicting drives. For interpersonal morality, it means agreeing on a global target, even if it's suboptimal for each party considered separately.

To clarify my question, what is the point of all this talk about "morality" if it all amounts to "just do what you think is right"? I mean other than the futility of looking for The True Source of morality outside yourself. I guess I may have answered my own question if this was the whole point. So now what? How do I know what is moral and what isn't? I can answer the easy questions, but how do I solve the hard ones? I was expecting to get easy answers to moral questions from your theory, Eliezer. I feel cheated now.

PK: Not "do what you think is right", but "deliberate the way you think is right" - keep aiming for the ideal implicit in and relative to, but not currently realized in, yourself. Definitely not a free lunch, no.

As I understand it, it is not possible for a human to design a machine that is "smarter-than-human", by definition.

Then we couldn't design a machine that could design a machine that would be smarter than we were, either. Machine-2's design couldn't be smarter than it, and machine-2 couldn't be smarter than machine-1 which designed it, which in turn couldn't be smarter than the designers of machine-1: us.

We can't hold in our individual minds a design that is more complex than one of those minds, or in our collective minds what is more complex than the collective. But the design doesn't have to be as complex as the thing that is designed, and the representation of the design is simpler still.

It's trivially easy for a human being to design a data-encoding-and-storage system that can hold more data than is contained in that human's brain. The brain just can't represent that system.
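As a minimal illustration of that point (mine, not the commenter's): the specification of a storage system can be far smaller than the data the system can hold. The 90-bit address width below is an arbitrary choice for the example.

def address_space_bytes(address_bits):
    # Capacity, in bytes, of a flat byte-addressable store with the given address width.
    return 2 ** address_bits

# A 90-bit address space describes roughly 1.2e27 bytes of storage -- vastly more
# than common estimates of the information a human brain holds -- yet this "design"
# is only a few lines long.
print(address_space_bytes(90))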

Re: As I understand it, it is not possible for a human to design a machine that is "smarter-than-human", by definition.

Maybe not - but a bunch of humans could probably do it.

Re: My current opinion is that the singularity is behind us.

See my essay: The Intelligence Explosion Is Happening Now.

Doesn't anyone read Nietzsche any more? There is no rational basis for morality. To search for such a thing is no different from postulating a god. As for the pie--in a world without god, it belongs to the person who takes it. Weaklings (like me) don't like this, so we develop metaethics (including the form based on "reason").

sophiesdad: As I understand it, it is not possible for a human to design a machine that is "smarter-than-human", by definition.

Caledonian: Your understanding is mistaken.

Mr. Caledonian, I'm going to stick by my original statement. Smarter-than-human intelligence will be designed by machines with "around human" intelligence running recursive self-improvement code. It will not start with a human-designed superhuman intelligence. How could a human know what that is? That's why I'm not sure that all the years of thought going into the meaning of morality, etc., are helpful. If it is impossible for humans to understand what superhuman intelligence and the world around it would be like, just relax and go along for the ride. If we're destroyed, we'll never know it. If you're religious, you'll be in heaven (great) or hell (sucks). I agree with Tim Tyler (who must not be drinking as much as when he was commenting on Taubes) that we already have machines that perform many tasks, including design tasks, that would be impractical for humans alone.

Smarter-than-human intelligence will be designed by machines with "around human" intelligence running recursive self-improvement code. It will not start with a human-designed superhuman intelligence. How could a human know what that is?

How could the machine know what superhuman intelligence is? If the machine can design machines that are smarter than it, precisely why can't humans design machines smarter than them?

I don't think Deep Blue "knew" that it was trying to beat Garry Kasparov in the game of chess. It was programmed to come up with every possible alternative move and evaluate the outcome of each in terms of an eventual result of taking Kasparov's king. The human brain is elegant, but it's not fast, and unquestionably no human could have evaluated all the possible moves within the time limit. Deep Blue is quaint compared to the Universal Machines of the near future. David Deutsch claims that quantum computers will be able to factor numbers that would require a human more time than is known to have existed in the history of the universe. It won't have superhuman intelligence, but it will be fast. Imagine if its programs were recursively self-improving.
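For readers who want the flavor of that kind of game-tree evaluation, here is a minimal sketch; Deep Blue's actual search (massively parallel alpha-beta on custom hardware with a hand-tuned evaluation function) was far more elaborate, and evaluate, legal_moves, and apply_move are hypothetical stand-ins for a real engine's functions.

def minimax(state, depth, maximizing, evaluate, legal_moves, apply_move):
    # Return the best score reachable from `state`, searching `depth` plies ahead.
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state)
    child_scores = (minimax(apply_move(state, m), depth - 1, not maximizing,
                            evaluate, legal_moves, apply_move) for m in moves)
    return max(child_scores) if maximizing else min(child_scores)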

If there's a standard alternative term in moral philosophy then do please let me know.

As far as I know, there is not. In moral philosophy, when deontologists talk about morality, they are typically talking about things that are for the benefit of others. Indeed, they even have conversations about how to balance between self-interest and the demands of morality. In contrast, consequentialists have a theory that already accounts for the benefit of the agent who is doing the decision making: it counts just as much as anyone else. Thus for consequentialists, there is typically no separate conflict between self-interest and morality: morality for them already takes this into account. So in summary, many moral philosophers are aware of the distinction, but I don't know of any pre-existing terms for it.

By the way, don't worry too much about explaining all the prerequisites before making a post. Explaining some of them afterwards in response to comments can be a more engaging way to do it. In particular, it means that we readers can see which parts we are skeptical of and then just focus our attention on posts which defend that aspect, skimming the ones that we already agree with. Even when it comes to the book, it will probably be worth giving a sketch of where you want to end up early on, with forward references to the appropriate later chapters as needed. This will let the readers read the prerequisite chapters in a more focused way.

I would find it very difficult to summarize what I had not written. Any attempt at summary would turn into the whole post. It's how my authorness works. Part of the whole point of writing this on Overcoming Bias is so that I don't have to write it all into the book, and can just put a footnote somewhere with a clean conscience.

I found this post more engaging than the last, and the first half genuinely instructive. I had a vague idea about what social morality would have to look like, but this is actually a working theory. IMHO, it's still a kind of depressing outlook--there's a reason so many find the quest for an objective morality so appealing--but much better than nihilism. And I'm easily depressed. :)

Re: why can't humans design machines smarter than them

The claim was: it is not possible for a human to design a machine that is "smarter-than-human".

Billions of humans collectively haven't managed to construct a superintelligence over the last 50 years. It seems unlikely that a single human could do it - the problem is too big and difficult.

If you are confused by a problem and don't know how to solve it, you do not know how much remaining effort it will take to solve once you are unconfused, or what it will take to unconfuse yourself. You might as easily point out that "billions of humans have failed to do X over the last 50 years" for all X such that it hasn't happened yet.

Imagine the year 2100

AI Prac Class
Task: (a) design and implement a smarter-than-human AI using only open source components; (b) ask it to write up your prac report.
Time allotted: 4 hours
Bonus points: disconnect your AI host from all communications devices; place your host in a Faraday cage; disable your AI's morality module; find a way to shut down the AI without resorting to triggering the failsafe host self-destruct.


sophiesdad, since a human today could not design a modern microprocessor (without using the already-developed plethora of design tools), your assertion that a human will never design a smarter-than-human machine is safe but uninformative. Humans use smart tools to make smarter tools. It's only reasonable to predict that smarter-than-human machines will only be made by a collaboration of humans and existing smart machines.

Speculation on whether "smart enough to self improve" comes before or after the smart-as-a-human mark on some undefined 1-dimensional smartness scale is fruitless. By the look of what you seem to endorse by quoting your unnamed correspondent, your definition of "smart" makes comparison with human intelligence impossible.

