
December 29, 2008

Comments

Point of clarification: if human ascension to Mind status is possible, and speeding that ascension is within the power of the NFSI, how are you avoiding having at least one human mind ascend to main character status well ahead of the rest of the species?

At least one of the current six billion squishy things is going to want to enter the godmode code and ascend immediately, and if not them then one of the other trillions of Earth organisms that could be uplifted. Even if the NFSI limits the rate of ascension to the eudaimonic rate, that will vary between people; given six billion rolls of the dice (and more rolls every day), someone will have the value "really really fast" for his/her personal eudaimonic rate. Anything worth waiting for is worth having right now.
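[A minimal numeric sketch of the "six billion rolls of the dice" point - purely illustrative, under the hypothetical assumption that personal eudaimonic rates follow a lognormal distribution:

    import numpy as np

    # Hypothetical: each person's eudaimonic growth rate is a lognormal draw
    # with a median of 1.0.
    rng = np.random.default_rng(0)
    n_people = 10**7  # scaled down from six billion so it runs quickly
    rates = rng.lognormal(mean=0.0, sigma=1.0, size=n_people)

    print("median rate: ", np.median(rates))  # about 1.0
    print("fastest rate:", rates.max())       # far out in the tail

Even with ten million draws the fastest rate comes out a couple of hundred times the median; with six billion draws, and more every day, the extreme only gets more extreme.]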

The effect seems like passing the recursive buck a very short distance. Humans create a computer that can but will not make all human efforts superfluous; the computer can and does uplift a human to equal capacities; at least one human can and may make all human efforts superfluous. Perhaps CEV includes something like, "No one should be able to get (much) smarter (much) faster than the rest of us," but restricting your intelligence because I am not ready for anyone that smart is an odd moral stance.

This post seems to be missing one important thing about the Culture universe (unless I missed it): in that universe "high-grade transhumanism", if I understand the term correctly, is possible, and, if anything, common. The Culture is an aberration, one of very few civilizations in that universe that are capable of Sublimation and yet remain in their human form. The only reason for that must be very strong cultural reasons, which are constantly reinforced, because all those who do not agree with them Sublime into incomprehensibility before they can significantly influence anything.

My guess is that Eliezer will be horrified at the results of CEV-- despite the fact that most people will be happy with it.

This is obvious given the degree to which Eliezer's personal morality diverges from the morality of the human race.

Zubon, I thought of that possibility, and one possible singleton-imposed solution is "Who says that subjective time has to run at the same rate for everyone?" You could then do things fast or slow, as you preferred, without worrying about being left behind, or for that matter, worrying about pulling ahead. To look at it another way, if people can increase in power arbitrarily fast on your own playing field, you may have to increase in power faster than you prefer, to keep up with them; this is a coordination/competition problem, and two singleton solutions are to fence off people who grow too fast, or to slow down their subjective time rates so that the competence growth rate per tick of sidereal time is coordinated.

Ramarren, Banks added on that part later, and it renders a lot of the earlier books nonsensical - why didn't the Culture or the Idirans increase their intelligence to win their war, if it was that easy? I refuse to regard Excession as canon; it never happened.

Unknown, the question is how much of this divergence is due to (a) having moved further toward reflective equilibrium, (b) unusual mistakes in answering a common question, (c) being an evil mutant, (d) falling into an uncommon but self-consistent attractor.

"(b) my being a mutant,"

It looks like (especially young) humans have quite a lot of ability to pick up a wide variety of basic moral concerns, in a structured fashion, e.g. assigning ingroups, objects of purity-concerns, etc. Being raised in an environment of science-fiction and Modern Orthodox Judaism may have given you quite unusual terminal values without mutation (although personality genetics probably play a role here too). I don't think you would characterize this as an instance of c), would you?

Presumably, because if they increased their intelligence they would realize that the war is stupid and go home, which would leave fighting only to those that did not. This starts to look like a rationalization rather than a serious reason, but then I always thought that the Culture books are carefully constructed to retain as much as possible of "classic" science fiction (starships! lasers! aliens!) in the face of the singularity.

Carl, I would indeed call that an "uncommon but self-consistent attractor" if we assume that it is neither convergent, mistaken, nor mutated. As far as I can tell, those four possibilities seem to span the set - am I missing anything?

I'm just confused by your distinction between mutation and other reasons to fall into different self-consistent attractors. I could wind up in one reflective equilibrium rather than another because I happened to consider one rational argument before another, because of early exposure to values, genetic mutations, infectious diseases, nutrition, etc. It seems peculiar to single out the distinction between genetic mutation and everything else. I thought 'mutation' might be a shorthand for things that change your starting values or reflective processes before extensive moral philosophy and reflection, and so would include early formation of terminal values by experience/imitation, but apparently not.

"If you already had the lifespan and the health and the promise of future growth, would you want new powerful superintelligences to be created in your vicinity, on your same playing field?"

Yes, definitely. If nothing else, it means diversity.

"Or would you prefer that we stay on as the main characters in the story of intelligent life, with no higher beings above us?"

I do not care, as long as the story continues.

And yes, I would like to hear the story - which is about the same thing I would get if Minds were prohibited. I will not be the main character of the story anyway, so why should I care?

"Should existing human beings grow up at some eudaimonic rate of intelligence increase, and then eventually decide what sort of galaxy to create, and how to people it?"

Grow up how? Does it involve uploading your mind to computronium?

"Or is it better for a nonsentient superintelligence to exercise that decision on our behalf, and start creating new powerful Minds right away?"

Well, this is the only thing I fear. I would prefer a sentient superintelligence to create nonsentient utility maximizers. Much less chance of error, IMO.

"If we don't have to do it one way or the other - if we have both options - and if there's no particular need for heroic self-sacrifice - then which do you like?"

As you have said - this is a Big world. I do not think the two options are mutually exclusive. The only mutually exclusive option I see is a nonsentient maximizer singleton programmed to avoid sentient AIs and Minds.

"Well... you could have the humans grow up (at some eudaimonic rate of intelligence increase), and then when new people are created, they might be created as powerful Minds to start with."

Please explain the difference between a Mind created outright and "grown up humans". Do you insist on biological computronium?

As you have said, we are living in a Big world. It means that quite likely there is (or will be) some Culture-like civilisation that we will meet if things go well.

How do you think we will be able to compete with them, given your "no sentient AIs, only grown up humans" bias?

Or: Say your CEV AI creates a singleton.

Will we be allowed to create the Culture?

What textbooks will be banned?

Will CEV burn any new textbooks we are going to create, so that nobody is able to stand on other people's shoulders?

I saw it from the other side: "why on earth would humans not choose to uplift" - given the quite reasonable in-context expectation that they could just ask and receive. The real problem with that universe is not a lack of things for humans to do, but a lack of things for anybody to do. Minds are hardly any better placed. I could waste my time as a human dabbling uselessly in obsolete skills, or as a Mind acting as a celestial truck driver and bored tinkerer on the edges of other people's civilizations - what a worthless choice.

Julian Morrison:

Or you can turn the issue around once again. You can enjoy spending your time on obsolete skills (like sports, arts, or carving table legs...).

There is no shortage of things to do, there is only a problem with your definition of "worthless".

Eliezer (about Sublimation):

"Ramarren, Banks added on that part later, and it renders a lot of the earlier books nonsensical - why didn't the Culture or the Idarans increase their intelligence to win their war, if it was that easy? I refuse to regard Excession as canon; it never happened."

Just a technical (or fandom?) note:

A Sublimed civilization is central to the plot of Consider Phlebas (Schar's World, where the Mind escapes, is "protected" by a Sublimed civilization - that is why direct military action by either the Idirans or the Culture is impossible).

luzr, in Consider Phlebas, the term "Sublimed" is never used. It is implied that the Dra'Azon are simply much older than the Culture and hence more powerful - a very standard idiom in SF which makes no mention of deliberately refraining from progress at higher speeds. In Consider Phlebas, the Culture is implied to be advancing its technology as fast as possible in order to fight the war.

Julian, what in any possible reality would count as "something to do"?

Eliezer:

It is really off-topic, and I do not have a copy of Consider Phlebas at hand now, but

http://en.wikipedia.org/wiki/Dra%27Azon

Even if Banks did not mention 'Sublimed' in the first novel, the concept exactly fits the Dra'Azon.

Besides, the Culture is not really advancing its 'base' technology, but rather rebuilding its infrastructure into a war machine.

And I will not, if at all possible, give any other human being the least cause to think that someone else might spark a better Singularity. I can make no promises upon the future, but I will at least not close off desirable avenues through my own actions.

A possible problem here is that your high entry requirement specifications may well, with a substantial probability, allow others with lower standards to create a superintelligence before you do.

So: since you seem to think that would be pretty bad, and since you say you are a consequentialist - and believe in the greater good - you should probably act to stop them, e.g. by stepping up your own efforts to get there first, bringing the target nearer to you.

I do have a copy of Consider Phlebas on hand, and reread it, along with Player of Games, before writing this post. Wikipedia can say what it likes, but the term "Sublimed" is certainly never used, nor is anything like the concept of a "deliberately refused hard takeoff" implied. The Culture is advancing its base technology level, as implied by the notion of an unusually advanced Mind prototype, capable of feats thought to be impossible at the Culture's technology level, being lost on Schar's World. "Subliming" is an obvious later graft which simply doesn't fit the world depicted in the earlier novels.

It's questionable how relevant any of this is, since we are arguing over a ficton - but the original Culture does not have anything akin to Subliming and I am criticizing it on those grounds.

Eliezer, I'm confused what you're asking. Read literally, you're asking for a summary description of reachable fun space, which you can make better than I can. All the other parses I can see are more confusing than that. Plain text doesn't carry tone. Please could you elaborate?

Consider Phlebas is subpar Culture, and Player of Games is the perfect introductory book but still not full-power Banks. Use of Weapons, Look to Windward, Inversions... and Feersum Endjinn is my favourite non-Culture book.

More to the point, however, Look to Windward discusses some of the points you raise. I'm just going by memory here, but one of the characters, Cr. Ziller, a brilliant and famous non-human composer, asks a Mind whether it could create symphonies as beautiful as his and how hard it would be. The Mind answers that yes, it could (and we get the impression quite easily, in fact) and goes on to argue how that does not take anything away from Ziller's achievement. I don't remember the details exactly, but at one point there is an analogy with mountain climbing when you can just use a helicopter.

From my readings I don't get the impression that there is "competing on a level playing field with superintelligences", and in fact when Banks does bring Minds too far into the limelight, things break down (Excession).

David:

"asks a Mind whether it could create symphonies as beautiful as it and how hard it would be"

On a somewhat related note, there are still human chess players and competitions...

I agree with Unknown. It seems that Eliezer's intuitions about desirable futures differ greatly from those of many of the rest of us here at this blog, and most likely even more from those of the rest of humanity today. I see little evidence that we should explain this divergence as mainly due to his "having moved further toward reflective equilibrium." Without a reason to think he will have vastly disproportionate influence, I'm having trouble seeing much point in all these posts that simply state Eliezer's intuitions. It might be more interesting if he argued for those intuitions, engaging with existing relevant literatures, such as in moral philosophy. But what is the point of just hearing his wish lists?

Off-topic, but amusing:

[Long + off-topic = deleted. Take it to an Open Thread.]

Robin, it's not clear to me what further kind of argument you think I should offer. I didn't just flatly state "the problem with the Culture is the Minds", I described what my problem was, and offered Narnia as a simplified case where the problem is especially stark.

It's not clear to me what constitutes an "argument" beyond sharing the mental images that invoke your preferences, in this matter of terminal values. What other sort of answer could I give to "Why don't you think that's fun?" Would you care to briefly state a contrary view you have, and what you would see as a different sort of argument in favor of it?

Again, my purpose in all this is twofold: To retain people who now turn away from transhumanism, cryonics, or life itself because they can't imagine any future in which they would be happy; and to deliver a further general argument against religions by showing that the present world isn't optimized for eudaimonia, including moral responsibility or self-reliance.

I have a lot of sympathy for what Unknown said here:

"My guess is that Eliezer will be horrified at the results of CEV-- despite the fact that most people will be happy with it."

And Carl Shulman has a very good point here:

"It looks like (especially young) humans have quite a lot of ability to pick up a wide variety of basic moral concerns, in a structured fashion, e.g. assigning ingroups, objects of purity-concerns, etc. Being raised in an environment of science-fiction and Modern Orthodox Judaism may have given you quite unusual terminal values"

Sorry to keep harping on about this, but if you read Joshua Greene's PhD thesis, p. 194, you'll find this:

"By participating in these interlinked custom complexes regarding the use of space and the purification of the body, children learn that a central project of moral life is the regulation of one’s own bodily states as one navigates the complex topography of purity and pollution….

Social skills and judgmental processes that are learned gradually and implicitly then operate unconsciously, projecting their results into consciousness, where they are experienced as intuitions arising from nowhere"

If there's nothing more to life than eliminating suffering, you might as well eliminate life and be done.

I nominate this for the next "Rationality Quotes".

Doesn't this line of thinking make the case for Intelligence Augmentation (IA) over that of FAI? Let me qualify that when I say IA, I really mean friendly intelligence augmentation, relative to friendly artificial intelligence. If you could 'level up' all of humanity to the wisdom and moral ethos of 'friendliness', wouldn't that be the most important step to take first and foremost? If you could reorganize society and reeducate humans in such a way as to make a friendly system at our current level of scientific knowledge and technology, that would (not entirely, but as best we can) cut the probability of existential threats to a minimum and allow for a sustainable eudaimonic increase of intelligence towards a positive singularity outcome. Yes, that is a hard problem, but I'm sure it's not harder than FAI (probably a lot less hard). It'll probably take generations, and we might have to take a few steps backwards before we take further steps forwards (and non-existential catastrophes might provide those backward steps regardless of our choosing), but it seems like the best path. The only reasons to choose an FAI plan are that you (1) think an existential threat is likely to occur very soon, (2) want to be alive for the singularity and don't want to risk cryonics, or (3) just fancy the FAI idea for personal non-rational reasons.

Eliezer,

I have to question your literary interpretation of the Culture. Is Banks' intention really to show an idealized society? I think the problem of the Minds that you describe is used by Banks to show the existential futility of the Culture's activities. The Culture sans Minds would be fairly run-of-the-mill sci-fi. With all of its needs met (even thinking), it throws into question every action the Culture takes, particularly the meddlesome ones. That's the difference between Narnia and the Culture; Aslan has a wonderful plan for the children's lives, whereas the Culture really has nothing to do but avoid boredom. The Romantic Ideals (High Challenge, Complex Novelty) you espouse are ultimately what is being attacked by what I see as Banks' Existential ones. I think you can take the transhumanism out of the argument and just debate the ideas, since we aren't yet at the point of being infinitely intelligent, immortal, etc.

Aaron

haig, one might also believe that Friendly Artificial Intelligence is easier than Friendly Biological Intelligence. We have relatively few examples of FBI and no consistent, reliable way to reproduce it. FAI, if it works, works on better hardware with software that is potentially provably correct, and you can copy that formula.

AI is often mocked because it has been "almost there" for about 50 years, and FAI is a new subset of that. Versions of FBI have been attempted for at least 4000 years, suggesting that the problem may be difficult.

Eliezer, what do you have against "Excession"? It's been a while since I last read them, but I thought it was the 2nd best of the Culture books after "Use of Weapons". I do agree that "Player of Games" is the best place to start though (I started with Consider Phlebas but found it a little dry).

Anyway, as for your actual point, I think it sounds reasonable at least on the surface, but considering this stuff too deeply may be putting the cart before the horse somewhat, when we're not even very sure what causes consciousness in the first place, or what the details of its workings are, and therefore to what extent a non-conscious yet correctly working FAI is even possible or desirable.

Eliezer:

"Narnia as a simplified case where the problem is especially stark."

I believe there are at least two significant differences:

- Aslan was not created by humans; it does not represent the "story of intelligence" (quite the contrary: the lesser intelligences were created by Aslan, as long as you interpret him as God).

- There is only a single Aslan with a single predetermined "goal", while there are millions of Culture Minds with no single "goal".

(Actually, the second point is what I dislike so much about the idea of a singleton - it can turn into something like a benevolent but oppressive God too easily. Aslan IS the Narnia singleton.)

The concern expressed above over the consistency of the Culture universe seems unnecessary. The quality of construction of the Culture universe and its stories is non-trivial, and hence, as with all things, one absorbs what is useful and progresses forward.

I read Amputation of Destiny and your subsequent replies with interest, Eliezer; here's my contribution.

The Problem With The Minds could also read The Entire Reason For The Culture/Idiran War. The Idirans consider sentient machines an abomination, or, to quote Consider Phlebas:

"The fools in the Culture couldn't see that one day the Minds would start thinking how wasteful and inefficient the humans in the Culture themselves were."

It's not a plot flaw, it's a plot device and it occurs throughout the series.

I don't agree with your Living By Your Own Strength point either, as you appear to negate important backstory about the Culture. People leave and join the Culture all the time. The Culture itself splits occasionally, as a yearning for personal fulfillment afflicts Minds as well.

The wish to become stronger is fully exhibited by Culture citizens; I think your analogy doesn't fit. We are told that Culture citizens all have physical and mental enhancements that put them several notches above their non-Culture counterparts on the strength scale. Strength is therefore valued, and so is intelligence, but they are not the most valued...

I think that, in common with ourselves, and within the non-confines of Culture society, there is a wish to attain status and respect, which is why many apply to Contact and Special Circumstances. Special Circumstances in particular gives access to more offensive levels of technology.

Read Matter for an account of the upgrades and assignments given to a person becoming a Special Circumstances agent, and how they made her feel.

This brings me to how it all ties together. You've mentioned that the Minds overshadow the humans; this is wrong. First of all, Culture citizens have an intimate relationship with Minds and can form friendships and dislikes with them the same as with any other being. In Culture society, sentience is the most valued thing of all. This is not to say that the humans are blind to the obvious differences in abilities, far from it. In the main they don't feel the need to change themselves into AIs, what with the good s**t, the ability to change sex, and whatnot. It's even considered rude for organics to take on the forms of conventional AIs like drones, and vice versa. To be sentient within the Culture is to claim equal status with all in the Culture.

Contrast this with the Idirans in Consider Phlebas: their religion gives them the right to rule lesser beings within their influence.

That's why I think the Narnia/Culture comparison isn't right. The Narnia books, as you've said, have a Christian fable at their heart; however, Aslan is not just a central character, Aslan is THE central character, both creator and destroyer. Although religion plays a prominent part in Consider Phlebas, religion is not the central pillar the Culture is built upon. The Minds may have godlike powers, but they're not gods themselves. Billions of humans live within the Culture; billions more, and all the rest, live quite happily without it.

The nonperson predicate point I find very interesting. It's good to get opinions on AI from specialists such as yourself, and certainly I'm not going to use a novel to outpoint proper research. I would like to mention Look To Windward, though. As part of the backstory, it mentions that the problem with creating AIs without the taint of their creators is that the results almost instantly Sublime; so while the author may only be paying lip service, he isn't ignoring it.

Could it be that the Minds themselves yearn for a purpose? This is only my question.

I like the later point you make about Subliming, albeit I think this has been undergoing a process of refinement. In fairness, it's an author's right to embellish and improve a work in progress, provided there are no inconsistencies.

Thanks for the right of reply, Eliezer.

"If there's nothing more to life than eliminating suffering, you might as well eliminate life and be done."

This only applies if non-existence is considered a preferable state to existence. Obviously Culture AIs consider existence preferable, and thus strive to make human existence as suffering-free as possible.
