
January 15, 2009

Comments

Yes. You and Eric Drexler and a few others have sufficiently convinced me that I absolutely look forward to the future. I'm not sure if I did already (I had vague thoughts), but now I do. Thanks, I guess. :)

My optimism about the future has always been induced from historical trends. It doesn't require any mention of AI, or of most of the fun topics discussed here. I would define this precisely as having a justified expectation of pleasant surprise. I don't know the specifics of how the future looks, but I can generalize with some confidence that it is likely to be better than today (for people on average, if not necessarily for me in particular). If you think the trend now is positive, but the result of this trend somewhere in the future is quite negative, then you have a story to tell about why. And as with all stories about the future, you are likely wrong.

I find it hard to conceive of falling into misery because I do not live in a future society governed perfectly by an all-powerful FAI seeking the best interests of each individual and of the species. I am glad that I do not have to work as a subsistence peasant, at risk of starvation if the harvest is poor, and I feel some envy of the celebrities I see.

I think a lot of misery comes from wanting the world to be other than it is, without the power to change it. Everybody knows it: I need the courage to change what I can change, the serenity to accept what I can't change, and the wisdom to know the difference. It is not easy, but it is simple (this last sentence comes from House MD).

I feel that a lot of your discussion of Fun Theory is a bit too abstract to make looking forward to the future emotionally appealing. I think for at least some people (even smart, rational ones), it may be more effective to point out the possibility of more concrete, primitive, "monkey with a million bananas" type scenarios, even if those are not the most likely to actually occur.

Even if you know that the future probably won't be specifically like that, you can imagine how good that would be in a more direct and emotionally compelling way, and then reason that a Fun-Theory-compatible future would be even *better* than that, even if you can't visualize it so clearly.

My soul got sucked out a long time ago.

[whine]
I wanna be a wirehead! Forget eudaimonia, I just wanna feel good all the time and not worry about anything!
[/whine]

"So how much light is that, exactly? Ah, now that's the issue.

I'll start with a simple and genuine question: Is what I've already said, enough?"

- Enough for what purpose? There are two distinct purposes that I can think of. Firstly, there is the task of convincing some "elite" group of potential FAI coders that the task is worth doing. I think that enough has been said for this one. How likely is this strategy to work? Well,

Secondly, there is the task of convincing a nontrivial fraction of "ordinary" people in developed countries that the humanity+ movement is worth getting excited about, worth voting for, worth funding. This might be a worthy goal if you think that the path of technological development will be significantly influenced by public opinion and politics. For this task, abstract descriptions are not enough; people will need specifics. If you tell John and Jane Public that the AI will implement their CEV, they'll look at you like you're nuts. If you tell them that this will, as a special case, solve almost all of the problems they currently worry about - their health, their stressed lifestyles, the problems they have with their marriage, the dementia that grandpa is succumbing to, and so on - then you might be on to something.

I always got emotionally invested in abstract causes, so it was enough for me to perceive the notion of a way to make things better - not just somewhat better, but as good as it gets. About two years ago, when the exhausting routine of university was at an end, I got generally bored and started idly exploring various potential hobbies: learning Japanese, piano, and the foundations of mathematics. I was preparing to settle down in the real world. The idea of AGI, and later FAI (understood and embraced only starting this summer, despite the availability of all the material), as a perceived ideal target gave focus to my life and linked the intrinsic worth of the cause to a natural enjoyment in the process of research. The new perspective didn't suck out my soul, but nurtured it. I don't spend time contemplating specific stories of the better future; I need to understand more of the basic concepts before I have any chance of seeing specifics about the structure of goodness. For now, whenever I see a specific story, I prefer an abstract expectation that there is a surprising better way quite unlike the one depicted.

I'm currently reading Global Catastrophic Risks by Nick Bostrom and Milan Ćirković, and it's pretty scary to think of how easily everything could go bad and how we could all live through very hard times indeed.

That kind of reading usually keeps me from having my soul sucked into this imagined great future...

Firstly, there is the task of convincing some "elite" group of potential FAI coders that the task is worth doing.

Not all the object-level work that needs to be done is (or requires the same skills as) FAI programming – not to mention the importance of donors and advocates.

This might be a worthy goal if you think that the path of technological development will be significantly influenced by public opinion and politics.

...in a desirable way. Effective SL3 "pro-technology" activism seems like it would be very dangerous. I doubt that advocacy (or any activity other than donation) by people who need detailed predictions to sustain their motivation (not just initiate it) has any significant chance of being useful.

@ nick t: I'd be interested to see the justification for the claim that pro-technology activism would be very dangerous. Personally, I'm not convinced either way. If it turns out that you're right, then I'd say that this little series on Fun Theory has probably gone far enough.

One argument in favor of pro-rationalist/pro-technology activism is that we cannot rely on technology developing in a way that is conducive to SIAI or some other small group keeping control of things. Robin has argued for a "distributed" singularity based on economic interdependence, probably via a whole host of BCI and/or uploading efforts, with the main players being corporations and governments. In that scenario, a small elite group of singularitarian activists would basically be spectators, while a much larger global H+ movement would have influence. A possible counterargument is that such a large organization would make bad decisions and have a negative influence, due to the poor average quality of its members.

I really liked this post. Not sure if you meant it this way, but for me it mostly applies to imagining / fantasizing about the future. Some kinds of imagining are motivating, and they tend to be more general. The ones you describe as "soul-sucking" are more like an Experience Machine, or William Shatner's _Tek_ (if you've had the misfortune to read any of his books).

For me this brings up the distinction between happiness (Fun) and pleasure. Soul-sucking is very pleasurable, but it is not very Fun. There is no richness, no striving, no intricacy - just getting what you want is boring.

ShardPhoenix - I agree that concreteness is important, but there is still a key distinction between concrete scenarios that motivate people to work to bring them about, and concrete scenarios that people respond to by drifting off into imagination and thinking "yeah, that would be fun."

