
October 02, 2008

Comments

The Harlem Renaissance section here is a striking example of raising your goals enough to make success possible rather than just putting out the usual amount of effort.

With the VP debate going on tonight, I'm curious about one thing. The media have been going on and on about how there are lower expectations for Palin because Biden is a veteran debater and Palin's recent debate opponents have been moose. [I may be confusing stories here.] I wonder: what does that constant attention to the lower expectations for her actually do to those expectations? If everyone's been made aware that not much is expected in the debate, does that have the effect of raising the expectations? Or does it simply remind more people to expect less?

I've written an essay about the wirehead problem - check it out if you are interested: http://alife.co.uk/essays/the_wirehead_problem/

Tim, interesting stuff. Only thing missing is a foolproof definition of exactly what sort of optimization process constitutes wireheading and what doesn't. This is a very basic gripe, I know, but an important one nonetheless.

You and I would look at an alien paperclip maximiser and say 'Wirehead'. However, the paperclip-worshipping civilisation that built it would think of it as a perfectly sensible, ethically justified system - a utility maximiser. Ditto our AI that reorganises the solar system to maximise computation without harming any living being. Great for us, bad for solar orbit maximisers. Every optimisation process is someone's wirehead. Utility is in the eye of the agent.

The essence of wireheading is bypassing the evaluative functions and producing reward sensations directly.

A paperclip-manufacturing AI wouldn't be wireheaded by definition, because it has to look at the world and detect certain configurations in order to feel rewarded. A wireheaded AI wouldn't care about external conditions at all -- it would just feel great, all the time, for no reason, and regardless of whatever else was happening.
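To make that distinction concrete, here's a minimal toy sketch in Python (my own illustration, not anyone's actual proposal; the function names and world representation are hypothetical). The first reward function has to evaluate external conditions; the second bypasses evaluation entirely:

    # Toy contrast between a reward coupled to the world and a
    # "wireheaded" reward. All names here are hypothetical.
    def paperclipper_reward(world_state):
        # Reward depends on evaluating external conditions: the agent
        # has to look at the world and count paperclips.
        return float(world_state.get("paperclips", 0))

    def wireheaded_reward(world_state):
        # The evaluative step is bypassed: the input is ignored and
        # maximal reward is produced unconditionally.
        return float("inf")

    world = {"paperclips": 7}
    print(paperclipper_reward(world))  # 7.0 -- tracks the world
    print(wireheaded_reward(world))    # inf -- feels great regardless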

Can we have a month where we don't talk about paperclips and renegade AIs?

A wireheaded AI wouldn't care about external conditions at all - it would just feel great, all the time, for no reason, and regardless of whatever else was happening.

Maybe - though the example of the heroin addict suggests things are not necessarily always that simple. Curt once gave Enron as an example of a company that had stopped trying to make shareholder profits (the normal utility function for such a company). Yet it still acted as though it had some preferences - it behaved as though it wanted to cover up its accounting scam for as long as possible.

I haven't tried to formally define wireheading in the essay - but it doesn't seem critical to understanding the basic problem. Essentially, wireheading is a change resulting in the generation of reward for something that "shouldn't" generate reward - to the point where decidedly odd behaviour results. (Interpret "shouldn't" as you will.) There's also a kind of "negative" wireheading based on pain-killing.

How about an answer to the Prediction v Explanation puzzle?

Hi. In 3 weeks there'll be even more Bayesians in the Bay Area than usual. Would anyone, locals or visitors, potentially be interested in an informal OB meetup near to that weekend, possibly on an evening or the Sunday?

On the (recently closed) Awww(ful)-thread: The cluster of authors from one IP is (obviously) a fictional persona - a hard core singularitarian transhumanist who takes things a bit too seriously and a bit too far, in particular, the "deny humanity, deny yourself, transcend biology" memeplex. Awful? It's supposed to be awful. But why is it awful? Why are such convictions, the denial of the primacy of human values, needs, and instincts, disagreeable? Non-fictionally, I'll be a fan of Eliezer forever, supporting his freedom of choice, whether abstinence, girlfriend, or one that self-replicates indefinitely. I can't speak for the other guys who mentioned such ideas - are there really people who think like that, in RL?

One of the things I have scheduled - it remains to be seen if I'll get there, because I'm already on overtime - is a sequence on Fun Theory. That answers the objection with respect to the future of humanity. In current practice, the answer is that it isn't necessarily true that you can get more scientific work done without an SO; that will vary depending on temperament, resources, and of course the girlfriend in question. The calculation is worth doing, but it's not a foregone answer one way or the other, and I have no intention of going into the details in my case.

"that will vary depending on temperament, resources, and of course the girlfriend in question"

Including various psychological factors like security/insecurity, aloofness, neuroticism, depressiveness, and other miscellaneous psychobabble.

A paperclip-manufacturing AI wouldn't be wireheaded by definition, because it has to look at the world and detect certain configurations in order to feel rewarded. A wireheaded AI wouldn't care about external conditions at all -- it would just feel great, all the time, for no reason, and regardless of whatever else was happening.
This relies on being able to distinguish internal and external worlds. If the paperclipper is so powerful that you might as well call the solar system its "body", how is detecting configurations in the nearby physical world different from detecting impulses in your brain?
The cluster of authors from one IP
How do you see an author's IP?
Why are such convictions, the denial of the primacy of human values, needs, and instincts, disagreeable?

Because there's no light in the sky, outside of humanity, for values to come from. If you reject all our evolved preferences as philosophically invalid, what's left?

Would anyone, locals or visitors, potentially be interested in an informal OB meetup near to that weekend, possibly on an evening or the Sunday?

Yes. (Visitor, don't know yet exactly which days.)

How do you see an author's IP?

[read thread.] Oh. Never mind.

Thought for the day--probability theory and decision theory push us in different directions: induction insists that you cannot forget your past; the sunk cost fallacy demands that you must.

Recovering: "Would anyone [...] be interested in an informal OB meetup near [Oct. 25]?"

Count me in!

Nick Tarleton: If "right" is just whatever people value, that means that if you kill everyone who doesn't have moral value X, X automatically becomes true.

I guess the paperclip AI isn't so bad after all! Once all the humans are dead, there's nowhere for value to come from but the AI itself, so...

anti: No. The notion, as I understood it, amounted to this:

When we say "should/moral/etc..." we mean something. We may not fully be able to articulate that meaning, and we may have trouble working out what actually fulfills the various criteria corresponding to that, but to the extent that there is an associated meaning/question/computation encoded into our brains that's associated with the relevant words, that's what we ought to appeal to.

That does _not_ mean it's "oh, whatever people happen to value."

It's more the notion that the term "morality" refers to something specific. It happens to be that people tend to value this stuff called morality. And the beings that don't value it, well... they're by definition immoral, so there's a limit to how much their opinion 'should' count. ("Should", of course, being a word that translates to whatever those partly "black box" criteria of morality ultimately turn out to be.)

To the extent one rejects this notion, one's going to have trouble talking about morality at all. I mean, presumably you mean something by the word, even if you can't articulate precisely what, and even if you can't at this time accurately determine what the outcome of the "morality computation" would be. To the extent that it does mean something specific (that is, that it's a lever in your mind attached to a certain "black box" that computes morality), it doesn't matter what people think is moral.

(Did that come out relatively clearly?)

I.e., recall the distinction between a calculator trying to compute the answer to the question "what's 3 + 5?" and a calculator trying to compute the answer to the question "what does this calculator think is the correct answer to 3 + 5?" The latter can be more or less anything, but the former has a unique answer.
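A minimal Python sketch of that calculator distinction (my own hypothetical illustration, not from the thread):

    # "What is 3 + 5?" has one right answer, fixed by arithmetic,
    # no matter which device computes it.
    def three_plus_five():
        return 3 + 5  # uniquely determined: 8

    # "What does THIS calculator think 3 + 5 is?" is answered by
    # whatever the calculator in fact outputs, bugs and all.
    def buggy_calculator(a, b):
        return a + b + 1  # a miscalibrated device

    def what_this_calculator_answers():
        return buggy_calculator(3, 5)  # 9: "correct" only about itself

    print(three_plus_five())               # 8 (unique answer)
    print(what_this_calculator_answers())  # 9 (could be anything)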

Eliezer: have you read or do you plan to read Anathem, the new Neal Stephenson novel? It has elements that remind me of your conspiracy stuff.

Regarding: Past behavior, the one you used to know. 99

Please don't tell me this moment is not a bias:)

http://ca.youtube.com/watch?v=8OyD_ZfqXXw


Anna:) My abstract view

@Psy-Kosh

"It happens to be that people tend to value this stuff called morality."

Not really.

I have no immediate peers. Do a lot of OB readers have the same problem? I still stand by my decision to stay out of college, but I wish I could be around people with similar interests. I do have friends in college, but none of them are as passionate as I am about my interests: math, writing, reading, good movies and music.... Yet I feel like I have many years to go before I can contribute anything to any of these fields (meaningful or not, I feel obligated to try).

This is one of a few places I can go and feel among "my people." How many of you are like this?

Something interesting I saw recently on the subject of economics:

This Economy Does Not Compute

My intuition suggests that the kind of modeling described in this article should be extremely valuable. As there are many economists who read this blog, I'd like to hear what they think.

Thomas Ryan: "[...] How many of you are like this?"

At least one! I'm a lonely dropout-cum-generalist-autodidact as well. You can email me at: zack m davis {-at-} yahoo point cahm (no spaces; you will forgive these cumbersome antispam measures) if you want to talk.

I'll be in the Bay Area for a month, roughly 10/14 to 11/17, so yes, I'm happy to join people for an OB meet-up, preferably relatively early in that period.

I'm sorry readers have had to endure another month of straw-men, misconceptions, non sequiturs, ideology and superficial analysis.

---

In particular, the idea that intelligence is somehow reducible to a purely functional description ('Bayesian Induction', 'Optimization') could be a *big* mistake. That's *one* aspect of intelligence - *optimization* is a big insight, to be sure - but I don't for one moment believe that it's sufficient to encompass a full definition of intelligence. A more abstract (higher-level) description would base intelligence on *the aesthetics/elegance/simplicity of ontological representations*. I don't for one moment believe that calculation of semantic similarities (the basic operation at the ontological level of abstraction) is reducible to Bayesian Induction, although of course there would have to be a Bayesian component to it.

What if it turns out that Bayesian induction is not sufficiently general to fully encompass intelligence? In summary: knock over the 'Bayesian Induction' domino, and the rest of the AGI stuff posted here would collapse like a house of cards. Do you EY fan-boys realize that?

--

Libertarian fan-boy faith has also come crashing down, with the US economy in near-total meltdown. Such crashes have been a common feature of 'free' markets as far back as records go. Thank goodness the Libertarian ideology promoted by many self-proclaimed *geniuses* here will never be implemented.

The moral of all this is this great quote:

"Conservatism is suspicious of thinking, because thinking on the whole leads to wrong conclusions, unless you think very, very hard."

-Roger Scruton

@ Thomas Ryan You're not alone

Thomas Ryan, Z. M. Davis:

I would think there are a good number of OB readers like this, myself included; although I was homeschooled and am seriously considering college, I find autodidacticism pretty appealing. [My email is naxicasa {-at-} gmail point cahm, if you care.]

Regarding wireheads:
The first question asked in your article, Tim, is "How can we prevent wireheads from arising?"

Why do we want to prevent them from arising?

Indeed, I'm a wirehead and I like it!

:)

Joe, I'd like to sell you a drug that will make you believe you're posting comments to Overcoming Bias. This will be much more convenient than actually posting them.

Eliezer, are you implying there is some goodness that can't be simulated?

Did I interpret that wrong?

Eliezer, are you implying there is some goodness that can't be simulated?
You can't buy integrity. You can't simulate the value of reality. For any method X, you cannot use X to produce 'goodness' incompatible with X.

In the movie Pi, whenever the protagonist gets stuck, he restates his premises:

1. Mathematics is the language of nature
2. Everything around us can be represented and understood through numbers
3. If you graph the numbers of any system, patterns emerge
4. Therefore, there are patterns everywhere in nature
Hypothesis: Within the stock market there are patterns as well.

If that movie was about an AI researcher, what would his premises be?

To Doug S:

You may be interested in this reply to Buchanan; if you are interested in agent-based computational economics, go here.

@Ian

"In the movie Pi, whenever the protagonist gets stuck"

Ian C., may I point out that Max, the protagonist in Pi, is crazy? He is paranoid, delusional, self-harming, and as Darren has stated, "addicted" to a self-created monster. I wouldn't look to Max as a reliable guide to any sane endeavor. Unless you are truly interested in the premises of a lunatic AI researcher?

@fr: Max may have been crazy, but I don't think making your premises explicit is crazy. I would say most AI researchers presume:

1. Intelligence is general
2. Reductionism is true
3. A Von Neumann machine can do everything a human brain can

Max wasn't crazy. He was afflicted with chronic migraines, possibly resulting from his intuitive understanding of the mathematics behind a very strange attractor that may be involved in the spontaneous generation of life.

Unless your premise is that the entirety of the movie was nothing more than a series of hallucinations, Max was perfectly sane. Impressive, given that he was pursued by both Wall Street executives and hysterical Kabbalists.

Recovering irrationalist: an informal [Bay Area] OB meetup near to [the weekend 24-26 Oct] possibly on an evening or the Sunday?


Nick Tarleton: Yes.

Z. M. Davis: Count me in!

michael vassar: yes I'm happy to join


Cool, 4 so far, any more?

OK, time to cast anonymity to the winds. Anyone interested who hates posting, mail me at... *eyes Spambot* ... cursor_loop 4t yahoo p0int com.

As someone who recently moved from a tiny Northern place to London (Wow, real live Bayesians and Transhumanists running wild!) I can confirm, this kind of stuff is much more interesting face-to-face, much more motivating, and definitely worth encouraging!

If I get more replies I'll ask Robin if we can do a meetup post, then we can decide when, where and what.

Mike (aka Recovering)

Eliezer, are you implying there is some goodness that can't be simulated?

Another holodeckist (holodecker?? holodeckard??). Gosh, there are lots of them!

I just saw the list of 2008 Ig Nobel winners.

I'm impressed by how much compartmentalization the brain can do. For example, I have a quite different viewpoint on politics, society, and the world when discussing these things with my roommate than I do back at home. Yet, both viewpoints seem to be built of very strong and genuine convictions. It seems we humans will vary much of our thought depending on the group we are around.

I'd be interested to hear more analysis of mating and reproduction, though I don't have anything specific to contribute at this point.

@Roger Scruton

Please avoid inflammatory rhetoric that doesn't contribute. Save that for YouTube commentary. Thanks.

The reality of the simulated happiness of a wirehead seems beside the original issue. A wirehead may well be genuinely ecstatic - but that's a problem for everyone else, since it typically shorts out their motivation circuits and prevents them from usefully contributing to society. Potential wireheads are unreliable - they regularly need a factory reset. Products like that would suffer from poor reviews and reduced sales.

About the Ig Nobel winners: I think it is quite amazing that slime molds are capable of learning.

@Lars

You know I was never a fan of Roger Scruton, but I confess his Xanthippic Dialogues is quite witty.

Ig Nobel:

PHYSICS PRIZE. Dorian Raymer of the Ocean Observatories Initiative at Scripps Institution of Oceanography, USA, and Douglas Smith of the University of California, San Diego, USA, for proving mathematically that heaps of string or hair or almost anything else will inevitably tangle themselves up in knots.
REFERENCE: "Spontaneous Knotting of an Agitated String," Dorian M. Raymer and Douglas E. Smith, Proceedings of the National Academy of Sciences, vol. 104, no. 42, October 16, 2007, pp. 16432-7.

If only one could extract energy, or any utility, out of knotting...

I'm interested in the evolutionary origin of grief and the fear of death, considering it seems to be a driving force behind much of what Eliezer and other transhumanists are working toward.

Sure, I get choked up emotionally when I think of a loved one dying, but other emotional responses, such as a queasy fear of public speaking, I try to overcome. What makes one emotional response appropriate and another something to be squashed? The prevention of death for all humanity has not grabbed me intellectually the way it seems to have grabbed other thinkers. I'm curious why not.

Can anyone recommend this book?

The prevention of death for all humanity has not grabbed me intellectually, like it seems to have done for other thinkers.

IMO, the most probable way the "death" problem will be fixed is to use machines whose brains can be backed up and copied. Since I doubt there will be very many uploads, full implementation of that solution will most likely entail the eventual deaths of most humans.

Will: My case may help to understand all these responses. I have a normal aversion to self-harm, but no fear of death. I have an attachment disorder whereby I'm extremely unwilling to be emotionally open with people. I also have no fear of public speaking at all. So it could be that these are all linked together.

