
February 08, 2009

Comments

Suffice it to say that I think the above is a positive move ^.^

scientists fight over the division of money that has been block-allocated by governments and foundations. I should write about this later.

Yes, you should. This is a very serious issue. In art, the artist caters to his patron. The more I see of the world of research in the U.S., the more I am disturbed by the common source of the vast majority of funding. Science is being tailored and politicized.

By my math it should be impossible to faithfully serve your overt purpose while making any moves to further your ulterior goal. It has been said that you can only maximize one variable; if you consider factor A when making your choices, you will not fully optimize for factor B.
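A toy illustration of that point (my own made-up options and weights, purely for the sake of argument): as soon as factor A gets any weight in the choice, the winner generally stops being the option that maximizes factor B.

```python
# Hypothetical options, each scored on two factors A and B.
options = {"x": (3, 10), "y": (9, 6), "z": (6, 8)}  # name: (A, B)

# Maximizing B alone picks one option...
best_for_b = max(options, key=lambda k: options[k][1])

# ...but optimizing any blend that also rewards A picks another.
best_blend = max(options, key=lambda k: 0.5 * options[k][0] + 0.5 * options[k][1])

print(best_for_b)   # 'x' -- the pure-B optimum
print(best_blend)   # 'y' -- giving A weight moves you off the B optimum
```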

What! NOOOOO. I've only been around two months, and I came *for* the singularity/AI stuff. Bring it back. Please!

I believe the relevant phrase is 'shut up and multiply', New Reader. :)

You only press the "Run" button after you finish coding and teaching a Friendly AI; which happens after the theory has been worked out; which happens after...

This sounds like the Waterfall model of software development, which is not well thought of these days.

If I had concrete ideas about how to make a strong AI, I'd start coding them at once. I'd only worry about Friendliness if and when what I'd actually built worked well enough to make this a serious question. Irresponsible? Maybe. But thinking out the entire theory before attempting implementation has no chance of producing any sort of AI. Look at the rivers of ink that have already been expended (Ben Goertzel's books, for example).

Working out the theory first is substituting an easy problem for a hard problem, and the rivers of ink are just another way of going crazy.

But I'd been working on directly launching a Singularity movement for years, and it just wasn't getting traction. At some point you also have to say, "This isn't working the way I'm doing it," and try something different.

Eliezer, do you still think the Singularity movement is not getting any traction?

(My personal opinion is it has too much traction.)

So long as the heart doth pulsate and beat,
So long as the sun bestows light and heat,
So long as the blood thro' our veins doth flow,
So long as the mind in knowledge doth grow,
So long as the tongue retains power of speech,
So long as wise men true wisdom do teach.
(from the depths of the internet, attributed to Prof. Haroun Mustafa Leon)

I will study what you write, in addition to my normal readings, in any case. The problem with programming, science, and math is that, in general, one doesn't know how long finding an answer will take.

Nominull, now imagine that your agents aren't perfect Bayesians and ask under what circumstances maximizing to first order fails to maximize to second order.

New Reader, there is a lot of stuff in the archives, and Less Wrong is going to try to make the archives substantially more accessible. Meanwhile, see here for links to a couple of indexes.

Kennaway, what works for launching a Web 2.0 startup doesn't necessarily work for building a self-modifying AI that starts out dumber than you and then becomes smarter than you, but on this I have already spoken. Besides, I don't think there's time to do things the ordinary stupid way, and plenty of AI researchers have already found out that 'I'll just write it and see if it works' tends not to generate human-level intelligence - though it sure generates labor.

Hollerith, if by that you're referring to the mutant alternate versions of the "Singularity" that have taken over public mindshare, then we can be glad that despite the millions of dollars being poured into them by certain parties, the public has been reluctant to take them up. Still, the Singularity Institute may have to change its name at some point - we just haven't come up with a really good alternative.

plenty of AI researchers have already found out that 'I'll just write it and see if it works' tends not to generate human-level intelligence

I've noticed. But the failure of hacking does not imply that its opposite must succeed, and it's been enough years since the "AGI" phrase was invented to start passing judgement on the new wave's achievements. Most writings on mental architecture look like word salad to me. The mathematical stuff like AIXI is all very well as mathematics, but I don't see any design coming out of it.
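For reference, the AIXI action-selection rule alluded to here, in Hutter's notation (reproduced from memory, so treat it as a paraphrase rather than a citation):

```latex
a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
       (r_k + \cdots + r_m)
       \sum_{q \,:\, U(q, a_{1:m}) = o_{1:m} r_{1:m}} 2^{-\ell(q)}
```

The innermost sum ranges over all programs q for a universal Turing machine U, weighted by length \ell(q), which is precisely why the definition is uncomputable and no implementable design falls out of it.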

There's nothing wrong with "empirical" research in computer programs, especially with complex systems. If you can get something that is closer to what you want, you can study its behavior and analyze the results, looking for patterns or failures in order to design a better version.

I know Eliezer hates the word "emergent", but the emergent properties of complex systems are very difficult to theorize about without observation or simulation, and with computer programs there's precious little difference between those and just running the damn program. Could you design a glider gun after reading the rules of Conway's game of life, without ever having run it?
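The contrast is easy to make concrete. Here is a minimal sketch of Life in Python (my own toy code, offered only to illustrate the question above): the complete rule set is a handful of lines, yet even verifying a plain glider is most naturally done by running it, never mind discovering a glider gun.

```python
from collections import Counter

def step(live):
    """Advance one generation. `live` is a set of (x, y) live cells."""
    # Count live neighbors for every cell adjacent to a live cell.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# A glider: after 4 steps it reappears shifted one cell down-right.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
g = glider
for _ in range(4):
    g = step(g)
print(g == {(x + 1, y + 1) for (x, y) in glider})  # True
```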

It's no way to write a safely self-modifying AI, to be sure, but it might be a valid research tool with which to gain insight on the overall problem of AI.

When a mistake might kill you, the rules are different.

Eliezer: brilliant post.

Kennaway: Working out the theory first is substituting an easy problem for a hard problem

Really? Would you also say that about physics? If not, can you give some other historical examples?

I think the idea of self-improving AI is advertised too much. I would prefer that a person have to work harder or have to have more well-informed friends to learn about it.

Peter de Blanc: Would you also say that about physics?

Probably not. I was talking specifically about AGI. Just compare the mountain of theorising (for example, here) with the paucity of results.

Actual, solid mathematical theories, that are hard to arrive at and predict things you can test, such as you get in physics, hardly exist in AI.

I think my greatest likelihood of strong involvement promoting rationality would be as the person working in the soup kitchen on behalf of the lawyer.

I'm not sure what the soup kitchen work would entail, or where the lawyers are.

I should write about this later.
I highly encourage you to. I find it an interesting topic that doesn't get enough attention (broad economic-style analysis of it, as opposed to direct participation, isn't part of public knowledge).

It seems you only need to recruit one billionaire (or someone who succeeds in becoming one). You've already done your part to raise the probability of achieving that to pretty close to 1. (I wonder how many readers of yours would OVERfund you if they became billionaires? Here's one.) I don't think you need to sell FAI or the Singularity any more. You can move on to implementation. The universe needs your brain interfacing with the problems, not the public.

Friendly AI is too dangerous for pretty much anyone to think about. It will create and reveal insanity in any person who tries, and carries a real risk of destroying the world if a mistake is made. The scary part is that this is very rapidly becoming an issue that demands a timely solution.
