
December 09, 2008


The main part you're leaving out of your models (on my view) is the part where AIs can scale on hardware by expanding their brains, and scale on software by redesigning themselves, and these scaling curves are much sharper than "faster" let alone "more populous". Aside from that, of course, AIs are more like economic agents than humans are.

My statement about "truly selfish humans" isn't meant to be about truly selfish AIs, but rather, truly selfish entities with limited human attention spans, who have much worse agent problems than an AI that can monitor all its investments simultaneously and inspect the source code of its advisers. The reason I fear non-local AI fooms is precisely that they would have no trouble coordinating to cut the legacy humans out of their legal systems.

'Are'? I'd think 'will be' would be a better verb choice, since no AIs currently exist.

Likewise, it is difficult to determine what AIs might or might not be, since we know so little about what would be necessary to create them and what limits exist on their properties.

Eliezer, economists assume that every kind of product can be improved, in terms of cost and performance, and we have many detailed models of product innovation and improvement. The hardware expansion and software redesign that you say I leave out seem to me included in the mind parts that can be bought or sold. How easy it is to improve such parts, and how much better parts add to mind productivity, is exactly the debate we've been having.

The assumption of exogenous secure property rights is a problem in this discussion, especially in light of various economic literatures that treat government policy and property rights as endogenous.

There should be a fair bit of historical data on the kinds of innovations we expect AIs to make to improve themselves (faster chips, better algorithms), and how much those innovations cost in equipment, time, researchers, education levels, etc.

We talk about technological hurdles and steep payoffs abstractly. Maybe we should just pretend that an AGI was developed decades ago, and figure out how long it would take it to get to where we are, if it took roughly the same path.

"unless economists have reasons to be more realistic in a context, they usually assume people are identical, risk-neutral, live forever, have selfish material stable desires, know everything, make no mental mistakes, and perfectly enforce every deal. Products usually last one period or forever, are identical or infinitely varied, etc."

I know some Austrians who would disagree with almost every word of this.

Suppose a spherical cow...

The usual complaint is that your models neglect relevant factors, make false assumptions, turn out to be empirically wrong, and you keep following models instead of reality.

For example, the Ricardian comparative advantage theory of trade, which you have praised many times on this blog, clearly predicts that most trade will happen between countries with significantly different economies. In reality most trade happens between virtually identical developed economies, which makes no sense whatsoever under a Ricardian analysis.

So how can economists base their advice (in this case: more free trade, always) on models like that, which fail all empirical tests? If you start including relevant factors and fixing false assumptions, so that the models finally make correct predictions, how do you know they will still predict that free trade is good for everyone in every situation?

Ricardian trade theory and free-trade advice are just one example. It's infuriating how economists keep doing this all the time: making up theories that have simple math but match reality very badly, and then using them to advocate policies.

Re: why fear?: Viruses can wipe out whole species - it does not seem intrinsically silly to consider the possibility of something like that happening to us with a malevolent superintelligent agent. Of course, it does seem very likely that humans would successfully act to prevent such an event - but that doesn't mean it isn't worth considering.

The other main associated problematical scenario involves success at enslaving the superintelligences - but then failing to build a migration path for surviving humans. As the superintelligences ascend they would inevitably compete with humans for resources. The humans would have become superintelligences - if they wanted to retain their role as the dominant organisms.

Genetic engineering and uploading appear to be the paths - but genetic engineering of humans builds on an appalling foundation, and doesn't make much sense - while uploading may arrive on the scene late - and uploads would have a hard time competing without very major reconstructive brain surgery.

If no path is successfully built for the humans, most of them will probably have a hard time economically - and will probably be pushed into the fringes of society. Something very similar seems likely - even if the machines are built in such a way that they love us, honour our every request, and do us no harm. In that case, we would eventually become like parasites on the machines - agents that suck resources, while providing little benefit to the hosts. That situation would probably lack long-term stability.

"As the superintelligences ascend they would inevitably compete with humans for resources."

This is elementary anthropomorphic bias. Are we speaking about superintelligence here, or about Hitler's WWII Germany?

Would you like to point to an intelligent entity which does not compete for resources?


1) Humans are the only known intelligent entities. If humans compete for resources, does it follow that any intelligent entity must?

2) If anything can be learned from history and economics, it is that the real price of resources (compared to the price of human labor) has been going down for at least the last 200 years. The reason is that smarter technology brings new ways to obtain more resources at lower prices. It is not too hard to extrapolate that this would accelerate if there were an AI capable of inventing even smarter ways to gather them.

Tim's post touches on probably my biggest disagreement with Eliezer, which is about what is worth saving.

I would have expected anyone who thinks a great deal about AI to agree with me, that what is worth saving is not our bodies, or our genes, but our values and aesthetics. That we should be at least as happy to transfer our memes to the next generation and die, as all previous human generations have been to transfer their genes to the next generation and die. But I would have been wrong.

(To the people who have protested in the past that Eliezer isn't talking about saving fleshly humans, I quote Eliezer from a recent post: "It's much easier to argue that AIs don't zoom ahead of each other, than to argue that the AIs as a collective don't zoom ahead of the humans. To the extent where, if AIs lack innate drives to treasure sentient life and humane values, it would be a trivial coordination problem and a huge net benefit to all AIs to simply write the statue-slow, defenseless, noncontributing humans out of the system.")

Regarding competing with humans for resources: in practice all organisms compete for resources - or else die out. Machines share our ecosystem. A molecule cannot be part of both a human and a machine - so there's a natural conflict of interest over who gets what between the gene-based entities and those based on the new replicators. I don't think this is anthropomorphism - rather it's based on the Malthusian idea of resource limitation.

"rather it's based on the Malthusian idea of resource limitation"

Actually, the Malthusian catastrophe that never happened - and the accepted explanation of why it did not - is the basis of my claim that for a superintelligence, resources are next to irrelevant.

"Machines share our ecosystem."

The relevant question is: how big is our ecosystem? If you count only the areas actually populated by people and the resources currently economically exploited, then yes, there might be a problem. But why would an AI, which does not depend on gravity, food, air, etc., compete for resources in the same area?

luzr, please read The Basic AI Drives.

Self-reproducing systems grow exponentially. Resources grow at best at t^3 (with the light cone). To escape Malthusian resource limitation, you need to limit growth - which has much the same effect as resource limitation (which also acts to limit growth).
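The exponential-vs-cubic point above can be sketched numerically. The constants below (doubling time, resource growth rate) are purely illustrative assumptions, not physical estimates; the qualitative result - exponential replication overtaking any t^3 bound - does not depend on them.

```python
# Sketch: exponential self-reproduction vs. cubic (light-cone) resource growth.
# All constants here are illustrative, not physical estimates.

def population(t, doubling_time=1.0):
    """Replicators doubling every `doubling_time` time steps."""
    return 2 ** (t / doubling_time)

def resources(t, rate=1000.0):
    """Reachable resources grow roughly with the volume of the light cone, ~ t^3."""
    return rate * t ** 3

# Find the first step at which demand outruns supply.
t = 1
while population(t) <= resources(t):
    t += 1
print(t)  # -> 24 with these illustrative constants
```

Raising `rate` by a factor of a million only delays the crossover by a few dozen doublings; that is the sense in which growth must be limited if resources are.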

Civilization has been resource limited from the beginning. For example, if you were not resource limited, winning a billion dollar lottery would have no effect on your actions.

In my view, machines are currently effectively competing with humans for energy, space, and many chemical elements - and have been doing so for at least 200 years.

Phil: Eliezer has been explicit on this on MANY occasions, to the point of claiming that uploads are a type of human, not a type of AGI for instance. I don't know why you seem stuck on misreading him.

Consider a counterfactual question: All humans suddenly have no intrinsic desire for status. Luxury becomes meaningless. Lust is replaced with an explicit desire to reproduce. Would current economic systems remain even remotely secure?

My suspicion would be no. Our obsession with social games is a massive distraction. It also focuses our competition in a relatively 'safe' arena. Without these training wheels, our political and economic assumptions would be irrelevant. AIs, or even later-generation ems, would have these differences. They would also have the self-modification differences that Eliezer mentioned above.

Assuming that humans would be remotely safe in such an environment is reckless. We have no reason to assume we'd even be kept around for our historical significance. Aesthetic attachment to historical creatures is another human quirk that AIs cannot be assumed to have.

Cameron, don't you think economists might know something about how behavior would change without status or luxury desires? And how exactly do you know ems or AIs would not have these things?

Assumption? No one made that assumption.

"Self-reproducing systems grow exponentially."

Not all.

"Resources grow at best at t^3 (with the light cone). To escape Malthusian resource limitation, you need to limit growth - which has much the same effect as resource limitation (which also acts to limit growth)."

Pushed to the limit, this of course is true. But, pushed to the limit, there is also a "final limit" to any growth, be it exponential or linear (the size of the reachable universe).

Any superintelligence worthy of the name should know that. So, when going through a FOOM, any AGI, if rational and with a sense of self-preservation, would be very careful not to kill its goal system by locking itself into exponential growth.

I still stand behind my point that resource conflicts would only be possible if our wannabe strong AGI is, uhm, kind of stupid....

"In my view, machines are currently effectively competing with humans for energy, space, and many chemical elements - and have been doing so for at least 200 years."

That is definitely true (although one might note that machine desires are a bit different; humans do not need as much iron, copper, and silicon). Anyway, the net effect of this development is that there are much MORE resources available, especially those relevant for human bodies and useless for machines.

BTW: Do you think that winning a "billion dollar lottery" makes you consume (a billion / your current income) times more raw resources?

"Cameron, don't you think economists might know something about how behavior would change without status or luxury desires?"

Robin, I expect there is work at the fringes of economics that would give valuable insight into that situation. Could you point me at a significant paper on that explicit topic that you consider worthwhile and makes the kind of assumptions and reasoning that I may benefit from?

Unfortunately, I also know that the disadvantage of expertise is that it tends to make people overconfident in their understanding of things outside their field. When it comes to commenting outside the bounds of their professional knowledge, I expect experts in economics to overrate the importance of their field. It's what humans do.

Economic research and understanding is incredibly biased towards actual human behavior. Even work that deals with societies of specific counterfactual entities will be biased. People are less likely to publish conclusions that would be considered 'silly' and are more likely to publish theories that validate the core dogmas of the field. What incentive does an economics researcher have to publish a paper that concludes "almost all of our core political values as a profession wouldn't apply in this situation"? That's the sort of naivety that leaves someone either burnt out or ostracized soon enough.

PS: I agree the irony here is huge! It would be extremely frustrating to be constantly bombarded with claims that your 'homo economicus' assumption makes you irrelevant in the 'real world'. Then, to hear almost the reverse claim would be infuriating!

Nevertheless, I just don't feel comfortable with how casually you account for the change from biological humans to self-modifying AI, keeping even the legal system of property rights of their human creators. For my part, I would need some extremely strong arguments to convince me that humans can comfortably rely on legacy property rights to ensure their long term survival. Given the scope of possible actions superintelligent entities of unknown motives could take, assuming property rights for humans remain in the long term seems like science fiction.
