
December 04, 2008

Comments

So what exactly are you concluding from the fact that a seminal model has some unrealistic aspects, and that the connection between models and data in this field is not direct? That this field is useless as a source of abstractions? That it is no more useful than any other source of abstractions? That your abstractions are just as good?

Eliezer, is there some existing literature that has found "natural selection not running into a law of diminishing returns on genetic complexity or brain size", or are these new results of yours? These would seem to me quite publishable, though journals would probably want to see a bit more analysis than you have shown us.

Here's Hans Moravec on the time-of-arrival of just the computing power for "practical human-level AI":

Despite this, if you contrast the curves on page 64 of "Mind Children" and page 60 of "Robot" you will note the arrival time estimate for sufficient computer power for practical human-level AI has actually come closer, from 2030 in "Mind Children" to about 2025 in "Robot."

Robin, for some odd reason, it seems that a lot of fields in a lot of areas just analyze the abstractions they need for their own business, rather than the ones that you would need to analyze a self-improving AI.

I don't know if anyone has previously asked whether natural selection runs into a law of diminishing returns. But I observe that the human brain is only four times as large as a chimp brain, not a thousand times as large. And that most of the architecture seems to be the same; but I'm not deep enough into that field to know whether someone has tried to determine whether there are a lot more genes involved. I do know that brain-related genes were under stronger positive selection in the hominid line, but not so much stronger as to imply that e.g. a thousand times as much selection pressure went into producing human brains from chimp brains as went into producing chimp brains in the first place. This is good enough to carry my point.

I'm not picking on endogenous growth, just using it as an example. I wouldn't be at all surprised to find that it's a fine theory. It's just that, so far as I can tell, there's some math tacked on that isn't actually used for anything, but provides a causal "good story" that doesn't actually sound all that good if you happen to study idea generation more directly. I'm just using it to make the point - it's not enough for an abstraction to fit the data, to be "verified". One should actually be aware of how the data constrains the abstraction. The recombinant growth notion is an example of an abstraction that fits, but isn't constrained. And this is a general problem in futurism.

If you're going to start criticizing the strength of abstractions, you should criticize your own abstractions as well. How constrained are they by the data, really? Is there more than one reasonable abstraction that fits the same data?

Talking about what a field uses as "standard" doesn't seem like a satisfying response. Leaving aside that this is also the plea of those whose financial models don't permit real estate prices to go down - "it's industry standard, everyone is doing it" - what's standard in one field may not be standard in another, and you should be careful when turning an old standard to a new purpose. Sticking with standard endogenous growth models would be one matter if you wanted to just look at a human economy investing a usual fraction of money in R&D; and another matter entirely if your real interest and major concern was how ideas scale in principle, for the sake of doing new calculations on what happens when you can buy research more cheaply.

There's no free lunch in futurism - no simple rule you can follow to make sure that your own preferred abstractions will automatically come out on top.

Moravec, "Mind Children", page 59: "I rashly conclude that the whole brain's job might be done by a computer performing 10 trillion (10^13) calculations per second."

But has that been disproved? I don't really know. But I would imagine that Moravec could always append, ". . . provided that we found the right 10 trillion calculations." Or am I missing the point?

When Robin wrote, "It is easy, way too easy, to generate new mechanisms, accounts, theories, and abstractions," he got it exactly right (it is not necessarily so easy to make *good* ones, but that isn't really the point).

This should have been clear from the sequence on the "timeless universe" -- just as that interesting abstraction is not going to convince more than a few credulous fans of the *truth* of that abstraction, the *truth* of the magical super-FOOM is not going to convince anybody without more substantial support than an appeal to a very specific way of looking at "things in general", which few are going to share.

On a historical time frame, we can grant pretty much everything you suppose and still be left with a FOOM that "takes" a century (a mere eyeblink in comparison to everything else in history). If you want to frighten us sufficiently about a FOOM of shorter duration, you're going to have to get your hands dirtier and move from abstractions to specifics.

"...what are some other tricks to use?" --Eliezer Yudkowsky
"The best way to predict the future is to invent it." --Alan Kay

It's unlikely that a reliable model of the future could be made since getting a single detail wrong could throw everything off. It's far more productive to predict a possible future and implement it.

Eliezer, the factor of four between human and chimp brains seems to me far from sufficient to show that natural selection doesn't hit diminishing returns. In general, I'm complaining that you mainly seem to ask us to believe your own new unvetted theories and abstractions, while I try when possible to rely on abstractions developed in fields of research (e.g., growth theory and research policy) where hundreds of researchers have worked full-time for decades to make and vet abstractions, confronting them with each other and with data. You say your new approaches are needed because this topic area is far from previous ones, and I say test near, apply far; there is no free lunch in vetting; unvetted abstractions cannot be trusted just because it would be convenient to trust them. Also, note you keep talking about "verify", a very high standard, whereas I talked about the lower standards of "vet" and "validate".

Robin, suppose that 1970 was the year when it became possible to run a human-equivalent researcher in realtime using the computers of that year. Would the further progress of Moore's Law have been different from that in our own world, relative to sidereal time? Which abstractions are you using to answer this question? Have they been vetted and validated by hundreds of researchers?

Eliezer, my "Economic Growth Given Machine Intelligence" does use one of the simplest endogenous growth models to explore how Moore's law changes with computer-based workers. It is an early and crude attempt, but it is the sort of approach I think promising.

I don't understand. If it is not known which model is correct, can't a Bayesian choose policies by the predictive distributions of consequences after marginalizing out the choice of model? Robin seems to be invoking an academic norm of only using vetted quantitative models on important questions, and he seems to be partly expecting that the intuitive force of this norm should somehow result in an agreement that his position is epistemically superior. Can't the intuitive force of the norm be translated into a justification in something like the game theory of human rhetoric? For example, perhaps the norm is popular in academia because everyone half-consciously understands that the norm is meant to stop people from using the strategy of selecting models which lead to emotionally compelling predictions? Is there a more optimal way to approximate the contributions (compelling or otherwise) of non-vetted models to an ideal posterior belief? If Eliezer is breaking a normal procedural safeguard in human rhetoric, one should clarify the specific epistemic consequences that should be expected when people break that safeguard, and not just repeatedly point out that he is breaking it.

Moravec, "Mind Children", page 68: "Human equivalence in 40 years". There he is actually talking about human-level intelligent machines arriving by 2028 - not just the hardware you would theoretically require to build one if you had the ten million dollars to spend on it.

You can hire a human for less than ten million dollars. So there would be little financial incentive to use a more expensive machine instead. When the machine costs a thousand dollars things are a bit different.

I think it misrepresents his position to claim that he thought we should have human-level intelligent machines by now.
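Moravec's "human equivalence in 40 years" figure can be reproduced with back-of-envelope Moore's-law arithmetic. The 1988 baseline capacity and the two-year doubling time below are illustrative assumptions of mine, not Moravec's published inputs, but they show how a 10^13 calculations/sec target yields an arrival date around 2028:

```python
import math

# Illustrative assumptions: ~10^7 calculations/sec available per affordable
# machine in 1988, with capacity doubling roughly every two years (a common
# Moore's-law gloss). Moravec's own derivation differs in detail.
baseline_year = 1988
baseline_ops = 1e7
target_ops = 1e13          # Moravec's whole-brain estimate (Mind Children, p. 59)
doubling_years = 2.0

doublings = math.log2(target_ops / baseline_ops)   # ~19.9 doublings needed
arrival = baseline_year + doublings * doubling_years

print(round(doublings, 1), round(arrival))  # ~19.9 doublings, ~2028
```

Under these stand-in numbers the hardware arrives about forty years after the book, matching the quoted page.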

Robin, I just read through that paper. Unless I missed something, you do not discuss, or even mention as a possibility, the effect of having around minds that are faster than human. You're just making a supply of em labor cheaper over time due to Moore's Law treated as an exogenous growth factor. Do you see why I might not think that this model was even remotely on the right track?

So... to what degree would you call the abstractions in your model, standard and vetted?

How many new assumptions, exactly, are fatal? How many new terms are you allowed to introduce into an old equation before it becomes "unvetted", a "new abstraction"?

And if I devised a model that was no more different from the standard - departed by no more additional assumptions - than this one, which described the effect of faster researchers, would it be just as good, in your eyes?

Because there's a very simple and obvious model of what happens when your researchers obey Moore's Law, which makes even fewer new assumptions, and adds fewer terms to the equations...

You understand that if we're to have a standard that excludes some new ideas as too easy to make up, then - even if we grant this standard - it's very important to ensure that the standard is applied evenhandedly, and not just invoked selectively against models that arrive at the wrong conclusions; only in the latter case does it seem "obvious" that the new model is "unvetted". Do you know the criterion - can you say it aloud for all to hear - that you use to determine whether a model is based on vetted abstractions?

'How many new assumptions, exactly, are fatal? How many new terms are you allowed to introduce into an old equation before it becomes "unvetted", a "new abstraction"?'

Every abstraction is made by holding some things the same and allowing other things to vary. If it allowed nothing to vary it would be a concrete not an abstraction. If it allowed everything to vary it would be the highest possible abstraction - simply "existence." An abstraction can be reapplied elsewhere as long as the differences in the new situation are things that were originally allowed to vary.

That's not to say this couldn't be a black swan - there are no guarantees - but going purely on the evidence, what other choice do you have except to do it this way?

"as long as the differences in the new situation are things that were originally allowed to vary"

And all the things that were fixed are still present of course! (since these are what we are presuming are the causal factors)

Steve, how vetted any one abstraction is in any one context is a matter of degree, as is the distance of any particular application to its areas of core vetting. Models using vetted abstractions can also be more or less clean and canonical, and more or less appropriate to a context. So there is no clear binary line, nor any binary rule like "never use unvetted stuff." The idea is just to make one's confidence be sensitive to these considerations.

Eliezer, the simplest standard model of endogenous growth is "learning by doing", where productivity increases with the quantity of practice. That is the approach I tried in my paper. Also, while economists have many abstractions for modeling details of labor teams and labor markets, our standard is that the simplest versions should use just a single aggregate quantity of labor. This one parameter of course implicitly combines the number of workers, the number of hours each works, how fast each thinks, how well trained they are, etc. If you instead have a one-parameter model that only considers how fast each worker thinks, you are implicitly assuming all these other contributions stay constant. When you have only a single parameter for a sector in a model, it is best if that single parameter is an aggregate intended to describe the entire sector, rather than a parameter of one aspect of that sector.
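The "learning by doing" idea can be caricatured in a few lines: productivity grows with cumulative practice, and labor enters only as one aggregate quantity. The functional forms and parameters below are made-up illustrations, not the model from Hanson's paper:

```python
# Minimal learning-by-doing sketch (illustrative assumptions throughout):
# output Y = A * L, and productivity A grows with cumulative output Q.
def simulate(years, labor, learning_rate=0.05, elasticity=0.5):
    A, Q = 1.0, 0.0
    for _ in range(years):
        Y = A * labor          # one aggregate labor quantity, per the standard
        Q += Y                 # cumulative practice to date
        A = 1.0 + learning_rate * Q ** elasticity   # learning by doing
    return A

# Doubling the aggregate labor input raises end-of-run productivity, but
# sublinearly, because learning works through cumulative output.
print(simulate(50, labor=1.0), simulate(50, labor=2.0))
```

The point of the aggregate is visible here: workers, hours, speed, and training all enter only through the single `labor` parameter.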

Also, while economists have many abstractions for modeling details of labor teams and labor markets, our standard is that the simplest versions should use just a single aggregate quantity of labor. This one parameter of course implicitly combines the number of workers, the number of hours each works, how fast each thinks, how well trained they are, etc.

If one woman can have a baby in nine months, nine women can have a baby in one month? Having a hundred times as many people does not seem to scale even close to the same way as the effect of working for a hundred times as many years. This is a thoroughly vetted truth in the field of software management.

In science, time scales as the cycle of picking the best ideas in each generation and building on them; population would probably scale more like the right end of the curve generating what will be the best ideas of that generation.

Suppose Moore's Law to be endogenous in research. If I have new research-running CPUs with a hundred times the speed, I can use that to run the same number of researchers a hundred times as fast, or I can use it to run a hundred times as many researchers, or any mix thereof which I choose. I will choose the mix that maximizes my speed, of course. So the effect has to be at least as strong as speeding up time by a factor of 100. If you want to use a labor model that gives results stronger than that, go ahead...
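The allocation argument above can be made concrete: with a 100x compute budget split among n researchers each running at speed 100/n, a planner simply picks the n that maximizes progress. The square-root penalty on team size below is a stand-in assumption for the Brooks-law-style scaling mentioned earlier in the thread, not an established law:

```python
# Illustrative model: progress rate for n researchers each sped up by
# factor s, with total compute n * s fixed at 100. Parallel researchers
# contribute only as sqrt(n) - an assumed stand-in for diminishing
# returns to team size ("nine women can't have a baby in one month").
def progress(n, budget=100.0):
    speed = budget / n               # per-researcher serial speedup
    return (n ** 0.5) * speed        # sqrt(n) effective researchers, each speed-fast

best_n = min(range(1, 101), key=lambda n: -progress(n))
print(best_n, progress(best_n))      # one maximally-fast researcher wins
```

Under any such concave penalty the chosen mix is at least as good as pure serial speedup, which is the "at least a factor of 100" floor claimed above.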

Didn't Robin say in another thread that the rule is that only stars are allowed to be bold? Can anyone find that line?

Consider the following. Chimpanzees make tools. The first hominid tools were simple chipped stones from 2.5 million years ago. Nothing changed for a million years. Then Homo erectus came along with Acheulean technology, and again nothing happened for a million years. Then, around two hundred thousand years ago, H. sapiens appeared and tool use really diversified. Brains had been swelling since 3 million years ago.

If brains had been getting more generally intelligent at that time as they were increasing in size, it is not shown. They may have been getting better at wooing women and looking attractive to men.

This info has been cribbed from The Red Queen, page 313, hardback edition.

I would say this shows a discontinuous improvement in intelligence, where intelligence is defined as the ability to *generally* hit a small target in search space about the world - rather than the ability to get into another hominid's pants.

Also, while economists have many abstractions for modeling details of labor teams and labor markets, our standard is that the simplest versions should use just a single aggregate quantity of labor.

Granted, but as long as we can assume that things like the number of workers, hours worked, and level of training won't drop through the floor, then brain emulation or uploading should naturally lead to productivity going through the roof, shouldn't it?

Or is that just a wild abstraction with no corroborating features whatsoever?

Eliezer, it would be reasonable to have a model where the research sector of labor had a different function for how aggregate quantity of labor varied with the speed of the workers.

Ben, I didn't at all say that productivity can't go through the roof within a model with well-vetted abstractions.

Well... first of all, the notion that "ideas are generated by combining other ideas N at a time" is not exactly an amazing AI theory; it is an economist looking at, essentially, the whole problem of AI, and trying to solve it in 5 seconds or less. It's not as if any experiment was performed to actually watch ideas recombining. Try to build an AI around this theory and you will find out in very short order how useless it is as an account of where ideas come from...

But more importantly, if the only proposition you actually use in your theory is that there are more ideas than people to exploit them, then this is the only proposition that can even be partially verified by testing your theory.

This is a good idea though. Why doesn't someone combine economics and AI theory? You could build one of those agent-based computer simulations where each agent is an entrepreneur searching the (greatly simplified) space of possible products and trading the results with other agents. Then you could tweak parameters of one of the agents' intelligences and see what sort of circumstances lead to explosive growth and what ones lead to flatlining.
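A toy version of such a simulation fits in a few lines. Everything here - the one-dimensional "product space", the intelligence parameter, the wealth metric - is a made-up illustration of the suggestion, not an existing model, and trading between agents is omitted for brevity:

```python
import random

# Toy agent-based economy: each agent searches a "product space" for
# high-value products; 'intelligence' is how many candidate products an
# agent can evaluate per period. (All details are illustrative assumptions.)
def run_economy(intelligences, periods=100, seed=0):
    rng = random.Random(seed)
    wealth = [0.0] * len(intelligences)
    for _ in range(periods):
        for i, iq in enumerate(intelligences):
            # sample iq candidate products, keep the best one found
            best = max(rng.random() for _ in range(iq))
            wealth[i] += best
    return wealth

# Tweak one agent's intelligence parameter and compare outcomes.
wealth = run_economy([1, 1, 10])
print([round(w, 1) for w in wealth])
```

Even this crude setup lets you ask the question posed above: vary one agent's parameter and watch whether its share of wealth grows explosively or flatlines.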

"...economics does not seem to me to deal much in the origins of novel knowledge and novel designs," and said, "If I underestimate your power and merely parody your field, by all means inform me what kind of economic study has been done of such things."

A popular professor at Harvard Business School told me that economists are like accountants: they go out on the field of battle, examine the dead and maimed, tabulate the fallen weapons, study the prints on the ground, and try to figure out what happened. Real people, the actors who fought the battle, are rarely consulted. The economists and accountants summarize what happened in the past, often with some degree of accuracy. But experience has taught us that asking them what will happen in the future begets less accuracy than a weather forecast. And yet economists have constructed extensive abstract theories that presume to predict outcomes.

I don't believe that applying more brain power or faster calculations will ever improve on this predictive ability. Such super-computations could only work in a controlled environment, but a controlled environment eliminates the genius, imagination, persistence, and irrational exuberance of individual initiative. The latter is unpredictable, spontaneous, opportunistic. All attempts to improve on that kind of common and diversified genius by central direction from on high have failed.

"Ideas" are great in the hard sciences, but as Feynman observed, almost every idea you can come up with will prove wrong. Super computational skills should alleviate the problem of sorting through the millions of possible ideas in the physical sciences to look for the good ones. But when dealing with human action, it is best to look, not at the latest idea, or concept, but at the "principles" we can see from the past 4,000 years of human societal activity. Almost every conceivable mechanism has been tested and those that worked are "near" and at hand. That record reveals the tried and true lessons of history. In dealing with the variables of human motivation and governance, those principles provide a sounder blueprint for the future than any supercomputer could compute.

This reminds me of the bit in Steven Landsburg's (excellent) book "The Armchair Economist" in which he makes the point that data on what happens on third down in football games is a very poor guide to what would happen on third down if you eliminated fourth down.

Eliezer -- To a first approximation, the economy as a whole is massively, embarrassingly parallel. It doesn't matter if you have a few very fast computers or lots of very slow computers. Processing is processing, and it doesn't matter if it is centralized or distributed. Anecdotal evidence for this abounds. The Apollo program involved hundreds of thousands of distributed human-scale intelligences. And that was just one program in a highly distributed economy. We're going to take artificial intelligences and throw them at a huge number of problems: biology (heart attacks, cancer, strokes, Alzheimer's, HIV, ...), computers (cloud computing, ...), transportation, space, energy, ... In this economy, we don't care that 9 women can't produce a baby in a month. We want a gazillion babies, and we're gloriously happy that 9 women can produce 9 babies in 9 months.

Robin -- But Eliezer's basic question of whether the general models you propose are sufficient seems to remain an open question. For example, you suggest that simple jobs can be performed by simple computers leaving the complicated jobs to humans (at the current time). A more accurate view might be that employers spend insignificant amounts of money on computers (1% to 10% of the human's wages) in order to optimize the humans. Humans assisted by computers have highly accurate long term memories, and they are highly synchronized. New ideas developed by one human are rapidly transmitted throughout society. But humans remain sufficiently separated to maintain diversity.

So, what about a model where human processing is qualitatively different from computer processing, and we spend money on computers in order to augment the human processing? We spend a fixed fraction of a human's wages on auxiliary computers to enhance that human. But that sorta sounds like the first phase of your models: human wages skyrocket along with productivity until machines become self-aware.

A welfare society doesn't seem unreasonable. Agriculture is a few percent of the U.S. economy. We're close to being able to pay a small number of people a lot of money to grow, process, and transport food and give the food away for free -- paid for by an overall tax on the economy. As manufacturing follows the path agriculture took over the past century and drops from being around 30% of our economy to 3%, we'll increasingly be able to give manufactured goods away for free -- paid for out of taxes on the research economy.
