
October 25, 2008

Comments

I don't know that it's that impressive. If we launch a pinball in a pinball machine, we may have a devil of a time calculating the path off all the bumpers, but we know that the pinball is going to wind up falling into the hole in the middle. Is gravity really such a genius?

Reminds me how real "abstract" concepts are when they get translated into simple physical events. The concept of winning, through the intermediary of abstract inference, gets translated into individual physical moves, which bring the environment into a winning state. The concept of a world champion captures the whole process, and translates into an expectation of winning, maybe into an action of betting. And if you can't accurately translate these concepts into individual moves, you can't bring about the winning moves.

"Is gravity really such a genius?"

Gravity may not be a genius, but it's still an optimization problem, since the ball "wants" to minimize its potential energy. Of course, there are limits to such reasoning: perhaps the ball will get stuck somewhere and never reach the lowest-energy state.
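To illustrate that caveat concretely, here is a minimal sketch (Python, with an invented double-well potential) of an energy-minimizing process that settles into a local minimum rather than the lowest-energy state:

    # Gradient descent on the double-well potential U(x) = (x^2 - 1)^2 + 0.3*x.
    # The global minimum sits near x = -1.04, but a ball released on the
    # right-hand slope rolls into the shallower minimum near x = +0.96
    # and stays there.

    def grad_U(x):
        # dU/dx for U(x) = (x**2 - 1)**2 + 0.3 * x
        return 4 * x * (x**2 - 1) + 0.3

    x = 1.5        # starting position, on the right-hand slope
    step = 0.01    # step size
    for _ in range(10_000):
        x -= step * grad_U(x)

    print(f"settled at x = {x:.4f}")  # ~ +0.96: stuck in the local minimum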

Nature continually runs its own optimisation process. It maximises entropy. That's why water runs downhill, why gas expands into a vacuum, and why pinballs fall down their holes. For details, see my Bright Light essay, and the work of Roderick Dewar.

I wouldn't count gravity (or the pinball machine) as an optimization process, and I think Eliezer would say the same, only better. Once you start saying that gravity, gas expansion, etc. are optimization processes, it kinda sounds like *everything* is an optimization process, which isn't very useful.

The pinball machine question is still useful because it helps refine the concept. I'd say the difference between the pinball machine and Kasparov is that once you know the probability distributions for every individual "choice" of the machine (whether a ball will bounce left or right, etc.), you know all about the machine and can use those distributions to prove the ball will reach the bottom. (If it's a bit hard to imagine for a pinball machine, imagine a simplified model with a bunch of tubes going around, and some "random" nodes where the ball may fall left or right, like a pinball or Pachinko machine, but with a countable number of paths.)

Unlike the pinball machine, having a probability distribution for each of Kasparov's choices isn't enough to predict the result of the Kasparov-Eliezer game.

You need to know that Kasparov is trying to win to predict the result.

There's no equivalent knowledge for the pinball machine: having a model of each of its possible "choices" is enough.


You still need to know about gravity to make local predictions about the path of the ball in the pinball machine, but unlike for Kasparov, you don't need to think at all about future or final states.
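To make that simplified model concrete, here is a minimal sketch in Python (the fair left/right probability at each peg is an assumption of mine). It is a simulation rather than a proof, but it shows a purely local model of every "choice" already settling where the ball ends up, with no reference to goals or final states:

    import random

    # A simplified Pachinko board: the ball passes ROWS of pegs, and at
    # each peg it falls left or right according to a purely local
    # probability.  Nothing here mentions future or final states, yet
    # every ball ends in *some* bottom slot after ROWS steps.

    ROWS = 12
    P_RIGHT = 0.5  # assumed local distribution at each peg

    def drop_ball():
        slot = 0
        for _ in range(ROWS):
            slot += 1 if random.random() < P_RIGHT else 0
        return slot  # which bottom slot (0..ROWS) the ball lands in

    counts = [0] * (ROWS + 1)
    for _ in range(10_000):
        counts[drop_ball()] += 1

    print(counts)  # a binomial-shaped pile; every ball reached the bottom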


A similar phenomenon arises in trying to bound the error of a numerical computation by running it using interval arithmetic. The result is conservative, but sometimes useful. However, once in a while one applies it to a slowly converging iterative process that produces an accurate answer. Lots of arithmetic leads to large intervals even though the error to be bounded is small.
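A minimal sketch of that failure mode (Python; the toy Interval class and the choice of Newton's reciprocal iteration are mine, for illustration): the underlying iteration converges, but because the variable appears twice in each step, naive interval bounds widen at every step:

    # Naive interval arithmetic applied to Newton's iteration for 1/a,
    # x_{n+1} = x_n * (2 - a * x_n).  Starting from a point, the real
    # iteration converges quadratically to 1/3, but the interval version
    # suffers the "dependency problem": the enclosure roughly doubles in
    # width each step, even though it always contains the true 1/3.

    class Interval:
        def __init__(self, lo, hi):
            self.lo, self.hi = lo, hi

        def __mul__(self, other):
            ps = [self.lo * other.lo, self.lo * other.hi,
                  self.hi * other.lo, self.hi * other.hi]
            return Interval(min(ps), max(ps))

        def __rsub__(self, scalar):   # scalar - interval
            return Interval(scalar - self.hi, scalar - self.lo)

    def scale(iv, c):                 # c * interval, for c > 0
        return Interval(c * iv.lo, c * iv.hi)

    a = 3.0
    x = Interval(0.30, 0.35)          # encloses the answer, 1/3
    for n in range(5):
        x = x * (2.0 - scale(x, a))
        print(f"step {n}: width = {x.hi - x.lo:.4f}")  # 0.1, 0.2, 0.4, ...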

We also have concepts such as "wealth" and "power" which describe an agent's ability to achieve its goals. Will you be distinguishing "intelligence" from these, or are they synonyms for your purposes?

Pinball machines are optimization processes for moving quarters in through a little slot and out through a little locked door.

@Nominull:
Not impressive? The genius is in making a pinball machine that works. Even in the simplest case, a 'plinko' game, the board, Kasparov, and the driver are all finely tuned results of some other optimization process.

Gravity may not be a genius, but it's still an optimization problem, since the ball "wants" to minimize its potential energy.

Using the terms as Eliezer has, can you offer an example of a phenomenon that is NOT an optimization?

I feel like I've read this exact post before. Déjà vu?

Once you start saying that gravity, gas expansion, etc. are optimization processes, it kinda sounds like *everything* is an optimization process, which isn't very useful.

Entropy-generating processes are the ones that perform optimisation - specifically, processes where there is a range of different outcomes with different entropies. Gravity barely qualifies - but friction certainly does - and it is not gravity but friction that keeps the pinball down.

And my work, in a sense, can be viewed as unraveling the exact form of that strange abstract knowledge we can possess; whereby, not knowing the actions, we can justifiably know the consequence.

You need to learn control theory. There is nothing strange about the situation you describe: this is a characteristic of all control systems, and hence of pretty much any prediction you try to make about living organisms (or AIs).

If I set the room thermostat to 20°C, I can predict what the temperature will be in the room for the indefinite future. I will not be able to predict when it will turn the heating on and off (or in hotter places, the air conditioning), because that will depend on the weather outside, which I cannot predict, and the number of people or other power sources in the room, about which I may know nothing. I can do no better than a very mushy prediction that the heating will be turned on for a greater proportion of the night than the day, and less during a LAN party. Not knowing the actions, we can justifiably know the consequence. This is an entirely unmysterious fact about control systems -- it is what they do.
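A minimal sketch of that situation (Python; the room model and all constants are invented for illustration): the heater's on/off actions track unpredictable weather, yet the room temperature lands near the setpoint on every run:

    import random

    # A bang-bang thermostat in a toy room model.  The outside
    # temperature follows a random walk we cannot predict, so neither
    # can we predict when the heater switches on or off.  The
    # consequence is predictable anyway: the room ends up near the
    # 20 degree setpoint, while the on/off tally varies run to run.

    SETPOINT = 20.0
    LEAK = 0.05   # fraction of the indoor/outdoor gap lost per step
    HEAT = 1.5    # degrees added per step while the heater is on

    def run(steps=500):
        t_out, t_room, on_count = 5.0, 12.0, 0
        for _ in range(steps):
            # unpredictable weather, kept within a wintry range
            t_out = max(-5.0, min(15.0, t_out + random.uniform(-0.5, 0.5)))
            heater_on = t_room < SETPOINT      # the controller's "action"
            on_count += heater_on
            t_room += LEAK * (t_out - t_room) + (HEAT if heater_on else 0.0)
        return t_room, on_count

    for trial in range(3):
        t_room, on_count = run()
        print(f"trial {trial}: room = {t_room:.1f} C, heater on {on_count} steps")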

It is also isomorphic to the situations of predicting that Kasparov will beat Mr. G and that the driver will arrive at the airport. People act so as to achieve goals. The actions will depend on the ongoing circumstances: each action will be whatever is effective at that moment in reaching the goal. In all three examples, the circumstances cannot be predicted, therefore the actions cannot be predicted. Not knowing the actions, we can justifiably know the consequence.

We can justifiably know the consequence because the result is not merely a consequence of the actions: the actions were chosen to produce the result. That is why we need not know of the actions. We only need know that there is a system in place that is able to choose such actions. Even when it is a black box, we can tell that there is a control system inside by observing the very fact that the result is consistent while the actions vary.

For the thermostat, we know exactly how it chooses its actions. For the driver's task, we almost know enough to duplicate the feat, if not the actual mechanism in the driver. For Kasparov, we know almost nothing about how he wins at chess. But this is merely ignorance, not mysteriousness. His track record demonstrates that he can defeat all ordinary masters of the game. Knowing that he can, we can predict that he will.

Ordinarily one predicts by imagining the present and then running the visualization forward in time.

On the contrary, imagining the present and running the visualisation forwards is a very bad method of making predictions. You can only do it successfully for very simple systems, such as the planets orbiting the sun, and it doesn't work very well even for that (replacing "imagining" by "measuring" and "running the visualization forward in time" by "numerically solving Newton's laws by iterating through time"). It does not work at all for any control system, because its actions of the moment depend not only on the goal, but also on the current situation.

For example:

Last Friday, a meet-up of OB people was announced for Saturday. Several announced on OB that they would be there. I predict that the meeting took place, and that most of those who said they would be there were. If the meeting did happen, presumably everyone who went made a similar prediction, and set out expecting to meet the others.

Did anyone who made such a prediction do so by imagining the present and working forwards? I cannot see how such a thing could be done. You might not know -- I certainly do not -- where each of them would have been beforehand, what means of transportation they had available, and so on. I have no information to base such an imagining on. Those who went might be better informed about each other, but not to the extent of carrying out the computation. What I do know is the goals that were expressed, and on that basis alone I can predict that most of those goals were accomplished. I predict the result, while predicting none of the actions that caused the result, and imagining no trajectory linking the RSVPs to Saturday evening.

Richard,
You're making the exact point Eliezer just did, about how modeling the effects of intelligence doesn't generally proceed by running a simulation forward. The "ordinarily" he speaks of, I assume, refers to the vast majority of physical systems in the Universe, in which there are no complicated optimization processes (especially intelligences) affecting outcomes on the relevant scales.

Patrick(orthonormal): The "ordinarily" he speaks of, I assume, refers to the vast majority of physical systems in the Universe, in which there are no complicated optimization processes (especially intelligences) affecting outcomes on the relevant scales.

My point is that modelling the effects of unintelligence doesn't generally proceed by running a simulation forward either. No intelligence and no optimisation processes, complicated or otherwise, need be present for the system to be unpredictable by this method. The room thermostat is not intelligent. My robot is not intelligent. Neither do they optimise anything. Here is Eliezer's own example of an "ordinary" system:

Ordinarily one predicts by imagining the present and then running the visualization forward in time. If you want a precise model of the Solar System, one that takes into account planetary perturbations, you must start with a model of all major objects and run that model forward in time, step by step.

But this is, in fact, not how astronomers precisely predict the future positions of the bodies of the Solar System. They do not "run a model forward in time, step by step". Instead, from observations they compute a set of parameters ("orbital elements") of the closed-form solution to the two-body problem (the second body being the Sun), then add perturbative adjustments to account for interactions between bodies. This is a more accurate method, at least when the bodies do not perturb each other too much. It also allows future positions to be predicted without computing any of the intermediate positions -- without, that is, running a model forwards in time.
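A minimal sketch of the elements-based method (Python; the element values are invented, and perturbations are omitted): given orbital elements, the position at any future time follows from solving Kepler's equation directly, with no stepping through intermediate times:

    import math

    # Two-body prediction from orbital elements: the position at an
    # arbitrary future time is computed directly, not by iterating a
    # model forward step by step.  (Real ephemerides layer perturbative
    # corrections for body-body interactions on top of this solution.)

    MU = 1.0  # gravitational parameter, in units where G*M_sun = 1

    def position_at(t, a, e, M0, t0):
        """In-plane (x, y) at time t, for semi-major axis a,
        eccentricity e, and mean anomaly M0 at epoch t0."""
        n = math.sqrt(MU / a**3)        # mean motion
        M = M0 + n * (t - t0)           # mean anomaly at time t
        E = M                           # solve Kepler's equation
        for _ in range(20):             # E - e*sin(E) = M, by Newton
            E -= (E - e * math.sin(E) - M) / (1 - e * math.cos(E))
        x = a * (math.cos(E) - e)
        y = a * math.sqrt(1 - e**2) * math.sin(E)
        return x, y

    # Jump straight to t = 1000; no intermediate positions are computed.
    print(position_at(1000.0, a=1.5, e=0.2, M0=0.0, t0=0.0))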

So not only does forward simulation not work at all for intelligent systems, neither does it work at all for unintelligent control systems, and it does not even work very well for a bunch of dumb rocks.

Both forward simulation and the method of elements and perturbations are mathematically derived from Newton's laws. How is it that the latter method can predict where an asteroid will be at a point in the future, not only without computing its entire trajectory, but more accurately than if we did? It is because Newton's laws tell us more than the moment-to-moment evolution. They mathematically imply long-range properties of that evolution that allow these predictions to be made.
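For one familiar long-range property (a back-of-the-envelope illustration of mine, not from the comment above): for a frictionless ball, conservation of energy, which follows from Newton's laws, fixes the speed at the bottom of the table from the height drop alone, whatever path the ball takes off the bumpers:

    import math

    # Energy conservation: v^2 = v0^2 + 2*g*(height drop), independent
    # of the trajectory.  (Invented numbers; a real pinball also loses
    # energy to friction and the bumpers, which would need more care.)

    g = 9.81      # m/s^2
    v0 = 0.5      # launch speed, m/s
    drop = 0.12   # height drop from launch point to drain, m

    v_drain = math.sqrt(v0**2 + 2 * g * drop)
    print(f"speed at the drain: {v_drain:.2f} m/s, for any bumper path")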
