
July 21, 2008

Comments

"So, across the whole of the Tegmark Level IV multiverse - containing all possible physical universes with all laws of physics, weighted by the laws' simplicity"

This is not evidence. There is no way you can extract an answer to the question that doesn't influence your decision-making. Weighting the universes is epiphenomenal. We can only look at the laws and content of our world in the search for answers.

In the last similar thread, someone pointed out that we're just talking about increasing existential risk in the tiny zone where we observe (or reasonably extrapolate) each other existing, not the entire universe. It confuses the issue to talk about destruction of the universe.

Really this all comes back to Joy's "grey goo" argument. I think what needs to be made explicit is weighing our existential risk if we do or don't engage in a particular activity. And since we're not constrained to binary choices, there's no reason for that to be a starting point, unless it's nontransparent propaganda to encourage selection of a particular unnuanced choice.

A ban on the production of all novel physics situations seems more extreme than necessary (although the best arguments for that should probably be heard and analyzed). But unregulated, unreviewed freedom to produce novel physics situations also seems like it would be a bit extreme. At the least, I'd like to see more analysis of the risks of not engaging in such experimentation. This stuff is probably very hard to get right, and at some point we'll probably get it fatally wrong in one way or another and all die. But let's play the long odds with all the strategy we can, because the alternative seems like a recursive end state (almost) no matter what we do.

Has anyone done an analysis to rule out existential risks from the possibility of time travel at LHC? Also, what about universe creation (in a way not already occurring in nature), which raises its own ethical issues?

Both of these do seem very improbable; I would bet that they can be ruled out completely through some argument that has not yet been spelled out in a thorough way. They also seem like issues that nonphysicists are liable to spin a lot of nonsense around, but that's not an excuse.

One response to the tougher question is that the practicality of advising said species to hold on to its horse saddles a wee bit longer is directly proportional to the remaining time (AGI_achieved_date minus today's_date) within the interval between transistors and AGI. If it's five years beyond transistors and thousands of years yet to go to AGI, with a low likelihood of ever succeeding, then maybe the reward attributable to a breakthrough (and a breakout from some theoretically insufferably stagnant present) outweighs the risk of complete annihilation (which is maybe not all that bad, or at least fear- and pain-free, if it's instantaneous and locally universal). If, on the other hand, there is good reason to believe that said species is only 5, 50, or maybe even 500 years away from AGI that could prevent Global Inadvertent Total Annihilation (GITA? probably not a politically correct acronym; ITA is less redundant anyway), perhaps the calculus favors the annoying yet adaptive modicum of patience that a temporary "limit up" in the most speculative physics futures market would represent.

Heh, somebody's been reading Unicorn Jelly ...

Silverton,

Biological intelligence enhancement and space travel (LHC on Mars?) do not appear to be thousands of years away.


Funny you should mention that thing about destroying a universe by breaking a triangular world plate.....
http://unicornjelly.com/uni296.html

Eliezer: "Or an even tougher question: On average, across the multiverse, do you think you would advise an intelligent species to stop performing novel physics experiments during the interval after it figures out how to build transistors and before it builds AI?"

But--if you're right about the possibility of an intelligence explosion and the difficulty of the Friendliness problem, then building a novel AI is much, much more dangerous than creating novel physical conditions. Right?

I would advise them that if they were to do novel physics experiments, they should also take time to exercise, eat less, sleep well, and be good to others. And to have some fun. Then at least they would probably have improved their experience of life regardless of the outcome of their novel experiments. That advice might also lead to clearer insights for their novel experiments.

Off for my day hike into the wilderness :)

But what if we'll need the novel-physics-based superweapons to take out the first rogue AI? What a quandary! The only solution must be to close down all sciencey programmes immediately.

I am fond of this kind of multiverse reasoning. One place I look for inspiration is Wolfram's book A New Kind of Science. The book can be thought of as analogous to the early naturalists' systematic exploration of the biological world, with their careful diagrams and comparisons, and their attempts to identify patterns, similarities, and differences that would later become the foundation of the taxonomy we know today. Wolfram explores the multiverse by running a wide variety of computer simulations. He is often seen as using only cellular automaton (CA) models, but this is not true; he tries a number of computational models and finds the same basic properties in all of them.

Generally speaking, there are four kinds of universes: static, repeating, random, and complex. Complex universes combine stability with a degree of dynamism. It seems that only complex universes would be likely abodes of life.
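
As a concrete illustration, here is a minimal Python sketch of the kind of survey Wolfram runs: evolve a few elementary cellular automata from random initial conditions and watch which of the four behaviour classes each falls into. The rule numbers are standard textbook examples of each class, chosen here purely for illustration.

import random

def step(cells, rule):
    """One synchronous update of an elementary CA on a ring."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run(rule, width=64, steps=32, seed=0):
    """Print the space-time diagram of one rule as rows of '#' and '.'."""
    random.seed(seed)
    cells = [random.randint(0, 1) for _ in range(width)]
    for _ in range(steps):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells, rule)

# Commonly cited examples: rule 0 is static (class 1), rule 4 is repeating
# (class 2), rule 30 is random (class 3), rule 110 is complex (class 4).
for rule in (0, 4, 30, 110):
    print("--- rule %d ---" % rule)
    run(rule)

Running this makes the classification visible at a glance: rule 0 blanks out immediately, rule 4 freezes into isolated persistent cells, rule 30 stays noisy, and rule 110 produces the long-lived localized structures that mark the complex class.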

The question is whether there are likely to be universes which are basically stable, with predictable dynamics, except that when certain patterns and configurations are hit, there is a change of state, and the new pattern is the seed for an explosive transition to a whole new set of patterns. And further, this seed pattern must be quite rare and never hit naturally. Only intelligence, seeking to explore new regimes of physics, can induce such patterns to exist. Further, the intelligence does not anticipate the explosive development of the seed; it doesn't know the physics well enough.

From the Wolfram perspective, it seems that few possible laws of physics would have these properties, at least if we weight the universes by simplicity. A universe should have the simplest possible laws of physics that allow life to form. For these laws to incidentally have the property that some particular arrangement of matter/energy would produce explosive changes, while other similar arrangements would do nothing, would seem to require that the special arrangement be pre-encoded into the laws. That would add complexity which another universe without the special arrangement encoding would not need, hence such universes would tend to be more complex than necessary.

Maybe this is naive of me, but why would you not just do the standard act-utilitarian thing? Having all of future scientific knowledge before intelligence augmentation is worth, let's say, a 10% chance of destroying the world right now; future physics knowledge is 10% of total future knowledge; knowledge from the LHC is 1% of future physics knowledge; so to justify running it, the probability of it destroying the world has to be less than 10^-4. The probability of an LHC black hole eating the world is the probability that the LHC will create micro black holes, times the probability that they won't Hawking-radiate away or decay in some other way, times the probability that if they survive they eat the Earth fast enough to be a serious problem, which product does indeed work out to much less than 10^-4 for reasonable probability assignments given available information, including the new Mangano/Giddings argument. Repeat the analysis for other failure scenarios and put some probability on unknown unknowns (easier said than done, but unavoidable). Feel free to argue about the numbers.
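
Spelling the arithmetic out as a sketch (the first three numbers are the guesses stated above; the black-hole factors are hypothetical placeholders, not estimates taken from the Mangano/Giddings analysis):

# Threshold: how improbable does LHC-doom have to be before running it is justified?
value_of_future_knowledge = 0.10  # all future scientific knowledge ~ worth a 10% chance of destroying the world now
physics_share             = 0.10  # future physics knowledge is 10% of total future knowledge
lhc_share                 = 0.01  # LHC knowledge is 1% of future physics knowledge

threshold = value_of_future_knowledge * physics_share * lhc_share
print(threshold)  # ~1e-4

# The doom probability factors into a product of conditionals
# (placeholder numbers, for illustration only):
p_micro_black_holes = 1e-2  # LHC creates micro black holes at all
p_no_decay          = 1e-4  # they fail to Hawking-radiate or otherwise decay
p_eats_earth_fast   = 1e-2  # surviving holes accrete the Earth quickly enough to matter
p_doom = p_micro_black_holes * p_no_decay * p_eats_earth_fast

print(p_doom, p_doom < threshold)  # 1e-08 True under these placeholder numbers

The structure, not the particular numbers, is the point: swap in your own factors and see whether the product clears the threshold.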

steven: do you mean "(having all of future scientific knowledge [ever]) before intelligence augmentation" or "having [now] (all of future scientific knowledge before intelligence augmentation)"? Also, if (physics|LHC) knowledge is X% of (all|physics) knowledge by bit count, it doesn't follow that it has X% of the value of (all|physics) knowledge; in particular, I would guess that (at least in those worlds where the LHC isn't dangerous) high-energy physics knowledge is significantly less utility-dense than average (the utility of knowledge currently being dominated by its relevance to existential risk).

"Yes," if Schrodinger's Cat is found to be dead; "no" if it is found to be alive.

"a ban on all physics experiments involving the production of novel physical situations"
Each instant brings a novel physical situation. How do you intend to stop the flow of time?

Shall we attempt to annihilate speech, lest someone accidentally pronounce the dread and fearsome name of God? Someone might try to write it - better cut off all fingers, too.

I am deeply honored.

As am I.

But I am deeply dishonored, having turned out to be the dead Schrodinger's Cat.

It's interesting to me that you refer to "AI" as a singular monolithic noun. Have you fleshed out your opinion of what AI is in a previous post?

An alternative view is that our intelligence is made of many component intelligences. For example, a professor of mine is working on machine vision. Many people would agree that a machine that can distinguish between the faces of the researchers that built it would be more "intelligent" than one that could not, but that ability itself does not define intelligence. We also have many other behaviors besides visual pattern recognition that are considered intelligent. What do you think?

"Or an even tougher question: On average, across the multiverse, do you think you would advise an intelligent species to stop performing novel physics experiments during the interval after it figures out how to build transistors and before it builds AI?"


Yes I would. What's tough about this? Just a matter of whether novel physics experiments are more likely to create new existential risks, or mitigate existing ones in a more substantial manner.

Actually, for it to be justified to carry out the experiments, it would also be required that there weren't alternative recipients for the funding and/or other resources, such that the alternative is more likely to mitigate existential risks.


Let's not succumb to the technophile instinct of doing all science now because it's cool. Most science could well wait a couple of thousand years if there are more important things to do. We know of good, as-yet-undone ways to (1) mitigate currently existing existential risk and (2) increase our intelligence/rationality, so there is no need to go expensively poking around in the dark searching for unexpected benefits while we haven't yet reached for the low-hanging benefits we can already see. Let's not poke around in the dark before we have exhausted the relatively easy ways to increase the brightness of our light sources.
