
December 25, 2008

Comments

Isn't it a little late to worry about that kind of problem? Anything close enough to be a potential threat is close enough that the signals we have already sent have reached it or will reach it. What we're sending out now will either reach something that has already been alerted by previous signals or something too far away to be a real threat.

Thinking about this with care, Robin, why would aliens be a threat to us anyway?

Previously you've argued that rational beings would be more interested in trade than war - that's what you said to counter the fear that the direct hand-coded AI would exterminate us, iirc - why wouldn't this hold likewise for the little green men when they arrive?

Also, since presumably any beings intelligent and advanced enough to arrive would most likely have achieved their own Singularity, couldn't we feel fairly confident they would trade Singularity-related technology with us?

This makes me think that instead of dampening signals, we should increase them. Thus we could benefit from alien technologies sooner. The work therefore might be to ensure that we are ourselves Friendly when they arrive?

frelkins, I'd never say war or genocide is impossible - I've said it is less likely between groups that share the same legal and social institutions for within-group coordination. Tall people and short people who intermingle well are unlikely to go to war. Aliens from far away share very few institutions with us.

Ok Robin, that makes sense to me - it's true, they could be hostile of course. But is there any decent way to think about how to form a fair estimate of that probability?

What then comes to mind: are there any plausible ways to consider what institutions space-faring, BBC-watching, classical-music-broadcast-listening aliens might have?

Then we could find points of commonality now to reduce the likelihood of conflict and increase the possibility of trade. Obviously it would be to our benefit to have our diplomatic ducks in a row before the grays arrive.

Assuming of course, that aliens travel.

The question seems to be whether or not any alien or artificial intelligence places a negative utility on destroying other life. This acts as a "barrier to entry" against destroying us even if doing so is of positive utility to them.

@nazgulnarsil

"even if it is of positive utility to them"

Hmm, nn. I'd rather not have to rely on anyone's purity of heart - xenomorality. This draws me to consider if we should direct a METI that outlines capitalism and the benefits of xenotrade.

If aliens are monitoring broadcasts to learn about us either for xenobiology or possible conquest, it might be best to attempt to send them Adam Smith if they don't already know it. This would probably be the most crucial, useful institution we could share at first contact to ensure peace, no?

I think the overwhelming current evidence is that any aliens who could get to us aren't going to distinguish us much from the rest of the matter in the region, and that the most likely outcome is that we'll be von Neumannized.

Seems to me we should be more explicitly considering these negative costs of radar astronomy.

I don't think we'll alert that many aliens in a century, and by then we'll probably have started our own stellar expansion, transcended, or died out.

It's not entirely beyond the bounds of possibility that an alien civilization might watch our radar signals with equanimity but be moved to xenocide by seeing a content-rich signal, going to the trouble of working out what it is, and finding that their reward for doing so is ... Keanu Reeves. :-)

I'm mostly joking, of course, but there's a serious point there too: the probability that a signal will get us into trouble isn't the same as the probability that it will reach a potential troublemaker with enough strength to be detected. Some signals might be more trouble-provoking than others. And I don't think I'd choose, as something to send out deliberately to the stars, a depiction of one species threatening another with extermination.

And the point isn't just that we might be *first detected* as a result of the TDTESS broadcast, but that someone who's already detected us might see it and not like what they see.

If you worry about aliens, you're not a rationalist.

Bill and Anonymous, why can't far aliens be a potential threat? Sure, the threat would be realized later, but it could still be realized.

Hopefully, you have overwhelming evidence on the preferences of aliens?

Vladimir, rationalists don't put probability zero on non-excluded possibilities.

Rationalists don't worry about all the possibilities they put a non-zero probability on. Alien waves seem like the kind of thing that happens once every couple of billion years at most.

Robin, are you susceptible to Pascal's wager? How is it different from aliens?

On the main topic, anti-METI people aren't idiots and so I'd bet they have some counterargument to "unintentional signals are stronger" that Shostak isn't representing.

All, our knowledge of bio isn't strong enough to say with much confidence what density of alien origination events to expect in our space-time region. Yes, we don't see much going on out there, but to use that to infer things about the density and preferences of aliens nearby, you must resort to social science. I'm telling you (again) as a social scientist that while we do know valuable things that can change your expectations, we cannot be very confident about such inferences. So you just can't be as confident as many of you seem to be that there aren't aliens around out there.

Frelkins, Robin doesn't think you get a hard takeoff. Aliens from far away arrive with a huge tech advantage.

(Reads further.)

Robin, I'm surprised that you cite common institutions rather than tech advantage as the distinguishing factor in why you fear aliens more than AIs. I re-express my interest in a post from you on why you think advanced unsympathetic Bayesian agents, governed under a legacy legal system that allocates, say, 10% of systemic capital to archaic semi-Bayesian agents, would not coordinate to remove that system. I've had similar conversations with Steve Omohundro, but he talked about punishment of nonpunishers (a very scary phrase to me) and continuous thought monitoring, not about coordination problems.

Well, if super-intelligent AIs are viable, we're probably more likely to encounter alien AIs than aliens themselves.

Any semi-rational agents in a heterogeneous world benefit from the division of knowledge and labor (i.e., trade). Given the extent of division of knowledge needed for space travel, it would seem unlikely that any aliens encountered wouldn't have institutions of private property, even if that property is owned by hives and not individual creatures. So I don't think sending them Adam Smith will tell them anything they don't already know.

Eliezer, to me the question seems to come down to whether or not we have many highly-specialized AIs, or AIs with highly-general intelligence. The former could easily be too specialized to understand how to re-create a legal system that protects them after the old one is gone, or to completely understand the actions and motivations of other AIs (I think this scenario explains human law pretty well; few of us have accurate knowledge of our shared institutions).

"radar astronomy is an important and indispensable component of the asteroid hazard and defense system."

We have an asteroid hazard and defense system?

Does it involve Bruce Willis?

If an alien race is technologically ahead of us it's overwhelmingly likely that it is at least 100,000 years ahead of us. If these aliens cared about intelligent life on other planets they would have long ago sent out probes to planets that might support life. It would probably be very cheap for these advanced aliens to send out billions of such probes that could probably travel close to the speed of light. So chances are that any such alien race that is within 1,000 light years of us has already figured out that there is intelligent life on earth.

Any aliens that will be alerted to our presence by signals we have not already sent out will not arrive for over 100 years (unless they have an FTL drive of some sort). Do you really think that the risk of unfriendly aliens arriving that far in the future is worth worrying about AT ALL?

billswift, given the number of posters here who expect to be alive and still in their relative youth in 100 years, yes. If you expect to live 10,000 years, an existential threat 1% of your lifespan away is roughly the same as having nine months to live given current lifespans. If you think our species will probably be gone in a century, then this is definitely not a priority.
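To spell out that proportion as a rough check (assuming, for the comparison, a current lifespan of about 75 years, a figure not given in the comment):

$$\frac{100\ \text{years}}{10{,}000\ \text{years}} = 1\%, \qquad 1\% \times 75\ \text{years} \approx 0.75\ \text{years} \approx 9\ \text{months}$$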

@billswift

"Do you really think that the risk of unfriendly aliens arriving that far in the future is worth worrying about AT ALL?"

Sure, in a reasonably proportional amount. Wouldn't doing otherwise be a mistake?

But as Robin points out, maybe we are biased to think about "an inevitable march toward a theory-predicted global conflict with an alien united them." Maybe any space-farers will more likely be "trillions of quirky future creatures not fundamentally that different from us, focused on their simple day-to-day pleasures."

In which case we have 100 years to get ready to do business with them. As a result of your post billswift, I'm now thinking we should perhaps have a permanent METI channel advertising eco-tourism. Aliens will probably just want a nice beach vacation with mojitos after a quick stop at XenoDisney.

Interview with a Famous Scientist:

Q: Do you think there is life on other worlds?
A: Oh, yes, almost certainly.

Q: Do you think intelligent life exists on other worlds?
A: Statistically, I think it is quite likely.

Q: Do you think interstellar travel is possible?
A: I think it will be in perhaps our distant future, say in several thousand years.

Q: So, if life on Earth is about 4 billion years old, and the universe is about 14 billion years old, there could be vast numbers of civilizations in our galaxy ahead of us by the few thousand years required to be able to travel between stars?
A: Yes, the statistics indicate that such might indeed be the situation.

Q: So what do you think when you hear about people seeing what they think might be alien spaceships, you know, UFOs?
A: They are all either crazy, or hoaxers, or they are unfamiliar with common astronomical phenomena like meteors, or the planets.

Q: But surely in this day and age, when almost everyone is familiar with common astronomical phenomena, who could report something totally different as a UFO?
A: Simple. They are the ones that are crazy, or hoaxers.

I intend to be alive and active in 100 years. My point was that with increasing knowledge our capabilities in 100 years will likely be such that we would not be vulnerable to an attack, especially one mounted across interstellar distances. There is risk, but it has nothing to do with anything we may be sending out now. The risk is that something is already on the way, either because it has already detected our signals or just through random bad luck that it's headed here. See Ringo & Taylor's "Von Neumann's War" for a recent fictional depiction.

While the huge number of star systems out there with planets makes it highly likely that there is life elsewhere, and some form of intelligent life somewhere out there, the lack of any pickup of anything looking like an intelligent transmission by the long-running SETI project is not very encouraging about there being much of the latter anywhere nearby in our galaxy, or even pretty far away in our galaxy. Somebody might be listening to us, but they do not seem to be sending anything out on their own. Given how difficult it is to get life going and then to get multi-cellular life going, intelligent life out there may in fact be very scarce.

Also, as long as Einstein remains correct and the speed of light is an essential limit on velocity, interstellar travel remains very unlikely or difficult.

Of course, if one wishes to accept that perhaps there are civilizations much more advanced than ours, able to overcome the speed-of-light limit, and even some kind of galactic or inter-galactic civilization that maintains some kind of higher order as in so many sci-fi series, then it would not be illogical to have had an outbreak of visitations after 1945, coinciding with the big outbreak of UFO sightings, not all of which have been explained. In 1945 we humans detonated nuclear weapons, something that a higher-order interplanetary civilization would presumably keep track of in "developing" planetary civilizations; indeed, that is the theme of the original "The Day the Earth Stood Still."

Not sure I agree with this, but here is one argument that worrying about aliens might not be "bad guy bias" but rather a very reasonable worry (not something we should spend every day in fear of; more like the worry that we might be wiped out by an asteroid or comet hitting the earth without alien intervention):

http://sites.inka.de/mips/reviews/TheKillingStar.html
