
December 09, 2008


Quite possibly a good bias to have.

Indeed, I've found that people repeatedly ask me about AI projects with ill intentions - Islamic terrorists building an AI - rather than trying to grasp the ways that well-intentioned AI projects go wrong by default.


The point of the post is that real AI has intentions of its own, I believe.

I am much more worried about an artificial nano-replicator killing all humans because of some design bug or mutation, without any ill intentions involved, than about a superintelligent AI using nano-replicators with ill intent to kill all humans.

In fact, I hope that we will develop strong AI before nano-replicators - maybe only a true strong AI can prevent this kind of bug.

On consideration, I would like to suggest that this bias arises out of an overlap of three tendencies. Where they clash, this bias is the seemingly reasonable middle ground.

1 - language issues. We call plagues, earthquakes, etc. "acts of God." This gives the impression that nothing can be done about them, which is why it is hard, slow work to pass better building codes, enact public health measures, and such. The attribution to God appears to make any planning action we could take pointless, and may even subtly hint that the victims deserved it through "sin."

2 - control bias. People have too much confidence in their own control. So we are not so concerned with smoking & auto deaths, because the victim appeared to be in control of the smoking or driving behavior. For a long time, despite the health knowledge we had, we were reluctant to acknowledge that nicotine is actually addictive and to act rationally on that, even though the cost to the public purse was high. Anti-smoking efforts became effective only when the issue became "second-hand smoke" - that is, something another person did to you. Once the harm was removed from your sphere of control, society moved rapidly. We will suffer much to preserve this illusion of control.

3 - tribal justice. When "outsiders" - serial killers, terrorists - attack us, they deprive us of tribe members. We need tribe members to promote the survival of our lineage either as genetic relatives or allies to support our relatives. We need to prove to our allies that their loyalty is rewarded so we hunt down the outsiders and are angry for what they have cost us. In the past our allies were also more probably related to us, so they had many of our genes (& memes). If we don't play detective to find the bad guys our allies and genetic relatives will not trust us.

"Islamic terrorists building an AI"
Right, because there are so many brilliant Islamic terrorists who split their time between Al Qaeda and advancing cognitive science that there is a non-negligible chance that they will do so. What was the context?

Frelkins: I think that the "acts of God" terminology is not in common use among many populations, and in my observations these populations still show bad guy bias, even to the point of explicit beliefs that it's somehow worse to be killed than to die in other similarly violent ways. 3 begs the question: "Why don't allies distrust us if we are indifferent to their non-malevolently caused plight?" 2 is definitely valid, but is it enough by itself?

All esp Luzr: In the >H community concern with accidental damage, e.g. from grey goo, remained greater than concern over malevolent damage, e.g. from nanotech warfare, until careful fairly independent analysis from multiple trusted sources largely dismissed the former as a problem relative to the latter. Also, while uploads and maybe ems have something that humans would recognize as intent, neither designed AGI nor evolution do, and those seem to be the major dangers perceived here.


"What was the context?"

When I was in India this summer, my visit coincided with a spate of terrorist attacks in Bangalore, Assam, Gujarat, and other areas. In Bangalore - the tech center - a radical Muslim student group called SIMI claimed to be involved. They include elite engineering students, and much consideration was given to the idea that soon they would move from crude flowerpot bombs at bus shelters to sophisticated computer attacks on government and financial sites.

That groups like these are funded and supported by state actors outside India cannot be denied. Surely no one doubts the scientific capabilities of jihadists, insofar as they are coming from the best schools in India and abroad. If all it would take to build even a crude, low-level dangerous AI is a lot of money and a cadre of the dedicated, certainly Middle Eastern state actors have the oil wealth and can recruit from well-educated radicals in India, perhaps even the UK and Germany.

I think it is rational to list it as a catastrophic risk, although I would leave it to folks like Robin to prioritize it among the others.

"Also, while uploads and maybe ems have something that humans would recognize as intent, neither designed AGI nor evolution do."

Maybe it is just my poor English, but do you really mean that AGI does not have intent?

Do you suggest that intent is only reserved for humans?


Ah, sorry, finally got the point: you mean that most humans would not say that AGI has intent, not that AGI does not have one, correct?

OTOH, I believe that while this is true for the general population, many of the subgroup that frequents here are ready to consider that AGI has intent... (IMHO).

At a more mundane level than global catastrophe, there's apparently some evidence that people experience more pain if they perceive that the pain is intentionally inflicted.

The study seems to be based on self-reports, so it could be either a partial justification of the bad guy bias in some circumstances, or possibly just a result of it. Hard to tell without more "objective" measures...

Carl, I just seriously get asked that question all the time, and not just by reporters. "Islamic terrorists" are simply the default bad guys of our time.

I am surprised that the "not just reporters" need to name the bad guys.

Eliezer, just so long as the communists don't build it, nor the next default bad guys that come after the terrorists. Or the nazis and whoever the default bad guys were before them.

Although you must admit, if they do build an unfriendly, superintelligent, recursively improving AI with nano-replicators, then the terrorists will have already won. Ha, beat that, smart guy! (I am imagining Eliezer on a cable news network interview, on the split screen, as the person on the other side of the split makes that point with smug condescension.) (I think it works for some values of "unfriendly" and "won.")

Nations tend to focus far more time, money and attention on tragedies caused by human actions than on the tragedies that cause the greatest amount of human suffering or take the greatest toll in terms of lives.

There are dissimilarities which may justify this choice of focus as correct. The term "nation" might refer to government. While it may be popular nowadays for government to help people in case of natural catastrophe, the "regular job" of the government is often considered to be to enforce the law - i.e., go after bad guys, not go after earthquakes.

something clicks in our minds that makes the tragedy seem worse than if it had been caused by an act of nature, disease or even human apathy.

This is hard to measure. "Seem worse"? The way to measure this is surely by looking at reaction. And a dissimilarity in the cause may justify a dissimilarity in the reaction, even if the harm considered in isolation from the cause is equal.

We want to punish those who act and cause harm much more than those who do nothing and cause harm.

First, we should rationally punish those who are morally culpable, not merely those whose actions contributed causally to the harm. After all, it takes two to cause a head-on collision, but often only one of the participants is morally culpable (e.g. the person whose car was in the wrong lane). There is an important distinction to be made between those who are culpable and those who are not. And generally, those whose inaction "causes" a harm (in the sense that the person, by failing to act, fails to prevent it) are not culpable.

It would be hasty to call existing moral distinctions between those who are and are not culpable "biased". There are generally good reasons for these distinctions.

If you ask how much should victims be compensated, [we feel] victims harmed through actions deserve higher compensation.

Probably they are the only ones who deserve any compensation at all. For a person to deserve compensation, another person must be obligated to give them compensation. This other person is the person morally responsible for the harm. In the case of a natural disaster there is likely no person morally responsible for the harm, therefore no person obligated to give compensation to the victim. Therefore the victim is not entitled to compensation from any particular person. Therefore the victim is not entitled to compensation, period.

Yes the Friendly AI concern is not about ill intended creators, but it is about ill intended creations.

Right, I think the point at the end is that this bias makes us too concerned about unfriendly AI, because it fits into this "evil actor" model. One might also point to the many criticisms raised here over Robin's "ems" scenario along these lines, that the ems would be evil (or at least selfish and/or uncaring) and do things that would wipe out the human race or worse. Both of these concerns direct us to focus on the motivations of intelligent and powerful beings as a principal threat to our happiness. This framework is much the same as how people today view the threat of terrorism.

Even more naturalistic threats, like global warming or resource exhaustion, tend to be interpreted in a model based on "bad guy" actions. It's nobody's fault, really, that the world may be running out of oil, or capacity to absorb CO2 - it's just an unfortunate fact of nature. It's kind of an odd coincidence that it is happening just as a demographic transition is reducing population growth rates, and just as technology is almost there for cheap work-arounds to these problems. If the world had had one order of magnitude more capacity for these resources, then we'd probably have a much smoother time transitioning. But the issues are often framed such that we look for someone to blame; we see the problem as a result of bad behavior.

Robin: Only in the sense that evolution or economic attractors are ill intentions.
It's VERY hard to avoid anthropomorphizing AGI unless you keep those in mind as part of your default sample set of "optimizers".

I could make the same point with respect to unFriendly AI; the ones you have to worry about aren't "evil" in the sense of carrying out deliberately malevolent deeds, but rather, the ones who don't care about your existence one way or another (and you are made of atoms that they could use for things they do care about).

Minds are more powerful and have a larger impact than just about anything else; smarter minds have larger impacts. People concerned with large impacts on the future have a natural cause to be concerned about extremely intelligent actors. And for obvious reasons, we shouldn't be worrying about those whose desires approach our own reflective equilibria, but rather those which are neutral, twisted-good, or (in the case of augmented uploads) evil.

I think war gets treated as more like a natural disaster than the result of ill intent. The number of Jews killed in the Holocaust is quite well known. The number of non-Jews killed in the Holocaust is fairly well known. The number of people killed by Hitler's invasions is generally not mentioned.

"these populations still show bad guy bias, even to the point of explicit beliefs that it's somehow worse to be killed than to die in other similarly violent ways" -michael vassar

Michael, are you saying you shouldn't be more upset if I kicked you in the shin than if you just bumped your shin on a railing?

That's a good question, Nancy; I was wondering about the same thing. It is certainly possible that warfare will turn out to be the overwhelmingly biggest disaster facing humanity in the 21st century, and that perhaps time spent aiming to avoid disasters involving super-intelligence might be better spent working to improve institutions to prevent war. Of course, catastrophic future wars, armageddon, post-apocalyptic survival, etc, are among the oldest tropes in speculative fiction. So I'm not sure that the bias here applies, that we under-estimate the impact of war.

Eliezer, minds do have a disproportionately large impact, but at the same time, historically humanity has arguably been more harmed by non-minds than minds. This bias makes us focus too hard on the actions of minds.

I am trying to think of what future problems we might be ignoring because of this bad-guy bias. I suppose it comes down to the kinds of things Vedantam mentions, natural disasters and accidents. Maybe the real key to happiness and prosperity in the future is encouraging more people to buy insurance. That will not only protect them against certain harms, it will send price signals to avoid risky behavior.

Aging is probably the biggest non-badguy damage happening in the civilized world.


"perhaps time spent aiming to avoid disasters involving super-intelligence might be better spent working to improve institutions to prevent war"

I would go even further and say that perhaps strong AGI is the institution to prevent war and other disasters.

There are so many risks to human civilization, and most of them can be prevented in a posthuman environment...


"Aging is probably the biggest non-badguy damage happening in the civilized world."

Yep. Another case for moving things ahead.

This seems subsidiary to repugnancy bias and commission bias (bad things committed by an entity with apparent agency are more repugnant).

This bias is easily explained: we expect (in the sense of "demand") people to treat each other decently. One can't be sapient without having this duty. Thus, people close to the victim of a killer or rapist quite reasonably believe there's no excuse for the crime to have occurred -- a specific person is responsible and needs to be punished and/or made to pay restitution, so feeling outrage toward him/her is the only appropriate response.

In contrast, car wrecks (excluding those caused deliberately or by gross dereliction) are sufficiently unforeseeable (and the precautions that might have prevented them are sufficiently non-worthwhile) that there is nobody to blame: such things inevitably happen to a few. The same reasoning applies to lung cancer, if we credit the victim's own decision that smoking is enjoyable enough to be worth the risk to him/her. (If we don't, then it is self-inflicted.) In either case there is no reasonable target for outrage, except maybe "God".
