
November 02, 2008

Comments

Some subscribers might be interested in this YouTube video of me discussing my estimate of the time of arrival of superintelligent machines:

http://alife.co.uk/essays/how_long_before_superintelligence/

Here is a post that I tried to send to the SL4 list just before it went silent at the end of October; the list never propagated my message. Ah well, it's probably more on-topic for this venue anyway.

========================================
Applying Bayes' Theorem in an actual lab.

I'm intrigued by Eliezer's assertion that Bayes' Theorem generalizes the scientific method. If I understand his 'gentle introduction' correctly, the Bayesian formulation of the scientific method is:

"Rather than having necessarily one null hypothesis and one alternative hypothesis, there is a set of possible hypotheses for explaining a particular phenomenon, each with a particular prior likelihood. Each piece of new experimental evidence updates all these likelihoods, and these posterior likelihoods become the prior likelihoods for your next experiment. Repeat this process until one theory dominates."

Okay, this is facially plausible, endorsed by individuals whose judgement I respect, and doesn't cost me a lot to try in my own research. In fact, in my field (longevity and aging) there is a proliferation of alternative hypotheses for, among other things, why calorie restriction extends lifespan. And right off the bat, I run into difficulties applying Bayes' Theorem.

Difficulty 1: Priors
Obviously, if I have no other information about prior likelihoods, I must assign an equal likelihood to each hypothesis: 100%/K for K hypotheses. But what if I know the likelihoods are different, and can arrange the hypotheses from most to least likely, but don't know how much more likely any one is than another? Furthermore, how should I go about estimating the likelihood of hypotheses that nobody has thought of yet?

Difficulty 2: Qualitative Results
Let's say hypothesis K1 predicts A and hypothesis K2 predicts ~A. Great, so you do an experiment and observe ~A. This drops the likelihood of K1 to 0 and commensurately increases the likelihood of hypotheses that predict ~A or make no prediction about A. But in practice, biological hypotheses tend to make families of predictions: K1 -> {A,B,C,D}. Also in practice, it's rare enough to hit {A,B,C,D} spot on in a biological experiment that you can probably get a Nature or Science paper out of it if you do. A more typical outcome is something like {A,~B,C,?D}: two of four predictions borne out by the study, one decisively not borne out, and one where the change falls short of statistical significance. How do you calculate the posterior likelihoods of your hypotheses in view of complex experimental outcomes like this? Presumably you assign weights to each prediction, but again, what if you only have relative information (A > B > C > D) rather than quantitative information (A = .9, B = .7, C = .3, D = .15)?
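To make the bookkeeping concrete, here is how I might score such a mixed outcome if I did have numbers, under the (strong) assumption that the predictions are independent given the hypothesis; all figures are illustrative:

    # Score the mixed outcome {A, ~B, C, ?D} against K1, which predicted
    # {A, B, C, D} with the weights above: A=.9, B=.7, C=.3, D=.15.
    # The not-significant ?D is scored as 1.0, i.e. uninformative.
    p_K1 = 0.9 * (1 - 0.7) * 0.3 * 1.0            # = 0.081
    # A rival K2 that is silent on all four predictions: 0.5 for each.
    p_K2 = 0.5 * 0.5 * 0.5 * 1.0                  # = 0.125
    # With equal priors, K1's posterior after this experiment:
    posterior_K1 = 0.5 * p_K1 / (0.5 * p_K1 + 0.5 * p_K2)   # ~0.39

But that calculation is exactly what I can't do without numeric weights, hence the question: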

In short, is there a "non-parametric" formulation of Bayes' Theorem for situations where you can rank your priors and your conditional likelihoods, but cannot assign them numeric values?

Thanks.

To end the hijack of the bhtv comment thread:
I said:
Can you name a belief which is untrue, but which you nevertheless believe?

I think, on reflection, that I have one. I believe that my conscious perception of reality is more or less accurate.

Suppose that this universe (or any possible universe) ends in heat death rather than a "big crunch", repeated inflationary periods, etc., heat death being a plausible outcome of the cosmological debate over the ultimate fate of the universe. In that case, there is a very high probability that my brain is a random fluctuation in a maximum-entropy universe, rather than a meaningful reflection of reality. Nevertheless, I believe and act as though my memories and perceptions describe the universe around me.

Tim Tyler said:
You buy the Boltzmann brain argument? How did you calculate the probabilities? Nobody knows the probability of what seems to be our universe forming, and certainly nobody knows the probability of a Boltzmann brain forming in a universe of unknown size and age. The Boltzmann brain paradox argues that one unknown highly speculative probability is bigger than another unknown highly speculative probability.

If you are a Boltzmann brain, don't worry about it - it's the other case that should be of interest to you.

My response: That's the point. Regardless of the probability of my being a Boltzmann brain, the best solution is to assume I'm not, even if the probability that I'm not is close to zero.

I'm not sufficiently expert in cosmology to estimate the actual probability of my being a Boltzmann brain, given the unresolved questions about the ultimate fate of the universe. Especially since the answer is irrelevant.

The point is that "I am not a Boltzmann brain" is a (more or less) falsifiable statement, in that it could be shown, depending on cosmology, that any given consciousness is likely to be a Boltzmann brain. Nevertheless, I believe that statement, and recommend that others believe it, regardless of the probability of its being true.

So, who's voting for what on Nov. 4?

> So, who's voting for what on Nov. 4?

As the resident of a non-swing (US) state, I decided that the best use of my vote is to vote for a third party. For practical purposes there is a negligible chance that my vote will alter the presidential election (which Intrade and other prediction markets are calling as a landslide for Obama), or the fact that McCain will win my state. However, every vote cast for a third party counts, because third parties can use a growing share of the vote from year to year to convince the two main parties to take them seriously and to attract additional voters. Furthermore, third parties actually have a chance of winning some weakly contested local elections (albeit probably not in a presidential election year).

If I lived in a swing state such as Florida or Ohio, I would have a tougher choice because I would have to vote for the party that has in recent history been more fiscally responsible, as measured by deficit... and this would, ironically, be the Democrats. To help deny the Democratic party a mandate for implementing its misguided definition of social justice (i.e. equal outcomes rather than equal opportunities) I would try to promote gridlock by voting for Republican senators and representatives.

Gregory: Even before making transcendental assumptions*, I assign a very high probability to not being a Boltzmann brain. The amount of order in my memories and observations is truly overwhelming evidence for this, enough to dominate any plausible prior I could derive from a field as tentative and confused as present cosmology.

*Which do seem sensible here, although one easily accessible epic failure always makes me uneasy about this.

"I assign a very high probability to not being a Boltzmann brain. The amount of order in my memories and observations is truly overwhelming evidence for this,"

I think you mistake what the hypothesis implies. A mind embodied in random fluctuations doesn't perceive the fluctuations, it perceives the changing state of data within itself. And while most perspectives within a given universe would have such a coherent set of states decohere rapidly, the perspective of a mind embodied in those states never perceives the decoherence. Where there is no order, outside or inside, there is no mind.

A simple example of this is to think of a multiverse, then have vacuum decay propagating at the speed of light from a random point source be a possibility within it. How many of the 'verses have inhabitants that perceive their standard physics failing?

What caused the economic crisis? Seriously, could anyone provide a link to a lucid explanation without any spin attempting to blame Republicans/Democrats/rich people/poor people/bankers/real estate agents, etc.? I just want the facts of the mechanics of it without being told who to be angry at. Yes, I tried Google. Thanks in advance.

I think that the economic crisis was caused by a widespread belief that house prices could not go down (at least significantly) coupled with an excessive use of leverage. The belief that house prices could not go down encouraged everybody to engage in risky practices (taking out loans they could not afford, loaning to people who could not pay back, etc.). The leverage allowed banks and hedge funds to get in far deeper than they could ever get out once things went bad. By "leverage" I include securitization, which allowed banks to get loans off their books but still be on the hook for them if things went bad.

From the following personal observation, I would like to know whether men or women are more biased against relationships between people from different ethnic groups.

I'm from an Asian country which sends a few thousand students to the US every year (probably 60% of them men). Most of these students tend to form relationships with people from their own country. However, I notice that more of the women are in relationships with, or married to, Americans than the men.

From this observation, I was inclined to say that men from my country are more biased than the women against being in a relationship with Americans. However, it could instead be that American women are more biased against inter-ethnic relationships than American men are.

Is it possible to find the probability of a man or woman (of either country) forming an inter-ethnic relationship from the above "data"?
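My hunch is that pair counts alone can't separate the two biases. On the simplest model, where the chance of an inter-ethnic pair forming is proportional to the product of the man's willingness and the woman's willingness, only the product is observable. A toy illustration in Python (all numbers invented):

    # Toy model: P(inter-ethnic pair) ~ willingness_men * willingness_women.
    # Two opposite worlds yield identical pair counts.
    world_1 = {"men": 0.8, "women": 0.1}  # the women do the selecting-out
    world_2 = {"men": 0.1, "women": 0.8}  # the men do the selecting-out
    for world in (world_1, world_2):
        print(world["men"] * world["women"])  # 0.08 either way

Separating the two factors needs extra information: stated preferences, or match data where each side's choices are recorded separately.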

@aso

"For all races except Asians, all the coefficients on the race indicator variables are negative, implying a same-race preference. . .Finally, we can reject the hypothesis of equal preference against partners of other races for white, black, and Hispanic subjects, owing largely to the greater preference against Asian males by all other races. . . Even in a population of relatively progressive individuals who have self-selected into participation in a multi-cultural Speed Dating event, we observe strong racial preferences. . . .Women of all races exhibit strong same race preferences, while men of no race exhibit a statistically significant same race preference.”

Racial Preferences in Dating, Review of Economic Studies (2008) 75, 117–132.

The only thing, aso, that stops me from being depressed by this study is the comfort I derive from remembering that most studies are wrong. Or else women really, really suck.

Oh, aso, welcome to a painful and controversial topic in evolutionary psychology and mate selection.

http://www.isteve.com/IsLoveColorblind.htm

...an article on the topic that will have something to depress/disturb every reader.

aso:
I read a study based on online dating websites that suggested that men were significantly more willing than women to date outside their own race. White women were the most reluctant to date outside their own race, while Asian women were an exception, actually preferring to date white men.

I'm 26. If I decided to dedicate the rest of my life exclusively to personal reproduction, I could probably manage, let's say, 10,000 offspring through various means, fair and foul. Is that a waste of my time? Are we beyond the point where the application of personal eugenics can be expected to impact future outcomes in a significant way?

@PK

www.debtdeflation.com

I don't swear by it, but there are a few obvious insights worth considering, and less obvious bias than in most sources.

It's Australian, not American. Just subtract 5 years from every date and it should pretty much still be relevant to you. ;)

10,000 direct offspring would be a world record. The current title holder is Moulay Ismail.

Recently I have been having a "crisis of faith" about rational thinking. Why is it good to think rationally and in an unbiased manner 100% of the time? Obviously it is a good thing to be rational when making decisions affecting your own well-being and the well-being of others. But what about such things as belief in God or in an afterlife or other such concepts? A belief like that might be completely irrational, but it seems to bring a lot of comfort to a lot of people. On this blog people have said that this is bad because it makes you lie to yourself: since you are capable of rational thought, you must be lying to yourself when proclaiming that you will go to Heaven after you die. I disagree. I think it actually takes effort to think rationally about the world, so all you have to do is stop making this effort when thinking about particular topics. I think the human mind is flexible enough to do such a trick without getting neurotic. This, obviously, doesn't do much for personal integrity, but so what? Who says we should always think in the same way no matter what we are thinking about?

@PK

The clearest exposition I've seen that actually matches my understanding of what happened was presented by Arnold Kling on Russ Roberts' EconTalk podcast recently. Follow the blog pointers to discussions at Cafe Hayek for a little more.

@Dmitriy

I just can't seem to get away from Robin's OSCON07 talk today. His reply to you, I think, would be that your irrationality has some private benefits, but major public costs. And that should bother you. We have to ask ourselves: what's wrong with us? More important, what's wrong with me?

Daily Kos is holding a contest where you can win an Apple Macbook if you come closest to guessing the following: # Democratic Senate seats, # Republican Senate seats, # Democratic House seats, # Republican House seats, # Obama Electoral Votes, # McCain Electoral Votes, Obama Popular Vote Percentage, McCain Popular Vote Percentage (both to one decimal place).

Currently over 5300 people have submitted their guesses. I would love to see what the averages are. Another wisdom of crowds experiment in the works.

@Firebrand

You've got your ranking A > B > C > D. Is D just a make-weight that you don't really believe in? "No." Well, you don't want it falling off the bottom, so put D = 0.1. Is A the clear favourite? "No, it is less than half." So provisionally A = 0.4.

How about B and C? Is it a three-way race with D trailing fourth? *Firebrand looks like a deer caught in headlights.* "I can't argue against that, but I can't argue against other possibilities either." Well, is A well ahead, with B and C not really any better than D? "Maybe." In that case A = 0.4, B = 0.3, C = 0.2, D = 0.1.

Can I actually justify that assignment? No, but I don't have to. Calculate the entropy: a uniform prior with 1/4 each has an entropy of 2 bits; 4,3,2,1 has an entropy of 1.85 bits, only 0.15 bits less. These kinds of vague priors don't do anything much, so don't sweat it.

(My entropy argument is unconvincing: writing in a big fat zero for the disfavoured option is only worth 0.4 bits, but it is a big and unwise commitment. Help!)
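For anyone checking my arithmetic, a quick sketch in Python:

    # Entropy check: H = -sum(p * log2(p)) over the prior.
    import math
    def entropy_bits(ps):
        return -sum(p * math.log2(p) for p in ps if p > 0)
    print(entropy_bits([0.25, 0.25, 0.25, 0.25]))  # 2.00 bits: uniform prior
    print(entropy_bits([0.4, 0.3, 0.2, 0.1]))      # ~1.85 bits
    print(entropy_bits([4/9, 3/9, 2/9]))           # ~1.53 bits: D written off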

OK, since these priors don't do anything much, why bother at all? Jaynes nails it on page 52 of PT:tLoS (Probability Theory: The Logic of Science):

"we are concerned with the logic of consistent reasoning from incomplete information..."

If we don't give actual numbers but leave things implicit, we run the risk of implying different values for the same number in different places. Since logic is brittle against contradictions (you can prove anything from a contradiction), we cannot easily bound the consequences of any such inconsistency.

Suppose we have A, B, C, D as above and some tests that we can run. Lots of tests, several for each hypothesis. Each test targets one hypothesis and has a chance, p, of proving it; otherwise we are left in the dark. How can we come up with a number for p? Well, why are we bothering? Do we think we are ever going to get lucky? How many tests can we run before the funding runs out? We are in a forced-choice situation: do we go for it, or look for another project? If we expect to run 10 or 20 or maybe even 30 tests before getting a result, we are looking at p = 0.1.

Now we can start asking questions. Obviously we test A first. What if it fails? Do we round-robin: B, C, D, A, B, C, D, ...? Do we stick with A for 2 or 3 tests? Then we try B, but what next? Tradition is to fly by the seat of the pants and just guess. We cannot do better than consistent guessing because our information is incomplete, but we can suffer a large loss if our guessing is not self-consistent. We commit ourselves to definite numerical values and calculate, not because our values are "correct" but because we avoid guessing more values than we have undetermined variables.

If we assume that these tests are independent, we can update using Bayes' Theorem. Say we have A = 0.4, B = 0.3, C = 0.2, D = 0.1, and we test A with p = 0.1.

If the test fails our new values are

0.375 0.3125 0.208333 0.10416666

where I have quoted spurious precision so that those playing along at home can compare my calculation with theirs.
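Explicitly, each failed test updates via Bayes' Theorem, with P(fail | H) = 0.9 for the hypothesis under test and P(fail | H) = 1 for the others:

\[
P(H_i \mid \text{fail}) = \frac{P(H_i)\,P(\text{fail} \mid H_i)}{\sum_j P(H_j)\,P(\text{fail} \mid H_j)},
\qquad \text{e.g. } P(A \mid \text{fail}) = \frac{0.4 \times 0.9}{0.4 \times 0.9 + 0.3 + 0.2 + 0.1} = \frac{0.36}{0.96} = 0.375.
\]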

That is interesting. The leading hypothesis has taken damage, but it is still comfortably in the lead. So we do a different test for A and fail again.

0.351 0.325 0.216 0.108

"A" remains the favourite and gets a third try.

0.327 0.336 0.224 0.112

Oh well, now it is time to try B. If B fails we get

0.338 0.313 0.232 0.116

Notice that the fifth test is neither another test of B nor a move on to C: A gets another test, and if that fails

0.315 0.324 0.24 0.12

Well, here is a curious point. Initially A was well in the lead, so it got three tests in a row before we lost faith and tried B. But B didn't get three in a row; A was only just worse. And now A isn't getting three in a row again; B has edged ahead. If B fails

0.326 0.302 0.248 0.124

It is A again. Once A has lost its lead, we alternate

0.303 0.312 0.257 0.128
0.313 0.290 0.265 0.132
0.291 0.299 0.273 0.137
0.300 0.277 0.282 0.141
0.278 0.286 0.291 0.145

At long last A and B have faded and it is worth giving C a go. If that doesn't work:

0.286 0.295 0.269 0.150

So it is back to B. Then A,C,B,A,C,B,... round-robining

D is still at 0.150 and is only very slowly creeping into contention. If our experiments fail to prove A, B, or C (perhaps none of them are true), eventually D will be tried and we will start circulating round all four.

Even such a simple illustrative example has points of interest. There is a recipe: stick with the best guess for a bit; if that doesn't work out, alternate between the best two; if that still doesn't work out, start circulating around the best three. Eventually, circulate around all four. That is a reasonably complicated recipe. Would you get it by common sense? I don't think so, but it just drops out of turning the handle on Bayes' Theorem.
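For those who want to turn the handle themselves, a minimal Python sketch of the procedure (assuming, as in the tables above, that every test comes up empty):

    # Always test the currently most probable hypothesis; a failed test has
    # likelihood 0.9 under the tested hypothesis and 1.0 under the others.
    beliefs = {"A": 0.4, "B": 0.3, "C": 0.2, "D": 0.1}
    p = 0.1  # chance a test proves its target

    for step in range(12):
        target = max(beliefs, key=beliefs.get)
        unnormalized = {h: prob * (1 - p) if h == target else prob
                        for h, prob in beliefs.items()}
        total = sum(unnormalized.values())
        beliefs = {h: v / total for h, v in unnormalized.items()}
        print(step + 1, target,
              " ".join("%.3f" % beliefs[h] for h in "ABCD"))

The tested hypotheses come out A, A, A, B, A, B, A, B, A, B, A, C: the stick-then-alternate-then-circulate recipe drops straight out.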

So, I just emailed Kos and asked if he would be publishing the results. All he wrote back was "Good idea" (I guess he's busy), which I interpret to mean he will.

"This, obviously, doesn't do much for personal integrity, but [...] [w]ho says we should always think in the same way no matter what we are thinking about?"

Your inner sense of personal integrity.

1. nutrition/health

2. how to become successful in academia, sociology and/or policy in particular.

"Recently I have been having a "crisis of faith" about rational thinking. Why is it good to think rationally and in an unbiased manner 100% of the time? Obviously it is a good thing to be rational when making decisions affecting your well being and well being of others. But what about such things as belief in God or in afterlife or other such concepts. A belief like that might be completely irrational, but seems to bring a lot of comfort to a lot of people. On this blog people mentioned that this is bad because it makes you lie to yourself, since you are capable of rational thought, you must be lying to yourself when proclaiming that you will go to Heaven after you die. I disagree, I think it actually takes effort to think rationally about the world, so all you have to do is stop making this effort when thinking about particular topics. I think human mind is flexible enough to do such a trick without getting neurotic. This, obviously, doesn't do much for personal integrity, but so what? Who says we should always think in the same way no matter what we are thinking about?"

I was really struck by Eliezer's assertion a few weeks back that "If you once tell a lie, the truth is ever after your enemy." It's exactly the reason you can't entertain a belief in the afterlife while still honestly practicing rationality.

Let's say you admit there's a Heaven, it's much better than Earth, and all good people go there. Then, ceteris paribus, it seems reasonable (if you are a good person) to commit suicide to get there faster. So either you commit suicide and (if there's no Heaven) waste your life, or you pull a "The dragon is permeable to flour!" and tell yourself that suicides can't get to Heaven. But then why not kill some of your friends to help them get to Heaven faster? Then you either kill your friends or pull another "permeable to flour" and add "God doesn't want you to kill your friends, even though it would be a nice thing to do and helpful to them". But then why not refuse to donate any money to famine relief, since that's a way to get people into Heaven faster that doesn't involve actively killing them? Then you have to add "God demands you donate to famine relief, or else He will get mad at you." Wait, I wasn't going to donate to famine relief anyway, do I still have to? "God demands you donate exactly as much to famine relief as you would if you did not believe in Heaven."

I understand that most religious people don't actually consider these implications of their beliefs. But in order to not consider them, they need to adopt an entire mindset of not considering the implications of their beliefs, or of dismissing anything that logically follows from statements they believe to be true if it contradicts "intuition". Once you have that mindset, it's hard to get rid of, and it could negatively affect you elsewhere. For example, I think that's part of what's behind people refusing to consider cryonics even when they can't think of any flaws in the science behind it. If you tend to believe false things, then ignoring what logically follows from your beliefs is a useful defensive reaction - it's much safer to never believe anything if it sounds unusual. But doing that also removes your ability to think creatively.

So it's a trade-off. The more illogical beliefs you hold, the more you have to herd your thoughts into illogical patterns in order to hold them.

I'd accept Yvain's comment as an Overcoming Bias post.

1. Genuine ways to improve one's fluid intelligence.

Exercise, nutrition, and a "stimulating" environment are obvious, but are there any other, non-obvious ways to genuinely improve one's fluid intelligence (gF)? For instance, I've read (Jaeggi et al.) that increasing working memory capacity can improve one's gF.

Evidently being smarter (i.e., higher gF) is desirable for most if not all people, since the smarter one is, the easier general problem solving and all sorts of other cognitive tasks become.

Yvain, excellent.

1. What scientific hypotheses do you think have gotten less investigation than they merit?

2. What used to be on your list for #1, but isn't any more?

My current list:

No longer on my list:


  • Vaccine-induced autism. Pretty well discredited now.

  • Solar explanations for global warming. Jury is still out, but it's getting the attention it deserves now.

Abiotic oil will probably be the next to migrate off the list, due to the recent surge of attention.

Yvain: I see how a belief in afterlife rewards and punishments for your deeds in this life is morally faulty. It becomes faulty at the point where it tries to tie local events to mystical events. I am not convinced that every irrational belief is faulty in that way. Let's say I believe in a Heaven where everyone goes, no matter how good or bad. I will still be rational about everything, but derive some comfort when someone close to me dies, believing that they went to a "happy place".

Maybe I'm going in a different direction than Dmitriy Kropivnitskiy, but it seems to me that the key point in his comment is "I think it actually takes effort to think rationally about the world." It seems to me that the Eliezer party line is to simply refuse to do a cost-benefit analysis. Yvain does use the phrase "trade-off" at the end, but he never acknowledges possible costs of truth-seeking. Also, this reminds me of when Eliezer expressed disbelief when a commenter claimed that this blog had damaged his abilities as a businessman.

Dmitriy: "Let[']s say I believe in H[e]aven where everyone goes no matter how good or bad. I will still be rational about everything,"

As Yvain pointed out, this is really an incentive to suicide.

Douglas: "It seems to me that the Eliezer party line is to simply refuse to do a cost-benefit analysis."

No, a truthseeker cannot permit herself to deny the existence of costs to truthseeking if it is in fact true that such costs exist. However, if one embraces a moral obligation to seek truth, then one simply pays the costs insofar as one can. But even if you're not the type for moral zealotry: seriously, how often is the situation really going to come up in which it makes sense to not make sense?

Robin and Eliezer have been on quite a productive jaunt. The blog looks great. Though I'm glad I've been too busy focusing on building my net worth to participate.

Today I saw my first bumper sticker since Robin posted about the character traits of people who have bumper stickers. I drive about an hour every day, and I have seen an awful lot of cars with no bumper sticker and only one with one. This is in Sydney, Australia.

I recall bumper stickers being much more popular 20-30 years ago. I also remember that the bumpers in those days used to be chrome-coloured metal. Nowadays most cars in Australia are Japanese designs, and the bumpers tend to be plastic and the same colour as the body of the car. In between was a phase where the bumpers tended to be black plastic, maybe with a stripe of chrome-coloured plastic trim. Are bumper stickers still popular in America? Is this because bumpers are still chrome-coloured?

@Yvain:
People purporting to hold these beliefs don't behave rationally in accordance with them because they do not truly believe them.
The rationales for holding strong ideological convictions are legion: to bond with a social group; to seek solace against primeval fears; to buffer doubts about one's own moral fitness; and countless others. The dogmatic positions, providing the certainty necessary to end internal worry, are at odds with the pragmatic nature of the world. Thus many worries are removed, and replaced with one: faith.
Faith creates its own problems, as it is a two-legged table upon which these previous uncertainties must weigh. Even when it is at rest it requires propping up, and when shaken it necessitates the kind of rushing around that the 'dragon in the garage' enjoys in its defence.
Having faith means presenting as quintessentially solid what is by nature wobbly, in order to mask other, deeper vulnerabilities.
It does religion a disservice to take dogmatic claims at face value. The solace of religion lies in sublimating one's own despair and picking up a prefabricated facade. Is it any wonder that the gap between word and deed is so large?

A cost-benefit analysis will involve four numbers, not two. Rationality has cost Qc and benefit Qb, while religion has cost Rc and benefit Rb. We seek to quantify both Qb-Qc and Rb-Rc, intending to ask whether Qb-Qc > Rb-Rc.

Do the net benefits of rationality exceed the net benefits of religion? As a matter of simple algebra, this is the same as asking whether the benefits of rationality sufficiently exceed the benefits of religiosity to justify the additional cost.

Qb-Qc > Rb-Rc <=> Qb-Rb > Qc-Rc

But maybe there are no additional costs. Perhaps Qc < Rc, and people adopt rationality as a cost-cutting measure. Then the analysis is simplified.

Qb > Rb /\ Rc > Qc => Qb-Qc > Rb-Rc
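Spelling out the algebra, since both claims are one-liners:

\[
Q_b - Q_c > R_b - R_c \iff Q_b - R_b > Q_c - R_c \quad (\text{add } Q_c - R_b \text{ to both sides});
\]
\[
Q_b > R_b \ \wedge\ R_c > Q_c \implies (Q_b - Q_c) - (R_b - R_c) = (Q_b - R_b) + (R_c - Q_c) > 0.
\]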

What prompts me to post is remembering a very clever friend who was brought up as a Christian and who studied Christian apologetics. He studied many books that argued in favour of Christianity, and doubts did not surface until his late twenties. Then he looked, for the first time, at some books that argued against Christianity, and his doubts multiplied out of control. I have other odd friends: a second was very committed to astrology for a while, and a third lost a year of his life to obsessive dream analysis.

The image of people being consoled by a cheap faith rings false with me. Looking around, I see people getting sucked into the imaginary worlds conjured by their supernatural consolations and finding them more costly than rationality. Are the benefits of religion greater than the benefits of rationality? If arguments against rationality are to persuade, they had better be, and by a large margin, because the costs are much higher.

Yes it is costly to maintain beliefs you at some level know to be false, keeping them from polluting all your other beliefs. But that cost can be the reason for such beliefs, as a credible signal of commitment to your associates.

@Daniel Franke:

Without question, the one on my list would be the biological underpinnings of intelligence, and human biodiversity in general.

I think the readers here might be amused by this.

Principles of Economics, translated by Yoram Bauman, Ph.D., the world's first (and only) stand-up economist.

Also, be sure to check out Doug Zonker's presentation of his groundbreaking paper.

So Kos published the results per my suggestion:

"9,493 of you made predictions for tonight's prediction contest. The average of all those predictions is:

57.21 Democratic Senate seats
39.96 Republican Senate seats
255.43 Democratic House seats
169.47 Republican House seats
351.69 Obama Electoral Votes
189.91 McCain Electoral Votes
52.79 Obama Popular Vote Percentage
45.14 McCain Popular Vote Percentage

We'll see how the "wisdom of the crowds" matches up with the final results."

S.S. coverage links: http://www.singularitysummit.com/media/buzz

Two million hits!!

I feel that yesterday's presidential election provided evidence for the viability of prediction markets. I can't rewind and look at what InTrade was saying a month ago, but if I remember correctly the final tally of electoral votes was almost exactly as expected. (I didn't put money up because I had no reason to think I could make a better guess than the market.)

Did anyone here actually save those data? Just how well did the by-state predictions pan out?

@Ian

Masse has a huge thing on this.

I notice some discussion of cryonics in one of the threads.

Just note that there are legal problems in the United States. Waiting until after 'legal death' for freezing is just too risky for my liking (too many things can go wrong before your body even gets suspended).

That's why I strongly recommend Switzerland for those who are serious about the cryonics option. It's one of the very few places in the world where assisted suicide is legal, which means you could in theory opt for a 'live freeze' (deciding to get suspended while still alive).

Switzerland is also a wonderfully laid-back, reasonable country (social democratic, people very low-key, few extremists).

---

There is no cryo facility in Switzerland yet (at least as far as I know).
Readers note that I'm right now en route to Europe (no kidding, I now have my EU passport and am moving to Europe next month, going to the UK first). I estimate that I have 20 good years left to make sufficient money to set up in Switzerland.

I judge that stroke/cancer/heart disease will likely kill me long, long before unfriendly AI does. We are biased to over-rate exotic risks, whereas it is much more likely that banal, common things (stroke, cancer, heart disease) will be our demise.

Everyone is welcome to make their way to Switzerland and, hopefully, my eventual health centre there. I won't turn anyone down (be they friends or enemies; if Nick, Eliezer or Robin show up I'm perfectly willing to lop off their heads and stick 'em in the freezer too). Everyone gets frozen, no one gets left behind.

I intend to survive, friends, and I've judged that a 'live freeze' will likely prove to be the only option. Survival rates plummet after age 60, and I just can't run the risk of aging past that (too many things can go wrong).

So readers, if you're serious about seeing the Singularity, and old you have become, remember Switzerland!


Ian Maxwell,
Intrade saves the old data. Just click on "show expired." For example, here's Florida. (Note that IN, MO, and NC have not closed.)

"Yes it is costly to maintain beliefs you at some level know to be false, keeping them from polluting all your other beliefs. But that cost can be the reason for such beliefs, as a credible signal of commitment to your associates."

This reminds me of the old advice that a girl should hide her intelligence so as to catch a husband.

Why would I want to associate with **anyone** who would want to associate with someone who acted that stupidly? That is actually even more insulting to the believers than just admitting that they are not all that bright.

Why would I want to associate with **anyone** who would want to associate with someone who acted that stupidly?

If they have power over you.

Ian, I've made a comparison of Intrade versus polls over several months here.

In the interest of track records for pundits, let me note that so far in the '08 election it seems the most accurate pundits were Nate Silver of 538 on the percentage of the popular vote and Mark Halperin of Time magazine on the electoral vote, and the most accurate major poll was Rasmussen.
