
December 01, 2008


Maybe interesting: Reconfigurable computing for learning Bayesian networks

I don't care what the writers intended; "Click / Clash" is obviously about the Singularity.

I've wanted to ask this for a long time. What is the relationship between SIAI and Novamente? Ben Goertzel is already building an AI, but it is my understanding that Eliezer thinks it's way too early to start coding. So is there disagreement between them, or did I misunderstand? Did they ever discuss this apparent disagreement?

Tiiba: Much looser than it looks. Goertzel doesn't make SIAI policy.

But he's Director of Research. What does he do for you?

More importantly, is he one of the people whom you classify as poor misguided fools?

Yesterday I saw a sample Raven matrix, from an IQ test that is allegedly good in high ranges. It consisted of eight 3x3 matrices containing patterns, with some rules governing the transition from one matrix to another, though with no indication of whether you are supposed to read the eight matrices left-to-right, top-to-bottom, etc. You are then to choose the matrix for the lower-right corner (multiple choice).

In several cases, adjacent pairs of matrices were related by an observable rule: one was made from its neighbor to the left by moving each token 1 place through the matrix in a snakelike "S" pattern; one was made from its neighbor by rotating the matrix 90 degrees; some were made by permuting rows and columns. In one case, a matrix on the far right was a mirror-image of one on the far left, indicating that the sequence in which to visit the matrices was not an orderly one.

Given that there is a separate rule governing each transition, and no requirement on the order in which to visit the matrices, I would be surprised if the answer were even determined. That is, there might be several equally good "rules" producing the observed output.
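That underdetermination is easy to exhibit concretely. Here is a minimal Python sketch (the matrices and rule names are invented for illustration, not taken from the actual test) implementing two rule types like those described above, plus a case where two distinct rules yield the identical output matrix, so no amount of staring at the output alone can tell you which rule was intended:

```python
# A hedged sketch: two plausible "transition rules" for 3x3 pattern matrices.
# The matrices below are invented examples, not items from any real test.

def rot90_cw(m):
    """Rotate a square matrix 90 degrees clockwise."""
    n = len(m)
    return [[m[n - 1 - j][i] for j in range(n)] for i in range(n)]

def mirror_lr(m):
    """Mirror a square matrix left-to-right."""
    return [list(reversed(row)) for row in m]

# Any symmetric matrix is a witness to underdetermination: rotating it
# 90 degrees clockwise gives exactly the same result as mirroring it
# left-to-right, so the observed output is consistent with either rule.
sym = [[1, 2, 3],
       [2, 5, 6],
       [3, 6, 9]]

assert rot90_cw(sym) == mirror_lr(sym)  # two different rules, one output
```

The assertion holds for any symmetric matrix, since rotation gives `m[n-1-j][i]` and mirroring gives `m[i][n-1-j]`, which coincide exactly when the matrix equals its transpose.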

It doesn't seem to me that the task of picking out a meta-rule governing the rules of matrix transitions has a lot to do with what I think of as intelligence. I think that people tasked with writing IQ tests designed to pick out people much smarter than themselves have little insight into the levels above them, and so they just make more and more complicated puzzles. IMHO this approach will favor autistics over original and accurate thinkers. (The "quiz show" approach to intelligence probably does the same.)

Empirically, there is AFAIK no validation that performing well on any IQ test (say, above 140) correlates with anything other than performing well on other IQ tests. (Probably somebody here knows whether that's true.)

In sum, I'm not aware of any indication, nor of any theoretical reason, why we should think that IQ scores above some range - say, 140 - have any meaning.

I think a better IQ test could be made by taking the type of reasoning errors that get written up in Overcoming Bias posts, and making questions out of them.

Tiiba, don't publicly ask questions that I could not reasonably be expected to publicly answer.

Tiiba, if you're really interested in Ben and Eliezer's disagreements, consider taking some time to poke around the SL4 list archives: see, e.g., Ben's 2002 parable, or Eliezer-sub-2005 on a Novamente Singularity. (Since the SL4 archives are public, I'm assuming it's okay to link these.)

Bayesians may be interested in "The Logic of Reliable Inquiry" by Kevin T. Kelly (1996). Here is a quote from the introduction related to Bayesian updating:

"Almost all contemporary epistemological discussion assumes some recourse to the theory of probability. A remarkable feature of logical reliability is that it provides a nonprobabilistic perspective on scientific inquiry ... In order to emphasize the difference between logical and probabilistic reliability, a simple example is given in which updating by Bayesian conditionalization can fail to converge to the truth in the limit on each member of an uncountable set of possible data streams, ... even though trivial methods are logically guaranteed to succeed."

Here is the dust jacket summary of the whole book:

There are many proposed aims for scientific inquiry--to explain or predict events, to confirm or falsify hypotheses, or to find hypotheses that cohere with our other beliefs in some logical or probabilistic sense. This book is devoted to a different proposal--that the logical structure of the scientist's method should guarantee eventual arrival at the truth given the scientist's background assumptions. Interest in this methodological property, called "logical reliability," stems from formal learning theory, which draws its insights not from the theory of probability, but from the theory of computability. Kelly first offers an accessible explanation of formal learning theory, then goes on to develop and explore a systematic framework in which various standard learning theoretic results can be seen as special cases of simpler and more general considerations. This approach answers such important questions as whether there are computable methods more reliable than Bayesian updating or Popper's method of conjectures and refutations. Finally, Kelly clarifies the relationship between the resulting framework and other standard issues in the philosophy of science, such as probability, causation, and relativism.

I have no connection with Kevin Kelly.

Eliezer: You can reasonably be expected to answer whether another programmer's AI theory is reasonable. You do it one thread down. And it's not like your or his theories are trade secrets - I just wanted clarification.

I don't consider Ben's AI theory reasonable.

Looks like a country just tried to ban bad news, literally.

"I don't consider Ben's AI theory reasonable."

That raises the question: do you have a better option? Do you have some formal, partially worked-out theory of AI, or just philosophical and scientific speculations?

Interesting talk about understanding what "morality" is (not a link to preachy crap). Apologies if this has already been linked.

@ Eliezer:
Do you have a body of knowledge about AI that you don't publish because it's potentially a weapon of math destruction, or do you only consider what's available online of your writings sufficiently worked out to place some weight on (modulo standard math and follow-up works that don't change the nature of the game)?

In his book Honest Signals -- which I got after Robin quoted the press release, thanks! -- Alex Pentland writes (p.58-60; quoted at length because it's interesting and not online),

Clearly, we need a solid, working explanation of how prelinguistic communication, including social signaling, could produce intelligent, coordinated group behavior in early humans. Studies of primitive human groups reinforce the idea that social interactions are central to human decision making; ethologists have found that almost all decisions affecting the group are made in social situations (Buchanan 2007; Wilson 2002). The major exception to this pattern of social decision making, in humans as well as other animals, is when extremely rapid decision making is required for situations such as battles or emergencies (Conradt and Roper 2005).

In our close cousins the apes, whose only known communication is nonlinguistic, decision making via the use of social signaling is a familiar scenario. ... (Stewart and Harcourt 1994) ... Sue Boinski and Aimee Campbell describe how capuchin monkeys use trilling sounds to cooperatively decide when and where the troop should move (DeWaal 2005). Monkeys at the leading edge of the troop trill the most, encouraging others to follow the path they have found, and others take up the trilling in order to coordinate everyone's movements.

Similar processes of social decision making are common in many animals and virtually all primates ... [C]ycles of signaling and recruitment, until a point is reached where everyone in the group accepts that a consensus has been reached (Conradt and Roper 2005; Couzin et al 2005; Couzin 2007). Some evolutionary theorists think that this type of "social voting" process could be the most common type of decision making for social animals ...

[The fact that it's common] doesn't explain how social decision making can produce successful, adaptive behavior. We still need a good explanation of how social signalling mechanisms can produce intelligent decisions ... Over one hundred years ago in Victorian England, the Reverend Thomas Bayes developed a mathematical theory for combining information. His theory shows how to weight bits of information in proportion to their expected payoff in order to make the best overall prediction. ... Idea markets built on this Bayesian solution for combining information are an effective way to integrate people's opinions (Kambil 2003; Chen, Fine and Huberman 2004).

I left in the part about Bayes and weighting in proportion to expected payoff to illustrate why I don't completely trust this book (or do I have my history wrong?), but the question burning in my mind is: is there any analysis of animal behavior specifically in terms of prediction markets? (Or in terms of "trading" information, more generally?) Anybody know?
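For what "combining information" by Bayesian conditioning actually looks like in the simplest case, here is a minimal sketch (the reliability numbers are invented for illustration): several independent binary "experts" each report on a proposition, and each report is weighted by that expert's likelihood ratio rather than taken at face value. This is the standard odds-form update, not necessarily what Pentland's cited idea-market papers implement.

```python
# Minimal sketch of Bayesian opinion combination via likelihood ratios.
# All reliability numbers below are invented for illustration.

def combine(prior, reports):
    """Combine independent binary reports about a proposition.

    reports: list of (says_true, p_report_true_if_true, p_report_true_if_false)
    Returns the posterior probability that the proposition is true.
    """
    odds = prior / (1 - prior)
    for says_true, p_hit, p_false_alarm in reports:
        if says_true:
            # A "true" report multiplies the odds by its likelihood ratio.
            odds *= p_hit / p_false_alarm
        else:
            # A "false" report multiplies by the complementary ratio.
            odds *= (1 - p_hit) / (1 - p_false_alarm)
    return odds / (1 + odds)

# Two fairly reliable experts say "true"; one weak expert says "false".
posterior = combine(0.5, [(True, 0.9, 0.2), (True, 0.8, 0.3), (False, 0.6, 0.4)])
```

Note that the weak dissenter barely moves the answer, which is the sense in which reports are weighted by reliability rather than counted as votes.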

I haven't checked the references yet. I'll probably report back when I have.


Buchanan, M. 2007. The social atom: Why the rich get richer, cheaters get caught, and your neighbor usually looks like you. New York: Bloomsbury.

Chen, K.Y., L. Fine and B. Huberman. 2004. Eliminating public knowledge biases in information-aggregating mechanisms. Management Science 50 (7): 983-994.

Conradt, L., and T. Roper. 2005. Consensus decision making in animals. Trends in Ecology and Evolution 20 (8): 449-456.

Couzin, I. 2007. Collective minds. Nature 445, no. 7129 (February 15): 715.

Couzin, I., J. Krause, N. Franks, and S. Levin. 2005. Effective leadership and decision-making in animal groups on the move. Nature 433, no. 7025 (February 3): 513-516.

De Waal, F. 2005. Our inner ape. New York: Riverhead.

Kambil, A. 2003. You can bet on idea markets. HBS Working Knowledge for Business Leaders, December. See <http://hbswk.hbs.edu/archive/3808.html>

Stewart, K.J., and A.H. Harcourt. 1994. Gorilla vocalizations during rest periods: Signals of impending departure. Behaviour 130 (1-2): 29-40.

Wilson, D. 2002. Darwin's cathedral: Evolution, religion, and the nature of society. Chicago: University of Chicago Press.

How would a rational society deal with the unproductive? This seems to be the core question of government, economy, and ethics.

If Eliezer had novel, powerful AI insights but not Friendliness, it seems clear that he would not publish them and possibly (according to his beliefs) help someone else destroy the world. So until there is Friendliness, I suggest no one will know what Eliezer knows. (Or doesn't.) Unless Robin convinces him!

"If Eliezer had novel, powerful AI insights but not Friendliness, it seems clear that he would not publish them and possibly (according to his beliefs) help someone else destroy the world. So until there is Friendliness, I suggest no one will know what Eliezer knows. (Or doesn't.) Unless Robin convinces him!"

That is one interpretation, but by no means the only valid one. In fact there are simpler ones: for example, that Eliezer has lots of philosophical ideas without the math to back them up. This would seem to be supported by the research assistant he is looking for, as well as by the lack of technical papers. In fact I would surmise that this is the more likely interpretation of the facts. Of course, Eliezer may feel free to disprove me.

Eliezer, you do a good job of pointing out the shortcomings of other AI work, I would be interested in knowing what avenues you consider promising. I'd also be interested in other OB readers' opinions.

I'm assuming most OB readers are Atheists with a capital 'A'. What do you guys think of this?

P.S. It's not arguments for creationism.

It's things like this that make me less worried about Friendliness. Unless they do something out-and-out stupid, like turning the universe into paperclips or PebbleSorting, I can't help but think AIs will probably be a big advance over humans, and not just intellectually.

@Eliezer: I don't consider Ben's AI theory reasonable.

Your dialog with Robin has been valuable. Dialogs with other smart people with something to say in this area -- Ben Goertzel, Steve Omohundro, Scott Aaronson, Ray Kurzweil, and Peter Cheeseman come to mind -- would be very good.

I have ongoing occasional conversations with Steve; he's a Bayesian and smart, no mere above-average AI researcher, but I don't think we're in quite the same reference frame. Scott Aaronson is an interesting person who I'd converse with if he wanted to talk to me. My dialogs with Ben are recorded in the ancient archives of the SL4 mailing list; I can't say I'm really interested in continuing them. Ray Kurzweil is too scheduled to get to. Peter and I have talked quite a lot, but he doesn't read Overcoming Bias.

Phil: Your reasoning on IQ tests seems to be roughly the consensus of professionals in the field, though with some disagreement within the field over exactly how rapidly reliability falls with increasing score. We seem to be able to measure math ability at a high enough level, however, that we would expect to be able to estimate some fairly general measure of intelligence at a fairly high ceiling via regression from math ability, in a noisy but informative manner, if we properly controlled for quantity and quality of training.

Phil Goetz:
Empirically, there is AFAIK no validation that performing well on any IQ test (say, above 140) correlates with anything other than performing well on other IQ tests.

That's very much false. See, for instance, Why g matters and a huge body of other literature. Performance on IQ tests correlates with performance on just about all other cognitive tasks, as well as with a number of real-life outcomes.

...and then I read what I quoted more specifically, and noted the "above 140" bit, realizing you weren't talking about IQ in general... gah.

Ignore me, I'll just go hide under a rock.

"Do you have a body of knowledge about AI that you don't publish because it's potentially a weapon of math destruction"

I'm curious... is this an intentional pun, or a very funny typo?

@Z.M. Davis Eliezer-sub-2005 on a Novamente Singularity.

Reading that, what comes to mind is ... Cassandra

I have questions for both Eliezer and Robin. Long story short, I think I have important and interesting ideas, and it would be beneficial to disseminate them. What I am struggling with right now though is what the best method is. I did well enough in undergrad that grad school is a possibility, but what field exactly is a problem (one of the many social sciences or philosophy).

Eliezer - you seem to have little formal training, but people still seem to take you seriously. How did you build up your reputation? Was it entirely an online phenomenon, did you go to conferences, etc? As someone who has already achieved some measure of success through this method, do you have any advice for those who would follow? Pitfalls to avoid? Something you wish you had done earlier?

Robin - you got your PhD in "social science" according to Wikipedia. What exactly did that entail, and is it something you would suggest for others? Were you able to pick your course load from among all the different social sciences? My interests are so broad that I am having trouble deciding what PhD program I would want to apply to (which makes me lean towards philosophy, to be as general as possible).

Thanks for sharing your thoughts. I apologize if you've covered these questions already - if so, would you provide a link? Thanks!

Will, don't try it my way unless you've got some kind of incredibly strong reason to do so.

Will, you might also want to consider exactly how much you care about being taken seriously by other people. If there're just some specific things you desperately need to understand, then you could just directly work on those, without shattering your heart into ten-to-the-twentieth pieces trying to optimize for reputation.

On the other hand, you have no reason to take me seriously. Get the Ph.D.

Is there a consensus on agent-based models, or computational social science models more generally, as to whether they will become more widely accepted in economics? I have to think that the recent and continued financial meltdown will push economics toward more experimental approaches, particularly when it comes to relaxing the homogeneous, rational, informed-agent assumption.

Eliezer - Noted. I figured as much from the outside looking in, but you might have thought of something I didn't.

Davis - I care very little about what people think of me in a personal sense. I'm actually a pretty weird guy. I just want my ideas, which I believe to be correct, to have the best chance to be taken seriously. There is a reason people get PhDs after all, it has to signal something (plus I'd love to be a professor for hedonic reasons). I can definitely learn a lot from whatever field I pick, but for the most part I am an autodidact, and that is not my only concern when it comes to choosing a program.

This is wishing you the best of luck, Will. (Sorry if my previous comment came off the wrong way.)

"Weapons of math destruction" have come up before.

Will - Pick the field that your ideas fit into. Estimate earnings increase due to PhD vs. cost of PhD in dollars and foregone salary.

I don't know how it works in the social sciences. In the "hard sciences", getting a PhD is of much less benefit if you don't go to an Ivy League school, since you're unlikely to get a job as a professor or as a lab head. In computer science, people who don't get PhDs often, maybe usually, get paid more than people who do: they have more years of experience, they're more likely to learn high-paying skills (DB management, Java Enterprise) than interesting ones, and the field has a history of caring less about degrees.

I think the best way to do research is to start a clothing store or something equally mundane; run it for 20 years; retire; then do your research. (Good AI option: Get an MD in neurology, work 20 years, retire, work on AI.) It's very hard to do basic research in a research job. Grants are very application-oriented.

@Joshua Fox re Cassandra:

So I take it that you think Eliezer's predictions are right but will not be believed?

Or are you misusing the reference?

So I take it that you think Eliezer's predictions are right
Yes; not that I'm an expert.

... but will not be believed?
So far, there seems to be little uptake.

Friendly AI.

Why should we want it to be friendly? If humans could create something that would outlast our Sun, would that be worth the price of the destruction of other life on Earth now?

Thinking on that made me imagine an AI which was only concerned for its own survival, and increasing its control over more and more stars. A monster, which would kill everything in its path.

So how could you program in a sense of wonder, a delight in diversity, valuing life as a Good in Itself, in machine code?

random link: objectivists vs. subjectivists, fight! (not upper-case objectivists).

Did the whole "hypocrisy" thread just get deleted?

There is a fairly old meme floating around - I don't know where it originated, but I have recently seen it in open source circles, and long before that in science fiction: "It's amazing what one person can accomplish if they don't care who gets the credit."


The human mind is (at least in one way!) similar to a modern home computer. The computer can do "general" tasks only serially (or with a low level of parallelism if you have a dual/quad core), but is able to do graphics tasks with a high level of parallelism (GPUs have the equivalent of many hundreds of cores).

Geniuses often describe the process of reaching their discoveries visually. Daniel Tammet, the savant who could do large-number multiplication in his head, described the process as seeing two shapes, and then a third shape (the answer) emerging before his eyes. Einstein imagined riding on a beam of light.

By transforming the problem from words into pictures, they are transferring it from their CPU to their GPU - from the serial part of their brain to the parallel part - and exploiting the extra power there. Visual analogies may be an important tool geniuses use.

So my question is (obviously, since I am posting it on OB): What is a good visual analogy for thinking itself?

Probably, most of the readers of this site are from the US or other English-speaking countries, and probably find most acronyms (which are heavily used here) rather obvious. For me, it is not so obvious that AFAICT means "as far as I can tell", at least at first sight. So I've found this link which I hope others who, like myself, are not native English speakers, will find useful:


random link: http://www.reddit.com/r/programming/comments/7j1gr/accelerating_bayesian_network_200x_using_a_gpu/

Eliezer, can you give us a review of Wendell Wallach's recent book on machine ethics?

For those wondering how to define / create Intelligence:


Bias on TED: "Dan Gilbert: Exploring the frontiers of happiness"
