
February 24, 2008

Comments

I'll second the recommendation for Tom Mitchell's book (although it has been a long time since I have read it and I have moved away from the machine learning philosophy since).

Are you going to go on to mention that a search in a finite concept space can be seen as a search of the space of regular languages, and therefore a search in the space of finite state machines? And then move on to Turing machines and the different concepts they can represent, e.g. the set of strings that have exactly the same number of 0s and 1s in them?

Hmm, let's fast-forward to where I think the disagreement might lie in our philosophies.

Let us say I have painstakingly come up with a Turing machine that represents a concept, e.g. the equal-0s-and-1s concept I mentioned above. We shall call this the evenstring concept. I want to give this concept to a machine, as I have found it useful for something.

Now I could try to teach this to a machine by giving it a series of positive and negative examples:

0011 +, 0101 +, 000000111111 +, 1 -, 0001 - etc...

It would take infinitely many bits of evidence to fully determine this concept, agreed? You might get there early if you have a nice short way of describing evenstring in the AI's space of Turing machines.
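
(For concreteness, here is a minimal Python sketch of evenstring as a membership predicate, labelling the examples above; the function name and the check are mine, purely for illustration.)

    # Membership test for the "evenstring" concept: equal counts of 0s and 1s.
    def evenstring(s: str) -> bool:
        return s.count("0") == s.count("1")

    # The labelled examples from above.
    for s in ["0011", "0101", "000000111111", "1", "0001"]:
        print(s, "+" if evenstring(s) else "-")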

Instead, if we had an agreed ordering of Turing machines, I could communicate the bits of the Turing machine corresponding to evenstring directly and ignore evidence about evenstring entirely; the evidence now tells the AI which Turing machine I am describing, not which strings belong to the concept. That is, I am no longer doing traditional induction. It would only take n bits of evidence to nail down the Turing machine, where n is the length of the Turing machine I am trying to communicate to the AI. I could communicate only some of the bits and let it figure out the rest, if I specified the length or a bound on it.
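
(Again only a rough illustration, assuming we agreed to use Python source text as a stand-in encoding of Turing machines: the point is that the description itself is a fixed, finite number of bits.)

    import inspect

    def evenstring(s: str) -> bool:
        return s.count("0") == s.count("1")

    # Transmit the recogniser's description rather than labelled examples.
    description = inspect.getsource(evenstring).encode("utf-8")
    print(8 * len(description), "bits suffice to communicate the concept")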

If you want to add evidence back into the picture, I could communicate the evenstring concept to the AI up front and make it increase the prior of the evenstring concept in some fashion. It would then collect evidence in the normal way, in case we made a mistake when we communicated evenstring, or it had a different coding for Turing machines.

However, this is still not enough for me. The concepts this could deal with would only concern the outside world; that is, the communicated Turing machine would be a mapping from external senses to thingspace. I'm interested in concepts that map the space of Turing machines to another space of Turing machines ("'et' is the French word for 'and'"), and other weird and wonderful concepts.

I'll leave it at that for now.

Actually, I'll skip to the end. In order to represent all the possible concepts I would like to be able to represent (e.g. concepts about concept formation, concepts about languages), I need a general-purpose, stored-program computer. A lot like a PC, but slightly different.

Of course, once someone has defined the word "wiggin" in that way, this gives us a reason to attend to that concept, and additionally, presumably the definer would never have thought of it in the first place without SOME reason for it, even if an extremely weak one.

As I was reading about that learning algorithm, I was sure it looked familiar (I've never even heard of it before). Then suddenly it hit me - this is why I always win at Cluedo!

Say there's no correlation between the size of a rock and how much Vanadium it contains. I collect Vanadium you see. However, I can only carry rocks under a certain size back to my Vanadium-Extraction Facility. I bring back rocks I can carry, and test them for Vanadium. However, for ease of reference, I call all carrying-size, Vanadium-containing rocks 'Spargs'. This is useful since I can say 'I found 5 Spargs today', instead of having to say 'I found 5 smallish rocks containing Vanadium today'. I have observed no correlation between rock size and Vanadium content. Is 'Sparg' a lie?

Ben, to put that point more generally, Eliezer seems to be neglecting to consider the fact that utility is sometimes a reason to associate several concepts, even apart from their probability of being associated with one another or with other things. An example from another commenter would be "I want a word for red flowers because I like red flowers"; this is entirely reasonable.

Infants do not possess many inborn categories, if they have any at all. They perceive the world as directly as their senses permit. But they do not remain this way for long.

If avoiding categorization and referring to properties directly offered a strong benefit, we would never have developed the categories in the first place. Instead, we do so - even when we recognize that developing a category can introduce biases into our reasoning.

We can be more intelligent in our designs than evolution has been in its own. But usually, we are not. We think we've found an obvious way to outsmart reality? Then we are most likely mistaken.

Simplification can be a curse. A blessing as well.

This reminds me of The Dumpster.

http://www.flong.com/projects/dumpster/

Also, I can't help but think of the concept of the Collective Unconscious, and the title of an old Philip K. Dick story, Do Androids Dream of Electric Sheep?

Unknown - pretty much, yeah. I just wanted to use the words 'Vanadium' and 'Sparg'.

Obvious counter: utility is subjective, the joints in reality (by definition!) aren't. So categorisation based on 'stuff we want' or 'stuff we like' can go any way you want and doesn't have to fall along reality's joints. There is a marked distinction between these and the type of (objective?) categories that fit in with the world.

If I am searching for a black-haired, green-eyed person to be in my movie, I have a motive for using the word Wiggin. However, the existence of the word Wiggin doesn't reflect a natural group of Things in Thingspace, and hence doesn't have any bearing on my expectations. Just as the coining of a word meaning 'red flower' wouldn't be a reflection of any natural grouping in Thingspace - flowers can be lots of colours, and lots of things are red. Sound good?

Not every regularity of categorization is derived from physical properties local to the object itself. I watch Ben Jones returning to the facility, and notice that all the rocks he carries are vanadium-containing and weighing less than ten pounds, at a surprisingly high rate relative to the default propensity of vanadium-containing and less-than-ten-pound rocks. So I call these rocks "Spargs", and there's no reason Ben himself can't do the same.

Remember, it's legitimate to have a word for something with properties A, B where A, B do not occur in conjunction at greater than default probability, but which, when they do occur in conjunction, have properties C, D, and E at greater than default probability.
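
(A toy simulation of that situation, with made-up numbers: A and B occur independently, yet C is far more likely given both. Purely for illustration.)

    import random
    random.seed(0)

    # A and B occur independently; C is common only in their conjunction.
    pop = []
    for _ in range(100_000):
        a = random.random() < 0.5
        b = random.random() < 0.5
        c = random.random() < (0.8 if (a and b) else 0.1)
        pop.append((a, b, c))

    p_c = sum(c for _, _, c in pop) / len(pop)
    ab = [t for t in pop if t[0] and t[1]]
    p_c_given_ab = sum(c for _, _, c in ab) / len(ab)
    print("P(C) ~", round(p_c, 2), " P(C | A and B) ~", round(p_c_given_ab, 2))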

It's legitimate for us to have a word for any reason. Words have more than one purpose - they're not just for making predictions about likely properties.
