Artificial Intelligence Depot
News, knowledge and discussion for the AI enthusiast.
Contest Results
And the Winner is...
 
AI Article Writing Contest

All the contestants in the AI Article Writing Contest have voted, and the results have been counted. Each had to vote for their three favourite articles by other contestants, earning 4, 2 and 1 points respectively. The entry with the most points won; this time there was a draw for second place!

  • 1st -- Rule-Based Systems and Identification Trees by James Freeman-Hargis (18 points)
  • 2nd -- Beginners Guide to Pathfinding Algorithms by "Senior Diablo" (12 points)
  • 2nd -- Collective Intelligence in Social Insects by David Gordon (12 points)

So James wins the first prize: the awesome AI Techniques for Game Programming by Mat Buckland (Amazon US / UK). Big thanks to fup for offering this brilliant prize!

David wins the jury's prize for an "historically oriented essay about the roots of AI." His essay does a great job of establishing the chronology of CI and the influential people behind it. He wins the book Turing: The Great Philosophers (Amazon US / UK).

The other entries were also of a very high standard and had their fair share of the votes. They will be published at regular intervals so each gets its moment in the spotlight -- in alphabetical order:

Creating Self-Aware Intelligence by Clint O'Dell
Creation of a Simple Sonorant Detector by "Smoke Mirrors"
Decision Trees and Evolutionary Programming by Kirk Delisle
IBM's Robocode: A Platform for Learning AI by Nick Loadholtes
Self-Organising Maps For Colour Recognition by Mas Dennis Luesebrink

I would like to congratulate everyone on writing such great articles. Thanks for taking the time to participate at all, and for having such a passionate desire to contribute your knowledge back to the community.

There will be another contest starting in a few weeks. Rest assured the prizes will be wicked too! So, why not show your interest?

934 posts.
Saturday 23 November, 12:47
Figure C4 question

Hi, I really liked this article; it was very enlightening. (That's why I like these contests, you always learn something new!) But I do have a question about Figure C4 in Appendix C.

Figure C4 shows the results of the splits of the original information. However, I noticed that ball #5 is listed under the size tree as Medium, when the table above lists it as "Large".

Additionally, #5 is listed as being made of rubber, but with the way the tree is drawn it can't be listed under the rubber node. (Otherwise there would be a child node with two parents, which violates the definition of a tree.) It isn't stated in the example, but I had presumed that the ID tree being built is meant for all the data in the table, not just a subset (small rubber balls).

Does this mean that ID trees should be built with a specific "target" in mind? (Obviously a targeted tree will be optimised for the specific data it was built for, but then it loses some of its generality.)

And if the example in the appendix was being built for all of the data in the table (say that it was part of a classification system in a factory that sorts balls), does this mean that the base data in this example should be split differently to ensure all cases are represented by the final ID tree?

Thanks for reading this question, and thank you again for the article!

-Nick

2 posts.
Monday 25 November, 09:44
Re: Figure C4 question

"Figure C4 show the results of the splits of the original information. However, I noticed that ball #5 is listed under the size tree as Medium, when the table above lists it as "Large"."

You are absolutely right. It is listed as large in the table and medium in the image. You might also notice that 3, 6 and 7 are listed as medium in the table and large in the image. I must have absentmindedly swapped the labels (I'll send a corrected version to Alex after I post this message). That doesn't change the tree, just the labeling. Looking again, I notice I made a similar mistake on some of the others. Again, the information is correct; there is just a slight error in the image creation.

"Additionally, #5 is listed as being made of rubber, but with the way the tree is drawn it can't be listed under the rubber node."

We create the tree using a greedy method by finding the choice with the least disorder. We create one small tree for each category, with the root node representing the category (size, color, weight, rubber?) and child nodes representing each possible value. So, since we have four possible categories -- size, color, weight, rubber? -- we create four individual trees. Of those, the one which best divides the data is chosen (size). From there, we look at the size tree. Medium and large, despite their labels having been accidentally switched, are perfectly divided into homogeneous subsets. The small branch, however, is not: it has two of each outcome, two that bounce and two that do not. Therefore, we need only divide that branch of the tree with the test that creates the most homogeneous subsets (in this case, rubber?). It is true that 5 is rubber, but it is not listed under the rubber node because it isn't needed. 5 is already part of a homogeneous subset, made up of itself.
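
Roughly, that greedy step could be sketched in Python like this. It is only an illustrative sketch, not the code from the article: it assumes Shannon entropy as the disorder measure, and the ball records below are invented for illustration (only the attribute names mirror the example, the values are not the Appendix C table).

import math
from collections import Counter

# Made-up ball records for illustration only -- these are NOT the values
# from the article's Appendix C table.
BALLS = [
    {"size": "small",  "color": "red",   "weight": "light", "rubber": "yes", "bounce": "yes"},
    {"size": "small",  "color": "blue",  "weight": "heavy", "rubber": "yes", "bounce": "yes"},
    {"size": "small",  "color": "red",   "weight": "light", "rubber": "no",  "bounce": "no"},
    {"size": "small",  "color": "green", "weight": "heavy", "rubber": "no",  "bounce": "no"},
    {"size": "medium", "color": "blue",  "weight": "light", "rubber": "yes", "bounce": "no"},
    {"size": "medium", "color": "green", "weight": "heavy", "rubber": "no",  "bounce": "no"},
    {"size": "large",  "color": "red",   "weight": "heavy", "rubber": "no",  "bounce": "yes"},
    {"size": "large",  "color": "blue",  "weight": "light", "rubber": "yes", "bounce": "yes"},
]

def disorder(records, outcome="bounce"):
    # Shannon entropy of the outcome labels in a subset (0 = homogeneous).
    counts = Counter(r[outcome] for r in records)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def split_disorder(records, attribute, outcome="bounce"):
    # Average disorder of the subsets produced by testing one attribute,
    # weighted by subset size.
    groups = {}
    for r in records:
        groups.setdefault(r[attribute], []).append(r)
    total = len(records)
    return sum(len(g) / total * disorder(g, outcome) for g in groups.values())

def best_attribute(records, attributes, outcome="bounce"):
    # Greedy step: pick the test whose subsets have the least total disorder.
    return min(attributes, key=lambda a: split_disorder(records, a, outcome))

print(best_attribute(BALLS, ["size", "color", "weight", "rubber"]))  # -> "size" on this data

On these made-up records it picks size first, because the medium and large subsets come out homogeneous while the small subset stays mixed.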

"Does this mean that ID trees should be built with a specific "target" in mind? (Obviously a targeted tree will be optimised for the specific data it was built for, but then it looses some of its generality.)"

ID trees aren't built with a target in mind, but they are built with the intent to divide the possible targets into distinct groups. The goal of an ID tree is to end up with some number of leaf-node outcomes, each representing a group of outcomes from the data. In each group, there should be only one type of outcome. So, since the outcome in my example is whether the ball bounces or not, each group should contain only "yes" results or only "no" results.
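
Continuing the sketch above (and reusing best_attribute and the BALLS records from it), a small recursive builder makes that concrete: a branch stops splitting the moment its subset contains only one kind of outcome, so every leaf group is homogeneous.

def build_id_tree(records, attributes, outcome="bounce"):
    # A branch stops splitting as soon as its subset holds only one kind of
    # outcome -- that subset is already a homogeneous leaf.
    outcomes = {r[outcome] for r in records}
    if len(outcomes) == 1:
        return outcomes.pop()                      # e.g. "yes" or "no"
    if not attributes:                             # nothing left to test on
        return Counter(r[outcome] for r in records).most_common(1)[0][0]
    attr = best_attribute(records, attributes, outcome)
    groups = {}
    for r in records:
        groups.setdefault(r[attr], []).append(r)
    remaining = [a for a in attributes if a != attr]
    return (attr, {value: build_id_tree(group, remaining, outcome)
                   for value, group in groups.items()})

print(build_id_tree(BALLS, ["size", "color", "weight", "rubber"]))

On those invented records this gives a size test at the root, a rubber? test only under the small branch, and plain yes/no leaves for medium and large; a ball that is rubber but already sits in a homogeneous size subset never needs to appear under the rubber node.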

I hope this helps explain some of your questions, and if you have any more I'll do my best to answer them. I apologize if this response rambled a bit; I'm running on very little sleep right now ;)

1 post.
Monday 25 November, 23:57