Artificial Intelligence Depot


Alternative model for a chess AI.

Before I begin, I must give credit where it is due. A form of this idea was suggested to me by my Intermediate Logic professor (a philosophy class with nasty symbolic logic) whose name I can't remember. His suggestion was essentially a min-max approach. While not fundamentally different from what is already out there, his thoughts on how it might be implemented were more in tune with an emergent-behaviour scheme than with a min-max tree search. So your essay got me thinking about this problem again, and about how we could model intelligence better than through the raw ability to compute billions of possible moves.

One method of accomplishing this might be to give the AI an alterable rule set, which the AI itself could add to and modify as time went on. I think the initial rule set should be just the rules of the game; from there, more rules would be added for each game the AI played or trained on. Of my ideas this is the most formless.
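A rough sketch of what such a growing rule set might look like in Python. Everything here is an assumption for illustration only: the board interface (pieces(), legal_moves(), apply()), the piece values, and the idea of starting from a single material heuristic.

# Hypothetical sketch: a player whose evaluation is a weighted set of rules
# that can be added to or re-weighted after each game it plays.

def material_balance(board, side):
    """Example seed rule: net material from 'side's point of view."""
    values = {'P': 1, 'N': 3, 'B': 3, 'R': 5, 'Q': 9, 'K': 0}
    score = 0
    for piece, owner in board.pieces():          # assumed board interface
        score += values[piece] if owner == side else -values[piece]
    return score

class RuleBasedPlayer:
    def __init__(self):
        # Start with only the rules of the game (legal move generation)
        # plus one seed heuristic; more rules get bolted on over time.
        self.rules = [(material_balance, 1.0)]

    def evaluate(self, board, side):
        return sum(w * rule(board, side) for rule, w in self.rules)

    def choose_move(self, board, side):
        # Pick the legal move whose resulting position scores highest.
        return max(board.legal_moves(side),
                   key=lambda m: self.evaluate(board.apply(m), side))

    def learn_from_game(self, new_rule, weight=0.1):
        # After a game, add another heuristic the AI has acquired.
        self.rules.append((new_rule, weight))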

Another method would be a neural-network-type approach. The network might have 64 inputs, one for each square on the chess board; the information passed on each input would be which piece, if any, occupies that square. It might also be wise to add inputs for the immediately previous moves of both the AI and the opponent. The output of the network would be the piece to move and where to move it. The difficult part of this task would be getting the network to understand what does and does not constitute a legal move.
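To make the encoding question concrete, here is a minimal sketch in Python with numpy. The piece codes, the hidden-layer size, the 64x64 from/to output layout, and the board representation (a sequence of (piece, owner) pairs) are all placeholder assumptions rather than a worked-out design.

import numpy as np

# Hypothetical piece codes for a 64-input encoding: sign gives the owner,
# magnitude gives the piece type (empty squares stay 0).
PIECE_CODE = {'P': 1, 'N': 2, 'B': 3, 'R': 4, 'Q': 5, 'K': 6}

def encode_board(board):
    """Flatten an 8x8 board of (piece, owner) pairs into 64 inputs."""
    x = np.zeros(64)
    for square in range(64):
        piece, owner = board[square]          # assumed representation
        if piece is not None:
            x[square] = PIECE_CODE[piece] * (1 if owner == 'white' else -1)
    return x

class TinyChessNet:
    def __init__(self, hidden=128):
        rng = np.random.default_rng(0)
        self.w1 = rng.normal(scale=0.1, size=(64, hidden))
        self.w2 = rng.normal(scale=0.1, size=(hidden, 64 * 64))

    def forward(self, x):
        h = np.tanh(x @ self.w1)
        # One output per (from-square, to-square) pair; the caller still
        # has to mask out moves that are illegal in the current position.
        return (h @ self.w2).reshape(64, 64)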

What if we treat each piece on the chess board as an entity all its own? We give each piece the ability to move according to its type, and let the pieces move themselves, but we add a few constraints that allow teamwork:
1) Defend each other. Each side must seek to defend its own members, especially (and in most cases without exception) the King.
2) Winning may be more important than defense. This would enable sacrifices on the part of individual pieces.
3) Only one piece may move per turn. (This really goes without saying.)
I think this sort of methodology would create some very interesting AIs. For instance, just by changing what rules the queen lives by compared to the pawns, you could create an AI that is willing to lose pawns but will never submit to a queen exchange.
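A loose sketch of how those per-piece rules and the one-move-per-turn constraint might fit together. Every board method used here (legal_moves_for(), material_gain(), defends_teammates(), is_risky()) is hypothetical, as is the single sacrifice_willingness knob.

# Hypothetical sketch: each piece proposes its own moves and scores them
# by its personal rule set; a simple arbiter picks one move per turn.

class PieceAgent:
    def __init__(self, piece, square, sacrifice_willingness):
        self.piece = piece
        self.square = square
        # 1.0 = happy to be sacrificed (a pawn), 0.0 = never
        # (the queen in the "no queen exchange" personality above).
        self.sacrifice_willingness = sacrifice_willingness

    def propose(self, board):
        """Return (move, score) pairs for this piece's legal moves."""
        proposals = []
        for move in board.legal_moves_for(self.square):   # assumed interface
            score = board.material_gain(move)              # rule 2: win material
            score += board.defends_teammates(move)         # rule 1: defend others
            if board.is_risky(move):                       # self-preservation
                score -= (1.0 - self.sacrifice_willingness)
            proposals.append((move, score))
        return proposals

def choose_team_move(agents, board):
    # Constraint 3: only one piece moves per turn, so take the single
    # highest-scoring proposal across all the agents.
    all_proposals = [p for a in agents for p in a.propose(board)]
    return max(all_proposals, key=lambda p: p[1])[0]

The queen-versus-pawn personality would then come down to nothing more than the sacrifice_willingness value each agent is given.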

Well that's my 2 cents. (I just have to get off my lazy butt, and actually code some of this up.)

16 posts.
Friday 04 January, 12:37
Multi-Agent Chess

Very interesting thoughts. A few quick comments...

The rule-based approach is similar to that of TD-Gammon, except that TD-Gammon uses a NN to decide the move. However, it selects different features to base its decision on as it learns by playing against itself repeatedly. This self-adapting mechanism is reminiscent of what you describe.

The neural network approach you describe would be difficult:
* 64 inputs encoding piece type are not ideal for traditional models; you'd really need 64x6 inputs, one per square per piece type.
* The number of possible moves each turn is huge and varies, which is difficult to model with a fixed-size NN.
* That's a lot of inputs and outputs, and you'd need a huge middle layer... no matter how it learns, it will be a hard problem.
* Legal moves would not be learnt; instead, you'd use the output of the network as a voting system over the legal moves, like some NN Othello players (see the sketch below this list).
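For that last point, a rough sketch of the voting idea, assuming the 64x64 from/to score array from the earlier sketch and a separate legal move generator supplying (from_square, to_square) pairs:

def pick_move(move_scores, legal_moves):
    """Treat the raw network outputs as votes, but only over legal moves.

    move_scores: 64x64 array of scores for every (from, to) square pair,
                 e.g. the output of the sketch network above.
    legal_moves: list of (from_square, to_square) pairs from a move generator.
    """
    # The network never has to learn legality; illegal moves simply never
    # appear among the candidates being voted on.
    return max(legal_moves, key=lambda m: move_scores[m[0], m[1]])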

The multi-agent version sounds like a great idea. I was reading about some theory yesterday that could apply to this: W-Learning. Essentially, you learn to select which of the small agents (or Q-Learners) gets to move. This seems to work well for some fairly complex problems, but chess is another matter. This is not necessarily a greedy approach, but I don't know whether it would learn about planning... Definitely worth experimenting with.
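As a very loose sketch of that arbitration idea: each piece is a little Q-learner, and a per-agent W-value says how much that agent expects to lose if it is not the one allowed to act this turn; the agent with the highest W wins control. The update rules and learning rates below are simplified placeholders, not Humphrys' exact W-Learning formulation.

from collections import defaultdict

class PieceQLearner:
    def __init__(self, alpha=0.1, gamma=0.9):
        self.q = defaultdict(float)    # (state, action) -> expected return
        self.w = defaultdict(float)    # state -> "how badly I need control"
        self.alpha, self.gamma = alpha, gamma

    def best_action(self, state, actions):
        return max(actions, key=lambda a: self.q[(state, a)])

    def update_q(self, state, action, reward, next_state, next_actions):
        # Standard tabular Q-learning update for this agent's own reward.
        target = reward + self.gamma * max(
            (self.q[(next_state, a)] for a in next_actions), default=0.0)
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])

    def update_w(self, state, reward_obtained, reward_wanted):
        # W tracks the regret this agent suffers when another agent moves.
        regret = reward_wanted - reward_obtained
        self.w[state] += self.alpha * (regret - self.w[state])

def select_mover(agents, state):
    # The piece whose W-value (expected loss from not moving) is largest
    # gets to make this turn's move.
    return max(agents, key=lambda ag: ag.w[state])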

935 posts.
Saturday 05 January, 11:44
Tutorials on the Subject

Do you know of any tutorials on W-Learning or Q-Learning? So far, what I'm finding on Google is a little over my head.

16 posts.
Wednesday 09 January, 09:46
State of the Art RL

They're both part of Reinforcement Learning. Look that up on Google, or just wait a couple of days... (read on ;)

W-Learning is brand new -- well, almost. It started with a guy who submitted his Ph.D. in 1997, so there's only recent work on this.

Q-learning is well documented, and in fact there's a bit of stuff about it in the new Knowledge Warehouse; it'll hopefully see the light of day on Saturday or Sunday (if I can get my Perl distribution working at home).

So keep posted!

935 posts.
Wednesday 09 January, 11:47
Reinforcement Learning

Thanks for the tip on Reinforcement Learning. I found a couple of tutorials on the Reinforcement Learning Repository at the University of Massachusetts at Amherst. Some of the links are broken, but among them is a pre-print book, Reinforcement Learning: An Introduction. I can't wait to see what gets added to the Knowledge Warehouse.

16 posts.
Thursday 10 January, 10:51
