Artificial Intelligence Depot
News, knowledge and discussion for the AI enthusiast.

Parent Message

Multi-Agent Chess

Very interesting thoughts. A few quick comments...

The rule-based approach is similar to that of TD-Gammon, except that TD-Gammon uses a neural network (NN) to decide the move. As it learns by playing itself repeatedly, it comes to base its decisions on different features. This self-adapting mechanism is reminiscent of what you describe.
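The self-play learning in TD-Gammon boils down to a temporal-difference value update applied after every move. A minimal TD(0) sketch, using a lookup table where TD-Gammon used a neural network (the state names here are placeholders):

```python
# Sketch of a TD(0) value update, the core of TD-Gammon-style self-play.
# V is a plain lookup table here; TD-Gammon used a neural network instead.

def td0_update(V, state, next_state, reward, alpha=0.1, gamma=1.0):
    """Move V(state) toward the bootstrapped target reward + gamma * V(next_state)."""
    old = V.get(state, 0.0)
    target = reward + gamma * V.get(next_state, 0.0)
    V[state] = old + alpha * (target - old)
    return V

# During self-play the same update runs after every move the program
# makes against itself, so the evaluation improves with no hand-labelled
# training positions.
V = {}
td0_update(V, "s0", "s1", reward=0.0)
```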

The neural network approach you describe would be difficult:
* 64 inputs, one per square encoding the piece type, are not ideal for traditional models; you'd really need 64x6 inputs, one per square per piece type.
* The number of possible moves each turn is huge, and it varies, which is difficult to model with a fixed-size NN.
* That's a lot of inputs and outputs, so you'd need a huge middle layer... no matter how it learns, it will be a hard problem.
* Legal moves would not be learnt; instead, you'd use the output of the network as a voting system over candidate moves, like some NN Othello players do.
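To illustrate the input-size point above, here is a minimal sketch of a 64x6 encoding, one input per (square, piece type) pair. Using the sign to distinguish the two sides is an assumption for illustration; the encoding details aren't spelled out here:

```python
# Sketch of a 64x6 board encoding for a neural network input layer.
# One slot per (square, piece type); +1 for a white piece, -1 for black.
# The piece-type indices are an arbitrary choice for illustration.

PIECES = {"P": 0, "N": 1, "B": 2, "R": 3, "Q": 4, "K": 5}

def encode_board(board):
    """board: dict mapping square index (0-63) to (piece letter, colour).
    Returns a flat list of 64 * 6 = 384 inputs."""
    inputs = [0.0] * (64 * 6)
    for square, (piece, colour) in board.items():
        sign = 1.0 if colour == "white" else -1.0
        inputs[square * 6 + PIECES[piece]] = sign
    return inputs

# A lone white king on square 4 sets exactly one of the 384 inputs.
x = encode_board({4: ("K", "white")})
```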

The multi-agent version sounds like a great idea. I was reading about some theory yesterday that could apply to this: W-Learning. Essentially, you learn to select which of the small agents (or Q-Learners) gets to move. This seems to work well for some fairly complex problems, but chess is another matter. It's not necessarily a greedy approach, but I don't know whether it would learn about planning... Definitely worth experimenting with.
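A minimal sketch of the idea above: each small agent is a tabular Q-Learner, and a per-agent W value decides which agent's preferred move actually gets played. The tabular representation and the selection rule here are simplifications for illustration, not the published W-Learning algorithm in full:

```python
# Sketch of W-Learning-style selection among several Q-Learners.
# Each agent keeps its own Q-table; a W value per (agent, state)
# decides which agent's preferred move is actually played.

class QAgent:
    def __init__(self, actions, alpha=0.1, gamma=0.9):
        self.Q = {}                      # (state, action) -> value
        self.actions = actions
        self.alpha, self.gamma = alpha, gamma

    def best_action(self, state):
        return max(self.actions, key=lambda a: self.Q.get((state, a), 0.0))

    def update(self, state, action, reward, next_state):
        """Standard one-step Q-Learning update."""
        best_next = max(self.Q.get((next_state, a), 0.0) for a in self.actions)
        old = self.Q.get((state, action), 0.0)
        self.Q[(state, action)] = old + self.alpha * (
            reward + self.gamma * best_next - old)

def select_agent(agents, W, state):
    """The agent with the highest W(state) wins the right to move."""
    return max(range(len(agents)), key=lambda i: W.get((i, state), 0.0))
```

In a chess setting each agent might represent one piece, competing each turn for control of the move; how the W values themselves are learnt is the subtle part of the real algorithm.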

935 posts.
Saturday 05 January, 11:44
Tutorials on the Subject

Do you know of any tutorials on the W-Learning/Q-Learners? So far, what I'm finding on Google is a little over my head.

16 posts.
Wednesday 09 January, 09:46
State of the Art RL

They're both part of Reinforcement Learning. Look that up on Google, or just wait a couple of days... (read on ;)

W-Learning is brand new -- well, almost. It started with a guy who submitted his Ph.D. thesis on it in 1997, so there's only recent work on this.

Q-Learning is well documented, and in fact there's a bit of stuff about it in the new Knowledge Warehouse; it'll hopefully see the light of day on Saturday or Sunday (if I can get my Perl distribution working at home).

So keep posted!

935 posts.
Wednesday 09 January, 11:47
Reinforcement Learning

Thanks for the tip on Reinforcement Learning. I found a couple of tutorials on the Reinforcement Learning Repository at the University of Massachusetts at Amherst. Some of the links are broken, but among them is a pre-print of the book Reinforcement Learning: An Introduction. I can't wait to see what gets added to the Knowledge Warehouse.

16 posts.
Thursday 10 January, 10:51
