Artificial Intelligence Depot
News, knowledge and discussion for the AI enthusiast.

Parent Message

Sense and sensibility

Yes, ultimately most intensive visual processing tasks are best implemented in hardware. This is demonstrated quite well by some of the off-the-shelf industrial vision systems that are available, which are self-contained units that you can connect directly to digital inputs and outputs.

After some experimentation it seems that the neural classifier I'm using on Rodney isn't terribly efficient. Simply feeding the visual input into the classifier makes it very difficult for the system to learn anything remotely useful.

Instead I've taken advantage of the fact that the robot sits (some might say languishes) in a stationary position on my desk. From a fixed viewpoint it can build up a panoramic database of what it expects to see for any given configuration of servo positions. The panoramic data is again implemented as a leaky-integrator-type system, so the robot continuously updates what it expects to see and slowly forgets old information.

This allows the robot, at any point in time, to compare what it actually sees against its panoramic database and find the differences (things which have moved or changed). It's generally more important to learn about things which change over time than about things which continuously remain the same, so the difference between expected and actual visual input is what gets fed into the classifier system.
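The expect/compare/forget scheme described above could be sketched roughly like this. Everything here is my own illustrative choice (the `PanoramicMemory` name, the decay rate, the image shape), not Rodney's actual code — just one minimal way to realise a leaky-integrator expectation per servo pose:

```python
import numpy as np

DECAY = 0.05  # fraction of each new frame blended in (slow forgetting)

class PanoramicMemory:
    """Stores an expected view per servo configuration as a leaky integrator."""

    def __init__(self):
        self.expected = {}  # servo configuration (tuple) -> expected image

    def update(self, servo_pose, frame):
        """Blend the new frame into the stored expectation and return the
        expected-vs-actual difference, i.e. what would go to the classifier."""
        frame = np.asarray(frame, dtype=float)
        if servo_pose not in self.expected:
            self.expected[servo_pose] = frame.copy()
        expected = self.expected[servo_pose]
        difference = frame - expected        # things that moved or changed
        expected += DECAY * difference       # slowly track the current scene
        return difference
```

With repeated identical frames the difference decays toward zero, so only novelty is passed on — matching the idea that change matters more than the static background.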

I had a look at some material on neural gas networks on a web site recently (I think Sussex University, UK), but I'm still unsure exactly how they work.
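For what it's worth, the core of the standard Neural Gas rule (Martinetz & Schulten) is small: for each input, every unit is ranked by its distance to the input, then pulled toward it with a strength that decays exponentially in rank. A minimal sketch, with parameter values that are purely illustrative:

```python
import numpy as np

def neural_gas_step(weights, x, eps=0.1, lam=1.0):
    """One Neural Gas adaptation step.

    weights : (n_units, dim) array of unit positions
    x       : (dim,) input vector
    Each unit moves toward x scaled by exp(-rank / lam), where rank 0
    is the unit closest to x.
    """
    dists = np.linalg.norm(weights - x, axis=1)
    ranks = np.argsort(np.argsort(dists))   # double argsort -> rank per unit
    h = np.exp(-ranks / lam)                # neighbourhood factor by rank
    return weights + eps * h[:, None] * (x - weights)
```

Unlike a self-organising map, the neighbourhood here is defined in input space (by distance rank) rather than on a fixed grid, which is what makes the units behave like a "gas" spreading over the data.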

- Bob

136 posts.
Wednesday 05 December, 12:07
