Artificial Intelligence Depot

Parent Message

Good Points

As I said, I don't have any experience in motion detection, so you'll have to forgive my naivety ;)

All the points you put forth are very pertinent indeed! It just goes to show how much of this comes down to experience, and it really makes me want to do some work on real robots.

About factoring out self-induced movement: I was lucky enough to attend a talk last Friday by someone from the Robotics Group at Geneva University. He described custom-designed VLSI chips used to automatically find the point with the most motion (via a competitive neural model, I believe). They used this simply to avoid close objects, not to detect motion as such.

However, combined with distance sensors, you could determine whether this point is actually fixed in space (and that it's not perspective playing a trick on you ;) ... but it sounds like a lot of hard work to me!

PS: Did you want the release announced as news or not?

935 posts.
Sunday 02 December, 08:10
Motion Detection

Hardware detection of moving targets would be fairly easy to achieve without neural networks: taking the difference between successive frames and then applying a Gaussian filter would be sufficient.
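
Roughly, in code, that approach might look like this (a minimal sketch; the sigma and threshold values are illustrative, not something from the thread):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def detect_motion(prev_frame, curr_frame, sigma=1.5, threshold=25.0):
    """Frame differencing followed by Gaussian smoothing.

    Frames are 2-D greyscale arrays; sigma and threshold are
    illustrative values only.
    """
    # Absolute per-pixel difference between successive frames
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    # Gaussian filtering suppresses single-pixel noise in the difference
    smoothed = gaussian_filter(diff.astype(np.float32), sigma=sigma)
    return smoothed > threshold  # boolean mask of moving pixels
```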

What they were probably using the VLSI circuit for is optical flow detection: finding feature points within the image and then tracking their movements over successive frames. This is more advanced than simple motion detection and does require the use of neural nets. Optical flow - sometimes referred to as the "looming effect" - can be used to estimate distances to objects for obstacle avoidance tasks. The principle is exactly the same in stereo vision (as on the Rodney robot), except that the matching is done in space rather than over time. Other means of distance measurement, such as ultrasonics or lasers, would be used to calibrate the visual system, but otherwise vision alone is sufficient for obstacle avoidance.
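
Stripped of the neural machinery, the underlying matching problem can be sketched as a plain sum-of-squared-differences search (just to illustrate what is being computed; the patch and search sizes are arbitrary):

```python
import numpy as np

def track_patch(prev_frame, curr_frame, y, x, size=8, search=4):
    """Track one feature patch between frames by exhaustive SSD search.

    (y, x) is the patch's top-left corner in prev_frame; size and
    search radius are illustrative. Returns the (dy, dx) displacement
    with the lowest sum-of-squared-differences score.
    """
    h, w = curr_frame.shape
    patch = prev_frame[y:y + size, x:x + size].astype(np.float32)
    best_score, best = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y0, x0 = y + dy, x + dx
            if y0 < 0 or x0 < 0 or y0 + size > h or x0 + size > w:
                continue  # candidate window falls off the frame edge
            cand = curr_frame[y0:y0 + size, x0:x0 + size].astype(np.float32)
            score = np.sum((cand - patch) ** 2)
            if score < best_score:
                best_score, best = score, (dy, dx)
    return best  # per-frame motion vector of the feature
```

The same search, applied between left and right camera images instead of between successive frames, gives the stereo matching mentioned above.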

I've been experimenting with an additional feature on Rodney, where the visual input is classified using a self-organising topological feature map. Interesting (i.e. moving) features are classified so that, hopefully, the robot will learn to discriminate between different situations, such as someone sitting at the desk in front of it or a person walking across the room.

Simple additional behaviors can then give the robot some rudimentary emotional feedback. Using a flexible piece of rubber attached to a miniature servo, the robot can be made to look as though it's either smiling or scowling. The motion detection algorithm can be used to calculate a general overall level of visual stimulation for the robot (as a leaky integrator). If the robot detects a moving object close to it and the visual stimulation level is within certain limits, the robot should smile; too much or too little stimulation leads to scowling. This is exactly the sort of simple reflex behavior you might find in a human baby, designed to encourage nurturing and interactive behavior from the parent.
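
The stimulation bookkeeping is simple enough to sketch directly (the decay rate, gain and thresholds below are made-up illustrations):

```python
def update_stimulation(level, motion_fraction, decay=0.95, gain=1.0):
    """Leaky integrator: the old level decays and new motion tops it up.

    motion_fraction could be the share of pixels flagged as moving;
    decay and gain are illustrative constants.
    """
    return decay * level + gain * motion_fraction

def facial_expression(level, low=0.1, high=0.6):
    """Map the stimulation level to a servo expression (thresholds assumed)."""
    if low <= level <= high:
        return "smile"
    return "scowl"  # too much or too little stimulation
```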

- Bob

136 posts.
Sunday 02 December, 12:09
Stereoscopy and Time-based Analysis

It seems like the time-based analysis of successive frames is a lot of work, not only from a theoretical research point of view but also in terms of the speed of the implemented algorithm. Hence the case for hardware.

That said, it seems a bit silly to require such analysis when you can have two cameras instead, and analyse those with 'slightly' simpler maths. Humans do it that way too, having nearly two eyes on average ;)
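
For reference, the 'slightly' simpler maths is essentially the pinhole-stereo relation depth = focal length x baseline / disparity; a minimal sketch (the names are assumed, since no formula appears in the thread):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole stereo: depth = f * B / d.

    focal_px: focal length in pixels; baseline_m: distance between the
    two cameras; disparity_px: horizontal offset of the matched feature
    between the two views. Illustrative names throughout.
    """
    if disparity_px <= 0:
        return float("inf")  # zero disparity: effectively at infinity
    return focal_px * baseline_m / disparity_px
```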

SOFMs are great... I've never worked with them personally, but some of the other PhDs here are doing some sexy stuff with Neural Gas Networks: mapping the raw input space onto NGN nodes based on imitation. In your case, obviously, emotions would guide the learning rather than imitation.

Sounds like Rodney is a lot of fun!

935 posts.
Wednesday 05 December, 05:51
Sense and sensibility

Yes, ultimately most intensive visual processing tasks are best implemented in hardware. This is demonstrated quite well by some of the off-the-shelf industrial vision systems available, which are self-contained units you can connect directly to digital inputs and outputs.

After some experimentation it seems that the neural classifier I'm using on Rodney isn't terribly efficient. Simply feeding the visual input into the classifier makes it very difficult for the system to learn anything remotely useful.

Instead, I've taken advantage of the fact that the robot sits (some might say languishes) in a stationary position on my desk. From a stationary point it can build up a panoramic database of what it expects to see for any given configuration of servo positions. The panoramic data is again implemented as a leaky-integrator type system, so that the robot continuously updates what it expects to see and slowly forgets old information. This allows the robot, at any point in time, to compare what it actually sees against its panoramic database and find the differences (things which have moved or changed). It's generally more important to learn about things which change over time than about things which continuously remain the same, so the difference between expected and actual visual input is what gets fed into the classifier system.
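
Schematically, that expectation store might look like the following (the discrete pan/tilt binning and the decay constant are illustrative assumptions, not details from the post):

```python
import numpy as np

class PanoramicMemory:
    """Per-servo-position expectation store, built as a leaky integrator."""

    def __init__(self, n_pan, n_tilt, frame_shape, decay=0.99):
        # One slowly updated "expected view" per (pan, tilt) servo bin
        self.expected = np.zeros((n_pan, n_tilt) + frame_shape, np.float32)
        self.decay = decay

    def update(self, pan_bin, tilt_bin, frame):
        """Blend the new frame into the stored expectation (slow forgetting)."""
        e = self.expected[pan_bin, tilt_bin]
        self.expected[pan_bin, tilt_bin] = self.decay * e + (1.0 - self.decay) * frame

    def surprise(self, pan_bin, tilt_bin, frame):
        """Difference between actual and expected view: the classifier input."""
        return frame - self.expected[pan_bin, tilt_bin]
```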

I had a look at some material on neural gas type networks on a web site recently (I think Sussex University, UK), but I'm still unsure exactly how they work.

- Bob

136 posts.
Wednesday 05 December, 12:07
