Artificial Intelligence Depot

Parent Message

Learning hand-eye coordination

I've started working on an automatic procedure for learning hand-eye coordination on my Rodney robot. The system is taken from similar experiments done on the MIT Cog robot, where the robot sticks out its arm and moves its hand. The motion detection part of Rodney's vision system then locates the moving hand and returns its 2D coordinates in vision space. These 2D coordinates are then mapped to 3D coordinates of the arm and head.

After the learning is complete, when the robot is presented with an interesting object it simply performs a lookup in the database of 2D-to-3D mappings to find the arm coordinates that bring the hand to intercept the object.
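As a minimal sketch of that lookup step: if the training database is a list of (2D vision coordinate, 3D arm coordinate) pairs, intercepting an object reduces to a nearest-neighbour search in vision space. The database contents and function names here are illustrative, not Rodney's actual code.

```python
# Hypothetical mapping database built during training: each entry
# pairs a 2D vision-space position (px, py) of the hand with the
# 3D arm coordinates that placed the hand there.
mapping_db = [
    ((120, 80), (0.25, 0.10, 0.30)),
    ((200, 150), (0.15, -0.05, 0.40)),
    ((310, 95), (0.05, 0.20, 0.35)),
]

def lookup_arm_coords(px, py):
    """Return the stored 3D arm coordinates whose 2D vision-space
    entry lies closest to the observed object position."""
    def dist2(entry):
        (ex, ey), _ = entry
        return (ex - px) ** 2 + (ey - py) ** 2
    _, arm = min(mapping_db, key=dist2)
    return arm
```

For example, an object seen near pixel (205, 148) would return the arm pose recorded for the (200, 150) training sample. A real system would interpolate between neighbouring entries rather than snapping to the nearest one.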

- Bob

http://www.fuzzgun.btinternet.co.uk/rodney/rodney.htm

136 posts.
Friday 01 February, 16:26
Technology and Pictures

Bob,

I have to congratulate you on the new pictures; they're very good quality and quite artistic. They make me go 'Aaaw, isn't he cute!'

Your work on hand-eye coordination is very interesting, but I have one quick question. When you convert to 2D camera-space, how is depth taken into account? That is an equally crucial dimension for intercepting objects... Do you work it out from the position of the limbs?

935 posts.
Monday 04 February, 08:48
Depth perception

Nothing gets past you Alex :-)

There is indeed the problem of determining how far away the moving object is. During the training procedure, the same 2D visual coordinates appear at a number of different arm extensions. After training, when the robot sees a moving object at a particular 2D location, how does it know how far to move its hand to meet it?

There are two methods for doing this. The best solution is to use both cameras to detect the moving object and then use stereo matching to calculate an approximate distance. The second method is to use a single camera but move the head by a small distance (a pan or tilt). The apparent movement of the object in the image will then be inversely proportional to its distance from the camera (nearer objects appear to shift more), assuming that the object has remained in the same place during the intervening time.
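For the stereo route, a rough sketch of the standard pinhole-camera depth formula Z = f·B/d, where d is the disparity between the object's horizontal positions in the two images. The focal length and baseline values below are illustrative, not Rodney's actual calibration.

```python
def depth_from_disparity(x_left, x_right, focal_px, baseline_m):
    """Approximate distance via the pinhole stereo relation Z = f * B / d.

    x_left, x_right -- the object's horizontal pixel position in the
                       left and right camera images
    focal_px        -- focal length expressed in pixels
    baseline_m      -- horizontal separation between the cameras, metres
    """
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("object should appear further left in the right image")
    return focal_px * baseline_m / disparity
```

With a 500-pixel focal length and a 0.1 m baseline, a 25-pixel disparity corresponds to roughly 2 m. The one-second capture lag between cameras mentioned below is exactly why the "object has remained in the same place" assumption matters for both methods.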

Having calculated a depth value the robot can then select an appropriate arm extension to reach the object. The arm extension can be found easily by looking at the angle between the forearm and upper arm within the body mapping database.
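The relationship between the elbow angle and the reach can be written down with the law of cosines: the shoulder-to-hand distance follows from the two limb lengths and the angle between them. A small sketch, with made-up limb lengths:

```python
import math

def arm_reach(upper_arm_m, forearm_m, elbow_angle_rad):
    """Shoulder-to-hand distance given the angle between the forearm
    and upper arm, via the law of cosines:
    reach^2 = a^2 + b^2 - 2ab*cos(theta)."""
    return math.sqrt(
        upper_arm_m ** 2 + forearm_m ** 2
        - 2 * upper_arm_m * forearm_m * math.cos(elbow_angle_rad)
    )
```

A fully straightened arm (angle of pi radians) gives the maximum reach, the sum of the two segment lengths; inverting this relation picks the arm extension for a given depth value.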

There are some technical problems with observing a moving object in stereo. These are related to the way that Windows handles multiple imaging devices of the same type. The problems are reduced under Windows XP, but not solved altogether, and this usually means there is at least a one-second delay between capturing an image from one camera and then capturing from the other.

- Bob

136 posts.
Monday 04 February, 13:57