Depth perception
Nothing gets past you Alex :-)
There is indeed the problem of determining how far away the moving object is. During the training procedure the same 2D visual coordinates appear at a number of different arm extensions, so after training, when the robot sees a moving object at a particular 2D location, how does it know how far to move its hand to meet it?
There are two methods for doing this. The better one is to detect the moving object in both cameras and then use stereo matching to calculate an approximate distance. The second is to use a single camera but move the head by a small amount (a small pan or tilt). The apparent movement of the object in the image is then inversely proportional to its distance from the camera - assuming that the object has remained in the same place during the intervening time.
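As a rough sketch of how the depth comes out in each case (the focal length, baseline and pixel values below are made up for illustration, not the robot's real calibration):

# Depth from the two methods - a minimal sketch with made-up numbers,
# not the robot's actual vision code.

def depth_from_stereo(disparity_px, focal_px, baseline_mm):
    # Two cameras: depth = focal length * baseline / disparity.
    return focal_px * baseline_mm / disparity_px

def depth_from_parallax(shift_px, focal_px, head_shift_mm):
    # One camera moved slightly: the small head translation acts as a
    # baseline, so the apparent shift of a stationary object is
    # inversely proportional to its distance.
    return focal_px * head_shift_mm / shift_px

print(depth_from_stereo(disparity_px=12.0, focal_px=600.0, baseline_mm=60.0))  # ~3000 mm
print(depth_from_parallax(shift_px=4.0, focal_px=600.0, head_shift_mm=20.0))   # ~3000 mm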
Having calculated a depth value, the robot can then select an appropriate arm extension to reach the object. The extension is found easily by looking up the angle between the forearm and upper arm in the body mapping database.
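Something like this, although the record layout (reach distance against elbow angle) is only my guess at how the body mapping database stores things:

# Picking an arm extension from the body mapping database - the field
# names and layout here are an assumption about how the mapping is stored.

def select_arm_extension(postures, target_depth_mm):
    # Return the elbow angle of the stored posture whose reach is
    # closest to the estimated depth of the object.
    best = min(postures, key=lambda p: abs(p["reach_mm"] - target_depth_mm))
    return best["elbow_angle_deg"]

postures = [
    {"reach_mm": 150.0, "elbow_angle_deg": 45.0},
    {"reach_mm": 250.0, "elbow_angle_deg": 90.0},
    {"reach_mm": 350.0, "elbow_angle_deg": 140.0},
]
print(select_arm_extension(postures, target_depth_mm=320.0))  # -> 140.0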
There are some technical problems with observing a moving object in stereo, related to the way that Windows handles multiple imaging devices of the same type. The problems are reduced when using winXP, but not altogether solved, and in practice there is usually at least a one second delay between capturing an image from one camera and then capturing from the other.
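A quick way to see how bad the skew is on a particular machine is just to timestamp the two grabs - the sketch below uses OpenCV's VideoCapture purely as a stand-in for whatever capture interface the robot actually uses:

import time
import cv2

# Measure the delay between grabbing a frame from each camera.
# OpenCV is used here only as a stand-in for the real capture interface.
left = cv2.VideoCapture(0)
right = cv2.VideoCapture(1)

t0 = time.time()
ok_left, frame_left = left.read()
t1 = time.time()
ok_right, frame_right = right.read()
t2 = time.time()

if ok_left and ok_right:
    print("left grab at %.3f s, right grab at %.3f s" % (t1 - t0, t2 - t0))
    print("right image lags the left by roughly %.3f s" % (t2 - t1))

left.release()
right.release()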
- Bob