Vision-based motion generation and recognition for humanoid robots
Abstract
Pragmatic techniques for humanoid robot motion generation generally draw no inspiration from biological motion. However, the anthropomorphic structure of a humanoid robot naturally raises questions linked to the study of biological systems: typically, the choice of the reference frame for generating a vision-guided hand-reaching motion. We will focus directly on robotic motion techniques related to vision. A common way to describe and control robot motion is the so-called task-function formalism: the objectives to be fulfilled by the robot are described in properly chosen low-dimensional vector spaces (the task spaces), very often directly linked to sensor outputs. Numerical methods are then used to compute the motion from the set of active tasks. The use of these methods for motion generation will be exemplified by detailing the generation of a visually guided object grasp performed while walking.

More recently, we have shown that the same formalism can be used to characterize an observed motion. The second part of the presentation will thus discuss the use of the task-function approach to perform out-of-context motion recognition and to disambiguate similar-looking motions.
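To make the task-function idea concrete, the following is a minimal sketch (not the speaker's actual implementation; function names and the toy Jacobians are assumptions) of the standard numerical resolution of a two-level task hierarchy: the primary task is solved with the Jacobian pseudoinverse, and a secondary task is resolved in its nullspace so it cannot disturb the primary one.

```python
import numpy as np

def task_priority_velocity(J1, e1, J2, e2):
    """Joint velocities realizing task 1 exactly and task 2 as well
    as possible inside the nullspace of task 1 (two-level hierarchy)."""
    J1_pinv = np.linalg.pinv(J1)
    qdot = J1_pinv @ e1                       # primary task resolution
    N1 = np.eye(J1.shape[1]) - J1_pinv @ J1   # nullspace projector of task 1
    # secondary task, projected so it does not perturb the primary task
    qdot += np.linalg.pinv(J2 @ N1) @ (e2 - J2 @ qdot)
    return qdot

# Toy example on a 3-DoF system: task 1 drives joint 1, task 2 drives joint 2.
qdot = task_priority_velocity(np.array([[1.0, 0.0, 0.0]]), np.array([0.5]),
                              np.array([[0.0, 1.0, 0.0]]), np.array([0.2]))
print(qdot)  # both task errors are reduced without conflict
```

In the stack-of-tasks setting described in the abstract, each active task (gaze, hand reaching, balance, ...) would contribute one such level, with the sensor output defining the task error.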