

Attention Control

A particularly promising effort is underway to connect PaT-Nets to other high-level ``AI-like'' planning tools for improved cognitive performance of virtual humans. By interfacing Jack to OMAR (Operator Model Architecture) [13], we have shown how an autonomous agent can be controlled by a high-level task modeler, and how some important human motor behaviors can be generated automatically from the action requests. As tasks are generated for the Jack figure, they are entered into a task queue. An attention resource manager [11] scans this queue for current and future visual sensing requirements, and directs Jack's eye gaze (and hence head movement) accordingly. For example, if the agent is told to ``remove the power supply,'' parallel instructions are generated to locomote to the power supply area and to perform specific visual attention tasks such as searching for the power supply, scanning for potential moving objects, and periodically watching for obstacles near the feet. Note that normally none of this attentional information appears explicitly in the task-level instruction stream; a minimal sketch of this division of labor follows.
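To make the idea concrete, the Python sketch below illustrates how such an attention resource manager might work: it scans a task queue for visual sensing requirements and merges the resulting fixations with standing background checks that never appear in the task-level instructions. All names and the priority-based scheduling policy here are illustrative assumptions, not the actual interface of the Jack/OMAR system.

from dataclasses import dataclass
from typing import List

@dataclass
class Task:
    name: str                  # task-level instruction, e.g. "remove the power supply"
    visual_targets: List[str]  # objects the task implicitly requires the agent to find

@dataclass
class GazeDirective:
    target: str    # object or region for eye gaze (and hence head movement)
    priority: int  # higher values preempt lower ones

class AttentionResourceManager:
    """Hypothetical manager that scans queued tasks and schedules gaze."""

    # Standing attentional behaviors, generated automatically rather than
    # appearing in the task-level instruction stream.
    BACKGROUND = [
        GazeDirective("moving_objects", priority=2),    # scan for potential moving objects
        GazeDirective("ground_near_feet", priority=1),  # periodically watch for obstacles
    ]

    def __init__(self, task_queue: List[Task]):
        self.task_queue = task_queue

    def schedule(self) -> List[GazeDirective]:
        """Merge task-driven fixations with background monitoring,
        highest priority first."""
        directives = [
            GazeDirective(target, priority=3)
            for task in self.task_queue
            for target in task.visual_targets
        ]
        directives.extend(self.BACKGROUND)
        return sorted(directives, key=lambda d: d.priority, reverse=True)

# Usage, following the example in the text: a single queued task
# yields a search fixation plus the automatic background checks.
queue = [Task("remove the power supply", ["power_supply"])]
for d in AttentionResourceManager(queue).schedule():
    print(f"look at {d.target} (priority {d.priority})")

In this sketch the interesting design point is that the background directives live in the manager, not in the tasks, mirroring the observation that attentional behavior is generated automatically from action requests rather than spelled out by the task modeler.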

  
Figure 3: Jack as virtual casualty and medic for a training scenario.


