
Appearance and Motion

Avatars can be portrayed visually as 2D icons, cartoons [27], composited video, 3D shapes, or full 3D bodies [2,42,38]. We are mostly interested in portraying human-like motions, so we naturally tend toward the more realistic surface and articulation structures. In general, we prefer to design motions for highly articulated models and then reduce both the model detail and the articulatory detail as demanded by the application [18].

Along the appearance dimension, the Jack figure has developed as a polygonal model with rigid segments and joint motions and limits accurate enough for ergonomics evaluations [3]. For real-time avatar purposes, simpler geometry can be used provided that the overall impression is one of a task-relevant figure. Thus a soldier model with 110 polygons is acceptable if drawn small enough and colored and/or texture mapped to be recognized as a soldier. On the other hand, a vehicle occupant model must show accurate and visually continuous joint geometry under typical motions. It must be both an acceptable occupant surrogate and a pleasing model for the non-technical viewer -- who may be used to going to the movies to see the expensive special effects figures. Our ``smooth body'' [1] was developed using free-form deformation techniques [41] to aid in the portrayal of visually appealing virtual humans (Fig. 2).

  
Figure 2: Smooth body Jack as virtual occupant in an Apache helicopter CAD model.

The motions manifest in the avatar may arise from various sources:

In general, we will not consider 2D or purely video presentations of avatars; rather, we will concentrate on avatars that more-or-less mimic human structure.

The distinction between ``synthesized'' motions and the other types is roughly that the former generate transformations for more than one joint at a time. Thus, for example, we store a time series of joint angle changes (per joint) in channelsets so that specific motions can be re-played under real-time constraints [18]. No deviation from the pre-stored local transformations is allowed, although the whole body may be re-oriented or the playback speed varied. In a particularly effective modification of this technique, Perlin adds periodic noise to real-time joint transformations to achieve greater movement variability, animacy, and motion transitions [33].
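To make the channelset idea concrete, the following Python sketch (our own construction, not the implementation of [18] or [33]) replays a per-joint time series at a variable speed, and adds a crude sinusoidal perturbation standing in for Perlin's band-limited noise; the class name, frame layout, and noise form are all illustrative assumptions:

```python
import math

class ChannelSet:
    """Sketch of a channelset: a per-joint time series of angles,
    replayed at variable speed with linear interpolation between
    stored frames."""

    def __init__(self, frames, fps=30.0):
        # frames: list of dicts mapping joint name -> angle (radians)
        self.frames = frames
        self.fps = fps

    def sample(self, t, speed=1.0):
        # Map playback time to a (possibly fractional) frame index,
        # clamped so replay holds the final pose at the end.
        f = min(t * speed * self.fps, len(self.frames) - 1)
        i = min(int(f), len(self.frames) - 2)
        a = f - i
        lo, hi = self.frames[i], self.frames[i + 1]
        return {j: (1 - a) * lo[j] + a * hi[j] for j in lo}

def with_noise(pose, t, amplitude=0.02):
    """Perlin-style variability, crudely approximated here by a
    smooth phase-offset sinusoid per joint (true Perlin noise is
    band-limited pseudo-random, not periodic)."""
    return {j: v + amplitude * math.sin(2.1 * t + k)
            for k, (j, v) in enumerate(pose.items())}
```

Note that, as in the text, only the playback speed and a global perturbation vary; the stored local transformations themselves are never edited.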

In a motion synthesizer, a small number of parameters control a much greater number of joints, for example:

The relative merits of pre-stored and synthesized motions must be considered when implementing virtual humans. The advantages of pre-stored motions are primarily speed of execution and algorithmic security (by minimizing computation). The major advantages of synthesis are the reduced parameter set size (and hence less information that needs to be acquired or communicated) and the concomitant generalized motion control: walk, reach, look-at, etc. The principal disadvantages of pre-stored motions are their lack of generality (since every joint must be controlled explicitly) and their lack of anthropometric extensibility (since changing joint-to-joint distances will change the computed locations of end effectors such as feet, making external constraints and contacts impossible to maintain). The disadvantages of synthesis are the difficulty of inventing natural-looking motions and the potential for positional disaster if the particular parameter set or code should have no solution, fail to converge on a solution, or just compute a poor result. In particular, we note that inverse kinematics is not in itself an adequate model of human motion -- it is just a local positioning aid [3,25]. The issue of building adequate human motion synthesis models is a wide open and complex research topic.
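As an illustration of how a small parameter set can drive many joints, here is a hypothetical Python sketch of a walking-style synthesizer; the joint names, oscillator phases, and gains are invented for this example and are not taken from any of the cited systems:

```python
import math

def walk_pose(t, freq=1.2, stride=0.5):
    """Sketch of a motion synthesizer: two parameters (step
    frequency in Hz and stride amplitude in radians) drive a whole
    set of joint angles via phase-shifted oscillators.  Left and
    right limbs swing half a cycle out of phase; arms counter-swing
    against the legs; knees flex at double frequency and never
    hyperextend."""
    phase = 2 * math.pi * freq * t
    return {
        "l_hip":      stride * math.sin(phase),
        "r_hip":      stride * math.sin(phase + math.pi),
        "l_knee":     0.5 * stride * (1 + math.sin(2 * phase)),
        "r_knee":     0.5 * stride * (1 + math.sin(2 * phase + math.pi)),
        "l_shoulder": 0.6 * stride * math.sin(phase + math.pi),
        "r_shoulder": 0.6 * stride * math.sin(phase),
    }
```

The point of the sketch is the information ratio: two scalars generalize over speed and gait amplitude, whereas a pre-stored version would need an explicit time series for every joint.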

Since accurate human motion is difficult to synthesize, motion capture is a popular alternative, but one must recognize its limited adaptability and subject specificity. Although a complex motion may be used as performed, say in a CD-ROM game or as the source material for a (non-human) character animation, the motions may be best utilized if segmented into motion ``phrases'' that can be named, stored, and executed separately, and possibly connected with each other via transitional (non-captured) motions [8,39]. Several projects have used this technique to interleave ``correct'' human movements into simulations that control the order of the choices. While 2D game characters have been animated this way for years -- using pre-recorded or hand animated sequences for the source material -- recently the methods have graduated to 3D whole body controls suitable for 3D game characters, real-time avatars, and military simulations that include individual synthetic soldiers [35,18,20].
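One simple way to connect two captured phrases is to cross-fade over a few frames. The following Python sketch is a generic linear blend of our own devising, not the specific transition methods of [8] or [39]; poses are represented as dicts of joint angles purely for illustration:

```python
def blend_transition(phrase_a, phrase_b, n_blend):
    """Connect two motion ``phrases'' (lists of poses, each a dict
    of joint angles) with a synthesized transition that cross-fades
    the last n_blend frames of A into the first n_blend frames of B."""
    out = list(phrase_a[:-n_blend])
    tail, head = phrase_a[-n_blend:], phrase_b[:n_blend]
    for k in range(n_blend):
        w = (k + 1) / (n_blend + 1)  # blend weight ramps toward B
        out.append({j: (1 - w) * tail[k][j] + w * head[k][j]
                    for j in tail[k]})
    out.extend(phrase_b[n_blend:])
    return out
```

A phrase library plus such transitions is enough for a simulation to sequence named movements while hiding the seams between captures.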






Dr. Norman Badler
Thu Apr 17 08:17:25 EDT 1997