Contents

Harnessing the Next Great Thing

Scaling the Content

Scaling Animation Quality

Scalable Lighting Issues

Scaling Special Effects

Scaling Animation Quality

Many 3D games today use pre-stored animations to move the characters and objects in a scene. An artist positions the model (say, a human character) in all of the poses necessary to create an animation sequence. For example, consider a 3D character that needs to be able to run, jump, and swing a sword. The artist first creates a collection of poses for each of these animated movements. This works much like the simple 2D animations we've all made with a pencil and a pad of paper: on each page of the pad, you draw the next "step" of the animation, and when you flip through the pages rapidly, the animation comes alive.

When performing 3D animation, the game code typically cycles through the stored 3D models that represent the animation (sometimes moving the object in space at the same time, such as when a character is running or jumping). The problem is that the artists have to create all of the in-between steps, and the number of available steps limits the effective frame rate of the animation. For example, for an animated running sequence, suppose that the artist creates ten steps designed to be played back over one second. If the game is running at ten frames per second, we get one step of the animation per frame. So far, so good. However, if the game is running at sixty frames per second, we only get a new step of the animation once every six frames. Not so good. We would be better off generating new animation steps algorithmically to match our frame rate.

This is the basic idea behind interpolated, key-frame animations (or key-frame morphing). Animations are stored at some predetermined rate (say ten frames per second). Then, at run-time, the game determines how long the previous frame took and algorithmically creates a new frame that is an interpolation between two stored frames. This produces animations that always change smoothly from frame to frame, regardless of the frame rate at which the game is running. The net effect creates a satisfying experience for the gamer running the game at fifteen frames a second on a low-end system, as well as for a gamer running the game at sixty frames a second on a high-end system. If you consider the previous paper and pad animation example, this process would be the equivalent of adding pages in between existing ones, and drawing a new figure between the two neighboring ones to smooth out the animation.
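A minimal sketch of this idea follows, assuming key frames stored at a fixed rate (ten per second in the example above) and one position per vertex; the Vec3, Keyframe, and InterpolatePose names are illustrative, not taken from any particular engine:

    #include <cmath>
    #include <vector>

    struct Vec3 { float x, y, z; };

    // Linear interpolation between two stored positions.
    static Vec3 lerp(const Vec3& a, const Vec3& b, float t)
    {
        return { a.x + (b.x - a.x) * t,
                 a.y + (b.y - a.y) * t,
                 a.z + (b.z - a.z) * t };
    }

    // One key frame = one stored pose of the whole mesh.
    struct Keyframe { std::vector<Vec3> positions; };

    // Build the pose to draw this frame, given the animation time in seconds.
    // Key frames are assumed to be stored at keyRate poses per second (e.g. 10).
    std::vector<Vec3> InterpolatePose(const std::vector<Keyframe>& keys,
                                      float animTime, float keyRate)
    {
        float  framePos = animTime * keyRate;                 // position in key-frame units
        size_t frameA   = static_cast<size_t>(framePos) % keys.size();
        size_t frameB   = (frameA + 1) % keys.size();         // wrap for looping animations
        float  t        = framePos - std::floor(framePos);    // 0..1 between the two keys

        std::vector<Vec3> pose(keys[frameA].positions.size());
        for (size_t i = 0; i < pose.size(); ++i)
            pose[i] = lerp(keys[frameA].positions[i], keys[frameB].positions[i], t);
        return pose;
    }

In a real engine the same blend factor would typically drive joint rotations rather than raw vertex positions, but the timing logic is the same.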

A variation of the key-frame animation technique uses key frames of a "skeleton" to which a polygonal mesh is then attached. In its simplest form, the parts of the mesh are attached to individual "bones" of the skeleton. Picture a character running: the key-frame animation describes the movement of the skeleton, with one bone for each moving part (much like a stick figure). Polygonal meshes (simple boxes, in the crudest case) are then attached to each bone, so a mesh attached to the upper arm and another attached to the lower arm move along with their bones.

This technique can look pretty good onscreen, but newer games use a better technique that avoids some of the problems inherent in this simple form of "skinning". The biggest problem is that the polygonal meshes attached to the bones often end up intersecting with one another. For example, the upper-arm and lower-arm meshes can overlap or leave a gap at the joint. Aside from the nasty-looking creases this creates, there is also a texture discontinuity between the overlapping meshes.

To avoid this overlap problem, developers are starting to use a technique called "single skin meshes". Basically, instead of having one mesh associated with each bone, there is one mesh for the whole model, and one or more bones influence each vertex of that mesh. At runtime, the positions of all bones affecting a vertex are used to calculate the final position of the vertex. The end result is that the intersection and texture-discontinuity problems are eliminated. See Figure 2 for an illustration of this technique.

Figure 2. Bones; mesh in wireframe; mesh shaded; mesh shaded and being influenced by bones.
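In code, single-skin skinning amounts to a weighted blend of bone transforms applied to each vertex. The sketch below is one plausible formulation, assuming the bone matrices have already been combined with the inverse bind-pose transforms; the type and function names are illustrative only:

    #include <vector>

    struct Vec3 { float x, y, z; };

    // Minimal 4x4 transform; m[row][col], points treated as column vectors.
    struct Mat4
    {
        float m[4][4];
        Vec3 TransformPoint(const Vec3& p) const
        {
            return { m[0][0]*p.x + m[0][1]*p.y + m[0][2]*p.z + m[0][3],
                     m[1][0]*p.x + m[1][1]*p.y + m[1][2]*p.z + m[1][3],
                     m[2][0]*p.x + m[2][1]*p.y + m[2][2]*p.z + m[2][3] };
        }
    };

    // Each vertex stores which bones influence it and how strongly.
    struct SkinnedVertex
    {
        Vec3               restPosition;  // position in the bind (rest) pose
        std::vector<int>   boneIndices;   // bones influencing this vertex
        std::vector<float> boneWeights;   // weights, expected to sum to 1.0
    };

    // Blend the bone transforms to get the final, deformed vertex position.
    // boneMatrices holds each bone's current transform already combined with
    // the inverse of that bone's bind-pose transform.
    Vec3 SkinVertex(const SkinnedVertex& v, const std::vector<Mat4>& boneMatrices)
    {
        Vec3 result = { 0.0f, 0.0f, 0.0f };
        for (size_t i = 0; i < v.boneIndices.size(); ++i)
        {
            Vec3 p = boneMatrices[v.boneIndices[i]].TransformPoint(v.restPosition);
            result.x += p.x * v.boneWeights[i];
            result.y += p.y * v.boneWeights[i];
            result.z += p.z * v.boneWeights[i];
        }
        return result;
    }

Each vertex's weights are expected to sum to one, which is what keeps the blended surface from stretching or shrinking at the joints.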

Single-skin meshes add some overhead to 3D-model creation and require additional CPU cycles for calculating the final vertex positions. In return for this processing overhead, the user of the game sees more visually appealing animations.

Scalable Lighting Issues

While increasing the amount of geometry in game applications provides more realistic-looking objects, as well as allowing for more objects within a scene, there is more to creating a realistic-looking environment than just the models in it. Lighting is another area that has a major impact on how realistic a created environment appears.

Calculating and displaying the lighting in a scene requires determining what light sources are in a scene, which objects they illuminate and how brightly, and how, in turn, those objects cast shadows and reflect the light.

It's difficult for developers to scale the lighting in a game, since this one element can have a large effect on both the look of a scene and its playability. Several possibilities for scaling the lighting do exist, however, and certain techniques can be used to make a game run well across the wide range of platforms available.

Lighting effects can be scaled by choosing different lighting techniques to correspond with the relative performance of different systems. So, for example, the high-end systems can use lighting that is entirely dynamic with dynamically cast shadows and moving light sources (such as rockets). Low-end systems can have the lighting tuned down so that shadows are either non-existent or less dynamic (perhaps computed only once every other frame). Moving light sources might be removed or implemented with light maps and changing texture coordinates.
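One way to structure this choice is a quality tier picked at startup, from a benchmark or an options screen, that selects which lighting path the renderer runs each frame. The names and thresholds below are placeholders rather than measured values:

    enum class LightingQuality { Low, Medium, High };

    // Pick a lighting path from a rough measure of the host machine's speed.
    // The thresholds are placeholders; a shipping game would benchmark.
    LightingQuality ChooseLightingQuality(float measuredFramesPerSecond)
    {
        if (measuredFramesPerSecond < 20.0f) return LightingQuality::Low;
        if (measuredFramesPerSecond < 45.0f) return LightingQuality::Medium;
        return LightingQuality::High;
    }

    void RenderLighting(LightingQuality quality, int frameNumber)
    {
        switch (quality)
        {
        case LightingQuality::High:
            // Fully dynamic lights, dynamically cast shadows, moving light sources.
            break;
        case LightingQuality::Medium:
            // Dynamic lights, but shadows recomputed only every other frame.
            if ((frameNumber & 1) == 0)
            {
                // update shadows here
            }
            break;
        case LightingQuality::Low:
            // Light maps with shifting texture coordinates; no dynamic shadows.
            break;
        }
    }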

Another possibility is to use some of the scalable geometry techniques described earlier. Lighting could be calculated for a lower level-of-detail (LOD) model and then, using precomputed and stored connectivity information, the displayed vertices that aren't part of the lower-LOD model would have their lighting values interpolated from the nearest vertices that are.

This technique applies particularly well to parametric surfaces, where the lighting calculations for generated surface points can be performed less often than the calculation of the surface points themselves. Since the connectivity information is implicit in a parametric surface, it's easy to interpolate values for the in-between vertices.
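As a concrete sketch of that idea, the code below computes a simple diffuse term only on a coarse grid of surface normals and bilinearly interpolates the result in parameter space for a finer tessellation; the function names and grid layout are assumptions for illustration, not from the original article:

    #include <cmath>
    #include <vector>

    struct Vec3 { float x, y, z; };

    static Vec3 normalize(Vec3 v)
    {
        float len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
        return { v.x/len, v.y/len, v.z/len };
    }

    // Simple diffuse term for one directional light; this is the "expensive"
    // per-point work we want to perform less often.
    static float DiffuseIntensity(const Vec3& normal, const Vec3& lightDir)
    {
        Vec3 n = normalize(normal), l = normalize(lightDir);
        float d = n.x*l.x + n.y*l.y + n.z*l.z;
        return d > 0.0f ? d : 0.0f;
    }

    // Light a fine (fineU x fineV) tessellation of a parametric surface by
    // computing lighting on a coarse (coarseU x coarseV) grid and bilinearly
    // interpolating in parameter space. coarseNormals holds the coarse grid
    // normals in row-major order; both grid sizes are assumed to be >= 2.
    std::vector<float> InterpolatedLighting(const std::vector<Vec3>& coarseNormals,
                                            int coarseU, int coarseV,
                                            int fineU, int fineV,
                                            const Vec3& lightDir)
    {
        // Lighting computed once per coarse grid point.
        std::vector<float> coarse(coarseU * coarseV);
        for (int i = 0; i < coarseU * coarseV; ++i)
            coarse[i] = DiffuseIntensity(coarseNormals[i], lightDir);

        // Fine grid values are interpolated, not recomputed.
        std::vector<float> fine(fineU * fineV);
        for (int v = 0; v < fineV; ++v)
        {
            for (int u = 0; u < fineU; ++u)
            {
                float fu = u * float(coarseU - 1) / float(fineU - 1);
                float fv = v * float(coarseV - 1) / float(fineV - 1);
                int u0 = int(fu), v0 = int(fv);
                int u1 = (u0 + 1 < coarseU) ? u0 + 1 : u0;
                int v1 = (v0 + 1 < coarseV) ? v0 + 1 : v0;
                float tu = fu - u0, tv = fv - v0;

                float a = coarse[v0*coarseU + u0] * (1-tu) + coarse[v0*coarseU + u1] * tu;
                float b = coarse[v1*coarseU + u0] * (1-tu) + coarse[v1*coarseU + u1] * tu;
                fine[v*fineU + u] = a * (1-tv) + b * tv;
            }
        }
        return fine;
    }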

Shadows are represented in 3D applications in a number of ways. Most 3D games of the past year or two have resorted to simple shadow maps that resemble a dark circle or a blur underneath the object. In cases where the object sits on a flat ground plane, sometimes a simple geometric form of the object is "squashed" onto the ground to represent a shadow. More recently, developers are moving toward increasingly complex methods of generating shadows, such as projected textures, dynamic shadow maps, and hardware features such as stencil buffers.

Most of these more advanced shadow techniques involve calculations that chew up processor cycles, so developers need to adapt them to suit low-end systems. Simpler mechanisms can be used to scale these shadow techniques across different systems. For example, if the system is incapable of handling a more complex shadow-casting method (such as using stencil buffers to create shadow volumes), the application can be designed to switch to the more basic shadow map approach. Optionally, the application could disable shadows altogether.
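A hypothetical fallback chain might look like the following; the DeviceCaps fields stand in for whatever capability queries the 3D API actually provides, and the ordering reflects the text above, from most exact to cheapest:

    // Hypothetical device description; a real engine would fill this in from
    // the 3D API's capability queries (stencil bit depth, render targets, ...).
    struct DeviceCaps
    {
        int  stencilBits;             // 0 if no stencil buffer is available
        bool supportsRenderToTexture;
    };

    enum class ShadowTechnique { None, SimpleShadowMap, ProjectedTexture, StencilVolume };

    // Fall back from the most exact technique to progressively cheaper ones.
    ShadowTechnique ChooseShadowTechnique(const DeviceCaps& caps,
                                          bool lowEndCpu, bool shadowsEnabled)
    {
        if (!shadowsEnabled)
            return ShadowTechnique::None;                 // shadows switched off entirely
        if (!lowEndCpu && caps.stencilBits >= 8)
            return ShadowTechnique::StencilVolume;        // exact, self-shadowing volumes
        if (!lowEndCpu && caps.supportsRenderToTexture)
            return ShadowTechnique::ProjectedTexture;     // softer rendered-texture shadows
        return ShadowTechnique::SimpleShadowMap;          // cheap dark blob under the object
    }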

Using 3D hardware accelerators that support stencil buffers, more exact shadow representations can be created based on the actual shapes of the objects casting the shadows. This technique involves creating a "shadow volume" for each object. The shadow volume is itself a 3D object, created by casting planes from the light source through each silhouette edge of the object. This volume is then rendered through the stencil buffer, marking the pixels that fall inside it so they can be drawn in shadow. In actual use, this technique produces impressive results, and it can be used to generate self-shadowing: an object such as a donut can be rendered so that it casts a shadow on itself when the light is off to one side. By using different level-of-detail models to create the shadow volume, this technique can also be made scalable, although in some cases artifacts can result as a byproduct of the scaling process.
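The render passes for the classic depth-pass variant of this technique can be sketched with OpenGL 1.x-style calls as below; DrawSceneAmbient, DrawShadowVolumes, and DrawSceneLit are placeholders for the application's own drawing code, and this is only one of several ways to arrange the passes:

    // Sketch of the depth-pass stencil shadow volume passes in OpenGL 1.x style.
    #include <GL/gl.h>

    void DrawSceneAmbient();
    void DrawShadowVolumes();
    void DrawSceneLit();

    void RenderWithStencilShadows()
    {
        glEnable(GL_DEPTH_TEST);
        glEnable(GL_CULL_FACE);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);

        // Pass 1: lay down depth and ambient color for the whole scene.
        DrawSceneAmbient();

        // Pass 2: render the shadow volumes into the stencil buffer only.
        glEnable(GL_STENCIL_TEST);
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);  // no color writes
        glDepthMask(GL_FALSE);                                 // no depth writes
        glStencilFunc(GL_ALWAYS, 0, ~0u);

        glCullFace(GL_BACK);                                   // front faces: increment
        glStencilOp(GL_KEEP, GL_KEEP, GL_INCR);
        DrawShadowVolumes();

        glCullFace(GL_FRONT);                                  // back faces: decrement
        glStencilOp(GL_KEEP, GL_KEEP, GL_DECR);
        DrawShadowVolumes();

        // Pass 3: add diffuse/specular light only where the stencil count is
        // zero, i.e. pixels that are not inside any shadow volume.
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
        glDepthMask(GL_TRUE);
        glCullFace(GL_BACK);
        glStencilFunc(GL_EQUAL, 0, ~0u);
        glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
        glDepthFunc(GL_EQUAL);                                 // redraw on top of pass 1
        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE);                           // additive light contribution
        DrawSceneLit();

        // Restore state for the rest of the frame.
        glDisable(GL_BLEND);
        glDepthFunc(GL_LESS);
        glDisable(GL_STENCIL_TEST);
    }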

You can also create shadows by rendering the object casting the shadow to a texture surface, and then using that texture as the shadow. This approach can produce more natural, softer shadow edges, and it can also be scaled in a number of ways. One scalability technique is to update the shadow's movement at half the frame rate of the rest of the application, which reduces the computation required on lower-end machines. On the down side, self-shadowing is much harder to accomplish with this technique than with the stencil buffer technique.
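A minimal sketch of that half-rate update follows, assuming hypothetical RenderShadowTexture and DrawSceneWithShadowTexture helpers that stand in for the application's render-to-texture and scene-drawing code:

    // Hypothetical helpers standing in for the application's render-to-texture
    // and scene-drawing code.
    void RenderShadowTexture();
    void DrawSceneWithShadowTexture();

    // Refresh the shadow texture only on every other frame on low-end machines;
    // in between, the cached texture from the previous update is reused.
    void RenderFrame(int frameNumber, bool lowEndSystem)
    {
        bool updateShadow = !lowEndSystem || (frameNumber % 2 == 0);
        if (updateShadow)
            RenderShadowTexture();        // re-render the caster into the texture
        DrawSceneWithShadowTexture();     // reuse the (possibly one-frame-old) shadow
    }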

________________________________________________________

Scaling Special Effects