Uses of Rendering to Texture Surfaces

Now that we've seen how to render to texture surfaces and how to use less desirable methods to gracefully handle systems that cannot, let's examine a few of the effects that we can produce using this capability.

Mirrors

One of the first uses that springs to mind is mirror reflections, where objects are texture mapped with a reflection of the scene in front of them. This effect requires rendering the scene's geometry from the point of view of a reflected camera, using a rectangle around the mirror (which could be the mirror itself, if the mirror is rectangular). The new mirror view frustum is sheared based on the angle between the mirror's normal vector and the vector from the viewer to the mirror position (see Figure 1). The shearing lets the reflection point in the right direction, while letting the mirror plane act as the front clipping plane of the mirror view frustum.
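To make this concrete, here is a minimal sketch of building the reflection transform for the mirrored camera, assuming a row-vector (Direct3D-style) matrix convention; the Matrix4 type and function name are illustrative, not taken from the sample code. The matrix reflects world-space points through the mirror plane, and concatenating it ahead of the normal view matrix yields the reflected camera:

// Minimal 4x4 matrix type; the sample code would use D3DMATRIX instead.
struct Matrix4 { float m[4][4]; };

// Builds a matrix that reflects points through the plane
// nx*X + ny*Y + nz*Z + d = 0. Assumes (nx, ny, nz) is unit length and
// the row-vector convention (translation in the bottom row).
Matrix4 BuildReflectionMatrix(float nx, float ny, float nz, float d)
{
    Matrix4 r;
    r.m[0][0] = 1.0f - 2.0f * nx * nx;
    r.m[0][1] =       -2.0f * nx * ny;
    r.m[0][2] =       -2.0f * nx * nz;
    r.m[0][3] = 0.0f;

    r.m[1][0] =       -2.0f * ny * nx;
    r.m[1][1] = 1.0f - 2.0f * ny * ny;
    r.m[1][2] =       -2.0f * ny * nz;
    r.m[1][3] = 0.0f;

    r.m[2][0] =       -2.0f * nz * nx;
    r.m[2][1] =       -2.0f * nz * ny;
    r.m[2][2] = 1.0f - 2.0f * nz * nz;
    r.m[2][3] = 0.0f;

    // Translation row: chosen so that points on the plane map to themselves.
    r.m[3][0] = -2.0f * d * nx;
    r.m[3][1] = -2.0f * d * ny;
    r.m[3][2] = -2.0f * d * nz;
    r.m[3][3] = 1.0f;
    return r;
}

One caveat: a reflection flips the scene's handedness, so the triangle winding used for backface culling must be reversed while rendering into the mirror texture.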

Of course, mirrors can also be done by simply projecting geometry against a plane. However, if the mirror is a complex shape, there is a lot more clipping work involved. Rendering to a texture also has advantages in the area of scalability, which we will discuss later in this article.

Figure 1. Mirror done by rendering to texture surface.

The executable and source code used to generate the above example are provided in the FLATMIRROR directory of the sample code.

Dynamic Environment Maps

A logical extension of the previous effect is to render environment maps on the fly. That way, the environment map is not limited to a representation of the distant scene; it can also reflect nearby objects. (Typically, environment maps contain only sky and distant mountains, so that they can remain relatively constant within the scene.)

Often, environment maps are represented as a 'sphereized' image: one that is distorted to look as though it were captured with an extremely wide-angle lens (see Figure 2). You can approximate this effect by using a view frustum with an extremely wide field of view, placing the camera at the center of the object intended to receive the environment map. Because a full 180-degree field of view is impossible, we have to limit the field of view to something less than that (our example uses 120 degrees). Additionally, there is the issue of mapping polar coordinates onto a rectangular texture, which we'll return to shortly. For most environment mapping uses, however, the reflection is subtle enough that the effect can work quite well.
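As an illustration, here is a sketch of the wide-angle projection used when rendering the environment map, assuming a square render-target texture (aspect ratio of 1) and the left-handed, row-vector convention used by Direct3D; the names are mine, not from the sample code:

#include <math.h>

struct Matrix4 { float m[4][4]; };  // same illustrative type as in the mirror sketch

// Builds a perspective projection with the given vertical field of view
// (in radians). zNear and zFar are the near and far clipping distances.
Matrix4 BuildProjectionMatrix(float fov, float zNear, float zFar)
{
    float h = 1.0f / tanf(fov * 0.5f);  // cot(fov/2)
    float q = zFar / (zFar - zNear);

    Matrix4 p = { 0 };                  // start with all elements zero
    p.m[0][0] = h;                      // square texture, so width scale == height scale
    p.m[1][1] = h;
    p.m[2][2] = q;
    p.m[2][3] = 1.0f;
    p.m[3][2] = -q * zNear;
    return p;
}

// For the dynamic environment map, something on the order of:
//   Matrix4 proj = BuildProjectionMatrix(120.0f * 3.14159f / 180.0f, 0.1f, 1000.0f);
// with the camera placed at the center of the reflective object.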

A hybrid of static and dynamic environment maps may also make sense. For example, you could initialize the background of the dynamic environment map with a static environment map texture, and then render the reflections of nearby objects on top of it.

Figure 2. A sphereized bitmap for use in an environment map.

Once the dynamic environment map has been rendered, texture coordinates for the object receiving it are calculated as in other hemispherical environment map cases: for every vertex in the object, the vertex normal is transformed into camera space, and the X and Y components of the reflected camera vector become the texture coordinates. An example of this can be seen in Figure 3, and the source code and executable can be found in the SPHEREMAP directory of the sample code.
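The per-vertex calculation might look like the following sketch. Both vectors are assumed to already be in camera space and unit length; the names and the exact remapping into the [0, 1] range are illustrative rather than lifted from the sample code:

struct Vec3 { float x, y, z; };

// 'normal' is the vertex normal in camera space; 'view' is the unit vector
// from the camera to the vertex, also in camera space.
void ComputeSphereMapUV(const Vec3& normal, const Vec3& view,
                        float* u, float* v)
{
    // Reflect the view vector about the normal: R = V - 2(N.V)N.
    float ndotv = normal.x * view.x + normal.y * view.y + normal.z * view.z;
    float rx = view.x - 2.0f * ndotv * normal.x;
    float ry = view.y - 2.0f * ndotv * normal.y;

    // The X and Y components of the reflected vector become the texture
    // coordinates, remapped from [-1, 1] into [0, 1]. V is flipped because
    // texture V increases downward while camera-space Y increases upward.
    *u = 0.5f + 0.5f * rx;
    *v = 0.5f - 0.5f * ry;
}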

Figure 3. Dynamically rendered environment map.

The reason that polar-to-rectangular mapping is a problem is that while we are adequately (if not entirely correctly) calculating the UV coordinates at each vertex, the UV coordinates for the pixels in between are incorrect. As we move across the surface of the sphere, the reflected view vectors generate UV coordinates that change nonlinearly, but the graphics hardware performs only a linear interpolation of the UV coordinates between vertices. How visible this problem is depends on how highly tessellated the model is. A model with a vertex per pixel will appear perfect, but the texture will begin to 'jiggle' slightly as the triangles get larger. One way around this may be to do another render-to-texture step that approximates the 'sphereize' filter found in many photo editing packages, using a highly tessellated mesh.

Soft Edged Shadows

In his March 1999 Game Developer Magazine article entitled "Real-time Shadow Casting," Hubert Nguyen presents an approach to rendering shadows into the frame buffer, and then copying them to a texture surface. While this technique is a fitting example of rendering to texture, it uses one of the fallback methods mentioned earlier in this article (Nguyen implemented his method using a 3Dfx Voodoo card, which can't render to texture surfaces).

If you haven't read the article, the approach can be summarized as follows:

1. Set the render target to the shadow texture surface.
2. Render the shadow-casting object from the light's point of view, in a solid dark color against a brightly lit background.
3. Restore the render target and render the scene normally.
4. Render the shadow-receiving geometry a second time, modulating it with the shadow texture, using texture coordinates projected from the light's position. The filtering of the shadow texture as it is stretched across the receiving geometry is what softens the shadow's edges.
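A minimal sketch of the shadow-texture pass itself (steps 1 through 3) follows; the helper routines are illustrative stand-ins for your engine's or Direct3D's equivalents, not Nguyen's code or the sample code:

struct Texture;                  // a render-target texture surface
struct Object;                   // a renderable mesh
struct Vec3 { float x, y, z; };

// Illustrative stand-ins for engine/API routines:
void SetRenderTarget(Texture* target);
void ClearTarget(float r, float g, float b);
void SetViewMatrixLookAt(const Vec3& eye, const Vec3& at);
void DrawObjectUnlit(Object* obj, float r, float g, float b);

// Renders the shadow caster, as seen from the light, into the shadow
// texture: a dark silhouette on a bright background.
void RenderShadowTexture(Texture* shadowTex, Texture* backBuffer,
                         Object* caster, const Vec3& lightPos,
                         const Vec3& casterPos)
{
    SetRenderTarget(shadowTex);
    ClearTarget(1.0f, 1.0f, 1.0f);              // 'fully lit' background
    SetViewMatrixLookAt(lightPos, casterPos);   // camera sits at the light
    DrawObjectUnlit(caster, 0.3f, 0.3f, 0.3f);  // dark, unlit silhouette
    SetRenderTarget(backBuffer);                // back to normal rendering
}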

Figure 4 is a screenshot of this technique in action. The image in the upper left corner is the shadow texture (i.e., the view of the object casting the shadow, from the point of view of the light). The source code and executable are available in the SHADOWTEX directory of the sample code.

Figure 4. Real-time shadow texture using render-to-texture.

Mip Map Generation

One use for rendering to textures is creating mip map chains. To accomplish this, set up a chain of render-target surfaces, copy the source texture to the first, and then loop down to the smallest surface in the chain. At each iteration of the loop, the render target is the next-smallest surface in the chain, and a rectangle is rendered over it using the previous surface in the chain as the texture; bilinear filtering does the work of creating the mip map. While this approach doesn't offer any great advantage over storing mip maps on the hard drive and loading them at start time, it may be useful for creating mip maps of textures generated with one of the previously mentioned techniques, or of other procedural textures.
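In sketch form, the loop might look like this; SetRenderTarget and DrawFullSurfaceQuad are stand-ins for whatever your engine or API provides, and chain[0] is assumed to already contain the source image:

struct Texture;                          // a render-target texture surface
void SetRenderTarget(Texture* target);   // stand-in for the render-target switch
void DrawFullSurfaceQuad(Texture* src);  // quad covering the whole target,
                                         // textured with 'src', bilinear filtering on

// 'chain' holds the surfaces, largest first, each level half the width
// and height of the one before it.
void GenerateMipChain(Texture* chain[], int levels)
{
    for (int i = 1; i < levels; ++i)
    {
        // Render into the next-smallest level...
        SetRenderTarget(chain[i]);

        // ...using the previous level as the texture. With bilinear
        // filtering, each destination pixel averages a 2x2 block of the
        // source, which is exactly a box-filtered mip level.
        DrawFullSurfaceQuad(chain[i - 1]);
    }
}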

TV-Style Scene Transitions

When transitioning from one scene to the next, you could keep the last frame of the outgoing scene by rendering it to a texture, and then use that texture during the transition to the next scene, in a style similar to those seen on TV or in video editing applications. Typical transitions include the barn door, vertical blind, page turn, and so on.
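As a sketch, a barn-door transition might look like the following, where DrawQuad is a stand-in for the engine's screen-space textured-quad routine (not something from the sample code):

struct Texture;   // holds the last rendered frame of the old scene

// Draws the (u0,v0)-(u1,v1) region of 'tex' into the screen-space
// rectangle (x0,y0)-(x1,y1).
void DrawQuad(Texture* tex,
              float x0, float y0, float x1, float y1,
              float u0, float v0, float u1, float v1);

// The new scene has just been rendered normally; the two halves of the
// old frame slide apart on top of it. 't' runs from 0 at the start of
// the transition to 1 when the old frame is fully gone.
void DrawBarnDoorTransition(Texture* oldScene, float t,
                            float screenW, float screenH)
{
    float offset = t * screenW * 0.5f;

    // The left half slides left...
    DrawQuad(oldScene,
             -offset, 0.0f, screenW * 0.5f - offset, screenH,
             0.0f, 0.0f, 0.5f, 1.0f);

    // ...and the right half slides right.
    DrawQuad(oldScene,
             screenW * 0.5f + offset, 0.0f, screenW + offset, screenH,
             0.5f, 0.0f, 1.0f, 1.0f);
}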

What Else Is Possible?

I am certain many other techniques exist. For example, in the screenshot in Figure 5, I tried some feedback buffer effects by rendering to one texture, using that as the background texture while rendering to a second texture, and then repeating the process with the two textures' roles swapped. By drawing some random pixels along the bottom of the texture, I tried creating a 'fire' effect, and by drawing the objects in my scene with a noise texture, I created some 'smoke trails'. The effect propagates upward because the UV coordinates of the triangles used to draw the background are offset slightly on each pass. The code and executable for this demo can be found in the FEEDBACK directory of the sample code.
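One frame of that feedback loop might be sketched as follows; the helper names are again illustrative, and the drift constant is just a plausible value:

struct Texture;                         // a render-target texture surface
void SetRenderTarget(Texture* target);  // stand-in render-target switch
void DrawSeedPixels();                  // draws the random 'embers' and the
                                        // noise-textured scene objects
// Draws 'src' over the whole render target using the given UV rectangle.
void DrawBackgroundQuad(Texture* src,
                        float u0, float v0, float u1, float v1);

// The two textures are ping-ponged each frame: the previous frame's
// result becomes this frame's background, with the V coordinates nudged
// so the image creeps upward.
void FeedbackFrame(Texture** src, Texture** dst)
{
    SetRenderTarget(*dst);

    // Sample slightly below each destination pixel, so the image drifts
    // upward by kDrift of the texture height per frame.
    const float kDrift = 0.004f;
    DrawBackgroundQuad(*src, 0.0f, kDrift, 1.0f, 1.0f + kDrift);

    // New 'fuel' for the effect goes on top.
    DrawSeedPixels();

    // Swap roles for the next frame.
    Texture* tmp = *src;
    *src = *dst;
    *dst = tmp;
}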

Figure 5. Feedback effects using render to texture.

Potential Areas for Scalability