Refractive Texture Mapping, Part Two
Part One of this article investigated the use of sphere mapping to simulate curved-surface reflections. Part Two looks at how refractive texture mapping can be implemented to simulate refractions.
The steps to implement refractive texture mapping are similar to those used in Part One, but instead of figuring out the reflected ray, the refracted ray is computed. These rays are then used to "hit" the texture, generating the UV coordinates. The texture doesn't need to be spherical; a simple planar map produces the best results. Finally, the polygons are sent to the hardware to be transformed and rendered. As usual, several details need to be considered for implementation.
Refracted Ray: Is Snell's Law Really Necessary?
The refracted ray (Figure 10) is obtained from Snell's law, which states:
n1*sin(θ1) = n2*sin(θ2) (Equation 7), where:
n1 = index of refraction for medium 1 (incident ray)
n2 = index of refraction for medium 2 (refracted ray)
θ1 = angle between the incident ray and the surface normal
θ2 = angle between the refracted ray and the surface normal
[Figure 10: The refracted ray]
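For comparison, here is a minimal sketch (the function and variable names are hypothetical, not taken from the article's listings) of the trigonometric evaluation Snell's law would require per vertex; the approximation described next avoids it.

```cpp
#include <cmath>

// Given the incident angle (radians) and the indices of refraction of the two
// media, Snell's law (Equation 7) yields the refracted angle. Note that this
// only produces an angle, not a direction vector, and it costs a sin and an
// asin per vertex.
float SnellRefractedAngle(float theta1, float n1, float n2)
{
    float s = (n1 / n2) * std::sin(theta1);   // sin(theta2) = (n1 / n2) * sin(theta1)
    if (s > 1.0f)
        s = 1.0f;                             // beyond this, total internal reflection occurs
    return std::asin(s);                      // theta2
}
```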
Approximating the Refracted Ray
It is possible to approximate the refracted ray by adding the inverted normal, scaled by a factor, to the incident ray. In this article that factor is called the "refraction factor." The refraction factor has nothing to do with the index of refraction in Snell's law; it is a made-up term that simply weights the normal's contribution to the final expression. For instance, if the refraction factor is zero, the refracted ray is equal to the incident ray. On the other hand, if the refraction factor is very large, the incident ray's contribution becomes so small that the final refracted ray approaches the inverted normal. Looking at Figure 11, the final expression is:
Rf = I + (-N)*rf (Equation 8), where:
Rf = refracted ray
I = incident ray
N = surface normal
rf = refraction factor
[Figure 11: The incident ray, the inverted normal, and the resulting approximated refracted ray]
By using Equation 8 to compute the refracted ray, we get rid of the angular formulas entirely. Also, because of its simple form, the implementation is straightforward. Figure 12 shows the refracted rays computed by Equation 8, displayed in debug mode.
[Figure 12: The refracted rays computed by Equation 8, displayed in debug mode]
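A minimal sketch of Equation 8 follows; the vector type and function names are assumptions, not the article's code.

```cpp
// A simple three-component vector for the sketch.
struct Vector3 { float x, y, z; };

// Approximate the refracted ray (Equation 8): the incident ray plus the
// inverted normal weighted by the refraction factor. With a factor of zero
// the result is the incident ray; as the factor grows, the direction tends
// toward the inverted normal.
Vector3 ApproximateRefractedRay(const Vector3 &incident, const Vector3 &normal,
                                float refractionFactor)
{
    Vector3 refracted;
    refracted.x = incident.x - normal.x * refractionFactor;
    refracted.y = incident.y - normal.y * refractionFactor;
    refracted.z = incident.z - normal.z * refractionFactor;
    return refracted;
}
```

The result does not need to be normalized for the intersection step described next, because the parameter t computed from the ray's y component scales the x and z offsets consistently regardless of the ray's length.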
"Hitting" The Texture Map
Once the refracted ray has been computed, two more steps are required to get the UV coordinates for rendering. First, the ray needs to be extended to see where it hits the texture map. Second, linear interpolation is used to transform the intersection point into UV coordinates. Note that one more variable is necessary: the depth -- that is, how deep the ray needs to travel until it hits the map. The intersection can easily be found by using the parametric line equation in 3D space. Looking at the line equation, we have:
x = x0 + Rfx*t (Equation 9)
y = y0 + Rfy*t
z = z0 + Rfz*t, where:
Rfx, Rfy, Rfz = components of the refracted ray
x, y, z = point of intersection
x0, y0, z0 = initial point
t = parameter
In Equation 9, x0, y0, and z0 are the coordinates of the vertex being processed. Now the program needs to figure out what the new XZ coordinates would be. This can be done by computing the t parameter using the y part of the line equation and the y component of the refracted ray, so that:
y - y0 = Rfy*t,
t = (y - y0) / Rfy (Equation 10)
What is y - y0 exactly? Here is where the depth variable comes into play. The term y - y0 tells how far down the ray travels, so y - y0 can be replaced by the depth. Because y0 moves up and down throughout the mesh due to the perturbations, the final depth needs to be computed correctly to make sure all the rays stop at the same depth. This is done by adding the initial depth (a given variable) to the displacement between the mesh's initial y and the current vertex's y. Equation 11 shows how the final t parameter is computed, and Figure 13 shows a side view of the mesh.
p = pi + (yi - y0)
t = p / Rfy (Equation 11), where:
p = final depth
pi = initial depth
yi = initial mesh y (mesh without any perturbations)
y0 = current vertex's y component
[Figure 13: Side view of the mesh]
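As a sketch of Equations 10 and 11 (the variable names are mine, not the article's; the sign convention depends on whether the depth and the ray's y component are measured in the same direction), the parameter t can be computed per vertex like this:

```cpp
// Compute the final depth (Equation 11) and the ray parameter t for the
// current vertex. initialDepth is pi, initialMeshY is yi (the unperturbed
// mesh height), vertexY is y0, and refractedY is Rfy.
float ComputeRayParameter(float initialDepth, float initialMeshY,
                          float vertexY, float refractedY)
{
    float finalDepth = initialDepth + (initialMeshY - vertexY);  // p = pi + (yi - y0)
    return finalDepth / refractedY;                              // t = p / Rfy
}

// The ZX intersection then follows from the x and z parts of Equation 9:
//   xHit = x0 + Rfx * t;
//   zHit = z0 + Rfz * t;
```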
Once the t parameter has been computed, the z and x parts of the line equation are used to compute the ZX intersection. Then, simple linear interpolation is used to transform the new ZX values (the intersected point) into UV coordinates. The x-max, x-min, z-max, and z-min from the mesh are used for the interpolation. Equation 2 is used, but this time the x-min, x-intersected, and x-max values are used to compute the U coordinate. Likewise, the z-min, z-intersected, and z-max values are used to compute the V coordinate. The final equations are:
u = (x - xi) / (xe - xi)
v = (z - zi) / (ze - zi), where:
u, v = texture coordinates
x, z = intersected point
xi = mesh's x minimum value
xe = mesh's x maximum value
zi = mesh's z minimum value
ze = mesh's z maximum value
If we replace xe - xi by the length in the X dimension and ze - zi by the length in the Z dimension we have:
u = (x - xi) / x_length (Equation 12)
v = (z - zi) / z_length (Equation 13), where:
x_length = xe - xi
z_length = ze - zi
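As a minimal sketch of Equations 12 and 13 (the function and parameter names are assumptions, not the article's code), the interpolation can be written as:

```cpp
// Map the ZX intersection point to UV coordinates (Equations 12 and 13).
// xMin/xMax/zMin/zMax are the mesh extents (xi, xe, zi, ze above).
void IntersectionToUV(float xHit, float zHit,
                      float xMin, float xMax, float zMin, float zMax,
                      float &u, float &v)
{
    u = (xHit - xMin) / (xMax - xMin);   // Equation 12
    v = (zHit - zMin) / (zMax - zMin);   // Equation 13
}
```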
A problem arises when the UV coordinates are obtained with this procedure. Depending on the depth and the vertices' positions (especially the ones close to the mesh borders), the refracted ray may not hit the map. This means the intersection point won't be valid, generating either negative UV coordinates or UV coordinates greater than 1.0. With some hardware, these UV coordinates will make the textures wrap, creating undesirable artifacts.
A few things can be done to correct this problem. The simplest way is to clamp the UV values if they are invalid: a few "ifs" can be added to the inner loop to check for invalid UV coordinates, clamping negative values to 0 and values greater than 1 to 1. This procedure can still generate undesirable artifacts, but depending on the camera position and the texture, they may not be too noticeable. Another way is to figure out the maximum and minimum X and Z values the refracted ray can generate at the corners of the mesh (with the maximum angle), and then change the interpolation limits to take this into account. Instead of interpolating the UVs from 0 to 1, they would go, for example, from 0.1 to 0.9. By doing this, even when the X and Z values fall outside the limits, the UV coordinates remain in range. My sample program uses the first approach, and Listing 4 shows the implementation of refractive texture mapping.
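The clamping step might look something like the sketch below. This is only an illustration of the idea, with hypothetical names, not the article's Listing 4.

```cpp
// Clamp a texture coordinate into the valid [0, 1] range.
float ClampUV(float value)
{
    if (value < 0.0f) return 0.0f;
    if (value > 1.0f) return 1.0f;
    return value;
}

// Inside the per-vertex loop, after the UVs have been computed:
//   u = ClampUV(u);
//   v = ClampUV(v);
```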