Meshing
My meshing algorithm is based on a pair of previously published continuous-LOD terrain algorithms. There are a few key modifications, but much of the basic approach is the same, and I borrow a lot of the terminology.
There are two
parts to meshing. I call the
first part Update() and the second part Render(), following the naming used in that earlier work. During Update(), we'll
decide which vertices to include in the output mesh. Then, during Render() we'll
generate a triangle mesh that includes those vertices.
I'll start by explaining Update() and Render() for an
extremely simple heightfield: a 3x3 grid (Figure 2). To Update() it, we'll
look at each of the optional vertices and decide whether to
include them in the mesh.
Following the same terminology, we'll say that a vertex is used in the mesh if and only if it is "enabled".
Figure 2. A 3x3 heightfield. Dashed lines and vertices are optional in an LOD mesh.
Take
as given that the center and corner vertices are enabled. So the task is to
calculate the enabled state for each of the four edge vertices,
according to some LOD calculation which takes the viewpoint and
the vertex attributes into
account.
Once we know
which vertices are enabled, we can Render() the mesh. It's
easy; we just make a triangle fan with the center vertex as the
hub, and include each enabled vertex in clockwise order around
the outside. See
Figure 3 for examples.
Figure 3. Examples of LOD meshes on the 3x3 heightfield. Disabled vertices in black.
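To make Render() concrete for this simple case, here is a minimal C++ sketch of the triangle-fan construction, assuming a Vertex struct, an issue_fan() hand-off to the renderer, and a clockwise storage order in which edge[i] follows corner[i] around the rim; these names and helpers are illustrative assumptions rather than part of the algorithm description above.

#include <vector>

struct Vertex { float x, y, z; };

// Placeholder for the real draw call (e.g. a triangle-fan primitive).
void issue_fan(const std::vector<Vertex>& fan) { (void)fan; }

// Render one 3x3 square as a single triangle fan: the center vertex is the
// hub, and the rim is the four corners (always enabled) plus whichever
// edge-midpoint vertices Update() decided to enable, in clockwise order.
void render_square(const Vertex& center,
                   const Vertex corner[4],      // always enabled
                   const Vertex edge[4],        // optional edge midpoints
                   const bool edge_enabled[4])  // Update()'s per-edge decision
{
    std::vector<Vertex> fan;
    fan.push_back(center);                      // hub of the fan

    for (int i = 0; i < 4; i++) {
        fan.push_back(corner[i]);
        if (edge_enabled[i]) {
            fan.push_back(edge[i]);             // disabled vertices are simply skipped
        }
    }
    fan.push_back(corner[0]);                   // close the fan around the rim

    issue_fan(fan);
}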
To
Update() and Render() an adaptive quadtree heightfield, we extend
the above process by starting with that same 3x3 square and
recursively subdividing it.
By subdividing, we can introduce new vertices, and treat
them like we treated the vertices of the original square. In order to prevent cracks,
however, we'll have to observe some
rules.
First, we can
subdivide any combination of the four quadrants. When we subdivide a
quadrant, we'll treat the quadrant as a sub-square, and enable
its center vertex. For mesh
consistency, we will also have to enable the edge vertices of
the parent square which are corners of the quadrant (Figure
4). We'll define enabling a
square to imply the enabling of its center vertex as well as
those corners.
Figure 4. Subdividing the NE quadrant of a square. The gray vertices are already known to be enabled, but the black vertices must be enabled when we subdivide.
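The bookkeeping implied by Figure 4 can be sketched with a few flags per square. The node layout and the quadrant/edge numbering below are my own assumptions for illustration (quadrants clockwise NE, SE, SW, NW; edge midpoints N, E, S, W), not a description of any particular implementation.

// Per-square state: which edge-midpoint vertices and which child quadrants
// are enabled.  Quadrants: 0=NE, 1=SE, 2=SW, 3=NW.  Edges: 0=N, 1=E, 2=S, 3=W.
struct QuadSquare {
    QuadSquare* child[4];         // sub-squares; null until subdivided
    bool        edge_enabled[4];  // this square's four edge-midpoint vertices
    bool        child_enabled[4]; // which quadrants (and their center vertices) are enabled
};

// Enabling quadrant q of square s enables the quadrant's center vertex and
// the two parent edge vertices that become corners of the quadrant (the
// black vertices in Figure 4).
void enable_quadrant(QuadSquare* s, int q)
{
    s->child_enabled[q] = true;          // quadrant center vertex

    // Under the numbering above, quadrant q is bounded by parent edges
    // q and (q+1)%4; e.g. the NE quadrant touches the N and E edge midpoints.
    s->edge_enabled[q] = true;
    s->edge_enabled[(q + 1) % 4] = true;
}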
Next, notice
that an edge vertex in a sub-square is shared with a
neighboring sub-square (except at the outside edges of our
terrain). So when we enable an edge vertex, we will have to
make sure that the neighboring sub-square which shares that
vertex is also enabled (Figure 5). Enabling this neighbor square can
in turn cause other vertices to be enabled, potentially
propagating enabled flags through the quadtree. This propagation is necessary to
ensure mesh consistency.
The papers this algorithm is based on give a good description of these dependency rules.
Figure 5. While updating the NE quadrant, we decide to enable the black vertex. Since that vertex is also shared by the SE quadrant (marked in gray), we must enable that quadrant also. Enabling the SE quadrant will in turn force us to enable the gray vertices.
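The propagation rule in Figure 5 can be sketched as a pair of mutually dependent routines. This continues the assumed QuadSquare layout from the previous sketch; get_neighbor() and enable_square() are hypothetical helpers standing in for the real dependency bookkeeping.

// The square across a given edge, or null at the border of the terrain.
QuadSquare* get_neighbor(QuadSquare* s, int edge);

// Enables a square: its center vertex plus the corner vertices it depends
// on, which may recurse further up and across the tree.
void enable_square(QuadSquare* s);

void enable_edge_vertex(QuadSquare* s, int edge)   // edge: 0=N, 1=E, 2=S, 3=W
{
    if (s->edge_enabled[edge]) {
        return;                                    // already enabled; nothing new propagates
    }
    s->edge_enabled[edge] = true;

    // The neighboring square shares this vertex, so it must be enabled too
    // (Figure 5), which may in turn enable more vertices and squares.
    QuadSquare* n = get_neighbor(s, edge);
    if (n) {
        enable_square(n);
    }
}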
After we're
done with the Update(), we can Render() the quadtree. Rendering
is actually pretty simple; the complicated consistency stuff
was taken care of in Update().
The basic strategy is to recursively Render() any
enabled sub-squares, and then render any parts of the square
which weren't covered by enabled sub-squares. Figure 6 shows an example mesh.
Figure 6. An example mesh. Enabled vertices are marked in black. The gray triangles are drawn by recursive calls to Render() on the associated sub-squares. The white triangles are drawn by the original call to Render().
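In outline, the recursive Render() pass looks like the following sketch. The helper that fans out the parts of the square not covered by enabled sub-squares (the white triangles in Figure 6) is assumed rather than spelled out; it builds fans from the center, corners, and enabled edge vertices just as in the simple 3x3 case.

// Hypothetical helper: draw the area of this square that is not covered by
// an enabled sub-square (the white triangles in Figure 6).
void render_uncovered_area(const QuadSquare* s);

void render_quadtree(const QuadSquare* s)
{
    // Enabled sub-squares draw their own area at higher detail
    // (the gray triangles in Figure 6).
    for (int q = 0; q < 4; q++) {
        if (s->child_enabled[q] && s->child[q]) {
            render_quadtree(s->child[q]);
        }
    }

    // Everything not handled by an enabled sub-square is drawn here.
    render_uncovered_area(s);
}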
Evaluating vertices and squares
In the above description, I
glossed over the part about deciding whether a vertex should be
enabled. There are a few
different ways to do this. All of them take into account what
I'll call the "vertex interpolation error", or vertex error for
short. This is the difference in height between the correct location of a vertex and the height, at that same spot, of the triangle edge that approximates the vertex when it is disabled (Figure 7). Vertices which have
a large error should be enabled in preference to vertices which
have a small error.
The other key variable that goes into the vertex enable
test is the distance of the vertex from the viewpoint. Intuitively, given two
vertices with the same error, we should enable the closer one
before we enable the more distant
one.
Figure 7. Vertex interpolation error. When a vertex is enabled or disabled, the mesh changes shape. The maximum change occurs at the enabled vertex's position, shown by the dashed line. The magnitude of the change is the difference between the true height of the vertex (black) and the height of the original edge below the vertex (white). The white point is just the average of the two gray points.
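Figure 7 translates directly into a one-line computation: the error at an optional vertex is the difference between its true height and the average of the two heights whose edge stands in for it when it is disabled. A minimal sketch, assuming those heights are at hand (in practice the error is precomputed offline and stored per vertex):

#include <cmath>

// Vertex interpolation error: |true height - height of the approximating
// edge at the vertex's position|.  The edge height is just the average of
// the edge's two endpoint heights (the gray points in Figure 7).
float vertex_error(float true_height, float endpoint_a_height, float endpoint_b_height)
{
    float interpolated = 0.5f * (endpoint_a_height + endpoint_b_height);
    return std::fabs(true_height - interpolated);
}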
There are other factors that can be included as well. Some algorithms, for instance, take into account the direction from the viewpoint to the vertex. The justification is based on the idea of screen-space geometric error; intuitively, the vertex errors are less visible when the view direction is more vertical. The earlier papers go through the math in detail.
However, I
don't think screen-space geometric error is a particularly good
metric, for two reasons. One, it
ignores texture perspective and depth buffering errors -- even if a vertex
does not move in screen space because the motion is directly towards or
away from the viewpoint, the vertex's view-space z value does affect
perspective-correction as well as depth-buffering. Two, the
viewpoint-straight-down case is both an easy case for terrain
LOD algorithms, and not a typical case.
In
my opinion, there's no point in optimizing for an atypical easy
case in an interactive system. The performance of the more
typical and difficult case, when the view axis is more
horizontal and much more terrain is visible, will determine the
minimum system frame-rate and hence the effectiveness of the
algorithm.
Instead of
screen-space geometric error, I advocate doing a similar test
which results in 3D view-space error proportional to view
distance. It's really
very similar to the screen-space-error test, but without the
issues I mention above. It
involves only three quantities: an approximation of the
viewpoint-vertex distance called the L1-norm, the vertex error,
and a detail threshold constant.
Here it is:

L1 = max(abs(vert.x - view.x), abs(vert.y - view.y), abs(vert.z - view.z));
enabled = error * Threshold > L1;
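In C++ the test is only a few lines. A sketch, assuming a simple Vec3 type and treating Threshold as the detail constant discussed below:

#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

// True if the vertex with the given precomputed interpolation error should
// be enabled for the current viewpoint.
bool vertex_enabled(const Vec3& vert, const Vec3& view, float error, float Threshold)
{
    float L1 = std::max({ std::fabs(vert.x - view.x),
                          std::fabs(vert.y - view.y),
                          std::fabs(vert.z - view.z) });
    return error * Threshold > L1;
}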
You probably
recognize the L1-norm, even if you didn't know it had a fancy
name. In practice, using the
L1-norm instead of the true viewpoint distance will result in
slightly more subdivision along the diagonals of the horizontal
terrain plane. I've never
been able to detect this effect by eye, so I don't worry much
about it. Some published algorithms use view-space z rather than the L1-norm, which is theoretically even more appropriate than true viewpoint distance. Nevertheless, the L1-norm works like a champ for me, and at least one of the papers mentioned above uses it too.
You can treat
the Threshold quantity as an adjust-for-best-results slider,
but it does have an intuitive geometric interpretation.
Roughly, it means: for a given view distance z, the worst
vertex error I'll tolerate is z / Threshold. You could do some view-angle
computations and relate Threshold to maximum pixel error, but
I've personally never gone past the adjust-for-best-results
stage.
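For a rough sense of scale, take Threshold = 100 (a value chosen purely for illustration): a vertex whose L1 distance from the viewpoint is 500 world units is enabled only if its interpolation error exceeds 500 / 100 = 5 units, while a vertex 50 units away needs an error of only 0.5 units.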
So
that covers the vertex enable test.
But if you were paying attention earlier, you may also
have noticed that I glossed over another point, perhaps more important:
during Update(), how do we know whether to subdivide a quadrant
or not? The answer is to
do what I call a "box test". The box test asks the question:
given an axis-aligned 3D box enclosing a portion of terrain
(i.e. a quadtree square), and the maximum vertex error
contained within that box, and no other information about
what's inside the box, is it possible that the vertex enable
test would return true? If
so, then we should subdivide the box. If not, then there's no reason to
subdivide.
The beauty of
it is, by doing the box test, we can potentially trim out
thousands of vertices from consideration in one fell swoop. It makes Update()
completely scalable: its cost is not related to the size of the
full dataset, only to the size of the actual data that's
included in the current LOD mesh. And as a side benefit, the
precomputed vertical box extent can be used during Render() for
frustum culling.
The box test is
conservative, in that a square's max-error could be for a vertex on the
opposite side of the box from the viewpoint, and thus the
vertex test itself would fail for that actual vertex, whereas the box test might succeed. But once we subdivide, we'll go ahead and do four more-accurate box tests on the sub-squares, and the penalty for conservatism is fairly small:
a few extra vertex and box tests, and a couple extra vertices
in the mesh.
Fortunately, given
the above simple vertex test, a suitable box test is easy to
formulate:
bc = coordinates of the box center
ex = extent of the box from its center (i.e. 1/2 the box dimensions)

L1 = max(abs(bc.x - view.x) - ex.x, abs(bc.y - view.y) - ex.y, abs(bc.z - view.z) - ex.z);
enabled = maxerror * Threshold > L1;
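In C++, reusing the Vec3 type and headers from the vertex-test sketch above, the box test might look like this; the box center and extent come from the square's horizontal placement plus its precomputed min/max height:

// Could any vertex inside this axis-aligned box, with error up to maxerror,
// pass the vertex enable test?  The max() expression is the smallest possible
// L1 distance from the viewpoint to the box (it goes negative when the
// viewpoint is inside the box, in which case the test passes for any
// positive error).
bool box_should_subdivide(const Vec3& box_center, const Vec3& box_extent,
                          const Vec3& view, float maxerror, float Threshold)
{
    float L1 = std::max({ std::fabs(box_center.x - view.x) - box_extent.x,
                          std::fabs(box_center.y - view.y) - box_extent.y,
                          std::fabs(box_center.z - view.z) - box_extent.z });
    return maxerror * Threshold > L1;
}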
_________________________________________________________
Details