Welcome to Muscle Beach
Most of the academic research on facial animation has not approached the problem from a viseme basis. This is due to a fundamental drawback of the viseme-frame-based approach. In a viseme-based system, every source frame of animation is completely specified. While I can specify the amount each frame contributes to the final model, I cannot create new source models dynamically. Say, for example, I want to allow the character to raise one eyebrow. With the frames I have described so far, this would not be possible. In order to accomplish this goal, I would need to create individual morph targets with each eyebrow raised individually. Since a viseme can incorporate a combination of many facial actions, isolating these actions can lead to an explosion in the number of source meshes. You may find yourself breaking these targets into isolated regions of the face.
Figure 3. The zygomaticus major muscle will put a smile on your face.
For this reason, researchers such as Frederic Parke and Keith
Waters began examining how the face actually works biologically. By
examining the muscle structure underneath the skin, a parametric
representation of the face became possible. In fact, psychologists Paul Ekman and Wallace Friesen developed a system to determine emotional state based on the measurement of individual muscle groups as “action units.” Their system, the Facial Action Coding System (FACS), describes 50 of these action units, which can combine to create thousands of facial expressions. By creating a facial model that is controlled via these action units, Waters was able to simulate the effect that changes in the action units have on the skin.
While I’m not sure if artists are ready to start
creating parametric models controlled by virtual muscles, there are
definitely some lessons to be learned here. With this system, it’s
possible to describe any facial expression using these 50 parameters. It
also completely avoids the additive morph problem I ran into with the
viseme system. Once a muscle is completely contracted, it cannot contract any further, limiting the expressions to ones that are at least physically possible.
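In code, that physical limit amounts to nothing more than a clamp on each action-unit parameter. Here is a minimal sketch in C; the function name is illustrative, not taken from any published system:

```c
/* A muscle parameter runs from 0.0 (fully relaxed) to 1.0 (fully
   contracted).  Requests outside that range are physically
   impossible, so they are clamped. */
float ClampContraction(float amount)
{
    if (amount < 0.0f) return 0.0f;
    if (amount > 1.0f) return 1.0f;
    return amount;
}
```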
Artist-Driven Muscle-Based Facial Animation
Animation tools are not really developed to a point
where artists can place virtual muscles and attach them to a model. This
would require a serious custom application that the artists may be
reluctant even to use. However, that doesn’t mean that these methods are
not available for game production. It just requires a different way of
thinking about modeling.
For instance, let me take a look at creating a simple
smile. Biologically, I smile by contracting the zygomaticus major muscle
on each side of my face. This muscle connects the outside of the zygomatic
bone to the corner of the mouth as shown in Figure 3. Contract one muscle
and half a smile is born.
Figure 4. Pucker up: the incisivus labii at work.
O.K. Mr. Science, what does that have to do with modeling? Well,
this muscle contracts in a linear fashion. Take a neutral mouth and deform
it as you would when the left zygomaticus major is contracted. This mesh
can be used to create a delta table for all vertices that change. Repeat
this process for all the muscles you wish to simulate and you have all the
data you need to start making faces. You will find that you probably don’t need all 50 muscle groups described in the FACS system; particularly if your model has a low polygon count, the full set is overkill. The point is to create the muscle frames necessary for all the visemes and emotions you will need, plus any additional flexibility you want. You will probably want to add some eye blinks, perhaps some eye shifts, and tongue movement to make the simulation more realistic.
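The delta-table idea above can be sketched in C. All of the structure names and layout here are illustrative assumptions, not code from any particular engine:

```c
#include <stddef.h>

/* Hypothetical minimal mesh types -- names are illustrative. */
typedef struct { float x, y, z; } Vec3;

typedef struct {
    int  index;   /* which vertex this delta moves                 */
    Vec3 delta;   /* (contracted - neutral) at full contraction    */
} MuscleDelta;

/* Build the delta table for one muscle: record only the vertices
   that actually moved between the neutral and contracted meshes. */
size_t BuildDeltaTable(const Vec3 *neutral, const Vec3 *contracted,
                       size_t vertCount, MuscleDelta *out)
{
    size_t n = 0;
    for (size_t i = 0; i < vertCount; ++i) {
        Vec3 d = { contracted[i].x - neutral[i].x,
                   contracted[i].y - neutral[i].y,
                   contracted[i].z - neutral[i].z };
        if (d.x != 0.0f || d.y != 0.0f || d.z != 0.0f) {
            out[n].index = (int)i;
            out[n].delta = d;
            ++n;
        }
    }
    return n;
}

/* Apply one muscle on top of the working mesh at a given
   contraction amount (0..1): a simple scaled delta add. */
void ApplyMuscle(Vec3 *mesh, const MuscleDelta *table, size_t count,
                 float contraction)
{
    for (size_t i = 0; i < count; ++i) {
        mesh[table[i].index].x += contraction * table[i].delta.x;
        mesh[table[i].index].y += contraction * table[i].delta.y;
        mesh[table[i].index].z += contraction * table[i].delta.z;
    }
}
```

Because only changed vertices are stored, a muscle that moves a small region of the face costs very little, no matter how dense the full mesh is.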
The FACS system is a scientifically based, general modeling system. It does not consider the individual features of a particular model. By allowing the modeler to deform the mesh for the muscles instead of using this algorithmic system, I am giving up general flexibility across a variety of meshes. However, I gain creative control by allowing for exaggeration as well as artistic judgment.
The downside is that it is now much harder to describe
to the artists what it is you need. You need to purchase some sort of
anatomy book (see my suggestions at the end of the column) and figure out
exactly what you want to achieve. Your artists are going to resist. You
had this nice list of 13 visemes and now you are creating more work. They
don’t know what an incisivus labii is and don’t want to. You can explain
that it is what makes Lara pucker up and they won’t care. You will have to
win the staff over by showing the creative possibilities for character
expression that are now available. They probably still won’t care, so get
the producer to force them to do it. I have created a sample muscle set in
Chart 1. This will give you some groups from which to pick.
Chart 1. The basic muscle groups involved in facial animation.
Now I need to relate these individual muscle meshes to
the viseme and emotional states. This is accomplished with “muscle macros”
that blend the percentages of the basic muscles to form complex
expressions. This flexibility permits speech and emotion in any language
without the need for special meshes.
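One way to sketch a muscle macro, assuming muscles are stored as a flat array of contraction values. The names, indices, and weights here are all illustrative assumptions:

```c
/* A "muscle macro" is just a named blend: a list of basic muscles
   and the contraction each contributes to the expression. */
#define MAX_MUSCLES 16

typedef struct {
    int   muscle;   /* index into the contraction array        */
    float amount;   /* contraction this macro contributes, 0..1 */
} MacroEntry;

/* Expand a macro into the contraction array, clamping to 1.0 so
   no muscle ever contracts past its physical limit. */
void ApplyMacro(float *contractions, const MacroEntry *macro, int count)
{
    for (int i = 0; i < count; ++i) {
        contractions[macro[i].muscle] += macro[i].amount;
        if (contractions[macro[i].muscle] > 1.0f)
            contractions[macro[i].muscle] = 1.0f;
    }
}
```

A “smile” macro, for instance, might list the left and right zygomaticus major at 0.8 each; a viseme macro for “oo” would list the lip muscles it needs. Layering macros stays well behaved because the clamp keeps every muscle physically plausible.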
I still need to handle the case where several muscles
interact with the same vertices. However, now there is a biological
foundation to what you are doing.
Certain muscles counteract the actions of other muscles. For example, the muscles needed to create the “oo” viseme (incisivus labii) will counter the effect of the jaw dropping (the digastric, for those of you playing along at home). One real-time animation package I have been working with, Geppetto from Quantumworks, calls these relationships Muscle Relations Channels. You can create a simple mathematical expression between the two muscles to enforce the relationship. You can see this effect in Figure 5.
Figure 5. W.C. Fields’s jaw is open and then blended with the “oo” viseme. Image courtesy of Virtual Celebrities Productions and Quantumworks.
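A minimal sketch of one such relationship, assuming a simple linear damping formula. The formula is my own assumption for illustration; the expressions in Geppetto’s Muscle Relations Channels may well differ:

```c
/* One muscle's effective contraction is damped by an opposing
   muscle.  For example, the incisivus labii "oo" pucker weakens
   as the digastric dropss the jaw -- wait, as the digastric drops
   the jaw. */
float RelatedContraction(float requested, float opposing, float influence)
{
    float effective = requested - influence * opposing;
    return (effective < 0.0f) ? 0.0f : effective;
}
```

The expression evaluates every frame, so as the jaw opens the pucker automatically relaxes without the animator keying both channels by hand.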
Now for the Animation