The Navigator
Once a position goal has been selected, the navigator must find a path to get there. The navigator first determines if the target can be acquired directly (i.e. can I walk straight to it?). My initial implementation of this test used a ray cast from the current location to the target location. If the ray was blocked, then the target was not directly accessible. The ray cast method has two problems:
My final solution for obstacle detection uses a step-wise walk-through. Each step (~500 millimeters) along the path to the target is tested for obstacles and drop-offs. This method produces reliable obstacle detection and is a good basis for navigation through a world composed of triangles.
Figure 2. Side view of linear ray cast vs. step-wise walk-through obstacle detection.
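A minimal sketch of the step-wise walk-through test follows. The Vec3 math type, the IsBlockedAt() and CastRayDown() world queries, and the drop-off limit are illustrative assumptions, not the actual SpecOps II code.

const float kStepSize   = 500.0f;   // millimeters per step along the path
const float kMaxDropOff = 750.0f;   // largest drop the AI may step down (assumed value)

bool PathIsWalkable(const Vec3& start, const Vec3& target)
{
    Vec3  delta    = target - start;
    float distance = delta.Length();
    if (distance < kStepSize)
        return true;                          // already within one step of the target

    Vec3 step  = delta * (kStepSize / distance);
    int  steps = (int)(distance / kStepSize);
    Vec3 probe = start;

    for (int i = 0; i < steps; ++i)
    {
        probe = probe + step;

        // Reject the path if this step runs into world geometry...
        if (IsBlockedAt(probe))
            return false;

        // ...or if the floor drops away farther than the AI can safely step down.
        float floorZ;
        if (!CastRayDown(probe, &floorZ) || probe.z - floorZ > kMaxDropOff)
            return false;
    }
    return true;
}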
If a position goal is not blocked by the world, the position goal
servo goes directly to the target. Otherwise a path finding algorithm is
used to find an alternate route to get to the target position. The path
finding algorithm that is used in SpecOps II is based on Navigation
Helper Nodes that are placed in the world by the game designers. These
nodes are placed at the junctions of doors, hallways, stairs and boundary
points of obstacles. There are typically a few hundred Navigation Helper
Nodes per level.
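For illustration, a Navigation Helper Node needs little more than a designer-placed position plus the bookkeeping used by the path finder described next; the layout below is hypothetical, not the shipped structure.

struct NavigationHelperNode
{
    Vec3  position;        // placed by the designer at a door, hallway, stair or obstacle boundary
    float lastVisitTime;   // time of arrival recorded when the node is acquired (0 = never visited)
    bool  reachable;       // latest result of the step-wise walk-through test
};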
The first step in the path finding process is to update the known
goals queue with all Navigation Helper Nodes that are not blocked by the
world. Because the step-wise walk-through obstacle test is fairly time-expensive, it is distributed over a number of frame intervals. Once the known goals queue has been updated with all valid Navigation Helper Nodes, the
next position goal can be selected. This selection is based on when the
Navigation Helper was last visited and how close it is to the target
position. When a Navigation Helper Node is acquired by the position goal
servo, it is updated in the acquired goals queue with the time of arrival.
Selecting only Navigation Helper Nodes that have not been visited, or that have the oldest time of arrival, ensures that the path finder will exhaustively scan all nodes until the target can be reached directly. When
two Navigation Helper Nodes have the same age status, the one closer to
the target position is selected.
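Building on the hypothetical node layout above, the selection rule can be sketched as follows; SelectNextNode() and Distance() are illustrative names rather than the actual SpecOps II code.

#include <vector>

NavigationHelperNode* SelectNextNode(std::vector<NavigationHelperNode*>& knownGoals,
                                     const Vec3& targetPosition)
{
    NavigationHelperNode* best = nullptr;
    for (NavigationHelperNode* node : knownGoals)
    {
        if (!node->reachable)
            continue;                                      // currently blocked by the world

        if (best == nullptr ||
            node->lastVisitTime < best->lastVisitTime ||   // never visited / visited longest ago wins
            (node->lastVisitTime == best->lastVisitTime && // same age status: the node closer to
             Distance(node->position, targetPosition) <    // the target position wins
             Distance(best->position, targetPosition)))
        {
            best = node;
        }
    }
    return best;
}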
Direction and Position Goal Servos
The direction and position goal servos take an X, Y, Z position as
their goal. This position is transformed into local coordinates by
translation and rotation. The direction servo drives the local X component
to 0 by applying the appropriate yaw velocity. The local Y component is
driven to 0 by applying the appropriate pitch velocity. When the magnitude
of the local X, Y coordinates goes below the target threshold, the goal is
"acquired". The position goal servo is nested within a direction servo.
When the direction servo is pointing at the goal to within the desired
tolerance, the AI approaches the target using the movement mode (e.g.
IO_FORWARD, IO_FORWARD_SLOW) set by the directive. Once the distance to
the position goal falls below the inner radius, the goal is "acquired",
actions at goal can be invoked and the acquired goals queue is updated. The
acquired goals queue is used as a form of feedback loop to tell the goal
selector when certain goals are completed. This allows the goal selector
to step through a sequence of actions (i.e. state machine).
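A minimal sketch of the nested servos follows; the Brain fields, gain constants, and helper functions are assumptions made for illustration, not the shipped implementation.

#include <cmath>

bool DirectionServo(Brain& brain, const Vec3& goal)
{
    Vec3 local = WorldToLocal(brain, goal);          // translate and rotate into local coordinates

    brain.yawVelocity   = -kYawGain   * local.x;     // drive the local X component to 0
    brain.pitchVelocity = -kPitchGain * local.y;     // drive the local Y component to 0

    // Acquired once the local X, Y magnitude falls below the target threshold.
    return sqrtf(local.x * local.x + local.y * local.y) < kAimTolerance;
}

void PositionGoalServo(Brain& brain, const Vec3& goal, int moveMode)
{
    if (!DirectionServo(brain, goal))                // point at the goal to within tolerance first
        return;

    if (Distance(brain.position, goal) > kInnerRadius)
        SetActionFlag(brain, moveMode);              // e.g. IO_FORWARD or IO_FORWARD_SLOW
    else
        AcquireGoal(brain, goal);                    // record the time of arrival in the acquired
                                                     // goals queue and invoke actions at goal
}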
Brain/Body Interface
Most actions are communicated to the body through a 128 bit
virtual keyboard called the action flags. These flags correspond directly
to keys the player can press to control his avatar. Each action has an
enumerated constant for its bit mask (e.g. IO_FIRE, IO_FORWARD,
IO_POSTURE_UP, IO_USE_INVENTORY, etc.). These action flags are then encoded
into animation states. Because the body is articulated, rotation is
controlled by separate scalar fields for body yaw velocity, chest yaw
angle, bicep pitch angle and head yaw/pitch angle. These allow for
partially orthogonal direction goals (i.e. the head and gun can track an
enemy while the body is pointing at a different position goal).
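The flag set might be modeled along these lines; std::bitset and the exact field names are assumptions, and only a handful of the 128 actions are listed.

#include <bitset>

enum ActionFlag
{
    IO_FIRE,
    IO_FORWARD,
    IO_FORWARD_SLOW,
    IO_POSTURE_UP,
    IO_USE_INVENTORY,
    // ... up to 128 actions in total
};

struct BodyInterface
{
    std::bitset<128> actionFlags;    // the 128 bit virtual keyboard: one bit per key

    // Articulated rotation is controlled by separate scalar fields:
    float bodyYawVelocity;
    float chestYawAngle;
    float bicepPitchAngle;
    float headYawAngle;
    float headPitchAngle;
};

// The brain "presses keys" by setting bits; the body decodes them into animation states.
inline void PressKey(BodyInterface& body, ActionFlag flag)
{
    body.actionFlags.set(flag);
}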
Commands
Because of their modular nature, directives can be given to an AI
by a commander at runtime. Each brain has a special slot for a commander
directive and a commander goal. This allows the commander to tell one of
his buddies to attack an enemy that is only visible to himself. Commands
can be given to a whole squad or to an individual. Note that it is very easy
to create directives for commander AIs to issue commands to their
teammates. The following is a list of the commander directives used in
SpecOps II:
TypeDirective CommanderDirectiveFormation  = { TEAMMATE_GOAL, GoBackToFormation, BaseWeight, NoDecay };
TypeDirective CommanderDirectiveHitTheDirt = { POSTURE_GOAL, HitTheDirt, BaseWeight+1, NoDecay };
TypeDirective CommanderDirectiveAttack     = { SEEN_ENEMY_GOAL, ApproachAttackEnemy, BaseWeight, NoDecay };
TypeDirective CommanderDirectiveDefend     = { FIXED_POSITION_GOAL, DefendPosition, BaseWeight, NoDecay };
TypeDirective CommanderDirectiveDemolish   = { DEMOLISH_POSITION_GOAL, DemolishPosition, BaseWeight, NoDecay };
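As a sketch of how such a directive might be issued at runtime, a command simply fills the teammate's dedicated commander slots; the Brain fields and helper names here are hypothetical.

#include <vector>

void IssueCommand(Brain& teammate, const TypeDirective& directive, const Goal& goal)
{
    teammate.commanderDirective = directive;    // e.g. CommanderDirectiveAttack
    teammate.commanderGoal      = goal;         // e.g. an enemy only the commander can see
}

void IssueSquadCommand(std::vector<Brain*>& squad, const TypeDirective& directive, const Goal& goal)
{
    for (Brain* member : squad)                 // the same command can go to the whole squad
        IssueCommand(*member, directive, goal);
}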
Future Improvements
Because this brain model is almost entirely data driven, it would
be fairly easy to have it learn from experience. For example, the priority
weights for each directive could be modified as a response to victories or
defeats. Alternatively, an instructor could punish (reduce directive
priority weight) or reward (increase directive priority weight) responses
to in-game events. The real problem with teaching an AI during game play
is the extremely short life span (10-60 seconds). However, each
personality could have a persistent communal brain, which could learn over
the course of many lives. In my opinion, the real value of dynamic
learning in game AI is not to make a stronger opponent, but to make a
continuously changing opponent. It is easy to make an unbeatable AI
opponent; the real goal is to create AIs that have distinctive
personalities, and these personalities should evolve over
time.
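As a sketch of the instructor idea, reinforcement could be as simple as nudging a directive's priority weight after an in-game event; the field name, step size, and clamp values below are assumptions.

const float kMinWeight = 0.0f;     // assumed bounds on a directive's priority weight
const float kMaxWeight = 10.0f;

void ReinforceDirective(TypeDirective& directive, bool rewarded)
{
    const float kStep = 1.0f;                               // assumed adjustment size

    directive.priorityWeight += rewarded ? kStep : -kStep;  // reward raises, punishment lowers

    // Clamp so a personality drifts over many lives rather than breaking outright.
    if (directive.priorityWeight < kMinWeight) directive.priorityWeight = kMinWeight;
    if (directive.priorityWeight > kMaxWeight) directive.priorityWeight = kMaxWeight;
}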