The Biological Model for Artificial Intelligence

Overview of Data Flow

The data flow begins with the Stimulus Detection Unit, which filters sound events and visible objects and updates the Known Goals queue. The Goal Selector then compares the Known Goals and Acquired Goals against the personality and commander directives and selects the Target Goals. The Navigator determines the best route to a position goal using a path-finding algorithm. Finally, the direction and position goal servos drive the body until the Target Goals are achieved, at which point the Acquired Goals queue is updated.
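The stages above run in a fixed order each update. A minimal sketch of one brain tick, assuming hypothetical struct and function names (none of these are the actual SpecOps II API):

```cpp
#include <string>
#include <vector>

// Illustrative sketch of one brain update tick; names are assumptions.
struct Brain {
    std::vector<std::string> trace;  // records pipeline order for this sketch
    void detectStimuli() { trace.push_back("detect"); }   // update Known Goals
    void selectGoals()   { trace.push_back("select"); }   // pick Target Goals
    void navigate()      { trace.push_back("navigate"); } // plan a route
    void driveServos()   { trace.push_back("servo"); }    // drive the body
};

// Stimulus Detection Unit -> Goal Selector -> Navigator -> goal servos.
void brainTick(Brain& b) {
    b.detectStimuli();
    b.selectGoals();
    b.navigate();
    b.driveServos();
}
```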

Data Structures


The primary data structures used by this brain model are BRAIN_GOAL and DIRECTIVE. AI personalities are represented by an array of DIRECTIVE structures plus a few scalar parameters. The following is a typical personality declaration from SpecOps II:

PERSONALITY_BEGIN( TeammateRifleman )
PERSONALITY_SET_FIRING_RANGE( 100000.0f )        // must be this close to fire gun (mm)
PERSONALITY_SET_FIRING_ANGLE_TOLERANCE( 500.0f ) // must point this accurately to fire (mm)
PERSONALITY_SET_RETREAT_DAMAGE_THRESHOLD( 75 )   // retreat if damage exceeds this amount (percent)
DIRECTIVES_BEGIN
    DIRECTIVE_ADD( TEAMMATE_FIRING_GOAL,    AvoidTeammateFire,        BaseWeight+1, AvoidTeammateFireDecay )
    DIRECTIVE_ADD( EXPLOSIVE_GOAL,          GetAwayFromExplosive,     BaseWeight+1, NoDecay )
    DIRECTIVE_ADD( HUMAN_TAKES_DAMAGE_GOAL, BuddyDamageVocalResponce, BaseWeight,   AcquiredGoalDecay )
    DIRECTIVE_ADD( DEMOLISH_POSITION_GOAL,  DemolishVocalResponce,    BaseWeight,   AcquiredGoalDecay )
    DIRECTIVE_ADD( SEEN_ENEMY_GOAL,         StationaryAttackEnemy,    BaseWeight-1, SeenEnemyDecayRate )
    DIRECTIVE_ADD( HEARD_ENEMY_GOAL,        FaceEnemy,                BaseWeight-2, HeardEnemyDecayRate )
    DIRECTIVE_ADD( UNCONDITIONAL_GOAL,      FollowCommander,          BaseWeight-3, NoDecay )
    DIRECTIVE_ADD( UNCONDITIONAL_GOAL,      GoToIdle,                 BaseWeight-4, NoDecay )
DIRECTIVES_END
PERSONALITY_END

The DIRECTIVE structure contains four fields, one for each argument of DIRECTIVE_ADD: the stimulus object type to match, the target goal function to call, the priority weight, and the priority decay rate.
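A hypothetical reconstruction of DIRECTIVE, inferred only from the four DIRECTIVE_ADD arguments shown in the personality declaration (the actual SpecOps II field names and types may differ):

```cpp
// Hypothetical reconstruction of DIRECTIVE; field names are assumptions.
typedef int GOAL_TYPE;               // stimulus object type, e.g. SEEN_ENEMY_GOAL
typedef void (*TARGET_GOAL_FUNC)();  // target goal function, e.g. FaceEnemy

struct DIRECTIVE {
    GOAL_TYPE        goalType;   // object type to match
    TARGET_GOAL_FUNC targetFunc; // called when this directive wins
    float            weight;     // priority weight, e.g. BaseWeight+1
    float            decayRate;  // linear priority decay per second of goal age
};
```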

The BRAIN_GOAL structure contains all necessary data for object recognition and action response.

The stimulus detection fields are:

The response fields are:

The Stimulus Detection Unit

Modeling stimulus detection in a physical way achieves symmetry and helps fulfill the player's expectations (i.e. if I can see him, he should be able to see me). It also prevents the AI from receiving hidden knowledge and gaining an unfair advantage. The stimulus detection unit models the signal strength of an event as a distance threshold. For example, the HeardGunFire event can be detected within a distance of 250 meters. This threshold distance can be attenuated by a number of factors. If a stimulus event is detected, it is encoded into a BRAIN_GOAL and added to the known goals queue. This implementation of stimulus detection considers only three sensory modalities: visual, auditory and tactile.
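The core threshold test can be sketched as follows; the event struct, field names, and the single combined attenuation factor are assumptions for illustration, not the original code:

```cpp
#include <cmath>

// Hypothetical stimulus event: detectable within baseRange, scaled by
// attenuation factors (each in [0, 1]) multiplied together by the caller.
struct StimulusEvent {
    float x, y, z;    // world position of the event (mm)
    float baseRange;  // detection threshold, e.g. 250000.0f mm for HeardGunFire
};

bool isDetected(const StimulusEvent& e, float rx, float ry, float rz,
                float attenuation) {  // product of all attenuation factors
    float dx = e.x - rx, dy = e.y - ry, dz = e.z - rz;
    float dist = std::sqrt(dx * dx + dy * dy + dz * dz);
    return dist <= e.baseRange * attenuation;  // inside the scaled threshold?
}
```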

Visual stimulus detection begins by considering all humans and objects within the observer's field of view (~180 degrees). A scaled distance threshold is then computed based on the size of the object, its illumination, its off-axis angle and its tangential velocity. If the object is within the scaled distance threshold, a ray cast is performed to determine whether the object is occluded by the world. If all of these tests pass, the object is encoded into a BRAIN_GOAL. For example, a generic human can be encoded into a SeenEnemyGoal, or a generic object into a SeenExplosiveGoal.
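The three visual tests can be sketched in order as below; the scale-factor parameters and the rayIsClear() stub are illustrative stand-ins, not the actual SpecOps II code:

```cpp
// Sketch of the visual tests: field of view, scaled distance threshold,
// then an occlusion ray cast. All names here are assumptions.
bool rayIsClear(float /*distance*/) { return true; }  // world ray-cast stub

bool canSee(float cosAngleToTarget,  // dot of view dir and dir to target
            float distance, float baseRange,
            float sizeScale, float illumScale,
            float offAxisScale, float velocityScale) {
    // 1. ~180 degree field of view: target must be in the front hemisphere.
    if (cosAngleToTarget < 0.0f) return false;
    // 2. Distance threshold scaled by object size, illumination,
    //    off-axis angle and tangential velocity.
    float threshold = baseRange * sizeScale * illumScale
                    * offAxisScale * velocityScale;
    if (distance > threshold) return false;
    // 3. Ray cast to confirm the object is not occluded by the world.
    return rayIsClear(distance);
}
```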

As sounds occur in the game, they are added to the sound event queue. These sound events contain information about the source object type, position and detection radius. Audio stimulus detection begins by scanning the sound event queue for objects within the distance threshold. This distance threshold can be further reduced by an extinction factor if the ray from the listener to the sound source is blocked by the world. If a sound event is within the scaled distance threshold, it is encoded into a BRAIN_GOAL and sent to the known goals queue.
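The audio test reduces to a threshold comparison with an occlusion penalty. A minimal sketch, assuming the extinction factor is a simple multiplier applied when the ray is blocked (names are illustrative):

```cpp
// Sketch of the audio test: the sound's detection radius is reduced by an
// extinction factor when the listener-to-source ray is blocked by the world.
bool canHear(float distance, float detectionRadius,
             bool rayBlocked, float extinctionFactor /* in (0,1) */) {
    float threshold = detectionRadius;
    if (rayBlocked)
        threshold *= extinctionFactor;  // occluded sounds carry less far
    return distance <= threshold;
}
```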

When the known goals queue is updated with a BRAIN_GOAL, a test is made to determine whether it was previously known. If it was, the matching known goal is updated with a new time of detection and location; otherwise, the oldest known goal is replaced by it. The PREVIOUSLY_KNOWN flag of this known goal is set appropriately for directives that respond to the rising edge of a detection event.
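This refresh-or-evict policy can be sketched as follows; the BrainGoal field names and the eviction-by-oldest-timestamp detail are assumptions consistent with the description:

```cpp
#include <vector>

// Sketch of the known-goals update: a previously known goal is refreshed in
// place, otherwise the oldest entry is evicted. Field names are assumed.
struct BrainGoal {
    int   objectId;         // identity of the detected object
    float timeOfDetection;  // most recent detection time
    bool  previouslyKnown;  // PREVIOUSLY_KNOWN flag for rising-edge directives
};

void updateKnownGoals(std::vector<BrainGoal>& queue, BrainGoal incoming) {
    for (BrainGoal& g : queue) {
        if (g.objectId == incoming.objectId) {
            g.timeOfDetection = incoming.timeOfDetection;  // refresh time/location
            g.previouslyKnown = true;  // not the rising edge of a detection
            return;
        }
    }
    // Not previously known: replace the oldest known goal.
    BrainGoal* oldest = &queue.front();
    for (BrainGoal& g : queue)
        if (g.timeOfDetection < oldest->timeOfDetection)
            oldest = &g;
    incoming.previouslyKnown = false;  // a genuinely new detection event
    *oldest = incoming;
}
```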

Injuries and collisions can generate tactile stimulus detection events, which are added directly to the acquired goals queue. Tactile stimulus events are used primarily to generate vocal responses.

The Goal Selector

The goal selector chooses target goals based on stimulus response directives. The grammar for the directives is constructed as a simple IF THEN statement:

IF I detect an object of type X (and priority weight Y is best) THEN call target goal function Z.

The process of goal selection starts by evaluating each active directive for a given personality. The known goals queue or the acquired goals queue is then searched for a match with the directive's object type. If a match is found and the directive's priority weight is the highest in the list, its target goal function is called. This function can perform additional logic to determine whether the BRAIN_GOAL should be chosen as a target. For example, if the AI is already within the target distance of a BRAIN_GOAL's position, an alternate goal (e.g. a direction goal) could be chosen instead. Once a target goal is selected, the position, direction and posture goals can be assigned. Unconditional directives do not require a matching object type; they provide default behavior in the absence of known goals.
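The matching-and-priority logic can be sketched as a small selection function; the Directive fields, the UNCONDITIONAL_GOAL sentinel, and returning an index instead of calling a function are all simplifying assumptions:

```cpp
#include <vector>

// Sketch of goal selection: among directives whose object type matches a
// known goal (unconditional directives always match), the highest priority
// weight wins. Returns the winning directive's index, or -1 if none.
const int UNCONDITIONAL_GOAL = 0;

struct Directive {
    int   goalType;  // stimulus object type to match
    float weight;    // priority weight (decay already applied)
};

int selectGoal(const std::vector<Directive>& directives,
               const std::vector<int>& knownGoalTypes) {
    int best = -1;
    for (int i = 0; i < (int)directives.size(); ++i) {
        bool matched = (directives[i].goalType == UNCONDITIONAL_GOAL);
        for (int t : knownGoalTypes)
            if (t == directives[i].goalType) matched = true;
        if (matched && (best < 0 || directives[i].weight > directives[best].weight))
            best = i;  // winner's target goal function would be called here
    }
    return best;
}
```

Note how an unconditional directive wins only when no higher-weight directive finds a match, giving the default-behavior fallback described above.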

The priority weight for a directive can decay at a linear rate based on the age of a known goal (the current time minus the time of detection). For example, if an AI last saw an enemy 20 seconds ago and the directive has a decay rate of 0.1 units per second, the priority decay is 0.1 × 20 = 2 units. This decay allows AIs to lose interest in known goals that have not been observed for a while.
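The decay is a simple linear ramp. A minimal helper (the name and signature are assumptions, not the original code):

```cpp
// Linear priority decay: effective weight falls with the age of the goal.
float effectiveWeight(float baseWeight, float decayRate,
                      float now, float timeOfDetection) {
    float age = now - timeOfDetection;  // seconds since last detection
    return baseWeight - decayRate * age;
}
```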

The goal selector can assign the three target goals (direction, position and posture) orthogonally or in a coupled fashion. In addition to these target goals, the goal selector can also select an inventory item and directly activate audio responses.

When a direction goal is assigned, the action at target field can be set. For example, the stationary attack directive sets the action at target field to IO_FIRE; when the direction servo gets within the pointing tolerance threshold, the action is taken (i.e. the gun is fired).

When a position goal is selected, an inner and an outer radius are set by the directive: the outer radius specifies the distance threshold at which the goal selector acquires the goal, and the inner radius is the distance threshold that the position goal servo uses for completion. The two thresholds differ by a small buffer distance (~250 millimeters) to prevent oscillation at the boundary. When a position goal is acquired, the action at target can be evoked. For example, the Demolish Target directive sets the action at target field to IO_USE_INVENTORY and selects the satchel explosive from the inventory.

Some directives can set the posture goal: the StationarySniperAttack directive sets the posture goal to prone, as does the HitTheDirt directive.
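The inner/outer radius pair is a classic hysteresis band. One way to realize the described behavior (the function name and exact state handling are assumptions):

```cpp
// Hysteresis on a position goal: the servo must reach the inner radius to
// complete, and completion then holds until the distance exceeds the outer
// radius; the ~250 mm buffer between the two thresholds prevents the servo
// from oscillating at the boundary. Sketch only; names are assumed.
bool positionGoalComplete(bool wasComplete, float distance,
                          float innerRadius, float outerRadius) {
    if (wasComplete)
        return distance <= outerRadius;  // hold completion inside outer ring
    return distance <= innerRadius;      // must first reach the inner ring
}
```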

The Navigator and Goal Servos