Contents

Making Them Think

Predefined Behavior

Goal-Directed Behavior

The Middle Ground

A Simple Tutorial: Maze Solving

Discussion

Notes and References

Goal-Directed Behavior

The first step in describing goal-directed behavior is to come up with a way to define a cognitive character's goals. The situation calculus provides a simple and intuitive theoretical framework to explain how this can be done. In particular, a character's goals can be expressed in terms of the desired values of various relevant fluents. A goal can therefore be expressed as a defined fluent, i.e., a fluent defined in terms of other fluents. For example, suppose we have two characters, call them Dognap and Jack, such that Dognap is armed with a gun and wants to kill Jack. Then, we can state that Dognap's goal is to kill Jack:

goal(s) ≡ dead(Jack, s)

Clearly, Dognap will have achieved this goal in any situation s' for which goal(s') is true. Recall that any situation is either the initial situation s0, or of the form

do(a, s),

where a is an action and s is the situation in which it is performed.
Therefore, if goal(s0) is not true, then Dognap must search for a sequence of n actions, a0, ..., an-1, such that

goal(do(an-1, ..., do(a1, do(a0, s0)) ...))

is true.
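To make the search problem concrete, here is a minimal Python sketch of this formulation. The representation of situations as nested tuples, and the names S0, do, and execute, are our own illustrative choices, not anything prescribed by the situation calculus itself.

    S0 = ("s0",)                  # the initial situation s0

    def do(a, s):
        # do(a, s): the situation that results from performing action a in s
        return ("do", a, s)

    def execute(plan, s=S0):
        # Applying a0, ..., an-1 to s0 builds the nested term
        # do(an-1, ..., do(a1, do(a0, s0)) ...)
        for a in plan:
            s = do(a, s)
        return s

Finding a plan is then a matter of searching for an action sequence for which goal(execute(plan)) is true, which is what the rest of this section is about.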

Situation Tree

To explain how characters can automatically search for sequences of actions that meet their goals, we will introduce the idea of a situation tree. In particular, we can think of the actions and their effects as describing a tree of possible future situations. The root of the tree is the initial situation s0, each branch of the tree is an action, and each node is a situation. Figure 4 shows an example of a tree with n actions, a0, a1, ..., an-1.

Figure 4. An abstract situation tree

The value of the fluents at each node (situation) is determined by the effect axioms. Figure 5 shows a simple concrete example using the Dognap and Jack example, and the corresponding effect axioms, that we described earlier.
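To show how this evaluation can work, here is a hedged Python sketch in which fluent values are computed by walking back down a situation term, applying effect axioms along the way. Since the article's actual axioms were given earlier, the aim and shoot actions and the aimed and dead fluents below are hypothetical stand-ins, not a transcription of them.

    S0 = ("s0",)

    def do(a, s):
        return ("do", a, s)

    def holds(fluent, s):
        # Recurse down the situation term, applying effect axioms as we go.
        if s == S0:
            return False                      # initially: not aimed, Jack alive
        _, a, prev = s
        if fluent == "aimed" and a == "aim":
            return True                       # effect axiom: aiming makes aimed true
        if fluent == "dead" and a == "shoot" and holds("aimed", prev):
            return True                       # effect axiom: shooting while aimed kills
        return holds(fluent, prev)            # frame axiom: fluents otherwise persist

    def goal(s):
        # Dognap's goal as defined above: goal(s) holds iff dead(Jack, s) holds
        return holds("dead", s)

With these toy axioms, goal(do("shoot", do("aim", S0))) evaluates to True.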

Figure 5. A concrete example of a situation tree

A goal situation is a situation in which the goal fluent is true. For example, in Figure 5 we can see that if the goal is still to kill Jack, then the situation

is a goal situation. We can see that in this example there are many goal situations, for example

is another goal situation. In general, however, there is no guarantee that a goal situation exists at all. If a goal situation does exist, then any action sequence that leads to one of the goal situations is called a plan.

Figure 6. An abstract situation tree with just three actions.

Figure 6 shows a simple abstract situation tree with just three actions and three goal situations. We will use this figure to illustrate how a character can search the tree to automatically find a plan (a path) that leads from the initial situation (the root) to a goal situation. Depending on how we choose to search the tree, we will find different plans (paths); the following paragraphs walk through some common search strategies.

A breadth-first search exhaustively searches each layer of the tree before proceeding to the next. That is, it considers all plans of length 0, then all plans of length 1, and so on. A breadth-first search is thus guaranteed to find a plan if one exists; moreover, it will find the shortest such plan. Unfortunately, a breadth-first search requires an exponential amount of memory, as the character has to remember every situation on the current frontier of the search.
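The following sketch shows breadth-first planning over the situation tree; goal_test and the action set are parameters, so the toy goal() above can be plugged straight in. The do() term constructor is repeated so the sketch runs on its own.

    from collections import deque

    def do(a, s):
        return ("do", a, s)

    def bfs_plan(goal_test, actions, s0):
        # Consider all plans of length 0, then length 1, and so on.
        frontier = deque([(s0, [])])          # (situation, plan that reaches it)
        while frontier:
            s, plan = frontier.popleft()
            if goal_test(s):
                return plan                   # the first plan found is a shortest one
            for a in actions:
                # The frontier grows by a factor of len(actions) per layer,
                # which is where the exponential memory cost comes from.
                frontier.append((do(a, s), plan + [a]))
        return None                           # never reached if the tree is infinite

With the toy axioms sketched earlier, bfs_plan(goal, ["aim", "shoot"], ("s0",)) returns ["aim", "shoot"].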

A depth-first search doesn't require an exponential amount of memory, as there is no need to explicitly store large portions of the tree: it only needs to remember one branch of the tree at a time. It keeps looking down this one branch until it reaches a goal or a leaf node. If it reaches a leaf node, it backs up to the previous node and searches another branch. If there are no more branches, it backs up one step further, proceeding recursively until it has searched the entire tree. Unfortunately, even if there is a goal in the tree, a depth-first search is not guaranteed to find it. In particular, the tree may well contain infinite branches, in which the character just keeps performing some sequence of actions over and over without ever reaching a goal. A depth-first search can get sidetracked down one of these fruitless infinite branches, and because it never reaches a goal or a leaf node, the algorithm never terminates. Another drawback is that even when a depth-first search does find a plan, the plan is not guaranteed to be the shortest possible one. Depending on the application, this may or may not matter.
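Here is the same search expressed depth-first, again as a sketch under the same assumptions. Only the current branch (the plan argument) is kept in memory.

    def do(a, s):
        return ("do", a, s)

    def dfs_plan(goal_test, actions, s, plan=()):
        # Follow one branch all the way down, backing up at leaf nodes.
        if goal_test(s):
            return list(plan)                 # not necessarily a shortest plan
        for a in actions:
            found = dfs_plan(goal_test, actions, do(a, s), plan + (a,))
            if found is not None:
                return found
        return None                           # reached only if every branch ends in a leaf

On the infinite Dognap tree this recursion would keep choosing the first action forever, which is exactly the failure mode described above; the bounded variant in the next sketch repairs it.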

A bounded depth-first search addresses some of these limitations by putting a bound on how deep into the tree the search may proceed. The search now backs up if it finds a leaf node or if the maximum search depth is exceeded. It is even possible to search iteratively with a deeper and deeper bound. To avoid redoing the work of the previous search, the results of the last search can be stored so that we don't have to begin from scratch each time the depth bound is increased. Unfortunately, we are then back to remembering large portions of the tree and, just like a breadth-first search, this requires an exponential amount of memory.
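A sketch of the bounded variant, together with the standard iterative-deepening loop. Note that, unlike the caching scheme just mentioned, this version simply redoes the shallower work on each pass, trading extra time for memory proportional to one branch.

    def do(a, s):
        return ("do", a, s)

    def bounded_dfs(goal_test, actions, s, bound, plan=()):
        # Back up at a goal, at a leaf, or when the depth bound is exhausted.
        if goal_test(s):
            return list(plan)
        if bound == 0:
            return None
        for a in actions:
            found = bounded_dfs(goal_test, actions, do(a, s), bound - 1, plan + (a,))
            if found is not None:
                return found
        return None

    def iterative_deepening(goal_test, actions, s0, max_depth=20):
        # Search with a deeper and deeper bound until a plan turns up.
        for bound in range(max_depth + 1):
            plan = bounded_dfs(goal_test, actions, s0, bound)
            if plan is not None:
                return plan
        return None                           # no plan within max_depth actions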

In the worst case, the situation tree contains no goal situations at all. Any exhaustive search algorithm will then take an exponential amount of time before it can report that no plan achieves the goal. This is one of the major limitations of planning, and we will look at it in more detail in the next section. In the meantime, we note that the design of better search algorithms is an important topic of AI research; the interested reader should consult the further reading section. One of the most interesting recent developments is the use of stochastic search algorithms.

It should also now be apparent how choosing actions nondeterministically entails searching for appropriate action sequences in a search space that grows exponentially with the length of the plan; this is computational complexity in the usual computer science sense. Another interesting point is that CPU processing power is also growing exponentially. Therefore, according to Moore's law, our computer characters can be expected to search roughly one layer deeper in the situation tree every eighteen months or so (exactly one layer when there are two possible actions, and proportionally less for larger branching factors).
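As a back-of-the-envelope check on that claim (our arithmetic, not the article's): searching to depth d in a tree with b possible actions visits on the order of b^d situations, so a doubling of compute buys log_b 2 extra layers.

    import math

    def extra_layers(branching_factor, speedup=2.0):
        # Solve b**(d + x) = speedup * b**d for x, giving x = log_b(speedup).
        return math.log(speedup, branching_factor)

    print(extra_layers(2))   # 1.0: one extra layer per doubling with two actions
    print(extra_layers(3))   # ~0.63 of a layer per doubling with three actions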

________________________________________________________

The Middle Ground