By Donald Kehoe
Over the last few decades, the gaming industry has made great strides, beginning with simple games like Pong* and Pac-Man*, which offered players a short escape from reality, and growing into involved games like World of Warcraft* and Call of Duty 4*, which are serious hobbies for those who play them. Today’s gamers, who according to the Entertainment Software Association (ESA) have an average of 13 years of gaming under their belts, have grown accustomed to seeing each new game become increasingly complex, engaging, and intelligent. For developers, the challenge is to push the envelope and create games that are ever more compelling. Computer-controlled artificial intelligence (AI) has evolved in many forms to meet that test. However, creating an adaptive foil that can match the player’s moves and encourage the player’s growth is no simple task. This article begins a four-part series that explores important AI concepts and how to optimize them to run on today’s cutting-edge multi-core processors:
Part 1: Design and Implementation
At its most basic level, “artificial intelligence” consists of emulating the behavior of other players or of the entities (that is, all the elements of the game that can act or be acted upon, from players to missiles to health pickups) they represent. The key concept is that the behavior is simulated: AI for games is more “artificial” and less “intelligence.” The system can be as simple as a rules-based system or as complex as one designed to challenge the player as the commander of an opposing army.
Traditional AI research seeks to create a real intelligence, albeit through artificial means. Projects such as the Massachusetts Institute of Technology’s (MIT) Kismet* are trying to create an AI that can learn, interact socially, and exhibit emotions. As of this writing, MIT is working on an AI with the faculties of a young child, with promising results.
For the purposes of today’s games, true AI is above and beyond the requirements of entertainment software. Game AI does not need to be sentient or self-aware (in fact, it is best if it isn’t), and it does not have to learn about anything beyond the scope of gameplay. The real goal of AI in games is to simulate intelligent behavior, providing the player with a believable challenge, one that the player can then overcome.
AI can play multiple roles in gaming. It can be a general set of rules used to govern the behavior of entities in the game world. You could also consider the pre-scripted events that entities follow a type of AI. For example, in the game F.E.A.R.*, the creepy little girl who appears to frighten players and foreshadow future events is a pre-scripted event. What comes to mind for most people when they think of AI and games is the computer-controlled players in multiplayer games. All of these are different roles that AI can fulfill.
Figure 1. F.E.A.R.'s (Vivendi Universal*) use of scripted events is a type of AI
Depending on the role the AI is meant to fill, its system requirements can be minimal; the more complex the system, the greater its requirements. The most basic need is nothing more than the processing time to run the AI. More complex systems require some means of perceiving the environment, a record of player actions, and some means of evaluating the success of previous decisions.
The core concept behind AI is decision making. To execute its choices, the intelligent system needs to be able to affect the entities it controls. This execution can be organized as either an “AI push” or an “entity pull” strategy.
AI push systems tend to isolate the AI as a separate element of the game architecture. Such a strategy often takes the form of a separate thread or threads in which the AI spends its time calculating the best choices given the game options. When the AI makes a decision, that decision is broadcast to the entities involved. This approach works best in real-time strategy games, where the AI is concerned with the big picture.
Entity pull systems work best for games with simple entities. In these games, the entities call on the AI system when the entity “thinks,” or updates itself. This approach works very well in systems with large numbers of entities that do not need to think very often, such as shooters.
For the AI to make meaningful decisions, it needs some way of perceiving its environment. In simpler systems, this perception can be a simple check on the position of the player entity. As systems become more demanding, entities need to identify key features of the game world, such as viable paths to walk through, cover-providing terrain, and areas of conflict.
The challenge for designers and developers is to come up with a way to identify key features important to the intelligence system. For example, cover can be predetermined by the level designers or can be pre-computed when a map is loaded or compiled. Some elements must be evaluated on the fly, such as conflict maps and imminent threats.
The most basic form an intelligent system can take is a rules-based system, and it stretches the term “artificial intelligence”: a set of preset behaviors determines how game entities act. With a variety of actions, the overall result can be behavior that is far from obvious, even though very little actual intelligence is involved.
A good example of a rules-based system is a blackjack dealer (whether video blackjack or a live table). The dealer follows one simple rule: always hit when the cards total 16 or less. To the average player, the dealer appears to be playing competitively, and the player imagines a more competent adversary than the one he or she actually faces (unless the house advertises the rule the dealers play by).
The classic application of this approach is Pac-Man, in which four ghosts pursue the player. Each ghost follows a simple rule: one always turns left, another always turns right, one turns in a random direction, and the last turns toward the player. Individually, each ghost would be easy to figure out, and the player could handily avoid it. As a group, though, their movement appears to be a complex, coordinated search party hunting the player; in reality, only the last ghost even checks the player's position.
Figure 2. Visual representation of the rule set governing Pac-Man ghosts, where arrows represent the “decisions” that will be made.
As this example suggests, rules need not be hard-coded: they can be based on perceived states (as with the last ghost) or on editable parameters of the entity. Variables such as aggression, courage, sight range, and rate of thinking can all lead to more diverse entity behavior, even within a rules-based system. Rules-based systems are the simplest structure for an AI, yet more complex intelligent systems are built upon and governed by series of conditional rules: in tactical games, rules govern which tactics to use; in strategy games, rules govern build orders and how to react to conflicts. Rules-based systems are the foundation of AI.
A finite state machine (FSM) is a way of conceptualizing and implementing an entity that has distinct states throughout its life. A “state” can represent a physical condition that the entity is in, or an emotional state the entity can exhibit. Here, emotional states are nothing like a true AI’s emotions but predetermined behavior models that fit the context of the game.
Here are common examples of states for an AI system for a game with stealth elements:
Figure 3. Layout of the states in a typical FSM, where arrows represent the possible changes in state
There are at least two simple ways to implement an FSM within the entity system. One is to make each state a variable that can be checked (often through a massive switch statement). The other is to use function pointers (in C) or virtual functions (in C++ and other object-oriented languages).
The previous sections discussed methods for designing intelligence systems that fit into the predefined events of a game. For most games, this is adequate as long as the design is thorough and the goals of the intelligent entities are clearly understood. When a game calls for more variability and a better, more dynamic adversary, the AI may need to be able to grow and adapt on its own.
Adaptive AI is used commonly in fighting games and strategy games, in which the mechanics are deep and the options for gameplay are innumerable. To provide a constant challenge for the player without the player eventually figuring out the optimal strategy to defeat the computer, the AI needs to be able to learn and adapt.
The ability to effectively anticipate an opponent’s next move is crucial in an adaptive system. Different methods, such as past-pattern recognition (covered in a future article) or random guessing, can be used to determine the next action to take.
One basic method of adaptation is to keep track of past decisions and evaluate their success. The AI system keeps a record of the choices a player has made in the past, and those decisions must be evaluated in some manner (e.g., in fighting games, the advantage gained or lost, measured in health or time, can serve as the measure of success). Additional information about the situation can be gathered to give the decisions context, such as relative health, previous actions, and position in the level (people play differently when their backs are to the wall).
This history can be evaluated to determine the success of previous actions and whether a change in tactics is required. Until the list of past actions is built, general tactics or random actions can be used to guide the actions of the entity. This system can tie into rules-based systems and different states.
In a tactical game, past history can decide the best tactics to use against a player team, such as defensive, offensive, berserk, or some balanced means of play. In a strategy game, the optimal composition of units in an army can be discovered on a per-player basis. In games where the AI is controlling supportive characters for the player, the adaptive AI can better complement the player's natural style by learning the way the player acts.
The field of AI is a complex area of research. AI for games takes different forms depending on the needs of the game being designed, ranging from simple sets of rules for computer-controlled entities to more advanced adaptive systems. Applying AI concepts to games is a necessary way to increase the believability of the virtual characters in electronic entertainment, and it is not an impossible challenge. The next article in this series will discuss the challenges an AI faces in perceiving and navigating a complex environment, as well as how those challenges can be addressed.
Donald "DJ" Kehoe: As an instructor for New Jersey Institute of Technology's Information Technology Program, DJ developed the specialization in game development and teaches many of the program's courses on game architecture, programming, and level design, as well as courses that integrate 3D graphics with games. He is currently working on his PhD in Biomedical Engineering, where he applies games and virtual reality to enhance the effects of neuromuscular rehabilitation.