
Games & AI - Decision Making

Learning:

Perception is an interface that allows states to drive decision making and actions, such as animations and physics calculations; essentially, an action can be anything.

Artificial Intelligence – Decision Making

Perception (world interface): decision making requires some form of interface with the game world and environment. Data about the state of the world, and of the objects in it, are essentially the arguments that feed into the decision-making functionality. Perception itself is an abstraction for gathering internal and external knowledge (states or data) that are processed so that an action can take place. Bear in mind that an action can be anything, such as the start of another processing loop, or something as simple as an animation.

Internal knowledge: essentially, information about the internal state of a game artefact, such as a character; its goals, motivation, memory (previous states of goals and any internal data) and typically stats such as health and mana points.

External knowledge: the information a game artefact (such as an NPC) has about the state of the environment around it, typically gathered through its ability to sense or a similar method; for instance the position of friends and enemies, the use of switches or devices in the environment (i.e. whether they are on or off), and even the layout of a level.

Internal and external states are the inputs to decision making, and consequently to the actions that take place.
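A minimal sketch of how this might look in code; the class and field names below are purely illustrative rather than taken from any particular engine. Internal and external knowledge are packaged as plain data structures, and a perception step fills them from the raw game state for the decision maker to consume.

```python
from dataclasses import dataclass, field

@dataclass
class InternalKnowledge:
    """State held inside the character itself."""
    health: int = 100
    mana: int = 50
    goal: str = "patrol"
    memory: list = field(default_factory=list)  # previous goals / observations

@dataclass
class ExternalKnowledge:
    """What the character has sensed about the world around it."""
    enemy_positions: list = field(default_factory=list)
    friend_positions: list = field(default_factory=list)
    switch_states: dict = field(default_factory=dict)  # e.g. {"door_switch": False}

def perceive(npc, world):
    """Perception layer: read the raw game state and return the two knowledge
    bundles that the decision-making code will take as its arguments."""
    internal = InternalKnowledge(health=npc["health"], mana=npc["mana"], goal=npc["goal"])
    external = ExternalKnowledge(enemy_positions=world.get("enemies", []),
                                 switch_states=world.get("switches", {}))
    return internal, external
```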

Request Action: based on the processing of internal and external knowledge, decision-making algorithms will request an action that falls into one of the two groups below (a small sketch follows the list):

  1. Change the internal state: changing the character's own state in some way.

  • Change the emotional state a character has

  • Change the knowledge a character has

  • Change a goal the character has.

  2. Change the external state: changing the environment in some way.

  • Flipping a switch.

  • Move an object.

  • Navigate somewhere.

  • Fire a weapon.
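As a minimal sketch of how such a request might be represented, building on the illustrative knowledge classes sketched earlier (the ActionRequest class, its fields and the toy decide() rule are all assumptions, not from any particular engine), the decision maker emits a small record saying whether it wants an internal or an external change, and some other system carries it out.

```python
from dataclasses import dataclass

@dataclass
class ActionRequest:
    scope: str     # "internal" or "external"
    kind: str      # e.g. "change_goal", "flip_switch", "fire_weapon"
    payload: dict  # any data the action needs

def decide(internal, external):
    """Toy decision rule: low health requests an internal goal change,
    otherwise request an external action against the nearest enemy."""
    if internal.health < 25:
        return ActionRequest("internal", "change_goal", {"goal": "flee"})
    if external.enemy_positions:
        return ActionRequest("external", "fire_weapon", {"target": external.enemy_positions[0]})
    return ActionRequest("internal", "change_goal", {"goal": "patrol"})
```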

Core capabilities of AI:

Decision Making – a self-feeding loop: the process of perception, decision and action feeds itself; as the capacity for perception is active throughout the game, there will be a continuous sequence of internal and external change.

Internal and external changes drive a cycle of actions and change.
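Putting the sketches above together (and still assuming the illustrative perceive() and decide() helpers), the self-feeding loop is simply perceive, decide, act, repeated every tick; whatever act() changes is exactly what perceive() will read next time around.

```python
def act(request, npc, world):
    """Apply the requested change to the internal or external state."""
    if request.scope == "internal" and request.kind == "change_goal":
        npc["goal"] = request.payload["goal"]
    elif request.scope == "external" and request.kind == "fire_weapon":
        world.setdefault("shots", []).append(request.payload["target"])

def ai_tick(npc, world):
    """One pass of the self-feeding loop: perception feeds decision making,
    the decision requests an action, and the action changes the state that
    perception will see on the next tick."""
    internal, external = perceive(npc, world)
    request = decide(internal, external)
    act(request, npc, world)
```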

AI techniques:

There are a host of techniques available for decision making, such as Finite State Machines (FSM), Hierarchical Finite State Machines (HFSM) and binary decision trees, along with additional decision-making techniques such as Markov models and neural networks.

Binary decision tree: a decision tree in its simplest form is a tree made from nodes that can each have two other nodes connected to them – hence the term binary in the name of this technique. Starting at the root node, a range of Boolean states governs movement through the tree. For instance, it is not hard to imagine a binary tree being used to decide which areas a player can access in a game, or which choices are opened up to or closed off from the player because of their actions.
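A minimal binary decision tree can be written as nodes that hold a Boolean test, a 'yes' child and a 'no' child; the example tree below, gating which area a player may enter, is made up purely for illustration.

```python
class DecisionNode:
    def __init__(self, test=None, yes=None, no=None, result=None):
        self.test = test      # function(state) -> bool; None on leaf nodes
        self.yes = yes
        self.no = no
        self.result = result  # value returned at a leaf

    def evaluate(self, state):
        if self.test is None:
            return self.result
        branch = self.yes if self.test(state) else self.no
        return branch.evaluate(state)

# Example tree: has the player found the key, and defeated the boss?
tree = DecisionNode(
    test=lambda s: s["has_key"],
    yes=DecisionNode(test=lambda s: s["boss_defeated"],
                     yes=DecisionNode(result="open throne room"),
                     no=DecisionNode(result="open armoury")),
    no=DecisionNode(result="area locked"))

print(tree.evaluate({"has_key": True, "boss_defeated": False}))  # open armoury
```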

FSM nodes, where conditions allow switching between states, giving a display of behaviour and intelligence.

FSM: essentially a set of nodes with conditions that must be true to move between them, typically used for NPC decision making. An FSM usually begins in an idle state, which is checked for conditions to move to other states, with the edges that link the nodes representing those conditions. Unlike a basic binary tree, however, an FSM can transition back to any state easily, making it a favoured choice for NPC behaviour and giving the illusion of higher intelligence.
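A small sketch of an FSM in this spirit, starting in an idle state and checking its transition conditions every update; the states and conditions are invented for illustration.

```python
class StateMachine:
    def __init__(self):
        self.state = "idle"
        # Edges: each state lists (condition, next_state) pairs, checked in order.
        self.transitions = {
            "idle":   [(lambda s: s["enemy_visible"], "chase")],
            "chase":  [(lambda s: s["in_range"], "attack"),
                       (lambda s: not s["enemy_visible"], "idle")],
            "attack": [(lambda s: not s["in_range"], "chase")],
        }

    def update(self, senses):
        for condition, next_state in self.transitions.get(self.state, []):
            if condition(senses):
                self.state = next_state
                break
        return self.state

fsm = StateMachine()
print(fsm.update({"enemy_visible": True, "in_range": False}))  # chase
```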

HFSM: when an FSM has several states which are different forms of the same state, the graph becomes messy, much like a mesh network topology with a huge number of nodes. The resolution to this is to encapsulate all nodes of the same type within a single node. For example, if an NPC has several attacks, they will all sit inside an attack node, and the attack selected is based on the state that is active on reaching that attack node.
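One way to sketch the hierarchical idea is to let a state contain its own nested choice of sub-states; the hypothetical attack super-state below wraps the individual attacks so the top-level graph stays tidy.

```python
class AttackState:
    """Super-state: encapsulates all attack sub-states behind one node,
    so the top-level FSM only ever transitions to 'attack'."""
    def __init__(self):
        self.sub_state = "melee"

    def update(self, senses):
        # Pick the concrete attack based on the current situation.
        self.sub_state = "ranged" if senses["distance"] > 5 else "melee"
        return f"attack/{self.sub_state}"
```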

Nodes of a neural network, with a hidden intermediate layer (representing deep learning), ending with visible output nodes.

Neural network: (Shiffman, 2012) would agree that a neural network mimics the neurons of the brain through the use of node data structures; data can stream up and down through the nodes, each of which processes its inputs by multiplying them by weights and adding a bias constant.
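A single node of such a network can be sketched as a weighted sum of its inputs plus a bias, squashed through an activation function; the weights and inputs below are arbitrary.

```python
import math

def neuron(inputs, weights, bias):
    """One node: multiply each input by its weight, add the bias,
    then squash the total through a sigmoid activation."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

print(neuron([0.5, 0.8], [0.4, -0.6], bias=0.1))  # output between 0 and 1
```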

Essentially, a neural network can take player data as input and learn from it, producing enemies that can become smarter, rather than just following simple FSM actions, and find new ways to challenge the player.

Actions: in a game an action can essentially be anything, as there is no true definition of what an action is or is not in a game. Typically, however, we can think of actions as animations and changes of state. Actions can be combined, such as shooting a gun while moving forwards, or picking up health points while firing a gun. They do, however, tend to be managed by an Action Manager.

Action Manager: essentially, actions are managed using levels of priority and the capacity to interrupt. For example, the action of reloading a gun might take priority over an aiming action; this is carried out by giving actions priority levels and placing them into a queue for processing and carrying out.
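A hedged sketch of an action manager built on a priority queue (the ActionManager class and its methods are illustrative): higher-priority actions such as reloading are processed before lower-priority ones such as aiming, and an interrupting action jumps straight to the front.

```python
import heapq
import itertools

class ActionManager:
    def __init__(self):
        self._queue = []
        self._counter = itertools.count()  # tie-breaker keeps insertion order

    def request(self, name, priority, interrupt=False):
        # Lower number = higher priority; an interrupt outranks everything else.
        effective = -1 if interrupt else priority
        heapq.heappush(self._queue, (effective, next(self._counter), name))

    def process(self):
        """Carry out the queued actions in priority order."""
        while self._queue:
            _, _, name = heapq.heappop(self._queue)
            print(f"executing: {name}")

manager = ActionManager()
manager.request("aim", priority=2)
manager.request("reload", priority=1)  # processed before aiming
manager.process()
```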

Open Problems in AI:

Presentation: click here!

Reflection:

There are a host of AI techniques that can be applied to decision making. However, not all AI techniques are suited to specific tasks; as such, one should consider what the aim or goal in a game is, and how 'we' expect the player to perceive the gameplay. For example, a binary tree has strength in that it can be used to predict all the moves in a board game, offer different story lines through a game, and generate states for reactions should a specific path be selected. However, it does have weaknesses when compared to the Finite State Machine (FSM) and Hierarchical Finite State Machine (HFSM) models: FSM and HFSM models are intuitive, easily understood, make generating NPC behaviours relatively easy, and keep processing shorter than tree structures do. Additionally, HFSM overcomes FSM's weakness of a messy meshed topology when large numbers of nodes exist, by grouping similar nodes into hierarchies, and is specifically stronger at organising AI with behaviours it can switch between. FSM and HFSM also share the strength tree structures have, in that they are not explicitly limited to behaviour.

Neural Networks (NN) in gaming, by contrast, are a relatively new phenomenon and, unlike the other models discussed, allow a game to learn by mimicking the activity of neurons in the brain. As such, NPC behaviour can be based on human players, generating more human-like NPCs and increasing immersion. My perspective is that NN will become a one-size-fits-all AI, and that the complexity (considering how to create weight, bias and constant values) will be worth the time. Given the often-cited claim that the human brain uses only a small fraction of its capacity for any one task, I hypothesise that the current issue of integrating individual neural networks will be overcome someday; a NN (server) in combination with HFSM could create a game where a network connection improves intelligence, complementing the internet. If internet connectivity is lost, NPCs become less intelligent, but the game remains playable.

References:

Shiffman, D., 2012. The Nature of Code. California: Creative Commons.

Khan, A. et al., 2017. State-of-the-Art and Open Challenges in RTS Game-AI and Starcraft. International Journal of Advanced Computer Science and Applications, 8(12), pp. 16-24.

Woodcock, S., 2000. Game AI: The State of the Industry. Gamasutra. [Online] Available at: https://www.gamasutra.com/view/feature/3570/game_ai_the_state_of_the_industry.php [Accessed 9 February 2018].
