
Sistance's EXPLODE!
AI development



While developing Sistance's EXPLODE! I read about different artificial intelligence strategies that are often used in games of this kind.
In the following chapters I introduce a few of them and discuss their advantages and disadvantages, especially with regard to Sistance's EXPLODE!.

requirements for the AI

The main goal of an AI is to choose, from the list of possible moves, exactly the move that promises the greatest advantage or the least disadvantage.
Every move that lets the AI win the game is worth far more than any other move. If no such move exists, the AI must rate the resulting game states and choose the move that leads to the best following state.

weighting algorithm

To judge a state as good or bad, the AI uses so-called weighting algorithms. A weighting algorithm returns a simple value - usually a number - that describes how good or bad a situation is (for a specific player).
Depending on the definition, a high value means a good situation and a low value (in some cases even a negative value) means a bad one.
A game state that represents a won game (or a game that will soon be won automatically) gets the best possible value. A game state that represents a lost game (or a game that will soon be lost automatically) gets the worst possible value.

The algorithm can look at simple or complex circumstances to rate a situation. The minimax algorithm even uses the rating of future game states.

A weighting function for Sistance's EXPLODE! could factor in some of the following information:
  • How many fields do I own? How many fields does my opponent own?
  • How many opponent fields do I threaten? How many of my fields does the opponent threaten?
  • Which total value do my fields have? Which total value do the fields of my opponent have?
  • How many moves can I do before I must attack my opponent? How many moves can my opponent do before she must attack me?
  • Can I expand without destroying several high valued fields? Can my opponent expand without destroying several high valued fields?
We soon realize that, as human players, we use these considerations early on ourselves. The actual implementation of my game AI uses even more complex considerations to find its best move, but these simple rules alone would already be good enough to implement a beginner-level AI opponent.
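The field-count and field-value questions from the list above can be sketched as a simple weighting function. The board representation used here (a dict mapping coordinates to an (owner, value) pair) is an assumption for illustration only, not the game's real data structure:

```python
# A minimal weighting-function sketch for a Sistance's EXPLODE!-like board.
# Assumption (not from the real game code): the board is a dict mapping
# (row, col) -> (owner, value), where owner is "me", "opponent", or None.

def weight(board, me="me", opponent="opponent"):
    """Rate a game state from `me`'s point of view: higher is better."""
    my_fields = opp_fields = 0
    my_value = opp_value = 0
    for (owner, value) in board.values():
        if owner == me:
            my_fields += 1
            my_value += value
        elif owner == opponent:
            opp_fields += 1
            opp_value += value
    # A won game (the opponent holds no field) gets the best possible
    # value, a lost game the worst possible one.
    if opp_fields == 0 and my_fields > 0:
        return float("inf")
    if my_fields == 0 and opp_fields > 0:
        return float("-inf")
    # Otherwise combine field count and total field value; the factor
    # 0.5 is an arbitrary choice for this sketch.
    return (my_fields - opp_fields) + 0.5 * (my_value - opp_value)
```

A real weighting function would also cover the threat and expansion questions from the list, but the shape stays the same: read the board, return one number.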

Forecasting - depth-first search, breadth-first search, minimax algorithm

"Which move will the opponent make if I do this one? How will the opponent react if I cause another situation?" - a forecasting AI tries to plan (or calculate) every possible future move to find the best situation it can reach in any case - and the move or moves needed to reach it.

Two common methods are breadth-first search and depth-first search, both combined with a minimax algorithm. The minimax algorithm assumes that every player will choose the move that leads to the best situation for themselves.
The AI therefore assumes that its opponent will choose the move that leads to the AI's worst situation. The goal of the algorithm is to find the move that guarantees the best situation the AI can enforce for sure.

If it finds a move that guarantees a win, it rates this move as high as possible. If a move leads to losing the game, the algorithm rates it as low as possible.
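The minimax idea above can be sketched as a depth-limited recursion. The hooks `moves`, `apply_move`, and `weight` are hypothetical placeholders that the real game would supply; they are not part of the original implementation:

```python
# Depth-limited minimax sketch. `moves(state, maximizing)` lists the legal
# moves, `apply_move(state, move)` returns the following state, and
# `weight(state)` rates a state - all three are hypothetical hooks.

def minimax(state, depth, maximizing, moves, apply_move, weight):
    """Return the best rating the AI can enforce within `depth` moves."""
    options = moves(state, maximizing)
    if depth == 0 or not options:
        # Depth limit or end of game reached: fall back to the
        # weighting function.
        return weight(state)
    ratings = (
        minimax(apply_move(state, m), depth - 1, not maximizing,
                moves, apply_move, weight)
        for m in options
    )
    # The AI picks its best rating; the opponent is assumed to pick
    # the rating that is worst for the AI.
    return max(ratings) if maximizing else min(ratings)
```

Whether this recursion behaves like depth-first or breadth-first search depends on how the tree of following states is stored and traversed; the rating rule itself is the same in both cases.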

In theory an AI could forecast and rate all moves "infinitely" far ahead to find a sequence of moves that guarantees a win or at least forces a draw.
In reality an AI runs into several limits that make such calculations impossible in a complex game like Sistance's EXPLODE! or even chess.

The first limit is the calculation time a computer would need for such deep calculations. A player expects the AI to react within a short time.
But depending on the algorithm, the calculation time grows exponentially with each additional level of depth.
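The exponential growth is easy to put into numbers. The branching factor of 20 below is a hypothetical value chosen for illustration; the real average number of legal moves in Sistance's EXPLODE! may differ:

```python
# Hypothetical numbers: assume every position offers about 20 legal moves.
branching = 20
positions_depth_5 = branching ** 5    # 3_200_000 positions to examine
positions_depth_10 = branching ** 10  # 10_240_000_000_000 positions

# Five extra levels of depth multiply the work by 20**5:
print(positions_depth_10 // positions_depth_5)  # → 3200000
```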

The second limit is the finite amount of memory a computer can provide. In Sistance's EXPLODE! there are 25 fields, and in theory almost every field can take one of 11 values. That results in 11^25 theoretically possible game states - in practice only a few of them can be reached in a real game. However, the remaining number of reachable game states is still far bigger than the number of bytes an Android device offers a single app. And one byte per state is too optimistic anyway: we need a unique ID to represent the state itself (11^25 states exist theoretically), and we need to know which player is to move (which doubles the number of game states). In addition we need some reference data, i.e. the rating of that state and links to future states.
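The numbers from this paragraph can be checked with a few lines of back-of-envelope arithmetic:

```python
# Back-of-envelope arithmetic for the state-count estimate above.
fields = 25
values_per_field = 11
states = values_per_field ** fields  # theoretical board states: 11**25
states_with_turn = 2 * states        # doubled by whose turn it is

print(states > 10 ** 26)  # → True: more than 10^26 theoretical states
```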

To respect these limits, AI algorithms restrict the number of examined moves in different ways. The minimum search depth for calculations in Sistance's EXPLODE! is 10 moves. If the AI uses less than this range, it cannot identify the tactical maneuvers that would push it into a better situation. If it uses more than a depth of 10, the calculation costs increase.

I tested both of these algorithms and describe them on the following pages.

human rating

Human players (usually) don't think in such algorithms. They examine the game board and recognize tactical arrangements that the AI would not "see" by itself. Depth-first search and breadth-first search stoically forecast every move sequence until the rating of a situation is clear (won or lost - in some games also drawn) or can be calculated by a relatively simple weighting algorithm.

If the program cannot offer the amount of memory needed for breadth-first search, or the calculation power needed for depth-first search, to forecast to the required depth, a better weighting function must be supplied.
Its job is to examine more complex questions. Depth-first and breadth-first algorithms can predict the result after N moves relatively easily. A weighting function that doesn't forecast must know and detect strategic elements by itself.
Simply put, the weighting functions of forecasting algorithms only detect a dangerous game state because that state leads to bad following states. They don't know the reason - and they don't have to.
A complex weighting function that simulates the tactical capabilities of a human player must be able to match field constellations against patterns it already knows to be dangerous.
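Such pattern matching can be sketched as a board scan for known dangerous constellations. The pattern used here - an own field whose neighbour is an opponent field of at least the same value - is purely illustrative and not taken from the real game rules; the board representation is the same assumed dict as in the earlier weighting-function sketch:

```python
# Sketch of pattern-based rating: scan the board for fields that match
# a constellation the AI "knows" to be dangerous. The pattern itself is
# a made-up example, not a rule of Sistance's EXPLODE!.

def dangerous_fields(board, me="me", opponent="opponent"):
    """Return positions of own fields that match the danger pattern."""
    danger = []
    for (row, col), (owner, value) in board.items():
        if owner != me:
            continue
        neighbours = [(row - 1, col), (row + 1, col),
                      (row, col - 1), (row, col + 1)]
        for pos in neighbours:
            n_owner, n_value = board.get(pos, (None, 0))
            # Danger pattern: an adjacent opponent field whose value is
            # at least as high as our own.
            if n_owner == opponent and n_value >= value:
                danger.append((row, col))
                break
    return danger
```

A weighting function built this way would keep a whole catalogue of such patterns and subtract a penalty for each match, instead of discovering the danger indirectly through forecasted following states.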

I describe my attempt to write such an AI in the chapter "rule of thumb".
