Planned AI movement for better dogfights

    NeonSturm

    StormMaker
    Joined
    Dec 31, 2013
    Messages
    5,110
    Reaction score
    617
    • Wired for Logic
    • Thinking Positive
    • Legacy Citizen 5
    Retreating might be determined by the player's attempt to put ground between itself and the AI, or simply by distance. Countless states can be added and then prioritized, and each can be given a different behavior pattern: one AI might heal before attacking if it is really low on health, while another could be set to go berserk and simply attack when close to death.
    No AI, however, can actually tell what a player's intent is. At least not.
    But what I tried to say is: if the AI is too stupid to decide the proper action itself, it should watch the player's behaviour to define its own.

    If one human yawns, others around might yawn too.
    The AI, as opponent, should act like a bubble of space dragging and pushing on the swarm of AI ships.
    It should also have preferable and non-preferable weapon ranges and try to stay in a preferable one. But if the player tries to get into his own preferable range (which is non-preferable to the AI), the AI could make a sudden turn to skip that range and move into the next preferable one (think of electron orbits around an atom: they avoid the space between orbits).

    Run if you can run. Go berserk if you can't run.
    For piloted ships: attack if you can survive. You don't get anything from being a killing-machine, only from looting (if you survive).
    For defence-drones: yes, they can go berserk, but only if they can still do damage, i.e. if it is effective.
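    A minimal sketch of those two ideas, nothing more: an AI that steers toward one of its preferred weapon-range bands and skips the unfavourable gap in between, plus the run/berserk/attack rule for piloted ships and drones. The band limits, the 0.25 hull threshold and the names (PREFERRED_BANDS, desired_band, choose_action, can_escape) are all invented for illustration, not taken from any game code.
    Code:
    # Illustrative only: limits, thresholds and names are assumptions.
    PREFERRED_BANDS = [(150.0, 300.0), (600.0, 900.0)]   # weapon ranges this AI likes

    def desired_band(distance_to_player):
        """Stay in a preferred band; if pushed out, head for the nearest band edge
        instead of lingering in the gap (the electron-orbit idea)."""
        for lo, hi in PREFERRED_BANDS:
            if lo <= distance_to_player <= hi:
                return (lo, hi)
        return min(PREFERRED_BANDS,
                   key=lambda b: min(abs(distance_to_player - b[0]),
                                     abs(distance_to_player - b[1])))

    def choose_action(hull, can_escape, is_drone, damage_output):
        """Run if you can run, go berserk if you can't, attack if you can survive."""
        if hull < 0.25:                       # badly damaged (threshold is an assumption)
            if can_escape and not is_drone:   # piloted ships survive to loot another day
                return "run"
            if damage_output > 0:             # drones go berserk only if still effective
                return "berserk"
            return "run"
        return "attack"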
     
    Joined
    Dec 14, 2014
    Messages
    745
    Reaction score
    158
    • Community Content - Bronze 1
    • Purchased!
    • Legacy Citizen 2
    NeonSturm said:
    But what I tried to say is: if the AI is too stupid to decide the proper action itself, it should watch the player's behaviour to define its own.

    If one human yawns, others around might yawn too.
    The AI, as opponent, should act like a bubble of space dragging and pushing on the swarm of AI ships.
    It should also have preferable and non-preferable weapon ranges and try to stay in a preferable one. But if the player tries to get into his own preferable range (which is non-preferable to the AI), the AI could make a sudden turn to skip that range and move into the next preferable one (think of electron orbits around an atom: they avoid the space between orbits).

    Run if you can run. Go berserk if you can't run.
    For piloted ships: attack if you can survive. You don't get anything from being a killing-machine, only from looting (if you survive).
    For defence-drones: yes, they can go berserk, but only if they can still do damage, i.e. if it is effective.
    What you are talking about is individual AI tactics. I only gave a few examples; there are countless tactics that fit in that category.
    There are also what are known as fleet tactics, combined tactics, and mixes of both. For example, maybe I will feign being crippled so you come after me.
    I'll back off and slowly correct my course back into my own fleet, which means the fight that was 1 vs 1 has become you vs many.
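    As a rough sketch, that lure can be written as a tiny state machine. The state names, distances and the lure_step function below are all made up for illustration; only the overall flow comes from the description above.
    Code:
    # Hypothetical states and thresholds; only the overall flow is from the post.
    LURE_STATES = ("FIGHT", "FEIGN_CRIPPLED", "FALL_BACK_TO_FLEET", "SPRING_TRAP")

    def lure_step(state, hull, dist_to_player, dist_to_fleet):
        if state == "FIGHT" and hull < 0.6:
            return "FEIGN_CRIPPLED"                 # pretend to be badly damaged
        if state == "FEIGN_CRIPPLED" and dist_to_player < 400.0:
            return "FALL_BACK_TO_FLEET"             # the player took the bait; drift home
        if state == "FALL_BACK_TO_FLEET" and dist_to_fleet < 200.0:
            return "SPRING_TRAP"                    # 1 vs 1 just became you vs many
        return state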

    Generally, better AIs use goals or objectives together with state machines.
    A goal may be something along the lines of: attack ship, destroy ship, capture ship, take cargo...
    Then there would be a subset of objectives to meet that goal,
    such as: disable engines & weapons, board ship, eject cargo, pick up the cargo with your own ship, escape.

    Basically individual tactics are used as part of the operations to accomplish a goal at some level.
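    A rough sketch of that structure, assuming a plain dictionary of plans and a hypothetical GoalAI class (neither is from any actual game code): a goal expands into an ordered list of objectives, and individual tactics are whatever drives the current objective to completion.
    Code:
    # Goal and objective names are taken from the post; the structure is an assumption.
    GOAL_PLANS = {
        "destroy ship": ["close distance", "attack ship"],
        "capture ship": ["disable engines", "disable weapons", "board ship"],
        "take cargo":   ["disable engines", "eject cargo", "pick up cargo", "escape"],
    }

    class GoalAI:
        def __init__(self, goal):
            self.objectives = list(GOAL_PLANS[goal])    # the subset of objectives for this goal
            self.current = self.objectives.pop(0)

        def update(self, objective_done):
            """Advance to the next objective once the current one is met."""
            if objective_done and self.objectives:
                self.current = self.objectives.pop(0)
            return self.current

    ai = GoalAI("take cargo")
    print(ai.current)   # "disable engines": individual tactics decide *how* to do it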

    AIs can have different levels of complexity depending on what is needed. You could have a simple fighting AI for a monster, for example.
    For pirates you might have a goal-oriented tactical AI.

    You can also have different types of AI for different levels of control. For example, one AI could control a country or faction, versus an AI used to control peasants or one that controls troops. Both are AIs with entirely different goal systems, and one can actually be in charge of the other.
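    For instance, a minimal sketch of that layering with invented FactionAI and UnitAI classes (purely illustrative, not an existing API): the faction-level AI hands goals down, and each unit-level AI pursues its goal with its own, simpler system.
    Code:
    # Hypothetical two-layer AI: strategy on top, unit behaviour below.
    class UnitAI:
        def __init__(self, name):
            self.name, self.goal = name, "idle"
        def assign(self, goal):
            self.goal = goal                       # the unit pursues this goal with its own tactics

    class FactionAI:
        def __init__(self, units):
            self.units = units
        def plan(self, enemy_sighted):
            # Strategic decision; the tactical "how" is delegated downward.
            order = "attack ship" if enemy_sighted else "patrol"
            for unit in self.units:
                unit.assign(order)

    fleet = [UnitAI("pirate-1"), UnitAI("pirate-2")]
    FactionAI(fleet).plan(enemy_sighted=True)
    print([u.goal for u in fleet])                 # ['attack ship', 'attack ship']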
     

    NeonSturm

    StormMaker
    Joined
    Dec 31, 2013
    Messages
    5,110
    Reaction score
    617
    • Wired for Logic
    • Thinking Positive
    • Legacy Citizen 5
    Imagine something simpler: a checker-board of 3x3 fields. Where do you move your ship to? You can solve this with a "potential" field.
    Code:
    [4-9] [3-6] [2-4] → [0-5] [0-3] [0-2]
    [3-6] [2-4] [1-1] → [0-3] [0-2] [1-0]
    [2-4] [1-1] [0-0] → [0-2] [1-0] [0-0]

    For this example, I used negative values of -9, -6, -4, -1 for avoiding explosion damage.
    I used the positive values (the numbers left of the minus sign) as factors for closing in on the point of explosion at the top left.
    These positives and negatives sum up so that the good moves (from the middle) lead to the right and down.
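    A minimal sketch of that field in code, using the values from the left-hand grid above (the neighbour search and the tie-breaking are my own assumptions): each cell's score is its attraction factor plus its avoidance penalty, and the ship simply moves to the best-scoring neighbouring cell.
    Code:
    ATTRACT = [[4, 3, 2],      # factor for closing in on the explosion at the top left
               [3, 2, 1],
               [2, 1, 0]]
    AVOID   = [[-9, -6, -4],   # penalty for taking explosion damage near the top left
               [-6, -4, -1],
               [-4, -1, 0]]

    def score(r, c):
        return ATTRACT[r][c] + AVOID[r][c]

    def best_move(r, c):
        """Check the up-to-8 neighbouring cells and move to the highest-scoring one."""
        moves = [(r + dr, c + dc)
                 for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                 if (dr, dc) != (0, 0) and 0 <= r + dr < 3 and 0 <= c + dc < 3]
        return max(moves, key=lambda rc: score(*rc))

    print(best_move(1, 1))   # from the middle, the right/down cells tie at 0; returns (1, 2)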
    To compute this efficiently, you don't need a matrix for every possible state; you only need to detect the upper and lower points of change in a graph.
    With these points of change you can create a Voronoi diagram (example on paperjs.org), each point being the centre of a cell and each decision one dimension.
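    One hedged reading of that idea in code, with made-up seed points, decisions and scaling (none of it from the post beyond the Voronoi notion itself): instead of a table covering every state, keep only a few labelled "points of change" and classify any concrete state by its nearest point, which implicitly partitions the state space into Voronoi cells with one decision per cell.
    Code:
    import math

    # Hypothetical seeds: (hull fraction, distance to player) -> decision.
    SEEDS = [
        ((0.9, 100.0), "attack"),    # healthy and close
        ((0.9, 900.0), "close in"),  # healthy but far away
        ((0.1, 100.0), "berserk"),   # nearly dead and cornered
        ((0.1, 900.0), "run"),       # nearly dead with room to escape
    ]

    def decide(hull, distance):
        """Pick the decision of the nearest seed (one Voronoi cell per decision)."""
        def dist(seed):
            (h, d), _ = seed
            return math.hypot(hull - h, (distance - d) / 1000.0)  # crude axis scaling
        return min(SEEDS, key=dist)[1]

    print(decide(0.2, 750.0))   # falls into the "run" cell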

    If you also add decision weighting with formulas (+ and - = 1 | * and / = 2 | ^x and ^-x = 3), it's easy to visualise. (Use colours for the 3rd dimension and bold drawing on the horizontal/vertical axes for the 4th and 5th dimensions, for example.)

    This explanation might not be the most accurate or helpful, but it's a way to start understanding the distinction between a simple state machine and higher-level thinking.