AI, display modules, Scripting. Possible low-cost improvements. Plea for tools

    DrTarDIS

    Eldrich Timelord
    Joined
    Jan 16, 2014
    Messages
    1,114
    Reaction score
    310
    Thinking of the insatiable derpiness of the AI, and thinking fondly of the days of CS server bots, I had an epiphany. The Lua stuff gave me an idea of the way development seems to be planned.

    What if the AI not only had a default set of scripted actions, but those actions could also be overridden by an active Bobby AI block connected to a display module?
    I thought of a few benefits like:
    • The script can change when logic blocks rewrite the display it reads from, adjusting aggression/maneuvering/etc. based on current system status and the author's design
    • Adapts as the game evolves through community/author management, e.g. power changes, range changes, etc.
    • The majority of the scripting and block-information functionality is already freely available in the current display implementation
    • A resource for developers to build out their own AI/bot functionality. Improved defaults mined from a mob are always nice, right?
    • Takes the load off pure-logic solutions in the more "interesting" player builds, e.g. a high-speed clock checking shield status and flipping passives, which can get quite heavy in large numbers, versus the AI naturally doing it on the available server cycles
    A few functions I'd like to be able to call/write. This leaves it up to each NPC & blueprint creator to decide whether that unit acts differently from the standard AI, which is known for "fire everything from maximum weapon range at enemies, and faceroll with any friends".
    • closing distance, speed
    • target distance,
    • waypoint distance
    • states like {fleet rank, XYZ formation, mother/flagship y/n} fleet order {mining/patrol/defend/attack/move/idle}
    • {define variable}, to let you interact with things like specific sensor blocks, or inner-ship remotes. Maybe work off of block position? copy-paste&tweak the inner-ship code?
    • {orbit speed} and {orbit distance} from {waypoint entity} bounding box
    • weapons fire priority, or rotation, based on events, e.g. {target shields} > {1%} use {computer IDyaddayadda} ; {target shields} = 0% use {computer IDyaddayiddish} (see the sketch after this list)
    • passive system activate/deactivate, based on events, e.g. {own shields} > {0%} {on} {IoncomputerIDblahblah} , {off} {piercingComputerIDargnumbers} ; {own shields} = {0%} {off} {IoncomputerIDblahblah} , {on} {piercingComputerIDargnumbers}
    • {parent's target} {target friend-or-foe} {target distance} {targetXYZhp} for turret things like astrotech, salvage, momentum (towing), fighter defence, etc. E.g. an anti-fighter turret could be set to only fire on targets with less than some absolute system HP, and anti-capital ones only on targets above some absolute value, or by some other available situation variable.
    • set state on all computers of the entity in general, {scanner}, {jammer} and whatnot included
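
    Purely as a sketch of what such an override script might look like, assuming Lua since that seems to be the plan. Every name here (onCombatTick, getShieldPercent, fireWeapons, setComputerActive) is made up by me, not anything the devs have confirmed:

    -- hypothetical Bobby AI override hook; all names below are placeholders
    local ION_COMPUTER     = "IoncomputerIDblahblah"
    local PIERCE_COMPUTER  = "piercingComputerIDargnumbers"
    local ALPHA_WEAPONS    = "computerIDyaddayadda"
    local FINISHER_WEAPONS = "computerIDyaddayiddish"

    function onCombatTick(ship, target)
        -- weapons fire priority based on the target's shield status
        if target:getShieldPercent() > 1 then
            ship:fireWeapons(ALPHA_WEAPONS)
        else
            ship:fireWeapons(FINISHER_WEAPONS)
        end
        -- passive effect swap based on our own shield status
        if ship:getShieldPercent() > 0 then
            ship:setComputerActive(ION_COMPUTER, true)
            ship:setComputerActive(PIERCE_COMPUTER, false)
        else
            ship:setComputerActive(ION_COMPUTER, false)
            ship:setComputerActive(PIERCE_COMPUTER, true)
        end
    end
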

    I'd expect a few things to be scriptable at that point.
    • Spooling of jump drives while power is not otherwise being used, or when in combat (civvies can run from hostiles), or activation of interdiction, or when stuck (see the sketch after this list)
    • appropriate stand-off distance / avoid / engage behavior from attack/defence/scavenger/civilian AI in loaded sectors
    • more dynamic encounters in general: cloaked raiders/pirates surprising you at knife range
    • functional and variable warhead torpedoes, more predictable fleet actions
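
    For the jump-drive point above, a minimal sketch of the kind of idle-time rule I mean, again with made-up function names rather than a real API:

    -- hypothetical per-tick script for a civilian hull; placeholder API names
    function onIdleTick(ship)
        -- spool the jump drive whenever power isn't needed for anything else
        if ship:getPowerPercent() > 90 and not ship:isJumpDriveCharged() then
            ship:chargeJumpDrive()
        end
    end

    function onHostileSpotted(ship, hostile)
        -- civvies run from hostiles instead of fighting
        if ship:getRole() == "civilian" and ship:isJumpDriveCharged() then
            ship:activateJumpDrive()
        end
    end
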
    Anyone else have some thoughts?
     

    NeonSturm

    StormMaker
    Joined
    Dec 31, 2013
    Messages
    5,110
    Reaction score
    617
    • Wired for Logic
    • Thinking Positive
    • Legacy Citizen 5
    I've thought much about AIs, actually for 10 years now.
    I am sure that the core of AIs can be seen as path-finding algorithms in a web of relations between words/symbols.

    AI personality, however, is a more complex topic - likewise, structuring the source code of an AI so that it is easily editable is a hard task.

    You certainly don't want an Adolf_Hitler_2.0 personality, but it's OK to have Nazis in a "dream of a good /god/" in which children learn how to avoid a future with Nazis (so that the /god/ has a responsibility for the dream to remain just a dream).

    You would also need some knowledge so that the AI can mature:
    Formulating sentences (order of words, order of letters).
    Deeper knowledge (experiences expressed as statements for faster/more efficient communication/thinking).
    A sense for how likely a statement is to be true.
    Sensor/action drivers/software/interfaces.

    I think that the whole AI topic is less a matter of processing power and more a matter of organisational complexity.
    1. Soul - (love, anger, …)
    2. Consciousness - connection between thinking and soul.
    3. Thinking
    4. Personality
    5. Objectives
    6. Experiences / Opinions
    The pathfinder AI
    You could create an AI with dictionaries.
    A basic AI can be a pathfinder in a web of word-equivalents (words, numbers, patterns, …).
    The difficulty is deciding which path to follow.
    What the result should look like
    computer -> yes, my lord. what is your command?
    activate -> what should I activate?
    help -> do you want me to (1) list associated words, (2) list the current context or active words, …
    Howto
    The dictionary is what you need to link words together.
    String entries containing 2+ words can help sort the elements into the right order.
    Entries: "command option", "command list", "command associated", "option list", "list associated", "list context".
    Results:
    1. help → "command option list associated"
    2. help → "command option list context"
    3. command → option, list, context, associated
    …​
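
    As a toy illustration of that (my own sketch, in Lua since that's what the thread is about, not anything from the game): store the dictionary as a graph of word nodes and breadth-first-search between two words to get a path of associations.

    -- toy word-association graph with breadth-first path-finding;
    -- the dictionary entries ("command option", "option list", ...) become edges
    local graph = {}

    local function link(a, b)               -- undirected association
        graph[a] = graph[a] or {}
        graph[b] = graph[b] or {}
        table.insert(graph[a], b)
        table.insert(graph[b], a)
    end

    link("command", "option"); link("command", "list"); link("command", "associated")
    link("option", "list");    link("list", "associated"); link("list", "context")

    local function findPath(from, to)
        local queue, visited, parent = { from }, { [from] = true }, {}
        while #queue > 0 do
            local word = table.remove(queue, 1)
            if word == to then
                -- walk the parent links back to reconstruct the path
                local path = { word }
                while parent[word] do word = parent[word]; table.insert(path, 1, word) end
                return path
            end
            for _, nxt in ipairs(graph[word] or {}) do
                if not visited[nxt] then
                    visited[nxt] = true
                    parent[nxt] = word
                    table.insert(queue, nxt)
                end
            end
        end
    end

    print(table.concat(findPath("command", "context"), " -> "))
    -- prints: command -> list -> context
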
    The context is the most important association, the default value, the short-term memory.
    1. him → a recently used Subject associated with male
    2. her → a recently used Object associated with female
    Each word used could reserve some space for context at a certain priority. The least prioritized must be forgotten to make new space available.
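
    A minimal sketch of that "least-prioritized gets forgotten" short-term memory, assuming the context is just a small store of word → priority (the capacity and the example words are arbitrary choices of mine):

    -- bounded short-term memory: each mentioned word gets a priority;
    -- when the store is full, the lowest-priority entry is forgotten
    local ShortTermMemory = { capacity = 8, entries = {} }

    function ShortTermMemory:remember(word, priority)
        self.entries[word] = priority
        -- count entries and find the weakest one
        local count, weakestWord, weakestPrio = 0, nil, math.huge
        for w, p in pairs(self.entries) do
            count = count + 1
            if p < weakestPrio then weakestWord, weakestPrio = w, p end
        end
        if count > self.capacity then
            self.entries[weakestWord] = nil   -- forget to make space
        end
    end

    ShortTermMemory:remember("Adam", 3)
    ShortTermMemory:remember("male", 2)
    -- "him" would then resolve to the highest-priority recent word tagged male
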
    Associated (or more explicitly "all associated") is a list of valid associations in the long-term memory:
    1. Adam → Adam&Eva
    2. Adam&Eva → Adam
    3. Adam&Eva → Eva
    The order of these associations can change depending on the context, strengthening some search paths over others.
    1. Food + Poison → Bad ("poison→bad" is stronger than "food→good")
    2. Food → Good
    3. Food → Food_is_good_experiences
    4. Food → Food_is_bad_experiences
    Food_is_good_experiences → good; orange, apple, … (more common, stronger association)
    Food_is_bad_experiences → bad; poison, dirt, … (less common, weaker association)
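
    One way to model that context-dependent reordering (again my sketch, not part of the original idea): give each long-term association a base strength and let the current context add a temporary bonus before sorting.

    -- long-term associations carry a base strength; the current context adds a
    -- temporary bonus, so "poison -> bad" can outrank "food -> good"
    local associations = {
        food = {
            { to = "good", strength = 0.8 },
            { to = "bad",  strength = 0.2 },
        },
    }
    local contextBonus = { bad = 0.0 }

    local function rank(word)
        local ranked = {}
        for _, assoc in ipairs(associations[word] or {}) do
            local score = assoc.strength + (contextBonus[assoc.to] or 0)
            table.insert(ranked, { to = assoc.to, score = score })
        end
        table.sort(ranked, function(a, b) return a.score > b.score end)
        return ranked
    end

    -- without poison in the context, food -> good wins;
    -- after seeing "poison", boost the "bad" node and food -> bad wins
    contextBonus.bad = 0.7
    for _, r in ipairs(rank("food")) do print(r.to, r.score) end
    -- bad   0.9
    -- good  0.8
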
    AIs can react differently in the same situation depending on their short-term memory of external conditions.
    These dictionary nodes don't need to be named. They can also be numeric memory-slot references, as long as the memory is consistent.
    However, adding descriptions/names to these nodes helps with communicating with the user and understanding written commands.

    And when will AIs be like humans?

    There are many opinions on this, but I think the safest way to keep your AI from acting like humans is to stop it from thinking about the "uncertain future".
    Terrorists can eliminate humanity, but AIs shouldn't think about that possibility, so that they will never think about enslaving humanity to save humans.


    Another safety measure can be experiences of failure, so that the AI seeks human assistance for decisions which are hard to revert; especially when the decision is about human behaviour, a human should be consulted.
    But if AIs are actually "fully simulated humans", they could themselves fulfil their desire for certainty by asking others of their kind, unless you also add a desire for democratic votes and for preserving minorities.