Concurrent and Adaptive Narrative

Narrative and Player Expression

As a narrative and game designer, you must consider that you are designing for two processors: the technology, the hardware running the code and algorithms, and the psychology, what is happening in the player's mind. Players are exploring possibility spaces, sets of problems that are interesting and enjoyable.

When presented with a single narrative, humans can sense linearity. As we watch Indiana Jones run down the cave corridor pursued by a giant rolling boulder, the film is by nature linear in its narrative; in our minds, however, we produce 'what if' moments of imagination. "What if he had tripped on that root?" "What if his jump was too short?" We are able to flesh out the possibility space with what could have happened.

This actually hurts us in game design, as we cannot pass simple or naïve finite-state-machine behaviors off as realistic or believable. The human mind is built to detect and establish patterns, especially when those patterns are applied to another pseudo-human, simulation-type entity.

And yet, the player's imagination is perhaps one of our greatest assets. If we provide a foundational base for the player to ground their experience in, and we create believable and realistic relationships and conflicts, the player's mind will craft more dynamic and complex scenarios than millions of designers ever could.

It is common for game designers to believe that the fun of gameplay exists between the lines of code we have written. In truth, the entirety of a game's experience extends beyond the code, into the player's mind and our social contexts.

In early Sims games, sims would try to balance material and social success. This, in essence, is a hill-climbing challenge over the characters' reward vectors.

Player psychology suggests:

  • fear, aggression, violence
  • compassion, empathy, reflection

Quick feedback loops matter. This chart describes some success and failure states used in an early Sims game:

| Basic Control | Needs     | Jobs / Skills / Economy | Social / Friends | Family          |
|---------------|-----------|-------------------------|------------------|-----------------|
| Interact      | Free Time | House                   | Romance          | Children        |
|               | Collapse  | Lose Job                | Insult           | Social Services |
|               | Lonely    | Repo Man                | Jealousy         | Affair          |
|               | Starve    | Electrocution           | Fight            | Military School |

These needs must be met left-to-right across the table, so there is success at every timescale of the simulation.

Giving Life to Lines of Code

For our simulation, we are going to describe a brief hierarchy of events, which can be seen as its own hierarchy of needs for our entities.

needs → jobs / skills → social → aspirations

Experiencing failure is part of the experience and serves to teach behaviors. In The Sims, there were many failure states (some caused by programming bugs), and players were very much on board with this illusion of reality.

It is very easy in video games for players to find the limits of the possibility space imposed by the designers and developers of the simulation. Games that present open-ended worlds allow for exploration of the possibility space.

We must define the model, as models are the communication channel between the system and the user. Modeling is about reducing data into information, leaving only the essential data you want to convey.

Humans maintain a model of other humans, and also a model of their model of you. You can act in ways that confirm that model to others.

Players project their wants and desires, both aspirational and real-life, into the game. Humans are incredible at seeing patterns, and we apply patterns even when none are present. We can empathize with the entities in a simulation if the simulation controls its level of abstraction well, allowing the player to fill in the blanks with their own imagination.

In a constructed environment you must balance between high and low levels of detail. At higher levels of detail, the world becomes much harder to edit; to keep it malleable, as a toy would be, it is important to abstract and simplify ideas where necessary. That abstraction is what invites you in.

Emergence is about simple rules and a large possibility space.

A system is defined by what its entities are capable of, what they are constrained by, and how we organize and arrange their interactions and the transitions between those interactions.

Concurrency vs. Multitasking

There is no true concurrent action on a single processor core, and there is much to learn from how concurrency problems are solved in modern computing.

Multitasking is a form of context switching. Concurrency is a form of context overlap.
Multitasking is talking over dinner between bites.
Concurrency is talking and eating with your mouth full.
A single-core CPU multitasks, switching between distinct computations. A traditional GPU is concurrent, performing multiple computations at the same time.

Multitasking is cooperative context switching among behaviors. It gives the illusion of doing things at the same time without complex computation. Humans do this as well.

Each multitasking timeline is a set of events appropriate to that task. An interaction is defined as a set of tasks, and there are always some number of tasks for any given interaction.

A task is the fundamental unit of behavior. Tasks represent a finite action whose complexity should be limited to its purpose as a general-use function. They follow rules that keep them compatible with all other running interactions. They must be cheap to run so they can be used frequently, and they can be limited by additional constraints. These sub-events can be selected in a weighted or random fashion.
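A minimal sketch of such a task, assuming a simple `Entity` with a single 0–1 need and a hypothetical `weight` field for weighted sub-event selection:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Entity:
    hunger: float = 0.5  # a single 0-1 need, for illustration

@dataclass
class Task:
    """A finite, cheap unit of behavior, reusable across interactions."""
    name: str
    effect: Callable[[Entity], None]  # small, general-purpose state change
    weight: float = 1.0               # used for weighted sub-event selection

def raise_hunger(e: Entity) -> None:
    # keep the effect small and bounded so the task stays cheap and general
    e.hunger = min(1.0, e.hunger + 0.2)

eat_food = Task("eat food", effect=raise_hunger)

e = Entity(hunger=0.5)
eat_food.effect(e)
assert abs(e.hunger - 0.7) < 1e-9
```

Because the effect is just a small callable over entity state, the same task object can be reused by many interactions.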

| Interaction | Tasks               | Effect                              |
|-------------|---------------------|-------------------------------------|
| eat         | find food, eat food | entity.hunger += x(1 − entity.hunger) |
| sleep       | find …              | += x(1 − …)                         |
| chat        | find …              | += x(1 − …)                         |
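The effect column uses an asymptotic update rule: each application closes a fraction of the remaining distance to the need's ceiling, so gains diminish as the need fills. A sketch, assuming `x` is a per-application gain rate:

```python
def apply_need_gain(need: float, x: float) -> float:
    """Asymptotic need update from the table: need += x * (1 - need).
    Gains diminish as the need approaches its ceiling of 1.0."""
    return need + x * (1.0 - need)

n = 0.0
for _ in range(3):
    n = apply_need_gain(n, 0.5)
# each application with x = 0.5 halves the remaining distance to 1.0
assert abs(n - 0.875) < 1e-9
```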

As with the NPC characters themselves, most objects within the system are entities that provide interactions, and affordances to complete those interactions. Each entity maintains a set of interactions encapsulating everything this entity, or any other entity, needs to do to perform on that object.

As humans, we do most things concurrently by nature: standing, sitting, drinking, talking; we are constantly multitasking. Creating convincing and realistic environments and scenarios is thus not a naturally linear or serial problem.

Such a system needs to be designed so that it can be explored, implemented, and iterated on quickly. Building custom logic on a per-system basis is a lot of work, and some interactions remain serial.

Topologies of systems include Entities, Networks, and Layers. The way these structures change through time can be seen as the dynamics of the system.

eat + chat → find food, find friend, eat food, talk

Each entity has a set of active interactions, an expression of what the entity is currently doing at that point in time. The tasks list for that entity is a dynamic list that interleaves elements from all active interactions.
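The interleaving above can be sketched as a simple round-robin merge of the active interactions' task lists; the list-of-strings representation here is an illustrative assumption:

```python
from itertools import chain, zip_longest

def interleave_tasks(active_interactions: list[list[str]]) -> list[str]:
    """Build one dynamic task list by interleaving the task lists
    of all active interactions, round-robin."""
    merged = zip_longest(*active_interactions)  # pad shorter lists with None
    return [t for t in chain.from_iterable(merged) if t is not None]

eat = ["find food", "eat food"]
chat = ["find friend", "talk"]
assert interleave_tasks([eat, chat]) == ["find food", "find friend", "eat food", "talk"]
```

A real task list would of course be recomputed as interactions start and finish, rather than built once.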

The Queue and Active Interactions

Determining which interaction or task an entity will perform requires a "backlog" of available interactions for the entity to select from.

Entities have a queue of pending interactions; the player, the entity itself, or the Story Compositor can add to it.

Choosing the constraints for a system of events, or determining whether actions are compatible, starts by asking questions about the environment you are constructing:

Can the action be performed by the entity, and how does the entity perform it? For a specific action, what conditions must be satisfied before it can run?

The queue runs for every entity that has available interactions or can interact with other entities. It is a first-in-first-out list of pending interactions, each with a priority value.

User-initiated actions have high priority ( p ≈ 1 ); autonomous actions have low priority ( p ≈ 0 ). This priority is compared against the current set of active interactions: interactions selected from the queue that are currently running for the entity.

Every moment, the queue is compared to the active interaction list, which has its own priorities and constraints. When interactions are first selected, removed from the queue, and added to the active interactions list, their priority is temporarily increased. Again, this is not done with boolean checks but by using the age of the interaction as a modifying input during the priority-calculation phase. E.g., watching TV, when first selected, has an increased priority to regulate task switching; after some time, say 15 seconds, the priority gradually falls off to idle.
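One way to sketch that age-modified priority is an exponential falloff; the `boost` and `falloff` values here are illustrative assumptions, not figures from the system:

```python
import math

def effective_priority(base: float, age_seconds: float,
                       boost: float = 0.5, falloff: float = 15.0) -> float:
    """Newly activated interactions get a temporary priority boost that
    decays exponentially with age, discouraging rapid task switching."""
    return base + boost * math.exp(-age_seconds / falloff)

# a freshly selected "watch tv" outranks its idle base priority...
assert effective_priority(0.2, 0.0) > 0.6
# ...but decays back toward that base after a while
assert effective_priority(0.2, 60.0) < 0.22
```

Because the boost is a continuous function of age rather than a boolean flag, the comparator never has to special-case "recently started" interactions.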

Along with priorities being compared, a queue entry will not become active if its constraints intersect incompatibly with any active interaction. If an action of highest priority enters the queue and is incompatible with the current set of active interactions, all current active interactions are dropped and the queued action is started.

In this case, reading a book is incompatible with the constraints set by the current active actions, so it will wait. In the case of a fire breaking out while cooking, which has a high priority, baking a cake would be dropped.

Eventually, events begin to complete. For instance, drinking completes, its constraints are lifted, and we can reevaluate the next action. Eventually, only sit will be left, and "read book" will be added to active.

Scoring Constraints and the Comparator

As interactions are applied and evaluated, scoring divides into the likelihood of their outcome and the likelihood of the action being taken. It is the weighing and comparing of these interaction priorities and constraints that create a dynamic and breathable system. Priorities and weights may shift over time, with attributes specific to characters, or with traits passed down through generations.

Scoring-function likelihood should always be on a scale of 0 to 1; this allows for normalization as an expression of the aptitude to perform the action.
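A minimal sketch of such a normalized scoring function, assuming `hunger` is itself a 0–1 need where lower means hungrier:

```python
def score_eat(hunger: float) -> float:
    """Likelihood in [0, 1] that the entity chooses to eat:
    the lower the hunger need is filled, the higher the score."""
    return min(1.0, max(0.0, 1.0 - hunger))

assert score_eat(0.25) == 0.75
# out-of-range inputs are clamped so the score stays normalized
assert 0.0 <= score_eat(1.3) <= 1.0
```

Keeping every scorer on the same 0–1 scale is what lets the comparator weigh unrelated interactions against each other directly.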

Some priorities may be described as constraints, as mentioned: an interaction may not be considered for activation if it does not satisfy the positional constraints of a currently running interaction. The scoring function should not take subtasks into account when determining positional constraints, and should primarily refer to groups when referring to positional aspects.

For example, an entity can keep the sit interaction active even when it stands and walks to grab a drink. Because the two actions were compatible when initially compared, they did, at one point, meet the positional constraints. After grabbing the cup, the entity will then return to the place where it was sitting.

  • Position: the entity must be within a particular area or at a particular position to perform this interaction.
  • Orientation: it's not enough to be inside the polygon representing the area; the entity must also be facing the right way.
  • Animation slot: a combination of both, dynamically extracted from animations and models, e.g., the entity must be at this point or position at this table.
  • Posture: standing, sitting, or a neutral animation pose, so the same action can be performed from multiple postures.
  • Carrying: what is in the hands of the entity. For something passive, like talking, it won't matter. Implies the object is tangible to the character, held in a single hand or both.
  • Object on surface: the object must be on a surface in front of the entity to access it. Drinking can be performed by an entity who is holding the drink or grabs it from a surface; this also covers finding empty counters, etc.
  • Line of sight: a modification of position, e.g., you cannot watch TV from behind a wall.

Scoring functions don't eliminate areas; they allow entities to prioritize. We represent probabilistic implications through all facets of a character's identity.

Convex polygons define socialization groups; they are recomputed from the group when the event starts or when it moves.

Instead of expressing pairs of conditions for interactions, express preconditions as data-driven rules, so that complexity arises naturally from the conditions and priorities of the interaction. This is also beneficial when we need to support entities performing multiple interactions, and when the conditions of those actions need to be computed.

For instance, while talking, it is preferable to be close to the other participants, but while playing the guitar and talking you would want to be seated or a step or two further away.
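The talking-versus-guitar preference above could be sketched as data-driven rules rather than hard-coded pairs; the `RULES` table, distances, and posture sets here are illustrative assumptions:

```python
# Preconditions expressed as data, not as per-pair special cases.
RULES: dict[str, dict] = {
    "talk":   {"max_distance": 1.5, "postures": {"standing", "sitting"}},
    "guitar": {"max_distance": 3.0, "postures": {"sitting"}},
}

def satisfies(interaction: str, distance: float, posture: str) -> bool:
    """Check an interaction's precondition rule against the current situation."""
    rule = RULES[interaction]
    return distance <= rule["max_distance"] and posture in rule["postures"]

# talking alone prefers closeness; playing guitar works from further away, seated
assert satisfies("talk", 1.0, "standing")
assert satisfies("guitar", 2.5, "sitting")
assert not satisfies("guitar", 2.5, "standing")
```

Adding a new interaction then means adding a row of data, and combined behaviors (talk + guitar) fall out of intersecting the rules rather than out of bespoke logic.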

By designing a homogeneous system for describing the priority of interactions and the conditions of constraints, our system continues to produce interesting results beyond a hard-coded approach.

When a constraint is applied, like watching TV, clicking to add a position constraint applies those orientation constraints in the shared space of that event. If you were to leave the event space, that influence would be lost.

Asking whether an interaction is compatible now becomes a question of comparing constraints. If the interaction space between both events is non-empty, they are compatible.
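That non-empty-intersection test can be sketched with axis-aligned rectangles standing in for the system's convex polygons; the regions and interaction names are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class Region:
    """Axis-aligned stand-in for an interaction's positional constraint
    (the full system would use convex polygons)."""
    x0: float
    y0: float
    x1: float
    y1: float

    def intersects(self, other: "Region") -> bool:
        return not (self.x1 < other.x0 or other.x1 < self.x0 or
                    self.y1 < other.y0 or other.y1 < self.y0)

def compatible(a: Region, b: Region) -> bool:
    """Two interactions are compatible if their shared space is non-empty."""
    return a.intersects(b)

couch = Region(0, 0, 2, 2)       # sit
tv_view = Region(1, 1, 5, 5)     # watch tv
kitchen = Region(8, 8, 10, 10)   # cook

assert compatible(couch, tv_view)     # sit + watch tv can run together
assert not compatible(couch, kitchen) # sit + cook cannot
```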

Transitioning between events is just a behavior constrained by the preconditions for performing an interaction on the graph.

Interactions are to be used generatively: if you know you have to be in the polygon area or at a specific position, go to that position. If you're drinking a drink, you need a drink in your hand, so go pick up the drink.

The Behavior Graph

Because the system relies on probabilities over concrete quantities, like distance, position, and other entities' states, the system's sequence and total graph of behaviors is walkable and calculable in real time. This also means that we are not actually simulating entities that learn, as their entire possibility space is of a fixed size.

Instead of mapping and graphing their underlying 0–1 traits, we can display their likelihood to complete some set of interactions. Beyond this, because the interaction space is relative, we can calculate the likelihood of all interactions with the other participants.

Find the transitions between states; these are the tasks that create reproducible behaviors.

Multiple nodes can match the requirements. Edges are weighted by cost, approximate distance, etc., and search determines the optimal path: the one that looks best while traversing the fewest nodes given where the entity is trying to go. This pairs well with scoring functions.
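A weighted search over such a behavior graph can be sketched with Dijkstra's algorithm; the node names and edge costs are illustrative assumptions:

```python
import heapq

def shortest_path(graph: dict[str, dict[str, float]],
                  start: str, goal: str) -> list[str]:
    """Dijkstra over a weighted behavior graph: edges cost roughly their
    travel distance, and the search yields the cheapest chain of nodes."""
    frontier = [(0.0, start, [start])]  # (cost so far, node, path taken)
    seen: set[str] = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, weight in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(frontier, (cost + weight, nxt, path + [nxt]))
    return []  # no route satisfies the requirement

g = {"sofa": {"counter": 3.0, "fridge": 6.0},
     "counter": {"fridge": 1.0},
     "fridge": {}}
# the direct sofa→fridge edge costs 6.0; going via the counter costs 4.0
assert shortest_path(g, "sofa", "fridge") == ["sofa", "counter", "fridge"]
```

Edge weights could just as well come from the scoring functions above, so "cheapest" folds in preference as well as distance.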

Carry transference: as an entity chooses to change seating location, it also takes the items associated with or constrained by the eat or drink event with it. Bi-directional searches create efficiency.

Simplify the graph by asking defining questions about an entity. What is it carrying? Does it have slots? Does it count as carrying if it has nothing in its hands but items in its slots? Are there nodes that can satisfy this action? These are positional queries.

Socialization, as an example, is inherently multitask-able. Which points make the most sense to stand at, and what conditions need to be satisfied, can be described heuristically through probabilities.

Socialization clustering: move the polygons when sims are talking. Sometimes, though, you can't move the polygon constraint, because another entity isn't moving and you cannot force that entity out of its event by changing your event space.

An entity's effective event space is the sum of its own event space and those of the entities it's socializing with.

A task is a computable and finite set of instructions, which allows us to simulate the environment at hyperspeed with little cost to performance. This is because the fundamental logic behind tasks is shared, and we keep a low number of easily computable tasks, reused among many interactions.

Shared Entity Interaction Model

Where most of the traditional research is limited, we present new research and development into the simulation of multi-agent environments.

While interactions on their own are complex based on their constraints, this is compounded by the communication system employed by the entities. As an interaction begins, it may enforce constraints such as requiring other NPCs.

This interaction is entered into the Queue of other nearby entities to be evaluated just like any other interaction. These interactions may contain some special logic, however, to allow them to break out early, or decay depending on other factors in the interaction.

As the interaction object is shared between the entities, its execution should be concurrent among each of the participants. This shared interaction is also used when computing global constraints for entity interaction selection.

Crowds are another interesting area of research and discussion, and this hybrid methodology allows for the descriptive narrative to drive entity behavior, even for large sets of participants.

Groups as Priority Influence

Just as an entity maintains the state of its interactions, it also maintains the state of its known configuration of entities within the world. These entities, when related to the initial entity, share a relationship factor, or other parameters to represent the relationships between them.

As entities broadcast shared interactions into others’ queues, and when multiple entities are performing the same interaction, they create and share an interaction group. Interaction groups are useful because they may also apply constraints and influences that maintain, contain, or transform the behavior of participants during the given interaction.

[Figure: entity relationship types]

Just as in reality, and to support the integrity of the interaction, these interactions would generally only be added to a shared queue if the receiving party is within speaking range.

These relationships are specific to an entity but can be 'passed on' in many ways, for instance, through the birth of a child. When a child entity is born, the family relationship between itself and its parents is p ≈ 1. For family relationships, this value recursively halves from parents to grandparents and beyond.
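That recursive halving can be sketched directly; the function name and the convention that zero generations removed means a parent are illustrative assumptions:

```python
def family_relationship(generations_removed: int) -> float:
    """Inherited family-relationship strength: p ~= 1 for a parent,
    halving recursively for each further generation removed."""
    p = 1.0
    for _ in range(generations_removed):
        p *= 0.5
    return p

assert family_relationship(0) == 1.0   # parent
assert family_relationship(1) == 0.5   # grandparent
assert family_relationship(2) == 0.25  # great-grandparent
```

Because the value only ever halves, distant ancestors contribute a vanishing but never quite zero influence on the relationship factor.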

Ownership is the odd one out of the bunch because it represents the entity's subjective understanding of what is "theirs". This is non-trivial because these relationships are relative per entity, so there exists the possibility that multiple entities "claim" an object or entity as their own. A heuristic may be applied to enforce ownership by a single entity; however, complex interactions are practically encouraged in such an open and dynamic system.

During scripted sequences, groups may be used as a narrative tool to ensure the execution of specific narrative-driven tasks.

Scripted Narrative in a Generated Environment

With a combination of broadcasting interactions into shared queues and the priority-weighting system, a composer can compose and maintain narrative sequences across entities globally throughout the environment. Because this scripted narrative takes place in the same context as the tasks and actions, it not only happens in real time but also allows unexpected and unscripted events to unfold fluidly.

Scripted interactions may also retain the ability to be re-entered into an entity's queue if another, more pressing interaction takes precedence. This is not the case for most interactions, which are canceled and removed by default, but it does allow for the resumption of interactions and tasks at a later time.

These works and research are built on the greater works of:

Original works and research by:

Gordon Goodrum
