Tracking Morality in Bridge Simulators
This post builds on “Violence in Interactive Storytelling” and “Nonviolence and Mechanics in Bridge Simulators”. Here, we will explore how unpredictable crew choices can be made morally legible in deterministic storytelling. Moral legibility enables branching narrative that follows a crew’s moral choices.
I’m going to consider a bridge simulator to be built out of roughly three components:
- Stations with commands or tools crews can use to interact with the simulation, mostly in service of whatever mission objectives they may have
- A fast mechanics loop for things like physics, running at approximately 60 frames per second
- A slow narrative loop for things like story progression, running on the order of seconds or minutes between changes in state
I have outlined how the above might work using typical crew controls, an entity-component-system architecture for the fast mechanics loop, and the Ink scripting language for the slow narrative loop.
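To make that concrete, here is a minimal sketch of how those pieces might be wired together in a single host process. The names (`World`, `NarrativeRunner`, `StationCommand`) and the update rates are illustrative, not a description of any particular engine:

```typescript
// Illustrative wiring only: stations enqueue commands, a fast loop simulates
// mechanics, and a slow loop advances the story.

type StationCommand = { station: string; verb: string; args?: Record<string, unknown> };

class World {
  private pending: StationCommand[] = [];

  // Stations call this when a crew member presses a button or drags a slider.
  enqueue(cmd: StationCommand): void {
    this.pending.push(cmd);
  }

  // Fast mechanics loop (~60 Hz): apply queued commands, then step the simulation.
  tick(dtSeconds: number): void {
    for (const cmd of this.pending.splice(0)) {
      // ...translate "set course", "fire torpedo", etc. into ECS component changes...
      console.debug(`applying ${cmd.station}:${cmd.verb}`);
    }
    // ...integrate physics by dtSeconds, resolve collisions, refresh sensor contacts...
  }
}

class NarrativeRunner {
  // Slow narrative loop (seconds to minutes): check story triggers against world state.
  update(world: World): void {
    // ...evaluate Ink conditions, surface new dialogue, objectives, or choices...
  }
}

const world = new World();
const narrative = new NarrativeRunner();

setInterval(() => world.tick(1 / 60), 1000 / 60);  // fast loop
setInterval(() => narrative.update(world), 5_000); // slow loop
```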
ECS and Ink are both inherently deterministic. Both march forward in the ways computers do: one CPU tick at a time. We can assert that, given a known set of inputs, we can expect a known set of outputs.[1] Though this provides some certainties during development, first contact with a crew brings enough entropy to break any assumptions we might make about how things will go once we dim the lights and start the flight.
Given that crews are unpredictable (and that’s a good thing), how might we track the development of moral decisions during a flight?
Generally speaking, moral consequences are encoded in the narrative structure. Story authors consider potential consequences or choices, and then write them into various branches. That is probably good enough for most stories. That said, stories rely on constraints on what is possible so that they remain coherent and engaging, and so that authors don’t pull their hair out over the iceberg of material they would need to write as more dimensions are added to the space of possible actions.
Flattening moral decision-making into something measurable could make it easier for authors to create units of content that become available under certain moral conditions, without knowing in advance which choices brought the crew to that point. This essay aims to explore how we might track moral consequences over time, and how that tracking can be used to gate content without inciting a riot of story authors crushed under the burden of all the content they have to write to make stories work.
In other words: The goal is for storytelling to be a fractal art, one where we can chuck bits and pieces of stories into a pile without a ton of regard for what may have come before or what may happen next, and for the system to naturally weave that into a fully coherent narrative experience. Units of narrative content should compound without requiring a superhuman understanding of every angle of potential story that could occur.
~~Fabling~~ Labeling crew decisions
Let’s start with a really simple moral measurement system. Taking inspiration from the game Fable, what if we were able to assign a morality score of plus or minus to each action taken by the crew? In Fable, choices are good or bad, and players accumulate a moral score based on their actions.
This could be developed within Ink itself: as you take certain actions, you might gain or lose points in your moral score. The story could then gate access to content based on this morality score.
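As a rough sketch of what that bookkeeping amounts to (in practice the score would live in an Ink variable the story increments and checks; the action names and thresholds below are made up):

```typescript
// Hypothetical Fable-style ledger: each action has a fixed moral delta,
// the crew accumulates a single score, and content gates on thresholds.

const moralDeltas: Record<string, number> = {
  rescue_survivors: +2,
  share_supplies: +1,
  fire_on_unarmed_ship: -3,
};

let moralityScore = 0;

function recordAction(action: string): void {
  moralityScore += moralDeltas[action] ?? 0; // unknown actions are morally neutral
}

// The kind of gating an Ink conditional would express: which story hub opens next.
function availableHub(): string {
  if (moralityScore >= 3) return "beacon_of_hope_hub";
  if (moralityScore <= -3) return "feared_raider_hub";
  return "neutral_hub";
}

recordAction("rescue_survivors");
recordAction("fire_on_unarmed_ship");
console.log(moralityScore, availableHub()); // -1 "neutral_hub"
```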
As a starting point, this is perhaps good enough. Beyond this, things start to get complicated. Narrative structures will implicitly guide moral options: if you are going out on a mission with a companion NPC ship and you blow that NPC ship up, is the flight over? Do you reset back to before your mistake? Questions like these are going to come up in any storyline, regardless of any formal moral simulation. Hence, it may be good enough to just trust the narrative structure and lean into writing content for all the different branches. Perhaps it’s enough to use preconditions when generating content or choices, and if the main storyline runs out of options before reaching an intentional end state, the flight stops and the crew rewinds to where they could have made a different decision.
However, ~~because we can, therefore we must~~ what if we want to account for flights without flight directors, where the crew starts trying to push the boundaries of what is possible in the storytelling space? Or, even with a flight director, crews may make choices that story authors haven’t fully considered yet, meaning the moral impact of an action may be real and unaccounted for. How do we account for choices that influence the moral arc of a story that we simply didn’t expect?
Constraints are king
In Inkle’s “Heaven’s Vault” (an RPG built with Ink that gives players a high degree of agency), the player can move around and pick up objects. The player can talk to NPCs. A few “mini game” experiences around language translation unlock new areas of the map.
That finite list of actions or “tools” in a player’s toolbox is rather short. This means that story writing is reasonably constrained, making it possible to account for potential outcomes.
The vastness of space, on the other hand, lends a lot of options to players. What if players zip off course in some other direction? Though the emptiness of space provides its own constraint (even “nearby” stars are unimaginably far away), players could still employ their tools in ways that make storytelling rather hard.
This is true, if only because of the sheer number of tools available to players. Bridge simulators are fun partially because you have multiple players. Each player needs things to do for the entire flight, so they have a workstation full of buttons and sliders that actually affect the flight. You can set a course, adjust thrusters, go slower- or faster-than-light, scan inside or outside the ship, write messages, call NPCs on the radio, target stuff outside your ship, deploy kinetic or energy weapons, launch probes with any number of payloads, and more.
All of these options create a combinatorial explosion of potential paths that could be taken during a flight. Ink works as a storytelling medium when you balance the number of possible options at any given time with the iceberg of content players can interact with. On your first playthrough of 80 Days (a relatively simple Ink game from over 10 years ago), you might experience 2% of the game’s total content. The fun in that game, and its inherent replayability, is a balance between getting good at the mechanics (Can you beat the clock and win the bet?) and experiencing page after page of prose in a massive tree of content. The sheer mass of the content is part of the magic.
> Sometimes magic is just someone spending more time on something than anyone else might reasonably expect. ~ Teller
More than labels
The problem, then, with our Fable-based morality system is that for any action players could take, there is a surrounding moral context that determines if what is being done is good or bad.
Moral philosophy may come to the rescue here to a degree, since how we determine if something is good or bad is going to depend on the moral underpinnings of the flight itself. To make this concrete, let’s explore “A Cry From The Dark”, a flight about rescuing a planet that’s being destroyed by an environmental disaster. The key moral issue in this flight might as well be straight from the Titanic: Given more survivors than lifeboat seats, who do you rescue?
- Do you overload your lifeboats and risk everyone’s lives, trying to save as many as possible?
- Or do you choose who lives and who dies, increasing the odds that those chosen for survival will actually make it?
- When choosing the survivors, do you choose the strong and powerful?
- Or do you rescue those needing the most aid?
- What if the strong are strong because they put in more work to protect themselves from environmental collapse, digging bunkers deep in the earth?
- What if the weak are weak because they were forced to do the digging?
In this series of essays on nonviolence in interactive storytelling, I’ve been rooting morality in empathy - the idea that “Others matter like we matter.” Under this moral framing, one would seek to rescue as many people as possible, perhaps prioritizing those at the most immediate risk of dying. Crew members would be risking their own lives to save others, considering that a harm to one is a harm to all.
If we were to root our morality in an ethic of “If it’s good for my tribe, then it’s inherently good” then perhaps we would prioritize rescuing the strong and powerful - those refugees who are most likely to contribute to our society in ways that benefit our tribe the most. Bringing back the weakest refugees would be a waste of resources and harmful to our own people.
Utilitarianism would suggest sorting the refugees by whichever calculus maximizes overall well-being and filling the lifeboats until we run out of seats.
Authoritarian morality would defer to some supreme moral authority who would rule on the best course of action, keeping our hands clean even if our crew is the one doing the dirty work of rescuing people. Whether this moral authority considers a matrix of pre-written rules, or intuitive judgements from a council of leaders, ultimate responsibility would be placed on some third-party authority. Deviating from those edicts would be a moral violation, even if it saved more lives or reduced harm.
Without turning this post into CliffsNotes for “How to Be Perfect: The Correct Answer to Every Moral Question”, the point is that our moral foundations vary based on a matrix of beliefs and values[2], and a flight may need to be responsive to a variety of moral framings in order to account for different story paths.
This, however, nudges us back in the direction of moral weight being a matter of narrative content: choices that are immoral are not explicitly weighed as such, but simply take the crew down a different branch of the narrative. Moral impact becomes a matter of what a story author is willing to consider in their storytelling. That, in turn, runs back into the broader combinatorial explosion of every action players can take during a flight, crossed with every entity they can interact with. Can authors craft content that accounts for the nuance involved in every branch of the story?
Perhaps the answer is, “Whether they can or they can’t, they must.” How a bridge simulator handles “dead ends” becomes a matter for a flight director or pre-written content to resolve. In theory, the number of paths players take through a flight could simply be limited by the narrative affordances presented to them: there is only one destination they can set a course to, there are only so many weapons, they can’t defeat an opponent with violence so they have to try something else, and so on.
Taking one last crack, however, at computing morality, let’s consider what such a system could look like. We’ll call it Anubis.
The Anubis System
This is admittedly pretty speculative, so take the following with a hefty grain of salt. The goal of this exercise is to explore what an automated moral engine would need to take into account in a game context.
Anubis is a record keeper and a judge. Anubis records events that happen during the game, and on certain trigger conditions will digest a series of events and proclaim a Moral Judgement that records the moral impact of those events. That judgement can then inform narrative consequences.
What Anubis does is already informally encoded in an Ink story by its author. The goal with Anubis is to make that encoding explicit and formal, and, by doing so, to make it easier to generate morally interesting narratives.
The moral events that Anubis tracks ought to be relatively infrequent. We don’t need a journal of every tick of the event loop. Events are more about atomic actions taken by an agent (NPC or human): sensor scans, tractor beam engaged or disengaged, messages sent, etc. The contents of these events are going to vary considerably. Perhaps the most important thing is being able to trace a set of events so as to assign “blame” for what comes next: the Moral Judgement.
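One possible shape for that journal, with a `causes` field carrying the blame trail. Every field name here is an assumption, not a spec:

```typescript
// Hypothetical journal entry for an atomic action taken by an agent.

interface MoralEvent {
  id: string;
  atSimTime: number;                      // simulation clock, not wall clock
  actor: string;                          // NPC or human station that acted
  action: string;                         // "probe_launched", "tractor_engaged", "message_sent", ...
  targets: string[];                      // entities acted upon
  causes: string[];                       // ids of earlier events this one depends on
  facts: Record<string, number | string>; // raw data captured from ECS at the time
}

const journal: MoralEvent[] = [];

function record(event: MoralEvent): void {
  journal.push(event); // append-only; judgement is deferred to a later trigger
}
```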
When Anubis “Proclaims Moral Judgement”, that judgement describes a “moral vector” in some abstract moral space. The dimensions of that space correspond to the building blocks of moral foundations theory:
- Care/harm
- Fairness/cheating
- Loyalty/betrayal
- Authority/subversion
- Sanctity/degradation
- Liberty/oppression
Each Judgement will describe the moral impact along each of those dimensions. How each of these components is computed is yet to be determined, and is perhaps the “hard part” of all of this. Even so, it isolates the complexity into a single component: given a sequence of described events, output a moral vector corresponding to moral foundations theory. This has been done in sentiment analysis settings, so it’s not impossible.[3][4]
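Building on the event-journal sketch above, a judgement might be little more than a signed number per foundation plus enough bookkeeping to trace it back to the journal. The field names are assumptions, and the scoring function is deliberately left as a stub:

```typescript
// The six moral foundations as a signed vector; positive values lean toward the
// virtue side of each pair, negative toward the vice side.

interface MoralVector {
  care: number;       // care (+) / harm (-)
  fairness: number;   // fairness (+) / cheating (-)
  loyalty: number;    // loyalty (+) / betrayal (-)
  authority: number;  // authority (+) / subversion (-)
  sanctity: number;   // sanctity (+) / degradation (-)
  liberty: number;    // liberty (+) / oppression (-)
}

interface MoralJudgement {
  blamed: string[];     // the agent(s) Anubis holds responsible
  affected: string[];   // agents or populations on the receiving end
  vector: MoralVector;  // magnitude and direction of the moral impact
  eventIds: string[];   // the journal entries this judgement digests
}

// The hard part, left as a stub: turn a chain of events into a vector.
function scoreFoundations(events: MoralEvent[]): MoralVector {
  // ...heuristics, hand-authored rules, or a model trained on labeled events...
  return { care: 0, fairness: 0, loyalty: 0, authority: 0, sanctity: 0, liberty: 0 };
}
```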
This Moral Judgement can then be viewed through a Moral Lens that interprets its impact relative to a given Moral Tribe. Such a “moral lens” is about interpreting a judgement relative to one’s own moral matrix. A crude example: if A hits B with a torpedo, A’s tribe may celebrate while B’s tribe may be outraged - all from the same underlying event. Thus, the Moral Judgement offered by Anubis is more about “moral nutritional content” than a fully metabolized conclusion that bears narrative weight. Even so, considering a series of Moral Judgements about a crew’s actions could be enough to indicate whether they are acting in good or bad ways.
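A crude lens could weight the foundations by how much a tribe cares about each, then discount or invert the result depending on who was harmed and who did the harming. The weighting scheme below is entirely made up:

```typescript
// Hypothetical moral lens: the same judgement lands differently per tribe.

interface MoralLens {
  tribe: string;
  members: Set<string>;   // which agents this tribe claims as its own
  weights: MoralVector;   // how strongly this tribe feels about each foundation
}

function interpret(j: MoralJudgement, lens: MoralLens): number {
  const v = j.vector, w = lens.weights;
  const weighted =
    v.care * w.care + v.fairness * w.fairness + v.loyalty * w.loyalty +
    v.authority * w.authority + v.sanctity * w.sanctity + v.liberty * w.liberty;

  const affectsUs = j.affected.some(a => lens.members.has(a));
  const weDidIt = j.blamed.some(b => lens.members.has(b));

  // A torpedoes B: B's tribe feels the full (negative) weight, A's tribe may read
  // the same harm as a win, and distant bystanders barely register it.
  if (affectsUs) return weighted;
  if (weDidIt) return -0.5 * weighted;
  return 0.25 * weighted;
}
```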
Anubis in Action
Let’s suppose the crew takes some action, like launching a probe. This creates an ECS entity with a position in space, information about its payload, and so forth. At the same time, Anubis records the event, along with “blame” information about where the probe came from. Capturing objective facts is all that matters at this point. This journal of events serves as the backbone for the Moral Judgements Anubis makes later on.
Let’s suppose that another ship comes along and uses a tractor beam to nudge this probe off course, sending it crashing into a populated planet. Anubis would record the tractor beam nudge, the course change, and the impact. ECS would handle simulating the velocities, the collisions, and so forth.
Anubis could see that the population of the planet took a sudden nosedive. Or, we might hardwire ECS to signal Anubis when a collision happens, telling it to proclaim Moral Judgement and preloading that judgement with the impact of the collision. Offering hints to Anubis unburdens some of the weight of the Moral Judgement process, making it more efficient and accurate. A collision between a probe and an asteroid is unfortunate and perhaps of zero moral value within a flight. A collision with a populated planet is a moral catastrophe. However, a pattern of repeatedly wasting probes (and seeing this sequence of waste) could perhaps rise to the level of a Moral Judgement that impacts the narrative.
Anubis’ job would then be to consider the chain of events leading up to the impact, constructing our moral foundations theory vector.
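Piecing the earlier sketches together, the ECS-to-Anubis hint might be a collision callback that walks the `causes` links backward and hands the whole chain to the scoring stub. The helpers and thresholds here are all hypothetical:

```typescript
// Hypothetical collision hook: ECS tells Anubis "this just happened", and Anubis
// decides whether it rises to the level of a Moral Judgement.

function traceCauses(all: MoralEvent[], eventId: string): MoralEvent[] {
  const byId = new Map(all.map(e => [e.id, e]));
  const chain: MoralEvent[] = [];
  const queue = [eventId];
  const seen = new Set<string>();
  while (queue.length > 0) {
    const id = queue.shift()!;
    if (seen.has(id)) continue;
    seen.add(id);
    const event = byId.get(id);
    if (!event) continue;
    chain.unshift(event);        // oldest causes end up first, the impact last
    queue.push(...event.causes);
  }
  return chain;
}

function onCollision(collisionEventId: string, casualties: number): MoralJudgement | null {
  if (casualties === 0) return null; // probe vs. asteroid: wasteful, not immoral

  const chain = traceCauses(journal, collisionEventId);
  if (chain.length === 0) return null;

  const impact = chain[chain.length - 1];
  return {
    blamed: [impact.actor],      // naive; blame assignment is refined in the next section
    affected: impact.targets,
    vector: scoreFoundations(chain),
    eventIds: chain.map(e => e.id),
  };
}
```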
Tracking “Favor”
Moral systems generally consider the harm/help impact of some chain of events, where the harm/help is relative to one or more populations of morally influenced entities. If I see myself as “one with the universe”, kicking a rock could be considered a matter of moral peril. If I’m a sociopath, then the only thing that matters is what I want. Naturally, most morality falls somewhere between those extremes of self-interest and self-annihilation.
Hence, Anubis would need some sense of “tribal boundaries” - which entities belong to which tribes, and therefore the relative moral polarity of a given event for each of them.
Going back to our probe-nudged-into-planet scenario, we have three sets of moral agents to consider: the planet’s population, the ship that did the nudging, and our own crew. The affiliation of each moral agent is part of what we need to weigh the scenario.
Anubis sees the sequence of events:
- Probe launched from A
- Probe pushed by B
- Probe impacts C
A sufficiently smart Anubis would be able to reason that if step two hadn’t happened, step three couldn’t have happened, so the “blame” here falls on B. There are bigger considerations around why B might have chosen to push the probe (were there preceding events, like collusion between A and B, so that both are to blame? Was B acting in concert with another agent D?).
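Staying with the sketch, one crude way to encode that reasoning is to treat the most recent action before the impact as the proximate cause, and to spread blame if the journal shows earlier coordination with that agent. This is a stand-in heuristic, not a real counterfactual analysis:

```typescript
// Hypothetical blame heuristic over the A -> B -> C chain: the launch alone
// didn't doom C, the push did, so B carries the blame, unless earlier events
// show others acting in concert with B.

function assignBlame(chain: MoralEvent[]): string[] {
  if (chain.length < 2) return [];

  // The most recent action before the impact is the proximate cause:
  // in the probe scenario, the push (B), not the launch (A).
  const proximate = chain[chain.length - 2].actor;

  // If earlier events show coordination with the proximate cause (for example,
  // messages sent to them before the push), spread the blame.
  const colluders = chain
    .slice(0, -2)
    .filter(e => e.action === "message_sent" && e.targets.includes(proximate))
    .map(e => e.actor);

  return [proximate, ...new Set(colluders)];
}
```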
Under this line of reasoning, Anubis would be able to assess the magnitude of the impact against C (Perhaps just using raw data emitted from ECS at the time of the Moral Event, like “3000 civilians lost their lives”), and then determine the polarity of the event relative to agents who learn about the Moral Event.
Anubis may record a Moral Judgement at this point, declaring that “B’s morality is affected by the following vector”, and that vector describes the magnitude of the impact against C and the moral direction that the winds are blowing from that event in our abstract moral space. In this case, those winds are against C. But, how should A feel about them?
Once a Moral Judgement has been recorded, the “conscience” of each moral agent in the simulation will eventually be updated - but only if they learn of the events themselves.
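That “only if they learn of the events” clause is worth making explicit. One way to model it: each agent carries a conscience with its own record of known events, and a judgement only lands once those events are known. Again, a sketch on top of the earlier types:

```typescript
// Hypothetical per-agent conscience: judgements only land once the agent
// actually knows about the events behind them.

interface Conscience {
  agent: string;
  knownEvents: Set<string>;       // journal ids this agent has learned about
  standing: Map<string, number>;  // how this agent currently regards other agents
}

function learn(conscience: Conscience, eventIds: string[]): void {
  for (const id of eventIds) conscience.knownEvents.add(id);
}

function applyJudgement(conscience: Conscience, j: MoralJudgement, lens: MoralLens): void {
  const knowsEnough = j.eventIds.every(id => conscience.knownEvents.has(id));
  if (!knowsEnough) return; // Anubis saw it; this agent hasn't, so no consequences yet

  const impression = interpret(j, lens);
  for (const blamed of j.blamed) {
    const current = conscience.standing.get(blamed) ?? 0;
    conscience.standing.set(blamed, current + impression);
  }
}
```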
Anubis is not a Gossip
Considering that Anubis can see everything, but moral agents like A and B can’t, what moral consequences should be dealt, and when? Ultimately, that’s the province of the narrative engine. Anubis’ job is perhaps done at this point, at least until the narrative engine asks for recent Moral Judgements that could inform higher-level decisions about narrative content. If a “really bad thing” happened recently that the narrative isn’t fully equipped to handle, we could still fudge our way through some narrative consequences:
- Ink asks Anubis for recent Moral Judgements
- Ink checks thresholds on those moral vectors and gates content accordingly (see the sketch after this list)
- More generic story paths could unfold as a fallback, things like “Captain, what have you done? We’re receiving reports about an incident that recently unfolded.”
- This is clearly unsatisfying, and again nudges the conversation in favor of “Authors need to consider all of the moral consequences involved, constraining the potential space so that the narrative can’t go so far off the rails that it becomes incoherent or demands too much content.”
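At the integration seam, that loop could be as small as the following. The summary signals, the variable names, and the `setVar` callback are all assumptions about how a host might write values into the Ink runtime’s story variables:

```typescript
// Hypothetical bridge between Anubis and the narrative loop: before each slow-loop
// update, recent judgements are collapsed into coarse signals the story can branch on.

function summarizeForStory(recent: MoralJudgement[]): { netCare: number; worstCare: number } {
  let netCare = 0;
  let worstCare = 0;
  for (const j of recent) {
    netCare += j.vector.care;
    worstCare = Math.min(worstCare, j.vector.care);
  }
  return { netCare, worstCare };
}

// Push the summary into story-visible variables and let the script gate on them.
// Variable names are made up.
function updateStoryVariables(
  setVar: (name: string, value: number) => void,
  recent: MoralJudgement[],
): void {
  const { netCare, worstCare } = summarizeForStory(recent);
  setVar("net_care", netCare);
  setVar("worst_recent_harm", worstCare);
  // An Ink conditional can then route to a fallback knot when worst_recent_harm
  // crosses a threshold: "Captain, what have you done? We're receiving reports..."
}
```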
Ultimately, it’s in the hands of the narrative engine and the narrative designer to weigh the moral consequences of actions in a way that is consistent with the story being told.
1. Even so, during a running flight we can’t fully predict the outcome all the time: subtle variations in input can lead to chaotically different outputs, and some dice rolling may shift us down unpredictable paths in a predictable way. Such variations are, of course, mostly due to crew influence. Still, Ink keeps us on a fixed set of rails known beforehand, even if the potential web of paths feels practically infinite.
2. For further reading on Moral Foundations Theory, check out “The Righteous Mind” by Jonathan Haidt.
3. Wikipedia’s article on Moral Foundations Theory touches on this under “Morality in Language”.
4. Whether all of this is worth it for a bridge sim is left as an exercise for the reader.