NOTE: This is the first edition of the project documentation, compiled in the form of a short book. It's still a work in progress, which means a lot of errors and unfinished pages. If you've spotted a silly error, or want to add a paragraph somewhere, you can create an issue on GitHub.

What does this book contain?

It contains a compilation of easy-to-read documents describing the project, its various goals and elements.

It also covers things that are relevant to the process of creating simulation models and handling data using the proposed API.

Who is it intended for?

It's for anyone who wishes to understand the design behind outcome simulations: how they are run, and how the data and simulation models (collectively called content) can be created and handled, among other things. Getting a good grip on the conceptual makeup of the architecture is recommended before getting into creating content.

It's also for anyone who doesn't want to dive into the specifics of how things are done "behind the scenes", but is instead interested in creating content and learning how to use the provided tools. It's okay to skip the boring chapters and get into running simulations right away - you can always revisit certain chapters as you encounter problems later along the way.

How is it written - aiming for accessibility

Documentation for this project is meant to be easy to read and understand by anyone who doesn't have any prior experience with computer programming or game modding.

That said, this first edition doesn't yet live up to this high standard of accessibility. It still contains material that requires some prior experience or knowledge in the areas mentioned above. The goal for the next editions will be to further refine the documents in this book to:

  • reduce usage of highly "industry-specific" terms
  • increase the number and quality of explanations where the above is not possible
  • increase the number of links to external resources
  • provide illustrations to make things easier to understand

Introduction to the project

This page serves as a quick introduction to the project. It's written in the form of a list of answers to some frequently asked questions.

What is this project about?

It's about creating a user-friendly environment for simulation model design and processing. It's about discovering possibilities for collaboration on certain kinds of simulation models.

At a more basic level it's about discovering a good minimal simulation architecture that's useful, extendable and easy to use.

What are the overall goals for this project?

  • provide a system for modeling and simulating social, economic as well as natural systems, and relationships between them
  • provide an inclusive environment for simulation modeling
  • provide a basic and easy to reason about simulation framework
  • provide a relatively easy to learn and simple to use interface for simulation modeling
  • provide a simple programmatic interface for interacting with models and simulations that can be used by custom applications

How useful is it right now?

Right now the project consists of a proposed system for how collaborative simulation-modeling could happen, as well as experimental software implementing things that are necessary for this to happen.

If you're ready to build from source (Rust programming language) you can already run some of the software.

See project overview for more information about the software sub-projects, and the project status page to learn more about what's being actively worked on right now.

How useful could it become?

That's hard to say. It depends on how useful the base simulation engine and its API turn out to be.

It's designed to be relatively basic and generic so it can scale well, but it's not certain that it will.

The design of the engine itself imposes important limitations on the possible simulations to be created for it. There are trade-offs to be had, as with most things, and the overall design here is influenced by the larger goals of the project.

Community created content?

The goal is to create a situation where multiple users can collaborate on files organized into versioned modules.

User files (for the sake of simplicity also collectively called content) are parsed and a simulation instance is spawned using that data.

User files provide both the initial state information (here state meaning a data-based representation of an object at some point in simulation time; we call this data) as well as the computation instructions necessary for running the simulation.

Project Overview

This project is not monolithic; it's composed of a number of smaller subprojects.

As mentioned in the introduction, at the core of the project lies the simulation engine itself. It handles functionality related to parsing and running simulations, exposing a simple interface for interacting with the simulation data to the programmer.

Then there are the tools that provide an environment for working with the simulations. Tools also provide new layers of functionality and interoperability, for example offering a network interface (see endgame).

Finally there are games and other applications.

Simulation engine

The engine is not an executable application. It's a library that can be used by other applications to create and run simulations.

Simulation engine handles:

  • parsing input data
  • creating simulation instances
  • processing simulation instances
  • reading and writing simulation instance data

Implementation details aside, these tasks are what enable applications to make use of outcome simulations.

The engine takes care of all the details of creating and processing simulations; using the library doesn't require complete knowledge of how it works. To learn about how the engine works, check out the next chapter.


Tools

To be able to run simulations we need some kind of application that makes use of the simulation engine.

The most basic tool is the command-line based endgame. Some of its functionality is also exported as a library so it can be used within other applications. One useful example is the networking layer functionality.

The command line is not for everyone; that's why there's also furnace, a GUI app with a window-based interface. It works on all popular operating systems (Linux, Windows, macOS). furnace is not exactly a replacement or alternative for endgame; rather, it builds on its features to be even more useful to the user.

Games and other applications

Games are one good example of incorporating outcome simulations into different kinds of projects.

Anthropocene is a modern-day global strategy game. It doesn't use the simulation engine library directly; instead it uses the networking layer provided by endgame. It serves as a demonstration of how all kinds of projects, including games, can make use of outcome simulations no matter the framework or the programming language used - the only requirement here is the ability to send and receive data over a TCP connection.

Basic concepts

In this chapter we’ll take a high-level perspective on some of the basic concepts behind the simulations.

It's recommended to go through this chapter to get an understanding of the basic ideas around which the system is organized.

If you're not so much interested in learning all that now, and want to get to creating mods and scenarios right away instead, check out the light-weight guided intro to modding under the guides and tutorials section.

At a glance

  • simulation models and data are created by users, therefore
  • the engine itself doesn't contain simulation models or data ("moving parts" like entities and components are generic, almost nothing is hardcoded)
  • data-driven architecture, everything is based on addressed variables (including "event-like" behavior)
  • processing scheme based on single clock, multiple clock events
  • logic based on state machines, each state containing a set of executable commands ("micro programs")
  • built-in separation of the entity objects and their data from each other, reflected in the way inter-entity data operations are written compared to intra-entity

Entity and component

Outcome simulations are based on an approach where the system consists of a set of entities to each of which multiple components are attached.

The result is a flexible arrangement of objects that can be used to accomplish many different tasks.

The exact arrangement of entities and their components can be dynamic: entities can be created and destroyed, components can be attached and removed. It can also stay fairly static. Both approaches can be used, either alone or complementing each other. It all depends on how the user decides to design their mods.


Entity

Entity is the fundamental object in the system. Its most important feature for us right now is that it can hold components.

We can create many entities, or only a few. There are no built-in entities.

Entity type

When it comes to entities, a type helps define what components can be attached to an entity. Component definition includes the ability to specify entity type with which that component is compatible.

Components will use entity-local addresses to get variables, and we need this compatibility notion to be able to make some assumptions about what entity our component is attached to.

Each entity type introduces a new namespace for entities of that type. This means we can have entities /yellow/banana and /green/banana and they won't collide namespace-wise.
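As a sketch in the declaration style used later in the tutorial chapter (the yellow, green and banana names are made up for illustration), this could look like:

```yaml
# two hypothetical entity types
- id: "yellow"
- id: "green"

# two entities sharing the name `banana`, but living in
# separate namespaces: /yellow/banana and /green/banana
- id: "banana"
  type: "yellow"
- id: "banana"
  type: "green"
```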


Component

Component lies at the core of computation. Each component instance is assigned to a single entity.

Each component defines a set of its own variables and contains a single state machine (see the next sub-chapter).

Component type

We can use component type to create sets of components that will have common characteristics.

A declaration of a new component type can contain things that we would normally declare for components themselves. What we define here will act as a default for any new component of that type we might declare elsewhere. This default can be overridden for any of the entries by simply declaring that entry on the component.

Component type can be also used as a way of organizing components, and/or expanding the component namespace (like with entity type).

# declare a new component type
- id: decision
  - id: template_var
  - id: template_state_1

# use the new component type
- id: choice_213
  type: decision

# component `choice_213` has a var `template_var`
# /region/e_01001/decision/
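To illustrate the override behavior mentioned above, here is a hypothetical sketch following the same declaration style (the names are made up):

```yaml
# component type declaring a default entry
- id: decision
  - id: template_var

# component of that type overriding the default by
# declaring the same entry itself
- id: choice_214
  type: decision
  - id: template_var # takes precedence over the type's default
```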

State machine

An approach based on finite state machines is an important feature of the system. It defines the style in which all the computation is handled.

While it is a fairly low-level model for computation, it remains a fairly intuitive one. There are only a handful of rules we need to remember:

  • only one state is active at any given moment
  • we can transition between the states during the course of the simulation
  • state transitions can be triggered both internally (from within the state machine) and externally (by another state machine)

Where do the state machines exist

State machines are tied to components. Every component contains a single state machine. Indeed we could say that every component is a state machine.

When components are discussed the notion of the state machine is usually present, but without specifically calling it a "state machine".


States

Each state machine can have an arbitrary number of states defined.

Each state on a state machine can contain a number of commands, which are small executable instructions.

By default every component has a single empty state, called none. The none state is incapable of transitioning to any other state on its own - a transition out of it can only be invoked from outside the state machine.
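Putting these rules together, here is a sketch of a component with a two-state state machine, written in the declaration style used in the tutorial chapter; all names, and the exact transition syntax, are illustrative:

```yaml
# hypothetical component with two states
- id: mood
  type: property
  entity: region
  start_state: calm
  trigger_event: tick
  - id: calm
    # internal transition: jump to the `alarmed` state
    - eval string/status == "danger" --if-true goto.alarmed
  - id: alarmed
    - print string/status
```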

Trigger events

Directly related to the notion of "clock ticks" is the notion of clock events. Each single processing turn is called a "tick". Clock events are "events" that get triggered every 'n' ticks.

Each component state machine contains a notion of trigger (clock) event. It simply means that the component state machine will be processed only once the proper clock event is triggered.

For example we could have a day clock event that is triggered once every 24 ticks. If we created a component state machine with the day clock event trigger, it would only get processed once every 24 simulation ticks.
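Using the tutorial chapter's declaration style, such a component could be sketched as follows (the day event and all the names here are illustrative):

```yaml
# hypothetical component processed once per `day` clock event,
# i.e. once every 24 simulation ticks
- id: daily_report
  type: property
  entity: region
  start_state: main
  trigger_event: day
  - id: main
    - print int/count
```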

Address and variable

Variables existing in the simulation can be referenced using addresses.

An address looks similar to the URLs you know from your browser, or the file paths from your operating system. Multiple parts are separated by forward slashes (/). Here are a few examples of different address forms:

# full path (sometimes called a "global" path, as opposed to "local"), starts with '/'
# relative path ("entity-local"), starts with '~'
# relative path ("component-local")
# dynamic path, contains another address within curly brackets
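To make the four forms above concrete, here are hypothetical examples built from names used elsewhere in this book; the exact spellings are assumptions for illustration only:

```yaml
# full ("global") path, starts with '/'
/region/e_01001/decision/int/count
# relative ("entity-local") path, starts with '~'
~/decision/int/count
# relative ("component-local") path
int/count
# dynamic path: the address in curly brackets is resolved
# first and its value is inserted into the path
/region/{str/target_region}/decision/int/count
```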

Building addresses we generally follow this scheme:


Relative address

Relative (or local) means that part of the address is omitted because it can be inferred from the execution context.

Dynamic address

Curly brackets can be used to insert some other variable’s value into the path by referencing the path to that other variable.


Command

Command, abbreviated to cmd, is a small executable instruction built into the simulation engine.

Commands can do very different things, from simple mathematical operations to running scripts.

All commands accept arguments of some kind.

Declaration using map or string

The user often has a choice to use either a map representation or a string representation of a command. Here's a very simple example of both:

# string
- print str/something
# map
- cmd: print
  addr: str/something

Not all commands support both representation modes. Some commands, such as lua_script, only support the map representation.

Required arguments

Required arguments are also called positional arguments. Commands can have different numbers of required arguments. print only requires a single argument, for example.

String and map representations have slightly different requirements when it comes to positional vs optional arguments. For string representation positional arguments should always be placed right after the command and before any optional arguments.

command <positional-1> <positional-2> [OPTIONS]
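For instance, the set command (detailed in the Commands API section) takes two positional arguments; here is its string representation as used later in the execution flow example:

```yaml
# `set` with both positional arguments filled in:
# the target address first, then the source
- set string/name "pancakes"
```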

Optional arguments

As the name suggests, these are the arguments that are not required, but instead optional.

For string representations, optional arguments follow a syntax inspired by conventional GNU/POSIX practice. Don't worry if that isn't at all familiar - the rules are very simple:

  • we use - to declare a short option, e.g. -f
  • we use -- to declare a long option, e.g. --false
  • options can take in values, e.g. -f some_value or --false some_value
  • options can be without values, these are so called "flags", e.g. -t
    • short flags can be combined, e.g. -ft
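A concrete example of an option taking a value is the eval command's --if-false, as used later in the execution flow section:

```yaml
# two positional arguments first, then a long option with its value
- eval string/name == "cupcakes" --if-false break
```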

Speed vs complexity

For the most part, commands are meant to be simplest possible implementations of some specific actions. They are usually meant to do one task, and to do it well.

For more complex algorithms we use scripts. They can be slower, but we can do even more interesting computation with them.

Execution flow

Commands are bound to component states. Each component state contains a list of commands which constitute its logic. When a component state is triggered for execution, its commands are executed one by one, starting with the first one on the list.

Execution flows from the first command to the last. This flow can be broken or changed. Any command can return a result that will signal breaking or jumping to another command on the list. This introduces the possibility for simple loops.

Command definitions can include options which will dictate their behavior in certain circumstances. This includes the possibility of breaking or changing the execution flow.

# this will break before printing anything
- set string/name "pancakes"
- eval string/name == "cupcakes" --if-false break
- print "this should not get printed"

And here's an example of a switch of sorts.

# this will change current component state based on the evaluated variable
# suppose string/name is currently "Horse"
# first eval "fails"
- eval string/name == "Mouse" --if-true goto.mouse_state
# second eval changes current state to "horse_state"
- eval string/name == "Horse" --if-true goto.horse_state

Content structure

Data management is about handling, organizing and distributing the data that is used to create and run simulations. The main features relevant here are mods, scenarios and snapshots.

Put in one sentence:

Mod is a collection of files, scenario is a collection of mods, snapshot is a serialized simulation instance state.

In this chapter we'll go through those features in more detail.

File formats

The format of choice for this project is YAML. Supported filename extensions include both .yaml and .yml. YAML is used for all module files, as well as all manifest files for modules, scenarios, etc. YAML was chosen because it's much more readable than JSON or XML.

There are also other file formats that are supported, though specifically designated for storing data. For example .json and .png files can be used. See the API section on data blocks for more information and examples.

Semantic versioning

Versioning is used throughout content features for a number of things, including modules, scenarios, and the engine itself. It makes things easier to track as changes are made over time.

One look at any scenario manifest already gives us a bunch of version information for different things:

version: "0.1.0"
engine: "^0.2.0"
- module_one: "0.1.1" # require this exact version
- module_two: "*" # require any version of the module
- module_three: "2.1.*" # can also be written using '~' as "~2.1.1" or simply "2.1"
- module_four: "1.*.*" # can also be written using '^' as "^1.3.0" or simply "1"

All versioning follows the so-called Semantic Versioning format. While it's not crucial to always follow the versioning spec, it's useful to know the basics. The above block gives a few examples. You can play with an online semver calculator if you're totally new to this.


Module

Module, mod for short, is a collection of data files. It's helpful to think of mods as packages - they allow for modularity, in the sense that we can put different collections of mods together and achieve different runnable simulations.

For managing multiple mods within one “environment” we’re moving into the domain of scenarios.

Module as part of scenario

Mods always exist in the dedicated mods directory of a scenario.

File structure

Each mod exists as its own directory structure. Inside the mod directory there has to exist the module manifest.

There are not many strict requirements in terms of the internal organization of files in a module, other than the presence of the module manifest.

Here's a directory tree for an example module:

├── data_test.yaml
├── json_data
│    └── data_file.json
├── module.yaml
├── orgs.yaml
└── regions.yaml

Module manifest

Each mod needs a module.yaml file present in its top directory.

# the following three entries are required
name: "test_mod" # unique name of the module, string without spaces
version: "0.0.1" # version of the module
engine: "0.0.1" # version of the engine the module is compatible with
# the rest is optional
title: "Test mod"
desc: "Just testing."
desc_long: "This is just a testing module, not really usable."
author: "John Doe"
website: ""
dependencies: # list of modules required for this module to work
- another_mod: "0.1.1"

Flexibility within the mod

There is much flexibility when it comes to the organization of user files inside a mod. This flexibility is possible because the files themselves specify everything inside them - the declarations made within module files don't need any additional context. Thus, the structure of directories within the mod, and even the names of the files, are not an essential part of module file processing.

A program reading a module will read all files recursively (given they have the proper .yaml/.yml extension).

Organization of files into directories can be useful, so can be certain approaches to naming the module files. This is left entirely to the user.


Scenario

Scenario wraps a collection of mods into a single simulation environment, so to speak. It's a structure representing, well, a scenario - a certain outline of the plot meant to unfold when the scenario is simulated.

Scenario can be used to create a simulation instance.

File structure

The scenario exists as its own separate directory. Inside the scenario directory there has to be the scenario manifest. All modules should be placed inside the mods directory.

Here's a directory tree for an example scenario called test_scenario.

    ├── mods
    │   ├── module_one
    │   │   ├── data_test.yaml
    │   │   ├── json_data
    │   │   │    └── data_file.json
    │   │   ├── module.yaml
    │   │   ├── orgs.yaml
    │   │   └── regions.yaml
    │   ├── module_two
    │   │   └── module.yaml
    │   ├── module_three
    │   │   ├── comps.yaml
    │   │   ├── events.yaml
    │   │   └── module.yaml
    └── scenario.yaml

Manifest file

Scenario manifest file is always named scenario.yaml.

# those first three fields are required
name: "test_scenario" # scenario name, string without spaces
version: "0.1.0" # scenario version
engine: "0.1.0" # simulation engine version required by the scenario
# the rest is optional
title: "Test scenario"
desc: "Scenario for testing."
desc_long: "This is a long description of our test scenario.
            Notice how we can introduce a line break here.
            Check out YAML specification for more information
            on what you can do with strings."
author: "Adam Adamsky"
website: ""
# modules are loaded in the order presented here
- module_one: "0.2.0"
- module_two: "*"
settings: # settings are an easy way to tweak some crucial variables
  /uni/const/quantum_drive_tech_possible: false


Snapshot

Snapshot is a serialized state of the simulation. You can think of it as freezing the whole thing at a single point in time, with all the information preserved.

Snapshot can be used to create a new simulation instance.

For practical purposes, snapshots function like game saves. Even though we can create a simulation instance from a snapshot, it's not the same as using the data we normally use to create sim instances (modules organized into scenarios). A snapshot can still be modified, of course, but it's far less convenient to modify than a collection of versioned modules.

TBD: Snapshot could potentially optionally contain a collection of archived past simulation states.

Snapshot is contained in a single file. Serialization format is YAML, which means it's human-readable and can be modified relatively easily. Snapshot files can be compressed.

Right now snapshot creation is not well optimized for large simulations.


Proof

Proof is the proposed way to handle running multiple simulations to figure out how the complex systems we create actually operate.

It's based on the following ideas:

  • complex dynamic systems don't yield consistent results, but
  • restricting the length of the simulation makes for more consistent results, also
  • we can run our simulation multiple times and try to spot correlations in the data


Modding

For the purposes of this project we use the word modding in a somewhat broader sense than usual. Usually modding, as in modification, revolves around modifying data that's already well structured and used to feed an existing set of structures baked into an executable.

Our system is more open ended. Modding is closer to modeling, and it's about creating simulation structures almost "from scratch", without content hard-coded into the executable. Of course that's not to say this approach is always superior. It's just more flexible on the user level and potentially allows for some interesting developments.

Design patterns

An important part of modeling any system is good design.

The particular properties of the computation scheme for outcome models require us to think about our design in quite a specific way. We'll need to take the time to talk about some of the important design patterns.

The patterns discussed here are mostly related to the specific theme and "use case" currently promoted by the project. In particular, modeling the anthropocene (game) simulation is one of the main focuses of most of these patterns.

This list is by no means complete. Treat it more like an initial exploration of the topic.


Commands API

NOTE: this API is still very unstable; it's a proof of concept. It will change drastically.

This page includes a list of commands available to the user.

Formats for the string and map representations:

# string
- command <required> [OPTIONS]
# map
- cmd: <cmd_name>
  <required_name>: <required_value>
  <option_name>: <option_value>


print

Print value from address.

# string
- print <addr> [OPTIONS]
# map
- cmd: print
  addr: <address>


add

Increment var at address by the given value.

# string
- add <addr> <value> [OPTIONS]
# map
- cmd: add
  addr: <address>
  value: <value>


set

Set var at address to the value from another address.

# string
- set <target:address> <source:address> [OPTIONS]
# map
- cmd: set
  target: <address>
  source: <address>


eval

Evaluate the var at address against the given test value.

# string
- eval <addr:address> <test_value> [OPTIONS]
# map
- cmd: eval
  addr: <address>
  test_value: <value>


calc

Set var at address to the result of a calculated expression.

# string
- calc <target:address> <expr:expression> [OPTIONS]
# map
- cmd: calc
  target: <address>
  expr: <expression>

Guides and tutorials

This section contains a guided tour to get you started with modding. Each chapter takes on a different topic.

There is a natural progression, going from the complete basics up to more complex topics. Some of the tutorials will include references to previous ones.

NOTE: Each page contains information about the versions of software it's been written for. If you find any of the documents outdated and/or not working as expected with newer versions please file an issue.

First module

outcome 0.2.2

In this short tutorial you will create a simple module, learning about the structure of a module, contents of the module manifest, the nature of declaration blocks and more.

Module structure

The internal structure of modules is almost completely arbitrary. You can organize your files however you want. This is because all module files (also called user files) are "the same" in terms of their internal structure. Inside the files we have declaration blocks that allow us to declare different kinds of elements. No external context is required.

The only element that's always required is the manifest. The manifest always exists in the top directory of the module.

A single module is itself a single directory with a module manifest module.yaml inside. Such a module with only the manifest present is a valid one, even though it's totally empty otherwise.

Create module manifest

Let's start by creating a new directory and naming it: first_module.

Remember that the name of the directory has to always match the name of the module as defined in the manifest.

A module name is a string of characters without spaces. It doesn't really matter whether it's written as first_module or FirstModule, although it's useful to keep the convention consistent within any single project you work on (e.g. a single module or scenario).

Next we'll create a new file. You can use any text editor you want. Call the file module.yaml. This is our module manifest. Paste the following lines into the new file and modify the entries as you see fit.

name: "first_module"
title: "First module"
desc: "First module created using the glorious tutorial."
author: "User"
version: "0.1.0"
engine: "*"

There are a few more entries we could use, like desc_long for a long description and website to point anyone who uses our mod to the right place if they need help or want to submit a fix. We also didn't declare dependencies, which simply means there aren't any.

If you use some sort of version control solution like git, it's useful to include the online address of the repository in the website field.
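Putting the optional entries together with the required ones, a fuller manifest might look like this (the desc_long, website and dependency values are purely illustrative):

```yaml
name: "first_module"
title: "First module"
desc: "First module created using the glorious tutorial."
desc_long: "A longer description of the first module."
author: "User"
website: "" # e.g. the address of the module's git repository
version: "0.1.0"
engine: "*"
dependencies: # modules required for this module to work
- another_mod: "0.1.1"
```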

A new user file

Let's create a new file. Call it whatever you want; just make sure the extension is .yaml or .yml. I created a file named examples.yaml. The directory tree should look something like this:

├── examples.yaml
└── module.yaml

User file declaration blocks

We will include a couple things in our file:

  • new entity type
  • new component type
  • new entity
  • new component

Each of these elements requires us to create a new block. Consider the following example.

new_block:
- element_1 # '-' means a list in yaml, so this is an element in the "new_block" list
- element_2 # '#' means a comment, so anything after it on that particular line is ignored :)

And here's what we're going to include in our file

- id: "region"

- id: "property"

- id: "greenland"
  type: "region"

# a component to track the bear population in a region
- id: bear_population
  type: property
  entity: region
  start_state: main
  # "tick" clock event is built-in, called on every clock tick
  trigger_event: tick
  - pub int count = 2275
  # this is the only state in this component's state machine
  - id: main
    # it will simply print the "count" integer var every tick
    - print int/count

That should do it. There's only one problem though - we can't run a simulation with just a module. Before we can run this and see it print the bear population count every tick, we have one more step ahead of us. We need to create a scenario.

First scenario

outcome 0.2.2

In this short tutorial you will create a very simple scenario, using the module created in the first module tutorial.

Creating scenario manifest

A scenario can be defined as a "collection of modules". That's because the scenario itself doesn't provide any data other than the scenario manifest. Hence there's really not much to creating a scenario other than writing a proper manifest.

Of course, we also have to include a mods directory and include in it all the modules we will declare in the manifest.

Scenario manifest defines the scenario's list of modules, and specifies a bunch of other things. Here's what we'll be creating today:

name: "first_scenario"
title: "First scenario"
desc: "Scenario for testing."
author: "User"
version: "0.1.0"
engine: "*"
# modules are loaded in the order presented here
- first_module: "0.1.0"

As mentioned in an earlier chapter under content structure, only the name, version and engine fields are required; we can omit any of the others and still have a working scenario.

Including the first_module mod

We need a copy of first_module inside scenario's mods directory.

REMINDER: the name of the module directory has to match the name specified in the module manifest. In our case that name is first_module.

Running the scenario

Scenario is treated as a simulation model and can be used to create a simulation instance.

Next up, we'll learn how to use the endgame command-line tool to run our newly created scenario.

Using endgame

endgame 0.1.3

endgame is one of the tools in development right now. It's a command-line interface (CLI) tool, meaning it involves interacting with the shell provided by your operating system.

If you're not sure how to access the shell on your operating system consult an appropriate tutorial for that.

Running the program

You can either download and run a pre-compiled binary or compile the program yourself.

Downloading binary

Probably the easiest way to run endgame is to download a binary suitable for your system and run it. You'll find compiled binaries attached to the bigger releases. Note that the binaries may not be available for every release, also not all architectures/operating systems are covered.

To run the program, navigate from the command line to the directory where you saved the binary, then type in the name of the program to run it.

If you don't know how to run a binary file from the command line please consult the nearest web search.

Compiling yourself

You can also compile endgame yourself.

To do this you will need to install the Rust programming language on your machine. Installing Rust is easy; simply follow the official installation instructions. Installing git is also a good idea, but is not required.

Clone the repository with git

git clone

Alternatively just download the repository as zip archive and unpack it.

From the command line, change the directory to wherever you cloned/unpacked the repository. Then run

```shell
cargo run --release
```

Cargo is the package manager for the Rust programming language. It takes care of downloading all dependencies, building everything, and finally running the program. The --release flag turns on optimizations and makes our program faster in the end, but it makes the build process itself longer.

In this guide we'll be passing arguments to the endgame program. You can pass those arguments right from the cargo run command.

```shell
cargo run --release -- --help
```

Anything after the -- will be passed to our program.

Available functionality

Running endgame, you should be greeted with something similar to the following:

```
endgame 0.1.2
Adam Adamsky <>
Endgame is a command line toolkit for creating,
running and analyzing outcome simulations.

It's part of the effort to create an accessible
playground for world modeling and simulation.
For more info check out

USAGE:
    endgame [FLAGS] <SUBCOMMAND>

FLAGS:
    -d               Print debug information verbosely
    -h, --help       Prints help information
    -V, --version    Prints version information

SUBCOMMANDS:
    init      Initialize new content data structure
    lint      Check content for errors
    test      Test content configuration for memory requirements, average processing speed, etc.
    run       Run simulation
    server    Run endgame in server mode.
    client    Run endgame in client mode.
    coord     Run a cluster coordinator.
    worker    Run a cluster worker.
    gen       Generate optimized binary.
    help      Prints this message or the help of the given subcommand(s)
```

The description for each subcommand tells us what it does. For additional information, we can run any of the subcommands with an added --help flag to learn more.

For this guide we will not be looking into all of the subcommands. We will focus specifically on run.

NOTE: endgame is currently at an early version; a lot of the features are not yet useful or are simply placeholders. The functionality and its specific layout (API) are likely to change over the course of development.

Interactive runner

The most interesting feature available to us in the current version is the interactive simulation runner under the run subcommand.

```shell
./endgame run <path-to-scenario>
```

The --interactive option is set to true by default, so this will start an interactive session.

Let's run the scenario we made in the guide called "first scenario".

You should be greeted with a few lines looking similar to this:

```
Running interactive session using scenario at: ".../endgame/test_scenario"
[INFO] generating sim instance from scenario at: .../endgame/test_scenario/
[INFO] there are 1 mods listed in the scenario manifest
[INFO] mod found: "test_module" version: "0.1.0" (version specifier in scenario manifest: "^0.1.0")
[INFO] found all mods listed in the scenario manifest (1)
[INFO] successfully created sim_instance
You're now in interactive mode.
See possible commands with "help". Exit using "quit" or ctrl-d.

Config file interactive.yaml doesn't exist, loading default config settings
```

The interactive mode enables you to interact with the simulation as it's being run. There are a few different commands that let you do a bunch of different things. Run help to see all available commands. You can also use the TAB key to navigate through the commands more easily (autocomplete). Try typing "h" and pressing TAB twice; it should show you possible commands beginning with "h".

Simply pressing enter with no input (an "empty command") progresses the simulation by one turn. A turn is a number of base simulation ticks.

cfg and show

You can set the number of ticks per turn using the configuration (cfg) variable ticks_per_turn. By default it's set to 1. Let's change this and make one turn do 24 ticks:

```
cfg ticks_per_turn 24
```

You can preview the list of currently set cfg variables; the listing will look something like this:

```
ticks_per_turn          24
show_on                 true
show_list               []
```

Now when we process one turn, the tick count in the prompt should advance by 24 at a time.

The show command takes data from the simulation and shows it to us. It uses the show_list cfg variable as input; it's a list of addresses that can be used to pull data from the simulation. Let's add an address to the show_list.

```
show-add /region/greenland/property/bear_population/int/count
```

Now when you use the show command it should print out the value from the address.

Of course the address has to point to a variable that exists in the simulation.
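An address is really just a slash-separated path. As a rough sketch of how one could be split into its segments (the reading of the last two segments as the variable's type and name is my own interpretation of the example above, not an official definition):

```python
def split_address(addr: str) -> list[str]:
    """Split a simulation variable address into its path segments."""
    # drop the leading slash, then split on the remaining separators
    return addr.strip("/").split("/")

segments = split_address("/region/greenland/property/bear_population/int/count")
# by my reading, the last two segments name the variable's type and its name
var_type, var_name = segments[-2], segments[-1]
```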

The show_on config variable defines whether show should be invoked on each turn. We can easily toggle this with show-toggle.

You can export the current configuration to a file using the cfg-save command.

Adding and modifying the variables from the command line can be tiresome; you can edit the cfg file itself (interactive.yaml) and then just use cfg-reload to load it into the currently running interactive session. Every time an interactive session is started it will look for this file in the current working directory and automatically load it if it exists.
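For example, an interactive.yaml mirroring the cfg variables listed earlier might look like this (a sketch based on the variable names shown above; the exact file schema may differ):

```yaml
ticks_per_turn: 24
show_on: true
show_list:
  - /region/greenland/property/bear_population/int/count
```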

run and run-until

The run command takes a number of base (hour) ticks and executes exactly that many. run-until takes an integer and runs the simulation until the simulation clock count is equal to that number. You can use CTRL-D (EOF) or CTRL-C to break out of run and run-until. There are also runf and runf-until, which are faster but don't allow breaking out of the execution (because they don't take any time to listen for signals while running).

Let's make a clock

endgame 0.1.3, outcome 0.2.2

Building on what you've learned so far, let's create a clock.

Our clock will consist of hours, days, months and years. Once we build an appropriate module for the clock, we will also configure endgame to display our clock in the prompt of the interactive runner, like so:

```
You're now in interactive mode.
See possible commands with "help". Exit using "quit" or ctrl-d.

Loading config settings from file (found interactive.yaml)
[1-1-2015 1:00]
[1-1-2015 2:00]
[1-1-2015 3:00]
[1-1-2015 4:00]
[1-1-2015 5:00]
```
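The prompt above is derived from the simulation's hour-tick count. As a purely illustrative sketch of the arithmetic involved (assuming 24-hour days and simplified 30-day months; the actual clock module defines its own calendar):

```python
def clock_prompt(tick: int, start_year: int = 2015) -> str:
    """Format an hour-tick count as a day-month-year clock prompt."""
    hour = tick % 24                  # hour of the day
    days = tick // 24                 # whole days elapsed
    day = days % 30 + 1               # day of the month (1-based, 30-day months)
    months = days // 30               # whole months elapsed
    month = months % 12 + 1           # month of the year (1-based)
    year = start_year + months // 12  # years elapsed since the start year
    return f"[{day}-{month}-{year} {hour}:00]"

print(clock_prompt(1))  # → [1-1-2015 1:00]
```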

Clock module

Let's create a new scenario called clock_tutorial.

HINT: You can use endgame to initialize a scenario for you if you don't want to create the files manually.


endgame configuration

We can change what endgame displays in the interactive runner's prompt. To do this we will need to include the following in the interactive.yaml config file:

```yaml
prompt_format: "{}-{}-{} {}:00"
prompt_vars:
  - /uni/uni/generic/clock/str/day
  - /uni/uni/generic/clock/str/month
  - /uni/uni/generic/clock/str/year
  - /uni/uni/generic/clock/str/hour
```

REMINDER: endgame by default loads configuration from a file called interactive.yaml in the directory it's run from. If the file doesn't exist you can just create it. Alternatively, use cfg-save while in interactive mode to export the current config to file.

Now you can either quit and start the interactive runner again, or just use the cfg-reload command. The second option is useful because you don't have to exit the current simulation run in order to change the prompt. Instead you can tweak the config and reload it inside the interactive runner at any time.

Note that if any of the addresses listed in prompt_vars is not reachable then the prompt will ignore the custom format entirely and default to showing the usual tick count number.

Creating a modifier


Introduction to Lua scripting

outcome 0.2.2


Regular commands cover the most basic operations we would want to perform on our data. Simple calculations, evaluations, and getting and setting variables can all be done without using any complex scripts.

Whenever we have the need to include more complex algorithms however, we have the ability to include Lua scripts in our models.

To make use of Lua scripts you will need to learn at least the basics of Lua. If you haven't had any contact with programming up until now, don't be discouraged; you will be able to follow this tutorial just fine.

What is Lua

Lua is a lightweight programming language designed primarily for embedded use in applications, and it is one of the most widely used scripting languages.

If you don't know much about Lua there are lots of resources to learn from online.

Performance question

Lua itself is highly optimized and really quite fast. But the Lua scripts we include in our components will always be slower than the bare commands. This is something to keep in mind while designing your model.

Ideally we will only want to use Lua for complex tasks that occur infrequently during the processing. Having a complex Lua script execute many times every simulation tick can have a noticeable effect on performance.

Declaring a simple lua_script

Here's a map representation of a simple lua_script command:

```yaml
- cmd: lua_script
  src: |
    print("Hello world!")
```

inputs and outputs

The scripts we pass to our lua_script command don't have access to any of the variables we can normally access. Instead, we're required to specify what data they will get, and what data we will get back from them after they're finished executing. It's quite a simple arrangement.

Inputs are passed to the script as its global variables. Outputs are read back from the script's global variables after it finishes executing.

```yaml
- cmd: lua_script
  # lua global named "string_var" will be set to the value of `str/local_string`
  string_var: str/local_string
  src: |
    print(string_var)
```
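If we also want values back out of the script, we need to map script globals onto addresses they should be written to. The exact manifest keys for declaring inputs and outputs aren't shown in this draft, so the `args:` and `out:` key names below are purely hypothetical placeholders for the real input/output sections:

```yaml
- cmd: lua_script
  # hypothetical input section: lua global "string_var"
  # gets the value of `str/local_string`
  args:
    string_var: str/local_string
  # hypothetical output section: the final value of lua global
  # "string_var" is written back to `str/string_var`
  out:
    string_var: str/string_var
  src: |
    string_var = string.upper(string_var)
```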