NOTE: This is the first edition of the project documentation, compiled in the form of a short book. It's still a work in progress, which means a lot of errors and unfinished pages. If you spot a silly error, or want to add a paragraph somewhere, you can create an issue on github.
It contains a compilation of easy-to-parse documents describing the project, its various elements and goals.
It also includes information and guides relevant to the process of creating and running simulations, as well as integrating them with other software using existing APIs.
It's for anyone who wishes to understand the thought process behind the design of various elements of the project. It's for people who want to learn how the simulations are created and run, and how the models and data are handled.
It's also for anyone who doesn't want to dive into design details, but is instead interested in experimenting with already existing models and tools. It's fine to skip chapters one may find boring or complicated; one can always revisit them as problems come up later along the way.
Documentation for this project is meant to be easy to read and understand for anyone, whether they have prior experience with computer programming or not.
That said, this first edition is not kept up to this high standard of accessibility. It still contains language and references that may be difficult to parse for people without prior programming experience. The goal for the next editions will be to further refine the documents in this book to:
- reduce usage of highly industry-specific terms
- increase the number and quality of explanations where the above is not possible
- increase the number of links to external resources
- provide illustrations to make things easier to understand
This page serves as a quick introduction to the project. It's written as a list of answers to some of the frequently asked questions.
It's about the creation of user-friendly solutions for simulation model design and processing. It's about discovering possibilities for collaboration on different kinds of simulations, from multiplayer game worlds to models of cities and economies.
At a more basic level it's also about discovering a good minimal simulation architecture that's useful, extendable and easy to use.
Currently there are not many projects aiming to make distributed simulations for games and science easier and more accessible.
One commercial project that exists in this space, which is actually using a similar underlying design approach, is SpatialOS. That said, they are still very much a black box company with no interest in sharing their technology with the community.
Right now the project consists of a proposed system for how collaborative simulation-modeling could happen, as well as experimental software implementing things that are necessary for this to happen.
If you're ready to build from source (Rust programming language) you can already run some of the software.
Yes. Once the software gets mature enough and provides a stable API you will be able to create all sorts of interactive experiences with it.
The main selling point of using outcome to create multiplayer experiences is the possibility of creating very large game worlds with hundreds, even thousands, of concurrently connected players.
Sure. This project may become quite useful for studying complex emergent systems. As it grows it will probably become more and more useful for researchers. It's still not there yet though.
It means that, due to the modularity of the system, it's easier to stick different solutions together and expect them to work. If you wanted to simulate a whole city, you shouldn't have to model things like pedestrian behavior or weather systems from scratch.
Solutions that enable multiple people to work together on different parts of simulation models are still few and far between. Projects like this one can start to change that.
This project is not monolithic. It's composed of a bunch of smaller efforts aimed at different subgoals. This includes not only core software, but also tech demos, documentation and community outreach.
This is where the magic happens.
The outcome repository consists of the core library and the main command-line tool.
At the core of it all lies the core engine library. It defines all the basic functionality related to creating and running simulations, exposing a simple interface to the programmer.
If we were to look into the code, we would find that the outcome-core package doesn't actually implement any concrete networking functionality. What it does provide is a set of abstractions, like nodes, connections and basic messages ("signals"), which it uses internally to sketch out processing routines in a distributed setting without committing to any specific networking solution. Implementing different network transports and topologies on top of outcome-core is therefore fairly easy and doesn't involve hacking on the library itself.
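The layering described here could look something like the following sketch. The trait and type names below are illustrative assumptions, not outcome-core's actual abstractions: the point is only that a transport can be swapped out without the core library knowing about it.

```rust
// Hypothetical sketch of transport layering; names are illustrative,
// not outcome-core's real API. A concrete transport (e.g. one backed
// by zeromq) would implement the same trait as this in-memory one.
trait Transport {
    fn send(&mut self, to: &str, signal: &[u8]);
    fn recv(&mut self) -> Option<(String, Vec<u8>)>;
}

// An in-memory loopback transport, useful for local tests.
struct LoopbackTransport {
    queue: Vec<(String, Vec<u8>)>,
}

impl Transport for LoopbackTransport {
    fn send(&mut self, to: &str, signal: &[u8]) {
        self.queue.push((to.to_string(), signal.to_vec()));
    }
    fn recv(&mut self) -> Option<(String, Vec<u8>)> {
        if self.queue.is_empty() {
            None
        } else {
            Some(self.queue.remove(0))
        }
    }
}

fn main() {
    let mut t = LoopbackTransport { queue: Vec::new() };
    t.send("worker-1", b"step signal");
    if let Some((to, sig)) = t.recv() {
        println!("{} received {} bytes", to, sig.len());
    }
}
```

Generic code written against such a trait runs unchanged whether the messages travel in memory or over a network socket.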
Alongside the core library there exists the main command-line interface tool. It is what we will actually invoke when working with the outcome engine from the command line.
One of the important things it implements is networking functionality. outcome-cli creates its own notions of servers, clients, workers and more, to enable networked processing patterns, including running distributed simulation deployments. It builds on the generic concepts from the core library and gives them more substance, importing established solutions like the zeromq messaging library.
In the future, more networking middleware options may get integrated, either into outcome-cli itself, or as separate tools, as there are some promising alternatives to zeromq.
As with any journey, we have to start somewhere. Why not kick things off by writing a simple "hello world" simulation model and running it locally? Before we do that, though, we will first need to install outcome on our machine.
Currently the best way to run outcome deployments, whether local or distributed, is using the provided command-line tool. This will require us to access and use the terminal, also called the shell, of our operating system.

Since outcome is written in the Rust programming language, we can leverage its native build tool, cargo. The cargo install subcommand can take care of downloading, building, and adding the resulting binary to our system's path so that we can easily run it from the command line.
Once the Rust toolchain is installed, run the following command:
```
cargo install outcome-cli
```
NOTE: During compilation you may get an error message about missing dependencies. By default, outcome-cli uses the zeromq messaging library, or more accurately, it compiles it from C++ source. Long story short, this requires cmake and a C++ compiler, like gcc, to be installed on your system. This is not optimal, and in the future it could be changed by replacing the currently used libzmq-rs bindings crate with a native Rust implementation of the library.
After the build process is complete, you should be able to invoke outcome from the command line.
For major releases pre-compiled binary executables for selected operating systems may be provided. Check out the download section on the website for links to code repositories.
So you have installed outcome on your machine and can now invoke it from the command line. Great! Why not take a quick dive into things by running an example?

While outcome is designed around running distributed simulations across multiple machines, there is nothing stopping us from running it on just a single machine locally. Indeed, it's very easy to do:
```
outcome run <path-to-scenario>
```
The above command will spin up a simulation instance on our machine using the provided path to a scenario directory. By default it will start in an interactive mode, which can be seen as somewhat similar to a classic REPL — we will be able to step through the simulation and query data in-between the steps.
Now we only need an actual scenario we can run.
In this chapter we'll look at some of the core concepts behind outcome.
It's recommended to go through this chapter to get an understanding of the basic ideas around which the system is organized.
Descriptions in this chapter are mostly introductory. For more details on the individual concepts consult later chapters.
- a simulation instance is essentially a collection of entities, each with a collection of attached components
- data is stored in variables, which are referenced by globally unique addresses
- the engine features a built-in interpreter; logic execution happens on the component level and is based on clock-synchronized, event-triggered state machines
- the globally synchronized model contains entity/component prototypes along with the logic attached to components
- project files are organized into modules and scenarios
- external processes that query simulation data using provided APIs are called services
- the arrangement of entities and services can be set up (and changed at runtime) to ensure efficient and performant computation; we call this load balancing
In terms of core design aspects, outcome draws heavily from the ideas behind the Entity-Component-System architecture, often shortened to ECS.
At its most basic, any outcome simulation consists of a set of entities, each with a number of components attached to it. The result is a flexible arrangement of objects that can be used to accomplish many different tasks.
The exact arrangement of the entities and their components can be either very dynamic or more static. Entities can be created and destroyed, and components can be attached and removed; alternatively, things can be established once at the beginning and not change much during the course of the simulation.
Whether more dynamic or more static, the idea of entities and components is crucial to understand. It influences not only the data layout of a simulation, but also to a large extent the execution model itself.
Entities are the fundamental objects in the system. The most important elements an entity holds internally are:
- a data storage object, and
- a component collection
We can spawn as many entities as needed. They can be created at the initial set up point, or later during the simulation.
An entity is described by its type and its id, which together form its unique identifier. Here's an example entity signature: `:monster:m02`.
When it comes to entities, an entity type helps define what components can be attached to an entity. Registering a component requires us to specify entity type for which it will be available.
As components will use entity-local addresses to get variables, we need this idea of matching types to be able to make some assumptions about the entity our component is attached to.
Each entity type introduces a new namespace for entities of that type. This means an entity like :green:banana won't collide namespace-wise with same-id entities of other types.
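One way to picture these per-type namespaces is to treat the (type, id) pair as the unique key. A minimal sketch, assuming hypothetical entity types (the `fruit` type and the helper function are made up for illustration):

```rust
use std::collections::HashMap;

// Hypothetical helper: an entity's unique identifier as a (type, id)
// pair. Two entities may share an id as long as their types differ.
fn unique_id(entity_type: &str, id: &str) -> (String, String) {
    (entity_type.to_string(), id.to_string())
}

fn main() {
    let mut entities: HashMap<(String, String), &'static str> = HashMap::new();
    entities.insert(unique_id("green", "banana"), "a banana entity");
    entities.insert(unique_id("fruit", "banana"), "another banana entity");
    // same id `banana`, different types: both entries survive
    println!("{} entities stored", entities.len());
}
```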
Component lies at the core of computation. Each component instance is assigned to a single entity.
Each component defines a set of its own variables and contains a single state machine (see next sub-chapter).
We can use component type to create sets of components that will have common characteristics.
A declaration of a new component type can contain things that we would normally declare for components themselves. What we define here will act as the default for any new component of that type we might declare elsewhere. These defaults can be overridden for any of the entries by simply re-declaring that entry on the component.
Component type can be also used as a way of organizing components, and/or expanding the component namespace (like with entity type).
```yaml
# declare a new component type
component_type:
  - id: decision
    vars:
      - id: template_var
        ...
    states:
      - id: template_state_1
        ...

# use the new component type
component:
  - id: choice_213
    type: decision
    # component `choice_213` has a var `template_var`
    # /region/e_01001/decision/
```
All the data that exists within the scope of a simulation is organized into variables, which are referenced using addresses.
Since variables don't exist in a regular global state, but are instead stored on the level of individual entities and organized around components, the address format includes all that additional information.
Working with variables, we use the notion of variable types, which specify the kind of data stored by a variable. For example, a variable of type int stores an integer number, while a variable of type str_list stores a list of character strings.

The current implementation of the engine supports 4 basic variable types, including int, str and bool, along with list and grid types for each of those, e.g. str_list and bool_grid. Basic variable types may be expanded in the future to include different sizes for numeric variables.
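One way to picture this family of types is a tagged enum. This is only a sketch under the assumptions above (the text names int, str, bool, str_list and bool_grid; the full set and the engine's actual representation are not shown here):

```rust
// Hypothetical sketch of variable types; variants are illustrative,
// not the engine's actual data model.
#[derive(Debug, PartialEq)]
enum Var {
    Int(i64),                 // e.g. type `int`
    Str(String),              // e.g. type `str`
    Bool(bool),               // e.g. type `bool`
    StrList(Vec<String>),     // e.g. type `str_list`
    BoolGrid(Vec<Vec<bool>>), // e.g. type `bool_grid`
}

fn main() {
    let health = Var::Int(100);
    let names = Var::StrList(vec!["m02".into(), "m03".into()]);
    println!("{:?} {:?}", health, names);
}
```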
Addresses are used for referencing variables. Any full address is a unique reference to a specific variable stored within the scope of a simulation instance. This is an important feature in a distributed setting — we don't really need to know on which node the variable is currently stored, as long as we know the address we will be able to access it.
Each address is made up of three distinct parts, referencing respectively: entity, component and variable. Each of those three parts can be further broken down into two smaller chunks, broadly defined as type and id, though as we will learn later this means slightly different things for each of the parts.
Here is an example of a full address: `:monster:m02:property:health:int:main`

The same address, but with a more visual breakdown into the three different parts: `:monster:m02` + `:property:health` + `:int:main`
So the above address references an integer variable named main, existing as part of a component of type property with the id health, which is attached to an entity of type monster.
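The three-part breakdown described above can be sketched as a small parsing helper. This is a hypothetical illustration, not part of outcome's API, assuming the six colon-separated chunks described in the text:

```rust
// Hypothetical sketch: split a full address of the form
// :entity_type:entity_id:component_type:component_id:var_type:var_id
// into its three (type, id) parts. Not the engine's actual parser.
fn split_address(addr: &str) -> Option<[(String, String); 3]> {
    let chunks: Vec<&str> = addr.trim_start_matches(':').split(':').collect();
    if chunks.len() != 6 {
        return None;
    }
    Some([
        (chunks[0].to_string(), chunks[1].to_string()), // entity part
        (chunks[2].to_string(), chunks[3].to_string()), // component part
        (chunks[4].to_string(), chunks[5].to_string()), // variable part
    ])
}

fn main() {
    let parts = split_address(":monster:m02:property:health:int:main").unwrap();
    println!("{:?}", parts);
}
```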
Since the embedded interpreter, which we will learn more about in later chapters, executes logic within certain scopes, we need to be able to use scope-aware addresses.
When executing logic within the scope of some component, which itself exists within the scope of a certain entity, the engine will automatically use that scope to properly handle our "local addresses".
Let's use the address from the previous paragraph as an example. Imagine we were defining some logic to be executed on a property:health component. No matter what kind of logic we're dealing with (don't worry, we will dive into all that in a later section), we wouldn't want to include a specific entity reference, since our component could be attached to other monster type entities — it has to remain "entity instance agnostic", so to speak.
Writing our logic, we would refer to the variable simply as int:main, or alternatively property:health:int:main. The latter could also be used to access that same variable from another component attached to the same monster entity.
Sometimes we will need to reference things like component types. For that we shall use certain combinations of the different address parts. These combinations are not usable as variable references. We call those signatures.
Again, following the earlier monster health example, the signature of the property component type looks like this: `:monster:*:property:*`
First of all, as you can see, we use the * star symbol to signal a wildcard, or in other words, an unknown. Since component types have to be bound to specific entity types, the signature specifies an entity type, but omits any specific entity id. Second of all, we always include both chunks of a single address part; that's why we include the wildcard instead of dropping the chunk entirely.
Based on the above, it's not hard to imagine how we would form the signature of an entity type: `:monster:*`
The notion of wildcards, or unknowns, is also used for simple pattern matching. Querying a simulation instance for the health of all monsters, we could simply use: `:monster:*:property:health:int:main`
On the level of the engine library, this is called expanding the address. Expanding the above pattern simply means creating a collection of matching addresses, like so:
```
:monster:m02:property:health:int:main
:monster:m03:property:health:int:main
:monster:big_m01:property:health:int:main
```
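The expansion step can be pictured as filtering stored addresses against the pattern, chunk by chunk. The following is a hypothetical illustration, not the engine's actual implementation (the `stamina` component is made up to show a non-matching address):

```rust
// Hypothetical sketch of wildcard matching over addresses; a `*`
// chunk matches any chunk. Not outcome's real expansion code.
fn matches(pattern: &str, addr: &str) -> bool {
    let p: Vec<&str> = pattern.trim_start_matches(':').split(':').collect();
    let a: Vec<&str> = addr.trim_start_matches(':').split(':').collect();
    p.len() == a.len() && p.iter().zip(&a).all(|(pc, ac)| *pc == "*" || pc == ac)
}

fn main() {
    let stored = [
        ":monster:m02:property:health:int:main",
        ":monster:m03:property:health:int:main",
        ":monster:big_m01:property:health:int:main",
        ":monster:m02:property:stamina:int:main", // hypothetical extra var
    ];
    // Expanding the pattern keeps only the matching addresses.
    let expanded: Vec<&str> = stored
        .iter()
        .copied()
        .filter(|a| matches(":monster:*:property:health:int:main", a))
        .collect();
    println!("{:?}", expanded);
}
```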
At the heart of any simulation there usually lies some form of a model that defines the relations and rules that apply to objects existing within that simulation.
In our case the model not only provides the initial layout of entities, components, events, etc., but is also read, and mutated, at runtime.
The main approach championed by the engine is one focused on incremental assembly, meaning building up the model through the process of executing commands on already existing components.
Incremental assembly is not the only way, though. One can also define the model in a more static fashion, for example using json structured data files.
The built-in logic processor makes use of the global model to store instruction information. The models of individual components contain logic information in the form of lists of commands.
The engine will not attempt to load all the files within a scenario into memory and store them within the global model object, as this would cause problems with larger datasets.
Instead the model is understood more broadly to include the project (scenario, module) files stored on disk. Internally, the model object stores paths to all the files found within the scenario.
As the model serves as a kernel holding all the important simulation rules, it has to be globally accessible and globally synchronized.
In a distributed setting with multiple separate nodes, the global model is one of the few things these nodes share in common. As such, it serves as the main source of truth when it comes to both entity/component prototypes and the executable logic attached to components.
Since the model itself can be mutated at runtime, the system is able to centrally handle any changes and ensure proper propagation in case of such changes.