The process definition is the compilation of configurations, dependencies, and references that make up the process. The process definition node groups all definitions required for a process to function.

Most bots—in particular indicators—run two different processes, because different data structures need to be handled in different manners: the Multi-Period-Daily process handles daily files, while the Multi-Period-Market process handles market files.

The Multi-Period-Market process deals with time frames of one hour and above. Because these time frames produce relatively small numbers of records, the process builds one single file per time frame spanning the whole market history—hence the name Multi-Period-Market.

On the other hand, the Multi-Period-Daily process deals with time frames below one hour. These time frames produce huge numbers of records, therefore, the corresponding data must be fragmented in multiple files. The Multi-Period-Daily process builds one file per day for each time frame—hence the name Multi-Period-Daily.
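
Some quick arithmetic makes the contrast in record volumes concrete. The sketch below is illustrative only; the five-year history length is an assumption, not a system parameter:

```python
# Back-of-the-envelope candle counts per time frame, assuming a
# five-year market history (an illustrative assumption).
# It shows why time frames of one hour and above fit in a single
# file per time frame, while sub-hourly time frames are split into
# one file per day.

MINUTES_PER_DAY = 24 * 60   # 1440
HISTORY_DAYS = 5 * 365      # assumed length of the market history

for label, minutes in [("24-hs", 1440), ("01-hs", 60), ("45-min", 45), ("01-min", 1)]:
    candles_per_day = MINUTES_PER_DAY // minutes
    total = candles_per_day * HISTORY_DAYS
    print(f"{label}: {candles_per_day} candles/day, {total:,} over the whole history")
```

A one-hour series accumulates only tens of thousands of records over years, while a one-minute series runs into the millions, which is why the latter is fragmented into daily files.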

Adding a Process Definition Node

To add a process definition, select Add Process Definition on the bot’s menu. A process definition node is created along with the basic structure of nodes comprising the definition.

Configuring the Process Definition

Select Configure Process on the menu to access the configuration.

Multi-Period-Market:

  {
    "codeName": "Multi-Period-Market",
    "normalWaitTime": 0,
    "retryWaitTime": 10000,
    "framework": {
      "name": "Multi-Period-Market"
    }
  }

Multi-Period-Daily:

  {
    "codeName": "Multi-Period-Daily",
    "normalWaitTime": 0,
    "retryWaitTime": 10000,
    "framework": {
      "name": "Multi-Period-Daily"
    }
  }

  • codeName is the name of the process as used within the code of the system.

  • normalWaitTime is the time, in milliseconds, the process waits before starting a new cycle under normal conditions.

  • retryWaitTime is the time, in milliseconds, the process waits before retrying after an error.

The nodes comprising a process definition are arranged in the following hierarchy:

Process Definition
  • Process Output
    • Output Dataset Folder
      • Output Dataset
      • Output Dataset
  • Process Dependencies
    • Status Dependency
    • Data Mine Data Dependencies
      • Bot Data Dependencies
        • Data Dependency Folder
          • Data Dependency
          • Data Dependency
  • Status Report
  • Execution Started Event
  • Execution Finished Event

Process Output

The process output groups the definitions of which datasets are impacted by the process, that is, which datasets the process builds or takes a part in building.

Adding a Process Output Node

To add a process output node, select Add Missing Items on the process definition node menu. Items that may be missing are created along with the basic structure of nodes required to define them.

Output Dataset Folder

An output dataset folder is an organizational device used to create arrangements of output datasets, particularly useful when the bot has many products.

In cases in which a single bot has many different products, output dataset folders may help organize the outputs referencing each product, making their management easier. Folders may be nested like folders in the file system.

The use of output dataset folders is optional, as product definitions may also exist outside of folders.

Adding an Output Dataset Folder Node

To add the output dataset folder node, select Add Output Dataset Folder on the parent node menu.

Output Dataset

The output dataset is a reference to a dataset definition. By establishing such a reference, the process acquires the definitions of how the dataset is to be constructed.

There are other effects of establishing a reference from the output dataset to a product dataset definition. Upon execution, every time a process finishes a processing cycle, it triggers an event that may be consumed by other entities. This event indicates that the datasets impacted by the process have been updated.

An example of other entities that may be listening to such events is plotters. Plotters read datasets and create graphical representations of the data over the charts. Charts constantly update the information in the form of candles and indicators in real time, synchronized with the data being extracted from the exchange by the sensor bot. That kind of automation is possible thanks to the events that processes trigger every time an execution cycle finishes, signaling to everyone listening that new data is available in each of the impacted datasets.

Indicators-Process-Output-01

The image above shows the typical references from output datasets to datasets definitions.

Adding an Output Dataset Node

Output datasets may be added in bulk, for all defined products, or one by one.

To add a single output dataset node, select Add Output Dataset on the process output menu.

If you have defined multiple products, each with their dataset definitions, and wish to create all corresponding output datasets in bulk, select Add All Output Datasets on the process output menu. The system maps the product definition folder structure with output dataset folders, creates all required output datasets, and establishes the references with the corresponding dataset definitions, all in a single click.

Process Dependencies

Process dependencies are references to various data structures on which the process depends to function.

While processes run autonomously, most processes participate in a value-adding chain by which a process produces a data product that other processes may consume as an input to be processed further. This means that bots—while autonomous in their particular jobs—do depend both on other bots and on the data other bots produce.

Adding a Process Dependencies Node

To add a process dependencies node, select Add Missing Items on the process definition node menu. Items that may be missing are created along with the basic structure of nodes required to define them.

Status Dependency

Status dependencies are references to status reports that define which processes the process establishing the reference depends on.

The reference is established to acquire information about what the target process is doing. For example, by reading a status report, a process may learn when the referenced process last ran and which file it processed last.

The status report referenced may belong to the same process—which is called a self-reference. In such a case, the process is learning what it did the last time it ran. The status report referenced may also belong to another process—another bot. In that case, the dependency may be of the Market Starting Point or Market Ending Point types.

  • Self Reference is mandatory, as a process needs to read its own status report every time it wakes up.

  • Market Starting Point is a status dependency existing on Multi-Period-Daily processes so that the process establishing the reference learns the datetime of the start of the market. Usually, the reference is established with the sensor’s Historic-OHLCVs process status report. Multi-Period-Market processes do not have this type of status dependency as the date of the start of the market is implied in their dataset (a single file with all market data).

  • Market Ending Point is a status dependency existing both in Multi-Period-Market and Multi-Period-Daily processes so that the process establishing the reference knows the datetime of the end of the market.

Indicators-Process-Dependencies-01

The image above shows a case of a self-reference status dependency as well as a market ending point status dependency.

Adding a Status Dependency Node

To add a status dependency, select Add Status Dependency on the process dependencies node menu.

Configuring the Status Dependency

Select Configure Status Dependency on the menu to access the configuration.

  {
    "mainUtility": "Self Reference|Market Starting Point|Market Ending Point"
  }

  • mainUtility determines the type of status dependency, with possible values being Self Reference, Market Starting Point, or Market Ending Point.
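
For instance, a status dependency established to learn the datetime of the end of the market would be configured as follows:

```json
{
  "mainUtility": "Market Ending Point"
}
```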

Data Dependency

Data dependencies are references established with dataset definitions of other bots, determining which datasets the process establishing the reference uses as input.

Most bots consume data other bots have produced. Because bots need the data as input for their calculations, processes establish a data dependency with the dataset definitions of other bots. The reference provides the process with all the information needed to decode the dataset, enabling it to perform the required calculations.

Indicators-Process-Dependencies-02

The image above shows data dependencies in one bot referencing dataset definitions of another bot.

Adding a Data Dependency Node

To add a single data dependency, select Add Data Dependency on the process dependencies, bot data dependencies, or data dependency folder node menus.

In cases in which multiple data dependencies must be established, you may use the option to create data dependencies in bulk:

  • The Add All Data Dependencies option on the data mine data dependencies node menu adds a bot data dependency for each bot in the data mine, and a data dependency for each dataset definition of each product of each bot. You may use this option after manually adding a data mine data dependencies node and establishing the reference with the desired data mine, or after using Add All Data Mine Dependencies, which establishes the references with data mines automatically.

It is unlikely that a bot requires numerous data dependencies; thus, the most common scenario is setting up individual data dependencies and establishing references manually. However, if your bot does require many data dependencies, the bulk features may be quite useful: create all data dependencies for a given data mine, then simply delete those that are not required.

Data Mine Data Dependencies

Data mine data dependencies are references established with entire data mines to facilitate establishing data dependencies with multiple datasets in the given data mine.

The node may be used as an organizational device, simply to arrange bot data dependencies. However, the smart use of the node involves automating the deployment of multiple data dependencies and their references.

Adding a Data Mine Data Dependencies Node

To add a data mine data dependencies node, select Add Data Mine Data Dependencies on the parent node menu. This action adds the node but does not establish a reference with any data mine.

The smart use of the node involves the Add All Data Mine Dependencies option on the parent node menu. This action creates a data mine data dependencies node for each data mine in the workspace, establishing a reference with the corresponding data mine. This is the first step toward quickly setting up multiple data dependencies when needed.

Bot Data Dependencies

A bot data dependencies node is an organizational device used to arrange data dependencies corresponding to a specific bot.

The device exists as an offspring of a data mine data dependencies node, and does not require a reference to a bot in the given data mine.

Adding a Bot Data Dependencies Node

To add the bot data dependencies node, select Add Bot Data Dependencies on the parent node menu. When adding a bot data dependency in this manner, the node does not inherit any particular label. In fact, it may even host data dependencies pointing to other data mines.

The bot data dependencies node may also be created automatically. When created using the Add All Data Dependencies option on the data mine data dependencies node, the node inherits the label of the corresponding bot in the corresponding data mine.

Data Dependency Folder

A data dependency folder node is an organizational device used to map the arrangement of product definition folders of a given bot.

The use of product data dependency folders is optional, as data dependencies may also exist outside of folders.

Adding a Data Dependency Folder Node

To add the data dependency folder node, select Add Data Dependency Folder on the parent node menu.

The data dependency folder node may be added automatically when using the Add All Data Dependencies option on the data mine data dependencies node menu.

Status Report

Status reports serve as temporal annotations that bots read every time they run to learn what was done in the previous cycle and what the current state of affairs is. Status reports are dynamic: they are updated after every cycle of the associated process.

Bots do not run continuously. Instead, they run in cycles. A cycle usually lasts until there is no more data to process; once it finishes, the bot shuts down until the next cycle is due. A status report is a file every bot writes at the end of each cycle with information about the last run, including the datetime of the run and the last record processed.
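
Conceptually, a status report carries content along these lines. The field names below are illustrative, not the actual file layout:

```json
{
  "lastExecution": "2021-07-01T14:00:00.000Z",
  "lastFile": {
    "year": 2021,
    "month": 7,
    "days": 1
  }
}
```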

A status report may be consumed by the same bot producing it, or by other bots.

Adding a Status Report Node

To add a status report, select Add Missing Items on the process definition node menu. Items that may be missing are created along with the basic structure of nodes required to define them.

Execution Started Event

The execution started event is the event that triggers the execution of a process. It usually references the execution finished event of another process on which the process depends.

These references determine when a process is due for another run. By listening to the execution finished event of the process it depends on, a process may wake up just in time to handle the new batch of data the dependency has just delivered.

Bots form a sort of multi-branched execution sequence with an indeterminate number of dependencies. Every time the bot further down the tree of dependencies finishes a cycle, it triggers the execution of multiple bots listening to its execution finished event.

In the context of a trading process instance running a trading session on the network hierarchy, the execution started event may be used to force the trading process to run only after the last indicator bot dependency finishes its job. This guarantees that all dependencies are up to date and that the trading bot will evaluate the information corresponding to the same candles for all indicators used by the trading system.

Not setting up this event on a trading session may result in eventual data inconsistencies, as—in theory—the trading bot may run with some indicators up to date and some slightly delayed.

Adding an Execution Started Event Node

To add an execution started event, select Add Missing Items on the process definition node menu. Items that may be missing are created along with the basic structure of nodes required to define them.

Execution Finished Event

The execution finished event is the event that processes trigger once they have finished an execution cycle. The event is broadcast to whoever wants to listen, so that other bots may know when the process has finished its execution cycle.

The execution finished event is responsible for triggering the execution of every process that depends on the data a bot produces. If bot Alice depends on bot Bob, Alice listens to the execution finished event of Bob so that it may start a new execution cycle as soon as Bob finishes its cycle. Alice listens to Bob’s execution finished event by establishing a reference from its execution started event.
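
The Alice-and-Bob chaining is, conceptually, a publish/subscribe pattern. The sketch below is illustrative only; the class and method names are invented and this is not Superalgos code:

```python
# Conceptual sketch of chaining cycles through execution finished events.
# All names are invented for illustration; this is not Superalgos code.

class Process:
    def __init__(self, name):
        self.name = name
        self.listeners = []   # processes that referenced our finished event
        self.cycles = 0

    def reference_finished_event_of(self, other):
        """Model an execution started event referencing another
        process's execution finished event."""
        other.listeners.append(self)

    def run_cycle(self):
        self.cycles += 1
        # ... process the new batch of data here ...
        self.finish()

    def finish(self):
        # Broadcast the execution finished event: every listener
        # starts a new cycle, consuming the freshly produced data.
        for dependent in self.listeners:
            dependent.run_cycle()

bob = Process("Bob")      # produces a dataset
alice = Process("Alice")  # consumes Bob's dataset
alice.reference_finished_event_of(bob)

bob.run_cycle()
assert alice.cycles == 1  # Alice ran right after Bob finished
```

Note that the dependency is registered on the producer's side but initiated by the consumer, which mirrors how the reference is established from Alice's execution started event to Bob's execution finished event.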

Indicators-Process-Execution-Started-Finished-Events-01

The image above shows a reference established from the execution started event of a process to the execution finished event of another process.

Adding an Execution Finished Event Node

To add an execution finished event, select Add Missing Items on the process definition node menu. Items that may be missing are created along with the basic structure of nodes required to define them.