Insolar aims to deliver an open and collaborative environment required to enable third-party companies to build and maintain templates and services, provide hardware capacities, and adapt services and functions to local practices and legal and regulatory requirements.
Below is an illustration of the layered architecture that facilitates such a collaborative environment. The use of multilayer architecture makes platform design a challenging task, but with proper use it enables building complex solutions with better control of development risks and (later) of ownership costs. Insolar is currently designing the proposed platform in an incremental fashion allowing it to progressively grow into the ultimate decentralized collaborative environment for various kinds of industries, companies, governments, and communities.
The architecture is split into four layers:
At the top layer are applications (contracts) owned by and tailored for companies who serve other companies.
The next layer represents business services and templates (domains) for business applications provided by vendors.
At the third layer is the federation of clouds. Their infrastructure can also be public and offered by governments or even communities as a public good (crowd-sourced computational resources).
At the bottom layer, there are providers of hardware capacities organized into national and/or industrial compute & storage resources.
Let’s take a closer look at the three bottom layers since they are Insolar’s development focus.
Below is the platform architecture diagram aimed to address the aforementioned interconnected layers. The architecture has multiple components and consensuses to address the complexity and variety of requirements.
Clouds and Their Federations¶
Clouds organize and unify software capabilities, hardware capacities, and the financial and legal liability of nodes to ensure seamless operation of business services. The Insolar Platform transparently connects multiple clouds and each cloud is governed independently, e.g., by a community, company, industry consortia, or national agency. Thus, multiple clouds can unite into a federation on the Insolar network.
The cloud itself establishes governance of both network operations and business logic. Therefore, it is a dual entity that controls:
globula discovery and split-protection protocols;
node activation and deactivation protocols with the list of currently active nodes and blacklisted ones;
real-time detection protocols of execution fraud.
a special domain, stored by the cloud itself, that carries rigid configuration and rules such as:
procedures for registering and deregistering nodes;
postexecution fraud detection procedures;
compensation and penalization procedures;
marketplace rules for processing capacity.
Domains establish governance of contracts and nodes, thus, acting as super contracts that can contain objects and their history (lifelines) and can apply varying policies to the lifelines contained within. Policies can differ with regards to particular rules:
Changing the domain itself.
Access to/from other domains for lifelines.
Logic validation, e.g., consensus, number of voters.
Code mutability – possibility of changing the code and change procedures.
Mutability of object history contained in the lifeline. These rules allow implementing GDPR compliance or legal actions via authorization requirements defined by the domain.
Applicability of custom cryptography schemes requested from the cloud that deploys them.
Insolar also supports larger node networks of up to 100 globulas (a total of 100,000 nodes) that behave transparently across such networks in accordance with whichever contract logic is in place. Such networks rely on the inter-globula network protocol with leader-based consensus.
Insolar utilizes a multi-role model for nodes: each node has a single static role that defines its primary purpose and a set of dynamically assigned roles. Dynamic role allocation functions enable the omni-scaling feature of the Insolar Platform.
The node’s static role defines what kind of resource and functionality are delivered by that node to the network, and how the network uses such nodes. The network recognizes four static role categories:
virtual – performs calculations;
light material – performs short-term data storage and network trafficking;
heavy material – performs long-term data storage;
neutral – participates in the network consensus (not in the workload distribution) and has at least one utility role.
Static role correlates with the type of resource the node can provide to the cloud, and is a part of the omni-scaling feature of the Insolar Platform. All static role categories are detailed below.
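The four static role categories and their key property (statelessness vs. statefulness) could be modeled as a simple enumeration. This is an illustrative sketch only; the type and method names are assumptions, not Insolar’s actual definitions:

```go
package main

import "fmt"

// StaticRole is a node's single, fixed role that defines what kind of
// resource the node contributes to the cloud.
type StaticRole int

const (
	RoleVirtual       StaticRole = iota // performs calculations
	RoleLightMaterial                   // short-term storage and network trafficking
	RoleHeavyMaterial                   // long-term storage
	RoleNeutral                         // consensus participation plus utility functions
)

func (r StaticRole) String() string {
	switch r {
	case RoleVirtual:
		return "virtual"
	case RoleLightMaterial:
		return "light material"
	case RoleHeavyMaterial:
		return "heavy material"
	case RoleNeutral:
		return "neutral"
	}
	return "unknown"
}

// Stateful reports whether a node with this role must recover state on rejoin.
func (r StaticRole) Stateful() bool {
	return r == RoleLightMaterial || r == RoleHeavyMaterial
}

func main() {
	for _, r := range []StaticRole{RoleVirtual, RoleLightMaterial, RoleHeavyMaterial, RoleNeutral} {
		fmt.Printf("%-14s stateful=%v\n", r, r.Stateful())
	}
}
```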
Neutral nodes participate in the network consensus but do not receive any workload automatically distributed by the Insolar network. Neutral nodes serve particular functions:
block explorer support.
Virtual nodes are stateless, fast, easy to join and leave, and do not need data recovery. On the Insolar network, virtual nodes do the following:
Light material nodes¶
Light material nodes are stateful and they automatically collect hot data and indices upon restart. On the Insolar network, light material nodes do the following:
manage data access and perform audits;
provide caching for recent data;
enable scalability of network throughput;
perform data retrieval and storage operations for virtual nodes;
redirect requests to relevant material nodes when the required data is not available;
maintain indices of the most recent records, attribute indices, and other functions;
deduplicate and recover requests in case of virtual node failures;
assist heavy material nodes by serving as temporary backup and cache for individual blocks;
serve as integrity validators, recovery sources, proof-of-storage approvers, and handover voters;
collect and register dust (e.g., service inconsistency reports, long operations, logs).
Heavy material nodes¶
Heavy material nodes are stateful and require recovery and content revalidation (proof-of-storage), both periodically and upon rejoining the network. On the Insolar network, heavy material nodes do the following:
provide long-term data storage and scalability of storage capacity;
check data integrity but are unable to introduce or change data or form a block;
ensure the required level of block replication and the maximum data density (scattering) to reduce the impact of data leakage from a single material node (heavy or light).
Heavy material nodes differ significantly from other nodes – they store lots of data and must take additional measures to mitigate the following risks:
losing (or corrupting) data but not having enough copies, or
data leakage caused by the accumulation of too much data on a single node.
The heavy material node implementation is simplified for TestNet 1.1 and will be gradually extended during the development of Insolar’s enterprise version.
Moreover, an additional network protocol maintains backups and archival storage nodes without burdening the main Insolar network consensus.
In addition to its static role, a node can be equipped with dynamic roles, which can change from pulse to pulse.
Virtual nodes can have the following roles and respective responsibilities:
Virtual executor executes requests on objects (contracts) within the current pulse.
Virtual validator verifies virtual executor’s actions from previous pulses.
Light material nodes can have the following roles and respective responsibilities:
Material executor forms new blocks and grants access to previous blocks.
Material validator checks the block’s validity and consistency.
Material stash caches hot data and relevant indices (current states of all objects) and syncs the indices among other stash nodes.
In essence, all the nodes take part in two kinds of execution and validation procedures, depending on their dynamic roles: virtual and material. Heavy material nodes rely on validation performed by light material ones.
Dynamic roles are designed to:
enable dynamic and straightforward scaling of the network;
require minimal preparation to become operational;
get new workload allocations while dynamic roles of all the nodes change with every pulse.
Delegated and Utility Roles¶
In addition to static and dynamic roles, nodes can take on delegated and utility roles that serve additional functions: caching, inter-globula coordination, and node joining.
Insolar’s main principle is that everything on the Insolar Platform is a contract. Contracts are stored as lifelines in the ledger and are written in general-purpose programming languages such as Golang or Java, which allows existing practices, libraries, and development environments to be used straightforwardly.
A contract developer may focus solely on the contract logic and calls of other contracts, while such details as the location and implementation of other contracts are managed transparently by the platform. Every contract has domain-level managed rules that define how contracts are handled:
policies for code updates,
inbound or outbound call permissions.
In addition to governance with logical rules, domains can also be deployed in separate clouds for stronger network security and data inspection on network edges, while contract/business logic can dynamically tune validation performed by the Insolar Platform to balance costs, risks, and performance by adjusting quantity and quality (stake or liability levels) of validators involved.
Contracts also have individual time tracking and resources which can be subsequently connected to custom billing procedures and prepaid (or on-spot) allocation of hardware capacities. Moreover, the ledger that stores contract data applies strict controls on the following:
Data access by requiring signatures from nodes that need the access;
Scattering of versioned data across multiple storage nodes to significantly reduce risks of fraud, intrusions, or data leaks.
Furthermore, Insolar guarantees to execute any contract and ensures duplicate calls will not emerge in case of hardware, system, or network failure.
For practical enterprise use, Insolar contracts can store and transfer large data objects with the following benefits:
on-chain, without the need for additional systems integrations;
As the platform already reduces determinism via network messaging, Insolar applies relatively relaxed requirements to the determinism of contracts. As such, given the same inputs and object state, a method invocation must:
produce exactly the same results,
consume roughly the same amount of CPU resources.
Contract execution methods that run longer than one full pulse must be explicitly declared with an execution duration policy.
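The determinism requirement can be illustrated with a hypothetical sketch of how a validator might re-execute a method and compare result fingerprints with the executor’s claim. The `transfer` method and the hashing scheme below are invented for illustration and are not Insolar’s actual validation mechanism:

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// resultHash fingerprints a contract method's output so a validator can
// compare its own re-execution against the executor's claim.
func resultHash(balance uint64, logicOK bool) [32]byte {
	var buf [9]byte
	binary.BigEndian.PutUint64(buf[:8], balance)
	if logicOK {
		buf[8] = 1
	}
	return sha256.Sum256(buf[:])
}

// transfer is a deterministic contract method: same inputs, same results.
func transfer(balance, amount uint64) (uint64, bool) {
	if amount > balance {
		return balance, false // overdraft: reject, balance unchanged
	}
	return balance - amount, true
}

func main() {
	// The executor runs the method and publishes a result hash...
	b1, ok1 := transfer(100, 30)
	executorClaim := resultHash(b1, ok1)

	// ...and a validator re-executes with the same inputs and compares.
	b2, ok2 := transfer(100, 30)
	if resultHash(b2, ok2) == executorClaim {
		fmt.Println("validation passed: results match")
	} else {
		fmt.Println("validation failed: non-deterministic method")
	}
}
```

A method that consulted wall-clock time or local randomness would fail this comparison, which is exactly why such sources are excluded from contract logic.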
A contract that does not produce the same results under given conditions will not pass validation. In this case, all expended efforts will be at the cost of the party that deploys the contract (as opposed to the caller). Insolar records information on spent efforts in sidelines and can track assigned limits, however, the actual billing and payment execution must be handled by governance logic (i.e., by other contracts).
Although virtual nodes are used to isolate contracts incompatible with security or governance rules, the new contract’s code can only be introduced to Insolar as source code, with compilation and static inspection performed by nodes in accordance with an applicable governance model.
To provide contract execution determinism, Insolar utilizes its network consistency.
To this end, Insolar:
sets apart the functionality requiring different resources and permissions,
distributes workloads across all available/active nodes of the Insolar network using entropy.
As a result, all nodes have:
Insolar does not use node workload statistics to provide network consistency; instead, it implements pseudo-random workload distribution.
The reason is simple: a trustworthy workload factor in distributed systems requires full visibility and operations aggregation, yet even these do not guarantee smooth workload distribution when workloads fluctuate faster than the average duration of a workload control cycle (aggregate statistics – balance – execute).
Pseudo-random workload distribution can cause distribution anomalies within a workload control cycle but it provides a relatively smooth distribution on longer timescales, without the need for full visibility and operations aggregation.
Such a workload distribution and the entropy-based allocation functions for dynamic roles are the core instruments that enable the omni-scaling feature of the Insolar Platform. This feature provides a balance in accordance with the client’s needs.
Processing costs can be traded off against:
Uninsured risks. Suitable for situations where a cheaper transaction is executed but fewer validators verify said transaction, meaning greater risk of loss.
Processing speed. It can be increased to the detriment of operational risk:
frequent transactions could be processed without awaiting validation, or
validations may be batched together and processed following some delay, leading to the possibility of resource-consuming rollbacks.
Execution & Validation¶
The Insolar Platform works on the principle of actions executed by one node, validated by many.
The number of selected validators can be determined in accordance with the business process at hand and, since validators in shared enterprise networks will have liability and legal guarantees, this works as transaction insurance.
As described in the network consistency section, validator selections are not based on voting; instead, they are part of the omni-scaling feature. Insolar uses the active node list and entropy generated by consensus of the globula network protocol, and then applies deterministic allocation functions for node roles. This avoids wasting efforts on numerous per-transaction and network-wide consensuses.
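A minimal sketch of such a deterministic, entropy-driven allocation function follows. The node references and the hash-and-sort scheme are assumptions for illustration, not Insolar’s actual allocation algorithm; the point is that every node running the same function over the same active node list and entropy arrives at the same selection without any voting:

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"sort"
)

// selectValidators deterministically orders active nodes by the hash of the
// pulse entropy concatenated with the node reference and takes the first n.
func selectValidators(active []string, entropy []byte, n int) []string {
	ranked := append([]string(nil), active...)
	sort.Slice(ranked, func(i, j int) bool {
		hi := sha256.Sum256(append(append([]byte{}, entropy...), ranked[i]...))
		hj := sha256.Sum256(append(append([]byte{}, entropy...), ranked[j]...))
		return string(hi[:]) < string(hj[:])
	})
	if n > len(ranked) {
		n = len(ranked)
	}
	return ranked[:n]
}

func main() {
	active := []string{"node-a", "node-b", "node-c", "node-d", "node-e"}
	pulse42 := []byte("entropy-for-pulse-42")
	// Every honest node computes the same set for the same pulse.
	fmt.Println("validators:", selectValidators(active, pulse42, 3))
}
```

Because validator selection uses the entropy of a *new* pulse, an executor cannot know this set while executing, which is what prevents collusion.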
Since Insolar sets apart functionality using node roles, it has two sets of execution & validation procedures: virtual and material.
Virtual Execution & Validation¶
Nodes with virtual static roles carry out virtual execution & validation. The network selects (determines based on entropy) a virtual node to become the virtual executor for an object. Upon receiving a request in the current pulse, the virtual executor:
Registers the request within the current pulse.
If the request arrives at a ‘busy’ virtual executor, the executor can delegate the execution of the object to other virtual nodes (not necessarily virtual executors). Moreover, multiple requests can be executed within the same pulse when opportunistic execution/validation is allowed by the caller or by the called object.
Executes the request on the object (contract).
Collects the results of outbound calls.
Once the executor’s status expires, the network selects virtual validators from the list of active virtual nodes on a new pulse (new entropy), meaning executors cannot predict which nodes will validate transactions, thereby avoiding a collusion scenario.
Each virtual validator:
Lastly, validation of outbound calls is stacked into a single validation round, as validators use signed results collected by previous executors.
A single virtual executor can execute long requests that span several pulses. To do this, the virtual node that started the execution asks current executors in each pulse for tokens that give the execution permission.
Material Execution & Validation¶
Nodes with light material static roles carry out material execution & validation:
The network selects (determines based on entropy) a specific light material node to become a light material executor. Upon receiving data requests from the virtual executor in the current pulse, the light material executor:
Once the executor’s status expires, the network selects material validators from the list of active light material nodes on a new pulse (new entropy), meaning executors cannot predict which nodes will validate transactions, thereby avoiding a collusion scenario.
Each material validator checks that the light material executor has formed the last block correctly. The block must have:
No contradictions between records in the filaments.
In addition, each validator ensures that the executor made the right decision to split (or merge) the corresponding jet.
Light material stash nodes are nodes that have been light material executors for a number of past pulses. This number, called the stash history limit, defaults to 5 but is configurable within a cloud. Thus, stash nodes provide caching for recent data.
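The stash history limit can be sketched as a small bounded buffer of recent executor pulses. This is an illustrative model under the default limit of 5, not Insolar’s actual stash implementation:

```go
package main

import "fmt"

// stashHistoryLimit is the default number of past pulses for which a node
// keeps serving cached data after acting as a light material executor.
const stashHistoryLimit = 5

type stash struct {
	pulses []uint32 // pulses this node executed in, most recent last
}

// noteExecutorFor records that this node executed in the given pulse and
// evicts entries older than the history limit.
func (s *stash) noteExecutorFor(pulse uint32) {
	s.pulses = append(s.pulses, pulse)
	if len(s.pulses) > stashHistoryLimit {
		s.pulses = s.pulses[len(s.pulses)-stashHistoryLimit:]
	}
}

// servesCacheFor reports whether the node still caches data for a pulse.
func (s *stash) servesCacheFor(pulse uint32) bool {
	for _, p := range s.pulses {
		if p == pulse {
			return true
		}
	}
	return false
}

func main() {
	var s stash
	for p := uint32(1); p <= 8; p++ {
		s.noteExecutorFor(p)
	}
	// Pulse 2 is outside the 5-pulse window; pulse 6 is still cached.
	fmt.Println(s.servesCacheFor(2), s.servesCacheFor(6))
}
```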
Consensus procedures vary in their degree of control by business logic, with two consensus procedures available:
Domain-defined consensus: procedures that are a set of Raft-like protocols with entropy-controlled voter selection. These protocols are applied to an object after a series of changes. Such protocols can be chosen at the domain level and configured at the transaction level.
Utility consensus: procedures – a set of protocols – that cover various platform operations not directly operated or required by business logic, including network consensus, pulsar consensus, and traffic cascade.
Logic consensus ensures that actions applied to an object were performed correctly considering the object’s state, input parameters, and external dependencies (calls).
For more information on logic consensus, see the virtual execution & validation section.
Storage consensus ensures that:
nodes which participated in logical consensus had allocated roles;
records generated by the nodes are structurally and referentially valid.
For more information on storage consensus, see the material execution & validation section.
Network consensus ensures node availability and synchronization of time and state among nodes and provides consistent allocation of dynamic roles to nodes. There are two consensus protocols behind the network consensus:
Globula network protocol: a truly decentralized BFT-like protocol without any consensus leader that establishes the consistency of a globula (a smaller network of up to 1,000 nodes).
Inter-globula network protocol: a leader-based protocol that extends the GNP and establishes consistency among globulas of the Insolar network (up to 100 globulas or 100,000 nodes).
The entropy’s consistency and the set of active nodes on the network are vital for the “executed by one node, validated by many” methodology. Nodes are selected from the active node list to perform different functions, while entropy and consistency ensure behavioral consensus across all nodes. Validator nodes are selected only on a new pulse to ensure that executor nodes cannot collude with validators.
Pulsars running on a pulsar protocol represent a separate logical layer that is responsible for network synchronization and provides a source of randomness (pulses). Interoperability of nodes within a single cloud depends on pulses and all nodes must be on the same pulse to process new requests or operations.
Pulsars can run either on the same network or an entirely separate one. Cases of the former include:
private networks that can implement a dedicated server;
cross-enterprise and hybrid networks that can use a shared network of pulsars yet run individual installations of Insolar networks;
and public networks that can use trusted pulsar nodes or run the pulsar function on other nodes.
When multiple pulsars run on the network, their consensus generates the pulses.
Clouds define the pulsar selection rules and they can vary significantly. On enterprise networks, servers that complete no other operations manage the selection, whereas on public networks, it may be a random subset of 10 to 50 nodes with high uptime. Other configurations are also possible for different network types.
Default pulse generation is based on BFT-consensus among pulsars, where each member contributes to entropy and none can predict it. The pulsar protocol enables entropy generation in a way that prevents individual nodes from being able to predictably manipulate the entropy through vote withdrawals.
This protocol does not include negotiations related to pulsar membership or pulse duration – such parameters are considered as preconfigured or preagreed. The default pulse duration is 10 seconds.
As a consensus result, pulsars distribute the collaboratively-generated entropy signed by every pulsar to every node on the network.
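To illustrate only the final combination step, the sketch below folds each pulsar’s contribution into the pulse entropy by XOR-ing their hashes, so no single member can steer the result without knowing every other contribution in advance. This is a simplification: the actual pulsar protocol is a BFT consensus with safeguards against vote withdrawal, which the code does not attempt to model:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// combineEntropy derives the pulse entropy from every pulsar's contribution.
// XOR of hashes is commutative, so the result is independent of the order
// in which contributions arrive.
func combineEntropy(contributions [][]byte) [32]byte {
	var out [32]byte
	for _, c := range contributions {
		h := sha256.Sum256(c)
		for i := range out {
			out[i] ^= h[i]
		}
	}
	return out
}

func main() {
	pulse := combineEntropy([][]byte{
		[]byte("pulsar-1 contribution"),
		[]byte("pulsar-2 contribution"),
		[]byte("pulsar-3 contribution"),
	})
	fmt.Printf("pulse entropy: %x...\n", pulse[:8])
}
```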
Ledger is a common term for distributed storage, a network of nodes that store data.
As described in the static roles section, material nodes are responsible for storing data and providing it to virtual nodes on request. Virtual nodes create and sign new information and pass it to material nodes for storage. So, material nodes do not create or modify information (objects), with the exception of specifically defined metadata.
Data is stored in the ledger as a series of immutable records. All records are created and signed by virtual nodes. Each record is addressed by its hash and a pulse number. Records can contain a reference to another record, thus, creating a chain. An example of a chain is the object’s lifeline. Each material node is responsible for its own lifelines determined by their hashes.
In the Insolar’s key-value storage, the key is a fixed structure – a combination of a pulse number and a value hash. The value can be one of several types:
Record – immutable structured data unit. Can form chains if each record references a previous one in succession.
Index – meta information about record chains, e.g., pointers to the latest record in a chain. Represents an object.
Blob – immutable payload. Used to store (potentially big) chunks of serialized data, e.g., object’s memory. Usually, records refer to blobs to store application data.
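The fixed key structure (a pulse number combined with a value hash) might be sketched as follows. The serialization layout is an assumption chosen so that ordering keys groups all values of one pulse together; it is not Insolar’s actual storage format:

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// Key addresses a value (record, index, or blob) in the ledger's
// key-value storage: a pulse number plus the hash of the value.
type Key struct {
	Pulse uint32
	Hash  [32]byte
}

// makeKey derives the key under which a value is stored.
func makeKey(pulse uint32, value []byte) Key {
	return Key{Pulse: pulse, Hash: sha256.Sum256(value)}
}

// Bytes serializes the key pulse-first for use in an ordered store.
func (k Key) Bytes() []byte {
	out := make([]byte, 4+32)
	binary.BigEndian.PutUint32(out[:4], k.Pulse)
	copy(out[4:], k.Hash[:])
	return out
}

func main() {
	record := []byte("amend: object memory v2")
	k := makeKey(42, record)
	fmt.Printf("pulse=%d key=%x...\n", k.Pulse, k.Bytes()[:8])
}
```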
Each operation performed by virtual nodes is registered as a request in the ledger. A request is a single record that contains the information necessary to perform an operation. Each request belongs to an object and is affined to it.
Each operation performed by virtual nodes has exactly one result. Although an operation can have many side effects (records stored in the ledger), the result represents a summary of that operation. So, each finished request has its own result, i.e., the result references its request. A request without an associated result stored in the ledger is pending.
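The request/result relationship can be modeled minimally as a map from each request to its result; any request without an entry is pending. The types below are invented for illustration:

```go
package main

import "fmt"

type recordID string

// ledger tracks which requests have an associated result record.
type ledger struct {
	resultFor map[recordID]recordID // request -> its result
}

// registerResult stores the result record that references a request.
func (l *ledger) registerResult(request, result recordID) {
	l.resultFor[request] = result
}

// pending returns the requests that have no result stored yet.
func (l *ledger) pending(requests []recordID) []recordID {
	var out []recordID
	for _, r := range requests {
		if _, done := l.resultFor[r]; !done {
			out = append(out, r)
		}
	}
	return out
}

func main() {
	l := &ledger{resultFor: map[recordID]recordID{}}
	reqs := []recordID{"req-1", "req-2", "req-3"}
	l.registerResult("req-1", "res-1")
	l.registerResult("req-3", "res-3")
	// req-2 has no result record, so it is still pending.
	fmt.Println("pending:", l.pending(reqs))
}
```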
Objects (contracts) are fundamental application building blocks. Borrowing OOP terminology, an object is a class instance. In other words, an object is a series of records that can be accessed via an index.
Each record represents an object’s state at a certain point. The state can contain the object’s memory at the point. Memory is a binary blob stored in the ledger and a contract can put any data it needs into it.
In a blockchain, objects cannot be modified, only appended by another record. Therefore, object states can be one of the following types:
Activated – the object has been initialized. This is the first state of any object and it contains initial memory.
Amended – the object’s memory has been modified. Contains new memory.
Deactivated – the object has been “removed” from the system. Since data cannot be removed from the chain, objects are simply marked as removed.
A succession of object records (states) is called a lifeline.
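Such a chain of append-only state records might be sketched as follows; the field names are illustrative assumptions, not the platform’s actual record layout:

```go
package main

import "fmt"

// StateType mirrors the three append-only object states.
type StateType int

const (
	Activated StateType = iota
	Amended
	Deactivated
)

// StateRecord is one immutable record in an object's lifeline; Prev points
// to the previous state, forming the chain.
type StateRecord struct {
	Type   StateType
	Memory []byte
	Prev   *StateRecord
}

// latestMemory walks from the head (newest record) and returns the current
// memory, or ok=false if the object has been deactivated.
func latestMemory(head *StateRecord) ([]byte, bool) {
	if head == nil || head.Type == Deactivated {
		return nil, false
	}
	return head.Memory, true
}

func main() {
	activate := &StateRecord{Type: Activated, Memory: []byte("v1")}
	amend := &StateRecord{Type: Amended, Memory: []byte("v2"), Prev: activate}
	if mem, ok := latestMemory(amend); ok {
		fmt.Printf("current memory: %s\n", mem)
	}
	// Deactivation appends a marker record; nothing is deleted.
	deactivate := &StateRecord{Type: Deactivated, Prev: amend}
	_, ok := latestMemory(deactivate)
	fmt.Println("alive after deactivation:", ok)
}
```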
An object is assembled from a lifeline via its index. As stated above, an index is a collection of pointers to the object’s records (states, requests, etc.). So, to get an object, all we need is its index. The ledger stores multiple versions of the object’s index depending on the pulse.
To preserve consistency, each operation is performed on a particular object’s version. To get an object to execute on, a virtual node sends an operation request based on which the object’s version is calculated. This way, two concurrent operations can be performed on different versions of said object.
The object’s lifeline is not the only chain, though. The ledger also stores any requests that belong to an object in a sideline. The general term for all the chains (lines) is a filament, so a complete object structure includes all of its filaments.
Objects have relations to other entities and to each other. Most of those relations are references in the object’s activation record.
Key figures in those relations are:
Object. Directly references a prototype. This reference cannot be changed during the object’s lifetime, although multiple objects can have the same prototype. Serves as an instance of a prototype.
Prototype. Special kind of object that acts as a template for building other objects. It contains default memory and directly refers to relevant code.
Relations between the entities are as follows:
Since both prototype and object are technically objects, they contain a reference to either:
prototype in case of an object, or
code in case of a prototype.
The general term for this reference is an image. In other words, an object’s image is its prototype, and a prototype’s image is its code.
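The object–prototype–code relations above can be sketched with plain references; the type names and the `codeOf` helper are invented for illustration:

```go
package main

import "fmt"

// Code is the executable logic a prototype refers to.
type Code struct{ Ref string }

// Prototype is a special kind of object acting as a template; its image
// is code, and it carries default memory for new instances.
type Prototype struct {
	Code          *Code
	DefaultMemory []byte
}

// Object is an instance of a prototype; its image is the prototype and
// that reference is fixed for the object's lifetime.
type Object struct {
	Prototype *Prototype
	Memory    []byte
}

// newObject instantiates a prototype, copying its default memory.
func newObject(p *Prototype) *Object {
	mem := append([]byte(nil), p.DefaultMemory...)
	return &Object{Prototype: p, Memory: mem}
}

// codeOf resolves an object's executable code through its image chain.
func codeOf(o *Object) *Code {
	return o.Prototype.Code
}

func main() {
	code := &Code{Ref: "wallet-v1"}
	proto := &Prototype{Code: code, DefaultMemory: []byte("balance=0")}
	obj := newObject(proto)
	fmt.Println("object runs code:", codeOf(obj).Ref)
}
```

Copying rather than sharing the default memory reflects that many objects may be instances of the same prototype yet evolve independently.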