Data Flows
180Protocol provides a coordination and communication layer above lower-level enclave computing platforms.
Enclave computing refers to the use of specialized hardware enclaves to protect data-in-use by embedding certain application processes within the enclave. 180Protocol uses Intel SGX enclave technology; for background information on Intel SGX, please see here. For the initial release, we use R3's Conclave technology as a layer between 180Protocol and the core SGX primitives. Please see here for additional information on Conclave.


180Protocol is designed to enable better collaborative data sharing among enterprises. Business data is sensitive, making privacy a top priority. Enclave computing adds a further layer of mitigation against data leakage, lowering the barrier to data sharing.
Within a 180Protocol data aggregation flow, data is encrypted on the client side prior to transmission to the enclave. The enclave is operated by the coalition host in a dedicated cloud environment on Microsoft Azure. Upon initiation, the coalition must agree on the data transformation algorithm and the rewards engine variables that will operate trustlessly within the enclave. Outputs from the enclave are similarly encrypted, and are decrypted by the designated data consumer.
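The encrypt-to-enclave round trip described above can be sketched with a standard hybrid scheme. This is a conceptual stand-in, not the Conclave Mail implementation: in Conclave, the enclave's public key arrives inside the verified attestation (the EnclaveInstanceInfo), and the PostOffice handles key generation, encryption, and message sequencing. All class and method names below are illustrative.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.PrivateKey;
import java.security.PublicKey;
import java.security.SecureRandom;
import java.util.Arrays;

// Toy stand-in for the Conclave Mail pattern: a provider encrypts its data set
// so that only the enclave can read it. In the real flow the enclave's public
// key comes out of the remote attestation, not a local key-pair generator.
public class EnclaveMailSketch {

    // Simulates the enclave's key pair (the private key never leaves the enclave).
    static KeyPair enclaveKeys() throws Exception {
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        return kpg.generateKeyPair();
    }

    // Provider side: generate a fresh AES key for this aggregation, encrypt the
    // payload with it, and wrap the AES key with the enclave's public key.
    static byte[][] encryptForEnclave(byte[] plaintext, PublicKey enclaveKey) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(256);
        SecretKey aes = kg.generateKey();

        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);
        Cipher gcm = Cipher.getInstance("AES/GCM/NoPadding");
        gcm.init(Cipher.ENCRYPT_MODE, aes, new GCMParameterSpec(128, iv));
        byte[] ciphertext = gcm.doFinal(plaintext);

        Cipher rsa = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");
        rsa.init(Cipher.ENCRYPT_MODE, enclaveKey);
        byte[] wrappedKey = rsa.doFinal(aes.getEncoded());
        return new byte[][] { wrappedKey, iv, ciphertext };
    }

    // Enclave side: unwrap the AES key with the enclave's private key and
    // decrypt the provider's data inside the protected memory region.
    static byte[] decryptInEnclave(byte[][] mail, PrivateKey enclavePriv) throws Exception {
        Cipher rsa = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");
        rsa.init(Cipher.DECRYPT_MODE, enclavePriv);
        SecretKey aes = new SecretKeySpec(rsa.doFinal(mail[0]), "AES");
        Cipher gcm = Cipher.getInstance("AES/GCM/NoPadding");
        gcm.init(Cipher.DECRYPT_MODE, aes, new GCMParameterSpec(128, mail[1]));
        return gcm.doFinal(mail[2]);
    }

    public static void main(String[] args) throws Exception {
        KeyPair enclave = enclaveKeys();
        byte[] data = "provider-data-set".getBytes();
        byte[][] mail = encryptForEnclave(data, enclave.getPublic());
        byte[] seenByEnclave = decryptInEnclave(mail, enclave.getPrivate());
        System.out.println(Arrays.equals(data, seenByEnclave)); // true
    }
}
```

The same pattern applies in reverse for the enclave's output: the enclave encrypts the result so that only the designated data consumer holds the key to decrypt it, and the host merely relays ciphertext.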

Consumer Aggregation Flow
1. ConsumerAggregationFlow allows consumers in coalitions to initiate a data aggregation request.
2. The consumer can specify a supported dataType and initiate a request to the coalition host to perform an aggregation.
3. The host checks the coalition configuration to determine whether the requested data type is valid and supported by the coalition.
4. The host initializes the enclave and gets the enclave attestation using the Conclave Mail API.
5. The host then sends the enclave attestation to the data consumer and each provider in the network, and requests encrypted data from the providers.
6. Providers receive the request from the host, validate the requested data type, validate the enclave attestation bytes from the host, and query their private data stores for the specified data set. Note: currently, providers must have a data set pre-uploaded to their Corda node's attachment store for each supported data type in order to participate in the aggregation and receive rewards.
7. Providers encrypt the private data set by generating a new private key for the aggregation and using the enclave attestation bytes. See the Conclave Mail API for further details on the PostOffice used to manage communication between the enclave and the client.
8. Providers send the encrypted data back to the host.
9. The host gathers the encrypted data from all providers and sends it, along with the requested aggregation dataType, to the enclave.
10. The enclave receives the aggregation request and validates whether the requested dataType is supported.
11. Encrypted data from the providers is decrypted by the enclave.
12. Apache Avro is used to deserialize the decrypted data according to a pre-agreed input schema.
13. The enclave computes the data output for the consumer and calculates rewards for each provider. Note: the algorithms to compute the data output and rewards can be configured by extending the AggregationEnclave interface.
14. The enclave sends the encrypted data output and decrypted rewards to the host.
15. The host sends the encrypted data output to the consumer, which decrypts it using its own instance of PostOffice.
16. The consumer creates a DataOutputState, transacting and signing with the host to store it on their respective ledgers as proof of data aggregation. The consumer also stores the data in its local Corda database for ease of querying.
17. Providers receive their rewards from the host and create a RewardsState, transacting and signing with the host to store it on their respective ledgers as proof of reward for the data aggregation.
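Step 13 notes that the output and rewards algorithms are pluggable via the AggregationEnclave interface. The sketch below is a toy stand-in for that enclave-side computation, assuming a simple illustrative rule (rewards proportional to the rows each provider contributed); it is not the protocol's actual rewards engine, and the class and method names are hypothetical.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative stand-in for the enclave-side computation: pool each provider's
// decrypted rows into one data output and score rewards by contribution share.
public class AggregationSketch {

    // Combine all providers' rows into the consumer's data output.
    static List<String> computeOutput(Map<String, List<String>> providerRows) {
        List<String> output = new ArrayList<>();
        for (List<String> rows : providerRows.values()) {
            output.addAll(rows);
        }
        return output;
    }

    // Toy rewards rule: each provider's share of the total rows supplied.
    static Map<String, Double> computeRewards(Map<String, List<String>> providerRows) {
        int total = providerRows.values().stream().mapToInt(List::size).sum();
        Map<String, Double> rewards = new HashMap<>();
        providerRows.forEach((provider, rows) ->
                rewards.put(provider, (double) rows.size() / total));
        return rewards;
    }

    public static void main(String[] args) {
        Map<String, List<String>> byProvider = new LinkedHashMap<>();
        byProvider.put("providerA", List.of("r1", "r2", "r3"));
        byProvider.put("providerB", List.of("r4"));
        System.out.println(computeOutput(byProvider).size());            // 4
        System.out.println(computeRewards(byProvider).get("providerA")); // 0.75
    }
}
```

In the actual flow, the output would then be encrypted for the consumer and the per-provider rewards handed to the host for distribution, as in steps 14-17 above.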

Provider Aggregation Flow

Coming soon...