Executive Summary

September updates for the Codex project were mainly focused on the ongoing research and analysis of the proofing schemes and their impact on the overall architecture and network economy.

Key Updates

Personnel

A new Business Development (BD) job description was posted and candidates are currently being interviewed. This role is expected to help shape strategy around the much-needed partnerships for Codex and to liaise with the other BD-related resources within Logos to ensure efficient communication.

Milestones

The Codex team is broken up into five sections, and the weekly reports give details on how each of them has performed. Currently, the Milestone definitions are not aligned with this reporting process; this will be addressed in the coming month. The sections are:

  • Client
  • Infra
  • Marketplace
  • Research
  • DAS

Below is a summary of the key updates to these sections over the month of September 2023.

Client

The client continues to push towards Milestone 1.3: Codex v1.0 - Katana, which is slated for completion by the end of the year. Significant work has been done on merkelization, which is required in order to integrate the proving system; it can be followed in this working branch.
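For readers unfamiliar with the technique, below is a minimal sketch of merkelization: building a Merkle tree over data blocks and producing an inclusion proof of the kind a proving system consumes. This is a hypothetical Python illustration, not the Nim code in the working branch:

```python
# Minimal sketch of Merkle tree construction and inclusion proofs.
# Hypothetical illustration only -- not the Codex client's actual code.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves: list[bytes]) -> list[list[bytes]]:
    """Return all tree levels: hashed leaves first, root level last."""
    levels = [[h(leaf) for leaf in leaves]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        if len(prev) % 2:                      # duplicate last node on odd-sized levels
            prev = prev + [prev[-1]]
        levels.append([h(prev[i] + prev[i + 1]) for i in range(0, len(prev), 2)])
    return levels

def prove(levels: list[list[bytes]], index: int) -> list[bytes]:
    """Collect the sibling hashes on the path from leaf `index` to the root."""
    path = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        path.append(level[index ^ 1])          # sibling flips the lowest bit
        index //= 2
    return path

def verify(root: bytes, leaf: bytes, index: int, path: list[bytes]) -> bool:
    node = h(leaf)
    for sibling in path:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

blocks = [bytes([i]) * 64 for i in range(8)]   # eight 64-byte data blocks
levels = build_tree(blocks)
root = levels[-1][0]
assert verify(root, blocks[3], 3, prove(levels, 3))
```

The proof is just the sibling hashes along one root-to-leaf path, which is what keeps storage proofs small relative to the data they attest to.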

The Block Exchange protocol received some attention and refinement. Notes on the associated thinking can be found in two accompanying writeups.

In an effort to focus on the critical development path, this work has been paused in favor of the distributed systems testing work.

Progress was made on Codex’s ability to manage asynchronous and threaded disk IO. In the course of this work, a bug in Nim’s SharedPtr was discovered and fixed.
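The general pattern, sketched here in Python with asyncio purely as an illustration of the concept rather than the Nim implementation, is to push blocking disk reads onto worker threads so the async event loop is never stalled:

```python
# Illustrative sketch: offload blocking disk IO to a thread pool so the
# event loop stays responsive. Not the Codex/Nim implementation.
import asyncio
import os

def read_block(path: str, offset: int, size: int) -> bytes:
    """Blocking read; must not run on the event loop thread."""
    with open(path, "rb") as f:
        f.seek(offset)
        return f.read(size)

async def read_block_async(path: str, offset: int, size: int) -> bytes:
    # asyncio.to_thread runs the blocking call on a worker thread
    return await asyncio.to_thread(read_block, path, offset, size)

async def main() -> None:
    with open("data.bin", "wb") as f:          # sample data for the demo
        f.write(os.urandom(8192))
    chunks = await asyncio.gather(             # the two reads proceed concurrently
        read_block_async("data.bin", 0, 4096),
        read_block_async("data.bin", 4096, 4096),
    )
    print([len(c) for c in chunks])            # -> [4096, 4096]

asyncio.run(main())
```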

Infra

Grafana and Kibana instances were deployed to support the various testing efforts.

Marketplace

To alleviate a concurrency issue with Data Availabilities in the contract, a Reservation System has been proposed and worked on. This removes the previous constraint that the number of concurrent downloads was limited by the number of Availabilities.
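One way to picture the change (a hypothetical sketch with invented names, not the actual marketplace contract logic): instead of a download occupying an entire Availability, each download reserves just the bytes it needs, so concurrency is bounded by free space rather than by the number of Availabilities:

```python
# Hypothetical sketch of per-download reservations against one Availability.
# All names and structure here are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Availability:
    total_bytes: int
    reserved_bytes: int = 0
    reservations: dict[int, int] = field(default_factory=dict)
    _next_id: int = 0

    def reserve(self, size: int) -> int:
        """Carve out `size` bytes; many reservations may coexist."""
        if self.reserved_bytes + size > self.total_bytes:
            raise ValueError("insufficient free space in this Availability")
        self.reserved_bytes += size
        res_id = self._next_id
        self.reservations[res_id] = size
        self._next_id += 1
        return res_id

    def release(self, res_id: int) -> None:
        self.reserved_bytes -= self.reservations.pop(res_id)

# Three downloads share a single 1 GiB Availability concurrently,
# instead of each download needing its own Availability.
a = Availability(total_bytes=1 << 30)
ids = [a.reserve(100 << 20) for _ in range(3)]
```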

Research

The Codex Whitepaper v0.1 was drafted and is scheduled for release in October 2023. It is currently under review and being improved based on feedback.

There has been a large discussion this month around Erasure Coding (EC) for sampling. An analysis was performed looking at the various effects Erasure Coding schemes have on the sampling process and the associated data guarantees. A quote of the conclusion on parameter choices is below:

Quote

  • we cannot have a small slot size, because that would mean too many proofs by a node (≈ 1 TB seems to be a minimum)
  • we cannot have too small a block size, because the Merkle tree of the commitments will take too much space (say a minimum of 1024 bytes)
  • we cannot have too big a “checked sample” size, because we cannot do proofs for large amounts of data (say a maximum of 65536 bytes)
  • we cannot have too many sampling checks per slot, because we cannot do proofs for many samples (this depends on the block size and SNARK tech)
  • we probably want N and K parameters as big as possible, but actual implementations have limits
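To make these tensions concrete, below is a small back-of-the-envelope sketch. The 1TB slot, the shard sizes, and the 20-samples-per-proof figure are illustrative assumptions chosen to echo the bounds quoted above, not decided Codex parameters:

```python
# Back-of-the-envelope sketch of the trade-offs above. All numbers are
# illustrative assumptions, not chosen Codex parameters.
TB = 1 << 40

def proof_costs(slot_bytes: int, shard_bytes: int, samples_per_proof: int):
    num_shards = slot_bytes // shard_bytes           # drives EC width / Merkle tree size
    sampled_bytes = samples_per_proof * shard_bytes  # raw data the SNARK must hash
    return num_shards, sampled_bytes

# Sweep the shard (block) size across the quoted bounds for a 1TB slot:
for shard in (1024, 4096, 65536):
    shards, sampled = proof_costs(1 * TB, shard, samples_per_proof=20)
    print(f"{shard:>6}-byte shards: 2^{shards.bit_length() - 1} shards, "
          f"{sampled // 1024} KB hashed per proof")
```

Smaller shards shrink the bytes hashed per proof but inflate the shard count, and with it the commitment Merkle tree, which is exactly the tension the quoted bullets describe.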

A short review of Interleaving Schemes for Multidimensional Cluster Errors was performed here, and some general notes on Erasure Coding as it pertains to Codex were written up here. Many of these thoughts are being captured in the Erasure Coding Proofing document here. The conclusion section (at the time of writing) is copied below for convenience:

Quote

It is likely that, with the current state of the art in SNARK design and erasure coding implementations, we can only support slot sizes up to 4GB. There are two design directions that allow an increase in slot size. One is to extend an existing erasure coding implementation, or write a new one, to use a larger field size. The other is to use an existing erasure coding implementation in a multi-dimensional setting.

Two concrete options are:

  1. Erasure code with a field size that allows for 2^28 shards. Check 20 shards per proof. For 1TB this leads to shards of 4KB, which means the SNARK needs to hash 80KB plus the Merkle paths for a storage proof. This requires a custom implementation of Reed-Solomon, and at least 1 GB of memory while performing erasure coding.
  2. Erasure code with a field size of 2^16 in two dimensions. Check 160 shards per proof. For 1TB this leads to shards of 256 bytes, which means the SNARK needs to hash 40KB plus the Merkle paths for a storage proof. We can use the leopard library for erasure coding and keep memory requirements for erasure coding at a negligible level.

It appears the team is leaning towards the multi-dimensional approach to EC.
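As a quick sanity check, the arithmetic behind both options can be reproduced directly; the 1TB slot size, shard counts, and samples-per-proof values below are taken from the quoted options:

```python
# Quick check of the numbers quoted for the two options (1TB slot).
TB = 1 << 40
KB = 1 << 10

# Option 1: a field size allowing 2^28 shards, 20 shards checked per proof
shard1 = TB // 2**28          # 4096 bytes -> 4KB shards
hashed1 = 20 * shard1         # 81920 bytes -> 80KB hashed by the SNARK

# Option 2: field size 2^16 in two dimensions -> 2^32 shards,
# 160 shards checked per proof
shard2 = TB // 2**32          # 256-byte shards
hashed2 = 160 * shard2        # 40960 bytes -> 40KB hashed by the SNARK

assert (shard1, hashed1 // KB) == (4 * KB, 80)
assert (shard2, hashed2 // KB) == (256, 40)
```

Option 2 halves the bytes hashed per proof despite checking eight times as many shards, because the two-dimensional scheme makes each shard sixteen times smaller.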

DAS

Work continues on the DAS research in coordination with the Ethereum Foundation (EF). Following SBC, the EF published a blog post discussing a forward-thinking proposal for PeerDAS - a simpler DAS approach using battle-tested p2p components to which the team has contributed (referenced inside). Relevant conversations continue.

A Codex Blog post was published discussing two by-products of the DAS research: the characterization of big block diffusion latency on the existing Ethereum Mainnet and the existence of big organic blocks on Ethereum Mainnet and its implications. The conclusion is quoted below:

Quote

We have discovered a large number of big blocks (>250 KB) that occur organically every day in the Ethereum Mainnet. We have measured the propagation time of those blocks in three different world regions and compared their latency based on geographical location as well as block size. We have analysed how these propagation differences are reflected in the five CL clients separately, as they have different ways of reporting blocks. The empirical results measured in Ethereum Mainnet and presented in this work give us the first clear idea of how block propagation times might look when EIP-4844 is deployed, and 1 MB blocks become the standard and not the exception.

In the future, we plan to continue with these block propagation measurements and monitor the behaviour of big blocks in the Ethereum network. Additionally, we want to help different CL clients harmonise their event recording and publication systems in order to be able to compare CL clients between them.

Discussions with Felix Lange began around some fixes for Discv5.

Other

A Codex YouTube channel has been set up, and many tutorial videos and conference talks have been uploaded. Go like and subscribe!

Perceived Changes in Project Risk

In an effort to meet the MVP launch by the end of the year, significant resources have been diverted to engineering efforts. Jessie has taken on more responsibility in administration and project management duties, while Dmitriy has started to focus more on research and engineering needs.

The ongoing research around the Data Availability Proof system still has the potential to drastically change the overall architecture of the system and the associated resource costs for the various participants in the Codex Network. It is unclear how “locked in” the parts of the system included in the MVP launch are.

Future Plans

Insight

Because of the mismatch between the weekly updates and the Milestone definitions, it is difficult to assess the impact of any given update. Next month should see all milestone definitions published on this site, along with a reporting structure that maps to them more intuitively. It has also been noted that the current structure makes cross-team work difficult to track, which next month's changes aim to fix.

A Logos Collaborations section will be included next month to highlight alignment with the Logos Collective, as well as cross-project collaboration updates.

The reporting process has missed a good deal of work around the network simulation and modeling of Codex; we expect this to be corrected by next month through the previously mentioned actions.

Depending on the uptake and viability of the Waku reporting process for other projects, a range of quantitative measures will be included in the next monthly report.

Project

NEED INPUT HERE

Weekly Reports