
All Core Devs - Consensus (ACDC) #154 #1399

Open
ralexstokes opened this issue Mar 20, 2025 · 7 comments
Labels
ACD Type: All Core Dev calls - execution & consensus Consensus Layer: Issues that affect the consensus layer

Comments

@ralexstokes
Member

ralexstokes commented Mar 20, 2025

All Core Devs - Consensus (ACDC) #154, April 3, 2025

Agenda

  1. Electra
  2. PeerDAS / blob scaling
  3. Research, spec, etc.

Facilitator email: stokes@ethereum.org

@vbuterin

One thing that we should start agreeing on asap is the exact way "blob parameter only forks" can be done.

Here's the approach that I currently favor:

  1. We make the blob target/limit staker-voted, just like the gas limit (perhaps we hard-code target = limit * 2/3, and make the blob limit staker-voted)
  2. We have a social consensus rule that it's okay for clients to proactively increase the default value that stakers vote for

This has a few benefits:

  1. With this approach, increasing the blob parameter count becomes at worst as hard as doing an actual (minimal) hard fork. This is because doing any kind of hard fork requires stakers to update their nodes, and this approach also gives us blob parameter increases as soon as stakers update their nodes, because new versions of node software will have new default voting blob parameters.
  2. In fact, it becomes faster than a hard fork. This is because (i) we don't need the overhead of testing, devops, and reaching consensus, and (ii) we only have to wait for 51% of stakers to update their nodes, rather than 100%.
  3. If the hard fork machinery stalls for whatever reason (this has happened in the past), we can still keep increasing blob parameters.
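A minimal sketch of the voting mechanics described above, mirroring how the EL gas-limit vote works: each block may move the limit by a bounded step toward the proposer's desired value, and the target is hard-coded at 2/3 of the limit. The `ADJUSTMENT_QUOTIENT` constant and function names are illustrative assumptions, not part of any spec:

```python
# Hypothetical sketch of staker-voted blob limits, modeled on the EL
# gas-limit vote: each block may move the limit by at most
# parent_limit // ADJUSTMENT_QUOTIENT toward the proposer's desired value.
# The quotient of 1024 is borrowed from the gas-limit rule for illustration.

ADJUSTMENT_QUOTIENT = 1024

def vote_blob_limit(parent_limit: int, desired_limit: int) -> int:
    """Return the child block's blob limit given the proposer's vote."""
    max_delta = parent_limit // ADJUSTMENT_QUOTIENT
    if desired_limit > parent_limit:
        return min(desired_limit, parent_limit + max_delta)
    return max(desired_limit, parent_limit - max_delta)

def blob_target(limit: int) -> int:
    """Hard-coded target = limit * 2/3, per the proposal above."""
    return limit * 2 // 3
```

Under this scheme, a node release that ships a higher default `desired_limit` ratchets the chain's limit upward as soon as a majority of proposers run it, with no fork activation needed.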

@KolbyML
Member

KolbyML commented Mar 26, 2025

LightClientHeader includes invalid ExecutionHeader due to incorrect transactions_root and withdrawals_root calculation #4214

Currently the Light Client Protocol provides invalid Execution Headers, this causes a problem for users trying to track the EL chain using a Consensus Light Client.

Possible solutions

    1. Update the EL to use SSZ hash_tree_root for calculating transactions_root and withdrawals_root, or
    2. Update the CL to properly build ExecutionPayloadHeader in the light client protocol

I assume option 2 would be simpler, but option 1 would be more aligned with the long-term goal of SSZ-ifying the EL.

This is a fairly big problem for any project trying to utilize the Light Client Protocol, e.g. Portal being able to validate the last 8192 EL blocks, since historical_summaries is only updated every 8192 slots or so.

Ideally we could resolve this in either Electra or Fulu; it would be great to get clarity from others on the most realistic path to resolve this problem and get the fix included in a fork.
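To make the mismatch concrete, here is a deliberately simplified illustration (not the real SSZ or MPT spec): the CL's ExecutionPayloadHeader commits to the transactions with an SSZ-style sha256 binary Merkle tree, while the EL header commits to the same transactions with a keccak-based Merkle-Patricia trie, so the two 32-byte roots are unrelated. The `mpt:`-prefixed hash is a stand-in for the real trie root:

```python
# Toy illustration of why LightClientHeader's transactions_root cannot match
# the EL header's: the two commit to the same data via different hash trees.
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def ssz_style_root(chunks: list[bytes]) -> bytes:
    """Simplified SSZ-like merkleization: 32-byte leaves, sha256 pairing."""
    leaves = [c.ljust(32, b"\x00")[:32] for c in chunks] or [b"\x00" * 32]
    while len(leaves) & (len(leaves) - 1):  # pad leaf count to a power of two
        leaves.append(b"\x00" * 32)
    while len(leaves) > 1:
        leaves = [sha256(leaves[i] + leaves[i + 1])
                  for i in range(0, len(leaves), 2)]
    return leaves[0]

txs = [b"tx-one", b"tx-two"]
cl_root = ssz_style_root(txs)               # SSZ-style root (CL side)
el_root = sha256(b"mpt:" + b"".join(txs))   # stand-in for the keccak/MPT root
assert cl_root != el_root  # the mismatch the issue describes
```

Option 1 would collapse the two schemes into one (the EL adopting hash_tree_root), while option 2 would have the CL compute the MPT-style roots the EL already uses.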

@ethDreamer

ethDreamer commented Mar 26, 2025

@vbuterin

One thing that we should start agreeing on asap is the exact way "blob parameter only forks" can be done.

We make the blob target/limit staker voted, just like the gas limit (perhaps, we hard-code target = limit * 2/3, and make the blob limit staker voted)

I also like this. The only problem is that things are pretty tight in Fusaka right now and people want to ship PeerDAS ASAP, so I proposed a less invasive version that just uses a config change, so we can easily fit BPO forks into Fusaka. I'm happy with an on-chain voting version like that, but if people think that's too much, I'd rather get the minimal version in than nothing:

https://eips.ethereum.org/EIPS/eip-7892

The EIP is a little outdated on the reasoning; the main reasons to do this (IMO) are:

  • More flexibility - blob count can be increased outside hard forks
  • Easier testing - a single testnet that continuously increases blob limits at regular intervals to find bottlenecks
  • Easy to implement - I prototyped it in Lighthouse and Reth and got it working in ~100 lines each. Hearing it might be more in Nimbus, though.
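A hedged sketch of the config-driven BPO idea from EIP-7892: blob limits live in the client config as an epoch-keyed schedule, so raising them requires only shipping a new config, not new fork logic. The schedule entries below are illustrative, not proposed mainnet values:

```python
# Illustrative BPO-fork schedule (EIP-7892 style): the blob limit in force
# is whichever schedule entry most recently activated. Values are made up.
BLOB_SCHEDULE = [  # (activation_epoch, max_blobs_per_block), ascending
    (0, 6),
    (100_000, 9),
    (120_000, 12),
]

def max_blobs_at(epoch: int) -> int:
    """Return the blob limit in force at the given epoch."""
    current = BLOB_SCHEDULE[0][1]
    for activation, limit in BLOB_SCHEDULE:
        if epoch >= activation:
            current = limit
    return current
```

Because the lookup is a pure function of the config and the epoch, adding a new BPO step is a one-line config change rather than a fork-version bump, which is what keeps the prototype implementations so small.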

More flexibility is especially important given that:

  1. It's difficult to say how well testnet performance will translate to mainnet given the number of unknown parameters (bandwidth distribution, supernode distribution, timing games)
  2. There are a number of proposals to improve gossip to scale blobs further.

Work on gossipsub will continue in parallel to PeerDAS work but will likely lag behind, so blob capacity when PeerDAS is ready will likely not match blob capacity after these networking improvements. We shouldn't need to delay shipping a moderate capacity increase in PeerDAS just because a larger capacity increase will be available later; it's much better to do a BPO fork once the networking improvements are done.

Either way, I would love to discuss this on the call.

@KolbyML
Member

KolbyML commented Mar 26, 2025


After talking with @ralexstokes and @etan-status in ethereum/consensus-specs#4214 (comment), it looks like option 2 was already attempted 3 years ago and didn't gain any steam.

So I don't think this problem should be discussed on ACDC, as the only clear way to resolve it, to my understanding, is to push for the EL to use SSZ hash_tree_root for calculating transactions_root and withdrawals_root under the serialization-harmonization part of the Ethereum roadmap. By updating the EL header, EL light clients could follow the chain solely using a consensus light client, since the EL header would then be included in the LightClientUpdates.

@etan-status

etan-status commented Mar 27, 2025

This is a fairly big problem for any projects which are trying to utilize the Light Client Protocol e.g. Portal being able to validate the last 8192 EL blocks, since historical_summaries are only updated every 8192 slots or so.

That's somewhat possible to validate today (via REST against an untrusted node)!

For the current sync committee period (since the last 8192-slot boundary), you can use the BeaconState.state_roots / BeaconState.block_roots properties to root your proof. See the approach I am using for (WIP) beacon state snap sync work: https://github.com/etan-status/consensus-specs/blob/lc-snapsync/specs/altair/light-client/beacon-snapshot.md#beacon-snapshot-validation

For the last 8192 EL blocks, you can also use eth_call on the EIP-4788 contract (with an in-process trusted EVM backed by eth_getProof-based "beam sync").
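For reference, the EIP-4788 beacon-roots contract is queried by passing a 32-byte big-endian timestamp as calldata; it returns the parent beacon block root stored for that timestamp in its ring buffer of 8191 entries. A sketch of building that eth_call request (the surrounding RPC transport is left out; only the deployed contract address and calldata encoding come from EIP-4788):

```python
# Build an eth_call JSON-RPC request against the EIP-4788 beacon-roots
# contract: calldata is the 32-byte big-endian timestamp whose parent
# beacon block root we want back.
BEACON_ROOTS_ADDRESS = "0x000F3df6D732807Ef1319fB7B8bB8522d0Beac02"  # EIP-4788

def beacon_root_call(timestamp: int) -> dict:
    """Return eth_call request params for the beacon root at `timestamp`."""
    calldata = "0x" + timestamp.to_bytes(32, "big").hex()
    return {
        "method": "eth_call",
        "params": [
            {"to": BEACON_ROOTS_ADDRESS, "input": calldata},
            "latest",
        ],
    }
```

Since the buffer holds 8191 entries keyed by `timestamp % 8191`, this only reaches back roughly one sync committee period, matching the window discussed above.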


@timbeiko
Contributor

On ACDE#208, we tentatively set the Pectra mainnet date to April 30. We should validate this on ACDC and ensure all teams have filled out the incident response plan.

@ralexstokes
Member Author

Leaving this here to start the discussion around Fusaka blob counts:

https://hackmd.io/@ralexstokes/blob-acc-2025

Vitalik also suggested a more gradual formula that I think is worth considering:

`max = 2 * (8 + (weeks % 8)) * (2 ** (weeks // 8))` and then `target = max * 2 / 3`
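The formula steps the max up linearly within each 8-week period and doubles it at every 8-week boundary. A quick sketch of the schedule it produces (assuming floor division for the integer target, which the one-liner above leaves unspecified):

```python
# Vitalik's gradual blob schedule: max starts at 16, rises by 2 per week
# within an 8-week period, then the base doubles each period.
def blob_params(weeks: int) -> tuple[int, int]:
    """Return (max, target) blobs per block, `weeks` after activation."""
    max_blobs = 2 * (8 + (weeks % 8)) * (2 ** (weeks // 8))
    target = max_blobs * 2 // 3  # assumed floor division
    return max_blobs, target
```

For example, week 0 gives max 16, week 8 gives max 32, and week 16 gives max 64, so capacity doubles every 8 weeks while never jumping by more than one weekly step at a time.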
