This EIP introduces Blob Parameter Only (BPO) Hardforks, a lightweight mechanism for incrementally scaling Ethereum's blob capacity through targeted hard forks that modify only blob-related parameters: `target`, `max`, `baseFeeUpdateFraction`, and `maxBlobsPerTx`. Unlike traditional hard forks, which require extensive coordination and introduce broader protocol changes, BPO forks enable rapid, low-overhead scaling of blob capacity in response to real-world demand and network conditions.
Ethereum's scaling strategy relies on Layer 2 (L2) solutions for transaction execution while using Ethereum as a data availability (DA) layer. However, the demand for DA has increased rapidly, and the current approach of only modifying blob parameters in large, infrequent hard forks is not agile enough to keep up with L2 growth.
The key motivations for BPO forks are as follows:

1. **Scaling Ethereum's Data Availability Layer**: As L2 adoption grows, so does demand for blob space. BPO forks allow for more frequent, safer capacity increases.
2. **Reduced Operational Overhead**: By isolating blob parameter changes, BPO forks reduce the complexity of upgrades.
3. **Enhanced Stability with New Scaling Technologies**: The safe blob limits of a newly deployed scaling technology are difficult to predict in advance. Rather than forcing core developers to accept a suboptimal tradeoff between stability and capacity, BPO forks allow developers to safely increase parameters after observing mainnet performance and stability.
4. **Predictable Upgrades for Builders**: L2s and other ecosystem participants depend on Ethereum's blob capacity for their scaling roadmaps; a structured mechanism for blob parameter adjustments lets them plan with greater certainty.
BPO hardforks are defined as protocol upgrades that modify only blob-related parameters through configuration, without requiring any client-side code changes. The new parameters take effect immediately at the specified activation time.
The following protocol parameters are now managed by the blob schedule configuration:

- Blob target (`target`): The expected number of blobs per block.
- Blob limit (`max`): The maximum number of blobs per block.
- Blob limit per transaction (`maxBlobsPerTx`): The maximum number of blobs per transaction.
- Blob base fee update fraction (`baseFeeUpdateFraction`): Determines how blob gas pricing adjusts per block.

To ensure consistency, when a regular hardfork changes any of these parameters, it MUST do so by adding an entry to the blob schedule configuration.
To facilitate these changes on the execution layer, each fork in the `blobSchedule` object defined in EIP-7840 is linked to an activation timestamp via a top-level `<fork_name>Time` field, which holds the Unix timestamp of the activation slot as a JSON number. BPO forks SHOULD be named using the convention `bpo<index>`, where `<index>` starts at `1`. Left padding is unnecessary since these labels are not subject to lexicographic sorting. Activation timestamps are required only for forks that occur after Prague.
"blobSchedule": {
"cancun": {
"target": 3,
"max": 6,
"baseFeeUpdateFraction": 3338477
},
"prague": {
"target": 6,
"max": 9,
"baseFeeUpdateFraction": 5007716
},
"osaka": {
"target": 6,
"max": 9,
"maxBlobsPerTx": 6,
"baseFeeUpdateFraction": 5007716
},
"bpo1": {
"target": 12,
"max": 16,
"maxBlobsPerTx": 12,
"baseFeeUpdateFraction": 5007716
},
"bpo2": {
"target": 16,
"max": 24,
"maxBlobsPerTx": 12,
"baseFeeUpdateFraction": 5007716
},
},
"cancunTime": 0, // no backporting
"pragueTime": 0, // no backporting
"osakaTime": 1747387400,
"bpo1Time": 1757387400,
"bpo2Time": 1767387784,
A new `BLOB_SCHEDULE` field is added to the consensus layer configuration, containing a sequence of entries representing blob parameter changes after `ELECTRA_FORK_EPOCH`. There is one entry per fork that changes blob parameters, whether it is a regular or a Blob-Parameter-Only fork.
```yaml
BLOB_SCHEDULE:
  - EPOCH: 380000  # FULU_FORK_EPOCH (illustrative)
    MAX_BLOBS_PER_BLOCK: 12
  - EPOCH: 400000  # A future anonymous BPO fork
    MAX_BLOBS_PER_BLOCK: 24
  - EPOCH: 420000  # A future anonymous BPO fork
    MAX_BLOBS_PER_BLOCK: 56
  - EPOCH: 440000  # GLOAS_FORK_EPOCH; a future named fork introducing blob parameter changes
    MAX_BLOBS_PER_BLOCK: 72
```
The parameters and schedules above are purely illustrative. Actual values and schedules are beyond the scope of this specification.
Requirements:

- Each fork activation timestamp associated with the EL's `blobSchedule` MUST align with the start of the epoch specified in the consensus layer configuration (see the consistency sketch after this list).
- The `max` field in the EL's `blobSchedule` MUST equal the `MAX_BLOBS_PER_BLOCK` value in the consensus layer configuration.
- The `maxBlobsPerTx` field is optional and defaults to the value of `max` when omitted. It is not used by the consensus layer.
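A minimal sketch of the first two requirements as a configuration-time consistency check, assuming mainnet beacon chain timing constants and illustrative parameter names:

```python
SECONDS_PER_SLOT = 12  # Mainnet beacon chain value
SLOTS_PER_EPOCH = 32   # Mainnet beacon chain value


def epoch_start_timestamp(genesis_time: int, epoch: int) -> int:
    # Unix timestamp of the first slot of the given epoch.
    return genesis_time + epoch * SLOTS_PER_EPOCH * SECONDS_PER_SLOT


def check_el_cl_consistency(
    genesis_time: int,
    el_fork_time: int,   # <fork_name>Time from the EL configuration
    el_max: int,         # `max` from the corresponding EL blobSchedule entry
    cl_fork_epoch: int,  # EPOCH from the CL BLOB_SCHEDULE entry
    cl_max_blobs: int,   # MAX_BLOBS_PER_BLOCK from the CL entry
) -> None:
    # The EL activation timestamp MUST fall exactly on the CL epoch boundary.
    assert el_fork_time == epoch_start_timestamp(genesis_time, cl_fork_epoch)
    # The EL blob limit MUST equal the CL's MAX_BLOBS_PER_BLOCK.
    assert el_max == cl_max_blobs
```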
#### `compute_fork_digest`

The `compute_fork_digest` helper is updated to account for BPO forks:
```python
@dataclass
class BlobScheduleEntry:
    epoch: Epoch
    max_blobs_per_block: uint64  # Aligning with the type of MAX_BLOBS_PER_BLOCK


def compute_fork_digest(
    current_version: Version,  # Unchanged; refers to the baseline hardfork atop which the blob schedule is applied
    genesis_validators_root: Root,  # Unchanged
    current_epoch: Epoch,  # New
    blob_schedule: Sequence[BlobScheduleEntry]  # New
) -> ForkDigest:
    """
    Return the 4-byte fork digest for the ``current_version`` and ``genesis_validators_root``,
    bitmasking blob parameters after ``ELECTRA_FORK_VERSION``.

    This is a digest primarily used for domain separation on the p2p layer.
    4-bytes suffices for practical separation of forks/chains.
    """
    base_digest = compute_fork_data_root(current_version, genesis_validators_root)[:4]

    # Find the blob parameters applicable to this epoch.
    sorted_schedule = sorted(blob_schedule, key=lambda e: e.epoch, reverse=True)
    blob_params = None
    for entry in sorted_schedule:
        if current_epoch >= entry.epoch:
            blob_params = entry
            break

    # This check enables us to roll out the BPO mechanism without a concurrent parameter change.
    if blob_params is None:
        return ForkDigest(base_digest)

    # Safely bitmask blob parameters into the digest.
    assert 0 <= blob_params.max_blobs_per_block <= 0xFFFFFFFF
    mask = blob_params.max_blobs_per_block.to_bytes(4, 'big')
    masked_digest = bytes(a ^ b for a, b in zip(base_digest, mask))
    return ForkDigest(masked_digest)
```
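To make the masking step concrete, here is a small worked example of the XOR arithmetic with hypothetical values (a base digest of `0xbba4da96` and `MAX_BLOBS_PER_BLOCK = 24`; neither is from any real network):

```python
base_digest = bytes.fromhex("bba4da96")  # Hypothetical 4-byte base fork digest
mask = (24).to_bytes(4, "big")           # MAX_BLOBS_PER_BLOCK = 24 -> 0x00000018
masked_digest = bytes(a ^ b for a, b in zip(base_digest, mask))

assert masked_digest.hex() == "bba4da8e"  # 0x96 ^ 0x18 == 0x8e
# XOR is self-inverse, so applying the same mask again recovers the base digest;
# different blob parameter sets therefore yield distinct digests atop one baseline fork.
assert bytes(a ^ b for a, b in zip(masked_digest, mask)) == base_digest
```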
In the consensus layer, ENRs are extended with an additional entry `nfd`, short for "next fork digest". This field communicates the digest of the next scheduled fork, regardless of whether it is a regular or BPO fork. This approach is preferred over encoding BPO-specific parameters because it is agnostic to specific use cases and offers greater long-term flexibility.

| Key   | Value                     |
|-------|---------------------------|
| `nfd` | SSZ `Bytes4` `ForkDigest` |
When discovering and interfacing with peers, nodes MUST evaluate `nfd` alongside their existing consideration of the `ENRForkID::next_*` fields under the `eth2` key, to form a more accurate view of the peer's intended next fork.
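As a rough sketch of how that evaluation might look, assuming hypothetical accessors for the peer's ENR and the locally computed next fork digest (none of this is any client's actual API):

```python
def peer_on_same_fork_schedule(peer_enr: dict, our_next_fork_digest: bytes) -> bool:
    # A peer without an `nfd` entry cannot be judged on this basis alone;
    # fall back to the existing ENRForkID::next_* evaluation in that case.
    peer_nfd = peer_enr.get("nfd")
    if peer_nfd is None:
        return True
    # Otherwise the peer's advertised next fork digest should match ours,
    # whether the next fork is a regular fork or a BPO fork.
    return peer_nfd == our_next_fork_digest
```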
#### Status req/resp

No changes are needed in this interaction, but it is noted that the response payload must correctly contain the updated `fork_digest`.
No changes are required to topic structure or configuration. However, all topics will automatically rotate at a BPO fork due to changes in their `ForkDigestValue` component.
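For illustration, gossip topic names embed the fork digest via the `/eth2/ForkDigestValue/Name/Encoding` template from the consensus p2p specification, so rotation falls out automatically; the digests below are hypothetical:

```python
def gossip_topic(fork_digest: bytes, name: str, encoding: str = "ssz_snappy") -> str:
    # Topic template: /eth2/ForkDigestValue/Name/Encoding
    return f"/eth2/{fork_digest.hex()}/{name}/{encoding}"


# The same logical topic before and after a BPO fork differs only in its
# ForkDigestValue component.
assert gossip_topic(bytes.fromhex("bba4da96"), "beacon_block") == "/eth2/bba4da96/beacon_block/ssz_snappy"
assert gossip_topic(bytes.fromhex("bba4da8e"), "beacon_block") == "/eth2/bba4da8e/beacon_block/ssz_snappy"
```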
A BPO fork may decrease the number of blobs allowed per block or transaction. This means that transactions in the pool with a high blob count may become ineligible for inclusion. Such transactions MUST NOT be propagated and should be considered for eviction from the transaction pool.
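A minimal sketch of the pool-side consequence, assuming a hypothetical pooled-transaction type with a `blob_count` field (not any client's actual data structure):

```python
from dataclasses import dataclass
from typing import List


@dataclass
class PooledBlobTx:
    tx_hash: str
    blob_count: int


def evict_oversized_blob_txs(pool: List[PooledBlobTx], new_max_blobs_per_tx: int) -> List[PooledBlobTx]:
    # After a BPO fork lowers maxBlobsPerTx, transactions exceeding the new
    # limit can never be included again; they must not be propagated and are
    # candidates for eviction.
    return [tx for tx in pool if tx.blob_count <= new_max_blobs_per_tx]
```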
Full hard forks require extensive coordination, testing, and implementation changes beyond parameter adjustments. For example, in Lighthouse, a typical hard fork implementation requires thousands of lines of boilerplate before any protocol changes occur. BPO forks streamline this process by avoiding the need for this boilerplate code.
Allowing blob parameters to be configured externally enables rapid experimentation, testing, and adjustments without requiring code changes across client implementations. Testing teams can investigate different parameters with minimal involvement from client implementers.
BPO forks introduce no backwards compatibility concerns.
No security risks have been identified.
Copyright and related rights waived via CC0.