Execution missed rewards computation
This page discusses Rated's methodology on calculating missed execution layer rewards emanating from missed block proposals.
At a high enough level, all missed rewards at the EL stem from missing block proposals. This is clear enough to conceptualize.
Where things start getting complicated is when trying to ascertain the value of the block that would have been. Unlike CL rewards, EL rewards for validators vary a lot more, based on the following two factors:
Demand for blockspace (i.e. transaction volume) on the chain
Procuring blocks from block builders vs crafting your own blocks (as a validator)
Complexity increases further once you consider groups of validators with similar configurations under a given operator; does one look at “what could have been” from the validator index perspective, or does it pay to aggregate upwards to the operator level and project back down to the index level from there?
In the following sections we discuss the merits and drawbacks of different approaches before arriving at the “current best” approach.
We are offering Approach 1 via our . If you are interested in integrating any of the other approaches outlined in this page, get in touch with us!
In its simplest form, missed EL rewards can be calculated similarly to missed CL rewards: take the average value of the blocks produced in the same epoch as the missed proposal, and assign it as the value of the missed block.
There are some clear advantages with this approach, namely that:
It is easy to understand.
It is easy enough to replicate.
It relies only on information that lives on-chain (so long as the calculation of validator rewards is performed correctly).
It works well at the atomic level (a single pubkey).
At the same time there are disadvantages to consider:
It flattens validators and operators with respect to their adoption of mev-boost, with the potential to unfairly penalize (for example) a validator that does not run mev-boost but missed a block in an epoch where all produced blocks were mev-boost blocks.
It does not capture the specifics of the circumstance as accurately as a “next block” or “average relay bid” approach does.
Another approach is to reference the global average of bids from the relay APIs for that particular block slot (specifically, bids whose parent_hash matches the previous block).
While we initially considered measuring up against the max_bid from relays, we quickly found that in the majority of cases winning_bid ≠ max_bid. This is most likely because bids keep arriving after the winning_bid gets picked; naturally these later bids pack more transactions, and with more transactions the configurations for MEV multiply. The proposer could keep waiting for a later bid, but that would also increase the probability of missing the block. Overall, while this would be the purest form of opportunity cost, it is also highly unrealistic given our observations, and would penalize missed proposals unfairly.
There are some clear advantages with this approach, namely that:
It captures a pure version of opportunity cost that references the state of the world at t=n when said block would have been produced.
It offers realistic hard data about the state of the world at t=n.
But at the same time there are some pretty serious disadvantages:
It assumes that every validator is running mev-boost, unfairly penalizing those that do not.
At any scale of downstream adoption, it creates incentives for running mev-boost, and is therefore more opinionated.
It is harder to replicate as it assumes widespread access to the archive of relay bids; this archive is not on-chain, data is in parts ephemeral, and while Rated does have all this data, it also becomes a choke point.
The third approach available is to reference the value of the next block produced, such that the missed block is assigned the realized EL value of the block that immediately follows it.
The rationale behind this is that the validator who missed the proposal would have had access to the same transactions as the next proposer and could then have at least built the same block with the same transactions and corresponding fees.
There are some advantages with this approach, namely that:
It is more specific to the condition of said validator that missed the block.
The data is on-chain and therefore easy to replicate.
It is simple enough to calculate.
But at the same time there are some pretty serious disadvantages:
It is more stochastic than either Approach 1 or 2 as there is no smoothing, and is sensitive to spikes in demand for blockspace.
It does not distinguish between whether the validator is running mev-boost or not, and is therefore subject to the same class of disadvantages as Approach 2 is.
The final possible approach is to combine elements of Approaches 1 and 2 with a probabilistic weighting of where the next block might have come from, guided by an operator-level distribution.
Say validator A belongs to a set of keys under operator B, and operator B has historically produced x% mev-boost blocks and y% vanilla blocks. We could then calculate the value of the missed block as the probability-weighted sum of the two estimates: x% of the average relay-bid value plus y% of the average vanilla block value.
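In expectation, this blends the two estimates; a sketch (the per-mode estimates would come from Approach 2's relay-bid average and Approach 1's on-chain average respectively; the names are illustrative):

```python
# Illustrative sketch of Approach 4: probability-weighted expected value of
# a missed block, using the operator's historical mev-boost share.

def expected_missed_value(p_mev: float, mev_estimate: float,
                          vanilla_estimate: float) -> float:
    """p_mev: operator's historical share of mev-boost blocks (x%).
    mev_estimate / vanilla_estimate: per-block value estimates under
    mev-boost and vanilla block production."""
    assert 0.0 <= p_mev <= 1.0
    return p_mev * mev_estimate + (1.0 - p_mev) * vanilla_estimate
```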
There are some strong advantages with this approach, namely that:
It captures the probability space well and produces an expected value of opportunity cost.
It is specific to the context of each of the validators.
In a world of perfect information, it is probably the most accurate representation of missed value.
But at the same time there are some pretty serious disadvantages:
It assumes perfect knowledge of pubkey-to-operator mappings. In reality, these mappings are fickle and only translate well to operations backed by corresponding on-chain registries (e.g. the Lido set). Hence this methodology does not scale well horizontally.
It comes with the same challenges as Approach 2 with respect to access to MEV relay data.