Missed EL rewards stem from missing block proposals. This is clear enough to conceptualize. Where things start getting complicated is when trying to ascertain the value of the block that would have been produced. Unlike CL rewards, EL rewards for validators vary a lot more, based on the following two factors:
- Demand for blockspace (i.e. transaction volume) on the chain
- Procuring blocks from block builders vs crafting your own blocks (as a validator)
We are offering Approach 1 via our API. If you are interested in integrating any of the other approaches outlined on this page, get in touch with us!
Approach 1: Simple average of block value in an epoch
In its simplest form, missed EL rewards would be calculated similarly to CL missed rewards, by valuing the missed proposal at the simple average of the value of the blocks produced in the same epoch (a minimal sketch follows below). The advantages of this approach are that:
- It is easy to understand.
- It is easy enough to replicate.
- It does not really require information that does not live on-chain (so long as the calculation of validator rewards is performed correctly).
- It works well at the atomic level (a pubkey).
The disadvantages, on the other hand, are that:
- It flattens validators and operators insofar as their adoption of mev-boost goes, with the potential to unfairly penalize (for example) a validator that does not run mev-boost but missed a block in an epoch in which all produced blocks were mev-boost blocks.
- It does not capture the specifics of the circumstance as accurately as a “next block” or “average relay bid” reference does.
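To make the arithmetic concrete, here is a minimal sketch of Approach 1 in Python; the function and variable names are ours for illustration and are not part of any API.

```python
from statistics import mean

def missed_el_reward_simple_average(block_values_in_epoch: list[float]) -> float:
    """Approach 1: value a missed proposal at the simple average of the
    EL value of the blocks that *were* produced in the same epoch.

    `block_values_in_epoch` holds the EL value (in ETH) of every block
    produced in the epoch the miss occurred in.
    """
    if not block_values_in_epoch:
        return 0.0  # no produced blocks in the epoch to average over
    return mean(block_values_in_epoch)

# Example: 5 blocks produced in the epoch, one proposal missed.
produced = [0.031, 0.045, 0.120, 0.028, 0.052]
print(missed_el_reward_simple_average(produced))  # ~0.0552 ETH
```

Note how a single mev-boost block with a large MEV payout in the epoch lifts the average for everyone, which is exactly the flattening effect described above.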
Approach 2: Referencing relay bids for opportunity cost
Another approach is to reference the global average of bids from the relay APIs for that particular slot, specifically bids whose parent_hash matches the previous block (a minimal sketch follows below).
While one could instead use the max_bid from relays to measure up against, we have quickly found that in the majority of cases winning_bid ≠ max_bid. This is most likely because bids keep arriving after the winning_bid gets picked; naturally these later bids pack more transactions, and with more transactions the configurations for MEV multiply. The proposer could keep waiting for a later bid, but that would also increase the probability of missing the block altogether. Overall, while this would be the purest form of opportunity cost, it is also highly unrealistic given our observations and penalizes missed proposals unfairly.
There are some clear advantages with this approach, namely that:
- It captures a pure version of opportunity cost that references the state of the world at t=n when said block would have been produced.
- It offers realistic hard data about the state of the world at t=n.
The disadvantages, on the other hand, are that:
- It assumes that every validator is running mev-boost, unfairly penalizing those that do not.
- At any scale of downstream adoption, it creates incentives for running mev-boost, and is therefore more opinionated.
- It is harder to replicate as it assumes widespread access to the archive of relay bids; this archive is not on-chain, data is in parts ephemeral, and while Rated does have all this data, it also becomes a choke point.
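For illustration, here is a minimal sketch of Approach 2, assuming access to an archive of relay bids; the Bid shape and field names are hypothetical and do not reflect any particular relay's schema.

```python
from dataclasses import dataclass
from statistics import mean
from typing import Optional

@dataclass
class Bid:
    slot: int
    parent_hash: str
    value_eth: float  # value of the bid as offered to the proposer

def missed_el_reward_from_relay_bids(
    bids: list[Bid], missed_slot: int, previous_block_hash: str
) -> Optional[float]:
    """Approach 2: average the relay bids for the missed slot that build
    on the previous block (i.e. whose parent_hash matches it)."""
    eligible = [
        b.value_eth
        for b in bids
        if b.slot == missed_slot and b.parent_hash == previous_block_hash
    ]
    if not eligible:
        return None  # no matching relay data; fall back to another approach
    return mean(eligible)
```

Averaging the eligible bids rather than taking their maximum reflects the observation above that the winning_bid is rarely the max_bid.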
Approach 3: Referencing the next block produced
The third approach available is to reference the value of the next block produced (a minimal sketch follows below), such that:
- It is more specific to the condition of the validator that missed the block.
- The data is on-chain and therefore easy to replicate.
- It is simple enough to calculate.
- It is more stochastic than either Approach 1 or 2 as there is no smoothing, and is sensitive to spikes in demand for blockspace.
- It does not distinguish whether the validator is running mev-boost or not, and is therefore subject to the same class of disadvantages as Approach 2.
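A minimal sketch of Approach 3; the block_value_by_slot mapping and the lookahead window are stand-ins for an on-chain lookup of EL block value by slot.

```python
from typing import Optional

def missed_el_reward_next_block(
    block_value_by_slot: dict[int, float],
    missed_slot: int,
    max_lookahead: int = 32,
) -> Optional[float]:
    """Approach 3: value the missed proposal at the EL value of the next
    block that actually made it on-chain after the missed slot."""
    for slot in range(missed_slot + 1, missed_slot + 1 + max_lookahead):
        if slot in block_value_by_slot:
            return block_value_by_slot[slot]
    return None  # no block produced within the lookahead window

# Example: slot 100 was missed; the next produced block landed in slot 101.
values = {99: 0.04, 101: 0.09}
print(missed_el_reward_next_block(values, 100))  # 0.09 ETH
```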
Approach 4: Abstracting to the operator level
The final possible approach is to combine elements of Approaches 1 and 2 with some probabilistic weighting of where the next block might come from, guided by an operator-level distribution. Say validator A belongs to a set of keys under operator B, and operator B has produced x% mev-boost blocks and y% vanilla blocks. We could therefore calculate the value of a missed block as the x/y-weighted average of the operator's mev-boost and vanilla block values (a minimal sketch follows below). The advantages are that:
- It captures the probability space well and produces an expected value of opportunity cost.
- It is specific to the context of each of the validators.
- In a world of perfect information, it is probably the most accurate representation of missed value.
The disadvantages are that:
- It assumes perfect knowledge of pubkey-to-operator mappings. In reality, this is very fickle and only translates well to operators that correspond to on-chain registries (e.g. the Lido set). Hence this methodology does not scale well horizontally.
- It comes with the same challenges as Approach 2 with respect to access to MEV relay data.
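A minimal sketch of Approach 4 under the assumptions above (a reliable pubkey-to-operator mapping and access to relay data); the inputs are illustrative aggregates rather than outputs of any specific API.

```python
def missed_el_reward_operator_weighted(
    mev_boost_share: float,            # x expressed as a fraction, e.g. 0.85
    avg_mev_boost_block_value: float,  # operator B's average mev-boost block value (ETH)
    avg_vanilla_block_value: float,    # operator B's average vanilla block value (ETH)
) -> float:
    """Approach 4: expected value of the missed block, weighted by how
    often operator B produces mev-boost vs vanilla blocks."""
    vanilla_share = 1.0 - mev_boost_share  # y = 1 - x
    return (
        mev_boost_share * avg_mev_boost_block_value
        + vanilla_share * avg_vanilla_block_value
    )

# Example: operator B produces 85% mev-boost and 15% vanilla blocks.
print(missed_el_reward_operator_weighted(0.85, 0.06, 0.02))  # 0.054 ETH
```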