Verifiable Randomness Systems
Developers often assume that once they have a single random seed, everything else becomes simple.
This is a dangerous assumption.
In systems that require fairness, replayability, or verification, a seed alone is not sufficient. You also need a deterministic and canonical way to derive multiple outcomes from that seed.
This document explains why.
A typical belief looks like this:
“I have a seed. If my algorithm is deterministic, I can always regenerate the same random values.”
This is only partially true.
A seed guarantees determinism, but it does NOT guarantee a canonical order of draws or a fixed meaning for each value.
Real systems never generate just one random value. They generate many: a shuffle, several draws, tie-breakers, bonus rolls.
If all of these are derived from the same seed without structure, you immediately face ambiguity. Questions arise: which value was drawn first? Which value belongs to which purpose? Can a single value be recomputed without replaying everything else?
Without strict rules, verification becomes impossible.
Consider two systems using the same seed. One consumes random values in one order; the other consumes them in a different order.
Even with the same seed and the same algorithm, the results differ. A verifier has no way to know which order was “correct”. This destroys auditability.
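This failure mode can be sketched with Python's standard `random` module; the seed value and the names `shuffle`/`bonus` are purely illustrative:

```python
import random

SEED = 12345

# System A: draws the "shuffle" value first, then the "bonus" value.
rng_a = random.Random(SEED)
shuffle_a = rng_a.randrange(100)
bonus_a = rng_a.randrange(100)

# System B: same seed, same algorithm, same two calls -- opposite order.
rng_b = random.Random(SEED)
bonus_b = rng_b.randrange(100)
shuffle_b = rng_b.randrange(100)

# The values swap meaning: A's "shuffle" is B's "bonus" and vice versa.
assert shuffle_a == bonus_b and bonus_a == shuffle_b
```

Nothing in the seed or the algorithm tells a verifier which assignment of values to purposes was intended.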
To make randomness verifiable, each derived value must be tied to an explicit, immutable counter.
Instead of:
“Give me the next random number”
You must say:
“Give me the random number at index 3”
The index is part of the input. Not implicit state. Not execution order. Not call count.
A counter is explicit, immutable, and independent of execution order.
With counters, any value can be recomputed in isolation.
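A minimal sketch of indexed derivation, assuming the final seed is a byte string and hashing the seed together with a fixed-width index; the function name, SHA-256 choice, and modulus are illustrative, not prescribed by this document:

```python
import hashlib

def value_at(seed: bytes, index: int, modulus: int = 100) -> int:
    """Derive the outcome at one counter index, independent of all other calls.

    Illustrative construction: SHA-256 over the seed concatenated with the
    8-byte big-endian index, reduced modulo `modulus`.
    """
    digest = hashlib.sha256(seed + index.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") % modulus

seed = b"example-final-seed"

# "Give me the random number at index 3" -- no prior draws are required,
# so a verifier can recompute exactly this value in isolation.
v3 = value_at(seed, 3)
assert v3 == value_at(seed, 3)
assert 0 <= v3 < 100
```

Because the index is part of the input, recomputing index 3 never depends on whether indices 0 through 2 were ever evaluated.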
The correct model looks like this:
FinalSeed + CounterIndex → One specific outcome
For example, counter 0 might identify the shuffle, counter 1 the first draw, and counter 2 the tie-breaker.
Each counter index has a fixed meaning. This mapping must never change.
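One way to make such a mapping concrete is to publish it as a constant table. The indices and purpose names below are purely illustrative:

```python
# Hypothetical canonical counter allocation, published as part of the protocol.
# Once released, these assignments must never change.
COUNTER_MAP = {
    0: "deck_shuffle",
    1: "first_draw",
    2: "tie_breaker",
    3: "bonus_roll",
}

def index_for(purpose: str) -> int:
    """Reverse lookup; raises KeyError if the purpose is not in the published map."""
    for idx, name in COUNTER_MAP.items():
        if name == purpose:
            return idx
    raise KeyError(purpose)

assert index_for("tie_breaker") == 2
```

Freezing this table in the protocol specification is what makes the mapping canonical rather than a matter of implementation convention.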
If counter meanings are flexible, verification breaks.
A canonical mapping means that every counter index has exactly one documented purpose, fixed before any randomness is revealed.
Once published, counter allocation becomes part of the protocol. Changing it later breaks backward verification.
Some systems simply advance the RNG state repeatedly. This creates hidden dependencies: every value depends on the exact sequence of calls before it, so a skipped, repeated, or reordered call silently shifts every later outcome.
Counters remove all of this fragility.
Without counters, a malicious server can reorder draws, discard unfavorable values, or claim that a value was generated for a different purpose than it actually was.
With counters, every value is bound to a fixed index, so any such manipulation is detectable by simple recomputation.
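Assuming a hash-based derivation of the form H(seed || index), checking one disputed value costs a single recomputation; the function and names below are illustrative:

```python
import hashlib

def value_at(seed: bytes, index: int, modulus: int = 100) -> int:
    # Illustrative seed-plus-index construction: SHA-256 over the seed
    # concatenated with the 8-byte big-endian counter index.
    digest = hashlib.sha256(seed + index.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") % modulus

def verify_claim(seed: bytes, index: int, claimed: int) -> bool:
    """A verifier checks one disputed value without replaying anything else."""
    return value_at(seed, index) == claimed

seed = b"example-final-seed"
honest = value_at(seed, 7)

assert verify_claim(seed, 7, honest)                   # honest claim checks out
assert not verify_claim(seed, 7, (honest + 1) % 100)   # tampered value fails
```

The verifier never needs the server's execution history: the seed, the index, and the claimed value are enough.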
BlockRand uses deterministic, published counters to achieve exactly these guarantees.