Verifiable Randomness Systems
Rolling a die sounds trivial.
Most developers assume that generating a random number between 1 and 6 is as simple as:
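In Python, the naive version typically looks something like this (an illustrative sketch, not taken from any particular codebase):

```python
import random

def naive_roll():
    # Draw a raw 32-bit value, then reduce it with modulo.
    r = random.getrandbits(32)
    return r % 6 + 1  # shifts into [1, 6], but carries modulo bias
```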
This approach is wrong in most real systems — and the bias it introduces is subtle enough that it often goes unnoticed, yet significant enough to matter in games, betting systems, and simulations.
This document explains why the naive modulo approach is biased, how rejection sampling removes that bias, and why this matters in verifiable randomness systems.
A very common implementation looks like this:
Take a raw random value R
Compute R % 6
Add 1 to shift into [1, 6]

At first glance, this seems reasonable.
It is not.
Most random number generators produce values in a fixed range, for example:
0 to 2^32 - 1
0 to 2^64 - 1

These ranges are not divisible by 6.
That means some outcomes of R % 6 are produced by more raw values than others, so they appear more often.
This is called modulo bias.
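You can verify the divisibility claim directly:

```python
# Neither common RNG range is an exact multiple of 6,
# so R % 6 cannot be perfectly uniform over either range.
print(2**32 % 6)  # → 4
print(2**64 % 6)  # → 4
```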
Assume a generator produces numbers from 0 to 9 (10 total values).
If you compute R % 6, the mapping looks like this:
0 → 0
1 → 1
2 → 2
3 → 3
4 → 4
5 → 5
6 → 0
7 → 1
8 → 2
9 → 3
Outcome frequencies:
0, 1, 2, 3 → occur twice
4, 5 → occur once

This is not uniform.
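The same tally can be produced mechanically (a small Python check of the worked example above):

```python
from collections import Counter

# Count how often each outcome of r % 6 occurs for raw values 0..9.
counts = Counter(r % 6 for r in range(10))
print(dict(sorted(counts.items())))  # → {0: 2, 1: 2, 2: 2, 3: 2, 4: 1, 5: 1}
```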
The same problem exists with real RNG ranges — just harder to see.
Bias in dice rolls means some faces appear more often than others, which corrupts game fairness, betting odds, and simulation results.

Even a tiny bias becomes meaningful when rolls are repeated millions of times, when money rides on the outcome, or when results must withstand public verification.
To generate an unbiased dice roll, use rejection sampling.

Let the RNG produce values in [0, M):

Compute maxMultiple = (M / 6) * 6 using integer division
If R >= maxMultiple, reject R and draw a new value
Otherwise, return (R % 6) + 1

This ensures every face corresponds to exactly the same number of raw values, which guarantees a uniform distribution.
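A self-contained sketch of this procedure in Python (assuming a 32-bit raw source; `secrets` is used here only as an example of an unpredictable generator):

```python
import secrets

def fair_roll(sides: int = 6) -> int:
    # Raw values come from [0, M) with M = 2**32.
    M = 2**32
    max_multiple = (M // sides) * sides  # largest multiple of `sides` within range
    while True:
        r = secrets.randbits(32)
        if r < max_multiple:
            return (r % sides) + 1  # accepted: shift into [1, sides]
        # r fell in the biased tail: reject and draw again
```

The loop almost never iterates more than once, since only M % sides of the M possible raw values are ever rejected.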
A common worry is that rejection sampling is “slow”.
In reality, the rejection probability is tiny: for a 32-bit source and 6 faces, only 2^32 mod 6 = 4 of the 4,294,967,296 raw values are ever rejected, so the expected number of draws per roll is essentially 1.
Correctness matters far more than theoretical micro-optimizations.
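The expected cost can be computed directly (a quick check, assuming M = 2^32 and 6 faces):

```python
sides, M = 6, 2**32
rejected = M % sides          # raw values that fall in the biased tail
p_reject = rejected / M
print(p_reject)               # ≈ 9.3e-10
print(1 / (1 - p_reject))     # expected draws per roll: ~1.0000000009
```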
In verifiable systems, dice rolls should be deterministic given the entropy source, reproducible by any observer, and provably unbiased.

This means the mapping from raw entropy to a face must be fixed, publicly specified, and free of modulo bias, so that anyone replaying the same entropy arrives at the same roll.
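One way to sketch such a mapping (a hypothetical construction, assuming SHA-256 as the agreed derivation function and a shared seed):

```python
import hashlib

def verifiable_roll(seed: bytes, sides: int = 6) -> int:
    # Deterministically derive 32-bit candidates from the seed,
    # then apply the same rejection rule as before.
    M = 2**32
    max_multiple = (M // sides) * sides
    counter = 0
    while True:
        digest = hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        r = int.from_bytes(digest[:4], "big")  # a 32-bit raw value
        if r < max_multiple:
            return (r % sides) + 1
        counter += 1  # rejected candidate: derive the next one
```

Anyone holding the seed can replay the derivation and confirm the roll.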
The same principles apply to coin flips, 20-sided dice, card draws, lottery numbers, and any other selection from a fixed set of outcomes.
The number of sides changes — the math does not.
A dice roll is only fair if every outcome is equally reachable from the entropy source.
Modulo alone does not guarantee this.
Rejection sampling does.