Verifiable Randomness Systems
Many systems need to select a value uniformly from a very large range.
Developers often map randomness like this: random % N
This is incorrect whenever the size of the entropy space is not an exact multiple of N. The result is distribution bias, and in large outcome spaces even tiny bias can affect real-money outcomes.
Range mapping converts raw entropy into a number inside a desired interval.
If the entropy space is not a multiple of the target range, some numbers appear more often.
Example: 2^64 possible values mapped to 10 outcomes. Since 2^64 mod 10 = 6, outcomes 0 through 5 each occur one time more often than outcomes 6 through 9. This is called modulo bias.
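The skew is easiest to see on a toy scale, where the entire entropy space can be enumerated. A sketch (the 4-bit source is a deliberately small stand-in for the 2^64 space above):

```python
from collections import Counter

# Every value of a 4-bit source (16 values, each equally likely) mapped
# onto 10 outcomes with `% 10`:
counts = Counter(value % 10 for value in range(16))

# Outcomes 0-5 each appear twice; outcomes 6-9 appear only once.
# The same skew exists for 2^64 values mod 10, just spread across a
# space too large to notice by sampling.
for outcome in range(10):
    print(outcome, counts[outcome])
```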
In small games this bias is minor, but in systems where real money rides on the outcome it becomes financially significant.
When mapping into very large ranges, the bias becomes harder to detect but it still exists. Audits often miss it, and attackers can exploit the predictable skew.
The correct approach is rejection sampling: discard any raw value at or above the largest multiple of the range size that fits in the entropy space, and redraw. This guarantees perfect uniformity.
Entropy space: 0 → 99
Target range: 0 → 5
Largest multiple of 6 below 100 is 96.
Accept only: 0 → 95
Reject: 96 → 99
Then compute: value mod 6
Now each outcome is perfectly uniform.
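The worked example above can be sketched as follows (a minimal illustration; `draw_entropy` is a hypothetical stand-in for whatever entropy source the system actually uses):

```python
import secrets

def draw_entropy() -> int:
    """Stand-in entropy source: one uniform value in 0..99."""
    return secrets.randbelow(100)

def uniform_mod6() -> int:
    """Rejection sampling: accept only 0..95, then reduce mod 6."""
    while True:
        value = draw_entropy()
        if value < 96:          # 96 is the largest multiple of 6 below 100
            return value % 6    # each outcome 0..5 is now equally likely
        # values 96..99 are rejected: redraw
```

Each of the six outcomes now corresponds to exactly 16 accepted raw values (96 / 6), which is what makes the distribution uniform.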
With 64-bit entropy, the probability of a rejection is (2^64 mod N) / 2^64, which is always below 1/2 and astronomically small for small target ranges. So the performance impact is negligible while correctness is preserved.
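For concreteness, the rejection probability for a 64-bit draw can be computed directly (the 10-outcome range here is a hypothetical example):

```python
BITS = 64
N = 10                              # hypothetical target range
space = 2 ** BITS
threshold = space - (space % N)     # largest multiple of N that fits
p_reject = (space - threshold) / space

# 2^64 mod 10 = 6, so only 6 of the 2^64 raw values are ever rejected:
print(p_reject)                     # roughly 3.25e-19
```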
When rejection occurs, a new entropy draw is required. In deterministic RNG systems this is handled by deriving each draw from the seed together with an incrementing draw counter. This ensures every party replays the same sequence of draws and rejections. Without counters, reproducing the exact result becomes impossible.
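One common deterministic construction hashes a seed together with an incrementing counter, so every rejected draw is replayed identically by verifiers. A sketch, not a specific protocol: the seed format, the SHA-256 derivation, and the function names here are illustrative assumptions:

```python
import hashlib

def draw(seed: bytes, counter: int) -> int:
    """Derive one 64-bit value from seed + draw counter (illustrative)."""
    data = seed + counter.to_bytes(8, "big")
    return int.from_bytes(hashlib.sha256(data).digest()[:8], "big")

def uniform(seed: bytes, n: int) -> tuple[int, int]:
    """Rejection-sample a uniform value in 0..n-1.

    Returns (outcome, draws_used): publishing the draw count lets anyone
    replay the exact sequence of draws and rejections.
    """
    threshold = (2 ** 64 // n) * n      # largest multiple of n within 64 bits
    counter = 0
    while True:
        value = draw(seed, counter)
        counter += 1
        if value < threshold:
            return value % n, counter
```

Because every draw is a pure function of (seed, counter), two independent parties given the same seed always reach the same outcome.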
Example of a flawed alternative: int(random_float * N). Problems: an IEEE-754 double carries only 53 bits of mantissa, so for N above 2^53 many outcomes are unreachable, and the scaling and truncation introduce rounding bias of their own. Floating point should not be used for large discrete ranges.
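The precision limit is easy to demonstrate: an IEEE-754 double has 53 bits of mantissa, so integers above 2^53 stop being individually representable:

```python
N = 2 ** 53

# Adjacent integers above 2^53 collapse to the same double:
print(float(N) == float(N + 1))   # True: N + 1 is not representable

# So int(random_float * N) over, say, a 2^60-sized range can reach at
# most 2^53 distinct outcomes; the rest simply never occur.
```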
Taking only a few bytes of the entropy output is another common mistake: if the kept bits cannot cover the target range, some outcomes become unreachable or heavily skewed. Always use sufficient bit width.
Overflow or sign issues can silently wrap values or produce negative results, corrupting the mapping.
Biased large-range mapping can allow attackers to predict the skew and tilt outcomes in their favor.
In financial systems this becomes a liability. Even tiny bias can be exploited at scale.
A secure implementation should expose the raw entropy, the draw counter, and the exact mapping algorithm, so that anyone can recompute the outcome. This allows independent verification.
Rejection sampling is often avoided due to “performance concerns”. In practice, the rejection probability is always below 1/2, so the expected number of draws is below two for any range, and for small ranges a rejection almost never occurs.
Correctness should always win over micro-optimizations.
Secure large-range mapping is required wherever randomness decides real-money outcomes. Always use rejection sampling with sufficient bit width and a deterministic draw counter. Never rely on raw modulo, floating-point scaling, or truncated entropy.
Mapping randomness into large ranges is deceptively dangerous. Even tiny bias compounds at scale into exploitable, financially significant skew.
Rejection sampling with sufficient entropy is the only reliable way to guarantee uniform outcomes across large spaces.