Verifiable Randomness Systems
Most developers think audits are about code.
They are not.
Audits are about evidence.
This document explains how randomness systems are actually audited in practice, what third parties look for, and why many systems fail audits even when the math is correct.
An audit does not ask:
“Is this random?”
It asks:
“Can anyone influence outcomes without being detected?”
Randomness quality is secondary. Influence resistance is primary.
Auditors typically fall into four groups.
Each group has different skills, but they all ask the same core questions.
The first audit step is replay: take the published inputs, run the published algorithm, and check that the recorded outcome comes out again.
If replay fails, the audit ends there.
Determinism is mandatory.
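A replay check can be sketched in a few lines. This is an illustrative assumption, not BlockRand's actual algorithm or record format: the function names, fields, and the SHA-256 derivation are all hypothetical, but the shape is what an auditor reproduces independently.

```python
import hashlib

def derive_outcome(seed: bytes, round_id: int, num_outcomes: int) -> int:
    """Deterministically map published inputs to an outcome index.

    (Hypothetical derivation; at 64 bits the modulo bias is negligible.)
    """
    digest = hashlib.sha256(seed + round_id.to_bytes(8, "big")).digest()
    return int.from_bytes(digest[:8], "big") % num_outcomes

def replay(record: dict) -> bool:
    """An auditor re-runs the published inputs and compares outcomes."""
    recomputed = derive_outcome(record["seed"], record["round_id"],
                                record["num_outcomes"])
    return recomputed == record["outcome"]

record = {"seed": b"published-seed", "round_id": 7, "num_outcomes": 52,
          "outcome": derive_outcome(b"published-seed", 7, 52)}
assert replay(record)                       # same inputs, same outcome
record["outcome"] = (record["outcome"] + 1) % 52
assert not replay(record)                   # any tampering fails replay
```

The point of the sketch: replay needs nothing from the operator except the published inputs and the published algorithm.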
Auditors enumerate every input to the randomness function.
If any input is hidden, mutable, or controlled by a single party, then trust is required. Auditors reject trust.
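The standard way to make an operator-held input auditable is a hash commitment published before the round. A minimal sketch, assuming SHA-256 commitments (the function names are illustrative, not a real API):

```python
import hashlib

def commit(seed: bytes) -> str:
    """Publish this digest before the round opens."""
    return hashlib.sha256(seed).hexdigest()

def verify_reveal(commitment: str, revealed_seed: bytes) -> bool:
    # The revealed seed must match the pre-published commitment,
    # so the operator cannot swap seeds after seeing the stakes.
    return hashlib.sha256(revealed_seed).hexdigest() == commitment

c = commit(b"operator-secret-seed")          # published up front
assert verify_reveal(c, b"operator-secret-seed")
assert not verify_reveal(c, b"a-different-seed")
```

A committed input is no longer a trusted input: anyone can check the reveal against the commitment.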
This is the most important step.
Auditors reconstruct a timeline: who knew which value, and when.
If any party knew the final outcome early, fairness is compromised.
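The timeline check itself is mechanical. A hedged sketch, assuming a commit-then-reveal flow with hypothetical field names:

```python
def timeline_ok(events: dict) -> bool:
    """The seed commitment must land before anyone could act on the
    outcome, and the reveal must come only after the round is locked."""
    return (events["commit_time"]
            < events["bets_closed_time"]
            < events["reveal_time"])

assert timeline_ok({"commit_time": 100,
                    "bets_closed_time": 200,
                    "reveal_time": 300})
# A commitment published after the round locked proves nothing:
assert not timeline_ok({"commit_time": 250,
                        "bets_closed_time": 200,
                        "reveal_time": 300})
```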
Auditors look for escape hatches: any path where a round can be aborted, retried, or silently re-rolled.
If outcomes can be discarded quietly, bias is possible. Fair systems force outcomes to complete.
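An illustrative simulation (not from the source) shows how much a single quiet retry is worth. An operator who can discard one unfavorable round and draw again turns a fair coin into a 3-to-1 coin:

```python
import random

rng = random.Random(0)          # fixed seed so the simulation is repeatable

def fair_flip() -> int:
    return rng.randrange(2)     # 0 or 1, each with probability 1/2

def flip_with_one_quiet_retry() -> int:
    result = fair_flip()
    if result == 0:             # unfavorable to the operator:
        result = fair_flip()    # discard it and draw again, unlogged
    return result

n = 100_000
biased = sum(flip_with_one_quiet_retry() for _ in range(n)) / n
# One quiet retry moves P(result == 1) from 1/2 to 1/2 + 1/4 = 3/4.
assert 0.73 < biased < 0.77
```

No tampering with the randomness itself is needed; the abort path alone creates the bias.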
Auditors check how random values are mapped to outcomes.
They verify that the mapping is fixed before the draw: all outcomes map to fixed indices.
If meaning can change after the fact, verification breaks.
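One way to pin the mapping down is to commit to the outcome table itself. A sketch with hypothetical names; the point is that "index 3" cannot be quietly redefined after the draw:

```python
import hashlib

# The table is frozen and its hash is published alongside the rules.
OUTCOMES = ("lose", "small win", "big win", "jackpot")
TABLE_HASH = hashlib.sha256("|".join(OUTCOMES).encode()).hexdigest()

def outcome_for(index: int) -> str:
    return OUTCOMES[index]      # fixed index -> fixed meaning

def verify_table(published_hash: str) -> bool:
    # An auditor checks the live table against the committed hash.
    current = hashlib.sha256("|".join(OUTCOMES).encode()).hexdigest()
    return current == published_hash

assert verify_table(TABLE_HASH)
assert outcome_for(3) == "jackpot"
```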
Auditors do not accept statistical test results alone.
They expect an argument that the outcome mapping is unbiased by construction.
Especially in gambling systems, theoretical bias matters.
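A concrete instance of the kind of bias that is provable rather than empirical: reducing a random byte modulo 52 is not uniform, because 256 = 4 * 52 + 48. The fix shown here is rejection sampling; the card-drawing framing is illustrative.

```python
import secrets

def biased_card(byte: int) -> int:
    return byte % 52            # subtly favors cards 0..47

def unbiased_card() -> int:
    # Rejection sampling: discard bytes >= 208 (the largest multiple
    # of 52 that fits below 256), then reduce. Every card is equally
    # likely, and the loop terminates quickly in expectation.
    while True:
        b = secrets.randbelow(256)
        if b < 208:
            return b % 52

# Exhaustive check over all 256 byte values: cards 0..47 occur
# 5 times each, cards 48..51 only 4 times. Provable, not empirical.
counts = [0] * 52
for byte in range(256):
    counts[biased_card(byte)] += 1
assert counts[0] == 5 and counts[51] == 4
```

A histogram over a few thousand rolls would never surface a 5-in-256 versus 4-in-256 skew; the construction argument surfaces it immediately.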
A strong audit requirement: outcomes must be verifiable with independent tooling.
If verification requires “trusting the SDK”, the system fails.
Auditors ask what happens when the algorithm changes.
Randomness systems must be versioned like protocols. Mutable logic destroys historical trust.
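Versioning can be sketched as an append-only registry: each historical round records which algorithm version produced it, so old outcomes stay verifiable under the rules that were live at the time. Everything here (derivations, field names) is a hypothetical illustration:

```python
import hashlib

def derive_v1(seed: bytes) -> int:
    return hashlib.sha256(seed).digest()[0] % 6   # v1: d6 via SHA-256

def derive_v2(seed: bytes) -> int:
    return hashlib.sha512(seed).digest()[0] % 6   # v2: hash was changed

# Append-only: old versions are never edited or removed.
ALGORITHMS = {1: derive_v1, 2: derive_v2}

def verify(record: dict) -> bool:
    derive = ALGORITHMS[record["version"]]        # replay the version
    return derive(record["seed"]) == record["outcome"]  # live back then

old = {"version": 1, "seed": b"s", "outcome": derive_v1(b"s")}
new = {"version": 2, "seed": b"s", "outcome": derive_v2(b"s")}
assert verify(old) and verify(new)  # history verifies under old rules
```

If v1 had simply been overwritten in place, every pre-upgrade outcome would become unverifiable.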
Systems fail for a simple reason.
Not because developers were dishonest. Because fairness was added late.
Systems that pass are built around these constraints from the start.
BlockRand passes all these.
Every audit ends with one question:
“Could the operator have influenced this outcome without leaving evidence?”
If the answer is no, the system passes. If the answer is “probably not”, it fails.
Audits do not reward cleverness. They reward constraint.
The best randomness systems are not impressive. They are inevitable.