Verifiable Randomness Systems
“Don’t trust — verify” isn’t a slogan here, it’s the design requirement.
A provably fair system is only meaningful if you can verify outcomes without believing anything the server says: not its reported seeds, not its displayed results, not its claims about the algorithm it ran.
This article shows how verification works even if you assume the server is hostile.
In a proper provably fair system, the verifier trusts only:

- standard cryptographic hash functions (e.g., SHA-256)
- the published commitments, seeds, and results
- the publicly documented derivation algorithm
That’s it.
No APIs, no screenshots, no “trust us” dashboards.
To independently verify any outcome, you should be able to obtain:

- the server seed and its pre-published commitment
- the user seed and its commitment
- the documented combination rule and derivation algorithm
- the published result
If any of these are missing, verification is incomplete.
First, confirm neither party changed their input.
```
SHA256(server_seed) == server_commitment
SHA256(user_seed)   == user_commitment
```
If either check fails, that party altered its seed after committing, and the outcome is invalid.
This alone eliminates post-hoc manipulation.
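A minimal sketch of this check in Python, assuming seeds are UTF-8 strings and commitments are lowercase hex SHA-256 digests (the encoding convention is platform-specific; the seed values below are purely illustrative):

```python
import hashlib

def check_commitment(revealed_seed: str, commitment: str) -> bool:
    # Hash the revealed seed and compare against the pre-published commitment.
    digest = hashlib.sha256(revealed_seed.encode("utf-8")).hexdigest()
    return digest == commitment.lower()

# Hypothetical values, not from any real platform:
server_seed = "d1e8a70b5ccab1dc2f56bbf7e99f064a"
server_commitment = hashlib.sha256(server_seed.encode("utf-8")).hexdigest()

print(check_commitment(server_seed, server_commitment))       # True: seed unchanged
print(check_commitment("something-else", server_commitment))  # False: seed was altered
```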
Using the documented combination rule, recompute the final entropy source.
Example:
```
combined_seed = SHA256(server_seed || user_seed)
```
Important:
If the platform can’t clearly explain this step, walk away.
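As a sketch of that combination rule, assuming SHA-256 over the concatenated seeds (the delimiter, encoding, and ordering here are assumptions; use whatever rule the platform actually documents):

```python
import hashlib

def combine_seeds(server_seed: str, user_seed: str) -> str:
    # Concatenate with a delimiter so ("ab", "c") and ("a", "bc") cannot
    # produce the same input, then hash. The "|" delimiter is an assumption,
    # not a standard; it must match the platform's documented rule.
    data = f"{server_seed}|{user_seed}".encode("utf-8")
    return hashlib.sha256(data).hexdigest()

combined = combine_seeds("server-seed", "user-seed")
print(len(combined))  # 64 hex characters = 256 bits
```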
From the combined seed, derive randomness exactly as specified.
Examples:

- interpret the leading bytes of the hash as an integer and reduce it into the outcome range
- expand the combined seed with repeated hashing or HMAC when multiple values are needed
This must be:

- deterministic
- fully documented
- free of hidden server-side inputs
Running it twice must produce the same output every time.
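One common mapping, as a sketch (the exact derivation is platform-specific, and naive modulo reduction introduces a slight bias that careful platforms correct for):

```python
import hashlib

def derive_roll(combined_seed: str, sides: int = 100) -> int:
    # Deterministic: interpret the first 8 hex characters (32 bits) of the
    # combined seed as an integer and reduce it into [0, sides).
    return int(combined_seed[:8], 16) % sides

combined = hashlib.sha256(b"server-seed|user-seed").hexdigest()
print(derive_roll(combined) == derive_roll(combined))  # True: same input, same output
```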
Now compare:
```
recomputed_result == published_result
```
If they match: the published outcome really was produced from the committed inputs.
If they don't: the seeds, the algorithm, or the published result was manipulated.
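Putting the steps together, a self-contained offline verifier sketch (the combination rule and result mapping used here are illustrative assumptions; substitute the platform's documented versions):

```python
import hashlib

def sha256_hex(s: str) -> str:
    return hashlib.sha256(s.encode("utf-8")).hexdigest()

def verify_outcome(server_seed: str, user_seed: str,
                   server_commitment: str, user_commitment: str,
                   published_result: int, sides: int = 100) -> tuple[bool, str]:
    # Step 1: confirm neither party changed their input after committing.
    if sha256_hex(server_seed) != server_commitment:
        return False, "server seed does not match its commitment"
    if sha256_hex(user_seed) != user_commitment:
        return False, "user seed does not match its commitment"
    # Step 2: recompute the combined seed (assumed rule: SHA-256 of "server|user").
    combined = sha256_hex(f"{server_seed}|{user_seed}")
    # Step 3: re-derive the result (assumed mapping: first 32 bits mod sides).
    recomputed = int(combined[:8], 16) % sides
    # Step 4: compare against what the platform published.
    if recomputed != published_result:
        return False, f"recomputed {recomputed}, platform published {published_result}"
    return True, "verified"
```

Everything here runs offline: no API calls, no network access, no trust in the operator.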
Even if:

- the operator is actively hostile
- every dashboard and support channel lies
- the website disappears tomorrow

Verification still works, because:

- the commitments were published before the seeds were revealed
- recomputation uses only public data and a deterministic algorithm
The server becomes irrelevant after publishing the data.
Be alert for these red flags:
❌ “Verification available via our API only”
❌ Commitments not publicly logged
❌ Seeds revealed only on request
❌ Conditional re-rolls or retries
❌ Extra entropy injected server-side
Each of these reintroduces trust — and defeats the entire purpose.
Ask one question:
“Can I verify this result offline, with nothing but the data and the algorithm?”
If the answer is no, the system is not provably fair.
True verification gives you:

- independence from the operator
- audits you can repeat offline, at any time
- evidence you can show to anyone
Once users can verify without trusting the server, fairness becomes enforceable.