Transparency
Methodology
How facts are sourced, rated, kept current, and corrected. We prioritize peer-reviewed research and real-world data over theory, and recent evidence over older claims — because Bitcoin is a fast-evolving field.
Editorial position
The Bitcoin Evidence Base aims to be a one-stop reference for common claims about Bitcoin — energy and environment, money and volatility, security and quantum risk, regulation, decentralization, financial inclusion, and more. Every entry pairs a common claim with what peer-reviewed research and primary data actually show.
We are not neutral on facts. Where the evidence is settled, we say so. Where it is contested or evolving, we say that too. Our goal is to inform, not persuade.
Source hierarchy
Sources are ranked by independence, peer review, and proximity to primary data:
1. Peer-reviewed academic research: e.g. Joule, Nature, Cell Reports, Andrew M. Bailey (NUS)
2. Primary data from authoritative institutions: e.g. Cambridge CCAF, ERCOT, EIA, IPCC, NIST, World Bank
3. Independent on-chain or industry analysis: e.g. Chainalysis, Glassnode, Bloomberg Intelligence, MIT DCI
4. Industry reports with disclosed data: e.g. Galaxy Digital, CoinShares, BTC Policy Institute
5. Recognized analysts with a track record: e.g. Daniel Batten, Lyn Alden, Nic Carter, Alex Gladstein
6. Quality journalism citing the above: e.g. BBC, WSJ, Reuters, Bloomberg, FT
When sources at different tiers disagree, we report the disagreement rather than picking a side. Where a peer-reviewed paper has been formally rebutted (e.g., by DARI), we mark the original claim as debunked and link the rebuttal.
Confidence ratings
High: Tier 1–3 sources, primary data, or multiple independent confirmations. The claim is well-established and we would defend it confidently.
Medium: Tier 4–5 sources, single-source data, or fast-evolving topics where the evidence is reasonable but not yet conclusive.
Low: Theoretical arguments, contested topics, or claims that rest on a single analyst. Use with caveats.
Recency rule
Bitcoin and its surrounding context — energy mix, mining geography, regulation, network topology — change quickly. Our priority order:
Within each tier: recent (non-debunked) data and studies take precedence over older ones. Where a 2020 study has been superseded by 2024 data, we cite the newer source.
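As an illustrative sketch only (the `Fact` structure and `best_source` helper are our own naming, not part of the site's tooling), the precedence above — strongest tier first, and within a tier the most recent non-debunked source — can be expressed as:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Fact:
    claim: str
    source: str
    tier: int      # 1 (strongest) to 6, per the source hierarchy above
    year: int
    debunked: bool = False

def best_source(candidates: list[Fact]) -> Optional[Fact]:
    """Apply the recency rule: skip debunked sources, prefer the
    strongest tier, and within a tier prefer the most recent year."""
    live = [f for f in candidates if not f.debunked]
    if not live:
        return None
    # Sorting key: lower tier number wins; newer year breaks ties.
    return min(live, key=lambda f: (f.tier, -f.year))

facts = [
    Fact("mining energy mix", "Study A", tier=1, year=2020),
    Fact("mining energy mix", "Dataset B", tier=2, year=2024),
    Fact("mining energy mix", "Study C", tier=1, year=2024),
]
print(best_source(facts).source)  # Study C: same tier as A, but newer
```

In this toy example the 2024 Tier 1 study displaces the 2020 one, matching the superseded-study rule stated above.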
Phrasing adapted from Daniel Batten's editorial guidance for this project.
Editorial principles
We follow Daniel Batten's communication framework:
- 🛡️ Truth First. Acknowledge what is true in the criticism. Never exaggerate or spin.
- 💗 Influence, don't just inform. Create emotional connection before presenting data.
- 🎯 Check intention. Educate; don't try to win.
- 🏅 Authority + humility. Cite sources; admit complexity.
- 🌉 "Yes, and" — never "Yes, but". Acknowledge points without negating them.
Found something outdated or incorrect?
The fastest way to keep this database current is for readers to flag issues. If you spot a fact that seems out of date, contradicted by newer research, or simply wrong — please tell us, ideally with the source that updates it.
Every submission is reviewed manually before a fact is added, updated, or archived. Public corrections to existing facts include a changelog entry on /resources.
About the AI research tool
The AI assistant at /agent uses Anthropic's Claude Sonnet 4 with the entire fact database loaded as context. It does not invent facts — it draws from the same curated, sourced entries you can browse at /facts.
The tool follows the editorial principles above: it acknowledges valid criticism, cites sources, and uses "yes, and" framing rather than dismissive rebuttal. It is bound by an instruction to address only what the user actually said, not to import objections from related but different claims.
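We don't publish the tool's internals, but the general pattern — curated facts and editorial rules embedded in the system prompt so the model answers from sourced entries rather than invention — can be sketched as follows. All names here (`FACTS`, `build_system_prompt`, the sample entry) are hypothetical, not the actual implementation:

```python
import json

# Hypothetical fact entries; the real curated database lives at /facts.
FACTS = [
    {"claim": "Bitcoin mining's sustainable energy share",
     "finding": "Estimates vary by methodology; see cited primary data.",
     "source": "Cambridge CCAF", "tier": 2, "confidence": "High"},
]

EDITORIAL_RULES = (
    "Acknowledge what is true in the criticism. Cite sources. "
    "Use 'yes, and' framing. Address only what the user actually said."
)

def build_system_prompt(facts: list[dict]) -> str:
    """Embed the fact database and editorial principles in one system
    prompt, so every answer is grounded in the curated entries."""
    return (
        f"{EDITORIAL_RULES}\n\n"
        "Answer only from these curated facts:\n"
        + json.dumps(facts, indent=2)
    )

prompt = build_system_prompt(FACTS)
# The resulting string would be passed as the system prompt of a
# chat-completion call to the model.
```

The design choice this illustrates: because the whole database travels with every request, the assistant has no need to recall facts from training data, which is what keeps its answers tied to entries you can verify at /facts.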
If you see the AI generating something that misrepresents a source or makes up a citation, please report it via the channels above. We continually refine the system prompt to reduce these issues.