Trust Without Ceremonies: How Nostr Fixed the Web of Trust
- The ceremony problem
- Trust as byproduct
- Computing trust from the social graph
- Vertex and npub.world
- What remains
In 1992, Phil Zimmermann added a feature to PGP version 2.0 that was supposed to solve one of cryptography’s hardest problems. He called it the “web of trust.” The idea was elegant: users would vouch for each other’s keys directly, building chains of signatures through which strangers could establish each other’s identities. You sign my key, I sign yours, and the network grows from those attestations.
The vision was decentralized. It was also a complete failure.
Thirty years later, the PGP keyserver network is dead. GnuPG disabled web of trust functionality by default after spam attacks made keys unusable. The dream of cryptographic trust anchored in peer attestation collapsed because the design asked too much of humans.
Nostr has quietly succeeded where PGP failed. Its web of trust works precisely because users think nothing of it.
The ceremony problem
PGP’s web of trust required explicit trust rituals. You would attend a “key signing party,” verify someone’s identity through government documents, sign their key with your private key, then upload that signature to a keyserver. Back home, you would configure your keyring, assigning trust levels (unknown, marginal, full, ultimate) to various keys. The software would then calculate which keys were “valid” based on weighted combinations of trusted signatures.
The workflow never spread beyond cryptography enthusiasts. Tim Berners-Lee, reflecting on PGP’s failure to reach mass adoption, noted the UX failures: dialog boxes telling users to “do X” with no button to do X, multi-step processes with no explanation of what any of it meant, the general sense that using encryption required joining a secret priesthood.
But the deeper problem was structural. Most users believed the web of trust worked like “six degrees of separation,” where trust would propagate through long chains of connections. It did not. As Hal Finney explained in 1994, “You can only communicate securely with people who are at most two hops away in the web of connections.” You could trust keys signed by people you personally knew. That was it.
By 2019, the keyserver infrastructure was collapsing. Malicious actors discovered they could flood popular keys with thousands of garbage signatures, causing GnuPG to crash on import. The SKS keyserver network, which had synchronized keys globally since the early 2000s, shut down entirely in 2021 after operators found GDPR deletion requests impossible to process on a system designed to be append-only.
The explicit trust model created a bureaucracy. Bureaucracies don’t survive contact with spam.
Trust as byproduct
Nostr takes the opposite approach, extracting trust signals from actions users already take.
When you follow someone on Nostr, you publish a kind 3 event listing every pubkey you follow. This is the normal behavior of using a social network, with no security ritual attached. But that follow list, signed by your key, is a cryptographic attestation. You are implicitly saying: these are the people whose content I want to see, whose judgment I find valuable enough to include in my feed.
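Schematically, a kind 3 contact list is an ordinary signed Nostr event whose tags carry one "p" entry per followed pubkey. The sketch below uses placeholder pubkeys and elides the real id and signature; the field layout follows NIP-01/NIP-02:

```python
# Sketch of a Nostr kind 3 (contact list) event, per NIP-02.
# Pubkeys, id, and sig below are placeholders, not real values.
follow_list_event = {
    "kind": 3,                      # contact list
    "pubkey": "a1b2...author",      # the key doing the following
    "created_at": 1700000000,
    "tags": [
        ["p", "c3d4...alice"],      # one "p" tag per followed pubkey
        ["p", "e5f6...bob"],
        ["p", "0789...carol"],
    ],
    "content": "",
    "id": "...",                    # sha256 of the serialized event (elided)
    "sig": "...",                   # author's signature over the id (elided)
}

# Each "p" tag is a signed attestation: this author chose to follow that key.
followed = [tag[1] for tag in follow_list_event["tags"] if tag[0] == "p"]
```

Because the whole event is signed, anyone reading it from a relay can verify that these follows really came from that author, which is what makes the list usable as trust data.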
When you mute someone, that too becomes a signed event, a warning to anyone who shares your sensibilities.
When you zap someone, you attach an economic cost to your endorsement. Fake accounts are cheap; sats are scarce.
The Nostr protocol did not invent these actions. Follows, mutes, and tips existed on centralized platforms for years. What Nostr added was cryptographic signatures, public attestability, and the ability to aggregate these signals into trust scores. The same behaviors that made Twitter addictive now make Nostr’s web of trust function.
What eluded PGP was this: trust should be a byproduct of normal activity, a signal captured from what users were going to do anyway. The cypherpunk who wants encrypted communication and the newcomer who just wants to post both produce useful trust signals through ordinary use.
Computing trust from the social graph
Raw follow lists and zap receipts are data. Turning them into usable trust scores requires computation.
The dominant approach borrows from Google’s original insight. PageRank, the algorithm that made web search work, solved a similar problem: determining which pages were important based on link structure. A page linked by many important pages was itself important. The algorithm resisted spam because fake pages linking to you provided no benefit unless those fake pages were themselves linked by real pages.
Personalized PageRank adapts this for social trust, computing importance relative to a specific user’s position in the graph. To determine how much to trust some pubkey you’ve never seen, the algorithm simulates random walks through the follow graph starting from your account. The more often those walks land on that pubkey, the more connected it is to people you already trust, and the higher its score.
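The random-walk idea can be sketched in a few lines. This is a minimal Monte Carlo version over a toy follow graph, not any service’s actual implementation; the graph, parameter values, and function name are illustrative:

```python
import random
from collections import Counter

def personalized_pagerank(follows, source, walks=10_000, alpha=0.85, max_len=20):
    """Monte Carlo personalized PageRank over a follow graph.

    `follows` maps pubkey -> list of pubkeys it follows. Each walk starts
    at `source` and, at every step, either stops (probability 1 - alpha)
    or hops to a random followee. Visit frequency approximates trust
    from `source`'s point of view.
    """
    visits = Counter()
    for _ in range(walks):
        node = source
        for _ in range(max_len):
            visits[node] += 1
            if random.random() > alpha or not follows.get(node):
                break
            node = random.choice(follows[node])
    total = sum(visits.values())
    return {pk: count / total for pk, count in visits.items()}

# Toy graph: you follow alice and bob, who both follow carol. A spammer
# follows you, but nobody in your graph follows the spammer back.
graph = {
    "you": ["alice", "bob"],
    "alice": ["carol"],
    "bob": ["carol", "alice"],
    "spammer": ["you"],
}
scores = personalized_pagerank(graph, "you")
# carol earns a score via multiple trusted paths; the spammer earns none,
# because no walk starting from "you" can ever reach them.
```

The spam resistance falls out of the walk direction: following you costs the spammer nothing but gives them nothing, since walks flow outward along your follows, never backward along theirs.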
Nostr.Band uses this when filtering search results. It seeds initial trust to accounts with verified NIP-05 identities, then lets PageRank propagate through the network. “If initial weight is given to a spammer by some accident,” their documentation explains, “they are most likely losing it all by the end of the calculation, because almost no one interacts with their content.”
Coracle, the client built by hodlbod, implements a simpler version directly: your WoT score for someone equals how many people you follow who also follow them, penalized by how many people you follow who have muted them. Crude but effective.
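That heuristic fits in one function. The sketch below is an illustration of the described scoring rule, not Coracle’s actual code; the data structures and names are assumptions:

```python
def wot_score(my_follows, follows_of, mutes_of, target):
    """Coracle-style WoT score: the number of people I follow who also
    follow `target`, minus the number of people I follow who muted them.

    `my_follows` is the set of pubkeys I follow; `follows_of[pk]` and
    `mutes_of[pk]` are the follow and mute sets pk has published.
    """
    followers = sum(1 for pk in my_follows if target in follows_of.get(pk, set()))
    muters = sum(1 for pk in my_follows if target in mutes_of.get(pk, set()))
    return followers - muters

# Example: I follow alice and bob; both follow carol; bob also follows
# mallory but has muted them.
follows_of = {"alice": {"carol"}, "bob": {"carol", "mallory"}}
mutes_of = {"bob": {"mallory"}}
print(wot_score({"alice", "bob"}, follows_of, mutes_of, "carol"))    # 2
print(wot_score({"alice", "bob"}, follows_of, mutes_of, "mallory"))  # 0
```

Unlike the random-walk approach, this only looks two hops out, but that is exactly the neighborhood where Finney argued trust judgments are meaningful anyway.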
Vertex and npub.world
For developers who want web of trust scores without building graph analysis infrastructure, Vertex offers them as a service. Their system crawls Nostr follow lists continuously, computes Monte Carlo PageRank scores, and exposes them through a DVM (data vending machine) interface. Query with a source pubkey and a target pubkey; get back a personalized trust score, follower counts, and the target’s highest-ranked followers.
The companion tool npub.world provides a search interface for finding profiles within the Nostr network, leveraging the same trust infrastructure.
Vertex explicitly rejected the emerging NIP-85 standard for “trusted assertions,” which takes a different architectural approach. Under NIP-85, service providers publish kind 30382 events that make claims about entities. The d tag identifies the subject (typically a pubkey), and additional tags carry the assertions: a rank score, follower counts, zap totals, or any other metric the provider computes. These events sit on relays like any other Nostr data, and clients can subscribe to assertions from providers they trust.
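Schematically, such an assertion event mirrors any other addressable Nostr event. In the sketch below, the "d" tag follows NIP-85, while the metric tag names, pubkeys, id, and signature are placeholders chosen for illustration:

```python
# Sketch of a NIP-85 kind 30382 "trusted assertion" event. The metric
# tag names ("rank", "followers") are illustrative examples of what a
# provider might publish; pubkeys, id, and sig are placeholders.
assertion_event = {
    "kind": 30382,
    "pubkey": "abcd...provider",    # the service making the claims
    "created_at": 1700000000,
    "tags": [
        ["d", "c3d4...subject"],    # the pubkey the assertions are about
        ["rank", "89"],             # provider-computed trust rank
        ["followers", "1204"],      # example aggregate metric
    ],
    "content": "",
    "id": "...",                    # elided
    "sig": "...",                   # elided
}

# A client that trusts this provider reads the metrics keyed by tag name,
# skipping the "d" tag that identifies the subject.
metrics = {tag[0]: tag[1] for tag in assertion_event["tags"][1:]}
```

Because the provider signs the event, clients can cache it, audit it, and compare assertions about the same subject from competing providers.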
The model has appeal. It keeps everything in Nostr’s event system. Users choose which assertion providers to trust, similar to choosing which relays to use. A client could subscribe to assertions from multiple WoT services and weight them according to user preferences. The data is cacheable, auditable, and portable.
Vertex identified a fundamental limitation in the approach: NIP-85 assertions are computed for a generic audience, personalized to nobody. If you ask “how trustworthy is pubkey X,” the answer depends on who is asking. Your social graph differs from mine; your trust scores should differ too. Pre-published assertions answer “how trustworthy is X according to service provider Y,” which is a different question from “how trustworthy is X from my perspective.”
The deeper problem is discovery. Static assertions require you to already know the pubkey you want to evaluate. A web of trust should help you find trustworthy accounts you don’t yet know about. “Who should I follow?” is a harder question than “should I trust this specific person?” Real-time personalized computation enables recommendations that static assertions cannot.
This debate is ongoing. The WoT-a-thon hackathon running through April 2026 is pushing for NIP-85 adoption, with a dedicated prize track for implementations. The tension between pre-computed portability and real-time personalization may not have a single correct answer.
What remains
New users face a cold-start problem: without history, they have no trust scores, making it hard to break into existing networks. The computation itself, while based on decentralized data, currently runs on centralized services like Vertex and Nostr.Band. Public follow lists, which make WoT possible, also leak social graph information to anyone watching relay traffic.
The fundamental architecture is sound, though. Trust signals emerge from normal behavior. Algorithms convert those signals into personalized scores. The user never has to attend a key signing party.
Zimmermann’s 1992 vision was right about the goal: decentralized trust anchored in peer relationships. He was wrong about the method: making trust a separate task requiring deliberate effort. Nostr’s contribution is recognizing that the effort was always happening. It just needed to be captured.