Proving You Belong Without Saying Who You Are
- The problem
- The surveillance problem with current solutions
- The architecture
- Alternatives worth knowing
- Bridging to Nostr’s keys
- The protocol in practice
- What you could build
- Building this
- The deeper point
The problem
Somewhere out there, a service provider maintains a Web of Trust list. Thousands of Nostr pubkeys, each assigned a score between 0 and 1, where lower numbers mean greater trust. It might be a company running graph analysis on the social network, or a respected community member curating by hand. Your own client can compute scores from your follow list. The source matters less than what the list represents: a set of identities someone has vouched for, to varying degrees.
Now a relay wants to use that list. Users with scores below 0.3 get priority message delivery. Those below 0.1 get access to premium features. The relay doesn’t want to build its own reputation system. It just wants to accept users who’ve already been vetted by a provider it trusts.
The direct path has users authenticate with their pubkey. The relay checks the provider’s list and grants access based on the score it finds. But this creates a record of exactly who connected and when, and over time a detailed picture of individual behavior patterns emerges.
What if there were another way? A user could prove membership in the trusted set along with score eligibility and rate-limit availability, while keeping their identity private. The relay would learn only that an authorized user connected, with no link to prior sessions and no identity exposed. Just a cryptographic assurance of eligibility.
Zero-knowledge proofs make this possible.
The surveillance problem with current solutions
Nostr relays struggle with spam and resource abuse. The protocol’s openness, which makes it resilient to censorship, also makes it hospitable to bad actors. Various solutions have emerged: paid subscriptions, NIP-05 verification, invite codes, allowlists. Each works well enough at filtering, but each also creates linkage between identity and behavior. The relay knows who you are, and it remembers.
Zero-knowledge authentication breaks this coupling. A relay can restrict access to reputable users without learning their identities. Rate limiting can constrain abuse without building behavioral profiles. Quality control becomes possible without surveillance.
Waku’s messaging network already runs this architecture in production, handling thousands of authenticated but anonymous messages daily. The cryptography holds up under production load. Bringing it to Nostr is the remaining design problem.
The architecture
Three cryptographic pieces combine to make anonymous authentication work, and understanding each one matters for seeing how the whole system fits together.
Merkle trees and set membership
The WoT provider builds a Merkle tree. Each leaf contains a hash that binds a pubkey commitment to a trust score, salted for uniqueness. The tree’s root - a single short value that commits to the entire structure - gets published as a signed Nostr event from the provider’s pubkey.
When a user wants to prove membership, they take their leaf data and the sibling hashes along the path from their leaf to the root. Anyone can verify that this path, when hashed upward, reproduces the published root. The path proves the leaf exists in the tree.
Wrapped inside a zero-knowledge proof, this verification transforms into something more powerful. The user proves “I know a leaf and a valid path to this root” while keeping both the leaf and the path hidden. The verifier sees only that the proof checks out and that the root matches the one they trust.
A tree twenty levels deep can hold about a million leaves. Verifying a path means computing twenty hashes, which translates to roughly 4,600 constraints in a ZK circuit when using Poseidon, a hash function designed for exactly this kind of computation.
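A minimal sketch of this membership check, using SHA-256 as a stand-in for Poseidon (a real circuit would use Poseidon over a prime field, and the function names here are illustrative):

```python
import hashlib

def h(*parts: bytes) -> bytes:
    """Stand-in hash; a real circuit would use Poseidon instead of SHA-256."""
    return hashlib.sha256(b"".join(parts)).digest()

def build_tree(leaves):
    """Return every level of a complete Merkle tree (power-of-two leaf count),
    leaves first, root last."""
    levels = [leaves]
    while len(levels[-1]) > 1:
        lvl = levels[-1]
        levels.append([h(lvl[i], lvl[i + 1]) for i in range(0, len(lvl), 2)])
    return levels

def merkle_path(levels, index):
    """Sibling hashes along the path from one leaf up to the root."""
    path = []
    for lvl in levels[:-1]:
        path.append(lvl[index ^ 1])  # sibling at this level
        index //= 2
    return path

def verify_path(leaf, index, path, root):
    """Hash upward along the path; membership holds iff we land on the root."""
    node = leaf
    for sibling in path:
        node = h(node, sibling) if index % 2 == 0 else h(sibling, node)
        index //= 2
    return node == root
```

Inside the ZK circuit, `verify_path` is exactly what gets proven, with the leaf and the sibling list as hidden witnesses and only the root made public.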
Threshold proofs
Membership alone isn’t enough. The relay also needs assurance that the user’s score qualifies them for access. Since lower scores indicate greater trust, the proof must demonstrate that the score falls below whatever maximum the relay requires.
This check happens inside the same circuit that verifies membership. The user’s score, already committed in their Merkle leaf, gets compared against the threshold. A less-than operation on 64-bit integers adds only about 65 constraints to the circuit - almost nothing compared to the Merkle verification.
Scores like “0.201718191” get scaled by a billion and stored as integers. The circuit proves the relationship holds while the actual value stays hidden. The relay learns that the user’s score is low enough, but not how low.
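The fixed-point encoding is easy to sketch. The names below are illustrative, and inside a real circuit the comparison is a constraint gadget rather than a Python operator:

```python
SCALE = 10**9  # fixed-point: scores in [0, 1] become 64-bit integers

def to_fixed(score: float) -> int:
    """Scale a fractional trust score to an integer for use in the circuit."""
    return int(round(score * SCALE))

# Private witness: the user's score, committed in their Merkle leaf.
user_score = to_fixed(0.201718191)   # 201718191, stays hidden in the proof
# Public input: the relay's threshold.
threshold = to_fixed(0.3)            # 300000000, known to the relay

# In the circuit this is a single less-than gadget (~65 constraints).
eligible = user_score < threshold
```

The relay sees only `threshold` and the fact that the comparison passed; `user_score` never leaves the prover.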
Nullifiers and rate limiting
The third piece prevents abuse. Each proof includes a nullifier: a value derived deterministically from the user’s secret and the current time epoch. If the same user generates two proofs in the same epoch, both nullifiers will be identical. Different users always produce different nullifiers. The same user in different epochs produces different nullifiers.
The relay keeps a set of nullifiers seen during the current epoch. A duplicate means someone is trying to exceed their allowance. Unlike a traditional rate limit tied to an IP address or account, this one reveals nothing about identity. The relay knows only that some limit was hit.
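The relay-side bookkeeping can be sketched as follows, again with SHA-256 standing in for the circuit’s Poseidon hash; `nullifier` and `admit` are illustrative names:

```python
import hashlib

def nullifier(identity_secret: bytes, epoch: int) -> bytes:
    """Deterministic per-user, per-epoch tag. The circuit computes this from
    the hidden secret, so the tag reveals nothing about who produced it."""
    return hashlib.sha256(identity_secret + epoch.to_bytes(8, "big")).digest()

seen = set()  # relay state: nullifiers observed this epoch, cleared on rollover

def admit(proof_nullifier: bytes) -> bool:
    """Accept a proof unless its nullifier was already seen this epoch."""
    if proof_nullifier in seen:
        return False  # same user, same epoch: over the limit
    seen.add(proof_nullifier)
    return True
```

Two proofs from the same secret in the same epoch collide on the nullifier and the second is rejected, while proofs from different users or different epochs pass independently.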
The Rate Limiting Nullifier protocol refines this further. It uses Shamir secret sharing to permit N messages per epoch. The user’s secret gets encoded as a polynomial, and each proof reveals one point on that polynomial. Stay under N and your secret remains safe. Exceed it and the polynomial becomes recoverable. Anyone can interpolate it to extract your secret and demonstrate the violation. Slashing becomes possible - the protocol can ban or penalize the offender.
Waku’s implementation uses one-minute epochs. Nullifier storage costs about 200 bytes per user per epoch, which means roughly two megabytes per minute for ten thousand active users. Trivial for any modern system.
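For the simplest one-message-per-epoch case, the Shamir mechanism can be sketched over a toy prime field (real RLN works in the SNARK’s scalar field, and the function names here are illustrative):

```python
P = 2**61 - 1  # toy prime field; real RLN uses the proof system's scalar field

def share(secret: int, slope: int, x: int) -> tuple:
    """One point on the line A(x) = secret + slope*x.
    Each message within an epoch reveals exactly one such point."""
    return x, (secret + slope * x) % P

def recover_secret(p1, p2) -> int:
    """Lagrange interpolation at x = 0: two points from the same epoch
    are enough to reconstruct the line and extract the secret."""
    (x1, y1), (x2, y2) = p1, p2
    inv = pow((x2 - x1) % P, P - 2, P)   # modular inverse via Fermat
    slope = ((y2 - y1) * inv) % P
    return (y1 - slope * x1) % P          # A(0) = secret
```

One revealed point leaves the secret information-theoretically hidden; a second point in the same epoch makes `recover_secret` possible for anyone watching, which is what enables slashing.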
Alternatives worth knowing
SNARKs aren’t the only cryptographic path to anonymous authentication, and the alternatives illuminate different tradeoffs.
Ring signatures let a signer prove they control one of N keys, with the specific key remaining hidden. Monero built its privacy model on this foundation. The scheme needs no trusted setup and works directly with the secp256k1 keys Nostr already uses. Signature size grows with the ring, and the basic construction is limited to proving set membership. Predicates on associated data like trust scores remain out of reach.
Linkable ring signatures add detection of double-signing, which helps for one-person-one-vote scenarios. Recent constructions achieve sizes that grow only logarithmically with the ring, but like plain ring signatures they cannot prove predicates on associated data such as trust scores.
BBS+ signatures take a different approach entirely. A provider signs a credential containing various attributes - pubkey commitment, trust score, tier level - and the user can later selectively disclose only certain attributes while proving the signature remains valid. This works well for credentialing systems, but requires the provider to sign each user’s credential individually, not publishing a single Merkle root. The trust model differs in subtle but important ways.
For Nostr’s needs, SNARKs offer the most flexibility. They handle arbitrary predicates on private data and produce compact proofs around 200 bytes with Groth16. The tooling has matured, with production deployments providing a solid reference baseline.
Bridging to Nostr’s keys
A practical problem arises from Nostr’s cryptographic choices. The protocol uses secp256k1 with BIP-340 Schnorr signatures, and verifying these inside a SNARK is expensive. The elliptic curve arithmetic doesn’t map naturally to the prime fields these proof systems work over. A direct implementation costs roughly 1.5 million constraints and takes over 45 seconds to prove.
The solution sidesteps the problem. Users sign a deterministic message with their Nostr key outside the circuit:
```
message = "nostr-wot-identity-v1:" + provider_pubkey
signature = nostr_sign(private_key, message)
```
From this signature, they derive a ZK-friendly commitment:
```
identity_secret = poseidon(sha256(signature + ":secret"))
identity_commitment = poseidon(identity_secret)
```
This commitment goes into the Merkle leaf in place of the raw pubkey. The circuit operates on commitments, collapsing the constraint count from 1.5 million to around 5,000.
The binding remains cryptographically solid. Only someone controlling the Nostr private key can produce the signature needed to derive the correct commitment. Registration happens once per WoT provider. After that, all proofs use the efficient path.
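Assuming the signature bytes come from a deterministic BIP-340 signing step (the auxiliary randomness must be fixed so the same key and message always yield the same signature), the derivation can be sketched with SHA-256 standing in for Poseidon:

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def poseidon(data: bytes) -> bytes:
    """Stand-in for the ZK-friendly Poseidon hash used in the real circuit."""
    return hashlib.sha256(b"poseidon:" + data).digest()

def derive_identity(signature: bytes) -> tuple:
    """Turn a one-time Schnorr signature, produced outside the circuit,
    into a circuit-friendly secret and its public commitment."""
    identity_secret = poseidon(sha256(signature + b":secret"))
    identity_commitment = poseidon(identity_secret)
    return identity_secret, identity_commitment
```

The commitment is what the WoT provider places in the leaf; the secret stays on the user’s device and becomes a private witness in every later proof.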
The protocol in practice
The full system involves four phases, with responsibilities distributed between WoT providers, relays, and users.
A WoT provider computes trust scores for pubkeys it tracks. For each pubkey, it derives an identity commitment from a deterministic signature, then constructs a Merkle tree whose leaves bind pubkey commitments and trust scores together, each salted. The provider publishes the root as a signed Nostr event and makes the full tree data available through Blossom or similar storage. Different providers might use different scoring algorithms, trust different seed sets, update at different frequencies. The system can support many.
A relay configures which WoT roots it trusts. It might accept proofs against any of three major providers, or run its own scoring and publish its own root. The relay’s policy determines what proofs it will verify. The underlying user data stays off the relay entirely; it sees only roots and proofs.
A user who wants access registers once with each WoT provider whose root they might need. They sign the deterministic message with their Nostr key, derive their commitment, find their leaf in the provider’s tree, and cache their Merkle path locally. This data stays on their device.
When connecting to a relay, the user checks which roots the relay accepts, picks one they have a path for, and constructs a proof. Private inputs include their identity secret, score, salt, and Merkle path. Public inputs specify the root, the maximum allowed score, the current epoch, and an application identifier. The proof outputs a nullifier for rate limiting.
The relay verifies the proof against the claimed root, confirms that root is one it trusts, checks the nullifier against its current-epoch set, and grants or denies access. Verification takes milliseconds. The user has proven everything necessary while revealing nothing beyond eligibility.
What you could build
Anonymous authentication opens design space that wasn’t accessible before.
A relay could filter for trusted users without keeping logs of who connected. Rate limiting through nullifiers would prevent abuse, but no surveillance record would accumulate. Users would come and go as anonymous members of a trusted set.
A community could run polls where members prove eligibility and cast exactly one vote per proposal. The tally would reflect real sentiment without anyone learning how individuals voted. Neither the poll operator nor other members could connect votes to identities.
Multiple relays could share trust in the same WoT provider’s root. Someone who misbehaves would see their score rise or find themselves removed from the tree, losing access everywhere at once. But no relay would share ban lists or user data with the others. Coordination would happen through the WoT layer, not through surveillance.
Journalists could maintain verified source lists. Sources would prove their verified status while remaining anonymous to the journalist receiving the submission. Protection comes from mathematics, not from operational security discipline.
Lightning payments could gate access. Users who’ve paid above some threshold would appear in a tree. They’d prove payment while keeping the specific transaction private, preserving financial privacy and enabling paid services.
Building this
The tooling exists and has matured considerably. Circom paired with circomlib provides circuit templates for Poseidon hashing and Merkle verification, along with arithmetic comparison primitives. The Semaphore and RLN circuits from Privacy & Scaling Explorations are audited and battle-tested starting points.
For proof generation, snarkjs compiles to WebAssembly and runs in browsers. Native applications can use rapidsnark in C++ or gnark in Go for faster proving. Mobile support is emerging through projects like mopro.
Tree management can use @zk-kit/incremental-merkle-tree for append-only structures. When scores change and leaves need updating in place, sparse Merkle trees handle it better.
Rate limiting implementation finds production code in vacp2p/zerokit, complete with WASM bindings.
The integration work remains: defining how WoT providers publish roots and trees, standardizing the proof format for relay consumption, smoothing registration and proof generation into invisible user experience. NIPs to write, libraries to build, rough edges to sand down.
The deeper point
Authentication has always assumed that verification requires disclosure. To prove you’re authorized, you show credentials. To prove you’re who you claim, you reveal identifying information. The verifier learns your identity as a side effect of checking your access rights.
Zero-knowledge proofs break this assumption. Verification proceeds without revelation: the proof demonstrates a statement’s truth while keeping the underlying witness hidden. A relay can confirm you’re trusted without ever learning who you are.
For Nostr, this means the protocol’s core strength - pseudonymous, portable, user-controlled identity - can coexist with legitimate quality control. Relays can filter for reputable users while preserving the privacy those users came to Nostr for. The tradeoff dissolves.
The cryptography has matured past the experimental stage. These systems run in production under audit and carry real traffic. What remains is protocol design and library construction, with the user experience smoothed into invisibility.
Infrastructure that knows you’re welcome but can’t tell who walked in. The math is settled and the implementations exist. Engineering is what remains.