Storage Pact Risk Mitigations — Implementation Plan
For Claude: REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.
Goal: Document all 13 storage pact risk mitigations across the Gozzip wiki — schema updates, new ADR, and architecture docs.
Architecture: 4 protocol schema updates (checkpoint merkle_root, pact type/status variants, challenge serve mode, blinded data requests), 9 client-side behavior specifications, 1 new ADR.
Tech Stack: Markdown documentation only. No code changes.
Task 1: Write ADR 006 — Storage Pact Risk Mitigations
Files:
- Create: docs/decisions/006-storage-pact-risk-mitigations.md
- Modify: docs/decisions/index.md
Step 1: Create the ADR
# ADR 006: Storage Pact Risk Mitigations
**Date:** 2026-02-28
**Status:** Accepted
## Context
The storage pact layer (ADR 005) introduces reciprocal storage commitments between WoT peers. Risk analysis identified 13 failure modes across security, availability, privacy, and incentive alignment. These need concrete mitigations before the design is production-ready.
Three risks are critical (P0):
- **Completeness** — signatures prove authenticity but not that all events are present
- **Cold start** — new users have no WoT peers to form pacts with
- **Eclipse attack** — attacker becomes majority of a user's storage peers
## Decision
Address all 13 risks. Only 4 require protocol changes; the remaining 9 are client-side logic.
### Protocol changes
1. **Merkle root in checkpoint** — add `merkle_root` tag to kind 10051, enabling completeness verification
2. **Pact type variants** — add `type` tag to kind 10053 (`bootstrap`, `archival`) and `status` tag (`standby`)
3. **Serve challenge mode** — add `type` tag to kind 10054 (`hash`, `serve`) for latency-based fraud detection
4. **Blinded data requests** — replace `p` tag in kind 10057 with `bp` (blinded pubkey) using daily-rotating hash
### Client-side mitigations (no protocol changes)
5. WoT cluster diversity for peer selection (eclipse prevention)
6. Peer reputation scoring (identity age, challenge success rate)
7. Graduated reliability scoring (replace binary pass/fail)
8. Popularity-scaled pact counts (serving asymmetry)
9. Per-peer request rate limiting (serving asymmetry)
10. Warm standby pact promotion (rebalancing window)
11. Pact offer WoT/age/volume filtering (spam prevention)
12. Independent checkpoint Merkle verification (checkpoint trust)
13. Jittered response coordination (response flooding)
## Consequences
- Completeness is now provable — requesters can verify they received all events
- New users can bootstrap via first-follow temporary pacts
- Eclipse attacks require penetrating multiple WoT clusters
- Data request privacy preserved via blinded identifiers
- 9 of 13 mitigations are client-side only — backward compatible, incrementally adoptable
- 4 schema additions are additive — old clients ignore new tags
Step 2: Add to decisions index
Add at end of list in docs/decisions/index.md:
- [006 — Storage Pact Risk Mitigations](006-storage-pact-risk-mitigations.md)
Task 2: Update kind 10051 (checkpoint) schema in messages.md
Files:
- Modify: docs/protocol/messages.md (lines 48–66)
Step 1: Update the JSON schema
In the kind 10051 JSON block, add the `merkle_root` tag after `profile_ref`:

```json
["profile_ref", "<kind_0_event_id>"],
["merkle_root", "<root_hash>", "<event_count>"]
```
Step 2: Add bullet explaining merkle_root
After the profile_ref bullet, add:
- `merkle_root` — Merkle root of all events in the current checkpoint window, ordered by sequence number. Enables completeness verification: requesters compute the root from received events and compare. `event_count` provides expected total. See [ADR 006](../decisions/006-storage-pact-risk-mitigations.md).
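To make the verification step concrete, here is a minimal sketch of how a requester might compute and check the root. The tree construction (SHA-256 leaves, odd nodes promoted unchanged to the next level) is an illustrative assumption — the actual leaf encoding and pairing rules would need to be fixed by the spec.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(event_ids: list) -> bytes:
    """Merkle root over event IDs ordered by sequence number.
    Leaf hashing and odd-node promotion are illustrative choices."""
    if not event_ids:
        return h(b"")
    level = [h(e) for e in event_ids]
    while len(level) > 1:
        nxt = [h(level[i] + level[i + 1]) for i in range(0, len(level) - 1, 2)]
        if len(level) % 2 == 1:
            nxt.append(level[-1])  # promote the odd node unchanged
        level = nxt
    return level[0]

def verify_completeness(received_ids, checkpoint_root, checkpoint_count):
    """Requester-side check: event count matches AND recomputed root matches."""
    return (len(received_ids) == checkpoint_count
            and merkle_root(received_ids) == checkpoint_root)
```

A mismatch means events were withheld or substituted; the requester falls back to another storage peer or a relay.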
Task 3: Update kind 10053 (storage pact) schema in messages.md
Files:
- Modify: docs/protocol/messages.md (lines 94–113)
Step 1: Update the JSON schema
Add `type` and `status` tags to the kind 10053 JSON block:

```json
{
  "kind": 10053,
  "pubkey": "<root_pubkey>",
  "tags": [
    ["d", "<partner_root_pubkey>"],
    ["type", "standard"],
    ["status", "active"],
    ["since_checkpoint", "<checkpoint_event_id>"],
    ["volume", "<bytes>"],
    ["expires", "<unix_timestamp>"]
  ]
}
```
Step 2: Replace the bullet list with expanded documentation
Replace the existing bullets with:
- Both parties publish their own 10053 referencing each other
- The pact exists when both sides have published
- Private — exchanged via encrypted DM or direct connection
**Pact types** (`type` tag):
- `standard` — default reciprocal pact, covers events since last checkpoint (~monthly)
- `bootstrap` — one-sided temporary pact for new users. The followed user stores the new user's data. Auto-expires after 90 days or when the new user reaches 10 reciprocal pacts. See [ADR 006](../decisions/006-storage-pact-risk-mitigations.md).
- `archival` — covers full history or a deep range. Lower challenge frequency (weekly). For power users and archivists.
**Pact status** (`status` tag):
- `active` — peer is challenged and expected to serve data requests
- `standby` — peer receives events but isn't challenged. Promoted to active when an active pact drops, providing instant failover with no discovery delay.
Task 4: Update kind 10054 (storage challenge) schema in messages.md
Files:
- Modify: docs/protocol/messages.md (lines 115–130)
Step 1: Update the JSON schema
Add a `type` tag to the kind 10054 JSON block:

```json
{
  "kind": 10054,
  "tags": [
    ["p", "<challenged_peer_pubkey>"],
    ["type", "hash"],
    ["challenge", "<nonce>"],
    ["range", "<start_seq>", "<end_seq>"]
  ]
}
```
Step 2: Update the description
Replace the existing description text (lines 117 and 130) with:
Challenge-response proof of storage. Supports two modes. See [ADR 006](../decisions/006-storage-pact-risk-mitigations.md).
[JSON block here]
**Challenge types** (`type` tag):
- `hash` — "give me H(events[start..end] || nonce)." Proves possession of the event range. Nonce prevents pre-computation.
- `serve` — "give me the full event at position N within the range." Measures response latency. Consistently slow responses (> 500ms) suggest the peer is fetching remotely instead of storing locally.
Clients track a rolling latency score per peer. Peers flagged as likely proxying get challenged 3x more frequently.
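As an illustration of the two modes, a responder-side hash computation and a challenger-side latency tracker might look like the sketch below. The event serialization (plain concatenation), the rolling window size, and the majority-slow cutoff are assumptions for illustration, not spec.

```python
import hashlib

def answer_hash_challenge(events, start_seq, end_seq, nonce: bytes) -> str:
    """Respond to a `hash` challenge: H(events[start..end] || nonce).
    Plain concatenation of serialized events is an assumed encoding."""
    payload = b"".join(events[start_seq:end_seq + 1]) + nonce
    return hashlib.sha256(payload).hexdigest()

LATENCY_THRESHOLD_MS = 500  # from the spec text above
WINDOW = 20                 # rolling sample size (assumption)

class LatencyTracker:
    """Rolling latency score for `serve` challenges against one peer."""
    def __init__(self):
        self.samples = []

    def record(self, rtt_ms):
        self.samples = (self.samples + [rtt_ms])[-WINDOW:]

    def likely_proxying(self):
        # Flag when a majority of recent responses exceed the threshold.
        if len(self.samples) < 5:
            return False
        slow = sum(1 for s in self.samples if s > LATENCY_THRESHOLD_MS)
        return slow / len(self.samples) > 0.5

    def challenge_multiplier(self):
        return 3 if self.likely_proxying() else 1
```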
Task 5: Update kind 10057 (data request) schema in messages.md
Files:
- Modify: docs/protocol/messages.md (lines 161–173)
Step 1: Update the description and schema
Replace the kind 10057 section with:
### Kind 10057 — Data Request (DVM-style, blinded)
Broadcast requesting events for a specific user. Uses a blinded identifier to prevent surveillance of who reads whom. See [ADR 006](../decisions/006-storage-pact-risk-mitigations.md).
```json
{
  "kind": 10057,
  "tags": [
    ["bp", "<H(target_root_pubkey || daily_rotation_salt)>"],
    ["since", "<checkpoint_event_id_or_timestamp>"]
  ]
}
```
- `bp` (blinded pubkey) — `H(target_pubkey || YYYY-MM-DD)`. Rotates daily.
- Storage peers compute the same blind for each pubkey they store and match incoming requests
- Observers see a hash that changes daily — cannot link requests across days or to a specific user
- Storage peers respond privately via kind 10058
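A sketch of the blinding computation on both sides, assuming hex-string concatenation and a UTC date as the daily salt (the exact byte encoding is an assumption):

```python
import hashlib
from datetime import datetime, timezone

def blinded_pubkey(target_pubkey_hex, day=None):
    """bp = H(target_pubkey || YYYY-MM-DD), hex-encoded."""
    if day is None:
        day = datetime.now(timezone.utc).strftime("%Y-%m-%d")
    return hashlib.sha256((target_pubkey_hex + day).encode()).hexdigest()

def match_request(bp_tag, stored_pubkeys, day=None):
    """Storage-peer side: compute today's blind for each stored pubkey
    and return the one the request refers to, if any."""
    for pk in stored_pubkeys:
        if blinded_pubkey(pk, day) == bp_tag:
            return pk
    return None
```

Because the salt changes daily, an observer cannot link today's request for a given `bp` to yesterday's, and only peers that already store the target pubkey can reverse the blind.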
Task 6: Update kind 10056 (pact offer) schema in messages.md
Files:
- Modify: docs/protocol/messages.md (lines 147–159)
Step 1: Add optional tz tag to the schema
Update the JSON block:
```json
{
  "kind": 10056,
  "tags": [
    ["p", "<requester_root_pubkey>"],
    ["volume", "<bytes>"],
    ["tz", "UTC+9"]
  ]
}
```
Step 2: Add note about tz tag
After the JSON block, add:
- `tz` (optional) — timezone offset. Used by clients for geographic diversity in peer selection. Target 3+ timezone bands across storage peers to protect against correlated regional failures.
Task 7: Rewrite architecture/storage.md with mitigations
Files:
- Modify: docs/architecture/storage.md
Step 1: Add bootstrap pacts subsection under Storage Pacts
After the "### Data scope" subsection, add:
### Bootstrap pacts
New users have no WoT peers with whom to form reciprocal pacts. The first person they follow becomes a temporary storage peer.
- One-sided — the followed user stores the new user's data, no reciprocal obligation
- Auto-expires after 90 days or when the new user reaches 10 reciprocal pacts
- The followed user's client auto-accepts if capacity allows
- Transition: as the user builds WoT, bootstrap pacts phase out and reciprocal pacts take over
### Archival pacts
Standard pacts cover ~monthly windows. For long-term persistence, users can form archival pacts:
- Cover full history or a specified deep range
- Lower challenge frequency (weekly instead of daily)
- For power users, archivists, and users running always-on nodes
- Not mandatory — users without archival pacts are advised to run a persistent node
### Standby pacts
Maintain 3 extra pacts in standby mode to eliminate rebalancing delays:
- Standby peers receive events but aren't challenged or expected to serve
- When an active pact drops, promote a standby immediately — no discovery delay
- Backfill standby pool in the background
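The promotion logic above can be sketched as a small state machine; the in-memory representation and the backfill accounting are illustrative assumptions:

```python
class PactManager:
    """Warm-standby failover sketch: promote on drop, backfill later."""
    STANDBY_TARGET = 3

    def __init__(self, active, standby):
        self.active = set(active)
        self.standby = list(standby)  # ordered: longest-held standby first

    def on_pact_dropped(self, peer):
        self.active.discard(peer)
        if self.standby:
            # Instant failover — the standby peer already holds the events,
            # so there is no discovery or transfer delay.
            self.active.add(self.standby.pop(0))

    def standby_deficit(self):
        """How many standby pacts to form in the background."""
        return max(0, self.STANDBY_TARGET - len(self.standby))
```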
Step 2: Rewrite Proof of Storage section
Replace the existing Proof of Storage section with:
## Proof of Storage
Challenge-response protocol via kind 10054. Two challenge modes:
**Hash challenge:** Alice sends Bob "hash events [47..53] with this nonce." Bob computes hash from local copy. Proves possession.
**Serve challenge:** Alice sends Bob "give me the full event at position 47." Measures response latency. Consistently slow responses suggest the peer is fetching remotely.
**Completeness verification:** The checkpoint (kind 10051) includes a Merkle root of all events in the current window. Requesters compute the Merkle root from received events and compare against the checkpoint. Mismatch = events are missing.
**Reliability scoring:** Clients track a rolling 30-day reliability score per peer:
| Score | Status | Action |
|-------|--------|--------|
| 90%+ | Healthy | No action |
| 70–90% | Degraded | Increase challenge frequency |
| 50–70% | Unreliable | Begin replacement |
| < 50% | Failed | Drop immediately |
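The table maps directly to a small policy function. Treating each band as closed at its lower bound is an assumption, since the table does not say which side the boundary falls on:

```python
def reliability_action(score):
    """Map a rolling 30-day reliability score in [0, 1] to an action."""
    if score >= 0.90:
        return "healthy"      # no action
    if score >= 0.70:
        return "degraded"     # increase challenge frequency
    if score >= 0.50:
        return "unreliable"   # begin replacement
    return "failed"           # drop immediately
```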
Step 3: Add Peer Selection section
After the Privacy section, add:
## Peer Selection
Client-side rules for choosing storage peers:
**WoT cluster diversity:** Maximum 3 peers from any single social cluster. At least 4 distinct clusters across 20 peers. Prevents eclipse attacks.
**Geographic diversity:** Target 3+ timezone bands. Never more than 50% of peers in the same ±3 hour band. Protects against correlated regional failures.
**Peer reputation:** Weight offers by identity age, challenge success rate, and active pact count. Identities < 30 days old are limited to bootstrap pacts.
**Popularity scaling:** Scale pact count with follower count (< 100 followers → 10 pacts, 1,000+ → 30 pacts, 10,000+ → 40+). More pacts = more peers sharing serving load.
**Offer filtering:** Drop offers from non-WoT pubkeys, identities < 30 days old, or volume mismatch > 50%.
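The diversity rules above can be checked mechanically. A sketch, assuming each candidate peer is annotated with a cluster ID and a timezone offset in hours (cluster detection itself is out of scope here):

```python
from collections import Counter

MAX_PER_CLUSTER = 3           # max peers from one social cluster
MIN_CLUSTERS = 4              # min distinct clusters
MAX_TZ_BAND_FRACTION = 0.5    # max share of peers within a ±3h band

def selection_ok(peers):
    """peers: list of (cluster_id, tz_offset_hours) tuples."""
    clusters = Counter(c for c, _ in peers)
    if any(n > MAX_PER_CLUSTER for n in clusters.values()):
        return False  # eclipse risk: one cluster over-represented
    if len(clusters) < MIN_CLUSTERS:
        return False  # not enough independent clusters
    tzs = [t for _, t in peers]
    for t in tzs:
        # Count peers whose offset is within 3 hours of this one.
        band = sum(1 for u in tzs if abs(u - t) <= 3)
        if band > len(tzs) * MAX_TZ_BAND_FRACTION:
            return False  # correlated regional failure risk
    return True
```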
Step 4: Add note to Privacy section
Add to the Privacy bullets:
- **Blinded data requests** — kind 10057 uses `H(target_pubkey || daily_salt)` instead of raw pubkey. Observers can't identify whose data is being requested or link requests across days.
Task 8: Add completeness verification flow to data-flow.md
Files:
- Modify: docs/architecture/data-flow.md
Step 1: Add completeness verification flow
Insert before "## Group Message Flow":
## Completeness Verification
```
Bob fetches Alice's events from a storage peer
│
├─ Fetches Alice's latest checkpoint (kind 10051)
│   merkle_root = abc123, event_count = 100
│
├─ Receives 100 events from storage peer
│
├─ Computes Merkle root of received events
│   ├─ Computed root matches checkpoint → complete set ✓
│   └─ Mismatch or count ≠ 100 → peer is withholding events ✗
│       └─ Try another storage peer or relay
│
└─ Signatures verified + Merkle root matches = authentic AND complete
```
Task 9: Update architecture/multi-device-sync.md checkpoint description
Files:
- Modify: docs/architecture/multi-device-sync.md
Step 1: Add merkle_root to checkpoint description
In the "Checkpoint Reconciliation" section, after "Publish new kind 10051 with updated heads for all devices", add a note that the checkpoint also includes the Merkle root of all events in the window.
Find the checkpoint example block and add after device B head = B2:
merkle_root = H(A1,A2,B1,B2)
Task 10: Final consistency audit + verification
Step 1: Cross-check all files
Verify:
- `merkle_root` tag appears in kind 10051 schema in messages.md
- Kind 10053 shows `type` and `status` tags
- Kind 10054 shows `type` tag with hash/serve modes
- Kind 10057 uses `bp` (blinded) instead of `p`
- Kind 10056 shows optional `tz` tag
- ADR 006 exists and is linked from decisions/index.md
- storage.md has bootstrap, archival, standby, peer selection, and updated proof of storage
- data-flow.md has completeness verification flow
- multi-device-sync.md references merkle_root
- No contradictions between files