From e2feebee388b721ab07cf5d33a668c2fec1d181a Mon Sep 17 00:00:00 2001
From: davidsarah <>
Date: Mon, 19 Oct 2009 18:25:09 +0000
Subject: [PATCH] [Imported from Trac: page TahoeTwo, version 4]

---
 TahoeTwo.md | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/TahoeTwo.md b/TahoeTwo.md
index 1bf92e3..408b151 100644
--- a/TahoeTwo.md
+++ b/TahoeTwo.md
@@ -6,12 +6,12 @@ When a file is uploaded, the encoded shares are sent to other peers. But to
 which ones? The [PeerSelection](PeerSelection) algorithm is used to make this choice.
 
 Early in 2007, we were planning to use the following "Tahoe Two" algorithm.
-By the time we released 0.2.0, we switched to "tahoe3", but when we released
+By the time we released 0.2.0, we switched to "TahoeThree", but when we released
 v0.6, we switched back (ticket #132).
 
-As in Tahoe Three, the verifierid is used to consistently-permute the set of
-all peers (by sorting the peers by HASH(verifierid+peerid)). Each file gets a
-different permutation, which (on average) will evenly distribute shares among
+As in [TahoeThree](TahoeThree), the verifierid (= [StorageIndex](StorageIndex)) is used to consistently-permute
+the set of all peers (by sorting the peers by HASH(verifierid+peerid)). Each file
+gets a different permutation, which (on average) will evenly distribute shares among
 the grid and avoid hotspots.
 
 With our basket of (usually 10) shares to distribute in hand, we start at the
@@ -51,8 +51,8 @@ peers we should talk to (perhaps by recording the permuted peerid of the last
 node to which we sent a share, or a count of the total number of peers we
 talked to during upload).
 
-I suspect that this approach handles churn more efficiently than tahoe3, but
+I suspect that this approach handles churn more efficiently than [TahoeThree](TahoeThree), but
 I haven't gotten my head around the math that could be used to show it. On
 the other hand, it takes a lot more round trips to find homes in small meshes
-(one per share, whereas tahoe three can do just one per node).
+(one per share, whereas [TahoeThree](TahoeThree) can do just one per node).
 
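Below is a minimal sketch of the permuted-peer-list idea the patched text describes: sort the peers by HASH(verifierid+peerid) to get a per-file order, then offer shares to peers in that order. This is not the Tahoe-LAFS implementation; the helper names, the choice of SHA-256 as HASH, and the wrap-around assignment when there are more shares than peers are illustrative assumptions.

```python
# Hypothetical sketch of "Tahoe Two" peer selection, not the real implementation.
# Assumptions: SHA-256 stands in for HASH, and shares wrap around the permuted
# list when a grid has fewer peers than shares.
import hashlib


def permute_peers(verifierid: bytes, peerids: list[bytes]) -> list[bytes]:
    """Return the peers in a per-file order: sorted by HASH(verifierid+peerid)."""
    return sorted(peerids,
                  key=lambda peerid: hashlib.sha256(verifierid + peerid).digest())


def assign_shares(verifierid: bytes, peerids: list[bytes],
                  num_shares: int = 10) -> dict[bytes, list[int]]:
    """Walk the permuted list, offering one share per peer; wrap around if
    there are more shares than peers (small grids get multiple shares per peer)."""
    permuted = permute_peers(verifierid, peerids)
    assignments: dict[bytes, list[int]] = {}
    for sharenum in range(num_shares):
        peer = permuted[sharenum % len(permuted)]
        assignments.setdefault(peer, []).append(sharenum)
    return assignments


if __name__ == "__main__":
    peers = [bytes([i]) * 20 for i in range(5)]  # five fake 20-byte peerids
    print(assign_shares(b"example-verifierid", peers))
```

Because the sort key mixes the verifierid into the hash, each file sees a different ordering of the same peer set, which is what spreads shares evenly across the grid and avoids hotspots.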