[Imported from Trac: page TahoeTwo, version 4]

When a file is uploaded, the encoded shares are sent to other peers. But to
which ones? The [PeerSelection](PeerSelection) algorithm is used to make this choice.

Early in 2007, we were planning to use the following "Tahoe Two" algorithm.
By the time we released 0.2.0, we switched to "TahoeThree", but when we released
v0.6, we switched back (ticket #132).

As in [TahoeThree](TahoeThree), the verifierid (= [StorageIndex](StorageIndex)) is used to consistently-permute
the set of all peers (by sorting the peers by HASH(verifierid+peerid)). Each file
gets a different permutation, which (on average) will evenly distribute shares among
the grid and avoid hotspots.
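
A minimal sketch of that permutation step in Python might look like the
following, where SHA-256 stands in for the unspecified HASH and
`permute_peers` and its argument names are illustrative rather than the real
Tahoe code:

```python
import hashlib

def permute_peers(verifierid, peerids):
    """Deterministically reorder the grid's peers for one file by sorting
    every peerid on HASH(verifierid + peerid).  Each verifierid yields a
    different ordering, but every client computes the same ordering for a
    given file."""
    return sorted(peerids,
                  key=lambda peerid: hashlib.sha256(verifierid + peerid).digest())
```

Because the ordering depends only on the verifierid and the set of peerids,
an uploader and a later downloader that see the same peer list will walk the
peers in the same order.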

With our basket of (usually 10) shares to distribute in hand, we start at the

[...]

peers we should talk to (perhaps by recording the permuted peerid of the last
node to which we sent a share, or a count of the total number of peers we
talked to during upload).
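
The parenthetical above only floats these hints as possibilities; as a rough
illustration (hypothetical names, not Tahoe's actual API), a recorded count of
peers contacted during upload could bound the downloader's search of the same
permuted list:

```python
def peers_to_query(permuted_peers, contacted_count=None):
    """Return the peers a downloader should ask, in permuted order.  If the
    uploader recorded how many peers it talked to, shares can only live in
    that prefix of the list; with no hint, the downloader may have to walk
    the entire list."""
    if contacted_count is None:
        return list(permuted_peers)
    return permuted_peers[:contacted_count]
```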

I suspect that this approach handles churn more efficiently than [TahoeThree](TahoeThree), but
I haven't gotten my head around the math that could be used to show it. On
the other hand, it takes a lot more round trips to find homes in small meshes
(one per share, whereas [TahoeThree](TahoeThree) can do just one per node).