
[Imported from Trac: page ServerSelection, version 11]
zooko 2010-07-21 15:43:23 +00:00
parent f51956b240
commit d9c276417c

@@ -7,7 +7,7 @@ Different users of Tahoe-LAFS have different desires for "Which servers should I
 * Kevin Reid wants, at least for one of his use cases, to specify several servers each of which is guaranteed to get at least K shares of each file, in addition to potentially other servers also getting shares.
 * Shawn Willden wants, likewise, to specify a server (e.g. his mom's PC) which is guaranteed to get at least K shares of certain files (the family pictures and movies files).
 * Some people -- I'm sorry I forget who -- have said they want to upload at least K shares to the K fastest servers.
-* Jake Appelbaum and Harold Gonzales want to specify a set of servers which collectively are guaranteed to have at least K shares -- they intend to use this to specify the ones that are running as Tor hidden services and thus are attack-resistant (but also extra slow-and-expensive to reach). Interestingly, the server selection policy on *download* should be that the K servers which are Tor hidden services are downloaded from only as a last resort.
+* Jacob Appelbaum and Harold Gonzales want to specify a set of servers which collectively are guaranteed to have at least K shares -- they intend to use this to specify the ones that are running as Tor hidden services and thus are attack-resistant (but also extra slow-and-expensive to reach). Interestingly, the server selection policy on *download* should be that the K servers which are Tor hidden services are downloaded from only as a last resort.
 * Several people -- again I'm sorry I've forgotten specific attribution -- want to identify which servers live in which cluster or co-lo or geographical area, and then to distribute shares evenly across clusters/colos/geographical-areas instead of evenly across servers.
 * Here's an example of this desire: Nathan Eisenberg asked on the mailing list for "Proximity Aware Decoding": <http://allmydata.org/pipermail/tahoe-dev/2009-December/003286.html>
 * If you have *K+1* shares stored in a single location then you can repair after a loss (such as a hard drive failure) in that location without having to transfer data from other locations. This can save bandwidth expenses (since intra-location bandwidth is typically free), and of course it also means you can recover from that hard drive failure in that one location even if all the other locations have been stomped to death by Godzilla.
@@ -60,6 +60,10 @@ to get these properties. I'd love to be able to get stronger diversity among
 hosts, racks, or data centers, but I don't yet know how to get that **and**
 get the properties listed above, while keeping the filecaps small.
+
+#### note
+If you're using an immutable-file-upload erasure-coding helper, then either you can't use this new share-placement-strategy feature, or else we have to define some way for the user to communicate their share-placement strategy to the helper.
+
 #### tickets
 The main ticket:
 * #573 (Allow client to control which storage servers receive shares)
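The *K+1* local-repair point in the first hunk above can be sketched in a few lines of Python. This is a hypothetical helper, not Tahoe-LAFS code: it only models the share counting. The idea is that with K-of-N erasure coding, any K shares suffice to reconstruct the file, so a location holding K+1 shares can rebuild a lost share from its surviving local shares without transferring data from other locations.

```python
# Sketch (assumed helper, not from the Tahoe-LAFS codebase): with K-of-N
# erasure coding, any K shares reconstruct the file. A location holding
# K+1 shares can therefore survive one local share loss and repair
# entirely from local data.

def can_repair_locally(k, shares_at_location, shares_lost):
    """After losing `shares_lost` shares at one location, local repair is
    possible iff at least `k` shares survive at that location."""
    return shares_at_location - shares_lost >= k

K = 3
assert can_repair_locally(K, K + 1, 1)      # K+1 local shares: one loss, repair locally
assert not can_repair_locally(K, K, 1)      # only K local shares: must fetch remotely
```

The same counting argument explains why exactly K+1 (rather than K) is the interesting threshold per location: with only K shares, any single local failure forces an inter-location transfer to repair.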