upload should take better advantage of existing shares #610

Open
opened 2009-02-08 20:28:47 +00:00 by warner · 9 comments
warner commented 2009-02-08 20:28:47 +00:00
Owner

Our current upload process (which is nearly the oldest code in the entire
tahoe tree) could be smarter in the presence of existing shares. If a file is
uploaded in January, then a few dozen servers are added in February, then in
March it is (for whatever reason) uploaded again, here's what currently
happens:

  • peer selection comes up with a permuted list of servers, with the same
    partial ordering as the original list but with the new servers inserted in
    various pseudo-random places
  • each server in the list is asked, in turn, if they would be willing to
    hold on to the next sequentially numbered share
  • each server might say yes or no. In addition, each server will return a
    list of shares that they might already have
  • the client never asks a server to accept a share that already has a
    home elsewhere, but it also never rescinds a request for a server to
    hold a share once it later learns that share is housed somewhere else

So, if the client queries a server which already has a share, that server
will probably end up with two shares. In addition, many shares will probably
end up being sent to a new server even though some other server (later in the
permuted list) already has a copy.
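
(For context: the "permuted list" is just the full server set sorted by a per-file hash, which is why newly added servers land at pseudo-random positions without disturbing the relative order of the originals. A minimal sketch of the idea, assuming a SHA-256-based permutation; the real selection code differs in detail:)

```python
import hashlib

def permuted_servers(storage_index: bytes, server_ids: list) -> list:
    # Each (server, file) pair gets a stable pseudo-random rank, so newly
    # added servers are inserted at arbitrary points in the list while the
    # surviving servers keep their relative order from the earlier upload.
    return sorted(server_ids,
                  key=lambda sid: hashlib.sha256(sid + storage_index).digest())
```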

To fix this, the upload process needs to do more work:

  • it needs to cancel share-upload requests if it later learns that some
    other server already has that particular share
  • perhaps it should perform some sort of validation on the claimed
    already-uploaded share
  • if it detects any evidence of pre-existing shares, it should put more
    energy into finding additional ones
  • it needs to ask more servers than it strictly needs (for upload purposes)
    to increase the chance that it can detect this evidence

We're planning an overhaul of immutable upload/download code, both to improve
parallelism and to replace the DeferredList with a state machine (to make it
easier to bypass stalled servers, for example). These goals should be
included in that work.
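
To illustrate the difference (a sketch with hypothetical names, not the planned implementation): a DeferredList only fires once every outstanding query resolves, so one stalled server blocks progress, whereas a state machine reacts to each response as it arrives and can simply route around the stall:

```python
class UploadStateMachine:
    """Sketch only: reacts to each server response individually, so one
    stalled server never blocks queries to everyone else."""

    def __init__(self, servers, shares, send_query):
        self.pending = list(servers)      # servers not yet asked
        self.unplaced = set(shares)       # share numbers still homeless
        self.landlords = {}               # shnum -> server that accepted it
        self.send_query = send_query      # hypothetical "ask server" callback

    def start(self):
        self._ask_next()

    def response_received(self, server, accepted_shnums):
        for shnum in accepted_shnums:
            self.landlords[shnum] = server
            self.unplaced.discard(shnum)
        self._ask_next()

    def timeout(self, server):
        # a stalled server simply drops out; we move on to the next one
        self._ask_next()

    def _ask_next(self):
        if self.unplaced and self.pending:
            self.send_query(self.pending.pop(0), sorted(self.unplaced))
```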

This process will work best when the shares are closer to the beginning of
the permuted list. A "share rebalancing" mechanism should be created to
gradually move shares in this direction over time. This is another facet of
repair: not only should there be enough shares in existence, but they should
be located in the best place for a downloader to find them quickly.
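
A hedged sketch of what such a rebalancer might test, reusing the hypothetical permuted_servers() sketched above: a share is a candidate to move whenever an empty server sits earlier in the permuted list than the share's current home.

```python
def rebalance_moves(storage_index, share_map, all_servers):
    # share_map: server_id -> set of shnums currently held there.
    # Returns (shnum, from_server, to_server) moves toward the front of
    # the permuted list; a real policy would also weigh capacity and churn.
    order = permuted_servers(storage_index, all_servers)
    rank = {sid: i for i, sid in enumerate(order)}
    empty = sorted((s for s in all_servers if not share_map.get(s)),
                   key=rank.__getitem__)
    moves = []
    # move the worst-placed shares first
    for server, shnums in sorted(share_map.items(), key=lambda kv: -rank[kv[0]]):
        for shnum in sorted(shnums):
            if empty and rank[empty[0]] < rank[server]:
                moves.append((shnum, server, empty.pop(0)))
    return moves
```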

tahoe-lafs added the code-encoding, major, enhancement, 1.2.0 labels 2009-02-08 20:28:47 +00:00
tahoe-lafs added this to the 1.5.0 milestone 2009-02-08 20:28:47 +00:00
zooko commented 2009-02-09 18:23:34 +00:00
Author
Owner

I'm going to add Cc: tahoe-dev to this ticket, and then I'm going to post the original contents of this ticket to tahoe-dev along with a link to this ticket.

warner commented 2009-02-11 19:17:39 +00:00
Author
Owner

Zandr suggested a simple "good enough" solution for 1.3.1:

  • at the end of peer selection, examine the list of (placed shares, found shares) for duplicates
  • for any placed share that was present in the list of found shares:
    • cancel the upload of that share
    • remove it from the 'landlords' list

This wouldn't be perfect, because there could be other pre-existing shares beyond the end of the portion of the permuted list that we wouldn't see, but it would at least remove some of the duplicated shares.
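
In sketch form (hypothetical names throughout, including the abort() cancel call), the pass Zandr described is a single sweep over the landlords list after peer selection finishes:

```python
def prune_duplicate_placements(landlords, found_shares):
    # landlords: list of (shnum, server, upload_handle) for placed shares
    # found_shares: shnum -> set of servers that reported already holding it
    kept = []
    for shnum, server, handle in landlords:
        if found_shares.get(shnum, set()) - {server}:
            handle.abort()      # hypothetical: cancel the pending upload
        else:
            kept.append((shnum, server, handle))
    return kept
```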

tahoe-lafs modified the milestone from 1.5.0 to eventually 2009-06-30 12:40:36 +00:00
zooko commented 2009-08-10 15:29:29 +00:00
Author
Owner

The following clump of tickets might be of interest to people who are interested in this ticket: #711 (repair to different levels of M), #699 (optionally rebalance during repair or upload), #543 ('rebalancing manager'), #232 (Peer selection doesn't rebalance shares on overwrite of mutable file.), #678 (converge same file, same K, different M), #610 (upload should take better advantage of existing shares), #573 (Allow client to control which storage servers receive shares).

zooko commented 2009-08-10 15:45:50 +00:00
Author
Owner

Also related: #778 ("shares of happiness" is the wrong measure; "servers of happiness" is better).

zooko commented 2010-05-16 05:19:31 +00:00
Author
Owner

This ticket was fixed by the patches that fixed #778. I think. Assigning this ticket to Kevan to verify and document that this ticket is fixed by his patches.

tahoe-lafs modified the milestone from eventually to 1.7.0 2010-05-16 05:19:31 +00:00
kevan commented 2010-05-16 18:00:27 +00:00
Author
Owner

I don't think #778 fixed this.

  • it needs to cancel share-upload requests if it later learns that some other server already has that particular share
  • perhaps it should perform some sort of validation on the claimed already-uploaded share

The bipartite matching wording of this issue is: "If it later learns that some other server already has that particular share, and it can use that share on that other server to create a maximum bipartite matching (or: a bipartite matching that is larger than the happiness threshold, though I'd take maximum reliability over the inefficiency of a duplicate share), it needs to cancel share-upload requests". #778 does not do this.

(I think this part of the issue would probably be better solved by the uploader rewrite alluded to at the end of #778; it isn't a regression compared to 1.6.1, and it would be a lot nicer if we considered this as one of the requirements to a new uploader than if we tried to graft it onto the existing uploader)
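
For concreteness, the happiness count being described is the size of a maximum matching between shares and the servers that hold (or will hold) them, each server counted once. A minimal augmenting-path version, with assumed data shapes rather than the actual #778 code:

```python
def servers_of_happiness(placements):
    # placements: shnum -> set of servers that hold or will hold that share.
    # Simple augmenting-path (Kuhn's) matching; fine for the small graphs
    # involved in a single upload.
    match = {}   # server -> shnum currently matched to it

    def augment(shnum, seen):
        for server in placements.get(shnum, ()):
            if server in seen:
                continue
            seen.add(server)
            if server not in match or augment(match[server], seen):
                match[server] = shnum
                return True
        return False

    return sum(augment(shnum, set()) for shnum in placements)
```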

  • if it detects any evidence of pre-existing shares, it should put more energy into finding additional ones

#778 searches for pre-existing shares regardless of whether it has seen them, but only on (some of the) servers that it can't place them on. I think what this ticket suggests is to expand that search to peers that we can write to, and make that search triggered or otherwise conditional on finding pre-existing shares when issuing `allocate_buckets()` requests. #778 doesn't do that.

The read-only share discovery logic in #778 was implemented to solve the specific case of happiness being undercounted due to the fact that the upload process never contacted read only peers at all, and would therefore never know about the shares that they held. This ticket is more about doing that for all servers to maximize the efficiency of share placement on the grid. An easy first stab at that would be to expand the share discovery logic to work across all of the peers that it knows about, though that would add even more queries to the upload process.
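
That first stab might look like the following sketch: `get_buckets()` is the existing storage-server query for already-held shares, though the real calls are remote and asynchronous (the synchronous wrapper and the serverid accessor here are assumptions):

```python
def discover_existing_shares(storage_index, all_servers):
    # Ask every known server, not just the read-only ones, which shares
    # it already holds for this file. Each query adds latency, which is
    # the cost this comment is pointing at.
    found = {}                                       # shnum -> set of server ids
    for server in all_servers:
        buckets = server.get_buckets(storage_index)  # assumed sync wrapper
        for shnum in buckets:
            found.setdefault(shnum, set()).add(server.serverid)
    return found
```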

  • it needs to ask more servers than it strictly needs (for upload purposes) to increase the chance that it can detect this evidence

#778 does do this -- it asks servers that it would not have asked in 1.6.1 to try to detect evidence of pre-existing shares.

tahoe-lafs modified the milestone from 1.7.0 to soon 2010-06-18 04:51:45 +00:00
zooko commented 2011-07-22 13:30:22 +00:00
Author
Owner

Kevan: does your patch from #1382 affect this issue?

tahoe-lafs modified the milestone from soon to 1.10.0 2012-03-29 22:45:19 +00:00
davidsarah commented 2013-02-15 03:54:46 +00:00
Author
Owner

This is likely to be fixed by implementing the algorithm in ticket:1130#comment:111172, provided that it applies to upload as well as repair.

markberger commented 2013-07-09 17:52:58 +00:00
Author
Owner

During 7/9/13's dev chat meeting, Brian suggested that the uploader contact 4n servers in the case where existing shares are found instead of the 2n used in ticket #1382. There was no decision made but we agreed that this ticket should be revisited once #1382 is closed and lands in trunk.
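
In sketch form (with n as in ticket #1382's 2n, presumably the share count), the proposal is just a wider query budget when evidence of pre-existing shares turns up:

```python
def query_budget(n: int, existing_shares_found: bool) -> int:
    # widen the search when evidence of pre-existing shares appears
    return 4 * n if existing_shares_found else 2 * n
```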
