unit test failure: failed to download file with 2 shares on one server and one share on another #1191

Closed
opened 2010-09-03 04:57:24 +00:00 by zooko · 25 comments

For example in this run:

http://tahoe-lafs.org/buildbot/builders/hardy-amd64/builds/687/steps/test/logs/stdio

```
[ERROR]: allmydata.test.test_hung_server.HungServerDownloadTest.test_2_good_8_broken_copied_share

Traceback (most recent call last):
Failure: allmydata.interfaces.NotEnoughSharesError: ran out of shares: complete=sh0,sh2 pending= overdue= unused= need 3. Last failure: None
```

The download should have succeeded because server 0 should have shares 0 and 2 and server 1 should have share 1.

source:trunk/src/allmydata/test/test_hung_server.py@4661#L197

This test failure is nondeterministic. The first step is probably to understand why that is and make it fail every time.

zooko added the c/code-peerselection, p/major, t/defect, v/1.8β labels 2010-09-03 04:57:24 +00:00
zooko added this to the 1.8.0 milestone 2010-09-03 04:57:24 +00:00
Author

This diff seems to make the test go green every time. I'll leave it running overnight to be sure:

```
Zooko-Ofsimplegeos-MacBook-Pro:~/playground/tahoe-lafs/trunk$ time darcs diff -u
diff -rN -u old-trunk/src/allmydata/test/no_network.py new-trunk/src/allmydata/test/no_network.py
--- old-trunk/src/allmydata/test/no_network.py  2010-09-02 23:24:54.000000000 -0600
+++ new-trunk/src/allmydata/test/no_network.py  2010-09-02 23:24:59.000000000 -0600
@@ -238,7 +238,8 @@
         self.rebuild_serverlist()

     def rebuild_serverlist(self):
-        self.all_servers = frozenset(self.servers_by_id.items())
+        # self.all_servers = list(reversed(sorted(frozenset(self.servers_by_id.items()))))
+        self.all_servers = sorted(frozenset(self.servers_by_id.items()))
         for c in self.clients:
             c._servers = self.all_servers
```
Author

No, that wasn't it. Even with that diff the test occasionally goes red.

Author

Okay, this patch makes it go red every time:

```
HACL Zooko-Ofsimplegeos-MacBook-Pro:~/playground/tahoe-lafs/trunk$ time darcs diff -u
diff -rN -u old-trunk/src/allmydata/test/test_hung_server.py new-trunk/src/allmydata/test/test_hung_server.py
--- old-trunk/src/allmydata/test/test_hung_server.py    2010-09-03 00:21:06.000000000 -0600
+++ new-trunk/src/allmydata/test/test_hung_server.py    2010-09-03 00:21:09.000000000 -0600
@@ -101,7 +101,8 @@
 
         self.c0 = self.g.clients[0]
         nm = self.c0.nodemaker
-        self.servers = [(id, ss) for (id, ss) in nm.storage_broker.get_all_servers()]
+        self.servers = sorted([(id, ss) for (id, ss) in nm.storage_broker.get_all_servers()])
+        self.servers = self.servers[5:] + self.servers[:5]
 
         if mutable:
             d = nm.create_mutable_file(mutable_plaintext)
```
Author

When I was looking at the behavior of the test while it was non-deterministic, I noticed that there are two ways this particular test could turn out: either the first server to respond claimed 2 shares and the second claimed 1 share, or the first server to respond claimed 1 share and the second claimed 2 shares. I observed that the download always failed in one case and always succeeded in the other. So I hypothesize that the download logic which is trying to achieve diversity either isn't realizing that it can use the newly arrived DYHB response for more than one share, or isn't realizing that it can use the previously arrived DYHB response for more than one share.

francois commented 2010-09-04 23:22:27 +00:00
Owner

Attachment test_2_good_8_broken_copied_share.txt (10638 bytes) added

francois commented 2010-09-04 23:24:11 +00:00
Owner

As a first step, the failing test was run with verbose logging; see attachment:test_2_good_8_broken_copied_share.txt.

francois commented 2010-09-05 00:51:43 +00:00
Owner

I don't really know why the following patch makes the test succeed (and breaks two others), but it might give some insight into the actual bug. Unfortunately, I probably won't find time to work on this during the next few days.

```
--- old-tahoe-upstream/src/allmydata/immutable/downloader/finder.py     2010-09-05 02:21:25.000000000 +0200
+++ new-tahoe-upstream/src/allmydata/immutable/downloader/finder.py     2010-09-05 02:21:25.000000000 +0200
@@ -220,7 +220,7 @@
         shares_s = ",".join([str(sh) for sh in shares])
         self.log(format="delivering shares: %s" % shares_s,
                  level=log.NOISY, umid="2n1qQw")
-        eventually(self.share_consumer.got_shares, shares)
+        self.share_consumer.got_shares(shares)
 
     def _got_error(self, f, peerid, req, d_ev, lp):
         d_ev.finished("error", now())
```
Author

Okay I think I understand the bug now. [ShareFinder._deliver_shares()]source:trunk/src/allmydata/immutable/downloader/finder.py@4707#L217 calls `eventually(self.share_consumer.got_shares, shares)`, and then in the next tick—*before* `got_shares()` has been executed—[ShareFinder.loop()]source:trunk/src/allmydata/immutable/downloader/finder.py@4707#L89 runs, gives up, and aborts the download because it isn't aware of the shares that were going to be delivered to it in a subsequent tick.

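A toy illustration of that ordering problem, using foolscap's `eventually` directly (the `Consumer` class here is a hypothetical stand-in, not the real `DownloadNode`; in the real bug `loop()` runs on a later tick, but still before the queued `got_shares` executes):

```python
# Toy reproduction of the race: got_shares is queued on the eventual-send
# queue, but a direct no_more_shares() call reaches the consumer first.
from twisted.internet import reactor
from foolscap.eventual import eventually, flushEventualQueue

events = []

class Consumer(object):
    def got_shares(self, shares):
        events.append(("got_shares", shares))
    def no_more_shares(self):
        events.append(("no_more_shares",))

consumer = Consumer()
eventually(consumer.got_shares, ["sh9"])  # what _deliver_shares does
consumer.no_more_shares()                 # what loop() did, pre-fix

def report(_):
    # the consumer hears "no more shares" before the share arrives
    assert events[0] == ("no_more_shares",), events
    reactor.stop()

flushEventualQueue().addCallback(report)
reactor.run()
```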

Oh, yeah, that's a bug. I think the proper behavior would be for the call to `self.share_consumer.no_more_shares()` to use the same eventual-send that the call to `self.share_consumer.got_shares()` uses (to make sure the `no_more_shares` cannot race ahead of the `got_shares` call).

This bug probably appeared during the diversity-seeking patch: part of that patch was to deliver all shares as quickly as possible, instead of trickling them out one share at a time.

I'll think about what the unit test for this should look like... something that fails consistently without the patch, of course. Maybe just a way to consistentify `test_2_good_8_broken_copied_share`.

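A minimal sketch of the shape of that proposal (the attachment itself isn't inlined in this thread, so the surrounding code is schematic rather than the literal contents of 1191-fix.diff): route the `no_more_shares` notification through the same eventual-send queue that `_deliver_shares` already uses for `got_shares`.

```python
# Sketch only -- not the literal 1191-fix.diff. The "no more shares"
# notification is queued behind any got_shares call already sitting on the
# eventual-send queue, so it can no longer overtake it.
from foolscap.eventual import eventually

class ShareFinderSketch(object):
    def __init__(self, share_consumer):
        self.share_consumer = share_consumer
        self.pending_requests = set()

    def _deliver_shares(self, shares):
        # unchanged: shares reach the consumer on a later tick
        eventually(self.share_consumer.got_shares, shares)

    def loop(self):
        if not self.pending_requests:
            # pre-fix: self.share_consumer.no_more_shares()
            eventually(self.share_consumer.no_more_shares)
```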

Attachment 1191-fix.diff (644 bytes) added

fix for the bug, but lacks a specific unit test


That patch definitely fixes the problem (consistently, thanks to the recent patch to test_hung_server.py which makes the share order consistent), but I'd like to have a test case which specifically exercises this code path to go along with the fix.

Author

To believe that attachment:1191-fix.diff fixes this issue, I think I would have to believe that `eventually(A); eventually(B)`, where `A` does `eventually(C)`, will always execute in order A;B;C and never A;C;B. Is it true that this fix requires that property?

It would be nice if the fix to this bug were more "local" and one could see by inspecting the source that the state was kept consistent. That is: isn't the problem here that a tick updates the `ShareFinder`'s state to remove the last request from `pending_requests` but does not update the state to add the newly discovered share? And then `loop` runs between that tick and the later tick which is going to add the resulting share with `got_shares`? Wouldn't it be better if there were never a time in between ticks where one of those bits of state had been updated and the other had not?

If I understand correctly, this patch leaves it the case that one tick updates `pending_requests` and a subsequent tick calls `got_shares`, but prevents the problem by ensuring that `loop` doesn't get run between those two ticks. This seems a bit fragile, but I guess as long as we have a robust test case for it then we'll know if it regresses. Maybe we could add some sort of internal consistency-check assertion to the effect that `loop` never gets called after `_request_retired` but before `_got_shares`?

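For what it's worth, that ordering property can be checked directly against foolscap's eventual-send queue (a single FIFO); a small standalone script, assuming foolscap and Twisted are importable:

```python
# Checks the ordering question above: eventually(A); eventually(B), where A
# itself queues C, runs as A, B, C -- C lands on the FIFO behind B, which was
# queued first.
from twisted.internet import reactor
from foolscap.eventual import eventually, flushEventualQueue

order = []

def A():
    order.append("A")
    eventually(C)  # queued *behind* B

def B():
    order.append("B")

def C():
    order.append("C")

def report(_):
    assert order == ["A", "B", "C"], order
    reactor.stop()

eventually(A)
eventually(B)
flushEventualQueue().addCallback(report)
reactor.run()
```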
Author

Okay here is a log created by instrumenting `ShareFinder` to log each method call and `DownloadNode` to log `got_shares` and `no_more_shares`. This is without attachment:1191-fix.diff.

```
local#231 23:01:59.969: xxx <allmydata.immutable.downloader.finder.ShareFinder instance at 0x1049f6050>._request_retired(<allmydata.immutable.downloader.finder.RequestToken instance at 0x104ef9c68>)
local#232 23:01:59.969: xxx <allmydata.immutable.downloader.finder.ShareFinder instance at 0x1049f6050>._got_response({9: <allmydata.test.no_network.LocalWrapper instance at 0x1049dc3f8>}, {'http://allmydata.org/tahoe/protocols/storage/v1': {'maximum-immutable-share-size': 128704925696, 'tolerates-immutable-read-overrun': True, 'delete-mutable-shares-with-zero-length-writev': True}, 'application-version': 'allmydata-tahoe/1.8.0c3-r4715'}, <B9><A3>N<80>u<9C>_<F7><97>FSS<A7><BD>^B<F9>f$:      , <allmydata.immutable.downloader.finder.RequestToken instance at 0x104ef9c68>, <allmydata.immutable.downloader.status.DYHBEvent instance at 0x104ef9cb0>, 1283835719.96, 210)
local#233 23:01:59.969: got shnums [9] from [xgru5adv]
local#234 23:01:59.969: xxx <allmydata.immutable.downloader.finder.ShareFinder instance at 0x1049f6050>._create_share(9, <allmydata.test.no_network.LocalWrapper instance at 0x1049dc3f8>, {'http://allmydata.org/tahoe/protocols/storage/v1': {'maximum-immutable-share-size': 128704925696, 'tolerates-immutable-read-overrun': True, 'delete-mutable-shares-with-zero-length-writev': True}, 'application-version': 'allmydata-tahoe/1.8.0c3-r4715'}, <B9><A3>N<80>u<9C>_<F7><97>FSS<A7><BD>^B<F9>f$:        , 0.0133891105652)
local#235 23:01:59.970: Share(sh9-on-xgru5) created
local#236 23:01:59.970: xxx <allmydata.immutable.downloader.finder.ShareFinder instance at 0x1049f6050>._deliver_shares([Share(sh9-on-xgru5)])
local#237 23:01:59.970: delivering shares: Share(sh9-on-xgru5)
local#238 23:01:59.970: xxx <allmydata.immutable.downloader.finder.ShareFinder instance at 0x1049f6050>.loop()
local#239 23:01:59.970: ShareFinder loop: running=True hungry=False, pending=
local#240 23:01:59.971: xxx <allmydata.immutable.downloader.finder.ShareFinder instance at 0x1049f6050>.loop()
local#241 23:01:59.971: ShareFinder loop: running=True hungry=False, pending=
local#242 23:01:59.972: xxx <allmydata.immutable.downloader.finder.ShareFinder instance at 0x1049f6050>.hungry()
local#243 23:01:59.972: ShareFinder[si=dglevpj4ueb7] hungry
local#244 23:01:59.972: xxx <allmydata.immutable.downloader.finder.ShareFinder instance at 0x1049f6050>.start_finding_servers()
local#245 23:01:59.973: xxx <allmydata.immutable.downloader.finder.ShareFinder instance at 0x1049f6050>.loop()
local#246 23:01:59.973: ShareFinder loop: running=True hungry=True, pending=
local#247 23:01:59.973: ShareFinder.loop: no_more_shares, ever
local#248 23:01:59.973: xxx ImmutableDownloadNode(dglevpj4ueb7).no_more_shares() ; _active_segment: <allmydata.immutable.downloader.fetcher.SegmentFetcher instance at 0x1049f6638>
local#249 23:01:59.975: xxx <allmydata.immutable.downloader.finder.ShareFinder instance at 0x1049f6050>.loop()
local#250 23:01:59.975: ShareFinder loop: running=True hungry=True, pending=
local#251 23:01:59.975: ShareFinder.loop: no_more_shares, ever
local#252 23:01:59.975: xxx ImmutableDownloadNode(dglevpj4ueb7).no_more_shares() ; _active_segment: <allmydata.immutable.downloader.fetcher.SegmentFetcher instance at 0x1049f6638>
local#253 23:01:59.976: ran out of shares: complete=sh1,sh8 pending= overdue= unused= need 3. Last failure: None
local#254 23:01:59.976: SegmentFetcher(dglevpj4ueb7).stop
local#255 23:01:59.977: xxx ImmutableDownloadNode(dglevpj4ueb7).got_shares([Share(sh9-on-xgru5)])
```

(Sorry about the wide lines there.)

So at `local#231 23:01:59.969` the request is retired, but the resulting eventual `got_shares` doesn't happen until `local#255 23:01:59.977`, which is shortly after the `loop` at `local#247 23:01:59.973` that said `no_more_shares, ever`. That set a flag named `_no_more_shares` in the `SegmentFetcher`, so the next time `SegmentFetcher._do_loop` ran it gave up and said `ran out of shares` at `local#253 23:01:59.976`.

Now attachment:1191-fix.diff makes it so that when `loop` decides `no_more_shares, ever`, it sets an eventual task to set the `_no_more_shares` flag in `SegmentFetcher` instead of doing it immediately. Is this guaranteed to always prevent this bug? I guess it is, because the `_request_retired` (`local#231 23:01:59.969`) is done immediately and during that same tick the `got_shares` (`local#255 23:01:59.977`) is put on the eventual queue, so when the setting of `_no_more_shares` is later put on the eventual queue it will always take effect after the `got_shares` does.

Okay.

But this still feels fragile to me. For example, after we apply attachment:1191-fix.diff, if someone were to change `ShareFinder._got_response` so that it invoked `_deliver_shares` eventually instead of immediately, or change `_deliver_shares` so that it invoked `DownloadNode.got_shares` eventually instead of immediately, or change `DownloadNode.got_shares` so that it updated its `_shares` data structure eventually instead of immediately, then that would reintroduce this bug.

It would feel nicer to me if we could update both the `ShareFinder.pending_requests` data structure and the `DownloadNode._shares` data structure in the same immediate call, so that there is no tick that begins with those two data structures in a mutually inconsistent state (with the request removed from the former but the share not yet added to the latter).
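
A schematic of that alternative (hypothetical code, with names borrowed from the comments above; not Tahoe's actual implementation): retire the request and hand the share to the consumer in the same synchronous step, so no tick can begin with the two data structures out of sync.

```python
# Sketch of the "no inconsistent tick boundary" idea: pending_requests and the
# consumer's view of its shares are updated in the same synchronous call, so
# any later loop() always sees a consistent pair.
from foolscap.eventual import eventually

class FinderSketch(object):
    def __init__(self, share_consumer):
        self.share_consumer = share_consumer
        self.pending_requests = set()

    def _got_response(self, shares, req):
        self.pending_requests.discard(req)       # state change 1
        self.share_consumer.got_shares(shares)   # state change 2, same tick
        eventually(self.loop)                    # loop() runs only after both

    def loop(self):
        if not self.pending_requests:
            self.share_consumer.no_more_shares()
```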

Okay now I'll try to make a narrow test case of this issue.

Author

Okay I've written a test of `ShareFinder` in which everything that is provided to the `ShareFinder` is a fake/mock thing. I think that's pretty cool and it helped me learn the code better. I have to go to sleep now, and currently the test doesn't exercise the bug because it doesn't arrange for the `ShareFinder` to execute its own `loop` in between the final request being retired and the final share being added to the node's `_shares`. However, I'll attach the patch in case someone wants to play with it.

Author

Attachment test.patch.txt (10209 bytes) added

a test of ShareFinder in which everything that is provided to the ShareFinder is a fake/mock thing. This doesn't quite succeed at exercising the bug because it doesn't arrange for ShareFinder to run its own .loop after the last request is retired and before the last share is added to the node's _shares.

Author
```
<zooko> Okay I need to talk out loud.                                   [22:25]
<zooko> finder tells node "no more shares"
<zooko> when finder knows that it will never find more shares during this
	download, *and* node has said "I'm hungry--feed me more shares."
<zooko> This is fine, except that recently we (Brian) extended it so that the
	node will be hungry for more shares even though it has enough shares,
	because it wants better-distributed shares.
<zooko> So now this is an alternate (better) 
<zooko> explanation for #1191.
<zooko> That finder then tells node "WAH! Give up in despair!" when finder
	runs out of new shares.					        [22:27]
<zooko> But actually node should be content with the unevenly distributed set
	that it already has.
<zooko> Okay, cool. So now can I write a unit test of *that*...
<zooko> Well, for one thing this shows that it isn't really finder's fault.
<zooko> Oh no wait these are really *both* bugs.		        [22:28]
<zooko> I've just written a unit test for the bug that finder says "No more
	shares, ever!" immediately followed by "here's another share". 
<zooko> That's not right. :-)
<zooko> But then we also need *another* test which is that node, when told "no
        more shares ever" goes ahead and makes do with what it has if it can.
```
Author

Okay I wrote a test for the first part -- it goes red if finder says "no more shares ever" and then delivers another share. I started writing a test for the second part -- it should go red if fetcher, given two shares on one server and one share on another server, gives up in despair. By inspection, it looks like fetcher doesn't have that bug: source:trunk/src/allmydata/immutable/downloader/fetcher.py@4707#L128. Also maybe that test is redundant with bigger downloader tests that we already have... I'll attach the patch as it exists in my sandbox at this moment.

Author

Attachment mockingtests.dpatch.txt (14928 bytes) added

Author

Okay here is a narrowly focussed unit test which gives the finder a red mark because the finder announced "no more shares, ever" and then subsequently announced a new share: changeset:56a3258ff7560af3. (Note: it may be appropriate to write a more lenient unit test which allows the finder to do that and get away with it. However, the documentation for "no more shares" should then be updated, and also I haven't yet been able to think of a unit test which is looser than this one and which still narrowly focusses on this bug.)

Author

I thought that the following patch (that I will attach to this ticket after I post this comment) would surely fix the bug by making the retire-request and add-new-share events happen in the same tick. Alas, it causes some other tests to fail in ways that I don't understand, namely allmydata.test.test_cli.Errors.test_get.

Author

Attachment do-retire-and-got-share-in-same-tick.dpatch.txt (2056 bytes) added

Author

Hm, how could attachment:do-retire-and-got-share-in-same-tick.dpatch.txt cause a failure of `allmydata.test.test_cli.Errors.test_get`? This means there is some other dependency between state-updating events related to `_request_retired()` and `_got_response`, right?

I'm giving up for now. Help!

Author

Okay I applied Brian's patch attachment:1191-fix.diff and it made [my new test]source:trunk/src/allmydata/test/test_immutable.py@4715#L63 go green, so I guess I'll commit and stop worrying about why attachment:do-retire-and-got-share-in-same-tick.dpatch.txt made allmydata.test.test_cli.Errors.test_get hang. I consider attachment:1191-fix.diff combined with [my new test]source:trunk/src/allmydata/test/test_immutable.py@4715#L63 to be a sufficient fix for this issue.

Brian Warner <warner@lothar.com> commented 2010-09-10 05:00:43 +00:00
Owner

In changeset:0475bd8e27ae8307:

```
immutable download: have the finder inform its share consumer "no more shares" in a subsequent tick, thus avoiding accidentally telling it "no more shares" now and then telling it "here's another share" in a subsequent tick
fixes #1191
Patch by Brian. This patch description was actually written by Zooko, but I forged Brian's name on the "author" field so that he would get credit for this patch in revision control history.
```
tahoe-lafs added the r/fixed label 2010-09-10 05:00:43 +00:00
Brian Warner <warner@lothar.com> closed this issue 2010-09-10 05:00:43 +00:00
Author

By the way, there is a generalization that I could make about attachment:do-retire-and-got-share-in-same-tick.dpatch.txt compared to attachment:1191-fix.diff. We use state machines to handle asynchronous events, so when we want some behavior which spans multiple network events or timeouts and which has state, we have to store the state somewhere and check it in the next step of the state machine. *But* we should avoid the state-machine paradigm whenever possible, so if we have some behavior which spans multiple ticks within one Twisted reactor, then we should *not* store the state somewhere and check it in the function that we eventually run—we should instead *pass* that state as an argument to the eventually-run function. That's just my opinion, man.

This is not to criticize attachment:1191-fix.diff. It is the smallest patch that fixes all the known problems, and it is what we're actually deploying in 1.8.0, but I wanted to explain why I spent time attempting (unsuccessfully) to write attachment:do-retire-and-got-share-in-same-tick.dpatch.txt instead.

A related idea: suppose you have a `loop()` method (we have several) which contains the core of the state machine, and you have some new information/some new event to communicate to the state machine. Then do *not* write down the new event and trigger the state machine, like `self._no_more_shares = True; eventually(self.loop)`, but instead extend the interface of `loop()` to accept this event, like `eventually(self.loop, no_more_shares=True)`. That way people will not have to think about what happens if the state changes in other ways before that `eventually(self.loop)` happens, such as if there is already a different call to `loop` on the eventual queue. If we hadn't needed to consider such potential complications, it would have been easier for us to diagnose the issue in this ticket.

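A schematic contrast of those two styles (hypothetical class, not Tahoe's actual code):

```python
from foolscap.eventual import eventually

class LoopStateMachine(object):
    """Hypothetical loop()-driven state machine, just to contrast the styles."""
    def __init__(self):
        self._no_more_shares = False

    # Style argued against: record the event in instance state, then poke the
    # loop. Any *other* loop() call already sitting on the eventual queue will
    # also observe the flag, earlier than intended.
    def signal_via_flag(self):
        self._no_more_shares = True
        eventually(self.loop)

    # Style recommended: carry the event in the call itself, so it is visible
    # only to the one loop() invocation it was meant for.
    def signal_via_argument(self):
        eventually(self.loop, no_more_shares=True)

    def loop(self, no_more_shares=False):
        if no_more_shares or self._no_more_shares:
            pass  # give up / finish, in the real code
```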
Reference: tahoe-lafs/trac#1191