two-hour delay to connect to a grid from Win32, if there are many storage servers unreachable #605

Open
opened 2009-02-03 23:11:23 +00:00 by zooko · 18 comments

Current trunk changeset:91a28da3aa2e98c9 takes something like two hours to connect to all of the testgrid storage servers. Two seconds would be more like it! Strangely, it spent a long time (tens of minutes) with a connection open to *one* tahoe server node on each server out of the three tahoe server nodes running on each server. Eventually it then connected to a second server node on each server, and finally to the third.
zooko added the
c/code-network
p/major
t/defect
v/1.2.0
labels 2009-02-03 23:11:23 +00:00
zooko added this to the 1.3.0 milestone 2009-02-03 23:11:23 +00:00
zooko self-assigned this 2009-02-03 23:11:23 +00:00
Author

I'm trying to reproduce it with tahoe-1.2.0 on the same Windows laptop (zaulaw).
Author

A-ha! That's interesting! The same behavior is occurring with tahoe-1.2.0. Here is the full version string:

My versions: allmydata: 1.2.0, foolscap: 0.3.2, pycryptopp: 0.5.1, zfec: 1.3.4, twisted: 8.1.0, nevow: 0.9.32, simplejson: 1.7.3, pyopenssl: 0.8, setuptools: 0.7a1

The next experiment is to try an older version of foolscap. Let's see... tahoe-1.2.0 shipped saying that it required foolscap >= 0.2.9: source:_auto_deps@20080722010403-92b7f-a74ec17a8a9cff0834ecc122ffa275280f563cea, so I'll try that.

Author

That link should have been source:_auto_deps.py@20080722010403-92b7f-a74ec17a8a9cff0834ecc122ffa275280f563cea.

Could there be some sort of firewall that's rate-limiting connections to a single IP address? Or a NAT table which can only hold one entry per destination address?

Try using telnet or netcat to connect to something on the not-yet-connected servers while it's in this state, or point a browser on that box at the server's webapi ports.
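The reachability check suggested above can also be scripted. Here is a minimal Python sketch of such a probe; the host and port are placeholders, not values from this ticket, and note that a successful TCP connect says nothing about whether the foolscap/TLS negotiation layered on top would succeed:

```python
import socket

def probe(host, port, timeout=5.0):
    """Return True if a plain TCP connection to (host, port) is accepted.

    This only verifies TCP-level reachability; it does not speak the
    foolscap negotiation protocol, so it cannot rule out problems in the
    STARTTLS/banana handshake above TCP.
    """
    try:
        sock = socket.create_connection((host, port), timeout=timeout)
    except OSError:
        # Connection refused, timed out, or otherwise unreachable.
        return False
    sock.close()
    return True

# Example (placeholder address): probe("207.7.153.162", 55217)
```

Running this against each not-yet-connected server while the client is stuck would distinguish a network-level block from a foolscap-level negotiation problem.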

Author

Those are good ideas. The built-in Windows XP firewall is turned off. I can netcat to one of the servers that hasn't connected and the TCP connection is established, although I don't know how to type "hello foolscap begin your STARTTLS negotiation now", so all I can say is that the connection was established and then, a while later, was dropped.

It certainly can't be the fault of the wireless router or the DSL modem on my LAN, because four other computers on the same LAN connect to all blockservers quickly, as expected.

Next step is to examine the flog tail while this is happening. If someone else wanted to try to reproduce it on their Windows machine (including allmydata.com's virtual Windows machine), that would be helpful.

Author

**Attachment** flogtail.txt (450242 bytes) added
Author

Here is the output from `flogtool tail` while the client is running and connecting to several servers, including:

lwkv6cjicbzqjwwwuifik3pogeupsicb  	amduser@tahoebs4:public/bs4c4  	Yes: to 207.7.153.162:51561  	20:41:20 03-Feb-2009  	20:41:18 03-Feb-2009  	1.2.0-r3056  	storage

but not:

xiktf6ok5f5ao5znxxttriv233hmvi4v  	amduser@tahoebs4:public/bs4c3  	No  	20:47:59 03-Feb-2009  	20:41:19 03-Feb-2009  	1.2.0-r3056  	storage

Here is an excerpt from this flogtail file:

$ grep -iEe"xikt|wkv6" flogtail.txt  | cut -c-180       
20:41:18.953 L20 []#118 new announcement[storage]: ('pb://lwkv6cjicbzqjwwwuifik3pogeupsicb@207.7.153.162:51561,127.0.0.1:51561/yadf5xhmz5hrv2iqc6ygr52w73fhqnnh', 'storage', 'RIStor
20:41:18.953 L20 []#119 attempting to connect to lwkv6cji
20:41:18.953 L20 []#120 TubConnector created from cjzndszpggbecw2r7cpf3hsl3snd4bly to lwkv6cjicbzqjwwwuifik3pogeupsicb
20:41:18.953 L20 []#122 Starting factory <foolscap.negotiate.TubConnectorClientFactory object [from cjzndszp] [to lwkv6cji] at 0x01A5F490>
20:41:18.953 L20 []#124 Starting factory <foolscap.negotiate.TubConnectorClientFactory object [from cjzndszp] [to lwkv6cji] at 0x01A5F470>
20:41:19.030 L20 []#394 new announcement[storage]: ('pb://xiktf6ok5f5ao5znxxttriv233hmvi4v@207.7.153.162:55217,127.0.0.1:55217/duzihdt3iweli5eendljqihnouceczzl', 'storage', 'RIStor
20:41:19.030 L20 []#395 attempting to connect to xiktf6ok
20:41:19.030 L20 []#396 TubConnector created from cjzndszpggbecw2r7cpf3hsl3snd4bly to xiktf6ok5f5ao5znxxttriv233hmvi4v
20:41:19.030 L20 []#398 Starting factory <foolscap.negotiate.TubConnectorClientFactory object [from cjzndszp] [to xiktf6ok] at 0x02845A70>
20:41:19.030 L20 []#400 Starting factory <foolscap.negotiate.TubConnectorClientFactory object [from cjzndszp] [to xiktf6ok] at 0x02845A50>
20:41:19.203 L10 []#521 initClient: to target pb://lwkv6cjicbzqjwwwuifik3pogeupsicb
20:41:18.953 L20 []#118 new announcement[storage]: ('pb://lwkv6cjicbzqjwwwuifik3pogeupsicb@207.7.153.162:51561,127.0.0.1:51561/yadf5xhmz5hrv2iqc6ygr52w73fhqnnh', 'storage', 'RIStor
20:41:18.953 L20 []#119 attempting to connect to lwkv6cji
20:41:18.953 L20 []#120 TubConnector created from cjzndszpggbecw2r7cpf3hsl3snd4bly to lwkv6cjicbzqjwwwuifik3pogeupsicb
20:41:18.953 L20 []#118 new announcement[storage]: ('pb://lwkv6cjicbzqjwwwuifik3pogeupsicb@207.7.153.162:51561,127.0.0.1:51561/yadf5xhmz5hrv2iqc6ygr52w73fhqnnh', 'storage', 'RIStor
20:41:18.953 L20 []#119 attempting to connect to lwkv6cji
20:41:18.953 L20 []#120 TubConnector created from cjzndszpggbecw2r7cpf3hsl3snd4bly to lwkv6cjicbzqjwwwuifik3pogeupsicb
20:41:18.953 L20 []#122 Starting factory <foolscap.negotiate.TubConnectorClientFactory object [from cjzndszp] [to lwkv6cji] at 0x01A5F490>
20:41:18.953 L20 []#124 Starting factory <foolscap.negotiate.TubConnectorClientFactory object [from cjzndszp] [to lwkv6cji] at 0x01A5F470>
20:41:19.030 L20 []#394 new announcement[storage]: ('pb://xiktf6ok5f5ao5znxxttriv233hmvi4v@207.7.153.162:55217,127.0.0.1:55217/duzihdt3iweli5eendljqihnouceczzl', 'storage', 'RIStor
20:41:19.030 L20 []#395 attempting to connect to xiktf6ok
20:41:19.030 L20 []#396 TubConnector created from cjzndszpggbecw2r7cpf3hsl3snd4bly to xiktf6ok5f5ao5znxxttriv233hmvi4v
20:41:19.030 L20 []#398 Starting factory <foolscap.negotiate.TubConnectorClientFactory object [from cjzndszp] [to xiktf6ok] at 0x02845A70>
20:41:19.030 L20 []#400 Starting factory <foolscap.negotiate.TubConnectorClientFactory object [from cjzndszp] [to xiktf6ok] at 0x02845A50>
20:41:19.203 L10 []#521 initClient: to target pb://lwkv6cjicbzqjwwwuifik3pogeupsicb
20:41:19.203 L10 []#522 sendPlaintextClient: GET for tubID lwkv6cjicbzqjwwwuifik3pogeupsicb
20:41:19.796 L30 []#546 Tub.connectorFinished: WEIRD, <foolscap.negotiate.TubConnector object at 0x01A5F550 from cjzndszpggbecw2r7cpf3hsl3snd4bly to 7tlov7egj7ultza3dy2dlgev6gijlgv
20:41:20.187 L10 []#547 dataReceived(isClient=True,phase=1,options={}): 'banana-negotiation-range: 3 3\r\ninitial-vocab-table-range: 0 1\r\nmy-incarnation: bab985ce462e6915\r\nmy-t
20:41:20.187 L10 []#548 evaluateHello(isClient=True): offer={'my-tub-id': 'lwkv6cjicbzqjwwwuifik3pogeupsicb', 'my-incarnation': 'bab985ce462e6915', 'initial-vocab-table-range': '0 
20:41:20.217 L20 []#555 negotiationComplete, <foolscap.negotiate.TubConnectorClientFactory object [from cjzndszp] [to lwkv6cji] at 0x01A5F470> won
20:41:20.217 L20 []#557 connectorFinished (<foolscap.negotiate.TubConnector object at 0x01A5F3D0 from cjzndszpggbecw2r7cpf3hsl3snd4bly to lwkv6cjicbzqjwwwuifik3pogeupsicb>)
20:41:20.217 L20 []#558 Stopping factory <foolscap.negotiate.TubConnectorClientFactory object [from cjzndszp] [to lwkv6cji] at 0x01A5F490>
20:41:20.217 L20 []#559 connectorFinished (<foolscap.negotiate.TubConnector object at 0x01A5F3D0 from cjzndszpggbecw2r7cpf3hsl3snd4bly to lwkv6cjicbzqjwwwuifik3pogeupsicb>)
20:41:20.217 L30 []#560 Tub.connectorFinished: WEIRD, <foolscap.negotiate.TubConnector object at 0x01A5F3D0 from cjzndszpggbecw2r7cpf3hsl3snd4bly to lwkv6cjicbzqjwwwuifik3pogeupsic
20:41:20.453 L20 []#567 got connection to lwkv6cji, getting versions
20:41:20.625 L10 []#573  reqID=2, rref=<RemoteReference at 0x28b0690 [pb://lwkv6cjicbzqjwwwuifik3pogeupsicb@207.7.153.162:51561,127.0.0.1:51561/yadf5xhmz5hrv2iqc6ygr52w73fhqnnh]>, 
20:41:20.625 L20 []#575 connected to lwkv6cji, version {'http://allmydata.org/tahoe/protocols/storage/v1': {'maximum-immutable-share-size': 4294967296L}, 'application-version': 'un
20:41:43.015 L30 []#646 Tub.connectorFinished: WEIRD, <foolscap.negotiate.TubConnector object at 0x01A76D30 from cjzndszpggbecw2r7cpf3hsl3snd4bly to 6fyx5u4zr7tvz3szynihc4x3uc6ct5g
20:41:49.062 L20 []#792 Stopping factory <foolscap.negotiate.TubConnectorClientFactory object [from cjzndszp] [to xiktf6ok] at 0x02845A50>
20:41:49.062 L20 []#795 connectorFinished (<foolscap.negotiate.TubConnector object at 0x028459B0 from cjzndszpggbecw2r7cpf3hsl3snd4bly to xiktf6ok5f5ao5znxxttriv233hmvi4v>)
20:41:49.062 L20 []#796 Stopping factory <foolscap.negotiate.TubConnectorClientFactory object [from cjzndszp] [to xiktf6ok] at 0x02845A70>
20:41:53.015 L20 []#1012 TubConnector created from cjzndszpggbecw2r7cpf3hsl3snd4bly to xiktf6ok5f5ao5znxxttriv233hmvi4v
20:41:53.015 L20 []#1014 Starting factory <foolscap.negotiate.TubConnectorClientFactory object [from cjzndszp] [to xiktf6ok] at 0x02996490>
20:41:53.015 L20 []#1016 Starting factory <foolscap.negotiate.TubConnectorClientFactory object [from cjzndszp] [to xiktf6ok] at 0x02996630>
20:42:23.015 L20 []#1254 Stopping factory <foolscap.negotiate.TubConnectorClientFactory object [from cjzndszp] [to xiktf6ok] at 0x02996630>
20:42:23.015 L20 []#1257 connectorFinished (<foolscap.negotiate.TubConnector object at 0x02996650 from cjzndszpggbecw2r7cpf3hsl3snd4bly to xiktf6ok5f5ao5znxxttriv233hmvi4v>)
20:42:23.015 L20 []#1258 Stopping factory <foolscap.negotiate.TubConnectorClientFactory object [from cjzndszp] [to xiktf6ok] at 0x02996490>
20:42:25.578 L20 []#1444 TubConnector created from cjzndszpggbecw2r7cpf3hsl3snd4bly to xiktf6ok5f5ao5znxxttriv233hmvi4v
20:42:25.578 L20 []#1446 Starting factory <foolscap.negotiate.TubConnectorClientFactory object [from cjzndszp] [to xiktf6ok] at 0x029FD350>
20:42:25.578 L20 []#1448 Starting factory <foolscap.negotiate.TubConnectorClientFactory object [from cjzndszp] [to xiktf6ok] at 0x029FD310>
20:42:55.592 L20 []#1659 Stopping factory <foolscap.negotiate.TubConnectorClientFactory object [from cjzndszp] [to xiktf6ok] at 0x029FD350>
20:42:55.592 L20 []#1660 connectorFinished (<foolscap.negotiate.TubConnector object at 0x029FD850 from cjzndszpggbecw2r7cpf3hsl3snd4bly to xiktf6ok5f5ao5znxxttriv233hmvi4v>)
20:42:55.592 L20 []#1661 Stopping factory <foolscap.negotiate.TubConnectorClientFactory object [from cjzndszp] [to xiktf6ok] at 0x029FD310>
20:43:01.733 L20 []#1812 TubConnector created from cjzndszpggbecw2r7cpf3hsl3snd4bly to xiktf6ok5f5ao5znxxttriv233hmvi4v
20:43:01.733 L20 []#1814 Starting factory <foolscap.negotiate.TubConnectorClientFactory object [from cjzndszp] [to xiktf6ok] at 0x029FDAB0>
20:43:01.733 L20 []#1816 Starting factory <foolscap.negotiate.TubConnectorClientFactory object [from cjzndszp] [to xiktf6ok] at 0x029FD970>
20:43:31.750 L20 []#2013 Stopping factory <foolscap.negotiate.TubConnectorClientFactory object [from cjzndszp] [to xiktf6ok] at 0x029FD970>
20:43:31.750 L20 []#2014 connectorFinished (<foolscap.negotiate.TubConnector object at 0x029FD270 from cjzndszpggbecw2r7cpf3hsl3snd4bly to xiktf6ok5f5ao5znxxttriv233hmvi4v>)
20:43:31.750 L20 []#2015 Stopping factory <foolscap.negotiate.TubConnectorClientFactory object [from cjzndszp] [to xiktf6ok] at 0x029FDAB0>
20:43:50.733 L20 []#2210 TubConnector created from cjzndszpggbecw2r7cpf3hsl3snd4bly to xiktf6ok5f5ao5znxxttriv233hmvi4v
20:43:50.733 L20 []#2212 Starting factory <foolscap.negotiate.TubConnectorClientFactory object [from cjzndszp] [to xiktf6ok] at 0x029E85F0>
20:43:50.733 L20 []#2214 Starting factory <foolscap.negotiate.TubConnectorClientFactory object [from cjzndszp] [to xiktf6ok] at 0x029E8230>
20:44:20.750 L20 []#2411 Stopping factory <foolscap.negotiate.TubConnectorClientFactory object [from cjzndszp] [to xiktf6ok] at 0x029E85F0>
20:44:20.750 L20 []#2412 connectorFinished (<foolscap.negotiate.TubConnector object at 0x029E8C30 from cjzndszpggbecw2r7cpf3hsl3snd4bly to xiktf6ok5f5ao5znxxttriv233hmvi4v>)
20:44:20.750 L20 []#2413 Stopping factory <foolscap.negotiate.TubConnectorClientFactory object [from cjzndszp] [to xiktf6ok] at 0x029E8230>
20:45:12.265 L20 []#2616 TubConnector created from cjzndszpggbecw2r7cpf3hsl3snd4bly to xiktf6ok5f5ao5znxxttriv233hmvi4v
20:45:12.265 L20 []#2618 Starting factory <foolscap.negotiate.TubConnectorClientFactory object [from cjzndszp] [to xiktf6ok] at 0x029A2970>
20:45:12.265 L20 []#2620 Starting factory <foolscap.negotiate.TubConnectorClientFactory object [from cjzndszp] [to xiktf6ok] at 0x029A2070>
20:45:42.265 L20 []#2799 Stopping factory <foolscap.negotiate.TubConnectorClientFactory object [from cjzndszp] [to xiktf6ok] at 0x029A2970>
20:45:42.265 L20 []#2800 connectorFinished (<foolscap.negotiate.TubConnector object at 0x029A28B0 from cjzndszpggbecw2r7cpf3hsl3snd4bly to xiktf6ok5f5ao5znxxttriv233hmvi4v>)
20:45:42.265 L20 []#2801 Stopping factory <foolscap.negotiate.TubConnectorClientFactory object [from cjzndszp] [to xiktf6ok] at 0x029A2070>
20:48:06.625 L20 []#3076 TubConnector created from cjzndszpggbecw2r7cpf3hsl3snd4bly to xiktf6ok5f5ao5znxxttriv233hmvi4v
20:48:06.625 L20 []#3078 Starting factory <foolscap.negotiate.TubConnectorClientFactory object [from cjzndszp] [to xiktf6ok] at 0x0282E3D0>
20:48:06.625 L20 []#3080 Starting factory <foolscap.negotiate.TubConnectorClientFactory object [from cjzndszp] [to xiktf6ok] at 0x0282EAD0>
20:48:36.640 L20 []#3186 Stopping factory <foolscap.negotiate.TubConnectorClientFactory object [from cjzndszp] [to xiktf6ok] at 0x0282EAD0>
20:48:36.640 L20 []#3187 connectorFinished (<foolscap.negotiate.TubConnector object at 0x0282E6D0 from cjzndszpggbecw2r7cpf3hsl3snd4bly to xiktf6ok5f5ao5znxxttriv233hmvi4v>)
20:48:36.640 L20 []#3188 Stopping factory <foolscap.negotiate.TubConnectorClientFactory object [from cjzndszp] [to xiktf6ok] at 0x0282E3D0>

That's the complete output of that grep command at this time.

Attached is the current flogtail.txt file.
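One pattern worth noting in the excerpt above: the "TubConnector created ... to xiktf6ok" lines recur at roughly lengthening intervals, which looks like a randomized, growing reconnection backoff rather than a stuck single attempt. A small sketch computing the gaps between those timestamps (copied from the log above) makes this visible:

```python
from datetime import datetime

# Timestamps of the successive "TubConnector created ... to xiktf6ok..."
# lines in the flog excerpt above.
attempts = ["20:41:53", "20:42:25", "20:43:01", "20:43:50", "20:45:12", "20:48:06"]

def gaps(times):
    """Return the whole-second intervals between consecutive HH:MM:SS stamps."""
    parsed = [datetime.strptime(t, "%H:%M:%S") for t in times]
    return [int((b - a).total_seconds()) for a, b in zip(parsed, parsed[1:])]

print(gaps(attempts))  # [32, 36, 49, 82, 174]
```

Each attempt also stops almost exactly 30 seconds after it starts (e.g. 20:41:53 to 20:42:23), which is consistent with a fixed per-attempt connect timeout; whether that timeout and the growing retry spacing are the intended foolscap behavior here is a question for the foolscap side.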

Author

So it looks like there's something funny about foolscap's attempts to connect to `xikt` but not `lwkv`. What's the next step? Perhaps to attach flog gatherers to `xikt` as well as to this client? Also, of course, I would like someone to try to reproduce this problem elsewhere. To reproduce it:

  1. Start a node pointed at the test grid.
  2. See if it has connected to all the blockservers within a few seconds.

Of course, even better would be automated tests of this functionality! We currently have such tests, but buildbot runs them only on a Linux system. Perhaps we could run them on a Windows buildslave as well.
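The two manual reproduction steps above amount to "poll the client until it reports enough connected servers, or give up". A small, testable Python sketch of that polling logic (the `check` callable is a placeholder for however one counts connected servers, e.g. by scraping the client's welcome page; no specific webapi URL or format is assumed here):

```python
import time

def wait_for_connections(check, expected, timeout=10.0, interval=0.5):
    """Poll check() until it reports at least `expected` connected servers.

    `check` is a zero-argument callable returning the current count of
    connected storage servers. Returns True if the threshold is reached
    before `timeout` seconds elapse, else False.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check() >= expected:
            return True
        time.sleep(interval)
    return False
```

With a few-second timeout, a healthy client should return True almost immediately; on the affected Windows machine this would instead time out for the laggard servers.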

Author

Okay, at 20:52:54 it connected:

20:52:53.625 L10 []#3657 initClient: to target pb://xiktf6ok5f5ao5znxxttriv233hmvi4v
20:52:53.625 L10 []#3658 sendPlaintextClient: GET for tubID xiktf6ok5f5ao5znxxttriv233hmvi4v
20:52:53.625 L10 []#3659 sendPlaintextClient: wantEncryption=True
20:52:53.780 L10 []#3660 dataReceived(isClient=True,phase=0,options={}): 'HTTP/1.1 101 Switching Protocols\r\nUpgrade: TLS/1.0, PB/1.0\r\nConnection: Upgrade\r\n\r\n'
20:52:53.780 L10 []#3661 handlePLAINTEXTClient: header='HTTP/1.1 101 Switching Protocols^M
Upgrade: TLS/1.0, PB/1.0^M
Connection: Upgrade'
20:52:53.780 L10 []#3662 startENCRYPTED(isClient=True, encrypted=True)
20:52:53.780 L10 []#3663 startTLS, client=True
20:52:53.780 L10 []#3664 Negotiate.sendHello (isClient=True): {'my-incarnation': 'd60b43291400a40a', 'last-connection': 'none 0', 'my-tub-id': 'cjzndszpggbecw2r7cpf3hsl3snd4bly', '
initial-vocab-table-range': '0 1', 'banana-negotiation-range': '3 3'}
20:52:53.780 L10 []#3665 dataReceived(isClient=True,phase=1,options={}): 'banana-negotiation-range: 3 3\r\ninitial-vocab-table-range: 0 1\r\nmy-incarnation: 8e3b459d16af4062\r\nmy-tub-id: ivjakubrruewknqg7wgb5hbinasqupj6\r\n\r\n'
20:52:53.780 L10 []#3666 evaluateHello(isClient=True): offer={'my-tub-id': 'ivjakubrruewknqg7wgb5hbinasqupj6', 'my-incarnation': '8e3b459d16af4062', 'initial-vocab-table-range': '0 1', 'banana-negotiation-range': '3 3'}
20:52:53.780 L10 []#3667 iAmTheMaster: False
20:52:53.905 L20 []#3668 Stopping factory <foolscap.negotiate.TubConnectorClientFactory object [from cjzndszp] [to sskpc5gr] at 0x01A5F810>
20:52:53.905 L20 []#3669 Stopping factory <foolscap.negotiate.TubConnectorClientFactory object [from cjzndszp] [to hvoivvgm] at 0x01A5F350>
20:52:53.905 L20 []#3670 Stopping factory <foolscap.negotiate.TubConnectorClientFactory object [from cjzndszp] [to ivjakubr] at 0x029DC4F0>
20:52:54.092 L10 []#3671 dataReceived(isClient=True,phase=2,options={}): 'banana-decision-version: 3\r\ncurrent-connection: 8e3b459d16af4062 10\r\ninitial-vocab-table-index: 1 bb33\r\n\r\n'
20:52:54.092 L10 []#3672 handleDECIDING(isClient=True): banana-decision-version: 3^M
current-connection: 8e3b459d16af4062 10^M
initial-vocab-table-index: 1 bb33
20:52:54.092 L20 []#3673 Banana.acceptDecision: got {'banana-decision-version': '3', 'initial-vocab-table-index': '1 bb33', 'current-connection': '8e3b459d16af4062 10'}
20:52:54.092 L10 []#3674 Negotiate.switchToBanana(isClient=True)
20:52:54.092 L10 []#3675 params: {'banana-decision-version': 3, 'initial-vocab-table-index': 1}
20:52:54.092 L20 []#3676 negotiationComplete, <foolscap.negotiate.TubConnectorClientFactory object [from cjzndszp] [to ivjakubr] at 0x029DC670> won
20:52:54.092 L20 []#3677 connectorFinished (<foolscap.negotiate.TubConnector object at 0x029DCD90 from cjzndszpggbecw2r7cpf3hsl3snd4bly to ivjakubrruewknqg7wgb5hbinasqupj6>)
20:52:54.250 L10 []#3678 dataReceived(isClient=True,phase=1,options={}): 'banana-negotiation-range: 3 3\r\ninitial-vocab-table-range: 0 1\r\nmy-incarnation: 962e316d4519b914\r\nmy-tub-id: xiktf6ok5f5ao5znxxttriv233hmvi4v\r\n\r\n'
20:52:54.250 L10 []#3679 evaluateHello(isClient=True): offer={'my-tub-id': 'xiktf6ok5f5ao5znxxttriv233hmvi4v', 'my-incarnation': '962e316d4519b914', 'initial-vocab-table-range': '0 1', 'banana-negotiation-range': '3 3'}
20:52:54.250 L10 []#3680 iAmTheMaster: False
20:52:54.296 L20 []#3681 got connection to ivjakubr, getting versions
20:52:54.530 L10 []#3682 dataReceived(isClient=True,phase=2,options={}): 'banana-decision-version: 3\r\ncurrent-connection: 962e316d4519b914 10\r\ninitial-vocab-table-index: 1 bb33\r\n\r\n'
20:52:54.530 L10 []#3683 handleDECIDING(isClient=True): banana-decision-version: 3^M
current-connection: 962e316d4519b914 10^M
initial-vocab-table-index: 1 bb33
20:52:54.530 L20 []#3684 Banana.acceptDecision: got {'banana-decision-version': '3', 'initial-vocab-table-index': '1 bb33', 'current-connection': '962e316d4519b914 10'}
20:52:54.530 L10 []#3685 Negotiate.switchToBanana(isClient=True)
20:52:54.530 L10 []#3686 params: {'banana-decision-version': 3, 'initial-vocab-table-index': 1}
20:52:54.530 L20 []#3687 negotiationComplete, <foolscap.negotiate.TubConnectorClientFactory object [from cjzndszp] [to xiktf6ok] at 0x028B04F0> won
20:52:54.530 L20 []#3688 told to disconnect
20:52:54.530 L20 []#3689 connectorFinished (<foolscap.negotiate.TubConnector object at 0x028B05D0 from cjzndszpggbecw2r7cpf3hsl3snd4bly to xiktf6ok5f5ao5znxxttriv233hmvi4v>)
20:52:54.530 L20 []#3690 Stopping factory <foolscap.negotiate.TubConnectorClientFactory object [from cjzndszp] [to xiktf6ok] at 0x028B0CD0>
20:52:54.530 L20 []#3691 connectorFinished (<foolscap.negotiate.TubConnector object at 0x028B05D0 from cjzndszpggbecw2r7cpf3hsl3snd4bly to xiktf6ok5f5ao5znxxttriv233hmvi4v>)
20:52:54.530 L30 []#3692 Tub.connectorFinished: WEIRD, <foolscap.negotiate.TubConnector object at 0x028B05D0 from cjzndszpggbecw2r7cpf3hsl3snd4bly to xiktf6ok5f5ao5znxxttriv233hmvi4v> is not in [<foolscap.negotiate.TubConnector object at 0x029DCFF0 from cjzndszpggbecw2r7cpf3hsl3snd4bly to jrpz55j6rlzqjh5ghnx4hlpa4pmnd4th>, <foolscap.negotiate.TubConnector object at 0x029DCD30 from cjzndszpggbecw2r7cpf3hsl3snd4bly to mnvtm46ne4r4he5qeoavek3gvpry7lpt>, <foolscap.negotiate.TubConnector object at 0x028B0D30 from cjzndszpggbecw2r7cpf3hsl3snd4bly to trjdor3okozw4eld3l6zl4ap4z6h5tk6>, <foolscap.negotiate.TubConnector object at 0x028B0930 from cjzndszpggbecw2r7cpf3hsl3snd4bly to reezjqi5hylz4nmsajurqosvo6ou44xc>, <fool
Author

ndurner on IRC said that Windows XP SP 2 limits the number of concurrent TCP connections, and said that if this had happened then there would be a note in Start -> Control Panel -> Administrative Tools -> Event Viewer. Sure enough there it is:

"TCP/IP has reached the security limit imposed on the number of concurrent TCP connection attempts."

It offers to connect to a Microsoft server and fetch more information, and I can't find a URL for this information on the public web, so here it is:

Details 
Product: Windows Operating System 
ID: 4226 
Source: Tcpip 
Version: 5.2 
Symbolic Name: EVENT_TCPIP_TCP_CONNECT_LIMIT_REACHED 
Message: TCP/IP has reached the security limit imposed on the number of concurrent (incomplete) TCP connect attempts. 
    
Explanation 
The TCP/IP stack in Windows XP with Service Pack 2 (SP2) installed limits the number of concurrent, incomplete outbound TCP connection attempts. When the limit is reached, subsequent connection attempts are put in a queue and resolved at a fixed rate so that there are only a limited number of connections in the incomplete state. During normal operation, when programs are connecting to available hosts at valid IP addresses, no limit is imposed on the number of connections in the incomplete state. When the number of incomplete connections exceeds the limit, for example, as a result of programs connecting to IP addresses that are not valid, connection-rate limitations are invoked, and this event is logged.

Establishing connection–rate limitations helps to limit the speed at which malicious programs, such as viruses and worms, spread to uninfected computers. Malicious programs often attempt to reach uninfected computers by opening simultaneous connections to random IP addresses. Most of these random addresses result in failed connections, so a burst of such activity on a computer is a signal that it may have been infected by a malicious program.

Connection-rate limitations may cause certain security tools, such as port scanners, to run more slowly.
 
    
User Action 
This event is a warning that a malicious program or a virus might be running on the system. To troubleshoot the issue, find the program that is responsible for the failing connection attempts and, if the program might be malicious, close the program as follows.

To close the program

1. At the command prompt, type `netstat -no`.
2. Find the process with a large number of open connections that are not yet established. These connections are indicated by the TCP state SYN_SENT in the State column of the Active Connections information.
3. Note the process identification number (PID) of the process in the PID column. (If you need to select the option to view the PID for processes, on the View menu, click Select Columns, select the PID (Process Identifier) check box, and then click OK.)
4. Press CTRL+ALT+DELETE and then click Task Manager.
5. On the Processes tab, select the process with the matching PID, and then click End Process.
 
 

--------------------------------------------------------------------------------
 
 Related Knowledge Base articles
 You can find additional information on this topic in the following Microsoft Knowledge Base articles: 
 • A file sharing connection to a Windows Server 2008-based server drops unexpectedly if the server has Symantec Antivirus installed 
 Describes an issue in which a file sharing connection to a Windows Server 2008-based server may drop unexpectedly if the server has Symantec Antivirus installed. Provides a workaround and a resolution. 
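
The manual check described in that event text (find the process with many connections stuck in SYN_SENT in `netstat -no` output) can be sketched as a small parser. This is an illustration only: the sample output below is invented, and real `netstat -no` output also includes a header block.

```python
from collections import Counter

def count_syn_sent(netstat_output):
    """Count connections stuck in SYN_SENT, grouped by owning PID.

    Expects lines in the column layout of `netstat -no` on Windows:
      Proto  Local Address  Foreign Address  State  PID
    """
    counts = Counter()
    for line in netstat_output.splitlines():
        fields = line.split()
        if len(fields) == 5 and fields[0] == "TCP" and fields[3] == "SYN_SENT":
            counts[fields[4]] += 1
    return counts

# Invented sample: two half-open connection attempts owned by PID 2412.
sample = """\
  TCP    10.0.0.2:1081   192.0.2.10:8098   SYN_SENT     2412
  TCP    10.0.0.2:1082   192.0.2.11:8098   SYN_SENT     2412
  TCP    10.0.0.2:1083   192.0.2.12:8098   ESTABLISHED  2412
"""
print(count_syn_sent(sample))  # Counter({'2412': 2})
```

A PID with a pile of SYN_SENT entries like this is exactly what a Tahoe client dialing many unreachable storage servers would look like.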
 
 
Author

Note that even though the production grid has more servers than the test grid, and even though this happened with tahoe-1.2.0 and foolscap-0.2.9 in my own tests, this doesn't seem to be happening with tahoe-prod-3.0.0 on the production grid.

Author

Ah, here's why it doesn't affect the prod grid:

"""During normal operation, when programs are connecting to available hosts at valid IP addresses, no limit is imposed on the number of connections in the incomplete state. When the number of incomplete connections exceeds the limit, for example, as a result of programs connecting to IP addresses that are not valid, connection-rate limitations are invoked, and this event is logged."""

There are too many no-longer-reachable servers on the test grid.

Moving this out of the 1.3.0 Milestone.

zooko removed this from the 1.3.0 milestone 2009-02-05 15:42:43 +00:00
Author

Changing the name to indicate the severity.

zooko added this to the eventually milestone 2009-04-13 03:54:15 +00:00
zooko changed title from delayed connection on Windows to two-hour delay to connect to a grid from Win32, if there are many storage servers unreachable 2009-04-13 03:54:15 +00:00

I'm not sure whether this is actually fixable; it might be an unavoidable problem with Windows TCP/IP, but we should investigate it further.

daira modified the milestone from eventually to 1.7.0 2010-02-27 08:50:01 +00:00
Author

Changes which reduce the number of attempts to connect to servers, such as #448 (download: speak to as few servers as possible), might help. If a Windows user starts up their Tahoe-LAFS gateway and then tries to download a single file, it might not start too many attempts to connect to unreachable servers and thus avoid this penalty from Windows. However, if the user tries to download many files at once, it would still incur this penalty (since each file would trigger attempts to connect to a different set of servers).
Another way we could fix, or at least work around, the problem is to implement some sort of presence-and-routing protocol other than the current naïve one.
The current design of introduction (which I rather like for its simplicity) separates out three notions: announcement of a node's existence, routing information (currently: the IP addresses and port numbers you can use to reach the node), and presence notification (whether the node is currently on-line or off-line). In that design, announcement-of-existence and propagation-of-routing-information are both done by the introducer, while presence is handled directly between each node and each of its peers.
You could imagine an alternative design in which the Introducer (or hopefully instead a decentralized introduction scheme (#68, #295)) informed you "Here is a server, but it is currently off-line", or else didn't tell you about off-line servers at all. In that case, the storage client running on Windows would usually open connections only to currently-live servers, and avoid this penalty from Windows entirely.
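
A minimal sketch of that alternative, heavily hedged: nothing like this exists in Tahoe today, and every name here (`Introducer`, `announce`, `live_servers`) plus the server IDs and addresses are invented for illustration. The point is only that a client consulting such an introducer would dial only servers currently believed to be on-line, never the unreachable ones that trip the Windows connection-rate limit.

```python
# Hypothetical presence-tracking introducer (not Tahoe's actual design).

class Introducer:
    def __init__(self):
        self._announcements = {}  # serverid -> (location, online)

    def announce(self, serverid, location):
        # A (re-)announcement implies the server is currently on-line.
        self._announcements[serverid] = (location, True)

    def mark_offline(self, serverid):
        loc, _ = self._announcements[serverid]
        self._announcements[serverid] = (loc, False)

    def live_servers(self):
        # Clients would only open TCP connections to these.
        return {sid: loc
                for sid, (loc, online) in self._announcements.items()
                if online}

intro = Introducer()
intro.announce("ivjakubr", ("203.0.113.5", 8098))
intro.announce("xiktf6ok", ("203.0.113.6", 8098))
intro.mark_offline("xiktf6ok")
print(intro.live_servers())  # {'ivjakubr': ('203.0.113.5', 8098)}
```

With the test grid's many dead servers filtered out up front, the Windows client would never accumulate enough half-open connection attempts to hit event 4226.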

Author

This isn't going to happen for v1.7.0, and also it is blocked on #448 because I want to see what effect #448 has on this issue.

zooko modified the milestone from 1.7.0 to eventually 2010-05-08 19:42:40 +00:00
Author

If you like this ticket, you might also like the "Brian's New Downloader" bundle of tickets: #800 (improve alacrity by downloading only the part of the Merkle Tree that you need), #798 (improve random-access download to retrieve/decrypt less data), #809 (Measure how segment size affects upload/download speed.), #287 (download: tolerate lost or missing servers), and #448 (download: speak to as few servers as possible).

Author

#448 is fixed. I looked at the code, and this issue (#605) remains unchanged by the fix for #448, because the storage client starts attempting to connect to each storage server as soon as it learns about the server: source:trunk/src/allmydata/storage_client.py@4131#L102. One way to improve the client's behavior with regard to this ticket would be to make that call to start_connection() lazy, invoking it only when the client actually wants to send a message to that particular server. However, this would harm alacrity: it would move TCP and foolscap handshaking from introduction time to request time, increasing the delay between making the first request to each server and receiving its response. I don't think that is a good trade-off.

That means this ticket is going to stay open until we implement something more along the lines of comment:370131 — a way for clients to get hints that certain servers are currently off-line without attempting to open a TCP connection to those servers. Oh, also something that might help with this would be μTP (#1179).
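
The lazy-start_connection() idea rejected above would look roughly like this. `connect_fn` stands in for the TCP/foolscap handshake; all names here are hypothetical sketches, not Tahoe's or foolscap's actual API.

```python
# Hypothetical lazy-connection wrapper: defer the handshake until the
# first request actually needs the server.

class LazyServerConnection:
    def __init__(self, furl, connect_fn):
        self._furl = furl
        self._connect = connect_fn
        self._remote = None

    def remote(self):
        # The first call pays the full TCP/foolscap handshake cost
        # (this is the alacrity hit); later calls reuse the cached reference.
        if self._remote is None:
            self._remote = self._connect(self._furl)
        return self._remote

calls = []
def fake_connect(furl):
    calls.append(furl)
    return object()

conn = LazyServerConnection("pb://example@203.0.113.5:8098/storage", fake_connect)
assert calls == []                     # nothing dialed at "introduction time"
r1 = conn.remote()
r2 = conn.remote()
assert r1 is r2 and len(calls) == 1    # connected exactly once, on first use
```

The trade-off described above is visible here: no connections pile up at startup (good for the Windows limit), but the first remote() call per server absorbs the handshake latency that eager connection would have paid earlier.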

kw:web.port
kw:webapi
kw:webdav
kw:webdrive
kw:webport
kw:websec
kw:website
kw:websocket
kw:welcome
kw:welcome-page
kw:welcomepage
kw:wiki
kw:win32
kw:win64
kw:windows
kw:windows-related
kw:winscp
kw:workaround
kw:world-domination
kw:wrapper
kw:write-enabler
kw:wui
kw:x86
kw:x86-64
kw:xhtml
kw:xml
kw:xss
kw:zbase32
kw:zetuptoolz
kw:zfec
kw:zookos-opinion-needed
kw:zope
kw:zope.interface
p/blocker
p/critical
p/major
p/minor
p/normal
p/supercritical
p/trivial
r/cannot reproduce
r/duplicate
r/fixed
r/invalid
r/somebody else's problem
r/was already fixed
r/wontfix
r/worksforme
t/defect
t/enhancement
t/task
v/0.2.0
v/0.3.0
v/0.4.0
v/0.5.0
v/0.5.1
v/0.6.0
v/0.6.1
v/0.7.0
v/0.8.0
v/0.9.0
v/1.0.0
v/1.1.0
v/1.10.0
v/1.10.1
v/1.10.2
v/1.10a2
v/1.11.0
v/1.12.0
v/1.12.1
v/1.13.0
v/1.14.0
v/1.15.0
v/1.15.1
v/1.2.0
v/1.3.0
v/1.4.1
v/1.5.0
v/1.6.0
v/1.6.1
v/1.7.0
v/1.7.1
v/1.7β
v/1.8.0
v/1.8.1
v/1.8.2
v/1.8.3
v/1.8β
v/1.9.0
v/1.9.0-s3branch
v/1.9.0a1
v/1.9.0a2
v/1.9.0b1
v/1.9.1
v/1.9.2
v/1.9.2a1
v/cloud-branch
v/unknown
Reference: tahoe-lafs/trac#605