separate Tub per server connection #2759

Closed
opened 2016-03-29 21:41:38 +00:00 by warner · 24 comments

Leif, dawuud, and I had an idea during today's devchat: what if we used a separate Tub for each server connection?

The context was Leif's use case, where he wants a grid in which all servers (including his own) advertise a Tor .onion address, but he wants to connect to his own servers over faster direct TCP connections (these servers are on the local network).

Through a combination of the #68 multi-introducer work, and the #517 server-override work, the plan is:

  • each introducer's data is written into a cache file (YAML-format, with one clause per server)
  • there is also an override file, which contains YAML clauses of server data that should be used instead-of/in-addition-to the data received from the introducer
  • the StorageFarmBroker, when deciding how to contact a server, combines data from all introducers, then updates that dict with data from the override file
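
The merge order described above can be sketched in a few lines. This is a hypothetical illustration (the function name and clause shapes are made up, not the actual StorageFarmBroker code): introducer caches are combined first, then the override file wins per server.

```python
# Hypothetical sketch of the proposed merge order: combine all
# introducer caches into one dict, then let the override file win.
def combine_announcements(introducer_caches, overrides):
    """introducer_caches: list of {server_id: clause} dicts;
    overrides: {server_id: clause} from the override file."""
    combined = {}
    for cache in introducer_caches:
        combined.update(cache)   # later introducers win ties
    combined.update(overrides)   # override file wins overall
    return combined

merged = combine_announcements(
    [{"srv1": {"furl": "tor:xyz.onion:80"}}],
    {"srv1": {"furl": "tcp:10.0.0.5:1234"}},
)
# merged["srv1"] now carries the direct-TCP hint from the override file
```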

So Leif can:

  • start up the node normally, wait for the introducers to collect announcements
  • copy the cached clauses for his local servers into the override file
  • edit the override file to modify the FURL to use a direct "tcp:HOST:PORT" hint, instead of the "tor:XYZ.onion:80" hint that they advertised

But now the issue is: tahoe.cfg has an `anonymous=true` flag, which tells it to configure Foolscap to remove the `DefaultTCP` connection-hint handler, for safety: no direct-TCP hints will be honored. So how should this overridden server use an otherwise-prohibited TCP connection?

So our idea was that each YAML clause has two chunks of data: one local, one copied from the introducer announcement. The local data should include a string of some form that specifies the properties of the Tub that should be used for connections to this server. The StorageFarmBroker will spin up a new Tub for each connection, configure it according to those properties, then call `getReference()` (actually `connectTo()`, to get the reconnect-on-drop behavior).

The tahoe.cfg settings for foolscap connection-hint handlers get written into the cached introducer data. StorageFarmBroker creates Tubs that obey those rules because those rules are sitting next to the announcement that will contain the FURL.

In this world, we'll have one Tub for the server (if any), with a persistent identity (storing its key in private/node.privkey as usual). Then we'll have a separate ephemeral Tub for each storage server, which doesn't store its private key anywhere. (I think we'll also have a separate persistent Tub for the control-port / logport).
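
The one-ephemeral-Tub-per-server idea can be modeled in miniature. This toy sketch (not foolscap; `EphemeralTub` and the random-hex "TubID" are stand-ins for a freshly generated, never-persisted TLS key) shows the property we want: each storage server sees a distinct TubID, while each server still gets a stable Tub for its own reconnections.

```python
import os

class EphemeralTub:
    """Stand-in for a foolscap Tub with a fresh, never-persisted key."""
    def __init__(self):
        self.tub_id = os.urandom(16).hex()  # models a new TLS key's TubID

class StorageFarmBroker:
    """Toy broker: one ephemeral Tub per storage server."""
    def __init__(self):
        self._tubs = {}
    def get_tub_for(self, server_id):
        if server_id not in self._tubs:
            self._tubs[server_id] = EphemeralTub()
        return self._tubs[server_id]

broker = StorageFarmBroker()
t1 = broker.get_tub_for("server1")
t2 = broker.get_tub_for("server2")
assert t1.tub_id != t2.tub_id               # no TubID-based correlation
assert broker.get_tub_for("server1") is t1  # stable Tub per server
```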

Potential issues:

  • performance: we have to make a new TLS key (probably RSA?) for each connection. probably not a big deal.
  • We can't share client-side objects between storage servers. We don't do this now, so it's no big loss. The idea would be something like: instead of the client getting access to a server ShareWriter object and sending `.write(data)` messages to it, we could flip it around and *give* the server access to a client-side ShareReader object, and the server would issue `.read(length)` calls to it. That would let the server set the pace more directly. And then the server could sub-contract to a different server by passing it the ShareReader object, then step out of the conversation entirely. However this would only work if our client could accept inbound connections, or if the subcontractor server already had a connection to the client (maybe the client connected to them as well).
  • We lose the sneaky NAT-bypass trick that lets you run a storage server on a NAT-bound machine. The trick is that you also run a client on your machine, it connects to other client+server nodes, then when those nodes want to use your server, they utilize the existing reverse connection (Foolscap doesn't care who originally initiated a connection, as long as both sides have proved control over the right TLS key). This trick only worked when those other clients had public IP addresses, so your box could connect to them.
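
The pull-style transfer floated in the second bullet can be sketched as a toy (the class and function names here are hypothetical, and real remote calls would go over Foolscap, not direct method calls): the server holds a client-side ShareReader and pulls data at its own pace, instead of the client pushing `.write()` calls.

```python
class ShareReader:
    """Toy client-side object; the server calls read() on it."""
    def __init__(self, data):
        self._data = data
        self._offset = 0
    def read(self, length):
        chunk = self._data[self._offset:self._offset + length]
        self._offset += len(chunk)
        return chunk

def server_receive_share(reader, chunk_size=4):
    # The server sets the pace; it could also hand `reader` to a
    # subcontracting server and step out of the conversation entirely.
    received = b""
    while True:
        chunk = reader.read(chunk_size)
        if not chunk:
            break
        received += chunk
    return received

assert server_receive_share(ShareReader(b"share-data")) == b"share-data"
```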

None of those issues are serious: I think we could live with them.

And one benefit is that we'd eliminate the TubID-based correlation between connections to different storage servers. This is the correlation that foils your plans when you call yourself Alice when you connect to server1, and Bob when you connect to server2.

It would leave the #666 Accounting pubkey relationship (but you'd probably turn that off if you wanted anonymity), and the timing relationship (server1 and server2 compare notes, and see that "Alice" and "Bob" connect at exactly the same time, and conclude that Alice==Bob). And of course there's the usual storage-index correlation: Alice and Bob are always asking for the same shares. But removing the TubID correlation is a good (and necessary) first step.

The StorageFarmBroker object has responsibility for creating IServer objects for each storage server, and it doesn't have to expose what Tub it's using, so things would be encapsulated pretty nicely. (In the long run, the IServer objects it provides won't be using Foolscap at all).

warner added the
c/code-network
p/normal
t/enhancement
v/1.10.2
labels 2016-03-29 21:41:38 +00:00
warner added this to the undecided milestone 2016-03-29 21:41:38 +00:00

Here's my latest dev branch that partially implements this design:

https://github.com/david415/tahoe-lafs/tree/introless-multiintro_yaml_config.1

  • StorageFarmBroker makes a Tub for each storage server

  • the caching still needs a little bit of work; I never delete the old cache, it just grows. Maybe we should delete the old cache file upon connecting to the introducer?

TODO:

  • teach tahoe to use another YAML configuration file that specifies ALL the introducers (with FURL and transport-handler map) plus server overrides with a transport-handler map

like this?

connections.yaml

introducers:
    intro_nick1:
      furl: "furl://my_furl1"
      connection_types:
        tor:
          handler: foolscap_plugins.socks
          parameters:
            endpoint: "unix:/var/lib/tor/tor_unix.socket"
    intro_nick2:
      furl: "fur2://my_furl2"
      connection_types: ...
servers:
    server_id_1:
      server_options:
        key_s: "..."
        announcement:
          server_id: "..."
          furl: "furl://my_storage_server1/..."
          nickname: "storage1"
      connection_types: ...
    server_id_2:
      server_options:
        key_s: "my_secret_crypto_key2"
        announcement: announcement_2
      connection_types: ...

I like this connections.yaml layout a lot!

Maybe we should have a top-level default connection_types key too, to avoid repeating ourselves in each server and introducer definition? (When it exists, the server and introducer-level connection_types dictionary should be used in place of the default dictionary, not in addition to it).
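
The replace-not-merge rule proposed here is a one-liner; this minimal sketch (function and variable names are hypothetical) pins down the intended semantics:

```python
# A per-server/introducer connection_types dict, when present, replaces
# the top-level default entirely; it is not merged with it.
def connection_types_for(section, default_types):
    return section.get("connection_types", default_types)

default_types = {"tor": {"handler": "foolscap_plugins.socks"}}
with_own = {"connection_types": {"tcp": {}}}   # has its own dict: wins
without = {"furl": "furl://..."}               # no dict: use default
assert connection_types_for(with_own, default_types) == {"tcp": {}}
assert connection_types_for(without, default_types) == default_types
```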

I'm a little hesitant about requiring (local) introducer nicknames because people will have to make one up and it'll probably often end up being "My Introducer" or something like that, but it will certainly make the introducer list on the welcome page easier to understand when there are several introducers. The nickname can also be used as the filename for the introducer's yaml announcement cache.


In my latest dev branch I have all the unit tests working, and I rewrote the multi-intro tests
to use our new connections.yaml file. I also got the static server config working, although
I haven't written unit tests for that yet:

https://github.com/david415/tahoe-lafs/tree/introless-multiintro_yaml_config.1

the next step is to load the connection_types sections of the yaml file.


OK! My dev branch is ready for code review. It passes ALL unit tests except two:

  • allmydata.test.test_introducer.SystemTest.test_system_v1_server
  • allmydata.test.test_introducer.SystemTest.test_system_v2_server

Note: I did not make these features work for the v1 intro client.

david415 self-assigned this 2016-04-02 17:07:24 +00:00

Here's a useful diff to show how my dev branch differs from my introless-multiintro branch, which is the same as Leif's introless-multiintro except that it has the latest upstream/master merged in.

https://github.com/david415/tahoe-lafs/pull/7/files


I made a new foolscap dev branch with the SOCKS5 plugin and merged upstream master into it:
https://github.com/david415/foolscap/tree/tor-client-plugin.4

I've also updated the latest tahoe-lafs dev branch and fixed some of the introducer unit tests that were failing... but I thought that I had previously gotten all or almost all of them to pass.

https://github.com/david415/tahoe-lafs/tree/introless-multiintro_yaml_config.1

I'm also a bit confused as to why the web interface is totally broken.


Replying here to meejah's comment:
https://tahoe-lafs.org/trac/tahoe-lafs/ticket/517#comment:73

Since my foolscap changes aren't merged upstream, they require some extra work to get everything to build correctly. I usually pip install tahoe-lafs first, then uninstall the old foolscap and install my new foolscap.


This shows a diff relative to the changes in Leif's multiintro introducerless branch, but with upstream/master merged in:
https://github.com/david415/tahoe-lafs/pull/8

This is diffed against upstream/master:
https://github.com/tahoe-lafs/tahoe-lafs/pull/260

Further code review should be conducted against one of these pull requests and not an older one.

09:33 < warner> dawuud: I haven't looked at the branch recently, so maybe you already did it this way, but I think the first step would 
                be a single (merging + test-passing) PR that only adds Tub creation to the NativeStorageServer, and doesn't make any 
                changes to the Introducer or adds the yaml stuff
09:34 < warner> I think that's something which would be ok to land on it's own, and ought not to break anything
09:35 < warner> and wouldn't change user-visible behavior (mostly), so wouldn't require a lot of docs or study
09:35 < warner> step 2 is probably to have the introducer start writing to the yaml file, but not have anything which reads from it yet
09:36 < warner> step 3 would be reading from the yaml file too, but still have exactly one introducer
09:36 < warner> step 4 is to add the override file (but still the only permissible connection type is "tcp")
09:37 < warner> step 5 is to add multiple/zero introducers
09:37 < warner> step 6 is to add tor and the allowed-connection-types stuff

Here's my attempt to make the storage broker client make one Tub per storage server:

https://github.com/david415/tahoe-lafs/tree/storage_broker_tub.0

So far I've been unable to make some of the unit tests pass.


Last night meejah fixed the test that was failing, here:
https://github.com/meejah/tahoe-lafs/tree/storage_broker_tub.0

warner, please review. This is step 1 as you outlined above.

I'm going to begin work on step 2.


warner,

step 2 --> https://github.com/david415/tahoe-lafs/tree/intro_yaml_cache.0

I'll wait for review before proceeding further with this ticket.

Brian Warner <warner@lothar.com> commented 2016-05-03 23:12:57 +00:00
Owner

In f5291b9/trunk:

document Tub-per-server change

refs ticket:2759
Author

Landed step 1... thanks!

Looking at step 2, here are some thoughts:

  • which yaml library should we use? Could you update `_auto_deps.py` to add its PyPI name?
  • let's make `self.cache_filepath` into `self._cache_filepath`
  • should we use `yaml.safe_load()` instead of plain `load()`? (I don't know what exactly is unsafe about plain `load`, and we aren't parsing files written by others, but maybe it's good general practice to use `safe_load()` by default)
  • I didn't know about `FilePath.setContent()`... that's cool, maybe we should replace `fileutil.write_atomically()` with it
  • at some point (maybe not now) we should put a helper method in `allmydata.node.Node` which returns the `NODEDIR/private/` filename for a given basename, so the magic string "private" isn't duplicated all over the place.
  • let's wrap the new long lines in test_introducer.py
  • we need a test that adds an announcement, then loads the YAML file and makes sure the announcement is present. Probably in `test_introducer.Announcements.test_client_*`, basically a `yaml.load(filename)` and checking that the announcement and key string are correct (including the cases when there is no key, because the sender didn't sign their announcement, or because it went through an old v1 introducer)
  • we should also test duplicate announcements: I'm guessing that we want the YAML file to only contain a single instance of each announcement, and new announcements of the same server should replace the old one (instead of storing both). What's our plan for managing the lifetime of these cached servers? Do we remember everything forever? Or until they've been unreachable for more than X days? (in that case we need to store last-reached timestamps too)

I like where this is going!

Author

idnar (on IRC) pointed out that yaml.load() will, in fact, perform arbitrary code execution. So I guess safe_load() is a good idea.
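
As a small illustration of the difference (the file contents and `pb://` FURL here are placeholders, not real announcements): `safe_load()` only constructs plain Python types, so a hostile cache file can't smuggle in the `!!python/object` tags that make full `yaml.load()` execute arbitrary code.

```python
import yaml

# safe_load yields only dicts, lists, strings, and numbers; it rejects
# the Python-object tags that plain load() would happily instantiate.
doc = yaml.safe_load("""
storage:
  server_id_1:
    furl: "pb://example/..."
""")
assert doc["storage"]["server_id_1"]["furl"].startswith("pb://")
```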


OK, I've made those corrections, although I think my unit tests need a bit of work. I found that the nickname was not propagated into the announcement for some reason.

I was thinking that instead of having a cache expiry policy we could just replace the old cache file once we connect to the introducer. What do you think of this?

Author

Oh, I like that. It sounds like the simplest thing to implement, and mostly retains the current behavior.

We need to think through how replacement announcements get made: I think announcements have sequence numbers, and highest-seqnum wins. If we write all announcements into the cache (as opposed to rewriting the cache each time with only the latest announcement for each server), then we'll have lots of old seqnums in the file, but we can filter those out when we read it.

Also there's a small window when the introducer restarts, before the servers have reconnected to it, when it won't be announcing very much. Our client will erase its cache when it reconnects, and we'll have a small window when the cache is pretty empty. However if the client is still running (it hasn't bounced), it will still remember all the old announcements in RAM, so those connections will stay up. And if it does bounce, then it's no worse than it was before the cache.
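
The read-side filtering described above (keep only the highest-seqnum announcement per server) is straightforward; this is a hypothetical sketch, with the tuple shape and function name made up for illustration:

```python
# The cache may accumulate several announcements per server; on read,
# keep only the one with the highest sequence number for each server.
def latest_announcements(cached):
    """cached: iterable of (server_id, seqnum, announcement) tuples."""
    best = {}
    for server_id, seqnum, ann in cached:
        if server_id not in best or seqnum > best[server_id][0]:
            best[server_id] = (seqnum, ann)
    return {sid: ann for sid, (seq, ann) in best.items()}

anns = latest_announcements([
    ("srv1", 1, {"furl": "old"}),
    ("srv1", 3, {"furl": "new"}),   # higher seqnum wins
    ("srv2", 2, {"furl": "only"}),
])
# anns maps each server to its newest announcement only
```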


OK, here's my
"step 3 would be reading from the yaml file too, but still have exactly one introducer":

https://github.com/david415/tahoe-lafs/tree/read_intro_yaml_cache.0

I am not sure exactly how to implement cache purging or announcement replacement. The naive way I described isn't even implemented here... but to do that I could simply remove the cache file when we successfully connect to the introducer.


Here's the latest "step 2 is probably to have the introducer start writing to the yaml file, but not have anything which reads from it yet" :

https://github.com/tahoe-lafs/tahoe-lafs/pull/278

please review

I also have dev branches available for "step 3", here:
https://github.com/david415/tahoe-lafs/tree/read_intro_yaml_cache.2

but maybe I can "regenerate" that branch after "step 2" is landed... please do let us know.

Brian Warner <warner@lothar.com> commented 2016-05-10 20:04:21 +00:00
Owner

In b49b409/trunk:

Merge branch 'pr278': write (but don't read) YAML cache

refs ticket:2759
"step 3" -> <https://github.com/tahoe-lafs/tahoe-lafs/pull/281> please review.

09:36 < warner> step 4 is to add the override file (but still the only permissible connection type is "tcp")

"step 4" -> https://github.com/david415/tahoe-lafs/tree/2759.add_connections_yaml_config.0

Here in this minimal code change I've only added one feature:

  • a connections.yaml configuration file with a "storage" section which allows the user to specify storage nodes. This effectively overrides announcements from the introducer about those storage nodes.

please review.

Author

I think we've exhausted the purview of this ticket, which is specifically about using a separate Tub for each storage-server connection. Let's move the more general "cache server information and use it later, maybe with overrides" into a separate ticket: #2788

Since f5291b9 landed the per-server Tub, I'm closing this ticket. Work on PR281 and dawuud's other branches will continue in #2788.

warner added the r/fixed label 2016-05-11 23:58:28 +00:00
warner modified the milestone from undecided to 1.12.0 2016-05-11 23:58:28 +00:00
Reference: tahoe-lafs/trac#2759