support multiple storage backends, including amazon s3 #999

Closed
opened 2010-03-16 16:03:05 +00:00 by zooko · 208 comments
zooko commented 2010-03-16 16:03:05 +00:00
Owner

The focus of this ticket is (now) adapting the existing codebase to use multiple backends, rather than supporting any particular backend.
We already have one backend -- the filesystem backend -- which I think should be a plugin in the same sense that the others will be plugins (i.e.: other code in tahoe-lafs can interact with a filesystem plugin without caring very much about how or where it is storing its files -- otherwise it doesn't seem very extensible). If you accept this, then we'd need to figure out what a backend plugin should look like.
There is backend-independent logic in the current server implementation that we wouldn't want to duplicate in every other backend implementation. To address this, we could start by refactoring the existing code that reads or writes shares on disk, to use a local backend implementation supporting an IStorageProvider interface (probably a fairly simplistic filesystem-ish API).
(This involves changing the code in source:src/allmydata/storage/server.py that reads from local disk in its [_iter_share_files()]source:src/allmydata/storage/server.py@4164#L359 method, and also changing [storage/shares.py]source:src/allmydata/storage/shares.py@3762, [storage/immutable.py]source:src/allmydata/storage/immutable.py@3871#L39, and [storage/mutable.py]source:src/allmydata/storage/mutable.py@3815#L34 that write shares to local disk.)
At this point all the existing tests should still pass, since we haven't actually changed the behaviour.
Then we have to add the ability to configure new storage providers. This involves figuring out how to map user configuration choices to what actually happens when a node is started, and how the credentials needed to log into a particular storage backend should be specified. The skeletal RIStorageServer would instantiate its IStorageProvider based on what the user configured, and use it to write/read data, get statistics, and so on.
Naturally, all of this would require a decent amount of documentation and testing, too.
Once we have all of this worked out, the rest of this project (probably to be handled in other tickets) would be identifying what other backends we'd want in tahoe-lafs, then documenting, implementing, and testing them. We already have Amazon S3 and Rackspace as targets -- users of tahoe-lafs will probably have their own suggestions, and more backends will come up with more research.
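To make the proposed refactoring concrete, here is a minimal sketch of what a backend plugin interface and a disk implementation of it might look like. Only the names IStorageProvider and RIStorageServer come from this ticket; every method name and signature below is an illustrative assumption, not the API of any attached patch.

```python
# Hypothetical sketch only -- method names and signatures are assumptions.
import os
from zope.interface import Interface, implementer

class IStorageProvider(Interface):
    """The 'fairly simplistic filesystem-ish API' that a backend would expose
    to a backend-agnostic storage server."""

    def get_available_space():
        """Return an estimate of free space in bytes, or None if unknown."""

    def put_share(storage_index, shnum, data):
        """Store the bytes of one share."""

    def get_share(storage_index, shnum):
        """Return the bytes of one share, or raise an error if it is missing."""

@implementer(IStorageProvider)
class DiskBackend(object):
    """Keeps shares under a local directory, roughly as the current server does.
    `storage_index` is assumed to already be a filesystem-safe string here."""

    def __init__(self, storedir):
        self._storedir = storedir

    def get_available_space(self):
        st = os.statvfs(self._storedir)  # POSIX only; a sketch, not portable code
        return st.f_frsize * st.f_bavail

    def _sharepath(self, storage_index, shnum):
        return os.path.join(self._storedir, storage_index, str(shnum))

    def put_share(self, storage_index, shnum, data):
        sharedir = os.path.join(self._storedir, storage_index)
        if not os.path.isdir(sharedir):
            os.makedirs(sharedir)
        with open(self._sharepath(storage_index, shnum), "wb") as f:
            f.write(data)

    def get_share(self, storage_index, shnum):
        with open(self._sharepath(storage_index, shnum), "rb") as f:
            return f.read()
```

An S3 or Rackspace backend would then implement the same small set of methods against remote objects instead of local paths, and the backend-independent server logic would not need to change.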

tahoe-lafs added the
code-storage
major
enhancement
1.6.0
labels 2010-03-16 16:03:05 +00:00
tahoe-lafs added this to the undecided milestone 2010-03-16 16:03:05 +00:00
zooko commented 2010-03-16 16:03:35 +00:00
Author
Owner
See [the RAIC diagram](http://allmydata.org/~zooko/RAIC.png).
kevan commented 2010-03-24 04:52:20 +00:00
Author
Owner

(this is an email I sent to zooko a while ago with my thoughts on how this should be implemented:)

First, I'll summarize, to make sure that I understand what you had in
mind. Please correct me if you disagree with any of this.

The "redundant array of inexpensive clouds" idea means extending the
current storage server in tahoe-lafs to support storage backends that
aren't what we have now (writing shares to the local filesystem). Well
actually, the redundant array of inexpensive clouds idea means doing
that, then implementing plugins for popular existing cloud storage
services -- Amazon S3 and Rackspace are two that you've mentioned, but
there are probably others (if we end up going through with this, I'll
probably email tahoe-dev so I can get an idea of what else is out
there/what else people want to see supported, in addition to my own
research).

The benefit (or at least the benefit that seems clear to me from your
explanation -- perhaps there are others that are more obvious if you run
a big tahoe-lafs installation like allmydata.com, or if you're more
familiar with tahoe-lafs than I am) is decoupling the ability of a
tahoe-lafs node to store files from its physical filesystem. So if, say,
allmydata.com were to start running tahoe-lafs nodes using S3 as a
backend, and their grid was filled, they could create more space on the
grid by buying more S3 buckets, rather than upgrading physical servers
or adding new servers (I've never used S3, but I would bet that it is
easier to buy more S3 buckets than to upgrade servers). Or, if you
wanted to create a grid without purchasing a bunch of servers, you could
run a bunch of nodes on one machine (I was thinking vmware images, but
then I started wondering whether it was even necessary to have that
level of separation between tahoe-lafs nodes -- is it? but that's not
really on topic), each mapping to a different S3 bucket or buckets.

Am I missing anything (aside from more examples)?

It seems like -- at least for S3 -- you could already sort of do this.
There are projects like s3fs, which provide a FUSE interface to an
S3 bucket (though the last file released for it is more than a year old; it
seems like there should be other projects like that, though) (edit: this is actually wrong -- I just hadn't found the Google code project, which is at http://code.google.com/p/s3fs/). Using
that, you could mount your S3 bucket somewhere in the filesystem of your
server, then kajigger the basedir of the tahoe-lafs node so that it
rests in that area of the filesystem, or otherwise configure the
tahoe-lafs node to save files there. This requires more work than what
we'd eventually want with "redundant array of inexpensive clouds", of
course, and (depending on how well FUSE or other S3 interfaces play) may
only work on tahoe-lafs nodes running one unix or other, but if an
operator got it working, it seems like they'd have most of the benefit
outlined above without any further work on my/our part.

(not that I mind working on this, of course, but I figured it would be
worthwhile to mention that)

In any case, I think implementing this would come down to two basic parts.

The first part would be adapting the existing codebase to use multiple
backends.

We already have one backend -- the filesystem backend -- which I think
should be a plugin in the same sense that the others will be plugins
(i.e.: other code in tahoe-lafs can interact with a filesystem plugin
without caring very much about how or where it is storing its files --
otherwise it doesn't seem very extensible). If you accept this, then
we'd need to figure out what a backend plugin should look like. Maybe we
can make each plugin implement RIStorageServer, and leave it at that.
Then we might not need to do very much work on the existing server to
make it work with the rest of the (new) system. However, it's possible
that there is backend-independent logic in the current server
implementation that we wouldn't want to duplicate in every other backend
implementation. To address this, we could instead make a sort of
backend-agnostic storage server that implements RIStorageServer, then
make another interface for backends to implement, say IStorageProvider.
The skeletal RIStorageServer would instantiate its IStorageProvider
based on what the user configured, and use it to write/read data, get
statistics, and so on. Then IStorageProvider would be a fairly
simplistic filesystem-ish API.

The other part of preparation would be figuring out how to map user
configuration choices to what actually happens when a node is started.
Also, we'd want to figure out how (if?) we need to do anything special
with the credentials that users might need to log in to their storage
backend. I'll have a better idea of how I'd implement this once I look
at the way it works for other things that users configure.
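As a sketch of that wiring (the option names and the `backends` registry below are illustrative assumptions, not the configuration format of any patch here), the backend-agnostic server could be constructed from whatever the node's configuration selects:

```python
# Hypothetical sketch only -- names and config keys are assumptions.
class StorageServer(object):
    """Backend-agnostic server: everything share-storage-specific is delegated
    to whichever IStorageProvider the node's configuration selected."""

    def __init__(self, provider):
        self.provider = provider

    def remote_get_version(self):
        # e.g. advertise how much space the backend thinks it has
        return {"available-space": self.provider.get_available_space()}

def make_storage_server(config, backends):
    """`config` maps option names to strings (as read from the node's config
    file) and `backends` maps backend names to factory callables, e.g.
    {"disk": DiskBackend, "s3": S3Backend}."""
    backend_name = config.get("backend", "disk")
    try:
        factory = backends[backend_name]
    except KeyError:
        raise ValueError("unknown storage backend %r" % (backend_name,))
    # each factory decides which options (paths, credentials, ...) it needs
    return StorageServer(factory(config))
```

How the credentials for a remote backend get into `config` is exactly the open question described above.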

Naturally, all of this would require a decent amount of documentation
and testing, too.

(I'm open to other ideas, of course -- these are just what came to my mind)

Once we have all of this worked out, the rest of this project would be
identifying what other backends we'd want in tahoe-lafs, then
documenting, implementing, and testing those. We already have Amazon S3
and Rackspace as targets -- users of tahoe-lafs will probably have their
own suggestions, and more backends will come up with more research.

davidsarah commented 2010-03-31 16:48:51 +00:00
Author
Owner

Generalizing this to include support for multiple backends (since I don't think we want to do it in a way that would only support S3 and local disk).

tahoe-lafs changed title from amazon s3 backend to support multiple storage backends, including amazon s3 2010-03-31 16:48:51 +00:00
davidsarah commented 2010-03-31 16:50:14 +00:00
Author
Owner

fix typo

davidsarah commented 2010-03-31 17:17:57 +00:00
Author
Owner

Update description to reflect kevan's suggested approach.

arch_o_median commented 2011-03-22 05:34:38 +00:00
Author
Owner

Attachment storagemocktest01.darcs.patch (6013 bytes) added

arch_o_median commented 2011-03-25 20:41:34 +00:00
Author
Owner

Attachment sservertests.darcs.patch (9054 bytes) added

zooko commented 2011-04-06 20:41:29 +00:00
Author
Owner

Here is an incomplete patch for others (arc) to look at or improve.

zooko commented 2011-04-06 20:41:41 +00:00
Author
Owner

Attachment for-arctic.darcs.patch (28992 bytes) added

zooko commented 2011-04-06 21:00:11 +00:00
Author
Owner

Attachment for-arctic-2.darcs.patch (630265 bytes) added

arch_o_median commented 2011-06-24 20:32:00 +00:00
Author
Owner

Attachment workingonbackend01.darcs.patch (46623 bytes) added

Implements tests of read and write for the nullbackend

arch_o_median commented 2011-06-26 05:35:28 +00:00
Author
Owner

Attachment snapshotofbackendimplementation.darcs.patch (96411 bytes) added

just so I don't lose it all...

arch_o_median commented 2011-06-26 17:11:13 +00:00
Author
Owner

Attachment checkpoint3.darcs.patch (99326 bytes) added

another checkpoint

arch_o_median commented 2011-06-28 20:24:26 +00:00
Author
Owner

Attachment checkpoint4.darcs.patch (111935 bytes) added

arch_o_median commented 2011-07-05 04:29:25 +00:00
Author
Owner

Attachment checkpoint5.darcs.patch (124608 bytes) added

more precise tests in TestServerFSBackend

arch_o_median commented 2011-07-06 19:08:50 +00:00
Author
Owner

Attachment checkpoint6.darcs.patch (130227 bytes) added

backing myself up, some comments cleaned in interfaces, new tests in test_backends

arch_o_median commented 2011-07-06 20:07:36 +00:00
Author
Owner

Attachment checkpoint7.darcs.patch (130662 bytes) added

tiny change, now tests that allocated returns correct value

arch_o_median commented 2011-07-06 22:31:09 +00:00
Author
Owner

Attachment checkpoint8.darcs.patch (132043 bytes) added

The null backend test is useful for testing what happens when there's no effective limit on the backend

arch_o_median commented 2011-07-07 04:29:24 +00:00
Author
Owner

Attachment checkpoint9.darcs.patch (140783 bytes) added

checkpoint 9

arch_o_median commented 2011-07-07 17:45:22 +00:00
Author
Owner

Attachment checkpoint10.darcs.patch (144949 bytes) added

Completed coverage of remote_allocate_buckets

arch_o_median commented 2011-07-08 21:39:13 +00:00
Author
Owner

Attachment checkpoint11.darcs.patch (152965 bytes) added

(JACP) Just Another CheckPoint

arch_o_median commented 2011-07-10 19:55:45 +00:00
Author
Owner

Attachment consistentifysi.darcs.patch (161829 bytes) added

all storage_index (word tokens) to storageindex in storage/server.py

arch_o_median commented 2011-07-11 19:08:47 +00:00
Author
Owner

Attachment checkpoint12.darcs.patch (170830 bytes) added

no longer trying to mock FS in TestServerFSBackend

arch_o_median commented 2011-07-12 02:52:35 +00:00
Author
Owner

Attachment jacp13.darcs.patch (192631 bytes) added

arch_o_median commented 2011-07-12 06:11:10 +00:00
Author
Owner

Attachment jacp14.darcs.patch (205520 bytes) added

arch_o_median commented 2011-07-13 06:06:01 +00:00
Author
Owner

Attachment jacp15.darcs.patch (210813 bytes) added

arch_o_median commented 2011-07-13 06:07:08 +00:00
Author
Owner

OK, jacp15 contains a test that (almost) completely covers remote_allocate_buckets with the new backend. We should review this patch's contents before writing more tests.

davidsarah commented 2011-07-13 15:45:17 +00:00
Author
Owner

I'll review this.

tahoe-lafs modified the milestone from undecided to soon 2011-07-13 15:45:17 +00:00
zooko commented 2011-07-14 00:31:09 +00:00
Author
Owner

Attachment work-in-progress-on-tests-from-pair-programming-with-Zancas.darcs.patch (227416 bytes) added

zooko commented 2011-07-14 21:24:15 +00:00
Author
Owner

Attachment work-in-progress-2011-07-14_21_23.darcs.patch (235017 bytes) added

zooko commented 2011-07-15 19:16:16 +00:00
Author
Owner

Attachment work-in-progress-2011-07-15_19_15.darcs.patch (255454 bytes) added

zooko commented 2011-07-20 06:10:25 +00:00
Author
Owner

Attachment work-in-progress-2011-07-20_06_05Z.darcs.patch (283324 bytes) added

davidsarah commented 2011-07-20 16:55:00 +00:00
Author
Owner

Before going much further in relying on twisted.python.filepath.FilePath, can we think about the Unicode issue raised in ticket:1437#comment:118065? Currently, storage directories with Unicode paths are intended to be supported on Windows.

arch_o_median commented 2011-07-20 20:17:23 +00:00
Author
Owner

Replying to davidsarah:

> Before going much further in relying on twisted.python.filepath.FilePath, can we think about the Unicode issue raised in ticket:1437#comment:118065? Currently, storage directories with Unicode paths are intended to be supported on Windows.

OK... I guess that I should look into the twisted project's testing framework to determine what they know about this issue...

I'm currently snooping for leads here: http://twistedmatrix.com/trac/ticket/4736

arch_o_median commented 2011-07-20 20:23:04 +00:00
Author
Owner

Replying to [arch_o_median]comment:13:

> Replying to davidsarah:
>
> > Before going much further in relying on twisted.python.filepath.FilePath, can we think about the Unicode issue raised in ticket:1437#comment:118065? Currently, storage directories with Unicode paths are intended to be supported on Windows.
>
> OK... I guess that I should look into the twisted project's testing framework to determine what they know about this issue...
>
> I'm currently snooping for leads here: http://twistedmatrix.com/trac/ticket/4736

So it seems like there may be (but probably there is not) an issue regarding Windows path representations to users versus to "OS" APIs. I'm snooping here:

http://twistedmatrix.com/trac/ticket/2366

arch_o_median commented 2011-07-20 20:27:29 +00:00
Author
Owner

Replying to [arch_o_median]comment:14:

> Replying to [arch_o_median]comment:13:
>
> > Replying to davidsarah:
> >
> > > Before going much further in relying on twisted.python.filepath.FilePath, can we think about the Unicode issue raised in ticket:1437#comment:118065? Currently, storage directories with Unicode paths are intended to be supported on Windows.
> >
> > OK... I guess that I should look into the twisted project's testing framework to determine what they know about this issue...
> >
> > I'm currently snooping for leads here: http://twistedmatrix.com/trac/ticket/4736
>
> So it seems like there may be (but probably there is not) an issue regarding Windows path representations to users versus to "OS" APIs. I'm snooping here:
>
> http://twistedmatrix.com/trac/ticket/2366

(Is replying to myself bad form?) OK, so I can't tell how 2366 is (or is not) resolved. Should I get a twisted login so I can ask about it on that ticket... I await direction.

zooko commented 2011-07-21 19:52:44 +00:00
Author
Owner

I did some investigation about non-ASCII filename handling in filepath and in Tahoe-LAFS and posted my notes on [Twisted #5203](http://twistedmatrix.com/trac/ticket/5203).

arch_o_median commented 2011-07-22 07:03:25 +00:00
Author
Owner

Attachment jacp16Zancas20110722.darcs.patch (301848 bytes) added

arch_o_median commented 2011-07-22 20:32:40 +00:00
Author
Owner

Attachment jacp17Zancas20110723.darcs.patch (309840 bytes) added

arch_o_median commented 2011-07-23 03:19:05 +00:00
Author
Owner

Attachment jacp18Zancas20110723.darcs.patch (321159 bytes) added

arch_o_median commented 2011-07-25 20:39:34 +00:00
Author
Owner

After some chatting with zooko and warner in IRC, I've tentatively decided to use composition to inform the base Crawler object about the backend it is associated with. I'm not sure, but I think passing the whole `<backend>Core` object might be appropriate.
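For illustration, here is a minimal sketch of what that composition could look like; the class and method names are hypothetical stand-ins, not the real allmydata.storage.crawler code:

```python
# Hypothetical sketch of composition: the crawler holds a reference to the
# backend ("<backend>Core") object and asks it for shares, instead of
# walking the storage directory itself.
class ShareCrawler(object):
    def __init__(self, backend, statefp):
        self.backend = backend    # injected at construction time
        self.statefp = statefp    # where crawl progress is persisted

    def process_prefix(self, prefixdir):
        # the backend, not the crawler, knows how and where shares are stored
        for share in self.backend.get_shares_in_prefix(prefixdir):
            self.process_share(share)

    def process_share(self, share):
        raise NotImplementedError("subclasses such as a lease crawler override this")
```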

Zancas commented 2011-07-27 08:05:16 +00:00
Author
Owner

Attachment jacp19Zancas20110727.darcs.patch (347272 bytes) added

Zancas commented 2011-07-27 23:05:55 +00:00
Author
Owner

My current test suite contains several tests that Zooko calls "transparent box". I need to decide whether they are appropriate:

  1. remote_allocate_buckets populates incoming with shnum(s)
  2. an attempt to allocate the same share (same ss) does not create a new bucketwriter
  3. test allocated size
  4. together remote_write, remote_close, get_shares, and read_share_data behave

Since I am altering the location (from server to backend/core) of some of this functionality, and since I am altering the mechanism by which the filesystem is manipulated (to FilePath)... I think all of these tests are necessary.

It would be nice if the tests were designed to ensure the proper behavior independent of the underlying storage medium... but I think I need to assume a filesystem-like interface for at least (1, 2, and 4), and probably (3) as well...

Zancas commented 2011-07-28 07:23:47 +00:00
Author
Owner

Attachment jacp20Zancas20110728.darcs.patch (358454 bytes) added

Zancas commented 2011-07-29 02:31:31 +00:00
Author
Owner

I'm confused about leases. When I look at the constructor for an immutable share file in a 'pristine' repository (or in my latest version, for that matter), I see that in the "create" clause of the constructor a Python string holding a big-endian-packed 0 is used for the number of leases.

http://tahoe-lafs.org/trac/tahoe-lafs/browser/trunk/src/allmydata/storage/immutable.py#L63

This is confusing because in my test vector data (created some time ago) I have '1' as the initial number of leases. My guess is that I somehow got a bum test-vector value, but it'd be nice to hear from an architect that immutable share files really should start life with '0' leases!
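For reference, here is a small sketch of the header being described, assuming (as the linked line suggests) that a v1 immutable share file begins with three big-endian 32-bit fields -- version, share data length, and lease count:

```python
# Sketch of the v1 immutable share-file header layout under discussion;
# the ">LLL" packing mirrors the struct call at the linked line, where the
# third field (number of leases) starts out as 0 when the share is created.
import struct

def make_header(max_size):
    return struct.pack(">LLL", 1, min(2**32 - 2, max_size), 0)

def read_num_leases(header_bytes):
    version, data_length, num_leases = struct.unpack(">LLL", header_bytes[:12])
    return num_leases

assert read_num_leases(make_header(1000)) == 0  # freshly created share: no leases yet
```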

Zancas commented 2011-07-29 04:39:33 +00:00
Author
Owner

Attachment FinishFPWRTest_Zancas20110728.darcs.patch (375575 bytes) added

Patch passes allmydata.test.test_backends.TestServerAndFSBackend.test_write_and_read_share

zooko commented 2011-07-29 14:24:35 +00:00
Author
Owner

Cool! Will review.

Zancas commented 2011-07-29 23:54:48 +00:00
Author
Owner

Attachment readoldshpasses_Zancas20110729.darcs.patch (380678 bytes) added

TestServerAndFSBackend.test_read_old_share passes

Zancas commented 2011-07-30 00:59:39 +00:00
Author
Owner

Attachment TestServerandFSBackPasses_Zancas20110729.darcs.patch (392691 bytes) added

TestServerAndFSBackend passes all (3) tests

Zancas commented 2011-07-30 03:41:42 +00:00
Author
Owner

Attachment test_backendpasses_Zancas20110729.darcs.patch (399923 bytes) added

5 test_backend tests pass

tahoe-lafs added
n/a
and removed
1.6.0
labels 2011-07-30 04:23:07 +00:00
Zancas commented 2011-08-01 09:47:05 +00:00
Author
Owner

Attachment JACP20_Zancas20110801.darcs.patch (414808 bytes) added

uggg... bugs...

Zancas commented 2011-08-01 20:05:17 +00:00
Author
Owner

Attachment jacp22_test_backendpasses_Zancas20110802.darcs.patch (425317 bytes) added

the 5 tests pass... so what?

zancas commented 2011-08-29 16:45:15 +00:00
Author
Owner

Ticket [1465](http://tahoe-lafs.org/trac/tahoe-lafs/ticket/1465) more succinctly organizes the same code contained in these patches.

zooko commented 2011-09-01 03:33:27 +00:00
Author
Owner

Attachment backends-configuration-docs.darcs.patch (172619 bytes) added

zooko commented 2011-09-01 03:36:21 +00:00
Author
Owner

I added backends-configuration-docs.darcs.patch which contains documentation of the configuration options for the backends feature. I like Brian Warner's approach to development where he writes the docs first, even before the tests. (He writes tests second.) I encourage anyone working on this ticket to read (and possibly improve/fix/extend) these docs!

davidsarah commented 2011-09-02 01:44:21 +00:00
Author
Owner

Review of backends-configuration-docs.darcs.patch:

s3.rst:

  • Add a short introduction saying what S3 is and why anyone might want to use it.

  • It's a bit inconsistent that the value of the backend option is uppercase "S3", but the other option names are lowercase "s3_*". Also, I would make it "s3.*", since that's similar to the use of "." to group other related options.

  • Should the s3_url option include the scheme name, i.e. defaulting to http://s3.amazonaws.com ? We might want to support https in future (although there would be more to configure if we check certificates).

  • In the description of s3_max_space, copy the paragraph starting "This string contains a number" from disk.rst rather than referring to it.

  • "enabling ``s3_max_space`` causes an extra S3 usage query to be sent for each share upload, causing the upload process to run slightly slower and incur more S3 request charges."

Each space query could be amortized over several uploads, using an estimate of the used space in-between. (That wouldn't be accurate if there are several storage servers accessing the same bucket, but it would be accurate enough if the maximum number of such servers is limited.) Even if we don't implement that right away, I'm not sure that this performance issue needs to go in s3.rst.

disk.rst:

  • "Storing Shares in local filesystem" -> "Storing Shares on a Local Filesystem"

  • use backend = disk, not backend = local filesystem, and say that it is the default.

configuration.rst:

  • "Clients will be unaware of what backend is used by the server." -> "Clients need not be aware of which backend is used by a server."

  • "including how to limit the space that will be consumed" -> "including how to reserve a minimum amount of free space"

zooko commented 2011-09-02 04:47:04 +00:00
Author
Owner

I closed the subsidiary ticket #1465 as "fixed". The current patch set for this ticket as of this writing is attachment:20110829passespyflakes.darcs.patch from that ticket, plus attachment:backends-configuration-docs.darcs.patch.

davidsarah commented 2011-09-15 02:50:08 +00:00
Author
Owner

Attachment pluggable-backends-davidsarah.darcs.patch (213441 bytes) added

This is just a "flat" recording of my refactoring of pluggable backends. I'll do a better recording tomorrow, and explain the refactoring.

davidsarah commented 2011-09-17 02:13:03 +00:00
Author
Owner

Attachment pluggable-backends-davidsarah-v2.darcs.patch (305880 bytes) added

This is still just a flat recording (a lot more changes to tests were needed than I anticipated).

davidsarah commented 2011-09-19 20:33:29 +00:00
Author
Owner

Attachment pluggable-backends-davidsarah-v3.darcs.patch (346554 bytes) added

Bleeding edge pluggable backends code from David-Sarah. refs #999

davidsarah commented 2011-09-19 23:38:51 +00:00
Author
Owner

Attachment pluggable-backends-davidsarah-v4.darcs.patch (295378 bytes) added

Rerecording of pluggable-backends-davidsarah-v3.darcs.patch that should fix the darcs performance problem when applied to trunk.

davidsarah commented 2011-09-20 03:42:59 +00:00
Author
Owner

Attachment pluggable-backends-davidsarah-v5.darcs.patch (315537 bytes) added

Work-in-progress, includes fix to bug involving BucketWriter. refs #999

zancas commented 2011-09-20 17:04:34 +00:00
Author
Owner

Replying to davidsarah:

> Review of backends-configuration-docs.darcs.patch:
>
> s3.rst:
>
>   • Add a short introduction saying what S3 is and why anyone might want to use it.
>
>   • It's a bit inconsistent that the value of the backend option is uppercase "S3", but the other option names are lowercase "s3_*". Also, I would make it "s3.*", since that's similar to the use of "." to group other related options.
>
>   • Should the s3_url option include the scheme name, i.e. defaulting to http://s3.amazonaws.com ? We might want to support https in future (although there would be more to configure if we check certificates).
>
>   • In the description of s3_max_space, copy the paragraph starting "This string contains a number" from disk.rst rather than referring to it.
>
>   • "enabling ``s3_max_space`` causes an extra S3 usage query to be sent for each share upload, causing the upload process to run slightly slower and incur more S3 request charges."
>
>     Each space query could be amortized over several uploads, using an estimate of the used space in-between. (That wouldn't be accurate if there are several storage servers accessing the same bucket, but it would be accurate enough if the maximum number of such servers is limited.) Even if we don't implement that right away, I'm not sure that this performance issue needs to go in s3.rst.
>
> disk.rst:
>
>   • "Storing Shares in local filesystem" -> "Storing Shares on a Local Filesystem"
>
>   • use backend = disk, not backend = local filesystem, and say that it is the default.
>
> configuration.rst:
>
>   • "Clients will be unaware of what backend is used by the server." -> "Clients need not be aware of which backend is used by a server."
>
>   • "including how to limit the space that will be consumed" -> "including how to reserve a minimum amount of free space"

  • currently clients *are* aware of backend type.
davidsarah commented 2011-09-20 17:26:01 +00:00
Author
Owner

Attachment backends-configuration-docs-v2.darcs.patch (24325 bytes) added

docs: document the configuration options for the new backends scheme. This takes into account the preceding review comments and is rerecorded to avoid darcs context problems.

zooko commented 2011-09-20 17:44:31 +00:00
Author
Owner

Replying to [zancas]comment:28:

> • currently clients are aware of backend type.

They are? I don't think so. How would they find out about the backend type?

zooko commented 2011-09-20 17:51:49 +00:00
Author
Owner

backends-configuration-docs-v2.darcs.patch looks good to me. One thing I would change is to remove the "Issues" section about the costs of querying S3 objects and the effects on our crawler/lease-renewal scheme. I'm not sure that this branch will eventually land without a lease-checker implemented, so that part is making a statement that might be wrong. Also I'm not really sure the costs of querying S3 objects are worth mentioning. The [current S3 pricing](http://aws.amazon.com/s3/pricing/) has 10,000 GET requests for $0.01. Let's remove that documentation for now and add in documentation when we understand better what the actual limitations or costs will be.

davidsarah commented 2011-09-20 19:53:04 +00:00
Author
Owner

Replying to [zancas]comment:28:

> Replying to davidsarah:
>
> > configuration.rst:
> >
> >   • "Clients will be unaware of what backend is used by the server." -> "Clients need not be aware of which backend is used by a server."
>
> • currently clients are aware of backend type.

The doc meant that client nodes need not be aware of backend type. Although the current hack to wire up a StorageServer to a backend in pluggable-backends-davidsarah-v5.darcs.patch is in allmydata/client.py, that code isn't actually run by clients, it is run only when setting up a storage server.

davidsarah commented 2011-09-20 19:57:02 +00:00
Author
Owner

Replying to [davidsarah]comment:31:

> Replying to [zancas]comment:28:
>
> > Replying to davidsarah:
> >
> > > configuration.rst:
> > >
> > >   • "Clients will be unaware of what backend is used by the server." -> "Clients need not be aware of which backend is used by a server."
> >
> > • currently clients are aware of backend type.
>
> The doc meant that client nodes need not be aware of backend type.

Ugh, I should never use the term "node" :-/. I meant the code that acts as a storage protocol client.

davidsarah commented 2011-09-21 03:21:58 +00:00
Author
Owner

Attachment pluggable-backends-davidsarah-v6.darcs.patch (329873 bytes) added

v6. Tests are looking in much better shape now -- still some problems with path vs FilePath and other stale assumptions in the test framework, but the disk backend basically works now.

davidsarah commented 2011-09-21 15:54:50 +00:00
Author
Owner

Attachment trace-exceptions-option.darcs.patch (19736 bytes) added

Add --trace-exceptions option to trace raised exceptions on stderr. refs #999

davidsarah commented 2011-09-21 18:54:37 +00:00
Author
Owner

Attachment pluggable-backends-davidsarah-v7.darcs.patch (368250 bytes) added

Latest snapshot, more tests passing.

zooko commented 2011-09-21 21:12:15 +00:00
Author
Owner

Attachment snapshot-backend-config-parse.patch (6145 bytes) added

snapshot of work in progress

davidsarah commented 2011-09-21 22:29:14 +00:00
Author
Owner

Attachment pluggable-backends-davidsarah-v8.darcs.patch (384096 bytes) added

v8 snapshot. More tests pass.

davidsarah commented 2011-09-22 05:11:43 +00:00
Author
Owner

Attachment pluggable-backends-davidsarah-v9.darcs.patch (420760 bytes) added

Still more test fixes.

davidsarah commented 2011-09-22 15:40:59 +00:00
Author
Owner

Josh wrote, re: pluggable-backends-davidsarah-v8.darcs.patch:

> I think the test_crawlers failure stems from ShareCrawler being passed a FilePath object in its constructor where it expects a string literal to use in an old-style call to open (specifically in its "load_state" method). I'm not certain yet, but I think I'll stop here for the night.

No, load_state uses `pickle.loads(self.statefp.getContent())` which is correct. The state handling is a red herring for the test_crawlers failure, I think.
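For context, a minimal sketch of the FilePath-based state handling being described; this is not the real ShareCrawler code, and only the `pickle.loads(self.statefp.getContent())` call is quoted from it:

```python
# Illustrative sketch: `statefp` is a twisted.python.filepath.FilePath pointing
# at the crawler's pickled state file, so no open() on a plain path string is
# needed anywhere.
import pickle
from twisted.python.filepath import FilePath

class CrawlerState(object):
    def __init__(self, statefp):
        self.statefp = statefp  # a FilePath, not a string
        self.state = {}

    def load_state(self):
        if self.statefp.exists():
            self.state = pickle.loads(self.statefp.getContent())

    def save_state(self):
        # setContent() writes to a sibling temporary file and then renames it
        self.statefp.setContent(pickle.dumps(self.state))
```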

davidsarah commented 2011-09-22 15:48:02 +00:00
Author
Owner

In v9, allmydata.test.test_storage.LeaseCrawler.test_basic is hanging due to an infinite recursion in pickle.py. Use

```
bin/tahoe --trace-exceptions debug trial --rterror allmydata.test.test_storage.LeaseCrawler.test_basic
```

(with trace-exceptions-option.darcs.patch applied) to see the recursion. I'm on the case...
davidsarah commented 2011-09-22 16:09:36 +00:00
Author
Owner

Replying to davidsarah:

> In v9, allmydata.test.test_storage.LeaseCrawler.test_basic is hanging due to an infinite recursion in pickle.py.

That was another red herring; there was an innocuous exception in pickle.py that was happening in each iteration of whatever other code is livelocking.

davidsarah commented 2011-09-22 18:38:53 +00:00
Author
Owner

Attachment pluggable-backends-davidsarah-v10.darcs.patch (435928 bytes) added

Fix most of the crawler tests. Reinstate the cancel_lease methods of ImmutableDiskShare and MutableDiskShare, since they are needed for lease expiry. refs #999

davidsarah commented 2011-09-23 04:20:00 +00:00
Author
Owner

Attachment pluggable-backends-davidsarah-v11.darcs.patch (498823 bytes) added

Includes a fix for iterating over a dict while removing entries from it in mutable/publish.py, some cosmetic changes, and a start on the S3 backend.

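A minimal illustration of the dict-mutation bug class mentioned above (hypothetical code, not the actual mutable/publish.py): deleting entries from a dict while iterating over it typically raises RuntimeError, so the loop should iterate over a snapshot of the keys.

```
def prune_buggy(outstanding):
    for key in outstanding:           # RuntimeError: dict changed size during iteration
        if outstanding[key] is None:
            del outstanding[key]

def prune_fixed(outstanding):
    for key in list(outstanding):     # iterate over a copy of the keys
        if outstanding[key] is None:
            del outstanding[key]

d = {"sh0": None, "sh1": "in flight", "sh2": None}
prune_fixed(d)
assert d == {"sh1": "in flight"}
```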
davidsarah commented 2011-09-23 20:59:31 +00:00
Author
Owner

Attachment pluggable-backends-davidsarah-v12.darcs.patch (528987 bytes) added

Updates to null and S3 backends.

zancas commented 2011-09-27 06:37:30 +00:00
Author
Owner

Attachment passtest_status_bad_disk_stats.darcs.patch (512142 bytes) added

contains changes in v12

davidsarah commented 2011-09-27 07:47:54 +00:00
Author
Owner

Attachment pluggable-backends-davidsarah-v13.darcs.patch (555578 bytes) added

Includes fixes to test_status_bad_disk_stats and test_no_st_blocks in test_storage.py, and more work on the S3 backend.

davidsarah commented 2011-09-27 07:48:49 +00:00
Author
Owner

Attachment pluggable-backends-davidsarah-v14.darcs.patch (602336 bytes) added

Work in progress for asyncifying the backend interface (necessary to call txaws methods that return Deferreds). This is incomplete so lots of tests fail. refs #999

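A rough sketch of what "asyncifying" a backend method means here (invented class and method names, not the real interface): the method returns a Deferred rather than a plain value, so a txaws-based backend, whose S3 calls already return Deferreds, can sit behind the same API as the disk backend.

```
from twisted.internet import defer

class SyncDiskBackendSketch(object):
    def get_shares(self, storageindex):
        # synchronous style: the value is returned directly
        return ["share0", "share1"]

class AsyncS3BackendSketch(object):
    def _list_objects(self, prefix):
        # stand-in for a txaws call; the real one would talk to S3 and
        # fire the Deferred later
        return defer.succeed(["%s/0" % prefix, "%s/1" % prefix])

    def get_shares(self, storageindex):
        d = self._list_objects(prefix=storageindex)
        d.addCallback(lambda keys: ["share for %s" % k for k in keys])
        return d

def get_shares_from_any_backend(backend, storageindex):
    # maybeDeferred lets callers treat both styles uniformly during the
    # transition from synchronous to Deferred-returning methods
    return defer.maybeDeferred(backend.get_shares, storageindex)
```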
davidsarah commented 2011-09-28 00:09:58 +00:00
Author
Owner

In v13, test_storage.LeaseCrawler.test_share_corruption fails. However, this is a test that is known to have race conditions -- it used to fail when logging was enabled (#923), and we tried to fix that in changeset:3b1b0147a867759c, but in a way that in retrospect didn't really address the cause of the race condition. The problem is that it's trying to check for a particular instantaneous state of the lease crawler while it is running, which is inherently race-prone.

I suggest we not worry about this test for the current LAE iteration.

davidsarah commented 2011-09-28 01:45:53 +00:00
Author
Owner

Attachment pluggable-backends-davidsarah-v13a.darcs.patch (544869 bytes) added

This does not include the asyncification changes from v14, but does include a couple of fixes for failures in test_system.

davidsarah commented 2011-09-28 05:34:24 +00:00
Author
Owner

Attachment pluggable-backends-davidsarah-v15.darcs.patch (703824 bytes) added

bleeding edge of asyncification work

zancas commented 2011-09-28 09:24:27 +00:00
Author
Owner

Huh... weird, I can't apply v15...

```
0 /home/arc/sandbox/working 550 $ darcs apply pluggable-backends-davidsarah-v15.darcs.patch
darcs failed: Bad patch bundle!
2 /home/arc/sandbox/working 551 $
```

davidsarah commented 2011-09-29 04:19:16 +00:00
Author
Owner

Attachment pluggable-backends-davidsarah-v16.darcs.patch (841151 bytes) added

Latest asyncified patch. About 90% of tests pass.

davidsarah commented 2011-09-29 04:26:51 +00:00
Author
Owner

Attachment s3-v13a-to-v16.diff (26727 bytes) added

Differences, just in the S3 backend, between v13a and v16.

zooko commented 2011-09-29 05:25:30 +00:00
Author
Owner

Attachment split_s3share_classes_and_prune_unused_methods.diff (10735 bytes) added

zooko commented 2011-09-29 05:53:00 +00:00
Author
Owner

Attachment split_s3share_classes_and_prune_unused_methods.dpatch (20675 bytes) added

zooko commented 2011-09-29 06:14:14 +00:00
Author
Owner

Attachment configure-backends-incomplete.dpatch (26608 bytes) added

davidsarah commented 2011-09-29 08:24:10 +00:00
Author
Owner

Attachment pluggable-backends-davidsarah-v17.darcs.patch (861252 bytes) added

Completes the splitting of IStoredShare into IShareForReading and IShareForWriting. Does not include configuration changes.

davidsarah commented 2011-09-29 17:14:08 +00:00
Author
Owner

Attachment test_backends.py (7734 bytes) added

Snapshot of test_backends.py in David-Sarah's tree

davidsarah commented 2011-09-29 18:33:41 +00:00
Author
Owner

Attachment pluggable-backends-davidsarah-v18.darcs.patch (879876 bytes) added

Includes backend configuration (rerecorded from zooko's patch), and other minor fixes.

zooko commented 2011-09-29 20:29:16 +00:00
Author
Owner

Attachment asyncify-tests.dpatch (13774 bytes) added

davidsarah commented 2011-09-29 21:27:35 +00:00
Author
Owner

Attachment pluggable-backends-davidsarah-v19.darcs.patch (919584 bytes) added

Include missing files for real and mock S3 backends. Also some fixes to tests, scripts/debug.py, and config parsing.

david-sarah@jacaranda.org commented 2011-09-29 23:51:43 +00:00
Author
Owner

In [5373/ticket999-S3-backend]:

test_storage.py: only run test_large_share on the disk backend. (It will wedge your machine if run on the S3 backend with MockS3Bucket.) refs #999
david-sarah@jacaranda.org commented 2011-09-29 23:51:44 +00:00
Author
Owner

In [5374/ticket999-S3-backend]:

test/mock_s3.py: fix a typo. refs #999
david-sarah@jacaranda.org commented 2011-09-29 23:51:45 +00:00
Author
Owner

In [5375/ticket999-S3-backend]:

Make sure that the statedir is created before trying to use it. refs #999
david-sarah@jacaranda.org commented 2011-09-29 23:58:30 +00:00
Author
Owner

In [5376/ticket999-S3-backend]:

s3_bucket.py: fix an incorrect argument signature for list_objects. refs #999
david-sarah@jacaranda.org commented 2011-09-30 00:15:11 +00:00
Author
Owner

In [5379/ticket999-S3-backend]:

mock_s3.py: fix bug in MockS3Error constructor. refs #999
david-sarah@jacaranda.org commented 2011-09-30 00:15:11 +00:00
Author
Owner

In [5380/ticket999-S3-backend]:

test_storage.py: Server class uses ShouldFailMixin. refs #999
david-sarah@jacaranda.org commented 2011-09-30 02:19:02 +00:00
Author
Owner

In [5382/ticket999-S3-backend]:

Add dummy lease methods to immutable S3 share objects. refs #999
zooko commented 2011-09-30 06:05:43 +00:00
Author
Owner

Attachment debug-mutable-hash-validation-failure.dpatch (35380 bytes) added

david-sarah@jacaranda.org commented 2011-09-30 21:28:44 +00:00
Author
Owner

In [5387/ticket999-S3-backend]:

s3/immutable.py: minor simplification in ImmutableS3ShareForReading. refs #999
david-sarah@jacaranda.org commented 2011-10-04 01:12:02 +00:00
Author
Owner

In [5388/ticket999-S3-backend]:

Add a share._get_filepath() method used by tests to get the FilePath for a share, rather than accessing the _home attribute. refs #999
david-sarah@jacaranda.org commented 2011-10-04 01:12:05 +00:00
Author
Owner

In [5391/ticket999-S3-backend]:

s3/s3_common.py: remove incorrect 'self' arguments from interface methods in IS3Bucket. refs #999
david-sarah@jacaranda.org commented 2011-10-04 01:12:05 +00:00
Author
Owner

In [5392/ticket999-S3-backend]:

More asyncification of tests. Also fix some bugs due to capture of slots in for loops. refs #999
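The "capture of slots in for loops" and "miscapture" bugs referred to in several of these changes are the classic Python late-binding closure problem; an illustration (hypothetical code, not the project's):

```
def make_readers_buggy(shnums):
    # every lambda closes over the same 'shnum' variable, so all of them
    # see its final value once the loop has finished
    return [lambda: "reading share %d" % shnum for shnum in shnums]

def make_readers_fixed(shnums):
    # binding the current value as a default argument captures it now
    return [lambda shnum=shnum: "reading share %d" % shnum for shnum in shnums]

assert [f() for f in make_readers_buggy([0, 1, 2])] == ["reading share 2"] * 3
assert [f() for f in make_readers_fixed([0, 1, 2])] == [
    "reading share 0", "reading share 1", "reading share 2"]
```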
davidsarah commented 2011-10-07 08:25:56 +00:00
Author
Owner

Attachment pluggable-backends-davidsarah-v20.darcs.patch (1101695 bytes) added

Fix various bugs and tests. v20

davidsarah commented 2011-10-07 15:44:01 +00:00
Author
Owner

Re: pluggable-backends-davidsarah-v20.darcs.patch, I made a mistake in recording it that will cause a conflict with the ticket999-S3-backend branch. I'll attach a fixed version.

david-sarah@jacaranda.org commented 2011-10-07 19:39:49 +00:00
Author
Owner

In [5400/ticket999-S3-backend]:

Add a get_share method to IShareSet, to get a specific share. refs #999
david-sarah@jacaranda.org commented 2011-10-07 19:39:50 +00:00
Author
Owner

In [5402/ticket999-S3-backend]:

Add a _get_sharedir() method on IShareSet, implemented by the disk and mock S3 backends, for use by tests. refs #999
david-sarah@jacaranda.org commented 2011-10-07 19:39:51 +00:00
Author
Owner

In [5403/ticket999-S3-backend]:

Fix some miscapture bugs. refs #999
david-sarah@jacaranda.org commented 2011-10-07 19:39:52 +00:00
Author
Owner

In [5404/ticket999-S3-backend]:

Fix a duplicate umid. refs #999
david-sarah@jacaranda.org commented 2011-10-07 19:39:53 +00:00
Author
Owner

In [5405/ticket999-S3-backend]:

Remove unused load method and _loaded attribute from s3/mutable.py. refs #999
david-sarah@jacaranda.org commented 2011-10-07 19:39:54 +00:00
Author
Owner

In [5406/ticket999-S3-backend]:

Remove an inapplicable comment. refs #999
david-sarah@jacaranda.org commented 2011-10-07 19:39:55 +00:00
Author
Owner

In [5407/ticket999-S3-backend]:

Make sure that get_size etc. work correctly on an ImmutableS3ShareForWriting after it has been closed. Also simplify by removing the _end_offset attribute. refs #999
david-sarah@jacaranda.org commented 2011-10-07 19:39:56 +00:00
Author
Owner

In [5408/ticket999-S3-backend]:

unlink() on share objects should be idempotent. refs #999
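A sketch of what an idempotent unlink() means for a FilePath-backed share (hypothetical class, not the actual disk backend code): removing an already-removed share is a no-op rather than an error.

```
import errno
from twisted.python.filepath import FilePath

class DiskShareSketch(object):
    def __init__(self, home):
        self._home = FilePath(home)

    def unlink(self):
        try:
            self._home.remove()
        except OSError as e:
            if e.errno != errno.ENOENT:
                raise               # only "no such file" is swallowed

share = DiskShareSketch("/tmp/example-share")   # example path only
share.unlink()
share.unlink()                                   # second call is harmless
```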
david-sarah@jacaranda.org commented 2011-10-07 19:39:57 +00:00
Author
Owner

In [5409/ticket999-S3-backend]:

Partially asyncify crawlers. refs #999
david-sarah@jacaranda.org commented 2011-10-07 19:39:58 +00:00
Author
Owner

In [5410/ticket999-S3-backend]:

More miscapture fixes. refs #999
david-sarah@jacaranda.org commented 2011-10-07 19:39:59 +00:00
Author
Owner

In [5411/ticket999-S3-backend]:

Ensure that helper classes are not treated as test cases. Also fix a missing mixin. refs #999
david-sarah@jacaranda.org commented 2011-10-07 19:39:59 +00:00
Author
Owner

In [5412/ticket999-S3-backend]:

disk backend: size methods should no longer return Deferreds. refs #999
david-sarah@jacaranda.org commented 2011-10-07 19:59:24 +00:00
Author
Owner

In [5414/ticket999-S3-backend]:

test_storage.py: fix a trivial bug in LeaseCrawler.test_unpredictable_future. refs #999
davidsarah commented 2011-10-07 20:02:16 +00:00
Author
Owner

Please ignore pluggable-backends-davidsarah-v20.darcs.patch; the equivalent of that patch is on the ticket999-S3-backend branch now.

david-sarah@jacaranda.org commented 2011-10-09 23:25:13 +00:00
Author
Owner

In [5415/ticket999-S3-backend]:

storage/backends/disk/mutable.py: put back a correct assertion that had been disabled. storage/base.py: fix the bug that was causing that assertion to fail. refs #999
davidsarah commented 2011-10-10 00:22:44 +00:00
Author
Owner

[5415/ticket999-S3-backend] fixes all but one of the tests in test_mutable.py.

david-sarah@jacaranda.org commented 2011-10-10 18:15:16 +00:00
Author
Owner

In [5416/ticket999-S3-backend]:

test_storage.py: move some tests that were not applicable to all backends out of ServerTest. refs #999
david-sarah@jacaranda.org commented 2011-10-10 19:19:47 +00:00
Author
Owner

In [5417/ticket999-S3-backend]:

Instrument some assertions to report the failed values. refs #999
david-sarah@jacaranda.org commented 2011-10-10 20:07:49 +00:00
Author
Owner

In [5419/ticket999-S3-backend]:

interfaces.py: resolve conflicts with trunk. refs #999
david-sarah@jacaranda.org commented 2011-10-10 20:10:57 +00:00
Author
Owner

In [5421/ticket999-S3-backend]:

interfaces.py: resolve another conflict with trunk. refs #999
david-sarah@jacaranda.org commented 2011-10-10 20:48:02 +00:00
Author
Owner

In [5422/ticket999-S3-backend]:

test_download.py: fix test_download_failover (it should tolerate non-existing shares in _clobber_most_shares). refs #999
david-sarah@jacaranda.org commented 2011-10-10 20:48:02 +00:00
Author
Owner

In [5423/ticket999-S3-backend]:

Null backend: implement unlink and readv more correctly. refs #999
david-sarah@jacaranda.org commented 2011-10-10 20:48:03 +00:00
Author
Owner

In [5424/ticket999-S3-backend]:

Make unlink() on share objects consistently idempotent. refs #999
david-sarah@jacaranda.org commented 2011-10-10 23:17:29 +00:00
Author
Owner

In [5425/ticket999-S3-backend]:

S3 backend: move the implementation of list_objects from s3_bucket.py to s3_common.py, making s3_bucket.py simpler and list_objects easier to test independently. refs #999
david-sarah@jacaranda.org commented 2011-10-10 23:17:30 +00:00
Author
Owner

In [5426/ticket999-S3-backend]:

Add fileutil.fp_list(fp) which is like fp.children(), but returns [] in case of a directory that does not exist. Use it to simplify the disk backend and mock S3 bucket implementations. refs #999
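A plausible sketch of the fp_list() helper described in this change (the real one lives in allmydata.util.fileutil and may differ in detail):

```
from twisted.python.filepath import FilePath, UnlistableError

def fp_list(fp):
    # like fp.children(), but a missing (or unlistable) directory yields
    # an empty list instead of raising
    try:
        return fp.children()
    except (UnlistableError, OSError):
        return []

# usage: iterate shares in a bucket directory that may not exist yet
# for child in fp_list(FilePath("storage/shares/aa/<storageindex>")):
#     ...
```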
david-sarah@jacaranda.org commented 2011-10-10 23:17:31 +00:00
Author
Owner

In [5427/ticket999-S3-backend]:

test/mock_s3.py: fix a bug that was causing us to use the wrong directory for share files. refs #999
david-sarah@jacaranda.org commented 2011-10-11 00:32:41 +00:00
Author
Owner

In [5429/ticket999-S3-backend]:

test_storage.py: make MutableServer.test_leases pass. refs #999
david-sarah@jacaranda.org commented 2011-10-11 04:44:30 +00:00
Author
Owner

In [5430/ticket999-S3-backend]:

test_storage.py: fix a bug introduced by asyncification of test_allocate. refs #999
david-sarah@jacaranda.org commented 2011-10-11 04:54:21 +00:00
Author
Owner

In [5431/ticket999-S3-backend]:

test_storage.py: fix a typo in test_null_backend. refs #999
david-sarah@jacaranda.org commented 2011-10-11 04:59:26 +00:00
Author
Owner

In [5432/ticket999-S3-backend]:

test_storage.py: fix a trivial bug in MDMFProxies.test_write. refs #999
david-sarah@jacaranda.org commented 2011-10-11 05:16:34 +00:00
Author
Owner

In [5433/ticket999-S3-backend]:

test_storage.py: fix asyncification of three tests in MDMFProxies. refs #999
david-sarah@jacaranda.org commented 2011-10-11 05:20:45 +00:00
Author
Owner

In [5434/ticket999-S3-backend]:

Fix two pyflakes warnings about unused imports. refs #999
david-sarah@jacaranda.org commented 2011-10-12 21:47:44 +00:00
Author
Owner

In [5445/ticket999-S3-backend]:

test_storage.py: cosmetics. refs #999
david-sarah@jacaranda.org commented 2011-10-12 21:47:47 +00:00
Author
Owner

In [5446/ticket999-S3-backend]:

test_storage.py: fix test failures in MDMFProxies. refs #999
david-sarah@jacaranda.org commented 2011-10-12 21:47:49 +00:00
Author
Owner

In [5447/ticket999-S3-backend]:

Move configuration of each backend into the backend itself. refs #999
david-sarah@jacaranda.org commented 2011-10-12 21:47:50 +00:00
Author
Owner

In [5448/ticket999-S3-backend]:

util/deferredutil.py: remove unneeded utility functions. refs #999
david-sarah@jacaranda.org commented 2011-10-12 21:47:52 +00:00
Author
Owner

In [5449/ticket999-S3-backend]:

test_storage.py: Move test_seek to its own class, since it is independent of the backend. Also move test_reserved_space to ServerWithDiskBackend, since reserved_space is specific to that backend. refs #999
david-sarah@jacaranda.org commented 2011-10-12 21:47:53 +00:00
Author
Owner

In [5450/ticket999-S3-backend]:

test_storage.py: add a test that we can create a share, exercising the backend's get_share and get_shares methods. This may explicate particular kinds of backend failure better than the existing tests. refs #999
david-sarah@jacaranda.org commented 2011-10-12 21:47:54 +00:00
Author
Owner

In [5451/ticket999-S3-backend]:

test_storage.py: asyncify some more tests, and fix create methods. refs #999
david-sarah@jacaranda.org commented 2011-10-12 21:47:56 +00:00
Author
Owner

In [5452/ticket999-S3-backend]:

S3 backend: fix corruption advisories and listing of shares for mock S3 bucket. refs #999
david-sarah@jacaranda.org commented 2011-10-12 21:47:57 +00:00
Author
Owner

In [5453/ticket999-S3-backend]:

no_network.py: fix delete_all_shares. refs #999
david-sarah@jacaranda.org commented 2011-10-12 21:47:58 +00:00
Author
Owner

In [5454/ticket999-S3-backend]:

test_download.py: fix and reenable Corruption.test_each_byte. Add a comment noting that catalog_detection = True has bitrotted. refs #999
david-sarah@jacaranda.org commented 2011-10-12 23:43:38 +00:00
Author
Owner

In [5455/ticket999-S3-backend]:

test_storage.py: add test_write_and_read_share and test_read_old_share originally from test_backends.py. refs #999
david-sarah@jacaranda.org commented 2011-10-12 23:43:40 +00:00
Author
Owner

In [5456/ticket999-S3-backend]:

Remove test_backends.py, since all its tests are now redundant with tests in test_storage.py or test_client.py. refs #999
david-sarah@jacaranda.org commented 2011-10-12 23:43:41 +00:00
Author
Owner

In [5457/ticket999-S3-backend]:

Null backend: make NullShareSet inherit from ShareSet, which should implement readv correctly. Remove its implementation of testv_and_readv_and_writev since the one from ShareSet should work (if it doesn't that would be a separate bug). refs #999
david-sarah@jacaranda.org commented 2011-10-12 23:43:42 +00:00
Author
Owner

In [5458/ticket999-S3-backend]:

S3 backend: correct list_objects to list_all_objects in IS3Bucket. refs #999
david-sarah@jacaranda.org commented 2011-10-12 23:43:42 +00:00
Author
Owner

In [5459/ticket999-S3-backend]:

storage/backends/base.py: allow readv to work for both mutable and immutable shares. refs #999
david-sarah@jacaranda.org commented 2011-10-13 03:53:24 +00:00
Author
Owner

In [5461/ticket999-S3-backend]:

test_storage.py: test_read_old_share and test_write_and_read_share should only expect to be able to read input share data. refs #999
david-sarah@jacaranda.org commented 2011-10-13 03:53:25 +00:00
Author
Owner

In [5462/ticket999-S3-backend]:

S3 backend: keep track of incoming shares, so that the storage server can avoid creating BucketWriters for shnums that have an incoming share. refs #999
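An illustrative sketch of the incoming-share bookkeeping described here (invented names, not the actual S3 backend classes): track (storageindex, shnum) pairs that are in flight so allocate_buckets() does not hand out a second BucketWriter for the same shnum.

```
class IncomingShareTracker(object):
    def __init__(self):
        self._incoming = set()     # set of (storageindex, shnum) pairs

    def start_incoming(self, storageindex, shnum):
        self._incoming.add((storageindex, shnum))

    def finish_incoming(self, storageindex, shnum):
        self._incoming.discard((storageindex, shnum))

    def allocatable_shnums(self, storageindex, requested_shnums):
        # only shnums without an in-flight upload get a new BucketWriter
        return set(sh for sh in requested_shnums
                   if (storageindex, sh) not in self._incoming)

tracker = IncomingShareTracker()
tracker.start_incoming("si1", 0)
assert tracker.allocatable_shnums("si1", [0, 1, 2]) == set([1, 2])
```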
david-sarah@jacaranda.org commented 2011-10-13 05:08:45 +00:00
Author
Owner

In [5463/ticket999-S3-backend]:

docs/backends/S3.rst: note that storage servers should use different buckets. refs #999
david-sarah@jacaranda.org commented 2011-10-13 22:30:09 +00:00
Author
Owner

In [5464/ticket999-S3-backend]:

test_storage.py: fix a typo (d vs d2) in test_remove_incoming. refs #999
david-sarah@jacaranda.org commented 2011-10-13 23:30:32 +00:00
Author
Owner

In [5465/ticket999-S3-backend]:

test_storage: rename the two test_leases methods to ServerTest.test_immutable_leases and MutableServer.test_mutable_leases. refs #999
david-sarah@jacaranda.org commented 2011-10-13 23:30:33 +00:00
Author
Owner

In [5466/ticket999-S3-backend]:

test_storage: fix some typos introduced when asyncifying test_immutable_leases. refs #999
david-sarah@jacaranda.org commented 2011-10-13 23:30:33 +00:00
Author
Owner

In [5467/ticket999-S3-backend]:

test_storage: in test_no_st_blocks, print the rec 'dict' if checking one of its fields fails. refs #999
david-sarah@jacaranda.org commented 2011-10-13 23:37:20 +00:00
Author
Owner

In [5468/ticket999-S3-backend]:

test_storage.py: remove some redundant coercions to bool. refs #999
david-sarah@jacaranda.org commented 2011-10-13 23:44:17 +00:00
Author
Owner

In [5469/ticket999-S3-backend]:

test_storage.py: print more info when checks fail. refs #999
david-sarah@jacaranda.org commented 2011-10-14 03:01:00 +00:00
Author
Owner

In [5470/ticket999-S3-backend]:

test_storage.py: fix two bugs in test_no_st_blocks -- the _cleanup function was being called too early, and we needed to treat directories as using no space in order for the configured-sharebytes == configured-diskbytes check to be correct. refs #999
david-sarah@jacaranda.org commented 2011-10-14 06:21:15 +00:00
Author
Owner

In [5471/ticket999-S3-backend]:

Undo partial asyncification of crawlers, and enable crawlers only for the disk backend. refs #999
david-sarah@jacaranda.org commented 2011-10-16 01:43:11 +00:00
Author
Owner

In [5472/ticket999-S3-backend]:

test_storage.py: fix a bug in _backdate_leases (it was returning too early). refs #999
david-sarah@jacaranda.org commented 2011-10-16 03:53:15 +00:00
Author
Owner

In [5473/ticket999-S3-backend]:

scripts/debug.py: fix stale code in describe_share that had not been updated for changes in share interfaces. refs #999
david-sarah@jacaranda.org commented 2011-10-16 03:53:16 +00:00
Author
Owner

In [5474/ticket999-S3-backend]:

Disk backend: make sure that disk shares with a storageindex of None (as sometimes created by test code) can be printed using __repr__. refs #999
david-sarah@jacaranda.org commented 2011-10-16 04:45:11 +00:00
Author
Owner

In [5475/ticket999-S3-backend]:

Change accesses of ._sharehomedir on a disk shareset to _get_sharedir(). refs #999
david-sarah@jacaranda.org commented 2011-10-16 04:45:12 +00:00
Author
Owner

In [5476/ticket999-S3-backend]:

test_storage.py: cleanup to style of test_limited_history to match other tests. refs #999
david-sarah@jacaranda.org commented 2011-10-18 06:47:04 +00:00
Author
Owner

In [5477/ticket999-S3-backend]:

Change IShareSet.get_shares[_synchronous] to return a pair (list of share objects, set of corrupt shnums). This is necessary to allow crawlers to record but skip over corrupt shares. This patch also changes the behaviour of storage servers to ignore corrupt shares on read, which may or may not be what we want. Note that the S3 backend does not yet report corrupt shares. refs #999
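A hypothetical sketch (invented helper names) of the new return convention for get_shares described here, which lets a caller record corrupt shnums while still working with the loadable shares:

```
def get_shares_sketch(stored_blobs, parse_share):
    shares = []
    corrupted = set()
    for shnum, blob in stored_blobs.items():
        try:
            shares.append(parse_share(shnum, blob))
        except ValueError:
            corrupted.add(shnum)   # recorded and skipped, not fatal
    return shares, corrupted

def parse_share(shnum, blob):
    if not blob:
        raise ValueError("empty share %d" % shnum)
    return (shnum, blob)

shares, corrupted = get_shares_sketch({0: b"data0", 1: b"", 2: b"data2"}, parse_share)
assert corrupted == set([1]) and len(shares) == 2
```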
david-sarah@jacaranda.org commented 2011-10-18 06:47:08 +00:00
Author
Owner

In [5478/ticket999-S3-backend]:

Allow crawlers and storage servers to use a deterministic clock, for testing. We do not yet take advantage of this in tests. refs #999
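A hedged sketch of the deterministic-clock idea using Twisted's task.Clock (the crawler class here is invented; the real crawlers hook their timers differently): the test advances time explicitly instead of sleeping.

```
from twisted.internet import task

class TickingCrawlerSketch(object):
    def __init__(self, clock, period=10):
        self.cycles = 0
        self._loop = task.LoopingCall(self._tick)
        self._loop.clock = clock            # inject the deterministic clock
        self._loop.start(period, now=False)

    def _tick(self):
        self.cycles += 1

clock = task.Clock()
crawler = TickingCrawlerSketch(clock, period=10)
clock.advance(35)                           # simulate 35 seconds passing
assert crawler.cycles == 3                  # exactly three ticks, no real waiting
```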
david-sarah@jacaranda.org commented 2011-10-18 06:47:09 +00:00
Author
Owner

In [5479/ticket999-S3-backend]:

Fix race conditions in crawler tests. (storage.LeaseCrawler.test_unpredictable_future may still be racy.) refs #999
david-sarah@jacaranda.org commented 2011-10-18 06:47:10 +00:00
Author
Owner

In [5480/ticket999-S3-backend]:

Add some __repr__ methods. refs #999
davidsarah commented 2011-10-18 17:30:39 +00:00
Author
Owner

In [5479/ticket999-S3-backend], there's also a fix to a preexisting bug in test_storage.LeaseCrawler.test_unpredictable_future, where it was checking the s["estimated-remaining-cycle"]["space-recovered"] key twice, rather than both that key and s["estimated-current-cycle"]["space-recovered"] as intended.

david-sarah@jacaranda.org commented 2011-10-18 18:35:12 +00:00
Author
Owner

In [5481/ticket999-S3-backend]:

test_storage.py, test_crawler.py: change 'bucket' terminology to 'shareset' where appropriate. refs #999
david-sarah@jacaranda.org commented 2011-10-18 23:40:47 +00:00
Author
Owner

In [5482/ticket999-S3-backend]:

S3 backend: remove max_space option. refs #999
david-sarah@jacaranda.org commented 2011-10-19 06:19:29 +00:00
Author
Owner

In [5483/ticket999-S3-backend]:

Enable mutable tests for S3 backend (they all fail, as expected). refs #999
david-sarah@jacaranda.org commented 2011-10-20 03:08:45 +00:00
Author
Owner

In [5484/ticket999-S3-backend]:

storage/backends/disk/mutable.py: correct a typo. refs #999
david-sarah@jacaranda.org commented 2011-10-20 03:08:47 +00:00
Author
Owner

In [5485/ticket999-S3-backend]:

Disk backend: fix incorrect arguments in a call to create_mutable_disk_share. refs #999
david-sarah@jacaranda.org commented 2011-10-20 03:08:49 +00:00
Author
Owner

In [5486/ticket999-S3-backend]:

test_storage.py: move the test_container_size test to MutableServerWithDiskBackend for now, because it tries to create a very large container which will wedge your machine. refs #999
david-sarah@jacaranda.org commented 2011-10-20 03:08:53 +00:00
Author
Owner

In [5487/ticket999-S3-backend]:

S3 backend: finish implementation of mutable shares. refs #999
david-sarah@jacaranda.org commented 2011-10-20 11:17:59 +00:00
Author
Owner

In [5488/ticket999-S3-backend]:

test_storage.py: reduce duplicated code by factoring 'create' methods into CreateS3Backend and CreateDiskBackend classes. refs #999
david-sarah@jacaranda.org commented 2011-10-20 11:18:00 +00:00
Author
Owner

In [5489/ticket999-S3-backend]:

S3 backend: make sure that the container size limit is checked before writing. refs #999
david-sarah@jacaranda.org commented 2011-10-20 11:18:01 +00:00
Author
Owner

In [5490/ticket999-S3-backend]:

S3 backend: make precondition failures show more information. refs #999
david-sarah@jacaranda.org commented 2011-10-20 11:56:25 +00:00
Author
Owner

In [5491/ticket999-S3-backend]:

S3 backend: new_length argument to MutableS3Share.writev should only be able to truncate the share (after applying writes), not extend it. refs #999
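A simplified sketch of the truncate-only semantics of new_length described here (not the real MutableS3Share.writev): writes are applied first, then new_length may shrink the result but never pad it.

```
def apply_writev(data, datav, new_length=None):
    buf = bytearray(data)
    for offset, piece in datav:
        end = offset + len(piece)
        if end > len(buf):
            buf.extend(b"\x00" * (end - len(buf)))   # individual writes may extend
        buf[offset:end] = piece
    if new_length is not None and new_length < len(buf):
        del buf[new_length:]                         # truncate only, never extend
    return bytes(buf)

assert apply_writev(b"abcdef", [(2, b"XY")]) == b"abXYef"
assert apply_writev(b"abcdef", [(2, b"XY")], new_length=4) == b"abXY"
assert apply_writev(b"abcd", [(0, b"Z")], new_length=100) == b"Zbcd"
```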
david-sarah@jacaranda.org commented 2011-10-20 11:56:26 +00:00
Author
Owner

In [5492/ticket999-S3-backend]:

S3 backend: the mutable size limit should be on the data length, not the container size. Also simplify by removing _check_size_limit. refs #999
david-sarah@jacaranda.org commented 2011-10-20 11:56:27 +00:00
Author
Owner

In [5493/ticket999-S3-backend]:

Disk backend: make sure that the size limit is checked before writing. Also, the size limit is on the data length, not the container size. refs #999
david-sarah@jacaranda.org commented 2011-10-20 11:56:28 +00:00
Author
Owner

In [5494/ticket999-S3-backend]:

test_storage.py: reenable MutableServer.test_container_size for the S3 backend. refs #999
david-sarah@jacaranda.org commented 2011-10-20 17:35:41 +00:00
Author
Owner

In [5495/ticket999-S3-backend]:

test_storage.py: the part of test_remove that checks non-existence of the share directory after deleting a share, is only applicable to the disk backend; but, we can check that the shareset has no overhead at that point. refs #999
tahoe-lafs modified the milestone from soon to 1.10.0 2011-10-20 17:42:43 +00:00
david-sarah@jacaranda.org commented 2011-10-21 00:18:57 +00:00
Author
Owner

In [5514/ticket999-S3-backend]:

Add a '[storage]backend = mock_s3' option for use by tests. Move mock_s3.py to src/allmydata/storage/backends/s3 since it is now imported by non-test code. refs #999
david-sarah@jacaranda.org commented 2011-10-21 00:18:58 +00:00
Author
Owner

In [5515/ticket999-S3-backend]:

test_system.py: enable system tests to run against S3 backend as well as disk backend. refs #999
david-sarah@jacaranda.org commented 2011-10-21 01:11:39 +00:00
Author
Owner

In [5516/ticket999-S3-backend]:

test_system.py: fix a typo. refs #999
david-sarah@jacaranda.org commented 2011-10-21 01:11:40 +00:00
Author
Owner

In [5517/ticket999-S3-backend]:

test_system.py: rename ServerTestWith*Backend to ServerWith*Backend, for consistency with test_storage.py. refs #999
david-sarah@jacaranda.org commented 2011-10-21 01:52:43 +00:00
Author
Owner

In [5518/ticket999-S3-backend]:

test_system.py: make checks in _test_runner more picky about field names to avoid accidental suffix matches. refs #999
david-sarah@jacaranda.org commented 2011-10-21 01:52:44 +00:00
Author
Owner

In [5519/ticket999-S3-backend]:

test_system.py: ensure that subclasses of SystemTest use different test directories. refs #999
david-sarah@jacaranda.org commented 2011-10-21 03:22:38 +00:00
Author
Owner

In [5520/ticket999-S3-backend]:

test_system.py: fix SystemWithS3Backend.test_mutable by only requiring the line specifying which nodeid the lease secrets are for when the node has a disk backend. refs #999
david-sarah@jacaranda.org commented 2011-10-21 03:43:38 +00:00
Author
Owner

In [5521/ticket999-S3-backend]:

scripts/debug.py: in catalog-shares, gracefully handle the case where a share has no leases (for example because it is an S3 share). refs #999
david-sarah@jacaranda.org commented 2011-10-21 04:42:32 +00:00
Author
Owner

In [5522/ticket999-S3-backend]:

test_system.py: check that there is no error output from invocations of 'tahoe debug'. refs #999
david-sarah@jacaranda.org commented 2011-10-22 04:58:36 +00:00
Author
Owner

In [5523/ticket999-S3-backend]:

mock_s3.py: remove bucketname argument to MockS3Bucket constructor, since it is not needed. refs #999
david-sarah@jacaranda.org commented 2011-10-24 18:31:36 +00:00
Author
Owner

In [5524/ticket999-S3-backend]:

S3 backend: remove support for [storage]readonly option. refs #999, #1568
david-sarah@jacaranda.org commented 2011-10-24 18:31:40 +00:00
Author
Owner

In [5525/ticket999-S3-backend]:

S3 backend: the s3.region option is unnecessary; it is only used for EC2 endpoints, and we only need an S3 one. Also simplify wording in S3.rst. refs #999
david-sarah@jacaranda.org commented 2011-10-25 10:10:05 +00:00
Author
Owner

In [5526/ticket999-S3-backend]:

docs/backends/S3.rst: document the requirement for the storage server to have the correct time to within 15 minutes. refs #999
davidsarah commented 2011-12-16 16:17:27 +00:00
Author
Owner

Further work on this functionality will be in ticket #1569.

tahoe-lafs added the fixed label 2011-12-16 16:17:27 +00:00
davidsarah closed this issue 2011-12-16 16:17:27 +00:00
tahoe-lafs modified the milestone from 1.11.0 to eventually 2012-03-31 23:58:14 +00:00