2 patches for repository zooko@tahoe-lafs.org:/home/source/darcs/tahoe-lafs/ticket999-S3-backend:

Thu Sep 29 23:46:28 MDT 2011  zooko@zooko.com
  * debugprint the values of blocks and hashes thereof; make the test data and the seg size small in order to make the debugprints easy to look at

Thu Sep 29 23:59:43 MDT 2011  zooko@zooko.com
  * make randomness of salts explicit in method arguments
  
  This is an experiment, and so far it is not going well.
  
  The idea is: don't let code call os.urandom() to get new random strings, but instead let the code receive a random seed as one of its arguments. The main reason to do this is to increase testability by making things repeatable. There may also be other benefits.
  
  However, the drawback is that you have to pass this "randseed" argument through many different levels of the call stack, and at each level a mistake which causes a randseed to be re-used could lead to a failure of confidentiality. It hardly seems worth it.
  
  However, since I'm currently trying to understand a failure of a complex test in test_mutable, I'm continuing to use this patch for now in the attempt to reduce non-repeatability between different test runs or different variants of the code.
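[Editorial note, not part of the bundle: the seeded-salt idea described above can be sketched in a few lines. This is a minimal Python 3 rendering under stated assumptions — the bundle itself targets Python 2, and `RandomObj.randstr` here mirrors the bundle's new `allmydata/util/randutil.py` with the Python 2 `map`/`chr` idiom rewritten as a generator expression; `IVLEN = 16` matches the 16-byte salts the patch replaces.]

```python
import random

class RandomObj(random.Random):
    # Python 3 rendering of the bundle's randutil.RandomObj: n
    # pseudo-random byte values, drawn from the seeded generator and
    # packed into a string, one character per byte value.
    def randstr(self, n):
        return ''.join(chr(self.randrange(256)) for _ in range(n))

IVLEN = 16  # salt length used by mutable/publish.py

# Same seed => identical salt stream. This is what makes test runs
# repeatable, and also why a repeated randseed would repeat every salt
# (the confidentiality hazard the patch description warns about).
a = RandomObj('seed').randstr(IVLEN)
b = RandomObj('seed').randstr(IVLEN)
c = RandomObj('other-seed').randstr(IVLEN)
assert a == b
assert a != c
```

In the patches below, `Publish` constructs one such object per publish operation (`self._rando = randutil.RandomObj(randseed)`) and draws each segment's salt from it instead of calling `os.urandom()` directly.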
New patches:

[debugprint the values of blocks and hashes thereof; make the test data and the seg size small in order to make the debugprints easy to look at
zooko@zooko.com**20110930054628
 Ignore-this: bcfedc06aeedb090dfb02440f6e6c3bc
] {
hunk ./src/allmydata/mutable/publish.py 28
                                     SDMFSlotWriteProxy
KiB = 1024
-DEFAULT_MAX_SEGMENT_SIZE = 128 * KiB
+DEFAULT_MAX_SEGMENT_SIZE = 64
PUSHING_BLOCKS_STATE = 0
PUSHING_EVERYTHING_ELSE_STATE = 1
DONE_STATE = 2
hunk ./src/allmydata/mutable/publish.py 766
                hashed = sharedata
            block_hash = hashutil.block_hash(hashed)
            self.blockhashes[shareid][segnum] = block_hash
+            log.msg("yyy 0 shareid: %s, segnum: %s, blockhash: %s, sharedata: %s, salt: %s" % (shareid, segnum, base32.b2a(block_hash), base32.b2a(sharedata), base32.b2a(salt),))
            # find the writer for this share
            writer = self.writers[shareid]
            writer.put_block(sharedata, segnum, salt)
hunk ./src/allmydata/mutable/retrieve.py 771
                                 sharehashes[1].keys())
        bht = self._block_hash_trees[reader.shnum]
+        for bhk, bhv in blockhashes.iteritems():
+            log.msg("xxx 0 blockhash: %s %s" % (bhk, base32.b2a(bhv),))
+
        if bht.needed_hashes(segnum, include_leaf=True):
            try:
                bht.set_hashes(blockhashes)
hunk ./src/allmydata/test/test_mutable.py 2944
        self.set_up_grid()
        self.c = self.g.clients[0]
        self.nm = self.c.nodemaker
-        self.data = "test data" * 100000 # about 900 KiB; MDMF
+        self.data = "test data" * 32 # about 288 B; MDMF
        self.small_data = "test data" * 10 # about 90 B; SDMF
hunk ./src/allmydata/test/test_mutable.py 3374
        self.set_up_grid()
        self.c = self.g.clients[0]
        self.nm = self.c.nodemaker
-        self.data = "testdata " * 100000 # about 900 KiB; MDMF
+        self.data = "testdata " * 30 # about 270 B; MDMF
        self.small_data = "test data" * 10 # about 90 B; SDMF
}

[make randomness of salts explicit in method arguments
zooko@zooko.com**20110930055943
 Ignore-this: ad9634d250a2fe72abbaa5f96d0a5c9
 This is an experiment, and so far it is not going well.
 The idea is: don't let code call os.urandom() to get new random strings, but instead let the code receive a random seed as one of its arguments. The main reason to do this is to increase testability by making things repeatable. There may also be other benefits.
 
 However, the drawback is that you have to pass this "randseed" argument through many different levels of the call stack, and at each level a mistake which causes a randseed to be re-used could lead to a failure of confidentiality. It hardly seems worth it.
 
 However, since I'm currently trying to understand a failure of a complex test in test_mutable, I'm continuing to use this patch for now in the attempt to reduce non-repeatability between different test runs or different variants of the code.
] {
hunk ./src/allmydata/mutable/filenode.py 134
        return self
-    def create_with_keys(self, (pubkey, privkey), contents,
+    def create_with_keys(self, (pubkey, privkey), contents, randseed,
                          version=SDMF_VERSION):
        """Call this to create a brand-new mutable file. It will create the
        shares, find homes for them, and upload the initial contents (created
hunk ./src/allmydata/mutable/filenode.py 141
        with the same rules as IClient.create_mutable_file() ). Returns a
        Deferred that fires (with the MutableFileNode instance you should
        use) when it completes.
+
+        @param randseed is required to be a unique value every time you
+        invoke this method. Using a repeated value could lead to a
+        failure of confidentiality.
""" hunk ./src/allmydata/mutable/filenode.py 146 + precondition(isinstance(randseed, str), randseed) + precondition(len(randseed) == 32, randseed) self._pubkey, self._privkey = pubkey, privkey pubkey_s = self._pubkey.serialize() privkey_s = self._privkey.serialize() hunk ./src/allmydata/mutable/filenode.py 163 self._readkey = self._uri.readkey self._storage_index = self._uri.storage_index initial_contents = self._get_initial_contents(contents) - return self._upload(initial_contents, None) + return self._upload(initial_contents, None, randseed) def _get_initial_contents(self, contents): if contents is None: hunk ./src/allmydata/mutable/filenode.py 688 return d - def _upload(self, new_contents, servermap): + def _upload(self, new_contents, servermap, randseed): """ A MutableFileNode still has to have some way of getting published initially, which is what I am here for. After that, hunk ./src/allmydata/mutable/filenode.py 694 all publishing, updating, modifying and so on happens through MutableFileVersions. + + @param randseed is required to be a unique value every time you + invoke this method. Using a repeated value could lead to a + failure of confidentiality. """ assert self._pubkey, "update_servermap must be called before publish" hunk ./src/allmydata/mutable/filenode.py 703 # Define IPublishInvoker with a set_downloader_hints method? # Then have the publisher call that method when it's done publishing? - p = Publish(self, self._storage_broker, servermap) + p = Publish(self, self._storage_broker, servermap, randseed) if self._history: self._history.notify_publish(p.get_status(), new_contents.get_size()) hunk ./src/allmydata/mutable/filenode.py 1023 self._most_recent_size = size return res - def update(self, data, offset): + def update(self, data, offset, randseed): """ Do an update of this mutable file version by inserting data at offset within the file. 
        If offset is the EOF, this is an append
hunk ./src/allmydata/mutable/filenode.py 1036
        O(data.get_size()) memory/bandwidth/CPU to perform the update.
        Otherwise, it must download, re-encode, and upload the entire file
        again, which will use O(filesize) resources.
+
+        @param randseed is required to be a unique value every time you call
+        this method. Using a repeated value could lead to a critical
+        failure of confidentiality.
        """
hunk ./src/allmydata/mutable/filenode.py 1041
-        return self._do_serialized(self._update, data, offset)
+        precondition(isinstance(randseed, str), randseed)
+        precondition(len(randseed) == 32, randseed)
+        return self._do_serialized(self._update, data, offset, randseed)
hunk ./src/allmydata/mutable/filenode.py 1045
-    def _update(self, data, offset):
+    def _update(self, data, offset, randseed):
        """
        I update the mutable file version represented by this particular
        IMutableVersion by inserting the data in data at the offset
hunk ./src/allmydata/mutable/filenode.py 1051
        offset. I return a Deferred that fires when this has been
        completed.
+
+        @param randseed is required to be a unique value every time you call
+        this method. Using a repeated value could lead to a critical
+        failure of confidentiality.
""" hunk ./src/allmydata/mutable/filenode.py 1056 + precondition(isinstance(randseed, str), randseed) + precondition(len(randseed) == 32, randseed) new_size = data.get_size() + offset old_size = self.get_size() segment_size = self._version[3] hunk ./src/allmydata/mutable/filenode.py 1077 log.msg("updating in place") d = self._do_update_update(data, offset) d.addCallback(self._decode_and_decrypt_segments, data, offset) - d.addCallback(self._build_uploadable_and_finish, data, offset) + d.addCallback(self._build_uploadable_and_finish, data, offset, randseed) return d def _do_modify_update(self, data, offset): hunk ./src/allmydata/mutable/filenode.py 1170 d3 = defer.succeed(blockhashes) return deferredutil.gatherResults([d1, d2, d3]) - def _build_uploadable_and_finish(self, segments_and_bht, data, offset): + def _build_uploadable_and_finish(self, segments_and_bht, data, offset, randseed): """ After the process has the plaintext segments, I build the TransformingUploadable that the publisher will eventually hunk ./src/allmydata/mutable/filenode.py 1177 re-upload to the grid. I then invoke the publisher with that uploadable, and return a Deferred when the publish operation has completed without issue. + + @param randseed is required to be a unique value every time you + invoke this method. Using a repeated value could lead to a + failure of confidentiality. 
""" hunk ./src/allmydata/mutable/filenode.py 1182 + precondition(isinstance(randseed, str), randseed) + precondition(len(randseed) == 32, randseed) u = TransformingUploadable(data, offset, self._version[3], segments_and_bht[0], hunk ./src/allmydata/mutable/filenode.py 1188 segments_and_bht[1]) - p = Publish(self._node, self._storage_broker, self._servermap) + p = Publish(self._node, self._storage_broker, self._servermap, randseed) return p.update(u, offset, segments_and_bht[2], self._version) def _update_servermap(self, mode=MODE_WRITE, update_range=None): hunk ./src/allmydata/mutable/publish.py 11 from twisted.python import failure from allmydata.interfaces import IPublishStatus, SDMF_VERSION, MDMF_VERSION, \ IMutableUploadable -from allmydata.util import base32, hashutil, mathutil, idlib, log +from allmydata.util import base32, hashutil, mathutil, idlib, log, randutil from allmydata.util.dictutil import DictOfSets hunk ./src/allmydata/mutable/publish.py 13 +from allmydata.util.assertutil import precondition from allmydata import hashtree, codec from allmydata.storage.server import si_b2a from pycryptopp.cipher.aes import AES hunk ./src/allmydata/mutable/publish.py 110 the current state of the world. To make the initial publish, set servermap to None. 
+
+    @param randseed is required to be a unique value every time you construct
+    a Publish instance; using a repeated value could lead to a critical
+    failure of confidentiality
    """
hunk ./src/allmydata/mutable/publish.py 116
-    def __init__(self, filenode, storage_broker, servermap):
+    def __init__(self, filenode, storage_broker, servermap, randseed):
+        precondition(isinstance(randseed, str), randseed)
+        precondition(len(randseed) == 32, randseed)
        self._node = filenode
        self._storage_broker = storage_broker
        self._servermap = servermap
hunk ./src/allmydata/mutable/publish.py 122
+        self._rando = randutil.RandomObj(randseed)
        self._storage_index = self._node.get_storage_index()
        self._log_prefix = prefix = si_b2a(self._storage_index)[:5]
        num = self.log("Publish(%s): starting" % prefix, parent=None)
hunk ./src/allmydata/mutable/publish.py 651
        # return a deferred so that we don't block execution when this
        # is first called in the upload method.
        if self._state == PUSHING_BLOCKS_STATE:
-            return self.push_segment(self._current_segment)
+            return self.push_segment()
        elif self._state == PUSHING_EVERYTHING_ELSE_STATE:
            return self.push_everything_else()
hunk ./src/allmydata/mutable/publish.py 661
            return self._done()
-    def push_segment(self, segnum):
+    def push_segment(self):
        if self.num_segments == 0 and self._version == SDMF_VERSION:
            self._add_dummy_salts()
hunk ./src/allmydata/mutable/publish.py 665
-        if segnum > self.end_segment:
+        if self._current_segment > self.end_segment:
            # We don't have any more segments to push.
            self._state = PUSHING_EVERYTHING_ELSE_STATE
            return self._push()
hunk ./src/allmydata/mutable/publish.py 670
-        d = self._encode_segment(segnum)
-        d.addCallback(self._push_segment, segnum)
+        salt = self._rando.randstr(hashutil.IVLEN)
+
+        d = self._encode_segment(self._current_segment, salt)
+        d.addCallback(self._push_segment, self._current_segment)
        def _increment_segnum(ign):
            self._current_segment += 1
hunk ./src/allmydata/mutable/publish.py 676
+
        # XXX: I don't think we need to do addBoth here -- any errBacks
        # should be handled within push_segment.
        d.addCallback(_increment_segnum)
hunk ./src/allmydata/mutable/publish.py 699
        won't make sense. This method adds a dummy salt to each of our
        SDMF writers so that they can write the signature later.
        """
-        salt = os.urandom(16)
        assert self._version == SDMF_VERSION
        for writer in self.writers.itervalues():
hunk ./src/allmydata/mutable/publish.py 702
-            writer.put_salt(salt)
+            writer.put_salt('\x00'*hashutil.IVLEN)
hunk ./src/allmydata/mutable/publish.py 705
-    def _encode_segment(self, segnum):
+    def _encode_segment(self, segnum, salt):
        """
        I encrypt and encode the segment segnum.
""" hunk ./src/allmydata/mutable/publish.py 724 assert len(data) == segsize, len(data) - salt = os.urandom(16) - key = hashutil.ssk_readkey_data_hash(salt, self.readkey) self._status.set_status("Encrypting") enc = AES(key) replace ./src/allmydata/mutable/publish.py [A-Za-z_0-9] IVLEN SALTLEN hunk ./src/allmydata/mutable/retrieve.py 11 from foolscap.api import eventually, fireEventually from allmydata.interfaces import IRetrieveStatus, NotEnoughSharesError, \ DownloadStopped, MDMF_VERSION, SDMF_VERSION -from allmydata.util import hashutil, log, mathutil +from allmydata.util import base32, hashutil, log, mathutil from allmydata.util.dictutil import DictOfSets from allmydata import hashtree, codec from allmydata.storage.server import si_b2a hunk ./src/allmydata/nodemaker.py 112 return self._create_dirnode(filenode) return None - def create_mutable_file(self, contents=None, keysize=None, + def create_mutable_file(self, randseed, contents=None, keysize=None, version=SDMF_VERSION): hunk ./src/allmydata/nodemaker.py 114 + """ + @param randseed is required to be a unique value every time you call + this method. Using a repeated value could lead to a critical + failure of confidentiality. 
+ """ + precondition(isinstance(randseed, str), randseed) + precondition(len(randseed) == 32, randseed) n = MutableFileNode(self.storage_broker, self.secret_holder, self.default_encoding_parameters, self.history) d = self.key_generator.generate(keysize) hunk ./src/allmydata/nodemaker.py 124 - d.addCallback(n.create_with_keys, contents, version=version) + d.addCallback(n.create_with_keys, contents, randseed=randseed, version=version) d.addCallback(lambda res: n) return d hunk ./src/allmydata/test/test_mutable.py 8 from twisted.internet import defer, reactor from allmydata import uri, client from allmydata.nodemaker import NodeMaker -from allmydata.util import base32, consumer, fileutil, mathutil +from allmydata.util import base32, consumer, fileutil, mathutil, randutil from allmydata.util.hashutil import tagged_hash, ssk_writekey_hash, \ ssk_pubkey_fingerprint_hash from allmydata.util.consumer import MemoryConsumer hunk ./src/allmydata/test/test_mutable.py 3394 d.addCallback(_then2) return d - def do_upload_mdmf(self): - d = self.nm.create_mutable_file(MutableData(self.data), + def do_upload_mdmf(self, randseed): + rando = randutil.RandomObj(randseed) + d = self.nm.create_mutable_file(rando.randstr(32), MutableData(self.data), version=MDMF_VERSION) def _then(n): assert isinstance(n, MutableFileNode) hunk ./src/allmydata/test/test_mutable.py 3404 # Make MDMF node that has 255 shares. 
            self.nm.default_encoding_parameters['n'] = 255
            self.nm.default_encoding_parameters['k'] = 127
-            return self.nm.create_mutable_file(MutableData(self.data),
+            return self.nm.create_mutable_file(rando.randstr(32), MutableData(self.data),
                                               version=MDMF_VERSION)
        d.addCallback(_then)
        def _then2(n):
hunk ./src/allmydata/test/test_mutable.py 3413
        d.addCallback(_then2)
        return d
-    def _test_replace(self, offset, new_data):
+    def _test_replace(self, offset, new_data, randseed):
        expected = self.data[:offset]+new_data+self.data[offset+len(new_data):]
hunk ./src/allmydata/test/test_mutable.py 3415
-        d0 = self.do_upload_mdmf()
+        d0 = self.do_upload_mdmf(randseed)
        def _run(ign):
            d = defer.succeed(None)
            for node in (self.mdmf_node, self.mdmf_max_shares_node):
hunk ./src/allmydata/test/test_mutable.py 3421
                d.addCallback(lambda ign: node.get_best_mutable_version())
                d.addCallback(lambda mv:
-                    mv.update(MutableData(new_data), offset))
+                    mv.update(MutableData(new_data), offset, randseed))
                # close around node.
                d.addCallback(lambda ignored, node=node:
                    node.download_best_version())
hunk ./src/allmydata/test/test_mutable.py 3439
    def test_append(self):
        # We should be able to append data to a mutable file and get
        # what we expect.
-        return self._test_replace(len(self.data), "appended")
+        return self._test_replace(len(self.data), "appended", randseed='test_append000000000000000000000')
    def test_replace_middle(self):
        # We should be able to replace data in the middle of a mutable
replace ./src/allmydata/util/hashutil.py [A-Za-z_0-9] IVLEN SALTLEN
addfile ./src/allmydata/util/randutil.py
hunk ./src/allmydata/util/randutil.py 1
+import random
+
+class RandomObj(random.Random):
+    def randstr(self, n):
+        return ''.join(map(chr, map(self.randrange, [0]*n, [256]*n)))
}

Context:

[free up the buffer used to hold data while it is being written to ImmutableS3ShareForWriting
zooko@zooko.com**20110930060238
 Ignore-this: 603b2c8bb1f4656bdde5876ac95aa5c9
]
[FIX THE BUG!
zooko@zooko.com**20110930032140
 Ignore-this: fd32c4ac3054ae6fc2b9433f113b2fd6
]
[fix another bug in ImmutableShareS3ForWriting
zooko@zooko.com**20110930025701
 Ignore-this: 6ad7bd17111b12d96991172fbe04d76
]
[really fix the bug in ImmutableS3ShareForWriting
zooko@zooko.com**20110930023501
 Ignore-this: 36a7804433cab667566d119af7223425
]
[Add dummy lease methods to immutable S3 share objects. refs #999
david-sarah@jacaranda.org**20110930021703
 Ignore-this: 7c21f140020edd64027c71be0f32c2b2
]
[test_storage.py: Server class uses ShouldFailMixin. refs #999
david-sarah@jacaranda.org**20110930001349
 Ignore-this: 4cf1ef21bbf85d7fe52ab660f59ff237
]
[mock_s3.py: fix bug in MockS3Error constructor. refs #999
david-sarah@jacaranda.org**20110930001326
 Ignore-this: 4d0ebd9120fc8e99b15924c671cd0927
]
[fix bug in ImmutableS3ShareForWriting
zooko@zooko.com**20110930020535
 Ignore-this: f7f63d2fc2086903a195cc000f306b88
]
[return res
zooko@zooko.com**20110930000446
 Ignore-this: 6f73b3e389612c73c6590007229ad8e
]
[s3_bucket.py: fix an incorrect argument signature for list_objects. refs #999
david-sarah@jacaranda.org**20110929235646
 Ignore-this: f02e3a23f28fadef71c70fd0b1592ba6
]
[Make sure that the statedir is created before trying to use it. refs #999
david-sarah@jacaranda.org**20110929234845
 Ignore-this: b5f0529b1f2a5b5250c2ee2091cbe24b
]
[test/mock_s3.py: fix a typo. refs #999
david-sarah@jacaranda.org**20110929234808
 Ignore-this: ccdff591f9b301f7f486454a4366c2b3
]
[test_storage.py: only run test_large_share on the disk backend. (It will wedge your machine if run on the S3 backend with MockS3Bucket.) refs #999
david-sarah@jacaranda.org**20110929234725
 Ignore-this: ffa7c08458ee0159455b6f1cd1c3ff48
]
[fix doc to say that secret access key goes into private/s3secret
zooko@zooko.com**20110930000256
 Ignore-this: c054ff78041a05b3177b3c1b3e9d4ae7
]
[Fixes to S3 config parsing, with tests.
 refs #999
david-sarah@jacaranda.org**20110929225014
 Ignore-this: 19aa5a3e9575b0c2f77b19fe1bcbafcb
]
[Add missing src/allmydata/test/mock_s3.py (mock implementation of an S3 bucket). refs #999
david-sarah@jacaranda.org**20110929212229
 Ignore-this: a1433555d4bb0b8b36fb80feb122187b
]
[Make the s3.region option case-insensitive (txaws expects uppercase). refs #999
david-sarah@jacaranda.org**20110929211606
 Ignore-this: def83d3fa368c315573e5f1bad5ee7f9
]
[Fix missing add_lease method on ImmutableS3ShareForWriting. refs #999
david-sarah@jacaranda.org**20110929211524
 Ignore-this: 832f0d94f912b17006b0dbaab94846b6
]
[Add missing src/allmydata/storage/backends/s3/s3_bucket.py. refs #999
david-sarah@jacaranda.org**20110929211416
 Ignore-this: aa783c5d7c32af172b5c5a3d62c3faf2
]
[scripts/debug.py: repair stale code, and use the get_disk_share function defined by disk_backend instead of duplicating it. refs #999
david-sarah@jacaranda.org**20110929211252
 Ignore-this: 5dda548e8703e35f0c103467346627ef
]
[Fix a bug in the new config parsing code when reserved_space is not present for a disk backend. refs #999
david-sarah@jacaranda.org**20110929211106
 Ignore-this: b05bd3c4ff7d90b5ecb1e6a54717b735
]
[test_storage.py: Avoid using the same working directory for different test classes. refs #999
david-sarah@jacaranda.org**20110929210954
 Ignore-this: 3a01048e941c61c603eec603d064bebb
]
[More asycification of tests. refs #999
david-sarah@jacaranda.org**20110929210727
 Ignore-this: 87690a62f89a07e63b859c24948d262d
]
[Fix a bug in disk_backend.py. refs #999
david-sarah@jacaranda.org**20110929182511
 Ignore-this: 4f9a62adf03fc3221e46b54f7a4a960b
]
[docs/backends/S3.rst: add s3.region option. Also minor changes to configuration.rst. refs #999
david-sarah@jacaranda.org**20110929182442
 Ignore-this: 2992ead5f8d9357a0d9b912b1e0bd932
]
[Updates to test_backends.py.
 refs #999
david-sarah@jacaranda.org**20110929182016
 Ignore-this: 3bac19179308e6f27e54c45c7cad4dc6
]
[Implement selection of backends from tahoe.cfg options. Also remove the discard_storage parameter from the disk backend. refs #999
david-sarah@jacaranda.org**20110929181754
 Ignore-this: c7f78e7db98326723033f44e56858683
]
[test_storage.py: fix an incorrect argument in construction of S3Backend. refs #999
david-sarah@jacaranda.org**20110929081331
 Ignore-this: 33ad68e0d3a15e3fa1dda90df1b8365c
]
[Move the implementation of lease methods to disk_backend.py, and add stub implementations in s3_backend.py that raise NotImplementedError. Fix the lease methods in the disk backend to be synchronous. Also make sure that get_shares() returns a Deferred list sorted by shnum. refs #999
david-sarah@jacaranda.org**20110929081132
 Ignore-this: 32cbad21c7236360e2e8e84a07f88597
]
[Make the make_bucket_writer method synchronous. refs #999
david-sarah@jacaranda.org**20110929080712
 Ignore-this: 1de299e791baf1cf1e2a8d4b593e8ba1
]
[Add get_s3_share function in place of S3ShareSet._load_shares. refs #999
david-sarah@jacaranda.org**20110929080530
 Ignore-this: f99665979612e42ecefa293bda0db5de
]
[Complete the splitting of the immutable IStoredShare interface into IShareForReading and IShareForWriting. Also remove the 'load' method from shares, and other minor interface changes. refs #999
david-sarah@jacaranda.org**20110929075544
 Ignore-this: 8c923051869cf162d9840770b4a08573
]
[split Immutable S3 Share into for-reading and for-writing classes, remove unused (as far as I can tell) methods, use cStringIO for buffering the writes
zooko@zooko.com**20110929055038
 Ignore-this: 82d8c4488a8548936285a975ef5a1559
 TODO: define the interfaces that the new classes claim to implement
]
[Comment out an assertion that was causing all mutable tests to fail. THIS IS PROBABLY WRONG.
 refs #999
david-sarah@jacaranda.org**20110929041110
 Ignore-this: 1e402d51ec021405b191757a37b35a94
]
[Fix some incorrect or incomplete asyncifications. refs #999
david-sarah@jacaranda.org**20110929040800
 Ignore-this: ed70e9af2190217c84fd2e8c41de4c7e
]
[Add some debugging assertions that share objects are not Deferred. refs #999
david-sarah@jacaranda.org**20110929040657
 Ignore-this: 5c7f56a146f5a3c353c6fe5b090a7dc5
]
[scripts/debug.py: take account of some API changes. refs #999
david-sarah@jacaranda.org**20110929040539
 Ignore-this: 933c3d44b993c041105038c7d4514386
]
[Make get_sharesets_for_prefix synchronous for the time being (returning a Deferred breaks crawlers). refs #999
david-sarah@jacaranda.org**20110929040136
 Ignore-this: e94b93d4f3f6173d9de80c4121b68748
]
[More asyncification of tests. refs #999
david-sarah@jacaranda.org**20110929035644
 Ignore-this: 28b650a9ef593b3fd7524f6cb562ad71
]
[no_network.py: add some assertions that the things we wrap using LocalWrapper are not Deferred (which is not supported and causes hard-to-debug failures). refs #999
david-sarah@jacaranda.org**20110929035537
 Ignore-this: fd103fbbb54fbbc17b9517c78313120e
]
[Add some debugging code (switched off) to no_network.py. When switched on (PRINT_TRACEBACKS = True), this prints the stack trace associated with the caller of a remote method, mitigating the problem that the traceback normally gets lost at that point. TODO: think of a better way to preserve the traceback that can be enabled by default. refs #999
david-sarah@jacaranda.org**20110929035341
 Ignore-this: 2a593ec3ee450719b241ea8d60a0f320
]
[Use factory functions to create share objects rather than their constructors, to allow the factory to return a Deferred. Also change some methods on IShareSet and IStoredShare to return Deferreds. Refactor some constants associated with mutable shares.
 refs #999
david-sarah@jacaranda.org**20110928052324
 Ignore-this: bce0ac02f475bcf31b0e3b340cd91198
]
[Work in progress for asyncifying the backend interface (necessary to call txaws methods that return Deferreds). This is incomplete so lots of tests fail. refs #999
david-sarah@jacaranda.org**20110927073903
 Ignore-this: ebdc6c06c3baa9460af128ec8f5b418b
]
[mutable/publish.py: don't crash if there are no writers in _report_verinfo. refs #999
david-sarah@jacaranda.org**20110928014126
 Ignore-this: 9999c82bb3057f755a6e86baeafb8a39
]
[scripts/debug.py: fix incorrect arguments to dump_immutable_share. refs #999
david-sarah@jacaranda.org**20110928014049
 Ignore-this: 1078ee3f06a2f36b29e0cf694d2851cd
]
[test_system.py: more debug output for a failing check in test_filesystem. refs #999
david-sarah@jacaranda.org**20110928014019
 Ignore-this: e8bb77b8f7db12db7cd69efb6e0ed130
]
[test_system.py: incorrect arguments were being passed to the constructor for MutableDiskShare. refs #999
david-sarah@jacaranda.org**20110928013857
 Ignore-this: e9719f74e7e073e37537f9a71614b8a0
]
[Undo an incompatible change to RIStorageServer. refs #999
david-sarah@jacaranda.org**20110928013729
 Ignore-this: bea4c0f6cb71202fab942cd846eab693
]
[mutable/publish.py: resolve conflicting patches. refs #999
david-sarah@jacaranda.org**20110927073530
 Ignore-this: 6154a113723dc93148151288bd032439
]
[test_storage.py: fix test_no_st_blocks. refs #999
david-sarah@jacaranda.org**20110927072848
 Ignore-this: 5f12b784920f87d09c97c676d0afa6f8
]
[Cleanups to S3 backend (not including Deferred changes). refs #999
david-sarah@jacaranda.org**20110927071855
 Ignore-this: f0dca788190d92b1edb1ee1498fb34dc
]
[Cleanups to disk backend. refs #999
david-sarah@jacaranda.org**20110927071544
 Ignore-this: e9d3fd0e85aaf301c04342fffdc8f26
]
[test_storage.py: fix test_status_bad_disk_stats.
 refs #999
david-sarah@jacaranda.org**20110927071403
 Ignore-this: 6108fee69a60962be2df2ad11b483a11
]
[util/deferredutil.py: add some utilities for asynchronous iteration. refs #999
david-sarah@jacaranda.org**20110927070947
 Ignore-this: ac4946c1e5779ea64b85a1a420d34c9e
]
[Add 'has-immutable-readv' to server version information. refs #999
david-sarah@jacaranda.org**20110923220935
 Ignore-this: c3c4358f2ab8ac503f99c968ace8efcf
]
[Minor cleanup to disk backend. refs #999
david-sarah@jacaranda.org**20110923205510
 Ignore-this: 79f92d7c2edb14cfedb167247c3f0d08
]
[Update the S3 backend. refs #999
david-sarah@jacaranda.org**20110923205345
 Ignore-this: 5ca623a17e09ddad4cab2f51b49aec0a
]
[Update the null backend to take into account interface changes. Also, it now records which shares are present, but not their contents. refs #999
david-sarah@jacaranda.org**20110923205219
 Ignore-this: 42a23d7e253255003dc63facea783251
]
[Make EmptyShare.check_testv a simple function. refs #999
david-sarah@jacaranda.org**20110923204945
 Ignore-this: d0132c085f40c39815fa920b77fc39ab
]
[The cancel secret needs to be unique, even if it isn't explicitly provided. refs #999
david-sarah@jacaranda.org**20110923204914
 Ignore-this: 6c44bb908dd4c0cdc59506b2d87a47b0
]
[Implement readv for immutable shares. refs #999
david-sarah@jacaranda.org**20110923204611
 Ignore-this: 24f14b663051169d66293020e40c5a05
]
[Remove redundant si_s argument from check_write_enabler. refs #999
david-sarah@jacaranda.org**20110923204425
 Ignore-this: 25be760118dbce2eb661137f7d46dd20
]
[interfaces.py: add fill_in_space_stats method to IStorageBackend. refs #999
david-sarah@jacaranda.org**20110923203723
 Ignore-this: 59371c150532055939794fed6c77dcb6
]
[Add incomplete S3 backend. refs #999
david-sarah@jacaranda.org**20110923041314
 Ignore-this: b48df65699e3926dcbb87b5f755cdbf1
]
[Move advise_corrupt_share to allmydata/storage/backends/base.py, since it will be common to the disk and S3 backends.
 refs #999
david-sarah@jacaranda.org**20110923041115
 Ignore-this: 782b49f243bd98fcb6c249f8e40fd9f
]
[A few comment cleanups. refs #999
david-sarah@jacaranda.org**20110923041003
 Ignore-this: f574b4a3954b6946016646011ad15edf
]
[mutable/publish.py: elements should not be removed from a dictionary while it is being iterated over. refs #393
david-sarah@jacaranda.org**20110923040825
 Ignore-this: 135da94bd344db6ccd59a576b54901c1
]
[Blank line cleanups.
david-sarah@jacaranda.org**20110923012044
 Ignore-this: 8e1c4ecb5b0c65673af35872876a8591
]
[Reinstate the cancel_lease methods of ImmutableDiskShare and MutableDiskShare, since they are needed for lease expiry. refs #999
david-sarah@jacaranda.org**20110922183323
 Ignore-this: a11fb0dd0078ff627cb727fc769ec848
]
[Fix most of the crawler tests. refs #999
david-sarah@jacaranda.org**20110922183008
 Ignore-this: 116c0848008f3989ba78d87c07ec783c
]
[Fix some more test failures. refs #999
david-sarah@jacaranda.org**20110922045451
 Ignore-this: b726193cbd03a7c3d343f6e4a0f33ee7
]
[uri.py: resolve a conflict between trunk and the pluggable-backends patches. refs #999
david-sarah@jacaranda.org**20110921222038
 Ignore-this: ffeeab60d8e71a6a29a002d024d76fcf
]
[Fix more shallow bugs, mainly FilePathification. Also, remove the max_space_per_bucket parameter from BucketWriter since it can be obtained from the _max_size attribute of the share (via a new get_allocated_size() accessor). refs #999
david-sarah@jacaranda.org**20110921221421
 Ignore-this: 600e3ccef8533aa43442fa576c7d88cf
]
[More fixes to tests needed for pluggable backends. refs #999
david-sarah@jacaranda.org**20110921184649
 Ignore-this: 9be0d3a98e350fd4e17a07d2c00bb4ca
]
[docs/backends/S3.rst, disk.rst: describe type of space settings as 'quantity of space', not 'str'. refs #999
david-sarah@jacaranda.org**20110921031705
 Ignore-this: a74ed8e01b0a1ab5f07a1487d7bf138
]
[docs/backends/S3.rst: remove Issues section.
 refs #999
david-sarah@jacaranda.org**20110921031625
 Ignore-this: c83d8f52b790bc32488869e6ee1df8c2
]
[Fix some incorrect attribute accesses. refs #999
david-sarah@jacaranda.org**20110921031207
 Ignore-this: f1ea4c3ea191f6d4b719afaebd2b2bcd
]
[docs/backends: document the configuration options for the pluggable backends scheme. refs #999
david-sarah@jacaranda.org**20110920171737
 Ignore-this: 5947e864682a43cb04e557334cda7c19
]
[Work-in-progress, includes fix to bug involving BucketWriter. refs #999
david-sarah@jacaranda.org**20110920033803
 Ignore-this: 64e9e019421454e4d08141d10b6e4eed
]
[Pluggable backends -- all other changes. refs #999
david-sarah@jacaranda.org**20110919233256
 Ignore-this: 1a77b6b5d178b32a9b914b699ba7e957
]
[Pluggable backends -- new and moved files, changes to moved files. refs #999
david-sarah@jacaranda.org**20110919232926
 Ignore-this: ec5d2d1362a092d919e84327d3092424
]
[interfaces.py: 'which -> that' grammar cleanup.
david-sarah@jacaranda.org**20110825003217
 Ignore-this: a3e15f3676de1b346ad78aabdfb8cac6
]
[test/test_runner.py: BinTahoe.test_path has rare nondeterministic failures; this patch probably fixes a problem where the actual cause of failure is masked by a string conversion error.
david-sarah@jacaranda.org**20110927225336
 Ignore-this: 6f1ad68004194cc9cea55ace3745e4af
]
[docs/configuration.rst: add section about the types of node, and clarify when setting web.port enables web-API service. fixes #1444
zooko@zooko.com**20110926203801
 Ignore-this: ab94d470c68e720101a7ff3c207a719e
]
[TAG allmydata-tahoe-1.9.0a2
warner@lothar.com**20110925234811
 Ignore-this: e9649c58f9c9017a7d55008938dba64f
]

Patch bundle hash:
8a27d8a395a489260241d90e0f70e4b38e6b15cd
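[Editorial footnote, not part of the bundle: the preconditions added throughout the second patch require every caller to supply a unique 32-byte string. A hedged sketch of the caller-side pattern, using Python 3 semantics; `fresh_randseed` is a hypothetical helper, not something the bundle defines — the patch leaves seed-minting to callers, and its own test_append simply hard-codes a 32-character literal.]

```python
import os

# The patch's preconditions: precondition(isinstance(randseed, str), ...)
# and precondition(len(randseed) == 32, ...). In Python 2, os.urandom()
# returns str; in Python 3 it returns bytes, hence the bytes check below.
RANDSEED_LEN = 32

def fresh_randseed():
    # Hypothetical caller-side helper: mint a unique seed per
    # publish/update operation. Reusing a seed replays the entire salt
    # stream drawn from randutil.RandomObj(randseed), which is the
    # confidentiality failure the patch description warns about.
    return os.urandom(RANDSEED_LEN)

s1 = fresh_randseed()
s2 = fresh_randseed()
assert len(s1) == RANDSEED_LEN
assert s1 != s2  # overwhelmingly likely for 256-bit seeds
```

This also illustrates the trade-off the patch description concedes: the uniqueness burden moves from one `os.urandom()` call site to every level of the call stack that forwards `randseed`.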